1. 07 May 2014 (3 commits)
  2. 30 Mar 2014 (1 commit)
  3. 29 Mar 2014 (4 commits)
  4. 28 Mar 2014 (1 commit)
    • NFSD: Traverse unconfirmed client through hash-table · 2b905635
      Authored by Kinglong Mee
      When stopping nfsd, I got BUG and soft-lockup messages.  The
      problem is caused by a double rb_erase() in nfs4_state_destroy_net()
      and destroy_client().

      This patch makes nfsd traverse the unconfirmed clients through the
      hash table instead of the rbtree.
      
      [ 2325.021995] BUG: unable to handle kernel NULL pointer dereference at
                (null)
      [ 2325.022809] IP: [<ffffffff8133c18c>] rb_erase+0x14c/0x390
      [ 2325.022982] PGD 7a91b067 PUD 7a33d067 PMD 0
      [ 2325.022982] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
      [ 2325.022982] Modules linked in: nfsd(OF) cfg80211 rfkill bridge stp
      llc snd_intel8x0 snd_ac97_codec ac97_bus auth_rpcgss nfs_acl serio_raw
      e1000 i2c_piix4 ppdev snd_pcm snd_timer lockd pcspkr joydev parport_pc
      snd parport i2c_core soundcore microcode sunrpc ata_generic pata_acpi
      [last unloaded: nfsd]
      [ 2325.022982] CPU: 1 PID: 2123 Comm: nfsd Tainted: GF          O
      3.14.0-rc8+ #2
      [ 2325.022982] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS
      VirtualBox 12/01/2006
      [ 2325.022982] task: ffff88007b384800 ti: ffff8800797f6000 task.ti:
      ffff8800797f6000
      [ 2325.022982] RIP: 0010:[<ffffffff8133c18c>]  [<ffffffff8133c18c>]
      rb_erase+0x14c/0x390
      [ 2325.022982] RSP: 0018:ffff8800797f7d98  EFLAGS: 00010246
      [ 2325.022982] RAX: ffff880079c1f010 RBX: ffff880079f4c828 RCX:
      0000000000000000
      [ 2325.022982] RDX: 0000000000000000 RSI: ffff880079bcb070 RDI:
      ffff880079f4c810
      [ 2325.022982] RBP: ffff8800797f7d98 R08: 0000000000000000 R09:
      ffff88007964fc70
      [ 2325.022982] R10: 0000000000000000 R11: 0000000000000400 R12:
      ffff880079f4c800
      [ 2325.022982] R13: ffff880079bcb000 R14: ffff8800797f7da8 R15:
      ffff880079f4c860
      [ 2325.022982] FS:  0000000000000000(0000) GS:ffff88007f900000(0000)
      knlGS:0000000000000000
      [ 2325.022982] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [ 2325.022982] CR2: 0000000000000000 CR3: 000000007a3ef000 CR4:
      00000000000006e0
      [ 2325.022982] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
      0000000000000000
      [ 2325.022982] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
      0000000000000400
      [ 2325.022982] Stack:
      [ 2325.022982]  ffff8800797f7de0 ffffffffa0191c6e ffff8800797f7da8
      ffff8800797f7da8
      [ 2325.022982]  ffff880079f4c810 ffff880079bcb000 ffffffff81cc26c0
      ffff880079c1f010
      [ 2325.022982]  ffff880079bcb070 ffff8800797f7e28 ffffffffa01977f2
      ffff8800797f7df0
      [ 2325.022982] Call Trace:
      [ 2325.022982]  [<ffffffffa0191c6e>] destroy_client+0x32e/0x3b0 [nfsd]
      [ 2325.022982]  [<ffffffffa01977f2>] nfs4_state_shutdown_net+0x1a2/0x220
      [nfsd]
      [ 2325.022982]  [<ffffffffa01700b8>] nfsd_shutdown_net+0x38/0x70 [nfsd]
      [ 2325.022982]  [<ffffffffa017013e>] nfsd_last_thread+0x4e/0x80 [nfsd]
      [ 2325.022982]  [<ffffffffa001f1eb>] svc_shutdown_net+0x2b/0x30 [sunrpc]
      [ 2325.022982]  [<ffffffffa017064b>] nfsd_destroy+0x5b/0x80 [nfsd]
      [ 2325.022982]  [<ffffffffa0170773>] nfsd+0x103/0x130 [nfsd]
      [ 2325.022982]  [<ffffffffa0170670>] ? nfsd_destroy+0x80/0x80 [nfsd]
      [ 2325.022982]  [<ffffffff810a8232>] kthread+0xd2/0xf0
      [ 2325.022982]  [<ffffffff810a8160>] ? insert_kthread_work+0x40/0x40
      [ 2325.022982]  [<ffffffff816c493c>] ret_from_fork+0x7c/0xb0
      [ 2325.022982]  [<ffffffff810a8160>] ? insert_kthread_work+0x40/0x40
      [ 2325.022982] Code: 48 83 e1 fc 48 89 10 0f 84 02 01 00 00 48 3b 41 10
      0f 84 08 01 00 00 48 89 51 08 48 89 fa e9 74 ff ff ff 0f 1f 40 00 48 8b
      50 10 <f6> 02 01 0f 84 93 00 00 00 48 8b 7a 10 48 85 ff 74 05 f6 07 01
      [ 2325.022982] RIP  [<ffffffff8133c18c>] rb_erase+0x14c/0x390
      [ 2325.022982]  RSP <ffff8800797f7d98>
      [ 2325.022982] CR2: 0000000000000000
      [ 2325.022982] ---[ end trace 28c27ed011655e57 ]---
      
      [  228.064071] BUG: soft lockup - CPU#0 stuck for 22s! [nfsd:558]
      [  228.064428] Modules linked in: ip6t_rpfilter ip6t_REJECT cfg80211
      xt_conntrack rfkill ebtable_nat ebtable_broute bridge stp llc
      ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6
      nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw
      ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4
      nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security
      iptable_raw nfsd(OF) auth_rpcgss nfs_acl lockd snd_intel8x0
      snd_ac97_codec ac97_bus joydev snd_pcm snd_timer e1000 sunrpc snd ppdev
      parport_pc serio_raw pcspkr i2c_piix4 microcode parport soundcore
      i2c_core ata_generic pata_acpi
      [  228.064539] CPU: 0 PID: 558 Comm: nfsd Tainted: GF          O
      3.14.0-rc8+ #2
      [  228.064539] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS
      VirtualBox 12/01/2006
      [  228.064539] task: ffff880076adec00 ti: ffff880074616000 task.ti:
      ffff880074616000
      [  228.064539] RIP: 0010:[<ffffffff8133ba17>]  [<ffffffff8133ba17>]
      rb_next+0x27/0x50
      [  228.064539] RSP: 0018:ffff880074617de0  EFLAGS: 00000282
      [  228.064539] RAX: ffff880074478010 RBX: ffff88007446f860 RCX:
      0000000000000014
      [  228.064539] RDX: ffff880074478010 RSI: 0000000000000000 RDI:
      ffff880074478010
      [  228.064539] RBP: ffff880074617de0 R08: 0000000000000000 R09:
      0000000000000012
      [  228.064539] R10: 0000000000000001 R11: ffffffffffffffec R12:
      ffffea0001d11a00
      [  228.064539] R13: ffff88007f401400 R14: ffff88007446f800 R15:
      ffff880074617d50
      [  228.064539] FS:  0000000000000000(0000) GS:ffff88007f800000(0000)
      knlGS:0000000000000000
      [  228.064539] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [  228.064539] CR2: 00007fe9ac6ec000 CR3: 000000007a5d6000 CR4:
      00000000000006f0
      [  228.064539] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
      0000000000000000
      [  228.064539] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
      0000000000000400
      [  228.064539] Stack:
      [  228.064539]  ffff880074617e28 ffffffffa01ab7db ffff880074617df0
      ffff880074617df0
      [  228.064539]  ffff880079273000 ffffffff81cc26c0 ffffffff81cc26c0
      0000000000000000
      [  228.064539]  0000000000000000 ffff880074617e48 ffffffffa01840b8
      ffffffff81cc26c0
      [  228.064539] Call Trace:
      [  228.064539]  [<ffffffffa01ab7db>] nfs4_state_shutdown_net+0x18b/0x220
      [nfsd]
      [  228.064539]  [<ffffffffa01840b8>] nfsd_shutdown_net+0x38/0x70 [nfsd]
      [  228.064539]  [<ffffffffa018413e>] nfsd_last_thread+0x4e/0x80 [nfsd]
      [  228.064539]  [<ffffffffa00aa1eb>] svc_shutdown_net+0x2b/0x30 [sunrpc]
      [  228.064539]  [<ffffffffa018464b>] nfsd_destroy+0x5b/0x80 [nfsd]
      [  228.064539]  [<ffffffffa0184773>] nfsd+0x103/0x130 [nfsd]
      [  228.064539]  [<ffffffffa0184670>] ? nfsd_destroy+0x80/0x80 [nfsd]
      [  228.064539]  [<ffffffff810a8232>] kthread+0xd2/0xf0
      [  228.064539]  [<ffffffff810a8160>] ? insert_kthread_work+0x40/0x40
      [  228.064539]  [<ffffffff816c493c>] ret_from_fork+0x7c/0xb0
      [  228.064539]  [<ffffffff810a8160>] ? insert_kthread_work+0x40/0x40
      [  228.064539] Code: 1f 44 00 00 55 48 8b 17 48 89 e5 48 39 d7 74 3b 48
      8b 47 08 48 85 c0 75 0e eb 25 66 0f 1f 84 00 00 00 00 00 48 89 d0 48 8b
      50 10 <48> 85 d2 75 f4 5d c3 66 90 48 3b 78 08 75 f6 48 8b 10 48 89 c7
      
      Fixes: ac55fdc4 (nfsd: move the confirmed and unconfirmed hlists...)
      Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
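The fix above can be illustrated with a small user-space sketch. The names and data structures here are simplified stand-ins, not the kernel's hlist code: the point is that removal from a hash chain is easy to make idempotent, so a client already torn down by destroy_client() is simply not found again when the shutdown path walks the table.

```c
#include <assert.h>
#include <stdlib.h>

#define HASH_SIZE 8

struct client {
    unsigned int id;
    struct client *next;   /* hash-chain link */
};

static struct client *unconf_tbl[HASH_SIZE];

static unsigned int hash_id(unsigned int id) { return id % HASH_SIZE; }

void client_insert(struct client *clp)
{
    unsigned int b = hash_id(clp->id);
    clp->next = unconf_tbl[b];
    unconf_tbl[b] = clp;
}

/* Unlinking is safe to call twice: the second call finds nothing. */
void client_remove(struct client *clp)
{
    struct client **pp = &unconf_tbl[hash_id(clp->id)];
    for (; *pp; pp = &(*pp)->next) {
        if (*pp == clp) {
            *pp = clp->next;
            clp->next = NULL;
            return;
        }
    }
}

/* Shutdown walks every bucket, destroying whatever is still linked;
 * clients removed earlier are simply absent, so there is no
 * double-erase of a shared node as there was with the rbtree. */
int shutdown_all(void)
{
    int destroyed = 0;
    for (int b = 0; b < HASH_SIZE; b++) {
        while (unconf_tbl[b]) {
            struct client *clp = unconf_tbl[b];
            client_remove(clp);
            free(clp);
            destroyed++;
        }
    }
    return destroyed;
}
```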
  5. 28 Jan 2014 (2 commits)
  6. 07 Jan 2014 (1 commit)
  7. 04 Jan 2014 (3 commits)
  8. 09 Nov 2013 (1 commit)
  9. 05 Nov 2013 (1 commit)
  10. 30 Oct 2013 (2 commits)
  11. 29 Oct 2013 (4 commits)
  12. 25 Oct 2013 (1 commit)
  13. 03 Oct 2013 (1 commit)
  14. 31 Aug 2013 (2 commits)
  15. 08 Aug 2013 (1 commit)
  16. 27 Jul 2013 (1 commit)
  17. 24 Jul 2013 (1 commit)
  18. 09 Jul 2013 (1 commit)
    • nfsd4: allow destroy_session over destroyed session · f0f51f5c
      Authored by J. Bruce Fields
      RFC 5661 allows a client to destroy a session using a compound
      associated with the destroyed session, as long as the DESTROY_SESSION op
      is the last op of the compound.
      
      We attempt to allow this, but testing against a Solaris client (which
      does destroy sessions in this way) showed that we were failing the
      DESTROY_SESSION with NFS4ERR_DELAY, because we assumed the reference
      count on the session (held by us) represented another rpc in progress
      over this session.
      
      Fix this by noting that in this case the expected reference count is 1,
      not 0.
      
      Also note that as long as the compound we're processing holds a
      reference to the session being destroyed, we can't free the session
      here; instead, delay the free until the final put in
      nfs4svc_encode_compoundres.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
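The reference-count rule the commit describes can be sketched as follows. This is a minimal illustration, not the nfsd code: field names and the return convention are made up, but the logic is the one stated above, where the expected count is 1 rather than 0 when the DESTROY_SESSION compound arrived over the session it is destroying.

```c
#include <assert.h>
#include <stdbool.h>

struct session {
    int ref;       /* rpcs currently using this session */
    bool dead;     /* marked destroyed; actual free deferred to final put */
};

/* ref_held_by_me: 1 if the DESTROY_SESSION compound itself came in over
 * this session (and therefore holds one reference), else 0. */
int session_destroy(struct session *s, int ref_held_by_me)
{
    if (s->ref > ref_held_by_me)
        return -1;          /* other rpcs in flight: NFS4ERR_DELAY */
    s->dead = true;         /* can't free yet; our compound still uses it */
    return 0;
}

/* Final put: only here may the session actually be freed. */
void session_put(struct session *s)
{
    if (--s->ref == 0 && s->dead)
        ; /* free(s) would happen here, after encoding the reply */
}
```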
  19. 02 Jul 2013 (7 commits)
    • nfsd4: return delegation immediately if lease fails · d08d32e6
      Authored by J. Bruce Fields
      This case shouldn't happen: the administrator shouldn't really allow
      other applications access to the export until clients have had the
      chance to reclaim their state.  But if it does happen, we should set
      the "return this lease immediately" bit on the reply.  That still
      leaves some small races, but it's the best the protocol allows us to
      do when a lease is ripped out from under us.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • nfsd4: do not throw away 4.1 lock state on last unlock · 0a262ffb
      Authored by J. Bruce Fields
      This reverts commit eb2099f3 "nfsd4:
      release lockowners on last unlock in 4.1 case".  Trond identified
      language in RFC 5661, section 8.2.4, which forbids this behavior:
      
      	Stateids associated with byte-range locks are an exception.
      	They remain valid even if a LOCKU frees all remaining locks, so
      	long as the open file with which they are associated remains
      	open, unless the client frees the stateids via the FREE_STATEID
      	operation.
      
      And Bakeathon 2013 testing found a 4.1 FreeBSD client getting an
      incorrect BAD_STATEID return from a FREE_STATEID in the above
      situation and then failing.

      The spec language was probably a mistake, but at this point, with
      implementations already following it, we're probably stuck with it.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • nfsd4: clean up nfs4_open_delegation · 99c41515
      Authored by J. Bruce Fields
      The nfs4_open_delegation logic is unnecessarily baroque.
      
      Also stop pretending we support write delegations in several places.
      
      Some day we will support write delegations, but when that happens adding
      back in these flag parameters will be the easy part.  For now they're
      just confusing.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • NFSD: Don't give out read delegations on creates · 9a0590ae
      Authored by Steve Dickson
      When an exclusive create is done with the mode bits set (i.e.
      open(testfile, O_CREAT | O_EXCL, 0777)), this causes an OPEN op
      followed by a SETATTR op.  When a read delegation is given out on
      the OPEN, it causes the SETATTR to be delayed with EAGAIN until the
      delegation is recalled.

      This patch makes exclusive creates give out a write delegation
      (which turns into no delegation), which allows the SETATTR to
      succeed seamlessly.
      Signed-off-by: Steve Dickson <steved@redhat.com>
      [bfields: do this for any CREATE, not just exclusive; comment]
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
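The decision the commit describes can be sketched as a tiny policy function. This is illustrative only: the names and return values are made up, not the kernel's nfs4_open_delegation(), but it captures the rule that a CREATE open gets no delegation so the follow-up SETATTR isn't blocked recalling our own read delegation.

```c
#include <assert.h>
#include <stdbool.h>

enum deleg { DELEG_NONE, DELEG_READ };

/* On a create (e.g. open(..., O_CREAT | O_EXCL, mode)), hand out no
 * delegation: the client will immediately SETATTR the new file, and a
 * read delegation would only have to be recalled right away. */
enum deleg open_delegation(bool is_create, bool can_delegate)
{
    if (is_create)
        return DELEG_NONE;
    return can_delegate ? DELEG_READ : DELEG_NONE;
}
```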
    • nfsd4: fail attempts to request gss on the backchannel · b78724b7
      Authored by J. Bruce Fields
      We don't support gss on the backchannel.  We should state that fact up
      front rather than just letting things continue and later making the
      client try to figure out why the backchannel isn't working.
      
      Trond suggested instead returning NFS4ERR_NOENT.  I think it would be
      tricky for the client to distinguish between the case "I don't support
      gss on the backchannel" and "I can't find that in my cache, please
      create another context and try that instead", and I'd prefer something
      that currently doesn't have any other meaning for this operation, hence
      the (somewhat arbitrary) NFS4ERR_ENCR_ALG_UNSUPP.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • nfsd4: implement minimal SP4_MACH_CRED · 57266a6e
      Authored by J. Bruce Fields
      Do a minimal SP4_MACH_CRED implementation suggested by Trond, ignoring
      the client-provided spo_must_* arrays and just enforcing credential
      checks for the minimum required operations.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
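A minimal sketch of the enforcement shape described above, ignoring the client's spo_must_* arrays and checking the credential only for a required set of operations. The op names and the string comparison standing in for a credential match are illustrative assumptions, not the nfsd implementation.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

enum nfs_op { OP_EXCHANGE_ID, OP_CREATE_SESSION, OP_DESTROY_SESSION,
              OP_DESTROY_CLIENTID, OP_OPEN, OP_READ };

/* Only a small set of state-changing ops must carry the machine
 * credential (illustrative selection). */
static bool op_needs_mach_cred(enum nfs_op op)
{
    switch (op) {
    case OP_CREATE_SESSION:
    case OP_DESTROY_SESSION:
    case OP_DESTROY_CLIENTID:
        return true;
    default:
        return false;
    }
}

/* Allow the op unless it is in the required set and the credential does
 * not match the one presented at EXCHANGE_ID time. */
bool mach_cred_allows(enum nfs_op op, const char *cred,
                      const char *exchange_id_cred)
{
    if (!op_needs_mach_cred(op))
        return true;
    return strcmp(cred, exchange_id_cred) == 0;
}
```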
    • svcrpc: store gss mech in svc_cred · 0dc1531a
      Authored by J. Bruce Fields
      Store a pointer to the gss mechanism used in the rq_cred and cl_cred.
      This will make it easier to enforce SP4_MACH_CRED, which needs to
      compare the mechanism used on the exchange_id with that used on
      protected operations.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
  20. 29 Jun 2013 (1 commit)
    • locks: protect most of the file_lock handling with i_lock · 1c8c601a
      Authored by Jeff Layton
      Having a global lock that protects all of this code is a clear
      scalability problem. Instead of doing that, move most of the code to be
      protected by the i_lock instead. The exceptions are the global lists
      that the ->fl_link sits on, and the ->fl_block list.
      
      ->fl_link is what connects these structures to the
      global lists, so we must ensure that we hold those locks when iterating
      over or updating these lists.
      
      Furthermore, sound deadlock detection requires that we hold the
      blocked_list state steady while checking for loops. We also must ensure
      that the search and update to the list are atomic.
      
      For the checking and insertion side of the blocked_list, push the
      acquisition of the global lock into __posix_lock_file and ensure
      that the check and update of the blocked_list are done without
      dropping the lock in between.
      
      On the removal side, when waking up blocked lock waiters, take the
      global lock before walking the blocked list and dequeue the waiters from
      the global list prior to removal from the fl_block list.
      
      With this, deadlock detection should be race free while we minimize
      excessive file_lock_lock thrashing.
      
      Finally, in order to avoid a lock inversion problem when handling
      /proc/locks output we must ensure that manipulations of the fl_block
      list are also protected by the file_lock_lock.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
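The check-and-insert discipline described above can be sketched in user space. This is a simplification, assuming a single global mutex standing in for the kernel's blocked-list lock, and it only detects a direct two-party cycle where the kernel follows the whole chain; the point is that the deadlock check and the list insertion happen under one hold of the lock, with no drop in between.

```c
#include <assert.h>
#include <pthread.h>

/* Global lock protecting the blocked-waiters list, analogous to the
 * kernel's file_lock_lock in this commit. */
static pthread_mutex_t blocked_lock = PTHREAD_MUTEX_INITIALIZER;

struct waiter {
    struct waiter *next;
    int owner;      /* who is waiting */
    int waits_on;   /* whose lock they are blocked on */
};
static struct waiter *blocked_list;

/* Check for deadlock and, if safe, insert onto the blocked list, all
 * under one acquisition of the global lock so the list cannot change
 * between the check and the update. Returns -1 on would-deadlock. */
int block_on(struct waiter *w)
{
    int rc = 0;
    pthread_mutex_lock(&blocked_lock);
    for (struct waiter *p = blocked_list; p; p = p->next) {
        /* direct cycle only, for brevity; real detection walks chains */
        if (p->owner == w->waits_on && p->waits_on == w->owner) {
            rc = -1;
            break;
        }
    }
    if (rc == 0) {              /* insert without dropping the lock */
        w->next = blocked_list;
        blocked_list = w;
    }
    pthread_mutex_unlock(&blocked_lock);
    return rc;
}
```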
  21. 21 May 2013 (1 commit)