1. 01 Apr 2014, 1 commit
  2. 29 Mar 2014, 3 commits
  3. 28 Mar 2014, 4 commits
  4. 25 Jan 2014, 1 commit
  5. 09 Jan 2014, 1 commit
  6. 08 Jan 2014, 2 commits
  7. 04 Jan 2014, 4 commits
  8. 03 Jan 2014, 1 commit
  9. 11 Dec 2013, 1 commit
  10. 20 Nov 2013, 1 commit
    • nfsd4: fix xdr decoding of large non-write compounds · 365da4ad
      Committed by J. Bruce Fields
      This fixes a regression from 24750082
      "nfsd4: fix decoding of compounds across page boundaries".  The previous
      code was correct: argp->pagelist is initialized in
      nfs4svc_decode_compoundargs to rqstp->rq_arg.pages, and is therefore a
      pointer to the page *after* the page we are currently decoding.
      
      The reason that patch nevertheless fixed a problem with decoding
      compounds containing write was a bug in the write decoding introduced by
      5a80a54d "nfsd4: reorganize write
      decoding", after which write decoding no longer adhered to the rule that
      argp->pagelist point to the next page.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
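The invariant the commit describes can be sketched with a toy model (this is illustrative C, not kernel code; the `toy_` names are made up): `pagelist` always names the page *after* the one currently being decoded, so crossing a page boundary means jumping to `*pagelist` and then advancing it to restore the invariant.

```c
#include <assert.h>

/* Toy model of the decoding invariant -- NOT kernel code.  "pages"
 * stands in for rqstp->rq_arg.pages; "pagelist" always points at the
 * page AFTER the one being decoded. */
#define TOY_PAGE_SIZE 4

struct toy_args {
    const char *p;                          /* current decode position */
    const char *end;                        /* end of the current page */
    const char (*pagelist)[TOY_PAGE_SIZE];  /* next page, per the invariant */
};

static void toy_init(struct toy_args *argp,
                     const char pages[][TOY_PAGE_SIZE])
{
    argp->p = pages[0];
    argp->end = pages[0] + TOY_PAGE_SIZE;
    argp->pagelist = &pages[1];     /* the page *after* the current one */
}

static char toy_read_byte(struct toy_args *argp)
{
    if (argp->p == argp->end) {
        /* Cross the boundary: decode from the page pagelist names,
         * then restore the invariant by advancing pagelist. */
        argp->p = *argp->pagelist;
        argp->end = argp->p + TOY_PAGE_SIZE;
        argp->pagelist++;
    }
    return *argp->p++;
}
```

The write-decoding bug the commit mentions was precisely a failure to keep `pagelist` one page ahead after consuming data, which made the earlier "fix" appear correct.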
  11. 15 Nov 2013, 1 commit
  12. 14 Nov 2013, 1 commit
  13. 02 Nov 2013, 1 commit
  14. 31 Oct 2013, 1 commit
  15. 30 Oct 2013, 1 commit
  16. 04 Sep 2013, 1 commit
  17. 08 Aug 2013, 1 commit
  18. 09 Jul 2013, 1 commit
    • nfsd4: allow destroy_session over destroyed session · f0f51f5c
      Committed by J. Bruce Fields
      RFC 5661 allows a client to destroy a session using a compound
      associated with the destroyed session, as long as the DESTROY_SESSION op
      is the last op of the compound.
      
      We attempt to allow this, but testing against a Solaris client (which
      does destroy sessions in this way) showed that we were failing the
      DESTROY_SESSION with NFS4ERR_DELAY, because we assumed the reference
      count on the session (held by us) represented another rpc in progress
      over this session.
      
      Fix this by noting that in this case the expected reference count is 1,
      not 0.
      
      Also, note that as long as the compound holds a reference to the
      session we're destroying, we can't free the session here--instead,
      delay the free till the final put in nfs4svc_encode_compoundres.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
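The refcount reasoning above can be sketched as a toy check (illustrative names, not the kernel's API): a compound destroying the session it runs over itself holds one reference, so "busy" must be judged against that expected baseline rather than against zero.

```c
#include <assert.h>

/* Hypothetical sketch of the fix -- NOT kernel code. */
struct toy_session { int ref; };

static int toy_session_busy(const struct toy_session *ses,
                            int refs_held_by_caller)
{
    /* Before the fix the test was effectively ses->ref != 0, which
     * made DESTROY_SESSION over its own session fail with
     * NFS4ERR_DELAY even though no other RPC was using it. */
    return ses->ref != refs_held_by_caller;
}
```

With this shape, the Solaris-style "destroy the session the compound is running over" case passes `refs_held_by_caller == 1`, while a destroy arriving over a different session still requires the count to drop to zero.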
  19. 02 Jul 2013, 4 commits
  20. 15 May 2013, 2 commits
  21. 13 May 2013, 1 commit
  22. 01 May 2013, 2 commits
    • NFSD: SECINFO doesn't handle unsupported pseudoflavors correctly · 676e4ebd
      Committed by Chuck Lever
      If nfsd4_do_encode_secinfo() can't find GSS info that matches an
      export security flavor, it assumes the flavor is not a GSS
      pseudoflavor, and simply puts it on the wire.
      
      However, if this XDR encoding logic is given a legitimate GSS
      pseudoflavor but the RPC layer says it does not support that
      pseudoflavor for some reason, then the server leaks GSS pseudoflavor
      numbers onto the wire.
      
      I confirmed this happens by blacklisting rpcsec_gss_krb5, then
      attempting a client transition from the pseudo-fs to a Kerberos-only
      share.  The client received a flavor list containing the Kerberos
      pseudoflavor numbers, rather than GSS tuples.
      
      The encoder logic can check that each pseudoflavor in flavs[] is
      less than MAXFLAVOR before writing it into the buffer, to prevent
      this.  But after "nflavs" is written into the XDR buffer, the
      encoder can't skip writing flavor information into the buffer when
      it discovers the RPC layer doesn't support that flavor.
      
      So count the number of valid flavors as they are written into the
      XDR buffer, then write that count into a placeholder in the XDR
      buffer when all recognized flavors have been encoded.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
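The count-then-backfill pattern described above can be sketched with a toy encoder (illustrative names, not the kernel's XDR API; the "supported" rule is an arbitrary stand-in for the RPC layer): reserve one word for the flavor count, emit only recognized flavors, then backfill the real count.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy encoder demonstrating the placeholder pattern -- NOT kernel code. */
static int toy_flavor_supported(uint32_t flavor)
{
    return flavor <= 6;   /* arbitrary toy rule standing in for the RPC layer */
}

static size_t toy_encode_secinfo(uint32_t *buf,
                                 const uint32_t *flavs, size_t nflavs)
{
    uint32_t *countp = buf++;   /* placeholder word for the final count */
    uint32_t count = 0;
    size_t i;

    for (i = 0; i < nflavs; i++) {
        if (!toy_flavor_supported(flavs[i]))
            continue;           /* skip it; don't leak raw numbers */
        *buf++ = flavs[i];
        count++;
    }
    *countp = count;            /* backfill once the real count is known */
    return 1 + count;           /* words written, including the count */
}
```

The key point is that `nflavs` is never written up front: the count word is only filled in after the loop knows how many flavors actually made it onto the wire.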
    • ed9411a0
  23. 24 Apr 2013, 1 commit
  24. 17 Apr 2013, 1 commit
  25. 08 Apr 2013, 1 commit
    • nfsd4: cleanup handling of nfsv4.0 closed stateid's · 9411b1d4
      Committed by J. Bruce Fields
      Closed stateids are kept around a little while to handle close replays
      in the 4.0 case.  So we stash the last-used stateid in the
      oo_last_closed_stateid field of the open owner.  We can free that in
      encode_seqid_op_tail once the seqid on the open owner is next
      incremented.  But we don't want to do that on the close itself; so we
      set the NFS4_OO_PURGE_CLOSE flag on the open owner, skip freeing it the
      first time through encode_seqid_op_tail, then free it when we see that
      flag set the next time through.
      
      This is unnecessarily baroque.
      
      Instead, just move the logic that increments the seqid out of the xdr
      code and into the operation code itself.
      
      The justification given for the current placement is that we need to
      wait till the last minute to be sure we know whether the status is a
      sequence-id-mutating error or not, but examination of the code shows
      that can't actually happen.
      Reported-by: Yanchuan Nian <ycnian@gmail.com>
      Tested-by: Yanchuan Nian <ycnian@gmail.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
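The simplified placement can be sketched as follows (illustrative `toy_` names, not the kernel's functions): since the op's status is final by the time the operation code returns, the seqid can be bumped right there instead of in the XDR encode path with a purge flag.

```c
#include <assert.h>

/* Toy sketch of bumping the seqid in the op code -- NOT kernel code. */
struct toy_openowner { unsigned int seqid; };

enum { TOY_OK = 0, TOY_ERR_NOT_MUTATING = 1, TOY_ERR_MUTATING = 2 };

static int toy_seqid_mutating(int status)
{
    /* Toy stand-in for the real seqid-mutating-error test. */
    return status == TOY_OK || status == TOY_ERR_MUTATING;
}

static void toy_op_done(struct toy_openowner *oo, int status)
{
    /* The status is final here, so the seqid can be bumped now
     * rather than deferred to encode_seqid_op_tail. */
    if (toy_seqid_mutating(status))
        oo->seqid++;
}
```

This removes any need for a second pass through the encoder to notice a flag and free state after the fact.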
  26. 03 Apr 2013, 1 commit
    • nfsd4: don't destroy in-use clients · 221a6876
      Committed by J. Bruce Fields
      When a setclientid_confirm or create_session confirms a client after a
      client reboot, it also destroys any previous state held by that client.
      
      The shutdown of that previous state must be careful not to free the
      client out from under threads processing other requests that refer to
      the client.
      
      This is a particular problem in the NFSv4.1 case when we hold a
      reference to a session (hence a client) throughout compound processing.
      
      The server attempts to handle this by unhashing the client at the time
      it's destroyed, then delaying the final free to the end.  But this still
      leaves some races in the current code.
      
      I believe it's simpler just to fail the attempt to destroy the client by
      returning NFS4ERR_DELAY.  This is a case that should never happen
      anyway.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
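The approach can be sketched as a toy check (illustrative `toy_` names, not the kernel's API; NFS4ERR_DELAY's numeric value is per RFC 7530): rather than racing to free a client that other threads may still reference, refuse the destroy and let the client retry.

```c
#include <assert.h>

#define TOY_NFS4ERR_DELAY 10008  /* NFS4ERR_DELAY, per RFC 7530 */

/* Toy sketch of refusing to destroy an in-use client -- NOT kernel code. */
struct toy_client { int ref; int hashed; };

static int toy_try_destroy_client(struct toy_client *clp)
{
    if (clp->ref)
        return TOY_NFS4ERR_DELAY;  /* in use elsewhere: punt, don't free */
    clp->hashed = 0;               /* unhash; the actual free would follow */
    return 0;
}
```

Returning NFS4ERR_DELAY trades a theoretical retry (in a case the commit notes should never happen) for the removal of the unhash-now, free-later races in the old code.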