1. 16 Mar 2011 (1 commit)
  2. 12 Mar 2011 (6 commits)
  3. 11 Mar 2011 (3 commits)
    • sunrpc: Propagate errors from xs_bind() through xs_create_sock() · 4cea288a
      Authored by Ben Hutchings
      xs_create_sock() is supposed to return a pointer or an ERR_PTR-encoded
      error, but it currently returns 0 if xs_bind() fails.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Cc: stable@kernel.org [v2.6.37]
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
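
      The fix relies on the kernel's ERR_PTR convention: a pointer-returning
      function must return an ERR_PTR-encoded errno on failure, never a bare
      0/NULL, or callers that only test with IS_ERR() will dereference NULL.
      A minimal compilable userspace sketch of the idiom follows; create_sock,
      do_bind and sock_stub are illustrative stand-ins, not the actual sunrpc
      functions.

      #include <errno.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* userspace stand-ins for the kernel's ERR_PTR machinery */
      static inline void *ERR_PTR(long err) { return (void *)err; }
      static inline long PTR_ERR(const void *p) { return (long)p; }
      static inline int IS_ERR(const void *p)
      {
          return (unsigned long)p >= (unsigned long)-4095;
      }

      struct sock_stub { int fd; };

      static int do_bind(struct sock_stub *s) { (void)s; return -EADDRINUSE; }

      static struct sock_stub *create_sock(void)
      {
          struct sock_stub *s = malloc(sizeof(*s));
          int err;

          if (!s)
              return ERR_PTR(-ENOMEM);
          err = do_bind(s);
          if (err) {
              free(s);
              return ERR_PTR(err);    /* the fix: propagate, don't return 0 */
          }
          return s;
      }

      int main(void)
      {
          struct sock_stub *s = create_sock();

          if (IS_ERR(s)) {            /* an IS_ERR() check misses a bare NULL */
              printf("create_sock: error %ld\n", PTR_ERR(s));
              return 0;
          }
          free(s);
          return 0;
      }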
    • SUNRPC: Remove resource leak in svc_rdma_send_error() · a5e50268
      Authored by Jesper Juhl
      We leak the memory allocated to 'ctxt' when we return after
      'ib_dma_mapping_error()' returns nonzero.
      Signed-off-by: Jesper Juhl <jj@chaosbits.net>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
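
      The fix follows the usual pattern for error paths after an allocation:
      release 'ctxt' before returning. A compilable sketch of the pattern,
      with stub names standing in for the svc_rdma code:

      #include <stdlib.h>

      struct ctxt_stub { char buf[64]; };

      static int ib_dma_mapping_error_stub(void) { return 1; /* mapping failed */ }

      static int send_error_stub(void)
      {
          struct ctxt_stub *ctxt = malloc(sizeof(*ctxt));

          if (!ctxt)
              return -1;
          if (ib_dma_mapping_error_stub()) {
              free(ctxt);         /* the fix: release ctxt before bailing out */
              return -1;
          }
          /* on success, ownership of ctxt passes to the send machinery */
          free(ctxt);             /* stand-in for that hand-off in this sketch */
          return 0;
      }

      int main(void)
      {
          return send_error_stub() == -1 ? 0 : 1;
      }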
    • SUNRPC: Close a race in __rpc_wait_for_completion_task() · bf294b41
      Authored by Trond Myklebust
      Although they run as rpciod background tasks, under normal operation
      (i.e. no SIGKILL), functions like nfs_sillyrename(), nfs4_proc_unlck()
      and nfs4_do_close() want to be fully synchronous. This means that when we
      exit, we want all references to the rpc_task to be gone, and we want
      any dentry references etc. held by that task to be released.
      
      For this reason these functions call __rpc_wait_for_completion_task(),
      followed by rpc_put_task() in the expectation that the latter will be
      releasing the last reference to the rpc_task, and thus ensuring that the
      callback_ops->rpc_release() has been called synchronously.
      
      This patch fixes a race which exists due to the fact that
      rpciod calls rpc_complete_task() (in order to wake up the callers of
      __rpc_wait_for_completion_task()) and then subsequently calls
      rpc_put_task() without ensuring that these two steps are done atomically.
      
      In order to avoid adding new spin locks, the patch uses the existing
      waitqueue spin lock to order the rpc_task reference count releases between
      the waiting process and rpciod.
      The common case where nobody is waiting for completion is optimised
      by checking whether the RPC_TASK_ASYNC flag is cleared and/or the
      rpc_task reference count is 1: in those cases we avoid taking the
      spin lock and immediately free the rpc_task.
      
      Those few processes that need to put the rpc_task from inside an
      asynchronous context and that do not care about ordering are given a new
      helper: rpc_put_task_async().
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
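
      A compilable pthread sketch of the idea, under illustrative names and
      with a mutex standing in for the waitqueue spinlock: the completion and
      both reference drops are ordered through one shared lock, so the task
      is freed exactly once, and never while the other side is still using it.

      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>

      struct task {
          int refcount;             /* protected by lock */
          int done;                 /* completion state, protected by lock */
          pthread_mutex_t lock;     /* stands in for the waitqueue spinlock */
          pthread_cond_t cond;
      };

      /* Drop one reference; the lock orders this against the other side's drop. */
      static void put_task(struct task *t)
      {
          int free_it;

          pthread_mutex_lock(&t->lock);
          free_it = (--t->refcount == 0);
          pthread_mutex_unlock(&t->lock);
          if (free_it) {
              printf("task freed exactly once\n");
              free(t);
          }
      }

      /* rpciod side: rpc_complete_task() followed by rpc_put_task() analogue */
      static void *rpciod_side(void *arg)
      {
          struct task *t = arg;

          pthread_mutex_lock(&t->lock);
          t->done = 1;
          pthread_cond_broadcast(&t->cond);
          pthread_mutex_unlock(&t->lock);
          put_task(t);
          return NULL;
      }

      int main(void)
      {
          struct task *t = calloc(1, sizeof(*t));
          pthread_t thr;

          if (!t)
              return 1;
          t->refcount = 2;          /* one reference each: waiter and rpciod */
          pthread_mutex_init(&t->lock, NULL);
          pthread_cond_init(&t->cond, NULL);
          pthread_create(&thr, NULL, rpciod_side, t);

          /* __rpc_wait_for_completion_task() analogue */
          pthread_mutex_lock(&t->lock);
          while (!t->done)
              pthread_cond_wait(&t->cond, &t->lock);
          pthread_mutex_unlock(&t->lock);
          put_task(t);              /* rpc_put_task() analogue */

          pthread_join(thr, NULL);
          return 0;
      }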
  4. 26 Jan 2011 (1 commit)
  5. 12 Jan 2011 (3 commits)
    • rpc: allow xprt_class->setup to return a preexisting xprt · f0418aa4
      Authored by J. Bruce Fields
      This allows us to reuse the xprt associated with a server connection if
      one has already been set up.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • rpc: keep backchannel xprt as long as server connection · 99de8ea9
      Authored by J. Bruce Fields
      Multiple backchannels can share the same tcp connection; from rfc 5661 section
      2.10.3.1:
      
      	A connection's association with a session is not exclusive.  A
      	connection associated with the channel(s) of one session may be
      	simultaneously associated with the channel(s) of other sessions
      	including sessions associated with other client IDs.
      
      However, when multiple backchannels share a connection, they must all
      share the same xid stream (hence the same rpc_xprt); the xid is the
      only way we have to match replies with calls at the rpc layer.
      
      So, keep the rpc_xprt around as long as the connection lasts, in case
      we're asked to use the connection as a backchannel again.
      
      Requests to create new backchannel clients over a given server
      connection should result in new clients that reuse the existing
      rpc_xprt, as sketched below.
      
      But to start, just reject attempts to associate multiple rpc_xprt's with
      the same underlying bc_xprt.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
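
      A minimal sketch of the intended end state, with hypothetical types and
      names: a per-connection lookup hands back the existing xprt with an
      extra reference instead of building a second one.

      #include <stdlib.h>

      struct xprt_stub { int refcount; };
      struct server_conn { struct xprt_stub *bc_xprt; };

      /* setup: hand back the connection's existing xprt, or create the first */
      static struct xprt_stub *setup_backchannel(struct server_conn *conn)
      {
          if (conn->bc_xprt) {              /* reuse: same connection, same
                                             * xid stream, same rpc_xprt */
              conn->bc_xprt->refcount++;
              return conn->bc_xprt;
          }
          conn->bc_xprt = calloc(1, sizeof(*conn->bc_xprt));
          if (conn->bc_xprt)
              conn->bc_xprt->refcount = 1;
          return conn->bc_xprt;
      }

      int main(void)
      {
          struct server_conn c = { 0 };
          struct xprt_stub *a = setup_backchannel(&c);
          struct xprt_stub *b = setup_backchannel(&c);
          int ok = (a && a == b);           /* second setup reuses the first */

          free(a);
          return ok ? 0 : 1;
      }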
    • rpc: move sk_bc_xprt to svc_xprt · d75faea3
      Authored by J. Bruce Fields
      This seems obviously transport-level information even if it's currently
      used only by the server socket code.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
  6. 11 Jan 2011 (1 commit)
    • NFS: Don't use vm_map_ram() in readdir · 6650239a
      Authored by Trond Myklebust
      vm_map_ram() is not available on NOMMU platforms, and causes trouble
      on incoherent architectures such as ARM when we access the page data
      through both the direct and the virtual mapping.
      
      The alternative is to use the direct mapping to access page data
      for the case when we are not crossing a page boundary, but to copy
      the data into a linear scratch buffer when we are accessing data
      that spans page boundaries.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Cc: stable@kernel.org [2.6.37]
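
      A compilable userspace sketch of that approach; PAGE_SZ and the names
      are illustrative, and a record is assumed to span at most one page
      boundary:

      #include <stddef.h>
      #include <string.h>

      #define PAGE_SZ 4096u

      /* Return 'len' contiguous bytes starting at byte 'pos' of the page
       * array; assumes len <= PAGE_SZ, so at most one boundary is crossed. */
      static const void *get_span(char *const pages[], size_t pos, size_t len,
                                  char *scratch)
      {
          size_t idx = pos / PAGE_SZ;
          size_t off = pos % PAGE_SZ;

          if (off + len <= PAGE_SZ)         /* fast path: direct mapping */
              return pages[idx] + off;

          /* slow path: stitch the two pieces together in the scratch buffer */
          memcpy(scratch, pages[idx] + off, PAGE_SZ - off);
          memcpy(scratch + (PAGE_SZ - off), pages[idx + 1],
                 len - (PAGE_SZ - off));
          return scratch;
      }

      int main(void)
      {
          static char p0[PAGE_SZ], p1[PAGE_SZ];
          char *pages[] = { p0, p1 };
          char scratch[64];

          memcpy(p0 + PAGE_SZ - 3, "abc", 3);
          memcpy(p1, "def", 3);
          return memcmp(get_span(pages, PAGE_SZ - 3, 6, scratch), "abcdef", 6);
      }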
  7. 07 Jan 2011 (9 commits)
  8. 05 Jan 2011 (7 commits)
    • svcrpc: ensure cache_check caller sees updated entry · fdef7aa5
      Authored by J. Bruce Fields
      Suppose cache_check runs simultaneously with an update on a different
      CPU:
      
      	cache_check			task doing update
      	^^^^^^^^^^^			^^^^^^^^^^^^^^^^^
      
      	1. test for CACHE_VALID		1'. set entry->data
      	   & !CACHE_NEGATIVE
      
      	2. use entry->data		2'. set CACHE_VALID
      
      If the two memory writes performed in step 1' and 2' appear misordered
      with respect to the reads in step 1 and 2, then the caller could get
      stale data at step 2 even though it saw CACHE_VALID set on the cache
      entry.
      
      Add memory barriers to prevent this.
      Reviewed-by: NeilBrown <neilb@suse.de>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
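
      A C11 sketch of the ordering being enforced; the kernel patch uses
      smp_wmb()/smp_rmb(), for which the release/acquire pair below plays
      the same role:

      #include <stdatomic.h>

      #define CACHE_VALID 0x1

      struct entry {
          int data;
          atomic_uint flags;    /* CACHE_VALID bit published with release */
      };

      /* updater: fill in the data, then publish CACHE_VALID (1' then 2') */
      static void update(struct entry *e, int v)
      {
          e->data = v;
          atomic_fetch_or_explicit(&e->flags, CACHE_VALID,
                                   memory_order_release);
      }

      /* cache_check: observe CACHE_VALID, then the data read is not stale */
      static int check(struct entry *e, int *out)
      {
          if (!(atomic_load_explicit(&e->flags, memory_order_acquire)
                & CACHE_VALID))
              return 0;
          *out = e->data;
          return 1;
      }

      int main(void)
      {
          struct entry e = { 0 };
          int v = 0;

          update(&e, 42);
          return (check(&e, &v) && v == 42) ? 0 : 1;
      }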
    • svcrpc: take lock on turning entry NEGATIVE in cache_check · 6bab93f8
      Authored by J. Bruce Fields
      We attempt to turn a cache entry negative in place.  But that entry may
      already have been filled in by some other task since we last checked
      whether it was valid, so we could be modifying an already-valid entry.
      If nothing else, there's a likely leak in such a case: when the entry
      is eventually put(), its contents are not freed because it has
      CACHE_NEGATIVE set.
      
      So, take the cache_lock just as sunrpc_cache_update() does.
      Reviewed-by: NeilBrown <neilb@suse.de>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
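
      A compilable pthread sketch of the fix, under illustrative names: take
      the same lock the updater takes and re-check validity before flipping
      the entry negative.

      #include <pthread.h>

      #define CACHE_VALID    0x1
      #define CACHE_NEGATIVE 0x2

      static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

      struct centry { unsigned int flags; };

      static void try_to_negate_entry(struct centry *h)
      {
          pthread_mutex_lock(&cache_lock);
          if (!(h->flags & CACHE_VALID)) {   /* re-check under the lock */
              h->flags |= CACHE_NEGATIVE;
              h->flags |= CACHE_VALID;
          }                                  /* else: another task filled it
                                              * in meanwhile; leave it alone */
          pthread_mutex_unlock(&cache_lock);
      }

      int main(void)
      {
          struct centry e = { 0 };

          try_to_negate_entry(&e);
          return (e.flags & CACHE_NEGATIVE) ? 0 : 1;
      }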
    • svcrpc: simpler request dropping · 9e701c61
      Authored by J. Bruce Fields
      Currently we use -EAGAIN returns to determine when to drop a deferred
      request.  On its own, that is error-prone, as it makes us treat -EAGAIN
      returns from other functions specially to prevent inadvertent dropping.
      
      So, use a flag on the request instead.
      
      Returning an error on request deferral is still required, to prevent
      further processing, but we no longer need to worry that an error
      return on its own could result in a drop.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
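
      A sketch of the scheme with hypothetical names (the real flag and
      helpers in svc differ): the drop decision travels as an explicit
      per-request flag rather than as an error value.

      #include <errno.h>

      struct svc_rq_stub { int dropme; };

      /* deferral hands responsibility for the reply to the deferred request */
      static int defer_request(struct svc_rq_stub *rq)
      {
          rq->dropme = 1;        /* explicit: this request must be dropped */
          return -EAGAIN;        /* still an error, to stop further processing */
      }

      /* caller: drop only on the flag, never on the error value alone */
      static int finish_request(struct svc_rq_stub *rq, int err)
      {
          if (rq->dropme)
              return 0;          /* dropped: the deferred request will reply */
          if (err)
              return err;        /* a plain error no longer causes a drop */
          /* send_reply(rq); */
          return 0;
      }

      int main(void)
      {
          struct svc_rq_stub rq = { 0 };

          return finish_request(&rq, defer_request(&rq));
      }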
    • svcrpc: avoid double reply caused by deferral race · d76d1815
      Authored by J. Bruce Fields
      Commit d29068c4 "sunrpc: Simplify cache_defer_req and related
      functions." asserted that cache_check() could determine success or
      failure of cache_defer_req() by checking the CACHE_PENDING bit.
      
      This isn't quite right.
      
      We need to know whether cache_defer_req() created a deferred request,
      in which case sending an rpc reply has become the responsibility of the
      deferred request, and it is important that we not send our own reply,
      resulting in two different replies to the same request.
      
      And the CACHE_PENDING bit doesn't tell us that; we could have
      successfully created a deferred request at the same time as another
      thread cleared the CACHE_PENDING bit.
      
      So, partially revert that commit, to ensure that cache_check() returns
      -EAGAIN if and only if a deferred request has been created.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • SUNRPC: Remove more code when NFSD_DEPRECATED is not configured · bdd5f05d
      Authored by J. Bruce Fields
      Signed-off-by: NeilBrown <neilb@suse.de>
      [bfields@redhat.com: moved svcauth_unix_purge outside ifdef's.]
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • svcrpc: modifying valid sunrpc cache entries is racy · 31f7aa65
      Authored by J. Bruce Fields
      Once a sunrpc cache entry is VALID, we should be replacing it (and
      allowing any concurrent users to destroy it on last put) instead of
      trying to update it in place.
      
      Otherwise someone referencing the ip_map we're modifying here could try
      to use the m_client just as we're putting the last reference.
      
      The bug should only be seen by users of the legacy nfsd interfaces.
      
      (Thanks to Neil for the suggestion to use sunrpc_invalidate.)
      Reviewed-by: NeilBrown <neilb@suse.de>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • kernel panic when mount NFSv4 · beb0f0a9
      Authored by Trond Myklebust
      On Tue, 2010-12-14 at 16:58 +0800, Mi Jinlong wrote:
      > Hi,
      >
      > When testing NFSv4 at RHEL6 with kernel 2.6.32, I got a kernel panic
      > at NFS client's __rpc_create_common function.
      >
      > The panic place is:
      >   rpc_mkpipe
      >     __rpc_lookup_create()          <=== find pipefile *idmap*
      >     __rpc_mkpipe()                 <=== pipefile is *idmap*
      >       __rpc_create_common()
      >        ******  BUG_ON(!d_unhashed(dentry)); ******    *panic*
      >
      > It means that the dentry's d_flags have be set DCACHE_UNHASHED,
      > but it should not be set here.
      >
      > Is someone known this bug? or give me some idea?
      >
      > A reproduce program is append, but it can't reproduce the bug every time.
      > the export is: "/nfsroot       *(rw,no_root_squash,fsid=0,insecure)"
      >
      > And the panic message is append.
      >
      > ============================================================================
      > #!/bin/sh
      >
      > LOOPTOTAL=768
      > LOOPCOUNT=0
      > ret=0
      >
      > while [ $LOOPCOUNT -ne $LOOPTOTAL ]
      > do
      > 	((LOOPCOUNT += 1))
      > 	service nfs restart
      > 	/usr/sbin/rpc.idmapd
      > 	mount -t nfs4 127.0.0.1:/ /mnt|| return 1;
      > 	ls -l /var/lib/nfs/rpc_pipefs/nfs/*/
      > 	umount /mnt
      > 	echo $LOOPCOUNT
      > done
      >
      > ===============================================================================
      > Code: af 60 01 00 00 89 fa 89 f0 e8 64 cf 89 f0 e8 5c 7c 64 cf 31 c0 8b 5c 24 10 8b
      > 74 24 14 8b 7c 24 18 8b 6c 24 1c 83 c4 20 c3 <0f> 0b eb fc 8b 46 28 c7 44 24 08 20
      > de ee f0 c7 44 24 04 56 ea
      > EIP:[<f0ee92ea>] __rpc_create_common+0x8a/0xc0 [sunrpc] SS:ESP 0068:eccb5d28
      > ---[ end trace 8f5606cd08928ed2]---
      > Kernel panic - not syncing: Fatal exception
      > Pid:7131, comm: mount.nfs4 Tainted: G     D   -------------------2.6.32 #1
      > Call Trace:
      >  [<c080ad18>] ? panic+0x42/0xed
      >  [<c080e42c>] ? oops_end+0xbc/0xd0
      >  [<c040b090>] ? do_invalid_op+0x0/0x90
      >  [<c040b10f>] ? do_invalid_op+0x7f/0x90
      >  [<f0ee92ea>] ? __rpc_create_common+0x8a/0xc0[sunrpc]
      >  [<f0edc433>] ? rpc_free_task+0x33/0x70[sunrpc]
      >  [<f0ed6508>] ? rpc_call_sync+0x48/0x60[sunrpc]
      >  [<f0ed656e>] ? rpc_ping+0x4e/0x60[sunrpc]
      >  [<f0ed6eaf>] ? rpc_create+0x38f/0x4f0[sunrpc]
      >  [<c080d80b>] ? error_code+0x73/0x78
      >  [<f0ee92ea>] ? __rpc_create_common+0x8a/0xc0[sunrpc]
      >  [<c0532bda>] ? d_lookup+0x2a/0x40
      >  [<f0ee94b1>] ? rpc_mkpipe+0x111/0x1b0[sunrpc]
      >  [<f10a59f4>] ? nfs_create_rpc_client+0xb4/0xf0[nfs]
      >  [<f10d6c6d>] ? nfs_fscache_get_client_cookie+0x1d/0x50[nfs]
      >  [<f10d3fcb>] ? nfs_idmap_new+0x7b/0x140[nfs]
      >  [<c05e76aa>] ? strlcpy+0x3a/0x60
      >  [<f10a60ca>] ? nfs4_set_client+0xea/0x2b0[nfs]
      >  [<f10a6d0c>] ? nfs4_create_server+0xac/0x1b0[nfs]
      >  [<c04f1400>] ? krealloc+0x40/0x50
      >  [<f10b0e8b>] ? nfs4_remote_get_sb+0x6b/0x250[nfs]
      >  [<c04f14ec>] ? kstrdup+0x3c/0x60
      >  [<c0520739>] ? vfs_kern_mount+0x69/0x170
      >  [<f10b1a3c>] ? nfs_do_root_mount+0x6c/0xa0[nfs]
      >  [<f10b1b47>] ? nfs4_try_mount+0x37/0xa0[nfs]
      >  [<f10afe6d>] ? nfs4_validate_text_mount_data+0x7d/0xf0[nfs]
      >  [<f10b1c42>] ? nfs4_get_sb+0x92/0x2f0
      >  [<c0520739>] ? vfs_kern_mount+0x69/0x170
      >  [<c05366d2>] ? get_fs_type+0x32/0xb0
      >  [<c052089f>] ? do_kern_mount+0x3f/0xe0
      >  [<c053954f>] ? do_mount+0x2ef/0x740
      >  [<c0537740>] ? copy_mount_options+0xb0/0x120
      >  [<c0539a0e>] ? sys_mount+0x6e/0xa0
      
      Hi,
      
      Does the following patch fix the problem?
      
      Cheers
        Trond
      
      --------------------------
      SUNRPC: Fix a BUG in __rpc_create_common
      
      From: Trond Myklebust <Trond.Myklebust@netapp.com>
      
      Mi Jinlong reports:
      
      When testing NFSv4 at RHEL6 with kernel 2.6.32, I got a kernel panic
      at NFS client's __rpc_create_common function.
      
      The panic place is:
        rpc_mkpipe
            __rpc_lookup_create()          <=== find pipefile *idmap*
            __rpc_mkpipe()                 <=== pipefile is *idmap*
              __rpc_create_common()
               ******  BUG_ON(!d_unhashed(dentry)); ****** *panic*
      
      The test is wrong: we can find ourselves with a hashed negative dentry here
      if the idmapper tried to look up the file before we got round to creating
      it.
      
      Just replace the BUG_ON() with a d_drop(dentry).
      Reported-by: Mi Jinlong <mijinlong@cn.fujitsu.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
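
      The shape of the fix, with userspace stubs standing in for the dcache
      helpers so that the snippet compiles on its own:

      #include <assert.h>

      struct dentry { int hashed; };

      static int d_unhashed(struct dentry *d) { return !d->hashed; }
      static void d_drop(struct dentry *d) { d->hashed = 0; }

      static void rpc_create_common_stub(struct dentry *dentry)
      {
          /* before: BUG_ON(!d_unhashed(dentry)); -- panics on the hashed
           * negative dentry left behind by a racing idmapper lookup */
          d_drop(dentry);    /* after: just unhash it and carry on */
      }

      int main(void)
      {
          struct dentry d = { .hashed = 1 };    /* the racy case */

          rpc_create_common_stub(&d);
          assert(d_unhashed(&d));
          return 0;
      }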
  9. 22 Dec 2010 (1 commit)
  10. 18 Dec 2010 (4 commits)
  11. 17 Dec 2010 (4 commits)