- March 18, 2014 (1 commit)

Committed by Trond Myklebust
When the server is unavailable due to a networking error, etc., we want the RPC client to respect the timeout delays when attempting to reconnect.

Reported-by: Neil Brown <neilb@suse.de>
Fixes: 561ec160 (SUNRPC: call_connect_status should recheck bind..)
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
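To illustrate the idea, here is a minimal userspace model of the intended behavior; none of this is SUNRPC code, and the struct, function names, and delay values are invented. The point is only that a failed connect waits out a capped, growing delay instead of retrying immediately:

    #include <stdio.h>
    #include <unistd.h>

    struct transport {
        unsigned int reconnect_delay;  /* seconds before the next attempt */
        unsigned int max_delay;        /* cap on the backoff */
    };

    static int attempts;

    /* Stand-in for a connect attempt; fails twice, then succeeds. */
    static int try_connect(struct transport *xprt)
    {
        return ++attempts >= 3 ? 0 : -1;
    }

    static void reconnect(struct transport *xprt)
    {
        while (try_connect(xprt) < 0) {
            printf("connect failed; waiting %u s before retrying\n",
                   xprt->reconnect_delay);
            sleep(xprt->reconnect_delay);
            xprt->reconnect_delay *= 2;                   /* back off... */
            if (xprt->reconnect_delay > xprt->max_delay)
                xprt->reconnect_delay = xprt->max_delay;  /* ...up to a cap */
        }
    }

    int main(void)
    {
        struct transport xprt = { .reconnect_delay = 1, .max_delay = 300 };
        reconnect(&xprt);
        printf("connected after %d attempts\n", attempts);
        return 0;
    }

The real client keeps an equivalent per-transport reconnect delay; the bug was that the reconnect path could bypass it.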
- January 6, 2014 (1 commit)

Committed by Weston Andros Adamson
When a task enters call_refreshresult with status 0 from call_refresh and !rpcauth_uptodatecred(task), it enters call_refresh again with no rate-limiting or max number of retries. Instead of trying forever, make use of the retry path that other errors use.

This only seems to be possible when the crrefresh callback is gss_refresh_null, which only happens when destroying the context.

To reproduce:
1) mount with sec=krb5 (or sec=sys with krb5 negotiated for non FSID specific operations).
2) reboot - the client will be stuck and will need to be hard rebooted

BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:2:46]
Modules linked in: rpcsec_gss_krb5 nfsv4 nfs fscache ppdev crc32c_intel aesni_intel aes_x86_64 glue_helper lrw gf128mul ablk_helper cryptd serio_raw i2c_piix4 i2c_core e1000 parport_pc parport shpchp nfsd auth_rpcgss oid_registry exportfs nfs_acl lockd sunrpc autofs4 mptspi scsi_transport_spi mptscsih mptbase ata_generic floppy
irq event stamp: 195724
hardirqs last enabled at (195723): [<ffffffff814a925c>] restore_args+0x0/0x30
hardirqs last disabled at (195724): [<ffffffff814b0a6a>] apic_timer_interrupt+0x6a/0x80
softirqs last enabled at (195722): [<ffffffff8103f583>] __do_softirq+0x1df/0x276
softirqs last disabled at (195717): [<ffffffff8103f852>] irq_exit+0x53/0x9a
CPU: 0 PID: 46 Comm: kworker/0:2 Not tainted 3.13.0-rc3-branch-dros_testing+ #4
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/31/2013
Workqueue: rpciod rpc_async_schedule [sunrpc]
task: ffff8800799c4260 ti: ffff880079002000 task.ti: ffff880079002000
RIP: 0010:[<ffffffffa0064fd4>] [<ffffffffa0064fd4>] __rpc_execute+0x8a/0x362 [sunrpc]
RSP: 0018:ffff880079003d18 EFLAGS: 00000246
RAX: 0000000000000005 RBX: 0000000000000007 RCX: 0000000000000007
RDX: 0000000000000007 RSI: ffff88007aecbae8 RDI: ffff8800783d8900
RBP: ffff880079003d78 R08: ffff88006e30e9f8 R09: ffffffffa005a3d7
R10: ffff88006e30e7b0 R11: ffff8800783d8900 R12: ffffffffa006675e
R13: ffff880079003ce8 R14: ffff88006e30e7b0 R15: ffff8800783d8900
FS:  0000000000000000(0000) GS:ffff88007f200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f3072333000 CR3: 0000000001a0b000 CR4: 00000000001407f0
Stack:
 ffff880079003d98 0000000000000246 0000000000000000 ffff88007a9a4830
 ffff880000000000 ffffffff81073f47 ffff88007f212b00 ffff8800799c4260
 ffff8800783d8988 ffff88007f212b00 ffffe8ffff604800 0000000000000000
Call Trace:
 [<ffffffff81073f47>] ? trace_hardirqs_on_caller+0x145/0x1a1
 [<ffffffffa00652d3>] rpc_async_schedule+0x27/0x32 [sunrpc]
 [<ffffffff81052974>] process_one_work+0x211/0x3a5
 [<ffffffff810528d5>] ? process_one_work+0x172/0x3a5
 [<ffffffff81052eeb>] worker_thread+0x134/0x202
 [<ffffffff81052db7>] ? rescuer_thread+0x280/0x280
 [<ffffffff81052db7>] ? rescuer_thread+0x280/0x280
 [<ffffffff810584a0>] kthread+0xc9/0xd1
 [<ffffffff810583d7>] ? __kthread_parkme+0x61/0x61
 [<ffffffff814afd6c>] ret_from_fork+0x7c/0xb0
 [<ffffffff810583d7>] ? __kthread_parkme+0x61/0x61
Code: e8 87 63 fd e0 c6 05 10 dd 01 00 01 48 8b 43 70 4c 8d 6b 70 45 31 e4 a8 02 0f 85 d5 02 00 00 4c 8b 7b 48 48 c7 43 48 00 00 00 00 <4c> 8b 4b 50 4d 85 ff 75 0c 4d 85 c9 4d 89 cf 0f 84 32 01 00 00

And the output of "rpcdebug -m rpc -s all":

RPC: 61 call_refresh (status 0)
RPC: 61 call_refresh (status 0)
RPC: 61 refreshing RPCSEC_GSS cred ffff88007a413cf0
RPC: 61 refreshing RPCSEC_GSS cred ffff88007a413cf0
RPC: 61 call_refreshresult (status 0)
RPC: 61 refreshing RPCSEC_GSS cred ffff88007a413cf0
RPC: 61 call_refreshresult (status 0)
RPC: 61 refreshing RPCSEC_GSS cred ffff88007a413cf0
RPC: 61 call_refresh (status 0)
RPC: 61 call_refreshresult (status 0)
RPC: 61 call_refresh (status 0)
RPC: 61 call_refresh (status 0)
RPC: 61 refreshing RPCSEC_GSS cred ffff88007a413cf0
RPC: 61 call_refreshresult (status 0)
RPC: 61 call_refresh (status 0)
RPC: 61 refreshing RPCSEC_GSS cred ffff88007a413cf0
RPC: 61 call_refresh (status 0)
RPC: 61 refreshing RPCSEC_GSS cred ffff88007a413cf0
RPC: 61 refreshing RPCSEC_GSS cred ffff88007a413cf0
RPC: 61 call_refreshresult (status 0)
RPC: 61 call_refresh (status 0)
RPC: 61 call_refresh (status 0)
RPC: 61 call_refresh (status 0)
RPC: 61 call_refresh (status 0)
RPC: 61 call_refreshresult (status 0)
RPC: 61 refreshing RPCSEC_GSS cred ffff88007a413cf0

Signed-off-by: Weston Andros Adamson <dros@netapp.com>
Cc: stable@vger.kernel.org # 2.6.37+
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
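The shape of the fix can be modeled in a few lines of userspace C. Everything below is illustrative (the real fix reuses the state machine's existing bounded retry path and a per-task retry counter); the point is simply that refresh attempts are bounded and end in an error instead of looping forever:

    #include <stdio.h>
    #include <errno.h>

    #define MAX_REFRESH_RETRIES 3

    /* Pretend refresh that never produces an up-to-date cred,
     * mimicking gss_refresh_null during context destruction. */
    static int refresh_cred(void)   { return 0; }  /* "succeeds"... */
    static int cred_uptodate(void)  { return 0; }  /* ...but cred stays stale */

    static int call_refresh_bounded(void)
    {
        int tries;

        for (tries = 0; tries < MAX_REFRESH_RETRIES; tries++) {
            refresh_cred();
            if (cred_uptodate())
                return 0;
            /* a real implementation would also rate-limit here */
        }
        return -EACCES;  /* give up instead of spinning forever */
    }

    int main(void)
    {
        printf("refresh result: %d\n", call_refresh_bounded());
        return 0;
    }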
- January 1, 2014 (1 commit)

Committed by Trond Myklebust
Ensure that call_bind_status, call_connect_status, call_transmit_status and call_status are all capable of handling ECONNABORTED and EHOSTUNREACH.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
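The pattern at issue, sketched in C with invented names (the real routines are the call_*_status steps of the RPC state machine): transient transport errors must be dispatched to a retry action rather than falling through to the default, fatal branch.

    #include <errno.h>
    #include <stdio.h>

    enum next_action { RETRY_CONNECT, RETRY_LATER, FAIL };

    static enum next_action classify_status(int status)
    {
        switch (status) {
        case -ECONNREFUSED:
        case -ECONNRESET:
        case -ECONNABORTED:     /* newly handled */
        case -ENETUNREACH:
        case -EHOSTUNREACH:     /* newly handled */
            return RETRY_CONNECT;   /* re-drive bind + connect */
        case -EAGAIN:
            return RETRY_LATER;     /* back off, then retry */
        default:
            return FAIL;            /* genuinely fatal */
        }
    }

    int main(void)
    {
        printf("%d\n", classify_status(-ECONNABORTED) == RETRY_CONNECT);
        return 0;
    }

Without the extra cases, an errno such as -ECONNABORTED hits the default branch and kills the task.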
- November 13, 2013 (1 commit)

Committed by Trond Myklebust
In cases where an rpc client has a parent hierarchy, rpc_free_client may end up calling rpc_release_client() on the parent, thus recursing back into rpc_free_client. If the hierarchy is deep enough, the stack can simply overflow. The fix is to have rpc_release_client() loop so that it can take care of the parent rpc client hierarchy without needing to recurse.

Reported-by: Jeff Layton <jlayton@redhat.com>
Reported-by: Weston Andros Adamson <dros@netapp.com>
Reported-by: Bruce Fields <bfields@fieldses.org>
Link: http://lkml.kernel.org/r/2C73011F-0939-434C-9E4D-13A1EB1403D7@netapp.com
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
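A minimal userspace model of the recursion-to-iteration conversion (the struct and function names are invented; the real code also deals with locking and RCU):

    #include <stdlib.h>

    struct clnt {
        struct clnt *parent;
        int refcount;
    };

    /* Dropping the last reference frees the client and then loops to
     * drop the reference it held on its parent; a deep parent chain
     * therefore costs no stack, unlike a recursive release. */
    static void release_client(struct clnt *c)
    {
        while (c) {
            struct clnt *parent = c->parent;

            if (--c->refcount > 0)
                break;              /* still referenced elsewhere */
            free(c);
            c = parent;             /* iterate instead of recursing */
        }
    }

    int main(void)
    {
        struct clnt *parent = calloc(1, sizeof(*parent));
        struct clnt *child  = calloc(1, sizeof(*child));

        parent->refcount = 1;       /* held only by the child */
        child->parent = parent;
        child->refcount = 1;

        release_client(child);      /* frees child, then parent */
        return 0;
    }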
- October 30, 2013 (1 commit)

Committed by Wei Yongjun
Remove duplicated include.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- October 29, 2013 (3 commits)

Committed by Trond Myklebust
rpc_clnt_set_transport should use rcu_dereference_protected(), as it is only safe to be called with the rpc_clnt::cl_lock held.

Cc: Chuck Lever <Chuck.Lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
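For reference, the idiom looks roughly like this. It is a sketch, not the exact kernel function body: rcu_dereference_protected(), lockdep_is_held() and rcu_assign_pointer() are the real kernel primitives, while the function shape around them is simplified.

    /* Kernel-style sketch; cl_lock is the update-side lock for cl_xprt,
     * so no RCU read-side barrier is needed and lockdep checks the claim. */
    static struct rpc_xprt *clnt_swap_transport(struct rpc_clnt *clnt,
                                                struct rpc_xprt *xprt)
    {
        struct rpc_xprt *old;

        spin_lock(&clnt->cl_lock);
        old = rcu_dereference_protected(clnt->cl_xprt,
                        lockdep_is_held(&clnt->cl_lock));
        rcu_assign_pointer(clnt->cl_xprt, xprt);
        spin_unlock(&clnt->cl_lock);
        return old;   /* caller drops it after a grace period */
    }

The _protected variant both documents the locking assumption and lets lockdep verify it, whereas plain rcu_dereference() would imply a read-side critical section that this caller does not use.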
Committed by Trond Myklebust
Add an RPC client API to redirect an rpc_clnt's transport from a source server to a destination server during a migration event.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
[ cel: forward ported to 3.12 ]
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Chuck Lever
The rpc_client_register() helper was added in commit e73f4cc0, "SUNRPC: split client creation routine into setup and registration" (Mon Jun 24 11:52:52 2013). In a subsequent patch, I'd like to invoke rpc_client_register() from a context where a struct rpc_create_args is not available.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- October 2, 2013 (4 commits)

Committed by Trond Myklebust
Currently, we go directly to call_transmit, which sends us to call_status on error. If we know that the connect attempt failed, we should rather jump straight back to call_bind and call_connect. Ditto for EAGAIN, except do not delay.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Trond Myklebust
A retransmit should only be counted when an RPC call is successfully transmitted to the server a second time.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- September 5, 2013 (1 commit)

Committed by Trond Myklebust
Add an identifier in order to aid debugging.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- September 4, 2013 (2 commits)

Committed by Andy Adamson
Most of the time, an error from the credops crvalidate function means the server has sent us a garbage verifier. The gss_validate function is the exception: there is an -EACCES case if the user's GSS context on the client has expired.

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Andy Adamson
The NFS layer needs to know when a key has expired. This change also returns -EKEYEXPIRED to the application, so the informative "Key has expired" error message is displayed. The user then knows that credential renewal is required.

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- September 3, 2013 (1 commit)

Committed by Trond Myklebust
Ensure that we set rpc_clnt->cl_parent before calling rpc_client_register, so that rpcauth_create can find any existing RPCSEC_GSS caches for this transport.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- September 1, 2013 (2 commits)

Committed by Trond Myklebust
It is now redundant.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- August 30, 2013 (5 commits)

Committed by Trond Myklebust
The current system requires everyone to set up notifiers, manage directory locking, etc. What we really want to do is have the rpc_client create its directory, and then create all the entries.

This patch will allow the RPCSEC_GSS and NFS code to register all the objects that they want to have appear in the directory, and then have the sunrpc code call them back to actually create/destroy their pipefs dentries when the rpc_client creates/destroys the parent.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
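A sketch of the registration model being described, with invented type and function names (the eventual kernel API differs in detail): consumers register a small ops table with the client, and sunrpc walks the list when the client's pipefs directory is created or destroyed.

    /* Invented names; a model of callback-based registration. */
    struct pipe_dir_object;

    struct pipe_dir_object_ops {
        int  (*create)(struct pipe_dir_object *pdo, const char *parent_dir);
        void (*destroy)(struct pipe_dir_object *pdo);
    };

    struct pipe_dir_object {
        struct pipe_dir_object *next;            /* client's object list */
        const struct pipe_dir_object_ops *ops;
    };

    /* Called by the client when its pipefs directory appears: every
     * registered object gets a chance to create its own entries. */
    static void pipe_dir_populate(struct pipe_dir_object *head,
                                  const char *parent_dir)
    {
        struct pipe_dir_object *pdo;

        for (pdo = head; pdo; pdo = pdo->next)
            pdo->ops->create(pdo, parent_dir);
    }

The design choice is inversion of control: instead of every consumer watching for pipefs mount events itself, the client owns the directory lifetime and drives its registered objects.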
Committed by Trond Myklebust
The clnt->cl_principal is being used exclusively to store the service target name for RPCSEC_GSS/krb5 callbacks. Replace it with something that is stored only in the RPCSEC_GSS-specific code.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Trond Myklebust
The directory name is _always_ clnt->cl_program->pipe_dir_name.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Trond Myklebust
It just duplicates the cl_program->name, and is not used in any fast paths where the extra dereference will cause a performance hit.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- August 8, 2013 (1 commit)

Committed by Trond Myklebust
If rpcbind causes our connection to the AF_LOCAL socket to close after we've registered a service, then we want to be careful about reconnecting, since the mount namespace may have changed. By simply refusing to reconnect the AF_LOCAL socket in the case of unregister, we avoid the need to somehow save the mount namespace. While this may lead to some services not unregistering properly, it should be safe.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Nix <nix@esperi.org.uk>
Cc: Jeff Layton <jlayton@redhat.com>
Cc: stable@vger.kernel.org # 3.9.x
- July 15, 2013 (1 commit)

Committed by Trond Myklebust
Fix the error pathway if rpcauth_create() fails.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- July 14, 2013 (1 commit)

Committed by Al Viro
just pass the name

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
- July 11, 2013 (1 commit)

Committed by Trond Myklebust
Commit 38481605 (SUNRPC: fix races on PipeFS MOUNT notifications) introduces a regression when we call rpc_setup_pipedir() with RPCSEC_GSS as the auth flavour. By calling rpcauth_create() while holding the sn->pipefs_sb_lock, we end up deadlocking in gss_pipes_dentries_create_net(). The fix is to register the client and release the mutex before calling rpcauth_create().

Reported-by: Weston Andros Adamson <dros@netapp.com>
Tested-by: Weston Andros Adamson <dros@netapp.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Cc: <stable@vger.kernel.org> # 38481605: SUNRPC: fix races on PipeFS MOUNT
Cc: <stable@vger.kernel.org> # e73f4cc0: SUNRPC: split client creation
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
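The ordering change can be sketched like this (simplified control flow with invented helper names; the real paths are rpc_new_client() and rpc_setup_pipedir()): registration happens under pipefs_sb_lock, but the auth flavour is created only after the lock is dropped, since RPCSEC_GSS setup itself needs to create pipefs entries.

    /* Sketch: the deadlock was the auth-creation step running
     * inside the locked region below. */
    static int client_setup(struct sunrpc_net *sn, struct rpc_clnt *clnt)
    {
        int err;

        mutex_lock(&sn->pipefs_sb_lock);
        err = client_register_locked(sn, clnt);  /* needs the lock */
        mutex_unlock(&sn->pipefs_sb_lock);
        if (err)
            return err;

        /* Now safe: GSS pipe creation may take pipefs state itself. */
        return auth_create_for(clnt);
    }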
- June 29, 2013 (4 commits)

Committed by Stanislav Kinsbursky
There is no need to create pipes for a dying client, so just skip it.

Note: we can safely dereference the client structure, because the notification caller is holding sn->pipefs_sb_lock.

Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
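The guard used to detect a dying client is the standard inc-not-zero refcount idiom. A self-contained C11 model (the kernel uses atomic_inc_not_zero() on cl_count; the function name here is invented):

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Take a reference only if the count is still nonzero,
     * i.e. the object is not already being torn down. */
    static bool get_unless_zero(atomic_int *count)
    {
        int old = atomic_load(count);

        while (old != 0) {
            if (atomic_compare_exchange_weak(count, &old, old + 1))
                return true;    /* reference taken */
            /* 'old' was reloaded by the failed exchange; retry */
        }
        return false;           /* dying object: caller skips it */
    }

    int main(void)
    {
        atomic_int live = 1, dying = 0;
        return !(get_unless_zero(&live) && !get_unless_zero(&dying));
    }

If the count has already hit zero, the notification handler skips the client instead of resurrecting a half-destroyed object.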
Committed by Stanislav Kinsbursky
This patch moves all "registration" code into the new rpc_client_register() helper, which will be used later in the series to synchronize against PipeFS MOUNT/UMOUNT events.

Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Stanislav Kinsbursky
The following race can occur between PipeFS umount and RPC client destruction:

CPU#0                                CPU#1
-----------------------------        -----------------------------
rpc_kill_sb
sn->pipefs_sb = NULL                 rpc_release_client
(UMOUNT_EVENT)                       rpc_free_auth
rpc_pipefs_event
rpc_get_client_for_event
!atomic_inc_not_zero(cl_count)
<skip the client>
                                     atomic_inc(cl_count)
                                     rpc_free_client
                                     rpc_clnt_remove_pipedir
                                     <skip client dir removing>

To fix this, this patch does the following:

1) Calls the RPC_PIPEFS_UMOUNT notification with sn->pipefs_sb_lock held.
2) Removes the SUNRPC client from the list AFTER destroying its pipes.
3) Doesn't hold the RPC client on notification: if the client is in the list, it can't be destroyed while sn->pipefs_sb_lock is held by the notification caller.

Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Stanislav Kinsbursky
Below is a race in which an RPC client can be created without its PipeFS dentries:

CPU#0                                CPU#1
-----------------------------        -----------------------------
rpc_new_client                       rpc_fill_super
rpc_setup_pipedir
                                     mutex_lock(&sn->pipefs_sb_lock)
rpc_get_sb_net == NULL
(no per-net PipeFS superblock)
                                     sn->pipefs_sb = sb;
                                     notifier_call_chain(MOUNT)
                                     (client is not in the list)
rpc_register_client
(client without pipes dentries)

To fix this, the patch:

1) makes the PipeFS mount notification call with pipefs_sb_lock held.
2) releases pipefs_sb_lock on new SUNRPC client creation only after registration.

Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- May 4, 2013 (1 commit)

Committed by Trond Myklebust
Just convert those messages to dprintk()s so that they can be used when debugging.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- April 26, 2013 (2 commits)

Committed by Simo Sorce
This patch implements a sunrpc client that uses the services of the gssproxy userspace daemon. In particular, it allows the kernel to perform calls into user space using an RPC call instead of custom hand-coded upcall/downcall messages.

Currently only accept_sec_context is implemented, as that is all that is needed for the server case. File server modules like NFS and CIFS can use full gssapi services this way, once init_sec_context is also implemented.

For the NFS server case, this code allows lifting the max 2k limit on krb5 tickets. This limit prevents legitimate kerberos deployments from using krb5 authentication with the Linux NFS server, as they normally have tickets that are many kilobytes large.

It will also allow lifting the limitation on the size of the credential set (uid,gid,gids) passed down from user space for users that have very many groups associated. Currently the downcall mechanism used by rpc.svcgssd is limited to around 2k secondary groups of the 65k allowed by kernel structures.

Signed-off-by: Simo Sorce <simo@redhat.com>
[bfields: containerization, concurrent upcalls, misc. fixes and cleanup]
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Committed by J. Bruce Fields
In the gss-proxy case we don't want to have to reconnect at random--we want to connect only on gss-proxy startup, when we can steal gss-proxy's context to do the connect in the right namespace. So, provide a flag that allows the rpc_create caller to turn off the idle timeout.

Signed-off-by: J. Bruce Fields <bfields@redhat.com>
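As a sketch of how such a creation flag is typically consumed (the bit value, struct, and function here are illustrative, not the kernel's definitions):

    /* Illustrative flag bit and check. */
    #define CLNT_CREATE_NO_IDLE_TIMEOUT (1UL << 14)

    struct create_args {
        unsigned long flags;
        unsigned int idle_timeout;  /* seconds; 0 = never disconnect */
    };

    static void apply_idle_policy(struct create_args *args)
    {
        /* gss-proxy asks for a persistent connection so the transport
         * is never torn down and re-established in the wrong namespace. */
        if (args->flags & CLNT_CREATE_NO_IDLE_TIMEOUT)
            args->idle_timeout = 0;
    }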
- April 15, 2013 (2 commits)

Committed by Trond Myklebust
This is mainly for use by NFSv4.1, where the session negotiation ultimately wants to decide how many RPC slots we can fill.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Committed by Trond Myklebust
This patch ensures that we throttle new RPC requests if there are requests already waiting in the xprt->backlog queue. The reason for doing this is to fix livelock issues that can occur when an existing (high-priority) task is waiting in the backlog queue, gets woken up by xprt_free_slot(), but a new task then steals the slot.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
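The throttling rule reduces to one check, modeled here with invented types and names: a free slot may be taken only when no one is already queued, so waiters are served in order.

    #include <stdbool.h>

    struct slot_table {
        int free_slots;
        int backlog_len;   /* tasks already waiting for a slot */
    };

    /* A new task may take a slot only if nobody is already queued;
     * otherwise it joins the back of the backlog. */
    static bool try_take_slot(struct slot_table *tbl)
    {
        if (tbl->backlog_len > 0 || tbl->free_slots == 0) {
            tbl->backlog_len++;   /* wait behind existing waiters */
            return false;
        }
        tbl->free_slots--;
        return true;
    }

This trades a little latency for fairness: the newly arriving task parks behind the existing waiter instead of racing it for the slot that xprt_free_slot() just released.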
- April 6, 2013 (2 commits)

Committed by Trond Myklebust
If the call to rpciod_up() fails, we currently leak a reference to the struct rpc_xprt. As part of the fix, we also remove the redundant check for xprt != NULL; this is already taken care of by the callers.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
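The corrected error path follows the usual take/put discipline. A sketch (the allocation helper is invented; the control flow is simplified relative to the real rpc_new_client()):

    /* Any reference taken before a failing step must be dropped
     * on that step's error path. */
    static struct rpc_clnt *new_client(struct rpc_xprt *xprt)
    {
        struct rpc_clnt *clnt;

        xprt_get(xprt);              /* reference we now own */
        if (rpciod_up() != 0)
            goto out_put_xprt;       /* this put was the missing one */

        clnt = clnt_alloc(xprt);     /* invented helper */
        if (!clnt)
            goto out_put_xprt;
        return clnt;

    out_put_xprt:
        xprt_put(xprt);              /* balance the xprt_get() */
        return NULL;
    }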
Committed by Chuck Lever
While testing error cases where rpc_new_client() fails, I saw some oopses. If rpc_new_client() fails, it already invokes xprt_put(). Thus __rpc_clone_client() does not need to invoke it again.

Introduced by commit 1b63a751 "SUNRPC: Refactor rpc_clone_client()" (Fri Sep 14, 2012).

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Cc: stable@vger.kernel.org [>=3.7]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- March 26, 2013 (1 commit)

Committed by Trond Myklebust
In the case of a SOFTCONN rpc task, we really want to ensure that it reports errors like ENETUNREACH back to the caller. Currently, only some of these errors are being reported back (connect errors are not), and they are being converted by the RPC layer into EIO.

Reported-by: Jan Engelhardt <jengelh@inai.de>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
- March 3, 2013 (1 commit)

Committed by Trond Myklebust
Reported-by: Weston Andros Adamson <dros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>