- 04 July 2013, 1 commit
-
-
Committed by Kees Cook
Calling kthread_run with a single name parameter causes it to be handled as a format string. Many callers are passing potentially dynamic string content, so use "%s" in those cases to avoid any potential accidents.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
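As an illustration of the pattern described above (worker_fn, start_worker and name are hypothetical; only the shape of the kthread_run() call is taken from the text):

    #include <linux/kthread.h>

    /* Hypothetical thread function; 'name' stands for dynamic string content. */
    static int worker_fn(void *data)
    {
        return 0;
    }

    static void start_worker(const char *name)
    {
        /*
         * Risky: if 'name' contains a '%', it is interpreted as a conversion
         * specifier, because the name argument of kthread_run() is a format
         * string:
         *
         *     kthread_run(worker_fn, NULL, name);
         *
         * Safe: pass the dynamic string through a fixed "%s" format instead.
         */
        kthread_run(worker_fn, NULL, "%s", name);
    }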
-
- 29 June 2013, 3 commits
-
-
Committed by Jeff Layton
Currently, the hashing that the locking code uses to add these values to the blocked_hash is simply calculated using the fl_owner field. That's valid in most cases except for server-side lockd, which validates the owner of a lock based on both fl_owner and fl_pid. In the case where you have a small number of NFS clients doing a lot of locking between different processes, you could end up with all the blocked requests sitting in a very small number of hash buckets.

Add a new lm_owner_key operation to the lock_manager_operations that will generate an unsigned long to use as the key in the hashtable. That function is only implemented for server-side lockd, and simply XORs the fl_owner and fl_pid.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Acked-by: J. Bruce Fields <bfields@fieldses.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
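A sketch of what such an lm_owner_key callback could look like for server-side lockd, based only on the description above; the function and structure names are illustrative:

    #include <linux/fs.h>

    /* Fold both identifying fields into a single hash key. */
    static unsigned long nlmsvc_owner_key(struct file_lock *fl)
    {
        return (unsigned long)fl->fl_owner ^ (unsigned long)fl->fl_pid;
    }

    static const struct lock_manager_operations nlmsvc_lock_operations = {
        .lm_owner_key = nlmsvc_owner_key,
        /* ... remaining lockd callbacks ... */
    };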
-
Committed by Jeff Layton
Having a global lock that protects all of this code is a clear scalability problem. Instead of doing that, move most of the code to be protected by the i_lock instead. The exceptions are the global lists that the ->fl_link sits on, and the ->fl_block list. ->fl_link is what connects these structures to the global lists, so we must ensure that we hold those locks when iterating over or updating these lists.

Furthermore, sound deadlock detection requires that we hold the blocked_list state steady while checking for loops. We also must ensure that the search and update to the list are atomic. For the checking and insertion side of the blocked_list, push the acquisition of the global lock into __posix_lock_file and ensure that checking and update of the blocked_list is done without dropping the lock in between. On the removal side, when waking up blocked lock waiters, take the global lock before walking the blocked list and dequeue the waiters from the global list prior to removal from the fl_block list.

With this, deadlock detection should be race free while we minimize excessive file_lock_lock thrashing. Finally, in order to avoid a lock inversion problem when handling /proc/locks output we must ensure that manipulations of the fl_block list are also protected by the file_lock_lock.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Jeff Layton
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 22 April 2013, 1 commit
-
-
Committed by Trond Myklebust
After a server reboot, the reclaimer thread will recover all the existing locks. For locks that are blocked, however, it will change the value of block->b_status to nlm_lck_denied_grace_period in order to signal that they need to wake up and resend the original blocking lock request. Due to a bug, however, block->b_status never gets reset after the blocked locks have been woken up, and so the process goes into an infinite loop of resends until the blocked lock is satisfied.
Reported-by: Marc Eshel <eshel@us.ibm.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@vger.kernel.org
-
- 28 February 2013, 1 commit
-
-
Committed by Sasha Levin
I'm not sure why, but the hlist for-each-entry iterators were not conceived like the list one,

    list_for_each_entry(pos, head, member)

The hlist ones were greedy and wanted an extra parameter:

    hlist_for_each_entry(tpos, pos, head, member)

Why did they need an extra pos parameter? I'm not quite sure. Not only do they not really need it, it also prevents the iterator from looking exactly like the list iterator, which is unfortunate.

Besides the semantic patch, there was some manual work required:
- Fix up the actual hlist iterators in linux/list.h
- Fix up the declaration of other iterators based on the hlist ones.
- A very small number of places were using the 'node' parameter; these were modified to use 'obj->member' instead.
- Coccinelle didn't handle the hlist_for_each_entry_safe iterator properly, so those had to be fixed up manually.

The semantic patch, which is mostly the work of Peter Senna Tschudin, is here:

    @@
    iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
    type T;
    expression a,c,d,e;
    identifier b;
    statement S;
    @@
    -T b;
    <+... when != b
    (
    hlist_for_each_entry(a, - b, c, d) S
    |
    hlist_for_each_entry_continue(a, - b, c) S
    |
    hlist_for_each_entry_from(a, - b, c) S
    |
    hlist_for_each_entry_rcu(a, - b, c, d) S
    |
    hlist_for_each_entry_rcu_bh(a, - b, c, d) S
    |
    hlist_for_each_entry_continue_rcu_bh(a, - b, c) S
    |
    for_each_busy_worker(a, c, - b, d) S
    |
    ax25_uid_for_each(a, - b, c) S
    |
    ax25_for_each(a, - b, c) S
    |
    inet_bind_bucket_for_each(a, - b, c) S
    |
    sctp_for_each_hentry(a, - b, c) S
    |
    sk_for_each(a, - b, c) S
    |
    sk_for_each_rcu(a, - b, c) S
    |
    sk_for_each_from
    -(a, b)
    +(a)
    S
    + sk_for_each_from(a) S
    |
    sk_for_each_safe(a, - b, c, d) S
    |
    sk_for_each_bound(a, - b, c) S
    |
    hlist_for_each_entry_safe(a, - b, c, d, e) S
    |
    hlist_for_each_entry_continue_rcu(a, - b, c) S
    |
    nr_neigh_for_each(a, - b, c) S
    |
    nr_neigh_for_each_safe(a, - b, c, d) S
    |
    nr_node_for_each(a, - b, c) S
    |
    nr_node_for_each_safe(a, - b, c, d) S
    |
    - for_each_gfn_sp(a, c, d, b) S
    + for_each_gfn_sp(a, c, d) S
    |
    - for_each_gfn_indirect_valid_sp(a, c, d, b) S
    + for_each_gfn_indirect_valid_sp(a, c, d) S
    |
    for_each_host(a, - b, c) S
    |
    for_each_host_safe(a, - b, c, d) S
    |
    for_each_mesh_entry(a, - b, c, d) S
    )
    ...+>

[akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
[akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foundation.org: redo intrusive kvm changes]
Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
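For a concrete before/after view of the iterator change (struct item and sum_items are made-up examples, not code from this patch):

    #include <linux/list.h>

    struct item {
        int value;
        struct hlist_node node;
    };

    static int sum_items(struct hlist_head *head)
    {
        struct item *it;
        int total = 0;

        /*
         * Before this change an extra 'struct hlist_node *pos' cursor was
         * required:  hlist_for_each_entry(it, pos, head, node)
         * Afterwards the iterator looks just like list_for_each_entry():
         */
        hlist_for_each_entry(it, head, node)
            total += it->value;

        return total;
    }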
-
- 23 February 2013, 1 commit
-
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 20 February 2013, 1 commit
-
-
Committed by Trond Myklebust
Currently, nlmclnt_lock will break out of the for(;;) loop when the reclaimer wakes up the blocking lock thread by setting nlm_lck_denied_grace_period. This causes the lock request to fail with an ENOLCK error. The intention was always to ensure that we resend the lock request after the grace period has expired.
Reported-by: Wangyuan Zhang <Wangyuan.Zhang@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@vger.kernel.org
-
- 16 February 2013, 1 commit
-
-
Committed by Tim Gardner
Even though nlmclnt_reclaim() is only one call into the stack frame, 928 bytes on the stack seems like a lot. Recode to dynamically allocate the request structure once from within the reclaimer task, then pass this pointer into nlmclnt_reclaim() for reuse on subsequent calls.

smatch analysis:
    fs/lockd/clntproc.c:620 nlmclnt_reclaim() warn: 'reqst' puts 928 bytes on stack

Also remove redundant assignment of 0 after memset.
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
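A minimal sketch of the approach described, assuming the reworked nlmclnt_reclaim() takes the preallocated request as an extra argument; reclaim_all_locks and the loop over locks are illustrative:

    #include <linux/slab.h>
    #include <linux/lockd/lockd.h>

    /*
     * One heap allocation in the reclaimer task, reused for every reclaim
     * call, instead of a ~928-byte struct nlm_rqst on the stack.
     */
    static int reclaim_all_locks(struct nlm_host *host)
    {
        struct nlm_rqst *req;
        int status = 0;

        req = kmalloc(sizeof(*req), GFP_KERNEL);
        if (req == NULL)
            return -ENOMEM;

        /* for each file_lock 'fl' that needs reclaiming:
         *     status = nlmclnt_reclaim(host, fl, req);
         */

        kfree(req);
        return status;
    }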
-
- 05 February 2013, 1 commit
-
-
Committed by Jeff Layton
These routines are used by server and client code, so having them in a separate header would be best.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 05 November 2012, 4 commits
-
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
- Convert the non-trivial ones into WARN_ON_ONCE().
- Remove the trivial refcounting BUGs.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
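Illustrative shape of the first conversion (the helper and the checked condition are hypothetical, not specific lockd assertions):

    #include <linux/bug.h>

    static void check_host_refcount(int refcount)
    {
        /* Before: BUG_ON(refcount == 0); -- kills the machine outright. */
        WARN_ON_ONCE(refcount == 0);    /* After: warn once and carry on. */
    }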
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
- Offset bound checks are done in the NFS client code.
- So are filehandle size checks.
- The cookie length is a constant.
- The utsname()->nodename is already bounded.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 24 October 2012, 2 commits
-
-
Committed by Trond Myklebust
The current code is clearing it in all cases _except_ when zero.
Reported-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@vger.kernel.org
-
Committed by Trond Myklebust
Commit e9406db2 (lockd: per-net NSM client creation and destruction helpers introduced) contains a nasty race on initialisation of the per-net NSM client because it doesn't check whether or not the client is set after grabbing the nsm_create_mutex.
Reported-by: Nix <nix@esperi.org.uk>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@vger.kernel.org
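A sketch of the race-free pattern implied here: re-check the per-net pointer after taking nsm_create_mutex, so two concurrent callers cannot both create a client. The container type, field names and injected create callback are illustrative rather than the actual lockd code:

    #include <linux/err.h>
    #include <linux/mutex.h>
    #include <linux/sunrpc/clnt.h>

    struct nsm_net_data {                /* stand-in for the per-net data */
        struct rpc_clnt *nsm_clnt;
        int              nsm_users;
    };

    static DEFINE_MUTEX(nsm_create_mutex);

    static struct rpc_clnt *nsm_client_get(struct nsm_net_data *ln,
                                           struct rpc_clnt *(*create)(void))
    {
        struct rpc_clnt *clnt;

        mutex_lock(&nsm_create_mutex);
        clnt = ln->nsm_clnt;
        if (clnt == NULL) {              /* the re-check the bug was missing */
            clnt = create();
            if (IS_ERR(clnt))
                goto out;
            ln->nsm_clnt = clnt;
        }
        ln->nsm_users++;
    out:
        mutex_unlock(&nsm_create_mutex);
        return clnt;
    }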
-
- 17 October 2012, 1 commit
-
-
Committed by Trond Myklebust
If the filehandle is stale, or open access is denied for some reason, nlm_fopen() may return one of the NLMv4-specific error codes nlm4_stale_fh or nlm4_failed. These get passed right through nlm_lookup_file(), and so when nlmsvc_retrieve_args() calls the latter, it needs to filter the result through the cast_status() machinery. Failure to do so will trigger the BUG_ON() in encode_nlm_stat...
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Reported-by: Larry McVoy <lm@bitmover.com>
Cc: stable@kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 02 October 2012, 3 commits
-
-
Committed by Stanislav Kinsbursky
The NSM RPC client can be required on NFSv3 umount, when the child reaper is dying (and destroying its mount namespace). This means that the current nsproxy is already set to NULL, but creating the RPC client requires the UTS namespace to obtain the hostname string. This patch creates a reference-counted per-net NSM client on the first monitor request and destroys it after the last unmonitor request.
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Stanislav Kinsbursky
Taking the hostname from the UTS namespace is not safe, because this could be performed during a umount operation at child reaper death, and in that case current->nsproxy is already NULL.
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Stanislav Kinsbursky
The NSM RPC client can be required on NFSv3 umount, when the child reaper is dying (and destroying its mount namespace). This means that the current nsproxy is already set to NULL, but creating the RPC client requires the UTS namespace to obtain the hostname string. This patch introduces reference-counted NFS RPC client creation and destruction helpers (similar to the RPCBIND RPC clients).
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 23 September 2012, 1 commit
-
-
Committed by Al Viro
We need to grab the mutex before the reference counter reaches 0.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
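The one-liner points at a classic refcount-versus-mutex ordering problem. A generic sketch of one way to keep the ordering right (not necessarily the actual fix), using the kernel's atomic_dec_and_mutex_lock() helper:

    #include <linux/atomic.h>
    #include <linux/mutex.h>

    /*
     * atomic_dec_and_mutex_lock() returns true, with the mutex held, only
     * for the thread that performed the final decrement, so teardown always
     * happens under the lock and no one can see a half-destroyed object.
     */
    static void put_object(atomic_t *refcount, struct mutex *lock,
                           void (*destroy)(void))
    {
        if (atomic_dec_and_mutex_lock(refcount, lock)) {
            destroy();               /* last reference is gone */
            mutex_unlock(lock);
        }
    }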
-
- 22 August 2012, 1 commit
-
-
Committed by J. Bruce Fields
svc_recv() returns only -EINTR or -EAGAIN. If we really want to worry about the case where it has a bug that causes it to return something else, we could stick a WARN() in svc_recv. But it's silly to require every caller to have all this boilerplate to handle that case.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
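A sketch of the simplified main loop this permits in a caller; the thread function, timeout and use of svc_process() are assumptions for illustration:

    #include <linux/errno.h>
    #include <linux/kthread.h>
    #include <linux/sunrpc/svc.h>
    #include <linux/sunrpc/svcsock.h>

    static int example_svc_thread(void *data)
    {
        struct svc_rqst *rqstp = data;
        int err;

        while (!kthread_should_stop()) {
            err = svc_recv(rqstp, 30 * HZ);
            if (err == -EAGAIN || err == -EINTR)
                continue;            /* nothing arrived or signalled: loop */
            svc_process(rqstp);      /* got a request */
        }
        return 0;
    }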
-
- 30 July 2012, 2 commits
-
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
It's used both for client and server hosts; we can't do nlmclnt_release_host() on failure exits, since the host might need nlmsvc_release_host() instead, with a BUG_ON() for calling the wrong one. Makes life simpler for callers, actually...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 28 July 2012, 12 commits
-
-
Committed by Stanislav Kinsbursky
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
The passed network namespace replaces the hard-coded init_net.
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This is a cleanup patch that makes the code look simpler. It replaces the widely used rqstp->rq_xprt->xpt_net with the newly introduced SVC_NET(rqstp).
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
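For reference, the helper is a trivial accessor macro along these lines (paraphrased from the description; see the sunrpc headers for the authoritative definition):

    #define SVC_NET(rqst) ((rqst)->rq_xprt->xpt_net)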
-
Committed by Stanislav Kinsbursky
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Just a small cleanup.
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This patch moves nrhosts into per-net data. It also adds a kernel warning to nlm_shutdown_hosts_net() about hosts remaining in the specified network namespace context.
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This patch moves next_gc to per-net data. Note: the passed network can be NULL (when the Lockd kthread is exiting or the Lockd module is being removed).
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This is required for per-network NLM shutdown and cleanup. This patch passes init_net for a while.
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 01 June 2012, 4 commits
-
-
Committed by Stanislav Kinsbursky
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This is just a code move, which from my POV makes the code look better. I.e. now on start we have three different stages:
1) Service creation.
2) Service per-net data allocation.
3) Service start.
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
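A sketch of how a lockd_up()-style function could be arranged around those three stages; the helper names and prototypes are assumed stand-ins for what this series introduces:

    #include <linux/err.h>
    #include <linux/sunrpc/svc.h>
    #include <net/net_namespace.h>

    /* Assumed helper prototypes for the three stages described above. */
    struct svc_serv *lockd_create_svc(void);
    int lockd_up_net(struct svc_serv *serv, struct net *net);
    int lockd_start_svc(struct svc_serv *serv);

    static int lockd_up_sketch(struct net *net)
    {
        struct svc_serv *serv;
        int error;

        serv = lockd_create_svc();              /* 1) service creation   */
        if (IS_ERR(serv))
            return PTR_ERR(serv);

        error = lockd_up_net(serv, net);        /* 2) per-net data setup */
        if (error < 0)
            goto out;

        error = lockd_start_svc(serv);          /* 3) service start      */
    out:
        svc_destroy(serv);   /* drops the usage count taken at creation */
        return error;
    }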
-
Committed by Stanislav Kinsbursky
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This function creates the service if it doesn't exist, or increases its usage counter if it does, and returns a pointer to it. The usage counter will be dropped by svc_destroy() later in lockd_up().
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-