- 31 March 2014 (1 commit)

By Theodore Ts'o
Use cmpxchg() to atomically set i_flags instead of clearing out the S_IMMUTABLE, S_APPEND, etc. flags and then setting them from the EXT4_IMMUTABLE_FL, EXT4_APPEND_FL flags, since this opens up a race where an immutable file has the immutable flag cleared for a brief window of time.

Reported-by: John Sullivan <jsrhbz@kanargh.force9.co.uk>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
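
The fix boils down to a read/modify/compare-and-swap loop on i_flags. Below is a minimal sketch of that pattern; set_flags_atomic() and the plain unsigned long operand are illustrative, not the exact helper the commit adds.

```c
/*
 * Hedged sketch of the read/modify/cmpxchg pattern described above.
 * set_flags_atomic() is an illustrative name, not the actual ext4/VFS helper.
 */
static void set_flags_atomic(unsigned long *flags, unsigned long mask,
			     unsigned long set)
{
	unsigned long old, new;

	do {
		old = *flags;                       /* snapshot current value */
		new = (old & ~mask) | set;          /* clear and set in one step */
	} while (cmpxchg(flags, old, new) != old);  /* retry if a writer raced us */
}
```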

- 29 March 2014 (3 commits)

By ZhangZhen
Tejun Heo made WQ_NON_REENTRANT useless in commit dbf2576e ("workqueue: make all workqueues non-reentrant"), so remove its usages and definition. This patch doesn't introduce any behavior changes.

tj: minor description updates.

Signed-off-by: ZhangZhen <zhenzhang.zhang@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: James Chapman <jchapman@katalix.com>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>

By Vlad Yasevich
Some drivers incorrectly assign vlan acceleration features to vlan_features, thus causing issues for Q-in-Q vlan configurations. Warn the user of such cases.

Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Vlad Yasevich
skb_network_protocol() already accounts for multiple vlan headers that may be present in the skb. However, skb_mac_gso_segment() doesn't know anything about it and assumes that skb->mac_len is set correctly to skip all mac headers. That may not always be the case. If we are simply forwarding the packet (via bridge or macvtap), all vlan headers may not be accounted for. A simple solution is to allow skb_network_protocol() to return the vlan depth it has calculated. This way skb_mac_gso_segment() will correctly skip all mac headers.

Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
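
A sketch of the caller-side pattern this enables follows. The out-parameter form of skb_network_protocol() is inferred from the description above, and mac_gso_segment_sketch() is an illustrative stand-in for the real skb_mac_gso_segment(), so treat the exact signatures as assumptions.

```c
/*
 * Hedged sketch: let skb_network_protocol() report how many bytes of
 * MAC + VLAN headers it walked past, then pull exactly that much before
 * handing the skb to the L3 protocol's segmentation callback.
 */
static struct sk_buff *mac_gso_segment_sketch(struct sk_buff *skb,
					      netdev_features_t features)
{
	int vlan_depth = skb->mac_len;			/* starting guess */
	__be16 type = skb_network_protocol(skb, &vlan_depth);

	if (unlikely(!type))
		return ERR_PTR(-EINVAL);

	__skb_pull(skb, vlan_depth);	/* skip all MAC + VLAN headers */
	/* ... dispatch to the protocol's gso_segment callback ... */
	return NULL;
}
```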

- 28 March 2014 (2 commits)

By Zoltan Kiss
skb_zerocopy() can copy elements of the frags array between skbs, but it doesn't orphan them. It also doesn't handle errors, so this patch takes care of that as well and modifies the callers accordingly. skb_tx_error() is also added to the callers so they will signal the failed delivery towards the creator of the skb.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Oliver Neukum
This fixes a race which happens by freeing an object on the stack. Quoting Julius:

> The issue is that it calls usbnet_terminate_urbs() before that, which temporarily installs a waitqueue in dev->wait in order to be able to wait on the tasklet to run and finish up some queues. The waiting itself looks okay, but the access to 'dev->wait' is totally unprotected and can race arbitrarily. I think in this case usbnet_bh() managed to succeed its dev->wait check just before usbnet_terminate_urbs() sets it back to NULL. The latter then finishes and the waitqueue_t structure on its stack gets overwritten by other functions halfway through the wake_up() call in usbnet_bh().

The fix is to just not allocate the data structure on the stack. As dev->wait is abused as a flag, it also takes a runtime PM change to fix this bug.

Signed-off-by: Oliver Neukum <oneukum@suse.de>
Reported-by: Grant Grundler <grundler@google.com>
Tested-by: Grant Grundler <grundler@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 26 March 2014 (1 commit)

By Thomas Gleixner
If a delayed or deferrable work is on the stack we need to tell debug objects that we are destroying the timer and the work. Otherwise we leak the tracking object.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Acked-by: Tejun Heo <tj@kernel.org>
Link: http://lkml.kernel.org/r/20140323141939.911487677@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
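
A usage sketch of the on-stack pairing this implies is shown below; demo_fn() and run_on_stack_work() are illustrative, the workqueue helpers are the existing API plus the destroy helper this change provides.

```c
/*
 * Hedged usage sketch: when a delayed work lives on the stack, pair the
 * _ONSTACK initializer with the matching destroy helper so debugobjects
 * does not leak a tracking entry.
 */
static void demo_fn(struct work_struct *work) { }

static void run_on_stack_work(void)
{
	struct delayed_work dwork;

	INIT_DELAYED_WORK_ONSTACK(&dwork, demo_fn);
	schedule_delayed_work(&dwork, msecs_to_jiffies(10));
	flush_delayed_work(&dwork);		/* wait for it to finish */
	destroy_delayed_work_on_stack(&dwork);	/* tell debugobjects it is gone */
}
```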

- 25 March 2014 (1 commit)

By Geert Uytterhoeven
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Tejun Heo <tj@kernel.org>

- 21 March 2014 (4 commits)

By Hugh Dickins
Add remove_linear_migration_ptes_from_nonlinear(), to fix an interesting little include/linux/swapops.h:131 BUG_ON(!PageLocked) found by trinity: indicating that remove_migration_ptes() failed to find one of the migration entries that was temporarily inserted.

The problem comes from remap_file_pages()'s switch from vma_interval_tree (good for inserting the migration entry) to the i_mmap_nonlinear list (no good for locating it again); but it can only be a problem if the remap_file_pages() range does not cover the whole of the vma (zap_pte() clears the range).

remove_migration_ptes() needs a file_nonlinear method to go down the i_mmap_nonlinear list, applying linear location to look for migration entries in those vmas too, just in case there was this race. The file_nonlinear method does need rmap_walk_control.arg to do this; but it never needed vma passed in - vma comes from its own iteration.

Reported-and-tested-by: Dave Jones <davej@redhat.com>
Reported-and-tested-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By Paul E. McKenney
The following pattern is currently not well supported by RCU:

1. Make data element inaccessible to RCU readers.
2. Do work that probably lasts for more than one grace period.
3. Do something to make sure RCU readers in flight before #1 above have completed.

Here are some things that could currently be done:

a. Do a synchronize_rcu() unconditionally at either #1 or #3 above. This works, but imposes needless work and latency.
b. Post an RCU callback at #1 above that does a wakeup, then wait for the wakeup at #3. This works well, but likely results in an extra unneeded grace period. Open-coding this is also a bit more semi-tricky code than would be good.

This commit therefore adds get_state_synchronize_rcu() and cond_synchronize_rcu() APIs. Call get_state_synchronize_rcu() at #1 above and pass its return value to cond_synchronize_rcu() at #3 above. This results in a call to synchronize_rcu() if no grace period has elapsed between #1 and #3, but requires only a load, comparison, and memory barrier if a full grace period did elapse.

Requested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
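
A usage sketch of the new pair of APIs follows. The two RCU calls are the ones the commit introduces; struct my_elem, remove_element(), do_long_work() and retire_element() are illustrative placeholders.

```c
/* Hedged usage sketch of the #1/#2/#3 pattern described above. */
struct my_elem { int data; };

extern void remove_element(struct my_elem *elem);	/* illustrative */
extern void do_long_work(void);				/* illustrative */

static void retire_element(struct my_elem *elem)
{
	unsigned long cookie;

	remove_element(elem);			/* #1: unreachable to new readers */
	cookie = get_state_synchronize_rcu();	/* snapshot grace-period state */

	do_long_work();				/* #2: likely spans a grace period */

	cond_synchronize_rcu(cookie);		/* #3: blocks only if no GP elapsed */
	kfree(elem);
}
```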

By Dave Jones
Rename TAINT_UNSAFE_SMP to TAINT_CPU_OUT_OF_SPEC, so we can repurpose the flag to encompass a wider range of pushing the CPU beyond its warranty.

Signed-off-by: Dave Jones <davej@fedoraproject.org>
Link: http://lkml.kernel.org/r/20140226154949.GA770@redhat.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>

By Vaibhav Nagarnaik
In event format strings, the array size is reported in two locations: once in the array subscript and then via the "size:" attribute. The values reported there have a mismatch. For example, in sched:sched_switch the prev_comm and next_comm character arrays have subscript values of [32], whereas the actual field size is 16.

name: sched_switch
ID: 301
format:
        field:unsigned short common_type; offset:0; size:2; signed:0;
        field:unsigned char common_flags; offset:2; size:1; signed:0;
        field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
        field:int common_pid; offset:4; size:4; signed:1;
        field:char prev_comm[32]; offset:8; size:16; signed:1;
        field:pid_t prev_pid; offset:24; size:4; signed:1;
        field:int prev_prio; offset:28; size:4; signed:1;
        field:long prev_state; offset:32; size:8; signed:1;
        field:char next_comm[32]; offset:40; size:16; signed:1;
        field:pid_t next_pid; offset:56; size:4; signed:1;
        field:int next_prio; offset:60; size:4; signed:1;

After bisection, the following commit was blamed: 92edca07 ("tracing: Use direct field, type and system names"). This commit removes the duplication of strings for field->name and field->type, assuming that all the strings passed in __trace_define_field() are immutable. This is not true for arrays, where the type string is created in the event_storage variable and field->type for all array fields points to event_storage.

Use __stringify() to create a string constant for the type string. Also, get rid of event_storage and event_storage_mutex, which are not needed anymore.

An added benefit is that this reduces the overhead of events a bit more:

   text    data     bss     dec     hex filename
8424787 2036472 1302528 11763787  b3804b vmlinux
8420814 2036408 1302528 11759750  b37086 vmlinux.patched

Link: http://lkml.kernel.org/r/1392349908-29685-1-git-send-email-vnagarnaik@google.com
Cc: Laurent Chavey <chavey@google.com>
Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
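
The core idea of the fix is to build the "type[len]" string at compile time so field->type can point at an immutable string literal instead of a shared runtime buffer. A hedged illustration follows; TYPE_WITH_LEN() is a made-up macro to show the technique, not the actual ftrace __array() macro.

```c
#include <linux/stringify.h>

/* Illustrative only: glue stringified pieces into one string literal. */
#define TYPE_WITH_LEN(type, len)  #type "[" __stringify(len) "]"

/*
 * TYPE_WITH_LEN(char, TASK_COMM_LEN) expands (with TASK_COMM_LEN == 16)
 * to the compile-time constant "char[16]", so no mutable event_storage
 * buffer is needed to hold the type string.
 */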

- 20 March 2014 (5 commits)

By Heiko Carstens
linux/compat.h does not include linux/unistd.h, but the compat.h header file contains various conditional compat system call function declarations of the form:

#ifdef __ARCH_WANT_COMPAT_...
asmlinkage long compat...();
#endif

If linux/unistd.h isn't included, it depends on previous includes whether those __ARCH_WANT_COMPAT_... defines are defined or not. So add an additional linux/unistd.h include.

Should fix this compile error on tile:

include/uapi/asm-generic/unistd.h:195:1: error: 'compat_sys_getdents64' undeclared
make[3]: *** [arch/tile/kernel/compat.o] Error 1

Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>

By Viresh Kumar
There are only two users of get_nohz_timer_target(): timer and hrtimer. Both call it under the same circumstances, i.e.:

#ifdef CONFIG_NO_HZ_COMMON
	if (!pinned && get_sysctl_timer_migration() && idle_cpu(this_cpu))
		return get_nohz_timer_target();
#endif

So it makes more sense to do all of this as part of get_nohz_timer_target() instead of duplicating code in two places. For this, another parameter, pinned, is required to be passed to this routine.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: linaro-kernel@lists.linaro.org
Cc: fweisbec@gmail.com
Cc: peterz@infradead.org
Link: http://lkml.kernel.org/r/1e1b53537217d58d48c2d7a222a9c3ac47d5b64c.1395140107.git.viresh.kumar@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
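
A hedged sketch of the shape the helper takes after this consolidation is shown below; pick_non_idle_cpu() stands in for the real CPU-selection logic and is illustrative, as is the exact guard expression.

```c
/*
 * Hedged sketch: the "pinned" decision moves inside the helper, so the
 * timer and hrtimer callers just ask for a target CPU.
 */
static int pick_non_idle_cpu(void);	/* illustrative stand-in */

int get_nohz_timer_target(int pinned)
{
	int cpu = smp_processor_id();

	if (pinned || !get_sysctl_timer_migration() || !idle_cpu(cpu))
		return cpu;		/* keep the timer local */

	return pick_non_idle_cpu();	/* migrate it to a busy CPU */
}
```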

By Viresh Kumar
Rearrange kernel doc comments in the order members of struct hrtimer are declared.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: linaro-kernel@lists.linaro.org
Cc: fweisbec@gmail.com
Cc: trivial@kernel.org
Link: http://lkml.kernel.org/r/1db1a3cfbe8a9ea49396af75c6ac04a2e67e3ab0.1395226248.git.viresh.kumar@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

By Dan Williams
Commit bc6e7c4b ("libata, libsas: kill pm_result and related cleanup") renamed ata_sas_port_async_resume() to ata_sas_port_resume(), but missed a CONFIG_PM=n stub conversion. Randy fixed that up in commit a5a65699 ("libata.h: add stub for ata_sas_port_resume"), but missed the deletion of the now unused ata_sas_port_async_resume() routine.

Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Tejun Heo <tj@kernel.org>

By Randy Dunlap
Fix build error when CONFIG_PM is not enabled by adding a stub function in <linux/libata.h>:

drivers/scsi/libsas/sas_ata.c: In function 'sas_resume_sata':
drivers/scsi/libsas/sas_ata.c:756:3: error: implicit declaration of function 'ata_sas_port_resume' [-Werror=implicit-function-declaration]

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: Jim Davis <jim.epost@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>

- 19 March 2014 (2 commits)

By Dan Williams
Tejun says: "At least for libata, worrying about suspend/resume failures don't make whole lot of sense. If suspend failed, just proceed with suspend. If the device can't be woken up afterwards, that's that. There isn't anything we could have done differently anyway. The same for resume, if spinup fails, the device is dud and the following commands will invoke EH actions and will eventually fail. Again, there really isn't any *choice* to make. Just making sure the errors are handled gracefully (ie. don't crash) and the following commands are handled correctly should be enough." The only libata user that actually cares about the result from a suspend operation is libsas. However, it only cares about whether queuing a new operation collides with an in-flight one. All libsas does with the error is retry, but we can just let libata wait for the previous operation before continuing. Other cleanups include: 1/ Unifying all ata port pm operations on an ata_port_pm_ prefix 2/ Marking all ata port pm helper routines as returning void, only ata_port_pm_ entry points need to fake a 0 return value. 3/ Killing ata_port_{suspend|resume}_common() in favor of calling ata_port_request_pm() directly 4/ Killing the wrappers that just do a to_ata_port() conversion 5/ Clearly marking the entry points that do async operations with an _async suffix. Reference: http://marc.info/?l=linux-scsi&m=138995409532286&w=2 Cc: Phillip Susi <psusi@ubuntu.com> Cc: Alan Stern <stern@rowland.harvard.edu> Suggested-by: NTejun Heo <tj@kernel.org> Signed-off-by: NTodd Brandt <todd.e.brandt@intel.com> Signed-off-by: NDan Williams <dan.j.williams@intel.com> Signed-off-by: NTejun Heo <tj@kernel.org> -

By Bjørn Mork
This is a context-modified revert of commit 6a9612e2 ("net: cdc_ncm: remove ncm_parm field"), which introduced an NCM specification violation, causing setup errors for some devices. These errors resulted in the device and host disagreeing about shared settings, with complete failure to communicate as the end result.

The NCM specification requires that many of the NCM specific control requests are sent only while the NCM Data Interface is in alternate setting 0. Reverting the commit ensures that we follow this requirement.

Fixes: 6a9612e2 ("net: cdc_ncm: remove ncm_parm field")
Reported-and-tested-by: Pasi Kärkkäinen <pasik@iki.fi>
Reported-by: Thomas Schäfer <tschaefer@t-online.de>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 14 March 2014 (2 commits)

By Thomas Gleixner
The flag is necessary for interrupt chips which require an ACK/EOI after the handler has run. In the case of threaded handlers this needs to happen after the threaded handler has completed, before the interrupt is unmasked. The flag is only useful in combination with the handle_fasteoi_irq flow control handler. It can be combined with the flag IRQCHIP_EOI_IF_HANDLED, so the EOI is not issued when the interrupt is disabled or in progress.

Tested-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-sunxi@googlegroups.com
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Link: http://lkml.kernel.org/r/1394733834-26839-2-git-send-email-hdegoede@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

By Thomas Gleixner
Add missing documentation of the flag.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 13 March 2014 (2 commits)

By Frederic Weisbecker
We already have nsecs_to_cputime(). Now we need to be able to convert the other way around in order to fix a bug in steal time accounting.

Cc: Ingo Molnar <mingo@kernel.org>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>

By Frederic Weisbecker
The architectures that override cputime_t (s390, ppc) don't provide any version of nsecs_to_cputime(). Indeed, this backend-specific cputime_t implementation only happens when CONFIG_VIRT_CPU_ACCOUNTING_NATIVE=y, under which the core code doesn't make any use of nsecs_to_cputime(), at least for now.

We are going to make broader use of it, so let's provide a default version with per-microsecond granularity. It should be good enough for most use cases.

Cc: Ingo Molnar <mingo@kernel.org>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
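
A hedged sketch of such a microsecond-granularity fallback follows; the exact macro and the header it lands in may differ from what is shown here, which simply routes through the existing usecs_to_cputime() helper.

```c
/*
 * Hedged sketch of a default nsecs_to_cputime() with usec granularity,
 * only defined when no architecture-specific version exists.
 */
#ifndef nsecs_to_cputime
# define nsecs_to_cputime(__nsecs)	\
	usecs_to_cputime((__nsecs) / NSEC_PER_USEC)
#endif
```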

- 12 March 2014 (2 commits)

By Thomas Gleixner
For certain irq types, e.g. gpios, it's necessary to request resources before starting up the irq. This might fail, so we cannot use the irq_startup() callback, because we might call the irq_set_type() callback before that, which does not make sense when the resource is not available. Calling irq_startup() before irq_set_type() can lead to spurious interrupts, which is not desired either.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jean-Jacques Hiblot <jjhiblot@traphandler.com>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1403080857160.18573@ionos.tec.linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

By Laura Abbott
Apart from setting the limit of memblock, it's also useful to be able to get the limit to avoid recalculating it every time. Add the function to do so.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
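
A minimal sketch of such a getter is shown below. The commit message does not name the new function, so the name here merely mirrors the existing memblock_set_current_limit() setter and should be treated as an assumption.

```c
/*
 * Hedged sketch: expose the already-tracked current_limit instead of
 * recalculating it at every caller. Name is assumed, not confirmed.
 */
phys_addr_t memblock_get_current_limit(void)
{
	return memblock.current_limit;
}
```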

- 11 March 2014 (3 commits)

By Peter Zijlstra
Since we want a task waiting for a mutex_lock() to go to sleep and reschedule on need_resched(), we must be able to abort the mcs_spin_lock() around the adaptive spin. Therefore implement a cancelable mcs lock.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: chegu_vinod@hp.com
Cc: paulmck@linux.vnet.ibm.com
Cc: Waiman.Long@hp.com
Cc: torvalds@linux-foundation.org
Cc: tglx@linutronix.de
Cc: riel@redhat.com
Cc: akpm@linux-foundation.org
Cc: davidlohr@hp.com
Cc: hpa@zytor.com
Cc: andi@firstfloor.org
Cc: aswin@hp.com
Cc: scott.norton@hp.com
Cc: Jason Low <jason.low2@hp.com>
Link: http://lkml.kernel.org/n/tip-62hcl5wxydmjzd182zhvk89m@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

By Peter Zijlstra
The mcs_spinlock code is not meant (or suitable) as a generic locking primitive, therefore take it away from the normal includes and place it in kernel/locking/. This way the locking primitives implemented there can use it as part of their implementation but we do not risk it getting used inappropriately.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-byirmpamgr7h25m5kyavwpzx@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

By Johannes Weiner
GFP_THISNODE is for callers that implement their own clever fallback to remote nodes. It restricts the allocation to the specified node and does not invoke reclaim, assuming that the caller will take care of it when the fallback fails, e.g. through a subsequent allocation request without GFP_THISNODE set.

However, many current GFP_THISNODE users only want the node-exclusive aspect of the flag, without actually implementing their own fallback or triggering reclaim if necessary. This results in things like page migration failing prematurely even when there is easily reclaimable memory available, unless kswapd happens to be running already or a concurrent allocation attempt triggers the necessary reclaim.

Convert all callsites that don't implement their own fallback strategy to __GFP_THISNODE. This restricts the allocation to a single node too, but at the same time allows the allocator to enter the slowpath, wake kswapd, and invoke direct reclaim if necessary, to make the allocation happen when memory is full.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Jan Stancek <jstancek@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
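
The conversion amounts to a one-flag change at each callsite. A hedged before/after sketch is shown below; alloc_on_node() and the surrounding gfp combination are illustrative, not a specific patched caller.

```c
/* Hedged sketch of the GFP_THISNODE -> __GFP_THISNODE conversion. */
static struct page *alloc_on_node(int nid)
{
	/*
	 * Before: node-exclusive *and* reclaim suppressed, even though this
	 * caller implements no fallback of its own:
	 *     return alloc_pages_node(nid, GFP_KERNEL | GFP_THISNODE, 0);
	 *
	 * After: still node-exclusive, but kswapd and direct reclaim may run
	 * when the node is full.
	 */
	return alloc_pages_node(nid, GFP_KERNEL | __GFP_THISNODE, 0);
}
```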

- 10 March 2014 (3 commits)

By Al Viro
Instead of returning the flags by reference, we can just have the low-level primitive return those in the lower bits of an unsigned long, with the struct file * derived from the rest.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
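
The trick relies on struct file being at least word-aligned, so the low bits of the pointer are free to carry the flags. A hedged sketch of the unpacking side follows; the exact in-tree struct fd and helper definitions may differ in detail.

```c
/*
 * Hedged sketch: decode a single unsigned long into a (file, flags) pair.
 * The low two bits hold the fdput flags; the rest is the pointer.
 */
struct fd {
	struct file *file;
	unsigned int flags;
};

static inline struct fd __to_fd(unsigned long v)
{
	return (struct fd){ (struct file *)(v & ~3UL), v & 3 };
}
```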

By Linus Torvalds
Our write() system call has always been atomic in the sense that you get the expected thread-safe contiguous write, but we haven't actually guaranteed that concurrent writes are serialized wrt f_pos accesses, so threads (or processes) that share a file descriptor and use "write()" concurrently would quite likely overwrite each other's data.

This violates POSIX.1-2008/SUSv4 Section XSI 2.9.7 that says:

"2.9.7 Thread Interactions with Regular File Operations

All of the following functions shall be atomic with respect to each other in the effects specified in POSIX.1-2008 when they operate on regular files or symbolic links: [...]"

and one of the effects is the file position update.

This unprotected file position behavior is not new behavior, and nobody has ever cared. Until now. Yongzhi Pan reported unexpected behavior to Michael Kerrisk that was due to this.

This resolves the issue with an f_pos-specific lock that is taken by read/write/lseek on file descriptors that may be shared across threads or processes.

Reported-by: Yongzhi Pan <panyongzhi@gmail.com>
Reported-by: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
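
Conceptually, the fix serializes the read-modify-write of f_pos around the actual I/O. The sketch below is a simplified illustration of that idea; locked_write() is made up, and the real syscall path takes the lock conditionally (only when the descriptor can actually be shared), so treat the details as assumptions.

```c
/* Hedged sketch: serialize f_pos updates of a shared file description. */
static ssize_t locked_write(struct file *file, const char __user *buf,
			    size_t len)
{
	ssize_t ret;
	loff_t pos;

	mutex_lock(&file->f_pos_lock);		/* per-file position lock */
	pos = file->f_pos;			/* read the shared position */
	ret = vfs_write(file, buf, len, &pos);	/* I/O advances the local copy */
	if (ret >= 0)
		file->f_pos = pos;		/* publish atomically wrt peers */
	mutex_unlock(&file->f_pos_lock);
	return ret;
}
```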

By Nikolay Aleksandrov
security_xfrm_policy_alloc() can be called in atomic context, so the allocation should be done with GFP_ATOMIC. Add an argument to let the callers choose the appropriate way. In order to do so, a gfp argument needs to be added to the method xfrm_policy_alloc_security in struct security_operations and to the internal function selinux_xfrm_alloc_user. After that, switch to GFP_ATOMIC in the atomic callers and leave GFP_KERNEL as before for the rest.

The path that needed the gfp argument addition is:

security_xfrm_policy_alloc -> security_ops.xfrm_policy_alloc_security -> all users of xfrm_policy_alloc_security (e.g. selinux_xfrm_policy_alloc) -> selinux_xfrm_alloc_user (here the allocation used to be GFP_KERNEL only)

Now adding a gfp argument to selinux_xfrm_alloc_user requires us to also add it to security_context_to_sid, which is used inside it and, prior to this patch, did only GFP_KERNEL allocation. So add a gfp argument to security_context_to_sid and adjust all of its callers as well.

CC: Paul Moore <paul@paul-moore.com>
CC: Dave Jones <davej@redhat.com>
CC: Steffen Klassert <steffen.klassert@secunet.com>
CC: Fan Du <fan.du@windriver.com>
CC: David S. Miller <davem@davemloft.net>
CC: LSM list <linux-security-module@vger.kernel.org>
CC: SELinux list <selinux@tycho.nsa.gov>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Acked-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>

- 07 March 2014 (3 commits)

By Tejun Heo
Peter Hurley noticed that since a2c1c57b ("workqueue: consider work function when searching for busy work items"), a work item which gets assigned a different work function would break out of the non-reentrancy guarantee, as workqueue would consider it a different work item. This is fragile and extremely subtle.

PREPARE_[DELAYED_]WORK() have never been used widely and their semantics have always been somewhat iffy. If the work item is known not to be on a queue when PREPARE_WORK() is called, there's no difference from using INIT_WORK(). If the work item may be queued at the time of PREPARE_WORK(), we can't really tell whether the old or new function will be executed the next time.

We really don't want this level of subtlety in the workqueue interface for such marginal use cases. The previous patches converted all existing users away from PREPARE_[DELAYED_]WORK(). Let's remove them.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Peter Hurley <peter@hurleysoftware.com>
Link: http://lkml.kernel.org/g/1392493119-9277-1-git-send-email-peter@hurleysoftware.com

By Tejun Heo
PREPARE_[DELAYED_]WORK() are being phased out. They have few users and a nasty surprise in terms of reentrancy guarantee, as workqueue considers work items to be different if they don't have the same work function.

nvme_dev->reset_work is multiplexed with multiple work functions. Introduce nvme_reset_workfn(), which invokes nvme_dev->reset_workfn, always use it as the work function, and update the users to set the ->reset_workfn field instead of overriding the work function using PREPARE_WORK().

It would probably be best to route this with other related updates through the workqueue tree. Compile tested.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Matthew Wilcox <willy@linux.intel.com>
Cc: linux-nvme@lists.infradead.org
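
The multiplexing pattern described here replaces PREPARE_WORK() with a fixed trampoline that dispatches through a function pointer stored next to the work item. A hedged sketch follows; nvme_dev_sketch is a trimmed-down stand-in for the real struct nvme_dev, while the field and trampoline names follow the commit text.

```c
/* Hedged sketch of the workfn-multiplexing pattern described above. */
struct nvme_dev_sketch {
	struct work_struct reset_work;
	void (*reset_workfn)(struct work_struct *work);
};

static void nvme_reset_workfn(struct work_struct *work)
{
	struct nvme_dev_sketch *dev =
		container_of(work, struct nvme_dev_sketch, reset_work);

	dev->reset_workfn(work);	/* dispatch to the currently selected handler */
}

/*
 * Callers now set dev->reset_workfn = <handler>; and queue dev->reset_work,
 * whose work function is always nvme_reset_workfn(), so workqueue's
 * non-reentrancy check keeps working.
 */
```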

By Tejun Heo
PREPARE_[DELAYED_]WORK() are being phased out. They have few users and a nasty surprise in terms of reentrancy guarantee, as workqueue considers work items to be different if they don't have the same work function.

firewire core-device and sbp2 have been multiplexing work items with multiple work functions. Introduce fw_device_workfn() and sbp2_lu_workfn(), which invoke fw_device->workfn and sbp2_logical_unit->workfn respectively, always use the two functions as the work functions, and update the users to set the ->workfn fields instead of overriding work functions using PREPARE_DELAYED_WORK().

This fixes a variety of possible regressions since a2c1c57b ("workqueue: consider work function when searching for busy work items"), due to which fw_workqueue lost its required non-reentrancy property.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Cc: linux1394-devel@lists.sourceforge.net
Cc: stable@vger.kernel.org # v3.9+
Cc: stable@vger.kernel.org # v3.8.2+
Cc: stable@vger.kernel.org # v3.4.60+
Cc: stable@vger.kernel.org # v3.2.40+

- 06 March 2014 (6 commits)

By Heiko Carstens
In order to allow the COMPAT_SYSCALL_DEFINE macro to generate code that performs proper zero and sign extension, convert all 64 bit parameters to their corresponding 32 bit compat counterparts.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>

By Heiko Carstens
In order to allow the COMPAT_SYSCALL_DEFINE macro to generate code that performs proper zero and sign extension, convert all 64 bit parameters to their corresponding 32 bit compat counterparts.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>

By Heiko Carstens
In order to allow the COMPAT_SYSCALL_DEFINE macro to generate code that performs proper zero and sign extension, convert all 64 bit parameters to their corresponding 32 bit compat counterparts.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>

By Heiko Carstens
In order to allow the COMPAT_SYSCALL_DEFINE macro to generate code that performs proper zero and sign extension, convert all 64 bit parameters to their corresponding 32 bit compat counterparts.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>

By Heiko Carstens
Some fs compat system calls have unsigned long parameters instead of compat_ulong_t. In order to allow the COMPAT_SYSCALL_DEFINE macro to generate code that performs proper zero and sign extension, convert all 64 bit parameters to their corresponding 32 bit counterparts.

compat_sys_io_getevents() is a bit different: the non-compat version has signed parameters for the "min_nr" and "nr" parameters while the compat version has unsigned parameters. So change this as well. For all practical purposes this shouldn't make any difference (it doesn't fix a real bug).

Also introduce a generic compat_aio_context_t type which can be used everywhere. The access_ok() check within compat_sys_io_getevents() was also removed, since the non-compat sys_io_getevents() should be able to handle everything anyway.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
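
A hedged illustration of the parameter-type conversion described throughout this series is shown below; "example" and do_example() are made-up names, not a specific patched syscall.

```c
/*
 * Hedged illustration of converting a compat syscall's parameters to
 * compat types so COMPAT_SYSCALL_DEFINE can emit the proper zero/sign
 * extension of the 32 bit register contents.
 *
 * Before: 64 bit types leaked into the compat entry point:
 *     asmlinkage long compat_sys_example(unsigned long fd, unsigned long arg);
 */
static long do_example(unsigned long fd, unsigned long arg);	/* illustrative */

COMPAT_SYSCALL_DEFINE2(example, compat_ulong_t, fd, compat_ulong_t, arg)
{
	return do_example(fd, arg);
}
```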

By Heiko Carstens
The preadv64/pwrite64 system calls have been implemented for the x32 ABI in order to allow passing 64 bit arguments from user space without splitting them into two 32 bit parameters, as would be necessary for usual compat tasks. However, these two system calls are only being used for the x32 ABI, so add __ARCH_WANT_COMPAT defines for these two compat syscalls and make them visible only for x86.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>