- 05 Jan 2011, 1 commit

Submitted by NeilBrown
wait_for_completion_*_timeout() can return:

   0: if the wait timed out
 -ve: if the wait was interrupted
 +ve: if the completion was completed.

As they currently return an 'unsigned long', the last two cases are not easily distinguished, which can easily result in buggy code, as is the case for the recently added wait_for_completion_interruptible_timeout() call in net/sunrpc/cache.c

So change them both to return 'long'. As MAX_SCHEDULE_TIMEOUT is LONG_MAX, a large +ve return value should never overflow.

Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: J. Bruce Fields <bfields@fieldses.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <20110105125016.64ccab0e@notabene.brown>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
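A hedged sketch of how a caller can distinguish the three cases once the return type is a signed long; the surrounding function, completion and timeout values are illustrative, not taken from the patch:

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

/* Illustrative caller: 'done' is assumed to be an initialized completion. */
static int wait_for_reply(struct completion *done)
{
        long ret = wait_for_completion_interruptible_timeout(done, 30 * HZ);

        if (ret == 0)
                return -ETIMEDOUT;      /* the wait timed out */
        if (ret < 0)
                return ret;             /* interrupted, e.g. -ERESTARTSYS */
        return 0;                       /* completed; ret is the remaining jiffies */
}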
-
- 03 Jan 2011, 1 commit

Submitted by Guennadi Liakhovetski
This lets drivers, optionally using the dmaengine, build with DMA_ENGINE unselected.

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 23 Dec 2010, 3 commits

Submitted by Jeff Mahoney
The taskstats structure is internally aligned on 8 byte boundaries but the layout of the aggregate reply, with two NLA headers and the pid (each 4 bytes), actually forces the entire structure to be unaligned. This causes the kernel to issue unaligned access warnings on some architectures like ia64.

Unfortunately, some software out there doesn't properly unroll the NLA packet and assumes that the start of the taskstats structure will always be 20 bytes from the start of the netlink payload. Aligning the start of the taskstats structure breaks this software, which we don't want. So, for now the alignment only happens on architectures that require it, and those users will have to update to fixed versions of those packages. Space is reserved in the packet only when needed. This ifdef should be removed in several years, e.g. 2012, once we can be confident that fixed versions are installed on most systems. We add the padding before the aggregate since the aggregate is already a defined type.

Commit 85893120 ("delayacct: align to 8 byte boundary on 64-bit systems") previously addressed the alignment issues by padding out the pid field. This was supposed to be a compatible change but the circumstances described above mean that it wasn't. This patch backs out that change, since it was a hack, and introduces a new NULL attribute type to provide the padding. Padding the response with 4 bytes avoids allocating an aligned taskstats structure and copying it back. Since the structure weighs in at 328 bytes, it's too big to do it on the stack.

Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Reported-by: Brian Rogers <brian@xyzw.org>
Cc: Jeff Mahoney <jeffm@suse.com>
Cc: Guillaume Chazarain <guichaz@gmail.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
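A hedged sketch of the padding idea described above: emit a zero-length NULL attribute before the aggregate, only on architectures that need it. The TASKSTATS_NEEDS_PADDING guard and the surrounding helper are assumptions that follow the description, not the literal patch:

#include <net/netlink.h>
#include <linux/errno.h>
#include <linux/taskstats.h>

/*
 * Sketch: reserve a zero-length NULL attribute so that the taskstats
 * aggregate that follows starts on an 8-byte boundary.  Only done where
 * unaligned access warns or faults.
 */
static int taskstats_pad_reply(struct sk_buff *skb)
{
#ifdef TASKSTATS_NEEDS_PADDING
        if (nla_put(skb, TASKSTATS_TYPE_NULL, 0, NULL) < 0)
                return -EMSGSIZE;
#endif
        return 0;
}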
-
Submitted by Will Newton
The current packed struct implementation of unaligned access adds the packed attribute only to the field within the unaligned struct rather than to the struct as a whole. This is not sufficient to enforce proper behaviour on architectures with a default struct alignment of more than one byte.

For example, the current implementation of __get_unaligned_cpu16, when compiled for arm with gcc -O1 -mstructure-size-boundary=32, assumes the struct is on a 4 byte boundary and so performs the load of the 16bit packed field as if it were on a 4 byte boundary:

__get_unaligned_cpu16:
        ldrh    r0, [r0, #0]
        bx      lr

Moving the packed attribute to the struct rather than the field causes the proper unaligned access code to be generated:

__get_unaligned_cpu16:
        ldrb    r3, [r0, #0]    @ zero_extendqisi2
        ldrb    r0, [r0, #1]    @ zero_extendqisi2
        orr     r0, r3, r0, asl #8
        bx      lr

Signed-off-by: Will Newton <will.newton@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
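A hedged C sketch of the difference being described, with illustrative struct and helper names (the real kernel header in include/linux/unaligned/ uses its own names):

#include <stdint.h>

/* Before: only the field is packed; the struct itself may still get the
 * architecture's default alignment (e.g. 4 bytes). */
struct una_u16_field_packed {
        uint16_t x __attribute__((packed));
};

/* After: packing the whole struct forces 1-byte alignment, so the compiler
 * must emit byte-wise loads on strict-alignment targets. */
struct una_u16_struct_packed {
        uint16_t x;
} __attribute__((packed));

static inline uint16_t get_unaligned_u16(const void *p)
{
        const struct una_u16_struct_packed *ptr = p;

        return ptr->x;
}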
-
Submitted by Don Zickus
The x86 arch has shifted its use of the nmi_watchdog from a local implementation to the global one provided by kernel/watchdog.c. This shift has caused a whole bunch of compile problems under different config options. I attempt to simplify things with the patch below.

In order to simplify things, I had to come to terms with the meaning of two terms: ARCH_HAS_NMI_WATCHDOG and CONFIG_HARDLOCKUP_DETECTOR. Basically they mean the same thing, the former on a local level and the latter on a global level. With the old x86 nmi watchdog gone, there is no need to rely on defining the ARCH_HAS_NMI_WATCHDOG variable because it doesn't make sense any more. x86 will now use the global implementation.

The changes below do a few things. First, it changes the few places that relied on ARCH_HAS_NMI_WATCHDOG to use CONFIG_X86_LOCAL_APIC (the former was an alias for the latter anyway, so nothing unusual here). Those pieces of code were relying more on local apic functionality than on nmi watchdog functionality, so the change should make sense.

Second, I removed the x86 implementation of touch_nmi_watchdog(). It isn't needed now; instead x86 will rely on kernel/watchdog.c's implementation.

Third, I removed the #define ARCH_HAS_NMI_WATCHDOG itself from x86, and tweaked the include/linux/nmi.h file to tell users to look for an externally defined touch_nmi_watchdog in the case of ARCH_HAS_NMI_WATCHDOG _or_ CONFIG_HARDLOCKUP_DETECTOR. This change removes some of the ugliness in that file.

Finally, I added a Kconfig dependency for CONFIG_HARDLOCKUP_DETECTOR that says you can't have ARCH_HAS_NMI_WATCHDOG _and_ CONFIG_HARDLOCKUP_DETECTOR. You can only have one nmi_watchdog.

Tested with
ARCH=i386: allnoconfig, defconfig, allyesconfig, (various broken configs)
ARCH=x86_64: allnoconfig, defconfig, allyesconfig, (various broken configs)

Hopefully, after this patch I won't get any more compile broken emails. :-)

v3: changed a couple of 'linux/nmi.h' -> 'asm/nmi.h' to pick up correct function prototypes when CONFIG_HARDLOCKUP_DETECTOR is not set.

Signed-off-by: Don Zickus <dzickus@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: fweisbec@gmail.com
LKML-Reference: <1293044403-14117-1-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
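A hedged sketch of the include/linux/nmi.h arrangement the text describes, where either symbol selects the externally provided touch_nmi_watchdog(); the fallback body shown for the "no watchdog" case is an assumption:

#include <linux/sched.h>

/* Sketch of the linux/nmi.h logic described above. */
#if defined(ARCH_HAS_NMI_WATCHDOG) || defined(CONFIG_HARDLOCKUP_DETECTOR)
#include <asm/nmi.h>
extern void touch_nmi_watchdog(void);
#else
static inline void touch_nmi_watchdog(void)
{
        /* No NMI watchdog available; keep the soft-lockup detector happy. */
        touch_softlockup_watchdog();
}
#endif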
-
- 22 Dec 2010, 1 commit

Submitted by Yong Zhang
The spinlock in kthread_worker and the wait_queue_head in kthread_work should both be visible to lockdep, so change the interface to make it suitable for CONFIG_LOCKDEP.

tj: comment update

Reported-by: Nicolas <nicolas.mailhot@laposte.net>
Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Signed-off-by: Andy Walls <awalls@md.metrocast.net>
Tested-by: Andy Walls <awalls@md.metrocast.net>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
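One common way to make such embedded locks visible to lockdep is to turn the initializer into a macro that supplies a static lock_class_key per instantiation site. A hedged sketch of that pattern; the __init_kthread_worker helper name and the key-name string are assumptions:

/* Sketch: give each init site its own lockdep class key. */
#define init_kthread_worker(worker)                                     \
do {                                                                    \
        static struct lock_class_key __key;                            \
        __init_kthread_worker((worker), "("#worker")->lock", &__key);  \
} while (0)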
-
- 21 Dec 2010, 1 commit

Submitted by Nicolas Pitre
The cnt32_to_63 algorithm relies on proper counter data evaluation ordering to work properly. This was missing from the provided documentation. Let's augment the documentation with the missing usage constraint and fix the only instance that got it wrong.

Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 18 Dec 2010, 7 commits

Submitted by Paul E. McKenney
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Submitted by Mariusz Kozlowski
This restores parentheses balance.

Signed-off-by: Mariusz Kozlowski <mk@lab.zgora.pl>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Submitted by Tejun Heo
The fix in commit #6a0cc49 requires more than three concurrent instances of synchronize_sched_expedited() before batching is possible. This patch uses a ticket-counter-like approach that is also not unrelated to Lai Jiangshan's Ring RCU to allow sharing of expedited grace periods even when there are only two concurrent instances of synchronize_sched_expedited().

This commit builds on Tejun's original posting, which may be found at http://lkml.org/lkml/2010/11/9/204, adding memory barriers, avoiding overflow of signed integers (other than via atomic_t), and fixing the detection of batching.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Submitted by Dmitry V. Levin
$ cat << EOF | gcc -Wconversion -xc -S -o/dev/null -
unsigned f(void) {return NLMSG_HDRLEN;}
EOF
<stdin>: In function 'f':
<stdin>:3:26: warning: negative integer implicitly converted to unsigned type

Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Bjorn Helgaas
This adds arch_remove_reservations(), which an arch can implement if it needs to protect part of the address space from allocation.

Sometimes that can be done by just putting a region in the resource tree, but there are cases where that doesn't work well. For example, x86 BIOS E820 reservations are not related to devices, so they may overlap part of, all of, or more than a device resource, so they may not end up at the correct spot in the resource tree.

Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
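Arch hooks like this are usually declared with a __weak default so that only architectures that need it override it. A hedged sketch of that pattern; the empty default body is an assumption:

#include <linux/ioport.h>

/*
 * Sketch: default no-op hook.  An architecture that needs to carve
 * reserved ranges out of 'avail' provides its own strong definition.
 */
void __weak arch_remove_reservations(struct resource *avail)
{
}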
-
Submitted by Bjorn Helgaas
This reverts commit e7f8567d.

Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
Submitted by Henry C Chang
For a read operation, we have to set the _write_ argument of get_user_pages to 1 since we will write data to the pages. Also, we need to SetPageDirty before releasing these pages.

Signed-off-by: Henry C Chang <henry_c_chang@tcloudcomputing.com>
Signed-off-by: Sage Weil <sage@newdream.net>
-
- 17 Dec 2010, 4 commits

Submitted by Mike Snitzer
Implement blk_limits_max_hw_sectors() and make blk_queue_max_hw_sectors() a wrapper around it.

DM needs this to avoid setting queue_limits' max_hw_sectors and max_sectors directly. dm_set_device_limits() now leverages blk_limits_max_hw_sectors() logic to establish the appropriate max_hw_sectors minimum (PAGE_SIZE).

Fixes issue where DM was incorrectly setting max_sectors rather than max_hw_sectors (which caused dm_merge_bvec()'s max_hw_sectors check to be ineffective).

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@kernel.org
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
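A hedged sketch of the wrapper relationship described above; where exactly the PAGE_SIZE-derived minimum is enforced, and that both max_hw_sectors and max_sectors are set together, are assumptions:

#include <linux/blkdev.h>

/* Sketch: enforce a PAGE_SIZE floor directly on a queue_limits struct,
 * so stacking drivers like DM can use it without a request_queue. */
void blk_limits_max_hw_sectors(struct queue_limits *limits,
                               unsigned int max_hw_sectors)
{
        if ((max_hw_sectors << 9) < PAGE_SIZE)
                max_hw_sectors = PAGE_SIZE >> 9;
        limits->max_hw_sectors = limits->max_sectors = max_hw_sectors;
}

/* The queue-based setter becomes a thin wrapper. */
void blk_queue_max_hw_sectors(struct request_queue *q,
                              unsigned int max_hw_sectors)
{
        blk_limits_max_hw_sectors(&q->limits, max_hw_sectors);
}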
-
Submitted by Martin K. Petersen
When stacking devices, a request_queue is not always available. This forced us to have a no_cluster flag in the queue_limits that could be used as a carrier until the request_queue had been set up for a metadevice.

There were several problems with that approach. First of all, it was up to the stacking device to remember to set the queue flag after stacking had completed. Also, the queue flag and the queue limits had to be kept in sync at all times. We got that wrong, which could lead to us issuing commands that went beyond the max scatterlist limit set by the driver.

The proper fix is to avoid having two flags for tracking the same thing. We deprecate QUEUE_FLAG_CLUSTER and use the queue limit directly in the block layer merging functions. The queue_limit 'no_cluster' is turned into 'cluster' to avoid double negatives and to ease stacking. Clustering defaults to being enabled as before. The queue flag logic is removed from the stacking function, and explicitly setting the cluster flag is no longer necessary in DM and MD.

Reported-by: Ed Lin <ed.lin@promise.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Submitted by Hauke Mehrtens
The nvram_get function was never in the mainline kernel; it only existed in an external OpenWrt patch. Use the nvram_getenv function, which is in mainline, and use an include instead of an extra function declaration.

et0macaddr contains the mac address in text form like 00:11:22:33:44:55. We have to parse it before adding it into macaddr. nvram_parse_macaddr will be merged into asm/mach-bcm47xx/nvram.h through the MIPS git tree and will be available soon. It will not build now without nvram_parse_macaddr, but it hasn't before either.

Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
To: linux-mips@linux-mips.org
Cc: mb@bu3sch.de
Cc: netdev@vger.kernel.org
Cc: Hauke Mehrtens <hauke@hauke-m.de>
Acked-by: Michael Buesch <mb@bu3sch.de>
Patchwork: https://patchwork.linux-mips.org/patch/1849/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Submitted by Rafael J. Wysocki
There are some situations (e.g. in __pm_generic_call()) where pm_runtime_suspended() is used to decide whether or not to execute a device's (system) ->suspend() callback. The callback is not executed if pm_runtime_suspended() returns true, but it does so for devices that don't even support runtime PM, because the power.disable_depth device field is ignored by it. This leads to problems (i.e. devices are not suspended when they should be), so rework pm_runtime_suspended() so that it returns false if the device's power.disable_depth field is different from zero.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: stable@kernel.org
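A hedged sketch of the reworked check as described, returning false whenever runtime PM is disabled for the device; treat this as an illustration of the logic rather than the exact helper in drivers/base/power:

#include <linux/device.h>
#include <linux/pm.h>

/* Sketch: only report "runtime suspended" when runtime PM is enabled. */
static inline bool device_is_runtime_suspended(struct device *dev)
{
        return dev->power.runtime_status == RPM_SUSPENDED
                && !dev->power.disable_depth;
}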
-
- 16 Dec 2010, 3 commits

Submitted by Peter Zijlstra
Simple sysfs enumeration of the PMUs.

Use an "event_source" bus, and add PMU devices using their name. Each PMU device has a type attribute which contains the value needed for perf_event_attr::type to identify this PMU.

This is the minimal stub needed to start using this interface; we'll consider extending the sysfs usage later.

Cc: Kay Sievers <kay.sievers@vrfy.org>
Cc: Greg KH <gregkh@suse.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20101117222056.316982569@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
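A hedged userspace sketch of how the type attribute would be consumed: read the PMU's type value from sysfs and feed it into perf_event_attr.type. The /sys/bus/event_source/devices/<pmu>/type path is inferred from the description:

#include <stdio.h>
#include <linux/perf_event.h>

/* Sketch: resolve a PMU's type id from sysfs for use with perf_event_open(). */
static int read_pmu_type(const char *pmu_name)
{
        char path[256];
        FILE *f;
        int type = -1;

        snprintf(path, sizeof(path),
                 "/sys/bus/event_source/devices/%s/type", pmu_name);
        f = fopen(path, "r");
        if (!f)
                return -1;
        if (fscanf(f, "%d", &type) != 1)
                type = -1;
        fclose(f);
        return type;
}

/* Usage (illustrative):
 *   struct perf_event_attr attr = { .type = read_pmu_type("cpu") };
 */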
-
Submitted by Peter Zijlstra
Extend the perf_pmu_register() interface to allow for named and dynamic pmu types.

Because we need to support the existing static types we cannot use dynamic types for everything, hence provide a type argument. If we want to enumerate the PMUs they need a name, so provide one.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20101117222056.259707703@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Alexey Zaytsev
To implement per event type optional headers we are interested in knowing how long the metadata structure is. This patch splits the __u32 version field into a __u8 version and a __u16 metadata_len field (with a __u8 left over). This should allow for a backwards compatible ABI.

Signed-off-by: Alexey Zaytsev <alexey.zaytsev@gmail.com>
[rewrote description and changed object sizes and ordering - eparis]
Signed-off-by: Eric Paris <eparis@redhat.com>
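A hedged sketch of the metadata header layout implied by the description; the field order and the reserved byte are assumptions, and linux/fanotify.h remains the authoritative definition:

#include <linux/types.h>

/* Sketch: the old __u32 version word split into version/reserved/metadata_len. */
struct fanotify_event_metadata_sketch {
        __u32 event_len;        /* total length of this event */
        __u8  vers;             /* ABI version */
        __u8  reserved;
        __u16 metadata_len;     /* length of this fixed header */
        __u64 mask;
        __s32 fd;
        __s32 pid;
};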
-
- 15 Dec 2010, 1 commit

Submitted by Dmitry Torokhov
The desire to keep the old names for EVIOCGKEYCODE/EVIOCSKEYCODE while extending them to support large scancodes was a mistake. While we tried to keep the ABI intact (and we succeeded in doing that; programs compiled on older kernels will work on newer ones) there is still a problem with recompiling existing software with newer kernel headers.

New kernel headers will supply updated ioctl numbers and the kernel will expect that userspace will use struct input_keymap_entry to set and retrieve keymap data. But since the names of the ioctls are still the same, userspace will happily compile even if not adjusted to make use of the new structure, and will start mysteriously failing in the field.

To avoid this issue let's revert the EVIOCGKEYCODE/EVIOCSKEYCODE definitions and add EVIOCGKEYCODE_V2/EVIOCSKEYCODE_V2 so that userspace can explicitly select the style of ioctls it wants to employ.

Reviewed-by: Henrik Rydberg <rydberg@euromail.se>
Acked-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
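A hedged userspace sketch of opting into the V2 style with struct input_keymap_entry; the wrapper function and 32-bit scancode handling are illustrative assumptions:

#include <linux/input.h>
#include <string.h>
#include <sys/ioctl.h>

/* Sketch: query the keycode bound to a (possibly large) scancode via the
 * V2 ioctl.  'fd' is an opened evdev device node. */
static int get_keycode_v2(int fd, unsigned int scancode, unsigned int *keycode)
{
        struct input_keymap_entry ke;

        memset(&ke, 0, sizeof(ke));
        ke.len = sizeof(scancode);
        memcpy(ke.scancode, &scancode, sizeof(scancode));

        if (ioctl(fd, EVIOCGKEYCODE_V2, &ke) < 0)
                return -1;

        *keycode = ke.keycode;
        return 0;
}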
-
- 14 Dec 2010, 1 commit

Submitted by Suresh Siddha
Add an alloc_bootmem_align() interface to allocate bootmem with specified alignment. This is necessary to be able to allocate the xsave area in a subsequent patch.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <20101116212441.977574826@sbsiddha-MOBL3.sc.intel.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@kernel.org>
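A hedged usage sketch of the new helper; the size and 64-byte alignment shown are illustrative values, not taken from the xsave follow-up patch:

#include <linux/bootmem.h>
#include <linux/init.h>

/* Sketch: boot-time allocation with an explicit alignment requirement. */
static void __init example_boot_alloc(void)
{
        void *buf = alloc_bootmem_align(4096, 64);

        /* ... hand 'buf' to whatever needs the aligned area ... */
        (void)buf;
}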
-
- 11 Dec 2010, 3 commits

Submitted by Len Brown
drivers/built-in.o: In function `acpi_video_bus_put_devices':
video.c:(.text+0x79663): undefined reference to `video_output_unregister'
drivers/built-in.o: In function `acpi_video_bus_add':
video.c:(.text+0x7b0b3): undefined reference to `video_output_register'

Signed-off-by: Len Brown <len.brown@intel.com>
-
Submitted by Lin Ming
commit b0ed7a91 ("ACPICA/ACPI: Add new host interfaces for _OSI support") introduced a regression that made _OSI string setup fail.

There are 2 paths to set up the _OSI string.

DMI:
  acpi_dmi_osi_linux -> set_osi_linux -> acpi_osi_setup -> copy _OSI string to osi_setup_string

Boot command line:
  acpi_osi_setup -> copy _OSI string to osi_setup_string

Later, acpi_osi_setup_late will be called to handle osi_setup_string. If the _OSI string is "Linux" or "!Linux", then the call path is:

  acpi_osi_setup_late -> acpi_cmdline_osi_linux -> set_osi_linux -> acpi_osi_setup -> copy _OSI string to osi_setup_string

This actually never installs the _OSI string (acpi_install_interface is not called), but just copies the _OSI string to osi_setup_string. This patch fixes the regression.

Reported-and-tested-by: Lukas Hejtmanek <xhejtman@ics.muni.cz>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Submitted by Dan Williams
The ATM subsystem was incorrectly creating the 'device' link for ATM nodes in sysfs. This led to incorrect device/parent relationships exposed by sysfs and udev. Instead of rolling the 'device' link by hand in the generic ATM code, pass each ATM driver's bus device down to the sysfs code and let sysfs do this stuff correctly.

Signed-off-by: Dan Williams <dcbw@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Dec 2010, 1 commit

Submitted by Don Zickus
My patch that removed the old x86 nmi watchdog broke other arches. This change reverts a piece of that patch and puts the change in the correct spot.

Signed-off-by: Don Zickus <dzickus@redhat.com>
Reviewed-by: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: fweisbec@gmail.com
Cc: yinghai@kernel.org
LKML-Reference: <1291068437-5331-2-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 09 Dec 2010, 4 commits

Submitted by Tom Herbert
Rather than printing the message to the log, use a mib counter to keep track of the count of occurrences of time wait bucket overflow. Reduces spam in logs.

Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
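A hedged sketch of the counter-bumping pattern this describes; the LINUX_MIB_TCPTIMEWAITOVERFLOW name is my assumption of what such a MIB entry would be called:

#include <net/net_namespace.h>
#include <net/ip.h>

/* Sketch: account a time-wait bucket overflow instead of logging it. */
static void account_tw_overflow(struct net *net)
{
        NET_INC_STATS_BH(net, LINUX_MIB_TCPTIMEWAITOVERFLOW);
}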
-
Submitted by Dario Faggioli
As noted by Peter Zijlstra at https://lkml.org/lkml/2010/11/10/391 (while reviewing other stuff, though), tracking pushable tasks only makes sense on SMP systems.

Signed-off-by: Dario Faggioli <raistlin@linux.it>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1291143093.2697.298.camel@Palantir>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Zijlstra
There's a long-running regression that proved difficult to fix and which is hitting certain people and is rather annoying in its effects.

Damien reported that after 74f5187a (sched: Cure load average vs NO_HZ woes) his load average is unnaturally high; he also noted that even with that patch reverted the load average numbers are not correct.

The problem is that the previous patch only solved half the NO_HZ problem: it addressed the part of going into NO_HZ mode, not of coming out of NO_HZ mode. This patch implements that missing half.

When coming out of NO_HZ mode there are two important things to take care of:

 - Folding the pending idle delta into the global active count.
 - Correctly aging the averages for the idle-duration.

So with this patch the NO_HZ interaction should be complete and behaviour between CONFIG_NO_HZ=[yn] should be equivalent.

Furthermore, this patch slightly changes the load average computation by adding a rounding term to the fixed point multiplication.

Reported-by: Damien Wyart <damien.wyart@free.fr>
Reported-by: Tim McGrath <tmhikaru@gmail.com>
Tested-by: Damien Wyart <damien.wyart@free.fr>
Tested-by: Orion Poplawski <orion@cora.nwra.com>
Tested-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: stable@kernel.org
Cc: Chase Douglas <chase.douglas@canonical.com>
LKML-Reference: <1291129145.32004.874.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Zijlstra
Because the multi-pmu bits can share contexts between struct pmu instances, we could get duplicate events by iterating the pmu list.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 Dec 2010, 5 commits

Submitted by Trond Myklebust
When an nfs_page is freed, nfs_free_request is called, which also calls nfs_clear_request to clean out the lock and open contexts and free the pagecache page.

However, a couple of places in the nfs code call nfs_clear_request themselves. What happens here if the refcount on the request is still high? We'll be releasing contexts and freeing pointers while the request is possibly still in use.

Remove those bare calls to nfs_clear_request. That should only be done when the request is being freed.

Note that when doing this, we need to watch out for tests of req->wb_page. Previously, nfs_set_page_tag_locked() and nfs_clear_page_tag_locked() would check the value of req->wb_page to figure out if the page is mapped into the nfsi->nfs_page_tree. We now indicate the page is mapped using the new bit PG_MAPPED in req->wb_flags.

Reported-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Lino Sanfilippo
FAN_NOFD is used in fanotify events that do not provide an open file descriptor (like the overflow_event).

Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Signed-off-by: Eric Paris <eparis@redhat.com>
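A hedged sketch of how a fanotify listener might treat such events in userspace; the overflow-handling action shown is illustrative:

#include <stdio.h>
#include <unistd.h>
#include <sys/fanotify.h>

/* Sketch: skip fd-based handling for events that carry no descriptor. */
static void handle_event(const struct fanotify_event_metadata *md)
{
        if (md->fd == FAN_NOFD) {
                /* e.g. the queue-overflow event: nothing to close or stat. */
                fprintf(stderr, "fanotify event without fd (mask 0x%llx)\n",
                        (unsigned long long)md->mask);
                return;
        }

        /* ... normal handling using md->fd ... */
        close(md->fd);
}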
-
Submitted by Lino Sanfilippo
When fanotify_release() is called, there may still be processes waiting for access permission. Currently only processes for which an event has already been queued into the group's access list will be woken up. Processes for which no event has been queued will continue to sleep and thus cause a deadlock when fsnotify_put_group() is called. Furthermore there is a race allowing further processes to be waiting on the access wait queue after wake_up (if they arrive before clear_marks_by_group() is called).

This patch corrects this by setting a flag to inform processes that the group is about to be destroyed and thus not to wait for access permission.

[additional changelog from eparis]

Let's think about the 4 relevant code paths from the PoV of the 'operator', 'listener', 'responder' and 'closer'. Where operator is the process doing an action (like open/read) which could require permission. Listener is the task (or in this case thread) slated with reading from the fanotify file descriptor. The 'responder' is the thread responsible for responding to access requests. 'Closer' is the thread attempting to close the fanotify file descriptor.

The 'operator' is going to end up in:
fanotify_handle_event()
get_response_from_access()
(THIS BLOCKS WAITING ON USERSPACE)

The 'listener' interesting code path:
fanotify_read()
copy_event_to_user()
prepare_for_access_response()
(THIS CREATES AN fanotify_response_event)

The 'responder' code path:
fanotify_write()
process_access_response()
(REMOVE A fanotify_response_event, SET RESPONSE, WAKE UP 'operator')

The 'closer':
fanotify_release()
(SUPPOSED TO CLEAN UP THE REST OF THIS MESS)

What we have today is that in the closer we remove all of the fanotify_response_events and set a bit so no more response events are ever created in prepare_for_access_response(). The bug is that we never wake all of the operators up and tell them to move along. You fix that in fanotify_get_response_from_access(). You also fix other operators which haven't gotten there yet. So I agree that's a good fix.

[/additional changelog from eparis]
[remove additional changes to minimize patch size]
[move initialization so it was inside CONFIG_FANOTIFY_PERMISSION]

Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Signed-off-by: Eric Paris <eparis@redhat.com>
-
Submitted by Lino Sanfilippo
Unsetting FMODE_NONOTIFY in fsnotify_open() is too late, since fsnotify_perm() is called before. If FMODE_NONOTIFY is set, fsnotify_perm() will skip permission checks, so a user can still disable permission checks by setting this flag in an open() call. This patch corrects this by unsetting the flag before fsnotify_perm is called.

Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Signed-off-by: Eric Paris <eparis@redhat.com>
-
Submitted by Eric Paris
fanotify has decided to be careful about alignment and packing rather than rely on __attribute__((packed)) for multiarch support. Since this attribute isn't doing anything on fanotify_response, we just drop it. This does not break API/ABI.

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@sophos.com>
Signed-off-by: Eric Paris <eparis@redhat.com>
-
- 07 Dec 2010, 4 commits

Submitted by Gabor Juhos
The existing gpio-keys driver can only be used for GPIO lines with interrupt support. Several devices have buttons connected to a GPIO line which is not capable of generating interrupts. This patch adds a new input driver using the generic GPIO layer and input-polldev to support such buttons.

[Ben Gardiner <bengardiner@nanometrics.ca>: fold code to use more of the original gpio_keys infrastructure; cleanups and other improvements.]

Signed-off-by: Gabor Juhos <juhosg@openwrt.org>
Signed-off-by: Ben Gardiner <bengardiner@nanometrics.ca>
Tested-by: Ben Gardiner <bengardiner@nanometrics.ca>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
-
Submitted by Rafael J. Wysocki
There is a problem that swap pages allocated before the creation of a hibernation image can be released and used for storing the contents of different memory pages while the image is being saved. Since the kernel stored in the image doesn't know of that, it causes memory corruption to occur after resume from hibernation, especially on systems with relatively small RAM that need to swap often.

This issue can be addressed by keeping the GFP_IOFS bits clear in gfp_allowed_mask during the entire hibernation, including the saving of the image, until the system is finally turned off or the hibernation is aborted. Unfortunately, for this purpose it's necessary to rework the way in which the hibernate and suspend code manipulates gfp_allowed_mask.

This change is based on an earlier patch from Hugh Dickins.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reported-by: Ondrej Zary <linux@rainbow-software.org>
Acked-by: Hugh Dickins <hughd@google.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: stable@kernel.org
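A hedged sketch of the restrict/restore pair such a rework would revolve around; the helper names are my assumption of how the mask manipulation gets factored out:

#include <linux/gfp.h>

static gfp_t saved_gfp_mask;

/* Sketch: forbid I/O- and FS-backed allocations for the whole hibernation. */
void pm_restrict_gfp_mask(void)
{
        saved_gfp_mask = gfp_allowed_mask;
        gfp_allowed_mask &= ~GFP_IOFS;
}

/* Sketch: undo the restriction once hibernation finishes or is aborted. */
void pm_restore_gfp_mask(void)
{
        if (saved_gfp_mask) {
                gfp_allowed_mask = saved_gfp_mask;
                saved_gfp_mask = 0;
        }
}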
-
Submitted by Masami Hiramatsu
Use text_poke_smp_batch() on the unoptimization path to reduce the number of stop_machine() invocations. If the number of unoptimizing probes is more than MAX_OPTIMIZE_PROBES (=256), kprobes unoptimizes the first MAX_OPTIMIZE_PROBES probes and kicks the optimizer for the remaining probes.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: 2nddept-manager@sdl.hitachi.co.jp
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <20101203095434.2961.22657.stgit@ltc236.sdl.hitachi.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Masami Hiramatsu
Use text_poke_smp_batch() in the optimization path to reduce the number of stop_machine() invocations. If the number of optimizing probes is more than MAX_OPTIMIZE_PROBES (=256), kprobes optimizes the first MAX_OPTIMIZE_PROBES probes and kicks the optimizer for the remaining probes.

Changes in v5:
- Use kick_kprobe_optimizer() instead of directly calling schedule_delayed_work().
- Rescheduling optimizer outside of kprobe mutex lock.

Changes in v2:
- Allocate code buffer and parameters in arch_init_kprobes() instead of using static arrays.
- Merge previous max optimization limit patch into this patch. So, this patch introduces an upper limit on how many probes are optimized at once.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: 2nddept-manager@sdl.hitachi.co.jp
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <20101203095428.2961.8994.stgit@ltc236.sdl.hitachi.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-