- 20 Jun 2006, 8 commits
-
-
Committed by Al Viro
We should not send a pile of replies while holding audit_netlink_mutex, since we hold the same mutex when we receive commands. As a result, we can get blocked while sending and sit there holding the mutex while auditctl is unable to send the next command and get around to receiving what we'd sent. Solution: create the skbs and put them on a queue instead of sending them; once we are done, send what we've got on the list. The former can be done synchronously while we are handling AUDIT_LIST or AUDIT_LIST_RULES; we are holding audit_netlink_mutex at that point. The latter is done asynchronously and without messing with audit_netlink_mutex. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
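A minimal sketch of the queue-then-flush pattern described above; the helper and variable names (reply_queue, queue_reply, flush_replies) are illustrative assumptions, only the skb/netlink calls are real kernel APIs:

```c
/* Sketch only: queue replies while the mutex is held, flush them after
 * it has been dropped. The queue must have been set up earlier with
 * skb_queue_head_init(). Names are hypothetical. */
#include <linux/skbuff.h>
#include <linux/netlink.h>

static struct sk_buff_head reply_queue;        /* assumed per-request queue */

/* Called with audit_netlink_mutex held: do NOT send, just remember the skb. */
static void queue_reply(struct sk_buff *skb)
{
	skb_queue_tail(&reply_queue, skb);
}

/* Called after audit_netlink_mutex has been dropped, so a slow receiver
 * cannot block command processing. */
static void flush_replies(struct sock *sk, u32 pid)
{
	struct sk_buff *skb;

	while ((skb = skb_dequeue(&reply_queue)) != NULL)
		netlink_unicast(sk, skb, pid, 0);
}
```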
-
Committed by Al Viro
... no need to provide a stub; note that the extern is already gone from include/linux/audit.h. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Amy Griffis
Update the kernel documentation to include a description of the inotify kernel API. Signed-off-by: Amy Griffis <amy.griffis@hp.com> Acked-by: Robert Love <rml@novell.com> Acked-by: John McCutchan <john@johnmccutchan.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Amy Griffis
Allow callers to remove watches from their event handler via inotify_remove_watch_locked(). This can be used to achieve IN_ONESHOT-like behaviour for a subset of the events in the mask. Signed-off-by: Amy Griffis <amy.griffis@hp.com> Acked-by: Robert Love <rml@novell.com> Acked-by: John McCutchan <john@johnmccutchan.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Amy Griffis
Add inotify_init_watch() so a caller can use inotify_watch refcounts before calling inotify_add_watch(). Add inotify_find_watch() to find an existing watch for an (ih, inode) pair; this is similar to inotify_find_update_watch(), but does not update the watch's mask if one is found. Add inotify_rm_watch() to remove a watch via the watch pointer instead of the watch descriptor. Signed-off-by: Amy Griffis <amy.griffis@hp.com> Acked-by: Robert Love <rml@novell.com> Acked-by: John McCutchan <john@johnmccutchan.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
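A rough usage sketch of the new calls; this kernel-internal inotify API was later removed, and the prototypes below are assumed from the description above, so treat them as approximations rather than the exact signatures:

```c
/* Sketch only: assumed prototypes for the in-kernel inotify API. */
#include <linux/inotify.h>
#include <linux/slab.h>
#include <linux/fs.h>

static struct inotify_handle *ih;   /* assumed: obtained earlier from inotify_init() */

static int watch_or_unwatch(struct inode *inode, u32 mask)
{
	struct inotify_watch *watch;
	s32 wd;

	/* If a watch already exists for (ih, inode), inotify_find_watch()
	 * returns it (assumed to take a reference) without touching its mask. */
	if (inotify_find_watch(ih, inode, &watch) >= 0) {
		inotify_rm_watch(ih, watch);    /* remove by pointer, not by wd */
		put_inotify_watch(watch);       /* drop the reference we got back */
		return 0;
	}

	watch = kmalloc(sizeof(*watch), GFP_KERNEL);
	if (!watch)
		return -ENOMEM;

	inotify_init_watch(watch);              /* set up the refcount first */
	wd = inotify_add_watch(ih, watch, inode, mask);
	return wd < 0 ? wd : 0;
}
```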
-
Committed by Amy Griffis
When an inotify event includes a dentry name, also include the inode associated with that name. Signed-off-by: Amy Griffis <amy.griffis@hp.com> Acked-by: Robert Love <rml@novell.com> Acked-by: John McCutchan <john@johnmccutchan.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Amy Griffis
The following series of patches introduces a kernel API for inotify, making it possible for kernel modules to benefit from inotify's mechanism for watching inodes. With these patches, inotify maintains for each caller a list of watches (via an embedded struct inotify_watch), where each inotify_watch is associated with a corresponding struct inode. The caller registers an event handler and specifies, per inotify_watch, for which filesystem events the handler should be called. Signed-off-by: Amy Griffis <amy.griffis@hp.com> Acked-by: Robert Love <rml@novell.com> Acked-by: John McCutchan <john@johnmccutchan.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
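A hedged sketch of what "registers an event handler" looks like from a module's point of view; the struct inotify_operations callbacks and the inotify_init() form shown here are reconstructed assumptions, not quoted from the patch:

```c
/* Sketch only: embed an inotify_watch in caller-private data and
 * register an event handler. Callback signatures are assumed. */
#include <linux/inotify.h>
#include <linux/kernel.h>
#include <linux/slab.h>

struct my_watch {
	struct inotify_watch watch;     /* embedded, as the series describes */
	/* caller-private state ... */
};

static void my_handle_event(struct inotify_watch *w, u32 wd, u32 mask,
			    u32 cookie, const char *name, struct inode *inode)
{
	struct my_watch *mw = container_of(w, struct my_watch, watch);

	pr_debug("inotify event 0x%x on watch %p\n", mask, mw);
}

static void my_destroy_watch(struct inotify_watch *w)
{
	kfree(container_of(w, struct my_watch, watch));
}

static const struct inotify_operations my_inotify_ops = {
	.handle_event  = my_handle_event,
	.destroy_watch = my_destroy_watch,
};

/* In module init (error handling elided):
 *     ih = inotify_init(&my_inotify_ops);
 */
```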
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 18 Jun 2006, 9 commits
-
-
Committed by Linus Torvalds
Being named "Crazed Snow-Weasel" instills a lot of confidence in this release, so I'm sure this will be one of the better ones.
-
Committed by Arnd Bergmann
Reflect the fact that the Cell Broadband Engine supports 64k pages by adding the bit to the CPU features. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Arnd Bergmann
The page size encoding passed to tlbie is incorrect for new-style large pages. This fixes it. This doesn't affect anything on older machines because mmu_psize_defs[psize].penc (the page size encoding) is 0 for 4k and 16M pages (the two are distinguished by a separate "is a large page" bit). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Oleg Nesterov
arm_timer() checks PF_EXITING to prevent BUG_ON(->exit_state) in run_posix_cpu_timers(). However, for some reason it does so only in the CPUCLOCK_PERTHREAD case (which is imho wrong). Also, this check is not reliable: PF_EXITING could be set on another cpu, without any locks/barriers, just after the check, so it can't prevent the timer from being attached to the exiting task. The previous patch makes this check unneeded. Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Oleg Nesterov
do_exit() clears ->it_##clock##_expires, but nothing prevents another cpu from attaching the timer to the exiting process after that. arm_timer() tries to protect against this race, but the check is racy. After exit_notify() does 'write_unlock_irq(&tasklist_lock)' and before do_exit() calls 'schedule()', a local timer interrupt can find tsk->exit_state != 0. If that state was EXIT_DEAD (or another cpu does sys_wait4), the interrupted task has ->signal == NULL. At this moment the exiting task has no pending cpu timers; they were cleaned up in __exit_signal()->posix_cpu_timers_exit{,_group}(), so we can just return from the irq. John Stultz recently confirmed this bug, see http://marc.theaimsgroup.com/?l=linux-kernel&m=115015841413687 Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
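The "we can just return from the irq" reasoning above suggests a guard of roughly this shape; the exact condition used by the real fix may differ, so this is only an illustration:

```c
/* Sketch only: bail out of the timer-interrupt path once the exiting
 * task's CPU timers have been torn down in __exit_signal(). */
void run_posix_cpu_timers(struct task_struct *tsk)
{
	/* ->signal is NULL after __exit_signal(), so there is nothing
	 * left to expire or rearm for this task. */
	if (unlikely(tsk->exit_state && tsk->signal == NULL))
		return;

	/* ... normal per-thread and per-process expiry work ... */
}
```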
-
Committed by Oleg Nesterov
If the local timer interrupt happens just after do_exit() sets PF_EXITING (and before it clears ->it_xxx_expires), run_posix_cpu_timers() will call check_process_timers() with tasklist_lock + ->siglock held, and in check_process_timers:

    t = tsk;
    do {
        ....
        do {
            t = next_thread(t);
        } while (unlikely(t->flags & PF_EXITING));
    } while (t != tsk);

the outer loop will never stop. Actually, the window is bigger. Another process can attach the timer after ->it_xxx_expires was cleared (see the next commit), and the 'if (PF_EXITING)' check in arm_timer() is racy (see the one after that). Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Stephen Hemminger
A couple of fixes that should prevent crashes when using netconsole and suspend/resume. First, the netconsole poll routine shouldn't run unless the device is up; second, the NAPI poll should be disabled during suspend. This is only an issue on sky2 because it has to have one NAPI poll routine for both ports on dual-port boards; normal drivers use netif_rx_schedule_prep, which checks netif_running. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
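A minimal sketch of the "don't poll unless the device is up" rule; the driver names here are placeholders, and only netif_running(), disable_irq() and enable_irq() are real kernel helpers:

```c
/* Sketch only: a netconsole poll-controller that refuses to touch the
 * hardware while the interface is down (e.g. across suspend/resume). */
static void example_poll_controller(struct net_device *dev)
{
	if (!netif_running(dev))        /* interface down: nothing to poll */
		return;

	disable_irq(dev->irq);
	example_intr(dev->irq, dev, NULL);   /* hypothetical interrupt handler */
	enable_irq(dev->irq);
}
```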
-
Committed by Jens Axboe
If get_user_pages() returns fewer pages than we asked for, we jump to out_unmap, which will return ERR_PTR(ret). But ret can contain a positive number just smaller than local_nr_pages, so be sure to set it to -EFAULT always. Problem found and diagnosed by Damien Le Moal <damien@sdl.hitachi.co.jp>. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
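A hedged fragment of the fix as described: never let a positive short count from get_user_pages() reach ERR_PTR(); the variable names mirror the message, the enclosing function is elided:

```c
/* Sketch only: map user pages and normalise a short result to -EFAULT. */
ret = get_user_pages(current, current->mm, uaddr, local_nr_pages,
		     write_to_vm, 0, &pages[cur_page], NULL);
if (ret < local_nr_pages) {
	ret = -EFAULT;          /* was: a positive count leaked into ERR_PTR(ret) */
	goto out_unmap;
}
```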
-
Committed by Jens Axboe
Some time ago the cdrom open routine was changed so that we call the driver's open routine before checking to see if it is read only. However, if we discovered that a read-write open was not possible and the open flags required a writable open, we just returned -EROFS without calling the driver's release routine. This seems to work for most cdrom drivers, but breaks the PowerPC iSeries virtual cdrom rather badly. This just inserts the release call in the error path to balance the call to "->open()" done by "open_for_data()". Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
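The balancing call can be sketched like this; the surrounding checks are heavily simplified, and the locals (cdi, mode, read_only) as well as the ops layout are assumptions:

```c
/* Sketch only: undo the ->open() done by open_for_data() when a
 * writable open cannot be granted, instead of bailing out early. */
ret = open_for_data(cdi);               /* calls cdi->ops->open() inside */
if (ret)
	goto err;

if ((mode & FMODE_WRITE) && read_only) {
	cdi->ops->release(cdi);         /* balance the open above */
	ret = -EROFS;
	goto err;
}
```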
-
- 15 Jun 2006, 1 commit
-
-
Committed by Jens Axboe
We don't clear the seek stat values in cfq_alloc_io_context(), and if ->seek_mean is unlucky enough to be set to -36 by chance, the first invocation of cfq_update_io_seektime() will oops with a divide by zero in do_div(). Just memset the entire cic instead of filling individual values independently. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
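A hedged sketch of the memset approach; the slab-cache name is an assumption and the rest of the allocator is elided:

```c
/* Sketch only: zero the whole cfq_io_context so stale seek statistics
 * (e.g. a garbage seek_mean of -36) can never reach do_div(). */
static struct cfq_io_context *alloc_cic_sketch(gfp_t gfp_mask)
{
	struct cfq_io_context *cic;

	cic = kmem_cache_alloc(cfq_ioc_pool, gfp_mask);   /* assumed cache name */
	if (cic)
		memset(cic, 0, sizeof(*cic));   /* instead of per-field init */
	return cic;
}
```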
-
- 14 Jun 2006, 6 commits
-
-
Committed by Kirill Korotaev
If flock_lock_file() failed to allocate a lock with locks_alloc_lock(), then "error = 0" is returned. We need to return some non-zero error instead. Signed-off-by: Pavel Emelianov <xemul@openvz.org> Signed-off-by: Kirill Korotaev <dev@openvz.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
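A hedged fragment of the corrected error path; -ENOMEM is the natural error for a failed locks_alloc_lock(), though the message does not quote the value the patch actually chose:

```c
/* Sketch only: report the allocation failure instead of returning 0. */
error = -ENOMEM;
new_fl = locks_alloc_lock();
if (new_fl == NULL)
	goto out;               /* previously this path returned error = 0 */
error = 0;
/* ... proceed to install the new flock ... */
```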
-
Committed by Stephen Hemminger
The resume bug was caused not by an early interrupt but by the idle timeout not being stopped on suspend. Also disable hardware IRQs on suspend. Will need to revisit this with hotplug? Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Stephen Hemminger
The hardware should be fully shut off during suspend, and the base irq mask restored during resume. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Stephen Hemminger
If the poll routine detects no hardware available, it needs to dequeue itself from the network poll list. Linus didn't understand NAPI. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Stephen Hemminger
It is cleaner not to loop over both ports if only one exists. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Stephen Hemminger
The set power state function is cleaner if it doesn't return anything. The only caller that could fail is in suspend(), and it can check the argument there. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 13 Jun 2006, 8 commits
-
-
Committed by Randy Dunlap
From: Randy Dunlap <rdunlap@xenotime.net> According to include/asm-alpha/bitops.h, only ALPHA_EV67 has hardware hweight support, so ALPHA_EV6 needs to use GENERIC_HWEIGHT. Signed-off-by: Randy Dunlap <rdunlap@xenotime.net> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Ernst Herzberg <earny@net4u.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Sergey Vlasov
shmem_rmdir() must undo the increment of i_nlink done in shmem_get_inode() for directories, otherwise at least IN_DELETE_SELF inotify event generation is broken. Signed-off-by: Sergey Vlasov <vsu@altlinux.ru> Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Robin H. Johnson
I noticed a strange behavior in a tmpfs file system the other day, while building packages: occasionally, and seemingly at random, make decided to rebuild a target. However, only on tmpfs. A file would be created, and if checked, it had a sub-second timestamp. However, after an utimes-related call where sub-seconds should be set, they were zeroed instead. In the case that a file was created and utimes(...,NULL) was used on it in the same second, the timestamp on the file moved backwards. After some digging, I found that this was being caused by tmpfs not having a time granularity set, thus inheriting the default 1 second granularity. Hugh adds: yes, we missed tmpfs when the s_time_gran mods went into 2.6.11. Unfortunately, the granularity of CURRENT_TIME, often used in filesystems, does not match the default granularity set by alloc_super(). A few more such discrepancies have been found, but this is the most important one to fix now. Signed-off-by: Robin H. Johnson <robbat2@gentoo.org> Acked-by: Andi Kleen <ak@suse.de> Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
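The fix amounts to one assignment in the tmpfs superblock setup; a hedged sketch (everything but the s_time_gran line is elided):

```c
/* Sketch only: give tmpfs nanosecond timestamp granularity so it
 * matches CURRENT_TIME instead of the 1-second alloc_super() default. */
static int shmem_fill_super(struct super_block *sb, void *data, int silent)
{
	/* ... existing tmpfs setup elided ... */
	sb->s_time_gran = 1;    /* 1 ns granularity */
	/* ... */
	return 0;
}
```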
-
Committed by Linus Torvalds
* master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6: [SPARC64]: Do not double-export sys_close() when CONFIG_SOLARIS_EMUL_MODULE
-
Committed by Linus Torvalds
* master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6:
  [IPV4]: Increment ipInHdrErrors when TTL expires.
  [TCP]: continued: reno sacked_out count fix
  [DCCP] Ackvec: fix soft lockup in ackvec handling code
-
Committed by Linus Torvalds
* master.kernel.org:/home/rmk/linux-2.6-arm:
  [ARM] Fix Integrator and Versatile interrupt initialisation
  [ARM] 3546/1: PATCH: subtle lost interrupts bug on i.MX
  [ARM] 3547/1: PXA-OHCI: Allow platforms to specify a power budget
  [ARM] Fix Neponset IRQ handling
-
Committed by Weidong
Signed-off-by: Weidong <weid@nanjing-fnst.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Linus Torvalds
This fixes two independent problems: it would not save the PCI state on suspend (and thus try to resume a nonexistent state on resume), and, while shut off, if an interrupt happened on the same shared irq, the irq handler would react very badly to the interrupt status being an invalid all-ones state. Acked-by: Jeff Garzik <jgarzik@pobox.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
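Both halves of the fix can be sketched as below; the driver, struct and register names are placeholders, while pci_save_state(), pci_choose_state() and the all-ones status check are the real techniques being described:

```c
/* Sketch only: save PCI state on suspend and tolerate a powered-off
 * device on a shared interrupt line. */
struct example_priv {
	void __iomem *regs;             /* hypothetical driver state */
};
#define EXAMPLE_ISR 0x10                /* placeholder register offset */

static int example_suspend(struct pci_dev *pdev, pm_message_t state)
{
	pci_save_state(pdev);           /* so resume restores a real state */
	pci_set_power_state(pdev, pci_choose_state(pdev, state));
	return 0;
}

static irqreturn_t example_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct example_priv *priv = dev_id;
	u32 status = readl(priv->regs + EXAMPLE_ISR);

	if (status == 0xffffffff)       /* device shut off: not our interrupt */
		return IRQ_NONE;
	/* ... normal interrupt handling ... */
	return IRQ_HANDLED;
}
```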
-
- 12 Jun 2006, 8 commits
-
-
Committed by Linus Torvalds
* 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/libata-dev: [PATCH] sata_mv: grab host lock inside eng_timeout
-
Committed by Aki M Nyrhinen
From: Aki M Nyrhinen <anyrhine@cs.helsinki.fi> IMHO the current fix to the problem (in_flight underflow in reno) is incorrect: it treats the symptoms but ignores the problem. The problem is timing out packets other than the head packet when we don't have sack. I'll try to explain (sorry if I'm explaining the obvious). With sack, scanning the retransmit queue for timed-out packets is fine, because we know which packets in our retransmit queue have been acked by the receiver. Without sack, we know only how many packets in our retransmit queue the receiver has acknowledged, but have no idea which packets. Think of a "typical" slow-start overshoot case, where for example every third packet in a window gets lost because a router buffer gets full. With sack, we check for timeouts only on those every-third packets (as the rest have been sacked). The packet counting works out, and if there is no reordering, we'll retransmit exactly the packets that were lost. Without sack, however, we check for timeout on every packet and end up retransmitting consecutive packets in the retransmit queue. In our slow-start example, 2/3 of those retransmissions are unnecessary. These unnecessary retransmissions eat the congestion window and eventually prevent fast recovery from continuing, if enough packets were lost. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Andrea Bittau
A soft lockup existed in the handling of ack vector records. Specifically, when a tail of the list of ack vector records was removed, it was possible to end up iterating infinitely on an element of the tail. Signed-off-by: Andrea Bittau <a.bittau@cs.ucl.ac.uk> Signed-off-by: Ian McDonald <ian.mcdonald@jandi.co.nz> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
It is already exported by fs/open.c. Noticed by Ben Collins. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Paul Mackerras
People have been reporting that PPP connections over ptys, such as used with PPTP, will hang randomly when transferring large amounts of data, for instance in http://bugzilla.kernel.org/show_bug.cgi?id=6530. I have managed to reproduce the problem, and the patch below fixes the actual cause. The problem is not in fact in ppp_async.c but in n_tty.c. What happens is that when pptp reads from the pty, we call read_chan() in drivers/char/n_tty.c on the master side of the pty. That copies all the characters out of its buffer to userspace and then calls check_unthrottle(), which calls the pty unthrottle routine, which calls tty_wakeup on the slave side, which calls ppp_asynctty_wakeup, which calls tasklet_schedule. So far so good. Since we are in process context, the tasklet runs immediately and calls ppp_async_process(), which calls ppp_async_push, which calls the tty->driver->write function to send some more output. However, tty->driver->write() returns zero, because the master tty->receive_room is still zero: we haven't returned from check_unthrottle() yet, and read_chan() only updates tty->receive_room _after_ calling check_unthrottle(). That means the driver->write call in ppp_async_process() returns 0. That would be fine if we were going to get a subsequent wakeup call, but we aren't (we just had it, and the buffer is now empty). The solution is for n_tty.c to update tty->receive_room _before_ calling the driver unthrottle routine, which is what the patch below does. With this patch I was able to transfer a 900MB file over a PPTP connection (taking about 25 minutes), whereas without the patch the connection would always stall in under a minute. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
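The ordering change can be sketched as follows; receive_room and check_unthrottle() are the n_tty.c names quoted in the message, but the body is a heavily simplified illustration, not the actual patch:

```c
/* Sketch only: publish the freed receive space BEFORE unthrottling, so
 * a driver write triggered from the unthrottle path sees room > 0. */
static void after_copy_to_user_sketch(struct tty_struct *tty, int copied)
{
	tty->receive_room += copied;    /* illustrative accounting */
	check_unthrottle(tty);          /* may end up calling driver->write() */
}
```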
-
Committed by Mark Lord
Bug fix: mv_eng_timeout() calls mv_err_intr() without first grabbing the host lock, which can lead to all sorts of interesting scenarios. This whole error-handling portion of sata_mv is nasty (and will get fixed for the new EH stuff), but for now this patch will help keep it on life support. Signed-off-by: Mark Lord <liml@rtr.ca> Signed-off-by: Jeff Garzik <jeff@garzik.org>
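A hedged fragment of the locking change; the ap->host_set->lock location follows the 2.6-era libata layout and is an assumption, as is the mv_err_intr() prototype:

```c
/* Sketch only: take the host lock around the error-interrupt work done
 * from the timeout path, so it cannot race the real interrupt handler. */
static void mv_eng_timeout_sketch(struct ata_port *ap)
{
	unsigned long flags;

	spin_lock_irqsave(&ap->host_set->lock, flags);
	mv_err_intr(ap);
	spin_unlock_irqrestore(&ap->host_set->lock, flags);
}
```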
-
Committed by Linus Torvalds
* master.kernel.org:/pub/scm/linux/kernel/git/gregkh/pci-2.6:
  [PATCH] PCI: reverse pci config space restore order
  [PATCH] PCI: Improve PCI config space writeback
  [PATCH] PCI: Error handling on PCI device resume
  [PATCH] PCI: fix pciehp compile issue when CONFIG_ACPI is not enabled
-
Committed by Christoph Lameter
From: Christoph Lameter <clameter@sgi.com> Looks like a comma was left over from the conversion from a struct to an assignment. Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-