- 21 February 2006, 8 commits
-
Submitted by Pavel Machek
Currently, acpi video options can only be set on the kernel command line. That's rather inflexible; I'd like a userland s2ram application that just works, and modifying the kernel command line according to a whitelist is not fun. It is better to just allow the s2ram application to set video options just before suspend (according to the whitelist). This implements a sysctl to allow setting suspend video options without a reboot.

(akpm: Documentation updates for this new sysctl are pending..)

Signed-off-by: Pavel Machek <pavel@suse.cz>
Cc: "Brown, Len" <len.brown@intel.com>
Cc: "Antonino A. Daplas" <adaplas@pol.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
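A minimal userspace sketch of what such an s2ram tool could do; the sysctl path (/proc/sys/kernel/acpi_video_flags) and the flag value are assumptions based on this changelog, not quoted from the patch:

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical: write the whitelist-selected video flags just
         * before suspending; "1" is assumed here for illustration */
        FILE *f = fopen("/proc/sys/kernel/acpi_video_flags", "w");
        if (!f)
            return 1;
        fprintf(f, "1\n");
        fclose(f);
        return 0;
    }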
-
Submitted by Heiko Carstens
Looks like there was a merge conflict when patches 8f8b1138 and 255acee7 were applied which wasn't properly resolved. Fix this and add some additional description.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Andi Kleen
Undo the setting of CONFIG_DEBUG_INFO in the previous defconfig update. It would make every build much slower, need more disk space, and isn't a good default.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Stephen Street
Fix two problems in the spi subsystem:

1) the spi subsystem core dumps when a modular spi master is unloaded.
2) the spi subsystem core dumps when an spi slave device is suspended/resumed and the modular slave driver is not loaded.

Signed-off-by: Stephen Street <stephen@streetfiresound.com>
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Alexey Korolev
I found an issue in cfi_cmdset_0001.c related to cache-region invalidation in the buffered write procedure. The code performs cache invalidation from "cmd_adr" to "cmd_adr + len" in do_write_buffer(), while we actually modify the region from "adr" to "adr + len". This issue affects writes and reads of data in small chunks.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Daniel Yeisley
I'm seeing a kernel panic on an ES7000-600 when booting in virtual wire mode. The panic happens because smp_read_mpc() is passed a physical address when it should be a virtual one. I tested the attached patch on the ES7000-600 and on a 2-cpu Dell box, and saw no problems on either.

Signed-off-by: Dan Yeisley <dan.yeisley@unisys.com>
Acked-by: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
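The shape of the fix, as a hedged sketch (the mpf_physptr field name follows 2.6-era x86 MP-table parsing; the exact hunk is an assumption):

    /* before: handing the firmware's physical pointer straight to the parser */
    smp_read_mpc((struct mp_config_table *)(unsigned long)mpf->mpf_physptr);

    /* after: translate to a kernel virtual address first */
    smp_read_mpc(phys_to_virt(mpf->mpf_physptr));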
-
Submitted by Christoph Lameter
Some allocations are restricted to a limited set of nodes (due to memory policies or cpuset constraints). If the page allocator is not able to find enough memory, that does not mean that overall system memory is low.

In particular, going postal and more or less randomly shooting at processes is not likely to help the situation, and may just lead to suicide (the whole system coming down). It is better to signal to the process that no memory exists given the constraints that the process (or its configuration) has placed on the allocation behavior. The process may be killed, but then the sysadmin or developer can investigate the situation. The solution is similar to what we do when running out of hugepages.

This patch adds a check before we kill processes. At that point performance considerations do not matter much, so we just scan the zonelist and reconstruct a list of nodes. If the list of nodes does not contain all online nodes, then this is a constrained allocation and we should kill the current process.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
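A sketch of the described check, assuming 2.6-era NUMA helpers (the function name and the omission of cpuset details are simplifications, not the literal patch):

    /* Return true if the zonelist covers fewer nodes than are online,
     * i.e. the failing allocation was constrained by policy or cpuset. */
    static int is_constrained_alloc(struct zonelist *zonelist)
    {
        nodemask_t nodes = NODE_MASK_NONE;
        struct zone **z;

        for (z = zonelist->zones; *z; z++)
            node_set((*z)->zone_pgdat->node_id, nodes);

        return !nodes_equal(nodes, node_online_map);
    }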
-
Submitted by Kurt Garloff
In the badness() calculation, there's currently this piece of code:

    /*
     * Processes which fork a lot of child processes are likely
     * a good choice. We add the vmsize of the children if they
     * have an own mm. This prevents forking servers to flood the
     * machine with an endless amount of children
     */
    list_for_each(tsk, &p->children) {
        struct task_struct *chld;
        chld = list_entry(tsk, struct task_struct, sibling);
        if (chld->mm != p->mm && chld->mm)
            points += chld->mm->total_vm;
    }

The intention is clear: if some server (apache) keeps spawning new children and we run OOM, we want to kill the father rather than picking a child. This -- to some degree -- also helps a bit with getting fork bombs under control, though I'd consider this a desirable side-effect rather than a feature.

There's one problem with this: no matter how many or few children there are, if just one of them misbehaves, and all others (including the father) do everything right, we still always kill the whole family. This hits in real life; whether it's javascript in konqueror resulting in kdeinit (and thus the whole KDE session) being hit, or just a classical server that spawns children.

Sidenote: the killer does kill all direct children as well, not only the selected father; see oom_kill_process().

The idea in the attached patch is that we do want to account the memory consumption of the (direct) children to the father -- however not fully. This maintains the property that fathers with too many children will still very likely be picked, whereas a single misbehaving child has the chance to be picked by the OOM killer.

In the patch I account only half (rounded up) of the children's vm_size to the parent. This means that if one child eats more mem than the rest of the family, it will be picked; otherwise it's still the father and thus the whole family that gets selected. This is a heuristic -- we could debate whether accounting for a fourth would be better than for half of it. Or -- if people would consider it worth the trouble -- make it a sysctl. For now I stuck to accounting for half, which should IMHO be a significant improvement.

The patch does one more thing: as users tend to be irritated by the choice of killed processes (mainly because the children are killed first, despite some of them having a very low OOM score), I added some more output: the selected (father) process will be reported first and its oom_score printed to syslog.

Description: only account for half of the children's vm size in the oom score calculation. This should still give the parent enough points in case of fork bombs. If any child however has more than 50% of the vm size of all children together, it'll get a higher score and be elected. This patch also makes the kernel display the oom_score.

Signed-off-by: Kurt Garloff <garloff@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
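A hedged sketch of the halved accounting the text describes (the exact rounding in the merged patch is an assumption):

    list_for_each(tsk, &p->children) {
        struct task_struct *chld;
        chld = list_entry(tsk, struct task_struct, sibling);
        if (chld->mm != p->mm && chld->mm)
            /* half, plus one so each child still contributes something */
            points += chld->mm->total_vm/2 + 1;
    }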
-
- 18 February 2006, 32 commits
-
Submitted by Linus Torvalds
-
Submitted by Bjorn Helgaas
acpi_rs_get_list_length() needs to account for all the vendor-defined data bytes. Failing to include these causes buffers to be sized too small, which causes slab corruption when we later convert AML to resources and run off the end of the buffer.

This causes slab corruption on machines that use ACPI vendor-defined resources. All HP ia64 machines do, and I'm told that some NEC machines may as well.

Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: "Brown, Len" <len.brown@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Chris Wright
Make sure maxnode is a safe size before calculating nlongs in get_nodes().

Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
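A sketch of the kind of bound check involved; the specific limit shown is an assumption for illustration, not the literal patch:

    /* reject absurd maxnode values before deriving the word count */
    if (maxnode > PAGE_SIZE * 8)    /* illustrative bound */
        return -EINVAL;
    nlongs = BITS_TO_LONGS(maxnode);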
-
Submitted by Andrew Morton
I got all of these backwards: we want to return min(input timeout, new timeout) to userspace to prevent increasing the time-remaining value. Thanks to Ernst Herzberg <earny@net4u.de> for reporting and diagnosing.

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by David Gibson
One of the parameters to the __pud_free_tlb() macro for powerpc is incorrectly named (see patch). We get away with it by accident, because in the one place the macro is called, the second parameter is a variable named "pud".

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
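A sketch of why the misnamed parameter still compiled; the macro body shown is a reconstruction under that assumption, not the literal powerpc definition:

    /* before: the parameter is named "pmd" but the body says "pud", so this
     * only expands correctly when the caller's argument is itself named "pud" */
    #define __pud_free_tlb(tlb, pmd)    pud_free(pud)

    /* after */
    #define __pud_free_tlb(tlb, pud)    pud_free(pud)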
-
Submitted by Tim Hockin
Don't print KERN_INFO in the middle of a printk line.

    printk(KERN_INFO "OEM ID: %s ", str);

is just above this. This is already fixed up in the i386 copy.

Signed-off-by: Martin J. Bligh <mbligh@google.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
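For illustration, continuation printks of the same output line simply omit the log level (the "Product ID" string here is hypothetical):

    printk(KERN_INFO "OEM ID: %s ", str);
    printk("Product ID: %s\n", prod);    /* same line: no KERN_INFO */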
-
Submitted by Johannes Berg
The windfarm_pm112 module relies on smu_sat_get_sdb_partition(), which is in windfarm_smu_sat.c but is not exported to modules, so despite Kconfig having the option to build pm112 as a module, it can never be loaded. This patch fixes that by exporting smu_sat_get_sdb_partition() with EXPORT_SYMBOL_GPL.

Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
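Per the changelog, the whole fix amounts to a one-line export next to the definition in windfarm_smu_sat.c:

    EXPORT_SYMBOL_GPL(smu_sat_get_sdb_partition);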
-
Submitted by Miklos Szeredi
There's a rather theoretical case of the BUG triggering in fuse_reset_request():

 - iget() fails because of OOM after a successful CREATE_OPEN request
 - during IO on the resulting RELEASE request the connection is aborted

Fix this and add a warning to fuse_reset_request().

Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Rafael J. Wysocki
Restore compatibility with the older code and make it possible to suspend if the kernel command line doesn't contain the "resume=" argument.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Heiko Carstens
Just rename the compat system call to keep the name consistent with all the other *64 compat system calls.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Cornelia Huck
Fix an assignment that should have been a comparison in ccw_device_set_online(). Also remove an unneeded assignment in ccw_device_do_sense().

Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Heiko Carstens
The last changes that introduced the additional_cpus command line parameter also introduced a regression regarding smp initialization speed. In smp_setup_cpu_possible_map(), cpu_present_map is set to the same value as cpu_possible_map. In particular, that means bits in the present map will be set for cpus that are not present. This causes a slowdown in the initial cpu_up() loop in smp_init(), since trying to bring online cpus that aren't present takes a while.

Fix this by setting only the bits for present cpus in cpu_present_map, and set cpu_present_map to cpu_possible_map in smp_cpus_done().

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
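A hedged sketch of the described split, using 2.6-era cpumask helpers (the detection loop and num_detected are hypothetical; only the smp_cpus_done() assignment is stated by the changelog):

    /* setup: mark present only the cpus detected at IPL; the possible map
     * may additionally contain slots reserved via additional_cpus */
    for (cpu = 0; cpu < num_detected; cpu++) {
        cpu_set(cpu, cpu_possible_map);
        cpu_set(cpu, cpu_present_map);
    }

    /* once every present cpu is up, let present mirror possible again */
    void __init smp_cpus_done(unsigned int max_cpus)
    {
        cpu_present_map = cpu_possible_map;
    }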
-
Submitted by Heiko Carstens
Introduce the possible_cpus command line option. It hard-sets the number of bits set in cpu_possible_map. Unlike the additional_cpus parameter, this one guarantees that num_possible_cpus() will stay constant even if the system gets rebooted and a different number of cpus is present at startup.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
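A sketch of how such an early option is typically wired up (the handler and variable below are illustrative; the actual parsing code in the patch may differ):

    static unsigned int possible_cpus;    /* hypothetical holder */

    static int __init parse_possible_cpus(char *s)
    {
        possible_cpus = simple_strtoul(s, NULL, 0);
        return 1;    /* option consumed */
    }
    __setup("possible_cpus=", parse_possible_cpus);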
-
Submitted by Heiko Carstens
Introduce the additional_cpus command line option. By default no additional cpus can be attached to the system anymore; only the cpus present at IPL time can be switched on/off. If it is desired that additional cpus can be attached to the system, the maximum number of additional cpus needs to be specified with this option. This change is necessary in order to limit the waste of per_cpu data structures.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Heiko Carstens
Set the preempt_count of the idle thread to zero before switching off a cpu. Otherwise the preempt_count will be wrong if the cpu is switched on again, since the thread will be reused.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Cornelia Huck
If __ccw_device_disband_start() fails to initiate disbanding, it should finish with ccw_device_disband_done() (which leaves the device in the offline state) instead of ccw_device_verify_done() (which leaves the device in the online state).

Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Ingo Molnar
Heiko Carstens <heiko.carstens@de.ibm.com> wrote:

  The boot sequence on s390 sometimes takes ages and we spend a very long
  time (up to one or two minutes) in calibrate_migration_costs. The time
  spent there differs from boot to boot. Also the calculated costs differ
  a lot. I've seen differences by up to a factor of 15 (yes, factor, not
  percent). Also I doubt that making these measurements makes much sense
  on a completely virtualized architecture where you cannot tell how much
  cpu time you will get anyway.

So introduce the CONFIG_DEFAULT_MIGRATION_COST method for an architecture to set the scheduler migration costs. This turns off automatic detection of migration costs. It makes sense on virtual platforms, where migration costs are hard to measure accurately.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
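A schematic sketch of how a configured default can short-circuit the calibration (the real code in sched.c is more involved; the measurement call here is illustrative):

    #ifdef CONFIG_DEFAULT_MIGRATION_COST
        /* fixed, architecture-chosen cost: skip the measurement loop */
        migration_cost[distance] = CONFIG_DEFAULT_MIGRATION_COST;
    #else
        migration_cost[distance] = measure_migration_cost(cpu1, cpu2);
    #endif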
-
Submitted by Adrian Bunk
Jean-Luc Leger <reiga@dspnet.fr.eu.org> found this obvious typo.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Marcel Selhorst
Fix an IO-port leak from request_region in case of an error during TPM initialization, add more pnp verification, and fix a WTX bug.

Signed-off-by: Marcel Selhorst <selhorst@crypto.rub.de>
Acked-by: Kylene Jo Hall <kjhall@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Peter Staubach
Fix a deadlock possible in the ext2 file system implementation. This deadlock occurs when a file is removed from an ext2 file system which was mounted with the "sync" mount option.

The problem is that ext2_xattr_delete_inode() was invoking sync_dirty_buffer() on a buffer head which was previously locked via lock_buffer(). The first thing that sync_dirty_buffer() does is lock the buffer head that it was passed. It does this via lock_buffer(). Oops.

The solution is to unlock the buffer head in ext2_xattr_delete_inode() before invoking sync_dirty_buffer(). This makes the code in ext2_xattr_delete_inode() obey the same locking rules as all other callers of sync_dirty_buffer() in the ext2 file system implementation.

Signed-off-by: Peter Staubach <staubach@redhat.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
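A sketch of the locking pattern, with ext2's actual block-handling details elided:

    /* deadlock: sync_dirty_buffer() begins with lock_buffer(bh) */
    lock_buffer(bh);
    /* ... modify the xattr block ... */
    sync_dirty_buffer(bh);        /* blocks forever on its own lock */
    unlock_buffer(bh);

    /* fixed ordering: drop the lock before syncing */
    lock_buffer(bh);
    /* ... modify the xattr block ... */
    unlock_buffer(bh);
    sync_dirty_buffer(bh);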
-
Submitted by Linus Torvalds
-
Submitted by Jens Axboe
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
-
Submitted by Albert Lee
- Fix the array index value in ata_rwcmd_protocol() for the added FUA commands.
- Filter out ATAPI packet command error messages in ata_pio_error().

Signed-off-by: Albert Lee <albertcc@tw.ibm.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
-
Submitted by Linus Torvalds
-
Submitted by Dan Williams
* libata does not care about error interrupts, so handle them locally
* the interrupts that are ignored only appear to happen at init time

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
-
Submitted by Linus Torvalds
Change the find_next_best_node() algorithm to correctly skip over holes in the node online mask. Previously it would not handle missing nodes correctly and cause crashes at boot.

[Written by Linus, tested by AK]

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
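A schematic version of hole-safe iteration (the selection criterion and signature here are illustrative, not the literal x86_64 code):

    static int find_next_best_node(int node, nodemask_t *used)
    {
        int n, best = -1, best_dist = INT_MAX;

        for_each_online_node(n) {    /* skips holes in node_online_map */
            if (node_isset(n, *used))
                continue;
            if (node_distance(node, n) < best_dist) {
                best_dist = node_distance(node, n);
                best = n;
            }
        }
        return best;
    }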
-
Submitted by Jay Vosburgh
bond_release returns EINVAL without releasing the bond lock if the slave device is not being bonded by the bond. The following patch ensures that the lock is released in this case.

Signed-off-by: Stephen J. Bevan <stephen@dino.dnsalias.com>
Acked-by: Jay Vosburgh <fubar@us.ibm.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
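The shape of such a fix, sketched against the bonding driver's rwlock of that era (the lookup helper name is from memory and may differ):

    static int bond_release(struct net_device *bond_dev, struct net_device *slave_dev)
    {
        struct bonding *bond = netdev_priv(bond_dev);
        struct slave *slave;

        write_lock_bh(&bond->lock);
        slave = bond_get_slave_by_dev(bond, slave_dev);
        if (!slave) {
            /* device is not one of our slaves: this unlock was missing */
            write_unlock_bh(&bond->lock);
            return -EINVAL;
        }
        /* ... detach and clean up the slave under the lock ... */
        write_unlock_bh(&bond->lock);
        return 0;
    }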
-
Submitted by Frank Pavlic
[patch 2/2] s390: some qeth driver fixes

From: Frank Pavlic <fpavlic@de.ibm.com>

 - fixed a kernel panic when using EDDP support in Layer 2 mode
 - fixed a NULL pointer exception in qeth_set_offline
 - setting EDDP in Layer 2 mode did not set the NETIF_F_(SG/TSO) flags when the device became online
 - use sscanf for parsing and converting IPv4 addresses from string to __u8 values
 - fixed qeth_string_to_ipaddr6: in case of a double colon, the IPv6 address converted from the string was not correct in the previous implementation

Signed-off-by: Frank Pavlic <fpavlic@de.ibm.com>

 qeth.h      | 112 +++++++++++++++++++++++++-----------------
 qeth_eddp.c |  11 ++++-
 qeth_main.c |  17 +++------
 3 files changed, 63 insertions(+), 77 deletions(-)

Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
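A sketch of dotted-quad parsing with sscanf, of the kind the changelog mentions (the helper name is hypothetical):

    static int string_to_ipaddr4(const char *buf, __u8 *addr)
    {
        unsigned int in[4];
        int i;

        if (sscanf(buf, "%u.%u.%u.%u", &in[0], &in[1], &in[2], &in[3]) != 4)
            return -EINVAL;
        for (i = 0; i < 4; i++) {
            if (in[i] > 255)
                return -EINVAL;
            addr[i] = in[i];
        }
        return 0;
    }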
-
Submitted by Frank Pavlic
[patch 1/2] s390: lcs performance enhancements

From: Klaus Wacker <kdwacker@de.ibm.com>

When flood pinging (with a large packet size) an LCS device, about 90% of all packets are dropped by the driver.

 - increased the number of lcs IO buffers to 32
 - use netif_stop_queue/netif_wake_queue in the lcs_start_xmit routine
 - don't lock the whole xmit routine, but just the piece of code where the tx_buffer is touched

Signed-off-by: Frank Pavlic <fpavlic@de.ibm.com>

 lcs.c | 31 +++++++++++++++++--------------
 lcs.h |  2 +-
 2 files changed, 18 insertions(+), 15 deletions(-)

Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
-
Submitted by Andrew Morton
drivers/net/tokenring/smctr.c: In function `smctr_load_firmware':
drivers/net/tokenring/smctr.c:2981: warning: assignment discards qualifiers from pointer target type

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
-
Submitted by Stephen Hemminger
Users report problems with auto-negotiation disabled and the link set to 100/Half or 10/Half. Problems range from poor performance to no link at all. The current sky2 code does not set things properly on link up if autonegotiation is disabled. Plus, it does not contemplate a 10Mbit setting at all. This patch corrects that.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
-
Submitted by Stephen Hemminger
This is a clone of John Linville's fix for speed setting on the sky2 driver. The skge driver has the same code (and bug): it would not allow manually forcing 100 and 10 mbit.

Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
-