- 08 December 2006, 3 commits
-
-
Committed by Paul Jackson
Optimize the critical zonelist scanning for free pages in the kernel memory allocator by caching the zones that were found to be full recently, and skipping them.

Remembers the zones in a zonelist that were short of free memory in the last second. And it stashes a zone-to-node table in the zonelist struct, to optimize that conversion (minimize its cache footprint).

Recent changes:

This differs in a significant way from a similar patch that I posted a week ago. Now, instead of having a nodemask_t of recently full nodes, I have a bitmask of recently full zones. This solves a problem that last week's patch had, which on systems with multiple zones per node (such as DMA zone) would take seeing any of these zones full as meaning that all zones on that node were full.

Also I changed names - from "zonelist faster" to "zonelist cache", as that seemed to better convey what we're doing here - caching some of the key zonelist state (for faster access).

See below for some performance benchmark results. After all that discussion with David on why I didn't need them, I went and got some ;). I wanted to verify that I had not hurt the normal case of memory allocation noticeably. At least for my one little microbenchmark, I found (1) the normal case wasn't affected, and (2) workloads that forced scanning across multiple nodes for memory improved up to 10% fewer System CPU cycles and lower elapsed clock time ('sys' and 'real'). Good. See details, below.

I didn't have the logic in get_page_from_freelist() for various full nodes and zone reclaim failures correct. That should be fixed up now - notice the new goto labels zonelist_scan, this_zone_full, and try_next_zone, in get_page_from_freelist().

There are two reasons I pursued this alternative, over some earlier proposals that would have focused on optimizing the fake numa emulation case by caching the last useful zone:

1) Contrary to what I said before, we (SGI, on large ia64 sn2 systems) have seen real customer loads where the cost to scan the zonelist was a problem, due to many nodes being full of memory before we got to a node we could use. Or at least, I think we have. This was related to me by another engineer, based on experiences from some time past. So this is not guaranteed. Most likely, though.

   The following approach should help such real numa systems just as much as it helps fake numa systems, or any combination thereof.

2) The effort to distinguish fake from real numa, using node_distance, so that we could cache a fake numa node and optimize choosing it over equivalent distance fake nodes, while continuing to properly scan all real nodes in distance order, was going to require a nasty blob of zonelist and node distance munging.

   The following approach has no new dependency on node distances or zone sorting.

See comment in the patch below for a description of what it actually does.

Technical details of note (or controversy):

- See the use of "zlc_active" and "did_zlc_setup" below, to delay adding any work for this new mechanism until we've looked at the first zone in zonelist. I figured the odds of the first zone having the memory we needed were high enough that we should just look there, first, then get fancy only if we need to keep looking.

- Some odd hackery was needed to add items to struct zonelist, while not tripping up the custom zonelists built by the mm/mempolicy.c code for MPOL_BIND. My usual wordy comments below explain this. Search for "MPOL_BIND".

- Some per-node data in the struct zonelist is now modified frequently, with no locking. Multiple CPU cores on a node could hit and mangle this data. The theory is that this is just performance hint data, and the memory allocator will work just fine despite any such mangling. The fields at risk are the struct 'zonelist_cache' fields 'fullzones' (a bitmask) and 'last_full_zap' (unsigned long jiffies). It should all be self correcting after at most a one second delay.

- This still does a linear scan of the same length as before. All I've optimized is making the scan faster, not algorithmically shorter. It is now able to scan a compact array of 'unsigned short' in the case of many full nodes, so one cache line should cover quite a few nodes, rather than each node hitting another one or two new and distinct cache lines.

- If both Andi and Nick don't find this too complicated, I will be (pleasantly) flabbergasted.

- I removed the comment claiming we only use one cacheline's worth of zonelist. We seem, at least in the fake numa case, to have put the lie to that claim.

- I pay no attention to the various watermarks and such in this performance hint. A node could be marked full for one watermark, and then skipped over when searching for a page using a different watermark. I think that's actually quite ok, as it will tend to slightly increase the spreading of memory over other nodes, away from a memory stressed node.

===============

Performance - some benchmark results and analysis:

This benchmark runs a memory hog program that uses multiple threads to touch a lot of memory as quickly as it can. Multiple runs were made, touching 12, 38, 64 or 90 GBytes out of the total 96 GBytes on the system, and using 1, 19, 37, or 55 threads (on a 56 CPU system.)

System, user and real (elapsed) timings were recorded for each run, shown in units of seconds, in the table below. Two kernels were tested - 2.6.18-mm3 and the same kernel with this zonelist caching patch added. The table also shows the percentage improvement the zonelist caching sys time is over (lower than) the stock *-mm kernel.

 number        2.6.18-mm3       zonelist-cache     delta (< 0 good)   percent
 GBs   N      ------------      --------------     ----------------   systime
 mem threads   sys user real     sys user real      sys user real      better
  12    1      153   24  177     151   24  176       -2    0   -1        1%
  12   19       99   22    8      99   22    8        0    0    0        0%
  12   37      111   25    6     112   25    6        1    0    0       -0%
  12   55      115   25    5     110   23    5       -5   -2    0        4%
  38    1      502   74  576     497   73  570       -5   -1   -6        0%
  38   19      426   78   48     373   76   39      -53   -2   -9       12%
  38   37      544   83   36     547   82   36        3   -1    0       -0%
  38   55      501   77   23     511   80   24       10    3    1       -1%
  64    1      917  125 1042     890  124 1014      -27   -1  -28        2%
  64   19     1118  138  119     965  141  103     -153    3  -16       13%
  64   37     1202  151   94    1136  150   81      -66   -1  -13        5%
  64   55     1118  141   61    1072  140   58      -46   -1   -3        4%
  90    1     1342  177 1519    1275  174 1450      -67   -3  -69        4%
  90   19     2392  199  192    2116  189  176     -276  -10  -16       11%
  90   37     3313  238  175    2972  225  145     -341  -13  -30       10%
  90   55     1948  210  104    1843  213  100     -105    3   -4        5%

Notes:

1) This test ran a memory hog program that started a specified number N of threads, and had each thread allocate and touch 1/N'th of the total memory to be used in the test run in a single loop, writing a constant word to memory, one store every 4096 bytes. Watching this test during some earlier trial runs, I would see each of these threads sit down on one CPU and stay there, for the remainder of the pass, a different CPU for each thread.

2) The 'real' column is not comparable to the 'sys' or 'user' columns. The 'real' column is seconds wall clock time elapsed, from beginning to end of that test pass. The 'sys' and 'user' columns are total CPU seconds spent on that test pass. For a 19 thread test run, for example, the sum of 'sys' and 'user' could be up to 19 times the number of 'real' elapsed wall clock seconds.

3) Tests were run on a fresh, single-user boot, to minimize the amount of memory already in use at the start of the test, and to minimize the amount of background activity that might interfere.

4) Tests were done on a 56 CPU, 28 Node system with 96 GBytes of RAM.

5) Notice that the 'real' time gets large for the single thread runs, even though the measured 'sys' and 'user' times are modest. I'm not sure what that means - probably something to do with it being slow for one thread to be accessing memory a long way away. Perhaps the fake numa system, running ostensibly the same workload, would not show this substantial degradation of 'real' time for one thread on many nodes -- let's hope not.

6) The high thread count passes (one thread per CPU - on 55 of 56 CPUs) ran quite efficiently, as one might expect. Each pair of threads needed to allocate and touch the memory on the node the two threads shared, a pleasantly parallelizable workload.

7) The intermediate thread count passes, when asking for a lot of memory forcing them to go to a few neighboring nodes, improved the most with this zonelist caching patch.

Conclusions:

* This zonelist cache patch probably makes little difference one way or the other for most workloads on real numa hardware, if those workloads avoid heavy off node allocations.

* For memory intensive workloads requiring substantial off-node allocations on real numa hardware, this patch improves both kernel and elapsed timings up to ten percent.

* For fake numa systems, I'm optimistic, but will have to leave that up to Rohit Seth to actually test (once I get him a 2.6.18 backport.)

Signed-off-by: Paul Jackson <pj@sgi.com>
Cc: Rohit Seth <rohitseth@google.com>
Cc: Christoph Lameter <clameter@engr.sgi.com>
Cc: David Rientjes <rientjes@cs.washington.edu>
Cc: Paul Menage <menage@google.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
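A minimal C sketch of the cached state and the one-second self-correction described above; the field names ('fullzones', 'last_full_zap') follow the commit text, but the array sizes and the helper name are illustrative assumptions, not the patch's literal code:

	/* Per-zonelist hint cache (sketch; sizes illustrative). */
	struct zonelist_cache {
		unsigned short z_to_n[MAX_ZONES_PER_ZONELIST];     /* zone -> node */
		DECLARE_BITMAP(fullzones, MAX_ZONES_PER_ZONELIST); /* recently full */
		unsigned long last_full_zap; /* jiffies when fullzones was cleared */
	};

	/* Self-correcting hint data: forget all "full" markings once a second,
	 * so unlocked, possibly mangled state costs at most one second. */
	static void zlc_zap_if_stale(struct zonelist_cache *zlc)
	{
		if (time_after(jiffies, zlc->last_full_zap + HZ)) {
			bitmap_zero(zlc->fullzones, MAX_ZONES_PER_ZONELIST);
			zlc->last_full_zap = jiffies;
		}
	}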
-
Committed by Christoph Lameter
The zone table is mostly not needed. If we have a node in the page flags then we can get to the zone via NODE_DATA(), which is much more likely to be already in the cpu cache.

In the case of SMP and UP, NODE_DATA() is a constant pointer which allows us to access an exact replica of the zonetable in the node_zones field. In all of the above cases there will be no need at all for the zone table.

The only remaining case is if in a NUMA system the node numbers do not fit into the page flags. In that case we make sparse generate a table that maps sections to nodes and use that table to figure out the node number. This table is sized to fit in a single cache line for the known 32 bit NUMA platform, which makes it very likely that the information can be obtained without a cache miss.

For sparsemem the zone table seems to have been fairly large, based on the maximum possible number of sections and the number of zones per node. There is some memory saving by removing zone_table. The main benefit is to reduce the cache footprint of the VM from the frequent lookups of zones. Plus it simplifies the page allocator.

[akpm@osdl.org: build fix]

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
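A condensed sketch of the lookup that replaces the zone table, assuming the standard page_to_nid()/page_zonenum() decoding of the page flags (the post-patch code may differ in detail):

	/* Instead of indexing a global zone_table[]: decode the node id from
	 * the page flags, then index that node's own node_zones[] array. */
	static inline struct zone *page_zone(struct page *page)
	{
		return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
	}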
-
Committed by Andrew Morton
With CONFIG_SMP=n:

  drivers/input/ff-memless.c:384: warning: implicit declaration of function 'local_bh_disable'
  drivers/input/ff-memless.c:393: warning: implicit declaration of function 'local_bh_enable'

Really, linux/spinlock.h should include linux/interrupt.h. But interrupt.h includes sched.h, which will need spinlock.h. So the patch breaks the _bh declarations out into a separate header and includes it in both interrupt.h and spinlock.h.

Cc: "Randy.Dunlap" <rdunlap@xenotime.net>
Cc: Andi Kleen <ak@suse.de>
Cc: <stable@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
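A minimal sketch of that separate header; the file name and guard here are assumptions, but the shape - a tiny header with no further includes, safe for both interrupt.h and spinlock.h to pull in - is the point:

	/* include/linux/bottom_half.h (sketch): declares only the _bh pair,
	 * so including it cannot recreate the spinlock.h/sched.h cycle. */
	#ifndef _LINUX_BH_H
	#define _LINUX_BH_H

	extern void local_bh_disable(void);
	extern void local_bh_enable(void);

	#endif /* _LINUX_BH_H */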
-
- 06 December 2006, 1 commit
-
-
Committed by David Howells
Fix up arch-specific work items where possible to use the new work_struct and delayed_work structs. Three places that enqueue bits of their stack and then return have been marked with #error as this is not permitted.

Signed-off-by: David Howells <dhowells@redhat.com>
-
- 05 December 2006, 3 commits
-
-
Committed by Matthew Wilcox
CONFIG_LBD and CONFIG_LSF are spread into asm/types.h for no particularly good reason. Centralising the definition in linux/types.h means that arch maintainers don't need to bother adding it, as well as fixing the problem with x86-64 users being asked to make a decision that has absolutely no effect. The H8/300 porters seem particularly confused, since I'm not aware of any microcontrollers that need to support 2TB filesystems.

Signed-off-by: Matthew Wilcox <matthew@wil.cx>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
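Roughly the shape of the centralized definition (a sketch; the real patch presumably handles the CONFIG_LSF case the same way):

	/* linux/types.h (sketch): one definition driven by CONFIG_LBD,
	 * replacing a copy in every asm/types.h. */
	#ifdef CONFIG_LBD
	typedef u64 sector_t;
	#else
	typedef unsigned long sector_t;
	#endif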
-
Committed by Al Viro
OK, that seems to be enough to deal with the mess.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Linus Torvalds
This was exposed by Al's recent header file dependency reduction patches.

Cc: Al Viro <viro@ftp.linux.org.uk>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 04 December 2006, 8 commits
-
-
Committed by Dwayne Grant McConnell
This patch adds SPU elf notes to the coredump. It creates a separate note for each of /regs, /fpcr, /lslr, /decr, /decr_status, /mem, /signal1, /signal1_type, /signal2, /signal2_type, /event_mask, /event_status, /mbox_info, /ibox_info, /wbox_info, /dma_info, /proxydma_info, /object-id.

A new macro, ARCH_HAVE_EXTRA_NOTES, was created for architectures to specify that they have extra elf core notes. A new macro, ELF_CORE_EXTRA_NOTES_SIZE, was created so the size of the additional notes could be calculated and added to the notes phdr entry. A new macro, ELF_CORE_WRITE_EXTRA_NOTES, was created so the new notes would be written after the existing notes.

The SPU coredump code resides in spufs. Stub functions are provided in the kernel which are hooked into the spufs code, which does the actual work, via register_arch_coredump_calls(). A new set of __spufs_<file>_read/get() functions was provided to allow the coredump code to read from the spufs files without having to lock the SPU context for each file read.

Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Dwayne Grant McConnell <decimal@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
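A sketch of the arch-side opt-in; the two helper names below are invented for illustration (the real hooks route into spufs via register_arch_coredump_calls() as described):

	/* Arch header (sketch): declare extra notes and forward the sizing
	 * and writing steps to helpers backed by the spufs coredump code.
	 * arch_notes_size()/arch_write_notes() are assumed names. */
	#define ARCH_HAVE_EXTRA_NOTES      1
	#define ELF_CORE_EXTRA_NOTES_SIZE  arch_notes_size()
	#define ELF_CORE_WRITE_EXTRA_NOTES arch_write_notes(file)

	extern int arch_notes_size(void);
	extern void arch_write_notes(struct file *file);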
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Jeff Garzik
It's bitrotten, long unmaintained, long hidden under BROKEN_ON_SMP, etc. As scheduled in feature-removal-schedule.txt, and ack'd several times on lkml.

Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
- 03 December 2006, 25 commits
-
-
Committed by Tejun Heo
libata switched to IRQ-driven IDENTIFY when IRQ-driven PIO was introduced. This has caused a lot of problems, including device misdetection and phantom devices. ATA_FLAG_DETECT_POLLING was added recently to selectively use polling IDENTIFY on problematic drivers, but many controllers and devices are affected by this problem, and adding ATA_FLAG_DETECT_POLLING for each such case is difficult and not very rewarding.

This patch makes libata always use polling IDENTIFY. This is consistent with libata's original behavior and drivers/ide's behavior.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
Committed by Tejun Heo
This patch implements ATA_FLAG_SETXFER_POLLING and uses it in pata_via. If this flag is set, transfer mode setting is performed by polling rather than by interrupt. This should help those controllers which raise an interrupt before the command is actually complete on SETXFER.

Rationale for this approach:

* uses existing facility and is relatively simple
* no busy sleep in the interrupt handler
* updating drivers is easy

While at it, kill the now unused flag ATA_FLAG_SRST in pata_via.

Signed-off-by: Tejun Heo <htejun@gmail.com>
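A rough sketch of how such a flag is consumed when issuing SET FEATURES - XFER MODE (condensed and assumed, not the patch verbatim):

	/* In the SETXFER command setup: if the port is flagged, select the
	 * polling protocol so completion is detected by status polling
	 * rather than by the (possibly premature) interrupt. */
	if (ap->flags & ATA_FLAG_SETXFER_POLLING)
		tf.flags |= ATA_TFLAG_POLLING;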
-
Committed by Tejun Heo
HSM_ST_UNKNOWN is not used anywhere. Its value is zero and it is supposed to serve a sanity-check purpose, but HSM_ST_IDLE is used for that purpose. This unused state causes confusion. After a port is initialized but before the first command is executed, the idle hsm state is UNKNOWN. However, once a command has completed, the idle hsm state is IDLE. This defeats the sanity check in ata_pio_task() for the first command.

This patch removes HSM_ST_UNKNOWN and consequently makes HSM_ST_IDLE the default state.

Signed-off-by: Tejun Heo <htejun@gmail.com>
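Illustrative shape of the state enum after the removal (only a representative subset of states is shown; the exact list is an assumption):

	enum hsm_task_states {
		HSM_ST_IDLE = 0,  /* value 0, so a zeroed port starts out valid */
		HSM_ST,           /* in the middle of a PIO transfer */
		HSM_ST_LAST,      /* last data block / command completion */
		HSM_ST_ERR,       /* error */
	};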
-
Committed by Jamal Hadi Salim
aevents cannot uniquely identify an SA. We break the ABI with this patch, but consensus is that since it is not yet utilized by any (known) application then it is fine (better to do it now than later).

Signed-off-by: Jamal Hadi Salim <hadi@cyberus.ca>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Patrick McHardy
Add IPv4 and IPv6 capable nf_conntrack port of the TFTP conntrack/NAT helper.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Patrick McHardy
Add IPv4 and IPv6 capable nf_conntrack port of the SIP conntrack/NAT helper.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Patrick McHardy
Add nf_conntrack port of the PPtP conntrack/NAT helper. Since there seems to be no IPv6-capable PPtP implementation, the helper only supports IPv4.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Patrick McHardy
Add nf_conntrack port of the IRC conntrack/NAT helper. Since DCC doesn't support IPv6 yet, the helper is still IPv4 only.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Patrick McHardy
Add IPv4 and IPv6 capable nf_conntrack port of the H.323 conntrack/NAT helper.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Patrick McHardy
Add IPv4 and IPv6 capable nf_conntrack port of the Amanda conntrack/NAT helper.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jozsef Kadlecsik
Add FTP NAT helper. Split out from Jozsef's big nf_nat patch, with a few small fixes by myself.

Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jozsef Kadlecsik
Add NAT support for nf_conntrack. Joint work of Jozsef Kadlecsik, Yasuyuki Kozakai, Martin Josefsson and myself.

Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Patrick McHardy
Accept -1 as delimiter to abort parsing without an error at the first unknown character. This is needed by the upcoming nf_conntrack SIP helper, where addresses are delimited by either '\r' or '\n' characters.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
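A hypothetical illustration of the convention (the function and its signature are made up, not the helper's real API): a terminator of -1 means "stop cleanly at the first non-matching character" instead of reporting a parse error:

	#include <linux/ctype.h>

	/* Scan digits up to 'term'; with term == -1, any non-digit ends the
	 * token successfully instead of failing the parse. (Sketch only.) */
	static int parse_token(const char *p, const char *end, int term,
			       unsigned int *len)
	{
		const char *start = p;

		while (p < end && (unsigned char)*p != term) {
			if (!isdigit(*p)) {
				if (term == -1)
					break;      /* clean early stop */
				return -1;          /* unexpected character */
			}
			p++;
		}
		*len = p - start;
		return 0;
	}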
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Bart De Schuymer
The attached patch adds --snat-arp support, which makes it possible to change the source mac address in both the mac header and the arp header with one rule.

Signed-off-by: Bart De Schuymer <bdschuym@pandora.be>
Signed-off-by: Patrick McHardy <kaber@trash.net>
-
Committed by Patrick McHardy
Add a new NFLOG target to allow use of nfnetlink_log for both IPv4 and IPv6. Currently we have two (unsupported by userspace) hacks in the LOG and ULOG targets to optionally call the nflog API. They lack a few features: the IPv4 and IPv6 LOG targets cannot specify a number of arguments related to nfnetlink_log, while the ULOG target is only available for IPv4. Remove those hacks and add a clean way to use nfnetlink_log.

Signed-off-by: Patrick McHardy <kaber@trash.net>
-
Committed by Patrick McHardy
Signed-off-by: Patrick McHardy <kaber@trash.net>
-
Committed by Patrick McHardy
There is no reason to limit netlink attributes in size.

Signed-off-by: Patrick McHardy <kaber@trash.net>
-
Committed by Eric Leblond
Signed-off-by: Eric Leblond <eric@inl.fr>
Signed-off-by: Patrick McHardy <kaber@trash.net>
-
Committed by Patrick McHardy
The NAT handling of the SIP helper has a few problems:

- Request headers are only mangled in the reply direction, From/To headers not at all, which can lead to authentication failures with DNAT in case the authentication domain is the IP address
- Contact headers in responses are only mangled for REGISTER responses
- Headers may be mangled even though they contain addresses not participating in the connection, like alternative addresses
- Packets are dropped when domain names are used where the helper expects IP addresses

This patch takes a different approach: instead of fixed rules for what field to mangle to what content, it adds symmetric mapping of From/To/Via/Contact headers, which allows us to deal properly with echoed addresses in responses and foreign addresses not belonging to the connection.

Signed-off-by: Patrick McHardy <kaber@trash.net>
-
Committed by Patrick McHardy
SIP headers are generally case-insensitive; only SDP headers are case-sensitive.

Signed-off-by: Patrick McHardy <kaber@trash.net>
-
Committed by Patrick McHardy
- Use an enum for header field enumeration
- Use a numerical value instead of a pointer to the header info structure to identify headers, unexport ct_sip_hdrs
- Group SIP and SDP entries in the header info structure
- Remove double forward declaration of ct_sip_get_info

Signed-off-by: Patrick McHardy <kaber@trash.net>
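Illustrative shape of the enumeration (the member names here are examples, not the exact list); callers pass the enum value instead of a pointer into the header info array:

	enum sip_header_pos {
		POS_FROM,     /* SIP: From:    */
		POS_TO,       /* SIP: To:      */
		POS_VIA,      /* SIP: Via:     */
		POS_CONTACT,  /* SIP: Contact: */
		POS_MEDIA,    /* SDP: m=       */
		POS_OWNER,    /* SDP: o=       */
	};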
-
Committed by Yasuyuki Kozakai
We usually use 'xxx_find_get' for functions which increment the reference count.

Signed-off-by: Yasuyuki Kozakai <yasuyuki.kozakai@toshiba.co.jp>
Signed-off-by: Patrick McHardy <kaber@trash.net>
-
Committed by Patrick McHardy
Add helper functions for sysctl registration with optional instantiation of common path elements (like net/netfilter), and use them to support automatic registration of conntrack protocol sysctls.

Signed-off-by: Patrick McHardy <kaber@trash.net>
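A hypothetical sketch of the idea, using the two-argument register_sysctl_table() of this era (the wrapper name is invented; the real helper's interface may differ): the protocol hands over only its leaf table, and the helper instantiates the shared net/netfilter parents:

	static struct ctl_table_header *
	nf_register_proto_sysctl(struct ctl_table *leaf)
	{
		static struct ctl_table nf_dir[] = {
			{ .ctl_name = NET_NETFILTER, .procname = "netfilter",
			  .mode = 0555 },
			{}
		};
		static struct ctl_table net_dir[] = {
			{ .ctl_name = CTL_NET, .procname = "net", .mode = 0555 },
			{}
		};

		nf_dir[0].child = leaf;     /* net/netfilter/<leaf entries> */
		net_dir[0].child = nf_dir;
		return register_sysctl_table(net_dir, 0);
	}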
-
Committed by Gerrit Renker
This removes and cleans up unused variables and structures which have become unnecessary following the introduction of the EWMA patch that automatically tracks the CCID 3 receiver/sender packet size `s'.

It deprecates the PACKET_SIZE socket option by returning an error code and printing a deprecation warning if an application tries to read or write this socket option.

Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
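A minimal sketch of the deprecation path (DCCP_SOCKOPT_PACKET_SIZE is the real option constant; the wrapper function, errno choice, and message text are assumptions based on the description above):

	/* Called from the get/setsockopt handlers (sketch): warn and refuse. */
	static int dccp_deprecated_sockopt(int optname)
	{
		if (optname == DCCP_SOCKOPT_PACKET_SIZE) {
			printk(KERN_WARNING
			       "DCCP: sockopt(PACKET_SIZE) is deprecated\n");
			return -EINVAL;   /* per the text: reads/writes now fail */
		}
		return 0;
	}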
-