- 13 January 2006, 3 commits

Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Committed by akpm@osdl.org
From: Ingo Molnar <mingo@elte.hu>

This is the latest version of the scheduler cache-hot-auto-tune patch.

The first problem was that detection time scaled with O(N^2), which is unacceptable on larger SMP and NUMA systems. To solve this:

- I've added a 'domain distance' function, which is used to cache measurement results. Each distance is only measured once. This means that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT distances 0 and 1, and on SMP distance 0 is measured. The code walks the domain tree to determine the distance, so it automatically follows whatever hierarchy an architecture sets up. This cuts down on the boot time significantly and removes the O(N^2) limit. The only assumption is that migration costs can be expressed as a function of domain distance - this covers the overwhelming majority of existing systems, and is a good guess even for more asymmetric systems.

  [ People hacking systems that have asymmetries that break this assumption (e.g. different CPU speeds) should experiment a bit with the cpu_distance() function. Adding a ->migration_distance factor to the domain structure would be one possible solution - but let's first see the problem systems, if they exist at all. Let's not overdesign. ]

Another problem was that only a single cache size was used for measuring the cost of migration, and most architectures didn't set that variable up. Furthermore, a single cache size does not fit NUMA hierarchies with L3 caches and does not fit HT setups, where different CPUs will often have different 'effective cache sizes'. To solve this problem:

- Instead of relying on a single cache size provided by the platform and sticking to it, the code now auto-detects the 'effective migration cost' between two measured CPUs by iterating through a wide range of cache sizes. The code searches for the maximum migration cost, which occurs when the working set of the test workload falls just below the 'effective cache size'. I.e. a real-life optimized search is done for the maximum migration cost, between two real CPUs. This, amongst other things, has the positive effect that if e.g. two CPUs share an L2/L3 cache, a different (and accurate) migration cost will be found than between two CPUs on the same system that don't share any caches. (The reliable measurement of migration costs is tricky - see the source for details.)

Furthermore, I've added various boot-time options to override/tune migration behavior.

Firstly, there's a blanket override for autodetection:

    migration_cost=1000,2000,3000

will override the depth 0/1/2 values with 1msec/2msec/3msec values.

Secondly, there's a global factor that can be used to increase (or decrease) the autodetected values:

    migration_factor=120

will increase the autodetected values by 20%. This option is useful to tune things in a workload-dependent way - e.g. if a workload is cache-insensitive then CPU utilization can be maximized by specifying migration_factor=0.

I've tested the autodetection code quite extensively on x86, on three boxes (dual Celeron, dual HT P4, and an 8-way P3/Xeon with 2MB L2), and the autodetected values look pretty good:

Dual Celeron (128K L2 cache):

    ---------------------
    migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
    ---------------------
              [00]    [01]
    [00]:       -     1.7(1)
    [01]:     1.7(1)    -
    ---------------------
    cacheflush times [2]: 0.0 (0) 1.7 (1784008)
    ---------------------

Here the slow memory subsystem dominates system performance, and even though caches are small, the migration cost is 1.7 msecs.

Dual HT P4 (512K L2 cache):

    ---------------------
    migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
    ---------------------
              [00]    [01]    [02]    [03]
    [00]:       -     0.4(1)  0.0(0)  0.4(1)
    [01]:     0.4(1)    -     0.4(1)  0.0(0)
    [02]:     0.0(0)  0.4(1)    -     0.4(1)
    [03]:     0.4(1)  0.0(0)  0.4(1)    -
    ---------------------
    cacheflush times [2]: 0.0 (33900) 0.4 (448514)
    ---------------------

Here it can be seen that there is no migration cost between two HT siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.

8-way P3/Xeon [2MB L2 cache]:

    ---------------------
    migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
    ---------------------
              [00]     [01]     [02]     [03]     [04]     [05]     [06]     [07]
    [00]:       -      19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)
    [01]:     19.2(1)    -      19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)
    [02]:     19.2(1)  19.2(1)    -      19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)
    [03]:     19.2(1)  19.2(1)  19.2(1)    -      19.2(1)  19.2(1)  19.2(1)  19.2(1)
    [04]:     19.2(1)  19.2(1)  19.2(1)  19.2(1)    -      19.2(1)  19.2(1)  19.2(1)
    [05]:     19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)    -      19.2(1)  19.2(1)
    [06]:     19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)    -      19.2(1)
    [07]:     19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)  19.2(1)    -
    ---------------------
    cacheflush times [2]: 0.0 (0) 19.2 (19281756)
    ---------------------

This one has huge caches and a relatively slow memory subsystem - so the migration cost is 19 msecs.

Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Cc: <wilder@us.ibm.com> Signed-off-by: John Hawkes <hawkes@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
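A small userspace sketch of the 'domain distance' idea described above: walk up a CPU's domain hierarchy until the span also covers the other CPU, counting the levels climbed. The struct domain type, its span bitmask and domain_distance() here are hypothetical illustrations, not the kernel's cpu_distance() code.

    #include <stdio.h>

    struct domain {
        struct domain *parent;
        unsigned long span;          /* bitmask of CPUs this level covers */
    };

    static int domain_distance(const struct domain *sd, int other_cpu)
    {
        int distance = 0;

        while (sd && !(sd->span & (1UL << other_cpu))) {
            sd = sd->parent;         /* climb towards larger spans */
            distance++;
        }
        return distance;
    }

    int main(void)
    {
        /* CPU0's view of a two-level hierarchy: level 0 shared with CPU1,
           level 1 shared with CPU2 and CPU3 */
        struct domain top  = { .parent = NULL, .span = 0x0f };
        struct domain core = { .parent = &top, .span = 0x03 };

        printf("distance(0,1) = %d\n", domain_distance(&core, 1));   /* 0 */
        printf("distance(0,3) = %d\n", domain_distance(&core, 3));   /* 1 */
        return 0;
    }

Because costs are cached per distance rather than per CPU pair, an N-CPU machine only needs as many calibration runs as it has distinct hierarchy levels, which is what removes the O(N^2) boot-time behaviour.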

Committed by Ingo Molnar
Add per-arch sched_cacheflush(), a write-back cache flush used by the migration-cost calibration code at bootup time. Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>

- 10 January 2006, 16 commits

Committed by Ralf Baechle
Gcc has a tradition of miscompiling the previous construct, which used the address of a label as an argument to inline assembler. Gas, on the other hand, has the annoying difference between la and dla, which are only usable for 32-bit and 64-bit code respectively, so they can't be used without conditional compilation. The alternative is switching the assembler to 64-bit code, which happens to work right even for 32-bit code. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Maxime Bizon
local_irq_restore uses di, which saves the whole Status register content, not just the IE bit, causing local_irq_restore() to fail. This only happens if both CONFIG_CPU_MIPSR2 and CONFIG_IRQ_CPU are enabled. Signed-off-by: Maxime Bizon <mbizon@freebox.fr> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Al Viro
dump_regs() is used by a bunch of drivers for their internal stuff; the MIPS instance (the one that is seen in system-wide headers) has been renamed to elf_dump_regs(). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Sergei Shtylyov
According to the databook, the USB OpenHCI host controller on the Au1550 only decodes memory addresses from 0x14020000 to 0x1407FFFF, which gives a map size of 0x60000 (on the prior Au1x00 chips the map size was 1MB). Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com> Acked-by: Jordan Crouse <jordan.crouse@amd.com> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@ongar.mips.com>

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
It was resulting in build errors for some configurations. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
0x2ff was a typo, and the value 0x1f for DSP_MASK was referring to an old version of the documentation. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Atsushi Nemoto
mdelay(1) (i.e. udelay(1000)) does not work correctly due to overflow: 1000 * 0x004189374BC6A7f0 = 0x10000000000000180 (>= 2**64). 0x004189374BC6A7ef (0x004189374BC6A7f0 - 1) is OK, and it is exactly the same as the catchall case (0x8000000000000000UL / (500000 / HZ)). Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
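The overflow can be checked quickly in userspace with a 128-bit intermediate product (assuming a compiler that provides __uint128_t); this only verifies the arithmetic quoted above, it is not the MIPS delay code itself.

    #include <stdio.h>

    int main(void)
    {
        unsigned long long bad  = 0x004189374BC6A7F0ULL;
        unsigned long long good = 0x004189374BC6A7EFULL;   /* bad - 1 */
        __uint128_t two64 = (__uint128_t)1 << 64;

        /* 1000 * bad is 0x10000000000000180, i.e. just past 2**64 */
        printf("bad  overflows: %s\n", (__uint128_t)bad  * 1000 >= two64 ? "yes" : "no");
        printf("good overflows: %s\n", (__uint128_t)good * 1000 >= two64 ? "yes" : "no");
        return 0;
    }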

Committed by Ralf Baechle
used_dsp was meant to be used like used_math - but since the FPU context is small and lazy context switching is a stupid idea on multiprocessors, this idea only got halfway implemented, and those bits were now breaking ptrace. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Arjan van de Ven
Add the per-arch mutex.h files for the remaining architectures. We default to asm-generic/mutex-dec.h, because that performs quite well on most arches. Arches that do not have atomic decrement/increment instructions should switch to mutex-xchg.h instead. Arches can also provide their own implementation for the mutex fastpath primitives. Signed-off-by: Arjan van de Ven <arjan@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
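A userspace analogue (C11 atomics) of the two fastpath styles mentioned above, using the convention 1 = unlocked, 0 = locked, negative = contended. The real asm-generic/mutex-dec.h and mutex-xchg.h are built on the kernel's own atomic primitives, so treat this only as a sketch of the idea, not the kernel code.

    #include <stdatomic.h>
    #include <stdio.h>

    static void slowpath(atomic_int *count) { (void)count; puts("contended: take slowpath"); }

    /* decrement-based fastpath: one atomic decrement, slowpath if the result went negative */
    static void mutex_lock_dec(atomic_int *count, void (*fail_fn)(atomic_int *))
    {
        if (atomic_fetch_sub(count, 1) - 1 < 0)
            fail_fn(count);
    }

    /* xchg-based fastpath for arches without a cheap atomic decrement:
       unconditionally swap in 0, slowpath if the old value was not 1 */
    static void mutex_lock_xchg(atomic_int *count, void (*fail_fn)(atomic_int *))
    {
        if (atomic_exchange(count, 0) != 1)
            fail_fn(count);
    }

    int main(void)
    {
        atomic_int m = 1;                 /* unlocked */
        mutex_lock_dec(&m, slowpath);     /* taken on the fastpath, no output */
        mutex_lock_xchg(&m, slowpath);    /* already held, prints the slowpath message */
        return 0;
    }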

Committed by Ingo Molnar
Add atomic_xchg() to all the architectures. Needed by the new mutex code. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjan@infradead.org>

- 09 January 2006, 1 commit

Committed by Ravikiran G Thirumalai
Kill L1_CACHE_SHIFT from all arches. Since L1_CACHE_SHIFT_MAX is not used anymore with the introduction of INTERNODE_CACHE, kill L1_CACHE_SHIFT_MAX. Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org> Signed-off-by: Shai Fultheim <shai@scalex86.org> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>

- 07 January 2006, 3 commits

Committed by Domen Puncer
Remove a file that is referenced nowhere ("grep riscos -r ." didn't find anything). Signed-off-by: Domen Puncer <domen@coderock.org> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Committed by Christoph Lameter
Several counters already have the need to use 64-bit atomic variables on 64-bit platforms (see mm_counter_t in sched.h). We have to do ugly ifdefs to fall back to 32-bit atomics on 32-bit platforms. The VM statistics patch that I am working on will also make more extensive use of atomic64. This patch introduces a new type, atomic_long_t, by providing definitions in asm-generic/atomic.h that work similarly to the C "long" type. It is 32 bits on 32-bit platforms and 64 bits on 64-bit platforms. Also cleans up the determination of the mm_counter_t in sched.h. Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
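A hedged sketch of the shape this takes in asm-generic/atomic.h (not the exact header text): atomic_long_t follows the C "long", so it maps onto the existing atomic64_t on 64-bit and atomic_t on 32-bit, selected at compile time via BITS_PER_LONG and assuming the kernel's existing atomic helpers.

    #if BITS_PER_LONG == 64

    typedef atomic64_t atomic_long_t;
    #define ATOMIC_LONG_INIT(i)    ATOMIC64_INIT(i)
    #define atomic_long_read(l)    ((long)atomic64_read(l))
    #define atomic_long_set(l, i)  atomic64_set((l), (i))
    #define atomic_long_inc(l)     atomic64_inc(l)
    #define atomic_long_dec(l)     atomic64_dec(l)

    #else  /* BITS_PER_LONG == 32 */

    typedef atomic_t atomic_long_t;
    #define ATOMIC_LONG_INIT(i)    ATOMIC_INIT(i)
    #define atomic_long_read(l)    ((long)atomic_read(l))
    #define atomic_long_set(l, i)  atomic_set((l), (i))
    #define atomic_long_inc(l)     atomic_inc(l)
    #define atomic_long_dec(l)     atomic_dec(l)

    #endif

Callers such as mm_counter_t can then use one spelling everywhere and still get a native-width atomic on both 32-bit and 64-bit builds.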

Committed by Badari Pulavarty
Here is the patch to implement madvise(MADV_REMOVE) - which frees up a given range of pages and its associated backing store. The current implementation supports only shmfs/tmpfs; other filesystems return -ENOSYS.

"Some app allocates large tmpfs files, then when some task quits and some client disconnect, some memory can be released. However the only way to release tmpfs-swap is to MADV_REMOVE." - Andrea Arcangeli

Databases want to use this feature to drop a section of their buffer pool (shared memory segments) without writing back to disk/swap space. This feature is also useful for supporting hot-plug memory on UML.

Concerns raised by Andrew Morton:
- "We have no plan for holepunching! If we _do_ have such a plan (or might in the future) then what would the API look like? I think sys_holepunch(fd, start, len), so we should start out with that."
- Using madvise is very weird, because people will ask "why do I need to mmap my file before I can stick a hole in it?"
- None of the other madvise operations call into the filesystem in this manner. A broad question is: is this capability an MM operation or a filesystem operation? truncate, for example, is a filesystem operation which sometimes has MM side-effects. madvise is an mm operation and with this patch, it gains FS side-effects, only they're really, really significant ones.

Comments:
- Andrea suggested the fs operation too, but then it's more efficient to have it as an mm operation with fs side effects, because the caller doesn't immediately know the fd and physical offset of the range. It's possible to fix that up in userland and use the fs operation, but it's more expensive; the vmas are already in the kernel and we can use them.

Short term plan & future direction:
- We seem to need this interface only for shmfs/tmpfs files in the short term. We have to add hooks into the filesystem for correctness and completeness. This is what this patch does.
- In the future, the plan is to support both fs and mmap apis as well. This also involves (other) filesystem-specific functions to be implemented.
- The current patch doesn't support VM_NONLINEAR - which can be addressed in the future.

Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com> Cc: Hugh Dickins <hugh@veritas.com> Cc: Andrea Arcangeli <andrea@suse.de> Cc: Michael Kerrisk <mtk-manpages@gmx.net> Cc: Ulrich Drepper <drepper@redhat.com> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
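A userspace usage sketch of the new interface: map a tmpfs file, touch its pages, then punch out the backing store for part of the range with madvise(MADV_REMOVE). It assumes /dev/shm is tmpfs and that the kernel and libc headers know MADV_REMOVE; on unsupported filesystems the call fails as described above.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 16 * 4096;
        int fd = open("/dev/shm/madv_remove_demo", O_RDWR | O_CREAT | O_TRUNC, 0600);

        if (fd < 0 || ftruncate(fd, (off_t)len) < 0)
            return 1;

        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        memset(p, 0xab, len);                      /* instantiate tmpfs pages */

        /* release the pages and their backing store in the middle of the file */
        if (madvise(p + 4096, 8 * 4096, MADV_REMOVE) != 0)
            perror("madvise(MADV_REMOVE)");

        munmap(p, len);
        close(fd);
        unlink("/dev/shm/madv_remove_demo");
        return 0;
    }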

- 04 January 2006, 1 commit

Committed by Stephen Hemminger
Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>

- 02 January 2006, 1 commit

Committed by Brian Gerst
Ignore asm-offsets.h for all arches. Signed-off-by: Brian Gerst <bgerst@didntduck.org> Signed-off-by: Sam Ravnborg <sam@ravnborg.org>

- 15 December 2005, 2 commits

Committed by Jordan Crouse
Changes here include removing all of CONFIG_PM while it is being repeatedly smacked with a lead pipe, moving the BURSTMODE param to a #define (it should be defined almost always anyway), fixing the rqsize stuff, pulling ide_ioreg_t, and general cleanups and whatnot. Signed-off-by: Jordan Crouse <jordan.crouse@amd.com> Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>

Committed by Jordan Crouse
bart: slightly modified by me. Signed-off-by: Jordan Crouse <jordan.crouse@amd.com> Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>

- 01 December 2005, 1 commit

Committed by Ralf Baechle
From Daniel Jacobowitz <dan@debian.org>. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

- 18 November 2005, 11 commits

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
There is no definition for seadint_init(), and the unprotected prototype breaks compilation of assembler files. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
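For background, the conventional way to keep C-only declarations out of assembler builds is an __ASSEMBLY__ guard, sketched below; this is shown as context for the breakage described above, not as a claim about what this particular commit changed.

    #ifndef __ASSEMBLY__
    extern void seadint_init(void);
    #endif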

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Arnaud Giersch
Signed-off-by: Arnaud Giersch <arnaud.giersch@free.fr> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Arnaud Giersch
Add a const qualifier to the addr parameter of writes##bwlq. Signed-off-by: Arnaud Giersch <arnaud.giersch@free.fr> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
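A simplified, hand-expanded sketch of one writes##bwlq instance, just to show where the const belongs; the real MIPS <asm/io.h> generates these with a macro and uses the kernel's writeb()/__iomem machinery, so the exact body differs.

    static inline void writesb(volatile void __iomem *mem, const void *addr,
                               unsigned int count)
    {
        const u8 *p = addr;              /* callers pass read-only source buffers */

        while (count--)
            writeb(*p++, mem);
    }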

Committed by Arnaud Giersch
Add the __iomem qualifier to the crime and mace pointers. Signed-off-by: Arnaud Giersch <arnaud.giersch@free.fr> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Arnaud Giersch
Fix, complete, and indent the IP32 parport definitions. The definitions were wrong for CTXINUSE and DMACTIVE (off by a 1-bit shift). Add the macros DATA_BOUND, DATALEN_SHIFT, and CTRSHIFT. Signed-off-by: Arnaud Giersch <arnaud.giersch@free.fr> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

Committed by Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

- 14 November 2005, 1 commit

Committed by Nick Piggin
Introduce an atomic_inc_not_zero operation. Make this a special case of atomic_add_unless, because the lockless pagecache actually wants atomic_inc_not_negativeone due to its offset refcount. Signed-off-by: Nick Piggin <npiggin@suse.de> Cc: "Paul E. McKenney" <paulmck@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
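The relationship is easiest to see in a userspace C11 analogue (the kernel versions are built on atomic_cmpxchg(); this is a sketch with renamed helpers, not the kernel source): add_unless(v, a, u) adds a unless the current value is u, and inc_not_zero is simply add_unless(v, 1, 0).

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool my_atomic_add_unless(atomic_int *v, int a, int u)
    {
        int c = atomic_load(v);

        while (c != u) {
            /* on failure c is reloaded with the current value and we retry */
            if (atomic_compare_exchange_weak(v, &c, c + a))
                return true;
        }
        return false;                    /* hit the excluded value u */
    }

    static bool my_atomic_inc_not_zero(atomic_int *v)
    {
        return my_atomic_add_unless(v, 1, 0);
    }

    int main(void)
    {
        atomic_int live = 2, dead = 0;

        printf("live: %d\n", my_atomic_inc_not_zero(&live));   /* 1: reference taken */
        printf("dead: %d\n", my_atomic_inc_not_zero(&dead));   /* 0: object already gone */
        return 0;
    }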