- 22 April 2005, 1 commit
-
-
Committed by David Mosberger-Tang
The ia64 version of fls() never worked as intended (the bit numbering was off by 1 and fls(0) was undefined). This patch fixes the problem by using a popcnt-based fls(), which on McKinley-derived cores is slightly faster than both ia64_fls() and generic_fls(). The resulting code, however, is bigger (7-8 bundles instead of about 3 bundles). Also switch ia64_popcnt() to __builtin_popcountl() for GCC v3.4 or newer, since the compiler can predicate and schedule it better. Thanks to Simon Derr and Matt Mackall for tracking down this bug. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
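A minimal sketch of the popcount-based fls() idea described above, in portable C rather than the actual ia64 code (the real version uses the popcnt instruction, or __builtin_popcountl() on GCC 3.4 and newer):

    #include <stdio.h>

    /* Sketch only: smear the highest set bit downward, then count the
     * set bits; fls(0) is 0 and fls(1) is 1, as intended. */
    static int fls_popcnt(unsigned int x)
    {
            x |= x >> 1;
            x |= x >> 2;
            x |= x >> 4;
            x |= x >> 8;
            x |= x >> 16;
            return __builtin_popcount(x);
    }

    int main(void)
    {
            printf("%d %d %d\n", fls_popcnt(0), fls_popcnt(1), fls_popcnt(0x80000000u));
            return 0;
    }

The bit smearing is what makes the popcount equal the 1-based index of the highest set bit.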
-
- 21 April 2005, 6 commits
-
-
Committed by Karsten Keil
We no longer use DLT_LINUX_SLL for active/passive filters but DLT_PPP_WITHDIRECTION, which needs 1 as the outbound flag. Signed-off-by: Karsten Keil <kkeil@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Alexander Nyberg
The new out-of-line put_user() assembly on x86_64 changes %rcx without telling GCC about it, causing things like: http://bugme.osdl.org/show_bug.cgi?id=4515 See to it that %rcx is not changed (made consistent with get_user()). Signed-off-by: Alexander Nyberg <alexn@telia.com> Signed-off-by: ak@suse.de Signed-off-by: Linus Torvalds <torvalds@osdl.org>
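As a hedged illustration of the rule involved (not the kernel's actual put_user code): any inline assembly that modifies %rcx must declare that to GCC, for instance via the clobber list, or callers can be miscompiled exactly as in the bug report above.

    #include <stdio.h>

    /* Illustration only (x86_64): the "rcx" clobber tells GCC the
     * register is trashed inside the asm statement. */
    static unsigned long double_via_rcx(unsigned long v)
    {
            unsigned long out;

            asm volatile("mov %1, %%rcx\n\t"
                         "add %%rcx, %%rcx\n\t"
                         "mov %%rcx, %0"
                         : "=r" (out)
                         : "r" (v)
                         : "rcx");
            return out;
    }

    int main(void)
    {
            printf("%lu\n", double_via_rcx(21));
            return 0;
    }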
-
Committed by James Bottomley
My version of gcc doesn't warn about this error (a declaration in the middle of a block of statements). The fix is simple (this also corrects the return code; for init functions it should be zero or an error).
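A hedged, generic illustration of both points (hypothetical function names, not the driver in question): C89 compilers require declarations before the first statement in a block, and an init-style function should return zero on success or a negative error code.

    #include <stdio.h>

    static void do_step(int i)
    {
            printf("step %d\n", i);
    }

    static int example_init(void)
    {
            int i;                  /* declared before any statement */

            do_step(0);
            for (i = 1; i < 4; i++)
                    do_step(i);

            return 0;               /* success; failures return a negative error */
    }

    int main(void)
    {
            return example_init();
    }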
-
Committed by Al Viro
PREEMPT+SMP support - see if it looks sane... Signed-off-by: Al Viro <viro@parcelfarce.linux.theplanet.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Linus Torvalds
Releasing this will also make "git" the official source control thing. Here's to hoping for the best.
-
- 20 April 2005, 24 commits
-
-
Committed by Herbert Xu
The following patch just makes the header part of the skb writeable. This is needed since we modify the IP headers just a few lines below. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
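One common pattern for making skb header bytes safe to modify, shown as a hedged sketch (hypothetical helper name; not necessarily the exact calls this patch uses): make sure the bytes are in the linear area and unshare the data if the skb is cloned.

    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/skbuff.h>

    /* Sketch only: make at least `len` header bytes writable. */
    static int example_make_header_writable(struct sk_buff *skb, unsigned int len)
    {
            if (!pskb_may_pull(skb, len))           /* bytes present and linear? */
                    return -EINVAL;
            if (skb_cloned(skb) &&
                pskb_expand_head(skb, 0, 0, GFP_ATOMIC))    /* unshare the data */
                    return -ENOMEM;
            return 0;
    }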
-
Committed by Herbert Xu
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Arnaldo Carvalho de Melo
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Stephen Hemminger
Here is a revised alternative that uses BUG_ON/WARN_ON (as suggested by Herbert Xu) to eliminate NET_CALLER. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
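For reference, a hedged illustration of the two macros mentioned above (a hypothetical check, not the actual skbuff code): WARN_ON() prints a warning with a stack trace and continues, while BUG_ON() stops on a fatal inconsistency.

    #include <linux/bug.h>

    /* Illustration only. */
    static void example_sanity_check(int refcount, const void *ptr)
    {
            WARN_ON(refcount < 0);          /* suspicious but survivable */
            BUG_ON(ptr == NULL);            /* unrecoverable: halt here */
    }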
-
Committed by David S. Miller
Noticed by Herbert Xu. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Thomas Graf
Be kind to userspace and don't force it to hardcode protocol families, just to have that changed again once we support routing rules for more than one protocol family. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Herbert Xu
While looking at this problem I noticed that IPv6 was sometimes looking at inet->recverr, which is bogus. Here is a patch to correct that and use np->recverr. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net>
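A hedged, fragment-sized sketch of the distinction (hypothetical helper name; field layout as of this era): on an IPv6 socket the error-reporting flag lives in the ipv6_pinfo, so np->recverr is the field to consult rather than inet->recverr.

    #include <linux/ipv6.h>

    /* Sketch only. */
    static int example_wants_icmp_errors(struct sock *sk)
    {
            struct ipv6_pinfo *np = inet6_sk(sk);

            return np->recverr;     /* not inet_sk(sk)->recverr */
    }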
-
Committed by Herbert Xu
So here is a patch that introduces skb_store_bits -- the opposite of skb_copy_bits -- and uses them to read/write the csum field in rawv6. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
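A hedged usage sketch of the pair (hypothetical function name and offset handling): skb_copy_bits() reads bytes out of an skb whether or not they sit in the linear area, and skb_store_bits() writes them back the same way.

    #include <linux/errno.h>
    #include <linux/skbuff.h>

    /* Sketch only: read a 16-bit checksum field at `off`, zero it, and
     * store it back, coping with nonlinear skbs transparently. */
    static int example_rw_csum(struct sk_buff *skb, int off)
    {
            __u16 csum;

            if (skb_copy_bits(skb, off, &csum, sizeof(csum)) < 0)
                    return -EFAULT;
            csum = 0;
            return skb_store_bits(skb, off, &csum, sizeof(csum));
    }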
-
Committed by YOSHIFUJI Hideaki
From: Tushar Gohad <tgohad@mvista.com> Signed-off-by: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Herbert Xu
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Hugh Dickins
Once all the MMU architectures define FIRST_USER_ADDRESS, remove the hack from mmap.c which derived it from FIRST_USER_PGD_NR. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Hugh Dickins
Replace the misleading definition of FIRST_USER_PGD_NR 0 by a definition of FIRST_USER_ADDRESS 0 in all the MMU architectures beyond arm and arm26. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
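For context, a hedged sketch of the kind of per-architecture definition being introduced (the guard macro here is invented for illustration; each arch simply defines the constant in its own pgtable.h): most MMU architectures can start user mappings at address zero, while arm and arm26 keep the first page for the machine vectors.

    /* Sketch only, not any one architecture's header. */
    #ifdef EXAMPLE_ARCH_RESERVES_VECTOR_PAGE
    #define FIRST_USER_ADDRESS      PAGE_SIZE       /* arm/arm26 style */
    #else
    #define FIRST_USER_ADDRESS      0UL             /* everyone else */
    #endif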
-
Committed by Hugh Dickins
ARM26: define FIRST_USER_ADDRESS as PAGE_SIZE (beyond the machine vectors when they are mapped low), and use that definition in place of the locally defined MIN_MAP_ADDR. Previously, ARM26 permitted user mappings at 0 if the machine vectors were mapped high; but that's inconsistent with ARM, and FIRST_USER_ADDRESS would then have to be determined at runtime. Let's fix it at PAGE_SIZE throughout the architecture. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Hugh Dickins
ARM: define FIRST_USER_ADDRESS as PAGE_SIZE (beyond the machine vectors when they are mapped low), and use that definition in place of the locally defined MIN_MAP_ADDR. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Hugh Dickins
Remove the use of FIRST_USER_PGD_NR from sys_mincore: it's inconsistent (no other syscall refers to it), unnecessary (sys_mincore loops over vmas further down) and incorrect (misses user addresses in ARM's first pgd). Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Hugh Dickins
The patches to free_pgtables by vma left problems on any architectures which leave some user address page table entries unencapsulated by a vma. Andi has fixed the 32-bit vDSO on x86_64 to use a vma. Now fix arm (and arm26), whose first PAGE_SIZE is reserved (perhaps) for machine vectors. Our calls to free_pgtables must not touch that area, and exit_mmap's BUG_ON(nr_ptes) must allow that arm's get_pgd_slow may (or may not) have allocated an extra page table, which its free_pgd_slow would free later. FIRST_USER_PGD_NR has misled me and others: until all the arches define FIRST_USER_ADDRESS instead, a hack in mmap.c derives one from t'other. This patch fixes the bugs; the remaining patches just clean it up. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Hugh Dickins
Once we're strict about clearing away page tables, hugetlb_prefault can assume there are no page tables left within its range. Since the other arches continue if !pte_none here, let i386 do the same. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Hugh Dickins
While dabbling here in mmap.c, clean up the mysterious "mpnt"s to "vma"s. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Hugh Dickins
ia64 and sparc64 hurriedly had to introduce their own variants of pgd_addr_end, to leapfrog over the holes in their virtual address spaces which the final clear_page_range suddenly presented when converted from pgd_index to pgd_addr_end. But now that free_pgtables respects the vma list, those holes are never presented, and the arch variants can go. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
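For reference, the generic pgd_addr_end() that such arch variants duplicate looks roughly like this (quoted from memory, so treat it as a sketch): step to the next PGDIR_SIZE boundary but never past end, with the -1 trick keeping the wrap-to-zero case correct.

    #define pgd_addr_end(addr, end)                                         \
    ({      unsigned long __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;  \
            (__boundary - 1 < (end) - 1) ? __boundary : (end);              \
    })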
-
Committed by Hugh Dickins
ia64 and ppc64 had hugetlb_free_pgtables functions which were no longer being called, and it wasn't obvious what to do about them. The ppc64 case turns out to be easy: the associated tables are noted elsewhere and freed later, so it is safe either to skip its hugetlb areas or to go through the motions of freeing nothing. Since ia64 does need a special case, restore to ppc64 the special case of skipping them. The ia64 hugetlb case has been broken since pgd_addr_end went in, though it probably appeared to work okay if you just had one such area; in fact it's been broken much longer if you consider a long munmap spanning from another region into the hugetlb region. In the ia64 hugetlb region, more virtual address bits are available than in the other regions, yet the page tables are structured the same way: the page at the bottom is larger. Here we need to scale down each addr before passing it to the standard free_pgd_range. Was about to write a hugely_scaled_down macro, but found htlbpage_to_page already exists for just this purpose. Fixed an off-by-one in ia64's is_hugepage_only_range. Uninline free_pgd_range to make it available to ia64. Make sure the vma-gathering loop in free_pgtables cannot join a hugepage_only_range to any other (safe to join huges? probably, but don't bother). Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Hugh Dickins
There's only one usage of MM_VM_SIZE(mm) left, and it's a troublesome macro because mm doesn't contain the (32-bit emulation?) info needed. But it too is only needed because we ignore the end from the vma list. We could make flush_pgtables return that end, or unmap_vmas. Choose the latter, since it's a natural fit with unmap_mapping_range_vma needing to know its restart addr. This does make more than a minimal change, but if unmap_vmas had returned the end before, this is how we'd have done it, rather than storing the break_addr in zap_details. unmap_vmas used to return the count of vmas scanned, but that's just debug which hasn't been useful in a while; and if we want the map_count 0 on exit check back, it can easily come from the final remove_vm_struct loop. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Hugh Dickins
Recent woes with some arches needing their own pgd_addr_end macro; and the 4-level clear_page_range regression since 2.6.10's clear_page_tables; and its long-standing well-known inefficiency in searching throughout the higher-level page tables for those few entries to clear and free: all can be blamed on ignoring the list of vmas when we free page tables. Replace exit_mmap's clear_page_range of the total user address space by free_pgtables operating on the mm's vma list; unmap_region uses it in the same way, giving a floor and ceiling beyond which it may not free tables. This brings lmbench fork/exec/sh numbers back to 2.6.10 (unless preempt is enabled, in which case latency fixes spoil unmap_vmas throughput). Beware: the do_mmap_pgoff driver failure case must now use unmap_region instead of zap_page_range, since a page table might have been allocated, and can only be freed while it is touched by some vma. Move free_pgtables from mmap.c to memory.c, where its lower levels are adapted from the clear_page_range levels. (Most of free_pgtables' old code was actually for a non-existent case, prev not properly set up, dating from before hch gave us split_vma.) Pass mmu_gather** in the public interfaces, since we might want to add latency lockdrops later; but no attempt to do so yet: going by vma should itself reduce latency. But what if is_hugepage_only_range? Those ia64 and ppc64 cases need careful examination: put that off until a later patch of the series. What of x86_64's 32-bit vdso page, which __map_syscall32 maps outside any vma? And the range passed to sparc64's flush_tlb_pgtables? It's less clear to me now that we need to do more than is done here - every PMD_SIZE ever occupied will be flushed, do we really have to flush every PGDIR_SIZE ever partially occupied? A shame to complicate it unnecessarily. Special thanks to David Miller for time spent repairing my ceilings. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
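A heavily simplified, hedged sketch of the floor/ceiling idea (hypothetical helper names, not the mm/memory.c implementation): walk the vma list and free only the page-table range each vma actually spans, clamped by its neighbours and by the caller-supplied floor and ceiling so that reserved or adjacent areas are never touched.

    #include <linux/mm.h>

    /* Sketch only: sketch_free_pgd_range() stands in for the real
     * range-freeing primitive. */
    static void sketch_free_pgd_range(unsigned long addr, unsigned long end,
                                      unsigned long floor, unsigned long ceiling);

    static void sketch_free_pgtables(struct vm_area_struct *vma,
                                     unsigned long floor, unsigned long ceiling)
    {
            while (vma) {
                    struct vm_area_struct *next = vma->vm_next;

                    sketch_free_pgd_range(vma->vm_start, vma->vm_end, floor,
                                          next ? next->vm_start : ceiling);
                    floor = vma->vm_end;
                    vma = next;
            }
    }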
-
Committed by Linus Torvalds
for 13 driver core, sysfs, and debugfs fixes.
-
Committed by Linus Torvalds
for 11 aoe bugfix patches.
-
- 19 April 2005, 9 commits
-
-
Committed by Linus Torvalds
-
Committed by Linus Torvalds
Yah, it does work to merge. Knock wood.
-
Committed by ecashin@coraid.com
Send outgoing packets in order. I can't use list.h, since sk_buff doesn't have a list_head but instead has two struct sk_buff pointers, and I want to avoid any extra memory allocation. Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
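A hedged sketch of the kind of hand-rolled FIFO this describes (hypothetical names, not the aoe driver's actual queue): chain sk_buffs through their own next pointers, so no list_head and no extra allocation are needed; real code would wrap these in the driver's locking.

    #include <linux/skbuff.h>

    static struct sk_buff *q_head, *q_tail;     /* sketch only */

    static void example_enqueue(struct sk_buff *skb)
    {
            skb->next = NULL;
            if (q_tail)
                    q_tail->next = skb;
            else
                    q_head = skb;
            q_tail = skb;
    }

    static struct sk_buff *example_dequeue(void)
    {
            struct sk_buff *skb = q_head;

            if (skb) {
                    q_head = skb->next;
                    if (!q_head)
                            q_tail = NULL;
                    skb->next = NULL;
            }
            return skb;
    }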
-
Committed by ecashin@coraid.com
Add support for disk statistics. Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Committed by ecashin@coraid.com
Add a note about the need for deadlock-free sk_buff allocation. Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Committed by ecashin@coraid.com
Document the environment variable for specifying the number of partitions per device. Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Committed by ecashin@coraid.com
Sparse cleanup from Alexey Dobriyan. Signed-off-by: Alexey Dobriyan <adobriyan@mail.ru> Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Committed by ecashin@coraid.com
Don't try to free a null bufpool. Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Committed by ecashin@coraid.com
Handle distros that have a udev rules file instead of a dir. Signed-off-by: Ed L. Cashin <ecashin@coraid.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-