1. 12 May 2007 (4 commits)
  2. 11 May 2007 (6 commits)
    • Revert "[PATCH] paravirt: Add startup infrastructure for paravirtualization" · 5a18c92a
      Authored by Eric W. Biederman
      This reverts commit c9ccf30d.
      
      Entering the kernel at startup_32 without passing our real-mode data in
      %esi, and without guaranteeing that physical and virtual addresses are
      identity mapped, makes head.S impossible to maintain.
      
      The only user of this infrastructure is lguest, which is not merged, so
      nothing we currently support will break by removing this over-designed
      nightmare; only the pending lguest patches will be affected.  The
      pending Xen patches use a different entry point.
      
      We are currently discussing what Xen and lguest need to do to boot the
      kernel in a more normal fashion, so using startup_32 in this weird manner is
      clearly not their long-term direction.
      
      So let's remove this code from head.S before it causes brain damage to people
      trying to maintain head.S.
      
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Zachary Amsden <zach@vmware.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5a18c92a
    • add upper-32-bits macro · 218e180e
      Authored by Andrew Morton
      We keep on getting "right shift count >= width of type" warnings when doing
      things like
      
      	sector_t s;
      
      	x = s >> 56;
      
      because with CONFIG_LBD=n, s is only 32-bit.  Similar problems can occur with
      dma_addr_t's.
      
      So add a simple wrapper function which code can use to avoid this warning.
      The above example would become
      
      	x = upper_32_bits(s) >> 24;
      
      The first user is in fact AFS.
      
      Cc: James Bottomley <James.Bottomley@SteelEye.com>
      Cc: "Cameron, Steve" <Steve.Cameron@hp.com>
      Cc: "Miller, Mike (OS Dev)" <Mike.Miller@hp.com>
      Cc: Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      218e180e
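      For illustration, such a wrapper boils down to doing the shift in two 16-bit
      steps so that neither step reaches the width of a 32-bit type.  The sketch
      below is a self-contained approximation rather than the exact merged
      definition; sector_t is reduced to a plain 32-bit typedef to mimic
      CONFIG_LBD=n, and sector_high_byte is just an illustrative caller:

      	#include <stdint.h>

      	typedef uint32_t u32;
      	typedef u32 sector_t;	/* CONFIG_LBD=n: sector_t is only 32 bits wide */

      	/* Shift in two 16-bit steps so neither step reaches the width of a
      	 * 32-bit type; this avoids "right shift count >= width of type". */
      	#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))

      	static u32 sector_high_byte(sector_t s)
      	{
      		/* was: s >> 56, which warns when sector_t is 32 bits wide */
      		return upper_32_bits(s) >> 24;
      	}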
    • slub: support concurrent local and remote frees and allocs on a slab · 894b8788
      Authored by Christoph Lameter
      Avoid atomic overhead in slab_alloc and slab_free
      
      SLUB needs to take the slab_lock for the per-cpu slabs to synchronize with
      potential kfree operations.  This patch avoids that need by moving all free
      objects onto a lockless_freelist.  The regular freelist continues to exist
      and is still used to free objects, so while we consume the
      lockless_freelist the regular freelist may build up objects.
      
      If we are out of objects on the lockless_freelist, we check the
      regular freelist.  If it has objects, we move those over to the
      lockless_freelist and start again.  This is a significant saving in
      terms of the atomic operations that have to be performed.
      
      We can even free directly to the lockless_freelist if we know that we are
      running on the same processor, which speeds up short-lived objects:
      they may be allocated and freed without taking the slab_lock.  This is
      particularly good for netperf.
      
      In order to maximize the effect of the new, faster hotpath we extract the
      hottest performance pieces into inlined functions.  These are then inlined
      into kmem_cache_alloc and kmem_cache_free, so hotpath allocation and
      freeing no longer require a subroutine call within SLUB.
      
      [I am not sure that it is worth doing this, because it changes the
      easy-to-read structure of slub just to reduce atomic ops.  However,
      there is someone out there with a benchmark on 4-way and 8-way processor
      systems that seems to show a 5% regression vs. slab; the regression seems
      to be due to SLUB's increased use of atomic operations.  I wonder if this
      is applicable or discernible at all in a real workload?]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      894b8788
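      As a rough sketch of the scheme described above, the per-cpu slab can be
      modelled as two singly linked freelists: the hot paths touch only the
      lockless one, and the lock is taken only to pull the regular freelist over
      or to free from another CPU.  The struct layout, function signatures and
      the slab_lock()/slab_unlock() stubs below are simplified stand-ins, not
      the real SLUB code:

      	#include <stddef.h>

      	/* Simplified model of the per-cpu slab described above. */
      	struct cpu_slab {
      		void **lockless_freelist;	/* owning CPU only, no atomics */
      		void **freelist;		/* shared list, under the slab lock */
      	};

      	/* Stand-ins for the real per-slab lock. */
      	static void slab_lock(struct cpu_slab *c)   { (void)c; }
      	static void slab_unlock(struct cpu_slab *c) { (void)c; }

      	static void *slab_alloc(struct cpu_slab *c)
      	{
      		void **object = c->lockless_freelist;

      		if (!object) {
      			/* Slow path: pull over whatever built up on the
      			 * regular freelist while we were consuming. */
      			slab_lock(c);
      			c->lockless_freelist = c->freelist;
      			c->freelist = NULL;
      			slab_unlock(c);
      			object = c->lockless_freelist;
      			if (!object)
      				return NULL;	/* slab exhausted */
      		}
      		/* Each free object stores a pointer to the next free object. */
      		c->lockless_freelist = (void **)*object;
      		return object;
      	}

      	static void slab_free(struct cpu_slab *c, void *x, int same_cpu)
      	{
      		void **object = x;

      		if (same_cpu) {
      			/* Fast path: push onto the lockless list, no lock. */
      			*object = c->lockless_freelist;
      			c->lockless_freelist = object;
      		} else {
      			slab_lock(c);
      			*object = c->freelist;
      			c->freelist = object;
      			slab_unlock(c);
      		}
      	}

      The point of the split is that the common case, allocating and freeing on
      the owning CPU, never touches the shared freelist, so no locked or atomic
      instruction is needed on that path.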
    • (two further commits from this day are not expanded in this view)
    • CRC ITU-T V.41 · 3e7cbae7
      Authored by Ivo van Doorn
      This adds the CRC calculation according to CRC ITU-T V.41 to the
      kernel's lib/ folder.

      The code has been derived from the rt2x00 driver, currently found only
      in the wireless-dev tree, but the library is generic and could be used
      by more drivers that currently carry their own implementation.
      Signed-off-by: Ivo van Doorn <IvDoorn@gmail.com>
      
      Also useful for the new firewire stack.
      Signed-off-by: Kristian Hoegsberg <krh@redhat.com>
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      3e7cbae7
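      For reference, CRC ITU-T V.41 is the 16-bit CRC with generator polynomial
      x^16 + x^12 + x^5 + 1 (0x1021), processed MSB first and conventionally
      started from an all-zero register.  A production version would typically
      be table-driven, as the kernel library is; the bit-by-bit sketch below,
      with an illustrative function name, computes the same value:

      	#include <stddef.h>
      	#include <stdint.h>

      	/* Bit-by-bit CRC ITU-T V.41: polynomial 0x1021, MSB first. */
      	static uint16_t crc_itu_t_bitwise(uint16_t crc, const uint8_t *buf,
      					  size_t len)
      	{
      		size_t i;
      		int bit;

      		for (i = 0; i < len; i++) {
      			crc ^= (uint16_t)buf[i] << 8;
      			for (bit = 0; bit < 8; bit++)
      				crc = (crc & 0x8000)
      					? (uint16_t)((crc << 1) ^ 0x1021)
      					: (uint16_t)(crc << 1);
      		}
      		return crc;
      	}

      Callers start from crc = 0 and can feed the data in several chunks,
      passing each intermediate result back in.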
  3. 10 May 2007 (30 commits)