1. 12 Aug 2007, 2 commits
    • x86_64: fix HPET init race · b291aa7a
      Committed by Robin Holt
      I have had four separate system lockups attributable to this exact problem
      in two days of testing.  Instead of trying to handle all the weird end
      cases and wrap, how about changing it to look for exactly what we appear
      to want.
      
      The following patch removes a couple races in setup_APIC_timer.  One occurs
      when the HPET advances the COUNTER past the T0_CMP value between the time
      the T0_CMP was originally read and when COUNTER is read.  This results in
      a delay waiting for the counter to wrap.  The other results from the counter
      wrapping.
      
      This change takes a snapshot of T0_CMP at the beginning of the loop and
      simply loops until T0_CMP has changed (a tick has happened).
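      A self-contained sketch of that loop is below.  hpet_readl() is a
      simulated stand-in here (in the kernel it reads the memory-mapped HPET
      registers), and the sketch only illustrates the "snapshot, then wait for
      a change" idea, not the actual setup_APIC_timer() code.

          #include <stdint.h>
          #include <stdio.h>

          #define HPET_T0_CMP 0x108       /* timer 0 comparator offset */

          static uint32_t fake_cmp = 1000;

          /* Simulated register read: pretend the comparator reloads after a
           * few polls, i.e. the HPET delivered a tick. */
          static uint32_t hpet_readl(unsigned long reg)
          {
                  static int polls;

                  if (reg == HPET_T0_CMP && ++polls > 3)
                          fake_cmp += 256;
                  return fake_cmp;
          }

          int main(void)
          {
                  /* Snapshot T0_CMP once ... */
                  uint32_t cmp0 = hpet_readl(HPET_T0_CMP);

                  /* ... then spin until it changes: a tick has happened.
                   * Nothing depends on where COUNTER currently is, so neither
                   * the overshoot race nor the wrap race applies. */
                  while (hpet_readl(HPET_T0_CMP) == cmp0)
                          ;

                  printf("tick: T0_CMP moved from %u to %u\n",
                         (unsigned)cmp0, (unsigned)fake_cmp);
                  return 0;
          }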
      
      <later>
      
      I have one small concern about the patch.  I am not sure it meets the intent
      as well as it should.  I think we are trying to match APIC timer interrupts up
      with the hpet counter increment.  The event which appears to be disturbing
      this loop in our test environment is the NMI watchdog.  What we believe has
      been happening with the existing code is the setup_APIC_timer loop has read
      the CMP value, and the NMI watchdog code fires for the first time.  This
      results in a series of icache-miss slowdowns, and by the time we get back
      to the loop the counter has wrapped.
      
      I think this code is trying to get the CMP as close to the counter value as
      possible.  If that is the intent, maybe we should really be testing against a
      "window" around the CMP.  Something like COUNTER = CMP+/2.  It appears COUNTER
      should get advanced every 89nSec (IIRC).  The above seems like an unreasonably
      small window, but may be necessary.  Without documentation, I am not sure of
      the original intent with this code.
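      If that window idea were pursued, the test might look something like the
      hypothetical helper below; counter_in_window() and the +/-2 width are
      purely illustrative, not code from the kernel.

          #include <stdint.h>
          #include <stdbool.h>

          /* Hypothetical "window" test: accept the sync point when COUNTER is
           * within +/-2 ticks of T0_CMP.  The unsigned subtraction wraps
           * modulo 2^32, so the check stays valid across a counter wrap. */
          static bool counter_in_window(uint32_t counter, uint32_t cmp)
          {
                  uint32_t delta = counter - cmp;

                  return delta <= 2 || delta >= (uint32_t)-2;
          }

          int main(void)
          {
                  return !counter_in_window(1002, 1000);  /* 0 == in window */
          }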
      
      In summary, this code fixes my boot hangs, but since I am not certain of the
      intent of the existing code, I am not certain this has not introduced new bugs
      or unexpected behaviors.
      Signed-off-by: Robin Holt <holt@sgi.com>
      Acked-by: Andi Kleen <ak@suse.de>
      Cc: Vojtech Pavlik <vojtech@suse.cz>
      Cc: "Aaron Durbin" <adurbin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Blackfin arch: after removing fs.h from mm.h, fix the broken on Blackfin arch · d31c5ab1
      Committed by Bryan Wu
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Bryan Wu <bryan.wu@analog.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 10 Aug 2007, 3 commits
    • SLUB: Fix format specifier in Documentation/vm/slabinfo.c · ac078602
      Committed by Jesper Juhl
      There's a little problem in Documentation/vm/slabinfo.c:
      The code is using "%d" in a printf() call to print an 'unsigned long'.
      This patch corrects it to use "%lu" instead.
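      For illustration only (this is not the slabinfo.c code itself), the class
      of bug and the fix look like this:

          #include <stdio.h>

          int main(void)
          {
                  unsigned long objects = 3000000000UL;   /* larger than INT_MAX */

                  /* printf("%d\n", objects) would be wrong: "%d" expects an int,
                   * so the behaviour is undefined and the printed value is
                   * typically garbage. */
                  printf("%lu\n", objects);               /* matches unsigned long */
                  return 0;
          }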
      Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
    • SLUB: Fix dynamic dma kmalloc cache creation · 1ceef402
      Committed by Christoph Lameter
      The dynamic DMA kmalloc cache creation can run into trouble if a
      GFP_ATOMIC allocation is the first one performed for a certain size
      of DMA kmalloc slab.
      
      - Move the adding of the slab to sysfs into a workqueue
        (sysfs does GFP_KERNEL allocations)
      - Do not call kmem_cache_destroy() (uses slub_lock)
      - Only acquire the slub_lock once and--if we cannot wait--do a trylock.
      
        This introduces a slight risk of the first kmalloc(x, GFP_DMA|GFP_ATOMIC)
        for a range of sizes failing due to another process holding the slub_lock.
        However, we only need to acquire the spinlock once in order to establish
        each power of two DMA kmalloc cache. The possible conflict is with the
        slub_lock taken during slab management actions (create / remove slab cache).
      
        It is rather typical that a driver will first fill its buffers using
        GFP_KERNEL allocations which will wait until the slub_lock can be acquired.
        Drivers will also create their slab caches first outside of an atomic
        context before starting to use atomic kmalloc from an interrupt context.
      
        If there are any failures, they will occur early after boot or when
        multiple drivers are loaded concurrently. Drivers can already accommodate
        GFP_ATOMIC failures for other reasons; a retry will then create the slab.
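      A userspace analogy of that locking strategy is sketched below;
      cache_lock, create_dma_cache() and may_block are illustrative names, not
      the mm/slub.c ones.  The point is simply: block on the lock when the
      caller may sleep, trylock and bail out when it may not, and leave the
      heavyweight (sysfs-style) registration to be done later by a worker.

          #include <pthread.h>
          #include <stdbool.h>
          #include <stdio.h>

          static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

          static bool create_dma_cache(bool may_block)
          {
                  if (may_block) {
                          /* GFP_KERNEL-like path: we may sleep, take the lock. */
                          pthread_mutex_lock(&cache_lock);
                  } else if (pthread_mutex_trylock(&cache_lock) != 0) {
                          /* GFP_ATOMIC-like path: the lock is contended, so
                           * fail now and let a later allocation retry. */
                          return false;
                  }

                  printf("cache created\n");  /* stands in for building the slab */
                  /* sysfs-style registration would be queued to a worker here,
                   * since it needs allocations that can sleep. */

                  pthread_mutex_unlock(&cache_lock);
                  return true;
          }

          int main(void)
          {
                  return create_dma_cache(true) ? 0 : 1;
          }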
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
    • SLUB: Remove checks for MAX_PARTIAL from kmem_cache_shrink · fcda3d89
      Committed by Christoph Lameter
      The MAX_PARTIAL checks were supposed to be an optimization. However, slab
      shrinking is a manually triggered process either through running slabinfo
      or by the kernel calling kmem_cache_shrink.
      
      If one really wants to shrink a slab then all operations should be done
      regardless of the size of the partial list. This also fixes an issue that
      could surface if the number of partial slabs was initially above MAX_PARTIAL
      in kmem_cache_shrink and later dropped below MAX_PARTIAL through the
      elimination of empty slabs on the partial list (rare). In that case a few
      slabs may be left off the partial list (and only be put back when they
      are empty).
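      As a userspace illustration of the behaviour after this change (the
      struct and function names below are made up, not the mm/slub.c ones),
      shrinking now walks every partial slab and drops the empty ones, with no
      MAX_PARTIAL cut-off:

          #include <stdio.h>

          struct fake_slab { int inuse; };        /* objects still allocated */

          /* Walk the whole partial list: discard empty slabs, keep the rest.
           * There is deliberately no MAX_PARTIAL early-out. */
          static int shrink_partial_list(struct fake_slab *slabs, int nr)
          {
                  int kept = 0;

                  for (int i = 0; i < nr; i++) {
                          if (slabs[i].inuse == 0)
                                  continue;               /* empty: free it */
                          slabs[kept++] = slabs[i];       /* in use: keep it */
                  }
                  return kept;
          }

          int main(void)
          {
                  struct fake_slab list[] = { {3}, {0}, {1}, {0}, {7} };

                  printf("partial slabs left: %d\n",
                         shrink_partial_list(list, 5));
                  return 0;
          }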
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
  3. 09 Aug 2007, 35 commits