1. 12 August 2007 (4 commits)
    • stifb: detect cards in double buffer mode more reliably · 04a3f959
      Committed by Helge Deller
      Visualize-EG, Graffiti and A4450A graphics cards on PARISC can
      be configured in double-buffer and standard mode, but the stifb
      driver supports standard mode only.
      This patch detects double-buffered cards more reliably.
      
      It is a real bugfix for a very nasty problem for all parisc users who have
      wrongly configured their graphics card.  The problem: the stifb graphics driver
      will not detect that the card is wrongly configured and nevertheless enables
      graphics mode, which it shouldn't.  In the end, the user will see no further
      updates / boot messages on the screen.
      
      We had already documented this problem in our FAQ
      (http://parisc-linux.org/faq/index.html#viseg "Why do I get corrupted graphics
      with my Vis-EG/Graffiti/A4450A card?"), but people still run into it.
      Having this fix in as early as possible will help us.
      Signed-off-by: Helge Deller <deller@gmx.de>
      Signed-off-by: Antonino Daplas <adaplas@gmail.com>
      Cc: <stable@kernel.org>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • direct-io: fix error-path crashes · 6a648fa7
      Committed by Badari Pulavarty
      Need to initialize map_bh.b_state to zero.  Otherwise, in case of a faulty
      user-buffer it's possible to go into dio_zero_block() and submit a page by
      mistake - since it checks for buffer_new().
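      
      A minimal sketch of the fix described above, assuming the 2.6.22-era
      fs/direct-io.c where the cached buffer_head lives in dio->map_bh; the
      exact placement of the assignment is an assumption, only the field and
      helper names come from the text above:
      
          /*
           * Clear the cached buffer_head state up front so a stale BH_New
           * bit cannot steer a faulty user-buffer request into
           * dio_zero_block(), which keys off buffer_new(&dio->map_bh).
           */
          dio->map_bh.b_state = 0;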
      
      http://marc.info/?l=linux-kernel&m=118551339032528&w=2
      
      akpm: Linus had a (better) patch to just do a kzalloc() in there, but it got
      lost.  Probably this version is better for -stable anyway.
      Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
      Acked-by: Joe Jin <joe.jin@oracle.com>
      Acked-by: Zach Brown <zach.brown@oracle.com>
      Cc: gurudas pai <gurudas.pai@oracle.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86_64: fix HPET init race · b291aa7a
      Committed by Robin Holt
      I have had four separate system lockups attributable to this exact problem
      in two days of testing.  Instead of trying to handle all the weird end
      cases and wrap, how about changing it to look for exactly what we appear
      to want.
      
      The following patch removes a couple of races in setup_APIC_timer.  One occurs
      when the HPET advances the COUNTER past the T0_CMP value between the time
      the T0_CMP was originally read and when COUNTER is read.  This results in
      a delay waiting for the counter to wrap.  The other results from the counter
      wrapping.
      
      This change takes a snapshot of T0_CMP at the beginning of the loop and
      simply loops until T0_CMP has changed (a tick has happened).
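      
      A rough sketch of that loop (assuming the hpet_readl()/HPET_T0_CMP
      accessors used by the x86_64 time code of that era; this shows the idea,
      not the verbatim patch):
      
          /* Wait for exactly one HPET tick: snapshot the comparator and
           * spin until the hardware advances it, instead of comparing the
           * free-running counter against T0_CMP (which can be overtaken
           * or wrap). */
          unsigned int cmp = hpet_readl(HPET_T0_CMP);
          while (hpet_readl(HPET_T0_CMP) == cmp)
                  cpu_relax();
          /* a tick has happened; the comparator now holds the next deadline */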
      
      <later>
      
      I have one small concern about the patch.  I am not sure it meets the intent
      as well as it should.  I think we are trying to match APIC timer interrupts up
      with the hpet counter increment.  The event which appears to be disturbing
      this loop in our test environment is the NMI watchdog.  What we believe has
      been happening with the existing code is the setup_APIC_timer loop has read
      the CMP value, and the NMI watchdog code fires for the first time.  This
      results in a series of icache miss slowdowns and by the time we get back to
      things it has wrapped.
      
      I think this code is trying to get the CMP as close to the counter value as
      possible.  If that is the intent, maybe we should really be testing against a
      "window" around the CMP.  Something like COUNTER = CMP+/2.  It appears COUNTER
      should get advanced every 89nSec (IIRC).  The above seems like an unreasonably
      small window, but may be necessary.  Without documentation, I am not sure of
      the original intent with this code.
      
      In summary, this code fixes my boot hangs, but since I am not certain of the
      intent of the existing code, I am not certain this has not introduced new bugs
      or unexpected behaviors.
      Signed-off-by: Robin Holt <holt@sgi.com>
      Acked-by: Andi Kleen <ak@suse.de>
      Cc: Vojtech Pavlik <vojtech@suse.cz>
      Cc: "Aaron Durbin" <adurbin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Blackfin arch: after removing fs.h from mm.h, fix the breakage on Blackfin arch · d31c5ab1
      Committed by Bryan Wu
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Bryan Wu <bryan.wu@analog.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 10 August 2007 (3 commits)
    • SLUB: Fix format specifier in Documentation/vm/slabinfo.c · ac078602
      Committed by Jesper Juhl
      There's a little problem in Documentation/vm/slabinfo.c
      The code is using "%d" in a printf() call to print an 'unsigned long'.
      This patch corrects it to use "%lu" instead.
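      
      For illustration, a self-contained example of the mismatch (generic
      code, not the actual slabinfo.c lines): passing an unsigned long to a
      "%d" conversion is undefined behaviour and prints garbage once the
      value no longer fits in an int, so the length modifier must match the
      argument type.
      
          #include <stdio.h>
          
          int main(void)
          {
                  unsigned long objects = 3000000000UL;  /* larger than INT_MAX */
          
                  /* printf("%d\n", objects);  wrong: "%d" expects an int */
                  printf("%lu\n", objects);    /* "%lu" matches unsigned long */
                  return 0;
          }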
      Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
    • SLUB: Fix dynamic dma kmalloc cache creation · 1ceef402
      Committed by Christoph Lameter
      The dynamic dma kmalloc creation can run into trouble if a
      GFP_ATOMIC allocation is the first one performed for a certain size
      of dma kmalloc slab.
      
      - Move the adding of the slab to sysfs into a workqueue
        (sysfs does GFP_KERNEL allocations)
      - Do not call kmem_cache_destroy() (uses slub_lock)
      - Only acquire the slub_lock once and, if we cannot wait, do a trylock
        (see the sketch after this list).
      
        This introduces a slight risk of the first kmalloc(x, GFP_DMA|GFP_ATOMIC)
        for a range of sizes failing due to another process holding the slub_lock.
        However, we only need to acquire the lock once in order to establish
        each power-of-two DMA kmalloc cache. The possible conflict is with the
        slub_lock taken during slab management actions (create / remove slab cache).
      
        It is rather typical that a driver will first fill its buffers using
        GFP_KERNEL allocations which will wait until the slub_lock can be acquired.
        Drivers will also create their slab caches outside of an atomic
        context before starting to use atomic kmalloc from an interrupt context.
      
        If there are any failures then they will occur early after boot or when
        loading multiple drivers concurrently. Drivers can already accommodate
        failures of GFP_ATOMIC for other reasons. Retries will then create the slab.
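      
      A sketch of the "acquire once, trylock when we cannot wait" pattern from
      the last bullet, assuming slub_lock is the rw_semaphore guarding the slab
      cache list as in mm/slub.c of that era.  The helper and array names are
      illustrative reconstructions, not the verbatim patch; only slub_lock,
      GFP_DMA/GFP_ATOMIC and the deferred sysfs add come from the text above:
      
          static struct kmem_cache *dma_kmalloc_cache(int index, gfp_t flags)
          {
                  struct kmem_cache *s = kmalloc_caches_dma[index];  /* illustrative array */
          
                  if (s)
                          return s;       /* someone else already created it */
          
                  if (flags & __GFP_WAIT)
                          down_write(&slub_lock);                 /* caller may sleep */
                  else if (!down_write_trylock(&slub_lock))
                          return NULL;    /* atomic caller: let kmalloc() fail and retry later */
          
                  /*
                   * ... create the power-of-two DMA cache here and defer the
                   * sysfs registration to a workqueue, since sysfs performs
                   * GFP_KERNEL allocations ...
                   */
          
                  up_write(&slub_lock);
                  return kmalloc_caches_dma[index];
          }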
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
    • SLUB: Remove checks for MAX_PARTIAL from kmem_cache_shrink · fcda3d89
      Committed by Christoph Lameter
      The MAX_PARTIAL checks were supposed to be an optimization. However, slab
      shrinking is a manually triggered process either through running slabinfo
      or by the kernel calling kmem_cache_shrink.
      
      If one really wants to shrink a slab then all operations should be done
      regardless of the size of the partial list. This also fixes an issue that
      could surface if the number of partial slabs was initially above MAX_PARTIAL
      in kmem_cache_shrink and later dropped below MAX_PARTIAL through the
      elimination of empty slabs on the partial list (rare). In that case a few
      slabs may be left off the partial list (and only be put back when they
      are empty).
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
  3. 09 August 2007 (33 commits)