1. 20 October 2005, 1 commit
    • [PATCH] swiotlb: make sure initial DMA allocations really are in DMA memory · 281dd25c
      Authored by Yasunori Goto
      This introduces a limit parameter to the core bootmem allocator; the new
      parameter indicates that physical memory allocated by the bootmem
      allocator should be within the requested limit.
      
      We also introduce the alloc_bootmem_low_pages_limit, alloc_bootmem_node_limit,
      and alloc_bootmem_low_pages_node_limit APIs, but alloc_bootmem_low_pages_limit
      is the only one used for swiotlb.
      
      The existing alloc_bootmem_low_pages() API could instead have been changed
      to pass the right limit to the core allocator.  But that would make the
      patch more intrusive for 2.6.14, as other arches use
      alloc_bootmem_low_pages().  We may do that post-2.6.14 as a cleanup.
      
      With this, swiotlb gets memory within 4G for both x86_64 and ia64
      arches.
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Ravikiran G Thirumalai <kiran@scalex86.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
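      
      A rough usage sketch for the new helper as swiotlb might call it. This is
      illustrative only: the exact signature of alloc_bootmem_low_pages_limit() and
      the surrounding swiotlb_init() code are assumptions based on the description
      above, not a quote of the patch.
      
          /* Sketch: ask the bootmem allocator for low pages that must lie below
           * 4 GB, so the swiotlb bounce buffers are usable for 32-bit DMA. */
          #define IO_TLB_SHIFT 11                      /* 2 KB slabs, as in lib/swiotlb.c */
      
          static unsigned long io_tlb_nslabs = 32768;  /* illustrative default size */
          static char *io_tlb_start;
      
          void __init swiotlb_init_sketch(void)
          {
                  io_tlb_start = alloc_bootmem_low_pages_limit(
                                  io_tlb_nslabs << IO_TLB_SHIFT, 0x100000000UL);
                  if (!io_tlb_start)
                          panic("swiotlb: cannot allocate bounce buffers below 4GB");
          }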
  2. 07 October 2005, 1 commit
    • [IA64] Avoid kernel hang during CMC interrupt storm · 76e677e2
      Authored by Bryan Sutula
      I've noticed a kernel hang during a storm of CMC interrupts, which was
      tracked down to the continual execution of the interrupt handler.
      
      There's code in the CMC handler that's supposed to disable CMC
      interrupts and switch to polling mode when it sees a bunch of CMCs.
      Because disabling CMCs across all CPUs isn't safe in interrupt context,
      the disable is done with a schedule_work().  But with continual CMC
      interrupts, the schedule_work() never gets executed.
      
      The following patch immediately disables CMC interrupts for the current
      CPU.  This then allows (at least) one CPU to ignore CMC interrupts,
      execute the schedule_work() code, and disable CMC interrupts on the rest
      of the CPUs.
      Acked-by: Keith Owens <kaos@sgi.com>
      Signed-off-by: Bryan Sutula <Bryan.Sutula@hp.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
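      
      A minimal sketch of the ordering the patch establishes. The function names
      follow arch/ia64/kernel/mca.c, but the storm-detection condition is a
      placeholder and the body is illustrative, not the actual diff.
      
          /* Inside the CMC interrupt handler, once a storm of CMCs is seen: */
          if (cmc_storm_detected) {                    /* placeholder condition */
                  /* Safe from interrupt context: turn off the CMC vector on THIS
                   * cpu right away, so at least one cpu stops taking CMC
                   * interrupts and the deferred work below can actually run. */
                  ia64_mca_cmc_vector_disable(NULL);
      
                  /* Deferred: the workqueue handler disables CMC interrupts on
                   * the remaining cpus and switches everyone to polling mode. */
                  schedule_work(&cmc_disable_work);
          }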
  3. 23 September 2005, 3 commits
  4. 18 September 2005, 1 commit
  5. 17 September 2005, 2 commits
  6. 15 September 2005, 2 commits
    • [LIB]: Consolidate _atomic_dec_and_lock() · 4db2ce01
      Authored by David S. Miller
      Several implementations were essentially a common piece of C code using
      the cmpxchg() macro.  Put the implementation in one spot that everyone
      can share, and convert sparc64 over to using this.
      
      Alpha is the lone arch-specific implementation, which codes up a
      special fast path for the common case in order to avoid GP reloading
      which a pure C version would require.
      Signed-off-by: David S. Miller <davem@davemloft.net>
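      
      In outline, the consolidated helper is a cmpxchg() fast path that decrements
      the counter as long as it cannot hit zero, and only takes the spinlock for
      the final reference. A simplified sketch, close to but not a verbatim copy of
      the shared lib/ code this commit introduces:
      
          int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
          {
                  int counter, newcount;
      
                  for (;;) {
                          counter = atomic_read(atomic);
                          newcount = counter - 1;
                          if (!newcount)
                                  break;          /* may hit zero: do it the slow way */
      
                          newcount = cmpxchg(&atomic->counter, counter, newcount);
                          if (newcount == counter)
                                  return 0;       /* decremented, still non-zero */
                  }
      
                  spin_lock(lock);
                  if (atomic_dec_and_test(atomic))
                          return 1;               /* hit zero: caller holds the lock */
                  spin_unlock(lock);
                  return 0;
          }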
    • [PATCH] error path in setup_arg_pages() misses vm_unacct_memory() · 2fd4ef85
      Authored by Hugh Dickins
      Pavel Emelianov and Kirill Korotaev observe that fs and arch users of
      security_vm_enough_memory tend to forget to vm_unacct_memory when a
      failure occurs further down (typically in setup_arg_pages variants).
      
      These are all users of insert_vm_struct, and that reservation will only
      be unaccounted on exit if the vma is marked VM_ACCOUNT: which in some
      cases it is (hidden inside VM_STACK_FLAGS) and in some cases it isn't.
      
      So x86_64 32-bit and ppc64 vDSO ELFs have been leaking memory into
      Committed_AS each time they're run.  But don't add VM_ACCOUNT to them,
      it's inappropriate to reserve against the very unlikely case that gdb
      be used to COW a vDSO page - we ought to do something about that in
      do_wp_page, but there are yet other inconsistencies to be resolved.
      
      The safe and economical way to fix this is to let insert_vm_struct do
      the security_vm_enough_memory check when it finds VM_ACCOUNT is set.
      
      And the MIPS irix_brk has been calling security_vm_enough_memory before
      calling do_brk which repeats it, doubly accounting and so also leaking.
      Remove that, and all the fs and arch calls to security_vm_enough_memory:
      give it a less misleading name later on.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Kirill Korotaev <dev@sw.ru>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
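      
      A sketch of the approach described, under the assumption that
      insert_vm_struct() simply performs the accounting check itself whenever
      VM_ACCOUNT is set; the real mm/mmap.c code does considerably more, so treat
      this as illustration only.
      
          int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
          {
                  /* ... find the insertion point, reject overlapping ranges ... */
      
                  if (vma->vm_flags & VM_ACCOUNT) {
                          unsigned long charged =
                                  (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
      
                          /* The accounting now happens here, so callers such as
                           * setup_arg_pages() no longer need a matching
                           * vm_unacct_memory() on every failure path. */
                          if (security_vm_enough_memory(charged))
                                  return -ENOMEM;
                  }
      
                  /* ... link the vma into the mm ... */
                  return 0;
          }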
  7. 13 September 2005, 2 commits
  8. 12 September 2005, 5 commits
    • [IA64] MCA/INIT: remove obsolete unwind code · 49a28cc8
      Authored by Keith Owens
      Delete the special case unwind code that was only used by the old
      MCA/INIT handler.
      Signed-off-by: Keith Owens <kaos@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [IA64] MCA/INIT: remove the physical mode path from minstate.h · 05f335ea
      Authored by Keith Owens
      Remove the physical mode path from minstate.h.
      Signed-off-by: Keith Owens <kaos@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [PATCH] MCA/INIT: use per cpu stacks · 7f613c7d
      Authored by Keith Owens
      The bulk of the change.  Use per cpu MCA/INIT stacks.  Change the SAL
      to OS state (sos) to be per process.  Do all the assembler work on the
      MCA/INIT stacks, leaving the original stack alone.  Pass per cpu state
      data to the C handlers for MCA and INIT, which also means changing the
      mca_drv interfaces slightly.  Lots of verification on whether the
      original stack is usable before converting it to a sleeping process.
      Signed-off-by: Keith Owens <kaos@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
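      
      A hedged sketch of what "per cpu MCA/INIT stacks" means in practice. The
      structure below approximates the per-cpu MCA save area; field names and
      layout are illustrative, not the exact arch/ia64 definition.
      
          /* One of these per cpu: dedicated stacks for MCA and INIT handling, so
           * the handlers never have to trust (or clobber) the interrupted task's
           * own kernel stack. */
          typedef struct ia64_mca_cpu {
                  u64 mca_stack[KERNEL_STACK_SIZE / 8];   /* used while handling an MCA  */
                  u64 init_stack[KERNEL_STACK_SIZE / 8];  /* used while handling an INIT */
                  /* plus other per-cpu handler data passed to the C handlers */
          } __attribute__ ((aligned(16))) ia64_mca_cpu_t;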
    • [IA64] MCA/INIT: avoid reading INIT record during INIT event · 289d773e
      Authored by Keith Owens
      Reading the INIT record from SAL during the INIT event has proved to be
      unreliable, and a source of hangs during INIT processing.  The new
      MCA/INIT handlers remove the need to get the INIT record from SAL.
      Change salinfo.c so mca.c can just flag that a new record is available,
      without having to read the record during INIT processing.  This patch
      can be applied without the new MCA/INIT handlers.
      
      Also clean up some usage of NR_CPUS which should have been using
      cpu_online().
      Signed-off-by: Keith Owens <kaos@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
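      
      A hypothetical sketch of the "flag now, read later" idea. All names below are
      invented for illustration; salinfo.c's real interface differs.
      
          static DEFINE_PER_CPU(unsigned long, init_record_pending);
      
          /* Called from the INIT handler: no SAL call here, just note that a
           * record exists for this cpu. */
          static void flag_init_record(int cpu)
          {
                  set_bit(0, &per_cpu(init_record_pending, cpu));
          }
      
          /* Called later, from a safe (non-INIT) context such as a read of the
           * salinfo char device: only now is the record fetched from SAL. */
          static void read_pending_init_records(void)
          {
                  int cpu;
      
                  for_each_online_cpu(cpu) {
                          if (test_and_clear_bit(0, &per_cpu(init_record_pending, cpu)))
                                  fetch_record_from_sal(cpu);   /* placeholder helper */
                  }
          }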
    • kbuild: rename prepare to archprepare to fix dependency chain · 5bb78269
      Authored by Sam Ravnborg
      When the generic asm-offsets.h support was introduced, the dependency
      chain for the prepare targets changed, and all build scripts expecting
      include/asm/asm-offsets.h to be made by the prepare target would break.
      With the limited number of prepare targets left in arch Makefiles,
      the trivial solution was to introduce a new arch-specific target: archprepare
      
      The dependency chain looks like this now:
      
      prepare
        |
        +--> prepare0
               |
               +--> archprepare
                      |
                      +--> scripts_basic
                      +--> prepare1
                             |
                             +---> prepare2
                                     |
                                     +--> prepare3
      
      So prepare3 is processed before prepare2, etc.
      This guarantees that the asm symlink, version.h, and scripts_basic
      are all updated before archprepare is processed.
      
      prepare0, which builds the asm-offsets.h file, needs the
      actions performed by archprepare.
      
      The head target is now named prepare, because user scripts will most
      likely use that target, but prepare-all has been kept for compatibility.
      Documentation/kbuild/makefiles.txt has been updated.
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
  9. 11 September 2005, 1 commit
    • [PATCH] spinlock consolidation · fb1c8f93
      Authored by Ingo Molnar
      This patch (written by me and also containing many suggestions of Arjan van
      de Ven) does a major cleanup of the spinlock code.  It does the following
      things:
      
       - consolidates and enhances the spinlock/rwlock debugging code
      
       - simplifies the asm/spinlock.h files
      
       - encapsulates the raw spinlock type and moves generic spinlock
         features (such as ->break_lock) into the generic code.
      
       - cleans up the spinlock code hierarchy to get rid of the spaghetti.
      
      Most notably, there's now only a single variant of the debugging code,
      located in lib/spinlock_debug.c (previously we had one SMP debugging
      variant per architecture, plus a separate generic one for UP builds).
      
      Also, I've enhanced the rwlock debugging facility; it will now track
      write-owners.  There is new spinlock-owner/CPU-tracking on SMP builds too.
      All locks have lockup detection now, which will work for both soft and hard
      spin/rwlock lockups.
      
      The arch-level include files now only contain the minimally necessary
      subset of the spinlock code - all the rest that can be generalized now
      lives in the generic headers:
      
       include/asm-i386/spinlock_types.h       |   16
       include/asm-x86_64/spinlock_types.h     |   16
      
      I have also split up the various spinlock variants into separate files,
      making it easier to see which does what. The new layout is:
      
         SMP                         |  UP
         ----------------------------|-----------------------------------
         asm/spinlock_types_smp.h    |  linux/spinlock_types_up.h
         linux/spinlock_types.h      |  linux/spinlock_types.h
         asm/spinlock_smp.h          |  linux/spinlock_up.h
         linux/spinlock_api_smp.h    |  linux/spinlock_api_up.h
         linux/spinlock.h            |  linux/spinlock.h
      
      /*
       * here's the role of the various spinlock/rwlock related include files:
       *
       * on SMP builds:
       *
       *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
       *                        initializers
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
       *                        implementations, mostly inline assembly code
       *
       *   (also included on UP-debug builds:)
       *
       *  linux/spinlock_api_smp.h:
       *                        contains the prototypes for the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       *
       * on UP builds:
       *
       *  linux/spinlock_types_up.h:
       *                        contains the generic, simplified UP spinlock type.
       *                        (which is an empty structure on non-debug builds)
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  linux/spinlock_up.h:
       *                        contains the __raw_spin_*()/etc. version of UP
       *                        builds. (which are NOPs on non-debug, non-preempt
       *                        builds)
       *
       *   (included on UP-non-debug builds:)
       *
       *  linux/spinlock_api_up.h:
       *                        builds the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       */
      
      All SMP and UP architectures are converted by this patch.
      
      arm, i386, ia64, ppc, ppc64, s390/s390x, and x64 were build-tested via
      cross-compilers.  m32r, mips, sh, and sparc have not been tested yet, but
      should be mostly fine.
      
      From: Grant Grundler <grundler@parisc-linux.org>
      
        Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
        Builds 32-bit SMP kernel (not booted or tested).  I did not try to build
        non-SMP kernels.  That should be trivial to fix up later if necessary.
      
        I converted bit ops atomic_hash lock to raw_spinlock_t.  Doing so avoids
        some ugly nesting of linux/*.h and asm/*.h files.  Those particular locks
        are well tested and contained entirely inside arch specific code.  I do NOT
        expect any new issues to arise with them.
      
        If someone does ever need to use debug/metrics with them, then they will
        need to unravel this hairball between spinlocks, atomic ops, and bit ops
        that exist only because parisc has exactly one atomic instruction: LDCW
        (load and clear word).
      
      From: "Luck, Tony" <tony.luck@intel.com>
      
         ia64 fix
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
      Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
      Cc: Matthew Wilcox <willy@debian.org>
      Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
      Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
      Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
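      
      A condensed sketch of the encapsulation described above: the architecture
      supplies only the raw lock type, and everything generic lives in the shared
      wrapper. Field names are approximate, not a verbatim copy of the new headers.
      
          /* Arch-provided raw type (e.g. asm-i386): just the lock word. */
          typedef struct {
                  volatile unsigned int slock;
          } raw_spinlock_t;
      
          /* Generic wrapper (linux/spinlock_types.h): the raw lock plus every
           * feature shared across architectures, such as ->break_lock and the
           * debug fields used by lib/spinlock_debug.c. */
          typedef struct {
                  raw_spinlock_t raw_lock;
          #if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
                  unsigned int break_lock;
          #endif
          #ifdef CONFIG_DEBUG_SPINLOCK
                  unsigned int magic, owner_cpu;
                  void *owner;
          #endif
          } spinlock_t;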
  10. 10 September 2005, 4 commits
  11. 09 September 2005, 3 commits
  12. 08 September 2005, 11 commits
  13. 07 September 2005, 2 commits
  14. 01 September 2005, 2 commits