1. 26 September 2006, 5 commits
    • [PATCH] ZVC: Support NR_SLAB_RECLAIMABLE / NR_SLAB_UNRECLAIMABLE · 972d1a7b
      Committed by Christoph Lameter
      Remove the atomic counter for slab_reclaim_pages and replace the counter
      and NR_SLAB with two ZVC counters that account for unreclaimable and
      reclaimable slab pages: NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE.
      
      Change the check in vmscan.c to refer to NR_SLAB_RECLAIMABLE.  The
      intent seems to be to check for slab pages that could be freed.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] reduce MAX_NR_ZONES: fix MAX_NR_ZONES array initializations · f06a9684
      Committed by Christoph Lameter
      Fix array initialization in lots of arches
      
      The number of zones may now be reduced from 4 to 2 for many arches.  Fix the
      zones array initialization for all architectures so that it does not
      initialize a fixed number of elements.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] reduce MAX_NR_ZONES: remove two strange uses of MAX_NR_ZONES · 776ed98b
      Committed by Christoph Lameter
      I keep seeing zones on various platforms that are never used and wonder why we
      compile support for them into the kernel.  Counters show up for HIGHMEM and
      DMA32 that are always zero.
      
      This patch allows the removal of ZONE_DMA32 for non x86_64 architectures and
      it will get rid of ZONE_HIGHMEM for arches not using highmem (like 64 bit
      architectures).  If an arch does not define CONFIG_HIGHMEM then ZONE_HIGHMEM
      will not be defined.  Similarly if an arch does not define CONFIG_ZONE_DMA32
      then ZONE_DMA32 will not be defined.
      
      No current architecture uses all four zones (DMA, DMA32, NORMAL, HIGH) that we
      have now.  The patchset will reduce the number of zones for all platforms.
      
      On many platforms that do not have DMA32 or HIGHMEM this will reduce the
      number of zones by 50%.  E.g. ia64 only uses DMA and NORMAL.
      
      Large amounts of memory can be saved on larger systems that may have a few
      hundred NUMA nodes.
      
      With ZONE_DMA32 and ZONE_HIGHMEM support optional, MAX_NR_ZONES will be 2 for
      many non-i386 platforms, and even for i386 without CONFIG_HIGHMEM set.
      
      Tested on ia64, x86_64 and on i386 with and without highmem.
      
      The patchset consists of 11 patches that are following this message.
      
      One could go even further than this patchset and also make ZONE_DMA optional
      because some platforms do not need a separate DMA zone and can do DMA to all
      of memory.  This could reduce MAX_NR_ZONES to 1.  Such a patchset will
      hopefully follow soon.
      
      This patch:
      
      Fix strange uses of MAX_NR_ZONES
      
      Sometimes we use MAX_NR_ZONES - x to refer to a zone.  Make that explicit.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] convert i386 NUMA KVA space to bootmem · 91023300
      Committed by Keith Mannthey
      Address a long-standing issue of booting with an initrd on an i386 NUMA
      system.  Currently (and always) the NUMA KVA area is mapped into low memory
      by finding the end of low memory and moving that mark down (thus creating
      space for the KVA).  The issue with this is that Grub loads initrds into
      this same space, so when the kernel checks the initrd it finds it outside
      max_low_pfn and disables it (it thinks the initrd is not mapped into usable
      memory); thus initrd-enabled kernels can't boot i386 NUMA :(
      
      My solution to the problem just converts the NUMA KVA area to use the
      bootmem allocator to reserve its area (instead of moving the end of low
      memory).  Using bootmem allows the KVA area to be mapped at more diverse
      addresses (not just the end of low memory) and enables the KVA area to be
      mapped below the initrd if present.
      
      I have tested this patch on NUMA-Q (no initrd) and Summit (initrd) i386
      NUMA based systems.
      
      [akpm@osdl.org: cleanups]
      Signed-off-by: Keith Mannthey <kmannth@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] i386 bootioremap / kexec fix · 24fd425e
      Committed by Keith Mannthey
      With CONFIG_PHYSICAL_START set to a non-default value, the i386
      boot_ioremap code calculated its pte index wrong and users of boot_ioremap
      had their areas incorrectly mapped (for me the SRAT table was not mapped
      during early boot).  This patch removes the addr < BOOT_PTE_PTRS constraint.
      
      [ Keith says this is applicable to 2.6.16 and 2.6.17 as well ]
      
      Signed-off-by: Keith Mannthey <kmannth@us.ibm.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: <stable@kernel.org>
      Cc: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  2. 02 July 2006, 1 commit
  3. 01 July 2006, 8 commits
  4. 28 June 2006, 3 commits
  5. 27 June 2006, 1 commit
  6. 23 June 2006, 4 commits
  7. 22 May 2006, 1 commit
  8. 10 April 2006, 1 commit
  9. 28 March 2006, 3 commits
  10. 23 March 2006, 4 commits
    • [PATCH] pause_on_oops command line option · dd287796
      Committed by Andrew Morton
      Attempt to fix the problem wherein people's oops reports scroll off the screen
      due to repeated oopsing or to oopses on other CPUs.
      
      If this happens the user can reboot with the `pause_on_oops=<seconds>' option.
      It will allow the first oopsing CPU to print an oops record just a single
      time.  Subsequent oops attempts, or oopses on other CPUs, will cause those
      CPUs to enter a tight loop until the specified number of seconds has elapsed.
      
      The patch implements the infrastructure generically in the expectation that
      architectures other than x86 will find it useful.
      
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] make bug messages more consistent · 91368d73
      Committed by Ingo Molnar
      Consolidate all kernel bug printouts to begin with the "BUG: " string.
      Makes it easier to find them in large bootup logs.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] i386: actively synchronize vmalloc area when registering certain callbacks · 101f12af
      Committed by Jan Beulich
      Registering a callback handler through register_die_notifier() is obviously
      primarily intended for use by modules.  However, the way these currently
      get called, it is basically impossible for them to actually be used by
      modules, as there is, on non-PAE configurations, a good chance (the larger
      the module, the better) for the system to crash as a result.
      
      This is because the callback gets invoked
      
      (a) in the page fault path before the top level page table propagation
          gets carried out (hence a fault to propagate the top level page table
          entry/entries mapping the module's code/data would nest infinitely) and
      
      (b) in the NMI path, where nested faults must absolutely not happen,
          since otherwise the IRET from the nested fault re-enables NMIs,
          potentially resulting in nested NMI occurrences.
      
      Besides the modular aspect, similar problems would even arise for
      in-kernel consumers of the API if they touched ioremap()ed or vmalloc()ed
      memory inside their handlers.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] x86: SMP alternatives · 9a0b5817
      Committed by Gerd Hoffmann
      Implement SMP alternatives, i.e.  switching at runtime between different
      code versions for UP and SMP.  The code can patch both SMP->UP and UP->SMP.
      The UP->SMP case is useful for CPU hotplug.
      
      With CONFIG_CPU_HOTPLUG enabled the code switches to UP at boot time and
      when the number of CPUs goes down to 1, and switches to SMP when the number
      of CPUs goes up to 2.
      
      Without CONFIG_CPU_HOTPLUG or on non-SMP-capable systems the code is
      patched once at boot time (if needed) and the tables are released
      afterwards.
      
      The changes in detail:
      
        * The current alternatives bits are moved to a separate file;
          the SMP alternatives code is added there.
      
        * The patch adds some new ELF sections to the kernel:
          .smp_altinstructions
              like .altinstructions, also contains a list of
              alt_instr structs.
          .smp_altinstr_replacement
              like .altinstr_replacement, but also has some space to
              save the original instruction before replacing it.
          .smp_locks
              a list of pointers to lock prefixes which can be nop'ed
              out on UP.
          The first two are used to replace more complex instruction
          sequences such as spinlocks and semaphores.  It would be possible
          to deal with the lock prefixes that way as well, but handling
          them as a special case keeps the table sizes much smaller.
      
        * The sections are page-aligned and padded up to page size, so they
          can be freed if they are not needed.

        * Split the code to release init pages into a separate function and
          use it to release the ELF sections if they are unused.
      Signed-off-by: Gerd Hoffmann <kraxel@suse.de>
      Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  11. 22 March 2006, 3 commits
    • [PATCH] hugepage: is_aligned_hugepage_range() cleanup · 42b88bef
      Committed by David Gibson
      Quite a long time back, prepare_hugepage_range() replaced
      is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to
      verify if an address range is suitable for a hugepage mapping.
      is_aligned_hugepage_range() stuck around, but only to implement
      prepare_hugepage_range() on archs which didn't implement their own.
      
      Most archs (everything except ia64 and powerpc) used the same
      implementation of is_aligned_hugepage_range().  On powerpc, which
      implements its own prepare_hugepage_range(), the custom version was never
      used.
      
      In addition, "is_aligned_hugepage_range()" was a bad name, because it
      suggests it returns true iff the given range is a good hugepage range,
      whereas in fact it returns 0-or-error (so the sense is reversed).
      
      This patch cleans up by abolishing is_aligned_hugepage_range().  Instead
      prepare_hugepage_range() is defined directly.  Most archs use the default
      version, which simply checks the given region is aligned to the size of a
      hugepage.  ia64 and powerpc define custom versions.  The ia64 one simply
      checks that the range is in the correct address space region in addition to
      being suitably aligned.  The powerpc version (just as previously) checks
      for suitable addresses, and if necessary performs low-level MMU frobbing to
      set up new areas for use by hugepages.
      
      No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR).
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] remove set_page_count() outside mm/ · 7835e98b
      Committed by Nick Piggin
      set_page_count usage outside mm/ is limited to setting the refcount to 1.
      Remove set_page_count from outside mm/, and replace those users with
      init_page_count() and set_page_refcounted().
      
      This allows more debug checking, and tighter control on how code is allowed
      to play around with page->_count.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] i386: pageattr remove __put_page · 84d1c054
      Committed by Nick Piggin
      Stop using __put_page and page_count in i386 pageattr.c
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  12. 17 January 2006, 1 commit
  13. 12 January 2006, 1 commit
  14. 10 January 2006, 1 commit
  15. 07 January 2006, 2 commits
  16. 16 December 2005, 1 commit