1. 15 May 2008 (2 commits)
  2. 03 May 2008 (1 commit)
  3. 02 May 2008 (1 commit)
  4. 01 May 2008 (2 commits)
  5. 30 April 2008 (4 commits)
  6. 29 April 2008 (3 commits)
  7. 28 April 2008 (4 commits)
    • hugetlbfs: common code update for s390 · 7f2e9525
      Authored by Gerald Schaefer
      Huge ptes have a special type on s390 and cannot be handled with the standard
      pte functions in certain cases, e.g.  because of a different location of the
      invalid bit.  This patch adds some new architecture-specific functions to
      hugetlb common code, as a prerequisite for the s390 large page support.
      
      This won't affect other architectures in functionality, but I need to add some
      new dummy inline functions to the headers.
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f2e9525
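      A minimal sketch of the kind of dummy inline fallbacks described above,
      for an architecture whose huge ptes need no special handling (the
      function names follow the commit description; the bodies are
      illustrative, not any particular architecture's header):

      /* include/asm-foo/hugetlb.h: huge ptes behave like normal ptes here,
       * so the new hooks simply forward to the standard pte helpers. */
      static inline pte_t huge_ptep_get(pte_t *ptep)
      {
      	return *ptep;
      }

      static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
      				   pte_t *ptep, pte_t pte)
      {
      	set_pte_at(mm, addr, ptep, pte);
      }

      On s390, by contrast, these same hooks can translate between the huge-pte
      format and what the common code expects.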
    • hugetlbfs: add missing TLB flush to hugetlb_cow() · 8fe627ec
      Authored by Gerald Schaefer
      A cow break on a hugetlbfs page with page_count > 1 will set a new pte with
      set_huge_pte_at(), without any TLB flush operation.  The old pte will remain
      in the TLB, and subsequent write access to the page will result in a page
      fault loop for as long as it may take until the TLB is flushed from somewhere
      else.  This patch introduces an architecture-specific huge_ptep_clear_flush()
      function, which is called before the set_huge_pte_at() in hugetlb_cow().
      
      ATTENTION: This is just a nop on all architectures for now, the s390
      implementation will come with our large page patch later.  Other architectures
      should define their own huge_ptep_clear_flush() if needed.
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8fe627ec
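      A hedged sketch of the no-op default described above, together with a
      simplified view of the hugetlb_cow() call site it protects (condensed
      from the commit description; the exact file layout may differ):

      /* include/asm-foo/hugetlb.h: no-op on architectures that do not keep
       * stale huge-pte TLB entries; s390 will override this later. */
      static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
      					 unsigned long addr, pte_t *ptep)
      {
      }

      /* hugetlb_cow(), simplified: flush the old translation before the new
       * pte is installed, so the faulting cpu cannot loop on a stale entry. */
      huge_ptep_clear_flush(vma, address, ptep);
      set_huge_pte_at(mm, address, ptep, make_huge_pte(vma, new_page, 1));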
    • hugetlbfs: architecture header cleanup · 6d779079
      Authored by Gerald Schaefer
      This patch moves all architecture functions for hugetlb to architecture header
      files (include/asm-foo/hugetlb.h) and converts all macros to inline
      functions.  It also removes (!) ARCH_HAS_HUGEPAGE_ONLY_RANGE,
      ARCH_HAS_HUGETLB_FREE_PGD_RANGE, ARCH_HAS_PREPARE_HUGEPAGE_RANGE,
      ARCH_HAS_SETCLEAR_HUGE_PTE and ARCH_HAS_HUGETLB_PREFAULT_HOOK.
      
      Getting rid of the ARCH_HAS_xxx #ifdef and macro fugliness should increase
      readability and maintainability, at the price of some code duplication.  An
      asm-generic common part would have reduced the lines of code, but we would
      end up with new ARCH_HAS_xxx defines eventually.
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6d779079
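      For illustration, the shape of the conversion (a hedged sketch: the old
      macro form was only present when the arch defined
      ARCH_HAS_HUGETLB_FREE_PGD_RANGE; the free_pgd_range() signature shown is
      the contemporary one and may differ in later kernels):

      /* Before, in a pgtable header, only when the arch opted in: */
      #define hugetlb_free_pgd_range	free_pgd_range

      /* After, as an inline in include/asm-foo/hugetlb.h on every arch: */
      static inline void hugetlb_free_pgd_range(struct mmu_gather **tlb,
      				unsigned long addr, unsigned long end,
      				unsigned long floor, unsigned long ceiling)
      {
      	free_pgd_range(tlb, addr, end, floor, ceiling);
      }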
    • mm: introduce pte_special pte bit · 7e675137
      Authored by Nick Piggin
      s390, for one, cannot implement VM_MIXEDMAP with pfn_valid, due to their memory
      model (which is more dynamic than most).  Instead, they had proposed to
      implement it with an additional path through vm_normal_page(), using a bit in
      the pte to determine whether or not the page should be refcounted:
      
      vm_normal_page()
      {
      	...
              if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                      if (vma->vm_flags & VM_MIXEDMAP) {
      #ifdef s390
      			if (!mixedmap_refcount_pte(pte))
      				return NULL;
      #else
                              if (!pfn_valid(pfn))
                                      return NULL;
      #endif
                              goto out;
                      }
      	...
      }
      
      This is fine; however, if we are allowed to use a bit in the pte to determine
      refcountedness, we can use that to _completely_ replace all the vma based
      schemes.  So instead of adding more cases to the already complex vma-based
      scheme, we can have a clearly separate and simple pte-based scheme (and get
      slightly better code generation in the process):
      
      vm_normal_page()
      {
      #ifdef s390
      	if (!mixedmap_refcount_pte(pte))
      		return NULL;
      	return pte_page(pte);
      #else
      	...
      #endif
      }
      
      And finally, we would rather make this concept usable by any architecture
      than keep it s390-only, so implement a new type of pte state for this.
      Unfortunately the old vma based code must stay, because some architectures may
      not be able to spare pte bits.  This makes vm_normal_page() a little bit
      more ugly than we would like, but the two cases are clearly separate.
      
      So introduce a pte_special pte state, and use it in mm/memory.c.  It is
      currently a noop for all architectures, so this doesn't actually result in any
      compiled code changes to mm/memory.o.
      
      BTW:
      I haven't put vm_normal_page() into arch code, as per an earlier suggestion.
      The reason is that, regardless of where vm_normal_page() is actually
      implemented, the *abstraction* is still exactly the same.  Also, while its
      behaviour depends on whether the architecture has pte_special or not, those
      are the only two possible cases, and it really isn't an arch-specific
      function -- the role of the arch code should be to provide primitive
      functions and accessors with which to build the core code; pte_special does
      that.  We do not want architectures to know or care about vm_normal_page()
      itself, and we definitely don't want them being able to invent something new
      there out of sight of mm/ code.  If we made vm_normal_page() an arch
      function, then we would have to make vm_insert_mixed() (next patch) an arch
      function too.  So I don't think moving it to arch code fundamentally improves
      any abstractions, while it does practically make the code more difficult to
      follow, for both mm and arch developers, and easier to misuse.
      
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Carsten Otte <cotte@de.ibm.com>
      Cc: Jared Hulbert <jaredeh@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7e675137
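      A hedged sketch of the no-op state this introduces, and of the fast path
      it enables in vm_normal_page() (condensed from the commit description;
      HAVE_PTE_SPECIAL resolves at compile time, so architectures without a
      spare pte bit keep the old vma-based path at no extra cost):

      /* Default accessors for architectures without a spare pte bit: */
      static inline int pte_special(pte_t pte)	{ return 0; }
      static inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }

      /* mm/memory.c, condensed: */
      struct page *vm_normal_page(struct vm_area_struct *vma,
      				unsigned long addr, pte_t pte)
      {
      	if (HAVE_PTE_SPECIAL) {
      		if (unlikely(pte_special(pte)))
      			return NULL;		/* special: do not refcount */
      		return pfn_to_page(pte_pfn(pte));
      	}
      	/* !HAVE_PTE_SPECIAL: fall back to the VM_PFNMAP/VM_MIXEDMAP
      	 * vma-based checks quoted above. */
      	...
      }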
  8. 27 April 2008 (4 commits)
  9. 23 April 2008 (1 commit)
  10. 22 April 2008 (1 commit)
  11. 20 April 2008 (2 commits)
    • sched, cpuset: customize sched domains, core · 1d3504fc
      Authored by Hidetoshi Seto
      [rebased for sched-devel/latest]
      
       - Add a new cpuset file, with selectable levels:
           sched_relax_domain_level
      
       - Modify partition_sched_domains() and build_sched_domains()
         to take attributes parameter passed from cpuset.
      
       - Fill newidle_idx for node domains, which is currently unused but
         might be required if sched_relax_domain_level becomes higher.
      
       - We can change the default level with the boot option 'relax_domain_level='.
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1d3504fc
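      A hedged sketch of the attribute plumbing this adds (the struct and the
      extended partition_sched_domains() prototype follow the commit
      description; details are condensed):

      /* One attribute set per sched-domain partition; -1 means "use the
       * system default relax level". */
      struct sched_domain_attr {
      	int relax_domain_level;
      };

      #define SD_ATTR_INIT	(struct sched_domain_attr) {	\
      	.relax_domain_level = -1,			\
      }

      void partition_sched_domains(int ndoms_new, cpumask_t *doms_new,
      			     struct sched_domain_attr *dattr_new);

      A level written to a cpuset's sched_relax_domain_level file flows through
      cpuset into build_sched_domains() via these attributes.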
    • asm-generic: add node_to_cpumask_ptr macro · aa6b5446
      Authored by Mike Travis
      Create a simple macro to always return a pointer to the node_to_cpumask(node)
      value.  This relies on compiler optimization to remove the extra indirection:
      
          #define node_to_cpumask_ptr(v, node) 		\
      	    cpumask_t _##v = node_to_cpumask(node), *v = &_##v
      
      For those systems with a large cpumask size, then a true pointer
      to the array element can be used:
      
          #define node_to_cpumask_ptr(v, node)		\
      	    cpumask_t *v = &(node_to_cpumask_map[node])
      
      A node_to_cpumask_ptr_next() macro is provided to access another
      node_to_cpumask value.
      
      The other change is to always include asm-generic/topology.h, moving the
      #ifdef CONFIG_NUMA into that same file.
      
      Note: there are no references to either of these new macros in this patch,
      only the definitions.
      
      Based on 2.6.25-rc5-mm1
      
      # alpha
      Cc: Richard Henderson <rth@twiddle.net>
      
      # fujitsu
      Cc: David Howells <dhowells@redhat.com>
      
      # ia64
      Cc: Tony Luck <tony.luck@intel.com>
      
      # powerpc
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Anton Blanchard <anton@samba.org>
      
      # sparc
      Cc: David S. Miller <davem@davemloft.net>
      Cc: William L. Irwin <wli@holomorphy.com>
      
      # x86
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      aa6b5446
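      A short usage sketch of the new macro (do_work_on() is a hypothetical
      stand-in; on large-NR_CPUS configurations the pointer form avoids
      copying a whole cpumask_t onto the stack):

      static void visit_node_cpus(int node)
      {
      	node_to_cpumask_ptr(mask, node);	/* declares cpumask_t *mask */
      	int cpu;

      	for_each_cpu_mask(cpu, *mask)
      		do_work_on(cpu);		/* hypothetical per-cpu work */
      }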
  12. 19 April 2008 (1 commit)
  13. 18 April 2008 (3 commits)
  14. 17 April 2008 (1 commit)
  15. 12 April 2008 (1 commit)
    • [IA64] Fix NUMA configuration issue · 98075d24
      Authored by Zoltan Menyhart
      There is a NUMA memory configuration issue in 2.6.24:
      
      A 2-node machine of ours has the following memory layout:
      
      Node 0:	0 - 2 Gbytes
      Node 0:	4 - 8 Gbytes
      Node 1:	8 - 16 Gbytes
      Node 0:	16 - 18 Gbytes
      
      "efi_memmap_init()" merges the three last ranges into one.
      
      "register_active_ranges()" is called as follows:
      
      efi_memmap_walk(register_active_ranges, NULL);
      
      i.e. once for the 4 - 18 Gbytes range.  It picks up the node
      number from the start address, and registers all the memory for
      node #0.
      
      "register_active_ranges()" should be called as follows to
      make sure there is no merged address range at its entry:
      
      efi_memmap_walk(filter_memory, register_active_ranges);
      
      "filter_memory()" is similar to "filter_rsvd_memory()",
      but the reserved memory ranges are not filtered out.
      Signed-off-by: Zoltan Menyhart <Zoltan.Menyhart@bull.net>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      98075d24
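      A hedged sketch of the idea (signatures simplified; node_end() is a
      hypothetical helper): filter_memory() splits each walked range at node
      boundaries before handing the pieces on, so a range merged by
      efi_memmap_init() can no longer be attributed entirely to its first node:

      static int filter_memory(u64 start, u64 end, void *arg)
      {
      	u64 s = start;

      	while (s < end) {
      		int nid = paddr_to_nid(__pa(s));    /* node of this piece */
      		u64 e = min(end, node_end(nid, s)); /* hypothetical: end of
      						     * nid's range holding s */
      		register_active_ranges(s, e, nid);
      		s = e;
      	}
      	return 0;
      }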
  16. 10 April 2008 (3 commits)
    • [IA64] Itanium Spec updates · c19b2930
      Authored by Russ Anderson
      Updates based on the "Intel® Itanium® Architecture Software Developer's Manual
      Specification Update October 2007".
      
      http://download.intel.com/design/itanium/specupdt/24869911.pdf
      Signed-off-by: Russ Anderson <rja@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      c19b2930
    • [IA64] kprobes: kprobe-booster for ia64 · 34e1ceb1
      Authored by Masami Hiramatsu
      Add kprobe-booster support on ia64.
      
      Kprobe-booster improves the performance of kprobes by eliminating single-step,
      where possible.  Currently, kprobe-booster is implemented on x86 and x86-64.
      This is an ia64 port.
      
      On ia64, kprobe-booster executes a copied bundle directly, instead of single
      stepping.  Bundles which have B or X unit and which may cause an exception
      (including break) are not executed directly.  And also, to prevent hitting
      break exceptions on the copied bundle, only the hindmost kprobe is executed
      directly if several kprobes share a bundle and are placed in different slots.
      Note: set_brl_inst() is used for preparing an instruction buffer (it does not
      modify any active code), so it does not need any atomic operation.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: bibo,mao <bibo.mao@intel.com>
      Cc: Rusty Lynch <rusty.lynch@intel.com>
      Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      34e1ceb1
    • [IA64] pgd_offset() constification. · e4b05d40
      Authored by KOSAKI Motohiro
      When compiling 2.6.25-rc8-mm1, the warning below happened,
      because walk_page_range() passes its argument as "const struct mm_struct *",
      but pgd_offset() receives it as "struct mm_struct *".
      
        CC      mm/pagewalk.o
      mm/pagewalk.c: In function 'walk_page_range':
      mm/pagewalk.c:111: warning: passing argument 1 of 'pgd_offset' discards qualifiers from pointer target type
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      e4b05d40
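      The shape of the fix, sketched (simplified to the generic form, not
      necessarily ia64's exact definition; the point is only the const
      qualifier on the mm parameter):

      static inline pgd_t *
      pgd_offset(const struct mm_struct *mm, unsigned long address)
      {
      	return mm->pgd + pgd_index(address);
      }

      With the parameter const-qualified, walk_page_range() can pass its
      const struct mm_struct * without discarding qualifiers.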
  17. 09 April 2008 (1 commit)
    • [IA64] Minimize per_cpu reservations. · 2c6e6db4
      Authored by holt@sgi.com
      This patch significantly shrinks boot memory allocation on ia64.
      It does this by not allocating per_cpu areas for cpus that can never
      exist.
      
      In the case where ACPI does not have any NUMA node description of the
      cpus, I defaulted to assigning the first 32 cpus round-robin across the
      known nodes.  For the !CONFIG_ACPI case I used for_each_possible_cpu().
      Signed-off-by: Robin Holt <holt@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      2c6e6db4
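      An illustrative sketch of the round-robin fallback described above
      (map_cpu_to_node() and nr_known_nodes are hypothetical stand-ins, not
      the actual ia64 code):

      /* No ACPI NUMA description: spread the first 32 cpus over the nodes
       * we do know about, so only those cpus get per_cpu areas. */
      unsigned int cpu, node = 0;

      for (cpu = 0; cpu < 32; cpu++) {
      	map_cpu_to_node(cpu, node);		/* hypothetical helper */
      	node = (node + 1) % nr_known_nodes;
      }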
  18. 05 April 2008 (2 commits)
  19. 04 April 2008 (2 commits)
  20. 03 April 2008 (1 commit)
    • kvm: provide kvm.h for all architectures: fixes headers_install · dd135ebb
      Authored by Christian Borntraeger
      Currently include/linux/kvm.h is not considered by make headers_install,
      because Kbuild cannot handle "unifdef-$(CONFIG_FOO) += foo.h".  This
      problem was introduced by
      
      commit fb56dbb3
      Author: Avi Kivity <avi@qumranet.com>
      Date:   Sun Dec 2 10:50:06 2007 +0200
      
          KVM: Export include/linux/kvm.h only if $ARCH actually supports KVM
      
          Currently, make headers_check barfs due to <asm/kvm.h>, which <linux/kvm.h>
          includes, not existing.  Rather than add a zillion <asm/kvm.h>s, export
          kvm.h only if the arch actually supports it.
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      
      which makes this a 2.6.25 regression.
      
      One way of solving the issue is to enhance Kbuild, but Avi and David
      convinced me that changing headers_install is not the way to go.  This patch
      changes the definition for linux/kvm.h to unifdef-y.
      
      If unifdef-y is used for linux/kvm.h, "make headers_check" will fail on all
      architectures without asm/kvm.h.  Therefore, this patch also provides
      asm/kvm.h on all architectures.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Acked-by: Avi Kivity <avi@qumranet.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: <linux-arch@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dd135ebb
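      The Kbuild side of the change, sketched from the description (the config
      symbol in the old rule is shown as CONFIG_HAVE_KVM, which I believe the
      original commit used):

      # include/linux/Kbuild, before: the header is skipped entirely by
      # headers_install when the config symbol is unset.
      unifdef-$(CONFIG_HAVE_KVM) += kvm.h

      # after: always installed; every arch now provides an asm/kvm.h.
      unifdef-y += kvm.h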