1. 05 Apr, 2008 2 commits
  2. 19 Dec, 2007 1 commit
    • [IA64] Avoid unnecessary TLB flushes when allocating memory · aec103bf
      de Dinechin, Christophe (Integrity VM) committed
      Improve the performance of memory allocations on ia64 by avoiding a
      global TLB purge when only a single page must be purged from the file
      cache. Such a purge happens whenever we evict a page from the buffer
      cache to make room for some other allocation.
      
      Test case: Run 'find /usr -type f | xargs cat > /dev/null' in the
      background to fill the buffer cache, then run something that uses memory,
      e.g. 'gmake -j50 install'. Instrumentation showed that the number of
      global TLB purges went from a few million down to about 170 over a
      12-hour run of the above.
      
      The performance impact is particularly noticeable under virtualization,
      because a virtual TLB is generally both larger and slower to purge than
      a physical one.
      Signed-off-by: Christophe de Dinechin <ddd@hp.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
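
      The saving comes from matching the scope of the purge to the scope of
      the mapping: a page only ever mapped on one CPU needs no machine-wide
      ptc.ga broadcast. A minimal sketch of that decision in kernel-style C
      (illustrative only, not the actual patch; purge_local_page() and
      purge_global_page() are hypothetical wrappers for ptc.l and ptc.ga,
      and mm_cpumask() is the modern accessor for cpu_vm_mask):

          #include <linux/mm_types.h>
          #include <linux/smp.h>

          /* Hypothetical stand-ins for the ptc.l / ptc.ga based purges. */
          static inline void purge_local_page(unsigned long addr) { /* ptc.l */ }
          static inline void purge_global_page(struct mm_struct *mm,
                                               unsigned long addr) { /* ptc.ga */ }

          /* Broadcast only when another CPU may still cache the entry. */
          static void purge_one_page(struct mm_struct *mm, unsigned long addr)
          {
              if (cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id())))
                  purge_local_page(addr);      /* cheap: this CPU only */
              else
                  purge_global_page(mm, addr); /* slow: machine-wide   */
          }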
  3. 08 Dec, 2007 1 commit
  4. 12 Jul, 2007 1 commit
  5. 09 May, 2007 1 commit
  6. 01 Jul, 2006 1 commit
  7. 28 Mar, 2006 1 commit
    • [IA64] optimize flush_tlb_range on large numa box · ce9eed5a
      Chen, Kenneth W committed
      A field customer reported that the global spin lock ptcg_lock causes
      a lot of grief for munmap performance on a large NUMA machine. The
      problem appears to come from flush_tlb_range(), which currently calls
      platform_global_tlb_purge() unconditionally. On some of the NUMA
      machines in existence today, that function maps to
      ia64_global_tlb_purge(), which holds the ptcg_lock spin lock while
      executing the ptc.ga instruction.
      
      Here is a patch that attempts to avoid the global TLB purge whenever
      possible, using the local TLB purge instead, though the conditions
      for using the local purge are pretty restrictive. One side effect of
      having a flush-tlb-range instruction on ia64 is that the kernel never
      gets a chance to clear out cpu_vm_mask. On ia64 this mask is sticky,
      and it accumulates as a process bounces between CPUs, diminishing the
      opportunities to use ptc.l. Thoughts?
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Acked-by: Jack Steiner <steiner@sgi.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
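
      The gate described above is, roughly: take the serialized global path
      unless the mm being flushed is the one active on this CPU and no
      other CPU appears in cpu_vm_mask. A simplified sketch modeled on
      arch/ia64/mm/tlb.c (stride and rounding details omitted; mm_cpumask()
      is the modern accessor for cpu_vm_mask):

          /* Simplified local-vs-global gate for flush_tlb_range(). */
          static void sketch_flush_tlb_range(struct vm_area_struct *vma,
                                             unsigned long start,
                                             unsigned long end)
          {
              struct mm_struct *mm = vma->vm_mm;
              unsigned long nbits = PAGE_SHIFT; /* simplified: page strides */

              if (mm != current->active_mm ||
                  !cpumask_equal(mm_cpumask(mm),
                                 cpumask_of(smp_processor_id()))) {
                  /* Another CPU may hold stale entries: ptcg_lock path. */
                  platform_global_tlb_purge(mm, start, end, nbits);
                  return;
              }

              /* ptc.l purges only the local TLB; no ptcg_lock needed. */
              do {
                  ia64_ptcl(start, nbits << 2);
                  start += 1UL << nbits;
              } while (start < end);
          }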
  8. 14 Jan, 2006 1 commit
    • [IA64] Hole in IA64 TLB flushing from system threads · cfbb1426
      Jack Steiner committed
      I originally thought this was a bug only in the SN code, but I think
      I also see a hole in the generic IA64 TLB code. (A separate patch was
      sent for the SN problem.)
      
      It looks like there is a bug in the TLB flushing code. During a
      context switch, kernel threads (kswapd, for example) inherit the mm
      of the task that was previously running on the CPU. Normally this is
      OK, because the previous context is still loaded into the region
      registers. However, if the owner of the mm migrates to another CPU,
      changes its context number, and references a page before kswapd
      issues a TLB purge for that same page, the purge will be done with a
      stale context number (and RR registers).
      Signed-off-by: Tony Luck <tony.luck@intel.com>
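
      The hole is that the purge runs under whatever context happens to be
      live in the region registers. A rough sketch of the repair, close in
      spirit to the ia64_global_tlb_purge() change but simplified
      (activate_context() loads an mm's region IDs):

          void sketch_global_tlb_purge(struct mm_struct *mm,
                                       unsigned long start,
                                       unsigned long end,
                                       unsigned long nbits)
          {
              struct mm_struct *active_mm = current->active_mm;

              if (mm != active_mm) {
                  if (mm && active_mm) {
                      activate_context(mm); /* load mm's region IDs */
                  } else {
                      flush_tlb_all();      /* no usable context    */
                      return;
                  }
              }

              /* ... ptcg_lock-serialized ptc.ga loop over [start, end) ... */

              if (mm != active_mm)
                  activate_context(active_mm); /* restore live context */
          }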
  9. 04 Nov, 2005 1 commit
  10. 01 Nov, 2005 1 commit
  11. 30 Oct, 2005 1 commit
    • [PATCH] mm: flush_tlb_range outside ptlock · 663b97f7
      Hugh Dickins committed
      There was one small but very significant change in the previous
      patch: mprotect's flush_tlb_range fell outside the page_table_lock.
      That is how it is in 2.4, but that doesn't prove it safe in 2.6.
      
      On some architectures flush_tlb_range comes to the same as
      flush_tlb_mm, which has always been called from outside
      page_table_lock in dup_mmap, and is thus proved safe. Others required
      a deeper audit: I could find no reliance on page_table_lock in any;
      but in ia64 and parisc I found some code that looks as if it might
      want preemption disabled. That won't do any actual harm, so pending a
      decision from the maintainers, disable preemption there.
      
      Remove comments on page_table_lock from flush_tlb_mm, flush_tlb_range and
      flush_tlb_page entries in cachetlb.txt: they were rather misleading (what
      generic code does is different from what usually happens), the rules are now
      changing, and it's not yet clear where we'll end up (will the generic
      tlb_flush_mmu happen always under lock?  never under lock?  or sometimes under
      and sometimes not?).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
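
      On ia64 the suspicion concerns the local purge path: the decision to
      use ptc.l is only valid for the CPU it was made on, so migrating
      mid-flush would purge the wrong TLB. A sketch of the guard
      (can_purge_locally() is a hypothetical predicate standing in for the
      checks described in the earlier entries):

          static void sketch_flush_range_nopreempt(struct mm_struct *mm,
                                                   unsigned long start,
                                                   unsigned long end)
          {
              unsigned long addr;

              /* Pin the task to this CPU for the duration of the flush. */
              preempt_disable();
              if (can_purge_locally(mm)) { /* hypothetical predicate */
                  for (addr = start; addr < end; addr += PAGE_SIZE)
                      ia64_ptcl(addr, PAGE_SHIFT << 2);
              } else {
                  platform_global_tlb_purge(mm, start, end, PAGE_SHIFT);
              }
              preempt_enable();
          }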
  12. 28 Oct, 2005 1 commit
    • [IA64] - Avoid slow TLB purges on SGI Altix systems · c1902aae
      Dean Roe committed
      flush_tlb_all() can be a scaling issue on large SGI Altix systems
      since it uses the global call_lock and always executes on all cpus.
      When a process enters flush_tlb_range() to purge TLBs for another
      process, it is possible to avoid flush_tlb_all() and instead allow
      sn2_global_tlb_purge() to purge TLBs only where necessary.
      
      This patch modifies flush_tlb_range() so that this case can be handled
      by platform TLB purge functions and updates ia64_global_tlb_purge()
      accordingly.  sn2_global_tlb_purge() now calculates the region register
      value from the mm argument introduced with this patch.
      Signed-off-by: Dean Roe <roe@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
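
      The key interface change is that the mm now travels down to the
      platform hook, so an implementation such as sn2_global_tlb_purge()
      can reconstruct the region register for a non-current mm instead of
      punting to flush_tlb_all(). The shape of the call after this patch
      (machvec indirection elided; a sketch of the interface, not the
      literal code):

          /* The platform purge hooks now receive the mm being flushed. */
          extern void ia64_global_tlb_purge(struct mm_struct *mm,
                                            unsigned long start,
                                            unsigned long end,
                                            unsigned long nbits);
          extern void sn2_global_tlb_purge(struct mm_struct *mm,
                                           unsigned long start,
                                           unsigned long end,
                                           unsigned long nbits);

          static void sketch_flush_other_mm(struct mm_struct *mm,
                                            unsigned long start,
                                            unsigned long end,
                                            unsigned long nbits)
          {
              /*
               * Before: mm != current->active_mm forced flush_tlb_all().
               * After: the platform hook gets the mm and purges only where
               * that mm's translations may actually be cached.
               */
              platform_global_tlb_purge(mm, start, end, nbits);
          }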
  13. 26 Oct, 2005 1 commit
  14. 17 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds committed
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!