  1. 18 Jul 2008, 1 commit
    • sparc: join the remaining header files · f5e706ad
      Committed by Sam Ravnborg
      With this commit all sparc64 header files are moved to asm-sparc.
      The remaining 71 files were too different to be trivially merged,
      so each is split into a _32.h and a _64.h file, both of which are
      included from the file with no bit-size suffix.
      
      The following script was used:
      cd include
      # select every header that is more than a one-line wrapper
      FILES=`wc -l asm-sparc64/*h | grep -v '^     1' | cut -b 20-`
      
      for FILE in ${FILES}; do
        echo $FILE:
        BASE=`echo $FILE | cut -d '.' -f 1`
        FN32=${BASE}_32.h
        FN64=${BASE}_64.h
        GUARD=___ASM_SPARC_`echo $BASE | tr '-' '_' | tr '[:lower:]' '[:upper:]'`_H
        git mv asm-sparc/$FILE asm-sparc/$FN32
        git mv asm-sparc64/$FILE asm-sparc/$FN64
        echo git mv done
        printf "#ifndef %s\n" $GUARD                             >   asm-sparc/$FILE
        printf "#define %s\n" $GUARD                             >>  asm-sparc/$FILE
        printf "#if defined(__sparc__) && defined(__arch64__)\n" >>  asm-sparc/$FILE
        printf "#include <asm-sparc/%s>\n" $FN64                 >>  asm-sparc/$FILE
        printf "#else\n"                                         >>  asm-sparc/$FILE
        printf "#include <asm-sparc/%s>\n" $FN32                 >>  asm-sparc/$FILE
        printf "#endif\n"                                        >>  asm-sparc/$FILE
        printf "#endif\n"                                        >>  asm-sparc/$FILE
        git add asm-sparc/$FILE
        echo new file done
        printf "#include <asm-sparc/%s>\n" $FILE                 >  asm-sparc64/$FILE
        git add asm-sparc64/$FILE
        echo sparc64 file done
      done
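
      For each of the 71 files the script thus leaves three headers in
      place. For an illustrative file ptrace.h (any of the split files
      follows the same pattern), asm-sparc/ptrace.h becomes:

      #ifndef ___ASM_SPARC_PTRACE_H
      #define ___ASM_SPARC_PTRACE_H
      #if defined(__sparc__) && defined(__arch64__)
      #include <asm-sparc/ptrace_64.h>
      #else
      #include <asm-sparc/ptrace_32.h>
      #endif
      #endif

      while asm-sparc64/ptrace.h is reduced to the one-line forward:

      #include <asm-sparc/ptrace.h>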
      
      The guard contains three '_' to avoid conflict with existing guards.
      In addition, the two Kbuild files are emptied to avoid breaking
      the headers_* targets.
      We will reintroduce the exported header files when the necessary
      kbuild changes are merged.
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 20 May 2008, 1 commit
  3. 16 Jul 2007, 1 commit
  4. 03 May 2007, 1 commit
    • [PATCH] x86: PARAVIRT: add hooks to intercept mm creation and destruction · d6dd61c8
      Committed by Jeremy Fitzhardinge
      Add hooks to allow a paravirt implementation to track the lifetime of
      an mm.  Paravirtualization requires three hooks, but only two are
      needed in common code.  They are:
      
      arch_dup_mmap, which is called when a new mmap is created at fork
      
      arch_exit_mmap, which is called when the last process reference to
        an mm is dropped; this typically happens on exit and exec.
      
      The third hook is activate_mm, which is called from the arch-specific
      activate_mm() macro/function, and so doesn't need stub versions for
      other architectures.  It's called when an mm is first used.
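
      As a minimal sketch, assuming the hook names above, the common-code
      stubs for an architecture without a paravirt implementation would
      look roughly like this (the header placement is illustrative):

      static inline void arch_dup_mmap(struct mm_struct *oldmm,
                                       struct mm_struct *mm)
      {
              /* no-op unless a paravirt backend tracks the new mm */
      }

      static inline void arch_exit_mmap(struct mm_struct *mm)
      {
              /* no-op unless a paravirt backend tracks mm teardown */
      }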
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: linux-arch@vger.kernel.org
      Cc: James Bottomley <James.Bottomley@SteelEye.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
  5. 22 Mar 2006, 1 commit
  6. 20 Mar 2006, 9 commits
    • [SPARC64]: Fix and re-enable dynamic TSB sizing. · 7a1ac526
      Committed by David S. Miller
      This is good for up to a 50% performance improvement in some test
      cases.  The problem has been the race conditions, and hopefully
      I've plugged them all up here.
      
      1) There was a serious race in switch_mm() wrt. lazy TLB
         switching to and from kernel threads.
      
         We could erroneously skip a tsb_context_switch() and thus
         use a stale TSB across a TSB grow event.
      
         There is a big comment now in that function describing
         exactly how it can happen.
      
      2) All code paths that do something with the TSB need to be
         guarded with the mm->context.lock spinlock; a sketch of this
         pattern follows the list.  This makes page table flushing
         paths properly synchronize with both TSB growing and TLB
         context changes.
      
      3) TSB growing events are moved to the end of successful fault
         processing.  Previously it was in update_mmu_cache() but
         that is deadlock prone.  At the end of do_sparc64_fault()
         we hold no spinlocks that could deadlock the TSB grow
         sequence.  We also have dropped the address space semaphore.
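
      A sketch of the guarded pattern from point 2 (the field names
      follow this description; the flush body itself is elided):

      unsigned long flags;
      struct tsb *tsb;
      unsigned long nentries;

      spin_lock_irqsave(&mm->context.lock, flags);
      /* capture the TSB pointer and size under the lock so a
       * concurrent grow cannot swap the TSB out from under us */
      tsb = mm->context.tsb;
      nentries = mm->context.tsb_nentries;
      /* ... flush or insert TSB entries here ... */
      spin_unlock_irqrestore(&mm->context.lock, flags);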
      
      While we're here, add prefetching to the copy_tsb() routine
      and put it in assembler into the tsb.S file.  This piece of
      code is quite time critical.
      
      There are some small negative side effects to this code which
      can be improved upon.  In particular we grab the mm->context.lock
      even for the tsb insert done by update_mmu_cache() now and that's
      a bit excessive.  We can get rid of that locking, and the same
      lock taking in flush_tsb_user(), by disabling PSTATE_IE around
      the whole operation including the capturing of the tsb pointer
      and tsb_nentries value.  That would work because anyone growing
      the TSB won't free up the old TSB until all cpus respond to the
      TSB change cross call.
      
      I'm not quite so confident in that optimization to put it in
      right now, but eventually we might be able to and the description
      is here for reference.
      
      This code seems very solid now.  It passes several parallel GCC
      bootstrap builds, and our favorite "nut cruncher" stress test which is
      a full "make -j8192" build of a "make allmodconfig" kernel.  That puts
      about 256 processes on each cpu's run queue, makes lots of process cpu
      migrations occur, causes lots of page table and TLB flushing activity,
      incurs many context version number changes, and it swaps the machine
      real far out to disk even though there is 16GB of ram on this test
      system. :-)
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Bulletproof MMU context locking. · a77754b4
      Committed by David S. Miller
      1) Always spin_lock_init() in init_new_context(), as sketched
         below.  The caller essentially clears it out, or copies the
         mm info from the parent.  In both cases we need to explicitly
         initialize the spinlock.
      
      2) Always do explicit IRQ disabling while taking mm->context.lock
         and ctx_alloc_lock.
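
      A minimal sketch of point 1; only the spin_lock_init() call is
      the point here, and the context-value reset (sparc64_ctx_val is
      an assumed field name) is illustrative:

      int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
      {
              /* the caller zeroes or copies mm->context, so the
               * spinlock must always be re-initialized explicitly */
              spin_lock_init(&mm->context.lock);
              /* illustrative reset of the assumed context value field */
              mm->context.sparc64_ctx_val = 0UL;
              return 0;
      }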
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Fix TLB context allocation with SMT style shared TLBs. · a0663a79
      Committed by David S. Miller
      The context allocation scheme we use depends upon there being a 1<-->1
      mapping from cpu to physical TLB for correctness.  Chips like Niagara
      break this assumption.
      
      So what we do is notify all cpus with a cross call when the context
      version number changes, and if necessary this makes them allocate
      a valid context for the address space they are running in at the
      time.
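
      A sketch of what each cpu then does in the cross-call handler
      (CTX_VALID() and get_new_mmu_context() are assumed to be the
      existing sparc64 helpers; locking is elided):

      /* revalidate the context of whatever mm this cpu is running */
      struct mm_struct *mm = current->active_mm;

      if (mm && mm != &init_mm && !CTX_VALID(mm->context))
              get_new_mmu_context(mm);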
      
      Stress tested with make -j1024, make -j2048, and make -j4096 kernel
      builds on a 32-strand, 8 core, T2000 with 16GB of ram.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • 618e9ed9
    • [SPARC64]: Patch up mmu context register writes for sun4v. · 8b11bd12
      Committed by David S. Miller
      sun4v uses ASI_MMU instead of ASI_DMMU.
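
      Illustratively, the context register writes being patched have
      roughly this shape (the exact asm and the membar are an
      assumption, not this patch's literal code):

      /* primary context register write; on sun4v the ASI here must
       * be ASI_MMU rather than ASI_DMMU */
      __asm__ __volatile__("stxa %0, [%1] %2\n\t"
                           "membar #Sync"
                           : /* no outputs */
                           : "r" (ctx_val),
                             "r" (PRIMARY_CONTEXT),
                             "i" (ASI_MMU));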
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Dynamically grow TSB in response to RSS growth. · bd40791e
      Committed by David S. Miller
      As the RSS grows, grow the TSB in order to reduce the likelihood
      of hash collisions and thus poor hit rates in the TSB.
      
      This definitely needs some serious tuning.
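
      A sketch of the trigger, where tsb_grow() and the rss-limit
      field are assumed names for what this patch adds:

      /* at the end of a successful fault: if the address space has
       * outgrown what the current TSB covers, grow the TSB */
      unsigned long rss = get_mm_rss(mm);

      if (rss > mm->context.tsb_rss_limit)
              tsb_grow(mm, rss);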
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Add infrastructure for dynamic TSB sizing. · 98c5584c
      Committed by David S. Miller
      This also cleans up tsb_context_switch().  The assembler
      routine is now __tsb_context_switch() and the former is
      an inline function that picks out the bits from the mm_struct
      and passes them into the assembler code as arguments.
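
      A sketch of the resulting split (the exact argument list is an
      assumption; the real code passes whichever mm fields the
      assembler routine needs):

      /* C inline: extract the interesting fields from the mm and
       * hand them to the assembler implementation */
      static inline void tsb_context_switch(struct mm_struct *mm)
      {
              __tsb_context_switch(__pa(mm->pgd), &mm->context.tsb);
      }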
      
      setup_tsb_parms() computes the locked TLB entry to map the
      TSB.  Later, when we support using the physical address quad
      load instructions of Cheetah+ and later, we'll simply use
      the physical address for the TSB register value and set both
      the map virtual address and PTE to zero.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: TSB refinements. · 09f94287
      Committed by David S. Miller
      Move {init_new,destroy}_context() out of line.
      
      Do not put huge pages into the TSB, only base page size translations.
      There are some clever things we could do here, but for now let's be
      correct instead of fancy.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Move away from virtual page tables, part 1. · 74bf4312
      Committed by David S. Miller
      We now use the TSB hardware assist features of the UltraSPARC
      MMUs.
      
      SMP is currently knowingly broken, we need to find another place
      to store the per-cpu base pointers.  We hid them away in the TSB
      base register, and that obviously will not work any more :-)
      
      Another known broken case is non-8KB base page size.
      
      Also noticed that flush_tlb_all() is not referenced anywhere, only
      the internal __flush_tlb_all() (local cpu only) is used by the
      sparc64 port, so we can get rid of flush_tlb_all().
      
      The kernel gets its own 8KB TSB (swapper_tsb) and each address space
      gets its own private 8KB TSB.  Later we can add code to dynamically
      increase the size of the per-process TSB as the RSS grows.  An 8KB
      TSB is good enough for up to about a 4MB RSS, after which the TSB
      starts to incur many capacity and conflict misses.
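
      The 4MB figure follows from the TSB geometry, assuming the usual
      16-byte TSB entry (an 8-byte tag plus an 8-byte TTE):

      8KB TSB / 16 bytes per entry = 512 entries
      512 entries * 8KB base pages  = 4MB of mapped RSS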
      
      We even accumulate OBP translations into the kernel TSB.
      
      Another area for refinement is large page size support.  We could use
      a secondary address space TSB to handle those.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 13 Jan 2006, 1 commit
  8. 08 Nov 2005, 1 commit
    • [SPARC64] mm: context switch ptlock · dedeb002
      Committed by Hugh Dickins
      sparc64 is unique among architectures in taking the page_table_lock in
      its context switch (well, cris does too, but erroneously, and it's not
      yet SMP anyway).
      
      This seems to be a private affair between switch_mm and activate_mm,
      using page_table_lock as a per-mm lock, without any relation to its uses
      elsewhere.  That's fine, but comment it as such; and unlock sooner in
      switch_mm, more like in activate_mm (preemption is disabled here).
      
      There is a block of "if (0)"ed code in smp_flush_tlb_pending which would
      have liked to rely on the page_table_lock, in switch_mm and elsewhere;
      but its comment explains how dup_mmap's flush_tlb_mm defeated it.  And
      though that could have been changed at any time over the past few years,
      now the chance vanishes as we push the page_table_lock downwards, and
      perhaps split it per page table page.  Just delete that block of code.
      
      Which leaves the mysterious spin_unlock_wait(&oldmm->page_table_lock)
      in kernel/fork.c copy_mm.  Textual analysis (supported by Nick Piggin)
      suggests that the comment was written by DaveM, and that it relates to
      the defeated approach in the sparc64 smp_flush_tlb_pending.  Just delete
      this block too.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!