20 March 2006 · 10 commits
    • 618e9ed9
    • [SPARC64]: Turn off TSB growing for now. · f4e841da
      David S. Miller committed
      There are several tricky races involved with growing the TSB.  So just
      use base-size TSBs for user contexts and we can revisit enabling this
      later.
      
      One part of the SMP problems is that tsb_context_switch() can see
      partially updated TSB configuration state if tsb_grow() is running in
      parallel.  That's easily solved with a seqlock taken as a writer by
      tsb_grow() and taken as a reader to capture all the TSB config state
      in tsb_context_switch().
      
      Then there is flush_tsb_user() running in parallel with a tsb_grow().
      In theory we could take the seqlock as a reader there too, and just
      resample the TSB pointer and reflush, but that looks really ugly.
      
      Lastly, I believe there is a case with threads that results in a TSB
      entry lock bit being set spuriously which will cause the next access
      to that TSB entry to wedge the cpu (since the TSB entry lock bit will
      never clear).  It's either copy_tsb() or some bug elsewhere in the TSB
      assembly.
      Signed-off-by: David S. Miller <davem@davemloft.net>
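
      A minimal sketch of the seqlock scheme described above (illustrative
      only; the mm_context_t fields and the __tsb_context_switch() helper
      shape are assumptions, not code this commit merged):

          #include <linux/mm.h>
          #include <linux/seqlock.h>

          static DEFINE_SEQLOCK(tsb_cfg_lock);

          extern void __tsb_context_switch(unsigned long tsb_vaddr,
                                           unsigned long tsb_reg_val);

          /* Writer side: tsb_grow() publishes the new TSB state atomically
           * with respect to tsb_context_switch() readers. */
          static void tsb_grow(struct mm_struct *mm, unsigned long new_size)
          {
                  write_seqlock(&tsb_cfg_lock);
                  /* ... allocate the new TSB, copy_tsb(), update
                   *     mm->context with the new pointer and size ... */
                  write_sequnlock(&tsb_cfg_lock);
          }

          /* Reader side: capture all TSB config state in one consistent
           * snapshot, retrying if a grow raced with us. */
          static void tsb_context_switch(struct mm_struct *mm)
          {
                  unsigned long tsb, tsb_reg;
                  unsigned int seq;

                  do {
                          seq = read_seqbegin(&tsb_cfg_lock);
                          tsb     = mm->context.tsb;         /* assumed field */
                          tsb_reg = mm->context.tsb_reg_val; /* assumed field */
                  } while (read_seqretry(&tsb_cfg_lock, seq));

                  __tsb_context_switch(tsb, tsb_reg);
          }
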
    • [SPARC64]: Access TSB with physical addresses when possible. · 517af332
      David S. Miller committed
      This way we don't need to lock the TSB into the TLB.
      The trick is that every TSB load/store is registered into
      a special instruction patch section.  The default uses
      virtual addresses, and the patch instructions use physical
      address load/stores.
      
      We can't do this on all chips because only cheetah+ and later
      have the physical variant of the atomic quad load.
      Signed-off-by: David S. Miller <davem@davemloft.net>
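
      A sketch of the mechanism, modeled on the sparc64 boot-time patching
      style (the entry layout and symbol names here are assumptions): the
      assembler macros emit each TSB access twice, the virtual-address form
      inline and the physical-address form into a patch table, and early
      boot rewrites the inline copies on cheetah+:

          /* One record per patchable TSB load/store, collected by the
           * linker into a dedicated section bounded by the two symbols. */
          struct tsb_phys_patch_entry {
                  unsigned int    addr;   /* location of the virtual-access insn */
                  unsigned int    insn;   /* physical-access replacement insn */
          };
          extern struct tsb_phys_patch_entry __tsb_phys_patch,
                                             __tsb_phys_patch_end;

          static void __init tsb_phys_patch(void)
          {
                  struct tsb_phys_patch_entry *p;

                  for (p = &__tsb_phys_patch; p < &__tsb_phys_patch_end; p++) {
                          unsigned long addr = p->addr;

                          *(unsigned int *)addr = p->insn;
                          wmb();                          /* order the store */
                          __asm__ __volatile__("flush %0" /* flush the I-cache
                                                           * word we rewrote */
                                               : : "r" (addr));
                  }
          }

      The caller would invoke this only when tlb_type indicates cheetah+ or
      later, since earlier chips lack the physical atomic quad load.
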
    • 2f7ee7c6
    • [SPARC64]: Fix incorrect TSB lock bit handling. · 4753eb2a
      David S. Miller committed
      The TSB_LOCK_BIT define is actually a special
      value shifted down by 32-bits for the assembler
      code macros.
      
      In C code, this isn't what we want.
      Signed-off-by: David S. Miller <davem@davemloft.net>
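
      In other words, the assembler macros operate on the high 32-bit word
      of the 64-bit TSB tag, so they need the bit position pre-shifted down
      by 32, while C code wants the true bit. A sketch (bit 47 per the
      sparc64 TSB tag layout; the macro names are assumptions):

          #define TSB_TAG_LOCK_BIT   47   /* true position in the 64-bit tag */
          #define TSB_TAG_LOCK_HIGH  (1 << (TSB_TAG_LOCK_BIT - 32))
                                          /* for assembler macros that touch
                                           * only the tag's high 32-bit word */

          /* C code tests the full 64-bit tag directly. */
          static inline int tsb_tag_locked(unsigned long tag)
          {
                  return (tag & (1UL << TSB_TAG_LOCK_BIT)) != 0;
          }
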
    • [SPARC64]: Dynamically grow TSB in response to RSS growth. · bd40791e
      David S. Miller committed
      As the RSS grows, grow the TSB in order to reduce the likelihood
      of hash collisions and thus poor hit rates in the TSB.
      
      This definitely needs some serious tuning.
      Signed-off-by: David S. Miller <davem@davemloft.net>
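
      One plausible shape for the sizing rule (purely illustrative, given
      that the commit itself calls the tuning unsettled): double the TSB
      until it has at least one entry per resident page.

          #define TSB_ENTRY_BYTES 16UL            /* 8-byte tag + 8-byte TTE */
          #define TSB_MIN_BYTES   8192UL          /* base size: one 8KB page */
          #define TSB_MAX_BYTES   (1UL << 20)     /* arbitrary 1MB cap */

          static unsigned long tsb_size_for_rss(unsigned long rss_pages)
          {
                  unsigned long bytes = TSB_MIN_BYTES;

                  while (bytes < TSB_MAX_BYTES &&
                         bytes / TSB_ENTRY_BYTES < rss_pages)
                          bytes <<= 1;    /* double until entries >= RSS */

                  return bytes;
          }
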
    • [SPARC64]: Add infrastructure for dynamic TSB sizing. · 98c5584c
      David S. Miller committed
      This also cleans up tsb_context_switch().  The assembler routine
      is now __tsb_context_switch(), and tsb_context_switch() itself is
      an inline function that picks the needed bits out of the mm_struct
      and passes them into the assembler code as arguments.
      
      setup_tsb_parms() computes the locked TLB entry to map the
      TSB.  Later when we support using the physical address quad
      load instructions of Cheetah+ and later, we'll simply use
      the physical address for the TSB register value and set
      the map virtual and PTE both to zero.
      Signed-off-by: David S. Miller <davem@davemloft.net>
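
      A sketch of the resulting split (the mm_context_t field names are
      assumptions): the C inline gathers scalars so the assembler routine
      never has to know the mm_struct layout.

          extern void __tsb_context_switch(unsigned long pgd_pa,
                                           unsigned long tsb_reg,
                                           unsigned long tsb_map_vaddr,
                                           unsigned long tsb_map_pte);

          static inline void tsb_context_switch(struct mm_struct *mm)
          {
                  __tsb_context_switch(__pa(mm->pgd),
                                       mm->context.tsb_reg_val,   /* assumed */
                                       mm->context.tsb_map_vaddr, /* assumed */
                                       mm->context.tsb_map_pte);  /* assumed */
          }
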
    • [SPARC64]: TSB refinements. · 09f94287
      David S. Miller committed
      Move {init_new,destroy}_context() out of line.
      
      Do not put huge pages into the TSB, only base page size translations.
      There are some clever things we could do here, but for now let's be
      correct instead of fancy.
      Signed-off-by: David S. Miller <davem@davemloft.net>
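
      The huge-page rule amounts to an early-out in the TSB insert path;
      a sketch (the function shape and the pte_huge()-style check are
      assumptions):

          /* Called when resolving a miss, to cache the translation. */
          static void tsb_insert(struct mm_struct *mm, unsigned long vaddr,
                                 pte_t pte)
          {
                  if (pte_huge(pte))
                          return;         /* only base-page-size translations
                                           * go into the TSB for now */
                  /* ... hash vaddr into a TSB slot, take the tag lock,
                   *     store tag then TTE ... */
          }
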
    • [SPARC64]: Move away from virtual page tables, part 1. · 74bf4312
      David S. Miller committed
      We now use the TSB hardware assist features of the UltraSPARC
      MMUs.
      
      SMP is currently knowingly broken, we need to find another place
      to store the per-cpu base pointers.  We hid them away in the TSB
      base register, and that obviously will not work any more :-)
      
      Another known broken case is non-8KB base page size.
      
      Also noticed that flush_tlb_all() is not referenced anywhere; only
      the internal __flush_tlb_all() (local cpu only) is used by the
      sparc64 port, so we can get rid of flush_tlb_all().
      
      The kernel gets its own 8KB TSB (swapper_tsb) and each address space
      gets its own private 8KB TSB.  Later we can add code to dynamically
      increase the size of the per-process TSB as the RSS grows.  An 8KB
      TSB is good enough for up to about a 4MB RSS, after which the TSB
      starts to incur many capacity and conflict misses.
      
      We even accumulate OBP translations into the kernel TSB.
      
      Another area for refinement is large page size support.  We could use
      a secondary address space TSB to handle those.
      Signed-off-by: David S. Miller <davem@davemloft.net>
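
      The 4MB figure is just TSB geometry: 8KB of TSB at 16 bytes per entry
      is 512 entries, and with one entry per 8KB base page that covers
      512 x 8KB = 4MB of RSS before capacity misses take over. In sketch
      form:

          #define TSB_ENTRY_BYTES 16UL    /* 8-byte tag + 8-byte TTE */
          #define BASE_PAGE_BYTES 8192UL  /* 8KB base pages */

          /* tsb_reach(8192) == 512 entries * 8KB == 4MB of mapped RSS */
          static unsigned long tsb_reach(unsigned long tsb_bytes)
          {
                  return (tsb_bytes / TSB_ENTRY_BYTES) * BASE_PAGE_BYTES;
          }
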