1. 17 Apr 2012, 2 commits
    • ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs · 7fec1b57
      Committed by Catalin Marinas
      Since the ASIDs must be unique to an mm across all the CPUs in a system,
      the __new_context() function needs to broadcast a context reset event to
      all the CPUs during ASID allocation if a roll-over occurred. Such IPIs
      cannot be issued with interrupts disabled and ARM had to define
      __ARCH_WANT_INTERRUPTS_ON_CTXSW.
      
      This patch changes the check_context() function to
      check_and_switch_context() called from switch_mm(). In case of
      ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed while
      interrupts are disabled, it defers the __new_context() and
      cpu_switch_mm() calls to the post-lock switch hook, where interrupts
      are enabled. Setting the reserved TTBR0 was also moved to
      check_and_switch_context() from cpu_v7_switch_mm().
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
      Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
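
      As a rough illustration of the deferred switch described in 7fec1b57, here is a
      minimal stand-alone C model. The helper names and signatures (asid_is_stale,
      new_context, set_reserved_ttbr0, finish_post_lock_switch) and the single
      deferred flag are simplified for the sketch; the real code lives in
      arch/arm/mm/context.c and the scheduler's post-lock switch hook.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct mm_model {
              uint64_t context_id;   /* ASID in the low bits, generation above */
              void *pgd;             /* root of this mm's page tables */
      };

      /* Placeholder implementations so the model compiles and runs. */
      static bool irqs_off = true;
      static bool irqs_disabled(void)               { return irqs_off; }
      static bool asid_is_stale(struct mm_model *m) { return m->context_id == 0; }
      static void new_context(struct mm_model *m)   { m->context_id = 1; /* would IPI on rollover */ }
      static void cpu_switch_mm(struct mm_model *m) { printf("switch to ASID %llu\n", (unsigned long long)m->context_id); }
      static void set_reserved_ttbr0(void)          { printf("TTBR0 -> global-only tables\n"); }

      static bool switch_pending;  /* per-CPU in the real code; one flag is enough here */

      /* Modelled on switch_mm(): called with IRQs disabled, so a rollover
       * IPI cannot be broadcast from here. */
      void check_and_switch_context(struct mm_model *next)
      {
              if (asid_is_stale(next) && irqs_disabled()) {
                      set_reserved_ttbr0();     /* park on global entries only */
                      switch_pending = true;    /* defer the real switch */
                      return;
              }
              if (asid_is_stale(next))
                      new_context(next);        /* may broadcast a reset on rollover */
              cpu_switch_mm(next);
      }

      /* Modelled on the post-lock switch hook: IRQs are enabled again, so
       * broadcasting the ASID rollover is safe here. */
      void finish_post_lock_switch(struct mm_model *cur)
      {
              if (switch_pending) {
                      switch_pending = false;
                      if (asid_is_stale(cur))
                              new_context(cur);
                      cpu_switch_mm(cur);
              }
      }

      int main(void)
      {
              struct mm_model a = { 0, 0 };
              check_and_switch_context(&a);   /* IRQs "disabled": switch is deferred */
              irqs_off = false;
              finish_post_lock_switch(&a);    /* now completes with a fresh ASID */
              return 0;
      }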
    • ARM: Use TTBR1 instead of reserved context ID · 3c5f7e7b
      Committed by Will Deacon
      On ARMv7 CPUs that cache first level page table entries (like the
      Cortex-A15), using a reserved ASID while changing the TTBR or flushing
      the TLB is unsafe.
      
      This is because the CPU may cache the first level entry as the result of
      a speculative memory access while the reserved ASID is assigned. After
      the process owning the page tables dies, the memory will be reallocated
      and may be written with junk values which can be interpreted as global,
      valid PTEs by the processor. This will result in the TLB being populated
      with bogus global entries.
      
      This patch avoids the use of a reserved context ID in the v7 switch_mm
      and ASID rollover code by temporarily using the swapper_pg_dir pointed
      at by TTBR1, which contains only global entries that are not tagged
      with ASIDs.
      Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
      Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: add LPAE support]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
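
      For illustration only, a toy C model of the ordering implied by the TTBR1 trick
      in 3c5f7e7b. The register writes are stand-in functions that just print what
      they would do; the real sequence is assembly in the v7 switch_mm code.

      #include <stdint.h>
      #include <stdio.h>

      /* Stand-ins for the coprocessor register writes and barriers. */
      static void write_ttbr0(const char *what)   { printf("TTBR0      <- %s\n", what); }
      static void write_contextidr(uint32_t asid) { printf("CONTEXTIDR <- ASID %u\n", asid); }
      static void isb(void)                       { printf("isb\n"); }

      /* Model of switching to a new mm without ever pairing user page tables
       * with a wrong ASID: during the window, walks can only hit the
       * global-only kernel tables that TTBR1 already points at. */
      void switch_mm_model(const char *new_pgd, uint32_t new_asid)
      {
              write_ttbr0("swapper_pg_dir (global entries only, as in TTBR1)");
              isb();                        /* speculative walks see no ASID-tagged entries */
              write_contextidr(new_asid);   /* safe: no user tables reachable via TTBR0 */
              isb();
              write_ttbr0(new_pgd);         /* finally expose the new user tables */
      }

      int main(void)
      {
              switch_mm_model("next->pgd", 42);
              return 0;
      }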
  2. 08 Dec 2011, 1 commit
  3. 13 Sep 2011, 1 commit
  4. 09 Jun 2011, 2 commits
    • Revert "ARM: 6944/1: mm: allow ASID 0 to be allocated to tasks" · a0a54d37
      Committed by Russell King
      This reverts commit 45b95235.
      
      Will Deacon reports that:
      
       In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID")
       I updated the ASID rollover code to use only the kernel page tables
       whilst updating the ASID.
      
       Unfortunately, the code to restore the user page tables was part of a
       later patch which isn't yet in mainline, so this leaves the code
       quite broken.
      
      We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
      from ARM, so let's revert these until we can properly sort out what we're
      doing with the context switching.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • Revert "ARM: 6943/1: mm: use TTBR1 instead of reserved context ID" · 07989b7a
      Committed by Russell King
      This reverts commit 52af9c6c.
      
      Will Deacon reports that:
      
       In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID")
       I updated the ASID rollover code to use only the kernel page tables
       whilst updating the ASID.
      
       Unfortunately, the code to restore the user page tables was part of a
       later patch which isn't yet in mainline, so this leaves the code
       quite broken.
      
      We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
      from ARM, so let's revert these until we can properly sort out what we're
      doing with the ARM context switching.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  5. 26 May 2011, 2 commits
  6. 16 Feb 2010, 1 commit
    • ARM: 5905/1: ARM: Global ASID allocation on SMP · 11805bcf
      Committed by Catalin Marinas
      The current ASID allocation algorithm doesn't notify the other CPUs
      when the ASID rolls over. This may lead to two
      processes using the same ASID (but different generation) or multiple
      threads of the same process using different ASIDs.
      
      This patch adds the broadcasting of the ASID rollover event to the
      other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid"
      during the handling of the broadcast, the ASID numbering now starts at
      "smp_processor_id() + 1". At rollover, the cpu_last_asid will be set
      to NR_CPUS.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
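
      A toy model of the numbering scheme described in 5905/1 (not the kernel code;
      the constants and field widths are illustrative):

      #include <stdio.h>

      #define NR_CPUS    4
      #define ASID_BITS  8
      #define ASID_MASK  (~((1u << ASID_BITS) - 1))   /* generation bits */

      static unsigned int cpu_last_asid = 1u << ASID_BITS;   /* first generation, ASID 0 reserved */

      /* Allocate a context ID; when the ASID field wraps, hand this CPU the
       * slot "cpu + 1" and restart the counter at NR_CPUS so allocations made
       * after the rollover cannot collide with the per-CPU slots. */
      static unsigned int new_context(int this_cpu)
      {
              unsigned int asid = ++cpu_last_asid;

              if ((asid & ~ASID_MASK) == 0) {   /* ASID field wrapped: rollover */
                      /* The real code broadcasts an IPI here so the other CPUs
                       * flush their TLBs and re-tag their running mm. */
                      asid = (asid & ASID_MASK) | (unsigned int)(this_cpu + 1);
                      cpu_last_asid = (asid & ASID_MASK) | NR_CPUS;
              }
              return asid;
      }

      int main(void)
      {
              for (int i = 0; i < 3; i++)
                      printf("context id %#x\n", new_context(0));
              return 0;
      }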
  7. 30 Oct 2009, 1 commit
    • ARM: Fix errata 411920 workarounds · df71dfd4
      Committed by Russell King
      Errata 411920 indicates that any "invalidate entire instruction cache"
      operation can fail if the right conditions are present.  This is not
      limited just to the operations in flush.c; it can occur elsewhere too.  Place the
      workaround in the already existing __flush_icache_all() function
      instead.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
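
      A sketch of the centralised helper this change argues for. It is ARM-only and
      not a stand-alone program, and the name v6_icache_inval_all is an assumption
      based on the erratum-safe routine used for this workaround.

      /* Keep the erratum handling in one helper rather than open-coding the
       * whole-I-cache invalidate at every call site. */
      static inline void __flush_icache_all(void)
      {
      #ifdef CONFIG_ARM_ERRATA_411920
              extern void v6_icache_inval_all(void);   /* erratum-safe invalidate */
              v6_icache_inval_all();
      #else
              asm volatile("mcr p15, 0, %0, c7, c5, 0  @ invalidate I-cache"
                           : : "r" (0));
      #endif
      }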
  8. 24 Sep 2009, 1 commit
  9. 09 May 2007, 2 commits
    • [ARM] armv7: add support for asid-tagged VIVT I-cache · 065cf519
      Committed by Catalin Marinas
      ARMv7 can have VIPT, PIPT or ASID-tagged VIVT I-cache. This patch
      adds the necessary invalidation of the I-cache when the ASID numbers
      are re-used.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
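
      An illustrative fragment (not a stand-alone program) of the extra step on ASID
      rollover; the helper names are assumptions modelled on the ARM code of that era.

      /* Once ASID numbers start being recycled, an ASID-tagged VIVT I-cache
       * must be invalidated as well, or stale lines tagged with a recycled
       * ASID could still hit for the new owner of that ASID. */
      static void model_asid_rollover(void)
      {
              flush_tlb_all();                      /* old translations gone */
              if (icache_is_vivt_asid_tagged())     /* ARMv7 can report this cache type */
                      __flush_icache_all();         /* drop lines tagged with recycled ASIDs */
      }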
    • [ARM] Fix ASID version switch · 8678c1f0
      Committed by Russell King
      Close a hole in the ASID version switch, particularly the following
      scenario:
      
      CPU0 MM PID			CPU1 MM PID
      	idle
      				  A	pid(A)
      				  A	idle(lazy tlb)
      		* new asid version triggered by B *
        B	pid(B)
        A	pid(A)
      		* MM A gets new asid version *
        A	idle(lazy tlb)
      				  A	pid(A)
      		* CPU1 doesn't see the new ASID *
      
      The result is that CPU1 continues running with the hardware set
      for the original (stale) ASID value, while mm->context.id contains
      the new ASID value.  Consequently, the next MM fault on CPU1
      updates the page table entries, but flush_tlb_page() fails due to
      the wrong ASID.
      
      There is a related case where a threaded application is allocated
      a new ASID on one CPU while another of its threads is running on
      a different CPU.  This scenario is not fixed by this commit.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
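
      For context, the stale-context test behind the "ASID version" is a generation
      comparison. A minimal sketch (the field width is illustrative, and this is not
      the fix itself):

      #include <stdio.h>

      #define ASID_BITS  8   /* low bits: hardware ASID; bits above: generation */

      /* A context is stale when its generation no longer matches the CPU's
       * current one, i.e. the bits above the ASID field differ. */
      static int context_is_stale(unsigned int mm_context_id, unsigned int cpu_last_asid)
      {
              return (mm_context_id ^ cpu_last_asid) >> ASID_BITS;
      }

      int main(void)
      {
              printf("%d\n", context_is_stale(0x105, 0x1ff) != 0);  /* same generation: 0 */
              printf("%d\n", context_is_stale(0x105, 0x203) != 0);  /* generation moved on: 1 */
              return 0;
      }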
  10. 08 Feb 2007, 1 commit
  11. 20 Sep 2006, 1 commit
  12. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!