1. 04 March 2013, 2 commits
    • ARM: 7659/1: mm: make mm->context.id an atomic64_t variable · 8a4e3a9e
      Authored by Will Deacon
      mm->context.id is updated under asid_lock when a new ASID is allocated
      to an mm_struct. However, it is also read without the lock when a task
      is being scheduled and checking whether or not the current ASID
      generation is up-to-date.
      
      If two threads of the same process are being scheduled in parallel and
      the bottom bits of the generation in their mm->context.id match the
      current generation (that is, the mm_struct has not been used for ~2^24
      rollovers) then the non-atomic, lockless access to mm->context.id may
      yield the incorrect ASID.
      
      This patch fixes the issue by making mm->context.id an atomic64_t,
      ensuring that the generation is always read consistently. For code that
      only requires access to the ASID bits (e.g. TLB flushing by mm), the
      value is accessed directly, which GCC converts to an ldrb.
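
      As a rough sketch (not the verbatim patch), the layout after this
      change looks like the following; ASID_BITS and the ASID() accessor
      follow the arch/arm naming of that era, while mm_context_id() is a
      hypothetical helper added here for illustration. Reading .counter
      directly in ASID() is what lets GCC emit the single ldrb:

          #define ASID_BITS	8
          #define ASID_MASK	((~0ULL) << ASID_BITS)

          typedef struct {
          	/* generation in the upper bits, hardware ASID in the low 8 */
          	atomic64_t	id;
          } mm_context_t;

          /* Full reads must be atomic so generation and ASID stay consistent. */
          static inline u64 mm_context_id(struct mm_struct *mm)
          {
          	return atomic64_read(&mm->context.id);
          }

          /* Callers needing only the hardware ASID can mask off the generation. */
          #define ASID(mm)	((unsigned int)((mm)->context.id.counter & ~ASID_MASK))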
      
      Cc: <stable@vger.kernel.org> # 3.8
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 7658/1: mm: fix race updating mm->context.id on ASID rollover · 37f47e3d
      Authored by Will Deacon
      If a thread triggers an ASID rollover, other threads of the same process
      must be made to wait until the mm->context.id for the shared mm_struct
      has been updated to the new generation and the associated book-keeping
      (e.g. TLB invalidation) has been performed.
      
      However, there is a *tiny* window where both mm->context.id and the
      relevant active_asids entry are updated to the new generation, but the
      TLB flush has not been performed, which could allow another thread to
      return to userspace with a dirty TLB, potentially leading to data
      corruption. In reality this will never occur, because one CPU would need
      to perform a context-switch in the time it takes another to do a couple
      of atomic test/set operations, but we should plug the race anyway.
      
      This patch moves the active_asids update so that it takes place after
      the potential TLB flush on context-switch.
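
      The reordering can be pictured roughly as below; this is a sketch of
      the context-switch slowpath, not the verbatim diff, using the
      asid_generation, tlb_flush_pending and active_asids names from this
      series:

          /* Allocate an id in the new generation if ours is stale. */
          if ((mm->context.id ^ atomic64_read(&asid_generation)) >> ASID_BITS)
          	new_context(mm, cpu);

          /* Flush *before* publishing the ASID as active on this CPU ... */
          if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
          	local_flush_tlb_all();

          /* ... so no thread can observe the new ASID with a dirty TLB. */
          atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);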
      
      Cc: <stable@vger.kernel.org> # 3.8
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2. 17 February 2013, 1 commit
3. 26 November 2012, 1 commit
4. 06 November 2012, 3 commits
    • ARM: mm: use bitmap operations when allocating new ASIDs · bf51bb82
      Authored by Will Deacon
      When allocating a new ASID, we must take care not to re-assign a
      reserved ASID value to a new mm. This requires us to check each
      candidate ASID against those currently reserved by other cores before
      assigning a new ASID to the current mm.
      
      This patch improves the ASID allocation algorithm by using a
      bitmap-based approach. Rather than iterating over the reserved ASID
      array for each candidate ASID, we simply find the first zero bit,
      ensuring that those indices corresponding to reserved ASIDs are set
      when flushing during a rollover event.
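
      A sketch of the two halves, using the standard kernel bitmap API
      (asid_map and NUM_USER_ASIDS follow the naming in this patch; the
      index arithmetic is simplified here):

          static DECLARE_BITMAP(asid_map, NUM_USER_ASIDS);

          /* Rollover: mark the reserved ASIDs instead of scanning them later. */
          bitmap_zero(asid_map, NUM_USER_ASIDS);
          for_each_possible_cpu(i)
          	__set_bit(per_cpu(reserved_asids, i) & ~ASID_MASK, asid_map);

          /* Allocation: the first clear bit is a usable ASID (0 stays reserved). */
          asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
          __set_bit(asid, asid_map);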
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • ARM: mm: avoid taking ASID spinlock on fastpath · 4b883160
      Authored by Will Deacon
      When scheduling a new mm, we take a spinlock so that we can:
      
        1. Safely allocate a new ASID, if required
        2. Update our active_asids field without worrying about parallel
           updates to reserved_asids
        3. Ensure that we flush our local TLB, if required
      
      However, this has the nasty effect of serialising context-switch across
      all CPUs in the system. The usual (fast) case is where the next mm has
      a valid ASID for the current generation. In such a scenario, we can
      avoid taking the lock and instead use atomic64_xchg to update the
      active_asids variable for the current CPU. If a rollover occurs on
      another CPU (which does take the lock), each active_asids entry is
      copied into reserved_asids and replaced with 0 using another
      atomic64_xchg. The fast path can then detect this case and fall back to
      spinning on the lock.
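
      Condensed, the resulting check-and-switch logic is roughly as follows
      (a sketch, with the slowpath body elided):

          void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
          {
          	unsigned long flags;
          	unsigned int cpu = smp_processor_id();
          	u64 asid = mm->context.id;

          	/* Fast path: our generation is current and no rollover on
          	 * another CPU has xchg'd our active_asids slot to 0. */
          	if (!((asid ^ atomic64_read(&asid_generation)) >> ASID_BITS) &&
          	    atomic64_xchg(&per_cpu(active_asids, cpu), asid))
          		goto switch_mm_fastpath;

          	/* Slow path: serialise against a rollover in progress. */
          	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
          	/* allocate a new ASID and/or flush, as required */
          	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);

          switch_mm_fastpath:
          	cpu_switch_mm(mm->pgd, mm);
          }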
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • ARM: mm: remove IPI broadcasting on ASID rollover · b5466f87
      Authored by Will Deacon
      ASIDs are allocated to MMU contexts based on a rolling counter. This
      means that after 255 allocations we must invalidate all existing ASIDs
      via an expensive IPI mechanism to synchronise all of the online CPUs and
      ensure that all tasks execute with an ASID from the new generation.
      
      This patch changes the rollover behaviour so that we rely instead on the
      hardware broadcasting of the TLB invalidation to avoid the IPI calls.
      This works by keeping track of the active ASID on each core, which is
      then reserved in the case of a rollover so that currently scheduled
      tasks can continue to run. For cores without hardware TLB broadcasting,
      we keep track of pending flushes in a cpumask, so cores can flush their
      local TLB before scheduling a new mm.
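
      The deferred-flush half can be sketched as follows (tlb_flush_pending
      is the cpumask introduced by this patch):

          static cpumask_t tlb_flush_pending;

          /* Rollover, under the lock: every core must flush before it next
           * schedules a new mm. */
          cpumask_setall(&tlb_flush_pending);

          /* Context-switch slowpath on each core: flush exactly once. */
          if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
          	local_flush_tlb_all();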
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
5. 25 August 2012, 1 commit
6. 10 July 2012, 1 commit
    • ARM: 7445/1: mm: update CONTEXTIDR register to contain PID of current process · 575320d6
      Authored by Will Deacon
      This patch introduces a new Kconfig option which, when enabled, causes
      the kernel to write the PID of the current task into the PROCID field
      of the CONTEXTIDR on context switch. This is useful when analysing
      hardware trace, since writes to this register can be configured to emit
      an event into the trace stream.
      
      The thread notifier for writing the PID is deliberately kept separate
      from the ASID-writing code so that we can support newer processors using
      LPAE, where the ASID is stored in TTBR0. As such, the switch_mm code is
      updated to perform a read-modify-write sequence to ensure that we don't
      clobber the PID on CPUs using the classic 2-level page tables.
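
      The read-modify-write sequence amounts to the following inline
      assembly sketch; the coprocessor encoding (c13, c0, 1) is the
      architectural CONTEXTIDR, while the helper name and the PROCID shift
      are illustrative:

          static void contextidr_write_pid(pid_t pid)
          {
          	u32 contextidr;

          	asm volatile(
          	"	mrc	p15, 0, %0, c13, c0, 1\n"	/* read CONTEXTIDR */
          	"	and	%0, %0, %2\n"			/* preserve the ASID byte */
          	"	orr	%0, %0, %1\n"			/* insert the PID as PROCID */
          	"	mcr	p15, 0, %0, c13, c0, 1\n"	/* write it back */
          	: "=r" (contextidr)
          	: "r" ((u32)pid << 8), "I" (0xff));
          	isb();
          }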
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
7. 17 April 2012, 3 commits
8. 08 December 2011, 1 commit
9. 13 September 2011, 1 commit
10. 09 June 2011, 2 commits
    • Revert "ARM: 6944/1: mm: allow ASID 0 to be allocated to tasks" · a0a54d37
      Authored by Russell King
      This reverts commit 45b95235.
      
      Will Deacon reports that:
      
       In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID")
       I updated the ASID rollover code to use only the kernel page tables
       whilst updating the ASID.
      
       Unfortunately, the code to restore the user page tables was part of a
       later patch which isn't yet in mainline, so this leaves the code
       quite broken.
      
      We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
      from ARM, so let's revert these until we can properly sort out what
      we're doing with context switching.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • Revert "ARM: 6943/1: mm: use TTBR1 instead of reserved context ID" · 07989b7a
      Authored by Russell King
      This reverts commit 52af9c6c.
      
      Will Deacon reports that:
      
       In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID")
       I updated the ASID rollover code to use only the kernel page tables
       whilst updating the ASID.
      
       Unfortunately, the code to restore the user page tables was part of a
       later patch which isn't yet in mainline, so this leaves the code
       quite broken.
      
      We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
      from ARM, so let's revert these until we can properly sort out what
      we're doing with ARM context switching.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
11. 26 May 2011, 2 commits
12. 16 February 2010, 1 commit
    • ARM: 5905/1: ARM: Global ASID allocation on SMP · 11805bcf
      Authored by Catalin Marinas
      The current ASID allocation algorithm doesn't notify the other CPUs
      when the ASID rolls over. This may lead to two processes using the same
      ASID (but a different generation) or to multiple threads of the same
      process using different ASIDs.
      
      This patch adds the broadcasting of the ASID rollover event to the
      other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid"
      during the handling of the broadcast, the ASID numbering now starts at
      "smp_processor_id() + 1". At rollover, the cpu_last_asid will be set
      to NR_CPUS.
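
      Sketching the rollover side (cpu_last_asid and reset_context() follow
      the code of that era; the details are simplified here):

          /* CPU that exhausted the ASID space: take a per-CPU-unique ASID
           * for its own mm and leave IDs 1..NR_CPUS for the IPI handlers. */
          asid = new_generation | (smp_processor_id() + 1);
          cpu_last_asid = NR_CPUS;

          /* Broadcast; reset_context() on every other CPU re-tags its
           * active mm with smp_processor_id() + 1 in the same way, so no
           * two CPUs can race to hand out the same ASID. */
          smp_call_function(reset_context, NULL, 1);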
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
13. 30 October 2009, 1 commit
    • ARM: Fix errata 411920 workarounds · df71dfd4
      Authored by Russell King
      Errata 411920 indicates that any "invalidate entire instruction cache"
      operation can fail if the right conditions are present. This is not
      limited to the operations in flush.c, but applies elsewhere too. Place
      the workaround in the already existing __flush_icache_all() function
      instead.
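
      After this change the helper has roughly the following shape (a
      sketch; v6_icache_inval_all is the dedicated errata routine the
      workaround relies on):

          static inline void __flush_icache_all(void)
          {
          #ifdef CONFIG_ARM_ERRATA_411920
          	extern void v6_icache_inval_all(void);
          	v6_icache_inval_all();	/* errata-safe full I-cache invalidate */
          #else
          	/* ICIALLU: invalidate the entire I-cache */
          	asm("mcr	p15, 0, %0, c7, c5, 0" : : "r" (0));
          #endif
          }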
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
14. 24 September 2009, 1 commit
15. 09 May 2007, 2 commits
    • [ARM] armv7: add support for asid-tagged VIVT I-cache · 065cf519
      Authored by Catalin Marinas
      ARMv7 can have VIPT, PIPT or ASID-tagged VIVT I-cache. This patch
      adds the necessary invalidation of the I-cache when the ASID numbers
      are re-used.
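
      The hook is essentially a conditional invalidate on the path that
      hands out a recycled ASID; icache_is_vivt_asid_tagged() is used here
      as the cache-type test, as in later kernels (an assumption, not the
      verbatim patch):

          /* Stale I-cache lines tagged with the recycled ASID must not
           * survive into the new owner of that ASID. */
          if (icache_is_vivt_asid_tagged())
          	__flush_icache_all();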
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • [ARM] Fix ASID version switch · 8678c1f0
      Authored by Russell King
      Close a hole in the ASID version switch, particularly the following
      scenario:
      
      CPU0 MM PID			CPU1 MM PID
      	idle
      				  A	pid(A)
      				  A	idle(lazy tlb)
      		* new asid version triggered by B *
        B	pid(B)
        A	pid(A)
      		* MM A gets new asid version *
        A	idle(lazy tlb)
      				  A	pid(A)
      		* CPU1 doesn't see the new ASID *
      
      The result is that CPU1 continues running with the hardware set
      for the original (stale) ASID value, but mm->context.id contains
      the new ASID value.  Consequently, the next MM fault on CPU1
      updates the page table entries, but flush_tlb_page() fails due to
      the wrong ASID.
      
      There is a related case where a threaded application is allocated
      a new ASID on one CPU while another of its threads is running on
      a different CPU.  This scenario is not fixed by this commit.
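
      The fix hinges on re-checking the ASID version at switch time; in the
      code of that era the check looked roughly like this sketch:

          static inline void check_context(struct mm_struct *mm)
          {
          	/* If the version bits of this mm's ASID are stale with
          	 * respect to cpu_last_asid, allocate a fresh context before
          	 * switching, so a lazy-tlb CPU can never keep running on an
          	 * outdated hardware ASID. */
          	if (unlikely((mm->context.id ^ cpu_last_asid) >> ASID_BITS))
          		__new_context(mm);
          }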
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
16. 08 February 2007, 1 commit
17. 20 September 2006, 1 commit
18. 17 April 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!