1. 17 April 2012, 2 commits
    • ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs · 7fec1b57
      Committed by Catalin Marinas
      Since the ASIDs must be unique to an mm across all the CPUs in a system,
      the __new_context() function needs to broadcast a context reset event to
      all the CPUs during ASID allocation if a roll-over occurred. Such IPIs
      cannot be issued with interrupts disabled and ARM had to define
      __ARCH_WANT_INTERRUPTS_ON_CTXSW.
      
      This patch changes the check_context() function to
      check_and_switch_context(), called from switch_mm(). On
      ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed and
      interrupts are disabled, it defers the __new_context() and
      cpu_switch_mm() calls to the post-lock switch hook, where interrupts
      are enabled. Setting the reserved TTBR0 was also moved from
      cpu_v7_switch_mm() to check_and_switch_context().
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
      Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
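The deferral described above can be modelled in a few lines. This is a hypothetical user-space sketch, not the kernel code: the function names mirror check_and_switch_context() and the post-lock switch hook, but the bodies only illustrate the "defer ASID allocation until interrupts are back on" idea.

```c
#include <assert.h>
#include <stdbool.h>

static bool irqs_disabled_flag;   /* models local IRQ state */
static bool switch_pending;       /* models the deferred-switch flag */
static int  current_asid;
static int  next_asid = 1;

/* Stands in for __new_context() + cpu_switch_mm(); needs IPIs on rollover,
 * so it must only run with interrupts enabled. */
static void do_new_context(void)
{
    current_asid = next_asid++;
}

static void check_and_switch_context(void)
{
    if (irqs_disabled_flag) {
        /* Cannot broadcast a rollover IPI with IRQs off: defer. */
        switch_pending = true;
        return;
    }
    do_new_context();
}

/* Runs after the runqueue lock is dropped, with interrupts enabled. */
static void finish_arch_post_lock_switch(void)
{
    if (switch_pending) {
        switch_pending = false;
        do_new_context();
    }
}
```

With interrupts "disabled", the switch is recorded but not performed; the post-lock hook then completes it safely.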
    • ARM: Use TTBR1 instead of reserved context ID · 3c5f7e7b
      Committed by Will Deacon
      On ARMv7 CPUs that cache first level page table entries (like the
      Cortex-A15), using a reserved ASID while changing the TTBR or flushing
      the TLB is unsafe.
      
      This is because the CPU may cache the first level entry as the result of
      a speculative memory access while the reserved ASID is assigned. After
      the process owning the page tables dies, the memory will be reallocated
      and may be written with junk values which can be interpreted as global,
      valid PTEs by the processor. This will result in the TLB being populated
      with bogus global entries.
      
      This patch avoids the use of a reserved context ID in the v7 switch_mm
      and ASID rollover code by temporarily using the swapper_pg_dir pointed
      at by TTBR1, which contains only global entries that are not tagged
      with ASIDs.
      Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
      Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: add LPAE support]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
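The safe ordering the commit describes can be sketched abstractly. This is an illustrative user-space model only (string-valued "registers", no real MMU): the point is that translations go through the global-only swapper tables while the ASID and page-table base are changed, so no window exists where a stale table could be walked under a reused ASID.

```c
#include <assert.h>
#include <string.h>

/* Toy model of the two translation table base registers and the ASID. */
struct mmu {
    const char *ttbr0;   /* per-process tables, ASID-tagged entries */
    const char *ttbr1;   /* swapper_pg_dir: global entries, no ASID tag */
    int asid;
};

static void switch_mm_v7_sketch(struct mmu *m, const char *new_pgd, int new_asid)
{
    /* 1. Route translation through the global-only swapper tables, so
     *    speculative walks cannot hit a dead process's tables. */
    m->ttbr0 = m->ttbr1;
    /* 2. Now it is safe to install the new ASID. */
    m->asid = new_asid;
    /* 3. Finally install the new process's page tables. */
    m->ttbr0 = new_pgd;
}
```

The real v7 code does this in assembly with TTBR0/TTBR1 and CONTEXTIDR writes plus the required barriers; the sketch only captures the ordering.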
  2. 16 April 2012, 5 commits
  3. 15 April 2012, 2 commits
  4. 14 April 2012, 12 commits
  5. 13 April 2012, 4 commits
  6. 12 April 2012, 3 commits
    • irq_domain: Move irq_virq_count into NOMAP revmap · 6fa6c8e2
      Committed by Grant Likely
      This patch replaces the old global setting of irq_virq_count that is only
      used by the NOMAP mapping and instead uses a revmap_data property so that
      the maximum NOMAP allocation can be set per NOMAP irq_domain.
      
      There is exactly one in-tree user of irq_virq_count right now: PS3.
      Also, irq_virq_count is only useful for the NOMAP mapping.  So,
      instead of having a single global irq_virq_count value, this change
      drops it entirely and adds a max_irq argument to irq_domain_add_nomap().
      That makes it a property of an individual NOMAP irq domain instead of
      a global system setting.
      Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
      Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Milton Miller <miltonm@bga.com>
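The per-domain limit can be sketched as follows. The struct layout and helper names are illustrative (modelled loosely on the kernel's revmap_data union), not the actual irq_domain code; only the idea follows the commit: the NOMAP ceiling lives in the domain, not in a global.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative: the NOMAP allocation ceiling is stored per domain. */
struct irq_domain {
    union {
        struct { unsigned int max_irq; } nomap;
    } revmap_data;
};

/* Sketch of irq_domain_add_nomap() taking the new max_irq argument. */
static struct irq_domain *irq_domain_add_nomap_sketch(unsigned int max_irq)
{
    static struct irq_domain dom;   /* static, to keep the demo self-contained */
    dom.revmap_data.nomap.max_irq = max_irq;
    return &dom;
}

/* Direct mappings are refused beyond the domain's own ceiling. */
static int irq_create_direct_mapping_sketch(struct irq_domain *d,
                                            unsigned int hwirq)
{
    if (hwirq >= d->revmap_data.nomap.max_irq)
        return -1;
    return (int)hwirq;
}
```

A platform like PS3 would then pass its own limit when creating its domain, instead of setting a system-wide count.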
    • arch/tile: avoid unused variable warning in proc.c for tilegx · e72d5c7e
      Committed by Chris Metcalf
      Until we push the unaligned access support for tilegx, it's silly
      to have arch/tile/kernel/proc.c generate a warning about an unused
      variable.  Extend the #ifdef to cover all the code and data for now.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
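The pattern here — widening an #ifdef so that data used only by conditional code is compiled out along with it — looks like this in miniature. The config symbol and all names below are hypothetical, not the actual arch/tile code.

```c
#include <assert.h>
#include <string.h>

/* Pretend the config option is enabled for this sketch. */
#define CONFIG_EXAMPLE_FEATURE 1

#ifdef CONFIG_EXAMPLE_FEATURE
/* This table previously sat outside the #ifdef: when the feature was
 * compiled out, the function below vanished but the table stayed,
 * triggering an "unused variable" warning.  Moving it inside fixes that. */
static const char *example_modes[] = { "fast", "safe" };

static const char *example_pick(int i)
{
    return example_modes[i];
}
#endif
```

When CONFIG_EXAMPLE_FEATURE is not defined, both the table and its only user disappear together, so no warning is emitted.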
    • x86: merge 32/64-bit versions of 'strncpy_from_user()' and speed it up · 92ae03f2
      Committed by Linus Torvalds
      This merges the 32- and 64-bit versions of the x86 strncpy_from_user()
      by just rewriting it in C rather than the ancient inline asm versions
      that used lodsb/stosb and had been duplicated for (trivial) differences
      between the 32-bit and 64-bit versions.
      
      While doing that, it also speeds them up by doing the accesses a word at
      a time.  Finally, the new routines also properly handle the case of
      hitting the end of the address space, which we have never done correctly
      before (fs/namei.c has a hack around it for that reason).
      
      Despite all these improvements, it actually removes more lines than it
      adds, due to the de-duplication.  Also, we no longer export (or define)
      the legacy __strncpy_from_user() function (that was defined to not do
      the user permission checks), since it's not actually used anywhere, and
      the user address space checks are built in to the new code.
      
      Other architecture maintainers have been notified that the old hack in
      fs/namei.c will be going away in the 3.5 merge window, in case they
      copied the x86 approach of being a bit cavalier about the end of the
      address space.
      
      Cc: linux-arch@vger.kernel.org
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Anvin <hpa@zytor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
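The word-at-a-time idea can be sketched in plain user-space C. This is a simplified illustration, not the kernel routine: it omits user-access checks, alignment handling, and the end-of-address-space case the commit fixes, but it uses the same classic zero-byte bit trick to scan a word per iteration instead of a byte.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define ONES  0x0101010101010101ULL
#define HIGHS 0x8080808080808080ULL

/* Copy up to 'count' bytes of a NUL-terminated string, a word at a time.
 * Returns the string length on success, or 'count' if no NUL was found
 * (mirroring the kernel API's truncation convention).  The source buffer
 * must be readable in whole-word units, as in the kernel's variant. */
static long strncpy_word_at_a_time(char *dst, const char *src, size_t count)
{
    size_t res = 0;

    while (res + sizeof(uint64_t) <= count) {
        uint64_t v;
        memcpy(&v, src + res, sizeof(v));       /* load one word */
        /* Classic "has zero byte" test: a byte's high bit is set in the
         * mask iff that byte of v is zero. */
        uint64_t zero_mask = (v - ONES) & ~v & HIGHS;
        if (zero_mask) {
            /* NUL is inside this word: finish byte by byte. */
            while (src[res]) { dst[res] = src[res]; res++; }
            dst[res] = '\0';
            return (long)res;
        }
        memcpy(dst + res, &v, sizeof(v));       /* store one word */
        res += sizeof(v);
    }
    /* Tail: fewer than a word's worth of bytes remain. */
    while (res < count && (dst[res] = src[res]) != '\0')
        res++;
    return (long)res;
}
```

The kernel version layers page-fault handling and the address-space-end check on top of this core loop; the speedup comes from the eight-bytes-per-iteration fast path.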
  7. 11 April 2012, 2 commits
  8. 10 April 2012, 10 commits