1. 21 Nov, 2014 1 commit
  2. 20 Nov, 2014 12 commits
  3. 19 Nov, 2014 3 commits
  4. 18 Nov, 2014 1 commit
  5. 17 Nov, 2014 3 commits
  6. 16 Nov, 2014 3 commits
  7. 14 Nov, 2014 2 commits
    • ARM: 8198/1: make kuser helpers depend on MMU · 08b964ff
      Nathan Lynch authored
      The kuser helpers page is not set up on non-MMU systems, so it does
      not make sense to allow CONFIG_KUSER_HELPERS to be enabled when
      CONFIG_MMU=n.  Allowing it to be set on !MMU results in an oops in
      set_tls (used in execve and the arm_syscall trap handler):
      
      Unhandled exception: IPSR = 00000005 LR = fffffff1
      CPU: 0 PID: 1 Comm: swapper Not tainted 3.18.0-rc1-00041-ga30465a #216
      task: 8b838000 ti: 8b82a000 task.ti: 8b82a000
      PC is at flush_thread+0x32/0x40
      LR is at flush_thread+0x21/0x40
      pc : [<8f00157a>]    lr : [<8f001569>]    psr: 4100000b
      sp : 8b82be20  ip : 00000000  fp : 8b83c000
      r10: 00000001  r9 : 88018c84  r8 : 8bb85000
      r7 : 8b838000  r6 : 00000000  r5 : 8bb77400  r4 : 8b82a000
      r3 : ffff0ff0  r2 : 8b82a000  r1 : 00000000  r0 : 88020354
      xPSR: 4100000b
      CPU: 0 PID: 1 Comm: swapper Not tainted 3.18.0-rc1-00041-ga30465a #216
      [<8f002bc1>] (unwind_backtrace) from [<8f002033>] (show_stack+0xb/0xc)
      [<8f002033>] (show_stack) from [<8f00265b>] (__invalid_entry+0x4b/0x4c)
      
      As best I can tell, this issue existed for the set_tls ARM syscall
      before commit fbfb872f "ARM: 8148/1: flush TLS and thumbee
      register state during exec" consolidated the TLS manipulation code
      into the set_tls helper function, but now that we're using it to flush
      register state during execve, !MMU users encounter the oops at the
      first exec.
      
      Prevent CONFIG_MMU=n configurations from enabling
      CONFIG_KUSER_HELPERS (a simplified sketch of the faulting TLS store
      follows this entry).
      
      Fixes: fbfb872f (ARM: 8148/1: flush TLS and thumbee register state during exec)
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Reported-by: Stefan Agner <stefan@agner.ch>
      Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Cc: stable@vger.kernel.org
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      08b964ff
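      A simplified, compilable sketch (not the verbatim kernel source) of the
      logic in arch/arm/include/asm/tls.h that triggers the oops above: on a CPU
      without a hardware TLS register, set_tls() stores the value into the kuser
      helpers page at 0xffff0ff0 (note r3 in the register dump), and that page is
      only mapped when the MMU is up. The function name, the has_tls_reg
      parameter and the KUSER_TLS_SLOT macro are illustrative stand-ins.

      /*
       * Hedged sketch only: mirrors the fallback path in
       * arch/arm/include/asm/tls.h.  0xffff0ff0 is the TLS word in the
       * kuser helpers (vectors) page, which does not exist on CONFIG_MMU=n,
       * so this store is the one that blows up in flush_thread()/execve.
       */
      #define KUSER_TLS_SLOT	0xffff0ff0UL

      static inline void set_tls_sketch(unsigned long val, int has_tls_reg)
      {
      	if (has_tls_reg) {
      		/* real kernel: mcr p15, 0, val, c13, c0, 3 (TPIDRURO) */
      	} else {
      		/* no MMU, no vectors page: faulting store */
      		*(volatile unsigned long *)KUSER_TLS_SLOT = val;
      	}
      }

      With the merged fix, this path can no longer be reached on !MMU because
      CONFIG_KUSER_HELPERS cannot be selected there in the first place.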
    • ARM: 8191/1: decompressor: ensure I-side picks up relocated code · 238962ac
      Will Deacon authored
      To speed up decompression, the decompressor sets up a flat, cacheable
      mapping of memory. However, when there is insufficient space to hold
      the page tables for this mapping, we don't bother to enable the caches
      and subsequently skip all the cache maintenance hooks.
      
      Skipping the cache maintenance before jumping to the relocated code
      allows the processor to predict the branch and populate the I-cache
      with stale data before the relocation loop has completed (since a
      bootloader may have SCTLR.I set, which permits normal, cacheable
      instruction fetches regardless of SCTLR.M).
      
      This patch moves the cache maintenance check into the maintenance
      routines themselves, allowing the v6/v7 versions to invalidate the
      I-cache regardless of the MMU state (a schematic sketch of this
      restructuring follows this entry).
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Marc Carino <marc.ceeeee@gmail.com>
      Tested-by: Julien Grall <julien.grall@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      238962ac
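      A schematic C rendering of the restructuring described above. The real
      code is ARM assembly in arch/arm/boot/compressed/head.S; every helper
      name below is a hypothetical stand-in, and the sketch only shows the
      control-flow change, not the actual maintenance instructions.

      /* Hypothetical names; illustrates the before/after control flow only. */
      void clean_and_invalidate_dcache(void);
      void invalidate_icache(void);

      /* Before the fix the *caller* skipped maintenance entirely when the
       * caches had never been enabled:
       *
       *	if (cache_on)
       *		cache_clean_flush();	// skipped -> stale I-cache survives
       *
       * After the fix the check lives inside the routine, so the I-cache is
       * always invalidated before jumping to the relocated code.            */
      void cache_clean_flush(int cache_on)
      {
      	if (cache_on)
      		clean_and_invalidate_dcache();
      	invalidate_icache();	/* done regardless of MMU/cache state */
      }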
  8. 13 Nov, 2014 9 commits
    • ARM: tegra: roth: Fix SD card VDD_IO regulator · 221b9bf4
      Alexandre Courbot authored
      vddio_sdmmc3 is a vdd_io, and thus should be under the vqmmc-supply
      property, not vmmc-supply.
      Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      221b9bf4
    • ARM: tegra: Remove eMMC vmmc property for roth/tn7 · edbde56a
      Alexandre Courbot authored
      This property was wrong and broke eMMC since commit 52221610 ("mmc:
      sdhci: Improve external VDD regulator support"). Align the eMMC
      properties to those of other Tegra boards.
      Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      edbde56a
    • ARM: dts: tegra: move serial aliases to per-board · c4574aa0
      Olof Johansson authored
      There are general changes pending to make the /aliases/serial* entries
      number the serial ports on the system. On Tegra, the ports have so far
      just been numbered dynamically in the order they are configured, so
      their numbers would change.
      
      To avoid this, add specific aliases per board to keep the old numbers.
      This allows us to change the numbering by default on future SoCs while
      keeping the numbering on existing boards.
      Signed-off-by: Olof Johansson <olof@lixom.net>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      c4574aa0
    • ARM: tegra: Add serial port labels to Tegra124 DT · 121a2f6d
      Lucas Stach authored
      These labels will be used to provide deterministic numbering of consoles
      in a later patch.
      Signed-off-by: Lucas Stach <dev@lynxeye.de>
      [treding@nvidia.com: drop aliases, reword commit message]
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      121a2f6d
    • arm64: ARCH_PFN_OFFSET should be unsigned long · 5fd6690c
      Neil Zhang authored
      pfns are unsigned long, but PHYS_PFN_OFFSET is phys_addr_t. This leads
      to page_to_pfn() returning phys_addr_t, which causes type mismatches in
      some print statements (a small user-space illustration follows this
      entry).
      Signed-off-by: Neil Zhang <zhangwm@marvell.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5fd6690c
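      A small, runnable user-space illustration of the type mismatch being
      fixed. This is not kernel code: PHYS_OFFSET and the two PFN_OFFSET
      macros are hypothetical stand-ins chosen only to show how mixing a
      phys_addr_t (u64, i.e. unsigned long long via int-ll64.h) into pfn
      arithmetic promotes the expression away from unsigned long and breaks
      "%lx" format strings.

      #include <stdio.h>

      typedef unsigned long long phys_addr_t;	/* u64 on arm64 */

      /* Hypothetical stand-in values, just to show the promotion. */
      #define PHYS_OFFSET		0x80000000ULL
      #define PFN_OFFSET_OLD	((phys_addr_t)(PHYS_OFFSET >> 12))	/* before */
      #define PFN_OFFSET_NEW	((unsigned long)(PHYS_OFFSET >> 12))	/* after  */

      int main(void)
      {
      	unsigned long idx = 42;

      	/* Before: the sum is unsigned long long, so "%lx" would be a
      	 * -Wformat mismatch; "%llx" is needed.                        */
      	printf("pfn (old type) = %llx\n", idx + PFN_OFFSET_OLD);

      	/* After: the sum stays unsigned long and matches "%lx".       */
      	printf("pfn (new type) = %lx\n", idx + PFN_OFFSET_NEW);
      	return 0;
      }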
    • Correct the race condition in aarch64_insn_patch_text_sync() · 899d5933
      William Cohen authored
      While experimenting with patches to provide kprobes support for aarch64,
      SMP machines would hang when inserting breakpoints into kernel code.
      The hangs were caused by a race condition in the code called by
      aarch64_insn_patch_text_sync().  The first processor in the
      aarch64_insn_patch_text_cb() function would patch the code while other
      processors were still entering the function and incrementing the
      cpu_count field.  This resulted in some processors never observing the
      exit condition and therefore never exiting the function, so processors
      in the system hung.
      
      With the fix, the first processor to enter the patching function
      performs the patching and then signals that the patching is complete
      with an additional increment of the cpu_count field. When all the
      processors have incremented cpu_count, its value is
      num_online_cpus()+1 and they return to normal execution (a user-space
      model of this rendezvous follows this entry).
      
      Fixes: ae164807 ("arm64: introduce interfaces to hotpatch kernel and module code")
      Signed-off-by: William Cohen <wcohen@redhat.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      899d5933
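      A hypothetical user-space model (pthreads + C11 atomics) of the fixed
      rendezvous described above; the thread count, patch_text_cb() and
      patched_word are illustrative stand-ins, not the kernel implementation.
      The first arrival does the "patching" and then makes one extra
      increment, and everyone else spins until the counter exceeds the number
      of participants, so late arrivals can no longer miss the exit condition.

      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdio.h>

      #define NTHREADS 4

      static atomic_int cpu_count;
      static unsigned int patched_word;	/* stand-in for the patched insn */

      static void *patch_text_cb(void *arg)
      {
      	(void)arg;
      	if (atomic_fetch_add(&cpu_count, 1) + 1 == 1) {
      		/* first arrival: do the "patching"... */
      		patched_word = 0xd503201fu;	/* AArch64 NOP encoding */
      		/* ...then signal completion with one extra increment */
      		atomic_fetch_add(&cpu_count, 1);
      	} else {
      		/* spin until the counter passes NTHREADS
      		 * (the kernel uses cpu_relax() in this loop) */
      		while (atomic_load(&cpu_count) <= NTHREADS)
      			;
      	}
      	return NULL;
      }

      int main(void)
      {
      	pthread_t t[NTHREADS];

      	for (int i = 0; i < NTHREADS; i++)
      		pthread_create(&t[i], NULL, patch_text_cb, NULL);
      	for (int i = 0; i < NTHREADS; i++)
      		pthread_join(t[i], NULL);

      	printf("patched_word = %#x, cpu_count = %d\n",
      	       patched_word, atomic_load(&cpu_count));
      	return 0;
      }

      Build with "cc -pthread"; the final counter value is NTHREADS + 1, the
      condition every waiter checks for.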
    • arm64: __clear_user: handle exceptions on strb · 97fc1543
      Kyle McMartin authored
      ARM64 currently doesn't fix up faults on the single-byte (strb) case of
      __clear_user, which means that an ordinary user can cause a nasty
      kernel panic with any read from /dev/zero whose size is a multiple of
      PAGE_SIZE plus one byte,
      e.g.: dd if=/dev/zero of=foo ibs=1 count=1 (or ibs=65537, etc.)
      
      This is a pretty obscure bug in the general case since we'll only
      __do_kernel_fault (since there's no extable entry for pc) if the
      mmap_sem is contended. However, with CONFIG_DEBUG_VM enabled, we'll
      always fault.
      
      if (!down_read_trylock(&mm->mmap_sem)) {
      	if (!user_mode(regs) && !search_exception_tables(regs->pc))
      		goto no_context;
      retry:
      	down_read(&mm->mmap_sem);
      } else {
      	/*
      	 * The above down_read_trylock() might have succeeded in which
      	 * case, we'll have missed the might_sleep() from down_read().
      	 */
      	might_sleep();
      	if (!user_mode(regs) && !search_exception_tables(regs->pc))
      		goto no_context;
      }
      
      Fix that by adding an extable entry for the strb instruction, since it
      touches user memory, similar to the other stores in __clear_user (a
      user-space equivalent of the dd reproducer follows this entry).
      Signed-off-by: Kyle McMartin <kyle@redhat.com>
      Reported-by: Miloš Prchlík <mprchlik@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      97fc1543
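      For reference, a user-space equivalent of the dd reproducer above. This
      is an assumption-laden sketch: it assumes 4 KiB pages and uses a fresh
      anonymous mmap() so the buffer pages are untouched, which is what makes
      the kernel's __clear_user() store fault while clearing; whether it
      actually panics still depends on the conditions described above
      (contended mmap_sem or CONFIG_DEBUG_VM). On a fixed kernel it simply
      succeeds.

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
      	size_t len = 16 * 4096 + 1;	/* PAGE_SIZE * N + 1, like ibs=65537 */

      	/* Fresh anonymous mapping: the tail page is not yet populated, so
      	 * the kernel's clearing store faults it in.                       */
      	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
      			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      	if (buf == MAP_FAILED) { perror("mmap"); return 1; }

      	int fd = open("/dev/zero", O_RDONLY);
      	if (fd < 0) { perror("open"); return 1; }

      	/* read() from /dev/zero is serviced via clear_user(); the final odd
      	 * byte is handled by the strb that lacked an extable entry.        */
      	ssize_t n = read(fd, buf, len);
      	printf("read %zd bytes\n", n);
      	close(fd);
      	return 0;
      }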
    • arm64: Fix data type for physical address · 287e8c6a
      Min-Hua Chen authored
      Use phys_addr_t for physical addresses in alloc_init_pud(). Although
      phys_addr_t and unsigned long are both 64-bit on arm64, it is better
      to use phys_addr_t to describe physical addresses.
      Signed-off-by: Min-Hua Chen <orca.chen@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      287e8c6a
    • arm64: efi: Fix stub cache maintenance · 9b0b2658
      Mark Rutland authored
      While efi-entry.S mentions that efi_entry() will have relocated the
      kernel image, it actually means that efi_entry will have placed a copy
      of the kernel in the appropriate location, and until this is branched to
      at the end of efi_entry.S, all instructions are executed from the
      original image.
      
      Thus while the flush in efi_entry.S does ensure that the copy is visible
      to noncacheable accesses, it does not guarantee that this is true for
      the image the instructions are being executed from. This could have
      disastrous effects when the MMU and caches are disabled if the image
      has not been naturally evicted to the PoC.
      
      Additionally, due to a missing dsb following the ic ialluis, the new
      kernel image is not necessarily clean in the I-cache when it is branched
      to, with similar, potentially disastrous effects.
      
      This patch adds additional flushing to ensure that the currently
      executing stub text is flushed to the PoC and is thus visible to
      noncacheable accesses. As it is placed after the instruction cache
      maintenance for the new image and __flush_dcache_area already contains a
      dsb, we do not need to add a separate barrier to ensure completion of
      the icache maintenance.
      
      Comments are updated to clarify the situation with regard to the two
      images and the maintenance required for both.
      
      Fixes: 3c7f2550
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Joel Schopp <joel.schopp@amd.com>
      Reviewed-by: Roy Franz <roy.franz@linaro.org>
      Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Ian Campbell <ijc@hellion.org.uk>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9b0b2658
  9. 12 Nov, 2014 1 commit
  10. 11 Nov, 2014 5 commits