1. 21 Dec 2015, 1 commit
  2. 08 Dec 2015, 1 commit
  3. 27 Nov 2015, 3 commits
  4. 25 Nov 2015, 2 commits
    • arm64: efi: correctly map runtime regions · 3b12acf4
      Committed by Mark Rutland
      The kernel may use a page granularity of 4K, 16K, or 64K depending on
      configuration.
      
      When mapping EFI runtime regions, we use memrange_efi_to_native to round
      the physical base address of a region down to a kernel page boundary,
      and round the size up to a kernel page boundary, adding the residue left
      over from rounding down the physical base address. We do not round down
      the virtual base address.
      
      In __create_mapping we account for the offset of the virtual base from a
      granule boundary, adding the residue to the size before rounding the
      base down to said granule boundary.
      
      Thus we account for the residue twice, which, when the residue is
      non-zero, causes __create_mapping to map an additional page at the
      end of the region. Depending on the memory map, this page may lie in
      a region we are not intended or permitted to map, or may clash with
      a different region that we wish to map. In typical cases, mapping
      the next item in the memory map overwrites the erroneously created
      entry, as we sort the memory map in the stub.
      
      As __create_mapping can cope with base addresses which are not page
      aligned, we can instead rely on it to map the region appropriately, and
      simplify efi_virtmap_init by removing the unnecessary code (see the
      sketch after this entry).
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3b12acf4
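      To make the double-counted residue concrete, below is a minimal,
      self-contained sketch of the arithmetic (hypothetical values;
      memrange_efi_to_native and __create_mapping are modelled here as
      plain computations, not the actual kernel code):

      #include <stdio.h>
      #include <stdint.h>

      #define PAGE_SIZE 0x10000ULL            /* e.g. a 64K kernel granule */
      #define PAGE_MASK (~(PAGE_SIZE - 1))

      int main(void)
      {
          uint64_t phys = 0x40003000ULL;  /* EFI region, not granule-aligned */
          uint64_t size = 0x1000ULL;      /* 4K region: one granule suffices */

          /* memrange_efi_to_native-style rounding: add the residue from
           * rounding the base down, then round the size up to granules. */
          uint64_t residue = phys & ~PAGE_MASK;                 /* 0x3000 */
          uint64_t npages  = (residue + size + PAGE_SIZE - 1) / PAGE_SIZE;
          phys &= PAGE_MASK;

          /* __create_mapping then adds the offset of the (unrounded)
           * virtual base again before rounding, counting the residue twice: */
          uint64_t map_size = (residue + npages * PAGE_SIZE + PAGE_SIZE - 1)
                              & PAGE_MASK;

          printf("granules mapped: %llu (1 was needed)\n",
                 (unsigned long long)(map_size / PAGE_SIZE));
          return 0;
      }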
    • arm64: KVM: Add workaround for Cortex-A57 erratum 834220 · 498cd5c3
      Committed by Marc Zyngier
      Cortex-A57 parts up to r1p2 can misreport Stage 2 translation faults
      when a Stage 1 permission fault or device alignment fault should
      have been reported.
      
      This patch implements the workaround (which is to validate that the
      Stage 1 translation actually succeeds) by using code patching; see
      the sketch after this entry.
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      498cd5c3
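      As a rough illustration of the idea (not the actual hypervisor code,
      which patches the check in via the alternatives mechanism), the
      workaround re-validates the Stage 1 translation before trusting a
      reported Stage 2 fault; stage1_translates below is a hypothetical
      stand-in for an AT-instruction based Stage 1 walk:

      #include <stdbool.h>
      #include <stdio.h>

      enum fault { STAGE1_FAULT, STAGE2_FAULT };

      /* hypothetical stand-in: real code performs an AT S1 translation */
      static bool stage1_translates(unsigned long va)
      {
          return va != 0;           /* pretend only VA 0 faults at Stage 1 */
      }

      /* On parts up to r1p2, a reported Stage 2 translation fault may
       * really be a Stage 1 permission or alignment fault: re-check. */
      static enum fault classify_stage2_fault(unsigned long far)
      {
          if (!stage1_translates(far))
              return STAGE1_FAULT;  /* misreported: forward to the guest */
          return STAGE2_FAULT;      /* genuine: handle in the hypervisor */
      }

      int main(void)
      {
          printf("%d %d\n", classify_stage2_fault(0),
                            classify_stage2_fault(64));
          return 0;
      }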
  5. 20 Nov 2015, 1 commit
  6. 18 Nov 2015, 2 commits
  7. 12 Nov 2015, 4 commits
  8. 31 Oct 2015, 1 commit
  9. 30 Oct 2015, 2 commits
  10. 29 Oct 2015, 4 commits
    • arm64: cpufeature: declare enable_cpu_capabilities as static · fde4a59f
      Committed by Will Deacon
      enable_cpu_capabilities is only called from within cpufeature.c, so
      it can be declared static (a minimal sketch of the idiom follows
      this entry).
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      fde4a59f
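      The change itself is the usual internal-linkage idiom; a generic
      sketch (hypothetical names, not the cpufeature.c code):

      /* a function used only within its own translation unit gets
       * internal linkage, keeping it out of the global namespace */
      static int helper_used_only_here(int x)
      {
          return x * 2;
      }

      int exported_api(int x)
      {
          return helper_used_only_here(x);  /* the only caller, same file */
      }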
    • Revert "ARM64: unwind: Fix PC calculation" · 9702970c
      Committed by Will Deacon
      This reverts commit e306dfd0.
      
      With this patch applied, we were the only architecture making this sort
      of adjustment to the PC calculation in the unwinder. This causes
      problems for ftrace, where the PC values are matched against the
      contents of the stack frames in the callchain and fail to match any
      records after the address adjustment.
      
      Whilst there has been some effort to change ftrace to work around
      this, those patches are not yet ready for mainline and, since we're
      the odd architecture out in this regard, let's just step in line
      with other architectures (like arch/arm/) for now. A sketch of the
      mismatch follows this entry.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9702970c
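      A minimal sketch of the mismatch (hypothetical address): the
      reverted commit made the unwinder report LR - 4, while ftrace
      compares callchain entries against the raw return addresses saved
      in the stack frames:

      #include <stdio.h>

      int main(void)
      {
          /* what ftrace recorded: the raw return address in the frame */
          unsigned long record = 0xffff000008123458UL;

          unsigned long unwound_raw      = record;      /* after the revert */
          unsigned long unwound_adjusted = record - 4;  /* e306dfd0 behaviour */

          printf("raw matches:      %d\n", unwound_raw == record);      /* 1 */
          printf("adjusted matches: %d\n", unwound_adjusted == record); /* 0 */
          return 0;
      }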
    • arm64: kernel: fix tcr_el1.t0sz restore on systems with extended idmap · e13d918a
      Committed by Lorenzo Pieralisi
      Commit dd006da2 ("arm64: mm: increase VA range of identity map")
      introduced a mechanism to extend the virtual memory map range to
      support arm64 systems with system RAM located at a very high offset,
      where the identity mapping used to enable/disable the MMU requires
      additional translation levels to map the physical memory at an equal
      virtual offset.
      
      The kernel detects at boot time the tcr_el1.t0sz value required by
      the identity mapping and sets up the tcr_el1.t0sz register field
      accordingly whenever the identity map is required in the kernel
      (i.e. when enabling the MMU).
      
      In the cold boot path, after enabling the MMU, the kernel resets
      tcr_el1.t0sz to its default value (i.e. the actual configuration
      value for the system virtual address space) so that the memory
      space translated by ttbr0_el1 is restored as expected.
      
      Commit dd006da2 ("arm64: mm: increase VA range of identity map")
      also added code to set up the tcr_el1.t0sz value when the kernel
      resumes from low-power states with the MMU off through cpu_resume(),
      in order to use the identity mapping to enable the MMU. However, it
      failed to add the code required to restore tcr_el1.t0sz to its
      default value when the core returns to the kernel with the MMU
      enabled. As a result, the kernel may end up running with a
      tcr_el1.t0sz value set up for the identity mapping, which can be
      lower than the value required by the actual virtual address space,
      resulting in an erroneous set-up.
      
      This patch adds code in the resume path that restores the
      tcr_el1.t0sz default value upon core resume, mirroring the cold
      boot path behaviour and thereby fixing the issue. A sketch of the
      T0SZ arithmetic follows this entry.
      
      Cc: <stable@vger.kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Fixes: dd006da2 ("arm64: mm: increase VA range of identity map")
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e13d918a
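      A sketch of the T0SZ arithmetic involved (values hypothetical):
      T0SZ selects a TTBR0 span of 2^(64 - T0SZ) bytes, so the extended
      idmap uses a smaller T0SZ than the runtime address space, and that
      smaller value must not survive the resume path:

      #include <stdio.h>

      /* T0SZ lives in TCR_EL1 bits [5:0]; 2^(64 - T0SZ) bytes are
       * translated via ttbr0_el1 */
      #define TCR_T0SZ(va_bits) (64UL - (va_bits))

      int main(void)
      {
          unsigned long t0sz_idmap   = TCR_T0SZ(48);  /* extended idmap: 16 */
          unsigned long t0sz_runtime = TCR_T0SZ(39);  /* VA_BITS=39:     25 */

          /* the cold boot path, and with this fix cpu_resume() as well,
           * restore the runtime value once the MMU is back on */
          printf("idmap T0SZ=%lu, runtime T0SZ=%lu\n",
                 t0sz_idmap, t0sz_runtime);
          return 0;
      }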
    • arm64: compat: fix stxr failure case in SWP emulation · 589cb22b
      Committed by Will Deacon
      If the STXR instruction fails in the SWP emulation code, we leave *data
      overwritten with the loaded value, therefore corrupting the data written
      by a subsequent, successful attempt.
      
      This patch re-jigs the code so that we only write back to *data once
      we know that the update has happened (see the sketch after this
      entry).
      
      Cc: <stable@vger.kernel.org>
      Fixes: bd35a4ad ("arm64: Port SWP/SWPB emulation support from arm")
      Reported-by: Shengjiu Wang <shengjiu.wang@freescale.com>
      Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      589cb22b
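      A sketch of the re-jigged flow, with trivial single-threaded
      stand-ins for the exclusive load/store pair (illustration only;
      the real emulation uses inline assembly). The loaded value stays in
      a local until the store-exclusive succeeds, so a failed attempt no
      longer corrupts *data:

      #include <stdio.h>

      /* single-threaded stand-ins for LDXR/STXR (illustration only) */
      static unsigned long ldxr(unsigned long *p) { return *p; }
      static int stxr(unsigned long *p, unsigned long v) { *p = v; return 0; }

      static void emulate_swp(unsigned long *addr, unsigned long *data)
      {
          unsigned long loaded;
          int failed;

          do {
              loaded = ldxr(addr);         /* keep the old value local */
              failed = stxr(addr, *data);  /* may fail; *data untouched */
          } while (failed);

          *data = loaded;                  /* write back only on success */
      }

      int main(void)
      {
          unsigned long mem = 1, reg = 2;
          emulate_swp(&mem, &reg);
          printf("mem=%lu reg=%lu\n", mem, reg);  /* mem=2 reg=1 */
          return 0;
      }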
  11. 28 Oct 2015, 1 commit
    • efi: Use correct type for struct efi_memory_map::phys_map · 44511fb9
      Committed by Ard Biesheuvel
      We have been getting away with using a void* for the physical
      address of the UEFI memory map, since, even on 32-bit platforms
      with 64-bit physical addresses, no truncation takes place if the
      memory map has been allocated by the firmware (which only uses
      1:1 virtually addressable memory), which is usually the case.
      
      However, commit:
      
        0f96a99d ("efi: Add "efi_fake_mem" boot option")
      
      adds code that clones and modifies the UEFI memory map, and the
      clone may live above 4 GB on 32-bit platforms.
      
      This means our use of void* for struct efi_memory_map::phys_map has
      graduated from 'incorrect but working' to 'incorrect and
      broken', and we need to fix it.
      
      So redefine struct efi_memory_map::phys_map as phys_addr_t, and
      get rid of a bunch of casts that are now unneeded (see the sketch
      after this entry).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: izumi.taku@jp.fujitsu.com
      Cc: kamezawa.hiroyu@jp.fujitsu.com
      Cc: linux-efi@vger.kernel.org
      Cc: matt.fleming@intel.com
      Link: http://lkml.kernel.org/r/1445593697-1342-1-git-send-email-ard.biesheuvel@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      44511fb9
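      A minimal sketch of the truncation hazard on a 32-bit build with
      64-bit physical addresses (hypothetical address; uint32_t stands in
      for what a 32-bit void* can hold):

      #include <stdio.h>
      #include <stdint.h>

      typedef uint64_t phys_addr_t;   /* 64-bit physical addresses (LPAE) */

      int main(void)
      {
          phys_addr_t map = 0x100000000ULL;      /* map clone above 4 GB */
          uint32_t as_void_ptr = (uint32_t)map;  /* 32-bit pointer width */

          printf("phys_addr_t: %#llx\n", (unsigned long long)map);
          printf("32-bit ptr:  %#x\n", as_void_ptr);  /* truncated to 0 */
          return 0;
      }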
  12. 21 Oct 2015, 17 commits
  13. 20 Oct 2015, 1 commit
    • arm64: Synchronise dump_backtrace() with perf callchain · 9f93f3e9
      Committed by Jungseok Lee
      Unlike perf callchain, which relies on walk_stackframe(),
      dump_backtrace() has its own backtrace logic. A major difference
      between them is the moment a symbol is recorded. Perf writes down a
      symbol *before* calling unwind_frame(), but dump_backtrace() prints
      it out *after* unwind_frame(). As a result, the last valid symbol is
      never recorded by dump_backtrace(). This patch addresses the issue
      by synchronising dump_backtrace() with perf callchain; a sketch of
      the ordering change follows this entry.
      
      A simple test and its results are as follows:
      
      - crash trigger
      
 $ echo c | sudo tee /proc/sysrq-trigger
      
      - current status
      
       Call trace:
       [<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
       [<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
       [<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
       [<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
       [<fffffe00001f2638>] __vfs_write+0x44/0x104
       [<fffffe00001f2e60>] vfs_write+0x98/0x1a8
       [<fffffe00001f3730>] SyS_write+0x50/0xb0
      
      - with this change
      
       Call trace:
       [<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
       [<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
       [<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
       [<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
       [<fffffe00001f2638>] __vfs_write+0x44/0x104
       [<fffffe00001f2e60>] vfs_write+0x98/0x1a8
       [<fffffe00001f3730>] SyS_write+0x50/0xb0
       [<fffffe00000939ec>] el0_svc_naked+0x20/0x28
      
      Note that this patch does not cover the case where the MMU is
      disabled. The last stack frame of swapper, for example, has a PC in
      the form of a physical address. Unfortunately, a simple conversion
      using phys_to_virt() cannot cover all scenarios, since the PC is
      retrieved from LR - 4, not LR. Changing both head.S and
      unwind_frame() for only a few symbols in *.S files would be a poor
      tradeoff, so this hunk does not handle that case.
      
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9f93f3e9
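      A sketch of the ordering change (hypothetical stubs stand in for
      the real frame walker): recording the symbol before unwind_frame()
      means the final valid frame is no longer dropped:

      #include <stdio.h>

      struct stackframe { unsigned long fp, pc; };

      /* stub walker: pretend there are three more frames, then stop */
      static int unwind_frame(struct stackframe *frame)
      {
          static int depth;
          if (++depth > 3)
              return -1;          /* end of stack */
          frame->pc += 0x100;     /* fake next return address */
          return 0;
      }

      static void dump_backtrace(struct stackframe *frame)
      {
          do {
              /* record *before* unwinding, as perf callchain does;
               * printing after unwind_frame() lost the last entry */
              printf("[<%016lx>]\n", frame->pc);
          } while (unwind_frame(frame) == 0);
      }

      int main(void)
      {
          struct stackframe frame = { 0, 0xfffffe0000093000UL };
          dump_backtrace(&frame);
          return 0;
      }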