1. 28 April 2016 (7 commits)
  2. 22 April 2016 (1 commit)
  3. 20 April 2016 (1 commit)
  4. 18 April 2016 (1 commit)
  5. 06 April 2016 (1 commit)
    • arm64: KVM: Warn when PARange is less than 40 bits · 6141570c
      Committed by Marc Zyngier
      We always thought that 40 bits of PA range would be the minimum people
      would actually build. Anything less is terrifyingly small.
      
      Turns out that we were both right and wrong. Nobody has ever built
      such a system, but the ARM Foundation Model has a PARange set to 36 bits.
      Just because we can. Oh well. Now, the KVM API explicitly says that
      we offer a 40-bit PA space to the VM, so we shouldn't run KVM on
      the Foundation Model at all.
      
      That being said, this patch offers a less aggressive alternative, and
      loudly warns about the configuration being unsupported. You'll still
      be able to run VMs (at your own risk, though).
      
      This is just a workaround until we have a proper userspace API where
      we report the PARange to userspace.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      6141570c
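      A minimal sketch of the kind of check described above, assuming a helper that
      receives the ID_AA64MMFR0_EL1 value and the architectural PARange encodings
      (0b0000 = 32 bits, 0b0001 = 36 bits, 0b0010 = 40 bits); illustrative only, not
      the actual patch:

        #include <linux/kernel.h>

        #define ID_AA64MMFR0_PARANGE_SHIFT	0
        #define ID_AA64MMFR0_PARANGE_MASK	0xf

        /* Warn if the CPU's PA range is below the 40-bit IPA space KVM offers. */
        static void check_kvm_parange(u64 mmfr0)
        {
        	unsigned int parange = (mmfr0 >> ID_AA64MMFR0_PARANGE_SHIFT) &
        			       ID_AA64MMFR0_PARANGE_MASK;

        	if (parange < 0x2)	/* 0b0010 == 40 bits of PA */
        		pr_warn("PARange below 40 bits; KVM offers guests a 40-bit PA space (unsupported configuration)\n");
        }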
  6. 31 March 2016 (1 commit)
  7. 30 March 2016 (1 commit)
  8. 29 March 2016 (3 commits)
    • arm64: defconfig: updates for 4.6 · 431597bb
      Committed by Will Deacon
      A few defconfig updates got dropped on the floor during the merge window,
      so I've rounded up the remainder here:
      
        * Fix duplicate definition of MMC_BLOCK_MINORS and bump to 32 for
          msm8916
      
        * CPUFreq support for the Juno platform, using the MHU/SCPI interface
      
        * Removal of the default command line, which assumed a console called
          ttyAMA0
      
        * Bits and pieces for the Hi6220 (96Boards HiKey)
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      431597bb
    • arm64: perf: Move PMU register related defines to asm/perf_event.h · b8cfadfc
      Committed by Shannon Zhao
      To use the ARMv8 PMU related register defines from the KVM code, we move
      the relevant definitions to the asm/perf_event.h header file and rename
      them with the prefix ARMV8_PMU_. This allows us to get rid of kvm_perf_event.h.
      Signed-off-by: Anup Patel <anup.patel@linaro.org>
      Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b8cfadfc
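      With the defines living in asm/perf_event.h under the ARMV8_PMU_ prefix, KVM
      code can include that header directly. An illustrative, hedged fragment, which
      assumes the ARMV8_PMU_PMCR_E enable bit is among the renamed definitions:

        #include <asm/perf_event.h>

        /* Keep only the global enable bit of a guest-supplied PMCR value. */
        static u64 pmcr_keep_enable(u64 pmcr)
        {
        	return pmcr & ARMV8_PMU_PMCR_E;
        }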
    • arm64: opcodes.h: Add arm big-endian config options before including arm header · a6002ec5
      Committed by James Morse
      arm and arm64 use different config options to specify big endian. This
      needs taking into account when including code/headers between the two
      architectures.
      
      A case in point is PAN, which uses the __instr_arm() macro to output
      instructions. The macro comes from opcodes.h, which lives under arch/arm.
      On a big-endian build the mismatched config options mean the instruction
      isn't byte swapped correctly, resulting in undefined instruction exceptions
      during boot:
      
      | alternatives: patching kernel code
      | kdevtmpfs[87]: undefined instruction: pc=ffffffc0004505b4
      | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c
      | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c
      | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c
      | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c
      | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c
      | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c
      | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c
      | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c
      | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c
      | Internal error: Oops - undefined instruction: 0 [#1] SMP
      | Modules linked in:
      | CPU: 0 PID: 87 Comm: kdevtmpfs Not tainted 4.1.16+ #5
      | Hardware name: Hisilicon PhosphorHi1382 EVB (DT)
      | task: ffffffc336591700 ti: ffffffc3365a4000 task.ti: ffffffc3365a4000
      | PC is at dump_instr+0x68/0x100
      | LR is at do_undefinstr+0x1d4/0x2a4
      | pc : [<ffffffc00076231c>] lr : [<ffffffc0000811d4>] pstate: 604001c5
      | sp : ffffffc3365a6450
      
      Cc: <stable@vger.kernel.org> #4.3.x-
      Reported-by: Hanjun Guo <guohanjun@huawei.com>
      Tested-by: Xuefeng Wang <wxf.wang@hisilicon.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      a6002ec5
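      The likely shape of the fix, sketched below under the assumption that the arm64
      wrapper header simply maps its own big-endian option onto the arm one before
      pulling in the shared header (details may differ from the actual patch):

        /* arch/arm64/include/asm/opcodes.h -- sketch, not the verbatim change */
        #ifndef __ASM_OPCODES_H
        #define __ASM_OPCODES_H

        /*
         * The shared arm header keys its byte swapping on CONFIG_CPU_ENDIAN_BE8;
         * arm64 only has CONFIG_CPU_BIG_ENDIAN, so define the former in terms of
         * the latter before including it, otherwise big-endian kernels emit
         * instructions in the wrong byte order.
         */
        #ifdef CONFIG_CPU_BIG_ENDIAN
        #define CONFIG_CPU_ENDIAN_BE8 CONFIG_CPU_BIG_ENDIAN
        #endif

        #include <../../arm/include/asm/opcodes.h>

        #endif	/* __ASM_OPCODES_H */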
  9. 26 March 2016 (1 commit)
  10. 25 March 2016 (3 commits)
    • arm64: mm: allow preemption in copy_to_user_page · 691b1e2e
      Committed by Mark Rutland
      Currently we disable preemption in copy_to_user_page; a behaviour that
      we inherited from the 32-bit arm code. This was necessary for older
      cores without broadcast data cache maintenance, and ensured that cache
      lines were dirtied and cleaned by the same CPU. On these systems dirty
      cache line migration was not possible, so this was sufficient to
      guarantee coherency.
      
      On contemporary systems, cache coherence protocols permit (dirty) cache
      lines to migrate between CPUs as a result of speculation, prefetching,
      and other behaviours. To account for this, in ARMv8 data cache
      maintenance operations are broadcast and affect all data caches in the
      domain associated with the VA (i.e. ISH for kernel and user mappings).
      
      In __switch_to we ensure that tasks can be safely migrated in the middle
      of a maintenance sequence, using a dsb(ish) to ensure prior explicit
      memory accesses are observed and cache maintenance operations are
      completed before a task can be run on another CPU.
      
      Given the above, it is not necessary to disable preemption in
      copy_to_user_page. This patch removes the preempt_{disable,enable}
      calls, permitting preemption.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      691b1e2e
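      A sketch of the resulting code, assuming the arm64 implementation in
      arch/arm64/mm/flush.c and its flush_ptrace_access() helper (illustrative, not
      the verbatim diff):

        #include <linux/mm.h>
        #include <linux/string.h>

        /*
         * With broadcast cache maintenance and the dsb(ish) in __switch_to, the
         * preempt_disable()/preempt_enable() pair that used to bracket this
         * function is no longer required.
         */
        void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
        		       unsigned long uaddr, void *dst, const void *src,
        		       unsigned long len)
        {
        	memcpy(dst, src, len);				/* copy the data   */
        	flush_ptrace_access(vma, page, uaddr, dst, len);	/* then sync I/D   */
        }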
    • arm64: consistently use p?d_set_huge · c661cb1c
      Committed by Mark Rutland
      Commit 324420bf ("arm64: add support for ioremap() block
      mappings") added new p?d_set_huge functions which do the hard work to
      generate and set a correct block entry.
      
      These differ from open-coded huge page creation in the early page table
      code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
      retained by mk_sect_prot() for any valid prot), but are otherwise
      identical (and cannot fail on arm64).
      
      For simplicity and consistency, make use of these in the initial page
      table creation code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c661cb1c
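      A hedged sketch of what the simplification looks like in the early page table
      code (names and structure assumed, not the verbatim diff):

        /*
         * Where a section entry was previously built by hand, e.g.
         *
         *	set_pmd(pmd, __pmd(phys | pgprot_val(mk_sect_prot(prot))));
         *
         * the helper introduced for ioremap() block mappings can be used instead;
         * it sets the P?D_TYPE_SECT bits explicitly and cannot fail on arm64.
         */
        static void init_pmd_block(pmd_t *pmd, phys_addr_t phys, pgprot_t prot)
        {
        	pmd_set_huge(pmd, phys, prot);
        }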
    • arm64: kaslr: use callee saved register to preserve SCTLR across C call · d5e57437
      Committed by Ard Biesheuvel
      The KASLR code incorrectly expects the contents of x18 to be preserved
      across a call into C code, and uses it to stash the contents of SCTLR_EL1
      before enabling the MMU. If the MMU needs to be disabled again to create
      the randomized kernel mapping, x18 is written back to SCTLR_EL1, which is
      likely to crash the system if x18 has been clobbered by kasan_early_init()
      or kaslr_early_init(). So use x22 instead, which is not yet used in
      head.S.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d5e57437
  11. 23 March 2016 (1 commit)
  12. 21 March 2016 (6 commits)
  13. 19 March 2016 (3 commits)
  14. 18 March 2016 (2 commits)
  15. 13 March 2016 (3 commits)
  16. 11 March 2016 (3 commits)
    • arm64: kasan: Fix zero shadow mapping overriding kernel image shadow · 2776e0e8
      Committed by Catalin Marinas
      With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is
      PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since
      kimg_shadow_end is not page aligned (it is _end shifted right by
      KASAN_SHADOW_SCALE_SHIFT), the edges of the kernel image shadow previously
      mapped via vmemmap_populate() may be overridden by subsequent calls to
      kasan_populate_zero_shadow(), leading to kernel panics like the one below:
      
      ------------------------------------------------------------------------------
      Unable to handle kernel paging request at virtual address fffffc100135068c
      pgd = fffffc8009ac0000
      [fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793
      Internal error: Oops: 9600004f [#1] PREEMPT SMP
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984
      Hardware name: Juno (DT)
      task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000
      PC is at __memset+0x4c/0x200
      LR is at kasan_unpoison_shadow+0x34/0x50
      pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245
      sp : fffffe0900203db0
      x29: fffffe0900203db0 x28: 0000000000000000
      x27: 0000000000000000 x26: 0000000000000000
      x25: fffffc80099b69d0 x24: 0000000000000001
      x23: 0000000000000000 x22: 0000000000002000
      x21: dffffc8000000000 x20: 1fffff9001350a8c
      x19: 0000000000002000 x18: 0000000000000008
      x17: 0000000000000147 x16: ffffffffffffffff
      x15: 79746972100e041d x14: ffffff0000000000
      x13: ffff000000000000 x12: 0000000000000000
      x11: 0101010101010101 x10: 1fffffc11c000000
      x9 : 0000000000000000 x8 : fffffc100135068c
      x7 : 0000000000000000 x6 : 000000000000003f
      x5 : 0000000000000040 x4 : 0000000000000004
      x3 : fffffc100134f651 x2 : 0000000000000400
      x1 : 0000000000000000 x0 : fffffc100135068c
      
      Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020)
      Call trace:
      [<fffffc800846f1cc>] __memset+0x4c/0x200
      [<fffffc8008220044>] __asan_register_globals+0x5c/0xb0
      [<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28
      [<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274
      [<fffffc80089e1948>] kernel_init+0x10/0xf8
      [<fffffc8008093a00>] ret_from_fork+0x10/0x50
      ------------------------------------------------------------------------------
      
      This patch aligns kimg_shadow_start and kimg_shadow_end to
      SWAPPER_BLOCK_SIZE in all configurations.
      
      Fixes: f9040773 ("arm64: move kernel image to base of vmalloc area")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      2776e0e8
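      A sketch of the alignment described above, as it might appear in kasan_init()
      (assumed form, not the verbatim patch):

        /*
         * Round the kernel image shadow out to SWAPPER_BLOCK_SIZE so that later
         * kasan_populate_zero_shadow() calls cannot override its edge pages when
         * SWAPPER_BLOCK_SIZE == PAGE_SIZE (16KB/64KB page configurations).
         */
        kimg_shadow_start = round_down((u64)kasan_mem_to_shadow(_text),
        			       SWAPPER_BLOCK_SIZE);
        kimg_shadow_end = round_up((u64)kasan_mem_to_shadow(_end),
        			   SWAPPER_BLOCK_SIZE);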
    • arm64: kasan: Use actual memory node when populating the kernel image shadow · 2f76969f
      Committed by Catalin Marinas
      With the 16KB or 64KB page configurations, the generic
      vmemmap_populate() implementation warns on potential offnode
      page_structs via vmemmap_verify() because the arm64 kasan_init() passes
      NUMA_NO_NODE instead of the actual node for the kernel image memory.
      
      Fixes: f9040773 ("arm64: move kernel image to base of vmalloc area")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: James Morse <james.morse@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      2f76969f
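      The corresponding change is probably of this shape (a sketch of the
      kasan_init() call site, not the verbatim diff):

        /* Pass the node that actually backs the kernel image, not NUMA_NO_NODE,
         * so vmemmap_verify() no longer warns about off-node page structs. */
        vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
        		 pfn_to_nid(virt_to_pfn(_text)));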
    • arm64: Update PTE_RDONLY in set_pte_at() for PROT_NONE permission · fdc69e7d
      Committed by Catalin Marinas
      The set_pte_at() function must update the hardware PTE_RDONLY bit
      depending on the state of the PTE_WRITE and PTE_DIRTY bits of the given
      entry value. However, it currently only performs this for pte_valid()
      entries, ignoring PTE_PROT_NONE. The side-effect is that PROT_NONE
      mappings would not have the PTE_RDONLY bit set. Without
      CONFIG_ARM64_HW_AFDBM, this is not an issue since such PROT_NONE pages
      are not accessible anyway.
      
      With commit 2f4b829c ("arm64: Add support for hardware updates of
      the access and dirty pte bits"), the ptep_set_wrprotect() function was
      re-written to cope with automatic hardware updates of the dirty state.
      As an optimisation, only PTE_RDONLY is checked to assess the "dirty"
      status. Since set_pte_at() does not set this bit for PROT_NONE mappings,
      such pages may be considered "dirty" as a result of
      ptep_set_wrprotect().
      
      This patch updates the pte_valid() check to pte_present() in
      set_pte_at(). It also adds PTE_PROT_NONE to the swap entry bits comment.
      
      Fixes: 2f4b829c ("arm64: Add support for hardware updates of the access and dirty pte bits")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
      Tested-by: Ganapatrao Kulkarni <gkulkarni@cavium.com>
      Cc: <stable@vger.kernel.org>
      fdc69e7d
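      A hedged sketch of the check after the change (simplified, not the verbatim
      arm64 set_pte_at(); the icache maintenance and DBM details are omitted):

        static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
        			      pte_t *ptep, pte_t pte)
        {
        	if (pte_present(pte)) {		/* was: pte_valid(pte) */
        		/* Keep the hardware PTE_RDONLY bit in sync with the
        		 * software PTE_WRITE/PTE_DIRTY state, now also for
        		 * PTE_PROT_NONE entries. */
        		if (pte_sw_dirty(pte) && pte_write(pte))
        			pte_val(pte) &= ~PTE_RDONLY;
        		else
        			pte_val(pte) |= PTE_RDONLY;
        	}
        	set_pte(ptep, pte);
        }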
  17. 10 March 2016 (1 commit)
    • arm64: kasan: clear stale stack poison · 0d97e6d8
      Committed by Mark Rutland
      Functions which the compiler has instrumented for KASAN place poison on
      the stack shadow upon entry and remove this poison prior to returning.
      
      In the case of cpuidle, CPUs exit the kernel a number of levels deep in
      C code.  Any instrumented functions on this critical path will leave
      portions of the stack shadow poisoned.
      
      If CPUs lose context and return to the kernel via a cold path, we
      restore a prior context saved in __cpu_suspend_enter; the instrumented
      calls made between that save and the actual exit of the kernel are
      forgotten, and the poison they placed in the stack shadow area is never
      removed.
      
      Thus, (depending on stackframe layout) subsequent calls to instrumented
      functions may hit this stale poison, resulting in (spurious) KASAN
      splats to the console.
      
      To avoid this, clear any stale poison from the idle thread for a CPU
      prior to bringing a CPU online.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0d97e6d8
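      One plausible shape for the fix (a sketch, assuming a hook along the lines of
      idle_thread_get() in kernel/smpboot.c; the exact location may differ):

        #include <linux/kasan.h>

        struct task_struct *idle_thread_get(unsigned int cpu)
        {
        	struct task_struct *tsk = per_cpu(idle_threads, cpu);

        	if (!tsk)
        		return ERR_PTR(-ENOMEM);

        	/* Wipe any stale KASAN stack-shadow poison left by a previous
        	 * trip through cpuidle before this CPU comes online. */
        	kasan_unpoison_task_stack(tsk);
        	init_idle(tsk, cpu);
        	return tsk;
        }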
  18. 09 March 2016 (1 commit)
    • arm64: hugetlb: partial revert of 66b3923a · ff792584
      Committed by Will Deacon
      Commit 66b3923a ("arm64: hugetlb: add support for PTE contiguous bit")
      introduced support for huge pages using the contiguous bit in the PTE
      as opposed to block mappings, which may be slightly unwieldy (512M) in
      64k page configurations.
      
      Unfortunately, this support has resulted in some late regressions when
      running the libhugetlbfs test suite with 64k pages and CONFIG_DEBUG_VM
      as a result of a BUG:
      
       | readback (2M: 64):	------------[ cut here ]------------
       | kernel BUG at fs/hugetlbfs/inode.c:446!
       | Internal error: Oops - BUG: 0 [#1] SMP
       | Modules linked in:
       | CPU: 7 PID: 1448 Comm: readback Not tainted 4.5.0-rc7 #148
       | Hardware name: linux,dummy-virt (DT)
       | task: fffffe0040964b00 ti: fffffe00c2668000 task.ti: fffffe00c2668000
       | PC is at remove_inode_hugepages+0x44c/0x480
       | LR is at remove_inode_hugepages+0x264/0x480
      
      Rather than revert the entire patch, simply avoid advertising the
      contiguous huge page sizes for now while people are actively working on
      a fix. This patch can then be reverted once things have been sorted out.
      
      Cc: David Woods <dwoods@ezchip.com>
      Reported-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ff792584
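      A hedged sketch of how the contiguous sizes stop being advertised (assumed
      shape of the arm64 hugepagesz= parsing, not the verbatim diff):

        static __init int setup_hugepagesz(char *opt)
        {
        	unsigned long ps = memparse(opt, &opt);

        	if (ps == PMD_SIZE || ps == PUD_SIZE) {
        		/* Plain block-mapping sizes are still supported. */
        		hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT);
        	} else {
        		/* CONT_PTE/CONT_PMD contiguous sizes are no longer
        		 * accepted here until the BUG is resolved. */
        		pr_err("hugepagesz: Unsupported page size %lu K\n", ps >> 10);
        		return 0;
        	}
        	return 1;
        }
        __setup("hugepagesz=", setup_hugepagesz);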