1. 25 July 2014 (4 commits)
    • arm64: gicv3: Allow GICv3 compilation with older binutils · 72c58395
      Committed by Catalin Marinas
      GICv3 introduces new system registers accessible with the full msr/mrs
      syntax (e.g. mrs x0, Sop0_op1_CRm_CRn_op2). However, only recent
      binutils understand the new syntax. This patch introduces msr_s/mrs_s
      assembly macros which generate the equivalent instructions above and
      converts the existing GICv3 code (both drivers/irqchip/ and
      arch/arm64/kernel/).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Olof Johansson <olof@lixom.net>
      Tested-by: Olof Johansson <olof@lixom.net>
      Suggested-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Jason Cooper <jason@lakedaemon.net>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      72c58395
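      For context, a minimal sketch of how such msr_s/mrs_s macros can be built on
      top of .inst so that an older assembler still builds the code. The encoding
      constants follow the architectural MRS/MSR layout; the sys_reg() helper, the
      register-number symbols and the gic_read_iar() example are illustrative, not
      copied from the patch.

        /*
         * Minimal sketch (not the kernel header itself): emit MRS/MSR accesses to
         * named system registers as raw instruction words, so an assembler that
         * does not know the GICv3 register names can still build the code.
         *
         *   MRS <Xt>, S<op0>_<op1>_C<n>_C<m>_<op2>  ->  0xd5300000 | enc | Rt
         *   MSR S<op0>_<op1>_C<n>_C<m>_<op2>, <Xt>  ->  0xd5100000 | enc | Rt
         *   enc = (op0 - 2) << 19 | op1 << 16 | CRn << 12 | CRm << 8 | op2 << 5
         */
        #define __stringify_1(x...)     #x
        #define __stringify(x...)       __stringify_1(x)

        #define sys_reg(op0, op1, crn, crm, op2) \
                ((((op0) - 2) << 19) | ((op1) << 16) | ((crn) << 12) | \
                 ((crm) << 8) | ((op2) << 5))

        /* Numeric aliases for x0..x30/xzr, plus the mrs_s/msr_s macros. */
        asm(
        "       .irp    num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n"
        "       .equ    __reg_num_x\\num, \\num\n"
        "       .endr\n"
        "       .equ    __reg_num_xzr, 31\n"
        "\n"
        "       .macro  mrs_s, rt, sreg\n"
        "       .inst   0xd5300000|(\\sreg)|(__reg_num_\\rt)\n"
        "       .endm\n"
        "\n"
        "       .macro  msr_s, sreg, rt\n"
        "       .inst   0xd5100000|(\\sreg)|(__reg_num_\\rt)\n"
        "       .endm\n"
        );

        /* Example: ICC_IAR1_EL1 is S3_0_C12_C12_0 in the new syntax. */
        #define ICC_IAR1_EL1    sys_reg(3, 0, 12, 12, 0)

        static inline unsigned long gic_read_iar(void)
        {
                unsigned long irqstat;

                asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat));
                return irqstat;
        }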
    • Merge tag 'deps-irqchip-gic-3.17' of git://git.infradead.org/users/jcooper/linux · ecb3c2bb
      Committed by Catalin Marinas
      * tag 'deps-irqchip-gic-3.17' of git://git.infradead.org/users/jcooper/linux:
        irqchip: gic-v3: Initial support for GICv3
        irqchip: gic: Move some bits of GICv2 to a library-type file
      
      Conflicts:
      	arch/arm64/Kconfig
      ecb3c2bb
    • arm64: fix soft lockup due to large tlb flush range · 05ac6530
      Committed by Mark Salter
      Under certain loads, this soft lockup has been observed:
      
         BUG: soft lockup - CPU#2 stuck for 22s! [ip6tables:1016]
         Modules linked in: ip6t_rpfilter ip6t_REJECT cfg80211 rfkill xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw vfat fat efivarfs xfs libcrc32c
      
         CPU: 2 PID: 1016 Comm: ip6tables Not tainted 3.13.0-0.rc7.30.sa2.aarch64 #1
         task: fffffe03e81d1400 ti: fffffe03f01f8000 task.ti: fffffe03f01f8000
         PC is at __cpu_flush_kern_tlb_range+0xc/0x40
         LR is at __purge_vmap_area_lazy+0x28c/0x3ac
         pc : [<fffffe000009c5cc>] lr : [<fffffe0000182710>] pstate: 80000145
         sp : fffffe03f01fbb70
         x29: fffffe03f01fbb70 x28: fffffe03f01f8000
         x27: fffffe0000b19000 x26: 00000000000000d0
         x25: 000000000000001c x24: fffffe03f01fbc50
         x23: fffffe03f01fbc58 x22: fffffe03f01fbc10
         x21: fffffe0000b2a3f8 x20: 0000000000000802
         x19: fffffe0000b2a3c8 x18: 000003fffdf52710
         x17: 000003ff9d8bb910 x16: fffffe000050fbfc
         x15: 0000000000005735 x14: 000003ff9d7e1a5c
         x13: 0000000000000000 x12: 000003ff9d7e1a5c
         x11: 0000000000000007 x10: fffffe0000c09af0
         x9 : fffffe0000ad1000 x8 : 000000000000005c
         x7 : fffffe03e8624000 x6 : 0000000000000000
         x5 : 0000000000000000 x4 : 0000000000000000
         x3 : fffffe0000c09cc8 x2 : 0000000000000000
         x1 : 000fffffdfffca80 x0 : 000fffffcd742150
      
      The __cpu_flush_kern_tlb_range() function looks like:
      
        ENTRY(__cpu_flush_kern_tlb_range)
      	dsb	sy
      	lsr	x0, x0, #12
      	lsr	x1, x1, #12
        1:	tlbi	vaae1is, x0
      	add	x0, x0, #1
      	cmp	x0, x1
      	b.lo	1b
      	dsb	sy
      	isb
      	ret
        ENDPROC(__cpu_flush_kern_tlb_range)
      
      The above soft lockup shows the PC at tlbi insn with:
      
        x0 = 0x000fffffcd742150
        x1 = 0x000fffffdfffca80
      
      So __cpu_flush_kern_tlb_range has 0x128ba930 tlbi flushes left
       after it has already been looping for 23 seconds!
      
      Looking up one frame at __purge_vmap_area_lazy(), there is:
      
      	...
      	list_for_each_entry_rcu(va, &vmap_area_list, list) {
      		if (va->flags & VM_LAZY_FREE) {
      			if (va->va_start < *start)
      				*start = va->va_start;
      			if (va->va_end > *end)
      				*end = va->va_end;
      			nr += (va->va_end - va->va_start) >> PAGE_SHIFT;
      			list_add_tail(&va->purge_list, &valist);
      			va->flags |= VM_LAZY_FREEING;
      			va->flags &= ~VM_LAZY_FREE;
      		}
      	}
      	...
      	if (nr || force_flush)
      		flush_tlb_kernel_range(*start, *end);
      
      So if two areas are being freed, the range passed to
      flush_tlb_kernel_range() may be as large as the vmalloc
       space. For arm64, this is ~240GB for 4k page size and ~2TB
       for 64k page size.
      
      This patch works around this problem by adding a loop limit.
      If the range is larger than the limit, use flush_tlb_all()
      rather than flushing based on individual pages. The limit
      chosen is arbitrary as the TLB size is implementation
      specific and not accessible in an architected way. The aim
      of the arbitrary limit is to avoid soft lockup.
      Signed-off-by: Mark Salter <msalter@redhat.com>
      [catalin.marinas@arm.com: commit log update]
      [catalin.marinas@arm.com: marginal optimisation]
      [catalin.marinas@arm.com: changed to MAX_TLB_RANGE and added comment]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      05ac6530
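      A sketch of the guard described above, assuming 4K pages and using the
      MAX_TLB_RANGE name mentioned in the notes; the 1024-page threshold and the
      helper declarations are illustrative rather than the exact values from the
      patch.

        /*
         * Sketch of the limit: if the range is small enough, flush page by
         * page; otherwise invalidate the whole TLB rather than looping for
         * seconds over a huge vmalloc range.
         */
        #define PAGE_SHIFT      12                      /* 4K pages assumed */
        #define MAX_TLB_RANGE   (1024UL << PAGE_SHIFT)  /* arbitrary: TLB size is IMPLEMENTATION DEFINED */

        extern void __flush_tlb_kernel_range(unsigned long start, unsigned long end);
        extern void flush_tlb_all(void);                /* existing helpers */

        static inline void flush_tlb_kernel_range(unsigned long start,
                                                  unsigned long end)
        {
                if ((end - start) <= MAX_TLB_RANGE)
                        __flush_tlb_kernel_range(start, end);   /* per-page "tlbi vaae1is" loop */
                else
                        flush_tlb_all();                        /* single full invalidation */
        }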
    • arm64/crypto: fix makefile rule for aes-glue-%.o · 7c2105fb
      Committed by Andreas Schwab
      This fixes the following build failure when building with CONFIG_MODVERSIONS
      enabled:
      
        CC [M]  arch/arm64/crypto/aes-glue-ce.o
      ld: cannot find arch/arm64/crypto/aes-glue-ce.o: No such file or directory
      make[1]: *** [arch/arm64/crypto/aes-ce-blk.o] Error 1
      make: *** [arch/arm64/crypto] Error 2
      
      The $(obj)/aes-glue-%.o rule only creates $(obj)/.tmp_aes-glue-ce.o; it
      should use if_changed_rule instead of if_changed_dep.
      Signed-off-by: Andreas Schwab <schwab@suse.de>
      [ardb: mention CONFIG_MODVERSIONS in commit log]
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7c2105fb
  2. 24 July 2014 (2 commits)
    • arm64: Do not invoke audit_syscall_* functions if !CONFIG_AUDIT_SYSCALL · 2a8f45b0
      Committed by Catalin Marinas
      This is a temporary patch to be able to compile the kernel in linux-next
      where the audit_syscall_* API has been changed. To be reverted once the
      proper arm64 fix can be applied.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2a8f45b0
    • arm64: Fix barriers used for page table modifications · 7f0b1bf0
      Committed by Catalin Marinas
      The architecture specification states that both DSB and ISB are required
      between page table modifications and subsequent memory accesses using the
      corresponding virtual address. When TLB invalidation takes place, the
      tlb_flush_* functions already have the necessary barriers. However, there are
      other functions like create_mapping() for which this is not the case.
      
      The patch adds the DSB+ISB instructions in the set_pte() function for
      valid kernel mappings. The invalid pte case is handled by tlb_flush_*
      and the user mappings in general have a corresponding update_mmu_cache()
      call containing a DSB. Even when update_mmu_cache() isn't called, the
      kernel can still cope with an unlikely spurious page fault by
      re-executing the instruction.
      
      In addition, the set_pmd() and set_pud() functions gain an ISB for
      architecture compliance when block mappings are created.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Leif Lindholm <leif.lindholm@linaro.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org>
      7f0b1bf0
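      A simplified sketch of where the barriers end up, with pte_t and the PTE
      flag bits reduced to the essentials; pte_valid_not_user() and the exact
      barrier options shown are assumptions based on the description above, not
      the kernel source.

        typedef unsigned long pte_t;            /* the kernel wraps this in a struct */

        #define PTE_VALID       (1UL << 0)
        #define PTE_USER        (1UL << 6)      /* AP[1]: accessible at EL0 */

        static inline int pte_valid_not_user(pte_t pte)
        {
                return (pte & (PTE_VALID | PTE_USER)) == PTE_VALID;
        }

        static inline void set_pte(pte_t *ptep, pte_t pte)
        {
                *ptep = pte;

                /*
                 * Only for valid kernel mappings: user mappings get their
                 * barriers from update_mmu_cache()/TLB maintenance, and the
                 * invalid pte case is covered by the tlb_flush_* paths.
                 */
                if (pte_valid_not_user(pte)) {
                        asm volatile("dsb ishst" ::: "memory"); /* make the pte write visible to the walker */
                        asm volatile("isb" ::: "memory");       /* resynchronise the pipeline */
                }
        }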
  3. 23 July 2014 (13 commits)
  4. 21 July 2014 (1 commit)
  5. 19 July 2014 (1 commit)
    • arm64: cpuinfo: print info for all CPUs · d7a49086
      Committed by Mark Rutland
      Currently reading /proc/cpuinfo will result in information being read
      out of the MIDR_EL1 of the current CPU, and the information is not
      associated with any particular logical CPU number.
      
      This is problematic for systems with heterogeneous CPUs (e.g.
      big.LITTLE), where MIDR fields vary across CPUs and the output will
      differ depending on the executing CPU.
      
      This patch reorganises the code responsible for /proc/cpuinfo to print
      information per-cpu. In the process, we perform several cleanups:
      
      * Property names are coerced to lower-case (to match "processor" as per
        glibc's expectations).
      * Property names are simplified and made to match the MIDR field names.
      * Revision is changed to hex as with every other field.
      * The meaningless Architecture property is removed.
      * The ripe-for-abuse Machine field is removed.
      
      The features field (a human-readable representation of the hwcaps)
      remains printed once, as it is expected to remain in use as the list of
      globally supported CPU features. To allow per-cpu HW feature information
      to be added later, it is printed before any CPU-specific information.
      
      Comments are added to guide userspace developers in the right direction
      (towards the hwcaps provided in auxval). Where userspace applications
      parse /proc/cpuinfo rather than using the readily available hwcaps, they
      will hopefully limit themselves to reading that first line.
      
      If CPU features differ from each other, the previously installed sanity
      checks will give us some advance notice with warnings and
      TAINT_CPU_OUT_OF_SPEC. If we are lucky, we will never see such systems.
      Rework will be required in many places to support such systems anyway.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marcus Shawcroft <marcus.shawcroft@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: remove machine_name as it is no longer reported]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d7a49086
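      For background, the per-CPU properties come from MIDR_EL1, whose field
      layout is architecturally defined. A small stand-alone decoder follows;
      the helper names and the example MIDR value are illustrative, not the
      kernel's code.

        #include <stdint.h>
        #include <stdio.h>

        /* Architecturally defined MIDR_EL1 layout (ARMv8). */
        struct midr_fields {
                unsigned implementer;   /* bits [31:24] */
                unsigned variant;       /* bits [23:20] */
                unsigned architecture;  /* bits [19:16] */
                unsigned partnum;       /* bits [15:4]  */
                unsigned revision;      /* bits [3:0]   */
        };

        static struct midr_fields midr_decode(uint32_t midr)
        {
                struct midr_fields f = {
                        .implementer  = (midr >> 24) & 0xff,
                        .variant      = (midr >> 20) & 0xf,
                        .architecture = (midr >> 16) & 0xf,
                        .partnum      = (midr >> 4)  & 0xfff,
                        .revision     = midr & 0xf,
                };
                return f;
        }

        int main(void)
        {
                /* Example value: Cortex-A57 r1p1 (implementer 0x41 = ARM, part 0xd07). */
                struct midr_fields f = midr_decode(0x411fd071);

                printf("CPU implementer : 0x%02x\n", f.implementer);
                printf("CPU variant     : 0x%x\n", f.variant);
                printf("CPU part        : 0x%03x\n", f.partnum);
                printf("CPU revision    : %u\n", f.revision);
                return 0;
        }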
  6. 18 July 2014 (8 commits)
  7. 17 July 2014 (6 commits)
  8. 10 July 2014 (5 commits)
    • arm64: Enable TEXT_OFFSET fuzzing · da57a369
      Committed by Mark Rutland
      The arm64 Image header contains a text_offset field which bootloaders
      are supposed to read to determine the offset (from a 2MB aligned "start
      of memory" per booting.txt) at which to load the kernel. The offset is
      not well respected by bootloaders at present, and due to the lack of
      variation there is little incentive to support it. This is unfortunate
      for the sake of future kernels where we may wish to vary the text offset
      (even zeroing it).
      
      This patch adds options to arm64 to enable fuzz-testing of text_offset.
      CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
      16-byte aligned value in the range [0..2MB) upon a build of the
      kernel. It is recommended that distribution kernels enable randomization
      to test bootloaders such that any compliance issues can be fixed early.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Tom Rini <trini@ti.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      da57a369
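      As a stand-alone illustration of the constraint being exercised (a random,
      16-byte aligned offset below 2MB), the offset could be picked along the
      lines below. The real randomization happens at build time in the kernel's
      makefiles; this snippet is purely illustrative.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define SZ_2M           (2 * 1024 * 1024)
        #define TEXT_ALIGN      16      /* text_offset must stay 16-byte aligned */

        int main(void)
        {
                srand((unsigned)time(NULL));

                /* Pick a multiple of 16 in [0, 2MB); assumes RAND_MAX is large enough. */
                uint64_t text_offset =
                        ((uint64_t)rand() % (SZ_2M / TEXT_ALIGN)) * TEXT_ALIGN;

                printf("TEXT_OFFSET := %#llx\n", (unsigned long long)text_offset);
                return 0;
        }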
    • arm64: Update the Image header · a2c1d73b
      Committed by Mark Rutland
      Currently the kernel Image is stripped of everything past the initial
      stack, and at runtime the memory is initialised and used by the kernel.
      This makes the effective minimum memory footprint of the kernel larger
      than the size of the loaded binary, though bootloaders have no mechanism
      to identify how large this minimum memory footprint is. This makes it
      difficult to choose safe locations to place both the kernel and other
      binaries required at boot (DTB, initrd, etc), such that the kernel won't
      clobber said binaries or other reserved memory during initialisation.
      
      Additionally when big endian support was added the image load offset was
      overlooked, and is currently of an arbitrary endianness, which makes it
      difficult for bootloaders to make use of it. It seems that bootloaders
      aren't respecting the image load offset at present anyway, and are
      assuming that offset 0x80000 will always be correct.
      
      This patch adds an effective image size to the kernel header which
      describes the amount of memory from the start of the kernel Image binary
      which the kernel expects to use before detecting memory and handling any
      memory reservations. This can be used by bootloaders to choose suitable
      locations to load the kernel and/or other binaries such that the kernel
      will not clobber any memory unexpectedly. As before, memory reservations
      are required to prevent the kernel from clobbering these locations
      later.
      
      Both the image load offset and the effective image size are forced to be
      little-endian regardless of the native endianness of the kernel to
      enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
      which wish to make use of the load offset can inspect the effective
      image size field for a non-zero value to determine if the offset is of a
      known endianness. To enable software to determine the endianness of the
      kernel as may be required for certain use-cases, a new flags field (also
      little-endian) is added to the kernel header to export this information.
      
      The documentation is updated to clarify these details. To discourage
      future assumptions regarding the value of text_offset, the value at this
      point in time is removed from the main flow of the documentation (though
      kept as a compatibility note). Some minor formatting issues in the
      documentation are also corrected.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Tom Rini <trini@ti.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Kevin Hilman <kevin.hilman@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a2c1d73b
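      A sketch of how a bootloader might consume the extended header, using the
      field layout documented in booting.txt for this kernel generation; the
      struct name, the le64() stub and the choose_load_addr() helper are
      illustrative, not a real loader.

        #include <stdint.h>
        #include <stdio.h>

        /* arm64 Image header as documented in Documentation/arm64/booting.txt. */
        struct arm64_image_header {
                uint32_t code0;         /* executable code */
                uint32_t code1;         /* executable code */
                uint64_t text_offset;   /* image load offset, little endian */
                uint64_t image_size;    /* effective Image size, little endian */
                uint64_t flags;         /* kernel flags (bit 0: big endian), little endian */
                uint64_t res2, res3, res4;
                uint32_t magic;         /* 0x644d5241, "ARM\x64" */
                uint32_t res5;
        };

        /* Header fields are little endian regardless of kernel endianness; a
           big-endian loader would byte-swap here. Identity on little endian. */
        static uint64_t le64(uint64_t v) { return v; }

        static uint64_t choose_load_addr(const struct arm64_image_header *h,
                                         uint64_t dram_base)
        {
                uint64_t text_offset = 0x80000;         /* historical assumption */

                /*
                 * A non-zero image_size signals that text_offset (and flags)
                 * are present with a known endianness and can be trusted.
                 */
                if (le64(h->image_size) != 0)
                        text_offset = le64(h->text_offset);

                return dram_base + text_offset;
        }

        int main(void)
        {
                struct arm64_image_header h = {
                        .text_offset = 0x80000,
                        .image_size  = 16 << 20,        /* example effective size */
                        .magic       = 0x644d5241,
                };

                printf("load at %#llx, keep %llu bytes free from there\n",
                       (unsigned long long)choose_load_addr(&h, 0x80000000ULL),
                       (unsigned long long)le64(h.image_size));
                return 0;
        }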
    • arm64: place initial page tables above the kernel · bd00cd5f
      Committed by Mark Rutland
      Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
      image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
      bootloaders may use portions of this memory below the kernel and we do
      not parse the memory reservation list until after the MMU has been
      enabled. As such we may clobber some memory a bootloader wishes to have
      preserved.
      
      To enable the use of all of this memory by bootloaders (when the
      required memory reservations are communicated to the kernel) it is
      necessary to move our initial page tables elsewhere. As we currently
      have an effectively unbound requirement for memory at the end of the
      kernel image for .bss, we can place the page tables here.
      
      This patch moves the initial page tables to the end of the kernel image,
      after the BSS. As they do not consist of any initialised data they will
      be stripped from the kernel Image as with the BSS. The BSS clearing
      routine is updated to stop at __bss_stop rather than _end so as to not
      clobber the page tables, and memory reservations made redundant by the
      new organisation are removed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bd00cd5f
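      To make the new layout concrete, an illustrative C rendering of the idea;
      the real code lives in head.S and the linker script, and __bss_start and
      __bss_stop are linker-provided symbols.

        #include <string.h>

        /*
         * Illustrative only; the real clearing happens in head.S before the
         * MMU is on. The point is the new bound: stop at __bss_stop so that
         * idmap_pg_dir/swapper_pg_dir, now placed between __bss_stop and _end,
         * are not zeroed, while they are still stripped from the Image just
         * like the BSS.
         */
        extern char __bss_start[], __bss_stop[];        /* provided by the linker script */

        static void clear_bss(void)
        {
                memset(__bss_start, 0, __bss_stop - __bss_start);
        }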
    • arm64: head.S: remove unnecessary function alignment · 909a4069
      Committed by Mark Rutland
      Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
      span any page boundary, which simplifies the idmap and spares us
      requiring an additional page table to map half of the function. In
      keeping with other important requirements in architecture code, this
      fact is undocumented.
      
      Additionally, as the function consists of three instructions totalling
      12 bytes with no literal pool data, a smaller alignment of 16 bytes
      would be sufficient.
      
      This patch reduces the alignment to 16 bytes and documents the
      underlying reason for the alignment. This reduces the required alignment
      of the entire .head.text section from 64 bytes to 16 bytes, though it
      may still be aligned to a larger value depending on TEXT_OFFSET.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      909a4069
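      The alignment argument can be checked with simple arithmetic: a 12-byte
      function placed on any 16-byte boundary cannot straddle a 4KB page,
      because page boundaries are themselves 16-byte aligned. A quick
      self-contained check of that claim (sizes hard-coded for illustration):

        #include <assert.h>
        #include <stdio.h>

        #define PAGE_SIZE       4096u
        #define FUNC_SIZE       12u     /* __turn_mmu_on: three instructions */
        #define FUNC_ALIGN      16u

        int main(void)
        {
                /* Try every possible 16-byte aligned placement within a page. */
                for (unsigned start = 0; start < PAGE_SIZE; start += FUNC_ALIGN) {
                        unsigned end = start + FUNC_SIZE - 1;

                        /* First and last byte must land in the same page. */
                        assert(start / PAGE_SIZE == end / PAGE_SIZE);
                }

                printf("a %u-byte function aligned to %u bytes never crosses a page\n",
                       FUNC_SIZE, FUNC_ALIGN);
                return 0;
        }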
    • arm64: Cast KSTK_(EIP|ESP) to unsigned long · ebe6152e
      Committed by Catalin Marinas
      This is for similarity with thread_saved_(pc|sp) and to avoid some
      compiler warnings in the audit code.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ebe6152e