1. 24 January 2015: 5 commits
  2. 23 January 2015: 4 commits
    • arm64: Fix SCTLR_EL1 initialisation · 9f71ac96
      Authored by Suzuki K. Poulose
      We initialise the SCTLR_EL1 value by read-modify-writeback
      of the desired bits, leaving the other bits (including reserved
      bits(RESx)) untouched. However, sometimes the boot monitor could
      leave garbage values in the RESx bits which could have different
      implications. This patch makes sure that all the bits, including
      the RESx bits, are set to the proper state, except for the
      'endianness' control bits, EE(25) & E0E(24), which are set early
      in el2_setup.
      
      Also updated the state of Bit[6] to RES0 in the comment.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
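      For illustration, a minimal sketch of the two approaches in plain C
      (the SCTLR_EL1_INIT value is illustrative; the real change is in
      arm64 assembly):
      
          #include <stdint.h>
          
          #define SCTLR_EL1_EE   (1u << 25)  /* EL1 data endianness */
          #define SCTLR_EL1_E0E  (1u << 24)  /* EL0 data endianness */
          #define SCTLR_EL1_INIT 0x30d00800u /* illustrative: all bits, RESx included, known */
          
          /* Before: read-modify-write keeps whatever the boot monitor left in RESx. */
          static uint32_t sctlr_rmw(uint32_t cur, uint32_t set)
          {
              return cur | set;
          }
          
          /* After: force every bit to a known state, preserving only the
           * endianness bits already configured in el2_setup. */
          static uint32_t sctlr_full_init(uint32_t cur)
          {
              return SCTLR_EL1_INIT | (cur & (SCTLR_EL1_EE | SCTLR_EL1_E0E));
          }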
    • arm64: add ioremap physical address information · da1f2b82
      Authored by Min-Hua Chen
      It is useful for /proc/vmallocinfo to show the physical address of
      each ioremap mapping. Add this information to the arm64 ioremap
      code. Example output:
      
      0xffffc900047f2000-0xffffc900047f4000    8192 _nv013519rm+0x57/0xa0
      [nvidia] phys=f8100000 ioremap
      0xffffc900047f4000-0xffffc900047f6000    8192 _nv013519rm+0x57/0xa0
      [nvidia] phys=f8008000 ioremap
      0xffffc90004800000-0xffffc90004821000  135168 e1000_probe+0x22c/0xb95
      [e1000e] phys=f4300000 ioremap
      0xffffc900049c0000-0xffffc900049e1000  135168 _nv013521rm+0x4d/0xd0
      [nvidia] phys=e0140000 ioremap
      Signed-off-by: Min-Hua Chen <orca.chen@gmail.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
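      A sketch of the shape of the change, assuming the usual
      __ioremap_caller() structure in arch/arm64/mm/ioremap.c; the key
      addition is recording phys_addr in the vm_struct, which
      /proc/vmallocinfo then prints as "phys=":
      
          #include <linux/io.h>
          #include <linux/mm.h>
          #include <linux/vmalloc.h>
          
          static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
                                                pgprot_t prot, void *caller)
          {
              unsigned long addr, offset = phys_addr & ~PAGE_MASK;
              struct vm_struct *area;
          
              phys_addr &= PAGE_MASK;
              size = PAGE_ALIGN(size + offset);
          
              area = get_vm_area_caller(size, VM_IOREMAP, caller);
              if (!area)
                  return NULL;
              addr = (unsigned long)area->addr;
              area->phys_addr = phys_addr;    /* the addition: shown as phys= above */
          
              if (ioremap_page_range(addr, addr + size, phys_addr, prot)) {
                  vunmap((void *)addr);
                  return NULL;
              }
              return (void __iomem *)(offset + addr);
          }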
    • arm64: mm: dump: add missing includes · 764011ca
      Authored by Mark Rutland
      The arm64 dump code is currently relying on some definitions which are
      pulled in via transitive dependencies. It seems we have implicit
      dependencies on the following definitions:
      
      * MODULES_VADDR         (asm/memory.h)
      * MODULES_END           (asm/memory.h)
      * PAGE_OFFSET           (asm/memory.h)
      * PTE_*                 (asm/pgtable-hwdef.h)
      * ENOMEM                (linux/errno.h)
      * device_initcall       (linux/init.h)
      
      This patch ensures we explicitly include the relevant headers for the
      above items, fixing the observed build issue and hopefully preventing
      future issues as headers are refactored.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Mark Brown <broonie@kernel.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
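      Per the list above, the fix amounts to adding explicit includes at
      the top of arch/arm64/mm/dump.c:
      
          #include <linux/errno.h>        /* ENOMEM */
          #include <linux/init.h>         /* device_initcall */
          #include <asm/memory.h>         /* MODULES_VADDR, MODULES_END, PAGE_OFFSET */
          #include <asm/pgtable-hwdef.h>  /* PTE_* */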
    • arm64: Fix overlapping VA allocations · aa03c428
      Authored by Mark Rutland
      PCI IO space was intended to be 16MiB, at 32MiB below MODULES_VADDR, but
      commit d1e6dc91 ("arm64: Add architectural support for PCI")
      extended this to cover the full 32MiB. The final 8KiB of this 32MiB is
      also allocated for the fixmap, allowing for potential clashes between
      the two.
      
      This change was masked by assumptions in mem_init and the page table
      dumping code, which assumed the I/O space to be 16MiB long through
      separate hard-coded definitions.
      
      This patch changes the definition of the PCI I/O space allocation to
      live in asm/memory.h, along with the other VA space allocations. As the
      fixmap allocation depends on the number of fixmap entries, this is moved
      below the PCI I/O space allocation. Both the fixmap and PCI I/O space
      are guarded with 2MB of padding. Sites that assumed the I/O space
      was 16MiB are moved over to the new PCI_IO_{START,END} definitions,
      which will stay in sync with the size of the I/O space (now restored
      to 16MiB).
      
      As a useful side effect, the use of the new PCI_IO_{START,END}
      definitions prevents a build issue in the dumping code due to a (now
      redundant) missing include of io.h for PCI_IOBASE.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Liviu Dudau <liviu.dudau@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: reorder FIXADDR and PCI_IO address_markers_idx enum]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
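      The resulting VA layout in asm/memory.h looks roughly like this (a
      sketch consistent with the description above, not a verbatim quote
      of the patch):
      
          #define PCI_IO_SIZE    SZ_16M
          
          /* modules, 2MB guard, 16MiB PCI I/O space, 2MB guard, fixmap */
          #define MODULES_END    (PAGE_OFFSET)
          #define MODULES_VADDR  (MODULES_END - SZ_64M)
          #define PCI_IO_END     (MODULES_VADDR - SZ_2M)
          #define PCI_IO_START   (PCI_IO_END - PCI_IO_SIZE)
          #define FIXADDR_TOP    (PCI_IO_START - SZ_2M)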
  3. 22 January 2015: 3 commits
  4. 17 January 2015: 2 commits
  5. 15 January 2015: 9 commits
    • arm64: kill off the libgcc dependency · d67703a8
      Authored by Kevin Hao
      The arm64 kernel builds fine without libgcc, and it should not be
      used in the kernel at all. The reasons, as given by Russell King:
      
        Although libgcc is part of the compiler, libgcc is built with the
        expectation that it will be running in userland - it expects to link
        to a libc.  That's why you can't build libgcc without having the glibc
        headers around.
      
        [...]
      
        Meanwhile, having the kernel build the compiler support functions that
        it needs ensures that (a) we know what compiler support functions are
        being used, (b) we know the implementation of those support functions
        are sane for use in the kernel, (c) we can build them with appropriate
        compiler flags for best performance, and (d) we remove an unnecessary
        dependency on the build toolchain.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kvm: decode ESR_ELx.EC when reporting exceptions · 056bb5f5
      Authored by Mark Rutland
      To aid the developer when something triggers an unexpected exception,
      decode the ESR_ELx.EC field when logging an ESR_ELx value using the
      newly introduced esr_get_class_string. This doesn't tell the developer
      the specifics of the exception encoded in the remaining IL and ISS bits,
      but it can be helpful to distinguish between exception classes (e.g.
      SError and a data abort) without having to manually decode the field,
      which can be tiresome.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
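      A plausible shape for the helper named above; the string table here
      is abbreviated and illustrative (the real one covers all 64 EC
      encodings):
      
          #include <linux/types.h>
          
          #define ESR_ELx_EC_SHIFT  26
          #define ESR_ELx_EC_MAX    0x3f
          #define ESR_ELx_EC(esr)   (((esr) >> ESR_ELx_EC_SHIFT) & ESR_ELx_EC_MAX)
          
          static const char *const esr_class_str[ESR_ELx_EC_MAX + 1] = {
              [0 ... ESR_ELx_EC_MAX] = "UNRECOGNIZED EC",
              [0x00] = "Unknown/Uncategorized",
              [0x24] = "DABT (lower EL)",
              [0x25] = "DABT (current EL)",
              [0x2f] = "SError interrupt",
          };
          
          const char *esr_get_class_string(u32 esr)
          {
              return esr_class_str[ESR_ELx_EC(esr)];
          }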
    • arm64: kvm: remove ESR_EL2_* macros · 6e53031e
      Authored by Mark Rutland
      Now that all users have been moved over to the common ESR_ELx_* macros,
      remove the redundant ESR_EL2 macros. To maintain compatibility with the
      fault handling code shared with 32-bit, the FSC_{FAULT,PERM} macros are
      retained as aliases for the common ESR_ELx_FSC_{FAULT,PERM} definitions.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
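      The retained compatibility aliases are presumably just:
      
          /* kept for the fault handling code shared with 32-bit ARM */
          #define FSC_FAULT  ESR_ELx_FSC_FAULT
          #define FSC_PERM   ESR_ELx_FSC_PERM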
    • arm64: remove ESR_EL1_* macros · 4a939087
      Authored by Mark Rutland
      Now that all users have been moved over to the common ESR_ELx_* macros,
      remove the redundant ESR_EL1 macros.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
    • arm64: kvm: move to ESR_ELx macros · c6d01a94
      Authored by Mark Rutland
      Now that we have common ESR_ELx macros, make use of them in the arm64
      KVM code. The addition of <asm/esr.h> to the include path highlighted
      badly ordered (i.e. not alphabetical) include lists; these are changed
      to alphabetical order.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
    • arm64: decode ESR_ELx.EC when reporting exceptions · 60a1f02c
      Authored by Mark Rutland
      To aid the developer when something triggers an unexpected exception,
      decode the ESR_ELx.EC field when logging an ESR_ELx value. This doesn't
      tell the developer the specifics of the exception encoded in the
      remaining IL and ISS bits, but it can be helpful to distinguish between
      exception classes (e.g. SError and a data abort) without having to
      manually decode the field, which can be tiresome.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
    • arm64: move to ESR_ELx macros · aed40e01
      Authored by Mark Rutland
      Now that we have common ESR_ELx_* macros, move the core arm64 code over
      to them.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
    • arm64: introduce common ESR_ELx_* definitions · cf99a48d
      Authored by Mark Rutland
      Currently we have separate ESR_EL{1,2}_* macros, despite the fact that
      the encodings are common. While encodings are architected to refer to
      the current EL or a lower EL, the macros refer to particular ELs (e.g.
      ESR_ELx_EC_DABT_EL0). Having these duplicate definitions is redundant,
      and their naming is misleading.
      
      This patch introduces common ESR_ELx_* macros that can be used in all
      cases, in preparation for later patches which will migrate existing
      users over. Some additional cleanups are made in the process:
      
      * Suffixes for particular exception levels (e.g. _EL0, _EL1) are
        replaced with more general _LOW and _CUR suffixes, matching the
        architectural intent.
      
      * ESR_ELx_EC_WFx, rather than ESR_ELx_EC_WFI, is introduced, as this
        EC encoding covers traps from both WFE and WFI. Similarly,
        ESR_ELx_WFx_ISS_WFE rather than ESR_ELx_EC_WFI_ISS_WFE is introduced.
      
      * Multi-bit fields are given consistently named _SHIFT and _MASK macros.
      
      * UL() is used for compatibility with assembly files.
      
      * Comments are added for currently unallocated ESR_ELx.EC encodings.
      
      For fields other than ESR_ELx.EC, macros are only implemented for fields
      for which there is already an ESR_EL{1,2}_* macro.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
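      A few representative definitions in the style described above (EC
      values per the ARMv8 ARM; UL() is the kernel's constant-suffix
      macro, usable from assembly):
      
          #define ESR_ELx_EC_SHIFT     (26)
          #define ESR_ELx_EC_MASK      (UL(0x3F) << ESR_ELx_EC_SHIFT)
          
          /* WFx covers traps from both WFI and WFE; ISS bit 0 tells them apart */
          #define ESR_ELx_EC_WFx       (0x01)
          #define ESR_ELx_WFx_ISS_WFE  (UL(1) << 0)
          
          /* _LOW = taken from a lower EL, _CUR = taken from the current EL */
          #define ESR_ELx_EC_DABT_LOW  (0x24)
          #define ESR_ELx_EC_DABT_CUR  (0x25)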
    • arm64: kernel: add support for cpu cache information · 5d425c18
      Authored by Sudeep Holla
      This patch adds support for cacheinfo on ARM64.
      
      On ARMv8, the cache hierarchy can be identified through the Cache
      Level ID (CLIDR) register, while the cache geometry is provided by
      the Cache Size ID (CCSIDR) register.
      
      Since the architecture doesn't provide any way of detecting which
      CPUs share a particular cache, the device tree is used for that
      purpose.
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
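      For example, the per-level cache type can be extracted from a
      CLIDR_EL1 value like this (a self-contained sketch, not the kernel's
      cacheinfo code):
      
          #include <stdio.h>
          
          /* CLIDR_EL1 holds a 3-bit Ctype field per level (1-7):
           * 0 = none, 1 = I-only, 2 = D-only, 3 = separate I+D, 4 = unified. */
          static unsigned int clidr_ctype(unsigned long clidr, unsigned int level)
          {
              return (clidr >> (3 * (level - 1))) & 0x7;
          }
          
          int main(void)
          {
              unsigned long clidr = 0x23; /* hypothetical: separate L1 I+D, unified L2 */
              printf("L1 Ctype=%u, L2 Ctype=%u\n",
                     clidr_ctype(clidr, 1), clidr_ctype(clidr, 2));
              return 0;
          }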
  6. 14 January 2015: 1 commit
    • arm64: remove broken cachepolicy code · 26a945ca
      Authored by Mark Rutland
      The cachepolicy kernel parameter was intended to aid in the debugging of
      coherency issues, but it is fundamentally broken for several reasons:
      
       * On SMP platforms, only the boot CPU's tcr_el1 is altered. Secondary
         CPUs may therefore differ w.r.t. the attributes they apply to
         MT_NORMAL memory, resulting in a loss of coherency.
      
       * The cache maintenance using flush_dcache_all (based on Set/Way
         operations) is not guaranteed to empty a given CPU's cache hierarchy
         while said CPU has caches enabled; it cannot empty the caches of
         other coherent PEs, nor is it guaranteed to flush data to the PoC
         even when caches are disabled.
      
       * The TLBs are not invalidated around the modification of MAIR_EL1 and
         TCR_EL1, as required by the architecture (as both are permitted to be
         cached in a TLB). This may result in CPUs using attributes other than
         those expected for some memory accesses, resulting in a loss of
         coherency.
      
       * Exclusive accesses are not architecturally guaranteed to function as
         expected on memory marked as Write-Through or Non-Cacheable. Thus
         changing the attributes of MT_NORMAL away from the (architecturally
         safe) defaults may cause uses of these instructions (e.g. atomics) to
         behave erratically.
      
      Given this, the cachepolicy code cannot be used for debugging purposes
      as it alone is likely to cause coherency issues. This patch removes the
      broken cachepolicy code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  7. 13 January 2015: 3 commits
  8. 12 January 2015: 3 commits
  9. 10 January 2015: 1 commit
  10. 09 January 2015: 5 commits
  11. 08 January 2015: 4 commits
    • arm64/efi: add missing call to early_ioremap_reset() · 0e63ea48
      Authored by Ard Biesheuvel
      The early ioremap support introduced by patch bf4b558e
      ("arm64: add early_ioremap support") failed to add a call to
      early_ioremap_reset() at an appropriate time. Without this call,
      invocations of early_ioremap etc. that are done too late will go
      unnoticed and may cause corruption.
      
      This is exactly what happened when the first user of this feature
      was added in patch f84d0275 ("arm64: add EFI runtime services").
      The early mapping of the EFI memory map is unmapped during an early
      initcall, at which time the early ioremap support is long gone.
      
      Fix by adding the missing call to early_ioremap_reset() to
      setup_arch(), and move the offending early_memunmap() to right after
      the point where the early mapping of the EFI memory map is last used.
      
      Fixes: f84d0275 ("arm64: add EFI runtime services")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
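      The essence of the fix is an ordering constraint, sketched here;
      the surrounding calls and the memmap_map/memmap_size names are
      assumptions based on the message, not the exact diff:
      
          /* arch/arm64/kernel/setup.c (sketch) */
          void __init setup_arch(char **cmdline_p)
          {
              early_ioremap_init();
              /* ... early_memremap()/early_ioremap() users, e.g. efi_init(),
               * which maps the EFI memory map ... */
          
              paging_init();
              /* last use of the early EFI memory map mapping, then: */
              early_memunmap(memmap_map, memmap_size);  /* hypothetical names */
          
              /* from here on, any late early_ioremap() call is caught loudly
               * instead of silently corrupting the fixmap's new owners */
              early_ioremap_reset();
          }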
    • ARM: 8253/1: mm: use phys_addr_t type in map_lowmem() for kernel mem region · ac084688
      Authored by Grygorii Strashko
      The local variables kernel_x_start and kernel_x_end are defined using
      the 'unsigned long' type, which is wrong because they represent a
      physical memory range that will be calculated incorrectly if LPAE is
      enabled. As a result, all following code in map_lowmem() will not
      work correctly.
      
      For example, Keystone 2 boot is broken because
       kernel_x_start == 0x0000 0000
       kernel_x_end   == 0x0080 0000
      
      instead of
       kernel_x_start == 0x0000 0008 0000 0000
       kernel_x_end   == 0x0000 0008 0080 0000
      and as a result the whole of low memory is mapped with MT_MEMORY_RW
      permissions by the following code (since start >= kernel_x_end):
      		} else if (start >= kernel_x_end) {
      			map.pfn = __phys_to_pfn(start);
      			map.virtual = __phys_to_virt(start);
      			map.length = end - start;
      			map.type = MT_MEMORY_RW;
      
      			create_mapping(&map);
      		}
      
      Hence, fix it by using the phys_addr_t type for the variables
      kernel_x_start and kernel_x_end.
      Tested-by: Murali Karicheri <m-karicheri2@ti.com>
      Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
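      The fix itself is just the declaration types; variable names are as
      in map_lowmem(), with the initialisers shown for context:
      
          /* before: unsigned long truncates LPAE physical addresses to 32 bits */
          unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
          unsigned long kernel_x_end   = round_up(__pa(__init_end), SECTION_SIZE);
          
          /* after: phys_addr_t is 64-bit with LPAE, so 0x8_0000_0000 survives */
          phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
          phys_addr_t kernel_x_end   = round_up(__pa(__init_end), SECTION_SIZE);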
    • ARM: 8249/1: mm: dump: don't skip regions · cca547e9
      Authored by Mark Rutland
      Currently the arm page table dumping code starts dumping page tables
      from USER_PGTABLES_CEILING. This is unnecessary for skipping any entries
      related to userspace as the swapper_pg_dir does not contain such
      entries, and results in a couple of unfortunate side effects.
      
      Firstly, any kernel mappings which might exist below
      USER_PGTABLES_CEILING will not be accounted in the dump output. This
      masks any entries erroneously created below this address.
      
      Secondly, if the final page table entry walked is part of a valid
      mapping the page table dumping code will not log the region this entry
      is part of, as the final note_page call in walk_pgd will trigger an
      early return when 0 < USER_PGTABLES_CEILING. Luckily this isn't seen on
      contemporary systems as they typically don't have enough RAM to extend
      the linear mapping right to the end of the address space.
      
      Due to the way addr is constructed in the walk_* functions, it can never
      be less than USER_PGTABLES_CEILING when walking the page tables, so it
      is not necessary to avoid dereferencing invalid table addresses. The
      existing checks for st->current_prot and st->marker[1].start_address are
      sufficient to ensure we will not print and/or dereference garbage when
      trying to log information.
      
      This patch removes both problematic uses of USER_PGTABLES_CEILING from
      the arm page table dumping code, preventing both of these issues. We
      will now report any low mappings, and the final note_page call will not
      return early, ensuring all regions are logged.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
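      Roughly, the walker now starts at address zero; this is a simplified
      reconstruction of the post-patch shape of arch/arm/mm/dump.c, not a
      verbatim quote:
      
          static void walk_pgd(struct seq_file *m)
          {
              pgd_t *pgd = swapper_pg_dir;
              struct pg_state st = { .seq = m, .marker = address_markers };
              unsigned long addr;
              unsigned i;
          
              for (i = 0; i < PTRS_PER_PGD; i++, pgd++) {
                  /* previously: USER_PGTABLES_CEILING + i * PGDIR_SIZE */
                  addr = i * PGDIR_SIZE;
                  if (!pgd_none(*pgd))
                      walk_pud(&st, pgd, addr);
                  else
                      note_page(&st, addr, 1, pgd_val(*pgd));
              }
              /* final flush; no longer returns early for low addresses */
              note_page(&st, 0, 0, 0);
          }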
    • ARM: wire up execveat syscall · 841ee230
      Authored by Russell King
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
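      With the syscall wired up, it can be exercised from userspace
      roughly like this; the ARM EABI syscall number is an assumption, and
      raw syscall() is used because glibc had no wrapper at the time:
      
          #define _GNU_SOURCE
          #include <fcntl.h>
          #include <stdio.h>
          #include <unistd.h>
          #include <sys/syscall.h>
          
          #ifndef __NR_execveat
          #define __NR_execveat 387   /* assumed ARM EABI number */
          #endif
          
          int main(void)
          {
              char *argv[] = { "true", NULL };
              char *envp[] = { NULL };
              int fd = open("/bin/true", O_PATH);
          
              if (fd >= 0)
                  /* like fexecve(): execute the program referred to by fd */
                  syscall(__NR_execveat, fd, "", argv, envp, AT_EMPTY_PATH);
              perror("execveat");  /* reached only if the exec failed */
              return 1;
          }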