1. 03 September 2016, 1 commit
  2. 25 August 2016, 1 commit
  3. 24 August 2016, 1 commit
  4. 18 August 2016, 3 commits
    • arm64: Fix shift warning in arch/arm64/mm/dump.c · a93a4d62
      Committed by Catalin Marinas
      When building with 48-bit VAs and 16K page configuration, it's possible
      to get the following warning when building the arm64 page table dumping
      code:
      
      arch/arm64/mm/dump.c: In function ‘walk_pud’:
      arch/arm64/mm/dump.c:274:102: warning: right shift count >= width of type [-Wshift-count-overflow]
      
      This is because pud_offset(pgd, 0) shifts the address right by 36,
      while the literal 0 has type 'int' by default and is therefore only
      32 bits wide.
      
      This patch modifies all the p*_offset() uses in arch/arm64/mm/dump.c to
      use 0UL for the address argument.
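
      The underlying C rule can be demonstrated outside the kernel; the
      snippet below is a stand-alone illustration (not the dump.c hunk
      itself), with PUD_SHIFT hard-coded to the 36 mentioned above:

          #include <stdio.h>

          #define PUD_SHIFT 36

          int main(void)
          {
                  /* 0 is an int (32-bit), so "0 >> PUD_SHIFT" shifts by more
                   * than the type's width and triggers
                   * -Wshift-count-overflow: */
                  /* unsigned long bad = 0 >> PUD_SHIFT; */

                  /* 0UL is unsigned long (64-bit on arm64), so the shift is
                   * well defined: */
                  unsigned long ok = 0UL >> PUD_SHIFT;

                  printf("%lu\n", ok);
                  return 0;
          }
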
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kernel: avoid literal load of virtual address with MMU off · bc9f3d77
      Committed by Ard Biesheuvel
      Literal loads of virtual addresses are subject to runtime relocation when
      CONFIG_RELOCATABLE=y, and given that the relocation routines run with the
      MMU and caches enabled, literal loads of relocated values performed with
      the MMU off are not guaranteed to return the latest value unless the
      memory covering the literal is cleaned to the PoC explicitly.
      
      So defer the literal load until after the MMU has been enabled, just like
      we do for primary_switch() and secondary_switch() in head.S.
      
      Fixes: 1e48ef7f ("arm64: add support for building vmlinux as a relocatable PIE binary")
      Cc: <stable@vger.kernel.org> # 4.6+
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Fix NUMA build error when !CONFIG_ACPI · bfe6c8a8
      Committed by Catalin Marinas
      Since asm/acpi.h is only included by linux/acpi.h when CONFIG_ACPI is
      enabled, disabling the latter leads to the following build error on
      arm64:
      
      arch/arm64/mm/numa.c: In function ‘arm64_numa_init’:
      arch/arm64/mm/numa.c:395:24: error: ‘arm64_acpi_numa_init’ undeclared (first use in this function)
         if (!acpi_disabled && !numa_init(arm64_acpi_numa_init))
      
      This patch includes asm/acpi.h explicitly in arch/arm64/mm/numa.c for
      the arm64_acpi_numa_init() definition.
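
      A minimal sketch of the added include (the exact hunk may differ; only
      the asm/acpi.h line is what this patch adds):

          /* arch/arm64/mm/numa.c */
          #include <asm/acpi.h>   /* provides arm64_acpi_numa_init(); no longer
                                     relies on linux/acpi.h pulling it in */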
      
      Fixes: d8b47fca ("arm64, ACPI, NUMA: NUMA support based on SRAT and SLIT")
      Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. 17 August 2016, 3 commits
  6. 13 August 2016, 5 commits
    • arm64: defconfig: enable CONFIG_LOCALVERSION_AUTO · 53fb45d3
      Committed by Masahiro Yamada
      When CONFIG_LOCALVERSION_AUTO is disabled, the version string is
      just a tag name (or with a '+' appended if HEAD is not a tagged
      commit).
      
      During development (and especially when git-bisecting), a longer
      version string is helpful for identifying the commit we are running.

      This option defaults to y, so drop the explicit unset to enable it.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: defconfig: add options for virtualization and containers · 2323439f
      Committed by Riku Voipio
      Enable options commonly needed by popular virtualization
      and container applications. Use modules where possible to
      avoid extra overhead for users who are not interested.

      - add the namespace and cgroup options needed
      - add seccomp - optional, but it enhances QEMU etc.
      - bridge, nat, veth, macvtap and multicast for routing
        guests and containers
      - btrfs and overlayfs modules for container COW backends
      - while at it, make fuse a module instead of built-in

      Generated with make saveconfig, dropping unrelated spurious
      change hunks while committing. bloat-o-meter old-vmlinux vmlinux:
      
      add/remove: 905/390 grow/shrink: 767/229 up/down: 183513/-94861 (88652)
      ....
      Total: Before=10515408, After=10604060, chg +0.84%
      Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: hibernate: handle allocation failures · dfbca61a
      Committed by Mark Rutland
      In create_safe_exec_page(), we create a copy of the hibernate exit text,
      along with some page tables to map this via TTBR0. We then install the
      new tables in TTBR0.
      
      In swsusp_arch_resume() we call create_safe_exec_page() before trying a
      number of operations which may fail (e.g. copying the linear map page
      tables). If these fail, we bail out of swsusp_arch_resume() and return
      an error code, but leave TTBR0 as-is. Subsequently, the core hibernate
      code will call free_basic_memory_bitmaps(), which will free all of the
      memory allocations we made, including the page tables installed in
      TTBR0.
      
      Thus, we may have TTBR0 pointing at dangling freed memory for some
      period of time. If the hibernate attempt was triggered by a user
      requesting a hibernate test via the reboot syscall, we may return to
      userspace with the clobbered TTBR0 value.
      
      Avoid these issues by reorganising swsusp_arch_resume() such that we
      have no failure paths after create_safe_exec_page(). We also add a check
      that the zero page allocation succeeded, matching what we have for other
      allocations.
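
      The reordering principle is generic; below is a toy, self-contained C
      sketch of it (not the kernel's swsusp_arch_resume(), and the helper
      names are made up): every step that can fail happens before the step
      whose side effect cannot safely be unwound, and the zero-page
      allocation is checked like any other.

          #include <stdlib.h>

          static void *alloc_zero_page(void) { return calloc(1, 4096); }
          static int copy_page_tables(void) { return 0; /* may fail for real */ }
          static void install_ttbr0(void *pg) { (void)pg; /* no going back */ }

          static int resume_sketch(void)
          {
                  void *zero_page = alloc_zero_page();
                  if (!zero_page)                 /* check the allocation ... */
                          return -1;

                  if (copy_page_tables()) {       /* ... and do fallible work first */
                          free(zero_page);
                          return -1;
                  }

                  install_ttbr0(zero_page);       /* no failure paths after this */
                  return 0;
          }

          int main(void) { return resume_sketch() ? EXIT_FAILURE : EXIT_SUCCESS; }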
      
      Fixes: 82869ac5 ("arm64: kernel: Add support for hibernate/suspend-to-disk")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: James Morse <james.morse@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org> # 4.7+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: hibernate: avoid potential TLB conflict · 0194e760
      Committed by Mark Rutland
      In create_safe_exec_page we install a set of global mappings in TTBR0,
      then subsequently invalidate TLBs. While TTBR0 points at the zero page,
      and the TLBs should be free of stale global entries, we may have stale
      ASID-tagged entries (e.g. from the EFI runtime services mappings) for
      the same VAs. Per the ARM ARM these ASID-tagged entries may conflict
      with newly-allocated global entries, and we must follow a
      Break-Before-Make approach to avoid issues resulting from this.
      
      This patch reworks create_safe_exec_page to invalidate TLBs while the
      zero page is still in place, ensuring that there are no potential
      conflicts when the new TTBR0 value is installed. As a single CPU is
      online while this code executes, we do not need to perform broadcast TLB
      maintenance, and can call local_flush_tlb_all(), which also subsumes
      some barriers. The remaining assembly is converted to use write_sysreg()
      and isb().
      
      Other than this, we safely manipulate TTBRs in the hibernate dance. The
      code we install as part of the new TTBR0 mapping (the hibernated
      kernel's swsusp_arch_suspend_exit) installs a zero page into TTBR1,
      invalidates TLBs, then installs its preferred value. Upon being restored
      to the middle of swsusp_arch_suspend, the new image will call
      __cpu_suspend_exit, which will call cpu_uninstall_idmap, installing the
      zero page in TTBR0 and invalidating all TLB entries.
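
      A rough sketch of the Break-Before-Make sequence described above
      (kernel context, not the verbatim create_safe_exec_page() code;
      new_ttbr0 is a placeholder for the newly built table's address, and
      cpu_set_reserved_ttbr0() is the existing arm64 helper that points
      TTBR0 at the reserved zero page):

          cpu_set_reserved_ttbr0();       /* TTBR0 -> zero page, no global entries */
          local_flush_tlb_all();          /* flush while the zero page is in place;
                                             a single CPU is online, so local is enough */
          write_sysreg(new_ttbr0, ttbr0_el1);
          isb();                          /* new tables are in use from here on */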
      
      Fixes: 82869ac5 ("arm64: kernel: Add support for hibernate/suspend-to-disk")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: James Morse <james.morse@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org> # 4.7+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Handle el1 synchronous instruction aborts cleanly · 9adeb8e7
      Committed by Laura Abbott
      Executing from a non-executable area gives an ugly message:
      
      lkdtm: Performing direct entry EXEC_RODATA
      lkdtm: attempting ok execution at ffff0000084c0e08
      lkdtm: attempting bad execution at ffff000008880700
      Bad mode in Synchronous Abort handler detected on CPU2, code 0x8400000e -- IABT (current EL)
      CPU: 2 PID: 998 Comm: sh Not tainted 4.7.0-rc2+ #13
      Hardware name: linux,dummy-virt (DT)
      task: ffff800077e35780 ti: ffff800077970000 task.ti: ffff800077970000
      PC is at lkdtm_rodata_do_nothing+0x0/0x8
      LR is at execute_location+0x74/0x88
      
      The 'IABT (current EL)' indicates the error but it's a bit cryptic
      without knowledge of the ARM ARM. There is also no indication of the
      specific address which triggered the fault. The increase in kernel
      page permissions makes hitting this case more likely as well.
      Handling the case in the vectors gives a much more familiar looking
      error message:
      
      lkdtm: Performing direct entry EXEC_RODATA
      lkdtm: attempting ok execution at ffff0000084c0840
      lkdtm: attempting bad execution at ffff000008880680
      Unable to handle kernel paging request at virtual address ffff000008880680
      pgd = ffff8000089b2000
      [ffff000008880680] *pgd=00000000489b4003, *pud=0000000048904003, *pmd=0000000000000000
      Internal error: Oops: 8400000e [#1] PREEMPT SMP
      Modules linked in:
      CPU: 1 PID: 997 Comm: sh Not tainted 4.7.0-rc1+ #24
      Hardware name: linux,dummy-virt (DT)
      task: ffff800077f9f080 ti: ffff800008a1c000 task.ti: ffff800008a1c000
      PC is at lkdtm_rodata_do_nothing+0x0/0x8
      LR is at execute_location+0x74/0x88
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  7. 12 August 2016, 1 commit
  8. 11 August 2016, 2 commits
  9. 09 August 2016, 1 commit
  10. 04 August 2016, 2 commits
    • dma-mapping: use unsigned long for dma_attrs · 00085f1e
      Committed by Krzysztof Kozlowski
      The dma-mapping core and the implementations do not change the DMA
      attributes passed by pointer.  Thus the pointer can point to const data.
      However the attributes do not have to be a bitfield.  Instead unsigned
      long will do fine:
      
      1. This is just simpler, both in terms of reading the code and setting
         attributes.  Instead of initializing local attributes on the stack
         and passing a pointer to them to dma_set_attr(), just set the bits
         (see the before/after sketch below).

      2. It brings safety and allows checking for const correctness because
         the attributes are passed by value.
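
      A hedged before/after sketch of a typical caller (illustrative only;
      buf, dev, size and handle are assumed to exist in the surrounding
      driver code):

          /* Before: attributes built up via struct dma_attrs on the stack. */
          struct dma_attrs attrs;
          init_dma_attrs(&attrs);
          dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
          buf = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL, &attrs);

          /* After: attributes are just bits in an unsigned long. */
          buf = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL,
                                DMA_ATTR_WRITE_COMBINE);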
      
      Semantic patches for this change (at least most of them):
      
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
      
          @@
          f(...,
          - struct dma_attrs *attrs
          + unsigned long attrs
          , ...)
          {
          ...
          }
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      and
      
          // Options: --all-includes
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
          type t;
      
          @@
          t f(..., struct dma_attrs *attrs);
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
      Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
      Acked-by: Mark Salter <msalter@redhat.com> [c6x]
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
      Reviewed-by: NBart Van Assche <bart.vanassche@sandisk.com>
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
      Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
      Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
      Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
      Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
      Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arm64: Fix copy-on-write referencing in HugeTLB · 747a70e6
      Committed by Steve Capper
      set_pte_at(.) will set or unset the PTE_RDONLY hardware bit before
      writing the entry to the table.
      
      This can cause problems with the copy-on-write logic in hugetlb_cow:
       *) hugetlb_cow(.) called to handle a write fault on read only pte,
       *) Before the copy-on-write updates the new page table a call is
          made to pte_same(huge_ptep_get(ptep), pte)), to check for a race,
       *) Because set_pte_at(.) changed the pte, *ptep != pte, and the
          hugetlb_cow(.) code erroneously assumes that it lost the race,
       *) The new page is subsequently freed without being used.
      
      On arm64 this problem only becomes apparent when we apply:
      67961f9d mm/hugetlb: fix huge page reserve accounting for private
      mappings
      
      When one runs the libhugetlbfs test suite, there are allocation errors
      and hugetlbfs pages become erroneously locked in memory as reserved.
      (There is a high HugePages_Rsvd: count).
      
      In this patch we introduce a pte_same() that ignores the PTE_RDONLY
      bit, allowing the libhugetlbfs test suite to pass as expected without
      leaking any reserved HugeTLB pages.
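
      A hedged sketch of the idea (not the verbatim arm64 definition, which
      also has to handle non-present entries): compare the two PTE values
      with the hardware-managed PTE_RDONLY bit masked out, so set_pte_at()
      toggling that bit no longer makes pte_same() report a mismatch.

          static inline int pte_same(pte_t pte_a, pte_t pte_b)
          {
                  /* ignore PTE_RDONLY, which set_pte_at() may have flipped */
                  return (pte_val(pte_a) & ~PTE_RDONLY) ==
                         (pte_val(pte_b) & ~PTE_RDONLY);
          }
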
      Reported-by: Huang Shijie <shijie.huang@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  11. 03 August 2016, 1 commit
  12. 01 August 2016, 2 commits
    • arm64: KVM: Set cpsr before spsr on fault injection · 89581f06
      Committed by Andrew Jones
      We need to set cpsr before determining the spsr bank, as the bank
      depends on the target exception level of the injection, not the
      current mode of the vcpu. Normally this is one in the same (EL1),
      but not when we manage to trap an EL0 fault. It still doesn't really
      matter for the 64-bit EL0 case though, as vcpu_spsr() unconditionally
      uses the EL1 bank for that. However the 32-bit EL0 case gets fun, as
      that path will lead to the BUG() in vcpu_spsr32().
      
      This patch fixes the assignment order and also modifies some white
      space in order to better group pairs of lines that have strict order.
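
      A hedged sketch of the resulting order in the 64-bit injection path
      (not the verbatim inject_abt64() code; the PSTATE constant name is an
      assumption):

          cpsr = *vcpu_cpsr(vcpu);                 /* mode at the time of the fault */
          *vcpu_cpsr(vcpu) = PSTATE_FAULT_BITS_64; /* switch to the target EL1 mode first... */
          *vcpu_spsr(vcpu) = cpsr;                 /* ...so the spsr bank matches the
                                                      target exception level, not the
                                                      pre-fault mode */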
      
      Cc: stable@vger.kernel.org # v4.5
      Signed-off-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • arm64: mm: avoid fdt_check_header() before the FDT is fully mapped · 04a84810
      Committed by Ard Biesheuvel
      As reported by Zijun, the fdt_check_header() call in __fixmap_remap_fdt()
      is not safe since it is not guaranteed that the FDT header is mapped
      completely. Due to the minimum alignment of 8 bytes, the only fields we
      can assume to be mapped are 'magic' and 'totalsize'.
      
      Since the OF layer is in charge of validating the FDT image, and we are
      only interested in making reasonably sure that the size field contains
      a meaningful value, replace the fdt_check_header() call with an explicit
      comparison of the magic field's value against the expected value.
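
      A hedged sketch of the replacement check (not the verbatim
      __fixmap_remap_fdt() hunk): with only the first 8 bytes of the header
      guaranteed to be mapped, validate just the magic and read totalsize,
      leaving full validation to the OF layer.

          if (fdt_magic(dt_virt) != FDT_MAGIC)    /* was: fdt_check_header(dt_virt) */
                  return NULL;
          size = fdt_totalsize(dt_virt);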
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Zijun Hu <zijun_hu@htc.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  13. 29 July 2016, 4 commits
    • arm64: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO · 3146bc64
      Committed by James Hogan
      AT_VECTOR_SIZE_ARCH should be defined with the maximum number of
      NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined
      for arm64 at all even though ARCH_DLINFO will contain one NEW_AUX_ENT
      for the VDSO address.
      
      This shouldn't be a problem as AT_VECTOR_SIZE_BASE includes space for
      AT_BASE_PLATFORM, which arm64 doesn't use, but let's define it now and add
      the comment above ARCH_DLINFO as found in several other architectures to
      remind future modifiers of ARCH_DLINFO to keep AT_VECTOR_SIZE_ARCH up to
      date.
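
      A hedged sketch of the addition to the arm64 asm/elf.h (the comment
      wording is illustrative, not the verbatim hunk):

          /* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries
           * in ARCH_DLINFO changes */
          #define AT_VECTOR_SIZE_ARCH 1  /* one NEW_AUX_ENT: the vDSO address */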
      
      Fixes: f668cd16 ("arm64: ELF definitions")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: relocatable: suppress R_AARCH64_ABS64 relocations in vmlinux · 08cc55b2
      Committed by Ard Biesheuvel
      The linker routines that we rely on to produce a relocatable PIE binary
      treat it as a shared ELF object in some ways, i.e., it emits symbol based
      R_AARCH64_ABS64 relocations into the final binary since doing so would be
      appropriate when linking a shared library that is subject to symbol
      preemption. (This means that an executable can override certain symbols
      that are exported by a shared library it is linked with, and that the
      shared library *must* update all its internal references as well, and point
      them to the version provided by the executable.)
      
      Symbol preemption does not occur for OS hosted PIE executables, let alone
      for vmlinux, and so we would prefer to get rid of these symbol based
      relocations. This would allow us to simplify the relocation routines, and
      to strip the .dynsym, .dynstr and .hash sections from the binary. (Note
      that these are tiny, and are placed in the .init segment, but they clutter
      up the vmlinux binary.)
      
      Note that these R_AARCH64_ABS64 relocations are only emitted for absolute
      references to symbols defined in the linker script, all other relocatable
      quantities are covered by anonymous R_AARCH64_RELATIVE relocations that
      simply list the offsets to all 64-bit values in the binary that need to be
      fixed up based on the offset between the link time and run time addresses.
      
      Fortunately, GNU ld has a -Bsymbolic option, which is intended for shared
      libraries to allow them to ignore symbol preemption, and unconditionally
      bind all internal symbol references to its own definitions. So set it for
      our PIE binary as well, and get rid of the associated sections and the
      relocation code that processes them.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: fixed conflict with __dynsym_offset linker script entry]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: vmlinux.lds: make __rela_offset and __dynsym_offset ABSOLUTE · d6732fc4
      Committed by Ard Biesheuvel
      Due to the untyped KIMAGE_VADDR constant, the linker may not notice
      that the __rela_offset and __dynsym_offset expressions are absolute
      values (i.e., are not subject to relocation). This does not matter for
      KASLR, but it does confuse kallsyms in relative mode, since it uses
      the lowest non-absolute symbol address as the anchor point, and expects
      all other symbol addresses to be within 4 GB of it.
      
      Fix this by qualifying these expressions as ABSOLUTE() explicitly.
      
      Fixes: 0cd3defe ("arm64: kernel: perform relocation processing from ID map")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64:acpi: fix the acpi alignment exception when 'mem=' specified · cb0a6502
      Committed by Dennis Chen
      When booting an ACPI enabled kernel with 'mem=x', there is the
      possibility that ACPI data regions from the firmware will lie above the
      memory limit.  Ordinarily these will be removed by
      memblock_enforce_memory_limit(.).
      
      Unfortunately, this means that these regions will then be mapped by
      acpi_os_ioremap(.) as device memory (instead of normal memory), so
      unaligned accesses will then provoke alignment faults.
      
      In this patch we adopt memblock_mem_limit_remove_map instead, and this
      preserves these ACPI data regions (marked NOMAP) thus ensuring that
      these regions are not mapped as device memory.
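
      A hedged sketch of the substitution in the arm64 memory init code (not
      the verbatim hunk; the memory_limit variable and its sentinel value are
      assumptions based on the surrounding code):

          if (memory_limit != (phys_addr_t)ULLONG_MAX)
                  /* clip at 'mem=' but keep NOMAP regions such as ACPI tables */
                  memblock_mem_limit_remove_map(memory_limit);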
      
      For example, below is an alignment exception observed on ARM platform
      when booting the kernel with 'acpi=on mem=8G':
      
        ...
        Unable to handle kernel paging request at virtual address ffff0000080521e7
        pgd = ffff000008aa0000
        [ffff0000080521e7] *pgd=000000801fffe003, *pud=000000801fffd003, *pmd=000000801fffc003, *pte=00e80083ff1c1707
        Internal error: Oops: 96000021 [#1] PREEMPT SMP
        Modules linked in:
        CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc3-next-20160616+ #172
        Hardware name: AMD Overdrive/Supercharger/Default string, BIOS ROD1001A 02/09/2016
        task: ffff800001ef0000 ti: ffff800001ef8000 task.ti: ffff800001ef8000
        PC is at acpi_ns_lookup+0x520/0x734
        LR is at acpi_ns_lookup+0x4a4/0x734
        pc : [<ffff0000083b8b10>] lr : [<ffff0000083b8a94>] pstate: 60000045
        sp : ffff800001efb8b0
        x29: ffff800001efb8c0 x28: 000000000000001b
        x27: 0000000000000001 x26: 0000000000000000
        x25: ffff800001efb9e8 x24: ffff000008a10000
        x23: 0000000000000001 x22: 0000000000000001
        x21: ffff000008724000 x20: 000000000000001b
        x19: ffff0000080521e7 x18: 000000000000000d
        x17: 00000000000038ff x16: 0000000000000002
        x15: 0000000000000007 x14: 0000000000007fff
        x13: ffffff0000000000 x12: 0000000000000018
        x11: 000000001fffd200 x10: 00000000ffffff76
        x9 : 000000000000005f x8 : ffff000008725fa8
        x7 : ffff000008a8df70 x6 : ffff000008a8df70
        x5 : ffff000008a8d000 x4 : 0000000000000010
        x3 : 0000000000000010 x2 : 000000000000000c
        x1 : 0000000000000006 x0 : 0000000000000000
        ...
          acpi_ns_lookup+0x520/0x734
          acpi_ds_load1_begin_op+0x174/0x4fc
          acpi_ps_build_named_op+0xf8/0x220
          acpi_ps_create_op+0x208/0x33c
          acpi_ps_parse_loop+0x204/0x838
          acpi_ps_parse_aml+0x1bc/0x42c
          acpi_ns_one_complete_parse+0x1e8/0x22c
          acpi_ns_parse_table+0x8c/0x128
          acpi_ns_load_table+0xc0/0x1e8
          acpi_tb_load_namespace+0xf8/0x2e8
          acpi_load_tables+0x7c/0x110
          acpi_init+0x90/0x2c0
          do_one_initcall+0x38/0x12c
          kernel_init_freeable+0x148/0x1ec
          kernel_init+0x10/0xec
          ret_from_fork+0x10/0x40
        Code: b9009fbc 2a00037b 36380057 3219037b (b9400260)
        ---[ end trace 03381e5eb0a24de4 ]---
        Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
      
      With 'efi=debug', we can see those ACPI regions loaded by firmware on
      that board as:
      
        efi:   0x0083ff185000-0x0083ff1b4fff [Reserved           |   |  |  |  |  |  |  |   |WB|WT|WC|UC]*
        efi:   0x0083ff1b5000-0x0083ff1c2fff [ACPI Reclaim Memory|   |  |  |  |  |  |  |   |WB|WT|WC|UC]*
        efi:   0x0083ff223000-0x0083ff224fff [ACPI Memory NVS    |   |  |  |  |  |  |  |   |WB|WT|WC|UC]*
      
      Link: http://lkml.kernel.org/r/1468475036-5852-3-git-send-email-dennis.chen@arm.com
      Acked-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Dennis Chen <dennis.chen@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Rafael J. Wysocki <rafael@kernel.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Kaly Xin <kaly.xin@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 27 July 2016, 5 commits
  15. 26 July 2016, 3 commits
  16. 24 July 2016, 1 commit
  17. 23 July 2016, 1 commit
    • KVM: arm/arm64: Enable irqchip routing · 180ae7b1
      Committed by Eric Auger
      This patch adds compilation of, and linking against, the irqchip code.

      The main motivation behind using the irqchip code is to enable the MSI
      routing code. In the future, irqchip routing may also be useful
      when targeting multiple irqchips.

      The standard routing callbacks are now implemented in vgic-irqfd:
      - kvm_set_routing_entry
      - kvm_set_irq
      - kvm_set_msi

      They are only supported with the new-vgic code.
      
      Both HAVE_KVM_IRQCHIP and HAVE_KVM_IRQ_ROUTING are defined.
      KVM_CAP_IRQ_ROUTING is advertised and KVM_SET_GSI_ROUTING is allowed.
      
      So from now on IRQCHIP routing is enabled and a routing table entry
      must exist for irqfd injection to succeed for a given SPI. This patch
      builds a default flat irqchip routing table (gsi=irqchip.pin) covering
      all the VGIC SPI indexes. This routing table is overwritten by the
      first user-space call to the KVM_SET_GSI_ROUTING ioctl.
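
      A hedged sketch of building such a flat table (not the verbatim vgic
      code; nr_spis stands in for the VGIC SPI count and error handling is
      omitted):

          struct kvm_irq_routing_entry *entries;
          int i;

          entries = kcalloc(nr_spis, sizeof(*entries), GFP_KERNEL);
          for (i = 0; i < nr_spis; i++) {
                  entries[i].gsi = i;                        /* flat: gsi == pin */
                  entries[i].type = KVM_IRQ_ROUTING_IRQCHIP;
                  entries[i].u.irqchip.irqchip = 0;
                  entries[i].u.irqchip.pin = i;
          }
          kvm_set_irq_routing(kvm, entries, nr_spis, 0);
          kfree(entries);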
      
      MSI routing setup is not yet allowed.
      Signed-off-by: Eric Auger <eric.auger@redhat.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  18. 22 July 2016, 2 commits
  19. 21 July 2016, 1 commit
    • arm64: Honor nosmp kernel command line option · e75118a7
      Committed by Suzuki K Poulose
      Passing "nosmp" should boot the kernel with a single processor, without
      provision to enable secondary CPUs even if they are present. "nosmp" is
      implemented by setting maxcpus=0. At the moment we still mark the secondary
      CPUs present even with nosmp, which allows the userspace to bring them
      up. This patch corrects the smp_prepare_cpus() to honor the maxcpus == 0.
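
      A hedged sketch of the check described above (not the verbatim
      smp_prepare_cpus() hunk):

          /* "nosmp" means maxcpus=0: don't mark any secondary CPU present,
           * so userspace cannot bring them up later. */
          if (max_cpus == 0)
                  return;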
      
      Commit 44dbcc93 ("arm64: Fix behavior of maxcpus=N") fixed the
      behavior for maxcpus >= 1, but broke maxcpus = 0.
      
      Fixes: 44dbcc93 ("arm64: Fix behavior of maxcpus=N")
      Cc: <stable@vger.kernel.org> # 4.7+
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      [catalin.marinas@arm.com: updated code comment]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>