1. 01 Dec 2017 (1 commit)
    • arm64: ftrace: emit ftrace-mod.o contents through code · be0f272b
      Authored by Ard Biesheuvel
      When building the arm64 kernel with both CONFIG_ARM64_MODULE_PLTS and
      CONFIG_DYNAMIC_FTRACE enabled, the ftrace-mod.o object file is built
      with the kernel and contains a trampoline that is linked into each
      module, so that modules can be loaded far away from the kernel and
      still reach the ftrace entry point in the core kernel with an ordinary
      relative branch, as is emitted by the compiler instrumentation code
      dynamic ftrace relies on.
      
      In order to be able to build out-of-tree modules, this object file
      needs to be included in the linux-headers or linux-devel packages,
      which is undesirable, as it makes arm64 a special case (although a
      precedent does exist for 32-bit PPC).
      
      Given that the trampoline essentially consists of a PLT entry, let's
      not bother with a source or object file for it, and simply patch it
      in whenever the trampoline is being populated, using the existing
      PLT support routines.
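      
      For illustration, the entry being patched in looks roughly like this
      (a sketch: the field names follow the kernel's struct plt_entry, and
      get_plt_entry() is the helper this approach relies on):
      
          struct plt_entry {
                  /*
                   * Load the 64-bit target into x16 with a movn/movk/movk
                   * sequence, then branch to it.
                   */
                  __le32 mov0;    /* movn x16, #0x....                */
                  __le32 mov1;    /* movk x16, #0x...., lsl #16       */
                  __le32 mov2;    /* movk x16, #0x...., lsl #32       */
                  __le32 br;      /* br   x16                         */
          };
      
          /* populating the ftrace trampoline then reduces to, roughly: */
          *mod->arch.ftrace_trampoline = get_plt_entry((u64)&ftrace_caller);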
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 02 Nov 2017 (1 commit)
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Authored by Greg Kroah-Hartman
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boiler plate text.
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information.
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier should be
      applied to a file was done in a spreadsheet of side-by-side results
      from the output of two independent scanners (ScanCode & Windriver)
      producing SPDX tag:value files, created by Philippe Ombredanne.
      Philippe prepared the base worksheet, and did an initial spot review
      of a few thousand files.
      
      The 4.13 kernel was the starting point of the analysis, with 60,537
      files assessed.  Kate Stewart did a file-by-file comparison of the
      scanner results in the spreadsheet to determine which SPDX license
      identifier(s) should be applied to each file. She confirmed any
      determination that was not immediately clear with lawyers working
      with the Linux Foundation.
      
      The criteria used to select files for SPDX license identifier tagging
      were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained
         >5 lines of source.
       - The file already had some variant of a license header in it (even
         if <5 lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when neither scanner could find any license traces, the file was
         considered to have no license information in it, and the top-level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If such a file was under a */uapi/* path, it was tagged "GPL-2.0 WITH
         Linux-syscall-note"; otherwise it was tagged "GPL-2.0".  The results were:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL-family license was found in the file or if it had no
         licensing in it (per the prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based in part on an older version of FOSSology, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with the SPDX license identifiers in the
      files he inspected. For the non-uapi files, Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to
      have copy/paste license identifier errors, and they have been fixed to
      reflect the correct identifier.
      
      Additionally, Philippe spent 10 hours doing a detailed manual inspection
      and review of the 12,461 files patched in the initial version earlier
      this week, with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the .csv files and add the proper SPDX tag to each file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types).  Finally, Greg ran the script using the .csv files to
      generate the patches.
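      
      For reference, the tags added by this series take the comment form the
      file type expects; the GPL-2.0 value shown is the default case handled
      here:
      
          /* SPDX-License-Identifier: GPL-2.0 */   /* block form, headers */
      
          // SPDX-License-Identifier: GPL-2.0      // C99 form, .c files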
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  3. 02 Oct 2017 (1 commit)
    • arm64: remove unneeded copy to init_utsname()->machine · c2f0b54f
      Authored by Masahiro Yamada
      As you can see in init/version.c, init_uts_ns.name.machine is initially
      set to UTS_MACHINE, so there is no point in copying the same string.
      
      I dug through the git history to figure out why this line is here.  My
      best guess is as follows:
      
       - This line has been here since the initial support of arm64 by
         commit 9703d9d7 ("arm64: Kernel booting and initialisation").
         If ARCH (=arm64) and UTS_MACHINE (=aarch64) do not match,
         arch/$(ARCH)/Makefile is supposed to override UTS_MACHINE, but the
         initial version of arch/arm64/Makefile failed to do so.  Instead,
         the boot code copied "aarch64" to init_utsname()->machine.
      
       - Commit 94ed1f2c ("arm64: setup: report ELF_PLATFORM as the
         machine for utsname") replaced "aarch64" with ELF_PLATFORM to
         make "uname" reflect the endianness.
      
       - ELF_PLATFORM does not help provide the UTS machine name to the rpm
         target, so commit cfa88c79 ("arm64: Set UTS_MACHINE in the
         Makefile") fixed it.  The commit simply replaced ELF_PLATFORM with
         UTS_MACHINE, but missed the fact that the string copy itself was no
         longer needed.
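      
      A quick way to see what that field ends up holding: uname(2) reports
      init_utsname()->machine, so this stand-alone user-space check (not
      kernel code) prints "aarch64" on an arm64 kernel:
      
          #include <stdio.h>
          #include <sys/utsname.h>
      
          int main(void)
          {
                  struct utsname u;
      
                  if (uname(&u) == 0)
                          printf("machine: %s\n", u.machine);
                  return 0;
          }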
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 07 Jun 2017 (1 commit)
    • arm64: ftrace: add support for far branches to dynamic ftrace · e71a4e1b
      Authored by Ard Biesheuvel
      Currently, dynamic ftrace support in the arm64 kernel assumes that all
      core kernel code is within range of ordinary branch instructions that
      occur in module code, which is usually the case, but is no longer
      guaranteed now that we have support for module PLTs and address space
      randomization.
      
      Since on arm64, all patching of branch instructions involves function
      calls to the same entry point [ftrace_caller()], we can emit the modules
      with a trampoline that has unlimited range, and patch both the trampoline
      itself and the branch instruction to redirect the call via the trampoline.
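      
      A sketch of the resulting patch-site logic (the insn helpers are the
      kernel's real APIs; the surrounding function and its name are
      illustrative only):
      
          static int sketch_make_call(unsigned long pc, unsigned long addr,
                                      struct module *mod)
          {
                  long offset = (long)addr - (long)pc;
                  u32 insn;
      
                  if (offset < -SZ_128M || offset >= SZ_128M)
                          /* out of b/bl range: bounce via the trampoline */
                          addr = (unsigned long)mod->arch.ftrace_trampoline;
      
                  insn = aarch64_insn_gen_branch_imm(pc, addr,
                                                     AARCH64_INSN_BRANCH_LINK);
                  return aarch64_insn_patch_text_nosync((void *)pc, insn);
          }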
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: minor clarification to smp_wmb() comment]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 06 Apr 2017 (1 commit)
    • arm64: kdump: provide /proc/vmcore file · e62aaeac
      Authored by AKASHI Takahiro
      Arch-specific functions are added to allow for implementing a crash dump
      file interface, /proc/vmcore, which can be viewed as an ELF file.
      
      A user space tool, like kexec-tools, is responsible for allocating
      a separate region for the core's ELF header within the crash dump
      kernel's memory and filling it in when executing kexec_load().
      
      Its location is then advertised to the crash dump kernel via a new
      device-tree property, "linux,elfcorehdr", and the crash dump kernel
      preserves the region for later use with reserve_elfcorehdr() at boot
      time.
      
      In the crash dump kernel, /proc/vmcore accesses the primary kernel's
      memory with copy_oldmem_page(), which feeds the data page by page by
      ioremap'ing each page, since it does not reside in the crash dump
      kernel's linear mapping.
      
      Meanwhile, elfcorehdr_read() is simple, as the region is always mapped.
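      
      The arch hook is small; a hedged sketch of its shape (memremap() stands
      in here for the ioremap-style per-page mapping described above):
      
          ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
                                   unsigned long offset, int userbuf)
          {
                  void *vaddr;
      
                  if (!csize)
                          return 0;
      
                  /* the old kernel's page is not in our linear map */
                  vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
                  if (!vaddr)
                          return -ENOMEM;
      
                  if (userbuf) {
                          if (copy_to_user((char __user *)buf, vaddr + offset,
                                           csize)) {
                                  memunmap(vaddr);
                                  return -EFAULT;
                          }
                  } else {
                          memcpy(buf, vaddr + offset, csize);
                  }
      
                  memunmap(vaddr);
                  return csize;
          }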
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Reviewed-by: James Morse <james.morse@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 05 Apr 2017 (1 commit)
    • arm64: relocation testing module · 214fad55
      Authored by Ard Biesheuvel
      This module tests the module loader's ELF relocation processing
      routines. When loaded, it logs output like the following.
      
          Relocation test:
          -------------------------------------------------------
          R_AARCH64_ABS64                 0xffff880000cccccc pass
          R_AARCH64_ABS32                 0x00000000f800cccc pass
          R_AARCH64_ABS16                 0x000000000000f8cc pass
          R_AARCH64_MOVW_SABS_Gn          0xffff880000cccccc pass
          R_AARCH64_MOVW_UABS_Gn          0xffff880000cccccc pass
          R_AARCH64_ADR_PREL_LO21         0xffffff9cf4d1a400 pass
          R_AARCH64_PREL64                0xffffff9cf4d1a400 pass
          R_AARCH64_PREL32                0xffffff9cf4d1a400 pass
          R_AARCH64_PREL16                0xffffff9cf4d1a400 pass
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  7. 03 Feb 2017 (1 commit)
    • efi: arm64: Add vmlinux debug link to the Image binary · 757b435a
      Authored by Ard Biesheuvel
      When building with debugging symbols, take the absolute path to the
      vmlinux binary and add it to the special PE/COFF debug table entry.
      This allows a debug EFI build to find the vmlinux binary, which is
      very helpful in debugging, given that the offset where the Image is
      first loaded by EFI is highly unpredictable.
      
      On UEFI implementations that choose to support it, this information is
      exposed via the EFI debug support table, a UEFI configuration table
      that is accessible both by the firmware at boot time and by the OS at
      runtime, and that lists all PE/COFF images loaded by the system.
      
      The format of the NB10 Codeview entry is based on the definition used
      by EDK2, which is our primary reference when it comes to the use of
      PE/COFF in the context of UEFI firmware.
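      
      The record itself is tiny; a hedged C rendering of its layout (field
      names here follow the common NB10 description rather than EDK2's, and
      the kernel emits the equivalent bytes from the PE/COFF header rather
      than from C):
      
          struct codeview_nb10_entry {
                  u32  cv_signature;      /* "NB10"                          */
                  u32  offset;            /* unused here, 0                  */
                  u32  timestamp;         /* unused here, 0                  */
                  u32  age;               /* unused here, 0                  */
                  char filename[];        /* NUL-terminated path to vmlinux  */
          };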
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: use realpath instead of shell invocation, as discussed on list]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  8. 31 Aug 2016 (1 commit)
  9. 19 Jul 2016 (2 commits)
    • arm64: Kprobes with single stepping support · 2dd0e8d2
      Authored by Sandeepa Prabhu
      Add support for basic kernel probes (kprobes) and jump probes
      (jprobes) for ARM64.
      
      Kprobes utilizes the software breakpoint and single-step debug
      exceptions supported on ARMv8.
      
      A software breakpoint is placed at the probe address to trap the
      kernel execution into the kprobe handler.
      
      ARMv8 supports enabling single stepping before the break exception
      return (ERET), with the next PC in the exception return address (ELR_EL1). The
      kprobe handler prepares an executable memory slot for out-of-line
      execution with a copy of the original instruction being probed, and
      enables single stepping. The PC is set to the out-of-line slot address
      before the ERET. With this scheme, the instruction is executed with the
      exact same register context except for the PC (and DAIF) registers.
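      
      A condensed sketch of that sequence (the function and its name are
      illustrative; kernel_enable_single_step() and instruction_pointer_set()
      are the real helpers):
      
          static void sketch_setup_singlestep(struct kprobe *p,
                                              struct pt_regs *regs)
          {
                  /* slot holding a copy of the probed instruction */
                  unsigned long slot = (unsigned long)p->ainsn.insn;
      
                  /* ERET will resume at the slot, single stepping enabled */
                  instruction_pointer_set(regs, slot);
                  kernel_enable_single_step(regs);
          }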
      
      The debug mask (PSTATE.D) is enabled only when single stepping a
      recursive kprobe, e.g. during kprobe re-entry, so that the probed
      instruction can be single stepped within the kprobe handler's exception
      context. The recursion depth of a kprobe is always 2: upon probe
      re-entry, any further re-entry is prevented by not calling the handlers,
      and the case is counted as a missed kprobe.
      
      Single stepping from the out-of-line (XOL) slot has a drawback for
      PC-relative accesses, such as branches and symbolic literal accesses,
      as the offset from the new PC (the slot address) is not guaranteed to
      fit in the opcode's immediate field. Such instructions need simulation,
      so probing them is rejected.
      
      Instructions that generate exceptions or change CPU mode are rejected
      for probing.
      
      Exclusive load/store instructions are rejected too.  Additionally, the
      code is checked to see if it is inside an exclusive load/store sequence
      (code from Pratyush).
      
      System instructions are mostly enabled for stepping, except MSR/MRS
      accesses to "DAIF" flags in PSTATE, which are not safe for
      probing.
      
      This also changes arch/arm64/include/asm/ptrace.h to use
      include/asm-generic/ptrace.h.
      
      Thanks to Steve Capper and Pratyush Anand for several suggested
      changes.
      Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add conditional instruction simulation support · 2af3ec08
      Authored by David A. Long
      Cease using the arm32 arm_check_condition() function and replace it with
      a local version for use in deprecated instruction support on arm64. Also
      make the function table it uses available for future use by kprobes
      and/or uprobes.
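      
      The heart of such a check follows the ARM ARM's ConditionHolds()
      pseudocode; a sketch in C (the kernel's version is table-driven, but
      the logic is equivalent):
      
          static bool cond_holds(u32 cond, u32 pstate)
          {
                  bool n = pstate & (1U << 31), z = pstate & (1U << 30);
                  bool c = pstate & (1U << 29), v = pstate & (1U << 28);
                  bool res;
      
                  switch (cond >> 1) {
                  case 0: res = z; break;                 /* EQ/NE */
                  case 1: res = c; break;                 /* CS/CC */
                  case 2: res = n; break;                 /* MI/PL */
                  case 3: res = v; break;                 /* VS/VC */
                  case 4: res = c && !z; break;           /* HI/LS */
                  case 5: res = n == v; break;            /* GE/LT */
                  case 6: res = (n == v) && !z; break;    /* GT/LE */
                  default: res = true; break;             /* AL    */
                  }
      
                  /* odd codes invert the result, except 0b1111 (also AL) */
                  if ((cond & 1) && cond != 0xf)
                          res = !res;
                  return res;
          }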
      
      This function is derived from code written by Sandeepa Prabhu.
      Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  10. 12 Jul 2016 (2 commits)
    • arm64: fix vdso-offsets.h dependency · a66649da
      Authored by Kevin Brodsky
      arm64/kernel/{vdso,signal}.c include vdso-offsets.h, as well as any
      file that includes asm/vdso.h. Therefore, vdso-offsets.h must be
      generated before these files are compiled.
      
      The current rules in arm64/kernel/Makefile do not actually enforce
      this, because even though $(obj)/vdso is listed as a prerequisite for
      vdso-offsets.h, this does not result in the intended effect of
      building the vdso subdirectory (before all the other objects). As a
      consequence, depending on the order in which the rules are followed,
      vdso-offsets.h is updated or not before arm64/kernel/{vdso,signal}.o
      are built. The current rules also impose an unnecessary dependency on
      vdso-offsets.h for all arm64/kernel/*.o, resulting in unnecessary
      rebuilds. This is made obvious when using make -j:
      
        touch arch/arm64/kernel/vdso/gettimeofday.S && make -j$NCPUS arch/arm64/kernel
      
      will sometimes result in none of arm64/kernel/*.o being
      rebuilt, sometimes all of them, or even just some of them.
      
      With recursive Makefiles, it is quite difficult to ensure via normal
      rules that a header is generated before it is used.  Instead,
      arch-specific generated headers are normally built in the archprepare
      recipe in the arch Makefile (see for instance arch/ia64/Makefile).
      Unfortunately, asm-offsets.h is included in gettimeofday.S, and must
      therefore be generated before vdso-offsets.h, which is not the case if
      archprepare is used. For this reason, a rule run after archprepare has
      to be used.
      
      This commit adds rules in arm64/Makefile to build vdso-offsets.h
      during the prepare step, ensuring that vdso-offsets.h is generated
      before building anything. It also removes the now-unnecessary
      dependencies on vdso-offsets.h in arm64/kernel/Makefile. Finally, it
      removes the duplication of asm-offsets.h between arm64/kernel/vdso/
      and include/generated/ and makes include/generated/vdso-offsets.h a
      target in arm64/kernel/vdso/Makefile.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Michal Marek <mmarek@suse.com>
      Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • Revert "arm64: Fix vdso-offsets.h dependency" · 7d9a7086
      Authored by Catalin Marinas
      This reverts commit 90f777be.
      
      While this commit was aimed at fixing the dependencies, with a large
      make -j the vdso-offsets.h file is not generated, leading to build
      failures.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  11. 08 Jul 2016 (1 commit)
    • arm64: Fix vdso-offsets.h dependency · 90f777be
      Authored by Catalin Marinas
      arch/arm64/kernel/{vdso,signal}.c include generated/vdso-offsets.h, and
      therefore the symbol offsets must be generated before these files are
      compiled.
      
      The current rules in arm64/kernel/Makefile do not actually enforce
      this, because even though $(obj)/vdso is listed as a prerequisite for
      vdso-offsets.h, this does not result in the intended effect of
      building the vdso subdirectory (before all the other objects). As a
      consequence, depending on the order in which the rules are followed,
      vdso-offsets.h is updated or not before arm64/kernel/{vdso,signal}.o
      are built. The current rules also impose an unnecessary dependency on
      vdso-offsets.h for all arm64/kernel/*.o, resulting in unnecessary
      rebuilds.
      
      This patch removes the arch/arm64/kernel/vdso/vdso-offsets.h file
      generation, leaving only the include/generated/vdso-offsets.h one. It
      adds a forced dependency check of the vdso-offsets.h file in
      arch/arm64/kernel/Makefile which, if not up to date according to the
      arch/arm64/kernel/vdso/Makefile rules (depending on vdso.so.dbg), will
      trigger the vdso/ subdirectory build and vdso-offsets.h re-generation.
      Automatic kbuild dependency rules between kernel/{vdso,signal}.c rules
      and vdso-offsets.h will guarantee that the vDSO object is built first,
      followed by the generated symbol offsets header file.
      Reported-by: Kevin Brodsky <kevin.brodsky@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  12. 27 Jun 2016 (1 commit)
  13. 30 May 2016 (1 commit)
  14. 28 Apr 2016 (1 commit)
    • arm64: kernel: Add support for hibernate/suspend-to-disk · 82869ac5
      Authored by James Morse
      Add support for hibernate/suspend-to-disk.
      
      Suspend borrows code from cpu_suspend() to write the CPU state onto the stack,
      before calling swsusp_save() to save the memory image.
      
      Restore creates a set of temporary page tables, covering only the
      linear map, copies the restore code to a 'safe' page, then uses the copy to
      restore the memory image. The copied code executes in the lower half of the
      address space, and once complete, restores the original kernel's page
      tables. It then calls into cpu_resume(), and follows the normal
      cpu_suspend() path back into the suspend code.
      
      To restore a kernel built with KASLR, the addresses of the page tables
      and cpu_resume() are stored in the hibernate arch header, and the EL2
      vectors are pivoted via the 'safe' page in low memory.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Kevin Hilman <khilman@baylibre.com> # Tested on Juno R2
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  15. 24 Feb 2016 (2 commits)
    • arm64: add support for kernel ASLR · f80fb3a3
      Authored by Ard Biesheuvel
      This adds support for KASLR, implemented based on entropy provided by
      the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
      of the address space (VA_BITS) and the page size, the entropy in the
      virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
      4 levels), with the sidenote that displacements that result in the kernel
      image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
      granule kernels, respectively) are not allowed, and will be rounded up to
      an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
      further away if the module region is exhausted).
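      
      In sketch form, the non-FULL case reduces to sliding a 128 MB module
      window inside that interval (names and rounding here are assumptions;
      see arch/arm64/kernel/kaslr.c for the real computation):
      
          static u64 sketch_module_base(u64 seed)
          {
                  /* window start may range over [_etext - 128M, _stext] */
                  u64 range = SZ_128M - (u64)(_etext - _stext);
                  u64 base  = (u64)_etext - SZ_128M;
      
                  /* use 21 bits of seed to slide the window, page aligned */
                  base += (range * (seed & GENMASK(20, 0))) >> 21;
                  return base & PAGE_MASK;
          }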
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add support for module PLTs · fd045f6c
      Authored by Ard Biesheuvel
      This adds support for emitting PLTs at module load time for relative
      branches that are out of range. This is a prerequisite for KASLR, which
      may place the kernel and the modules anywhere in the vmalloc area,
      making it more likely that branch target offsets exceed the maximum
      range of +/- 128 MB.
      
      In this version, I removed the distinction between relocations against
      .init executable sections and ordinary executable sections. The reason
      is that it is hardly worth the trouble, given that .init.text usually
      does not contain that many far branches, and this version now only
      reserves PLT entry space for jump and call relocations against undefined
      symbols (since symbols defined in the same module can be assumed to be
      within +/- 128 MB).
      
      For example, the mac80211.ko module (which is fairly sizable at ~400 KB)
      built with -mcmodel=large gives the following relocation counts:
      
                          relocs    branches   unique     !local
        .text              3925       3347       518        219
        .init.text           11          8         7          1
        .exit.text            4          4         4          1
        .text.unlikely       81         67        36         17
      
      ('unique' means branches to unique type/symbol/addend combos, of which
      !local is the subset referring to undefined symbols)
      
      IOW, we are only emitting a single PLT entry for the .init sections, and
      we are better off just adding it to the core PLT section instead.
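      
      The sizing pass behind this is straightforward; a condensed sketch
      (the real count_plts() in arch/arm64/kernel/module-plts.c also
      de-duplicates identical symbol/addend combos, per the 'unique' column
      above):
      
          static unsigned int sketch_count_plts(Elf64_Sym *syms,
                                                Elf64_Rela *rela, int num)
          {
                  unsigned int ret = 0;
                  int i;
      
                  for (i = 0; i < num; i++) {
                          switch (ELF64_R_TYPE(rela[i].r_info)) {
                          case R_AARCH64_JUMP26:
                          case R_AARCH64_CALL26:
                                  /* only undefined syms may be out of range */
                                  if (syms[ELF64_R_SYM(rela[i].r_info)].st_shndx
                                      == SHN_UNDEF)
                                          ret++;
                                  break;
                          }
                  }
                  return ret;
          }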
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  16. 16 Feb 2016 (1 commit)
    • arm64: kernel: implement ACPI parking protocol · 5e89c55e
      Authored by Lorenzo Pieralisi
      The SBBR and ACPI specifications allow ACPI based systems that do not
      implement PSCI (eg systems with no EL3) to boot through the ACPI parking
      protocol specification[1].
      
      This patch implements the ACPI parking protocol CPU operations, and adds
      code that eases parsing of the parking protocol data structures to the
      ARM64 SMP initialization carried out at the same time as CPU enumeration.
      
      To wake up the CPUs from the parked state, this patch implements a
      wakeup IPI for ARM64 (ie arch_send_wakeup_ipi_mask()) that mirrors the
      ARM one, so that a specific IPI is sent for wake-up purposes in order
      to distinguish it from other IPI sources.
      
      Given the current ACPI MADT parsing API, the patch implements a glue
      layer that helps pass the MADT GICC data structure from the SMP
      initialization code to the parking protocol implementation, somewhat
      overriding the CPU operations interfaces. This avoids creating a
      completely transparent DT/ACPI CPU operations layer, which would require
      opaque structure handling for CPU data (DT represents a CPU through DT
      nodes, ACPI through static MADT table entries) and seems overkill given
      that ACPI on ARM64 mandates only two booting protocols (PSCI and the
      parking protocol), so there is no need for further protocol additions.
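      
      The mailbox at the heart of the protocol is a small per-CPU structure;
      a sketch of its layout as described in the protocol document referenced
      at [1] below:
      
          struct parking_protocol_mailbox {
                  __le32 cpu_id;          /* written last by the OS; the     */
                  __le32 reserved;        /* parked CPU polls it for its id  */
                  __le64 entry_point;     /* address the CPU jumps to        */
          };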
      
      Based on the original work by Mark Salter <msalter@redhat.com>
      
      [1] https://acpica.org/sites/acpica/files/MP%20Startup%20for%20ARM%20platforms.docx
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Loc Ho <lho@apm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Al Stone <ahs3@redhat.com>
      [catalin.marinas@arm.com: Added WARN_ONCE(!acpi_parking_protocol_valid()) on the IPI]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  17. 05 Jan 2016 (2 commits)
  18. 21 Dec 2015 (1 commit)
  19. 31 Oct 2015 (1 commit)
  20. 13 Oct 2015 (1 commit)
    • arm64: add KASAN support · 39d114dd
      Authored by Andrey Ryabinin
      This patch adds arch specific code for kernel address sanitizer
      (see Documentation/kasan.txt).
      
      1/8 of the kernel address space is reserved for shadow memory. There
      was no hole big enough for this, so the virtual addresses for the
      shadow were stolen from the vmalloc area.
      
      At the early boot stage, the whole shadow region is populated with
      just one physical page (kasan_zero_page). Later, this page is reused
      as a read-only zero shadow for some memory that KASan currently
      doesn't track (vmalloc).
      After the physical memory is mapped, pages for shadow memory are
      allocated and mapped.
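      
      The address translation underpinning this is a single shift plus an
      offset (the generic KASAN formula; the offset is chosen so that the
      shadow lands in the stolen vmalloc range):
      
          /* one shadow byte covers 8 bytes of kernel address space */
          static inline void *kasan_mem_to_shadow(const void *addr)
          {
                  return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                          + KASAN_SHADOW_OFFSET;
          }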
      
      Functions like memset/memmove/memcpy perform a lot of memory accesses,
      and it is important to catch a bad pointer passed to one of them.
      The compiler's instrumentation cannot do this, since these functions
      are written in assembly, so KASan replaces the memory functions with
      manually instrumented variants. The original functions are declared as
      weak symbols so that the strong definitions in mm/kasan/kasan.c can
      replace them. The original functions also have aliases with a '__'
      prefix in their names, so the non-instrumented variants can be called
      if needed.
      Some files are built without KASan instrumentation (e.g. mm/slub.c);
      in such files, the original mem* functions are replaced (via #define)
      with the prefixed variants to disable memory access checks.
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  21. 12 Oct 2015 (1 commit)
    • arm64/efi: isolate EFI stub from the kernel proper · e8f3010f
      Authored by Ard Biesheuvel
      Since arm64 does not use a builtin decompressor, the EFI stub is built
      into the kernel proper. So far, this has been working fine, but actually,
      since the stub is in fact a PE/COFF relocatable binary that is executed
      at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we
      should not be seamlessly sharing code with the kernel proper, which is a
      position dependent executable linked at a high virtual offset.
      
      So instead, separate the contents of libstub and its dependencies by
      putting them into their own namespace, prefixing all of their symbols
      with __efistub. This way, we have tight control over what parts of the
      kernel proper are referenced by the stub.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Matt Fleming <matt.fleming@intel.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  22. 24 Aug 2015 (1 commit)
  23. 30 Jul 2015 (1 commit)
  24. 27 Jul 2015 (2 commits)
  25. 30 Mar 2015 (1 commit)
  26. 25 Mar 2015 (1 commit)
  27. 20 Mar 2015 (1 commit)
  28. 27 Feb 2015 (1 commit)
    • arm64: psci: move psci firmware calls out of line · f5e0a12c
      Authored by Will Deacon
      An arm64 allmodconfig fails to build with GCC 5 because __asmeq
      assertions in the PSCI firmware calling code fire when mcount
      preambles break our assumptions about the register allocation of
      function arguments:
      
        /tmp/ccDqJsJ6.s: Assembler messages:
        /tmp/ccDqJsJ6.s:60: Error: .err encountered
        /tmp/ccDqJsJ6.s:61: Error: .err encountered
        /tmp/ccDqJsJ6.s:62: Error: .err encountered
        /tmp/ccDqJsJ6.s:99: Error: .err encountered
        /tmp/ccDqJsJ6.s:100: Error: .err encountered
        /tmp/ccDqJsJ6.s:101: Error: .err encountered
      
      This patch fixes the issue by moving the PSCI calls out-of-line into
      their own assembly files, which are safe from the compiler's meddling
      fingers.
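      
      For context, the fragile construct being moved looked roughly like
      this: the __asmeq() checks assert that each argument is still live in
      the register the SMC calling convention requires, an assumption an
      mcount preamble can silently break (a sketch, close to but not exactly
      the original):
      
          static noinline int invoke_psci_fn_smc(u64 function_id, u64 arg0,
                                                 u64 arg1, u64 arg2)
          {
                  asm volatile(
                          __asmeq("%0", "x0")
                          __asmeq("%1", "x1")
                          __asmeq("%2", "x2")
                          __asmeq("%3", "x3")
                          "smc    #0\n"
                          : "+r" (function_id)
                          : "r" (arg0), "r" (arg1), "r" (arg2));
      
                  return function_id;
          }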
      Reported-by: Andy Whitcroft <apw@canonical.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  29. 27 Jan 2015 (2 commits)
    • arm64: kernel: remove ARM64_CPU_SUSPEND config option · af3cfdbf
      Authored by Lorenzo Pieralisi
      ARM64_CPU_SUSPEND config option was introduced to make code providing
      context save/restore selectable only on platforms requiring power
      management capabilities.
      
      Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option which
      in turn is set by the SUSPEND config option.
      
      The introduction of CPU_IDLE for arm64 requires that code configured
      by ARM64_CPU_SUSPEND (context save/restore) should be compiled in
      in order to enable the CPU idle driver to rely on CPU operations
      carrying out context save/restore.
      
      The ARM64_CPUIDLE config option (the ARM64 generic idle driver) is
      therefore forced to select ARM64_CPU_SUSPEND, even if its dependencies
      (ie PM_SLEEP) may not be met, which is not a clean way of handling the
      kernel configuration option.
      
      For these reasons, this patch removes the ARM64_CPU_SUSPEND config option
      and makes the context save/restore dependent on CPU_PM, which is selected
      whenever either SUSPEND or CPU_IDLE are configured, cleaning up dependencies
      in the process.
      
      This way, code previously configured through ARM64_CPU_SUSPEND is
      compiled in whenever a power management subsystem requires it to be
      present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour
      expected on ARM64 kernels.
      
      The cpu_suspend and cpu_init_idle CPU operations are added only if
      CPU_IDLE is selected, since they are CPU_IDLE specific methods and
      should be grouped and defined accordingly.
      
      PSCI CPU operations are updated to reflect the introduced changes.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Implement the compat_sys_call_table in C · 0156411b
      Authored by Catalin Marinas
      Unlike the sys_call_table[], the compat one was implemented in sys32.S,
      making it impossible to notice discrepancies between the number of
      compat syscalls and the __NR_compat_syscalls macro, the latter having
      to be defined in asm/unistd.h, as including asm/unistd32.h would cause
      conflicts on __NR_* definitions. With this patch, incorrect
      __NR_compat_syscalls values will result in a build-time error.
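      
      The C table makes the size mismatch a compile-time property: with a
      sized array and designated initializers, any entry in unistd32.h at or
      beyond __NR_compat_syscalls fails the build. In sketch form (compare
      arch/arm64/kernel/sys32.c):
      
          #undef __SYSCALL
          #define __SYSCALL(nr, sym)      [nr] = sym,
      
          /* an [nr] >= __NR_compat_syscalls entry errors out at build time */
          void * const compat_sys_call_table[__NR_compat_syscalls] = {
                  [0 ... __NR_compat_syscalls - 1] = sys_ni_syscall,
          #include <asm/unistd32.h>
          };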
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Suggested-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
  30. 15 Jan 2015 (1 commit)
  31. 25 Nov 2014 (2 commits)
  32. 21 Nov 2014 (2 commits)