1. 29 Dec 2018, 1 commit
  2. 03 Oct 2018, 1 commit
  3. 21 Aug 2018, 1 commit
  4. 06 Aug 2018, 1 commit
    • x86: vdso: Use $LD instead of $CC to link · 379d98dd
      Authored by Alistair Strachan
      The vdso{32,64}.so can fail to link with CC=clang when clang tries to find
      a suitable GCC toolchain to link these libraries with.
      
      /usr/bin/ld: arch/x86/entry/vdso/vclock_gettime.o:
        access beyond end of merged section (782)
      
      This happens because the host environment leaks into the cross compiler
      environment due to the way clang searches for suitable GCC toolchains.
      
      Clang is a retargetable compiler, and each invocation of it must provide
      --target=<something> --gcc-toolchain=<something> to allow it to find the
      correct binutils for cross compilation. These flags had been added to
      KBUILD_CFLAGS, but the vdso code uses CC and not KBUILD_CFLAGS (for various
      reasons) which breaks clang's ability to find the correct linker when cross
      compiling.
      
      Most of the time this goes unnoticed because the host linker is new enough
      to work anyway, or is incompatible and skipped, but this cannot be reliably
      assumed.
      
      This change alters the vdso makefile to use $(LD) directly, which
      bypasses clang and thus the toolchain-search problem. The makefile now
      uses ${CROSS_COMPILE}ld, which is always what we want, and matches the
      method used to link vmlinux.
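
      A minimal sketch of the resulting link rule, assuming the usual kbuild
      variables (illustrative, not the verbatim upstream rule):

        # Drive the link with $(LD) directly instead of through $(CC);
        # the flags become plain linker flags, with no -Wl, glue needed.
        quiet_cmd_vdso = VDSO    $@
              cmd_vdso = $(LD) -nostdlib -o $@ $(VDSO_LDFLAGS) \
                         -T $(filter %.lds,$^) $(filter %.o,$^)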
      
      This also drops references to DISABLE_LTO; this option does not appear
      to be set anywhere, and without knowing its possible values it is not
      clear how to convert it from a CC flag into an LD flag.
      Signed-off-by: Alistair Strachan <astrachan@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: kernel-team@android.com
      Cc: joel@joelfernandes.org
      Cc: Andi Kleen <andi.kleen@intel.com>
      Link: https://lkml.kernel.org/r/20180803173931.117515-1-astrachan@google.com
      379d98dd
  5. 03 Jul 2018, 1 commit
  6. 15 May 2018, 3 commits
  7. 07 Apr 2018, 1 commit
    • kbuild: mark $(targets) as .SECONDARY and remove .PRECIOUS markers · 54a702f7
      Authored by Masahiro Yamada
      GNU Make automatically deletes intermediate files that are updated
      in a chain of pattern rules.
      
      Example 1) %.dtb.o <- %.dtb.S <- %.dtb <- %.dts
      Example 2) %.o <- %.c <- %.c_shipped
      
      A couple of makefiles mark such targets as .PRECIOUS to prevent Make
      from deleting them, but the correct way is to use .SECONDARY.
      
        .SECONDARY
          Prerequisites of this special target are treated as intermediate
          files but are never automatically deleted.
      
        .PRECIOUS
          When make is interrupted during execution, it may delete the target
          file it is updating if the file was modified since make started.
          If you mark the file as precious, make will never delete the file
          if interrupted.
      
      Both can avoid deletion of intermediate files, but the difference is
      the behavior when Make is interrupted; .SECONDARY deletes the target,
      but .PRECIOUS does not.
      
      The use of .PRECIOUS is relatively rare since we do not want to keep
      partially constructed (possibly corrupted) targets.
      
      Another difference is that .PRECIOUS works with pattern rules whereas
      .SECONDARY does not.
      
        .PRECIOUS: $(obj)/%.lex.c
      
      works, but
      
        .SECONDARY: $(obj)/%.lex.c
      
      has no effect.  However, for the reason above, I do not want to use
      .PRECIOUS, which could cause obscure build breakage.
      
      The targets specified as .SECONDARY must be explicit.  $(targets)
      contains all targets that need to include .*.cmd files, so the
      intermediates you want to keep are mostly in there.  Therefore, mark
      $(targets) as .SECONDARY.  This means primary targets are also marked
      as .SECONDARY, but I do not see any drawback to this.
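
      A tiny standalone demo of the effect (hypothetical files, not from the
      kernel tree): without the .SECONDARY line, Make deletes the generated
      foo.c as an intermediate once foo.o is built; with it, foo.c survives.

        targets := foo.o foo.c

        all: foo.o

        # Chain of pattern rules: %.o <- %.c <- %.c_shipped
        %.c: %.c_shipped
        	cp $< $@

        %.o: %.c
        	$(CC) -c -o $@ $<

        # Explicitly listed prerequisites are intermediates that are kept.
        .SECONDARY: $(targets)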
      
      I replaced some .SECONDARY / .PRECIOUS markers with 'targets'.  This
      will make Kbuild search for non-existing .*.cmd files, but this is
      not a noticeable performance issue.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Frank Rowand <frowand.list@gmail.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      54a702f7
  8. 16 Nov 2017, 1 commit
    • kbuild: create object directories simpler and faster · 8a78756e
      Authored by Masahiro Yamada
      For out-of-tree builds, scripts/Makefile.build creates the output
      directories, but it does so inefficiently.
      
      scripts/Makefile.lib calculates obj-dirs as follows:
      
        obj-dirs := $(dir $(multi-objs) $(obj-y))
      
      Notice that $(sort ...) is not used here, so the result is usually as
      many "./" entries as there are objects.
      
      For this long list of duplicated paths, the following command is invoked:
      
        _dummy := $(foreach d,$(obj-dirs), $(shell [ -d $(d) ] || mkdir -p $(d)))
      
      Then, the costly shell command is run over and over again.
      
      I see several points for optimization:
      
      [1] Use $(sort ...) to cut down duplicated paths before passing them
          to the system call.
      [2] Use a single $(shell ...) instead of repeating it with
          $(foreach ...).  This will reduce forking.
      [3] We can calculate obj-dirs more simply.  Most objects are already
          accumulated in $(targets), so $(dir $(targets)) is fine and more
          comprehensive.
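
      A hedged sketch with all three points applied (illustrative, not the
      verbatim upstream code):

        # [3] derive the directories from $(targets), [1] deduplicate them
        # with $(sort ...), and [2] fork a single shell for the whole list.
        obj-dirs := $(sort $(dir $(targets)))
        _dummy   := $(if $(obj-dirs),$(shell mkdir -p $(obj-dirs)))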
      
      I also removed some ugly code from arch/x86/entry/vdso/Makefile, which
      is now really unnecessary.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Tested-by: Douglas Anderson <dianders@chromium.org>
      8a78756e
  9. 02 Nov 2017, 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Authored by Greg Kroah-Hartman
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the
      'GPL-2.0' SPDX license identifier.  The SPDX identifier is a legally
      binding shorthand, which can be used instead of the full boilerplate
      text.
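
      In a Makefile such as this one, the identifier is a single comment on
      the first line; for GPL-2.0-only files the added line is:

        # SPDX-License-Identifier: GPL-2.0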
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset
      of the use cases:
       - the file had no licensing information in it,
       - the file was a */uapi/* one with no licensing information in it,
       - the file was a */uapi/* one with existing licensing information.
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to apply to a
      file was done in a spreadsheet of side-by-side results from the output
      of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files created by Philippe Ombredanne.  Philippe prepared the
      base worksheet and did an initial spot review of a few thousand files.
      
      The 4.13 kernel was the starting point of the analysis, with 60,537
      files assessed.  Kate Stewart did a file-by-file comparison of the
      scanner results in the spreadsheet to determine which SPDX license
      identifier(s) should be applied to each file.  She confirmed any
      determination that was not immediately clear with lawyers working with
      the Linux Foundation.
      
      The criteria used to select files for SPDX license identifier tagging
      were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained
         >5 lines of source.
       - The file already had some variant of a license header in it (even
         if <5 lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when neither scanner could find any license traces, the file was
         considered to have no license information in it, and the top-level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If the file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note", otherwise it was "GPL-2.0".  The results were:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL-family license was found in the file or if it had no
         licensing in it (per the prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
      identifiers to apply to the source files, with confirmation in some
      cases by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.
      The Windriver scanner is based in part on an older version of
      FOSSology, so the two are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with the SPDX license identifiers in
      the files he inspected.  For the non-uapi files, Thomas did random spot
      checks in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to
      have copy/paste license identifier errors, and they have been fixed to
      reflect the correct identifier.
      
      Additionally, Philippe spent 10 hours doing a detailed manual
      inspection and review of the 12,461 patched files from the initial
      patch version, with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to each file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types).  Finally, Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b2441318
  10. 25 Jul 2016, 1 commit
  11. 14 Jun 2016, 1 commit
  12. 08 Jun 2016, 1 commit
    • GCC plugin infrastructure · 6b90bd4b
      Authored by Emese Revfy
      This patch makes it possible to build the whole kernel with GCC
      plugins.  It was ported from grsecurity/PaX.  The infrastructure
      supports building out-of-tree modules and building in a separate
      directory.  Cross-compilation is supported too.  Currently the x86,
      arm, arm64 and uml architectures enable plugins.
      
      The gcc plugins live in scripts/gcc-plugins; a plugin can be a single
      file or a directory there.  The plugins are compiled with these
      options:
       * -fno-rtti: gcc is compiled with this option so the plugins must use it too
       * -fno-exceptions: this is inherited from gcc too
       * -fasynchronous-unwind-tables: this is inherited from gcc too
       * -ggdb: it is useful for debugging a plugin (better backtrace on internal
          errors)
       * -Wno-narrowing: to suppress warnings from gcc headers (ipa-utils.h)
       * -Wno-unused-variable: to suppress warnings from gcc headers (gcc_version
          variable, plugin-version.h)
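
      Collected as a Make fragment, the options above amount to something
      like this (the variable name is illustrative, not the upstream one):

        PLUGIN_HOSTCXXFLAGS := -fno-rtti -fno-exceptions \
                               -fasynchronous-unwind-tables -ggdb \
                               -Wno-narrowing -Wno-unused-variable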
      
      The infrastructure introduces a new Makefile target called gcc-plugins.
      It supports all gcc versions from 4.5 to 6.0.  The scripts/gcc-plugin.sh
      script chooses the proper host compiler (gcc-4.7 plugins can be built
      by either gcc or g++).  This script also checks the availability of the
      headers included by scripts/gcc-plugins/gcc-common.h.
      
      The gcc-common.h header contains frequently included headers for GCC plugins
      and it has a compatibility layer for the supported gcc versions.
      
      The gcc-generate-*-pass.h headers automatically generate the registration
      structures for GIMPLE, SIMPLE_IPA, IPA and RTL passes.
      
      Note that 'make clean' keeps the *.so files (only the distclean or
      mrproper targets remove them all) because they are needed for
      out-of-tree modules.
      
      Based on work created by the PaX Team.
      Signed-off-by: Emese Revfy <re.emese@gmail.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Michal Marek <mmarek@suse.com>
      6b90bd4b
  13. 20 Apr 2016, 1 commit
  14. 23 Mar 2016, 1 commit
    • kernel: add kcov code coverage · 5c9a8750
      Authored by Dmitry Vyukov
      kcov provides code coverage collection for coverage-guided fuzzing
      (randomized testing).  Coverage-guided fuzzing is a testing technique
      that uses coverage feedback to determine new interesting inputs to a
      system.  A notable user-space example is AFL
      (http://lcamtuf.coredump.cx/afl/).  However, this technique is not
      widely used for kernel testing due to missing compiler and kernel
      support.
      
      kcov does not aim to collect as much coverage as possible.  It aims to
      collect more or less stable coverage that is a function of syscall
      inputs.  To achieve this goal it does not collect coverage in soft/hard
      interrupts, and instrumentation of some inherently non-deterministic or
      non-interesting parts of the kernel is disabled (e.g. the scheduler and
      locking).
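
      On the kbuild side this becomes a per-directory opt-out; a hedged
      sketch assuming the KCOV_INSTRUMENT hook (simplified, not the verbatim
      upstream code):

        # In a Makefile that must not be instrumented (e.g. the vdso one):
        KCOV_INSTRUMENT := n

        # In scripts/Makefile.lib (simplified): add the coverage flag to
        # every object whose directory did not opt out.
        ifeq ($(CONFIG_KCOV),y)
        CFLAGS_KCOV := -fsanitize-coverage=trace-pc
        _c_flags    += $(if $(filter n,$(KCOV_INSTRUMENT)),,$(CFLAGS_KCOV))
        endif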
      
      Currently there is a single coverage collection mode (tracing), but the
      API anticipates additional collection modes.  Initially I also
      implemented a second mode which exposes coverage in a fixed-size hash
      table of counters (what Quentin used in his original patch).  I've
      dropped the second mode for simplicity.
      
      This patch adds the necessary support on the kernel side.  The
      complementary compiler support was added in gcc revision 231296.
      
      We've used this support to build the syzkaller system call fuzzer,
      which has found 90 kernel bugs in just 2 months:
      
        https://github.com/google/syzkaller/wiki/Found-Bugs
      
      We've also found 30+ bugs in our internal systems with syzkaller.
      Another (yet unexplored) direction where kcov coverage would greatly
      help is more traditional "blob mutation".  For example, mounting a
      random blob as a filesystem, or receiving a random blob over the wire.
      
      Why not gcov?  A typical fuzzing loop looks as follows: (1) reset
      coverage, (2) execute a bit of code, (3) collect coverage, repeat.
      Typical coverage can be just a dozen basic blocks (e.g. an invalid
      input).  In such a context gcov becomes prohibitively expensive, as the
      reset/collect steps depend on the total number of basic blocks/edges in
      the program (in the case of the kernel it is about 2M).  The cost of
      kcov depends only on the number of executed basic blocks/edges.  On top
      of that, the kernel requires per-thread coverage because there are
      always background threads and unrelated processes that also produce
      coverage.  With inlined gcov instrumentation, per-thread coverage is
      not possible.
      
      kcov exposes kernel PCs and control flow to user space, which is
      insecure.  But debugfs should not be mapped as user accessible.
      
      Based on a patch by Quentin Casasnovas.
      
      [akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
      [akpm@linux-foundation.org: unbreak allmodconfig]
      [akpm@linux-foundation.org: follow x86 Makefile layout standards]
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: syzkaller <syzkaller@googlegroups.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tavis Ormandy <taviso@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: David Drysdale <drysdale@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c9a8750
  15. 29 Feb 2016, 1 commit
    • objtool: Mark non-standard object files and directories · c0dd6716
      Authored by Josh Poimboeuf
      Code which runs outside the kernel's normal mode of operation often does
      unusual things which can cause a static analysis tool like objtool to
      emit false positive warnings:
      
       - boot image
       - vdso image
       - relocation
       - realmode
       - efi
       - head
       - purgatory
       - modpost
      
      Set OBJECT_FILES_NON_STANDARD for their related files and directories,
      which will tell objtool to skip checking them.  It's ok to skip them
      because they don't affect runtime stack traces.
      
      Also skip the following code which does the right thing with respect to
      frame pointers, but is too "special" to be validated by a tool:
      
       - entry
       - mcount
      
      Also skip the test_nx module because it modifies its exception handling
      table at runtime, which objtool can't understand.  Fortunately it's
      just a test module so it doesn't matter much.
      
      Currently objtool is the only user of OBJECT_FILES_NON_STANDARD, but it
      might eventually be useful for other tools.
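
      A hedged sketch of both forms of the marker (per-object variables
      append the object file name):

        # Whole directory, e.g. in the vdso Makefile:
        OBJECT_FILES_NON_STANDARD := y

        # Single object, e.g. the test_nx module mentioned above:
        OBJECT_FILES_NON_STANDARD_test_nx.o := y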
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Bernd Petrovitsch <bernd@petrovitsch.priv.at>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris J Arges <chris.j.arges@canonical.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Pedro Alves <palves@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: live-patching@vger.kernel.org
      Link: http://lkml.kernel.org/r/366c080e3844e8a5b6a0327dc7e8c2b90ca3baeb.1456719558.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c0dd6716
  16. 21 Jan 2016, 1 commit
  17. 09 Oct 2015, 1 commit
  18. 07 Oct 2015, 1 commit
  19. 08 Aug 2015, 1 commit
    • x86/vdso: Emit a GNU hash · 6b7e2654
      Authored by Andy Lutomirski
      Some dynamic loaders may be slightly faster if a GNU hash is
      available.  Strangely, this seems to have no effect at all on
      the vdso size.
      
      This is unlikely to have any measurable effect on the time it
      takes to resolve vdso symbols (since there are so few of them).
      In some contexts, it can be a win for a different reason: if
      every DSO has a GNU hash section, then libc can avoid
      calculating SysV hashes at all.  Both musl and glibc appear to
      have this optimization.
      
      It's plausible that this breaks some ancient glibc version.  If
      so, then, depending on what glibc versions break, we could
      either require COMPAT_VDSO for them or consider reverting.
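
      The linker-flag change itself is small; a sketch assuming the kbuild
      cc-ldoption probe (the vdso previously requested only the SysV style):

        # Emit both .hash and .gnu.hash, falling back silently where the
        # toolchain does not understand the option.
        VDSO_LDFLAGS += $(call cc-ldoption, -Wl$(comma)--hash-style=both)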
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Isaac Dunham <ibid.ag@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nathan Lynch <nathan_lynch@mentor.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: musl@lists.openwall.com <musl@lists.openwall.com>
      Link: http://lkml.kernel.org/r/fd56cc057a2d62ab31c56a48d04fccb435b3fd4f.1438897382.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6b7e2654
  20. 06 Jul 2015, 1 commit
  21. 04 Jun 2015, 1 commit
    • x86/asm/entry, x86/vdso: Move the vDSO code to arch/x86/entry/vdso/ · d603c8e1
      Authored by Ingo Molnar
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d603c8e1
  22. 11 May 2015, 1 commit
  23. 31 Mar 2015, 3 commits
  24. 14 Feb 2015, 1 commit
    • x86_64: add KASan support · ef7f0d6a
      Authored by Andrey Ryabinin
      This patch adds the arch-specific code for the kernel address sanitizer.
      
      16TB of virtual address space is used for shadow memory.  It is located
      in the range [ffffec0000000000 - fffffc0000000000], between vmemmap and
      the %esp fixup stacks.
      
      At an early stage we map the whole shadow region with the zero page.
      Later, after pages are mapped into the direct-mapping address range, we
      unmap zero pages from the corresponding shadow (see kasan_map_shadow())
      and allocate and map real shadow memory, reusing the
      vmemmap_populate() function.
      
      Also, replace __pa with __pa_nodebug before the shadow is initialized.
      With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
      (__phys_addr); __phys_addr is instrumented, so __asan_load could be
      called before the shadow area is initialized.
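
      This commit presumably appears in this Makefile's history because
      userspace-targeted objects like the vdso must not be instrumented
      (their addresses have no shadow); a sketch of the opt-out, assuming
      the KASAN_SANITIZE kbuild hook:

        # Shadow memory covers only kernel addresses, so disable the
        # sanitizer for anything that runs in userspace.
        KASAN_SANITIZE := n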
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jim Davis <jim.epost@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ef7f0d6a
  25. 29 Jan 2015, 1 commit
  26. 12 Jul 2014, 1 commit
  27. 25 Jun 2014, 1 commit
  28. 21 Jun 2014, 1 commit
  29. 20 Jun 2014, 1 commit
    • x86/vdso: Improve the fake section headers · bfad381c
      Authored by Andy Lutomirski
      Fully stripping the vDSO has other unfortunate side effects:
      
       - binutils is unable to find ELF notes without a SHT_NOTE section.
      
       - Even elfutils has trouble: it can find ELF notes without a section
         table at all, but if a section table is present, it won't look for
         PT_NOTE.
      
       - gdb wants section names to match between stripped DSOs and their
         symbols; otherwise it will corrupt symbol addresses.
      
      We're also breaking the rules: section 0 is supposed to be SHT_NULL.
      
      Fix these problems by building a better fake section table.  While
      we're at it, we might as well let buggy Go versions keep working well
      by giving the SHT_DYNSYM entry the correct size.
      
      This is a bit unfortunate: it adds quite a bit of size to the vdso
      image.
      
      If/when binutils improves and the improved versions become widespread,
      it would be worth considering dropping most of this.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Link: http://lkml.kernel.org/r/0e546a5eeaafdf1840e6ee654a55c1e727c26663.1403129369.git.luto@amacapital.net
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      bfad381c
  30. 14 Jun 2014, 1 commit
  31. 13 Jun 2014, 1 commit
  32. 07 Jun 2014, 1 commit
  33. 06 May 2014, 1 commit
    • x86, vdso: Reimplement vdso.so preparation in build-time C · 6f121e54
      Authored by Andy Lutomirski
      Currently, vdso.so files are prepared and analyzed by a combination
      of objcopy, nm, some linker script tricks, and some simple ELF
      parsers in the kernel.  Replace all of that with plain C code that
      runs at build time.
      
      All five vdso images now generate .c files that are compiled and
      linked into the kernel image.
      
      This should cause only one userspace-visible change: the loaded vDSO
      images are stripped more heavily than they used to be.  Everything
      outside the loadable segment is dropped.  In particular, this causes
      the section table and section name strings to be missing.  This
      should be fine: real dynamic loaders don't load or inspect these
      tables anyway.  The result is roughly equivalent to eu-strip's
      --strip-sections option.
      
      The purpose of this change is to enable the vvar and hpet mappings
      to be moved to the page following the vDSO load segment.  Currently,
      it is possible for the section table to extend into the page after
      the load segment, so, if we map it, it risks overlapping the vvar or
      hpet page.  This happens whenever the load segment is just under a
      multiple of PAGE_SIZE.
      
      The only real subtlety here is that the old code had a C file with
      inline assembler that did 'call VDSO32_vsyscall' and a linker script
      that defined 'VDSO32_vsyscall = __kernel_vsyscall'.  This most
      likely worked by accident: the linker script entry defines a symbol
      associated with an address as opposed to an alias for the real
      dynamic symbol __kernel_vsyscall.  That caused ld to relocate the
      reference at link time instead of leaving an interposable dynamic
      relocation.  Since the VDSO32_vsyscall hack is no longer needed, I
      now use 'call __kernel_vsyscall', and I added -Bsymbolic to make it
      work.  vdso2c will generate an error and abort the build if the
      resulting image contains any dynamic relocations, so we won't
      silently generate bad vdso images.
      
      (Dynamic relocations are a problem because nothing will even attempt
      to relocate the vdso.)
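
      A hedged sketch of the kind of rule this adds (names follow the
      description above; not the verbatim rule):

        # Translate each linked vdso image into a C file that is then
        # compiled and linked into the kernel proper.
        quiet_cmd_vdso2c = VDSO2C  $@
              cmd_vdso2c = $(obj)/vdso2c $< $@

        $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso2c FORCE
        	$(call if_changed,vdso2c)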
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Link: http://lkml.kernel.org/r/2c4fcf45524162a34d87fdda1eb046b2a5cecee7.1399317206.git.luto@amacapital.net
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      6f121e54
  34. 26 Mar 2014, 1 commit
  35. 19 Mar 2014, 2 commits