1. 11 Feb 2016, 1 commit
    • ARM: 8513/1: xip: Move XIP linking to a separate file · 538bf469
      Committed by Chris Brandt
      When building an XIP kernel, the linker script needs to be substantially
      different from a conventional kernel's. Over time, it has been difficult to
      maintain both the XIP and non-XIP layouts in one linker script. Therefore,
      this patch separates the two layouts into two completely different
      files.
      
      The new linker script is essentially a straight copy of the current script
      with all the non-CONFIG_XIP_KERNEL portions removed.
      
      Additionally, all CONFIG_XIP_KERNEL portions have been removed from the
      existing linker script...never to return again.
      
      It should be noted that this does not fix any current XIP issues, but
      rather is the first step in fixing them properly with subsequent patches
      (a sketch of the split layout follows this entry).
      Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      538bf469
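      A rough sketch of the layout an XIP-only script has to describe, with text
      executing in place from flash and writable data copied to RAM (the section
      contents and macro usage here are illustrative, not the literal contents
      of the new file):

          /* Sketch: XIP layout - code runs from ROM, data is copied to RAM */
          SECTIONS
          {
              /* text and rodata are linked at, and run from, the flash address */
              . = XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR);
              .text : { *(.text) }
              .rodata : { *(.rodata) }
              _etext = .;

              /* .data is linked at its RAM address but loaded (AT) right after
               * the text in flash; head code copies it into RAM at boot */
              . = PAGE_OFFSET + TEXT_OFFSET;
              .data : AT(_etext) { *(.data) }
          }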
  2. 08 Feb 2016, 1 commit
    • ARM: 8501/1: mm: flip priority of CONFIG_DEBUG_RODATA · 25362dc4
      Committed by Kees Cook
      The use of CONFIG_DEBUG_RODATA is generally seen as an essential part of
      kernel self-protection:
      http://www.openwall.com/lists/kernel-hardening/2015/11/30/13
      Additionally, its name has grown to mean things beyond just rodata. To
      get ARM closer to this, we ought to rearrange the names of the configs
      that control how the kernel protects its memory. What was called
      CONFIG_ARM_KERNMEM_PERMS is really doing the work that other architectures
      call CONFIG_DEBUG_RODATA.
      
      This redefines CONFIG_DEBUG_RODATA to actually do the bulk of the
      ROing (and NXing). In place of the old CONFIG_DEBUG_RODATA, use
      CONFIG_DEBUG_ALIGN_RODATA, since that is what the option does: it adds
      the section alignment needed to make rodata explicitly NX, because arm,
      unlike arm64, does not split the page tables without _ALIGN_RODATA.
      
      Also adds human-readable names to the sections so I could more easily
      debug my typos, and makes CONFIG_DEBUG_RODATA default to "y" for CPU_V7.
      
      Results in /sys/kernel/debug/kernel_page_tables for each config state:
      
       # CONFIG_DEBUG_RODATA is not set
       # CONFIG_DEBUG_ALIGN_RODATA is not set
      
      ---[ Kernel Mapping ]---
      0x80000000-0x80900000           9M     RW x  SHD
      0x80900000-0xa0000000         503M     RW NX SHD
      
       CONFIG_DEBUG_RODATA=y
       CONFIG_DEBUG_ALIGN_RODATA=y
      
      ---[ Kernel Mapping ]---
      0x80000000-0x80100000           1M     RW NX SHD
      0x80100000-0x80700000           6M     ro x  SHD
      0x80700000-0x80a00000           3M     ro NX SHD
      0x80a00000-0xa0000000         502M     RW NX SHD
      
       CONFIG_DEBUG_RODATA=y
       # CONFIG_DEBUG_ALIGN_RODATA is not set
      
      ---[ Kernel Mapping ]---
      0x80000000-0x80100000           1M     RW NX SHD
      0x80100000-0x80a00000           9M     ro x  SHD
      0x80a00000-0xa0000000         502M     RW NX SHD
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Laura Abbott <labbott@fedoraproject.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      25362dc4
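      The dumps above show what the alignment option buys: with
      CONFIG_DEBUG_ALIGN_RODATA the text/rodata boundaries land on 1 MiB section
      boundaries, so each section can carry its own permissions. In linker
      script terms the idea is roughly the following (a hedged sketch, not the
      literal diff; ALIGN_RODATA is an illustrative macro name):

          /* Sketch: pad rodata boundaries to the 1 MiB section size so rodata
           * can be mapped NX without splitting the section page tables */
          #ifdef CONFIG_DEBUG_ALIGN_RODATA
          #define ALIGN_RODATA . = ALIGN(1 << SECTION_SHIFT);
          #else
          #define ALIGN_RODATA
          #endif

              ALIGN_RODATA
              .rodata : { *(.rodata) *(.rodata.*) }
              ALIGN_RODATA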
  3. 30 Mar 2015, 1 commit
  4. 28 Mar 2015, 1 commit
  5. 27 Mar 2015, 1 commit
  6. 25 Mar 2015, 1 commit
    • ARM: kvm: assert on HYP section boundaries not actual code size · 12eb3e83
      Committed by Ard Biesheuvel
      Using ASSERT() with an expression that involves a symbol that
      is only supplied through a PROVIDE() definition in the linker
      script itself is apparently not supported by some older versions
      of binutils.
      
      So instead, rewrite the expression so that only the section
      boundaries __hyp_idmap_text_start and __hyp_idmap_text_end
      are used. Note that this reverts the fix in 06f75a1f
      ("ARM, arm64: kvm: get rid of the bounce page") for the ASSERT()
      being triggered erroneously when unrelated linker-emitted veneers
      happen to end up in the HYP idmap region.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      12eb3e83
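      Expressed only in terms of the section boundary symbols, the reworked
      check could look something like this (a sketch; the exact expression in
      the commit may differ):

          /* Sketch: assert on the section boundaries themselves rather than on
           * a PROVIDE()d size symbol that older binutils cannot evaluate */
          ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & ~(PAGE_SIZE - 1))
                 <= PAGE_SIZE, "HYP init code too big or misaligned")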
  7. 23 Mar 2015, 1 commit
  8. 20 Mar 2015, 1 commit
    • ARM, arm64: kvm: get rid of the bounce page · 06f75a1f
      Committed by Ard Biesheuvel
      The HYP init bounce page is a runtime construct that ensures that the
      HYP init code does not cross a page boundary. However, this is something
      we can do perfectly well at build time, by aligning the code appropriately.
      
      For arm64, we just align to 4 KB, and enforce that the code size is less
      than 4 KB, regardless of the chosen page size.
      
      For ARM, the whole code is less than 256 bytes, so we tweak the linker
      script to align at a power-of-2 upper bound of the code size.
      
      Note that this also fixes a benign off-by-one error in the original bounce
      page code, where a bounce page would be allocated unnecessarily if the code
      was exactly 1 page in size.
      
      On ARM, it also fixes an issue with very large kernels reported by Arnd
      Bergmann, where stub sections with linker-emitted veneers could erroneously
      trigger the size/alignment ASSERT() in the linker script.
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      06f75a1f
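      The build-time replacement for the bounce page is just alignment plus a
      size assertion. On ARM, where the code is under 256 bytes, it might look
      roughly like this (illustrative bound and symbol placement):

          /* Sketch: align HYP init code to a power-of-2 upper bound on its
           * size, so it can never straddle a page boundary */
          . = ALIGN(0x100);            /* assumed bound: code < 256 bytes */
          .hyp.idmap.text : {
              __hyp_idmap_text_start = .;
              *(.hyp.idmap.text)
              __hyp_idmap_text_end = .;
          }
          ASSERT(__hyp_idmap_text_end - __hyp_idmap_text_start <= 0x100,
                 "HYP init code grew past its alignment bound")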
  9. 17 Oct 2014, 2 commits
  10. 03 Oct 2014, 1 commit
  11. 18 Jul 2014, 1 commit
  12. 01 Aug 2013, 1 commit
  13. 04 Jun 2013, 1 commit
  14. 29 Apr 2013, 1 commit
  15. 24 Jan 2013, 1 commit
  16. 16 Dec 2012, 1 commit
  17. 04 Nov 2012, 1 commit
  18. 23 Jun 2012, 1 commit
    • ARM: 7428/1: Prevent KALLSYM size mismatch on ARM. · 9973290c
      Committed by David Brown
      ARM builds seem to be plagued by an occasional build error:
      
          Inconsistent kallsyms data
          This is a bug - please report about it
          Try "make KALLSYMS_EXTRA_PASS=1" as a workaround
      
      The problem has to do with alignment of some sections by the linker.
      The kallsyms data is built in two passes by first linking the kernel
      without it, and then linking the kernel again with the symbols
      included.  Normally, this just shifts the symbols, without changing
      their order, and the compression used by the kallsyms gives the same
      result.
      
      On non-SMP, the per-cpu data is empty.  Depending on where the
      alignment ends up, it can come out as either:
      
         +-------------------+
         | last text segment |
         +-------------------+
         /* padding */
   +-------------------+     <- L1_CACHE_BYTES alignment
         | per cpu (empty)   |
         +-------------------+
      __per_cpu_end:
         /* padding */
      __data_loc:
         +-------------------+     <- THREAD_SIZE alignment
         | data              |
         +-------------------+
      
      or
      
         +-------------------+
         | last text segment |
         +-------------------+
         /* padding */
   +-------------------+     <- L1_CACHE_BYTES alignment
         | per cpu (empty)   |
         +-------------------+
      __per_cpu_end:
         /* no padding */
      __data_loc:
         +-------------------+     <- THREAD_SIZE alignment
         | data              |
         +-------------------+
      
      if the alignment satisfies both.  Because symbols that have the same
      address are sorted by 'nm -n', the second case will be in a different
      order than the first case.  This changes the compression, changing the
      size of the kallsym data, causing the build failure.
      
      The KALLSYMS_EXTRA_PASS=1 workaround usually works, but it is still
      possible for the alignment to change between the second and third
      pass.  It's probably even possible for it to never reach a fixed point.
      
      The problem only occurs on non-SMP, when the per-cpu data is empty,
      and when the data segment has alignment (and immediately follows the
      text segments).  Fix this by only including the per_cpu section on
      SMP, when it is not empty.
      Signed-off-by: David Brown <davidb@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      9973290c
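      The fix is small: emit the percpu output section only on SMP, so an empty
      section can no longer flip the padding before __data_loc between kallsyms
      passes. In the ARM linker script this amounts to roughly:

          /* Sketch: no percpu output section on UP builds, where it would be
           * empty and could change symbol order between link passes */
          #ifdef CONFIG_SMP
              PERCPU_SECTION(L1_CACHE_BYTES)
          #endif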
  19. 10 Feb 2012, 1 commit
  20. 23 Jan 2012, 2 commits
  21. 06 Dec 2011, 1 commit
  22. 17 Oct 2011, 1 commit
    • ARM: 7017/1: Use generic BUG() handler · 87e040b6
      Committed by Simon Glass
      ARM uses its own BUG() handler, which makes its output slightly different
      from that of other architectures.
      
      One of the problems is that the ARM implementation doesn't report the function
      containing the BUG(), but always reports the PC as being in __bug(). The generic
      implementation doesn't have this problem.
      
      Currently we get something like:
      
      kernel BUG at fs/proc/breakme.c:35!
      Unable to handle kernel NULL pointer dereference at virtual address 00000000
      ...
      PC is at __bug+0x20/0x2c
      
      With this patch it displays:
      
      kernel BUG at fs/proc/breakme.c:35!
      Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
      ...
      PC is at write_breakme+0xd0/0x1b4
      
      This implementation uses an undefined instruction to implement BUG, and sets up
      a bug table containing the relevant information. Many versions of gcc do not
      support %c properly for ARM (inserting a # when they shouldn't) so we work
      around this using distasteful macro magic.
      
      v1: Initial version to replace existing ARM BUG() implementation with something
      more similar to other architectures.
      
      v2: Add Thumb support, remove backtrace whitespace output changes. Change to
      use macros instead of requiring the asm %d flag to work (thanks to
      Dave Martin <dave.martin@linaro.org>)
      
      v3: Remove old BUG() implementation in favor of this one.
      Remove the Backtrace: message (will submit this separately).
      Use ARM_EXIT_KEEP() so that some architectures can dump exit text at link time
      thanks to Stephen Boyd <sboyd@codeaurora.org> (although since we always
      define GENERIC_BUG this might be academic.)
      Rebase to linux-2.6.git master.
      
      v4: Allow BUGS in modules (these were not reported correctly in v3)
      (thanks to Stephen Boyd <sboyd@codeaurora.org> for suggesting that.)
      Remove __bug() as this is no longer needed.
      
      v5: Add %progbits as the section flags.
      Signed-off-by: Simon Glass <sjg@chromium.org>
      Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
      Tested-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      87e040b6
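      On the linker script side, a generic BUG() implementation only needs the
      per-BUG() records gathered into one table that lib/bug.c can search to map
      a faulting PC back to a file and line. Schematically (a sketch of the
      generic pattern, not the ARM-specific diff):

          /* Sketch: collect the records emitted next to each undefined
           * instruction into a single __bug_table section */
          __bug_table : {
              __start___bug_table = .;
              KEEP(*(__bug_table))
              __stop___bug_table = .;
          }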
  23. 21 Sep 2011, 1 commit
    • ARM: fix vmlinux.lds.S discarding sections · 6760b109
      Committed by Russell King
      We are seeing linker errors caused by sections being discarded, despite
      the linker script trying to keep them.  The result is (eg):
      
      `.exit.text' referenced in section `.alt.smp.init' of drivers/built-in.o: defined in discarded section `.exit.text' of drivers/built-in.o
      `.exit.text' referenced in section `.alt.smp.init' of net/built-in.o: defined in discarded section `.exit.text' of net/built-in.o
      
      This is the relevant part of the linker script (reformatted to make it
      clearer):
      | SECTIONS
      | {
      | /*
      | * unwind exit sections must be discarded before the rest of the
      | * unwind sections get included.
      | */
      | /DISCARD/ : {
      | *(.ARM.exidx.exit.text)
      | *(.ARM.extab.exit.text)
      | }
      | ...
      | .exit.text : {
      | *(.exit.text)
      | *(.memexit.text)
      | }
      | ...
      | /DISCARD/ : {
      | *(.exit.text)
      | *(.memexit.text)
      | *(.exit.data)
      | *(.memexit.data)
      | *(.memexit.rodata)
      | *(.exitcall.exit)
      | *(.discard)
      | *(.discard.*)
      | }
      | }
      
      Now, this is what the linker manual says about discarded output sections:
      
      |    The special output section name `/DISCARD/' may be used to discard
      | input sections.  Any input sections which are assigned to an output
      | section named `/DISCARD/' are not included in the output file.
      
      No questions, no exceptions. It doesn't say "unless they are listed
      before the /DISCARD/ section." Now, this is what asm-generic/vmlinux.lds.S
      says:
      | /*
      |  * Default discarded sections.
      |  *
      |  * Some archs want to discard exit text/data at runtime rather than
      |  * link time due to cross-section references such as alt instructions,
      |  * bug table, eh_frame, etc. DISCARDS must be the last of output
      |  * section definitions so that such archs put those in earlier section
      |  * definitions.
      |  */
      
      And guess what - the list _always_ includes .exit.text etc.
      
      Now, what's actually happening is that the linker is reading the script,
      and it finds the first /DISCARD/ output section at the beginning of the
      script. It continues reading the script, and finds the 'DISCARD' macro
      at the end, which having been postprocessed results in another
      /DISCARD/ output section. As the linker already contains the earlier
      /DISCARD/ output section, it adds it to that existing section, so it
      effectively is placed at the start. This can be seen by using the -M
      option to ld:
      
      | Linker script and memory map
      |
      |                 0xc037c080                jiffies = jiffies_64
      |
      | /DISCARD/
      |  *(.ARM.exidx.exit.text)
      |  *(.ARM.extab.exit.text)
      |  *(.exit.text)
      |  *(.memexit.text)
      |  *(.exit.data)
      |  *(.memexit.data)
      |  *(.memexit.rodata)
      |  *(.exitcall.exit)
      |  *(.discard)
      |  *(.discard.*)
      |
      |                 0xc0008000                . = 0xc0008000
      |
      | .head.text      0xc0008000      0x1d0
      |                 0xc0008000                _text = .
      |  *(.head.text)
      |  .head.text     0xc0008000      0x1d0 arch/arm/kernel/head.o
      |                 0xc0008000                stext
      |
      | .text           0xc0008200   0x2d78d0
      |                 0xc0008200                _stext = .
      |                 0xc0008200                __exception_text_start = .
      |  *(.exception.text)
      |  .exception.text
      | ...
      
      As you can see, all the discarded sections are grouped together - and
      as a result of it being the first output section, they all appear before
      any other section.
      
      The result is that not only is the unwind information discarded (as
      intended), but also the .exit.text, despite us wanting to have the
      .exit.text preserved.
      
      We can't move the unwind information elsewhere, because it'll then be
      included even when we do actually discard the .exit.text (and similar)
      sections.
      
      So, work around this by avoiding the generic DISCARDS macro, and instead
      conditionalize the sections to be discarded ourselves.  This avoids the
      ambiguity in how the linker assigns input sections to output sections,
      making our script less dependent on undocumented linker behaviour.
      Reported-by: Rob Herring <robherring2@gmail.com>
      Tested-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      6760b109
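      The workaround keeps a single, explicit /DISCARD/ list and adds entries to
      it conditionally, instead of also pulling in the generic DISCARDS macro at
      the end. In outline (a sketch of the shape, not the exact final list):

          /* Sketch: one explicit discard list; exit text and .alt.smp.init
           * are only discarded when nothing still references them */
          /DISCARD/ : {
              *(.ARM.exidx.exit.text)
              *(.ARM.extab.exit.text)
          #ifndef CONFIG_SMP_ON_UP
              *(.alt.smp.init)
          #endif
              *(.discard)
              *(.discard.*)
          }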
  24. 08 Jul 2011, 5 commits
  25. 25 Mar 2011, 1 commit
    • percpu: Always align percpu output section to PAGE_SIZE · 0415b00d
      Committed by Tejun Heo
      The percpu allocator honors alignment requests up to PAGE_SIZE, and both the
      percpu addresses in the percpu address space and the translated kernel
      addresses should be aligned accordingly.  The calculation of the
      former depends on the alignment of the percpu output section in the kernel
      image.
      
      The linker script macros PERCPU_VADDR() and PERCPU() are used to
      define this output section and the latter takes @align parameter.
      Several architectures are using @align smaller than PAGE_SIZE breaking
      percpu memory alignment.
      
      This patch removes @align parameter from PERCPU(), renames it to
      PERCPU_SECTION() and makes it always align to PAGE_SIZE.  While at it,
      add PCPU_SETUP_BUG_ON() checks such that alignment problems are
      reliably detected, and remove the percpu alignment comment recently added
      in workqueue.c, as the condition would trigger a BUG well before reaching
      there.
      
      For um, this patch raises the alignment of the percpu area.  As the area
      is in .init, there shouldn't be any noticeable difference.
      
      This problem was discovered by David Howells while debugging boot
      failure on mn10300.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Mike Frysinger <vapier@gentoo.org>
      Cc: uclinux-dist-devel@blackfin.uclinux.org
      Cc: David Howells <dhowells@redhat.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: user-mode-linux-devel@lists.sourceforge.net
      0415b00d
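      For the architecture linker scripts the change is mechanical: the @align
      argument disappears and the macro itself pins the section to PAGE_SIZE.
      Roughly (SOME_ALIGN stands in for whatever a given arch used to pass):

          /* Sketch: before - an arch could pass @align < PAGE_SIZE and
           * silently break percpu address translation */
          PERCPU(L1_CACHE_BYTES, SOME_ALIGN)

          /* after - renamed, PAGE_SIZE alignment is unconditional, and only
           * the cacheline size remains a per-arch parameter */
          PERCPU_SECTION(L1_CACHE_BYTES)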
  26. 22 Feb 2011, 2 commits
  27. 18 Feb 2011, 1 commit
    • ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching · dc21af99
      Committed by Russell King
      This idea came from Nicolas, Eric Miao produced an initial version,
      which was then rewritten into this.
      
      Patch the physical-to-virtual translations at runtime.  Because we modify
      the code, this is incompatible with XIP kernels, but it allows us
      to achieve the translation with minimal loss of performance.
      
      As many translations are of the form:
      
      	physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
      	virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)
      
      we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
      instruction for __phys_to_virt().  We calculate at run time (PHYS_OFFSET
      - PAGE_OFFSET) by comparing the address prior to MMU initialization with
      where it should be once the MMU has been initialized, and place this
      constant into the above add/sub instructions.
      
      Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
      PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for
      the C-mode PHYS_OFFSET variable definition to use.
      
      At present, we are unable to support Realview with Sparsemem enabled
      as this uses a complex mapping function, and MSM as this requires a
      constant which will not fit in our math instruction.
      
      Add a module version magic string for this feature to prevent
      incompatible modules being loaded.
      Tested-by: Tony Lindgren <tony@atomide.com>
      Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      dc21af99
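      The linker script's contribution to this scheme is a table recording the
      location of every patchable add/sub instruction, so early boot code can
      walk it and rewrite the immediates once (PHYS_OFFSET - PAGE_OFFSET) is
      known. A sketch using the symbol names the ARM port exposes:

          /* Sketch: init code walks [__pv_table_begin, __pv_table_end) and
           * patches the measured constant into each recorded instruction */
          .init.pv_table : {
              __pv_table_begin = .;
              *(.pv_table)
              __pv_table_end = .;
          }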
  28. 25 Jan 2011, 1 commit
    • percpu: align percpu readmostly subsection to cacheline · 19df0c2f
      Committed by Tejun Heo
      Currently the percpu readmostly subsection may share cachelines with other
      percpu subsections, which may result in unnecessary cacheline bouncing
      and performance degradation.
      
      This patch adds a @cacheline parameter to the PERCPU() and PERCPU_VADDR()
      linker macros, and makes each arch's linker script specify its cacheline
      size and use it to align percpu subsections.
      
      This is based on Shaohua's x86 only patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Shaohua Li <shaohua.li@intel.com>
      19df0c2f
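      Inside the percpu output section, the readmostly subsection then gets its
      own cacheline-aligned slot away from the frequently written subsections.
      Schematically (a sketch of the subsection ordering; 'cacheline' is the
      per-arch value passed in):

          /* Sketch: keep ..readmostly off the cachelines used by subsections
           * that are written frequently */
          *(.data..percpu..first)
          . = ALIGN(cacheline);
          *(.data..percpu..readmostly)
          . = ALIGN(cacheline);
          *(.data..percpu)
          *(.data..percpu..shared_aligned)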
  29. 05 Dec 2010, 1 commit
    • ARM: implement support for read-mostly sections · daf87416
      Committed by Russell King
      Our SMP implementation uses the MESI cache coherency protocol.  Grouping
      together data which is mostly only read means that we avoid unnecessary
      cache line bouncing when that data would otherwise share a cache line
      with frequently written data.
      
      In other words, cache lines associated with read-mostly data are
      expected to spend most of their time in shared state.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      daf87416
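      The corresponding linker script change is to give .data..read_mostly a
      cacheline-aligned home inside the .data output section, along these lines
      (a sketch):

          /* Sketch: read-mostly data gets its own cache line(s) in .data */
          . = ALIGN(L1_CACHE_BYTES);
          *(.data..read_mostly)
          . = ALIGN(L1_CACHE_BYTES);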
  30. 20 Nov 2010, 1 commit
  31. 28 Oct 2010, 1 commit
  32. 08 Oct 2010, 2 commits