1. 06 May, 2014 1 commit
      x86, vdso: Reimplement vdso.so preparation in build-time C · 6f121e54
      Committed by Andy Lutomirski
      Currently, vdso.so files are prepared and analyzed by a combination
      of objcopy, nm, some linker script tricks, and some simple ELF
      parsers in the kernel.  Replace all of that with plain C code that
      runs at build time.
      
      All five vdso images now generate .c files that are compiled and
      linked in to the kernel image.
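
      As an illustration, each generated file takes roughly the following
      shape (a minimal sketch: the array contents, sizes, and the struct
      layout are simplified stand-ins, not the exact output of the new
      tool):

        /* vdso-image-64.c -- generated at build time (illustrative sketch) */

        static const unsigned char raw_data[8192]
                __attribute__((aligned(4096))) = {
                0x7f, 0x45, 0x4c, 0x46, /* ELF magic of the stripped vdso.so */
                /* ... the remaining bytes of the stripped image ... */
        };

        struct vdso_image {
                const void *data;     /* the raw image bytes above */
                unsigned long size;   /* page-aligned size of the load segment */
        };

        const struct vdso_image vdso_image_64 = {
                .data = raw_data,
                .size = sizeof(raw_data),
        };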
      
      This should cause only one userspace-visible change: the loaded vDSO
      images are stripped more heavily than they used to be.  Everything
      outside the loadable segment is dropped.  In particular, this causes
      the section table and section name strings to be missing.  This
      should be fine: real dynamic loaders don't load or inspect these
      tables anyway.  The result is roughly equivalent to eu-strip's
      --strip-sections option.
      
      The purpose of this change is to enable the vvar and hpet mappings
      to be moved to the page following the vDSO load segment.  Currently,
      it is possible for the section table to extend into the page after
      the load segment, so, if we map it, it risks overlapping the vvar or
      hpet page.  This happens whenever the load segment is just under a
      multiple of PAGE_SIZE.
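
      For instance, the hazard reduces to a page-rounding check along these
      lines (an illustrative sketch; PAGE_ALIGN and the variable names are
      generic stand-ins, not the kernel's exact code):

        #include <stdint.h>
        #include <stdio.h>

        #define PAGE_SIZE     4096UL
        #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

        int main(void)
        {
                uint64_t load_end = 0x1ff0; /* load segment just under 2 pages */
                uint64_t file_end = 0x20a0; /* section table spills past it */

                /* the vvar/hpet page would start at PAGE_ALIGN(load_end) ==
                 * 0x2000, but image data extends to 0x20a0, so they overlap */
                if (file_end > PAGE_ALIGN(load_end))
                        printf("section table overlaps the next page\n");
                return 0;
        }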
      
      The only real subtlety here is that the old code had a C file with
      inline assembler that did 'call VDSO32_vsyscall' and a linker script
      that defined 'VDSO32_vsyscall = __kernel_vsyscall'.  This most
      likely worked by accident: the linker script entry defines a symbol
      associated with an address as opposed to an alias for the real
      dynamic symbol __kernel_vsyscall.  That caused ld to relocate the
      reference at link time instead of leaving an interposable dynamic
      relocation.  Since the VDSO32_vsyscall hack is no longer needed, I
      now use 'call __kernel_vsyscall', and I added -Bsymbolic to make it
      work.  vdso2c will generate an error and abort the build if the
      resulting image contains any dynamic relocations, so we won't
      silently generate bad vdso images.
      
      (Dynamic relocations are a problem because nothing will even attempt
      to relocate the vdso.)
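
      The build-time check might look roughly like this (a sketch only,
      with simplified error handling; the real tool's code differs):

        #include <elf.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Walk the image's dynamic section entries and abort the build if
         * any relocation table is present. */
        void fail_on_dynamic_relocs(const Elf64_Dyn *dyn)
        {
                for (; dyn->d_tag != DT_NULL; dyn++) {
                        switch (dyn->d_tag) {
                        case DT_REL:
                        case DT_RELA:
                        case DT_RELSZ:
                        case DT_RELASZ:
                                fprintf(stderr, "vdso image contains dynamic relocations\n");
                                exit(1);
                        }
                }
        }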
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Link: http://lkml.kernel.org/r/2c4fcf45524162a34d87fdda1eb046b2a5cecee7.1399317206.git.luto@amacapital.net
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  2. 09 Nov, 2013 4 commits
  3. 02 Sep, 2013 1 commit
  4. 23 Jul, 2013 1 commit
  5. 11 Jul, 2013 1 commit
  6. 22 Jun, 2013 1 commit
      aout32 coredump compat fix · 945fb136
      Committed by Al Viro
      dump_seek() does SEEK_CUR, not SEEK_SET; the native binfmt_aout
      handles this correctly (it seeks by PAGE_SIZE - sizeof(struct user),
      bringing the current position to PAGE_SIZE), while the compat one
      seeks by PAGE_SIZE and ends up at PAGE_SIZE + already written...
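
      The arithmetic of the bug in standalone form (illustrative;
      USER_HDR_SIZE is a made-up stand-in for sizeof(struct user) in the
      32-bit layout, and dump_seek() itself is not modeled beyond its
      relative, SEEK_CUR-style behavior):

        #include <stdio.h>

        #define PAGE_SIZE     4096UL
        #define USER_HDR_SIZE  128UL /* stand-in for sizeof(struct user) */

        int main(void)
        {
                unsigned long pos = USER_HDR_SIZE; /* after writing the header */

                /* dump_seek() seeks relative to the current position, so the
                 * native code's PAGE_SIZE - sizeof(struct user) lands exactly
                 * on PAGE_SIZE... */
                printf("native ends at %lu\n", pos + (PAGE_SIZE - USER_HDR_SIZE));

                /* ...while the old compat path's full PAGE_SIZE overshoots */
                printf("compat ends at %lu\n", pos + PAGE_SIZE);
                return 0;
        }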
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  7. 28 May, 2013 1 commit
  8. 10 May, 2013 1 commit
  9. 01 May, 2013 1 commit
  10. 30 Apr, 2013 1 commit
  11. 04 Mar, 2013 4 commits
  12. 24 Feb, 2013 1 commit
  13. 23 Feb, 2013 1 commit
  14. 14 Feb, 2013 1 commit
  15. 08 Feb, 2013 1 commit
  16. 04 Feb, 2013 8 commits
  17. 31 Jan, 2013 1 commit
  18. 20 Dec, 2012 2 commits
  19. 29 Nov, 2012 2 commits
  20. 01 Oct, 2012 1 commit
  21. 22 Sep, 2012 2 commits
  22. 19 Sep, 2012 1 commit
      x86, fpu: Unify signal handling code paths for x86 and x86_64 kernels · 72a671ce
      Committed by Suresh Siddha
      Currently, for x86 kernels and for x86_32 binaries running on x86_64
      kernels, the fpstate in the user sigframe is copied to/from the
      fpstate in the task struct.
      
      In the case of signal delivery for x86_64 binaries, if the fpstate is
      live in the CPU registers, the live state is copied directly to the
      user sigframe; otherwise, the fpstate in the task struct is copied to
      the user sigframe. During restore, the fpstate in the user sigframe
      is restored directly to the live CPU registers.
      
      Historically, the different code paths led to different bugs. For
      example, the x86_64 code path was not preemption-safe until recently.
      There is also a lot of code duplication in supporting new features
      such as xsave.
      
      Unify signal handling code paths for x86 and x86_64 kernels.
      
      The new strategy is as follows:
      
      Signal delivery: For both 32-bit and 64-bit frames, align the core
      math frame area to 64 bytes, as required by xsave (this is where the
      main fpu/extended state gets copied to; it excludes the legacy
      compatibility fsave header for the 32-bit [f]xsave frames). If the
      state is live, copy the register state directly to the user frame.
      If not live, copy the state in the thread struct to the user frame.
      For 32-bit [f]xsave frames, construct the fsave header separately,
      before the actual [f]xsave area.
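
      A condensed sketch of this delivery path (illustrative only: every
      helper name below is an assumed stand-in, not one of the kernel's
      real functions):

        /* assumed stand-in helpers, declared for the sketch */
        int fpu_state_is_live(void);
        int save_live_fpu_to_user(void *buf_fx);
        int copy_thread_fpstate_to_user(void *buf_fx);
        int build_fsave_header_for_user(void *buf);

        /* 'buf' points at the start of the math frame; 'buf_fx' is the
         * 64-byte-aligned area the [f]xsave data goes into */
        int sketch_save_fpstate_to_sigframe(void *buf, void *buf_fx,
                                            int ia32_frame)
        {
                int err;

                if (fpu_state_is_live())
                        err = save_live_fpu_to_user(buf_fx);       /* registers -> user */
                else
                        err = copy_thread_fpstate_to_user(buf_fx); /* task struct -> user */

                /* 32-bit [f]xsave frames get the legacy fsave header up front */
                if (!err && ia32_frame)
                        err = build_fsave_header_for_user(buf);

                return err;
        }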
      
      Signal return: As the 32-bit frames with [f]xstate have an additional
      'fsave' header, copy everything back from the user sigframe to the
      fpstate in the task structure and reconstruct the fxstate from the
      'fsave' header (also, user-passed pointers may not be correctly
      aligned, which rules out any attempt to directly restore partial
      state). At the next fpstate use, everything will be restored to the
      live CPU registers.
      For all 64-bit frames and the 32-bit fsave frame, restore the state
      from the user sigframe directly to the live CPU registers: 64-bit
      signals have always restored the math frame directly, so we can
      expect the math frame pointer to be correctly aligned, and 32-bit
      fsave frames have no alignment requirements, so we can restore the
      state directly.
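
      And the corresponding return path, in the same sketch style (helper
      names again assumed stand-ins):

        /* assumed stand-in helpers, declared for the sketch */
        int copy_user_sigframe_to_thread_fpstate(void *buf);
        int merge_fsave_header_into_fxstate(void *buf);
        int restore_user_fpstate_to_registers(void *buf);

        int sketch_restore_fpstate_from_sigframe(void *buf,
                                                 int ia32_fxstate_frame)
        {
                int err;

                if (ia32_fxstate_frame) {
                        /* extra fsave header (and a possibly misaligned user
                         * pointer): copy to the task struct, rebuild the
                         * fxstate from the header, and let the next FPU use
                         * restore the live registers */
                        err = copy_user_sigframe_to_thread_fpstate(buf);
                        if (!err)
                                err = merge_fsave_header_into_fxstate(buf);
                        return err;
                }

                /* 64-bit frames and the 32-bit fsave frame: restore straight
                 * to the live CPU registers */
                return restore_user_fpstate_to_registers(buf);
        }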
      
      "lat_sig catch" microbenchmark numbers (for x86, x86_64, x86_32 binaries) are
      with in the noise range with this change.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1343171129-2747-4-git-send-email-suresh.b.siddha@intel.com
      [ Merged in compilation fix ]
      Link: http://lkml.kernel.org/r/1344544736.8326.17.camel@sbsiddha-desk.sc.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  23. 05 Sep, 2012 2 commits