- 28 Oct 2014, 5 commits
-
-
Committed by Andy Lutomirski
I think that the jiffies vvar was once used for the vgetcpu cache. That code is long gone, so let's just make jiffies be a normal variable. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/fcfee6f8749af14d96373a9e2656354ad0b95499.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andy Lutomirski
IMO users ought not to be able to use 16-bit segments without using modify_ldt. Fortunately, it's impossible to break espfix64 by loading the PER_CPU segment into SS, because PER_CPU is marked read-only and SS cannot contain an RO segment, but marking PER_CPU as 32-bit is less fragile. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/179f490d659307873eefd09206bebd417e2ab5ad.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andy Lutomirski
The first userspace attempt to read or write the PER_CPU segment will write the accessed bit to the GDT. This is visible to userspace using the LAR instruction, and it also pointlessly dirties a cache line. Set the segment's accessed bit at boot to prevent userspace access to segments from having side effects. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/ac63814ca4c637a08ec2fd0360d67ca67560a9ee.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andy Lutomirski
This makes it easier to see what's going on. It produces exactly the same segment descriptor as the old code. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/d492f7b55136cbc60f016adae79160707b2e03b7.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andy Lutomirski
This is pure cut-and-paste. At this point, vsyscall_64.c contains only code needed for vsyscall emulation, but some of the comments and function names are still confused. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/a244daf7d3cbe71afc08ad09fdfe1866ca1f1978.1411494540.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 24 Sep 2014, 1 commit
-
-
Committed by Andy Lutomirski
Stephen Rothwell's compiler did something amazing: it unrolled a loop, discovered that one iteration of that loop contained an always-true test, and emitted a warning that will IMO only serve to convince people to disable the warning. That bogus warning caused me to wonder what prompted such an absurdity from his compiler, and I discovered that the code in question was, in fact, completely wrong -- I was looking things up in the wrong array. This affects 3.16 as well, but the only effect is to screw up the error checking a bit. vdso2c's output is unaffected. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/53d96ad5.80ywqrbs33ZBCQej%25akpm@linux-foundation.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 09 Aug 2014, 1 commit
-
-
Committed by Andy Lutomirski
The core mm code will provide a default gate area based on FIXADDR_USER_START and FIXADDR_USER_END if !defined(__HAVE_ARCH_GATE_AREA) && defined(AT_SYSINFO_EHDR). This default is only useful for ia64. arm64, ppc, s390, sh, tile, 64-bit UML, and x86_32 have their own code just to disable it. arm, 32-bit UML, and x86_64 have gate areas, but they have their own implementations. This gets rid of the default and moves the code into ia64. This should save some code on architectures without a gate area: it's now possible to inline the gate_area functions in the default case. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Acked-by: Nathan Lynch <nathan_lynch@mentor.com> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [in principle] Acked-by: Richard Weinberger <richard@nod.at> [for um] Acked-by: Will Deacon <will.deacon@arm.com> [for arm64] Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Jeff Dike <jdike@addtoit.com> Cc: Richard Weinberger <richard@nod.at> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Nathan Lynch <Nathan_Lynch@mentor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
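A rough sketch of what "inline the gate_area functions in the default case" means in practice: once the FIXADDR_USER-based default is gone, architectures that do not define __HAVE_ARCH_GATE_AREA can get trivial inline stubs from the core headers instead of per-arch boilerplate. Treat the exact guards and placement as approximate rather than the literal patch.

```c
/* Approximate shape of the default-case stubs: no gate vma at all. */
#ifndef __HAVE_ARCH_GATE_AREA
static inline struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
	return NULL;
}

static inline int in_gate_area(struct mm_struct *mm, unsigned long addr)
{
	return 0;
}

static inline int in_gate_area_no_mm(unsigned long addr)
{
	return 0;
}
#endif
```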
-
- 26 Jul 2014, 1 commit
-
-
Committed by Andy Lutomirski
The VVAR area can, obviously, be read; that is kind of the point. AFAIK this has no effect whatsoever unless x86 suddenly turns into a nommu architecture. Nonetheless, not setting it is suspicious. Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/e4c8bf4bc2725bda22c4a4b7d0c82adcd8f8d9b8.1406330779.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 12 Jul 2014, 2 commits
-
-
Committed by Andy Lutomirski
Now that we can tolerate extra things dangling off the end of the vdso image, we can strip the vdso the old fashioned way rather than using an overcomplicated custom stripping algorithm. This is a partial reversion of: 6f121e54 x86, vdso: Reimplement vdso.so preparation in build-time C Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/50e01ed6dcc0575d20afd782f9fe98d5ee3e2d8a.1405040914.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andy Lutomirski
Putting the vvar area after the vdso text is rather complicated: it only works if the total length of the vdso text mapping is known at vdso link time, and the linker doesn't allow symbol addresses to depend on the sizes of non-allocatable data after the PT_LOAD segment. Moving the vvar area before the vdso text will allow us to safely map non-allocatable data after the vdso text, which is a nice simplification. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/156c78c0d93144ff1055a66493783b9e56813983.1405040914.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 11 Jul 2014, 2 commits
-
-
Committed by Jan Beulich
Relying on static functions used just once to get inlined (and subsequently have dead code paths eliminated) is wrong: Compilers are free to decide whether they do this, regardless of optimization level. With this not happening for vdso_addr() (observed with gcc 4.1.x), an unresolved reference to align_vdso_addr() causes the build to fail. [ hpa: vdso_addr() is never actually used on x86-32, as calculate_addr in map_vdso() is always false. It ought to be possible to clean this up further, but this fixes the immediate problem. ] Signed-off-by: Jan Beulich <jbeulich@suse.com> Link: http://lkml.kernel.org/r/53B5863B02000078000204D5@mail.emea.novell.com Acked-by: Andy Lutomirski <luto@amacapital.net> Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Tested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
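A standalone illustration of the failure mode and the shape of the fix; all names are invented stand-ins (align_me for align_vdso_addr, pick_addr for vdso_addr, SIXTY_FOUR_BIT for CONFIG_X86_64), so treat it as a sketch rather than the kernel's code.

```c
#include <stdio.h>

#ifdef SIXTY_FOUR_BIT                        /* stand-in for CONFIG_X86_64 */
static unsigned long align_me(unsigned long addr)
{
	return addr & ~0xfffUL;              /* page-align, purely illustrative */
}
#endif

static unsigned long pick_addr(unsigned long start)
{
#ifdef SIXTY_FOUR_BIT
	return align_me(start + 0x10000);
#else
	/*
	 * The fragile version kept an unconditional call to align_me()
	 * behind a compile-time-false branch and relied on the compiler
	 * inlining this function and deleting the dead call.  Guarding the
	 * call with the preprocessor removes the dangling reference.
	 */
	return 0;
#endif
}

int main(void)
{
	printf("%#lx\n", pick_addr(0x400000));
	return 0;
}
```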
-
Committed by Jan Beulich
Certain ld versions (observed with 2.20.0) put an empty .rela.dyn section into shared object files, breaking the assumption on the number of sections to be copied to the final output. Simply discard any empty SHT_REL and SHT_RELA sections to address this. Signed-off-by: Jan Beulich <jbeulich@suse.com> Link: http://lkml.kernel.org/r/53B5861E02000078000204D1@mail.emea.novell.com Acked-by: Andy Lutomirski <luto@amacapital.net> Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Tested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
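A minimal sketch of the workaround described above. The real vdso2c reads ELF fields through its endian-safe accessors; plain struct access is used here for brevity, so this is illustrative rather than the tool's actual code.

```c
#include <elf.h>

/* Decide whether a section header should be copied to the final output:
 * drop SHT_REL/SHT_RELA sections that contain no entries. */
int keep_section(const Elf64_Shdr *sh)
{
	if ((sh->sh_type == SHT_REL || sh->sh_type == SHT_RELA) &&
	    sh->sh_size == 0)
		return 0;	/* empty relocation section: discard */
	return 1;
}
```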
-
- 25 Jun 2014, 2 commits
-
-
Committed by Andy Lutomirski
vdso2c was checking for various types of relocations to detect when the vdso had undefined symbols or was otherwise dependent on relocation at load time. Undefined symbols in the vdso would fail if accessed at runtime, and certain implementation errors (e.g. branch profiling or incorrect symbol visibilities) could result in data access through the GOT that requires relocations. This could be as simple as: extern char foo; return foo; Without some kind of visibility control, the compiler would assume that foo could be interposed at load time and would generate a relocation. x86-64 and x32 (as opposed to i386) use explicit-addend (RELA) instead of implicit-addend (REL) relocations for data access, and vdso2c forgot to detect those. Whether these bad relocations would actually fail at runtime depends on what the linker sticks in the unrelocated references. Nonetheless, these relocations have no business existing in the vDSO and should be fixed rather than silently ignored. This error could trigger on some configurations due to branch profiling. The previous patch fixed that. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/74ef0c00b4d2a3b573e00a4113874e62f772e348.1403642755.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
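A compile-only fragment (made-up names) showing why visibility matters for the data access pattern quoted above. Built with -fPIC as part of a shared object, the hidden declaration lets the compiler emit a plain PC-relative load, whereas an unannotated extern would typically go through the GOT and require a load-time relocation (RELA on x86-64/x32).

```c
/* 'foo' is assumed to be defined elsewhere in the same DSO.  Declaring it
 * with hidden visibility promises it cannot be interposed at load time,
 * so no GOT entry and no dynamic relocation are needed for the access. */
extern char foo __attribute__((visibility("hidden")));

char read_foo(void)
{
	return foo;
}
```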
-
Committed by Andy Lutomirski
DISABLE_BRANCH_PROFILING turns off branch profiling (i.e. a redefinition of 'if'). Branch profiling depends on a bunch of kernel-internal symbols and generates extra output sections, none of which are useful or functional in the vDSO. It's currently turned off for vclock_gettime.c, but vgetcpu.c also triggers branch profiling, so just turn it off in the makefile. This fixes the build on some configurations: the vdso could contain undefined symbols, and the fake section table overflowed due to ftrace's added sections. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/bf1ec29e03b2bbc081f6dcaefa64db1c3a83fb21.1403642755.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 21 Jun 2014, 1 commit
-
-
Committed by Andy Lutomirski
With this change, doing 'make vdso_install' and telling gdb: set debug-file-directory /lib/modules/KVER/vdso will enable vdso debugging with symbols. This is useful for testing, but kernel RPM builds will probably want to manually delete these symlinks or otherwise do something sensible when they strip the vdso/*.so files. If ld does not support --build-id, then the symlinks will not be created. Note that kernel packagers that use vdso_install may need to adjust their packaging scripts to accommodate this change. For example, Fedora's scripts create build-id symlinks themselves in a different location, so the spec should probably be updated to remove the symlinks created by make vdso_install. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/a424b189ce3ced85fe1e82d032a20e765e0fe0d3.1403291930.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 20 Jun 2014, 4 commits
-
-
Committed by Andy Lutomirski
.data doesn't need to be separate from .rodata: they're both readonly. .altinstructions and .altinstr_replacement aren't needed by anything except vdso2c; strip them from the final image. While we're at it, rather than aligning the actual executable text, just shove some unused-at-runtime data in between real data and text. My vdso image is still above 4k, but I'm disinclined to try to trim it harder for 3.16. For future trimming, I suspect that these sections could be moved to later in the file and dropped from the in-memory image: .gnu.version and .gnu.version_d (this may lose versions in gdb) .eh_frame (should be harmless) .eh_frame_hdr (I'm not really sure) .hash (AFAIK nothing needs this section header) Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2e96d0c49016ea6d026a614ae645e93edd325961.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andy Lutomirski
Fully stripping the vDSO has other unfortunate side effects: - binutils is unable to find ELF notes without a SHT_NOTE section. - Even elfutils has trouble: it can find ELF notes without a section table at all, but if a section table is present, it won't look for PT_NOTE. - gdb wants section names to match between stripped DSOs and their symbols; otherwise it will corrupt symbol addresses. We're also breaking the rules: section 0 is supposed to be SHT_NULL. Fix these problems by building a better fake section table. While we're at it, we might as well let buggy Go versions keep working well by giving the SHT_DYNSYM entry the correct size. This is a bit unfortunate: it adds quite a bit of size to the vdso image. If/when binutils improves and the improved versions become widespread, it would be worth considering dropping most of this. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/0e546a5eeaafdf1840e6ee654a55c1e727c26663.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andy Lutomirski
Rather than using a separate macro for each replacement, use generic macros. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/d953cd2e70ceee1400985d091188cdd65fba2f05.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andy Lutomirski
It serves no purpose in user code. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2a5bebff42defd8a5e81d96f7dc00f21143c80e8.1403129369.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 14 Jun 2014, 1 commit
-
-
Committed by Andy Lutomirski
"make vdso_install" installs unstripped versions of the vdso objects for the benefit of the debugger. This was broken by checkin: 6f121e54 x86, vdso: Reimplement vdso.so preparation in build-time C The filenames are different now, so update the Makefile to cope. This still installs the 64-bit vdso as vdso64.so. We believe this will be okay, as the only known user is a patched gdb which is known to use build-ids, but if it turns out to be a problem we may have to add a link. Inspired by a patch from Sam Ravnborg. Acked-by: Sam Ravnborg <sam@ravnborg.org> Reported-by: Josh Boyer <jwboyer@fedoraproject.org> Tested-by: Josh Boyer <jwboyer@fedoraproject.org> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/b10299edd8ba98d17e07dafcd895b8ecf4d99eff.1402586707.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 13 Jun 2014, 2 commits
-
-
Committed by Andy Lutomirski
The Go runtime has a buggy vDSO parser that currently segfaults. This writes an empty SHT_DYNSYM entry that causes Go's runtime to malfunction by thinking that the vDSO is empty rather than malfunctioning by running off the end and segfaulting. This affects x86-64 only as far as we know, so we do not need this for the i386 and x32 vdsos. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/d10618176c4bd39b457a5e85c497295c90cab1bc.1402620737.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Andy Lutomirski
Add PUT_LE() by analogy with GET_LE() to write littleendian values in addition to reading them. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/3d9b27e92745b27b6fda1b9a98f70dc9c1246c7a.1402620737.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@zytor.com>
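A self-contained sketch of the underlying idea (not the kernel's exact macros): byte-shift helpers read and write 32-bit little-endian values regardless of host endianness, which is what the GET_LE()/PUT_LE() pair provides for the ELF fields vdso2c manipulates.

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t get_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

static void put_le32(uint8_t *p, uint32_t val)
{
	p[0] = val & 0xff;
	p[1] = (val >> 8) & 0xff;
	p[2] = (val >> 16) & 0xff;
	p[3] = (val >> 24) & 0xff;
}

int main(void)
{
	uint8_t buf[4];

	put_le32(buf, 0x12345678);
	printf("%#x\n", get_le32(buf));	/* prints 0x12345678 on any host */
	return 0;
}
```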
-
- 11 Jun 2014, 1 commit
-
-
Committed by H. Peter Anvin
One final use of the macros from <endian.h>, which are not available on older systems. In this case we had one sole case of *writing* a littleendian number, but the number is SHN_UNDEF which is the constant zero, so rather than dealing with the general case of littleendian puts here, just document that the constant is zero and be done with it. Reported-and-Tested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Cc: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/20140610135051.c3c34165f73d67d218b62bd9@linux-foundation.org
-
- 07 Jun 2014, 1 commit
-
-
Committed by H. Peter Anvin
There are no standard functions for littleendian data (unlike bigendian data.) Thus, use <tools/le_byteshift.h> to access littleendian data members. Those are fairly inefficient, but it doesn't matter for this purpose (and can be optimized later.) This avoids portability problems. Reported-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Tested-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/20140606140017.afb7f91142f66cb3dd13c186@linux-foundation.org
-
- 31 May 2014, 3 commits
-
-
Committed by H. Peter Anvin
Make it a little clearer what the littleendian access macros in vdso2c.[ch] actually do. This way they can probably also be moved to a central location (e.g. tools/include) for the benefit of other host tools. We should avoid implementation namespace symbols when writing code that is compiled for the compiler host, so avoid names starting with double underscore or underscore-capital. Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2cf258df123cb24bad63c274c8563c050547d99d.1401464755.git.luto@amacapital.net
-
Committed by Andy Lutomirski
This adds a macro GET(x) to convert x from big-endian to little-endian. Hopefully I put it everywhere it needs to go and got all the cases needed for everyone's linux/elf.h. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2cf258df123cb24bad63c274c8563c050547d99d.1401464755.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andy Lutomirski
This avoids bizarre failures if make is run again. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/1764385fe9931e8940b9d001132515448ea89523.1401464755.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 22 May 2014, 2 commits
-
-
Committed by Andy Lutomirski
The oops can be triggered in qemu using -no-hpet (but not nohpet) by running a 32-bit program and reading a couple of pages before the vdso. This should send SIGBUS instead of OOPSing. The bug was introduced by: commit 7a59ed41 Author: Stefani Seibold <stefani@seibold.net> Date: Mon Mar 17 23:22:09 2014 +0100 x86, vdso: Add 32 bit VDSO time support for 32 bit kernel which is new in 3.15. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/e99025d887d6670b6c4d81e6ccfeeb83770b21e9.1400109621.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by H. Peter Anvin
This reverts commit fa81511b in preparation of merging in the proper fix (espfix64). Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 21 May 2014, 2 commits
-
-
Committed by Andy Lutomirski
Using arch_vma_name to give special mappings a name is awkward. x86 currently implements it by comparing the start address of the vma to the expected address of the vdso. This requires tracking the start address of special mappings and is probably buggy if a special vma is split or moved. Improve _install_special_mapping to just name the vma directly. Use it to give the x86 vvar area a name, which should make CRIU's life easier. As a side effect, the vvar area will show up in core dumps. This could be considered weird and is fixable. [hpa: I say we accept this as-is but be prepared to deal with knocking out the vvars from core dumps if this becomes a problem.] Cc: Cyrill Gorcunov <gorcunov@openvz.org> Cc: Pavel Emelyanov <xemul@parallels.com> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/276b39b6b645fb11e345457b503f17b83c2c6fd0.1400538962.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
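A hedged sketch of the mechanism (kernel-internal code; the descriptor fields follow the vm_special_mapping interface of that era, but treat the details as approximate rather than the exact x86 implementation):

```c
/* The vma's name now comes from a mapping descriptor instead of from
 * arch_vma_name(), so it survives splitting or moving the vma. */
static struct page *no_pages[] = { NULL };

static struct vm_special_mapping vvar_mapping = {
	.name  = "[vvar]",
	.pages = no_pages,
};

/*
 * At mmap time, inside the vdso mapping code, something along these lines:
 *
 *	vma = _install_special_mapping(mm, vvar_start, vvar_size,
 *				       VM_READ | VM_MAYREAD,
 *				       &vvar_mapping);
 */
```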
-
Committed by Andy Lutomirski
The oops can be triggered in qemu using -no-hpet (but not nohpet) by reading a couple of pages past the end of the vdso text. This should send SIGBUS instead of OOPSing. The bug was introduced by: commit 7a59ed41 Author: Stefani Seibold <stefani@seibold.net> Date: Mon Mar 17 23:22:09 2014 +0100 x86, vdso: Add 32 bit VDSO time support for 32 bit kernel which is new in 3.15. This will be fixed separately in 3.15, but that patch will not apply to tip/x86/vdso. This is the equivalent fix for tip/x86/vdso and, presumably, 3.16. Cc: Stefani Seibold <stefani@seibold.net> Reported-by: Sasha Levin <sasha.levin@oracle.com> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/c8b0a9a0b8d011a8b273cbb2de88d37190ed2751.1400538962.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 15 May 2014, 1 commit
-
-
Committed by Linus Torvalds
Checkin: b3b42ac2 x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels disabled 16-bit segments on 64-bit kernels due to an information leak. However, it does seem that people are genuinely using Wine to run old 16-bit Windows programs on Linux. A proper fix for this ("espfix64") is coming in the upcoming merge window, but as a temporary fix, create a sysctl to allow the administrator to re-enable support for 16-bit segments. It adds a "/proc/sys/abi/ldt16" sysctl that defaults to zero (off). If you hit this issue and care about your old Windows program more than you care about a kernel stack address information leak, you can do echo 1 > /proc/sys/abi/ldt16 as root (add it to your startup scripts), and you should be ok. The sysctl table is only added if you have COMPAT support enabled on x86-64, but I assume anybody who runs old windows binaries very much does that ;) Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Link: http://lkml.kernel.org/r/CA%2B55aFw9BPoD10U1LfHbOMpHWZkvJTkMcfCs9s3urPr1YyWBxw@mail.gmail.com Cc: <stable@vger.kernel.org>
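For context, registering an integer knob under /proc/sys/abi generally looks like the sketch below; the table and variable names here are illustrative guesses, not necessarily what the actual patch used.

```c
static int sysctl_ldt16;	/* 0 = 16-bit segments stay disabled */

static struct ctl_table abi_table2[] = {
	{
		.procname	= "ldt16",
		.data		= &sysctl_ldt16,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{}
};

static int __init abi_sysctl_init(void)
{
	register_sysctl("abi", abi_table2);	/* creates /proc/sys/abi/ldt16 */
	return 0;
}
arch_initcall(abi_sysctl_init);
```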
-
- 06 May 2014, 6 commits
-
-
Committed by Andy Lutomirski
These definitions had no effect. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/946c104e40c47319f8ab406e54118799cb55bd99.1399317206.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andy Lutomirski
This makes the 64-bit and x32 vdsos use the same mechanism as the 32-bit vdso. Most of the churn is deleting all the old fixmap code. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/8af87023f57f6bb96ec8d17fce3f88018195b49b.1399317206.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andy Lutomirski
This unifies the vdso mapping code and teaches it how to map special pages at addresses corresponding to symbols in the vdso image. The new code is used for all vdso variants, but so far only the 32-bit variants use the new vvar page position. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/b6d7858ad7b5ac3fd3c29cab6d6d769bc45d195e.1399317206.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andy Lutomirski
Currently, vdso.so files are prepared and analyzed by a combination of objcopy, nm, some linker script tricks, and some simple ELF parsers in the kernel. Replace all of that with plain C code that runs at build time. All five vdso images now generate .c files that are compiled and linked in to the kernel image. This should cause only one userspace-visible change: the loaded vDSO images are stripped more heavily than they used to be. Everything outside the loadable segment is dropped. In particular, this causes the section table and section name strings to be missing. This should be fine: real dynamic loaders don't load or inspect these tables anyway. The result is roughly equivalent to eu-strip's --strip-sections option. The purpose of this change is to enable the vvar and hpet mappings to be moved to the page following the vDSO load segment. Currently, it is possible for the section table to extend into the page after the load segment, so, if we map it, it risks overlapping the vvar or hpet page. This happens whenever the load segment is just under a multiple of PAGE_SIZE. The only real subtlety here is that the old code had a C file with inline assembler that did 'call VDSO32_vsyscall' and a linker script that defined 'VDSO32_vsyscall = __kernel_vsyscall'. This most likely worked by accident: the linker script entry defines a symbol associated with an address as opposed to an alias for the real dynamic symbol __kernel_vsyscall. That caused ld to relocate the reference at link time instead of leaving an interposable dynamic relocation. Since the VDSO32_vsyscall hack is no longer needed, I now use 'call __kernel_vsyscall', and I added -Bsymbolic to make it work. vdso2c will generate an error and abort the build if the resulting image contains any dynamic relocations, so we won't silently generate bad vdso images. (Dynamic relocations are a problem because nothing will even attempt to relocate the vdso.) Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/2c4fcf45524162a34d87fdda1eb046b2a5cecee7.1399317206.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
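Purely as an illustration of what "generate .c files that are compiled and linked in to the kernel" means, here is a made-up miniature of the sort of thing vdso2c emits; the structure name and fields below are invented for this sketch and are not the kernel's actual vdso_image definition.

```c
/* Invented container type for this sketch only. */
struct vdso_blob {
	const unsigned char *data;	/* the raw vdso.so image, page-padded  */
	unsigned long size;
	long sym_vvar_page;		/* symbol offsets the kernel needs     */
	long sym_hpet_page;		/*   when mapping the image at runtime */
};

/* vdso2c's output has this general shape: the whole ELF image as an
 * initialized array plus the resolved symbol offsets. */
static const unsigned char vdso_image_data[] = {
	0x7f, 'E', 'L', 'F',	/* ... remainder of the image ... */
};

static const struct vdso_blob vdso_blob_64 = {
	.data          = vdso_image_data,
	.size          = sizeof(vdso_image_data),
	.sym_vvar_page = 0x2000,	/* example values only */
	.sym_hpet_page = 0x3000,
};
```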
-
Committed by Andy Lutomirski
This code is used during CPU setup, and it isn't strictly speaking related to the 32-bit vdso. It's easier to understand how this works when the code is closer to its callers. This also lets syscall32_cpu_init be static, which might save some trivial amount of kernel text. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/4e466987204e232d7b55a53ff6b9739f12237461.1399317206.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andy Lutomirski
Rather than using 'vdso_enabled' and an awful #define, just call the parameters vdso32_enabled and vdso64_enabled. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/87913de56bdcbae3d93917938302fc369b05caee.1399317206.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 04 Apr 2014, 1 commit
-
-
Committed by Andy Lutomirski
Gold can't parse the script due to: https://sourceware.org/bugzilla/show_bug.cgi?id=16804 With a workaround in place for that issue, Gold 2.23 crashes due to: https://sourceware.org/bugzilla/show_bug.cgi?id=15355 This works around the former bug and avoids the second by removing the unnecessary vvar and hpet sections and segments. The vdso and hpet symbols are still there, and nothing needed the sections or segments. Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Link: http://lkml.kernel.org/r/243fa205098d112ec759c9b1b26785c09f399833.1396547532.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 31 Mar 2014, 1 commit
-
-
Committed by Andy Lutomirski
The new symbols provide the same API as the 64-bit variants, so they should have the same symbol version name. This can't break userspace, since these symbols are new for 32-bit Linux. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Cc: Stefani Seibold <stefani@seibold.net> Link: http://lkml.kernel.org/r/0a869bce03d25619565b1eee7d69a4fd15fd203a.1396124118.git.luto@amacapital.net Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-