- 22 Apr 2016, 1 commit

Committed by Juergen Gross
Correct the size of the module mapping space and the maximum available physical memory size of current processors.

Signed-off-by: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: corbet@lwn.net
Cc: linux-doc@vger.kernel.org
Link: http://lkml.kernel.org/r/1461310504-15977-1-git-send-email-jgross@suse.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
- 29 Nov 2015, 1 commit

Committed by Matt Fleming
Make it clear that the EFI page tables are only available during EFI runtime calls, since that subject has come up a fair number of times in the past.

Additionally, add the EFI region start and end addresses to the table so that it's possible to see at a glance where they fall in relation to other regions.

Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Borislav Petkov <bp@suse.de>
Acked-by: Borislav Petkov <bp@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Jones <davej@codemonkey.org.uk>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1448658575-17029-7-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
- 14 Feb 2015, 1 commit

Committed by Andrey Ryabinin
This patch adds arch-specific code for the kernel address sanitizer. 16 TB of virtual address space is used for shadow memory. It is located in the range [ffffec0000000000 - fffffc0000000000], between vmemmap and the %esp fixup stacks.

At an early stage we map the whole shadow region with the zero page. Later, after pages are mapped into the direct mapping address range, we unmap the zero pages from the corresponding shadow (see kasan_map_shadow()) and allocate and map real shadow memory, reusing the vmemmap_populate() function.

Also replace __pa with __pa_nodebug before the shadow is initialized: __pa with CONFIG_DEBUG_VIRTUAL=y makes an external function call (__phys_addr), and __phys_addr is instrumented, so __asan_load could be called before the shadow area is initialized.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Jim Davis <jim.epost@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
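To make the quoted shadow window concrete, here is a minimal userspace sketch (not kernel code) of the generic KASAN translation, shadow = (addr >> 3) + offset. The offset constant below is derived from the range quoted above and is an assumption of this example, not something stated in the commit.

```c
#include <stdio.h>

#define SHADOW_SCALE_SHIFT 3                      /* one shadow byte per 8 bytes */
#define SHADOW_OFFSET      0xdffffc0000000000ULL  /* assumed; see lead-in        */

static unsigned long long shadow_of(unsigned long long addr)
{
	return (addr >> SHADOW_SCALE_SHIFT) + SHADOW_OFFSET;
}

int main(void)
{
	unsigned long long kstart = 0xffff800000000000ULL; /* start of the kernel half */
	unsigned long long kend   = 0xffffffffffffffffULL; /* top of the address space */

	/* Prints 0xffffec0000000000 and 0xfffffbffffffffff: exactly the
	 * 16 TB shadow window [ffffec0000000000 - fffffc0000000000] above. */
	printf("shadow(%#llx) = %#llx\n", kstart, shadow_of(kstart));
	printf("shadow(%#llx) = %#llx\n", kend,   shadow_of(kend));
	return 0;
}
```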
- 19 Sep 2014, 1 commit

Committed by Dave Hansen
Peter Anvin says:

> 0xffff880000000000 is the lowest usable address because we have
> agreed to leave 0xffff800000000000-0xffff880000000000 for the
> hypervisor or other non-OS uses.

Let's call this out in the documentation. This came up during the kernel address sanitizer discussions where it was proposed to use this area for other kernel things.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Link: http://lkml.kernel.org/r/20140918195606.841389D2@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
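For reference, the reserved range called out above works out to 8 TiB at the very bottom of the kernel half of the address space; a trivial check (illustrative userspace arithmetic, not part of the patch):

```c
#include <stdio.h>

int main(void)
{
	unsigned long long lo = 0xffff800000000000ULL; /* bottom of kernel space     */
	unsigned long long hi = 0xffff880000000000ULL; /* lowest address the OS uses */

	/* Size of the hole left for the hypervisor or other non-OS uses. */
	printf("reserved: %llu TiB\n", (hi - lo) >> 40);   /* prints 8 */
	return 0;
}
```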
- 01 May 2014, 1 commit

Committed by H. Peter Anvin
The IRET instruction, when returning to a 16-bit segment, only restores the bottom 16 bits of the user space stack pointer. This causes some 16-bit software to break, but it also leaks kernel state to user space. We have a software workaround for that ("espfix") for the 32-bit kernel, but it relies on a nonzero stack segment base which is not available in 64-bit mode.

In checkin:

  b3b42ac2 x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels

we "solved" this by forbidding 16-bit segments on 64-bit kernels, with the logic that 16-bit support is crippled on 64-bit kernels anyway (no V86 support), but it turns out that people are doing stuff like running old Win16 binaries under Wine and expect it to work.

This works around the problem by creating percpu "ministacks", each of which is mapped 2^16 times 64K apart. When we detect that the return SS is on the LDT, we copy the IRET frame to the ministack and use the relevant alias to return to userspace. The ministacks are mapped readonly, so if IRET faults we promote #GP to #DF, which is an IST vector and thus has its own stack; we then do the fixup in the #DF handler. (Making #GP an IST exception would make the msr_safe functions unsafe in NMI/MC context, and quite possibly have other effects.)

Special thanks to:

- Andy Lutomirski, for the suggestion of using very small stack slots and copying (as opposed to mapping) the IRET frame there, and for the suggestion to mark them readonly and let the fault promote to #DF.
- Konrad Wilk for paravirt fixup and testing.
- Borislav Petkov for testing help and useful comments.

Reported-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Andrew Lutomriski <amluto@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Dirk Hohndel <dirk@hohndel.org>
Cc: Arjan van de Ven <arjan.van.de.ven@intel.com>
Cc: comex <comexk@gmail.com>
Cc: Alexander van Heukelum <heukelum@fastmail.fm>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: <stable@vger.kernel.org> # consider after upstream merge
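To illustrate the leak described in the first paragraph (with made-up addresses; this is not the espfix64 implementation itself): on IRET to a 16-bit segment only the low 16 bits of the stack pointer come back from the IRET frame, so the remaining bits keep whatever value the kernel stack pointer held.

```c
#include <stdio.h>

int main(void)
{
	unsigned long long kernel_rsp = 0xffff88000badc0deULL; /* hypothetical kernel stack pointer */
	unsigned long long user_sp    = 0x1234;                /* what the 16-bit task expected     */

	/* Only bits 15:0 are restored; the higher bits still show the kernel's value. */
	unsigned long long observed = (kernel_rsp & ~0xffffULL) | (user_sp & 0xffffULL);

	printf("stack pointer seen by the 16-bit task: %#llx\n", observed);
	return 0;
}
```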
- 02 Nov 2013, 1 commit

Committed by Borislav Petkov
We map the EFI regions needed for runtime services non-contiguously, with preserved alignment on virtual addresses starting from -4G down for a total max space of 64G. This way, we provide for stable runtime services addresses across kernels so that a kexec'd kernel can still use them. Thus, they're mapped in a separate pagetable so that we don't pollute the kernel namespace.

Add an efi= kernel command line parameter for passing miscellaneous options and chicken bits from the command line. While at it, add a chicken bit called "efi=old_map" which can be used as a fallback to the old runtime services mapping method in case there's some b0rkage with a particular EFI implementation (haha, it is hard to hold up the sarcasm here...).

Also, add the UEFI RT VA space to Documentation/x86/x86_64/mm.txt.

Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
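A quick way to see where that window sits (illustrative arithmetic only; the -68G lower bound is implied by "from -4G down for a total max space of 64G" rather than stated explicitly in the message):

```c
#include <stdio.h>

int main(void)
{
	unsigned long long top    = 0ULL - (4ULL  << 30);  /* -4G            */
	unsigned long long bottom = 0ULL - (68ULL << 30);  /* -4G minus 64G  */

	printf("EFI runtime VA window: %#llx - %#llx (%llu GB)\n",
	       bottom, top - 1, (top - bottom) >> 30);
	/* Prints 0xffffffef00000000 - 0xfffffffeffffffff (64 GB). */
	return 0;
}
```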
- 03 Apr 2013, 1 commit

Committed by Borislav Petkov
Add the end of the virtual address space to its layout documentation.

Signed-off-by: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/1362428180-8865-4-git-send-email-bp@alien8.de
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- 06 May 2009, 2 commits

Committed by H. Peter Anvin
Fix a trivial typo in Documentation/x86/x86_64/mm.txt.

[ Impact: documentation only ]

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Rik van Riel <riel@redhat.com>
Committed by Rik van Riel
Extend the maximum addressable memory on x86-64 from 2^44 to 2^46 bytes. This requires some shuffling around of the vmalloc and virtual memmap memory areas, to keep them away from the direct mapping of up to 64TB of physical memory.

This patch also introduces a guard hole between the vmalloc area and the virtual memory map space. There's really no good reason why we wouldn't have a guard hole there.

[ Impact: future hardware enablement ]

Signed-off-by: Rik van Riel <riel@redhat.com>
LKML-Reference: <20090505172856.6820db22@cuia.bos.redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
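Spelled out, the two limits mentioned above (illustrative arithmetic only):

```c
#include <stdio.h>

int main(void)
{
	/* 2^44 bytes was the old limit, 2^46 bytes the new one. */
	printf("2^44 = %llu TB\n", (1ULL << 44) >> 40);  /* 16 TB */
	printf("2^46 = %llu TB\n", (1ULL << 46) >> 40);  /* 64 TB */
	return 0;
}
```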
- 12 Nov 2008, 1 commit

Committed by Jiri Slaby
Impact: documentation update

Commit a6523748 (paravirt/x86, 64-bit: move __PAGE_OFFSET to leave a space for hypervisor) changed the address space layout without changing the documentation. Change it according to the code change -- the direct mapping start moves from ffff810000000000 to ffff880000000000, which gives 57 TiB, something between 45 and 46 bits.

Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
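A quick check of the figures quoted above (illustrative arithmetic only): the start of the direct mapping moved up by 7 TiB, and 57 TiB indeed falls between 2^45 and 2^46 bytes.

```c
#include <stdio.h>

int main(void)
{
	unsigned long long old_start = 0xffff810000000000ULL;
	unsigned long long new_start = 0xffff880000000000ULL;

	printf("start moved up by %llu TiB\n", (new_start - old_start) >> 40); /* 7 */
	printf("2^45 = %llu TiB, 2^46 = %llu TiB\n",
	       (1ULL << 45) >> 40, (1ULL << 46) >> 40); /* 32 and 64: 57 lies in between */
	return 0;
}
```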
- 31 May 2008, 1 commit

Committed by H. Peter Anvin
The current organization of the x86 documentation makes it appear as if the "i386" documentation doesn't apply to x86-64, which it does. Thus, move that documentation into Documentation/x86, and move the x86-64-specific stuff into Documentation/x86/x86_64, with the eventual goal of moving stuff that isn't actually 64-bit specific back into Documentation/x86.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
- 25 May 2008, 1 commit

Committed by Jiri Slaby
Commit 85eb69a1 introduced a 512 MiB sized kernel and a 1.5 GiB sized module space but omitted to update the documentation accordingly. Fix that.

[Wasn't the hole an intentional protection hole?]

Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: hpa@zytor.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
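For orientation, the two sizes mentioned above add up to 2 GiB, which is roughly the region at the very top of the virtual address space that holds the kernel image and modules; a trivial check (illustrative arithmetic, assuming nothing beyond the sizes quoted in the message):

```c
#include <stdio.h>

int main(void)
{
	unsigned long long kernel_image = 512ULL << 20;          /* 512 MiB */
	unsigned long long modules      = 3ULL * (512ULL << 20); /* 1.5 GiB */

	printf("combined: %llu GiB\n", (kernel_image + modules) >> 30); /* 2 GiB */
	return 0;
}
```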
- 17 Oct 2007, 1 commit

Committed by Christoph Lameter
x86_64 uses 2M page table entries to map its 1-1 kernel space. We also implement the virtual memmap using 2M page table entries. So there is no additional runtime overhead over FLATMEM, although initialisation is slightly more complex.

As FLATMEM still references memory to obtain the mem_map pointer and SPARSEMEM_VMEMMAP uses a compile-time constant, SPARSEMEM_VMEMMAP should be superior. With this, SPARSEMEM becomes the most efficient way of handling virt_to_page, pfn_to_page and friends for UP, SMP and NUMA on x86_64.

[apw@shadowen.org: code resplit, style fixups]
[apw@shadowen.org: vmemmap x86_64: ensure end of section memmap is initialised]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Cc: Andi Kleen <ak@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
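As a sketch of why the compile-time constant matters: with a virtually mapped memmap, the pfn/page conversions reduce to pointer arithmetic against a fixed base, with no memory access to fetch a mem_map pointer first. The struct and helper names below are stand-ins for illustration (the base address is the x86_64 vmemmap region documented in mm.txt), not the kernel's real definitions.

```c
#include <stdio.h>
#include <stdint.h>

/* Stand-in for the kernel's struct page. */
struct page { uint64_t flags; };

/* Fixed virtual base of the memmap array (x86_64 vmemmap region). */
#define VMEMMAP_START ((struct page *)0xffffea0000000000ULL)

/* pfn -> page: one addition against a compile-time constant,
 * rather than a load of a mem_map pointer as in FLATMEM. */
static struct page *pfn_to_page_sketch(uint64_t pfn)
{
	return VMEMMAP_START + pfn;
}

static uint64_t page_to_pfn_sketch(struct page *page)
{
	return (uint64_t)(page - VMEMMAP_START);
}

int main(void)
{
	struct page *p = pfn_to_page_sketch(0x12345);

	/* Only address arithmetic here; nothing is dereferenced. */
	printf("page for pfn 0x12345 at %p (pfn back: %#llx)\n",
	       (void *)p, (unsigned long long)page_to_pfn_sketch(p));
	return 0;
}
```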
- 13 Feb 2007, 1 commit

Committed by Randy Dunlap
Fix typos. Lots of whitespace changes for readability and consistency.

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andi Kleen <ak@suse.de>
- 15 Nov 2005, 1 commit

Committed by Andi Kleen
I got some questions on this, so just fix up the documentation.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
- 17 Apr 2005, 1 commit

Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!