- 05 March 2014, 12 commits
-
-
Committed by Matt Fleming
Some EFI firmware makes use of the FPU during boot-time services, and clearing X86_CR4_OSFXSR by overwriting %cr4 causes the firmware to crash. Add the PAE bit explicitly instead of trashing the existing contents, leaving the rest of the bits as the firmware set them.

Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
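A minimal sketch of the read-modify-write idea (the function name is hypothetical; this illustrates the approach, not the verbatim stub code):

  static void efi_set_pae(void)
  {
          unsigned long cr4;

          /* Read the firmware's CR4, add what we need, write it back. */
          asm volatile("mov %%cr4, %0" : "=r" (cr4));
          cr4 |= X86_CR4_PAE;             /* leave OSFXSR and friends alone */
          asm volatile("mov %0, %%cr4" : : "r" (cr4));
  }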
-
Committed by Matt Fleming
Add the Kconfig option and bump the kernel header version so that boot loaders can check whether the handover code is available if they want. The xloadflags field in the bzImage header is also updated to reflect that the kernel supports both entry points by setting both XLF_EFI_HANDOVER_32 and XLF_EFI_HANDOVER_64 when CONFIG_EFI_MIXED=y. XLF_CAN_BE_LOADED_ABOVE_4G is disabled so that the kernel text is guaranteed to be addressable with 32 bits.

Note that no boot loaders should be using the bits set in xloadflags to decide which entry point to jump to. The entire scheme is based on the concept that 32-bit boot loaders always jump to ->handover_offset and 64-bit loaders always jump to ->handover_offset + 512. We set both bits merely to inform the boot loader that it's safe to use the native handover offset even if the machine type in the PE/COFF header claims otherwise.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by Matt Fleming
Set up the runtime services based on whether we're booting in EFI native mode or not. For non-native mode we need to thunk from 64-bit into 32-bit mode before invoking the EFI runtime services.

Using the runtime services after SetVirtualAddressMap() is slightly more complicated because we need to ensure that all the addresses we pass to the firmware are below the 4GB boundary so that they can be addressed with 32-bit pointers, see efi_setup_page_tables().

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by Matt Fleming
The EFI handover code only works if the "bitness" of the firmware and the kernel match, i.e. 64-bit firmware and 64-bit kernel - it is not possible to mix the two. This goes against the tradition that a 32-bit kernel can be loaded on a 64-bit BIOS platform without having to do anything special in the boot loader. Linux distributions, for one thing, regularly run only 32-bit kernels on their live media.

Despite having only one 'handover_offset' field in the kernel header, EFI boot loaders use two separate entry points to enter the kernel based on the architecture the boot loader was compiled for:

  (1) 32-bit loader: handover_offset
  (2) 64-bit loader: handover_offset + 512

Since we already have two entry points, we can leverage them to infer the bitness of the firmware we're running on, without requiring any boot loader modifications, by making (1) and (2) valid entry points for both CONFIG_X86_32 and CONFIG_X86_64 kernels.

To be clear, a 32-bit boot loader will always use (1) and a 64-bit boot loader will always use (2). It's just that, if a single kernel image supports (1) and (2), that image can be used with both 32-bit and 64-bit boot loaders, and hence both 32-bit and 64-bit EFI. (1) and (2) must be 512 bytes apart at all times, but that is already part of the boot ABI and we could never change that delta without breaking existing boot loaders anyhow.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
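A hedged sketch of how a boot loader picks its entry point under this scheme ('hdr' follows the x86 boot protocol header; 'loader_is_64bit' is an assumption standing in for the loader's own build knowledge):

  /* 32-bit loaders enter at handover_offset; 64-bit loaders at +512. */
  unsigned long entry = hdr->code32_start + hdr->handover_offset;

  if (loader_is_64bit)
          entry += 512;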
-
Committed by Matt Fleming
Make the decision about which code path to take at runtime, based on efi_early->is64.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by Matt Fleming
Implement the transition code to go from IA32e mode to protected mode in the EFI boot stub. This is required to use 32-bit EFI services from a 64-bit kernel.

Since the EFI boot stub is executed in an identity-mapped region, there's not much we need to do before invoking the 32-bit EFI boot services. However, we do reload the firmware's global descriptor table (efi32_boot_gdt) in case things like timer events are still running in the firmware.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by Matt Fleming
It's not possible to dereference the EFI System Table directly when booting a 64-bit kernel on a 32-bit EFI firmware because the pointer sizes don't match. In preparation for supporting the above use case, build a list of function pointers on boot so that callers don't have to worry about converting pointer sizes through multiple levels of indirection.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
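A sketch of the approach, assuming a fixed-width table built once early in boot (the structure and field names here are illustrative, not the exact layout from the patch):

  struct efi_early_funcs {
          u64     table;                  /* EFI System Table address   */
          u64     allocate_pool;          /* boot-service entry points, */
          u64     get_memory_map;         /* stored as 64-bit values so */
          u64     exit_boot_services;     /* either firmware width fits */
          bool    is64;                   /* 64-bit firmware?           */
  };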
-
Committed by Matt Fleming
The traditional approach of using machine-specific types such as 'unsigned long' does not allow the kernel to interact with firmware running in a different CPU mode, e.g. a 64-bit kernel with 32-bit EFI. Add distinct EFI structure definitions for both 32-bit and 64-bit so that we can use them in the 32-bit and 64-bit code paths.

Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
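The shape of the change, sketched for one table (abridged to two fields; the real definitions cover every EFI table the kernel touches):

  typedef struct {
          u32 get_time;           /* firmware pointers are 32 bits wide */
          u32 set_time;
          /* ... */
  } efi_runtime_services_32_t;

  typedef struct {
          u64 get_time;           /* and 64 bits wide here */
          u64 set_time;
          /* ... */
  } efi_runtime_services_64_t;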
-
Committed by Matt Fleming
Both efi_free_boot_services() and efi_enter_virtual_mode() are invoked from init/main.c, but only if the EFI runtime services are available. This is not the case for non-native boots, e.g. where a 64-bit kernel is booted with 32-bit EFI firmware. Delete the dead code.

Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by Matt Fleming
Now that we have EFI-specific page tables we need to look up the pgd when dumping those page tables, rather than assuming that swapper_pg_dir is the current pgdir.

Remove the double underscore prefix, which is usually reserved for static functions.

Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by Matt Fleming
Instead of littering main() with #ifdef CONFIG_EFI_STUB, move the logic into separate functions that do nothing if the config option isn't set. This makes main() much easier to read.

Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by Matt Fleming
handover_offset is now filled out by build.c. Don't set a default value, as it will be overwritten anyway.

Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
- 14 February 2014, 3 commits
-
-
Committed by Matt Fleming
Madper reported seeing the following crash,

  BUG: unable to handle kernel paging request at ffffffffff340003
  IP: [<ffffffff81d85ba4>] efi_bgrt_init+0x9d/0x133
  Call Trace:
   [<ffffffff81d8525d>] efi_late_init+0x9/0xb
   [<ffffffff81d68f59>] start_kernel+0x436/0x450
   [<ffffffff81d6892c>] ? repair_env_string+0x5c/0x5c
   [<ffffffff81d68120>] ? early_idt_handlers+0x120/0x120
   [<ffffffff81d685de>] x86_64_start_reservations+0x2a/0x2c
   [<ffffffff81d6871e>] x86_64_start_kernel+0x13e/0x14d

This is caused because the layout of the ACPI BGRT header on this system doesn't match the definition from the ACPI spec, and so we get a bogus physical address when dereferencing ->image_address in efi_bgrt_init(). Luckily the status field in the BGRT header clearly marks it as invalid, so we can check that field and skip BGRT initialisation.

Reported-by: Madper Xie <cxie@redhat.com>
Suggested-by: Toshi Kani <toshi.kani@hp.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
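A hedged sketch of the check described above, assuming the ACPI rule that the upper bits of 'status' are reserved and must be zero (the exact test and message in the patch may differ):

  if (bgrt_tab->status & 0xfe) {          /* reserved bits set: bogus header */
          pr_err("Ignoring BGRT: reserved status bits are non-zero %u\n",
                 bgrt_tab->status);
          return;
  }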
-
Committed by Borislav Petkov
We do not enable the new efi memmap on 32-bit and thus we need to run runtime_code_page_mkexec() unconditionally there. Fix that.

Reported-and-tested-by: Lejun Zhu <lejun.zhu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by H. Peter Anvin
If CONFIG_X86_SMAP is disabled, smap_violation() tests for conditions which are incorrect (as the AC flag doesn't matter), causing spurious faults.

The dynamic disabling of SMAP (nosmap on the command line) is fine because it disables X86_FEATURE_SMAP, therefore causing the static_cpu_has() to return false.

Found by Fengguang Wu's test system.

[ v3: move all predicates into smap_violation() ]
[ v2: use IS_ENABLED() instead of #ifdef ]

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Link: http://lkml.kernel.org/r/20140213124550.GA30497@localhost
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org> # v3.7+
-
- 13 February 2014, 1 commit
-
-
Committed by H. Peter Anvin
If SMAP support is not compiled into the kernel, don't enable SMAP in CR4 -- in fact, we should clear it, because the kernel doesn't contain the proper STAC/CLAC instructions for SMAP support.

Found by Fengguang Wu's test system.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Link: http://lkml.kernel.org/r/20140213124550.GA30497@localhost
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org> # v3.7+
-
- 12 February 2014, 1 commit
-
-
Committed by Steven Rostedt (Red Hat)
When the conversion was made to remove stop machine and use the breakpoint logic instead, the modification of the function graph caller was still done directly, as though it were being done under stop machine.

As it is not converted via stop machine anymore, there is a possibility that the code could be laid across cache lines and, if another CPU is accessing that function graph call when it is being updated, it could cause a General Protection Fault.

Convert the update of the function graph caller to use the breakpoint method as well.

Cc: H. Peter Anvin <hpa@zytor.com>
Cc: stable@vger.kernel.org # 3.5+
Fixes: 08d636b6 "ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 11 February 2014, 1 commit
-
-
Committed by Mel Gorman
Steven Noonan forwarded a user's report where they had a problem starting vsftpd on a Xen paravirtualized guest, with this in dmesg:

  BUG: Bad page map in process vsftpd  pte:8000000493b88165 pmd:e9cc01067
  page:ffffea00124ee200 count:0 mapcount:-1 mapping: (null) index:0x0
  page flags: 0x2ffc0000000014(referenced|dirty)
  addr:00007f97eea74000 vm_flags:00100071 anon_vma:ffff880e98f80380 mapping: (null) index:7f97eea74
  CPU: 4 PID: 587 Comm: vsftpd Not tainted 3.12.7-1-ec2 #1
  Call Trace:
    dump_stack+0x45/0x56
    print_bad_pte+0x22e/0x250
    unmap_single_vma+0x583/0x890
    unmap_vmas+0x65/0x90
    exit_mmap+0xc5/0x170
    mmput+0x65/0x100
    do_exit+0x393/0x9e0
    do_group_exit+0xcc/0x140
    SyS_exit_group+0x14/0x20
    system_call_fastpath+0x1a/0x1f
  Disabling lock debugging due to kernel taint
  BUG: Bad rss-counter state mm:ffff880e9ca60580 idx:0 val:-1
  BUG: Bad rss-counter state mm:ffff880e9ca60580 idx:1 val:1

The issue could not be reproduced under an HVM instance with the same kernel, so it appears to be exclusive to paravirtual Xen guests. He bisected the problem to commit 1667918b ("mm: numa: clear numa hinting information on mprotect"), which was also included in 3.12-stable.

The problem was related to how Xen translates ptes, because it was not accounting for the _PAGE_NUMA bit. This patch splits pte_present to add a pteval_present helper for use by Xen, so both bare metal and Xen use the same code when checking if a PTE is present.

[mgorman@suse.de: wrote changelog, proposed minor modifications]
[akpm@linux-foundation.org: fix typo in comment]

Reported-by: Steven Noonan <steven@uplinklabs.net>
Tested-by: Steven Noonan <steven@uplinklabs.net>
Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: <stable@vger.kernel.org> [3.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
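A sketch of the split, under the assumption that "present" must also cover the PROT_NONE and NUMA-hinting encodings that reuse non-present pte bits:

  static inline int pteval_present(pteval_t pteval)
  {
          /*
           * _PAGE_PROTNONE and _PAGE_NUMA are encoded in otherwise
           * non-present ptes, so both bare metal and Xen must treat
           * them as present when translating.
           */
          return pteval & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA);
  }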
-
- 09 February 2014, 1 commit
-
-
Committed by Steven Rostedt
When debug preempt is enabled, preempt_disable() can be traced by function and function graph tracing.

There's a place in the function graph tracer that calls trace_clock(), which eventually calls cycles_2_ns() outside of the recursion protection. When cycles_2_ns() calls preempt_disable() it gets traced, and the graph tracer will go into a recursive loop, causing a crash or, worse, a triple fault.

The simple fix is to use preempt_disable_notrace() in cycles_2_ns, which makes sense because the preempt_disable() tracing may use that code too, and tracing it, even with recursion protection, is rather pointless.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140204141315.2a968a72@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
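The essence of the fix, sketched (__cycles_2_ns() is a stand-in for the real per-cpu scaling math; only the preempt calls change):

  static inline unsigned long long cycles_2_ns(unsigned long long cyc)
  {
          unsigned long long ns;

          preempt_disable_notrace();      /* was: preempt_disable() */
          ns = __cycles_2_ns(cyc);        /* stand-in for the scaling math */
          preempt_enable_notrace();       /* was: preempt_enable() */
          return ns;
  }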
-
- 07 February 2014, 3 commits
-
-
Committed by Tang Chen
The following path will cause an array out-of-bounds access. memblock_add_region() will always set nid in memblock.reserved to MAX_NUMNODES. In numa_register_memblks(), after we set all nids to correct values in memblock.reserved, we call setup_node_data(), and use memblock_alloc_nid() to allocate memory, with nid set to MAX_NUMNODES.

The nodemask_t type can be seen as a bit array, and the index is 0 ~ MAX_NUMNODES-1. After that, when we call node_set() in numa_clear_kernel_node_hotplug(), the nodemask_t gets an index of value MAX_NUMNODES, which is out of [0 ~ MAX_NUMNODES-1]. See below:

  numa_init()
  |---> numa_register_memblks()
  |     |---> memblock_set_node(memory)    set correct nid in memblock.memory
  |     |---> memblock_set_node(reserved)  set correct nid in memblock.reserved
  |     |......
  |     |---> setup_node_data()
  |           |---> memblock_alloc_nid()   here, nid is set to MAX_NUMNODES (1024)
  |......
  |---> numa_clear_kernel_node_hotplug()
        |---> node_set()                   here, we have an index 1024, and overflowed

This patch moves nid setting to numa_clear_kernel_node_hotplug() to fix this problem.

Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Tested-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Tested-by: Dave Jones <davej@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Tang Chen
The on-stack variable numa_kernel_nodes in numa_clear_kernel_node_hotplug() was not initialized, so we need to initialize it.

[akpm@linux-foundation.org: use NODE_MASK_NONE, per David]

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Tested-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Reported-by: Dave Jones <davej@redhat.com>
Reported-by: David Rientjes <rientjes@google.com>
Tested-by: Dave Jones <davej@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
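The fix is essentially one line, sketched here for context:

  nodemask_t numa_kernel_nodes = NODE_MASK_NONE;  /* was: uninitialized */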
-
Committed by Borislav Petkov
For additional coverage, BorisO and friends unknowingly did swap AMD microcode with Intel microcode blobs in order to see what happens. What did happen on 32-bit was:

  BUG: unable to handle kernel paging request at be3a6008
  IP: [<c106d6b4>] load_microcode_amd+0x24/0x3f0
  *pdpt = 0000000000000000 *pde = 0000000000000000

because there was a valid initrd there, but without valid microcode in it, and the container check happened *after* the relocated ramdisk handling on 32-bit, which was clearly wrong.

While at it, take care of the ramdisk relocation on both 32- and 64-bit, as it is done on both. Also, comment what we're doing because this code is a bit tricky.

Reported-and-tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1391460104-7261-1-git-send-email-bp@alien8.de
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 06 February 2014, 2 commits
-
-
Committed by Matt Fleming
CONFIG_X86_32 doesn't map the boot services regions into the EFI memory map (see commit 70087011 ("x86, efi: Don't map Boot Services on i386")), and so efi_lookup_mapped_addr() will fail to return a valid address. Executing the ioremap() path in efi_bgrt_init() causes the following warning on x86-32 because we're trying to ioremap() RAM:

  WARNING: CPU: 0 PID: 0 at arch/x86/mm/ioremap.c:102 __ioremap_caller+0x2ad/0x2c0()
  Modules linked in:
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.13.0-0.rc5.git0.1.2.fc21.i686 #1
  Hardware name: DellInc. Venue 8 Pro 5830/09RP78, BIOS A02 10/17/2013
   00000000 00000000 c0c0df08 c09a5196 00000000 c0c0df38 c0448c1e c0b41310
   00000000 00000000 c0b37bc1 00000066 c043bbfd c043bbfd 00e7dfe0 00073eff
   00073eff c0c0df48 c0448ce2 00000009 00000000 c0c0df9c c043bbfd 00078d88
  Call Trace:
   [<c09a5196>] dump_stack+0x41/0x52
   [<c0448c1e>] warn_slowpath_common+0x7e/0xa0
   [<c043bbfd>] ? __ioremap_caller+0x2ad/0x2c0
   [<c043bbfd>] ? __ioremap_caller+0x2ad/0x2c0
   [<c0448ce2>] warn_slowpath_null+0x22/0x30
   [<c043bbfd>] __ioremap_caller+0x2ad/0x2c0
   [<c0718f92>] ? acpi_tb_verify_table+0x1c/0x43
   [<c0719c78>] ? acpi_get_table_with_size+0x63/0xb5
   [<c087cd5e>] ? efi_lookup_mapped_addr+0xe/0xf0
   [<c043bc2b>] ioremap_nocache+0x1b/0x20
   [<c0cb01c8>] ? efi_bgrt_init+0x83/0x10c
   [<c0cb01c8>] efi_bgrt_init+0x83/0x10c
   [<c0cafd82>] efi_late_init+0x8/0xa
   [<c0c9bab2>] start_kernel+0x3ae/0x3c3
   [<c0c9b53b>] ? repair_env_string+0x51/0x51
   [<c0c9b378>] i386_start_kernel+0x12e/0x131

Switch to using early_memremap(), which won't trigger this warning, and has the added benefit of more accurately conveying what we're trying to do - map a chunk of memory.

This patch addresses the following bug report:

  https://bugzilla.kernel.org/show_bug.cgi?id=67911

Reported-by: Adam Williamson <awilliam@redhat.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
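A hedged sketch of the substitution (sizes, variables, and the early_iounmap() pairing are assumptions for this kernel era, not the verbatim diff):

  /* was: image = ioremap(bgrt_tab->image_address, sizeof(bmp_header)); */
  image = early_memremap(bgrt_tab->image_address, sizeof(bmp_header));
  if (!image)
          return;

  memcpy(&bmp_header, image, sizeof(bmp_header));
  early_iounmap(image, sizeof(bmp_header));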
-
Committed by Ingo Molnar
It can take some time to validate the image, so make sure {allyes|allmod}config doesn't enable it. I'd say randconfig will cover it often enough, and the failure is also borderline build-coverage related: you cannot really make the decoder test fail via source-level changes, only with changes in the build environment, so I agree with Andi that we can disable this one too.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Suggested-and-acked-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 04 February 2014, 1 commit
-
-
Committed by Mukesh Rathor
During bootup, these CR4 flags are set in 'probe_page_size_mask'. But for AP processors they are not set, as we do not use 'secondary_startup_64', which the bare-metal kernels use. Instead, do it in this function, which we use in Xen PVH during our startup for AP processors. As such, fix it up to make sure we have those flags set.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 03 February 2014, 1 commit
-
-
Committed by Konrad Rzeszutek Wilk
This reverts commit 08ece5bb, as it breaks ARM builds and needs more attention on the ARM side.

Acked-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 02 February 2014, 1 commit
-
-
Committed by Petr Tesarik
With DISCONTIGMEM, the mapping between a pfn and its owning node is initialized using data provided by the BIOS. However, the initialization may fail if the extents are not aligned to the section boundary (64M).

The symptom of this bug is an early boot failure in pfn_to_page(), as it tries to access NODE_DATA(__nid) using an index from an uninitialized element of the physnode_map[] array.

While the bug is always present, it is more likely to be hit in kdump kernels on large machines, because:

1. The memory map for a kdump kernel is specified as exactmap, and exactmap is more likely to be unaligned.
2. Large reservations are more likely to span across a 64M boundary.

[ hpa: fixed incorrect use of "pfn" instead of "start" ]

Signed-off-by: Petr Tesarik <ptesarik@suse.cz>
Link: http://lkml.kernel.org/r/20140201133019.32e56f86@hananiah.suse.cz
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 31 January 2014, 6 commits
-
-
Committed by Dave Jones
Passing a freed 'pages' to free_xenballooned_pages will end badly on kernels with slub debug enabled. This looks out of place between the rc assign and the check, but we do want to kfree pages regardless of which path we take.

Signed-off-by: Dave Jones <davej@fedoraproject.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by Zoltan Kiss
The grant mapping API does m2p_override unnecessarily: only gntdev needs it; for blkback and future netback patches it just causes lock contention, as those pages never go to userspace. Therefore this series does the following:

- the original functions were renamed to __gnttab_[un]map_refs, with a new parameter m2p_override
- based on m2p_override they either follow the original behaviour, or just set the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now wrappers that call __gnttab_[un]map_refs with m2p_override false
- a new function gnttab_[un]map_refs_userspace provides the old behaviour

It also removes a stray space from page.h and changes ret to 0 if XENFEAT_auto_translated_physmap, as that is the only possible return value there.

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain the old behaviour where it is needed
- squash the patches into one

v4:
- move out the common bits from the m2p* functions, and pass pfn/mfn as parameters
- clear page->private before doing anything with the page, so m2p_find_override won't race with this

v5:
- change return value handling in __gnttab_[un]map_refs
- remove a stray space in page.h
- add detail on why ret = 0 now in some places

v6:
- don't pass pfn to the m2p* functions, just get it locally

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by Andrey Vagin
The SOFT_DIRTY bit shows that the content of memory was changed after a defined point in the past. mprotect() doesn't change the content of memory, so it must not change the SOFT_DIRTY bit.

This bug causes a malfunction: on the first iteration all pages are dumped. On other iterations only pages with the SOFT_DIRTY bit are dumped. So if the SOFT_DIRTY bit is cleared from a page by mistake, the page is not dumped and its content will be restored incorrectly.

This patch does nothing with _PAGE_SWP_SOFT_DIRTY, because pte_modify() is called only for present pages.

Fixes commit 0f8975ec ("mm: soft-dirty bits for user memory changes tracking").

Signed-off-by: Andrey Vagin <avagin@openvz.org>
Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
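A sketch of the intended behaviour, assuming _PAGE_SOFT_DIRTY is folded into the mask of bits pte_modify() preserves (the actual patch may express this differently):

  static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
  {
          pteval_t val = pte_val(pte);

          /* _PAGE_CHG_MASK is assumed to now include _PAGE_SOFT_DIRTY,
           * so the bit survives the protection change. */
          val &= _PAGE_CHG_MASK;
          val |= pgprot_val(newprot) & ~_PAGE_CHG_MASK;
          return __pte(val);
  }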
-
Committed by Prarit Bhargava
Further discussion here: http://marc.info/?l=linux-kernel&m=139073901101034&w=2

kbuild, the 0day kernel build service, outputs the warning:

  arch/x86/kernel/irq.c:333:1: warning: the frame size of 2056 bytes is larger than 2048 bytes [-Wframe-larger-than=]

because check_irq_vectors_for_cpu_disable() allocates two cpumasks on the stack. Fix this by moving the two cpumasks to a global file context.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Tested-by: David Rientjes <rientjes@google.com>
Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Link: http://lkml.kernel.org/r/1390915331-27375-1-git-send-email-prarit@redhat.com
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Seiji Aguchi <seiji.aguchi@hds.com>
Cc: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Janet Morgan <janet.morgan@intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Ruiv Wang <ruiv.wang@gmail.com>
Cc: Gong Chen <gong.chen@linux.intel.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
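The shape of the fix, sketched (the mask names are assumptions following the function's purpose, not the verbatim diff):

  /* With large NR_CPUS each cpumask can approach 1KB, too big for the
   * stack, so keep them at file scope instead of in the function frame. */
  static struct cpumask affinity_new, online_new;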
-
Committed by David Woodhouse
Both clang 3.5 and GCC 4.9 will support this (as of r199754 and r207196 respectively). Both have been tested to produce booting kernels when the 16-bit code is built with -m16. (Modulo LLVM PR3997, at least.)

[ hpa: folded test for -m16 into M16_CFLAGS ]

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Link: http://lkml.kernel.org/r/1390997807.20153.133.camel@i7.infradead.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by David Woodhouse
Commit dd78b973 ("x86, boot: Move CPU flags out of cpucheck") introduced ambiguous inline asm in the has_eflag() function. In 16-bit mode we want the instruction to be 'pushfl', but we just say 'pushf' and hope the compiler does what we wanted. When building with 'clang -m16', it won't, because clang doesn't use the horrid '.code16gcc' hack that even 'gcc -m16' uses internally.

Say what we mean and don't make the compiler make assumptions.

[ hpa: ideally we would be able to use the gcc %zN construct here, but that is broken for 64-bit integers in gcc < 4.5. The code with plain "pushf/popf" is fine for 32- or 64-bit mode, but not for 16-bit mode; in 16-bit mode those are 16-bit instructions in .code16 mode, and 32-bit instructions in .code16gcc mode. ]

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Link: http://lkml.kernel.org/r/1391079628.26079.82.camel@shinybook.infradead.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
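A sketch of the disambiguation, with the operand size spelled out per build (illustrative only; the variable name is an assumption):

  unsigned long flags;

  #ifdef CONFIG_X86_64
          asm volatile("pushfq ; popq %0" : "=rm" (flags));
  #else
          asm volatile("pushfl ; popl %0" : "=rm" (flags));  /* explicit 32-bit op */
  #endif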
-
- 30 January 2014, 7 commits
-
-
Committed by Andi Kleen
LTO requires consistent types of symbols over all files. So "nmi" cannot be declared as a char [] here; we need to use the correct function type.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1382458079-24450-8-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andi Kleen
These functions are called from inline assembler stubs, and thus need to be global and visible.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1382458079-24450-7-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andi Kleen
LTO in gcc 4.6/4.7 has trouble with global register variables. They were used to read the stack pointer. Use a simple inline assembler statement with a mov instead.

This also helps LLVM/clang, which does not support global register variables.

[ hpa: Ideally this should become a builtin in both gcc and clang. ]

v2: More general asm constraint. Fix description (Jan Beulich)

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1382458079-24450-6-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
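The replacement pattern, sketched for 64-bit (the 32-bit variant would use %esp; this illustrates the approach, not the exact kernel helper):

  static inline unsigned long current_stack_pointer(void)
  {
          unsigned long sp;

          /* A single mov replaces the global register variable. */
          asm("mov %%rsp, %0" : "=g" (sp));
          return sp;
  }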
-
Committed by Andi Kleen
The paravirt thunks use a hack of taking a static reference to a static function in order to reference that function from a top-level statement. This assumes that gcc always generates static function names in a specific format, which is not necessarily true.

Simply make these functions global and asmlinkage or __visible. This way the static __used variables are not needed and everything works. Functions with arguments are __visible to keep the register calling convention on 32-bit.

Changed in paravirt and in all users (Xen and vsmp).

v2: Use __visible for functions with arguments

Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Ido Yariv <ido@wizery.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1382458079-24450-5-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andi Kleen
The paravirt patching code assumes that it can reference a local assembler label between two different top-level assembler statements. This does not work with LTO, where the assembler code may end up in different assembler files.

Replace it with extern/global asm linkage labels. This also removes one redundant copy of the macro.

Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1382458079-24450-4-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Andi Kleen
- Make the C code used by the paravirt stubs visible
- Since they have to be global now, give them a more unique name.

Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1382458079-24450-3-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Paolo Bonzini
When Hyper-V hypervisor leaves are present, KVM must relocate its own leaves to 0x40000100, because Windows does not look for Hyper-V leaves at indices other than 0x40000000. In this case the KVM features are at 0x40000101, but the old code would always look at 0x40000001.

Fix by using kvm_cpuid_base(). This also requires making the function non-inline, since kvm_cpuid_base() is static.

Fixes: 1085ba7f
Cc: stable@vger.kernel.org
Cc: mtosatti@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
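A hedged sketch of the leaf scan this relies on (the signature-check helper is hypothetical; the real code compares the CPUID ebx/ecx/edx output against the "KVMKVMKVM" signature):

  static u32 kvm_cpuid_base(void)
  {
          u32 base;

          /* KVM may sit at 0x40000100 when Hyper-V leaves come first. */
          for (base = 0x40000000; base < 0x40010000; base += 0x100)
                  if (hypervisor_signature_matches(base, "KVMKVMKVM"))
                          return base;    /* features live at base + 1 */

          return 0;
  }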
-