- 12 October 2015, 5 commits
-
-
Committed by Taku Izumi
This patch renames print_efi_memmap() to efi_print_memmap() and makes it a global function so that we can invoke it outside of arch/x86/platform/efi/efi.c. Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Xishi Qiu <qiuxishi@huawei.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by Matt Fleming
The EFI Graphics Output Protocol uses 64-bit frame buffer addresses but these get truncated to 32-bit by the EFI boot stub when storing the address in the 'lfb_base' field of 'struct screen_info'. Add an 'ext_lfb_base' field for the upper 32 bits of the frame buffer address and set VIDEO_CAPABILITY_64BIT_BASE when the field is usable. It turns out that the reason no one has required this support so far is that there's actually code in Tianocore to "downgrade" PCI resources that have option ROMs and 64-bit BARs from 64-bit to 32-bit to cope with legacy option ROMs that can't handle 64-bit addresses. The upshot is that basically all GOP devices in the wild use a 32-bit frame buffer address. Still, it is possible to build firmware that uses a full 64-bit GOP frame buffer address. Chad did, which led to him reporting this issue. Add support in anticipation of GOP devices using 64-bit addresses more widely, and so that efifb works out of the box when that happens. Reported-by: Chad Page <chad.page@znyx.com> Cc: Pete Hawkins <pete.hawkins@znyx.com> Acked-by: Peter Jones <pjones@redhat.com> Cc: Matthew Garrett <mjg59@srcf.ucam.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
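As a hedged illustration (not from the patch itself), a consumer such as efifb could reassemble the full address like this; the field names follow 'struct screen_info', and the capability flag name is taken from the mainline header:

    /* Sketch: combine the low and high halves of the GOP frame buffer
     * address when the firmware advertises a 64-bit capable base. */
    static u64 gop_fb_base(const struct screen_info *si)
    {
            u64 base = si->lfb_base;

            if (si->capabilities & VIDEO_CAPABILITY_64BIT_BASE)
                    base |= (u64)si->ext_lfb_base << 32;

            return base;
    }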
-
Committed by Leif Lindholm
As we now have a common debug infrastructure between core and arm64 efi, drop the bit of the interface passing verbose output flags around. Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Mark Salter <msalter@redhat.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by Leif Lindholm
Now that we have an efi=debug command line option in the core code, use this instead of the arm64-specific uefi_debug option. Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Mark Salter <msalter@redhat.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by Leif Lindholm
fed6cefe ("x86/efi: Add a "debug" option to the efi= cmdline") adds the DBG flag, but does so for x86 only. Move this early param parsing to core code. Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Mark Salter <msalter@redhat.com> Cc: Borislav Petkov <bp@suse.de> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
- 14 September 2015, 1 commit
-
-
Table 8 of UEFI 2.5 section 2.3.6.1 defines mappings from EFI memory types to MAIR attribute encodings for arm64. If the physical address has memory attributes defined by the EFI memmap as EFI_MEMORY_[UC|WC|WT], return the appropriate page protection type according to the UEFI spec. Otherwise, return PAGE_KERNEL. Signed-off-by: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com> [ Small stylistic tweaks. ] Reviewed-by: Matt Fleming <matt.fleming@intel.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/1441372302-23242-2-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
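A minimal sketch of that lookup, assuming the arm64 protection macro names (the real helper lives in the arm64 ACPI/APEI glue):

    static pgprot_t efi_mem_attr_to_pgprot(u64 phys)
    {
            u64 attr = efi_mem_attributes(phys);

            /* Ordering follows Table 8 of UEFI 2.5, section 2.3.6.1 */
            if (attr & EFI_MEMORY_UC)
                    return __pgprot(PROT_DEVICE_nGnRnE);    /* uncached */
            if (attr & EFI_MEMORY_WC)
                    return __pgprot(PROT_NORMAL_NC);        /* write-combine */
            if (attr & EFI_MEMORY_WT)
                    return __pgprot(PROT_NORMAL_WT);        /* write-through */

            return PAGE_KERNEL;                             /* default */
    }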
-
- 08 August 2015, 5 commits
-
-
UEFI spec 2.5 section 2.3.6.1 defines that EFI_MEMORY_[UC|WC|WT|WB] are possible EFI memory types for AArch64. Each of those EFI memory types is mapped to a corresponding AArch64 memory type, so we need to define PROT_DEVICE_nGnRnE and PROT_NORMAL_WT additionally. MT_NORMAL_WT is defined, and its encoding is added to MAIR_EL1 when initializing the CPU. Signed-off-by: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1438936621-5215-6-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
... to allow an arch specific implementation of getting the page protection type associated with a physical address. On x86, we currently have no way to look up the EFI memory map attributes for a region in a consistent way, because the memmap is discarded after efi_free_boot_services(). So if you call efi_mem_attributes() during boot and at runtime, you could theoretically see different attributes. Since we are yet to see any x86 platforms that require anything other than PAGE_KERNEL (some arm64 platforms require the equivalent of PAGE_KERNEL_NOCACHE), return that until we know differently. Signed-off-by: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1438936621-5215-5-git-send-email-matt@codeblueprint.co.uk [ Small fixes to spelling. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
x86 and ia64 implement efi_mem_attributes() differently. This function needs to be available for other architectures (such as arm64) as well, such as for the purpose of ACPI/APEI. ia64 EFI does not set up a 'memmap' variable and does not set the EFI_MEMMAP flag, so it needs to have its unique implementation of efi_mem_attributes(). Move the efi_mem_attributes() implementation from x86 to the core EFI code, and declare it with __weak. It is recommended that other architectures should not override the default implementation. Signed-off-by: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1438936621-5215-4-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
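A sketch of the resulting core default, close to but not necessarily identical with the merged code:

    /*
     * Weak default: walk the EFI memory map and return the attributes
     * of the region containing phys_addr, or 0 without an EFI memmap.
     */
    u64 __weak efi_mem_attributes(unsigned long phys_addr)
    {
            efi_memory_desc_t *md;

            if (!efi_enabled(EFI_MEMMAP))
                    return 0;

            for_each_efi_memory_desc(&memmap, md) {
                    if (phys_addr >= md->phys_addr &&
                        phys_addr < (md->phys_addr +
                                     (md->num_pages << EFI_PAGE_SHIFT)))
                            return md->attribute;
            }
            return 0;
    }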
-
Committed by Matt Fleming
This reverts commit: aeffc492 ("x86/efi: Request desired alignment via the PE/COFF headers") Linn reports that Signtool complains that kernels built with CONFIG_EFI_STUB=y are violating the PE/COFF specification because the 'SizeOfImage' field is not a multiple of 'SectionAlignment'. This violation was introduced as an optimisation to skip having the kernel relocate itself during boot and instead have the firmware place it at a correctly aligned address. No one else has complained and I'm not aware of any firmware implementations that refuse to boot with commit aeffc492, but it's a real bug, so revert the offending commit. Reported-by: Linn Crosetto <linn@hp.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michael Brown <mbrown@fensystems.co.uk> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1438936621-5215-3-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Matt Fleming
It's totally legitimate, per the ACPI spec, for the firmware to set the BGRT 'status' field to zero to indicate that the BGRT image isn't being displayed, and we shouldn't be printing an error message in that case because it's just noise for users. So swap pr_err() for pr_debug(). However, Josh points out that it still makes sense to test the validity of the upper 7 bits of the 'status' field, since they're marked as "reserved" in the spec and must be zero. If firmware violates this it really *is* an error. Reported-by: Tom Yan <tom.ty89@gmail.com> Tested-by: Tom Yan <tom.ty89@gmail.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matthew Garrett <mjg59@srcf.ucam.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1438936621-5215-2-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
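A sketch of the resulting checks; the 0xfe mask follows from "the upper 7 bits of the 8-bit status field must be zero", and the message strings are assumptions:

    /* Reserved bits set: genuinely invalid firmware, keep it an error. */
    if (bgrt_tab->status & 0xfe) {
            pr_err("Ignoring BGRT: reserved status bits are non-zero %u\n",
                   bgrt_tab->status);
            return;
    }

    /* Image valid but not being displayed: fine per the ACPI spec. */
    if (bgrt_tab->status != 1) {
            pr_debug("Ignoring BGRT: invalid status %u (expected 1)\n",
                     bgrt_tab->status);
            return;
    }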
-
- 07 August 2015, 4 commits
-
-
Committed by Vineet Gupta
The increment of the delay counter was 2 instructions: Arithmetic Shift Left (ASL) + set to 1 on overflow. This can be done in 1 using Rotate Left (ROL). Suggested-by: Nigel Topham <ntopham@synopsys.com> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
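In C terms, a rotate-left of a power-of-two counter both doubles it and wraps it back to 1 on overflow, so the shift plus conditional reset collapses into one operation; a conceptual sketch (the actual code is ARC assembly):

    /* Rotate left by one: 1 -> 2 -> 4 -> ... -> 0x80000000 -> 1 */
    static inline unsigned int delay_next(unsigned int delay)
    {
            return (delay << 1) | (delay >> 31);
    }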
-
Committed by David S. Miller
If we have a series of events from userspace, with %fprs=FPRS_FEF, like follows: ETRAP ETRAP VIS_ENTRY(fprs=0x4) VIS_EXIT RTRAP (kernel FPU restore with fpu_saved=0x4) RTRAP We will not restore the user registers that were clobbered by the FPU-using kernel code in the inner-most trap. Traps allocate FPU save slots in the thread struct, and FPU-using sequences save the "dirty" FPU registers only. This works at the initial trap level because all of the registers get recorded into the top-level FPU save area, and we'll return to userspace with the FPU disabled so that any FPU use by the user will take an FPU-disabled trap wherein we'll load the registers back up properly. But this is not how trap returns from kernel to kernel operate. The simplest fix for this bug is to always save all FPU register state for anything other than the top-most FPU save area. Getting rid of the optimized inner-slot FPU saving code ends up making VISEntryHalf degenerate into plain VISEntry. Longer term we need to do something smarter to reinstate the partial save optimizations. Perhaps the fundamental error is having trap entry and exit allocate FPU save slots and restore register state. Instead, the VISEntry et al. calls should be doing that work. This bug is about two decades old. Reported-by: James Y Knight <jyknight@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Amanieu d'Antras
This function may copy the si_addr_lsb, si_lower and si_upper fields to user mode when they haven't been initialized, which can leak kernel stack data to user mode. Just checking the value of si_code is insufficient because the same si_code value is shared between multiple signals. This is solved by checking the value of si_signo in addition to si_code. Signed-off-by: Amanieu d'Antras <amanieu@gmail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
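A hedged sketch of the pattern the fix applies in copy_siginfo_to_user(); the exact set of guarded fields varies per architecture:

    /*
     * si_addr_lsb is only meaningful for the SIGBUS memory-error
     * codes, so gate the copy on si_signo as well as si_code.
     */
    if (from->si_signo == SIGBUS &&
        (from->si_code == BUS_MCEERR_AR || from->si_code == BUS_MCEERR_AO))
            err |= __put_user(from->si_addr_lsb, &to->si_addr_lsb);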
-
Committed by Amanieu d'Antras
This function can leak kernel stack data when the user siginfo_t has a positive si_code value. The top 16 bits of si_code describe which fields in the siginfo_t union are active, but they are treated inconsistently between copy_siginfo_from_user32, copy_siginfo_to_user32 and copy_siginfo_to_user. copy_siginfo_from_user32 is called from rt_sigqueueinfo and rt_tgsigqueueinfo, in which the user has full control over the top 16 bits of si_code. This fixes the following information leaks: x86: 8 bytes leaked when sending a signal from a 32-bit process to itself. This leak grows to 16 bytes if the process uses x32. (si_code = __SI_CHLD) x86: 100 bytes leaked when sending a signal from a 32-bit process to a 64-bit process. (si_code = -1) sparc: 4 bytes leaked when sending a signal from a 32-bit process to a 64-bit process. (si_code = any) parisc and s390 have similar bugs, but they are not vulnerable because rt_[tg]sigqueueinfo have checks that prevent sending a positive si_code to a different process. These bugs are also fixed for consistency. Signed-off-by: Amanieu d'Antras <amanieu@gmail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Chris Metcalf <cmetcalf@ezchip.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 August 2015, 2 commits
-
-
Committed by Alex Williamson
The patch was munged on commit to re-order these tests, resulting in excessive warnings when trying to do device assignment. Return to the original ordering: https://lkml.org/lkml/2015/7/15/769 Fixes: 3e5d2fdc ("KVM: MTRR: simplify kvm_mtrr_get_guest_memory_type") Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Reviewed-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Vineet Gupta
KGDB fails to build after f51e2f19 ("ARC: make sure instruction_pointer() returns unsigned value") The hack to force one specific reg to unsigned backfired. There's no reason to keep the regs signed after all. | CC arch/arc/kernel/kgdb.o |../arch/arc/kernel/kgdb.c: In function 'kgdb_trap': | ../arch/arc/kernel/kgdb.c:180:29: error: lvalue required as left operand of assignment | instruction_pointer(regs) -= BREAK_INSTR_SIZE; Reported-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com> Fixes: f51e2f19 ("ARC: make sure instruction_pointer() returns unsigned value") Cc: Alexey Brodkin <abrodkin@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 04 August 2015, 8 commits
-
-
Committed by Roger Quadros
This register is required to be passed to the SATA PHY driver to work around errata i783 (SATA Lockup After SATA DPLL Unlock/Relock). Signed-off-by: Roger Quadros <rogerq@ti.com> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
-
Committed by Vineet Gupta
The previous commit for delayed retry of SCOND needs some fine tuning for spin locks. The backoff from delayed retry in conjunction with spin looping of the lock itself can potentially cause the delay counter to reach high values. So to provide fairness to any lock operation, after a lock "seems" available (i.e. just before the first SCOND try), reset the delay counter back to the starting value of 1. Essentially, reset delay to 1 for a new spin-wait-loop-acquire cycle. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
This is to work around the llock/scond livelock. HS38x4 could get into a LLOCK/SCOND livelock in case of multiple overlapping coherency transactions in the SCU. The exclusive line state keeps rotating among contending cores, leading to a never ending cycle. So break the cycle by deferring the retry of the failed exclusive access (SCOND). The actual delay needed is a function of the number of contending cores as well as the unrelated coherency traffic from other cores. To keep the code simple, start off with a small delay of 1, which would suffice most cases, and in case of contention double the delay. Eventually the delay is sufficient such that the coherency pipeline is drained, thus a subsequent exclusive access would succeed. Link: http://lkml.kernel.org/r/1438612568-28265-1-git-send-email-vgupta@synopsys.com Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
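Conceptually the retry loop looks like this; llock_scond_try() and DELAY_MAX are hypothetical stand-ins, as the real implementation is ARC inline assembly:

    #define DELAY_MAX       1024    /* illustrative cap, not the real tuning */

    /* llock_scond_try() models one LLOCK/SCOND attempt (hypothetical). */
    static void lock_with_backoff(int *lock)
    {
            unsigned int delay = 1, i;

            while (!llock_scond_try(lock)) {
                    for (i = 0; i < delay; i++)     /* back off */
                            cpu_relax();
                    if (delay < DELAY_MAX)
                            delay <<= 1;            /* double under contention */
            }
    }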
-
Committed by Vineet Gupta
With LLOCK/SCOND, the rwlock counter can be atomically updated w/o need for a guarding spin lock. This in turn elides the EXchange instruction based spinning, which causes the cacheline to transition to exclusive state; concurrent spinning across cores would cause the line to keep bouncing around. The LLOCK/SCOND based implementation is superior as spinning on LLOCK keeps the cacheline in shared state. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
The current spin_lock uses the EXchange instruction to implement the atomic test and set of the lock location (it reads the orig value and STs 1). This however forces the cacheline into exclusive state (because of the ST) and concurrent loops in multiple cores will bounce the line around between cores. Instead, use LLOCK/SCOND to implement the atomic test and set, which is better as the line stays in shared state while the lock spins on LLOCK. The real motivation of this change however is to make way for future changes in atomics to implement delayed retry (with backoff). An initial experiment with delayed retry in atomics combined with the orig EX based spinlock was a total disaster (it broke even LMBench) as struct sock has a cache line sharing an atomic_t and spinlock. The tight spinning on the lock caused the atomic retry to keep backing off such that it would never finish. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
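A sketch of the LLOCK/SCOND acquire loop, modelled on the arch/arc spinlock code; treat the exact constraints and symbol names as approximations:

    static inline void arch_spin_lock_sketch(arch_spinlock_t *lock)
    {
            unsigned int val;

            __asm__ __volatile__(
            "1:     llock   %[val], [%[slock]]      \n"
            "       breq    %[val], %[LOCKED], 1b   \n" /* spin: line stays shared */
            "       scond   %[LOCKED], [%[slock]]   \n" /* atomic attempt */
            "       bnz     1b                      \n" /* SCOND failed, retry */
            : [val] "=&r" (val)
            : [slock] "r" (&(lock->slock)),
              [LOCKED] "r" (__ARCH_SPIN_LOCK_LOCKED__)
            : "memory", "cc");
    }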
-
Committed by Vineet Gupta
This reduces the diff in forthcoming patches and also helps to better understand the incremental changes to the inline asm. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
Extended testing of the quad core configuration revealed that this fix was insufficient. Specifically, LTP open posix shm_op/23-1 would cause the hardware livelock in the llock/scond loop in update_cpu_load_active(). So remove this and make way for a proper workaround. This reverts commit a5c8b52a. Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 03 August 2015, 1 commit
-
-
Committed by Vineet Gupta
With the HS 2.1 release, the peripheral space register no longer contains the uncached space specifics, causing the kernel to panic early on. So read the newer NON VOLATILE AUX register to get that info. Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 01 August 2015, 1 commit
-
-
Committed by Murali Karicheri
All of the keystone devices have a separate register to hold the post divider value for the main pll clock. Currently the fixed-postdiv value used for k2hk/l/e SoCs works by sheer luck, as u-boot happens to use a value of 2 for this. Now that we have fixed this in the pll clock driver, change the dt bindings for the same. Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Acked-by: Santosh Shilimkar <ssantosh@kernel.org> Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 31 July 2015, 6 commits
-
-
Committed by Rameshwar Prasad Sahu
There is an overlap in the dma ring cmd csr region due to sharing of the ethernet ring cmd csr region. This patch fixes the resource overlap by mapping the entire dma ring cmd csr region. Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Committed by Andy Lutomirski
modify_ldt() has questionable locking and does not synchronize threads. Improve it: redesign the locking and synchronize all threads' LDTs using an IPI on all modifications. This will dramatically slow down modify_ldt in multithreaded programs, but there shouldn't be any multithreaded programs that care about modify_ldt's performance in the first place. This fixes some fallout from the CVE-2015-5157 fixes. Signed-off-by: Andy Lutomirski <luto@kernel.org> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jan Beulich <jbeulich@suse.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: security@kernel.org <security@kernel.org> Cc: <stable@vger.kernel.org> Cc: xen-devel <xen-devel@lists.xen.org> Link: http://lkml.kernel.org/r/4c6978476782160600471bd865b318db34c7b628.1438291540.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
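A condensed sketch of the publish-then-IPI scheme; names follow the mainline patch, but this omits the allocation and freeing of the new LDT:

    static void flush_ldt(void *current_mm)
    {
            /* Runs on each CPU via IPI; reload only if that CPU uses the mm. */
            if (current->active_mm == current_mm)
                    load_mm_ldt(current->active_mm);
    }

    static void install_ldt(struct mm_struct *current_mm,
                            struct ldt_struct *ldt)
    {
            /* Publish the new LDT, then force every user of the mm to reload. */
            smp_store_release(&current_mm->context.ldt, ldt);
            on_each_cpu_mask(mm_cpumask(current_mm), flush_ldt, current_mm, true);
    }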
-
Committed by Andy Lutomirski
The update_va_mapping hypercall can fail if the VA isn't present in the guest's page tables. Under certain loads, this can result in an OOPS when the target address is in unpopulated vmap space. While we're at it, add comments to help explain what's going on. This isn't a great long-term fix. This code should probably be changed to use something like set_memory_ro. Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: David Vrabel <dvrabel@cantab.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jan Beulich <jbeulich@suse.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: security@kernel.org <security@kernel.org> Cc: <stable@vger.kernel.org> Cc: xen-devel <xen-devel@lists.xen.org> Link: http://lkml.kernel.org/r/0b0e55b995cda11e7829f140b833ef932fcabe3a.1438291540.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Jiang Liu
Commit d32932d0 ("x86/irq: Convert IOAPIC to use hierarchical irqdomain interfaces") introduced a regression which causes malfunction of interrupt lines. The reason is that the conversion of mp_check_pin_attr() missed to update the polarity selection of the interrupt pin with the caller provided setting and instead uses a stale attribute value. That in turn results in choosing the wrong interrupt flow handler. Use the caller supplied setting to configure the pin correctly, which also chooses the correct interrupt flow handler. This restores the original behaviour and on the affected machine/driver (Surface Pro 3, i2c controller) all IOAPIC IRQ configurations are identical to v4.1. Fixes: d32932d0 ("x86/irq: Convert IOAPIC to use hierarchical irqdomain interfaces") Reported-and-tested-by: Matt Fleming <matt@codeblueprint.co.uk> Reported-and-tested-by: Chen Yu <yu.c.chen@intel.com> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Chen Yu <yu.c.chen@intel.com> Cc: Yinghai Lu <yinghai@kernel.org> Link: http://lkml.kernel.org/r/1438242695-23531-1-git-send-email-jiang.liu@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Ricardo Neri
Even though it is documented how to specify efi parameters, it is possible to cause a kernel panic due to a dereference of a NULL pointer when parsing such parameters if "efi" alone is given: PANIC: early exception 0e rip 10:ffffffff812fb361 error 0 cr2 0 [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.2.0-rc1+ #450 [ 0.000000] ffffffff81fe20a9 ffffffff81e03d50 ffffffff8184bb0f 00000000000003f8 [ 0.000000] 0000000000000000 ffffffff81e03e08 ffffffff81f371a1 64656c62616e6520 [ 0.000000] 0000000000000069 000000000000005f 0000000000000000 0000000000000000 [ 0.000000] Call Trace: [ 0.000000] [<ffffffff8184bb0f>] dump_stack+0x45/0x57 [ 0.000000] [<ffffffff81f371a1>] early_idt_handler_common+0x81/0xae [ 0.000000] [<ffffffff812fb361>] ? parse_option_str+0x11/0x90 [ 0.000000] [<ffffffff81f4dd69>] arch_parse_efi_cmdline+0x15/0x42 [ 0.000000] [<ffffffff81f376e1>] do_early_param+0x50/0x8a [ 0.000000] [<ffffffff8106b1b3>] parse_args+0x1e3/0x400 [ 0.000000] [<ffffffff81f37a43>] parse_early_options+0x24/0x28 [ 0.000000] [<ffffffff81f37691>] ? loglevel+0x31/0x31 [ 0.000000] [<ffffffff81f37a78>] parse_early_param+0x31/0x3d [ 0.000000] [<ffffffff81f3ae98>] setup_arch+0x2de/0xc08 [ 0.000000] [<ffffffff8109629a>] ? vprintk_default+0x1a/0x20 [ 0.000000] [<ffffffff81f37b20>] start_kernel+0x90/0x423 [ 0.000000] [<ffffffff81f37495>] x86_64_start_reservations+0x2a/0x2c [ 0.000000] [<ffffffff81f37582>] x86_64_start_kernel+0xeb/0xef [ 0.000000] RIP 0xffffffff81ba2efc This panic is not reproducible with "efi=" as this will result in a non-NULL zero-length string. Thus, verify that the pointer to the parameter string is not NULL. This is consistent with other parameter-parsing functions which check for NULL pointers. Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Cc: Dave Young <dyoung@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
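The fix amounts to a NULL check at the top of the parser; a sketch, with the message text assumed:

    static int __init arch_parse_efi_cmdline(char *str)
    {
            if (!str) {
                    pr_crit("Config string not provided\n");
                    return -EINVAL;
            }

            if (parse_option_str(str, "old_map"))
                    set_bit(EFI_OLD_MEMMAP, &efi.flags);

            return 0;
    }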
-
Committed by Dmitry Skorodumov
The efi_info structure stores the low 32 bits of the memory map in efi_memmap and the high 32 bits in efi_memmap_hi. When constructing the pointer in setup_e820(), we need to take into account all 64 bits of the pointer, because on a 64-bit machine the function efi_get_memory_map() may return a full 64-bit pointer, and before this patch that pointer was truncated. The issue is triggered on a Parallels virtual machine and fixed with this patch. Signed-off-by: Dmitry Skorodumov <sdmitry@parallels.com> Cc: Denis V. Lunev <den@openvz.org> Cc: <stable@vger.kernel.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
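A sketch of the corrected pointer construction; the helper function is hypothetical, but the field names come from 'struct efi_info':

    static efi_memory_desc_t *e820_memmap_desc(struct efi_info *e, int i)
    {
            u64 m = e->efi_memmap;

    #ifdef CONFIG_X86_64
            m |= (u64)e->efi_memmap_hi << 32;       /* previously dropped */
    #endif
            return (efi_memory_desc_t *)(unsigned long)
                    (m + i * e->efi_memdesc_size);
    }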
-
- 30 July 2015, 3 commits
-
-
Committed by Christian Borntraeger
commit 785dbef4 ("KVM: s390: optimize round trip time in request handling") introduced a regression. This regression was seen with CPU hotplug in the guest and switching between 1 or 2 CPUs, which sets/resets the IBS control via a synced request. Whenever we make a synced request, we first set the vcpu->requests bit and then block the vcpu. The handler, on the other hand, unblocks itself, processes vcpu->requests (by clearing them) and unblocks itself once again. Now, if the requester sleeps between setting of vcpu->requests and blocking, the handler will clear the vcpu->requests bit and try to unblock itself (although no bit is set). When the requester wakes up, it blocks the VCPU and we have a blocked VCPU without requests. The solution is to always unset the block bit. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Fixes: 785dbef4 ("KVM: s390: optimize round trip time in request handling")
-
Committed by Alistair Popple
pnv_eeh_next_error() re-enables the eeh opal event interrupt but it gets called from a loop if there are more outstanding events to process, resulting in a warning due to enabling an already enabled interrupt. Instead the interrupt should only be re-enabled once the last outstanding event has been processed. Tested-by: Daniel Axtens <dja@axtens.net> Reported-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Alistair Popple <alistair@popple.id.au> Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Borkmann
With eBPF JIT compiler enabled on x86_64, I was able to reliably trigger the following general protection fault out of an eBPF program with a simple tail call, f.e. tracex5 (or a stripped down version of it): [ 927.097918] general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC [...] [ 927.100870] task: ffff8801f228b780 ti: ffff880016a64000 task.ti: ffff880016a64000 [ 927.102096] RIP: 0010:[<ffffffffa002440d>] [<ffffffffa002440d>] 0xffffffffa002440d [ 927.103390] RSP: 0018:ffff880016a67a68 EFLAGS: 00010006 [ 927.104683] RAX: 5a5a5a5a5a5a5a5a RBX: 0000000000000000 RCX: 0000000000000001 [ 927.105921] RDX: 0000000000000000 RSI: ffff88014e438000 RDI: ffff880016a67e00 [ 927.107137] RBP: ffff880016a67c90 R08: 0000000000000000 R09: 0000000000000001 [ 927.108351] R10: 0000000000000000 R11: 0000000000000000 R12: ffff880016a67e00 [ 927.109567] R13: 0000000000000000 R14: ffff88026500e460 R15: ffff880220a81520 [ 927.110787] FS: 00007fe7d5c1f740(0000) GS:ffff880265000000(0000) knlGS:0000000000000000 [ 927.112021] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 927.113255] CR2: 0000003e7bbb91a0 CR3: 000000006e04b000 CR4: 00000000001407e0 [ 927.114500] Stack: [ 927.115737] ffffc90008cdb000 ffff880016a67e00 ffff88026500e460 ffff880220a81520 [ 927.117005] 0000000100000000 000000000000001b ffff880016a67aa8 ffffffff8106c548 [ 927.118276] 00007ffcdaf22e58 0000000000000000 0000000000000000 ffff880016a67ff0 [ 927.119543] Call Trace: [ 927.120797] [<ffffffff8106c548>] ? lookup_address+0x28/0x30 [ 927.122058] [<ffffffff8113d176>] ? __module_text_address+0x16/0x70 [ 927.123314] [<ffffffff8117bf0e>] ? is_ftrace_trampoline+0x3e/0x70 [ 927.124562] [<ffffffff810c1a0f>] ? __kernel_text_address+0x5f/0x80 [ 927.125806] [<ffffffff8102086f>] ? print_context_stack+0x7f/0xf0 [ 927.127033] [<ffffffff810f7852>] ? __lock_acquire+0x572/0x2050 [ 927.128254] [<ffffffff810f7852>] ? __lock_acquire+0x572/0x2050 [ 927.129461] [<ffffffff8119edfa>] ? trace_call_bpf+0x3a/0x140 [ 927.130654] [<ffffffff8119ee4a>] trace_call_bpf+0x8a/0x140 [ 927.131837] [<ffffffff8119edfa>] ? trace_call_bpf+0x3a/0x140 [ 927.133015] [<ffffffff8119f008>] kprobe_perf_func+0x28/0x220 [ 927.134195] [<ffffffff811a1668>] kprobe_dispatcher+0x38/0x60 [ 927.135367] [<ffffffff81174b91>] ? seccomp_phase1+0x1/0x230 [ 927.136523] [<ffffffff81061400>] kprobe_ftrace_handler+0xf0/0x150 [ 927.137666] [<ffffffff81174b95>] ? seccomp_phase1+0x5/0x230 [ 927.138802] [<ffffffff8117950c>] ftrace_ops_recurs_func+0x5c/0xb0 [ 927.139934] [<ffffffffa022b0d5>] 0xffffffffa022b0d5 [ 927.141066] [<ffffffff81174b91>] ? seccomp_phase1+0x1/0x230 [ 927.142199] [<ffffffff81174b95>] seccomp_phase1+0x5/0x230 [ 927.143323] [<ffffffff8102c0a4>] syscall_trace_enter_phase1+0xc4/0x150 [ 927.144450] [<ffffffff81174b95>] ? seccomp_phase1+0x5/0x230 [ 927.145572] [<ffffffff8102c0a4>] ? syscall_trace_enter_phase1+0xc4/0x150 [ 927.146666] [<ffffffff817f9a9f>] tracesys+0xd/0x44 [ 927.147723] Code: 48 8b 46 10 48 39 d0 76 2c 8b 85 fc fd ff ff 83 f8 20 77 21 83 c0 01 89 85 fc fd ff ff 48 8d 44 d6 80 48 8b 00 48 83 f8 00 74 0a <48> 8b 40 20 48 83 c0 33 ff e0 48 89 d8 48 8b 9d d8 fd ff ff 4c [ 927.150046] RIP [<ffffffffa002440d>] 0xffffffffa002440d The code section with the instructions that traps points into the eBPF JIT image of the root program (the one invoking the tail call instruction). Using bpf_jit_disasm -o on the eBPF root program image: [...] 
4e: mov -0x204(%rbp),%eax 8b 85 fc fd ff ff 54: cmp $0x20,%eax <--- if (tail_call_cnt > MAX_TAIL_CALL_CNT) 83 f8 20 57: ja 0x000000000000007a 77 21 59: add $0x1,%eax <--- tail_call_cnt++ 83 c0 01 5c: mov %eax,-0x204(%rbp) 89 85 fc fd ff ff 62: lea -0x80(%rsi,%rdx,8),%rax <--- prog = array->prog[index] 48 8d 44 d6 80 67: mov (%rax),%rax 48 8b 00 6a: cmp $0x0,%rax <--- check for NULL 48 83 f8 00 6e: je 0x000000000000007a 74 0a 70: mov 0x20(%rax),%rax <--- GPF triggered here! fetch of bpf_func 48 8b 40 20 [ matches <48> 8b 40 20 ... from above ] 74: add $0x33,%rax <--- prologue skip of new prog 48 83 c0 33 78: jmpq *%rax <--- jump to new prog insns ff e0 [...] The problem is that rax has 5a5a5a5a5a5a5a5a, which suggests a tail call jump to map slot 0 is pointing to a poisoned page. The issue is the following: the lea instruction has a wrong offset, i.e. it should be ... lea 0x80(%rsi,%rdx,8),%rax ... but it actually seems to be ... lea -0x80(%rsi,%rdx,8),%rax ... where 0x80 is offsetof(struct bpf_array, prog), thus the offset needs to be positive instead of negative. Disassembling the interpreter, we btw similarly do: [...] c88: lea 0x80(%rax,%rdx,8),%rax <--- prog = array->prog[index] 48 8d 84 d0 80 00 00 00 c90: add $0x1,%r13d 41 83 c5 01 c94: mov (%rax),%rax 48 8b 00 [...] Now the other interesting fact is that this panic triggers only when things like CONFIG_LOCKDEP are being used. In that case offsetof(struct bpf_array, prog) starts at offset 0x80 and in the non-CONFIG_LOCKDEP case at offset 0x50. The reason is that the work_struct inside struct bpf_map grows by 48 bytes in my case due to the lockdep_map member (which also has CONFIG_LOCK_STAT enabled members). Changing the emitter to always use the 4 byte displacement in the lea instruction fixes the panic on my side. It increases the tail call instruction emission by 3 more bytes, but it should cover us from various combinations (and perhaps other future increases on related structures). After patch, disassembly: [...] 9e: lea 0x80(%rsi,%rdx,8),%rax <--- CONFIG_LOCKDEP/CONFIG_LOCK_STAT 48 8d 84 d6 80 00 00 00 a6: mov (%rax),%rax 48 8b 00 [...] [...] 9e: lea 0x50(%rsi,%rdx,8),%rax <--- No CONFIG_LOCKDEP 48 8d 84 d6 50 00 00 00 a6: mov (%rax),%rax 48 8b 00 [...] Fixes: b52f00e6 ("x86: bpf_jit: implement bpf_tail_call() helper") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
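Per the disassembly above, the fix is to always emit the lea with a 32-bit displacement; a sketch using the JIT's emit helper (macro name assumed from the x86 JIT source):

    /* prog = array->prog[index]
     * 48 8d 84 d6 <disp32>: lea disp32(%rsi,%rdx,8),%rax
     * A 4-byte displacement keeps offsets >= 0x80 (e.g. CONFIG_LOCKDEP
     * builds) from being mis-encoded as a sign-extended byte. */
    EMIT4_off32(0x48, 0x8D, 0x84, 0xD6,
                offsetof(struct bpf_array, prog));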
-
- 28 July 2015, 2 commits
-
-
Committed by Heiko Carstens
Stephen Powell reported the following crash on a z890 machine: Kernel BUG at 00000000001219d0 [verbose debug info unavailable] illegal operation: 0001 ilc:3 [#1] SMP Krnl PSW : 0704e00180000000 00000000001219d0 (init_cache_level+0x38/0xe0) R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 EA:3 Krnl Code: 00000000001219c2: a7840056 brc 8,121a6e 00000000001219c6: a7190000 lghi %r1,0 #00000000001219ca: eb101000004c ecag %r1,%r0,0(%r1) >00000000001219d0: a7390000 lghi %r3,0 00000000001219d4: e310f0a00024 stg %r1,160(%r15) 00000000001219da: a7080000 lhi %r0,0 00000000001219de: a7b9f000 lghi %r11,-4096 00000000001219e2: c0a0002899d9 larl %r10,634d94 Call Trace: [<0000000000478ee2>] detect_cache_attributes+0x2a/0x2b8 [<000000000097c9b0>] cacheinfo_sysfs_init+0x60/0xc8 [<00000000001001c0>] do_one_initcall+0x98/0x1c8 [<000000000094fdc2>] kernel_init_freeable+0x212/0x2d8 [<000000000062352e>] kernel_init+0x26/0x118 [<000000000062fd2e>] kernel_thread_starter+0x6/0xc The illegal operation was executed because of a missing facility check, which should have made sure that the ECAG execution would only be executed on machines which have the general-instructions-extension facility installed. Reported-and-tested-by: Stephen Powell <zlinuxman@wowway.com> Cc: stable@vger.kernel.org # v4.0+ Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
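The missing guard is a one-liner at the top of init_cache_level(); the facility number is assumed from the s390 facility list, where 34 is the general-instructions-extension facility:

    static int init_cache_level(unsigned int cpu)
    {
            /* ECAG is part of the general-instructions-extension facility. */
            if (!test_facility(34))
                    return -EOPNOTSUPP;

            /* ... ECAG-based cache level detection follows ... */
            return 0;
    }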
-
Committed by Ard Biesheuvel
At boot, the UTF-16 UEFI vendor string is copied from the system table into a char array with a size of 100 bytes. However, this size of 100 bytes is also used for memremapping() the source, which may not be sufficient if the vendor string exceeds 50 UTF-16 characters, and the placement of the vendor string inside a 4 KB page happens to leave the end unmapped. So use the correct '100 * sizeof(efi_char16_t)' for the size of the mapping. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Fixes: f84d0275 ("arm64: add EFI runtime services") Cc: <stable@vger.kernel.org> # 3.16+ Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
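A sketch of the corrected mapping; variable and helper names are assumed from the arm64 EFI init code:

    char vendor[100];       /* ASCII destination buffer */
    efi_char16_t *c16;

    /* Map all 100 UTF-16 characters (200 bytes), not just 100 bytes. */
    c16 = early_memremap(efi_to_phys(efi.systab->fw_vendor),
                         sizeof(vendor) * sizeof(efi_char16_t));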
-
- 27 July 2015, 2 commits
-
-
Committed by Andy Shevchenko
Since NULL is used as a valid clock object for optional clocks, we have to handle this case in the avr32 implementation as well. Fixes: e1824dfe (net: macb: Adjust tx_clk when link speed changes) Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
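A hedged sketch of the guard; the lock and helper names are assumed from the avr32 clock code:

    int clk_enable(struct clk *clk)
    {
            unsigned long flags;

            if (!clk)               /* NULL is a valid optional clock */
                    return 0;

            spin_lock_irqsave(&clk_lock, flags);
            __clk_enable(clk);
            spin_unlock_irqrestore(&clk_lock, flags);

            return 0;
    }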
-
Committed by Michael Holzheu
Currently we assume the following BPF to eBPF register mapping: - BPF_REG_A -> BPF_REG_7 - BPF_REG_X -> BPF_REG_8 Unfortunately this mapping is wrong. The correct mapping is: - BPF_REG_A -> BPF_REG_0 - BPF_REG_X -> BPF_REG_7 So clear the correct registers and use the BPF_REG_A and BPF_REG_X macros instead of BPF_REG_0/7. Fixes: 05462310 ("s390/bpf: Add s390x eBPF JIT compiler backend") Cc: stable@vger.kernel.org # 4.0+ Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
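The corrected mapping corresponds to the classic-BPF conversion macros (as in include/linux/filter.h):

    /* Classic BPF registers mapped onto eBPF registers */
    #define BPF_REG_A       BPF_REG_0       /* accumulator -> r0 */
    #define BPF_REG_X       BPF_REG_7       /* index register -> r7 */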
-