1. 11 January 2020, 6 commits
    • efi/x86: Simplify mixed mode call wrapper · ea5e1919
      Committed by Ard Biesheuvel
      Calling 32-bit EFI runtime services from a 64-bit OS involves
      switching back to the flat mapping with a stack carved out of
      memory that is 32-bit addressable.
      
      There is no need to actually execute the 64-bit part of this
      routine from the flat mapping as well, as long as the entry
      and return address fit in 32 bits. There is also no need to
      preserve part of the calling context in global variables: we
      can simply push the old stack pointer value to the new stack,
      and keep the return address from the code32 section in EBX.
      
      While at it, move the conditional check whether to invoke
      the mixed mode version of SetVirtualAddressMap() into the
      64-bit implementation of the wrapper routine.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arvind Sankar <nivedita@alum.mit.edu>
      Cc: Matthew Garrett <mjg59@google.com>
      Cc: linux-efi@vger.kernel.org
      Link: https://lkml.kernel.org/r/20200103113953.9571-11-ardb@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ea5e1919
    • efi/x86: Simplify 64-bit EFI firmware call wrapper · e5f930fe
      Committed by Ard Biesheuvel
      The efi_call() wrapper used to invoke EFI runtime services serves
      a number of purposes:
      - realign the stack to 16 bytes
      - preserve FP and CR0 register state
      - translate from SysV to MS calling convention.
      
      Preserving CR0.TS is no longer necessary in Linux, and preserving the
      FP register state is also redundant in most cases, since efi_call() is
      almost always used from within the scope of a pair of kernel_fpu_begin()/
      kernel_fpu_end() calls, with the exception of the early call to
      SetVirtualAddressMap() and the SGI UV support code.
      
      So let's add a pair of kernel_fpu_begin()/_end() calls there as well,
      remove the unnecessary code from the assembly implementation of
      efi_call(), and keep only the pieces that deal with stack alignment
      and ABI translation.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arvind Sankar <nivedita@alum.mit.edu>
      Cc: Matthew Garrett <mjg59@google.com>
      Cc: linux-efi@vger.kernel.org
      Link: https://lkml.kernel.org/r/20200103113953.9571-10-ardb@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e5f930fe
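      A minimal sketch of the calling pattern described above, assuming a
      hypothetical do_svam_call() helper in place of the real wrapper; only
      kernel_fpu_begin()/kernel_fpu_end() and the EFI types are real kernel API:

      	#include <linux/efi.h>
      	#include <asm/fpu/api.h>

      	/* Hypothetical stand-in for the low-level SetVirtualAddressMap() call. */
      	extern efi_status_t do_svam_call(unsigned long size, unsigned long desc_size,
      					 u32 desc_version, efi_memory_desc_t *map);

      	static efi_status_t example_svam(unsigned long size, unsigned long desc_size,
      					 u32 desc_version, efi_memory_desc_t *map)
      	{
      		efi_status_t status;

      		kernel_fpu_begin();	/* save FP state once, around the firmware call */
      		status = do_svam_call(size, desc_size, desc_version, map);
      		kernel_fpu_end();

      		return status;
      	}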
    • efi/x86: Simplify i386 efi_call_phys() firmware call wrapper · a46d6740
      Committed by Ard Biesheuvel
      The variadic efi_call_phys() wrapper that exists on i386 was
      originally created to call into any EFI firmware runtime service,
      but in practice, we only use it once, to call SetVirtualAddressMap()
      during early boot.
      The flexibility provided by the variadic nature also makes it
      type unsafe, and makes the assembler code more complicated than
      needed, since it has to deal with an unknown number of arguments
      living on the stack.
      
      So clean this up by renaming the helper to efi_call_svam() and
      dropping the unneeded complexity. Let's also drop the reference
      to the efi_phys struct and grab the address from the EFI system
      table directly.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arvind Sankar <nivedita@alum.mit.edu>
      Cc: Matthew Garrett <mjg59@google.com>
      Cc: linux-efi@vger.kernel.org
      Link: https://lkml.kernel.org/r/20200103113953.9571-9-ardb@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a46d6740
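      The shape of the change, sketched as prototypes; the name efi_call_svam()
      comes from the commit message, while the exact argument list below is an
      assumption:

      	/* Before: variadic and type-unsafe; the asm side has to cope with an
      	 * unknown number of arguments living on the stack. */
      	efi_status_t efi_call_phys(void *fn, ...);

      	/* After (illustrative): one fixed, typed signature for the single
      	 * caller, SetVirtualAddressMap(). */
      	efi_status_t efi_call_svam(efi_set_virtual_address_map_t *svam,
      				   unsigned long memory_map_size,
      				   unsigned long descriptor_size,
      				   u32 descriptor_version,
      				   efi_memory_desc_t *virtual_map);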
    • efi/x86: Split SetVirtualAddresMap() wrappers into 32 and 64 bit versions · 69829470
      Committed by Ard Biesheuvel
      Split the phys_efi_set_virtual_address_map() routine into 32 and 64 bit
      versions, so we can simplify them individually in subsequent patches.
      
      There is very little overlap between the logic anyway, and this has
      already been factored out in prolog/epilog routines which are completely
      different between 32 bit and 64 bit. So let's take it one step further,
      and get rid of the overlap completely.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arvind Sankar <nivedita@alum.mit.edu>
      Cc: Matthew Garrett <mjg59@google.com>
      Cc: linux-efi@vger.kernel.org
      Link: https://lkml.kernel.org/r/20200103113953.9571-8-ardb@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      69829470
    • efi/x86: Split off some old memmap handling into separate routines · 98dd0e3a
      Committed by Ard Biesheuvel
      In a subsequent patch, we will fold the prolog/epilog routines that are
      part of the support code to call SetVirtualAddressMap() with a 1:1
      mapping into the callers. However, the 64-bit version mostly consists
      of ugly mapping code that is only used when efi=old_map is in effect,
      which is extremely rare. So let's move this code out of the way so it
      does not clutter the common code.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arvind Sankar <nivedita@alum.mit.edu>
      Cc: Matthew Garrett <mjg59@google.com>
      Cc: linux-efi@vger.kernel.org
      Link: https://lkml.kernel.org/r/20200103113953.9571-7-ardb@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      98dd0e3a
    • efi/x86: Map the entire EFI vendor string before copying it · ffc2760b
      Committed by Ard Biesheuvel
      Fix a couple of issues with the way we map and copy the vendor string:
      - we map only 2 bytes, which usually works since you get at least a
        page, but if the vendor string happens to cross a page boundary,
        a crash will result
      - only call early_memunmap() if early_memremap() succeeded, or we will
        call it with a NULL address, which it doesn't like;
      - while at it, switch to early_memremap_ro(), and to array indexing
        rather than pointer dereferencing to read the CHAR16 characters.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arvind Sankar <nivedita@alum.mit.edu>
      Cc: Matthew Garrett <mjg59@google.com>
      Cc: linux-efi@vger.kernel.org
      Fixes: 5b83683f ("x86: EFI runtime service support")
      Link: https://lkml.kernel.org/r/20200103113953.9571-5-ardb@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ffc2760b
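      A sketch of the fixed mapping pattern described above, assuming the
      firmware vendor pointer has already been read from the EFI system table
      (fw_vendor below stands in for that physical address):

      	char vendor[100] = "unknown";
      	efi_char16_t *c16;
      	int i;

      	/* Map the whole (maximum-length) string, not just 2 bytes, so a
      	 * string crossing a page boundary cannot fault. */
      	c16 = early_memremap_ro(fw_vendor, sizeof(vendor) * sizeof(efi_char16_t));
      	if (c16) {
      		/* Array indexing instead of pointer dereferencing. */
      		for (i = 0; i < ARRAY_SIZE(vendor) - 1 && c16[i]; i++)
      			vendor[i] = c16[i];
      		vendor[i] = '\0';
      		/* Only unmap what was successfully mapped. */
      		early_memunmap(c16, sizeof(vendor) * sizeof(efi_char16_t));
      	} else {
      		pr_err("Could not map the firmware vendor!\n");
      	}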
  2. 25 December 2019, 3 commits
  3. 04 December 2019, 1 commit
    • x86/efi: Update e820 with reserved EFI boot services data to fix kexec breakage · af164898
      Committed by Dave Young
      Michael Weiser reported that he got this error during a kexec reboot:
      
        esrt: Unsupported ESRT version 2904149718861218184.
      
      The ESRT memory stays in EFI boot services data, and it was reserved
      in the kernel via efi_mem_reserve(). The initial purpose of the reservation
      is to reuse the EFI boot services data across kexec reboots, for example
      the BGRT image data and, as Michael reported, some ESRT memory.
      
      But although the memory is reserved, it is not updated in the x86 E820 table,
      and kexec_file_load() iterates system RAM in the I/O resource list to find
      places for the kernel, initramfs and other data. In Michael's case the
      kexec-loaded initramfs overwrote the ESRT memory and the failure followed.
      
      Since kexec_file_load() depends on the E820 table being updated, just fix this
      by updating the reserved EFI boot services memory as reserved type in E820.
      
      Originally, any memory descriptor with the EFI_MEMORY_RUNTIME attribute was
      bypassed in the reservation code path because it was assumed to be reserved
      already.
      
      But the reservation is still needed across multiple kexec reboots, and that
      is the only case in which this code is reached, so just drop the check;
      everything then works without side effects.
      
      On my machine the ESRT memory sits in an EFI runtime data range, so it does
      not trigger the problem, but I tested successfully with BGRT instead:
      both kexec_load() and kexec_file_load() work, and kdump works as well.
      
      [ mingo: Edited the changelog. ]
      Reported-by: Michael Weiser <michael@weiser.dinsnail.net>
      Tested-by: Michael Weiser <michael@weiser.dinsnail.net>
      Signed-off-by: Dave Young <dyoung@redhat.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kexec@lists.infradead.org
      Cc: linux-efi@vger.kernel.org
      Link: https://lkml.kernel.org/r/20191204075233.GA10520@dhcp-128-65.nay.redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      af164898
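      A sketch of the E820 update described above, placed where the EFI boot
      services region is reserved via efi_mem_reserve(); addr and size stand
      for that region:

      	/* Mark the reserved EFI boot services range as reserved in the E820
      	 * table too, so kexec_file_load() will not place the kernel or the
      	 * initramfs on top of it. */
      	e820__range_update(addr, size, E820_TYPE_RAM, E820_TYPE_RESERVED);
      	e820__update_table(e820_table);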
  4. 11 November 2019, 2 commits
  5. 07 November 2019, 3 commits
    • x86/efi: Add efi_fake_mem support for EFI_MEMORY_SP · 199c8471
      Committed by Dan Williams
      Given that EFI_MEMORY_SP is a platform BIOS policy decision for marking
      memory ranges as "reserved for a specific purpose", there will inevitably
      be scenarios where the BIOS omits the attribute in situations where it
      is desired. Unlike other attributes, if the OS wants to reserve this
      memory from the kernel, the reservation needs to happen early in init. So
      early, in fact, that it needs to happen before e820__memblock_setup(),
      which is a prerequisite for efi_fake_memmap(), which wants to allocate
      memory for the updated table.
      
      Introduce an x86 specific efi_fake_memmap_early() that can search for
      attempts to set EFI_MEMORY_SP via efi_fake_mem and update the e820 table
      accordingly.
      
      The KASLR code that scans the command line looking for user-directed
      memory reservations also needs to be updated to consider
      "efi_fake_mem=nn@ss:0x40000" requests.
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      199c8471
    • x86/efi: EFI soft reservation to E820 enumeration · 262b45ae
      Committed by Dan Williams
      UEFI 2.8 defines an EFI_MEMORY_SP attribute bit to augment the
      interpretation of the EFI Memory Types as "reserved for a specific
      purpose".
      
      The proposed Linux behavior for specific purpose memory is that it is
      reserved for direct-access (device-dax) by default and not available for
      any kernel usage, not even as an OOM fallback.  Later, through udev
      scripts or another init mechanism, these device-dax claimed ranges can
      be reconfigured and hot-added to the available System-RAM with a unique
      node identifier. This device-dax management scheme implements "soft" in
      the "soft reserved" designation by allowing some or all of the
      reservation to be recovered as typical memory. This policy can be
      disabled at compile-time with CONFIG_EFI_SOFT_RESERVE=n, or runtime with
      efi=nosoftreserve.
      
      This patch introduces two new concepts at once, given the entanglement
      between early boot enumeration and memory that can optionally be
      reserved from the kernel page allocator by default. The new concepts
      are:
      
      - E820_TYPE_SOFT_RESERVED: Upon detecting the EFI_MEMORY_SP
        attribute on EFI_CONVENTIONAL memory, update the E820 map with this
        new type. Only perform this classification if the
        CONFIG_EFI_SOFT_RESERVE=y policy is enabled; otherwise treat it as
        typical RAM.
      
      - IORES_DESC_SOFT_RESERVED: Add a new I/O resource descriptor for
        a device driver to search iomem resources for application specific
        memory. Teach the iomem code to identify such ranges as "Soft Reserved".
      
      Note that the comment for do_add_efi_memmap() needed refreshing, since it
      seemed to imply that the EFI map might overflow the e820 table, but that
      has not been an issue since commit 7b6e4ba3 ("x86/boot/e820: Clean up the
      E820_X_MAX definition"), which removed the 128-entry limit for
      e820__range_add().
      
      A follow-on change integrates parsing of the ACPI HMAT to identify the
      node and sub-range boundaries of EFI_MEMORY_SP designated memory. For
      now, just identify and reserve memory of this type.
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: kbuild test robot <lkp@intel.com>
      Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      262b45ae
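      A sketch of the classification step described above, as it could look
      inside the loop that converts EFI memory descriptors into E820 entries;
      md is the current efi_memory_desc_t, and the runtime efi=nosoftreserve
      opt-out is not shown:

      	case EFI_CONVENTIONAL_MEMORY:
      		if (IS_ENABLED(CONFIG_EFI_SOFT_RESERVE) &&
      		    (md->attribute & EFI_MEMORY_SP))
      			e820_type = E820_TYPE_SOFT_RESERVED;
      		else if (md->attribute & EFI_MEMORY_WB)
      			e820_type = E820_TYPE_RAM;
      		else
      			e820_type = E820_TYPE_RESERVED;
      		break;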
    • x86/efi: Push EFI_MEMMAP check into leaf routines · 6950e31b
      Committed by Dan Williams
      In preparation for adding another EFI_MEMMAP-dependent call that needs
      to occur before e820__memblock_setup(), fix up the existing EFI calls to
      check for EFI_MEMMAP internally. This ends up being cleaner than the
      alternative of checking EFI_MEMMAP multiple times in setup_arch().
      Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      6950e31b
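      A sketch of the pattern: instead of guarding every call site in
      setup_arch(), the leaf routine bails out on its own when there is no EFI
      memory map. efi_find_mirror() is used here purely as an illustration:

      	void __init efi_find_mirror(void)
      	{
      		/* Checked here, so setup_arch() can call this unconditionally. */
      		if (!efi_enabled(EFI_MEMMAP))
      			return;

      		/* ... walk the EFI memory map looking for mirrored ranges ... */
      	}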
  6. 18 October 2019, 6 commits
    • x86: Use pr_warn instead of pr_warning · 8d3bcc44
      Committed by Kefeng Wang
      As said in commit f2c2cbcc ("powerpc: Use pr_warn instead of
      pr_warning"), remove pr_warning so that all logging messages use a
      consistent <prefix>_warn style. Let's do it.
      
      Link: http://lkml.kernel.org/r/20191018031850.48498-7-wangkefeng.wang@huawei.com
      To: linux-kernel@vger.kernel.org
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Robert Richter <rric@kernel.org>
      Cc: Darren Hart <dvhart@infradead.org>
      Cc: Andy Shevchenko <andy@infradead.org>
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      8d3bcc44
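      The mechanical shape of the conversion, shown on a made-up message:

      	/* before */
      	pr_warning("efi: unsupported memmap version %d\n", ver);

      	/* after: same arguments, consistent <prefix>_warn name */
      	pr_warn("efi: unsupported memmap version %d\n", ver);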
    • x86/asm/32: Change all ENTRY+ENDPROC to SYM_FUNC_* · 6d685e53
      Committed by Jiri Slaby
      These are all functions which are invoked from elsewhere, so annotate
      them as global using the new SYM_FUNC_START macro, and replace their
      ENDPROCs with SYM_FUNC_END.
      
      Now, ENTRY/ENDPROC can be forced to be undefined on X86, so do so.
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Allison Randal <allison@lohutok.net>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Shevchenko <andy@infradead.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Bill Metzenthen <billm@melbpc.org.au>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Darren Hart <dvhart@infradead.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-crypto@vger.kernel.org
      Cc: linux-efi <linux-efi@vger.kernel.org>
      Cc: linux-efi@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: platform-driver-x86@vger.kernel.org
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20191011115108.12392-28-jslaby@suse.cz
      6d685e53
    • x86/asm: Change all ENTRY+ENDPROC to SYM_FUNC_* · 6dcc5627
      Committed by Jiri Slaby
      These are all functions which are invoked from elsewhere, so annotate
      them as global using the new SYM_FUNC_START macro, and replace their
      ENDPROCs with SYM_FUNC_END.
      
      Make sure ENTRY/ENDPROC is not defined on X86_64, given these were the
      last users.
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> [hibernate]
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> [xen bits]
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au> [crypto]
      Cc: Allison Randal <allison@lohutok.net>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Shevchenko <andy@infradead.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Armijn Hemel <armijn@tjaldur.nl>
      Cc: Cao jin <caoj.fnst@cn.fujitsu.com>
      Cc: Darren Hart <dvhart@infradead.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Enrico Weigelt <info@metux.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kate Stewart <kstewart@linuxfoundation.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: kvm ML <kvm@vger.kernel.org>
      Cc: Len Brown <len.brown@intel.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-crypto@vger.kernel.org
      Cc: linux-efi <linux-efi@vger.kernel.org>
      Cc: linux-efi@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: platform-driver-x86@vger.kernel.org
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Cc: Wei Huang <wei@redhat.com>
      Cc: x86-ml <x86@kernel.org>
      Cc: xen-devel@lists.xenproject.org
      Cc: Xiaoyao Li <xiaoyao.li@linux.intel.com>
      Link: https://lkml.kernel.org/r/20191011115108.12392-25-jslaby@suse.cz
      6dcc5627
    • x86/asm/64: Add ENDs to some functions and relabel with SYM_CODE_* · 4aec216b
      Committed by Jiri Slaby
      All these are functions which are invoked from elsewhere, but they are
      not typical C functions, so annotate them using the new SYM_CODE_START.
      None of them were balanced with an END, so mark their ends with
      SYM_CODE_END appropriately too.
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> [xen bits]
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> [power mgmt]
      Cc: Andy Shevchenko <andy@infradead.org>
      Cc: Cao jin <caoj.fnst@cn.fujitsu.com>
      Cc: Darren Hart <dvhart@infradead.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Pingfan Liu <kernelfans@gmail.com>
      Cc: platform-driver-x86@vger.kernel.org
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wei Huang <wei@redhat.com>
      Cc: x86-ml <x86@kernel.org>
      Cc: xen-devel@lists.xenproject.org
      Cc: Xiaoyao Li <xiaoyao.li@linux.intel.com>
      Link: https://lkml.kernel.org/r/20191011115108.12392-23-jslaby@suse.cz
      4aec216b
    • x86/asm: Make some functions local · ef1e0315
      Committed by Jiri Slaby
      There are a couple of assembly functions which are invoked only locally
      in the file in which they are defined. In C, they would be marked
      "static". In assembly, annotate them using SYM_{FUNC,CODE}_START_LOCAL
      (and switch their ENDPROC to SYM_{FUNC,CODE}_END too). Whether FUNC or
      CODE is used depends on whether ENDPROC or END was used for a particular
      function before.
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Shevchenko <andy@infradead.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Darren Hart <dvhart@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-efi <linux-efi@vger.kernel.org>
      Cc: linux-efi@vger.kernel.org
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: platform-driver-x86@vger.kernel.org
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Cc: xen-devel@lists.xenproject.org
      Link: https://lkml.kernel.org/r/20191011115108.12392-21-jslaby@suse.cz
      ef1e0315
    • xen/pvh: Annotate data appropriately · 1de5bdce
      Committed by Jiri Slaby
      Use the new SYM_DATA_START_LOCAL and SYM_DATA_END* macros to get:
      
        0000     8 OBJECT  LOCAL  DEFAULT    6 gdt
        0008    32 OBJECT  LOCAL  DEFAULT    6 gdt_start
        0028     0 OBJECT  LOCAL  DEFAULT    6 gdt_end
        0028   256 OBJECT  LOCAL  DEFAULT    6 early_stack
        0128     0 OBJECT  LOCAL  DEFAULT    6 early_stack
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Andy Shevchenko <andy@infradead.org>
      Cc: Darren Hart <dvhart@infradead.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: linux-arch@vger.kernel.org
      Cc: platform-driver-x86@vger.kernel.org
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Cc: xen-devel@lists.xenproject.org
      Link: https://lkml.kernel.org/r/20191011115108.12392-15-jslaby@suse.cz
      1de5bdce
  7. 07 October 2019, 2 commits
  8. 06 September 2019, 1 commit
    • x86/platform/uv: Fix kmalloc() NULL check routine · 864b23f0
      Committed by Austin Kim
      The result of kmalloc() should have been checked ahead of the statement
      below:
      
      	pqp = (struct bau_pq_entry *)vp;
      
      Move the BUG_ON(!vp) check before that statement.
      Signed-off-by: Austin Kim <austindh.kim@gmail.com>
      Cc: Dimitri Sivanich <dimitri.sivanich@hpe.com>
      Cc: Hedi Berriche <hedi.berriche@hpe.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Travis <mike.travis@hpe.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russ Anderson <russ.anderson@hpe.com>
      Cc: Steve Wahl <steve.wahl@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: allison@lohutok.net
      Cc: andy@infradead.org
      Cc: armijn@tjaldur.nl
      Cc: bp@alien8.de
      Cc: dvhart@infradead.org
      Cc: gregkh@linuxfoundation.org
      Cc: hpa@zytor.com
      Cc: kjlu@umn.edu
      Cc: platform-driver-x86@vger.kernel.org
      Link: https://lkml.kernel.org/r/20190905232951.GA28779@LGEARND20B15
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      864b23f0
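      A sketch of the reordering described above; plsize and node mirror the
      local variables implied by the commit message:

      	vp = kmalloc_node(plsize, GFP_KERNEL, node);
      	BUG_ON(!vp);				/* check the allocation first ... */
      	pqp = (struct bau_pq_entry *)vp;	/* ... then use the pointer       */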
  9. 20 August 2019, 1 commit
    • x86/platform/intel/iosf_mbi Rewrite locking · 00452ba9
      Committed by Hans de Goede
      There are 2 problems with the old iosf PMIC I2C bus arbitration code which
      need to be addressed:
      
      1. The lockdep code complains about a possible deadlock in the
      iosf_mbi_[un]block_punit_i2c_access code:
      
      [    6.712662] ======================================================
      [    6.712673] WARNING: possible circular locking dependency detected
      [    6.712685] 5.3.0-rc2+ #79 Not tainted
      [    6.712692] ------------------------------------------------------
      [    6.712702] kworker/0:1/7 is trying to acquire lock:
      [    6.712712] 00000000df1c5681 (iosf_mbi_block_punit_i2c_access_count_mutex){+.+.}, at: iosf_mbi_unblock_punit_i2c_access+0x13/0x90
      [    6.712739]
                     but task is already holding lock:
      [    6.712749] 0000000067cb23e7 (iosf_mbi_punit_mutex){+.+.}, at: iosf_mbi_block_punit_i2c_access+0x97/0x186
      [    6.712768]
                     which lock already depends on the new lock.
      
      [    6.712780]
                     the existing dependency chain (in reverse order) is:
      [    6.712792]
                     -> #1 (iosf_mbi_punit_mutex){+.+.}:
      [    6.712808]        __mutex_lock+0xa8/0x9a0
      [    6.712818]        iosf_mbi_block_punit_i2c_access+0x97/0x186
      [    6.712831]        i2c_dw_acquire_lock+0x20/0x30
      [    6.712841]        i2c_dw_set_reg_access+0x15/0xb0
      [    6.712851]        i2c_dw_probe+0x57/0x473
      [    6.712861]        dw_i2c_plat_probe+0x33e/0x640
      [    6.712874]        platform_drv_probe+0x38/0x80
      [    6.712884]        really_probe+0xf3/0x380
      [    6.712894]        driver_probe_device+0x59/0xd0
      [    6.712905]        bus_for_each_drv+0x84/0xd0
      [    6.712915]        __device_attach+0xe4/0x170
      [    6.712925]        bus_probe_device+0x9f/0xb0
      [    6.712935]        deferred_probe_work_func+0x79/0xd0
      [    6.712946]        process_one_work+0x234/0x560
      [    6.712957]        worker_thread+0x50/0x3b0
      [    6.712967]        kthread+0x10a/0x140
      [    6.712977]        ret_from_fork+0x3a/0x50
      [    6.712986]
                     -> #0 (iosf_mbi_block_punit_i2c_access_count_mutex){+.+.}:
      [    6.713004]        __lock_acquire+0xe07/0x1930
      [    6.713015]        lock_acquire+0x9d/0x1a0
      [    6.713025]        __mutex_lock+0xa8/0x9a0
      [    6.713035]        iosf_mbi_unblock_punit_i2c_access+0x13/0x90
      [    6.713047]        i2c_dw_set_reg_access+0x4d/0xb0
      [    6.713058]        i2c_dw_probe+0x57/0x473
      [    6.713068]        dw_i2c_plat_probe+0x33e/0x640
      [    6.713079]        platform_drv_probe+0x38/0x80
      [    6.713089]        really_probe+0xf3/0x380
      [    6.713099]        driver_probe_device+0x59/0xd0
      [    6.713109]        bus_for_each_drv+0x84/0xd0
      [    6.713119]        __device_attach+0xe4/0x170
      [    6.713129]        bus_probe_device+0x9f/0xb0
      [    6.713140]        deferred_probe_work_func+0x79/0xd0
      [    6.713150]        process_one_work+0x234/0x560
      [    6.713160]        worker_thread+0x50/0x3b0
      [    6.713170]        kthread+0x10a/0x140
      [    6.713180]        ret_from_fork+0x3a/0x50
      [    6.713189]
                     other info that might help us debug this:
      
      [    6.713202]  Possible unsafe locking scenario:
      
      [    6.713212]        CPU0                    CPU1
      [    6.713221]        ----                    ----
      [    6.713229]   lock(iosf_mbi_punit_mutex);
      [    6.713239]                                lock(iosf_mbi_block_punit_i2c_access_count_mutex);
      [    6.713253]                                lock(iosf_mbi_punit_mutex);
      [    6.713265]   lock(iosf_mbi_block_punit_i2c_access_count_mutex);
      [    6.713276]
                      *** DEADLOCK ***
      
      In practice this can never happen, because only the first caller which
      increments iosf_mbi_block_punit_i2c_access_count will also take
      iosf_mbi_punit_mutex; that is the whole purpose of the counter, which
      itself is protected by iosf_mbi_block_punit_i2c_access_count_mutex.
      
      But there is no way to tell the lockdep code about this and we really
      want to be able to run a kernel with lockdep enabled without these
      warnings being triggered.
      
      2. The lockdep warning also points out another real problem: if two threads
      are both inside a block of code protected by iosf_mbi_block_punit_i2c_access
      and the first thread to acquire the block exits before the second thread,
      then the second thread will call mutex_unlock on iosf_mbi_punit_mutex.
      But it is not the thread which took the mutex, and unlocking by another
      thread is not allowed.
      
      Fix this by getting rid of the notion of holding a mutex for the entire
      duration of the PMIC accesses, be it from the PUnit side or from an
      in-kernel I2C driver. In general, holding a mutex after exiting a function
      is a bad idea, and the above problems show this case is no different.
      
      Instead, 2 counters are now used, one for PMIC accesses from the PUnit
      and one for accesses from in-kernel I2C code. When access is requested,
      the code now waits (using a waitqueue) for the counter of the other
      type of access to reach 0; on release, if the counter reaches 0 the
      waitqueue is woken.
      
      Note that the counter approach is necessary to allow nested calls.
      The main reason for this is so that a series of I2C transfers can be done
      with the PUnit blocked from accessing the bus the whole time. This is
      necessary to be able to safely read/modify/write a PMIC register without
      racing with the PUnit doing the same thing.
      
      Allowing nested iosf_mbi_block_punit_i2c_access() calls is also desirable
      from a performance point of view, since the whole dance necessary to block
      the PUnit from accessing the PMIC I2C bus is somewhat expensive.
      Signed-off-by: Hans de Goede <hdegoede@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
      Link: https://lkml.kernel.org/r/20190812102113.95794-1-hdegoede@redhat.com
      00452ba9
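      A self-contained sketch of the two-counter/waitqueue arbitration scheme
      described above; the names are illustrative and do not match the actual
      iosf_mbi symbols:

      	#include <linux/mutex.h>
      	#include <linux/wait.h>

      	static DEFINE_MUTEX(access_lock);	/* protects the two counters    */
      	static DECLARE_WAIT_QUEUE_HEAD(access_wq);
      	static unsigned int punit_access_count;	/* PMIC accesses from the PUnit */
      	static unsigned int i2c_access_count;	/* PMIC accesses from I2C code  */

      	/* Called before an in-kernel I2C transfer; nested calls just bump the
      	 * counter, which is what makes read/modify/write sequences safe. */
      	static void example_block_punit_access(void)
      	{
      		mutex_lock(&access_lock);
      		while (punit_access_count) {
      			mutex_unlock(&access_lock);
      			/* sleep until the PUnit side has dropped to zero */
      			wait_event(access_wq, punit_access_count == 0);
      			mutex_lock(&access_lock);
      		}
      		i2c_access_count++;
      		mutex_unlock(&access_lock);
      	}

      	static void example_unblock_punit_access(void)
      	{
      		mutex_lock(&access_lock);
      		if (--i2c_access_count == 0)
      			wake_up(&access_wq);	/* let the PUnit side proceed */
      		mutex_unlock(&access_lock);
      	}

      The mutex only protects the counters and is never held across the guarded
      section, so neither the lockdep complaint nor the unlock-by-another-thread
      problem can occur.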
  10. 08 August 2019, 4 commits
  11. 02 August 2019, 1 commit
  12. 26 June 2019, 1 commit
  13. 21 June 2019, 1 commit
  14. 19 June 2019, 1 commit
  15. 12 June 2019, 1 commit
  16. 09 June 2019, 1 commit
  17. 05 June 2019, 5 commits