1. 24 June 2021 (9 commits)
  2. 23 June 2021 (25 commits)
  3. 22 June 2021 (2 commits)
    • x86/fpu: Make init_fpstate correct with optimized XSAVE · f9dfb5e3
      Committed by Thomas Gleixner
      The XSAVE init code initializes all enabled and supported components with
      XRSTOR(S) to init state. Then it XSAVEs the state of the components back
      into init_fpstate which is used in several places to fill in the init state
      of components.
      
      This works correctly with XSAVE, but not with XSAVEOPT and XSAVES because
      those use the init optimization and skip writing state of components which
      are in init state. So init_fpstate.xsave still contains all zeroes after
      this operation.
      
      There are two ways to solve that:
      
         1) Use XSAVE unconditionally, but that requires reshuffling the buffer when
            XSAVES is enabled because XSAVES uses the compacted format.
      
         2) Save the components which are known to have a non-zero init state by other
            means.
      
      Looking deeper, #2 is the right thing to do because all components the
      kernel supports have all-zeroes init state except the legacy features (FP,
      SSE). Those cannot be hard-coded because the states are not identical on
      all CPUs, but they can be saved with FXSAVE, which avoids all conditionals.
      
      Use FXSAVE to save the legacy FP/SSE components in init_fpstate along with
      a BUILD_BUG_ON() which reminds developers to validate that a newly added
      component has all-zeroes init state. As a bonus, remove the now-unused
      copy_xregs_to_kernel_booting() crutch.
      
      The XSAVE and reshuffle method can still be implemented in the unlikely
      case that components are added which have a non-zero init state and no
      other means to save them. For now, FXSAVE is just simple and good enough.
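      As an illustration, a minimal sketch of the resulting init flow, in kernel
      style. This is condensed and hedged: the restore helper, the fxsave()
      wrapper, and the ALL_ZEROES_INIT_MASK below follow the description above
      and are assumptions here, not the verbatim patch:

        /* Sketch only: capture legacy FP/SSE init state with FXSAVE. */
        static inline void fxsave(struct fxregs_state *fx)
        {
                if (IS_ENABLED(CONFIG_X86_32))
                        asm volatile("fxsave %[fx]" : [fx] "=m" (*fx));
                else
                        asm volatile("fxsaveq %[fx]" : [fx] "=m" (*fx));
        }

        static void __init setup_init_fpu_buf(void)
        {
                /*
                 * ALL_ZEROES_INIT_MASK is hypothetical: the BUILD_BUG_ON()
                 * reminds whoever adds a component with a non-zero init
                 * state that this scheme must be revisited.
                 */
                BUILD_BUG_ON(XFEATURE_MASK_SUPPORTED & ~ALL_ZEROES_INIT_MASK);

                /* Bring all enabled components to init state... */
                restore_init_xstate_booting(&init_fpstate.xsave);

                /* ...then save the only non-zero init state directly. */
                if (boot_cpu_has(X86_FEATURE_FXSR))
                        fxsave(&init_fpstate.fxsave);
        }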
      
        [ bp: Fix a typo or two in the text. ]
      
      Fixes: 6bad06b7 ("x86, xsave: Use xsaveopt in context-switch path when supported")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20210618143444.587311343@linutronix.de
      f9dfb5e3
    • x86/fpu: Preserve supervisor states in sanitize_restored_user_xstate() · 9301982c
      Committed by Thomas Gleixner
      sanitize_restored_user_xstate() preserves the supervisor states only
      when the fx_only argument is zero, which allows unprivileged user space
      to put supervisor states back into init state.
      
      Preserve them unconditionally.
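      Condensed, the shape of the fix looks like this (a sketch of the intent,
      not the verbatim diff; the mask helpers follow kernel naming):

        /* Before (sketch): supervisor bits survived only when !fx_only,
         * so a forged fx_only restore could wipe them to init state. */
        if (use_xsave() && !fx_only)
                header->xfeatures &= user_xfeatures |
                                     xfeatures_mask_supervisor();

        /* After (sketch): supervisor bits are always preserved; fx_only
         * now only restricts which user features come from the buffer. */
        if (use_xsave())
                header->xfeatures &= user_xfeatures |
                                     xfeatures_mask_supervisor();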
      
       [ bp: Fix a typo or two in the text. ]
      
      Fixes: 5d6b6a6f ("x86/fpu/xstate: Update sanitize_restored_xstate() for supervisor xstates")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20210618143444.438635017@linutronix.de
      9301982c
  4. 19 June 2021 (1 commit)
    • x86/mm: Avoid truncating memblocks for SGX memory · 28e5e44a
      Committed by Fan Du
      tl;dr:
      
      Several SGX users reported seeing the following message on NUMA systems:
      
        sgx: [Firmware Bug]: Unable to map EPC section to online node. Fallback to the NUMA node 0.
      
      This turned out to be the memblock code mistakenly throwing away SGX
      memory.
      
      === Full Changelog ===
      
      The 'max_pfn' variable represents the highest known RAM address.  It can
      be used, for instance, to quickly determine for which physical addresses
      there is mem_map[] space allocated.  The numa_meminfo code makes an
      effort to throw out ("trim") all memory blocks which are above 'max_pfn'.
      
      SGX memory is not considered RAM (it is marked as "Reserved" in the
      e820) and is not taken into account by max_pfn. Despite this, SGX memory
      areas have NUMA affinity and are enumerated in the ACPI SRAT table. The
      existing SGX code uses the numa_meminfo mechanism to look up the NUMA
      affinity for its memory areas.
      
      In cases where SGX memory was above max_pfn (usually just the one EPC
      section in the last highest NUMA node), the numa_memblock is truncated
      at 'max_pfn', which is below the SGX memory.  When the SGX code tries to
      look up the affinity of this memory, it fails and produces an error message:
      
        sgx: [Firmware Bug]: Unable to map EPC section to online node. Fallback to the NUMA node 0.
      
      and assigns the memory to NUMA node 0.
      
      Instead of silently truncating the memory block at 'max_pfn' and
      dropping the SGX memory, add the truncated portion to
      'numa_reserved_meminfo'.  This allows the SGX code to later determine
      the NUMA affinity of its 'Reserved' area.
      
      Before, numa_meminfo looked like this (from 'crash'):
      
        blk = { start =          0x0, end = 0x2080000000, nid = 0x0 }
              { start = 0x2080000000, end = 0x4000000000, nid = 0x1 }
      
      numa_reserved_meminfo is empty.
      
      With this, numa_meminfo looks like this:
      
        blk = { start =          0x0, end = 0x2080000000, nid = 0x0 }
              { start = 0x2080000000, end = 0x4000000000, nid = 0x1 }
      
      and numa_reserved_meminfo has an entry for node 1's SGX memory:
      
        blk =  { start = 0x4000000000, end = 0x4080000000, nid = 0x1 }
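      Condensed, the change to the trimming loop looks roughly like this
      (sketch: numa_move_tail_memblk() is the helper the change introduces for
      this purpose, and the PFN/byte conversions are elided):

        /* numa_cleanup_meminfo(), sketch of the changed block */
        for (i = 0; i < mi->nr_blks; i++) {
                struct numa_memblk *bi = &mi->blk[i];

                /*
                 * A block lying entirely outside memblock.memory (e.g.
                 * reserved SGX EPC above max_pfn) used to be discarded
                 * here. Move it to numa_reserved_meminfo instead, so its
                 * NUMA affinity can still be looked up later.
                 */
                if (!memblock_overlaps_region(&memblock.memory,
                                              bi->start,
                                              bi->end - bi->start)) {
                        numa_move_tail_memblk(&numa_reserved_meminfo, i--, mi);
                        continue;
                }
        }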
      
       [ daveh: completely rewrote/reworked changelog ]
      
      Fixes: 5d30f92e ("x86/NUMA: Provide a range-to-target_node lookup facility")
      Reported-by: Reinette Chatre <reinette.chatre@intel.com>
      Signed-off-by: Fan Du <fan.du@intel.com>
      Signed-off-by: Dave Hansen <dave.hansen@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
      Reviewed-by: Dan Williams <dan.j.williams@intel.com>
      Reviewed-by: Dave Hansen <dave.hansen@intel.com>
      Cc: <stable@vger.kernel.org>
      Link: https://lkml.kernel.org/r/20210617194657.0A99CB22@viggo.jf.intel.com
      28e5e44a
  5. 18 June 2021 (1 commit)
    • PCI: Add AMD RS690 quirk to enable 64-bit DMA · cacf994a
      Committed by Mikel Rychliski
      Although the AMD RS690 chipset has 64-bit DMA support, BIOS implementations
      sometimes fail to configure the memory limit registers correctly.
      
      The Acer F690GVM mainboard uses this chipset and a Marvell 88E8056 NIC. The
      sky2 driver programs the NIC to use 64-bit DMA, which will not work:
      
        sky2 0000:02:00.0: error interrupt status=0x8
        sky2 0000:02:00.0 eth0: tx timeout
        sky2 0000:02:00.0 eth0: transmit ring 0 .. 22 report=0 done=0
      
      Other drivers required by this mainboard either don't support 64-bit DMA
      or have it disabled using driver-specific quirks. For example, the ahci
      driver has quirks to enable or disable 64-bit DMA depending on the BIOS
      version (see ahci_sb600_enable_64bit() in ahci.c). This ahci quirk matches
      against the SB600 SATA controller, but the real issue is almost certainly
      with the RS690 PCI host that it was commonly attached to.
      
      To avoid this issue in all drivers with 64-bit DMA support, fix the
      configuration of the PCI host. If the kernel is aware of physical memory
      above 4GB, but the BIOS never configured the PCI host with this
      information, update the registers with our values.
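      The quirk therefore hooks the RS690 host bridge and programs the
      top-of-DRAM registers itself. A condensed sketch follows; the register
      indices reflect the RS690's HTIU index/data interface as used by the
      quirk, but treat the exact offsets and the 0x7910 device ID as
      assumptions rather than re-verified values:

        #define RS690_HTIU_NB_INDEX             0xA8
        #define RS690_HTIU_NB_INDEX_WR_ENABLE   0x100
        #define RS690_HTIU_NB_DATA              0xAC
        #define RS690_LOWER_TOP_OF_DRAM2        0x30
        #define RS690_LOWER_TOP_OF_DRAM2_VALID  0x1
        #define RS690_UPPER_TOP_OF_DRAM2        0x31

        static void rs690_fix_64bit_dma(struct pci_dev *pdev)
        {
                u32 val = 0;
                phys_addr_t top_of_dram = __pa(high_memory - 1) + 1;

                if (top_of_dram <= (1ULL << 32))
                        return;

                /* Leave a BIOS-programmed limit alone. */
                pci_write_config_dword(pdev, RS690_HTIU_NB_INDEX,
                                       RS690_LOWER_TOP_OF_DRAM2);
                pci_read_config_dword(pdev, RS690_HTIU_NB_DATA, &val);
                if (val)
                        return;

                /* Publish the real top of DRAM to the PCI host. */
                pci_write_config_dword(pdev, RS690_HTIU_NB_INDEX,
                                       RS690_UPPER_TOP_OF_DRAM2 |
                                       RS690_HTIU_NB_INDEX_WR_ENABLE);
                pci_write_config_dword(pdev, RS690_HTIU_NB_DATA,
                                       top_of_dram >> 32);
                pci_write_config_dword(pdev, RS690_HTIU_NB_INDEX,
                                       RS690_LOWER_TOP_OF_DRAM2 |
                                       RS690_HTIU_NB_INDEX_WR_ENABLE);
                pci_write_config_dword(pdev, RS690_HTIU_NB_DATA,
                                       top_of_dram |
                                       RS690_LOWER_TOP_OF_DRAM2_VALID);
        }
        DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7910,
                                rs690_fix_64bit_dma);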
      
      [bhelgaas: drop PCI_DEVICE_ID_ATI_RS690 definition]
      Link: https://lore.kernel.org/r/20210611214823.4898-1-mikel@mikelr.com
      Signed-off-by: Mikel Rychliski <mikel@mikelr.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      cacf994a
  6. 16 June 2021 (1 commit)
  7. 12 June 2021 (1 commit)