1. 17 Jan 2020, 1 commit
  2. 27 Dec 2019, 1 commit
  3. 18 Dec 2019, 6 commits
  4. 13 Dec 2019, 1 commit
  5. 05 Dec 2019, 13 commits
  6. 01 Dec 2019, 17 commits
    • KVM: PPC: Book3S HV: Flush link stack on guest exit to host kernel · 345712c9
      Committed by Michael Ellerman
      commit af2e8c68b9c5403f77096969c516f742f5bb29e0 upstream.
      
      On some systems that are vulnerable to Spectre v2, it is up to
      software to flush the link stack (return address stack), in order to
      protect against Spectre-RSB.
      
      When exiting from a guest we do some housekeeping and then
      potentially exit to C code which is several stack frames deep in the
      host kernel. We will then execute a series of returns without
      preceding calls, opening up the possibility that the guest could have
      poisoned the link stack and could direct speculative execution of the
      host to a gadget of some sort.
      
      To prevent this we add a flush of the link stack on exit from a guest.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      [dja: straightforward backport to v4.19]
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      345712c9
    • powerpc/book3s64: Fix link stack flush on context switch · 0a60d4bd
      Committed by Michael Ellerman
      commit 39e72bf96f5847ba87cc5bd7a3ce0fed813dc9ad upstream.
      
      In commit ee13cb24 ("powerpc/64s: Add support for software count
      cache flush"), I added support for software to flush the count
      cache (indirect branch cache) on context switch if firmware told us
      that was the required mitigation for Spectre v2.
      
      As part of that code we also added a software flush of the link
      stack (return address stack), which protects against Spectre-RSB
      between user processes.
      
      That is all correct for CPUs that activate that mitigation, which is
      currently Power9 Nimbus DD2.3.
      
      What I got wrong is that on older CPUs, where firmware has disabled
      the count cache, we also need to flush the link stack on context
      switch.
      
      To fix it we create a new feature bit which is not set by firmware,
      which tells us we need to flush the link stack. We set that when
      firmware tells us that either of the existing Spectre v2 mitigations
      are enabled.
      
      Then we adjust the patching code so that if we see that feature bit we
      enable the link stack flush. If we're also told to flush the count
      cache in software then we fall through and do that also.
      
      On the older CPUs we don't need to do the software count cache
      flush, firmware has disabled it, so in that case we patch in an early
      return after the link stack flush.
      
      The naming of some of the functions is awkward after this patch,
      because they're called "count cache" but they also do link stack. But
      we'll fix that up in a later commit to ease backporting.
      
      This is the fix for CVE-2019-18660.
      Reported-by: Anthony Steinhauser <asteinhauser@google.com>
      Fixes: ee13cb24 ("powerpc/64s: Add support for software count cache flush")
      Cc: stable@vger.kernel.org # v4.4+
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      0a60d4bd
    • powerpc/64s: support nospectre_v2 cmdline option · 19d98b4d
      Committed by Christopher M. Riedl
      commit d8f0e0b073e1ec52a05f0c2a56318b47387d2f10 upstream.
      
      Add support for disabling the kernel implemented spectre v2 mitigation
      (count cache flush on context switch) via the nospectre_v2 and
      mitigations=off cmdline options.
      Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
      Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20190524024647.381-1-cmr@informatik.wtf
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      19d98b4d
    • powerpc/powernv: hold device_hotplug_lock when calling device_online() · 3081ae5e
      Committed by David Hildenbrand
      [ Upstream commit cec1680591d6d5b10ecc10f370210089416e98af ]
      
      device_online() should be called with device_hotplug_lock() held.
      
      Link: http://lkml.kernel.org/r/20180925091457.28651-5-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
      Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Rashmica Gupta <rashmica.g@gmail.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: John Allen <jallen@linux.vnet.ibm.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kate Stewart <kstewart@linuxfoundation.org>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Philippe Ombredanne <pombredanne@nexb.com>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: YASUAKI ISHIMATSU <yasu.isimatu@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      3081ae5e
    • mm/memory_hotplug: make add_memory() take the device_hotplug_lock · 02735d59
      Committed by David Hildenbrand
      [ Upstream commit 8df1d0e4a265f25dc1e7e7624ccdbcb4a6630c89 ]
      
      add_memory() currently does not take the device_hotplug_lock; however,
      it is already called under the lock from
      	arch/powerpc/platforms/pseries/hotplug-memory.c
      	drivers/acpi/acpi_memhotplug.c
      to synchronize against CPU hot-remove and similar.
      
      In general, we should hold the device_hotplug_lock when adding memory to
      synchronize against online/offline requests (e.g. from user space) - which
      already resulted in lock inversions due to device_lock() and
      mem_hotplug_lock - see 30467e0b ("mm, hotplug: fix concurrent memory
      hot-add deadlock").  add_memory()/add_memory_resource() will create memory
      block devices, so this really feels like the right thing to do.
      
      Holding the device_hotplug_lock makes sure that a memory block device
      can really only be accessed (e.g. via .online/.state) from user space,
      once the memory has been fully added to the system.
      
      The lock is not held yet in
      	drivers/xen/balloon.c
      	arch/powerpc/platforms/powernv/memtrace.c
      	drivers/s390/char/sclp_cmd.c
      	drivers/hv/hv_balloon.c
      So, let's either use the locked variants or take the lock.
      
      Don't export add_memory_resource(), as it once was exported to be used by
      XEN, which is never built as a module.  If somebody requires it, we also
      have to export a locked variant (as device_hotplug_lock is never
      exported).
      
      Link: http://lkml.kernel.org/r/20180925091457.28651-3-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com>
      Reviewed-by: Oscar Salvador <osalvador@suse.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com>
      Cc: John Allen <jallen@linux.vnet.ibm.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Pavel Tatashin <pavel.tatashin@microsoft.com>
      Cc: YASUAKI ISHIMATSU <yasu.isimatu@gmail.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Kate Stewart <kstewart@linuxfoundation.org>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Philippe Ombredanne <pombredanne@nexb.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      02735d59
    • powerpc/xmon: Relax frame size for clang · 57aab8f0
      Committed by Joel Stanley
      [ Upstream commit 9c87156cce5a63735d1218f0096a65c50a7a32aa ]
      
      When building with clang (8 trunk, 7.0 release) the frame size limit is
      hit:
      
       arch/powerpc/xmon/xmon.c:452:12: warning: stack frame size of 2576
       bytes in function 'xmon_core' [-Wframe-larger-than=]
      
      Some investigation by Naveen indicates this is due to clang saving the
      addresses to printf format strings on the stack.
      
      While this issue is investigated, bump up the frame size limit for xmon
      when building with clang.
      
      Link: https://github.com/ClangBuiltLinux/linux/issues/252
      Signed-off-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      57aab8f0
    • powerpc/process: Fix flush_all_to_thread for SPE · 4e4cad43
      Committed by Felipe Rechia
      [ Upstream commit e901378578c62202594cba0f6c076f3df365ec91 ]
      
      Fix a bug introduced by the creation of flush_all_to_thread() for
      processors that have SPE (Signal Processing Engine) and use it to
      compute floating-point operations.
      
      From a userspace perspective, the problem was seen in attempts at
      computing floating-point operations which should generate exceptions.
      For example:
      
        fork();
        float x = 0.0 / 0.0;
        isnan(x);           // forked process returns False (should be True)
      
      The operation above also should always cause the SPEFSCR FINV bit to
      be set. However, the SPE floating-point exceptions were turned off
      after a fork().
      
      Kernel versions prior to the bug used flush_spe_to_thread(), which
      first saves SPEFSCR register values in tsk->thread and then calls
      giveup_spe(tsk).
      
      After commit 579e633e, the save_all() function was called first
      to giveup_spe(), and then the SPEFSCR register values were saved in
      tsk->thread. This would save the SPEFSCR register values after
      disabling SPE for that thread, causing the bug described above.
      
      Fixes: 579e633e ("powerpc: create flush_all_to_thread()")
      Signed-off-by: Felipe Rechia <felipe.rechia@datacom.com.br>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      4e4cad43
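The ordering bug generalises to any "save state, then relinquish the unit" sequence. A minimal sketch in plain C, with hypothetical names standing in for SPEFSCR and giveup_spe() (this is not the kernel code):

```c
#include <assert.h>

/* Hypothetical model: a per-CPU unit whose status register is lost
 * once the unit is given up, and a thread struct that keeps a copy. */
struct unit   { int enabled; unsigned status; };
struct thread { unsigned saved_status; };

/* Giving up the unit disables it and discards its live status. */
static void giveup(struct unit *u)
{
    u->enabled = 0;
    u->status = 0;
}

/* Pre-579e633e order (flush_spe_to_thread): save first, then give up. */
static void flush_correct(struct unit *u, struct thread *t)
{
    t->saved_status = u->status;
    giveup(u);
}

/* Post-579e633e order (save_all): give up first, then save -- the copy
 * is taken after the status bits have already been discarded. */
static void flush_buggy(struct unit *u, struct thread *t)
{
    giveup(u);
    t->saved_status = u->status;
}
```

With a FINV-like bit set in the live status, only the correct ordering preserves it in the thread struct, which mirrors the isnan() misbehaviour after fork() described above.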
    • powerpc/64s/radix: Fix radix__flush_tlb_collapsed_pmd double flushing pmd · 3173e226
      Committed by Nicholas Piggin
      [ Upstream commit dd76ff5af35350fd6d5bb5b069e73b6017f66893 ]
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      3173e226
    • powerpc/mm/radix: Fix small page at boundary when splitting · b43c5287
      Committed by Michael Ellerman
      [ Upstream commit 81d1b54dec95209ab5e5be2cf37182885f998753 ]
      
      When we have CONFIG_STRICT_KERNEL_RWX enabled, we want to split the
      linear mapping at the text/data boundary so we can map the kernel
      text read only.
      
      Currently we always use a small page at the text/data boundary, even
      when that's not necessary:
      
        Mapped 0x0000000000000000-0x0000000000e00000 with 2.00 MiB pages
        Mapped 0x0000000000e00000-0x0000000001000000 with 64.0 KiB pages
        Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages
      
      This is because the check that the mapping crosses the __init_begin
      boundary is too strict; it also returns true when we map exactly up to
      the boundary.
      
      So fix it to check that the mapping would actually map past
      __init_begin, and with that we see:
      
        Mapped 0x0000000000000000-0x0000000040000000 with 2.00 MiB pages
        Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      b43c5287
    • powerpc/mm/radix: Fix overuse of small pages in splitting logic · b499fa07
      Committed by Michael Ellerman
      [ Upstream commit 3b5657ed5b4e27ccf593a41ff3c5aa27dae8df18 ]
      
      When we have CONFIG_STRICT_KERNEL_RWX enabled, we want to split the
      linear mapping at the text/data boundary so we can map the kernel text
      read only.
      
      But the current logic uses small pages for the entire text section,
      regardless of whether a larger page size would fit. eg. with the
      boundary at 16M we could use 2M pages, but instead we use 64K pages up
      to the 16M boundary:
      
        Mapped 0x0000000000000000-0x0000000001000000 with 64.0 KiB pages
        Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages
        Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages
      
      This is because the test is checking if addr is < __init_begin
      and addr + mapping_size is >= _stext. But that is true for all pages
      between _stext and __init_begin.
      
      Instead what we want to check is if we are crossing the text/data
      boundary, which is at __init_begin. With that fixed we see:
      
        Mapped 0x0000000000000000-0x0000000000e00000 with 2.00 MiB pages
        Mapped 0x0000000000e00000-0x0000000001000000 with 64.0 KiB pages
        Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages
        Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages
      
      ie. we're correctly using 2MB pages below __init_begin, but we still
      drop down to 64K pages unnecessarily at the boundary.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      b499fa07
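The two splitting checks discussed above can be sketched in plain C with illustrative addresses (a hypothetical 16M text/data boundary, not the kernel code): the old test is true for every mapping inside the text region, while the fixed test is true only for a mapping that would actually map past __init_begin.

```c
#include <assert.h>

typedef unsigned long addr_t;

static const addr_t STEXT      = 0x0000000UL;  /* hypothetical _stext */
static const addr_t INIT_BEGIN = 0x1000000UL;  /* hypothetical __init_begin (16M) */

/* Old check: fires for every mapping between _stext and __init_begin,
 * forcing small pages across the whole text section. */
static int must_split_buggy(addr_t addr, addr_t size)
{
    return addr < INIT_BEGIN && addr + size >= STEXT;
}

/* Fixed check: fires only when the mapping would map past __init_begin,
 * so a mapping ending exactly at the boundary is left intact. */
static int must_split_fixed(addr_t addr, addr_t size)
{
    return addr < INIT_BEGIN && addr + size > INIT_BEGIN;
}
```

A 2M page wholly inside the text region triggers the buggy check but not the fixed one; a page straddling the boundary triggers both.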
    • powerpc/mm/radix: Fix off-by-one in split mapping logic · 434551e9
      Committed by Michael Ellerman
      [ Upstream commit 5c6499b7041b43807dfaeda28aa87fc0e62558f7 ]
      
      When we have CONFIG_STRICT_KERNEL_RWX enabled, we try to split the
      kernel linear (1:1) mapping so that the kernel text is in a separate
      page to kernel data, so we can mark the former read-only.
      
      We could achieve that just by always using 64K pages for the linear
      mapping, but we try to be smarter. Instead we use huge pages when
      possible, and only switch to smaller pages when necessary.
      
      However we have an off-by-one bug in that logic, which causes us to
      calculate the wrong boundary between text and data.
      
      For example with the end of the kernel text at 16M we see:
      
        radix-mmu: Mapped 0x0000000000000000-0x0000000001200000 with 64.0 KiB pages
        radix-mmu: Mapped 0x0000000001200000-0x0000000040000000 with 2.00 MiB pages
        radix-mmu: Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages
      
      ie. we mapped from 0 to 18M with 64K pages, even though the boundary
      between text and data is at 16M.
      
      With the fix we see we're correctly hitting the 16M boundary:
      
        radix-mmu: Mapped 0x0000000000000000-0x0000000001000000 with 64.0 KiB pages
        radix-mmu: Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages
        radix-mmu: Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      434551e9
    • powerpc/pseries: Export raw per-CPU VPA data via debugfs · ee35e01b
      Committed by Aravinda Prasad
      [ Upstream commit c6c26fb55e8e4b3fc376be5611685990a17de27a ]
      
      This patch exports the raw per-CPU VPA data via debugfs.
      A per-CPU file is created which exports the VPA data of
      that CPU to help debug some of the VPA related issues or
      to analyze the per-CPU VPA related statistics.
      
      v3: Removed offline CPU check.
      
      v2: Included offline CPU check and other review comments.
      Signed-off-by: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      ee35e01b
    • powerpc/eeh: Fix use of EEH_PE_KEEP on wrong field · 97aab1a4
      Committed by Sam Bobroff
      [ Upstream commit 473af09b56dc4be68e4af33220ceca6be67aa60d ]
      
      eeh_add_to_parent_pe() sometimes removes the EEH_PE_KEEP flag, but it
      incorrectly removes it from pe->type, instead of pe->state.
      
      However, rather than clearing it from the correct field, remove it.
      Inspection of the code shows that it can't ever have had any effect
      (even if it had been cleared from the correct field), because the
      field is never tested after it is cleared by the statement in
      question.
      
      The clear statement was added by commit 807a827d ("powerpc/eeh:
      Keep PE during hotplug"), but it didn't explain why it was necessary.
      Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      97aab1a4
    • powerpc/eeh: Fix null deref for devices removed during EEH · bd2a7e53
      Committed by Sam Bobroff
      [ Upstream commit bcbe3730531239abd45ab6c6af4a18078b37dd47 ]
      
      If a device is removed during EEH processing (either by a driver's
      handler or as part of recovery), it can lead to a null dereference
      in eeh_pe_report_edev().
      
      To handle this, skip devices that have been removed.
      Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      bd2a7e53
    • powerpc/boot: Disable vector instructions · 16e4657a
      Committed by Joel Stanley
      [ Upstream commit e8e132e6885962582784b6fa16a80d07ea739c0f ]
      
      This will avoid auto-vectorisation when building with higher
      optimisation levels.
      
      We don't know if the machine can support VSX and even if it's present
      it's probably not going to be enabled at this point in boot.
      
      These flags were both added prior to GCC 4.6, which is the minimum
      compiler version supported by upstream; thanks to Segher for the
      details.
      Signed-off-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      16e4657a
    • powerpc/boot: Fix opal console in boot wrapper · 5346c840
      Committed by Joel Stanley
      [ Upstream commit 1a855eaccf353f7ed1d51a3d4b3af727ccbd81ca ]
      
      As of commit 10c77dba ("powerpc/boot: Fix build failure in 32-bit
      boot wrapper") the opal code is hidden behind CONFIG_PPC64_BOOT_WRAPPER,
      but the boot wrapper avoids include/linux, so it does not get the normal
      Kconfig flags.
      
      We can drop the guard entirely as in commit f8e8e69c ("powerpc/boot:
      Only build OPAL code when necessary") the makefile only includes opal.c
      in the build if CONFIG_PPC64_BOOT_WRAPPER is set.
      
      Fixes: 10c77dba ("powerpc/boot: Fix build failure in 32-bit boot wrapper")
      Signed-off-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      5346c840
    • powerpc: Fix signedness bug in update_flash_db() · 4505cff2
      Committed by Dan Carpenter
      [ Upstream commit 014704e6f54189a203cc14c7c0bb411b940241bc ]
      
      The "count < sizeof(struct os_area_db)" comparison is type promoted to
      size_t so negative values of "count" are treated as very high values
      and we accidentally return success instead of a negative error code.
      
      This doesn't really change runtime much but it fixes a static checker
      warning.
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Acked-by: Geoff Levand <geoff@infradead.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      4505cff2
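The promotion trap can be reproduced in a few lines of plain C (with a stand-in struct, not the kernel code): in `count < sizeof(...)` the usual arithmetic conversions convert the signed operand to `size_t`, so a negative count compares as a huge unsigned value and sails past the size check.

```c
#include <assert.h>
#include <stddef.h>

struct db_like { char data[64]; };  /* stand-in for struct os_area_db */

/* Buggy check: count is converted to size_t, so -1 becomes SIZE_MAX
 * and is never considered "too small". */
static int too_small_buggy(long count)
{
    return count < sizeof(struct db_like);
}

/* Fixed check: reject negative counts before the unsigned comparison. */
static int too_small_fixed(long count)
{
    return count < 0 || (size_t)count < sizeof(struct db_like);
}
```

A negative count passes the buggy check (returning "not too small"), which is how an error value could be mistaken for success.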
  7. 24 Nov 2019, 1 commit
    • powerpc/time: Fix clockevent_decrementer initialisation for PR KVM · e2a37708
      Committed by Michael Ellerman
      [ Upstream commit b4d16ab58c41ff0125822464bdff074cebd0fe47 ]
      
      In the recent commit 8b78fdb045de ("powerpc/time: Use
      clockevents_register_device(), fixing an issue with large
      decrementer") we changed the way we initialise the decrementer
      clockevent(s).
      
      We no longer initialise the mult & shift values of
      decrementer_clockevent itself.
      
      This has the effect of breaking PR KVM, because it uses those values
      in kvmppc_emulate_dec(). The symptom is that guest kernels spin
      forever midway through boot.
      
      For now fix it by assigning back to decrementer_clockevent the mult
      and shift values.
      
      Fixes: 8b78fdb045de ("powerpc/time: Use clockevents_register_device(), fixing an issue with large decrementer")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      e2a37708
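For context, the clockevent core scales nanoseconds to device ticks as ticks = (ns * mult) >> shift. A minimal sketch with illustrative values (a hypothetical 500 MHz timebase, not the real PowerPC registers) shows why leaving mult and shift at zero breaks anything that relies on them: every interval degenerates to zero ticks.

```c
#include <assert.h>
#include <stdint.h>

/* Generic clockevent scaling: ticks = (ns * mult) >> shift, where
 * mult / 2^shift approximates timer_freq / 1e9. */
static uint64_t ns_to_ticks(uint64_t ns, uint32_t mult, uint32_t shift)
{
    return (ns * mult) >> shift;
}
```

With mult = 2^31 and shift = 32 (0.5 ticks per nanosecond, i.e. 500 MHz), 1000 ns converts to 500 ticks; with an uninitialised mult of zero, every conversion yields zero, plausibly matching the "guest spins forever" symptom described above.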