1. 13 Sep 2021, 5 commits
  2. 09 Sep 2021, 4 commits
    • arch: remove compat_alloc_user_space · a7a08b27
      Authored by Arnd Bergmann
      All users of compat_alloc_user_space() and copy_in_user() have been
      removed from the kernel; only a few functions in sparc remain, and those
      can be changed to call arch_copy_in_user() instead.
      
      Link: https://lkml.kernel.org/r/20210727144859.4150043-7-arnd@kernel.org
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Feng Tang <feng.tang@intel.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a7a08b27
    • compat: remove some compat entry points · 59ab844e
      Authored by Arnd Bergmann
      These are all handled correctly when calling the native system call entry
      point, so remove the special cases.
      
      Link: https://lkml.kernel.org/r/20210727144859.4150043-6-arnd@kernel.org
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Feng Tang <feng.tang@intel.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      59ab844e
    • configs: remove the obsolete CONFIG_INPUT_POLLDEV · 4cb398fe
      Authored by Zenghui Yu
      This CONFIG option was removed in commit 278b13ce ("Input: remove
      input_polled_dev implementation"), so there's no point in keeping it in
      defconfigs any longer.
      
      Get rid of the leftover for all arches.
      
      Link: https://lkml.kernel.org/r/20210726074741.1062-1-yuzenghui@huawei.com
      Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4cb398fe
    • mm/memory_hotplug: remove nid parameter from arch_remove_memory() · 65a2aa5f
      Authored by David Hildenbrand
      The parameter is unused, let's remove it.
      
      Link: https://lkml.kernel.org/r/20210712124052.26491-3-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc]
      Acked-by: Heiko Carstens <hca@linux.ibm.com>	[s390]
      Reviewed-by: Pankaj Gupta <pankaj.gupta@ionos.com>
      Reviewed-by: Oscar Salvador <osalvador@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Laurent Dufour <ldufour@linux.ibm.com>
      Cc: Sergei Trofimovich <slyfox@gentoo.org>
      Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Michel Lespinasse <michel@lespinasse.org>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
      Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Pierre Morel <pmorel@linux.ibm.com>
      Cc: Jia He <justin.he@arm.com>
      Cc: Anton Blanchard <anton@ozlabs.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: Jason Wang <jasowang@redhat.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Nathan Lynch <nathanl@linux.ibm.com>
      Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Scott Cheloha <cheloha@linux.ibm.com>
      Cc: Vishal Verma <vishal.l.verma@intel.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      65a2aa5f
  3. 06 Sep 2021, 12 commits
    • KVM: x86: Update vCPU's hv_clock before back to guest when tsc_offset is adjusted · d9130a2d
      Authored by Zelin Deng
      When MSR_IA32_TSC_ADJUST is written by the guest due to the TSC ADJUST
      feature, especially when there is a big TSC warp (e.g. a new vCPU is
      hot-added into a VM which has been up for a long time), tsc_offset is
      increased by a large value before returning to the guest. Because
      tsc_timestamp is not adjusted in the meantime and pvclock must stay
      monotonic, this causes the system time to jump. To fix this, notify
      KVM to update the vCPU's guest time before going back to the guest.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Zelin Deng <zelin.deng@linux.alibaba.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <1619576521-81399-2-git-send-email-zelin.deng@linux.alibaba.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d9130a2d
    • KVM: MMU: mark role_regs and role accessors as maybe unused · 4ac21457
      Authored by Paolo Bonzini
      It is reasonable for these functions to be used only in some configurations,
      for example only if the host is 64-bit (and therefore supports 64-bit
      guests).  It is also reasonable to keep the role_regs and role accessors
      in sync even though some of the accessors may be used only for one of the
      two sets (as is the case currently for CR4.LA57).
      
      Because clang reports warnings for unused inlines declared in a .c file,
      mark both sets of accessors as __maybe_unused.
      Reported-by: kernel test robot <lkp@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4ac21457
    • x86/kvm: Don't enable IRQ when IRQ enabled in kvm_wait · a40b2fd0
      Authored by Lai Jiangshan
      Commit f4e61f0c ("x86/kvm: Fix broken irq restoration in kvm_wait")
      replaced "local_irq_restore() when IRQ enabled" with "local_irq_enable()
      when IRQ enabled" to suppress a warning.
      
      Although there is no equivalent debugging warning for calling
      local_irq_enable() with IRQs already enabled, doing so is no less broken
      than the local_irq_restore() it replaced, and we'd better avoid it.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
      Message-Id: <20210814035129.154242-1-jiangshanlai@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a40b2fd0
    • KVM: x86/mmu: Move lpage_disallowed_link further "down" in kvm_mmu_page · 1148bfc4
      Authored by Sean Christopherson
      Move "lpage_disallowed_link" out of the first 64 bytes, i.e. out of the
      first cache line, of kvm_mmu_page so that "spt" and to a lesser extent
      "gfns" land in the first cache line.  "lpage_disallowed_link" is accessed
      relatively infrequently compared to "spt", which is accessed any time KVM
      is walking and/or manipulating the shadow page tables.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210901221023.1303578-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1148bfc4
    • KVM: x86/mmu: Relocate kvm_mmu_page.tdp_mmu_page for better cache locality · ca41c34c
      Authored by Sean Christopherson
      Move "tdp_mmu_page" into the 1-byte void left by the recently removed
      "mmio_cached" so that it resides in the first 64 bytes of kvm_mmu_page,
      i.e. in the same cache line as the most commonly accessed fields.
      
      Don't bother wrapping tdp_mmu_page in CONFIG_X86_64; including the field
      in 32-bit builds doesn't affect the size of kvm_mmu_page, and a future
      patch can always wrap the field in the unlikely event KVM gains a 1-byte
      flag that is 32-bit specific.
      
      Note, the size of kvm_mmu_page is also unchanged on CONFIG_X86_64=y due
      to it previously sharing an 8-byte chunk with write_flooding_count.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210901221023.1303578-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ca41c34c
    • Revert "KVM: x86: mmu: Add guest physical address check in translate_gpa()" · e7177339
      Authored by Sean Christopherson
      Revert a misguided illegal GPA check when "translating" a non-nested GPA.
      The check is woefully incomplete as it does not fill in @exception as
      expected by all callers, which leads to KVM attempting to inject a bogus
      exception, potentially exposing kernel stack information in the process.
      
       WARNING: CPU: 0 PID: 8469 at arch/x86/kvm/x86.c:525 exception_type+0x98/0xb0 arch/x86/kvm/x86.c:525
       CPU: 1 PID: 8469 Comm: syz-executor531 Not tainted 5.14.0-rc7-syzkaller #0
       RIP: 0010:exception_type+0x98/0xb0 arch/x86/kvm/x86.c:525
       Call Trace:
        x86_emulate_instruction+0xef6/0x1460 arch/x86/kvm/x86.c:7853
        kvm_mmu_page_fault+0x2f0/0x1810 arch/x86/kvm/mmu/mmu.c:5199
        handle_ept_misconfig+0xdf/0x3e0 arch/x86/kvm/vmx/vmx.c:5336
        __vmx_handle_exit arch/x86/kvm/vmx/vmx.c:6021 [inline]
        vmx_handle_exit+0x336/0x1800 arch/x86/kvm/vmx/vmx.c:6038
        vcpu_enter_guest+0x2a1c/0x4430 arch/x86/kvm/x86.c:9712
        vcpu_run arch/x86/kvm/x86.c:9779 [inline]
        kvm_arch_vcpu_ioctl_run+0x47d/0x1b20 arch/x86/kvm/x86.c:10010
        kvm_vcpu_ioctl+0x49e/0xe50 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3652
      
      The bug has escaped notice because practically speaking the GPA check is
      useless.  The GPA check in question only comes into play when KVM is
      walking guest page tables (or "translating" CR3), and KVM already handles
      illegal GPA checks by setting reserved bits in rsvd_bits_mask for each
      PxE, or in the case of CR3 for loading PTDPTRs, manually checks for an
      illegal CR3.  This particular failure doesn't hit the existing reserved
      bits checks because syzbot sets guest.MAXPHYADDR=1, and IA32 architecture
      simply doesn't allow for such an absurd MAXPHYADDR, e.g. 32-bit paging
      doesn't define any reserved PA bits checks, which KVM emulates by only
      incorporating the reserved PA bits into the "high" bits, i.e. bits 63:32.
      
      Simply remove the bogus check.  There is zero meaningful value and no
      architectural justification for supporting guest.MAXPHYADDR < 32, and
      properly filling the exception would introduce non-trivial complexity.
      
      This reverts commit ec7771ab.
      
      Fixes: ec7771ab ("KVM: x86: mmu: Add guest physical address check in translate_gpa()")
      Cc: stable@vger.kernel.org
      Reported-by: syzbot+200c08e88ae818f849ce@syzkaller.appspotmail.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210831164224.1119728-2-seanjc@google.com>
      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e7177339
    • KVM: x86/mmu: Remove unused field mmio_cached in struct kvm_mmu_page · 678a305b
      Authored by Jia He
      After the fast TLB invalidation patch series was reverted and then
      restored, mmio_cached was never removed, leaving an unused field in
      kvm_mmu_page.
      
      Cc: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Jia He <justin.he@arm.com>
      Message-Id: <20210830145336.27183-1-justin.he@arm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      678a305b
    • kvm: x86: Increase KVM_SOFT_MAX_VCPUS to 710 · 1dbaf04c
      Authored by Eduardo Habkost
      Support for 710 vCPUs has been tested by Red Hat since RHEL 8.4,
      so increase KVM_SOFT_MAX_VCPUS to 710.
      Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
      Message-Id: <20210903211600.2002377-4-ehabkost@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1dbaf04c
    • kvm: x86: Increase MAX_VCPUS to 1024 · 074c82c8
      Authored by Eduardo Habkost
      Increase KVM_MAX_VCPUS to 1024, so we can test larger VMs.
      
      I'm not changing KVM_SOFT_MAX_VCPUS yet because I'm afraid it
      might involve complicated questions around the meaning of
      "supported" and "recommended" in the upstream tree.
      KVM_SOFT_MAX_VCPUS will be changed in a separate patch.
      
      For reference, visible effects of this change are:
      - KVM_CAP_MAX_VCPUS will now return 1024 (of course)
      - Default value for CPUID[HYPERV_CPUID_IMPLEMENT_LIMITS (0x40000005)].EAX
        will now be 1024
      - KVM_MAX_VCPU_ID will change from 1151 to 4096
      - Size of struct kvm will increase from 19328 to 22272 bytes
        (in x86_64)
      - Size of struct kvm_ioapic will increase from 1780 to 5084 bytes
        (in x86_64)
      - Bitmap stack variables that will grow:
        - At kvm_hv_flush_tlb() and kvm_hv_send_ipi(),
          vp_bitmap[] and vcpu_bitmap[] will now be 128 bytes long
        - vcpu_bitmap at ioapic_write_indirect() will be 128 bytes long
          once patch "KVM: x86: Fix stack-out-of-bounds memory access
          from ioapic_write_indirect()" is applied
      Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
      Message-Id: <20210903211600.2002377-3-ehabkost@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      074c82c8
    • kvm: x86: Set KVM_MAX_VCPU_ID to 4*KVM_MAX_VCPUS · 4ddacd52
      Authored by Eduardo Habkost
      Instead of requiring KVM_MAX_VCPU_ID to be manually increased
      every time we increase KVM_MAX_VCPUS, set it to 4*KVM_MAX_VCPUS.
      This should be enough for CPU topologies where Cores-per-Package
      and Packages-per-Socket are not powers of 2.
      
      In practice, this increases KVM_MAX_VCPU_ID from 1023 to 1152.
      The only side effect of this change is making some fields in
      struct kvm_ioapic larger, increasing the struct size from 1628 to
      1780 bytes (in x86_64).
      Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
      Message-Id: <20210903211600.2002377-2-ehabkost@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4ddacd52
    • KVM: VMX: avoid running vmx_handle_exit_irqoff in case of emulation · 81b4b56d
      Authored by Maxim Levitsky
      If we are emulating an invalid guest state, we don't have a correct
      exit reason, and thus we shouldn't do anything in this function.
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20210826095750.1650467-2-mlevitsk@redhat.com>
      Cc: stable@vger.kernel.org
      Fixes: 95b5a48c ("KVM: VMX: Handle NMIs, #MCs and async #PFs in common irqs-disabled fn", 2019-06-18)
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      81b4b56d
    • KVM: x86/mmu: Don't freak out if pml5_root is NULL on 4-level host · a717a780
      Authored by Sean Christopherson
      Include pml5_root in the set of special roots if and only if the host,
      and thus NPT, is using 5-level paging.  mmu_alloc_special_roots() expects
      special roots to be allocated as a bundle, i.e. they're either all valid
      or all NULL.  But for pml5_root, that expectation only holds true if the
      host uses 5-level paging, which causes KVM to WARN about pml5_root being
      NULL when the other special roots are valid.
      
      The silver lining of 4-level vs. 5-level NPT being tied to the host
      kernel's paging level is that KVM's shadow root level is constant; unlike
      VMX's EPT, KVM can't choose 4-level NPT based on guest.MAXPHYADDR.  That
      means KVM can still expect pml5_root to be bundled with the other special
      roots, it just needs to be conditioned on the shadow root level.
      
      Fixes: cb0f722a ("KVM: x86/mmu: Support shadowing NPT when 5-level paging is enabled in host")
      Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210824005824.205536-1-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a717a780
  4. 04 Sep 2021, 5 commits
  5. 03 Sep 2021, 3 commits
    • x86: remove cc-option-yn test for -mtune= · 7ab44e9e
      Authored by Nick Desaulniers
      As noted in the comment, -mtune= has been supported since GCC 3.4. The
      minimum required version of GCC to build the kernel (as specified in
      Documentation/process/changes.rst) is GCC 4.9.
      
      The "tune" variable is not immediately expanded; instead, it defines a
      macro whose values for -mtune= are tested via cc-option later. But we can
      skip the test for whether to use -mtune= vs. -mcpu=.
      Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
      Reviewed-by: Nathan Chancellor <nathan@kernel.org>
      Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      7ab44e9e
    • x86/build/vdso: fix missing FORCE for *.so build rule · 55a6d00e
      Authored by Masahiro Yamada
      Add FORCE so that if_changed can detect the command line change.
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      55a6d00e
    • x86/PCI: sta2x11: switch from 'pci_' to 'dma_' API · 0da14a19
      Authored by Christophe JAILLET
      The wrappers in include/linux/pci-dma-compat.h should go away.
      
      The patch has been generated with the coccinelle script below.
      
      It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
      'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
      This is less verbose.
      
      It has been compile tested.
      
      @@
      @@
      -    PCI_DMA_BIDIRECTIONAL
      +    DMA_BIDIRECTIONAL
      
      @@
      @@
      -    PCI_DMA_TODEVICE
      +    DMA_TO_DEVICE
      
      @@
      @@
      -    PCI_DMA_FROMDEVICE
      +    DMA_FROM_DEVICE
      
      @@
      @@
      -    PCI_DMA_NONE
      +    DMA_NONE
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_alloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_zalloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_free_consistent(e1, e2, e3, e4)
      +    dma_free_coherent(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_single(e1, e2, e3, e4)
      +    dma_map_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_single(e1, e2, e3, e4)
      +    dma_unmap_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4, e5;
      @@
      -    pci_map_page(e1, e2, e3, e4, e5)
      +    dma_map_page(&e1->dev, e2, e3, e4, e5)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_page(e1, e2, e3, e4)
      +    dma_unmap_page(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_sg(e1, e2, e3, e4)
      +    dma_map_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_sg(e1, e2, e3, e4)
      +    dma_unmap_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
      +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
      +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
      +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
      +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2;
      @@
      -    pci_dma_mapping_error(e1, e2)
      +    dma_mapping_error(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_dma_mask(e1, e2)
      +    dma_set_mask(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_consistent_dma_mask(e1, e2)
      +    dma_set_coherent_mask(&e1->dev, e2)
      
      Link: https://lore.kernel.org/r/99656452963ba3c63a6cb12e151279d81da365eb.1629658069.git.christophe.jaillet@wanadoo.fr
      Link: https://lore.kernel.org/kernel-janitors/20200421081257.GA131897@infradead.org/
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      0da14a19
  6. 02 Sep 2021, 1 commit
    • x86/setup: Explicitly include acpi.h · ea7b4244
      Authored by Nathan Chancellor
      After commit 342f43af ("iscsi_ibft: fix crash due to KASLR physical
      memory remapping") x86_64_defconfig shows the following errors:
      
        arch/x86/kernel/setup.c: In function ‘setup_arch’:
        arch/x86/kernel/setup.c:916:13: error: implicit declaration of function ‘acpi_mps_check’ [-Werror=implicit-function-declaration]
          916 |         if (acpi_mps_check()) {
              |             ^~~~~~~~~~~~~~
        arch/x86/kernel/setup.c:1110:9: error: implicit declaration of function ‘acpi_table_upgrade’ [-Werror=implicit-function-declaration]
         1110 |         acpi_table_upgrade();
              |         ^~~~~~~~~~~~~~~~~~
        [... more acpi noise ...]
      
      acpi.h was being implicitly included from iscsi_ibft.h in this
      configuration so the removal of that header means these functions have
      no definition or declaration.
      
      In most other configurations, <linux/acpi.h> continued to be included
      through at least <linux/tboot.h> if CONFIG_INTEL_TXT was enabled, and
      there were probably other implicit include paths too.
      
      Add acpi.h explicitly so there is no more error, and so that we don't
      continue to depend on these unreliable implicit include paths.
      Tested-by: Matthieu Baerts <matthieu.baerts@tessares.net>
      Signed-off-by: Nathan Chancellor <nathan@kernel.org>
      Cc: Maurizio Lombardi <mlombard@redhat.com>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ea7b4244
  7. 01 Sep 2021, 2 commits
  8. 30 Aug 2021, 3 commits
  9. 28 Aug 2021, 1 commit
  10. 27 Aug 2021, 4 commits
    • crypto: aesni - xts_crypt() return if walk.nbytes is 0 · 72ff2bf0
      Authored by Shreyansh Chouhan
      The xts_crypt() code doesn't call kernel_fpu_end() after calling
      kernel_fpu_begin() if walk.nbytes is 0. The correct behavior is to not
      call kernel_fpu_begin() at all when walk.nbytes is 0.
      
      Reported-by: syzbot+20191dc583eff8602d2d@syzkaller.appspotmail.com
      Signed-off-by: Shreyansh Chouhan <chouhan.shreyansh630@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      72ff2bf0
    • crypto: x86/sm4 - add AES-NI/AVX2/x86_64 implementation · 5b2efa2b
      Authored by Tianjia Zhang
      Like the AESNI/AVX implementation, this patch adds an accelerated
      AESNI/AVX2 implementation. By reusing the AESNI/AVX mode-related code,
      the amount of new code is greatly reduced. The benchmark data shows that
      at a block size of 1024, AVX2 acceleration improves performance by about
      70% over AVX acceleration and is 7.7 times faster than the pure software
      sm4-generic implementation.
      
      The main algorithm implementation comes from SM4 AES-NI work by
      libgcrypt and Markku-Juhani O. Saarinen at:
      https://github.com/mjosaarinen/sm4ni
      
      This optimization supports the four modes of SM4, ECB, CBC, CFB,
      and CTR. Since CBC and CFB do not support multiple block parallel
      encryption, the optimization effect is not obvious.
      
      Benchmark on an Intel i5-6200U at 2.30GHz, comparing three
      implementations: pure software sm4-generic, AESNI/AVX acceleration, and
      AESNI/AVX2 acceleration. The data comes from the 218 and 518 modes of
      tcrypt; columns are block sizes in bytes and values are in Mb/s:
      
      block-size  |    16      64     128     256    1024    1420    4096
      sm4-generic
          ECB enc | 60.94   70.41   72.27   73.02   73.87   73.58   73.59
          ECB dec | 61.87   70.53   72.15   73.09   73.89   73.92   73.86
          CBC enc | 56.71   66.31   68.05   69.84   70.02   70.12   70.24
          CBC dec | 54.54   65.91   68.22   69.51   70.63   70.79   70.82
          CFB enc | 57.21   67.24   69.10   70.25   70.73   70.52   71.42
          CFB dec | 57.22   64.74   66.31   67.24   67.40   67.64   67.58
          CTR enc | 59.47   68.64   69.91   71.02   71.86   71.61   71.95
          CTR dec | 59.94   68.77   69.95   71.00   71.84   71.55   71.95
      sm4-aesni-avx
          ECB enc | 44.95  177.35  292.06  316.98  339.48  322.27  330.59
          ECB dec | 45.28  178.66  292.31  317.52  339.59  322.52  331.16
          CBC enc | 57.75   67.68   69.72   70.60   71.48   71.63   71.74
          CBC dec | 44.32  176.83  284.32  307.24  328.61  312.61  325.82
          CFB enc | 57.81   67.64   69.63   70.55   71.40   71.35   71.70
          CFB dec | 43.14  167.78  282.03  307.20  328.35  318.24  325.95
          CTR enc | 42.35  163.32  279.11  302.93  320.86  310.56  317.93
          CTR dec | 42.39  162.81  278.49  302.37  321.11  310.33  318.37
      sm4-aesni-avx2
          ECB enc | 45.19  177.41  292.42  316.12  339.90  322.53  330.54
          ECB dec | 44.83  178.90  291.45  317.31  339.85  322.55  331.07
          CBC enc | 57.66   67.62   69.73   70.55   71.58   71.66   71.77
          CBC dec | 44.34  176.86  286.10  501.68  559.58  483.87  527.46
          CFB enc | 57.43   67.60   69.61   70.52   71.43   71.28   71.65
          CFB dec | 43.12  167.75  268.09  499.33  558.35  490.36  524.73
          CTR enc | 42.42  163.39  256.17  493.95  552.45  481.58  517.19
          CTR dec | 42.49  163.11  256.36  493.34  552.62  481.49  516.83
      Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      5b2efa2b
    • crypto: x86/sm4 - export reusable AESNI/AVX functions · de79d9aa
      Authored by Tianjia Zhang
      Export the reusable functions in the SM4 AESNI/AVX implementation,
      mainly public functions, which are used to develop the SM4 AESNI/AVX2
      implementation and eliminate unnecessary duplication of code.
      
      At the same time, minor fixes were made to make the public functions
      more general.
      Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      de79d9aa
    • um: fix stub location calculation · adf9ae0d
      Authored by Johannes Berg
      In commit 9f0b4807 ("um: rework userspace stubs to not hard-code
      stub location") I changed stub_segv_handler() to do a calculation with
      a pointer to a stack variable to find the data page that we're using
      for the stack and the rest of the data. This same commit was meant to
      do it as well for stub_clone_handler(), but the change inadvertently
      went into commit 84b2789d ("um: separate child and parent errors
      in clone stub") instead.
      
      This was reported to be miscompiled by gcc 5, causing the code to crash
      here. I'm not sure why; perhaps it's undefined behavior because the
      variable isn't initialized? In any case, this trick always seemed bad,
      so just create a new inline function that does the calculation in
      assembly.
      
      Reported-by: subashab@codeaurora.org
      Fixes: 9f0b4807 ("um: rework userspace stubs to not hard-code stub location")
      Fixes: 84b2789d ("um: separate child and parent errors in clone stub")
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: Richard Weinberger <richard@nod.at>
      adf9ae0d