1. 28 Jul 2021 (1 commit)
  2. 25 Jun 2021 (3 commits)
  3. 24 Jun 2021 (1 commit)
  4. 23 Jun 2021 (2 commits)
  5. 07 Jun 2021 (1 commit)
  6. 17 Apr 2021 (1 commit)
  7. 15 Apr 2021 (1 commit)
  8. 09 Mar 2021 (2 commits)
  9. 14 Feb 2021 (1 commit)
  10. 19 Jan 2021 (1 commit)
    • s390: convert to generic entry · 56e62a73
      Sven Schnelle authored
      This patch converts s390 to use the generic entry infrastructure from
      kernel/entry/*.
      
      There are a few special things on s390:
      
      - PIF_PER_TRAP is moved to TIF_PER_TRAP as the generic code doesn't
        know about our PIF flags in exit_to_user_mode_loop().
      
      - The old code had several ways to restart syscalls:
      
        a) PIF_SYSCALL_RESTART, which was only set during execve to force a
           restart after upgrading a process (usually qemu-kvm) to pgste page
           table extensions.
      
        b) PIF_SYSCALL, which is set by do_signal() to indicate that the
           current syscall should be restarted. This is changed so that
           do_signal() now also uses PIF_SYSCALL_RESTART. Continuing to use
           PIF_SYSCALL doesn't work with the generic code, and switching to
           PIF_SYSCALL_RESTART gives PIF_SYSCALL and PIF_SYSCALL_RESTART
           clearly distinct meanings.
      
      - On s390, calling sys_sigreturn or sys_rt_sigreturn is implemented by
      executing an svc instruction on the process stack, which causes a fault.
      While handling that fault, the fault code sets PIF_SYSCALL to hand
      processing over to the syscall code on exit to user mode.
      
      The patch introduces PIF_SYSCALL_RET_SET, which is set if ptrace sets a
      return value for a syscall. The s390x ptrace ABI uses r2 for both the
      syscall number and the return value, so ptrace cannot set the syscall
      number and the return value at the same time. The flag makes handling
      that a bit easier: do_syscall() will simply skip executing the syscall
      when PIF_SYSCALL_RET_SET is set.
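      As a rough illustration of that flag check, here is a toy Python model. The flag name comes from the commit; the Regs class and handler are invented for this sketch, and the real code is kernel C operating on struct pt_regs:

```python
from dataclasses import dataclass

# Set when ptrace has already stored a return value for the syscall.
PIF_SYSCALL_RET_SET = 1 << 0

@dataclass
class Regs:
    r2: int = 0      # on s390x: syscall number on entry, return value on exit
    flags: int = 0   # per-task PIF flags (simplified to one bit here)

def do_syscall(regs: Regs, handler) -> int:
    """Skip the syscall body when ptrace has already set a return value."""
    if regs.flags & PIF_SYSCALL_RET_SET:
        return regs.r2               # keep the ptrace-injected value in r2
    regs.r2 = handler(regs)          # normal path: run the syscall handler
    return regs.r2

# Normal syscall: the handler's result lands in r2.
assert do_syscall(Regs(r2=4), lambda regs: 42) == 42

# ptrace injected -1 as the return value: the handler is skipped.
assert do_syscall(Regs(r2=-1, flags=PIF_SYSCALL_RET_SET), lambda regs: 42) == -1
```

      Because r2 is shared between the syscall number and the return value, the flag is what tells do_syscall() that r2 already holds the answer rather than the number of a syscall to run.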
      
      CONFIG_DEBUG_ASCE was removed in favour of the generic CONFIG_DEBUG_ENTRY.
      CR1/7/13 will be checked on both kernel entry and exit to ensure they
      contain the correct ASCEs.
      Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      56e62a73
  11. 10 Dec 2020 (2 commits)
  12. 11 Nov 2020 (2 commits)
  13. 13 Aug 2020 (1 commit)
  14. 10 Jul 2020 (1 commit)
  15. 09 Jul 2020 (1 commit)
  16. 23 Jun 2020 (1 commit)
  17. 12 Jun 2020 (1 commit)
  18. 10 Jun 2020 (3 commits)
    • mmap locking API: use coccinelle to convert mmap_sem rwsem call sites · d8ed45c5
      Michel Lespinasse authored
      This change converts the existing mmap_sem rwsem calls to use the new mmap
      locking API instead.
      
      The change is generated using coccinelle with the following rule:
      
      // spatch --sp-file mmap_lock_api.cocci --in-place --include-headers --dir .
      
      @@
      expression mm;
      @@
      (
      -init_rwsem
      +mmap_init_lock
      |
      -down_write
      +mmap_write_lock
      |
      -down_write_killable
      +mmap_write_lock_killable
      |
      -down_write_trylock
      +mmap_write_trylock
      |
      -up_write
      +mmap_write_unlock
      |
      -downgrade_write
      +mmap_write_downgrade
      |
      -down_read
      +mmap_read_lock
      |
      -down_read_killable
      +mmap_read_lock_killable
      |
      -down_read_trylock
      +mmap_read_trylock
      |
      -up_read
      +mmap_read_unlock
      )
      -(&mm->mmap_sem)
      +(mm)
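      The effect of the semantic patch above can be illustrated with a plain-regex Python sketch. Coccinelle is syntax-aware; this regex approximation is only for illustration and encodes the same old-name to new-name mapping:

```python
import re

# The rename table from the semantic patch: rwsem primitive -> mmap lock API.
RENAMES = {
    "init_rwsem": "mmap_init_lock",
    "down_write_killable": "mmap_write_lock_killable",
    "down_write_trylock": "mmap_write_trylock",
    "down_write": "mmap_write_lock",
    "downgrade_write": "mmap_write_downgrade",
    "down_read_killable": "mmap_read_lock_killable",
    "down_read_trylock": "mmap_read_trylock",
    "down_read": "mmap_read_lock",
    "up_write": "mmap_write_unlock",
    "up_read": "mmap_read_unlock",
}

def convert(line: str) -> str:
    """Rewrite old_fn(&mm->mmap_sem) call sites to new_fn(mm)."""
    for old, new in RENAMES.items():
        line = re.sub(rf"\b{old}\(&(\w+)->mmap_sem\)", rf"{new}(\1)", line)
    return line

assert convert("down_read(&mm->mmap_sem);") == "mmap_read_lock(mm);"
assert convert("up_write(&mm->mmap_sem);") == "mmap_write_unlock(mm);"
```

      The real conversion was done with spatch so that only genuine call expressions were rewritten, regardless of whitespace or line breaks; the regex version is just a compact way to read the rule.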
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
      Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>
      Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dbueso@suse.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Liam Howlett <Liam.Howlett@oracle.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ying Han <yinghan@google.com>
      Link: http://lkml.kernel.org/r/20200520052908.204642-5-walken@google.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d8ed45c5
    • mm: reorder includes after introduction of linux/pgtable.h · 65fddcfc
      Mike Rapoport authored
      The replacement of <asm/pgtable.h> with <linux/pgtable.h> made the include
      of the latter land in the middle of asm includes.  Fix this up with the aid
      of the script below and manual adjustments here and there.
      
      	import sys

      	if len(sys.argv) != 3:
      	    print("USAGE: %s <file> <header>" % sys.argv[0])
      	    sys.exit(1)

      	hdr_to_move = "#include <linux/%s>" % sys.argv[2]
      	moved = False
      	in_hdrs = False

      	with open(sys.argv[1], "r") as f:
      	    lines = f.readlines()
      	    for _line in lines:
      		line = _line.rstrip('\n')
      		if line == hdr_to_move:
      		    continue
      		if line.startswith("#include <linux/"):
      		    in_hdrs = True
      		elif not moved and in_hdrs:
      		    moved = True
      		    print(hdr_to_move)
      		print(line)
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-4-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      65fddcfc
    • mm: introduce include/linux/pgtable.h · ca5999fd
      Mike Rapoport authored
      The include/linux/pgtable.h is going to be the home of generic page table
      manipulation functions.
      
      Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
      make the latter include asm/pgtable.h.
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ca5999fd
  19. 01 Jun 2020 (1 commit)
    • KVM: rename kvm_arch_can_inject_async_page_present() to kvm_arch_can_dequeue_async_page_present() · 7c0ade6c
      Vitaly Kuznetsov authored
      An innocent reader of the following x86 KVM code:
      
      bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
      {
              if (!(vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED))
                      return true;
      ...
      
      may get very confused: if the APF mechanism is not enabled, why do we report
      that we 'can inject async page present'? In reality, upon injection
      kvm_arch_async_page_present() will check the same condition again and,
      in case APF is disabled, will just drop the item. This is fine, as a
      guest which deliberately disabled APF doesn't expect to get any APF
      notifications.
      
      Rename kvm_arch_can_inject_async_page_present() to
      kvm_arch_can_dequeue_async_page_present() to make it clear what we are
      checking: if the item can be dequeued (meaning either injected or just
      dropped).
      
      On s390 kvm_arch_can_inject_async_page_present() always returns 'true' so
      the rename doesn't matter much.
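      A toy Python model of the "dequeue" semantics described above. All names other than the renamed function are invented for this sketch; the real code is kernel C and checks further per-vCPU conditions:

```python
from collections import deque

def can_dequeue_async_page_present(apf_enabled: bool) -> bool:
    # The commit's point: even with APF disabled the answer is "yes" --
    # the item can be dequeued, it will simply be dropped instead of injected.
    return True

def deliver_pending(queue: deque, apf_enabled: bool) -> list:
    """Drain the async-PF queue, injecting or silently dropping each item."""
    delivered = []
    while queue and can_dequeue_async_page_present(apf_enabled):
        item = queue.popleft()
        if apf_enabled:              # the "page present" path re-checks APF
            delivered.append(item)
        # else: drop silently -- the guest disabled APF on purpose
    return delivered

assert deliver_pending(deque([1, 2]), apf_enabled=True) == [1, 2]
assert deliver_pending(deque([1, 2]), apf_enabled=False) == []
```

      Either way the queue drains, which is why "can dequeue" describes the check better than "can inject".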
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20200525144125.143875-4-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      7c0ade6c
  20. 28 May 2020 (1 commit)
    • s390: remove critical section cleanup from entry.S · 0b0ed657
      Sven Schnelle authored
      The current code is rather complex and caused a lot of subtle and
      hard-to-debug bugs in the past. Simplify the code by calling the
      system_call handler with interrupts disabled, saving the machine
      state, and re-enabling interrupts later.
      
      This requires significant changes to the machine check handling code as
      well. When the machine check interrupt arrives while in kernel mode, the
      new code signals pending machine checks with a SIGP external call. When
      userspace was interrupted, the handler switches to the kernel stack and
      directly executes s390_handle_mcck().
      Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      0b0ed657
  21. 16 May 2020 (1 commit)
    • kvm: add halt-polling cpu usage stats · cb953129
      David Matlack authored
      Two new stats for exposing halt-polling cpu usage:
      halt_poll_success_ns
      halt_poll_fail_ns
      
      The sum of these two stats is the total cpu time spent polling. "success"
      means the VCPU polled until a virtual interrupt was delivered. "fail"
      means the VCPU had to schedule out (either because the maximum poll time
      was reached or because it needed to yield the CPU).
      
      To avoid touching every arch's kvm_vcpu_stat struct, only update and
      export halt-polling cpu usage stats if we're on x86.
      
      Exporting cpu usage as a u64 in nanoseconds means we will overflow at
      ~500 years, which seems reasonably large.
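      A quick sanity check of that overflow estimate. The exact figure for a u64 nanosecond counter is closer to 584 years; the commit rounds down to ~500:

```python
# How long until a u64 counter of nanoseconds wraps around?
U64_MAX = 2**64 - 1
NS_PER_YEAR = 365.25 * 24 * 3600 * 10**9   # Julian year in nanoseconds

years_until_overflow = U64_MAX / NS_PER_YEAR
assert 584 < years_until_overflow < 585    # ~584.5 years, i.e. "~500 years"
```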
      Signed-off-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Jon Cargille <jcargill@google.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      
      Message-Id: <20200508182240.68440-1-jcargill@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      cb953129
  22. 07 May 2020 (1 commit)
    • KVM: X86: Declare KVM_CAP_SET_GUEST_DEBUG properly · b9b2782c
      Peter Xu authored
      KVM_CAP_SET_GUEST_DEBUG should be supported for x86; however, it's not
      declared as supported.  My wild guess is that userspaces like QEMU are
      using "#ifdef KVM_CAP_SET_GUEST_DEBUG" to check for the capability
      instead, but that can be wrong because the compilation host may not be
      the runtime host.

      Userspace might still want to keep the old "#ifdef", though, so as not
      to break guest debug on old kernels.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20200505154750.126300-1-peterx@redhat.com>
      [Do the same for PPC and s390. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b9b2782c
  23. 06 May 2020 (1 commit)
    • KVM: X86: Declare KVM_CAP_SET_GUEST_DEBUG properly · 495907ec
      Peter Xu authored
      KVM_CAP_SET_GUEST_DEBUG should be supported for x86; however, it's not
      declared as supported.  My wild guess is that userspaces like QEMU are
      using "#ifdef KVM_CAP_SET_GUEST_DEBUG" to check for the capability
      instead, but that can be wrong because the compilation host may not be
      the runtime host.

      Userspace might still want to keep the old "#ifdef", though, so as not
      to break guest debug on old kernels.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20200505154750.126300-1-peterx@redhat.com>
      [Do the same for PPC and s390. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      495907ec
  24. 21 Apr 2020 (2 commits)
  25. 14 Apr 2020 (1 commit)
  26. 31 Mar 2020 (1 commit)
  27. 26 Mar 2020 (1 commit)
  28. 24 Mar 2020 (1 commit)
  29. 17 Mar 2020 (3 commits)
    • KVM: Ensure validity of memslot with respect to kvm_get_dirty_log() · 2a49f61d
      Sean Christopherson authored
      Rework kvm_get_dirty_log() so that it "returns" the associated memslot
      on success.  A future patch will rework memslot handling such that
      id_to_memslot() can return NULL; returning the memslot makes it more
      obvious that the validity of the memslot has been verified, i.e. it
      precludes the need to add validity checks in the arch code that are
      technically unnecessary.
      
      To maintain ordering in s390, move the call to kvm_arch_sync_dirty_log()
      from s390's kvm_vm_ioctl_get_dirty_log() to the new kvm_get_dirty_log().
      This is a nop for PPC, the only other arch that doesn't select
      KVM_GENERIC_DIRTYLOG_READ_PROTECT, as its sync_dirty_log() is empty.
      
      Ideally, moving the sync_dirty_log() call would be done in a separate
      patch, but it can't be done in a follow-on patch because that would
      temporarily break s390's ordering.  Making the move in a preparatory
      patch would be functionally correct, but would create an odd scenario
      where the moved sync_dirty_log() would operate on a "different" memslot
      due to consuming the result of a different id_to_memslot().  The
      memslot couldn't actually be different as slots_lock is held, but the
      code is confusing enough as it is, i.e. moving sync_dirty_log() in this
      patch is the lesser of all evils.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2a49f61d
    • KVM: Provide common implementation for generic dirty log functions · 0dff0846
      Sean Christopherson authored
      Move the implementations of KVM_GET_DIRTY_LOG and KVM_CLEAR_DIRTY_LOG
      for CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT into common KVM code.
      The arch specific implementations are extremely similar, differing
      only in whether the dirty log needs to be sync'd from hardware (x86)
      and how the TLBs are flushed.  Add new arch hooks to handle sync
      and TLB flush; the sync will also be used for non-generic dirty log
      support in a future patch (s390).
      
      The ulterior motive for providing a common implementation is to
      eliminate the dependency between arch and common code with respect to
      the memslot referenced by the dirty log, i.e. to make it obvious in the
      code that the validity of the memslot is guaranteed, as a future patch
      will rework memslot handling such that id_to_memslot() can return NULL.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0dff0846
    • S
      KVM: Drop "const" attribute from old memslot in commit_memory_region() · 9d4c197c
      Sean Christopherson 提交于
      Drop the "const" attribute from @old in kvm_arch_commit_memory_region()
      to allow arch specific code to free arch specific resources in the old
      memslot without having to cast away the attribute.  Freeing resources in
      kvm_arch_commit_memory_region() paves the way for simplifying
      kvm_free_memslot() by eliminating the last usage of its @dont param.
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      9d4c197c