1. 25 October 2018, 2 commits
    • x86/cpufeatures: Enumerate MOVDIR64B instruction · ace6485a
      Committed by Fenghua Yu
      MOVDIR64B moves 64 bytes as a direct store with 64-byte write atomicity.
      A direct store is implemented using write combining (WC) to write data
      directly into memory without caching it.
      
      In low-latency offload scenarios (e.g. Non-Volatile Memory), MOVDIR64B
      writes work descriptors (and in some cases data) to device-hosted work
      queues atomically and without cache pollution.
      
      Availability of the MOVDIR64B instruction is indicated by the
      presence of the CPUID feature flag MOVDIR64B (CPUID.0x07.0x0:ECX[bit 28]).
      
      Please check the latest Intel Architecture Instruction Set Extensions
      and Future Features Programming Reference for more details on the
      MOVDIR64B CPUID feature flag.
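      
      As an illustration only (this commit merely adds the feature flag; the
      helper name and register choice below are hypothetical), a 64-byte
      direct-store helper could be issued from C roughly like this:
      
        static inline void movdir64b_sketch(void *dst, const void *src)
        {
                const struct { char _[64]; } *s = src;
                struct { char _[64]; } *d = dst;
      
                /* MOVDIR64B rax, (rdx): 64-byte atomic direct store *src -> *dst */
                asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
                             : "+m" (*d)
                             : "m" (*s), "a" (d), "d" (s));
        }
      
      The destination must be 64-byte aligned and is typically a device-hosted
      work queue mapped WC/UC; the source is an ordinary cacheable buffer.
      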
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ashok Raj <ashok.raj@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1540418237-125817-3-git-send-email-fenghua.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ace6485a
    • x86/cpufeatures: Enumerate MOVDIRI instruction · 33823f4d
      Committed by Fenghua Yu
      MOVDIRI moves a doubleword or quadword from a register to memory as a
      direct store, which is implemented using write combining (WC) to write
      data directly into memory without caching it.
      
      Programmable agents can handle streaming offload (e.g. high-speed packet
      processing in networking). Hardware implements a doorbell (tail pointer)
      register that is updated by software when adding new work elements to
      the streaming offload work queue.
      
      MOVDIRI can be used for the doorbell write, which is a 4-byte or 8-byte
      uncacheable write to MMIO. MOVDIRI has lower overhead than other ways of
      writing the doorbell.
      
      Availability of the MOVDIRI instruction is indicated by the presence of
      the CPUID feature flag MOVDIRI (CPUID.0x07.0x0:ECX[bit 27]).
      
      Please check the latest Intel Architecture Instruction Set Extensions
      and Future Features Programming Reference for more details on the
      MOVDIRI CPUID feature flag.
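      
      For illustration only (this commit just adds the feature flag; the helper
      name, types and register choice are hypothetical), an 8-byte MOVDIRI
      doorbell write could be issued like this:
      
        static inline void movdiri64_sketch(volatile unsigned long *doorbell,
                                            unsigned long val)
        {
                /* movdiri %rax, (%rdx): direct (non-cached) 8-byte store */
                asm volatile(".byte 0x48, 0x0f, 0x38, 0xf9, 0x02"
                             : "=m" (*doorbell)
                             : "a" (val), "d" (doorbell));
        }
      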
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ashok Raj <ashok.raj@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1540418237-125817-2-git-send-email-fenghua.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      33823f4d
  2. 18 October 2018, 1 commit
  3. 17 October 2018, 1 commit
  4. 16 October 2018, 2 commits
    • locking/qspinlock, x86: Provide liveness guarantee · 7aa54be2
      Committed by Peter Zijlstra
      On x86 we cannot do fetch_or() with a single instruction and thus end up
      using a cmpxchg loop, which reduces determinism. Replace the fetch_or()
      with a composite operation: tas-pending + load.
      
      Using two instructions of course opens a window we previously did not
      have. Consider the scenario:
      
      	CPU0		CPU1		CPU2
      
       1)	lock
      	  trylock -> (0,0,1)
      
       2)			lock
      			  trylock /* fail */
      
       3)	unlock -> (0,0,0)
      
       4)					lock
      					  trylock -> (0,0,1)
      
       5)			  tas-pending -> (0,1,1)
      			  load-val <- (0,1,0) from 3
      
       6)			  clear-pending-set-locked -> (0,0,1)
      
      			  FAIL: _2_ owners
      
      where 5) is our new composite operation. When we consider each part of
      the qspinlock state as a separate variable (as we can when
      _Q_PENDING_BITS == 8) then the above is entirely possible, because
      tas-pending will only RmW the pending byte, so the later load is able
      to observe prior tail and lock state (but not earlier than its own
      trylock, which operates on the whole word, due to coherence).
      
      To avoid this we need 2 things:
      
       - the load must come after the tas-pending (obviously, otherwise it
         can trivially observe prior state).
      
       - the tas-pending must be a full-word RmW instruction (it cannot be an
         XCHGB, for example), such that we cannot observe other state prior to
         setting pending.
      
      On x86 we can realize this by using "LOCK BTS m32, r32" for
      tas-pending followed by a regular load.
      
      Note that observing later state is not a problem:
      
       - if we fail to observe a later unlock, we'll simply spin-wait for
         that store to become visible.
      
       - if we observe a later xchg_tail(), there is no difference from that
         xchg_tail() having taken place before the tas-pending.
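      
      A simplified sketch of the resulting x86 override (illustrative, not the
      literal patch hunk; it uses the generic _Q_PENDING_* constants):
      
        static __always_inline u32 fetch_set_pending_acquire_sketch(struct qspinlock *lock)
        {
                bool pending;
                u32 val;
      
                /* Full-word RmW: LOCK BTSL sets the pending bit, CF = old value. */
                asm volatile(LOCK_PREFIX "btsl %2, %0\n\t"
                             "setc %1"
                             : "+m" (lock->val.counter), "=qm" (pending)
                             : "Ir" (_Q_PENDING_OFFSET)
                             : "memory", "cc");
      
                /* A plain load *after* the RmW observes tail and locked state. */
                val = atomic_read(&lock->val) & ~_Q_PENDING_MASK;
                if (pending)
                        val |= _Q_PENDING_VAL;
      
                return val;
        }
      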
      Suggested-by: Will Deacon <will.deacon@arm.com>
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: andrea.parri@amarulasolutions.com
      Cc: longman@redhat.com
      Fixes: 59fb586b ("locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath")
      Link: https://lkml.kernel.org/r/20181003130957.183726335@infradead.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      7aa54be2
    • x86/asm: 'Simplify' GEN_*_RMWcc() macros · 288e4521
      Committed by Peter Zijlstra
      Currently the GEN_*_RMWcc() macros include a return statement, which
      pretty much mandates that we directly wrap them in an (inline) function.
      
      Macros with return statements are tricky and, as per the above, limit
      use, so remove the return statement and make them
      statement-expressions. This allows them to be used more widely.
      
      Also, shuffle the arguments a bit. Place the @cc argument as 3rd, this
      makes it consistent between UNARY and BINARY, but more importantly, it
      makes the @arg0 argument last.
      
      Since the @arg0 argument is now last, we can do CPP trickery and make
      it an optional argument, simplifying the users; 17 out of 18
      occurrences do not need this argument.
      
      Finally, change to asm symbolic names, instead of the numeric ordering
      of operands, which allows us to get rid of __BINARY_RMWcc_ARG and get
      cleaner code overall.
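      
      For illustration (not the verbatim macro; the SETcc fallback is shown,
      the asm-goto variant is analogous), the statement-expression form looks
      roughly like:
      
        #define GEN_UNARY_RMWcc_SKETCH(op, var, cc)                     \
        ({                                                              \
                bool __cc;                                              \
                asm volatile (op " %[val]\n\t"                          \
                              "set" cc " %[b]"                          \
                              : [val] "+m" (var), [b] "=qm" (__cc)      \
                              : : "memory");                            \
                __cc;                                                   \
        })
      
        /* e.g. an atomic_dec_and_test()-style user: */
        static inline bool dec_and_test_sketch(atomic_t *v)
        {
                return GEN_UNARY_RMWcc_SKETCH(LOCK_PREFIX "decl", v->counter, "e");
        }
      
      Because the macro now yields a value instead of containing a return, it
      can be used directly in expressions.
      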
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: JBeulich@suse.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: bp@alien8.de
      Cc: hpa@linux.intel.com
      Link: https://lkml.kernel.org/r/20181003130957.108960094@infradead.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      288e4521
  5. 14 October 2018, 1 commit
  6. 10 October 2018, 3 commits
    • x86/acpi, x86/boot: Take RSDP address for boot params if available · e7b66d16
      Committed by Juergen Gross
      If the RSDP address in struct boot_params is specified, don't try to
      find the table by searching; instead, take the address directly as set
      by the boot loader.
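      
      A minimal sketch of the idea (helper name hypothetical; the RSDP field
      itself is the one added to setup_header by the next entry below,
      ae7e1238):
      
        static u64 acpi_rsdp_from_boot_params_sketch(void)
        {
                /* non-zero: the boot loader handed us the RSDP address, use it */
                if (boot_params.hdr.acpi_rsdp_addr)
                        return boot_params.hdr.acpi_rsdp_addr;
      
                return 0;       /* zero: fall back to the legacy EBDA/BIOS scan */
        }
      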
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jia Zhang <qianyue.zj@alibaba-inc.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: boris.ostrovsky@oracle.com
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/20181010061456.22238-4-jgross@suse.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e7b66d16
    • x86/boot: Add ACPI RSDP address to setup_header · ae7e1238
      Committed by Juergen Gross
      Xen PVH guests receive the address of the RSDP table from Xen. In order
      to support booting a Xen PVH guest via Grub2 using the standard x86
      boot entry we need a way for Grub2 to pass the RSDP address to the
      kernel.
      
      For this purpose expand struct setup_header to hold the physical
      address of the RSDP table. A value of zero means it isn't specified and
      the table has to be located the legacy way (searching through low memory
      or the EBDA).
      
      While documenting the new setup_header layout and protocol version
      2.14, add the missing documentation of protocol version 2.13.
      
      There are Grub2 versions in several distros with a downstream patch
      violating the boot protocol by writing past the end of setup_header.
      This requires another update of the boot protocol to enable the kernel
      to distinguish between a specified RSDP address and one filled with
      garbage by such a broken Grub2.
      
      From protocol 2.14 on, Grub2 will write the version it supports (but
      never a higher value than found to be supported by the kernel) ORed
      with 0x8000 into the version field of setup_header. This enables the
      kernel to know up to which field Grub2 has written information. All
      fields after that have to be considered as possibly clobbered.
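      
      For illustration only (constants and helper name are not from the patch),
      a kernel-side check that the loader actually filled the new field could
      look like:
      
        static bool rsdp_field_valid_sketch(u16 hdr_version)
        {
                /* bit 15 set: the loader reports how far it filled setup_header */
                if (!(hdr_version & 0x8000))
                        return false;   /* old loader: the field may be garbage */
      
                /* filled at least up to the protocol 2.14 fields */
                return (hdr_version & 0x7fff) >= 0x020e;
        }
      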
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: boris.ostrovsky@oracle.com
      Cc: bp@alien8.de
      Cc: corbet@lwn.net
      Cc: linux-doc@vger.kernel.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/20181010061456.22238-3-jgross@suse.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ae7e1238
    • mm: Preserve _PAGE_DEVMAP across mprotect() calls · 4628a645
      Committed by Jan Kara
      Currently the _PAGE_DEVMAP bit is not preserved by mprotect(2) calls. As
      a result we will see warnings such as:
      
      BUG: Bad page map in process JobWrk0013  pte:800001803875ea25 pmd:7624381067
      addr:00007f0930720000 vm_flags:280000f9 anon_vma:          (null) mapping:ffff97f2384056f0 index:0
      file:457-000000fe00000030-00000009-000000ca-00000001_2001.fileblock fault:xfs_filemap_fault [xfs] mmap:xfs_file_mmap [xfs] readpage:          (null)
      CPU: 3 PID: 15848 Comm: JobWrk0013 Tainted: G        W          4.12.14-2.g7573215-default #1 SLE12-SP4 (unreleased)
      Hardware name: Intel Corporation S2600WFD/S2600WFD, BIOS SE5C620.86B.01.00.0833.051120182255 05/11/2018
      Call Trace:
       dump_stack+0x5a/0x75
       print_bad_pte+0x217/0x2c0
       ? enqueue_task_fair+0x76/0x9f0
       _vm_normal_page+0xe5/0x100
       zap_pte_range+0x148/0x740
       unmap_page_range+0x39a/0x4b0
       unmap_vmas+0x42/0x90
       unmap_region+0x99/0xf0
       ? vma_gap_callbacks_rotate+0x1a/0x20
       do_munmap+0x255/0x3a0
       vm_munmap+0x54/0x80
       SyS_munmap+0x1d/0x30
       do_syscall_64+0x74/0x150
       entry_SYSCALL_64_after_hwframe+0x3d/0xa2
      ...
      
      when mprotect(2) gets used on DAX mappings. There is also a wide variety
      of other failures that can result from the missing _PAGE_DEVMAP flag when
      the area gets used by get_user_pages() later.
      
      Fix the problem by including _PAGE_DEVMAP in the set of flags that get
      preserved by mprotect(2).
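      
      On x86 this amounts to adding the bit to the mask of flags preserved
      across PTE modifications; the powerpc change is analogous. Sketch of the
      idea (not the verbatim hunk):
      
        /* arch/x86/include/asm/pgtable_types.h, illustrative */
        #define _PAGE_CHG_MASK  (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |         \
                                 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \
                                 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)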
      
      Fixes: 69660fd7 ("x86, mm: introduce _PAGE_DEVMAP")
      Fixes: ebd31197 ("powerpc/mm: Add devmap support for ppc64")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      4628a645
  7. 09 October 2018, 7 commits
  8. 08 October 2018, 7 commits
    • x86/fsgsbase/64: Clean up various details · ec3a9418
      Committed by Ingo Molnar
      So:
      
       - use 'extern' consistently for APIs
      
       - fix weird header guard
      
       - clarify code comments
      
       - reorder APIs by type
      
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/1537312139-5580-2-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ec3a9418
    • x86/segments: Introduce the 'CPUNODE' naming to better document the segment limit CPU/node NR trick · 22245bdf
      Committed by Ingo Molnar
      We have a special segment descriptor entry in the GDT, whose sole purpose is to
      encode the CPU and node numbers in its limit (size) field. There are user-space
      instructions that allow the reading of the limit field, which gives us a really
      fast way to read the CPU and node IDs from the vDSO for example.
      
      But the naming of related functionality does not make this clear, at all:
      
      	VDSO_CPU_SIZE
      	VDSO_CPU_MASK
      	__CPU_NUMBER_SEG
      	GDT_ENTRY_CPU_NUMBER
      	vdso_encode_cpu_node
      	vdso_read_cpu_node
      
      There are a number of problems:
      
       - The 'VDSO_CPU_SIZE' name doesn't really make it clear that this is a
         number of bits, nor does it make it clear which 'CPU' this refers to,
         i.e. that this is about a GDT entry whose limit encodes the CPU and
         node number.
      
       - Furthermore, the 'CPU_NUMBER' naming is actively misleading as well,
         because the segment limit encodes not just the CPU number but the
         node ID as well ...
      
      So use better nomenclature all around: name everything related to this
      trick 'CPUNODE', to make it clear that this is something special, add
      _BITS to make it clear that these are numbers of bits, and propagate this
      to every affected name:
      
      	VDSO_CPU_SIZE         =>  VDSO_CPUNODE_BITS
      	VDSO_CPU_MASK         =>  VDSO_CPUNODE_MASK
      	__CPU_NUMBER_SEG      =>  __CPUNODE_SEG
      	GDT_ENTRY_CPU_NUMBER  =>  GDT_ENTRY_CPUNODE
      	vdso_encode_cpu_node  =>  vdso_encode_cpunode
      	vdso_read_cpu_node    =>  vdso_read_cpunode
      
      This, beyond being less confusing, also makes it easier to grep for all related
      functionality:
      
        $ git grep -i cpunode arch/x86
      
      Also, while at it, fix "return is not a function" style sloppiness in vdso_encode_cpunode().
      
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/1537312139-5580-2-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      22245bdf
    • x86/vdso: Introduce helper functions for CPU and node number · ffebbaed
      Committed by Chang S. Bae
      Clean up the CPU/node number related code a bit, to make it more apparent
      how we are encoding/extracting the CPU and node fields from the
      segment limit.
      
      No change in functionality intended.
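      
      A minimal sketch of the encode/decode pair (constant and helper names are
      illustrative, not the actual kernel identifiers):
      
        #define CPUNODE_BITS_SKETCH     12
        #define CPUNODE_MASK_SKETCH     ((1U << CPUNODE_BITS_SKETCH) - 1)
      
        static inline unsigned long encode_cpunode_sketch(int cpu, unsigned long node)
        {
                return (node << CPUNODE_BITS_SKETCH) | cpu;
        }
      
        static inline void read_cpunode_sketch(unsigned int *cpu, unsigned int *node)
        {
                unsigned int p;
      
                /* LSL returns the limit of the magic GDT entry, readable from user space */
                asm("lsl %1, %0" : "=r" (p) : "r" ((unsigned int)__CPU_NUMBER_SEG));
      
                if (cpu)
                        *cpu = p & CPUNODE_MASK_SKETCH;
                if (node)
                        *node = p >> CPUNODE_BITS_SKETCH;
        }
      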
      
      [ mingo: Wrote new changelog. ]
      Suggested-by: Andy Lutomirski <luto@kernel.org>
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Link: http://lkml.kernel.org/r/1537312139-5580-8-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ffebbaed
    • x86/segments/64: Rename the GDT PER_CPU entry to CPU_NUMBER · c4755613
      Committed by Chang S. Bae
      The old 'per CPU' naming was misleading: 64-bit kernels don't use this
      GDT entry for per CPU data, but to store the CPU (and node) ID.
      
      [ mingo: Wrote new changelog. ]
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Link: http://lkml.kernel.org/r/1537312139-5580-7-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c4755613
    • x86/fsgsbase/64: Convert the ELF core dump code to the new FSGSBASE helpers · 824eea38
      Committed by Chang S. Bae
      Replace open-coded rdmsr() calls with their <asm/fsgsbase.h> API
      counterparts.
      
      No change in functionality intended.
      
      [ mingo: Wrote new changelog. ]
      
      Based-on-code-from: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Link: http://lkml.kernel.org/r/1537312139-5580-5-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      824eea38
    • x86/fsgsbase/64: Make ptrace use the new FS/GS base helpers · e696c231
      Committed by Chang S. Bae
      Use the new FS/GS base helper functions in <asm/fsgsbase.h> in the platform
      specific ptrace implementation of the following APIs:
      
        PTRACE_ARCH_PRCTL,
        PTRACE_SETREG,
        PTRACE_GETREG,
        etc.
      
      This abstracts out the fsgsbase code further and will make the FS/GS
      update mechanism easier to change.
      
      [ mingo: Wrote new changelog. ]
      
      Based-on-code-from: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1537312139-5580-4-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e696c231
    • x86/fsgsbase/64: Introduce FS/GS base helper functions · b1378a56
      Committed by Chang S. Bae
      Introduce FS/GS base access functionality via <asm/fsgsbase.h>,
      not yet used by anything directly.
      
      Factor out task_seg_base() from x86/ptrace.c and rename it to
      x86_fsgsbase_read_task() to make it part of the new helpers.
      
      This will allow us to enhance FSGSBASE support and eventually enable
      the FSBASE/GSBASE instructions.
      
      An "inactive" GS base refers to a base saved at kernel entry
      and being part of an inactive, non-running/stopped user-task.
      (The typical ptrace model.)
      
      Here are the new functions:
      
        x86_fsbase_read_task()
        x86_gsbase_read_task()
        x86_fsbase_write_task()
        x86_gsbase_write_task()
        x86_fsbase_read_cpu()
        x86_fsbase_write_cpu()
        x86_gsbase_read_cpu_inactive()
        x86_gsbase_write_cpu_inactive()
      
      As an advantage of the unified namespace we can now see all FS/GSBASE
      API use in the kernel via the following 'git grep' pattern:
      
        $ git grep x86_.*sbase
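      
      For illustration, the CPU-level accessors essentially wrap the FS/GS base
      MSRs until the FSGSBASE instructions are enabled (sketch only; the
      task-level variants additionally handle stopped tasks):
      
        static inline unsigned long x86_fsbase_read_cpu_sketch(void)
        {
                unsigned long fsbase;
      
                rdmsrl(MSR_FS_BASE, fsbase);    /* FSGSBASE insns not used yet */
                return fsbase;
        }
      
        static inline void x86_fsbase_write_cpu_sketch(unsigned long fsbase)
        {
                wrmsrl(MSR_FS_BASE, fsbase);
        }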
      
      [ mingo: Wrote new changelog. ]
      
      Based-on-code-from: Andy Lutomirski <luto@kernel.org>
      Suggested-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1537312139-5580-3-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b1378a56
  9. 06 October 2018, 5 commits
    • x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs · 5bdcd510
      Committed by Nadav Amit
      As described in:
      
        77b0bf55: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
      
      GCC's inlining heuristics are broken with common asm() patterns used in
      kernel code, resulting in the effective disabling of inlining.
      
      The workaround is to set an assembly macro and call it from the inline
      assembly block - which is also a minor cleanup for the jump-label code.
      
      As a result the code size is slightly increased, but inlining decisions
      are better:
      
            text     data     bss      dec     hex  filename
        18163528 10226300 2957312 31347140 1de51c4  ./vmlinux before
        18163608 10227348 2957312 31348268 1de562c  ./vmlinux after (+1128)
      
      And functions such as intel_pstate_adjust_policy_max(),
      kvm_cpu_accept_dm_intr(), kvm_register_readl() are inlined.
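      
      Standalone illustration of the macrofy pattern used in this and the
      following commits (names made up; the series makes the real macros
      visible to every translation unit via a separate assembler file rather
      than a top-level asm() as shown here):
      
        /* Define the multi-instruction sequence once, as an assembler macro... */
        asm(".macro SKETCH_PAUSE_LOOP count\n\t"
            ".rept \\count\n\t"
            "pause\n\t"
            ".endr\n"
            ".endm");
      
        /* ...then the C inline asm expands to a single macro invocation, so
         * GCC's inline-cost heuristic sees one short "instruction". */
        #define sketch_pause_loop(n) asm volatile("SKETCH_PAUSE_LOOP " #n)
      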
      Tested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Kate Stewart <kstewart@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Philippe Ombredanne <pombredanne@nexb.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20181005202718.229565-4-namit@vmware.com
      Link: https://lore.kernel.org/lkml/20181003213100.189959-11-namit@vmware.com/T/#u
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5bdcd510
    • x86/cpufeature: Macrofy inline assembly code to work around GCC inlining bugs · d5a581d8
      Committed by Nadav Amit
      As described in:
      
        77b0bf55: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
      
      GCC's inlining heuristics are broken with common asm() patterns used in
      kernel code, resulting in the effective disabling of inlining.
      
      The workaround is to set an assembly macro and call it from the inline
      assembly block - which is pretty pointless indirection in the static_cpu_has()
      case, but is worth it to improve overall inlining quality.
      
      The patch slightly increases the kernel size:
      
            text     data     bss      dec     hex  filename
        18162879 10226256 2957312 31346447 1de4f0f  ./vmlinux before
        18163528 10226300 2957312 31347140 1de51c4  ./vmlinux after (+693)
      
      And it enables the inlining of functions such as free_ldt_pgtables().
      Tested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20181005202718.229565-3-namit@vmware.com
      Link: https://lore.kernel.org/lkml/20181003213100.189959-10-namit@vmware.com/T/#u
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d5a581d8
    • x86/extable: Macrofy inline assembly code to work around GCC inlining bugs · 0474d5d9
      Committed by Nadav Amit
      As described in:
      
        77b0bf55: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
      
      GCC's inlining heuristics are broken with common asm() patterns used in
      kernel code, resulting in the effective disabling of inlining.
      
      The workaround is to set an assembly macro and call it from the inline
      assembly block - which is also a minor cleanup for the exception table
      code.
      
      Text size goes up a bit:
      
            text     data     bss      dec     hex  filename
        18162555 10226288 2957312 31346155 1de4deb  ./vmlinux before
        18162879 10226256 2957312 31346447 1de4f0f  ./vmlinux after (+292)
      
      But this allows the inlining of functions such as nested_vmx_exit_reflected(),
      set_segment_reg() and __copy_xstate_to_user(), which is a net benefit.
      Tested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20181005202718.229565-2-namit@vmware.com
      Link: https://lore.kernel.org/lkml/20181003213100.189959-9-namit@vmware.com/T/#u
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0474d5d9
    • x86/KASLR: Update KERNEL_IMAGE_SIZE description · 06d4a462
      Committed by Baoquan He
      Currently CONFIG_RANDOMIZE_BASE=y is set by default, which makes some of the
      old comments above the KERNEL_IMAGE_SIZE definition out of date. Update them
      to the current state of affairs.
      Signed-off-by: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: corbet@lwn.net
      Cc: linux-doc@vger.kernel.org
      Cc: thgarnie@google.com
      Link: http://lkml.kernel.org/r/20181006084327.27467-2-bhe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      06d4a462
    • x86/ioremap: Add an ioremap_encrypted() helper · c3a7a61c
      Committed by Lianbo Jiang
      When SME is enabled, the memory is encrypted in the first kernel. In
      this case, SME also needs to be enabled in the kdump kernel, and we have
      to remap the old memory with the memory encryption mask.
      
      The case of concern here is if SME is active in the first kernel,
      and it is active too in the kdump kernel. There are four cases to be
      considered:
      
      a. dump vmcore
         It is encrypted in the first kernel and needs to be read out in the
         kdump kernel.
      
      b. crash notes
         When dumping the vmcore, people usually need to read useful
         information from the notes, and the notes are also encrypted.
      
      c. iommu device table
         It's encrypted in the first kernel; the kdump kernel needs to access
         its contents to analyze them and extract the information it needs.
      
      d. MMIO of the AMD iommu
         Not encrypted in either kernel.
      
      Add a new bool parameter @encrypted to __ioremap_caller(). If set,
      memory will be remapped with the SME mask.
      
      Add a new function ioremap_encrypted() to explicitly pass in a true
      value for @encrypted. Use ioremap_encrypted() for the above a, b, c
      cases.
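      
      Sketch of the new entry point (the exact __ioremap_caller() signature
      shown here is illustrative):
      
        void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size)
        {
                /* 'true': map with the SME encryption mask applied */
                return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
                                        __builtin_return_address(0), true);
        }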
      
       [ bp: cleanup commit message, extern defs in io.h and drop forgotten
         include. ]
      Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: kexec@lists.infradead.org
      Cc: tglx@linutronix.de
      Cc: mingo@redhat.com
      Cc: hpa@zytor.com
      Cc: akpm@linux-foundation.org
      Cc: dan.j.williams@intel.com
      Cc: bhelgaas@google.com
      Cc: baiyaowei@cmss.chinamobile.com
      Cc: tiwai@suse.de
      Cc: brijesh.singh@amd.com
      Cc: dyoung@redhat.com
      Cc: bhe@redhat.com
      Cc: jroedel@suse.de
      Link: https://lkml.kernel.org/r/20180927071954.29615-2-lijiang@redhat.com
      c3a7a61c
  10. 05 October 2018, 4 commits
    • x86/vdso: Document vgtod_ts better · bcc4a62a
      Committed by Andy Lutomirski
      After reading do_hres() and do_coarse() and scratching my head a bit, I
      figured out why the arithmetic is strange. Document it.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/f66f53d81150bbad47d7b282c9207a71a3ce1c16.1538689401.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      bcc4a62a
    • x86/vdso: Add CLOCK_TAI support · 315f28fa
      Committed by Thomas Gleixner
      With the storage array in place it's now trivial to support CLOCK_TAI in
      the vdso. Extend the base time storage array and add the update code.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Matt Rickard <matt@softrans.com.au>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.823878601@linutronix.de
      315f28fa
    • x86/vdso: Introduce and use vgtod_ts · 49116f20
      Committed by Thomas Gleixner
      It's desirable to support more clocks in the VDSO, e.g. CLOCK_TAI. This
      results either in indirect calls due to the larger switch case, which
      then requires retpolines, or, when the compiler is forced to avoid jump
      tables, in even more conditionals.
      
      To avoid both variants, which are bad for performance, the high-resolution
      functions and the coarse-grained functions will each be collapsed into
      one. That requires storing the clock-specific base time in an array.
      
      Introduce struct vgtod_ts for storage and convert the data store, the
      update function and the individual clock functions over to use it.
      
      The new storage no longer uses gtod_long_t for the seconds, depending on
      a 32- or 64-bit compile, because this needs to be the full 64-bit value
      even for 32-bit once a Y2038 function is added. No point in keeping the
      distinction alive in the internal representation.
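      
      Illustratively (field and constant names are approximate), the storage
      becomes:
      
        #define NR_BASES_SKETCH (CLOCK_TAI + 1) /* illustrative array sizing */
      
        struct vgtod_ts {
                u64     sec;    /* always full 64-bit seconds */
                u64     nsec;
        };
      
        struct vsyscall_gtod_data_sketch {
                unsigned int    seq;
                /* ...clocksource fields... */
                struct vgtod_ts basetime[NR_BASES_SKETCH];      /* indexed by clock id */
        };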
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Rickard <matt@softrans.com.au>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.324679401@linutronix.de
      49116f20
    • x86/vdso: Use unsigned int consistently for vsyscall_gtod_data::seq · 77e9c678
      Committed by Thomas Gleixner
      The sequence count in vgtod_data is unsigned int, but the call sites use
      unsigned long, which is a pointless exercise. Fix the call sites and
      replace bare 'unsigned' with 'unsigned int' while at it.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Rickard <matt@softrans.com.au>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.236250416@linutronix.de
      77e9c678
  11. 04 October 2018, 4 commits
    • x86/paravirt: Work around GCC inlining bugs when compiling paravirt ops · 494b5168
      Committed by Nadav Amit
      As described in:
      
        77b0bf55: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
      
      GCC's inlining heuristics are broken with common asm() patterns used in
      kernel code, resulting in the effective disabling of inlining.
      
      The workaround is to set an assembly macro and call it from the inline
      assembly block. As a result GCC considers the inline assembly block as
      a single instruction. (Which it isn't, but that's the best we can get.)
      
      In this patch we wrap the paravirt call section tricks in a macro to
      hide them from GCC.
      
      The effect of the patch is more aggressive inlining, which also causes
      a kernel size increase:
      
            text     data     bss      dec     hex  filename
        18147336 10226688 2957312 31331336 1de1408  ./vmlinux before
        18162555 10226288 2957312 31346155 1de4deb  ./vmlinux after (+14819)
      
      The number of static text symbols (non-inlined functions) goes down:
      
        Before: 40053
        After:  39942 (-111)
      
      [ mingo: Rewrote the changelog. ]
      Tested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Link: http://lkml.kernel.org/r/20181003213100.189959-8-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      494b5168
    • x86/bug: Macrofy the BUG table section handling, to work around GCC inlining bugs · f81f8ad5
      Committed by Nadav Amit
      As described in:
      
        77b0bf55: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
      
      GCC's inlining heuristics are broken with common asm() patterns used in
      kernel code, resulting in the effective disabling of inlining.
      
      The workaround is to set an assembly macro and call it from the inline
      assembly block. As a result GCC considers the inline assembly block as
      a single instruction. (Which it isn't, but that's the best we can get.)
      
      This patch increases the kernel size:
      
            text     data     bss      dec     hex  filename
        18146889 10225380 2957312 31329581 1de0d2d  ./vmlinux before
        18147336 10226688 2957312 31331336 1de1408  ./vmlinux after (+1755)
      
      But enables more aggressive inlining (and probably better branch decisions).
      
      The number of static text symbols in vmlinux is much lower:
      
       Before: 40218
       After:  40053 (-165)
      
      The assembly code gets harder to read due to the extra macro layer.
      
      [ mingo: Rewrote the changelog. ]
      Tested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20181003213100.189959-7-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f81f8ad5
    • x86/alternatives: Macrofy lock prefixes to work around GCC inlining bugs · 77f48ec2
      Committed by Nadav Amit
      As described in:
      
        77b0bf55: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
      
      GCC's inlining heuristics are broken with common asm() patterns used in
      kernel code, resulting in the effective disabling of inlining.
      
      The workaround is to set an assembly macro and call it from the inline
      assembly block - i.e. to macrify the affected block.
      
      As a result GCC considers the inline assembly block as a single instruction.
      
      This patch handles the LOCK prefix, allowing more aggressive inlining:
      
            text     data     bss      dec     hex  filename
        18140140 10225284 2957312 31322736 1ddf270  ./vmlinux before
        18146889 10225380 2957312 31329581 1de0d2d  ./vmlinux after (+6845)
      
      This is the reduction in non-inlined functions:
      
        Before: 40286
        After:  40218 (-68)
      Tested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20181003213100.189959-6-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      77f48ec2
    • x86/refcount: Work around GCC inlining bug · 9e1725b4
      Committed by Nadav Amit
      As described in:
      
        77b0bf55: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
      
      GCC's inlining heuristics are broken with common asm() patterns used in
      kernel code, resulting in the effective disabling of inlining.
      
      The workaround is to set an assembly macro and call it from the inline
      assembly block. As a result GCC considers the inline assembly block as
      a single instruction. (Which it isn't, but that's the best we can get.)
      
      This patch allows GCC to inline simple functions such as __get_seccomp_filter().
      
      To no-one's surprise the result is that GCC makes more aggressive (read:
      correct) inlining decisions in these scenarios, which reduces the kernel
      size and presumably also speeds it up:
      
            text     data     bss      dec     hex  filename
        18140970 10225412 2957312 31323694 1ddf62e  ./vmlinux before
        18140140 10225284 2957312 31322736 1ddf270  ./vmlinux after (-958)
      
      16 fewer static text symbols:
      
         Before: 40302
          After: 40286 (-16)
      
      These got inlined instead.
      
      Functions such as kref_get(), free_user(), fuse_file_get() now get inlined. Hurray!
      
      [ mingo: Rewrote the changelog. ]
      Tested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20181003213100.189959-5-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9e1725b4
  12. 03 October 2018, 3 commits