1. 04 Oct 2017, 1 commit
    • powerpc/powernv: Implement NMI IPI with OPAL_SIGNAL_SYSTEM_RESET · e36d0a2e
      By Nicholas Piggin
      This allows MSR[EE]=0 lockups to be detected on an OPAL (bare metal)
      system similarly to the hcall NMI IPI on pseries guests, when the
      platform/firmware supports it.
      
      This is an example of CPU10 spinning with interrupts hard disabled:
      
        Watchdog CPU:32 detected Hard LOCKUP other CPUS:10
        Watchdog CPU:10 Hard LOCKUP
        CPU: 10 PID: 4410 Comm: bash Not tainted 4.13.0-rc7-00074-ge89ce1f8-dirty #34
        task: c0000003a82b4400 task.stack: c0000003af55c000
        NIP: c0000000000a7b38 LR: c000000000659044 CTR: c0000000000a7b00
        REGS: c00000000fd23d80 TRAP: 0100   Not tainted  (4.13.0-rc7-00074-ge89ce1f8-dirty)
        MSR: 90000000000c1033 <SF,HV,ME,IR,DR,RI,LE>
        CR: 28422222  XER: 20000000
        CFAR: c0000000000a7b38 SOFTE: 0
        GPR00: c000000000659044 c0000003af55fbb0 c000000001072a00 0000000000000078
        GPR04: c0000003c81b5c80 c0000003c81cc7e8 9000000000009033 0000000000000000
        GPR08: 0000000000000000 c0000000000a7b00 0000000000000001 9000000000001003
        GPR12: c0000000000a7b00 c00000000fd83200 0000000010180df8 0000000010189e60
        GPR16: 0000000010189ed8 0000000010151270 000000001018bd88 000000001018de78
        GPR20: 00000000370a0668 0000000000000001 00000000101645e0 0000000010163c10
        GPR24: 00007fffd14d6294 00007fffd14d6290 c000000000fba6f0 0000000000000004
        GPR28: c000000000f351d8 0000000000000078 c000000000f4095c 0000000000000000
        NIP [c0000000000a7b38] sysrq_handle_xmon+0x38/0x40
        LR [c000000000659044] __handle_sysrq+0xe4/0x270
        Call Trace:
        [c0000003af55fbd0] [c000000000659044] __handle_sysrq+0xe4/0x270
        [c0000003af55fc70] [c000000000659810] write_sysrq_trigger+0x70/0xa0
        [c0000003af55fca0] [c0000000003da650] proc_reg_write+0xb0/0x110
        [c0000003af55fcf0] [c0000000003423bc] __vfs_write+0x6c/0x1b0
        [c0000003af55fd90] [c000000000344398] vfs_write+0xd8/0x240
        [c0000003af55fde0] [c00000000034632c] SyS_write+0x6c/0x110
        [c0000003af55fe30] [c00000000000b220] system_call+0x58/0x6c
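      
      As a rough sketch of the send path this enables (hedged:
      opal_signal_system_reset() and OPAL_SUCCESS come from the commit
      and the OPAL headers; the wrapper and its error handling are
      illustrative):
      
        static int pnv_send_nmi_ipi_sketch(int cpu)
        {
                /* OPAL wants the hardware thread id, not the Linux CPU number */
                int64_t rc = opal_signal_system_reset(get_hard_smp_processor_id(cpu));
      
                return rc == OPAL_SUCCESS ? 0 : -ENODEV;
        }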
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      [mpe: Use kernel types for opal_signal_system_reset()]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 28 Sep 2017, 2 commits
    • cxl: Enable global TLBIs for cxl contexts · 03b8abed
      By Frederic Barrat
      The PSL and nMMU need to see all TLB invalidations for the memory
      contexts used on the adapter. For the hash memory model, this is done
      by making all TLBIs global as soon as the cxl driver is in use. For
      radix, we need something similar, but we can refine it and convert to
      global only the invalidations for contexts actually used by the
      device.
      
      The new mm_context_add_copro() API increments the 'active_cpus' count
      for the contexts attached to the cxl adapter. As soon as there is
      more than one active cpu, the TLBIs for the context become global.
      The active cpu count must be decremented when detaching, to restore
      locality if possible and to avoid overflowing the counter.
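      
      As a toy model of that accounting (a minimal compilable sketch
      assuming a bare atomic counter; the names mirror the commit's API
      but the bodies are illustrations, not the kernel code):
      
        #include <stdatomic.h>
        #include <stdbool.h>
      
        struct mm_ctx { atomic_int active_cpus; };
      
        /* more than one active user of the context forces global TLBIs */
        static bool tlbi_is_global(struct mm_ctx *mm)
        {
                return atomic_load(&mm->active_cpus) > 1;
        }
      
        static void mm_context_add_copro(struct mm_ctx *mm)
        {
                atomic_fetch_add(&mm->active_cpus, 1);
        }
      
        static void mm_context_remove_copro(struct mm_ctx *mm)
        {
                /* lets TLBIs go local again once one user is left (radix only) */
                atomic_fetch_sub(&mm->active_cpus, 1);
        }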
      
      The hash memory model support is somewhat limited, as we can't
      decrement the active cpus count when mm_context_remove_copro() is
      called, because we can't flush the TLB for a mm on hash. So TLBIs
      remain global on hash.
      Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
      Fixes: f24be42a ("cxl: Add psl9 specific code")
      Tested-by: Alistair Popple <alistair@popple.id.au>
      [mpe: Fold in updated comment on the barrier from Fred]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm: Export flush_all_mm() · 6110236b
      By Frederic Barrat
      With the optimizations introduced by commit a46cc7a9
      ("powerpc/mm/radix: Improve TLB/PWC flushes"), flush_tlb_mm() no
      longer flushes the page walk cache (PWC) with radix. This patch
      introduces flush_all_mm(), which flushes everything, TLB and PWC, for
      a given mm.
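      
      A hedged sketch of a call site (flush_all_mm() is the commit's
      export; the helper around it is an assumption):
      
        /* drop both TLB entries and cached page-walk entries for an mm */
        static void drop_all_translations(struct mm_struct *mm)
        {
                flush_all_mm(mm); /* unlike flush_tlb_mm() on radix, also flushes the PWC */
        }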
      Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
      Reviewed-by: Alistair Popple <alistair@popple.id.au>
      [mpe: Add a WARN_ON_ONCE() in the empty hash routines]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  3. 27 Sep 2017, 1 commit
    • powerpc/64s: Add workaround for P9 vector CI load issue · 5080332c
      By Michael Neuling
      POWER9 DD2.1 and earlier have an issue where some cache-inhibited
      vector loads will return bad data. The workaround has two parts: a
      firmware/microcode part triggers HMI interrupts when such loads are
      hit, and the other part is this patch, which then emulates the
      instructions in Linux.
      
      The affected instructions are limited to lxvd2x, lxvw4x, lxvb16x and
      lxvh8x.
      
      When an instruction triggers the HMI, all threads in the core will be
      sent to the HMI handler, not just the one running the vector load.
      
      In general, these spurious HMIs are detected by the emulation code
      and we just return to the running process. Unfortunately, if a
      spurious interrupt occurs on a vector load to normal memory, we have
      no way to detect that it's spurious (unless we walk the page tables,
      which is very expensive). In this case we emulate the load, but we
      need to do so using a vector load itself to ensure 128-bit atomicity
      is preserved.
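      
      To illustrate the atomicity point (a user-space model with GCC
      vector extensions, not the kernel code): a single 16-byte access
      cannot observe a torn quadword, while a byte-wise copy could.
      
        #include <stdint.h>
        #include <string.h>
      
        typedef uint8_t u8x16 __attribute__((vector_size(16)));
      
        static void emulated_ci_load_sketch(void *dst, const void *src)
        {
                u8x16 v = *(const volatile u8x16 *)src; /* one 128-bit load */
                memcpy(dst, &v, sizeof(v));
        }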
      
      Some additional debugfs counters for emulated instructions are also
      added.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      [mpe: Switch CONFIG_PPC_BOOK3S_64 to CONFIG_VSX to unbreak the build]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  4. 26 Sep 2017, 1 commit
    • powerpc/powernv: Rework EEH initialization on powernv · b9fde58d
      By Benjamin Herrenschmidt
      Remove the post_init callback, which is only used by powernv; we can
      just call it explicitly from the powernv code.
      
      This partially kills the ability to "disable" eeh at
      runtime via debugfs as this was calling that same
      callback again, but this is both unused and broken
      in several ways. If we want to revive it, we need
      to create a dedicated enable/disable callback on the
      backend that does the right thing.
      
      Let the bulk of eeh initialize normally at
      core_initcall() like it does on pseries by removing
      the hack in eeh_init() that delays it.
      
      Instead we make sure our eeh->probe cleanly bails out if the PEs
      haven't been created yet, and we force a re-probe where we used to
      call eeh_init() again.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Russell Currey <ruscur@russell.cc>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  5. 09 Sep 2017, 1 commit
    • vga: optimise console scrolling · ac036f95
      By Matthew Wilcox
      Where possible, call memset16(), memmove() or memcpy() instead of using
      open-coded loops.  I don't like the calling convention that uses a byte
      count instead of a count of u16s, but it's a little late to change that.
      Reduces code size of fbcon.o by almost 400 bytes on my laptop build.
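      
      The shape of the rewrite, as a hedged stand-alone example (the
      kernel goes through scr_memcpyw()-style helpers; these names are
      illustrative):
      
        #include <stdint.h>
        #include <string.h>
      
        #define COLS 80
        #define ROWS 25
      
        static void memset16_ref(uint16_t *p, uint16_t v, size_t n)
        {
                while (n--)
                        *p++ = v;
        }
      
        /* scroll one row up: one memmove() plus one fill, no open-coded loops */
        static void scroll_up(uint16_t vram[ROWS][COLS], uint16_t blank)
        {
                memmove(vram[0], vram[1], (ROWS - 1) * COLS * sizeof(uint16_t));
                memset16_ref(vram[ROWS - 1], blank, COLS);
        }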
      
      [akpm@linux-foundation.org: fix build]
      Link: http://lkml.kernel.org/r/20170720184539.31609-9-willy@infradead.org
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Russell King <rmk+kernel@armlinux.org.uk>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 02 Sep 2017, 4 commits
    • powerpc/xive: add XIVE Exploitation Mode to CAS · ac5e5a54
      By Cédric Le Goater
      On POWER9, the Client Architecture Support (CAS) negotiation process
      determines whether the guest operates in XIVE Legacy compatibility or
      in XIVE exploitation mode. Now that we have initial guest support for
      the XIVE interrupt controller, let's inform the hypervisor what we can
      do.
      
      The platform advertises XIVE Exploitation Mode support using the
      property "ibm,arch-vec-5-platform-support-vec-5", byte 23 bits 0-1:
      
       - 0b00 XIVE legacy mode only
       - 0b01 XIVE exploitation mode only
       - 0b10 XIVE legacy or exploitation mode
      
      The OS asks for XIVE Exploitation Mode support using the property
      "ibm,architecture-vec-5", byte 23 bits 0-1:
      
       - 0b00 XIVE legacy mode only
       - 0b01 XIVE exploitation mode only
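      
      Purely to illustrate the encoding (an assumption-heavy sketch: PAPR
      numbers bits MSB-first, so bits 0-1 are the top two bits of the
      byte; the constant and helper here are invented):
      
        #include <stdint.h>
      
        #define XIVE_EXPLOITATION_ONLY 0x1          /* 0b01 */
      
        static void advertise_xive_sketch(uint8_t *vec5_byte23)
        {
                *vec5_byte23 |= (uint8_t)(XIVE_EXPLOITATION_ONLY << 6);
        }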
      Signed-off-by: Cédric Le Goater <clg@kaod.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/xive: introduce H_INT_ESB hcall · bed81ee1
      By Cédric Le Goater
      The H_INT_ESB hcall is used to issue a load or store to the ESB page
      instead of using the MMIO pages. This can be used as a workaround
      for some HW issues. The OS knows that this hcall should be used on
      an interrupt source when the ESB hcall flag is set to 1 in the hcall
      H_INT_GET_SOURCE_INFO.
      
      To maintain the frontier between the xive frontend and backend, we
      introduce a new xive operation 'esb_rw' to be used in the routines
      doing memory accesses on the ESBs.
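      
      The backend hook might be shaped like this (hedged: 'esb_rw' is the
      commit's operation name, the exact prototype is an assumption):
      
        struct xive_ops_sketch {
                /* load (write=false) or store (write=true) to the ESB page
                   of hw_irq; the sPAPR backend would issue H_INT_ESB here */
                u64 (*esb_rw)(u32 hw_irq, u32 offset, u64 data, bool write);
        };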
      Signed-off-by: Cédric Le Goater <clg@kaod.org>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/xive: add the HW IRQ number under xive_irq_data · c58a14a9
      By Cédric Le Goater
      It will be required later by the H_INT_ESB hcall.
      Signed-off-by: Cédric Le Goater <clg@kaod.org>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/xive: guest exploitation of the XIVE interrupt controller · eac1e731
      By Cédric Le Goater
      This is the framework for using XIVE in a PowerVM guest. The support
      is very similar to the native one in a much simpler form.
      
      Each source is associated with an Event State Buffer (ESB). This is
      a two-bit state machine which is used to trigger events. The bits
      are named "P" (pending) and "Q" (queued) and can be controlled by
      MMIO. The guest OS registers event (or notification) queues on which
      the HW will post event data to notify a target.
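      
      A toy model of the two-bit PQ behaviour (the real transitions are
      richer; this only shows how a second trigger is coalesced while one
      event is pending):
      
        #include <stdbool.h>
      
        struct esb { bool p, q; };
      
        /* returns true when a notification should be forwarded */
        static bool esb_trigger(struct esb *s)
        {
                if (!s->p) {
                        s->p = true;    /* first event: notify */
                        return true;
                }
                s->q = true;            /* already pending: queue/coalesce */
                return false;
        }
      
        static bool esb_eoi(struct esb *s)
        {
                s->p = false;
                if (!s->q)
                        return false;
                s->q = false;
                return esb_trigger(s);  /* replay the coalesced event */
        }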
      
      Instead of OPAL calls, a set of hypervisor calls is used to configure
      the interrupt sources and the event/notification queues of the guest:
      
       - H_INT_GET_SOURCE_INFO
      
         used to obtain the address of the MMIO page of the Event State
         Buffer (PQ bits) entry associated with the source.
      
       - H_INT_SET_SOURCE_CONFIG
      
         assigns a source to a "target".
      
       - H_INT_GET_SOURCE_CONFIG
      
         determines which "target" and "priority" a source is assigned to.
      
       - H_INT_GET_QUEUE_INFO
      
         returns the address of the notification management page associated
         with the specified "target" and "priority".
      
       - H_INT_SET_QUEUE_CONFIG
      
         sets or resets the event queue for a given "target" and "priority".
         It is also used to set the notification config associated with the
         queue, only unconditional notification for the moment.  Reset is
         performed with a queue size of 0 and queueing is disabled in that
         case.
      
       - H_INT_GET_QUEUE_CONFIG
      
         returns the queue settings for a given "target" and "priority".
      
       - H_INT_RESET
      
         resets all of the partition's interrupt exploitation structures to
         their initial state, losing all configuration set via the hcalls
         H_INT_SET_SOURCE_CONFIG and H_INT_SET_QUEUE_CONFIG.
      
       - H_INT_SYNC
      
         issues a synchronisation on a source to make sure all
         notifications have reached their queue.
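      
      A hedged wrapper sketch for one of these calls (H_INT_RESET and
      plpar_hcall_norets() exist in the kernel; the helper name and the
      flags value are assumptions):
      
        static int xive_reset_sketch(void)
        {
                long rc = plpar_hcall_norets(H_INT_RESET, 0 /* flags */);
      
                return rc == H_SUCCESS ? 0 : -EIO;
        }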
      
      As for XICS, the XIVE interface for the guest is described in the
      device tree under the "interrupt-controller" node. A couple of new
      properties are specific to XIVE:
      
       - "reg"
      
         contains the base address and size of the thread interrupt
         management areas (TIMA), also called rings, for the User level and
         for the Guest OS level. Only the Guest OS level is taken into
         account today.
      
       - "ibm,xive-eq-sizes"
      
         the size of the event queues. One cell per size supported, contains
         log2 of size, in ascending order.
      
       - "ibm,xive-lisn-ranges"
      
         the interrupt numbers ranges assigned to the guest. These are
         allocated using a simple bitmap.
      
      and also:
      
       - "/ibm,plat-res-int-priorities"
      
         contains a list of priorities that the hypervisor has reserved for
         its own use.
      
      Tested with a QEMU XIVE model for pseries and with the Power hypervisor.
      Signed-off-by: Cédric Le Goater <clg@kaod.org>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  7. 01 Sep 2017, 11 commits
    • crypto/nx: Add P9 NX specific error codes for 842 engine · 146e9f1b
      By Haren Myneni
      This patch adds checks for the P9-specific 842 engine error codes.
      These errors are reported in the coprocessor status block (CSB) on
      failures.
      Signed-off-by: Haren Myneni <haren@us.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/32: add memset16() · da74f659
      By Christophe Leroy
      Commit 694fc88c ("powerpc/string: Implement optimized
      memset variants") added memset16(), memset32() and memset64()
      for 64-bit PPC.
      
      On 32-bit, memset64() is not relevant, and as shown below,
      the generic version of memset32() generates good code, so only
      memset16() is a candidate for an optimised version.
      
      000009c0 <memset32>:
       9c0:   2c 05 00 00     cmpwi   r5,0
       9c4:   39 23 ff fc     addi    r9,r3,-4
       9c8:   4d 82 00 20     beqlr
       9cc:   7c a9 03 a6     mtctr   r5
       9d0:   94 89 00 04     stwu    r4,4(r9)
       9d4:   42 00 ff fc     bdnz    9d0 <memset32+0x10>
       9d8:   4e 80 00 20     blr
      
      The last part of memset(), which handles lengths that are not
      multiples of 4 bytes, operates on bytes, making it unsuitable for
      handling halfwords without modification. As adapting it would
      increase memset()'s complexity, it is better to implement memset16()
      from scratch. This also allows a more optimised memset16() than we
      would get by reusing memset().
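      
      For reference, the semantics being optimised (a portable model of
      memset16(), not the PPC32 assembly this commit adds):
      
        #include <stdint.h>
        #include <stddef.h>
      
        static uint16_t *memset16_ref(uint16_t *s, uint16_t v, size_t count)
        {
                uint16_t *p = s;
      
                while (count--)
                        *p++ = v;
                return s;
        }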
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Emulate load/store floating point as integer word instructions · d2b65ac6
      By Paul Mackerras
      This adds emulation for the lfiwax, lfiwzx and stfiwx instructions.
      This necessitated adding a new flag to indicate whether a floating
      point or an integer conversion was needed for LOAD_FP and STORE_FP,
      so this moves the size field in op->type up 4 bits.
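      
      To illustrate the repacking (all values and names here are invented
      for the example; the real layout lives in sstep.h):
      
        #define LOAD_FP    0x1                /* placeholder instruction class */
        #define FPCONV     (1u << 8)          /* new flag: integer<->FP convert */
        #define SIZE_SHIFT 12                 /* size field, moved up 4 bits */
        #define GETSIZE(t) (((t) >> SIZE_SHIFT) & 0xf)
      
        /* e.g. lfiwax: a 4-byte FP load that needs an integer conversion */
        static const unsigned int op_type = LOAD_FP | FPCONV | (4u << SIZE_SHIFT);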
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Separate out load/store emulation into its own function · a53d5182
      By Paul Mackerras
      This moves the parts of emulate_step() that deal with emulating
      load and store instructions into a new function called
      emulate_loadstore().  This is to make it possible to reuse this
      code in the alignment handler.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Handle opposite-endian processes in emulation code · d955189a
      By Paul Mackerras
      This adds code to the load and store emulation code to byte-swap
      the data appropriately when the process being emulated is set to
      the opposite endianness to that of the kernel.
      
      This also enables the emulation for the multiple-register loads
      and stores (lmw, stmw, lswi, stswi, lswx, stswx) to work for
      little-endian.  In little-endian mode, the partial word at the
      end of a transfer for lsw*/stsw* (when the byte count is not a
      multiple of 4) is loaded/stored at the least-significant end of
      the register.  Additionally, this fixes a bug in the previous
      code in that it could call read_mem/write_mem with a byte count
      that was not 1, 2, 4 or 8.
      
      Note that this only works correctly on processors with "true"
      little-endian mode, such as IBM POWER processors from POWER6 on, not
      the so-called "PowerPC" little-endian mode that uses address swizzling
      as implemented on the old 32-bit 603, 604, 740/750, 74xx CPUs.
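      
      The fixup reduces to something like this (a minimal sketch; the
      helper name and calling convention are assumptions):
      
        #include <stdbool.h>
        #include <stdint.h>
      
        static uint64_t byterev_if_cross_endian(uint64_t val, int nb, bool cross)
        {
                if (!cross)
                        return val;
                switch (nb) {
                case 2: return __builtin_bswap16(val);
                case 4: return __builtin_bswap32(val);
                case 8: return __builtin_bswap64(val);
                }
                return val;     /* nb == 1: single bytes need no swap */
        }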
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Emulate the dcbz instruction · b2543f7b
      By Paul Mackerras
      This adds code to analyse_instr() and emulate_step() to understand the
      dcbz (data cache block zero) instruction.  The emulate_dcbz() function
      is made public so it can be used by the alignment handler in future.
      (The apparently unnecessary cropping of the address to 32 bits is
      there because it will be needed in that situation.)
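      
      The core of the emulation, modeled on a plain buffer (block size and
      names are placeholders; the kernel uses the per-CPU cache block size
      and the real effective address):
      
        #include <stdbool.h>
        #include <stdint.h>
        #include <string.h>
      
        #define BLOCK_BYTES 128
      
        static void emulate_dcbz_sketch(uint8_t *ram, uint64_t ea, bool mode_32bit)
        {
                if (mode_32bit)
                        ea &= 0xffffffffull;            /* crop EA to 32 bits */
                ea &= ~(uint64_t)(BLOCK_BYTES - 1);     /* align down to the block */
                memset(ram + ea, 0, BLOCK_BYTES);
        }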
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Emulate FP/vector/VSX loads/stores correctly when regs not live · c22435a5
      By Paul Mackerras
      At present, the analyse_instr/emulate_step code checks for the
      relevant MSR_FP/VEC/VSX bit being set when a FP/VMX/VSX load
      or store is decoded, but doesn't recheck the bit before reading or
      writing the relevant FP/VMX/VSX register in emulate_step().
      
      Since we don't have preemption disabled, it is possible that we get
      preempted between checking the MSR bit and doing the register access.
      If that happened, then the registers would have been saved to the
      thread_struct for the current process.  Accesses to the CPU registers
      would then potentially read stale values, or write values that would
      never be seen by the user process.
      
      Another way that the registers can become non-live is if a page
      fault occurs when accessing user memory, and the page fault code
      calls a copy routine that wants to use the VMX or VSX registers.
      
      To fix this, the code for all the FP/VMX/VSX loads gets restructured
      so that it forms an image in a local variable of the desired register
      contents, then disables preemption, checks the MSR bit and either
      sets the CPU register or writes the value to the thread struct.
      Similarly, the code for stores checks the MSR bit, copies either the
      CPU register or the thread struct to a local variable, then reenables
      preemption and then copies the register image to memory.
      
      If the instruction being emulated is in the kernel, then we must not
      use the register values in the thread_struct.  In this case, if the
      relevant MSR enable bit is not set, then emulate_step refuses to
      emulate the instruction.
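      
      The load-side pattern, as a toy model (preempt_disable()/enable()
      are stubbed; every other name is a placeholder for the kernel's
      real state):
      
        #include <stdbool.h>
        #include <stdint.h>
        #include <string.h>
      
        static void preempt_disable(void) { /* kernel primitive, stubbed */ }
        static void preempt_enable(void)  { /* kernel primitive, stubbed */ }
      
        struct ctx { bool fp_regs_live; uint64_t fp_reg; uint64_t saved_fp_reg; };
      
        static void emulate_fp_load_sketch(struct ctx *c, const void *mem)
        {
                uint64_t image;
      
                memcpy(&image, mem, sizeof(image)); /* may fault: preemption still on */
      
                preempt_disable();                  /* regs can't go stale from here */
                if (c->fp_regs_live)
                        c->fp_reg = image;          /* write the live CPU register */
                else
                        c->saved_fp_reg = image;    /* write the thread_struct copy */
                preempt_enable();
        }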
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64: Fix update forms of loads and stores to write 64-bit EA · d120cdbc
      By Paul Mackerras
      When a 64-bit processor is executing in 32-bit mode, the update forms
      of load and store instructions are required by the architecture to
      write the full 64-bit effective address into the RA register, though
      only the bottom 32 bits are used to address memory.  Currently,
      the instruction emulation code writes the truncated address to the
      RA register.  This fixes it by keeping the full 64-bit EA in the
      instruction_op structure, truncating the address in emulate_step()
      where it is used to address memory, rather than in the address
      computations in analyse_instr().
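      
      The shape of the fix (hedged: the kernel's truncate_if_32bit()
      helper takes the MSR rather than a flag):
      
        #include <stdint.h>
      
        static uint64_t truncate_if_32bit_sketch(uint64_t ea, int mode_32bit)
        {
                return mode_32bit ? (ea & 0xffffffffull) : ea;
        }
      
        /* op->ea keeps all 64 bits for the RA update; only the memory
           access goes through truncate_if_32bit_sketch() */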
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Handle most loads and stores in instruction emulation code · 350779a2
      By Paul Mackerras
      This extends the instruction emulation infrastructure in sstep.c to
      handle all the load and store instructions defined in the Power ISA
      v3.0, except for the atomic memory operations, ldmx (which was never
      implemented), lfdp/stfdp, and the vector element load/stores.
      
      The instructions added are:
      
      Integer loads and stores: lbarx, lharx, lqarx, stbcx., sthcx., stqcx.,
      lq, stq.
      
      VSX loads and stores: lxsiwzx, lxsiwax, stxsiwx, lxvx, lxvl, lxvll,
      lxvdsx, lxvwsx, stxvx, stxvl, stxvll, lxsspx, lxsdx, stxsspx, stxsdx,
      lxvw4x, lxsibzx, lxvh8x, lxsihzx, lxvb16x, stxvw4x, stxsibx, stxvh8x,
      stxsihx, stxvb16x, lxsd, lxssp, lxv, stxsd, stxssp, stxv.
      
      These instructions are handled both in the analyse_instr phase and in
      the emulate_step phase.
      
      The code for lxvd2ux and stxvd2ux has been removed, as those
      instructions were never implemented in any processor, have been
      dropped from the architecture, and their opcodes have been reused
      for other instructions in POWER9 (lxvb16x and stxvb16x).
      
      The emulation for the VSX loads and stores uses helper functions
      which don't access registers or memory directly, which can hopefully
      be reused by KVM later.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Change analyse_instr so it doesn't modify *regs · 3cdfcbfd
      By Paul Mackerras
      The analyse_instr function currently doesn't just work out what an
      instruction does; it also executes those instructions whose only
      effect is to update CPU registers that are stored in struct pt_regs.
      This is undesirable because optprobes uses analyse_instr to work out
      whether an instruction could be successfully emulated in future.
      
      This changes analyse_instr so it doesn't modify *regs; instead it
      stores information in the instruction_op structure to indicate what
      registers (GPRs, CR, XER, LR) would be set and what value they would
      be set to.  A companion function called emulate_update_regs() can
      then use that information to update a pt_regs struct appropriately.
      
      As a minor cleanup, this replaces inline asm using the cntlzw and
      cntlzd instructions with calls to __builtin_clz() and __builtin_clzl().
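      
      The cleanup from the last paragraph, illustrated (note that
      __builtin_clz(0) is undefined, so zero needs a special case):
      
        static int cntlzw_sketch(unsigned int x)
        {
                return x ? __builtin_clz(x) : 32;
        }
      
        static int cntlzd_sketch(unsigned long x)
        {
                return x ? __builtin_clzl(x) : 64;
        }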
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: update to new mmu_notifier semantic v2 · fb1522e0
      By Jérôme Glisse
      Calls to mmu_notifier_invalidate_page() were replaced by calls to
      mmu_notifier_invalidate_range() and are now bracketed by calls to
      mmu_notifier_invalidate_range_start()/end().
      
      Remove the now-useless invalidate_page callback.
      
      Changed since v1 (Linus Torvalds)
          - remove now useless kvm_arch_mmu_notifier_invalidate_page()
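      
      The new calling convention, in outline (the mm/start/end signatures
      match this kernel; the surrounding code is elided):
      
        mmu_notifier_invalidate_range_start(mm, start, end);
        /* ... clear or modify the page table entries for [start, end) ... */
        mmu_notifier_invalidate_range(mm, start, end);
        mmu_notifier_invalidate_range_end(mm, start, end);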
      Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Tested-by: Mike Galbraith <efault@gmx.de>
      Tested-by: Adam Borowski <kilobyte@angband.pl>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: kvm@vger.kernel.org
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 31 Aug 2017, 19 commits