Commit f83f7151 authored by David S. Miller

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Minor comment merge conflict in mlx5.

A staging driver has a fixup due to the skb->xmit_more changes
in 'net-next', but that driver was removed in 'net'.
Signed-off-by: David S. Miller <davem@davemloft.net>
@@ -156,6 +156,8 @@ Morten Welinder <welinder@darter.rentec.com>
 Morten Welinder <welinder@troll.com>
 Mythri P K <mythripk@ti.com>
 Nguyen Anh Quynh <aquynh@gmail.com>
+Nicolas Pitre <nico@fluxnic.net> <nicolas.pitre@linaro.org>
+Nicolas Pitre <nico@fluxnic.net> <nico@linaro.org>
 Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
 Patrick Mochel <mochel@digitalimplant.org>
 Paul Burton <paul.burton@mips.com> <paul.burton@imgtec.com>
@@ -224,3 +226,5 @@ Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com>
 Yusuke Goda <goda.yusuke@renesas.com>
 Gustavo Padovan <gustavo@las.ic.unicamp.br>
 Gustavo Padovan <padovan@profusion.mobi>
+Changbin Du <changbin.du@intel.com> <changbin.du@intel.com>
+Changbin Du <changbin.du@intel.com> <changbin.du@gmail.com>
@@ -148,16 +148,16 @@ The ``btf_type.size * 8`` must be equal to or greater than ``BTF_INT_BITS()``
 for the type. The maximum value of ``BTF_INT_BITS()`` is 128.

 The ``BTF_INT_OFFSET()`` specifies the starting bit offset to calculate values
-for this int. For example, a bitfield struct member has: * btf member bit
-offset 100 from the start of the structure, * btf member pointing to an int
-type, * the int type has ``BTF_INT_OFFSET() = 2`` and ``BTF_INT_BITS() = 4``
+for this int. For example, a bitfield struct member has:
+ * btf member bit offset 100 from the start of the structure,
+ * btf member pointing to an int type,
+ * the int type has ``BTF_INT_OFFSET() = 2`` and ``BTF_INT_BITS() = 4``

 Then in the struct memory layout, this member will occupy ``4`` bits starting
 from bits ``100 + 2 = 102``.

 Alternatively, the bitfield struct member can be the following to access the
 same bits as the above:
  * btf member bit offset 102,
  * btf member pointing to an int type,
  * the int type has ``BTF_INT_OFFSET() = 0`` and ``BTF_INT_BITS() = 4``
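For a concrete feel for the encoding, here is a small user-space sketch
(an editorial example, not part of the patch; the mask values follow
include/uapi/linux/btf.h) that decodes the two equivalent encodings above:

.. code:: c

    #include <stdint.h>
    #include <stdio.h>

    /* field decoding macros, as defined in include/uapi/linux/btf.h */
    #define BTF_INT_OFFSET(VAL) (((VAL) & 0x00ff0000) >> 16)
    #define BTF_INT_BITS(VAL)   ((VAL) & 0x000000ff)

    int main(void)
    {
        uint32_t enc1 = (2 << 16) | 4; /* BTF_INT_OFFSET()=2, BTF_INT_BITS()=4 */
        uint32_t enc2 = (0 << 16) | 4; /* BTF_INT_OFFSET()=0, BTF_INT_BITS()=4 */

        /* member bit offset 100 plus int offset 2 = bit 102 */
        printf("enc1: data at struct bit %u, %u bits wide\n",
               100 + BTF_INT_OFFSET(enc1), BTF_INT_BITS(enc1));
        /* member bit offset 102 plus int offset 0 = the same bit 102 */
        printf("enc2: data at struct bit %u, %u bits wide\n",
               102 + BTF_INT_OFFSET(enc2), BTF_INT_BITS(enc2));
        return 0;
    }

Both encodings print the same starting bit (102) and width (4), matching the
equivalence described above.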
......
@@ -26,7 +26,7 @@ Required node properties:

 Optional node properties:

-- ti,mode: Operation mode (see above).
+- ti,mode: Operation mode (u8) (see above).

 Example (operation mode 2):
@@ -34,5 +34,5 @@ Example (operation mode 2):
 adc128d818@1d {
 	compatible = "ti,adc128d818";
 	reg = <0x1d>;
-	ti,mode = <2>;
+	ti,mode = /bits/ 8 <2>;
 };
@@ -16,6 +16,7 @@ Required properties:
   * "mediatek,mt8127-uart" for MT8127 compatible UARTS
   * "mediatek,mt8135-uart" for MT8135 compatible UARTS
   * "mediatek,mt8173-uart" for MT8173 compatible UARTS
+  * "mediatek,mt8183-uart", "mediatek,mt6577-uart" for MT8183 compatible UARTS
   * "mediatek,mt6577-uart" for MT6577 and all of the above
 - reg: The base address of the UART register bank.
......
@@ -36,6 +36,7 @@ Supported adapters:
   * Intel Cannon Lake (PCH)
   * Intel Cedar Fork (PCH)
   * Intel Ice Lake (PCH)
+  * Intel Comet Lake (PCH)
    Datasheets: Publicly available at the Intel website

 On Intel Patsburg and later chipsets, both the normal host SMBus controller
......
.. SPDX-License-Identifier: GPL-2.0
==================
BPF Flow Dissector
==================
Overview
========
Flow dissector is a routine that parses metadata out of packets. It's
used in various places in the networking subsystem (RFS, flow hash, etc).
The BPF flow dissector is an attempt to reimplement the C-based flow
dissector logic in BPF to gain all the benefits of the BPF verifier
(namely, limits on the number of instructions and tail calls).
API
===
BPF flow dissector programs operate on an ``__sk_buff``. However, only a
limited set of fields is accessible: ``data``, ``data_end`` and ``flow_keys``.
``flow_keys`` is ``struct bpf_flow_keys`` and contains flow dissector input
and output arguments.
The inputs are:
* ``nhoff`` - initial offset of the networking header
* ``thoff`` - initial offset of the transport header, initialized to nhoff
* ``n_proto`` - L3 protocol type, parsed out of L2 header
The flow dissector BPF program should fill out the rest of the ``struct
bpf_flow_keys`` fields, and the input arguments ``nhoff/thoff/n_proto``
should also be adjusted accordingly.
The return code of the BPF program is either BPF_OK to indicate successful
dissection, or BPF_DROP to indicate parsing error.
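As an illustration, a minimal dissector handling only plain IPv4 could look
like the sketch below (an editorial example, not the kernel's reference
program; it assumes the helper headers shipped with the BPF selftests):

.. code:: c

    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include "bpf_helpers.h"   /* SEC() helper macro from selftests */
    #include "bpf_endian.h"    /* bpf_htons() from selftests */

    SEC("flow_dissector")
    int ipv4_dissect(struct __sk_buff *skb)
    {
        struct bpf_flow_keys *keys = skb->flow_keys;
        void *data = (void *)(long)skb->data;
        void *data_end = (void *)(long)skb->data_end;
        struct iphdr *iph = data + keys->nhoff;

        if (keys->n_proto != bpf_htons(ETH_P_IP))
            return BPF_DROP;              /* unsupported L3 protocol */
        if ((void *)(iph + 1) > data_end)
            return BPF_DROP;              /* truncated header */

        keys->addr_proto = ETH_P_IP;
        keys->ip_proto = iph->protocol;
        keys->ipv4_src = iph->saddr;
        keys->ipv4_dst = iph->daddr;
        keys->thoff = keys->nhoff + iph->ihl * 4;
        return BPF_OK;
    }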
__sk_buff->data
===============
In the VLAN-less case, this is what the initial state of the BPF flow
dissector looks like::
+------+------+------------+-----------+
| DMAC | SMAC | ETHER_TYPE | L3_HEADER |
+------+------+------------+-----------+
^
|
+-- flow dissector starts here
.. code:: c
skb->data + flow_keys->nhoff points to the first byte of L3_HEADER
flow_keys->thoff = nhoff
flow_keys->n_proto = ETHER_TYPE
In case of VLAN, the flow dissector can be called in two different states.
Pre-VLAN parsing::
+------+------+------+-----+-----------+-----------+
| DMAC | SMAC | TPID | TCI |ETHER_TYPE | L3_HEADER |
+------+------+------+-----+-----------+-----------+
^
|
+-- flow dissector starts here
.. code:: c
skb->data + flow_keys->nhoff points to the first byte of TCI
flow_keys->thoff = nhoff
flow_keys->n_proto = TPID
Please note that the TPID can be 802.1AD and, hence, the BPF program would
have to parse VLAN information twice for double tagged packets.
Post-VLAN parsing::
+------+------+------+-----+-----------+-----------+
| DMAC | SMAC | TPID | TCI |ETHER_TYPE | L3_HEADER |
+------+------+------+-----+-----------+-----------+
^
|
+-- flow dissector starts here
.. code:: c
skb->data + flow_keys->nhoff points to the first byte of L3_HEADER
flow_keys->thoff = nhoff
flow_keys->n_proto = ETHER_TYPE
In this case VLAN information has been processed before the flow dissector
and the BPF flow dissector is not required to handle it.

The takeaway here is as follows: the BPF flow dissector program can be
called with an optional VLAN header, so it should gracefully handle the
cases where a single or double VLAN tag is present and where none is.
The same program is called for all these cases and must be written
carefully to handle each of them.
Reference Implementation
========================
See ``tools/testing/selftests/bpf/progs/bpf_flow.c`` for the reference
implementation and ``tools/testing/selftests/bpf/flow_dissector_load.[hc]``
for the loader. bpftool can be used to load the BPF flow dissector program as well.
The reference implementation is organized as follows:
* ``jmp_table`` map that contains sub-programs for each supported L3 protocol
* ``_dissect`` routine - entry point; it does input ``n_proto`` parsing and
does ``bpf_tail_call`` to the appropriate L3 handler
Since BPF at this point doesn't support looping (or any jumping back),
jmp_table is used instead to handle multiple levels of encapsulation (and
IPv6 options).
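The dispatch pattern looks roughly like this (a sketch following the
selftests style of the time; the map layout and slot indices are
illustrative, not taken from the patch):

.. code:: c

    struct bpf_map_def SEC("maps") jmp_table = {
        .type = BPF_MAP_TYPE_PROG_ARRAY,
        .key_size = sizeof(__u32),
        .value_size = sizeof(__u32),
        .max_entries = 8,
    };

    SEC("flow_dissector")
    int _dissect(struct __sk_buff *skb)
    {
        struct bpf_flow_keys *keys = skb->flow_keys;

        if (keys->n_proto == bpf_htons(ETH_P_IP))
            bpf_tail_call(skb, &jmp_table, 0);  /* IPv4 handler slot */
        else if (keys->n_proto == bpf_htons(ETH_P_IPV6))
            bpf_tail_call(skb, &jmp_table, 1);  /* IPv6 handler slot */

        /* reached only if there was no program to tail-call into */
        return BPF_DROP;
    }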
Current Limitations
===================
BPF flow dissector doesn't support exporting all the metadata that the
in-kernel C-based implementation can export. A notable example is the single
VLAN (802.1Q) and double VLAN (802.1AD) tags. Please refer to ``struct
bpf_flow_keys`` for the set of information that can currently be exported
from the BPF context.
@@ -9,6 +9,7 @@ Contents:
    netdev-FAQ
    af_xdp
    batman-adv
+   bpf_flow_dissector
    can
    can_ucan_protocol
    device_drivers/freescale/dpaa2/index
......
@@ -5,25 +5,32 @@ The Definitive KVM (Kernel-based Virtual Machine) API Documentation
 ----------------------

 The kvm API is a set of ioctls that are issued to control various aspects
-of a virtual machine. The ioctls belong to three classes
+of a virtual machine. The ioctls belong to three classes:

  - System ioctls: These query and set global attributes which affect the
    whole kvm subsystem. In addition a system ioctl is used to create
-   virtual machines
+   virtual machines.

  - VM ioctls: These query and set attributes that affect an entire virtual
    machine, for example memory layout. In addition a VM ioctl is used to
-   create virtual cpus (vcpus).
+   create virtual cpus (vcpus) and devices.

-   Only run VM ioctls from the same process (address space) that was used
-   to create the VM.
+   VM ioctls must be issued from the same process (address space) that was
+   used to create the VM.

  - vcpu ioctls: These query and set attributes that control the operation
    of a single virtual cpu.

-   Only run vcpu ioctls from the same thread that was used to create the
-   vcpu.
+   vcpu ioctls should be issued from the same thread that was used to create
+   the vcpu, except for asynchronous vcpu ioctls that are marked as such in
+   the documentation. Otherwise, the first ioctl after switching threads
+   could see a performance impact.

+ - device ioctls: These query and set attributes that control the operation
+   of a single device.
+
+   device ioctls must be issued from the same process (address space) that
+   was used to create the VM.

 2. File descriptors
 -------------------
@@ -32,17 +39,34 @@ The kvm API is centered around file descriptors. An initial
 open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
 can be used to issue system ioctls. A KVM_CREATE_VM ioctl on this
 handle will create a VM file descriptor which can be used to issue VM
-ioctls. A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
-and return a file descriptor pointing to it. Finally, ioctls on a vcpu
-fd can be used to control the vcpu, including the important task of
-actually running guest code.
+ioctls. A KVM_CREATE_VCPU or KVM_CREATE_DEVICE ioctl on a VM fd will
+create a virtual cpu or device and return a file descriptor pointing to
+the new resource. Finally, ioctls on a vcpu or device fd can be used
+to control the vcpu or device. For vcpus, this includes the important
+task of actually running guest code.

 In general file descriptors can be migrated among processes by means
 of fork() and the SCM_RIGHTS facility of unix domain socket. These
 kinds of tricks are explicitly not supported by kvm. While they will
 not cause harm to the host, their actual behavior is not guaranteed by
-the API. The only supported use is one virtual machine per process,
-and one vcpu per thread.
+the API. See "General description" for details on the ioctl usage
+model that is supported by KVM.

+It is important to note that although VM ioctls may only be issued from
+the process that created the VM, a VM's lifecycle is associated with its
+file descriptor, not its creator (process). In other words, the VM and
+its resources, *including the associated address space*, are not freed
+until the last reference to the VM's file descriptor has been released.
+For example, if fork() is issued after ioctl(KVM_CREATE_VM), the VM will
+not be freed until both the parent (original) process and its child have
+put their references to the VM's file descriptor.
+
+Because a VM's resources are not freed until the last reference to its
+file descriptor is released, creating additional references to a VM via
+fork(), dup(), etc... without careful consideration is strongly
+discouraged and may have unwanted side effects, e.g. memory allocated
+by and on behalf of the VM's process may not be freed/unaccounted when
+the VM is shut down.
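To make the descriptor hierarchy concrete, a bare-bones user-space sketch
(editorial illustration only; real code needs error handling, guest memory
setup and the KVM_RUN mmap dance) looks like this:

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int main(void)
	{
		int kvm  = open("/dev/kvm", O_RDWR);       /* system fd */
		int vm   = ioctl(kvm, KVM_CREATE_VM, 0);   /* VM fd     */
		int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);  /* vcpu fd   */

		/* vcpu ioctls such as KVM_RUN must now be issued on
		 * 'vcpu', ideally always from the same thread. */
		return 0;
	}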
@@ -515,11 +539,15 @@ c) KVM_INTERRUPT_SET_LEVEL

 Note that any value for 'irq' other than the ones stated above is invalid
 and incurs unexpected behavior.

+This is an asynchronous vcpu ioctl and can be invoked from any thread.
+
 MIPS:

 Queues an external interrupt to be injected into the virtual CPU. A negative
 interrupt number dequeues the interrupt.

+This is an asynchronous vcpu ioctl and can be invoked from any thread.

 4.17 KVM_DEBUG_GUEST
@@ -1086,14 +1114,12 @@ struct kvm_userspace_memory_region {
 #define KVM_MEM_LOG_DIRTY_PAGES	(1UL << 0)
 #define KVM_MEM_READONLY	(1UL << 1)

-This ioctl allows the user to create or modify a guest physical memory
-slot. When changing an existing slot, it may be moved in the guest
-physical memory space, or its flags may be modified. It may not be
-resized. Slots may not overlap in guest physical address space.
-Bits 0-15 of "slot" specifies the slot id and this value should be
-less than the maximum number of user memory slots supported per VM.
-The maximum allowed slots can be queried using KVM_CAP_NR_MEMSLOTS,
-if this capability is supported by the architecture.
+This ioctl allows the user to create, modify or delete a guest physical
+memory slot. Bits 0-15 of "slot" specify the slot id and this value
+should be less than the maximum number of user memory slots supported per
+VM. The maximum allowed slots can be queried using KVM_CAP_NR_MEMSLOTS,
+if this capability is supported by the architecture. Slots may not
+overlap in guest physical address space.

 If KVM_CAP_MULTI_ADDRESS_SPACE is available, bits 16-31 of "slot"
 specifies the address space which is being modified. They must be
@@ -1102,6 +1128,10 @@ KVM_CAP_MULTI_ADDRESS_SPACE capability. Slots in separate address spaces
 are unrelated; the restriction on overlapping slots only applies within
 each address space.

+Deleting a slot is done by passing zero for memory_size. When changing
+an existing slot, it may be moved in the guest physical memory space,
+or its flags may be modified, but it may not be resized.
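For instance, a hypothetical user-space sequence that creates a slot and
later deletes it could look like this (sketch only; 'vm' is a VM fd and
'mem' is page-aligned user memory):

	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.guest_phys_addr = 0x100000,
		.memory_size = 0x10000,
		.userspace_addr = (__u64)mem,
	};

	ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);  /* create */

	region.memory_size = 0;           /* delete: zero memory_size */
	ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);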
 Memory for the region is taken starting at the address denoted by the
 field userspace_addr, which must point at user addressable memory for
 the entire memory slot size. Any object may back this memory, including
@@ -2493,7 +2523,7 @@ KVM_S390_MCHK (vm, vcpu) - machine check interrupt; cr 14 bits in parm,
 			   machine checks needing further payload are not
 			   supported by this ioctl)

-Note that the vcpu ioctl is asynchronous to vcpu execution.
+This is an asynchronous vcpu ioctl and can be invoked from any thread.

 4.78 KVM_PPC_GET_HTAB_FD
@@ -3042,8 +3072,7 @@ KVM_S390_INT_EMERGENCY - sigp emergency; parameters in .emerg
 KVM_S390_INT_EXTERNAL_CALL - sigp external call; parameters in .extcall
 KVM_S390_MCHK - machine check interrupt; parameters in .mchk

-Note that the vcpu ioctl is asynchronous to vcpu execution.
+This is an asynchronous vcpu ioctl and can be invoked from any thread.

 4.94 KVM_S390_GET_IRQ_STATE
......
@@ -142,7 +142,7 @@ Shadow pages contain the following information:
     If clear, this page corresponds to a guest page table denoted by the gfn
     field.
   role.quadrant:
-    When role.cr4_pae=0, the guest uses 32-bit gptes while the host uses 64-bit
+    When role.gpte_is_8_bytes=0, the guest uses 32-bit gptes while the host uses 64-bit
     sptes. That means a guest page table contains more ptes than the host,
     so multiple shadow pages are needed to shadow one guest page.
     For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
@@ -158,9 +158,9 @@ Shadow pages contain the following information:
     The page is invalid and should not be used. It is a root page that is
     currently pinned (by a cpu hardware register pointing to it); once it is
     unpinned it will be destroyed.
-  role.cr4_pae:
-    Contains the value of cr4.pae for which the page is valid (e.g. whether
-    32-bit or 64-bit gptes are in use).
+  role.gpte_is_8_bytes:
+    Reflects the size of the guest PTE for which the page is valid, i.e. '1'
+    if 64-bit gptes are in use, '0' if 32-bit gptes are in use.
   role.nxe:
     Contains the value of efer.nxe for which the page is valid.
   role.cr0_wp:
@@ -173,6 +173,9 @@ Shadow pages contain the following information:
     Contains the value of cr4.smap && !cr0.wp for which the page is valid
     (pages for which this is true are different from other pages; see the
     treatment of cr0.wp=0 below).
+  role.ept_sp:
+    This is a virtual flag to denote a shadowed nested EPT page. ept_sp
+    is true if "cr0_wp && smap_andnot_wp", an otherwise invalid combination.
   role.smm:
     Is 1 if the page is valid in system management mode. This field
     determines which of the kvm_memslots array was used to build this
......
@@ -2356,7 +2356,7 @@ F:	arch/arm/mm/cache-uniphier.c
 F:	arch/arm64/boot/dts/socionext/uniphier*
 F:	drivers/bus/uniphier-system-bus.c
 F:	drivers/clk/uniphier/
-F:	drivers/dmaengine/uniphier-mdmac.c
+F:	drivers/dma/uniphier-mdmac.c
 F:	drivers/gpio/gpio-uniphier.c
 F:	drivers/i2c/busses/i2c-uniphier*
 F:	drivers/irqchip/irq-uniphier-aidet.c
@@ -4132,7 +4132,7 @@ F:	drivers/cpuidle/*
 F:	include/linux/cpuidle.h

 CRAMFS FILESYSTEM
-M:	Nicolas Pitre <nico@linaro.org>
+M:	Nicolas Pitre <nico@fluxnic.net>
 S:	Maintained
 F:	Documentation/filesystems/cramfs.txt
 F:	fs/cramfs/
@@ -5836,7 +5836,7 @@ L:	netdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/ABI/testing/sysfs-bus-mdio
 F:	Documentation/devicetree/bindings/net/mdio*
-F:	Documentation/networking/phy.txt
+F:	Documentation/networking/phy.rst
 F:	drivers/net/phy/
 F:	drivers/of/of_mdio.c
 F:	drivers/of/of_net.c
@@ -6411,7 +6411,6 @@ L:	linux-kernel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking/core
 S:	Maintained
 F:	kernel/futex.c
-F:	kernel/futex_compat.c
 F:	include/asm-generic/futex.h
 F:	include/linux/futex.h
 F:	include/uapi/linux/futex.h
@@ -13976,7 +13975,7 @@ F:	drivers/media/rc/serial_ir.c
 SFC NETWORK DRIVER
 M:	Solarflare linux maintainers <linux-net-drivers@solarflare.com>
 M:	Edward Cree <ecree@solarflare.com>
-M:	Bert Kenward <bkenward@solarflare.com>
+M:	Martin Habets <mhabets@solarflare.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/sfc/
......
@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 1
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
 NAME = Shy Crocodile

 # *DOCUMENTATION*
@@ -31,26 +31,12 @@ _all:
 # descending is started. They are now explicitly listed as the
 # prepare rule.

-# Ugly workaround for Debian make-kpkg:
-# make-kpkg directly includes the top Makefile of Linux kernel. In such a case,
-# skip sub-make to support debian_* targets in ruleset/kernel_version.mk, but
-# displays warning to discourage such abusage.
-ifneq ($(word 2, $(MAKEFILE_LIST)),)
-$(warning Do not include top Makefile of Linux Kernel)
-sub-make-done := 1
-MAKEFLAGS += -rR
-endif
-
-ifneq ($(sub-make-done),1)
+ifneq ($(sub_make_done),1)

 # Do not use make's built-in rules and variables
 # (this increases performance and avoids hard-to-debug behaviour)
 MAKEFLAGS += -rR

-# 'MAKEFLAGS += -rR' does not become immediately effective for old
-# GNU Make versions. Cancel implicit rules for this Makefile.
-$(lastword $(MAKEFILE_LIST)): ;
-
 # Avoid funny character set dependencies
 unexport LC_ALL
 LC_COLLATE=C
@@ -153,6 +139,7 @@ $(if $(KBUILD_OUTPUT),, \
 # 'sub-make' below.
 MAKEFLAGS += --include-dir=$(CURDIR)

+need-sub-make := 1
 else

 # Do not print "Entering directory ..." at all for in-tree build.
@@ -160,6 +147,18 @@ MAKEFLAGS += --no-print-directory
 endif # ifneq ($(KBUILD_OUTPUT),)

+ifneq ($(filter 3.%,$(MAKE_VERSION)),)
+# 'MAKEFLAGS += -rR' does not immediately become effective for GNU Make 3.x
+# We need to invoke sub-make to avoid implicit rules in the top Makefile.
+need-sub-make := 1
+# Cancel implicit rules for this Makefile.
+$(lastword $(MAKEFILE_LIST)): ;
+endif
+
+export sub_make_done := 1
+
+ifeq ($(need-sub-make),1)
+
 PHONY += $(MAKECMDGOALS) sub-make

 $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
@@ -167,12 +166,15 @@ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make

 # Invoke a second make in the output directory, passing relevant variables
 sub-make:
-	$(Q)$(MAKE) sub-make-done=1 \
+	$(Q)$(MAKE) \
	$(if $(KBUILD_OUTPUT),-C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR)) \
	-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))

-else # sub-make-done
+endif # need-sub-make
+endif # sub_make_done

 # We process the rest of the Makefile if this is the final invocation of make
+ifeq ($(need-sub-make),)

 # Do not print "Entering directory ...",
 # but we want to display it when entering to the output directory
@@ -497,7 +499,8 @@ outputmakefile:
 ifneq ($(KBUILD_SRC),)
	$(Q)ln -fsn $(srctree) source
	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkmakefile $(srctree)
-	$(Q){ echo "# this is build directory, ignore it"; echo "*"; } > .gitignore
+	$(Q)test -e .gitignore || \
+	{ echo "# this is build directory, ignore it"; echo "*"; } > .gitignore
 endif

 ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
@@ -677,7 +680,7 @@ KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow)
 KBUILD_CFLAGS += $(call cc-disable-warning, int-in-bool-context)

 ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
-KBUILD_CFLAGS	+= $(call cc-option,-Oz,-Os)
+KBUILD_CFLAGS	+= -Os
 else
 KBUILD_CFLAGS   += -O2
 endif
@@ -950,9 +953,11 @@ mod_sign_cmd = true
 endif
 export mod_sign_cmd

+HOST_LIBELF_LIBS = $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
+
 ifdef CONFIG_STACK_VALIDATION
   has_libelf := $(call try-run,\
-		echo "int main() {}" | $(HOSTCC) -xc -o /dev/null -lelf -,1,0)
+		echo "int main() {}" | $(HOSTCC) -xc -o /dev/null $(HOST_LIBELF_LIBS) -,1,0)
   ifeq ($(has_libelf),1)
     objtool_target := tools/objtool FORCE
   else
@@ -1757,7 +1762,7 @@ existing-targets := $(wildcard $(sort $(targets)))

 endif # ifeq ($(config-targets),1)
 endif # ifeq ($(mixed-targets),1)
-endif # sub-make-done
+endif # need-sub-make

 PHONY += FORCE
 FORCE:
......
@@ -6,6 +6,7 @@ generic-y += exec.h
 generic-y += export.h
 generic-y += fb.h
 generic-y += irq_work.h
+generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
 generic-y += preempt.h
......
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#include <asm-generic/kvm_para.h>
@@ -11,6 +11,7 @@ generic-y += hardirq.h
 generic-y += hw_irq.h
 generic-y += irq_regs.h
 generic-y += irq_work.h
+generic-y += kvm_para.h
 generic-y += local.h
 generic-y += local64.h
 generic-y += mcs_spinlock.h
......
+generic-y += kvm_para.h
 generic-y += ucontext.h
@@ -596,6 +596,7 @@ config ARCH_DAVINCI
	select HAVE_IDE
	select PM_GENERIC_DOMAINS if PM
	select PM_GENERIC_DOMAINS_OF if PM && OF
+	select REGMAP_MMIO
	select RESET_CONTROLLER
	select SPARSE_IRQ
	select USE_OF
......
@@ -93,7 +93,7 @@
 };

 &hdmi {
-	hpd-gpios = <&gpio 46 GPIO_ACTIVE_LOW>;
+	hpd-gpios = <&gpio 46 GPIO_ACTIVE_HIGH>;
 };

 &pwm {
......
@@ -114,9 +114,9 @@
			reg = <2>;
		};

-		switch@0 {
+		switch@10 {
			compatible = "qca,qca8334";
-			reg = <0>;
+			reg = <10>;

			switch_ports: ports {
				#address-cells = <1>;
@@ -125,7 +125,7 @@
				ethphy0: port@0 {
					reg = <0>;
					label = "cpu";
-					phy-mode = "rgmii";
+					phy-mode = "rgmii-id";
					ethernet = <&fec>;

					fixed-link {
......
@@ -264,7 +264,7 @@
	pinctrl-2 = <&pinctrl_usdhc3_200mhz>;
	vmcc-supply = <&reg_sd3_vmmc>;
	cd-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
-	bus-witdh = <4>;
+	bus-width = <4>;
	no-1-8-v;
	status = "okay";
 };
@@ -275,7 +275,7 @@
	pinctrl-1 = <&pinctrl_usdhc4_100mhz>;
	pinctrl-2 = <&pinctrl_usdhc4_200mhz>;
	vmcc-supply = <&reg_sd4_vmmc>;
-	bus-witdh = <8>;
+	bus-width = <8>;
	no-1-8-v;
	non-removable;
	status = "okay";
......
@@ -91,6 +91,7 @@
	pinctrl-0 = <&pinctrl_enet>;
	phy-handle = <&ethphy>;
	phy-mode = "rgmii";
+	phy-reset-duration = <10>; /* in msecs */
	phy-reset-gpios = <&gpio3 23 GPIO_ACTIVE_LOW>;
	phy-supply = <&vdd_eth_io_reg>;
	status = "disabled";
......
-// SPDX-License-Identifier: GPL-2.0
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright (C) 2016 Freescale Semiconductor, Inc.
  * Copyright (C) 2017 NXP
......
@@ -213,12 +213,13 @@
	gpio-sck = <&gpio0 5 GPIO_ACTIVE_HIGH>;
	gpio-mosi = <&gpio0 4 GPIO_ACTIVE_HIGH>;
	/*
-	 * It's not actually active high, but the frameworks assume
-	 * the polarity of the passed-in GPIO is "normal" (active
-	 * high) then actively drives the line low to select the
-	 * chip.
+	 * This chipselect is active high. Just setting the flags
+	 * to GPIO_ACTIVE_HIGH is not enough for the SPI DT bindings,
+	 * it will be ignored, only the special "spi-cs-high" flag
+	 * really counts.
	 */
	cs-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>;
+	spi-cs-high;
	num-chipselects = <1>;
	/*
......
@@ -170,6 +170,9 @@ CONFIG_IMX_SDMA=y
 # CONFIG_IOMMU_SUPPORT is not set
 CONFIG_IIO=y
 CONFIG_FSL_MX25_ADC=y
+CONFIG_PWM=y
+CONFIG_PWM_IMX1=y
+CONFIG_PWM_IMX27=y
 CONFIG_EXT4_FS=y
 # CONFIG_DNOTIFY is not set
 CONFIG_VFAT_FS=y
......
@@ -398,7 +398,7 @@ CONFIG_MAG3110=y
 CONFIG_MPL3115=y
 CONFIG_PWM=y
 CONFIG_PWM_FSL_FTM=y
-CONFIG_PWM_IMX=y
+CONFIG_PWM_IMX27=y
 CONFIG_NVMEM_IMX_OCOTP=y
 CONFIG_NVMEM_VF610_OCOTP=y
 CONFIG_TEE=y
......
@@ -381,6 +381,17 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
	return ret;
 }

+static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
+				       const void *data, unsigned long len)
+{
+	int srcu_idx = srcu_read_lock(&kvm->srcu);
+	int ret = kvm_write_guest(kvm, gpa, data, len);
+
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+
+	return ret;
+}
+
 static inline void *kvm_get_hyp_vector(void)
 {
	switch(read_cpuid_part()) {
......
@@ -75,6 +75,8 @@ static inline bool kvm_stage2_has_pud(struct kvm *kvm)

 #define S2_PMD_MASK				PMD_MASK
 #define S2_PMD_SIZE				PMD_SIZE
+#define S2_PUD_MASK				PUD_MASK
+#define S2_PUD_SIZE				PUD_SIZE

 static inline bool kvm_stage2_has_pmd(struct kvm *kvm)
 {
......
@@ -3,3 +3,4 @@
 generated-y += unistd-common.h
 generated-y += unistd-oabi.h
 generated-y += unistd-eabi.h
+generic-y += kvm_para.h
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#include <asm-generic/kvm_para.h>
@@ -16,30 +16,23 @@
 #include "cpuidle.h"
 #include "hardware.h"

-static atomic_t master = ATOMIC_INIT(0);
-static DEFINE_SPINLOCK(master_lock);
+static int num_idle_cpus = 0;
+static DEFINE_SPINLOCK(cpuidle_lock);

 static int imx6q_enter_wait(struct cpuidle_device *dev,
			    struct cpuidle_driver *drv, int index)
 {
-	if (atomic_inc_return(&master) == num_online_cpus()) {
-		/*
-		 * With this lock, we prevent other cpu to exit and enter
-		 * this function again and become the master.
-		 */
-		if (!spin_trylock(&master_lock))
-			goto idle;
+	spin_lock(&cpuidle_lock);
+	if (++num_idle_cpus == num_online_cpus())
		imx6_set_lpm(WAIT_UNCLOCKED);
-		cpu_do_idle();
-		imx6_set_lpm(WAIT_CLOCKED);
-		spin_unlock(&master_lock);
-		goto done;
-	}
+	spin_unlock(&cpuidle_lock);

-idle:
	cpu_do_idle();
-done:
-	atomic_dec(&master);
+
+	spin_lock(&cpuidle_lock);
+	if (num_idle_cpus-- == num_online_cpus())
+		imx6_set_lpm(WAIT_CLOCKED);
+	spin_unlock(&cpuidle_lock);

	return index;
 }
......
@@ -59,6 +59,7 @@ static void __init imx51_m4if_setup(void)
		return;

	m4if_base = of_iomap(np, 0);
+	of_node_put(np);
	if (!m4if_base) {
		pr_err("Unable to map M4IF registers\n");
		return;
......
@@ -27,6 +27,7 @@ config ARCH_BCM2835
	bool "Broadcom BCM2835 family"
	select TIMER_OF
	select GPIOLIB
+	select MFD_CORE
	select PINCTRL
	select PINCTRL_BCM2835
	select ARM_AMBA
......
@@ -321,7 +321,6 @@
		nvidia,default-trim = <0x9>;
		nvidia,dqs-trim = <63>;
		mmc-hs400-1_8v;
-		supports-cqe;
		status = "disabled";
	};
......
@@ -2,7 +2,7 @@
 /*
  * Device Tree Source for the RZ/G2E (R8A774C0) SoC
  *
- * Copyright (C) 2018 Renesas Electronics Corp.
+ * Copyright (C) 2018-2019 Renesas Electronics Corp.
  */

 #include <dt-bindings/clock/r8a774c0-cpg-mssr.h>
@@ -1150,9 +1150,8 @@
				 <&cpg CPG_CORE R8A774C0_CLK_S3D1C>,
				 <&scif_clk>;
			clock-names = "fck", "brg_int", "scif_clk";
-			dmas = <&dmac1 0x5b>, <&dmac1 0x5a>,
-			       <&dmac2 0x5b>, <&dmac2 0x5a>;
-			dma-names = "tx", "rx", "tx", "rx";
+			dmas = <&dmac0 0x5b>, <&dmac0 0x5a>;
+			dma-names = "tx", "rx";
			power-domains = <&sysc R8A774C0_PD_ALWAYS_ON>;
			resets = <&cpg 202>;
			status = "disabled";
......
@@ -2,7 +2,7 @@
 /*
  * Device Tree Source for the R-Car E3 (R8A77990) SoC
  *
- * Copyright (C) 2018 Renesas Electronics Corp.
+ * Copyright (C) 2018-2019 Renesas Electronics Corp.
  */

 #include <dt-bindings/clock/r8a77990-cpg-mssr.h>
@@ -1067,9 +1067,8 @@
				 <&cpg CPG_CORE R8A77990_CLK_S3D1C>,
				 <&scif_clk>;
			clock-names = "fck", "brg_int", "scif_clk";
-			dmas = <&dmac1 0x5b>, <&dmac1 0x5a>,
-			       <&dmac2 0x5b>, <&dmac2 0x5a>;
-			dma-names = "tx", "rx", "tx", "rx";
+			dmas = <&dmac0 0x5b>, <&dmac0 0x5a>;
+			dma-names = "tx", "rx";
			power-domains = <&sysc R8A77990_PD_ALWAYS_ON>;
			resets = <&cpg 202>;
			status = "disabled";
......
@@ -445,6 +445,17 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
	return ret;
 }

+static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
+				       const void *data, unsigned long len)
+{
+	int srcu_idx = srcu_read_lock(&kvm->srcu);
+	int ret = kvm_write_guest(kvm, gpa, data, len);
+
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+
+	return ret;
+}
+
 #ifdef CONFIG_KVM_INDIRECT_VECTORS
 /*
  * EL2 vectors can be mapped and rerouted in a number of ways,
......
@@ -217,7 +217,7 @@ static void __init request_standard_resources(void)

	num_standard_resources = memblock.memory.cnt;
	res_size = num_standard_resources * sizeof(*standard_resources);
-	standard_resources = memblock_alloc_low(res_size, SMP_CACHE_BYTES);
+	standard_resources = memblock_alloc(res_size, SMP_CACHE_BYTES);
	if (!standard_resources)
		panic("%s: Failed to allocate %zu bytes\n", __func__, res_size);
......
@@ -123,6 +123,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
	int ret = -EINVAL;
	bool loaded;

+	/* Reset PMU outside of the non-preemptible section */
+	kvm_pmu_vcpu_reset(vcpu);
+
	preempt_disable();
	loaded = (vcpu->cpu != -1);
	if (loaded)
@@ -170,9 +173,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
		vcpu->arch.reset_state.reset = false;
	}

-	/* Reset PMU */
-	kvm_pmu_vcpu_reset(vcpu);
-
	/* Default workaround setup is enabled (if supported) */
	if (kvm_arm_have_ssbd() == KVM_SSBD_KERNEL)
		vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
......
@@ -19,6 +19,7 @@ generic-y += irq_work.h
 generic-y += kdebug.h
 generic-y += kmap_types.h
 generic-y += kprobes.h
+generic-y += kvm_para.h
 generic-y += local.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
......
+generic-y += kvm_para.h
 generic-y += ucontext.h
@@ -23,6 +23,7 @@ generic-y += irq_work.h
 generic-y += kdebug.h
 generic-y += kmap_types.h
 generic-y += kprobes.h
+generic-y += kvm_para.h
 generic-y += linkage.h
 generic-y += local.h
 generic-y += local64.h
......
+generic-y += kvm_para.h
 generic-y += ucontext.h
@@ -19,6 +19,7 @@ generic-y += irq_work.h
 generic-y += kdebug.h
 generic-y += kmap_types.h
 generic-y += kprobes.h
+generic-y += kvm_para.h
 generic-y += local.h
 generic-y += local64.h
 generic-y += mcs_spinlock.h
......
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#include <asm-generic/kvm_para.h>
@@ -2,6 +2,7 @@ generated-y += syscall_table.h
 generic-y += compat.h
 generic-y += exec.h
 generic-y += irq_work.h
+generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
 generic-y += preempt.h
......
 generated-y += unistd_64.h
+generic-y += kvm_para.h
@@ -13,6 +13,7 @@ generic-y += irq_work.h
 generic-y += kdebug.h
 generic-y += kmap_types.h
 generic-y += kprobes.h
+generic-y += kvm_para.h
 generic-y += local.h
 generic-y += local64.h
 generic-y += mcs_spinlock.h
......
 generated-y += unistd_32.h
+generic-y += kvm_para.h
@@ -17,6 +17,7 @@ generic-y += irq_work.h
 generic-y += kdebug.h
 generic-y += kmap_types.h
 generic-y += kprobes.h
+generic-y += kvm_para.h
 generic-y += linkage.h
 generic-y += local.h
 generic-y += local64.h
......
 generated-y += unistd_32.h
+generic-y += kvm_para.h
 generic-y += ucontext.h
@@ -23,6 +23,7 @@ generic-y += irq_work.h
 generic-y += kdebug.h
 generic-y += kmap_types.h
 generic-y += kprobes.h
+generic-y += kvm_para.h
 generic-y += local.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
......
+generic-y += kvm_para.h
 generic-y += ucontext.h
@@ -20,6 +20,7 @@ generic-y += irq_work.h
 generic-y += kdebug.h
 generic-y += kmap_types.h
 generic-y += kprobes.h
+generic-y += kvm_para.h
 generic-y += local.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
......
+generic-y += kvm_para.h
 generic-y += ucontext.h
@@ -11,6 +11,7 @@ generic-y += irq_regs.h
 generic-y += irq_work.h
 generic-y += kdebug.h
 generic-y += kprobes.h
+generic-y += kvm_para.h
 generic-y += local.h
 generic-y += local64.h
 generic-y += mcs_spinlock.h
......
 generated-y += unistd_32.h
 generated-y += unistd_64.h
+generic-y += kvm_para.h
@@ -215,11 +215,20 @@ _GLOBAL_TOC(memcmp)
	beq	.Lzero

 .Lcmp_rest_lt8bytes:
-	/* Here we have only less than 8 bytes to compare with. at least s1
-	 * Address is aligned with 8 bytes.
-	 * The next double words are load and shift right with appropriate
-	 * bits.
+	/*
+	 * Here we have less than 8 bytes to compare. At least s1 is aligned to
+	 * 8 bytes, but s2 may not be. We must make sure s2 + 7 doesn't cross a
+	 * page boundary, otherwise we might read past the end of the buffer and
+	 * trigger a page fault. We use 4K as the conservative minimum page
+	 * size. If we detect that case we go to the byte-by-byte loop.
+	 *
+	 * Otherwise the next double word is loaded from s1 and s2, and shifted
+	 * right to compare the appropriate bits.
	 */
+	clrldi	r6,r4,(64-12)	// r6 = r4 & 0xfff
+	cmpdi	r6,0xff8
+	bgt	.Lshort
+
	subfic	r6,r5,8
	slwi	r6,r6,3
	LD	rA,0,r3
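In C terms, the guard added above is roughly the following (an editorial
sketch; it assumes the conservative 4K page size the comment mentions):

	#include <stdint.h>

	/* true if an unaligned 8-byte load at s2 could touch the next page */
	static int load_may_cross_page(const void *s2)
	{
		/* low 12 bits = offset within a 4K page; offsets above
		 * 0xff8 leave fewer than 8 bytes before the page end */
		return ((uintptr_t)s2 & 0xfff) > 0xff8;
	}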
......
@@ -77,18 +77,27 @@ static u32 cpu_to_drc_index(int cpu)

		ret = drc.drc_index_start + (thread_index * drc.sequential_inc);
	} else {
-		const __be32 *indexes;
-
-		indexes = of_get_property(dn, "ibm,drc-indexes", NULL);
-		if (indexes == NULL)
-			goto err_of_node_put;
+		u32 nr_drc_indexes, thread_drc_index;

		/*
-		 * The first element indexes[0] is the number of drc_indexes
-		 * returned in the list. Hence thread_index+1 will get the
-		 * drc_index corresponding to core number thread_index.
+		 * The first element of ibm,drc-indexes array is the
+		 * number of drc_indexes returned in the list. Hence
+		 * thread_index+1 will get the drc_index corresponding
+		 * to core number thread_index.
		 */
-		ret = indexes[thread_index + 1];
+		rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
+						0, &nr_drc_indexes);
+		if (rc)
+			goto err_of_node_put;
+
+		WARN_ON_ONCE(thread_index > nr_drc_indexes);
+		rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
+						thread_index + 1,
+						&thread_drc_index);
+		if (rc)
+			goto err_of_node_put;
+
+		ret = thread_drc_index;
	}

	rc = 0;
......
@@ -550,6 +550,7 @@ static void pseries_print_mce_info(struct pt_regs *regs,
		"UE",
		"SLB",
		"ERAT",
+		"Unknown",
		"TLB",
		"D-Cache",
		"Unknown",
......
@@ -26,7 +26,7 @@ enum fixed_addresses {
 };

 #define FIXADDR_SIZE		(__end_of_fixed_addresses * PAGE_SIZE)
-#define FIXADDR_TOP		(PAGE_OFFSET)
+#define FIXADDR_TOP		(VMALLOC_START)
 #define FIXADDR_START		(FIXADDR_TOP - FIXADDR_SIZE)

 #define FIXMAP_PAGE_IO		PAGE_KERNEL
......
@@ -300,7 +300,7 @@ do {								\
		"	.balign 4\n"				\
		"4:\n"						\
		"	li %0, %6\n"				\
-		"	jump 2b, %1\n"				\
+		"	jump 3b, %1\n"				\
		"	.previous\n"				\
		"	.section __ex_table,\"a\"\n"		\
		"	.balign " RISCV_SZPTR "\n"		\
......
@@ -4,7 +4,6 @@

 ifdef CONFIG_FTRACE
 CFLAGS_REMOVE_ftrace.o = -pg
-CFLAGS_REMOVE_setup.o = -pg
 endif

 extra-y += head.o
@@ -29,8 +28,6 @@ obj-y	+= vdso.o
 obj-y	+= cacheinfo.o
 obj-y	+= vdso/

-CFLAGS_setup.o := -mcmodel=medany
-
 obj-$(CONFIG_FPU)	+= fpu.o
 obj-$(CONFIG_SMP)	+= smpboot.o
 obj-$(CONFIG_SMP)	+= smp.o
......
@@ -141,7 +141,7 @@ static int apply_r_riscv_hi20_rela(struct module *me, u32 *location,
 {
	s32 hi20;

-	if (IS_ENABLED(CMODEL_MEDLOW)) {
+	if (IS_ENABLED(CONFIG_CMODEL_MEDLOW)) {
		pr_err(
			"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
			me->name, (long long)v, location);
......
@@ -48,14 +48,6 @@ struct screen_info screen_info = {
 };
 #endif

-unsigned long va_pa_offset;
-EXPORT_SYMBOL(va_pa_offset);
-unsigned long pfn_base;
-EXPORT_SYMBOL(pfn_base);
-
-unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
-EXPORT_SYMBOL(empty_zero_page);
-
 /* The lucky hart to first increment this variable will boot the other cores */
 atomic_t hart_lottery;
 unsigned long boot_cpu_hartid;
......
+CFLAGS_init.o := -mcmodel=medany
+ifdef CONFIG_FTRACE
+CFLAGS_REMOVE_init.o = -pg
+endif
+
 obj-y += init.o
 obj-y += fault.o
 obj-y += extable.o
......
@@ -25,6 +25,10 @@
 #include <asm/pgtable.h>
 #include <asm/io.h>

+unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
+							__page_aligned_bss;
+EXPORT_SYMBOL(empty_zero_page);
+
 static void __init zone_sizes_init(void)
 {
	unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0, };
@@ -143,6 +147,11 @@ void __init setup_bootmem(void)
	}
 }

+unsigned long va_pa_offset;
+EXPORT_SYMBOL(va_pa_offset);
+unsigned long pfn_base;
+EXPORT_SYMBOL(pfn_base);
+
 pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
 pgd_t trampoline_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);

@@ -172,6 +181,25 @@ void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
	}
 }

+/*
+ * setup_vm() is called from head.S with MMU-off.
+ *
+ * Following requirements should be honoured for setup_vm() to work
+ * correctly:
+ * 1) It should use PC-relative addressing for accessing kernel symbols.
+ *    To achieve this we always use GCC cmodel=medany.
+ * 2) The compiler instrumentation for FTRACE will not work for setup_vm()
+ *    so disable compiler instrumentation when FTRACE is enabled.
+ *
+ * Currently, the above requirements are honoured by using custom CFLAGS
+ * for init.o in mm/Makefile.
+ */
+
+#ifndef __riscv_cmodel_medany
+#error "setup_vm() is called from head.S before relocate so it should "
+	"not use absolute addressing."
+#endif
+
 asmlinkage void __init setup_vm(void)
 {
	extern char _start;
......
@@ -360,4 +360,15 @@ static inline struct ap_queue_status ap_dqap(ap_qid_t qid,
	return reg1;
 }

+/*
+ * Interface to tell the AP bus code that a configuration
+ * change has happened. The bus code should at least do
+ * an ap bus resource rescan.
+ */
+#if IS_ENABLED(CONFIG_ZCRYPT)
+void ap_bus_cfg_chg(void);
+#else
+static inline void ap_bus_cfg_chg(void){};
+#endif
+
 #endif /* _ASM_S390_AP_H_ */
@@ -252,11 +252,14 @@ do {								\

 /*
  * Cache aliasing on the latest machines calls for a mapping granularity
- * of 512KB. For 64-bit processes use a 512KB alignment and a randomization
- * of up to 1GB. For 31-bit processes the virtual address space is limited,
- * use no alignment and limit the randomization to 8MB.
+ * of 512KB for the anonymous mapping base. For 64-bit processes use a
+ * 512KB alignment and a randomization of up to 1GB. For 31-bit processes
+ * the virtual address space is limited, use no alignment and limit the
+ * randomization to 8MB.
+ * For the additional randomization of the program break use 32MB for
+ * 64-bit and 8MB for 31-bit.
  */
-#define BRK_RND_MASK	(is_compat_task() ? 0x7ffUL : 0x3ffffUL)
+#define BRK_RND_MASK	(is_compat_task() ? 0x7ffUL : 0x1fffUL)
 #define MMAP_RND_MASK	(is_compat_task() ? 0x7ffUL : 0x3ff80UL)
 #define MMAP_ALIGN_MASK	(is_compat_task() ? 0 : 0x7fUL)
 #define STACK_RND_MASK	MMAP_RND_MASK
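(For reference, assuming the masks are applied in units of 4KB pages as in
the surrounding arch code: the new 64-bit value gives 0x1fff + 1 = 0x2000
pages * 4KB = 32MB of break randomization, and the 31-bit value gives
0x7ff + 1 = 0x800 pages * 4KB = 8MB, matching the updated comment.)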
......
@@ -91,52 +91,53 @@ struct lowcore {
	__u64	hardirq_timer;			/* 0x02e8 */
	__u64	softirq_timer;			/* 0x02f0 */
	__u64	steal_timer;			/* 0x02f8 */
-	__u64	last_update_timer;		/* 0x0300 */
-	__u64	last_update_clock;		/* 0x0308 */
-	__u64	int_clock;			/* 0x0310 */
-	__u64	mcck_clock;			/* 0x0318 */
-	__u64	clock_comparator;		/* 0x0320 */
-	__u64	boot_clock[2];			/* 0x0328 */
+	__u64	avg_steal_timer;		/* 0x0300 */
+	__u64	last_update_timer;		/* 0x0308 */
+	__u64	last_update_clock;		/* 0x0310 */
+	__u64	int_clock;			/* 0x0318 */
+	__u64	mcck_clock;			/* 0x0320 */
+	__u64	clock_comparator;		/* 0x0328 */
+	__u64	boot_clock[2];			/* 0x0330 */

	/* Current process. */
-	__u64	current_task;			/* 0x0338 */
-	__u64	kernel_stack;			/* 0x0340 */
+	__u64	current_task;			/* 0x0340 */
+	__u64	kernel_stack;			/* 0x0348 */

	/* Interrupt, DAT-off and restartstack. */
-	__u64	async_stack;			/* 0x0348 */
-	__u64	nodat_stack;			/* 0x0350 */
-	__u64	restart_stack;			/* 0x0358 */
+	__u64	async_stack;			/* 0x0350 */
+	__u64	nodat_stack;			/* 0x0358 */
+	__u64	restart_stack;			/* 0x0360 */

	/* Restart function and parameter. */
-	__u64	restart_fn;			/* 0x0360 */
-	__u64	restart_data;			/* 0x0368 */
-	__u64	restart_source;			/* 0x0370 */
+	__u64	restart_fn;			/* 0x0368 */
+	__u64	restart_data;			/* 0x0370 */
+	__u64	restart_source;			/* 0x0378 */

	/* Address space pointer. */
-	__u64	kernel_asce;			/* 0x0378 */
-	__u64	user_asce;			/* 0x0380 */
-	__u64	vdso_asce;			/* 0x0388 */
+	__u64	kernel_asce;			/* 0x0380 */
+	__u64	user_asce;			/* 0x0388 */
+	__u64	vdso_asce;			/* 0x0390 */

	/*
	 * The lpp and current_pid fields form a
	 * 64-bit value that is set as program
	 * parameter with the LPP instruction.
	 */
-	__u32	lpp;				/* 0x0390 */
-	__u32	current_pid;			/* 0x0394 */
+	__u32	lpp;				/* 0x0398 */
+	__u32	current_pid;			/* 0x039c */

	/* SMP info area */
-	__u32	cpu_nr;				/* 0x0398 */
-	__u32	softirq_pending;		/* 0x039c */
-	__u32	preempt_count;			/* 0x03a0 */
-	__u32	spinlock_lockval;		/* 0x03a4 */
-	__u32	spinlock_index;			/* 0x03a8 */
-	__u32	fpu_flags;			/* 0x03ac */
-	__u64	percpu_offset;			/* 0x03b0 */
-	__u64	vdso_per_cpu_data;		/* 0x03b8 */
-	__u64	machine_flags;			/* 0x03c0 */
-	__u64	gmap;				/* 0x03c8 */
-	__u8	pad_0x03d0[0x0400-0x03d0];	/* 0x03d0 */
+	__u32	cpu_nr;				/* 0x03a0 */
+	__u32	softirq_pending;		/* 0x03a4 */
+	__u32	preempt_count;			/* 0x03a8 */
+	__u32	spinlock_lockval;		/* 0x03ac */
+	__u32	spinlock_index;			/* 0x03b0 */
+	__u32	fpu_flags;			/* 0x03b4 */
+	__u64	percpu_offset;			/* 0x03b8 */
+	__u64	vdso_per_cpu_data;		/* 0x03c0 */
+	__u64	machine_flags;			/* 0x03c8 */
+	__u64	gmap;				/* 0x03d0 */
+	__u8	pad_0x03d8[0x0400-0x03d8];	/* 0x03d8 */

	/* br %r1 trampoline */
	__u16	br_r1_trampoline;		/* 0x0400 */
......
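Every offset in this hunk grows by exactly 8 bytes because avg_steal_timer is a __u64 inserted at 0x0300, and the trailing pad correspondingly starts at 0x03d8 instead of 0x03d0. A small sketch of how such offset comments can be verified at compile time; the struct below is an illustrative stand-in, not the real lowcore:

#include <stddef.h>

struct tail {					/* stand-in, not struct lowcore */
	unsigned long long avg_steal_timer;	/* 0x00: newly inserted */
	unsigned long long last_update_timer;	/* 0x08: shifted by 8 */
	unsigned long long last_update_clock;	/* 0x10: shifted by 8 */
};

_Static_assert(offsetof(struct tail, last_update_timer) == 0x08,
	       "a __u64 insertion shifts every later field by 8 bytes");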
...@@ -196,23 +196,30 @@ static void cf_diag_perf_event_destroy(struct perf_event *event) ...@@ -196,23 +196,30 @@ static void cf_diag_perf_event_destroy(struct perf_event *event)
*/ */
static int __hw_perf_event_init(struct perf_event *event) static int __hw_perf_event_init(struct perf_event *event)
{ {
struct cpu_cf_events *cpuhw = this_cpu_ptr(&cpu_cf_events);
struct perf_event_attr *attr = &event->attr; struct perf_event_attr *attr = &event->attr;
struct cpu_cf_events *cpuhw;
enum cpumf_ctr_set i; enum cpumf_ctr_set i;
int err = 0; int err = 0;
debug_sprintf_event(cf_diag_dbg, 5, debug_sprintf_event(cf_diag_dbg, 5, "%s event %p cpu %d\n", __func__,
"%s event %p cpu %d authorized %#x\n", __func__, event, event->cpu);
event, event->cpu, cpuhw->info.auth_ctl);
event->hw.config = attr->config; event->hw.config = attr->config;
event->hw.config_base = 0; event->hw.config_base = 0;
local64_set(&event->count, 0);
/* Add all authorized counter sets to config_base */ /* Add all authorized counter sets to config_base. The
* hardware init function is either called per-cpu or just once
* for all CPUs (event->cpu == -1). This depends on whether
* counting is started for all CPUs or on a per-workload basis where
* the perf event moves from one CPU to another.
* Checking the authorization on any CPU is fine as the hardware
* applies the same authorization settings to all CPUs.
*/
cpuhw = &get_cpu_var(cpu_cf_events);
for (i = CPUMF_CTR_SET_BASIC; i < CPUMF_CTR_SET_MAX; ++i) for (i = CPUMF_CTR_SET_BASIC; i < CPUMF_CTR_SET_MAX; ++i)
if (cpuhw->info.auth_ctl & cpumf_ctr_ctl[i]) if (cpuhw->info.auth_ctl & cpumf_ctr_ctl[i])
event->hw.config_base |= cpumf_ctr_ctl[i]; event->hw.config_base |= cpumf_ctr_ctl[i];
put_cpu_var(cpu_cf_events);
/* No authorized counter sets, nothing to count/sample */ /* No authorized counter sets, nothing to count/sample */
if (!event->hw.config_base) { if (!event->hw.config_base) {
......
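The hunk swaps this_cpu_ptr() for the get_cpu_var()/put_cpu_var() pair, which disables preemption while the per-cpu authorization mask is read; per the comment, any CPU's auth_ctl is representative. A kernel-style sketch of that pattern, reusing the cpu_cf_events variable from the hunk (not standalone code):

/* Read a per-cpu field with preemption disabled, so the data really
 * belongs to the CPU we are executing on. */
static unsigned int read_any_cpu_auth_ctl(void)
{
	struct cpu_cf_events *cpuhw;
	unsigned int auth;

	cpuhw = &get_cpu_var(cpu_cf_events);	/* disables preemption */
	auth = cpuhw->info.auth_ctl;		/* identical on every CPU */
	put_cpu_var(cpu_cf_events);		/* re-enables preemption */
	return auth;
}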
...@@ -266,7 +266,8 @@ static void pcpu_prepare_secondary(struct pcpu *pcpu, int cpu) ...@@ -266,7 +266,8 @@ static void pcpu_prepare_secondary(struct pcpu *pcpu, int cpu)
lc->percpu_offset = __per_cpu_offset[cpu]; lc->percpu_offset = __per_cpu_offset[cpu];
lc->kernel_asce = S390_lowcore.kernel_asce; lc->kernel_asce = S390_lowcore.kernel_asce;
lc->machine_flags = S390_lowcore.machine_flags; lc->machine_flags = S390_lowcore.machine_flags;
lc->user_timer = lc->system_timer = lc->steal_timer = 0; lc->user_timer = lc->system_timer =
lc->steal_timer = lc->avg_steal_timer = 0;
__ctl_store(lc->cregs_save_area, 0, 15); __ctl_store(lc->cregs_save_area, 0, 15);
save_access_regs((unsigned int *) lc->access_regs_save_area); save_access_regs((unsigned int *) lc->access_regs_save_area);
memcpy(lc->stfle_fac_list, S390_lowcore.stfle_fac_list, memcpy(lc->stfle_fac_list, S390_lowcore.stfle_fac_list,
......
...@@ -124,7 +124,7 @@ static void account_system_index_scaled(struct task_struct *p, u64 cputime, ...@@ -124,7 +124,7 @@ static void account_system_index_scaled(struct task_struct *p, u64 cputime,
*/ */
static int do_account_vtime(struct task_struct *tsk) static int do_account_vtime(struct task_struct *tsk)
{ {
u64 timer, clock, user, guest, system, hardirq, softirq, steal; u64 timer, clock, user, guest, system, hardirq, softirq;
timer = S390_lowcore.last_update_timer; timer = S390_lowcore.last_update_timer;
clock = S390_lowcore.last_update_clock; clock = S390_lowcore.last_update_clock;
...@@ -182,12 +182,6 @@ static int do_account_vtime(struct task_struct *tsk) ...@@ -182,12 +182,6 @@ static int do_account_vtime(struct task_struct *tsk)
if (softirq) if (softirq)
account_system_index_scaled(tsk, softirq, CPUTIME_SOFTIRQ); account_system_index_scaled(tsk, softirq, CPUTIME_SOFTIRQ);
steal = S390_lowcore.steal_timer;
if ((s64) steal > 0) {
S390_lowcore.steal_timer = 0;
account_steal_time(cputime_to_nsecs(steal));
}
return virt_timer_forward(user + guest + system + hardirq + softirq); return virt_timer_forward(user + guest + system + hardirq + softirq);
} }
...@@ -213,8 +207,19 @@ void vtime_task_switch(struct task_struct *prev) ...@@ -213,8 +207,19 @@ void vtime_task_switch(struct task_struct *prev)
*/ */
void vtime_flush(struct task_struct *tsk) void vtime_flush(struct task_struct *tsk)
{ {
u64 steal, avg_steal;
if (do_account_vtime(tsk)) if (do_account_vtime(tsk))
virt_timer_expire(); virt_timer_expire();
steal = S390_lowcore.steal_timer;
avg_steal = S390_lowcore.avg_steal_timer / 2;
if ((s64) steal > 0) {
S390_lowcore.steal_timer = 0;
account_steal_time(steal);
avg_steal += steal;
}
S390_lowcore.avg_steal_timer = avg_steal;
} }
/* /*
......
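vtime_flush() now maintains a decaying average: the stored avg_steal_timer is halved on every flush and the newly accumulated steal time is added, so a constant steal of s per flush converges toward 2s. A standalone userspace sketch of the recurrence:

#include <stdio.h>

int main(void)
{
	unsigned long long avg = 0;
	unsigned long long steal[] = { 100, 100, 100, 100, 0, 0 };

	for (int i = 0; i < 6; i++) {
		avg = avg / 2 + steal[i];	/* the vtime_flush() update */
		printf("flush %d: avg_steal_timer = %llu\n", i, avg);
	}
	return 0;	/* prints 100, 150, 175, 187, 93, 46 */
}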
...@@ -9,6 +9,7 @@ generic-y += emergency-restart.h ...@@ -9,6 +9,7 @@ generic-y += emergency-restart.h
generic-y += exec.h generic-y += exec.h
generic-y += irq_regs.h generic-y += irq_regs.h
generic-y += irq_work.h generic-y += irq_work.h
generic-y += kvm_para.h
generic-y += local.h generic-y += local.h
generic-y += local64.h generic-y += local64.h
generic-y += mcs_spinlock.h generic-y += mcs_spinlock.h
......
# SPDX-License-Identifier: GPL-2.0 # SPDX-License-Identifier: GPL-2.0
generated-y += unistd_32.h generated-y += unistd_32.h
generic-y += kvm_para.h
generic-y += ucontext.h generic-y += ucontext.h
...@@ -9,6 +9,7 @@ generic-y += exec.h ...@@ -9,6 +9,7 @@ generic-y += exec.h
generic-y += export.h generic-y += export.h
generic-y += irq_regs.h generic-y += irq_regs.h
generic-y += irq_work.h generic-y += irq_work.h
generic-y += kvm_para.h
generic-y += linkage.h generic-y += linkage.h
generic-y += local.h generic-y += local.h
generic-y += local64.h generic-y += local64.h
......
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#include <asm-generic/kvm_para.h>
...@@ -18,6 +18,7 @@ generic-y += irq_work.h ...@@ -18,6 +18,7 @@ generic-y += irq_work.h
generic-y += kdebug.h generic-y += kdebug.h
generic-y += kmap_types.h generic-y += kmap_types.h
generic-y += kprobes.h generic-y += kprobes.h
generic-y += kvm_para.h
generic-y += local.h generic-y += local.h
generic-y += mcs_spinlock.h generic-y += mcs_spinlock.h
generic-y += mm-arch-hooks.h generic-y += mm-arch-hooks.h
......
generic-y += kvm_para.h
generic-y += ucontext.h generic-y += ucontext.h
...@@ -2217,14 +2217,8 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING ...@@ -2217,14 +2217,8 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING
If unsure, leave at the default value. If unsure, leave at the default value.
config HOTPLUG_CPU config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs" def_bool y
depends on SMP depends on SMP
---help---
Say Y here to allow turning CPUs off and on. CPUs can be
controlled through /sys/devices/system/cpu.
( Note: power management support will enable this option
automatically on SMP systems. )
Say N if you want to disable CPU hotplug.
config BOOTPARAM_HOTPLUG_CPU0 config BOOTPARAM_HOTPLUG_CPU0
bool "Set default setting of cpu0_hotpluggable" bool "Set default setting of cpu0_hotpluggable"
......
...@@ -219,8 +219,12 @@ ifdef CONFIG_RETPOLINE ...@@ -219,8 +219,12 @@ ifdef CONFIG_RETPOLINE
# Additionally, avoid generating expensive indirect jumps which # Additionally, avoid generating expensive indirect jumps which
# are subject to retpolines for small number of switch cases. # are subject to retpolines for small number of switch cases.
# clang turns off jump table generation by default when under # clang turns off jump table generation by default when under
# retpoline builds, however, gcc does not for x86. # retpoline builds, however, gcc does not for x86. This was
KBUILD_CFLAGS += $(call cc-option,--param=case-values-threshold=20) # only fixed in gcc stable 8.4.0 and later. See gcc bug #86952.
ifndef CONFIG_CC_IS_CLANG
KBUILD_CFLAGS += $(call cc-option,-fno-jump-tables)
endif
endif endif
archscripts: scripts_basic archscripts: scripts_basic
......
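For context on why jump tables hurt here: with a dense enough case range GCC lowers a switch to an indirect jump through a table, and under CONFIG_RETPOLINE every indirect jump is rewritten into a retpoline thunk. A minimal illustration; the function itself is arbitrary:

/* With -O2, GCC may compile this switch as 'jmp *table(,%rax,8)';
 * -fno-jump-tables forces a compare-and-branch tree instead, which
 * avoids the retpoline thunk on affected GCC versions (< 8.4). */
int classify(int c)
{
	switch (c) {
	case 0: return 10;	case 1: return 11;
	case 2: return 12;	case 3: return 13;
	case 4: return 14;	case 5: return 15;
	case 6: return 16;	case 7: return 17;
	default: return -1;
	}
}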
...@@ -120,8 +120,6 @@ static inline void console_init(void) ...@@ -120,8 +120,6 @@ static inline void console_init(void)
void set_sev_encryption_mask(void); void set_sev_encryption_mask(void);
#endif
/* acpi.c */ /* acpi.c */
#ifdef CONFIG_ACPI #ifdef CONFIG_ACPI
acpi_physical_address get_rsdp_addr(void); acpi_physical_address get_rsdp_addr(void);
...@@ -135,3 +133,5 @@ int count_immovable_mem_regions(void); ...@@ -135,3 +133,5 @@ int count_immovable_mem_regions(void);
#else #else
static inline int count_immovable_mem_regions(void) { return 0; } static inline int count_immovable_mem_regions(void) { return 0; }
#endif #endif
#endif /* BOOT_COMPRESSED_MISC_H */
...@@ -112,8 +112,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32]; ...@@ -112,8 +112,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
test_cpu_cap(c, bit)) test_cpu_cap(c, bit))
#define this_cpu_has(bit) \ #define this_cpu_has(bit) \
(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \ (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
x86_this_cpu_test_bit(bit, (unsigned long *)&cpu_info.x86_capability)) x86_this_cpu_test_bit(bit, \
(unsigned long __percpu *)&cpu_info.x86_capability))
/* /*
* This macro is for detection of features which need kernel * This macro is for detection of features which need kernel
......
...@@ -253,14 +253,14 @@ struct kvm_mmu_memory_cache { ...@@ -253,14 +253,14 @@ struct kvm_mmu_memory_cache {
* kvm_memory_slot.arch.gfn_track which is 16 bits, so the role bits used * kvm_memory_slot.arch.gfn_track which is 16 bits, so the role bits used
* by indirect shadow page can not be more than 15 bits. * by indirect shadow page can not be more than 15 bits.
* *
* Currently, we used 14 bits that are @level, @cr4_pae, @quadrant, @access, * Currently, we used 14 bits that are @level, @gpte_is_8_bytes, @quadrant, @access,
* @nxe, @cr0_wp, @smep_andnot_wp and @smap_andnot_wp. * @nxe, @cr0_wp, @smep_andnot_wp and @smap_andnot_wp.
*/ */
union kvm_mmu_page_role { union kvm_mmu_page_role {
u32 word; u32 word;
struct { struct {
unsigned level:4; unsigned level:4;
unsigned cr4_pae:1; unsigned gpte_is_8_bytes:1;
unsigned quadrant:2; unsigned quadrant:2;
unsigned direct:1; unsigned direct:1;
unsigned access:3; unsigned access:3;
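Counting the bits enumerated in the comment above: level(4) + gpte_is_8_bytes(1) + quadrant(2) + access(3) + nxe(1) + cr0_wp(1) + smep_andnot_wp(1) + smap_andnot_wp(1) = 14, within the 15-bit budget left by the 16-bit gfn_track field. A one-line compile-time sketch of that bookkeeping:

enum { KVM_ROLE_BITS = 4 + 1 + 2 + 3 + 1 + 1 + 1 + 1 };	/* = 14 */
_Static_assert(KVM_ROLE_BITS <= 15, "role bits exceed the gfn_track budget");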
...@@ -350,6 +350,7 @@ struct kvm_mmu_page { ...@@ -350,6 +350,7 @@ struct kvm_mmu_page {
}; };
struct kvm_pio_request { struct kvm_pio_request {
unsigned long linear_rip;
unsigned long count; unsigned long count;
int in; int in;
int port; int port;
...@@ -568,6 +569,7 @@ struct kvm_vcpu_arch { ...@@ -568,6 +569,7 @@ struct kvm_vcpu_arch {
bool tpr_access_reporting; bool tpr_access_reporting;
u64 ia32_xss; u64 ia32_xss;
u64 microcode_version; u64 microcode_version;
u64 arch_capabilities;
/* /*
* Paging state of the vcpu * Paging state of the vcpu
...@@ -1192,6 +1194,8 @@ struct kvm_x86_ops { ...@@ -1192,6 +1194,8 @@ struct kvm_x86_ops {
int (*nested_enable_evmcs)(struct kvm_vcpu *vcpu, int (*nested_enable_evmcs)(struct kvm_vcpu *vcpu,
uint16_t *vmcs_version); uint16_t *vmcs_version);
uint16_t (*nested_get_evmcs_version)(struct kvm_vcpu *vcpu); uint16_t (*nested_get_evmcs_version)(struct kvm_vcpu *vcpu);
bool (*need_emulation_on_page_fault)(struct kvm_vcpu *vcpu);
}; };
struct kvm_arch_async_pf { struct kvm_arch_async_pf {
...@@ -1252,7 +1256,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, ...@@ -1252,7 +1256,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
gfn_t gfn_offset, unsigned long mask); gfn_t gfn_offset, unsigned long mask);
void kvm_mmu_zap_all(struct kvm *kvm); void kvm_mmu_zap_all(struct kvm *kvm);
void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen); void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm); unsigned int kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm);
void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages); void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3); int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3);
......
...@@ -77,7 +77,11 @@ static inline size_t real_mode_size_needed(void) ...@@ -77,7 +77,11 @@ static inline size_t real_mode_size_needed(void)
return ALIGN(real_mode_blob_end - real_mode_blob, PAGE_SIZE); return ALIGN(real_mode_blob_end - real_mode_blob, PAGE_SIZE);
} }
void set_real_mode_mem(phys_addr_t mem, size_t size); static inline void set_real_mode_mem(phys_addr_t mem)
{
real_mode_header = (struct real_mode_header *) __va(mem);
}
void reserve_real_mode(void); void reserve_real_mode(void);
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
......
...@@ -501,11 +501,8 @@ void cqm_handle_limbo(struct work_struct *work) ...@@ -501,11 +501,8 @@ void cqm_handle_limbo(struct work_struct *work)
void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms) void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms)
{ {
unsigned long delay = msecs_to_jiffies(delay_ms); unsigned long delay = msecs_to_jiffies(delay_ms);
struct rdt_resource *r;
int cpu; int cpu;
r = &rdt_resources_all[RDT_RESOURCE_L3];
cpu = cpumask_any(&dom->cpu_mask); cpu = cpumask_any(&dom->cpu_mask);
dom->cqm_work_cpu = cpu; dom->cqm_work_cpu = cpu;
......
...@@ -526,7 +526,9 @@ static int stimer_set_config(struct kvm_vcpu_hv_stimer *stimer, u64 config, ...@@ -526,7 +526,9 @@ static int stimer_set_config(struct kvm_vcpu_hv_stimer *stimer, u64 config,
new_config.enable = 0; new_config.enable = 0;
stimer->config.as_uint64 = new_config.as_uint64; stimer->config.as_uint64 = new_config.as_uint64;
stimer_mark_pending(stimer, false); if (stimer->config.enable)
stimer_mark_pending(stimer, false);
return 0; return 0;
} }
...@@ -542,7 +544,10 @@ static int stimer_set_count(struct kvm_vcpu_hv_stimer *stimer, u64 count, ...@@ -542,7 +544,10 @@ static int stimer_set_count(struct kvm_vcpu_hv_stimer *stimer, u64 count,
stimer->config.enable = 0; stimer->config.enable = 0;
else if (stimer->config.auto_enable) else if (stimer->config.auto_enable)
stimer->config.enable = 1; stimer->config.enable = 1;
stimer_mark_pending(stimer, false);
if (stimer->config.enable)
stimer_mark_pending(stimer, false);
return 0; return 0;
} }
......
...@@ -182,7 +182,7 @@ struct kvm_shadow_walk_iterator { ...@@ -182,7 +182,7 @@ struct kvm_shadow_walk_iterator {
static const union kvm_mmu_page_role mmu_base_role_mask = { static const union kvm_mmu_page_role mmu_base_role_mask = {
.cr0_wp = 1, .cr0_wp = 1,
.cr4_pae = 1, .gpte_is_8_bytes = 1,
.nxe = 1, .nxe = 1,
.smep_andnot_wp = 1, .smep_andnot_wp = 1,
.smap_andnot_wp = 1, .smap_andnot_wp = 1,
...@@ -2205,6 +2205,7 @@ static bool kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp, ...@@ -2205,6 +2205,7 @@ static bool kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
static void kvm_mmu_commit_zap_page(struct kvm *kvm, static void kvm_mmu_commit_zap_page(struct kvm *kvm,
struct list_head *invalid_list); struct list_head *invalid_list);
#define for_each_valid_sp(_kvm, _sp, _gfn) \ #define for_each_valid_sp(_kvm, _sp, _gfn) \
hlist_for_each_entry(_sp, \ hlist_for_each_entry(_sp, \
&(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)], hash_link) \ &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)], hash_link) \
...@@ -2215,12 +2216,17 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm, ...@@ -2215,12 +2216,17 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
for_each_valid_sp(_kvm, _sp, _gfn) \ for_each_valid_sp(_kvm, _sp, _gfn) \
if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
static inline bool is_ept_sp(struct kvm_mmu_page *sp)
{
return sp->role.cr0_wp && sp->role.smap_andnot_wp;
}
/* @sp->gfn should be write-protected at the call site */ /* @sp->gfn should be write-protected at the call site */
static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
struct list_head *invalid_list) struct list_head *invalid_list)
{ {
if (sp->role.cr4_pae != !!is_pae(vcpu) if ((!is_ept_sp(sp) && sp->role.gpte_is_8_bytes != !!is_pae(vcpu)) ||
|| vcpu->arch.mmu->sync_page(vcpu, sp) == 0) { vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list); kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
return false; return false;
} }
...@@ -2423,7 +2429,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, ...@@ -2423,7 +2429,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
role.level = level; role.level = level;
role.direct = direct; role.direct = direct;
if (role.direct) if (role.direct)
role.cr4_pae = 0; role.gpte_is_8_bytes = true;
role.access = access; role.access = access;
if (!vcpu->arch.mmu->direct_map if (!vcpu->arch.mmu->direct_map
&& vcpu->arch.mmu->root_level <= PT32_ROOT_LEVEL) { && vcpu->arch.mmu->root_level <= PT32_ROOT_LEVEL) {
...@@ -4794,7 +4800,6 @@ static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu, ...@@ -4794,7 +4800,6 @@ static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu,
role.base.access = ACC_ALL; role.base.access = ACC_ALL;
role.base.nxe = !!is_nx(vcpu); role.base.nxe = !!is_nx(vcpu);
role.base.cr4_pae = !!is_pae(vcpu);
role.base.cr0_wp = is_write_protection(vcpu); role.base.cr0_wp = is_write_protection(vcpu);
role.base.smm = is_smm(vcpu); role.base.smm = is_smm(vcpu);
role.base.guest_mode = is_guest_mode(vcpu); role.base.guest_mode = is_guest_mode(vcpu);
...@@ -4815,6 +4820,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only) ...@@ -4815,6 +4820,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only)
role.base.ad_disabled = (shadow_accessed_mask == 0); role.base.ad_disabled = (shadow_accessed_mask == 0);
role.base.level = kvm_x86_ops->get_tdp_level(vcpu); role.base.level = kvm_x86_ops->get_tdp_level(vcpu);
role.base.direct = true; role.base.direct = true;
role.base.gpte_is_8_bytes = true;
return role; return role;
} }
...@@ -4879,6 +4885,7 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only) ...@@ -4879,6 +4885,7 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only)
role.base.smap_andnot_wp = role.ext.cr4_smap && role.base.smap_andnot_wp = role.ext.cr4_smap &&
!is_write_protection(vcpu); !is_write_protection(vcpu);
role.base.direct = !is_paging(vcpu); role.base.direct = !is_paging(vcpu);
role.base.gpte_is_8_bytes = !!is_pae(vcpu);
if (!is_long_mode(vcpu)) if (!is_long_mode(vcpu))
role.base.level = PT32E_ROOT_LEVEL; role.base.level = PT32E_ROOT_LEVEL;
...@@ -4918,18 +4925,26 @@ static union kvm_mmu_role ...@@ -4918,18 +4925,26 @@ static union kvm_mmu_role
kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty, kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
bool execonly) bool execonly)
{ {
union kvm_mmu_role role; union kvm_mmu_role role = {0};
/* Base role is inherited from root_mmu */ /* SMM flag is inherited from root_mmu */
role.base.word = vcpu->arch.root_mmu.mmu_role.base.word; role.base.smm = vcpu->arch.root_mmu.mmu_role.base.smm;
role.ext = kvm_calc_mmu_role_ext(vcpu);
role.base.level = PT64_ROOT_4LEVEL; role.base.level = PT64_ROOT_4LEVEL;
role.base.gpte_is_8_bytes = true;
role.base.direct = false; role.base.direct = false;
role.base.ad_disabled = !accessed_dirty; role.base.ad_disabled = !accessed_dirty;
role.base.guest_mode = true; role.base.guest_mode = true;
role.base.access = ACC_ALL; role.base.access = ACC_ALL;
/*
* WP=1 and NOT_WP=1 is an impossible combination, use WP and the
* SMAP variation to denote shadow EPT entries.
*/
role.base.cr0_wp = true;
role.base.smap_andnot_wp = true;
role.ext = kvm_calc_mmu_role_ext(vcpu);
role.ext.execonly = execonly; role.ext.execonly = execonly;
return role; return role;
...@@ -5179,7 +5194,7 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa, ...@@ -5179,7 +5194,7 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
gpa, bytes, sp->role.word); gpa, bytes, sp->role.word);
offset = offset_in_page(gpa); offset = offset_in_page(gpa);
pte_size = sp->role.cr4_pae ? 8 : 4; pte_size = sp->role.gpte_is_8_bytes ? 8 : 4;
/* /*
* Sometimes, the OS only writes the last one bytes to update status * Sometimes, the OS only writes the last one bytes to update status
...@@ -5203,7 +5218,7 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte) ...@@ -5203,7 +5218,7 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
page_offset = offset_in_page(gpa); page_offset = offset_in_page(gpa);
level = sp->role.level; level = sp->role.level;
*nspte = 1; *nspte = 1;
if (!sp->role.cr4_pae) { if (!sp->role.gpte_is_8_bytes) {
page_offset <<= 1; /* 32->64 */ page_offset <<= 1; /* 32->64 */
/* /*
* A 32-bit pde maps 4MB while the shadow pdes map * A 32-bit pde maps 4MB while the shadow pdes map
...@@ -5393,10 +5408,12 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code, ...@@ -5393,10 +5408,12 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
* This can happen if a guest gets a page-fault on data access but the HW * This can happen if a guest gets a page-fault on data access but the HW
* table walker is not able to read the instruction page (e.g instruction * table walker is not able to read the instruction page (e.g instruction
* page is not present in memory). In those cases we simply restart the * page is not present in memory). In those cases we simply restart the
* guest. * guest, with the exception of AMD Erratum 1096 which is unrecoverable.
*/ */
if (unlikely(insn && !insn_len)) if (unlikely(insn && !insn_len)) {
return 1; if (!kvm_x86_ops->need_emulation_on_page_fault(vcpu))
return 1;
}
er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len); er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len);
...@@ -5509,7 +5526,9 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot, ...@@ -5509,7 +5526,9 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
if (need_resched() || spin_needbreak(&kvm->mmu_lock)) { if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
if (flush && lock_flush_tlb) { if (flush && lock_flush_tlb) {
kvm_flush_remote_tlbs(kvm); kvm_flush_remote_tlbs_with_address(kvm,
start_gfn,
iterator.gfn - start_gfn + 1);
flush = false; flush = false;
} }
cond_resched_lock(&kvm->mmu_lock); cond_resched_lock(&kvm->mmu_lock);
...@@ -5517,7 +5536,8 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot, ...@@ -5517,7 +5536,8 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
} }
if (flush && lock_flush_tlb) { if (flush && lock_flush_tlb) {
kvm_flush_remote_tlbs(kvm); kvm_flush_remote_tlbs_with_address(kvm, start_gfn,
end_gfn - start_gfn + 1);
flush = false; flush = false;
} }
...@@ -6011,7 +6031,7 @@ int kvm_mmu_module_init(void) ...@@ -6011,7 +6031,7 @@ int kvm_mmu_module_init(void)
/* /*
* Calculate mmu pages needed for kvm. * Calculate mmu pages needed for kvm.
*/ */
unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm) unsigned int kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm)
{ {
unsigned int nr_mmu_pages; unsigned int nr_mmu_pages;
unsigned int nr_pages = 0; unsigned int nr_pages = 0;
......
...@@ -29,10 +29,10 @@ ...@@ -29,10 +29,10 @@
\ \
role.word = __entry->role; \ role.word = __entry->role; \
\ \
trace_seq_printf(p, "sp gfn %llx l%u%s q%u%s %s%s" \ trace_seq_printf(p, "sp gfn %llx l%u %u-byte q%u%s %s%s" \
" %snxe %sad root %u %s%c", \ " %snxe %sad root %u %s%c", \
__entry->gfn, role.level, \ __entry->gfn, role.level, \
role.cr4_pae ? " pae" : "", \ role.gpte_is_8_bytes ? 8 : 4, \
role.quadrant, \ role.quadrant, \
role.direct ? " direct" : "", \ role.direct ? " direct" : "", \
access_str[role.access], \ access_str[role.access], \
......
...@@ -7098,6 +7098,36 @@ static int nested_enable_evmcs(struct kvm_vcpu *vcpu, ...@@ -7098,6 +7098,36 @@ static int nested_enable_evmcs(struct kvm_vcpu *vcpu,
return -ENODEV; return -ENODEV;
} }
static bool svm_need_emulation_on_page_fault(struct kvm_vcpu *vcpu)
{
bool is_user, smap;
is_user = svm_get_cpl(vcpu) == 3;
smap = !kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
/*
* Detect and work around Errata 1096 Fam_17h_00_0Fh.
*
* In a non-SEV guest the hypervisor is able to read guest
* memory to decode the instruction when insn_len is zero, so
* return true to indicate that decoding is possible.
*
* In an SEV guest, however, guest memory is encrypted with a
* guest-specific key and the hypervisor cannot decode the
* instruction, so the erratum cannot be worked around. Print
* an error and request that the guest be killed.
*/
if (is_user && smap) {
if (!sev_guest(vcpu->kvm))
return true;
pr_err_ratelimited("KVM: Guest triggered AMD Erratum 1096\n");
kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
}
return false;
}
static struct kvm_x86_ops svm_x86_ops __ro_after_init = { static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
.cpu_has_kvm_support = has_svm, .cpu_has_kvm_support = has_svm,
.disabled_by_bios = is_disabled, .disabled_by_bios = is_disabled,
...@@ -7231,6 +7261,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = { ...@@ -7231,6 +7261,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
.nested_enable_evmcs = nested_enable_evmcs, .nested_enable_evmcs = nested_enable_evmcs,
.nested_get_evmcs_version = nested_get_evmcs_version, .nested_get_evmcs_version = nested_get_evmcs_version,
.need_emulation_on_page_fault = svm_need_emulation_on_page_fault,
}; };
static int __init svm_init(void) static int __init svm_init(void)
......
...@@ -2585,6 +2585,11 @@ static int nested_check_host_control_regs(struct kvm_vcpu *vcpu, ...@@ -2585,6 +2585,11 @@ static int nested_check_host_control_regs(struct kvm_vcpu *vcpu,
!nested_host_cr4_valid(vcpu, vmcs12->host_cr4) || !nested_host_cr4_valid(vcpu, vmcs12->host_cr4) ||
!nested_cr3_valid(vcpu, vmcs12->host_cr3)) !nested_cr3_valid(vcpu, vmcs12->host_cr3))
return -EINVAL; return -EINVAL;
if (is_noncanonical_address(vmcs12->host_ia32_sysenter_esp, vcpu) ||
is_noncanonical_address(vmcs12->host_ia32_sysenter_eip, vcpu))
return -EINVAL;
/* /*
* If the load IA32_EFER VM-exit control is 1, bits reserved in the * If the load IA32_EFER VM-exit control is 1, bits reserved in the
* IA32_EFER MSR must be 0 in the field for that register. In addition, * IA32_EFER MSR must be 0 in the field for that register. In addition,
......
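The added check rejects non-canonical SYSENTER_ESP/EIP values in the host state. A 48-bit virtual address is canonical when bits 63:47 all equal bit 47, i.e. sign-extending from bit 47 reproduces the value; KVM's is_noncanonical_address() applies the same test parameterized by the guest's virtual-address width. A standalone sketch for the 48-bit case:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool noncanonical48(uint64_t va)
{
	/* Sign-extend from bit 47 and compare with the original. */
	return (uint64_t)((int64_t)(va << 16) >> 16) != va;
}

int main(void)
{
	printf("%d\n", noncanonical48(0x00007fffffffffffULL));	/* 0: canonical */
	printf("%d\n", noncanonical48(0x0000800000000000ULL));	/* 1: hole */
	printf("%d\n", noncanonical48(0xffff800000000000ULL));	/* 0: canonical */
	return 0;
}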
...@@ -1683,12 +1683,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) ...@@ -1683,12 +1683,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
msr_info->data = to_vmx(vcpu)->spec_ctrl; msr_info->data = to_vmx(vcpu)->spec_ctrl;
break; break;
case MSR_IA32_ARCH_CAPABILITIES:
if (!msr_info->host_initiated &&
!guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
return 1;
msr_info->data = to_vmx(vcpu)->arch_capabilities;
break;
case MSR_IA32_SYSENTER_CS: case MSR_IA32_SYSENTER_CS:
msr_info->data = vmcs_read32(GUEST_SYSENTER_CS); msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
break; break;
...@@ -1895,11 +1889,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) ...@@ -1895,11 +1889,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD, vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
MSR_TYPE_W); MSR_TYPE_W);
break; break;
case MSR_IA32_ARCH_CAPABILITIES:
if (!msr_info->host_initiated)
return 1;
vmx->arch_capabilities = data;
break;
case MSR_IA32_CR_PAT: case MSR_IA32_CR_PAT:
if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) { if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data)) if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
...@@ -4088,8 +4077,6 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx) ...@@ -4088,8 +4077,6 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
++vmx->nmsrs; ++vmx->nmsrs;
} }
vmx->arch_capabilities = kvm_get_arch_capabilities();
vm_exit_controls_init(vmx, vmx_vmexit_ctrl()); vm_exit_controls_init(vmx, vmx_vmexit_ctrl());
/* 22.2.1, 20.8.1 */ /* 22.2.1, 20.8.1 */
...@@ -7409,6 +7396,11 @@ static int enable_smi_window(struct kvm_vcpu *vcpu) ...@@ -7409,6 +7396,11 @@ static int enable_smi_window(struct kvm_vcpu *vcpu)
return 0; return 0;
} }
static bool vmx_need_emulation_on_page_fault(struct kvm_vcpu *vcpu)
{
return false;
}
static __init int hardware_setup(void) static __init int hardware_setup(void)
{ {
unsigned long host_bndcfgs; unsigned long host_bndcfgs;
...@@ -7711,6 +7703,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = { ...@@ -7711,6 +7703,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
.set_nested_state = NULL, .set_nested_state = NULL,
.get_vmcs12_pages = NULL, .get_vmcs12_pages = NULL,
.nested_enable_evmcs = NULL, .nested_enable_evmcs = NULL,
.need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
}; };
static void vmx_cleanup_l1d_flush(void) static void vmx_cleanup_l1d_flush(void)
......
...@@ -190,7 +190,6 @@ struct vcpu_vmx { ...@@ -190,7 +190,6 @@ struct vcpu_vmx {
u64 msr_guest_kernel_gs_base; u64 msr_guest_kernel_gs_base;
#endif #endif
u64 arch_capabilities;
u64 spec_ctrl; u64 spec_ctrl;
u32 vm_entry_controls_shadow; u32 vm_entry_controls_shadow;
......
...@@ -1125,7 +1125,7 @@ static u32 msrs_to_save[] = { ...@@ -1125,7 +1125,7 @@ static u32 msrs_to_save[] = {
#endif #endif
MSR_IA32_TSC, MSR_IA32_CR_PAT, MSR_VM_HSAVE_PA, MSR_IA32_TSC, MSR_IA32_CR_PAT, MSR_VM_HSAVE_PA,
MSR_IA32_FEATURE_CONTROL, MSR_IA32_BNDCFGS, MSR_TSC_AUX, MSR_IA32_FEATURE_CONTROL, MSR_IA32_BNDCFGS, MSR_TSC_AUX,
MSR_IA32_SPEC_CTRL, MSR_IA32_ARCH_CAPABILITIES, MSR_IA32_SPEC_CTRL,
MSR_IA32_RTIT_CTL, MSR_IA32_RTIT_STATUS, MSR_IA32_RTIT_CR3_MATCH, MSR_IA32_RTIT_CTL, MSR_IA32_RTIT_STATUS, MSR_IA32_RTIT_CR3_MATCH,
MSR_IA32_RTIT_OUTPUT_BASE, MSR_IA32_RTIT_OUTPUT_MASK, MSR_IA32_RTIT_OUTPUT_BASE, MSR_IA32_RTIT_OUTPUT_MASK,
MSR_IA32_RTIT_ADDR0_A, MSR_IA32_RTIT_ADDR0_B, MSR_IA32_RTIT_ADDR0_A, MSR_IA32_RTIT_ADDR0_B,
...@@ -1158,6 +1158,7 @@ static u32 emulated_msrs[] = { ...@@ -1158,6 +1158,7 @@ static u32 emulated_msrs[] = {
MSR_IA32_TSC_ADJUST, MSR_IA32_TSC_ADJUST,
MSR_IA32_TSCDEADLINE, MSR_IA32_TSCDEADLINE,
MSR_IA32_ARCH_CAPABILITIES,
MSR_IA32_MISC_ENABLE, MSR_IA32_MISC_ENABLE,
MSR_IA32_MCG_STATUS, MSR_IA32_MCG_STATUS,
MSR_IA32_MCG_CTL, MSR_IA32_MCG_CTL,
...@@ -2443,6 +2444,11 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) ...@@ -2443,6 +2444,11 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (msr_info->host_initiated) if (msr_info->host_initiated)
vcpu->arch.microcode_version = data; vcpu->arch.microcode_version = data;
break; break;
case MSR_IA32_ARCH_CAPABILITIES:
if (!msr_info->host_initiated)
return 1;
vcpu->arch.arch_capabilities = data;
break;
case MSR_EFER: case MSR_EFER:
return set_efer(vcpu, data); return set_efer(vcpu, data);
case MSR_K7_HWCR: case MSR_K7_HWCR:
...@@ -2747,6 +2753,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) ...@@ -2747,6 +2753,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
case MSR_IA32_UCODE_REV: case MSR_IA32_UCODE_REV:
msr_info->data = vcpu->arch.microcode_version; msr_info->data = vcpu->arch.microcode_version;
break; break;
case MSR_IA32_ARCH_CAPABILITIES:
if (!msr_info->host_initiated &&
!guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
return 1;
msr_info->data = vcpu->arch.arch_capabilities;
break;
case MSR_IA32_TSC: case MSR_IA32_TSC:
msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset; msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset;
break; break;
...@@ -6523,14 +6535,27 @@ int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu, ...@@ -6523,14 +6535,27 @@ int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
} }
EXPORT_SYMBOL_GPL(kvm_emulate_instruction_from_buffer); EXPORT_SYMBOL_GPL(kvm_emulate_instruction_from_buffer);
static int complete_fast_pio_out(struct kvm_vcpu *vcpu)
{
vcpu->arch.pio.count = 0;
if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip)))
return 1;
return kvm_skip_emulated_instruction(vcpu);
}
static int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size, static int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size,
unsigned short port) unsigned short port)
{ {
unsigned long val = kvm_register_read(vcpu, VCPU_REGS_RAX); unsigned long val = kvm_register_read(vcpu, VCPU_REGS_RAX);
int ret = emulator_pio_out_emulated(&vcpu->arch.emulate_ctxt, int ret = emulator_pio_out_emulated(&vcpu->arch.emulate_ctxt,
size, port, &val, 1); size, port, &val, 1);
/* do not return to emulator after return from userspace */
vcpu->arch.pio.count = 0; if (!ret) {
vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
vcpu->arch.complete_userspace_io = complete_fast_pio_out;
}
return ret; return ret;
} }
...@@ -6541,6 +6566,11 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu) ...@@ -6541,6 +6566,11 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
/* We should only ever be called with arch.pio.count equal to 1 */ /* We should only ever be called with arch.pio.count equal to 1 */
BUG_ON(vcpu->arch.pio.count != 1); BUG_ON(vcpu->arch.pio.count != 1);
if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip))) {
vcpu->arch.pio.count = 0;
return 1;
}
/* For size less than 4 we merge, else we zero extend */ /* For size less than 4 we merge, else we zero extend */
val = (vcpu->arch.pio.size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX) val = (vcpu->arch.pio.size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX)
: 0; : 0;
...@@ -6553,7 +6583,7 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu) ...@@ -6553,7 +6583,7 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
vcpu->arch.pio.port, &val, 1); vcpu->arch.pio.port, &val, 1);
kvm_register_write(vcpu, VCPU_REGS_RAX, val); kvm_register_write(vcpu, VCPU_REGS_RAX, val);
return 1; return kvm_skip_emulated_instruction(vcpu);
} }
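The merge-versus-zero-extend rule above mirrors x86 register-write semantics: 1- and 2-byte 'in' results land in the low bits of RAX with the upper bits preserved, while a 4-byte result zero-extends the whole register. A standalone userspace sketch of that rule, not KVM code:

#include <stdint.h>
#include <stdio.h>

static uint64_t pio_in_result(uint64_t rax, uint32_t val, int size)
{
	uint64_t base = size < 4 ? rax : 0;		/* merge vs. zero extend */
	uint64_t mask = (1ULL << (size * 8)) - 1;

	return (base & ~mask) | (val & mask);
}

int main(void)
{
	/* 2-byte in: 112233445566aabb, upper bits preserved */
	printf("%016llx\n", (unsigned long long)
	       pio_in_result(0x1122334455667788ULL, 0xaabb, 2));
	/* 4-byte in: 00000000ccddeeff, zero-extended */
	printf("%016llx\n", (unsigned long long)
	       pio_in_result(0x1122334455667788ULL, 0xccddeeff, 4));
	return 0;
}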
static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size, static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
...@@ -6572,6 +6602,7 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size, ...@@ -6572,6 +6602,7 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
return ret; return ret;
} }
vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
vcpu->arch.complete_userspace_io = complete_fast_pio_in; vcpu->arch.complete_userspace_io = complete_fast_pio_in;
return 0; return 0;
...@@ -6579,16 +6610,13 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size, ...@@ -6579,16 +6610,13 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in) int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in)
{ {
int ret = kvm_skip_emulated_instruction(vcpu); int ret;
/*
* TODO: we might be squashing a KVM_GUESTDBG_SINGLESTEP-triggered
* KVM_EXIT_DEBUG here.
*/
if (in) if (in)
return kvm_fast_pio_in(vcpu, size, port) && ret; ret = kvm_fast_pio_in(vcpu, size, port);
else else
return kvm_fast_pio_out(vcpu, size, port) && ret; ret = kvm_fast_pio_out(vcpu, size, port);
return ret && kvm_skip_emulated_instruction(vcpu);
} }
EXPORT_SYMBOL_GPL(kvm_fast_pio); EXPORT_SYMBOL_GPL(kvm_fast_pio);
...@@ -8733,6 +8761,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, ...@@ -8733,6 +8761,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu) int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
{ {
vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT; vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
kvm_vcpu_mtrr_init(vcpu); kvm_vcpu_mtrr_init(vcpu);
vcpu_load(vcpu); vcpu_load(vcpu);
...@@ -9429,13 +9458,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, ...@@ -9429,13 +9458,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
const struct kvm_memory_slot *new, const struct kvm_memory_slot *new,
enum kvm_mr_change change) enum kvm_mr_change change)
{ {
int nr_mmu_pages = 0;
if (!kvm->arch.n_requested_mmu_pages) if (!kvm->arch.n_requested_mmu_pages)
nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm); kvm_mmu_change_mmu_pages(kvm,
kvm_mmu_calculate_default_mmu_pages(kvm));
if (nr_mmu_pages)
kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
/* /*
* Dirty logging tracks sptes in 4k granularity, meaning that large * Dirty logging tracks sptes in 4k granularity, meaning that large
......
...@@ -230,7 +230,7 @@ bool mmap_address_hint_valid(unsigned long addr, unsigned long len) ...@@ -230,7 +230,7 @@ bool mmap_address_hint_valid(unsigned long addr, unsigned long len)
/* Can we access it for direct reading/writing? Must be RAM: */ /* Can we access it for direct reading/writing? Must be RAM: */
int valid_phys_addr_range(phys_addr_t addr, size_t count) int valid_phys_addr_range(phys_addr_t addr, size_t count)
{ {
return addr + count <= __pa(high_memory); return addr + count - 1 <= __pa(high_memory - 1);
} }
/* Can we access it through mmap? Must be a valid physical address: */ /* Can we access it through mmap? Must be a valid physical address: */
......
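The off-by-one matters when RAM ends exactly at the top of the physical address space: __pa(high_memory) is then one past the last representable byte and wraps to 0, so the old comparison rejects every valid range. Comparing the last requested byte against the last valid byte never evaluates a one-past-the-end value. A standalone illustration, with 32-bit types chosen deliberately to force the wrap:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t high_memory = 0;		/* 4GB end-of-RAM wraps to 0 */
	uint32_t addr = 0x1000, count = 0x1000;	/* a perfectly valid range */

	/* old check: 0x2000 <= 0, wrongly rejected */
	printf("old: %s\n", addr + count <= high_memory ? "ok" : "rejected");
	/* new check: 0x1fff <= 0xffffffff, accepted */
	printf("new: %s\n", addr + count - 1 <= high_memory - 1 ? "ok" : "rejected");
	return 0;
}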
...@@ -449,7 +449,7 @@ void __init efi_free_boot_services(void) ...@@ -449,7 +449,7 @@ void __init efi_free_boot_services(void)
*/ */
rm_size = real_mode_size_needed(); rm_size = real_mode_size_needed();
if (rm_size && (start + rm_size) < (1<<20) && size >= rm_size) { if (rm_size && (start + rm_size) < (1<<20) && size >= rm_size) {
set_real_mode_mem(start, rm_size); set_real_mode_mem(start);
start += rm_size; start += rm_size;
size -= rm_size; size -= rm_size;
} }
......