Commit a21aae3b authored by Mike Marshall

Merge tag 'for-hubcap-v4.9-readahead' of git://github.com/martinbrandenburg/linux

orangefs: integrate readahead cache changes from out-of-tree

The readahead cache has long been present as a compile time option to
OrangeFS. As it had not been tested in some time, it was disabled by
default. Recently, Walt Ligon started work reviving it, which eventually
culminated in the commit below to OrangeFS SVN.

r12519 | walt | 2016-07-13 14:32:42 -0400 (Wed, 13 Jul 2016) | 1 line
reintegrating readahead cache code

This cache is implemented almost entirely in userspace. There are a few
parameters exposed via sysfs, and the cache must be flushed after an
inode is released.
[spatch]
options = --timeout 200
options = --use-gitgrep
@@ -37,6 +37,7 @@ modules.builtin
Module.symvers
*.dwo
*.su
*.c.[012]*.*
#
# Top-level generic files
@@ -66,6 +67,7 @@ Module.symvers
#
!.gitignore
!.mailmap
!.cocciconfig
#
# Generated include files
......
(This diff has been collapsed.)
@@ -38,6 +38,15 @@ as a regular user, and install it with
sudo make install
Supplemental documentation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For supplemental documentation refer to the wiki:
https://bottest.wiki.kernel.org/coccicheck
The wiki documentation always refers to the linux-next version of the script.
Using Coccinelle on the Linux kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -94,11 +103,26 @@ To enable verbose messages set the V= variable, for example:
make coccicheck MODE=report V=1
Coccinelle parallelization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, coccicheck tries to run as parallel as possible. To change
the parallelism, set the J= variable. For example, to run across 4 CPUs:
make coccicheck MODE=report J=4
As of Coccinelle 1.0.2, Coccinelle uses the OCaml parmap library for
parallelization; if support for this is detected, you will benefit from
parmap parallelization.
When parmap is enabled, coccicheck will enable dynamic load balancing by
using the '--chunksize 1' argument, which ensures we keep feeding threads
with work one by one, so that we avoid the situation where most work gets
done by only a few threads. With dynamic load balancing, if a thread
finishes early we keep feeding it more work.
When parmap is enabled, if an error occurs in Coccinelle, this error
value is propagated back, and the return value of 'make coccicheck'
captures this return value.
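Because the error value is propagated, you can check the exit status of a
run directly from the shell; a minimal sketch, reusing the kfree.cocci
patch referenced later in this document:
make coccicheck COCCI=scripts/coccinelle/free/kfree.cocci MODE=report
echo $?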
Using Coccinelle with a single semantic patch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -142,15 +166,118 @@ semantic patch as shown in the previous section.
The "report" mode is the default. You can select another one with the
MODE variable explained above.
Debugging Coccinelle SmPL patches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using coccicheck is best, as it provides the include options on the
spatch command line that match the options used when the kernel is
compiled. You can learn what these options are by using V=1; you can then
manually run Coccinelle with debug options added.
Alternatively, you can debug running Coccinelle against SmPL patches
by asking for stderr to be redirected to a file; by default stderr
is redirected to /dev/null. If you'd like to capture stderr you
can specify the DEBUG_FILE="file.txt" option to coccicheck. For
instance:
rm -f cocci.err
make coccicheck COCCI=scripts/coccinelle/free/kfree.cocci MODE=report DEBUG_FILE=cocci.err
cat cocci.err
You can use SPFLAGS to add debugging flags; for instance you may want to
add both --profile and --show-trying to SPFLAGS when debugging:
rm -f err.log
export COCCI=scripts/coccinelle/misc/irqf_oneshot.cocci
make coccicheck DEBUG_FILE="err.log" MODE=report SPFLAGS="--profile --show-trying" M=./drivers/mfd/arizona-irq.c
err.log will now have the profiling information, while stdout will
provide some progress information as Coccinelle moves forward with
work.
DEBUG_FILE is only supported when using Coccinelle >= 1.2.
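If you are unsure which version of Coccinelle you have installed, spatch
can report it:
spatch --version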
.cocciconfig support
~~~~~~~~~~~~~~~~~~~~~~
Coccinelle supports reading .cocciconfig for default Coccinelle options that
should be used every time spatch is spawned. The order of precedence for
.cocciconfig variables is as follows:
o Your current user's home directory is processed first
o Your directory from which spatch is called is processed next
o The directory provided with the --dir option is processed last, if used
Since coccicheck runs through make, it naturally runs from the kernel
proper dir; as such, the second rule above applies for picking up a
.cocciconfig when using 'make coccicheck'.
'make coccicheck' also supports using M= targets. If you do not supply
any M= target, it is assumed you want to target the entire kernel.
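For instance, a run limited to a single subsystem directory might look
like this (the directory here is only an illustration):
make coccicheck M=drivers/mfd/ MODE=report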
The kernel coccicheck script has:
if [ "$KBUILD_EXTMOD" = "" ] ; then
OPTIONS="--dir $srctree $COCCIINCLUDE"
else
OPTIONS="--dir $KBUILD_EXTMOD $COCCIINCLUDE"
fi
KBUILD_EXTMOD is set when an explicit target with M= is used. For both cases
the spatch --dir argument is used; as such, the third rule applies whether M=
is used or not, and when M= is used the target directory can have its own
.cocciconfig file. When M= is not passed as an argument to coccicheck, the
target directory is the same as the directory from which spatch was called.
If not using the kernel's coccicheck target, keep the above precedence
order of .cocciconfig reading in mind. If using the kernel's coccicheck
target, override any of the kernel's .cocciconfig settings using SPFLAGS.
We help Coccinelle when used against Linux with a set of sensible default
options for Linux in our own Linux .cocciconfig. This hints to Coccinelle
that git can be used for 'git grep' queries over coccigrep. A timeout of 200
seconds should suffice for now.
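For reference, the .cocciconfig added at the top of the kernel tree by
this change reads:
[spatch]
options = --timeout 200
options = --use-gitgrep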
The options picked up by Coccinelle when reading a .cocciconfig do not appear
as arguments to the spatch processes running on your system. To confirm what
options will be used by Coccinelle, run:
spatch --print-options-only
You can override with your own preferred index option by using SPFLAGS. Take
note that when there are conflicting options Coccinelle gives precedence to
the last options passed. Using .cocciconfig it is possible to use idutils;
however, given the order of precedence followed by Coccinelle, since the
kernel now carries its own .cocciconfig, you will need to use SPFLAGS to use
idutils if desired. See the "Additional flags" section below for more details
on how to use idutils.
Additional flags
~~~~~~~~~~~~~~~~~~
Additional flags can be passed to spatch through the SPFLAGS
variable. This works as Coccinelle respects the last flags
given to it when options are in conflict.
make SPFLAGS=--use-glimpse coccicheck
Coccinelle supports idutils as well, but requires Coccinelle >= 1.0.6.
When no ID file is specified, Coccinelle assumes your ID database file
is in the file .id-utils.index at the top level of the kernel. Coccinelle
carries a script, scripts/idutils_index.sh, which creates the database with:
mkid -i C --output .id-utils.index
If you have another database filename you can also just symlink it to
this name.
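A minimal sketch of such a symlink, assuming an existing database at
/path/to/ID (a hypothetical location):
ln -s /path/to/ID .id-utils.index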
make SPFLAGS=--use-idutils coccicheck
Alternatively you can specify the database filename explicitly, for
instance:
make SPFLAGS="--use-idutils /full-path/to/ID" coccicheck
See spatch --help to learn more about spatch options.
Note that the '--use-glimpse' and '--use-idutils' options
@@ -159,6 +286,25 @@ thus active by default. However, by indexing the code with
one of these tools, and according to the cocci file used,
spatch could process the entire code base more quickly.
SmPL patch specific options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SmPL patches can have their own requirements for options passed
to Coccinelle. SmPL patch specific options can be provided at
the top of the SmPL patch, for instance:
// Options: --no-includes --include-headers
SmPL patch Coccinelle requirements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As Coccinelle features get added, some more advanced SmPL patches
may require newer versions of Coccinelle. If an SmPL patch requires
at least a specific version of Coccinelle, this can be specified as
follows; as an example, if requiring at least Coccinelle >= 1.0.5:
// Requires: 1.0.5
Proposing new semantic patches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
......
@@ -46,6 +46,10 @@ Required properties:
0 maps to GPMC_WAIT0 pin.
- gpio-cells: Must be set to 2
Required properties when using NAND prefetch dma:
- dmas GPMC NAND prefetch dma channel
- dma-names Must be set to "rxtx"
Timing properties for child nodes. All are optional and default to 0.
- gpmc,sync-clk-ps: Minimum clock period for synchronous mode, in picoseconds
@@ -137,7 +141,8 @@ Example for an AM33xx board:
ti,hwmods = "gpmc";
reg = <0x50000000 0x2000>;
interrupts = <100>;
dmas = <&edma 52 0>;
dma-names = "rxtx";
gpmc,num-cs = <8>;
gpmc,num-waitpins = <2>;
#address-cells = <2>;
......
* Atmel Quad Serial Peripheral Interface (QSPI)
Required properties:
- compatible: Should be "atmel,sama5d2-qspi".
- reg: Should contain the locations and lengths of the base registers
and the mapped memory.
- reg-names: Should contain the resource reg names:
- qspi_base: configuration register address space
- qspi_mmap: memory mapped address space
- interrupts: Should contain the interrupt for the device.
- clocks: The phandle of the clock needed by the QSPI controller.
- #address-cells: Should be <1>.
- #size-cells: Should be <0>.
Example:
spi@f0020000 {
compatible = "atmel,sama5d2-qspi";
reg = <0xf0020000 0x100>, <0xd0000000 0x8000000>;
reg-names = "qspi_base", "qspi_mmap";
interrupts = <52 IRQ_TYPE_LEVEL_HIGH 7>;
clocks = <&spi0_clk>;
#address-cells = <1>;
#size-cells = <0>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_spi0_default>;
status = "okay";
m25p80@0 {
...
};
};
@@ -27,6 +27,7 @@ Required properties:
brcm,brcmnand-v6.2
brcm,brcmnand-v7.0
brcm,brcmnand-v7.1
brcm,brcmnand-v7.2
brcm,brcmnand
- reg : the register start and length for NAND register region.
(optional) Flash DMA register range (if present)
......
* Cadence Quad SPI controller
Required properties:
- compatible : Should be "cdns,qspi-nor".
- reg : Contains two entries, each of which is a tuple consisting of a
physical address and length. The first entry is the address and
length of the controller register set. The second entry is the
address and length of the QSPI Controller data area.
- interrupts : Unit interrupt specifier for the controller interrupt.
- clocks : phandle to the Quad SPI clock.
- cdns,fifo-depth : Size of the data FIFO in words.
- cdns,fifo-width : Bus width of the data FIFO in bytes.
- cdns,trigger-address : 32-bit indirect AHB trigger address.
Optional properties:
- cdns,is-decoded-cs : Flag to indicate whether decoder is used or not.
Optional subnodes:
Subnodes of the Cadence Quad SPI controller are spi slave nodes with additional
custom properties:
- cdns,read-delay : Delay for read capture logic, in clock cycles
- cdns,tshsl-ns : Delay in nanoseconds for the length that the master
mode chip select outputs are de-asserted between
transactions.
- cdns,tsd2d-ns : Delay in nanoseconds between one chip select being
de-activated and the activation of another.
- cdns,tchsh-ns : Delay in nanoseconds between last bit of current
transaction and deasserting the device chip select
(qspi_n_ss_out).
- cdns,tslch-ns : Delay in nanoseconds between setting qspi_n_ss_out low
and first bit transfer.
Example:
qspi: spi@ff705000 {
compatible = "cdns,qspi-nor";
#address-cells = <1>;
#size-cells = <0>;
reg = <0xff705000 0x1000>,
<0xffa00000 0x1000>;
interrupts = <0 151 4>;
clocks = <&qspi_clk>;
cdns,is-decoded-cs;
cdns,fifo-depth = <128>;
cdns,fifo-width = <4>;
cdns,trigger-address = <0x00000000>;
flash0: n25q00@0 {
...
cdns,read-delay = <4>;
cdns,tshsl-ns = <50>;
cdns,tsd2d-ns = <50>;
cdns,tchsh-ns = <4>;
cdns,tslch-ns = <4>;
};
};
@@ -39,7 +39,7 @@ Optional properties:
"prefetch-polled" Prefetch polled mode (default)
"polled" Polled mode, without prefetch
"prefetch-dma" Prefetch enabled DMA mode
"prefetch-irq" Prefetch enabled irq mode
- elm_id: <deprecated> use "ti,elm-id" instead
......
HiSilicon SPI-NOR Flash Controller
Required properties:
- compatible : Should be "hisilicon,fmc-spi-nor" and one of the following strings:
"hisilicon,hi3519-spi-nor"
- #address-cells : Should be 1.
- #size-cells : Should be 0.
- reg : Offset and length of the register set for the controller device.
- reg-names : Must include the following two entries: "control", "memory".
- clocks : handle to spi-nor flash controller clock.
Example:
spi-nor-controller@10000000 {
compatible = "hisilicon,hi3519-spi-nor", "hisilicon,fmc-spi-nor";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x10000000 0x1000>, <0x14000000 0x1000000>;
reg-names = "control", "memory";
clocks = <&clock HI3519_FMC_CLK>;
spi-nor@0 {
compatible = "jedec,spi-nor";
reg = <0>;
};
};
MTK SoCs NAND FLASH controller (NFC) DT binding
This file documents the device tree bindings for MTK SoCs NAND controllers.
The functional split of the controller requires two drivers to operate:
the nand controller interface driver and the ECC engine driver.
The hardware description for both devices must be captured as device
tree nodes.
1) NFC NAND Controller Interface (NFI):
=======================================
The first part of NFC is NAND Controller Interface (NFI) HW.
Required NFI properties:
- compatible: Should be "mediatek,mtxxxx-nfc".
- reg: Base physical address and size of NFI.
- interrupts: Interrupts of NFI.
- clocks: NFI required clocks.
- clock-names: NFI clocks internal name.
- status: Disabled by default. Set to "okay" by the platform.
- ecc-engine: Required ECC Engine node.
- #address-cells: NAND chip index, should be 1.
- #size-cells: Should be 0.
Example:
nandc: nfi@1100d000 {
compatible = "mediatek,mt2701-nfc";
reg = <0 0x1100d000 0 0x1000>;
interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_LOW>;
clocks = <&pericfg CLK_PERI_NFI>,
<&pericfg CLK_PERI_NFI_PAD>;
clock-names = "nfi_clk", "pad_clk";
status = "disabled";
ecc-engine = <&bch>;
#address-cells = <1>;
#size-cells = <0>;
};
Platform related properties, should be set in {platform_name}.dts:
- children nodes: NAND chips.
Children nodes properties:
- reg: Chip Select Signal, default 0.
Set as reg = <0>, <1> when 2 CS are needed.
Optional:
- nand-on-flash-bbt: Store BBT on NAND Flash.
- nand-ecc-mode: the NAND ecc mode (check driver for supported modes)
- nand-ecc-step-size: Number of data bytes covered by a single ECC step.
valid values: 512 and 1024.
1024 is recommended for large page NANDs.
- nand-ecc-strength: Number of bits to correct per ECC step.
The valid values that the controller supports are: 4, 6,
8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36, 40, 44,
48, 52, 56, 60.
The strength should be calculated as follows:
E = (S - F) * 8 / 14
S = O / (P / Q)
E : nand-ecc-strength.
S : spare size per sector.
F : FDM size, should be in the range [1,8].
It is used to store free oob data.
O : oob size.
P : page size.
Q : nand-ecc-step-size.
If the result does not match any one of the listed
choices above, please select the smaller valid value from
the list (otherwise the driver will do the adjustment at
runtime); a worked example follows this list.
- pinctrl-names: Default NAND pin GPIO setting name.
- pinctrl-0: GPIO setting node.
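To make the calculation concrete, here is a worked example with assumed
numbers (illustrative only, not taken from a specific chip): a 2048-byte
page (P) with 64 bytes of oob (O), nand-ecc-step-size Q = 1024 and FDM
size F = 8 gives:
S = O / (P / Q) = 64 / (2048 / 1024) = 32
E = (S - F) * 8 / 14 = (32 - 8) * 8 / 14 = 13.7
13.7 is not in the list of valid strengths, so the smaller valid value
is selected: nand-ecc-strength = <12>.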
Example:
&pio {
nand_pins_default: nanddefault {
pins_dat {
pinmux = <MT2701_PIN_111_MSDC0_DAT7__FUNC_NLD7>,
<MT2701_PIN_112_MSDC0_DAT6__FUNC_NLD6>,
<MT2701_PIN_114_MSDC0_DAT4__FUNC_NLD4>,
<MT2701_PIN_118_MSDC0_DAT3__FUNC_NLD3>,
<MT2701_PIN_121_MSDC0_DAT0__FUNC_NLD0>,
<MT2701_PIN_120_MSDC0_DAT1__FUNC_NLD1>,
<MT2701_PIN_113_MSDC0_DAT5__FUNC_NLD5>,
<MT2701_PIN_115_MSDC0_RSTB__FUNC_NLD8>,
<MT2701_PIN_119_MSDC0_DAT2__FUNC_NLD2>;
input-enable;
drive-strength = <MTK_DRIVE_8mA>;
bias-pull-up;
};
pins_we {
pinmux = <MT2701_PIN_117_MSDC0_CLK__FUNC_NWEB>;
drive-strength = <MTK_DRIVE_8mA>;
bias-pull-up = <MTK_PUPD_SET_R1R0_10>;
};
pins_ale {
pinmux = <MT2701_PIN_116_MSDC0_CMD__FUNC_NALE>;
drive-strength = <MTK_DRIVE_8mA>;
bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
};
};
};
&nandc {
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&nand_pins_default>;
nand@0 {
reg = <0>;
nand-on-flash-bbt;
nand-ecc-mode = "hw";
nand-ecc-strength = <24>;
nand-ecc-step-size = <1024>;
};
};
NAND chip optional subnodes:
- Partitions, see Documentation/devicetree/bindings/mtd/partition.txt
Example:
nand@0 {
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
preloader@0 {
label = "pl";
read-only;
reg = <0x00000000 0x00400000>;
};
android@0x00400000 {
label = "android";
reg = <0x00400000 0x12c00000>;
};
};
};
2) ECC Engine:
==============
Required BCH properties:
- compatible: Should be "mediatek,mtxxxx-ecc".
- reg: Base physical address and size of ECC.
- interrupts: Interrupts of ECC.
- clocks: ECC required clocks.
- clock-names: ECC clocks internal name.
- status: Disabled by default. Set to "okay" by the platform.
Example:
bch: ecc@1100e000 {
compatible = "mediatek,mt2701-ecc";
reg = <0 0x1100e000 0 0x1000>;
interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_LOW>;
clocks = <&pericfg CLK_PERI_NFI_ECC>;
clock-names = "nfiecc_clk";
status = "disabled";
};
@@ -11,10 +11,16 @@ Required properties:
* "ahb" : AHB gating clock
* "mod" : nand controller clock
Optional properties:
- dmas : shall reference DMA channel associated to the NAND controller.
- dma-names : shall be "rxtx".
Optional children nodes:
Children nodes represent the available nand chips.
Optional properties:
- reset : phandle + reset specifier pair
- reset-names : must contain "ahb"
- allwinner,rb : shall contain the native Ready/Busy ids.
or
- rb-gpios : shall contain the gpios used as R/B pins.
......
Aardvark PCIe controller
This PCIe controller is used on the Marvell Armada 3700 ARM64 SoC.
The Device Tree node describing an Aardvark PCIe controller must
contain the following properties:
- compatible: Should be "marvell,armada-3700-pcie"
- reg: range of registers for the PCIe controller
- interrupts: the interrupt line of the PCIe controller
- #address-cells: set to <3>
- #size-cells: set to <2>
- device_type: set to "pci"
- ranges: ranges for the PCI memory and I/O regions
- #interrupt-cells: set to <1>
- msi-controller: indicates that the PCIe controller can itself
handle MSI interrupts
- msi-parent: pointer to the MSI controller to be used
- interrupt-map-mask and interrupt-map: standard PCI properties to
define the mapping of the PCIe interface to interrupt numbers.
- bus-range: PCI bus numbers covered
In addition, the Device Tree describing an Aardvark PCIe controller
must include a sub-node that describes the legacy interrupt controller
built into the PCIe controller. This sub-node must have the following
properties:
- interrupt-controller
- #interrupt-cells: set to <1>
Example:
pcie0: pcie@d0070000 {
compatible = "marvell,armada-3700-pcie";
device_type = "pci";
status = "disabled";
reg = <0 0xd0070000 0 0x20000>;
#address-cells = <3>;
#size-cells = <2>;
bus-range = <0x00 0xff>;
interrupts = <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>;
#interrupt-cells = <1>;
msi-controller;
msi-parent = <&pcie0>;
ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x1000000 /* Port 0 MEM */
0x81000000 0 0xe9000000 0 0xe9000000 0 0x10000>; /* Port 0 IO*/
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc 0>,
<0 0 0 2 &pcie_intc 1>,
<0 0 0 3 &pcie_intc 2>,
<0 0 0 4 &pcie_intc 3>;
pcie_intc: interrupt-controller {
interrupt-controller;
#interrupt-cells = <1>;
};
};
* Axis ARTPEC-6 PCIe interface
This PCIe host controller is based on the Synopsys DesignWare PCIe IP
and thus inherits all the common properties defined in designware-pcie.txt.
Required properties:
- compatible: "axis,artpec6-pcie", "snps,dw-pcie"
- reg: base addresses and lengths of the PCIe controller (DBI),
the phy controller, and configuration address space.
- reg-names: Must include the following entries:
- "dbi"
- "phy"
- "config"
- interrupts: A list of interrupt outputs of the controller. Must contain an
entry for each entry in the interrupt-names property.
- interrupt-names: Must include the following entries:
- "msi": The interrupt that is asserted when an MSI is received
- axis,syscon-pcie: A phandle pointing to the ARTPEC-6 system controller,
used to enable and control the Synopsys IP.
Example:
pcie@f8050000 {
compatible = "axis,artpec6-pcie", "snps,dw-pcie";
reg = <0xf8050000 0x2000
0xf8040000 0x1000
0xc0000000 0x1000>;
reg-names = "dbi", "phy", "config";
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
/* downstream I/O */
ranges = <0x81000000 0 0x00010000 0xc0010000 0 0x00010000
/* non-prefetchable memory */
0x82000000 0 0xc0020000 0xc0020000 0 0x1ffe0000>;
num-lanes = <2>;
interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>;
axis,syscon-pcie = <&syscon>;
};
@@ -3,6 +3,7 @@
*.bc
*.bin
*.bz2
*.c.[012]*.*
*.cis
*.cpio
*.csp
......
GCC plugin infrastructure
=========================
1. Introduction
===============
GCC plugins are loadable modules that provide extra features to the
compiler [1]. They are useful for runtime instrumentation and static analysis.
We can analyse, change and add further code during compilation via
callbacks [2], GIMPLE [3], IPA [4] and RTL passes [5].
The GCC plugin infrastructure of the kernel supports all gcc versions from
4.5 to 6.0, building out-of-tree modules, cross-compilation and building in a
separate directory.
Plugin source files have to be compilable by both a C and a C++ compiler
because gcc versions 4.5 and 4.6 are compiled by a C compiler,
gcc-4.7 can be compiled by either a C or a C++ compiler,
and versions 4.8+ can only be compiled by a C++ compiler.
Currently the GCC plugin infrastructure supports only the x86, arm and arm64
architectures.
This infrastructure was ported from grsecurity [6] and PaX [7].
--
[1] https://gcc.gnu.org/onlinedocs/gccint/Plugins.html
[2] https://gcc.gnu.org/onlinedocs/gccint/Plugin-API.html#Plugin-API
[3] https://gcc.gnu.org/onlinedocs/gccint/GIMPLE.html
[4] https://gcc.gnu.org/onlinedocs/gccint/IPA.html
[5] https://gcc.gnu.org/onlinedocs/gccint/RTL.html
[6] https://grsecurity.net/
[7] https://pax.grsecurity.net/
2. Files
========
$(src)/scripts/gcc-plugins
This is the directory of the GCC plugins.
$(src)/scripts/gcc-plugins/gcc-common.h
This is a compatibility header for GCC plugins.
It should be always included instead of individual gcc headers.
$(src)/scripts/gcc-plugin.sh
This script checks the availability of the included headers in
gcc-common.h and chooses the proper host compiler to build the plugins
(gcc-4.7 can be built by either gcc or g++).
$(src)/scripts/gcc-plugins/gcc-generate-gimple-pass.h
$(src)/scripts/gcc-plugins/gcc-generate-ipa-pass.h
$(src)/scripts/gcc-plugins/gcc-generate-simple_ipa-pass.h
$(src)/scripts/gcc-plugins/gcc-generate-rtl-pass.h
These headers automatically generate the registration structures for
GIMPLE, SIMPLE_IPA, IPA and RTL passes. They support all gcc versions
from 4.5 to 6.0.
They should be preferred to creating the structures by hand.
3. Usage
========
You must install the gcc plugin headers for your gcc version,
e.g., on Ubuntu for gcc-4.9:
apt-get install gcc-4.9-plugin-dev
Enable a GCC plugin based feature in the kernel config:
CONFIG_GCC_PLUGIN_CYC_COMPLEXITY=y
To compile only the plugin(s):
make gcc-plugins
or just run the kernel make and compile the whole kernel with
the cyclomatic complexity GCC plugin.
4. How to add a new GCC plugin
==============================
The GCC plugins are in $(src)/scripts/gcc-plugins/. You can use a file or a directory
here. It must be added to $(src)/scripts/gcc-plugins/Makefile,
$(src)/scripts/Makefile.gcc-plugins and $(src)/arch/Kconfig.
See the cyc_complexity_plugin.c (CONFIG_GCC_PLUGIN_CYC_COMPLEXITY) GCC plugin.
@@ -3021,6 +3021,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
resource_alignment=
Format:
[<order of align>@][<domain>:]<bus>:<slot>.<func>[; ...]
[<order of align>@]pci:<vendor>:<device>\
[:<subvendor>:<subdevice>][; ...]
Specifies alignment and device to reassign
aligned memory resources.
If <order of align> is not specified,
@@ -3039,6 +3041,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
hpmemsize=nn[KMG] The fixed amount of bus space which is
reserved for hotplug bridge's memory window.
Default size is 2 megabytes.
hpbussize=nn The minimum amount of additional bus numbers
reserved for buses below a hotplug bridge.
Default is 1.
realloc= Enable/disable reallocating PCI bridge resources
if allocations done by BIOS are too small to
accommodate resources required by all child
@@ -3070,6 +3075,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
compat Treat PCIe ports as PCI-to-PCI bridges, disable the PCIe
ports driver.
pcie_port_pm= [PCIE] PCIe port power management handling:
off Disable power management of all PCIe ports
force Forcibly enable power management of all PCIe ports
pcie_pme= [PCIE,PM] Native PCIe PME signaling options:
nomsi Do not use MSI for native PCIe PME signaling (this makes
all PCIe root ports use INTx for all services).
......
@@ -1482,6 +1482,11 @@ struct kvm_irq_routing_msi {
__u32 pad;
};
On x86, address_hi is ignored unless the KVM_X2APIC_API_USE_32BIT_IDS
feature of KVM_CAP_X2APIC_API capability is enabled. If it is enabled,
address_hi bits 31-8 provide bits 31-8 of the destination id. Bits 7-0 of
address_hi must be zero.
struct kvm_irq_routing_s390_adapter {
__u64 ind_addr;
__u64 summary_addr;
@@ -1583,6 +1588,17 @@ struct kvm_lapic_state {
Reads the Local APIC registers and copies them into the input argument. The
data format and layout are the same as documented in the architecture manual.
If KVM_X2APIC_API_USE_32BIT_IDS feature of KVM_CAP_X2APIC_API is
enabled, then the format of APIC_ID register depends on the APIC mode
(reported by MSR_IA32_APICBASE) of its VCPU. x2APIC stores APIC ID in
the APIC_ID register (bytes 32-35). xAPIC only allows an 8-bit APIC ID
which is stored in bits 31-24 of the APIC register, or equivalently in
byte 35 of struct kvm_lapic_state's regs field. KVM_GET_LAPIC must then
be called after MSR_IA32_APICBASE has been set with KVM_SET_MSR.
If KVM_X2APIC_API_USE_32BIT_IDS feature is disabled, struct kvm_lapic_state
always uses xAPIC format.
4.58 KVM_SET_LAPIC
@@ -1600,6 +1616,10 @@ struct kvm_lapic_state {
Copies the input argument into the Local APIC registers. The data format
and layout are the same as documented in the architecture manual.
The format of the APIC ID register (bytes 32-35 of struct kvm_lapic_state's
regs field) depends on the state of the KVM_CAP_X2APIC_API capability.
See the note in KVM_GET_LAPIC.
4.59 KVM_IOEVENTFD
@@ -2032,6 +2052,12 @@ registers, find a list below:
MIPS | KVM_REG_MIPS_CP0_CONFIG5 | 32
MIPS | KVM_REG_MIPS_CP0_CONFIG7 | 32
MIPS | KVM_REG_MIPS_CP0_ERROREPC | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH1 | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH2 | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH3 | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH4 | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH5 | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH6 | 64
MIPS | KVM_REG_MIPS_COUNT_CTL | 64
MIPS | KVM_REG_MIPS_COUNT_RESUME | 64
MIPS | KVM_REG_MIPS_COUNT_HZ | 64
@@ -2156,7 +2182,7 @@ after pausing the vcpu, but before it is resumed.
4.71 KVM_SIGNAL_MSI
Capability: KVM_CAP_SIGNAL_MSI
Architectures: x86 arm64
Type: vm ioctl
Parameters: struct kvm_msi (in)
Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
@@ -2169,10 +2195,22 @@ struct kvm_msi {
__u32 address_hi;
__u32 data;
__u32 flags;
__u32 devid;
__u8 pad[12];
};
flags: KVM_MSI_VALID_DEVID: devid contains a valid value
devid: If KVM_MSI_VALID_DEVID is set, contains a unique device identifier
for the device that wrote the MSI message.
For PCI, this is usually a BDF identifier in the lower 16 bits.
The per-VM KVM_CAP_MSI_DEVID capability advertises the need to provide
the device ID. If this capability is not set, userland cannot rely on
the kernel to allow setting the KVM_MSI_VALID_DEVID flag.
On x86, address_hi is ignored unless the KVM_CAP_X2APIC_API capability is
enabled. If it is enabled, address_hi bits 31-8 provide bits 31-8 of the
destination id. Bits 7-0 of address_hi must be zero.
4.71 KVM_CREATE_PIT2
@@ -2520,6 +2558,7 @@ Parameters: struct kvm_device_attr
Returns: 0 on success, -1 on error
Errors:
ENXIO: The group or attribute is unknown/unsupported for this device
or hardware support is missing.
EPERM: The attribute cannot (currently) be accessed this way
(e.g. read-only attribute, or attribute that only makes
sense when the device is in a different state)
@@ -2547,6 +2586,7 @@ Parameters: struct kvm_device_attr
Returns: 0 on success, -1 on error
Errors:
ENXIO: The group or attribute is unknown/unsupported for this device
or hardware support is missing.
Tests whether a device supports a particular attribute. A successful
return indicates the attribute is implemented. It does not necessarily
@@ -3803,6 +3843,42 @@ Allows use of runtime-instrumentation introduced with zEC12 processor.
Will return -EINVAL if the machine does not support runtime-instrumentation.
Will return -EBUSY if a VCPU has already been created.
7.7 KVM_CAP_X2APIC_API
Architectures: x86
Parameters: args[0] - features that should be enabled
Returns: 0 on success, -EINVAL when args[0] contains invalid features
Valid feature flags in args[0] are
#define KVM_X2APIC_API_USE_32BIT_IDS (1ULL << 0)
#define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK (1ULL << 1)
Enabling KVM_X2APIC_API_USE_32BIT_IDS changes the behavior of
KVM_SET_GSI_ROUTING, KVM_SIGNAL_MSI, KVM_SET_LAPIC, and KVM_GET_LAPIC,
allowing the use of 32-bit APIC IDs. See KVM_CAP_X2APIC_API in their
respective sections.
KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK must be enabled for x2APIC to work
in logical mode or with more than 255 VCPUs. Otherwise, KVM treats 0xff
as a broadcast even in x2APIC mode in order to support physical x2APIC
without interrupt remapping. This is undesirable in logical mode,
where 0xff represents CPUs 0-7 in cluster 0.
7.8 KVM_CAP_S390_USER_INSTR0
Architectures: s390
Parameters: none
With this capability enabled, all illegal instructions 0x0000 (2 bytes) will
be intercepted and forwarded to user space. User space can use this
mechanism e.g. to realize 2-byte software breakpoints. The kernel will
not inject an operation exception for these instructions; user space has
to take care of that.
This capability can be enabled dynamically even if VCPUs were already
created and are running.
8. Other capabilities.
----------------------
......
@@ -4,16 +4,22 @@ ARM Virtual Generic Interrupt Controller (VGIC)
Device types supported:
KVM_DEV_TYPE_ARM_VGIC_V2 ARM Generic Interrupt Controller v2.0
KVM_DEV_TYPE_ARM_VGIC_V3 ARM Generic Interrupt Controller v3.0
KVM_DEV_TYPE_ARM_VGIC_ITS ARM Interrupt Translation Service Controller
Only one VGIC instance of the V2/V3 types above may be instantiated through
either this API or the legacy KVM_CREATE_IRQCHIP api. The created VGIC will
act as the VM interrupt controller, requiring emulated user-space devices to
inject interrupts to the VGIC instead of directly to CPUs.
Creating a guest GICv3 device requires a host GICv3 as well.
GICv3 implementations with hardware compatibility support allow a guest GICv2
as well.
Creating a virtual ITS controller requires a host GICv3 (but does not depend
on having physical ITS controllers).
There can be multiple ITS controllers per guest; each of them has to have
a separate, non-overlapping MMIO region.
Groups:
KVM_DEV_ARM_VGIC_GRP_ADDR
Attributes:
@@ -39,6 +45,13 @@ Groups:
Only valid for KVM_DEV_TYPE_ARM_VGIC_V3.
This address needs to be 64K aligned.
KVM_VGIC_V3_ADDR_TYPE_ITS (rw, 64-bit)
Base address in the guest physical address space of the GICv3 ITS
control register frame. The ITS allows MSI(-X) interrupts to be
injected into guests. This extension is optional. If the kernel
does not support the ITS, the call returns -ENODEV.
Only valid for KVM_DEV_TYPE_ARM_VGIC_ITS.
This address needs to be 64K aligned and the region covers 128K.
KVM_DEV_ARM_VGIC_GRP_DIST_REGS
Attributes:
@@ -109,8 +122,8 @@ Groups:
KVM_DEV_ARM_VGIC_GRP_CTRL
Attributes:
KVM_DEV_ARM_VGIC_CTRL_INIT
request the initialization of the VGIC or ITS, no additional parameter
in kvm_device_attr.addr.
Errors:
-ENXIO: VGIC not properly configured as required prior to calling
this attribute
......
@@ -20,7 +20,8 @@ Enables Collaborative Memory Management Assist (CMMA) for the virtual machine.
1.2. ATTRIBUTE: KVM_S390_VM_MEM_CLR_CMMA
Parameters: none
Returns: -EINVAL if CMMA was not enabled
0 otherwise
Clear the CMMA status for all guest pages, so any pages the guest marked
as unused are again used and may not be reclaimed by the host.
@@ -85,6 +86,90 @@ Returns: -EBUSY in case 1 or more vcpus are already activated (only in write
-ENOMEM if not enough memory is available to process the ioctl
0 in case of success
2.3. ATTRIBUTE: KVM_S390_VM_CPU_MACHINE_FEAT (r/o)
Allows user space to retrieve available cpu features. A feature is available if
provided by the hardware and supported by kvm. In theory, cpu features could
even be completely emulated by kvm.
struct kvm_s390_vm_cpu_feat {
__u64 feat[16]; # Bitmap (1 = feature available), MSB 0 bit numbering
};
Parameters: address of a buffer to load the feature list from.
Returns: -EFAULT if the given address is not accessible from kernel space.
0 in case of success.
2.4. ATTRIBUTE: KVM_S390_VM_CPU_PROCESSOR_FEAT (r/w)
Allows user space to retrieve or change enabled cpu features for all VCPUs of a
VM. Features that are not available cannot be enabled.
See 2.3. for a description of the parameter struct.
Parameters: address of a buffer to store/load the feature list from.
Returns: -EFAULT if the given address is not accessible from kernel space.
-EINVAL if a cpu feature that is not available is to be enabled.
-EBUSY if at least one VCPU has already been defined.
0 in case of success.
2.5. ATTRIBUTE: KVM_S390_VM_CPU_MACHINE_SUBFUNC (r/o)
Allows user space to retrieve available cpu subfunctions without any filtering
done by a set IBC. These subfunctions are indicated to the guest VCPU via
query or "test bit" subfunctions and used e.g. by cpacf functions, plo and ptff.
A subfunction block is only valid if KVM_S390_VM_CPU_MACHINE contains the
STFL(E) bit introducing the affected instruction. If the affected instruction
indicates subfunctions via a "query subfunction", the response block is
contained in the returned struct. If the affected instruction
indicates subfunctions via a "test bit" mechanism, the subfunction codes are
contained in the returned struct in MSB 0 bit numbering.
struct kvm_s390_vm_cpu_subfunc {
u8 plo[32]; # always valid (ESA/390 feature)
u8 ptff[16]; # valid with TOD-clock steering
u8 kmac[16]; # valid with Message-Security-Assist
u8 kmc[16]; # valid with Message-Security-Assist
u8 km[16]; # valid with Message-Security-Assist
u8 kimd[16]; # valid with Message-Security-Assist
u8 klmd[16]; # valid with Message-Security-Assist
u8 pckmo[16]; # valid with Message-Security-Assist-Extension 3
u8 kmctr[16]; # valid with Message-Security-Assist-Extension 4
u8 kmf[16]; # valid with Message-Security-Assist-Extension 4
u8 kmo[16]; # valid with Message-Security-Assist-Extension 4
u8 pcc[16]; # valid with Message-Security-Assist-Extension 4
u8 ppno[16]; # valid with Message-Security-Assist-Extension 5
u8 reserved[1824]; # reserved for future instructions
};
Parameters: address of a buffer to load the subfunction blocks from.
Returns: -EFAULT if the given address is not accessible from kernel space.
0 in case of success.
2.6. ATTRIBUTE: KVM_S390_VM_CPU_PROCESSOR_SUBFUNC (r/w)
Allows user space to retrieve or change cpu subfunctions to be indicated for
all VCPUs of a VM. This attribute will only be available if kernel and
hardware support are in place.
The kernel uses the configured subfunction blocks for indication to
the guest. A subfunction block will only be used if the associated STFL(E) bit
has not been disabled by user space (so the instruction to be queried is
actually available for the guest).
As long as no data has been written, a read will fail. The IBC will be used
to determine available subfunctions in this case; this will guarantee backward
compatibility.
See 2.5. for a description of the parameter struct.
Parameters: address of a buffer to store/load the subfunction blocks from.
Returns: -EFAULT if the given address is not accessible from kernel space.
-EINVAL when reading, if there was no write yet.
-EBUSY if at least one VCPU has already been defined.
0 in case of success.
3. GROUP: KVM_S390_VM_TOD
Architectures: s390
......
@@ -89,7 +89,7 @@ In mmu_spte_clear_track_bits():
old_spte = *spte;
/* 'if' condition is satisfied. */
if (old_spte.Accessed == 1 &&
old_spte.W == 0)
spte = 0ull;
on fast page fault path:
@@ -102,7 +102,7 @@ In mmu_spte_clear_track_bits():
old_spte = xchg(spte, 0ull)
if (old_spte.Accessed == 1)
kvm_set_pfn_accessed(spte.pfn);
if (old_spte.Dirty == 1)
kvm_set_pfn_dirty(spte.pfn);
......
@@ -5094,6 +5094,15 @@ L: linux-scsi@vger.kernel.org
S: Odd Fixes (e.g., new signatures)
F: drivers/scsi/fdomain.*
GCC PLUGINS
M: Kees Cook <keescook@chromium.org>
R: Emese Revfy <re.emese@gmail.com>
L: kernel-hardening@lists.openwall.com
S: Maintained
F: scripts/gcc-plugins/
F: scripts/gcc-plugin.sh
F: Documentation/gcc-plugins.txt
GCOV BASED KERNEL PROFILING
M: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
S: Maintained
@@ -8874,6 +8883,7 @@ L: linux-pci@vger.kernel.org
Q: http://patchwork.ozlabs.org/project/linux-pci/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
S: Supported
F: Documentation/devicetree/bindings/pci/
F: Documentation/PCI/
F: drivers/pci/
F: include/linux/pci*
@@ -8937,6 +8947,13 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/pci/host/*mvebu*
PCI DRIVER FOR AARDVARK (Marvell Armada 3700)
M: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/pci/host/pci-aardvark.c
PCI DRIVER FOR NVIDIA TEGRA
M: Thierry Reding <thierry.reding@gmail.com>
L: linux-tegra@vger.kernel.org
@@ -9019,6 +9036,15 @@ S: Maintained
F: Documentation/devicetree/bindings/pci/xgene-pci-msi.txt
F: drivers/pci/host/pci-xgene-msi.c
PCIE DRIVER FOR AXIS ARTPEC
M: Niklas Cassel <niklas.cassel@axis.com>
M: Jesper Nilsson <jesper.nilsson@axis.com>
L: linux-arm-kernel@axis.com
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/axis,artpec*
F: drivers/pci/host/*artpec*
PCIE DRIVER FOR HISILICON
M: Zhou Wang <wangzhou1@hisilicon.com>
M: Gabriele Paoloni <gabriele.paoloni@huawei.com>
......
@@ -371,26 +371,27 @@ CFLAGS_KERNEL =
AFLAGS_KERNEL =
LDFLAGS_vmlinux =
CFLAGS_GCOV = -fprofile-arcs -ftest-coverage -fno-tree-loop-im
CFLAGS_KCOV := $(call cc-option,-fsanitize-coverage=trace-pc,)
# Use USERINCLUDE when you must reference the UAPI directories only.
USERINCLUDE := \
-I$(srctree)/arch/$(hdr-arch)/include/uapi \
-I$(objtree)/arch/$(hdr-arch)/include/generated/uapi \
-I$(srctree)/include/uapi \
-I$(objtree)/include/generated/uapi \
-include $(srctree)/include/linux/kconfig.h
# Use LINUXINCLUDE when you must reference the include/ directory.
# Needed to be compatible with the O= option
LINUXINCLUDE := \
-I$(srctree)/arch/$(hdr-arch)/include \
-I$(objtree)/arch/$(hdr-arch)/include/generated/uapi \
-I$(objtree)/arch/$(hdr-arch)/include/generated \
$(if $(KBUILD_SRC), -I$(srctree)/include) \
-I$(objtree)/include
LINUXINCLUDE += $(filter-out $(LINUXINCLUDE),$(USERINCLUDE))
KBUILD_CPPFLAGS := -D__KERNEL__
@@ -554,7 +555,7 @@ ifeq ($(KBUILD_EXTMOD),)
# in parallel
PHONY += scripts
scripts: scripts_basic include/config/auto.conf include/config/tristate.conf \
asm-generic gcc-plugins
$(Q)$(MAKE) $(build)=$(@)
# Objects we will link into vmlinux / subdirs we need to visit
@@ -635,6 +636,15 @@ endif
# Tell gcc to never replace conditional load with a non-conditional one
KBUILD_CFLAGS += $(call cc-option,--param=allow-store-data-races=0)
PHONY += gcc-plugins
gcc-plugins: scripts_basic
ifdef CONFIG_GCC_PLUGINS
$(Q)$(MAKE) $(build)=scripts/gcc-plugins
endif
@:
include scripts/Makefile.gcc-plugins
ifdef CONFIG_READABLE_ASM
# Disable optimizations that make assembler listings hard to read.
# reorder blocks reorders the control in the function
@@ -666,21 +676,11 @@ endif
endif
# Find arch-specific stack protector compiler sanity-checking script.
ifdef CONFIG_CC_STACKPROTECTOR
stackp-path := $(srctree)/scripts/gcc-$(SRCARCH)_$(BITS)-has-stack-protector.sh
stackp-check := $(wildcard $(stackp-path))
endif
KBUILD_CFLAGS += $(stackp-flag)
ifeq ($(cc-name),clang)
KBUILD_CPPFLAGS += $(call cc-option,-Qunused-arguments,)
KBUILD_CPPFLAGS += $(call cc-option,-Wno-unknown-warning-option,)
@@ -1019,7 +1019,7 @@ prepare1: prepare2 $(version_h) include/generated/utsrelease.h \
archprepare: archheaders archscripts prepare1 scripts_basic
prepare0: archprepare gcc-plugins
$(Q)$(MAKE) $(build)=.
# All the preparing..
@@ -1531,6 +1531,7 @@ clean: $(clean-dirs)
-o -name '.*.d' -o -name '.*.tmp' -o -name '*.mod.c' \
-o -name '*.symtypes' -o -name 'modules.order' \
-o -name modules.builtin -o -name '.tmp_*.o.*' \
-o -name '*.c.[012]*.*' \
-o -name '*.gcno' \) -type f -print | xargs rm -f
# Generate tags for editors
@@ -1641,7 +1642,7 @@ endif
$(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) \
$(build)=$(build-dir)
# Make sure the latest headers are built for Documentation
Documentation/ samples/: headers_install
%/: prepare scripts FORCE
$(cmd_crmodverdir)
$(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) \
......
@@ -357,6 +357,43 @@ config SECCOMP_FILTER
See Documentation/prctl/seccomp_filter.txt for details.
config HAVE_GCC_PLUGINS
bool
help
An arch should select this symbol if it supports building with
GCC plugins.
menuconfig GCC_PLUGINS
bool "GCC plugins"
depends on HAVE_GCC_PLUGINS
depends on !COMPILE_TEST
help
GCC plugins are loadable modules that provide extra features to the
compiler. They are useful for runtime instrumentation and static analysis.
See Documentation/gcc-plugins.txt for details.
config GCC_PLUGIN_CYC_COMPLEXITY
bool "Compute the cyclomatic complexity of a function"
depends on GCC_PLUGINS
help
The complexity M of a function's control flow graph is defined as:
M = E - N + 2P
where
E = the number of edges
N = the number of nodes
P = the number of connected components (exit nodes).
config GCC_PLUGIN_SANCOV
bool
depends on GCC_PLUGINS
help
This plugin inserts a __sanitizer_cov_trace_pc() call at the start of
basic blocks. It supports all gcc versions with plugin support (from
gcc-4.5 on). It is based on the commit "Add fuzzing coverage support"
by Dmitry Vyukov <dvyukov@google.com>.
config HAVE_CC_STACKPROTECTOR
bool
help
......
@@ -15,7 +15,7 @@ targets := vmlinux.gz vmlinux \
OBJSTRIP := $(obj)/tools/objstrip
HOSTCFLAGS := -Wall -I$(objtree)/usr/include
BOOTCFLAGS += -I$(objtree)/$(obj) -I$(srctree)/$(obj)
# SRM bootable image. Copy to offset 512 of a partition.
$(obj)/bootimage: $(addprefix $(obj)/tools/,mkbb lxboot bootlx) $(obj)/vmlinux.nh
......
@@ -54,6 +54,7 @@ config ARM
select HAVE_FTRACE_MCOUNT_RECORD if (!XIP_KERNEL)
select HAVE_FUNCTION_GRAPH_TRACER if (!THUMB2_KERNEL)
select HAVE_FUNCTION_TRACER if (!XIP_KERNEL)
select HAVE_GCC_PLUGINS
select HAVE_GENERIC_DMA_COHERENT
select HAVE_HW_BREAKPOINT if (PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7))
select HAVE_IDE if PCI || ISA || PCMCIA
@@ -699,7 +700,7 @@ config ARCH_VIRT
depends on ARCH_MULTI_V7
select ARM_AMBA
select ARM_GIC
select ARM_GIC_V2M if PCI
select ARM_GIC_V3
select ARM_PSCI
select HAVE_ARM_ARCH_TIMER
......
@@ -66,6 +66,8 @@ extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
extern void __init_stage2_translation(void);
extern void __kvm_hyp_reset(unsigned long);
#endif
#endif /* __ARM_KVM_ASM_H__ */
@@ -241,8 +241,7 @@ int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index);
-static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
-				       phys_addr_t pgd_ptr,
+static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 				       unsigned long hyp_stack_ptr,
 				       unsigned long vector_ptr)
 {
@@ -251,18 +250,13 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 	 * code. The init code doesn't need to preserve these
 	 * registers as r0-r3 are already callee saved according to
 	 * the AAPCS.
-	 * Note that we slightly misuse the prototype by casing the
+	 * Note that we slightly misuse the prototype by casting the
 	 * stack pointer to a void *.
-	 *
-	 * We don't have enough registers to perform the full init in
-	 * one go. Install the boot PGD first, and then install the
-	 * runtime PGD, stack pointer and vectors. The PGDs are always
-	 * passed as the third argument, in order to be passed into
-	 * r2-r3 to the init code (yes, this is compliant with the
-	 * PCS!).
-	 */
-	kvm_call_hyp(NULL, 0, boot_pgd_ptr);
+	 * The PGDs are always passed as the third argument, in order
+	 * to be passed into r2-r3 to the init code (yes, this is
+	 * compliant with the PCS!).
+	 */
 	kvm_call_hyp((void*)hyp_stack_ptr, vector_ptr, pgd_ptr);
 }
@@ -272,16 +266,13 @@ static inline void __cpu_init_stage2(void)
 	kvm_call_hyp(__init_stage2_translation);
 }
-static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
+static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr,
 					phys_addr_t phys_idmap_start)
 {
-	/*
-	 * TODO
-	 * kvm_call_reset(boot_pgd_ptr, phys_idmap_start);
-	 */
+	kvm_call_hyp((void *)virt_to_idmap(__kvm_hyp_reset), vector_ptr);
 }
-static inline int kvm_arch_dev_ioctl_check_extension(long ext)
+static inline int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	return 0;
 }
...
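To make the AAPCS remark in the hunk above concrete: on 32-bit ARM the first four HVC arguments travel in r0-r3, and a 64-bit value such as the PGD occupies an even/odd register pair. The single remaining call is from the patch; the per-register annotations here are editorial:

	kvm_call_hyp((void *)hyp_stack_ptr,	/* r0: HYP stack top (cast to void *) */
		     vector_ptr,		/* r1: HYP vector base */
		     pgd_ptr);			/* r2-r3: 64-bit runtime HYP pgd */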
@@ -25,9 +25,6 @@
 #define __hyp_text __section(.hyp.text) notrace
-#define kern_hyp_va(v) (v)
-#define hyp_kern_va(v) (v)
-
 #define __ACCESS_CP15(CRn, Op1, CRm, Op2)	\
 	"mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32
 #define __ACCESS_CP15_64(Op1, CRm)	\
...
@@ -26,16 +26,7 @@
  * We directly use the kernel VA for the HYP, as we can directly share
  * the mapping (HTTBR "covers" TTBR1).
  */
-#define HYP_PAGE_OFFSET_MASK	UL(~0)
-#define HYP_PAGE_OFFSET		PAGE_OFFSET
-#define KERN_TO_HYP(kva)	(kva)
-
-/*
- * Our virtual mapping for the boot-time MMU-enable code. Must be
- * shared across all the page-tables. Conveniently, we use the vectors
- * page, where no kernel data will ever be shared with HYP.
- */
-#define TRAMPOLINE_VA		UL(CONFIG_VECTORS_BASE)
+#define kern_hyp_va(kva)	(kva)
 /*
  * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation levels.
@@ -49,9 +40,8 @@
 #include <asm/pgalloc.h>
 #include <asm/stage2_pgtable.h>
-int create_hyp_mappings(void *from, void *to);
+int create_hyp_mappings(void *from, void *to, pgprot_t prot);
 int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
-void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 void stage2_unmap_vm(struct kvm *kvm);
@@ -65,7 +55,6 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
 void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 phys_addr_t kvm_mmu_get_httbr(void);
-phys_addr_t kvm_mmu_get_boot_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
 phys_addr_t kvm_get_idmap_start(void);
 int kvm_mmu_init(void);
...
@@ -22,6 +22,7 @@ struct hw_pci {
 	struct msi_controller *msi_ctrl;
 	struct pci_ops	*ops;
 	int		nr_controllers;
+	unsigned int	io_optional:1;
 	void		**private_data;
 	int		(*setup)(int nr, struct pci_sys_data *);
 	struct pci_bus *(*scan)(int nr, struct pci_sys_data *);
...
@@ -97,7 +97,9 @@ extern pgprot_t	pgprot_s2_device;
 #define PAGE_READONLY_EXEC	_MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY)
 #define PAGE_KERNEL		_MOD_PROT(pgprot_kernel, L_PTE_XN)
 #define PAGE_KERNEL_EXEC	pgprot_kernel
-#define PAGE_HYP		_MOD_PROT(pgprot_kernel, L_PTE_HYP)
+#define PAGE_HYP		_MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_XN)
+#define PAGE_HYP_EXEC		_MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_RDONLY)
+#define PAGE_HYP_RO		_MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_RDONLY | L_PTE_XN)
 #define PAGE_HYP_DEVICE		_MOD_PROT(pgprot_hyp_device, L_PTE_HYP)
 #define PAGE_S2			_MOD_PROT(pgprot_s2, L_PTE_S2_RDONLY)
 #define PAGE_S2_DEVICE		_MOD_PROT(pgprot_s2_device, L_PTE_S2_RDONLY)
...
@@ -80,6 +80,10 @@ static inline bool is_kernel_in_hyp_mode(void)
 	return false;
 }
+/* The section containing the hypervisor idmap text */
+extern char __hyp_idmap_text_start[];
+extern char __hyp_idmap_text_end[];
+
 /* The section containing the hypervisor text */
 extern char __hyp_text_start[];
 extern char __hyp_text_end[];
...
@@ -410,7 +410,8 @@ static int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return irq;
 }
-static int pcibios_init_resources(int busnr, struct pci_sys_data *sys)
+static int pcibios_init_resource(int busnr, struct pci_sys_data *sys,
+				 int io_optional)
 {
 	int ret;
 	struct resource_entry *window;
@@ -420,6 +421,14 @@ static int pcibios_init_resources(int busnr, struct pci_sys_data *sys)
 				 &iomem_resource, sys->mem_offset);
 	}
+	/*
+	 * If a platform says I/O port support is optional, we don't add
+	 * the default I/O space. The platform is responsible for adding
+	 * any I/O space it needs.
+	 */
+	if (io_optional)
+		return 0;
+
 	resource_list_for_each_entry(window, &sys->resources)
 		if (resource_type(window->res) == IORESOURCE_IO)
 			return 0;
@@ -466,7 +475,7 @@ static void pcibios_init_hw(struct device *parent, struct hw_pci *hw,
 		if (ret > 0) {
 			struct pci_host_bridge *host_bridge;
-			ret = pcibios_init_resources(nr, sys);
+			ret = pcibios_init_resource(nr, sys, hw->io_optional);
 			if (ret) {
 				kfree(sys);
 				break;
@@ -515,25 +524,23 @@ void pci_common_init_dev(struct device *parent, struct hw_pci *hw)
 	list_for_each_entry(sys, &head, node) {
 		struct pci_bus *bus = sys->bus;
-		if (!pci_has_flag(PCI_PROBE_ONLY)) {
-			struct pci_bus *child;
-
-			/*
-			 * Size the bridge windows.
-			 */
-			pci_bus_size_bridges(bus);
-
-			/*
-			 * Assign resources.
-			 */
-			pci_bus_assign_resources(bus);
+		/*
+		 * We insert PCI resources into the iomem_resource and
+		 * ioport_resource trees in either pci_bus_claim_resources()
+		 * or pci_bus_assign_resources().
+		 */
+		if (pci_has_flag(PCI_PROBE_ONLY)) {
+			pci_bus_claim_resources(bus);
+		} else {
+			struct pci_bus *child;
+
+			pci_bus_size_bridges(bus);
+			pci_bus_assign_resources(bus);
 			list_for_each_entry(child, &bus->children, node)
 				pcie_bus_configure_settings(child);
 		}
-		/*
-		 * Tell drivers about devices found.
-		 */
 		pci_bus_add_devices(bus);
 	}
 }
@@ -590,18 +597,6 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
 	return start;
 }
-/**
- * pcibios_enable_device - Enable I/O and memory.
- * @dev: PCI device to be enabled
- */
-int pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	if (pci_has_flag(PCI_PROBE_ONLY))
-		return 0;
-
-	return pci_enable_resources(dev, mask);
-}
-
 int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 			enum pci_mmap_state mmap_state, int write_combine)
 {
...
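A sketch of how a board file might consume the new io_optional flag (the field and the pcibios_init_resource() behaviour are from the patch; the example struct, its ops and setup hook are hypothetical):

	/* Hypothetical board file: opt out of the default I/O window and
	 * register whatever I/O space the platform actually has itself.
	 */
	static struct hw_pci example_pci __initdata = {
		.nr_controllers	= 1,
		.ops		= &example_pci_ops,	/* hypothetical pci_ops */
		.setup		= example_pci_setup,	/* hypothetical setup() */
		.io_optional	= 1,			/* skip the default I/O resource */
	};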
@@ -46,13 +46,6 @@ config KVM_ARM_HOST
 	---help---
 	  Provides host support for ARM processors.
-config KVM_NEW_VGIC
-	bool "New VGIC implementation"
-	depends on KVM
-	default y
-	---help---
-	  uses the new VGIC implementation
-
 source drivers/vhost/Kconfig
 endif # VIRTUALIZATION
@@ -22,7 +22,6 @@ obj-y += kvm-arm.o init.o interrupts.o
 obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
 obj-y += coproc.o coproc_a15.o coproc_a7.o mmio.o psci.o perf.o
-ifeq ($(CONFIG_KVM_NEW_VGIC),y)
 obj-y += $(KVM)/arm/vgic/vgic.o
 obj-y += $(KVM)/arm/vgic/vgic-init.o
 obj-y += $(KVM)/arm/vgic/vgic-irqfd.o
@@ -30,9 +29,4 @@ obj-y += $(KVM)/arm/vgic/vgic-v2.o
 obj-y += $(KVM)/arm/vgic/vgic-mmio.o
 obj-y += $(KVM)/arm/vgic/vgic-mmio-v2.o
 obj-y += $(KVM)/arm/vgic/vgic-kvm-device.o
-else
-obj-y += $(KVM)/arm/vgic.o
-obj-y += $(KVM)/arm/vgic-v2.o
-obj-y += $(KVM)/arm/vgic-v2-emul.o
-endif
 obj-y += $(KVM)/arm/arch_timer.o
@@ -20,6 +20,7 @@
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/kvm_host.h>
+#include <linux/list.h>
 #include <linux/module.h>
 #include <linux/vmalloc.h>
 #include <linux/fs.h>
@@ -122,7 +123,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (ret)
 		goto out_fail_alloc;
-	ret = create_hyp_mappings(kvm, kvm + 1);
+	ret = create_hyp_mappings(kvm, kvm + 1, PAGE_HYP);
 	if (ret)
 		goto out_free_stage2_pgd;
@@ -201,7 +202,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = KVM_MAX_VCPUS;
 		break;
 	default:
-		r = kvm_arch_dev_ioctl_check_extension(ext);
+		r = kvm_arch_dev_ioctl_check_extension(kvm, ext);
 		break;
 	}
 	return r;
@@ -239,7 +240,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 	if (err)
 		goto free_vcpu;
-	err = create_hyp_mappings(vcpu, vcpu + 1);
+	err = create_hyp_mappings(vcpu, vcpu + 1, PAGE_HYP);
 	if (err)
 		goto vcpu_uninit;
@@ -377,7 +378,7 @@ void force_vm_exit(const cpumask_t *mask)
 /**
  * need_new_vmid_gen - check that the VMID is still valid
- * @kvm: The VM's VMID to checkt
+ * @kvm: The VM's VMID to check
  *
  * return true if there is a new generation of VMIDs being used
 *
@@ -616,7 +617,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 * Enter the guest
 		 */
 		trace_kvm_entry(*vcpu_pc(vcpu));
-		__kvm_guest_enter();
+		guest_enter_irqoff();
 		vcpu->mode = IN_GUEST_MODE;
 		ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
@@ -642,14 +643,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		local_irq_enable();
 		/*
-		 * We do local_irq_enable() before calling kvm_guest_exit() so
+		 * We do local_irq_enable() before calling guest_exit() so
 		 * that if a timer interrupt hits while running the guest we
 		 * account that tick as being spent in the guest. We enable
-		 * preemption after calling kvm_guest_exit() so that if we get
+		 * preemption after calling guest_exit() so that if we get
 		 * preempted we make sure ticks after that is not counted as
 		 * guest time.
 		 */
-		kvm_guest_exit();
+		guest_exit();
 		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 		/*
@@ -1039,7 +1040,6 @@ long kvm_arch_vm_ioctl(struct file *filp,
 static void cpu_init_hyp_mode(void *dummy)
 {
-	phys_addr_t boot_pgd_ptr;
 	phys_addr_t pgd_ptr;
 	unsigned long hyp_stack_ptr;
 	unsigned long stack_page;
@@ -1048,13 +1048,12 @@ static void cpu_init_hyp_mode(void *dummy)
 	/* Switch from the HYP stub to our own HYP init vector */
 	__hyp_set_vectors(kvm_get_idmap_vector());
-	boot_pgd_ptr = kvm_mmu_get_boot_httbr();
 	pgd_ptr = kvm_mmu_get_httbr();
 	stack_page = __this_cpu_read(kvm_arm_hyp_stack_page);
 	hyp_stack_ptr = stack_page + PAGE_SIZE;
 	vector_ptr = (unsigned long)kvm_ksym_ref(__kvm_hyp_vector);
-	__cpu_init_hyp_mode(boot_pgd_ptr, pgd_ptr, hyp_stack_ptr, vector_ptr);
+	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
 	__cpu_init_stage2();
 	kvm_arm_init_debug();
@@ -1076,15 +1075,9 @@ static void cpu_hyp_reinit(void)
 static void cpu_hyp_reset(void)
 {
-	phys_addr_t boot_pgd_ptr;
-	phys_addr_t phys_idmap_start;
-
-	if (!is_kernel_in_hyp_mode()) {
-		boot_pgd_ptr = kvm_mmu_get_boot_httbr();
-		phys_idmap_start = kvm_get_idmap_start();
-
-		__cpu_reset_hyp_mode(boot_pgd_ptr, phys_idmap_start);
-	}
+	if (!is_kernel_in_hyp_mode())
+		__cpu_reset_hyp_mode(hyp_default_vectors,
+				     kvm_get_idmap_start());
 }
 static void _kvm_arch_hardware_enable(void *discard)
@@ -1294,14 +1287,14 @@ static int init_hyp_mode(void)
 	 * Map the Hyp-code called directly from the host
 	 */
 	err = create_hyp_mappings(kvm_ksym_ref(__hyp_text_start),
-				  kvm_ksym_ref(__hyp_text_end));
+				  kvm_ksym_ref(__hyp_text_end), PAGE_HYP_EXEC);
 	if (err) {
 		kvm_err("Cannot map world-switch code\n");
 		goto out_err;
 	}
 	err = create_hyp_mappings(kvm_ksym_ref(__start_rodata),
-				  kvm_ksym_ref(__end_rodata));
+				  kvm_ksym_ref(__end_rodata), PAGE_HYP_RO);
 	if (err) {
 		kvm_err("Cannot map rodata section\n");
 		goto out_err;
@@ -1312,7 +1305,8 @@ static int init_hyp_mode(void)
 	 */
 	for_each_possible_cpu(cpu) {
 		char *stack_page = (char *)per_cpu(kvm_arm_hyp_stack_page, cpu);
-		err = create_hyp_mappings(stack_page, stack_page + PAGE_SIZE);
+		err = create_hyp_mappings(stack_page, stack_page + PAGE_SIZE,
+					  PAGE_HYP);
 		if (err) {
 			kvm_err("Cannot map hyp stack\n");
@@ -1324,7 +1318,7 @@ static int init_hyp_mode(void)
 		kvm_cpu_context_t *cpu_ctxt;
 		cpu_ctxt = per_cpu_ptr(kvm_host_cpu_state, cpu);
-		err = create_hyp_mappings(cpu_ctxt, cpu_ctxt + 1);
+		err = create_hyp_mappings(cpu_ctxt, cpu_ctxt + 1, PAGE_HYP);
 		if (err) {
 			kvm_err("Cannot map host CPU state: %d\n", err);
@@ -1332,10 +1326,6 @@ static int init_hyp_mode(void)
 		}
 	}
-#ifndef CONFIG_HOTPLUG_CPU
-	free_boot_hyp_pgd();
-#endif
-
 	/* set size of VMID supported by CPU */
 	kvm_vmid_bits = kvm_get_vmid_bits();
 	kvm_info("%d-bit VMID\n", kvm_vmid_bits);
...
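The accounting comment above boils down to a strict ordering on the exit path; a condensed editorial restatement (names as in the patch):

	local_irq_enable();	/* a pending timer tick fires here ...	   */
	guest_exit();		/* ... and is still charged to the guest   */
	preempt_enable();	/* only now can another task preempt us    */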
@@ -210,7 +210,7 @@ bool kvm_condition_valid(struct kvm_vcpu *vcpu)
  * @vcpu:	The VCPU pointer
  *
  * When exceptions occur while instructions are executed in Thumb IF-THEN
- * blocks, the ITSTATE field of the CPSR is not advanved (updated), so we have
+ * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
 * to do this little bit of work manually. The fields map like this:
 *
 * IT[7:0] -> CPSR[26:25],CPSR[15:10]
...
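The IT[7:0] -> CPSR[26:25],CPSR[15:10] mapping quoted above can be spelled out in C; an editorial sketch (the helper name is hypothetical):

	/* CPSR[26:25] hold IT[1:0], CPSR[15:10] hold IT[7:2]. */
	static inline unsigned long read_itstate(unsigned long cpsr)
	{
		return ((cpsr >> 25) & 0x3) | (((cpsr >> 10) & 0x3f) << 2);
	}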
@@ -182,7 +182,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 /**
  * kvm_arm_copy_reg_indices - get indices of all registers.
  *
- * We do core registers right here, then we apppend coproc regs.
+ * We do core registers right here, then we append coproc regs.
 */
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 {
...
@@ -32,23 +32,13 @@
  * r2,r3 = Hypervisor pgd pointer
  *
  * The init scenario is:
- * - We jump in HYP with four parameters: boot HYP pgd, runtime HYP pgd,
- *   runtime stack, runtime vectors
- * - Enable the MMU with the boot pgd
- * - Jump to a target into the trampoline page (remember, this is the same
- *   physical page!)
- * - Now switch to the runtime pgd (same VA, and still the same physical
- *   page!)
+ * - We jump in HYP with 3 parameters: runtime HYP pgd, runtime stack,
+ *   runtime vectors
  * - Invalidate TLBs
 * - Set stack and vectors
- * - Setup the page tables
- * - Enable the MMU
 * - Profit! (or eret, if you only care about the code).
- *
- * As we only have four registers available to pass parameters (and we
- * need six), we split the init in two phases:
- * - Phase 1: r0 = 0, r1 = 0, r2,r3 contain the boot PGD.
- *   Provides the basic HYP init, and enable the MMU.
- * - Phase 2: r0 = ToS, r1 = vectors, r2,r3 contain the runtime PGD.
- *   Switches to the runtime PGD, set stack and vectors.
 */
 	.text
@@ -68,8 +58,11 @@ __kvm_hyp_init:
 	W(b)	.
 __do_hyp_init:
-	cmp	r0, #0			@ We have a SP?
-	bne	phase2			@ Yes, second stage init
+	@ Set stack pointer
+	mov	sp, r0
+
+	@ Set HVBAR to point to the HYP vectors
+	mcr	p15, 4, r1, c12, c0, 0	@ HVBAR
 	@ Set the HTTBR to point to the hypervisor PGD pointer passed
 	mcrr	p15, 4, rr_lo_hi(r2, r3), c2
@@ -114,34 +107,25 @@ __do_hyp_init:
 THUMB(	ldr	r2, =(HSCTLR_M | HSCTLR_A | HSCTLR_TE)	)
 	orr	r1, r1, r2
 	orr	r0, r0, r1
+	isb
 	mcr	p15, 4, r0, c1, c0, 0	@ HSCR
-	isb
-	@ End of init phase-1
 	eret
-phase2:
-	@ Set stack pointer
-	mov	sp, r0
-
-	@ Set HVBAR to point to the HYP vectors
-	mcr	p15, 4, r1, c12, c0, 0	@ HVBAR
-
-	@ Jump to the trampoline page
-	ldr	r0, =TRAMPOLINE_VA
-	adr	r1, target
-	bfi	r0, r1, #0, #PAGE_SHIFT
-	ret	r0
-
-target:	@ We're now in the trampoline code, switch page tables
-	mcrr	p15, 4, rr_lo_hi(r2, r3), c2
+	@ r0 : stub vectors address
+ENTRY(__kvm_hyp_reset)
+	/* We're now in idmap, disable MMU */
+	mrc	p15, 4, r1, c1, c0, 0	@ HSCTLR
+	ldr	r2, =(HSCTLR_M | HSCTLR_A | HSCTLR_C | HSCTLR_I)
+	bic	r1, r1, r2
+	mcr	p15, 4, r1, c1, c0, 0	@ HSCTLR
+
+	/* Install stub vectors */
+	mcr	p15, 4, r0, c12, c0, 0	@ HVBAR
 	isb
-	@ Invalidate the old TLBs
-	mcr	p15, 4, r0, c8, c7, 0	@ TLBIALLH
-	dsb	ish
 	eret
+ENDPROC(__kvm_hyp_reset)
 	.ltorg
...
@@ -32,8 +32,6 @@
 #include "trace.h"
-extern char  __hyp_idmap_text_start[], __hyp_idmap_text_end[];
-
 static pgd_t *boot_hyp_pgd;
 static pgd_t *hyp_pgd;
 static pgd_t *merged_hyp_pgd;
@@ -483,28 +481,6 @@ static void unmap_hyp_range(pgd_t *pgdp, phys_addr_t start, u64 size)
 	} while (pgd++, addr = next, addr != end);
 }
-/**
- * free_boot_hyp_pgd - free HYP boot page tables
- *
- * Free the HYP boot page tables. The bounce page is also freed.
- */
-void free_boot_hyp_pgd(void)
-{
-	mutex_lock(&kvm_hyp_pgd_mutex);
-
-	if (boot_hyp_pgd) {
-		unmap_hyp_range(boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
-		unmap_hyp_range(boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
-		free_pages((unsigned long)boot_hyp_pgd, hyp_pgd_order);
-		boot_hyp_pgd = NULL;
-	}
-
-	if (hyp_pgd)
-		unmap_hyp_range(hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
-
-	mutex_unlock(&kvm_hyp_pgd_mutex);
-}
-
 /**
  * free_hyp_pgds - free Hyp-mode page tables
 *
@@ -519,15 +495,20 @@ void free_hyp_pgds(void)
 {
 	unsigned long addr;
-	free_boot_hyp_pgd();
-
 	mutex_lock(&kvm_hyp_pgd_mutex);
+	if (boot_hyp_pgd) {
+		unmap_hyp_range(boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
+		free_pages((unsigned long)boot_hyp_pgd, hyp_pgd_order);
+		boot_hyp_pgd = NULL;
+	}
+
 	if (hyp_pgd) {
+		unmap_hyp_range(hyp_pgd, hyp_idmap_start, PAGE_SIZE);
 		for (addr = PAGE_OFFSET; virt_addr_valid(addr); addr += PGDIR_SIZE)
-			unmap_hyp_range(hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
+			unmap_hyp_range(hyp_pgd, kern_hyp_va(addr), PGDIR_SIZE);
 		for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
-			unmap_hyp_range(hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
+			unmap_hyp_range(hyp_pgd, kern_hyp_va(addr), PGDIR_SIZE);
 		free_pages((unsigned long)hyp_pgd, hyp_pgd_order);
 		hyp_pgd = NULL;
@@ -679,17 +660,18 @@ static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
  * create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
  * @from:	The virtual kernel start address of the range
  * @to:		The virtual kernel end address of the range (exclusive)
+ * @prot:	The protection to be applied to this range
 *
 * The same virtual address as the kernel virtual address is also used
 * in Hyp-mode mapping (modulo HYP_PAGE_OFFSET) to the same underlying
 * physical pages.
 */
-int create_hyp_mappings(void *from, void *to)
+int create_hyp_mappings(void *from, void *to, pgprot_t prot)
 {
 	phys_addr_t phys_addr;
 	unsigned long virt_addr;
-	unsigned long start = KERN_TO_HYP((unsigned long)from);
-	unsigned long end = KERN_TO_HYP((unsigned long)to);
+	unsigned long start = kern_hyp_va((unsigned long)from);
+	unsigned long end = kern_hyp_va((unsigned long)to);
 	if (is_kernel_in_hyp_mode())
 		return 0;
@@ -704,7 +686,7 @@ int create_hyp_mappings(void *from, void *to)
 		err = __create_hyp_mappings(hyp_pgd, virt_addr,
 					    virt_addr + PAGE_SIZE,
 					    __phys_to_pfn(phys_addr),
-					    PAGE_HYP);
+					    prot);
 		if (err)
 			return err;
 	}
@@ -723,8 +705,8 @@ int create_hyp_mappings(void *from, void *to)
 */
 int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
 {
-	unsigned long start = KERN_TO_HYP((unsigned long)from);
-	unsigned long end = KERN_TO_HYP((unsigned long)to);
+	unsigned long start = kern_hyp_va((unsigned long)from);
+	unsigned long end = kern_hyp_va((unsigned long)to);
 	if (is_kernel_in_hyp_mode())
 		return 0;
@@ -1687,14 +1669,6 @@ phys_addr_t kvm_mmu_get_httbr(void)
 	return virt_to_phys(hyp_pgd);
 }
-phys_addr_t kvm_mmu_get_boot_httbr(void)
-{
-	if (__kvm_cpu_uses_extended_idmap())
-		return virt_to_phys(merged_hyp_pgd);
-	else
-		return virt_to_phys(boot_hyp_pgd);
-}
-
 phys_addr_t kvm_get_idmap_vector(void)
 {
 	return hyp_idmap_vector;
@@ -1705,6 +1679,22 @@ phys_addr_t kvm_get_idmap_start(void)
 	return hyp_idmap_start;
 }
+static int kvm_map_idmap_text(pgd_t *pgd)
+{
+	int err;
+
+	/* Create the idmap in the boot page tables */
+	err = 	__create_hyp_mappings(pgd,
+				      hyp_idmap_start, hyp_idmap_end,
+				      __phys_to_pfn(hyp_idmap_start),
+				      PAGE_HYP_EXEC);
+	if (err)
+		kvm_err("Failed to idmap %lx-%lx\n",
+			hyp_idmap_start, hyp_idmap_end);
+
+	return err;
+}
+
 int kvm_mmu_init(void)
 {
 	int err;
@@ -1719,28 +1709,41 @@ int kvm_mmu_init(void)
 	 */
 	BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK);
-	hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
-	boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
-
-	if (!hyp_pgd || !boot_hyp_pgd) {
+	kvm_info("IDMAP page: %lx\n", hyp_idmap_start);
+	kvm_info("HYP VA range: %lx:%lx\n",
+		 kern_hyp_va(PAGE_OFFSET), kern_hyp_va(~0UL));
+
+	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
+	    hyp_idmap_start <  kern_hyp_va(~0UL)) {
+		/*
+		 * The idmap page is intersecting with the VA space,
+		 * it is not safe to continue further.
+		 */
+		kvm_err("IDMAP intersecting with HYP VA, unable to continue\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+	hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
+	if (!hyp_pgd) {
 		kvm_err("Hyp mode PGD not allocated\n");
 		err = -ENOMEM;
 		goto out;
 	}
-	/* Create the idmap in the boot page tables */
-	err = 	__create_hyp_mappings(boot_hyp_pgd,
-				      hyp_idmap_start, hyp_idmap_end,
-				      __phys_to_pfn(hyp_idmap_start),
-				      PAGE_HYP);
-
-	if (err) {
-		kvm_err("Failed to idmap %lx-%lx\n",
-			hyp_idmap_start, hyp_idmap_end);
-		goto out;
-	}
-
 	if (__kvm_cpu_uses_extended_idmap()) {
+		boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+							 hyp_pgd_order);
+		if (!boot_hyp_pgd) {
+			kvm_err("Hyp boot PGD not allocated\n");
+			err = -ENOMEM;
+			goto out;
+		}
+
+		err = kvm_map_idmap_text(boot_hyp_pgd);
+		if (err)
+			goto out;
+
 		merged_hyp_pgd = (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
 		if (!merged_hyp_pgd) {
 			kvm_err("Failed to allocate extra HYP pgd\n");
@@ -1748,28 +1751,9 @@ int kvm_mmu_init(void)
 		}
 		__kvm_extend_hypmap(boot_hyp_pgd, hyp_pgd, merged_hyp_pgd,
 				    hyp_idmap_start);
-		return 0;
-	}
-
-	/* Map the very same page at the trampoline VA */
-	err = 	__create_hyp_mappings(boot_hyp_pgd,
-				      TRAMPOLINE_VA, TRAMPOLINE_VA + PAGE_SIZE,
-				      __phys_to_pfn(hyp_idmap_start),
-				      PAGE_HYP);
-	if (err) {
-		kvm_err("Failed to map trampoline @%lx into boot HYP pgd\n",
-			TRAMPOLINE_VA);
-		goto out;
-	}
-
-	/* Map the same page again into the runtime page tables */
-	err = 	__create_hyp_mappings(hyp_pgd,
-				      TRAMPOLINE_VA, TRAMPOLINE_VA + PAGE_SIZE,
-				      __phys_to_pfn(hyp_idmap_start),
-				      PAGE_HYP);
-	if (err) {
-		kvm_err("Failed to map trampoline @%lx into runtime HYP pgd\n",
-			TRAMPOLINE_VA);
+	} else {
+		err = kvm_map_idmap_text(hyp_pgd);
+		if (err)
 			goto out;
 	}
...
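On 32-bit ARM kern_hyp_va() is now the identity mapping, so the new sanity check in kvm_mmu_init() simply rejects an idmap page whose physical address falls inside the kernel VA window. A worked editorial example (simplified; the real code sets err and jumps to out):

	/* With PAGE_OFFSET = 0xC0000000 and kern_hyp_va() being the
	 * identity, an idmap page at PA 0xC0100000 would sit inside the
	 * HYP VA range [PAGE_OFFSET, ~0UL] and init fails; an idmap page
	 * at PA 0x80100000 is fine.
	 */
	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
	    hyp_idmap_start <  kern_hyp_va(~0UL))
		return -EINVAL;		/* idmap clashes with HYP VAs */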
@@ -52,7 +52,7 @@ static const struct kvm_irq_level cortexa_vtimer_irq = {
  * @vcpu: The VCPU pointer
  *
  * This function finds the right table above and sets the registers on the
- * virtual CPU struct to their architectually defined reset values.
+ * virtual CPU struct to their architecturally defined reset values.
 */
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
...
@@ -3,6 +3,7 @@ config ARM64
 	select ACPI_CCA_REQUIRED if ACPI
 	select ACPI_GENERIC_GSI if ACPI
 	select ACPI_REDUCED_HARDWARE_ONLY if ACPI
+	select ACPI_MCFG if ACPI
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
@@ -22,9 +23,9 @@ config ARM64
 	select ARM_ARCH_TIMER
 	select ARM_GIC
 	select AUDIT_ARCH_COMPAT_GENERIC
-	select ARM_GIC_V2M if PCI_MSI
+	select ARM_GIC_V2M if PCI
 	select ARM_GIC_V3
-	select ARM_GIC_V3_ITS if PCI_MSI
+	select ARM_GIC_V3_ITS if PCI
 	select ARM_PSCI_FW
 	select BUILDTIME_EXTABLE_SORT
 	select CLONE_BACKWARDS
@@ -78,6 +79,7 @@ config ARM64
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
+	select HAVE_GCC_PLUGINS
 	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
 	select HAVE_IRQ_TIME_ACCOUNTING
@@ -101,6 +103,7 @@ config ARM64
 	select OF_EARLY_FLATTREE
 	select OF_NUMA if NUMA && OF
 	select OF_RESERVED_MEM
+	select PCI_ECAM if ACPI
 	select PERF_USE_VMALLOC
 	select POWER_RESET
 	select POWER_SUPPLY
...
@@ -76,3 +76,8 @@
 &usb3 {
 	status = "okay";
 };
+
+/* CON17 (PCIe) / CON12 (mini-PCIe) */
+&pcie0 {
+	status = "okay";
+};
@@ -176,5 +176,30 @@
 				<0x1d40000 0x40000>; /* GICR */
 		};
 	};
+
+	pcie0: pcie@d0070000 {
+		compatible = "marvell,armada-3700-pcie";
+		device_type = "pci";
+		status = "disabled";
+		reg = <0 0xd0070000 0 0x20000>;
+		#address-cells = <3>;
+		#size-cells = <2>;
+		bus-range = <0x00 0xff>;
+		interrupts = <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>;
+		#interrupt-cells = <1>;
+		msi-parent = <&pcie0>;
+		msi-controller;
+		ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x1000000 /* Port 0 MEM */
+			  0x81000000 0 0xe9000000 0 0xe9000000 0 0x10000>; /* Port 0 IO*/
+		interrupt-map-mask = <0 0 0 7>;
+		interrupt-map = <0 0 0 1 &pcie_intc 0>,
+				<0 0 0 2 &pcie_intc 1>,
+				<0 0 0 3 &pcie_intc 2>,
+				<0 0 0 4 &pcie_intc 3>;
+		pcie_intc: interrupt-controller {
+			interrupt-controller;
+			#interrupt-cells = <1>;
+		};
+	};
 };
 };
@@ -36,8 +36,9 @@
 #define ARM64_HAS_VIRT_HOST_EXTN		11
 #define ARM64_WORKAROUND_CAVIUM_27456		12
 #define ARM64_HAS_32BIT_EL0			13
+#define ARM64_HYP_OFFSET_LOW			14
-#define ARM64_NCAPS				14
+#define ARM64_NCAPS				15
 #ifndef __ASSEMBLY__
...
@@ -178,7 +178,7 @@
 /* Hyp System Trap Register */
 #define HSTR_EL2_T(x)	(1 << x)
-/* Hyp Coproccessor Trap Register Shifts */
+/* Hyp Coprocessor Trap Register Shifts */
 #define CPTR_EL2_TFP_SHIFT 10
 /* Hyp Coprocessor Trap Register */
...
@@ -47,8 +47,7 @@
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
-int kvm_arch_dev_ioctl_check_extension(long ext);
-unsigned long kvm_hyp_reset_entry(void);
+int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext);
 void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start);
 struct kvm_arch {
@@ -348,8 +347,7 @@ int kvm_perf_teardown(void);
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
-static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
-				       phys_addr_t pgd_ptr,
+static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 				       unsigned long hyp_stack_ptr,
 				       unsigned long vector_ptr)
 {
@@ -357,19 +355,14 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 	 * Call initialization code, and switch to the full blown
 	 * HYP code.
 	 */
-	__kvm_call_hyp((void *)boot_pgd_ptr, pgd_ptr,
-		       hyp_stack_ptr, vector_ptr);
+	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
 }
-static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
+void __kvm_hyp_teardown(void);
+static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr,
 					phys_addr_t phys_idmap_start)
 {
-	/*
-	 * Call reset code, and switch back to stub hyp vectors.
-	 * Uses __kvm_call_hyp() to avoid kaslr's kvm_ksym_ref() translation.
-	 */
-	__kvm_call_hyp((void *)kvm_hyp_reset_entry(),
-		       boot_pgd_ptr, phys_idmap_start);
+	kvm_call_hyp(__kvm_hyp_teardown, phys_idmap_start);
 }
 static inline void kvm_arch_hardware_unsetup(void) {}
...
@@ -25,29 +25,6 @@
 #define __hyp_text __section(.hyp.text) notrace
-static inline unsigned long __kern_hyp_va(unsigned long v)
-{
-	asm volatile(ALTERNATIVE("and %0, %0, %1",
-				 "nop",
-				 ARM64_HAS_VIRT_HOST_EXTN)
-		     : "+r" (v) : "i" (HYP_PAGE_OFFSET_MASK));
-	return v;
-}
-
-#define kern_hyp_va(v) (typeof(v))(__kern_hyp_va((unsigned long)(v)))
-
-static inline unsigned long __hyp_kern_va(unsigned long v)
-{
-	u64 offset = PAGE_OFFSET - HYP_PAGE_OFFSET;
-	asm volatile(ALTERNATIVE("add %0, %0, %1",
-				 "nop",
-				 ARM64_HAS_VIRT_HOST_EXTN)
-		     : "+r" (v) : "r" (offset));
-	return v;
-}
-
-#define hyp_kern_va(v) (typeof(v))(__hyp_kern_va((unsigned long)(v)))
-
 #define read_sysreg_elx(r,nvh,vh)		\
 	({					\
 		u64 reg;			\
...
@@ -29,21 +29,48 @@
 *
 * Instead, give the HYP mode its own VA region at a fixed offset from
 * the kernel by just masking the top bits (which are all ones for a
- * kernel address).
+ * kernel address). We need to find out how many bits to mask.
 *
- * ARMv8.1 (using VHE) does have a TTBR1_EL2, and doesn't use these
- * macros (the entire kernel runs at EL2).
+ * We want to build a set of page tables that cover both parts of the
+ * idmap (the trampoline page used to initialize EL2), and our normal
+ * runtime VA space, at the same time.
+ *
+ * Given that the kernel uses VA_BITS for its entire address space,
+ * and that half of that space (VA_BITS - 1) is used for the linear
+ * mapping, we can also limit the EL2 space to (VA_BITS - 1).
+ *
+ * The main question is "Within the VA_BITS space, does EL2 use the
+ * top or the bottom half of that space to shadow the kernel's linear
+ * mapping?". As we need to idmap the trampoline page, this is
+ * determined by the range in which this page lives.
+ *
+ * If the page is in the bottom half, we have to use the top half. If
+ * the page is in the top half, we have to use the bottom half:
+ *
+ * T = __virt_to_phys(__hyp_idmap_text_start)
+ * if (T & BIT(VA_BITS - 1))
+ *	HYP_VA_MIN = 0  //idmap in upper half
+ * else
+ *	HYP_VA_MIN = 1 << (VA_BITS - 1)
+ * HYP_VA_MAX = HYP_VA_MIN + (1 << (VA_BITS - 1)) - 1
+ *
+ * This of course assumes that the trampoline page exists within the
+ * VA_BITS range. If it doesn't, then it means we're in the odd case
+ * where the kernel idmap (as well as HYP) uses more levels than the
+ * kernel runtime page tables (as seen when the kernel is configured
+ * for 4k pages, 39bits VA, and yet memory lives just above that
+ * limit, forcing the idmap to use 4 levels of page tables while the
+ * kernel itself only uses 3). In this particular case, it doesn't
+ * matter which side of VA_BITS we use, as we're guaranteed not to
+ * conflict with anything.
+ *
+ * When using VHE, there are no separate hyp mappings and all KVM
+ * functionality is already mapped as part of the main kernel
+ * mappings, and none of this applies in that case.
 */
-#define HYP_PAGE_OFFSET_SHIFT	VA_BITS
-#define HYP_PAGE_OFFSET_MASK	((UL(1) << HYP_PAGE_OFFSET_SHIFT) - 1)
-#define HYP_PAGE_OFFSET		(PAGE_OFFSET & HYP_PAGE_OFFSET_MASK)
-
-/*
- * Our virtual mapping for the idmap-ed MMU-enable code. Must be
- * shared across all the page-tables. Conveniently, we use the last
- * possible page, where no kernel mapping will ever exist.
- */
-#define TRAMPOLINE_VA		(HYP_PAGE_OFFSET_MASK & PAGE_MASK)
+#define HYP_PAGE_OFFSET_HIGH_MASK	((UL(1) << VA_BITS) - 1)
+#define HYP_PAGE_OFFSET_LOW_MASK	((UL(1) << (VA_BITS - 1)) - 1)
 #ifdef __ASSEMBLY__
@@ -53,13 +80,33 @@
 /*
 * Convert a kernel VA into a HYP VA.
 * reg: VA to be converted.
+ *
+ * This generates the following sequences:
+ * - High mask:
+ *		and x0, x0, #HYP_PAGE_OFFSET_HIGH_MASK
+ *		nop
+ * - Low mask:
+ *		and x0, x0, #HYP_PAGE_OFFSET_HIGH_MASK
+ *		and x0, x0, #HYP_PAGE_OFFSET_LOW_MASK
+ * - VHE:
+ *		nop
+ *		nop
+ *
+ * The "low mask" version works because the mask is a strict subset of
+ * the "high mask", hence performing the first mask for nothing.
+ * Should be completely invisible on any viable CPU.
 */
 .macro kern_hyp_va	reg
 alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
-	and     \reg, \reg, #HYP_PAGE_OFFSET_MASK
+	and     \reg, \reg, #HYP_PAGE_OFFSET_HIGH_MASK
 alternative_else
 	nop
 alternative_endif
+alternative_if_not ARM64_HYP_OFFSET_LOW
+	nop
+alternative_else
+	and     \reg, \reg, #HYP_PAGE_OFFSET_LOW_MASK
+alternative_endif
 .endm
 #else
@@ -70,7 +117,22 @@ alternative_endif
 #include <asm/mmu_context.h>
 #include <asm/pgtable.h>
-#define KERN_TO_HYP(kva)	((unsigned long)kva - PAGE_OFFSET + HYP_PAGE_OFFSET)
+static inline unsigned long __kern_hyp_va(unsigned long v)
+{
+	asm volatile(ALTERNATIVE("and %0, %0, %1",
+				 "nop",
+				 ARM64_HAS_VIRT_HOST_EXTN)
+		     : "+r" (v)
+		     : "i" (HYP_PAGE_OFFSET_HIGH_MASK));
+	asm volatile(ALTERNATIVE("nop",
+				 "and %0, %0, %1",
+				 ARM64_HYP_OFFSET_LOW)
+		     : "+r" (v)
+		     : "i" (HYP_PAGE_OFFSET_LOW_MASK));
+	return v;
+}
+
+#define kern_hyp_va(v) 	(typeof(v))(__kern_hyp_va((unsigned long)(v)))
 /*
 * We currently only support a 40bit IPA.
@@ -81,9 +143,8 @@ alternative_endif
 #include <asm/stage2_pgtable.h>
-int create_hyp_mappings(void *from, void *to);
+int create_hyp_mappings(void *from, void *to, pgprot_t prot);
 int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
-void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 void stage2_unmap_vm(struct kvm *kvm);
@@ -97,7 +158,6 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
 void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 phys_addr_t kvm_mmu_get_httbr(void);
-phys_addr_t kvm_mmu_get_boot_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
 phys_addr_t kvm_get_idmap_start(void);
 int kvm_mmu_init(void);
...
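A worked instance of the HYP_VA_MIN/HYP_VA_MAX pseudo-code above, assuming VA_BITS = 39 (editorial, not from the patch):

	/* HYP_PAGE_OFFSET_HIGH_MASK = (1UL << 39) - 1 = 0x7fffffffff
	 * HYP_PAGE_OFFSET_LOW_MASK  = (1UL << 38) - 1 = 0x3fffffffff
	 */
	unsigned long T = __virt_to_phys(__hyp_idmap_text_start);
	unsigned long hyp_va_min = (T & BIT(38)) ? 0 : 1UL << 38;	/* idmap in upper half? */
	unsigned long hyp_va_max = hyp_va_min + (1UL << 38) - 1;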
@@ -164,6 +164,7 @@
 #define PTE_CONT		(_AT(pteval_t, 1) << 52)	/* Contiguous range */
 #define PTE_PXN			(_AT(pteval_t, 1) << 53)	/* Privileged XN */
 #define PTE_UXN			(_AT(pteval_t, 1) << 54)	/* User XN */
+#define PTE_HYP_XN		(_AT(pteval_t, 1) << 54)	/* HYP XN */
 /*
 * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
...
@@ -55,7 +55,9 @@
 #define PAGE_KERNEL_EXEC	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
 #define PAGE_KERNEL_EXEC_CONT	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
-#define PAGE_HYP		__pgprot(_PAGE_DEFAULT | PTE_HYP)
+#define PAGE_HYP		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
+#define PAGE_HYP_EXEC		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
+#define PAGE_HYP_RO		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
 #define PAGE_HYP_DEVICE		__pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
 #define PAGE_S2			__pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
...
@@ -87,6 +87,10 @@ extern void verify_cpu_run_el(void);
 static inline void verify_cpu_run_el(void) {}
 #endif
+/* The section containing the hypervisor idmap text */
+extern char __hyp_idmap_text_start[];
+extern char __hyp_idmap_text_end[];
+
 /* The section containing the hypervisor text */
 extern char __hyp_text_start[];
 extern char __hyp_text_end[];
...
@@ -87,9 +87,11 @@ struct kvm_regs {
 /* Supported VGICv3 address types  */
 #define KVM_VGIC_V3_ADDR_TYPE_DIST	2
 #define KVM_VGIC_V3_ADDR_TYPE_REDIST	3
+#define KVM_VGIC_ITS_ADDR_TYPE		4
 #define KVM_VGIC_V3_DIST_SIZE		SZ_64K
 #define KVM_VGIC_V3_REDIST_SIZE		(2 * SZ_64K)
+#define KVM_VGIC_V3_ITS_SIZE		(2 * SZ_64K)
 #define KVM_ARM_VCPU_POWER_OFF		0 /* CPU is started in OFF state */
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
...
@@ -726,6 +726,19 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
 	return is_kernel_in_hyp_mode();
 }
+static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
+			   int __unused)
+{
+	phys_addr_t idmap_addr = virt_to_phys(__hyp_idmap_text_start);
+
+	/*
+	 * Activate the lower HYP offset only if:
+	 * - the idmap doesn't clash with it,
+	 * - the kernel is not running at EL2.
+	 */
+	return idmap_addr > GENMASK(VA_BITS - 2, 0) && !is_kernel_in_hyp_mode();
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -803,6 +816,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
 		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
 	},
+	{
+		.desc = "Reduced HYP mapping offset",
+		.capability = ARM64_HYP_OFFSET_LOW,
+		.def_scope = SCOPE_SYSTEM,
+		.matches = hyp_offset_low,
+	},
 	{},
 };
...
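The GENMASK test in hyp_offset_low() above is the same half-space question as in the kvm_mmu.h comment, in disguise; worked numbers for VA_BITS = 39 (editorial):

	/* GENMASK(37, 0) = (1UL << 38) - 1 = 0x3fffffffff.
	 * An idmap at PA 0x4000000000 (bit 38 set, i.e. the upper half of
	 * the 39-bit space) exceeds the mask, so ARM64_HYP_OFFSET_LOW is
	 * enabled and kern_hyp_va() additionally applies the low mask;
	 * an idmap at PA 0x80000000 leaves the capability off.
	 */
	bool low = (0x4000000000UL > GENMASK(39 - 2, 0)) && !is_kernel_in_hyp_mode();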
@@ -17,6 +17,9 @@
 #include <linux/mm.h>
 #include <linux/of_pci.h>
 #include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/pci-acpi.h>
+#include <linux/pci-ecam.h>
 #include <linux/slab.h>
 /*
@@ -36,25 +39,17 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
 	return res->start;
 }
-/**
- * pcibios_enable_device - Enable I/O and memory.
- * @dev: PCI device to be enabled
- * @mask: bitmask of BARs to enable
- */
-int pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	if (pci_has_flag(PCI_PROBE_ONLY))
-		return 0;
-
-	return pci_enable_resources(dev, mask);
-}
-
 /*
- * Try to assign the IRQ number from DT when adding a new device
+ * Try to assign the IRQ number when probing a new device
 */
-int pcibios_add_device(struct pci_dev *dev)
+int pcibios_alloc_irq(struct pci_dev *dev)
 {
-	dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+	if (acpi_disabled)
+		dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+#ifdef CONFIG_ACPI
+	else
+		return acpi_pci_irq_enable(dev);
+#endif
 	return 0;
 }
@@ -65,13 +60,21 @@ int pcibios_add_device(struct pci_dev *dev)
 int raw_pci_read(unsigned int domain, unsigned int bus,
 		  unsigned int devfn, int reg, int len, u32 *val)
 {
-	return -ENXIO;
+	struct pci_bus *b = pci_find_bus(domain, bus);
+
+	if (!b)
+		return PCIBIOS_DEVICE_NOT_FOUND;
+	return b->ops->read(b, devfn, reg, len, val);
 }
 int raw_pci_write(unsigned int domain, unsigned int bus,
 		unsigned int devfn, int reg, int len, u32 val)
 {
-	return -ENXIO;
+	struct pci_bus *b = pci_find_bus(domain, bus);
+
+	if (!b)
+		return PCIBIOS_DEVICE_NOT_FOUND;
+	return b->ops->write(b, devfn, reg, len, val);
 }
 #ifdef CONFIG_NUMA
@@ -85,10 +88,124 @@ EXPORT_SYMBOL(pcibus_to_node);
 #endif
 #ifdef CONFIG_ACPI
+/* Root bridge scanning */
+struct acpi_pci_generic_root_info {
+	struct acpi_pci_root_info	common;
+	struct pci_config_window	*cfg;	/* config space mapping */
+};
+
+int acpi_pci_bus_find_domain_nr(struct pci_bus *bus)
+{
+	struct pci_config_window *cfg = bus->sysdata;
+	struct acpi_device *adev = to_acpi_device(cfg->parent);
+	struct acpi_pci_root *root = acpi_driver_data(adev);
+
+	return root->segment;
+}
+
+int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
+{
+	if (!acpi_disabled) {
+		struct pci_config_window *cfg = bridge->bus->sysdata;
+		struct acpi_device *adev = to_acpi_device(cfg->parent);
+		ACPI_COMPANION_SET(&bridge->dev, adev);
+	}
+
+	return 0;
+}
+
+/*
+ * Lookup the bus range for the domain in MCFG, and set up config space
+ * mapping.
+ */
+static struct pci_config_window *
+pci_acpi_setup_ecam_mapping(struct acpi_pci_root *root)
+{
+	struct resource *bus_res = &root->secondary;
+	u16 seg = root->segment;
+	struct pci_config_window *cfg;
+	struct resource cfgres;
+	unsigned int bsz;
+
+	/* Use address from _CBA if present, otherwise lookup MCFG */
+	if (!root->mcfg_addr)
+		root->mcfg_addr = pci_mcfg_lookup(seg, bus_res);
+
+	if (!root->mcfg_addr) {
+		dev_err(&root->device->dev, "%04x:%pR ECAM region not found\n",
+			seg, bus_res);
+		return NULL;
+	}
+
+	bsz = 1 << pci_generic_ecam_ops.bus_shift;
+	cfgres.start = root->mcfg_addr + bus_res->start * bsz;
+	cfgres.end = cfgres.start + resource_size(bus_res) * bsz - 1;
+	cfgres.flags = IORESOURCE_MEM;
+	cfg = pci_ecam_create(&root->device->dev, &cfgres, bus_res,
+			      &pci_generic_ecam_ops);
+	if (IS_ERR(cfg)) {
+		dev_err(&root->device->dev, "%04x:%pR error %ld mapping ECAM\n",
+			seg, bus_res, PTR_ERR(cfg));
+		return NULL;
+	}
+
+	return cfg;
+}
+
+/* release_info: free resources allocated by init_info */
+static void pci_acpi_generic_release_info(struct acpi_pci_root_info *ci)
+{
+	struct acpi_pci_generic_root_info *ri;
+
+	ri = container_of(ci, struct acpi_pci_generic_root_info, common);
+	pci_ecam_free(ri->cfg);
+	kfree(ri);
+}
+
+static struct acpi_pci_root_ops acpi_pci_root_ops = {
+	.release_info = pci_acpi_generic_release_info,
+};
+
+/* Interface called from ACPI code to setup PCI host controller */
 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
 {
-	/* TODO: Should be revisited when implementing PCI on ACPI */
+	int node = acpi_get_node(root->device->handle);
+	struct acpi_pci_generic_root_info *ri;
+	struct pci_bus *bus, *child;
+
+	ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node);
+	if (!ri)
+		return NULL;
+
+	ri->cfg = pci_acpi_setup_ecam_mapping(root);
+	if (!ri->cfg) {
+		kfree(ri);
+		return NULL;
+	}
+
+	acpi_pci_root_ops.pci_ops = &ri->cfg->ops->pci_ops;
+	bus = acpi_pci_root_create(root, &acpi_pci_root_ops, &ri->common,
+				   ri->cfg);
+	if (!bus)
 		return NULL;
+
+	pci_bus_size_bridges(bus);
+	pci_bus_assign_resources(bus);
+
+	list_for_each_entry(child, &bus->children, node)
+		pcie_bus_configure_settings(child);
+
+	return bus;
 }
+
+void pcibios_add_bus(struct pci_bus *bus)
+{
+	acpi_pci_add_bus(bus);
+}
+
+void pcibios_remove_bus(struct pci_bus *bus)
+{
+	acpi_pci_remove_bus(bus);
+}
+
 #endif
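The ECAM window set up by pci_acpi_setup_ecam_mapping() above is straightforward arithmetic; a worked editorial example assuming the generic ECAM bus_shift of 20 (1 MiB of config space per bus) and a hypothetical MCFG base:

	unsigned int bsz = 1 << 20;		/* pci_generic_ecam_ops.bus_shift == 20 */
	u64 mcfg_addr = 0x4010000000ULL;	/* hypothetical MCFG base for segment 0 */
	u64 start = mcfg_addr + 0x00 * bsz;	/* bus 0x00  -> 0x4010000000 */
	u64 end = start + 256ULL * bsz - 1;	/* buses 0x00-0xff -> 0x401fffffff,
						 * i.e. a 256 MiB ECAM window */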
...@@ -36,6 +36,7 @@ config KVM ...@@ -36,6 +36,7 @@ config KVM
select HAVE_KVM_IRQFD select HAVE_KVM_IRQFD
select KVM_ARM_VGIC_V3 select KVM_ARM_VGIC_V3
select KVM_ARM_PMU if HW_PERF_EVENTS select KVM_ARM_PMU if HW_PERF_EVENTS
select HAVE_KVM_MSI
---help--- ---help---
Support hosting virtualized guest machines. Support hosting virtualized guest machines.
We don't support KVM with 16K page tables yet, due to the multiple We don't support KVM with 16K page tables yet, due to the multiple
...@@ -54,13 +55,6 @@ config KVM_ARM_PMU ...@@ -54,13 +55,6 @@ config KVM_ARM_PMU
Adds support for a virtual Performance Monitoring Unit (PMU) in Adds support for a virtual Performance Monitoring Unit (PMU) in
virtual machines. virtual machines.
config KVM_NEW_VGIC
bool "New VGIC implementation"
depends on KVM
default y
---help---
uses the new VGIC implementation
source drivers/vhost/Kconfig source drivers/vhost/Kconfig
endif # VIRTUALIZATION endif # VIRTUALIZATION
...@@ -20,7 +20,6 @@ kvm-$(CONFIG_KVM_ARM_HOST) += emulate.o inject_fault.o regmap.o ...@@ -20,7 +20,6 @@ kvm-$(CONFIG_KVM_ARM_HOST) += emulate.o inject_fault.o regmap.o
kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
kvm-$(CONFIG_KVM_ARM_HOST) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o kvm-$(CONFIG_KVM_ARM_HOST) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o
ifeq ($(CONFIG_KVM_NEW_VGIC),y)
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-init.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-irqfd.o
...@@ -30,12 +29,6 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-mmio.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-mmio-v2.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-mmio-v3.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-kvm-device.o
-else
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-its.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2-emul.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
endif
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
...@@ -211,7 +211,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
/**
 * kvm_arm_copy_reg_indices - get indices of all registers.
 *
- * We do core registers right here, then we apppend system regs.
+ * We do core registers right here, then we append system regs.
 */
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
{
......
...@@ -53,10 +53,9 @@ __invalid:
	b	.
/*
- * x0: HYP boot pgd
- * x1: HYP pgd
- * x2: HYP stack
- * x3: HYP vectors
+ * x0: HYP pgd
+ * x1: HYP stack
+ * x2: HYP vectors
 */
__do_hyp_init:
...@@ -110,71 +109,27 @@ __do_hyp_init:
	msr	sctlr_el2, x4
	isb
-	/* Skip the trampoline dance if we merged the boot and runtime PGDs */
-	cmp	x0, x1
-	b.eq	merged
-	/* MMU is now enabled. Get ready for the trampoline dance */
-	ldr	x4, =TRAMPOLINE_VA
-	adr	x5, target
-	bfi	x4, x5, #0, #PAGE_SHIFT
-	br	x4
-target: /* We're now in the trampoline code, switch page tables */
-	msr	ttbr0_el2, x1
-	isb
-	/* Invalidate the old TLBs */
-	tlbi	alle2
-	dsb	sy
-merged:
	/* Set the stack and new vectors */
-	kern_hyp_va	x2
-	mov	sp, x2
-	kern_hyp_va	x3
-	msr	vbar_el2, x3
+	kern_hyp_va	x1
+	mov	sp, x1
+	kern_hyp_va	x2
+	msr	vbar_el2, x2
	/* Hello, World! */
	eret
ENDPROC(__kvm_hyp_init)
/*
- * Reset kvm back to the hyp stub. This is the trampoline dance in
- * reverse. If kvm used an extended idmap, __extended_idmap_trampoline
- * calls this code directly in the idmap. In this case switching to the
- * boot tables is a no-op.
- *
- * x0: HYP boot pgd
- * x1: HYP phys_idmap_start
+ * Reset kvm back to the hyp stub.
 */
ENTRY(__kvm_hyp_reset)
-	/* We're in trampoline code in VA, switch back to boot page tables */
-	msr	ttbr0_el2, x0
-	isb
-	/* Ensure the PA branch doesn't find a stale tlb entry or stale code. */
-	ic	iallu
-	tlbi	alle2
-	dsb	sy
-	isb
-	/* Branch into PA space */
-	adr	x0, 1f
-	bfi	x1, x0, #0, #PAGE_SHIFT
-	br	x1
	/* We're now in idmap, disable MMU */
-1:	mrs	x0, sctlr_el2
+	mrs	x0, sctlr_el2
	ldr	x1, =SCTLR_ELx_FLAGS
	bic	x0, x0, x1		// Clear SCTL_M and etc
	msr	sctlr_el2, x0
	isb
-	/* Invalidate the old TLBs */
-	tlbi	alle2
-	dsb	sy
	/* Install stub vectors */
	adr_l	x0, __hyp_stub_vectors
	msr	vbar_el2, x0
......
...@@ -164,22 +164,3 @@ alternative_endif
	eret
ENDPROC(__fpsimd_guest_restore)
/*
* When using the extended idmap, we don't have a trampoline page we can use
* while we switch pages tables during __kvm_hyp_reset. Accessing the idmap
* directly would be ideal, but if we're using the extended idmap then the
* idmap is located above HYP_PAGE_OFFSET, and the address will be masked by
* kvm_call_hyp using kern_hyp_va.
*
* x0: HYP boot pgd
* x1: HYP phys_idmap_start
*/
ENTRY(__extended_idmap_trampoline)
mov x4, x1
adr_l x3, __kvm_hyp_reset
/* insert __kvm_hyp_reset()s offset into phys_idmap_start */
bfi x4, x3, #0, #PAGE_SHIFT
br x4
ENDPROC(__extended_idmap_trampoline)
...@@ -63,6 +63,21 @@ ENTRY(__vhe_hyp_call)
	ret
ENDPROC(__vhe_hyp_call)
/*
* Compute the idmap address of __kvm_hyp_reset based on the idmap
* start passed as a parameter, and jump there.
*
* x0: HYP phys_idmap_start
*/
ENTRY(__kvm_hyp_teardown)
mov x4, x0
adr_l x3, __kvm_hyp_reset
/* insert __kvm_hyp_reset()s offset into phys_idmap_start */
bfi x4, x3, #0, #PAGE_SHIFT
br x4
ENDPROC(__kvm_hyp_teardown)
el1_sync:				// Guest trapped into EL2
	save_x0_to_x3
......
...@@ -299,9 +299,16 @@ static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%
static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par)
{
-	unsigned long str_va = (unsigned long)__hyp_panic_string;
+	unsigned long str_va;
-	__hyp_do_panic(hyp_kern_va(str_va),
+	/*
+	 * Force the panic string to be loaded from the literal pool,
+	 * making sure it is a kernel address and not a PC-relative
+	 * reference.
+	 */
+	asm volatile("ldr %0, =__hyp_panic_string" : "=r" (str_va));
+
+	__hyp_do_panic(str_va,
		       spsr, elr,
		       read_sysreg(esr_el2), read_sysreg_el2(far),
		       read_sysreg(hpfar_el2), par,
......
...@@ -65,7 +65,7 @@ static bool cpu_has_32bit_el1(void)
 * We currently assume that the number of HW registers is uniform
 * across all CPUs (see cpuinfo_sanity_check).
 */
-int kvm_arch_dev_ioctl_check_extension(long ext)
+int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
{
	int r;
...@@ -86,6 +86,12 @@ int kvm_arch_dev_ioctl_check_extension(long ext)
	case KVM_CAP_VCPU_ATTRIBUTES:
		r = 1;
		break;
+	case KVM_CAP_MSI_DEVID:
+		if (!kvm)
+			r = -EINVAL;
+		else
+			r = kvm->arch.vgic.msis_require_devid;
+		break;
	default:
		r = 0;
	}
...@@ -98,7 +104,7 @@ int kvm_arch_dev_ioctl_check_extension(long ext)
 * @vcpu: The VCPU pointer
 *
 * This function finds the right table above and sets the registers on
- * the virtual CPU struct to their architectually defined reset
+ * the virtual CPU struct to their architecturally defined reset
 * values.
 */
int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
...@@ -132,31 +138,3 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
	/* Reset timer */
	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
}
extern char __hyp_idmap_text_start[];
unsigned long kvm_hyp_reset_entry(void)
{
if (!__kvm_cpu_uses_extended_idmap()) {
unsigned long offset;
/*
* Find the address of __kvm_hyp_reset() in the trampoline page.
* This is present in the running page tables, and the boot page
* tables, so we call the code here to start the trampoline
* dance in reverse.
*/
offset = (unsigned long)__kvm_hyp_reset
- ((unsigned long)__hyp_idmap_text_start & PAGE_MASK);
return TRAMPOLINE_VA + offset;
} else {
/*
* KVM is running with merged page tables, which don't have the
* trampoline page mapped. We know the idmap is still mapped,
* but can't be called into directly. Use
* __extended_idmap_trampoline to do the call.
*/
return (unsigned long)kvm_ksym_ref(__extended_idmap_trampoline);
}
}
...@@ -1546,7 +1546,7 @@ static void unhandled_cp_access(struct kvm_vcpu *vcpu,
			    struct sys_reg_params *params)
{
	u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
-	int cp;
+	int cp = -1;
	switch(hsr_ec) {
	case ESR_ELx_EC_CP15_32:
...@@ -1558,7 +1558,7 @@ static void unhandled_cp_access(struct kvm_vcpu *vcpu,
		cp = 14;
		break;
	default:
-		WARN_ON((cp = -1));
+		WARN_ON(1);
	}
	kvm_err("Unsupported guest CP%d access at: %08lx\n",
......
...@@ -397,7 +397,7 @@ static int __init init_axis_flash(void)
	if (!romfs_in_flash) {
		/* Create an RAM device for the root partition (romfs). */
-#if !defined(CONFIG_MTD_MTDRAM) || (CONFIG_MTDRAM_TOTAL_SIZE != 0) || (CONFIG_MTDRAM_ABS_POS != 0)
+#if !defined(CONFIG_MTD_MTDRAM) || (CONFIG_MTDRAM_TOTAL_SIZE != 0)
		/* No use trying to boot this kernel from RAM. Panic! */
		printk(KERN_EMERG "axisflashmap: Cannot create an MTD RAM "
		       "device due to kernel (mis)configuration!\n");
......
...@@ -320,7 +320,7 @@ static int __init init_axis_flash(void)
	 * but its size must be configured as 0 so as not to conflict
	 * with our usage.
	 */
-#if !defined(CONFIG_MTD_MTDRAM) || (CONFIG_MTDRAM_TOTAL_SIZE != 0) || (CONFIG_MTDRAM_ABS_POS != 0)
+#if !defined(CONFIG_MTD_MTDRAM) || (CONFIG_MTDRAM_TOTAL_SIZE != 0)
	if (!romfs_in_flash && !nand_boot) {
		printk(KERN_EMERG "axisflashmap: Cannot create an MTD RAM "
		       "device; configure CONFIG_MTD_MTDRAM with size = 0!\n");
......
...@@ -82,9 +82,6 @@ extern pgprot_t pci_phys_mem_access_prot(struct file *file,
					 pgprot_t prot);
#define HAVE_ARCH_PCI_RESOURCE_TO_USER
extern void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc,
resource_size_t *start, resource_size_t *end);
extern void pcibios_setup_bus_devices(struct pci_bus *bus);
extern void pcibios_setup_bus_self(struct pci_bus *bus);
......
...@@ -218,33 +218,6 @@ static struct resource *__pci_mmap_make_offset(struct pci_dev *dev,
	return NULL;
}
/*
* Set vm_page_prot of VMA, as appropriate for this architecture, for a pci
* device mapping.
*/
static pgprot_t __pci_mmap_set_pgprot(struct pci_dev *dev, struct resource *rp,
pgprot_t protection,
enum pci_mmap_state mmap_state,
int write_combine)
{
pgprot_t prot = protection;
/* Write combine is always 0 on non-memory space mappings. On
* memory space, if the user didn't pass 1, we check for a
* "prefetchable" resource. This is a bit hackish, but we use
* this to workaround the inability of /sysfs to provide a write
* combine bit
*/
if (mmap_state != pci_mmap_mem)
write_combine = 0;
else if (write_combine == 0) {
if (rp->flags & IORESOURCE_PREFETCH)
write_combine = 1;
}
return pgprot_noncached(prot);
}
/*
 * This one is used by /dev/mem and fbdev who have no clue about the
 * PCI device, it tries to find the PCI device first and calls the
...@@ -317,9 +290,7 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
		return -EINVAL;
	vma->vm_pgoff = offset >> PAGE_SHIFT;
-	vma->vm_page_prot = __pci_mmap_set_pgprot(dev, rp,
-						  vma->vm_page_prot,
-						  mmap_state, write_combine);
+	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
	ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
			      vma->vm_end - vma->vm_start, vma->vm_page_prot);
...@@ -473,39 +444,25 @@ void pci_resource_to_user(const struct pci_dev *dev, int bar,
			  const struct resource *rsrc,
			  resource_size_t *start, resource_size_t *end)
{
-	struct pci_controller *hose = pci_bus_to_host(dev->bus);
-	resource_size_t offset = 0;
-
-	if (hose == NULL)
-		return;
-
-	if (rsrc->flags & IORESOURCE_IO)
-		offset = (unsigned long)hose->io_base_virt - _IO_BASE;
-
-	/* We pass a fully fixed up address to userland for MMIO instead of
-	 * a BAR value because X is lame and expects to be able to use that
-	 * to pass to /dev/mem !
-	 *
-	 * That means that we'll have potentially 64 bits values where some
-	 * userland apps only expect 32 (like X itself since it thinks only
-	 * Sparc has 64 bits MMIO) but if we don't do that, we break it on
-	 * 32 bits CHRPs :-(
-	 *
-	 * Hopefully, the sysfs insterface is immune to that gunk. Once X
-	 * has been fixed (and the fix spread enough), we can re-enable the
-	 * 2 lines below and pass down a BAR value to userland. In that case
-	 * we'll also have to re-enable the matching code in
-	 * __pci_mmap_make_offset().
-	 *
-	 * BenH.
-	 */
-#if 0
-	else if (rsrc->flags & IORESOURCE_MEM)
-		offset = hose->pci_mem_offset;
-#endif
-
-	*start = rsrc->start - offset;
-	*end = rsrc->end - offset;
+	struct pci_bus_region region;
+
+	if (rsrc->flags & IORESOURCE_IO) {
+		pcibios_resource_to_bus(dev->bus, &region,
+					(struct resource *) rsrc);
+		*start = region.start;
+		*end = region.end;
+		return;
+	}
+
+	/* We pass a CPU physical address to userland for MMIO instead of a
+	 * BAR value because X is lame and expects to be able to use that
+	 * to pass to /dev/mem!
+	 *
+	 * That means we may have 64-bit values where some apps only expect
+	 * 32 (like X itself since it thinks only Sparc has 64-bit MMIO).
+	 */
+	*start = rsrc->start;
+	*end = rsrc->end;
}
/**
......
...@@ -1488,6 +1488,7 @@ config CPU_MIPS64_R2
	select CPU_SUPPORTS_HIGHMEM
	select CPU_SUPPORTS_HUGEPAGES
	select CPU_SUPPORTS_MSA
+	select HAVE_KVM
	help
	  Choose this option to build a kernel for release 2 or later of the
	  MIPS64 architecture. Many modern embedded systems with a 64-bit
...@@ -1505,6 +1506,7 @@ config CPU_MIPS64_R6
	select CPU_SUPPORTS_MSA
	select GENERIC_CSUM
	select MIPS_O32_FP64_SUPPORT if MIPS32_O32
+	select HAVE_KVM
	help
	  Choose this option to build a kernel for release 6 or later of the
	  MIPS64 architecture. New MIPS processors, starting with the Warrior
......
...@@ -45,7 +45,7 @@
/*
 * Returns the kernel segment base of a given address
 */
-#define KSEGX(a)		((_ACAST32_ (a)) & 0xe0000000)
+#define KSEGX(a)		((_ACAST32_(a)) & _ACAST32_(0xe0000000))
/*
 * Returns the physical address of a CKSEGx / XKPHYS address
......
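An aside on the KSEGX() change above, with a standalone sketch: on a 64-bit kernel the address operand is sign-extended, and masking it with a plain unsigned 0xe0000000 drops the upper 32 bits, so the result can no longer compare equal to sign-extended segment constants. Wrapping the mask in _ACAST32_ sign-extends it too. All values below are illustrative.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t a = (int32_t)0x80001000;	/* CKSEG0 address, sign-extended */

	/* old mask: zero-extended, upper bits of the result are lost */
	uint64_t wrong = (uint64_t)a & 0xe0000000ULL;
	/* new mask: cast to s32 first, so it sign-extends like the address */
	uint64_t right = (uint64_t)a & (uint64_t)(int32_t)0xe0000000;

	printf("wrong = 0x%llx, right = 0x%llx\n",
	       (unsigned long long)wrong, (unsigned long long)right);
	return 0;	/* wrong = 0x80000000, right = 0xffffffff80000000 */
}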
[1 file diff collapsed in the original page view.]
...@@ -55,7 +55,7 @@
#define cpu_has_mipsmt		0
#define cpu_has_vint		0
#define cpu_has_veic		0
-#define cpu_hwrena_impl_bits	0xc0000000
+#define cpu_hwrena_impl_bits	(MIPS_HWRENA_IMPL1 | MIPS_HWRENA_IMPL2)
#define cpu_has_wsbh		1
#define cpu_has_rixi		(cpu_data[0].cputype != CPU_CAVIUM_OCTEON)
......
...@@ -53,7 +53,7 @@
#define CP0_SEGCTL2 $5, 4
#define CP0_WIRED $6
#define CP0_INFO $7
-#define CP0_HWRENA $7, 0
+#define CP0_HWRENA $7
#define CP0_BADVADDR $8
#define CP0_BADINSTR $8, 1
#define CP0_COUNT $9
...@@ -533,6 +533,7 @@
#define TX49_CONF_CWFON		(_ULCAST_(1) << 27)
/* Bits specific to the MIPS32/64 PRA. */
#define MIPS_CONF_VI (_ULCAST_(1) << 3)
#define MIPS_CONF_MT		(_ULCAST_(7) <<  7)
#define MIPS_CONF_MT_TLB	(_ULCAST_(1) <<  7)
#define MIPS_CONF_MT_FTLB	(_ULCAST_(4) <<  7)
...@@ -853,6 +854,24 @@
#define MIPS_CDMMBASE_ADDR_SHIFT 11
#define MIPS_CDMMBASE_ADDR_START 15
/* RDHWR register numbers */
#define MIPS_HWR_CPUNUM 0 /* CPU number */
#define MIPS_HWR_SYNCISTEP 1 /* SYNCI step size */
#define MIPS_HWR_CC 2 /* Cycle counter */
#define MIPS_HWR_CCRES 3 /* Cycle counter resolution */
#define MIPS_HWR_ULR 29 /* UserLocal */
#define MIPS_HWR_IMPL1 30 /* Implementation dependent */
#define MIPS_HWR_IMPL2 31 /* Implementation dependent */
/* Bits in HWREna register */
#define MIPS_HWRENA_CPUNUM (_ULCAST_(1) << MIPS_HWR_CPUNUM)
#define MIPS_HWRENA_SYNCISTEP (_ULCAST_(1) << MIPS_HWR_SYNCISTEP)
#define MIPS_HWRENA_CC (_ULCAST_(1) << MIPS_HWR_CC)
#define MIPS_HWRENA_CCRES (_ULCAST_(1) << MIPS_HWR_CCRES)
#define MIPS_HWRENA_ULR (_ULCAST_(1) << MIPS_HWR_ULR)
#define MIPS_HWRENA_IMPL1 (_ULCAST_(1) << MIPS_HWR_IMPL1)
#define MIPS_HWRENA_IMPL2 (_ULCAST_(1) << MIPS_HWR_IMPL2)
/*
 * Bitfields in the TX39 family CP0 Configuration Register 3
 */
......
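For illustration (not from the patch itself): composing an HWREna value out of the new MIPS_HWRENA_* bits, mirroring what the configure_hwrena() change further down does in place of the old 0x0000000f and (1 << 29) magic numbers. The two feature flags are assumptions for the sketch.

#include <stdint.h>
#include <stdio.h>

#define MIPS_HWRENA_CPUNUM	(1u << 0)	/* MIPS_HWR_CPUNUM */
#define MIPS_HWRENA_SYNCISTEP	(1u << 1)	/* MIPS_HWR_SYNCISTEP */
#define MIPS_HWRENA_CC		(1u << 2)	/* MIPS_HWR_CC */
#define MIPS_HWRENA_CCRES	(1u << 3)	/* MIPS_HWR_CCRES */
#define MIPS_HWRENA_ULR		(1u << 29)	/* MIPS_HWR_ULR */

int main(void)
{
	int has_r2_r6 = 1, has_userlocal = 1;	/* hypothetical CPU features */
	uint32_t hwrena = 0;	/* cpu_hwrena_impl_bits would seed this */

	if (has_r2_r6)
		hwrena |= MIPS_HWRENA_CPUNUM | MIPS_HWRENA_SYNCISTEP |
			  MIPS_HWRENA_CC | MIPS_HWRENA_CCRES;
	if (has_userlocal)
		hwrena |= MIPS_HWRENA_ULR;

	printf("HWREna = 0x%08x\n", hwrena);	/* 0x2000000f */
	return 0;
}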
...@@ -80,16 +80,6 @@ extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
#define HAVE_ARCH_PCI_RESOURCE_TO_USER
static inline void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc, resource_size_t *start,
resource_size_t *end)
{
phys_addr_t size = resource_size(rsrc);
*start = fixup_bigphys_addr(rsrc->start, size);
*end = rsrc->start + size;
}
/*
 * Dynamic DMA mapping stuff.
 * MIPS has everything mapped statically.
......
...@@ -21,6 +21,7 @@ extern void *set_vi_handler(int n, vi_handler_t addr);
extern void *set_except_vector(int n, void *addr);
extern unsigned long ebase;
+extern unsigned int hwrena;
extern void per_cpu_trap_init(bool);
extern void cpu_cache_init(void);
......
...@@ -104,8 +104,13 @@ Ip_u1s2(_bltz);
Ip_u1s2(_bltzl);
Ip_u1u2s3(_bne);
Ip_u2s3u1(_cache);
+Ip_u1u2(_cfc1);
+Ip_u2u1(_cfcmsa);
+Ip_u1u2(_ctc1);
+Ip_u2u1(_ctcmsa);
Ip_u2u1s3(_daddiu);
Ip_u3u1u2(_daddu);
+Ip_u1(_di);
Ip_u2u1msbu3(_dins);
Ip_u2u1msbu3(_dinsm);
Ip_u1u2(_divu);
...@@ -141,6 +146,8 @@ Ip_u1(_mfhi);
Ip_u1(_mflo);
Ip_u1u2u3(_mtc0);
Ip_u1u2u3(_mthc0);
+Ip_u1(_mthi);
+Ip_u1(_mtlo);
Ip_u3u1u2(_mul);
Ip_u3u1u2(_or);
Ip_u2u1u3(_ori);
......
...@@ -21,20 +21,20 @@
enum major_op {
	spec_op, bcond_op, j_op, jal_op,
	beq_op, bne_op, blez_op, bgtz_op,
-	addi_op, cbcond0_op = addi_op, addiu_op, slti_op, sltiu_op,
+	addi_op, pop10_op = addi_op, addiu_op, slti_op, sltiu_op,
	andi_op, ori_op, xori_op, lui_op,
	cop0_op, cop1_op, cop2_op, cop1x_op,
	beql_op, bnel_op, blezl_op, bgtzl_op,
-	daddi_op, cbcond1_op = daddi_op, daddiu_op, ldl_op, ldr_op,
+	daddi_op, pop30_op = daddi_op, daddiu_op, ldl_op, ldr_op,
	spec2_op, jalx_op, mdmx_op, msa_op = mdmx_op, spec3_op,
	lb_op, lh_op, lwl_op, lw_op,
	lbu_op, lhu_op, lwr_op, lwu_op,
	sb_op, sh_op, swl_op, sw_op,
	sdl_op, sdr_op, swr_op, cache_op,
	ll_op, lwc1_op, lwc2_op, bc6_op = lwc2_op, pref_op,
-	lld_op, ldc1_op, ldc2_op, beqzcjic_op = ldc2_op, ld_op,
+	lld_op, ldc1_op, ldc2_op, pop66_op = ldc2_op, ld_op,
	sc_op, swc1_op, swc2_op, balc6_op = swc2_op, major_3b_op,
-	scd_op, sdc1_op, sdc2_op, bnezcjialc_op = sdc2_op, sd_op
+	scd_op, sdc1_op, sdc2_op, pop76_op = sdc2_op, sd_op
};
/*
...@@ -92,6 +92,50 @@ enum spec3_op {
	rdhwr_op =  0x3b
};
/*
* Bits 10-6 minor opcode for r6 spec mult/div encodings
*/
enum mult_op {
mult_mult_op = 0x0,
mult_mul_op = 0x2,
mult_muh_op = 0x3,
};
enum multu_op {
multu_multu_op = 0x0,
multu_mulu_op = 0x2,
multu_muhu_op = 0x3,
};
enum div_op {
div_div_op = 0x0,
div_div6_op = 0x2,
div_mod_op = 0x3,
};
enum divu_op {
divu_divu_op = 0x0,
divu_divu6_op = 0x2,
divu_modu_op = 0x3,
};
enum dmult_op {
dmult_dmult_op = 0x0,
dmult_dmul_op = 0x2,
dmult_dmuh_op = 0x3,
};
enum dmultu_op {
dmultu_dmultu_op = 0x0,
dmultu_dmulu_op = 0x2,
dmultu_dmuhu_op = 0x3,
};
enum ddiv_op {
ddiv_ddiv_op = 0x0,
ddiv_ddiv6_op = 0x2,
ddiv_dmod_op = 0x3,
};
enum ddivu_op {
ddivu_ddivu_op = 0x0,
ddivu_ddivu6_op = 0x2,
ddivu_dmodu_op = 0x3,
};
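A hedged sketch of how these minor opcodes are used: MIPS R6 keeps the classic SPECIAL mult/div function codes and distinguishes the new two-register forms in bits 10..6 (the sa field), which is what the enums above enumerate. The sample word below is hand-assembled for illustration.

#include <stdint.h>
#include <stdio.h>

enum mult_op { mult_mult_op = 0x0, mult_mul_op = 0x2, mult_muh_op = 0x3 };

/* For a SPECIAL instruction whose func field (bits 5..0) is mult (0x18). */
static const char *mult_minor(uint32_t insn)
{
	switch ((insn >> 6) & 0x1f) {		/* bits 10..6 */
	case mult_mul_op:	return "mul (R6)";
	case mult_muh_op:	return "muh (R6)";
	default:		return "mult (pre-R6)";
	}
}

int main(void)
{
	/* mul $t0, $t1, $t2: rs=9, rt=10, rd=8, sa=2, func=0x18 */
	uint32_t insn = (9u << 21) | (10u << 16) | (8u << 11) | (2u << 6) | 0x18;

	printf("%s\n", mult_minor(insn));	/* mul (R6) */
	return 0;
}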
/*
 * rt field of bcond opcodes.
 */
...@@ -103,7 +147,7 @@ enum rt_op {
	bltzal_op, bgezal_op, bltzall_op, bgezall_op,
	rt_op_0x14, rt_op_0x15, rt_op_0x16, rt_op_0x17,
	rt_op_0x18, rt_op_0x19, rt_op_0x1a, rt_op_0x1b,
-	bposge32_op, rt_op_0x1d, rt_op_0x1e, rt_op_0x1f
+	bposge32_op, rt_op_0x1d, rt_op_0x1e, synci_op
};
/*
...@@ -237,6 +281,21 @@ enum bshfl_func {
	seh_op = 0x18,
};
/*
* MSA minor opcodes.
*/
enum msa_func {
msa_elm_op = 0x19,
};
/*
* MSA ELM opcodes.
*/
enum msa_elm {
msa_ctc_op = 0x3e,
msa_cfc_op = 0x7e,
};
/*
 * func field for MSA MI10 format.
 */
...@@ -264,7 +323,7 @@ enum mm_major_op {
	mm_pool32b_op, mm_pool16b_op, mm_lhu16_op, mm_andi16_op,
	mm_addiu32_op, mm_lhu32_op, mm_sh32_op, mm_lh32_op,
	mm_pool32i_op, mm_pool16c_op, mm_lwsp16_op, mm_pool16d_op,
-	mm_ori32_op, mm_pool32f_op, mm_reserved1_op, mm_reserved2_op,
+	mm_ori32_op, mm_pool32f_op, mm_pool32s_op, mm_reserved2_op,
	mm_pool32c_op, mm_lwgp16_op, mm_lw16_op, mm_pool16e_op,
	mm_xori32_op, mm_jals32_op, mm_addiupc_op, mm_reserved3_op,
	mm_reserved4_op, mm_pool16f_op, mm_sb16_op, mm_beqz16_op,
...@@ -360,7 +419,10 @@ enum mm_32axf_minor_op {
	mm_mflo32_op = 0x075,
	mm_jalrhb_op = 0x07c,
	mm_tlbwi_op = 0x08d,
+	mm_mthi32_op = 0x0b5,
	mm_tlbwr_op = 0x0cd,
+	mm_mtlo32_op = 0x0f5,
+	mm_di_op = 0x11d,
	mm_jalrs_op = 0x13c,
	mm_jalrshb_op = 0x17c,
	mm_sync_op = 0x1ad,
...@@ -478,6 +540,13 @@ enum mm_32f_73_minor_op {
	mm_fcvts1_op = 0xed,
};
/*
* (microMIPS) POOL32S minor opcodes.
*/
enum mm_32s_minor_op {
mm_32s_elm_op = 0x16,
};
/*
 * (microMIPS) POOL16C minor opcodes.
 */
...@@ -586,6 +655,36 @@ struct r_format { /* Register format */
	;))))))
};
struct c0r_format { /* C0 register format */
__BITFIELD_FIELD(unsigned int opcode : 6,
__BITFIELD_FIELD(unsigned int rs : 5,
__BITFIELD_FIELD(unsigned int rt : 5,
__BITFIELD_FIELD(unsigned int rd : 5,
__BITFIELD_FIELD(unsigned int z: 8,
__BITFIELD_FIELD(unsigned int sel : 3,
;))))))
};
struct mfmc0_format { /* MFMC0 register format */
__BITFIELD_FIELD(unsigned int opcode : 6,
__BITFIELD_FIELD(unsigned int rs : 5,
__BITFIELD_FIELD(unsigned int rt : 5,
__BITFIELD_FIELD(unsigned int rd : 5,
__BITFIELD_FIELD(unsigned int re : 5,
__BITFIELD_FIELD(unsigned int sc : 1,
__BITFIELD_FIELD(unsigned int : 2,
__BITFIELD_FIELD(unsigned int sel : 3,
;))))))))
};
struct co_format { /* C0 CO format */
__BITFIELD_FIELD(unsigned int opcode : 6,
__BITFIELD_FIELD(unsigned int co : 1,
__BITFIELD_FIELD(unsigned int code : 19,
__BITFIELD_FIELD(unsigned int func : 6,
;))))
};
struct p_format {	/* Performance counter format (R10000) */
	__BITFIELD_FIELD(unsigned int opcode : 6,
	__BITFIELD_FIELD(unsigned int rs : 5,
...@@ -937,6 +1036,9 @@ union mips_instruction {
	struct u_format u_format;
	struct c_format c_format;
	struct r_format r_format;
+	struct c0r_format c0r_format;
+	struct mfmc0_format mfmc0_format;
+	struct co_format co_format;
	struct p_format p_format;
	struct f_format f_format;
	struct ma_format ma_format;
......
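For illustration: decoding an MFC0 word through a c0r_format-style view. In the kernel, __BITFIELD_FIELD orders the fields to match the host endianness; this standalone sketch assumes a little-endian host and therefore declares the fields low bit first.

#include <stdint.h>
#include <stdio.h>

struct c0r_format {		/* C0 register format, little-endian layout */
	uint32_t sel    : 3;
	uint32_t z      : 8;
	uint32_t rd     : 5;
	uint32_t rt     : 5;
	uint32_t rs     : 5;
	uint32_t opcode : 6;
};

union mips_instruction {
	uint32_t word;
	struct c0r_format c0r_format;
};

int main(void)
{
	/* mfc0 $t0, $12 (Status): opcode=0x10 (cop0), rs=0, rt=8, rd=12 */
	union mips_instruction insn = { .word = 0x40086000 };

	printf("opcode=%u rs=%u rt=%u rd=%u sel=%u\n",
	       insn.c0r_format.opcode, insn.c0r_format.rs,
	       insn.c0r_format.rt, insn.c0r_format.rd, insn.c0r_format.sel);
	return 0;		/* opcode=16 rs=0 rt=8 rd=12 sel=0 */
}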
...@@ -339,71 +339,9 @@ void output_pm_defines(void)
}
#endif
void output_cpuinfo_defines(void)
{
COMMENT(" MIPS cpuinfo offsets. ");
DEFINE(CPUINFO_SIZE, sizeof(struct cpuinfo_mips));
#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
OFFSET(CPUINFO_ASID_MASK, cpuinfo_mips, asid_mask);
#endif
}
void output_kvm_defines(void)
{
	COMMENT(" KVM/MIPS Specfic offsets. ");
DEFINE(VCPU_ARCH_SIZE, sizeof(struct kvm_vcpu_arch));
OFFSET(VCPU_RUN, kvm_vcpu, run);
OFFSET(VCPU_HOST_ARCH, kvm_vcpu, arch);
OFFSET(VCPU_HOST_EBASE, kvm_vcpu_arch, host_ebase);
OFFSET(VCPU_GUEST_EBASE, kvm_vcpu_arch, guest_ebase);
OFFSET(VCPU_HOST_STACK, kvm_vcpu_arch, host_stack);
OFFSET(VCPU_HOST_GP, kvm_vcpu_arch, host_gp);
OFFSET(VCPU_HOST_CP0_BADVADDR, kvm_vcpu_arch, host_cp0_badvaddr);
OFFSET(VCPU_HOST_CP0_CAUSE, kvm_vcpu_arch, host_cp0_cause);
OFFSET(VCPU_HOST_EPC, kvm_vcpu_arch, host_cp0_epc);
OFFSET(VCPU_HOST_ENTRYHI, kvm_vcpu_arch, host_cp0_entryhi);
OFFSET(VCPU_GUEST_INST, kvm_vcpu_arch, guest_inst);
OFFSET(VCPU_R0, kvm_vcpu_arch, gprs[0]);
OFFSET(VCPU_R1, kvm_vcpu_arch, gprs[1]);
OFFSET(VCPU_R2, kvm_vcpu_arch, gprs[2]);
OFFSET(VCPU_R3, kvm_vcpu_arch, gprs[3]);
OFFSET(VCPU_R4, kvm_vcpu_arch, gprs[4]);
OFFSET(VCPU_R5, kvm_vcpu_arch, gprs[5]);
OFFSET(VCPU_R6, kvm_vcpu_arch, gprs[6]);
OFFSET(VCPU_R7, kvm_vcpu_arch, gprs[7]);
OFFSET(VCPU_R8, kvm_vcpu_arch, gprs[8]);
OFFSET(VCPU_R9, kvm_vcpu_arch, gprs[9]);
OFFSET(VCPU_R10, kvm_vcpu_arch, gprs[10]);
OFFSET(VCPU_R11, kvm_vcpu_arch, gprs[11]);
OFFSET(VCPU_R12, kvm_vcpu_arch, gprs[12]);
OFFSET(VCPU_R13, kvm_vcpu_arch, gprs[13]);
OFFSET(VCPU_R14, kvm_vcpu_arch, gprs[14]);
OFFSET(VCPU_R15, kvm_vcpu_arch, gprs[15]);
OFFSET(VCPU_R16, kvm_vcpu_arch, gprs[16]);
OFFSET(VCPU_R17, kvm_vcpu_arch, gprs[17]);
OFFSET(VCPU_R18, kvm_vcpu_arch, gprs[18]);
OFFSET(VCPU_R19, kvm_vcpu_arch, gprs[19]);
OFFSET(VCPU_R20, kvm_vcpu_arch, gprs[20]);
OFFSET(VCPU_R21, kvm_vcpu_arch, gprs[21]);
OFFSET(VCPU_R22, kvm_vcpu_arch, gprs[22]);
OFFSET(VCPU_R23, kvm_vcpu_arch, gprs[23]);
OFFSET(VCPU_R24, kvm_vcpu_arch, gprs[24]);
OFFSET(VCPU_R25, kvm_vcpu_arch, gprs[25]);
OFFSET(VCPU_R26, kvm_vcpu_arch, gprs[26]);
OFFSET(VCPU_R27, kvm_vcpu_arch, gprs[27]);
OFFSET(VCPU_R28, kvm_vcpu_arch, gprs[28]);
OFFSET(VCPU_R29, kvm_vcpu_arch, gprs[29]);
OFFSET(VCPU_R30, kvm_vcpu_arch, gprs[30]);
OFFSET(VCPU_R31, kvm_vcpu_arch, gprs[31]);
OFFSET(VCPU_LO, kvm_vcpu_arch, lo);
OFFSET(VCPU_HI, kvm_vcpu_arch, hi);
OFFSET(VCPU_PC, kvm_vcpu_arch, pc);
BLANK();
	OFFSET(VCPU_FPR0, kvm_vcpu_arch, fpu.fpr[0]);
	OFFSET(VCPU_FPR1, kvm_vcpu_arch, fpu.fpr[1]);
...@@ -441,14 +379,6 @@ void output_kvm_defines(void)
	OFFSET(VCPU_FCR31, kvm_vcpu_arch, fpu.fcr31);
	OFFSET(VCPU_MSA_CSR, kvm_vcpu_arch, fpu.msacsr);
	BLANK();
OFFSET(VCPU_COP0, kvm_vcpu_arch, cop0);
OFFSET(VCPU_GUEST_KERNEL_ASID, kvm_vcpu_arch, guest_kernel_asid);
OFFSET(VCPU_GUEST_USER_ASID, kvm_vcpu_arch, guest_user_asid);
OFFSET(COP0_TLB_HI, mips_coproc, reg[MIPS_CP0_TLB_HI][0]);
OFFSET(COP0_STATUS, mips_coproc, reg[MIPS_CP0_STATUS][0]);
BLANK();
}
#ifdef CONFIG_MIPS_CPS
......
...@@ -790,7 +790,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
		epc += 4 + (insn.i_format.simmediate << 2);
		regs->cp0_epc = epc;
		break;
-	case beqzcjic_op:
+	case pop66_op:
		if (!cpu_has_mips_r6) {
			ret = -SIGILL;
			break;
...@@ -798,7 +798,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
		/* Compact branch: BEQZC || JIC */
		regs->cp0_epc += 8;
		break;
-	case bnezcjialc_op:
+	case pop76_op:
		if (!cpu_has_mips_r6) {
			ret = -SIGILL;
			break;
...@@ -809,8 +809,8 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
		regs->cp0_epc += 8;
		break;
#endif
-	case cbcond0_op:
-	case cbcond1_op:
+	case pop10_op:
+	case pop30_op:
		/* Only valid for MIPS R6 */
		if (!cpu_has_mips_r6) {
			ret = -SIGILL;
......
...@@ -619,17 +619,17 @@ static int simulate_rdhwr(struct pt_regs *regs, int rd, int rt)
	perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS,
			1, regs, 0);
	switch (rd) {
-	case 0:		/* CPU number */
+	case MIPS_HWR_CPUNUM:		/* CPU number */
		regs->regs[rt] = smp_processor_id();
		return 0;
-	case 1:		/* SYNCI length */
+	case MIPS_HWR_SYNCISTEP:	/* SYNCI length */
		regs->regs[rt] = min(current_cpu_data.dcache.linesz,
				     current_cpu_data.icache.linesz);
		return 0;
-	case 2:		/* Read count register */
+	case MIPS_HWR_CC:		/* Read count register */
		regs->regs[rt] = read_c0_count();
		return 0;
-	case 3:		/* Count register resolution */
+	case MIPS_HWR_CCRES:		/* Count register resolution */
		switch (current_cpu_type()) {
		case CPU_20KC:
		case CPU_25KF:
...@@ -639,7 +639,7 @@ static int simulate_rdhwr(struct pt_regs *regs, int rd, int rt)
			regs->regs[rt] = 2;
		}
		return 0;
-	case 29:
+	case MIPS_HWR_ULR:		/* Read UserLocal register */
		regs->regs[rt] = ti->tp_value;
		return 0;
	default:
...@@ -1859,6 +1859,7 @@ void __noreturn nmi_exception_handler(struct pt_regs *regs)
#define VECTORSPACING 0x100	/* for EI/VI mode */
unsigned long ebase;
+EXPORT_SYMBOL_GPL(ebase);
unsigned long exception_handlers[32];
unsigned long vi_handlers[64];
...@@ -2063,16 +2064,22 @@ static void configure_status(void)
			 status_set);
}
+unsigned int hwrena;
+EXPORT_SYMBOL_GPL(hwrena);
/* configure HWRENA register */
static void configure_hwrena(void)
{
-	unsigned int hwrena = cpu_hwrena_impl_bits;
+	hwrena = cpu_hwrena_impl_bits;
	if (cpu_has_mips_r2_r6)
-		hwrena |= 0x0000000f;
+		hwrena |= MIPS_HWRENA_CPUNUM |
+			  MIPS_HWRENA_SYNCISTEP |
+			  MIPS_HWRENA_CC |
+			  MIPS_HWRENA_CCRES;
	if (!noulri && cpu_has_userlocal)
-		hwrena |= (1 << 29);
+		hwrena |= MIPS_HWRENA_ULR;
	if (hwrena)
		write_c0_hwrena(hwrena);
......
...@@ -17,6 +17,7 @@ if VIRTUALIZATION
config KVM
	tristate "Kernel-based Virtual Machine (KVM) support"
	depends on HAVE_KVM
+	select EXPORT_UASM
	select PREEMPT_NOTIFIERS
	select ANON_INODES
	select KVM_MMIO
......
...@@ -7,9 +7,10 @@ EXTRA_CFLAGS += -Ivirt/kvm -Iarch/mips/kvm
common-objs-$(CONFIG_CPU_HAS_MSA) += msa.o
-kvm-objs := $(common-objs-y) mips.o emulate.o locore.o \
+kvm-objs := $(common-objs-y) mips.o emulate.o entry.o \
	    interrupt.o stats.o commpage.o \
	    dyntrans.o trap_emul.o fpu.o
+kvm-objs += mmu.o
obj-$(CONFIG_KVM)	+= kvm.o
obj-y			+= callback.o tlb.o
...@@ -4,7 +4,7 @@
 * for more details.
 *
 * commpage, currently used for Virtual COP0 registers.
- * Mapped into the guest kernel @ 0x0.
+ * Mapped into the guest kernel @ KVM_GUEST_COMMPAGE_ADDR.
 *
 * Copyright (C) 2012 MIPS Technologies, Inc. All rights reserved.
 * Authors: Sanjay Lal <sanjayl@kymasys.com>
......
...@@ -11,6 +11,7 @@
#include <linux/errno.h>
#include <linux/err.h>
+#include <linux/highmem.h>
#include <linux/kvm_host.h>
#include <linux/module.h>
#include <linux/vmalloc.h>
...@@ -20,125 +21,114 @@
#include "commpage.h"
-#define SYNCI_TEMPLATE  0x041f0000
-#define SYNCI_BASE(x)   (((x) >> 21) & 0x1f)
-#define SYNCI_OFFSET    ((x) & 0xffff)
-#define LW_TEMPLATE     0x8c000000
-#define CLEAR_TEMPLATE  0x00000020
-#define SW_TEMPLATE     0xac000000
+/**
+ * kvm_mips_trans_replace() - Replace trapping instruction in guest memory.
+ * @vcpu:	Virtual CPU.
+ * @opc:	PC of instruction to replace.
+ * @replace:	Instruction to write
+ */
+static int kvm_mips_trans_replace(struct kvm_vcpu *vcpu, u32 *opc,
+				  union mips_instruction replace)
+{
+	unsigned long paddr, flags;
+	void *vaddr;
+
+	if (KVM_GUEST_KSEGX((unsigned long)opc) == KVM_GUEST_KSEG0) {
+		paddr = kvm_mips_translate_guest_kseg0_to_hpa(vcpu,
+							      (unsigned long)opc);
+		vaddr = kmap_atomic(pfn_to_page(PHYS_PFN(paddr)));
+		vaddr += paddr & ~PAGE_MASK;
+		memcpy(vaddr, (void *)&replace, sizeof(u32));
+		local_flush_icache_range((unsigned long)vaddr,
+					 (unsigned long)vaddr + 32);
+		kunmap_atomic(vaddr);
+	} else if (KVM_GUEST_KSEGX((unsigned long) opc) == KVM_GUEST_KSEG23) {
+		local_irq_save(flags);
+		memcpy((void *)opc, (void *)&replace, sizeof(u32));
+		local_flush_icache_range((unsigned long)opc,
+					 (unsigned long)opc + 32);
+		local_irq_restore(flags);
+	} else {
+		kvm_err("%s: Invalid address: %p\n", __func__, opc);
+		return -EFAULT;
+	}
+
+	return 0;
+}
-int kvm_mips_trans_cache_index(uint32_t inst, uint32_t *opc,
+int kvm_mips_trans_cache_index(union mips_instruction inst, u32 *opc,
			       struct kvm_vcpu *vcpu)
{
-	int result = 0;
-	unsigned long kseg0_opc;
-	uint32_t synci_inst = 0x0;
+	union mips_instruction nop_inst = { 0 };
	/* Replace the CACHE instruction, with a NOP */
-	kseg0_opc =
-	    CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
-		       (vcpu, (unsigned long) opc));
-	memcpy((void *)kseg0_opc, (void *)&synci_inst, sizeof(uint32_t));
-	local_flush_icache_range(kseg0_opc, kseg0_opc + 32);
-	return result;
+	return kvm_mips_trans_replace(vcpu, opc, nop_inst);
}
/*
 * Address based CACHE instructions are transformed into synci(s). A little
 * heavy for just D-cache invalidates, but avoids an expensive trap
 */
-int kvm_mips_trans_cache_va(uint32_t inst, uint32_t *opc,
+int kvm_mips_trans_cache_va(union mips_instruction inst, u32 *opc,
			    struct kvm_vcpu *vcpu)
{
-	int result = 0;
-	unsigned long kseg0_opc;
-	uint32_t synci_inst = SYNCI_TEMPLATE, base, offset;
-
-	base = (inst >> 21) & 0x1f;
-	offset = inst & 0xffff;
-	synci_inst |= (base << 21);
-	synci_inst |= offset;
-
-	kseg0_opc =
-	    CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
-		       (vcpu, (unsigned long) opc));
-	memcpy((void *)kseg0_opc, (void *)&synci_inst, sizeof(uint32_t));
-	local_flush_icache_range(kseg0_opc, kseg0_opc + 32);
-	return result;
+	union mips_instruction synci_inst = { 0 };
+
+	synci_inst.i_format.opcode = bcond_op;
+	synci_inst.i_format.rs = inst.i_format.rs;
+	synci_inst.i_format.rt = synci_op;
+	if (cpu_has_mips_r6)
+		synci_inst.i_format.simmediate = inst.spec3_format.simmediate;
+	else
+		synci_inst.i_format.simmediate = inst.i_format.simmediate;
+
+	return kvm_mips_trans_replace(vcpu, opc, synci_inst);
}
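A standalone sketch of the word the i_format assignments above produce: SYNCI is the REGIMM major opcode (bcond_op, 1) with rt = synci_op (0x1f), so the assembled word matches the old SYNCI_TEMPLATE of 0x041f0000 plus base and offset. The operands below are made up.

#include <stdint.h>
#include <stdio.h>

static uint32_t build_synci(unsigned int base, uint16_t offset)
{
	return (1u << 26) | (base << 21) | (0x1fu << 16) | offset;
}

int main(void)
{
	/* synci 8($a0): base register 4 ($a0), offset 8 */
	printf("0x%08x\n", build_synci(4, 8));	/* 0x049f0008 */
	return 0;
}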
-int kvm_mips_trans_mfc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu)
+int kvm_mips_trans_mfc0(union mips_instruction inst, u32 *opc,
+			struct kvm_vcpu *vcpu)
{
-	int32_t rt, rd, sel;
-	uint32_t mfc0_inst;
-	unsigned long kseg0_opc, flags;
-
-	rt = (inst >> 16) & 0x1f;
-	rd = (inst >> 11) & 0x1f;
-	sel = inst & 0x7;
+	union mips_instruction mfc0_inst = { 0 };
+	u32 rd, sel;
+
+	rd = inst.c0r_format.rd;
+	sel = inst.c0r_format.sel;
-	if ((rd == MIPS_CP0_ERRCTL) && (sel == 0)) {
-		mfc0_inst = CLEAR_TEMPLATE;
-		mfc0_inst |= ((rt & 0x1f) << 16);
+	if (rd == MIPS_CP0_ERRCTL && sel == 0) {
+		mfc0_inst.r_format.opcode = spec_op;
+		mfc0_inst.r_format.rd = inst.c0r_format.rt;
+		mfc0_inst.r_format.func = add_op;
	} else {
-		mfc0_inst = LW_TEMPLATE;
-		mfc0_inst |= ((rt & 0x1f) << 16);
-		mfc0_inst |= offsetof(struct kvm_mips_commpage,
-				      cop0.reg[rd][sel]);
-	}
-
-	if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
-		kseg0_opc =
-		    CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
-			       (vcpu, (unsigned long) opc));
-		memcpy((void *)kseg0_opc, (void *)&mfc0_inst, sizeof(uint32_t));
-		local_flush_icache_range(kseg0_opc, kseg0_opc + 32);
-	} else if (KVM_GUEST_KSEGX((unsigned long) opc) == KVM_GUEST_KSEG23) {
-		local_irq_save(flags);
-		memcpy((void *)opc, (void *)&mfc0_inst, sizeof(uint32_t));
-		local_flush_icache_range((unsigned long)opc,
-					 (unsigned long)opc + 32);
-		local_irq_restore(flags);
-	} else {
-		kvm_err("%s: Invalid address: %p\n", __func__, opc);
-		return -EFAULT;
+		mfc0_inst.i_format.opcode = lw_op;
+		mfc0_inst.i_format.rt = inst.c0r_format.rt;
+		mfc0_inst.i_format.simmediate = KVM_GUEST_COMMPAGE_ADDR |
+			offsetof(struct kvm_mips_commpage, cop0.reg[rd][sel]);
+#ifdef CONFIG_CPU_BIG_ENDIAN
+		if (sizeof(vcpu->arch.cop0->reg[0][0]) == 8)
+			mfc0_inst.i_format.simmediate |= 4;
+#endif
	}
-	return 0;
+	return kvm_mips_trans_replace(vcpu, opc, mfc0_inst);
}
-int kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu)
+int kvm_mips_trans_mtc0(union mips_instruction inst, u32 *opc,
+			struct kvm_vcpu *vcpu)
{
-	int32_t rt, rd, sel;
-	uint32_t mtc0_inst = SW_TEMPLATE;
-	unsigned long kseg0_opc, flags;
-
-	rt = (inst >> 16) & 0x1f;
-	rd = (inst >> 11) & 0x1f;
-	sel = inst & 0x7;
-
-	mtc0_inst |= ((rt & 0x1f) << 16);
-	mtc0_inst |= offsetof(struct kvm_mips_commpage, cop0.reg[rd][sel]);
-
-	if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
-		kseg0_opc =
-		    CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
-			       (vcpu, (unsigned long) opc));
-		memcpy((void *)kseg0_opc, (void *)&mtc0_inst, sizeof(uint32_t));
-		local_flush_icache_range(kseg0_opc, kseg0_opc + 32);
-	} else if (KVM_GUEST_KSEGX((unsigned long) opc) == KVM_GUEST_KSEG23) {
-		local_irq_save(flags);
-		memcpy((void *)opc, (void *)&mtc0_inst, sizeof(uint32_t));
-		local_flush_icache_range((unsigned long)opc,
-					 (unsigned long)opc + 32);
-		local_irq_restore(flags);
-	} else {
-		kvm_err("%s: Invalid address: %p\n", __func__, opc);
-		return -EFAULT;
-	}
-
-	return 0;
+	union mips_instruction mtc0_inst = { 0 };
+	u32 rd, sel;
+
+	rd = inst.c0r_format.rd;
+	sel = inst.c0r_format.sel;
+
+	mtc0_inst.i_format.opcode = sw_op;
+	mtc0_inst.i_format.rt = inst.c0r_format.rt;
+	mtc0_inst.i_format.simmediate = KVM_GUEST_COMMPAGE_ADDR |
+		offsetof(struct kvm_mips_commpage, cop0.reg[rd][sel]);
+#ifdef CONFIG_CPU_BIG_ENDIAN
+	if (sizeof(vcpu->arch.cop0->reg[0][0]) == 8)
+		mtc0_inst.i_format.simmediate |= 4;
+#endif
+
+	return kvm_mips_trans_replace(vcpu, opc, mtc0_inst);
}
[2 file diffs collapsed in the original page view.]
...@@ -14,13 +14,16 @@
#include <asm/mipsregs.h>
#include <asm/regdef.h>
+/* preprocessor replaces the fp in ".set fp=64" with $30 otherwise */
+#undef fp
+
	.set	noreorder
	.set	noat
LEAF(__kvm_save_fpu)
	.set	push
-	.set	mips64r2
	SET_HARDFLOAT
+	.set	fp=64
	mfc0	t0, CP0_STATUS
	sll	t0, t0, 5			# is Status.FR set?
	bgez	t0, 1f				# no: skip odd doubles
...@@ -63,8 +66,8 @@ LEAF(__kvm_save_fpu)
LEAF(__kvm_restore_fpu)
	.set	push
-	.set	mips64r2
	SET_HARDFLOAT
+	.set	fp=64
	mfc0	t0, CP0_STATUS
	sll	t0, t0, 5			# is Status.FR set?
	bgez	t0, 1f				# no: skip odd doubles
......
...@@ -22,12 +22,12 @@
#include "interrupt.h"
-void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, uint32_t priority)
+void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, unsigned int priority)
{
	set_bit(priority, &vcpu->arch.pending_exceptions);
}
-void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, uint32_t priority)
+void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int priority)
{
	clear_bit(priority, &vcpu->arch.pending_exceptions);
}
...@@ -114,10 +114,10 @@ void kvm_mips_dequeue_io_int_cb(struct kvm_vcpu *vcpu,
/* Deliver the interrupt of the corresponding priority, if possible. */
int kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
-			    uint32_t cause)
+			    u32 cause)
{
	int allowed = 0;
-	uint32_t exccode;
+	u32 exccode;
	struct kvm_vcpu_arch *arch = &vcpu->arch;
	struct mips_coproc *cop0 = vcpu->arch.cop0;
...@@ -196,12 +196,12 @@ int kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
}
int kvm_mips_irq_clear_cb(struct kvm_vcpu *vcpu, unsigned int priority,
-			  uint32_t cause)
+			  u32 cause)
{
	return 1;
}
-void kvm_mips_deliver_interrupts(struct kvm_vcpu *vcpu, uint32_t cause)
+void kvm_mips_deliver_interrupts(struct kvm_vcpu *vcpu, u32 cause)
{
	unsigned long *pending = &vcpu->arch.pending_exceptions;
	unsigned long *pending_clr = &vcpu->arch.pending_exceptions_clr;
......
...@@ -28,17 +28,13 @@
#define MIPS_EXC_MAX                12
/* XXXSL More to follow */
extern char __kvm_mips_vcpu_run_end[];
extern char mips32_exception[], mips32_exceptionEnd[];
extern char mips32_GuestException[], mips32_GuestExceptionEnd[];
#define C_TI        (_ULCAST_(1) << 30)
#define KVM_MIPS_IRQ_DELIVER_ALL_AT_ONCE (0)
#define KVM_MIPS_IRQ_CLEAR_ALL_AT_ONCE (0)
-void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, uint32_t priority);
-void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, uint32_t priority);
+void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, unsigned int priority);
+void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int priority);
int kvm_mips_pending_timer(struct kvm_vcpu *vcpu);
void kvm_mips_queue_timer_int_cb(struct kvm_vcpu *vcpu);
...@@ -48,7 +44,7 @@ void kvm_mips_queue_io_int_cb(struct kvm_vcpu *vcpu,
void kvm_mips_dequeue_io_int_cb(struct kvm_vcpu *vcpu,
				struct kvm_mips_interrupt *irq);
int kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
-			    uint32_t cause);
+			    u32 cause);
int kvm_mips_irq_clear_cb(struct kvm_vcpu *vcpu, unsigned int priority,
-			  uint32_t cause);
+			  u32 cause);
-void kvm_mips_deliver_interrupts(struct kvm_vcpu *vcpu, uint32_t cause);
+void kvm_mips_deliver_interrupts(struct kvm_vcpu *vcpu, u32 cause);
[3 file diffs collapsed in the original page view.]
...@@ -11,27 +11,6 @@
#include <linux/kvm_host.h>
char *kvm_mips_exit_types_str[MAX_KVM_MIPS_EXIT_TYPES] = {
"WAIT",
"CACHE",
"Signal",
"Interrupt",
"COP0/1 Unusable",
"TLB Mod",
"TLB Miss (LD)",
"TLB Miss (ST)",
"Address Err (ST)",
"Address Error (LD)",
"System Call",
"Reserved Inst",
"Break Inst",
"Trap Inst",
"MSA FPE",
"FPE",
"MSA Disabled",
"D-Cache Flushes",
};
char *kvm_cop0_str[N_MIPS_COPROC_REGS] = {
	"Index",
	"Random",
......
[3 file diffs collapsed in the original page view.]
...@@ -627,8 +627,8 @@ static int isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
				dec_insn.pc_inc +
				dec_insn.next_pc_inc;
		return 1;
-	case cbcond0_op:
-	case cbcond1_op:
+	case pop10_op:
+	case pop30_op:
		if (!cpu_has_mips_r6)
			break;
		if (insn.i_format.rt && !insn.i_format.rs)
...@@ -683,14 +683,14 @@ static int isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
			dec_insn.next_pc_inc;
		return 1;
-	case beqzcjic_op:
+	case pop66_op:
		if (!cpu_has_mips_r6)
			break;
		*contpc = regs->cp0_epc + dec_insn.pc_inc +
			dec_insn.next_pc_inc;
		return 1;
-	case bnezcjialc_op:
+	case pop76_op:
		if (!cpu_has_mips_r6)
			break;
		if (!insn.i_format.rs)
......
...@@ -1206,7 +1206,7 @@ static void probe_pcache(void)
					  c->icache.linesz;
		c->icache.waybit = __ffs(icache_size/c->icache.ways);
-		if (config & 0x8)		/* VI bit */
+		if (config & MIPS_CONF_VI)
			c->icache.flags |= MIPS_CACHE_VTAG;
		/*
......
...@@ -53,8 +53,13 @@ static struct insn insn_table_MM[] = {
	{ insn_bltzl, 0, 0 },
	{ insn_bne, M(mm_bne32_op, 0, 0, 0, 0, 0), RT | RS | BIMM },
	{ insn_cache, M(mm_pool32b_op, 0, 0, mm_cache_func, 0, 0), RT | RS | SIMM },
+	{ insn_cfc1, M(mm_pool32f_op, 0, 0, 0, mm_cfc1_op, mm_32f_73_op), RT | RS },
+	{ insn_cfcmsa, M(mm_pool32s_op, 0, msa_cfc_op, 0, 0, mm_32s_elm_op), RD | RE },
+	{ insn_ctc1, M(mm_pool32f_op, 0, 0, 0, mm_ctc1_op, mm_32f_73_op), RT | RS },
+	{ insn_ctcmsa, M(mm_pool32s_op, 0, msa_ctc_op, 0, 0, mm_32s_elm_op), RD | RE },
	{ insn_daddu, 0, 0 },
	{ insn_daddiu, 0, 0 },
+	{ insn_di, M(mm_pool32a_op, 0, 0, 0, mm_di_op, mm_pool32axf_op), RS },
	{ insn_divu, M(mm_pool32a_op, 0, 0, 0, mm_divu_op, mm_pool32axf_op), RT | RS },
	{ insn_dmfc0, 0, 0 },
	{ insn_dmtc0, 0, 0 },
...@@ -84,6 +89,8 @@ static struct insn insn_table_MM[] = {
	{ insn_mfhi, M(mm_pool32a_op, 0, 0, 0, mm_mfhi32_op, mm_pool32axf_op), RS },
	{ insn_mflo, M(mm_pool32a_op, 0, 0, 0, mm_mflo32_op, mm_pool32axf_op), RS },
	{ insn_mtc0, M(mm_pool32a_op, 0, 0, 0, mm_mtc0_op, mm_pool32axf_op), RT | RS | RD },
+	{ insn_mthi, M(mm_pool32a_op, 0, 0, 0, mm_mthi32_op, mm_pool32axf_op), RS },
+	{ insn_mtlo, M(mm_pool32a_op, 0, 0, 0, mm_mtlo32_op, mm_pool32axf_op), RS },
	{ insn_mul, M(mm_pool32a_op, 0, 0, 0, 0, mm_mul_op), RT | RS | RD },
	{ insn_or, M(mm_pool32a_op, 0, 0, 0, 0, mm_or32_op), RT | RS | RD },
	{ insn_ori, M(mm_ori32_op, 0, 0, 0, 0, 0), RT | RS | UIMM },
...@@ -166,13 +173,15 @@ static void build_insn(u32 **buf, enum opcode opc, ...)
	op = ip->match;
	va_start(ap, opc);
	if (ip->fields & RS) {
-		if (opc == insn_mfc0 || opc == insn_mtc0)
+		if (opc == insn_mfc0 || opc == insn_mtc0 ||
+		    opc == insn_cfc1 || opc == insn_ctc1)
			op |= build_rt(va_arg(ap, u32));
		else
			op |= build_rs(va_arg(ap, u32));
	}
	if (ip->fields & RT) {
-		if (opc == insn_mfc0 || opc == insn_mtc0)
+		if (opc == insn_mfc0 || opc == insn_mtc0 ||
+		    opc == insn_cfc1 || opc == insn_ctc1)
			op |= build_rs(va_arg(ap, u32));
		else
			op |= build_rt(va_arg(ap, u32));
......
[274 more file diffs were collapsed in the original page view and are omitted here.]