Commit 58935f24 authored by Olof Johansson

Merge tag 'omap-for-v4.7/fixes-powedomain' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes

Fixes for omaps for v4.7-rc cycle:

- Fix dra7 for hardware issues limiting L4Per and L3init power domains
  to on state. Without this the devices may not work correctly after
  some time of use because of asymmetric aging. And related to this,
  let's also remove the unusable states.

- Always select omap interconnect for am43x as otherwise the am43x
  only configurations will not boot properly. This can happen easily
  for any product kernels that leave out other SoCs to save memory.

- Fix DSS PLL2 addresses that have gone unused for now

- Select erratum 430973 for omap3, this is now safe to do and can
  save quite a bit of debugging time for people who may have left
  it out.

* tag 'omap-for-v4.7/fixes-powedomain' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
  ARM: OMAP: DRA7: powerdomain data: Remove unused pwrsts_mem_ret
  ARM: OMAP: DRA7: powerdomain data: Remove unused pwrsts_logic_ret
  ARM: OMAP: DRA7: powerdomain data: Set L3init and L4per to ON
  ARM: OMAP2+: Select OMAP_INTERCONNECT for SOC_AM43XX
  ARM: dts: DRA74x: fix DSS PLL2 addresses
  ARM: OMAP2: Enable Errata 430973 for OMAP3
  + Linux 4.7-rc2
Signed-off-by: Olof Johansson <olof@lixom.net>
@@ -128,16 +128,44 @@ X!Edrivers/base/interface.c
 !Edrivers/base/platform.c
 !Edrivers/base/bus.c
 </sect1>
-<sect1><title>Device Drivers DMA Management</title>
+<sect1>
+<title>Buffer Sharing and Synchronization</title>
+<para>
+The dma-buf subsystem provides the framework for sharing buffers
+for hardware (DMA) access across multiple device drivers and
+subsystems, and for synchronizing asynchronous hardware access.
+</para>
+<para>
+This is used, for example, by drm "prime" multi-GPU support, but
+is of course not limited to GPU use cases.
+</para>
+<para>
+The three main components of this are: (1) dma-buf, representing
+a sg_table and exposed to userspace as a file descriptor to allow
+passing between devices, (2) fence, which provides a mechanism
+to signal when one device as finished access, and (3) reservation,
+which manages the shared or exclusive fence(s) associated with
+the buffer.
+</para>
+<sect2><title>dma-buf</title>
 !Edrivers/dma-buf/dma-buf.c
+!Iinclude/linux/dma-buf.h
+</sect2>
+<sect2><title>reservation</title>
+!Pdrivers/dma-buf/reservation.c Reservation Object Overview
+!Edrivers/dma-buf/reservation.c
+!Iinclude/linux/reservation.h
+</sect2>
+<sect2><title>fence</title>
 !Edrivers/dma-buf/fence.c
-!Edrivers/dma-buf/seqno-fence.c
 !Iinclude/linux/fence.h
+!Edrivers/dma-buf/seqno-fence.c
 !Iinclude/linux/seqno-fence.h
-!Edrivers/dma-buf/reservation.c
-!Iinclude/linux/reservation.h
 !Edrivers/dma-buf/sync_file.c
 !Iinclude/linux/sync_file.h
+</sect2>
+</sect1>
+<sect1><title>Device Drivers DMA Management</title>
 !Edrivers/base/dma-coherent.c
 !Edrivers/base/dma-mapping.c
 </sect1>
......
@@ -56,6 +56,7 @@ stable kernels.
 | ARM            | MMU-500         | #841119,#826419 | N/A                     |
 |                |                 |                 |                         |
 | Cavium         | ThunderX ITS    | #22375, #24313  | CAVIUM_ERRATUM_22375    |
+| Cavium         | ThunderX ITS    | #23144          | CAVIUM_ERRATUM_23144    |
 | Cavium         | ThunderX GICv3  | #23154          | CAVIUM_ERRATUM_23154    |
 | Cavium         | ThunderX Core   | #27456          | CAVIUM_ERRATUM_27456    |
 | Cavium         | ThunderX SMMUv2 | #27704          | N/A                     |
@@ -62,6 +62,7 @@ Required properties:
   display-timings are used instead.
 Optional properties (required if display-timings are used):
+ - ddc-i2c-bus: phandle of an I2C controller used for DDC EDID probing
 - display-timings : A node that describes the display timings as defined in
   Documentation/devicetree/bindings/display/display-timing.txt.
 - fsl,data-mapping : should be "spwg" or "jeida"
......
-To support containers, we now allow multiple instances of devpts filesystem,
-such that indices of ptys allocated in one instance are independent of indices
-allocated in other instances of devpts.
-To preserve backward compatibility, this support for multiple instances is
-enabled only if:
-- CONFIG_DEVPTS_MULTIPLE_INSTANCES=y, and
-- '-o newinstance' mount option is specified while mounting devpts
-IOW, devpts now supports both single-instance and multi-instance semantics.
-If CONFIG_DEVPTS_MULTIPLE_INSTANCES=n, there is no change in behavior and
-this referred to as the "legacy" mode. In this mode, the new mount options
-(-o newinstance and -o ptmxmode) will be ignored with a 'bogus option' message
-on console.
-If CONFIG_DEVPTS_MULTIPLE_INSTANCES=y and devpts is mounted without the
-'newinstance' option (as in current start-up scripts) the new mount binds
-to the initial kernel mount of devpts. This mode is referred to as the
-'single-instance' mode and the current, single-instance semantics are
-preserved, i.e PTYs are common across the system.
-The only difference between this single-instance mode and the legacy mode
-is the presence of new, '/dev/pts/ptmx' node with permissions 0000, which
-can safely be ignored.
-If CONFIG_DEVPTS_MULTIPLE_INSTANCES=y and 'newinstance' option is specified,
-the mount is considered to be in the multi-instance mode and a new instance
-of the devpts fs is created. Any ptys created in this instance are independent
-of ptys in other instances of devpts. Like in the single-instance mode, the
-/dev/pts/ptmx node is present. To effectively use the multi-instance mode,
-open of /dev/ptmx must be a redirected to '/dev/pts/ptmx' using a symlink or
-bind-mount.
-Eg: A container startup script could do the following:
-    $ chmod 0666 /dev/pts/ptmx
-    $ rm /dev/ptmx
-    $ ln -s pts/ptmx /dev/ptmx
-    $ ns_exec -cm /bin/bash
-    # We are now in new container
-    $ umount /dev/pts
-    $ mount -t devpts -o newinstance lxcpts /dev/pts
-    $ sshd -p 1234
-where 'ns_exec -cm /bin/bash' calls clone() with CLONE_NEWNS flag and execs
-/bin/bash in the child process. A pty created by the sshd is not visible in
-the original mount of /dev/pts.
+Each mount of the devpts filesystem is now distinct such that ptys
+and their indicies allocated in one mount are independent from ptys
+and their indicies in all other mounts.
+All mounts of the devpts filesystem now create a /dev/pts/ptmx node
+with permissions 0000.
+To retain backwards compatibility the a ptmx device node (aka any node
+created with "mknod name c 5 2") when opened will look for an instance
+of devpts under the name "pts" in the same directory as the ptmx device
+node.
+As an option instead of placing a /dev/ptmx device node at /dev/ptmx
+it is possible to place a symlink to /dev/pts/ptmx at /dev/ptmx or
+to bind mount /dev/ptx/ptmx to /dev/ptmx. If you opt for using
+the devpts filesystem in this manner devpts should be mounted with
+the ptmxmode=0666, or chmod 0666 /dev/pts/ptmx should be called.
 Total count of pty pairs in all instances is limited by sysctls:
 kernel.pty.max = 4096          - global limit
-kernel.pty.reserve = 1024      - reserve for initial instance
+kernel.pty.reserve = 1024      - reserved for filesystems mounted from the initial mount namespace
 kernel.pty.nr                  - current count of ptys
 Per-instance limit could be set by adding mount option "max=<count>".
 This feature was added in kernel 3.4 together with sysctl kernel.pty.reserve.
 In kernels older than 3.4 sysctl kernel.pty.max works as per-instance limit.
-User-space changes
-------------------
-In multi-instance mode (i.e '-o newinstance' mount option is specified at least
-once), following user-space issues should be noted.
-1. If -o newinstance mount option is never used, /dev/pts/ptmx can be ignored
-   and no change is needed to system-startup scripts.
-2. To effectively use multi-instance mode (i.e -o newinstance is specified)
-   administrators or startup scripts should "redirect" open of /dev/ptmx to
-   /dev/pts/ptmx using either a bind mount or symlink.
-    $ mount -t devpts -o newinstance devpts /dev/pts
-   followed by either
-    $ rm /dev/ptmx
-    $ ln -s pts/ptmx /dev/ptmx
-    $ chmod 666 /dev/pts/ptmx
-   or
-    $ mount -o bind /dev/pts/ptmx /dev/ptmx
-3. The '/dev/ptmx -> pts/ptmx' symlink is the preferred method since it
-   enables better error-reporting and treats both single-instance and
-   multi-instance mounts similarly.
-   But this method requires that system-startup scripts set the mode of
-   /dev/pts/ptmx correctly (default mode is 0000). The scripts can set the
-   mode by, either
-   - adding ptmxmode mount option to devpts entry in /etc/fstab, or
-   - using 'chmod 0666 /dev/pts/ptmx'
-4. If multi-instance mode mount is needed for containers, but the system
-   startup scripts have not yet been updated, container-startup scripts
-   should bind mount /dev/ptmx to /dev/pts/ptmx to avoid breaking single-
-   instance mounts.
-   Or, in general, container-startup scripts should use:
-    mount -t devpts -o newinstance -o ptmxmode=0666 devpts /dev/pts
-    if [ ! -L /dev/ptmx ]; then
-        mount -o bind /dev/pts/ptmx /dev/ptmx
-    fi
-   When all devpts mounts are multi-instance, /dev/ptmx can permanently be
-   a symlink to pts/ptmx and the bind mount can be ignored.
-5. A multi-instance mount that is not accompanied by the /dev/ptmx to
-   /dev/pts/ptmx redirection would result in an unusable/unreachable pty.
-    mount -t devpts -o newinstance lxcpts /dev/pts
-   immediately followed by:
-    open("/dev/ptmx")
-   would create a pty, say /dev/pts/7, in the initial kernel mount.
-   But /dev/pts/7 would be invisible in the new mount.
-6. The permissions for /dev/pts/ptmx node should be specified when mounting
-   /dev/pts, using the '-o ptmxmode=%o' mount option (default is 0000).
-    mount -t devpts -o newinstance -o ptmxmode=0644 devpts /dev/pts
-   The permissions can be later be changed as usual with 'chmod'.
-    chmod 666 /dev/pts/ptmx
-7. A mount of devpts without the 'newinstance' option results in binding to
-   initial kernel mount. This behavior while preserving legacy semantics,
-   does not provide strict isolation in a container environment. i.e by
-   mounting devpts without the 'newinstance' option, a container could
-   get visibility into the 'host' or root container's devpts.
-   To workaround this and have strict isolation, all mounts of devpts,
-   including the mount in the root container, should use the newinstance
-   option.
@@ -170,21 +170,92 @@ document trapinfo
     address the kernel panicked.
 end
+define dump_log_idx
+    set $idx = $arg0
+    if ($argc > 1)
+        set $prev_flags = $arg1
+    else
+        set $prev_flags = 0
+    end
+    set $msg = ((struct printk_log *) (log_buf + $idx))
+    set $prefix = 1
+    set $newline = 1
+    set $log = log_buf + $idx + sizeof(*$msg)
+    # prev & LOG_CONT && !(msg->flags & LOG_PREIX)
+    if (($prev_flags & 8) && !($msg->flags & 4))
+        set $prefix = 0
+    end
+    # msg->flags & LOG_CONT
+    if ($msg->flags & 8)
+        # (prev & LOG_CONT && !(prev & LOG_NEWLINE))
+        if (($prev_flags & 8) && !($prev_flags & 2))
+            set $prefix = 0
+        end
+        # (!(msg->flags & LOG_NEWLINE))
+        if (!($msg->flags & 2))
+            set $newline = 0
+        end
+    end
+    if ($prefix)
+        printf "[%5lu.%06lu] ", $msg->ts_nsec / 1000000000, $msg->ts_nsec % 1000000000
+    end
+    if ($msg->text_len != 0)
+        eval "printf \"%%%d.%ds\", $log", $msg->text_len, $msg->text_len
+    end
+    if ($newline)
+        printf "\n"
+    end
+    if ($msg->dict_len > 0)
+        set $dict = $log + $msg->text_len
+        set $idx = 0
+        set $line = 1
+        while ($idx < $msg->dict_len)
+            if ($line)
+                printf " "
+                set $line = 0
+            end
+            set $c = $dict[$idx]
+            if ($c == '\0')
+                printf "\n"
+                set $line = 1
+            else
+                if ($c < ' ' || $c >= 127 || $c == '\\')
+                    printf "\\x%02x", $c
+                else
+                    printf "%c", $c
+                end
+            end
+            set $idx = $idx + 1
+        end
+        printf "\n"
+    end
+end
+document dump_log_idx
+    Dump a single log given its index in the log buffer. The first
+    parameter is the index into log_buf, the second is optional and
+    specified the previous log buffer's flags, used for properly
+    formatting continued lines.
+end
 define dmesg
-    set $i = 0
-    set $end_idx = (log_end - 1) & (log_buf_len - 1)
-    while ($i < logged_chars)
-        set $idx = (log_end - 1 - logged_chars + $i) & (log_buf_len - 1)
-        if ($idx + 100 <= $end_idx) || \
-           ($end_idx <= $idx && $idx + 100 < log_buf_len)
-            printf "%.100s", &log_buf[$idx]
-            set $i = $i + 100
-        else
-            printf "%c", log_buf[$idx]
-            set $i = $i + 1
-        end
+    set $i = log_first_idx
+    set $end_idx = log_first_idx
+    set $prev_flags = 0
+    while (1)
+        set $msg = ((struct printk_log *) (log_buf + $i))
+        if ($msg->len == 0)
+            set $i = 0
+        else
+            dump_log_idx $i $prev_flags
+            set $i = $i + $msg->len
+            set $prev_flags = $msg->flags
+        end
+        if ($i == $end_idx)
+            loop_break
+        end
     end
 end
......
@@ -369,8 +369,6 @@ does not allocate any driver private context space.
 Switch configuration
 --------------------
-- priv_size: additional size needed by the switch driver for its private context
 - tag_protocol: this is to indicate what kind of tagging protocol is supported,
   should be a valid value from the dsa_tag_protocol enum
@@ -416,11 +414,6 @@ PHY devices and link management
   to the switch port MDIO registers. If unavailable return a negative error
   code.
-- poll_link: Function invoked by DSA to query the link state of the switch
-  builtin Ethernet PHYs, per port. This function is responsible for calling
-  netif_carrier_{on,off} when appropriate, and can be used to poll all ports in a
-  single call. Executes from workqueue context.
 - adjust_link: Function invoked by the PHY library when a slave network device
   is attached to a PHY device. This function is responsible for appropriately
   configuring the switch port link parameters: speed, duplex, pause based on
@@ -542,6 +535,16 @@ Bridge layer
 Bridge VLAN filtering
 ---------------------
+- port_vlan_filtering: bridge layer function invoked when the bridge gets
+  configured for turning on or off VLAN filtering. If nothing specific needs to
+  be done at the hardware level, this callback does not need to be implemented.
+  When VLAN filtering is turned on, the hardware must be programmed with
+  rejecting 802.1Q frames which have VLAN IDs outside of the programmed allowed
+  VLAN ID map/rules. If there is no PVID programmed into the switch port,
+  untagged frames must be rejected as well. When turned off the switch must
+  accept any 802.1Q frames irrespective of their VLAN ID, and untagged frames are
+  allowed.
 - port_vlan_prepare: bridge layer function invoked when the bridge prepares the
   configuration of a VLAN on the given port. If the operation is not supported
   by the hardware, this function should return -EOPNOTSUPP to inform the bridge
......
@@ -1036,15 +1036,17 @@ proxy_arp_pvlan - BOOLEAN
 shared_media - BOOLEAN
     Send(router) or accept(host) RFC1620 shared media redirects.
-    Overrides ip_secure_redirects.
+    Overrides secure_redirects.
     shared_media for the interface will be enabled if at least one of
     conf/{all,interface}/shared_media is set to TRUE,
     it will be disabled otherwise
     default TRUE
 secure_redirects - BOOLEAN
-    Accept ICMP redirect messages only for gateways,
-    listed in default gateway list.
+    Accept ICMP redirect messages only to gateways listed in the
+    interface's current gateway list. Even if disabled, RFC1122 redirect
+    rules still apply.
+    Overridden by shared_media.
     secure_redirects for the interface will be enabled if at least one of
     conf/{all,interface}/secure_redirects is set to TRUE,
     it will be disabled otherwise
......
@@ -826,7 +826,8 @@ The keyctl syscall functions are:
 (*) Compute a Diffie-Hellman shared secret or public key
     long keyctl(KEYCTL_DH_COMPUTE, struct keyctl_dh_params *params,
-                char *buffer, size_t buflen);
+                char *buffer, size_t buflen,
+                void *reserved);
     The params struct contains serial numbers for three keys:
@@ -843,6 +844,8 @@ The keyctl syscall functions are:
     public key. If the base is the remote public key, the result is
     the shared secret.
+    The reserved argument must be set to NULL.
     The buffer length must be at least the length of the prime, or zero.
     If the buffer length is nonzero, the length of the result is
......
@@ -7990,6 +7990,7 @@ Q: http://patchwork.ozlabs.org/project/netdev/list/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
 S: Odd Fixes
+F: Documentation/devicetree/bindings/net/
 F: drivers/net/
 F: include/linux/if_*
 F: include/linux/netdevice.h
@@ -8945,6 +8946,7 @@ M: Linus Walleij <linus.walleij@linaro.org>
 L: linux-gpio@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl.git
 S: Maintained
+F: Documentation/devicetree/bindings/pinctrl/
 F: drivers/pinctrl/
 F: include/linux/pinctrl/
......
 VERSION = 4
 PATCHLEVEL = 7
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
 NAME = Psychotic Stoned Sheep
 # *DOCUMENTATION*
......
@@ -107,8 +107,8 @@
 reg = <0x58000000 0x80>,
       <0x58004054 0x4>,
       <0x58004300 0x20>,
-      <0x58005054 0x4>,
-      <0x58005300 0x20>;
+      <0x58009054 0x4>,
+      <0x58009300 0x20>;
 reg-names = "dss", "pll1_clkctrl", "pll1",
             "pll2_clkctrl", "pll2";
......
@@ -733,8 +733,8 @@ static int vfp_set(struct task_struct *target,
     if (ret)
         return ret;
-    vfp_flush_hwstate(thread);
     thread->vfpstate.hard = new_vfp;
+    vfp_flush_hwstate(thread);
     return 0;
 }
......
@@ -17,6 +17,7 @@ config ARCH_OMAP3
     select PM_OPP if PM
     select PM if CPU_IDLE
     select SOC_HAS_OMAP2_SDRC
+    select ARM_ERRATA_430973
 config ARCH_OMAP4
     bool "TI OMAP4"
@@ -36,6 +37,7 @@ config ARCH_OMAP4
     select PM if CPU_IDLE
     select ARM_ERRATA_754322
     select ARM_ERRATA_775420
+    select OMAP_INTERCONNECT
 config SOC_OMAP5
     bool "TI OMAP5"
......
@@ -36,14 +36,7 @@ static struct powerdomain iva_7xx_pwrdm = {
 .prcm_offs = DRA7XX_PRM_IVA_INST,
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
-.pwrsts_logic_ret = PWRSTS_OFF,
 .banks = 4,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* hwa_mem */
-  [1] = PWRSTS_OFF_RET, /* sl2_mem */
-  [2] = PWRSTS_OFF_RET, /* tcm1_mem */
-  [3] = PWRSTS_OFF_RET, /* tcm2_mem */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* hwa_mem */
   [1] = PWRSTS_ON, /* sl2_mem */
@@ -76,12 +69,7 @@ static struct powerdomain ipu_7xx_pwrdm = {
 .prcm_offs = DRA7XX_PRM_IPU_INST,
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
-.pwrsts_logic_ret = PWRSTS_OFF,
 .banks = 2,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* aessmem */
-  [1] = PWRSTS_OFF_RET, /* periphmem */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* aessmem */
   [1] = PWRSTS_ON, /* periphmem */
@@ -95,11 +83,7 @@ static struct powerdomain dss_7xx_pwrdm = {
 .prcm_offs = DRA7XX_PRM_DSS_INST,
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
-.pwrsts_logic_ret = PWRSTS_OFF,
 .banks = 1,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* dss_mem */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* dss_mem */
 },
@@ -111,13 +95,8 @@ static struct powerdomain l4per_7xx_pwrdm = {
 .name = "l4per_pwrdm",
 .prcm_offs = DRA7XX_PRM_L4PER_INST,
 .prcm_partition = DRA7XX_PRM_PARTITION,
-.pwrsts = PWRSTS_RET_ON,
-.pwrsts_logic_ret = PWRSTS_RET,
+.pwrsts = PWRSTS_ON,
 .banks = 2,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* nonretained_bank */
-  [1] = PWRSTS_OFF_RET, /* retained_bank */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* nonretained_bank */
   [1] = PWRSTS_ON, /* retained_bank */
@@ -132,9 +111,6 @@ static struct powerdomain gpu_7xx_pwrdm = {
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
 .banks = 1,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* gpu_mem */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* gpu_mem */
 },
@@ -148,8 +124,6 @@ static struct powerdomain wkupaon_7xx_pwrdm = {
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_ON,
 .banks = 1,
-.pwrsts_mem_ret = {
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* wkup_bank */
 },
@@ -161,15 +135,7 @@ static struct powerdomain core_7xx_pwrdm = {
 .prcm_offs = DRA7XX_PRM_CORE_INST,
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_ON,
-.pwrsts_logic_ret = PWRSTS_RET,
 .banks = 5,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* core_nret_bank */
-  [1] = PWRSTS_OFF_RET, /* core_ocmram */
-  [2] = PWRSTS_OFF_RET, /* core_other_bank */
-  [3] = PWRSTS_OFF_RET, /* ipu_l2ram */
-  [4] = PWRSTS_OFF_RET, /* ipu_unicache */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* core_nret_bank */
   [1] = PWRSTS_ON, /* core_ocmram */
@@ -226,11 +192,7 @@ static struct powerdomain vpe_7xx_pwrdm = {
 .prcm_offs = DRA7XX_PRM_VPE_INST,
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
-.pwrsts_logic_ret = PWRSTS_OFF,
 .banks = 1,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* vpe_bank */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* vpe_bank */
 },
@@ -260,14 +222,8 @@ static struct powerdomain l3init_7xx_pwrdm = {
 .name = "l3init_pwrdm",
 .prcm_offs = DRA7XX_PRM_L3INIT_INST,
 .prcm_partition = DRA7XX_PRM_PARTITION,
-.pwrsts = PWRSTS_RET_ON,
-.pwrsts_logic_ret = PWRSTS_RET,
+.pwrsts = PWRSTS_ON,
 .banks = 3,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* gmac_bank */
-  [1] = PWRSTS_OFF_RET, /* l3init_bank1 */
-  [2] = PWRSTS_OFF_RET, /* l3init_bank2 */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* gmac_bank */
   [1] = PWRSTS_ON, /* l3init_bank1 */
@@ -283,9 +239,6 @@ static struct powerdomain eve3_7xx_pwrdm = {
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
 .banks = 1,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* eve3_bank */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* eve3_bank */
 },
@@ -299,9 +252,6 @@ static struct powerdomain emu_7xx_pwrdm = {
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
 .banks = 1,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* emu_bank */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* emu_bank */
 },
@@ -314,11 +264,6 @@ static struct powerdomain dsp2_7xx_pwrdm = {
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
 .banks = 3,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* dsp2_edma */
-  [1] = PWRSTS_OFF_RET, /* dsp2_l1 */
-  [2] = PWRSTS_OFF_RET, /* dsp2_l2 */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* dsp2_edma */
   [1] = PWRSTS_ON, /* dsp2_l1 */
@@ -334,11 +279,6 @@ static struct powerdomain dsp1_7xx_pwrdm = {
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
 .banks = 3,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* dsp1_edma */
-  [1] = PWRSTS_OFF_RET, /* dsp1_l1 */
-  [2] = PWRSTS_OFF_RET, /* dsp1_l2 */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* dsp1_edma */
   [1] = PWRSTS_ON, /* dsp1_l1 */
@@ -354,9 +294,6 @@ static struct powerdomain cam_7xx_pwrdm = {
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
 .banks = 1,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* vip_bank */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* vip_bank */
 },
@@ -370,9 +307,6 @@ static struct powerdomain eve4_7xx_pwrdm = {
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
 .banks = 1,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* eve4_bank */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* eve4_bank */
 },
@@ -386,9 +320,6 @@ static struct powerdomain eve2_7xx_pwrdm = {
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
 .banks = 1,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* eve2_bank */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* eve2_bank */
 },
@@ -402,9 +333,6 @@ static struct powerdomain eve1_7xx_pwrdm = {
 .prcm_partition = DRA7XX_PRM_PARTITION,
 .pwrsts = PWRSTS_OFF_ON,
 .banks = 1,
-.pwrsts_mem_ret = {
-  [0] = PWRSTS_OFF_RET, /* eve1_bank */
-},
 .pwrsts_mem_on = {
   [0] = PWRSTS_ON, /* eve1_bank */
 },
......
@@ -113,6 +113,18 @@ config ARCH_PHYS_ADDR_T_64BIT
 config MMU
     def_bool y
+config ARM64_PAGE_SHIFT
+    int
+    default 16 if ARM64_64K_PAGES
+    default 14 if ARM64_16K_PAGES
+    default 12
+config ARM64_CONT_SHIFT
+    int
+    default 5 if ARM64_64K_PAGES
+    default 7 if ARM64_16K_PAGES
+    default 4
 config ARCH_MMAP_RND_BITS_MIN
     default 14 if ARM64_64K_PAGES
     default 16 if ARM64_16K_PAGES
@@ -426,6 +438,15 @@ config CAVIUM_ERRATUM_22375
       If unsure, say Y.
+config CAVIUM_ERRATUM_23144
+    bool "Cavium erratum 23144: ITS SYNC hang on dual socket system"
+    depends on NUMA
+    default y
+    help
+      ITS SYNC command hang for cross node io and collections/cpu mapping.
+      If unsure, say Y.
 config CAVIUM_ERRATUM_23154
     bool "Cavium erratum 23154: Access to ICC_IAR1_EL1 is not sync'ed"
     default y
......
@@ -12,7 +12,8 @@ config ARM64_PTDUMP
       who are working in architecture specific areas of the kernel.
       It is probably not a good idea to enable this feature in a production
       kernel.
-      If in doubt, say "N"
+      If in doubt, say N.
 config PID_IN_CONTEXTIDR
     bool "Write the current PID to the CONTEXTIDR register"
@@ -38,15 +39,15 @@ config ARM64_RANDOMIZE_TEXT_OFFSET
       value.
 config DEBUG_SET_MODULE_RONX
     bool "Set loadable kernel module data as NX and text as RO"
     depends on MODULES
-    help
-      This option helps catch unintended modifications to loadable
-      kernel module's text and read-only data. It also prevents execution
-      of module data. Such protection may interfere with run-time code
-      patching and dynamic kernel tracing - and they might also protect
-      against certain classes of kernel exploits.
-      If in doubt, say "N".
+    default y
+    help
+      Is this is set, kernel module text and rodata will be made read-only.
+      This is to help catch accidental or malicious attempts to change the
+      kernel's executable code.
+      If in doubt, say Y.
 config DEBUG_RODATA
     bool "Make kernel text and rodata read-only"
@@ -56,7 +57,7 @@ config DEBUG_RODATA
       is to help catch accidental or malicious attempts to change the
       kernel's executable code.
-      If in doubt, say Y
+      If in doubt, say Y.
 config DEBUG_ALIGN_RODATA
     depends on DEBUG_RODATA
@@ -69,7 +70,7 @@ config DEBUG_ALIGN_RODATA
       alignment and potentially wasted space. Turn on this option if
       performance is more important than memory pressure.
-      If in doubt, say N
+      If in doubt, say N.
 source "drivers/hwtracing/coresight/Kconfig"
......
@@ -60,7 +60,9 @@ head-y := arch/arm64/kernel/head.o
 # The byte offset of the kernel image in RAM from the start of RAM.
 ifeq ($(CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET), y)
-TEXT_OFFSET := $(shell awk 'BEGIN {srand(); printf "0x%03x000\n", int(512 * rand())}')
+TEXT_OFFSET := $(shell awk "BEGIN {srand(); printf \"0x%06x\n\", \
+                 int(2 * 1024 * 1024 / (2 ^ $(CONFIG_ARM64_PAGE_SHIFT)) * \
+                 rand()) * (2 ^ $(CONFIG_ARM64_PAGE_SHIFT))}")
 else
 TEXT_OFFSET := 0x00080000
 endif
......
@@ -160,14 +160,14 @@ extern int arch_setup_additional_pages(struct linux_binprm *bprm,
 #define STACK_RND_MASK (0x3ffff >> (PAGE_SHIFT - 12))
 #endif
-#ifdef CONFIG_COMPAT
 #ifdef __AARCH64EB__
 #define COMPAT_ELF_PLATFORM ("v8b")
 #else
 #define COMPAT_ELF_PLATFORM ("v8l")
 #endif
+#ifdef CONFIG_COMPAT
 #define COMPAT_ELF_ET_DYN_BASE (2 * TASK_SIZE_32 / 3)
 /* AArch32 registers. */
......
@@ -55,8 +55,9 @@
 #define VMEMMAP_SIZE (UL(1) << (VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT))
 /*
- * PAGE_OFFSET - the virtual address of the start of the kernel image (top
+ * PAGE_OFFSET - the virtual address of the start of the linear map (top
  *               (VA_BITS - 1))
+ * KIMAGE_VADDR - the virtual address of the start of the kernel image
  * VA_BITS - the maximum number of bits for virtual addresses.
  * VA_START - the first kernel virtual address.
  * TASK_SIZE - the maximum size of a user space task.
......
@@ -23,16 +23,8 @@
 /* PAGE_SHIFT determines the page size */
 /* CONT_SHIFT determines the number of pages which can be tracked together */
-#ifdef CONFIG_ARM64_64K_PAGES
-#define PAGE_SHIFT 16
-#define CONT_SHIFT 5
-#elif defined(CONFIG_ARM64_16K_PAGES)
-#define PAGE_SHIFT 14
-#define CONT_SHIFT 7
-#else
-#define PAGE_SHIFT 12
-#define CONT_SHIFT 4
-#endif
+#define PAGE_SHIFT CONFIG_ARM64_PAGE_SHIFT
+#define CONT_SHIFT CONFIG_ARM64_CONT_SHIFT
 #define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
 #define PAGE_MASK (~(PAGE_SIZE-1))
......
@@ -80,19 +80,6 @@ static inline void set_fs(mm_segment_t fs)
 #define segment_eq(a, b) ((a) == (b))
-/*
- * Return 1 if addr < current->addr_limit, 0 otherwise.
- */
-#define __addr_ok(addr) \
-({ \
-    unsigned long flag; \
-    asm("cmp %1, %0; cset %0, lo" \
-        : "=&r" (flag) \
-        : "r" (addr), "0" (current_thread_info()->addr_limit) \
-        : "cc"); \
-    flag; \
-})
 /*
  * Test whether a block of memory is a valid user space address.
  * Returns 1 if the range is valid, 0 otherwise.
......
@@ -44,7 +44,7 @@
 #define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE+2)
 #define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE+5)
-#define __NR_compat_syscalls 390
+#define __NR_compat_syscalls 394
 #endif
 #define __ARCH_WANT_SYS_CLONE
......
@@ -801,6 +801,14 @@ __SYSCALL(__NR_execveat, compat_sys_execveat)
 __SYSCALL(__NR_userfaultfd, sys_userfaultfd)
 #define __NR_membarrier 389
 __SYSCALL(__NR_membarrier, sys_membarrier)
+#define __NR_mlock2 390
+__SYSCALL(__NR_mlock2, sys_mlock2)
+#define __NR_copy_file_range 391
+__SYSCALL(__NR_copy_file_range, sys_copy_file_range)
+#define __NR_preadv2 392
+__SYSCALL(__NR_preadv2, compat_sys_preadv2)
+#define __NR_pwritev2 393
+__SYSCALL(__NR_pwritev2, compat_sys_pwritev2)
 /*
  * Please add new compat syscalls above this comment and update
......
@@ -22,6 +22,8 @@
 #include <linux/bitops.h>
 #include <linux/bug.h>
+#include <linux/compat.h>
+#include <linux/elf.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/personality.h>
@@ -104,6 +106,7 @@ static const char *const compat_hwcap2_str[] = {
 static int c_show(struct seq_file *m, void *v)
 {
     int i, j;
+    bool compat = personality(current->personality) == PER_LINUX32;
     for_each_online_cpu(i) {
         struct cpuinfo_arm64 *cpuinfo = &per_cpu(cpu_data, i);
@@ -115,6 +118,9 @@ static int c_show(struct seq_file *m, void *v)
          * "processor". Give glibc what it expects.
          */
         seq_printf(m, "processor\t: %d\n", i);
+        if (compat)
+            seq_printf(m, "model name\t: ARMv8 Processor rev %d (%s)\n",
+                       MIDR_REVISION(midr), COMPAT_ELF_PLATFORM);
         seq_printf(m, "BogoMIPS\t: %lu.%02lu\n",
                    loops_per_jiffy / (500000UL/HZ),
@@ -127,7 +133,7 @@ static int c_show(struct seq_file *m, void *v)
          * software which does already (at least for 32-bit).
          */
         seq_puts(m, "Features\t:");
-        if (personality(current->personality) == PER_LINUX32) {
+        if (compat) {
 #ifdef CONFIG_COMPAT
             for (j = 0; compat_hwcap_str[j]; j++)
                 if (compat_elf_hwcap & (1 << j))
......
@@ -477,8 +477,9 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
     void __user *pc = (void __user *)instruction_pointer(regs);
     console_verbose();
-    pr_crit("Bad mode in %s handler detected, code 0x%08x -- %s\n",
-            handler[reason], esr, esr_get_class_string(esr));
+    pr_crit("Bad mode in %s handler detected on CPU%d, code 0x%08x -- %s\n",
+            handler[reason], smp_processor_id(), esr,
+            esr_get_class_string(esr));
     __show_regs(regs);
     info.si_signo = SIGILL;
......
@@ -169,7 +169,8 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
      * Make sure stores to the GIC via the memory mapped interface
      * are now visible to the system register interface.
      */
-    dsb(st);
+    if (!cpu_if->vgic_sre)
+        dsb(st);
     cpu_if->vgic_vmcr = read_gicreg(ICH_VMCR_EL2);
@@ -190,12 +191,11 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
         if (!(vcpu->arch.vgic_cpu.live_lrs & (1UL << i)))
             continue;
-        if (cpu_if->vgic_elrsr & (1 << i)) {
+        if (cpu_if->vgic_elrsr & (1 << i))
             cpu_if->vgic_lr[i] &= ~ICH_LR_STATE;
-            continue;
-        }
-        cpu_if->vgic_lr[i] = __gic_v3_get_lr(i);
+        else
+            cpu_if->vgic_lr[i] = __gic_v3_get_lr(i);
         __gic_v3_set_lr(0, i);
     }
@@ -236,8 +236,12 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
     val = read_gicreg(ICC_SRE_EL2);
     write_gicreg(val | ICC_SRE_EL2_ENABLE, ICC_SRE_EL2);
-    isb(); /* Make sure ENABLE is set at EL2 before setting SRE at EL1 */
-    write_gicreg(1, ICC_SRE_EL1);
+    if (!cpu_if->vgic_sre) {
+        /* Make sure ENABLE is set at EL2 before setting SRE at EL1 */
+        isb();
+        write_gicreg(1, ICC_SRE_EL1);
+    }
 }
 void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
@@ -256,8 +260,10 @@ void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
      * been actually programmed with the value we want before
      * starting to mess with the rest of the GIC.
      */
-    write_gicreg(cpu_if->vgic_sre, ICC_SRE_EL1);
-    isb();
+    if (!cpu_if->vgic_sre) {
+        write_gicreg(0, ICC_SRE_EL1);
+        isb();
+    }
     val = read_gicreg(ICH_VTR_EL2);
     max_lr_idx = vtr_to_max_lr_idx(val);
@@ -306,18 +312,18 @@ void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
      * (re)distributors. This ensure the guest will read the
      * correct values from the memory-mapped interface.
      */
-    isb();
-    dsb(sy);
+    if (!cpu_if->vgic_sre) {
+        isb();
+        dsb(sy);
+    }
     vcpu->arch.vgic_cpu.live_lrs = live_lrs;
     /*
      * Prevent the guest from touching the GIC system registers if
     * SRE isn't enabled for GICv3 emulation.
      */
-    if (!cpu_if->vgic_sre) {
-        write_gicreg(read_gicreg(ICC_SRE_EL2) & ~ICC_SRE_EL2_ENABLE,
-                     ICC_SRE_EL2);
-    }
+    write_gicreg(read_gicreg(ICC_SRE_EL2) & ~ICC_SRE_EL2_ENABLE,
+                 ICC_SRE_EL2);
 }
 void __hyp_text __vgic_v3_init_lrs(void)
......
@@ -134,6 +134,17 @@ static bool access_gic_sgi(struct kvm_vcpu *vcpu,
     return true;
 }
+static bool access_gic_sre(struct kvm_vcpu *vcpu,
+                           struct sys_reg_params *p,
+                           const struct sys_reg_desc *r)
+{
+    if (p->is_write)
+        return ignore_write(vcpu, p);
+    p->regval = vcpu->arch.vgic_cpu.vgic_v3.vgic_sre;
+    return true;
+}
 static bool trap_raz_wi(struct kvm_vcpu *vcpu,
                         struct sys_reg_params *p,
                         const struct sys_reg_desc *r)
@@ -958,7 +969,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
       access_gic_sgi },
     /* ICC_SRE_EL1 */
     { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b1100), Op2(0b101),
-      trap_raz_wi },
+      access_gic_sre },
     /* CONTEXTIDR_EL1 */
     { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
......
@@ -150,6 +150,7 @@ static const struct prot_bits pte_bits[] = {
 struct pg_level {
     const struct prot_bits *bits;
+    const char *name;
     size_t num;
     u64 mask;
 };
@@ -157,15 +158,19 @@ struct pg_level {
 static struct pg_level pg_level[] = {
     {
     }, { /* pgd */
+        .name = "PGD",
        .bits = pte_bits,
        .num = ARRAY_SIZE(pte_bits),
     }, { /* pud */
+        .name = (CONFIG_PGTABLE_LEVELS > 3) ? "PUD" : "PGD",
        .bits = pte_bits,
        .num = ARRAY_SIZE(pte_bits),
     }, { /* pmd */
+        .name = (CONFIG_PGTABLE_LEVELS > 2) ? "PMD" : "PGD",
        .bits = pte_bits,
        .num = ARRAY_SIZE(pte_bits),
     }, { /* pte */
+        .name = "PTE",
        .bits = pte_bits,
        .num = ARRAY_SIZE(pte_bits),
     },
@@ -214,7 +219,8 @@ static void note_page(struct pg_state *st, unsigned long addr, unsigned level,
         delta >>= 10;
         unit++;
     }
-    seq_printf(st->seq, "%9lu%c", delta, *unit);
+    seq_printf(st->seq, "%9lu%c %s", delta, *unit,
+               pg_level[st->level].name);
     if (pg_level[st->level].bits)
         dump_prot(st, pg_level[st->level].bits,
                   pg_level[st->level].num);
......
@@ -306,6 +306,10 @@ static __init int setup_hugepagesz(char *opt)
         hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
     } else if (ps == PUD_SIZE) {
         hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
+    } else if (ps == (PAGE_SIZE * CONT_PTES)) {
+        hugetlb_add_hstate(CONT_PTE_SHIFT);
+    } else if (ps == (PMD_SIZE * CONT_PMDS)) {
+        hugetlb_add_hstate((PMD_SHIFT + CONT_PMD_SHIFT) - PAGE_SHIFT);
     } else {
         hugetlb_bad_size();
         pr_err("hugepagesz: Unsupported page size %lu K\n", ps >> 10);
@@ -314,3 +318,13 @@ static __init int setup_hugepagesz(char *opt)
     return 1;
 }
 __setup("hugepagesz=", setup_hugepagesz);
+#ifdef CONFIG_ARM64_64K_PAGES
+static __init int add_default_hugepagesz(void)
+{
+    if (size_to_hstate(CONT_PTES * PAGE_SIZE) == NULL)
+        hugetlb_add_hstate(CONT_PMD_SHIFT);
+    return 0;
+}
+arch_initcall(add_default_hugepagesz);
+#endif
@@ -8,6 +8,8 @@ struct pt_regs;
 void parisc_terminate(char *msg, struct pt_regs *regs,
         int code, unsigned long offset) __noreturn __cold;
+void die_if_kernel(char *str, struct pt_regs *regs, long err);
 /* mm/fault.c */
 void do_page_fault(struct pt_regs *regs, unsigned long code,
         unsigned long address);
......
@@ -324,8 +324,9 @@ int init_per_cpu(int cpunum)
     per_cpu(cpu_data, cpunum).fp_rev = coproc_cfg.revision;
     per_cpu(cpu_data, cpunum).fp_model = coproc_cfg.model;
-    printk(KERN_INFO "FP[%d] enabled: Rev %ld Model %ld\n",
-           cpunum, coproc_cfg.revision, coproc_cfg.model);
+    if (cpunum == 0)
+        printk(KERN_INFO "FP[%d] enabled: Rev %ld Model %ld\n",
+               cpunum, coproc_cfg.revision, coproc_cfg.model);
     /*
     ** store status register to stack (hopefully aligned)
......
@@ -309,11 +309,6 @@ void __init time_init(void)
     clocks_calc_mult_shift(&cyc2ns_mul, &cyc2ns_shift, current_cr16_khz,
                            NSEC_PER_MSEC, 0);
-#if defined(CONFIG_HAVE_UNSTABLE_SCHED_CLOCK) && defined(CONFIG_64BIT)
-    /* At bootup only one 64bit CPU is online and cr16 is "stable" */
-    set_sched_clock_stable();
-#endif
     start_cpu_itimer();  /* get CPU 0 started */
     /* register at clocksource framework */
......
@@ -28,6 +28,7 @@
 #include <linux/ratelimit.h>
 #include <asm/uaccess.h>
 #include <asm/hardirq.h>
+#include <asm/traps.h>
 /* #define DEBUG_UNALIGNED 1 */
@@ -130,8 +131,6 @@
 int unaligned_enabled __read_mostly = 1;
-void die_if_kernel (char *str, struct pt_regs *regs, long err);
 static int emulate_ldh(struct pt_regs *regs, int toreg)
 {
     unsigned long saddr = regs->ior;
@@ -666,7 +665,7 @@ void handle_unaligned(struct pt_regs *regs)
         break;
     }
-    if (modify && R1(regs->iir))
+    if (ret == 0 && modify && R1(regs->iir))
         regs->gr[R1(regs->iir)] = newbase;
@@ -677,6 +676,14 @@ void handle_unaligned(struct pt_regs *regs)
     if (ret)
     {
+        /*
+         * The unaligned handler failed.
+         * If we were called by __get_user() or __put_user() jump
+         * to it's exception fixup handler instead of crashing.
+         */
+        if (!user_mode(regs) && fixup_exception(regs))
+            return;
         printk(KERN_CRIT "Unaligned handler failed, ret = %d\n", ret);
         die_if_kernel("Unaligned data reference", regs, 28);
......
@@ -75,7 +75,10 @@ find_unwind_entry(unsigned long addr)
     if (addr >= kernel_unwind_table.start &&
         addr <= kernel_unwind_table.end)
         e = find_unwind_entry_in_table(&kernel_unwind_table, addr);
-    else
+    else {
+        unsigned long flags;
+        spin_lock_irqsave(&unwind_lock, flags);
         list_for_each_entry(table, &unwind_tables, list) {
             if (addr >= table->start &&
                 addr <= table->end)
@@ -86,6 +89,8 @@ find_unwind_entry(unsigned long addr)
                 break;
             }
         }
+        spin_unlock_irqrestore(&unwind_lock, flags);
+    }
     return e;
 }
@@ -303,18 +308,16 @@ static void unwind_frame_regs(struct unwind_frame_info *info)
     insn = *(unsigned int *)npc;
-    if ((insn & 0xffffc000) == 0x37de0000 ||
-        (insn & 0xffe00000) == 0x6fc00000) {
+    if ((insn & 0xffffc001) == 0x37de0000 ||
+        (insn & 0xffe00001) == 0x6fc00000) {
         /* ldo X(sp), sp, or stwm X,D(sp) */
-        frame_size += (insn & 0x1 ? -1 << 13 : 0) |
-            ((insn & 0x3fff) >> 1);
+        frame_size += (insn & 0x3fff) >> 1;
         dbg("analyzing func @ %lx, insn=%08x @ "
             "%lx, frame_size = %ld\n", info->ip,
             insn, npc, frame_size);
-    } else if ((insn & 0xffe00008) == 0x73c00008) {
+    } else if ((insn & 0xffe00009) == 0x73c00008) {
         /* std,ma X,D(sp) */
-        frame_size += (insn & 0x1 ? -1 << 13 : 0) |
-            (((insn >> 4) & 0x3ff) << 3);
+        frame_size += ((insn >> 4) & 0x3ff) << 3;
         dbg("analyzing func @ %lx, insn=%08x @ "
             "%lx, frame_size = %ld\n", info->ip,
             insn, npc, frame_size);
@@ -333,6 +336,9 @@ static void unwind_frame_regs(struct unwind_frame_info *info)
         }
     }
+    if (frame_size > e->Total_frame_size << 3)
+        frame_size = e->Total_frame_size << 3;
     if (!unwind_special(info, e->region_start, frame_size)) {
         info->prev_sp = info->sp - frame_size;
         if (e->Millicode)
......
@@ -717,7 +717,7 @@
 #define MMCR0_FCWAIT 0x00000002UL /* freeze counter in WAIT state */
 #define MMCR0_FCHV 0x00000001UL /* freeze conditions in hypervisor mode */
 #define SPRN_MMCR1 798
-#define SPRN_MMCR2 769
+#define SPRN_MMCR2 785
 #define SPRN_MMCRA 0x312
 #define MMCRA_SDSYNC 0x80000000UL /* SDAR synced with SIAR */
 #define MMCRA_SDAR_DCACHE_MISS 0x40000000UL
@@ -754,13 +754,13 @@
 #define SPRN_PMC6 792
 #define SPRN_PMC7 793
 #define SPRN_PMC8 794
+#define SPRN_SIAR 780
+#define SPRN_SDAR 781
 #define SPRN_SIER 784
 #define SIER_SIPR 0x2000000 /* Sampled MSR_PR */
 #define SIER_SIHV 0x1000000 /* Sampled MSR_HV */
 #define SIER_SIAR_VALID 0x0400000 /* SIAR contents valid */
 #define SIER_SDAR_VALID 0x0200000 /* SDAR contents valid */
-#define SPRN_SIAR 796
-#define SPRN_SDAR 797
 #define SPRN_TACR 888
 #define SPRN_TCSCR 889
 #define SPRN_CSIGR 890
......
...@@ -656,6 +656,7 @@ unsigned char ibm_architecture_vec[] = { ...@@ -656,6 +656,7 @@ unsigned char ibm_architecture_vec[] = {
W(0xffff0000), W(0x003e0000), /* POWER6 */ W(0xffff0000), W(0x003e0000), /* POWER6 */
W(0xffff0000), W(0x003f0000), /* POWER7 */ W(0xffff0000), W(0x003f0000), /* POWER7 */
W(0xffff0000), W(0x004b0000), /* POWER8E */ W(0xffff0000), W(0x004b0000), /* POWER8E */
W(0xffff0000), W(0x004c0000), /* POWER8NVL */
W(0xffff0000), W(0x004d0000), /* POWER8 */ W(0xffff0000), W(0x004d0000), /* POWER8 */
W(0xffffffff), W(0x0f000004), /* all 2.07-compliant */ W(0xffffffff), W(0x0f000004), /* all 2.07-compliant */
W(0xffffffff), W(0x0f000003), /* all 2.06-compliant */ W(0xffffffff), W(0x0f000003), /* all 2.06-compliant */
......
...@@ -159,6 +159,19 @@ static struct mmu_psize_def mmu_psize_defaults_gp[] = { ...@@ -159,6 +159,19 @@ static struct mmu_psize_def mmu_psize_defaults_gp[] = {
}, },
}; };
/*
* 'R' and 'C' update notes:
* - Under pHyp or KVM, the updatepp path will not set C, thus it *will*
* create writeable HPTEs without C set, because the hcall H_PROTECT
* that we use in that case will not update C
* - The above is however not a problem, because we also don't do that
* fancy "no flush" variant of eviction and we use H_REMOVE which will
* do the right thing and thus we don't have the race I described earlier
*
* - Under bare metal, we do have the race, so we need R and C set
* - We make sure R is always set and never lost
* - C is _PAGE_DIRTY, and *should* always be set for a writeable mapping
*/
unsigned long htab_convert_pte_flags(unsigned long pteflags) unsigned long htab_convert_pte_flags(unsigned long pteflags)
{ {
unsigned long rflags = 0; unsigned long rflags = 0;
...@@ -186,9 +199,14 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags) ...@@ -186,9 +199,14 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags)
rflags |= 0x1; rflags |= 0x1;
} }
/* /*
* Always add "C" bit for perf. Memory coherence is always enabled * We can't allow hardware to update hpte bits. Hence always
* set 'R' bit and set 'C' if it is a write fault
* Memory coherence is always enabled
*/ */
rflags |= HPTE_R_C | HPTE_R_M; rflags |= HPTE_R_R | HPTE_R_M;
if (pteflags & _PAGE_DIRTY)
rflags |= HPTE_R_C;
/* /*
* Add in WIG bits * Add in WIG bits
*/ */
......
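The comment blocks in the hunk above describe the new policy for the hash-PTE reference/change bits: 'R' is always set up front (hardware is never allowed to update HPTE bits), and 'C' is only set when the Linux PTE is already dirty, i.e. after a write. A minimal stand-alone sketch of that conversion; the flag values and the _PAGE_DIRTY stand-in below are illustrative, not the kernel's real definitions:

#include <stdio.h>

/* Illustrative stand-ins for the real powerpc definitions. */
#define PTE_DIRTY   0x080UL   /* plays the role of _PAGE_DIRTY */
#define HPTE_R_R    0x100UL   /* reference bit */
#define HPTE_R_C    0x080UL   /* change bit */
#define HPTE_R_M    0x010UL   /* memory-coherence bit */

/* Mirror of the new policy: R and M always, C only for dirty PTEs. */
static unsigned long convert_pte_flags(unsigned long pteflags)
{
    unsigned long rflags = HPTE_R_R | HPTE_R_M;

    if (pteflags & PTE_DIRTY)
        rflags |= HPTE_R_C;
    return rflags;
}

int main(void)
{
    printf("clean mapping: %#lx\n", convert_pte_flags(0));
    printf("dirty mapping: %#lx\n", convert_pte_flags(PTE_DIRTY));
    return 0;
}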
...@@ -33,10 +33,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address, ...@@ -33,10 +33,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
changed = !pmd_same(*(pmdp), entry); changed = !pmd_same(*(pmdp), entry);
if (changed) { if (changed) {
__ptep_set_access_flags(pmdp_ptep(pmdp), pmd_pte(entry)); __ptep_set_access_flags(pmdp_ptep(pmdp), pmd_pte(entry));
/* flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
* Since we are not supporting SW TLB systems, we don't
* have any thing similar to flush_tlb_page_nohash()
*/
} }
return changed; return changed;
} }
......
...@@ -296,11 +296,6 @@ static void __init radix_init_page_sizes(void) ...@@ -296,11 +296,6 @@ static void __init radix_init_page_sizes(void)
void __init radix__early_init_mmu(void) void __init radix__early_init_mmu(void)
{ {
unsigned long lpcr; unsigned long lpcr;
/*
* setup LPCR UPRT based on mmu_features
*/
lpcr = mfspr(SPRN_LPCR);
mtspr(SPRN_LPCR, lpcr | LPCR_UPRT);
#ifdef CONFIG_PPC_64K_PAGES #ifdef CONFIG_PPC_64K_PAGES
/* PAGE_SIZE mappings */ /* PAGE_SIZE mappings */
...@@ -343,8 +338,11 @@ void __init radix__early_init_mmu(void) ...@@ -343,8 +338,11 @@ void __init radix__early_init_mmu(void)
__pte_frag_size_shift = H_PTE_FRAG_SIZE_SHIFT; __pte_frag_size_shift = H_PTE_FRAG_SIZE_SHIFT;
radix_init_page_sizes(); radix_init_page_sizes();
if (!firmware_has_feature(FW_FEATURE_LPAR)) if (!firmware_has_feature(FW_FEATURE_LPAR)) {
lpcr = mfspr(SPRN_LPCR);
mtspr(SPRN_LPCR, lpcr | LPCR_UPRT);
radix_init_partition_table(); radix_init_partition_table();
}
radix_init_pgtable(); radix_init_pgtable();
} }
...@@ -353,16 +351,15 @@ void radix__early_init_mmu_secondary(void) ...@@ -353,16 +351,15 @@ void radix__early_init_mmu_secondary(void)
{ {
unsigned long lpcr; unsigned long lpcr;
/* /*
* setup LPCR UPRT based on mmu_features * update partition table control register and UPRT
*/ */
lpcr = mfspr(SPRN_LPCR); if (!firmware_has_feature(FW_FEATURE_LPAR)) {
mtspr(SPRN_LPCR, lpcr | LPCR_UPRT); lpcr = mfspr(SPRN_LPCR);
/* mtspr(SPRN_LPCR, lpcr | LPCR_UPRT);
* update partition table control register, 64 K size.
*/
if (!firmware_has_feature(FW_FEATURE_LPAR))
mtspr(SPRN_PTCR, mtspr(SPRN_PTCR,
__pa(partition_tb) | (PATB_SIZE_SHIFT - 12)); __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
}
} }
void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base, void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base,
......
...@@ -53,7 +53,6 @@ static int ibm_read_slot_reset_state2; ...@@ -53,7 +53,6 @@ static int ibm_read_slot_reset_state2;
static int ibm_slot_error_detail; static int ibm_slot_error_detail;
static int ibm_get_config_addr_info; static int ibm_get_config_addr_info;
static int ibm_get_config_addr_info2; static int ibm_get_config_addr_info2;
static int ibm_configure_bridge;
static int ibm_configure_pe; static int ibm_configure_pe;
/* /*
...@@ -81,7 +80,14 @@ static int pseries_eeh_init(void) ...@@ -81,7 +80,14 @@ static int pseries_eeh_init(void)
ibm_get_config_addr_info2 = rtas_token("ibm,get-config-addr-info2"); ibm_get_config_addr_info2 = rtas_token("ibm,get-config-addr-info2");
ibm_get_config_addr_info = rtas_token("ibm,get-config-addr-info"); ibm_get_config_addr_info = rtas_token("ibm,get-config-addr-info");
ibm_configure_pe = rtas_token("ibm,configure-pe"); ibm_configure_pe = rtas_token("ibm,configure-pe");
ibm_configure_bridge = rtas_token("ibm,configure-bridge");
/*
* ibm,configure-pe and ibm,configure-bridge have the same semantics,
* however ibm,configure-pe can be faster. If we can't find
* ibm,configure-pe then fall back to using ibm,configure-bridge.
*/
if (ibm_configure_pe == RTAS_UNKNOWN_SERVICE)
ibm_configure_pe = rtas_token("ibm,configure-bridge");
/* /*
* Necessary sanity check. We needn't check "get-config-addr-info" * Necessary sanity check. We needn't check "get-config-addr-info"
...@@ -93,8 +99,7 @@ static int pseries_eeh_init(void) ...@@ -93,8 +99,7 @@ static int pseries_eeh_init(void)
(ibm_read_slot_reset_state2 == RTAS_UNKNOWN_SERVICE && (ibm_read_slot_reset_state2 == RTAS_UNKNOWN_SERVICE &&
ibm_read_slot_reset_state == RTAS_UNKNOWN_SERVICE) || ibm_read_slot_reset_state == RTAS_UNKNOWN_SERVICE) ||
ibm_slot_error_detail == RTAS_UNKNOWN_SERVICE || ibm_slot_error_detail == RTAS_UNKNOWN_SERVICE ||
(ibm_configure_pe == RTAS_UNKNOWN_SERVICE && ibm_configure_pe == RTAS_UNKNOWN_SERVICE) {
ibm_configure_bridge == RTAS_UNKNOWN_SERVICE)) {
pr_info("EEH functionality not supported\n"); pr_info("EEH functionality not supported\n");
return -EINVAL; return -EINVAL;
} }
...@@ -615,29 +620,41 @@ static int pseries_eeh_configure_bridge(struct eeh_pe *pe) ...@@ -615,29 +620,41 @@ static int pseries_eeh_configure_bridge(struct eeh_pe *pe)
{ {
int config_addr; int config_addr;
int ret; int ret;
/* Waiting 0.2s maximum before skipping configuration */
int max_wait = 200;
/* Figure out the PE address */ /* Figure out the PE address */
config_addr = pe->config_addr; config_addr = pe->config_addr;
if (pe->addr) if (pe->addr)
config_addr = pe->addr; config_addr = pe->addr;
/* Use new configure-pe function, if supported */ while (max_wait > 0) {
if (ibm_configure_pe != RTAS_UNKNOWN_SERVICE) {
ret = rtas_call(ibm_configure_pe, 3, 1, NULL, ret = rtas_call(ibm_configure_pe, 3, 1, NULL,
config_addr, BUID_HI(pe->phb->buid), config_addr, BUID_HI(pe->phb->buid),
BUID_LO(pe->phb->buid)); BUID_LO(pe->phb->buid));
} else if (ibm_configure_bridge != RTAS_UNKNOWN_SERVICE) {
ret = rtas_call(ibm_configure_bridge, 3, 1, NULL,
config_addr, BUID_HI(pe->phb->buid),
BUID_LO(pe->phb->buid));
} else {
return -EFAULT;
}
if (ret) if (!ret)
pr_warn("%s: Unable to configure bridge PHB#%d-PE#%x (%d)\n", return ret;
__func__, pe->phb->global_number, pe->addr, ret);
/*
* If RTAS returns a delay value that's above 100ms, cut it
* down to 100ms in case firmware made a mistake. For more
* on how these delay values work see rtas_busy_delay_time
*/
if (ret > RTAS_EXTENDED_DELAY_MIN+2 &&
ret <= RTAS_EXTENDED_DELAY_MAX)
ret = RTAS_EXTENDED_DELAY_MIN+2;
max_wait -= rtas_busy_delay_time(ret);
if (max_wait < 0)
break;
rtas_busy_delay(ret);
}
pr_warn("%s: Unable to configure bridge PHB#%d-PE#%x (%d)\n",
__func__, pe->phb->global_number, pe->addr, ret);
return ret; return ret;
} }
......
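The reworked pseries_eeh_configure_bridge() above turns a single RTAS call into a bounded retry loop: busy/extended-delay return codes are converted into a sleep, any extended delay larger than 100ms is clamped down to 100ms, and the attempt is abandoned once the 200ms budget is spent. A user-space sketch of that bounded-retry pattern; rtas_try_configure() and delay_ms_for() are hypothetical stand-ins for rtas_call()/rtas_busy_delay_time(), and the status codes are made up for the demo:

#include <stdio.h>
#include <unistd.h>

/* Fake firmware call: reports "extended delay" twice, then succeeds. */
static int rtas_try_configure(void)
{
    static int calls;
    return (++calls < 3) ? 9901 : 0;
}

/* Convert a busy/extended-delay status into a wait, clamped to 100ms. */
static int delay_ms_for(int status)
{
    return (status >= 9900 && status <= 9905) ? 100 : 0;
}

static int configure_bridge(void)
{
    int budget_ms = 200;        /* give up after roughly 0.2s, as in the hunk */
    int ret = -1;

    while (budget_ms > 0) {
        ret = rtas_try_configure();
        if (!ret)
            return 0;

        int wait_ms = delay_ms_for(ret);
        if (!wait_ms)           /* hard error: retrying will not help */
            break;
        budget_ms -= wait_ms;
        if (budget_ms < 0)
            break;
        usleep(wait_ms * 1000);
    }
    fprintf(stderr, "unable to configure bridge (%d)\n", ret);
    return ret;
}

int main(void)
{
    return configure_bridge();
}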
CONFIG_SYSVIPC=y CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y
CONFIG_AUDIT=y CONFIG_AUDIT=y
CONFIG_NO_HZ=y CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y CONFIG_HIGH_RES_TIMERS=y
CONFIG_BSD_PROCESS_ACCT=y CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y CONFIG_BSD_PROCESS_ACCT_V3=y
...@@ -13,19 +12,19 @@ CONFIG_TASK_IO_ACCOUNTING=y ...@@ -13,19 +12,19 @@ CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_IKCONFIG=y CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y CONFIG_IKCONFIG_PROC=y
CONFIG_NUMA_BALANCING=y CONFIG_NUMA_BALANCING=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_MEMCG=y CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_KMEM=y CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CGROUP_PERF=y
CONFIG_CFS_BANDWIDTH=y CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_CGROUP=y CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CHECKPOINT_RESTORE=y
CONFIG_NAMESPACES=y CONFIG_NAMESPACES=y
CONFIG_USER_NS=y CONFIG_USER_NS=y
CONFIG_SCHED_AUTOGROUP=y CONFIG_SCHED_AUTOGROUP=y
...@@ -55,7 +54,6 @@ CONFIG_UNIXWARE_DISKLABEL=y ...@@ -55,7 +54,6 @@ CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_CFQ_GROUP_IOSCHED=y CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_DEADLINE=y CONFIG_DEFAULT_DEADLINE=y
CONFIG_LIVEPATCH=y CONFIG_LIVEPATCH=y
CONFIG_MARCH_Z196=y
CONFIG_TUNE_ZEC12=y CONFIG_TUNE_ZEC12=y
CONFIG_NR_CPUS=256 CONFIG_NR_CPUS=256
CONFIG_NUMA=y CONFIG_NUMA=y
...@@ -65,6 +63,15 @@ CONFIG_MEMORY_HOTPLUG=y ...@@ -65,6 +63,15 @@ CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y CONFIG_MEMORY_HOTREMOVE=y
CONFIG_KSM=y CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_ZPOOL=m
CONFIG_ZBUD=m
CONFIG_ZSMALLOC=m
CONFIG_ZSMALLOC_STAT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_PCI=y CONFIG_PCI=y
CONFIG_PCI_DEBUG=y CONFIG_PCI_DEBUG=y
CONFIG_HOTPLUG_PCI=y CONFIG_HOTPLUG_PCI=y
...@@ -452,6 +459,7 @@ CONFIG_HW_RANDOM_VIRTIO=m ...@@ -452,6 +459,7 @@ CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_RAW_DRIVER=m CONFIG_RAW_DRIVER=m
CONFIG_HANGCHECK_TIMER=m CONFIG_HANGCHECK_TIMER=m
CONFIG_TN3270_FS=y CONFIG_TN3270_FS=y
# CONFIG_HWMON is not set
CONFIG_WATCHDOG=y CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_NOWAYOUT=y CONFIG_WATCHDOG_NOWAYOUT=y
CONFIG_SOFT_WATCHDOG=m CONFIG_SOFT_WATCHDOG=m
...@@ -537,6 +545,8 @@ CONFIG_DLM=m ...@@ -537,6 +545,8 @@ CONFIG_DLM=m
CONFIG_PRINTK_TIME=y CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y CONFIG_DYNAMIC_DEBUG=y
CONFIG_DEBUG_INFO=y CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y
CONFIG_FRAME_WARN=1024 CONFIG_FRAME_WARN=1024
CONFIG_READABLE_ASM=y CONFIG_READABLE_ASM=y
CONFIG_UNUSED_SYMBOLS=y CONFIG_UNUSED_SYMBOLS=y
...@@ -555,13 +565,17 @@ CONFIG_SLUB_DEBUG_ON=y ...@@ -555,13 +565,17 @@ CONFIG_SLUB_DEBUG_ON=y
CONFIG_SLUB_STATS=y CONFIG_SLUB_STATS=y
CONFIG_DEBUG_STACK_USAGE=y CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_VM=y CONFIG_DEBUG_VM=y
CONFIG_DEBUG_VM_VMACACHE=y
CONFIG_DEBUG_VM_RB=y CONFIG_DEBUG_VM_RB=y
CONFIG_DEBUG_VM_PGFLAGS=y
CONFIG_DEBUG_MEMORY_INIT=y CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
CONFIG_DEBUG_PER_CPU_MAPS=y CONFIG_DEBUG_PER_CPU_MAPS=y
CONFIG_DEBUG_SHIRQ=y CONFIG_DEBUG_SHIRQ=y
CONFIG_DETECT_HUNG_TASK=y CONFIG_DETECT_HUNG_TASK=y
CONFIG_WQ_WATCHDOG=y
CONFIG_PANIC_ON_OOPS=y CONFIG_PANIC_ON_OOPS=y
CONFIG_DEBUG_TIMEKEEPING=y
CONFIG_TIMER_STATS=y CONFIG_TIMER_STATS=y
CONFIG_DEBUG_RT_MUTEXES=y CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y
...@@ -596,6 +610,8 @@ CONFIG_FTRACE_SYSCALLS=y ...@@ -596,6 +610,8 @@ CONFIG_FTRACE_SYSCALLS=y
CONFIG_STACK_TRACER=y CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENT=y CONFIG_UPROBE_EVENT=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_TRACE_ENUM_MAP_FILE=y
CONFIG_LKDTM=m CONFIG_LKDTM=m
CONFIG_TEST_LIST_SORT=y CONFIG_TEST_LIST_SORT=y
CONFIG_KPROBES_SANITY_TEST=y CONFIG_KPROBES_SANITY_TEST=y
...@@ -607,7 +623,6 @@ CONFIG_TEST_STRING_HELPERS=y ...@@ -607,7 +623,6 @@ CONFIG_TEST_STRING_HELPERS=y
CONFIG_TEST_KSTRTOX=y CONFIG_TEST_KSTRTOX=y
CONFIG_DMA_API_DEBUG=y CONFIG_DMA_API_DEBUG=y
CONFIG_TEST_BPF=m CONFIG_TEST_BPF=m
# CONFIG_STRICT_DEVMEM is not set
CONFIG_S390_PTDUMP=y CONFIG_S390_PTDUMP=y
CONFIG_ENCRYPTED_KEYS=m CONFIG_ENCRYPTED_KEYS=m
CONFIG_SECURITY=y CONFIG_SECURITY=y
...@@ -651,7 +666,6 @@ CONFIG_CRYPTO_SEED=m ...@@ -651,7 +666,6 @@ CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_TEA=m CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_ZLIB=y
CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_LZO=m
CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4=m
CONFIG_CRYPTO_LZ4HC=m CONFIG_CRYPTO_LZ4HC=m
...@@ -664,7 +678,7 @@ CONFIG_CRYPTO_SHA512_S390=m ...@@ -664,7 +678,7 @@ CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_DES_S390=m CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_GHASH_S390=m CONFIG_CRYPTO_GHASH_S390=m
CONFIG_ASYMMETRIC_KEY_TYPE=m CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m
CONFIG_X509_CERTIFICATE_PARSER=m CONFIG_X509_CERTIFICATE_PARSER=m
CONFIG_CRC7=m CONFIG_CRC7=m
......
CONFIG_SYSVIPC=y CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y
CONFIG_AUDIT=y CONFIG_AUDIT=y
CONFIG_NO_HZ=y CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y CONFIG_HIGH_RES_TIMERS=y
CONFIG_BSD_PROCESS_ACCT=y CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y CONFIG_BSD_PROCESS_ACCT_V3=y
...@@ -13,17 +12,17 @@ CONFIG_TASK_IO_ACCOUNTING=y ...@@ -13,17 +12,17 @@ CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_IKCONFIG=y CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y CONFIG_IKCONFIG_PROC=y
CONFIG_NUMA_BALANCING=y CONFIG_NUMA_BALANCING=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_MEMCG=y CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_KMEM=y CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y CONFIG_CGROUP_PERF=y
CONFIG_BLK_CGROUP=y CONFIG_CHECKPOINT_RESTORE=y
CONFIG_NAMESPACES=y CONFIG_NAMESPACES=y
CONFIG_USER_NS=y CONFIG_USER_NS=y
CONFIG_SCHED_AUTOGROUP=y CONFIG_SCHED_AUTOGROUP=y
...@@ -53,7 +52,6 @@ CONFIG_SOLARIS_X86_PARTITION=y ...@@ -53,7 +52,6 @@ CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_CFQ_GROUP_IOSCHED=y CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_DEADLINE=y CONFIG_DEFAULT_DEADLINE=y
CONFIG_MARCH_Z196=y
CONFIG_TUNE_ZEC12=y CONFIG_TUNE_ZEC12=y
CONFIG_NR_CPUS=256 CONFIG_NR_CPUS=256
CONFIG_NUMA=y CONFIG_NUMA=y
...@@ -62,6 +60,14 @@ CONFIG_MEMORY_HOTPLUG=y ...@@ -62,6 +60,14 @@ CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y CONFIG_MEMORY_HOTREMOVE=y
CONFIG_KSM=y CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
CONFIG_ZSWAP=y
CONFIG_ZBUD=m
CONFIG_ZSMALLOC=m
CONFIG_ZSMALLOC_STAT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_PCI=y CONFIG_PCI=y
CONFIG_HOTPLUG_PCI=y CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_S390=y CONFIG_HOTPLUG_PCI_S390=y
...@@ -530,6 +536,8 @@ CONFIG_NLS_UTF8=m ...@@ -530,6 +536,8 @@ CONFIG_NLS_UTF8=m
CONFIG_DLM=m CONFIG_DLM=m
CONFIG_PRINTK_TIME=y CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y
# CONFIG_ENABLE_MUST_CHECK is not set # CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_FRAME_WARN=1024 CONFIG_FRAME_WARN=1024
CONFIG_UNUSED_SYMBOLS=y CONFIG_UNUSED_SYMBOLS=y
...@@ -547,13 +555,13 @@ CONFIG_LATENCYTOP=y ...@@ -547,13 +555,13 @@ CONFIG_LATENCYTOP=y
CONFIG_DEBUG_STRICT_USER_COPY_CHECKS=y CONFIG_DEBUG_STRICT_USER_COPY_CHECKS=y
CONFIG_BLK_DEV_IO_TRACE=y CONFIG_BLK_DEV_IO_TRACE=y
# CONFIG_KPROBE_EVENT is not set # CONFIG_KPROBE_EVENT is not set
CONFIG_TRACE_ENUM_MAP_FILE=y
CONFIG_LKDTM=m CONFIG_LKDTM=m
CONFIG_RBTREE_TEST=m CONFIG_RBTREE_TEST=m
CONFIG_INTERVAL_TREE_TEST=m CONFIG_INTERVAL_TREE_TEST=m
CONFIG_PERCPU_TEST=m CONFIG_PERCPU_TEST=m
CONFIG_ATOMIC64_SELFTEST=y CONFIG_ATOMIC64_SELFTEST=y
CONFIG_TEST_BPF=m CONFIG_TEST_BPF=m
# CONFIG_STRICT_DEVMEM is not set
CONFIG_S390_PTDUMP=y CONFIG_S390_PTDUMP=y
CONFIG_ENCRYPTED_KEYS=m CONFIG_ENCRYPTED_KEYS=m
CONFIG_SECURITY=y CONFIG_SECURITY=y
...@@ -597,8 +605,6 @@ CONFIG_CRYPTO_SEED=m ...@@ -597,8 +605,6 @@ CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_TEA=m CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_ZLIB=y
CONFIG_CRYPTO_LZO=m
CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4=m
CONFIG_CRYPTO_LZ4HC=m CONFIG_CRYPTO_LZ4HC=m
CONFIG_CRYPTO_USER_API_HASH=m CONFIG_CRYPTO_USER_API_HASH=m
...@@ -610,7 +616,7 @@ CONFIG_CRYPTO_SHA512_S390=m ...@@ -610,7 +616,7 @@ CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_DES_S390=m CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_GHASH_S390=m CONFIG_CRYPTO_GHASH_S390=m
CONFIG_ASYMMETRIC_KEY_TYPE=m CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m
CONFIG_X509_CERTIFICATE_PARSER=m CONFIG_X509_CERTIFICATE_PARSER=m
CONFIG_CRC7=m CONFIG_CRC7=m
......
CONFIG_SYSVIPC=y CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y
CONFIG_AUDIT=y CONFIG_AUDIT=y
CONFIG_NO_HZ=y CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y CONFIG_HIGH_RES_TIMERS=y
CONFIG_BSD_PROCESS_ACCT=y CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y CONFIG_BSD_PROCESS_ACCT_V3=y
...@@ -14,17 +13,17 @@ CONFIG_IKCONFIG=y ...@@ -14,17 +13,17 @@ CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y CONFIG_IKCONFIG_PROC=y
CONFIG_NUMA_BALANCING=y CONFIG_NUMA_BALANCING=y
# CONFIG_NUMA_BALANCING_DEFAULT_ENABLED is not set # CONFIG_NUMA_BALANCING_DEFAULT_ENABLED is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_MEMCG=y CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_KMEM=y CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y CONFIG_CGROUP_PERF=y
CONFIG_BLK_CGROUP=y CONFIG_CHECKPOINT_RESTORE=y
CONFIG_NAMESPACES=y CONFIG_NAMESPACES=y
CONFIG_USER_NS=y CONFIG_USER_NS=y
CONFIG_SCHED_AUTOGROUP=y CONFIG_SCHED_AUTOGROUP=y
...@@ -53,7 +52,6 @@ CONFIG_UNIXWARE_DISKLABEL=y ...@@ -53,7 +52,6 @@ CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_CFQ_GROUP_IOSCHED=y CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_DEADLINE=y CONFIG_DEFAULT_DEADLINE=y
CONFIG_LIVEPATCH=y CONFIG_LIVEPATCH=y
CONFIG_MARCH_Z196=y
CONFIG_TUNE_ZEC12=y CONFIG_TUNE_ZEC12=y
CONFIG_NR_CPUS=512 CONFIG_NR_CPUS=512
CONFIG_NUMA=y CONFIG_NUMA=y
...@@ -62,6 +60,14 @@ CONFIG_MEMORY_HOTPLUG=y ...@@ -62,6 +60,14 @@ CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y CONFIG_MEMORY_HOTREMOVE=y
CONFIG_KSM=y CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
CONFIG_ZSWAP=y
CONFIG_ZBUD=m
CONFIG_ZSMALLOC=m
CONFIG_ZSMALLOC_STAT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_PCI=y CONFIG_PCI=y
CONFIG_HOTPLUG_PCI=y CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_S390=y CONFIG_HOTPLUG_PCI_S390=y
...@@ -447,6 +453,7 @@ CONFIG_HW_RANDOM_VIRTIO=m ...@@ -447,6 +453,7 @@ CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_RAW_DRIVER=m CONFIG_RAW_DRIVER=m
CONFIG_HANGCHECK_TIMER=m CONFIG_HANGCHECK_TIMER=m
CONFIG_TN3270_FS=y CONFIG_TN3270_FS=y
# CONFIG_HWMON is not set
CONFIG_WATCHDOG=y CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_NOWAYOUT=y CONFIG_WATCHDOG_NOWAYOUT=y
CONFIG_SOFT_WATCHDOG=m CONFIG_SOFT_WATCHDOG=m
...@@ -530,6 +537,8 @@ CONFIG_NLS_UTF8=m ...@@ -530,6 +537,8 @@ CONFIG_NLS_UTF8=m
CONFIG_DLM=m CONFIG_DLM=m
CONFIG_PRINTK_TIME=y CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y
# CONFIG_ENABLE_MUST_CHECK is not set # CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_FRAME_WARN=1024 CONFIG_FRAME_WARN=1024
CONFIG_UNUSED_SYMBOLS=y CONFIG_UNUSED_SYMBOLS=y
...@@ -546,11 +555,12 @@ CONFIG_FTRACE_SYSCALLS=y ...@@ -546,11 +555,12 @@ CONFIG_FTRACE_SYSCALLS=y
CONFIG_STACK_TRACER=y CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENT=y CONFIG_UPROBE_EVENT=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_TRACE_ENUM_MAP_FILE=y
CONFIG_LKDTM=m CONFIG_LKDTM=m
CONFIG_PERCPU_TEST=m CONFIG_PERCPU_TEST=m
CONFIG_ATOMIC64_SELFTEST=y CONFIG_ATOMIC64_SELFTEST=y
CONFIG_TEST_BPF=m CONFIG_TEST_BPF=m
# CONFIG_STRICT_DEVMEM is not set
CONFIG_S390_PTDUMP=y CONFIG_S390_PTDUMP=y
CONFIG_ENCRYPTED_KEYS=m CONFIG_ENCRYPTED_KEYS=m
CONFIG_SECURITY=y CONFIG_SECURITY=y
...@@ -594,8 +604,6 @@ CONFIG_CRYPTO_SEED=m ...@@ -594,8 +604,6 @@ CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_TEA=m CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_ZLIB=y
CONFIG_CRYPTO_LZO=m
CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4=m
CONFIG_CRYPTO_LZ4HC=m CONFIG_CRYPTO_LZ4HC=m
CONFIG_CRYPTO_USER_API_HASH=m CONFIG_CRYPTO_USER_API_HASH=m
...@@ -607,7 +615,7 @@ CONFIG_CRYPTO_SHA512_S390=m ...@@ -607,7 +615,7 @@ CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_DES_S390=m CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_GHASH_S390=m CONFIG_CRYPTO_GHASH_S390=m
CONFIG_ASYMMETRIC_KEY_TYPE=m CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m
CONFIG_X509_CERTIFICATE_PARSER=m CONFIG_X509_CERTIFICATE_PARSER=m
CONFIG_CRC7=m CONFIG_CRC7=m
......
# CONFIG_SWAP is not set # CONFIG_SWAP is not set
CONFIG_NO_HZ=y CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_INITRD=y CONFIG_BLK_DEV_INITRD=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y CONFIG_CC_OPTIMIZE_FOR_SIZE=y
...@@ -7,7 +7,6 @@ CONFIG_CC_OPTIMIZE_FOR_SIZE=y ...@@ -7,7 +7,6 @@ CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_PARTITION_ADVANCED=y CONFIG_PARTITION_ADVANCED=y
CONFIG_IBM_PARTITION=y CONFIG_IBM_PARTITION=y
CONFIG_DEFAULT_DEADLINE=y CONFIG_DEFAULT_DEADLINE=y
CONFIG_MARCH_Z196=y
CONFIG_TUNE_ZEC12=y CONFIG_TUNE_ZEC12=y
# CONFIG_COMPAT is not set # CONFIG_COMPAT is not set
CONFIG_NR_CPUS=2 CONFIG_NR_CPUS=2
...@@ -64,7 +63,6 @@ CONFIG_PANIC_ON_OOPS=y ...@@ -64,7 +63,6 @@ CONFIG_PANIC_ON_OOPS=y
# CONFIG_SCHED_DEBUG is not set # CONFIG_SCHED_DEBUG is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60 CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_FTRACE is not set # CONFIG_FTRACE is not set
# CONFIG_STRICT_DEVMEM is not set
# CONFIG_PFAULT is not set # CONFIG_PFAULT is not set
# CONFIG_S390_HYPFS_FS is not set # CONFIG_S390_HYPFS_FS is not set
# CONFIG_VIRTUALIZATION is not set # CONFIG_VIRTUALIZATION is not set
......
CONFIG_SYSVIPC=y CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y CONFIG_POSIX_MQUEUE=y
CONFIG_FHANDLE=y CONFIG_USELIB=y
CONFIG_AUDIT=y CONFIG_AUDIT=y
CONFIG_NO_HZ=y CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y CONFIG_HIGH_RES_TIMERS=y
CONFIG_TASKSTATS=y CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y CONFIG_TASK_DELAY_ACCT=y
...@@ -11,19 +11,19 @@ CONFIG_TASK_IO_ACCOUNTING=y ...@@ -11,19 +11,19 @@ CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_IKCONFIG=y CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y CONFIG_IKCONFIG_PROC=y
CONFIG_CGROUPS=y CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_MEMCG=y CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_KMEM=y CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y CONFIG_CGROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_CGROUP=y CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CHECKPOINT_RESTORE=y
CONFIG_NAMESPACES=y CONFIG_NAMESPACES=y
CONFIG_USER_NS=y CONFIG_USER_NS=y
CONFIG_BLK_DEV_INITRD=y CONFIG_BLK_DEV_INITRD=y
...@@ -44,7 +44,6 @@ CONFIG_PARTITION_ADVANCED=y ...@@ -44,7 +44,6 @@ CONFIG_PARTITION_ADVANCED=y
CONFIG_IBM_PARTITION=y CONFIG_IBM_PARTITION=y
CONFIG_DEFAULT_DEADLINE=y CONFIG_DEFAULT_DEADLINE=y
CONFIG_LIVEPATCH=y CONFIG_LIVEPATCH=y
CONFIG_MARCH_Z196=y
CONFIG_NR_CPUS=256 CONFIG_NR_CPUS=256
CONFIG_NUMA=y CONFIG_NUMA=y
CONFIG_HZ_100=y CONFIG_HZ_100=y
...@@ -52,6 +51,14 @@ CONFIG_MEMORY_HOTPLUG=y ...@@ -52,6 +51,14 @@ CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y CONFIG_MEMORY_HOTREMOVE=y
CONFIG_KSM=y CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
CONFIG_ZSWAP=y
CONFIG_ZBUD=m
CONFIG_ZSMALLOC=m
CONFIG_ZSMALLOC_STAT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_CRASH_DUMP=y CONFIG_CRASH_DUMP=y
CONFIG_BINFMT_MISC=m CONFIG_BINFMT_MISC=m
CONFIG_HIBERNATION=y CONFIG_HIBERNATION=y
...@@ -61,7 +68,6 @@ CONFIG_UNIX=y ...@@ -61,7 +68,6 @@ CONFIG_UNIX=y
CONFIG_NET_KEY=y CONFIG_NET_KEY=y
CONFIG_INET=y CONFIG_INET=y
CONFIG_IP_MULTICAST=y CONFIG_IP_MULTICAST=y
# CONFIG_INET_LRO is not set
CONFIG_L2TP=m CONFIG_L2TP=m
CONFIG_L2TP_DEBUGFS=m CONFIG_L2TP_DEBUGFS=m
CONFIG_VLAN_8021Q=y CONFIG_VLAN_8021Q=y
...@@ -144,6 +150,9 @@ CONFIG_TMPFS=y ...@@ -144,6 +150,9 @@ CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y CONFIG_TMPFS_POSIX_ACL=y
CONFIG_HUGETLBFS=y CONFIG_HUGETLBFS=y
# CONFIG_NETWORK_FILESYSTEMS is not set # CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y
CONFIG_UNUSED_SYMBOLS=y CONFIG_UNUSED_SYMBOLS=y
CONFIG_DEBUG_SECTION_MISMATCH=y CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_DEBUG_FORCE_WEAK_PER_CPU=y CONFIG_DEBUG_FORCE_WEAK_PER_CPU=y
...@@ -158,20 +167,21 @@ CONFIG_LOCK_STAT=y ...@@ -158,20 +167,21 @@ CONFIG_LOCK_STAT=y
CONFIG_DEBUG_LOCKDEP=y CONFIG_DEBUG_LOCKDEP=y
CONFIG_DEBUG_ATOMIC_SLEEP=y CONFIG_DEBUG_ATOMIC_SLEEP=y
CONFIG_DEBUG_LIST=y CONFIG_DEBUG_LIST=y
CONFIG_DEBUG_PI_LIST=y
CONFIG_DEBUG_SG=y CONFIG_DEBUG_SG=y
CONFIG_DEBUG_NOTIFIERS=y CONFIG_DEBUG_NOTIFIERS=y
CONFIG_RCU_CPU_STALL_TIMEOUT=60 CONFIG_RCU_CPU_STALL_TIMEOUT=60
CONFIG_RCU_TRACE=y CONFIG_RCU_TRACE=y
CONFIG_LATENCYTOP=y CONFIG_LATENCYTOP=y
CONFIG_DEBUG_STRICT_USER_COPY_CHECKS=y CONFIG_DEBUG_STRICT_USER_COPY_CHECKS=y
CONFIG_TRACER_SNAPSHOT=y CONFIG_SCHED_TRACER=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y
CONFIG_STACK_TRACER=y CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENT=y CONFIG_UPROBE_EVENT=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_TRACE_ENUM_MAP_FILE=y
CONFIG_KPROBES_SANITY_TEST=y CONFIG_KPROBES_SANITY_TEST=y
# CONFIG_STRICT_DEVMEM is not set
CONFIG_S390_PTDUMP=y CONFIG_S390_PTDUMP=y
CONFIG_CRYPTO_CRYPTD=m CONFIG_CRYPTO_CRYPTD=m
CONFIG_CRYPTO_AUTHENC=m CONFIG_CRYPTO_AUTHENC=m
...@@ -212,8 +222,6 @@ CONFIG_CRYPTO_SERPENT=m ...@@ -212,8 +222,6 @@ CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_TEA=m CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_DEFLATE=m CONFIG_CRYPTO_DEFLATE=m
CONFIG_CRYPTO_ZLIB=m
CONFIG_CRYPTO_LZO=m
CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_LZ4=m
CONFIG_CRYPTO_LZ4HC=m CONFIG_CRYPTO_LZ4HC=m
CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_ANSI_CPRNG=m
......
...@@ -250,6 +250,7 @@ static noinline void do_sigsegv(struct pt_regs *regs, int si_code) ...@@ -250,6 +250,7 @@ static noinline void do_sigsegv(struct pt_regs *regs, int si_code)
report_user_fault(regs, SIGSEGV, 1); report_user_fault(regs, SIGSEGV, 1);
si.si_signo = SIGSEGV; si.si_signo = SIGSEGV;
si.si_errno = 0;
si.si_code = si_code; si.si_code = si_code;
si.si_addr = (void __user *)(regs->int_parm_long & __FAIL_ADDR_MASK); si.si_addr = (void __user *)(regs->int_parm_long & __FAIL_ADDR_MASK);
force_sig_info(SIGSEGV, &si, current); force_sig_info(SIGSEGV, &si, current);
......
...@@ -37,7 +37,7 @@ extern u8 sk_load_word[], sk_load_half[], sk_load_byte[]; ...@@ -37,7 +37,7 @@ extern u8 sk_load_word[], sk_load_half[], sk_load_byte[];
* | | | * | | |
* +---------------+ | * +---------------+ |
* | 8 byte skbp | | * | 8 byte skbp | |
* R15+170 -> +---------------+ | * R15+176 -> +---------------+ |
* | 8 byte hlen | | * | 8 byte hlen | |
* R15+168 -> +---------------+ | * R15+168 -> +---------------+ |
* | 4 byte align | | * | 4 byte align | |
...@@ -58,7 +58,7 @@ extern u8 sk_load_word[], sk_load_half[], sk_load_byte[]; ...@@ -58,7 +58,7 @@ extern u8 sk_load_word[], sk_load_half[], sk_load_byte[];
#define STK_OFF (STK_SPACE - STK_160_UNUSED) #define STK_OFF (STK_SPACE - STK_160_UNUSED)
#define STK_OFF_TMP 160 /* Offset of tmp buffer on stack */ #define STK_OFF_TMP 160 /* Offset of tmp buffer on stack */
#define STK_OFF_HLEN 168 /* Offset of SKB header length on stack */ #define STK_OFF_HLEN 168 /* Offset of SKB header length on stack */
#define STK_OFF_SKBP 170 /* Offset of SKB pointer on stack */ #define STK_OFF_SKBP 176 /* Offset of SKB pointer on stack */
#define STK_OFF_R6 (160 - 11 * 8) /* Offset of r6 on stack */ #define STK_OFF_R6 (160 - 11 * 8) /* Offset of r6 on stack */
#define STK_OFF_TCCNT (160 - 12 * 8) /* Offset of tail_call_cnt on stack */ #define STK_OFF_TCCNT (160 - 12 * 8) /* Offset of tail_call_cnt on stack */
......
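The layout comment and the STK_OFF_SKBP change above go together: the saved skb pointer is an 8-byte slot, so storing it at offset 170 both overlapped the 8-byte hlen slot that starts at 168 and left the store unaligned; moving it to 176 keeps the two slots disjoint and naturally aligned. The arithmetic can be checked with a couple of compile-time assertions (offsets copied from the diff; the assertions themselves are plain C11):

#include <assert.h>

#define STK_OFF_HLEN 168    /* 8-byte "skb header length" slot      */
#define STK_OFF_SKBP 176    /* 8-byte "skb pointer" slot (was 170)  */

/* hlen occupies [168, 176), so skbp must start at 176 or later ... */
static_assert(STK_OFF_SKBP >= STK_OFF_HLEN + 8, "skbp overlaps hlen");
/* ... and an 8-byte store wants an 8-byte aligned offset.          */
static_assert(STK_OFF_SKBP % 8 == 0, "skbp slot is not 8-byte aligned");

int main(void) { return 0; }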
...@@ -45,7 +45,7 @@ struct bpf_jit { ...@@ -45,7 +45,7 @@ struct bpf_jit {
int labels[1]; /* Labels for local jumps */ int labels[1]; /* Labels for local jumps */
}; };
#define BPF_SIZE_MAX 0x7ffff /* Max size for program (20 bit signed displ) */ #define BPF_SIZE_MAX 0xffff /* Max size for program (16 bit branches) */
#define SEEN_SKB 1 /* skb access */ #define SEEN_SKB 1 /* skb access */
#define SEEN_MEM 2 /* use mem[] for temporary storage */ #define SEEN_MEM 2 /* use mem[] for temporary storage */
...@@ -450,7 +450,7 @@ static void bpf_jit_prologue(struct bpf_jit *jit) ...@@ -450,7 +450,7 @@ static void bpf_jit_prologue(struct bpf_jit *jit)
emit_load_skb_data_hlen(jit); emit_load_skb_data_hlen(jit);
if (jit->seen & SEEN_SKB_CHANGE) if (jit->seen & SEEN_SKB_CHANGE)
/* stg %b1,ST_OFF_SKBP(%r0,%r15) */ /* stg %b1,ST_OFF_SKBP(%r0,%r15) */
EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0, REG_15, EMIT6_DISP_LH(0xe3000000, 0x0024, BPF_REG_1, REG_0, REG_15,
STK_OFF_SKBP); STK_OFF_SKBP);
} }
......
...@@ -15,6 +15,10 @@ ...@@ -15,6 +15,10 @@
#define PTREGS_OFF (STACK_BIAS + STACKFRAME_SZ) #define PTREGS_OFF (STACK_BIAS + STACKFRAME_SZ)
#define RTRAP_PSTATE (PSTATE_TSO|PSTATE_PEF|PSTATE_PRIV|PSTATE_IE)
#define RTRAP_PSTATE_IRQOFF (PSTATE_TSO|PSTATE_PEF|PSTATE_PRIV)
#define RTRAP_PSTATE_AG_IRQOFF (PSTATE_TSO|PSTATE_PEF|PSTATE_PRIV|PSTATE_AG)
#define __CHEETAH_ID 0x003e0014 #define __CHEETAH_ID 0x003e0014
#define __JALAPENO_ID 0x003e0016 #define __JALAPENO_ID 0x003e0016
#define __SERRANO_ID 0x003e0022 #define __SERRANO_ID 0x003e0022
......
...@@ -589,8 +589,8 @@ user_rtt_fill_64bit: \ ...@@ -589,8 +589,8 @@ user_rtt_fill_64bit: \
restored; \ restored; \
nop; nop; nop; nop; nop; nop; \ nop; nop; nop; nop; nop; nop; \
nop; nop; nop; nop; nop; \ nop; nop; nop; nop; nop; \
ba,a,pt %xcc, user_rtt_fill_fixup; \ ba,a,pt %xcc, user_rtt_fill_fixup_dax; \
ba,a,pt %xcc, user_rtt_fill_fixup; \ ba,a,pt %xcc, user_rtt_fill_fixup_mna; \
ba,a,pt %xcc, user_rtt_fill_fixup; ba,a,pt %xcc, user_rtt_fill_fixup;
...@@ -652,8 +652,8 @@ user_rtt_fill_32bit: \ ...@@ -652,8 +652,8 @@ user_rtt_fill_32bit: \
restored; \ restored; \
nop; nop; nop; nop; nop; \ nop; nop; nop; nop; nop; \
nop; nop; nop; \ nop; nop; nop; \
ba,a,pt %xcc, user_rtt_fill_fixup; \ ba,a,pt %xcc, user_rtt_fill_fixup_dax; \
ba,a,pt %xcc, user_rtt_fill_fixup; \ ba,a,pt %xcc, user_rtt_fill_fixup_mna; \
ba,a,pt %xcc, user_rtt_fill_fixup; ba,a,pt %xcc, user_rtt_fill_fixup;
......
...@@ -21,6 +21,7 @@ CFLAGS_REMOVE_perf_event.o := -pg ...@@ -21,6 +21,7 @@ CFLAGS_REMOVE_perf_event.o := -pg
CFLAGS_REMOVE_pcr.o := -pg CFLAGS_REMOVE_pcr.o := -pg
endif endif
obj-$(CONFIG_SPARC64) += urtt_fill.o
obj-$(CONFIG_SPARC32) += entry.o wof.o wuf.o obj-$(CONFIG_SPARC32) += entry.o wof.o wuf.o
obj-$(CONFIG_SPARC32) += etrap_32.o obj-$(CONFIG_SPARC32) += etrap_32.o
obj-$(CONFIG_SPARC32) += rtrap_32.o obj-$(CONFIG_SPARC32) += rtrap_32.o
......
...@@ -14,10 +14,6 @@ ...@@ -14,10 +14,6 @@
#include <asm/visasm.h> #include <asm/visasm.h>
#include <asm/processor.h> #include <asm/processor.h>
#define RTRAP_PSTATE (PSTATE_TSO|PSTATE_PEF|PSTATE_PRIV|PSTATE_IE)
#define RTRAP_PSTATE_IRQOFF (PSTATE_TSO|PSTATE_PEF|PSTATE_PRIV)
#define RTRAP_PSTATE_AG_IRQOFF (PSTATE_TSO|PSTATE_PEF|PSTATE_PRIV|PSTATE_AG)
#ifdef CONFIG_CONTEXT_TRACKING #ifdef CONFIG_CONTEXT_TRACKING
# define SCHEDULE_USER schedule_user # define SCHEDULE_USER schedule_user
#else #else
...@@ -242,52 +238,17 @@ rt_continue: ldx [%sp + PTREGS_OFF + PT_V9_G1], %g1 ...@@ -242,52 +238,17 @@ rt_continue: ldx [%sp + PTREGS_OFF + PT_V9_G1], %g1
wrpr %g1, %cwp wrpr %g1, %cwp
ba,a,pt %xcc, user_rtt_fill_64bit ba,a,pt %xcc, user_rtt_fill_64bit
user_rtt_fill_fixup: user_rtt_fill_fixup_dax:
rdpr %cwp, %g1 ba,pt %xcc, user_rtt_fill_fixup_common
add %g1, 1, %g1 mov 1, %g3
wrpr %g1, 0x0, %cwp
rdpr %wstate, %g2
sll %g2, 3, %g2
wrpr %g2, 0x0, %wstate
/* We know %canrestore and %otherwin are both zero. */
sethi %hi(sparc64_kern_pri_context), %g2
ldx [%g2 + %lo(sparc64_kern_pri_context)], %g2
mov PRIMARY_CONTEXT, %g1
661: stxa %g2, [%g1] ASI_DMMU
.section .sun4v_1insn_patch, "ax"
.word 661b
stxa %g2, [%g1] ASI_MMU
.previous
sethi %hi(KERNBASE), %g1
flush %g1
or %g4, FAULT_CODE_WINFIXUP, %g4 user_rtt_fill_fixup_mna:
stb %g4, [%g6 + TI_FAULT_CODE] ba,pt %xcc, user_rtt_fill_fixup_common
stx %g5, [%g6 + TI_FAULT_ADDR] mov 2, %g3
mov %g6, %l1 user_rtt_fill_fixup:
wrpr %g0, 0x0, %tl ba,pt %xcc, user_rtt_fill_fixup_common
clr %g3
661: nop
.section .sun4v_1insn_patch, "ax"
.word 661b
SET_GL(0)
.previous
wrpr %g0, RTRAP_PSTATE, %pstate
mov %l1, %g6
ldx [%g6 + TI_TASK], %g4
LOAD_PER_CPU_BASE(%g5, %g6, %g1, %g2, %g3)
call do_sparc64_fault
add %sp, PTREGS_OFF, %o0
ba,pt %xcc, rtrap
nop
user_rtt_pre_restore: user_rtt_pre_restore:
add %g1, 1, %g1 add %g1, 1, %g1
......
...@@ -138,12 +138,24 @@ int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from) ...@@ -138,12 +138,24 @@ int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
return 0; return 0;
} }
/* Checks if the fp is valid. We always build signal frames which are
* 16-byte aligned, therefore we can always enforce that the restore
* frame has that property as well.
*/
static bool invalid_frame_pointer(void __user *fp, int fplen)
{
if ((((unsigned long) fp) & 15) ||
((unsigned long)fp) > 0x100000000ULL - fplen)
return true;
return false;
}
void do_sigreturn32(struct pt_regs *regs) void do_sigreturn32(struct pt_regs *regs)
{ {
struct signal_frame32 __user *sf; struct signal_frame32 __user *sf;
compat_uptr_t fpu_save; compat_uptr_t fpu_save;
compat_uptr_t rwin_save; compat_uptr_t rwin_save;
unsigned int psr; unsigned int psr, ufp;
unsigned int pc, npc; unsigned int pc, npc;
sigset_t set; sigset_t set;
compat_sigset_t seta; compat_sigset_t seta;
...@@ -158,11 +170,16 @@ void do_sigreturn32(struct pt_regs *regs) ...@@ -158,11 +170,16 @@ void do_sigreturn32(struct pt_regs *regs)
sf = (struct signal_frame32 __user *) regs->u_regs[UREG_FP]; sf = (struct signal_frame32 __user *) regs->u_regs[UREG_FP];
/* 1. Make sure we are not getting garbage from the user */ /* 1. Make sure we are not getting garbage from the user */
if (!access_ok(VERIFY_READ, sf, sizeof(*sf)) || if (invalid_frame_pointer(sf, sizeof(*sf)))
(((unsigned long) sf) & 3)) goto segv;
if (get_user(ufp, &sf->info.si_regs.u_regs[UREG_FP]))
goto segv;
if (ufp & 0x7)
goto segv; goto segv;
if (get_user(pc, &sf->info.si_regs.pc) || if (__get_user(pc, &sf->info.si_regs.pc) ||
__get_user(npc, &sf->info.si_regs.npc)) __get_user(npc, &sf->info.si_regs.npc))
goto segv; goto segv;
...@@ -227,7 +244,7 @@ void do_sigreturn32(struct pt_regs *regs) ...@@ -227,7 +244,7 @@ void do_sigreturn32(struct pt_regs *regs)
asmlinkage void do_rt_sigreturn32(struct pt_regs *regs) asmlinkage void do_rt_sigreturn32(struct pt_regs *regs)
{ {
struct rt_signal_frame32 __user *sf; struct rt_signal_frame32 __user *sf;
unsigned int psr, pc, npc; unsigned int psr, pc, npc, ufp;
compat_uptr_t fpu_save; compat_uptr_t fpu_save;
compat_uptr_t rwin_save; compat_uptr_t rwin_save;
sigset_t set; sigset_t set;
...@@ -242,11 +259,16 @@ asmlinkage void do_rt_sigreturn32(struct pt_regs *regs) ...@@ -242,11 +259,16 @@ asmlinkage void do_rt_sigreturn32(struct pt_regs *regs)
sf = (struct rt_signal_frame32 __user *) regs->u_regs[UREG_FP]; sf = (struct rt_signal_frame32 __user *) regs->u_regs[UREG_FP];
/* 1. Make sure we are not getting garbage from the user */ /* 1. Make sure we are not getting garbage from the user */
if (!access_ok(VERIFY_READ, sf, sizeof(*sf)) || if (invalid_frame_pointer(sf, sizeof(*sf)))
(((unsigned long) sf) & 3))
goto segv; goto segv;
if (get_user(pc, &sf->regs.pc) || if (get_user(ufp, &sf->regs.u_regs[UREG_FP]))
goto segv;
if (ufp & 0x7)
goto segv;
if (__get_user(pc, &sf->regs.pc) ||
__get_user(npc, &sf->regs.npc)) __get_user(npc, &sf->regs.npc))
goto segv; goto segv;
...@@ -307,14 +329,6 @@ asmlinkage void do_rt_sigreturn32(struct pt_regs *regs) ...@@ -307,14 +329,6 @@ asmlinkage void do_rt_sigreturn32(struct pt_regs *regs)
force_sig(SIGSEGV, current); force_sig(SIGSEGV, current);
} }
/* Checks if the fp is valid */
static int invalid_frame_pointer(void __user *fp, int fplen)
{
if ((((unsigned long) fp) & 7) || ((unsigned long)fp) > 0x100000000ULL - fplen)
return 1;
return 0;
}
static void __user *get_sigframe(struct ksignal *ksig, struct pt_regs *regs, unsigned long framesize) static void __user *get_sigframe(struct ksignal *ksig, struct pt_regs *regs, unsigned long framesize)
{ {
unsigned long sp; unsigned long sp;
......
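The invalid_frame_pointer() helper added above encodes the rule stated in its comment: signal frames are always built 16-byte aligned, so a restore frame whose pointer is not 16-byte aligned, or which would run past the 4GB compat address space, can be rejected before anything is read from it. A user-space sketch of the same check (the 4GB bound mirrors the compat case in the hunk; names are illustrative):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Reject frames that are misaligned or would cross the 4GB compat limit. */
static bool invalid_frame_pointer(const void *fp, size_t fplen)
{
    unsigned long long addr = (uintptr_t)fp;

    if (addr & 15)                          /* frames are 16-byte aligned */
        return true;
    if (addr > 0x100000000ULL - fplen)      /* must fit below 4GB */
        return true;
    return false;
}

int main(void)
{
    printf("%d\n", invalid_frame_pointer((void *)0x1000, 512));  /* 0: ok  */
    printf("%d\n", invalid_frame_pointer((void *)0x1004, 512));  /* 1: bad */
    return 0;
}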
...@@ -60,10 +60,22 @@ struct rt_signal_frame { ...@@ -60,10 +60,22 @@ struct rt_signal_frame {
#define SF_ALIGNEDSZ (((sizeof(struct signal_frame) + 7) & (~7))) #define SF_ALIGNEDSZ (((sizeof(struct signal_frame) + 7) & (~7)))
#define RT_ALIGNEDSZ (((sizeof(struct rt_signal_frame) + 7) & (~7))) #define RT_ALIGNEDSZ (((sizeof(struct rt_signal_frame) + 7) & (~7)))
/* Checks if the fp is valid. We always build signal frames which are
* 16-byte aligned, therefore we can always enforce that the restore
* frame has that property as well.
*/
static inline bool invalid_frame_pointer(void __user *fp, int fplen)
{
if ((((unsigned long) fp) & 15) || !__access_ok((unsigned long)fp, fplen))
return true;
return false;
}
asmlinkage void do_sigreturn(struct pt_regs *regs) asmlinkage void do_sigreturn(struct pt_regs *regs)
{ {
unsigned long up_psr, pc, npc, ufp;
struct signal_frame __user *sf; struct signal_frame __user *sf;
unsigned long up_psr, pc, npc;
sigset_t set; sigset_t set;
__siginfo_fpu_t __user *fpu_save; __siginfo_fpu_t __user *fpu_save;
__siginfo_rwin_t __user *rwin_save; __siginfo_rwin_t __user *rwin_save;
...@@ -77,10 +89,13 @@ asmlinkage void do_sigreturn(struct pt_regs *regs) ...@@ -77,10 +89,13 @@ asmlinkage void do_sigreturn(struct pt_regs *regs)
sf = (struct signal_frame __user *) regs->u_regs[UREG_FP]; sf = (struct signal_frame __user *) regs->u_regs[UREG_FP];
/* 1. Make sure we are not getting garbage from the user */ /* 1. Make sure we are not getting garbage from the user */
if (!access_ok(VERIFY_READ, sf, sizeof(*sf))) if (!invalid_frame_pointer(sf, sizeof(*sf)))
goto segv_and_exit;
if (get_user(ufp, &sf->info.si_regs.u_regs[UREG_FP]))
goto segv_and_exit; goto segv_and_exit;
if (((unsigned long) sf) & 3) if (ufp & 0x7)
goto segv_and_exit; goto segv_and_exit;
err = __get_user(pc, &sf->info.si_regs.pc); err = __get_user(pc, &sf->info.si_regs.pc);
...@@ -127,7 +142,7 @@ asmlinkage void do_sigreturn(struct pt_regs *regs) ...@@ -127,7 +142,7 @@ asmlinkage void do_sigreturn(struct pt_regs *regs)
asmlinkage void do_rt_sigreturn(struct pt_regs *regs) asmlinkage void do_rt_sigreturn(struct pt_regs *regs)
{ {
struct rt_signal_frame __user *sf; struct rt_signal_frame __user *sf;
unsigned int psr, pc, npc; unsigned int psr, pc, npc, ufp;
__siginfo_fpu_t __user *fpu_save; __siginfo_fpu_t __user *fpu_save;
__siginfo_rwin_t __user *rwin_save; __siginfo_rwin_t __user *rwin_save;
sigset_t set; sigset_t set;
...@@ -135,8 +150,13 @@ asmlinkage void do_rt_sigreturn(struct pt_regs *regs) ...@@ -135,8 +150,13 @@ asmlinkage void do_rt_sigreturn(struct pt_regs *regs)
synchronize_user_stack(); synchronize_user_stack();
sf = (struct rt_signal_frame __user *) regs->u_regs[UREG_FP]; sf = (struct rt_signal_frame __user *) regs->u_regs[UREG_FP];
if (!access_ok(VERIFY_READ, sf, sizeof(*sf)) || if (!invalid_frame_pointer(sf, sizeof(*sf)))
(((unsigned long) sf) & 0x03)) goto segv;
if (get_user(ufp, &sf->regs.u_regs[UREG_FP]))
goto segv;
if (ufp & 0x7)
goto segv; goto segv;
err = __get_user(pc, &sf->regs.pc); err = __get_user(pc, &sf->regs.pc);
...@@ -178,15 +198,6 @@ asmlinkage void do_rt_sigreturn(struct pt_regs *regs) ...@@ -178,15 +198,6 @@ asmlinkage void do_rt_sigreturn(struct pt_regs *regs)
force_sig(SIGSEGV, current); force_sig(SIGSEGV, current);
} }
/* Checks if the fp is valid */
static inline int invalid_frame_pointer(void __user *fp, int fplen)
{
if ((((unsigned long) fp) & 7) || !__access_ok((unsigned long)fp, fplen))
return 1;
return 0;
}
static inline void __user *get_sigframe(struct ksignal *ksig, struct pt_regs *regs, unsigned long framesize) static inline void __user *get_sigframe(struct ksignal *ksig, struct pt_regs *regs, unsigned long framesize)
{ {
unsigned long sp = regs->u_regs[UREG_FP]; unsigned long sp = regs->u_regs[UREG_FP];
......
...@@ -234,6 +234,17 @@ asmlinkage void sparc64_get_context(struct pt_regs *regs) ...@@ -234,6 +234,17 @@ asmlinkage void sparc64_get_context(struct pt_regs *regs)
goto out; goto out;
} }
/* Checks if the fp is valid. We always build rt signal frames which
* are 16-byte aligned, therefore we can always enforce that the
* restore frame has that property as well.
*/
static bool invalid_frame_pointer(void __user *fp)
{
if (((unsigned long) fp) & 15)
return true;
return false;
}
struct rt_signal_frame { struct rt_signal_frame {
struct sparc_stackf ss; struct sparc_stackf ss;
siginfo_t info; siginfo_t info;
...@@ -246,8 +257,8 @@ struct rt_signal_frame { ...@@ -246,8 +257,8 @@ struct rt_signal_frame {
void do_rt_sigreturn(struct pt_regs *regs) void do_rt_sigreturn(struct pt_regs *regs)
{ {
unsigned long tpc, tnpc, tstate, ufp;
struct rt_signal_frame __user *sf; struct rt_signal_frame __user *sf;
unsigned long tpc, tnpc, tstate;
__siginfo_fpu_t __user *fpu_save; __siginfo_fpu_t __user *fpu_save;
__siginfo_rwin_t __user *rwin_save; __siginfo_rwin_t __user *rwin_save;
sigset_t set; sigset_t set;
...@@ -261,10 +272,16 @@ void do_rt_sigreturn(struct pt_regs *regs) ...@@ -261,10 +272,16 @@ void do_rt_sigreturn(struct pt_regs *regs)
(regs->u_regs [UREG_FP] + STACK_BIAS); (regs->u_regs [UREG_FP] + STACK_BIAS);
/* 1. Make sure we are not getting garbage from the user */ /* 1. Make sure we are not getting garbage from the user */
if (((unsigned long) sf) & 3) if (invalid_frame_pointer(sf))
goto segv;
if (get_user(ufp, &sf->regs.u_regs[UREG_FP]))
goto segv; goto segv;
err = get_user(tpc, &sf->regs.tpc); if ((ufp + STACK_BIAS) & 0x7)
goto segv;
err = __get_user(tpc, &sf->regs.tpc);
err |= __get_user(tnpc, &sf->regs.tnpc); err |= __get_user(tnpc, &sf->regs.tnpc);
if (test_thread_flag(TIF_32BIT)) { if (test_thread_flag(TIF_32BIT)) {
tpc &= 0xffffffff; tpc &= 0xffffffff;
...@@ -308,14 +325,6 @@ void do_rt_sigreturn(struct pt_regs *regs) ...@@ -308,14 +325,6 @@ void do_rt_sigreturn(struct pt_regs *regs)
force_sig(SIGSEGV, current); force_sig(SIGSEGV, current);
} }
/* Checks if the fp is valid */
static int invalid_frame_pointer(void __user *fp)
{
if (((unsigned long) fp) & 15)
return 1;
return 0;
}
static inline void __user *get_sigframe(struct ksignal *ksig, struct pt_regs *regs, unsigned long framesize) static inline void __user *get_sigframe(struct ksignal *ksig, struct pt_regs *regs, unsigned long framesize)
{ {
unsigned long sp = regs->u_regs[UREG_FP] + STACK_BIAS; unsigned long sp = regs->u_regs[UREG_FP] + STACK_BIAS;
......
...@@ -48,6 +48,10 @@ int save_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu) ...@@ -48,6 +48,10 @@ int save_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
int restore_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu) int restore_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
{ {
int err; int err;
if (((unsigned long) fpu) & 3)
return -EFAULT;
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
if (test_tsk_thread_flag(current, TIF_USEDFPU)) if (test_tsk_thread_flag(current, TIF_USEDFPU))
regs->psr &= ~PSR_EF; regs->psr &= ~PSR_EF;
...@@ -97,7 +101,10 @@ int restore_rwin_state(__siginfo_rwin_t __user *rp) ...@@ -97,7 +101,10 @@ int restore_rwin_state(__siginfo_rwin_t __user *rp)
struct thread_info *t = current_thread_info(); struct thread_info *t = current_thread_info();
int i, wsaved, err; int i, wsaved, err;
__get_user(wsaved, &rp->wsaved); if (((unsigned long) rp) & 3)
return -EFAULT;
get_user(wsaved, &rp->wsaved);
if (wsaved > NSWINS) if (wsaved > NSWINS)
return -EFAULT; return -EFAULT;
......
...@@ -37,7 +37,10 @@ int restore_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu) ...@@ -37,7 +37,10 @@ int restore_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
unsigned long fprs; unsigned long fprs;
int err; int err;
err = __get_user(fprs, &fpu->si_fprs); if (((unsigned long) fpu) & 7)
return -EFAULT;
err = get_user(fprs, &fpu->si_fprs);
fprs_write(0); fprs_write(0);
regs->tstate &= ~TSTATE_PEF; regs->tstate &= ~TSTATE_PEF;
if (fprs & FPRS_DL) if (fprs & FPRS_DL)
...@@ -72,7 +75,10 @@ int restore_rwin_state(__siginfo_rwin_t __user *rp) ...@@ -72,7 +75,10 @@ int restore_rwin_state(__siginfo_rwin_t __user *rp)
struct thread_info *t = current_thread_info(); struct thread_info *t = current_thread_info();
int i, wsaved, err; int i, wsaved, err;
__get_user(wsaved, &rp->wsaved); if (((unsigned long) rp) & 7)
return -EFAULT;
get_user(wsaved, &rp->wsaved);
if (wsaved > NSWINS) if (wsaved > NSWINS)
return -EFAULT; return -EFAULT;
......
#include <asm/thread_info.h>
#include <asm/trap_block.h>
#include <asm/spitfire.h>
#include <asm/ptrace.h>
#include <asm/head.h>
.text
.align 8
.globl user_rtt_fill_fixup_common
user_rtt_fill_fixup_common:
rdpr %cwp, %g1
add %g1, 1, %g1
wrpr %g1, 0x0, %cwp
rdpr %wstate, %g2
sll %g2, 3, %g2
wrpr %g2, 0x0, %wstate
/* We know %canrestore and %otherwin are both zero. */
sethi %hi(sparc64_kern_pri_context), %g2
ldx [%g2 + %lo(sparc64_kern_pri_context)], %g2
mov PRIMARY_CONTEXT, %g1
661: stxa %g2, [%g1] ASI_DMMU
.section .sun4v_1insn_patch, "ax"
.word 661b
stxa %g2, [%g1] ASI_MMU
.previous
sethi %hi(KERNBASE), %g1
flush %g1
mov %g4, %l4
mov %g5, %l5
brnz,pn %g3, 1f
mov %g3, %l3
or %g4, FAULT_CODE_WINFIXUP, %g4
stb %g4, [%g6 + TI_FAULT_CODE]
stx %g5, [%g6 + TI_FAULT_ADDR]
1:
mov %g6, %l1
wrpr %g0, 0x0, %tl
661: nop
.section .sun4v_1insn_patch, "ax"
.word 661b
SET_GL(0)
.previous
wrpr %g0, RTRAP_PSTATE, %pstate
mov %l1, %g6
ldx [%g6 + TI_TASK], %g4
LOAD_PER_CPU_BASE(%g5, %g6, %g1, %g2, %g3)
brnz,pn %l3, 1f
nop
call do_sparc64_fault
add %sp, PTREGS_OFF, %o0
ba,pt %xcc, rtrap
nop
1: cmp %g3, 2
bne,pn %xcc, 2f
nop
sethi %hi(tlb_type), %g1
lduw [%g1 + %lo(tlb_type)], %g1
cmp %g1, 3
bne,pt %icc, 1f
add %sp, PTREGS_OFF, %o0
mov %l4, %o2
call sun4v_do_mna
mov %l5, %o1
ba,a,pt %xcc, rtrap
1: mov %l4, %o1
mov %l5, %o2
call mem_address_unaligned
nop
ba,a,pt %xcc, rtrap
2: sethi %hi(tlb_type), %g1
mov %l4, %o1
lduw [%g1 + %lo(tlb_type)], %g1
mov %l5, %o2
cmp %g1, 3
bne,pt %icc, 1f
add %sp, PTREGS_OFF, %o0
call sun4v_data_access_exception
nop
ba,a,pt %xcc, rtrap
1: call spitfire_data_access_exception
nop
ba,a,pt %xcc, rtrap
...@@ -2824,9 +2824,10 @@ void hugetlb_setup(struct pt_regs *regs) ...@@ -2824,9 +2824,10 @@ void hugetlb_setup(struct pt_regs *regs)
* the Data-TLB for huge pages. * the Data-TLB for huge pages.
*/ */
if (tlb_type == cheetah_plus) { if (tlb_type == cheetah_plus) {
bool need_context_reload = false;
unsigned long ctx; unsigned long ctx;
spin_lock(&ctx_alloc_lock); spin_lock_irq(&ctx_alloc_lock);
ctx = mm->context.sparc64_ctx_val; ctx = mm->context.sparc64_ctx_val;
ctx &= ~CTX_PGSZ_MASK; ctx &= ~CTX_PGSZ_MASK;
ctx |= CTX_PGSZ_BASE << CTX_PGSZ0_SHIFT; ctx |= CTX_PGSZ_BASE << CTX_PGSZ0_SHIFT;
...@@ -2845,9 +2846,12 @@ void hugetlb_setup(struct pt_regs *regs) ...@@ -2845,9 +2846,12 @@ void hugetlb_setup(struct pt_regs *regs)
* also executing in this address space. * also executing in this address space.
*/ */
mm->context.sparc64_ctx_val = ctx; mm->context.sparc64_ctx_val = ctx;
on_each_cpu(context_reload, mm, 0); need_context_reload = true;
} }
spin_unlock(&ctx_alloc_lock); spin_unlock_irq(&ctx_alloc_lock);
if (need_context_reload)
on_each_cpu(context_reload, mm, 0);
} }
} }
#endif #endif
......
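The hugetlb_setup() change above follows a common pattern: decide under ctx_alloc_lock (now taken with spin_lock_irq) whether the context value changed, remember that in need_context_reload, and only issue the cross-CPU on_each_cpu() call once the lock has been dropped, since broadcasting to other CPUs while holding a spinlock with interrupts off risks deadlock. A stripped-down sketch of that "decide under the lock, act after it" pattern, with a pthread mutex standing in for the kernel lock (names are illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long ctx_val;

/* Stand-in for on_each_cpu(context_reload, ...): may block or broadcast,
 * so it must not run with ctx_lock held. */
static void broadcast_context_reload(void)
{
    puts("reloading context on all CPUs");
}

static void update_context(unsigned long new_bits)
{
    bool need_reload = false;

    pthread_mutex_lock(&ctx_lock);
    if ((ctx_val & new_bits) != new_bits) {
        ctx_val |= new_bits;
        need_reload = true;            /* decide under the lock */
    }
    pthread_mutex_unlock(&ctx_lock);

    if (need_reload)                   /* act only after dropping it */
        broadcast_context_reload();
}

int main(void)
{
    update_context(0x8);
    update_context(0x8);               /* second call: no reload needed */
    return 0;
}

(Build with -pthread.)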
...@@ -181,19 +181,22 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu, ...@@ -181,19 +181,22 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
struct kvm_cpuid_entry __user *entries) struct kvm_cpuid_entry __user *entries)
{ {
int r, i; int r, i;
struct kvm_cpuid_entry *cpuid_entries; struct kvm_cpuid_entry *cpuid_entries = NULL;
r = -E2BIG; r = -E2BIG;
if (cpuid->nent > KVM_MAX_CPUID_ENTRIES) if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
goto out; goto out;
r = -ENOMEM; r = -ENOMEM;
cpuid_entries = vmalloc(sizeof(struct kvm_cpuid_entry) * cpuid->nent); if (cpuid->nent) {
if (!cpuid_entries) cpuid_entries = vmalloc(sizeof(struct kvm_cpuid_entry) *
goto out; cpuid->nent);
r = -EFAULT; if (!cpuid_entries)
if (copy_from_user(cpuid_entries, entries, goto out;
cpuid->nent * sizeof(struct kvm_cpuid_entry))) r = -EFAULT;
goto out_free; if (copy_from_user(cpuid_entries, entries,
cpuid->nent * sizeof(struct kvm_cpuid_entry)))
goto out;
}
for (i = 0; i < cpuid->nent; i++) { for (i = 0; i < cpuid->nent; i++) {
vcpu->arch.cpuid_entries[i].function = cpuid_entries[i].function; vcpu->arch.cpuid_entries[i].function = cpuid_entries[i].function;
vcpu->arch.cpuid_entries[i].eax = cpuid_entries[i].eax; vcpu->arch.cpuid_entries[i].eax = cpuid_entries[i].eax;
...@@ -212,9 +215,8 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu, ...@@ -212,9 +215,8 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
kvm_x86_ops->cpuid_update(vcpu); kvm_x86_ops->cpuid_update(vcpu);
r = kvm_update_cpuid(vcpu); r = kvm_update_cpuid(vcpu);
out_free:
vfree(cpuid_entries);
out: out:
vfree(cpuid_entries);
return r; return r;
} }
......
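The kvm_vcpu_ioctl_set_cpuid() rework above only allocates the temporary entry array when cpuid->nent is non-zero and funnels every exit through one out: label; that is safe because vfree(NULL) is a no-op, so the error paths no longer have to track whether the allocation happened. A small user-space sketch of the same single-exit cleanup pattern, relying on free(NULL) being a no-op in the same way (names and limits are illustrative, not KVM's):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry { unsigned int function, eax; };

static int copy_entries(const struct entry *src, unsigned int nent)
{
    struct entry *tmp = NULL;           /* free(NULL) is a no-op */
    int r = -E2BIG;

    if (nent > 256)
        goto out;

    r = -ENOMEM;
    if (nent) {                         /* only allocate when there is work */
        tmp = calloc(nent, sizeof(*tmp));
        if (!tmp)
            goto out;
        memcpy(tmp, src, nent * sizeof(*tmp));
    }

    for (unsigned int i = 0; i < nent; i++)
        printf("fn %#x eax %#x\n", tmp[i].function, tmp[i].eax);
    r = 0;
out:
    free(tmp);                          /* single cleanup path */
    return r;
}

int main(void)
{
    struct entry e[] = { { 0x1, 0x42 } };
    return copy_entries(e, 1);
}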
...@@ -336,12 +336,12 @@ static gfn_t pse36_gfn_delta(u32 gpte) ...@@ -336,12 +336,12 @@ static gfn_t pse36_gfn_delta(u32 gpte)
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
static void __set_spte(u64 *sptep, u64 spte) static void __set_spte(u64 *sptep, u64 spte)
{ {
*sptep = spte; WRITE_ONCE(*sptep, spte);
} }
static void __update_clear_spte_fast(u64 *sptep, u64 spte) static void __update_clear_spte_fast(u64 *sptep, u64 spte)
{ {
*sptep = spte; WRITE_ONCE(*sptep, spte);
} }
static u64 __update_clear_spte_slow(u64 *sptep, u64 spte) static u64 __update_clear_spte_slow(u64 *sptep, u64 spte)
...@@ -390,7 +390,7 @@ static void __set_spte(u64 *sptep, u64 spte) ...@@ -390,7 +390,7 @@ static void __set_spte(u64 *sptep, u64 spte)
*/ */
smp_wmb(); smp_wmb();
ssptep->spte_low = sspte.spte_low; WRITE_ONCE(ssptep->spte_low, sspte.spte_low);
} }
static void __update_clear_spte_fast(u64 *sptep, u64 spte) static void __update_clear_spte_fast(u64 *sptep, u64 spte)
...@@ -400,7 +400,7 @@ static void __update_clear_spte_fast(u64 *sptep, u64 spte) ...@@ -400,7 +400,7 @@ static void __update_clear_spte_fast(u64 *sptep, u64 spte)
ssptep = (union split_spte *)sptep; ssptep = (union split_spte *)sptep;
sspte = (union split_spte)spte; sspte = (union split_spte)spte;
ssptep->spte_low = sspte.spte_low; WRITE_ONCE(ssptep->spte_low, sspte.spte_low);
/* /*
* If we map the spte from present to nonpresent, we should clear * If we map the spte from present to nonpresent, we should clear
......
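Switching the raw SPTE stores above to WRITE_ONCE() stops the compiler from tearing, fusing or otherwise rewriting the store, which matters because other CPUs walk these shadow page-table entries without taking the MMU lock. A user-space sketch using the common simplified definition of WRITE_ONCE/READ_ONCE as volatile accesses (illustrative, not the kernel's exact implementation):

#include <stdint.h>
#include <stdio.h>

/* Simplified forms: a single volatile access (uses the GCC __typeof__ extension). */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

static uint64_t spte;   /* stands in for a shadow page-table entry */

static void set_spte(uint64_t val)
{
    /* Lockless readers should observe either the old or the new value. */
    WRITE_ONCE(spte, val);
}

int main(void)
{
    set_spte(0xdeadbeefULL);
    printf("%#llx\n", (unsigned long long)READ_ONCE(spte));
    return 0;
}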
...@@ -2314,6 +2314,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) ...@@ -2314,6 +2314,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
case MSR_AMD64_NB_CFG: case MSR_AMD64_NB_CFG:
case MSR_FAM10H_MMIO_CONF_BASE: case MSR_FAM10H_MMIO_CONF_BASE:
case MSR_AMD64_BU_CFG2: case MSR_AMD64_BU_CFG2:
case MSR_IA32_PERF_CTL:
msr_info->data = 0; msr_info->data = 0;
break; break;
case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3: case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3:
...@@ -2972,6 +2973,10 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu, ...@@ -2972,6 +2973,10 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
| KVM_VCPUEVENT_VALID_SMM)) | KVM_VCPUEVENT_VALID_SMM))
return -EINVAL; return -EINVAL;
if (events->exception.injected &&
(events->exception.nr > 31 || events->exception.nr == NMI_VECTOR))
return -EINVAL;
process_nmi(vcpu); process_nmi(vcpu);
vcpu->arch.exception.pending = events->exception.injected; vcpu->arch.exception.pending = events->exception.injected;
vcpu->arch.exception.nr = events->exception.nr; vcpu->arch.exception.nr = events->exception.nr;
...@@ -3036,6 +3041,11 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu, ...@@ -3036,6 +3041,11 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
if (dbgregs->flags) if (dbgregs->flags)
return -EINVAL; return -EINVAL;
if (dbgregs->dr6 & ~0xffffffffull)
return -EINVAL;
if (dbgregs->dr7 & ~0xffffffffull)
return -EINVAL;
memcpy(vcpu->arch.db, dbgregs->db, sizeof(vcpu->arch.db)); memcpy(vcpu->arch.db, dbgregs->db, sizeof(vcpu->arch.db));
kvm_update_dr0123(vcpu); kvm_update_dr0123(vcpu);
vcpu->arch.dr6 = dbgregs->dr6; vcpu->arch.dr6 = dbgregs->dr6;
...@@ -7815,7 +7825,7 @@ int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size) ...@@ -7815,7 +7825,7 @@ int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
slot = id_to_memslot(slots, id); slot = id_to_memslot(slots, id);
if (size) { if (size) {
if (WARN_ON(slot->npages)) if (slot->npages)
return -EEXIST; return -EEXIST;
/* /*
......
...@@ -13,6 +13,7 @@ config ASYMMETRIC_PUBLIC_KEY_SUBTYPE ...@@ -13,6 +13,7 @@ config ASYMMETRIC_PUBLIC_KEY_SUBTYPE
tristate "Asymmetric public-key crypto algorithm subtype" tristate "Asymmetric public-key crypto algorithm subtype"
select MPILIB select MPILIB
select CRYPTO_HASH_INFO select CRYPTO_HASH_INFO
select CRYPTO_AKCIPHER
help help
This option provides support for asymmetric public key type handling. This option provides support for asymmetric public key type handling.
If signature generation and/or verification are to be used, If signature generation and/or verification are to be used,
......
...@@ -331,15 +331,6 @@ static int acpi_processor_get_info(struct acpi_device *device) ...@@ -331,15 +331,6 @@ static int acpi_processor_get_info(struct acpi_device *device)
pr->throttling.duty_width = acpi_gbl_FADT.duty_width; pr->throttling.duty_width = acpi_gbl_FADT.duty_width;
pr->pblk = object.processor.pblk_address; pr->pblk = object.processor.pblk_address;
/*
* We don't care about error returns - we just try to mark
* these reserved so that nobody else is confused into thinking
* that this region might be unused..
*
* (In particular, allocating the IO range for Cardbus)
*/
request_region(pr->throttling.address, 6, "ACPI CPU throttle");
} }
/* /*
......
...@@ -754,7 +754,8 @@ static int acpi_video_bqc_quirk(struct acpi_video_device *device, ...@@ -754,7 +754,8 @@ static int acpi_video_bqc_quirk(struct acpi_video_device *device,
} }
int acpi_video_get_levels(struct acpi_device *device, int acpi_video_get_levels(struct acpi_device *device,
struct acpi_video_device_brightness **dev_br) struct acpi_video_device_brightness **dev_br,
int *pmax_level)
{ {
union acpi_object *obj = NULL; union acpi_object *obj = NULL;
int i, max_level = 0, count = 0, level_ac_battery = 0; int i, max_level = 0, count = 0, level_ac_battery = 0;
...@@ -841,6 +842,8 @@ int acpi_video_get_levels(struct acpi_device *device, ...@@ -841,6 +842,8 @@ int acpi_video_get_levels(struct acpi_device *device,
br->count = count; br->count = count;
*dev_br = br; *dev_br = br;
if (pmax_level)
*pmax_level = max_level;
out: out:
kfree(obj); kfree(obj);
...@@ -869,7 +872,7 @@ acpi_video_init_brightness(struct acpi_video_device *device) ...@@ -869,7 +872,7 @@ acpi_video_init_brightness(struct acpi_video_device *device)
struct acpi_video_device_brightness *br = NULL; struct acpi_video_device_brightness *br = NULL;
int result = -EINVAL; int result = -EINVAL;
result = acpi_video_get_levels(device->dev, &br); result = acpi_video_get_levels(device->dev, &br, &max_level);
if (result) if (result)
return result; return result;
device->brightness = br; device->brightness = br;
...@@ -1737,7 +1740,7 @@ static void acpi_video_run_bcl_for_osi(struct acpi_video_bus *video) ...@@ -1737,7 +1740,7 @@ static void acpi_video_run_bcl_for_osi(struct acpi_video_bus *video)
mutex_lock(&video->device_list_lock); mutex_lock(&video->device_list_lock);
list_for_each_entry(dev, &video->video_device_list, entry) { list_for_each_entry(dev, &video->video_device_list, entry) {
if (!acpi_video_device_lcd_query_levels(dev, &levels)) if (!acpi_video_device_lcd_query_levels(dev->dev->handle, &levels))
kfree(levels); kfree(levels);
} }
mutex_unlock(&video->device_list_lock); mutex_unlock(&video->device_list_lock);
......
...@@ -83,27 +83,22 @@ acpi_hw_write_multiple(u32 value, ...@@ -83,27 +83,22 @@ acpi_hw_write_multiple(u32 value,
static u8 static u8
acpi_hw_get_access_bit_width(struct acpi_generic_address *reg, u8 max_bit_width) acpi_hw_get_access_bit_width(struct acpi_generic_address *reg, u8 max_bit_width)
{ {
u64 address;
if (!reg->access_width) { if (!reg->access_width) {
if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
max_bit_width = 32;
}
/* /*
* Detect old register descriptors where only the bit_width field * Detect old register descriptors where only the bit_width field
	 * makes sense. The target address is copied to handle possible	 * makes sense.
* alignment issues.
*/ */
ACPI_MOVE_64_TO_64(&address, &reg->address); if (reg->bit_width < max_bit_width &&
if (!reg->bit_offset && reg->bit_width && !reg->bit_offset && reg->bit_width &&
ACPI_IS_POWER_OF_TWO(reg->bit_width) && ACPI_IS_POWER_OF_TWO(reg->bit_width) &&
ACPI_IS_ALIGNED(reg->bit_width, 8) && ACPI_IS_ALIGNED(reg->bit_width, 8)) {
ACPI_IS_ALIGNED(address, reg->bit_width)) {
return (reg->bit_width); return (reg->bit_width);
} else {
if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
return (32);
} else {
return (max_bit_width);
}
} }
return (max_bit_width);
} else { } else {
return (1 << (reg->access_width + 2)); return (1 << (reg->access_width + 2));
} }
......
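After this change the fallback path no longer copies and checks the GAS address for alignment: access_width still wins when present, and bit_width is only trusted when it is a byte-aligned power of two smaller than the caller's maximum (with SYSTEM_IO capped to 32 bits earlier in the function). A standalone C sketch of that decision, with made-up helper names:

        #include <stdint.h>

        static int is_pow2(uint8_t v)
        {
                return v && !(v & (v - 1));
        }

        /* access_width encodes 1..4 -> 8..64 bits; otherwise fall back to
         * the legacy bit_width heuristic, else to the supplied maximum. */
        static uint8_t access_bit_width(uint8_t access_width, uint8_t bit_offset,
                                        uint8_t bit_width, uint8_t max_bit_width)
        {
                if (access_width)
                        return (uint8_t)(1 << (access_width + 2));

                if (bit_width < max_bit_width && !bit_offset &&
                    is_pow2(bit_width) && (bit_width % 8) == 0)
                        return bit_width;

                return max_bit_width;
        }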
...@@ -676,6 +676,15 @@ static int acpi_processor_get_throttling_fadt(struct acpi_processor *pr) ...@@ -676,6 +676,15 @@ static int acpi_processor_get_throttling_fadt(struct acpi_processor *pr)
if (!pr->flags.throttling) if (!pr->flags.throttling)
return -ENODEV; return -ENODEV;
/*
* We don't care about error returns - we just try to mark
* these reserved so that nobody else is confused into thinking
* that this region might be unused..
*
* (In particular, allocating the IO range for Cardbus)
*/
request_region(pr->throttling.address, 6, "ACPI CPU throttle");
pr->throttling.state = 0; pr->throttling.state = 0;
duty_mask = pr->throttling.state_count - 1; duty_mask = pr->throttling.state_count - 1;
......
...@@ -181,13 +181,17 @@ static char *res_strings[] = { ...@@ -181,13 +181,17 @@ static char *res_strings[] = {
"reserved 27", "reserved 27",
"reserved 28", "reserved 28",
"reserved 29", "reserved 29",
"reserved 30", "reserved 30", /* FIXME: The strings between 30-40 might be wrong. */
"reassembly abort: no buffers", "reassembly abort: no buffers",
"receive buffer overflow", "receive buffer overflow",
"change in GFC", "change in GFC",
"receive buffer full", "receive buffer full",
"low priority discard - no receive descriptor", "low priority discard - no receive descriptor",
"low priority discard - missing end of packet", "low priority discard - missing end of packet",
"reserved 37",
"reserved 38",
"reserved 39",
"reseverd 40",
"reserved 41", "reserved 41",
"reserved 42", "reserved 42",
"reserved 43", "reserved 43",
......
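The firestream status-string table above gains the missing entries 37-40; without them every later string was shifted and the table ended before the highest index the hardware can report, so a consumer presumably needs either the full table or an explicit bounds check. A hedged userspace sketch of such a guarded lookup (names and contents illustrative):

        #include <stdio.h>

        static const char *res_strings[] = {
                "reserved 0", "reserved 1",   /* ... remaining entries as in the driver ... */
        };

        static const char *res_string(unsigned int idx)
        {
                if (idx >= sizeof(res_strings) / sizeof(res_strings[0]))
                        return "unknown status";   /* never read past the end */
                return res_strings[idx];
        }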
...@@ -1128,7 +1128,7 @@ static int rx_pkt(struct atm_dev *dev) ...@@ -1128,7 +1128,7 @@ static int rx_pkt(struct atm_dev *dev)
/* make the ptr point to the corresponding buffer desc entry */ /* make the ptr point to the corresponding buffer desc entry */
buf_desc_ptr += desc; buf_desc_ptr += desc;
if (!desc || (desc > iadev->num_rx_desc) || if (!desc || (desc > iadev->num_rx_desc) ||
((buf_desc_ptr->vc_index & 0xffff) > iadev->num_vc)) { ((buf_desc_ptr->vc_index & 0xffff) >= iadev->num_vc)) {
free_desc(dev, desc); free_desc(dev, desc);
IF_ERR(printk("IA: bad descriptor desc = %d \n", desc);) IF_ERR(printk("IA: bad descriptor desc = %d \n", desc);)
return -1; return -1;
......
...@@ -1832,7 +1832,7 @@ EXPORT_SYMBOL(cpufreq_unregister_notifier); ...@@ -1832,7 +1832,7 @@ EXPORT_SYMBOL(cpufreq_unregister_notifier);
unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy, unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
unsigned int target_freq) unsigned int target_freq)
{ {
clamp_val(target_freq, policy->min, policy->max); target_freq = clamp_val(target_freq, policy->min, policy->max);
return cpufreq_driver->fast_switch(policy, target_freq); return cpufreq_driver->fast_switch(policy, target_freq);
} }
......
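The cpufreq_driver_fast_switch() fix is a classic dropped-return-value bug: clamp_val() yields the clamped value and leaves its argument untouched, so the result has to be assigned back. A runnable userspace illustration with the macro re-created locally:

        #include <stdio.h>

        #define clamp_val(val, lo, hi) \
                ((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

        int main(void)
        {
                unsigned int target = 3500000, min = 800000, max = 2400000;

                clamp_val(target, min, max);            /* result discarded: the bug */
                printf("unclamped: %u\n", target);      /* still 3500000 */

                target = clamp_val(target, min, max);   /* the fix: assign it back */
                printf("clamped:   %u\n", target);      /* 2400000 */
                return 0;
        }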
...@@ -449,7 +449,7 @@ static void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy) ...@@ -449,7 +449,7 @@ static void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy)
cpu->acpi_perf_data.states[0].core_frequency = cpu->acpi_perf_data.states[0].core_frequency =
policy->cpuinfo.max_freq / 1000; policy->cpuinfo.max_freq / 1000;
cpu->valid_pss_table = true; cpu->valid_pss_table = true;
pr_info("_PPC limits will be enforced\n"); pr_debug("_PPC limits will be enforced\n");
return; return;
......
...@@ -122,6 +122,7 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req, ...@@ -122,6 +122,7 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm); struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
struct ccp_aes_req_ctx *rctx = ablkcipher_request_ctx(req); struct ccp_aes_req_ctx *rctx = ablkcipher_request_ctx(req);
unsigned int unit; unsigned int unit;
u32 unit_size;
int ret; int ret;
if (!ctx->u.aes.key_len) if (!ctx->u.aes.key_len)
...@@ -133,11 +134,17 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req, ...@@ -133,11 +134,17 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
if (!req->info) if (!req->info)
return -EINVAL; return -EINVAL;
for (unit = 0; unit < ARRAY_SIZE(unit_size_map); unit++) unit_size = CCP_XTS_AES_UNIT_SIZE__LAST;
if (!(req->nbytes & (unit_size_map[unit].size - 1))) if (req->nbytes <= unit_size_map[0].size) {
break; for (unit = 0; unit < ARRAY_SIZE(unit_size_map); unit++) {
if (!(req->nbytes & (unit_size_map[unit].size - 1))) {
unit_size = unit_size_map[unit].value;
break;
}
}
}
if ((unit_size_map[unit].value == CCP_XTS_AES_UNIT_SIZE__LAST) || if ((unit_size == CCP_XTS_AES_UNIT_SIZE__LAST) ||
(ctx->u.aes.key_len != AES_KEYSIZE_128)) { (ctx->u.aes.key_len != AES_KEYSIZE_128)) {
/* Use the fallback to process the request for any /* Use the fallback to process the request for any
* unsupported unit sizes or key sizes * unsupported unit sizes or key sizes
...@@ -158,7 +165,7 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req, ...@@ -158,7 +165,7 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
rctx->cmd.engine = CCP_ENGINE_XTS_AES_128; rctx->cmd.engine = CCP_ENGINE_XTS_AES_128;
rctx->cmd.u.xts.action = (encrypt) ? CCP_AES_ACTION_ENCRYPT rctx->cmd.u.xts.action = (encrypt) ? CCP_AES_ACTION_ENCRYPT
: CCP_AES_ACTION_DECRYPT; : CCP_AES_ACTION_DECRYPT;
rctx->cmd.u.xts.unit_size = unit_size_map[unit].value; rctx->cmd.u.xts.unit_size = unit_size;
rctx->cmd.u.xts.key = &ctx->u.aes.key_sg; rctx->cmd.u.xts.key = &ctx->u.aes.key_sg;
rctx->cmd.u.xts.key_len = ctx->u.aes.key_len; rctx->cmd.u.xts.key_len = ctx->u.aes.key_len;
rctx->cmd.u.xts.iv = &rctx->iv_sg; rctx->cmd.u.xts.iv = &rctx->iv_sg;
......
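The ccp XTS fix separates "which table entry matched" from "what unit size to program": if nothing matches, unit_size stays at the LAST sentinel and the request is routed to the software fallback instead of indexing past the table. A standalone sketch of that lookup, with illustrative sizes rather than the driver's real table:

        #include <stdint.h>

        #define UNIT_SIZE_LAST 0xffffffffu      /* sentinel: no supported size */

        struct unit_size_entry { uint32_t size; uint32_t value; };

        static const struct unit_size_entry unit_size_map[] = {
                { 4096, 3 }, { 2048, 2 }, { 512, 1 }, { 16, 0 },
        };

        static uint32_t pick_unit_size(uint32_t nbytes)
        {
                uint32_t i, unit_size = UNIT_SIZE_LAST;

                /* Only lengths no larger than the biggest entry can match */
                if (nbytes <= unit_size_map[0].size) {
                        for (i = 0; i < sizeof(unit_size_map) / sizeof(unit_size_map[0]); i++) {
                                if (!(nbytes & (unit_size_map[i].size - 1))) {
                                        unit_size = unit_size_map[i].value;
                                        break;
                                }
                        }
                }
                return unit_size;   /* UNIT_SIZE_LAST -> use the fallback cipher */
        }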
...@@ -1986,7 +1986,7 @@ static int omap_sham_probe(struct platform_device *pdev) ...@@ -1986,7 +1986,7 @@ static int omap_sham_probe(struct platform_device *pdev)
&dd->pdata->algs_info[i].algs_list[j]); &dd->pdata->algs_info[i].algs_list[j]);
err_pm: err_pm:
pm_runtime_disable(dev); pm_runtime_disable(dev);
if (dd->polling_mode) if (!dd->polling_mode)
dma_release_channel(dd->dma_lch); dma_release_channel(dd->dma_lch);
data_err: data_err:
dev_err(dev, "initialization failed.\n"); dev_err(dev, "initialization failed.\n");
......
...@@ -33,6 +33,7 @@ ...@@ -33,6 +33,7 @@
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <linux/poll.h> #include <linux/poll.h>
#include <linux/reservation.h> #include <linux/reservation.h>
#include <linux/mm.h>
#include <uapi/linux/dma-buf.h> #include <uapi/linux/dma-buf.h>
...@@ -90,7 +91,7 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma) ...@@ -90,7 +91,7 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
dmabuf = file->private_data; dmabuf = file->private_data;
/* check for overflowing the buffer's size */ /* check for overflowing the buffer's size */
if (vma->vm_pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) > if (vma->vm_pgoff + vma_pages(vma) >
dmabuf->size >> PAGE_SHIFT) dmabuf->size >> PAGE_SHIFT)
return -EINVAL; return -EINVAL;
...@@ -723,11 +724,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, ...@@ -723,11 +724,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
return -EINVAL; return -EINVAL;
/* check for offset overflow */ /* check for offset overflow */
if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) < pgoff) if (pgoff + vma_pages(vma) < pgoff)
return -EOVERFLOW; return -EOVERFLOW;
/* check for overflowing the buffer's size */ /* check for overflowing the buffer's size */
if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) > if (pgoff + vma_pages(vma) >
dmabuf->size >> PAGE_SHIFT) dmabuf->size >> PAGE_SHIFT)
return -EINVAL; return -EINVAL;
......
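Both mmap paths now phrase the window test with vma_pages(); the essential arithmetic is to reject unsigned wrap-around before comparing against the buffer size. The same check in isolation (hypothetical helper):

        #include <stdbool.h>
        #include <stdint.h>

        /* True when npages starting at page offset pgoff fit inside a buffer
         * of buf_pages pages; unsigned wrap-around is rejected first. */
        static bool mmap_window_ok(uint64_t pgoff, uint64_t npages, uint64_t buf_pages)
        {
                if (pgoff + npages < pgoff)     /* offset + length overflowed */
                        return false;
                return pgoff + npages <= buf_pages;
        }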
...@@ -35,6 +35,17 @@ ...@@ -35,6 +35,17 @@
#include <linux/reservation.h> #include <linux/reservation.h>
#include <linux/export.h> #include <linux/export.h>
/**
* DOC: Reservation Object Overview
*
* The reservation object provides a mechanism to manage shared and
* exclusive fences associated with a buffer. A reservation object
 * can have one exclusive fence attached (normally associated with
* write operations) or N shared fences (read operations). The RCU
* mechanism is used to protect read access to fences from locked
* write-side updates.
*/
DEFINE_WW_CLASS(reservation_ww_class); DEFINE_WW_CLASS(reservation_ww_class);
EXPORT_SYMBOL(reservation_ww_class); EXPORT_SYMBOL(reservation_ww_class);
...@@ -43,9 +54,17 @@ EXPORT_SYMBOL(reservation_seqcount_class); ...@@ -43,9 +54,17 @@ EXPORT_SYMBOL(reservation_seqcount_class);
const char reservation_seqcount_string[] = "reservation_seqcount"; const char reservation_seqcount_string[] = "reservation_seqcount";
EXPORT_SYMBOL(reservation_seqcount_string); EXPORT_SYMBOL(reservation_seqcount_string);
/*
* Reserve space to add a shared fence to a reservation_object, /**
* must be called with obj->lock held. * reservation_object_reserve_shared - Reserve space to add a shared
* fence to a reservation_object.
* @obj: reservation object
*
* Should be called before reservation_object_add_shared_fence(). Must
* be called with obj->lock held.
*
* RETURNS
* Zero for success, or -errno
*/ */
int reservation_object_reserve_shared(struct reservation_object *obj) int reservation_object_reserve_shared(struct reservation_object *obj)
{ {
...@@ -180,7 +199,11 @@ reservation_object_add_shared_replace(struct reservation_object *obj, ...@@ -180,7 +199,11 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
fence_put(old_fence); fence_put(old_fence);
} }
/* /**
* reservation_object_add_shared_fence - Add a fence to a shared slot
* @obj: the reservation object
* @fence: the shared fence to add
*
* Add a fence to a shared slot, obj->lock must be held, and * Add a fence to a shared slot, obj->lock must be held, and
 * reservation_object_reserve_shared() has been called.	 * reservation_object_reserve_shared() has been called.
*/ */
...@@ -200,6 +223,13 @@ void reservation_object_add_shared_fence(struct reservation_object *obj, ...@@ -200,6 +223,13 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
} }
EXPORT_SYMBOL(reservation_object_add_shared_fence); EXPORT_SYMBOL(reservation_object_add_shared_fence);
/**
* reservation_object_add_excl_fence - Add an exclusive fence.
* @obj: the reservation object
* @fence: the shared fence to add
*
* Add a fence to the exclusive slot. The obj->lock must be held.
*/
void reservation_object_add_excl_fence(struct reservation_object *obj, void reservation_object_add_excl_fence(struct reservation_object *obj,
struct fence *fence) struct fence *fence)
{ {
...@@ -233,6 +263,18 @@ void reservation_object_add_excl_fence(struct reservation_object *obj, ...@@ -233,6 +263,18 @@ void reservation_object_add_excl_fence(struct reservation_object *obj,
} }
EXPORT_SYMBOL(reservation_object_add_excl_fence); EXPORT_SYMBOL(reservation_object_add_excl_fence);
/**
* reservation_object_get_fences_rcu - Get an object's shared and exclusive
* fences without update side lock held
* @obj: the reservation object
* @pfence_excl: the returned exclusive fence (or NULL)
* @pshared_count: the number of shared fences returned
* @pshared: the array of shared fence ptrs returned (array is krealloc'd to
* the required size, and must be freed by caller)
*
* RETURNS
* Zero or -errno
*/
int reservation_object_get_fences_rcu(struct reservation_object *obj, int reservation_object_get_fences_rcu(struct reservation_object *obj,
struct fence **pfence_excl, struct fence **pfence_excl,
unsigned *pshared_count, unsigned *pshared_count,
...@@ -319,6 +361,18 @@ int reservation_object_get_fences_rcu(struct reservation_object *obj, ...@@ -319,6 +361,18 @@ int reservation_object_get_fences_rcu(struct reservation_object *obj,
} }
EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu); EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu);
/**
 * reservation_object_wait_timeout_rcu - Wait on a reservation object's
* shared and/or exclusive fences.
* @obj: the reservation object
* @wait_all: if true, wait on all fences, else wait on just exclusive fence
* @intr: if true, do interruptible wait
* @timeout: timeout value in jiffies or zero to return immediately
*
* RETURNS
* Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
 * greater than zero on success.
*/
long reservation_object_wait_timeout_rcu(struct reservation_object *obj, long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
bool wait_all, bool intr, bool wait_all, bool intr,
unsigned long timeout) unsigned long timeout)
...@@ -416,6 +470,16 @@ reservation_object_test_signaled_single(struct fence *passed_fence) ...@@ -416,6 +470,16 @@ reservation_object_test_signaled_single(struct fence *passed_fence)
return ret; return ret;
} }
/**
* reservation_object_test_signaled_rcu - Test if a reservation object's
* fences have been signaled.
* @obj: the reservation object
* @test_all: if true, test all fences, otherwise only test the exclusive
* fence
*
* RETURNS
* true if all fences signaled, else false
*/
bool reservation_object_test_signaled_rcu(struct reservation_object *obj, bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
bool test_all) bool test_all)
{ {
......
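Taken together, the kerneldoc added above describes a write side that publishes fences while holding the reservation's ww_mutex and a read side that snapshots them under RCU. A hedged usage sketch built only from the functions documented in this patch (error handling trimmed, obj and fence assumed to exist):

        /* Writer: reserve a shared slot, then publish the fence, obj->lock held. */
        ww_mutex_lock(&obj->lock, NULL);
        if (!reservation_object_reserve_shared(obj))
                reservation_object_add_shared_fence(obj, fence);
        ww_mutex_unlock(&obj->lock);

        /* Reader: lockless; poll first, then wait up to 100 ms on all fences. */
        if (!reservation_object_test_signaled_rcu(obj, true))
                reservation_object_wait_timeout_rcu(obj, true, true,
                                                    msecs_to_jiffies(100));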
...@@ -29,7 +29,6 @@ ...@@ -29,7 +29,6 @@
#include <mach/hardware.h> #include <mach/hardware.h>
#include <mach/platform.h> #include <mach/platform.h>
#include <mach/irqs.h>
#define LPC32XX_GPIO_P3_INP_STATE _GPREG(0x000) #define LPC32XX_GPIO_P3_INP_STATE _GPREG(0x000)
#define LPC32XX_GPIO_P3_OUTP_SET _GPREG(0x004) #define LPC32XX_GPIO_P3_OUTP_SET _GPREG(0x004)
...@@ -371,61 +370,16 @@ static int lpc32xx_gpio_request(struct gpio_chip *chip, unsigned pin) ...@@ -371,61 +370,16 @@ static int lpc32xx_gpio_request(struct gpio_chip *chip, unsigned pin)
static int lpc32xx_gpio_to_irq_p01(struct gpio_chip *chip, unsigned offset) static int lpc32xx_gpio_to_irq_p01(struct gpio_chip *chip, unsigned offset)
{ {
return IRQ_LPC32XX_P0_P1_IRQ; return -ENXIO;
} }
static const char lpc32xx_gpio_to_irq_gpio_p3_table[] = {
IRQ_LPC32XX_GPIO_00,
IRQ_LPC32XX_GPIO_01,
IRQ_LPC32XX_GPIO_02,
IRQ_LPC32XX_GPIO_03,
IRQ_LPC32XX_GPIO_04,
IRQ_LPC32XX_GPIO_05,
};
static int lpc32xx_gpio_to_irq_gpio_p3(struct gpio_chip *chip, unsigned offset) static int lpc32xx_gpio_to_irq_gpio_p3(struct gpio_chip *chip, unsigned offset)
{ {
if (offset < ARRAY_SIZE(lpc32xx_gpio_to_irq_gpio_p3_table))
return lpc32xx_gpio_to_irq_gpio_p3_table[offset];
return -ENXIO; return -ENXIO;
} }
static const char lpc32xx_gpio_to_irq_gpi_p3_table[] = {
IRQ_LPC32XX_GPI_00,
IRQ_LPC32XX_GPI_01,
IRQ_LPC32XX_GPI_02,
IRQ_LPC32XX_GPI_03,
IRQ_LPC32XX_GPI_04,
IRQ_LPC32XX_GPI_05,
IRQ_LPC32XX_GPI_06,
IRQ_LPC32XX_GPI_07,
IRQ_LPC32XX_GPI_08,
IRQ_LPC32XX_GPI_09,
-ENXIO, /* 10 */
-ENXIO, /* 11 */
-ENXIO, /* 12 */
-ENXIO, /* 13 */
-ENXIO, /* 14 */
-ENXIO, /* 15 */
-ENXIO, /* 16 */
-ENXIO, /* 17 */
-ENXIO, /* 18 */
IRQ_LPC32XX_GPI_19,
-ENXIO, /* 20 */
-ENXIO, /* 21 */
-ENXIO, /* 22 */
-ENXIO, /* 23 */
-ENXIO, /* 24 */
-ENXIO, /* 25 */
-ENXIO, /* 26 */
-ENXIO, /* 27 */
IRQ_LPC32XX_GPI_28,
};
static int lpc32xx_gpio_to_irq_gpi_p3(struct gpio_chip *chip, unsigned offset) static int lpc32xx_gpio_to_irq_gpi_p3(struct gpio_chip *chip, unsigned offset)
{ {
if (offset < ARRAY_SIZE(lpc32xx_gpio_to_irq_gpi_p3_table))
return lpc32xx_gpio_to_irq_gpi_p3_table[offset];
return -ENXIO; return -ENXIO;
} }
......
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
#include <linux/cdev.h> #include <linux/cdev.h>
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/uaccess.h> #include <linux/uaccess.h>
#include <linux/compat.h>
#include <uapi/linux/gpio.h> #include <uapi/linux/gpio.h>
#include "gpiolib.h" #include "gpiolib.h"
...@@ -316,7 +317,7 @@ static long gpio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) ...@@ -316,7 +317,7 @@ static long gpio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{ {
struct gpio_device *gdev = filp->private_data; struct gpio_device *gdev = filp->private_data;
struct gpio_chip *chip = gdev->chip; struct gpio_chip *chip = gdev->chip;
int __user *ip = (int __user *)arg; void __user *ip = (void __user *)arg;
/* We fail any subsequent ioctl():s when the chip is gone */ /* We fail any subsequent ioctl():s when the chip is gone */
if (!chip) if (!chip)
...@@ -388,6 +389,14 @@ static long gpio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) ...@@ -388,6 +389,14 @@ static long gpio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
return -EINVAL; return -EINVAL;
} }
#ifdef CONFIG_COMPAT
static long gpio_ioctl_compat(struct file *filp, unsigned int cmd,
unsigned long arg)
{
return gpio_ioctl(filp, cmd, (unsigned long)compat_ptr(arg));
}
#endif
/** /**
* gpio_chrdev_open() - open the chardev for ioctl operations * gpio_chrdev_open() - open the chardev for ioctl operations
* @inode: inode for this chardev * @inode: inode for this chardev
...@@ -431,7 +440,9 @@ static const struct file_operations gpio_fileops = { ...@@ -431,7 +440,9 @@ static const struct file_operations gpio_fileops = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
.llseek = noop_llseek, .llseek = noop_llseek,
.unlocked_ioctl = gpio_ioctl, .unlocked_ioctl = gpio_ioctl,
.compat_ioctl = gpio_ioctl, #ifdef CONFIG_COMPAT
.compat_ioctl = gpio_ioctl_compat,
#endif
}; };
static void gpiodevice_release(struct device *dev) static void gpiodevice_release(struct device *dev)
...@@ -618,6 +629,8 @@ int gpiochip_add_data(struct gpio_chip *chip, void *data) ...@@ -618,6 +629,8 @@ int gpiochip_add_data(struct gpio_chip *chip, void *data)
goto err_free_label; goto err_free_label;
} }
spin_unlock_irqrestore(&gpio_lock, flags);
for (i = 0; i < chip->ngpio; i++) { for (i = 0; i < chip->ngpio; i++) {
struct gpio_desc *desc = &gdev->descs[i]; struct gpio_desc *desc = &gdev->descs[i];
...@@ -649,8 +662,6 @@ int gpiochip_add_data(struct gpio_chip *chip, void *data) ...@@ -649,8 +662,6 @@ int gpiochip_add_data(struct gpio_chip *chip, void *data)
} }
} }
spin_unlock_irqrestore(&gpio_lock, flags);
#ifdef CONFIG_PINCTRL #ifdef CONFIG_PINCTRL
INIT_LIST_HEAD(&gdev->pin_ranges); INIT_LIST_HEAD(&gdev->pin_ranges);
#endif #endif
...@@ -1356,10 +1367,13 @@ static int __gpiod_request(struct gpio_desc *desc, const char *label) ...@@ -1356,10 +1367,13 @@ static int __gpiod_request(struct gpio_desc *desc, const char *label)
/* /*
* This descriptor validation needs to be inserted verbatim into each * This descriptor validation needs to be inserted verbatim into each
* function taking a descriptor, so we need to use a preprocessor * function taking a descriptor, so we need to use a preprocessor
* macro to avoid endless duplication. * macro to avoid endless duplication. If the desc is NULL it is an
* optional GPIO and calls should just bail out.
*/ */
#define VALIDATE_DESC(desc) do { \ #define VALIDATE_DESC(desc) do { \
if (!desc || !desc->gdev) { \ if (!desc) \
return 0; \
if (!desc->gdev) { \
pr_warn("%s: invalid GPIO\n", __func__); \ pr_warn("%s: invalid GPIO\n", __func__); \
return -EINVAL; \ return -EINVAL; \
} \ } \
...@@ -1370,7 +1384,9 @@ static int __gpiod_request(struct gpio_desc *desc, const char *label) ...@@ -1370,7 +1384,9 @@ static int __gpiod_request(struct gpio_desc *desc, const char *label)
} } while (0) } } while (0)
#define VALIDATE_DESC_VOID(desc) do { \ #define VALIDATE_DESC_VOID(desc) do { \
if (!desc || !desc->gdev) { \ if (!desc) \
return; \
if (!desc->gdev) { \
pr_warn("%s: invalid GPIO\n", __func__); \ pr_warn("%s: invalid GPIO\n", __func__); \
return; \ return; \
} \ } \
...@@ -2066,17 +2082,30 @@ EXPORT_SYMBOL_GPL(gpiod_to_irq); ...@@ -2066,17 +2082,30 @@ EXPORT_SYMBOL_GPL(gpiod_to_irq);
*/ */
int gpiochip_lock_as_irq(struct gpio_chip *chip, unsigned int offset) int gpiochip_lock_as_irq(struct gpio_chip *chip, unsigned int offset)
{ {
if (offset >= chip->ngpio) struct gpio_desc *desc;
return -EINVAL;
desc = gpiochip_get_desc(chip, offset);
if (IS_ERR(desc))
return PTR_ERR(desc);
/* Flush direction if something changed behind our back */
if (chip->get_direction) {
int dir = chip->get_direction(chip, offset);
if (dir)
clear_bit(FLAG_IS_OUT, &desc->flags);
else
set_bit(FLAG_IS_OUT, &desc->flags);
}
if (test_bit(FLAG_IS_OUT, &chip->gpiodev->descs[offset].flags)) { if (test_bit(FLAG_IS_OUT, &desc->flags)) {
chip_err(chip, chip_err(chip,
"%s: tried to flag a GPIO set as output for IRQ\n", "%s: tried to flag a GPIO set as output for IRQ\n",
__func__); __func__);
return -EIO; return -EIO;
} }
set_bit(FLAG_USED_AS_IRQ, &chip->gpiodev->descs[offset].flags); set_bit(FLAG_USED_AS_IRQ, &desc->flags);
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(gpiochip_lock_as_irq); EXPORT_SYMBOL_GPL(gpiochip_lock_as_irq);
......
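The VALIDATE_DESC change encodes the "optional GPIO" contract spelled out in the comment: a NULL descriptor is not an error, so accessors silently do nothing instead of warning. A short consumer-side sketch of why that matters (the device and "led" name are hypothetical):

        /* An optional GPIO may legitimately be absent; the *_optional getter
         * then returns NULL rather than an error pointer. */
        struct gpio_desc *led = devm_gpiod_get_optional(dev, "led", GPIOD_OUT_LOW);
        if (IS_ERR(led))
                return PTR_ERR(led);

        /* With the VALIDATE_DESC change this is a quiet no-op when led == NULL. */
        gpiod_set_value(led, 1);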
...@@ -33,8 +33,17 @@ ...@@ -33,8 +33,17 @@
* *
*/ */
static void hdlcd_crtc_cleanup(struct drm_crtc *crtc)
{
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
/* stop the controller on cleanup */
hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 0);
drm_crtc_cleanup(crtc);
}
static const struct drm_crtc_funcs hdlcd_crtc_funcs = { static const struct drm_crtc_funcs hdlcd_crtc_funcs = {
.destroy = drm_crtc_cleanup, .destroy = hdlcd_crtc_cleanup,
.set_config = drm_atomic_helper_set_config, .set_config = drm_atomic_helper_set_config,
.page_flip = drm_atomic_helper_page_flip, .page_flip = drm_atomic_helper_page_flip,
.reset = drm_atomic_helper_crtc_reset, .reset = drm_atomic_helper_crtc_reset,
...@@ -97,7 +106,7 @@ static void hdlcd_crtc_mode_set_nofb(struct drm_crtc *crtc) ...@@ -97,7 +106,7 @@ static void hdlcd_crtc_mode_set_nofb(struct drm_crtc *crtc)
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc); struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
struct drm_display_mode *m = &crtc->state->adjusted_mode; struct drm_display_mode *m = &crtc->state->adjusted_mode;
struct videomode vm; struct videomode vm;
unsigned int polarities, line_length, err; unsigned int polarities, err;
vm.vfront_porch = m->crtc_vsync_start - m->crtc_vdisplay; vm.vfront_porch = m->crtc_vsync_start - m->crtc_vdisplay;
vm.vback_porch = m->crtc_vtotal - m->crtc_vsync_end; vm.vback_porch = m->crtc_vtotal - m->crtc_vsync_end;
...@@ -113,23 +122,18 @@ static void hdlcd_crtc_mode_set_nofb(struct drm_crtc *crtc) ...@@ -113,23 +122,18 @@ static void hdlcd_crtc_mode_set_nofb(struct drm_crtc *crtc)
if (m->flags & DRM_MODE_FLAG_PVSYNC) if (m->flags & DRM_MODE_FLAG_PVSYNC)
polarities |= HDLCD_POLARITY_VSYNC; polarities |= HDLCD_POLARITY_VSYNC;
line_length = crtc->primary->state->fb->pitches[0];
/* Allow max number of outstanding requests and largest burst size */ /* Allow max number of outstanding requests and largest burst size */
hdlcd_write(hdlcd, HDLCD_REG_BUS_OPTIONS, hdlcd_write(hdlcd, HDLCD_REG_BUS_OPTIONS,
HDLCD_BUS_MAX_OUTSTAND | HDLCD_BUS_BURST_16); HDLCD_BUS_MAX_OUTSTAND | HDLCD_BUS_BURST_16);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_LENGTH, line_length);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_PITCH, line_length);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_COUNT, m->crtc_vdisplay - 1);
hdlcd_write(hdlcd, HDLCD_REG_V_DATA, m->crtc_vdisplay - 1); hdlcd_write(hdlcd, HDLCD_REG_V_DATA, m->crtc_vdisplay - 1);
hdlcd_write(hdlcd, HDLCD_REG_V_BACK_PORCH, vm.vback_porch - 1); hdlcd_write(hdlcd, HDLCD_REG_V_BACK_PORCH, vm.vback_porch - 1);
hdlcd_write(hdlcd, HDLCD_REG_V_FRONT_PORCH, vm.vfront_porch - 1); hdlcd_write(hdlcd, HDLCD_REG_V_FRONT_PORCH, vm.vfront_porch - 1);
hdlcd_write(hdlcd, HDLCD_REG_V_SYNC, vm.vsync_len - 1); hdlcd_write(hdlcd, HDLCD_REG_V_SYNC, vm.vsync_len - 1);
hdlcd_write(hdlcd, HDLCD_REG_H_DATA, m->crtc_hdisplay - 1);
hdlcd_write(hdlcd, HDLCD_REG_H_BACK_PORCH, vm.hback_porch - 1); hdlcd_write(hdlcd, HDLCD_REG_H_BACK_PORCH, vm.hback_porch - 1);
hdlcd_write(hdlcd, HDLCD_REG_H_FRONT_PORCH, vm.hfront_porch - 1); hdlcd_write(hdlcd, HDLCD_REG_H_FRONT_PORCH, vm.hfront_porch - 1);
hdlcd_write(hdlcd, HDLCD_REG_H_SYNC, vm.hsync_len - 1); hdlcd_write(hdlcd, HDLCD_REG_H_SYNC, vm.hsync_len - 1);
hdlcd_write(hdlcd, HDLCD_REG_H_DATA, m->crtc_hdisplay - 1);
hdlcd_write(hdlcd, HDLCD_REG_POLARITIES, polarities); hdlcd_write(hdlcd, HDLCD_REG_POLARITIES, polarities);
err = hdlcd_set_pxl_fmt(crtc); err = hdlcd_set_pxl_fmt(crtc);
...@@ -144,20 +148,19 @@ static void hdlcd_crtc_enable(struct drm_crtc *crtc) ...@@ -144,20 +148,19 @@ static void hdlcd_crtc_enable(struct drm_crtc *crtc)
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc); struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
clk_prepare_enable(hdlcd->clk); clk_prepare_enable(hdlcd->clk);
hdlcd_crtc_mode_set_nofb(crtc);
hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 1); hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 1);
drm_crtc_vblank_on(crtc);
} }
static void hdlcd_crtc_disable(struct drm_crtc *crtc) static void hdlcd_crtc_disable(struct drm_crtc *crtc)
{ {
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc); struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
if (!crtc->primary->fb) if (!crtc->state->active)
return; return;
clk_disable_unprepare(hdlcd->clk);
hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 0); hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 0);
drm_crtc_vblank_off(crtc); clk_disable_unprepare(hdlcd->clk);
} }
static int hdlcd_crtc_atomic_check(struct drm_crtc *crtc, static int hdlcd_crtc_atomic_check(struct drm_crtc *crtc,
...@@ -179,20 +182,17 @@ static int hdlcd_crtc_atomic_check(struct drm_crtc *crtc, ...@@ -179,20 +182,17 @@ static int hdlcd_crtc_atomic_check(struct drm_crtc *crtc,
static void hdlcd_crtc_atomic_begin(struct drm_crtc *crtc, static void hdlcd_crtc_atomic_begin(struct drm_crtc *crtc,
struct drm_crtc_state *state) struct drm_crtc_state *state)
{ {
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc); struct drm_pending_vblank_event *event = crtc->state->event;
unsigned long flags;
if (crtc->state->event) {
struct drm_pending_vblank_event *event = crtc->state->event;
if (event) {
crtc->state->event = NULL; crtc->state->event = NULL;
event->pipe = drm_crtc_index(crtc);
WARN_ON(drm_crtc_vblank_get(crtc) != 0);
spin_lock_irqsave(&crtc->dev->event_lock, flags); spin_lock_irq(&crtc->dev->event_lock);
list_add_tail(&event->base.link, &hdlcd->event_list); if (drm_crtc_vblank_get(crtc) == 0)
spin_unlock_irqrestore(&crtc->dev->event_lock, flags); drm_crtc_arm_vblank_event(crtc, event);
else
drm_crtc_send_vblank_event(crtc, event);
spin_unlock_irq(&crtc->dev->event_lock);
} }
} }
...@@ -225,6 +225,15 @@ static const struct drm_crtc_helper_funcs hdlcd_crtc_helper_funcs = { ...@@ -225,6 +225,15 @@ static const struct drm_crtc_helper_funcs hdlcd_crtc_helper_funcs = {
static int hdlcd_plane_atomic_check(struct drm_plane *plane, static int hdlcd_plane_atomic_check(struct drm_plane *plane,
struct drm_plane_state *state) struct drm_plane_state *state)
{ {
u32 src_w, src_h;
src_w = state->src_w >> 16;
src_h = state->src_h >> 16;
/* we can't do any scaling of the plane source */
if ((src_w != state->crtc_w) || (src_h != state->crtc_h))
return -EINVAL;
return 0; return 0;
} }
...@@ -233,20 +242,31 @@ static void hdlcd_plane_atomic_update(struct drm_plane *plane, ...@@ -233,20 +242,31 @@ static void hdlcd_plane_atomic_update(struct drm_plane *plane,
{ {
struct hdlcd_drm_private *hdlcd; struct hdlcd_drm_private *hdlcd;
struct drm_gem_cma_object *gem; struct drm_gem_cma_object *gem;
unsigned int depth, bpp;
u32 src_w, src_h, dest_w, dest_h;
dma_addr_t scanout_start; dma_addr_t scanout_start;
if (!plane->state->crtc || !plane->state->fb) if (!plane->state->fb)
return; return;
hdlcd = crtc_to_hdlcd_priv(plane->state->crtc); drm_fb_get_bpp_depth(plane->state->fb->pixel_format, &depth, &bpp);
src_w = plane->state->src_w >> 16;
src_h = plane->state->src_h >> 16;
dest_w = plane->state->crtc_w;
dest_h = plane->state->crtc_h;
gem = drm_fb_cma_get_gem_obj(plane->state->fb, 0); gem = drm_fb_cma_get_gem_obj(plane->state->fb, 0);
scanout_start = gem->paddr; scanout_start = gem->paddr + plane->state->fb->offsets[0] +
plane->state->crtc_y * plane->state->fb->pitches[0] +
plane->state->crtc_x * bpp / 8;
hdlcd = plane->dev->dev_private;
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_LENGTH, plane->state->fb->pitches[0]);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_PITCH, plane->state->fb->pitches[0]);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_COUNT, dest_h - 1);
hdlcd_write(hdlcd, HDLCD_REG_FB_BASE, scanout_start); hdlcd_write(hdlcd, HDLCD_REG_FB_BASE, scanout_start);
} }
static const struct drm_plane_helper_funcs hdlcd_plane_helper_funcs = { static const struct drm_plane_helper_funcs hdlcd_plane_helper_funcs = {
.prepare_fb = NULL,
.cleanup_fb = NULL,
.atomic_check = hdlcd_plane_atomic_check, .atomic_check = hdlcd_plane_atomic_check,
.atomic_update = hdlcd_plane_atomic_update, .atomic_update = hdlcd_plane_atomic_update,
}; };
...@@ -294,16 +314,6 @@ static struct drm_plane *hdlcd_plane_init(struct drm_device *drm) ...@@ -294,16 +314,6 @@ static struct drm_plane *hdlcd_plane_init(struct drm_device *drm)
return plane; return plane;
} }
void hdlcd_crtc_suspend(struct drm_crtc *crtc)
{
hdlcd_crtc_disable(crtc);
}
void hdlcd_crtc_resume(struct drm_crtc *crtc)
{
hdlcd_crtc_enable(crtc);
}
int hdlcd_setup_crtc(struct drm_device *drm) int hdlcd_setup_crtc(struct drm_device *drm)
{ {
struct hdlcd_drm_private *hdlcd = drm->dev_private; struct hdlcd_drm_private *hdlcd = drm->dev_private;
......
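The new hdlcd plane check leans on DRM's 16.16 fixed-point source coordinates: shifting src_w/src_h right by 16 recovers whole pixels, which must match the CRTC size exactly because the controller cannot scale. The check in isolation, with explanatory comments:

        u32 src_w = state->src_w >> 16;   /* 16.16 fixed point -> pixels */
        u32 src_h = state->src_h >> 16;

        if (src_w != state->crtc_w || src_h != state->crtc_h)
                return -EINVAL;           /* no scaler: sizes must be equal */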
...@@ -49,8 +49,6 @@ static int hdlcd_load(struct drm_device *drm, unsigned long flags) ...@@ -49,8 +49,6 @@ static int hdlcd_load(struct drm_device *drm, unsigned long flags)
atomic_set(&hdlcd->dma_end_count, 0); atomic_set(&hdlcd->dma_end_count, 0);
#endif #endif
INIT_LIST_HEAD(&hdlcd->event_list);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
hdlcd->mmio = devm_ioremap_resource(drm->dev, res); hdlcd->mmio = devm_ioremap_resource(drm->dev, res);
if (IS_ERR(hdlcd->mmio)) { if (IS_ERR(hdlcd->mmio)) {
...@@ -84,11 +82,7 @@ static int hdlcd_load(struct drm_device *drm, unsigned long flags) ...@@ -84,11 +82,7 @@ static int hdlcd_load(struct drm_device *drm, unsigned long flags)
goto setup_fail; goto setup_fail;
} }
pm_runtime_enable(drm->dev);
pm_runtime_get_sync(drm->dev);
ret = drm_irq_install(drm, platform_get_irq(pdev, 0)); ret = drm_irq_install(drm, platform_get_irq(pdev, 0));
pm_runtime_put_sync(drm->dev);
if (ret < 0) { if (ret < 0) {
DRM_ERROR("failed to install IRQ handler\n"); DRM_ERROR("failed to install IRQ handler\n");
goto irq_fail; goto irq_fail;
...@@ -164,24 +158,9 @@ static irqreturn_t hdlcd_irq(int irq, void *arg) ...@@ -164,24 +158,9 @@ static irqreturn_t hdlcd_irq(int irq, void *arg)
atomic_inc(&hdlcd->vsync_count); atomic_inc(&hdlcd->vsync_count);
#endif #endif
if (irq_status & HDLCD_INTERRUPT_VSYNC) { if (irq_status & HDLCD_INTERRUPT_VSYNC)
bool events_sent = false;
unsigned long flags;
struct drm_pending_vblank_event *e, *t;
drm_crtc_handle_vblank(&hdlcd->crtc); drm_crtc_handle_vblank(&hdlcd->crtc);
spin_lock_irqsave(&drm->event_lock, flags);
list_for_each_entry_safe(e, t, &hdlcd->event_list, base.link) {
list_del(&e->base.link);
drm_crtc_send_vblank_event(&hdlcd->crtc, e);
events_sent = true;
}
if (events_sent)
drm_crtc_vblank_put(&hdlcd->crtc);
spin_unlock_irqrestore(&drm->event_lock, flags);
}
/* acknowledge interrupt(s) */ /* acknowledge interrupt(s) */
hdlcd_write(hdlcd, HDLCD_REG_INT_CLEAR, irq_status); hdlcd_write(hdlcd, HDLCD_REG_INT_CLEAR, irq_status);
...@@ -275,6 +254,7 @@ static int hdlcd_show_pxlclock(struct seq_file *m, void *arg) ...@@ -275,6 +254,7 @@ static int hdlcd_show_pxlclock(struct seq_file *m, void *arg)
static struct drm_info_list hdlcd_debugfs_list[] = { static struct drm_info_list hdlcd_debugfs_list[] = {
{ "interrupt_count", hdlcd_show_underrun_count, 0 }, { "interrupt_count", hdlcd_show_underrun_count, 0 },
{ "clocks", hdlcd_show_pxlclock, 0 }, { "clocks", hdlcd_show_pxlclock, 0 },
{ "fb", drm_fb_cma_debugfs_show, 0 },
}; };
static int hdlcd_debugfs_init(struct drm_minor *minor) static int hdlcd_debugfs_init(struct drm_minor *minor)
...@@ -357,6 +337,8 @@ static int hdlcd_drm_bind(struct device *dev) ...@@ -357,6 +337,8 @@ static int hdlcd_drm_bind(struct device *dev)
return -ENOMEM; return -ENOMEM;
drm->dev_private = hdlcd; drm->dev_private = hdlcd;
dev_set_drvdata(dev, drm);
hdlcd_setup_mode_config(drm); hdlcd_setup_mode_config(drm);
ret = hdlcd_load(drm, 0); ret = hdlcd_load(drm, 0);
if (ret) if (ret)
...@@ -366,14 +348,18 @@ static int hdlcd_drm_bind(struct device *dev) ...@@ -366,14 +348,18 @@ static int hdlcd_drm_bind(struct device *dev)
if (ret) if (ret)
goto err_unload; goto err_unload;
dev_set_drvdata(dev, drm);
ret = component_bind_all(dev, drm); ret = component_bind_all(dev, drm);
if (ret) { if (ret) {
DRM_ERROR("Failed to bind all components\n"); DRM_ERROR("Failed to bind all components\n");
goto err_unregister; goto err_unregister;
} }
ret = pm_runtime_set_active(dev);
if (ret)
goto err_pm_active;
pm_runtime_enable(dev);
ret = drm_vblank_init(drm, drm->mode_config.num_crtc); ret = drm_vblank_init(drm, drm->mode_config.num_crtc);
if (ret < 0) { if (ret < 0) {
DRM_ERROR("failed to initialise vblank\n"); DRM_ERROR("failed to initialise vblank\n");
...@@ -399,16 +385,16 @@ static int hdlcd_drm_bind(struct device *dev) ...@@ -399,16 +385,16 @@ static int hdlcd_drm_bind(struct device *dev)
drm_mode_config_cleanup(drm); drm_mode_config_cleanup(drm);
drm_vblank_cleanup(drm); drm_vblank_cleanup(drm);
err_vblank: err_vblank:
pm_runtime_disable(drm->dev);
err_pm_active:
component_unbind_all(dev, drm); component_unbind_all(dev, drm);
err_unregister: err_unregister:
drm_dev_unregister(drm); drm_dev_unregister(drm);
err_unload: err_unload:
pm_runtime_get_sync(drm->dev);
drm_irq_uninstall(drm); drm_irq_uninstall(drm);
pm_runtime_put_sync(drm->dev);
pm_runtime_disable(drm->dev);
of_reserved_mem_device_release(drm->dev); of_reserved_mem_device_release(drm->dev);
err_free: err_free:
dev_set_drvdata(dev, NULL);
drm_dev_unref(drm); drm_dev_unref(drm);
return ret; return ret;
...@@ -495,30 +481,34 @@ MODULE_DEVICE_TABLE(of, hdlcd_of_match); ...@@ -495,30 +481,34 @@ MODULE_DEVICE_TABLE(of, hdlcd_of_match);
static int __maybe_unused hdlcd_pm_suspend(struct device *dev) static int __maybe_unused hdlcd_pm_suspend(struct device *dev)
{ {
struct drm_device *drm = dev_get_drvdata(dev); struct drm_device *drm = dev_get_drvdata(dev);
struct drm_crtc *crtc; struct hdlcd_drm_private *hdlcd = drm ? drm->dev_private : NULL;
if (pm_runtime_suspended(dev)) if (!hdlcd)
return 0; return 0;
drm_modeset_lock_all(drm); drm_kms_helper_poll_disable(drm);
list_for_each_entry(crtc, &drm->mode_config.crtc_list, head)
hdlcd_crtc_suspend(crtc); hdlcd->state = drm_atomic_helper_suspend(drm);
drm_modeset_unlock_all(drm); if (IS_ERR(hdlcd->state)) {
drm_kms_helper_poll_enable(drm);
return PTR_ERR(hdlcd->state);
}
return 0; return 0;
} }
static int __maybe_unused hdlcd_pm_resume(struct device *dev) static int __maybe_unused hdlcd_pm_resume(struct device *dev)
{ {
struct drm_device *drm = dev_get_drvdata(dev); struct drm_device *drm = dev_get_drvdata(dev);
struct drm_crtc *crtc; struct hdlcd_drm_private *hdlcd = drm ? drm->dev_private : NULL;
if (!pm_runtime_suspended(dev)) if (!hdlcd)
return 0; return 0;
drm_modeset_lock_all(drm); drm_atomic_helper_resume(drm, hdlcd->state);
list_for_each_entry(crtc, &drm->mode_config.crtc_list, head) drm_kms_helper_poll_enable(drm);
hdlcd_crtc_resume(crtc); pm_runtime_set_active(dev);
drm_modeset_unlock_all(drm);
return 0; return 0;
} }
......
...@@ -9,10 +9,9 @@ struct hdlcd_drm_private { ...@@ -9,10 +9,9 @@ struct hdlcd_drm_private {
void __iomem *mmio; void __iomem *mmio;
struct clk *clk; struct clk *clk;
struct drm_fbdev_cma *fbdev; struct drm_fbdev_cma *fbdev;
struct drm_framebuffer *fb;
struct list_head event_list;
struct drm_crtc crtc; struct drm_crtc crtc;
struct drm_plane *plane; struct drm_plane *plane;
struct drm_atomic_state *state;
#ifdef CONFIG_DEBUG_FS #ifdef CONFIG_DEBUG_FS
atomic_t buffer_underrun_count; atomic_t buffer_underrun_count;
atomic_t bus_error_count; atomic_t bus_error_count;
...@@ -36,7 +35,5 @@ static inline u32 hdlcd_read(struct hdlcd_drm_private *hdlcd, unsigned int reg) ...@@ -36,7 +35,5 @@ static inline u32 hdlcd_read(struct hdlcd_drm_private *hdlcd, unsigned int reg)
int hdlcd_setup_crtc(struct drm_device *dev); int hdlcd_setup_crtc(struct drm_device *dev);
void hdlcd_set_scanout(struct hdlcd_drm_private *hdlcd); void hdlcd_set_scanout(struct hdlcd_drm_private *hdlcd);
void hdlcd_crtc_suspend(struct drm_crtc *crtc);
void hdlcd_crtc_resume(struct drm_crtc *crtc);
#endif /* __HDLCD_DRV_H__ */ #endif /* __HDLCD_DRV_H__ */
...@@ -391,12 +391,11 @@ void atmel_hlcdc_crtc_reset(struct drm_crtc *crtc) ...@@ -391,12 +391,11 @@ void atmel_hlcdc_crtc_reset(struct drm_crtc *crtc)
{ {
struct atmel_hlcdc_crtc_state *state; struct atmel_hlcdc_crtc_state *state;
if (crtc->state && crtc->state->mode_blob)
drm_property_unreference_blob(crtc->state->mode_blob);
if (crtc->state) { if (crtc->state) {
__drm_atomic_helper_crtc_destroy_state(crtc->state);
state = drm_crtc_state_to_atmel_hlcdc_crtc_state(crtc->state); state = drm_crtc_state_to_atmel_hlcdc_crtc_state(crtc->state);
kfree(state); kfree(state);
crtc->state = NULL;
} }
state = kzalloc(sizeof(*state), GFP_KERNEL); state = kzalloc(sizeof(*state), GFP_KERNEL);
...@@ -415,8 +414,9 @@ atmel_hlcdc_crtc_duplicate_state(struct drm_crtc *crtc) ...@@ -415,8 +414,9 @@ atmel_hlcdc_crtc_duplicate_state(struct drm_crtc *crtc)
return NULL; return NULL;
state = kmalloc(sizeof(*state), GFP_KERNEL); state = kmalloc(sizeof(*state), GFP_KERNEL);
if (state) if (!state)
__drm_atomic_helper_crtc_duplicate_state(crtc, &state->base); return NULL;
__drm_atomic_helper_crtc_duplicate_state(crtc, &state->base);
cur = drm_crtc_state_to_atmel_hlcdc_crtc_state(crtc->state); cur = drm_crtc_state_to_atmel_hlcdc_crtc_state(crtc->state);
state->output_mode = cur->output_mode; state->output_mode = cur->output_mode;
......
...@@ -351,6 +351,8 @@ int drm_atomic_set_mode_prop_for_crtc(struct drm_crtc_state *state, ...@@ -351,6 +351,8 @@ int drm_atomic_set_mode_prop_for_crtc(struct drm_crtc_state *state,
drm_property_unreference_blob(state->mode_blob); drm_property_unreference_blob(state->mode_blob);
state->mode_blob = NULL; state->mode_blob = NULL;
memset(&state->mode, 0, sizeof(state->mode));
if (blob) { if (blob) {
if (blob->length != sizeof(struct drm_mode_modeinfo) || if (blob->length != sizeof(struct drm_mode_modeinfo) ||
drm_mode_convert_umode(&state->mode, drm_mode_convert_umode(&state->mode,
...@@ -363,7 +365,6 @@ int drm_atomic_set_mode_prop_for_crtc(struct drm_crtc_state *state, ...@@ -363,7 +365,6 @@ int drm_atomic_set_mode_prop_for_crtc(struct drm_crtc_state *state,
DRM_DEBUG_ATOMIC("Set [MODE:%s] for CRTC state %p\n", DRM_DEBUG_ATOMIC("Set [MODE:%s] for CRTC state %p\n",
state->mode.name, state); state->mode.name, state);
} else { } else {
memset(&state->mode, 0, sizeof(state->mode));
state->enable = false; state->enable = false;
DRM_DEBUG_ATOMIC("Set [NOMODE] for CRTC state %p\n", DRM_DEBUG_ATOMIC("Set [NOMODE] for CRTC state %p\n",
state); state);
......
...@@ -2821,8 +2821,6 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data, ...@@ -2821,8 +2821,6 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
goto out; goto out;
} }
drm_mode_set_crtcinfo(mode, CRTC_INTERLACE_HALVE_V);
/* /*
* Check whether the primary plane supports the fb pixel format. * Check whether the primary plane supports the fb pixel format.
* Drivers not implementing the universal planes API use a * Drivers not implementing the universal planes API use a
...@@ -4841,7 +4839,8 @@ bool drm_property_change_valid_get(struct drm_property *property, ...@@ -4841,7 +4839,8 @@ bool drm_property_change_valid_get(struct drm_property *property,
if (value == 0) if (value == 0)
return true; return true;
return _object_find(property->dev, value, property->values[0]) != NULL; *ref = _object_find(property->dev, value, property->values[0]);
return *ref != NULL;
} }
for (i = 0; i < property->num_values; i++) for (i = 0; i < property->num_values; i++)
......
...@@ -445,7 +445,7 @@ int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper, ...@@ -445,7 +445,7 @@ int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper,
err_fb_info_destroy: err_fb_info_destroy:
drm_fb_helper_release_fbi(helper); drm_fb_helper_release_fbi(helper);
err_gem_free_object: err_gem_free_object:
dev->driver->gem_free_object(&obj->base); drm_gem_object_unreference_unlocked(&obj->base);
return ret; return ret;
} }
EXPORT_SYMBOL(drm_fbdev_cma_create_with_funcs); EXPORT_SYMBOL(drm_fbdev_cma_create_with_funcs);
......
...@@ -121,7 +121,7 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm, ...@@ -121,7 +121,7 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
return cma_obj; return cma_obj;
error: error:
drm->driver->gem_free_object(&cma_obj->base); drm_gem_object_unreference_unlocked(&cma_obj->base);
return ERR_PTR(ret); return ERR_PTR(ret);
} }
EXPORT_SYMBOL_GPL(drm_gem_cma_create); EXPORT_SYMBOL_GPL(drm_gem_cma_create);
...@@ -162,18 +162,12 @@ drm_gem_cma_create_with_handle(struct drm_file *file_priv, ...@@ -162,18 +162,12 @@ drm_gem_cma_create_with_handle(struct drm_file *file_priv,
* and handle has the id what user can see. * and handle has the id what user can see.
*/ */
ret = drm_gem_handle_create(file_priv, gem_obj, handle); ret = drm_gem_handle_create(file_priv, gem_obj, handle);
if (ret)
goto err_handle_create;
/* drop reference from allocate - handle holds it now. */ /* drop reference from allocate - handle holds it now. */
drm_gem_object_unreference_unlocked(gem_obj); drm_gem_object_unreference_unlocked(gem_obj);
if (ret)
return ERR_PTR(ret);
return cma_obj; return cma_obj;
err_handle_create:
drm->driver->gem_free_object(gem_obj);
return ERR_PTR(ret);
} }
/** /**
......
...@@ -1518,6 +1518,8 @@ int drm_mode_convert_umode(struct drm_display_mode *out, ...@@ -1518,6 +1518,8 @@ int drm_mode_convert_umode(struct drm_display_mode *out,
if (out->status != MODE_OK) if (out->status != MODE_OK)
goto out; goto out;
drm_mode_set_crtcinfo(out, CRTC_INTERLACE_HALVE_V);
ret = 0; ret = 0;
out: out:
......
...@@ -97,8 +97,8 @@ static struct imx_drm_crtc *imx_drm_find_crtc(struct drm_crtc *crtc) ...@@ -97,8 +97,8 @@ static struct imx_drm_crtc *imx_drm_find_crtc(struct drm_crtc *crtc)
return NULL; return NULL;
} }
int imx_drm_set_bus_format_pins(struct drm_encoder *encoder, u32 bus_format, int imx_drm_set_bus_config(struct drm_encoder *encoder, u32 bus_format,
int hsync_pin, int vsync_pin) int hsync_pin, int vsync_pin, u32 bus_flags)
{ {
struct imx_drm_crtc_helper_funcs *helper; struct imx_drm_crtc_helper_funcs *helper;
struct imx_drm_crtc *imx_crtc; struct imx_drm_crtc *imx_crtc;
...@@ -110,14 +110,17 @@ int imx_drm_set_bus_format_pins(struct drm_encoder *encoder, u32 bus_format, ...@@ -110,14 +110,17 @@ int imx_drm_set_bus_format_pins(struct drm_encoder *encoder, u32 bus_format,
helper = &imx_crtc->imx_drm_helper_funcs; helper = &imx_crtc->imx_drm_helper_funcs;
if (helper->set_interface_pix_fmt) if (helper->set_interface_pix_fmt)
return helper->set_interface_pix_fmt(encoder->crtc, return helper->set_interface_pix_fmt(encoder->crtc,
bus_format, hsync_pin, vsync_pin); bus_format, hsync_pin, vsync_pin,
bus_flags);
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(imx_drm_set_bus_format_pins); EXPORT_SYMBOL_GPL(imx_drm_set_bus_config);
int imx_drm_set_bus_format(struct drm_encoder *encoder, u32 bus_format) int imx_drm_set_bus_format(struct drm_encoder *encoder, u32 bus_format)
{ {
return imx_drm_set_bus_format_pins(encoder, bus_format, 2, 3); return imx_drm_set_bus_config(encoder, bus_format, 2, 3,
DRM_BUS_FLAG_DE_HIGH |
DRM_BUS_FLAG_PIXDATA_NEGEDGE);
} }
EXPORT_SYMBOL_GPL(imx_drm_set_bus_format); EXPORT_SYMBOL_GPL(imx_drm_set_bus_format);
......
...@@ -19,7 +19,8 @@ struct imx_drm_crtc_helper_funcs { ...@@ -19,7 +19,8 @@ struct imx_drm_crtc_helper_funcs {
int (*enable_vblank)(struct drm_crtc *crtc); int (*enable_vblank)(struct drm_crtc *crtc);
void (*disable_vblank)(struct drm_crtc *crtc); void (*disable_vblank)(struct drm_crtc *crtc);
int (*set_interface_pix_fmt)(struct drm_crtc *crtc, int (*set_interface_pix_fmt)(struct drm_crtc *crtc,
u32 bus_format, int hsync_pin, int vsync_pin); u32 bus_format, int hsync_pin, int vsync_pin,
u32 bus_flags);
const struct drm_crtc_helper_funcs *crtc_helper_funcs; const struct drm_crtc_helper_funcs *crtc_helper_funcs;
const struct drm_crtc_funcs *crtc_funcs; const struct drm_crtc_funcs *crtc_funcs;
}; };
...@@ -41,8 +42,8 @@ void imx_drm_mode_config_init(struct drm_device *drm); ...@@ -41,8 +42,8 @@ void imx_drm_mode_config_init(struct drm_device *drm);
struct drm_gem_cma_object *imx_drm_fb_get_obj(struct drm_framebuffer *fb); struct drm_gem_cma_object *imx_drm_fb_get_obj(struct drm_framebuffer *fb);
int imx_drm_set_bus_format_pins(struct drm_encoder *encoder, int imx_drm_set_bus_config(struct drm_encoder *encoder, u32 bus_format,
u32 bus_format, int hsync_pin, int vsync_pin); int hsync_pin, int vsync_pin, u32 bus_flags);
int imx_drm_set_bus_format(struct drm_encoder *encoder, int imx_drm_set_bus_format(struct drm_encoder *encoder,
u32 bus_format); u32 bus_format);
......
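With the imx-drm interface change, callers hand the data-enable and pixel-clock polarity to the CRTC explicitly instead of relying on hard-coded defaults. An illustrative call (format and pin numbers are examples, not taken from a specific board):

        /* RGB888 parallel bus, HSYNC on DI pin 2, VSYNC on pin 3, data enable
         * active-high, pixel data driven on the falling clock edge. */
        imx_drm_set_bus_config(encoder, MEDIA_BUS_FMT_RGB888_1X24, 2, 3,
                               DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_NEGEDGE);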
...@@ -25,6 +25,7 @@ ...@@ -25,6 +25,7 @@
#include <linux/mfd/syscon/imx6q-iomuxc-gpr.h> #include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
#include <linux/of_device.h> #include <linux/of_device.h>
#include <linux/of_graph.h> #include <linux/of_graph.h>
#include <video/of_display_timing.h>
#include <video/of_videomode.h> #include <video/of_videomode.h>
#include <linux/regmap.h> #include <linux/regmap.h>
#include <linux/videodev2.h> #include <linux/videodev2.h>
...@@ -59,6 +60,7 @@ struct imx_ldb_channel { ...@@ -59,6 +60,7 @@ struct imx_ldb_channel {
struct drm_encoder encoder; struct drm_encoder encoder;
struct drm_panel *panel; struct drm_panel *panel;
struct device_node *child; struct device_node *child;
struct i2c_adapter *ddc;
int chno; int chno;
void *edid; void *edid;
int edid_len; int edid_len;
...@@ -107,6 +109,9 @@ static int imx_ldb_connector_get_modes(struct drm_connector *connector) ...@@ -107,6 +109,9 @@ static int imx_ldb_connector_get_modes(struct drm_connector *connector)
return num_modes; return num_modes;
} }
if (!imx_ldb_ch->edid && imx_ldb_ch->ddc)
imx_ldb_ch->edid = drm_get_edid(connector, imx_ldb_ch->ddc);
if (imx_ldb_ch->edid) { if (imx_ldb_ch->edid) {
drm_mode_connector_update_edid_property(connector, drm_mode_connector_update_edid_property(connector,
imx_ldb_ch->edid); imx_ldb_ch->edid);
...@@ -553,7 +558,8 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data) ...@@ -553,7 +558,8 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data)
for_each_child_of_node(np, child) { for_each_child_of_node(np, child) {
struct imx_ldb_channel *channel; struct imx_ldb_channel *channel;
struct device_node *port; struct device_node *ddc_node;
struct device_node *ep;
ret = of_property_read_u32(child, "reg", &i); ret = of_property_read_u32(child, "reg", &i);
if (ret || i < 0 || i > 1) if (ret || i < 0 || i > 1)
...@@ -576,33 +582,54 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data) ...@@ -576,33 +582,54 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data)
* The output port is port@4 with an external 4-port mux or * The output port is port@4 with an external 4-port mux or
* port@2 with the internal 2-port mux. * port@2 with the internal 2-port mux.
*/ */
port = of_graph_get_port_by_id(child, imx_ldb->lvds_mux ? 4 : 2); ep = of_graph_get_endpoint_by_regs(child,
if (port) { imx_ldb->lvds_mux ? 4 : 2,
struct device_node *endpoint, *remote; -1);
if (ep) {
endpoint = of_get_child_by_name(port, "endpoint"); struct device_node *remote;
if (endpoint) {
remote = of_graph_get_remote_port_parent(endpoint); remote = of_graph_get_remote_port_parent(ep);
if (remote) of_node_put(ep);
channel->panel = of_drm_find_panel(remote); if (remote)
else channel->panel = of_drm_find_panel(remote);
return -EPROBE_DEFER; else
if (!channel->panel) { return -EPROBE_DEFER;
dev_err(dev, "panel not found: %s\n", of_node_put(remote);
remote->full_name); if (!channel->panel) {
return -EPROBE_DEFER; dev_err(dev, "panel not found: %s\n",
} remote->full_name);
return -EPROBE_DEFER;
} }
} }
edidp = of_get_property(child, "edid", &channel->edid_len); ddc_node = of_parse_phandle(child, "ddc-i2c-bus", 0);
if (edidp) { if (ddc_node) {
channel->edid = kmemdup(edidp, channel->edid_len, channel->ddc = of_find_i2c_adapter_by_node(ddc_node);
GFP_KERNEL); of_node_put(ddc_node);
} else if (!channel->panel) { if (!channel->ddc) {
ret = of_get_drm_display_mode(child, &channel->mode, 0); dev_warn(dev, "failed to get ddc i2c adapter\n");
if (!ret) return -EPROBE_DEFER;
channel->mode_valid = 1; }
}
if (!channel->ddc) {
/* if no DDC available, fallback to hardcoded EDID */
dev_dbg(dev, "no ddc available\n");
edidp = of_get_property(child, "edid",
&channel->edid_len);
if (edidp) {
channel->edid = kmemdup(edidp,
channel->edid_len,
GFP_KERNEL);
} else if (!channel->panel) {
/* fallback to display-timings node */
ret = of_get_drm_display_mode(child,
&channel->mode,
OF_USE_NATIVE_MODE);
if (!ret)
channel->mode_valid = 1;
}
} }
channel->bus_format = of_get_bus_format(dev, child); channel->bus_format = of_get_bus_format(dev, child);
...@@ -647,6 +674,7 @@ static void imx_ldb_unbind(struct device *dev, struct device *master, ...@@ -647,6 +674,7 @@ static void imx_ldb_unbind(struct device *dev, struct device *master,
channel->encoder.funcs->destroy(&channel->encoder); channel->encoder.funcs->destroy(&channel->encoder);
kfree(channel->edid); kfree(channel->edid);
i2c_put_adapter(channel->ddc);
} }
} }
......
...@@ -294,8 +294,10 @@ static void imx_tve_encoder_prepare(struct drm_encoder *encoder) ...@@ -294,8 +294,10 @@ static void imx_tve_encoder_prepare(struct drm_encoder *encoder)
switch (tve->mode) { switch (tve->mode) {
case TVE_MODE_VGA: case TVE_MODE_VGA:
imx_drm_set_bus_format_pins(encoder, MEDIA_BUS_FMT_GBR888_1X24, imx_drm_set_bus_config(encoder, MEDIA_BUS_FMT_GBR888_1X24,
tve->hsync_pin, tve->vsync_pin); tve->hsync_pin, tve->vsync_pin,
DRM_BUS_FLAG_DE_HIGH |
DRM_BUS_FLAG_PIXDATA_NEGEDGE);
break; break;
case TVE_MODE_TVOUT: case TVE_MODE_TVOUT:
imx_drm_set_bus_format(encoder, MEDIA_BUS_FMT_YUV8_1X24); imx_drm_set_bus_format(encoder, MEDIA_BUS_FMT_YUV8_1X24);
......
...@@ -66,6 +66,7 @@ struct ipu_crtc { ...@@ -66,6 +66,7 @@ struct ipu_crtc {
struct ipu_flip_work *flip_work; struct ipu_flip_work *flip_work;
int irq; int irq;
u32 bus_format; u32 bus_format;
u32 bus_flags;
int di_hsync_pin; int di_hsync_pin;
int di_vsync_pin; int di_vsync_pin;
}; };
...@@ -271,8 +272,10 @@ static int ipu_crtc_mode_set(struct drm_crtc *crtc, ...@@ -271,8 +272,10 @@ static int ipu_crtc_mode_set(struct drm_crtc *crtc,
else else
sig_cfg.clkflags = 0; sig_cfg.clkflags = 0;
sig_cfg.enable_pol = 1; sig_cfg.enable_pol = !(ipu_crtc->bus_flags & DRM_BUS_FLAG_DE_LOW);
sig_cfg.clk_pol = 0; /* Default to driving pixel data on negative clock edges */
sig_cfg.clk_pol = !!(ipu_crtc->bus_flags &
DRM_BUS_FLAG_PIXDATA_POSEDGE);
sig_cfg.bus_format = ipu_crtc->bus_format; sig_cfg.bus_format = ipu_crtc->bus_format;
sig_cfg.v_to_h_sync = 0; sig_cfg.v_to_h_sync = 0;
sig_cfg.hsync_pin = ipu_crtc->di_hsync_pin; sig_cfg.hsync_pin = ipu_crtc->di_hsync_pin;
...@@ -396,11 +399,12 @@ static void ipu_disable_vblank(struct drm_crtc *crtc) ...@@ -396,11 +399,12 @@ static void ipu_disable_vblank(struct drm_crtc *crtc)
} }
static int ipu_set_interface_pix_fmt(struct drm_crtc *crtc, static int ipu_set_interface_pix_fmt(struct drm_crtc *crtc,
u32 bus_format, int hsync_pin, int vsync_pin) u32 bus_format, int hsync_pin, int vsync_pin, u32 bus_flags)
{ {
struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc); struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);
ipu_crtc->bus_format = bus_format; ipu_crtc->bus_format = bus_format;
ipu_crtc->bus_flags = bus_flags;
ipu_crtc->di_hsync_pin = hsync_pin; ipu_crtc->di_hsync_pin = hsync_pin;
ipu_crtc->di_vsync_pin = vsync_pin; ipu_crtc->di_vsync_pin = vsync_pin;
......
...@@ -38,6 +38,8 @@ static const uint32_t ipu_plane_formats[] = { ...@@ -38,6 +38,8 @@ static const uint32_t ipu_plane_formats[] = {
DRM_FORMAT_RGBX8888, DRM_FORMAT_RGBX8888,
DRM_FORMAT_BGRA8888, DRM_FORMAT_BGRA8888,
DRM_FORMAT_BGRA8888, DRM_FORMAT_BGRA8888,
DRM_FORMAT_UYVY,
DRM_FORMAT_VYUY,
DRM_FORMAT_YUYV, DRM_FORMAT_YUYV,
DRM_FORMAT_YVYU, DRM_FORMAT_YVYU,
DRM_FORMAT_YUV420, DRM_FORMAT_YUV420,
...@@ -428,7 +430,6 @@ static int ipu_update_plane(struct drm_plane *plane, struct drm_crtc *crtc, ...@@ -428,7 +430,6 @@ static int ipu_update_plane(struct drm_plane *plane, struct drm_crtc *crtc,
if (crtc != plane->crtc) if (crtc != plane->crtc)
dev_dbg(plane->dev->dev, "crtc change: %p -> %p\n", dev_dbg(plane->dev->dev, "crtc change: %p -> %p\n",
plane->crtc, crtc); plane->crtc, crtc);
plane->crtc = crtc;
if (!ipu_plane->enabled) if (!ipu_plane->enabled)
ipu_plane_enable(ipu_plane); ipu_plane_enable(ipu_plane);
...@@ -461,7 +462,7 @@ static void ipu_plane_destroy(struct drm_plane *plane) ...@@ -461,7 +462,7 @@ static void ipu_plane_destroy(struct drm_plane *plane)
kfree(ipu_plane); kfree(ipu_plane);
} }
static struct drm_plane_funcs ipu_plane_funcs = { static const struct drm_plane_funcs ipu_plane_funcs = {
.update_plane = ipu_update_plane, .update_plane = ipu_update_plane,
.disable_plane = ipu_disable_plane, .disable_plane = ipu_disable_plane,
.destroy = ipu_plane_destroy, .destroy = ipu_plane_destroy,
......
@@ -35,7 +35,6 @@ struct imx_parallel_display {
 	void *edid;
 	int edid_len;
 	u32 bus_format;
-	int mode_valid;
 	struct drm_display_mode mode;
 	struct drm_panel *panel;
 };
@@ -68,17 +67,6 @@ static int imx_pd_connector_get_modes(struct drm_connector *connector)
 		num_modes = drm_add_edid_modes(connector, imxpd->edid);
 	}
 
-	if (imxpd->mode_valid) {
-		struct drm_display_mode *mode = drm_mode_create(connector->dev);
-
-		if (!mode)
-			return -EINVAL;
-		drm_mode_copy(mode, &imxpd->mode);
-		mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED,
-		drm_mode_probed_add(connector, mode);
-		num_modes++;
-	}
-
 	if (np) {
 		struct drm_display_mode *mode = drm_mode_create(connector->dev);
@@ -115,8 +103,8 @@ static void imx_pd_encoder_dpms(struct drm_encoder *encoder, int mode)
 static void imx_pd_encoder_prepare(struct drm_encoder *encoder)
 {
 	struct imx_parallel_display *imxpd = enc_to_imxpd(encoder);
-	imx_drm_set_bus_format(encoder, imxpd->bus_format);
+	imx_drm_set_bus_config(encoder, imxpd->bus_format, 2, 3,
+			       imxpd->connector.display_info.bus_flags);
 }
 
 static void imx_pd_encoder_commit(struct drm_encoder *encoder)
@@ -203,7 +191,7 @@ static int imx_pd_bind(struct device *dev, struct device *master, void *data)
 {
 	struct drm_device *drm = data;
 	struct device_node *np = dev->of_node;
-	struct device_node *port;
+	struct device_node *ep;
 	const u8 *edidp;
 	struct imx_parallel_display *imxpd;
 	int ret;
@@ -230,18 +218,18 @@ static int imx_pd_bind(struct device *dev, struct device *master, void *data)
 	}
 
 	/* port@1 is the output port */
-	port = of_graph_get_port_by_id(np, 1);
-	if (port) {
-		struct device_node *endpoint, *remote;
-
-		endpoint = of_get_child_by_name(port, "endpoint");
-		if (endpoint) {
-			remote = of_graph_get_remote_port_parent(endpoint);
-			if (remote)
-				imxpd->panel = of_drm_find_panel(remote);
-			if (!imxpd->panel)
-				return -EPROBE_DEFER;
-		}
+	ep = of_graph_get_endpoint_by_regs(np, 1, -1);
+	if (ep) {
+		struct device_node *remote;
+
+		remote = of_graph_get_remote_port_parent(ep);
+		of_node_put(ep);
+		if (remote) {
+			imxpd->panel = of_drm_find_panel(remote);
+			of_node_put(remote);
+		}
+		if (!imxpd->panel)
+			return -EPROBE_DEFER;
 	}
 
 	imxpd->dev = dev;
...
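The parallel-display hunks above replace the port plus named-endpoint walk with an endpoint-based lookup. Below is a self-contained sketch of that pattern, assuming only the OF-graph and DRM panel helpers already used in the patch; the wrapper function itself is hypothetical, not driver code.

#include <linux/of.h>
#include <linux/of_graph.h>
#include <drm/drm_panel.h>

/* Hypothetical helper: find the drm_panel wired to port@1 of @np. */
static struct drm_panel *example_find_output_panel(struct device_node *np)
{
	struct device_node *ep, *remote;
	struct drm_panel *panel;

	/* Any endpoint under port@1 (a reg of -1 means "don't care"). */
	ep = of_graph_get_endpoint_by_regs(np, 1, -1);
	if (!ep)
		return NULL;

	remote = of_graph_get_remote_port_parent(ep);
	of_node_put(ep);
	if (!remote)
		return NULL;

	panel = of_drm_find_panel(remote);
	of_node_put(remote);

	return panel;	/* NULL here typically maps to -EPROBE_DEFER */
}

Compared with of_graph_get_port_by_id() followed by of_get_child_by_name(port, "endpoint"), of_graph_get_endpoint_by_regs(np, 1, -1) matches any endpoint under port@1, and the explicit of_node_put() calls drop the references the lookups take.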
@@ -432,11 +432,6 @@ static int mtk_dpi_set_display_mode(struct mtk_dpi *dpi,
 	unsigned long pll_rate;
 	unsigned int factor;
 
-	if (!dpi) {
-		dev_err(dpi->dev, "invalid argument\n");
-		return -EINVAL;
-	}
-
 	pix_rate = 1000UL * mode->clock;
 	if (mode->clock <= 74000)
 		factor = 8 * 3;
...
@@ -695,10 +695,8 @@ static void mtk_dsi_destroy_conn_enc(struct mtk_dsi *dsi)
 {
 	drm_encoder_cleanup(&dsi->encoder);
 	/* Skip connector cleanup if creation was delegated to the bridge */
-	if (dsi->conn.dev) {
-		drm_connector_unregister(&dsi->conn);
+	if (dsi->conn.dev)
 		drm_connector_cleanup(&dsi->conn);
-	}
 }
 
 static void mtk_dsi_ddp_start(struct mtk_ddp_comp *comp)
...
@@ -182,7 +182,7 @@ static int mga_g200se_set_plls(struct mga_device *mdev, long clock)
 		}
 	}
 
-	fvv = pllreffreq * testn / testm;
+	fvv = pllreffreq * (n + 1) / (m + 1);
 	fvv = (fvv - 800000) / 50000;
 
 	if (fvv > 15)
@@ -202,6 +202,14 @@ static int mga_g200se_set_plls(struct mga_device *mdev, long clock)
 	WREG_DAC(MGA1064_PIX_PLLC_M, m);
 	WREG_DAC(MGA1064_PIX_PLLC_N, n);
 	WREG_DAC(MGA1064_PIX_PLLC_P, p);
+
+	if (mdev->unique_rev_id >= 0x04) {
+		WREG_DAC(0x1a, 0x09);
+		msleep(20);
+		WREG_DAC(0x1a, 0x01);
+	}
+
 	return 0;
 }
...
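For context on the mgag200 hunk above: the old expression computed fvv from testn/testm, which at that point hold whatever divider pair the PLL search loop last tried rather than the selected one, while m and n hold the values actually programmed into the PLL registers; the "+ 1" in the fix suggests those are stored as divider-minus-one. A purely illustrative helper (not driver code) showing just that calculation:

/* Illustrative only: recompute the G200SE "fvv" selector from the
 * register-encoded PLL dividers, mirroring the corrected line above. */
static unsigned long example_g200se_fvv(unsigned long pllreffreq,
					unsigned int m, unsigned int n)
{
	unsigned long fvv = pllreffreq * (n + 1) / (m + 1);

	fvv = (fvv - 800000) / 50000;
	/* the surrounding driver code then clamps fvv when it exceeds 15
	 * (the clamp value itself is elided in the hunk above) */
	return fvv;
}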
@@ -2,6 +2,7 @@ config DRM_OMAP
 	tristate "OMAP DRM"
 	depends on DRM
 	depends on ARCH_OMAP2PLUS || ARCH_MULTIPLATFORM
+	select OMAP2_DSS
 	select DRM_KMS_HELPER
 	select DRM_KMS_FB_HELPER
 	select FB_SYS_FILLRECT
...
@@ -9,6 +9,7 @@
  * the Free Software Foundation.
  */
 
+#include <linux/gpio/consumer.h>
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
...
@@ -14,7 +14,7 @@
  * the Free Software Foundation.
  */
 
-#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
...
@@ -9,7 +9,7 @@
  * the Free Software Foundation.
  */
 
-#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
...
(Diffs for the remaining files changed by this merge are collapsed in this view.)