Commit 47a46942 authored by Linus Torvalds

Merge branch 'akpm' (patches from Andrew)

Merge second patchbomb from Andrew Morton:

 - most of the rest of MM

 - lots of misc things

 - procfs updates

 - printk feature work

 - updates to get_maintainer, MAINTAINERS, checkpatch

 - lib/ updates

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (96 commits)
  exit,stats: /* obey this comment */
  coredump: add __printf attribute to cn_*printf functions
  coredump: use from_kuid/kgid when formatting corename
  fs/reiserfs: remove unneeded cast
  NILFS2: support NFSv2 export
  fs/befs/btree.c: remove unneeded initializations
  fs/minix: remove unneeded cast
  init/do_mounts.c: add create_dev() failure log
  kasan: remove duplicate definition of the macro KASAN_FREE_PAGE
  fs/efs: femove unneeded cast
  checkpatch: emit "NOTE: <types>" message only once after multiple files
  checkpatch: emit an error when there's a diff in a changelog
  checkpatch: validate MODULE_LICENSE content
  checkpatch: add multi-line handling for PREFER_ETHER_ADDR_COPY
  checkpatch: suggest using eth_zero_addr() and eth_broadcast_addr()
  checkpatch: fix processing of MEMSET issues
  checkpatch: suggest using ether_addr_equal*()
  checkpatch: avoid NOT_UNIFIED_DIFF errors on cover-letter.patch files
  checkpatch: remove local from codespell path
  checkpatch: add --showfile to allow input via pipe to show filenames
  ...
...@@ -84,6 +84,7 @@ Mayuresh Janorkar <mayur@ti.com>
Michael Buesch <m@bues.ch>
Michel Dänzer <michel@tungstengraphics.com>
Mitesh shah <mshah@teja.com>
Mohit Kumar <mohit.kumar@st.com> <mohit.kumar.dhaka@gmail.com>
Morten Welinder <terra@gnome.org>
Morten Welinder <welinder@anemone.rentec.com>
Morten Welinder <welinder@darter.rentec.com>
...@@ -95,10 +96,12 @@ Patrick Mochel <mochel@digitalimplant.org>
Peter A Jonsson <pj@ludd.ltu.se>
Peter Oruba <peter@oruba.de>
Peter Oruba <peter.oruba@amd.com>
Pratyush Anand <pratyush.anand@gmail.com> <pratyush.anand@st.com>
Praveen BP <praveenbp@ti.com>
Rajesh Shah <rajesh.shah@intel.com>
Ralf Baechle <ralf@linux-mips.org>
Ralf Wildenhues <Ralf.Wildenhues@gmx.de>
Randy Dunlap <rdunlap@infradead.org> <rdunlap@xenotime.net>
Rémi Denis-Courmont <rdenis@simphalempin.com>
Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Rudolf Marek <R.Marek@sh.cvut.cz>
......
What: /config/pcie-gadget
Date: Feb 2011
KernelVersion: 2.6.37
Contact: Pratyush Anand <pratyush.anand@st.com> Contact: Pratyush Anand <pratyush.anand@gmail.com>
Description:
Interface is used to configure selected dual mode PCIe controller
......
...@@ -98,4 +98,13 @@ Description: The /dev/kmsg character device node provides userspace access
logic is used internally when messages are printed to the
console, /proc/kmsg or the syslog() syscall.
By default, kernel tries to avoid fragments by concatenating
when it can and fragments are rare; however, when extended
console support is enabled, the in-kernel concatenation is
disabled and /dev/kmsg output will contain more fragments. If
the log consumer performs concatenation, the end result
should be the same. In the future, the in-kernel concatenation
may be removed entirely and /dev/kmsg users are recommended to
implement fragment handling.
Users: dmesg(1), userspace kernel log consumers
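
The paragraph above recommends that /dev/kmsg consumers be prepared to handle fragments themselves. Below is a minimal, illustrative C sketch of such a consumer; it only demonstrates splitting one record into the documented header fields and the message text, and leaves the policy for joining fragments to the consumer. The buffer size and overall structure are assumptions made for the example, not part of the interface.

/*
 * Illustrative /dev/kmsg reader.  Each read() returns one record of the
 * form "<prefix>,<seq>,<timestamp>,<flags>[,...];<message>"; how the
 * caller joins fragments back into lines is left open, as recommended
 * in the description above.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char rec[8192];
    int fd = open("/dev/kmsg", O_RDONLY | O_NONBLOCK);

    if (fd < 0)
        return 1;

    for (;;) {
        ssize_t n = read(fd, rec, sizeof(rec) - 1);

        if (n <= 0)                     /* EAGAIN: no more buffered records */
            break;
        rec[n] = '\0';

        char *text = strchr(rec, ';');  /* header ends at the first ';' */
        if (!text)
            continue;
        *text++ = '\0';

        char *prefix = strtok(rec, ",");
        char *seq    = strtok(NULL, ",");
        char *ts     = strtok(NULL, ",");
        char *flags  = strtok(NULL, ",");

        if (!flags)
            continue;
        printf("seq=%s ts=%s flags=%s prefix=%s text=%s",
               seq, ts, flags, prefix, text);
    }
    close(fd);
    return 0;
}
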
...@@ -4,14 +4,14 @@ driver is bound with root hub device.
What: /sys/bus/usb/devices/.../get_dev_desc
Date: March 2014
Contact: Pratyush Anand <pratyush.anand@st.com> Contact: Pratyush Anand <pratyush.anand@gmail.com>
Description:
Write to this node to issue "Get Device Descriptor"
for Link Layer Validation device. It is needed for TD.7.06.
What: /sys/bus/usb/devices/.../u1_timeout
Date: March 2014
Contact: Pratyush Anand <pratyush.anand@st.com> Contact: Pratyush Anand <pratyush.anand@gmail.com>
Description:
Set "U1 timeout" for the downstream port where Link Layer
Validation device is connected. Timeout value must be between 0
...@@ -19,7 +19,7 @@ Description:
What: /sys/bus/usb/devices/.../u2_timeout
Date: March 2014
Contact: Pratyush Anand <pratyush.anand@st.com> Contact: Pratyush Anand <pratyush.anand@gmail.com>
Description:
Set "U2 timeout" for the downstream port where Link Layer
Validation device is connected. Timeout value must be between 0
...@@ -27,21 +27,21 @@ Description:
What: /sys/bus/usb/devices/.../hot_reset
Date: March 2014
Contact: Pratyush Anand <pratyush.anand@st.com> Contact: Pratyush Anand <pratyush.anand@gmail.com>
Description:
Write to this node to issue "Reset" for Link Layer Validation
device. It is needed for TD.7.29, TD.7.31, TD.7.34 and TD.7.35.
What: /sys/bus/usb/devices/.../u3_entry
Date: March 2014
Contact: Pratyush Anand <pratyush.anand@st.com> Contact: Pratyush Anand <pratyush.anand@gmail.com>
Description:
Write to this node to issue "U3 entry" for Link Layer
Validation device. It is needed for TD.7.35 and TD.7.36.
What: /sys/bus/usb/devices/.../u3_exit
Date: March 2014
Contact: Pratyush Anand <pratyush.anand@st.com> Contact: Pratyush Anand <pratyush.anand@gmail.com>
Description:
Write to this node to issue "U3 exit" for Link Layer
Validation device. It is needed for TD.7.36.
What: /sys/class/zram-control/
Date: August 2015
KernelVersion: 4.2
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The zram-control/ class sub-directory belongs to zram
device class
What: /sys/class/zram-control/hot_add
Date: August 2015
KernelVersion: 4.2
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
RO attribute. Read operation will cause zram to add a new
device and return its device id back to user (so one can
use /dev/zram<id>), or error code.
What: /sys/class/zram-control/hot_remove
Date: August 2015
KernelVersion: 4.2
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
WO attribute. Remove a specific /dev/zramX device, where X
is a device_id provided by user.
...@@ -19,7 +19,9 @@ Following shows a typical sequence of steps for using zram.
1) Load Module:
modprobe zram num_devices=4
This creates 4 devices: /dev/zram{0,1,2,3}
(num_devices parameter is optional. Default: 1)
num_devices parameter is optional and tells zram how many devices should be
pre-created. Default: 1.
2) Set max number of compression streams
Compression backend may use up to max_comp_streams compression streams,
...@@ -97,7 +99,24 @@ size of the disk when not in use so a huge zram is wasteful.
mkfs.ext4 /dev/zram1
mount /dev/zram1 /tmp
7) Stats: 7) Add/remove zram devices
zram provides a control interface, which enables dynamic (on-demand) device
addition and removal.
In order to add a new /dev/zramX device, perform read operation on hot_add
attribute. This will return either new device's device id (meaning that you
can use /dev/zram<id>) or error code.
Example:
cat /sys/class/zram-control/hot_add
1
To remove the existing /dev/zramX device (where X is a device id)
execute
echo X > /sys/class/zram-control/hot_remove
8) Stats:
Per-device statistics are exported as various nodes under /sys/block/zram<id>/
A brief description of exported device attritbutes. For more details please
...@@ -126,7 +145,7 @@ mem_used_max RW the maximum amount memory zram have consumed to
mem_limit RW the maximum amount of memory ZRAM can use to store
the compressed data
num_migrated RO the number of objects migrated migrated by compaction
compact WO trigger memory compaction
WARNING
=======
...@@ -172,11 +191,11 @@ line of text and contains the following stats separated by whitespace:
zero_pages
num_migrated
8) Deactivate: 9) Deactivate:
swapoff /dev/zram0
umount /dev/zram1
9) Reset: 10) Reset:
Write any positive value to 'reset' sysfs node
echo 1 > /sys/block/zram0/reset
echo 1 > /sys/block/zram1/reset
......
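
For completeness, the hot_add/hot_remove protocol described in step 7 of the zram document above can also be driven programmatically. The C sketch below is illustrative only (error handling is minimal and configuration of the created device is omitted); the sysfs paths are the ones documented above.

/*
 * Illustrative use of /sys/class/zram-control: reading hot_add creates a
 * device and returns its id, writing that id to hot_remove deletes it.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char buf[16];
    ssize_t n;
    int id, fd;

    fd = open("/sys/class/zram-control/hot_add", O_RDONLY);
    if (fd < 0)
        return 1;
    n = read(fd, buf, sizeof(buf) - 1);     /* returns the new device id */
    close(fd);
    if (n <= 0)
        return 1;
    buf[n] = '\0';
    id = atoi(buf);
    printf("created /dev/zram%d\n", id);

    /* ... set disksize, mkfs/mkswap, use the device ... */

    fd = open("/sys/class/zram-control/hot_remove", O_WRONLY);
    if (fd < 0)
        return 1;
    dprintf(fd, "%d", id);                  /* remove the device again */
    close(fd);
    return 0;
}
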
...@@ -2,7 +2,7 @@ Spear PCIe Gadget Driver:
Author
=============
Pratyush Anand (pratyush.anand@st.com) Pratyush Anand (pratyush.anand@gmail.com)
Location
============
......
...@@ -2,6 +2,7 @@
started by Ingo Molnar <mingo@redhat.com>, 2001.09.17
2.6 port and netpoll api by Matt Mackall <mpm@selenic.com>, Sep 9 2003
IPv6 support by Cong Wang <xiyou.wangcong@gmail.com>, Jan 1 2013
Extended console support by Tejun Heo <tj@kernel.org>, May 1 2015
Please send bug reports to Matt Mackall <mpm@selenic.com>
Satyam Sharma <satyam.sharma@gmail.com>, and Cong Wang <xiyou.wangcong@gmail.com>
...@@ -24,9 +25,10 @@ Sender and receiver configuration:
It takes a string configuration parameter "netconsole" in the
following format:
netconsole=[src-port]@[src-ip]/[<dev>],[tgt-port]@<tgt-ip>/[tgt-macaddr] netconsole=[+][src-port]@[src-ip]/[<dev>],[tgt-port]@<tgt-ip>/[tgt-macaddr]
where
+ if present, enable extended console support
src-port source for UDP packets (defaults to 6665)
src-ip source IP to use (interface address)
dev network interface (eth0)
...@@ -107,6 +109,7 @@ To remove a target:
The interface exposes these parameters of a netconsole target to userspace:
enabled Is this target currently enabled? (read-write)
extended Extended mode enabled (read-write)
dev_name Local network interface name (read-write)
local_port Source UDP port to use (read-write)
remote_port Remote agent's UDP port (read-write)
...@@ -132,6 +135,36 @@ You can also update the local interface dynamically. This is especially
useful if you want to use interfaces that have newly come up (and may not
have existed when netconsole was loaded / initialized).
Extended console:
=================
If '+' is prefixed to the configuration line or "extended" config file
is set to 1, extended console support is enabled. An example boot
param follows.
linux netconsole=+4444@10.0.0.1/eth1,9353@10.0.0.2/12:34:56:78:9a:bc
Log messages are transmitted with extended metadata header in the
following format which is the same as /dev/kmsg.
<level>,<sequnum>,<timestamp>,<contflag>;<message text>
Non printable characters in <message text> are escaped using "\xff"
notation. If the message contains optional dictionary, verbatim
newline is used as the delimeter.
If a message doesn't fit in certain number of bytes (currently 1000),
the message is split into multiple fragments by netconsole. These
fragments are transmitted with "ncfrag" header field added.
ncfrag=<byte-offset>/<total-bytes>
For example, assuming a lot smaller chunk size, a message "the first
chunk, the 2nd chunk." may be split as follows.
6,416,1758426,-,ncfrag=0/31;the first chunk,
6,416,1758426,-,ncfrag=16/31; the 2nd chunk.
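
A log receiver that wants complete lines therefore has to stitch such fragments back together. The function below is a rough sketch of that reassembly step for a single message, assuming only the header layout shown above (comma-separated fields terminated by ';', plus the optional ncfrag=<byte-offset>/<total-bytes> field). A real consumer would additionally key the buffer by the sequence number and cope with lost or reordered UDP packets; the names and buffer size here are illustrative.

/*
 * Rough sketch: reassemble extended netconsole fragments of one message.
 * "buf" and "total" would normally be per-sequence-number state.
 */
#include <stdio.h>
#include <string.h>

static char buf[4096];      /* reassembly buffer for one message */
static int total;           /* total bytes announced via ncfrag */

/* Feed one UDP payload; returns 1 when a complete message is in buf. */
static int feed_packet(const char *pkt)
{
    const char *body = strchr(pkt, ';');
    const char *frag = strstr(pkt, "ncfrag=");
    int off, len;

    if (!body)
        return 0;
    body++;                                 /* message text starts after ';' */

    if (!frag || frag > body) {             /* no ncfrag field: not fragmented */
        snprintf(buf, sizeof(buf), "%s", body);
        return 1;
    }
    if (sscanf(frag, "ncfrag=%d/%d", &off, &total) != 2)
        return 0;
    len = strlen(body);
    if (total <= 0 || total > (int)sizeof(buf) - 1 ||
        off < 0 || off + len > total)
        return 0;                           /* malformed or oversized: drop */
    memcpy(buf + off, body, len);
    buf[total] = '\0';
    return off + len == total;              /* last fragment completes it */
}

Fed the two example payloads above in order, feed_packet() returns 1 on the second call with buf holding the original "the first chunk, the 2nd chunk." line.
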
Miscellaneous notes:
====================
......
...@@ -197,8 +197,8 @@ core_pattern is used to specify a core dumpfile pattern name.
%P global pid (init PID namespace)
%i tid
%I global tid (init PID namespace)
%u uid %u uid (in initial user namespace)
%g gid %g gid (in initial user namespace)
%d dump mode, matches PR_SET_DUMPABLE and
/proc/sys/fs/suid_dumpable
%s signal number
......
...@@ -26,8 +26,22 @@ Zswap evicts pages from compressed cache on an LRU basis to the backing swap
device when the compressed pool reaches its size limit. This requirement had
been identified in prior community discussions.
To enabled zswap, the "enabled" attribute must be set to 1 at boot time. e.g. Zswap is disabled by default but can be enabled at boot time by setting
zswap.enabled=1 the "enabled" attribute to 1 at boot time. ie: zswap.enabled=1. Zswap
can also be enabled and disabled at runtime using the sysfs interface.
An example command to enable zswap at runtime, assuming sysfs is mounted
at /sys, is:
echo 1 > /sys/modules/zswap/parameters/enabled
When zswap is disabled at runtime it will stop storing pages that are
being swapped out. However, it will _not_ immediately write out or fault
back into memory all of the pages stored in the compressed pool. The
pages stored in zswap will remain in the compressed pool until they are
either invalidated or faulted back into memory. In order to force all
pages out of the compressed pool, a swapoff on the swap device(s) will
fault back into memory all swapped out pages, including those in the
compressed pool.
Design:
......
...@@ -259,7 +259,7 @@ S: Maintained ...@@ -259,7 +259,7 @@ S: Maintained
F: drivers/platform/x86/acer-wmi.c F: drivers/platform/x86/acer-wmi.c
ACPI ACPI
M: Rafael J. Wysocki <rjw@rjwysocki.net> M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
M: Len Brown <lenb@kernel.org> M: Len Brown <lenb@kernel.org>
L: linux-acpi@vger.kernel.org L: linux-acpi@vger.kernel.org
W: https://01.org/linux-acpi W: https://01.org/linux-acpi
...@@ -280,7 +280,7 @@ F: tools/power/acpi/ ...@@ -280,7 +280,7 @@ F: tools/power/acpi/
ACPI COMPONENT ARCHITECTURE (ACPICA) ACPI COMPONENT ARCHITECTURE (ACPICA)
M: Robert Moore <robert.moore@intel.com> M: Robert Moore <robert.moore@intel.com>
M: Lv Zheng <lv.zheng@intel.com> M: Lv Zheng <lv.zheng@intel.com>
M: Rafael J. Wysocki <rafael.j.wysocki@intel.com> M: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
L: linux-acpi@vger.kernel.org L: linux-acpi@vger.kernel.org
L: devel@acpica.org L: devel@acpica.org
W: https://acpica.org/ W: https://acpica.org/
...@@ -2515,7 +2515,7 @@ F: arch/powerpc/oprofile/*cell* ...@@ -2515,7 +2515,7 @@ F: arch/powerpc/oprofile/*cell*
F: arch/powerpc/platforms/cell/ F: arch/powerpc/platforms/cell/
CEPH DISTRIBUTED FILE SYSTEM CLIENT CEPH DISTRIBUTED FILE SYSTEM CLIENT
M: Yan, Zheng <zyan@redhat.com> M: "Yan, Zheng" <zyan@redhat.com>
M: Sage Weil <sage@redhat.com> M: Sage Weil <sage@redhat.com>
L: ceph-devel@vger.kernel.org L: ceph-devel@vger.kernel.org
W: http://ceph.com/ W: http://ceph.com/
...@@ -2829,7 +2829,7 @@ S: Maintained ...@@ -2829,7 +2829,7 @@ S: Maintained
F: drivers/net/ethernet/ti/cpmac.c F: drivers/net/ethernet/ti/cpmac.c
CPU FREQUENCY DRIVERS CPU FREQUENCY DRIVERS
M: Rafael J. Wysocki <rjw@rjwysocki.net> M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
M: Viresh Kumar <viresh.kumar@linaro.org> M: Viresh Kumar <viresh.kumar@linaro.org>
L: linux-pm@vger.kernel.org L: linux-pm@vger.kernel.org
S: Maintained S: Maintained
...@@ -2868,7 +2868,7 @@ F: drivers/cpuidle/cpuidle-exynos.c ...@@ -2868,7 +2868,7 @@ F: drivers/cpuidle/cpuidle-exynos.c
F: arch/arm/mach-exynos/pm.c F: arch/arm/mach-exynos/pm.c
CPUIDLE DRIVERS CPUIDLE DRIVERS
M: Rafael J. Wysocki <rjw@rjwysocki.net> M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
M: Daniel Lezcano <daniel.lezcano@linaro.org> M: Daniel Lezcano <daniel.lezcano@linaro.org>
L: linux-pm@vger.kernel.org L: linux-pm@vger.kernel.org
S: Maintained S: Maintained
...@@ -4103,7 +4103,7 @@ F: include/uapi/scsi/fc/ ...@@ -4103,7 +4103,7 @@ F: include/uapi/scsi/fc/
FILE LOCKING (flock() and fcntl()/lockf()) FILE LOCKING (flock() and fcntl()/lockf())
M: Jeff Layton <jlayton@poochiereds.net> M: Jeff Layton <jlayton@poochiereds.net>
M: J. Bruce Fields <bfields@fieldses.org> M: "J. Bruce Fields" <bfields@fieldses.org>
L: linux-fsdevel@vger.kernel.org L: linux-fsdevel@vger.kernel.org
S: Maintained S: Maintained
F: include/linux/fcntl.h F: include/linux/fcntl.h
...@@ -4299,7 +4299,7 @@ F: sound/soc/fsl/imx* ...@@ -4299,7 +4299,7 @@ F: sound/soc/fsl/imx*
F: sound/soc/fsl/mpc8610_hpcd.c F: sound/soc/fsl/mpc8610_hpcd.c
FREESCALE QORIQ MANAGEMENT COMPLEX DRIVER FREESCALE QORIQ MANAGEMENT COMPLEX DRIVER
M: J. German Rivera <German.Rivera@freescale.com> M: "J. German Rivera" <German.Rivera@freescale.com>
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
S: Maintained S: Maintained
F: drivers/staging/fsl-mc/ F: drivers/staging/fsl-mc/
...@@ -4581,7 +4581,7 @@ S: Maintained ...@@ -4581,7 +4581,7 @@ S: Maintained
F: drivers/media/usb/gspca/ F: drivers/media/usb/gspca/
GUID PARTITION TABLE (GPT) GUID PARTITION TABLE (GPT)
M: Davidlohr Bueso <davidlohr@hp.com> M: Davidlohr Bueso <dave@stgolabs.net>
L: linux-efi@vger.kernel.org L: linux-efi@vger.kernel.org
S: Maintained S: Maintained
F: block/partitions/efi.* F: block/partitions/efi.*
...@@ -4871,7 +4871,7 @@ S: Maintained ...@@ -4871,7 +4871,7 @@ S: Maintained
F: fs/hugetlbfs/ F: fs/hugetlbfs/
Hyper-V CORE AND DRIVERS Hyper-V CORE AND DRIVERS
M: K. Y. Srinivasan <kys@microsoft.com> M: "K. Y. Srinivasan" <kys@microsoft.com>
M: Haiyang Zhang <haiyangz@microsoft.com> M: Haiyang Zhang <haiyangz@microsoft.com>
L: devel@linuxdriverproject.org L: devel@linuxdriverproject.org
S: Maintained S: Maintained
...@@ -5233,7 +5233,7 @@ K: \b(ABS|SYN)_MT_ ...@@ -5233,7 +5233,7 @@ K: \b(ABS|SYN)_MT_
INTEL ASoC BDW/HSW DRIVERS INTEL ASoC BDW/HSW DRIVERS
M: Jie Yang <yang.jie@linux.intel.com> M: Jie Yang <yang.jie@linux.intel.com>
L: alsa-devel@alsa-project.org L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Supported S: Supported
F: sound/soc/intel/sst-haswell* F: sound/soc/intel/sst-haswell*
F: sound/soc/intel/sst-dsp* F: sound/soc/intel/sst-dsp*
...@@ -6825,7 +6825,7 @@ F: drivers/net/ethernet/natsemi/natsemi.c ...@@ -6825,7 +6825,7 @@ F: drivers/net/ethernet/natsemi/natsemi.c
NATIVE INSTRUMENTS USB SOUND INTERFACE DRIVER NATIVE INSTRUMENTS USB SOUND INTERFACE DRIVER
M: Daniel Mack <zonque@gmail.com> M: Daniel Mack <zonque@gmail.com>
S: Maintained S: Maintained
L: alsa-devel@alsa-project.org L: alsa-devel@alsa-project.org (moderated for non-subscribers)
W: http://www.native-instruments.com W: http://www.native-instruments.com
F: sound/usb/caiaq/ F: sound/usb/caiaq/
...@@ -7243,7 +7243,7 @@ F: arch/arm/mach-omap2/prm* ...@@ -7243,7 +7243,7 @@ F: arch/arm/mach-omap2/prm*
OMAP AUDIO SUPPORT OMAP AUDIO SUPPORT
M: Peter Ujfalusi <peter.ujfalusi@ti.com> M: Peter Ujfalusi <peter.ujfalusi@ti.com>
M: Jarkko Nikula <jarkko.nikula@bitmer.com> M: Jarkko Nikula <jarkko.nikula@bitmer.com>
L: alsa-devel@alsa-project.org (subscribers-only) L: alsa-devel@alsa-project.org (moderated for non-subscribers)
L: linux-omap@vger.kernel.org L: linux-omap@vger.kernel.org
S: Maintained S: Maintained
F: sound/soc/omap/ F: sound/soc/omap/
...@@ -7945,7 +7945,7 @@ F: include/linux/power_supply.h ...@@ -7945,7 +7945,7 @@ F: include/linux/power_supply.h
F: drivers/power/ F: drivers/power/
PNP SUPPORT PNP SUPPORT
M: Rafael J. Wysocki <rafael.j.wysocki@intel.com> M: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
S: Maintained S: Maintained
F: drivers/pnp/ F: drivers/pnp/
...@@ -8951,7 +8951,7 @@ F: drivers/mmc/host/sdhci-spear.c ...@@ -8951,7 +8951,7 @@ F: drivers/mmc/host/sdhci-spear.c
SECURITY SUBSYSTEM SECURITY SUBSYSTEM
M: James Morris <james.l.morris@oracle.com> M: James Morris <james.l.morris@oracle.com>
M: Serge E. Hallyn <serge@hallyn.com> M: "Serge E. Hallyn" <serge@hallyn.com>
L: linux-security-module@vger.kernel.org (suggested Cc:) L: linux-security-module@vger.kernel.org (suggested Cc:)
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security.git
W: http://kernsec.org/ W: http://kernsec.org/
...@@ -9171,7 +9171,7 @@ F: arch/arm/mach-davinci/ ...@@ -9171,7 +9171,7 @@ F: arch/arm/mach-davinci/
F: drivers/i2c/busses/i2c-davinci.c F: drivers/i2c/busses/i2c-davinci.c
TI DAVINCI SERIES MEDIA DRIVER TI DAVINCI SERIES MEDIA DRIVER
M: Lad, Prabhakar <prabhakar.csengg@gmail.com> M: "Lad, Prabhakar" <prabhakar.csengg@gmail.com>
L: linux-media@vger.kernel.org L: linux-media@vger.kernel.org
W: http://linuxtv.org/ W: http://linuxtv.org/
Q: http://patchwork.linuxtv.org/project/linux-media/list/ Q: http://patchwork.linuxtv.org/project/linux-media/list/
...@@ -9181,7 +9181,7 @@ F: drivers/media/platform/davinci/ ...@@ -9181,7 +9181,7 @@ F: drivers/media/platform/davinci/
F: include/media/davinci/ F: include/media/davinci/
TI AM437X VPFE DRIVER TI AM437X VPFE DRIVER
M: Lad, Prabhakar <prabhakar.csengg@gmail.com> M: "Lad, Prabhakar" <prabhakar.csengg@gmail.com>
L: linux-media@vger.kernel.org L: linux-media@vger.kernel.org
W: http://linuxtv.org/ W: http://linuxtv.org/
Q: http://patchwork.linuxtv.org/project/linux-media/list/ Q: http://patchwork.linuxtv.org/project/linux-media/list/
...@@ -9190,7 +9190,7 @@ S: Maintained ...@@ -9190,7 +9190,7 @@ S: Maintained
F: drivers/media/platform/am437x/ F: drivers/media/platform/am437x/
OV2659 OMNIVISION SENSOR DRIVER OV2659 OMNIVISION SENSOR DRIVER
M: Lad, Prabhakar <prabhakar.csengg@gmail.com> M: "Lad, Prabhakar" <prabhakar.csengg@gmail.com>
L: linux-media@vger.kernel.org L: linux-media@vger.kernel.org
W: http://linuxtv.org/ W: http://linuxtv.org/
Q: http://patchwork.linuxtv.org/project/linux-media/list/ Q: http://patchwork.linuxtv.org/project/linux-media/list/
...@@ -9755,7 +9755,7 @@ F: fs/sysv/ ...@@ -9755,7 +9755,7 @@ F: fs/sysv/
F: include/linux/sysv_fs.h F: include/linux/sysv_fs.h
TARGET SUBSYSTEM TARGET SUBSYSTEM
M: Nicholas A. Bellinger <nab@linux-iscsi.org> M: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
L: linux-scsi@vger.kernel.org L: linux-scsi@vger.kernel.org
L: target-devel@vger.kernel.org L: target-devel@vger.kernel.org
W: http://www.linux-iscsi.org W: http://www.linux-iscsi.org
...@@ -9897,7 +9897,7 @@ F: include/linux/if_team.h ...@@ -9897,7 +9897,7 @@ F: include/linux/if_team.h
F: include/uapi/linux/if_team.h F: include/uapi/linux/if_team.h
TECHNOLOGIC SYSTEMS TS-5500 PLATFORM SUPPORT TECHNOLOGIC SYSTEMS TS-5500 PLATFORM SUPPORT
M: Savoir-faire Linux Inc. <kernel@savoirfairelinux.com> M: "Savoir-faire Linux Inc." <kernel@savoirfairelinux.com>
S: Maintained S: Maintained
F: arch/x86/platform/ts5500/ F: arch/x86/platform/ts5500/
......
...@@ -499,6 +499,13 @@ config ARCH_HAS_ELF_RANDOMIZE
- arch_mmap_rnd()
- arch_randomize_brk()
config HAVE_COPY_THREAD_TLS
bool
help
Architecture provides copy_thread_tls to accept tls argument via
normal C parameter passing, rather than extracting the syscall
argument from pt_regs.
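
In other words, the option only changes which prototype the architecture provides to the core fork code. A sketch of the two variants, with signatures assumed from this patch series rather than quoted from a header, looks like:

/* Legacy variant: the architecture digs the new thread's TLS pointer out
 * of the saved syscall registers itself. */
int copy_thread(unsigned long clone_flags, unsigned long sp,
                unsigned long arg, struct task_struct *p);

/* With HAVE_COPY_THREAD_TLS selected, the core passes TLS as a normal C
 * argument instead (assumed prototype). */
int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
                    unsigned long arg, struct task_struct *p,
                    unsigned long tls);
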
#
# ABI hall of shame
#
......
...@@ -63,15 +63,6 @@ static inline pte_t huge_pte_wrprotect(pte_t pte) ...@@ -63,15 +63,6 @@ static inline pte_t huge_pte_wrprotect(pte_t pte)
return pte_wrprotect(pte); return pte_wrprotect(pte);
} }
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page) static inline void arch_clear_hugepage_flags(struct page *page)
{ {
clear_bit(PG_dcache_clean, &page->flags); clear_bit(PG_dcache_clean, &page->flags);
......
...@@ -96,15 +96,6 @@ static inline pte_t huge_pte_wrprotect(pte_t pte) ...@@ -96,15 +96,6 @@ static inline pte_t huge_pte_wrprotect(pte_t pte)
return pte_wrprotect(pte); return pte_wrprotect(pte);
} }
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page) static inline void arch_clear_hugepage_flags(struct page *page)
{ {
clear_bit(PG_dcache_clean, &page->flags); clear_bit(PG_dcache_clean, &page->flags);
......
...@@ -209,17 +209,18 @@ dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size, ...@@ -209,17 +209,18 @@ dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
* the same here. * the same here.
*/ */
static inline int static inline int
dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
enum dma_data_direction direction) enum dma_data_direction direction)
{ {
int i; int i;
struct scatterlist *sg;
for (i = 0; i < nents; i++) { for_each_sg(sglist, sg, nents, i) {
char *virt; char *virt;
sg[i].dma_address = page_to_bus(sg_page(&sg[i])) + sg[i].offset; sg->dma_address = page_to_bus(sg_page(sg)) + sg->offset;
virt = sg_virt(&sg[i]); virt = sg_virt(sg);
dma_cache_sync(dev, virt, sg[i].length, direction); dma_cache_sync(dev, virt, sg->length, direction);
} }
return nents; return nents;
...@@ -321,14 +322,14 @@ dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, ...@@ -321,14 +322,14 @@ dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
} }
static inline void static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction) int nents, enum dma_data_direction direction)
{ {
int i; int i;
struct scatterlist *sg;
for (i = 0; i < nents; i++) { for_each_sg(sglist, sg, nents, i)
dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, direction); dma_cache_sync(dev, sg_virt(sg), sg->length, direction);
}
} }
/* Now for the API extensions over the pci_ one */ /* Now for the API extensions over the pci_ one */
......
...@@ -35,12 +35,6 @@ extern unsigned long __nongprelbss memory_start; ...@@ -35,12 +35,6 @@ extern unsigned long __nongprelbss memory_start;
extern unsigned long __nongprelbss memory_end; extern unsigned long __nongprelbss memory_end;
extern unsigned long __nongprelbss rom_length; extern unsigned long __nongprelbss rom_length;
/* determine if we're running from ROM */
static inline int is_in_rom(unsigned long addr)
{
return 0; /* default case: not in ROM */
}
#endif #endif
#endif #endif
#endif /* _ASM_SECTIONS_H */ #endif /* _ASM_SECTIONS_H */
...@@ -119,14 +119,16 @@ dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size, ...@@ -119,14 +119,16 @@ dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
EXPORT_SYMBOL(dma_map_single); EXPORT_SYMBOL(dma_map_single);
int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
enum dma_data_direction direction) enum dma_data_direction direction)
{ {
int i; int i;
struct scatterlist *sg;
for (i=0; i<nents; i++) for_each_sg(sglist, sg, nents, i) {
frv_cache_wback_inv(sg_dma_address(&sg[i]), frv_cache_wback_inv(sg_dma_address(sg),
sg_dma_address(&sg[i]) + sg_dma_len(&sg[i])); sg_dma_address(sg) + sg_dma_len(sg));
}
BUG_ON(direction == DMA_NONE); BUG_ON(direction == DMA_NONE);
......
...@@ -50,19 +50,20 @@ dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size, ...@@ -50,19 +50,20 @@ dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
EXPORT_SYMBOL(dma_map_single); EXPORT_SYMBOL(dma_map_single);
int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
enum dma_data_direction direction) enum dma_data_direction direction)
{ {
unsigned long dampr2; unsigned long dampr2;
void *vaddr; void *vaddr;
int i; int i;
struct scatterlist *sg;
BUG_ON(direction == DMA_NONE); BUG_ON(direction == DMA_NONE);
dampr2 = __get_DAMPR(2); dampr2 = __get_DAMPR(2);
for (i = 0; i < nents; i++) { for_each_sg(sglist, sg, nents, i) {
vaddr = kmap_atomic_primary(sg_page(&sg[i])); vaddr = kmap_atomic_primary(sg_page(sg));
frv_dcache_writeback((unsigned long) vaddr, frv_dcache_writeback((unsigned long) vaddr,
(unsigned long) vaddr + PAGE_SIZE); (unsigned long) vaddr + PAGE_SIZE);
......
...@@ -65,15 +65,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep) ...@@ -65,15 +65,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep)
return *ptep; return *ptep;
} }
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page) static inline void arch_clear_hugepage_flags(struct page *page)
{ {
} }
......
...@@ -67,15 +67,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep) ...@@ -67,15 +67,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep)
return *ptep; return *ptep;
} }
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page) static inline void arch_clear_hugepage_flags(struct page *page)
{ {
} }
......
...@@ -110,15 +110,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep) ...@@ -110,15 +110,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep)
return *ptep; return *ptep;
} }
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page) static inline void arch_clear_hugepage_flags(struct page *page)
{ {
} }
......
...@@ -168,15 +168,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep) ...@@ -168,15 +168,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep)
return *ptep; return *ptep;
} }
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page) static inline void arch_clear_hugepage_flags(struct page *page)
{ {
} }
......
...@@ -37,9 +37,6 @@ static inline int prepare_hugepage_range(struct file *file, ...@@ -37,9 +37,6 @@ static inline int prepare_hugepage_range(struct file *file,
#define arch_clear_hugepage_flags(page) do { } while (0) #define arch_clear_hugepage_flags(page) do { } while (0)
int arch_prepare_hugepage(struct page *page);
void arch_release_hugepage(struct page *page);
static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr, static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep) pte_t *ptep)
{ {
......
...@@ -17,7 +17,10 @@ ...@@ -17,7 +17,10 @@
#define PAGE_DEFAULT_ACC 0 #define PAGE_DEFAULT_ACC 0
#define PAGE_DEFAULT_KEY (PAGE_DEFAULT_ACC << 4) #define PAGE_DEFAULT_KEY (PAGE_DEFAULT_ACC << 4)
#define HPAGE_SHIFT 20 #include <asm/setup.h>
#ifndef __ASSEMBLY__
extern int HPAGE_SHIFT;
#define HPAGE_SIZE (1UL << HPAGE_SHIFT) #define HPAGE_SIZE (1UL << HPAGE_SHIFT)
#define HPAGE_MASK (~(HPAGE_SIZE - 1)) #define HPAGE_MASK (~(HPAGE_SIZE - 1))
#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT) #define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)
...@@ -27,9 +30,6 @@ ...@@ -27,9 +30,6 @@
#define ARCH_HAS_PREPARE_HUGEPAGE #define ARCH_HAS_PREPARE_HUGEPAGE
#define ARCH_HAS_HUGEPAGE_CLEAR_FLUSH #define ARCH_HAS_HUGEPAGE_CLEAR_FLUSH
#include <asm/setup.h>
#ifndef __ASSEMBLY__
static inline void storage_key_init_range(unsigned long start, unsigned long end) static inline void storage_key_init_range(unsigned long start, unsigned long end)
{ {
#if PAGE_DEFAULT_KEY #if PAGE_DEFAULT_KEY
......
...@@ -202,7 +202,7 @@ COMPAT_SYSCALL_WRAP1(epoll_create1, int, flags); ...@@ -202,7 +202,7 @@ COMPAT_SYSCALL_WRAP1(epoll_create1, int, flags);
COMPAT_SYSCALL_WRAP2(tkill, int, pid, int, sig); COMPAT_SYSCALL_WRAP2(tkill, int, pid, int, sig);
COMPAT_SYSCALL_WRAP3(tgkill, int, tgid, int, pid, int, sig); COMPAT_SYSCALL_WRAP3(tgkill, int, tgid, int, pid, int, sig);
COMPAT_SYSCALL_WRAP5(perf_event_open, struct perf_event_attr __user *, attr_uptr, pid_t, pid, int, cpu, int, group_fd, unsigned long, flags); COMPAT_SYSCALL_WRAP5(perf_event_open, struct perf_event_attr __user *, attr_uptr, pid_t, pid, int, cpu, int, group_fd, unsigned long, flags);
COMPAT_SYSCALL_WRAP5(clone, unsigned long, newsp, unsigned long, clone_flags, int __user *, parent_tidptr, int __user *, child_tidptr, int, tls_val); COMPAT_SYSCALL_WRAP5(clone, unsigned long, newsp, unsigned long, clone_flags, int __user *, parent_tidptr, int __user *, child_tidptr, unsigned long, tls);
COMPAT_SYSCALL_WRAP2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags); COMPAT_SYSCALL_WRAP2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags);
COMPAT_SYSCALL_WRAP4(prlimit64, pid_t, pid, unsigned int, resource, const struct rlimit64 __user *, new_rlim, struct rlimit64 __user *, old_rlim); COMPAT_SYSCALL_WRAP4(prlimit64, pid_t, pid, unsigned int, resource, const struct rlimit64 __user *, new_rlim, struct rlimit64 __user *, old_rlim);
COMPAT_SYSCALL_WRAP5(name_to_handle_at, int, dfd, const char __user *, name, struct file_handle __user *, handle, int __user *, mnt_id, int, flag); COMPAT_SYSCALL_WRAP5(name_to_handle_at, int, dfd, const char __user *, name, struct file_handle __user *, handle, int __user *, mnt_id, int, flag);
......
...@@ -880,6 +880,8 @@ void __init setup_arch(char **cmdline_p) ...@@ -880,6 +880,8 @@ void __init setup_arch(char **cmdline_p)
*/ */
setup_hwcaps(); setup_hwcaps();
HPAGE_SHIFT = MACHINE_HAS_HPAGE ? 20 : 0;
/* /*
* Create kernel page tables and switch to virtual addressing. * Create kernel page tables and switch to virtual addressing.
*/ */
......
...@@ -86,31 +86,16 @@ static inline pte_t __pmd_to_pte(pmd_t pmd) ...@@ -86,31 +86,16 @@ static inline pte_t __pmd_to_pte(pmd_t pmd)
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte) pte_t *ptep, pte_t pte)
{ {
pmd_t pmd; pmd_t pmd = __pte_to_pmd(pte);
pmd = __pte_to_pmd(pte); pmd_val(pmd) |= _SEGMENT_ENTRY_LARGE;
if (!MACHINE_HAS_HPAGE) {
/* Emulated huge ptes loose the dirty and young bit */
pmd_val(pmd) &= ~_SEGMENT_ENTRY_ORIGIN;
pmd_val(pmd) |= pte_page(pte)[1].index;
} else
pmd_val(pmd) |= _SEGMENT_ENTRY_LARGE;
*(pmd_t *) ptep = pmd; *(pmd_t *) ptep = pmd;
} }
pte_t huge_ptep_get(pte_t *ptep) pte_t huge_ptep_get(pte_t *ptep)
{ {
unsigned long origin; pmd_t pmd = *(pmd_t *) ptep;
pmd_t pmd;
pmd = *(pmd_t *) ptep;
if (!MACHINE_HAS_HPAGE && pmd_present(pmd)) {
origin = pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN;
pmd_val(pmd) &= ~_SEGMENT_ENTRY_ORIGIN;
pmd_val(pmd) |= *(unsigned long *) origin;
/* Emulated huge ptes are young and dirty by definition */
pmd_val(pmd) |= _SEGMENT_ENTRY_YOUNG | _SEGMENT_ENTRY_DIRTY;
}
return __pmd_to_pte(pmd); return __pmd_to_pte(pmd);
} }
...@@ -125,45 +110,6 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, ...@@ -125,45 +110,6 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
return pte; return pte;
} }
int arch_prepare_hugepage(struct page *page)
{
unsigned long addr = page_to_phys(page);
pte_t pte;
pte_t *ptep;
int i;
if (MACHINE_HAS_HPAGE)
return 0;
ptep = (pte_t *) pte_alloc_one(&init_mm, addr);
if (!ptep)
return -ENOMEM;
pte_val(pte) = addr;
for (i = 0; i < PTRS_PER_PTE; i++) {
set_pte_at(&init_mm, addr + i * PAGE_SIZE, ptep + i, pte);
pte_val(pte) += PAGE_SIZE;
}
page[1].index = (unsigned long) ptep;
return 0;
}
void arch_release_hugepage(struct page *page)
{
pte_t *ptep;
if (MACHINE_HAS_HPAGE)
return;
ptep = (pte_t *) page[1].index;
if (!ptep)
return;
clear_table((unsigned long *) ptep, _PAGE_INVALID,
PTRS_PER_PTE * sizeof(pte_t));
page_table_free(&init_mm, (unsigned long *) ptep);
page[1].index = 0;
}
pte_t *huge_pte_alloc(struct mm_struct *mm, pte_t *huge_pte_alloc(struct mm_struct *mm,
unsigned long addr, unsigned long sz) unsigned long addr, unsigned long sz)
{ {
...@@ -195,10 +141,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr) ...@@ -195,10 +141,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
int pmd_huge(pmd_t pmd) int pmd_huge(pmd_t pmd)
{ {
if (!MACHINE_HAS_HPAGE) return pmd_large(pmd);
return 0;
return !!(pmd_val(pmd) & _SEGMENT_ENTRY_LARGE);
} }
int pud_huge(pud_t pud) int pud_huge(pud_t pud)
......
...@@ -31,6 +31,8 @@ ...@@ -31,6 +31,8 @@
#define ALLOC_ORDER 2 #define ALLOC_ORDER 2
#define FRAG_MASK 0x03 #define FRAG_MASK 0x03
int HPAGE_SHIFT;
unsigned long *crst_table_alloc(struct mm_struct *mm) unsigned long *crst_table_alloc(struct mm_struct *mm)
{ {
struct page *page = alloc_pages(GFP_KERNEL, ALLOC_ORDER); struct page *page = alloc_pages(GFP_KERNEL, ALLOC_ORDER);
......
...@@ -79,15 +79,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep) ...@@ -79,15 +79,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep)
return *ptep; return *ptep;
} }
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page) static inline void arch_clear_hugepage_flags(struct page *page)
{ {
clear_bit(PG_dcache_clean, &page->flags); clear_bit(PG_dcache_clean, &page->flags);
......
...@@ -78,15 +78,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep) ...@@ -78,15 +78,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep)
return *ptep; return *ptep;
} }
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page) static inline void arch_clear_hugepage_flags(struct page *page)
{ {
} }
......
...@@ -94,15 +94,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep) ...@@ -94,15 +94,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep)
return *ptep; return *ptep;
} }
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page) static inline void arch_clear_hugepage_flags(struct page *page)
{ {
} }
......
...@@ -80,15 +80,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep) ...@@ -80,15 +80,6 @@ static inline pte_t huge_ptep_get(pte_t *ptep)
return *ptep; return *ptep;
} }
static inline int arch_prepare_hugepage(struct page *page)
{
return 0;
}
static inline void arch_release_hugepage(struct page *page)
{
}
static inline void arch_clear_hugepage_flags(struct page *page) static inline void arch_clear_hugepage_flags(struct page *page)
{ {
} }
......
...@@ -1303,12 +1303,11 @@ const char *device_get_devnode(struct device *dev, ...@@ -1303,12 +1303,11 @@ const char *device_get_devnode(struct device *dev,
return dev_name(dev); return dev_name(dev);
/* replace '!' in the name with '/' */ /* replace '!' in the name with '/' */
*tmp = kstrdup(dev_name(dev), GFP_KERNEL); s = kstrdup(dev_name(dev), GFP_KERNEL);
if (!*tmp) if (!s)
return NULL; return NULL;
while ((s = strchr(*tmp, '!'))) strreplace(s, '!', '/');
s[0] = '/'; return *tmp = s;
return *tmp;
} }
/** /**
......
...@@ -23,12 +23,4 @@ config ZRAM_LZ4_COMPRESS ...@@ -23,12 +23,4 @@ config ZRAM_LZ4_COMPRESS
default n default n
help help
This option enables LZ4 compression algorithm support. Compression This option enables LZ4 compression algorithm support. Compression
algorithm can be changed using `comp_algorithm' device attribute. algorithm can be changed using `comp_algorithm' device attribute.
\ No newline at end of file
config ZRAM_DEBUG
bool "Compressed RAM block device debug support"
depends on ZRAM
default n
help
This option adds additional debugging code to the compressed
RAM block device driver.
...@@ -274,7 +274,7 @@ ssize_t zcomp_available_show(const char *comp, char *buf) ...@@ -274,7 +274,7 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
int i = 0; int i = 0;
while (backends[i]) { while (backends[i]) {
if (sysfs_streq(comp, backends[i]->name)) if (!strcmp(comp, backends[i]->name))
sz += scnprintf(buf + sz, PAGE_SIZE - sz - 2, sz += scnprintf(buf + sz, PAGE_SIZE - sz - 2,
"[%s] ", backends[i]->name); "[%s] ", backends[i]->name);
else else
...@@ -286,6 +286,11 @@ ssize_t zcomp_available_show(const char *comp, char *buf) ...@@ -286,6 +286,11 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
return sz; return sz;
} }
bool zcomp_available_algorithm(const char *comp)
{
return find_backend(comp) != NULL;
}
bool zcomp_set_max_streams(struct zcomp *comp, int num_strm) bool zcomp_set_max_streams(struct zcomp *comp, int num_strm)
{ {
return comp->set_max_streams(comp, num_strm); return comp->set_max_streams(comp, num_strm);
......
...@@ -51,6 +51,7 @@ struct zcomp { ...@@ -51,6 +51,7 @@ struct zcomp {
}; };
ssize_t zcomp_available_show(const char *comp, char *buf); ssize_t zcomp_available_show(const char *comp, char *buf);
bool zcomp_available_algorithm(const char *comp);
struct zcomp *zcomp_create(const char *comp, int max_strm); struct zcomp *zcomp_create(const char *comp, int max_strm);
void zcomp_destroy(struct zcomp *comp); void zcomp_destroy(struct zcomp *comp);
......
This diff has been collapsed.
...@@ -20,12 +20,6 @@ ...@@ -20,12 +20,6 @@
#include "zcomp.h" #include "zcomp.h"
/*
* Some arbitrary value. This is just to catch
* invalid value for num_devices module parameter.
*/
static const unsigned max_num_devices = 32;
/*-- Configurable parameters */ /*-- Configurable parameters */
/* /*
...@@ -121,5 +115,9 @@ struct zram { ...@@ -121,5 +115,9 @@ struct zram {
*/ */
u64 disksize; /* bytes */ u64 disksize; /* bytes */
char compressor[10]; char compressor[10];
/*
* zram is claimed so open request will be failed
*/
bool claim; /* Protected by bdev->bd_mutex */
}; };
#endif #endif
...@@ -144,7 +144,9 @@ static struct kobj_type __refdata memmap_ktype = { ...@@ -144,7 +144,9 @@ static struct kobj_type __refdata memmap_ktype = {
* *
* Common implementation of firmware_map_add() and firmware_map_add_early() * Common implementation of firmware_map_add() and firmware_map_add_early()
* which expects a pre-allocated struct firmware_map_entry. * which expects a pre-allocated struct firmware_map_entry.
**/ *
* Return: 0 always
*/
static int firmware_map_add_entry(u64 start, u64 end, static int firmware_map_add_entry(u64 start, u64 end,
const char *type, const char *type,
struct firmware_map_entry *entry) struct firmware_map_entry *entry)
...@@ -170,7 +172,7 @@ static int firmware_map_add_entry(u64 start, u64 end, ...@@ -170,7 +172,7 @@ static int firmware_map_add_entry(u64 start, u64 end,
* @entry: removed entry. * @entry: removed entry.
* *
* The caller must hold map_entries_lock, and release it properly. * The caller must hold map_entries_lock, and release it properly.
**/ */
static inline void firmware_map_remove_entry(struct firmware_map_entry *entry) static inline void firmware_map_remove_entry(struct firmware_map_entry *entry)
{ {
list_del(&entry->list); list_del(&entry->list);
...@@ -208,7 +210,7 @@ static inline void remove_sysfs_fw_map_entry(struct firmware_map_entry *entry) ...@@ -208,7 +210,7 @@ static inline void remove_sysfs_fw_map_entry(struct firmware_map_entry *entry)
kobject_put(&entry->kobj); kobject_put(&entry->kobj);
} }
/* /**
* firmware_map_find_entry_in_list() - Search memmap entry in a given list. * firmware_map_find_entry_in_list() - Search memmap entry in a given list.
* @start: Start of the memory range. * @start: Start of the memory range.
* @end: End of the memory range (exclusive). * @end: End of the memory range (exclusive).
...@@ -236,7 +238,7 @@ firmware_map_find_entry_in_list(u64 start, u64 end, const char *type, ...@@ -236,7 +238,7 @@ firmware_map_find_entry_in_list(u64 start, u64 end, const char *type,
return NULL; return NULL;
} }
/* /**
* firmware_map_find_entry() - Search memmap entry in map_entries. * firmware_map_find_entry() - Search memmap entry in map_entries.
* @start: Start of the memory range. * @start: Start of the memory range.
* @end: End of the memory range (exclusive). * @end: End of the memory range (exclusive).
...@@ -254,7 +256,7 @@ firmware_map_find_entry(u64 start, u64 end, const char *type) ...@@ -254,7 +256,7 @@ firmware_map_find_entry(u64 start, u64 end, const char *type)
return firmware_map_find_entry_in_list(start, end, type, &map_entries); return firmware_map_find_entry_in_list(start, end, type, &map_entries);
} }
/* /**
* firmware_map_find_entry_bootmem() - Search memmap entry in map_entries_bootmem. * firmware_map_find_entry_bootmem() - Search memmap entry in map_entries_bootmem.
* @start: Start of the memory range. * @start: Start of the memory range.
* @end: End of the memory range (exclusive). * @end: End of the memory range (exclusive).
...@@ -283,8 +285,8 @@ firmware_map_find_entry_bootmem(u64 start, u64 end, const char *type) ...@@ -283,8 +285,8 @@ firmware_map_find_entry_bootmem(u64 start, u64 end, const char *type)
* similar to function firmware_map_add_early(). The only difference is that * similar to function firmware_map_add_early(). The only difference is that
* it will create the syfs entry dynamically. * it will create the syfs entry dynamically.
* *
* Returns 0 on success, or -ENOMEM if no memory could be allocated. * Return: 0 on success, or -ENOMEM if no memory could be allocated.
**/ */
int __meminit firmware_map_add_hotplug(u64 start, u64 end, const char *type) int __meminit firmware_map_add_hotplug(u64 start, u64 end, const char *type)
{ {
struct firmware_map_entry *entry; struct firmware_map_entry *entry;
...@@ -325,8 +327,8 @@ int __meminit firmware_map_add_hotplug(u64 start, u64 end, const char *type) ...@@ -325,8 +327,8 @@ int __meminit firmware_map_add_hotplug(u64 start, u64 end, const char *type)
* *
* That function must be called before late_initcall. * That function must be called before late_initcall.
* *
* Returns 0 on success, or -ENOMEM if no memory could be allocated. * Return: 0 on success, or -ENOMEM if no memory could be allocated.
**/ */
int __init firmware_map_add_early(u64 start, u64 end, const char *type) int __init firmware_map_add_early(u64 start, u64 end, const char *type)
{ {
struct firmware_map_entry *entry; struct firmware_map_entry *entry;
...@@ -346,8 +348,8 @@ int __init firmware_map_add_early(u64 start, u64 end, const char *type) ...@@ -346,8 +348,8 @@ int __init firmware_map_add_early(u64 start, u64 end, const char *type)
* *
* removes a firmware mapping entry. * removes a firmware mapping entry.
* *
* Returns 0 on success, or -EINVAL if no entry. * Return: 0 on success, or -EINVAL if no entry.
**/ */
int __meminit firmware_map_remove(u64 start, u64 end, const char *type) int __meminit firmware_map_remove(u64 start, u64 end, const char *type)
{ {
struct firmware_map_entry *entry; struct firmware_map_entry *entry;
......
...@@ -2024,7 +2024,6 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev) ...@@ -2024,7 +2024,6 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
{ {
char b[BDEVNAME_SIZE]; char b[BDEVNAME_SIZE];
struct kobject *ko; struct kobject *ko;
char *s;
int err; int err;
/* prevent duplicates */ /* prevent duplicates */
...@@ -2070,8 +2069,7 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev) ...@@ -2070,8 +2069,7 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
return -EBUSY; return -EBUSY;
} }
bdevname(rdev->bdev,b); bdevname(rdev->bdev,b);
while ( (s=strchr(b, '/')) != NULL) strreplace(b, '/', '!');
*s = '!';
rdev->mddev = mddev; rdev->mddev = mddev;
printk(KERN_INFO "md: bind<%s>\n", b); printk(KERN_INFO "md: bind<%s>\n", b);
......
...@@ -2451,7 +2451,7 @@ int altera_init(struct altera_config *config, const struct firmware *fw) ...@@ -2451,7 +2451,7 @@ int altera_init(struct altera_config *config, const struct firmware *fw)
astate->config = config; astate->config = config;
if (!astate->config->jtag_io) { if (!astate->config->jtag_io) {
dprintk(KERN_INFO "%s: using byteblaster!\n", __func__); dprintk("%s: using byteblaster!\n", __func__);
astate->config->jtag_io = netup_jtag_io_lpt; astate->config->jtag_io = netup_jtag_io_lpt;
} }
......
...@@ -220,7 +220,7 @@ static unsigned long lookup_addr(char *arg) ...@@ -220,7 +220,7 @@ static unsigned long lookup_addr(char *arg)
else if (!strcmp(arg, "sys_open")) else if (!strcmp(arg, "sys_open"))
addr = (unsigned long)do_sys_open; addr = (unsigned long)do_sys_open;
else if (!strcmp(arg, "do_fork")) else if (!strcmp(arg, "do_fork"))
addr = (unsigned long)do_fork; addr = (unsigned long)_do_fork;
else if (!strcmp(arg, "hw_break_val")) else if (!strcmp(arg, "hw_break_val"))
addr = (unsigned long)&hw_break_val; addr = (unsigned long)&hw_break_val;
addr = (unsigned long) dereference_function_descriptor((void *)addr); addr = (unsigned long) dereference_function_descriptor((void *)addr);
......
...@@ -2,7 +2,7 @@ ...@@ -2,7 +2,7 @@
* drivers/misc/spear13xx_pcie_gadget.c * drivers/misc/spear13xx_pcie_gadget.c
* *
* Copyright (C) 2010 ST Microelectronics * Copyright (C) 2010 ST Microelectronics
* Pratyush Anand<pratyush.anand@st.com> * Pratyush Anand<pratyush.anand@gmail.com>
* *
* This file is licensed under the terms of the GNU General Public * This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any * License version 2. This program is licensed "as is" without any
......
...@@ -79,6 +79,12 @@ static LIST_HEAD(target_list); ...@@ -79,6 +79,12 @@ static LIST_HEAD(target_list);
/* This needs to be a spinlock because write_msg() cannot sleep */ /* This needs to be a spinlock because write_msg() cannot sleep */
static DEFINE_SPINLOCK(target_list_lock); static DEFINE_SPINLOCK(target_list_lock);
/*
* Console driver for extended netconsoles. Registered on the first use to
* avoid unnecessarily enabling ext message formatting.
*/
static struct console netconsole_ext;
/** /**
* struct netconsole_target - Represents a configured netconsole target. * struct netconsole_target - Represents a configured netconsole target.
* @list: Links this target into the target_list. * @list: Links this target into the target_list.
...@@ -104,14 +110,15 @@ struct netconsole_target { ...@@ -104,14 +110,15 @@ struct netconsole_target {
#ifdef CONFIG_NETCONSOLE_DYNAMIC #ifdef CONFIG_NETCONSOLE_DYNAMIC
struct config_item item; struct config_item item;
#endif #endif
int enabled; bool enabled;
struct mutex mutex; bool extended;
struct netpoll np; struct netpoll np;
}; };
#ifdef CONFIG_NETCONSOLE_DYNAMIC #ifdef CONFIG_NETCONSOLE_DYNAMIC
static struct configfs_subsystem netconsole_subsys; static struct configfs_subsystem netconsole_subsys;
static DEFINE_MUTEX(dynamic_netconsole_mutex);
static int __init dynamic_netconsole_init(void) static int __init dynamic_netconsole_init(void)
{ {
...@@ -185,9 +192,13 @@ static struct netconsole_target *alloc_param_target(char *target_config) ...@@ -185,9 +192,13 @@ static struct netconsole_target *alloc_param_target(char *target_config)
strlcpy(nt->np.dev_name, "eth0", IFNAMSIZ); strlcpy(nt->np.dev_name, "eth0", IFNAMSIZ);
nt->np.local_port = 6665; nt->np.local_port = 6665;
nt->np.remote_port = 6666; nt->np.remote_port = 6666;
mutex_init(&nt->mutex);
eth_broadcast_addr(nt->np.remote_mac); eth_broadcast_addr(nt->np.remote_mac);
if (*target_config == '+') {
nt->extended = true;
target_config++;
}
/* Parse parameters and setup netpoll */ /* Parse parameters and setup netpoll */
err = netpoll_parse_options(&nt->np, target_config); err = netpoll_parse_options(&nt->np, target_config);
if (err) if (err)
...@@ -197,7 +208,7 @@ static struct netconsole_target *alloc_param_target(char *target_config) ...@@ -197,7 +208,7 @@ static struct netconsole_target *alloc_param_target(char *target_config)
if (err) if (err)
goto fail; goto fail;
nt->enabled = 1; nt->enabled = true;
return nt; return nt;
...@@ -258,6 +269,11 @@ static ssize_t show_enabled(struct netconsole_target *nt, char *buf) ...@@ -258,6 +269,11 @@ static ssize_t show_enabled(struct netconsole_target *nt, char *buf)
return snprintf(buf, PAGE_SIZE, "%d\n", nt->enabled); return snprintf(buf, PAGE_SIZE, "%d\n", nt->enabled);
} }
static ssize_t show_extended(struct netconsole_target *nt, char *buf)
{
return snprintf(buf, PAGE_SIZE, "%d\n", nt->extended);
}
static ssize_t show_dev_name(struct netconsole_target *nt, char *buf) static ssize_t show_dev_name(struct netconsole_target *nt, char *buf)
{ {
return snprintf(buf, PAGE_SIZE, "%s\n", nt->np.dev_name); return snprintf(buf, PAGE_SIZE, "%s\n", nt->np.dev_name);
...@@ -322,13 +338,18 @@ static ssize_t store_enabled(struct netconsole_target *nt, ...@@ -322,13 +338,18 @@ static ssize_t store_enabled(struct netconsole_target *nt,
return err; return err;
if (enabled < 0 || enabled > 1) if (enabled < 0 || enabled > 1)
return -EINVAL; return -EINVAL;
if (enabled == nt->enabled) { if ((bool)enabled == nt->enabled) {
pr_info("network logging has already %s\n", pr_info("network logging has already %s\n",
nt->enabled ? "started" : "stopped"); nt->enabled ? "started" : "stopped");
return -EINVAL; return -EINVAL;
} }
if (enabled) { /* 1 */ if (enabled) { /* true */
if (nt->extended && !(netconsole_ext.flags & CON_ENABLED)) {
netconsole_ext.flags |= CON_ENABLED;
register_console(&netconsole_ext);
}
/* /*
* Skip netpoll_parse_options() -- all the attributes are * Skip netpoll_parse_options() -- all the attributes are
* already configured via configfs. Just print them out. * already configured via configfs. Just print them out.
...@@ -340,13 +361,13 @@ static ssize_t store_enabled(struct netconsole_target *nt, ...@@ -340,13 +361,13 @@ static ssize_t store_enabled(struct netconsole_target *nt,
return err; return err;
pr_info("netconsole: network logging started\n"); pr_info("netconsole: network logging started\n");
} else { /* 0 */ } else { /* false */
/* We need to disable the netconsole before cleaning it up /* We need to disable the netconsole before cleaning it up
* otherwise we might end up in write_msg() with * otherwise we might end up in write_msg() with
* nt->np.dev == NULL and nt->enabled == 1 * nt->np.dev == NULL and nt->enabled == true
*/ */
spin_lock_irqsave(&target_list_lock, flags); spin_lock_irqsave(&target_list_lock, flags);
nt->enabled = 0; nt->enabled = false;
spin_unlock_irqrestore(&target_list_lock, flags); spin_unlock_irqrestore(&target_list_lock, flags);
netpoll_cleanup(&nt->np); netpoll_cleanup(&nt->np);
} }
...@@ -356,6 +377,30 @@ static ssize_t store_enabled(struct netconsole_target *nt, ...@@ -356,6 +377,30 @@ static ssize_t store_enabled(struct netconsole_target *nt,
return strnlen(buf, count); return strnlen(buf, count);
} }
static ssize_t store_extended(struct netconsole_target *nt,
const char *buf,
size_t count)
{
int extended;
int err;
if (nt->enabled) {
pr_err("target (%s) is enabled, disable to update parameters\n",
config_item_name(&nt->item));
return -EINVAL;
}
err = kstrtoint(buf, 10, &extended);
if (err < 0)
return err;
if (extended < 0 || extended > 1)
return -EINVAL;
nt->extended = extended;
return strnlen(buf, count);
}
static ssize_t store_dev_name(struct netconsole_target *nt, static ssize_t store_dev_name(struct netconsole_target *nt,
const char *buf, const char *buf,
size_t count) size_t count)
...@@ -508,6 +553,7 @@ static struct netconsole_target_attr netconsole_target_##_name = \ ...@@ -508,6 +553,7 @@ static struct netconsole_target_attr netconsole_target_##_name = \
__CONFIGFS_ATTR(_name, S_IRUGO | S_IWUSR, show_##_name, store_##_name) __CONFIGFS_ATTR(_name, S_IRUGO | S_IWUSR, show_##_name, store_##_name)
NETCONSOLE_TARGET_ATTR_RW(enabled); NETCONSOLE_TARGET_ATTR_RW(enabled);
NETCONSOLE_TARGET_ATTR_RW(extended);
NETCONSOLE_TARGET_ATTR_RW(dev_name); NETCONSOLE_TARGET_ATTR_RW(dev_name);
NETCONSOLE_TARGET_ATTR_RW(local_port); NETCONSOLE_TARGET_ATTR_RW(local_port);
NETCONSOLE_TARGET_ATTR_RW(remote_port); NETCONSOLE_TARGET_ATTR_RW(remote_port);
...@@ -518,6 +564,7 @@ NETCONSOLE_TARGET_ATTR_RW(remote_mac); ...@@ -518,6 +564,7 @@ NETCONSOLE_TARGET_ATTR_RW(remote_mac);
static struct configfs_attribute *netconsole_target_attrs[] = { static struct configfs_attribute *netconsole_target_attrs[] = {
&netconsole_target_enabled.attr, &netconsole_target_enabled.attr,
&netconsole_target_extended.attr,
&netconsole_target_dev_name.attr, &netconsole_target_dev_name.attr,
&netconsole_target_local_port.attr, &netconsole_target_local_port.attr,
&netconsole_target_remote_port.attr, &netconsole_target_remote_port.attr,
...@@ -562,10 +609,10 @@ static ssize_t netconsole_target_attr_store(struct config_item *item, ...@@ -562,10 +609,10 @@ static ssize_t netconsole_target_attr_store(struct config_item *item,
struct netconsole_target_attr *na = struct netconsole_target_attr *na =
container_of(attr, struct netconsole_target_attr, attr); container_of(attr, struct netconsole_target_attr, attr);
mutex_lock(&nt->mutex); mutex_lock(&dynamic_netconsole_mutex);
if (na->store) if (na->store)
ret = na->store(nt, buf, count); ret = na->store(nt, buf, count);
mutex_unlock(&nt->mutex); mutex_unlock(&dynamic_netconsole_mutex);
return ret; return ret;
} }
...@@ -594,7 +641,7 @@ static struct config_item *make_netconsole_target(struct config_group *group, ...@@ -594,7 +641,7 @@ static struct config_item *make_netconsole_target(struct config_group *group,
/* /*
* Allocate and initialize with defaults. * Allocate and initialize with defaults.
* Target is disabled at creation (enabled == 0). * Target is disabled at creation (!enabled).
*/ */
nt = kzalloc(sizeof(*nt), GFP_KERNEL); nt = kzalloc(sizeof(*nt), GFP_KERNEL);
if (!nt) if (!nt)
...@@ -604,7 +651,6 @@ static struct config_item *make_netconsole_target(struct config_group *group, ...@@ -604,7 +651,6 @@ static struct config_item *make_netconsole_target(struct config_group *group,
strlcpy(nt->np.dev_name, "eth0", IFNAMSIZ); strlcpy(nt->np.dev_name, "eth0", IFNAMSIZ);
nt->np.local_port = 6665; nt->np.local_port = 6665;
nt->np.remote_port = 6666; nt->np.remote_port = 6666;
mutex_init(&nt->mutex);
eth_broadcast_addr(nt->np.remote_mac); eth_broadcast_addr(nt->np.remote_mac);
/* Initialize the config_item member */ /* Initialize the config_item member */
...@@ -695,7 +741,7 @@ static int netconsole_netdev_event(struct notifier_block *this, ...@@ -695,7 +741,7 @@ static int netconsole_netdev_event(struct notifier_block *this,
spin_lock_irqsave(&target_list_lock, flags); spin_lock_irqsave(&target_list_lock, flags);
dev_put(nt->np.dev); dev_put(nt->np.dev);
nt->np.dev = NULL; nt->np.dev = NULL;
nt->enabled = 0; nt->enabled = false;
stopped = true; stopped = true;
netconsole_target_put(nt); netconsole_target_put(nt);
goto restart; goto restart;
...@@ -729,6 +775,82 @@ static struct notifier_block netconsole_netdev_notifier = { ...@@ -729,6 +775,82 @@ static struct notifier_block netconsole_netdev_notifier = {
.notifier_call = netconsole_netdev_event, .notifier_call = netconsole_netdev_event,
}; };
/**
* send_ext_msg_udp - send extended log message to target
* @nt: target to send message to
* @msg: extended log message to send
* @msg_len: length of message
*
* Transfer extended log @msg to @nt. If @msg is longer than
* MAX_PRINT_CHUNK, it'll be split and transmitted in multiple chunks with
* ncfrag header field added to identify them.
*/
static void send_ext_msg_udp(struct netconsole_target *nt, const char *msg,
int msg_len)
{
static char buf[MAX_PRINT_CHUNK]; /* protected by target_list_lock */
const char *header, *body;
int offset = 0;
int header_len, body_len;
if (msg_len <= MAX_PRINT_CHUNK) {
netpoll_send_udp(&nt->np, msg, msg_len);
return;
}
/* need to insert extra header fields, detect header and body */
header = msg;
body = memchr(msg, ';', msg_len);
if (WARN_ON_ONCE(!body))
return;
header_len = body - header;
body_len = msg_len - header_len - 1;
body++;
/*
* Transfer multiple chunks with the following extra header.
* "ncfrag=<byte-offset>/<total-bytes>"
*/
memcpy(buf, header, header_len);
while (offset < body_len) {
int this_header = header_len;
int this_chunk;
this_header += scnprintf(buf + this_header,
sizeof(buf) - this_header,
",ncfrag=%d/%d;", offset, body_len);
this_chunk = min(body_len - offset,
MAX_PRINT_CHUNK - this_header);
if (WARN_ON_ONCE(this_chunk <= 0))
return;
memcpy(buf + this_header, body + offset, this_chunk);
netpoll_send_udp(&nt->np, buf, this_header + this_chunk);
offset += this_chunk;
}
}
static void write_ext_msg(struct console *con, const char *msg,
unsigned int len)
{
struct netconsole_target *nt;
unsigned long flags;
if ((oops_only && !oops_in_progress) || list_empty(&target_list))
return;
spin_lock_irqsave(&target_list_lock, flags);
list_for_each_entry(nt, &target_list, list)
if (nt->extended && nt->enabled && netif_running(nt->np.dev))
send_ext_msg_udp(nt, msg, len);
spin_unlock_irqrestore(&target_list_lock, flags);
}
static void write_msg(struct console *con, const char *msg, unsigned int len) static void write_msg(struct console *con, const char *msg, unsigned int len)
{ {
int frag, left; int frag, left;
...@@ -744,8 +866,7 @@ static void write_msg(struct console *con, const char *msg, unsigned int len) ...@@ -744,8 +866,7 @@ static void write_msg(struct console *con, const char *msg, unsigned int len)
spin_lock_irqsave(&target_list_lock, flags); spin_lock_irqsave(&target_list_lock, flags);
list_for_each_entry(nt, &target_list, list) { list_for_each_entry(nt, &target_list, list) {
netconsole_target_get(nt); if (!nt->extended && nt->enabled && netif_running(nt->np.dev)) {
if (nt->enabled && netif_running(nt->np.dev)) {
/* /*
* We nest this inside the for-each-target loop above * We nest this inside the for-each-target loop above
* so that we're able to get as much logging out to * so that we're able to get as much logging out to
...@@ -760,11 +881,16 @@ static void write_msg(struct console *con, const char *msg, unsigned int len) ...@@ -760,11 +881,16 @@ static void write_msg(struct console *con, const char *msg, unsigned int len)
left -= frag; left -= frag;
} }
} }
netconsole_target_put(nt);
} }
spin_unlock_irqrestore(&target_list_lock, flags); spin_unlock_irqrestore(&target_list_lock, flags);
} }
static struct console netconsole_ext = {
.name = "netcon_ext",
.flags = CON_EXTENDED, /* starts disabled, registered on first use */
.write = write_ext_msg,
};
static struct console netconsole = { static struct console netconsole = {
.name = "netcon", .name = "netcon",
.flags = CON_ENABLED, .flags = CON_ENABLED,
...@@ -787,7 +913,11 @@ static int __init init_netconsole(void) ...@@ -787,7 +913,11 @@ static int __init init_netconsole(void)
goto fail; goto fail;
} }
/* Dump existing printks when we register */ /* Dump existing printks when we register */
-	netconsole.flags |= CON_PRINTBUFFER;
+	if (nt->extended)
+		netconsole_ext.flags |= CON_PRINTBUFFER |
+				CON_ENABLED;
+	else
+		netconsole.flags |= CON_PRINTBUFFER;
spin_lock_irqsave(&target_list_lock, flags); spin_lock_irqsave(&target_list_lock, flags);
list_add(&nt->list, &target_list); list_add(&nt->list, &target_list);
...@@ -803,6 +933,8 @@ static int __init init_netconsole(void) ...@@ -803,6 +933,8 @@ static int __init init_netconsole(void)
if (err) if (err)
goto undonotifier; goto undonotifier;
if (netconsole_ext.flags & CON_ENABLED)
register_console(&netconsole_ext);
register_console(&netconsole); register_console(&netconsole);
pr_info("network logging started\n"); pr_info("network logging started\n");
...@@ -831,6 +963,7 @@ static void __exit cleanup_netconsole(void) ...@@ -831,6 +963,7 @@ static void __exit cleanup_netconsole(void)
{ {
struct netconsole_target *nt, *tmp; struct netconsole_target *nt, *tmp;
unregister_console(&netconsole_ext);
unregister_console(&netconsole); unregister_console(&netconsole);
dynamic_netconsole_exit(); dynamic_netconsole_exit();
unregister_netdevice_notifier(&netconsole_netdev_notifier); unregister_netdevice_notifier(&netconsole_netdev_notifier);
......
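Not part of the patch: a minimal userspace sketch of the receiving side, written against the chunk header format documented in send_ext_msg_udp() and the default remote_port (6666) set in alloc_param_target() above. It only reports the "ncfrag=<byte-offset>/<total-bytes>" field of each datagram; reassembly and error handling are left out, and the buffer is simply sized to be comfortably larger than MAX_PRINT_CHUNK.

/* Listen for extended netconsole records and print their ncfrag field. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	char buf[2048];				/* > MAX_PRINT_CHUNK */
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(6666),	/* default nt->np.remote_port */
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	for (;;) {
		ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
		char *semi, *frag;
		int off, total;

		if (n <= 0)
			break;
		buf[n] = '\0';
		semi = strchr(buf, ';');	/* header ends at the first ';' */
		if (semi)
			*semi = '\0';
		frag = strstr(buf, "ncfrag=");
		if (frag && sscanf(frag, "ncfrag=%d/%d", &off, &total) == 2)
			printf("fragment at offset %d of %d body bytes\n", off, total);
		else
			printf("unfragmented record, header: %s\n", buf);
	}
	close(fd);
	return 0;
}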
...@@ -4,8 +4,8 @@ ...@@ -4,8 +4,8 @@
* SPEAr13xx PCIe Glue Layer Source Code * SPEAr13xx PCIe Glue Layer Source Code
* *
* Copyright (C) 2010-2014 ST Microelectronics * Copyright (C) 2010-2014 ST Microelectronics
* Pratyush Anand <pratyush.anand@st.com> * Pratyush Anand <pratyush.anand@gmail.com>
* Mohit Kumar <mohit.kumar@st.com> * Mohit Kumar <mohit.kumar.dhaka@gmail.com>
* *
* This file is licensed under the terms of the GNU General Public * This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any * License version 2. This program is licensed "as is" without any
...@@ -386,5 +386,5 @@ static int __init spear13xx_pcie_init(void) ...@@ -386,5 +386,5 @@ static int __init spear13xx_pcie_init(void)
module_init(spear13xx_pcie_init); module_init(spear13xx_pcie_init);
MODULE_DESCRIPTION("ST Microelectronics SPEAr13xx PCIe host controller driver"); MODULE_DESCRIPTION("ST Microelectronics SPEAr13xx PCIe host controller driver");
MODULE_AUTHOR("Pratyush Anand <pratyush.anand@st.com>"); MODULE_AUTHOR("Pratyush Anand <pratyush.anand@gmail.com>");
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");
...@@ -2,8 +2,8 @@ ...@@ -2,8 +2,8 @@
* ST SPEAr1310-miphy driver * ST SPEAr1310-miphy driver
* *
* Copyright (C) 2014 ST Microelectronics * Copyright (C) 2014 ST Microelectronics
* Pratyush Anand <pratyush.anand@st.com> * Pratyush Anand <pratyush.anand@gmail.com>
* Mohit Kumar <mohit.kumar@st.com> * Mohit Kumar <mohit.kumar.dhaka@gmail.com>
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as * it under the terms of the GNU General Public License version 2 as
...@@ -257,5 +257,5 @@ static struct platform_driver spear1310_miphy_driver = { ...@@ -257,5 +257,5 @@ static struct platform_driver spear1310_miphy_driver = {
module_platform_driver(spear1310_miphy_driver); module_platform_driver(spear1310_miphy_driver);
MODULE_DESCRIPTION("ST SPEAR1310-MIPHY driver"); MODULE_DESCRIPTION("ST SPEAR1310-MIPHY driver");
MODULE_AUTHOR("Pratyush Anand <pratyush.anand@st.com>"); MODULE_AUTHOR("Pratyush Anand <pratyush.anand@gmail.com>");
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");
...@@ -2,8 +2,8 @@ ...@@ -2,8 +2,8 @@
* ST spear1340-miphy driver * ST spear1340-miphy driver
* *
* Copyright (C) 2014 ST Microelectronics * Copyright (C) 2014 ST Microelectronics
* Pratyush Anand <pratyush.anand@st.com> * Pratyush Anand <pratyush.anand@gmail.com>
* Mohit Kumar <mohit.kumar@st.com> * Mohit Kumar <mohit.kumar.dhaka@gmail.com>
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as * it under the terms of the GNU General Public License version 2 as
...@@ -290,5 +290,5 @@ static struct platform_driver spear1340_miphy_driver = { ...@@ -290,5 +290,5 @@ static struct platform_driver spear1340_miphy_driver = {
module_platform_driver(spear1340_miphy_driver); module_platform_driver(spear1340_miphy_driver);
MODULE_DESCRIPTION("ST SPEAR1340-MIPHY driver"); MODULE_DESCRIPTION("ST SPEAR1340-MIPHY driver");
MODULE_AUTHOR("Pratyush Anand <pratyush.anand@st.com>"); MODULE_AUTHOR("Pratyush Anand <pratyush.anand@gmail.com>");
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");
...@@ -4,7 +4,7 @@ ...@@ -4,7 +4,7 @@
* Test pattern generation for Link Layer Validation System Tests * Test pattern generation for Link Layer Validation System Tests
* *
* Copyright (C) 2014 ST Microelectronics * Copyright (C) 2014 ST Microelectronics
* Pratyush Anand <pratyush.anand@st.com> * Pratyush Anand <pratyush.anand@gmail.com>
* *
* This file is licensed under the terms of the GNU General Public * This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any * License version 2. This program is licensed "as is" without any
......
...@@ -137,8 +137,8 @@ static int ...@@ -137,8 +137,8 @@ static int
befs_bt_read_super(struct super_block *sb, befs_data_stream * ds, befs_bt_read_super(struct super_block *sb, befs_data_stream * ds,
befs_btree_super * sup) befs_btree_super * sup)
{ {
struct buffer_head *bh = NULL; struct buffer_head *bh;
befs_disk_btree_super *od_sup = NULL; befs_disk_btree_super *od_sup;
befs_debug(sb, "---> %s", __func__); befs_debug(sb, "---> %s", __func__);
...@@ -250,7 +250,7 @@ int ...@@ -250,7 +250,7 @@ int
befs_btree_find(struct super_block *sb, befs_data_stream * ds, befs_btree_find(struct super_block *sb, befs_data_stream * ds,
const char *key, befs_off_t * value) const char *key, befs_off_t * value)
{ {
struct befs_btree_node *this_node = NULL; struct befs_btree_node *this_node;
befs_btree_super bt_super; befs_btree_super bt_super;
befs_off_t node_off; befs_off_t node_off;
int res; int res;
......
...@@ -70,7 +70,8 @@ static int expand_corename(struct core_name *cn, int size) ...@@ -70,7 +70,8 @@ static int expand_corename(struct core_name *cn, int size)
return 0; return 0;
} }
static int cn_vprintf(struct core_name *cn, const char *fmt, va_list arg) static __printf(2, 0) int cn_vprintf(struct core_name *cn, const char *fmt,
va_list arg)
{ {
int free, need; int free, need;
va_list arg_copy; va_list arg_copy;
...@@ -93,7 +94,7 @@ static int cn_vprintf(struct core_name *cn, const char *fmt, va_list arg) ...@@ -93,7 +94,7 @@ static int cn_vprintf(struct core_name *cn, const char *fmt, va_list arg)
return -ENOMEM; return -ENOMEM;
} }
static int cn_printf(struct core_name *cn, const char *fmt, ...) static __printf(2, 3) int cn_printf(struct core_name *cn, const char *fmt, ...)
{ {
va_list arg; va_list arg;
int ret; int ret;
...@@ -105,7 +106,8 @@ static int cn_printf(struct core_name *cn, const char *fmt, ...) ...@@ -105,7 +106,8 @@ static int cn_printf(struct core_name *cn, const char *fmt, ...)
return ret; return ret;
} }
static int cn_esc_printf(struct core_name *cn, const char *fmt, ...) static __printf(2, 3)
int cn_esc_printf(struct core_name *cn, const char *fmt, ...)
{ {
int cur = cn->used; int cur = cn->used;
va_list arg; va_list arg;
...@@ -209,11 +211,15 @@ static int format_corename(struct core_name *cn, struct coredump_params *cprm) ...@@ -209,11 +211,15 @@ static int format_corename(struct core_name *cn, struct coredump_params *cprm)
break; break;
/* uid */ /* uid */
case 'u': case 'u':
err = cn_printf(cn, "%d", cred->uid); err = cn_printf(cn, "%u",
from_kuid(&init_user_ns,
cred->uid));
break; break;
/* gid */ /* gid */
case 'g': case 'g':
err = cn_printf(cn, "%d", cred->gid); err = cn_printf(cn, "%u",
from_kgid(&init_user_ns,
cred->gid));
break; break;
case 'd': case 'd':
err = cn_printf(cn, "%d", err = cn_printf(cn, "%d",
...@@ -221,7 +227,8 @@ static int format_corename(struct core_name *cn, struct coredump_params *cprm) ...@@ -221,7 +227,8 @@ static int format_corename(struct core_name *cn, struct coredump_params *cprm)
break; break;
/* signal that caused the coredump */ /* signal that caused the coredump */
case 's': case 's':
err = cn_printf(cn, "%ld", cprm->siginfo->si_signo); err = cn_printf(cn, "%d",
cprm->siginfo->si_signo);
break; break;
/* UNIX time of coredump */ /* UNIX time of coredump */
case 't': { case 't': {
......
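The __printf() annotations added above are mechanical, but they let gcc's -Wformat checking cover cn_printf() and friends the same way it covers printf() itself, which also documents why the corename code now converts kuid_t/kgid_t with from_kuid()/from_kgid() before formatting. A standalone sketch of the effect; all names here are illustrative, not the kernel's:

#include <stdarg.h>
#include <stdio.h>

#define __printf(a, b) __attribute__((format(printf, a, b)))

struct core_name_demo {
	char buf[128];
	int used;
};

static __printf(2, 3)
int cn_printf_demo(struct core_name_demo *cn, const char *fmt, ...)
{
	va_list ap;
	int n;

	va_start(ap, fmt);
	n = vsnprintf(cn->buf + cn->used, sizeof(cn->buf) - cn->used, fmt, ap);
	va_end(ap);
	if (n > 0)
		cn->used += n;
	return n;
}

int main(void)
{
	struct core_name_demo cn = { .used = 0 };
	unsigned int uid = 1000;	/* stand-in for from_kuid(&init_user_ns, cred->uid) */

	cn_printf_demo(&cn, "core.%u", uid);		/* checked by -Wformat: OK */
	/* cn_printf_demo(&cn, "core.%s", uid); would now trigger a warning */
	puts(cn.buf);
	return 0;
}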
...@@ -67,7 +67,7 @@ static struct kmem_cache * efs_inode_cachep; ...@@ -67,7 +67,7 @@ static struct kmem_cache * efs_inode_cachep;
static struct inode *efs_alloc_inode(struct super_block *sb) static struct inode *efs_alloc_inode(struct super_block *sb)
{ {
struct efs_inode_info *ei; struct efs_inode_info *ei;
ei = (struct efs_inode_info *)kmem_cache_alloc(efs_inode_cachep, GFP_KERNEL); ei = kmem_cache_alloc(efs_inode_cachep, GFP_KERNEL);
if (!ei) if (!ei)
return NULL; return NULL;
return &ei->vfs_inode; return &ei->vfs_inode;
......
...@@ -3446,7 +3446,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) ...@@ -3446,7 +3446,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
unsigned long journal_devnum = 0; unsigned long journal_devnum = 0;
unsigned long def_mount_opts; unsigned long def_mount_opts;
struct inode *root; struct inode *root;
char *cp;
const char *descr; const char *descr;
int ret = -ENOMEM; int ret = -ENOMEM;
int blocksize, clustersize; int blocksize, clustersize;
...@@ -3477,8 +3476,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) ...@@ -3477,8 +3476,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
part_stat_read(sb->s_bdev->bd_part, sectors[1]); part_stat_read(sb->s_bdev->bd_part, sectors[1]);
/* Cleanup superblock name */ /* Cleanup superblock name */
for (cp = sb->s_id; (cp = strchr(cp, '/'));) strreplace(sb->s_id, '/', '!');
*cp = '!';
/* -EINVAL is default */ /* -EINVAL is default */
ret = -EINVAL; ret = -EINVAL;
......
...@@ -1135,7 +1135,6 @@ journal_t * jbd2_journal_init_dev(struct block_device *bdev, ...@@ -1135,7 +1135,6 @@ journal_t * jbd2_journal_init_dev(struct block_device *bdev,
{ {
journal_t *journal = journal_init_common(); journal_t *journal = journal_init_common();
struct buffer_head *bh; struct buffer_head *bh;
char *p;
int n; int n;
if (!journal) if (!journal)
...@@ -1148,9 +1147,7 @@ journal_t * jbd2_journal_init_dev(struct block_device *bdev, ...@@ -1148,9 +1147,7 @@ journal_t * jbd2_journal_init_dev(struct block_device *bdev,
journal->j_blk_offset = start; journal->j_blk_offset = start;
journal->j_maxlen = len; journal->j_maxlen = len;
bdevname(journal->j_dev, journal->j_devname); bdevname(journal->j_dev, journal->j_devname);
p = journal->j_devname; strreplace(journal->j_devname, '/', '!');
while ((p = strchr(p, '/')))
*p = '!';
jbd2_stats_proc_init(journal); jbd2_stats_proc_init(journal);
n = journal->j_blocksize / sizeof(journal_block_tag_t); n = journal->j_blocksize / sizeof(journal_block_tag_t);
journal->j_wbufsize = n; journal->j_wbufsize = n;
...@@ -1202,10 +1199,7 @@ journal_t * jbd2_journal_init_inode (struct inode *inode) ...@@ -1202,10 +1199,7 @@ journal_t * jbd2_journal_init_inode (struct inode *inode)
journal->j_dev = journal->j_fs_dev = inode->i_sb->s_bdev; journal->j_dev = journal->j_fs_dev = inode->i_sb->s_bdev;
journal->j_inode = inode; journal->j_inode = inode;
bdevname(journal->j_dev, journal->j_devname); bdevname(journal->j_dev, journal->j_devname);
p = journal->j_devname; p = strreplace(journal->j_devname, '/', '!');
while ((p = strchr(p, '/')))
*p = '!';
p = journal->j_devname + strlen(journal->j_devname);
sprintf(p, "-%lu", journal->j_inode->i_ino); sprintf(p, "-%lu", journal->j_inode->i_ino);
jbd_debug(1, jbd_debug(1,
"journal %p: inode %s/%ld, size %Ld, bits %d, blksize %ld\n", "journal %p: inode %s/%ld, size %Ld, bits %d, blksize %ld\n",
......
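The ext4 and jbd2 hunks above both rely on the new strreplace() helper, whose declaration shows up in the <linux/string.h> hunk later in this diff. A standalone sketch of the assumed semantics -- replace every occurrence of one character and return a pointer to the terminating NUL, which is exactly what lets jbd2 append "-<inode>" without a second strlen():

#include <stdio.h>

static char *strreplace_demo(char *s, char old, char new)
{
	for (; *s; ++s)
		if (*s == old)
			*s = new;
	return s;			/* points at the trailing '\0' */
}

int main(void)
{
	char devname[32] = "sda1/log";
	char *end = strreplace_demo(devname, '/', '!');

	sprintf(end, "-%lu", 42UL);	/* mirrors jbd2_journal_init_inode() */
	puts(devname);			/* prints "sda1!log-42" */
	return 0;
}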
...@@ -62,7 +62,7 @@ static struct kmem_cache * minix_inode_cachep; ...@@ -62,7 +62,7 @@ static struct kmem_cache * minix_inode_cachep;
static struct inode *minix_alloc_inode(struct super_block *sb) static struct inode *minix_alloc_inode(struct super_block *sb)
{ {
struct minix_inode_info *ei; struct minix_inode_info *ei;
ei = (struct minix_inode_info *)kmem_cache_alloc(minix_inode_cachep, GFP_KERNEL); ei = kmem_cache_alloc(minix_inode_cachep, GFP_KERNEL);
if (!ei) if (!ei)
return NULL; return NULL;
return &ei->vfs_inode; return &ei->vfs_inode;
......
...@@ -496,8 +496,7 @@ static struct dentry *nilfs_fh_to_dentry(struct super_block *sb, struct fid *fh, ...@@ -496,8 +496,7 @@ static struct dentry *nilfs_fh_to_dentry(struct super_block *sb, struct fid *fh,
{ {
struct nilfs_fid *fid = (struct nilfs_fid *)fh; struct nilfs_fid *fid = (struct nilfs_fid *)fh;
if ((fh_len != NILFS_FID_SIZE_NON_CONNECTABLE && if (fh_len < NILFS_FID_SIZE_NON_CONNECTABLE ||
fh_len != NILFS_FID_SIZE_CONNECTABLE) ||
(fh_type != FILEID_NILFS_WITH_PARENT && (fh_type != FILEID_NILFS_WITH_PARENT &&
fh_type != FILEID_NILFS_WITHOUT_PARENT)) fh_type != FILEID_NILFS_WITHOUT_PARENT))
return NULL; return NULL;
...@@ -510,7 +509,7 @@ static struct dentry *nilfs_fh_to_parent(struct super_block *sb, struct fid *fh, ...@@ -510,7 +509,7 @@ static struct dentry *nilfs_fh_to_parent(struct super_block *sb, struct fid *fh,
{ {
struct nilfs_fid *fid = (struct nilfs_fid *)fh; struct nilfs_fid *fid = (struct nilfs_fid *)fh;
if (fh_len != NILFS_FID_SIZE_CONNECTABLE || if (fh_len < NILFS_FID_SIZE_CONNECTABLE ||
fh_type != FILEID_NILFS_WITH_PARENT) fh_type != FILEID_NILFS_WITH_PARENT)
return NULL; return NULL;
......
...@@ -71,3 +71,7 @@ config PROC_PAGE_MONITOR ...@@ -71,3 +71,7 @@ config PROC_PAGE_MONITOR
/proc/pid/smaps, /proc/pid/clear_refs, /proc/pid/pagemap, /proc/pid/smaps, /proc/pid/clear_refs, /proc/pid/pagemap,
/proc/kpagecount, and /proc/kpageflags. Disabling these /proc/kpagecount, and /proc/kpageflags. Disabling these
interfaces will reduce the size of the kernel by approximately 4kb. interfaces will reduce the size of the kernel by approximately 4kb.
config PROC_CHILDREN
bool "Include /proc/<pid>/task/<tid>/children file"
default n
...@@ -577,7 +577,7 @@ int proc_pid_statm(struct seq_file *m, struct pid_namespace *ns, ...@@ -577,7 +577,7 @@ int proc_pid_statm(struct seq_file *m, struct pid_namespace *ns,
return 0; return 0;
} }
#ifdef CONFIG_CHECKPOINT_RESTORE #ifdef CONFIG_PROC_CHILDREN
static struct pid * static struct pid *
get_children_pid(struct inode *inode, struct pid *pid_prev, loff_t pos) get_children_pid(struct inode *inode, struct pid *pid_prev, loff_t pos)
{ {
...@@ -700,4 +700,4 @@ const struct file_operations proc_tid_children_operations = { ...@@ -700,4 +700,4 @@ const struct file_operations proc_tid_children_operations = {
.llseek = seq_lseek, .llseek = seq_lseek,
.release = children_seq_release, .release = children_seq_release,
}; };
#endif /* CONFIG_CHECKPOINT_RESTORE */ #endif /* CONFIG_PROC_CHILDREN */
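For context (not part of the patch): the file gated above is the per-task children list, now selectable on its own as CONFIG_PROC_CHILDREN instead of riding on CONFIG_CHECKPOINT_RESTORE. A small userspace sketch that forks a child and reads its own list; it assumes the running kernel has the option enabled.

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	char path[64], line[256];
	pid_t child = fork();
	FILE *f;

	if (child < 0)
		return 1;
	if (child == 0) {		/* child just hangs around to be listed */
		pause();
		_exit(0);
	}

	snprintf(path, sizeof(path), "/proc/%d/task/%d/children",
		 getpid(), getpid());
	f = fopen(path, "r");
	if (f && fgets(line, sizeof(line), f))
		printf("children of %d: %s\n", getpid(), line);
	if (f)
		fclose(f);

	kill(child, SIGTERM);
	waitpid(child, NULL, 0);
	return 0;
}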
...@@ -196,18 +196,205 @@ static int proc_root_link(struct dentry *dentry, struct path *path) ...@@ -196,18 +196,205 @@ static int proc_root_link(struct dentry *dentry, struct path *path)
return result; return result;
} }
static int proc_pid_cmdline(struct seq_file *m, struct pid_namespace *ns, static ssize_t proc_pid_cmdline_read(struct file *file, char __user *buf,
struct pid *pid, struct task_struct *task) size_t _count, loff_t *pos)
{ {
struct task_struct *tsk;
struct mm_struct *mm;
char *page;
unsigned long count = _count;
unsigned long arg_start, arg_end, env_start, env_end;
unsigned long len1, len2, len;
unsigned long p;
char c;
ssize_t rv;
BUG_ON(*pos < 0);
tsk = get_proc_task(file_inode(file));
if (!tsk)
return -ESRCH;
mm = get_task_mm(tsk);
put_task_struct(tsk);
if (!mm)
return 0;
/* Check if process spawned far enough to have cmdline. */
if (!mm->env_end) {
rv = 0;
goto out_mmput;
}
page = (char *)__get_free_page(GFP_TEMPORARY);
if (!page) {
rv = -ENOMEM;
goto out_mmput;
}
down_read(&mm->mmap_sem);
arg_start = mm->arg_start;
arg_end = mm->arg_end;
env_start = mm->env_start;
env_end = mm->env_end;
up_read(&mm->mmap_sem);
BUG_ON(arg_start > arg_end);
BUG_ON(env_start > env_end);
len1 = arg_end - arg_start;
len2 = env_end - env_start;
-	/*
-	 * Rely on struct seq_operations::show() being called once
-	 * per internal buffer allocation. See single_open(), traverse().
-	 */
-	BUG_ON(m->size < PAGE_SIZE);
-	m->count += get_cmdline(task, m->buf, PAGE_SIZE);
-	return 0;
+	/*
+	 * Inherently racy -- command line shares address space
+	 * with code and data.
+	 */
+	rv = access_remote_vm(mm, arg_end - 1, &c, 1, 0);
+	if (rv <= 0)
+		goto out_free_page;
rv = 0;
if (c == '\0') {
/* Command line (set of strings) occupies whole ARGV. */
if (len1 <= *pos)
goto out_free_page;
p = arg_start + *pos;
len = len1 - *pos;
while (count > 0 && len > 0) {
unsigned int _count;
int nr_read;
_count = min3(count, len, PAGE_SIZE);
nr_read = access_remote_vm(mm, p, page, _count, 0);
if (nr_read < 0)
rv = nr_read;
if (nr_read <= 0)
goto out_free_page;
if (copy_to_user(buf, page, nr_read)) {
rv = -EFAULT;
goto out_free_page;
}
p += nr_read;
len -= nr_read;
buf += nr_read;
count -= nr_read;
rv += nr_read;
}
} else {
/*
* Command line (1 string) occupies ARGV and maybe
* extends into ENVP.
*/
if (len1 + len2 <= *pos)
goto skip_argv_envp;
if (len1 <= *pos)
goto skip_argv;
p = arg_start + *pos;
len = len1 - *pos;
while (count > 0 && len > 0) {
unsigned int _count, l;
int nr_read;
bool final;
_count = min3(count, len, PAGE_SIZE);
nr_read = access_remote_vm(mm, p, page, _count, 0);
if (nr_read < 0)
rv = nr_read;
if (nr_read <= 0)
goto out_free_page;
/*
* Command line can be shorter than whole ARGV
* even if last "marker" byte says it is not.
*/
final = false;
l = strnlen(page, nr_read);
if (l < nr_read) {
nr_read = l;
final = true;
}
if (copy_to_user(buf, page, nr_read)) {
rv = -EFAULT;
goto out_free_page;
}
p += nr_read;
len -= nr_read;
buf += nr_read;
count -= nr_read;
rv += nr_read;
if (final)
goto out_free_page;
}
skip_argv:
/*
* Command line (1 string) occupies ARGV and
* extends into ENVP.
*/
if (len1 <= *pos) {
p = env_start + *pos - len1;
len = len1 + len2 - *pos;
} else {
p = env_start;
len = len2;
}
while (count > 0 && len > 0) {
unsigned int _count, l;
int nr_read;
bool final;
_count = min3(count, len, PAGE_SIZE);
nr_read = access_remote_vm(mm, p, page, _count, 0);
if (nr_read < 0)
rv = nr_read;
if (nr_read <= 0)
goto out_free_page;
/* Find EOS. */
final = false;
l = strnlen(page, nr_read);
if (l < nr_read) {
nr_read = l;
final = true;
}
if (copy_to_user(buf, page, nr_read)) {
rv = -EFAULT;
goto out_free_page;
}
p += nr_read;
len -= nr_read;
buf += nr_read;
count -= nr_read;
rv += nr_read;
if (final)
goto out_free_page;
}
skip_argv_envp:
;
}
out_free_page:
free_page((unsigned long)page);
out_mmput:
mmput(mm);
if (rv > 0)
*pos += rv;
return rv;
} }
static const struct file_operations proc_pid_cmdline_ops = {
.read = proc_pid_cmdline_read,
.llseek = generic_file_llseek,
};
static int proc_pid_auxv(struct seq_file *m, struct pid_namespace *ns, static int proc_pid_auxv(struct seq_file *m, struct pid_namespace *ns,
struct pid *pid, struct task_struct *task) struct pid *pid, struct task_struct *task)
{ {
...@@ -2572,7 +2759,7 @@ static const struct pid_entry tgid_base_stuff[] = { ...@@ -2572,7 +2759,7 @@ static const struct pid_entry tgid_base_stuff[] = {
#ifdef CONFIG_HAVE_ARCH_TRACEHOOK #ifdef CONFIG_HAVE_ARCH_TRACEHOOK
ONE("syscall", S_IRUSR, proc_pid_syscall), ONE("syscall", S_IRUSR, proc_pid_syscall),
#endif #endif
ONE("cmdline", S_IRUGO, proc_pid_cmdline), REG("cmdline", S_IRUGO, proc_pid_cmdline_ops),
ONE("stat", S_IRUGO, proc_tgid_stat), ONE("stat", S_IRUGO, proc_tgid_stat),
ONE("statm", S_IRUGO, proc_pid_statm), ONE("statm", S_IRUGO, proc_pid_statm),
REG("maps", S_IRUGO, proc_pid_maps_operations), REG("maps", S_IRUGO, proc_pid_maps_operations),
...@@ -2918,11 +3105,11 @@ static const struct pid_entry tid_base_stuff[] = { ...@@ -2918,11 +3105,11 @@ static const struct pid_entry tid_base_stuff[] = {
#ifdef CONFIG_HAVE_ARCH_TRACEHOOK #ifdef CONFIG_HAVE_ARCH_TRACEHOOK
ONE("syscall", S_IRUSR, proc_pid_syscall), ONE("syscall", S_IRUSR, proc_pid_syscall),
#endif #endif
ONE("cmdline", S_IRUGO, proc_pid_cmdline), REG("cmdline", S_IRUGO, proc_pid_cmdline_ops),
ONE("stat", S_IRUGO, proc_tid_stat), ONE("stat", S_IRUGO, proc_tid_stat),
ONE("statm", S_IRUGO, proc_pid_statm), ONE("statm", S_IRUGO, proc_pid_statm),
REG("maps", S_IRUGO, proc_tid_maps_operations), REG("maps", S_IRUGO, proc_tid_maps_operations),
#ifdef CONFIG_CHECKPOINT_RESTORE #ifdef CONFIG_PROC_CHILDREN
REG("children", S_IRUGO, proc_tid_children_operations), REG("children", S_IRUGO, proc_tid_children_operations),
#endif #endif
#ifdef CONFIG_NUMA #ifdef CONFIG_NUMA
......
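Userspace-visible effect of the rewrite above (illustration, not from the patch): /proc/<pid>/cmdline is now a regular seekable file whose read() walks the whole argument range, so long command lines are no longer cut off at a single page, and the contents are still NUL-separated strings. A minimal reader:

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	char path[64], buf[4096];
	size_t n;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%s/cmdline",
		 argc > 1 ? argv[1] : "self");
	f = fopen(path, "r");
	if (!f)
		return 1;
	n = fread(buf, 1, sizeof(buf) - 1, f);
	fclose(f);
	buf[n] = '\0';

	/* each argument is terminated by '\0'; walk them one string at a time */
	for (size_t i = 0; i < n; i += strlen(buf + i) + 1)
		printf("arg: %s\n", buf + i);
	return 0;
}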
...@@ -589,8 +589,7 @@ static struct kmem_cache *reiserfs_inode_cachep; ...@@ -589,8 +589,7 @@ static struct kmem_cache *reiserfs_inode_cachep;
static struct inode *reiserfs_alloc_inode(struct super_block *sb) static struct inode *reiserfs_alloc_inode(struct super_block *sb)
{ {
struct reiserfs_inode_info *ei; struct reiserfs_inode_info *ei;
ei = (struct reiserfs_inode_info *) ei = kmem_cache_alloc(reiserfs_inode_cachep, GFP_KERNEL);
kmem_cache_alloc(reiserfs_inode_cachep, GFP_KERNEL);
if (!ei) if (!ei)
return NULL; return NULL;
atomic_set(&ei->openers, 0); atomic_set(&ei->openers, 0);
......
...@@ -5,9 +5,9 @@ ...@@ -5,9 +5,9 @@
/* /*
* Common definitions for all gcc versions go here. * Common definitions for all gcc versions go here.
*/ */
#define GCC_VERSION (__GNUC__ * 10000 \ #define GCC_VERSION (__GNUC__ * 10000 \
+ __GNUC_MINOR__ * 100 \ + __GNUC_MINOR__ * 100 \
+ __GNUC_PATCHLEVEL__) + __GNUC_PATCHLEVEL__)
/* Optimization barrier */ /* Optimization barrier */
...@@ -46,55 +46,63 @@ ...@@ -46,55 +46,63 @@
* the inline assembly constraint from =g to =r, in this particular * the inline assembly constraint from =g to =r, in this particular
* case either is valid. * case either is valid.
*/ */
-#define RELOC_HIDE(ptr, off) \
-  ({ unsigned long __ptr; \
-     __asm__ ("" : "=r"(__ptr) : "0"(ptr)); \
-     (typeof(ptr)) (__ptr + (off)); })
+#define RELOC_HIDE(ptr, off) \
+({ \
+	unsigned long __ptr; \
+	__asm__ ("" : "=r"(__ptr) : "0"(ptr)); \
+	(typeof(ptr)) (__ptr + (off)); \
+})

 /* Make the optimizer believe the variable can be manipulated arbitrarily. */
-#define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))
+#define OPTIMIZER_HIDE_VAR(var) \
+	__asm__ ("" : "=r" (var) : "0" (var))
#ifdef __CHECKER__ #ifdef __CHECKER__
#define __must_be_array(arr) 0 #define __must_be_array(a) 0
#else #else
/* &a[0] degrades to a pointer: a different type from an array */ /* &a[0] degrades to a pointer: a different type from an array */
#define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0])) #define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
#endif #endif
/* /*
* Force always-inline if the user requests it so via the .config, * Force always-inline if the user requests it so via the .config,
* or if gcc is too old: * or if gcc is too old:
*/ */
#if !defined(CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING) || \ #if !defined(CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING) || \
!defined(CONFIG_OPTIMIZE_INLINING) || (__GNUC__ < 4) !defined(CONFIG_OPTIMIZE_INLINING) || (__GNUC__ < 4)
# define inline inline __attribute__((always_inline)) notrace #define inline inline __attribute__((always_inline)) notrace
# define __inline__ __inline__ __attribute__((always_inline)) notrace #define __inline__ __inline__ __attribute__((always_inline)) notrace
# define __inline __inline __attribute__((always_inline)) notrace #define __inline __inline __attribute__((always_inline)) notrace
#else #else
/* A lot of inline functions can cause havoc with function tracing */ /* A lot of inline functions can cause havoc with function tracing */
# define inline inline notrace #define inline inline notrace
# define __inline__ __inline__ notrace #define __inline__ __inline__ notrace
# define __inline __inline notrace #define __inline __inline notrace
#endif #endif
-#define __deprecated __attribute__((deprecated))
-#define __packed __attribute__((packed))
-#define __weak __attribute__((weak))
-#define __alias(symbol) __attribute__((alias(#symbol)))
+#define __always_inline inline __attribute__((always_inline))
+#define noinline __attribute__((noinline))
+
+#define __deprecated __attribute__((deprecated))
+#define __packed __attribute__((packed))
+#define __weak __attribute__((weak))
+#define __alias(symbol) __attribute__((alias(#symbol)))
/* /*
* it doesn't make sense on ARM (currently the only user of __naked) to trace * it doesn't make sense on ARM (currently the only user of __naked)
* naked functions because then mcount is called without stack and frame pointer * to trace naked functions because then mcount is called without
* being set up and there is no chance to restore the lr register to the value * stack and frame pointer being set up and there is no chance to
* before mcount was called. * restore the lr register to the value before mcount was called.
*
* The asm() bodies of naked functions often depend on standard calling
* conventions, therefore they must be noinline and noclone.
* *
* The asm() bodies of naked functions often depend on standard calling conventions, * GCC 4.[56] currently fail to enforce this, so we must do so ourselves.
* therefore they must be noinline and noclone. GCC 4.[56] currently fail to enforce * See GCC PR44290.
* this, so we must do so ourselves. See GCC PR44290.
*/ */
#define __naked __attribute__((naked)) noinline __noclone notrace #define __naked __attribute__((naked)) noinline __noclone notrace
#define __noreturn __attribute__((noreturn)) #define __noreturn __attribute__((noreturn))
/* /*
* From the GCC manual: * From the GCC manual:
...@@ -106,19 +114,130 @@ ...@@ -106,19 +114,130 @@
* would be. * would be.
* [...] * [...]
*/ */
#define __pure __attribute__((pure)) #define __pure __attribute__((pure))
#define __aligned(x) __attribute__((aligned(x))) #define __aligned(x) __attribute__((aligned(x)))
#define __printf(a, b) __attribute__((format(printf, a, b))) #define __printf(a, b) __attribute__((format(printf, a, b)))
#define __scanf(a, b) __attribute__((format(scanf, a, b))) #define __scanf(a, b) __attribute__((format(scanf, a, b)))
#define noinline __attribute__((noinline)) #define __attribute_const__ __attribute__((__const__))
#define __attribute_const__ __attribute__((__const__)) #define __maybe_unused __attribute__((unused))
#define __maybe_unused __attribute__((unused)) #define __always_unused __attribute__((unused))
#define __always_unused __attribute__((unused))
/* gcc version specific checks */
#define __gcc_header(x) #x
#define _gcc_header(x) __gcc_header(linux/compiler-gcc##x.h) #if GCC_VERSION < 30200
#define gcc_header(x) _gcc_header(x) # error Sorry, your compiler is too old - please upgrade it.
#include gcc_header(__GNUC__) #endif
#if GCC_VERSION < 30300
# define __used __attribute__((__unused__))
#else
# define __used __attribute__((__used__))
#endif
#ifdef CONFIG_GCOV_KERNEL
# if GCC_VERSION < 30400
# error "GCOV profiling support for gcc versions below 3.4 not included"
# endif /* __GNUC_MINOR__ */
#endif /* CONFIG_GCOV_KERNEL */
#if GCC_VERSION >= 30400
#define __must_check __attribute__((warn_unused_result))
#endif
#if GCC_VERSION >= 40000
/* GCC 4.1.[01] miscompiles __weak */
#ifdef __KERNEL__
# if GCC_VERSION >= 40100 && GCC_VERSION <= 40101
# error Your version of gcc miscompiles the __weak directive
# endif
#endif
#define __used __attribute__((__used__))
#define __compiler_offsetof(a, b) \
__builtin_offsetof(a, b)
#if GCC_VERSION >= 40100 && GCC_VERSION < 40600
# define __compiletime_object_size(obj) __builtin_object_size(obj, 0)
#endif
#if GCC_VERSION >= 40300
/* Mark functions as cold. gcc will assume any path leading to a call
* to them will be unlikely. This means a lot of manual unlikely()s
* are unnecessary now for any paths leading to the usual suspects
* like BUG(), printk(), panic() etc. [but let's keep them for now for
* older compilers]
*
* Early snapshots of gcc 4.3 don't support this and we can't detect this
* in the preprocessor, but we can live with this because they're unreleased.
* Maketime probing would be overkill here.
*
* gcc also has a __attribute__((__hot__)) to move hot functions into
* a special section, but I don't see any sense in this right now in
* the kernel context
*/
#define __cold __attribute__((__cold__))
#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
#ifndef __CHECKER__
# define __compiletime_warning(message) __attribute__((warning(message)))
# define __compiletime_error(message) __attribute__((error(message)))
#endif /* __CHECKER__ */
#endif /* GCC_VERSION >= 40300 */
#if GCC_VERSION >= 40500
/*
* Mark a position in code as unreachable. This can be used to
* suppress control flow warnings after asm blocks that transfer
* control elsewhere.
*
* Early snapshots of gcc 4.5 don't support this and we can't detect
* this in the preprocessor, but we can live with this because they're
* unreleased. Really, we need to have autoconf for the kernel.
*/
#define unreachable() __builtin_unreachable()
/* Mark a function definition as prohibited from being cloned. */
#define __noclone __attribute__((__noclone__))
#endif /* GCC_VERSION >= 40500 */
#if GCC_VERSION >= 40600
/*
* Tell the optimizer that something else uses this function or variable.
*/
#define __visible __attribute__((externally_visible))
#endif
/*
* GCC 'asm goto' miscompiles certain code sequences:
*
* http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
*
* Work it around via a compiler barrier quirk suggested by Jakub Jelinek.
*
* (asm goto is automatically volatile - the naming reflects this.)
*/
#define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
#ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP
#if GCC_VERSION >= 40400
#define __HAVE_BUILTIN_BSWAP32__
#define __HAVE_BUILTIN_BSWAP64__
#endif
#if GCC_VERSION >= 40800 || (defined(__powerpc__) && GCC_VERSION >= 40600)
#define __HAVE_BUILTIN_BSWAP16__
#endif
#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
#if GCC_VERSION >= 50000
#define KASAN_ABI_VERSION 4
#elif GCC_VERSION >= 40902
#define KASAN_ABI_VERSION 3
#endif
#endif /* gcc version >= 40000 specific checks */
#if !defined(__noclone) #if !defined(__noclone)
#define __noclone /* not needed */ #define __noclone /* not needed */
...@@ -129,5 +248,3 @@ ...@@ -129,5 +248,3 @@
* code * code
*/ */
#define uninitialized_var(x) x = x #define uninitialized_var(x) x = x
#define __always_inline inline __attribute__((always_inline))
#ifndef __LINUX_COMPILER_H
#error "Please don't include <linux/compiler-gcc3.h> directly, include <linux/compiler.h> instead."
#endif
#if GCC_VERSION < 30200
# error Sorry, your compiler is too old - please upgrade it.
#endif
#if GCC_VERSION >= 30300
# define __used __attribute__((__used__))
#else
# define __used __attribute__((__unused__))
#endif
#if GCC_VERSION >= 30400
#define __must_check __attribute__((warn_unused_result))
#endif
#ifdef CONFIG_GCOV_KERNEL
# if GCC_VERSION < 30400
# error "GCOV profiling support for gcc versions below 3.4 not included"
# endif /* __GNUC_MINOR__ */
#endif /* CONFIG_GCOV_KERNEL */
#ifndef __LINUX_COMPILER_H
#error "Please don't include <linux/compiler-gcc4.h> directly, include <linux/compiler.h> instead."
#endif
/* GCC 4.1.[01] miscompiles __weak */
#ifdef __KERNEL__
# if GCC_VERSION >= 40100 && GCC_VERSION <= 40101
# error Your version of gcc miscompiles the __weak directive
# endif
#endif
#define __used __attribute__((__used__))
#define __must_check __attribute__((warn_unused_result))
#define __compiler_offsetof(a,b) __builtin_offsetof(a,b)
#if GCC_VERSION >= 40100 && GCC_VERSION < 40600
# define __compiletime_object_size(obj) __builtin_object_size(obj, 0)
#endif
#if GCC_VERSION >= 40300
/* Mark functions as cold. gcc will assume any path leading to a call
to them will be unlikely. This means a lot of manual unlikely()s
are unnecessary now for any paths leading to the usual suspects
like BUG(), printk(), panic() etc. [but let's keep them for now for
older compilers]
Early snapshots of gcc 4.3 don't support this and we can't detect this
in the preprocessor, but we can live with this because they're unreleased.
Maketime probing would be overkill here.
gcc also has a __attribute__((__hot__)) to move hot functions into
a special section, but I don't see any sense in this right now in
the kernel context */
#define __cold __attribute__((__cold__))
#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
#ifndef __CHECKER__
# define __compiletime_warning(message) __attribute__((warning(message)))
# define __compiletime_error(message) __attribute__((error(message)))
#endif /* __CHECKER__ */
#endif /* GCC_VERSION >= 40300 */
#if GCC_VERSION >= 40500
/*
* Mark a position in code as unreachable. This can be used to
* suppress control flow warnings after asm blocks that transfer
* control elsewhere.
*
* Early snapshots of gcc 4.5 don't support this and we can't detect
* this in the preprocessor, but we can live with this because they're
* unreleased. Really, we need to have autoconf for the kernel.
*/
#define unreachable() __builtin_unreachable()
/* Mark a function definition as prohibited from being cloned. */
#define __noclone __attribute__((__noclone__))
#endif /* GCC_VERSION >= 40500 */
#if GCC_VERSION >= 40600
/*
* Tell the optimizer that something else uses this function or variable.
*/
#define __visible __attribute__((externally_visible))
#endif
/*
* GCC 'asm goto' miscompiles certain code sequences:
*
* http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
*
* Work it around via a compiler barrier quirk suggested by Jakub Jelinek.
*
* (asm goto is automatically volatile - the naming reflects this.)
*/
#define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
#ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP
#if GCC_VERSION >= 40400
#define __HAVE_BUILTIN_BSWAP32__
#define __HAVE_BUILTIN_BSWAP64__
#endif
#if GCC_VERSION >= 40800 || (defined(__powerpc__) && GCC_VERSION >= 40600)
#define __HAVE_BUILTIN_BSWAP16__
#endif
#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
#if GCC_VERSION >= 40902
#define KASAN_ABI_VERSION 3
#endif
#ifndef __LINUX_COMPILER_H
#error "Please don't include <linux/compiler-gcc5.h> directly, include <linux/compiler.h> instead."
#endif
#define __used __attribute__((__used__))
#define __must_check __attribute__((warn_unused_result))
#define __compiler_offsetof(a, b) __builtin_offsetof(a, b)
/* Mark functions as cold. gcc will assume any path leading to a call
to them will be unlikely. This means a lot of manual unlikely()s
are unnecessary now for any paths leading to the usual suspects
like BUG(), printk(), panic() etc. [but let's keep them for now for
older compilers]
Early snapshots of gcc 4.3 don't support this and we can't detect this
in the preprocessor, but we can live with this because they're unreleased.
Maketime probing would be overkill here.
gcc also has a __attribute__((__hot__)) to move hot functions into
a special section, but I don't see any sense in this right now in
the kernel context */
#define __cold __attribute__((__cold__))
#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
#ifndef __CHECKER__
# define __compiletime_warning(message) __attribute__((warning(message)))
# define __compiletime_error(message) __attribute__((error(message)))
#endif /* __CHECKER__ */
/*
* Mark a position in code as unreachable. This can be used to
* suppress control flow warnings after asm blocks that transfer
* control elsewhere.
*
* Early snapshots of gcc 4.5 don't support this and we can't detect
* this in the preprocessor, but we can live with this because they're
* unreleased. Really, we need to have autoconf for the kernel.
*/
#define unreachable() __builtin_unreachable()
/* Mark a function definition as prohibited from being cloned. */
#define __noclone __attribute__((__noclone__))
/*
* Tell the optimizer that something else uses this function or variable.
*/
#define __visible __attribute__((externally_visible))
/*
* GCC 'asm goto' miscompiles certain code sequences:
*
* http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
*
* Work it around via a compiler barrier quirk suggested by Jakub Jelinek.
*
* (asm goto is automatically volatile - the naming reflects this.)
*/
#define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
#ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP
#define __HAVE_BUILTIN_BSWAP32__
#define __HAVE_BUILTIN_BSWAP64__
#define __HAVE_BUILTIN_BSWAP16__
#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
#define KASAN_ABI_VERSION 4
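With compiler-gcc3.h, compiler-gcc4.h and compiler-gcc5.h folded into compiler-gcc.h above, every version check is written against GCC_VERSION = major * 10000 + minor * 100 + patchlevel, so gcc 4.9.2 is 40902 (KASAN_ABI_VERSION 3) and gcc 5.x is >= 50000 (KASAN_ABI_VERSION 4). A standalone sketch of the same arithmetic, assuming a GNU-compatible compiler that defines the __GNUC__ macros:

#include <stdio.h>

#define GCC_VERSION_FROM(major, minor, patch) \
	((major) * 10000 + (minor) * 100 + (patch))

int main(void)
{
	printf("4.9.2 -> %d\n", GCC_VERSION_FROM(4, 9, 2));	/* 40902 */
	printf("5.1.0 -> %d\n", GCC_VERSION_FROM(5, 1, 0));	/* 50100 */
	printf("this compiler -> %d\n",
	       GCC_VERSION_FROM(__GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__));
	return 0;
}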
...@@ -13,10 +13,12 @@ ...@@ -13,10 +13,12 @@
/* Intel ECC compiler doesn't support gcc specific asm stmts. /* Intel ECC compiler doesn't support gcc specific asm stmts.
* It uses intrinsics to do the equivalent things. * It uses intrinsics to do the equivalent things.
*/ */
#undef barrier
#undef barrier_data #undef barrier_data
#undef RELOC_HIDE #undef RELOC_HIDE
#undef OPTIMIZER_HIDE_VAR #undef OPTIMIZER_HIDE_VAR
#define barrier() __memory_barrier()
#define barrier_data(ptr) barrier() #define barrier_data(ptr) barrier()
#define RELOC_HIDE(ptr, off) \ #define RELOC_HIDE(ptr, off) \
......
...@@ -115,6 +115,7 @@ static inline int con_debug_leave(void) ...@@ -115,6 +115,7 @@ static inline int con_debug_leave(void)
#define CON_BOOT (8) #define CON_BOOT (8)
#define CON_ANYTIME (16) /* Safe to call when cpu is offline */ #define CON_ANYTIME (16) /* Safe to call when cpu is offline */
#define CON_BRL (32) /* Used for a braille device */ #define CON_BRL (32) /* Used for a braille device */
#define CON_EXTENDED (64) /* Use the extended output format a la /dev/kmsg */
struct console { struct console {
char name[16]; char name[16];
......
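The new CON_EXTENDED flag marks consoles that want the /dev/kmsg-style record format, "<prefix>,<seq>,<timestamp-usec>,<flags>;<text>", which is what write_ext_msg() in the netconsole hunk above consumes. A standalone sketch that parses one such record; the sample values are made up:

#include <stdio.h>

int main(void)
{
	const char rec[] = "6,1234,5150280,-;netconsole: network logging started";
	unsigned int prefix;
	unsigned long long seq, ts_usec;
	char flags[8], text[256];

	if (sscanf(rec, "%u,%llu,%llu,%7[^;];%255[^\n]",
		   &prefix, &seq, &ts_usec, flags, text) == 5)
		printf("level %u, seq %llu, ts %lluus, text '%s'\n",
		       prefix & 7, seq, ts_usec, text);
	return 0;
}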
...@@ -30,6 +30,8 @@ static inline const char *printk_skip_level(const char *buffer) ...@@ -30,6 +30,8 @@ static inline const char *printk_skip_level(const char *buffer)
return buffer; return buffer;
} }
#define CONSOLE_EXT_LOG_MAX 8192
/* printk's without a loglevel use this.. */ /* printk's without a loglevel use this.. */
#define MESSAGE_LOGLEVEL_DEFAULT CONFIG_MESSAGE_LOGLEVEL_DEFAULT #define MESSAGE_LOGLEVEL_DEFAULT CONFIG_MESSAGE_LOGLEVEL_DEFAULT
......
...@@ -2556,8 +2556,22 @@ extern struct mm_struct *mm_access(struct task_struct *task, unsigned int mode); ...@@ -2556,8 +2556,22 @@ extern struct mm_struct *mm_access(struct task_struct *task, unsigned int mode);
/* Remove the current tasks stale references to the old mm_struct */ /* Remove the current tasks stale references to the old mm_struct */
extern void mm_release(struct task_struct *, struct mm_struct *); extern void mm_release(struct task_struct *, struct mm_struct *);
#ifdef CONFIG_HAVE_COPY_THREAD_TLS
extern int copy_thread_tls(unsigned long, unsigned long, unsigned long,
struct task_struct *, unsigned long);
#else
extern int copy_thread(unsigned long, unsigned long, unsigned long, extern int copy_thread(unsigned long, unsigned long, unsigned long,
struct task_struct *); struct task_struct *);
/* Architectures that haven't opted into copy_thread_tls get the tls argument
* via pt_regs, so ignore the tls argument passed via C. */
static inline int copy_thread_tls(
unsigned long clone_flags, unsigned long sp, unsigned long arg,
struct task_struct *p, unsigned long tls)
{
return copy_thread(clone_flags, sp, arg, p);
}
#endif
extern void flush_thread(void); extern void flush_thread(void);
extern void exit_thread(void); extern void exit_thread(void);
...@@ -2576,6 +2590,7 @@ extern int do_execveat(int, struct filename *, ...@@ -2576,6 +2590,7 @@ extern int do_execveat(int, struct filename *,
const char __user * const __user *, const char __user * const __user *,
const char __user * const __user *, const char __user * const __user *,
int); int);
extern long _do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *, unsigned long);
extern long do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *); extern long do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *);
struct task_struct *fork_idle(int); struct task_struct *fork_idle(int);
extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags); extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
......
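Shape of the copy_thread_tls() fallback declared above, as a standalone sketch (illustration only; the real functions live in architecture code): when an architecture has not selected HAVE_COPY_THREAD_TLS, the tls value passed through C is simply ignored and the architecture keeps fishing it out of pt_regs.

#include <stdio.h>

/* Stand-in for an architecture's existing pt_regs-based copy_thread(). */
static int copy_thread_demo(unsigned long flags, unsigned long sp,
			    unsigned long arg)
{
	printf("copy_thread(flags=%#lx, sp=%#lx, arg=%#lx)\n", flags, sp, arg);
	return 0;
}

/* Mirrors the !CONFIG_HAVE_COPY_THREAD_TLS inline above: tls is dropped. */
static int copy_thread_tls_demo(unsigned long flags, unsigned long sp,
				unsigned long arg, unsigned long tls)
{
	(void)tls;	/* the arch still reads tls from pt_regs itself */
	return copy_thread_demo(flags, sp, arg);
}

int main(void)
{
	return copy_thread_tls_demo(0x11, 0x7ffc0000UL, 0, 0x12340000UL);
}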
...@@ -3,7 +3,6 @@ ...@@ -3,7 +3,6 @@
#include <uapi/linux/stddef.h> #include <uapi/linux/stddef.h>
#undef NULL #undef NULL
#define NULL ((void *)0) #define NULL ((void *)0)
...@@ -14,10 +13,9 @@ enum { ...@@ -14,10 +13,9 @@ enum {
#undef offsetof #undef offsetof
#ifdef __compiler_offsetof #ifdef __compiler_offsetof
#define offsetof(TYPE,MEMBER) __compiler_offsetof(TYPE,MEMBER) #define offsetof(TYPE, MEMBER) __compiler_offsetof(TYPE, MEMBER)
#else #else
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER) #define offsetof(TYPE, MEMBER) ((size_t)&((TYPE *)0)->MEMBER)
#endif
#endif #endif
/** /**
...@@ -28,3 +26,5 @@ enum { ...@@ -28,3 +26,5 @@ enum {
*/ */
#define offsetofend(TYPE, MEMBER) \ #define offsetofend(TYPE, MEMBER) \
(offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER)) (offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))
#endif
...@@ -111,6 +111,7 @@ extern int memcmp(const void *,const void *,__kernel_size_t); ...@@ -111,6 +111,7 @@ extern int memcmp(const void *,const void *,__kernel_size_t);
extern void * memchr(const void *,int,__kernel_size_t); extern void * memchr(const void *,int,__kernel_size_t);
#endif #endif
void *memchr_inv(const void *s, int c, size_t n); void *memchr_inv(const void *s, int c, size_t n);
char *strreplace(char *s, char old, char new);
extern void kfree_const(const void *x); extern void kfree_const(const void *x);
......
...@@ -827,15 +827,15 @@ asmlinkage long sys_syncfs(int fd); ...@@ -827,15 +827,15 @@ asmlinkage long sys_syncfs(int fd);
asmlinkage long sys_fork(void); asmlinkage long sys_fork(void);
asmlinkage long sys_vfork(void); asmlinkage long sys_vfork(void);
#ifdef CONFIG_CLONE_BACKWARDS #ifdef CONFIG_CLONE_BACKWARDS
asmlinkage long sys_clone(unsigned long, unsigned long, int __user *, int, asmlinkage long sys_clone(unsigned long, unsigned long, int __user *, unsigned long,
int __user *); int __user *);
#else #else
#ifdef CONFIG_CLONE_BACKWARDS3 #ifdef CONFIG_CLONE_BACKWARDS3
asmlinkage long sys_clone(unsigned long, unsigned long, int, int __user *, asmlinkage long sys_clone(unsigned long, unsigned long, int, int __user *,
int __user *, int); int __user *, unsigned long);
#else #else
asmlinkage long sys_clone(unsigned long, unsigned long, int __user *, asmlinkage long sys_clone(unsigned long, unsigned long, int __user *,
int __user *, int); int __user *, unsigned long);
#endif #endif
#endif #endif
......
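The tls parameter of sys_clone() switches from int to unsigned long above because it carries a pointer-sized value (a TLS base address or a pointer to a user descriptor, depending on the architecture), which an int would truncate on 64-bit. A standalone illustration of that truncation on a 64-bit build; the address is made up:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t tls = 0x7f0012345678ULL;	/* made-up 64-bit TLS address */
	int as_int = (int)tls;			/* what an 'int tls_val' keeps */
	unsigned long as_ulong = (unsigned long)tls;

	printf("original: %#llx\n", (unsigned long long)tls);
	printf("as int:   %#llx\n", (unsigned long long)(unsigned int)as_int);
	printf("as ulong: %#lx\n", as_ulong);
	return 0;
}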
...@@ -47,12 +47,12 @@ ...@@ -47,12 +47,12 @@
#define SYSLOG_FROM_READER 0 #define SYSLOG_FROM_READER 0
#define SYSLOG_FROM_PROC 1 #define SYSLOG_FROM_PROC 1
int do_syslog(int type, char __user *buf, int count, bool from_file); int do_syslog(int type, char __user *buf, int count, int source);
#ifdef CONFIG_PRINTK #ifdef CONFIG_PRINTK
int check_syslog_permissions(int type, bool from_file); int check_syslog_permissions(int type, int source);
#else #else
static inline int check_syslog_permissions(int type, bool from_file) static inline int check_syslog_permissions(int type, int source)
{ {
return 0; return 0;
} }
......
...@@ -81,7 +81,8 @@ struct zpool_driver { ...@@ -81,7 +81,8 @@ struct zpool_driver {
atomic_t refcount; atomic_t refcount;
struct list_head list; struct list_head list;
void *(*create)(char *name, gfp_t gfp, struct zpool_ops *ops); void *(*create)(char *name, gfp_t gfp, struct zpool_ops *ops,
struct zpool *zpool);
void (*destroy)(void *pool); void (*destroy)(void *pool);
int (*malloc)(void *pool, size_t size, gfp_t gfp, int (*malloc)(void *pool, size_t size, gfp_t gfp,
...@@ -102,6 +103,4 @@ void zpool_register_driver(struct zpool_driver *driver); ...@@ -102,6 +103,4 @@ void zpool_register_driver(struct zpool_driver *driver);
int zpool_unregister_driver(struct zpool_driver *driver); int zpool_unregister_driver(struct zpool_driver *driver);
int zpool_evict(void *pool, unsigned long handle);
#endif #endif
...@@ -1136,6 +1136,7 @@ endif # CGROUPS ...@@ -1136,6 +1136,7 @@ endif # CGROUPS
config CHECKPOINT_RESTORE config CHECKPOINT_RESTORE
bool "Checkpoint/restore support" if EXPERT bool "Checkpoint/restore support" if EXPERT
select PROC_CHILDREN
default n default n
help help
Enables additional kernel features in a sake of checkpoint/restore. Enables additional kernel features in a sake of checkpoint/restore.
......
...@@ -533,8 +533,13 @@ void __init mount_root(void) ...@@ -533,8 +533,13 @@ void __init mount_root(void)
} }
#endif #endif
#ifdef CONFIG_BLOCK #ifdef CONFIG_BLOCK
create_dev("/dev/root", ROOT_DEV); {
mount_block_root("/dev/root", root_mountflags); int err = create_dev("/dev/root", ROOT_DEV);
if (err < 0)
pr_emerg("Failed to create /dev/root: %d\n", err);
mount_block_root("/dev/root", root_mountflags);
}
#endif #endif
} }
......
...@@ -711,10 +711,10 @@ void do_exit(long code) ...@@ -711,10 +711,10 @@ void do_exit(long code)
current->comm, task_pid_nr(current), current->comm, task_pid_nr(current),
preempt_count()); preempt_count());
acct_update_integrals(tsk);
/* sync mm's RSS info before statistics gathering */ /* sync mm's RSS info before statistics gathering */
if (tsk->mm) if (tsk->mm)
sync_mm_rss(tsk->mm); sync_mm_rss(tsk->mm);
acct_update_integrals(tsk);
group_dead = atomic_dec_and_test(&tsk->signal->live); group_dead = atomic_dec_and_test(&tsk->signal->live);
if (group_dead) { if (group_dead) {
hrtimer_cancel(&tsk->signal->real_timer); hrtimer_cancel(&tsk->signal->real_timer);
......
...@@ -1238,7 +1238,8 @@ static struct task_struct *copy_process(unsigned long clone_flags, ...@@ -1238,7 +1238,8 @@ static struct task_struct *copy_process(unsigned long clone_flags,
unsigned long stack_size, unsigned long stack_size,
int __user *child_tidptr, int __user *child_tidptr,
struct pid *pid, struct pid *pid,
int trace) int trace,
unsigned long tls)
{ {
int retval; int retval;
struct task_struct *p; struct task_struct *p;
...@@ -1447,7 +1448,7 @@ static struct task_struct *copy_process(unsigned long clone_flags, ...@@ -1447,7 +1448,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
retval = copy_io(clone_flags, p); retval = copy_io(clone_flags, p);
if (retval) if (retval)
goto bad_fork_cleanup_namespaces; goto bad_fork_cleanup_namespaces;
retval = copy_thread(clone_flags, stack_start, stack_size, p); retval = copy_thread_tls(clone_flags, stack_start, stack_size, p, tls);
if (retval) if (retval)
goto bad_fork_cleanup_io; goto bad_fork_cleanup_io;
...@@ -1659,7 +1660,7 @@ static inline void init_idle_pids(struct pid_link *links) ...@@ -1659,7 +1660,7 @@ static inline void init_idle_pids(struct pid_link *links)
struct task_struct *fork_idle(int cpu) struct task_struct *fork_idle(int cpu)
{ {
struct task_struct *task; struct task_struct *task;
task = copy_process(CLONE_VM, 0, 0, NULL, &init_struct_pid, 0); task = copy_process(CLONE_VM, 0, 0, NULL, &init_struct_pid, 0, 0);
if (!IS_ERR(task)) { if (!IS_ERR(task)) {
init_idle_pids(task->pids); init_idle_pids(task->pids);
init_idle(task, cpu); init_idle(task, cpu);
...@@ -1674,11 +1675,12 @@ struct task_struct *fork_idle(int cpu) ...@@ -1674,11 +1675,12 @@ struct task_struct *fork_idle(int cpu)
* It copies the process, and if successful kick-starts * It copies the process, and if successful kick-starts
* it and waits for it to finish using the VM if required. * it and waits for it to finish using the VM if required.
*/ */
long do_fork(unsigned long clone_flags, long _do_fork(unsigned long clone_flags,
unsigned long stack_start, unsigned long stack_start,
unsigned long stack_size, unsigned long stack_size,
int __user *parent_tidptr, int __user *parent_tidptr,
int __user *child_tidptr) int __user *child_tidptr,
unsigned long tls)
{ {
struct task_struct *p; struct task_struct *p;
int trace = 0; int trace = 0;
...@@ -1703,7 +1705,7 @@ long do_fork(unsigned long clone_flags, ...@@ -1703,7 +1705,7 @@ long do_fork(unsigned long clone_flags,
} }
p = copy_process(clone_flags, stack_start, stack_size, p = copy_process(clone_flags, stack_start, stack_size,
child_tidptr, NULL, trace); child_tidptr, NULL, trace, tls);
/* /*
* Do this prior waking up the new thread - the thread pointer * Do this prior waking up the new thread - the thread pointer
* might get invalid after that point, if the thread exits quickly. * might get invalid after that point, if the thread exits quickly.
...@@ -1744,20 +1746,34 @@ long do_fork(unsigned long clone_flags, ...@@ -1744,20 +1746,34 @@ long do_fork(unsigned long clone_flags,
return nr; return nr;
} }
#ifndef CONFIG_HAVE_COPY_THREAD_TLS
/* For compatibility with architectures that call do_fork directly rather than
* using the syscall entry points below. */
long do_fork(unsigned long clone_flags,
unsigned long stack_start,
unsigned long stack_size,
int __user *parent_tidptr,
int __user *child_tidptr)
{
return _do_fork(clone_flags, stack_start, stack_size,
parent_tidptr, child_tidptr, 0);
}
#endif
/* /*
* Create a kernel thread. * Create a kernel thread.
*/ */
pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags) pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
{ {
return do_fork(flags|CLONE_VM|CLONE_UNTRACED, (unsigned long)fn, return _do_fork(flags|CLONE_VM|CLONE_UNTRACED, (unsigned long)fn,
(unsigned long)arg, NULL, NULL); (unsigned long)arg, NULL, NULL, 0);
} }
#ifdef __ARCH_WANT_SYS_FORK #ifdef __ARCH_WANT_SYS_FORK
SYSCALL_DEFINE0(fork) SYSCALL_DEFINE0(fork)
{ {
#ifdef CONFIG_MMU #ifdef CONFIG_MMU
return do_fork(SIGCHLD, 0, 0, NULL, NULL); return _do_fork(SIGCHLD, 0, 0, NULL, NULL, 0);
#else #else
/* can not support in nommu mode */ /* can not support in nommu mode */
return -EINVAL; return -EINVAL;
...@@ -1768,8 +1784,8 @@ SYSCALL_DEFINE0(fork) ...@@ -1768,8 +1784,8 @@ SYSCALL_DEFINE0(fork)
#ifdef __ARCH_WANT_SYS_VFORK #ifdef __ARCH_WANT_SYS_VFORK
SYSCALL_DEFINE0(vfork) SYSCALL_DEFINE0(vfork)
{ {
return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, 0, return _do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, 0,
0, NULL, NULL); 0, NULL, NULL, 0);
} }
#endif #endif
...@@ -1777,27 +1793,27 @@ SYSCALL_DEFINE0(vfork) ...@@ -1777,27 +1793,27 @@ SYSCALL_DEFINE0(vfork)
#ifdef CONFIG_CLONE_BACKWARDS #ifdef CONFIG_CLONE_BACKWARDS
SYSCALL_DEFINE5(clone, unsigned long, clone_flags, unsigned long, newsp, SYSCALL_DEFINE5(clone, unsigned long, clone_flags, unsigned long, newsp,
int __user *, parent_tidptr, int __user *, parent_tidptr,
int, tls_val, unsigned long, tls,
int __user *, child_tidptr) int __user *, child_tidptr)
#elif defined(CONFIG_CLONE_BACKWARDS2) #elif defined(CONFIG_CLONE_BACKWARDS2)
SYSCALL_DEFINE5(clone, unsigned long, newsp, unsigned long, clone_flags, SYSCALL_DEFINE5(clone, unsigned long, newsp, unsigned long, clone_flags,
int __user *, parent_tidptr, int __user *, parent_tidptr,
int __user *, child_tidptr, int __user *, child_tidptr,
int, tls_val) unsigned long, tls)
#elif defined(CONFIG_CLONE_BACKWARDS3) #elif defined(CONFIG_CLONE_BACKWARDS3)
SYSCALL_DEFINE6(clone, unsigned long, clone_flags, unsigned long, newsp, SYSCALL_DEFINE6(clone, unsigned long, clone_flags, unsigned long, newsp,
int, stack_size, int, stack_size,
int __user *, parent_tidptr, int __user *, parent_tidptr,
int __user *, child_tidptr, int __user *, child_tidptr,
int, tls_val) unsigned long, tls)
#else #else
SYSCALL_DEFINE5(clone, unsigned long, clone_flags, unsigned long, newsp, SYSCALL_DEFINE5(clone, unsigned long, clone_flags, unsigned long, newsp,
int __user *, parent_tidptr, int __user *, parent_tidptr,
int __user *, child_tidptr, int __user *, child_tidptr,
int, tls_val) unsigned long, tls)
#endif #endif
{ {
return do_fork(clone_flags, newsp, 0, parent_tidptr, child_tidptr); return _do_fork(clone_flags, newsp, 0, parent_tidptr, child_tidptr, tls);
} }
#endif #endif
......
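With the entry points above, the TLS pointer is carried through _do_fork() as an unsigned long instead of being truncated to int on the way to the architecture code. As a rough userspace illustration (not part of the patch), the raw clone syscall on x86-64 follows the default argument order of the #else branch above, flags/newsp/parent_tid/child_tid/tls; passing a NULL child stack makes it behave like fork():

/* Hypothetical demo only: raw clone with tls = 0, i.e. the same arguments
 * the kernel turns into _do_fork(SIGCHLD, 0, 0, NULL, NULL, 0). */
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	long pid = syscall(SYS_clone, (unsigned long)SIGCHLD, 0UL, NULL, NULL, 0UL);

	if (pid == 0) {		/* child continues on a copy of the parent's stack */
		printf("child of %d\n", (int)getppid());
		_exit(0);
	}
	waitpid((pid_t)pid, NULL, 0);
	return 0;
}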
...@@ -84,6 +84,18 @@ static struct lockdep_map console_lock_dep_map = { ...@@ -84,6 +84,18 @@ static struct lockdep_map console_lock_dep_map = {
}; };
#endif #endif
/*
* Number of registered extended console drivers.
*
* If extended consoles are present, in-kernel cont reassembly is disabled
* and each fragment is stored as a separate log entry with a proper
* continuation flag so that every emitted message has full metadata. This
* doesn't change the result for regular consoles or /proc/kmsg. For
* /dev/kmsg, as long as the reader concatenates messages according to
* consecutive continuation flags, the end result should be the same too.
*/
static int nr_ext_console_drivers;
/* /*
* Helper macros to handle lockdep when locking/unlocking console_sem. We use * Helper macros to handle lockdep when locking/unlocking console_sem. We use
* macros instead of functions so that _RET_IP_ contains useful information. * macros instead of functions so that _RET_IP_ contains useful information.
...@@ -195,7 +207,7 @@ static int console_may_schedule; ...@@ -195,7 +207,7 @@ static int console_may_schedule;
* need to be changed in the future, when the requirements change. * need to be changed in the future, when the requirements change.
* *
* /dev/kmsg exports the structured data in the following line format: * /dev/kmsg exports the structured data in the following line format:
* "level,sequnum,timestamp;<message text>\n" * "<level>,<sequnum>,<timestamp>,<contflag>;<message text>\n"
* *
* The optional key/value pairs are attached as continuation lines starting * The optional key/value pairs are attached as continuation lines starting
* with a space character and terminated by a newline. All possible * with a space character and terminated by a newline. All possible
...@@ -477,18 +489,18 @@ static int syslog_action_restricted(int type) ...@@ -477,18 +489,18 @@ static int syslog_action_restricted(int type)
type != SYSLOG_ACTION_SIZE_BUFFER; type != SYSLOG_ACTION_SIZE_BUFFER;
} }
int check_syslog_permissions(int type, bool from_file) int check_syslog_permissions(int type, int source)
{ {
/* /*
* If this is from /proc/kmsg and we've already opened it, then we've * If this is from /proc/kmsg and we've already opened it, then we've
* already done the capabilities checks at open time. * already done the capabilities checks at open time.
*/ */
if (from_file && type != SYSLOG_ACTION_OPEN) if (source == SYSLOG_FROM_PROC && type != SYSLOG_ACTION_OPEN)
return 0; goto ok;
if (syslog_action_restricted(type)) { if (syslog_action_restricted(type)) {
if (capable(CAP_SYSLOG)) if (capable(CAP_SYSLOG))
return 0; goto ok;
/* /*
* For historical reasons, accept CAP_SYS_ADMIN too, with * For historical reasons, accept CAP_SYS_ADMIN too, with
* a warning. * a warning.
...@@ -498,13 +510,94 @@ int check_syslog_permissions(int type, bool from_file) ...@@ -498,13 +510,94 @@ int check_syslog_permissions(int type, bool from_file)
"CAP_SYS_ADMIN but no CAP_SYSLOG " "CAP_SYS_ADMIN but no CAP_SYSLOG "
"(deprecated).\n", "(deprecated).\n",
current->comm, task_pid_nr(current)); current->comm, task_pid_nr(current));
return 0; goto ok;
} }
return -EPERM; return -EPERM;
} }
ok:
return security_syslog(type); return security_syslog(type);
} }
static void append_char(char **pp, char *e, char c)
{
if (*pp < e)
*(*pp)++ = c;
}
static ssize_t msg_print_ext_header(char *buf, size_t size,
struct printk_log *msg, u64 seq,
enum log_flags prev_flags)
{
u64 ts_usec = msg->ts_nsec;
char cont = '-';
do_div(ts_usec, 1000);
/*
* If we couldn't merge continuation line fragments during the print,
* export the stored flags to allow an optional external merge of the
* records. Merging the records isn't always necessarily correct, like
* when we hit a race during printing. In most cases though, it produces
* more readable output. 'c' in the record flags marks the first
* fragment of a line, '+' the following.
*/
if (msg->flags & LOG_CONT && !(prev_flags & LOG_CONT))
cont = 'c';
else if ((msg->flags & LOG_CONT) ||
((prev_flags & LOG_CONT) && !(msg->flags & LOG_PREFIX)))
cont = '+';
return scnprintf(buf, size, "%u,%llu,%llu,%c;",
(msg->facility << 3) | msg->level, seq, ts_usec, cont);
}
static ssize_t msg_print_ext_body(char *buf, size_t size,
char *dict, size_t dict_len,
char *text, size_t text_len)
{
char *p = buf, *e = buf + size;
size_t i;
/* escape non-printable characters */
for (i = 0; i < text_len; i++) {
unsigned char c = text[i];
if (c < ' ' || c >= 127 || c == '\\')
p += scnprintf(p, e - p, "\\x%02x", c);
else
append_char(&p, e, c);
}
append_char(&p, e, '\n');
if (dict_len) {
bool line = true;
for (i = 0; i < dict_len; i++) {
unsigned char c = dict[i];
if (line) {
append_char(&p, e, ' ');
line = false;
}
if (c == '\0') {
append_char(&p, e, '\n');
line = true;
continue;
}
if (c < ' ' || c >= 127 || c == '\\') {
p += scnprintf(p, e - p, "\\x%02x", c);
continue;
}
append_char(&p, e, c);
}
append_char(&p, e, '\n');
}
return p - buf;
}
/* /dev/kmsg - userspace message inject/listen interface */ /* /dev/kmsg - userspace message inject/listen interface */
struct devkmsg_user { struct devkmsg_user {
...@@ -512,7 +605,7 @@ struct devkmsg_user { ...@@ -512,7 +605,7 @@ struct devkmsg_user {
u32 idx; u32 idx;
enum log_flags prev; enum log_flags prev;
struct mutex lock; struct mutex lock;
char buf[8192]; char buf[CONSOLE_EXT_LOG_MAX];
}; };
static ssize_t devkmsg_write(struct kiocb *iocb, struct iov_iter *from) static ssize_t devkmsg_write(struct kiocb *iocb, struct iov_iter *from)
...@@ -570,9 +663,6 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf, ...@@ -570,9 +663,6 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf,
{ {
struct devkmsg_user *user = file->private_data; struct devkmsg_user *user = file->private_data;
struct printk_log *msg; struct printk_log *msg;
u64 ts_usec;
size_t i;
char cont = '-';
size_t len; size_t len;
ssize_t ret; ssize_t ret;
...@@ -608,66 +698,13 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf, ...@@ -608,66 +698,13 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf,
} }
msg = log_from_idx(user->idx); msg = log_from_idx(user->idx);
ts_usec = msg->ts_nsec; len = msg_print_ext_header(user->buf, sizeof(user->buf),
do_div(ts_usec, 1000); msg, user->seq, user->prev);
len += msg_print_ext_body(user->buf + len, sizeof(user->buf) - len,
log_dict(msg), msg->dict_len,
log_text(msg), msg->text_len);
/*
* If we couldn't merge continuation line fragments during the print,
* export the stored flags to allow an optional external merge of the
* records. Merging the records isn't always neccessarily correct, like
* when we hit a race during printing. In most cases though, it produces
* better readable output. 'c' in the record flags mark the first
* fragment of a line, '+' the following.
*/
if (msg->flags & LOG_CONT && !(user->prev & LOG_CONT))
cont = 'c';
else if ((msg->flags & LOG_CONT) ||
((user->prev & LOG_CONT) && !(msg->flags & LOG_PREFIX)))
cont = '+';
len = sprintf(user->buf, "%u,%llu,%llu,%c;",
(msg->facility << 3) | msg->level,
user->seq, ts_usec, cont);
user->prev = msg->flags; user->prev = msg->flags;
/* escape non-printable characters */
for (i = 0; i < msg->text_len; i++) {
unsigned char c = log_text(msg)[i];
if (c < ' ' || c >= 127 || c == '\\')
len += sprintf(user->buf + len, "\\x%02x", c);
else
user->buf[len++] = c;
}
user->buf[len++] = '\n';
if (msg->dict_len) {
bool line = true;
for (i = 0; i < msg->dict_len; i++) {
unsigned char c = log_dict(msg)[i];
if (line) {
user->buf[len++] = ' ';
line = false;
}
if (c == '\0') {
user->buf[len++] = '\n';
line = true;
continue;
}
if (c < ' ' || c >= 127 || c == '\\') {
len += sprintf(user->buf + len, "\\x%02x", c);
continue;
}
user->buf[len++] = c;
}
user->buf[len++] = '\n';
}
user->idx = log_next(user->idx); user->idx = log_next(user->idx);
user->seq++; user->seq++;
raw_spin_unlock_irq(&logbuf_lock); raw_spin_unlock_irq(&logbuf_lock);
...@@ -1253,20 +1290,16 @@ static int syslog_print_all(char __user *buf, int size, bool clear) ...@@ -1253,20 +1290,16 @@ static int syslog_print_all(char __user *buf, int size, bool clear)
return len; return len;
} }
int do_syslog(int type, char __user *buf, int len, bool from_file) int do_syslog(int type, char __user *buf, int len, int source)
{ {
bool clear = false; bool clear = false;
static int saved_console_loglevel = LOGLEVEL_DEFAULT; static int saved_console_loglevel = LOGLEVEL_DEFAULT;
int error; int error;
error = check_syslog_permissions(type, from_file); error = check_syslog_permissions(type, source);
if (error) if (error)
goto out; goto out;
error = security_syslog(type);
if (error)
return error;
switch (type) { switch (type) {
case SYSLOG_ACTION_CLOSE: /* Close log */ case SYSLOG_ACTION_CLOSE: /* Close log */
break; break;
...@@ -1346,7 +1379,7 @@ int do_syslog(int type, char __user *buf, int len, bool from_file) ...@@ -1346,7 +1379,7 @@ int do_syslog(int type, char __user *buf, int len, bool from_file)
syslog_prev = 0; syslog_prev = 0;
syslog_partial = 0; syslog_partial = 0;
} }
if (from_file) { if (source == SYSLOG_FROM_PROC) {
/* /*
* Short-cut for poll("/proc/kmsg") which simply checks * Short-cut for poll("/proc/kmsg") which simply checks
* for pending data, not the size; return the count of * for pending data, not the size; return the count of
...@@ -1393,7 +1426,9 @@ SYSCALL_DEFINE3(syslog, int, type, char __user *, buf, int, len) ...@@ -1393,7 +1426,9 @@ SYSCALL_DEFINE3(syslog, int, type, char __user *, buf, int, len)
* log_buf[start] to log_buf[end - 1]. * log_buf[start] to log_buf[end - 1].
* The console_lock must be held. * The console_lock must be held.
*/ */
static void call_console_drivers(int level, const char *text, size_t len) static void call_console_drivers(int level,
const char *ext_text, size_t ext_len,
const char *text, size_t len)
{ {
struct console *con; struct console *con;
...@@ -1414,7 +1449,10 @@ static void call_console_drivers(int level, const char *text, size_t len) ...@@ -1414,7 +1449,10 @@ static void call_console_drivers(int level, const char *text, size_t len)
if (!cpu_online(smp_processor_id()) && if (!cpu_online(smp_processor_id()) &&
!(con->flags & CON_ANYTIME)) !(con->flags & CON_ANYTIME))
continue; continue;
con->write(con, text, len); if (con->flags & CON_EXTENDED)
con->write(con, ext_text, ext_len);
else
con->write(con, text, len);
} }
} }
...@@ -1557,8 +1595,12 @@ static bool cont_add(int facility, int level, const char *text, size_t len) ...@@ -1557,8 +1595,12 @@ static bool cont_add(int facility, int level, const char *text, size_t len)
if (cont.len && cont.flushed) if (cont.len && cont.flushed)
return false; return false;
if (cont.len + len > sizeof(cont.buf)) { /*
/* the line gets too long, split it up in separate records */ * If ext consoles are present, flush and skip in-kernel
* continuation. See nr_ext_console_drivers definition. Also, if
* the line gets too long, split it up into separate records.
*/
if (nr_ext_console_drivers || cont.len + len > sizeof(cont.buf)) {
cont_flush(LOG_CONT); cont_flush(LOG_CONT);
return false; return false;
} }
...@@ -1893,9 +1935,19 @@ static struct cont { ...@@ -1893,9 +1935,19 @@ static struct cont {
u8 level; u8 level;
bool flushed:1; bool flushed:1;
} cont; } cont;
static char *log_text(const struct printk_log *msg) { return NULL; }
static char *log_dict(const struct printk_log *msg) { return NULL; }
static struct printk_log *log_from_idx(u32 idx) { return NULL; } static struct printk_log *log_from_idx(u32 idx) { return NULL; }
static u32 log_next(u32 idx) { return 0; } static u32 log_next(u32 idx) { return 0; }
static void call_console_drivers(int level, const char *text, size_t len) {} static ssize_t msg_print_ext_header(char *buf, size_t size,
struct printk_log *msg, u64 seq,
enum log_flags prev_flags) { return 0; }
static ssize_t msg_print_ext_body(char *buf, size_t size,
char *dict, size_t dict_len,
char *text, size_t text_len) { return 0; }
static void call_console_drivers(int level,
const char *ext_text, size_t ext_len,
const char *text, size_t len) {}
static size_t msg_print_text(const struct printk_log *msg, enum log_flags prev, static size_t msg_print_text(const struct printk_log *msg, enum log_flags prev,
bool syslog, char *buf, size_t size) { return 0; } bool syslog, char *buf, size_t size) { return 0; }
static size_t cont_print_text(char *text, size_t size) { return 0; } static size_t cont_print_text(char *text, size_t size) { return 0; }
...@@ -2148,7 +2200,7 @@ static void console_cont_flush(char *text, size_t size) ...@@ -2148,7 +2200,7 @@ static void console_cont_flush(char *text, size_t size)
len = cont_print_text(text, size); len = cont_print_text(text, size);
raw_spin_unlock(&logbuf_lock); raw_spin_unlock(&logbuf_lock);
stop_critical_timings(); stop_critical_timings();
call_console_drivers(cont.level, text, len); call_console_drivers(cont.level, NULL, 0, text, len);
start_critical_timings(); start_critical_timings();
local_irq_restore(flags); local_irq_restore(flags);
return; return;
...@@ -2172,6 +2224,7 @@ static void console_cont_flush(char *text, size_t size) ...@@ -2172,6 +2224,7 @@ static void console_cont_flush(char *text, size_t size)
*/ */
void console_unlock(void) void console_unlock(void)
{ {
static char ext_text[CONSOLE_EXT_LOG_MAX];
static char text[LOG_LINE_MAX + PREFIX_MAX]; static char text[LOG_LINE_MAX + PREFIX_MAX];
static u64 seen_seq; static u64 seen_seq;
unsigned long flags; unsigned long flags;
...@@ -2190,6 +2243,7 @@ void console_unlock(void) ...@@ -2190,6 +2243,7 @@ void console_unlock(void)
again: again:
for (;;) { for (;;) {
struct printk_log *msg; struct printk_log *msg;
size_t ext_len = 0;
size_t len; size_t len;
int level; int level;
...@@ -2235,13 +2289,22 @@ void console_unlock(void) ...@@ -2235,13 +2289,22 @@ void console_unlock(void)
level = msg->level; level = msg->level;
len += msg_print_text(msg, console_prev, false, len += msg_print_text(msg, console_prev, false,
text + len, sizeof(text) - len); text + len, sizeof(text) - len);
if (nr_ext_console_drivers) {
ext_len = msg_print_ext_header(ext_text,
sizeof(ext_text),
msg, console_seq, console_prev);
ext_len += msg_print_ext_body(ext_text + ext_len,
sizeof(ext_text) - ext_len,
log_dict(msg), msg->dict_len,
log_text(msg), msg->text_len);
}
console_idx = log_next(console_idx); console_idx = log_next(console_idx);
console_seq++; console_seq++;
console_prev = msg->flags; console_prev = msg->flags;
raw_spin_unlock(&logbuf_lock); raw_spin_unlock(&logbuf_lock);
stop_critical_timings(); /* don't trace print latency */ stop_critical_timings(); /* don't trace print latency */
call_console_drivers(level, text, len); call_console_drivers(level, ext_text, ext_len, text, len);
start_critical_timings(); start_critical_timings();
local_irq_restore(flags); local_irq_restore(flags);
} }
...@@ -2497,6 +2560,11 @@ void register_console(struct console *newcon) ...@@ -2497,6 +2560,11 @@ void register_console(struct console *newcon)
newcon->next = console_drivers->next; newcon->next = console_drivers->next;
console_drivers->next = newcon; console_drivers->next = newcon;
} }
if (newcon->flags & CON_EXTENDED)
if (!nr_ext_console_drivers++)
pr_info("printk: continuation disabled due to ext consoles, expect more fragments in /dev/kmsg\n");
if (newcon->flags & CON_PRINTBUFFER) { if (newcon->flags & CON_PRINTBUFFER) {
/* /*
* console_unlock(); will print out the buffered messages * console_unlock(); will print out the buffered messages
...@@ -2569,6 +2637,9 @@ int unregister_console(struct console *console) ...@@ -2569,6 +2637,9 @@ int unregister_console(struct console *console)
} }
} }
if (!res && (console->flags & CON_EXTENDED))
nr_ext_console_drivers--;
/* /*
* If this isn't the last console and it has CON_CONSDEV set, we * If this isn't the last console and it has CON_CONSDEV set, we
* need to set it on the next preferred console. * need to set it on the next preferred console.
......
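On the reader side, every record handed out by devkmsg_read() now starts with the header built by msg_print_ext_header(). A minimal userspace sketch (not from the patch) that consumes the documented "<level>,<seqnum>,<timestamp>,<contflag>;" prefix, assuming the caller is permitted to open /dev/kmsg:

/* Each read() returns one record; the numeric prefix is facility << 3 | level. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[8192];
	int fd = open("/dev/kmsg", O_RDONLY | O_NONBLOCK);

	if (fd < 0)
		return 1;

	for (;;) {
		ssize_t n = read(fd, buf, sizeof(buf) - 1);
		unsigned int prefix;
		unsigned long long seq, ts_usec;
		char cont;

		if (n <= 0)		/* EAGAIN once the backlog is drained */
			break;
		buf[n] = '\0';
		if (sscanf(buf, "%u,%llu,%llu,%c;", &prefix, &seq, &ts_usec, &cont) == 4)
			printf("level=%u seq=%llu ts=%lluus cont=%c\n",
			       prefix & 7, seq, ts_usec, cont);
	}
	close(fd);
	return 0;
}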
...@@ -1722,7 +1722,6 @@ static int prctl_set_mm_exe_file(struct mm_struct *mm, unsigned int fd) ...@@ -1722,7 +1722,6 @@ static int prctl_set_mm_exe_file(struct mm_struct *mm, unsigned int fd)
goto exit; goto exit;
} }
#ifdef CONFIG_CHECKPOINT_RESTORE
/* /*
* WARNING: we don't require any capability here so be very careful * WARNING: we don't require any capability here so be very careful
* in what is allowed for modification from userspace. * in what is allowed for modification from userspace.
...@@ -1818,6 +1817,7 @@ static int validate_prctl_map(struct prctl_mm_map *prctl_map) ...@@ -1818,6 +1817,7 @@ static int validate_prctl_map(struct prctl_mm_map *prctl_map)
return error; return error;
} }
#ifdef CONFIG_CHECKPOINT_RESTORE
static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data_size) static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data_size)
{ {
struct prctl_mm_map prctl_map = { .exe_fd = (u32)-1, }; struct prctl_mm_map prctl_map = { .exe_fd = (u32)-1, };
...@@ -1902,10 +1902,41 @@ static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data ...@@ -1902,10 +1902,41 @@ static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data
} }
#endif /* CONFIG_CHECKPOINT_RESTORE */ #endif /* CONFIG_CHECKPOINT_RESTORE */
static int prctl_set_auxv(struct mm_struct *mm, unsigned long addr,
unsigned long len)
{
/*
* This doesn't move the auxiliary vector itself since it's pinned to
* mm_struct, but it permits filling the vector with new values. It's
* up to the caller to provide sane values here, otherwise userspace
* tools which use this vector might be unhappy.
*/
unsigned long user_auxv[AT_VECTOR_SIZE];
if (len > sizeof(user_auxv))
return -EINVAL;
if (copy_from_user(user_auxv, (const void __user *)addr, len))
return -EFAULT;
/* Make sure the last entry is always AT_NULL */
user_auxv[AT_VECTOR_SIZE - 2] = 0;
user_auxv[AT_VECTOR_SIZE - 1] = 0;
BUILD_BUG_ON(sizeof(user_auxv) != sizeof(mm->saved_auxv));
task_lock(current);
memcpy(mm->saved_auxv, user_auxv, len);
task_unlock(current);
return 0;
}
static int prctl_set_mm(int opt, unsigned long addr, static int prctl_set_mm(int opt, unsigned long addr,
unsigned long arg4, unsigned long arg5) unsigned long arg4, unsigned long arg5)
{ {
struct mm_struct *mm = current->mm; struct mm_struct *mm = current->mm;
struct prctl_mm_map prctl_map;
struct vm_area_struct *vma; struct vm_area_struct *vma;
int error; int error;
...@@ -1925,6 +1956,9 @@ static int prctl_set_mm(int opt, unsigned long addr, ...@@ -1925,6 +1956,9 @@ static int prctl_set_mm(int opt, unsigned long addr,
if (opt == PR_SET_MM_EXE_FILE) if (opt == PR_SET_MM_EXE_FILE)
return prctl_set_mm_exe_file(mm, (unsigned int)addr); return prctl_set_mm_exe_file(mm, (unsigned int)addr);
if (opt == PR_SET_MM_AUXV)
return prctl_set_auxv(mm, addr, arg4);
if (addr >= TASK_SIZE || addr < mmap_min_addr) if (addr >= TASK_SIZE || addr < mmap_min_addr)
return -EINVAL; return -EINVAL;
...@@ -1933,42 +1967,64 @@ static int prctl_set_mm(int opt, unsigned long addr, ...@@ -1933,42 +1967,64 @@ static int prctl_set_mm(int opt, unsigned long addr,
down_read(&mm->mmap_sem); down_read(&mm->mmap_sem);
vma = find_vma(mm, addr); vma = find_vma(mm, addr);
prctl_map.start_code = mm->start_code;
prctl_map.end_code = mm->end_code;
prctl_map.start_data = mm->start_data;
prctl_map.end_data = mm->end_data;
prctl_map.start_brk = mm->start_brk;
prctl_map.brk = mm->brk;
prctl_map.start_stack = mm->start_stack;
prctl_map.arg_start = mm->arg_start;
prctl_map.arg_end = mm->arg_end;
prctl_map.env_start = mm->env_start;
prctl_map.env_end = mm->env_end;
prctl_map.auxv = NULL;
prctl_map.auxv_size = 0;
prctl_map.exe_fd = -1;
switch (opt) { switch (opt) {
case PR_SET_MM_START_CODE: case PR_SET_MM_START_CODE:
mm->start_code = addr; prctl_map.start_code = addr;
break; break;
case PR_SET_MM_END_CODE: case PR_SET_MM_END_CODE:
mm->end_code = addr; prctl_map.end_code = addr;
break; break;
case PR_SET_MM_START_DATA: case PR_SET_MM_START_DATA:
mm->start_data = addr; prctl_map.start_data = addr;
break; break;
case PR_SET_MM_END_DATA: case PR_SET_MM_END_DATA:
mm->end_data = addr; prctl_map.end_data = addr;
break;
case PR_SET_MM_START_STACK:
prctl_map.start_stack = addr;
break; break;
case PR_SET_MM_START_BRK: case PR_SET_MM_START_BRK:
if (addr <= mm->end_data) prctl_map.start_brk = addr;
goto out;
if (check_data_rlimit(rlimit(RLIMIT_DATA), mm->brk, addr,
mm->end_data, mm->start_data))
goto out;
mm->start_brk = addr;
break; break;
case PR_SET_MM_BRK: case PR_SET_MM_BRK:
if (addr <= mm->end_data) prctl_map.brk = addr;
goto out;
if (check_data_rlimit(rlimit(RLIMIT_DATA), addr, mm->start_brk,
mm->end_data, mm->start_data))
goto out;
mm->brk = addr;
break; break;
case PR_SET_MM_ARG_START:
prctl_map.arg_start = addr;
break;
case PR_SET_MM_ARG_END:
prctl_map.arg_end = addr;
break;
case PR_SET_MM_ENV_START:
prctl_map.env_start = addr;
break;
case PR_SET_MM_ENV_END:
prctl_map.env_end = addr;
break;
default:
goto out;
}
error = validate_prctl_map(&prctl_map);
if (error)
goto out;
switch (opt) {
/* /*
* If command line arguments and environment * If command line arguments and environment
* are placed somewhere else on stack, we can * are placed somewhere else on stack, we can
...@@ -1985,52 +2041,20 @@ static int prctl_set_mm(int opt, unsigned long addr, ...@@ -1985,52 +2041,20 @@ static int prctl_set_mm(int opt, unsigned long addr,
error = -EFAULT; error = -EFAULT;
goto out; goto out;
} }
if (opt == PR_SET_MM_START_STACK)
mm->start_stack = addr;
else if (opt == PR_SET_MM_ARG_START)
mm->arg_start = addr;
else if (opt == PR_SET_MM_ARG_END)
mm->arg_end = addr;
else if (opt == PR_SET_MM_ENV_START)
mm->env_start = addr;
else if (opt == PR_SET_MM_ENV_END)
mm->env_end = addr;
break;
/*
* This doesn't move auxiliary vector itself
* since it's pinned to mm_struct, but allow
* to fill vector with new values. It's up
* to a caller to provide sane values here
* otherwise user space tools which use this
* vector might be unhappy.
*/
case PR_SET_MM_AUXV: {
unsigned long user_auxv[AT_VECTOR_SIZE];
if (arg4 > sizeof(user_auxv))
goto out;
up_read(&mm->mmap_sem);
if (copy_from_user(user_auxv, (const void __user *)addr, arg4))
return -EFAULT;
/* Make sure the last entry is always AT_NULL */
user_auxv[AT_VECTOR_SIZE - 2] = 0;
user_auxv[AT_VECTOR_SIZE - 1] = 0;
BUILD_BUG_ON(sizeof(user_auxv) != sizeof(mm->saved_auxv));
task_lock(current);
memcpy(mm->saved_auxv, user_auxv, arg4);
task_unlock(current);
return 0;
}
default:
goto out;
} }
mm->start_code = prctl_map.start_code;
mm->end_code = prctl_map.end_code;
mm->start_data = prctl_map.start_data;
mm->end_data = prctl_map.end_data;
mm->start_brk = prctl_map.start_brk;
mm->brk = prctl_map.brk;
mm->start_stack = prctl_map.start_stack;
mm->arg_start = prctl_map.arg_start;
mm->arg_end = prctl_map.arg_end;
mm->env_start = prctl_map.env_start;
mm->env_end = prctl_map.env_end;
error = 0; error = 0;
out: out:
up_read(&mm->mmap_sem); up_read(&mm->mmap_sem);
......
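Userspace still reaches this code through prctl(PR_SET_MM, ...); only the kernel-side bookkeeping changed, since every single-field update is now validated as a complete prctl_mm_map. A hedged sketch of a caller moving the reported argv window (the addresses are placeholders that must lie inside the caller's stack, and CAP_SYS_RESOURCE is required):

/* Illustration only: arg_start/arg_end are caller-supplied addresses that
 * validate_prctl_map() will check against the rest of the mm layout. */
#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

static int set_arg_window(unsigned long arg_start, unsigned long arg_end)
{
	if (prctl(PR_SET_MM, PR_SET_MM_ARG_START, arg_start, 0, 0)) {
		perror("PR_SET_MM_ARG_START");
		return -1;
	}
	if (prctl(PR_SET_MM, PR_SET_MM_ARG_END, arg_end, 0, 0)) {
		perror("PR_SET_MM_ARG_END");
		return -1;
	}
	return 0;
}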
...@@ -439,7 +439,7 @@ int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev, ...@@ -439,7 +439,7 @@ int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
{ {
struct blk_trace *old_bt, *bt = NULL; struct blk_trace *old_bt, *bt = NULL;
struct dentry *dir = NULL; struct dentry *dir = NULL;
int ret, i; int ret;
if (!buts->buf_size || !buts->buf_nr) if (!buts->buf_size || !buts->buf_nr)
return -EINVAL; return -EINVAL;
...@@ -451,9 +451,7 @@ int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev, ...@@ -451,9 +451,7 @@ int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
* some device names have larger paths - convert the slashes * some device names have larger paths - convert the slashes
* to underscores for this to work as expected * to underscores for this to work as expected
*/ */
for (i = 0; i < strlen(buts->name); i++) strreplace(buts->name, '/', '_');
if (buts->name[i] == '/')
buts->name[i] = '_';
bt = kzalloc(sizeof(*bt), GFP_KERNEL); bt = kzalloc(sizeof(*bt), GFP_KERNEL);
if (!bt) if (!bt)
......
...@@ -2082,7 +2082,7 @@ struct function_filter_data { ...@@ -2082,7 +2082,7 @@ struct function_filter_data {
static char ** static char **
ftrace_function_filter_re(char *buf, int len, int *count) ftrace_function_filter_re(char *buf, int len, int *count)
{ {
char *str, *sep, **re; char *str, **re;
str = kstrndup(buf, len, GFP_KERNEL); str = kstrndup(buf, len, GFP_KERNEL);
if (!str) if (!str)
...@@ -2092,8 +2092,7 @@ ftrace_function_filter_re(char *buf, int len, int *count) ...@@ -2092,8 +2092,7 @@ ftrace_function_filter_re(char *buf, int len, int *count)
* The argv_split function takes white space * The argv_split function takes white space
* as a separator, so convert ',' into spaces. * as a separator, so convert ',' into spaces.
*/ */
while ((sep = strchr(str, ','))) strreplace(str, ',', ' ');
*sep = ' ';
re = argv_split(GFP_KERNEL, str, count); re = argv_split(GFP_KERNEL, str, count);
kfree(str); kfree(str);
......
...@@ -462,19 +462,20 @@ EXPORT_SYMBOL(bitmap_parse_user); ...@@ -462,19 +462,20 @@ EXPORT_SYMBOL(bitmap_parse_user);
* Output format is a comma-separated list of decimal numbers and * Output format is a comma-separated list of decimal numbers and
* ranges if list is specified or hex digits grouped into comma-separated * ranges if list is specified or hex digits grouped into comma-separated
* sets of 8 digits/set. Returns the number of characters written to buf. * sets of 8 digits/set. Returns the number of characters written to buf.
*
* It is assumed that @buf is a pointer into a PAGE_SIZE area and that
* sufficient storage remains at @buf to accommodate the
* bitmap_print_to_pagebuf() output.
*/ */
int bitmap_print_to_pagebuf(bool list, char *buf, const unsigned long *maskp, int bitmap_print_to_pagebuf(bool list, char *buf, const unsigned long *maskp,
int nmaskbits) int nmaskbits)
{ {
ptrdiff_t len = PTR_ALIGN(buf + PAGE_SIZE - 1, PAGE_SIZE) - buf - 2; ptrdiff_t len = PTR_ALIGN(buf + PAGE_SIZE - 1, PAGE_SIZE) - buf;
int n = 0; int n = 0;
if (len > 1) { if (len > 1)
n = list ? scnprintf(buf, len, "%*pbl", nmaskbits, maskp) : n = list ? scnprintf(buf, len, "%*pbl\n", nmaskbits, maskp) :
scnprintf(buf, len, "%*pb", nmaskbits, maskp); scnprintf(buf, len, "%*pb\n", nmaskbits, maskp);
buf[n++] = '\n';
buf[n] = '\0';
}
return n; return n;
} }
EXPORT_SYMBOL(bitmap_print_to_pagebuf); EXPORT_SYMBOL(bitmap_print_to_pagebuf);
...@@ -506,12 +507,12 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen, ...@@ -506,12 +507,12 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen,
unsigned a, b; unsigned a, b;
int c, old_c, totaldigits; int c, old_c, totaldigits;
const char __user __force *ubuf = (const char __user __force *)buf; const char __user __force *ubuf = (const char __user __force *)buf;
int exp_digit, in_range; int at_start, in_range;
totaldigits = c = 0; totaldigits = c = 0;
bitmap_zero(maskp, nmaskbits); bitmap_zero(maskp, nmaskbits);
do { do {
exp_digit = 1; at_start = 1;
in_range = 0; in_range = 0;
a = b = 0; a = b = 0;
...@@ -540,11 +541,10 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen, ...@@ -540,11 +541,10 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen,
break; break;
if (c == '-') { if (c == '-') {
if (exp_digit || in_range) if (at_start || in_range)
return -EINVAL; return -EINVAL;
b = 0; b = 0;
in_range = 1; in_range = 1;
exp_digit = 1;
continue; continue;
} }
...@@ -554,16 +554,18 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen, ...@@ -554,16 +554,18 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen,
b = b * 10 + (c - '0'); b = b * 10 + (c - '0');
if (!in_range) if (!in_range)
a = b; a = b;
exp_digit = 0; at_start = 0;
totaldigits++; totaldigits++;
} }
if (!(a <= b)) if (!(a <= b))
return -EINVAL; return -EINVAL;
if (b >= nmaskbits) if (b >= nmaskbits)
return -ERANGE; return -ERANGE;
while (a <= b) { if (!at_start) {
set_bit(a, maskp); while (a <= b) {
a++; set_bit(a, maskp);
a++;
}
} }
} while (buflen && c == ','); } while (buflen && c == ',');
return 0; return 0;
......
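The at_start flag means a range is only committed once at least one digit has been parsed, so an empty string or a dangling comma now produces an empty mask instead of spuriously setting bit 0. A small kernel-style sketch of the caller-visible behaviour, assuming a 128-bit mask:

/* "0-3,8" sets bits 0..3 and 8; "" now parses to an empty bitmap. */
#include <linux/bitmap.h>

static DECLARE_BITMAP(mask, 128);

static int count_listed_bits(const char *list)
{
	int err = bitmap_parselist(list, mask, 128);

	if (err)
		return err;	/* -EINVAL on malformed input, -ERANGE if out of range */
	return bitmap_weight(mask, 128);
}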
...@@ -257,23 +257,20 @@ static int kobject_add_internal(struct kobject *kobj) ...@@ -257,23 +257,20 @@ static int kobject_add_internal(struct kobject *kobj)
int kobject_set_name_vargs(struct kobject *kobj, const char *fmt, int kobject_set_name_vargs(struct kobject *kobj, const char *fmt,
va_list vargs) va_list vargs)
{ {
const char *old_name = kobj->name;
char *s; char *s;
if (kobj->name && !fmt) if (kobj->name && !fmt)
return 0; return 0;
kobj->name = kvasprintf(GFP_KERNEL, fmt, vargs); s = kvasprintf(GFP_KERNEL, fmt, vargs);
if (!kobj->name) { if (!s)
kobj->name = old_name;
return -ENOMEM; return -ENOMEM;
}
/* ewww... some of these buggers have '/' in the name ... */ /* ewww... some of these buggers have '/' in the name ... */
while ((s = strchr(kobj->name, '/'))) strreplace(s, '/', '!');
s[0] = '!'; kfree(kobj->name);
kobj->name = s;
kfree(old_name);
return 0; return 0;
} }
......
...@@ -65,7 +65,8 @@ static struct kmem_cache *radix_tree_node_cachep; ...@@ -65,7 +65,8 @@ static struct kmem_cache *radix_tree_node_cachep;
*/ */
struct radix_tree_preload { struct radix_tree_preload {
int nr; int nr;
struct radix_tree_node *nodes[RADIX_TREE_PRELOAD_SIZE]; /* nodes->private_data points to next preallocated node */
struct radix_tree_node *nodes;
}; };
static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, }; static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
...@@ -197,8 +198,9 @@ radix_tree_node_alloc(struct radix_tree_root *root) ...@@ -197,8 +198,9 @@ radix_tree_node_alloc(struct radix_tree_root *root)
*/ */
rtp = this_cpu_ptr(&radix_tree_preloads); rtp = this_cpu_ptr(&radix_tree_preloads);
if (rtp->nr) { if (rtp->nr) {
ret = rtp->nodes[rtp->nr - 1]; ret = rtp->nodes;
rtp->nodes[rtp->nr - 1] = NULL; rtp->nodes = ret->private_data;
ret->private_data = NULL;
rtp->nr--; rtp->nr--;
} }
/* /*
...@@ -257,17 +259,20 @@ static int __radix_tree_preload(gfp_t gfp_mask) ...@@ -257,17 +259,20 @@ static int __radix_tree_preload(gfp_t gfp_mask)
preempt_disable(); preempt_disable();
rtp = this_cpu_ptr(&radix_tree_preloads); rtp = this_cpu_ptr(&radix_tree_preloads);
while (rtp->nr < ARRAY_SIZE(rtp->nodes)) { while (rtp->nr < RADIX_TREE_PRELOAD_SIZE) {
preempt_enable(); preempt_enable();
node = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask); node = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
if (node == NULL) if (node == NULL)
goto out; goto out;
preempt_disable(); preempt_disable();
rtp = this_cpu_ptr(&radix_tree_preloads); rtp = this_cpu_ptr(&radix_tree_preloads);
if (rtp->nr < ARRAY_SIZE(rtp->nodes)) if (rtp->nr < RADIX_TREE_PRELOAD_SIZE) {
rtp->nodes[rtp->nr++] = node; node->private_data = rtp->nodes;
else rtp->nodes = node;
rtp->nr++;
} else {
kmem_cache_free(radix_tree_node_cachep, node); kmem_cache_free(radix_tree_node_cachep, node);
}
} }
ret = 0; ret = 0;
out: out:
...@@ -1463,15 +1468,16 @@ static int radix_tree_callback(struct notifier_block *nfb, ...@@ -1463,15 +1468,16 @@ static int radix_tree_callback(struct notifier_block *nfb,
{ {
int cpu = (long)hcpu; int cpu = (long)hcpu;
struct radix_tree_preload *rtp; struct radix_tree_preload *rtp;
struct radix_tree_node *node;
/* Free per-cpu pool of preloaded nodes */ /* Free per-cpu pool of preloaded nodes */
if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) { if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
rtp = &per_cpu(radix_tree_preloads, cpu); rtp = &per_cpu(radix_tree_preloads, cpu);
while (rtp->nr) { while (rtp->nr) {
kmem_cache_free(radix_tree_node_cachep, node = rtp->nodes;
rtp->nodes[rtp->nr-1]); rtp->nodes = node->private_data;
rtp->nodes[rtp->nr-1] = NULL; kmem_cache_free(radix_tree_node_cachep, node);
rtp->nr--; rtp->nr--;
} }
} }
return NOTIFY_OK; return NOTIFY_OK;
......
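Callers are unaffected: the preload pool is still filled outside the lock and drained inside radix_tree_node_alloc(); only its storage changed from a fixed array to a list chained through node->private_data. For reference, the usual insertion pattern that relies on this pool looks roughly like the sketch below (the tree and lock names are made up):

/* Preallocate with preemption enabled, insert under the caller's lock.
 * radix_tree_preload() leaves preemption disabled on success, so the
 * paired radix_tree_preload_end() must follow the critical section. */
#include <linux/radix-tree.h>
#include <linux/spinlock.h>

static RADIX_TREE(my_tree, GFP_ATOMIC);
static DEFINE_SPINLOCK(my_lock);

static int add_item(unsigned long index, void *item)
{
	int err = radix_tree_preload(GFP_KERNEL);

	if (err)
		return err;

	spin_lock(&my_lock);
	err = radix_tree_insert(&my_tree, index, item);
	spin_unlock(&my_lock);

	radix_tree_preload_end();
	return err;
}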
...@@ -8,6 +8,12 @@ ...@@ -8,6 +8,12 @@
#include <linux/export.h> #include <linux/export.h>
#include <linux/sort.h> #include <linux/sort.h>
static int alignment_ok(const void *base, int align)
{
return IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
((unsigned long)base & (align - 1)) == 0;
}
static void u32_swap(void *a, void *b, int size) static void u32_swap(void *a, void *b, int size)
{ {
u32 t = *(u32 *)a; u32 t = *(u32 *)a;
...@@ -15,6 +21,13 @@ static void u32_swap(void *a, void *b, int size) ...@@ -15,6 +21,13 @@ static void u32_swap(void *a, void *b, int size)
*(u32 *)b = t; *(u32 *)b = t;
} }
static void u64_swap(void *a, void *b, int size)
{
u64 t = *(u64 *)a;
*(u64 *)a = *(u64 *)b;
*(u64 *)b = t;
}
static void generic_swap(void *a, void *b, int size) static void generic_swap(void *a, void *b, int size)
{ {
char t; char t;
...@@ -50,8 +63,14 @@ void sort(void *base, size_t num, size_t size, ...@@ -50,8 +63,14 @@ void sort(void *base, size_t num, size_t size,
/* pre-scale counters for performance */ /* pre-scale counters for performance */
int i = (num/2 - 1) * size, n = num * size, c, r; int i = (num/2 - 1) * size, n = num * size, c, r;
if (!swap_func) if (!swap_func) {
swap_func = (size == 4 ? u32_swap : generic_swap); if (size == 4 && alignment_ok(base, 4))
swap_func = u32_swap;
else if (size == 8 && alignment_ok(base, 8))
swap_func = u64_swap;
else
swap_func = generic_swap;
}
/* heapify */ /* heapify */
for ( ; i >= 0; i -= size) { for ( ; i >= 0; i -= size) {
......
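Callers that pass a NULL swap_func now also get a word-sized swap for 8-byte elements, provided the array is suitably aligned (or the architecture handles unaligned access efficiently). A short sketch of such a caller, sorting an array of u64 keys:

/* size == 8 and natural alignment make sort() pick u64_swap() above
 * instead of the byte-by-byte generic_swap(). */
#include <linux/sort.h>
#include <linux/types.h>

static int cmp_u64(const void *a, const void *b)
{
	u64 x = *(const u64 *)a;
	u64 y = *(const u64 *)b;

	return x < y ? -1 : (x > y ? 1 : 0);
}

static void sort_keys(u64 *keys, size_t nr)
{
	sort(keys, nr, sizeof(*keys), cmp_u64, NULL);
}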
...@@ -849,3 +849,20 @@ void *memchr_inv(const void *start, int c, size_t bytes) ...@@ -849,3 +849,20 @@ void *memchr_inv(const void *start, int c, size_t bytes)
return check_bytes8(start, value, bytes % 8); return check_bytes8(start, value, bytes % 8);
} }
EXPORT_SYMBOL(memchr_inv); EXPORT_SYMBOL(memchr_inv);
/**
* strreplace - Replace all occurrences of character in string.
* @s: The string to operate on.
* @old: The character being replaced.
* @new: The character @old is replaced with.
*
* Returns pointer to the nul byte at the end of @s.
*/
char *strreplace(char *s, char old, char new)
{
for (; *s; ++s)
if (*s == old)
*s = new;
return s;
}
EXPORT_SYMBOL(strreplace);
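The return value is the end of the string, which callers can use to keep formatting without a second strlen(). A tiny usage sketch in the spirit of the blktrace and kobject conversions above:

/* Sanitize a device name in place, e.g. "dm-1/queue" -> "dm-1_queue";
 * the returned pointer is the terminating NUL, so the difference from
 * the start is the resulting string length. */
#include <linux/string.h>

static size_t sanitize_name(char *name)
{
	return strreplace(name, '/', '_') - name;
}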
...@@ -25,19 +25,19 @@ static const char * const test_data_1_le[] __initconst = { ...@@ -25,19 +25,19 @@ static const char * const test_data_1_le[] __initconst = {
"4c", "d1", "19", "99", "43", "b1", "af", "0c", "4c", "d1", "19", "99", "43", "b1", "af", "0c",
}; };
static const char *test_data_2_le[] __initdata = { static const char * const test_data_2_le[] __initconst = {
"32be", "7bdb", "180a", "b293", "32be", "7bdb", "180a", "b293",
"ba70", "24c4", "837d", "9b34", "ba70", "24c4", "837d", "9b34",
"9ca6", "ad31", "0f9c", "e9ac", "9ca6", "ad31", "0f9c", "e9ac",
"d14c", "9919", "b143", "0caf", "d14c", "9919", "b143", "0caf",
}; };
static const char *test_data_4_le[] __initdata = { static const char * const test_data_4_le[] __initconst = {
"7bdb32be", "b293180a", "24c4ba70", "9b34837d", "7bdb32be", "b293180a", "24c4ba70", "9b34837d",
"ad319ca6", "e9ac0f9c", "9919d14c", "0cafb143", "ad319ca6", "e9ac0f9c", "9919d14c", "0cafb143",
}; };
static const char *test_data_8_le[] __initdata = { static const char * const test_data_8_le[] __initconst = {
"b293180a7bdb32be", "9b34837d24c4ba70", "b293180a7bdb32be", "9b34837d24c4ba70",
"e9ac0f9cad319ca6", "0cafb1439919d14c", "e9ac0f9cad319ca6", "0cafb1439919d14c",
}; };
......
...@@ -975,7 +975,6 @@ static void update_and_free_page(struct hstate *h, struct page *page) ...@@ -975,7 +975,6 @@ static void update_and_free_page(struct hstate *h, struct page *page)
destroy_compound_gigantic_page(page, huge_page_order(h)); destroy_compound_gigantic_page(page, huge_page_order(h));
free_gigantic_page(page, huge_page_order(h)); free_gigantic_page(page, huge_page_order(h));
} else { } else {
arch_release_hugepage(page);
__free_pages(page, huge_page_order(h)); __free_pages(page, huge_page_order(h));
} }
} }
...@@ -1160,10 +1159,6 @@ static struct page *alloc_fresh_huge_page_node(struct hstate *h, int nid) ...@@ -1160,10 +1159,6 @@ static struct page *alloc_fresh_huge_page_node(struct hstate *h, int nid)
__GFP_REPEAT|__GFP_NOWARN, __GFP_REPEAT|__GFP_NOWARN,
huge_page_order(h)); huge_page_order(h));
if (page) { if (page) {
if (arch_prepare_hugepage(page)) {
__free_pages(page, huge_page_order(h));
return NULL;
}
prep_new_huge_page(h, page, nid); prep_new_huge_page(h, page, nid);
} }
...@@ -1315,11 +1310,6 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid) ...@@ -1315,11 +1310,6 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
htlb_alloc_mask(h)|__GFP_COMP|__GFP_THISNODE| htlb_alloc_mask(h)|__GFP_COMP|__GFP_THISNODE|
__GFP_REPEAT|__GFP_NOWARN, huge_page_order(h)); __GFP_REPEAT|__GFP_NOWARN, huge_page_order(h));
if (page && arch_prepare_hugepage(page)) {
__free_pages(page, huge_page_order(h));
page = NULL;
}
spin_lock(&hugetlb_lock); spin_lock(&hugetlb_lock);
if (page) { if (page) {
INIT_LIST_HEAD(&page->lru); INIT_LIST_HEAD(&page->lru);
......
...@@ -6,7 +6,6 @@ ...@@ -6,7 +6,6 @@
#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
#define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1) #define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
#define KASAN_FREE_PAGE 0xFF /* page was freed */
#define KASAN_FREE_PAGE 0xFF /* page was freed */ #define KASAN_FREE_PAGE 0xFF /* page was freed */
#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */ #define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */ #define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
......
...@@ -97,6 +97,10 @@ struct zbud_pool { ...@@ -97,6 +97,10 @@ struct zbud_pool {
struct list_head lru; struct list_head lru;
u64 pages_nr; u64 pages_nr;
struct zbud_ops *ops; struct zbud_ops *ops;
#ifdef CONFIG_ZPOOL
struct zpool *zpool;
struct zpool_ops *zpool_ops;
#endif
}; };
/* /*
...@@ -123,7 +127,10 @@ struct zbud_header { ...@@ -123,7 +127,10 @@ struct zbud_header {
static int zbud_zpool_evict(struct zbud_pool *pool, unsigned long handle) static int zbud_zpool_evict(struct zbud_pool *pool, unsigned long handle)
{ {
return zpool_evict(pool, handle); if (pool->zpool && pool->zpool_ops && pool->zpool_ops->evict)
return pool->zpool_ops->evict(pool->zpool, handle);
else
return -ENOENT;
} }
static struct zbud_ops zbud_zpool_ops = { static struct zbud_ops zbud_zpool_ops = {
...@@ -131,9 +138,17 @@ static struct zbud_ops zbud_zpool_ops = { ...@@ -131,9 +138,17 @@ static struct zbud_ops zbud_zpool_ops = {
}; };
static void *zbud_zpool_create(char *name, gfp_t gfp, static void *zbud_zpool_create(char *name, gfp_t gfp,
struct zpool_ops *zpool_ops) struct zpool_ops *zpool_ops,
struct zpool *zpool)
{ {
return zbud_create_pool(gfp, zpool_ops ? &zbud_zpool_ops : NULL); struct zbud_pool *pool;
pool = zbud_create_pool(gfp, zpool_ops ? &zbud_zpool_ops : NULL);
if (pool) {
pool->zpool = zpool;
pool->zpool_ops = zpool_ops;
}
return pool;
} }
static void zbud_zpool_destroy(void *pool) static void zbud_zpool_destroy(void *pool)
...@@ -292,7 +307,7 @@ struct zbud_pool *zbud_create_pool(gfp_t gfp, struct zbud_ops *ops) ...@@ -292,7 +307,7 @@ struct zbud_pool *zbud_create_pool(gfp_t gfp, struct zbud_ops *ops)
struct zbud_pool *pool; struct zbud_pool *pool;
int i; int i;
pool = kmalloc(sizeof(struct zbud_pool), gfp); pool = kzalloc(sizeof(struct zbud_pool), gfp);
if (!pool) if (!pool)
return NULL; return NULL;
spin_lock_init(&pool->lock); spin_lock_init(&pool->lock);
......
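Any other zpool backend follows the same pattern: the new zpool argument to the create callback is stashed so the backend can invoke the user's evict callback directly, replacing the global zpool_evict() lookup removed below. A hedged sketch of the glue for a hypothetical backend (the "toy" names are invented; only the wiring mirrors zbud):

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/zpool.h>

struct toy_pool {
	struct zpool *zpool;		/* handle to pass back to the evict callback */
	struct zpool_ops *zpool_ops;	/* user's callbacks, may be NULL */
	gfp_t gfp;
};

static void *toy_zpool_create(char *name, gfp_t gfp,
			      struct zpool_ops *zpool_ops,
			      struct zpool *zpool)
{
	struct toy_pool *pool = kzalloc(sizeof(*pool), gfp);

	if (pool) {
		pool->zpool = zpool;
		pool->zpool_ops = zpool_ops;
		pool->gfp = gfp;
	}
	return pool;
}

static int toy_evict_one(struct toy_pool *pool, unsigned long handle)
{
	if (pool->zpool_ops && pool->zpool_ops->evict)
		return pool->zpool_ops->evict(pool->zpool, handle);
	return -ENOENT;
}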
...@@ -73,33 +73,6 @@ int zpool_unregister_driver(struct zpool_driver *driver) ...@@ -73,33 +73,6 @@ int zpool_unregister_driver(struct zpool_driver *driver)
} }
EXPORT_SYMBOL(zpool_unregister_driver); EXPORT_SYMBOL(zpool_unregister_driver);
/**
* zpool_evict() - evict callback from a zpool implementation.
* @pool: pool to evict from.
* @handle: handle to evict.
*
* This can be used by zpool implementations to call the
* user's evict zpool_ops struct evict callback.
*/
int zpool_evict(void *pool, unsigned long handle)
{
struct zpool *zpool;
spin_lock(&pools_lock);
list_for_each_entry(zpool, &pools_head, list) {
if (zpool->pool == pool) {
spin_unlock(&pools_lock);
if (!zpool->ops || !zpool->ops->evict)
return -EINVAL;
return zpool->ops->evict(zpool, handle);
}
}
spin_unlock(&pools_lock);
return -ENOENT;
}
EXPORT_SYMBOL(zpool_evict);
static struct zpool_driver *zpool_get_driver(char *type) static struct zpool_driver *zpool_get_driver(char *type)
{ {
struct zpool_driver *driver; struct zpool_driver *driver;
...@@ -147,7 +120,7 @@ struct zpool *zpool_create_pool(char *type, char *name, gfp_t gfp, ...@@ -147,7 +120,7 @@ struct zpool *zpool_create_pool(char *type, char *name, gfp_t gfp,
struct zpool_driver *driver; struct zpool_driver *driver;
struct zpool *zpool; struct zpool *zpool;
pr_info("creating pool type %s\n", type); pr_debug("creating pool type %s\n", type);
driver = zpool_get_driver(type); driver = zpool_get_driver(type);
...@@ -170,7 +143,7 @@ struct zpool *zpool_create_pool(char *type, char *name, gfp_t gfp, ...@@ -170,7 +143,7 @@ struct zpool *zpool_create_pool(char *type, char *name, gfp_t gfp,
zpool->type = driver->type; zpool->type = driver->type;
zpool->driver = driver; zpool->driver = driver;
zpool->pool = driver->create(name, gfp, ops); zpool->pool = driver->create(name, gfp, ops, zpool);
zpool->ops = ops; zpool->ops = ops;
if (!zpool->pool) { if (!zpool->pool) {
...@@ -180,7 +153,7 @@ struct zpool *zpool_create_pool(char *type, char *name, gfp_t gfp, ...@@ -180,7 +153,7 @@ struct zpool *zpool_create_pool(char *type, char *name, gfp_t gfp,
return NULL; return NULL;
} }
pr_info("created %s pool\n", type); pr_debug("created pool type %s\n", type);
spin_lock(&pools_lock); spin_lock(&pools_lock);
list_add(&zpool->list, &pools_head); list_add(&zpool->list, &pools_head);
...@@ -202,7 +175,7 @@ struct zpool *zpool_create_pool(char *type, char *name, gfp_t gfp, ...@@ -202,7 +175,7 @@ struct zpool *zpool_create_pool(char *type, char *name, gfp_t gfp,
*/ */
void zpool_destroy_pool(struct zpool *zpool) void zpool_destroy_pool(struct zpool *zpool)
{ {
pr_info("destroying pool type %s\n", zpool->type); pr_debug("destroying pool type %s\n", zpool->type);
spin_lock(&pools_lock); spin_lock(&pools_lock);
list_del(&zpool->list); list_del(&zpool->list);
......
...@@ -45,10 +45,6 @@ ...@@ -45,10 +45,6 @@
* *
*/ */
#ifdef CONFIG_ZSMALLOC_DEBUG
#define DEBUG
#endif
#include <linux/module.h> #include <linux/module.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/sched.h> #include <linux/sched.h>
...@@ -313,7 +309,8 @@ static void record_obj(unsigned long handle, unsigned long obj) ...@@ -313,7 +309,8 @@ static void record_obj(unsigned long handle, unsigned long obj)
#ifdef CONFIG_ZPOOL #ifdef CONFIG_ZPOOL
static void *zs_zpool_create(char *name, gfp_t gfp, struct zpool_ops *zpool_ops) static void *zs_zpool_create(char *name, gfp_t gfp, struct zpool_ops *zpool_ops,
struct zpool *zpool)
{ {
return zs_create_pool(name, gfp); return zs_create_pool(name, gfp);
} }
......
...@@ -75,9 +75,10 @@ static u64 zswap_duplicate_entry; ...@@ -75,9 +75,10 @@ static u64 zswap_duplicate_entry;
/********************************* /*********************************
* tunables * tunables
**********************************/ **********************************/
/* Enable/disable zswap (disabled by default, fixed at boot for now) */
static bool zswap_enabled __read_mostly; /* Enable/disable zswap (disabled by default) */
module_param_named(enabled, zswap_enabled, bool, 0444); static bool zswap_enabled;
module_param_named(enabled, zswap_enabled, bool, 0644);
/* Compressor to be used by zswap (fixed at boot for now) */ /* Compressor to be used by zswap (fixed at boot for now) */
#define ZSWAP_COMPRESSOR_DEFAULT "lzo" #define ZSWAP_COMPRESSOR_DEFAULT "lzo"
...@@ -648,7 +649,7 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, ...@@ -648,7 +649,7 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
u8 *src, *dst; u8 *src, *dst;
struct zswap_header *zhdr; struct zswap_header *zhdr;
if (!tree) { if (!zswap_enabled || !tree) {
ret = -ENODEV; ret = -ENODEV;
goto reject; goto reject;
} }
...@@ -901,9 +902,6 @@ static int __init init_zswap(void) ...@@ -901,9 +902,6 @@ static int __init init_zswap(void)
{ {
gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN; gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN;
if (!zswap_enabled)
return 0;
pr_info("loading zswap\n"); pr_info("loading zswap\n");
zswap_pool = zpool_create_pool(zswap_zpool_type, "zswap", gfp, zswap_pool = zpool_create_pool(zswap_zpool_type, "zswap", gfp,
......
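Since the parameter is now writable (mode 0644) and zswap_frontswap_store() checks zswap_enabled on every store, zswap can be toggled after boot. A minimal userspace sketch, assuming the usual /sys/module/<module>/parameters/<param> path created by module_param_named():

/* Writes "1" to the runtime-writable parameter to enable zswap. */
#include <stdio.h>

static int zswap_enable(void)
{
	FILE *f = fopen("/sys/module/zswap/parameters/enabled", "w");

	if (!f)
		return -1;
	fputs("1\n", f);
	return fclose(f);
}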