Commit da8ac5e0 authored by Linus Torvalds

Merge branch 'for-linus' of git://git390.osdl.marist.edu/pub/scm/linux-2.6

* 'for-linus' of git://git390.osdl.marist.edu/pub/scm/linux-2.6: (38 commits)
  [S390] SPIN_LOCK_UNLOCKED cleanup in drivers/s390
  [S390] Clean up smp code in preparation for some larger changes.
  [S390] Remove debugging junk.
  [S390] Switch etr from tasklet to workqueue.
  [S390] split page_test_and_clear_dirty.
  [S390] Processor degradation notification.
  [S390] vtime: cleanup per_cpu usage.
  [S390] crypto: cleanup.
  [S390] sclp: fix coding style.
  [S390] vmlogrdr: stop IUCV connection in vmlogrdr_release.
  [S390] sclp: initialize early.
  [S390] ctc: kmalloc->kzalloc/casting cleanups.
  [S390] zfcpdump support.
  [S390] dasd: Add ipldev parameter.
  [S390] dasd: Add sysfs attribute status and generate uevents.
  [S390] Improved kernel stack overflow checking.
  [S390] Get rid of console setup functions.
  [S390] No execute support cleanup.
  [S390] Minor fault path optimization.
  [S390] Use generic bug.
  ...
crypto-API support for z990 Message Security Assist (MSA) instructions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
AUTHOR: Thomas Spatzier (tspat@de.ibm.com)
1. Introduction to the crypto-API
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
See Documentation/crypto/api-intro.txt for an introduction to and description
of the kernel crypto API.
As described in api-intro.txt, support for the z990 crypto instructions has
been added at the algorithm layer of the crypto API. Several files containing
z990 optimized implementations of crypto algorithms are placed in the
arch/s390/crypto directory.
2. Probing for availability of MSA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It should be possible to use kernels with the z990 crypto implementations both
on machines with MSA available and on those without MSA (pre-z990, or z990
without MSA). Therefore a simple probing mechanism has been implemented:
in the init function of each crypto module, the availability of MSA, and of
the respective crypto algorithm in particular, is tested. If the algorithm is
available, the module loads and registers its algorithm with the crypto API.
If the respective crypto algorithm is not available, the init function returns
-ENOSYS. In that case a fallback to the standard software implementation of
the crypto algorithm must be taken (-> the standard crypto modules are also
built when compiling the kernel).
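A minimal sketch of such an init function, modeled on the modules in
arch/s390/crypto: the probing helper crypt_s390_func_available() and the
constant KIMD_SHA_1 come from crypt_s390.h, while the struct crypto_alg
contents and function names here are illustrative, not copied from a module.

  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/errno.h>
  #include <linux/crypto.h>
  #include "crypt_s390.h"

  static struct crypto_alg alg;   /* filled in as in sha1_z990.c */

  static int __init sha1_s390_init(void)
  {
          /* probe: is MSA present and does it implement SHA1? */
          if (!crypt_s390_func_available(KIMD_SHA_1))
                  return -ENOSYS; /* fall back to the generic sha1 module */
          return crypto_register_alg(&alg);
  }

  static void __exit sha1_s390_fini(void)
  {
          crypto_unregister_alg(&alg);
  }

  module_init(sha1_s390_init);
  module_exit(sha1_s390_fini);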
3. Ensuring z990 crypto module preference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the z990 crypto instructions are available, the optimized modules should be
preferred over the standard modules.
3.1. compiled-in modules
~~~~~~~~~~~~~~~~~~~~~~~~
For compiled-in modules it has to be ensured that the z990 modules are linked
before the standard crypto modules. On system startup, the init functions of
the z990 crypto modules are then called first and probe for the availability
of the z990 crypto instructions. If the instruction is available, the z990
module registers its crypto algorithm implementation -> the load of the
standard module will fail, since the algorithm is already registered.
If the z990 crypto instruction is not available, the load of the z990 module
fails -> the standard module loads and registers its algorithm.
3.2. dynamic modules
~~~~~~~~~~~~~~~~~~~~
A system administrator has to take care of giving preference to the z990
crypto modules. If MSA is available, appropriate lines have to be added to
/etc/modprobe.conf.
Example: the z990 crypto instruction for the SHA1 algorithm is available.
Add the following line to /etc/modprobe.conf (assuming the
z990 crypto module for SHA1 is called sha1_z990):
alias sha1 sha1_z990
-> when the sha1 algorithm is requested through the crypto API
(which has a module autoloader), the z990 module will be loaded.
TBD: a userspace module probing mechanism,
something like 'probe sha1 sha1_z990 sha1' in modprobe.conf
-> try the module sha1_z990; if it fails, load the standard module sha1.
The 'probe' statement is currently not supported in modprobe.conf.
4. Currently implemented z990 crypto algorithms
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following crypto algorithms with z990 MSA support are currently
implemented. The name under which each algorithm is registered with the crypto
API and the name of the respective module are given in square brackets.
- SHA1 Digest Algorithm [sha1 -> sha1_z990]
- DES Encrypt/Decrypt Algorithm (64-bit key) [des -> des_z990]
- Triple DES Encrypt/Decrypt Algorithm (128-bit key) [des3_ede128 -> des_z990]
- Triple DES Encrypt/Decrypt Algorithm (192-bit key) [des3_ede -> des_z990]
In order to load, for example, the sha1_z990 module when the sha1 algorithm is
requested (see 3.2.), add 'alias sha1 sha1_z990' to /etc/modprobe.conf.
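To illustrate the autoloading path, here is a hedged sketch of a kernel-side
caller using the crypto_hash interface of this kernel generation; the function
demo_sha1 and its surroundings are purely illustrative. With the alias above,
the name-based request for "sha1" pulls in sha1_z990 instead of the generic
module.

  #include <linux/crypto.h>
  #include <linux/err.h>
  #include <linux/scatterlist.h>

  static int demo_sha1(const u8 *data, unsigned int len, u8 *digest)
  {
          struct hash_desc desc;
          struct scatterlist sg;
          int ret;

          /* name-based lookup; the module autoloader resolves "sha1" */
          desc.tfm = crypto_alloc_hash("sha1", 0, CRYPTO_ALG_ASYNC);
          if (IS_ERR(desc.tfm))
                  return PTR_ERR(desc.tfm);
          desc.flags = 0;

          sg_init_one(&sg, (u8 *) data, len);
          ret = crypto_hash_digest(&desc, &sg, len, digest); /* 20 bytes out */
          crypto_free_hash(desc.tfm);
          return ret;
  }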
s390 SCSI dump tool (zfcpdump)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
System z machines (z900 or higher) provide hardware support for creating
system dumps on SCSI disks. The dump process is initiated by booting a dump
tool, which has to create a dump of the current (probably crashed) Linux
image. In order not to overwrite the memory of the crashed Linux with data of
the dump tool, the hardware saves some memory plus the register sets of the
boot cpu before the dump tool is loaded. An SCLP hardware interface exists to
obtain the saved memory afterwards. Currently 32 MB are saved.
This zfcpdump implementation consists of a Linux dump kernel together with
a userspace dump tool, which are loaded together into the saved memory region
below 32 MB. zfcpdump is installed on a SCSI disk using zipl (as contained in
the s390-tools package) to make the device bootable. The operator of a Linux
system can then trigger a SCSI dump by booting the SCSI disk on which zfcpdump
resides.
The kernel part of zfcpdump is implemented as a debugfs file under "zcore/mem",
which exports the memory and registers of the crashed Linux in the s390
standalone dump format. It can be used in the same way as e.g. /dev/mem. The
dump format defines a 4K header followed by plain, uncompressed memory. The
register sets are stored in the prefix pages of the respective cpus. To build
a dump-enabled kernel with the zcore driver, the kernel config option
CONFIG_ZFCPDUMP has to be set. When reading from "zcore/mem", the part of
memory that has been saved by the hardware is read by the driver via the SCLP
hardware interface; the second part is simply copied from the real memory that
was not overwritten.
The userspace application of zfcpdump can reside e.g. in an initramfs or an
initrd. It reads from zcore/mem and writes the system dump to a file on a
SCSI disk.
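A hedged userspace sketch of that read/write loop (this is not the real
zfcpdump tool; the debugfs mount point /sys/kernel/debug and the output path
are assumptions):

  #include <stdio.h>

  int main(void)
  {
          FILE *in = fopen("/sys/kernel/debug/zcore/mem", "rb");
          FILE *out = fopen("/mnt/dump.s390", "wb");
          char buf[4096];
          size_t n;

          if (!in || !out)
                  return 1;
          /* 4K dump header plus raw memory; the zcore driver transparently
           * serves the HSA-saved part via SCLP and the rest from real
           * memory, so a plain sequential copy yields a complete dump. */
          while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
                  fwrite(buf, 1, n, out);
          fclose(in);
          fclose(out);
          return 0;
  }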
To build a zfcpdump kernel use the following settings in your kernel
configuration:
* CONFIG_ZFCPDUMP=y
* Enable ZFCP driver
* Enable SCSI driver
* Enable ext2 and ext3 filesystems
* Disable as many features as possible to keep the kernel small.
E.g. network support is not needed at all.
To use the zfcpdump userspace application in an initramfs you have to do the
following:
* Copy the zfcpdump executable somewhere into your Linux tree,
e.g. to "arch/s390/boot/zfcpdump". If you do not want to include
shared libraries, compile the tool with the "-static" gcc option.
* If you want to include e2fsck, add it to your source tree, too. The zfcpdump
application attempts to start /sbin/e2fsck from the ramdisk.
* Use an initramfs config file like the following:
dir /dev 755 0 0
nod /dev/console 644 0 0 c 5 1
nod /dev/null 644 0 0 c 1 3
nod /dev/sda1 644 0 0 b 8 1
nod /dev/sda2 644 0 0 b 8 2
nod /dev/sda3 644 0 0 b 8 3
nod /dev/sda4 644 0 0 b 8 4
nod /dev/sda5 644 0 0 b 8 5
nod /dev/sda6 644 0 0 b 8 6
nod /dev/sda7 644 0 0 b 8 7
nod /dev/sda8 644 0 0 b 8 8
nod /dev/sda9 644 0 0 b 8 9
nod /dev/sda10 644 0 0 b 8 10
nod /dev/sda11 644 0 0 b 8 11
nod /dev/sda12 644 0 0 b 8 12
nod /dev/sda13 644 0 0 b 8 13
nod /dev/sda14 644 0 0 b 8 14
nod /dev/sda15 644 0 0 b 8 15
file /init arch/s390/boot/zfcpdump 755 0 0
file /sbin/e2fsck arch/s390/boot/e2fsck 755 0 0
dir /proc 755 0 0
dir /sys 755 0 0
dir /mnt 755 0 0
dir /sbin 755 0 0
* Issue "make image" to build the zfcpdump image with initramfs.
In a Linux distribution the zfcpdump-enabled kernel image must be copied to
/usr/share/zfcpdump/zfcpdump.image, where the s390 zipl tool looks for the
dump kernel when preparing a SCSI dump disk.
If you use a ramdisk, copy it to "/usr/share/zfcpdump/zfcpdump.rd".
For more information on how to use zfcpdump refer to the s390 'Using the Dump
Tools' book, which is available from
http://www.ibm.com/developerworks/linux/linux390.
......@@ -41,6 +41,11 @@ config GENERIC_HWEIGHT
config GENERIC_TIME
def_bool y
config GENERIC_BUG
bool
depends on BUG
default y
config NO_IOMEM
def_bool y
......@@ -514,6 +519,14 @@ config KEXEC
current kernel, and to start another kernel. It is like a reboot
but is independent of hardware/microcode support.
config ZFCPDUMP
tristate "zfcpdump support"
select SMP
default n
help
Select this option if you want to build a zfcpdump enabled kernel.
Refer to "Documentation/s390/zfcpdump.txt" for more details on this.
endmenu
source "net/Kconfig"
......
......@@ -67,8 +67,10 @@ endif
ifeq ($(call cc-option-yn,-mstack-size=8192 -mstack-guard=128),y)
cflags-$(CONFIG_CHECK_STACK) += -mstack-size=$(STACK_SIZE)
ifneq ($(call cc-option-yn,-mstack-size=8192),y)
cflags-$(CONFIG_CHECK_STACK) += -mstack-guard=$(CONFIG_STACK_GUARD)
endif
endif
ifeq ($(call cc-option-yn,-mwarn-dynamicstack),y)
cflags-$(CONFIG_WARN_STACK) += -mwarn-dynamicstack
......@@ -103,6 +105,9 @@ install: vmlinux
image: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
zfcpdump:
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
......
......@@ -668,45 +668,7 @@ EXPORT_SYMBOL_GPL(appldata_register_ops);
EXPORT_SYMBOL_GPL(appldata_unregister_ops);
EXPORT_SYMBOL_GPL(appldata_diag);
#ifdef MODULE
/*
* Kernel symbols needed by appldata_mem and appldata_os modules.
* However, if this file is compiled as a module (for testing only), these
* symbols are not exported. In this case, we define them locally and export
* those.
*/
void si_swapinfo(struct sysinfo *val)
{
val->freeswap = -1ul;
val->totalswap = -1ul;
}
unsigned long avenrun[3] = {-1 - FIXED_1/200, -1 - FIXED_1/200,
-1 - FIXED_1/200};
int nr_threads = -1;
void get_full_page_state(struct page_state *ps)
{
memset(ps, -1, sizeof(struct page_state));
}
unsigned long nr_running(void)
{
return -1;
}
unsigned long nr_iowait(void)
{
return -1;
}
/*unsigned long nr_context_switches(void)
{
return -1;
}*/
#endif /* MODULE */
EXPORT_SYMBOL_GPL(si_swapinfo);
EXPORT_SYMBOL_GPL(nr_threads);
EXPORT_SYMBOL_GPL(nr_running);
EXPORT_SYMBOL_GPL(nr_iowait);
//EXPORT_SYMBOL_GPL(nr_context_switches);
......@@ -25,99 +25,100 @@
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/crypto.h>
#include <asm/scatterlist.h>
#include <asm/byteorder.h>
#include "crypt_s390.h"
#define SHA1_DIGEST_SIZE 20
#define SHA1_BLOCK_SIZE 64
struct crypt_s390_sha1_ctx {
u64 count;
struct s390_sha1_ctx {
u64 count; /* message length */
u32 state[5];
u32 buf_len;
u8 buffer[2 * SHA1_BLOCK_SIZE];
u8 buf[2 * SHA1_BLOCK_SIZE];
};
static void sha1_init(struct crypto_tfm *tfm)
{
struct crypt_s390_sha1_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->state[0] = 0x67452301;
ctx->state[1] = 0xEFCDAB89;
ctx->state[2] = 0x98BADCFE;
ctx->state[3] = 0x10325476;
ctx->state[4] = 0xC3D2E1F0;
ctx->count = 0;
ctx->buf_len = 0;
struct s390_sha1_ctx *sctx = crypto_tfm_ctx(tfm);
sctx->state[0] = 0x67452301;
sctx->state[1] = 0xEFCDAB89;
sctx->state[2] = 0x98BADCFE;
sctx->state[3] = 0x10325476;
sctx->state[4] = 0xC3D2E1F0;
sctx->count = 0;
}
static void sha1_update(struct crypto_tfm *tfm, const u8 *data,
unsigned int len)
{
struct crypt_s390_sha1_ctx *sctx;
long imd_len;
sctx = crypto_tfm_ctx(tfm);
sctx->count += len * 8; /* message bit length */
/* anything in buffer yet? -> must be completed */
if (sctx->buf_len && (sctx->buf_len + len) >= SHA1_BLOCK_SIZE) {
/* complete full block and hash */
memcpy(sctx->buffer + sctx->buf_len, data,
SHA1_BLOCK_SIZE - sctx->buf_len);
crypt_s390_kimd(KIMD_SHA_1, sctx->state, sctx->buffer,
struct s390_sha1_ctx *sctx = crypto_tfm_ctx(tfm);
unsigned int index;
int ret;
/* how much is already in the buffer? */
index = sctx->count & 0x3f;
sctx->count += len;
if (index + len < SHA1_BLOCK_SIZE)
goto store;
/* process one stored block */
if (index) {
memcpy(sctx->buf + index, data, SHA1_BLOCK_SIZE - index);
ret = crypt_s390_kimd(KIMD_SHA_1, sctx->state, sctx->buf,
SHA1_BLOCK_SIZE);
data += SHA1_BLOCK_SIZE - sctx->buf_len;
len -= SHA1_BLOCK_SIZE - sctx->buf_len;
sctx->buf_len = 0;
BUG_ON(ret != SHA1_BLOCK_SIZE);
data += SHA1_BLOCK_SIZE - index;
len -= SHA1_BLOCK_SIZE - index;
}
/* rest of data contains full blocks? */
imd_len = len & ~0x3ful;
if (imd_len) {
crypt_s390_kimd(KIMD_SHA_1, sctx->state, data, imd_len);
data += imd_len;
len -= imd_len;
}
/* anything left? store in buffer */
if (len) {
memcpy(sctx->buffer + sctx->buf_len , data, len);
sctx->buf_len += len;
/* process as many blocks as possible */
if (len >= SHA1_BLOCK_SIZE) {
ret = crypt_s390_kimd(KIMD_SHA_1, sctx->state, data,
len & ~(SHA1_BLOCK_SIZE - 1));
BUG_ON(ret != (len & ~(SHA1_BLOCK_SIZE - 1)));
data += ret;
len -= ret;
}
}
store:
/* anything left? */
if (len)
memcpy(sctx->buf + index , data, len);
}
static void pad_message(struct crypt_s390_sha1_ctx* sctx)
/* Add padding and return the message digest. */
static void sha1_final(struct crypto_tfm *tfm, u8 *out)
{
int index;
struct s390_sha1_ctx *sctx = crypto_tfm_ctx(tfm);
u64 bits;
unsigned int index, end;
int ret;
/* must perform manual padding */
index = sctx->count & 0x3f;
end = (index < 56) ? SHA1_BLOCK_SIZE : (2 * SHA1_BLOCK_SIZE);
index = sctx->buf_len;
sctx->buf_len = (sctx->buf_len < 56) ?
SHA1_BLOCK_SIZE:2 * SHA1_BLOCK_SIZE;
/* start pad with 1 */
sctx->buffer[index] = 0x80;
sctx->buf[index] = 0x80;
/* pad with zeros */
index++;
memset(sctx->buffer + index, 0x00, sctx->buf_len - index);
/* append length */
memcpy(sctx->buffer + sctx->buf_len - 8, &sctx->count,
sizeof sctx->count);
}
memset(sctx->buf + index, 0x00, end - index - 8);
/* Add padding and return the message digest. */
static void sha1_final(struct crypto_tfm *tfm, u8 *out)
{
struct crypt_s390_sha1_ctx *sctx = crypto_tfm_ctx(tfm);
/* append message length */
bits = sctx->count * 8;
memcpy(sctx->buf + end - 8, &bits, sizeof(bits));
ret = crypt_s390_kimd(KIMD_SHA_1, sctx->state, sctx->buf, end);
BUG_ON(ret != end);
/* must perform manual padding */
pad_message(sctx);
crypt_s390_kimd(KIMD_SHA_1, sctx->state, sctx->buffer, sctx->buf_len);
/* copy digest to out */
memcpy(out, sctx->state, SHA1_DIGEST_SIZE);
/* wipe context */
memset(sctx, 0, sizeof *sctx);
}
......@@ -128,7 +129,7 @@ static struct crypto_alg alg = {
.cra_priority = CRYPT_S390_PRIORITY,
.cra_flags = CRYPTO_ALG_TYPE_DIGEST,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypt_s390_sha1_ctx),
.cra_ctxsize = sizeof(struct s390_sha1_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = { .digest = {
......
......@@ -26,7 +26,7 @@
#define SHA256_BLOCK_SIZE 64
struct s390_sha256_ctx {
u64 count;
u64 count; /* message length */
u32 state[8];
u8 buf[2 * SHA256_BLOCK_SIZE];
};
......@@ -54,10 +54,9 @@ static void sha256_update(struct crypto_tfm *tfm, const u8 *data,
int ret;
/* how much is already in the buffer? */
index = sctx->count / 8 & 0x3f;
index = sctx->count & 0x3f;
/* update message bit length */
sctx->count += len * 8;
sctx->count += len;
if ((index + len) < SHA256_BLOCK_SIZE)
goto store;
......@@ -87,12 +86,17 @@ static void sha256_update(struct crypto_tfm *tfm, const u8 *data,
memcpy(sctx->buf + index , data, len);
}
static void pad_message(struct s390_sha256_ctx* sctx)
/* Add padding and return the message digest */
static void sha256_final(struct crypto_tfm *tfm, u8 *out)
{
int index, end;
struct s390_sha256_ctx *sctx = crypto_tfm_ctx(tfm);
u64 bits;
unsigned int index, end;
int ret;
index = sctx->count / 8 & 0x3f;
end = index < 56 ? SHA256_BLOCK_SIZE : 2 * SHA256_BLOCK_SIZE;
/* must perform manual padding */
index = sctx->count & 0x3f;
end = (index < 56) ? SHA256_BLOCK_SIZE : (2 * SHA256_BLOCK_SIZE);
/* start pad with 1 */
sctx->buf[index] = 0x80;
......@@ -102,21 +106,11 @@ static void pad_message(struct s390_sha256_ctx* sctx)
memset(sctx->buf + index, 0x00, end - index - 8);
/* append message length */
memcpy(sctx->buf + end - 8, &sctx->count, sizeof sctx->count);
sctx->count = end * 8;
}
/* Add padding and return the message digest */
static void sha256_final(struct crypto_tfm *tfm, u8 *out)
{
struct s390_sha256_ctx *sctx = crypto_tfm_ctx(tfm);
/* must perform manual padding */
pad_message(sctx);
bits = sctx->count * 8;
memcpy(sctx->buf + end - 8, &bits, sizeof(bits));
crypt_s390_kimd(KIMD_SHA_256, sctx->state, sctx->buf,
sctx->count / 8);
ret = crypt_s390_kimd(KIMD_SHA_256, sctx->state, sctx->buf, end);
BUG_ON(ret != end);
/* copy digest to out */
memcpy(out, sctx->state, SHA256_DIGEST_SIZE);
......
......@@ -12,6 +12,7 @@ CONFIG_RWSEM_XCHGADD_ALGORITHM=y
# CONFIG_ARCH_HAS_ILOG2_U64 is not set
CONFIG_GENERIC_HWEIGHT=y
CONFIG_GENERIC_TIME=y
CONFIG_GENERIC_BUG=y
CONFIG_NO_IOMEM=y
CONFIG_S390=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
......@@ -166,6 +167,7 @@ CONFIG_NO_IDLE_HZ=y
CONFIG_NO_IDLE_HZ_INIT=y
CONFIG_S390_HYPFS_FS=y
CONFIG_KEXEC=y
# CONFIG_ZFCPDUMP is not set
#
# Networking
......@@ -705,6 +707,7 @@ CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_SPINLOCK_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_LIST is not set
......
......@@ -6,7 +6,7 @@ EXTRA_AFLAGS := -traditional
obj-y := bitmap.o traps.o time.o process.o base.o early.o \
setup.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o \
semaphore.o s390_ext.o debug.o irq.o ipl.o
semaphore.o s390_ext.o debug.o irq.o ipl.o dis.o
obj-y += $(if $(CONFIG_64BIT),entry64.o,entry.o)
obj-y += $(if $(CONFIG_64BIT),reipl64.o,reipl.o)
......
......@@ -495,29 +495,34 @@ sys32_rt_sigqueueinfo(int pid, int sig, compat_siginfo_t __user *uinfo)
* sys32_execve() executes a new program after the asm stub has set
* things up for us. This should basically do what I want it to.
*/
asmlinkage long
sys32_execve(struct pt_regs regs)
asmlinkage long sys32_execve(void)
{
int error;
char * filename;
struct pt_regs *regs = task_pt_regs(current);
char *filename;
unsigned long result;
int rc;
filename = getname(compat_ptr(regs.orig_gpr2));
error = PTR_ERR(filename);
if (IS_ERR(filename))
filename = getname(compat_ptr(regs->orig_gpr2));
if (IS_ERR(filename)) {
result = PTR_ERR(filename);
goto out;
error = compat_do_execve(filename, compat_ptr(regs.gprs[3]),
compat_ptr(regs.gprs[4]), &regs);
if (error == 0)
{
}
rc = compat_do_execve(filename, compat_ptr(regs->gprs[3]),
compat_ptr(regs->gprs[4]), regs);
if (rc) {
result = rc;
goto out_putname;
}
task_lock(current);
current->ptrace &= ~PT_DTRACE;
task_unlock(current);
current->thread.fp_regs.fpc=0;
asm volatile("sfpc %0,0" : : "d" (0));
}
result = regs->gprs[2];
out_putname:
putname(filename);
out:
return error;
return result;
}
......@@ -918,19 +923,20 @@ asmlinkage long sys32_write(unsigned int fd, char __user * buf, size_t count)
return sys_write(fd, buf, count);
}
asmlinkage long sys32_clone(struct pt_regs regs)
asmlinkage long sys32_clone(void)
{
struct pt_regs *regs = task_pt_regs(current);
unsigned long clone_flags;
unsigned long newsp;
int __user *parent_tidptr, *child_tidptr;
clone_flags = regs.gprs[3] & 0xffffffffUL;
newsp = regs.orig_gpr2 & 0x7fffffffUL;
parent_tidptr = compat_ptr(regs.gprs[4]);
child_tidptr = compat_ptr(regs.gprs[5]);
clone_flags = regs->gprs[3] & 0xffffffffUL;
newsp = regs->orig_gpr2 & 0x7fffffffUL;
parent_tidptr = compat_ptr(regs->gprs[4]);
child_tidptr = compat_ptr(regs->gprs[5]);
if (!newsp)
newsp = regs.gprs[15];
return do_fork(clone_flags, newsp, &regs, 0,
newsp = regs->gprs[15];
return do_fork(clone_flags, newsp, regs, 0,
parent_tidptr, child_tidptr);
}
......
......@@ -255,9 +255,9 @@ sys32_rt_sigaction(int sig, const struct sigaction32 __user *act,
}
asmlinkage long
sys32_sigaltstack(const stack_t32 __user *uss, stack_t32 __user *uoss,
struct pt_regs *regs)
sys32_sigaltstack(const stack_t32 __user *uss, stack_t32 __user *uoss)
{
struct pt_regs *regs = task_pt_regs(current);
stack_t kss, koss;
unsigned long ss_sp;
int ret, err = 0;
......@@ -344,8 +344,9 @@ static int restore_sigregs32(struct pt_regs *regs,_sigregs32 __user *sregs)
return 0;
}
asmlinkage long sys32_sigreturn(struct pt_regs *regs)
asmlinkage long sys32_sigreturn(void)
{
struct pt_regs *regs = task_pt_regs(current);
sigframe32 __user *frame = (sigframe32 __user *)regs->gprs[15];
sigset_t set;
......@@ -370,8 +371,9 @@ asmlinkage long sys32_sigreturn(struct pt_regs *regs)
return 0;
}
asmlinkage long sys32_rt_sigreturn(struct pt_regs *regs)
asmlinkage long sys32_rt_sigreturn(void)
{
struct pt_regs *regs = task_pt_regs(current);
rt_sigframe32 __user *frame = (rt_sigframe32 __user *)regs->gprs[15];
sigset_t set;
stack_t st;
......
This diff has been collapsed.
......@@ -253,11 +253,10 @@ static noinline __init void find_memory_chunks(unsigned long memsize)
break;
#endif
/*
* Finish memory detection at the first hole, unless
* - we reached the hsa -> skip it.
* - we know there must be more.
* Finish memory detection at the first hole
* if storage size is unknown.
*/
if (cc == -1UL && !memsize && old_addr != ADDR2G)
if (cc == -1UL && !memsize)
break;
if (memsize && addr >= memsize)
break;
......
......@@ -249,8 +249,6 @@ sysc_do_restart:
bnz BASED(sysc_tracesys)
basr %r14,%r8 # call sys_xxxx
st %r2,SP_R2(%r15) # store return value (change R2 on stack)
# ATTENTION: check sys_execve_glue before
# changing anything here !!
sysc_return:
tm SP_PSW+1(%r15),0x01 # returning to user ?
......@@ -381,50 +379,37 @@ ret_from_fork:
b BASED(sysc_return)
#
# clone, fork, vfork, exec and sigreturn need glue,
# because they all expect pt_regs as parameter,
# but are called with different parameter.
# return-address is set up above
# kernel_execve function needs to deal with pt_regs that is not
# at the usual place
#
sys_clone_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
l %r1,BASED(.Lclone)
br %r1 # branch to sys_clone
sys_fork_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
l %r1,BASED(.Lfork)
br %r1 # branch to sys_fork
sys_vfork_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
l %r1,BASED(.Lvfork)
br %r1 # branch to sys_vfork
sys_execve_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
l %r1,BASED(.Lexecve)
lr %r12,%r14 # save return address
basr %r14,%r1 # call sys_execve
ltr %r2,%r2 # check if execve failed
bnz 0(%r12) # it did fail -> store result in gpr2
b 4(%r12) # SKIP ST 2,SP_R2(15) after BASR 14,8
# in system_call/sysc_tracesys
sys_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
l %r1,BASED(.Lsigreturn)
br %r1 # branch to sys_sigreturn
sys_rt_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
l %r1,BASED(.Lrt_sigreturn)
br %r1 # branch to sys_sigreturn
sys_sigaltstack_glue:
la %r4,SP_PTREGS(%r15) # load pt_regs as parameter
l %r1,BASED(.Lsigaltstack)
br %r1 # branch to sys_sigreturn
.globl kernel_execve
kernel_execve:
stm %r12,%r15,48(%r15)
lr %r14,%r15
l %r13,__LC_SVC_NEW_PSW+4
s %r15,BASED(.Lc_spsize)
st %r14,__SF_BACKCHAIN(%r15)
la %r12,SP_PTREGS(%r15)
xc 0(__PT_SIZE,%r12),0(%r12)
l %r1,BASED(.Ldo_execve)
lr %r5,%r12
basr %r14,%r1
ltr %r2,%r2
be BASED(0f)
a %r15,BASED(.Lc_spsize)
lm %r12,%r15,48(%r15)
br %r14
# execve succeeded.
0: stnsm __SF_EMPTY(%r15),0xfc # disable interrupts
l %r15,__LC_KERNEL_STACK # load ksp
s %r15,BASED(.Lc_spsize) # make room for registers & psw
l %r9,__LC_THREAD_INFO
mvc SP_PTREGS(__PT_SIZE,%r15),0(%r12) # copy pt_regs
xc __SF_BACKCHAIN(4,%r15),__SF_BACKCHAIN(%r15)
stosm __SF_EMPTY(%r15),0x03 # reenable interrupts
l %r1,BASED(.Lexecve_tail)
basr %r14,%r1
b BASED(sysc_return)
/*
* Program check handler routine
......@@ -1031,19 +1016,11 @@ cleanup_io_leave_insn:
.Ldo_extint: .long do_extint
.Ldo_signal: .long do_signal
.Lhandle_per: .long do_single_step
.Ldo_execve: .long do_execve
.Lexecve_tail: .long execve_tail
.Ljump_table: .long pgm_check_table
.Lschedule: .long schedule
.Lclone: .long sys_clone
.Lexecve: .long sys_execve
.Lfork: .long sys_fork
.Lrt_sigreturn: .long sys_rt_sigreturn
.Lrt_sigsuspend:
.long sys_rt_sigsuspend
.Lsigreturn: .long sys_sigreturn
.Lsigsuspend: .long sys_sigsuspend
.Lsigaltstack: .long sys_sigaltstack
.Ltrace: .long syscall_trace
.Lvfork: .long sys_vfork
.Lschedtail: .long schedule_tail
.Lsysc_table: .long sys_call_table
#ifdef CONFIG_TRACE_IRQFLAGS
......
......@@ -244,8 +244,6 @@ sysc_noemu:
jnz sysc_tracesys
basr %r14,%r8 # call sys_xxxx
stg %r2,SP_R2(%r15) # store return value (change R2 on stack)
# ATTENTION: check sys_execve_glue before
# changing anything here !!
sysc_return:
tm SP_PSW+1(%r15),0x01 # returning to user ?
......@@ -371,77 +369,35 @@ ret_from_fork:
j sysc_return
#
# clone, fork, vfork, exec and sigreturn need glue,
# because they all expect pt_regs as parameter,
# but are called with different parameter.
# return-address is set up above
# kernel_execve function needs to deal with pt_regs that is not
# at the usual place
#
sys_clone_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
jg sys_clone # branch to sys_clone
#ifdef CONFIG_COMPAT
sys32_clone_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
jg sys32_clone # branch to sys32_clone
#endif
sys_fork_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
jg sys_fork # branch to sys_fork
sys_vfork_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
jg sys_vfork # branch to sys_vfork
sys_execve_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
lgr %r12,%r14 # save return address
brasl %r14,sys_execve # call sys_execve
ltgr %r2,%r2 # check if execve failed
bnz 0(%r12) # it did fail -> store result in gpr2
b 6(%r12) # SKIP STG 2,SP_R2(15) in
# system_call/sysc_tracesys
#ifdef CONFIG_COMPAT
sys32_execve_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
lgr %r12,%r14 # save return address
brasl %r14,sys32_execve # call sys32_execve
ltgr %r2,%r2 # check if execve failed
bnz 0(%r12) # it did fail -> store result in gpr2
b 6(%r12) # SKIP STG 2,SP_R2(15) in
# system_call/sysc_tracesys
#endif
sys_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys_sigreturn # branch to sys_sigreturn
#ifdef CONFIG_COMPAT
sys32_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys32_sigreturn # branch to sys32_sigreturn
#endif
sys_rt_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys_rt_sigreturn # branch to sys_sigreturn
#ifdef CONFIG_COMPAT
sys32_rt_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys32_rt_sigreturn # branch to sys32_sigreturn
#endif
sys_sigaltstack_glue:
la %r4,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys_sigaltstack # branch to sys_sigreturn
#ifdef CONFIG_COMPAT
sys32_sigaltstack_glue:
la %r4,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys32_sigaltstack_wrapper # branch to sys_sigreturn
#endif
.globl kernel_execve
kernel_execve:
stmg %r12,%r15,96(%r15)
lgr %r14,%r15
aghi %r15,-SP_SIZE
stg %r14,__SF_BACKCHAIN(%r15)
la %r12,SP_PTREGS(%r15)
xc 0(__PT_SIZE,%r12),0(%r12)
lgr %r5,%r12
brasl %r14,do_execve
ltgfr %r2,%r2
je 0f
aghi %r15,SP_SIZE
lmg %r12,%r15,96(%r15)
br %r14
# execve succeeded.
0: stnsm __SF_EMPTY(%r15),0xfc # disable interrupts
lg %r15,__LC_KERNEL_STACK # load ksp
aghi %r15,-SP_SIZE # make room for registers & psw
lg %r13,__LC_SVC_NEW_PSW+8
lg %r9,__LC_THREAD_INFO
mvc SP_PTREGS(__PT_SIZE,%r15),0(%r12) # copy pt_regs
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
stosm __SF_EMPTY(%r15),0x03 # reenable interrupts
brasl %r14,execve_tail
j sysc_return
/*
* Program check handler routine
......
......@@ -39,7 +39,69 @@ startup_continue:
basr %r13,0 # get base
.LPG1: sll %r13,1 # remove high order bit
srl %r13,1
lhi %r1,1 # mode 1 = esame
#ifdef CONFIG_ZFCPDUMP
# check if we have been ipled using zfcp dump:
tm 0xb9,0x01 # test if subchannel is enabled
jno .nodump # subchannel disabled
l %r1,0xb8
la %r5,.Lipl_schib-.LPG1(%r13)
stsch 0(%r5) # get schib of subchannel
jne .nodump # schib not available
tm 5(%r5),0x01 # devno valid?
jno .nodump
tm 4(%r5),0x80 # qdio capable device?
jno .nodump
l %r2,20(%r0) # address of ipl parameter block
lhi %r3,0
ic %r3,0x148(%r2) # get opt field
chi %r3,0x20 # load with dump?
jne .nodump
# store all prefix registers in case of load with dump:
la %r7,0 # base register for 0 page
la %r8,0 # first cpu
l %r11,.Lpref_arr_ptr-.LPG1(%r13) # address of prefix array
ahi %r11,4 # skip boot cpu
lr %r12,%r11
ahi %r12,(CONFIG_NR_CPUS*4) # end of prefix array
stap .Lcurrent_cpu+2-.LPG1(%r13) # store current cpu addr
1:
cl %r8,.Lcurrent_cpu-.LPG1(%r13) # is ipl cpu ?
je 4f # if yes get next cpu
2:
lr %r9,%r7
sigp %r9,%r8,0x9 # stop & store status of cpu
brc 8,3f # accepted
brc 4,4f # status stored: next cpu
brc 2,2b # busy: try again
brc 1,4f # not op: next cpu
3:
mvc 0(4,%r11),264(%r7) # copy prefix register to prefix array
ahi %r11,4 # next element in prefix array
clr %r11,%r12
je 5f # no more space in prefix array
4:
ahi %r8,1 # next cpu (r8 += 1)
cl %r8,.Llast_cpu-.LPG1(%r13) # is last possible cpu ?
jl 1b # jump if not last cpu
5:
lhi %r1,2 # mode 2 = esame (dump)
j 6f
.align 4
.Lipl_schib:
.rept 13
.long 0
.endr
.nodump:
lhi %r1,1 # mode 1 = esame (normal ipl)
6:
#else
lhi %r1,1 # mode 1 = esame (normal ipl)
#endif /* CONFIG_ZFCPDUMP */
mvi __LC_AR_MODE_ID,1 # set esame flag
slr %r0,%r0 # set cpuid to zero
sigp %r1,%r0,0x12 # switch to esame mode
......@@ -149,6 +211,14 @@ startup_continue:
.L4malign:.quad 0xffffffffffc00000
.Lscan2g:.quad 0x80000000 + 0x20000 - 8 # 2GB + 128K - 8
.Lnop: .long 0x07000700
#ifdef CONFIG_ZFCPDUMP
.Lcurrent_cpu:
.long 0x0
.Llast_cpu:
.long 0x0000ffff
.Lpref_arr_ptr:
.long zfcpdump_prefix_array
#endif /* CONFIG_ZFCPDUMP */
.Lparmaddr:
.quad PARMAREA
.align 64
......
......@@ -29,36 +29,21 @@
#define SCCB_LOADPARM (&s390_readinfo_sccb.loadparm)
#define SCCB_FLAG (s390_readinfo_sccb.flags)
enum ipl_type {
IPL_TYPE_NONE = 1,
IPL_TYPE_UNKNOWN = 2,
IPL_TYPE_CCW = 4,
IPL_TYPE_FCP = 8,
IPL_TYPE_NSS = 16,
};
#define IPL_NONE_STR "none"
#define IPL_UNKNOWN_STR "unknown"
#define IPL_CCW_STR "ccw"
#define IPL_FCP_STR "fcp"
#define IPL_FCP_DUMP_STR "fcp_dump"
#define IPL_NSS_STR "nss"
/*
* Must be in data section since the bss section
* is not cleared when these are accessed.
*/
u16 ipl_devno __attribute__((__section__(".data"))) = 0;
u32 ipl_flags __attribute__((__section__(".data"))) = 0;
static char *ipl_type_str(enum ipl_type type)
{
switch (type) {
case IPL_TYPE_NONE:
return IPL_NONE_STR;
case IPL_TYPE_CCW:
return IPL_CCW_STR;
case IPL_TYPE_FCP:
return IPL_FCP_STR;
case IPL_TYPE_FCP_DUMP:
return IPL_FCP_DUMP_STR;
case IPL_TYPE_NSS:
return IPL_NSS_STR;
case IPL_TYPE_UNKNOWN:
......@@ -67,15 +52,55 @@ static char *ipl_type_str(enum ipl_type type)
}
}
enum dump_type {
DUMP_TYPE_NONE = 1,
DUMP_TYPE_CCW = 2,
DUMP_TYPE_FCP = 4,
};
#define DUMP_NONE_STR "none"
#define DUMP_CCW_STR "ccw"
#define DUMP_FCP_STR "fcp"
static char *dump_type_str(enum dump_type type)
{
switch (type) {
case DUMP_TYPE_NONE:
return DUMP_NONE_STR;
case DUMP_TYPE_CCW:
return DUMP_CCW_STR;
case DUMP_TYPE_FCP:
return DUMP_FCP_STR;
default:
return NULL;
}
}
/*
* Must be in data section since the bss section
* is not cleared when these are accessed.
*/
static u16 ipl_devno __attribute__((__section__(".data"))) = 0;
u32 ipl_flags __attribute__((__section__(".data"))) = 0;
enum ipl_method {
IPL_METHOD_NONE,
IPL_METHOD_CCW_CIO,
IPL_METHOD_CCW_DIAG,
IPL_METHOD_CCW_VM,
IPL_METHOD_FCP_RO_DIAG,
IPL_METHOD_FCP_RW_DIAG,
IPL_METHOD_FCP_RO_VM,
IPL_METHOD_NSS,
REIPL_METHOD_CCW_CIO,
REIPL_METHOD_CCW_DIAG,
REIPL_METHOD_CCW_VM,
REIPL_METHOD_FCP_RO_DIAG,
REIPL_METHOD_FCP_RW_DIAG,
REIPL_METHOD_FCP_RO_VM,
REIPL_METHOD_FCP_DUMP,
REIPL_METHOD_NSS,
REIPL_METHOD_DEFAULT,
};
enum dump_method {
DUMP_METHOD_NONE,
DUMP_METHOD_CCW_CIO,
DUMP_METHOD_CCW_DIAG,
DUMP_METHOD_CCW_VM,
DUMP_METHOD_FCP_DIAG,
};
enum shutdown_action {
......@@ -107,15 +132,15 @@ static int diag308_set_works = 0;
static int reipl_capabilities = IPL_TYPE_UNKNOWN;
static enum ipl_type reipl_type = IPL_TYPE_UNKNOWN;
static enum ipl_method reipl_method = IPL_METHOD_NONE;
static enum ipl_method reipl_method = REIPL_METHOD_DEFAULT;
static struct ipl_parameter_block *reipl_block_fcp;
static struct ipl_parameter_block *reipl_block_ccw;
static char reipl_nss_name[NSS_NAME_SIZE + 1];
static int dump_capabilities = IPL_TYPE_NONE;
static enum ipl_type dump_type = IPL_TYPE_NONE;
static enum ipl_method dump_method = IPL_METHOD_NONE;
static int dump_capabilities = DUMP_TYPE_NONE;
static enum dump_type dump_type = DUMP_TYPE_NONE;
static enum dump_method dump_method = DUMP_METHOD_NONE;
static struct ipl_parameter_block *dump_block_fcp;
static struct ipl_parameter_block *dump_block_ccw;
......@@ -134,6 +159,7 @@ int diag308(unsigned long subcode, void *addr)
: "d" (subcode) : "cc", "memory");
return _rc;
}
EXPORT_SYMBOL_GPL(diag308);
/* SYSFS */
......@@ -197,7 +223,7 @@ static void make_attrs_ro(struct attribute **attrs)
* ipl section
*/
static enum ipl_type ipl_get_type(void)
static __init enum ipl_type get_ipl_type(void)
{
struct ipl_parameter_block *ipl = IPL_PARMBLOCK_START;
......@@ -211,12 +237,44 @@ static enum ipl_type ipl_get_type(void)
return IPL_TYPE_UNKNOWN;
if (ipl->hdr.pbt != DIAG308_IPL_TYPE_FCP)
return IPL_TYPE_UNKNOWN;
if (ipl->ipl_info.fcp.opt == DIAG308_IPL_OPT_DUMP)
return IPL_TYPE_FCP_DUMP;
return IPL_TYPE_FCP;
}
void __init setup_ipl_info(void)
{
ipl_info.type = get_ipl_type();
switch (ipl_info.type) {
case IPL_TYPE_CCW:
ipl_info.data.ccw.dev_id.devno = ipl_devno;
ipl_info.data.ccw.dev_id.ssid = 0;
break;
case IPL_TYPE_FCP:
case IPL_TYPE_FCP_DUMP:
ipl_info.data.fcp.dev_id.devno =
IPL_PARMBLOCK_START->ipl_info.fcp.devno;
ipl_info.data.fcp.dev_id.ssid = 0;
ipl_info.data.fcp.wwpn = IPL_PARMBLOCK_START->ipl_info.fcp.wwpn;
ipl_info.data.fcp.lun = IPL_PARMBLOCK_START->ipl_info.fcp.lun;
break;
case IPL_TYPE_NSS:
strncpy(ipl_info.data.nss.name, kernel_nss_name,
sizeof(ipl_info.data.nss.name));
break;
case IPL_TYPE_UNKNOWN:
default:
/* We have no info to copy */
break;
}
}
struct ipl_info ipl_info;
EXPORT_SYMBOL_GPL(ipl_info);
static ssize_t ipl_type_show(struct subsystem *subsys, char *page)
{
return sprintf(page, "%s\n", ipl_type_str(ipl_get_type()));
return sprintf(page, "%s\n", ipl_type_str(ipl_info.type));
}
static struct subsys_attribute sys_ipl_type_attr = __ATTR_RO(ipl_type);
......@@ -225,10 +283,11 @@ static ssize_t sys_ipl_device_show(struct subsystem *subsys, char *page)
{
struct ipl_parameter_block *ipl = IPL_PARMBLOCK_START;
switch (ipl_get_type()) {
switch (ipl_info.type) {
case IPL_TYPE_CCW:
return sprintf(page, "0.0.%04x\n", ipl_devno);
case IPL_TYPE_FCP:
case IPL_TYPE_FCP_DUMP:
return sprintf(page, "0.0.%04x\n", ipl->ipl_info.fcp.devno);
default:
return 0;
......@@ -485,23 +544,29 @@ static int reipl_set_type(enum ipl_type type)
switch(type) {
case IPL_TYPE_CCW:
if (MACHINE_IS_VM)
reipl_method = IPL_METHOD_CCW_VM;
reipl_method = REIPL_METHOD_CCW_VM;
else
reipl_method = IPL_METHOD_CCW_CIO;
reipl_method = REIPL_METHOD_CCW_CIO;
break;
case IPL_TYPE_FCP:
if (diag308_set_works)
reipl_method = IPL_METHOD_FCP_RW_DIAG;
reipl_method = REIPL_METHOD_FCP_RW_DIAG;
else if (MACHINE_IS_VM)
reipl_method = IPL_METHOD_FCP_RO_VM;
reipl_method = REIPL_METHOD_FCP_RO_VM;
else
reipl_method = IPL_METHOD_FCP_RO_DIAG;
reipl_method = REIPL_METHOD_FCP_RO_DIAG;
break;
case IPL_TYPE_FCP_DUMP:
reipl_method = REIPL_METHOD_FCP_DUMP;
break;
case IPL_TYPE_NSS:
reipl_method = IPL_METHOD_NSS;
reipl_method = REIPL_METHOD_NSS;
break;
case IPL_TYPE_UNKNOWN:
reipl_method = REIPL_METHOD_DEFAULT;
break;
default:
reipl_method = IPL_METHOD_NONE;
BUG();
}
reipl_type = type;
return 0;
......@@ -579,22 +644,22 @@ static struct attribute_group dump_ccw_attr_group = {
/* dump type */
static int dump_set_type(enum ipl_type type)
static int dump_set_type(enum dump_type type)
{
if (!(dump_capabilities & type))
return -EINVAL;
switch(type) {
case IPL_TYPE_CCW:
case DUMP_TYPE_CCW:
if (MACHINE_IS_VM)
dump_method = IPL_METHOD_CCW_VM;
dump_method = DUMP_METHOD_CCW_VM;
else
dump_method = IPL_METHOD_CCW_CIO;
dump_method = DUMP_METHOD_CCW_CIO;
break;
case IPL_TYPE_FCP:
dump_method = IPL_METHOD_FCP_RW_DIAG;
case DUMP_TYPE_FCP:
dump_method = DUMP_METHOD_FCP_DIAG;
break;
default:
dump_method = IPL_METHOD_NONE;
dump_method = DUMP_METHOD_NONE;
}
dump_type = type;
return 0;
......@@ -602,7 +667,7 @@ static int dump_set_type(enum ipl_type type)
static ssize_t dump_type_show(struct subsystem *subsys, char *page)
{
return sprintf(page, "%s\n", ipl_type_str(dump_type));
return sprintf(page, "%s\n", dump_type_str(dump_type));
}
static ssize_t dump_type_store(struct subsystem *subsys, const char *buf,
......@@ -610,12 +675,12 @@ static ssize_t dump_type_store(struct subsystem *subsys, const char *buf,
{
int rc = -EINVAL;
if (strncmp(buf, IPL_NONE_STR, strlen(IPL_NONE_STR)) == 0)
rc = dump_set_type(IPL_TYPE_NONE);
else if (strncmp(buf, IPL_CCW_STR, strlen(IPL_CCW_STR)) == 0)
rc = dump_set_type(IPL_TYPE_CCW);
else if (strncmp(buf, IPL_FCP_STR, strlen(IPL_FCP_STR)) == 0)
rc = dump_set_type(IPL_TYPE_FCP);
if (strncmp(buf, DUMP_NONE_STR, strlen(DUMP_NONE_STR)) == 0)
rc = dump_set_type(DUMP_TYPE_NONE);
else if (strncmp(buf, DUMP_CCW_STR, strlen(DUMP_CCW_STR)) == 0)
rc = dump_set_type(DUMP_TYPE_CCW);
else if (strncmp(buf, DUMP_FCP_STR, strlen(DUMP_FCP_STR)) == 0)
rc = dump_set_type(DUMP_TYPE_FCP);
return (rc != 0) ? rc : len;
}
......@@ -664,14 +729,14 @@ void do_reipl(void)
char loadparm[LOADPARM_LEN + 1];
switch (reipl_method) {
case IPL_METHOD_CCW_CIO:
case REIPL_METHOD_CCW_CIO:
devid.devno = reipl_block_ccw->ipl_info.ccw.devno;
if (ipl_get_type() == IPL_TYPE_CCW && devid.devno == ipl_devno)
if (ipl_info.type == IPL_TYPE_CCW && devid.devno == ipl_devno)
diag308(DIAG308_IPL, NULL);
devid.ssid = 0;
reipl_ccw_dev(&devid);
break;
case IPL_METHOD_CCW_VM:
case REIPL_METHOD_CCW_VM:
reipl_get_ascii_loadparm(loadparm);
if (strlen(loadparm) == 0)
sprintf(buf, "IPL %X",
......@@ -681,30 +746,32 @@ void do_reipl(void)
reipl_block_ccw->ipl_info.ccw.devno, loadparm);
__cpcmd(buf, NULL, 0, NULL);
break;
case IPL_METHOD_CCW_DIAG:
case REIPL_METHOD_CCW_DIAG:
diag308(DIAG308_SET, reipl_block_ccw);
diag308(DIAG308_IPL, NULL);
break;
case IPL_METHOD_FCP_RW_DIAG:
case REIPL_METHOD_FCP_RW_DIAG:
diag308(DIAG308_SET, reipl_block_fcp);
diag308(DIAG308_IPL, NULL);
break;
case IPL_METHOD_FCP_RO_DIAG:
case REIPL_METHOD_FCP_RO_DIAG:
diag308(DIAG308_IPL, NULL);
break;
case IPL_METHOD_FCP_RO_VM:
case REIPL_METHOD_FCP_RO_VM:
__cpcmd("IPL", NULL, 0, NULL);
break;
case IPL_METHOD_NSS:
case REIPL_METHOD_NSS:
sprintf(buf, "IPL %s", reipl_nss_name);
__cpcmd(buf, NULL, 0, NULL);
break;
case IPL_METHOD_NONE:
default:
case REIPL_METHOD_DEFAULT:
if (MACHINE_IS_VM)
__cpcmd("IPL", NULL, 0, NULL);
diag308(DIAG308_IPL, NULL);
break;
case REIPL_METHOD_FCP_DUMP:
default:
break;
}
signal_processor(smp_processor_id(), sigp_stop_and_store_status);
}
......@@ -715,28 +782,28 @@ static void do_dump(void)
static char buf[100];
switch (dump_method) {
case IPL_METHOD_CCW_CIO:
case DUMP_METHOD_CCW_CIO:
smp_send_stop();
devid.devno = dump_block_ccw->ipl_info.ccw.devno;
devid.ssid = 0;
reipl_ccw_dev(&devid);
break;
case IPL_METHOD_CCW_VM:
case DUMP_METHOD_CCW_VM:
smp_send_stop();
sprintf(buf, "STORE STATUS");
__cpcmd(buf, NULL, 0, NULL);
sprintf(buf, "IPL %X", dump_block_ccw->ipl_info.ccw.devno);
__cpcmd(buf, NULL, 0, NULL);
break;
case IPL_METHOD_CCW_DIAG:
case DUMP_METHOD_CCW_DIAG:
diag308(DIAG308_SET, dump_block_ccw);
diag308(DIAG308_DUMP, NULL);
break;
case IPL_METHOD_FCP_RW_DIAG:
case DUMP_METHOD_FCP_DIAG:
diag308(DIAG308_SET, dump_block_fcp);
diag308(DIAG308_DUMP, NULL);
break;
case IPL_METHOD_NONE:
case DUMP_METHOD_NONE:
default:
return;
}
......@@ -777,12 +844,13 @@ static int __init ipl_init(void)
rc = firmware_register(&ipl_subsys);
if (rc)
return rc;
switch (ipl_get_type()) {
switch (ipl_info.type) {
case IPL_TYPE_CCW:
rc = sysfs_create_group(&ipl_subsys.kset.kobj,
&ipl_ccw_attr_group);
break;
case IPL_TYPE_FCP:
case IPL_TYPE_FCP_DUMP:
rc = ipl_register_fcp_files();
break;
case IPL_TYPE_NSS:
......@@ -852,7 +920,7 @@ static int __init reipl_ccw_init(void)
/* FIXME: check for diag308_set_works when enabling diag ccw reipl */
if (!MACHINE_IS_VM)
sys_reipl_ccw_loadparm_attr.attr.mode = S_IRUGO;
if (ipl_get_type() == IPL_TYPE_CCW)
if (ipl_info.type == IPL_TYPE_CCW)
reipl_block_ccw->ipl_info.ccw.devno = ipl_devno;
reipl_capabilities |= IPL_TYPE_CCW;
return 0;
......@@ -862,9 +930,9 @@ static int __init reipl_fcp_init(void)
{
int rc;
if ((!diag308_set_works) && (ipl_get_type() != IPL_TYPE_FCP))
if ((!diag308_set_works) && (ipl_info.type != IPL_TYPE_FCP))
return 0;
if ((!diag308_set_works) && (ipl_get_type() == IPL_TYPE_FCP))
if ((!diag308_set_works) && (ipl_info.type == IPL_TYPE_FCP))
make_attrs_ro(reipl_fcp_attrs);
reipl_block_fcp = (void *) get_zeroed_page(GFP_KERNEL);
......@@ -875,7 +943,7 @@ static int __init reipl_fcp_init(void)
free_page((unsigned long)reipl_block_fcp);
return rc;
}
if (ipl_get_type() == IPL_TYPE_FCP) {
if (ipl_info.type == IPL_TYPE_FCP) {
memcpy(reipl_block_fcp, IPL_PARMBLOCK_START, PAGE_SIZE);
} else {
reipl_block_fcp->hdr.len = IPL_PARM_BLK_FCP_LEN;
......@@ -909,7 +977,7 @@ static int __init reipl_init(void)
rc = reipl_nss_init();
if (rc)
return rc;
rc = reipl_set_type(ipl_get_type());
rc = reipl_set_type(ipl_info.type);
if (rc)
return rc;
return 0;
......@@ -931,7 +999,7 @@ static int __init dump_ccw_init(void)
dump_block_ccw->hdr.version = IPL_PARM_BLOCK_VERSION;
dump_block_ccw->hdr.blk0_len = IPL_PARM_BLK0_CCW_LEN;
dump_block_ccw->hdr.pbt = DIAG308_IPL_TYPE_CCW;
dump_capabilities |= IPL_TYPE_CCW;
dump_capabilities |= DUMP_TYPE_CCW;
return 0;
}
......@@ -956,7 +1024,7 @@ static int __init dump_fcp_init(void)
dump_block_fcp->hdr.blk0_len = IPL_PARM_BLK0_FCP_LEN;
dump_block_fcp->hdr.pbt = DIAG308_IPL_TYPE_FCP;
dump_block_fcp->ipl_info.fcp.opt = DIAG308_IPL_OPT_DUMP;
dump_capabilities |= IPL_TYPE_FCP;
dump_capabilities |= DUMP_TYPE_FCP;
return 0;
}
......@@ -995,7 +1063,7 @@ static int __init dump_init(void)
rc = dump_fcp_init();
if (rc)
return rc;
dump_set_type(IPL_TYPE_NONE);
dump_set_type(DUMP_TYPE_NONE);
return 0;
}
......@@ -1038,6 +1106,27 @@ static int __init s390_ipl_init(void)
__initcall(s390_ipl_init);
void __init ipl_save_parameters(void)
{
struct cio_iplinfo iplinfo;
unsigned int *ipl_ptr;
void *src, *dst;
if (cio_get_iplinfo(&iplinfo))
return;
ipl_devno = iplinfo.devno;
ipl_flags |= IPL_DEVNO_VALID;
if (!iplinfo.is_qdio)
return;
ipl_flags |= IPL_PARMBLOCK_VALID;
ipl_ptr = (unsigned int *)__LC_IPL_PARMBLOCK_PTR;
src = (void *)(unsigned long)*ipl_ptr;
dst = (void *)IPL_PARMBLOCK_ORIGIN;
memmove(dst, src, PAGE_SIZE);
*ipl_ptr = IPL_PARMBLOCK_ORIGIN;
}
static LIST_HEAD(rcall);
static DEFINE_MUTEX(rcall_mutex);
......
......@@ -31,6 +31,7 @@
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/moduleloader.h>
#include <linux/bug.h>
#if 0
#define DEBUGP printk
......@@ -398,9 +399,10 @@ int module_finalize(const Elf_Ehdr *hdr,
struct module *me)
{
vfree(me->arch.syminfo);
return 0;
return module_bug_finalize(hdr, sechdrs, me);
}
void module_arch_cleanup(struct module *mod)
{
module_bug_cleanup(mod);
}
......@@ -280,24 +280,26 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long new_stackp,
return 0;
}
asmlinkage long sys_fork(struct pt_regs regs)
asmlinkage long sys_fork(void)
{
return do_fork(SIGCHLD, regs.gprs[15], &regs, 0, NULL, NULL);
struct pt_regs *regs = task_pt_regs(current);
return do_fork(SIGCHLD, regs->gprs[15], regs, 0, NULL, NULL);
}
asmlinkage long sys_clone(struct pt_regs regs)
asmlinkage long sys_clone(void)
{
struct pt_regs *regs = task_pt_regs(current);
unsigned long clone_flags;
unsigned long newsp;
int __user *parent_tidptr, *child_tidptr;
clone_flags = regs.gprs[3];
newsp = regs.orig_gpr2;
parent_tidptr = (int __user *) regs.gprs[4];
child_tidptr = (int __user *) regs.gprs[5];
clone_flags = regs->gprs[3];
newsp = regs->orig_gpr2;
parent_tidptr = (int __user *) regs->gprs[4];
child_tidptr = (int __user *) regs->gprs[5];
if (!newsp)
newsp = regs.gprs[15];
return do_fork(clone_flags, newsp, &regs, 0,
newsp = regs->gprs[15];
return do_fork(clone_flags, newsp, regs, 0,
parent_tidptr, child_tidptr);
}
......@@ -311,40 +313,52 @@ asmlinkage long sys_clone(struct pt_regs regs)
* do not have enough call-clobbered registers to hold all
* the information you need.
*/
asmlinkage long sys_vfork(struct pt_regs regs)
asmlinkage long sys_vfork(void)
{
struct pt_regs *regs = task_pt_regs(current);
return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD,
regs.gprs[15], &regs, 0, NULL, NULL);
regs->gprs[15], regs, 0, NULL, NULL);
}
/*
* sys_execve() executes a new program.
*/
asmlinkage long sys_execve(struct pt_regs regs)
asmlinkage void execve_tail(void)
{
int error;
char * filename;
filename = getname((char __user *) regs.orig_gpr2);
error = PTR_ERR(filename);
if (IS_ERR(filename))
goto out;
error = do_execve(filename, (char __user * __user *) regs.gprs[3],
(char __user * __user *) regs.gprs[4], &regs);
if (error == 0) {
task_lock(current);
current->ptrace &= ~PT_DTRACE;
task_unlock(current);
current->thread.fp_regs.fpc = 0;
if (MACHINE_HAS_IEEE)
asm volatile("sfpc %0,%0" : : "d" (0));
}
/*
* sys_execve() executes a new program.
*/
asmlinkage long sys_execve(void)
{
struct pt_regs *regs = task_pt_regs(current);
char *filename;
unsigned long result;
int rc;
filename = getname((char __user *) regs->orig_gpr2);
if (IS_ERR(filename)) {
result = PTR_ERR(filename);
goto out;
}
rc = do_execve(filename, (char __user * __user *) regs->gprs[3],
(char __user * __user *) regs->gprs[4], regs);
if (rc) {
result = rc;
goto out_putname;
}
execve_tail();
result = regs->gprs[2];
out_putname:
putname(filename);
out:
return error;
return result;
}
/*
* fill in the FPU structure for a core dump.
*/
......
......@@ -285,6 +285,26 @@ static void __init conmode_default(void)
}
}
#if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE)
static void __init setup_zfcpdump(unsigned int console_devno)
{
static char str[64];
if (ipl_info.type != IPL_TYPE_FCP_DUMP)
return;
if (console_devno != -1)
sprintf(str, "cio_ignore=all,!0.0.%04x,!0.0.%04x",
ipl_info.data.fcp.dev_id.devno, console_devno);
else
sprintf(str, "cio_ignore=all,!0.0.%04x",
ipl_info.data.fcp.dev_id.devno);
strcat(COMMAND_LINE, str);
console_loglevel = 2;
}
#else
static inline void setup_zfcpdump(unsigned int console_devno) {}
#endif /* CONFIG_ZFCPDUMP */
#ifdef CONFIG_SMP
void (*_machine_restart)(char *command) = machine_restart_smp;
void (*_machine_halt)(void) = machine_halt_smp;
......@@ -586,13 +606,20 @@ setup_resources(void)
}
}
unsigned long real_memory_size;
EXPORT_SYMBOL_GPL(real_memory_size);
static void __init setup_memory_end(void)
{
unsigned long real_size, memory_size;
unsigned long memory_size;
unsigned long max_mem, max_phys;
int i;
memory_size = real_size = 0;
#if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE)
if (ipl_info.type == IPL_TYPE_FCP_DUMP)
memory_end = ZFCPDUMP_HSA_SIZE;
#endif
memory_size = 0;
max_phys = VMALLOC_END_INIT - VMALLOC_MIN_SIZE;
memory_end &= PAGE_MASK;
......@@ -601,7 +628,8 @@ static void __init setup_memory_end(void)
for (i = 0; i < MEMORY_CHUNKS; i++) {
struct mem_chunk *chunk = &memory_chunk[i];
real_size = max(real_size, chunk->addr + chunk->size);
real_memory_size = max(real_memory_size,
chunk->addr + chunk->size);
if (chunk->addr >= max_mem) {
memset(chunk, 0, sizeof(*chunk));
continue;
......@@ -765,6 +793,7 @@ setup_arch(char **cmdline_p)
parse_early_param();
setup_ipl_info();
setup_memory_end();
setup_addressing_mode();
setup_memory();
......@@ -782,6 +811,9 @@ setup_arch(char **cmdline_p)
/* Setup default console */
conmode_default();
/* Setup zfcpdump support */
setup_zfcpdump(console_devno);
}
void print_cpu_info(struct cpuinfo_S390 *cpuinfo)
......
......@@ -102,9 +102,9 @@ sys_sigaction(int sig, const struct old_sigaction __user *act,
}
asmlinkage long
sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss,
struct pt_regs *regs)
sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss)
{
struct pt_regs *regs = task_pt_regs(current);
return do_sigaltstack(uss, uoss, regs->gprs[15]);
}
......@@ -163,8 +163,9 @@ static int restore_sigregs(struct pt_regs *regs, _sigregs __user *sregs)
return 0;
}
asmlinkage long sys_sigreturn(struct pt_regs *regs)
asmlinkage long sys_sigreturn(void)
{
struct pt_regs *regs = task_pt_regs(current);
sigframe __user *frame = (sigframe __user *)regs->gprs[15];
sigset_t set;
......@@ -189,8 +190,9 @@ asmlinkage long sys_sigreturn(struct pt_regs *regs)
return 0;
}
asmlinkage long sys_rt_sigreturn(struct pt_regs *regs)
asmlinkage long sys_rt_sigreturn(void)
{
struct pt_regs *regs = task_pt_regs(current);
rt_sigframe __user *frame = (rt_sigframe __user *)regs->gprs[15];
sigset_t set;
......
/*
* arch/s390/kernel/smp.c
*
* Copyright (C) IBM Corp. 1999,2006
* Copyright IBM Corp. 1999,2007
* Author(s): Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com),
* Martin Schwidefsky (schwidefsky@de.ibm.com)
* Heiko Carstens (heiko.carstens@de.ibm.com)
......@@ -31,6 +31,7 @@
#include <linux/interrupt.h>
#include <linux/cpu.h>
#include <linux/timex.h>
#include <linux/bootmem.h>
#include <asm/ipl.h>
#include <asm/setup.h>
#include <asm/sigp.h>
......@@ -40,17 +41,19 @@
#include <asm/cpcmd.h>
#include <asm/tlbflush.h>
#include <asm/timer.h>
extern volatile int __cpu_logical_map[];
#include <asm/lowcore.h>
/*
* An array with a pointer the lowcore of every CPU.
*/
struct _lowcore *lowcore_ptr[NR_CPUS];
EXPORT_SYMBOL(lowcore_ptr);
cpumask_t cpu_online_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_online_map);
cpumask_t cpu_possible_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_possible_map);
static struct task_struct *current_set[NR_CPUS];
......@@ -70,7 +73,7 @@ struct call_data_struct {
int wait;
};
static struct call_data_struct * call_data;
static struct call_data_struct *call_data;
/*
* 'Call function' interrupt callback
......@@ -150,8 +153,8 @@ static void __smp_call_function_map(void (*func) (void *info), void *info,
*
* Run a function on all other CPUs.
*
* You must not call this function with disabled interrupts or from a
* hardware interrupt handler. You may call it from a bottom half.
* You must not call this function with disabled interrupts, from a
* hardware interrupt handler or from a bottom half.
*/
int smp_call_function(void (*func) (void *info), void *info, int nonatomic,
int wait)
......@@ -177,8 +180,8 @@ EXPORT_SYMBOL(smp_call_function);
*
* Run a function on one processor.
*
* You must not call this function with disabled interrupts or from a
* hardware interrupt handler. You may call it from a bottom half.
* You must not call this function with disabled interrupts, from a
* hardware interrupt handler or from a bottom half.
*/
int smp_call_function_on(void (*func) (void *info), void *info, int nonatomic,
int wait, int cpu)
......@@ -219,7 +222,7 @@ static void do_store_status(void)
rc = signal_processor_p(
(__u32)(unsigned long) lowcore_ptr[cpu], cpu,
sigp_store_status_at_address);
} while(rc == sigp_busy);
} while (rc == sigp_busy);
}
}
......@@ -231,7 +234,7 @@ static void do_wait_for_stop(void)
for_each_online_cpu(cpu) {
if (cpu == smp_processor_id())
continue;
while(!smp_cpu_not_running(cpu))
while (!smp_cpu_not_running(cpu))
cpu_relax();
}
}
......@@ -261,8 +264,7 @@ void smp_send_stop(void)
/*
* Reboot, halt and power_off routines for SMP.
*/
void machine_restart_smp(char * __unused)
void machine_restart_smp(char *__unused)
{
smp_send_stop();
do_reipl();
......@@ -317,7 +319,7 @@ static void smp_ext_bitcall(int cpu, ec_bit_sig sig)
* Set signaling bit in lowcore of target cpu and kick it
*/
set_bit(sig, (unsigned long *) &lowcore_ptr[cpu]->ext_call_fast);
while(signal_processor(cpu, sigp_emergency_signal) == sigp_busy)
while (signal_processor(cpu, sigp_emergency_signal) == sigp_busy)
udelay(10);
}
......@@ -358,7 +360,8 @@ struct ec_creg_mask_parms {
/*
* callback for setting/clearing control bits
*/
static void smp_ctl_bit_callback(void *info) {
static void smp_ctl_bit_callback(void *info)
{
struct ec_creg_mask_parms *pp = info;
unsigned long cregs[16];
int i;
......@@ -381,6 +384,7 @@ void smp_ctl_set_bit(int cr, int bit)
parms.orvals[cr] = 1 << bit;
on_each_cpu(smp_ctl_bit_callback, &parms, 0, 1);
}
EXPORT_SYMBOL(smp_ctl_set_bit);
/*
* Clear a bit in a control register of all cpus
......@@ -394,13 +398,72 @@ void smp_ctl_clear_bit(int cr, int bit)
parms.andvals[cr] = ~(1L << bit);
on_each_cpu(smp_ctl_bit_callback, &parms, 0, 1);
}
EXPORT_SYMBOL(smp_ctl_clear_bit);
#if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE)
/*
* zfcpdump_prefix_array holds prefix registers for the following scenario:
* 64 bit zfcpdump kernel and 31 bit kernel which is to be dumped. We have to
* save its prefix registers, since they get lost, when switching from 31 bit
* to 64 bit.
*/
unsigned int zfcpdump_prefix_array[NR_CPUS + 1] \
__attribute__((__section__(".data")));
static void __init smp_get_save_areas(void)
{
unsigned int cpu, cpu_num, rc;
__u16 boot_cpu_addr;
if (ipl_info.type != IPL_TYPE_FCP_DUMP)
return;
boot_cpu_addr = S390_lowcore.cpu_data.cpu_addr;
cpu_num = 1;
for (cpu = 0; cpu <= 65535; cpu++) {
if ((u16) cpu == boot_cpu_addr)
continue;
__cpu_logical_map[1] = (__u16) cpu;
if (signal_processor(1, sigp_sense) == sigp_not_operational)
continue;
if (cpu_num >= NR_CPUS) {
printk("WARNING: Registers for cpu %i are not "
"saved, since dump kernel was compiled with"
"NR_CPUS=%i!\n", cpu_num, NR_CPUS);
continue;
}
zfcpdump_save_areas[cpu_num] =
alloc_bootmem(sizeof(union save_area));
while (1) {
rc = signal_processor(1, sigp_stop_and_store_status);
if (rc != sigp_busy)
break;
cpu_relax();
}
memcpy(zfcpdump_save_areas[cpu_num],
(void *)(unsigned long) store_prefix() +
SAVE_AREA_BASE, SAVE_AREA_SIZE);
#ifdef __s390x__
/* copy original prefix register */
zfcpdump_save_areas[cpu_num]->s390x.pref_reg =
zfcpdump_prefix_array[cpu_num];
#endif
cpu_num++;
}
}
union save_area *zfcpdump_save_areas[NR_CPUS + 1];
EXPORT_SYMBOL_GPL(zfcpdump_save_areas);
#else
#define smp_get_save_areas() do { } while (0)
#endif
/*
* Lets check how many CPUs we have.
*/
static unsigned int
__init smp_count_cpus(void)
static unsigned int __init smp_count_cpus(void)
{
unsigned int cpu, num_cpus;
__u16 boot_cpu_addr;
......@@ -416,13 +479,12 @@ __init smp_count_cpus(void)
if ((__u16) cpu == boot_cpu_addr)
continue;
__cpu_logical_map[1] = (__u16) cpu;
if (signal_processor(1, sigp_sense) ==
sigp_not_operational)
if (signal_processor(1, sigp_sense) == sigp_not_operational)
continue;
num_cpus++;
}
printk("Detected %d CPU's\n",(int) num_cpus);
printk("Detected %d CPU's\n", (int) num_cpus);
printk("Boot cpu address %2X\n", boot_cpu_addr);
return num_cpus;
......@@ -470,56 +532,13 @@ static void __init smp_create_idle(unsigned int cpu)
current_set[cpu] = p;
}
/* Reserving and releasing of CPUs */
static DEFINE_SPINLOCK(smp_reserve_lock);
static int smp_cpu_reserved[NR_CPUS];
int
smp_get_cpu(cpumask_t cpu_mask)
{
unsigned long flags;
int cpu;
spin_lock_irqsave(&smp_reserve_lock, flags);
/* Try to find an already reserved cpu. */
for_each_cpu_mask(cpu, cpu_mask) {
if (smp_cpu_reserved[cpu] != 0) {
smp_cpu_reserved[cpu]++;
/* Found one. */
goto out;
}
}
/* Reserve a new cpu from cpu_mask. */
for_each_cpu_mask(cpu, cpu_mask) {
if (cpu_online(cpu)) {
smp_cpu_reserved[cpu]++;
goto out;
}
}
cpu = -ENODEV;
out:
spin_unlock_irqrestore(&smp_reserve_lock, flags);
return cpu;
}
void
smp_put_cpu(int cpu)
{
unsigned long flags;
spin_lock_irqsave(&smp_reserve_lock, flags);
smp_cpu_reserved[cpu]--;
spin_unlock_irqrestore(&smp_reserve_lock, flags);
}
static int
cpu_stopped(int cpu)
static int cpu_stopped(int cpu)
{
__u32 status;
/* Check for stopped state */
if (signal_processor_ps(&status, 0, cpu, sigp_sense) == sigp_status_stored) {
if (signal_processor_ps(&status, 0, cpu, sigp_sense) ==
sigp_status_stored) {
if (status & 0x40)
return 1;
}
......@@ -528,8 +547,7 @@ cpu_stopped(int cpu)
/* Upping and downing of CPUs */
int
__cpu_up(unsigned int cpu)
int __cpu_up(unsigned int cpu)
{
struct task_struct *idle;
struct _lowcore *cpu_lowcore;
......@@ -548,7 +566,7 @@ __cpu_up(unsigned int cpu)
ccode = signal_processor_p((__u32)(unsigned long)(lowcore_ptr[cpu]),
cpu, sigp_set_prefix);
if (ccode){
if (ccode) {
printk("sigp_set_prefix failed for cpu %d "
"with condition code %d\n",
(int) cpu, (int) ccode);
......@@ -558,7 +576,7 @@ __cpu_up(unsigned int cpu)
idle = current_set[cpu];
cpu_lowcore = lowcore_ptr[cpu];
cpu_lowcore->kernel_stack = (unsigned long)
task_stack_page(idle) + (THREAD_SIZE);
task_stack_page(idle) + THREAD_SIZE;
sf = (struct stack_frame *) (cpu_lowcore->kernel_stack
- sizeof(struct pt_regs)
- sizeof(struct stack_frame));
......@@ -574,7 +592,7 @@ __cpu_up(unsigned int cpu)
cpu_lowcore->cpu_data.cpu_nr = cpu;
eieio();
while (signal_processor(cpu,sigp_restart) == sigp_busy)
while (signal_processor(cpu, sigp_restart) == sigp_busy)
udelay(10);
while (!cpu_online(cpu))
......@@ -589,6 +607,7 @@ void __init smp_setup_cpu_possible_map(void)
{
unsigned int phy_cpus, pos_cpus, cpu;
smp_get_save_areas();
phy_cpus = smp_count_cpus();
pos_cpus = min(phy_cpus + additional_cpus, (unsigned int) NR_CPUS);
......@@ -620,18 +639,11 @@ static int __init setup_possible_cpus(char *s)
}
early_param("possible_cpus", setup_possible_cpus);
int
__cpu_disable(void)
int __cpu_disable(void)
{
unsigned long flags;
struct ec_creg_mask_parms cr_parms;
int cpu = smp_processor_id();
spin_lock_irqsave(&smp_reserve_lock, flags);
if (smp_cpu_reserved[cpu] != 0) {
spin_unlock_irqrestore(&smp_reserve_lock, flags);
return -EBUSY;
}
cpu_clear(cpu, cpu_online_map);
/* Disable pfault pseudo page faults on this cpu. */
......@@ -642,24 +654,23 @@ __cpu_disable(void)
/* disable all external interrupts */
cr_parms.orvals[0] = 0;
cr_parms.andvals[0] = ~(1<<15 | 1<<14 | 1<<13 | 1<<12 |
1<<11 | 1<<10 | 1<< 6 | 1<< 4);
cr_parms.andvals[0] = ~(1 << 15 | 1 << 14 | 1 << 13 | 1 << 12 |
1 << 11 | 1 << 10 | 1 << 6 | 1 << 4);
/* disable all I/O interrupts */
cr_parms.orvals[6] = 0;
cr_parms.andvals[6] = ~(1<<31 | 1<<30 | 1<<29 | 1<<28 |
1<<27 | 1<<26 | 1<<25 | 1<<24);
cr_parms.andvals[6] = ~(1 << 31 | 1 << 30 | 1 << 29 | 1 << 28 |
1 << 27 | 1 << 26 | 1 << 25 | 1 << 24);
/* disable most machine checks */
cr_parms.orvals[14] = 0;
cr_parms.andvals[14] = ~(1<<28 | 1<<27 | 1<<26 | 1<<25 | 1<<24);
cr_parms.andvals[14] = ~(1 << 28 | 1 << 27 | 1 << 26 |
1 << 25 | 1 << 24);
smp_ctl_bit_callback(&cr_parms);
spin_unlock_irqrestore(&smp_reserve_lock, flags);
return 0;
}
void
__cpu_die(unsigned int cpu)
void __cpu_die(unsigned int cpu)
{
/* Wait until target cpu is down */
while (!smp_cpu_not_running(cpu))
......@@ -667,13 +678,12 @@ __cpu_die(unsigned int cpu)
printk("Processor %d spun down\n", cpu);
}
void
cpu_die(void)
void cpu_die(void)
{
idle_task_exit();
signal_processor(smp_processor_id(), sigp_stop);
BUG();
for(;;);
for (;;);
}
#endif /* CONFIG_HOTPLUG_CPU */
......@@ -691,7 +701,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
/* request the 0x1201 emergency signal external interrupt */
if (register_external_interrupt(0x1201, do_ext_call_interrupt) != 0)
panic("Couldn't request external interrupt 0x1201");
memset(lowcore_ptr,0,sizeof(lowcore_ptr));
memset(lowcore_ptr, 0, sizeof(lowcore_ptr));
/*
* Initialize prefix pages and stacks for all possible cpus
*/
......@@ -699,23 +709,23 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
for_each_possible_cpu(i) {
lowcore_ptr[i] = (struct _lowcore *)
__get_free_pages(GFP_KERNEL|GFP_DMA,
__get_free_pages(GFP_KERNEL | GFP_DMA,
sizeof(void*) == 8 ? 1 : 0);
stack = __get_free_pages(GFP_KERNEL,ASYNC_ORDER);
if (lowcore_ptr[i] == NULL || stack == 0ULL)
stack = __get_free_pages(GFP_KERNEL, ASYNC_ORDER);
if (!lowcore_ptr[i] || !stack)
panic("smp_boot_cpus failed to allocate memory\n");
*(lowcore_ptr[i]) = S390_lowcore;
lowcore_ptr[i]->async_stack = stack + (ASYNC_SIZE);
stack = __get_free_pages(GFP_KERNEL,0);
if (stack == 0ULL)
lowcore_ptr[i]->async_stack = stack + ASYNC_SIZE;
stack = __get_free_pages(GFP_KERNEL, 0);
if (!stack)
panic("smp_boot_cpus failed to allocate memory\n");
lowcore_ptr[i]->panic_stack = stack + (PAGE_SIZE);
lowcore_ptr[i]->panic_stack = stack + PAGE_SIZE;
#ifndef CONFIG_64BIT
if (MACHINE_HAS_IEEE) {
lowcore_ptr[i]->extended_save_area_addr =
(__u32) __get_free_pages(GFP_KERNEL,0);
if (lowcore_ptr[i]->extended_save_area_addr == 0)
(__u32) __get_free_pages(GFP_KERNEL, 0);
if (!lowcore_ptr[i]->extended_save_area_addr)
panic("smp_boot_cpus failed to "
"allocate memory\n");
}
......@@ -759,29 +769,58 @@ int setup_profiling_timer(unsigned int multiplier)
static DEFINE_PER_CPU(struct cpu, cpu_devices);
static ssize_t show_capability(struct sys_device *dev, char *buf)
{
unsigned int capability;
int rc;
rc = get_cpu_capability(&capability);
if (rc)
return rc;
return sprintf(buf, "%u\n", capability);
}
static SYSDEV_ATTR(capability, 0444, show_capability, NULL);
static int __cpuinit smp_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
{
unsigned int cpu = (unsigned int)(long)hcpu;
struct cpu *c = &per_cpu(cpu_devices, cpu);
struct sys_device *s = &c->sysdev;
switch (action) {
case CPU_ONLINE:
if (sysdev_create_file(s, &attr_capability))
return NOTIFY_BAD;
break;
case CPU_DEAD:
sysdev_remove_file(s, &attr_capability);
break;
}
return NOTIFY_OK;
}
static struct notifier_block __cpuinitdata smp_cpu_nb = {
.notifier_call = smp_cpu_notify,
};
static int __init topology_init(void)
{
int cpu;
int ret;
register_cpu_notifier(&smp_cpu_nb);
for_each_possible_cpu(cpu) {
struct cpu *c = &per_cpu(cpu_devices, cpu);
struct sys_device *s = &c->sysdev;
c->hotpluggable = 1;
ret = register_cpu(c, cpu);
if (ret)
printk(KERN_WARNING "topology_init: register_cpu %d "
"failed (%d)\n", cpu, ret);
register_cpu(c, cpu);
if (!cpu_online(cpu))
continue;
s = &c->sysdev;
sysdev_create_file(s, &attr_capability);
}
return 0;
}
subsys_initcall(topology_init);
EXPORT_SYMBOL(cpu_online_map);
EXPORT_SYMBOL(cpu_possible_map);
EXPORT_SYMBOL(lowcore_ptr);
EXPORT_SYMBOL(smp_ctl_set_bit);
EXPORT_SYMBOL(smp_ctl_clear_bit);
EXPORT_SYMBOL(smp_get_cpu);
EXPORT_SYMBOL(smp_put_cpu);
......@@ -266,23 +266,3 @@ s390_fadvise64_64(struct fadvise64_64_args __user *args)
return -EFAULT;
return sys_fadvise64_64(a.fd, a.offset, a.len, a.advice);
}
/*
* Do a system call from kernel instead of calling sys_execve so we
* end up with proper pt_regs.
*/
int kernel_execve(const char *filename, char *const argv[], char *const envp[])
{
register const char *__arg1 asm("2") = filename;
register char *const*__arg2 asm("3") = argv;
register char *const*__arg3 asm("4") = envp;
register long __svcres asm("2");
asm volatile(
"svc %b1"
: "=d" (__svcres)
: "i" (__NR_execve),
"0" (__arg1),
"d" (__arg2),
"d" (__arg3) : "memory");
return __svcres;
}
......@@ -10,7 +10,7 @@
NI_SYSCALL /* 0 */
SYSCALL(sys_exit,sys_exit,sys32_exit_wrapper)
SYSCALL(sys_fork_glue,sys_fork_glue,sys_fork_glue)
SYSCALL(sys_fork,sys_fork,sys_fork)
SYSCALL(sys_read,sys_read,sys32_read_wrapper)
SYSCALL(sys_write,sys_write,sys32_write_wrapper)
SYSCALL(sys_open,sys_open,sys32_open_wrapper) /* 5 */
......@@ -19,7 +19,7 @@ SYSCALL(sys_restart_syscall,sys_restart_syscall,sys_restart_syscall)
SYSCALL(sys_creat,sys_creat,sys32_creat_wrapper)
SYSCALL(sys_link,sys_link,sys32_link_wrapper)
SYSCALL(sys_unlink,sys_unlink,sys32_unlink_wrapper) /* 10 */
SYSCALL(sys_execve_glue,sys_execve_glue,sys32_execve_glue)
SYSCALL(sys_execve,sys_execve,sys32_execve)
SYSCALL(sys_chdir,sys_chdir,sys32_chdir_wrapper)
SYSCALL(sys_time,sys_ni_syscall,sys32_time_wrapper) /* old time syscall */
SYSCALL(sys_mknod,sys_mknod,sys32_mknod_wrapper)
......@@ -127,8 +127,8 @@ SYSCALL(sys_swapoff,sys_swapoff,sys32_swapoff_wrapper) /* 115 */
SYSCALL(sys_sysinfo,sys_sysinfo,compat_sys_sysinfo_wrapper)
SYSCALL(sys_ipc,sys_ipc,sys32_ipc_wrapper)
SYSCALL(sys_fsync,sys_fsync,sys32_fsync_wrapper)
SYSCALL(sys_sigreturn_glue,sys_sigreturn_glue,sys32_sigreturn_glue)
SYSCALL(sys_clone_glue,sys_clone_glue,sys32_clone_glue) /* 120 */
SYSCALL(sys_sigreturn,sys_sigreturn,sys32_sigreturn)
SYSCALL(sys_clone,sys_clone,sys32_clone) /* 120 */
SYSCALL(sys_setdomainname,sys_setdomainname,sys32_setdomainname_wrapper)
SYSCALL(sys_newuname,s390x_newuname,sys32_newuname_wrapper)
NI_SYSCALL /* modify_ldt for i386 */
......@@ -181,7 +181,7 @@ SYSCALL(sys_nfsservctl,sys_nfsservctl,compat_sys_nfsservctl_wrapper)
SYSCALL(sys_setresgid16,sys_ni_syscall,sys32_setresgid16_wrapper) /* 170 old setresgid16 syscall */
SYSCALL(sys_getresgid16,sys_ni_syscall,sys32_getresgid16_wrapper) /* old getresgid16 syscall */
SYSCALL(sys_prctl,sys_prctl,sys32_prctl_wrapper)
SYSCALL(sys_rt_sigreturn_glue,sys_rt_sigreturn_glue,sys32_rt_sigreturn_glue)
SYSCALL(sys_rt_sigreturn,sys_rt_sigreturn,sys32_rt_sigreturn)
SYSCALL(sys_rt_sigaction,sys_rt_sigaction,sys32_rt_sigaction_wrapper)
SYSCALL(sys_rt_sigprocmask,sys_rt_sigprocmask,sys32_rt_sigprocmask_wrapper) /* 175 */
SYSCALL(sys_rt_sigpending,sys_rt_sigpending,sys32_rt_sigpending_wrapper)
......@@ -194,11 +194,11 @@ SYSCALL(sys_chown16,sys_ni_syscall,sys32_chown16_wrapper) /* old chown16 syscall
SYSCALL(sys_getcwd,sys_getcwd,sys32_getcwd_wrapper)
SYSCALL(sys_capget,sys_capget,sys32_capget_wrapper)
SYSCALL(sys_capset,sys_capset,sys32_capset_wrapper) /* 185 */
SYSCALL(sys_sigaltstack_glue,sys_sigaltstack_glue,sys32_sigaltstack_glue)
SYSCALL(sys_sigaltstack,sys_sigaltstack,sys32_sigaltstack)
SYSCALL(sys_sendfile,sys_sendfile64,sys32_sendfile_wrapper)
NI_SYSCALL /* streams1 */
NI_SYSCALL /* streams2 */
SYSCALL(sys_vfork_glue,sys_vfork_glue,sys_vfork_glue) /* 190 */
SYSCALL(sys_vfork,sys_vfork,sys_vfork) /* 190 */
SYSCALL(sys_getrlimit,sys_getrlimit,compat_sys_getrlimit_wrapper)
SYSCALL(sys_mmap2,sys_mmap2,sys32_mmap2_wrapper)
SYSCALL(sys_truncate64,sys_ni_syscall,sys32_truncate64_wrapper)
......
......@@ -280,7 +280,6 @@ static void clock_comparator_interrupt(__u16 code)
}
static void etr_reset(void);
static void etr_init(void);
static void etr_ext_handler(__u16);
/*
......@@ -355,7 +354,6 @@ void __init time_init(void)
#ifdef CONFIG_VIRT_TIMER
vtime_init();
#endif
etr_init();
}
/*
......@@ -426,11 +424,11 @@ static struct etr_aib etr_port1;
static int etr_port1_uptodate;
static unsigned long etr_events;
static struct timer_list etr_timer;
static struct tasklet_struct etr_tasklet;
static DEFINE_PER_CPU(atomic_t, etr_sync_word);
static void etr_timeout(unsigned long dummy);
static void etr_tasklet_fn(unsigned long dummy);
static void etr_work_fn(struct work_struct *work);
static DECLARE_WORK(etr_work, etr_work_fn);
/*
* The etr get_clock function. It will write the current clock value
......@@ -507,29 +505,31 @@ static void etr_reset(void)
}
}
static void etr_init(void)
static int __init etr_init(void)
{
struct etr_aib aib;
if (test_bit(ETR_FLAG_ENOSYS, &etr_flags))
return;
return 0;
/* Check if this machine has the steai instruction. */
if (etr_steai(&aib, ETR_STEAI_STEPPING_PORT) == 0)
set_bit(ETR_FLAG_STEAI, &etr_flags);
setup_timer(&etr_timer, etr_timeout, 0UL);
tasklet_init(&etr_tasklet, etr_tasklet_fn, 0);
if (!etr_port0_online && !etr_port1_online)
set_bit(ETR_FLAG_EACCES, &etr_flags);
if (etr_port0_online) {
set_bit(ETR_EVENT_PORT0_CHANGE, &etr_events);
tasklet_hi_schedule(&etr_tasklet);
schedule_work(&etr_work);
}
if (etr_port1_online) {
set_bit(ETR_EVENT_PORT1_CHANGE, &etr_events);
tasklet_hi_schedule(&etr_tasklet);
schedule_work(&etr_work);
}
return 0;
}
arch_initcall(etr_init);
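/*
 * A minimal sketch, outside the patch, of the tasklet-to-workqueue switch
 * made above: the handler takes a work_struct, runs in process context,
 * and is kicked with schedule_work() instead of tasklet_hi_schedule().
 * Names prefixed "demo_" are hypothetical.
 */
#include <linux/workqueue.h>

static void demo_work_fn(struct work_struct *work);
static DECLARE_WORK(demo_work, demo_work_fn);

static void demo_work_fn(struct work_struct *work)
{
	/* process context: sleeping and taking mutexes is allowed here */
}

static void demo_event_handler(void)
{
	schedule_work(&demo_work);	/* was: tasklet_hi_schedule(&t) */
}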
/*
* Two sorts of ETR machine checks. The architecture reads:
* "When a machine-check niterruption occurs and if a switch-to-local or
......@@ -549,7 +549,7 @@ void etr_switch_to_local(void)
return;
etr_disable_sync_clock(NULL);
set_bit(ETR_EVENT_SWITCH_LOCAL, &etr_events);
tasklet_hi_schedule(&etr_tasklet);
schedule_work(&etr_work);
}
/*
......@@ -564,7 +564,7 @@ void etr_sync_check(void)
return;
etr_disable_sync_clock(NULL);
set_bit(ETR_EVENT_SYNC_CHECK, &etr_events);
tasklet_hi_schedule(&etr_tasklet);
schedule_work(&etr_work);
}
/*
......@@ -591,13 +591,13 @@ static void etr_ext_handler(__u16 code)
* Both ports are not up-to-date now.
*/
set_bit(ETR_EVENT_PORT_ALERT, &etr_events);
tasklet_hi_schedule(&etr_tasklet);
schedule_work(&etr_work);
}
static void etr_timeout(unsigned long dummy)
{
set_bit(ETR_EVENT_UPDATE, &etr_events);
tasklet_hi_schedule(&etr_tasklet);
schedule_work(&etr_work);
}
/*
......@@ -927,7 +927,7 @@ static struct etr_eacr etr_handle_update(struct etr_aib *aib,
if (!eacr.e0 && !eacr.e1)
return eacr;
/* Update port0 or port1 with aib stored in etr_tasklet_fn. */
/* Update port0 or port1 with aib stored in etr_work_fn. */
if (aib->esw.q == 0) {
/* Information for port 0 stored. */
if (eacr.p0 && !etr_port0_uptodate) {
......@@ -1007,7 +1007,7 @@ static void etr_update_eacr(struct etr_eacr eacr)
* particular this is the only function that calls etr_update_eacr(),
* it "controls" the etr control register.
*/
static void etr_tasklet_fn(unsigned long dummy)
static void etr_work_fn(struct work_struct *work)
{
unsigned long long now;
struct etr_eacr eacr;
......@@ -1220,13 +1220,13 @@ static ssize_t etr_online_store(struct sys_device *dev,
return count; /* Nothing to do. */
etr_port0_online = value;
set_bit(ETR_EVENT_PORT0_CHANGE, &etr_events);
tasklet_hi_schedule(&etr_tasklet);
schedule_work(&etr_work);
} else {
if (etr_port1_online == value)
return count; /* Nothing to do. */
etr_port1_online = value;
set_bit(ETR_EVENT_PORT1_CHANGE, &etr_events);
tasklet_hi_schedule(&etr_tasklet);
schedule_work(&etr_work);
}
return count;
}
......
......@@ -30,7 +30,7 @@
#include <linux/kallsyms.h>
#include <linux/reboot.h>
#include <linux/kprobes.h>
#include <linux/bug.h>
#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/io.h>
......@@ -188,18 +188,31 @@ void dump_stack(void)
EXPORT_SYMBOL(dump_stack);
static inline int mask_bits(struct pt_regs *regs, unsigned long bits)
{
return (regs->psw.mask & bits) / ((~bits + 1) & bits);
}
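/*
 * Aside (not in the patch): mask_bits() extracts a PSW field without a
 * per-field shift count. (~bits + 1) & bits equals -bits & bits, i.e. the
 * lowest set bit of the mask, and dividing by it right-aligns the field.
 * Worked example with a hypothetical two-bit mask:
 *
 *	bits               = 0x0000c000
 *	(~bits + 1) & bits = 0x00004000
 *	(0x00008000 & bits) / 0x00004000 = 2
 */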
void show_registers(struct pt_regs *regs)
{
mm_segment_t old_fs;
char *mode;
int i;
mode = (regs->psw.mask & PSW_MASK_PSTATE) ? "User" : "Krnl";
printk("%s PSW : %p %p",
mode, (void *) regs->psw.mask,
(void *) regs->psw.addr);
print_symbol(" (%s)\n", regs->psw.addr & PSW_ADDR_INSN);
printk("%s GPRS: " FOURLONG, mode,
printk(" R:%x T:%x IO:%x EX:%x Key:%x M:%x W:%x "
"P:%x AS:%x CC:%x PM:%x", mask_bits(regs, PSW_MASK_PER),
mask_bits(regs, PSW_MASK_DAT), mask_bits(regs, PSW_MASK_IO),
mask_bits(regs, PSW_MASK_EXT), mask_bits(regs, PSW_MASK_KEY),
mask_bits(regs, PSW_MASK_MCHECK), mask_bits(regs, PSW_MASK_WAIT),
mask_bits(regs, PSW_MASK_PSTATE), mask_bits(regs, PSW_MASK_ASC),
mask_bits(regs, PSW_MASK_CC), mask_bits(regs, PSW_MASK_PM));
#ifdef CONFIG_64BIT
printk(" EA:%x", mask_bits(regs, PSW_BASE_BITS));
#endif
printk("\n%s GPRS: " FOURLONG, mode,
regs->gprs[0], regs->gprs[1], regs->gprs[2], regs->gprs[3]);
printk(" " FOURLONG,
regs->gprs[4], regs->gprs[5], regs->gprs[6], regs->gprs[7]);
......@@ -208,41 +221,7 @@ void show_registers(struct pt_regs *regs)
printk(" " FOURLONG,
regs->gprs[12], regs->gprs[13], regs->gprs[14], regs->gprs[15]);
#if 0
/* FIXME: this isn't needed any more but it changes the ksymoops
* input. To remove or not to remove ... */
save_access_regs(regs->acrs);
printk("%s ACRS: %08x %08x %08x %08x\n", mode,
regs->acrs[0], regs->acrs[1], regs->acrs[2], regs->acrs[3]);
printk(" %08x %08x %08x %08x\n",
regs->acrs[4], regs->acrs[5], regs->acrs[6], regs->acrs[7]);
printk(" %08x %08x %08x %08x\n",
regs->acrs[8], regs->acrs[9], regs->acrs[10], regs->acrs[11]);
printk(" %08x %08x %08x %08x\n",
regs->acrs[12], regs->acrs[13], regs->acrs[14], regs->acrs[15]);
#endif
/*
* Print the first 20 byte of the instruction stream at the
* time of the fault.
*/
old_fs = get_fs();
if (regs->psw.mask & PSW_MASK_PSTATE)
set_fs(USER_DS);
else
set_fs(KERNEL_DS);
printk("%s Code: ", mode);
for (i = 0; i < 20; i++) {
unsigned char c;
if (__get_user(c, (char __user *)(regs->psw.addr + i))) {
printk(" Bad PSW.");
break;
}
printk("%02x ", c);
}
set_fs(old_fs);
printk("\n");
show_code(regs);
}
/* This is called from fs/proc/array.c */
......@@ -318,6 +297,11 @@ report_user_fault(long interruption_code, struct pt_regs *regs)
#endif
}
int is_valid_bugaddr(unsigned long addr)
{
return 1;
}
static void __kprobes inline do_trap(long interruption_code, int signr,
char *str, struct pt_regs *regs,
siginfo_t *info)
......@@ -344,9 +328,15 @@ static void __kprobes inline do_trap(long interruption_code, int signr,
fixup = search_exception_tables(regs->psw.addr & PSW_ADDR_INSN);
if (fixup)
regs->psw.addr = fixup->fixup | PSW_ADDR_AMODE;
else
else {
enum bug_trap_type btt;
btt = report_bug(regs->psw.addr & PSW_ADDR_INSN);
if (btt == BUG_TRAP_TYPE_WARN)
return;
die(str, regs, interruption_code);
}
}
}
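/*
 * A hedged sketch of what the generic-bug support above enables
 * (<linux/bug.h> is already included in this file): a WARN_ON() site is
 * looked up in __bug_table by report_bug(), printed, and execution
 * resumes; a BUG() still ends in die(). "demo_check" is hypothetical.
 */
static int demo_check(int value)
{
	WARN_ON(value < 0);	/* decoded as BUG_TRAP_TYPE_WARN, continues */
	BUG_ON(value == -1);	/* unrecoverable: report_bug(), then die() */
	return value;
}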
static inline void __user *get_check_address(struct pt_regs *regs)
......
......@@ -45,6 +45,8 @@ SECTIONS
__ex_table : { *(__ex_table) }
__stop___ex_table = .;
BUG_TABLE
.data : { /* Data */
*(.data)
CONSTRUCTORS
......@@ -77,6 +79,12 @@ SECTIONS
*(.init.text)
_einittext = .;
}
/*
* .exit.text is discarded at runtime, not link time,
* to deal with references from __bug_table
*/
.exit.text : { *(.exit.text) }
.init.data : { *(.init.data) }
. = ALIGN(256);
__setup_start = .;
......@@ -116,7 +124,7 @@ SECTIONS
/* Sections to be discarded */
/DISCARD/ : {
*(.exit.text) *(.exit.data) *(.exitcall.exit)
*(.exit.data) *(.exitcall.exit)
}
/* Stabs debugging sections. */
......
......@@ -128,7 +128,7 @@ static inline void set_vtimer(__u64 expires)
S390_lowcore.last_update_timer = expires;
/* store expire time for this CPU timer */
per_cpu(virt_cpu_timer, smp_processor_id()).to_expire = expires;
__get_cpu_var(virt_cpu_timer).to_expire = expires;
}
#else
static inline void set_vtimer(__u64 expires)
......@@ -137,7 +137,7 @@ static inline void set_vtimer(__u64 expires)
asm volatile ("SPT %0" : : "m" (S390_lowcore.last_update_timer));
/* store expire time for this CPU timer */
per_cpu(virt_cpu_timer, smp_processor_id()).to_expire = expires;
__get_cpu_var(virt_cpu_timer).to_expire = expires;
}
#endif
......@@ -145,7 +145,7 @@ static void start_cpu_timer(void)
{
struct vtimer_queue *vt_list;
vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
vt_list = &__get_cpu_var(virt_cpu_timer);
/* CPU timer interrupt is pending, don't reprogram it */
if (vt_list->idle & 1LL<<63)
......@@ -159,7 +159,7 @@ static void stop_cpu_timer(void)
{
struct vtimer_queue *vt_list;
vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
vt_list = &__get_cpu_var(virt_cpu_timer);
/* nothing to do */
if (list_empty(&vt_list->list)) {
......@@ -219,7 +219,7 @@ static void do_callbacks(struct list_head *cb_list)
if (list_empty(cb_list))
return;
vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
vt_list = &__get_cpu_var(virt_cpu_timer);
list_for_each_entry_safe(event, tmp, cb_list, entry) {
fn = event->function;
......@@ -244,7 +244,6 @@ static void do_callbacks(struct list_head *cb_list)
*/
static void do_cpu_timer_interrupt(__u16 error_code)
{
int cpu;
__u64 next, delta;
struct vtimer_queue *vt_list;
struct vtimer_list *event, *tmp;
......@@ -253,8 +252,7 @@ static void do_cpu_timer_interrupt(__u16 error_code)
struct list_head cb_list;
INIT_LIST_HEAD(&cb_list);
cpu = smp_processor_id();
vt_list = &per_cpu(virt_cpu_timer, cpu);
vt_list = &__get_cpu_var(virt_cpu_timer);
/* walk timer list, fire all expired events */
spin_lock(&vt_list->lock);
......@@ -534,7 +532,7 @@ void init_cpu_vtimer(void)
/* enable cpu timer interrupts */
__ctl_set_bit(0,10);
vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
vt_list = &__get_cpu_var(virt_cpu_timer);
INIT_LIST_HEAD(&vt_list->list);
spin_lock_init(&vt_list->lock);
vt_list->to_expire = 0;
......
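/*
 * A minimal sketch of the cleanup applied throughout the file above: on
 * the local CPU, __get_cpu_var(var) is equivalent to
 * per_cpu(var, smp_processor_id()) but looks the CPU up only once.
 * "demo_counter" is a hypothetical per-CPU variable; the caller must be
 * non-preemptible, as in the interrupt and timer paths above.
 */
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, demo_counter);

static void demo_count_event(void)
{
	__get_cpu_var(demo_counter)++;
}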
......@@ -26,9 +26,9 @@
#include <linux/module.h>
#include <linux/hardirq.h>
#include <linux/kprobes.h>
#include <linux/uaccess.h>
#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include <asm/kdebug.h>
#include <asm/s390_ext.h>
......@@ -63,21 +63,25 @@ int unregister_page_fault_notifier(struct notifier_block *nb)
return atomic_notifier_chain_unregister(&notify_page_fault_chain, nb);
}
static inline int notify_page_fault(enum die_val val, const char *str,
struct pt_regs *regs, long err, int trap, int sig)
static int __kprobes __notify_page_fault(struct pt_regs *regs, long err)
{
struct die_args args = {
.regs = regs,
.str = str,
.err = err,
.trapnr = trap,
.signr = sig
};
return atomic_notifier_call_chain(&notify_page_fault_chain, val, &args);
struct die_args args = { .str = "page fault",
.trapnr = 14,
.signr = SIGSEGV };
args.regs = regs;
args.err = err;
return atomic_notifier_call_chain(&notify_page_fault_chain,
DIE_PAGE_FAULT, &args);
}
static inline int notify_page_fault(struct pt_regs *regs, long err)
{
if (unlikely(kprobe_running()))
return __notify_page_fault(regs, err);
return NOTIFY_DONE;
}
#else
static inline int notify_page_fault(enum die_val val, const char *str,
struct pt_regs *regs, long err, int trap, int sig)
static inline int notify_page_fault(struct pt_regs *regs, long err)
{
return NOTIFY_DONE;
}
......@@ -170,74 +174,127 @@ static void do_sigsegv(struct pt_regs *regs, unsigned long error_code,
force_sig_info(SIGSEGV, &si, current);
}
static void do_no_context(struct pt_regs *regs, unsigned long error_code,
unsigned long address)
{
const struct exception_table_entry *fixup;
/* Are we prepared to handle this kernel fault? */
fixup = search_exception_tables(regs->psw.addr & __FIXUP_MASK);
if (fixup) {
regs->psw.addr = fixup->fixup | PSW_ADDR_AMODE;
return;
}
/*
* Oops. The kernel tried to access some bad page. We'll have to
* terminate things with extreme prejudice.
*/
if (check_space(current) == 0)
printk(KERN_ALERT "Unable to handle kernel pointer dereference"
" at virtual kernel address %p\n", (void *)address);
else
printk(KERN_ALERT "Unable to handle kernel paging request"
" at virtual user address %p\n", (void *)address);
die("Oops", regs, error_code);
do_exit(SIGKILL);
}
static void do_low_address(struct pt_regs *regs, unsigned long error_code)
{
/* Low-address protection hit in kernel mode means
NULL pointer write access in kernel mode. */
if (regs->psw.mask & PSW_MASK_PSTATE) {
/* Low-address protection hit in user mode 'cannot happen'. */
die ("Low-address protection", regs, error_code);
do_exit(SIGKILL);
}
do_no_context(regs, error_code, 0);
}
/*
* We ran out of memory, or some other thing happened to us that made
* us unable to handle the page fault gracefully.
*/
static int do_out_of_memory(struct pt_regs *regs, unsigned long error_code,
unsigned long address)
{
struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm;
up_read(&mm->mmap_sem);
if (is_init(tsk)) {
yield();
down_read(&mm->mmap_sem);
return 1;
}
printk("VM: killing process %s\n", tsk->comm);
if (regs->psw.mask & PSW_MASK_PSTATE)
do_exit(SIGKILL);
do_no_context(regs, error_code, address);
return 0;
}
static void do_sigbus(struct pt_regs *regs, unsigned long error_code,
unsigned long address)
{
struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm;
up_read(&mm->mmap_sem);
/*
* Send a sigbus, regardless of whether we were in kernel
* or user mode.
*/
tsk->thread.prot_addr = address;
tsk->thread.trap_no = error_code;
force_sig(SIGBUS, tsk);
/* Kernel mode? Handle exceptions or die */
if (!(regs->psw.mask & PSW_MASK_PSTATE))
do_no_context(regs, error_code, address);
}
#ifdef CONFIG_S390_EXEC_PROTECT
extern long sys_sigreturn(struct pt_regs *regs);
extern long sys_rt_sigreturn(struct pt_regs *regs);
extern long sys32_sigreturn(struct pt_regs *regs);
extern long sys32_rt_sigreturn(struct pt_regs *regs);
static inline void do_sigreturn(struct mm_struct *mm, struct pt_regs *regs,
int rt)
static int signal_return(struct mm_struct *mm, struct pt_regs *regs,
unsigned long address, unsigned long error_code)
{
u16 instruction;
int rc, compat;
pagefault_disable();
rc = __get_user(instruction, (u16 __user *) regs->psw.addr);
pagefault_enable();
if (rc)
return -EFAULT;
up_read(&mm->mmap_sem);
clear_tsk_thread_flag(current, TIF_SINGLE_STEP);
#ifdef CONFIG_COMPAT
if (test_tsk_thread_flag(current, TIF_31BIT)) {
if (rt)
sys32_rt_sigreturn(regs);
else
compat = test_tsk_thread_flag(current, TIF_31BIT);
if (compat && instruction == 0x0a77)
sys32_sigreturn(regs);
return;
}
#endif /* CONFIG_COMPAT */
if (rt)
sys_rt_sigreturn(regs);
else if (compat && instruction == 0x0aad)
sys32_rt_sigreturn(regs);
else
#endif
if (instruction == 0x0a77)
sys_sigreturn(regs);
return;
}
static int signal_return(struct mm_struct *mm, struct pt_regs *regs,
unsigned long address, unsigned long error_code)
{
pgd_t *pgd;
pmd_t *pmd;
pte_t *pte;
u16 *instruction;
unsigned long pfn, uaddr = regs->psw.addr;
spin_lock(&mm->page_table_lock);
pgd = pgd_offset(mm, uaddr);
if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
goto out_fault;
pmd = pmd_offset(pgd, uaddr);
if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
goto out_fault;
pte = pte_offset_map(pmd_offset(pgd_offset(mm, uaddr), uaddr), uaddr);
if (!pte || !pte_present(*pte))
goto out_fault;
pfn = pte_pfn(*pte);
if (!pfn_valid(pfn))
goto out_fault;
spin_unlock(&mm->page_table_lock);
instruction = (u16 *) ((pfn << PAGE_SHIFT) + (uaddr & (PAGE_SIZE-1)));
if (*instruction == 0x0a77)
do_sigreturn(mm, regs, 0);
else if (*instruction == 0x0aad)
do_sigreturn(mm, regs, 1);
else if (instruction == 0x0aad)
sys_rt_sigreturn(regs);
else {
printk("- XXX - do_exception: task = %s, primary, NO EXEC "
"-> SIGSEGV\n", current->comm);
up_read(&mm->mmap_sem);
current->thread.prot_addr = address;
current->thread.trap_no = error_code;
do_sigsegv(regs, error_code, SEGV_MAPERR, address);
}
return 0;
out_fault:
spin_unlock(&mm->page_table_lock);
return -EFAULT;
}
#endif /* CONFIG_S390_EXEC_PROTECT */
......@@ -253,48 +310,22 @@ static int signal_return(struct mm_struct *mm, struct pt_regs *regs,
* 3b Region third trans. -> Not present (nullification)
*/
static inline void
do_exception(struct pt_regs *regs, unsigned long error_code, int is_protection)
do_exception(struct pt_regs *regs, unsigned long error_code, int write)
{
struct task_struct *tsk;
struct mm_struct *mm;
struct vm_area_struct * vma;
struct vm_area_struct *vma;
unsigned long address;
const struct exception_table_entry *fixup;
int si_code;
int space;
int si_code;
tsk = current;
mm = tsk->mm;
if (notify_page_fault(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
SIGSEGV) == NOTIFY_STOP)
if (notify_page_fault(regs, error_code) == NOTIFY_STOP)
return;
/*
* Check for low-address protection. This needs to be treated
* as a special case because the translation exception code
* field is not guaranteed to contain valid data in this case.
*/
if (is_protection && !(S390_lowcore.trans_exc_code & 4)) {
/* Low-address protection hit in kernel mode means
NULL pointer write access in kernel mode. */
if (!(regs->psw.mask & PSW_MASK_PSTATE)) {
address = 0;
space = 0;
goto no_context;
}
/* Low-address protection hit in user mode 'cannot happen'. */
die ("Low-address protection", regs, error_code);
do_exit(SIGKILL);
}
tsk = current;
mm = tsk->mm;
/*
* get the failing address
* more specific the segment and page table portion of
* the address
*/
/* get the failing address and the affected space */
address = S390_lowcore.trans_exc_code & __FAIL_ADDR_MASK;
space = check_space(tsk);
......@@ -342,7 +373,7 @@ do_exception(struct pt_regs *regs, unsigned long error_code, int is_protection)
*/
good_area:
si_code = SEGV_ACCERR;
if (!is_protection) {
if (!write) {
/* page not present, check vm flags */
if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
goto bad_area;
......@@ -357,7 +388,7 @@ do_exception(struct pt_regs *regs, unsigned long error_code, int is_protection)
* make sure we exit gracefully rather than endlessly redo
* the fault.
*/
switch (handle_mm_fault(mm, vma, address, is_protection)) {
switch (handle_mm_fault(mm, vma, address, write)) {
case VM_FAULT_MINOR:
tsk->min_flt++;
break;
......@@ -365,9 +396,12 @@ do_exception(struct pt_regs *regs, unsigned long error_code, int is_protection)
tsk->maj_flt++;
break;
case VM_FAULT_SIGBUS:
goto do_sigbus;
do_sigbus(regs, error_code, address);
return;
case VM_FAULT_OOM:
goto out_of_memory;
if (do_out_of_memory(regs, error_code, address))
goto survive;
return;
default:
BUG();
}
......@@ -396,64 +430,23 @@ do_exception(struct pt_regs *regs, unsigned long error_code, int is_protection)
}
no_context:
/* Are we prepared to handle this kernel fault? */
fixup = search_exception_tables(regs->psw.addr & __FIXUP_MASK);
if (fixup) {
regs->psw.addr = fixup->fixup | PSW_ADDR_AMODE;
return;
}
/*
* Oops. The kernel tried to access some bad page. We'll have to
* terminate things with extreme prejudice.
*/
if (space == 0)
printk(KERN_ALERT "Unable to handle kernel pointer dereference"
" at virtual kernel address %p\n", (void *)address);
else
printk(KERN_ALERT "Unable to handle kernel paging request"
" at virtual user address %p\n", (void *)address);
die("Oops", regs, error_code);
do_exit(SIGKILL);
/*
* We ran out of memory, or some other thing happened to us that made
* us unable to handle the page fault gracefully.
*/
out_of_memory:
up_read(&mm->mmap_sem);
if (is_init(tsk)) {
yield();
down_read(&mm->mmap_sem);
goto survive;
}
printk("VM: killing process %s\n", tsk->comm);
if (regs->psw.mask & PSW_MASK_PSTATE)
do_exit(SIGKILL);
goto no_context;
do_sigbus:
up_read(&mm->mmap_sem);
/*
* Send a sigbus, regardless of whether we were in kernel
* or user mode.
*/
tsk->thread.prot_addr = address;
tsk->thread.trap_no = error_code;
force_sig(SIGBUS, tsk);
/* Kernel mode? Handle exceptions or die */
if (!(regs->psw.mask & PSW_MASK_PSTATE))
goto no_context;
do_no_context(regs, error_code, address);
}
void __kprobes do_protection_exception(struct pt_regs *regs,
unsigned long error_code)
{
/* Protection exception is suppressing, decrement psw address. */
regs->psw.addr -= (error_code >> 16);
/*
* Check for low-address protection. This needs to be treated
* as a special case because the translation exception code
* field is not guaranteed to contain valid data in this case.
*/
if (unlikely(!(S390_lowcore.trans_exc_code & 4))) {
do_low_address(regs, error_code);
return;
}
do_exception(regs, 4, 1);
}
......
......@@ -398,6 +398,9 @@ dasd_change_state(struct dasd_device *device)
if (device->state == device->target)
wake_up(&dasd_init_waitq);
/* let user-space know that the device status changed */
kobject_uevent(&device->cdev->dev.kobj, KOBJ_CHANGE);
}
/*
......
......@@ -19,6 +19,7 @@
#include <asm/debug.h>
#include <asm/uaccess.h>
#include <asm/ipl.h>
/* This is ugly... */
#define PRINTK_HEADER "dasd_devmap:"
......@@ -133,6 +134,8 @@ dasd_call_setup(char *str)
__setup ("dasd=", dasd_call_setup);
#endif /* #ifndef MODULE */
#define DASD_IPLDEV "ipldev"
/*
* Read a device busid/devno from a string.
*/
......@@ -141,6 +144,20 @@ dasd_busid(char **str, int *id0, int *id1, int *devno)
{
int val, old_style;
/* Interpret ipldev busid */
if (strncmp(DASD_IPLDEV, *str, strlen(DASD_IPLDEV)) == 0) {
if (ipl_info.type != IPL_TYPE_CCW) {
MESSAGE(KERN_ERR, "%s", "ipl device is not a ccw "
"device");
return -EINVAL;
}
*id0 = 0;
*id1 = ipl_info.data.ccw.dev_id.ssid;
*devno = ipl_info.data.ccw.dev_id.devno;
*str += strlen(DASD_IPLDEV);
return 0;
}
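/*
 * Usage note (assumption, not from the patch itself): with this branch a
 * boot parameter such as "dasd=ipldev" resolves to the CCW device the
 * system was IPLed from, and it can be mixed with ordinary bus IDs,
 * e.g. "dasd=ipldev,0.0.4711" (device number hypothetical).
 */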
/* check for leading '0x' */
old_style = 0;
if ((*str)[0] == '0' && (*str)[1] == 'x') {
......@@ -828,6 +845,46 @@ dasd_discipline_show(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR(discipline, 0444, dasd_discipline_show, NULL);
static ssize_t
dasd_device_status_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct dasd_device *device;
ssize_t len;
device = dasd_device_from_cdev(to_ccwdev(dev));
if (!IS_ERR(device)) {
switch (device->state) {
case DASD_STATE_NEW:
len = snprintf(buf, PAGE_SIZE, "new\n");
break;
case DASD_STATE_KNOWN:
len = snprintf(buf, PAGE_SIZE, "detected\n");
break;
case DASD_STATE_BASIC:
len = snprintf(buf, PAGE_SIZE, "basic\n");
break;
case DASD_STATE_UNFMT:
len = snprintf(buf, PAGE_SIZE, "unformatted\n");
break;
case DASD_STATE_READY:
len = snprintf(buf, PAGE_SIZE, "ready\n");
break;
case DASD_STATE_ONLINE:
len = snprintf(buf, PAGE_SIZE, "online\n");
break;
default:
len = snprintf(buf, PAGE_SIZE, "no stat\n");
break;
}
dasd_put_device(device);
} else
len = snprintf(buf, PAGE_SIZE, "unknown\n");
return len;
}
static DEVICE_ATTR(status, 0444, dasd_device_status_show, NULL);
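/*
 * Usage note (assumption): the attribute appears under the ccw device in
 * sysfs, so e.g. "cat /sys/bus/ccw/devices/<busid>/status" returns one
 * of the strings above, such as "online".
 */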
static ssize_t
dasd_alias_show(struct device *dev, struct device_attribute *attr, char *buf)
{
......@@ -939,6 +996,7 @@ static DEVICE_ATTR(eer_enabled, 0644, dasd_eer_show, dasd_eer_store);
static struct attribute * dasd_attrs[] = {
&dev_attr_readonly.attr,
&dev_attr_discipline.attr,
&dev_attr_status.attr,
&dev_attr_alias.attr,
&dev_attr_vendor.attr,
&dev_attr_uid.attr,
......
......@@ -3,7 +3,7 @@
#
obj-y += ctrlchar.o keyboard.o defkeymap.o sclp.o sclp_rw.o sclp_quiesce.o \
sclp_info.o
sclp_info.o sclp_config.o sclp_chp.o
obj-$(CONFIG_TN3270) += raw3270.o
obj-$(CONFIG_TN3270_CONSOLE) += con3270.o
......@@ -29,3 +29,6 @@ obj-$(CONFIG_S390_TAPE_34XX) += tape_34xx.o
obj-$(CONFIG_S390_TAPE_3590) += tape_3590.o
obj-$(CONFIG_MONREADER) += monreader.o
obj-$(CONFIG_MONWRITER) += monwriter.o
zcore_mod-objs := sclp_sdias.o zcore.o
obj-$(CONFIG_ZFCPDUMP) += zcore_mod.o
......@@ -813,12 +813,6 @@ con3215_unblank(void)
spin_unlock_irqrestore(get_ccwdev_lock(raw->cdev), flags);
}
static int __init
con3215_consetup(struct console *co, char *options)
{
return 0;
}
/*
* The console structure for the 3215 console
*/
......@@ -827,7 +821,6 @@ static struct console con3215 = {
.write = con3215_write,
.device = con3215_device,
.unblank = con3215_unblank,
.setup = con3215_consetup,
.flags = CON_PRINTBUFFER,
};
......
......@@ -555,12 +555,6 @@ con3270_unblank(void)
spin_unlock_irqrestore(&cp->view.lock, flags);
}
static int __init
con3270_consetup(struct console *co, char *options)
{
return 0;
}
/*
* The console structure for the 3270 console
*/
......@@ -569,7 +563,6 @@ static struct console con3270 = {
.write = con3270_write,
.device = con3270_device,
.unblank = con3270_unblank,
.setup = con3270_consetup,
.flags = CON_PRINTBUFFER,
};
......
......@@ -15,6 +15,7 @@
#include <linux/timer.h>
#include <linux/reboot.h>
#include <linux/jiffies.h>
#include <linux/init.h>
#include <asm/types.h>
#include <asm/s390_ext.h>
......@@ -510,7 +511,7 @@ sclp_state_change_cb(struct evbuf_header *evbuf)
}
static struct sclp_register sclp_state_change_event = {
.receive_mask = EvTyp_StateChange_Mask,
.receive_mask = EVTYP_STATECHANGE_MASK,
.receiver_fn = sclp_state_change_cb
};
......@@ -930,3 +931,10 @@ sclp_init(void)
sclp_init_mask(1);
return 0;
}
static __init int sclp_initcall(void)
{
return sclp_init();
}
arch_initcall(sclp_initcall);
......@@ -19,33 +19,37 @@
#define MAX_KMEM_PAGES (sizeof(unsigned long) << 3)
#define MAX_CONSOLE_PAGES 4
#define EvTyp_OpCmd 0x01
#define EvTyp_Msg 0x02
#define EvTyp_StateChange 0x08
#define EvTyp_PMsgCmd 0x09
#define EvTyp_CntlProgOpCmd 0x20
#define EvTyp_CntlProgIdent 0x0B
#define EvTyp_SigQuiesce 0x1D
#define EvTyp_VT220Msg 0x1A
#define EvTyp_OpCmd_Mask 0x80000000
#define EvTyp_Msg_Mask 0x40000000
#define EvTyp_StateChange_Mask 0x01000000
#define EvTyp_PMsgCmd_Mask 0x00800000
#define EvTyp_CtlProgOpCmd_Mask 0x00000001
#define EvTyp_CtlProgIdent_Mask 0x00200000
#define EvTyp_SigQuiesce_Mask 0x00000008
#define EvTyp_VT220Msg_Mask 0x00000040
#define GnrlMsgFlgs_DOM 0x8000
#define GnrlMsgFlgs_SndAlrm 0x4000
#define GnrlMsgFlgs_HoldMsg 0x2000
#define LnTpFlgs_CntlText 0x8000
#define LnTpFlgs_LabelText 0x4000
#define LnTpFlgs_DataText 0x2000
#define LnTpFlgs_EndText 0x1000
#define LnTpFlgs_PromptText 0x0800
#define EVTYP_OPCMD 0x01
#define EVTYP_MSG 0x02
#define EVTYP_STATECHANGE 0x08
#define EVTYP_PMSGCMD 0x09
#define EVTYP_CNTLPROGOPCMD 0x20
#define EVTYP_CNTLPROGIDENT 0x0B
#define EVTYP_SIGQUIESCE 0x1D
#define EVTYP_VT220MSG 0x1A
#define EVTYP_CONFMGMDATA 0x04
#define EVTYP_SDIAS 0x1C
#define EVTYP_OPCMD_MASK 0x80000000
#define EVTYP_MSG_MASK 0x40000000
#define EVTYP_STATECHANGE_MASK 0x01000000
#define EVTYP_PMSGCMD_MASK 0x00800000
#define EVTYP_CTLPROGOPCMD_MASK 0x00000001
#define EVTYP_CTLPROGIDENT_MASK 0x00200000
#define EVTYP_SIGQUIESCE_MASK 0x00000008
#define EVTYP_VT220MSG_MASK 0x00000040
#define EVTYP_CONFMGMDATA_MASK 0x10000000
#define EVTYP_SDIAS_MASK 0x00000010
#define GNRLMSGFLGS_DOM 0x8000
#define GNRLMSGFLGS_SNDALRM 0x4000
#define GNRLMSGFLGS_HOLDMSG 0x2000
#define LNTPFLGS_CNTLTEXT 0x8000
#define LNTPFLGS_LABELTEXT 0x4000
#define LNTPFLGS_DATATEXT 0x2000
#define LNTPFLGS_ENDTEXT 0x1000
#define LNTPFLGS_PROMPTTEXT 0x0800
typedef unsigned int sclp_cmdw_t;
......@@ -56,15 +60,15 @@ typedef unsigned int sclp_cmdw_t;
#define SCLP_CMDW_READ_SCP_INFO_FORCED 0x00120001
#define GDS_ID_MDSMU 0x1310
#define GDS_ID_MDSRouteInfo 0x1311
#define GDS_ID_AgUnWrkCorr 0x1549
#define GDS_ID_SNACondReport 0x1532
#define GDS_ID_MDSROUTEINFO 0x1311
#define GDS_ID_AGUNWRKCORR 0x1549
#define GDS_ID_SNACONDREPORT 0x1532
#define GDS_ID_CPMSU 0x1212
#define GDS_ID_RoutTargInstr 0x154D
#define GDS_ID_OpReq 0x8070
#define GDS_ID_TextCmd 0x1320
#define GDS_ID_ROUTTARGINSTR 0x154D
#define GDS_ID_OPREQ 0x8070
#define GDS_ID_TEXTCMD 0x1320
#define GDS_KEY_SelfDefTextMsg 0x31
#define GDS_KEY_SELFDEFTEXTMSG 0x31
typedef u32 sccb_mask_t; /* ATTENTION: assumes 32bit mask !!! */
......
/*
* drivers/s390/char/sclp_chp.c
*
* Copyright IBM Corp. 2007
* Author(s): Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
*/
#include <linux/types.h>
#include <linux/gfp.h>
#include <linux/errno.h>
#include <linux/completion.h>
#include <asm/sclp.h>
#include <asm/chpid.h>
#include "sclp.h"
#define TAG "sclp_chp: "
#define SCLP_CMDW_CONFIGURE_CHANNEL_PATH 0x000f0001
#define SCLP_CMDW_DECONFIGURE_CHANNEL_PATH 0x000e0001
#define SCLP_CMDW_READ_CHANNEL_PATH_INFORMATION 0x00030001
static inline sclp_cmdw_t get_configure_cmdw(struct chp_id chpid)
{
return SCLP_CMDW_CONFIGURE_CHANNEL_PATH | chpid.id << 8;
}
static inline sclp_cmdw_t get_deconfigure_cmdw(struct chp_id chpid)
{
return SCLP_CMDW_DECONFIGURE_CHANNEL_PATH | chpid.id << 8;
}
static void chp_callback(struct sclp_req *req, void *data)
{
struct completion *completion = data;
complete(completion);
}
struct chp_cfg_sccb {
struct sccb_header header;
u8 ccm;
u8 reserved[6];
u8 cssid;
} __attribute__((packed));
struct chp_cfg_data {
struct chp_cfg_sccb sccb;
struct sclp_req req;
struct completion completion;
} __attribute__((packed));
static int do_configure(sclp_cmdw_t cmd)
{
struct chp_cfg_data *data;
int rc;
/* Prepare sccb. */
data = (struct chp_cfg_data *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
if (!data)
return -ENOMEM;
data->sccb.header.length = sizeof(struct chp_cfg_sccb);
data->req.command = cmd;
data->req.sccb = &(data->sccb);
data->req.status = SCLP_REQ_FILLED;
data->req.callback = chp_callback;
data->req.callback_data = &(data->completion);
init_completion(&data->completion);
/* Perform sclp request. */
rc = sclp_add_request(&(data->req));
if (rc)
goto out;
wait_for_completion(&data->completion);
/* Check response. */
if (data->req.status != SCLP_REQ_DONE) {
printk(KERN_WARNING TAG "configure channel-path request failed "
"(status=0x%02x)\n", data->req.status);
rc = -EIO;
goto out;
}
switch (data->sccb.header.response_code) {
case 0x0020:
case 0x0120:
case 0x0440:
case 0x0450:
break;
default:
printk(KERN_WARNING TAG "configure channel-path failed "
"(cmd=0x%08x, response=0x%04x)\n", cmd,
data->sccb.header.response_code);
rc = -EIO;
break;
}
out:
free_page((unsigned long) data);
return rc;
}
/**
* sclp_chp_configure - perform configure channel-path sclp command
* @chpid: channel-path ID
*
* Perform configure channel-path command sclp command for specified chpid.
* Return 0 after command successfully finished, non-zero otherwise.
*/
int sclp_chp_configure(struct chp_id chpid)
{
return do_configure(get_configure_cmdw(chpid));
}
/**
* sclp_chp_deconfigure - perform deconfigure channel-path sclp command
* @chpid: channel-path ID
*
* Perform deconfigure channel-path command sclp command for specified chpid
* and wait for completion. On success return 0. Return non-zero otherwise.
*/
int sclp_chp_deconfigure(struct chp_id chpid)
{
return do_configure(get_deconfigure_cmdw(chpid));
}
struct chp_info_sccb {
struct sccb_header header;
u8 recognized[SCLP_CHP_INFO_MASK_SIZE];
u8 standby[SCLP_CHP_INFO_MASK_SIZE];
u8 configured[SCLP_CHP_INFO_MASK_SIZE];
u8 ccm;
u8 reserved[6];
u8 cssid;
} __attribute__((packed));
struct chp_info_data {
struct chp_info_sccb sccb;
struct sclp_req req;
struct completion completion;
} __attribute__((packed));
/**
* sclp_chp_read_info - perform read channel-path information sclp command
* @info: resulting channel-path information data
*
* Perform read channel-path information sclp command and wait for completion.
* On success, store channel-path information in @info and return 0. Return
* non-zero otherwise.
*/
int sclp_chp_read_info(struct sclp_chp_info *info)
{
struct chp_info_data *data;
int rc;
/* Prepare sccb. */
data = (struct chp_info_data *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
if (!data)
return -ENOMEM;
data->sccb.header.length = sizeof(struct chp_info_sccb);
data->req.command = SCLP_CMDW_READ_CHANNEL_PATH_INFORMATION;
data->req.sccb = &(data->sccb);
data->req.status = SCLP_REQ_FILLED;
data->req.callback = chp_callback;
data->req.callback_data = &(data->completion);
init_completion(&data->completion);
/* Perform sclp request. */
rc = sclp_add_request(&(data->req));
if (rc)
goto out;
wait_for_completion(&data->completion);
/* Check response. */
if (data->req.status != SCLP_REQ_DONE) {
printk(KERN_WARNING TAG "read channel-path info request failed "
"(status=0x%02x)\n", data->req.status);
rc = -EIO;
goto out;
}
if (data->sccb.header.response_code != 0x0010) {
printk(KERN_WARNING TAG "read channel-path info failed "
"(response=0x%04x)\n", data->sccb.header.response_code);
rc = -EIO;
goto out;
}
memcpy(info->recognized, data->sccb.recognized,
SCLP_CHP_INFO_MASK_SIZE);
memcpy(info->standby, data->sccb.standby,
SCLP_CHP_INFO_MASK_SIZE);
memcpy(info->configured, data->sccb.configured,
SCLP_CHP_INFO_MASK_SIZE);
out:
free_page((unsigned long) data);
return rc;
}
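/*
 * A hedged usage sketch for the interfaces above; chp_id_init() is
 * assumed from <asm/chpid.h>, the chpid value is made up, and error
 * handling is trimmed.
 */
static int demo_reconfigure_chp(void)
{
	struct sclp_chp_info info;
	struct chp_id chpid;
	int rc;

	rc = sclp_chp_read_info(&info);
	if (rc)
		return rc;
	chp_id_init(&chpid);	/* css 0 */
	chpid.id = 0x40;	/* hypothetical channel-path ID */
	rc = sclp_chp_configure(chpid);
	if (rc)
		return rc;
	return sclp_chp_deconfigure(chpid);
}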
/*
* drivers/s390/char/sclp_config.c
*
* Copyright IBM Corp. 2007
* Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
*/
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/cpu.h>
#include <linux/sysdev.h>
#include <linux/workqueue.h>
#include "sclp.h"
#define TAG "sclp_config: "
struct conf_mgm_data {
u8 reserved;
u8 ev_qualifier;
} __attribute__((packed));
#define EV_QUAL_CAP_CHANGE 3
static struct work_struct sclp_cpu_capability_work;
static void sclp_cpu_capability_notify(struct work_struct *work)
{
int cpu;
struct sys_device *sysdev;
printk(KERN_WARNING TAG "cpu capability changed.\n");
lock_cpu_hotplug();
for_each_online_cpu(cpu) {
sysdev = get_cpu_sysdev(cpu);
kobject_uevent(&sysdev->kobj, KOBJ_CHANGE);
}
unlock_cpu_hotplug();
}
static void sclp_conf_receiver_fn(struct evbuf_header *evbuf)
{
struct conf_mgm_data *cdata;
cdata = (struct conf_mgm_data *)(evbuf + 1);
if (cdata->ev_qualifier == EV_QUAL_CAP_CHANGE)
schedule_work(&sclp_cpu_capability_work);
}
static struct sclp_register sclp_conf_register =
{
.receive_mask = EVTYP_CONFMGMDATA_MASK,
.receiver_fn = sclp_conf_receiver_fn,
};
static int __init sclp_conf_init(void)
{
int rc;
INIT_WORK(&sclp_cpu_capability_work, sclp_cpu_capability_notify);
rc = sclp_register(&sclp_conf_register);
if (rc) {
printk(KERN_ERR TAG "failed to register (%d).\n", rc);
return rc;
}
if (!(sclp_conf_register.sclp_receive_mask & EVTYP_CONFMGMDATA_MASK)) {
printk(KERN_WARNING TAG "no configuration management.\n");
sclp_unregister(&sclp_conf_register);
rc = -ENOSYS;
}
return rc;
}
__initcall(sclp_conf_init);
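/*
 * The registration pattern above generalizes; a minimal sketch with
 * hypothetical "demo_" names: ask for an event mask, then verify that the
 * service element actually granted it.
 */
static void demo_receiver_fn(struct evbuf_header *evbuf)
{
	/* called for each received event buffer of the requested type */
}

static struct sclp_register demo_register = {
	.receive_mask = EVTYP_STATECHANGE_MASK,
	.receiver_fn = demo_receiver_fn,
};

static int __init demo_init(void)
{
	int rc;

	rc = sclp_register(&demo_register);
	if (rc)
		return rc;
	if (!(demo_register.sclp_receive_mask & EVTYP_STATECHANGE_MASK)) {
		sclp_unregister(&demo_register);
		return -ENOSYS;
	}
	return 0;
}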
......@@ -46,7 +46,7 @@ struct cpi_sccb {
/* Event type structure for write message and write priority message */
static struct sclp_register sclp_cpi_event =
{
.send_mask = EvTyp_CtlProgIdent_Mask
.send_mask = EVTYP_CTLPROGIDENT_MASK
};
MODULE_LICENSE("GPL");
......@@ -201,7 +201,7 @@ cpi_module_init(void)
"console.\n");
return -EINVAL;
}
if (!(sclp_cpi_event.sclp_send_mask & EvTyp_CtlProgIdent_Mask)) {
if (!(sclp_cpi_event.sclp_send_mask & EVTYP_CTLPROGIDENT_MASK)) {
printk(KERN_WARNING "cpi: no control program identification "
"support\n");
sclp_unregister(&sclp_cpi_event);
......
......@@ -43,7 +43,7 @@ sclp_quiesce_handler(struct evbuf_header *evbuf)
}
static struct sclp_register sclp_quiesce_event = {
.receive_mask = EvTyp_SigQuiesce_Mask,
.receive_mask = EVTYP_SIGQUIESCE_MASK,
.receiver_fn = sclp_quiesce_handler
};
......
......@@ -30,7 +30,7 @@
/* Event type structure for write message and write priority message */
static struct sclp_register sclp_rw_event = {
.send_mask = EvTyp_Msg_Mask | EvTyp_PMsgCmd_Mask
.send_mask = EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK
};
/*
......@@ -64,7 +64,7 @@ sclp_make_buffer(void *page, unsigned short columns, unsigned short htab)
memset(sccb, 0, sizeof(struct write_sccb));
sccb->header.length = sizeof(struct write_sccb);
sccb->msg_buf.header.length = sizeof(struct msg_buf);
sccb->msg_buf.header.type = EvTyp_Msg;
sccb->msg_buf.header.type = EVTYP_MSG;
sccb->msg_buf.mdb.header.length = sizeof(struct mdb);
sccb->msg_buf.mdb.header.type = 1;
sccb->msg_buf.mdb.header.tag = 0xD4C4C240; /* ebcdic "MDB " */
......@@ -114,7 +114,7 @@ sclp_initialize_mto(struct sclp_buffer *buffer, int max_len)
memset(mto, 0, sizeof(struct mto));
mto->length = sizeof(struct mto);
mto->type = 4; /* message text object */
mto->line_type_flags = LnTpFlgs_EndText; /* end text */
mto->line_type_flags = LNTPFLGS_ENDTEXT; /* end text */
/* set pointer to first byte after struct mto. */
buffer->current_line = (char *) (mto + 1);
......@@ -215,7 +215,7 @@ sclp_write(struct sclp_buffer *buffer, const unsigned char *msg, int count)
case '\a': /* bell, one for several times */
/* set SCLP sound alarm bit in General Object */
buffer->sccb->msg_buf.mdb.go.general_msg_flags |=
GnrlMsgFlgs_SndAlrm;
GNRLMSGFLGS_SNDALRM;
break;
case '\t': /* horizontal tabulator */
/* check if new mto needs to be created */
......@@ -452,12 +452,12 @@ sclp_emit_buffer(struct sclp_buffer *buffer,
return -EIO;
sccb = buffer->sccb;
if (sclp_rw_event.sclp_send_mask & EvTyp_Msg_Mask)
if (sclp_rw_event.sclp_send_mask & EVTYP_MSG_MASK)
/* Use normal write message */
sccb->msg_buf.header.type = EvTyp_Msg;
else if (sclp_rw_event.sclp_send_mask & EvTyp_PMsgCmd_Mask)
sccb->msg_buf.header.type = EVTYP_MSG;
else if (sclp_rw_event.sclp_send_mask & EVTYP_PMSGCMD_MASK)
/* Use write priority message */
sccb->msg_buf.header.type = EvTyp_PMsgCmd;
sccb->msg_buf.header.type = EVTYP_PMSGCMD;
else
return -ENOSYS;
buffer->request.command = SCLP_CMDW_WRITE_EVENT_DATA;
......
......@@ -648,7 +648,7 @@ sclp_eval_textcmd(struct gds_subvector *start,
subvec = start;
while (subvec < end) {
subvec = find_gds_subvector(subvec, end,
GDS_KEY_SelfDefTextMsg);
GDS_KEY_SELFDEFTEXTMSG);
if (!subvec)
break;
sclp_eval_selfdeftextmsg((struct gds_subvector *)(subvec + 1),
......@@ -664,7 +664,7 @@ sclp_eval_cpmsu(struct gds_vector *start, struct gds_vector *end)
vec = start;
while (vec < end) {
vec = find_gds_vector(vec, end, GDS_ID_TextCmd);
vec = find_gds_vector(vec, end, GDS_ID_TEXTCMD);
if (!vec)
break;
sclp_eval_textcmd((struct gds_subvector *)(vec + 1),
......@@ -703,7 +703,7 @@ sclp_tty_state_change(struct sclp_register *reg)
static struct sclp_register sclp_input_event =
{
.receive_mask = EvTyp_OpCmd_Mask | EvTyp_PMsgCmd_Mask,
.receive_mask = EVTYP_OPCMD_MASK | EVTYP_PMSGCMD_MASK,
.state_change_fn = sclp_tty_state_change,
.receiver_fn = sclp_tty_receiver
};
......
......@@ -99,8 +99,8 @@ static void sclp_vt220_emit_current(void);
/* Registration structure for our interest in SCLP event buffers */
static struct sclp_register sclp_vt220_register = {
.send_mask = EvTyp_VT220Msg_Mask,
.receive_mask = EvTyp_VT220Msg_Mask,
.send_mask = EVTYP_VT220MSG_MASK,
.receive_mask = EVTYP_VT220MSG_MASK,
.state_change_fn = NULL,
.receiver_fn = sclp_vt220_receiver_fn
};
......@@ -202,7 +202,7 @@ sclp_vt220_callback(struct sclp_req *request, void *data)
static int
__sclp_vt220_emit(struct sclp_vt220_request *request)
{
if (!(sclp_vt220_register.sclp_send_mask & EvTyp_VT220Msg_Mask)) {
if (!(sclp_vt220_register.sclp_send_mask & EVTYP_VT220MSG_MASK)) {
request->sclp_req.status = SCLP_REQ_FAILED;
return -EIO;
}
......@@ -284,7 +284,7 @@ sclp_vt220_initialize_page(void *page)
sccb->header.length = sizeof(struct sclp_vt220_sccb);
sccb->header.function_code = SCLP_NORMAL_WRITE;
sccb->header.response_code = 0x0000;
sccb->evbuf.type = EvTyp_VT220Msg;
sccb->evbuf.type = EVTYP_VT220MSG;
sccb->evbuf.length = sizeof(struct evbuf_header);
return request;
......
......@@ -125,7 +125,7 @@ static struct vmlogrdr_priv_t sys_ser[] = {
.recording_name = "EREP",
.minor_num = 0,
.buffer_free = 1,
.priv_lock = SPIN_LOCK_UNLOCKED,
.priv_lock = __SPIN_LOCK_UNLOCKED(sys_ser[0].priv_lock),
.autorecording = 1,
.autopurge = 1,
},
......@@ -134,7 +134,7 @@ static struct vmlogrdr_priv_t sys_ser[] = {
.recording_name = "ACCOUNT",
.minor_num = 1,
.buffer_free = 1,
.priv_lock = SPIN_LOCK_UNLOCKED,
.priv_lock = __SPIN_LOCK_UNLOCKED(sys_ser[1].priv_lock),
.autorecording = 1,
.autopurge = 1,
},
......@@ -143,7 +143,7 @@ static struct vmlogrdr_priv_t sys_ser[] = {
.recording_name = "SYMPTOM",
.minor_num = 2,
.buffer_free = 1,
.priv_lock = SPIN_LOCK_UNLOCKED,
.priv_lock = __SPIN_LOCK_UNLOCKED(sys_ser[2].priv_lock),
.autorecording = 1,
.autopurge = 1,
}
......@@ -385,6 +385,9 @@ static int vmlogrdr_release (struct inode *inode, struct file *filp)
struct vmlogrdr_priv_t * logptr = filp->private_data;
iucv_path_sever(logptr->path, NULL);
kfree(logptr->path);
logptr->path = NULL;
if (logptr->autorecording) {
ret = vmlogrdr_recording(logptr,0,logptr->autopurge);
if (ret)
......
......@@ -2,7 +2,7 @@
# Makefile for the S/390 common i/o drivers
#
obj-y += airq.o blacklist.o chsc.o cio.o css.o
obj-y += airq.o blacklist.o chsc.o cio.o css.o chp.o idset.o
ccw_device-objs += device.o device_fsm.o device_ops.o
ccw_device-objs += device_id.o device_pgid.o device_status.o
obj-y += ccw_device.o cmf.o
......
......@@ -75,8 +75,10 @@ static void ccwgroup_ungroup_callback(struct device *dev)
{
struct ccwgroup_device *gdev = to_ccwgroupdev(dev);
mutex_lock(&gdev->reg_mutex);
__ccwgroup_remove_symlinks(gdev);
device_unregister(dev);
mutex_unlock(&gdev->reg_mutex);
}
static ssize_t
......@@ -173,7 +175,8 @@ ccwgroup_create(struct device *root,
return -ENOMEM;
atomic_set(&gdev->onoff, 0);
mutex_init(&gdev->reg_mutex);
mutex_lock(&gdev->reg_mutex);
for (i = 0; i < argc; i++) {
gdev->cdev[i] = get_ccwdev_by_busid(cdrv, argv[i]);
......@@ -183,12 +186,12 @@ ccwgroup_create(struct device *root,
|| gdev->cdev[i]->id.driver_info !=
gdev->cdev[0]->id.driver_info) {
rc = -EINVAL;
goto free_dev;
goto error;
}
/* Don't allow a device to belong to more than one group. */
if (gdev->cdev[i]->dev.driver_data) {
rc = -EINVAL;
goto free_dev;
goto error;
}
gdev->cdev[i]->dev.driver_data = gdev;
}
......@@ -203,9 +206,8 @@ ccwgroup_create(struct device *root,
gdev->cdev[0]->dev.bus_id);
rc = device_register(&gdev->dev);
if (rc)
goto free_dev;
goto error;
get_device(&gdev->dev);
rc = device_create_file(&gdev->dev, &dev_attr_ungroup);
......@@ -216,27 +218,21 @@ ccwgroup_create(struct device *root,
rc = __ccwgroup_create_symlinks(gdev);
if (!rc) {
mutex_unlock(&gdev->reg_mutex);
put_device(&gdev->dev);
return 0;
}
device_remove_file(&gdev->dev, &dev_attr_ungroup);
device_unregister(&gdev->dev);
error:
for (i = 0; i < argc; i++)
if (gdev->cdev[i]) {
put_device(&gdev->cdev[i]->dev);
gdev->cdev[i]->dev.driver_data = NULL;
}
put_device(&gdev->dev);
return rc;
free_dev:
for (i = 0; i < argc; i++)
if (gdev->cdev[i]) {
if (gdev->cdev[i]->dev.driver_data == gdev)
gdev->cdev[i]->dev.driver_data = NULL;
put_device(&gdev->cdev[i]->dev);
}
kfree(gdev);
mutex_unlock(&gdev->reg_mutex);
put_device(&gdev->dev);
return rc;
}
......@@ -422,8 +418,12 @@ ccwgroup_driver_unregister (struct ccwgroup_driver *cdriver)
get_driver(&cdriver->driver);
while ((dev = driver_find_device(&cdriver->driver, NULL, NULL,
__ccwgroup_match_all))) {
__ccwgroup_remove_symlinks(to_ccwgroupdev(dev));
struct ccwgroup_device *gdev = to_ccwgroupdev(dev);
mutex_lock(&gdev->reg_mutex);
__ccwgroup_remove_symlinks(gdev);
device_unregister(dev);
mutex_unlock(&gdev->reg_mutex);
put_device(dev);
}
put_driver(&cdriver->driver);
......@@ -444,8 +444,10 @@ __ccwgroup_get_gdev_by_cdev(struct ccw_device *cdev)
if (cdev->dev.driver_data) {
gdev = (struct ccwgroup_device *)cdev->dev.driver_data;
if (get_device(&gdev->dev)) {
mutex_lock(&gdev->reg_mutex);
if (device_is_registered(&gdev->dev))
return gdev;
mutex_unlock(&gdev->reg_mutex);
put_device(&gdev->dev);
}
return NULL;
......@@ -465,6 +467,7 @@ ccwgroup_remove_ccwdev(struct ccw_device *cdev)
if (gdev) {
__ccwgroup_remove_symlinks(gdev);
device_unregister(&gdev->dev);
mutex_unlock(&gdev->reg_mutex);
put_device(&gdev->dev);
}
}
......
......@@ -476,7 +476,7 @@ struct cmb_area {
};
static struct cmb_area cmb_area = {
.lock = SPIN_LOCK_UNLOCKED,
.lock = __SPIN_LOCK_UNLOCKED(cmb_area.lock),
.list = LIST_HEAD_INIT(cmb_area.list),
.num_channels = 1024,
};
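/*
 * A minimal sketch of the initializer swap above: the lockdep-aware
 * __SPIN_LOCK_UNLOCKED() needs the lock variable's name, which the old
 * global SPIN_LOCK_UNLOCKED constant could not supply. "demo_area" is
 * hypothetical.
 */
static struct {
	spinlock_t lock;
	struct list_head list;
} demo_area = {
	.lock = __SPIN_LOCK_UNLOCKED(demo_area.lock),
	.list = LIST_HEAD_INIT(demo_area.list),
};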
......
......@@ -11,6 +11,7 @@ struct ccwgroup_device {
CCWGROUP_ONLINE,
} state;
atomic_t onoff;
struct mutex reg_mutex;
unsigned int count; /* number of attached slave devices */
struct device dev; /* master device */
struct ccw_device *cdev[0]; /* variable number, allocate as needed */
......