Commit 0444fa78 authored by Linus Torvalds

Merge branch 'for-linus' of git://git390.osdl.marist.edu/pub/scm/linux-2.6

* 'for-linus' of git://git390.osdl.marist.edu/pub/scm/linux-2.6: (56 commits)
  [S390] replace lock_cpu_hotplug with get_online_cpus
  [S390] usage of s390dbf: shrink number of debug areas to use.
  [S390] constify function pointer tables.
  [S390] do local_irq_restore while spinning in spin_lock_irqsave.
  [S390] add smp_call_function_mask
  [S390] dasd: fix loop in request expiration handling
  [S390] Unused field / extern declaration in processor.h
  [S390] Remove TOPDIR from Makefile
  [S390] dasd: add hyper PAV support to DASD device driver, part 1
  [S390] single-step cleanup
  [S390] Move NOTES and BUG_TABLE.
  [S390] drivers/s390/: Spelling fixes
  [S390] include/asm-s390/: Spelling fixes
  [S390] arch/s390/: Spelling fixes
  [S390] Use diag308 subcodes 3 and 6 for reboot and dump when possible.
  [S390] vmemmap: allocate struct pages before 1:1 mapping
  [S390] Initialize sclp_ipl_info
  [S390] Allocate and free cpu lowcores and stacks when needed/possible.
  [S390] use LIST_HEAD instead of LIST_HEAD_INIT
  [S390] Load disabled wait psw instead of stopping cpu on halt.
  ...
......
@@ -116,6 +116,7 @@
!Iinclude/asm-s390/ccwdev.h
!Edrivers/s390/cio/device.c
!Edrivers/s390/cio/device_ops.c
!Edrivers/s390/cio/airq.c
</sect1>
<sect1 id="cmf">
<title>The channel-measurement facility</title>
......
......
@@ -50,7 +50,7 @@ additional_cpus=n (*) Use this to limit hotpluggable cpus. This option sets
cpu_possible_map = cpu_present_map + additional_cpus
(*) Option valid only for following architectures
- x86_64, ia64, s390
- x86_64, ia64
ia64 and x86_64 use the number of disabled local apics in ACPI tables MADT
to determine the number of potentially hot-pluggable cpus. The implementation
......
......
@@ -370,7 +370,8 @@ and is between 256 and 4096 characters. It is defined in the file
configured. Potentially dangerous and should only be
used if you are entirely sure of the consequences.
chandev= [HW,NET] Generic channel device initialisation
ccw_timeout_log [S390]
See Documentation/s390/CommonIO for details.
checkreqprot [SELINUX] Set initial checkreqprot flag value.
Format: { "0" | "1" }
......
@@ -382,6 +383,12 @@ and is between 256 and 4096 characters. It is defined in the file
Value can be changed at runtime via
/selinux/checkreqprot.
cio_ignore= [S390]
See Documentation/s390/CommonIO for details.
cio_msg= [S390]
See Documentation/s390/CommonIO for details.
clock= [BUGS=X86-32, HW] gettimeofday clocksource override.
[Deprecated]
Forces specified clocksource (if available) to be used
......
......
@@ -4,6 +4,11 @@ S/390 common I/O-Layer - command line parameters, procfs and debugfs entries
Command line parameters
-----------------------
* ccw_timeout_log
Enable logging of debug information in case of ccw device timeouts.
* cio_msg = yes | no
Determines whether information on found devices and sensed device
......
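As a usage sketch, the parameters documented above would appear on the kernel command line like this (treat the combination as a hedged example, not a recommendation):

```
ccw_timeout_log cio_msg=no
```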
......
@@ -276,9 +276,6 @@ source "kernel/Kconfig.preempt"
source "mm/Kconfig"
config HOLES_IN_ZONE
def_bool y
comment "I/O subsystem configuration"
config MACHCHK_WARNING
......
config CRYPTO_SHA1_S390
tristate "SHA1 digest algorithm"
depends on S390
select CRYPTO_ALGAPI
help
This is the s390 hardware accelerated implementation of the
SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
config CRYPTO_SHA256_S390
tristate "SHA256 digest algorithm"
depends on S390
select CRYPTO_ALGAPI
help
This is the s390 hardware accelerated implementation of the
SHA256 secure hash standard (DFIPS 180-2).
This version of SHA implements a 256 bit hash with 128 bits of
security against collision attacks.
config CRYPTO_DES_S390
tristate "DES and Triple DES cipher algorithms"
depends on S390
select CRYPTO_ALGAPI
select CRYPTO_BLKCIPHER
help
This is the s390 hardware accelerated implementation of the
DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
config CRYPTO_AES_S390
tristate "AES cipher algorithms"
depends on S390
select CRYPTO_ALGAPI
select CRYPTO_BLKCIPHER
help
This is the s390 hardware accelerated implementation of the
AES cipher algorithms (FIPS-197). AES uses the Rijndael
algorithm.
Rijndael appears to be consistently a very good performer in
both hardware and software across a wide range of computing
environments regardless of its use in feedback or non-feedback
modes. Its key setup time is excellent, and its key agility is
good. Rijndael's very low memory requirements make it very well
suited for restricted-space environments, in which it also
demonstrates excellent performance. Rijndael's operations are
among the easiest to defend against power and timing attacks.
On s390 the System z9-109 currently only supports the key size
of 128 bit.
config S390_PRNG
tristate "Pseudo random number generator device driver"
depends on S390
default "m"
help
Select this option if you want to use the s390 pseudo random number
generator. The PRNG is part of the cryptographic processor functions
and uses triple-DES to generate secure random numbers like the
ANSI X9.17 standard. The PRNG is usable via the char device
/dev/prandom.
......
@@ -516,7 +516,7 @@ static int __init aes_init(void)
/* z9 109 and z9 BC/EC only support 128 bit key length */
if (keylen_flag == AES_KEYLEN_128)
printk(KERN_INFO
"aes_s390: hardware acceleration only available for"
"aes_s390: hardware acceleration only available for "
"128 bit keys\n");
ret = crypto_register_alg(&aes_alg);
......
......
@@ -90,7 +90,7 @@ static ssize_t prng_read(struct file *file, char __user *ubuf, size_t nbytes,
int ret = 0;
int tmp;
/* nbytes can be arbitrary long, we spilt it into chunks */
/* nbytes can be arbitrary length, we split it into chunks */
while (nbytes) {
/* same as in extract_entropy_user in random.c */
if (need_resched()) {
......
@@ -146,7 +146,7 @@ static ssize_t prng_read(struct file *file, char __user *ubuf, size_t nbytes,
return ret;
}
static struct file_operations prng_fops = {
static const struct file_operations prng_fops = {
.owner = THIS_MODULE,
.open = &prng_open,
.release = NULL,
......
......
@@ -31,7 +31,3 @@ S390_KEXEC_OBJS := machine_kexec.o crash.o
S390_KEXEC_OBJS += $(if $(CONFIG_64BIT),relocate_kernel64.o,relocate_kernel.o)
obj-$(CONFIG_KEXEC) += $(S390_KEXEC_OBJS)
#
# This is just to get the dependencies...
#
binfmt_elf32.o: $(TOPDIR)/fs/binfmt_elf.c
......
@@ -276,7 +276,7 @@ void __init startup_init(void)
create_kernel_nss();
sort_main_extable();
setup_lowcore_early();
sclp_readinfo_early();
sclp_read_info_early();
sclp_facilities_detect();
memsize = sclp_memory_detect();
#ifndef CONFIG_64BIT
......
......
@@ -157,7 +157,7 @@ startup_continue:
.long 0xb2b10000 # store facility list
tm 0xc8,0x08 # check bit for clearing-by-ASCE
bno 0f-.LPG1(%r13)
lhi %r1,2094
lhi %r1,2048
lhi %r2,0
.long 0xb98e2001
oi 7(%r12),0x80 # set IDTE flag
......
This diff is collapsed.
......
@@ -36,7 +36,7 @@
#include <linux/init.h>
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/utsname.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include <asm/system.h>
......
@@ -182,13 +182,15 @@ void cpu_idle(void)
void show_regs(struct pt_regs *regs)
{
struct task_struct *tsk = current;
printk("CPU: %d %s\n", task_thread_info(tsk)->cpu, print_tainted());
printk("Process %s (pid: %d, task: %p, ksp: %p)\n",
current->comm, task_pid_nr(current), (void *) tsk,
(void *) tsk->thread.ksp);
print_modules();
printk("CPU: %d %s %s %.*s\n",
task_thread_info(current)->cpu, print_tainted(),
init_utsname()->release,
(int)strcspn(init_utsname()->version, " "),
init_utsname()->version);
printk("Process %s (pid: %d, task: %p, ksp: %p)\n",
current->comm, current->pid, current,
(void *) current->thread.ksp);
show_registers(regs);
/* Show stack backtrace if pt_regs is from kernel mode */
if (!(regs->psw.mask & PSW_MASK_PSTATE))
......
......
@@ -86,13 +86,13 @@ FixPerRegisters(struct task_struct *task)
per_info->control_regs.bits.storage_alt_space_ctl = 0;
}
static void set_single_step(struct task_struct *task)
void user_enable_single_step(struct task_struct *task)
{
task->thread.per_info.single_step = 1;
FixPerRegisters(task);
}
static void clear_single_step(struct task_struct *task)
void user_disable_single_step(struct task_struct *task)
{
task->thread.per_info.single_step = 0;
FixPerRegisters(task);
......
@@ -107,7 +107,7 @@ void
ptrace_disable(struct task_struct *child)
{
/* make sure the single step bit is not set. */
clear_single_step(child);
user_disable_single_step(child);
}
#ifndef CONFIG_64BIT
......
@@ -651,7 +651,7 @@ do_ptrace(struct task_struct *child, long request, long addr, long data)
clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
child->exit_code = data;
/* make sure the single step bit is not set. */
clear_single_step(child);
user_disable_single_step(child);
wake_up_process(child);
return 0;
......
@@ -665,7 +665,7 @@ do_ptrace(struct task_struct *child, long request, long addr, long data)
return 0;
child->exit_code = SIGKILL;
/* make sure the single step bit is not set. */
clear_single_step(child);
user_disable_single_step(child);
wake_up_process(child);
return 0;
......
@@ -675,10 +675,7 @@ do_ptrace(struct task_struct *child, long request, long addr, long data)
return -EIO;
clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
child->exit_code = data;
if (data)
set_tsk_thread_flag(child, TIF_SINGLE_STEP);
else
set_single_step(child);
user_enable_single_step(child);
/* give it a chance to run. */
wake_up_process(child);
return 0;
......
......
@@ -125,75 +125,6 @@ void __cpuinit cpu_init(void)
enter_lazy_tlb(&init_mm, current);
}
/*
* VM halt and poweroff setup routines
*/
char vmhalt_cmd[128] = "";
char vmpoff_cmd[128] = "";
static char vmpanic_cmd[128] = "";
static void strncpy_skip_quote(char *dst, char *src, int n)
{
int sx, dx;
dx = 0;
for (sx = 0; src[sx] != 0; sx++) {
if (src[sx] == '"') continue;
dst[dx++] = src[sx];
if (dx >= n) break;
}
}
static int __init vmhalt_setup(char *str)
{
strncpy_skip_quote(vmhalt_cmd, str, 127);
vmhalt_cmd[127] = 0;
return 1;
}
__setup("vmhalt=", vmhalt_setup);
static int __init vmpoff_setup(char *str)
{
strncpy_skip_quote(vmpoff_cmd, str, 127);
vmpoff_cmd[127] = 0;
return 1;
}
__setup("vmpoff=", vmpoff_setup);
static int vmpanic_notify(struct notifier_block *self, unsigned long event,
void *data)
{
if (MACHINE_IS_VM && strlen(vmpanic_cmd) > 0)
cpcmd(vmpanic_cmd, NULL, 0, NULL);
return NOTIFY_OK;
}
#define PANIC_PRI_VMPANIC 0
static struct notifier_block vmpanic_nb = {
.notifier_call = vmpanic_notify,
.priority = PANIC_PRI_VMPANIC
};
static int __init vmpanic_setup(char *str)
{
static int register_done __initdata = 0;
strncpy_skip_quote(vmpanic_cmd, str, 127);
vmpanic_cmd[127] = 0;
if (!register_done) {
register_done = 1;
atomic_notifier_chain_register(&panic_notifier_list,
&vmpanic_nb);
}
return 1;
}
__setup("vmpanic=", vmpanic_setup);
/*
* condev= and conmode= setup parameter.
*/
......
@@ -308,38 +239,6 @@ static void __init setup_zfcpdump(unsigned int console_devno)
static inline void setup_zfcpdump(unsigned int console_devno) {}
#endif /* CONFIG_ZFCPDUMP */
#ifdef CONFIG_SMP
void (*_machine_restart)(char *command) = machine_restart_smp;
void (*_machine_halt)(void) = machine_halt_smp;
void (*_machine_power_off)(void) = machine_power_off_smp;
#else
/*
* Reboot, halt and power_off routines for non SMP.
*/
static void do_machine_restart_nonsmp(char * __unused)
{
do_reipl();
}
static void do_machine_halt_nonsmp(void)
{
if (MACHINE_IS_VM && strlen(vmhalt_cmd) > 0)
__cpcmd(vmhalt_cmd, NULL, 0, NULL);
signal_processor(smp_processor_id(), sigp_stop_and_store_status);
}
static void do_machine_power_off_nonsmp(void)
{
if (MACHINE_IS_VM && strlen(vmpoff_cmd) > 0)
__cpcmd(vmpoff_cmd, NULL, 0, NULL);
signal_processor(smp_processor_id(), sigp_stop_and_store_status);
}
void (*_machine_restart)(char *command) = do_machine_restart_nonsmp;
void (*_machine_halt)(void) = do_machine_halt_nonsmp;
void (*_machine_power_off)(void) = do_machine_power_off_nonsmp;
#endif
/*
* Reboot, halt and power_off stubs. They just call _machine_restart,
* _machine_halt or _machine_power_off.
......
@@ -559,7 +458,9 @@ setup_resources(void)
data_resource.start = (unsigned long) &_etext;
data_resource.end = (unsigned long) &_edata - 1;
for (i = 0; i < MEMORY_CHUNKS && memory_chunk[i].size > 0; i++) {
for (i = 0; i < MEMORY_CHUNKS; i++) {
if (!memory_chunk[i].size)
continue;
res = alloc_bootmem_low(sizeof(struct resource));
res->flags = IORESOURCE_BUSY | IORESOURCE_MEM;
switch (memory_chunk[i].type) {
......
@@ -617,7 +518,7 @@ EXPORT_SYMBOL_GPL(real_memory_size);
static void __init setup_memory_end(void)
{
unsigned long memory_size;
unsigned long max_mem, max_phys;
unsigned long max_mem;
int i;
#if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE)
......
@@ -625,10 +526,31 @@ static void __init setup_memory_end(void)
memory_end = ZFCPDUMP_HSA_SIZE;
#endif
memory_size = 0;
max_phys = VMALLOC_END_INIT - VMALLOC_MIN_SIZE;
memory_end &= PAGE_MASK;
max_mem = memory_end ? min(max_phys, memory_end) : max_phys;
max_mem = memory_end ? min(VMALLOC_START, memory_end) : VMALLOC_START;
memory_end = min(max_mem, memory_end);
/*
* Make sure all chunks are MAX_ORDER aligned so we don't need the
* extra checks that HOLES_IN_ZONE would require.
*/
for (i = 0; i < MEMORY_CHUNKS; i++) {
unsigned long start, end;
struct mem_chunk *chunk;
unsigned long align;
chunk = &memory_chunk[i];
align = 1UL << (MAX_ORDER + PAGE_SHIFT - 1);
start = (chunk->addr + align - 1) & ~(align - 1);
end = (chunk->addr + chunk->size) & ~(align - 1);
if (start >= end)
memset(chunk, 0, sizeof(*chunk));
else {
chunk->addr = start;
chunk->size = end - start;
}
}
for (i = 0; i < MEMORY_CHUNKS; i++) {
struct mem_chunk *chunk = &memory_chunk[i];
......
@@ -890,7 +812,7 @@ setup_arch(char **cmdline_p)
parse_early_param();
setup_ipl_info();
setup_ipl();
setup_memory_end();
setup_addressing_mode();
setup_memory();
......
@@ -899,7 +821,6 @@ setup_arch(char **cmdline_p)
cpu_init();
__cpu_logical_map[0] = S390_lowcore.cpu_data.cpu_addr;
smp_setup_cpu_possible_map();
/*
* Setup capabilities (ELF_HWCAP & ELF_PLATFORM).
......
@@ -920,7 +841,7 @@ setup_arch(char **cmdline_p)
void __cpuinit print_cpu_info(struct cpuinfo_S390 *cpuinfo)
{
printk("cpu %d "
printk(KERN_INFO "cpu %d "
#ifdef CONFIG_SMP
"phys_idx=%d "
#endif
......
@@ -996,7 +917,7 @@ static void *c_next(struct seq_file *m, void *v, loff_t *pos)
static void c_stop(struct seq_file *m, void *v)
{
}
struct seq_operations cpuinfo_op = {
const struct seq_operations cpuinfo_op = {
.start = c_start,
.next = c_next,
.stop = c_stop,
......
......
@@ -471,6 +471,7 @@ void do_signal(struct pt_regs *regs)
if (signr > 0) {
/* Whee! Actually deliver the signal. */
int ret;
#ifdef CONFIG_COMPAT
if (test_thread_flag(TIF_31BIT)) {
extern int handle_signal32(unsigned long sig,
......
@@ -478,15 +479,12 @@ void do_signal(struct pt_regs *regs)
siginfo_t *info,
sigset_t *oldset,
struct pt_regs *regs);
if (handle_signal32(
signr, &ka, &info, oldset, regs) == 0) {
if (test_thread_flag(TIF_RESTORE_SIGMASK))
clear_thread_flag(TIF_RESTORE_SIGMASK);
}
return;
ret = handle_signal32(signr, &ka, &info, oldset, regs);
}
else
#endif
if (handle_signal(signr, &ka, &info, oldset, regs) == 0) {
ret = handle_signal(signr, &ka, &info, oldset, regs);
if (!ret) {
/*
* A signal was successfully delivered; the saved
* sigmask will have been stored in the signal frame,
......
@@ -495,6 +493,14 @@ void do_signal(struct pt_regs *regs)
*/
if (test_thread_flag(TIF_RESTORE_SIGMASK))
clear_thread_flag(TIF_RESTORE_SIGMASK);
/*
* If we would have taken a single-step trap
* for a normal instruction, act like we took
* one for the handler setup.
*/
if (current->thread.per_info.single_step)
set_thread_flag(TIF_SINGLE_STEP);
}
return;
}
......
......
@@ -42,6 +42,7 @@
#include <asm/tlbflush.h>
#include <asm/timer.h>
#include <asm/lowcore.h>
#include <asm/sclp.h>
#include <asm/cpu.h>
/*
......
@@ -53,11 +54,27 @@ EXPORT_SYMBOL(lowcore_ptr);
cpumask_t cpu_online_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_online_map);
cpumask_t cpu_possible_map = CPU_MASK_NONE;
cpumask_t cpu_possible_map = CPU_MASK_ALL;
EXPORT_SYMBOL(cpu_possible_map);
static struct task_struct *current_set[NR_CPUS];
static u8 smp_cpu_type;
static int smp_use_sigp_detection;
enum s390_cpu_state {
CPU_STATE_STANDBY,
CPU_STATE_CONFIGURED,
};
#ifdef CONFIG_HOTPLUG_CPU
static DEFINE_MUTEX(smp_cpu_state_mutex);
#endif
static int smp_cpu_state[NR_CPUS];
static DEFINE_PER_CPU(struct cpu, cpu_devices);
DEFINE_PER_CPU(struct s390_idle_data, s390_idle);
static void smp_ext_bitcall(int, ec_bit_sig);
/*
......
@@ -193,6 +210,33 @@ int smp_call_function_single(int cpu, void (*func) (void *info), void *info,
}
EXPORT_SYMBOL(smp_call_function_single);
/**
* smp_call_function_mask(): Run a function on a set of other CPUs.
* @mask: The set of cpus to run on. Must not include the current cpu.
* @func: The function to run. This must be fast and non-blocking.
* @info: An arbitrary pointer to pass to the function.
* @wait: If true, wait (atomically) until function has completed on other CPUs.
*
* Returns 0 on success, else a negative status code.
*
* If @wait is true, then returns once @func has returned; otherwise
* it returns just before the target cpu calls @func.
*
* You must not call this function with disabled interrupts or from a
* hardware interrupt handler or from a bottom half handler.
*/
int
smp_call_function_mask(cpumask_t mask,
void (*func)(void *), void *info,
int wait)
{
preempt_disable();
__smp_call_function_map(func, info, 0, wait, mask);
preempt_enable();
return 0;
}
EXPORT_SYMBOL(smp_call_function_mask);
void smp_send_stop(void)
{
int cpu, rc;
......
@@ -216,33 +260,6 @@ void smp_send_stop(void)
}
}
/*
* Reboot, halt and power_off routines for SMP.
*/
void machine_restart_smp(char *__unused)
{
smp_send_stop();
do_reipl();
}
void machine_halt_smp(void)
{
smp_send_stop();
if (MACHINE_IS_VM && strlen(vmhalt_cmd) > 0)
__cpcmd(vmhalt_cmd, NULL, 0, NULL);
signal_processor(smp_processor_id(), sigp_stop_and_store_status);
for (;;);
}
void machine_power_off_smp(void)
{
smp_send_stop();
if (MACHINE_IS_VM && strlen(vmpoff_cmd) > 0)
__cpcmd(vmpoff_cmd, NULL, 0, NULL);
signal_processor(smp_processor_id(), sigp_stop_and_store_status);
for (;;);
}
/*
* This is the main routine where commands issued by other
* cpus are handled.
......
@@ -355,6 +372,13 @@ void smp_ctl_clear_bit(int cr, int bit)
}
EXPORT_SYMBOL(smp_ctl_clear_bit);
/*
* In early ipl state a temp. logical cpu number is needed, so the sigp
* functions can be used to sense other cpus. Since NR_CPUS is >= 2 on
* CONFIG_SMP and the ipl cpu is logical cpu 0, it must be 1.
*/
#define CPU_INIT_NO 1
#if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE)
/*
......
@@ -375,9 +399,10 @@ static void __init smp_get_save_area(unsigned int cpu, unsigned int phy_cpu)
"kernel was compiled with NR_CPUS=%i\n", cpu, NR_CPUS);
return;
}
zfcpdump_save_areas[cpu] = alloc_bootmem(sizeof(union save_area));
__cpu_logical_map[1] = (__u16) phy_cpu;
while (signal_processor(1, sigp_stop_and_store_status) == sigp_busy)
zfcpdump_save_areas[cpu] = kmalloc(sizeof(union save_area), GFP_KERNEL);
__cpu_logical_map[CPU_INIT_NO] = (__u16) phy_cpu;
while (signal_processor(CPU_INIT_NO, sigp_stop_and_store_status) ==
sigp_busy)
cpu_relax();
memcpy(zfcpdump_save_areas[cpu],
(void *)(unsigned long) store_prefix() + SAVE_AREA_BASE,
......
@@ -397,32 +422,155 @@ static inline void smp_get_save_area(unsigned int cpu, unsigned int phy_cpu) { }
#endif /* CONFIG_ZFCPDUMP || CONFIG_ZFCPDUMP_MODULE */
/*
* Lets check how many CPUs we have.
*/
static unsigned int __init smp_count_cpus(void)
static int cpu_stopped(int cpu)
{
unsigned int cpu, num_cpus;
__u16 boot_cpu_addr;
__u32 status;
/*
* cpu 0 is the boot cpu. See smp_prepare_boot_cpu.
*/
/* Check for stopped state */
if (signal_processor_ps(&status, 0, cpu, sigp_sense) ==
sigp_status_stored) {
if (status & 0x40)
return 1;
}
return 0;
}
static int cpu_known(int cpu_id)
{
int cpu;
for_each_present_cpu(cpu) {
if (__cpu_logical_map[cpu] == cpu_id)
return 1;
}
return 0;
}
static int smp_rescan_cpus_sigp(cpumask_t avail)
{
int cpu_id, logical_cpu;
logical_cpu = first_cpu(avail);
if (logical_cpu == NR_CPUS)
return 0;
for (cpu_id = 0; cpu_id <= 65535; cpu_id++) {
if (cpu_known(cpu_id))
continue;
__cpu_logical_map[logical_cpu] = cpu_id;
if (!cpu_stopped(logical_cpu))
continue;
cpu_set(logical_cpu, cpu_present_map);
smp_cpu_state[logical_cpu] = CPU_STATE_CONFIGURED;
logical_cpu = next_cpu(logical_cpu, avail);
if (logical_cpu == NR_CPUS)
break;
}
return 0;
}
static int smp_rescan_cpus_sclp(cpumask_t avail)
{
struct sclp_cpu_info *info;
int cpu_id, logical_cpu, cpu;
int rc;
logical_cpu = first_cpu(avail);
if (logical_cpu == NR_CPUS)
return 0;
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
rc = sclp_get_cpu_info(info);
if (rc)
goto out;
for (cpu = 0; cpu < info->combined; cpu++) {
if (info->has_cpu_type && info->cpu[cpu].type != smp_cpu_type)
continue;
cpu_id = info->cpu[cpu].address;
if (cpu_known(cpu_id))
continue;
__cpu_logical_map[logical_cpu] = cpu_id;
cpu_set(logical_cpu, cpu_present_map);
if (cpu >= info->configured)
smp_cpu_state[logical_cpu] = CPU_STATE_STANDBY;
else
smp_cpu_state[logical_cpu] = CPU_STATE_CONFIGURED;
logical_cpu = next_cpu(logical_cpu, avail);
if (logical_cpu == NR_CPUS)
break;
}
out:
kfree(info);
return rc;
}
static int smp_rescan_cpus(void)
{
cpumask_t avail;
cpus_xor(avail, cpu_possible_map, cpu_present_map);
if (smp_use_sigp_detection)
return smp_rescan_cpus_sigp(avail);
else
return smp_rescan_cpus_sclp(avail);
}
static void __init smp_detect_cpus(void)
{
unsigned int cpu, c_cpus, s_cpus;
struct sclp_cpu_info *info;
u16 boot_cpu_addr, cpu_addr;
c_cpus = 1;
s_cpus = 0;
boot_cpu_addr = S390_lowcore.cpu_data.cpu_addr;
current_thread_info()->cpu = 0;
num_cpus = 1;
for (cpu = 0; cpu <= 65535; cpu++) {
if ((__u16) cpu == boot_cpu_addr)
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info)
panic("smp_detect_cpus failed to allocate memory\n");
/* Use sigp detection algorithm if sclp doesn't work. */
if (sclp_get_cpu_info(info)) {
smp_use_sigp_detection = 1;
for (cpu = 0; cpu <= 65535; cpu++) {
if (cpu == boot_cpu_addr)
continue;
__cpu_logical_map[CPU_INIT_NO] = cpu;
if (!cpu_stopped(CPU_INIT_NO))
continue;
smp_get_save_area(c_cpus, cpu);
c_cpus++;
}
goto out;
}
if (info->has_cpu_type) {
for (cpu = 0; cpu < info->combined; cpu++) {
if (info->cpu[cpu].address == boot_cpu_addr) {
smp_cpu_type = info->cpu[cpu].type;
break;
}
}
}
for (cpu = 0; cpu < info->combined; cpu++) {
if (info->has_cpu_type && info->cpu[cpu].type != smp_cpu_type)
continue;
cpu_addr = info->cpu[cpu].address;
if (cpu_addr == boot_cpu_addr)
continue;
__cpu_logical_map[1] = (__u16) cpu;
if (signal_processor(1, sigp_sense) == sigp_not_operational)
__cpu_logical_map[CPU_INIT_NO] = cpu_addr;
if (!cpu_stopped(CPU_INIT_NO)) {
s_cpus++;
continue;
smp_get_save_area(num_cpus, cpu);
num_cpus++;
}
smp_get_save_area(c_cpus, cpu_addr);
c_cpus++;
}
printk("Detected %d CPU's\n", (int) num_cpus);
printk("Boot cpu address %2X\n", boot_cpu_addr);
return num_cpus;
out:
kfree(info);
printk(KERN_INFO "CPUs: %d configured, %d standby\n", c_cpus, s_cpus);
get_online_cpus();
smp_rescan_cpus();
put_online_cpus();
}
/*
......
@@ -453,8 +601,6 @@ int __cpuinit start_secondary(void *cpuvoid)
return 0;
}
DEFINE_PER_CPU(struct s390_idle_data, s390_idle);
static void __init smp_create_idle(unsigned int cpu)
{
struct task_struct *p;
......
@@ -470,37 +616,82 @@ static void __init smp_create_idle(unsigned int cpu)
spin_lock_init(&(&per_cpu(s390_idle, cpu))->lock);
}
static int cpu_stopped(int cpu)
static int __cpuinit smp_alloc_lowcore(int cpu)
{
__u32 status;
unsigned long async_stack, panic_stack;
struct _lowcore *lowcore;
int lc_order;
lc_order = sizeof(long) == 8 ? 1 : 0;
lowcore = (void *) __get_free_pages(GFP_KERNEL | GFP_DMA, lc_order);
if (!lowcore)
return -ENOMEM;
async_stack = __get_free_pages(GFP_KERNEL, ASYNC_ORDER);
if (!async_stack)
goto out_async_stack;
panic_stack = __get_free_page(GFP_KERNEL);
if (!panic_stack)
goto out_panic_stack;
*lowcore = S390_lowcore;
lowcore->async_stack = async_stack + ASYNC_SIZE;
lowcore->panic_stack = panic_stack + PAGE_SIZE;
/* Check for stopped state */
if (signal_processor_ps(&status, 0, cpu, sigp_sense) ==
sigp_status_stored) {
if (status & 0x40)
return 1;
#ifndef CONFIG_64BIT
if (MACHINE_HAS_IEEE) {
unsigned long save_area;
save_area = get_zeroed_page(GFP_KERNEL);
if (!save_area)
goto out_save_area;
lowcore->extended_save_area_addr = (u32) save_area;
}
#endif
lowcore_ptr[cpu] = lowcore;
return 0;
#ifndef CONFIG_64BIT
out_save_area:
free_page(panic_stack);
#endif
out_panic_stack:
free_pages(async_stack, ASYNC_ORDER);
out_async_stack:
free_pages((unsigned long) lowcore, lc_order);
return -ENOMEM;
}
/* Upping and downing of CPUs */
#ifdef CONFIG_HOTPLUG_CPU
static void smp_free_lowcore(int cpu)
{
struct _lowcore *lowcore;
int lc_order;
lc_order = sizeof(long) == 8 ? 1 : 0;
lowcore = lowcore_ptr[cpu];
#ifndef CONFIG_64BIT
if (MACHINE_HAS_IEEE)
free_page((unsigned long) lowcore->extended_save_area_addr);
#endif
free_page(lowcore->panic_stack - PAGE_SIZE);
free_pages(lowcore->async_stack - ASYNC_SIZE, ASYNC_ORDER);
free_pages((unsigned long) lowcore, lc_order);
lowcore_ptr[cpu] = NULL;
}
#endif /* CONFIG_HOTPLUG_CPU */
int __cpu_up(unsigned int cpu)
/* Upping and downing of CPUs */
int __cpuinit __cpu_up(unsigned int cpu)
{
struct task_struct *idle;
struct _lowcore *cpu_lowcore;
struct stack_frame *sf;
sigp_ccode ccode;
int curr_cpu;
for (curr_cpu = 0; curr_cpu <= 65535; curr_cpu++) {
__cpu_logical_map[cpu] = (__u16) curr_cpu;
if (cpu_stopped(cpu))
break;
}
if (!cpu_stopped(cpu))
return -ENODEV;
if (smp_cpu_state[cpu] != CPU_STATE_CONFIGURED)
return -EIO;
if (smp_alloc_lowcore(cpu))
return -ENOMEM;
ccode = signal_processor_p((__u32)(unsigned long)(lowcore_ptr[cpu]),
cpu, sigp_set_prefix);
......
@@ -515,6 +706,7 @@ int __cpu_up(unsigned int cpu)
cpu_lowcore = lowcore_ptr[cpu];
cpu_lowcore->kernel_stack = (unsigned long)
task_stack_page(idle) + THREAD_SIZE;
cpu_lowcore->thread_info = (unsigned long) task_thread_info(idle);
sf = (struct stack_frame *) (cpu_lowcore->kernel_stack
- sizeof(struct pt_regs)
- sizeof(struct stack_frame));
......
@@ -528,6 +720,8 @@ int __cpu_up(unsigned int cpu)
cpu_lowcore->percpu_offset = __per_cpu_offset[cpu];
cpu_lowcore->current_task = (unsigned long) idle;
cpu_lowcore->cpu_data.cpu_nr = cpu;
cpu_lowcore->softirq_pending = 0;
cpu_lowcore->ext_call_fast = 0;
eieio();
while (signal_processor(cpu, sigp_restart) == sigp_busy)
......
@@ -538,44 +732,20 @@ int __cpu_up(unsigned int cpu)
return 0;
}
static unsigned int __initdata additional_cpus;
static unsigned int __initdata possible_cpus;
void __init smp_setup_cpu_possible_map(void)
static int __init setup_possible_cpus(char *s)
{
unsigned int phy_cpus, pos_cpus, cpu;
phy_cpus = smp_count_cpus();
pos_cpus = min(phy_cpus + additional_cpus, (unsigned int) NR_CPUS);
if (possible_cpus)
pos_cpus = min(possible_cpus, (unsigned int) NR_CPUS);
int pcpus, cpu;
for (cpu = 0; cpu < pos_cpus; cpu++)
pcpus = simple_strtoul(s, NULL, 0);
cpu_possible_map = cpumask_of_cpu(0);
for (cpu = 1; cpu < pcpus && cpu < NR_CPUS; cpu++)
cpu_set(cpu, cpu_possible_map);
phy_cpus = min(phy_cpus, pos_cpus);
for (cpu = 0; cpu < phy_cpus; cpu++)
cpu_set(cpu, cpu_present_map);
}
#ifdef CONFIG_HOTPLUG_CPU
static int __init setup_additional_cpus(char *s)
{
additional_cpus = simple_strtoul(s, NULL, 0);
return 0;
}
early_param("additional_cpus", setup_additional_cpus);
static int __init setup_possible_cpus(char *s)
{
possible_cpus = simple_strtoul(s, NULL, 0);
return 0;
}
early_param("possible_cpus", setup_possible_cpus);
#ifdef CONFIG_HOTPLUG_CPU
int __cpu_disable(void)
{
struct ec_creg_mask_parms cr_parms;
......
@@ -612,7 +782,8 @@ void __cpu_die(unsigned int cpu)
/* Wait until target cpu is down */
while (!smp_cpu_not_running(cpu))
cpu_relax();
printk("Processor %d spun down\n", cpu);
smp_free_lowcore(cpu);
printk(KERN_INFO "Processor %d spun down\n", cpu);
}
void cpu_die(void)
......
@@ -625,49 +796,19 @@ void cpu_die(void)
#endif /* CONFIG_HOTPLUG_CPU */
/*
* Cycle through the processors and setup structures.
*/
void __init smp_prepare_cpus(unsigned int max_cpus)
{
unsigned long stack;
unsigned int cpu;
int i;
smp_detect_cpus();
/* request the 0x1201 emergency signal external interrupt */
if (register_external_interrupt(0x1201, do_ext_call_interrupt) != 0)
panic("Couldn't request external interrupt 0x1201");
memset(lowcore_ptr, 0, sizeof(lowcore_ptr));
/*
* Initialize prefix pages and stacks for all possible cpus
*/
print_cpu_info(&S390_lowcore.cpu_data);
smp_alloc_lowcore(smp_processor_id());
for_each_possible_cpu(i) {
lowcore_ptr[i] = (struct _lowcore *)
__get_free_pages(GFP_KERNEL | GFP_DMA,
sizeof(void*) == 8 ? 1 : 0);
stack = __get_free_pages(GFP_KERNEL, ASYNC_ORDER);
if (!lowcore_ptr[i] || !stack)
panic("smp_boot_cpus failed to allocate memory\n");
*(lowcore_ptr[i]) = S390_lowcore;
lowcore_ptr[i]->async_stack = stack + ASYNC_SIZE;
stack = __get_free_pages(GFP_KERNEL, 0);
if (!stack)
panic("smp_boot_cpus failed to allocate memory\n");
lowcore_ptr[i]->panic_stack = stack + PAGE_SIZE;
#ifndef CONFIG_64BIT
if (MACHINE_HAS_IEEE) {
lowcore_ptr[i]->extended_save_area_addr =
(__u32) __get_free_pages(GFP_KERNEL, 0);
if (!lowcore_ptr[i]->extended_save_area_addr)
panic("smp_boot_cpus failed to "
"allocate memory\n");
}
#endif
}
#ifndef CONFIG_64BIT
if (MACHINE_HAS_IEEE)
ctl_set_bit(14, 29); /* enable extended save area */
......
@@ -683,15 +824,17 @@ void __init smp_prepare_boot_cpu(void)
{
BUG_ON(smp_processor_id() != 0);
current_thread_info()->cpu = 0;
cpu_set(0, cpu_present_map);
cpu_set(0, cpu_online_map);
S390_lowcore.percpu_offset = __per_cpu_offset[0];
current_set[0] = current;
smp_cpu_state[0] = CPU_STATE_CONFIGURED;
spin_lock_init(&(&__get_cpu_var(s390_idle))->lock);
}
void __init smp_cpus_done(unsigned int max_cpus)
{
cpu_present_map = cpu_possible_map;
}
/*
......
@@ -705,7 +848,79 @@ int setup_profiling_timer(unsigned int multiplier)
return 0;
}
static DEFINE_PER_CPU(struct cpu, cpu_devices);
#ifdef CONFIG_HOTPLUG_CPU
static ssize_t cpu_configure_show(struct sys_device *dev, char *buf)
{
ssize_t count;
mutex_lock(&smp_cpu_state_mutex);
count = sprintf(buf, "%d\n", smp_cpu_state[dev->id]);
mutex_unlock(&smp_cpu_state_mutex);
return count;
}
static ssize_t cpu_configure_store(struct sys_device *dev, const char *buf,
size_t count)
{
int cpu = dev->id;
int val, rc;
char delim;
if (sscanf(buf, "%d %c", &val, &delim) != 1)
return -EINVAL;
if (val != 0 && val != 1)
return -EINVAL;
mutex_lock(&smp_cpu_state_mutex);
get_online_cpus();
rc = -EBUSY;
if (cpu_online(cpu))
goto out;
rc = 0;
switch (val) {
case 0:
if (smp_cpu_state[cpu] == CPU_STATE_CONFIGURED) {
rc = sclp_cpu_deconfigure(__cpu_logical_map[cpu]);
if (!rc)
smp_cpu_state[cpu] = CPU_STATE_STANDBY;
}
break;
case 1:
if (smp_cpu_state[cpu] == CPU_STATE_STANDBY) {
rc = sclp_cpu_configure(__cpu_logical_map[cpu]);
if (!rc)
smp_cpu_state[cpu] = CPU_STATE_CONFIGURED;
}
break;
default:
break;
}
out:
put_online_cpus();
mutex_unlock(&smp_cpu_state_mutex);
return rc ? rc : count;
}
static SYSDEV_ATTR(configure, 0644, cpu_configure_show, cpu_configure_store);
#endif /* CONFIG_HOTPLUG_CPU */
static ssize_t show_cpu_address(struct sys_device *dev, char *buf)
{
return sprintf(buf, "%d\n", __cpu_logical_map[dev->id]);
}
static SYSDEV_ATTR(address, 0444, show_cpu_address, NULL);
static struct attribute *cpu_common_attrs[] = {
#ifdef CONFIG_HOTPLUG_CPU
&attr_configure.attr,
#endif
&attr_address.attr,
NULL,
};
static struct attribute_group cpu_common_attr_group = {
.attrs = cpu_common_attrs,
};
static ssize_t show_capability(struct sys_device *dev, char *buf)
{
......
@@ -750,15 +965,15 @@ static ssize_t show_idle_time(struct sys_device *dev, char *buf)
}
static SYSDEV_ATTR(idle_time_us, 0444, show_idle_time, NULL);
static struct attribute *cpu_attrs[] = {
static struct attribute *cpu_online_attrs[] = {
&attr_capability.attr,
&attr_idle_count.attr,
&attr_idle_time_us.attr,
NULL,
};
static struct attribute_group cpu_attr_group = {
.attrs = cpu_attrs,
static struct attribute_group cpu_online_attr_group = {
.attrs = cpu_online_attrs,
};
static int __cpuinit smp_cpu_notify(struct notifier_block *self,
......@@ -778,12 +993,12 @@ static int __cpuinit smp_cpu_notify(struct notifier_block *self,
idle->idle_time = 0;
idle->idle_count = 0;
spin_unlock_irq(&idle->lock);
if (sysfs_create_group(&s->kobj, &cpu_attr_group))
if (sysfs_create_group(&s->kobj, &cpu_online_attr_group))
return NOTIFY_BAD;
break;
case CPU_DEAD:
case CPU_DEAD_FROZEN:
sysfs_remove_group(&s->kobj, &cpu_attr_group);
sysfs_remove_group(&s->kobj, &cpu_online_attr_group);
break;
}
return NOTIFY_OK;
......@@ -793,6 +1008,62 @@ static struct notifier_block __cpuinitdata smp_cpu_nb = {
.notifier_call = smp_cpu_notify,
};
static int smp_add_present_cpu(int cpu)
{
struct cpu *c = &per_cpu(cpu_devices, cpu);
struct sys_device *s = &c->sysdev;
int rc;
c->hotpluggable = 1;
rc = register_cpu(c, cpu);
if (rc)
goto out;
rc = sysfs_create_group(&s->kobj, &cpu_common_attr_group);
if (rc)
goto out_cpu;
if (!cpu_online(cpu))
goto out;
rc = sysfs_create_group(&s->kobj, &cpu_online_attr_group);
if (!rc)
return 0;
sysfs_remove_group(&s->kobj, &cpu_common_attr_group);
out_cpu:
#ifdef CONFIG_HOTPLUG_CPU
unregister_cpu(c);
#endif
out:
return rc;
}
#ifdef CONFIG_HOTPLUG_CPU
static ssize_t rescan_store(struct sys_device *dev, const char *buf,
size_t count)
{
cpumask_t newcpus;
int cpu;
int rc;
mutex_lock(&smp_cpu_state_mutex);
get_online_cpus();
newcpus = cpu_present_map;
rc = smp_rescan_cpus();
if (rc)
goto out;
cpus_andnot(newcpus, cpu_present_map, newcpus);
for_each_cpu_mask(cpu, newcpus) {
rc = smp_add_present_cpu(cpu);
if (rc)
cpu_clear(cpu, cpu_present_map);
}
rc = 0;
out:
put_online_cpus();
mutex_unlock(&smp_cpu_state_mutex);
return rc ? rc : count;
}
static SYSDEV_ATTR(rescan, 0200, NULL, rescan_store);
#endif /* CONFIG_HOTPLUG_CPU */
static int __init topology_init(void)
{
int cpu;
......@@ -800,16 +1071,14 @@ static int __init topology_init(void)
register_cpu_notifier(&smp_cpu_nb);
for_each_possible_cpu(cpu) {
struct cpu *c = &per_cpu(cpu_devices, cpu);
struct sys_device *s = &c->sysdev;
c->hotpluggable = 1;
register_cpu(c, cpu);
if (!cpu_online(cpu))
continue;
s = &c->sysdev;
rc = sysfs_create_group(&s->kobj, &cpu_attr_group);
#ifdef CONFIG_HOTPLUG_CPU
rc = sysfs_create_file(&cpu_sysdev_class.kset.kobj,
&attr_rescan.attr);
if (rc)
return rc;
#endif
for_each_present_cpu(cpu) {
rc = smp_add_present_cpu(cpu);
if (rc)
return rc;
}
......
......@@ -31,6 +31,7 @@
#include <linux/reboot.h>
#include <linux/kprobes.h>
#include <linux/bug.h>
#include <linux/utsname.h>
#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/io.h>
......@@ -168,9 +169,16 @@ void show_stack(struct task_struct *task, unsigned long *sp)
*/
void dump_stack(void)
{
printk("CPU: %d %s %s %.*s\n",
task_thread_info(current)->cpu, print_tainted(),
init_utsname()->release,
(int)strcspn(init_utsname()->version, " "),
init_utsname()->version);
printk("Process %s (pid: %d, task: %p, ksp: %p)\n",
current->comm, current->pid, current,
(void *) current->thread.ksp);
show_stack(NULL, NULL);
}
EXPORT_SYMBOL(dump_stack);
static inline int mask_bits(struct pt_regs *regs, unsigned long bits)
......@@ -258,8 +266,14 @@ void die(const char * str, struct pt_regs * regs, long err)
console_verbose();
spin_lock_irq(&die_lock);
bust_spinlocks(1);
printk("%s: %04lx [#%d]\n", str, err & 0xffff, ++die_counter);
print_modules();
printk("%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter);
#ifdef CONFIG_PREEMPT
printk("PREEMPT ");
#endif
#ifdef CONFIG_SMP
printk("SMP");
#endif
printk("\n");
notify_die(DIE_OOPS, str, regs, err, current->thread.trap_no, SIGSEGV);
show_regs(regs);
bust_spinlocks(0);
......
......@@ -17,6 +17,12 @@ ENTRY(_start)
jiffies = jiffies_64;
#endif
PHDRS {
text PT_LOAD FLAGS(5); /* R_E */
data PT_LOAD FLAGS(7); /* RWE */
note PT_NOTE FLAGS(0); /* ___ */
}
SECTIONS
{
. = 0x00000000;
......@@ -33,6 +39,9 @@ SECTIONS
_etext = .; /* End of text section */
NOTES :text :note
BUG_TABLE :text
RODATA
#ifdef CONFIG_SHARED_KERNEL
......@@ -49,9 +58,6 @@ SECTIONS
__stop___ex_table = .;
}
NOTES
BUG_TABLE
.data : { /* Data */
DATA_DATA
CONSTRUCTORS
......
......@@ -39,7 +39,7 @@ static inline void _raw_yield_cpu(int cpu)
_raw_yield();
}
void _raw_spin_lock_wait(raw_spinlock_t *lp, unsigned int pc)
void _raw_spin_lock_wait(raw_spinlock_t *lp)
{
int count = spin_retry;
unsigned int cpu = ~smp_processor_id();
......@@ -53,15 +53,36 @@ void _raw_spin_lock_wait(raw_spinlock_t *lp, unsigned int pc)
}
if (__raw_spin_is_locked(lp))
continue;
if (_raw_compare_and_swap(&lp->owner_cpu, 0, cpu) == 0) {
lp->owner_pc = pc;
if (_raw_compare_and_swap(&lp->owner_cpu, 0, cpu) == 0)
return;
}
}
}
EXPORT_SYMBOL(_raw_spin_lock_wait);
int _raw_spin_trylock_retry(raw_spinlock_t *lp, unsigned int pc)
void _raw_spin_lock_wait_flags(raw_spinlock_t *lp, unsigned long flags)
{
int count = spin_retry;
unsigned int cpu = ~smp_processor_id();
local_irq_restore(flags);
while (1) {
if (count-- <= 0) {
unsigned int owner = lp->owner_cpu;
if (owner != 0)
_raw_yield_cpu(~owner);
count = spin_retry;
}
if (__raw_spin_is_locked(lp))
continue;
local_irq_disable();
if (_raw_compare_and_swap(&lp->owner_cpu, 0, cpu) == 0)
return;
local_irq_restore(flags);
}
}
EXPORT_SYMBOL(_raw_spin_lock_wait_flags);
int _raw_spin_trylock_retry(raw_spinlock_t *lp)
{
unsigned int cpu = ~smp_processor_id();
int count;
......@@ -69,10 +90,8 @@ int _raw_spin_trylock_retry(raw_spinlock_t *lp, unsigned int pc)
for (count = spin_retry; count > 0; count--) {
if (__raw_spin_is_locked(lp))
continue;
if (_raw_compare_and_swap(&lp->owner_cpu, 0, cpu) == 0) {
lp->owner_pc = pc;
if (_raw_compare_and_swap(&lp->owner_cpu, 0, cpu) == 0)
return 1;
}
}
return 0;
}
......
......@@ -83,7 +83,7 @@ struct dcss_segment {
};
static DEFINE_MUTEX(dcss_lock);
static struct list_head dcss_list = LIST_HEAD_INIT(dcss_list);
static LIST_HEAD(dcss_list);
static char *segtype_string[] = { "SW", "EW", "SR", "ER", "SN", "EN", "SC",
"EW/EN-MIXED" };
......
......@@ -15,10 +15,6 @@
#include <asm/setup.h>
#include <asm/tlbflush.h>
unsigned long vmalloc_end;
EXPORT_SYMBOL(vmalloc_end);
static struct page *vmem_map;
static DEFINE_MUTEX(vmem_mutex);
struct memory_segment {
......@@ -188,8 +184,8 @@ static int vmem_add_mem_map(unsigned long start, unsigned long size)
pte_t pte;
int ret = -ENOMEM;
map_start = vmem_map + PFN_DOWN(start);
map_end = vmem_map + PFN_DOWN(start + size);
map_start = VMEM_MAP + PFN_DOWN(start);
map_end = VMEM_MAP + PFN_DOWN(start + size);
start_addr = (unsigned long) map_start & PAGE_MASK;
end_addr = PFN_ALIGN((unsigned long) map_end);
......@@ -240,10 +236,10 @@ static int vmem_add_mem(unsigned long start, unsigned long size)
{
int ret;
ret = vmem_add_range(start, size);
ret = vmem_add_mem_map(start, size);
if (ret)
return ret;
return vmem_add_mem_map(start, size);
return vmem_add_range(start, size);
}
/*
......@@ -254,7 +250,7 @@ static int insert_memory_segment(struct memory_segment *seg)
{
struct memory_segment *tmp;
if (PFN_DOWN(seg->start + seg->size) > max_pfn ||
if (seg->start + seg->size >= VMALLOC_START ||
seg->start + seg->size < seg->start)
return -ERANGE;
......@@ -357,17 +353,15 @@ int add_shared_memory(unsigned long start, unsigned long size)
/*
* map whole physical memory to virtual memory (identity mapping)
* we reserve enough space in the vmalloc area for vmemmap to hotplug
* additional memory segments.
*/
void __init vmem_map_init(void)
{
unsigned long map_size;
int i;
map_size = ALIGN(max_low_pfn, MAX_ORDER_NR_PAGES) * sizeof(struct page);
vmalloc_end = PFN_ALIGN(VMALLOC_END_INIT) - PFN_ALIGN(map_size);
vmem_map = (struct page *) vmalloc_end;
NODE_DATA(0)->node_mem_map = vmem_map;
BUILD_BUG_ON((unsigned long)VMEM_MAP + VMEM_MAP_SIZE > VMEM_MAP_MAX);
NODE_DATA(0)->node_mem_map = VMEM_MAP;
for (i = 0; i < MEMORY_CHUNKS && memory_chunk[i].size > 0; i++)
vmem_add_mem(memory_chunk[i].addr, memory_chunk[i].size);
}
......@@ -382,7 +376,7 @@ static int __init vmem_convert_memory_chunk(void)
int i;
mutex_lock(&vmem_mutex);
for (i = 0; i < MEMORY_CHUNKS && memory_chunk[i].size > 0; i++) {
for (i = 0; i < MEMORY_CHUNKS; i++) {
if (!memory_chunk[i].size)
continue;
seg = kzalloc(sizeof(*seg), GFP_KERNEL);
......
......@@ -48,8 +48,6 @@ config CRYPTO_DEV_PADLOCK_SHA
If unsure say M. The compiled module will be
called padlock-sha.ko
source "arch/s390/crypto/Kconfig"
config CRYPTO_DEV_GEODE
tristate "Support for the Geode LX AES engine"
depends on X86_32 && PCI
......@@ -83,6 +81,67 @@ config ZCRYPT_MONOLITHIC
that contains all parts of the crypto device driver (ap bus,
request router and all the card drivers).
config CRYPTO_SHA1_S390
tristate "SHA1 digest algorithm"
depends on S390
select CRYPTO_ALGAPI
help
This is the s390 hardware accelerated implementation of the
SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
config CRYPTO_SHA256_S390
tristate "SHA256 digest algorithm"
depends on S390
select CRYPTO_ALGAPI
help
This is the s390 hardware accelerated implementation of the
SHA256 secure hash standard (DFIPS 180-2).
This version of SHA implements a 256 bit hash with 128 bits of
security against collision attacks.
config CRYPTO_DES_S390
tristate "DES and Triple DES cipher algorithms"
depends on S390
select CRYPTO_ALGAPI
select CRYPTO_BLKCIPHER
help
This is the s390 hardware accelerated implementation of the
DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
config CRYPTO_AES_S390
tristate "AES cipher algorithms"
depends on S390
select CRYPTO_ALGAPI
select CRYPTO_BLKCIPHER
help
This is the s390 hardware accelerated implementation of the
AES cipher algorithms (FIPS-197). AES uses the Rijndael
algorithm.
Rijndael appears to be consistently a very good performer in
both hardware and software across a wide range of computing
environments regardless of its use in feedback or non-feedback
modes. Its key setup time is excellent, and its key agility is
good. Rijndael's very low memory requirements make it very well
suited for restricted-space environments, in which it also
demonstrates excellent performance. Rijndael's operations are
among the easiest to defend against power and timing attacks.
On s390 the System z9-109 currently only supports the key size
of 128 bit.
config S390_PRNG
tristate "Pseudo random number generator device driver"
depends on S390
default "m"
help
Select this option if you want to use the s390 pseudo random number
generator. The PRNG is part of the cryptographic processor functions
and uses triple-DES to generate secure random numbers like the
ANSI X9.17 standard. The PRNG is usable via the char device
/dev/prandom.
config CRYPTO_DEV_HIFN_795X
tristate "Driver HIFN 795x crypto accelerator chips"
select CRYPTO_DES
......
......@@ -2,8 +2,8 @@
# S/390 block devices
#
dasd_eckd_mod-objs := dasd_eckd.o dasd_3990_erp.o dasd_9343_erp.o
dasd_fba_mod-objs := dasd_fba.o dasd_3370_erp.o dasd_9336_erp.o
dasd_eckd_mod-objs := dasd_eckd.o dasd_3990_erp.o dasd_alias.o
dasd_fba_mod-objs := dasd_fba.o
dasd_diag_mod-objs := dasd_diag.o
dasd_mod-objs := dasd.o dasd_ioctl.o dasd_proc.o dasd_devmap.o \
dasd_genhd.o dasd_erp.o
......
(This diff is collapsed.)
/*
* File...........: linux/drivers/s390/block/dasd_3370_erp.c
* Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
* Bugreports.to..: <Linux390@de.ibm.com>
* (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
*
*/
#define PRINTK_HEADER "dasd_erp(3370)"
#include "dasd_int.h"
/*
* DASD_3370_ERP_EXAMINE
*
* DESCRIPTION
* Checks only for fatal/no/recover error.
* A detailed examination of the sense data is done later outside
* the interrupt handler.
*
* The logic is based on the 'IBM 3880 Storage Control Reference' manual
* 'Chapter 7. 3370 Sense Data'.
*
* RETURN VALUES
* dasd_era_none no error
* dasd_era_fatal for all fatal (unrecoverable errors)
* dasd_era_recover for all others.
*/
dasd_era_t
dasd_3370_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
{
char *sense = irb->ecw;
/* check for successful execution first */
if (irb->scsw.cstat == 0x00 &&
irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
return dasd_era_none;
if (sense[0] & 0x80) { /* CMD reject */
return dasd_era_fatal;
}
if (sense[0] & 0x40) { /* Drive offline */
return dasd_era_recover;
}
if (sense[0] & 0x20) { /* Bus out parity */
return dasd_era_recover;
}
if (sense[0] & 0x10) { /* equipment check */
if (sense[1] & 0x80) {
return dasd_era_fatal;
}
return dasd_era_recover;
}
if (sense[0] & 0x08) { /* data check */
if (sense[1] & 0x80) {
return dasd_era_fatal;
}
return dasd_era_recover;
}
if (sense[0] & 0x04) { /* overrun */
if (sense[1] & 0x80) {
return dasd_era_fatal;
}
return dasd_era_recover;
}
if (sense[1] & 0x40) { /* invalid blocksize */
return dasd_era_fatal;
}
if (sense[1] & 0x04) { /* file protected */
return dasd_era_recover;
}
if (sense[1] & 0x01) { /* operation incomplete */
return dasd_era_recover;
}
if (sense[2] & 0x80) { /* check data error */
return dasd_era_recover;
}
if (sense[2] & 0x10) { /* Env. data present */
return dasd_era_recover;
}
/* examine the 24 byte sense data */
return dasd_era_recover;
} /* END dasd_3370_erp_examine */
(This diff is collapsed.)
/*
* File...........: linux/drivers/s390/block/dasd_9336_erp.c
* Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
* Bugreports.to..: <Linux390@de.ibm.com>
* (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
*
*/
#define PRINTK_HEADER "dasd_erp(9336)"
#include "dasd_int.h"
/*
* DASD_9336_ERP_EXAMINE
*
* DESCRIPTION
* Checks only for fatal/no/recover error.
* A detailed examination of the sense data is done later outside
* the interrupt handler.
*
* The logic is based on the 'IBM 3880 Storage Control Reference' manual
* 'Chapter 7. 9336 Sense Data'.
*
* RETURN VALUES
* dasd_era_none no error
* dasd_era_fatal for all fatal (unrecoverable errors)
* dasd_era_recover for all others.
*/
dasd_era_t
dasd_9336_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
{
/* check for successful execution first */
if (irb->scsw.cstat == 0x00 &&
irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
return dasd_era_none;
/* examine the 24 byte sense data */
return dasd_era_recover;
} /* END dasd_9336_erp_examine */
/*
* File...........: linux/drivers/s390/block/dasd_9345_erp.c
* Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
* Bugreports.to..: <Linux390@de.ibm.com>
* (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
*
*/
#define PRINTK_HEADER "dasd_erp(9343)"
#include "dasd_int.h"
dasd_era_t
dasd_9343_erp_examine(struct dasd_ccw_req * cqr, struct irb * irb)
{
if (irb->scsw.cstat == 0x00 &&
irb->scsw.dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
return dasd_era_none;
return dasd_era_recover;
}
(This diff is collapsed.)
......@@ -48,22 +48,6 @@ struct dasd_devmap {
struct dasd_uid uid;
};
/*
* dasd_server_ssid_map contains a globally unique storage server subsystem ID.
* dasd_server_ssid_list contains the list of all subsystem IDs accessed by
* the DASD device driver.
*/
struct dasd_server_ssid_map {
struct list_head list;
struct system_id {
char vendor[4];
char serial[15];
__u16 ssid;
} sid;
};
static struct list_head dasd_server_ssid_list;
/*
* Parameter parsing functions for dasd= parameter. The syntax is:
* <devno> : (0x)?[0-9a-fA-F]+
......@@ -721,8 +705,9 @@ dasd_ro_store(struct device *dev, struct device_attribute *attr,
devmap->features &= ~DASD_FEATURE_READONLY;
if (devmap->device)
devmap->device->features = devmap->features;
if (devmap->device && devmap->device->gdp)
set_disk_ro(devmap->device->gdp, val);
if (devmap->device && devmap->device->block
&& devmap->device->block->gdp)
set_disk_ro(devmap->device->block->gdp, val);
spin_unlock(&dasd_devmap_lock);
return count;
}
......@@ -893,12 +878,16 @@ dasd_alias_show(struct device *dev, struct device_attribute *attr, char *buf)
devmap = dasd_find_busid(dev->bus_id);
spin_lock(&dasd_devmap_lock);
if (!IS_ERR(devmap))
alias = devmap->uid.alias;
if (IS_ERR(devmap) || strlen(devmap->uid.vendor) == 0) {
spin_unlock(&dasd_devmap_lock);
return sprintf(buf, "0\n");
}
if (devmap->uid.type == UA_BASE_PAV_ALIAS ||
devmap->uid.type == UA_HYPER_PAV_ALIAS)
alias = 1;
else
alias = 0;
spin_unlock(&dasd_devmap_lock);
return sprintf(buf, alias ? "1\n" : "0\n");
}
......@@ -930,19 +919,36 @@ static ssize_t
dasd_uid_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct dasd_devmap *devmap;
char uid[UID_STRLEN];
char uid_string[UID_STRLEN];
char ua_string[3];
struct dasd_uid *uid;
devmap = dasd_find_busid(dev->bus_id);
spin_lock(&dasd_devmap_lock);
if (!IS_ERR(devmap) && strlen(devmap->uid.vendor) > 0)
snprintf(uid, sizeof(uid), "%s.%s.%04x.%02x",
devmap->uid.vendor, devmap->uid.serial,
devmap->uid.ssid, devmap->uid.unit_addr);
else
uid[0] = 0;
if (IS_ERR(devmap) || strlen(devmap->uid.vendor) == 0) {
spin_unlock(&dasd_devmap_lock);
return sprintf(buf, "\n");
}
uid = &devmap->uid;
switch (uid->type) {
case UA_BASE_DEVICE:
sprintf(ua_string, "%02x", uid->real_unit_addr);
break;
case UA_BASE_PAV_ALIAS:
sprintf(ua_string, "%02x", uid->base_unit_addr);
break;
case UA_HYPER_PAV_ALIAS:
sprintf(ua_string, "xx");
break;
default:
/* should not happen, treat like base device */
sprintf(ua_string, "%02x", uid->real_unit_addr);
break;
}
snprintf(uid_string, sizeof(uid_string), "%s.%s.%04x.%s",
uid->vendor, uid->serial, uid->ssid, ua_string);
spin_unlock(&dasd_devmap_lock);
return snprintf(buf, PAGE_SIZE, "%s\n", uid);
return snprintf(buf, PAGE_SIZE, "%s\n", uid_string);
}
static DEVICE_ATTR(uid, 0444, dasd_uid_show, NULL);
......@@ -1040,39 +1046,16 @@ int
dasd_set_uid(struct ccw_device *cdev, struct dasd_uid *uid)
{
struct dasd_devmap *devmap;
struct dasd_server_ssid_map *srv, *tmp;
devmap = dasd_find_busid(cdev->dev.bus_id);
if (IS_ERR(devmap))
return PTR_ERR(devmap);
/* generate entry for server_ssid_map */
srv = (struct dasd_server_ssid_map *)
kzalloc(sizeof(struct dasd_server_ssid_map), GFP_KERNEL);
if (!srv)
return -ENOMEM;
strncpy(srv->sid.vendor, uid->vendor, sizeof(srv->sid.vendor) - 1);
strncpy(srv->sid.serial, uid->serial, sizeof(srv->sid.serial) - 1);
srv->sid.ssid = uid->ssid;
/* server is already contained ? */
spin_lock(&dasd_devmap_lock);
devmap->uid = *uid;
list_for_each_entry(tmp, &dasd_server_ssid_list, list) {
if (!memcmp(&srv->sid, &tmp->sid,
sizeof(struct system_id))) {
kfree(srv);
srv = NULL;
break;
}
}
/* add servermap to serverlist */
if (srv)
list_add(&srv->list, &dasd_server_ssid_list);
spin_unlock(&dasd_devmap_lock);
return (srv ? 1 : 0);
return 0;
}
EXPORT_SYMBOL_GPL(dasd_set_uid);
......@@ -1138,9 +1121,6 @@ dasd_devmap_init(void)
dasd_max_devindex = 0;
for (i = 0; i < 256; i++)
INIT_LIST_HEAD(&dasd_hashlists[i]);
/* Initialize servermap structure. */
INIT_LIST_HEAD(&dasd_server_ssid_list);
return 0;
}
......
......@@ -142,7 +142,7 @@ dasd_diag_erp(struct dasd_device *device)
int rc;
mdsk_term_io(device);
rc = mdsk_init_io(device, device->bp_block, 0, NULL);
rc = mdsk_init_io(device, device->block->bp_block, 0, NULL);
if (rc)
DEV_MESSAGE(KERN_WARNING, device, "DIAG ERP unsuccessful, "
"rc=%d", rc);
......@@ -158,11 +158,11 @@ dasd_start_diag(struct dasd_ccw_req * cqr)
struct dasd_diag_req *dreq;
int rc;
device = cqr->device;
device = cqr->startdev;
if (cqr->retries < 0) {
DEV_MESSAGE(KERN_WARNING, device, "DIAG start_IO: request %p "
"- no retry left)", cqr);
cqr->status = DASD_CQR_FAILED;
cqr->status = DASD_CQR_ERROR;
return -EIO;
}
private = (struct dasd_diag_private *) device->private;
......@@ -184,7 +184,7 @@ dasd_start_diag(struct dasd_ccw_req * cqr)
switch (rc) {
case 0: /* Synchronous I/O finished successfully */
cqr->stopclk = get_clock();
cqr->status = DASD_CQR_DONE;
cqr->status = DASD_CQR_SUCCESS;
/* Indicate to calling function that only a dasd_schedule_bh()
and no timer is needed */
rc = -EACCES;
......@@ -209,12 +209,12 @@ dasd_diag_term_IO(struct dasd_ccw_req * cqr)
{
struct dasd_device *device;
device = cqr->device;
device = cqr->startdev;
mdsk_term_io(device);
mdsk_init_io(device, device->bp_block, 0, NULL);
cqr->status = DASD_CQR_CLEAR;
mdsk_init_io(device, device->block->bp_block, 0, NULL);
cqr->status = DASD_CQR_CLEAR_PENDING;
cqr->stopclk = get_clock();
dasd_schedule_bh(device);
dasd_schedule_device_bh(device);
return 0;
}
......@@ -247,7 +247,7 @@ dasd_ext_handler(__u16 code)
return;
}
cqr = (struct dasd_ccw_req *) ip;
device = (struct dasd_device *) cqr->device;
device = (struct dasd_device *) cqr->startdev;
if (strncmp(device->discipline->ebcname, (char *) &cqr->magic, 4)) {
DEV_MESSAGE(KERN_WARNING, device,
" magic number of dasd_ccw_req 0x%08X doesn't"
......@@ -260,10 +260,10 @@ dasd_ext_handler(__u16 code)
spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
/* Check for a pending clear operation */
if (cqr->status == DASD_CQR_CLEAR) {
cqr->status = DASD_CQR_QUEUED;
dasd_clear_timer(device);
dasd_schedule_bh(device);
if (cqr->status == DASD_CQR_CLEAR_PENDING) {
cqr->status = DASD_CQR_CLEARED;
dasd_device_clear_timer(device);
dasd_schedule_device_bh(device);
spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
return;
}
......@@ -272,11 +272,11 @@ dasd_ext_handler(__u16 code)
expires = 0;
if (status == 0) {
cqr->status = DASD_CQR_DONE;
cqr->status = DASD_CQR_SUCCESS;
/* Start first request on queue if possible -> fast_io. */
if (!list_empty(&device->ccw_queue)) {
next = list_entry(device->ccw_queue.next,
struct dasd_ccw_req, list);
struct dasd_ccw_req, devlist);
if (next->status == DASD_CQR_QUEUED) {
rc = dasd_start_diag(next);
if (rc == 0)
......@@ -296,10 +296,10 @@ dasd_ext_handler(__u16 code)
}
if (expires != 0)
dasd_set_timer(device, expires);
dasd_device_set_timer(device, expires);
else
dasd_clear_timer(device);
dasd_schedule_bh(device);
dasd_device_clear_timer(device);
dasd_schedule_device_bh(device);
spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
}
......@@ -309,6 +309,7 @@ dasd_ext_handler(__u16 code)
static int
dasd_diag_check_device(struct dasd_device *device)
{
struct dasd_block *block;
struct dasd_diag_private *private;
struct dasd_diag_characteristics *rdc_data;
struct dasd_diag_bio bio;
......@@ -328,6 +329,16 @@ dasd_diag_check_device(struct dasd_device *device)
ccw_device_get_id(device->cdev, &private->dev_id);
device->private = (void *) private;
}
block = dasd_alloc_block();
if (IS_ERR(block)) {
DEV_MESSAGE(KERN_WARNING, device, "%s",
"could not allocate dasd block structure");
kfree(device->private);
return PTR_ERR(block);
}
device->block = block;
block->base = device;
/* Read Device Characteristics */
rdc_data = (void *) &(private->rdc_data);
rdc_data->dev_nr = private->dev_id.devno;
......@@ -409,14 +420,14 @@ dasd_diag_check_device(struct dasd_device *device)
sizeof(DASD_DIAG_CMS1)) == 0) {
/* get formatted blocksize from label block */
bsize = (unsigned int) label->block_size;
device->blocks = (unsigned long) label->block_count;
block->blocks = (unsigned long) label->block_count;
} else
device->blocks = end_block;
device->bp_block = bsize;
device->s2b_shift = 0; /* bits to shift 512 to get a block */
block->blocks = end_block;
block->bp_block = bsize;
block->s2b_shift = 0; /* bits to shift 512 to get a block */
for (sb = 512; sb < bsize; sb = sb << 1)
device->s2b_shift++;
rc = mdsk_init_io(device, device->bp_block, 0, NULL);
block->s2b_shift++;
rc = mdsk_init_io(device, block->bp_block, 0, NULL);
if (rc) {
DEV_MESSAGE(KERN_WARNING, device, "DIAG initialization "
"failed (rc=%d)", rc);
......@@ -424,9 +435,9 @@ dasd_diag_check_device(struct dasd_device *device)
} else {
DEV_MESSAGE(KERN_INFO, device,
"(%ld B/blk): %ldkB",
(unsigned long) device->bp_block,
(unsigned long) (device->blocks <<
device->s2b_shift) >> 1);
(unsigned long) block->bp_block,
(unsigned long) (block->blocks <<
block->s2b_shift) >> 1);
}
out:
free_page((long) label);
......@@ -436,22 +447,16 @@ dasd_diag_check_device(struct dasd_device *device)
/* Fill in virtual disk geometry for device. Return zero on success, non-zero
* otherwise. */
static int
dasd_diag_fill_geometry(struct dasd_device *device, struct hd_geometry *geo)
dasd_diag_fill_geometry(struct dasd_block *block, struct hd_geometry *geo)
{
if (dasd_check_blocksize(device->bp_block) != 0)
if (dasd_check_blocksize(block->bp_block) != 0)
return -EINVAL;
geo->cylinders = (device->blocks << device->s2b_shift) >> 10;
geo->cylinders = (block->blocks << block->s2b_shift) >> 10;
geo->heads = 16;
geo->sectors = 128 >> device->s2b_shift;
geo->sectors = 128 >> block->s2b_shift;
return 0;
}
static dasd_era_t
dasd_diag_examine_error(struct dasd_ccw_req * cqr, struct irb * stat)
{
return dasd_era_fatal;
}
static dasd_erp_fn_t
dasd_diag_erp_action(struct dasd_ccw_req * cqr)
{
......@@ -466,8 +471,9 @@ dasd_diag_erp_postaction(struct dasd_ccw_req * cqr)
/* Create DASD request from block device request. Return pointer to new
* request on success, ERR_PTR otherwise. */
static struct dasd_ccw_req *
dasd_diag_build_cp(struct dasd_device * device, struct request *req)
static struct dasd_ccw_req *dasd_diag_build_cp(struct dasd_device *memdev,
struct dasd_block *block,
struct request *req)
{
struct dasd_ccw_req *cqr;
struct dasd_diag_req *dreq;
......@@ -486,17 +492,17 @@ dasd_diag_build_cp(struct dasd_device * device, struct request *req)
rw_cmd = MDSK_WRITE_REQ;
else
return ERR_PTR(-EINVAL);
blksize = device->bp_block;
blksize = block->bp_block;
/* Calculate record id of first and last block. */
first_rec = req->sector >> device->s2b_shift;
last_rec = (req->sector + req->nr_sectors - 1) >> device->s2b_shift;
first_rec = req->sector >> block->s2b_shift;
last_rec = (req->sector + req->nr_sectors - 1) >> block->s2b_shift;
/* Check struct bio and count the number of blocks for the request. */
count = 0;
rq_for_each_segment(bv, req, iter) {
if (bv->bv_len & (blksize - 1))
/* Fba can only do full blocks. */
return ERR_PTR(-EINVAL);
count += bv->bv_len >> (device->s2b_shift + 9);
count += bv->bv_len >> (block->s2b_shift + 9);
}
/* Paranoia. */
if (count != last_rec - first_rec + 1)
......@@ -505,7 +511,7 @@ dasd_diag_build_cp(struct dasd_device * device, struct request *req)
datasize = sizeof(struct dasd_diag_req) +
count*sizeof(struct dasd_diag_bio);
cqr = dasd_smalloc_request(dasd_diag_discipline.name, 0,
datasize, device);
datasize, memdev);
if (IS_ERR(cqr))
return cqr;
......@@ -529,7 +535,9 @@ dasd_diag_build_cp(struct dasd_device * device, struct request *req)
cqr->buildclk = get_clock();
if (req->cmd_flags & REQ_FAILFAST)
set_bit(DASD_CQR_FLAGS_FAILFAST, &cqr->flags);
cqr->device = device;
cqr->startdev = memdev;
cqr->memdev = memdev;
cqr->block = block;
cqr->expires = DIAG_TIMEOUT;
cqr->status = DASD_CQR_FILLED;
return cqr;
......@@ -543,10 +551,15 @@ dasd_diag_free_cp(struct dasd_ccw_req *cqr, struct request *req)
int status;
status = cqr->status == DASD_CQR_DONE;
dasd_sfree_request(cqr, cqr->device);
dasd_sfree_request(cqr, cqr->memdev);
return status;
}
static void dasd_diag_handle_terminated_request(struct dasd_ccw_req *cqr)
{
cqr->status = DASD_CQR_FILLED;
};
/* Fill in IOCTL data for device. */
static int
dasd_diag_fill_info(struct dasd_device * device,
......@@ -583,7 +596,7 @@ static struct dasd_discipline dasd_diag_discipline = {
.fill_geometry = dasd_diag_fill_geometry,
.start_IO = dasd_start_diag,
.term_IO = dasd_diag_term_IO,
.examine_error = dasd_diag_examine_error,
.handle_terminated_request = dasd_diag_handle_terminated_request,
.erp_action = dasd_diag_erp_action,
.erp_postaction = dasd_diag_erp_postaction,
.build_cp = dasd_diag_build_cp,
......
(This diff is collapsed.)
......@@ -39,6 +39,8 @@
#define DASD_ECKD_CCW_READ_CKD_MT 0x9e
#define DASD_ECKD_CCW_WRITE_CKD_MT 0x9d
#define DASD_ECKD_CCW_RESERVE 0xB4
#define DASD_ECKD_CCW_PFX 0xE7
#define DASD_ECKD_CCW_RSCK 0xF9
/*
* Perform Subsystem Function / Sub-Orders
......@@ -137,6 +139,25 @@ struct LO_eckd_data {
__u16 length;
} __attribute__ ((packed));
/* Prefix data for format 0x00 and 0x01 */
struct PFX_eckd_data {
unsigned char format;
struct {
unsigned char define_extend:1;
unsigned char time_stamp:1;
unsigned char verify_base:1;
unsigned char hyper_pav:1;
unsigned char reserved:4;
} __attribute__ ((packed)) validity;
__u8 base_address;
__u8 aux;
__u8 base_lss;
__u8 reserved[7];
struct DE_eckd_data define_extend;
struct LO_eckd_data locate_record;
__u8 LO_extended_data[4];
} __attribute__ ((packed));
struct dasd_eckd_characteristics {
__u16 cu_type;
struct {
......@@ -254,7 +275,9 @@ struct dasd_eckd_confdata {
} __attribute__ ((packed)) ned;
struct {
unsigned char flags; /* byte 0 */
unsigned char res2[7]; /* byte 1- 7 */
unsigned char res1; /* byte 1 */
__u16 format; /* byte 2-3 */
unsigned char res2[4]; /* byte 4-7 */
unsigned char sua_flags; /* byte 8 */
__u8 base_unit_addr; /* byte 9 */
unsigned char res3[22]; /* byte 10-31 */
......@@ -343,6 +366,11 @@ struct dasd_eckd_path {
__u8 npm;
};
struct dasd_rssd_features {
char feature[256];
} __attribute__((packed));
/*
* Perform Subsystem Function - Prepare for Read Subsystem Data
*/
......@@ -365,4 +393,99 @@ struct dasd_psf_ssc_data {
unsigned char reserved[59];
} __attribute__((packed));
/*
* some structures and definitions for alias handling
*/
struct dasd_unit_address_configuration {
struct {
char ua_type;
char base_ua;
} unit[256];
} __attribute__((packed));
#define MAX_DEVICES_PER_LCU 256
/* flags on the LCU */
#define NEED_UAC_UPDATE 0x01
#define UPDATE_PENDING 0x02
enum pavtype {NO_PAV, BASE_PAV, HYPER_PAV};
struct alias_root {
struct list_head serverlist;
spinlock_t lock;
};
struct alias_server {
struct list_head server;
struct dasd_uid uid;
struct list_head lculist;
};
struct summary_unit_check_work_data {
char reason;
struct dasd_device *device;
struct work_struct worker;
};
struct read_uac_work_data {
struct dasd_device *device;
struct delayed_work dwork;
};
struct alias_lcu {
struct list_head lcu;
struct dasd_uid uid;
enum pavtype pav;
char flags;
spinlock_t lock;
struct list_head grouplist;
struct list_head active_devices;
struct list_head inactive_devices;
struct dasd_unit_address_configuration *uac;
struct summary_unit_check_work_data suc_data;
struct read_uac_work_data ruac_data;
struct dasd_ccw_req *rsu_cqr;
};
struct alias_pav_group {
struct list_head group;
struct dasd_uid uid;
struct alias_lcu *lcu;
struct list_head baselist;
struct list_head aliaslist;
struct dasd_device *next;
};
struct dasd_eckd_private {
struct dasd_eckd_characteristics rdc_data;
struct dasd_eckd_confdata conf_data;
struct dasd_eckd_path path_data;
struct eckd_count count_area[5];
int init_cqr_status;
int uses_cdl;
struct attrib_data_t attrib; /* e.g. cache operations */
struct dasd_rssd_features features;
/* alias management */
struct dasd_uid uid;
struct alias_pav_group *pavgroup;
struct alias_lcu *lcu;
int count;
};
int dasd_alias_make_device_known_to_lcu(struct dasd_device *);
void dasd_alias_disconnect_device_from_lcu(struct dasd_device *);
int dasd_alias_add_device(struct dasd_device *);
int dasd_alias_remove_device(struct dasd_device *);
struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *);
void dasd_alias_handle_summary_unit_check(struct dasd_device *, struct irb *);
void dasd_eckd_reset_ccw_to_base_io(struct dasd_ccw_req *);
#endif /* DASD_ECKD_H */
......@@ -336,7 +336,7 @@ static void dasd_eer_write_snss_trigger(struct dasd_device *device,
unsigned long flags;
struct eerbuffer *eerb;
snss_rc = (cqr->status == DASD_CQR_FAILED) ? -EIO : 0;
snss_rc = (cqr->status == DASD_CQR_DONE) ? 0 : -EIO;
if (snss_rc)
data_size = 0;
else
......@@ -404,10 +404,11 @@ void dasd_eer_snss(struct dasd_device *device)
set_bit(DASD_FLAG_EER_SNSS, &device->flags);
return;
}
/* cdev is already locked, can't use dasd_add_request_head */
clear_bit(DASD_FLAG_EER_SNSS, &device->flags);
cqr->status = DASD_CQR_QUEUED;
list_add(&cqr->list, &device->ccw_queue);
dasd_schedule_bh(device);
list_add(&cqr->devlist, &device->ccw_queue);
dasd_schedule_device_bh(device);
}
/*
......@@ -415,7 +416,7 @@ void dasd_eer_snss(struct dasd_device *device)
*/
static void dasd_eer_snss_cb(struct dasd_ccw_req *cqr, void *data)
{
struct dasd_device *device = cqr->device;
struct dasd_device *device = cqr->startdev;
unsigned long flags;
dasd_eer_write(device, cqr, DASD_EER_STATECHANGE);
......@@ -458,7 +459,7 @@ int dasd_eer_enable(struct dasd_device *device)
if (!cqr)
return -ENOMEM;
cqr->device = device;
cqr->startdev = device;
cqr->retries = 255;
cqr->expires = 10 * HZ;
clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
......
(This diff is collapsed.)