Commit 21c7075f authored by Linus Torvalds

Merge branch 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6

* 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6: (21 commits)
  [S390] use siginfo for sigtrap signals
  [S390] dasd: add enhanced DASD statistics interface
  [S390] kvm: make sigp emerg smp capable
  [S390] disable cpu measurement alerts on a dying cpu
  [S390] initial cr0 bits
  [S390] iucv cr0 enablement bit
  [S390] race safe external interrupt registration
  [S390] remove tape block docu
  [S390] ap: toleration support for ap device type 10
  [S390] cleanup program check handler prototypes
  [S390] remove kvm mmu reload on s390
  [S390] Use gmap translation for accessing guest memory
  [S390] use gmap address spaces for kvm guest images
  [S390] kvm guest address space mapping
  [S390] fix s390 assembler code alignments
  [S390] move sie code to entry.S
  [S390] kvm: handle tprot intercepts
  [S390] qdio: clear shared DSCI before scheduling the queue handler
  [S390] reference bit testing for unmapped pages
  [S390] irqs: Do not trace arch_local_{*,irq_*} functions
  ...
Channel attached Tape device driver
-----------------------------WARNING-----------------------------------------
This driver is considered to be EXPERIMENTAL. Do NOT use it in
production environments. Feel free to test it and report problems back to us.
-----------------------------------------------------------------------------
The LINUX for zSeries tape device driver manages channel attached tape drives
that are compatible with IBM 3480 or IBM 3490 magnetic tape subsystems. This
includes various models of these devices (for example the 3490E).
Tape driver features
The device driver supports a maximum of 128 tape devices.
No official LINUX device major number is assigned to the zSeries tape device
driver. It allocates major numbers dynamically and reports them on system
startup.
Typically it will get major number 254 for both the character device front-end
and the block device front-end.
The tape device driver needs no kernel parameters. All supported devices
present in the system are detected when the driver initializes, at system
startup or module load. The detected devices are ordered by their subchannel
numbers: the device with the lowest subchannel number becomes device 0, the
next one device 1, and so on.
Tape character device front-end
The usual way to read from or write to the tape device is through the character
device front-end. The zSeries tape device driver provides two character devices
for each physical device: the first rewinds automatically when it is closed,
the second does not.
The character device nodes are named /dev/rtibm0 (rewinding) and /dev/ntibm0
(non-rewinding) for the first device, /dev/rtibm1 and /dev/ntibm1 for the
second, and so on.
The character device front-end can be used like any other LINUX tape device.
You can write to it and read from it using standard LINUX facilities such as
GNU tar. The mt tool can be used to perform control operations, such as
rewinding the tape or skipping a file.
Most LINUX tape software should work with either tape character device.
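For illustration, the control operations that mt issues can also be performed
directly from C through the standard Linux magnetic tape ioctl interface
(sys/mtio.h). The sketch below rewinds the first non-rewinding device node and
sets a 2048-byte block size, mirroring the mt commands used in the block device
example further down; the fixed device path and minimal error handling are
simplifications for illustration, not part of the driver documentation.

  /* sketch: rewind /dev/ntibm0 and set a 2048-byte block size,
   * the equivalent of "mt -f /dev/ntibm0 rewind" followed by
   * "mt -f /dev/ntibm0 setblk 2048" */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/mtio.h>
  #include <unistd.h>

  int main(void)
  {
          struct mtop op;
          int fd = open("/dev/ntibm0", O_RDWR);

          if (fd < 0) {
                  perror("open /dev/ntibm0");
                  return 1;
          }
          op.mt_op = MTREW;       /* rewind to beginning of tape */
          op.mt_count = 1;
          if (ioctl(fd, MTIOCTOP, &op) < 0)
                  perror("MTREW");
          op.mt_op = MTSETBLK;    /* set fixed block size */
          op.mt_count = 2048;
          if (ioctl(fd, MTIOCTOP, &op) < 0)
                  perror("MTSETBLK");
          close(fd);
          return 0;
  }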
Tape block device front-end
The tape device may also be accessed as a block device in read-only mode.
This can be used for software installation in the same way as with other
operating systems on the zSeries platform (most LINUX distributions are
shipped on compact disc using ISO9660 filesystems).
One block device node is provided for each physical device. These are named
/dev/btibm0 for the first device, /dev/btibm1 for the second and so on.
You should only use the ISO9660 filesystem on LINUX for zSeries tapes because
the physical tape devices cannot perform fast seeks and the ISO9660 filesystem
is optimized for this situation.
Tape block device example
In this example a tape with an ISO9660 filesystem is created using the first
tape device. ISO9660 filesystem support must be built into your system kernel
for this.
The mt command is used to issue tape commands and the mkisofs command to
create an ISO9660 filesystem:
- create a LINUX directory (somedir) with the contents of the filesystem
mkdir somedir
cp contents somedir
- insert a tape
- ensure the tape is at the beginning
mt -f /dev/ntibm0 rewind
- set the block size of the character driver; a block size of 2048 bytes
is commonly used on ISO9660 CD-ROMs
mt -f /dev/ntibm0 setblk 2048
- write the filesystem to the character device
mkisofs -o /dev/ntibm0 somedir
- rewind the tape again
mt -f /dev/ntibm0 rewind
- Now you can mount your new filesystem as a block device:
mount -t iso9660 -o ro,block=2048 /dev/btibm0 /mnt
TODO List
- The driver still has to be stabilized
BUGS
This driver is considered BETA, which means some weaknesses may still
be in it.
If an error occurs which cannot be handled by the code, you will get a
sense-data dump. In that case please do the following:
1. set the tape driver debug level to maximum:
echo 6 >/proc/s390dbf/tape/level
2. re-perform the actions which produced the bug. (Hopefully the bug will
reappear.)
3. get a snapshot from the debug-feature:
cat /proc/s390dbf/tape/hex_ascii >somefile
4. Put together the snapshot and a detailed description of the situation
that led to the bug:
- Which tool did you use?
- Which hardware do you have?
- Was your tape unit online?
- Is it a shared tape unit?
5. Send an email with your bug report to:
mailto:Linux390@de.ibm.com
......@@ -7,14 +7,14 @@
*/
#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
#include <asm/page.h>
#include "sizes.h"
__HEAD
.globl startup_continue
startup_continue:
ENTRY(startup_continue)
basr %r13,0 # get base
.LPG1:
# setup stack
......
......@@ -7,14 +7,14 @@
*/
#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
#include <asm/page.h>
#include "sizes.h"
__HEAD
.globl startup_continue
startup_continue:
ENTRY(startup_continue)
basr %r13,0 # get base
.LPG1:
# setup stack
......
......@@ -29,42 +29,42 @@
})
/* set system mask. */
static inline void __arch_local_irq_ssm(unsigned long flags)
static inline notrace void __arch_local_irq_ssm(unsigned long flags)
{
asm volatile("ssm %0" : : "Q" (flags) : "memory");
}
static inline unsigned long arch_local_save_flags(void)
static inline notrace unsigned long arch_local_save_flags(void)
{
return __arch_local_irq_stosm(0x00);
}
static inline unsigned long arch_local_irq_save(void)
static inline notrace unsigned long arch_local_irq_save(void)
{
return __arch_local_irq_stnsm(0xfc);
}
static inline void arch_local_irq_disable(void)
static inline notrace void arch_local_irq_disable(void)
{
arch_local_irq_save();
}
static inline void arch_local_irq_enable(void)
static inline notrace void arch_local_irq_enable(void)
{
__arch_local_irq_stosm(0x03);
}
static inline void arch_local_irq_restore(unsigned long flags)
static inline notrace void arch_local_irq_restore(unsigned long flags)
{
__arch_local_irq_ssm(flags);
}
static inline bool arch_irqs_disabled_flags(unsigned long flags)
static inline notrace bool arch_irqs_disabled_flags(unsigned long flags)
{
return !(flags & (3UL << (BITS_PER_LONG - 8)));
}
static inline bool arch_irqs_disabled(void)
static inline notrace bool arch_irqs_disabled(void)
{
return arch_irqs_disabled_flags(arch_local_save_flags());
}
......
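For context, the notrace annotation added to each of these irqflags helpers is
what the commit "[S390] irqs: Do not trace arch_local_{*,irq_*} functions" is
about: it excludes them from ftrace instrumentation, so the tracer cannot
recurse into the very functions it uses to disable interrupts. As a point of
reference (not part of this diff), the generic kernel definition of that era is
roughly:

  /* include/linux/compiler.h (contemporary kernels, simplified) */
  #define notrace __attribute__((no_instrument_function))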
......@@ -93,9 +93,7 @@ struct kvm_s390_sie_block {
__u32 scaol; /* 0x0064 */
__u8 reserved68[4]; /* 0x0068 */
__u32 todpr; /* 0x006c */
__u8 reserved70[16]; /* 0x0070 */
__u64 gmsor; /* 0x0080 */
__u64 gmslm; /* 0x0088 */
__u8 reserved70[32]; /* 0x0070 */
psw_t gpsw; /* 0x0090 */
__u64 gg14; /* 0x00a0 */
__u64 gg15; /* 0x00a8 */
......@@ -138,6 +136,7 @@ struct kvm_vcpu_stat {
u32 instruction_chsc;
u32 instruction_stsi;
u32 instruction_stfl;
u32 instruction_tprot;
u32 instruction_sigp_sense;
u32 instruction_sigp_emergency;
u32 instruction_sigp_stop;
......@@ -175,6 +174,10 @@ struct kvm_s390_prefix_info {
__u32 address;
};
struct kvm_s390_emerg_info {
__u16 code;
};
struct kvm_s390_interrupt_info {
struct list_head list;
u64 type;
......@@ -182,6 +185,7 @@ struct kvm_s390_interrupt_info {
struct kvm_s390_io_info io;
struct kvm_s390_ext_info ext;
struct kvm_s390_pgm_info pgm;
struct kvm_s390_emerg_info emerg;
struct kvm_s390_prefix_info prefix;
};
};
......@@ -226,6 +230,7 @@ struct kvm_vcpu_arch {
struct cpuid cpu_id;
u64 stidp_data;
};
struct gmap *gmap;
};
struct kvm_vm_stat {
......@@ -236,6 +241,7 @@ struct kvm_arch{
struct sca_block *sca;
debug_info_t *dbf;
struct kvm_s390_float_interrupt float_int;
struct gmap *gmap;
};
extern int sie64a(struct kvm_s390_sie_block *, unsigned long *);
......
#ifndef __ASM_LINKAGE_H
#define __ASM_LINKAGE_H
/* Nothing to see here... */
#include <linux/stringify.h>
#define __ALIGN .align 4, 0x07
#define __ALIGN_STR __stringify(__ALIGN)
#endif
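The new __ALIGN definition above is what gives ENTRY() its effect on s390:
every ".globl name; name:" pair replaced by ENTRY(name) in the assembler files
below now also gets 4-byte alignment, padded with 0x07 bytes. As a point of
reference (not part of this diff), the generic macro in include/linux/linkage.h
expands roughly to:

  /* include/linux/linkage.h (contemporary kernels, simplified) */
  #define ALIGN __ALIGN
  #define ENTRY(name) \
          .globl name; \
          ALIGN; \
          name: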
......@@ -268,7 +268,7 @@ struct _lowcore {
__u64 vdso_per_cpu_data; /* 0x0358 */
__u64 machine_flags; /* 0x0360 */
__u64 ftrace_func; /* 0x0368 */
__u64 sie_hook; /* 0x0370 */
__u64 gmap; /* 0x0370 */
__u64 cmf_hpp; /* 0x0378 */
/* Interrupt response block. */
......
......@@ -6,6 +6,7 @@ typedef struct {
unsigned int flush_mm;
spinlock_t list_lock;
struct list_head pgtable_list;
struct list_head gmap_list;
unsigned long asce_bits;
unsigned long asce_limit;
unsigned long vdso_base;
......@@ -17,6 +18,7 @@ typedef struct {
#define INIT_MM_CONTEXT(name) \
.context.list_lock = __SPIN_LOCK_UNLOCKED(name.context.list_lock), \
.context.pgtable_list = LIST_HEAD_INIT(name.context.pgtable_list),
.context.pgtable_list = LIST_HEAD_INIT(name.context.pgtable_list), \
.context.gmap_list = LIST_HEAD_INIT(name.context.gmap_list),
#endif
......@@ -20,7 +20,7 @@
unsigned long *crst_table_alloc(struct mm_struct *);
void crst_table_free(struct mm_struct *, unsigned long *);
unsigned long *page_table_alloc(struct mm_struct *);
unsigned long *page_table_alloc(struct mm_struct *, unsigned long);
void page_table_free(struct mm_struct *, unsigned long *);
#ifdef CONFIG_HAVE_RCU_TABLE_FREE
void page_table_free_rcu(struct mmu_gather *, unsigned long *);
......@@ -115,6 +115,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
{
spin_lock_init(&mm->context.list_lock);
INIT_LIST_HEAD(&mm->context.pgtable_list);
INIT_LIST_HEAD(&mm->context.gmap_list);
return (pgd_t *) crst_table_alloc(mm);
}
#define pgd_free(mm, pgd) crst_table_free(mm, (unsigned long *) pgd)
......@@ -133,8 +134,8 @@ static inline void pmd_populate(struct mm_struct *mm,
/*
* page table entry allocation/free routines.
*/
#define pte_alloc_one_kernel(mm, vmaddr) ((pte_t *) page_table_alloc(mm))
#define pte_alloc_one(mm, vmaddr) ((pte_t *) page_table_alloc(mm))
#define pte_alloc_one_kernel(mm, vmaddr) ((pte_t *) page_table_alloc(mm, vmaddr))
#define pte_alloc_one(mm, vmaddr) ((pte_t *) page_table_alloc(mm, vmaddr))
#define pte_free_kernel(mm, pte) page_table_free(mm, (unsigned long *) pte)
#define pte_free(mm, pte) page_table_free(mm, (unsigned long *) pte)
......
......@@ -654,6 +654,48 @@ static inline void pgste_set_pte(pte_t *ptep, pgste_t pgste)
#endif
}
/**
* struct gmap_struct - guest address space
* @mm: pointer to the parent mm_struct
* @table: pointer to the page directory
* @crst_list: list of all crst tables used in the guest address space
*/
struct gmap {
struct list_head list;
struct mm_struct *mm;
unsigned long *table;
struct list_head crst_list;
};
/**
* struct gmap_rmap - reverse mapping for segment table entries
* @next: pointer to the next gmap_rmap structure in the list
* @entry: pointer to a segment table entry
*/
struct gmap_rmap {
struct list_head list;
unsigned long *entry;
};
/**
* struct gmap_pgtable - gmap information attached to a page table
* @vmaddr: address of the 1MB segment in the process virtual memory
* @mapper: list of segment table entries maping a page table
*/
struct gmap_pgtable {
unsigned long vmaddr;
struct list_head mapper;
};
struct gmap *gmap_alloc(struct mm_struct *mm);
void gmap_free(struct gmap *gmap);
void gmap_enable(struct gmap *gmap);
void gmap_disable(struct gmap *gmap);
int gmap_map_segment(struct gmap *gmap, unsigned long from,
unsigned long to, unsigned long length);
int gmap_unmap_segment(struct gmap *gmap, unsigned long to, unsigned long len);
unsigned long gmap_fault(unsigned long address, struct gmap *);
/*
* Certain architectures need to do special things when PTEs
* within a page table are directly modified. Thus, the following
......
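Taken together, the gmap structures and prototypes above form the new guest
address space API that the KVM changes later in this commit build on. Below is
a condensed sketch of the life cycle, pieced together from that KVM usage
(identifiers such as userspace_addr and guest_addr are illustrative, and all
error handling is omitted):

  /* one gmap per VM, anchored in the creating process's mm */
  struct gmap *gmap = gmap_alloc(current->mm);

  /* back guest real memory with a range of host virtual memory */
  gmap_map_segment(gmap, userspace_addr, guest_phys_addr, memory_size);

  gmap_enable(gmap);      /* switch this cpu to the guest address space */
  /* ... sie64a() runs the guest; on an intercept the host resolves
   * a guest address to a host virtual address: */
  unsigned long vmaddr = gmap_fault(guest_addr, gmap);
  gmap_disable(gmap);     /* back to the normal host address space */

  gmap_free(gmap);        /* torn down when the VM is destroyed */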
......@@ -80,6 +80,7 @@ struct thread_struct {
mm_segment_t mm_segment;
unsigned long prot_addr; /* address of protection-excep. */
unsigned int trap_no;
unsigned long gmap_addr; /* address of last gmap fault. */
struct per_regs per_user; /* User specified PER registers */
struct per_event per_event; /* Cause of the last PER trap */
/* pfault_wait is used to block the process on a pfault event */
......
......@@ -94,6 +94,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_SYSCALL_AUDIT 9 /* syscall auditing active */
#define TIF_SECCOMP 10 /* secure computing */
#define TIF_SYSCALL_TRACEPOINT 11 /* syscall tracepoint instrumentation */
#define TIF_SIE 12 /* guest execution active */
#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling
TIF_NEED_RESCHED */
#define TIF_31BIT 17 /* 32bit process */
......@@ -113,6 +114,7 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
#define _TIF_SECCOMP (1<<TIF_SECCOMP)
#define _TIF_SYSCALL_TRACEPOINT (1<<TIF_SYSCALL_TRACEPOINT)
#define _TIF_SIE (1<<TIF_SIE)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
#define _TIF_31BIT (1<<TIF_31BIT)
#define _TIF_SINGLE_STEP (1<<TIF_FREEZE)
......
......@@ -80,7 +80,7 @@ static inline void __tlb_flush_mm(struct mm_struct * mm)
* on all cpus instead of doing a local flush if the mm
* only ran on the local cpu.
*/
if (MACHINE_HAS_IDTE)
if (MACHINE_HAS_IDTE && list_empty(&mm->context.gmap_list))
__tlb_flush_idte((unsigned long) mm->pgd |
mm->context.asce_bits);
else
......
......@@ -151,7 +151,7 @@ int main(void)
DEFINE(__LC_FP_CREG_SAVE_AREA, offsetof(struct _lowcore, fpt_creg_save_area));
DEFINE(__LC_LAST_BREAK, offsetof(struct _lowcore, breaking_event_addr));
DEFINE(__LC_VDSO_PER_CPU, offsetof(struct _lowcore, vdso_per_cpu_data));
DEFINE(__LC_SIE_HOOK, offsetof(struct _lowcore, sie_hook));
DEFINE(__LC_GMAP, offsetof(struct _lowcore, gmap));
DEFINE(__LC_CMF_HPP, offsetof(struct _lowcore, cmf_hpp));
#endif /* CONFIG_32BIT */
return 0;
......
......@@ -6,13 +6,13 @@
* Michael Holzheu <holzheu@de.ibm.com>
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/ptrace.h>
#ifdef CONFIG_64BIT
.globl s390_base_mcck_handler
s390_base_mcck_handler:
ENTRY(s390_base_mcck_handler)
basr %r13,0
0: lg %r15,__LC_PANIC_STACK # load panic stack
aghi %r15,-STACK_FRAME_OVERHEAD
......@@ -26,13 +26,13 @@ s390_base_mcck_handler:
lpswe __LC_MCK_OLD_PSW
.section .bss
.align 8
.globl s390_base_mcck_handler_fn
s390_base_mcck_handler_fn:
.quad 0
.previous
.globl s390_base_ext_handler
s390_base_ext_handler:
ENTRY(s390_base_ext_handler)
stmg %r0,%r15,__LC_SAVE_AREA
basr %r13,0
0: aghi %r15,-STACK_FRAME_OVERHEAD
......@@ -46,13 +46,13 @@ s390_base_ext_handler:
lpswe __LC_EXT_OLD_PSW
.section .bss
.align 8
.globl s390_base_ext_handler_fn
s390_base_ext_handler_fn:
.quad 0
.previous
.globl s390_base_pgm_handler
s390_base_pgm_handler:
ENTRY(s390_base_pgm_handler)
stmg %r0,%r15,__LC_SAVE_AREA
basr %r13,0
0: aghi %r15,-STACK_FRAME_OVERHEAD
......@@ -70,6 +70,7 @@ disabled_wait_psw:
.quad 0x0002000180000000,0x0000000000000000 + s390_base_pgm_handler
.section .bss
.align 8
.globl s390_base_pgm_handler_fn
s390_base_pgm_handler_fn:
.quad 0
......@@ -77,8 +78,7 @@ s390_base_pgm_handler_fn:
#else /* CONFIG_64BIT */
.globl s390_base_mcck_handler
s390_base_mcck_handler:
ENTRY(s390_base_mcck_handler)
basr %r13,0
0: l %r15,__LC_PANIC_STACK # load panic stack
ahi %r15,-STACK_FRAME_OVERHEAD
......@@ -93,13 +93,13 @@ s390_base_mcck_handler:
2: .long s390_base_mcck_handler_fn
.section .bss
.align 4
.globl s390_base_mcck_handler_fn
s390_base_mcck_handler_fn:
.long 0
.previous
.globl s390_base_ext_handler
s390_base_ext_handler:
ENTRY(s390_base_ext_handler)
stm %r0,%r15,__LC_SAVE_AREA
basr %r13,0
0: ahi %r15,-STACK_FRAME_OVERHEAD
......@@ -115,13 +115,13 @@ s390_base_ext_handler:
2: .long s390_base_ext_handler_fn
.section .bss
.align 4
.globl s390_base_ext_handler_fn
s390_base_ext_handler_fn:
.long 0
.previous
.globl s390_base_pgm_handler
s390_base_pgm_handler:
ENTRY(s390_base_pgm_handler)
stm %r0,%r15,__LC_SAVE_AREA
basr %r13,0
0: ahi %r15,-STACK_FRAME_OVERHEAD
......@@ -142,6 +142,7 @@ disabled_wait_psw:
.long 0x000a0000,0x00000000 + s390_base_pgm_handler
.section .bss
.align 4
.globl s390_base_pgm_handler_fn
s390_base_pgm_handler_fn:
.long 0
......
This diff is collapsed.
......@@ -9,8 +9,8 @@
* Heiko Carstens <heiko.carstens@de.ibm.com>
*/
#include <linux/linkage.h>
#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/cache.h>
#include <asm/errno.h>
#include <asm/ptrace.h>
......@@ -197,8 +197,7 @@ STACK_SIZE = 1 << STACK_SHIFT
* Returns:
* gpr2 = prev
*/
.globl __switch_to
__switch_to:
ENTRY(__switch_to)
basr %r1,0
0: l %r4,__THREAD_info(%r2) # get thread_info of prev
l %r5,__THREAD_info(%r3) # get thread_info of next
......@@ -224,8 +223,7 @@ __critical_start:
* are executed with interrupts enabled.
*/
.globl system_call
system_call:
ENTRY(system_call)
stpt __LC_SYNC_ENTER_TIMER
sysc_saveall:
SAVE_ALL_SVC __LC_SVC_OLD_PSW,__LC_SAVE_AREA
......@@ -388,8 +386,7 @@ sysc_tracenogo:
#
# a new process exits the kernel with ret_from_fork
#
.globl ret_from_fork
ret_from_fork:
ENTRY(ret_from_fork)
l %r13,__LC_SVC_NEW_PSW+4
l %r12,__LC_THREAD_INFO # load pointer to thread_info struct
tm SP_PSW+1(%r15),0x01 # forking a kernel thread ?
......@@ -405,8 +402,7 @@ ret_from_fork:
# kernel_execve function needs to deal with pt_regs that is not
# at the usual place
#
.globl kernel_execve
kernel_execve:
ENTRY(kernel_execve)
stm %r12,%r15,48(%r15)
lr %r14,%r15
l %r13,__LC_SVC_NEW_PSW+4
......@@ -438,8 +434,7 @@ kernel_execve:
* Program check handler routine
*/
.globl pgm_check_handler
pgm_check_handler:
ENTRY(pgm_check_handler)
/*
* First we need to check for a special case:
* Single stepping an instruction that disables the PER event mask will
......@@ -565,8 +560,7 @@ kernel_per:
* IO interrupt handler routine
*/
.globl io_int_handler
io_int_handler:
ENTRY(io_int_handler)
stck __LC_INT_CLOCK
stpt __LC_ASYNC_ENTER_TIMER
SAVE_ALL_ASYNC __LC_IO_OLD_PSW,__LC_SAVE_AREA+16
......@@ -703,8 +697,7 @@ io_notify_resume:
* External interrupt handler routine
*/
.globl ext_int_handler
ext_int_handler:
ENTRY(ext_int_handler)
stck __LC_INT_CLOCK
stpt __LC_ASYNC_ENTER_TIMER
SAVE_ALL_ASYNC __LC_EXT_OLD_PSW,__LC_SAVE_AREA+16
......@@ -731,8 +724,7 @@ __critical_end:
* Machine check handler routines
*/
.globl mcck_int_handler
mcck_int_handler:
ENTRY(mcck_int_handler)
stck __LC_MCCK_CLOCK
spt __LC_CPU_TIMER_SAVE_AREA # revalidate cpu timer
lm %r0,%r15,__LC_GPREGS_SAVE_AREA # revalidate gprs
......@@ -818,8 +810,7 @@ mcck_return:
*/
#ifdef CONFIG_SMP
__CPUINIT
.globl restart_int_handler
restart_int_handler:
ENTRY(restart_int_handler)
basr %r1,0
restart_base:
spt restart_vtime-restart_base(%r1)
......@@ -848,8 +839,7 @@ restart_vtime:
/*
* If we do not run with SMP enabled, let the new CPU crash ...
*/
.globl restart_int_handler
restart_int_handler:
ENTRY(restart_int_handler)
basr %r1,0
restart_base:
lpsw restart_crash-restart_base(%r1)
......
......@@ -5,10 +5,9 @@
#include <linux/signal.h>
#include <asm/ptrace.h>
typedef void pgm_check_handler_t(struct pt_regs *, long, unsigned long);
extern pgm_check_handler_t *pgm_check_table[128];
pgm_check_handler_t do_protection_exception;
pgm_check_handler_t do_dat_exception;
void do_protection_exception(struct pt_regs *, long, unsigned long);
void do_dat_exception(struct pt_regs *, long, unsigned long);
void do_asce_exception(struct pt_regs *, long, unsigned long);
extern int sysctl_userprocess_debug;
......
......@@ -9,8 +9,8 @@
* Heiko Carstens <heiko.carstens@de.ibm.com>
*/
#include <linux/linkage.h>
#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/cache.h>
#include <asm/errno.h>
#include <asm/ptrace.h>
......@@ -56,15 +56,28 @@ _TIF_WORK_INT = (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_NEED_RESCHED | \
_TIF_MCCK_PENDING)
_TIF_SYSCALL = (_TIF_SYSCALL_TRACE>>8 | _TIF_SYSCALL_AUDIT>>8 | \
_TIF_SECCOMP>>8 | _TIF_SYSCALL_TRACEPOINT>>8)
_TIF_EXIT_SIE = (_TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_MCCK_PENDING)
#define BASED(name) name-system_call(%r13)
.macro SPP newpp
#if defined(CONFIG_KVM) || defined(CONFIG_KVM_MODULE)
tm __LC_MACHINE_FLAGS+6,0x20 # MACHINE_FLAG_SPP
jz .+8
.insn s,0xb2800000,\newpp
#endif
.endm
.macro HANDLE_SIE_INTERCEPT
#if defined(CONFIG_KVM) || defined(CONFIG_KVM_MODULE)
lg %r3,__LC_SIE_HOOK
ltgr %r3,%r3
tm __TI_flags+6(%r12),_TIF_SIE>>8
jz 0f
basr %r14,%r3
SPP __LC_CMF_HPP # set host id
clc SP_PSW+8(8,%r15),BASED(.Lsie_loop)
jl 0f
clc SP_PSW+8(8,%r15),BASED(.Lsie_done)
jhe 0f
mvc SP_PSW+8(8,%r15),BASED(.Lsie_loop)
0:
#endif
.endm
......@@ -206,8 +219,7 @@ _TIF_SYSCALL = (_TIF_SYSCALL_TRACE>>8 | _TIF_SYSCALL_AUDIT>>8 | \
* Returns:
* gpr2 = prev
*/
.globl __switch_to
__switch_to:
ENTRY(__switch_to)
lg %r4,__THREAD_info(%r2) # get thread_info of prev
lg %r5,__THREAD_info(%r3) # get thread_info of next
tm __TI_flags+7(%r4),_TIF_MCCK_PENDING # machine check pending?
......@@ -232,8 +244,7 @@ __critical_start:
* are executed with interrupts enabled.
*/
.globl system_call
system_call:
ENTRY(system_call)
stpt __LC_SYNC_ENTER_TIMER
sysc_saveall:
SAVE_ALL_SVC __LC_SVC_OLD_PSW,__LC_SAVE_AREA
......@@ -395,8 +406,7 @@ sysc_tracenogo:
#
# a new process exits the kernel with ret_from_fork
#
.globl ret_from_fork
ret_from_fork:
ENTRY(ret_from_fork)
lg %r13,__LC_SVC_NEW_PSW+8
lg %r12,__LC_THREAD_INFO # load pointer to thread_info struct
tm SP_PSW+1(%r15),0x01 # forking a kernel thread ?
......@@ -411,8 +421,7 @@ ret_from_fork:
# kernel_execve function needs to deal with pt_regs that is not
# at the usual place
#
.globl kernel_execve
kernel_execve:
ENTRY(kernel_execve)
stmg %r12,%r15,96(%r15)
lgr %r14,%r15
aghi %r15,-SP_SIZE
......@@ -442,8 +451,7 @@ kernel_execve:
* Program check handler routine
*/
.globl pgm_check_handler
pgm_check_handler:
ENTRY(pgm_check_handler)
/*
* First we need to check for a special case:
* Single stepping an instruction that disables the PER event mask will
......@@ -465,6 +473,7 @@ pgm_check_handler:
xc SP_ILC(4,%r15),SP_ILC(%r15)
mvc SP_PSW(16,%r15),__LC_PGM_OLD_PSW
lg %r12,__LC_THREAD_INFO # load pointer to thread_info struct
HANDLE_SIE_INTERCEPT
tm SP_PSW+1(%r15),0x01 # interrupting from user ?
jz pgm_no_vtime
UPDATE_VTIME __LC_EXIT_TIMER,__LC_SYNC_ENTER_TIMER,__LC_USER_TIMER
......@@ -472,7 +481,6 @@ pgm_check_handler:
mvc __LC_LAST_UPDATE_TIMER(8),__LC_SYNC_ENTER_TIMER
LAST_BREAK
pgm_no_vtime:
HANDLE_SIE_INTERCEPT
stg %r11,SP_ARGS(%r15)
lgf %r3,__LC_PGM_ILC # load program interruption code
lg %r4,__LC_TRANS_EXC_CODE
......@@ -507,6 +515,7 @@ pgm_per_std:
CREATE_STACK_FRAME __LC_SAVE_AREA
mvc SP_PSW(16,%r15),__LC_PGM_OLD_PSW
lg %r12,__LC_THREAD_INFO # load pointer to thread_info struct
HANDLE_SIE_INTERCEPT
tm SP_PSW+1(%r15),0x01 # interrupting from user ?
jz pgm_no_vtime2
UPDATE_VTIME __LC_EXIT_TIMER,__LC_SYNC_ENTER_TIMER,__LC_USER_TIMER
......@@ -514,7 +523,6 @@ pgm_per_std:
mvc __LC_LAST_UPDATE_TIMER(8),__LC_SYNC_ENTER_TIMER
LAST_BREAK
pgm_no_vtime2:
HANDLE_SIE_INTERCEPT
lg %r1,__TI_task(%r12)
tm SP_PSW+1(%r15),0x01 # kernel per event ?
jz kernel_per
......@@ -571,14 +579,14 @@ kernel_per:
/*
* IO interrupt handler routine
*/
.globl io_int_handler
io_int_handler:
ENTRY(io_int_handler)
stck __LC_INT_CLOCK
stpt __LC_ASYNC_ENTER_TIMER
SAVE_ALL_ASYNC __LC_IO_OLD_PSW,__LC_SAVE_AREA+40
CREATE_STACK_FRAME __LC_SAVE_AREA+40
mvc SP_PSW(16,%r15),0(%r12) # move user PSW to stack
lg %r12,__LC_THREAD_INFO # load pointer to thread_info struct
HANDLE_SIE_INTERCEPT
tm SP_PSW+1(%r15),0x01 # interrupting from user ?
jz io_no_vtime
UPDATE_VTIME __LC_EXIT_TIMER,__LC_ASYNC_ENTER_TIMER,__LC_USER_TIMER
......@@ -586,7 +594,6 @@ io_int_handler:
mvc __LC_LAST_UPDATE_TIMER(8),__LC_ASYNC_ENTER_TIMER
LAST_BREAK
io_no_vtime:
HANDLE_SIE_INTERCEPT
TRACE_IRQS_OFF
la %r2,SP_PTREGS(%r15) # address of register-save area
brasl %r14,do_IRQ # call standard irq handler
......@@ -706,14 +713,14 @@ io_notify_resume:
/*
* External interrupt handler routine
*/
.globl ext_int_handler
ext_int_handler:
ENTRY(ext_int_handler)
stck __LC_INT_CLOCK
stpt __LC_ASYNC_ENTER_TIMER
SAVE_ALL_ASYNC __LC_EXT_OLD_PSW,__LC_SAVE_AREA+40
CREATE_STACK_FRAME __LC_SAVE_AREA+40
mvc SP_PSW(16,%r15),0(%r12) # move user PSW to stack
lg %r12,__LC_THREAD_INFO # load pointer to thread_info struct
HANDLE_SIE_INTERCEPT
tm SP_PSW+1(%r15),0x01 # interrupting from user ?
jz ext_no_vtime
UPDATE_VTIME __LC_EXIT_TIMER,__LC_ASYNC_ENTER_TIMER,__LC_USER_TIMER
......@@ -721,7 +728,6 @@ ext_int_handler:
mvc __LC_LAST_UPDATE_TIMER(8),__LC_ASYNC_ENTER_TIMER
LAST_BREAK
ext_no_vtime:
HANDLE_SIE_INTERCEPT
TRACE_IRQS_OFF
lghi %r1,4096
la %r2,SP_PTREGS(%r15) # address of register-save area
......@@ -736,8 +742,7 @@ __critical_end:
/*
* Machine check handler routines
*/
.globl mcck_int_handler
mcck_int_handler:
ENTRY(mcck_int_handler)
stck __LC_MCCK_CLOCK
la %r1,4095 # revalidate r1
spt __LC_CPU_TIMER_SAVE_AREA-4095(%r1) # revalidate cpu timer
......@@ -785,6 +790,7 @@ mcck_int_main:
lg %r12,__LC_THREAD_INFO # load pointer to thread_info struct
tm __LC_MCCK_CODE+2,0x08 # mwp of old psw valid?
jno mcck_no_vtime # no -> no timer update
HANDLE_SIE_INTERCEPT
tm SP_PSW+1(%r15),0x01 # interrupting from user ?
jz mcck_no_vtime
UPDATE_VTIME __LC_EXIT_TIMER,__LC_MCCK_ENTER_TIMER,__LC_USER_TIMER
......@@ -804,7 +810,6 @@ mcck_no_vtime:
stosm __SF_EMPTY(%r15),0x04 # turn dat on
tm __TI_flags+7(%r12),_TIF_MCCK_PENDING
jno mcck_return
HANDLE_SIE_INTERCEPT
TRACE_IRQS_OFF
brasl %r14,s390_handle_mcck
TRACE_IRQS_ON
......@@ -823,8 +828,7 @@ mcck_done:
*/
#ifdef CONFIG_SMP
__CPUINIT
.globl restart_int_handler
restart_int_handler:
ENTRY(restart_int_handler)
basr %r1,0
restart_base:
spt restart_vtime-restart_base(%r1)
......@@ -851,8 +855,7 @@ restart_vtime:
/*
* If we do not run with SMP enabled, let the new CPU crash ...
*/
.globl restart_int_handler
restart_int_handler:
ENTRY(restart_int_handler)
basr %r1,0
restart_base:
lpswe restart_crash-restart_base(%r1)
......@@ -1036,6 +1039,56 @@ cleanup_io_restore_insn:
.Lcritical_end:
.quad __critical_end
#if defined(CONFIG_KVM) || defined(CONFIG_KVM_MODULE)
/*
* sie64a calling convention:
* %r2 pointer to sie control block
* %r3 guest register save area
*/
ENTRY(sie64a)
stmg %r6,%r14,__SF_GPRS(%r15) # save kernel registers
stg %r2,__SF_EMPTY(%r15) # save control block pointer
stg %r3,__SF_EMPTY+8(%r15) # save guest register save area
lmg %r0,%r13,0(%r3) # load guest gprs 0-13
lg %r14,__LC_THREAD_INFO # pointer thread_info struct
oi __TI_flags+6(%r14),_TIF_SIE>>8
sie_loop:
lg %r14,__LC_THREAD_INFO # pointer thread_info struct
tm __TI_flags+7(%r14),_TIF_EXIT_SIE
jnz sie_exit
lg %r14,__SF_EMPTY(%r15) # get control block pointer
SPP __SF_EMPTY(%r15) # set guest id
sie 0(%r14)
sie_done:
SPP __LC_CMF_HPP # set host id
lg %r14,__LC_THREAD_INFO # pointer thread_info struct
sie_exit:
ni __TI_flags+6(%r14),255-(_TIF_SIE>>8)
lg %r14,__SF_EMPTY+8(%r15) # load guest register save area
stmg %r0,%r13,0(%r14) # save guest gprs 0-13
lmg %r6,%r14,__SF_GPRS(%r15) # restore kernel registers
lghi %r2,0
br %r14
sie_fault:
lg %r14,__LC_THREAD_INFO # pointer thread_info struct
ni __TI_flags+6(%r14),255-(_TIF_SIE>>8)
lg %r14,__SF_EMPTY+8(%r15) # load guest register save area
stmg %r0,%r13,0(%r14) # save guest gprs 0-13
lmg %r6,%r14,__SF_GPRS(%r15) # restore kernel registers
lghi %r2,-EFAULT
br %r14
.align 8
.Lsie_loop:
.quad sie_loop
.Lsie_done:
.quad sie_done
.section __ex_table,"a"
.quad sie_loop,sie_fault
.previous
#endif
.section .rodata, "a"
#define SYSCALL(esa,esame,emu) .long esame
.globl sys_call_table
......
......@@ -22,6 +22,7 @@
*/
#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
#include <asm/page.h>
......@@ -383,8 +384,7 @@ iplstart:
# doesn't need a builtin ipl record.
#
.org 0x800
.globl start
start:
ENTRY(start)
stm %r0,%r15,0x07b0 # store registers
basr %r12,%r0
.base:
......@@ -448,8 +448,7 @@ start:
# or linload or SALIPL
#
.org 0x10000
.globl startup
startup:
ENTRY(startup)
basr %r13,0 # get base
.LPG0:
xc 0x200(256),0x200 # partially clear lowcore
......
......@@ -11,13 +11,13 @@
*/
#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
#include <asm/page.h>
__HEAD
.globl startup_continue
startup_continue:
ENTRY(startup_continue)
basr %r13,0 # get base
.LPG1:
......@@ -45,7 +45,7 @@ startup_continue:
# virtual and never return ...
.align 8
.Lentry:.long 0x00080000,0x80000000 + _stext
.Lctl: .long 0x04b50002 # cr0: various things
.Lctl: .long 0x04b50000 # cr0: various things
.long 0 # cr1: primary space segment table
.long .Lduct # cr2: dispatchable unit control table
.long 0 # cr3: instruction authorization
......@@ -78,8 +78,7 @@ startup_continue:
.Lbase_cc:
.long sched_clock_base_cc
.globl _ehead
_ehead:
ENTRY(_ehead)
#ifdef CONFIG_SHARED_KERNEL
.org 0x100000 - 0x11000 # head.o ends at 0x11000
......@@ -88,8 +87,8 @@ _ehead:
#
# startup-code, running in absolute addressing mode
#
.globl _stext
_stext: basr %r13,0 # get base
ENTRY(_stext)
basr %r13,0 # get base
.LPG3:
# check control registers
stctl %c0,%c15,0(%r15)
......
......@@ -11,13 +11,13 @@
*/
#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
#include <asm/page.h>
__HEAD
.globl startup_continue
startup_continue:
ENTRY(startup_continue)
larl %r1,sched_clock_base_cc
mvc 0(8,%r1),__LC_LAST_UPDATE_CLOCK
larl %r13,.LPG1 # get base
......@@ -46,7 +46,7 @@ startup_continue:
.align 16
.LPG1:
.Lentry:.quad 0x0000000180000000,_stext
.Lctl: .quad 0x04350002 # cr0: various things
.Lctl: .quad 0x04040000 # cr0: AFP registers & secondary space
.quad 0 # cr1: primary space segment table
.quad .Lduct # cr2: dispatchable unit control table
.quad 0 # cr3: instruction authorization
......@@ -76,8 +76,7 @@ startup_continue:
.long 0x80000000,0,0,0 # invalid access-list entries
.endr
.globl _ehead
_ehead:
ENTRY(_ehead)
#ifdef CONFIG_SHARED_KERNEL
.org 0x100000 - 0x11000 # head.o ends at 0x11000
......@@ -86,8 +85,8 @@ _ehead:
#
# startup-code, running in absolute addressing mode
#
.globl _stext
_stext: basr %r13,0 # get base
ENTRY(_stext)
basr %r13,0 # get base
.LPG3:
# check control registers
stctg %c0,%c15,0(%r15)
......
......@@ -87,15 +87,6 @@ int show_interrupts(struct seq_file *p, void *v)
return 0;
}
/*
* For compatibilty only. S/390 specific setup of interrupts et al. is done
* much later in init_channel_subsystem().
*/
void __init init_IRQ(void)
{
/* nothing... */
}
/*
* Switch to the asynchronous interrupt stack for softirq execution.
*/
......@@ -144,28 +135,45 @@ void init_irq_proc(void)
#endif
/*
* ext_int_hash[index] is the start of the list for all external interrupts
* that hash to this index. With the current set of external interrupts
* (0x1202 external call, 0x1004 cpu timer, 0x2401 hwc console, 0x4000
* iucv and 0x2603 pfault) this is always the first element.
* ext_int_hash[index] is the list head for all external interrupts that hash
* to this index.
*/
static struct list_head ext_int_hash[256];
struct ext_int_info {
struct ext_int_info *next;
ext_int_handler_t handler;
u16 code;
struct list_head entry;
struct rcu_head rcu;
};
static struct ext_int_info *ext_int_hash[256];
/* ext_int_hash_lock protects the handler lists for external interrupts */
DEFINE_SPINLOCK(ext_int_hash_lock);
static void __init init_external_interrupts(void)
{
int idx;
for (idx = 0; idx < ARRAY_SIZE(ext_int_hash); idx++)
INIT_LIST_HEAD(&ext_int_hash[idx]);
}
static inline int ext_hash(u16 code)
{
return (code + (code >> 9)) & 0xff;
}
static void ext_int_hash_update(struct rcu_head *head)
{
struct ext_int_info *p = container_of(head, struct ext_int_info, rcu);
kfree(p);
}
int register_external_interrupt(u16 code, ext_int_handler_t handler)
{
struct ext_int_info *p;
unsigned long flags;
int index;
p = kmalloc(sizeof(*p), GFP_ATOMIC);
......@@ -174,33 +182,27 @@ int register_external_interrupt(u16 code, ext_int_handler_t handler)
p->code = code;
p->handler = handler;
index = ext_hash(code);
p->next = ext_int_hash[index];
ext_int_hash[index] = p;
spin_lock_irqsave(&ext_int_hash_lock, flags);
list_add_rcu(&p->entry, &ext_int_hash[index]);
spin_unlock_irqrestore(&ext_int_hash_lock, flags);
return 0;
}
EXPORT_SYMBOL(register_external_interrupt);
int unregister_external_interrupt(u16 code, ext_int_handler_t handler)
{
struct ext_int_info *p, *q;
int index;
struct ext_int_info *p;
unsigned long flags;
int index = ext_hash(code);
index = ext_hash(code);
q = NULL;
p = ext_int_hash[index];
while (p) {
if (p->code == code && p->handler == handler)
break;
q = p;
p = p->next;
}
if (!p)
return -ENOENT;
if (q)
q->next = p->next;
else
ext_int_hash[index] = p->next;
kfree(p);
spin_lock_irqsave(&ext_int_hash_lock, flags);
list_for_each_entry_rcu(p, &ext_int_hash[index], entry)
if (p->code == code && p->handler == handler) {
list_del_rcu(&p->entry);
call_rcu(&p->rcu, ext_int_hash_update);
}
spin_unlock_irqrestore(&ext_int_hash_lock, flags);
return 0;
}
EXPORT_SYMBOL(unregister_external_interrupt);
......@@ -224,15 +226,22 @@ void __irq_entry do_extint(struct pt_regs *regs, unsigned int ext_int_code,
kstat_cpu(smp_processor_id()).irqs[EXTERNAL_INTERRUPT]++;
if (code != 0x1004)
__get_cpu_var(s390_idle).nohz_delay = 1;
index = ext_hash(code);
for (p = ext_int_hash[index]; p; p = p->next) {
rcu_read_lock();
list_for_each_entry_rcu(p, &ext_int_hash[index], entry)
if (likely(p->code == code))
p->handler(ext_int_code, param32, param64);
}
rcu_read_unlock();
irq_exit();
set_irq_regs(old_regs);
}
void __init init_IRQ(void)
{
init_external_interrupts();
}
static DEFINE_SPINLOCK(sc_irq_lock);
static int sc_irq_refcount;
......
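For context, register_external_interrupt()/unregister_external_interrupt()
form the API that this RCU conversion hardens. Below is a hedged sketch of a
caller, matching the three-argument handler invocation in do_extint() above;
the handler typedef, the asm/irq.h location of the declarations, and the use
of code 0x1202 (external call, taken from the old hash comment) are
assumptions for illustration, not part of the diff:

  #include <linux/init.h>
  #include <asm/irq.h>    /* assumed home of ext_int_handler_t et al. */

  static void my_extcall_handler(unsigned int ext_int_code,
                                 unsigned int param32, unsigned long param64)
  {
          /* runs in interrupt context, under rcu_read_lock() */
  }

  static int __init my_init(void)
  {
          /* 0x1202: external call, per the old hash comment above */
          return register_external_interrupt(0x1202, my_extcall_handler);
  }

  static void __exit my_exit(void)
  {
          unregister_external_interrupt(0x1202, my_extcall_handler);
  }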
......@@ -5,21 +5,19 @@
*
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
.section .kprobes.text, "ax"
.globl ftrace_stub
ftrace_stub:
ENTRY(ftrace_stub)
br %r14
.globl _mcount
_mcount:
ENTRY(_mcount)
#ifdef CONFIG_DYNAMIC_FTRACE
br %r14
.globl ftrace_caller
ftrace_caller:
ENTRY(ftrace_caller)
#endif
stm %r2,%r5,16(%r15)
bras %r1,2f
......@@ -41,8 +39,7 @@ ftrace_caller:
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
l %r2,100(%r15)
l %r3,152(%r15)
.globl ftrace_graph_caller
ftrace_graph_caller:
ENTRY(ftrace_graph_caller)
# The bras instruction gets runtime patched to call prepare_ftrace_return.
# See ftrace_enable_ftrace_graph_caller. The patched instruction is:
# bras %r14,prepare_ftrace_return
......@@ -56,8 +53,7 @@ ftrace_graph_caller:
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
.globl return_to_handler
return_to_handler:
ENTRY(return_to_handler)
stm %r2,%r5,16(%r15)
st %r14,56(%r15)
lr %r0,%r15
......
......@@ -5,21 +5,19 @@
*
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
.section .kprobes.text, "ax"
.globl ftrace_stub
ftrace_stub:
ENTRY(ftrace_stub)
br %r14
.globl _mcount
_mcount:
ENTRY(_mcount)
#ifdef CONFIG_DYNAMIC_FTRACE
br %r14
.globl ftrace_caller
ftrace_caller:
ENTRY(ftrace_caller)
#endif
larl %r1,function_trace_stop
icm %r1,0xf,0(%r1)
......@@ -37,8 +35,7 @@ ftrace_caller:
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
lg %r2,168(%r15)
lg %r3,272(%r15)
.globl ftrace_graph_caller
ftrace_graph_caller:
ENTRY(ftrace_graph_caller)
# The bras instruction gets runtime patched to call prepare_ftrace_return.
# See ftrace_enable_ftrace_graph_caller. The patched instruction is:
# bras %r14,prepare_ftrace_return
......@@ -52,8 +49,7 @@ ftrace_graph_caller:
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
.globl return_to_handler
return_to_handler:
ENTRY(return_to_handler)
stmg %r2,%r5,32(%r15)
lgr %r1,%r15
aghi %r15,-160
......
......@@ -6,14 +6,15 @@
* Author(s): Holger Smolinski (Holger.Smolinski@de.ibm.com)
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#
# do_reipl_asm
# Parameter: r2 = schid of reipl device
#
.globl do_reipl_asm
do_reipl_asm: basr %r13,0
ENTRY(do_reipl_asm)
basr %r13,0
.Lpg0: lpsw .Lnewpsw-.Lpg0(%r13)
.Lpg1: # do store status of all registers
......
......@@ -4,6 +4,7 @@
* Denis Joseph Barrow,
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#
......@@ -11,8 +12,8 @@
# Parameter: r2 = schid of reipl device
#
.globl do_reipl_asm
do_reipl_asm: basr %r13,0
ENTRY(do_reipl_asm)
basr %r13,0
.Lpg0: lpswe .Lnewpsw-.Lpg0(%r13)
.Lpg1: # do store status of all registers
......
......@@ -8,6 +8,8 @@
*
*/
#include <linux/linkage.h>
/*
* moves the new kernel to its destination...
* %r2 = pointer to first kimage_entry_t
......@@ -22,8 +24,7 @@
*/
.text
.globl relocate_kernel
relocate_kernel:
ENTRY(relocate_kernel)
basr %r13,0 # base address
.base:
stnsm sys_msk-.base(%r13),0xfb # disable DAT
......@@ -112,6 +113,7 @@
.byte 0
.align 8
relocate_kernel_end:
.align 8
.globl relocate_kernel_len
relocate_kernel_len:
.quad relocate_kernel_end - relocate_kernel
......@@ -8,6 +8,8 @@
*
*/
#include <linux/linkage.h>
/*
* moves the new kernel to its destination...
* %r2 = pointer to first kimage_entry_t
......@@ -23,8 +25,7 @@
*/
.text
.globl relocate_kernel
relocate_kernel:
ENTRY(relocate_kernel)
basr %r13,0 # base address
.base:
stnsm sys_msk-.base(%r13),0xfb # disable DAT
......@@ -115,6 +116,7 @@
.byte 0
.align 8
relocate_kernel_end:
.align 8
.globl relocate_kernel_len
relocate_kernel_len:
.quad relocate_kernel_end - relocate_kernel
#include <linux/module.h>
#include <linux/kvm_host.h>
#include <asm/ftrace.h>
#ifdef CONFIG_FUNCTION_TRACER
EXPORT_SYMBOL(_mcount);
#endif
#if defined(CONFIG_KVM) || defined(CONFIG_KVM_MODULE)
EXPORT_SYMBOL(sie64a);
#endif
......@@ -8,6 +8,8 @@
*
*/
#include <linux/linkage.h>
LC_EXT_NEW_PSW = 0x58 # addr of ext int handler
LC_EXT_NEW_PSW_64 = 0x1b0 # addr of ext int handler 64 bit
LC_EXT_INT_PARAM = 0x80 # addr of ext int parameter
......@@ -260,8 +262,7 @@ _sclp_print:
# R2 = 0 on success, 1 on failure
#
.globl _sclp_print_early
_sclp_print_early:
ENTRY(_sclp_print_early)
stm %r6,%r15,24(%r15) # save registers
ahi %r15,-96 # create stack frame
#ifdef CONFIG_64BIT
......
......@@ -654,7 +654,8 @@ int __cpu_disable(void)
/* disable all external interrupts */
cr_parms.orvals[0] = 0;
cr_parms.andvals[0] = ~(1 << 15 | 1 << 14 | 1 << 13 | 1 << 11 |
1 << 10 | 1 << 9 | 1 << 6 | 1 << 4);
1 << 10 | 1 << 9 | 1 << 6 | 1 << 5 |
1 << 4);
/* disable all I/O interrupts */
cr_parms.orvals[6] = 0;
cr_parms.andvals[6] = ~(1 << 31 | 1 << 30 | 1 << 29 | 1 << 28 |
......
......@@ -5,6 +5,7 @@
*
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/ptrace.h>
......@@ -16,9 +17,7 @@
# %r6 - destination cpu
.section .text
.align 4
.globl smp_switch_to_cpu
smp_switch_to_cpu:
ENTRY(smp_switch_to_cpu)
stm %r6,%r15,__SF_GPRS(%r15)
lr %r1,%r15
ahi %r15,-STACK_FRAME_OVERHEAD
......@@ -33,8 +32,7 @@ smp_switch_to_cpu:
brc 2,2b /* busy, try again */
3: j 3b
.globl smp_restart_cpu
smp_restart_cpu:
ENTRY(smp_restart_cpu)
basr %r13,0
0: la %r1,.gprregs_addr-0b(%r13)
l %r1,0(%r1)
......
......@@ -5,6 +5,7 @@
*
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/ptrace.h>
......@@ -16,9 +17,7 @@
# %r6 - destination cpu
.section .text
.align 4
.globl smp_switch_to_cpu
smp_switch_to_cpu:
ENTRY(smp_switch_to_cpu)
stmg %r6,%r15,__SF_GPRS(%r15)
lgr %r1,%r15
aghi %r15,-STACK_FRAME_OVERHEAD
......@@ -31,8 +30,7 @@ smp_switch_to_cpu:
brc 2,2b /* busy, try again */
3: j 3b
.globl smp_restart_cpu
smp_restart_cpu:
ENTRY(smp_restart_cpu)
larl %r1,.gprregs
lmg %r0,%r15,0(%r1)
1: sigp %r0,%r5,__SIGP_SENSE /* Wait for calling CPU */
......
......@@ -7,6 +7,7 @@
* Michael Holzheu <holzheu@linux.vnet.ibm.com>
*/
#include <linux/linkage.h>
#include <asm/page.h>
#include <asm/ptrace.h>
#include <asm/thread_info.h>
......@@ -22,9 +23,7 @@
* This function runs with disabled interrupts.
*/
.section .text
.align 4
.globl swsusp_arch_suspend
swsusp_arch_suspend:
ENTRY(swsusp_arch_suspend)
stmg %r6,%r15,__SF_GPRS(%r15)
lgr %r1,%r15
aghi %r15,-STACK_FRAME_OVERHEAD
......@@ -112,8 +111,7 @@ swsusp_arch_suspend:
* Then we return to the function that called swsusp_arch_suspend().
* swsusp_arch_resume() runs with disabled interrupts.
*/
.globl swsusp_arch_resume
swsusp_arch_resume:
ENTRY(swsusp_arch_resume)
stmg %r6,%r15,__SF_GPRS(%r15)
lgr %r1,%r15
aghi %r15,-STACK_FRAME_OVERHEAD
......
......@@ -18,7 +18,7 @@
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/tracehook.h>
#include <linux/ptrace.h>
#include <linux/timer.h>
#include <linux/mm.h>
#include <linux/smp.h>
......@@ -43,14 +43,10 @@
#include <asm/debug.h>
#include "entry.h"
pgm_check_handler_t *pgm_check_table[128];
void (*pgm_check_table[128])(struct pt_regs *, long, unsigned long);
int show_unhandled_signals;
extern pgm_check_handler_t do_protection_exception;
extern pgm_check_handler_t do_dat_exception;
extern pgm_check_handler_t do_asce_exception;
#define stack_pointer ({ void **sp; asm("la %0,0(15)" : "=&d" (sp)); sp; })
#ifndef CONFIG_64BIT
......@@ -329,10 +325,17 @@ static inline void __user *get_psw_address(struct pt_regs *regs,
void __kprobes do_per_trap(struct pt_regs *regs)
{
siginfo_t info;
if (notify_die(DIE_SSTEP, "sstep", regs, 0, 0, SIGTRAP) == NOTIFY_STOP)
return;
if (current->ptrace)
force_sig(SIGTRAP, current);
if (!current->ptrace)
return;
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = TRAP_HWBKPT;
info.si_addr = (void *) current->thread.per_event.address;
force_sig_info(SIGTRAP, &info, current);
}
static void default_trap_handler(struct pt_regs *regs, long pgm_int_code,
......@@ -425,9 +428,13 @@ static void __kprobes illegal_op(struct pt_regs *regs, long pgm_int_code,
if (get_user(*((__u16 *) opcode), (__u16 __user *) location))
return;
if (*((__u16 *) opcode) == S390_BREAKPOINT_U16) {
if (current->ptrace)
force_sig(SIGTRAP, current);
else
if (current->ptrace) {
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = TRAP_BRKPT;
info.si_addr = location;
force_sig_info(SIGTRAP, &info, current);
} else
signal = SIGILL;
#ifdef CONFIG_MATHEMU
} else if (opcode[0] == 0xb3) {
......@@ -489,9 +496,8 @@ static void __kprobes illegal_op(struct pt_regs *regs, long pgm_int_code,
#ifdef CONFIG_MATHEMU
asmlinkage void specification_exception(struct pt_regs *regs,
long pgm_int_code,
unsigned long trans_exc_code)
void specification_exception(struct pt_regs *regs, long pgm_int_code,
unsigned long trans_exc_code)
{
__u8 opcode[6];
__u16 __user *location = NULL;
......@@ -648,7 +654,7 @@ static void space_switch_exception(struct pt_regs *regs, long pgm_int_code,
do_trap(pgm_int_code, SIGILL, "space switch event", regs, &info);
}
asmlinkage void __kprobes kernel_stack_overflow(struct pt_regs * regs)
void __kprobes kernel_stack_overflow(struct pt_regs * regs)
{
bust_spinlocks(1);
printk("Kernel stack overflow.\n");
......
......@@ -10,5 +10,5 @@ common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o)
ccflags-y := -Ivirt/kvm -Iarch/s390/kvm
kvm-objs := $(common-objs) kvm-s390.o sie64a.o intercept.o interrupt.o priv.o sigp.o diag.o
kvm-objs := $(common-objs) kvm-s390.o intercept.o interrupt.o priv.o sigp.o diag.o
obj-$(CONFIG_KVM) += kvm.o
/*
* gaccess.h - access guest memory
* access.h - access guest memory
*
* Copyright IBM Corp. 2008,2009
*
......@@ -22,20 +22,13 @@ static inline void __user *__guestaddr_to_user(struct kvm_vcpu *vcpu,
unsigned long guestaddr)
{
unsigned long prefix = vcpu->arch.sie_block->prefix;
unsigned long origin = vcpu->arch.sie_block->gmsor;
unsigned long memsize = kvm_s390_vcpu_get_memsize(vcpu);
if (guestaddr < 2 * PAGE_SIZE)
guestaddr += prefix;
else if ((guestaddr >= prefix) && (guestaddr < prefix + 2 * PAGE_SIZE))
guestaddr -= prefix;
if (guestaddr > memsize)
return (void __user __force *) ERR_PTR(-EFAULT);
guestaddr += origin;
return (void __user *) guestaddr;
return (void __user *) gmap_fault(guestaddr, vcpu->arch.gmap);
}
static inline int get_guest_u64(struct kvm_vcpu *vcpu, unsigned long guestaddr,
......@@ -141,11 +134,11 @@ static inline int put_guest_u8(struct kvm_vcpu *vcpu, unsigned long guestaddr,
static inline int __copy_to_guest_slow(struct kvm_vcpu *vcpu,
unsigned long guestdest,
const void *from, unsigned long n)
void *from, unsigned long n)
{
int rc;
unsigned long i;
const u8 *data = from;
u8 *data = from;
for (i = 0; i < n; i++) {
rc = put_guest_u8(vcpu, guestdest++, *(data++));
......@@ -155,12 +148,95 @@ static inline int __copy_to_guest_slow(struct kvm_vcpu *vcpu,
return 0;
}
static inline int __copy_to_guest_fast(struct kvm_vcpu *vcpu,
unsigned long guestdest,
void *from, unsigned long n)
{
int r;
void __user *uptr;
unsigned long size;
if (guestdest + n < guestdest)
return -EFAULT;
/* simple case: all within one segment table entry? */
if ((guestdest & PMD_MASK) == ((guestdest+n) & PMD_MASK)) {
uptr = (void __user *) gmap_fault(guestdest, vcpu->arch.gmap);
if (IS_ERR((void __force *) uptr))
return PTR_ERR((void __force *) uptr);
r = copy_to_user(uptr, from, n);
if (r)
r = -EFAULT;
goto out;
}
/* copy first segment */
uptr = (void __user *)gmap_fault(guestdest, vcpu->arch.gmap);
if (IS_ERR((void __force *) uptr))
return PTR_ERR((void __force *) uptr);
size = PMD_SIZE - (guestdest & ~PMD_MASK);
r = copy_to_user(uptr, from, size);
if (r) {
r = -EFAULT;
goto out;
}
from += size;
n -= size;
guestdest += size;
/* copy full segments */
while (n >= PMD_SIZE) {
uptr = (void __user *)gmap_fault(guestdest, vcpu->arch.gmap);
if (IS_ERR((void __force *) uptr))
return PTR_ERR((void __force *) uptr);
r = copy_to_user(uptr, from, PMD_SIZE);
if (r) {
r = -EFAULT;
goto out;
}
from += PMD_SIZE;
n -= PMD_SIZE;
guestdest += PMD_SIZE;
}
/* copy the tail segment */
if (n) {
uptr = (void __user *)gmap_fault(guestdest, vcpu->arch.gmap);
if (IS_ERR((void __force *) uptr))
return PTR_ERR((void __force *) uptr);
r = copy_to_user(uptr, from, n);
if (r)
r = -EFAULT;
}
out:
return r;
}
static inline int copy_to_guest_absolute(struct kvm_vcpu *vcpu,
unsigned long guestdest,
void *from, unsigned long n)
{
return __copy_to_guest_fast(vcpu, guestdest, from, n);
}
static inline int copy_to_guest(struct kvm_vcpu *vcpu, unsigned long guestdest,
const void *from, unsigned long n)
void *from, unsigned long n)
{
unsigned long prefix = vcpu->arch.sie_block->prefix;
unsigned long origin = vcpu->arch.sie_block->gmsor;
unsigned long memsize = kvm_s390_vcpu_get_memsize(vcpu);
if ((guestdest < 2 * PAGE_SIZE) && (guestdest + n > 2 * PAGE_SIZE))
goto slowpath;
......@@ -177,15 +253,7 @@ static inline int copy_to_guest(struct kvm_vcpu *vcpu, unsigned long guestdest,
else if ((guestdest >= prefix) && (guestdest < prefix + 2 * PAGE_SIZE))
guestdest -= prefix;
if (guestdest + n > memsize)
return -EFAULT;
if (guestdest + n < guestdest)
return -EFAULT;
guestdest += origin;
return copy_to_user((void __user *) guestdest, from, n);
return __copy_to_guest_fast(vcpu, guestdest, from, n);
slowpath:
return __copy_to_guest_slow(vcpu, guestdest, from, n);
}
......@@ -206,74 +274,113 @@ static inline int __copy_from_guest_slow(struct kvm_vcpu *vcpu, void *to,
return 0;
}
static inline int copy_from_guest(struct kvm_vcpu *vcpu, void *to,
unsigned long guestsrc, unsigned long n)
static inline int __copy_from_guest_fast(struct kvm_vcpu *vcpu, void *to,
unsigned long guestsrc,
unsigned long n)
{
unsigned long prefix = vcpu->arch.sie_block->prefix;
unsigned long origin = vcpu->arch.sie_block->gmsor;
unsigned long memsize = kvm_s390_vcpu_get_memsize(vcpu);
int r;
void __user *uptr;
unsigned long size;
if ((guestsrc < 2 * PAGE_SIZE) && (guestsrc + n > 2 * PAGE_SIZE))
goto slowpath;
if (guestsrc + n < guestsrc)
return -EFAULT;
if ((guestsrc < prefix) && (guestsrc + n > prefix))
goto slowpath;
/* simple case: all within one segment table entry? */
if ((guestsrc & PMD_MASK) == ((guestsrc+n) & PMD_MASK)) {
uptr = (void __user *) gmap_fault(guestsrc, vcpu->arch.gmap);
if ((guestsrc < prefix + 2 * PAGE_SIZE)
&& (guestsrc + n > prefix + 2 * PAGE_SIZE))
goto slowpath;
if (IS_ERR((void __force *) uptr))
return PTR_ERR((void __force *) uptr);
if (guestsrc < 2 * PAGE_SIZE)
guestsrc += prefix;
else if ((guestsrc >= prefix) && (guestsrc < prefix + 2 * PAGE_SIZE))
guestsrc -= prefix;
r = copy_from_user(to, uptr, n);
if (guestsrc + n > memsize)
return -EFAULT;
if (r)
r = -EFAULT;
if (guestsrc + n < guestsrc)
return -EFAULT;
goto out;
}
guestsrc += origin;
/* copy first segment */
uptr = (void __user *)gmap_fault(guestsrc, vcpu->arch.gmap);
return copy_from_user(to, (void __user *) guestsrc, n);
slowpath:
return __copy_from_guest_slow(vcpu, to, guestsrc, n);
}
if (IS_ERR((void __force *) uptr))
return PTR_ERR((void __force *) uptr);
static inline int copy_to_guest_absolute(struct kvm_vcpu *vcpu,
unsigned long guestdest,
const void *from, unsigned long n)
{
unsigned long origin = vcpu->arch.sie_block->gmsor;
unsigned long memsize = kvm_s390_vcpu_get_memsize(vcpu);
size = PMD_SIZE - (guestsrc & ~PMD_MASK);
if (guestdest + n > memsize)
return -EFAULT;
r = copy_from_user(to, uptr, size);
if (guestdest + n < guestdest)
return -EFAULT;
if (r) {
r = -EFAULT;
goto out;
}
to += size;
n -= size;
guestsrc += size;
/* copy full segments */
while (n >= PMD_SIZE) {
uptr = (void __user *)gmap_fault(guestsrc, vcpu->arch.gmap);
if (IS_ERR((void __force *) uptr))
return PTR_ERR((void __force *) uptr);
r = copy_from_user(to, uptr, PMD_SIZE);
if (r) {
r = -EFAULT;
goto out;
}
to += PMD_SIZE;
n -= PMD_SIZE;
guestsrc += PMD_SIZE;
}
/* copy the tail segment */
if (n) {
uptr = (void __user *)gmap_fault(guestsrc, vcpu->arch.gmap);
guestdest += origin;
if (IS_ERR((void __force *) uptr))
return PTR_ERR((void __force *) uptr);
return copy_to_user((void __user *) guestdest, from, n);
r = copy_from_user(to, uptr, n);
if (r)
r = -EFAULT;
}
out:
return r;
}
static inline int copy_from_guest_absolute(struct kvm_vcpu *vcpu, void *to,
unsigned long guestsrc,
unsigned long n)
{
unsigned long origin = vcpu->arch.sie_block->gmsor;
unsigned long memsize = kvm_s390_vcpu_get_memsize(vcpu);
return __copy_from_guest_fast(vcpu, to, guestsrc, n);
}
if (guestsrc + n > memsize)
return -EFAULT;
static inline int copy_from_guest(struct kvm_vcpu *vcpu, void *to,
unsigned long guestsrc, unsigned long n)
{
unsigned long prefix = vcpu->arch.sie_block->prefix;
if (guestsrc + n < guestsrc)
return -EFAULT;
if ((guestsrc < 2 * PAGE_SIZE) && (guestsrc + n > 2 * PAGE_SIZE))
goto slowpath;
guestsrc += origin;
if ((guestsrc < prefix) && (guestsrc + n > prefix))
goto slowpath;
if ((guestsrc < prefix + 2 * PAGE_SIZE)
&& (guestsrc + n > prefix + 2 * PAGE_SIZE))
goto slowpath;
if (guestsrc < 2 * PAGE_SIZE)
guestsrc += prefix;
else if ((guestsrc >= prefix) && (guestsrc < prefix + 2 * PAGE_SIZE))
guestsrc -= prefix;
return copy_from_user(to, (void __user *) guestsrc, n);
return __copy_from_guest_fast(vcpu, to, guestsrc, n);
slowpath:
return __copy_from_guest_slow(vcpu, to, guestsrc, n);
}
#endif
......@@ -105,6 +105,7 @@ static intercept_handler_t instruction_handlers[256] = {
[0xae] = kvm_s390_handle_sigp,
[0xb2] = kvm_s390_handle_b2,
[0xb7] = handle_lctl,
[0xe5] = kvm_s390_handle_e5,
[0xeb] = handle_lctlg,
};
......@@ -159,22 +160,42 @@ static int handle_stop(struct kvm_vcpu *vcpu)
static int handle_validity(struct kvm_vcpu *vcpu)
{
unsigned long vmaddr;
int viwhy = vcpu->arch.sie_block->ipb >> 16;
int rc;
vcpu->stat.exit_validity++;
if ((viwhy == 0x37) && (vcpu->arch.sie_block->prefix
<= kvm_s390_vcpu_get_memsize(vcpu) - 2*PAGE_SIZE)) {
rc = fault_in_pages_writeable((char __user *)
vcpu->arch.sie_block->gmsor +
vcpu->arch.sie_block->prefix,
2*PAGE_SIZE);
if (rc)
if (viwhy == 0x37) {
vmaddr = gmap_fault(vcpu->arch.sie_block->prefix,
vcpu->arch.gmap);
if (IS_ERR_VALUE(vmaddr)) {
rc = -EOPNOTSUPP;
goto out;
}
rc = fault_in_pages_writeable((char __user *) vmaddr,
PAGE_SIZE);
if (rc) {
/* user will receive sigsegv, exit to user */
rc = -EOPNOTSUPP;
goto out;
}
vmaddr = gmap_fault(vcpu->arch.sie_block->prefix + PAGE_SIZE,
vcpu->arch.gmap);
if (IS_ERR_VALUE(vmaddr)) {
rc = -EOPNOTSUPP;
goto out;
}
rc = fault_in_pages_writeable((char __user *) vmaddr,
PAGE_SIZE);
if (rc) {
/* user will receive sigsegv, exit to user */
rc = -EOPNOTSUPP;
goto out;
}
} else
rc = -EOPNOTSUPP;
out:
if (rc)
VCPU_EVENT(vcpu, 2, "unhandled validity intercept code %d",
viwhy);
......
......@@ -128,6 +128,10 @@ static void __do_deliver_interrupt(struct kvm_vcpu *vcpu,
if (rc == -EFAULT)
exception = 1;
rc = put_guest_u16(vcpu, __LC_CPU_ADDRESS, inti->emerg.code);
if (rc == -EFAULT)
exception = 1;
rc = copy_to_guest(vcpu, __LC_EXT_OLD_PSW,
&vcpu->arch.sie_block->gpsw, sizeof(psw_t));
if (rc == -EFAULT)
......
......@@ -62,6 +62,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "instruction_chsc", VCPU_STAT(instruction_chsc) },
{ "instruction_stsi", VCPU_STAT(instruction_stsi) },
{ "instruction_stfl", VCPU_STAT(instruction_stfl) },
{ "instruction_tprot", VCPU_STAT(instruction_tprot) },
{ "instruction_sigp_sense", VCPU_STAT(instruction_sigp_sense) },
{ "instruction_sigp_emergency", VCPU_STAT(instruction_sigp_emergency) },
{ "instruction_sigp_stop", VCPU_STAT(instruction_sigp_stop) },
......@@ -189,7 +190,13 @@ int kvm_arch_init_vm(struct kvm *kvm)
debug_register_view(kvm->arch.dbf, &debug_sprintf_view);
VM_EVENT(kvm, 3, "%s", "vm created");
kvm->arch.gmap = gmap_alloc(current->mm);
if (!kvm->arch.gmap)
goto out_nogmap;
return 0;
out_nogmap:
debug_unregister(kvm->arch.dbf);
out_nodbf:
free_page((unsigned long)(kvm->arch.sca));
out_err:
......@@ -234,11 +241,13 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
kvm_free_vcpus(kvm);
free_page((unsigned long)(kvm->arch.sca));
debug_unregister(kvm->arch.dbf);
gmap_free(kvm->arch.gmap);
}
/* Section: vcpu related */
int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
{
vcpu->arch.gmap = vcpu->kvm->arch.gmap;
return 0;
}
......@@ -284,8 +293,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
{
atomic_set(&vcpu->arch.sie_block->cpuflags, CPUSTAT_ZARCH);
set_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests);
atomic_set(&vcpu->arch.sie_block->cpuflags, CPUSTAT_ZARCH | CPUSTAT_SM);
vcpu->arch.sie_block->ecb = 6;
vcpu->arch.sie_block->eca = 0xC1002001U;
vcpu->arch.sie_block->fac = (int) (long) facilities;
......@@ -453,6 +461,7 @@ static void __vcpu_run(struct kvm_vcpu *vcpu)
local_irq_disable();
kvm_guest_enter();
local_irq_enable();
gmap_enable(vcpu->arch.gmap);
VCPU_EVENT(vcpu, 6, "entering sie flags %x",
atomic_read(&vcpu->arch.sie_block->cpuflags));
if (sie64a(vcpu->arch.sie_block, vcpu->arch.guest_gprs)) {
......@@ -461,6 +470,7 @@ static void __vcpu_run(struct kvm_vcpu *vcpu)
}
VCPU_EVENT(vcpu, 6, "exit sie icptcode %d",
vcpu->arch.sie_block->icptcode);
gmap_disable(vcpu->arch.gmap);
local_irq_disable();
kvm_guest_exit();
local_irq_enable();
......@@ -474,17 +484,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
sigset_t sigsaved;
rerun_vcpu:
if (vcpu->requests)
if (test_and_clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
kvm_s390_vcpu_set_mem(vcpu);
/* verify, that memory has been registered */
if (!vcpu->arch.sie_block->gmslm) {
vcpu_put(vcpu);
VCPU_EVENT(vcpu, 3, "%s", "no memory registered to run vcpu");
return -EINVAL;
}
if (vcpu->sigset_active)
sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);
......@@ -545,7 +544,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
return rc;
}
static int __guestcopy(struct kvm_vcpu *vcpu, u64 guestdest, const void *from,
static int __guestcopy(struct kvm_vcpu *vcpu, u64 guestdest, void *from,
unsigned long n, int prefix)
{
if (prefix)
......@@ -562,7 +561,7 @@ static int __guestcopy(struct kvm_vcpu *vcpu, u64 guestdest, const void *from,
*/
int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
{
const unsigned char archmode = 1;
unsigned char archmode = 1;
int prefix;
if (addr == KVM_S390_STORE_STATUS_NOADDR) {
......@@ -680,10 +679,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
if (mem->guest_phys_addr)
return -EINVAL;
if (mem->userspace_addr & (PAGE_SIZE - 1))
if (mem->userspace_addr & 0xffffful)
return -EINVAL;
if (mem->memory_size & (PAGE_SIZE - 1))
if (mem->memory_size & 0xffffful)
return -EINVAL;
if (!user_alloc)
......@@ -697,15 +696,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
struct kvm_memory_slot old,
int user_alloc)
{
int i;
struct kvm_vcpu *vcpu;
int rc;
/* request update of sie control block for all available vcpus */
kvm_for_each_vcpu(i, vcpu, kvm) {
if (test_and_set_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
continue;
kvm_s390_inject_sigp_stop(vcpu, ACTION_RELOADVCPU_ON_STOP);
}
rc = gmap_map_segment(kvm->arch.gmap, mem->userspace_addr,
mem->guest_phys_addr, mem->memory_size);
if (rc)
printk(KERN_WARNING "kvm-s390: failed to commit memory region\n");
return;
}
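
Note on the alignment checks above: 0xffffful is PMD_SIZE - 1 on s390x, so a memory slot must now start and end on a 1 MB segment boundary and its guest address must be 0, because gmap_map_segment() operates on whole segments. A minimal userspace sketch that satisfies those constraints might look as follows (setup_slot and the vm_fd handling are illustrative, not part of this patch; only the KVM ioctl and the alignment rules come from the code above):

	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Hypothetical helper: register one 16 MB slot with a VM fd
	 * obtained via KVM_CREATE_VM. */
	static int setup_slot(int vm_fd)
	{
		size_t size = 16UL << 20;	/* multiple of 1 MB */
		void *mem;

		if (posix_memalign(&mem, 1UL << 20, size))
			return -1;		/* 1 MB aligned start */

		struct kvm_userspace_memory_region region = {
			.slot = 0,
			.guest_phys_addr = 0,	/* enforced by the check above */
			.memory_size = size,
			.userspace_addr = (unsigned long) mem,
		};
		return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
	}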
void kvm_arch_flush_shadow(struct kvm *kvm)
......
......@@ -58,35 +58,9 @@ int kvm_s390_inject_vcpu(struct kvm_vcpu *vcpu,
int kvm_s390_inject_program_int(struct kvm_vcpu *vcpu, u16 code);
int kvm_s390_inject_sigp_stop(struct kvm_vcpu *vcpu, int action);
static inline long kvm_s390_vcpu_get_memsize(struct kvm_vcpu *vcpu)
{
return vcpu->arch.sie_block->gmslm
- vcpu->arch.sie_block->gmsor
- VIRTIODESCSPACE + 1ul;
}
static inline void kvm_s390_vcpu_set_mem(struct kvm_vcpu *vcpu)
{
int idx;
struct kvm_memory_slot *mem;
struct kvm_memslots *memslots;
idx = srcu_read_lock(&vcpu->kvm->srcu);
memslots = kvm_memslots(vcpu->kvm);
mem = &memslots->memslots[0];
vcpu->arch.sie_block->gmsor = mem->userspace_addr;
vcpu->arch.sie_block->gmslm =
mem->userspace_addr +
(mem->npages << PAGE_SHIFT) +
VIRTIODESCSPACE - 1ul;
srcu_read_unlock(&vcpu->kvm->srcu, idx);
}
/* implemented in priv.c */
int kvm_s390_handle_b2(struct kvm_vcpu *vcpu);
int kvm_s390_handle_e5(struct kvm_vcpu *vcpu);
/* implemented in sigp.c */
int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu);
......
......@@ -326,3 +326,52 @@ int kvm_s390_handle_b2(struct kvm_vcpu *vcpu)
}
return -EOPNOTSUPP;
}
static int handle_tprot(struct kvm_vcpu *vcpu)
{
int base1 = (vcpu->arch.sie_block->ipb & 0xf0000000) >> 28;
int disp1 = (vcpu->arch.sie_block->ipb & 0x0fff0000) >> 16;
int base2 = (vcpu->arch.sie_block->ipb & 0xf000) >> 12;
int disp2 = vcpu->arch.sie_block->ipb & 0x0fff;
u64 address1 = disp1 + (base1 ? vcpu->arch.guest_gprs[base1] : 0);
u64 address2 = disp2 + (base2 ? vcpu->arch.guest_gprs[base2] : 0);
struct vm_area_struct *vma;
vcpu->stat.instruction_tprot++;
/* we only handle the Linux memory detection case:
* access key == 0
* guest DAT == off
* everything else goes to userspace. */
if (address2 & 0xf0)
return -EOPNOTSUPP;
if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_DAT)
return -EOPNOTSUPP;
down_read(&current->mm->mmap_sem);
vma = find_vma(current->mm,
(unsigned long) __guestaddr_to_user(vcpu, address1));
if (!vma) {
up_read(&current->mm->mmap_sem);
return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
}
vcpu->arch.sie_block->gpsw.mask &= ~(3ul << 44);
if (!(vma->vm_flags & VM_WRITE) && (vma->vm_flags & VM_READ))
vcpu->arch.sie_block->gpsw.mask |= (1ul << 44);
if (!(vma->vm_flags & VM_WRITE) && !(vma->vm_flags & VM_READ))
vcpu->arch.sie_block->gpsw.mask |= (2ul << 44);
up_read(&current->mm->mmap_sem);
return 0;
}
int kvm_s390_handle_e5(struct kvm_vcpu *vcpu)
{
/* For e5xx... instructions we only handle TPROT */
if ((vcpu->arch.sie_block->ipa & 0x00ff) == 0x01)
return handle_tprot(vcpu);
return -EOPNOTSUPP;
}
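
For reference, the two bits that handle_tprot() manipulates at PSW position 44 are the condition code TPROT reports back to the guest: cc0 means fetch and store permitted, cc1 fetch only, cc2 no access. The vma-flag mapping above can be restated as a tiny helper (a paraphrase for clarity, not kernel API):

	/* Condition code derived from the host vma, as set above:
	 * 0 - read/write, 1 - read-only, 2 - no access. */
	static int tprot_cc(unsigned long vm_flags)
	{
		if (vm_flags & VM_WRITE)
			return 0;
		if (vm_flags & VM_READ)
			return 1;
		return 2;
	}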
/*
* sie64a.S - low level sie call
*
* Copyright IBM Corp. 2008,2010
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License (version 2 only)
* as published by the Free Software Foundation.
*
* Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
* Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
*/
#include <linux/errno.h>
#include <asm/asm-offsets.h>
#include <asm/setup.h>
#include <asm/asm-offsets.h>
#include <asm/ptrace.h>
#include <asm/thread_info.h>
_TIF_EXIT_SIE = (_TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_MCCK_PENDING)
/*
* offsets into stackframe
* SP_ = offsets into the stack sie64a is called with
* SPI_ = offsets into irq stack
*/
SP_GREGS = __SF_EMPTY
SP_HOOK = __SF_EMPTY+8
SP_GPP = __SF_EMPTY+16
SPI_PSW = STACK_FRAME_OVERHEAD + __PT_PSW
.macro SPP newpp
tm __LC_MACHINE_FLAGS+6,0x20 # MACHINE_FLAG_SPP
jz 0f
.insn s,0xb2800000,\newpp
0:
.endm
sie_irq_handler:
SPP __LC_CMF_HPP # set host id
larl %r2,sie_inst
clg %r2,SPI_PSW+8(0,%r15) # intercepted sie
jne 1f
xc __LC_SIE_HOOK(8),__LC_SIE_HOOK
lg %r2,__LC_THREAD_INFO # pointer to thread_info struct
tm __TI_flags+7(%r2),_TIF_EXIT_SIE
jz 0f
larl %r2,sie_exit # work pending, leave sie
stg %r2,SPI_PSW+8(0,%r15)
br %r14
0: larl %r2,sie_reenter # re-enter with guest id
stg %r2,SPI_PSW+8(0,%r15)
1: br %r14
/*
* sie64a calling convention:
* %r2 pointer to sie control block
* %r3 guest register save area
*/
.globl sie64a
sie64a:
stg %r3,SP_GREGS(%r15) # save guest register save area
stmg %r6,%r14,__SF_GPRS(%r15) # save registers on entry
lgr %r14,%r2 # pointer to sie control block
larl %r5,sie_irq_handler
stg %r2,SP_GPP(%r15)
stg %r5,SP_HOOK(%r15) # save hook target
lmg %r0,%r13,0(%r3) # load guest gprs 0-13
sie_reenter:
mvc __LC_SIE_HOOK(8),SP_HOOK(%r15)
SPP SP_GPP(%r15) # set guest id
sie_inst:
sie 0(%r14)
xc __LC_SIE_HOOK(8),__LC_SIE_HOOK
SPP __LC_CMF_HPP # set host id
sie_exit:
lg %r14,SP_GREGS(%r15)
stmg %r0,%r13,0(%r14) # save guest gprs 0-13
lghi %r2,0
lmg %r6,%r14,__SF_GPRS(%r15)
br %r14
sie_err:
xc __LC_SIE_HOOK(8),__LC_SIE_HOOK
SPP __LC_CMF_HPP # set host id
lg %r14,SP_GREGS(%r15)
stmg %r0,%r13,0(%r14) # save guest gprs 0-13
lghi %r2,-EFAULT
lmg %r6,%r14,__SF_GPRS(%r15)
br %r14
.section __ex_table,"a"
.quad sie_inst,sie_err
.quad sie_exit,sie_err
.quad sie_reenter,sie_err
.previous
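
Seen from C, the entry point above behaves like the declaration below; %r2 carries the SIE control block, %r3 the guest general-register save area (gprs 0-13 are loaded and stored), and the return value is 0 from sie_exit or -EFAULT from sie_err via the __ex_table entries:

	/* Prototype matching the calling convention documented above. */
	int sie64a(struct kvm_s390_sie_block *sie_block,
		   unsigned long *guest_gprs);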
......@@ -189,10 +189,8 @@ static int __sigp_set_prefix(struct kvm_vcpu *vcpu, u16 cpu_addr, u32 address,
/* make sure that the new value is valid memory */
address = address & 0x7fffe000u;
if ((copy_from_user(&tmp, (void __user *)
(address + vcpu->arch.sie_block->gmsor) , 1)) ||
(copy_from_user(&tmp, (void __user *)(address +
vcpu->arch.sie_block->gmsor + PAGE_SIZE), 1))) {
if (copy_from_guest_absolute(vcpu, &tmp, address, 1) ||
copy_from_guest_absolute(vcpu, &tmp, address + PAGE_SIZE, 1)) {
*reg |= SIGP_STAT_INVALID_PARAMETER;
return 1; /* invalid parameter */
}
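
The double probe is intentional: on z/Architecture the prefix area spans two 4 KB pages (8 KB total), so a new prefix value is only acceptable if both pages are accessible. The change here merely routes those probes through the gmap-aware copy_from_guest_absolute() instead of computing a user address from the now-removed gmsor field.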
......
# S/390 __udiv_qrnnd
#include <linux/linkage.h>
# r2 : &__r
# r3 : upper half of 64 bit word n
# r4 : lower half of 64 bit word n
......@@ -8,8 +10,7 @@
# the quotient q is to be returned
.text
.globl __udiv_qrnnd
__udiv_qrnnd:
ENTRY(__udiv_qrnnd)
st %r2,24(%r15) # store pointer to remainder for later
lr %r0,%r3 # reload n
lr %r1,%r4
......
......@@ -303,9 +303,24 @@ static inline int do_exception(struct pt_regs *regs, int access,
flags = FAULT_FLAG_ALLOW_RETRY;
if (access == VM_WRITE || (trans_exc_code & store_indication) == 0x400)
flags |= FAULT_FLAG_WRITE;
retry:
down_read(&mm->mmap_sem);
#ifdef CONFIG_PGSTE
if (test_tsk_thread_flag(current, TIF_SIE) && S390_lowcore.gmap) {
address = gmap_fault(address,
(struct gmap *) S390_lowcore.gmap);
if (address == -EFAULT) {
fault = VM_FAULT_BADMAP;
goto out_up;
}
if (address == -ENOMEM) {
fault = VM_FAULT_OOM;
goto out_up;
}
}
#endif
retry:
fault = VM_FAULT_BADMAP;
vma = find_vma(mm, address);
if (!vma)
......@@ -356,6 +371,7 @@ static inline int do_exception(struct pt_regs *regs, int access,
/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
* of starvation. */
flags &= ~FAULT_FLAG_ALLOW_RETRY;
down_read(&mm->mmap_sem);
goto retry;
}
}
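
The subtle part of this hunk is the moved retry label: it now sits after the initial down_read() and the gmap translation, so when the allow-retry path loops it must take mmap_sem again itself, which is what the added down_read() just before goto retry is for.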
......
......@@ -35,7 +35,7 @@ int arch_prepare_hugepage(struct page *page)
if (MACHINE_HAS_HPAGE)
return 0;
ptep = (pte_t *) pte_alloc_one(&init_mm, address);
ptep = (pte_t *) pte_alloc_one(&init_mm, addr);
if (!ptep)
return -ENOMEM;
......
......@@ -16,6 +16,7 @@
#include <linux/module.h>
#include <linux/quicklist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <asm/system.h>
#include <asm/pgtable.h>
......@@ -133,30 +134,374 @@ void crst_table_downgrade(struct mm_struct *mm, unsigned long limit)
}
#endif
static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits)
#ifdef CONFIG_PGSTE
/**
* gmap_alloc - allocate a guest address space
* @mm: pointer to the parent mm_struct
*
* Returns a guest address space structure or NULL if the allocation failed.
*/
struct gmap *gmap_alloc(struct mm_struct *mm)
{
unsigned int old, new;
struct gmap *gmap;
struct page *page;
unsigned long *table;
do {
old = atomic_read(v);
new = old ^ bits;
} while (atomic_cmpxchg(v, old, new) != old);
return new;
gmap = kzalloc(sizeof(struct gmap), GFP_KERNEL);
if (!gmap)
goto out;
INIT_LIST_HEAD(&gmap->crst_list);
gmap->mm = mm;
page = alloc_pages(GFP_KERNEL, ALLOC_ORDER);
if (!page)
goto out_free;
list_add(&page->lru, &gmap->crst_list);
table = (unsigned long *) page_to_phys(page);
crst_table_init(table, _REGION1_ENTRY_EMPTY);
gmap->table = table;
list_add(&gmap->list, &mm->context.gmap_list);
return gmap;
out_free:
kfree(gmap);
out:
return NULL;
}
EXPORT_SYMBOL_GPL(gmap_alloc);
/*
* page table entry allocation/free routines.
static int gmap_unlink_segment(struct gmap *gmap, unsigned long *table)
{
struct gmap_pgtable *mp;
struct gmap_rmap *rmap;
struct page *page;
if (*table & _SEGMENT_ENTRY_INV)
return 0;
page = pfn_to_page(*table >> PAGE_SHIFT);
mp = (struct gmap_pgtable *) page->index;
list_for_each_entry(rmap, &mp->mapper, list) {
if (rmap->entry != table)
continue;
list_del(&rmap->list);
kfree(rmap);
break;
}
*table = _SEGMENT_ENTRY_INV | _SEGMENT_ENTRY_RO | mp->vmaddr;
return 1;
}
static void gmap_flush_tlb(struct gmap *gmap)
{
if (MACHINE_HAS_IDTE)
__tlb_flush_idte((unsigned long) gmap->table |
_ASCE_TYPE_REGION1);
else
__tlb_flush_global();
}
/**
* gmap_free - free a guest address space
* @gmap: pointer to the guest address space structure
*/
#ifdef CONFIG_PGSTE
static inline unsigned long *page_table_alloc_pgste(struct mm_struct *mm)
void gmap_free(struct gmap *gmap)
{
struct page *page, *next;
unsigned long *table;
int i;
/* Flush tlb. */
if (MACHINE_HAS_IDTE)
__tlb_flush_idte((unsigned long) gmap->table |
_ASCE_TYPE_REGION1);
else
__tlb_flush_global();
/* Free all segment & region tables. */
down_read(&gmap->mm->mmap_sem);
list_for_each_entry_safe(page, next, &gmap->crst_list, lru) {
table = (unsigned long *) page_to_phys(page);
if ((*table & _REGION_ENTRY_TYPE_MASK) == 0)
/* Remove gmap rmap structures for segment table. */
for (i = 0; i < PTRS_PER_PMD; i++, table++)
gmap_unlink_segment(gmap, table);
__free_pages(page, ALLOC_ORDER);
}
up_read(&gmap->mm->mmap_sem);
list_del(&gmap->list);
kfree(gmap);
}
EXPORT_SYMBOL_GPL(gmap_free);
/**
* gmap_enable - switch primary space to the guest address space
* @gmap: pointer to the guest address space structure
*/
void gmap_enable(struct gmap *gmap)
{
/* Load primary space page table origin. */
S390_lowcore.user_asce = _ASCE_TYPE_REGION1 | _ASCE_TABLE_LENGTH |
_ASCE_USER_BITS | __pa(gmap->table);
asm volatile("lctlg 1,1,%0\n" : : "m" (S390_lowcore.user_asce) );
S390_lowcore.gmap = (unsigned long) gmap;
}
EXPORT_SYMBOL_GPL(gmap_enable);
/**
* gmap_disable - switch back to the standard primary address space
* @gmap: pointer to the guest address space structure
*/
void gmap_disable(struct gmap *gmap)
{
/* Load primary space page table origin. */
S390_lowcore.user_asce =
gmap->mm->context.asce_bits | __pa(gmap->mm->pgd);
asm volatile("lctlg 1,1,%0\n" : : "m" (S390_lowcore.user_asce) );
S390_lowcore.gmap = 0UL;
}
EXPORT_SYMBOL_GPL(gmap_disable);
static int gmap_alloc_table(struct gmap *gmap,
unsigned long *table, unsigned long init)
{
struct page *page;
unsigned long *new;
page = alloc_pages(GFP_KERNEL, ALLOC_ORDER);
if (!page)
return -ENOMEM;
new = (unsigned long *) page_to_phys(page);
crst_table_init(new, init);
down_read(&gmap->mm->mmap_sem);
if (*table & _REGION_ENTRY_INV) {
list_add(&page->lru, &gmap->crst_list);
*table = (unsigned long) new | _REGION_ENTRY_LENGTH |
(*table & _REGION_ENTRY_TYPE_MASK);
} else
__free_pages(page, ALLOC_ORDER);
up_read(&gmap->mm->mmap_sem);
return 0;
}
/**
* gmap_unmap_segment - unmap segment from the guest address space
* @gmap: pointer to the guest address space structure
* @addr: address in the guest address space
* @len: length of the memory area to unmap
*
* Returns 0 if the unmap succeeded, -EINVAL if not.
*/
int gmap_unmap_segment(struct gmap *gmap, unsigned long to, unsigned long len)
{
unsigned long *table;
unsigned long off;
int flush;
if ((to | len) & (PMD_SIZE - 1))
return -EINVAL;
if (len == 0 || to + len < to)
return -EINVAL;
flush = 0;
down_read(&gmap->mm->mmap_sem);
for (off = 0; off < len; off += PMD_SIZE) {
/* Walk the guest addr space page table */
table = gmap->table + (((to + off) >> 53) & 0x7ff);
		if (*table & _REGION_ENTRY_INV)
			goto out;
		table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
		table = table + (((to + off) >> 42) & 0x7ff);
		if (*table & _REGION_ENTRY_INV)
			goto out;
		table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
		table = table + (((to + off) >> 31) & 0x7ff);
		if (*table & _REGION_ENTRY_INV)
			goto out;
		table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
		table = table + (((to + off) >> 20) & 0x7ff);
		/* Clear segment table entry in guest address space. */
		flush |= gmap_unlink_segment(gmap, table);
		*table = _SEGMENT_ENTRY_INV;
	}
out:
	up_read(&gmap->mm->mmap_sem);
if (flush)
gmap_flush_tlb(gmap);
return 0;
}
EXPORT_SYMBOL_GPL(gmap_unmap_segment);
/**
* gmap_map_segment - map a segment to the guest address space
* @gmap: pointer to the guest address space structure
* @from: source address in the parent address space
* @to: target address in the guest address space
* @len: length of the memory area to map
*
* Returns 0 if the map succeeded, -EINVAL or -ENOMEM if not.
*/
int gmap_map_segment(struct gmap *gmap, unsigned long from,
unsigned long to, unsigned long len)
{
unsigned long *table;
unsigned long off;
int flush;
if ((from | to | len) & (PMD_SIZE - 1))
return -EINVAL;
if (len == 0 || from + len > PGDIR_SIZE ||
from + len < from || to + len < to)
return -EINVAL;
flush = 0;
down_read(&gmap->mm->mmap_sem);
for (off = 0; off < len; off += PMD_SIZE) {
/* Walk the gmap address space page table */
table = gmap->table + (((to + off) >> 53) & 0x7ff);
if ((*table & _REGION_ENTRY_INV) &&
gmap_alloc_table(gmap, table, _REGION2_ENTRY_EMPTY))
goto out_unmap;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + (((to + off) >> 42) & 0x7ff);
if ((*table & _REGION_ENTRY_INV) &&
gmap_alloc_table(gmap, table, _REGION3_ENTRY_EMPTY))
goto out_unmap;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + (((to + off) >> 31) & 0x7ff);
if ((*table & _REGION_ENTRY_INV) &&
gmap_alloc_table(gmap, table, _SEGMENT_ENTRY_EMPTY))
goto out_unmap;
table = (unsigned long *) (*table & _REGION_ENTRY_ORIGIN);
table = table + (((to + off) >> 20) & 0x7ff);
/* Store 'from' address in an invalid segment table entry. */
flush |= gmap_unlink_segment(gmap, table);
*table = _SEGMENT_ENTRY_INV | _SEGMENT_ENTRY_RO | (from + off);
}
up_read(&gmap->mm->mmap_sem);
if (flush)
gmap_flush_tlb(gmap);
return 0;
out_unmap:
up_read(&gmap->mm->mmap_sem);
gmap_unmap_segment(gmap, to, len);
return -ENOMEM;
}
EXPORT_SYMBOL_GPL(gmap_map_segment);
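
Taken together, the exported functions form the whole gmap life cycle; the sketch below shows the intended usage as kvm-s390 wires it up in this series (run_guest_once is an illustrative name and error handling is abbreviated; addresses and size must be 1 MB aligned as enforced above):

	static int run_guest_once(unsigned long uaddr, unsigned long gaddr,
				  unsigned long size)
	{
		struct gmap *gmap;
		int rc;

		gmap = gmap_alloc(current->mm);		/* one gmap per VM */
		if (!gmap)
			return -ENOMEM;
		rc = gmap_map_segment(gmap, uaddr, gaddr, size);
		if (!rc) {
			gmap_enable(gmap);	/* CR1 now holds the guest ASCE */
			/* ... enter the guest via sie64a() ... */
			gmap_disable(gmap);	/* back to the normal user ASCE */
			gmap_unmap_segment(gmap, gaddr, size);
		}
		gmap_free(gmap);
		return rc;
	}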
unsigned long gmap_fault(unsigned long address, struct gmap *gmap)
{
unsigned long *table, vmaddr, segment;
struct mm_struct *mm;
struct gmap_pgtable *mp;
struct gmap_rmap *rmap;
struct vm_area_struct *vma;
struct page *page;
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
current->thread.gmap_addr = address;
mm = gmap->mm;
/* Walk the gmap address space page table */
table = gmap->table + ((address >> 53) & 0x7ff);
if (unlikely(*table & _REGION_ENTRY_INV))
return -EFAULT;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + ((address >> 42) & 0x7ff);
if (unlikely(*table & _REGION_ENTRY_INV))
return -EFAULT;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + ((address >> 31) & 0x7ff);
if (unlikely(*table & _REGION_ENTRY_INV))
return -EFAULT;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + ((address >> 20) & 0x7ff);
/* Convert the gmap address to an mm address. */
segment = *table;
if (likely(!(segment & _SEGMENT_ENTRY_INV))) {
page = pfn_to_page(segment >> PAGE_SHIFT);
mp = (struct gmap_pgtable *) page->index;
return mp->vmaddr | (address & ~PMD_MASK);
} else if (segment & _SEGMENT_ENTRY_RO) {
vmaddr = segment & _SEGMENT_ENTRY_ORIGIN;
vma = find_vma(mm, vmaddr);
if (!vma || vma->vm_start > vmaddr)
return -EFAULT;
/* Walk the parent mm page table */
pgd = pgd_offset(mm, vmaddr);
pud = pud_alloc(mm, pgd, vmaddr);
if (!pud)
return -ENOMEM;
pmd = pmd_alloc(mm, pud, vmaddr);
if (!pmd)
return -ENOMEM;
if (!pmd_present(*pmd) &&
__pte_alloc(mm, vma, pmd, vmaddr))
return -ENOMEM;
/* pmd now points to a valid segment table entry. */
rmap = kmalloc(sizeof(*rmap), GFP_KERNEL|__GFP_REPEAT);
if (!rmap)
return -ENOMEM;
/* Link gmap segment table entry location to page table. */
page = pmd_page(*pmd);
mp = (struct gmap_pgtable *) page->index;
rmap->entry = table;
list_add(&rmap->list, &mp->mapper);
/* Set gmap segment table entry to page table. */
*table = pmd_val(*pmd) & PAGE_MASK;
return vmaddr | (address & ~PMD_MASK);
}
return -EFAULT;
}
EXPORT_SYMBOL_GPL(gmap_fault);
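
The magic shifts in the walks above (53, 42, 31, 20) all extract 11-bit indices: every region or segment table holds 2048 entries of 8 bytes, i.e. 16 KB, which matches the order-2 page allocation (ALLOC_ORDER) used by gmap_alloc_table(). A one-line restatement:

	/* Index into the table at one level of the gmap walk; shift is
	 * 53 (region 1), 42 (region 2), 31 (region 3) or 20 (segment),
	 * each level consuming 11 address bits. */
	static inline unsigned int gmap_table_index(unsigned long addr, int shift)
	{
		return (addr >> shift) & 0x7ff;
	}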
void gmap_unmap_notifier(struct mm_struct *mm, unsigned long *table)
{
struct gmap_rmap *rmap, *next;
struct gmap_pgtable *mp;
struct page *page;
int flush;
flush = 0;
spin_lock(&mm->page_table_lock);
page = pfn_to_page(__pa(table) >> PAGE_SHIFT);
mp = (struct gmap_pgtable *) page->index;
list_for_each_entry_safe(rmap, next, &mp->mapper, list) {
*rmap->entry =
_SEGMENT_ENTRY_INV | _SEGMENT_ENTRY_RO | mp->vmaddr;
list_del(&rmap->list);
kfree(rmap);
flush = 1;
}
spin_unlock(&mm->page_table_lock);
if (flush)
__tlb_flush_global();
}
static inline unsigned long *page_table_alloc_pgste(struct mm_struct *mm,
unsigned long vmaddr)
{
struct page *page;
unsigned long *table;
struct gmap_pgtable *mp;
page = alloc_page(GFP_KERNEL|__GFP_REPEAT);
if (!page)
return NULL;
mp = kmalloc(sizeof(*mp), GFP_KERNEL|__GFP_REPEAT);
if (!mp) {
__free_page(page);
return NULL;
}
pgtable_page_ctor(page);
mp->vmaddr = vmaddr & PMD_MASK;
INIT_LIST_HEAD(&mp->mapper);
page->index = (unsigned long) mp;
atomic_set(&page->_mapcount, 3);
table = (unsigned long *) page_to_phys(page);
clear_table(table, _PAGE_TYPE_EMPTY, PAGE_SIZE/2);
......@@ -167,24 +512,57 @@ static inline unsigned long *page_table_alloc_pgste(struct mm_struct *mm)
static inline void page_table_free_pgste(unsigned long *table)
{
struct page *page;
struct gmap_pgtable *mp;
page = pfn_to_page(__pa(table) >> PAGE_SHIFT);
mp = (struct gmap_pgtable *) page->index;
BUG_ON(!list_empty(&mp->mapper));
pgtable_page_dtor(page);
atomic_set(&page->_mapcount, -1);
kfree(mp);
__free_page(page);
}
#endif
unsigned long *page_table_alloc(struct mm_struct *mm)
#else /* CONFIG_PGSTE */
static inline unsigned long *page_table_alloc_pgste(struct mm_struct *mm,
unsigned long vmaddr)
{
	return NULL;
}
static inline void page_table_free_pgste(unsigned long *table)
{
}
static inline void gmap_unmap_notifier(struct mm_struct *mm,
unsigned long *table)
{
}
#endif /* CONFIG_PGSTE */
static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits)
{
unsigned int old, new;
do {
old = atomic_read(v);
new = old ^ bits;
} while (atomic_cmpxchg(v, old, new) != old);
return new;
}
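
atomic_xor_bits() is a lock-free read-modify-write: the cmpxchg loop retries until the xor has been applied to an unchanged value, so concurrent alloc/free of different 1K/2K fragments in the same page cannot lose updates. A usage sketch against the fragment mask kept in page->_mapcount (illustrative helper name):

	/* Returns true if clearing this fragment bit left the page empty. */
	static bool fragment_was_last(struct page *page, unsigned int bit)
	{
		return atomic_xor_bits(&page->_mapcount, bit) == 0;
	}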
/*
* page table entry allocation/free routines.
*/
unsigned long *page_table_alloc(struct mm_struct *mm, unsigned long vmaddr)
{
struct page *page;
unsigned long *table;
unsigned int mask, bit;
#ifdef CONFIG_PGSTE
if (mm_has_pgste(mm))
return page_table_alloc_pgste(mm);
#endif
return page_table_alloc_pgste(mm, vmaddr);
/* Allocate fragments of a 4K page as 1K/2K page table */
spin_lock_bh(&mm->context.list_lock);
mask = FRAG_MASK;
......@@ -222,10 +600,10 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
struct page *page;
unsigned int bit, mask;
#ifdef CONFIG_PGSTE
if (mm_has_pgste(mm))
if (mm_has_pgste(mm)) {
gmap_unmap_notifier(mm, table);
return page_table_free_pgste(table);
#endif
}
/* Free 1K/2K page table fragment of a 4K page */
page = pfn_to_page(__pa(table) >> PAGE_SHIFT);
bit = 1 << ((__pa(table) & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t)));
......@@ -249,10 +627,8 @@ static void __page_table_free_rcu(void *table, unsigned bit)
{
struct page *page;
#ifdef CONFIG_PGSTE
if (bit == FRAG_MASK)
return page_table_free_pgste(table);
#endif
/* Free 1K/2K page table fragment of a 4K page */
page = pfn_to_page(__pa(table) >> PAGE_SHIFT);
if (atomic_xor_bits(&page->_mapcount, bit) == 0) {
......@@ -269,13 +645,12 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table)
unsigned int bit, mask;
mm = tlb->mm;
#ifdef CONFIG_PGSTE
if (mm_has_pgste(mm)) {
gmap_unmap_notifier(mm, table);
table = (unsigned long *) (__pa(table) | FRAG_MASK);
tlb_remove_table(tlb, table);
return;
}
#endif
bit = 1 << ((__pa(table) & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t)));
page = pfn_to_page(__pa(table) >> PAGE_SHIFT);
spin_lock_bh(&mm->context.list_lock);
......
......@@ -61,12 +61,12 @@ static inline pmd_t *vmem_pmd_alloc(void)
return pmd;
}
static pte_t __ref *vmem_pte_alloc(void)
static pte_t __ref *vmem_pte_alloc(unsigned long address)
{
pte_t *pte;
if (slab_is_available())
pte = (pte_t *) page_table_alloc(&init_mm);
pte = (pte_t *) page_table_alloc(&init_mm, address);
else
pte = alloc_bootmem(PTRS_PER_PTE * sizeof(pte_t));
if (!pte)
......@@ -120,7 +120,7 @@ static int vmem_add_mem(unsigned long start, unsigned long size, int ro)
}
#endif
if (pmd_none(*pm_dir)) {
pt_dir = vmem_pte_alloc();
pt_dir = vmem_pte_alloc(address);
if (!pt_dir)
goto out;
pmd_populate(&init_mm, pm_dir, pt_dir);
......@@ -205,7 +205,7 @@ int __meminit vmemmap_populate(struct page *start, unsigned long nr, int node)
pm_dir = pmd_offset(pu_dir, address);
if (pmd_none(*pm_dir)) {
pt_dir = vmem_pte_alloc();
pt_dir = vmem_pte_alloc(address);
if (!pt_dir)
goto out;
pmd_populate(&init_mm, pm_dir, pt_dir);
......
This diff has been collapsed.
......@@ -382,6 +382,41 @@ struct dasd_path {
__u8 npm;
};
struct dasd_profile_info {
/* legacy part of profile data, as in dasd_profile_info_t */
unsigned int dasd_io_reqs; /* number of requests processed */
unsigned int dasd_io_sects; /* number of sectors processed */
unsigned int dasd_io_secs[32]; /* histogram of request sizes */
unsigned int dasd_io_times[32]; /* histogram of request times */
unsigned int dasd_io_timps[32]; /* hist. of request times per sector */
unsigned int dasd_io_time1[32]; /* hist. of time from build to start */
unsigned int dasd_io_time2[32]; /* hist. of time from start to irq */
unsigned int dasd_io_time2ps[32]; /* hist. of time from start to irq, per sector */
unsigned int dasd_io_time3[32]; /* hist. of time from irq to end */
unsigned int dasd_io_nr_req[32]; /* hist. of # of requests in chanq */
/* new data */
struct timespec starttod; /* time of start or last reset */
unsigned int dasd_io_alias; /* requests using an alias */
unsigned int dasd_io_tpm; /* requests using transport mode */
unsigned int dasd_read_reqs; /* total number of read requests */
unsigned int dasd_read_sects; /* total number of sectors read */
unsigned int dasd_read_alias; /* read requests using an alias */
unsigned int dasd_read_tpm; /* read requests in transport mode */
unsigned int dasd_read_secs[32]; /* histogram of read request sizes */
unsigned int dasd_read_times[32]; /* histogram of read request times */
unsigned int dasd_read_time1[32]; /* hist. of time from build to start */
unsigned int dasd_read_time2[32]; /* hist. of time from start to irq */
unsigned int dasd_read_time3[32]; /* hist. of time from irq to end */
unsigned int dasd_read_nr_req[32]; /* hist. of # of requests in chanq */
};
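
All the [32] arrays are histograms; the dasd driver has traditionally filled them with logarithmic binning, where bucket i collects values below 2^(i+2) and the last bucket takes the rest (this mirrors the driver's long-standing dasd_profile_counter() convention and is shown here only as a sketch):

	static void dasd_histogram_add(unsigned int hist[32], unsigned int value)
	{
		int i;

		for (i = 0; i < 31 && value >> (2 + i); i++)
			;
		hist[i]++;
	}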
struct dasd_profile {
struct dentry *dentry;
struct dasd_profile_info *data;
spinlock_t lock;
};
struct dasd_device {
/* Block device stuff. */
struct dasd_block *block;
......@@ -431,6 +466,9 @@ struct dasd_device {
/* default expiration time in s */
unsigned long default_expires;
struct dentry *debugfs_dentry;
struct dasd_profile profile;
};
struct dasd_block {
......@@ -453,9 +491,8 @@ struct dasd_block {
struct tasklet_struct tasklet;
struct timer_list timer;
#ifdef CONFIG_DASD_PROFILE
struct dasd_profile_info_t profile;
#endif
struct dentry *debugfs_dentry;
struct dasd_profile profile;
};
......@@ -589,12 +626,13 @@ dasd_check_blocksize(int bsize)
}
/* externals in dasd.c */
#define DASD_PROFILE_ON 1
#define DASD_PROFILE_OFF 0
#define DASD_PROFILE_OFF 0
#define DASD_PROFILE_ON 1
#define DASD_PROFILE_GLOBAL_ONLY 2
extern debug_info_t *dasd_debug_area;
extern struct dasd_profile_info_t dasd_global_profile;
extern unsigned int dasd_profile_level;
extern struct dasd_profile_info dasd_global_profile_data;
extern unsigned int dasd_global_profile_level;
extern const struct block_device_operations dasd_device_operations;
extern struct kmem_cache *dasd_page_cache;
......@@ -662,6 +700,11 @@ void dasd_device_remove_stop_bits(struct dasd_device *, int);
int dasd_device_is_ro(struct dasd_device *);
void dasd_profile_reset(struct dasd_profile *);
int dasd_profile_on(struct dasd_profile *);
void dasd_profile_off(struct dasd_profile *);
void dasd_global_profile_reset(void);
char *dasd_get_user_string(const char __user *, size_t);
/* externals in dasd_devmap.c */
extern int dasd_max_devindex;
......
This diff has been collapsed.
This diff has been collapsed.
......@@ -116,9 +116,6 @@ config S390_TAPE
called tape390 and include all selected interfaces and
hardware drivers.
comment "S/390 tape interface support"
depends on S390_TAPE
comment "S/390 tape hardware support"
depends on S390_TAPE
......
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
......@@ -869,11 +869,11 @@ int page_referenced(struct page *page,
vm_flags);
if (we_locked)
unlock_page(page);
if (page_test_and_clear_young(page_to_pfn(page)))
referenced++;
}
out:
if (page_test_and_clear_young(page_to_pfn(page)))
referenced++;
return referenced;
}
......
This diff has been collapsed.