- 29 April 2014, 1 commit
-
-
By David Hildenbrand

This patch enables the IBS facility when a single VCPU is running. The facility is dynamically turned on/off as soon as other VCPUs enter/leave the stopped state. When this facility is operating, some instructions can be executed faster for single-cpu guests.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
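The toggling logic boils down to counting running VCPUs — a minimal sketch with hypothetical helper names (set_ibs()/unset_ibs() and the stopped-state predicate are placeholders, not the actual kernel symbols):

    /* called whenever a VCPU enters or leaves the stopped state */
    static void update_ibs_state(struct kvm *kvm)
    {
            struct kvm_vcpu *vcpu;
            int i, running = 0;

            kvm_for_each_vcpu(i, vcpu, kvm)
                    if (!is_vcpu_stopped(vcpu))
                            running++;

            if (running == 1)
                    set_ibs(kvm);    /* hypothetical: exactly one VCPU runs */
            else
                    unset_ibs(kvm);  /* hypothetical */
    }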
-
- 22 April 2014, 21 commits
-
-
By David Hildenbrand

This patch adds support to debug the guest using the PER facility on s390. Single-stepping, hardware breakpoints and hardware watchpoints are supported. In order to use the PER facility of the guest without the guest noticing it, the control registers of the guest have to be patched and access to them has to be intercepted (stctl, stctg, lctl, lctlg). All PER program interrupts have to be intercepted, and only the relevant PER interrupts for the guest have to be given back. Special care has to be taken about repeated exits on the same hardware breakpoint. The intervention of the host in the guest's PER configuration is not fully transparent: PER instruction nullification cannot be used by the guest, and too many storage alteration events may be reported to the guest (if it is activated for special address ranges only) when the host is concurrently debugging it.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By David Hildenbrand

This patch adds the structs to the kernel headers needed to pass information from/to userspace in order to debug a guest on s390 with hardware support.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
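A sketch of the kind of uapi structures this entails — the field layout shown here is illustrative; the authoritative definitions live in arch/s390/include/uapi/asm/kvm.h:

    struct kvm_hw_breakpoint {
            __u64 addr;       /* breakpoint/watchpoint address */
            __u64 phys_addr;
            __u64 len;        /* watched length for watchpoints */
            __u8 type;        /* hardware breakpoint vs. watchpoint */
            __u8 pad[7];
    };

    struct kvm_guest_debug_arch {
            __u32 nr_hw_bp;
            __u32 pad;
            struct kvm_hw_breakpoint __user *hw_bp;
    };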
-
By David Hildenbrand

Introduce the methods to emulate the stctl and stctg instructions. Also add tracing code.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By David Hildenbrand

Until now, when a program interrupt was to be delivered, no program interrupt parameters were stored in the low-core of the target vcpu. This patch enables the delivery of those program interrupt parameters, takes care of concurrent PER events which can be injected in addition to any program interrupt, and uses the correct instruction length code (depending on the interception code) for the injection of program interrupts.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By David Hildenbrand

Whenever a program interrupt is intercepted, some parameters are stored in the sie control block. These parameters have to be extracted in order to be reinjected correctly. This patch also takes care of intercepted PER events which can occur in addition to any program interrupt.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Jens Freimann

per_perc_atmid is currently a two-byte field that combines two fields, the PER code and the PER Addressing-and-Translation-Mode Identification (ATMID). Let's make them accessible independently and also rename per_cause to per_code.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Jens Freimann

According to the Principles of Operation, at offset 0xA3 in the lowcore we have the "Architectural-Mode identification", not an "access identification".

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Heiko Carstens

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Heiko Carstens

Check if siif is available before setting it.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Heiko Carstens

Add a 'struct kvm_s390_pgm_info pgm' member to kvm_vcpu_arch. This structure will be used if exception data needs to be stored somewhere during instruction emulation in the context of a vcpu. Also add a helper function kvm_s390_inject_prog_cond() which can inject a vcpu's last exception if needed.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
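This enables an error-handling pattern along these lines in emulation code — a sketch, where guest_read() stands in for a hypothetical emulation step that may record a program exception in vcpu->arch.pgm:

    /* rc < 0: host error; rc > 0: program exception to forward to guest */
    rc = guest_read(vcpu, ga, &val, sizeof(val));  /* hypothetical step */
    if (rc)
            return kvm_s390_inject_prog_cond(vcpu, rc);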
-
By Heiko Carstens

Add 'union ctlreg0_bits' to easily allow setting and testing bits of control register 0. This patch only adds the bits needed for the new guest access functions. Other bits and control registers can be added when needed.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
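The idea is an overlay of named bit-fields on the raw register value. A sketch, using the union name from the commit message; the exact bit positions shown are illustrative:

    union ctlreg0_bits {
            unsigned long val;
            struct {
                    unsigned long      : 35;
                    unsigned long lap  : 1;  /* low-address protection */
                    unsigned long      : 4;
                    unsigned long edat : 1;  /* enhanced-DAT enablement */
                    unsigned long      : 23;
            };
    };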
-
By Heiko Carstens

Introduce a 'struct psw' which makes it easier to decode and test if certain bits in a psw are set or not set. In addition, also add a 'psw_bits()' helper define which allows directly modifying and testing a psw_t structure, e.g.:

    psw_t psw;
    psw_bits(psw).t = 1; /* set dat bit */

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Jens Freimann

Add a new data structure and function that allow injecting all kinds of interrupts as defined in the Principles of Operation (PoP).

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
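The shape of such a structure is roughly the following — a sketch; the member list shown is illustrative, and the authoritative definition belongs in the kvm uapi headers:

    struct kvm_s390_irq {
            __u64 type;                       /* KVM_S390_INT_* type code */
            union {
                    struct kvm_s390_io_info io;
                    struct kvm_s390_ext_info ext;
                    struct kvm_s390_pgm_info pgm;
                    struct kvm_s390_emerg_info emerg;
                    struct kvm_s390_extcall_info extcall;
                    struct kvm_s390_prefix_info prefix;
                    char reserved[64];        /* room for future types */
            } u;
    };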
-
By Dominik Dingel

To enable CMMA and to reset its state we use the vm kvm_device ioctls, encapsulating attributes within the KVM_S390_VM_MEM_CTRL group.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
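From userspace this amounts to a device-attribute ioctl on the vm file descriptor — a sketch, assuming the KVM_S390_VM_MEM_ENABLE_CMMA attribute name:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int enable_cmma(int vm_fd)
    {
            struct kvm_device_attr attr = {
                    .group = KVM_S390_VM_MEM_CTRL,
                    .attr  = KVM_S390_VM_MEM_ENABLE_CMMA,
            };

            return ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr);
    }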
-
By Dominik Dingel

When userspace resets the guest without notifying kvm, the CMMA state of the pages might be unused, resulting in guest data corruption. To avoid this, CMMA must be enabled only if userspace understands the implications. CMMA must be enabled before vCPU creation, and it can't be switched off once enabled. All subsequently created vCPUs will be enabled for CMMA according to the CMMA state of the VM.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[remove now unnecessary calls to page_table_reset_pgste]
-
By Dominik Dingel

For live migration kvm needs to test and clear the dirty bit of guest pages. For that we add ptep_test_and_clear_user_dirty; to be sure we are not racing with other code, we protect the pte. This needs to be done within the architecture memory management code.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Martin Schwidefsky

Switch the user dirty bit detection used for migration from the hardware provided host change-bit in the pgste to a fault based detection method. This reduces the dependency of the host on the storage key to a point where it becomes possible to enable the RCP bypass for KVM guests. The fault based dirty detection will only indicate changes caused by accesses via the guest address space. The hardware based method can detect all changes, even those caused by I/O or accesses via the kernel page table. The KVM/qemu code needs to take this into account.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Dominik Dingel

The first invocation of storage key operations on a given cpu will be intercepted. On these intercepts we will enable storage keys for the guest and remove the previously added intercepts.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Dominik Dingel

Introduce a new function s390_enable_skey(), which enables storage key handling via setting the use_skey flag in the mmu context. This function is only useful within the context of kvm. Note that enabling storage keys will cause a one-time hiccup when walking the page table; however, it saves us special effort for cases like clear reset while making it possible for us to conform to the architecture. s390_enable_skey() takes the page table lock to prevent resetting storage keys triggered from multiple vcpus.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
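A minimal sketch of the structure this describes, assuming the page_table_reset_pgste() signature suggested by the commit below (mm, start, end, init_skey); the locking details are elided:

    int s390_enable_skey(void)
    {
            struct mm_struct *mm = current->mm;

            if (mm->context.use_skey)
                    return 0;                 /* already enabled */
            /* one-time walk: keys to PAGE_DEFAULT_KEY, pgstes reset */
            page_table_reset_pgste(mm, 0, TASK_SIZE, true);
            mm->context.use_skey = 1;
            return 0;
    }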
-
By Dominik Dingel

page_table_reset_pgste() already does a complete page table walk to reset the pgste. Enhance it to initialize the storage keys to PAGE_DEFAULT_KEY if requested by the caller. This will be used for lazy storage key handling. Also provide an empty stub for !CONFIG_PGSTE. Let's adapt the current code (diag 308) to not clear the keys.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Dominik Dingel

For lazy storage key handling, we need a mechanism to track if the process ever issued a storage key operation. This patch adds the basic infrastructure for making the storage key handling optional, but still leaves it enabled for now by default.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
- 11 April 2014, 1 commit
-
-
By Heiko Carstens

Actually this also enables sys_setattr and sys_getattr, since I forgot to increase NR_syscalls when adding those syscalls.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 09 April 2014, 1 commit
-
-
By Heiko Carstens

smp_stop_cpu() should stop the current cpu even for !CONFIG_SMP. Otherwise machine_halt() will return and the machine generates a panic instead of simply stopping the current cpu:

    Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000000
    CPU: 0 PID: 1 Comm: systemd-shutdow Not tainted 3.14.0-01527-g2b6ef16a6bc5 #10
    [...]
    Call Trace:
    ([<0000000000110db0>] show_trace+0xf8/0x158)
     [<0000000000110e7a>] show_stack+0x6a/0xe8
     [<000000000074dba8>] panic+0xe4/0x268
     [<0000000000140570>] do_exit+0xa88/0xb2c
     [<000000000016e12c>] SyS_reboot+0x1f0/0x234
     [<000000000075da70>] sysc_nr_ok+0x22/0x28
     [<000000007d5a09b4>] 0x7d5a09b4

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 03 April 2014, 5 commits
-
-
By Heiko Carstens

The current uaccess code uses a page table walk in some circumstances, e.g. in case of the atomic futex operations or if running on old hardware which doesn't support the mvcos instruction. However it turned out that the page table walk code does not correctly lock page tables when accessing page table entries. In other words: a different cpu may invalidate a page table entry while the current cpu inspects the pte. This may lead to random data corruption.

Adding correct locking however isn't trivial for all uaccess operations. Especially copy_in_user() is problematic, since that requires holding at least two locks, but must be protected against ABBA deadlock when a different cpu also performs a copy_in_user() operation.

So the solution is a different approach where we change address spaces: user space runs in primary address mode, or access register mode within vdso code, like it currently already does. The kernel usually also runs in home space mode, however when accessing user space the kernel switches to primary or secondary address mode if the mvcos instruction is not available or if a compare-and-swap (futex) instruction on a user space address is performed. KVM however is special, since it requires the kernel to run in home address space while implicitly accessing user space with the sie instruction.

So we end up with:

User space:
- runs in primary or access register mode
- cr1 contains the user asce
- cr7 contains the user asce
- cr13 contains the kernel asce

Kernel space:
- runs in home space mode
- cr1 contains the user or kernel asce -> the kernel asce is loaded when a uaccess requires primary or secondary address mode
- cr7 contains the user or kernel asce (changed with set_fs())
- cr13 contains the kernel asce

In case of uaccess the kernel changes to:
- primary space mode in case of a uaccess (copy_to_user) and uses e.g. the mvcp instruction to access user space. However the kernel will stay in home space mode if the mvcos instruction is available
- secondary space mode in case of futex atomic operations, so that the instructions come from primary address space and data from secondary space

In case of kvm the kernel runs in home space mode, but cr1 gets switched to contain the gmap asce before the sie instruction gets executed. When the sie instruction is finished, cr1 will be switched back to contain the user asce.

A context switch between two processes will always load the kernel asce for the next process in cr1. So the first exit to user space is a bit more expensive (one extra load control register instruction) than before, however it keeps the code rather simple.

In sum this means there is no need to perform any error prone page table walks anymore when accessing user space.

The patch seems to be rather large, however it mainly removes the page table walk code and restores the previously deleted "standard" uaccess code, with a couple of changes. The uaccess without mvcos mode can be enforced with the "uaccess_primary" kernel parameter.

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky

The zEC12 machines introduced the local-clearing control for the IDTE and IPTE instructions. If the control is set, only the TLB of the local CPU is cleared of entries: either all entries of a single address space for IDTE, or the entry for a single page-table entry for IPTE. Without the local-clearing control the TLB flush is broadcast to all CPUs in the configuration, which is expensive.

The reset of the bit mask of the CPUs that need flushing after a non-local IDTE is tricky. As TLB entries for an address space remain in the TLB even if the address space is detached, a new bit field is required to keep track of attached CPUs vs. CPUs in need of a flush. After a non-local flush with IDTE the bit-field of attached CPUs is copied to the bit-field of CPUs in need of a flush. The ordering of operations on cpu_attach_mask, attach_count and mm_cpumask(mm) is such that an underindication in mm_cpumask(mm) is prevented, but an overindication in mm_cpumask(mm) is possible.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky

The Principles of Operation states that the CPU is allowed to create TLB entries for an address space anytime while an ASCE is loaded to the control register. This is true even if the CPU is running in the kernel and the user address space is not (actively) accessed. In theory this can affect two aspects of the TLB flush logic. For full-mm flushes the ASCE of the dying process is still attached, so the approach of flushing first with IDTE and then just freeing all page tables can in theory lead to stale TLB entries. Use the batched free of page tables for the full-mm flushes as well. For operations that can have a stale ASCE in the control register, e.g. a delayed update_user_asce in switch_mm, load the kernel ASCE to prevent invalid TLBs from being created.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Thomas Huth

Use the new defines for external interruption codes to get rid of "magic" numbers in the s390 source code. And while we're at it, also rename the (un-)register_external_interrupt functions to something shorter so that this patch does not exceed 80 columns all over the place.

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Thomas Huth

Introduce defines for external interruption codes so that we can get rid of some "magic" numbers in the s390 source code.

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
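The defines follow the external-interruption codes from the architecture — a sketch; the names shown are illustrative and the authoritative list lives in the s390 irq header:

    #define EXT_IRQ_CLK_COMP        0x1004  /* clock comparator */
    #define EXT_IRQ_CPU_TIMER       0x1005  /* cpu timer */
    #define EXT_IRQ_EMERGENCY_SIG   0x1201  /* emergency signal */
    #define EXT_IRQ_EXTERNAL_CALL   0x1202  /* external call */
    #define EXT_IRQ_SERVICE_SIG     0x2401  /* service signal */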
-
- 01 April 2014, 1 commit
-
-
By Heiko Carstens

When reworking the bitops and atomic ops I missed that those instructions that got atomic behaviour only perform a "specific-operand-serialization" instead of a full "serialization". The compare-and-swap instruction used before performs a full serialization before and after the instruction is executed, which means it has full memory barrier semantics. In order to give the new bitops and atomic ops functions full memory barrier semantics as well, add a "bcr 14,0" before and after each of those new instructions, which performs full serialization too. This restores memory barrier semantics for bitops and atomic ops functions which return values, like e.g. atomic_add_return(), but not for functions which do not return a value, like e.g. atomic_add(). This is consistent with other architectures and what common code requires.

Cc: stable@vger.kernel.org # v3.13+
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
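The barrier pattern boils down to something like this — a sketch using a compiler builtin as a stand-in for the actual interlocked-access instruction:

    /* "bcr 14,0" acts as a full serialization point (memory barrier) */
    #define S390_FULL_BARRIER() asm volatile("bcr 14,0" : : : "memory")

    static inline int atomic_add_return_sketch(int i, int *v)
    {
            int old;

            S390_FULL_BARRIER();
            old = __sync_fetch_and_add(v, i);  /* stand-in for laa */
            S390_FULL_BARRIER();
            return old + i;
    }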
-
- 25 March 2014, 1 commit
-
-
By Jens Freimann

We need BITS_TO_LONGS, not sizeof(long), to calculate the correct size. idle_mask is a bitmask, each bit representing the state of a cpu. The desired outcome is an array of unsigned long fields that can fit KVM_MAX_VCPUS bits. We should not use sizeof(long), which returns the size in bytes, but BITS_TO_LONGS.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
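The shape of the fix — a sketch; the exact declaration before the change may have differed:

    /* before (wrong): sizeof(long) counts bytes, not bits per long */
    unsigned long idle_mask[(KVM_MAX_VCPUS + sizeof(long) - 1) / sizeof(long)];

    /* after: enough longs to hold one bit per possible vcpu */
    unsigned long idle_mask[BITS_TO_LONGS(KVM_MAX_VCPUS)];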
-
- 21 March 2014, 4 commits
-
-
By Cornelia Huck

Introduce a new interrupt class for s390 adapter interrupts and enable irqfds for s390. This depends on a new s390-specific vm capability, KVM_CAP_S390_IRQCHIP, that needs to be enabled by userspace.

Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
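Userspace opt-in is a vm-level capability enablement — a sketch of the expected call:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int enable_s390_irqchip(int vm_fd)
    {
            struct kvm_enable_cap cap = { .cap = KVM_CAP_S390_IRQCHIP };

            return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }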
-
By Cornelia Huck

Add a new interface to register/deregister sources of adapter interrupts, identified by a unique id, via the flic. Adapters may also be maskable and carry a list of pinned pages. These adapters will be used by irq routing later.

Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
-
By Dominik Dingel

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Dominik Dingel

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 20 March 2014, 2 commits
-
-
By Eric Paris

The syscall.h headers were including linux/audit.h but really only needed uapi/linux/audit.h to get the requisite defines. Switch to the uapi headers.

Signed-off-by: Eric Paris <eparis@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-mips@linux-mips.org
Cc: linux-s390@vger.kernel.org
Cc: x86@kernel.org
-
By Eric Paris

Every caller of syscall_get_arch() uses current for the task and no implementors of the function need args. So just get rid of both of those things. Admittedly, since these are inline functions, we aren't wasting stack space, but it just makes the prototypes better.

Signed-off-by: Eric Paris <eparis@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-mips@linux-mips.org
Cc: linux390@de.ibm.com
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-s390@vger.kernel.org
Cc: linux-arch@vger.kernel.org
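In effect the prototype changes like this — a sketch; the exact parameter list before the change varied by architecture:

    /* before: callers always passed current (and sometimes regs) */
    static inline int syscall_get_arch(struct task_struct *task,
                                       struct pt_regs *regs);

    /* after: implicitly acts on current */
    static inline int syscall_get_arch(void);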
-
- 17 March 2014, 1 commit
-
-
By Heiko Carstens

Limit the number of bits to the maximum number of cpus a machine can have. possible_cpu_mask typically will have more bits set than a machine may physically have. This results in wasted memory during per-cpu memory allocations, if the possible mask contains more cpus than physically possible for a given configuration.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 14 March 2014, 1 commit
-
-
By Martin Schwidefsky

The PTRACE_SINGLEBLOCK option is used to get control whenever the inferior has executed a successful branch. The PER option used to implement block stepping is the successful-branching event, bit 32 in the PER-event mask.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
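From the tracer's point of view this behaves like the other ptrace stepping requests — a short usage sketch:

    #include <sys/ptrace.h>

    /* resume the child; it stops again after the next successful branch */
    ptrace(PTRACE_SINGLEBLOCK, pid, 0L, 0L);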
-
- 06 March 2014, 1 commit
-
-
By Heiko Carstens

Enforce 32 bit types for all compat syscall argument types. This way we can make sure that all arguments get correct sign or zero extension. Otherwise incorrect code would be generated, e.g. for a 'long' type the COMPAT_SYSCALL_DEFINE macro wouldn't generate code that would cause sign extension of the passed in 32 bit user space parameter. This can cause quite subtle bugs, like e.g. the one that was fixed with dfd948e3 "fs/compat: fix parameter handling for compat readv/writev syscalls".

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
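The referenced readv/writev fix illustrates the rule — a sketch of a compat definition that sticks to 32 bit types so the generated wrapper zero-extends the user-supplied values:

    /* compat_ulong_t forces correct zero extension of the 32 bit args */
    COMPAT_SYSCALL_DEFINE3(writev, compat_ulong_t, fd,
                           const struct compat_iovec __user *, vec,
                           compat_ulong_t, vlen);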
-