- 30 October 2011, 3 commits
-
-
Committed by Carsten Otte
KVM common code does vcpu_load prior to calling our arch ioctls and vcpu_put after we're done here. Via the kvm_arch_vcpu_load/put callbacks we do load the fpu and access register state into the processor, which saves us moving the state on every SIE exit the kernel handles. However this breaks register setting from userspace, because of the following sequence:

1a. vcpu load stores userspace register content
1b. vcpu load loads guest register content
2.  kvm_arch_vcpu_ioctl_set_fpu/sregs updates saved guest register content
3a. vcpu put stores the guest registers and overwrites the new content
3b. vcpu put loads the userspace register set again

This patch loads the new guest register state into the cpu, so that the correct (new) set of guest registers will be stored in step 3a.

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
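For illustration, a minimal userspace sketch of step 2 above (an assumption for clarity, not QEMU or kernel code): guest register state being updated through the ordinary KVM vcpu ioctls while the vcpu is not running.

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        int vm = ioctl(kvm, KVM_CREATE_VM, 0);
        if (vm < 0) { perror("KVM_CREATE_VM"); return 1; }

        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
        if (vcpu < 0) { perror("KVM_CREATE_VCPU"); return 1; }

        /* Step 2 of the sequence: KVM_SET_FPU ends up in
         * kvm_arch_vcpu_ioctl_set_fpu; before the fix the state written
         * here was overwritten again by vcpu_put (step 3a). */
        struct kvm_fpu fpu;
        memset(&fpu, 0, sizeof(fpu));
        if (ioctl(vcpu, KVM_SET_FPU, &fpu) < 0)
            perror("KVM_SET_FPU");
        return 0;
    }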
-
Committed by Carsten Otte
This patch fixes the return value of kvm_arch_init_vm in case a memory allocation goes wrong.

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Carsten Otte
We use the cpu id provided by userspace as array index here. Thus we clearly need to check it first. Ooops.

CC: <stable@vger.kernel.org>
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
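The general shape of the fix, as a small standalone C sketch (the names are illustrative, not the actual kvm-s390 code): validate the caller-supplied id before using it as an array index.

    #include <stddef.h>
    #include <stdio.h>

    #define MAX_VCPUS 64

    struct vcpu { int id; };

    static struct vcpu *vcpus[MAX_VCPUS];

    /* reject out-of-range ids instead of indexing blindly */
    static struct vcpu *lookup_vcpu(unsigned int cpu_id)
    {
        if (cpu_id >= MAX_VCPUS)
            return NULL;
        return vcpus[cpu_id];
    }

    int main(void)
    {
        static struct vcpu cpu0 = { .id = 0 };
        vcpus[0] = &cpu0;
        printf("%p %p\n", (void *)lookup_vcpu(0), (void *)lookup_vcpu(4096));
        return 0;
    }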
-
- 20 September 2011, 2 commits
-
-
Committed by Christian Borntraeger
598841ca ([S390] use gmap address spaces for kvm guest images) changed kvm on s390 to use a separate address space for kvm guests. We can now put KVM guests anywhere in the user address mode with a size up to 8PB - as long as the memory is 1MB-aligned. This change was done without a KVM extension capability bit. The change was added after 3.0, but we still have a chance to add a feature bit before 3.1 (keeping the releases in a sane state). We use number 71 to avoid collisions with other pending kvm patches, as requested by Alexander Graf.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Avi Kivity <avi@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
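A hedged userspace sketch of how a VMM could probe the new bit before relying on gmap-based guest placement; the constant name KVM_CAP_S390_GMAP and its fallback definition are assumptions here, only the number 71 comes from the patch text.

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    #ifndef KVM_CAP_S390_GMAP
    #define KVM_CAP_S390_GMAP 71    /* capability number from the patch */
    #endif

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) {
            perror("open /dev/kvm");
            return 1;
        }
        /* KVM_CHECK_EXTENSION returns > 0 when the capability is present */
        int ret = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_S390_GMAP);
        printf("gmap-based guest placement: %s\n",
               ret > 0 ? "supported" : "not reported");
        return 0;
    }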
-
Committed by Christian Borntraeger
598841ca ([S390] use gmap address spaces for kvm guest images) changed kvm to use a separate address space for kvm guests. This address space was switched in __vcpu_run. In some cases (preemption, page fault) there is the possibility that this address space switch is lost. The typical symptom was a huge amount of validity intercepts or random guest addressing exceptions.

Fix this by doing the switch in sie_loop and sie_exit and saving the address space in the gmap structure itself. Also use the preempt notifier.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
- 27 August 2011, 1 commit
-
-
Committed by NeilBrown
The nfsservctl system call is now gone, so we should remove all linkage for it.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 August 2011, 3 commits
-
-
Committed by Michael Holzheu
The main purpose for PSW restart will be kdump. Therefore customers will issue "system restart" for creating a dump. If kdump is not enabled, currently "PSW restart" will reboot the system and then no dump can be created any more. In order to still allow a manual stand-alone dump in the case a user issues "PSW restart" on a system that has not enabled kdump, we now stop the system.

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Julia Lawall
reipl_fcp_kset was just initialized, so it appears that it should be tested instead of reipl_kset.

Signed-off-by: Julia Lawall <julia@diku.dk>
Reported-by: Suman Saha <sumsaha@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
When IPL'ing from a block device and an NSS should be created, we must make sure that the kernel image and the initrd are in different 1MB segments; otherwise creating the NSS will fail. So we make sure the initrd is 4MB behind the end of the kernel image, as is already done when the IPL is performed via the VM reader.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 03 August 2011, 12 commits
-
-
Committed by Huang Ying
cmpxchg() is widely used by lockless code, including NMI-safe lockless code. But on some architectures the cmpxchg() implementation is not NMI-safe, and on these architectures the lockless code may need a spin_trylock_irqsave()-based implementation. This patch adds a Kconfig option, ARCH_HAVE_NMI_SAFE_CMPXCHG, so that NMI-safe lockless code can depend on it or provide a different implementation according to it. On many architectures cmpxchg is only NMI-safe for several specific operand sizes, so the ARCH_HAVE_NMI_SAFE_CMPXCHG added in this patch only guarantees that cmpxchg is NMI-safe for sizeof(unsigned long).

Signed-off-by: Huang Ying <ying.huang@intel.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Richard Henderson <rth@twiddle.net>
CC: Mikael Starvik <starvik@axis.com>
Acked-by: David Howells <dhowells@redhat.com>
CC: Yoshinori Sato <ysato@users.sourceforge.jp>
CC: Tony Luck <tony.luck@intel.com>
CC: Hirokazu Takata <takata@linux-m32r.org>
CC: Geert Uytterhoeven <geert@linux-m68k.org>
CC: Michal Simek <monstr@monstr.eu>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
CC: Kyle McMartin <kyle@mcmartin.ca>
CC: Martin Schwidefsky <schwidefsky@de.ibm.com>
CC: Chen Liqin <liqin.chen@sunplusct.com>
CC: "David S. Miller" <davem@davemloft.net>
CC: Ingo Molnar <mingo@redhat.com>
CC: Chris Zankel <chris@zankel.net>
Signed-off-by: Len Brown <len.brown@intel.com>
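A self-contained C11 illustration (a userspace analogy, not kernel code) of the kind of lockless update that motivates the option: a compare-and-swap retry loop never takes a lock, which is what makes it usable from NMI context, provided the compare-and-swap itself is NMI-safe.

    #include <stdatomic.h>
    #include <stdio.h>

    struct node {
        struct node *next;
        int val;
    };

    static _Atomic(struct node *) head;

    /* lockless stack push: retries until no other context raced with us */
    static void push(struct node *n)
    {
        struct node *old = atomic_load(&head);
        do {
            n->next = old;
        } while (!atomic_compare_exchange_weak(&head, &old, n));
    }

    int main(void)
    {
        static struct node a = { .val = 1 }, b = { .val = 2 };
        push(&a);
        push(&b);
        for (struct node *n = atomic_load(&head); n; n = n->next)
            printf("%d\n", n->val);
        return 0;
    }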
-
Committed by Heiko Carstens
We should call set_restore_sigmask() instead of directly setting TIF_RESTORE_SIGMASK. This change should have been done three years earlier... see 4e4c22 "signals: add set_restore_sigmask".

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Heiko Carstens
Remove pointless comments in startup_secondary(). There is not too much value in having comments like e.g. "call cpu notifiers" just before a call to notify_cpu*().

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Mathias Krause
The address limit is already set in flush_old_exec() so those calls to set_fs(USER_DS) are redundant.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Heiko Carstens
This is the same as fd8a7de1 "x86: cpu-hotplug: Prevent softirq wakeup on wrong CPU". Unlike on x86 this doesn't fix a bug on s390, since we do not have threaded interrupt handlers. However we want to keep the same initialization order as on x86. This should prevent bugs caused by code which assumes (and relies on) the init order being the same on each architecture.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Heiko Carstens
Convert to use set_current_blocked() like x86.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Heiko Carstens
For readability reasons we ignore the 80-character line limit in asm offsets. Just one line per define, nothing else.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Heiko Carstens
Just fix up the Kconfig description and the elf platform.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Michael Holzheu
The diagnose 308 call is the preferred method for clearing all ongoing I/O. Therefore, if it is available we use it instead of doing a manual reset.

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Michael Holzheu
For kdump we need a store status function to save the registers for the current CPU. Therefore this patch exports a function "store_status()". In addition, floating point registers are now also saved correctly.

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Michael Holzheu
With this patch a new S390 shutdown trigger "restart" is added. If "system restart" is entered under z/VM or the "PSW restart" button is pressed on the HMC, the PSW located at 0 (31 bit) or 0x1a0 (64 bit) is loaded. Now we execute do_restart(), which processes the restart action that is defined under /sys/firmware/shutdown_actions/on_restart. Currently the following actions are possible: reipl (default), stop, vmcmd, dump, and dump_reipl.

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
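As a usage sketch (the chosen action is just an example; in practice this is typically a plain echo into the sysfs file), the restart action could be selected like this:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/sys/firmware/shutdown_actions/on_restart";
        const char *action = "dump_reipl";  /* or: reipl, stop, vmcmd, dump */
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
            perror(path);
            return 1;
        }
        if (write(fd, action, strlen(action)) < 0)
            perror("write");
        close(fd);
        return 0;
    }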
-
Committed by Jan Glauber
Fix the following compile warning for !CONFIG_PGSTE:

  CC      arch/s390/mm/pgtable.o
  arch/s390/mm/pgtable.c: In function ‘page_table_alloc_pgste’:
  arch/s390/mm/pgtable.c:531:1: warning: no return statement in function returning non-void [-Wreturn-type]

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
- 27 July 2011, 5 commits
-
-
Committed by Arun Sharma
After changing all consumers of atomics to include <linux/atomic.h>, we ran into some compile time errors due to this dependency chain:

  linux/atomic.h -> asm/atomic.h -> asm-generic/atomic-long.h

where atomic-long.h could use funcs defined later in linux/atomic.h without a prototype. This patch moves the code that includes asm-generic/atomic*.h to linux/atomic.h. Archs that need <asm-generic/atomic64.h> need to select CONFIG_GENERIC_ATOMIC64 from now on (some of them used to include it unconditionally).

Compile tested on i386 and x86_64 with allnoconfig.

Signed-off-by: Arun Sharma <asharma@fb.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Arun Sharma
This is in preparation for more generic atomic primitives based on __atomic_add_unless.

Signed-off-by: Arun Sharma <asharma@fb.com>
Signed-off-by: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
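For reference, the semantics being generalized can be written as a short C11 sketch (assumed names, not the kernel implementation): add a to *v unless *v equals u, returning the old value; atomic_inc_not_zero() is then just the (v, 1, 0) case.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* add "a" to *v unless *v == u; return the old value */
    static int add_unless(_Atomic int *v, int a, int u)
    {
        int c = atomic_load(v);

        while (c != u && !atomic_compare_exchange_weak(v, &c, c + a))
            ;   /* a failed CAS reloads c; retry */
        return c;
    }

    static bool inc_not_zero(_Atomic int *v)
    {
        return add_unless(v, 1, 0) != 0;
    }

    int main(void)
    {
        _Atomic int refcount = 1;

        printf("%d\n", inc_not_zero(&refcount));  /* 1: reference taken */
        atomic_store(&refcount, 0);
        printf("%d\n", inc_not_zero(&refcount));  /* 0: already zero */
        return 0;
    }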
-
Committed by Arun Sharma
This allows us to move duplicated code in <asm/atomic.h> (atomic_inc_not_zero() for now) to <linux/atomic.h>.

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Akinobu Mita
The majority of architectures implement ext2 atomic bitops as test_and_{set,clear}_bit() without spinlock. This adds this type of generic implementation in ext2-atomic-setbit.h and use it wherever possible.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Suggested-by: Andreas Dilger <adilger@dilger.ca>
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Mike Frysinger
[ poleg@redhat.com: no need to declare show_regs() in ptrace.h, sched.h does this ]

Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 July 2011, 14 commits
-
-
Committed by Jonas Bonn
This patch removes all the module loader hook implementations in the architecture specific code where the functionality is the same as that now provided by the recently added default hooks.

Signed-off-by: Jonas Bonn <jonas@southpole.se>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Martin Schwidefsky
Provide additional information on SIGTRAP by using a siginfo signal. Use TRAP_BRKPT for breakpoints via illegal operation and TRAP_HWBKPT for breakpoints via program event recording. Provide the address of the instruction that caused the breakpoint via si_addr. While we are at it, get rid of tracehook_consider_fatal_signal.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
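From the userspace side, a debugger-like program could read that extra information roughly like this (an illustrative sketch; raise() merely stands in for hitting a real breakpoint, so it reports "other" when run directly):

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void trap_handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        /* si_code distinguishes the two breakpoint flavours,
         * si_addr carries the address of the trapping instruction */
        const char *kind =
            info->si_code == TRAP_BRKPT  ? "breakpoint (illegal operation)" :
            info->si_code == TRAP_HWBKPT ? "hardware breakpoint (PER)" :
                                           "other";
        fprintf(stderr, "SIGTRAP: %s at %p\n", kind, info->si_addr);
        exit(0);
    }

    int main(void)
    {
        struct sigaction sa = { 0 };

        sa.sa_sigaction = trap_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGTRAP, &sa, NULL);

        raise(SIGTRAP);
        return 1;
    }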
-
Committed by Christian Ehrhardt
SIGP emerg needs to pass the source cpu address into __LC_CPU_ADDRESS of the target guest.

Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Jan Glauber
The cpu measurement alerts that are used for instance by oprofile for hardware sampling are not turned off on a cpu that is going offline. Add the appropriate control register bit that should be disabled to the list.

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Remove outdated bits from the initial cr0 register.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Do not set the cr0 enablement bit for iucv by default in head[31|64].S, move the enablement to iucv_init in the iucv base layer.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Jan Glauber
The (un-)register_external_interrupt functions are not race safe if more than one interrupt handler is added or deleted for an external interrupt concurrently. Make the registration / unregistration of external interrupts race safe by using RCU and a spinlock. RCU is used to avoid a performance penalty in the external interrupt handler; the register and unregister functions are protected by the spinlock and are not performance critical. call_rcu must be used since the SCLP driver uses the interface with IRQs disabled. Also use the generic list implementation rather than home-brewed list code.

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Carsten Otte
This patch removes the mmu reload logic for kvm on s390. Via Martin's new gmap interface, we can safely add or remove memory slots while guest CPUs are in-flight. Thus, the mmu reload logic is not needed anymore.

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Carsten Otte
This patch removes kvm-s390 internal assumption of a linear mapping of guest address space to user space. Previously, guest memory was translated to user addresses using a fixed offset (gmsor). The new code uses gmap_fault to resolve guest addresses.

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Carsten Otte
This patch switches kvm from using (Qemu's) user address space to Martin's gmap address space. This way QEMU does not have to use a linker script in order to fit large guests at low addresses in its address space.

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Add code that allows KVM to control the virtual memory layout that is seen by a guest. The guest address space uses a second page table that shares the last level pte-tables with the process page table. If a page is unmapped from the process page table it is automatically unmapped from the guest page table as well.

The guest address space mapping starts out empty; KVM can map any individual 1MB segment from the process virtual memory to any 1MB aligned location in the guest virtual memory. If a target segment in the process virtual memory does not exist or is unmapped while a guest mapping exists, the desired target address is stored as an invalid segment table entry in the guest page table. The population of the guest page table is fault driven.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Jan Glauber
The alignment is missing for various global symbols in s390 assembly code. With a recent gcc and an instruction like stgrl this can lead to a specification exception if the instruction uses such a mis-aligned address. Specify the alignment explicitly and, while at it, define __ALIGN for s390 and use the ENTRY define to save some lines of code.

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
The entry to / exit from sie has subtle dependencies on the first level interrupt handler. Move the sie assembler code to entry64.S and replace the SIE_HOOK callback with a test and the new _TIF_SIE bit. In addition, this patch fixes several problems in regard to the check for the _TIF_EXIT_SIE bits. The old code checked the TIF bits before executing the interrupt handler and it only modified the instruction address if it pointed directly to the sie instruction. In both cases it could miss a TIF bit that normally would cause an exit from the guest and would reenter the guest context.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-