- 27 Jan 2014, 3 commits
-
-
Committed by Paul Mackerras
On a threaded processor such as POWER7, we group VCPUs into virtual cores and arrange that the VCPUs in a virtual core run on the same physical core. Currently we don't enforce any correspondence between virtual thread numbers within a virtual core and physical thread numbers. Physical threads are allocated starting at 0 on a first-come first-served basis to runnable virtual threads (VCPUs).

POWER8 implements a new "msgsndp" instruction which guest kernels can use to interrupt other threads in the same core or sub-core. Since the instruction takes the destination physical thread ID as a parameter, it becomes necessary to align the physical thread IDs with the virtual thread IDs, that is, to make sure virtual thread N within a virtual core always runs on physical thread N.

This means that it's possible that thread 0, which is where we call __kvmppc_vcore_entry, may end up running some other vcpu than the one whose task called kvmppc_run_core(), or it may end up running no vcpu at all, if for example thread 0 of the virtual core is currently executing in userspace. However, we do need thread 0 to be responsible for switching the MMU -- a previous version of this patch that had other threads switching the MMU was found to be responsible for occasional memory corruption and machine check interrupts in the guest on POWER7 machines.

To accommodate this, we no longer pass the vcpu pointer to __kvmppc_vcore_entry, but instead let the assembly code load it from the PACA. Since the assembly code will need to know the kvm pointer and the thread ID for threads which don't have a vcpu, we move the thread ID into the PACA and we add a kvm pointer to the virtual core structure.

In the case where thread 0 has no vcpu to run, it still calls into kvmppc_hv_entry in order to do the MMU switch, and then naps until either its vcpu is ready to run in the guest, or some other thread needs to exit the guest. In the latter case, thread 0 jumps to the code that switches the MMU back to the host. This control flow means that now we switch the MMU before loading any guest vcpu state. Similarly, on guest exit we now save all the guest vcpu state before switching the MMU back to the host. This has required substantial code movement, making the diff rather large.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
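A minimal sketch of the data-structure side of this change, with simplified shapes and partly assumed field names (the real patch touches the PACA's KVM host-state area and struct kvmppc_vcore):

```c
/* Sketch only: simplified shapes; field names are illustrative. */
struct kvm;
struct kvm_vcpu;

struct kvmppc_vcore {
	int num_threads;
	struct kvm *kvm;        /* new: lets a vcpu-less thread 0 find the
	                         * guest whose MMU context must be switched */
};

struct kvm_host_state {
	struct kvm_vcpu *kvm_vcpu;   /* vcpu this thread should run; the asm in
	                              * __kvmppc_vcore_entry loads it from here */
	struct kvmppc_vcore *kvm_vcore;
	int ptid;                    /* new: physical thread ID within the core */
};
```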
-
Committed by Scott Wood
Simplify the handling of lazy EE by going directly from fully-enabled to hard-disabled. This replaces the lazy_irq_pending() check (including its misplaced kvm_guest_exit() call). As suggested by Tiejun Chen, move the interrupt disabling into kvmppc_prepare_to_enter() rather than have each caller do it. Also move the IRQ enabling on heavyweight exit into kvmppc_prepare_to_enter().

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Cédric Le Goater
MMIO emulation reads the last instruction executed by the guest and then emulates it. If the guest is running in little-endian order, or more generally in an endian order different from the host's, the instruction needs to be byte-swapped before being emulated.

This patch adds a helper routine which tests the endian order of the host and the guest in order to decide whether a byteswap is needed or not. It is then used to byteswap the last instruction of the guest into the endian order of the host before MMIO emulation is performed.

Finally, kvmppc_handle_load() and kvmppc_handle_store() are modified to reverse the endianness of the MMIO if required.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
[agraf: add booke handling]
Signed-off-by: Alexander Graf <agraf@suse.de>
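A hedged sketch of the decision logic: on the big-endian hosts this series targeted, a swap is needed exactly when the guest's MSR has the LE bit set. The helper name matches the description above; the fetch wrapper is purely illustrative.

```c
/* Sketch, assuming a big-endian host and kernel-internal types. */
static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
{
	/* guest little-endian on a big-endian host => swap needed */
	return vcpu->arch.shared->msr & MSR_LE;
}

/* Illustrative use when handing the last instruction to the emulator. */
static u32 fetch_last_inst_host_order(struct kvm_vcpu *vcpu, u32 last_inst)
{
	if (kvmppc_need_byteswap(vcpu))
		last_inst = swab32(last_inst);
	return last_inst;
}
```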
-
- 09 Jan 2014, 7 commits
-
-
Committed by Alexander Graf
We had code duplication between the inline functions to get our last instruction on normal interrupts and system call interrupts. Unify both helper functions towards a single implementation.

Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Bharat Bhushan
KVM uses the same WIM TLB attributes as the corresponding qemu pte. For this we now search the Linux pte for the requested page and get the caching/coherency attributes from that pte.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Reviewed-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Bharat Bhushan
We need to search the Linux pte to get its attributes for setting up the TLB in KVM. This patch defines a lookup_linux_ptep() function which returns the pte pointer.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Reviewed-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
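Roughly the shape such a helper takes, built on the existing find_linux_pte_or_hugepte() walker; treat this as a sketch rather than the exact patch contents:

```c
/* Sketch: return the Linux pte pointer for 'hva', reporting the backing
 * page size via *pte_sizep, and fail if it is smaller than requested. */
static inline pte_t *lookup_linux_ptep(pgd_t *pgdir, unsigned long hva,
				       unsigned long *pte_sizep)
{
	unsigned long wanted = *pte_sizep;
	unsigned int shift;
	pte_t *ptep;

	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift);
	if (!ptep)
		return NULL;

	*pte_sizep = shift ? (1UL << shift) : PAGE_SIZE;
	if (wanted > *pte_sizep)
		return NULL;
	return ptep;
}
```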
-
Committed by Paul Mackerras
This uses struct thread_fp_state and struct thread_vr_state to store the floating-point, VMX/Altivec and VSX state, rather than flat arrays. This makes transferring the state to/from the thread_struct simpler and allows us to unify the get/set_one_reg implementations for the VSX registers.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
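For reference, the container types in question look roughly like this (simplified from the kernel's processor.h; alignment attributes omitted):

```c
/* Simplified sketch of the kernel's FP/VMX state containers. */
struct thread_fp_state {
	u64 fpr[32][TS_FPRWIDTH];   /* FP regs; width 2 when VSX doubles them */
	u64 fpscr;                  /* FP status and control register */
};

struct thread_vr_state {
	vector128 vr[32];           /* Altivec/VMX vector registers */
	vector128 vscr;             /* vector status and control register */
};
```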
-
Committed by Paul Mackerras
The load_up_fpu and load_up_altivec functions were never intended to be called from C, and do things like modifying the MSR value in their callers' stack frames, which are assumed to be interrupt frames. In addition, on 32-bit Book S they require the MMU to be off.

This makes KVM use the new load_fp_state() and load_vr_state() functions instead of load_up_fpu/altivec. This means we can remove the assembler glue in book3s_rmhandlers.S, and potentially fixes a bug on Book E, where load_up_fpu was called directly from C.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
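The resulting call pattern looks roughly like the following; a sketch assuming the vcpu keeps its FP state in vcpu->arch.fp, as the structures from the previous commit suggest:

```c
/* Sketch: hand the guest FP state to the CPU from plain C code.
 * Assumes preemption is already disabled around guest entry. */
static void kvmppc_load_guest_fp_sketch(struct kvm_vcpu *vcpu)
{
	enable_kernel_fp();              /* make FP usable in the kernel */
	load_fp_state(&vcpu->arch.fp);   /* C-callable replacement for the
	                                  * old load_up_fpu asm glue */
}
```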
-
Committed by Bharat Bhushan
kvm_hypercall0() and friends have nothing KVM-specific, so they are renamed to epapr_hypercall0() and friends and moved from arch/powerpc/include/asm/kvm_para.h to arch/powerpc/include/asm/epapr_hcalls.h.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Bharat Bhushan
kvm_hypercall() has nothing KVM-specific, so it is renamed to epapr_hypercall() and moved to arch/powerpc/include/asm/epapr_hcalls.h.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
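After the two renames, the arity wrappers reduce to thin shims over epapr_hypercall(); roughly:

```c
/* Sketch of the renamed wrappers: marshal arguments into the in/out
 * register arrays that the ePAPR hypercall ABI expects. */
static inline long epapr_hypercall0(unsigned int nr)
{
	unsigned long in[8];
	unsigned long out[8];

	return epapr_hypercall(in, out, nr);
}

static inline long epapr_hypercall1(unsigned int nr, unsigned long p1)
{
	unsigned long in[8];
	unsigned long out[8];

	in[0] = p1;
	return epapr_hypercall(in, out, nr);
}
```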
-
- 15 Nov 2013, 1 commit
-
-
Committed by Kirill A. Shutemov
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 09 Nov 2013, 1 commit
-
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 07 Nov 2013, 3 commits
-
-
Committed by Prabhakar Kushwaha
The current IFC driver supports NAND flash with page sizes up to 4K. Add support for 8K page size NAND flash:
- add nand_ecclayout for 4-bit and 8-bit ECC
- define the needed constants
- also fix ecc.strength for 8-bit ECC on 8K page size NAND

Signed-off-by: Prabhakar Kushwaha <prabhakar@freescale.com>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
-
Committed by Oleg Nesterov
Currently xol_get_insn_slot() assumes that we should simply copy arch_uprobe->insn[], which is (ignoring arch_uprobe_analyze_insn) just the copy of the original insn. This is not true for arm, which needs to create another insn to execute it out-of-line.

So this patch simply adds the new member, ->ixol, into the union. This doesn't make any difference for x86 and powerpc, but arm can divorce insn/ixol and initialize the correct xol insn in arch_uprobe_analyze_insn().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
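The change amounts to turning the single instruction buffer into a union; a simplified sketch (on x86 the members are byte arrays as shown, while on powerpc they are single u32 words):

```c
/* Sketch of the union added to the arch-specific uprobe state. */
struct arch_uprobe {
	union {
		u8 insn[MAX_UINSN_BYTES];  /* copy of the original instruction */
		u8 ixol[MAX_UINSN_BYTES];  /* instruction executed out-of-line;
		                            * arm may make this differ from insn */
	};
	/* arch-specific analysis state follows in the real struct */
};
```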
-
Committed by David A. Long
Move the function declarations from the arch headers to the common header, since only the function bodies are architecture-specific. These changes are from Vincent Rabin's uprobes patch.

[ oleg: update arch/powerpc/include/asm/uprobes.h ]

Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
-
- 06 Nov 2013, 2 commits
-
-
Committed by Benjamin Herrenschmidt
When restoring the PPR value, we incorrectly access the thread structure at a time when MSR:RI is clear, which means we cannot recover from nested faults. However, the thread structure isn't covered by the "bolted" SLB entries, and thus accessing it can fault. This fixes it by splitting the code so that the PPR value is loaded into a GPR before MSR:RI is cleared.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
On P8, XSCOM addresses have a special "indirect" form that requires more than 32 bits, so let's use u64 everywhere in the code instead of u32.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 31 Oct 2013, 4 commits
-
-
Committed by Vladimir Murzin
Currently DIVWU stands for the *signed* divw opcode:

7d 2a 4b 96    divwu   r9,r10,r9
7d 2a 4b d6    divw    r9,r10,r9

Use the *unsigned* divide opcode for DIVWU.

Suggested-by: Vassili Karpov <av1474@comtv.ru>
Reviewed-by: Vassili Karpov <av1474@comtv.ru>
Signed-off-by: Vladimir Murzin <murzin.v@gmail.com>
Acked-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
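Concretely, the two X-form encodings differ only in their extended opcode (459 for divwu, 491 for divw); the fix is to point the DIVWU macro at the unsigned encoding. The constant names here are shown for contrast and may not match the tree exactly:

```c
/* X-form: primary opcode 31, XO in bits 21-30 (XO << 1 in the low bits). */
#define PPC_INST_DIVW   0x7c0003d6  /* divw  rt,ra,rb - signed divide   */
#define PPC_INST_DIVWU  0x7c000396  /* divwu rt,ra,rb - unsigned divide */
```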
-
Committed by Sudeep KarkadaNagesha
Since the definition of of_find_next_cache_node is architecture-independent, the existing definition in powerpc can be moved to drivers/of/base.c.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Philippe Bergheaud
This is an optimization for PowerPC in 64-bit little-endian mode. Bit counting is used in find_zero(), instead of the multiply and shift. It is modelled after Alan Modra's PowerPC LE strlen patch, http://sourceware.org/ml/libc-alpha/2013-08/msg00097.html.

Signed-off-by: Philippe Bergheaud <felix@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
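A portable sketch of the trick (the kernel version uses inline asm): the word-at-a-time zero-byte test yields a mask with a bit set inside each zero byte, and on little-endian the first zero byte in memory is the least significant marked byte, so a trailing-zero count replaces the multiply-and-shift:

```c
/* Sketch of a little-endian find_zero(): convert the position of the
 * lowest set bit to a byte index. Assumes mask != 0. */
static inline unsigned long find_zero(unsigned long mask)
{
	return (unsigned long)__builtin_ctzl(mask) >> 3;  /* bit -> byte */
}
```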
-
Committed by Philippe Bergheaud
This enables the Berkeley Packet Filter JIT compiler for PowerPC running in 64-bit little-endian mode.

Signed-off-by: Philippe Bergheaud <felix@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 30 Oct 2013, 7 commits
-
-
Committed by Vasant Hegde
Code update interface for the powernv platform. This provides a sysfs interface to pass a new image, then validate, update and commit it. This patch includes:

- The OPAL APIs for code update:
  - opal_validate_flash()
  - opal_manage_flash()
  - opal_update_flash()
- The sysfs files created under /sys/firmware/opal:
  - image          : interface to pass a new FW image
  - validate_flash : validate the candidate image
  - manage_flash   : commit/reject operations
  - update_flash   : flash the new candidate image

Updating the image: "update_flash" is the interface used to request flashing of new FW. It just passes the image SG list to FW; the actual flashing is done at system reboot time.

Note:
- SG entry format: the version number is kept so that this list stays similar to what PAPR defines.

Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Vasant Hegde
Create the /sys/firmware/opal directory. We will use this interface to fetch OPAL error logs, perform firmware updates, etc.

Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Add a VMX optimised xor, used primarily for RAID5. On a POWER7 blade this is a decent win:

32regs  : 17932.800 MB/sec
altivec : 19724.800 MB/sec

The bigger gain is when the same test is run in SMT4 mode, as it would be if there was a lot of work going on:

8regs   : 8377.600 MB/sec
altivec : 15801.600 MB/sec

I tested this against an array created without the patch, and also verified it worked as expected on a little-endian kernel.

[ Fix !CONFIG_ALTIVEC build -- BenH ]

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Commit f13c13a0 ("powerpc: Stop using non-architected shared_proc field in lppaca") fixed a potential issue with shared/dedicated partition detection. The old method of detection relied on an unarchitected field (shared_proc), and that patch switched to using something architected (a non-zero yield_count).

Unfortunately the assertion in the Linux header that yield_count is only non-zero on shared processor partitions is not true. It turns out dedicated processor partitions can increment yield_count, and as such we falsely detect dedicated partitions as shared.

Fix the comment, and switch back to using the old method.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Alistair Popple
PPC44x supports page sizes other than 4K; however, when a 64K page size is selected, compilation fails. This is due to a change in the definition of pgtable_t introduced by the following patch:

commit 5c1f6ee9
Author: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
powerpc: Reduce PTE table memory wastage

The above patch only implements the new layout for PPC64, so it doesn't compile for PPC32 with a 64K page size. Ideally we should implement the same layout for PPC32; however, for the meantime, this patch reverts the definition of pgtable_t for PPC32.

Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Vaishnavi Bhat
This patch fixes a typo in the comments on virtual-to-physical address conversion.

Signed-off-by: Vaishnavi Bhat <vaishnavi@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Robert Jennings
Move the few declarations from arch/powerpc/kernel/setup.h into arch/powerpc/include/asm/setup.h. This resolves a sparse warning for arch/powerpc/mm/numa.c, which defines do_init_bootmem() but can't include the setup.h header in its prior path.

Resolves:
arch/powerpc/mm/numa.c:998:13: warning: symbol 'do_init_bootmem' was not declared. Should it be static?

Signed-off-by: Robert C Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 29 Oct 2013, 1 commit
-
-
Committed by Scott Wood
Commit 9863c28a ("powerpc: Emulate sync instruction variants") introduced a build breakage with CONFIG_PPC_EMULATED_STATS enabled.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Cc: Kumar Gala <galak@kernel.org>
Cc: James Yang <James.Yang@freescale.com>
-
- 19 Oct 2013, 2 commits
-
-
Committed by Bharat Bhushan
KVM needs this function when switching from a vcpu to the user-space thread. My subsequent patch will use this function.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Bharat Bhushan
This way we can use the same data type struct with KVM, and it also helps in using other debug-related functions.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Acked-by: Michael Neuling <mikey@neuling.org>
[scottwood@freescale.com: removed obvious debug_reg comment]
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 18 Oct 2013, 2 commits
-
-
Committed by Aneesh Kumar K.V
Drop is_hv_enabled, because that should not be a callback property.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Aneesh Kumar K.V
This moves the kvmppc_ops callbacks to be a per-VM entity. This enables us to select HV or PR mode when creating a VM. We also allow both the kvm-hv and kvm-pr kernel modules to be loaded. To achieve this we move /dev/kvm ownership to the kvm.ko module. Depending on which KVM mode we select during VM creation, we take a reference count on the respective module (see the sketch below).

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[agraf: fix coding style]
Signed-off-by: Alexander Graf <agraf@suse.de>
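A hedged sketch of the module-pinning pattern this implies, assuming kvmppc_ops carries a struct module *owner and the VM records its ops in kvm->arch; the function name is made up for illustration:

```c
/* Sketch: pin the module (kvm-hv.ko or kvm-pr.ko) backing this VM. */
static int kvmppc_set_ops_sketch(struct kvm *kvm, struct kvmppc_ops *ops)
{
	if (!try_module_get(ops->owner))
		return -ENOENT;          /* module went away under us */
	kvm->arch.kvm_ops = ops;
	return 0;
}

/* On VM destruction the reference is dropped again:
 *	module_put(kvm->arch.kvm_ops->owner);
 */
```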
-
- 17 Oct 2013, 7 commits
-
-
Committed by Aneesh Kumar K.V
We will use that in a later patch to find the kvm ops handler.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Aneesh Kumar K.V
This helps us identify whether we are running with hypervisor-mode KVM enabled. The change is needed so that we can have both HV and PR KVM enabled in the same kernel.

If both HV and PR KVM are included, interrupts come in to the HV version of the kvmppc_interrupt code, which then jumps to the PR handler, renamed to kvmppc_interrupt_pr, if the guest is a PR guest.

Allowing both PR and HV in the same kernel required some changes to kvm_dev_ioctl_check_extension(), since the values returned now can't be selected with #ifdefs as much as previously. We look at is_hv_enabled to return the right value when checking for capabilities. For capabilities that are only provided by HV KVM, we return the HV value only if is_hv_enabled is true. For capabilities provided by PR KVM but not HV, we return the PR value only if is_hv_enabled is false.

NOTE: in a later patch we replace is_hv_enabled with a static inline function comparing kvm_ppc_ops.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
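Illustratively, the capability check turns from compile-time #ifdef selection into a runtime test along these lines (KVM_CAP_PPC_SMT is a real HV-only capability; the function name is made up for the sketch):

```c
/* Sketch of runtime capability selection instead of #ifdefs. */
static int check_extension_sketch(long ext, bool is_hv_enabled)
{
	switch (ext) {
	case KVM_CAP_PPC_SMT:
		/* SMT threading control only makes sense under HV KVM */
		return is_hv_enabled ? threads_per_core : 0;
	default:
		return 0;
	}
}
```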
-
Committed by Aneesh Kumar K.V
With this patch, if HV is included, interrupts come in to the HV version of the kvmppc_interrupt code, which then jumps to the PR handler, renamed to kvmppc_interrupt_pr, if the guest is a PR guest. This helps in enabling both HV and PR, which we do in a later patch.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Aneesh Kumar K.V
This patch adds a new callback structure, kvmppc_ops. This will help us in enabling both HV and PR KVM together in the same kernel. The actual change to enable them together is done in a later patch in the series.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[agraf: squash in booke changes]
Signed-off-by: Alexander Graf <agraf@suse.de>
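A heavily trimmed sketch of what such a callback table looks like; the real struct in the kernel headers has many more hooks, and the selection shown here is illustrative:

```c
/* Sketch: per-flavour (HV or PR) virtualization hooks. */
struct kvmppc_ops {
	struct module *owner;                 /* module providing this flavour */
	struct kvm_vcpu *(*vcpu_create)(struct kvm *kvm, unsigned int id);
	void (*vcpu_free)(struct kvm_vcpu *vcpu);
	int  (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
	int  (*init_vm)(struct kvm *kvm);
	void (*destroy_vm)(struct kvm *kvm);
};
```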
-
Committed by Aneesh Kumar K.V
This helps us select the relevant code in the kernel when we later split the HV and PR bits into separate modules. The patch also makes the config options for PR KVM selectable.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Aneesh Kumar K.V
With later patches supporting PR KVM as a kernel module, the changes that have to be built into the main kernel binary to enable the PR KVM module are now selected via KVM_BOOK3S_PR_POSSIBLE.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Bharat Bhushan
This patch adds the debug stub support on booke/bookehv. Now the QEMU debug stub can use hardware breakpoints, watchpoints and software breakpoints to debug the guest.

This is how we save/restore the debug register context when switching between guest, userspace and kernel user-process:

When QEMU is running
 -> thread->debug_reg == QEMU debug register context.
 -> Kernel will handle switching the debug register on context switch.
 -> no vcpu_load() called

QEMU makes ioctls (except RUN)
 -> This will call vcpu_load()
 -> should not change context.
 -> Some ioctls can change the vcpu debug registers; that context is saved in vcpu->debug_regs

QEMU makes the RUN ioctl
 -> Save thread->debug_reg on STACK
 -> Store thread->debug_reg == vcpu->debug_reg
 -> load thread->debug_reg
 -> RUN VCPU (so thread points to vcpu context)

Context switch happens while the VCPU is running
 -> vcpu_load() should not load any context
 -> kernel loads the vcpu context, as thread->debug_regs points to the vcpu context

On heavyweight_exit
 -> Load the context saved on the stack back into thread->debug_reg

Currently we do not support debug resource emulation for the guest: on a debug exception we always exit to user space, irrespective of whether user space is expecting the debug exception or not. If this is an unexpected exception (a breakpoint/watchpoint event not set by userspace), we leave the action to user space. This is similar to what it was before; the only difference is that now we have a proper exit state available to user space. A sketch of the RUN-time register swap follows below.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
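A hedged sketch of that RUN-ioctl swap, using switch_booke_debug_regs() from the earlier commits in this series; the vcpu field name is assumed for illustration:

```c
/* Sketch: swap in the vcpu debug context for the duration of KVM_RUN. */
static void run_vcpu_with_guest_debug(struct kvm_vcpu *vcpu)
{
	struct debug_reg saved = current->thread.debug;  /* save on stack */

	current->thread.debug = vcpu->arch.dbg_reg;      /* assumed field name */
	switch_booke_debug_regs(&vcpu->arch.dbg_reg);    /* load vcpu context */

	/* ... enter and run the guest ... */

	switch_booke_debug_regs(&saved);                 /* heavyweight exit: */
	current->thread.debug = saved;                   /* restore QEMU context */
}
```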
-