- 03 July 2018, 9 commits
-
-
Committed by Richard Henderson
Move the guts of ST_ATOMIC to a function. Use foo_tl for the operations instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an explicit call to gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Richard Henderson
Move the guts of LD_ATOMIC to a function. Use foo_tl for the operations instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an explicit call to gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Richard Henderson
Leave only the minimal amount of code within the LDAR macro, moving the rest of the code into gen_load_locked. Use MO_ALIGN and remove the explicit call to gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
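A rough sketch of what the shared load-and-reserve helper described above can look like. The names (gen_addr_reg_index, cpu_reserve, cpu_reserve_val) follow the conventions of QEMU's target/ppc/translate.c, but the exact signatures here are an assumption, not a quote of the committed code:

    static void gen_load_locked(DisasContext *ctx, TCGMemOp memop)
    {
        TCGv gpr = cpu_gpr[rD(ctx->opcode)];
        TCGv ea = tcg_temp_new();

        gen_addr_reg_index(ctx, ea);           /* EA = (rA|0) + rB */
        /* MO_ALIGN makes the load itself fault on a misaligned EA,
           replacing the old explicit gen_check_align() call. */
        tcg_gen_qemu_ld_tl(gpr, ea, ctx->mem_idx, memop | MO_ALIGN);
        tcg_gen_mov_tl(cpu_reserve, ea);       /* record the reservation */
        tcg_gen_mov_tl(cpu_reserve_val, gpr);
        tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
        tcg_temp_free(ea);
    }

Each LDAR-family macro (lbarx, lharx, lwarx, ldarx) then only has to pick the right memop and call this one function.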
-
Committed by Richard Henderson
Leave only the minimal amount of code within the STCX macro, moving the rest of the code into gen_conditional_store. Remove the explicit call to gen_check_align; the matching LDAX will have already checked alignment, and we verify the same address.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Richard Henderson
Always use the gen_conditional_store implementation that uses atomic_cmpxchg. Make sure to clear reserve_addr across most interrupts crossing the cpu_loop.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Richard Henderson
When running in a parallel context, we must use a helper in order to perform the 128-bit atomic operation. When running in a serial context, do the compare before the store.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Richard Henderson
Section 1.4 of the Power ISA v3.0B states that this insn is single-copy atomic. As we cannot (yet) issue 128-bit stores within TCG, use the generic helpers provided.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Richard Henderson
Section 1.4 of the Power ISA v3.0B states that both of these instructions are single-copy atomic. As we cannot (yet) issue 128-bit loads within TCG, use the generic helpers provided.

Since TCG cannot (yet) return a 128-bit value, add a slot within CPUPPCState for returning the high half of a 128-bit return value. This solution is preferred to the helper assigning to architectural registers directly, as it avoids clobbering all TCG live values.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Richard Henderson
This allows faults from MO_ALIGN to have the same effect as from gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- 27 June 2018, 1 commit
-
-
Committed by Stefan Hajnoczi
Determining the size of a field is useful when you don't have a struct variable handy. Open-coding this is ugly. This patch adds the sizeof_field() macro, which is similar to typeof_field(). Existing instances are updated to use the macro.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20180614164431.29305-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
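A minimal, self-contained sketch of the idea behind sizeof_field(): take the size of a struct member without needing an instance. The struct below is made up for illustration; the in-tree macro lives in QEMU's headers and may differ in detail:

    #include <stdio.h>

    /* sizeof() does not evaluate its operand, so the null pointer is
       never dereferenced; this is the classic trick the macro wraps. */
    #define sizeof_field(type, field) sizeof(((type *)0)->field)

    struct example_header {
        unsigned char flags;
        unsigned int  length;
    };

    int main(void)
    {
        /* No struct example_header variable is needed. */
        printf("length is %zu bytes\n",
               sizeof_field(struct example_header, length));
        return 0;
    }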
-
- 22 June 2018, 3 commits
-
-
Committed by David Gibson
Currently during KVM initialization on POWER, kvm_fixup_page_sizes() rewrites a bunch of information in the cpu state to reflect the capabilities of the host MMU and KVM. This overwrites the information that's already there reflecting how the TCG implementation of the MMU will operate.

This means that we can get guest-visibly different behaviour between KVM and TCG (and between different KVM implementations). That's bad. It also prevents migration between KVM and TCG.

The pseries machine type now has filtering of the pagesizes it allows the guest to use, which means it can present a consistent model of the MMU across all accelerators. So, we can now replace kvm_fixup_page_sizes() with kvm_check_mmu(), which merely verifies that the expected cpu model can be faithfully handled by KVM, rather than updating the cpu model to match KVM.

We call kvm_check_mmu() from the spapr cpu reset code. This is a hack: conceptually it makes more sense where fixup_page_sizes() was - in the KVM cpu init path. However, doing that would require moving the platform's pagesize filtering much earlier, which would require a lot of work making further adjustments. There wouldn't be a lot of concrete point to doing that, since the only KVM implementation which has the awkward MMU restrictions is KVM HV, which can only work with an spapr guest anyway.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
-
Committed by David Gibson
The paravirtualized PAPR platform sometimes needs to restrict the guest to using only some of the page sizes actually supported by the host's MMU. At the moment this is handled in KVM specific code, but for consistency we want to apply the same limitations to all accelerators.

This makes a start on this by providing a helper function in the cpu code to allow platform code to remove some of the cpu's page size definitions via a caller supplied callback.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
-
Committed by David Gibson
The way we used to handle KVM allowable guest pagesizes for PAPR guests required some convoluted checking of memory attached to the guest. The allowable pagesizes advertised to the guest cpus depended on the memory which was attached at boot, but then we needed to ensure that any memory later hotplugged didn't change which pagesizes were allowed.

Now that we have an explicit machine option to control the allowable maximum pagesize we can simplify this. We just check all memory backends against that declared pagesize. We check base and cold-plugged memory at reset time, and hotplugged memory at pre_plug() time.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
-
- 21 June 2018, 4 commits
-
-
Committed by BALATON Zoltan
According to the PPC440 User Manual, the PPC440 has multiple opcodes for the icbt instruction: one for compatibility with older cores and two 440-specific opcodes, one of which is defined in BookE. QEMU only implements two of these; add the missing one.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by John Arbuckle
Fix the helper_fpscr_clrbit() function so it correctly sets the FEX and VX bits.

Determining the value for the Floating Point Status and Control Register's (FPSCR) FEX bit is supposed to be done like this:

    FEX = (VX & VE) | (OX & OE) | (UX & UE) | (ZX & ZE) | (XX & XE)

It is described as "the logical OR of all the floating-point exception bits masked by their respective enable bits". It was not implemented correctly: the value of FEX would stay on even when all other bits were set to off.

The VX bit is described as "the logical OR of all of the invalid operation exceptions". This bit was also not implemented correctly; it too would stay on when all the other bits were set to off.

My main source of information is an IBM document called "PowerPC Microprocessor Family: The Programming Environments for 32-Bit Microprocessors". Page 62 is where the FPSCR information is located. This is an older copy than the one I use but it is still very useful: https://www.pdfdrive.net/powerpc-microprocessor-family-the-programming-environments-for-32-e3087633.html

I use a G3 and G5 iMac to compare bit values with QEMU. This patch fixed all the problems I was having with these bits.

Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
[dwg: Re-wrapped commit message]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
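The recomputation the fix calls for can be sketched as below. The FPSCR bit positions follow the architectural layout in the manual cited above (IBM bit numbering, bit 0 = MSB of the 32-bit register); treat both the positions and the helper name as illustrative assumptions rather than QEMU's actual definitions:

    #include <stdbool.h>
    #include <stdint.h>

    #define FPSCR_BIT(n)  (1u << (31 - (n)))   /* IBM numbering, bit 0 = MSB */

    #define FPSCR_FEX   FPSCR_BIT(1)
    #define FPSCR_VX    FPSCR_BIT(2)
    #define FPSCR_OX    FPSCR_BIT(3)
    #define FPSCR_UX    FPSCR_BIT(4)
    #define FPSCR_ZX    FPSCR_BIT(5)
    #define FPSCR_XX    FPSCR_BIT(6)
    #define FPSCR_VE    FPSCR_BIT(24)
    #define FPSCR_OE    FPSCR_BIT(25)
    #define FPSCR_UE    FPSCR_BIT(26)
    #define FPSCR_ZE    FPSCR_BIT(27)
    #define FPSCR_XE    FPSCR_BIT(28)
    /* OR of every VXxxx invalid-operation exception bit (bits 7-12, 21-23). */
    #define FPSCR_VX_ANY (FPSCR_BIT(7)  | FPSCR_BIT(8)  | FPSCR_BIT(9)  | \
                          FPSCR_BIT(10) | FPSCR_BIT(11) | FPSCR_BIT(12) | \
                          FPSCR_BIT(21) | FPSCR_BIT(22) | FPSCR_BIT(23))

    /* Recompute the derived FEX and VX bits after an exception bit is
       cleared, instead of leaving them stuck at their old value. */
    static uint32_t fpscr_update_fex_vx(uint32_t fpscr)
    {
        bool vx  = (fpscr & FPSCR_VX_ANY) != 0;
        bool fex = (vx && (fpscr & FPSCR_VE)) ||
                   ((fpscr & FPSCR_OX) && (fpscr & FPSCR_OE)) ||
                   ((fpscr & FPSCR_UX) && (fpscr & FPSCR_UE)) ||
                   ((fpscr & FPSCR_ZX) && (fpscr & FPSCR_ZE)) ||
                   ((fpscr & FPSCR_XX) && (fpscr & FPSCR_XE));

        fpscr = vx  ? (fpscr | FPSCR_VX)  : (fpscr & ~FPSCR_VX);
        fpscr = fex ? (fpscr | FPSCR_FEX) : (fpscr & ~FPSCR_FEX);
        return fpscr;
    }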
-
Committed by David Gibson
KVM HV has a restriction that for HPT mode guests, guest pages must be hpa contiguous as well as gpa contiguous. We have to account for that in various places. We determine whether we're subject to this restriction from the SMMU information exposed by KVM.

Planned cleanups to the way we handle this will require knowing whether this restriction is in play in wider parts of the code. So, expose a helper function which returns it. This does mean some redundant calls to kvm_get_smmu_info(), but they'll go away again with future cleanups.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
-
Committed by David Gibson
ppc_check_compat() is used in a number of places to check if a cpu object supports a certain compatibility mode, subject to various constraints. It takes a PowerPCCPU *, however it really only depends on the cpu's class.

We have upcoming cases where it would be useful to make compatibility checks before we fully instantiate the cpu objects. ppc_type_check_compat() will now make an equivalent check, but based on a CPU's QOM typename instead of an instantiated CPU object.

We make use of the new interface in several places in spapr, where we're essentially making a global check, rather than one specific to a particular cpu. This avoids some ugly uses of first_cpu to grab a "representative" instance.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
-
- 16 June 2018, 3 commits
-
-
Committed by David Gibson
CPUPPCState currently contains a number of fields containing the state of the VPA. The VPA is a PAPR specific concept covering several guest/host shared memory areas used to communicate some information with the hypervisor. As a PAPR concept this is really machine specific information, although it is per-cpu, so it doesn't really belong in the core CPU state structure.

There's also other information that's per-cpu, but platform/machine specific. So create a (void *)machine_data in PowerPCCPU which can be used by the machine to locate per-cpu data. Initialization, lifetime and cleanup of machine_data is entirely up to the machine type.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
-
Committed by Greg Kurz
Commit 9d6f1065 moved the last line in this block to somewhere else, but it forgot to remove the now useless #if/#endif.

Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Suraj Jitindar Singh
For cap_ppc_safe_cache to be set to workaround, we require both an l1d cache flush instruction and a private l1d cache. On POWER8, don't require a private l1d cache. This means a guest on a POWER8 machine can make use of the cache flush workarounds.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- 12 June 2018, 5 commits
-
-
Committed by luporl
According to PowerISA, the PIR register should be readable in privileged mode also, not only in hypervisor privileged mode.

PowerISA 3.0 - 4.3.3 Processor Identification Register:
"Read access to the PIR is privileged; write access is not provided."

Figure 18 in section 4.4.4 explicitly confirms that mfspr PIR is privileged and doesn't require hypervisor state.

Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-ppc@nongnu.org
Signed-off-by: Leandro Lupori <leandro.lupori@gmail.com>
Reviewed-by: Jose Ricardo Ziviani <joserz@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Cédric Le Goater
POWER9 introduced a new variant of the eieio instruction using bit 6 as a hint to tell the CPU it is a store-forwarding barrier. The usage of this eieio extension was recently added in Linux 4.17, which activated the "support for a store forwarding barrier at kernel entry/exit".

Unfortunately, it is not possible to insert this new eieio instruction without considerable change in ppc_tr_translate_insn(). So instead we loosen the QEMU eieio instruction mask and modify the gen_eieio() helper to test for bit 6. On non-POWER9 CPUs, bit 6 is just ignored but a warning is emitted, as this is not an instruction software should be using.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Joel Stanley
The powerpc Linux kernel [1] and skiboot firmware [2] recently gained changes that cause the Processor Compatibility Register (PCR) SPR to be cleared. These changes cause Linux to fail to boot on the QEMU powernv machine with an error:

    Trying to write privileged spr 338 (0x152) at 0000000030017f0c

With this patch QEMU makes this register available as a hypervisor privileged register. Note that bits set in this register disable features of the processor. Currently the only register state that is supported is when the register is zeroed (enable all features). This is sufficient for guests to once again boot.

[1] https://lkml.kernel.org/r/20180518013742.24095-1-mikey@neuling.org
[2] https://patchwork.ozlabs.org/patch/915932/

Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Suraj Jitindar Singh
Factor out the parsing of struct kvm_ppc_cpu_char in kvmppc_get_cpu_characteristics() into a separate function for each cap for simplicity.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Thomas Huth
fprintf() and qemu_log_separate() are frowned upon these days for printing logging information in QEMU. Accessing the wrong SPRs indicates wrong guest behaviour in most cases, and we've got a proper way to log such situations, which is the qemu_log_mask(LOG_GUEST_ERROR, ...) function. So use this function now for logging the bad SPR accesses instead.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
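The shape of the conversion looks roughly like the sketch below. The helper, its parameters and the message text are illustrative rather than a quote of the patch; qemu_log_mask() and LOG_GUEST_ERROR come from QEMU's "qemu/log.h":

    #include "qemu/osdep.h"
    #include "qemu/log.h"

    static void report_invalid_spr_read(int sprn, uint64_t nip)
    {
        /* Old style: fprintf(stderr, ...) or qemu_log_separate(). */
        /* New style: guest misbehaviour goes through LOG_GUEST_ERROR, so
           it is only printed when the user asks for it. */
        qemu_log_mask(LOG_GUEST_ERROR,
                      "Trying to read invalid spr %d (0x%03x) at 0x%" PRIx64 "\n",
                      sprn, sprn, nip);
    }

Such messages only appear when QEMU is run with "-d guest_errors", which keeps normal runs quiet while still making guest bugs easy to diagnose.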
-
- 02 June 2018, 1 commit
-
-
Committed by Richard Henderson
Do the cast to uintptr_t within the helper, so that the compiler can type check the pointer argument. We can also do some more sanity checking of the index argument.

Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
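The log entry doesn't name the helper, but the pattern itself is easy to illustrate. In the sketch below every name is made up; the point is only that taking a typed pointer plus an index, and doing the uintptr_t arithmetic inside, lets the compiler reject callers that pass the wrong thing:

    #include <assert.h>
    #include <stdint.h>

    struct translation_block;                  /* opaque to callers */

    void emit_exit(uintptr_t encoded_target);  /* low-level emitter (assumed) */

    /* Before: callers wrote emit_exit((uintptr_t)tb + idx) themselves, so
       any integer-like argument compiled silently. Now the cast and the
       index check live in one place. */
    static inline void gen_exit_tb(struct translation_block *tb, unsigned idx)
    {
        assert(idx <= 1);                      /* only a few low bits are free */
        emit_exit((uintptr_t)tb + idx);
    }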
-
- 01 June 2018, 2 commits
-
-
Committed by Philippe Mathieu-Daudé
Code change produced with:

    $ git grep '#include "exec/exec-all.h"' | \
      cut -d: -f-1 | \
      xargs egrep -L "(cpu_address_space_init|cpu_loop_|tlb_|tb_|GETPC|singlestep|TranslationBlock)" | \
      xargs sed -i.bak '/#include "exec\/exec-all.h"/d'

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180528232719.4721-10-f4bug@amsat.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Philippe Mathieu-Daudé
Since its inception this include uses tlb_flush() declared in "exec/exec-all.h". Include the other header to allow further includes cleanup.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180528232719.4721-8-f4bug@amsat.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 31 May 2018, 1 commit
-
-
Committed by Peter Maydell
As part of plumbing MemTxAttrs down to the IOMMU translate method, add MemTxAttrs as an argument to address_space_map(). Its callers either have an attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-5-peter.maydell@linaro.org
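A minimal sketch of a caller that has no attributes to hand after this change; the surrounding function and its error handling are invented for illustration:

    #include "qemu/osdep.h"
    #include "exec/memory.h"

    static void copy_from_guest(AddressSpace *as, hwaddr gpa, void *dst, hwaddr len)
    {
        hwaddr maplen = len;
        /* The new trailing argument: callers that don't care pass
           MEMTXATTRS_UNSPECIFIED instead of a real MemTxAttrs value. */
        void *p = address_space_map(as, gpa, &maplen, false /* is_write */,
                                    MEMTXATTRS_UNSPECIFIED);

        if (!p || maplen < len) {
            /* Mapping failed or was truncated; a real caller would report it. */
            if (p) {
                address_space_unmap(as, p, maplen, false, 0);
            }
            return;
        }
        memcpy(dst, p, len);
        address_space_unmap(as, p, maplen, false, len);
    }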
-
- 29 May 2018, 1 commit
-
-
Committed by Peter Maydell
Rename the 2.13 machines to match the number we're going to use for the next release.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-id: 20180522104000.9044-5-peter.maydell@linaro.org
-
- 19 May 2018, 1 commit
-
-
Committed by Richard Henderson
Cc: Alexander Graf <agraf@suse.de>
Cc: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- 18 May 2018, 1 commit
-
-
Committed by Richard Henderson
Only MIPS requires snan_bit_is_one to be variable. While we are specializing softfloat behaviour, allow other targets to eliminate this runtime check.

Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@mips.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- 11 May 2018, 1 commit
-
-
Committed by Paolo Bonzini
osdep.h is only needed for files that are compiled directly. Remove it from included C source files, and rename them to *.inc.c so that scripts/clean-includes knows to skip them.

Cc: Eric Blake <eblake@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 10 May 2018, 1 commit
-
-
Committed by Emilio G. Cota
While at it, use int for both num_insns and max_insns to make sure we have same-type comparisons.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- 04 May 2018, 6 commits
-
-
Committed by Marc-André Lureau
Now that we can safely call QOBJECT() on QObject * as well as its subtypes, we can have macros qobject_ref() / qobject_unref() that work everywhere instead of having to use QINCREF() / QDECREF() for QObject and qobject_incref() / qobject_decref() for its subtypes.

The replacement is mechanical, except I broke a long line, and added a cast in monitor_qmp_cleanup_req_queue_locked(). Unlike qobject_decref(), qobject_unref() doesn't accept void *.

Note that the new macros evaluate their argument exactly once, thus no need to shout them.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180419150145.24795-4-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Rebased, semantic conflict resolved, commit message improved]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
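Before/after shape of the mechanical replacement, shown on a made-up snippet rather than code from the patch itself:

    #include "qemu/osdep.h"
    #include "qapi/qmp/qdict.h"

    static void hold_and_release(QDict *dict, QObject *payload)
    {
        /* Old style:  QINCREF(dict);  qobject_incref(payload);          */
        /* New style: the same pair of macros works for QObject and all
           of its subtypes, so there is no spelling to remember per type. */
        qobject_ref(dict);
        qobject_ref(payload);

        /* ... use the objects ... */

        qobject_unref(dict);      /* was QDECREF(dict)           */
        qobject_unref(payload);   /* was qobject_decref(payload) */
    }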
-
Committed by Greg Kurz
The pseries-2.7 and older machine types require CPUPPCState::insns_flags to be strictly equal between source and destination. This checking is abusive and breaks migration of KVM guests when the host CPU models are different, even if they are compatible enough to allow the guest to run transparently. This buggy behaviour was fixed for pseries-2.8 and we added some hacks to allow backward migration of older machine types.

These hacks assume that the CPU belongs to the POWER8 family, which was true for most KVM based setup we cared about at the time. But now POWER9 systems are coming, and backward migration of pre 2.8 guests running in POWER8 architected mode from a POWER9 host to a POWER8 host is broken:

    qemu-system-ppc64: error while loading state for instance 0x0 of device 'cpu'
    qemu-system-ppc64: load of migration failed: Invalid argument

This happens because POWER9 doesn't set PPC_MEM_TLBIE in insns_flags, while POWER8 does. Let's force PPC_MEM_TLBIE in the migration hack to fix the issue. This is an acceptable hack because these old machine types only support CPU models that do set PPC_MEM_TLBIE.

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by David Gibson
cpu_ppc_set_papr() does several things:
1) it sets up the virtual hypervisor interface
2) it prevents the cpu from ever entering hypervisor mode
3) it tells KVM that we're emulating a cpu in PAPR mode and
4) it configures the LPCR and AMOR (hypervisor privileged registers) so that TCG will behave correctly for PAPR guests, without attempting to emulate the cpu in hypervisor mode

(1) & (2) make sense for any virtual hypervisor (if another one ever exists). (3) belongs more properly in the machine type specific to a PAPR guest, so move it to spapr_cpu_init(). While we're at it, remove an ugly test on kvm_enabled() by making kvmppc_set_papr() a safe no-op on non-KVM.

(4) also belongs more properly in the machine type specific code. (4) is done by mangling the default values of the SPRs, so that they will be set correctly at reset time. Manipulating usually-static parameters of the cpu model like this is kind of ugly, especially since the values used really have more to do with the platform than the cpu. The spapr code already has places for PAPR specific initializations of register state in spapr_cpu_reset(), so move this handling there.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
-
Committed by David Gibson
In cpu_ppc_set_papr() the UPRT and GTSE bits of the LPCR default value are initialized based on ppc64_radix_guest(). Which seems reasonable, except that ppc64_radix_guest() is based on spapr->patb_entry, which is only set up in spapr_machine_reset, called _after_ cpu_ppc_set_papr() for boot cpus. Well, and the fact that modifying the SPR default value for an instance rather than a class is kind of yucky.

The initialization here is really only necessary or valid for hotplugged cpus; the base cpu initialization already sets a value that's good enough for the boot cpus until the guest uses an hcall to configure its preferred MMU mode.

So, move this initialization to the rtas_start_cpu() path, at which point ppc64_radix_guest() will have a sensible value, to make sure secondary cpus come up in an MMU mode matching the existing cpus.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
-
Committed by David Gibson
There are some fields in the cpu state which need to be updated when the LPCR register is changed, which is done by ppc_hash64_update_rmls() and ppc_hash64_update_vrma(). Code which alters env->spr[SPR_LPCR] needs to call them afterwards to make sure the state is up to date. That's easy to get wrong.

The normal way of dealing with situations like that is to use a helper which both updates the basic register value and the derived state. So, do that.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
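A minimal sketch of what such a store helper can look like; the function name follows the commit's intent, but the exact signatures of the two update functions are an assumption here rather than a quote of the patch:

    void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
    {
        CPUPPCState *env = &cpu->env;

        env->spr[SPR_LPCR] = val;
        /* Refresh the derived state together with the register itself,
           so callers cannot forget to do it. */
        ppc_hash64_update_rmls(cpu);
        ppc_hash64_update_vrma(cpu);
    }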
-
Committed by David Gibson
Current POWER cpus allow for a VRMA, a special mapping which describes a guest's view of memory when in real mode (MMU off, from the guest's point of view). Older cpus didn't have that, which meant that to support a guest a special host-contiguous region of memory was needed to give the guest its Real Mode Area (RMA).

KVM used to provide special calls to allocate a contiguous RMA for those cases. This was useful in the early days of KVM on Power to allow it to be tested on PowerPC 970 chips as used in Macintosh G5 machines. Now, those machines are so old as to be almost irrelevant.

The normal qemu deprecation process would require this to be marked deprecated then removed in 2 releases. However, this can only be used with corresponding support in the host kernel - which was dropped years ago (in c17b98cf "KVM: PPC: Book3S HV: Remove code for PPC970 processors" of 2014-12-03 to be precise). Therefore it should be ok to drop this immediately.

Just to be clear, this only affects *KVM HV* guests with PowerPC 970, and those already require an ancient host kernel. TCG and KVM PR guests with PowerPC 970 should still work.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Thomas Huth <thuth@redhat.com>
-