- 14 Mar 2017, 1 commit
-
-
By Nikunj A Dadhania
A bug was introduced in the following commit: dc0ad844 target/ppc: update overflow flags for add/sub. For a 32-bit ppc target, extracting bit 63 for overflow is not correct, so make it dependent on TARGET_LONG_BITS. This had broken booting a MacOS 9.2.1 image. Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
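A minimal illustration (not the actual QEMU code) of why the bit tested for overflow must follow the target word size: on a 32-bit target the sign bit is bit 31, on a 64-bit target it is bit 63. The TARGET_LONG_BITS value below is set by hand only for the example.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-in for the real TARGET_LONG_BITS. */
    #define TARGET_LONG_BITS 32

    /* A signed add overflows when the operands share a sign but the
     * result's sign differs; which bit holds the sign depends on the
     * target word width. */
    static int add_overflows(uint64_t a, uint64_t b, uint64_t res)
    {
        uint64_t sign = (uint64_t)1 << (TARGET_LONG_BITS - 1);
        return ((~(a ^ b) & (a ^ res)) & sign) != 0;
    }

    int main(void)
    {
        uint64_t a = 0x7fffffffULL, b = 1;
        printf("overflow: %d\n", add_overflows(a, b, a + b)); /* 1 */
        return 0;
    }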
-
- 03 Mar 2017, 2 commits
-
-
By Sam Bobroff
The PPC MMU types are sometimes treated as if they were a bit field and sometimes as if they were an enum, which causes maintenance problems: flipping bits in the MMU type (which is done on both the 1TB segment and 64K segment bits) currently produces new MMU type values that are not handled in every "switch" on it, sometimes causing an abort(). This patch provides some macros that can be used to filter out the "bit field-like" bits so that the remainder of the value can be switched on, like an enum. This allows removal of all of the "degraded" types from the list and should ease maintenance. Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
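A sketch of the idea under hypothetical names and values (the real masks and model names live in QEMU's target/ppc headers): keep the enum-like base model in the low bits, the feature flags in the high bits, and switch only on the filtered base model.

    #include <stdio.h>

    /* Hypothetical layout: low bits identify the base MMU model
     * (enum-like), high bits are independent feature flags. */
    #define MMU_MODEL_MASK      0x0000ffff
    #define MMU_FLAG_1T_SEGMENT 0x00010000
    #define MMU_FLAG_64K_PAGES  0x00020000
    #define MMU_BASE_MODEL(t)   ((t) & MMU_MODEL_MASK)

    enum { MMU_MODEL_HASH64 = 1, MMU_MODEL_RADIX = 2 };

    int main(void)
    {
        int mmu_type = MMU_MODEL_HASH64 | MMU_FLAG_1T_SEGMENT;

        /* Switching on the filtered value stays exhaustive no matter
         * which feature bits have been flipped. */
        switch (MMU_BASE_MODEL(mmu_type)) {
        case MMU_MODEL_HASH64:
            printf("hash MMU, 1T segments: %s\n",
                   (mmu_type & MMU_FLAG_1T_SEGMENT) ? "yes" : "no");
            break;
        case MMU_MODEL_RADIX:
            printf("radix MMU\n");
            break;
        default:
            printf("unknown MMU model\n");
        }
        return 0;
    }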
-
By Suraj Jitindar Singh
POWER9 doesn't have a storage description register 1 (SDR1), which is used to store the base and size of the hash table. Thus we don't need to generate this register on the POWER9 cpu model. While we're here, the register generation code for 970, POWER5+ and POWER7/8/9 in general is a mess where we call a generic function from a model-specific function which then attempts to call model-specific functions, so rework this for readability. We update ppc_cpu_dump_state so that "info registers" will only display the value of sdr1 if the register has been generated. As mentioned above, the register generation in the pcc->init_proc function for 970, POWER5+, POWER7, POWER8 and POWER9 has been reworked for improved clarity. Instead of calling init_proc_book3s_64, which then attempts to generate the correct registers through a mess of if statements, we remove this function and instead call the appropriate register generation functions directly. This follows the register generation model used for earlier cpu models (pre-970), whereby cpu-specific registers are generated directly in the init_proc function, and makes it easier to add/remove specific registers for new cpu models. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- 01 Mar 2017, 9 commits
-
-
By Nikunj A Dadhania
mcrxrx: Move to CR from XER Extended. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
Add helper_div_compute_ov() in int_helper for updating the overflow flags. For Divide Word, the SO, OV, and OV32 bits reflect overflow of the 32-bit result; for Divide Doubleword, the SO, OV, and OV32 bits reflect overflow of the 64-bit result. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
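A rough sketch of the flag update for the 32-bit divide case (the flag bit positions and the function name below are made up for illustration; the real XER layout is defined in QEMU's ppc headers): the divide overflows on division by zero or INT32_MIN / -1, and OV, OV32 and the sticky SO are then all set.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical flag bits for the example only. */
    #define XER_SO   (1u << 3)
    #define XER_OV   (1u << 2)
    #define XER_OV32 (1u << 1)

    static uint32_t divw_update_flags(uint32_t xer, int32_t a, int32_t b)
    {
        bool ov = (b == 0) || (a == INT32_MIN && b == -1);
        xer &= ~(XER_OV | XER_OV32);
        if (ov) {
            xer |= XER_OV | XER_OV32 | XER_SO; /* SO is sticky: only ever set */
        }
        return xer;
    }

    int main(void)
    {
        uint32_t xer = divw_update_flags(0, INT32_MIN, -1);
        printf("OV=%d OV32=%d SO=%d\n",
               !!(xer & XER_OV), !!(xer & XER_OV32), !!(xer & XER_SO));
        return 0;
    }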
-
By Nikunj A Dadhania
For Multiply Word, the SO, OV, and OV32 bits reflect overflow of the 32-bit result; for Multiply Doubleword, the SO, OV, and OV32 bits reflect overflow of the 64-bit result. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
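For the word (mullw-style) case, the usual check is whether the exact 64-bit product still fits in a signed 32-bit result; a minimal sketch, not the QEMU helper itself:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* The 32-bit multiply overflows when the exact 64-bit product cannot
     * be represented as a signed 32-bit value; OV and OV32 then track the
     * same condition. */
    static bool mullw_overflows(int32_t a, int32_t b)
    {
        int64_t prod = (int64_t)a * (int64_t)b;
        return prod != (int32_t)prod;
    }

    int main(void)
    {
        printf("%d\n", mullw_overflows(0x40000000, 2)); /* 1: 2^31 overflows */
        return 0;
    }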
-
By Nikunj A Dadhania
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
* SO and OV reflect overflow of the 64-bit result in 64-bit mode and overflow of the low-order 32-bit result in 32-bit mode. * OV32 reflects overflow of the low-order 32-bit result independent of the mode. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
Adds a routine to compute CA32: gen_op_arith_compute_ca32. In 64-bit mode, use the compute-CA32 routine; in 32-bit mode, CA and CA32 have the same value. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
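CA32 is simply the carry out of the low 32 bits of the addition, regardless of mode; a standalone sketch (not the TCG code) contrasting it with the full-width carry CA:

    #include <stdint.h>
    #include <stdio.h>

    /* Carry out of the low-order 32 bits (CA32-like). */
    static int carry_out_32(uint64_t a, uint64_t b, uint64_t cin)
    {
        uint64_t low = (a & 0xffffffffULL) + (b & 0xffffffffULL) + cin;
        return (low >> 32) & 1;
    }

    /* Carry out of the full 64-bit addition (CA in 64-bit mode). */
    static int carry_out_64(uint64_t a, uint64_t b, uint64_t cin)
    {
        uint64_t res = a + b + cin;
        return (res < a) || (cin && res == a);
    }

    int main(void)
    {
        uint64_t a = 0x00000000ffffffffULL, b = 1;
        printf("CA32=%d CA=%d\n",
               carry_out_32(a, b, 0), carry_out_64(a, b, 0)); /* CA32=1 CA=0 */
        return 0;
    }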
-
By Nikunj A Dadhania
POWER ISA 3.0 adds CA32 and OV32 status in 64-bit mode. Add the flags and corresponding defines. Moreover, CA32 is updated when CA is updated, and OV32 is updated when OV is updated. Arithmetic instructions: * Addition and Subtraction: addic, addic., subfic, addc, subfc, adde, subfe, addme, subfme, addze, and subfze always update CA and CA32. => CA reflects the carry out of bit 0 in 64-bit mode and out of bit 32 in 32-bit mode. => CA32 reflects the carry out of bit 32 independent of the mode. => SO and OV reflect overflow of the 64-bit result in 64-bit mode and overflow of the low-order 32-bit result in 32-bit mode. => OV32 reflects overflow of the low-order 32-bit result independent of the mode. * Multiply Low and Divide: for mulld, divd, divde, divdu and divdeu, the SO, OV, and OV32 bits reflect overflow of the 64-bit result; for mullw, divw, divwe, divwu and divweu, the SO, OV, and OV32 bits reflect overflow of the 32-bit result. * Negate with OE=1 (nego): in 64-bit mode, if register RA contains 0x8000_0000_0000_0000, OV and OV32 are set to 1; in 32-bit mode, if register RA contains 0x8000_0000, OV and OV32 are set to 1. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- 22 Feb 2017, 6 commits
-
-
By Nikunj A Dadhania
Use the available wait instruction implementation. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
slbsync: SLB Synchronize. The instruction provides an ordering function for the effects of all slbieg instructions executed by the thread executing the slbsync instruction. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
slbieg: SLB Invalidate Entry Global. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Balamuruhan S
stwat: Store Word Atomic. stdat: Store Doubleword Atomic. The instruction includes a function code (5 bits) which selects the operation to be performed. The patch implements five such functions. Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com> Signed-off-by: Harish S <harisrir@linux.vnet.ibm.com> Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> [ implement stdat, use macro and combine both implementation ] Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
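The shape of such an instruction is "one opcode, many operations, selected by a small function code". A C11 sketch of that dispatch, with made-up function-code values (the real encodings come from the Power ISA 3.0 atomic memory operation table):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative function codes only. */
    enum { FC_ADD = 0, FC_XOR = 1, FC_OR = 2, FC_AND = 3, FC_SWAP = 4 };

    static void store_word_atomic(_Atomic uint32_t *ea, uint32_t val, int fc)
    {
        switch (fc & 0x1f) {            /* 5-bit function code */
        case FC_ADD:  atomic_fetch_add(ea, val); break;
        case FC_XOR:  atomic_fetch_xor(ea, val); break;
        case FC_OR:   atomic_fetch_or(ea, val);  break;
        case FC_AND:  atomic_fetch_and(ea, val); break;
        case FC_SWAP: atomic_exchange(ea, val);  break;
        default:      break;            /* reserved codes: no-op here */
        }
    }

    int main(void)
    {
        _Atomic uint32_t mem = 5;
        store_word_atomic(&mem, 3, FC_ADD);
        printf("%u\n", (unsigned)atomic_load(&mem)); /* 8 */
        return 0;
    }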
-
By Balamuruhan S
lwat: Load Word Atomic. ldat: Load Doubleword Atomic. The instruction includes a function code (5 bits) which selects the operation to be performed. The patch implements five such functions. Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com> Signed-off-by: Harish S <harisrir@linux.vnet.ibm.com> Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> [ combine both lwat/ldat implementation using macro ] Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- 02 Feb 2017, 2 commits
-
-
By Suraj Jitindar Singh
The cp_abort instruction is used to remove the state of an in-progress copy-paste sequence. POWER9 compilers add this in various places, such as context switches, which causes illegal instruction signals since we don't yet implement this instruction. Given there is no implementation of the copy-paste facility and that we don't claim to support it, we can just no-op this instruction. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Suraj Jitindar Singh
It can be useful when debugging to print the LPCR value. Thus we add the LPCR to the "info registers" output if the register has been defined. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- 31 Jan 2017, 6 commits
-
-
By Nikunj A Dadhania
Use the nap code. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
lxv: Load VSX Vector
lxvx: Load VSX Vector Indexed

Little/Big-endian storage:
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|F0|F1|F2|F3|F4|F5|F6|F7|E0|E1|E2|E3|E4|E5|E6|E7|
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+

Vector load results:
BE:
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|F0|F1|F2|F3|F4|F5|F6|F7|E0|E1|E2|E3|E4|E5|E6|E7|
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
LE:
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|E7|E6|E5|E4|E3|E2|E1|E0|F7|F6|F5|F4|F3|F2|F1|F0|
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+

stxv: Store VSX Vector
stxvx: Store VSX Vector Indexed

Vector (8-bit elements) in BE:
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|F0|F1|F2|F3|F4|F5|F6|F7|E0|E1|E2|E3|E4|E5|E6|E7|
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+

Vector (8-bit elements) in LE:
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|E7|E6|E5|E4|E3|E2|E1|E0|F7|F6|F5|F4|F3|F2|F1|F0|
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+

Store results in the following:
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|F0|F1|F2|F3|F4|F5|F6|F7|E0|E1|E2|E3|E4|E5|E6|E7|
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
stxsd: Store VSX Scalar Dword. stxssp: Store VSX Scalar SP. Moreover, DQ-form and DS-form instructions share the same primary opcode (0x3D); DQ-form uses bits 29:31, DS-form uses bits 30:31. A common routine to decode primary opcode 0x3D for the DS-form/DQ-form instructions is required. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
lxsd: Load VSX Scalar Dword. lxssp: Load VSX Scalar Single. Moreover, DS-form instructions share the same primary opcode; bits 30:31 are used to decode the instruction. Use a common routine to decode primary opcode 0x39 for the DS-form instructions and branch out depending on bits 30:31. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
Add _BIT to CRF_[GT,LT,EQ,SO] and introduce CRF_[GT,LT,EQ,SO] for usage without shifts in the code. This simplifies the code. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
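In other words, the bit-number constants keep a _BIT suffix while the unsuffixed names become pre-shifted masks, so call sites no longer need a shift. A hand-written sketch of the pattern (the concrete values here are only illustrative):

    #include <stdio.h>

    /* Bit positions keep a _BIT suffix ... */
    #define CRF_LT_BIT 3
    #define CRF_GT_BIT 2
    #define CRF_EQ_BIT 1
    #define CRF_SO_BIT 0

    /* ... while the plain names are the already-shifted masks. */
    #define CRF_LT (1 << CRF_LT_BIT)
    #define CRF_GT (1 << CRF_GT_BIT)
    #define CRF_EQ (1 << CRF_EQ_BIT)
    #define CRF_SO (1 << CRF_SO_BIT)

    int main(void)
    {
        int crf = CRF_GT | CRF_SO;      /* no "1 << ..." at the call site */
        printf("GT set: %d\n", !!(crf & CRF_GT));
        return 0;
    }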
-
By Bharata B Rao
Move instruction decode helpers to target-ppc/internal.h so that some of these can be used from outside of translate.c. This move also helps to get rid of some duplicate helpers in target-ppc/fpu_helper.c. Suggested-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- 11 Jan 2017, 3 commits
-
-
By Richard Henderson
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
By Richard Henderson
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
By Richard Henderson
Use the new primitives for RLWINM and RLDICL. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Richard Henderson <rth@twiddle.net>
-
- 21 Dec 2016, 1 commit
-
-
By Thomas Huth
We've currently got 18 architectures in QEMU, and thus 18 target-xxx folders in the root folder of the QEMU source tree. More architectures (e.g. RISC-V, AVR) are likely to be included soon, too, so the main folder of the QEMU sources slowly gets quite overcrowded with the target-xxx folders. To disburden the main folder a little bit, let's move the target-xxx folders into a dedicated target/ folder, so that target-xxx/ simply becomes target/xxx/ instead. Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part] Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part] Acked-by: Michael Walle <michael@walle.cc> [lm32 part] Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part] Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part] Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part] Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part] Acked-by: Richard Henderson <rth@twiddle.net> [alpha part] Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part] Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part] Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [cris & microblaze part] Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part] Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- 15 Nov 2016, 1 commit
-
-
By Gautham R. Shenoy
vrldmi: Vector Rotate Left Dword then Mask Insert. vrlwmi: Vector Rotate Left Word then Mask Insert. Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com> [ use extract[32,64] and rol[32,64], introduce mask helpers in internal.h ] Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
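The rotate-then-mask-insert operation on a single doubleword element can be pictured as: rotate the source, then replace only the mask-selected bits of the target element. A standalone sketch (not the QEMU helper, which additionally extracts the rotate and mask fields from the vector operands):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t rol64(uint64_t x, unsigned n)
    {
        n &= 63;
        return n ? (x << n) | (x >> (64 - n)) : x;
    }

    /* Rotate src left by rot, then insert it into target under mask,
     * leaving the unmasked target bits unchanged (vrldmi-like). */
    static uint64_t rot_mask_insert(uint64_t target, uint64_t src,
                                    unsigned rot, uint64_t mask)
    {
        return (rol64(src, rot) & mask) | (target & ~mask);
    }

    int main(void)
    {
        uint64_t r = rot_mask_insert(0, 0xffULL, 8, 0xff00ULL);
        printf("%016llx\n", (unsigned long long)r); /* 000000000000ff00 */
        return 0;
    }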
-
- 02 Nov 2016, 1 commit
-
-
By Richard Henderson
Reuse the existing locking provided by stdio to keep in_asm, cpu, op, op_opt, op_ind, and out_asm as contiguous blocks. While it isn't possible to interleave e.g. in_asm or op_opt logs because of the TB lock protecting all code generation, it is possible to interleave cpu logs, or to interleave a cpu dump with an out_asm dump. For mingw32, we appear to have no viable solution for this: the locking functions are not properly exported from the system runtime library. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
-
- 28 Oct 2016, 2 commits
-
-
By Benjamin Herrenschmidt
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
Add the required helpers (GEN_XX2FORM_EO) for supporting these instructions. xxbrh: VSX Vector Byte-Reverse Halfword. xxbrw: VSX Vector Byte-Reverse Word. xxbrd: VSX Vector Byte-Reverse Doubleword. xxbrq: VSX Vector Byte-Reverse Quadword. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
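Conceptually, each variant just reverses the byte order inside every element of the 128-bit vector; a plain-C model of the halfword case (xxbrh-like), independent of how QEMU actually implements it in TCG:

    #include <stdint.h>
    #include <stdio.h>

    /* Swap the two bytes inside each 16-bit element of a 16-byte vector. */
    static void byte_reverse_halfwords(uint8_t v[16])
    {
        for (int i = 0; i < 16; i += 2) {
            uint8_t tmp = v[i];
            v[i] = v[i + 1];
            v[i + 1] = tmp;
        }
    }

    int main(void)
    {
        uint8_t v[16];
        for (int i = 0; i < 16; i++) {
            v[i] = (uint8_t)i;
        }
        byte_reverse_halfwords(v);
        printf("%02x %02x %02x %02x ...\n", v[0], v[1], v[2], v[3]); /* 01 00 03 02 */
        return 0;
    }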
-
- 05 Oct 2016, 1 commit
-
-
By Avinesh Kumar
cmpl: the invalid bit mask should be 0x00400001. bctar: the invalid bit mask should be 0x0000E000. Signed-off-by: Avinesh Kumar <avinesku@linux.vnet.ibm.com> Signed-off-by: Rajalakshmi Srinivasaraghavan <raji@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- 23 Sep 2016, 5 commits
-
-
By Nikunj A Dadhania
tlbie (BookS) and tlbivax (BookE), plus the H_CALLs (pseries), should have a global effect. Introduce the TLB_NEED_GLOBAL_FLUSH flag. During lazy TLB flush, after taking care of pending local flushes, check whether a broadcast flush (at a context-synchronizing event: ptesync/tlbsync, etc.) is needed. Depending on the bitmask state of tlb_need_flush, the TLB is flushed on other cpus if needed and the flags are cleared. Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [dwg: Use 'true' instead of '1' for call to check_tlb_flush()] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
We flush the qemu TLB lazily. check_tlb_flush is called whenever we hit a context-synchronizing event or instruction that requires a pending flush to be performed. However, we fail to handle broadcast TLB flush operations. In order to fix that efficiently, we want to differentiate whether check_tlb_flush() needs to only apply pending local flushes (isync instructions, interrupts, ...) or also global pending flush operations. The latter is only needed when executing instructions that are defined architecturally as synchronizing global TLB flush operations. This in our case is ptesync on BookS and tlbsync on BookE, along with the paravirtualized hypervisor calls. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> [dwg: Changed gen_check_tlb_flush() to also take a bool, and fixed some spelling errors in commit message] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
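A bare-bones model of the local/global split described in these two commits, with the pending-flush state kept in a bitmask (sketch only; the names TLB_NEED_GLOBAL_FLUSH and check_tlb_flush come from the commits above, everything else is simplified):

    #include <stdbool.h>
    #include <stdio.h>

    #define TLB_NEED_LOCAL_FLUSH  0x1
    #define TLB_NEED_GLOBAL_FLUSH 0x2

    static unsigned tlb_need_flush;

    /* Local flushes are applied at any context-synchronizing event;
     * broadcast flushes only at ptesync/tlbsync-like points. */
    static void check_tlb_flush(bool global)
    {
        if (tlb_need_flush & TLB_NEED_LOCAL_FLUSH) {
            printf("flush local TLB\n");
            tlb_need_flush &= ~TLB_NEED_LOCAL_FLUSH;
        }
        if (global && (tlb_need_flush & TLB_NEED_GLOBAL_FLUSH)) {
            printf("flush TLBs of all other cpus\n");
            tlb_need_flush &= ~TLB_NEED_GLOBAL_FLUSH;
        }
    }

    int main(void)
    {
        tlb_need_flush = TLB_NEED_LOCAL_FLUSH | TLB_NEED_GLOBAL_FLUSH;
        check_tlb_flush(false); /* e.g. isync: local only */
        check_tlb_flush(true);  /* e.g. ptesync: also broadcast */
        return 0;
    }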
-
By Ravi Bangoria
darn: Deliver A Random Number. Currently this returns an invalid random number in all cases. A proper algorithm is needed to provide cryptographically suitable random data: reading from /dev/random can block, which is not acceptable behaviour while the cpu instruction is being executed, and moreover /dev/random would only work for linux-user. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> [dwg: Added minor clang warning fix for ppc32 target] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
stxsibx - Store VSX Scalar as Integer Byte Indexed. stxsihx - Store VSX Scalar as Integer Halfword Indexed. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
By Nikunj A Dadhania
lxsibzx - Load VSX Scalar as Integer Byte & Zero Indexed. lxsihzx - Load VSX Scalar as Integer Halfword & Zero Indexed. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-