- 15 November 2016, 40 commits
-
Committed by Marc-André Lureau
ASAN spotted: SUMMARY: AddressSanitizer: 301990288 byte(s) leaked in 33 allocation(s).

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20161109104547.23861-1-marcandre.lureau@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Stefan Hajnoczi
# gpg: Signature made Tue 15 Nov 2016 07:37:27 AM GMT
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg:          It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211

* jasowang/tags/net-pull-request:
  docs: fix COLO architecture diagram
  net: fix sending of data with -net socket, listen backend
  net: skip virtio-net config of deleted nic's peers

Message-id: 1479195830-4725-1-git-send-email-jasowang@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Stefan Hajnoczi
# gpg: Signature made Tue 15 Nov 2016 04:10:29 AM GMT
# gpg: using RSA key 0xBDBE7B27C0DE3057
# gpg: Good signature from "Jeffrey Cody <jcody@redhat.com>"
# gpg:                 aka "Jeffrey Cody <jeff@codyprime.org>"
# gpg:                 aka "Jeffrey Cody <codyprime@gmail.com>"
# Primary key fingerprint: 9957 4B4D 3474 90E7 9D98 D624 BDBE 7B27 C0DE 3057

* jtc/tags/block-pull-request:
  mirror: do not flush every time the disks are synced
  block/curl: Do not wait for data beyond EOF
  block/curl: Remember all sockets
  block/curl: Fix return value from curl_read_cb
  block/curl: Use BDRV_SECTOR_SIZE
  block/curl: Drop TFTP "support"
  qemu-iotests: avoid spurious failure on test 109
  iotests: add transactional failure race test
  blockjob: refactor backup_start as backup_job_create
  blockjob: add block_job_start
  blockjob: add .start field
  blockjob: add .clean property
  blockjob: fix dead pointer in txn list

Message-id: 1479183291-14086-1-git-send-email-jcody@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Stefan Hajnoczi
ppc patch queue 2016-11-15

Latest set of ppc and spapr related patches. Highlights are:
  * More POWER9 instructions
  * Fix some subtle outstanding bugs
  * Add some extra tests

One patch affects bitops.h, so isn't strictly ppc related.

# gpg: Signature made Tue 15 Nov 2016 02:46:48 AM GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>"
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392

* dgibson/tags/ppc-for-2.8-20161115:
  boot-serial-test: Add a test for the powernv machine
  tests: add XSCOM tests for the PowerNV machine
  ppc/pnv: Fix fatal bug on 32-bit hosts
  ppc/pnv: fix xscom address translation for POWER9
  ppc/pnv: add a 'xscom_core_base' field to PnvChipClass
  spapr-vty: Fix bad assert() statement
  FU exceptions should carry a cause (IC)
  spapr: Fix migration of PCI host bridges from qemu-2.7
  target-ppc: Implement bcdctz. instruction
  target-ppc: Implement bcdcfz. instruction
  target-ppc: Implement bcdctn. instruction
  target-ppc: Implement bcdcfn. instruction
  ppc: Remove some stub POWER6 models
  ppc/pnv: fix compile breakage on old gcc
  powernv: CPU compatibility modes don't make sense for powernv
  target-ppc: add vprtyb[w/d/q] instructions
  target-ppc: add vrldnm and vrlwnm instructions
  target-ppc: add vrldmi and vrlwmi instructions
  bitops: fix rol/ror when shift is zero

Message-id: 1479178144-28153-1-git-send-email-david@gibson.dropbear.id.au
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Stefan Hajnoczi
slirp updates

# gpg: Signature made Mon 14 Nov 2016 08:19:16 PM GMT
# gpg: using RSA key 0xA003196827414880
# gpg: Good signature from "Samuel Thibault <samuel.thibault@u-bordeaux.fr>"
# gpg:                 aka "Samuel Thibault <sthibault@debian.org>"
# gpg:                 aka "Samuel Thibault <samuel.thibault@gnu.org>"
# gpg:                 aka "Samuel Thibault <samuel.thibault@inria.fr>"
# gpg:                 aka "Samuel Thibault <samuel.thibault@labri.fr>"
# gpg:                 aka "Samuel Thibault <samuel.thibault@ens-lyon.org>"
# Primary key fingerprint: 900C B024 B679 31D4 0F82 304B D017 8C76 7D06 9EE6
# Subkey fingerprint: 6B0F AC21 8566 46E9 4AA2 D200 A003 1968 2741 4880

* sthibault/tags/samuel-thibault:
  slirp: Fix access to freed memory

Message-id: 20161114202030.17685-1-samuel.thibault@ens-lyon.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Stefan Hajnoczi
migration/next for 20161114

# gpg: Signature made Mon 14 Nov 2016 07:55:42 PM GMT
# gpg: using RSA key 0xF487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>"
# gpg:                 aka "Juan Quintela <quintela@trasno.org>"
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723

* quintela/tags/migration/20161114:
  migration: Fix return code of ram_save_iterate()
  tests/test-vmstate.c: add array of pointer to struct
  tests/test-vmstate.c: add save_buffer util func
  migration: fix missing assignment for has_x_checkpoint_delay

Message-id: 1479153474-2401-1-git-send-email-quintela@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Committed by Zhang Chen
Fix the COLO-Proxy part of the COLO architecture diagram.

Signed-off-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
-
Committed by Daniel P. Berrange
The use of -net socket,listen was broken in the following commit:

    commit 16a3df40
    Author: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
    Date:   Fri May 13 15:35:19 2016 +0800

        net/net: Add SocketReadState for reuse codes

        This function is from net/socket.c, move it to net.c and net.h.
        Add SocketReadState to make others reuse net_fill_rstate().

        suggestion from jason.

This refactored the state out of NetSocketState into a separate SocketReadState. This refactoring requires that a callback is provided to be triggered upon completion of a packet receive from the guest.

The patch only registered this callback in the codepaths hit by -net socket,connect, not -net socket,listen. As a result, packets sent by the guest in the latter case get dropped on the floor. This bug is hidden because net_fill_rstate() silently does nothing if the callback is not set.

This patch adds the missing callback registration and also adds an assert so that QEMU aborts if there are any other codepaths hit which are missing the callback.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
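The shape of the fix, a shared read-state helper whose completion callback must be registered by every code path that creates it, can be shown with a small stand-alone sketch. All of the names below (ReadState, read_state_init, the two setup functions) are hypothetical stand-ins, not the actual QEMU API:

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical stand-in for SocketReadState: assembles a packet and
     * calls 'finish' once a complete packet has been received. */
    typedef struct ReadState {
        void (*finish)(struct ReadState *rs);   /* completion callback */
    } ReadState;

    /* Every setup path goes through this initializer, so the callback
     * cannot be forgotten. */
    static void read_state_init(ReadState *rs, void (*finish)(ReadState *rs))
    {
        rs->finish = finish;
    }

    static void deliver_packet(ReadState *rs)
    {
        printf("packet delivered to the guest's peer\n");
    }

    /* Mimics the -net socket,connect setup path. */
    static void setup_connect(ReadState *rs)
    {
        read_state_init(rs, deliver_packet);
    }

    /* Mimics the -net socket,listen setup path; before the fix the
     * equivalent of this registration was simply missing. */
    static void setup_listen(ReadState *rs)
    {
        read_state_init(rs, deliver_packet);
    }

    /* Mimics net_fill_rstate(): abort loudly instead of silently
     * dropping the packet when no callback was registered. */
    static void fill_rstate(ReadState *rs)
    {
        assert(rs->finish != NULL);
        rs->finish(rs);
    }

    int main(void)
    {
        ReadState listen_rs;
        setup_listen(&listen_rs);
        fill_rstate(&listen_rs);   /* would have asserted before the fix */

        ReadState connect_rs;
        setup_connect(&connect_rs);
        fill_rstate(&connect_rs);
        return 0;
    }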
-
Committed by Yuri Benditovich
https://bugzilla.redhat.com/show_bug.cgi?id=1373816

A QEMU core dump happens during repetitive unplug/plug with multiple queues and a Windows RSS-capable guest. If a back-end delete is requested during virtio-net device initialization, the driver can still try to configure the device for multiple queues. The virtio-net device is expected to be removed as soon as the initialization is done.

Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
-
Committed by Paolo Bonzini
Flushing the disks every time they are synced puts a huge strain on the disks when there are many concurrent migrations. With this patch we only flush twice: just before issuing the event, and just before pivoting to the destination. If management completes the job close to the BLOCK_JOB_READY event, the cost of the second flush should be small anyway.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161109162008.27287-2-pbonzini@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
-
Committed by Max Reitz
libcurl will only give us as much data as there is, not more. The block layer will deny requests beyond the end of file for us; but since this block driver is still using a sector-based interface, we can still get in trouble if the file size is not a multiple of 512.

While we have already made sure not to attempt transfers beyond the end of the file, we are currently still trying to receive data from there if the original request exceeds the file size. This patch fixes this issue and invokes qemu_iovec_memset() on the iovec's tail.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20161025025431.24714-5-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
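The arithmetic behind the fix can be sketched in isolation: clamp the request to the real file size and zero-fill whatever lies beyond EOF, which is what the driver achieves by calling qemu_iovec_memset() on the iovec's tail. The helper below is a hypothetical, self-contained illustration, not the driver code:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define SECTOR_SIZE 512  /* stand-in for BDRV_SECTOR_SIZE */

    /* For a sector-aligned request [offset, offset + len) against a file of
     * 'file_size' bytes, fetch only what exists and zero-fill the tail. */
    static void read_clamped(uint8_t *buf, uint64_t offset, uint64_t len,
                             uint64_t file_size)
    {
        uint64_t avail = offset >= file_size ? 0 : file_size - offset;
        uint64_t fetch = avail < len ? avail : len;

        /* ...a real driver would fetch 'fetch' bytes from the server here... */
        memset(buf, 0xaa, fetch);             /* placeholder for real data */
        memset(buf + fetch, 0, len - fetch);  /* zero the tail beyond EOF  */

        printf("requested %llu bytes, fetched %llu, zero-filled %llu\n",
               (unsigned long long)len, (unsigned long long)fetch,
               (unsigned long long)(len - fetch));
    }

    int main(void)
    {
        uint8_t buf[SECTOR_SIZE];
        /* File of 700 bytes: the second sector only has 188 real bytes. */
        read_clamped(buf, SECTOR_SIZE, SECTOR_SIZE, 700);
        return 0;
    }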
-
Committed by Max Reitz
For some connection types (like FTP, generally), more than one socket may be used (in FTP's case: control vs. data stream). As of commit 838ef602 ("curl: Eliminate unnecessary use of curl_multi_socket_all"), we have to remember all of the sockets used by libcurl, but in fact we only did that for a single one. Since one libcurl connection may use multiple sockets, however, we have to remember them all.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20161025025431.24714-4-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
-
Committed by Max Reitz
While commit 38bbc0a5 is correct in that the callback is supposed to return the number of bytes handled, what it does not mention is that libcurl will throw an error if the callback did not "handle" all of the data passed to it.

Therefore, if the callback receives some data that it cannot handle (either because the receive buffer has not been set up yet or because it would not fit into the receive buffer) and we have to ignore it, we still have to report that the data has been handled.

Obviously, this should not happen normally. But it does happen at least for FTP connections, where some data (that we do not expect) may be generated when the connection is established.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20161025025431.24714-3-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
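A minimal illustration of the rule, using the real libcurl easy API but a hypothetical receive buffer: the write callback must report every byte as handled (return size * nmemb) even when it has nowhere to put the data, otherwise libcurl aborts the transfer:

    #include <curl/curl.h>
    #include <string.h>

    struct recv_buf {
        char  *data;   /* destination buffer, may not be set up yet */
        size_t cap;    /* capacity of 'data' */
        size_t used;
    };

    /* libcurl write callback: report every byte as handled, even the
     * ones we have nowhere to put, or libcurl aborts the transfer. */
    static size_t read_cb(char *ptr, size_t size, size_t nmemb, void *opaque)
    {
        struct recv_buf *buf = opaque;
        size_t realsize = size * nmemb;

        if (buf->data != NULL && buf->used + realsize <= buf->cap) {
            memcpy(buf->data + buf->used, ptr, realsize);
            buf->used += realsize;
        }
        /* Data that did not fit is silently dropped, but still "handled". */
        return realsize;
    }

    int main(void)
    {
        struct recv_buf buf = { 0 };
        CURL *curl = curl_easy_init();
        if (!curl) {
            return 1;
        }
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, read_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buf);
        (void)curl_easy_perform(curl);  /* return code ignored in this sketch */
        curl_easy_cleanup(curl);
        return 0;
    }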
-
Committed by Max Reitz
Currently, curl defines its own constant SECTOR_SIZE. There is no advantage over using the global BDRV_SECTOR_SIZE, so drop it.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20161025025431.24714-2-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
-
Committed by Max Reitz
Because TFTP does not support byte ranges, it was never usable with our curl block driver. Since apparently nobody has ever complained loudly enough for someone to take care of the issue until now, it seems reasonable to assume that nobody has ever actually used it. Therefore, it should be safe to just drop it from curl's protocol list.

[Jeff Cody: Below is an additional summary, pulled with some rewording from follow-up emails between Max and Markus, to explain what worked and what didn't.]

TFTP would sometimes work, to a limited extent, for images <= the curl "readahead" size, so long as reads started at offset zero. By default, that readahead size is 256KB. Reads starting at a non-zero offset would also have returned data from a zero offset. It can become more complicated still, with mixed reads at zero offset and non-zero offsets, due to data buffering. In short, TFTP could only have worked before in very specific scenarios with unrealistic expectations and constraints.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20161102175539.4375-4-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
-
Committed by Paolo Bonzini
In some cases it is possible that query-io-status is called just before the job is completed, causing:

 -{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 31457280, "offset": OFFSET, "speed": 0, "type": "mirror", "error": "Operation not permitted"}}
 -{"return": []}
 +{"return": [{"io-status": "ok", "device": "src", "busy": true, "len": 31457280, "offset": OFFSET, "paused": false, "speed": 0, "ready": false, "type": "mirror"}]}

Assert that the completion event eventually happens.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161109162008.27287-1-pbonzini@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
-
Committed by John Snow
Add a regression test for the case found by Vladimir.

Reported-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1478587839-9834-7-git-send-email-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
-
Committed by John Snow
Refactor backup_start as backup_job_create, which only creates the job but does not automatically start it. The old interface, 'backup_start', is not kept, in favor of limiting the number of nearly-identical interfaces that would have to be edited to keep up with QAPI changes in the future.

Callers that wish to synchronously start the backup block job can instead just call block_job_start immediately after calling backup_job_create.

Transactions are updated to use the new interface, calling block_job_start only during the .commit phase, which helps prevent race conditions where jobs may finish before we even finish building the transaction. This may happen, for instance, during empty block backup jobs.

Reported-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 1478587839-9834-6-git-send-email-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
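The new calling convention can be sketched generically. The simplified job_create()/job_start() functions below are hypothetical stand-ins for backup_job_create() and block_job_start(); they only show the create-then-start split, including the transactional case where the start is deferred to the commit phase:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical, heavily simplified stand-ins for the block job API. */
    typedef struct Job {
        const char *name;
        bool started;
    } Job;

    /* Creates the job but does not run it (analogue of backup_job_create). */
    static Job *job_create(const char *name)
    {
        Job *job = calloc(1, sizeof(*job));
        job->name = name;
        return job;
    }

    /* Starts a previously created job (analogue of block_job_start). */
    static void job_start(Job *job)
    {
        job->started = true;
        printf("job '%s' started\n", job->name);
    }

    int main(void)
    {
        /* Synchronous caller: create and start back to back. */
        Job *standalone = job_create("backup-standalone");
        job_start(standalone);

        /* Transactional caller: create everything first, start only once
         * the whole transaction reaches its commit phase. */
        Job *txn_jobs[2] = { job_create("backup-a"), job_create("backup-b") };
        for (int i = 0; i < 2; i++) {
            job_start(txn_jobs[i]);   /* .commit phase */
        }

        free(standalone);
        free(txn_jobs[0]);
        free(txn_jobs[1]);
        return 0;
    }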
-
Committed by John Snow
Instead of automatically starting jobs at creation time via backup_start et al, we'd like to return a job object pointer that can be started manually at a later point in time.

For now, add the block_job_start mechanism and start the jobs automatically as we have been doing, with conversions job-by-job coming in later patches.

Of note: cancellation of unstarted jobs will perform all of the normal cleanup as if the job had started, particularly abort and clean. The only difference is that we will not emit any events, because the job never actually started.

Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 1478587839-9834-5-git-send-email-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
-
Committed by John Snow
Add an explicit start field to specify the entry point. We already have ownership of the coroutine itself AND manage the lifetime of the coroutine, so let's take control of the creation of the coroutine, too.

This will allow us to delay creation of the actual coroutine until we know we'll actually start a BlockJob in block_job_start. This avoids the sticky question of how to "un-create" a Coroutine that hasn't been started yet.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1478587839-9834-4-git-send-email-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
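A stripped-down sketch of the pattern, with hypothetical structure and function names (the real BlockJobDriver has more fields and runs its entry point in a coroutine): the driver names its entry point in a .start field, and nothing runs until the generic starter is called:

    #include <stdio.h>

    /* Hypothetical, simplified driver structure with an explicit entry point. */
    typedef struct JobDriver {
        const char *name;
        void (*start)(void *opaque);   /* the new .start field */
    } JobDriver;

    typedef struct Job {
        const JobDriver *driver;
        void *opaque;
    } Job;

    /* Generic starter: only here is the worker actually entered, so a job
     * can exist (and be cancelled) before anything has run. */
    static void job_start(Job *job)
    {
        job->driver->start(job->opaque);
    }

    static void backup_run(void *opaque)
    {
        printf("backup worker running for %s\n", (const char *)opaque);
    }

    static const JobDriver backup_driver = {
        .name  = "backup",
        .start = backup_run,           /* entry point chosen by the driver */
    };

    int main(void)
    {
        Job job = { .driver = &backup_driver, .opaque = (void *)"disk0" };
        job_start(&job);
        return 0;
    }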
-
Committed by John Snow
Cleaning up after we have deferred to the main thread but before the transaction has converged can be dangerous and result in deadlocks if the job cleanup invokes any BH polling loops.

A job may attempt to begin cleaning up, but may induce another job to enter its cleanup routine. The second job, part of our same transaction, will block waiting for the first job to finish, so neither job may now make progress.

To rectify this, allow jobs to register a cleanup operation that will always run, regardless of whether the job was in a transaction or not, and whether the transaction job group completed successfully or not.

Move sensitive cleanup to this callback instead, which is guaranteed to be run only after the transaction has converged; this removes sensitive timing constraints from said cleanup.

Furthermore, in future patches these cleanup operations will be performed regardless of whether or not we actually started the job. Therefore, cleanup callbacks should essentially confine themselves to undoing create operations, e.g. setup actions taken in what is now backup_start.

Reported-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1478587839-9834-3-git-send-email-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
-
Committed by Vladimir Sementsov-Ogievskiy

Though it is not intended to be reached through normal circumstances, if we do not gracefully deconstruct the transaction QLIST, we may wind up with stale pointers in the list. The rest of this series attempts to address the underlying issues, but this should fix list inconsistencies.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Tested-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 1478587839-9834-2-git-send-email-jsnow@redhat.com
[Rewrote commit message. --js]
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
-
Committed by Thomas Huth
The new powernv machine ships with a firmware that outputs some text to the serial console, so we can automatically test this machine type in the boot-serial tester, too. To also get some (very limited) test coverage for the new POWER9 CPU emulation, this test is additionally started with "-cpu POWER9".

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by David Gibson
Add a couple of tests on the XSCOM bus of the PowerNV machine for the POWER8 and POWER9 CPUs. The first test reads the CFAM identifier of the chip. The second test goes further into the XSCOM address space and reaches the cores to read their DTS registers.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
[dwg: Fixed an incorrect indentation, and a Makefile problem]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by David Gibson
If the pnv machine type is compiled on a 32-bit host, the unsigned long (host) type is 32-bit. This means that the hweight_long() used to calculate the number of allowed cores only considers the low 32 bits of the cores_mask variable, and can thus return 0 in some circumstances.

This corrects the bug.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Suggested-by: Richard Henderson <rth@twiddle.net>
[clg: replaced hweight_long() by ctpop64()]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
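The underlying pitfall is easy to reproduce with the standard popcount builtins: counting the bits of a 64-bit mask through an unsigned long silently drops the upper half when unsigned long is 32 bits wide, which is why the fix moved to a 64-bit population count (ctpop64). This is a generic illustration, not the machine code itself:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative core mask with bits set in both halves. */
        uint64_t cores_mask = 0x00000000F0000000ULL | 0x0000000F00000000ULL;

        /* On a 32-bit host, 'unsigned long' is 32 bits wide, so the cast
         * below throws away the upper half of the mask; a mask that only
         * uses high bits would even count as 0. */
        int narrow = __builtin_popcountl((unsigned long)cores_mask);

        /* A 64-bit popcount (what ctpop64() provides) is always correct. */
        int wide = __builtin_popcountll(cores_mask);

        printf("popcountl: %d (may be wrong on 32-bit hosts), popcountll: %d\n",
               narrow, wide);
        return 0;
    }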
-
Committed by Cédric Le Goater
High addresses can overflow the uint32_t pcba variable after the 8-byte shift.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
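The overflow is plain C truncation: storing the shifted 64-bit address in a uint32_t discards the high bits, so high XSCOM addresses wrap. The address value and variable names below are purely illustrative:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative high XSCOM-style address; not a real chip value. */
        uint64_t addr = 0x3FC0000000ULL;

        uint32_t pcba32 = (uint32_t)(addr >> 3);  /* truncates the high bits */
        uint64_t pcba64 = addr >> 3;              /* keeps the full value    */

        printf("32-bit: 0x%08" PRIx32 "  64-bit: 0x%016" PRIx64 "\n",
               pcba32, pcba64);
        return 0;
    }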
-
Committed by Cédric Le Goater
The XSCOM addresses for the core registers are encoded in a slightly different way on POWER8 and POWER9.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Thomas Huth
When using the serial console in the GTK interface of QEMU (and QEMU has been compiled with CONFIG_VTE), it is possible to trigger the assert() statement in vty_receive() in spapr_vty.c by pasting a chunk of text with length > 16 into the QEMU window. Most of the other serial backends seem to simply drop characters that they can not handle, so I think we should also do the same in spapr-vty to fix this issue.

Buglink: https://bugs.launchpad.net/qemu/+bug/1639322
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
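A stand-alone sketch of the "drop instead of assert" policy, with an illustrative 16-byte buffer and hypothetical names (not the actual spapr-vty code): only what fits is buffered, the rest is discarded:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define VTY_BUF_SIZE 16   /* illustrative small fixed receive buffer */

    typedef struct Vty {
        uint8_t buf[VTY_BUF_SIZE];
        int     len;
    } Vty;

    /* Store as many characters as fit; silently drop the rest instead of
     * asserting that a chunk can never exceed the available space. */
    static void vty_receive(Vty *vty, const uint8_t *data, int size)
    {
        int space = VTY_BUF_SIZE - vty->len;
        int copy = size < space ? size : space;

        memcpy(vty->buf + vty->len, data, copy);
        vty->len += copy;
        if (copy < size) {
            printf("dropped %d byte(s)\n", size - copy);
        }
    }

    int main(void)
    {
        Vty vty = { .len = 0 };
        const char *paste = "this paste is longer than sixteen bytes";
        vty_receive(&vty, (const uint8_t *)paste, (int)strlen(paste));
        printf("buffered %d byte(s)\n", vty.len);
        return 0;
    }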
-
Committed by Balbir Singh
As per the ISA, FU exceptions need to carry a cause. Executing, for example, a tabort r9 in libc causes an EXCP_FU exception, but we do not wire up the IC (cause) when we post the exception. The cause is required for the kernel to do the right thing. The fix applies only to 64-bit ppc targets.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by David Gibson
daa23699 "spapr_pci: Add a 64-bit MMIO window" subtly broke migration from qemu-2.7 to the current version. It split the device's MMIO window into two pieces for 32-bit and 64-bit MMIO.

The patch included backwards compatibility code to convert the old property into the new format. However, the property value was also transferred in the migration stream and compared with a (probably unwise) VMSTATE_EQUAL. So, the "raw" value from 2.7 is compared to the new-style converted value from (pre-)2.8, giving a mismatch and migration failure.

Although it would be technically possible to fix this in a way allowing backwards migration, that would leave an ugly legacy around indefinitely. This patch takes the simpler approach of bumping the migration version, dropping the unwise VMSTATE_EQUAL (and some equally unwise ones around it) and ignoring them on an incoming migration.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
-
Committed by Jose Ricardo Ziviani
bcdctz. converts from BCD to Zoned numeric format. Zoned format uses a byte to represent a digit, where the most significant nibble is 0x3 or 0xF, depending on the preferred sign.

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Jose Ricardo Ziviani
bcdcfz. converts from Zoned numeric format to BCD. Zoned format uses a byte to represent a digit, where the most significant nibble is 0x3 or 0xF, depending on the preferred sign.

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Jose Ricardo Ziviani
bcdctn. converts from BCD to National numeric format. National format uses a byte to represent a digit, where the most significant nibble is always 0x3 and the least significant nibble is the digit itself.

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Jose Ricardo Ziviani
bcdcfn. converts from National numeric format to BCD. National format uses a byte to represent a digit, where the most significant nibble is always 0x3 and the least significant nibble is the digit itself.

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
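The two encodings used by these four instructions, as described in the commit messages above, can be shown with a few bytes: Zoned keeps one digit per byte with a high nibble of 0x3 or 0xF depending on the preferred sign, National always uses 0x3, and the low nibble is the digit in both cases. This is a generic illustration of the formats, not the TCG helpers:

    #include <stdint.h>
    #include <stdio.h>

    /* Zoned format: high nibble 0x3 or 0xF (preferred sign), low nibble
     * is the decimal digit. */
    static uint8_t zoned_digit(int digit, int negative_zone)
    {
        return (uint8_t)(((negative_zone ? 0xF : 0x3) << 4) | (digit & 0xF));
    }

    /* National format: high nibble always 0x3, low nibble is the digit. */
    static uint8_t national_digit(int digit)
    {
        return (uint8_t)(0x30 | (digit & 0xF));
    }

    int main(void)
    {
        for (int d = 0; d <= 9; d++) {
            printf("digit %d: zoned 0x%02X / 0x%02X, national 0x%02X\n",
                   d, zoned_digit(d, 0), zoned_digit(d, 1), national_digit(d));
        }
        return 0;
    }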
-
Committed by David Gibson
The CPU model table includes stub (commented out) definitions for CPU_POWERPC_POWER6_5 and CPU_POWERPC_POWER6A. These are not real cpu models, but represent the POWER6 in some compatibility modes. If we ever do implement POWER6 (unlikely), we'll implement its compatibility modes in a different way (similar to what we do for POWER7 and POWER8). So these stub definitions can be removed.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
-
Committed by Cédric Le Goater
PnvChip is defined twice and this can confuse old compilers:

      CC    ppc64-softmmu/hw/ppc/pnv_xscom.o
    In file included from qemu.git/hw/ppc/pnv.c:29:
    qemu.git/include/hw/ppc/pnv.h:60: error: redefinition of typedef ‘PnvChip’
    qemu.git/include/hw/ppc/pnv_xscom.h:24: note: previous declaration of ‘PnvChip’ was here
    make[1]: *** [hw/ppc/pnv.o] Error 1
    make[1]: *** Waiting for unfinished jobs....

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by David Gibson
powernv has some code (derived from the spapr equivalent) used in device tree generation which depends on the CPU's compatibility mode / logical PVR. However, compatibility modes don't make sense on powernv - at least not as a property controlled by the host - because the guest in powernv has full hypervisor-level access to the virtual system, and so owns the PCR (Processor Compatibility Register) which implements compatibility modes.

Note: the new logic doesn't take into account kvmppc_smt_threads() like the old version did. However, if core->nr_threads exceeds kvmppc_smt_threads() then things will already be broken, and clamping the value in the device tree isn't going to save us.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
-
Committed by Ankit Kumar
Add the following POWER ISA 3.0 instructions:

  vprtybw: Vector Parity Byte Word
  vprtybd: Vector Parity Byte Double Word
  vprtybq: Vector Parity Byte Quad Word

Signed-off-by: Ankit Kumar <ankit@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Bharata B Rao
vrldnm: Vector Rotate Left Doubleword then AND with Mask
vrlwnm: Vector Rotate Left Word then AND with Mask

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Committed by Gautham R. Shenoy
vrldmi: Vector Rotate Left Dword then Mask Insert
vrlwmi: Vector Rotate Left Word then Mask Insert

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
(use extract[32,64] and rol[32,64], introduce mask helpers in internal.h)
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
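Both families of instructions boil down to two generic bit tricks: a rotate left that must also be well defined for a shift of zero (the corner case fixed by the bitops patch in this pull request), followed either by a plain AND with a mask or by merging the rotated value into the old contents under the mask. The helpers below are a stand-alone illustration of those steps, not QEMU's rol64() or the vector helpers, and they ignore how the instructions build the mask from their begin/end bit fields:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Rotate left that is well defined for shift == 0. */
    static uint64_t rotl64(uint64_t value, unsigned int shift)
    {
        shift &= 63;
        return (value << shift) | (value >> (-shift & 63));
    }

    /* "Rotate then AND with mask" (vrldnm-style, per element). */
    static uint64_t rot_and_mask(uint64_t value, unsigned int shift, uint64_t mask)
    {
        return rotl64(value, shift) & mask;
    }

    /* "Rotate then mask insert" (vrldmi-style, per element): bits selected
     * by the mask come from the rotated source, the rest keep the old value. */
    static uint64_t rot_mask_insert(uint64_t old, uint64_t src,
                                    unsigned int shift, uint64_t mask)
    {
        uint64_t rotated = rotl64(src, shift);
        return (rotated & mask) | (old & ~mask);
    }

    int main(void)
    {
        uint64_t src = 0x1122334455667788ULL;
        uint64_t old = 0xFFFFFFFFFFFFFFFFULL;
        uint64_t mask = 0x00000000FFFF0000ULL;

        printf("rot+and : 0x%016" PRIx64 "\n", rot_and_mask(src, 8, mask));
        printf("rot+ins : 0x%016" PRIx64 "\n", rot_mask_insert(old, src, 8, mask));
        printf("shift 0 : 0x%016" PRIx64 "\n", rotl64(src, 0));
        return 0;
    }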
-