- 15 Aug 2018, 33 commits
-
-
Committed by Peter Maydell
If the "trap general exceptions" bit HCR_EL2.TGE is set, we must mask all virtual interrupts (as per DDI0487C.a D1.14.3). Implement this in arm_excp_unmasked(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180724115950.17316-2-peter.maydell@linaro.org
-
Committed by Adam Lackorzynski
Use an int64_t as a return type to restore the negative check for arm_load_as. Signed-off-by: Adam Lackorzynski <adam@l4re.org> Message-id: 20180730173712.GG4987@os.inf.tu-dresden.de Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Add support for GICv2 virtualization extensions by mapping the necessary I/O regions and connecting the maintenance IRQ lines. Declare those additions in the device tree and in the ACPI tables. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-21-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
This commit improves the way the GIC is realized and connected in the ZynqMP SoC. The security extensions are enabled only if requested in the machine state; the same goes for the virtualization extensions. All the GIC to APU CPU(s) IRQ lines are now connected, including FIQ, vIRQ and vFIQ. The missing CPU to GIC timer IRQ connections are also added (HYP and SEC timers). The GIC maintenance IRQs are back-wired to the correct GIC PPIs. Finally, the MMIO mappings are reworked to take the ZynqMP specifics into account. The GIC (v)CPU interface is aliased 16 times:
  * for the first 0x1000 bytes, from 0xf9010000 to 0xf901f000
  * for the second 0x1000 bytes, from 0xf9020000 to 0xf902f000
The virtual interface and virtual CPU interface are mapped only when the virtualization extensions are requested. The XlnxZynqMPGICRegion struct has been enhanced to capture all this information. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 20180727095421.386-20-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Add some traces to the ARM GIC to catch register accesses (distributor, (v)CPU interface and virtual interface), and to take the virtualization extensions into account (print `vcpu` instead of `cpu` when needed). Also add some virtualization-extension-specific traces: LR updating and maintenance IRQ generation. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-19-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Implement the maintenance interrupt generation that is part of the GICv2 virtualization extensions. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-18-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Add the gic_update_virt() function to update the vCPU interface states and raise vIRQ and vFIQ as needed. This commit renames gic_update() to gic_update_internal() and generalizes it to handle both cases, with a `virt' parameter to track whether we are updating the CPU or vCPU interfaces. The main difference between CPU and vCPU is the way we select the best IRQ; this part has been split into the gic_get_best_(v)irq functions. For the virt case, the LRs are iterated to find the best candidate. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-17-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Implement the read and write functions for the virtual interface of the virtualization extensions in the GICv2. One mirror region per CPU is also created, which maps to that specific CPU id. This is required by the GIC architecture specification. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-16-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Add the read/write functions to handle accesses to the vCPU interface. Those accesses are forwarded to the real CPU interface, with the CPU id being converted to the corresponding vCPU id (vCPU id = CPU id + GIC_NCPU). Signed-off-by: Luc Michel <luc.michel@greensocs.com> Message-id: 20180727095421.386-15-luc.michel@greensocs.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
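A minimal sketch of the id convention mentioned above (the GIC_NCPU value of 8 and the helper names are assumptions for the illustration, not the QEMU identifiers):

    #include <stdbool.h>

    #define GIC_NCPU 8   /* GICv2 supports up to 8 physical CPU interfaces */

    /* vCPU interfaces are numbered right after the physical ones. */
    static inline bool id_is_vcpu(int cpu)      { return cpu >= GIC_NCPU; }
    static inline int  cpu_to_vcpu_id(int cpu)  { return cpu + GIC_NCPU; }
    static inline int  vcpu_to_real_id(int cpu) { return id_is_vcpu(cpu) ? cpu - GIC_NCPU : cpu; }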
-
Committed by Luc Michel
Implement virtualization extensions in the gic_cpu_read() and gic_cpu_write() functions. Those are the last bits missing to fully support virtualization extensions in the CPU interface path. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-14-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Implement virtualization extensions in the gic_deactivate_irq() and gic_complete_irq() functions. When the guest writes an invalid vIRQ to V_EOIR or V_DIR, since the GICv2 specification is not entirely clear here, we adopt the behaviour observed on real hardware:
  * When V_CTRL.EOIMode is false (EOI split is disabled):
    - In case of an invalid vIRQ write to V_EOIR:
      -> If some bits are set in H_APR, the write triggers a priority drop and increments V_HCR.EOICount.
      -> If V_APR is already cleared, nothing happens.
    - An invalid vIRQ write to V_DIR is ignored.
  * When V_CTRL.EOIMode is true:
    - In case of an invalid vIRQ write to V_EOIR:
      -> If some bits are set in H_APR, the write triggers a priority drop.
      -> If V_APR is already cleared, nothing happens.
    - An invalid vIRQ write to V_DIR increments V_HCR.EOICount.
Signed-off-by: Luc Michel <luc.michel@greensocs.com> Message-id: 20180727095421.386-13-luc.michel@greensocs.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
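The behaviour table above, restated as a small standalone sketch (illustrative only; the type and function names are invented for the example and are not the QEMU implementation):

    #include <stdbool.h>

    typedef struct {
        bool priority_drop;         /* a priority drop happens       */
        bool eoicount_incremented;  /* V_HCR.EOICount is incremented */
    } InvalidVirqEffect;

    /* Effect of writing an invalid vIRQ to V_EOIR. */
    static InvalidVirqEffect invalid_write_v_eoir(bool eoimode, bool apr_bits_set)
    {
        InvalidVirqEffect e = { false, false };
        if (apr_bits_set) {
            e.priority_drop = true;
            if (!eoimode) {
                e.eoicount_incremented = true;
            }
        }
        return e;
    }

    /* Effect of writing an invalid vIRQ to V_DIR. */
    static InvalidVirqEffect invalid_write_v_dir(bool eoimode)
    {
        InvalidVirqEffect e = { false, false };
        e.eoicount_incremented = eoimode;   /* ignored when EOIMode is false */
        return e;
    }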
-
Committed by Luc Michel
Implement virtualization extensions in the gic_acknowledge_irq() function. This function changes the state of the highest priority IRQ from pending to active. When the current CPU is a vCPU, modifying the state of an IRQ modifies the corresponding LR entry. However, if we clear the pending flag before setting the active one, we lose track of the LR entry as it becomes invalid: the next call to gic_get_lr_entry() will fail. To overcome this issue, we call gic_activate_irq() before gic_clear_pending(). This does not change the general behaviour of gic_acknowledge_irq(). We also move the SGI case into gic_clear_pending_sgi() to enhance code readability, as the virtualization extensions support adds an if-else level. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-12-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Implement virtualization extensions in gic_activate_irq() and gic_drop_prio(), and in gic_get_prio_from_apr_bits(), which is called by gic_drop_prio(). When the current CPU is a vCPU:
  - use GIC_VIRT_MIN_BPR and GIC_VIRT_NR_APRS instead of their non-virt counterparts,
  - the vCPU APR is stored in the virtual interface, in h_apr.
Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-11-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Add some helper functions to gic_internal.h to get or change the state of an IRQ. When the current CPU is not a vCPU, the call is forwarded to the GIC distributor. Otherwise, it acts on the list register matching the IRQ in the current CPU's virtual interface. gic_clear_active() can have a side effect on the distributor, even in the vCPU case, when the corresponding LR has the HW field set. Use those functions in the CPU interface code path to prepare for the vCPU interface implementation. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180727095421.386-10-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
An access to the CPU interface is non-secure if the current GIC instance implements the security extensions and the memory access is actually non-secure. Until now, this was checked with tests such as if (s->security_extn && !attrs.secure) { ... } in various places of the CPU interface code. With the implementation of the virtualization extensions, those tests must be updated to take into account whether we are in a vCPU interface or not, because the exposed vCPU interface does not implement security extensions. This commit replaces all those tests with a call to the gic_cpu_ns_access() function to check whether the current access to the CPU interface is non-secure. This function takes into account whether the current CPU is a vCPU or not. Note that this function is used only in the (v)CPU interface code path; the distributor code path is left unchanged, as the distributor is not exposed to vCPUs at all. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180727095421.386-9-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
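A plausible shape for such a check, as a standalone sketch (the names, the GIC_NCPU value and the exact condition are assumptions for illustration; see the actual gic_cpu_ns_access() in QEMU for the real logic):

    #include <stdbool.h>

    #define GIC_NCPU 8

    static inline bool id_is_vcpu(int cpu) { return cpu >= GIC_NCPU; }

    /* An access counts as non-secure only on a physical CPU interface,
     * when the security extensions are implemented and the transaction
     * itself is non-secure; vCPU interfaces never expose the security
     * extensions, so their accesses are never treated as non-secure here. */
    static bool cpu_ns_access(bool security_extn, bool access_secure, int cpu)
    {
        return !id_is_vcpu(cpu) && security_extn && !access_secure;
    }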
-
Committed by Luc Michel
Add some helper macros and functions related to the virtualization extensions to gic_internal.h. The GICH_LR_* macros help extract specific fields of a list register value. The only tricky one is the priority field, as only the MSBs are stored; the value must be shifted accordingly to obtain the correct priority value. gic_is_vcpu() and gic_get_vcpu_real_id() help with (v)CPU id manipulation, to abstract the fact that vCPU ids are in the range [GIC_NCPU, GIC_NCPU + num_cpu). gic_lr_* and gic_virq_is_valid() help with the list registers. gic_get_lr_entry() returns the LR entry for a given (vCPU, irq) pair. It is meant to be used in contexts where we know for sure that the entry exists, so we assert that the entry is actually found, and the caller can avoid the NULL check on the returned pointer. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-8-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
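As an illustration of the priority-field point above, here is a standalone sketch of recovering a full 8-bit priority from a GICv2 list register; the bit positions follow the GICv2 architecture (GICH_LR.Priority in bits [27:23], holding the 5 most significant priority bits), and the macro and function names are invented for the example:

    #include <stdint.h>

    #define LR_PRIORITY_SHIFT 23
    #define LR_PRIORITY_MASK  (0x1fu << LR_PRIORITY_SHIFT)

    /* The LR stores only the 5 MSBs of the 8-bit priority, so shift the
     * extracted field up by 3 before comparing with full priorities. */
    static inline uint8_t lr_priority(uint32_t lr)
    {
        return (uint8_t)(((lr & LR_PRIORITY_MASK) >> LR_PRIORITY_SHIFT) << 3);
    }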
-
Committed by Luc Michel
Add the register definitions for the virtual interface of the GICv2. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-7-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Add the necessary parts of the virtualization extensions state to the GIC state. We choose to increase the size of the CPU interfaces state to add space for the vCPU interfaces (the GIC_NCPU_VCPU macro). This way, we'll be able to reuse most of the CPU interface code for the vCPUs. The only exception is the APR value, which is stored in h_apr in the virtual interface state for vCPUs. This is due to some complications with the GIC VMState, for which we don't want to break backward compatibility: the APRs are stored in 2D arrays, and increasing the second dimension would lead to an ugly VMState description. To avoid that, we keep it in h_apr for vCPUs. The vCPUs are numbered from GIC_NCPU to (GIC_NCPU * 2) - 1. The `gic_is_vcpu` function helps determine whether a given CPU id corresponds to a physical CPU or a virtual one. For the in-kernel KVM VGIC, since the exposed VGIC does not implement the virtualization extensions, we report an error if the corresponding property is set to true. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-6-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Provide a VMSTATE_UINT16_SUB_ARRAY macro to save a uint16_t sub-array in a VMState. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180727095421.386-5-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
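A hedged usage sketch, assuming the new macro follows the same (_field, _state_type, _start, _num) argument order as the existing VMSTATE_UINT32_SUB_ARRAY macro; the state type and field below are invented for the example and this builds only inside QEMU:

    #include "qemu/osdep.h"
    #include "migration/vmstate.h"

    typedef struct DemoState {
        uint16_t values[16];
    } DemoState;

    static const VMStateDescription vmstate_demo = {
        .name = "demo",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            /* migrate only elements [4, 12) of the uint16_t array */
            VMSTATE_UINT16_SUB_ARRAY(values, DemoState, 4, 8),
            VMSTATE_END_OF_LIST()
        }
    };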
-
Committed by Luc Michel
Some functions are now only used in arm_gic.c, so make them static. Some of them were only used by the NVIC implementation and are not used anymore, so remove them. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-4-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Luc Michel
Implement the GICD_ISACTIVERn and GICD_ICACTIVERn registers in the GICv2. Those registers allow setting or clearing the active state of an IRQ in the distributor. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-3-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
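For context, a standalone sketch of the write-1-to-set / write-1-to-clear semantics these registers use (register n covers IRQs 32*n to 32*n+31, one bit per IRQ); this illustrates the architectural behaviour, not the QEMU distributor code:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_IRQS 1024

    static bool irq_active[NUM_IRQS];

    /* GICD_ISACTIVERn: each bit written as 1 sets the active state. */
    static void write_isactiver(unsigned n, uint32_t val)
    {
        for (unsigned bit = 0; bit < 32 && n * 32 + bit < NUM_IRQS; bit++) {
            if (val & (1u << bit)) {
                irq_active[n * 32 + bit] = true;
            }
        }
    }

    /* GICD_ICACTIVERn: each bit written as 1 clears the active state. */
    static void write_icactiver(unsigned n, uint32_t val)
    {
        for (unsigned bit = 0; bit < 32 && n * 32 + bit < NUM_IRQS; bit++) {
            if (val & (1u << bit)) {
                irq_active[n * 32 + bit] = false;
            }
        }
    }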
-
Committed by Luc Michel
In preparation for the virtualization extensions implementation, refactor the names of the functions and macros that act on the GIC distributor to make that fact explicit. It will be useful to differentiate them from the ones that will act on the virtual interfaces. Signed-off-by: Luc Michel <luc.michel@greensocs.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180727095421.386-2-luc.michel@greensocs.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Peter Maydell
We set up TLB entries in tlb_set_page_with_attrs(), where we have some logic for determining whether the TLB entry is considered to be RAM-backed, and thus has a valid addend field. When we look at the TLB entry in get_page_addr_code(), we use different logic for determining whether to treat the page as RAM-backed and use the addend field. This is confusing, and in fact buggy, because the code in tlb_set_page_with_attrs() correctly decides that rom_device memory regions not in romd mode are not RAM-backed, but the code in get_page_addr_code() thinks they are RAM-backed. This typically results in a "Bad ram pointer" assertion if the guest tries to execute from such a memory region. Fix this by making get_page_addr_code() just look at the TLB_MMIO bit in the code_address field of the TLB, which tlb_set_page_with_attrs() sets if and only if the addend field is not valid for code execution. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180713150945.12348-1-peter.maydell@linaro.org
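As a rough sketch of the fixed decision (simplified stand-in types and flag value, not the actual CPUTLBEntry layout or the get_page_addr_code() code): rely solely on the flag that tlb_set_page_with_attrs() already recorded, instead of re-deriving "is this RAM?" a second time.

    #include <stdint.h>

    #define TLB_MMIO_FLAG (1u << 5)   /* illustrative flag bit */

    typedef struct {
        uintptr_t addr_code;   /* guest code address; flags in the low bits */
        uintptr_t addend;      /* host-to-guest offset, valid only for RAM  */
    } TlbEntrySketch;

    /* Return the host address for code at vaddr, or -1 if the page is not
     * RAM-backed (the caller then falls back to the non-RAM execution path). */
    static intptr_t page_addr_for_code(const TlbEntrySketch *e, uintptr_t vaddr)
    {
        if (e->addr_code & TLB_MMIO_FLAG) {
            return -1;
        }
        return (intptr_t)(vaddr + e->addend);
    }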
-
Committed by Peter Maydell
Now that we have full support for small regions, including execution, we can remove the workarounds where we marked all small regions as non-executable for the M-profile MPU and SAU. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: Cédric Le Goater <clg@kaod.org> Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180710160013.26559-7-peter.maydell@linaro.org
-
Committed by Peter Maydell
Now that all the callers can handle get_page_addr_code() returning -1, remove all the code which tries to handle execution from MMIO regions or small-MMU-region RAM areas. This will mean that we can correctly execute from these areas, rather than ending up either aborting QEMU or delivering an incorrect guest exception. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: Cédric Le Goater <clg@kaod.org> Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180710160013.26559-6-peter.maydell@linaro.org
-
Committed by Peter Maydell
If get_page_addr_code() returns -1, this indicates that there is no RAM page we can read a full TB from. Instead we must create a TB which contains a single instruction and which we do not cache, so it is executed only once. Since this means we can now have TBs which are not in any page list, we also need to make tb_phys_invalidate() handle them (by not trying to remove them from a nonexistent page list). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Tested-by: Cédric Le Goater <clg@kaod.org> Message-id: 20180710160013.26559-5-peter.maydell@linaro.org
-
Committed by Peter Maydell
When we support execution from non-RAM MMIO regions, get_page_addr_code() will return -1 to indicate that there is no RAM at the requested address. Handle this in tb_check_watchpoint() -- if the exception happened for a PC which doesn't correspond to RAM, then there is no need to invalidate any TBs, because the one-instruction TB will not have been cached. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Cédric Le Goater <clg@kaod.org> Message-id: 20180710160013.26559-4-peter.maydell@linaro.org
-
Committed by Peter Maydell
When we support execution from non-RAM MMIO regions, get_page_addr_code() will return -1 to indicate that there is no RAM at the requested address. Handle this in the cpu-exec TB hashtable lookup code, treating it as "no match found". Note that the call to get_page_addr_code() in tb_lookup_cmp() needs no changes -- a return of -1 will already correctly result in the function returning false. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Tested-by: Cédric Le Goater <clg@kaod.org> Message-id: 20180710160013.26559-3-peter.maydell@linaro.org
-
Committed by Peter Maydell
The io_readx() function needs to know whether the load it is doing is an MMU_DATA_LOAD or an MMU_INST_FETCH, so that it can pass the right value to the cpu_transaction_failed() function. Plumb this information through from the softmmu code. This is currently not often going to give the wrong answer, because usually instruction fetches go via get_page_addr_code(). However, once we switch over to handling execution from non-RAM by creating single-insn TBs, the path for an insn fetch to generate a bus error will be through cpu_ld*_code() and io_readx(), so without this change we will generate a d-side fault when we should generate an i-side fault. We also have to pass the access type via a CPU struct global down to unassigned_mem_read(), for the benefit of the targets which still use the cpu_unassigned_access() hook (m68k, mips, sparc, xtensa). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: Cédric Le Goater <clg@kaod.org> Message-id: 20180710160013.26559-2-peter.maydell@linaro.org
-
Committed by Julia Suvorova
The differences from the ARMv7-M NVIC are:
  * ARMv6-M only supports up to 32 external interrupts (already a configurable feature). The ICTR is reserved.
  * The Active Bit Register is reserved.
  * ARMv6-M supports 4 priority levels, against 256 in ARMv7-M.
Signed-off-by: Julia Suvorova <jusual@mail.ru> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Julia Suvorova
Forbid stack alignment change (CCR). Reserve the FAULTMASK and BASEPRI registers. Report any fault as a HardFault: MemManage, BusFault and UsageFault are disabled, so they are always escalated to HardFault (SHCSR). Signed-off-by: Julia Suvorova <jusual@mail.ru> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 20180718095628.26442-1-jusual@mail.ru Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Julia Suvorova
Handle the SCS reserved registers listed in the ARMv6-M ARM, D3.6.1. All reserved registers are RAZ/WI. ARM_FEATURE_M_MAIN is used for the checks, because these registers are reserved in ARMv8-M Baseline too. Signed-off-by: Julia Suvorova <jusual@mail.ru> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Julia Suvorova
MSR handling is the only place where CONTROL.nPRIV is modified. Signed-off-by: Julia Suvorova <jusual@mail.ru> Message-id: 20180705222622.17139-1-jusual@mail.ru Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- 14 Aug 2018, 1 commit
-
-
Committed by Peter Maydell
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- 08 Aug 2018, 1 commit
-
-
Committed by Peter Maydell
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- 07 Aug 2018, 3 commits
-
-
Committed by Marc-André Lureau
With vga=775 on the Linux command line, a first boot of the VM running Linux works fine. After a warm reboot it crashes during Linux boot. Before that, valgrind points out a bad memory write to the console surface. The VGA code is not aware that virtio-gpu got a message surface scanout when the display is disabled. Let's reset the VGA graphic mode when that is the case, so that a new display surface is created when doing further VGA operations. https://bugs.launchpad.net/qemu/+bug/1784900/ Reported-by: Stefan Berger <stefanb@linux.vnet.ibm.com> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Gerd Hoffmann <kraxel@redhat.com> Tested-by: Stefan Berger <stefanb@linux.vnet.ibm.com> Message-id: 20180803153235.4134-1-marcandre.lureau@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Peter Maydell
The data in an mbuf buffer is not necessarily at the start of the allocated buffer. (For instance, m_adj() allows data to be trimmed from the start by just advancing the pointer and reducing the length.) This means that the allocated buffer size (m->m_size) and the amount of space from the m_data pointer to the end of the buffer (M_ROOM(m)) are not necessarily the same. Commit 864036e2 tried to change the m_inc() function from taking the new allocated-buffer-size to taking the new room-size, but forgot to change the initial "do we already have enough space" check. This meant that if we were trying to extend a buffer which had a leading gap between the buffer start and the data, we might incorrectly decide it didn't need to be extended, and then overrun the end of the buffer, causing memory corruption and an eventual crash. Change the "already big enough?" condition from checking the argument against m->m_size to checking it against M_ROOM(m). This only makes a difference for the callsite in m_cat(); the other three callsites all start with a freshly allocated mbuf from m_get(), which will have m->m_size == M_ROOM(m). Fixes: 864036e2 Fixes: https://bugs.launchpad.net/qemu/+bug/1785670 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org> Message-id: 20180807114501.12370-1-peter.maydell@linaro.org Tested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
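A simplified sketch of the corrected test (the struct fields and the M_ROOM definition below are illustrative stand-ins for the slirp ones): grow the buffer only when the requested size exceeds the room left after the data pointer, not the total allocation.

    #include <stdbool.h>

    struct mbuf_sketch {
        char *m_dat;    /* start of the allocated buffer */
        char *m_data;   /* start of the data within it   */
        int   m_size;   /* total allocated size          */
    };

    #define M_ROOM(m) ((m)->m_size - (int)((m)->m_data - (m)->m_dat))

    static bool m_inc_needs_grow(const struct mbuf_sketch *m, int size)
    {
        /* The buggy check compared size against m->m_size, so a leading
         * gap before m_data could make the buffer look big enough when
         * it was not. */
        return size > M_ROOM(m);
    }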
-
Committed by Thomas Huth
The instance_init function of the xtensa CPUs creates a memory region, but does not set an owner, so the memory region is not destroyed correctly when the CPU object is removed. This can happen when introspecting the CPU devices, so introspecting the CPU device will leave a dangling memory region object in the QOM tree. Make sure to set the right owner here to fix this issue. Signed-off-by: Thomas Huth <thuth@redhat.com> Acked-by: Max Filippov <jcmvbkbc@gmail.com> Message-id: 1532005320-17794-1-git-send-email-thuth@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- 06 Aug 2018, 2 commits
-
-
Committed by Peter Maydell
The code currently in gicv3_gicd_no_migration_shift_bug_post_load() that handles migration from older QEMU versions with a particular bug is misplaced. We need to run this after migration in all cases, not just the cases where the "arm_gicv3/gicd_no_migration_shift_bug" subsection is present, so it must go in a post_load hook for the top-level VMSD, not for the subsection. Move it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-id: 20180806123445.1459-6-peter.maydell@linaro.org
-
Committed by Peter Maydell
Contrary to the impression given in docs/devel/migration.rst, the migration code does not run the pre_load hook for a subsection unless the subsection appears on the wire, and so this is not a place where you can set the default value of the state for the "subsection not present" case. Instead, this needs to be done in a pre_load hook for whatever is the parent VMSD of the subsection. We got this wrong in two of the subsection definitions in the GICv3 migration structs; fix this. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-id: 20180806123445.1459-5-peter.maydell@linaro.org
-