- 16 Aug 2019, 22 commits
-
-
Committed by Andrew Jones

Unless we're guaranteed to always increase ARM_MAX_VQ by a multiple of four, we should use DIV_ROUND_UP to ensure we get an appropriate array size.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
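A minimal sketch of the pattern, assuming a predicate-register layout of 2 * ARM_MAX_VQ bytes packed into uint64_t words (the exact declaration may differ from the patch):

    #include <stdint.h>

    /* DIV_ROUND_UP comes from include/qemu/osdep.h. */
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    #define ARM_MAX_VQ 16

    typedef struct ARMPredicateReg {
        /* Rounds up, so the array is big enough even when
         * 2 * ARM_MAX_VQ is not a multiple of 8. */
        uint64_t p[DIV_ROUND_UP(2 * ARM_MAX_VQ, 8)];
    } ARMPredicateReg;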
-
Committed by Andrew Jones

The current implementation of ZCR_ELx matches the architecture, only implementing the lower four bits, with the rest RAZ/WI. This puts a strict limit of 16 on ARM_MAX_VQ. Make sure we don't let ARM_MAX_VQ grow without a corresponding update here.

Suggested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
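A hedged sketch of the guard, with the function name and surrounding context assumed rather than taken from the patch:

    #include "qemu/osdep.h"   /* QEMU_BUILD_BUG_ON() */

    static void zcr_write_len(uint64_t *reg, uint64_t value)
    {
        /* ZCR_ELx.LEN is 4 bits wide, so only VQ 1..16 is encodable;
         * fail the build if ARM_MAX_VQ ever outgrows that. */
        QEMU_BUILD_BUG_ON(ARM_MAX_VQ > 16);
        *reg = value & 0xf;   /* bits above LEN[3:0] are RAZ/WI */
    }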
-
Committed by Andrew Jones

We first convert the pmu property from a static property to one with its own accessors. Then we use the set accessor to check whether the PMU is supported when using KVM. Indeed, a 32-bit KVM host does not support the PMU, so this check will catch an attempt to use it at property-set time.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
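A minimal sketch of such accessors; the accessor names and the kvm_arm_pmu_supported() check are assumptions about how the validation could look, not a copy of the patch:

    static bool arm_get_pmu(Object *obj, Error **errp)
    {
        ARMCPU *cpu = ARM_CPU(obj);

        return cpu->has_pmu;
    }

    static void arm_set_pmu(Object *obj, bool value, Error **errp)
    {
        ARMCPU *cpu = ARM_CPU(obj);

        /* Reject at property-set time what the host cannot provide. */
        if (value && kvm_enabled() && !kvm_arm_pmu_supported(CPU(cpu))) {
            error_setg(errp, "'pmu' feature not supported by KVM on this host");
            return;
        }
        cpu->has_pmu = value;
    }

    /* Registration replaces the old static property: */
    object_property_add_bool(obj, "pmu", arm_get_pmu, arm_set_pmu, &error_abort);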
-
Committed by Andrew Jones

If -cpu <cpu>,aarch64=off is used then KVM must also be used, and it and the host must support running the vcpu in 32-bit mode. Also, if -cpu <cpu>,aarch64=on is used, then it doesn't matter whether KVM is enabled or not.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
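A hedged sketch of the set accessor's validation; the function and helper names are assumptions:

    static void aarch64_cpu_set_aarch64(Object *obj, bool value, Error **errp)
    {
        ARMCPU *cpu = ARM_CPU(obj);

        /* aarch64=off only works under KVM, on a host whose KVM can run
         * the vcpu in 32-bit mode; TCG always leaves the feature set. */
        if (!value && !kvm_enabled()) {
            error_setg(errp, "'aarch64' feature cannot be disabled "
                             "unless KVM is enabled");
            return;
        }

        if (value) {
            set_feature(&cpu->env, ARM_FEATURE_AARCH64);
        } else {
            unset_feature(&cpu->env, ARM_FEATURE_AARCH64);
        }
    }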
-
Committed by Richard Henderson

Replace x = double_saturate(y) with x = add_saturate(y, y). There is no need for a separate, more specialized helper.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
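Illustratively, at the TCG level the call site changes like this (variable names assumed):

    /* before: a helper that exists only to double with saturation */
    gen_helper_double_saturate(tmp, cpu_env, tmp);

    /* after: the general saturating add, with both operands the same */
    gen_helper_add_saturate(tmp, cpu_env, tmp, tmp);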
-
Committed by Richard Henderson

Promote this function from aarch64 to fully general use. Use it to unify the code sequences for generating illegal opcode exceptions.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
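The shared routine could look like this sketch, reusing the pieces the generic exception path already has (close to the aarch64 original, but not verbatim):

    static void unallocated_encoding(DisasContext *s)
    {
        /* Unallocated and reserved encodings are UNDEFINED. */
        gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
                           default_exception_el(s));
    }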
-
Committed by Richard Henderson

Unlike the other more generic gen_exception{,_internal}_insn interfaces, breakpoints always refer to the current instruction.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Richard Henderson

The offset is variable depending on the instruction set. Passing in the actual value is clearer in intent.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Richard Henderson

The offset is variable depending on the instruction set, whereas we have stored values for the current pc and the next pc. Passing in the actual value is clearer in intent.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Richard Henderson

We must update s->base.pc_next when we return from the translate_insn hook to the main translator loop. By incrementing s->base.pc_next immediately after reading the insn word, "pc_next" contains the address of the next instruction throughout translation. All remaining uses of s->pc are referencing the address of the next insn, so this is now a simple global replacement. Remove the "s->pc" field.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
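A sketch of the resulting pattern in the translate_insn hook (A32 shown; the field names follow the commit message, the surrounding code is assumed):

    uint32_t insn = arm_ldl_code(env, dc->base.pc_next, dc->sctlr_b);
    dc->pc_curr = dc->base.pc_next;   /* address of the current insn */
    dc->base.pc_next += 4;            /* from here on: the next insn */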
-
Committed by Richard Henderson

The thumb bit has already been removed from s->pc, which is therefore always even.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Richard Henderson

Provide a common routine for the places that require ALIGN(PC, 4) as the base address, as opposed to plain PC. The two are always the same for A32, but the difference is meaningful for Thumb mode.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
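A minimal sketch of such a routine; the name is assumed:

    /* ALIGN(PC, 4): the base address for literal-pool addressing.
     * For A32 this equals PC, since A32 PC is always word-aligned;
     * for Thumb the low bits can be nonzero and must be cleared. */
    static inline uint32_t read_pc_align4(DisasContext *s)
    {
        return read_pc(s) & ~3u;
    }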
-
Committed by Richard Henderson

We currently have 3 different ways of computing the architectural value of "PC" as seen in the ARM ARM. The value of s->pc has been incremented past the current insn, but that is all. Thus for a32, PC = s->pc + 4; for t32, PC = s->pc; for t16, PC = s->pc + 2. These differing computations make it impossible at present to unify the various code paths. With the newly introduced s->pc_curr, we can compute the correct value for all cases, using the formula given in the ARM ARM. This changes the behaviour for load_reg() and load_reg_var() when called with reg==15 from a 32-bit Thumb instruction: previously they would have returned the incorrect value of pc_curr + 6, and now they will return the architecturally correct value of PC, which is pc_curr + 4. This will not affect well-behaved guest software, because all of the places we call these functions from T32 code are instructions where using r15 is UNPREDICTABLE. Using the architectural PC value here is more consistent with the T16 and A32 behaviour.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-4-richard.henderson@linaro.org
[PMM: added commit message note about UNPREDICTABLE T32 cases]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
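With s->pc_curr available, the single formula from the ARM ARM covers all three cases. This sketch matches the computation the commit describes, though the helper name is assumed:

    static inline uint32_t read_pc(DisasContext *s)
    {
        /* ARM ARM: reading PC yields the current insn address
         * plus 8 in A32 state, or plus 4 in Thumb state. */
        return s->pc_curr + (s->thumb ? 4 : 8);
    }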
-
Committed by Richard Henderson

Add a new field to retain the address of the instruction currently being translated. The 32-bit uses are all within subroutines used by a32 and t32. This will become less obvious when t16 support is merged with a32+t32, and having a clear definition will help. Convert aarch64 as well for consistency. Note that there is one instance of a pre-assert fprintf that used the wrong value for the address of the current instruction.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Richard Henderson

This function is used in two different contexts, and it will be clearer if the function is given the address to which it applies.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Peter Maydell

When generating an architectural single-step exception we were routing it to the "default exception level", which is to say the same exception level we execute at, except that EL0 exceptions go to EL1. This is incorrect because the debug exception level can be configured by the guest for situations such as single stepping of EL0 and EL1 code by EL2. We have to track the target debug exception level in the TB flags, because it is dependent on CPU state like HCR_EL2.TGE and MDCR_EL2.TDE. (That we were previously calling the arm_debug_target_el() function to determine dc->ss_same_el is itself a bug, though one that would only have manifested as incorrect syndrome information.) Since we are out of TB flag bits unless we want to expand into the cs_base field, we share some bits with the M-profile-only HANDLER and STACKCHECK bits, since only A-profile has this singlestep.

Fixes: https://bugs.launchpad.net/qemu/+bug/1838913
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190805130952.4415-3-peter.maydell@linaro.org
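A sketch of how the shared bits might be declared among the TB flag definitions; the bit positions and widths here are illustrative, not the patch's actual layout:

    /* M-profile only: */
    FIELD(TBFLAG_A32, HANDLER, 21, 1)
    FIELD(TBFLAG_A32, STACKCHECK, 22, 1)
    /* A-profile only, reusing the same bit positions, since only
     * A-profile has architectural single-step: */
    FIELD(TBFLAG_ANY, DEBUG_TARGET_EL, 21, 2)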
-
Committed by Peter Maydell

Factor out code to 'generate a singlestep exception', which is currently repeated in four places. To do this we need to also pull the identical copies of the gen_exception() function out of translate-a64.c and translate.c into translate.h. (There is a bug in the code: we're taking the exception to the wrong target EL. This will be simpler to fix if there's only one place to do it.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190805130952.4415-2-peter.maydell@linaro.org
-
Committed by Aaron Hill

This commit properly sets the ENET_BD_BDU flag once the emulated FEC controller has finished processing the last descriptor. This is done for both transmit and receive descriptors.

This allows the QNX 7.0.0 BSP for the Sabrelite board (which can be found at http://blackberry.qnx.com/en/developers/bsp) to properly control the FEC. Without this patch, the BSP ethernet driver will never re-use FEC descriptors, as the unset ENET_BD_BDU flag will cause it to believe that the descriptors are still in use by the NIC, and it won't send any more data over the network. Note that Linux does not appear to use this field at all, and is unaffected by this patch.

For reference: on page 1192 of the i.MX 6DQ reference manual (Rev. 5, 06/2018), which can be found at https://www.nxp.com/products/processors-and-microcontrollers/arm-based-processors-and-mcus/i.mx-applications-processors/i.mx-6-processors/i.mx-6quad-processors-high-performance-3d-graphics-hd-video-arm-cortex-a9-core:i.MX6Q?&tab=Documentation_Tab&linkline=Application-Note the 'BDU' field is described as follows for the 'Enhanced transmit buffer descriptor': 'Last buffer descriptor update done. Indicates that the last BD data has been updated by uDMA. This field is written by the user (=0) and uDMA (=1).' The same description is used for the receive buffer descriptor.

Signed-off-by: Aaron Hill <aa1ronham@gmail.com>
Message-id: 20190805142417.10433-1-aaron.hill@alertinnovation.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
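A hedged sketch of the descriptor write-back, with the helper and type names assumed:

    static void imx_enet_finish_bd(IMXENETBufDesc *bd, dma_addr_t addr)
    {
        /* Mark "last BD data has been updated by uDMA" so drivers
         * (e.g. QNX's) see the descriptor as free for reuse. */
        bd->option |= ENET_BD_BDU;
        imx_enet_write_bd(bd, addr);   /* store it back to guest RAM */
    }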
-
Committed by Damien Hedde

Replace the zynq_slcr registers enum and macros using the hw/registerfields.h macros.

Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190729145654.14644-30-damien.hedde@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
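For illustration, the hw/registerfields.h conversion pattern looks like this (the LOCK register and its key field are examples, not the full patch):

    #include "hw/registerfields.h"

    REG32(LOCK, 0x004)        /* defines A_LOCK (offset) and R_LOCK (index) */
    FIELD(LOCK, KEY, 0, 16)   /* defines R_LOCK_KEY_{SHIFT,LENGTH,MASK} */

    /* Usage replaces hand-rolled shift/mask macros: */
    s->regs[R_LOCK] = FIELD_DP32(s->regs[R_LOCK], LOCK, KEY, 0x767b);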
-
Committed by Alex Bennée

While most features are now detected by probing the ID_* registers, kernels can (and do) use MIDR_EL1 for working out if they have to apply errata. This can trip up warnings in the kernel as it tries to work out if it should apply workarounds to features that don't actually exist in the reported CPU type. Avoid this problem by synthesising our own MIDR value.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190726113950.7499-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
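The MIDR_EL1 field layout is architecturally fixed, so a synthesized value is just a composition like the following (the field values here are placeholders, not the ones the patch chose):

    /* MIDR_EL1: [31:24] Implementer, [23:20] Variant,
     * [19:16] Architecture, [15:4] PartNum, [3:0] Revision. */
    uint32_t midr = (0x51u  << 24)   /* implementer */
                  | (0x0u   << 20)   /* variant */
                  | (0xfu   << 16)   /* architecture: "by ID registers" */
                  | (0x051u <<  4)   /* part number */
                  |  0x0u;           /* revision */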
-
Committed by Peter Maydell

Migration pull 2019-08-15

Marcel's vmxnet3 live migration fix (that breaks vmxnet3 compatibility but makes it work)
Error description improvements from Yury.
Multifd fixes from Ivan and Juan.
A load of small cleanups from Wei.
A small cleanup from Marc-André for a future patch.

# gpg: Signature made Wed 14 Aug 2019 19:00:39 BST
# gpg: using RSA key 45F5C71B4A0CB7FB977A9FA90516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>" [full]
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7

* remotes/dgilbert/tags/pull-migration-20190814a: (33 commits)
  migration: add some multifd traces
  migration: Make global sem_sync semaphore by channel
  migration: Add traces for multifd terminate threads
  qemu-file: move qemu_{get,put}_counted_string() declarations
  migration/postcopy: use mis->bh instead of allocating a QEMUBH
  migration: rename migration_bitmap_sync_range to ramblock_sync_dirty_bitmap
  migration: update ram_counters for multifd sync packet
  migration: add speed limit for multifd migration
  migration: add qemu_file_update_transfer interface
  migration: always initialise ram_counters for a new migration
  migration: remove unused field bytes_xfer
  hmp: Remove migration capabilities from "info migrate"
  migration/postcopy: use QEMU_IS_ALIGNED to replace host_offset
  migration/postcopy: simplify calculation of run_start and fixup_start_addr
  migration/postcopy: make PostcopyDiscardState a static variable
  migration: extract ram_load_precopy
  migration: return -EINVAL directly when version_id mismatch
  migration: equation is more proper than and to check LOADVM_QUIT
  migration: just pass RAMBlock is enough
  migration: use migration_in_postcopy() to check POSTCOPY_ACTIVE
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Peter Maydell

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- 15 Aug 2019, 18 commits
-
-
Committed by Peter Maydell

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Committed by Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190814020218.1868-6-quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Juan Quintela

This makes things easier to debug: when you are waiting for all threads to arrive at a semaphore, you know which one you are waiting for.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190814020218.1868-3-quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Juan Quintela

Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190814020218.1868-2-quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Marc-André Lureau

Move migration helpers for strings under include/, so they can be used outside of migration/.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190808150325.21939-2-marcandre.lureau@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Wei Yang

On the incoming side, migration quits in either precopy or postcopy. It is safe to use mis->bh for both instead of allocating a dedicated QEMUBH for postcopy.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190805053146.32326-1-richardw.yang@linux.intel.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Wei Yang

Rename for a better understanding of the code.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190808033155.30162-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Ivan Ren

Multifd sync sends MULTIFD_FLAG_SYNC flag info to the destination; add these bytes to the ram_counters record.

Signed-off-by: Ivan Ren <ivanren@tencent.com>
Suggested-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <1564464816-21804-4-git-send-email-ivanren@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Ivan Ren

Limit the speed of multifd migration through the common QEMUFile speed-limitation mechanism.

Signed-off-by: Ivan Ren <ivanren@tencent.com>
Message-Id: <1564464816-21804-3-git-send-email-ivanren@tencent.com>
Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Ivan Ren

Add qemu_file_update_transfer to update just bytes_xfer for speed limitation. This will be used for further migration features such as multifd migration.

Signed-off-by: Ivan Ren <ivanren@tencent.com>
Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1564464816-21804-2-git-send-email-ivanren@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
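The interface is tiny; a sketch consistent with the description (the real patch may differ in details):

    /* Account len bytes against the migration rate limit without
     * actually writing them through this QEMUFile. */
    void qemu_file_update_transfer(QEMUFile *f, int64_t len)
    {
        f->bytes_xfer += len;
    }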
-
Committed by Ivan Ren

This patch fixes a multifd migration bug in the migration speed calculation. The problem can be reproduced as follows:
1. start a vm and apply a heavy memory write stress to prevent the vm from being successfully migrated to the destination
2. begin a migration with multifd
3. migrate for a long time [actually, this can be measured by transferred bytes]
4. cancel the migration
5. begin a new migration with multifd; the migration will directly run into the migration_completion phase

The reason is as follows. Migration updates bandwidth and s->threshold_size in migration_update_counters after BUFFER_DELAY time:

    current_bytes = migration_total_bytes(s);
    transferred = current_bytes - s->iteration_initial_bytes;
    time_spent = current_time - s->iteration_start_time;
    bandwidth = (double)transferred / time_spent;
    s->threshold_size = bandwidth * s->parameters.downtime_limit;

In multifd migration, migration_total_bytes returns qemu_ftell(s->to_dst_file) + ram_counters.multifd_bytes. s->iteration_initial_bytes is initialized to 0 at every new migration, but ram_counters is a global variable, so the data of previous migrations accumulates in it. If ram_counters.multifd_bytes is big enough, it may cause pending_size >= s->threshold_size to become false in migration_iteration_run after the first migration_update_counters call.

Signed-off-by: Ivan Ren <ivanren@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
Suggested-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <1564741121-1840-1-git-send-email-ivanren@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Wei Yang

MigrationState->bytes_xfer is only ever set to 0, in migrate_init(). Remove this unnecessary field.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190402003106.17614-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Wei Yang

With the growth of migration capabilities, it is no longer proper to display them in "info migrate". Users are recommended to use "info migrate_capabilities" to list them.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Suggested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190806003645.8426-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Wei Yang

Use QEMU_IS_ALIGNED for the check; it is more consistent with the other alignment calculations.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190806004648.8659-3-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
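For reference, the alignment helpers in include/qemu/osdep.h read:

    #define QEMU_ALIGN_DOWN(n, m) ((n) / (m) * (m))
    #define QEMU_ALIGN_UP(n, m)   QEMU_ALIGN_DOWN((n) + (m) - 1, (m))
    #define QEMU_IS_ALIGNED(n, m) (((n) % (m)) == 0)

so a check previously written as a bare "run_start % host_ratio" remainder becomes the self-describing QEMU_IS_ALIGNED(run_start, host_ratio).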
-
Committed by Wei Yang

The purpose of the calculation is to find a HostPage which is partially dirty.
* fixup_start_addr points to the start of the HostPage to discard
* run_start points to the next HostPage to check

In the middle stage, there are two cases for run_start:
* aligned with a HostPage boundary: not partially dirty
* not aligned: partially dirty

When it is aligned, no work or calculation is necessary: run_start already points to the start of the next HostPage and is ready to continue. When it is not aligned, the calculation can be simplified to:
* fixup_start_addr = QEMU_ALIGN_DOWN(run_start, host_ratio)
* run_start = QEMU_ALIGN_UP(run_start, host_ratio)

By doing so, run_start always points to the next HostPage to check and fixup_start_addr always points to the HostPage to discard, as the sketch after this message shows.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190806004648.8659-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
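Putting the two formulas together, the middle stage reduces to a sketch like this (the variable declarations are assumed from the surrounding function):

    if (!QEMU_IS_ALIGNED(run_start, host_ratio)) {
        /* partially dirty HostPage: discard it whole, then resume
         * checking from the next HostPage boundary */
        fixup_start_addr = QEMU_ALIGN_DOWN(run_start, host_ratio);
        run_start = QEMU_ALIGN_UP(run_start, host_ratio);
    }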
-
Committed by Wei Yang

In postcopy-ram.c, we provide three functions to discard a certain RAMBlock range:
* postcopy_discard_send_init()
* postcopy_discard_send_range()
* postcopy_discard_send_finish()

Currently, we allocate/deallocate a PostcopyDiscardState for each RAMBlock when sending discard information to the destination. This is not necessary, and the same data area can be reused for each RAMBlock. This patch makes PostcopyDiscardState a static variable. By doing so we:
1) avoid memory allocation and deallocation in the system
2) avoid potential failure of memory allocation
3) hide some details from the users

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190724010721.2146-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Wei Yang

After this cleanup, it is clear to the reader that there are two cases in ram_load:
* precopy
* postcopy

And it is no longer necessary to check postcopy_running on each iteration for precopy.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190725002023.2335-3-richardw.yang@linux.intel.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Committed by Wei Yang

It is not reasonable to continue when version_id mismatches.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190722075339.25121-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-