1. 16 Aug 2019, 1 commit
    • Include sysemu/sysemu.h a lot less · 46517dd4
      Authored by Markus Armbruster
      In my "build everything" tree, changing sysemu/sysemu.h triggers a
      recompile of some 5400 out of 6600 objects (not counting tests and
      objects that don't depend on qemu/osdep.h).
      
      hw/qdev-core.h includes sysemu/sysemu.h since recent commit e965ffa7
      "qdev: add qdev_add_vm_change_state_handler()".  This is a bad idea:
      hw/qdev-core.h is widely included.
      
      Move the declaration of qdev_add_vm_change_state_handler() to
      sysemu/sysemu.h, and drop the problematic include from hw/qdev-core.h.
      
      Touching sysemu/sysemu.h now recompiles some 1800 objects.
      qemu/uuid.h also drops from 5400 to 1800.  A few more headers show
      smaller improvement: qemu/notify.h drops from 5600 to 5200,
      qemu/timer.h from 5600 to 4500, and qapi/qapi-types-run-state.h from
      5500 to 5000.
      
      Cc: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Message-Id: <20190812052359.30071-28-armbru@redhat.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  2. 24 Jun 2019, 1 commit
  3. 19 Mar 2019, 1 commit
  4. 14 Jan 2019, 1 commit
  5. 11 Jan 2019, 1 commit
    • qemu/queue.h: leave head structs anonymous unless necessary · b58deb34
      Authored by Paolo Bonzini
      Most list head structs need not be given a name.  In most cases the
      name is given just in case one is going to use QTAILQ_LAST, QTAILQ_PREV
      or reverse iteration, but this does not apply to lists of other kinds,
      and even for QTAILQ in practice this is only rarely needed.  In addition,
      we will soon reimplement those macros completely so that they do not
      need a name for the head struct.  So clean up everything, not giving a
      name except in the rare case where it is necessary.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  6. 02 Jul 2018, 1 commit
  7. 01 Jun 2018, 2 commits
  8. 18 Dec 2017, 1 commit
  9. 19 Sep 2017, 2 commits
    • General warn report fixups · b62e39b4
      Authored by Alistair Francis
      Tidy up some of the warn_report() messages after having converted them
      to use warn_report().
      Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <9cb1d23551898c9c9a5f84da6773e99871285120.1505158760.git.alistair.francis@xilinx.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • Convert multi-line fprintf() to warn_report() · 8297be80
      Authored by Alistair Francis
      Convert all the multi-line uses of fprintf(stderr, "warning:"..."\n"...
      to use warn_report() instead. This helps standardise on a single
      method of printing warnings to the user.
      
      All of the warnings were changed using these commands:
        find ./* -type f -exec sed -i \
          'N; {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
          {} +
        find ./* -type f -exec sed -i \
          'N;N; {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
          {} +
        find ./* -type f -exec sed -i \
          'N;N;N; {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
          {} +
        find ./* -type f -exec sed -i \
          'N;N;N;N {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
          {} +
        find ./* -type f -exec sed -i \
          'N;N;N;N;N {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
          {} +
        find ./* -type f -exec sed -i \
          'N;N;N;N;N;N {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
          {} +
        find ./* -type f -exec sed -i \
          'N;N;N;N;N;N;N; {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
          {} +
      
      Indentation fixed up manually afterwards.
      
      Some of the lines were manually edited to reduce the line length to below
      80 characters. Some of the lines with newlines in the middle of the
      string were also manually edited to avoid checkpatch errors.
      
      The #include lines were manually updated to allow the code to compile.
      
      Several of the warning messages can be improved after this patch; to
      keep this patch mechanical, those improvements have been deferred to a
      later patch.
      Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Kevin Wolf <kwolf@redhat.com>
      Cc: Max Reitz <mreitz@redhat.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Anthony Perard <anthony.perard@citrix.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Eduardo Habkost <ehabkost@redhat.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Yongbok Kim <yongbok.kim@imgtec.com>
      Cc: Cornelia Huck <cohuck@redhat.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Alexander Graf <agraf@suse.de>
      Cc: Jason Wang <jasowang@redhat.com>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: Gerd Hoffmann <kraxel@redhat.com>
      Acked-by: Cornelia Huck <cohuck@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <5def63849ca8f551630c6f2b45bcb1c482f765a6.1505158760.git.alistair.francis@xilinx.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  10. 22 Jul 2017, 2 commits
    • xen-mapcache: Fix the bug when overlapping emulated DMA operations may cause inconsistency in guest memory mappings · 7fb394ad
      Authored by Alexey G
      
      Under certain circumstances normal xen-mapcache functioning may be broken
      by the guest's actions. This may lead to either QEMU performing exit() due
      to a caught bad pointer (and with the QEMU process gone the guest domain
      simply appears hung afterwards) or actual use of the incorrect pointer
      inside QEMU's address space -- a write to unmapped memory is possible.
      The bug is hard to reproduce on an i440 machine as multiple DMA sources
      are required (though it's possible in theory, using multiple emulated
      devices), but can be reproduced somewhat easily on a Q35 machine using an
      emulated AHCI controller -- each NCQ queue command slot may be used as an
      independent DMA source, e.g. using the READ FPDMA QUEUED command, so a
      single storage device on the AHCI controller port is enough to produce
      multiple DMAs (up to 32). A detailed description of the issue follows.
      
      Xen-mapcache provides the ability to map parts of guest memory into
      QEMU's own address space to work with.
      
      There are two types of cache lookups:
       - translating a guest physical address into a pointer in QEMU's address
         space, mapping a part of guest domain memory if necessary (while trying
         to reduce the number of such (re)mappings to a minimum)
       - translating a QEMU's pointer back to its physical address in guest RAM
      
      These lookups are managed via two linked-lists of structures.
      MapCacheEntry is used for forward cache lookups, while MapCacheRev -- for
      reverse lookups.
      
      Every guest physical address is broken down into 2 parts:
          address_index  = phys_addr >> MCACHE_BUCKET_SHIFT;
          address_offset = phys_addr & (MCACHE_BUCKET_SIZE - 1);
      
      MCACHE_BUCKET_SHIFT depends on the system (32/64-bit) and is equal to 20
      on a 64-bit system (which is assumed for the rest of this description).
      Basically, this means that we deal with 1 MB chunks and offsets within
      those 1 MB chunks. All mappings are created with 1MB granularity, i.e.
      1MB/2MB/3MB etc. Most DMA transfers are typically less than 1MB; however,
      if a transfer crosses any 1MB border(s), the nearest larger mapping size
      will be used, so e.g. a 512-byte DMA transfer with the start address
      700FFF80h will actually require a 2MB range.
      
      The current implementation assumes that MapCacheEntries are unique for a
      given address_index and size pair and that a single MapCacheEntry may be
      reused by multiple requests -- in this case the 'lock' field will be
      larger than 1. On the other hand, each requested guest physical address
      (with the 'lock' flag) is described by its own MapCacheRev. So there may
      be multiple MapCacheRev entries corresponding to a single MapCacheEntry.
      The xen-mapcache code uses MapCacheRev entries to retrieve the
      address_index & size pair, which is in turn used to find the related
      MapCacheEntry. The 'lock' field within a MapCacheEntry structure is
      actually a reference counter: the number of corresponding MapCacheRev
      entries.
      
      The bug lies in the guest's ability to indirectly manipulate the
      xen-mapcache MapCacheEntry list via a special sequence of DMA operations,
      typically for storage devices. In order to trigger the bug, the guest
      needs to issue DMA operations in a specific order and timing. Although
      xen-mapcache is protected by a mutex, this doesn't help in this case, as
      the bug is not due to a race condition.
      
      Suppose we have 3 DMA transfers, namely A, B and C, where
      - transfer A crosses 1MB border and thus uses a 2MB mapping
      - transfers B and C are normal transfers within 1MB range
      - and all 3 transfers belong to the same address_index
      
      In this case, if all these transfers are to be executed one-by-one
      (without overlaps), no special treatment necessary -- each transfer's
      mapping lock will be set and then cleared on unmap before starting
      the next transfer.
      The situation changes when DMA transfers overlap in time, ex. like this:
      
        |===== transfer A (2MB) =====|
      
                    |===== transfer B (1MB) =====|
      
                                |===== transfer C (1MB) =====|
       time --->
      
      In this situation the following sequence of actions happens:
      
      1. transfer A creates a mapping to 2MB area (lock=1)
      2. transfer B (1MB) tries to find available mapping but cannot find one
         because transfer A is still in progress, and it has 2MB size + non-zero
         lock. So transfer B creates another mapping -- same address_index,
         but 1MB size.
      3. transfer A completes, making the 1st mapping entry available by
         setting its lock to 0
      4. transfer C starts and tries to find an available mapping entry and
         sees that the 1st entry has lock=0, so it uses this entry but remaps
         the mapping to a 1MB size
      5. transfer B completes and by this time
        - there are two locked entries in the MapCacheEntry list with the SAME
          values for both address_index and size
        - the entry for transfer B actually resides farther in the list while
          transfer C's entry comes first
      6. xen_ram_addr_from_mapcache() for transfer B gets correct address_index
         and size pair from corresponding MapCacheRev entry, but then it starts
         looking for MapCacheEntry with these values and finds the first entry
         -- which belongs to transfer C.
      
      At this point there may be the following possible (bad) consequences:
      
      1. xen_ram_addr_from_mapcache() will use a wrong entry->vaddr_base value
         in this statement:
      
         raddr = (reventry->paddr_index << MCACHE_BUCKET_SHIFT) +
             ((unsigned long) ptr - (unsigned long) entry->vaddr_base);
      
      resulting in an incorrect raddr value returned from the function. The
      (ptr - entry->vaddr_base) expression may produce both positive and
      negative numbers, and its actual value may differ greatly as many
      map/unmap operations take place. If the value lands beyond the guest RAM
      limits, a "Bad RAM offset" error will be triggered and logged, followed
      by exit() in QEMU.
      
      2. If the raddr value doesn't exceed guest RAM boundaries, the same
      sequence of actions will be performed for xen_invalidate_map_cache_entry()
      on DMA unmap, resulting in a wrong MapCacheEntry being unmapped while the
      DMA operation which uses it is still active. The above example must be
      extended by one more DMA transfer in order to allow unmapping, as the
      first mapping in the list is sort of resident.
      
      The patch modifies the way MapCacheEntries are added to the list,
      avoiding duplicates.
      Signed-off-by: Alexey Gerasimenko <x1917x@gmail.com>
      Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
    • 9e6bdb92
  11. 19 Jul 2017, 3 commits
  12. 17 May 2017, 1 commit
    • xen/mapcache: store dma information in revmapcache entries for debugging · 1ff7c598
      Authored by Stefano Stabellini
      The Xen mapcache is able to create long term mappings, called "locked"
      mappings. The third parameter of the xen_map_cache call specifies
      whether a mapping is a "locked" mapping.
      
      From the QEMU point of view there are two kinds of long term mappings:
      
      [a] device memory mappings, such as option roms and video memory
      [b] dma mappings, created by dma_memory_map & friends
      
      After certain operations, ballooning a VM in particular, Xen kindly asks
      QEMU to destroy all mappings. However, [a] mappings are certainly
      present and cannot be removed. That's not a problem as they are not
      affected by ballooning. The *real* problem is that if there are any
      mappings of type [b], any outstanding dma operations could fail. This is
      a known shortcoming. In other words, when Xen asks QEMU to destroy all
      mappings, it is an error if any [b] mappings exist.
      
      However today we have no way of distinguishing [a] from [b]. Because of
      that, we cannot even print a decent warning.
      
      This patch introduces a new "dma" bool field to MapCacheRev entries, to
      remember if a given mapping is for dma or is a long term device memory
      mapping. When xen_invalidate_map_cache is called, we print a warning if
      any [b] mappings exist. We ignore [a] mappings.
      
      Mappings created by qemu_map_ram_ptr are assumed to be [a], while
      mappings created by address_space_map->qemu_ram_ptr_length are assumed
      to be [b].
      
      The goal of the patch is to make debugging and system understanding
      easier.
      Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: Anthony PERARD <anthony.perard@citrix.com>
  13. 26 Apr 2017, 1 commit
  14. 01 Feb 2017, 1 commit
  15. 17 Jan 2017, 1 commit
  16. 17 Jun 2016, 1 commit
  17. 29 Jan 2016, 1 commit
    • xen: Clean up includes · 21cbfe5f
      Authored by Peter Maydell
      Clean up includes so that osdep.h is included first and headers
      which it implies are not included manually.
      
      This commit was created with scripts/clean-includes.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Message-id: 1453832250-766-14-git-send-email-peter.maydell@linaro.org
  18. 27 Jan 2016, 1 commit
    • xen: Switch uses of xc_map_foreign_{pages,bulk} to use libxenforeignmemory API. · e0cb42ae
      Authored by Ian Campbell
      In Xen 4.7 we are refactoring parts of libxenctrl into a number of
      separate libraries which will provide backward and forward API and ABI
      compatibility.
      
      One such library will be libxenforeignmemory which provides access to
      privileged foreign mappings and which will provide an interface
      equivalent to xc_map_foreign_{pages,bulk}.
      
      The new xenforeignmemory_map() function behaves like
      xc_map_foreign_pages() when the err argument is NULL and like
      xc_map_foreign_bulk() when err is non-NULL, which maps into the shim
      here onto checking err == NULL and calling the appropriate old
      function.
      
      Note that xenforeignmemory_map() takes the number of pages before the
      arrays themselves, in order to support potentially future use of
      variable-length-arrays in the prototype (in the future, when Xen's
      baseline toolchain requirements are new enough to ensure VLAs are
      supported).
      
      In preparation for adding support for libxenforeignmemory add support
      to the <=4.0 and <=4.6 compat code in xen_common.h to allow us to
      switch to using the new API. These shims will disappear for versions
      of Xen which include libxenforeignmemory.
      
      Since libxenforeignmemory will have its own handle type but for <= 4.6
      the functionality is provided by using a libxenctrl handle we
      introduce a new global xen_fmem alongside the existing xen_xc. In fact
      we make xen_fmem a pointer to the existing xen_xc, which then works
      correctly with both <=4.0 (xc handle is an int) and <=4.6 (xc handle
      is a pointer). In the latter case xen_fmem is actually a double
      indirect pointer, but it all falls out in the wash.
      
      Unlike libxenctrl libxenforeignmemory has an explicit unmap function,
      rather than just specifying that munmap should be used, so the unmap
      paths are updated to use xenforeignmemory_unmap, which is a shim for
      munmap on these versions of Xen. The mappings in xen-hvm.c do not
      appear to be unmapped (which makes sense for a qemu-dm process).
      
      In fb_disconnect this results in a change from simply mmap()ing over the
      existing mapping (with an implicit munmap) to explicitly unmapping with
      xenforeignmemory_unmap and then mapping the required anonymous memory
      in the same hole. I don't think this is a problem since any other
      thread which was racily touching this region would already be running
      the risk of hitting the mapping halfway through the call. If this is
      thought to be a problem then we could consider adding an extra API to
      the libxenforeignmemory interface to replace a foreign mapping with
      anonymous shared memory, but I'd prefer not to.
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  19. 11 Sep 2015, 1 commit
  20. 20 Jan 2015, 2 commits
  21. 07 Jul 2014, 1 commit
  22. 09 Apr 2013, 1 commit
    • hw: move headers to include/ · 0d09e41a
      Authored by Paolo Bonzini
      Many of these should be cleaned up with proper qdev-/QOM-ification.
      Right now there are many catch-all headers in include/hw/ARCH depending
      on cpu.h, and this makes it necessary to compile these files per-target.
      However, fixing this does not belong in these patches.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  23. 03 Apr 2013, 2 commits
  24. 19 Dec 2012, 2 commits
  25. 23 Oct 2012, 1 commit
    • Rename target_phys_addr_t to hwaddr · a8170e5e
      Authored by Avi Kivity
      target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are
      reserved) and its purpose doesn't match the name (most target_phys_addr_t
      addresses are not target specific).  Replace it with a finger-friendly,
      standards-conformant hwaddr.
      
      Outstanding patchsets can be fixed up with the command
      
        git rebase -i --exec "find . -name '*.[ch]' \
                              | xargs sed -i 's/target_phys_addr_t/hwaddr/g'" origin
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
  26. 22 Aug 2012, 1 commit
  27. 14 Apr 2012, 2 commits
  28. 20 Mar 2012, 1 commit
  29. 14 Jan 2012, 1 commit
  30. 05 Dec 2011, 1 commit
  31. 09 Sep 2011, 1 commit
    • xen-mapcache: Fix rlimit set size. · 56c119e5
      Authored by Anthony PERARD
      Previously, the address space soft limit was set to mcache_max_size. So,
      before mcache_max_size was reached by the mapcache, QEMU was killed for
      overuse of the virtual address space.
      
      This patch fixes that by setting the soft limit to the maximum QEMU can
      have. So the soft and hard limits are always set to RLIM_INFINITY if
      QEMU is privileged.
      
      In case QEMU is not run as root and the limit is too low, the maximum
      mapcache size will be set to rlim_max - 80MB, because QEMU was observed
      to use about 75MB more than the maximum mapcache size in several
      empirical tests.
      Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>