1. 24 Nov, 2014 11 commits
  2. 22 Nov, 2014 5 commits
    • libxl: destroy domain in migration finish phase on failure · 42874fa4
      Jim Fehlig committed
      This patch contains three domain cleanup improvements in the migration
      finish phase, ensuring a domain is properly disposed when a failure is
      detected or the migration is cancelled.
      
      The check for virDomainObjIsActive is moved to libxlDomainMigrationFinish,
      where cleanup can occur if migration failed and the domain is inactive.
      
      The 'cleanup' label was misplaced in libxlDomainMigrationFinish, causing
      a migrated domain to remain in the event of an error or cancelled migration.
      
      In cleanup, the domain was not removed from the driver's list of domains.
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
    • libxl: start domain paused on migration dst · 60386825
      Jim Fehlig committed
      During the perform phase of migration, the domain is started on
      the dst host in a running state if VIR_MIGRATE_PAUSED flag is not
      specified.  In the finish phase, the domain is also unpaused if
      VIR_MIGRATE_PAUSED flag is unset.  I've noticed this second unpause
      fails if the domain was already unpaused following the perform phase.
      
      This patch changes the perform phase to always start the domain
      paused, and defers unpausing, if requested, to the finish phase.
      Unpausing should occur in the finish phase anyhow, where the domain
      can be properly destroyed if the perform phase fails and migration
      is cancelled.
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
    • libxl: acquire job in migration finish phase · a1f38951
      Jim Fehlig committed
      Moving data reception of the perform phase of migration to a
      thread introduces a race with the finish phase, where checking
      if the domain is active races with the thread finishing the
      perform phase.  The race is easily solved by acquiring a job in
      the finish phase, which must wait for the perform phase job to
      complete.
      
      While wrapping the finish phase in a job, I noticed the virDomainObj
      was being unlocked in a callee, libxlDomainMigrationFinish.  Move
      the unlocking to libxlDomainMigrateFinish3Params, where the lock
      is acquired.
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
    • libxl: Receive migration data in a thread · cb88d433
      Jim Fehlig committed
      The libxl driver receives migration data within an IO callback invoked
      by the event loop, effectively disabling the event loop while migration
      occurs.
      
      This patch moves receiving of the migration data to a thread.  The
      incoming connection is still accepted in the IO callback, but control
      is immediately returned to the event loop after spawning the thread.
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
    • libxl: Allow libxl to find pygrub binary. · d70a51d5
      Ian Campbell committed
      Specifying an explicit path to pygrub (e.g. BINDIR "/pygrub") only works if
      Xen and libvirt happen to be installed to the same prefix. A more flexible
      approach is to simply specify "pygrub" which will cause libxl to use the
      correct path which it knows (since it is built with the same prefix as pygrub).
      
      This is particularly problematic in the Debian packaging, since the Debian Xen
      package relocates pygrub into a libexec dir, however I think this change makes
      sense upstream.
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
  3. 21 Nov, 2014 21 commits
  4. 20 Nov, 2014 3 commits
    • build: fix build when not using dbus · be90aa00
      Eric Blake committed
      Commit c0e70221 breaks on a machine that lacks dbus headers:
      
      In file included from util/virdbus.c:24:0:
      util/virdbuspriv.h:31:3: error: unknown type name 'dbus_int16_t'
      
      * src/util/virdbuspriv.h (DBusBasicValue): Only provide fallback
      when dbus is compiled.
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • build: avoid 32-bit failure on older gcc · 0d516839
      Eric Blake committed
      On 32-bit platforms with old gcc (hello RHEL 5 gcc 4.1.2), the
      build fails with:
      virsh-domain.c: In function 'cmdBlockCopy':
      virsh-domain.c:2172: warning: comparison is always false due to limited range of data type
      
      Adjust the code to silence the warning.
      
      * tools/virsh-domain.c (cmdBlockCopy): Pacify RHEL 5 gcc.
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • storage: Add thread to refresh for createVport · 512b8747
      John Ferlan committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1152382
      
      When libvirt creates the vport (VPORT_CREATE) for the vHBA, there isn't
      enough "time" between the creation and the running of the following
      backend->refreshPool after a backend->startPool in order to find the LUs.
      Population of LUs happens asynchronously when udevEventHandleCallback
      discovers the "new" vHBA port.  Creation of the infrastructure by udev
      is an iterative process, creating and discovering actual storage devices
      and adjusting the environment.
      
      Because of the time it takes to discover and set things up, the backend
      refreshPool call done after the startPool call will generally fail to
      find any devices. This leaves the newly started pool appearing empty when
      queried via 'vol-list' after startup. The "workaround" has always been
      to run pool-refresh after startup (or any time thereafter) in order to
      find the LUs. Depending on how quickly it is run after startup, this too
      may not find any LUs in the pool. Eventually though, given enough time
      and retries, it will find something if LUs exist for the vHBA.
      
      This patch adds a thread to be executed after the VPORT_CREATE which will
      attempt to find the LUs without requiring an external run of refresh-pool.
      It does this by waiting for 5 seconds and searching for the LUs. If any
      are found, then the thread completes; otherwise, it will retry once more
      in another 5 seconds.  If none are found in that second pass, the thread
      gives up.
      
      Things learned while investigating this... No need to try and fill the
      pool too quickly or too many times. Over the course of creation, the udev
      code may 'add', 'change', and 'delete' the same device. So if the refresh
      code runs and finds something, it may display it only to have a subsequent
      refresh appear to "lose" the device. The udev processing doesn't seem to
      have a way to indicate that it's all done with the creation processing of a
      newly found vHBA. Only the Lone Ranger has silver bullets to fix everything.