1. 23 March 2017 (2 commits)
    • blockjob: add block_job_start_shim · e3796a24
      Committed by John Snow
      The purpose of this shim is to allow us to pause pre-started jobs.
      The purpose of *that* is to allow us to buffer a pause request that
      will be able to take effect before the job ever does any work, allowing
      us to create jobs during a quiescent state (under which they will be
      automatically paused), then resuming the jobs after the critical section
      in any order, either:
      
      (1) -block_job_start
          -block_job_resume (via e.g. drained_end)
      
      (2) -block_job_resume (via e.g. drained_end)
          -block_job_start
      
      The problem that requires a startup wrapper is that a job must start
      in the busy=true state only the first time it is entered; all
      subsequent entries require busy to be false, and the toggling of this
      state is otherwise handled at existing pause and yield points.
      
      The wrapper simply allows us to mandate that a job can "start," set busy
      to true, then immediately pause only if necessary. We could avoid
      requiring a wrapper, but all jobs would need to do it, so it's been
      factored out here.
      Signed-off-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Message-id: 20170316212351.13797-2-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
      e3796a24
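The start-then-maybe-pause ordering described above can be sketched with a minimal mock. The `MockJob` type and helper names below are hypothetical stand-ins; the real job runs in a coroutine and honours pauses at `block_job_pause_point()`.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical miniature of the shim mechanism; field names simplified. */
typedef struct MockJob {
    bool busy;        /* must be true on first entry only */
    int  pause_count; /* >0 means a pause request is buffered */
    bool paused;
    int  work_done;
} MockJob;

/* Counterpart of block_job_pause_point(): honour a buffered pause
 * request, flipping busy off while the job is quiescent. */
static void mock_pause_point(MockJob *job)
{
    if (job->pause_count > 0) {
        job->busy = false;
        job->paused = true;
    }
}

/* Counterpart of block_job_start_shim(): the job "starts" busy,
 * then immediately pauses if a pause was buffered pre-start. */
static void mock_start(MockJob *job)
{
    job->busy = true;
    mock_pause_point(job);
    if (!job->paused) {
        job->work_done++;  /* stand-in for the job's main loop */
    }
}

static void mock_resume(MockJob *job)
{
    if (--job->pause_count == 0 && job->paused) {
        job->paused = false;
        job->busy = true;
        job->work_done++;  /* job proceeds after the pause */
    }
}
```

With this shape, a pause requested before `mock_start()` takes effect before any work happens, matching ordering (1) above.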
    • blockjob: avoid recursive AioContext locking · d79df2a2
      Committed by Paolo Bonzini
      Streaming or any other block job hangs when performed on a block device
      that has a non-default iothread.  This happens because the AioContext
      is acquired twice by block_job_defer_to_main_loop_bh and then released
      only once by BDRV_POLL_WHILE.  (Insert rants on recursive mutexes, which
      unfortunately are a temporary but necessary evil for iothreads at the
      moment).
      
      Luckily, the reason for the double acquisition is simple; the function
      acquires the AioContext for both the job iothread and the BDS iothread,
      in case the BDS iothread was changed while the job was running.  It
      is therefore enough to skip the second acquisition when the two
      AioContexts are one and the same.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Message-id: 1490118490-5597-1-git-send-email-pbonzini@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
      d79df2a2
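The fix can be illustrated with a toy lock-depth model. `Ctx` and the helper names below are stand-ins, not the QEMU AioContext API; the point is that the guard keeps acquire and release counts balanced when the two contexts are the same object.

```c
#include <assert.h>

/* Hypothetical stand-in for AioContext; only identity and depth matter. */
typedef struct Ctx { int lock_depth; } Ctx;

static void ctx_acquire(Ctx *c) { c->lock_depth++; }
static void ctx_release(Ctx *c) { c->lock_depth--; }

/* Acquire phase of the deferred BH: with the fix, the BDS context is
 * only acquired when it differs from the job's context. */
static void bh_acquire(Ctx *job_ctx, Ctx *bds_ctx)
{
    ctx_acquire(job_ctx);
    if (bds_ctx != job_ctx) {   /* the fix: skip the double acquire */
        ctx_acquire(bds_ctx);
    }
}

/* Release phase: one release per distinct context, matching what the
 * caller's polling loop does. */
static void bh_release(Ctx *job_ctx, Ctx *bds_ctx)
{
    if (bds_ctx != job_ctx) {
        ctx_release(bds_ctx);
    }
    ctx_release(job_ctx);
}
```

Without the identity check, the same-context case would end with `lock_depth == 1` after release, i.e. a lock acquired twice but released once, which is the hang described above.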
  2. 01 March 2017 (5 commits)
  3. 01 February 2017 (1 commit)
  4. 15 November 2016 (3 commits)
    • blockjob: add block_job_start · 5ccac6f1
      Committed by John Snow
      Instead of automatically starting jobs at creation time via backup_start
      et al, we'd like to return a job object pointer that can be started
      manually at a later point in time.
      
      For now, add the block_job_start mechanism and start the jobs
      automatically as we have been doing, with conversions job-by-job coming
      in later patches.
      
      Of note: cancellation of unstarted jobs will perform all the normal
      cleanup as if the job had started, particularly abort and clean. The
      only difference is that we will not emit any events, because the job
      never actually started.
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-id: 1478587839-9834-5-git-send-email-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
      5ccac6f1
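A minimal sketch of the create/start split and the unstarted-cancel behaviour described above. All names here (`MiniJob`, `mini_job_*`) are hypothetical; the real code routes abort/clean through the job driver's callbacks.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical miniature of a job whose creation and start are split. */
typedef struct MiniJob {
    bool started;
    bool cancelled;
    bool aborted;     /* abort callback ran */
    bool cleaned;     /* clean callback ran */
    bool event_sent;  /* completion event emitted */
} MiniJob;

/* Create returns a job object that has not run yet. */
static MiniJob *mini_job_create(MiniJob *storage)
{
    *storage = (MiniJob){0};
    return storage;
}

static void mini_job_start(MiniJob *job)
{
    job->started = true;
}

/* Cancelling an unstarted job still performs the normal abort and
 * clean steps, but emits no events because the job never ran. */
static void mini_job_cancel(MiniJob *job)
{
    job->cancelled = true;
    job->aborted = true;
    job->cleaned = true;
    job->event_sent = job->started;
}
```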
    • blockjob: add .clean property · e8a40bf7
      Committed by John Snow
      Cleaning up after we have deferred to the main thread but before the
      transaction has converged can be dangerous and result in deadlocks
      if the job cleanup invokes any BH polling loops.
      
      A job may attempt to begin cleaning up, but may induce another job to
      enter its cleanup routine. The second job, part of our same transaction,
      will block waiting for the first job to finish, so neither job may now
      make progress.
      
      To rectify this, allow jobs to register a cleanup operation that will
      always run, regardless of whether the job was in a transaction and
      regardless of whether the transaction job group completed successfully.
      
      Move sensitive cleanup to this callback instead, which is guaranteed
      to run only after the transaction has converged; this removes delicate
      timing constraints from said cleanup.
      
      Furthermore, in future patches these cleanup operations will be performed
      regardless of whether or not we actually started the job. Therefore,
      cleanup callbacks should essentially confine themselves to undoing create
      operations, e.g. setup actions taken in what is now backup_start.
      Reported-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Message-id: 1478587839-9834-3-git-send-email-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
      e8a40bf7
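The convergence rule for `.clean` can be modelled roughly like this. `TJob` and `txn_job_complete` are illustrative only, not the actual QEMU transaction code: the essential property is that no job's clean callback fires until every member of the group has completed.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical job with a deferred clean callback. */
typedef struct TJob {
    bool completed;
    bool cleaned;
    void (*clean)(struct TJob *);
} TJob;

static void do_clean(TJob *j) { j->cleaned = true; }

/* Mark one job complete; .clean callbacks only fire once the whole
 * transaction group has converged, never while a sibling may still
 * block on us. */
static void txn_job_complete(TJob *jobs, int n, int idx)
{
    jobs[idx].completed = true;
    for (int i = 0; i < n; i++) {
        if (!jobs[i].completed) {
            return;  /* transaction not yet converged */
        }
    }
    for (int i = 0; i < n; i++) {
        if (jobs[i].clean && !jobs[i].cleaned) {
            jobs[i].clean(&jobs[i]);
        }
    }
}
```

Running cleanup inline instead (before the `return` above) is exactly the deadlock-prone ordering the commit removes.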
    • blockjob: fix dead pointer in txn list · 1e93b9fb
      Committed by Vladimir Sementsov-Ogievskiy
      Though it is not intended to be reached under normal circumstances,
      if we do not gracefully deconstruct the transaction QLIST, we may wind
      up with stale pointers in the list.
      
      The rest of this series attempts to address the underlying issues,
      but this should fix list inconsistencies.
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Tested-by: John Snow <jsnow@redhat.com>
      Reviewed-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-id: 1478587839-9834-2-git-send-email-jsnow@redhat.com
      [Rewrote commit message. --js]
      Signed-off-by: Jeff Cody <jcody@redhat.com>
      1e93b9fb
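The unlink that keeps the list consistent looks roughly like this, with a hand-rolled doubly linked list standing in for QEMU's QLIST macros (`TxnJob` and `txn_*` are illustrative names):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal doubly linked list standing in for the transaction QLIST;
 * the point is that a departing job must be unlinked, not left as a
 * stale entry that siblings could dereference. */
typedef struct TxnJob {
    struct TxnJob *prev, *next;
} TxnJob;

typedef struct { TxnJob *head; } Txn;

static void txn_add(Txn *t, TxnJob *j)
{
    j->prev = NULL;
    j->next = t->head;
    if (t->head) {
        t->head->prev = j;
    }
    t->head = j;
}

/* Graceful deconstruction: unlink the job so no dead pointer remains
 * in the transaction list. */
static void txn_remove(Txn *t, TxnJob *j)
{
    if (j->prev) {
        j->prev->next = j->next;
    } else {
        t->head = j->next;
    }
    if (j->next) {
        j->next->prev = j->prev;
    }
    j->prev = j->next = NULL;
}
```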
  5. 01 November 2016 (5 commits)
  6. 31 October 2016 (1 commit)
  7. 28 October 2016 (1 commit)
  8. 07 October 2016 (1 commit)
  9. 06 September 2016 (1 commit)
  10. 13 July 2016 (6 commits)
  11. 29 June 2016 (1 commit)
  12. 20 June 2016 (5 commits)
  13. 16 June 2016 (1 commit)
  14. 26 May 2016 (4 commits)
  15. 19 May 2016 (1 commit)
    • blockjob: Don't set iostatus of target · 81e254dc
      Committed by Kevin Wolf
      When block job errors were introduced, we assigned the iostatus of the
      target BDS "just in case". The field has never been accessible for the
      user because the target isn't listed in query-block.
      
      Before we can allow the user to have a second BlockBackend on the
      target, we need to clean this up. If anything, we would want to set the
      iostatus for the internal BB of the job (which we can always do later),
      but certainly not for a separate BB which the job doesn't even use.
      
      As a nice side effect, this gets rid of another bs->blk use.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      81e254dc
  16. 09 February 2016 (1 commit)
    • blockjob: Fix hang in block_job_finish_sync · 794f0141
      Committed by Fam Zheng
      With a mirror job running on a virtio-blk dataplane disk, sending "q" to
      HMP will cause a dead loop in block_job_finish_sync.
      
      This is because aio_poll() only processes the AioContext of bs, which
      has no more work to do, while the main loop BH that is scheduled to
      set the job->completed flag is never processed.
      
      Fix this by adding a flag in BlockJob structure, to track which context
      to poll for the block job to make progress. Its value is set to true
      when block_job_coroutine_complete() is called, and is checked in
      block_job_finish_sync to determine which context to poll.
      Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Message-id: 1454379144-29807-1-git-send-email-famz@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      794f0141
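The context-selection flag can be sketched as follows. `PollJob` and the poll helpers are simulations of the two event loops, not QEMU calls; what matters is that the waiter consults the flag instead of always polling the bs context.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the fix: a flag records that completion was deferred to
 * the main loop, so the synchronous waiter polls the right context. */
typedef struct PollJob {
    bool deferred_to_main_loop;  /* set when the completion BH is scheduled */
    bool completed;
} PollJob;

/* One simulated poll iteration of each context. */
static void poll_bs_context(PollJob *job)   { (void)job; /* no work left here */ }
static void poll_main_context(PollJob *job) { job->completed = true; }

/* Mirrors the shape of block_job_finish_sync(): choose the context to
 * poll based on the flag. Returns 0 on completion, -1 if the iteration
 * budget runs out (i.e. the loop would have spun forever). */
static int finish_sync(PollJob *job, int max_iters)
{
    int iters = 0;
    while (!job->completed && iters++ < max_iters) {
        if (job->deferred_to_main_loop) {
            poll_main_context(job);
        } else {
            poll_bs_context(job);
        }
    }
    return job->completed ? 0 : -1;
}
```

With the flag never set, the loop burns its whole budget polling a context with no pending work, which is the dead loop the commit fixes.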
  17. 05 February 2016 (1 commit)
    • all: Clean up includes · d38ea87a
      Committed by Peter Maydell
      Clean up includes so that osdep.h is included first and headers
      which it implies are not included manually.
      
      This commit was created with scripts/clean-includes.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Message-id: 1454089805-5470-16-git-send-email-peter.maydell@linaro.org
      d38ea87a