  1. 16 October 2009, 1 commit
    • md/async: don't pass a memory pointer as a page pointer. · 5dd33c9a
      Committed by NeilBrown
      md/raid6 passes a list of 'struct page *' to the async_tx routines,
      which then either DMA map them for offload, or take the page_address
      for CPU based calculations.
      
      For RAID6 we sometimes leave 'blanks' in the list of pages.
      For CPU-based calculations, we want to treat these as a page of zeros.
      For offloaded calculations, we simply don't pass a page to the
      hardware.
      
      Currently the 'blanks' are encoded as a pointer to
      raid6_empty_zero_page.  This is a 4096 byte memory region, not a
      'struct page'.  This is mostly handled correctly but is rather ugly.
      
      So change the code to pass and expect a NULL pointer for the blanks.
      When taking page_address of a page, we need to check for a NULL and
      in that case use raid6_empty_zero_page.
      Signed-off-by: NeilBrown <neilb@suse.de>
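      The NULL-for-blank convention the commit describes can be sketched in plain userspace C. This is a minimal illustration, not the kernel code: `empty_zero_page`, `src_addr()`, and `xor_srcs()` are hypothetical stand-ins, with only the idea of substituting a zero page on a NULL check taken from the commit.

      ```c
      #include <stddef.h>
      #include <string.h>

      #define PAGE_SIZE 4096

      /* Hypothetical stand-in for the kernel's raid6_empty_zero_page. */
      static unsigned char empty_zero_page[PAGE_SIZE];

      /* Resolve a source slot: NULL means "blank", so substitute the
       * zero page for CPU-based work, mirroring the NULL check the
       * commit adds around page_address(). */
      static const unsigned char *src_addr(const unsigned char *page)
      {
          return page ? page : empty_zero_page;
      }

      /* XOR a list of sources (some possibly NULL blanks) into dest. */
      static void xor_srcs(unsigned char *dest,
                           const unsigned char **srcs, int nsrcs)
      {
          memset(dest, 0, PAGE_SIZE);
          for (int i = 0; i < nsrcs; i++) {
              const unsigned char *s = src_addr(srcs[i]);
              for (size_t j = 0; j < PAGE_SIZE; j++)
                  dest[j] ^= s[j];
          }
      }
      ```

      An offload path would instead skip NULL entries entirely rather than handing the hardware a fake page.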
  2. 22 September 2009, 1 commit
  3. 09 September 2009, 1 commit
    • dmaengine: add fence support · 0403e382
      Committed by Dan Williams
      Some engines optimize operation by reading ahead in the descriptor chain
      such that descriptor2 may start execution before descriptor1 completes.
      If descriptor2 depends on the result from descriptor1 then a fence is
      required (on descriptor2) to disable this optimization.  The async_tx
      api could implicitly identify dependencies via the 'depend_tx'
      parameter, but that would constrain cases where the dependency chain
      only specifies a completion order rather than a data dependency.  So,
      provide an ASYNC_TX_FENCE to explicitly identify data dependencies.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
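      The rule the commit states, fence only true data dependencies so that ordering-only chains can still be read ahead, can be modeled in a few lines of C. Everything here except the flag name ASYNC_TX_FENCE is invented for illustration: `struct desc`, `reads_prev_result`, and `submit_after()` are not the dmaengine API.

      ```c
      #include <stdbool.h>

      /* Only the flag name comes from the commit; its value here is
       * arbitrary for the sketch. */
      enum { ASYNC_TX_FENCE = 1 << 0 };

      /* Toy descriptor: a dependency may be a real data dependency or
       * merely a completion-order constraint. */
      struct desc {
          unsigned long flags;
          bool reads_prev_result; /* consumes predecessor's output */
      };

      /* Fence only when the descriptor reads its predecessor's result,
       * leaving read-ahead enabled for ordering-only dependencies. */
      static void submit_after(struct desc *d)
      {
          if (d->reads_prev_result)
              d->flags |= ASYNC_TX_FENCE;
      }
      ```

      This is why the commit rejects inferring fences from 'depend_tx' alone: that parameter cannot distinguish the two kinds of dependency.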
  4. 30 August 2009, 1 commit
    • async_tx: add support for asynchronous RAID6 recovery operations · 0a82a623
      Committed by Dan Williams
       async_raid6_2data_recov() recovers two data disk failures
      
       async_raid6_datap_recov() recovers a data disk and the P disk
      
      These routines are a port of the synchronous versions found in
      drivers/md/raid6recov.c.  The primary difference is breaking out the xor
      operations into separate calls to async_xor.  Two helper routines are
      introduced to perform scalar multiplication where needed.
      async_sum_product() multiplies two sources by scalar coefficients and
      then sums (xor) the result.  async_mult() simply multiplies a single
      source by a scalar.
      
      This implementation also includes, in contrast to the original
      synchronous-only code, special case handling for the 4-disk and 5-disk
      array cases.  In these situations the default N-disk algorithm will
      present 0-source or 1-source operations to dma devices.  To cover for
      dma devices where the minimum source count is 2 we implement 4-disk and
      5-disk handling in the recovery code.
      
      [ Impact: asynchronous raid6 recovery routines for 2data and datap cases ]
      
      Cc: Yuri Tikhonov <yur@emcraft.com>
      Cc: Ilya Yanok <yanok@emcraft.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: David Woodhouse <David.Woodhouse@intel.com>
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      
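      The scalar multiplication these helpers perform is arithmetic in GF(2^8) over the RAID6 generator polynomial 0x11d. Below is a userspace sketch of that per-byte math, assuming nothing beyond the commit message: `gfmul()` and `sum_product()` are illustrative stand-ins for what async_mult() and async_sum_product() compute, not the kernel routines themselves.

      ```c
      #include <stddef.h>

      /* GF(2^8) multiply, reducing by the RAID6 polynomial x^8 + x^4 +
       * x^3 + x^2 + 1 (0x11d): shift-and-add with conditional reduction. */
      static unsigned char gfmul(unsigned char a, unsigned char b)
      {
          unsigned char p = 0;
          while (b) {
              if (b & 1)
                  p ^= a;
              b >>= 1;
              /* If a's high bit is set, shifting overflows degree 7,
               * so XOR in the low bits of the reduction polynomial. */
              a = (a << 1) ^ ((a & 0x80) ? 0x1d : 0);
          }
          return p;
      }

      /* Per-byte model of async_sum_product():
       * dest[i] = amul*a[i] ^ bmul*b[i] in GF(2^8). */
      static void sum_product(unsigned char *dest,
                              const unsigned char *a, unsigned char amul,
                              const unsigned char *b, unsigned char bmul,
                              size_t len)
      {
          for (size_t i = 0; i < len; i++)
              dest[i] = gfmul(amul, a[i]) ^ gfmul(bmul, b[i]);
      }
      ```

      async_mult() corresponds to the single-source case, one gfmul() per byte; the 2data and datap recovery paths then combine these with XOR passes.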