1. 27 Nov 2012, 1 commit
  2. 04 Nov 2012, 1 commit
    • xen/blkback: persistent-grants fixes · cb5bd4d1
      Roger Pau Monne committed
      This patch contains fixes for the persistent-grants implementation, v2:

       * handle == 0 is a valid handle, so initialize grants in blkback by
         setting the handle to BLKBACK_INVALID_HANDLE instead of 0 (see the
         sketch after this list). Reported by Konrad Rzeszutek Wilk.

       * new_map is a boolean, so use "true" and "false" instead of 1 and 0.
         Reported by Konrad Rzeszutek Wilk.

       * blkfront announces the persistent-grants feature as
         feature-persistent-grants; use feature-persistent instead, which is
         consistent with blkback and the public Xen headers.

       * Add a consistency check in blkfront to make sure we don't try to
         access segments that have not been set.
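
      A minimal sketch of the first fix, assuming a simplified
      persistent-grant struct (BLKBACK_INVALID_HANDLE is blkback's real
      name for the sentinel; the struct layout here is illustrative):

          #include <xen/grant_table.h>

          #define BLKBACK_INVALID_HANDLE (~0)

          struct persistent_gnt_sketch {
              grant_handle_t handle;      /* 0 is a valid handle! */
          };

          static void init_persistent_gnt(struct persistent_gnt_sketch *gnt)
          {
              /* Mark "not mapped yet" explicitly instead of relying on 0. */
              gnt->handle = BLKBACK_INVALID_HANDLE;
          }
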
      Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
      [v1: The new_map int->bool had already been changed]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      cb5bd4d1
  3. 30 Oct 2012, 1 commit
    • xen/blkback: Persistent grant maps for xen blk drivers · 0a8704a5
      Roger Pau Monne committed
      This patch implements persistent grants for the xen-blk{front,back}
      mechanism. The effect of this change is to reduce the number of unmap
      operations performed, since they cause a (costly) TLB shootdown. This
      allows the I/O performance to scale better when a large number of VMs
      are performing I/O.
      
      Previously, the blkfront driver was supplied a bvec[] from the request
      queue. This was granted to dom0; dom0 performed the I/O and wrote
      directly into the grant-mapped memory and unmapped it; blkfront then
      removed foreign access for that grant. The cost of unmapping scales
      badly with the number of CPUs in Dom0. An experiment showed that when
      Dom0 has 24 VCPUs, and guests are performing parallel I/O to a
      ramdisk, the IPIs generated by the unmaps become a bottleneck at 5 guests
      (at which point 650,000 IOPS are being performed in total). If more
      than 5 guests are used, the performance declines. By 10 guests, only
      400,000 IOPS are being performed.
      
      This patch improves performance by only unmapping when the connection
      between blkfront and back is broken.
      
      On startup blkfront notifies blkback that it is using persistent
      grants, and blkback will do the same. If blkback is not capable of
      persistent mapping, blkfront will still use the same set of grants,
      since this is compatible with the previous protocol and reduces code
      complexity in blkfront.
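
      The handshake happens over xenstore. A minimal sketch of how blkback
      might read the frontend's announcement (the key name is from this
      series; the helper and variable names are illustrative):

          #include <xen/xenbus.h>

          static bool frontend_wants_persistent(struct xenbus_device *dev)
          {
              unsigned int pers_grants;

              /* A missing key means an older frontend: fall back to the
               * plain map/unmap protocol. */
              if (xenbus_scanf(XBT_NIL, dev->otherend,
                               "feature-persistent", "%u", &pers_grants) <= 0)
                  pers_grants = 0;

              return pers_grants != 0;
          }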
      
      To perform a read in persistent mode, blkfront uses a separate pool
      of pages that it maps to dom0. When a request comes in, blkfront
      transmutes the request so that blkback will write into one of these
      free pages. Blkback keeps note of which grefs it has already
      mapped. When a new ring request comes to blkback, it looks to see if
      it has already mapped that page. If so, it will not map it again. If
      the page hasn't been previously mapped, it is mapped now, and a record
      is kept of this mapping. Blkback proceeds as usual. When blkfront is
      notified that blkback has completed a request, it memcpy's from the
      shared memory into the supplied bvec. A record is kept that the
      {gref, page} tuple is mapped and not in flight.
      
      Writes are similar, except that the memcpy is performed from the
      supplied bvecs into the shared pages, before the request is put onto
      the ring.
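
      A minimal sketch of the completion-side copy for a read, assuming a
      single segment (the real driver walks the request's bio_vec segments
      and its shadow bookkeeping):

          #include <linux/bio.h>
          #include <linux/highmem.h>
          #include <linux/string.h>

          static void copy_segment_to_bvec(struct page *shared_page,
                                           struct bio_vec *bvec)
          {
              void *shared = kmap_atomic(shared_page);
              void *dst = kmap_atomic(bvec->bv_page);

              /* Read completion: the data landed in the persistently
               * granted page; copy it into the bvec the block layer
               * supplied. Writes copy in the opposite direction before
               * the request goes onto the ring. */
              memcpy(dst + bvec->bv_offset,
                     shared + bvec->bv_offset, bvec->bv_len);

              kunmap_atomic(dst);
              kunmap_atomic(shared);
          }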
      
      Blkback stores a mapping of grefs => {page mapped to by gref} in
      a red-black tree. As the grefs are not known a priori, and provide no
      guarantees on their ordering, we have to perform a search
      through this tree to find the page for every gref we receive. This
      operation takes O(log n) time in the worst case. In blkfront, grants
      are stored in a singly linked list.
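
      A minimal sketch of that O(log n) lookup with the kernel's rbtree
      API; the struct mirrors what the text describes, though the field
      names are assumptions:

          #include <linux/rbtree.h>
          #include <xen/grant_table.h>

          struct persistent_gnt {
              struct page *page;
              grant_ref_t gnt;            /* tree key: the grant reference */
              grant_handle_t handle;
              struct rb_node node;
          };

          static struct persistent_gnt *get_persistent_gnt(struct rb_root *root,
                                                           grant_ref_t gref)
          {
              struct rb_node *n = root->rb_node;

              while (n) {
                  struct persistent_gnt *pg =
                      rb_entry(n, struct persistent_gnt, node);

                  if (gref < pg->gnt)
                      n = n->rb_left;
                  else if (gref > pg->gnt)
                      n = n->rb_right;
                  else
                      return pg;          /* already mapped: reuse it */
              }
              return NULL;                /* not seen before: map it now */
          }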
      
      The maximum number of grants that blkback will persistently map is
      currently set to RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST, to
      prevent a malicious guest from attempting a DoS by supplying fresh
      grefs, causing the Dom0 kernel to map excessively. If a guest
      is using persistent grants and exceeds the maximum number of grants to
      map persistently, the newly passed grefs will be mapped and unmapped.
      Using this approach, we can have requests that mix persistent and
      non-persistent grants, and we need to handle them correctly.
      This allows us to set the maximum number of persistent grants to a
      lower value than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST, although
      doing so will lead to unpredictable performance.
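
      The per-gref decision this implies, as a sketch (the counter and
      limit names are illustrative, not the patch's identifiers):

          #include <linux/types.h>

          /* Keep a fresh gref persistently only while under the cap;
           * otherwise fall back to map-then-unmap for this request, so
           * persistent and non-persistent grants coexist in one request. */
          static bool keep_grant_persistent(unsigned int persistent_cnt,
                                            unsigned int max_persistent,
                                            bool feature_persistent)
          {
              return feature_persistent && persistent_cnt < max_persistent;
          }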
      
      In writing this patch, the question arises as to whether the additional
      cost of performing memcpys in the guest (to/from the pool of granted
      pages) outweighs the gains of not performing TLB shootdowns. The answer
      to that question is `no'. There appears to be very little, if any,
      additional cost to the guest of using persistent grants. There is
      perhaps a small saving from the reduced number of hypercalls
      performed in granting and ending foreign access.
      Signed-off-by: Oliver Chick <oliver.chick@citrix.com>
      Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      [v1: Fixed up the misuse of bool as int]
      0a8704a5
  4. 12 Sep 2012, 1 commit
    • xen/m2p: do not reuse kmap_op->dev_bus_addr · 2fc136ee
      Stefano Stabellini committed
      If the caller passes a valid kmap_op to m2p_add_override, we use
      kmap_op->dev_bus_addr to store the original mfn, but dev_bus_addr is
      part of the interface with Xen, and if we are batching the hypercalls
      it might not have been written by the hypervisor yet. That means that
      later on Xen will write to it, and we will mistake whatever Xen wrote
      there for the original mfn.

      Rather than "stealing" struct members from kmap_op, keep using
      page->index to store the original mfn, and add another parameter to
      m2p_remove_override to get the corresponding kmap_op instead.
      It is now the responsibility of the caller to keep track of which
      kmap_op corresponds to a particular page in the m2p_override (gntdev,
      the only user of this interface that passes a valid kmap_op, is
      already doing that).
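
      Schematically, the interface change looks like this (the "before"
      prototype is reconstructed from context and may differ in detail):

          #include <linux/mm.h>
          #include <xen/interface/grant_table.h>

          /* Before (reconstructed):
           *     int m2p_remove_override(struct page *page, bool clear_pte);
           *
           * After: the caller hands back the kmap_op it passed to
           * m2p_add_override, so kmap_op->dev_bus_addr stays untouched for
           * Xen and the original mfn lives in page->index instead. */
          int m2p_remove_override(struct page *page,
                                  struct gnttab_map_grant_ref *kmap_op);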
      
      CC: stable@kernel.org
      Reported-and-Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      2fc136ee
  5. 09 Aug 2012, 1 commit
  6. 24 Mar 2012, 1 commit
    • xen/blkback: Squash the discard support for 'file' and 'phy' type. · 4dae7670
      Konrad Rzeszutek Wilk committed
      The only reason for the distinction was the special case of
      'file' (which is assumed to be a loopback device): we reached inside
      the loopback device, found the underlying file, and called fallocate
      on it. Fortunately "xen-blkback: convert hole punching to discard
      request on loop devices" removes that use-case, and we now base the
      discard support on blk_queue_discard(q) and extract all appropriate
      parameters from the 'struct request_queue'.
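
      A minimal sketch of that probe; blk_queue_discard() and the queue
      limits are the era's block-layer API, while the printout merely
      illustrates which parameters a backend would advertise:

          #include <linux/blkdev.h>

          static void probe_discard(struct block_device *bdev)
          {
              struct request_queue *q = bdev_get_queue(bdev);

              if (!blk_queue_discard(q))
                  return;     /* no discard support on this backend */

              pr_info("discard: granularity %u, alignment %u, max sectors %u\n",
                      q->limits.discard_granularity,
                      q->limits.discard_alignment,
                      q->limits.max_discard_sectors);
          }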
      
      CC: Li Dongyang <lidongyang@novell.com>
      Acked-by: Jan Beulich <JBeulich@suse.com>
      [v1: Dropping pointless initializer and keeping blank line]
      [v2: Remove the kfree as it is not used anymore]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      4dae7670
  7. 20 Mar 2012, 2 commits
  8. 19 Nov 2011, 4 commits
  9. 18 Oct 2011, 1 commit
  10. 15 Oct 2011, 1 commit
  11. 13 Oct 2011, 5 commits
  12. 29 Sep 2011, 1 commit
    • xen: modify kernel mappings corresponding to granted pages · 0930bba6
      Stefano Stabellini committed
      If we want to use granted pages for AIO, changing the mappings of a user
      vma and the corresponding p2m is not enough; we also need to update the
      kernel mappings accordingly.
      Currently this is only needed for pages that are created for userspace
      use through /dev/xen/gntdev; that is, pages that have been in use by
      the kernel and go through the P2M will not need this special mapping.
      However there are no guarantees that in the future the kernel won't
      start accessing pages through the 1:1 mapping even for internal usage.

      In order to avoid the complexity of dealing with highmem, we allocate
      the pages in lowmem.
      We issue a HYPERVISOR_grant_table_op right away in
      m2p_add_override and we remove the mappings using another
      HYPERVISOR_grant_table_op in m2p_remove_override.
      Considering that m2p_add_override and m2p_remove_override are called
      once per page, we use multicalls and hypercall batching.

      Use the kmap_op pointer directly as the argument to do the mapping, as
      it is guaranteed to be present up until the unmapping is done.
      Before issuing any unmapping multicalls, we need to make sure that the
      mapping has already been done, because we need the kmap->handle to be
      set correctly.
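
      A sketch of the batched map; xen_mc_entry, MULTI_grant_table_op and
      xen_mc_issue are the x86/Xen multicall primitives, though the exact
      call site is simplified here:

          #include <xen/interface/grant_table.h>
          #include "multicalls.h"    /* arch/x86/xen multicall helpers */

          static void map_grant_batched(struct gnttab_map_grant_ref *kmap_op)
          {
              struct multicall_space mcs = xen_mc_entry(sizeof(*kmap_op));

              /* Queue the grant map as a multicall rather than issuing a
               * synchronous hypercall, so per-page calls from
               * m2p_add_override get batched. */
              MULTI_grant_table_op(mcs.mc, GNTTABOP_map_grant_ref, kmap_op, 1);

              xen_mc_issue(PARAVIRT_LAZY_MMU);
          }
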
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      [v1: Removed GRANT_FRAME_BIT usage]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      0930bba6
  13. 01 Jul 2011, 2 commits
  14. 01 Jun 2011, 1 commit
  15. 18 May 2011, 1 commit
  16. 13 May 2011, 9 commits
  17. 12 May 2011, 1 commit
  18. 06 May 2011, 3 commits
  19. 28 Apr 2011, 1 commit
  20. 27 Apr 2011, 2 commits
    • xen/blkback: Stick REQ_SYNC on WRITEs to deal with CFQ I/O scheduler. · 013c3ca1
      Konrad Rzeszutek Wilk committed
      If one runs a simple fio job with a random read/write mix in a
      20%/80% ratio, the numbers are incredibly bad when using the CFQ
      scheduler.
      
      IOmeter       |       |      |          |
      64K, randrw   |  NOOP | CFQ  | deadline |
      randrwmix=80  |       |      |          |
      --------------+-------+------+----------+
      blkback       |103/27 |32/10 | 102/27   |
      --------------+-------+------+----------+
      QEMU qdisk    |103/27 |102/27| 102/27   |
      
      The problem, as explained by Vivek Goyal, was:

      ".. that difference is that sync vs async requests. In the case of
      a kernel thread submitting IO, [..] all the WRITES might be being
      considered as async and will go in a different queue. If you mix those
      with some READS, they are always sync and will go in a different queue.
      In presence of sync queue, CFQ will idle and choke up WRITES in
      an attempt to improve latencies of READs.

      In case of AIO [note: this is what QEMU qdisk is doing], [..]
      it is direct IO and both READS and WRITES will be considered SYNC
      and will go in a single queue and no choking of WRITES will take place."
      
      The solution is quite simple: tack on REQ_SYNC (which is what the
      WRITE_ODIRECT macro includes) and the numbers go back up.
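
      A minimal sketch of the fix in the dispatch path; the switch shape
      mirrors blkback's request dispatch, but the surroundings are
      simplified:

          #include <linux/fs.h>                /* WRITE_ODIRECT, READ */
          #include <linux/types.h>
          #include <xen/interface/io/blkif.h>  /* BLKIF_OP_* */

          /* Choose bio flags for a ring request. Tacking REQ_SYNC onto
           * WRITEs (via WRITE_ODIRECT) keeps CFQ from idling on the sync
           * READ queue while the async WRITE queue starves. */
          static int pick_operation(u8 blkif_op)
          {
              switch (blkif_op) {
              case BLKIF_OP_WRITE:
                  return WRITE_ODIRECT;    /* WRITE | REQ_SYNC at the time */
              case BLKIF_OP_READ:
              default:
                  return READ;
              }
          }

      The result would then be handed to submit_bio() for each bio built
      from the request.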
      
      Suggested-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      013c3ca1
    • xen/blkback: Move the plugging/unplugging to a higher level. · 97961ef4
      Konrad Rzeszutek Wilk committed
      We used to do the plug/unplug around submit_bio. But that means
      that within a stream of WRITE, WRITE, WRITE,...,WRITE with one
      READ mixed in, the pipeline could stall (as submit_bio could
      trigger the unplug_fnc to be called and stall/sync when doing
      the READ). Instead we want to move the unplugging to the point
      when the whole ring buffer (or as much of it as possible) has
      been processed. This also eliminates doing a plug/unplug for
      each request.
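
      A sketch of the higher-level plugging this describes, using the
      block layer's on-stack plug API (the loop body is a placeholder for
      blkback's ring processing):

          #include <linux/blkdev.h>

          static void process_ring_batch(void)
          {
              struct blk_plug plug;

              /* Plug once around the whole batch of ring requests instead
               * of around each submit_bio, so one interleaved READ no
               * longer flushes the pending WRITEs early. */
              blk_start_plug(&plug);

              /* ... consume and submit every pending ring request ... */

              blk_finish_plug(&plug);    /* one unplug for the whole batch */
          }
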
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      97961ef4