1. 12 Jan 2012 (3 commits)
  2. 01 Nov 2011 (1 commit)
  3. 24 Oct 2011 (1 commit)
  4. 30 May 2011 (2 commits)
    • virtio: add api for delayed callbacks · 7ab358c2
      Michael S. Tsirkin authored
      Add an API that tells the other side that callbacks
      should be delayed until a lot of work has been done.
      Implement using the new event_idx feature.
      
      Note: it might seem advantageous to let the drivers
      ask for a callback after a specific capacity has
      been reached. However, as a single head can
      free many entries in the descriptor table,
      we don't really have a clue about capacity
      until get_buf is called. The API is the simplest
      to implement at the moment, we'll see what kind of
      hints drivers can pass when there's more than one
      user of the feature.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    • virtio_ring: support event idx feature · a5c262c5
      Michael S. Tsirkin authored
      Support for the new event idx feature:
      1. When enabling interrupts, publish the current avail index
         value to the host to get interrupts on the next update.
      2. Use the new avail_event feature to reduce the number
         of exits from the guest.
      
      Simple test with the simulator:
      
      [virtio]# time ./virtio_test
      spurious wakeus: 0x7
      
      real    0m0.169s
      user    0m0.140s
      sys     0m0.019s
      [virtio]# time ./virtio_test --no-event-idx
      spurious wakeus: 0x11
      
      real    0m0.649s
      user    0m0.295s
      sys     0m0.335s
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  5. 21 Apr 2011 (1 commit)
  6. 24 Nov 2010 (1 commit)
  7. 26 Jul 2010 (1 commit)
  8. 23 Jun 2010 (1 commit)
  9. 19 May 2010 (2 commits)
  10. 30 Mar 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo authored
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script was used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  I.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order
        conforms to its surroundings.  It is placed in the include block
        which contains core kernel includes, in the same order that the
        rest are ordered: alphabetical, Christmas tree, rev-Xmas-tree, or
        at the end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given that I had only a couple of failures from the tests on step 6,
      I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the
      arch headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  11. 24 Feb 2010 (4 commits)
    • virtio: Initialize vq->data entries to NULL · 3b870624
      Amit Shah authored
      vq operations depend on vq->data[i] being NULL to figure out if the vq
      entry is in use (since the previous patch).
      
      We have to initialize them to NULL to ensure we don't work with junk
      data and trigger false BUG_ONs.
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Shirley Ma <xma@us.ibm.com>
    • virtio: Add ability to detach unused buffers from vrings · c021eac4
      Shirley Ma authored
      There's currently no way for a virtio driver to ask for unused
      buffers, so it has to keep a list itself to reclaim them at shutdown.
      This is redundant, since virtio_ring stores that information.  So
      add a new hook to do this.
      Signed-off-by: Shirley Ma <xma@us.ibm.com>
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    • virtio: use smp_XX barriers on SMP · d57ed95d
      Michael S. Tsirkin authored
      virtio is communicating with a virtual "device" that actually runs on
      another host processor. Thus SMP barriers can be used to control
      memory access ordering.
      
      Where possible, we should use SMP barriers, which are more lightweight
      than mandatory barriers, because mandatory barriers also control MMIO
      effects on accesses through relaxed memory I/O windows, which virtio
      does not use (compare specifically smp_rmb and rmb on x86_64).
      
      We can't just use smp_mb and friends though, because
      we must force memory ordering even if guest is UP since host could be
      running on another CPU, but SMP barriers are defined to barrier() in
      that configuration. So, for UP fall back to mandatory barriers instead.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
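      The UP-vs-SMP fallback described above amounts to a compile-time selection. A sketch of its shape (assumption: the virtio_* macro names mirror later virtio code; the patch itself may spell them differently):

```c
/* Build-configuration sketch: use lightweight smp_* barriers when the
 * kernel is built SMP, but fall back to mandatory barriers on UP, because
 * the host may still be running on another CPU even when the guest has
 * only one, and smp_* barriers compile to barrier() on UP. */
#ifdef CONFIG_SMP
#define virtio_mb()  smp_mb()
#define virtio_rmb() smp_rmb()
#define virtio_wmb() smp_wmb()
#else
#define virtio_mb()  mb()
#define virtio_rmb() rmb()
#define virtio_wmb() wmb()
#endif
```

      The key point is the asymmetry with normal kernel code: a uniprocessor guest still shares memory with a host running elsewhere, so the UP case cannot degrade to compiler-only barriers.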
    • virtio: remove bogus barriers from DEBUG version of virtio_ring.c · 97a545ab
      Rusty Russell authored
      With DEBUG defined, we add an ->in_use flag to detect if the caller
      invokes two virtio methods in parallel.  The barriers attempt to ensure
      timely update of the ->in_use flag.
      
      But they're voodoo: if we need these barriers it implies that the
      calling code doesn't have sufficient synchronization to ensure the
      code paths aren't invoked at the same time anyway, and we want to
      detect it.
      
      Also, adding barriers changes timing, so turning on debug has more
      chance of hiding real problems.
      
      Thanks to MST for drawing my attention to this code...
      
      CC: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  12. 03 Feb 2010 (1 commit)
  13. 29 Oct 2009 (1 commit)
  14. 23 Sep 2009 (1 commit)
  15. 12 Jun 2009 (2 commits)
  16. 30 Mar 2009 (2 commits)
  17. 30 Dec 2008 (1 commit)
  18. 25 Jul 2008 (2 commits)
    • virtio: Add transport feature handling stub for virtio_ring. · e34f8725
      Rusty Russell authored
      To prepare for virtio_ring transport feature bits, hook in a call in
      all the users to manipulate them.  This currently just clears all the
      bits, since it doesn't understand any features.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    • virtio: don't always force a notification when ring is full · 44653eae
      Rusty Russell authored
      We force notification when the ring is full, even if the host has
      indicated it doesn't want to know.  This seemed like a good idea at
      the time: if we fill the transmit ring, we should tell the host
      immediately.
      
      Unfortunately this logic also applies to the receiving ring, which is
      refilled constantly.  We should introduce real notification thresholds
      to replace this logic.  Meanwhile, removing the logic altogether breaks
      the heuristics which KVM uses, so we use a hack: only notify if there are
      outgoing parts of the new buffer.
      
      Here are the exit counts with lguest's crappy network implementation:
      Before:
      	network xmit 7859051 recv 236420
      After:
      	network xmit 7858610 recv 118136
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  19. 30 May 2008 (2 commits)
    • virtio: force callback on empty. · b4f68be6
      Rusty Russell authored
      virtio allows drivers to suppress callbacks (ie. interrupts) for
      efficiency (no locking, it's just an optimization).
      
      There's a similar mechanism for the host to suppress notifications
      coming from the guest: in that case, we ignore the suppression if the
      ring is completely full.
      
      It turns out that life is simpler if the host similarly ignores
      callback suppression when the ring is completely empty: the network
      driver wants to free up old packets in a timely manner, and otherwise
      has to use a timer to poll.
      
      We have to remove the code which ignores interrupts when the driver
      has disabled them (again, it had no locking and hence was unreliable
      anyway).
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    • virtio_net: another race with virtio_net and enable_cb · 52a3a05f
      Christian Borntraeger authored
      Hello Rusty,
      
      seems that we still have a problem with virtio_net and the enable_cb
      callback.  During a long-running network stress test with virtio I got
      the following oops:
      
      ------------[ cut here ]------------
      kernel BUG at drivers/virtio/virtio_ring.c:230!
      illegal operation: 0001 [#1] SMP
      Modules linked in:
      CPU: 0 Not tainted 2.6.26-rc2-kvm-00436-gc94c08b-dirty #34
      Process netserver (pid: 2582, task: 000000000fbc4c68, ksp: 000000000f42b990)
      Krnl PSW : 0704c00180000000 00000000002d0ec8 (vring_enable_cb+0x1c/0x60)
                 R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 EA:3
      Krnl GPRS: 0000000000000000 0000000000000000 000000000ef3d000 0000000010009800
                 0000000000000000 0000000000419ce0 0000000000000080 000000000000007b
                 000000000adb5538 000000000ef40900 000000000ef40000 000000000ef40920
                 0000000000000000 0000000000000005 000000000029c1b0 000000000fea7d18
      Krnl Code: 00000000002d0ebc: a7110001           tmll    %r1,1
                 00000000002d0ec0: a7740004           brc     7,2d0ec8
                 00000000002d0ec4: a7f40001           brc     15,2d0ec6
                >00000000002d0ec8: a517fffe           nill    %r1,65534
                 00000000002d0ecc: 40103000           sth     %r1,0(%r3)
                 00000000002d0ed0: 07f0               bcr     15,%r0
                 00000000002d0ed2: e31020380004       lg      %r1,56(%r2)
                 00000000002d0ed8: a7480000           lhi     %r4,0
      Call Trace:
      ([<000000000029c0fc>] virtnet_poll+0x290/0x3b8)
       [<0000000000333fb8>] net_rx_action+0x9c/0x1b8
       [<00000000001394bc>] __do_softirq+0x74/0x108
       [<000000000010d16a>] do_softirq+0x92/0xac
       [<0000000000139826>] irq_exit+0x72/0xc8
       [<000000000010a7b6>] do_extint+0xe2/0x104
       [<0000000000110508>] ext_no_vtime+0x16/0x1a
      Last Breaking-Event-Address:
       [<00000000002d0ec4>] vring_enable_cb+0x18/0x60
      
      I looked into the virtio_net code for some time and I think the following
      scenario happened. Please look at virtnet_poll:
      [...]
              /* Out of packets? */
              if (received < budget) {
                      netif_rx_complete(vi->dev, napi);
                      if (unlikely(!vi->rvq->vq_ops->enable_cb(vi->rvq))
                          && napi_schedule_prep(napi)) {
                              vi->rvq->vq_ops->disable_cb(vi->rvq);
                              __netif_rx_schedule(vi->dev, napi);
                              goto again;
                      }
              }
      
      If an interrupt arrives after netif_rx_complete, a second poll routine can run
      on a different cpu. The second check for napi_schedule_prep would prevent any
      harm in the network stack, but we have called enable_cb possibly after the
      disable_cb in skb_recv_done.
      
      static void skb_recv_done(struct virtqueue *rvq)
      {
              struct virtnet_info *vi = rvq->vdev->priv;
              /* Schedule NAPI, Suppress further interrupts if successful. */
              if (netif_rx_schedule_prep(vi->dev, &vi->napi)) {
                      rvq->vq_ops->disable_cb(rvq);
                      __netif_rx_schedule(vi->dev, &vi->napi);
              }
      }
      
      That means that the second poll routine runs with interrupts enabled, which is
      ok, since we can handle additional interrupts. The problem is now that the
      second poll routine might also call enable_cb, triggering the BUG.
      
      The only solution I can come up with, is to remove the BUG statement in
      enable_cb - similar to disable_cb. Opinions or better ideas where the oops
      could come from?
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  20. 02 May 2008 (1 commit)
  21. 08 Apr 2008 (1 commit)
    • virtio: remove overzealous BUG_ON. · 2557a933
      Rusty Russell authored
      The 'disable_cb' callback is designed as an optimization to tell the host
      we don't need callbacks now.  As it is not reliable, the debug check is
      overzealous: it can happen on two CPUs at the same time.  Document this.
      
      Even if it were reliable, the virtio_net driver doesn't disable
      callbacks on transmit so the START_USE/END_USE debugging reentrance
      protection can be easily tripped even on UP.
      
      Thanks to Balaji Rao for the bug report and testing.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      CC: Balaji Rao <balajirrao@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 17 Mar 2008 (1 commit)
    • virtio: fix race in enable_cb · 4265f161
      Christian Borntraeger authored
      There is a race in virtio_net, dealing with disabling/enabling the callback.
      I saw the following oops:
      
      kernel BUG at /space/kvm/drivers/virtio/virtio_ring.c:218!
      illegal operation: 0001 [#1] SMP
      Modules linked in: sunrpc dm_mod
      CPU: 2 Not tainted 2.6.25-rc1zlive-host-10623-gd358142-dirty #99
      Process swapper (pid: 0, task: 000000000f85a610, ksp: 000000000f873c60)
      Krnl PSW : 0404300180000000 00000000002b81a6 (vring_disable_cb+0x16/0x20)
                 R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:0 CC:3 PM:0 EA:3
      Krnl GPRS: 0000000000000001 0000000000000001 0000000010005800 0000000000000001
                 000000000f3a0900 000000000f85a610 0000000000000000 0000000000000000
                 0000000000000000 000000000f870000 0000000000000000 0000000000001237
                 000000000f3a0920 000000000010ff74 00000000002846f6 000000000fa0bcd8
      Krnl Code: 00000000002b819a: a7110001           tmll    %r1,1
                 00000000002b819e: a7840004           brc     8,2b81a6
                 00000000002b81a2: a7f40001           brc     15,2b81a4
                >00000000002b81a6: a51b0001           oill    %r1,1
                 00000000002b81aa: 40102000           sth     %r1,0(%r2)
                 00000000002b81ae: 07fe               bcr     15,%r14
                 00000000002b81b0: eb7ff0380024       stmg    %r7,%r15,56(%r15)
                 00000000002b81b6: a7f13e00           tmll    %r15,15872
      Call Trace:
      ([<000000000fa0bcd0>] 0xfa0bcd0)
       [<00000000002b8350>] vring_interrupt+0x5c/0x6c
       [<000000000010ab08>] do_extint+0xb8/0xf0
       [<0000000000110716>] ext_no_vtime+0x16/0x1a
       [<0000000000107e72>] cpu_idle+0x1c2/0x1e0
      
      The problem can be triggered with a high amount of host->guest traffic.
      I think it's the following race:
      
      poll says netif_rx_complete
      poll calls enable_cb
      enable_cb opens the interrupt mask
      a new packet comes, an interrupt is triggered----\
      enable_cb sees that there is more work           |
      enable_cb disables the interrupt                 |
             .                                         V
             .                            interrupt is delivered
             .                            skb_recv_done does atomic napi test, ok
       some waiting                       disable_cb is called->check fails->bang!
             .
      poll would do napi check
      poll would do disable_cb
      
      The fix is to let enable_cb not disable the interrupt again, but to
      expect the caller to do the cleanup if it returns false.  In that
      case, the interrupt is only disabled if the napi test_set_bit was
      successful.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (cleaned up doco)
  23. 04 Feb 2008 (6 commits)
  24. 12 Nov 2007 (1 commit)
    • virtio: Force use of power-of-two for descriptor ring sizes · 42b36cc0
      Rusty Russell authored
      The virtio descriptor rings of size N-1 were nicely set up to be
      aligned to an N-byte boundary.  But as Anthony Liguori points out, the
      free-running indices used by virtio require that the sizes be a power
      of 2, otherwise we get problems on wrap (demonstrated with lguest).
      
      So we replace the clever "2^n-1" scheme with a simple "align to page
      boundary" scheme: this means that all virtio rings take at least two
      pages, but it's safer than guessing cache alignment.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>