1. 16 Jan 2010, 1 commit
  2. 15 Jan 2010, 1 commit
    • vhost_net: a kernel-level virtio server · 3a4d5c94
      Michael S. Tsirkin authored
      What it is: vhost net is a character device that can be used to reduce
      the number of system calls involved in virtio networking.
      Existing virtio net code is used in the guest without modification.
      
      It is similar to vringfd, but with some differences and a reduced scope:
      - uses eventfd for signalling
      - structures can be moved around in memory at any time (good for
        migration, bug work-arounds in userspace)
      - write logging is supported (good for migration)
      - supports a memory table and not just an offset (needed for kvm); see the
        sketch below
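      
      As an illustration of the last point, here is a minimal sketch of such a
      memory table, along the lines of the structures this patch introduces
      (field names are for illustration and may not match the header exactly):
      
      struct vhost_memory_region {
              __u64 guest_phys_addr;  /* start of the region in guest physical memory */
              __u64 memory_size;      /* length of the region in bytes */
              __u64 userspace_addr;   /* where userspace has mapped the region */
              __u64 flags_padding;    /* reserved for future use, must be 0 */
      };
      
      struct vhost_memory {
              __u32 nregions;         /* number of entries in regions[] */
              __u32 padding;
              struct vhost_memory_region regions[0];
      };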
      
      Common virtio-related code has been put in a separate file, vhost.c, and
      can be made into a separate module if/when more backends appear.  I used
      Rusty's lguest.c as the source for developing this part: it supplied
      me with witty comments I wouldn't be able to write myself.
      
      What it is not: vhost net is not a bus, and not a generic new system
      call. No assumptions are made about how the guest performs hypercalls.
      Userspace hypervisors are supported as well as kvm.
      
      How it works: basically, we connect a virtio frontend (configured by
      userspace) to a backend. The backend could be a network device or a tap
      device.  The backend is also configured by userspace, including vlan/mac,
      etc.
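      
      As a rough illustration of that flow, userspace setup could look
      something like the sketch below. It assumes the ioctl names introduced
      by this patch (VHOST_SET_OWNER, VHOST_SET_MEM_TABLE,
      VHOST_NET_SET_BACKEND) and a /dev/vhost-net misc device; vring/eventfd
      setup and error reporting are left out:
      
      #include <fcntl.h>
      #include <sys/ioctl.h>
      #include <linux/vhost.h>
      
      /* Sketch only: open the vhost-net character device, take ownership,
       * describe guest memory, and attach an already-open tap fd as backend. */
      static int setup_vhost_net(struct vhost_memory *mem, int tap_fd)
      {
              struct vhost_vring_file backend = { .index = 0, .fd = tap_fd };
              int vhost_fd = open("/dev/vhost-net", O_RDWR);
      
              if (vhost_fd < 0)
                      return -1;
              if (ioctl(vhost_fd, VHOST_SET_OWNER, NULL) ||
                  ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem) ||
                  ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend))
                      return -1;
              return vhost_fd;
      }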
      
      Status: This works for me, and I haven't seen any crashes.
      Compared to userspace, people reported improved latency (as I save up to
      4 system calls per packet), as well as better bandwidth and CPU
      utilization.
      
      Features that I plan to look at in the future:
      - mergeable buffers
      - zero copy
      - scalability tuning: figure out the best threading model to use
      
      Note on RCU usage (this is also documented in vhost.h, near
      private_pointer which is the value protected by this variant of RCU):
      what is happening is that the rcu_dereference() is being used in a
      workqueue item.  The role of rcu_read_lock() is taken on by the start of
      execution of the workqueue item, of rcu_read_unlock() by the end of
      execution of the workqueue item, and of synchronize_rcu() by
      flush_workqueue()/flush_work(). In the future we might need to apply
      some gcc attribute or sparse annotation to the function passed to
      INIT_WORK(). Paul's ack below is for this RCU usage.
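      
      To make that concrete, here is a minimal sketch of the pattern being
      described; the struct, field, and function names (my_virtqueue,
      handle_kick, set_backend) are placeholders, not the names used in
      vhost.c:
      
      #include <linux/kernel.h>
      #include <linux/rcupdate.h>
      #include <linux/workqueue.h>
      
      struct my_virtqueue {
              void *private_data;        /* protected by the workqueue-flush RCU variant */
              struct work_struct work;   /* set up with INIT_WORK(&vq->work, handle_kick) */
      };
      
      /* Reader side: runs as a workqueue item.  The start and end of the
       * item's execution stand in for rcu_read_lock()/rcu_read_unlock(). */
      static void handle_kick(struct work_struct *work)
      {
              struct my_virtqueue *vq = container_of(work, struct my_virtqueue, work);
              void *priv = rcu_dereference(vq->private_data);
      
              if (priv)
                      pr_debug("backend %p attached\n", priv);
      }
      
      /* Writer side: publish the new pointer, then wait for any work item
       * already queued to finish; flush_work() stands in for synchronize_rcu(). */
      static void set_backend(struct my_virtqueue *vq, void *new_priv)
      {
              rcu_assign_pointer(vq->private_data, new_priv);
              flush_work(&vq->work);
      }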
      
      (Includes fixes by Alan Cox <alan@linux.intel.com>,
      David L Stevens <dlstevens@us.ibm.com>,
      Chris Wright <chrisw@redhat.com>)
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 24 Dec 2009, 1 commit
  4. 22 Dec 2009, 1 commit
  5. 16 Dec 2009, 6 commits
  6. 15 Dec 2009, 1 commit
    • x25: Update maintainer. · 8bf28059
      Arnd Bergmann authored
      On Monday 14 December 2009, andrew hendry wrote:
      > Thanks, I didn't know X.25 was actively maintained. I get bounces.
      > Are the maintainers out of date?
      
      From looking at the posts on the x.25 mailing list and the changes
      that went into the kernel during the last three years in that area,
      I think it is safe to say that you are now the maintainer ;-).
      
      The last mail on this topic from Henner Eisen was around 2001.
      
      > AX.25 NETWORK LAYER
      > M:      Ralf Baechle <ralf@linux-mips.org>
      >
      > X.25 NETWORK LAYER
      > M:      Henner Eisen <eis@baty.hanse.de>
      
      How about this change?
      Signed-off-by: Andrew Hendry <andrew.hendry@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 12 Dec 2009, 3 commits
  8. 11 Dec 2009, 1 commit
  9. 10 Dec 2009, 4 commits
  10. 09 Dec 2009, 2 commits
  11. 08 Dec 2009, 1 commit
  12. 07 Dec 2009, 1 commit
  13. 05 Dec 2009, 2 commits
  14. 04 Dec 2009, 1 commit
  15. 03 Dec 2009, 1 commit
  16. 02 Dec 2009, 1 commit
  17. 01 Dec 2009, 2 commits
  18. 30 Nov 2009, 1 commit
  19. 29 Nov 2009, 1 commit
  20. 27 Nov 2009, 1 commit
  21. 26 Nov 2009, 1 commit
  22. 24 Nov 2009, 1 commit
  23. 23 Nov 2009, 1 commit
  24. 18 Nov 2009, 1 commit
  25. 16 Nov 2009, 1 commit
  26. 14 Nov 2009, 2 commits