- 01 Feb 2011, 1 commit
By Michael S. Tsirkin:
When built with RCU checks enabled, vhost triggers bogus warnings: vhost features are sometimes read without dev->mutex, and the private pointer is read under vhost's own flavor of RCU, in which a work item serves as the read-side critical section. Fixing this properly is not trivial. Disable the warnings by stubbing out the checks for now. Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
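A minimal sketch of the stubbing approach described here, with illustrative names (this is not the exact upstream diff): the lockdep condition passed to rcu_dereference_check() is forced to true, because "this code runs from a vhost work item" cannot be expressed to the checker.

```c
#include <linux/rcupdate.h>

/* Illustrative device; the real structure lives in drivers/vhost/vhost.h. */
struct demo_vhost_dev {
	void __rcu *private_data;
};

/* Hypothetical helper: suppress the lockdep-RCU splat by passing an
 * always-true condition, since vhost's "work item acts as the read-side
 * critical section" convention is invisible to the checker. */
static void *demo_vhost_get_private(struct demo_vhost_dev *dev)
{
	return rcu_dereference_check(dev->private_data, 1);
}
```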
- 09 Dec 2010, 1 commit
By Jason Wang:
Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
- 05 Oct 2010, 1 commit
By Jason Wang:
QEMU supports up to UIO_MAXIOV s/g entries, so we have to match that, because guest drivers may rely on it. Allocate the indirect and log arrays dynamically to avoid using too much contiguous memory, and make the length of the hdr array match the header length, since each iovec entry covers at least one byte. Tested by copying large files with and without migration in both Linux and Windows guests. Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
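A hedged sketch of the allocation scheme described above, with illustrative structure and field names (the real code lives in drivers/vhost/vhost.c):

```c
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/uio.h>

/* Illustrative per-virtqueue scratch arrays, sized to UIO_MAXIOV so any s/g
 * list QEMU hands us can be accepted, but allocated with kmalloc() at setup
 * time instead of being embedded in the virtqueue structure. */
struct demo_virtqueue {
	struct iovec *indirect;
	u64 *log;
};

static int demo_vq_alloc_iovecs(struct demo_virtqueue *vq)
{
	vq->indirect = kmalloc(sizeof(*vq->indirect) * UIO_MAXIOV, GFP_KERNEL);
	vq->log = kmalloc(sizeof(*vq->log) * UIO_MAXIOV, GFP_KERNEL);
	if (!vq->indirect || !vq->log) {
		kfree(vq->indirect);
		kfree(vq->log);
		return -ENOMEM;
	}
	return 0;
}
```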
- 22 Aug 2010, 1 commit
By Arnd Bergmann:
Also add rcu_dereference_protected() for code paths where locks are held. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: "Michael S. Tsirkin" <mst@redhat.com>
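For reference, a small sketch of the rcu_dereference_protected() pattern this commit mentions (names are illustrative, not the actual vhost fields):

```c
#include <linux/mutex.h>
#include <linux/rcupdate.h>

struct demo_dev {
	struct mutex mutex;
	void __rcu *private_data;
};

/* When dev->mutex is held, the pointer cannot change underneath us, so the
 * update side can load it without rcu_read_lock() while still keeping
 * sparse and lockdep happy. */
static void *demo_get_locked(struct demo_dev *dev)
{
	return rcu_dereference_protected(dev->private_data,
					 lockdep_is_held(&dev->mutex));
}
```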
- 28 Jul 2010, 2 commits
By David Stevens:
This adds support for mergeable buffers in vhost-net: it is needed for older guests without indirect buffer support, as well as for zero-copy with some devices. Includes changes by Michael S. Tsirkin to make the patch as low-risk as possible (i.e., close to no changes when the feature is disabled). Signed-off-by: David Stevens <dlstevens@us.ibm.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
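For context, a hedged sketch of what mergeable receive buffers mean on the wire: each received packet is prefixed by a header carrying the number of receive buffers it was merged across. The layout below paraphrases struct virtio_net_hdr_mrg_rxbuf from <linux/virtio_net.h>; the name and exact field types here are illustrative.

```c
#include <linux/types.h>
#include <linux/virtio_net.h>

/* When VIRTIO_NET_F_MRG_RXBUF is negotiated, a packet may span several
 * receive buffers; num_buffers tells the guest how many were used, so
 * guests without indirect descriptors can still receive large packets. */
struct demo_mrg_rxbuf_hdr {
	struct virtio_net_hdr hdr;  /* the usual per-packet virtio-net header */
	__u16 num_buffers;          /* how many rx buffers this packet spans */
};
```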
By Tejun Heo:
Replace vhost_workqueue with a per-vhost kthread. Other than the callback argument changing from struct work_struct * to struct vhost_work *, there is no visible change to the vhost_poll_*() interface. This conversion gives each vhost device a dedicated kthread so that resource control via cgroups can be applied. Partially based on Sridhar Samudrala's patch. A simplified sketch of the worker model follows this entry.
* Updated to use the sub-structure vhost_work instead of using vhost_poll directly, at Michael's suggestion.
* Added a flusher wake_up() optimization at Michael's suggestion.
Changes by MST:
* Converted atomics/barrier use to a spinlock.
* Create the thread on SET_OWNER.
* Fix flushing.
Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Cc: Sridhar Samudrala <samudrala.sridhar@gmail.com>
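A condensed sketch of the per-device worker model described above, under assumed and simplified names (the real vhost_work also tracks flush sequence counters, and the queueing side takes the same spinlock before waking the thread):

```c
#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

struct demo_work {
	struct list_head node;
	void (*fn)(struct demo_work *work);
};

struct demo_dev {
	spinlock_t work_lock;
	struct list_head work_list;
	struct task_struct *worker;   /* dedicated kthread, cgroup-controllable */
};

static int demo_worker(void *data)
{
	struct demo_dev *dev = data;
	struct demo_work *work;

	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);

		spin_lock_irq(&dev->work_lock);
		work = list_first_entry_or_null(&dev->work_list,
						struct demo_work, node);
		if (work)
			list_del_init(&work->node);
		spin_unlock_irq(&dev->work_lock);

		if (work) {
			__set_current_state(TASK_RUNNING);
			work->fn(work);  /* runs in this device's own thread */
		} else {
			schedule();      /* wait for the queueing side's wake_up */
		}
	}
	__set_current_state(TASK_RUNNING);
	return 0;
}
```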
- 27 Jun 2010, 1 commit
By Michael S. Tsirkin:
When ring parsing fails, we currently treat it as a ring-empty condition. This means we enable kicks and recheck for ring empty: if the ring is not empty, we restart polling, which of course fails again. Instead, return a negative error code and stop polling. Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
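A hedged sketch of the control-flow change (the functions and types are illustrative stand-ins for vhost_get_vq_desc()/vhost_enable_notify(), not the actual vhost code):

```c
#include <linux/errno.h>

struct demo_vq {
	int num;	/* ring size; "head == num" means the ring is empty */
};

/* Illustrative stand-ins; the real parsing/notification code is in vhost.c. */
static int demo_get_vq_desc(struct demo_vq *vq) { return -EFAULT; }
static void demo_enable_notify(struct demo_vq *vq) { }

static void demo_handle_rx(struct demo_vq *vq)
{
	int head = demo_get_vq_desc(vq);	/* parse next available descriptor */

	if (head < 0)
		return;			/* parse error: stop polling, don't re-kick */
	if (head == vq->num) {
		demo_enable_notify(vq);	/* genuinely empty: re-enable guest kicks */
		return;
	}
	/* ... process the descriptor chain starting at 'head' ... */
}
```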
- 15 Jan 2010, 1 commit
By Michael S. Tsirkin:
What it is: vhost net is a character device that can be used to reduce the number of system calls involved in virtio networking. Existing virtio net code is used in the guest without modification.
There's similarity with vringfd, with some differences and reduced scope:
- uses eventfd for signalling
- structures can be moved around in memory at any time (good for migration and for bug work-arounds in userspace)
- write logging is supported (good for migration)
- supports a memory table and not just an offset (needed for kvm)
Common virtio-related code has been put in a separate file, vhost.c, and can be made into a separate module if/when more backends appear. I used Rusty's lguest.c as the source for developing this part: it supplied me with witty comments I wouldn't be able to write myself.
What it is not: vhost net is not a bus and not a generic new system call. No assumptions are made about how the guest performs hypercalls. Userspace hypervisors are supported as well as kvm.
How it works: basically, we connect the virtio frontend (configured by userspace) to a backend. The backend could be a network device or a tap device. The backend is also configured by userspace, including vlan/mac etc.
Status: this works for me, and I haven't seen any crashes. Compared to userspace, people reported improved latency (as I save up to 4 system calls per packet), as well as better bandwidth and CPU utilization.
Features I plan to look at in the future:
- mergeable buffers
- zero copy
- scalability tuning: figure out the best threading model to use
Note on RCU usage (this is also documented in vhost.h, near private_pointer, which is the value protected by this variant of RCU): what is happening is that rcu_dereference() is being used in a workqueue item. The role of rcu_read_lock() is taken on by the start of execution of the workqueue item, that of rcu_read_unlock() by the end of execution of the workqueue item, and that of synchronize_rcu() by flush_workqueue()/flush_work(). In the future we might need to apply some gcc attribute or sparse annotation to the function passed to INIT_WORK(). Paul's ack below is for this RCU usage.
(Includes fixes by Alan Cox <alan@linux.intel.com>, David L Stevens <dlstevens@us.ibm.com>, Chris Wright <chrisw@redhat.com>)
Acked-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
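The RCU note above maps onto code roughly like the following sketch (illustrative names, not the actual vhost code): the work item's body is the read-side critical section, the updater publishes with rcu_assign_pointer(), and flush_work() stands in for synchronize_rcu().

```c
#include <linux/rcupdate.h>
#include <linux/workqueue.h>

struct demo_dev {
	void __rcu *private_data;	/* the backend, e.g. a socket */
	struct work_struct work;
};

static void demo_work_fn(struct work_struct *work)
{
	struct demo_dev *dev = container_of(work, struct demo_dev, work);
	/* "Read side": valid from work start to work end, no rcu_read_lock();
	 * the check condition is forced true because this convention is
	 * invisible to the RCU checker. */
	void *priv = rcu_dereference_check(dev->private_data, 1);

	(void)priv;	/* ... use the backend here ... */
}

static void *demo_set_backend(struct demo_dev *dev, void *new_priv)
{
	void *old = rcu_dereference_protected(dev->private_data, 1);

	rcu_assign_pointer(dev->private_data, new_priv);
	flush_work(&dev->work);	/* plays the role of synchronize_rcu() */
	return old;		/* now safe for the caller to free */
}
```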