- 27 Jan 2015, 1 commit
-
-
Submitted by David L Stevens

This patch moves the clearing of ring data in vnet_port_free_tx_bufs() to after the freeing of pending buffers in the ring. Otherwise, this can result in dereferencing a NULL pointer.

Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 Jan 2015, 1 commit
-
-
Submitted by David L Stevens

This patch fixes the rx packet length check in the sunvnet driver to allow for a TSO maximum packet length greater than the LDC-channel-negotiated MTU. The two are negotiated separately, and there is no requirement that port->tsolen be less than port->rmtu; but when it is not, packets are dropped with rx length errors.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
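A minimal sketch of the relaxed check, assuming the field names from the message (the driver's actual condition may differ):

    /* accept frames up to the larger of the negotiated MTU and the TSO limit */
    if (unlikely(len < ETH_ZLEN || len > max(port->rmtu, port->tsolen)))
            goto err;   /* otherwise counted as an rx length error */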
-
- 20 Dec 2014, 1 commit
-
-
Submitted by Li RongQing

When skb_gso_segment() returns an error, the original skb should be freed.

Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Acked-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
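A hedged sketch of the fix: skb_gso_segment() returns an ERR_PTR on failure, at which point the caller still owns the original skb (the feature mask and error path shown are illustrative):

    segs = skb_gso_segment(skb, dev->features & ~NETIF_F_TSO);
    if (IS_ERR(segs)) {
            dev_kfree_skb_any(skb);   /* free the original skb instead of leaking it */
            return NETDEV_TX_OK;
    }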
-
- 12 Dec 2014, 1 commit
-
-
Submitted by Dwight Engen

Both sunvdc and sunvnet implemented distinct functionality for incrementing and decrementing dring indexes. Create common functions for use by both, based on the sunvnet versions, which were chosen because they still work correctly if a non-power-of-two ring size is used.

Signed-off-by: Dwight Engen <dwight.engen@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
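A small, self-contained illustration of why modulo-style helpers were preferred (helper names hypothetical): power-of-two masking such as (idx + 1) & (size - 1) silently breaks for, e.g., a 12-entry ring.

    #include <stdio.h>

    /* index helpers that are correct for any ring size */
    static unsigned int dring_next(unsigned int idx, unsigned int size)
    {
            return (idx + 1) % size;
    }

    static unsigned int dring_prev(unsigned int idx, unsigned int size)
    {
            return idx ? idx - 1 : size - 1;
    }

    int main(void)
    {
            /* with a 12-entry ring, masking would wrap 11 -> 8; modulo wraps 11 -> 0 */
            printf("next(11, 12) = %u\n", dring_next(11, 12));
            printf("prev(0, 12)  = %u\n", dring_prev(0, 12));
            return 0;
    }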
-
- 09 Dec 2014, 7 commits
-
-
Submitted by David L Stevens

This patch removes an extra rcu_read_unlock() on an allocation failure in vnet_skb_shape(). The needed rcu_read_unlock() is already done under the out_dropped label.

Reported-by: Rashmi Narasimhan <rashmi.narasimhan@oracle.com>
Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David L Stevens

This patch adds TSO support to the sunvnet driver.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David L Stevens

This patch adds GSO support to the sunvnet driver.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David L Stevens

This patch adds support for sender-side checksum offloading.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David L Stevens

This patch adds scatter/gather support to the sunvnet driver.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
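The TSO, GSO, checksum-offload, and scatter/gather commits above together amount to advertising offload capabilities to the network stack. A hedged sketch of how such features are typically exposed at probe time; the exact flag set sunvnet uses is an assumption here:

    /* advertise the offloads to the stack; each flag matches one of the
     * features added above (flag selection illustrative) */
    dev->hw_features = NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_TSO;
    dev->features |= dev->hw_features;   /* software GSO builds on SG + csum */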
-
Submitted by David L Stevens

This patch adds support for VIO v1.7 (extended descriptor format) and v1.8 (receive-side checksumming) to the sunvnet driver.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David L Stevens

This patch renames vnet_port_alloc_tx_bufs to vnet_port_alloc_tx_ring, since the function no longer allocates buffers now that zero-copy transmit support has been added. It also moves the ring allocation to after VIO version negotiation, to allow for different-sized descriptors in later VIO versions.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 Nov 2014, 1 commit
-
-
Submitted by David L Stevens

This patch fixes a NULL pointer dereference that occurs when __tx_port_find() doesn't find a matching port.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 Nov 2014, 3 commits
-
-
Submitted by Sowmini Varadhan

The out_dropped label only does rcu_read_unlock() for a non-NULL port, so add the missing rcu_read_unlock() when bailing out with a NULL port.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Sowmini Varadhan

As per the comments in vnet_start_xmit: for the edge case where outgoing vnet_start_xmit() data and an incoming STOPPED ACK cross each other in flight, we may need to send the missed START trigger from maybe_tx_wakeup(), after checking for a false value of start_cons.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Sowmini Varadhan

When vnet_start_xmit() runs concurrently with vnet_ack(), we may have a race that looks like:

        thread 1                          thread 2

        vnet_start_xmit
                                          vnet_event_napi -> vnet_rx
        __vnet_tx_trigger for some
        desc X; at this point
        dr->prod == X
                                          peer sends back a STOPPED ack
                                          for X; we process X, but
                                          X == dr->prod, so we bail out
                                          in vnet_ack with !idx_is_pending
        update dr->prod

As a result of the fact that we never processed the STOPPED ack for X, the Tx path is led to incorrectly believe that the peer is still "started" and reading, when the peer has in fact stopped reading. This ultimately ends in flow-control assertions. The fix is to synchronize these two paths on the netif_tx_lock.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 Nov 2014, 2 commits
-
-
Submitted by Sowmini Varadhan

vnet_event_napi() may be called as part of the NAPI ->poll to resume reading descriptor rings. When no data is available, descriptor ring state (e.g., rcv_nxt) needs to be reset carefully to stay in lock-step with ldc_read(). In the interest of simplicity, the best way to do this is to return from vnet_event_napi() when there are no more packets to read; the next trip through ldc_rx will correctly set up the dring state.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Tested-by: David Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Sowmini Varadhan

Remove a redundant tab.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 31 Oct 2014, 2 commits
-
-
Submitted by Sowmini Varadhan

Use multiple Tx netdev queues for sunvnet by supporting a one-to-one mapping between vnet_port and Tx queue. Provide an ndo_select_queue indirection (vnet_select_queue()) which selects the queue based on the peer that would be selected in vnet_start_xmit().

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
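A hedged sketch of the indirection described above; the ndo_select_queue prototype has changed across kernel versions, and the port lookup helper and queue-index field are assumptions here:

    static u16 vnet_select_queue(struct net_device *dev, struct sk_buff *skb,
                                 void *accel_priv, select_queue_fallback_t fallback)
    {
            struct vnet *vp = netdev_priv(dev);
            struct vnet_port *port = tx_port_find(vp, skb);  /* same peer lookup
                                                              * as vnet_start_xmit */
            return port ? port->q_index : 0;
    }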
-
Submitted by Sowmini Varadhan

When vnet_event_napi() re-enables interrupts, it should reset LDC_EVENT_DATA_READY as an optimization.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Oct 2014, 3 commits
-
-
Submitted by Sowmini Varadhan

After the NAPIfication of sunvnet, we no longer need to synchronize by doing irqsave/restore on vio.lock in the I/O fastpath. NAPI ->poll() is non-reentrant, so all RX processing occurs strictly in a serialized environment. TX reclaim is done in NAPI context, so the netif_tx_lock can be used to serialize critical sections between the Tx and Rx paths.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Sowmini Varadhan

A vnet_port_remove could be triggered as a result of an ldm-unbind operation by the peer, a module unload, or other changes to the inter-vnet-link configuration. When this is concurrent with vnet_start_xmit(), several race sequences are possible, such as:

        thread 1                          thread 2

        vnet_start_xmit -> tx_port_find
          spin_lock_irqsave(&vp->lock..)
          ret = __tx_port_find(..)
          spin_unlock_irqrestore(&vp->lock..)
                                          vio_remove -> .. -> vnet_port_remove
                                            spin_lock_irqsave(&vp->lock..)
                                            cleanup
                                            spin_unlock_irqrestore(&vp->lock..)
                                            kfree(port)
        /* attempt to use ret will bomb */

This patch adds RCU locking for port access so that vnet_port_remove will correctly clean up port-related state.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Dwight Engen <dwight.engen@oracle.com>
Acked-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
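A minimal sketch of the RCU pattern this implies: readers traverse the port list under rcu_read_lock(), and the remover waits out a grace period before freeing (list and field names are illustrative):

    /* reader side, e.g. in the xmit path */
    rcu_read_lock();
    port = __tx_port_find(vp, skb);        /* traversal uses _rcu list ops */
    if (port)
            use_port(port);                /* port stays live until the unlock */
    rcu_read_unlock();

    /* remover side, e.g. in vnet_port_remove */
    hlist_del_rcu(&port->hash);            /* unlink so new readers miss it */
    synchronize_rcu();                     /* wait for in-flight readers */
    kfree(port);                           /* now safe to free */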
-
Submitted by Sowmini Varadhan

Move Rx packet processing to the NAPI poll callback. Disable the VIO interrupt and unconditionally go into NAPI context from vnet_event. Note that we want to minimize the number of LDC STOP/START messages sent. Specifically, do not send a STOP message if vnet_walk_rx does not read all the available descriptors because of the NAPI budget limitation; instead, note the end index as part of the port state, and resume from this index when the next poll callback is triggered.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Raghuram Kothakota <raghuram.kothakota@oracle.com>
Acked-by: Dwight Engen <dwight.engen@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
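A hedged sketch of the resulting poll-callback shape; the saved-index bookkeeping is assumed to live inside vnet_event_napi(), and vio_set_intr is the sparc VIO interrupt-control hook:

    static int vnet_poll(struct napi_struct *napi, int budget)
    {
            struct vnet_port *port = container_of(napi, struct vnet_port, napi);
            int processed = vnet_event_napi(port, budget);   /* resumes at the
                                                              * remembered index */
            if (processed < budget) {
                    napi_complete(napi);                     /* fully caught up */
                    vio_set_intr(port->vio.vdev->rx_ino, HV_INTR_ENABLED);
            }
            return processed;
    }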
-
- 02 Oct 2014, 1 commit
-
-
Submitted by David L Stevens

One of the error cases for vnet_start_xmit()'s out_dropped label is port == NULL, so only touch port->clean_timer when port is non-NULL.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 Oct 2014, 5 commits
-
-
Submitted by Dwight Engen

vio_dring_avail() will allow use of every dring entry, but when the last entry is allocated, dr->prod == dr->cons, which is indistinguishable from the ring-empty condition. This causes the next allocation to reuse an entry. When this happens in sunvdc, the server-side vds driver begins nack'ing the messages and ends up resetting the ldc channel. This problem does not affect sunvnet since it checks for < 2.

The fix here is to just never allocate the very last dring slot, so that full and empty are not the same condition. The request start path was changed to check for the ring being full a bit earlier, and to stop the blk_queue if there is no space left. The blk_queue will be restarted once the ring is only half full again. The number of ring entries was increased to 512, which matches the sunvnet and Solaris vdc drivers and greatly reduces the frequency of hitting the ring-full condition and the associated blk_queue stopping and starting. The checks in sunvnet were adjusted to account for vio_dring_avail() returning 1 less.

Orabug: 19441666
OraBZ: 14983
Signed-off-by: Dwight Engen <dwight.engen@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
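The underlying ambiguity is the classic single-index ring problem. A small, self-contained demo of the convention the fix adopts (sacrifice one slot so that full and empty are distinguishable):

    #include <stdio.h>

    #define RING_SIZE 512

    /* free slots, with prod == cons meaning "empty"; one slot is never
     * used, so the ring is full at RING_SIZE - 1 entries, never at
     * prod == cons */
    static unsigned int ring_avail(unsigned int prod, unsigned int cons)
    {
            return (cons + RING_SIZE - prod - 1) % RING_SIZE;
    }

    int main(void)
    {
            printf("empty ring: avail = %u\n", ring_avail(0, 0));   /* 511 */
            printf("full ring:  avail = %u\n", ring_avail(511, 0)); /* 0   */
            return 0;
    }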
-
Submitted by David L Stevens

This patch sends ICMP and ICMPv6 messages for Path MTU Discovery when a remote port's MTU is smaller than the device MTU. This allows mixing newer VIO-protocol devices that support MTU negotiation with older devices that do not on the same vswitch. It also allows Linux-to-Linux LDOMs to use 64K-1 data packets even though the Solaris vswitch is limited to a <16K MTU.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
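A hedged sketch of how a driver typically signals path MTU back to the stack when a frame exceeds the peer's limit; port->rmtu is taken from the message, while the surrounding check and drop path are illustrative:

    if (unlikely(skb->len > port->rmtu)) {
            if (skb->protocol == htons(ETH_P_IP))
                    icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
                              htonl(port->rmtu));
            else if (skb->protocol == htons(ETH_P_IPV6))
                    icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, port->rmtu);
            goto out_dropped;   /* sender will retransmit smaller packets */
    }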
-
Submitted by David L Stevens

This patch allows an admin to set the MTU on a sunvnet device to arbitrary values between the minimum (68) and maximum (65535) IPv4 packet sizes.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David L Stevens

This patch removes the pre-allocated transmit buffers and instead directly maps pending packets on demand. This saves O(n^2) maximum-sized transmit buffers for n hosts on a vswitch, as well as a copy into those buffers. Single-stream TCP throughput from Linux to Solaris dropped ~5% at 1500-byte MTU, but Linux to Linux at 1500 bytes increased ~20%.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David L Stevens

This patch upgrades the sunvnet driver to support VIO protocol version 1.6. In particular, it adds per-port MTU negotiation, allowing MTUs other than ETH_FRAME_LEN with ports using newer VIO protocol versions.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 Sep 2014, 1 commit
-
-
Submitted by Sowmini Varadhan

When sending out a burst of packets across multiple descriptors, it is sufficient to send one LDC "start" trigger for the first descriptor, so do not send an LDC "start" for every pass through vnet_start_xmit. Similarly, it is sufficient to send one "DRING_STOPPED" trigger for the last dring (and if that fails, hold off and send the trigger later). Trimming the number of LDC messages helps avoid filling up the LDC channel with superfluous messages that risk triggering flow control on the channel, and also boosts performance.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Raghuram Kothakota <raghuram.kothakota@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Sep 2014, 1 commit
-
-
Submitted by Joe Perches

Use the much more common pr_warn instead of pr_warning. Other miscellanea:

o Typo fixes: submiting/submitting
o Coalesce formats
o Realign arguments
o Add missing terminating '\n' to formats

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 Sep 2014, 1 commit
-
-
Submitted by David L Stevens

The sunvnet driver does not have an rmb() in the ring consumer corresponding to the wmb() in the producer. According to Documentation/memory-barriers.txt: "When dealing with CPU-CPU interactions, certain types of memory barrier should always be paired. A lack of appropriate pairing is almost certainly an error."

In cases where an rmb() is not a no-op, and a consumer is removing data from the ring while a producer is adding new entries, a load reorder would allow:

        CPU1                                    CPU2

                                                LOAD desc.size [e.g.]
        STORE desc.size
        <wmb>
        set desc.hdr.state = VIO_DESC_READY
                                                LOAD desc.hdr.state
                                                [because VIO_DESC_READY, use
                                                 the old desc.size, already
                                                 loaded out of order]

        [CPU2 has reordered apparently unrelated LOADs]

To ensure other desc fields are not loaded before checking VIO_DESC_READY, we need an rmb() between the check and the desc data accesses. I've also moved the viodbg() call to after the rmb() so that it, too, sees current descriptor data even with reordering, which has the side effect that it won't print anything for descriptors that are not VIO_DESC_READY, as it did before. That's a) probably a good thing, since the fields are not necessarily set, and b) better than adding another rmb() just for viodbg(). This would not be possible if strict ordering were enforced, but then the memory barriers should be no-ops in that case.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
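A minimal sketch of the pairing the commit describes; the field names follow the diagram above, and in the kernel the barriers are the wmb()/rmb() macros:

    /* producer */
    desc->size = len;                       /* fill in descriptor data */
    wmb();                                  /* publish data before the flag */
    desc->hdr.state = VIO_DESC_READY;

    /* consumer (the fix adds the rmb) */
    if (desc->hdr.state == VIO_DESC_READY) {
            rmb();                          /* pairs with the producer's wmb() */
            len = desc->size;               /* guaranteed to see the new size */
    }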
-
- 14 Aug 2014, 3 commits
-
-
Submitted by Sowmini Varadhan

At the tail of vnet_event(), if we hit the maybe_tx_wakeup() condition, we try to take the netif_tx_lock() in recv-interrupt context and can deadlock with dev_watchdog(). vnet_event() should schedule maybe_tx_wakeup() as a tasklet to avoid this deadlock.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
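A hedged sketch of the deferral; the tasklet field and callback names are illustrative:

    /* setup, e.g. at device init */
    tasklet_init(&vp->vnet_tx_wakeup, maybe_tx_wakeup,
                 (unsigned long)vp);

    /* in vnet_event(), instead of calling maybe_tx_wakeup() directly
     * from recv-interrupt context: */
    tasklet_schedule(&vp->vnet_tx_wakeup);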
-
Submitted by Sowmini Varadhan

ldc_rx -> vnet_rx -> .. -> vnet_walk_rx -> vnet_send_ack should not spin in an infinite loop waiting for EAGAIN to lift. The sender could have sent us a burst and gone to lunch without doing any more ldc_read()s; that should not cause the receiver to loop infinitely until the soft-lockup detector kicks in. Similarly, __vnet_tx_trigger should only loop on EAGAIN a finite number of times. The caller (vnet_start_xmit()) already has code to reset the dring state and bail on errors from __vnet_tx_trigger.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Raghuram Kothakota <raghuram.kothakota@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
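A hedged sketch of bounding the retry; the retry cap and delay values are hypothetical, and ldc_write is the sparc LDC channel write primitive:

    int retries = 0;
    int err;

    do {
            err = ldc_write(port->vio.lp, &pkt, sizeof(pkt));
            if (err != -EAGAIN)
                    break;              /* success, or a real error */
            udelay(50);                 /* brief back-off, value illustrative */
    } while (++retries < 1000);         /* finite: give up instead of spinning */

    if (err == -EAGAIN)
            err = -EBUSY;               /* let the caller reset dring state */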
-
Submitted by Sowmini Varadhan

There is no need to ask for an ACK with every vnet_start_xmit(): the single ACK with DRING_STOPPED is sufficient for the protocol. We free the sk_buff in vnet_start_xmit itself, so we don't need an ACK back.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Raghuram Kothakota <raghuram.kothakota@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 29 Jul 2014, 1 commit
-
-
Submitted by David L Stevens

The sunvnet driver doesn't check whether or not a port is connected when transmitting packets, which results in failures if a port fails to connect (e.g., due to a version mismatch). The original code also assumes, unnecessarily, that the first port is up and is a switch, even though there is a flag for switch ports. This patch only matches a port if it is connected, and otherwise uses the switch_port flag to send the packet to a switch port that is up.

Signed-off-by: David L Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 Jul 2014, 1 commit
-
-
Submitted by Sowmini Varadhan

Nothing cleans up the objects created by vnet_new(); they are completely leaked. vnet_exit(), after doing the vio_unregister_driver() to clean up ports, should call a helper function that iterates over vnet_list and cleans up those objects. This includes unregister_netdevice() as well as free_netdev().

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Reviewed-by: Karl Volz <karl.volz@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
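A hedged sketch of such a helper; the helper name and list fields are assumptions based on the message:

    static void vnet_cleanup(void)
    {
            struct vnet *vp;

            while (!list_empty(&vnet_list)) {
                    vp = list_first_entry(&vnet_list, struct vnet, list);
                    list_del(&vp->list);
                    unregister_netdev(vp->dev);   /* take the interface down */
                    free_netdev(vp->dev);         /* then release its memory */
            }
    }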
-
- 01 Jan 2014, 1 commit
-
-
Submitted by dingtianhong

Use the possibly more efficient ether_addr_equal() instead of memcmp().

Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
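Illustratively, the transformation looks like this (the compared fields here are placeholders, not necessarily the driver's):

    /* before */
    if (memcmp(port->raddr, skb->data, ETH_ALEN) == 0)
            return port;

    /* after: identical semantics for 6-byte MAC addresses, potentially cheaper */
    if (ether_addr_equal(port->raddr, skb->data))
            return port;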
-
- 11 Jul 2013, 1 commit
-
-
Submitted by Dave Kleikamp

The missing call to unregister_netdev() leaves the interface active after the driver is unloaded by rmmod.

Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 Feb 2013, 1 commit
-
-
Submitted by Sasha Levin

I'm not sure why, but the hlist for-each-entry iterators were conceived

        list_for_each_entry(pos, head, member)

The hlist ones were greedy and wanted an extra parameter:

        hlist_for_each_entry(tpos, pos, head, member)

Why did they need an extra pos parameter? I'm not quite sure. Not only do they not really need it, it also prevents the iterator from looking exactly like the list iterator, which is unfortunate.

Besides the semantic patch, there was some manual work required:

- Fix up the actual hlist iterators in linux/list.h
- Fix up the declaration of other iterators based on the hlist ones.
- A very small number of places were using the 'node' parameter; these were modified to use 'obj->member' instead.
- Coccinelle didn't handle the hlist_for_each_entry_safe iterator properly, so those had to be fixed up manually.

The semantic patch, which is mostly the work of Peter Senna Tschudin, is here:

    @@
    iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
    type T;
    expression a,c,d,e;
    identifier b;
    statement S;
    @@

    -T b;
    <+... when != b
    (
    hlist_for_each_entry(a,
    - b,
    c, d) S
    |
    hlist_for_each_entry_continue(a,
    - b,
    c) S
    |
    hlist_for_each_entry_from(a,
    - b,
    c) S
    |
    hlist_for_each_entry_rcu(a,
    - b,
    c, d) S
    |
    hlist_for_each_entry_rcu_bh(a,
    - b,
    c, d) S
    |
    hlist_for_each_entry_continue_rcu_bh(a,
    - b,
    c) S
    |
    for_each_busy_worker(a, c,
    - b,
    d) S
    |
    ax25_uid_for_each(a,
    - b,
    c) S
    |
    ax25_for_each(a,
    - b,
    c) S
    |
    inet_bind_bucket_for_each(a,
    - b,
    c) S
    |
    sctp_for_each_hentry(a,
    - b,
    c) S
    |
    sk_for_each(a,
    - b,
    c) S
    |
    sk_for_each_rcu(a,
    - b,
    c) S
    |
    sk_for_each_from
    -(a, b)
    +(a)
    S
    + sk_for_each_from(a) S
    |
    sk_for_each_safe(a,
    - b,
    c, d) S
    |
    sk_for_each_bound(a,
    - b,
    c) S
    |
    hlist_for_each_entry_safe(a,
    - b,
    c, d, e) S
    |
    hlist_for_each_entry_continue_rcu(a,
    - b,
    c) S
    |
    nr_neigh_for_each(a,
    - b,
    c) S
    |
    nr_neigh_for_each_safe(a,
    - b,
    c, d) S
    |
    nr_node_for_each(a,
    - b,
    c) S
    |
    nr_node_for_each_safe(a,
    - b,
    c, d) S
    |
    - for_each_gfn_sp(a, c, d, b) S
    + for_each_gfn_sp(a, c, d) S
    |
    - for_each_gfn_indirect_valid_sp(a, c, d, b) S
    + for_each_gfn_indirect_valid_sp(a, c, d) S
    |
    for_each_host(a,
    - b,
    c) S
    |
    for_each_host_safe(a,
    - b,
    c, d) S
    |
    for_each_mesh_entry(a,
    - b,
    c, d) S
    )
    ...+>

[akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
[akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foudnation.org: redo intrusive kvm changes]
Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
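Concretely, the conversion in a driver like sunvnet looks something like this (the struct fields are illustrative):

    /* before: an extra scratch hlist_node was threaded through */
    struct hlist_node *n;
    hlist_for_each_entry(port, n, &vp->port_hash[hash], hash)
            total++;

    /* after: it reads just like list_for_each_entry */
    hlist_for_each_entry(port, &vp->port_hash[hash], hash)
            total++;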
-
- 09 Jan 2013, 1 commit
-
-
Submitted by Jiri Pirko

perm_addr is initialized correctly in register_netdevice(), so initializing it in drivers is no longer needed.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
-