- 02 September 2014, 26 commits
-
-
Committed by Tom Herbert
Call skb_checksum_try_convert and skb_gro_checksum_try_convert after the checksum is found present and validated in the GRE header, for the normal and GRO paths respectively. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tom Herbert
Add support for doing CHECKSUM_UNNECESSARY to CHECKSUM_COMPLETE conversion in the UDP tunneling path. In the normal UDP path, we call skb_checksum_try_convert after locating the UDP socket. The conditions are that checksum conversion is enabled for the socket (a new flag in the UDP socket) and that the checksum field is non-zero. In the UDP GRO path, we call skb_gro_checksum_try_convert after the checksum is validated and the checksum field is non-zero. Since this is already in GRO, we assume that checksum conversion is always wanted. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
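A minimal sketch of the gating described above, using simplified stand-in types and a hypothetical per-socket convert_csum flag (the exact kernel field and helper names may differ):

```c
#include <stdbool.h>

/* Simplified stand-ins for sk_buff and the UDP socket state. */
struct u_skb  { int ip_summed; bool csum_valid; unsigned int csum; };
struct u_sock { bool convert_csum; };          /* hypothetical per-socket opt-in flag */

enum { U_CSUM_NONE, U_CSUM_UNNECESSARY, U_CSUM_COMPLETE };

/* Normal UDP receive path, after the receiving socket has been located:
 * attempt the conversion only when the socket opted in and the checksum
 * field in the UDP header is non-zero. */
static void u_udp_try_convert(struct u_skb *skb, const struct u_sock *up,
                              unsigned short uh_check, unsigned int pseudo_sum)
{
    if (!up->convert_csum || uh_check == 0)
        return;
    if (skb->ip_summed == U_CSUM_NONE && skb->csum_valid) {
        /* A verified transport checksum implies sum(data) == ~sum(pseudo header). */
        skb->csum = ~pseudo_sum;
        skb->ip_summed = U_CSUM_COMPLETE;
    }
}
```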
-
Committed by Tom Herbert
For the normal path, added skb_checksum_try_convert, which is called to attempt to convert CHECKSUM_UNNECESSARY to CHECKSUM_COMPLETE. The primary condition to allow this is that ip_summed is CHECKSUM_NONE and csum_valid is true, which is the state after consuming a CHECKSUM_UNNECESSARY. For the GRO path, added skb_gro_checksum_try_convert, which is the GRO analogue of skb_checksum_try_convert. The primary condition to allow this is that NAPI_GRO_CB(skb)->csum_cnt == 0 and NAPI_GRO_CB(skb)->csum_valid is set, which implies that we have consumed all available CHECKSUM_UNNECESSARY checksums in the GRO path. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
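A compact model of the GRO-side condition, complementing the normal-path sketch above (illustrative names, not the exact kernel API; the NAPI_GRO_CB state is reduced to two fields):

```c
#include <stdbool.h>

enum { G_CSUM_NONE, G_CSUM_UNNECESSARY, G_CSUM_COMPLETE };

struct g_skb    { int ip_summed; unsigned int csum; };
struct g_gro_cb { int csum_cnt; bool csum_valid; };   /* reduced NAPI_GRO_CB state */

/* GRO analogue of the normal-path conversion: allowed only once every
 * device-reported CHECKSUM_UNNECESSARY level has been consumed
 * (csum_cnt == 0) and a checksum was actually validated during GRO. */
static void g_gro_checksum_try_convert(struct g_skb *skb,
                                       const struct g_gro_cb *cb,
                                       unsigned int pseudo_sum)
{
    if (cb->csum_cnt != 0 || !cb->csum_valid)
        return;
    skb->csum = ~pseudo_sum;              /* same folding idea as the normal path */
    skb->ip_summed = G_CSUM_COMPLETE;
}
```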
-
Committed by Tom Herbert
This flag indicates that an invalid checksum was detected in the packet. A __skb_mark_checksum_bad helper function was added to set it. Checksums can be marked bad from a driver or from the GRO path (the latter is implemented in this patch). csum_bad is checked in __skb_checksum_validate_complete (i.e. when that is called with ip_summed == CHECKSUM_NONE). csum_bad works in conjunction with the ip_summed value. When ip_summed is CHECKSUM_NONE and csum_bad is set, the first (or next) checksum encountered in the packet is bad. When ip_summed is CHECKSUM_UNNECESSARY, the first checksum after the last validated one is bad. For example, if ip_summed == CHECKSUM_UNNECESSARY, csum_level == 1, and csum_bad is set, then the third checksum in the packet is bad. In the normal path, the packet will be dropped when processing the protocol layer of the bad checksum: __skb_decr_checksum_unnecessary is called twice for the good checksums, changing ip_summed to CHECKSUM_NONE, so that __skb_checksum_validate_complete is called to validate the third checksum, and that will fail since csum_bad is set. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
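The worked example from the log can be checked with a small model (stand-in types and names; the real logic lives in the skbuff checksum helpers):

```c
#include <assert.h>
#include <stdbool.h>

enum { B_CSUM_NONE, B_CSUM_UNNECESSARY };

struct b_skb { int ip_summed; int csum_level; bool csum_bad; };

/* Is the checksum the current layer is about to validate already known bad?
 * csum_bad marks the checksum right after the ones the device validated. */
static bool b_next_checksum_known_bad(const struct b_skb *skb)
{
    return skb->csum_bad && skb->ip_summed == B_CSUM_NONE;
}

int main(void)
{
    /* Example from the log: CHECKSUM_UNNECESSARY, csum_level == 1, csum_bad
     * set => the third checksum in the packet is the bad one. */
    struct b_skb skb = { B_CSUM_UNNECESSARY, 1, true };

    assert(!b_next_checksum_known_bad(&skb));   /* first two checksums are good  */
    skb.ip_summed = B_CSUM_NONE;                /* both good levels consumed     */
    assert(b_next_checksum_known_bad(&skb));    /* validation of the third fails */
    return 0;
}
```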
-
Committed by hayeswang
The variable "rx_buf_sz" is used for both Tx and Rx buffers. Replace it with "agg_buf_sz". Signed-off-by: Hayes Wang <hayeswang@realtek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli
drivers/net/phy/mdio-bcm-unimac.c:195:37-38: unimac_mdio_ids is not NULL terminated at line 195. Make sure of_device_id tables are NULL terminated. Generated by: scripts/coccinelle/misc/of_table.cocci Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli
net/dsa/dsa.c:624:20: sparse: symbol 'dsa_pack_type' was not declared. Should it be static? Fixes: 3e8a72d1 ("net: dsa: reduce number of protocol hooks") Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Nikolay Aleksandrov
This patch adds slave_changelink support to the bonding driver and uses it to allow changing the queue_id of enslaved devices via netlink. It sets slave_maxtype and uses bond_changelink as a prototype for bond_slave_changelink. Example/test command after the iproute2 patch: ip link set eth0 type bond_slave queue_id 10 CC: David S. Miller <davem@davemloft.net> CC: Jay Vosburgh <j.vosburgh@gmail.com> CC: Veaceslav Falico <vfalico@gmail.com> CC: Andy Gospodarek <andy@greyhouse.net> Suggested-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Acked-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by stephen hemminger
Fix places where there is a space before a tab, long lines, awkward "if(){" placement, double spacing, etc. Add a blank line after declarations/initializations. Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli
When Broadcom tags are enabled, e.g. when interfaced to an Ethernet switch, make sure that we tell the RXCHK engine that it should expect a 4-byte Broadcom tag after the Ethernet MAC source address. Use netdev_uses_dsa() to check for that condition, since it tells us whether a switch is attached to our network interface. Fixes: 80105bef ("net: systemport: add Broadcom SYSTEMPORT Ethernet MAC driver") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jesper Dangaard Brouer
When testing the TX limits of the stack, it is useful to be able to disable the do_gettimeofday() timestamping on every packet. This implements a pktgen flag NO_TIMESTAMP which disables this call to do_gettimeofday(). The performance change on my system (E5-2695), with skb_clone=0, goes from TX 2,423,751 pps to 2,567,165 pps with flag NO_TIMESTAMP. Thus, the saving from skipping do_gettimeofday() is approx. 23 nanoseconds per packet. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
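A user-space sketch of the flag's effect (the payload layout and names are simplified stand-ins for pktgen's internals): when timestamping is disabled, the per-packet time-of-day call is skipped entirely, which is where the roughly 23 ns per packet comes from (1/2,423,751 s − 1/2,567,165 s ≈ 23 ns).

```c
#include <stdbool.h>
#include <string.h>
#include <sys/time.h>

/* Timestamp field carried in the pktgen payload (illustrative layout). */
struct pkt_ts { unsigned int tv_sec, tv_usec; };

static void fill_pkt_timestamp(struct pkt_ts *ts, bool no_timestamp)
{
    if (no_timestamp) {
        /* NO_TIMESTAMP set: skip the expensive per-packet time lookup. */
        memset(ts, 0, sizeof(*ts));
        return;
    }
    struct timeval tv;
    gettimeofday(&tv, NULL);   /* user-space stand-in for do_gettimeofday() */
    ts->tv_sec  = (unsigned int)tv.tv_sec;
    ts->tv_usec = (unsigned int)tv.tv_usec;
}
```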
-
Committed by Dmitry Kravkov
Set the correct bit for the packed description. Introduced in e42780b6 ("bnx2x: Utilize FW 7.10.51"). Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Dmitry Kravkov <Dmitry.Kravkov@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dmitry Kravkov
Fixes an incorrectly defined struct in the FW HSI for the BE platform. Affects tunneling, tx-switching and anti-spoofing. Introduced in e42780b6 ("bnx2x: Utilize FW 7.10.51"). Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Dmitry Kravkov <Dmitry.Kravkov@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Erik Hugne
TIPC name table updates are distributed asynchronously in a cluster, entailing a risk of certain race conditions. E.g., if two nodes simultaneously issue conflicting (overlapping) publications, this may not be detected until both publications have reached a third node, in which case one of the publications will be silently dropped on that node. Hence, we end up with an inconsistent name table. In most cases this conflict is just a temporary race, e.g. one node is issuing a publication under the assumption that a previous, conflicting publication has already been withdrawn by the other node. However, because of the (RTT-related) distributed update delay, this may not yet hold true on all nodes. The symptom of this failure is a syslog message: "tipc: Cannot publish {%u,%u,%u}, overlap error". In this commit we add a resiliency queue at the receiving end of the name table distributor. When insertion of an arriving publication fails, we retain it in this queue for a short amount of time, assuming that another update will arrive very soon and clear the conflict. If that happens, we insert the publication; otherwise we drop it. The (configurable) retention value defaults to 2000 ms. Knowing from experience that the situation described above is extremely rare, there is no risk that the queue will accumulate any large number of items. Signed-off-by: Erik Hugne <erik.hugne@ericsson.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
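A rough sketch of the deferred-insert idea (all names and the list structure are illustrative; only the 2000 ms default comes from the log): failed publications are parked with a deadline and retried, and dropped only once the deadline has passed.

```c
#include <stdbool.h>
#include <stdlib.h>

#define RETENTION_MS 2000                 /* configurable; defaults to 2000 ms */

struct publ { unsigned int type, lower, upper; };   /* illustrative publication key */

/* Stand-in for the real name-table insert; returns false on overlap error. */
static bool name_table_insert(struct publ *p) { (void)p; return false; }

struct deferred {
    struct publ *p;
    long expires_ms;                      /* when to give up on this item */
    struct deferred *next;
};

static struct deferred *defer_head;

/* Called when inserting an arriving publication fails: park it instead of
 * silently dropping it, assuming the conflicting withdrawal is on its way. */
static void defer_publication(struct publ *p, long now_ms)
{
    struct deferred *d = malloc(sizeof(*d));
    if (!d)
        return;                           /* worst case: behave like before and drop */
    d->p = p;
    d->expires_ms = now_ms + RETENTION_MS;
    d->next = defer_head;
    defer_head = d;
}

/* Retried periodically: insert what now fits, drop what has expired. */
static void process_deferred(long now_ms)
{
    struct deferred **pp = &defer_head;

    while (*pp) {
        struct deferred *d = *pp;

        if (name_table_insert(d->p) || now_ms >= d->expires_ms) {
            *pp = d->next;
            free(d);
        } else {
            pp = &d->next;
        }
    }
}
```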
-
Committed by Erik Hugne
We need to perform the same actions when processing deferred name table updates, so this functionality is moved to a separate function. Signed-off-by: Erik Hugne <erik.hugne@ericsson.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by hayeswang
Because the Tx path supports queue stopping and aggregation, we don't need many Tx buffers. Change the Tx buffer count from 10 to 4 to reduce memory usage. This could save 6 * 16 KB of memory. Signed-off-by: Hayes Wang <hayeswang@realtek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
David Miller says:
====================
net: Make dev_hard_start_xmit() work fundamentally on lists

After this patch set, dev_hard_start_xmit() will work fundamentally on any and all SKB lists. This opens the path for a clean implementation of pulling multiple packets out during qdisc_restart(), and then passing that blob in one shot to dev_hard_start_xmit(). There were two main architectural blockers to this:

1) GSO handling: we kept the original GSO head SKB around simply because dev_hard_start_xmit() had no way to communicate to the caller how far into the segmented list it was able to go. Now it can, so the head GSO SKB can be liberated immediately. All of the special GSO head SKB destructor et al. handling goes away too.

2) Validation of VLAN, CSUM, and segmentation characteristics was being performed inside of dev_hard_start_xmit(). If we want to truly batch, we have to let the higher levels do this. In particular, this is now dequeue_skb()'s job.

With those two issues out of the way, it should now be trivial to build experiments on top of this patch set; all of the framework should be there now. You could do something as simple as:

	skb = q->dequeue(q);
	if (skb)
		skb = validate_xmit_skb(skb, qdisc_dev(q));
	if (skb) {
		struct sk_buff *new, *head = skb;
		int limit = 5;

		do {
			new = q->dequeue(q);
			if (new)
				new = validate_xmit_skb(new, qdisc_dev(q));
			if (new) {
				skb->next = new;
				skb = new;
			}
		} while (new && --limit);

		skb = head;
	}

inside of the else branch of dequeue_skb().
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Now fundamentally we can process lists of SKBs as cheaply as single packets. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Just maintain the list properly by returning the head of the remaining SKB list from dev_hard_start_xmit(). Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
dev_hard_start_xmit() does two things: it first validates and canonicalizes the SKB, then it actually sends it. Make a set of helper functions for doing the first part. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
For now it will always be false. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
There is a slight policy change happening here as well. The previous code would drop the entire rest of the GSO segment list if any segment got, for example, a congestion notification. That makes no sense: anything at NET_XMIT_MASK and below is something like congestion or policing, and in the congestion case it doesn't even mean the packet was actually dropped. Just continue until dev_xmit_complete() evaluates to false. Signed-off-by: David S. Miller <davem@davemloft.net>
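A simplified model of the new loop policy (illustrative types and return codes; in the kernel the actual codes are the NET_XMIT_*/NETDEV_TX_* values and the check is dev_xmit_complete()): soft codes keep the walk going, and only a real transmit failure stops it.

```c
#include <stdbool.h>

struct x_skb { struct x_skb *next; };

/* Illustrative return codes: everything at or below the "soft" values
 * (e.g. congestion notification) still counts as the packet having been
 * consumed; only a hard failure such as BUSY stops the list walk. */
enum { X_TX_OK = 0, X_TX_CN = 1, X_TX_BUSY = 0x10 };

static bool x_xmit_complete(int rc) { return rc <= X_TX_CN; }

/* Stand-in for transmitting a single skb. */
static int x_xmit_one(struct x_skb *skb) { (void)skb; return X_TX_OK; }

static int x_xmit_list(struct x_skb *first)
{
    int rc = X_TX_OK;
    struct x_skb *skb = first;

    while (skb) {
        struct x_skb *next = skb->next;

        rc = x_xmit_one(skb);
        if (!x_xmit_complete(rc))
            break;                /* real failure: stop, the rest stays queued */
        skb = next;               /* soft codes (e.g. congestion) keep going   */
    }
    return rc;
}
```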
-
Committed by David S. Miller
Hopefully making the code a bit easier to read and digest. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
That way we don't have to audit every call site to make sure it is doing this properly. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 August 2014, 14 commits
-
-
Committed by Ley Foon Tan
Warning: drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c:122:41: sparse: cast removes address space of expression; drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c:122:38: sparse: incorrect type in assignment (different address spaces) Signed-off-by: Ley Foon Tan <lftan@altera.com> Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Tom Herbert says:
====================
net: Checksum offload changes - Part VI

I am working on overhauling RX checksum offload. Goals of this effort are:

- Specify what exactly it means when a driver returns CHECKSUM_UNNECESSARY
- Preserve CHECKSUM_COMPLETE through encapsulation layers
- Don't do skb_checksum more than once per packet
- Unify GRO and non-GRO csum verification as much as possible
- Unify the checksum functions (checksum_init)
- Simplify code

What is in this sixth patch set:

- Clarify the specific requirements of devices returning CHECKSUM_UNNECESSARY (comments in skbuff.h).
- Add a csum_level field to skbuff. This is used to express how many checksums are covered by CHECKSUM_UNNECESSARY (stores n - 1).
- Change __skb_checksum_validate_needed to "consume" each checksum as indicated by csum_level as layers of the packet are parsed.
- Remove skb_pop_rcv_encapsulation, no longer needed in the new csum_level model.
- Allow the GRO path to "consume" checksums provided in CHECKSUM_UNNECESSARY and to report newly verified checksums for use in the normal-path fallback.
- Add proper support to SCTP to accept CHECKSUM_UNNECESSARY to validate the header CRC.
- Modify drivers to set skb->csum_level instead of setting skb->encapsulation to indicate validation of an encapsulated checksum on receive.

v2: Allocate a new 16 bits for flags in skbuff.

Please review carefully and test if possible; mucking with basic checksum functions is always a little precarious :-)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tom Herbert
Set skb->csum_level instead of skb->encapsulation when indicating CHECKSUM_UNNECESSARY for an encapsulated checksum. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
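A generic before/after sketch of a driver receive handler marking an inner (encapsulated) checksum as verified; the handler itself and the struct are hypothetical simplifications, only the csum_level/encapsulation distinction comes from the log.

```c
enum { D_CHECKSUM_UNNECESSARY = 1 };

/* Simplified skb model: csum_level counts extra verified checksums beyond
 * the first one covered by CHECKSUM_UNNECESSARY. */
struct d_skb {
    int ip_summed;
    unsigned int csum_level    : 2;
    unsigned int encapsulation : 1;
};

/* Hypothetical driver RX completion where the hardware verified both the
 * outer and the inner (tunnel) checksum. */
static void d_rx_mark_inner_csum_ok(struct d_skb *skb)
{
    skb->ip_summed = D_CHECKSUM_UNNECESSARY;
    /* old: skb->encapsulation = 1;  -- no longer used to report checksum depth */
    skb->csum_level = 1;             /* one verified checksum beyond the outer one */
}
```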
-
Committed by Tom Herbert
Set skb->csum_level instead of skb->encapsulation when indicating CHECKSUM_UNNECESSARY for an encapsulated checksum. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tom Herbert
Set skb->csum_level instead of skb->encapsulation when indicating CHECKSUM_UNNECESSARY for an encapsulated checksum. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tom Herbert
Set skb->csum_level instead of skb->encapsulation when indicating CHECKSUM_UNNECESSARY for an encapsulated checksum. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tom Herbert
Set skb->csum_level instead of skb->encapsulation when indicating CHECKSUM_UNNECESSARY for an encapsulated checksum. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tom Herbert
CHECKSUM_UNNECESSARY may be applied to the SCTP CRC, so we need to account for this appropriately by decrementing csum_level. This is done by calling __skb_dec_checksum_unnecessary. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tom Herbert
Allow the GRO path to "consume" checksums provided in CHECKSUM_UNNECESSARY and to report newly verified checksums for use in the fallback to the normal path. Change the GRO checksum path to track csum_level using a csum_cnt field in NAPI_GRO_CB. On GRO initialization, if ip_summed is CHECKSUM_UNNECESSARY, set NAPI_GRO_CB(skb)->csum_cnt to skb->csum_level + 1. For each checksum verified, decrement NAPI_GRO_CB(skb)->csum_cnt while it is greater than zero. If a checksum is verified and NAPI_GRO_CB(skb)->csum_cnt == 0, we have verified a deeper checksum than originally indicated in the skbuff, so increment csum_level (or initialize to CHECKSUM_UNNECESSARY if ip_summed is CHECKSUM_NONE or CHECKSUM_COMPLETE). Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tom Herbert
This patch:
- Clarifies the specific requirements of devices returning CHECKSUM_UNNECESSARY (comments in skbuff.h).
- Adds a csum_level field to skbuff. This is used to express how many checksums are covered by CHECKSUM_UNNECESSARY (stores n - 1). It replaces the overloading of skb->encapsulation; that field is now only used to indicate that the inner headers are valid.
- Changes __skb_checksum_validate_needed to "consume" each checksum, as indicated by csum_level, as layers of the packet are parsed.
- Removes skb_pop_rcv_encapsulation, which is no longer needed in the new csum_level model.
Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
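A minimal model of the "consume" step (simplified fields; the actual helpers are __skb_checksum_validate_needed and __skb_decr_checksum_unnecessary):

```c
#include <stdbool.h>

enum { V_CSUM_NONE, V_CSUM_UNNECESSARY };

struct v_skb { int ip_summed; int csum_level; };

/* Returns true if the current layer still has to validate its checksum in
 * software; otherwise one CHECKSUM_UNNECESSARY level is consumed. */
static bool v_checksum_validate_needed(struct v_skb *skb)
{
    if (skb->ip_summed == V_CSUM_UNNECESSARY) {
        if (skb->csum_level == 0)
            skb->ip_summed = V_CSUM_NONE;   /* last covered checksum consumed */
        else
            skb->csum_level--;
        return false;                        /* the device already verified it */
    }
    return true;
}
```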
-
Committed by Tom Herbert
Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Rick Jones
The be2net driver was still using dev_kfree_skb_any() in a "normal" skb freeing path. This rather clutters perf top -G -e skb_kfree_skb profiling. Signed-off-by: Rick Jones <rick.jones2@hp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuval Mintz
This fixes a sparse warning introduced recently by commit eeed018c ("bnx2x: Add timestamping and PTP hardware clock support"), as well as another unrelated sparse endianness issue. Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Rasmus Villemoes
The header file include/rxrpc/types.h does not seem to be used anywhere. It was orphaned by commit 63b6be55 ("[AF_RXRPC]: Delete the old RxRPC code"). Remove it. Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
-