- 15 December 2017, 3 commits
-
-
By Sean Wang
I work for MediaTek, maintaining the SoCs targeting home gateways, and will keep extending and testing the MediaTek switch functionality. Signed-off-by: Sean Wang <sean.wang@mediatek.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Sean Wang
In order to let the MT7530 switch correctly recognize egress packets carrying both the special tag and a VLAN tag, the special tag information should be carried in the existing VLAN tag. On the other hand, no extra handling is needed for ingress packets when a VLAN tag is present, since the VLAN tag can be placed after the special tag and then parsed the existing way. Signed-off-by: Sean Wang <sean.wang@mediatek.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Sean Wang
MT7530 can treat each port as either a VLAN-unaware port or a VLAN-aware port by placing the ingress port into port matrix mode or port security mode, respectively. Each port acts as a VLAN-unaware port when the device is initially created and whenever a port joins or leaves a bridge at runtime. This patch fills in the callbacks required for VLAN operations, which is achieved by extending a port into port security mode when it is configured as VLAN-aware. That mode makes the port able to recognize the VID of incoming packets and look it up in the VLAN table to validate the packet and decide which port it should go to. VIDs in the range 1 to 4094 are valid for the hardware. Signed-off-by: Sean Wang <sean.wang@mediatek.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 December 2017, 37 commits
-
-
By Egil Hjelmeland
Simplify lan9303_indirect_phy_wait_for_completion() and lan9303_switch_wait_for_completion() by using a new function, lan9303_read_wait(). Changes v1 -> v2: - param 'mask' type u32 - removed param 'value' (will probably never be used) - add newline before return Signed-off-by: Egil Hjelmeland <privat@egil-hjelmeland.no> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
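As an illustration only (the function name and the exact polling condition below are assumptions, not the driver's actual code), a regmap-based read-wait helper of this shape polls a register until the bits in 'mask' clear or a timeout expires:

    #include <linux/delay.h>
    #include <linux/errno.h>
    #include <linux/regmap.h>
    #include <linux/types.h>

    /* Poll 'offset' until the bits in 'mask' are cleared, or give up. */
    static int demo_read_wait(struct regmap *regmap, unsigned int offset, u32 mask)
    {
        int i;

        for (i = 0; i < 25; i++) {
            unsigned int reg;
            int ret;

            ret = regmap_read(regmap, offset, &reg);
            if (ret)
                return ret;
            if (!(reg & mask))
                return 0;
            usleep_range(1000, 2000);
        }

        return -ETIMEDOUT;
    }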
-
By David S. Miller
Stephen Hemminger says: ==================== hv_netvsc: minor changes This includes minor cleanup of code in the send and receive paths and also a new statistic to check for allocation failures. It also eliminates some extra RCU usage where it is not needed. There is a theoretical bug where buffered data could be blocked for longer than necessary if the ring buffer got full; this has not been seen in the wild and was found by inspection. The reference count between the net device and the internal RNDIS object is not needed. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Stephen Hemminger
If the transmit queue is known to be full, then don't keep aggregating data. And the cp_partial flag, which indicates that the current aggregation buffer is full, can be folded in to avoid more conditionals. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Stephen Hemminger
There is only ever a single instance of the network device object referencing the internal rndis object. Therefore the open_cnt atomic is not necessary. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Stephen Hemminger
The netvsc_receive_callback function was using RCU to find the appropriate underlying netvsc_device. Since the calling function already had that pointer, this was unnecessary. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Stephen Hemminger
The caller (netvsc_receive) already has the net device pointer, and should just pass that to the functions rather than the hyperv device. This eliminates several impossible error paths in the process. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Stephen Hemminger
When an skb cannot be allocated, update the ethtool statistics rather than rx_dropped, which is intended for netif_receive. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Stephen Hemminger
The only caller does not care about the return value. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By David S. Miller
Florian Fainelli says: ==================== PHYLINK preparatory patches for DSA In preparation for having DSA migrate to PHYLINK, I had to come up with a number of preparatory patches: - we need to be able to pass phy_flags from an external component calling phylink_of_phy_connect() - DSA tries to connect through OF first, then falls back to using its own internal MDIO bus; in that case we would show an error and also not know what the correct phy_interface_t would be, so instead use the one provided by the PHY device/driver - Finally, bcm_sf2 makes use of all possible PHYs out there: internal, external, fixed, and MoCA; the latter requires a bit of help to signal link notifications through an MMIO interrupt, as well as to report a correct PORT type Changes in v2: - rebased against latest net-next/master - added kernel doc documentation - dropped error message in phylink_of_phy_connect() as suggested by Russell ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Florian Fainelli
Similarly to what PHYLIB already does, make sure that PHY_INTERFACE_MODE_MOCA is reported as PORT_BNC. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Florian Fainelli
phylink_get_fixed_state() currently consults an optional "link_gpio" GPIO descriptor; expand this mechanism to allow specifying a custom callback. This is necessary to support out-of-band link notification (e.g. from an interrupt within an MMIO register). Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
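A rough sketch of how a MAC driver could feed such out-of-band link state to phylink; the registration call, callback signature, register offset and struct names here are assumptions based on the description above, not a verified API reference:

    #include <linux/bitops.h>
    #include <linux/io.h>
    #include <linux/netdevice.h>
    #include <linux/phylink.h>

    #define DEMO_LINK_STATUS 0x10    /* hypothetical MMIO register offset */

    struct demo_mac_priv {
        void __iomem *reg_base;
        struct phylink *phylink;
    };

    /* Resolve the current fixed-link state from an MMIO status register
     * instead of a GPIO (all names are illustrative).
     */
    static void demo_mac_fixed_state(struct net_device *dev,
                                     struct phylink_link_state *state)
    {
        struct demo_mac_priv *priv = netdev_priv(dev);

        state->link = !!(readl(priv->reg_base + DEMO_LINK_STATUS) & BIT(0));
    }

    /* In probe, after phylink_create():
     *     phylink_fixed_state_cb(priv->phylink, demo_mac_fixed_state);
     */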
-
By Florian Fainelli
Some subsystems like DSA may be trying to connect to a PHY through OF first, and then attempt a connect using a local MDIO bus. Remove the error message "unable to find PHY node" so MAC drivers can decide whether to print it or not. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Florian Fainelli
We may not always be able to resolve a correct phy_interface_t value before actually connecting to the PHY device. When that happens, just have phylink_connect_phy() utilize what the PHY device/driver provided. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Florian Fainelli
In order to let subsystems like DSA fully utilize PHYLINK, we need to be able to communicate phy_device::flags from of_phy_{connect,attach} even when using PHYLINK APIs. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Yuchung Cheng
Prior to this patch, active Fast Open is paused on a specific destination IP address if previous connections to that IP address have experienced recurring timeouts. But recent experiments by Microsoft (https://goo.gl/cykmn7) and Mozilla browsers indicate the issue is often caused by broken middle-boxes sitting close to the client. Therefore it is a much better user experience if Fast Open is disabled outright globally, to avoid experiencing further timeouts on connections toward other destinations. This patch changes the destination-IP disablement to global disablement if a connection experiences recurring timeouts or aborts due to timeout. Repeated incidents still exponentially increase the pause time, starting from an hour. This is extremely conservative but an unfortunate compromise to minimize bad experience due to broken middle-boxes. Reported-by: Dragana Damjanovic <ddamjanovic@mozilla.com> Reported-by: Patrick McManus <mcmanus@ducksong.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Reviewed-by: Wei Wang <weiwan@google.com> Reviewed-by: Neal Cardwell <ncardwell@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ivan Khoronzhuk
It's not correct to return NULL when that is actually an error, since the function returns errors in every other failure case. At the same time, the cpsw and davinci emac drivers don't check the error case while creating a channel, so they can miss an actual error. Also remove WARNs, replacing them with dev_err messages. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
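The error-reporting pattern being adopted, sketched with hypothetical names (demo_chan_create() and its structs are illustrative stand-ins, not the cpdma API): return ERR_PTR() on failure instead of NULL, and have callers check with IS_ERR():

    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/slab.h>

    #define DEMO_MAX_CHANNELS 8

    struct demo_ctlr { struct device *dev; };
    struct demo_chan { int num; };

    /* Creation returns a real error code rather than NULL. */
    static struct demo_chan *demo_chan_create(struct demo_ctlr *ctlr, int num)
    {
        struct demo_chan *chan;

        if (num >= DEMO_MAX_CHANNELS)
            return ERR_PTR(-EINVAL);

        chan = kzalloc(sizeof(*chan), GFP_KERNEL);
        if (!chan)
            return ERR_PTR(-ENOMEM);

        chan->num = num;
        return chan;
    }

    /* Caller side: the driver must now check for the error case. */
    static int demo_setup_channel(struct demo_ctlr *ctlr)
    {
        struct demo_chan *chan = demo_chan_create(ctlr, 0);

        if (IS_ERR(chan)) {
            dev_err(ctlr->dev, "channel create failed: %ld\n", PTR_ERR(chan));
            return PTR_ERR(chan);
        }
        return 0;
    }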
-
By Arjun Vynipadath
Add support for the ethtool get_module_info() and get_module_eeprom() callbacks, which dump the necessary information for an SFP. Signed-off-by: Arjun Vynipadath <arjun@chelsio.com> Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Willem de Bruijn
skb_warn_bad_offload warns when packets enter the GSO stack that require skb_checksum_help, or vice versa. Do not warn on arbitrary bad packets: packet sockets can craft many, and syzkaller was able to demonstrate another one with eth_type games. In particular, suppress the warning when segmentation returns an error, as that is for reasons other than checksum offload. See also commit 36c92474 ("net: WARN if skb_checksum_help() is called on skb requiring segmentation") for context on this warning. Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
In commit 3a9b76fd ("tcp: allow drivers to tweak TSQ logic") I gave a code sample to set sk->sk_pacing_shift that was not complete. Better to add a helper that can be used by drivers without worries, and maybe amended in the future. A wifi driver might use it from its ndo_start_xmit(). The following call would set up TCP to allow up to ~8ms of queued data per flow: sk_pacing_shift_update(skb->sk, 7); Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
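A sketch of how a wifi driver might call the helper from its transmit path (the driver function itself is hypothetical); the TSQ limit is derived from sk_pacing_rate >> sk_pacing_shift, so the default shift of 10 corresponds to roughly 1ms of queued data and a shift of 7 to roughly 8ms:

    #include <linux/netdevice.h>
    #include <net/sock.h>

    /* Hypothetical wifi driver xmit path: relax the TSQ limit for flows
     * sent through this device so up to ~8ms of data may be queued.
     */
    static netdev_tx_t demo_wifi_start_xmit(struct sk_buff *skb,
                                            struct net_device *dev)
    {
        if (skb->sk)
            sk_pacing_shift_update(skb->sk, 7);

        /* ... hand the frame to the hardware queues ... */
        return NETDEV_TX_OK;
    }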
-
By Nikolay Aleksandrov
Before this patch the bridge used a fixed 256-element hash table, which was fine for small use cases (in my tests it starts to degrade above 1000 entries), but it wasn't enough for medium or large scale deployments. Modern setups have thousands of participants in a single bridge; even just enabling vlans and adding a few thousand vlan entries will cause a few thousand fdbs to be automatically inserted per participating port. So we need to scale the fdb table considerably to cope with modern workloads, and this patch converts it to use a rhashtable for its operations, thus improving the bridge's scalability. Tests show the following results (10 runs each): at up to 1000 entries rhashtable is ~3% slower, at 2000 rhashtable is 30% faster, at 3000 it is 2 times faster and at 30000 it is 50 times faster. Obviously this happens because of the properties of the two constructs and is expected; rhashtable keeps pretty much constant time even with 10000000 entries (tested), while the fixed hash table struggles considerably even above 10000. As a side effect this also reduces the net_bridge struct size from 3248 bytes to 1344 bytes. Also note that the key struct is 8 bytes. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
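A minimal sketch of the kind of rhashtable setup described, keyed on the 8-byte (MAC address, VLAN id) tuple; the struct and field names here are illustrative, not the bridge's actual ones:

    #include <linux/etherdevice.h>
    #include <linux/rhashtable.h>

    struct demo_fdb_key {
        unsigned char addr[ETH_ALEN];
        __be16 vlan_id;
    };                                  /* 6 + 2 = 8 bytes */

    struct demo_fdb_entry {
        struct rhash_head node;         /* linkage into the rhashtable */
        struct demo_fdb_key key;
        /* ... forwarding state ... */
    };

    static const struct rhashtable_params demo_fdb_params = {
        .head_offset = offsetof(struct demo_fdb_entry, node),
        .key_offset = offsetof(struct demo_fdb_entry, key),
        .key_len = sizeof(struct demo_fdb_key),
        .automatic_shrinking = true,
    };

    /* Lookups then use rhashtable_lookup_fast(&tbl, &key, demo_fdb_params),
     * which stays close to constant time as the table grows.
     */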
-
By Russell King
Remove XGMII as an option for the 88x3310 PHY driver, as the PHY doesn't support XGMII's 32-bit data lanes. It supports USXGMII, which is not XGMII, but a single-lane serdes interface - see https://developer.cisco.com/site/usgmii-usxgmii/ Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By David S. Miller
Heiner Kallweit says: ==================== r8169: extend PCI core and switch to device-managed functions in probe The probe error path and the remove callback can be significantly simplified by using device-managed functions. To be able to do this in the r8169 driver we first need a device-managed version of pci_set_mwi. v2: Change patch 1 based on Björn's review comments and add his Acked-by. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Heiner Kallweit
netif_napi_del is called implicitly by free_netdev, therefore we don't have to do it explicitly. When the probe error path is reached, the net_device isn't registered yet, so reordering the call to netif_napi_del shouldn't cause any issues. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Heiner Kallweit
Simplify the probe error path and the remove callback by using device-managed functions. rtl_disable_msi isn't needed any longer because the release callback of pcim_enable_device does this implicitly. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Heiner Kallweit
Add pcim_set_mwi(), a device-managed version of pci_set_mwi(). The first user is the Realtek r8169 driver. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
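For illustration, the general devres pattern behind a device-managed pci_set_mwi() wrapper looks roughly like this; it is a sketch of the approach, not necessarily the exact upstream implementation of pcim_set_mwi():

    #include <linux/device.h>
    #include <linux/pci.h>

    /* devres release callback: undo MWI when the driver detaches. */
    static void demo_mwi_release(struct device *dev, void *res)
    {
        pci_clear_mwi(*(struct pci_dev **)res);
    }

    static int demo_pcim_set_mwi(struct pci_dev *pdev)
    {
        struct pci_dev **dr;
        int ret;

        dr = devres_alloc(demo_mwi_release, sizeof(*dr), GFP_KERNEL);
        if (!dr)
            return -ENOMEM;

        ret = pci_set_mwi(pdev);
        if (ret) {
            devres_free(dr);
            return ret;
        }

        *dr = pdev;
        devres_add(&pdev->dev, dr);   /* MWI is now undone automatically */
        return 0;
    }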
-
By Eric Dumazet
First, rename __inet_twsk_hashdance() to inet_twsk_hashdance(). Then, remove one inet_twsk_put() by setting tw_refcnt to 3 instead of 4, while adding a fat warning that we do not have the right to access tw anymore after inet_twsk_hashdance(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By David S. Miller
Subash Abhinov Kasiviswanathan says: ==================== net: qualcomm: rmnet: Configuration options This series adds support for configuring features on rmnet devices. The rmnet-specific features to be configured here are aggregation and control commands. Patch 1 is a cleanup of return codes in the transmit path. Patch 2 removes some redundant ingress and egress macros. Patch 3 restricts the creation of rmnet devs to one dev per mux id for a given real dev. Patch 4 adds ethernet data path support. Patches 5-6 add support for configuring features on new and existing rmnet devices. v1->v2: The memory leak fixed as part of patch 1 is merged separately as a896d94abd2c ("net: qualcomm: rmnet: Fix leak on transmit failure"). Fix a use after free in patch 4 if a packet with headroom less than the ethernet header length is received. v2->v3: Fix formatting problem in patch 5 in the return statement. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Add an option to configure the mux id, aggregation and command features for existing rmnet devices. Implement the changelink netlink operation for this. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Add an option to configure the rmnet aggregation and command features on device creation. This is achieved by using the vlan flags option. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Add support to send and receive packets over ethernet. An example of usage is testing the data path on UML. This can be achieved by setting up two UML instances in multicast mode and associating rmnet over the UML ethernet device. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Upon de-multiplexing data from one real dev, the packets can be sent to a unique rmnet device for a given mux id. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Multiplexing is always enabled when transmitting from a rmnet device, so remove the redundant egress macros. De-multiplexing is always enabled when receiving packets from a rmnet device, so remove those ingress macros. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Only the success and consumed entries were actually in use. Use standard error codes instead. Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Neal Cardwell
This patch enables the tail loss probe in cwnd reduction (CWR) state to detect potential losses. Prior to this patch, since the sender uses PRR to determine the cwnd in CWR state, the combination of CWR+PRR plus tcp_tso_should_defer() could cause unnecessary stalls upon losses: PRR makes cwnd so gentle that tcp_tso_should_defer() defers sending to wait for more ACKs. The ACKs may not come due to packet losses. Disallowing TLP when there is unused cwnd had the primary effect of disallowing TLP when there is TSO deferral, Nagle deferral, or we hit the rwin limit. Because basically every application write() or incoming ACK will cause us to run tcp_write_xmit() to see if we can send more, and then if we sent something we call tcp_schedule_loss_probe() to see if we should schedule a TLP. At that point, there are a few common reasons why some cwnd budget could still be unused: (a) rwin limit (b) nagle check (c) TSO deferral (d) TSQ. For (d), after the next packet tx completion the TSQ mechanism will allow us to send more packets, so we don't really need a TLP (in practice it shouldn't matter whether we schedule one or not). But for (a), (b), (c) the sender won't send any more packets until it gets another ACK. But if the whole flight was lost, or all the ACKs were lost, then we won't get any more ACKs, and ideally we should schedule and send a TLP to get more feedback. In particular, for a long time we have wanted some kind of timer for TSO deferral, and at least this would give us some kind of timer. Reported-by: Steve Ibanez <sibanez@stanford.edu> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Reviewed-by: Nandita Dukkipati <nanditad@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Cong Wang
Since we now hold the RTNL lock in tc_action_net_exit(), it is good to batch them to speed up tc action dismantle. Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: Jiri Pirko <jiri@resnulli.us> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By David S. Miller
Stephen Hemminger says: ==================== hv_netvsc: Fix default and limit of recv buffer The default for receive buffer descriptors is not correct; it should match the default receive buffer size, and the upper limit of the receive buffer size is too low. Also, for older Windows Server hosts, a different lower-limit check is necessary, otherwise the buffer request will be rejected by the host, resulting in the vNIC not coming up. This patch set corrects these problems. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Haiyang Zhang
The values were not computed correctly. There is no significant visible impact, though. The intended size of the RX buffer is 16 MB, and the default slot size is 1728. So, NETVSC_DEFAULT_RX should be 16*1024*1024 / 1728 = 9709. The intended size of the TX buffer is 1 MB, and the slot size is 6144. So, NETVSC_DEFAULT_TX should be 1024*1024 / 6144 = 170. The patch puts the formula directly into the macros, and moves them to hyperv_net.h, together with related macros. Fixes: 5023a6db ("netvsc: increase default receive buffer size") Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
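The formula-in-macro arrangement described above looks roughly like this; the buffer/slot macro names are assumptions based on the commit text rather than a copy of hyperv_net.h:

    #define DEMO_RECEIVE_BUFFER_SIZE  (1024 * 1024 * 16)  /* 16 MB */
    #define DEMO_SEND_BUFFER_SIZE     (1024 * 1024)       /* 1 MB  */
    #define DEMO_RECV_SECTION_SIZE    1728                /* RX slot size */
    #define DEMO_SEND_SECTION_SIZE    6144                /* TX slot size */

    /* 16 MB / 1728 = 9709 RX slots, 1 MB / 6144 = 170 TX slots */
    #define NETVSC_DEFAULT_RX  (DEMO_RECEIVE_BUFFER_SIZE / DEMO_RECV_SECTION_SIZE)
    #define NETVSC_DEFAULT_TX  (DEMO_SEND_BUFFER_SIZE / DEMO_SEND_SECTION_SIZE)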
-