- 16 January 2017, 2 commits
-
Committed by Dan Carpenter
The break statement should be indented one more tab. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by jpinto
This driver is no longer necessary since it was merged into stmmac. Acked-by: Lars Persson <larper@axis.com> Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 January 2017, 2 commits
-
Committed by David S. Miller
Merge tag 'mac80211-next-for-davem-2017-01-13' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next Johannes Berg says: ==================== For 4.11, we seem to have more than in the past few releases:
* socket owner support for connections, so that when the wifi manager (e.g. wpa_supplicant) is killed, connections are torn down - wpa_supplicant is critical to managing certain operations, and can opt in to this where applicable
* minstrel & minstrel_ht updates to be more efficient (in time and space)
* set wifi_acked/wifi_acked_valid for skb->destructor use in the kernel, which was already available to userspace
* don't indicate new mesh peers that might be used if there's no room to add them
* multicast-to-unicast support in mac80211, for better medium usage (since unicast frames can use *much* higher rates, by ~3 orders of magnitude)
* add API to read channel (frequency) limitations from DT
* add infrastructure to allow randomizing public action frames for MAC address privacy (still requires driver support)
* many cleanups and small improvements/fixes across the board
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Shyam Saini
The region set by the call to memset is immediately overwritten by the subsequent call to memcpy, which makes the memset redundant. Also remove the memset(&info, 0, sizeof(info)) on line 398, because info is memcpy()'ed to before being used in the loop and it isn't used outside of the loop. Signed-off-by: Shyam Saini <mayhs11saini@gmail.com> Reviewed-by: Tobias Klauser <tklauser@distanz.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
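For illustration, a minimal sketch of the pattern this kind of cleanup removes; the struct and function names below are made up, not the driver's actual code:

#include <string.h>

struct info_example {
	char mac[6];
	int flags;
};

static void copy_info(struct info_example *dst, const struct info_example *src)
{
	/* memset(dst, 0, sizeof(*dst));  <-- redundant: every byte is   */
	/* overwritten by the memcpy() immediately below                 */
	memcpy(dst, src, sizeof(*dst));
}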
-
- 14 January 2017, 23 commits
-
Committed by Ganesh Goudar
Do not count pause frames as part of the general TX/RX frame counters. Based on the original work of Casey Leedom <leedom@chelsio.com>. Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Michael Chan says: ==================== bnxt_en: Misc. updates for net-next. Miscellaneous updates including a firmware spec update, ethtool -p LED blinking support, an RDMA SRIOV config callback, and minor fixes. v2: Dropped the DCBX RoCE app TLV patch until the ETH_P_IBOE RDMA patch is merged. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Add the ulp_sriov_cfg callbacks when the number of VFs is changing. This allows the RDMA driver to provision RDMA resources for the VFs. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Add LED blinking code to support ethtool -p on the PF. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
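As background, a hedged sketch of how ethtool -p support is commonly wired up through the .set_phys_id ethtool operation; the bnxt driver's real callback issues firmware LED commands, which are only stubbed here, and all names are illustrative:

#include <linux/ethtool.h>
#include <linux/netdevice.h>

static int example_set_phys_id(struct net_device *dev,
			       enum ethtool_phys_id_state state)
{
	switch (state) {
	case ETHTOOL_ID_ACTIVE:
		return 1;	/* ask the core to cycle ON/OFF once per second */
	case ETHTOOL_ID_ON:
		/* program the port LED on (firmware call stubbed out) */
		break;
	case ETHTOOL_ID_OFF:
		/* program the port LED off (stubbed out) */
		break;
	case ETHTOOL_ID_INACTIVE:
		/* restore normal LED behaviour (stubbed out) */
		break;
	}
	return 0;
}

static const struct ethtool_ops example_ethtool_ops = {
	.set_phys_id	= example_set_phys_id,
};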
-
Committed by Michael Chan
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Commit bdbd1eb5 ("bnxt_en: Handle no aggregation ring gracefully.") introduced the BNXT_FLAG_NO_AGG_RINGS flag. For consistency, bnxt_set_tpa_flags() should also clear the TPA flags when there are no aggregation rings. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Fix the following build warnings:
  CC [M]  drivers/net/ethernet/broadcom/bnxt/bnxt.o
drivers/net/ethernet/broadcom/bnxt/bnxt.c:4947:21: warning: ‘bnxt_get_max_func_rss_ctxs’ defined but not used [-Wunused-function]
 static unsigned int bnxt_get_max_func_rss_ctxs(struct bnxt *bp)
                     ^
  CC [M]  drivers/net/ethernet/broadcom/bnxt/bnxt.o
drivers/net/ethernet/broadcom/bnxt/bnxt.c:4956:21: warning: ‘bnxt_get_max_func_vnics’ defined but not used [-Wunused-function]
 static unsigned int bnxt_get_max_func_vnics(struct bnxt *bp)
                     ^
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Yuchung Cheng says: ==================== tcp: RACK fast recovery This patch set enables RACK loss detection (draft-ietf-tcpm-rack-01) to trigger fast recovery with a reordering timer. Previously RACK has been running in auxiliary mode, where it is used to detect packet losses only once recovery has been triggered by other algorithms (e.g., FACK). By inspecting packet timestamps, RACK can start ACK-driven repairs in a timely manner. A few similar heuristics are no longer needed and are either removed or disabled to reduce the complexity of the Linux TCP loss recovery engine:
1. FACK (Forward Acknowledgement)
2. Early Retransmit (RFC5827)
3. thin_dupack (fast recovery on a single DUPACK for thin streams)
4. NCR (Non-Congestion Robustness, RFC4653)
5. Forward Retransmit
After this change, Linux's loss recovery algorithms consist of:
1. Conventional DUPACK threshold approach (RFC6675)
2. RACK and Tail Loss Probe (draft-ietf-tcpm-rack-01)
3. RTO plus the F-RTO extension (RFC5682)
The patch set has been tested extensively on Google servers and presented in several IETF meetings. The data suggest that RACK successfully improves recovery performance:
https://www.ietf.org/proceedings/97/slides/slides-97-tcpm-draft-ietf-tcpm-rack-01.pdf
https://www.ietf.org/proceedings/96/slides/slides-96-tcpm-3.pdf
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
This patch disables FACK by default, as RACK is the successor of FACK (inspired by the insights behind FACK). FACK[1] in Linux works as follows: a packet P is deemed lost if a packet Q of higher sequence is s/acked and P and Q are distant by at least dupthresh number of packets in sequence space. FACK is more aggressive than the IETF-recommended recovery for SACK (RFC3517, A Conservative Selective Acknowledgment (SACK)-based Loss Recovery Algorithm for TCP), because a single SACK may trigger fast recovery. This obviously won't work well with reordering, so FACK is dynamically disabled upon detecting reordering. RACK supersedes FACK by using time distance instead of sequence distance. On reordering, RACK waits for a quarter of the RTT after receiving a single SACK before starting recovery. (The timer can be made more adaptive in the future by measuring reordering distance in time, but currently RTT/4 seems to work well.) Once recovery starts, RACK behaves almost like FACK because it reduces the reordering window to 1ms, so it fast retransmits quickly. In addition, RACK can detect lost retransmissions, as it does not care about the packet sequences (being repeated or not), which is extremely useful when the connection is going through a traffic policer. Google server experiments indicate that disabling FACK after enabling RACK has negligible impact on the overall loss recovery performance, with more reordering events detected. But we still keep the FACK implementation as a backup in case RACK has bugs and needs to be disabled. [1] M. Mathis, J. Mahdavi, "Forward Acknowledgment: Refining TCP Congestion Control," In Proceedings of SIGCOMM '96, August 1996. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
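To make the sequence-distance rule concrete, here is an illustrative user-space sketch (not kernel code) of the FACK criterion described above, under the assumption that distance is measured in MSS-sized packets:

#include <stdbool.h>
#include <stdint.h>

/* Packet P is deemed lost once a SACKed packet Q sits at least
 * "dupthresh" packets ahead of it in sequence space. */
static bool fack_considers_lost(uint32_t p_end_seq, uint32_t highest_sacked_seq,
				uint32_t mss, uint32_t dupthresh)
{
	uint32_t dist_bytes = highest_sacked_seq - p_end_seq;

	return dist_bytes >= dupthresh * mss;
}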
-
Committed by Yuchung Cheng
Thin-stream DUPACK is to start fast recovery on only one DUPACK, provided the connection is a thin stream (i.e., low inflight). But this older feature is now subsumed by RACK. If a connection receives only a single DUPACK, RACK would arm a reordering timer and soon start fast recovery instead of timing out if no further ACKs are received. The socket option (THIN_DUPACK) is kept as a nop for compatibility. Note that this patch does not change another thin-stream feature which enables linear RTO. Although it might be good to generalize that in the future (i.e., linear RTO for the first, say, 3 retries). Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
This patch removes the (partial) implementation of the aggressive limited transmit in RFC4653, TCP Non-Congestion Robustness (NCR). NCR is a mitigation of the problem created by the dynamic DUPACK threshold: the current adaptive DUPACK threshold (tp->reordering) could cause timeouts by preventing fast recovery. For example, if the last packet of a cwnd burst was reordered, the threshold will be set to the size of cwnd. But if the next application burst is smaller than the threshold and has drops instead of reorderings, the sender would not trigger fast recovery but instead resort to a timeout recovery. NCR mitigates this issue by additionally checking the number of DUPACKs against the current flight size. The technique is similar to the early retransmit RFC. With RACK loss detection, this mitigation is not needed, because RACK does not use a DUPACK threshold to detect losses. RACK arms a reordering timer to fire at most a quarter RTT later to start fast recovery. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
This patch removes the support for RFC5827 early retransmit (i.e., fast recovery on small inflight with <3 dupacks), because it is subsumed by the new RACK loss detection. More specifically, when RACK receives DUPACKs, it'll arm a reordering timer to start fast recovery after a quarter of (min)RTT, hence it covers early retransmit, except that RACK does not limit itself to specific inflight or dupack numbers. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
Forward retransmit is an esoteric feature in RFC3517 (condition (3) in NextSeg()). Basically, if a packet is not considered lost by the current criteria (# of dupacks etc.), but the congestion window has room for more packets, then retransmit this packet. However, it actually conflicts with the rest of the recovery design. For example, when reordering is detected we want to be conservative in retransmitting packets, but the forward-retransmit feature would break that and force more retransmissions. Also, the implementation is fairly complicated inside the retransmission logic, inducing extra iterations in the write queue. With RACK, losses are detected timely and this heuristic is no longer necessary, so this patch removes the feature. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
Currently F-RTO reverts the cwnd reset whenever a never-retransmitted packet is (s)acked. The timeout can be declared spurious because the packets acknowledged with this ACK were transmitted before the timeout, so clearly not all the packets were lost and the cwnd reset was unwarranted. This nice detection does not really depend on F-RTO internals, so this patch applies the detection universally. On Google servers this change detected 20% more spurious timeouts. Suggested-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
This patch changes two things: 1. Start fast recovery with RACK in addition to other heuristics (e.g., DUPACK threshold, FACK). Prior to this change, RACK was enabled to detect losses only after recovery had been started by other algorithms. 2. Disable TCP early retransmit. RACK subsumes early retransmit with the new reordering timer feature. A later patch in this series removes the early retransmit code. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
Currently RACK would mark losses before the undo operations in TCP loss recovery. This could incorrectly identify real losses as spurious. For example, a sender first experiences a delay spike and then eventually some packets are lost due to buffer overrun. In this case, the sender should perform fast recovery because not all the packets were lost. But the sender may first trigger a (spurious) RTO and reset cwnd to 1. The following ACKs may be used to mark real losses by tcp_rack_mark_lost. Then in tcp_process_loss this ACK could trigger the F-RTO undo condition, unmark the real losses, and revert the cwnd reduction. If no more ACKs come back, the sender would eventually time out again instead of performing fast recovery. The patch fixes this incorrect process by always performing the undo checks before detecting losses. Fixes: 4f41b1c5 ("tcp: use RACK to detect losses") Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
The packets inside a jumbo skb (e.g., TSO) share the same skb timestamp, even though they are sent sequentially on the wire. Since RACK is based on time, it cannot detect that some packets inside the same skb are lost. However, we can leverage the packet sequence numbers as extended timestamps to detect losses. Therefore, when the RACK timestamp is identical to the skb's timestamp (i.e., one of the packets of the skb is acked or sacked), we use the sequence numbers of the acked and unacked packets to break ties. We can use the same sequence logic to advance the RACK xmit time as well, to detect more losses and avoid timeouts. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
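A small sketch of the tie-breaking idea (simplified from the description above, not the kernel's exact code): timestamps decide which packet was sent later, and sequence numbers break the tie when two segments of the same skb share a timestamp:

#include <stdbool.h>
#include <stdint.h>

static bool sent_after(uint64_t tstamp1, uint32_t seq1,	/* (s)acked packet */
		       uint64_t tstamp2, uint32_t seq2)	/* unacked packet  */
{
	/* later timestamp wins; equal timestamps fall back to sequence order */
	return tstamp1 > tstamp2 ||
	       (tstamp1 == tstamp2 && (int32_t)(seq1 - seq2) > 0);
}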
-
Committed by Yuchung Cheng
This patch makes RACK install a reordering timer when it suspects some packets might be lost, but wants to delay the decision a little bit to accommodate reordering. It does not create a new timer but instead repurposes the existing RTO timer, because both are meant to retransmit packets. Specifically, it arms a timer ICSK_TIME_REO_TIMEOUT when the RACK timing check fails. The wait time is set to RACK.RTT + RACK.reo_wnd - (NOW - Packet.xmit_time) + fudge. This translates to expecting that a packet (Packet) should take (RACK.RTT + RACK.reo_wnd + fudge) to be delivered after it was sent. When there are multiple packets that need a timer, we use one timer with the maximum timeout. Therefore the timer conservatively uses the maximum window to expire N packets with one timeout, instead of N timeouts to expire N packets sent at different times. The fudge factor is 2 jiffies, to ensure that when the timer fires, all the suspected packets would exceed the deadline and be marked lost by tcp_rack_detect_loss(). It has to be at least 1 jiffy because the clock may tick between calling icsk_reset_xmit_timer(timeout) and actually arming the timer. The next jiffy is to lower-bound the timeout to 2 jiffies when reo_wnd is < 1ms. When the reordering timer fires (tcp_rack_reo_timeout): if we aren't in Recovery, we enter fast recovery and force fast retransmit. This is very similar to early retransmit (RFC5827), except that RACK is not constrained to entering recovery only for small outstanding flights. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
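The quoted wait-time formula, written out as a small sketch (microsecond arithmetic here for readability; the real implementation works in kernel time units and arms ICSK_TIME_REO_TIMEOUT on the existing RTO timer):

#include <stdint.h>

#define REO_FUDGE_US	2000	/* stand-in for the 2-jiffy fudge factor */

/* timeout = RACK.RTT + RACK.reo_wnd - (NOW - Packet.xmit_time) + fudge,
 * clamped at zero so already-overdue packets fire immediately. */
static int64_t rack_reo_timeout_us(int64_t now_us, int64_t xmit_time_us,
				   int64_t rack_rtt_us, int64_t reo_wnd_us)
{
	int64_t remaining = rack_rtt_us + reo_wnd_us -
			    (now_us - xmit_time_us) + REO_FUDGE_US;

	return remaining > 0 ? remaining : 0;
}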
-
Committed by Yuchung Cheng
Record the most recent RTT in RACK. It is often identical to the "ca_rtt_us" values in tcp_clean_rtx_queue. But when the packet has been retransmitted, RACK chooses to believe the ACK is for the (latest) retransmitted packet if the RTT is over the minimum RTT. This requires passing the arrival time of the most recent ACK to the RACK routines. The timestamp is now recorded in "ack_time" in tcp_sacktag_state during the ACK processing. This patch does not change the RACK algorithm itself. It only adds the RTT variable to prepare for the next main patch. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
Create a new helper tcp_rack_detect_loss to prepare for the upcoming RACK reordering timer patch. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
Create a new helper tcp_rack_mark_skb_lost to prepare for the upcoming RACK reordering timer support. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Satanand Burla
Remove the assignment to ndo_select_queue so that the fallback is used for selecting the txq. Also remove the now-useless function that used to be assigned to ndo_select_queue. Signed-off-by: Satanand Burla <satananda.burla@cavium.com> Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: Derek Chickles <derek.chickles@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Vivien Didelot
The Marvell 6352 chip has an 8-bit address/16-bit data EEPROM access. The Marvell 6390 chip has a 16-bit address/8-bit data EEPROM access. This patch implements the 8-bit data EEPROM access in the mv88e6xxx driver and adds its support to chips of the 6390 family. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 January 2017, 13 commits
-
Committed by Jouni Malinen
The function documentation for cfg80211_connect_bss() and cfg80211_connect_result() was still claiming that they are used only for the success case, while these functions can now be used to report both success and various failure cases. The actual use cases were already described in the connect() documentation. Update the function-specific comments to note the failure cases and also describe how the special status == -1 case is used in cfg80211_connect_bss() to indicate a connection timeout, based on the internal implementation in cfg80211_connect_timeout(). Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com> [use tabs for indentation] Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
Committed by Purushottam Kushwaha
This enhances the connect timeout API to also carry the reason for the timeout. These reason codes for the connect timeout are represented by enum nl80211_timeout_reason and are passed to user space through a new attribute NL80211_ATTR_TIMEOUT_REASON (u32). Signed-off-by: Purushottam Kushwaha <pkushwah@qti.qualcomm.com> Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com> [keep gfp_t argument last] Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
Committed by vamsi krishna
Enhance sched scan to support the option of finding a better BSS while in the connected state. Firmware scans the medium and reports when it finds a known BSS that has better RSSI than the currently connected BSS. New attributes to specify the relative RSSI (compared to the current BSS) are added to sched scan to implement this. Signed-off-by: vamsi krishna <vamsin@qti.qualcomm.com> Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
Committed by vamsi krishna
Add support for using a random local address (Address 2 = TA in transmit, and the same address in receive functionality) for Public Action frames, in order to improve the privacy of WLAN clients. Applications fill in the random transmit address in the frame buffer in the NL80211_CMD_FRAME command. This can be used only with drivers that indicate support for a random local address by setting the new NL80211_EXT_FEATURE_MGMT_TX_RANDOM_TA and/or NL80211_EXT_FEATURE_MGMT_TX_RANDOM_TA_CONNECTED in ext_features. The driver needs to configure its receive behavior to accept frames to the specified random address while the frame exchange is pending, and such frames need to be acknowledged similarly to frames sent to the local permanent address when this random address functionality is not used. Signed-off-by: vamsi krishna <vamsin@qti.qualcomm.com> Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
Committed by Johannes Berg
At 78, 111 and 85 bytes respectively (on x86-64), the functions iwe_stream_add_event(), iwe_stream_add_point() and iwe_stream_add_value() really shouldn't be inlines. It appears that at least my compiler already decided the same and created a single out-of-line instance of each of them in every file using them, but that's still a number of instances in the system overall, which this change reduces. Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
Committed by Eric Dumazet
The current allocations are not NUMA aware and lack proper cleanup in case of error. It is perfectly fine to use static per-cpu allocations for 256 bytes per cpu. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: David Lebrun <david.lebrun@uclouvain.be> Acked-by: David Lebrun <david.lebrun@uclouvain.be> Signed-off-by: David S. Miller <davem@davemloft.net>
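A hedged sketch of the general pattern this kind of change follows: a small fixed-size buffer allocated with alloc_percpu() at init time can become a static DEFINE_PER_CPU array, which needs no error handling or cleanup path. The names and exact size below are illustrative:

#include <linux/percpu.h>

#define SCRATCH_SIZE	256

static DEFINE_PER_CPU(char [SCRATCH_SIZE], example_scratch);

/* caller is expected to have preemption or BHs disabled */
static void example_use_scratch(void)
{
	char *buf;

	/* no allocation, no failure path: the buffer exists for each cpu */
	buf = this_cpu_ptr(example_scratch);
	buf[0] = 0;
}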
-
Committed by Nikolay Aleksandrov
Recently we started using ipmr with thousands of entries and easily hit soft lockups on smaller devices. The reason is that the hash function uses the high-order bits from the src and dst, but those don't change in many common cases; also the hash table has only 64 elements, so with thousands of entries it doesn't scale at all. This patch migrates the hash table to rhashtable, and in particular to the rhl interface, which allows duplicate elements to be chained because of the MFC_PROXY support (*,G and *,*,oif cases), which allows multiple duplicate entries to be added with different interfaces (IMO wrong, but it's been in for a long time). And here are some results from tests I've run in a VM:
mr_table size (default, allocated for all namespaces): before 49304 bytes, after 2400 bytes
Add 65000 routes (the diff is much larger on smaller devices): before 1m42s, after 58s
Forwarding 256-byte packets with 65000 routes (test done in a VM): before 3 Mbps / ~1465 pps, after 122 Mbps / ~59000 pps
As a bonus we no longer see the soft lockups on smaller devices, which showed up even with 2000 entries before. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
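As a hedged sketch of the rhashtable "rhl" (list) interface referred to above, which chains entries sharing a key so duplicates can coexist; the struct, field and parameter names here are illustrative, not the ipmr ones:

#include <linux/rhashtable.h>
#include <linux/stddef.h>

struct demo_key {
	__be32 origin;
	__be32 mcastgrp;
};

struct demo_entry {
	struct rhlist_head node;	/* chains duplicates of the same key */
	struct demo_key key;
	int ifindex;
};

static const struct rhashtable_params demo_rht_params = {
	.head_offset		= offsetof(struct demo_entry, node),
	.key_offset		= offsetof(struct demo_entry, key),
	.key_len		= sizeof(struct demo_key),
	.automatic_shrinking	= true,
};

static struct demo_entry *demo_lookup(struct rhltable *ht,
				      const struct demo_key *key)
{
	struct rhlist_head *tmp, *list;
	struct demo_entry *e;

	list = rhltable_lookup(ht, key, demo_rht_params);
	rhl_for_each_entry_rcu(e, tmp, list, node)
		return e;	/* first match; further duplicates follow on the list */

	return NULL;
}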
-
Committed by Eric Dumazet
Fixes the following sparse warnings:
net/core/secure_seq.c:125:28: warning: incorrect type in argument 1 (different base types)
net/core/secure_seq.c:125:28:    expected unsigned int const [unsigned] [usertype] a
net/core/secure_seq.c:125:28:    got restricted __be32 [usertype] saddr
net/core/secure_seq.c:125:35: warning: incorrect type in argument 2 (different base types)
net/core/secure_seq.c:125:35:    expected unsigned int const [unsigned] [usertype] b
net/core/secure_seq.c:125:35:    got restricted __be32 [usertype] daddr
net/core/secure_seq.c:125:43: warning: cast from restricted __be16
net/core/secure_seq.c:125:61: warning: restricted __be16 degrades to integer
Fixes: 7cd23e53 ("secure_seq: use SipHash in place of MD5") Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
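A hedged sketch of the kind of fix these sparse warnings typically get: when big-endian __be32/__be16 values are intentionally hashed as raw bits, explicit (__force ...) casts tell sparse the conversion is deliberate. The helper below is a stand-in, not the actual secure_seq.c change:

#include <linux/siphash.h>
#include <linux/types.h>

static u64 demo_hash_flow(__be32 saddr, __be32 daddr,
			  __be16 sport, __be16 dport,
			  const siphash_key_t *key)
{
	/* raw-bit hashing of endian-annotated types needs __force casts */
	return siphash_3u32((__force u32)saddr,
			    (__force u32)daddr,
			    ((__force u32)sport << 16) | (__force u32)dport,
			    key);
}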
-
Committed by Prasad Kanneganti
Reduce the load time of the VF driver by decreasing the wait time between iterations of the loop that polls for a mailbox response from the PF. Also change the wait time units from jiffies to milliseconds. Signed-off-by: Prasad Kanneganti <prasad.kanneganti@cavium.com> Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@cavium.com> Signed-off-by: Derek Chickles <derek.chickles@cavium.com> Signed-off-by: Satanand Burla <satananda.burla@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
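A hedged sketch of the general polling pattern being tuned: sleep in small millisecond steps while waiting for the PF's mailbox reply rather than in long jiffy-based chunks. The names, callback and timeout values are illustrative, not the octeon driver's actual code:

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/types.h>

#define MBOX_POLL_INTERVAL_MS	10	/* illustrative values only */
#define MBOX_TIMEOUT_MS		1000

static int demo_wait_for_mbox_reply(bool (*reply_ready)(void *ctx), void *ctx)
{
	unsigned long deadline = jiffies + msecs_to_jiffies(MBOX_TIMEOUT_MS);

	while (!reply_ready(ctx)) {
		if (time_after(jiffies, deadline))
			return -ETIMEDOUT;
		msleep(MBOX_POLL_INTERVAL_MS);	/* millisecond units, not jiffies */
	}
	return 0;
}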
-
Committed by Felix Manlunas
Remove code that's no longer needed. It used to serve a purpose, which was to fix a link-related bug. For a while now, the NIC firmware has had a more elegant fix for that bug. Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: Derek Chickles <derek.chickles@cavium.com> Signed-off-by: Satanand Burla <satananda.burla@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Joe Perches
Commit bc1f4470 ("net: make ndo_get_stats64 a void function") mistakenly used a return value for this void conversion. Fix it. Signed-off-by: Joe Perches <joe@perches.com> cc: stephen hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
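For reference, a minimal sketch of the interface after commit bc1f4470: .ndo_get_stats64 returns void and fills the caller-supplied structure, so returning a value is no longer valid. The driver names below are made up:

#include <linux/netdevice.h>

static void demo_get_stats64(struct net_device *dev,
			     struct rtnl_link_stats64 *stats)
{
	/* populate *stats from driver counters; nothing is returned */
	stats->rx_packets = 0;
	stats->tx_packets = 0;
}

static const struct net_device_ops demo_netdev_ops = {
	.ndo_get_stats64	= demo_get_stats64,
};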
-
Committed by David S. Miller
Florian Fainelli says: ==================== net: mdio-gpio: Use modern GPIO helpers This patch series modernizes mdio-gpio and makes it switch to the latest and greatest API for manipulating GPIO lines, thus allowing some simplifications in the driver. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Guenter Roeck
The gpiod functions support handling active-low pins, so we can move this code out of the driver into the gpio subsystem and simplify the code a bit. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
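A hedged sketch of the simplification the gpiod API enables: the active-low property is declared once when the descriptor is obtained (e.g. from DT), and gpiod_set_value() then takes a logical value, so the driver no longer carries its own inversion flag. Names are illustrative:

#include <linux/gpio/consumer.h>

static void demo_drive_line(struct gpio_desc *gpio, int asserted)
{
	/* logical value: 1 = asserted, regardless of the line's polarity */
	gpiod_set_value(gpio, asserted);
}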
-