- 11 April 2017, 12 commits
-
Authored by Benjamin Herrenschmidt
Directly access the fields when needed. The accessors add clutter, not clarity, and in some cases cause unnecessary read-modify-write type accesses on the slow (uncached) descriptor memory. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Benjamin Herrenschmidt
Add NETIF_F_SG and create multiple TX ring entries for skb fragments. On reclaim, the skb is only freed on the segment marked as "last". Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
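As a rough illustration of what a fragmented send has to do, here is a minimal sketch; the descriptor-writing helper named below is hypothetical, not the driver's actual API:

/* Sketch: one TX descriptor for the linear head, then one per page
 * fragment; only the final descriptor is flagged "last" so the reclaim
 * path knows where the skb may be freed. example_write_tx_desc() is a
 * hypothetical stand-in for the driver's descriptor setup.
 */
#include <linux/dma-mapping.h>
#include <linux/skbuff.h>

static int example_map_skb_for_tx(struct device *dev, struct sk_buff *skb)
{
	unsigned int nfrags = skb_shinfo(skb)->nr_frags;
	unsigned int i;
	dma_addr_t map;

	map = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
	if (dma_mapping_error(dev, map))
		return -ENOMEM;
	/* example_write_tx_desc(map, skb_headlen(skb), nfrags == 0); */

	for (i = 0; i < nfrags; i++) {
		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		map = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
				       DMA_TO_DEVICE);
		if (dma_mapping_error(dev, map))
			return -ENOMEM; /* real code must unwind prior maps */
		/* example_write_tx_desc(map, skb_frag_size(frag),
		 *			 i == nfrags - 1); */
	}
	return 0;
}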
-
Authored by Benjamin Herrenschmidt
Those are non-cacheable stores; let's avoid the ones we don't need. Remove the helper: it's not particularly helpful, and since it uses "priv" it can't be moved to the header file. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Benjamin Herrenschmidt
This moves the packet freeing to a separate function which is also used by ftgmac100_free_buffers() and will be used more in the error path of fragmented sends. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Benjamin Herrenschmidt
We'll use variants of this accessor without barriers when building a series of descriptors for fragmented sends. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Benjamin Herrenschmidt
We have a private lock which isn't terribly useful, and we maintain a "tx_pending" counter for information that's already available via a trivial arithmetic operation. Then we unconditionally wake the queue even when it isn't stopped. Finally, our tx code isn't really safe against a concurrent reclaim. The Aspeed chips aren't SMP today, but I prefer the code being right and future-proof. So rip that out and replace it with more "standard" queue handling, currently with a threshold of one queue element, which will be increased when we implement fragmented sends. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
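In rough terms, the "standard" queue handling pattern being adopted looks like the sketch below; tx_space_available() is a hypothetical stand-in for the trivial ring arithmetic the commit refers to:

#include <linux/netdevice.h>

/* Hypothetical helper: true when the ring can take another worst-case send. */
static bool tx_space_available(struct net_device *netdev);

/* Tail of ndo_start_xmit(), after the descriptors are written: */
static void example_xmit_tail(struct net_device *netdev)
{
	if (!tx_space_available(netdev)) {
		netif_stop_queue(netdev);
		smp_mb(); /* order the stop against the reclaim path's check */
		if (tx_space_available(netdev))
			netif_wake_queue(netdev);
	}
}

/* TX reclaim path, after freeing completed entries: */
static void example_reclaim_tail(struct net_device *netdev)
{
	if (netif_queue_stopped(netdev) && tx_space_available(netdev))
		netif_wake_queue(netdev);
}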
-
Authored by Benjamin Herrenschmidt
Rather than in the descriptor. The descriptor is mapped non-cacheable and is rather slow to access. Since doing that requires keeping track of the tx "pointer" ourselves, we also have no use for the accessors that manipulate it; just open-code it. That is just as clear and will help when adding fragmented sends. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Benjamin Herrenschmidt
Rather than just transmitting garbage past the end of the small packet. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
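For illustration, one common way to pad short frames in software is skb_put_padto(); whether this driver uses exactly that helper is an assumption of this note, not something stated in the commit:

#include <linux/etherdevice.h>
#include <linux/skbuff.h>

/* Sketch: zero-pad the tail up to the minimum Ethernet frame size so no
 * stale memory past the packet ends up on the wire. On failure the helper
 * frees the skb itself.
 */
static int example_pad_small_frame(struct sk_buff *skb)
{
	if (skb_put_padto(skb, ETH_ZLEN))
		return -ENOMEM; /* skb already freed */
	return 0;
}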
-
Authored by Benjamin Herrenschmidt
Use a simple goto to a drop path at the tail of the function; it will be used in a few more cases soon. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Benjamin Herrenschmidt
This will make the subsequent rework of the tx path simpler. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Benjamin Herrenschmidt
Move it below ftgmac100_xmit() and the rest of the tx path. No code change. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Benjamin Herrenschmidt
We have a reset task to reset our chip; use it. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 April 2017, 6 commits
-
Authored by Florian Fainelli
Fixes build errors seen with CONFIG_GPIOLIB disabled and warnings enabled:
drivers/net/dsa/mt7530.c: In function 'mt7530_setup':
drivers/net/dsa/mt7530.c:948:3: error: implicit declaration of function 'gpiod_set_value_cansleep' [-Werror=implicit-function-declaration]
   gpiod_set_value_cansleep(priv->reset, 0);
   ^~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mt7530.c: In function 'mt7530_probe':
drivers/net/dsa/mt7530.c:1068:17: error: implicit declaration of function 'devm_gpiod_get_optional' [-Werror=implicit-function-declaration]
   priv->reset = devm_gpiod_get_optional(&mdiodev->dev, "reset",
                 ^~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mt7530.c:1069:13: error: 'GPIOD_OUT_LOW' undeclared (first use in this function)
               GPIOD_OUT_LOW);
               ^~~~~~~~~~~~~
drivers/net/dsa/mt7530.c:1069:13:
Fixes: b8f126a8 ("net-next: dsa: add dsa support for Mediatek MT7530 switch")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Alexander Alemayhu
o s/bpf_bpf_get_socket_cookie/bpf_get_socket_cookie Signed-off-by: Alexander Alemayhu <alexander@alemayhu.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Stephen Hemminger
This allows using deferred skb freeing with NAPI, and gets buffer recycling. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by David S. Miller
Merge tag 'wireless-drivers-next-for-davem-2017-04-07' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers-next patches for 4.12
Lots of bugfixes as usual, but also some new features. Major changes:
ath10k: improve firmware download time for QCA6174 and QCA9377, which especially helps resume time
ath9k_htc: add support for the AirTies 1eda:2315 AR9271 device
rt2x00: add support for MT7620
mwifiex: enable auto deep sleep mode for USB chipsets
brcmfmac: add support for network namespaces (WIPHY_FLAG_NETNS_OK)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by David S. Miller
This reverts commit def12888. As per discussion between Roopa Prabhu and David Ahern, it is advisable that we instead have the code collect the setlink-triggered events into a bitmask emitted via the IFLA_EVENT netlink attribute. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Merge of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Authored by David S. Miller
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2017-04-08
This series contains updates to i40e and i40evf only.
Mitch fixes an issue where the client driver (i40iw) was attempting to load on x710 devices (which do not support iWARP), so only register with the client if iWARP is supported.
Jake fixes up error messages to better clarify to the user when adding an invalid flow type. Updates the driver to look up the MAC address from eth_get_platform_mac_address() first before checking what the firmware provides. Cleans up code so we are not repeating a duplicate loop, by checking both transmit and receive queues in a single loop. Also cleans up flags that are never used by removing the definitions.
Alex does cleanup so that we are always updating pf->flags when a change is made to the private flags. Adds support for 3K buffers to the receive path so that we can provide the additional padding needed in the event of NET_IP_ALIGN being non-zero or a cache line being greater than 64. Adds support for build_skb() to i40e/i40evf.
Maciej adjusts the scope of the rtnl lock held during reset because it was stopping other PFs from running their reset procedures.
Alan reduces code complexity in i40e_detect_recover_hung_queue().
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 April 2017, 5 commits
-
Authored by David S. Miller
Florian Fainelli says:
====================
net: dsa: Receive path simplifications
This patch series factors the common code found in all tag implementations into dsa_switch_rcv(). The original motivation was to add GRO support, but this may be a lot of work with unclear benefits at this point.
Changes in v2:
- take care of tag_mtk.c in the process
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Florian Fainelli
All DSA tag receive functions do strictly the same thing after they have located the originating source port from their tag-specific protocol:
- push ETH_HLEN bytes
- set pkt_type to PACKET_HOST
- call eth_type_trans()
- bump up counters
- call netif_receive_skb()
Factor all of that into dsa_switch_rcv(). This also makes us return a pointer to an sk_buff, which makes us symmetric with the xmit function. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
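A minimal sketch of the factored-out tail, assuming the tag-specific parser has already resolved the source port and pointed skb->dev at that port's netdevice (the function name is illustrative):

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Sketch: the five common steps now done once in dsa_switch_rcv(). */
static void example_dsa_rcv_finish(struct sk_buff *skb)
{
	struct net_device *dev = skb->dev;

	skb_push(skb, ETH_HLEN);		/* restore the MAC header */
	skb->pkt_type = PACKET_HOST;
	skb->protocol = eth_type_trans(skb, dev);

	dev->stats.rx_packets++;		/* bump counters */
	dev->stats.rx_bytes += skb->len;

	netif_receive_skb(skb);			/* hand off to the stack */
}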
-
Authored by Florian Fainelli
All DSA tag receive functions need to unshare the skb before mangling it; move this into the generic dsa_switch_rcv() function, which will allow us to make the tag receive functions return their mangled skb without caring about freeing a NULL skb. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
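The centralised step amounts to something like this sketch, run once before the tag parser (real code also does the accounting on the drop):

#include <linux/skbuff.h>

/* Sketch: unshare once, centrally, so tag parsers may mangle freely. */
static struct sk_buff *example_rx_unshare(struct sk_buff *skb)
{
	skb = skb_unshare(skb, GFP_ATOMIC);
	/* On allocation failure skb_unshare() frees the original and
	 * returns NULL; the caller just drops the packet. */
	return skb;
}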
-
Authored by Florian Fainelli
dsa_switch_rcv() already tests for dst == NULL, so there is no need to duplicate the same check within the tag receive functions. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Steffen Klassert
All available gso_type flags are currently in use, so extend gso_type from 'unsigned short' to 'unsigned int' to be able to add further flags. We reorder struct skb_shared_info to use two bytes of the four-byte hole before dataref. All fields before dataref are cleared, i.e. four bytes more than before the change. The remaining two-byte hole is moved to the beginning of the structure; this protects us from immediate overwrites on out-of-bounds writes to the sk_buff head.
Structure layout on x86-64 before the change:
struct skb_shared_info {
	unsigned char               nr_frags;        /*  0   1 */
	__u8                        tx_flags;        /*  1   1 */
	short unsigned int          gso_size;        /*  2   2 */
	short unsigned int          gso_segs;        /*  4   2 */
	short unsigned int          gso_type;        /*  6   2 */
	struct sk_buff *            frag_list;       /*  8   8 */
	struct skb_shared_hwtstamps hwtstamps;       /* 16   8 */
	u32                         tskey;           /* 24   4 */
	__be32                      ip6_frag_id;     /* 28   4 */
	atomic_t                    dataref;         /* 32   4 */
	/* XXX 4 bytes hole, try to pack */
	void *                      destructor_arg;  /* 40   8 */
	skb_frag_t                  frags[17];       /* 48 272 */
	/* --- cacheline 5 boundary (320 bytes) --- */
	/* size: 320, cachelines: 5, members: 12 */
	/* sum members: 316, holes: 1, sum holes: 4 */
};
Structure layout on x86-64 after the change:
struct skb_shared_info {
	short unsigned int          _unused;         /*  0   2 */
	unsigned char               nr_frags;        /*  2   1 */
	__u8                        tx_flags;        /*  3   1 */
	short unsigned int          gso_size;        /*  4   2 */
	short unsigned int          gso_segs;        /*  6   2 */
	struct sk_buff *            frag_list;       /*  8   8 */
	struct skb_shared_hwtstamps hwtstamps;       /* 16   8 */
	unsigned int                gso_type;        /* 24   4 */
	u32                         tskey;           /* 28   4 */
	__be32                      ip6_frag_id;     /* 32   4 */
	atomic_t                    dataref;         /* 36   4 */
	void *                      destructor_arg;  /* 40   8 */
	skb_frag_t                  frags[17];       /* 48 272 */
	/* --- cacheline 5 boundary (320 bytes) --- */
	/* size: 320, cachelines: 5, members: 13 */
};
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 April 2017, 17 commits
-
Authored by Felix Manlunas
For security reasons, NIC firmware does not allow a VF to set its VLAN if the PF has already set it. Firmware allows the VF to set its VLAN if the PF did not set it. After the VF instructs the firmware to set the VLAN, the VF always indicates (via return 0) that the operation is successful, even for the times when it isn't. Put in a mechanism for the VF's set-VLAN function to receive the firmware response code, then make that function return -EPERM if the firmware forbids the operation. Make that mechanism available for other functions that may, in the future, be interested in receiving the response code from the firmware. That mechanism involves adding new fields to struct octnic_ctrl_pkt, so make all users of struct octnic_ctrl_pkt initialize the struct to zero before using it; otherwise, the mechanism might act on uninitialized garbage. Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: Derek Chickles <derek.chickles@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
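The zero-initialisation requested of every caller is just the usual pattern sketched here (the command fields themselves are driver-internal and not shown):

/* Inside a caller, before issuing a command (fragment, for illustration): */
struct octnic_ctrl_pkt nctrl;

memset(&nctrl, 0, sizeof(nctrl));
/* ... fill in the command fields, then hand nctrl to the firmware path ... */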
-
Authored by K. Y. Srinivasan
Prior to opening the channel we should have all the state set up to handle interrupts. The current code does not do that; fix the bug. This bug can result in faults in the interrupt path. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Florian Fainelli
The SMI clause 22 & 45 read/write operations are local to the global2.c file, so make them static. This eliminates the following warnings:
drivers/net/dsa/mv88e6xxx/global2.c:571:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_read_c45' [-Wmissing-prototypes]
 int mv88e6xxx_g2_smi_phy_read_c45(struct mv88e6xxx_chip *chip, int addr,
     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mv88e6xxx/global2.c:602:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_read_c22' [-Wmissing-prototypes]
 int mv88e6xxx_g2_smi_phy_read_c22(struct mv88e6xxx_chip *chip, int addr,
     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mv88e6xxx/global2.c:635:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_write_c45' [-Wmissing-prototypes]
 int mv88e6xxx_g2_smi_phy_write_c45(struct mv88e6xxx_chip *chip, int addr,
     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mv88e6xxx/global2.c:664:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_write_c22' [-Wmissing-prototypes]
 int mv88e6xxx_g2_smi_phy_write_c22(struct mv88e6xxx_chip *chip, int addr,
     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Johannes Berg
It's rather confusing that the netlink message flags are numbered 1, 2, 4, 8, 16, 32, <unused>, 0x100. Make that more understandable by numbering the lower ones with hex constants as well. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
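For illustration only, the renumbering idea reads roughly like this in hex; treat the exact macro list as an assumption of this note rather than a quote of the patch:

/* Sketch: same values as before, hex notation throughout. */
#define NLM_F_REQUEST		0x01	/* was 1  */
#define NLM_F_MULTI		0x02	/* was 2  */
#define NLM_F_ACK		0x04	/* was 4  */
#define NLM_F_ECHO		0x08	/* was 8  */
#define NLM_F_DUMP_INTR		0x10	/* was 16 */
#define NLM_F_DUMP_FILTERED	0x20	/* was 32 */
/* The GET-request modifiers were already expressed in hex: */
#define NLM_F_ROOT		0x100
#define NLM_F_MATCH		0x200
#define NLM_F_ATOMIC		0x400
#define NLM_F_DUMP		(NLM_F_ROOT | NLM_F_MATCH)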
-
Authored by Colin Ian King
In the case where nn->eth_port is null, the warning message prints the port by dereferencing this null pointer. Remove the dereference to avoid a crash when printing the warning message. Detected by CoverityScan, CID#1426198 ("Dereference after null check") Fixes: ce22f5a2 ("nfp: separate high level and low level NSP headers") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by David S. Miller
Chenbo Feng says:
====================
New getsockopt option to retrieve socket cookie
In the current kernel socket cookie implementation, there is no simple and direct way to retrieve the socket cookie based on a file descriptor. A process may need to get it from a socket fd if it wants to correlate with sock_diag output or use a bpf map with the new socket cookie function. If userspace wants to receive the socket cookie for a given socket fd, it must send a SOCK_DIAG_BY_FAMILY dump request and look for the 5-tuple. This is slow and can be ambiguous in the case of sockets that have the same 5-tuple (e.g., tproxy / transparent sockets, SO_REUSEPORT sockets, etc.). As shown in the example program, the xt_eBPF program uses the socket cookie to record network traffic statistics, and with the socket cookie retrieved by getsockopt the program can directly access a specific socket's data without scanning the whole bpf map.
====================
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Chenbo Feng
Added a per-socket traffic monitoring option to illustrate the usage of the new getsockopt SO_COOKIE. The program is based on the socket traffic monitoring program using xt_eBPF, and in the new option the data entry can be directly accessed using the socket cookie. The cookie retrieved allows us to look up an element in the eBPF map for a specific socket. Signed-off-by: Chenbo Feng <fengc@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Chenbo Feng
Introduce a new getsockopt operation to retrieve the socket cookie for a specific socket based on the socket fd. It returns a unique, non-decreasing cookie for each socket. Tested: https://android-review.googlesource.com/#/c/358163/ Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Chenbo Feng <fengc@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
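A hedged userspace usage sketch; SO_COOKIE may be missing from older libc headers, and the fallback value 57 below assumes the asm-generic socket option numbering:

#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_COOKIE
#define SO_COOKIE 57	/* asm-generic value; an assumption for other archs */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	uint64_t cookie = 0;
	socklen_t len = sizeof(cookie);

	if (getsockopt(fd, SOL_SOCKET, SO_COOKIE, &cookie, &len) == 0)
		printf("socket cookie: %llu\n", (unsigned long long)cookie);
	else
		perror("getsockopt(SO_COOKIE)");
	return 0;
}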
-
Merge of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Authored by David S. Miller
Saeed Mahameed says:
====================
mlx5-updates-2017-04-16
This patchset provides some updates for the mlx5 drivers.
From Majd, 1st patch: adds ConnectX-6 and ConnectX-6 VF PCI ID support.
From Guy, 2nd patch: adds RXFCS scatter support; 3rd patch: a small cleanup to make a function static.
From Eran, 4th patch: adds 4 zeros padding to the ethtool FW version; 6th patch: a trivial code reuse cleanup.
From Inbar, 5th patch: show the board id in ethtool driver information.
From Saeed, 7th patch: set default RX moderation parameters on driver load, as a small fix for the latest fail-safe config feature.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Alexander Duyck
This patch is meant to improve the performance of the Rx path. Specifically, by using build_skb we gain several distinct advantages. In the case of small frames we were previously using a copy-break approach. This means that we were allocating a page fragment to use for skb->head and were having to copy the packet into that region. Both of those calls are now avoided since we just build the skb around the data. In the case of large frames the gains are much more significant. Specifically, we were having to allocate skb->head and copy the headers as before. However, in addition we were having to parse the header using eth_get_headlen, which could be quite expensive. All of this is avoided by building the frame around the data. I have seen gains as high as 30% when using VXLAN, for instance, due to just the header pulling overhead. Finally, with all this in place it also sets us up to start looking at enabling XDP. Specifically, we now have a path in which the data is in the page and the frame is built around it, so if we parse it with XDP before we call build_skb we can take care of any necessary processing there. Change-ID: Id4bdd618e94473d41f892417e5d8019639e421e3 Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
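In rough shape the change amounts to the sketch below; the bookkeeping names are illustrative, and truesize is assumed to already cover headroom + packet + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)):

#include <linux/skbuff.h>

/* Sketch: wrap an skb around the already-DMA'd page data instead of
 * allocating skb->head and copying into it.
 */
static struct sk_buff *example_build_rx_skb(unsigned char *pkt_data,
					    unsigned int headroom,
					    unsigned int truesize,
					    unsigned int pkt_len)
{
	/* The buffer really starts headroom bytes before the packet data
	 * and spans truesize bytes including room for skb_shared_info. */
	struct sk_buff *skb = build_skb(pkt_data - headroom, truesize);

	if (unlikely(!skb))
		return NULL;

	skb_reserve(skb, headroom);	/* keep the headroom for the stack */
	__skb_put(skb, pkt_len);	/* expose the received bytes */
	return skb;
}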
-
Authored by Alexander Duyck
This patch adds padding to the start of frames to make room for headroom so that we can eventually start using build_skb. Right now we guarantee at least NET_SKB_PAD + NET_IP_ALIGN; however, we allocate more space if more is available. For example, on x86 the headroom should be 192 bytes. On systems that have too large of a cache line size to support storing 1.5K padding and shared info, we default to using 3K buffers and reserve everything that isn't used for skb_shared_info or the data buffer for headroom. Change-ID: I33c641c9a1ea10cf7cc484c2d20985368d2d709a Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Authored by Alexander Duyck
There are situations where adding padding to the front and back of an Rx buffer will require that we add additional padding. Specifically, if NET_IP_ALIGN is non-zero, or the MTU size is larger than 7.5K, we would need to use 2K buffers, which leaves us with no room for the padding. To preemptively address these cases, I am adding support for 3K buffers to the Rx path so that we can provide the additional padding needed in the event of NET_IP_ALIGN being non-zero or a cache line being greater than 64. Change-ID: I938bc1ba611285428df39a613cd66f98e60b55c7 Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
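As a rough worked example of the sizing constraint, using typical x86-64 numbers (assumed for illustration, not taken from the patch):

/*
 * Budget for a half-page (2048-byte) Rx buffer, typical x86-64:
 *
 *   1536 bytes  frame data (1.5K)
 * +  320 bytes  SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
 * +  192 bytes  headroom (NET_SKB_PAD + NET_IP_ALIGN style padding)
 *   ----------
 *   2048 bytes  total: exactly full, so any extra requirement (a non-zero
 *   NET_IP_ALIGN, or a cache line larger than 64 bytes inflating the
 *   shared-info alignment) no longer fits in 2K, which is what motivates
 *   the move to 3K buffers.
 */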
-
Authored by Jacob Keller
Since an earlier commit, a few flags have no longer been used. Remove these definitions to reduce code clutter. Change-ID: I3589be4622574e747013cd4dc403e18b039f4965 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Authored by Alice Michael
The I40E_FLAG_NEED_LINK_UPDATE flag was never used. Remove the flag definitions. Change-ID: If59d0c6b4af85ca27281f3183c54b055adb439a4 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Authored by Jacob Keller
We can simply check both Tx and Rx queues in a single loop, rather than repeating the loop twice. Change-ID: Ic06f26b0e3c2620e0e33c1a2999edda488e647ad Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Authored by Jacob Keller
Look up the MAC address from the eth_get_platform_mac_address() function first, before checking what the firmware provides. We already handle the case of re-writing the MAC-VLAN filter, so there is no need to add extra code for this. However, update the comment where we do this to indicate that it does impact the Open Firmware MAC address case. Change-ID: I73e59fbe0b0e7e6f3ee9f5170d0bd3a4d5faf4db Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Authored by Alan Brady
This patch greatly reduces the unneeded complexity in the i40e_detect_recover_hung_queue code path. The previous implementation set a 'hung bit' which would then get cleared while polling. If the detection routine was called a second time with the bit already set, we would issue a software interrupt. This patch makes it such that if interrupts are disabled and we have pending TX descriptors, we trigger a software interrupt, since in the worst case queues are already clean and we just take an extra interrupt. Additionally, this patch removes the workaround for lost interrupts, as calling napi_reschedule in this context can cause software interrupts to fire on the wrong CPU. Change-ID: Iae108582a3ceb6229ed1d22e4ed6e69cf97aad8d Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-