- 25 Feb 2021, 4 commits
-
-
Submitted by Sukadev Bhattiprolu
__ibmvnic_reset() currently reads the adapter->state before getting the rtnl and saves that state as the "target state" for the reset. If this read occurs while the adapter is in the PROBED state, the target state would be PROBED. Just after the target state is saved, and before the actual reset process is started (i.e. before the rtnl is acquired), an ibmvnic_open() call would move the adapter to the OPEN state. But when the reset is processed (after ibmvnic_open() drops the rtnl), it will leave the adapter in the PROBED state even though we already moved it to OPEN.

To fix this, use the RTNL to improve serialization when reading/updating the adapter state, i.e. determine the target state of a reset only after getting the RTNL. And if a reset is in progress during an open, simply set the target state of the adapter and let the reset code finish the open (like we currently do if a failover is pending).

One twist to this serialization is if the adapter state changes when we drop the RTNL to update the link state. Account for this by checking if there was an intervening open and update the target state for the reset accordingly (see new comments in the code). Note that only the reset functions and ibmvnic_open() can set the adapter to the OPEN state, and this must happen under the rtnl.

Fixes: 7d7195a0 ("ibmvnic: Do not process device remove during device reset")
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>
Link: https://lore.kernel.org/r/20210224050229.1155468-1-sukadev@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Wei Yongjun
The driver allocates the spinlock but does not initialize it. Use spin_lock_init() to initialize it correctly.

Fixes: b38dd98f ("net: stmmac: Add Toshiba Visconti SoCs glue driver")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
Link: https://lore.kernel.org/r/20210223104803.4047281-1-weiyongjun1@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
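A minimal sketch of the fix pattern, in kernel style; the struct and function names are hypothetical stand-ins for the Visconti glue driver's private data and probe path:

    #include <linux/spinlock.h>

    /* Hypothetical stand-in for the glue driver's private data. */
    struct example_glue_priv {
            spinlock_t lock;
    };

    static int example_glue_probe(struct example_glue_priv *priv)
    {
            /* A dynamically allocated spinlock must be initialized before
             * first use; otherwise lockdep complains and behaviour is
             * undefined on some configurations. */
            spin_lock_init(&priv->lock);
            return 0;
    }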
-
Submitted by Oleksij Rempel
Since 20dd3850 ("can: Speed up CAN frame receiption by using ml_priv") the CAN framework uses per-device specific data in the AF_CAN protocol. For this purpose the struct net_device->ml_priv is used. Later the ml_priv usage in CAN was extended for other users, one of them being CAN_J1939.

Later in the kernel ml_priv was converted to a union, used by other drivers. E.g. the tun driver started storing its stats pointer. Since tun devices can claim to be a CAN device, CAN-specific protocols will wrongly interpret this pointer, which will cause system crashes. Mostly this issue is visible in the CAN_J1939 stack.

To fix this issue, we request a dedicated CAN pointer within the net_device struct.

Reported-by: syzbot+5138c4dd15a0401bec7b@syzkaller.appspotmail.com
Fixes: 20dd3850 ("can: Speed up CAN frame receiption by using ml_priv")
Fixes: ffd956ee ("can: introduce CAN midlayer private and allocate it automatically")
Fixes: 9d71dd0c ("can: add support of SAE J1939 protocol")
Fixes: 497a5757 ("tun: switch to net core provided statistics counters")
Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
Link: https://lore.kernel.org/r/20210223070127.4538-1-o.rempel@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Lech Perczak
Now that interface 3 in the "option" driver is no longer mapped, add a device ID matching it to qmi_wwan. The modem is used inside the ZTE MF283+ router and carriers identify it as such.

Interface mapping is: 0: QCDM, 1: AT (PCUI), 2: AT (Modem), 3: QMI, 4: ADB

T: Bus=02 Lev=02 Prnt=02 Port=05 Cnt=01 Dev#= 3 Spd=480 MxCh= 0
D: Ver= 2.01 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
P: Vendor=19d2 ProdID=1275 Rev=f0.00
S: Manufacturer=ZTE,Incorporated
S: Product=ZTE Technologies MSM
S: SerialNumber=P685M510ZTED0000CP&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&0
C:* #Ifs= 5 Cfg#= 1 Atr=a0 MxPwr=500mA
I:* If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 1 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=83(I) Atr=03(Int.) MxPS= 10 Ivl=32ms
E: Ad=82(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
E: Ad=85(I) Atr=03(Int.) MxPS= 10 Ivl=32ms
E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan
E: Ad=87(I) Atr=03(Int.) MxPS= 8 Ivl=32ms
E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
I:* If#= 4 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=42 Prot=01 Driver=(none)
E: Ad=88(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=05(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms

Acked-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: Lech Perczak <lech.perczak@gmail.com>
Link: https://lore.kernel.org/r/20210223183456.6377-1-lech.perczak@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 24 Feb 2021, 16 commits
-
-
Submitted by Jason A. Donenfeld
The condition here was incorrect: a non-NEON fallback implementation is available on arm32 when NEON is not supported.

Reported-by: Ilya Lipnitskiy <ilya.lipnitskiy@gmail.com>
Fixes: e7096c13 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Jason A. Donenfeld
Having two ring buffers per-peer means that every peer results in two massive ring allocations. On an 8-core x86_64 machine, this commit reduces the per-peer allocation from 18,688 bytes to 1,856 bytes, which is a 90% reduction. Ninety percent! With some single-machine deployments approaching 500,000 peers, we're talking about a reduction from 7 gigs of memory down to 700 megs of memory.

In order to get rid of these per-peer allocations, this commit switches to using a list-based queueing approach. Currently GSO fragments are chained together using the skb->next pointer (the skb_list_* singly linked list approach), so we form the per-peer queue around the unused skb->prev pointer (which sort of makes sense because the links are pointing backwards). Use of skb_queue_* is not possible here, because that is based on doubly linked lists and spinlocks. Multiple cores can write into the queue at any given time, because its writes occur in the start_xmit path or in the udp_recv path. But reads happen in a single workqueue item per-peer, amounting to a multi-producer, single-consumer paradigm.

The MPSC queue is implemented locklessly and never blocks. However, it is not linearizable (though it is serializable), with a very tight and unlikely race on writes, which, when hit (some tiny fraction of the 0.15% of partial adds on a fully loaded 16-core x86_64 system), causes the queue reader to terminate early. However, because every packet sent queues up the same workqueue item after it is fully added, the worker resumes again, and stopping early isn't actually a problem, since at that point the packet wouldn't have yet been added to the encryption queue. These properties allow us to avoid disabling interrupts or spinning. The design is based on Dmitry Vyukov's algorithm [1].

Performance-wise, ordinarily list-based queues aren't preferable to ringbuffers, because of cache misses when following pointers around. However, we *already* have to follow the adjacent pointers when working through fragments, so there shouldn't actually be any change there. A potential downside is that dequeueing is a bit more complicated, but the ptr_ring structure used prior had a spinlock when dequeueing, so all in all the difference appears to be a wash.

Actually, from profiling, the biggest performance hit, by far, of this commit winds up being atomic_add_unless(count, 1, max) and atomic_dec(count), which account for the majority of CPU time, according to perf. In that sense, the previous ring buffer was superior in that it could check if it was full by head==tail, which the list-based approach cannot do. But all in all, this enables us to get massive memory savings, allowing WireGuard to scale for real world deployments, without taking much of a performance hit.

[1] http://www.1024cores.net/home/lock-free-algorithms/queues/intrusive-mpsc-node-based-queue

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Fixes: e7096c13 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
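A sketch of the Vyukov-style intrusive MPSC push described above, chained through the otherwise unused skb->prev pointer. The queue type and names are illustrative rather than the actual WireGuard symbols, and the head pointer is assumed to be primed with a stub node so it is never NULL:

    struct example_prev_queue {
            struct sk_buff *head;   /* producers publish here; starts at a stub node */
            struct sk_buff *tail;   /* the single consumer follows links from here */
    };

    static void example_prev_queue_enqueue(struct example_prev_queue *queue,
                                           struct sk_buff *skb)
    {
            struct sk_buff *prev;

            WRITE_ONCE(skb->prev, NULL);            /* new node terminates the chain */
            prev = xchg(&queue->head, skb);         /* atomically publish the new head */
            /* Tiny race window: a consumer that reaches `prev` before the next
             * store sees a broken link and stops early; because the worker is
             * re-queued after every enqueue, the packet is still processed. */
            WRITE_ONCE(prev->prev, skb);
    }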
-
Submitted by Jason A. Donenfeld
If skb->protocol doesn't match the actual skb->data header, it's probably not a good idea to pass it off to icmp{,v6}_ndo_send, which is expecting to reply to a valid IP packet. So this commit has that early mismatch case jump to a later error label.

Fixes: e7096c13 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
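A hedged sketch of the mismatch check; the helper name is illustrative and the use of ip_tunnel_parse_protocol() is an assumption about one reasonable way to implement it:

    /* True only when skb->protocol agrees with the IP version actually
     * present in skb->data; anything else must not be handed to
     * icmp{,v6}_ndo_send(). */
    static bool example_check_packet_protocol(struct sk_buff *skb)
    {
            __be16 real_protocol = ip_tunnel_parse_protocol(skb);

            return real_protocol && skb->protocol == real_protocol;
    }

    /* In ndo_start_xmit: */
    if (unlikely(!example_check_packet_protocol(skb)))
            goto err;       /* drop; do not generate an ICMP reply */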
-
Submitted by Jason A. Donenfeld
The is_dead boolean is checked for every single packet, while the internal_id member is used basically only for pr_debug messages. So it makes sense to hoist is_dead up into some space formerly unused due to a struct hole, while demoting internal_id to below the lowest struct cache line.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
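An illustrative layout of the hot/cold split being described (not the actual struct wg_peer):

    struct example_peer {
            /* Hot: touched for every packet, kept in the first cache lines. */
            struct example_device *device;
            spinlock_t keypairs_lock;
            bool is_dead;           /* placed in what used to be a padding hole */

            /* ... other frequently used members ... */

            /* Cold: only read for pr_debug(), pushed past the hot cache lines. */
            u64 internal_id;
    };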
-
Submitted by Jann Horn
The endpoint->src_if4 has nothing to do with fixed-endian numbers; remove the bogus annotation. This was introduced in https://git.zx2c4.com/wireguard-monolithic-historical/commit?id=14e7d0a499a676ec55176c0de2f9fcbd34074a82 in the historical WireGuard repo because the old code used to zero-initialize multiple members as follows:

endpoint->src4.s_addr = endpoint->src_if4 = fl.saddr = 0;

Because fl.saddr is fixed-endian and an assignment returns a value with the type of its left operand, this meant that sparse detected an assignment between values of different endianness. Since then, this assignment was already split up into separate statements; just the cast survived.

Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Antonio Quartulli
The definition of IS_ERR() already applies the unlikely() annotation when checking the error status of the passed pointer. For this reason there is no need to have the same annotation outside of IS_ERR() itself. Clean up the code by removing the redundant annotation.

Signed-off-by: Antonio Quartulli <a@unstable.cc>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
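Before and after, in a minimal generic form:

    /* Before: the outer unlikely() is redundant, IS_ERR() already contains one. */
    if (unlikely(IS_ERR(peer)))
            return PTR_ERR(peer);

    /* After: */
    if (IS_ERR(peer))
            return PTR_ERR(peer);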
-
Submitted by Taehee Yoo
The debug check must be done after the unregister_netdevice_many() call -- the hlist_del_rcu() for this is done inside .ndo_stop. This is the same as commit 0fda7600 ("geneve: move debug check after netdev unregister").

Test commands:
ip netns del A
ip netns add A
ip netns add B
ip netns exec B ip link add vxlan0 type vxlan vni 100 local 10.0.0.1 \
    remote 10.0.0.2 dstport 4789 srcport 4789 4789
ip netns exec B ip link set vxlan0 netns A
ip netns exec A ip link set vxlan0 up
ip netns del B

Splat looks like:
[ 73.176249][ T7] ------------[ cut here ]------------
[ 73.178662][ T7] WARNING: CPU: 4 PID: 7 at drivers/net/vxlan.c:4743 vxlan_exit_batch_net+0x52e/0x720 [vxlan]
[ 73.182597][ T7] Modules linked in: vxlan openvswitch nsh nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 mlx5_core nfp mlxfw ixgbevf tls sch_fq_codel nf_tables nfnetlink ip_tables x_tables unix
[ 73.190113][ T7] CPU: 4 PID: 7 Comm: kworker/u16:0 Not tainted 5.11.0-rc7+ #838
[ 73.193037][ T7] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
[ 73.196986][ T7] Workqueue: netns cleanup_net
[ 73.198946][ T7] RIP: 0010:vxlan_exit_batch_net+0x52e/0x720 [vxlan]
[ 73.201509][ T7] Code: 00 01 00 00 0f 84 39 fd ff ff 48 89 ca 48 c1 ea 03 80 3c 1a 00 0f 85 a6 00 00 00 89 c2 48 83 c2 02 49 8b 14 d4 48 85 d2 74 ce <0f> 0b eb ca e8 b9 51 db dd 84 c0 0f 85 4a fe ff ff 48 c7 c2 80 bc
[ 73.208813][ T7] RSP: 0018:ffff888100907c10 EFLAGS: 00010286
[ 73.211027][ T7] RAX: 000000000000003c RBX: dffffc0000000000 RCX: ffff88800ec411f0
[ 73.213702][ T7] RDX: ffff88800a278000 RSI: ffff88800fc78c70 RDI: ffff88800fc78070
[ 73.216169][ T7] RBP: ffff88800b5cbdc0 R08: fffffbfff424de61 R09: fffffbfff424de61
[ 73.218463][ T7] R10: ffffffffa126f307 R11: fffffbfff424de60 R12: ffff88800ec41000
[ 73.220794][ T7] R13: ffff888100907d08 R14: ffff888100907c50 R15: ffff88800fc78c40
[ 73.223337][ T7] FS: 0000000000000000(0000) GS:ffff888114800000(0000) knlGS:0000000000000000
[ 73.225814][ T7] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 73.227616][ T7] CR2: 0000562b5cb4f4d0 CR3: 0000000105fbe001 CR4: 00000000003706e0
[ 73.229700][ T7] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 73.231820][ T7] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 73.233844][ T7] Call Trace:
[ 73.234698][ T7] ? vxlan_err_lookup+0x3c0/0x3c0 [vxlan]
[ 73.235962][ T7] ? ops_exit_list.isra.11+0x93/0x140
[ 73.237134][ T7] cleanup_net+0x45e/0x8a0
[ ... ]

Fixes: 57b61127 ("vxlan: speedup vxlan tunnels dismantle")
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Link: https://lore.kernel.org/r/20210221154552.11749-1-ap420073@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
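A sketch of the corrected ordering; the structure is approximate and the surrounding cleanup code is omitted:

    /* The per-netns socket-list check is only valid once every device has
     * gone through .ndo_stop, i.e. after unregister_netdevice_many(). */
    unregister_netdevice_many(&list_kill);

    list_for_each_entry(net, net_list, exit_list) {
            struct vxlan_net *vn = net_generic(net, vxlan_net_id);
            unsigned int h;

            for (h = 0; h < PORT_HASH_SIZE; ++h)
                    WARN_ON_ONCE(!hlist_empty(&vn->sock_list[h]));
    }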
-
Submitted by Hayes Wang
Add rtl_eee_plus_en() and rtl_green_en().

Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Hayes Wang
Some messages are emitted before register_netdev() is called, so replace netif_err() with dev_err() for them.

Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Hayes Wang
Return an error code if autosuspend_en, eee_get, or eee_set does not exist.

Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Hayes Wang
U1/U2 should be enabled for USB 3.0 or later only; USB 2.0 doesn't support it.

Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Florian Fainelli
Add support for being able to set the learning attribute on a port, and make sure that the standalone ports start up with learning disabled.

We can remove the code in bcm_sf2 that configured the ports' learning attribute because we want the standalone ports to have learning disabled by default, and port 7 cannot be bridged, so its learning attribute will not change past its initial configuration.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Florian Fainelli
Because bcm_sf2 implements its own dsa_switch_ops, we need to export b53_br_flags_pre(), b53_br_flags() and b53_set_mrouter so we can wire them up like they used to be with the former b53_br_egress_floods().

Fixes: a8b659e7 ("net: dsa: act as passthrough for bridge port flags")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Krzysztof Halasa
The sky2.c driver uses netdev_warn() before the net device is initialized. Fix it by using dev_warn() instead.

Signed-off-by: Krzysztof Halasa <khalasa@piap.pl>
Link: https://lore.kernel.org/r/m3a6s1r1ul.fsf@t19.piap.pl
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Sieng Piaw Liew
In the ndo_stop functions, netdev_completed_queue() is called during forced tx reclaim, after netdev_reset_queue(). This may trigger a kernel panic if there is any tx skb left. This patch moves netdev_reset_queue() to after tx reclaim, so BQL can complete successfully and then reset.

Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com>
Fixes: 4c59b0f5 ("bcm63xx_enet: add BQL support")
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Link: https://lore.kernel.org/r/20210222013530.1356-1-liew.s.piaw@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
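A hedged sketch of the ndo_stop ordering after the fix; the reclaim helper name is hypothetical:

    static int example_enet_stop(struct net_device *dev)
    {
            /* Reclaim pending tx skbs first; this path may call
             * netdev_completed_queue() for each completed packet. */
            example_tx_reclaim(dev);

            /* Reset the BQL state only after the completions above have
             * been accounted, so the counters stay consistent. */
            netdev_reset_queue(dev);

            return 0;
    }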
-
Submitted by Jason A. Donenfeld
The icmp{,v6}_send functions make all sorts of use of skb->cb, casting it with IPCB or IP6CB, assuming the skb to have come directly from the inet layer. But when the packet comes from the ndo layer, especially when forwarded, there's no telling what might be in skb->cb at that point. As a result, the icmp sending code risks reading bogus memory contents, which can result in nasty stack overflows such as this one reported by a user:

panic+0x108/0x2ea
__stack_chk_fail+0x14/0x20
__icmp_send+0x5bd/0x5c0
icmp_ndo_send+0x148/0x160

In icmp_send, skb->cb is cast with IPCB and an ip_options struct is read from it. The optlen parameter there is of particular note, as it can induce writes beyond bounds. There are quite a few ways that can happen in __ip_options_echo. For example:

// sptr/skb are attacker-controlled skb bytes
sptr = skb_network_header(skb);
// dptr/dopt points to stack memory allocated by __icmp_send
dptr = dopt->__data;
// sopt is the corrupt skb->cb in question
if (sopt->rr) {
    optlen = sptr[sopt->rr+1]; // corrupt skb->cb + skb->data
    soffset = sptr[sopt->rr+2]; // corrupt skb->cb + skb->data
    // this now writes potentially attacker-controlled data, over
    // flowing the stack:
    memcpy(dptr, sptr+sopt->rr, optlen);
}

In the icmpv6_send case, the story is similar, but not as dire, as only IP6CB(skb)->iif and IP6CB(skb)->dsthao are used. The dsthao case is worse than the iif case, but it is passed to ipv6_find_tlv, which does a bit of bounds checking on the value.

This is easy to simulate by doing a `memset(skb->cb, 0x41, sizeof(skb->cb));` before calling icmp{,v6}_ndo_send, and it's only by good fortune and the rarity of icmp sending from that context that we've avoided reports like this until now. For example, in KASAN:

BUG: KASAN: stack-out-of-bounds in __ip_options_echo+0xa0e/0x12b0
Write of size 38 at addr ffff888006f1f80e by task ping/89
CPU: 2 PID: 89 Comm: ping Not tainted 5.10.0-rc7-debug+ #5
Call Trace:
 dump_stack+0x9a/0xcc
 print_address_description.constprop.0+0x1a/0x160
 __kasan_report.cold+0x20/0x38
 kasan_report+0x32/0x40
 check_memory_region+0x145/0x1a0
 memcpy+0x39/0x60
 __ip_options_echo+0xa0e/0x12b0
 __icmp_send+0x744/0x1700

Actually, out of the 4 drivers that do this, only gtp zeroed the cb for the v4 case, while the rest did not. So this commit actually removes the gtp-specific zeroing, while putting the code where it belongs in the shared infrastructure of icmp{,v6}_ndo_send.

This commit fixes the issue by passing an empty IPCB or IP6CB along to the functions that actually do the work. For icmp_send, this was already trivial, thanks to __icmp_send providing the plumbing function. For icmpv6_send, this required a tiny bit of refactoring to make it behave like the v4 case, after which it was straightforward.

Fixes: a2b78e9b ("sunvnet: generate ICMP PTMUD messages for smaller port MTUs")
Reported-by: SinYu <liuxyon@gmail.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/netdev/CAF=yD-LOF116aHub6RMe8vB8ZpnrrnoTdqhobEx+bvoA8AsP0w@mail.gmail.com/T/
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Link: https://lore.kernel.org/r/20210223131858.72082-1-Jason@zx2c4.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
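A sketch of the v4 side of the approach, abbreviated (the upstream icmp_ndo_send() also deals with NAT-ed conntrack state, which is omitted here, and the function name below is illustrative):

    void example_icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info)
    {
            /* Do not trust skb->cb coming from the ndo layer: hand
             * __icmp_send() an explicitly empty ip_options instead of
             * whatever IPCB(skb_in) happens to contain. */
            struct ip_options opts = { 0 };

            __icmp_send(skb_in, type, code, info, &opts);
    }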
-
- 23 Feb 2021, 11 commits
-
-
Submitted by Chuhong Yuan
mlx4_do_mirror_rule() forgets to call mlx4_free_cmd_mailbox() to free the memory region allocated by mlx4_alloc_cmd_mailbox() before it exits. Add the missed call to fix it.

Fixes: 78efed27 ("net/mlx4_core: Support mirroring VF DMFS rules on both ports")
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20210221143559.390277-1-hslester96@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
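The general alloc/free pairing, sketched; the rule-building helper in the middle is hypothetical:

    struct mlx4_cmd_mailbox *mailbox;
    int err;

    mailbox = mlx4_alloc_cmd_mailbox(dev);
    if (IS_ERR(mailbox))
            return PTR_ERR(mailbox);

    err = example_build_and_attach_rule(dev, mailbox);

    /* Free on every exit path, success or failure. */
    mlx4_free_cmd_mailbox(dev, mailbox);
    return err;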
-
Submitted by Song, Yoong Siang
When the link speed is not 100 Mbps, the port transmit rate and speed divider are set to 8 and 1000000 respectively. These values are incorrect for the CBS idleslope and sendslope HW value calculation if the link speed is not 1 Gbps. This patch adds a switch statement to set the values of port transmit rate and speed divider for 10 Gbps, 5 Gbps, 2.5 Gbps, 1 Gbps, and 100 Mbps. Note that CBS is not supported at 10 Mbps.

Fixes: bc41a668 ("net: stmmac: tc: Remove the speed dependency")
Fixes: 1f705bc6 ("net: stmmac: Add support for CBS QDISC")
Signed-off-by: Song, Yoong Siang <yoong.siang.song@intel.com>
Link: https://lore.kernel.org/r/1613655653-11755-1-git-send-email-yoong.siang.song@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
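An illustrative sketch of the per-speed selection; the exact rate and divider values are assumptions, not copied from the driver:

    u32 ptr, speed_div;

    switch (priv->speed) {
    case SPEED_10000:
            ptr = 32;
            speed_div = 10000000;
            break;
    case SPEED_5000:
            ptr = 32;
            speed_div = 5000000;
            break;
    case SPEED_2500:
            ptr = 8;
            speed_div = 2500000;
            break;
    case SPEED_1000:
            ptr = 8;
            speed_div = 1000000;
            break;
    case SPEED_100:
            ptr = 4;
            speed_div = 100000;
            break;
    default:
            return -EOPNOTSUPP;     /* CBS is not supported at 10 Mbps */
    }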
-
Submitted by Dan Carpenter
The comments for phy_select_page() say that "phy_restore_page() must always be called after this, irrespective of success or failure of this call." If we don't call phy_restore_page() then we are still holding the phy_lock_mdio_bus(), so it eventually leads to a deadlock.

Fixes: 32ab60e5 ("net: phy: icplus: add MDI/MDIX support for IP101A/G")
Fixes: f9bc51e6 ("net: phy: icplus: fix paged register access")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Michael Walle <michael@walle.cc>
Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk>
Link: https://lore.kernel.org/r/YC+OpFGsDPXPnXM5@mwanda
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
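The required pairing, sketched with the phylib helpers; the register-modify step in the middle is illustrative:

    int ret = 0, oldpage;

    oldpage = phy_select_page(phydev, page);
    if (oldpage < 0)
            goto out;       /* must NOT return here: the MDIO bus lock is still held */

    ret = __phy_modify(phydev, reg, mask, set);

    out:
            /* Restores the original page and, crucially, drops the MDIO bus lock. */
            return phy_restore_page(phydev, oldpage, ret);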
-
Submitted by Stefan Chulski
The PPv2 loopback port doesn't support RSS, so we should skip RSS configuration for this port.

Signed-off-by: Stefan Chulski <stefanc@marvell.com>
Reviewed-by: Marcin Wojtas <mw@semihalf.com>
Link: https://lore.kernel.org/r/1613652123-19021-1-git-send-email-stefanc@marvell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Camelia Groza
The current use of container_of is flawed and unnecessary. Obtain the dpaa_napi_portal reference from the private per-CPU data instead.

Fixes: a1e031ff ("dpaa_eth: add XDP_REDIRECT support")
Reported-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Camelia Groza <camelia.groza@nxp.com>
Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/r/20210218182106.22613-1-camelia.groza@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
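A sketch of the replacement lookup; the field names mirror the description above but should be treated as assumptions:

    struct dpaa_percpu_priv *percpu_priv;
    struct dpaa_napi_portal *np;

    /* Fetch the portal from the driver's per-CPU private data rather than
     * deriving it via container_of() on the NAPI instance. */
    percpu_priv = this_cpu_ptr(priv->percpu_priv);
    np = &percpu_priv->np;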
-
Submitted by DENG Qingfang
2 bytes of the MTU are reserved for the Atheros DSA tag, but the DSA core has already handled that since commit dc0fe7d4. Remove the unnecessary reservation.

Fixes: d51b6ce4 ("net: ethernet: add ag71xx driver")
Signed-off-by: DENG Qingfang <dqfext@gmail.com>
Reviewed-by: Oleksij Rempel <o.rempel@pengutronix.de>
Link: https://lore.kernel.org/r/20210218034514.3421-1-dqfext@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Henry Tieman
It was possible to have Rx queues that were not available for use by RSS. This would happen when increasing the number of Rx queues while there was a user-defined RSS LUT. Always update the number of available RSS queues when changing the number of Rx queues.

Fixes: 87324e74 ("ice: Implement ethtool ops for channels")
Signed-off-by: Henry Tieman <henry.w.tieman@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Submitted by Dave Ertman
The DCBX_CAP bits were not being adjusted when switching between SW and FW controlled LLDP. Adjust the bits to correctly indicate which mode the LLDP logic is in.

Fixes: b94b013e ("ice: Implement DCBNL support")
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Submitted by Brett Creeley
Currently, if an AVF driver doesn't account for the possibility of a port VLAN when determining its max packet size, then packets at MTU will be dropped. It is not the VF driver's responsibility to account for a port VLAN, so fix this. To fix this, do the following:

1. Add a function that determines the max packet size a VF is allowed by using the port's max packet size and whether the VF is in a port VLAN. If a port VLAN is configured then a VF's max packet size will always be the port's max packet size minus VLAN_HLEN. Otherwise it will be the port's max packet size. (A sketch of such a helper follows this entry.)
2. Use this function to verify the max packet size from the VF.
3. If there is a port VLAN configured then add 4 bytes (VLAN_HLEN) to the VF's max packet size configuration.

Also, the VIRTCHNL_OP_GET_VF_RESOURCES message provides the capability to communicate a VF's max packet size. Use the new function for this purpose.

Fixes: 1071a835 ("ice: Implement virtchnl commands for AVF support")
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
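A sketch of the helper from step 1; the function name and field path are assumptions based on the description:

    static u16 example_vf_max_frame_size(struct ice_port_info *pi,
                                         bool vf_in_port_vlan)
    {
            u16 max_frame_size = pi->phy.link_info.max_frame_size;

            /* A port VLAN consumes VLAN_HLEN bytes that the VF cannot use. */
            if (vf_in_port_vlan)
                    max_frame_size -= VLAN_HLEN;

            return max_frame_size;
    }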
-
Submitted by Brett Creeley
Currently the PF will only set a trusted VF as the default VSI when it requests FLAG_VF_UNICAST_PROMISC over VIRTCHNL. However, when FLAG_VF_MULTICAST_PROMISC is set it's expected that the trusted VF will see multicast packets that don't have a matching destination MAC in the device's internal switch. Fix this by setting the trusted VF as the default VSI if either FLAG_VF_UNICAST_PROMISC or FLAG_VF_MULTICAST_PROMISC is set.

Fixes: 01b5e89a ("ice: Add VF promiscuous support")
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Submitted by Dave Ertman
Currently the driver reports the max number of TCs to the DCBNL callback as a kernel define set to 8. This is preventing userspace applications performing DCBx from correctly mapping the TCs down from requested to actual values. Report the actual max TC value to userspace from the capability struct.

Fixes: b94b013e ("ice: Implement DCBNL support")
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
- 22 Feb 2021, 1 commit
-
-
Submitted by Dan Carpenter
This code does not allocate enough memory for the NUL terminator, so it ends up writing it one character beyond the end of the buffer.

Fixes: 8756828a ("octeontx2-af: Add NPA aura and pool contexts to debugfs")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
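The general shape of the fix, with generic names rather than the actual octeontx2-af code:

    /* Size the buffer for the string plus its terminating NUL. */
    buf = kmalloc(len + 1, GFP_KERNEL);
    if (!buf)
            return -ENOMEM;
    snprintf(buf, len + 1, "%s", src);      /* snprintf() always NUL-terminates */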
-
- 20 Feb 2021, 2 commits
-
-
Submitted by Norbert Ciosek
Fixes the following sparse warnings:

i40e_main.c:5953:32: warning: cast from restricted __le16
i40e_main.c:8008:29: warning: incorrect type in assignment (different base types)
i40e_main.c:8008:29:    expected unsigned int [assigned] [usertype] ipa
i40e_main.c:8008:29:    got restricted __le32 [usertype]
i40e_main.c:8008:29: warning: incorrect type in assignment (different base types)
i40e_main.c:8008:29:    expected unsigned int [assigned] [usertype] ipa
i40e_main.c:8008:29:    got restricted __le32 [usertype]
i40e_txrx.c:1950:59: warning: incorrect type in initializer (different base types)
i40e_txrx.c:1950:59:    expected unsigned short [usertype] vlan_tag
i40e_txrx.c:1950:59:    got restricted __le16 [usertype] l2tag1
i40e_txrx.c:1953:40: warning: cast to restricted __le16
i40e_xsk.c:448:38: warning: invalid assignment: |=
i40e_xsk.c:448:38:    left side has type restricted __le64
i40e_xsk.c:448:38:    right side has type int

Fixes: 2f4b411a ("i40e: Enable cloud filters via tc-flower")
Fixes: 2a508c64 ("i40e: fix VLAN.TCI == 0 RX HW offload")
Fixes: 3106c580 ("i40e: Use batched xsk Tx interfaces to increase performance")
Fixes: 8f88b303 ("i40e: Add infrastructure for queue channel support")
Signed-off-by: Norbert Ciosek <norbertx.ciosek@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Submitted by Mateusz Palczewski
Fix insufficient distinction between IPv4 and IPv6 addresses when creating a filter. IPv4 and IPv6 addresses are kept in the same memory area; if an IPv6 address is added, it is caught by the IPv4 check, which leads to err -95.

Fixes: 2f4b411a ("i40e: Enable cloud filters via tc-flower")
Signed-off-by: Grzegorz Szczurek <grzegorzx.szczurek@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Reviewed-by: Jaroslaw Gawin <jaroslawx.gawin@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
- 19 Feb 2021, 6 commits
-
-
Submitted by Sylwester Dziedziuch
When creating VFs they were sometimes not getting resources. This was caused by i40e_reset_all_vfs not being executed because the __I40E_VF_DISABLE flag was set on the PF. Because of this, IAVF was never able to finish the setup sequence, never getting a reset indication from the PF. Changed the test_and_set_bit of __I40E_VF_DISABLE in i40e_sync_filters_subtask to test_bit and removed the clear_bit. This function should not set this bit; it should only check whether it has already been set.

Fixes: a7542b87 ("i40e: check __I40E_VF_DISABLE bit in i40e_sync_filters_subtask")
Signed-off-by: Sylwester Dziedziuch <sylwesterx.dziedziuch@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
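A sketch of the adjusted check; the __I40E_MACVLAN_SYNC_PENDING retry bit is an assumption about how the skipped sync gets picked up later:

    /* Bail out only when another path has already disabled VFs; this
     * subtask must neither set nor clear __I40E_VF_DISABLE itself. */
    if (test_bit(__I40E_VF_DISABLE, pf->state)) {
            set_bit(__I40E_MACVLAN_SYNC_PENDING, pf->state);
            return;
    }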
-
Submitted by Mateusz Palczewski
Fix the addition of a VLAN filter for the PF after enabling the FW LLDP agent. Changing the LLDP agent causes the FW to re-initialize per NVM settings. Remove the default PF filter and move "Enable/Disable" to the currently used reset flag. Without this patch the PF would try to add a MAC VLAN filter with the default switch filter present, which causes an AQ error and turns promiscuous mode on.

Fixes: c65e78f8 ("i40e: Further implementation of LLDP")
Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Reviewed-by: Sylwester Dziedziuch <sylwesterx.dziedziuch@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Submitted by Mateusz Palczewski
During driver loading, flow control settings were written to the FW using a variable which was always zero, since it was only set by ethtool. This behavior has been corrected and the driver no longer overwrites the default FW/NVM settings.

Fixes: 373149fc ("i40e: Decrease the scope of rtnl lock")
Signed-off-by: Dawid Lukwinski <dawid.lukwinski@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Submitted by Mateusz Palczewski
Zero-initialize AQ command data structures to comply with the API specifications.

Fixes: 2f4b411a ("i40e: Enable cloud filters via tc-flower")
Fixes: f4492db1 ("i40e: Add NPAR BW get and set functions")
Signed-off-by: Andrzej Sawuła <andrzej.sawula@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Reviewed-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Submitted by Keita Suzuki
Struct i40e_veb is allocated in the function i40e_setup_pf_switch and stored in the veb array field inside struct i40e_pf. However, when i40e_setup_misc_vector fails, this memory leaks. Fix this by calling the exit and teardown functions.

Signed-off-by: Keita Suzuki <keitasuzuki.park@sslab.ics.keio.ac.jp>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Submitted by Slawomir Laba
When a packet contains an IPv6 header whose next header is an extension header rather than a protocol one, the kernel function skb_transport_header called on such an sk_buff returns a pointer to the extension header, not to the TCP one. This caused a problem in packet processing for skbs with encapsulation for a tunnel with I40E_TX_CTX_EXT_IP_IPV6: the extension header was not skipped at all.

The ipv6_skip_exthdr function checks whether the next header of the IPv6 header is an extension header and doesn't modify the l4_proto pointer if it already points to a protocol header value, so it is safe to omit the comparison of the exthdr and l4.hdr pointers. ipv6_skip_exthdr can return -1, which means the skipping process failed and there is something wrong with the packet, so it will be dropped.

Fixes: a3fd9d88 ("i40e/i40evf: Handle IPv6 extension headers in checksum offload")
Signed-off-by: Slawomir Laba <slawomirx.laba@intel.com>
Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
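A sketch of the lookup described above; the pointer bookkeeping around it is illustrative:

    unsigned char *exthdr;
    __be16 frag_off;
    u8 l4_proto;
    int ret;

    exthdr = skb_network_header(skb) + sizeof(struct ipv6hdr);
    l4_proto = ipv6_hdr(skb)->nexthdr;

    /* Skip any extension headers; ipv6_skip_exthdr() leaves l4_proto alone
     * when it already names a protocol header, and returns a negative
     * value for a malformed chain, in which case the skb is dropped. */
    ret = ipv6_skip_exthdr(skb, exthdr - skb->data, &l4_proto, &frag_off);
    if (ret < 0)
            return -EINVAL;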
-