- 14 Feb 2019, 7 commits
-
-
Submitted by Dan Carpenter

The value of ->num_ports comes from bcm_sf2_sw_probe() and it is less than or equal to DSA_MAX_PORTS. The ds->ports[] array is used inside the dsa_is_user_port() and dsa_is_cpu_port() functions. The ds->ports[] array is allocated in dsa_switch_alloc() and it has ds->num_ports elements, so this leads to a static checker warning about a potential out of bounds read.

Fixes: 8cfa9498 ("net: dsa: bcm_sf2: add suspend/resume callbacks")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Vivien Didelot <vivien.didelot@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
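A minimal sketch of the loop-bound change described above, in kernel-style C; the function name is illustrative and not taken from the bcm_sf2 driver:

    #include <net/dsa.h>

    /* Illustrative only: iterate ports using the switch's own port count
     * (ds->num_ports) rather than the global DSA_MAX_PORTS constant, so
     * ds->ports[] is never indexed past the array that dsa_switch_alloc()
     * allocated.
     */
    static void example_walk_ports(struct dsa_switch *ds)
    {
            int port;

            for (port = 0; port < ds->num_ports; port++) {  /* was: DSA_MAX_PORTS */
                    if (dsa_is_user_port(ds, port) || dsa_is_cpu_port(ds, port))
                            continue;
                    /* ... per-port suspend/resume work goes here ... */
            }
    }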
-
Submitted by John David Anglin

The GPIO interrupt controller on the espressobin board only supports edge interrupts. If one enables the use of hardware interrupts in the device tree for the 88E6341, it is possible to miss an edge. When this happens, the INTn pin on the Marvell switch is stuck low and no further interrupts occur.

I found after adding debug statements to mv88e6xxx_g1_irq_thread_work() that there is a race in handling device interrupts (e.g. PHY link interrupts). Some interrupts are directly cleared by reading the Global 1 status register. However, the device interrupt flag, for example, is not cleared until all the unmasked SERDES and PHY ports are serviced, which is done by reading the relevant SERDES and PHY status registers. The code only services interrupts whose status bit is set at the time of reading its status register. If an interrupt event occurs after its status is read and before all interrupts are serviced, then this event will not be serviced and the INTn output pin will remain low. This is not a problem with polling or level interrupts, since the handler will be called again to process the event. However, it is a big problem when using edge interrupts.

The fix presented here is to add a loop around the code servicing switch interrupts, as sketched below. If any pending interrupts remain after the current set has been handled, we loop and process the new set. If there are no pending interrupts after servicing, we are sure that INTn has gone high and we will get an edge when a new event occurs.

Tested on espressobin board.

Fixes: dc30c35b ("net: dsa: mv88e6xxx: Implement interrupt support.")
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Tested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
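A rough sketch of the re-poll loop shape; read_pending_sources() and service_sources() are hypothetical stand-ins for the driver's Global 1 status read and per-source handlers, not real mv88e6xxx functions:

    #include <linux/interrupt.h>

    /* Hypothetical stand-ins for driver internals. */
    unsigned int read_pending_sources(void *chip);
    void service_sources(void *chip, unsigned int pending);

    /* Keep servicing until nothing is pending, so the (active-low) INTn
     * line is known to be deasserted and the next event produces a fresh
     * edge for the edge-triggered GPIO interrupt controller.
     */
    static irqreturn_t example_irq_thread_fn(int irq, void *chip)
    {
            unsigned int pending;
            bool handled = false;

            do {
                    pending = read_pending_sources(chip);
                    if (pending) {
                            service_sources(chip, pending);
                            handled = true;
                    }
            } while (pending);

            return handled ? IRQ_HANDLED : IRQ_NONE;
    }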
-
Submitted by Heiner Kallweit

phylib enables interrupts before phy_start() has been called, and if we receive an interrupt in a non-started state, the interrupt handler returns IRQ_NONE. This causes problems with at least one Marvell chip, as reported by Andrew. Fix this by handling interrupts the same way as phy_mac_interrupt() does: basically, always run the phylib state machine. It knows when it has to do something and when not. This change allows interrupts to be handled gracefully even if they occur in a non-started state.

Fixes: 2b3e88ea ("net: phy: improve phy state checking")
Reported-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
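A sketch of the described handler behaviour, assuming a placeholder helper that kicks the phylib state machine (the real fix uses a phylib-internal routine; the helper name below is illustrative):

    #include <linux/interrupt.h>
    #include <linux/phy.h>

    /* Placeholder for the phylib-internal helper that schedules the state
     * machine (the same thing phy_mac_interrupt() ultimately does). */
    void run_state_machine(struct phy_device *phydev);

    static irqreturn_t example_phy_interrupt(int irq, void *phy_dat)
    {
            struct phy_device *phydev = phy_dat;

            /* Optionally let the driver confirm the interrupt source. */
            if (phydev->drv->did_interrupt && !phydev->drv->did_interrupt(phydev))
                    return IRQ_NONE;

            /* No "started" check here: always run the state machine and
             * let it decide whether anything needs doing. */
            run_state_machine(phydev);

            return IRQ_HANDLED;
    }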
-
Submitted by Saeed Mahameed

Currently the mlx5 driver creates XDP redirect hw queues unconditionally on netdevice open. This is fine until someone starts redirecting XDP traffic via ndo_xdp_xmit on an mlx5 device and changes the device configuration at the same time; this might cause crashes, since the other device's napi is not aware of the mlx5 state change (resource unavailability).

To fix this we must synchronize with the other devices' napi contexts on the system. Add a new flag under mlx5e_priv to indicate that XDP TX resources are available, set/clear it when necessary, and use synchronize_rcu() when the flag is turned off, so the other napi contexts are in sync with it before we actually clean up the hw resources. The flag is tested prior to committing to transmit in mlx5e_xdp_xmit, and it is sufficient to determine whether it is safe to transmit or not. The other two internal flags (MLX5E_STATE_OPENED and MLX5E_SQ_STATE_ENABLED) become unnecessary and are therefore removed from the data path.

Fixes: 58b99ee3 ("net/mlx5e: Add support for XDP_REDIRECT in device-out side")
Reported-by: Toke Høiland-Jørgensen <toke@redhat.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
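The flag-plus-synchronize_rcu() pattern described above looks roughly like the following sketch; the flag name and helpers are illustrative, not the actual mlx5e identifiers:

    #include <linux/bitops.h>
    #include <linux/rcupdate.h>

    enum { EXAMPLE_XDP_TX_OPEN };   /* illustrative state bit */

    /* Fast path, called from another device's napi via ndo_xdp_xmit():
     * commit to transmitting only if the XDP TX resources still exist. */
    static bool example_xdp_tx_allowed(const unsigned long *state)
    {
            return test_bit(EXAMPLE_XDP_TX_OPEN, state);
    }

    /* Reconfiguration path: clear the flag, then wait out every napi poll
     * that may have observed the old value before freeing the queues. */
    static void example_xdp_tx_disable(unsigned long *state)
    {
            clear_bit(EXAMPLE_XDP_TX_OPEN, state);
            synchronize_rcu();      /* napi runs in an RCU read-side section */
            /* ... now safe to destroy the XDP SQs ... */
    }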
-
Submitted by Tariq Toukan

Eliminate the following compilation warning:

    drivers/net/ethernet/mellanox/mlx5/core/events.c: warning: 'error_str' may be used uninitialized in this function [-Wuninitialized]: => 238:3

Fixes: c2fb3db2 ("net/mlx5: Rework handling of port module events")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Mikhael Goikhman <migo@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
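Warnings of this kind are typically silenced by giving the variable a safe default (or an explicit default case); a simplified illustration, not the actual events.c code:

    /* If error_str is only assigned inside a switch over the event status,
     * the compiler cannot prove it is set on every path before being used.
     * A default value removes the warning and gives sane output for
     * unknown statuses.
     */
    static const char *example_error_string(int status)
    {
            const char *error_str = "unknown error";        /* default */

            switch (status) {
            case 1:
                    error_str = "power budget exceeded";
                    break;
            case 2:
                    error_str = "bad or unsupported module";
                    break;
            }

            return error_str;
    }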
-
Submitted by Huy Nguyen

When EEH is injected and the PCI bus stalls, mlx5's pci error detect function is called to deactivate the command interface and tear down the device. The issue is that a thread may already have passed the MLX5_DEVICE_STATE_INTERNAL_ERROR check; it will then send the command and get stuck in wait_func.

Solution: add a function mlx5_cmd_flush to disable the command interface and clear all the pending commands. When the device state is set to MLX5_DEVICE_STATE_INTERNAL_ERROR, call mlx5_cmd_flush to ensure all pending threads waiting for firmware command completion are terminated.

Fixes: c1d4d2e9 ("net/mlx5: Avoid calling sleeping function by the health poll thread")
Signed-off-by: Huy Nguyen <huyn@mellanox.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Submitted by Maria Pasechnik

New channels are applied to the priv channels only after they are successfully opened. Then, the indirection table should be built according to the new number of channels. Currently, such a build is performed independently of whether the channels were opened successfully, and it is not reverted on failure.

The bug was introduced when the RSS params were removed from the channels struct and moved to the priv struct, which made the RSS params independent of the channels. This causes a crash at a later point, when accessing the rqn of a non-existing channel.

This patch fixes it by moving the indirection table build right before switching the priv channels to the new channels struct, after the new set of channels was successfully opened.

Fixes: bbeb53b8 ("net/mlx5e: Move RSS params to a dedicated struct")
Signed-off-by: Maria Pasechnik <mariap@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
- 13 Feb 2019, 7 commits
-
-
Submitted by Cong Wang

The current opt_inst_list operations inside team_nl_cmd_options_set() are too complex to track:

    LIST_HEAD(opt_inst_list);
    nla_for_each_nested(...) {
            list_for_each_entry(opt_inst, &team->option_inst_list, list) {
                    if (__team_option_inst_tmp_find(&opt_inst_list, opt_inst))
                            continue;
                    list_add(&opt_inst->tmp_list, &opt_inst_list);
            }
    }
    team_nl_send_event_options_get(team, &opt_inst_list);

While we retrieve 'opt_inst' from team->option_inst_list, it could be added to the local 'opt_inst_list' multiple times. The __team_option_inst_tmp_find() check doesn't help, as the setter team_mode_option_set() still calls team->ops.exit(), which uses ->tmp_list too, in __team_options_change_check().

Simplify the list operations by moving the 'opt_inst_list' and team_nl_send_event_options_get() into the nla_for_each_nested() loop, so that it is guaranteed we won't insert the same list entry multiple times. Therefore, __team_option_inst_tmp_find() can be removed too.

Fixes: 4fb0534f ("team: avoid adding twice the same option to the event list")
Fixes: 2fcdb2c9 ("team: allow to send multiple set events in one message")
Reported-by: syzbot+4d4af685432dc0e56c91@syzkaller.appspotmail.com
Reported-by: syzbot+68ee510075cf64260cc4@syzkaller.appspotmail.com
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
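In rough terms, the simplification gives each attribute iteration its own list and its own event send, so no entry can be queued twice across iterations. A sketch of that shape with illustrative structures (not the team driver's own types):

    #include <linux/list.h>

    struct example_opt_inst {
            struct list_head list;          /* on the device's option list */
            struct list_head tmp_list;      /* on the per-iteration temp list */
    };

    /* Called once per nested attribute: the temporary list is local to the
     * iteration, so there is nothing to deduplicate against and no
     * __team_option_inst_tmp_find() equivalent is needed.
     */
    static void example_handle_one_attr(struct list_head *option_inst_list)
    {
            LIST_HEAD(opt_inst_list);       /* fresh for every attribute */
            struct example_opt_inst *opt_inst;

            list_for_each_entry(opt_inst, option_inst_list, list)
                    list_add(&opt_inst->tmp_list, &opt_inst_list);

            /* send the options event for this attribute here; the temp
             * list then simply goes out of scope */
    }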
-
Submitted by Arthur Kiyanovski

Update driver version due to bug fix.

Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arthur Kiyanovski

Fix a race condition between ena_update_on_link_change() and ena_restore_device(). This race can occur if a link notification arrives while the driver is performing a reset sequence. In this case the link can be set up, enabling the device, before it is fully restored. If packets are sent at this time, the driver might access uninitialized data structures, causing a kernel crash.

Move the clearing of ENA_FLAG_ONGOING_RESET and netif_carrier_on() after ena_up() to ensure the device is ready when the link is set up.

Fixes: d18e4f68 ("net: ena: fix race condition between device reset and link up setup")
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Saeed Mahameed

When an ethernet frame is padded to meet the minimum ethernet frame size, the padding octets are not covered by the hardware checksum. Fortunately the padding octets are usually zeros, which don't affect the checksum. However, this is not guaranteed; for example, switches might choose to make other use of these octets. This repeatedly causes kernel hardware checksum faults.

Prior to the cited commit below, the skb checksum was forced to CHECKSUM_NONE when padding was detected. After it, we need to keep skb->csum updated. However, fixing up CHECKSUM_COMPLETE requires verifying and parsing IP headers; that is not worth the effort, as the packets are so small that CHECKSUM_COMPLETE has no significant advantage.

Future work: when reporting checksum complete is not an option for IP non-TCP/UDP packets, we can actually fall back to reporting checksum unnecessary, by looking at the cqe IPOK bit.

Fixes: 88078d98 ("net: pskb_trim_rcsum() and CHECKSUM_COMPLETE are friends")
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Russell King

During testing on Armada 388 platforms, it was found with a certain module configuration that it was possible to trigger a kernel oops during the module load process, caused by the phylink resolver being triggered for a currently disabled interface. This problem was introduced by changing the way the SFP registration works, which can now result in the sfp link down notification being called during phylink_create().

Fixes: b5bfc21a ("net: sfp: do not probe SFP module before we're attached")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Matteo Croce

Due to the depends on NET_UDP_TUNNEL, at the moment it is impossible to compile GENEVE if no other protocol depending on NET_UDP_TUNNEL is selected. Fix this by changing the depends to a select, and drop NET_IP_TUNNEL from the select list, as it already depends on NET_UDP_TUNNEL.

Signed-off-by: Matteo Croce <mcroce@redhat.com>
Reviewed-and-tested-by: Andrea Claudi <aclaudi@redhat.com>
Tested-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Bert Kenward

The bitmap of found partitions in efx_ef10_mtd_probe was not initialised, causing partitions to be suppressed based on whatever value happened to be in the bitmap at the start.

Fixes: 33664635 ("sfc: suppress duplicate nvmem partition types in efx_ef10_mtd_probe")
Signed-off-by: Bert Kenward <bkenward@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
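A minimal sketch of the missing initialisation, using generic names rather than the sfc driver's own:

    #include <linux/bitmap.h>
    #include <linux/bitops.h>

    #define EXAMPLE_NTYPES 64       /* illustrative number of partition types */

    /* The "found" bitmap lives on the stack, so it must be zeroed before
     * use; otherwise stale stack contents make test_and_set_bit() claim a
     * type was already seen and the partition is wrongly suppressed.
     * Assumes every types[i] is below EXAMPLE_NTYPES.
     */
    static void example_probe_partitions(const unsigned int *types, int n)
    {
            DECLARE_BITMAP(found, EXAMPLE_NTYPES);
            int i;

            bitmap_zero(found, EXAMPLE_NTYPES);     /* the missing step */

            for (i = 0; i < n; i++) {
                    if (test_and_set_bit(types[i], found))
                            continue;               /* duplicate type */
                    /* ... register this partition ... */
            }
    }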
-
- 12 Feb 2019, 1 commit
-
-
Submitted by Eric Dumazet

netif_rx() must be called under a strict contract. At device dismantle phase, core networking clears IFF_UP, and flush_all_backlogs() is called after an rcu grace period to make sure no incoming packet might be in a cpu backlog and still referencing the device.

Most drivers call netif_rx() from their interrupt handler, and since interrupts are disabled at device dismantle, netif_rx() does not have to check dev->flags & IFF_UP. Virtual drivers do not have this guarantee, and must therefore make the check themselves. Otherwise we risk use-after-free and/or crashes.

Note this patch also fixes a small issue that came with commit ce6502a8 ("vxlan: fix a use after free in vxlan_encap_bypass"), since the dev->stats.rx_dropped change was done on the wrong device.

Fixes: d342894c ("vxlan: virtual extensible lan")
Fixes: ce6502a8 ("vxlan: fix a use after free in vxlan_encap_bypass")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Petr Machata <petrm@mellanox.com>
Cc: Ido Schimmel <idosch@mellanox.com>
Cc: Roopa Prabhu <roopa@cumulusnetworks.com>
Cc: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
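A minimal sketch of the check a virtual device's local-delivery path needs before netif_rx(); simplified, not the actual vxlan code:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Nothing disables this path at dismantle time, so verify the target
     * device is still up before queueing to its backlog, and account the
     * drop on the device that actually dropped the packet.
     */
    static void example_deliver_local(struct net_device *dst_dev, struct sk_buff *skb)
    {
            if (unlikely(!(dst_dev->flags & IFF_UP))) {
                    dst_dev->stats.rx_dropped++;
                    kfree_skb(skb);
                    return;
            }

            skb->dev = dst_dev;
            netif_rx(skb);
    }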
-
- 11 Feb 2019, 2 commits
-
-
Submitted by Heiner Kallweit

This reverts commit 2e6eedb4. Sander reported a regression causing a kernel panic [1], therefore let's revert this commit.

[1] https://marc.info/?t=154965066400001&r=1&w=2

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Heiner Kallweit

This reverts commit bd7153bd. There doesn't seem to be anything wrong with this patch; it is only being reverted to get back to a stable baseline.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 Feb 2019, 1 commit
-
-
Submitted by Russell King

When we probe an SFP module, we expect to be able to call the upstream device's module_insert() function so that the upstream link can be configured. However, when the upstream device is delayed, we currently may end up probing the module before the upstream device is available, and lose the module_insert() call. Avoid this by holding off probing the module until the SFP bus is properly connected to both the SFP socket driver and the upstream driver.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 Feb 2019, 2 commits
-
-
Submitted by Arun Parameswaran

Fix the issue where non-BCM58XX chips in the b53 driver fail when the irq is not specified in the device tree. Remove the check for BCM58XX in b53_srab_prepare_irq(), so that 'port->irq' is set to -ENXIO when the irq is not specified in the device tree.

Fixes: 16994374 ("net: dsa: b53: Make SRAB driver manage port interrupts")
Fixes: b2ddc48a ("net: dsa: b53: Do not fail when IRQ are not initialized")
Signed-off-by: Arun Parameswaran <arun.parameswaran@broadcom.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Hangbin Liu

When we add a new GENEVE device with an IPv6 remote, checking only for IS_ENABLED(CONFIG_IPV6) is not enough, as we may disable IPv6 on the kernel command line (ipv6.disable=1), and calling rt6_lookup() would then cause a NULL pointer dereference.

v2:
- don't mix declarations and code (reported by Stefano Brivio, Eric Dumazet)
- there's no need to use in6_dev_get() as we only need to check that idev exists (reported by David Ahern). This is under RTNL, so we can simply use __in6_dev_get() instead (Stefano, Eric).

Reported-by: Jianlin Shi <jishi@redhat.com>
Fixes: c40e89fd ("geneve: configure MTU based on a lower device")
Cc: Alexey Kodanev <alexey.kodanev@oracle.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
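A minimal sketch of the guard, assuming the caller holds RTNL as geneve's configuration path does (so __in6_dev_get() may be used without rcu_read_lock()):

    #include <linux/netdevice.h>
    #include <net/addrconf.h>

    /* Returns false when IPv6 is disabled (ipv6.disable=1) or otherwise
     * not initialised on the lower device, in which case the caller must
     * skip the rt6_lookup()-based path entirely.
     */
    static bool example_ipv6_usable(const struct net_device *lowerdev)
    {
            return __in6_dev_get(lowerdev) != NULL;
    }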
-
- 07 Feb 2019, 20 commits
-
-
Submitted by Bjorn Helgaas

8c56df37 ("net: add support for Cavium PTP coprocessor") added the Cavium PTP coprocessor driver and enabled it by default. Remove the "default y" because the driver only applies to Cavium ThunderX processors.

Fixes: 8c56df37 ("net: add support for Cavium PTP coprocessor")
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Wei

dev_consume_skb_irq() should be called in sbdma_tx_process() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly.

Signed-off-by: Yang Wei <yang.wei9@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
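This commit and the rest of the series below apply one pattern: in a TX-completion path, a successfully transmitted skb is released with dev_consume_skb_irq() instead of dev_kfree_skb_irq(), so drop-tracking tools do not count it as a drop. A generic sketch of that pattern (the completion handler itself is illustrative):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Runs in hard-IRQ context when the hardware reports a descriptor as
     * done: consumed frames are not drops, so only real errors go through
     * the kfree variant that dropwatch/perf will flag.
     */
    static void example_tx_complete(struct sk_buff *skb, bool tx_ok)
    {
            if (tx_ok)
                    dev_consume_skb_irq(skb);       /* transmitted fine */
            else
                    dev_kfree_skb_irq(skb);         /* genuine drop */
    }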
-
Submitted by Yang Wei

dev_consume_skb_irq() should be called in velocity_free_tx_buf() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly.

Signed-off-by: Yang Wei <yang.wei9@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Wei

dev_consume_skb_irq() should be called in bdx_tx_cleanup() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly.

Signed-off-by: Yang Wei <yang.wei9@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Wei

dev_consume_skb_irq() should be called when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly.

Signed-off-by: Yang Wei <yang.wei9@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Wei

dev_consume_skb_irq() should be called in hdlc_tx_done() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly.

Signed-off-by: Yang Wei <yang.wei9@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Wei

dev_consume_skb_irq() should be called in mpc52xx_fec_tx_interrupt() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly.

Signed-off-by: Yang Wei <yang.wei9@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Wei

dev_consume_skb_irq() should be called in epic_tx() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly.

Signed-off-by: Yang Wei <yang.wei9@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Wei

dev_consume_skb_irq() should be called in dscc4_tx_irq() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly.

Signed-off-by: Yang Wei <yang.wei9@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Wei

dev_consume_skb_irq() should be called in de_tx() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly.

Signed-off-by: Yang Wei <yang.wei9@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Wei

dev_consume_skb_irq() should be called in dfx_xmt_done() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly.

Signed-off-by: Yang Wei <yang.wei9@zte.com.cn>
Reviewed-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tonghao Zhang

In some cases, we may use multiple pedit actions to modify packets. With the command shown below, only the last pedit action takes effect:

    $ tc filter add dev netdev_rep parent ffff: protocol ip prio 1 \
        flower skip_sw ip_proto icmp dst_ip 3.3.3.3 \
        action pedit ex munge ip dst set 192.168.1.100 pipe \
        action pedit ex munge eth src set 00:00:00:00:00:01 pipe \
        action pedit ex munge eth dst set 00:00:00:00:00:02 pipe \
        action csum ip pipe \
        action tunnel_key set src_ip 1.1.1.100 dst_ip 1.1.1.200 dst_port 4789 id 100 \
        action mirred egress redirect dev vxlan0

To fix it, we add max_mod_hdr_actions to the mlx5e_tc_flow_parse_attr struct; max_mod_hdr_actions stores the maximum number of pedit actions we support, num_mod_hdr_actions indicates how many pedit actions are used, and all pedit actions are stored in mod_hdr_actions.

Fixes: d79b6df6 ("net/mlx5e: Add parsing of TC pedit actions to HW format")
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tonghao Zhang

When we offload tc filters to hardware, hardware flows can be updated when the mac of the encap destination ip is changed. But we ignore one case: the mac of the local encap ip can be changed too, so those flows should also be updated.

To fix it, add route_dev to the mlx5e_encap_entry struct to save the local encap netdevice. When the mac changes, the kernel will flush all the neighbours on that netdevice and send a NETEVENT_NEIGH_UPDATE event. The mlx5 driver will delete the flows and add them back when the neighbour becomes available again.

Fixes: 232c0013 ("net/mlx5e: Add support to neighbour update flow")
Cc: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Manish Chopra

Version update for the qed/qede modules.

Signed-off-by: Manish Chopra <manishc@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Rahul Verma

Fix unnecessary logging of a message in an expected default case, where the coalescing values read (via ethtool -c) might not be valid unless they have been configured explicitly in the hardware using ethtool -C.

Signed-off-by: Rahul Verma <rverma@marvell.com>
Signed-off-by: Manish Chopra <manishc@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Sudarsana Reddy Kalluru

Under heavy traffic load, when changing the number of channels via ethtool (ethtool -L), which causes the interface to be reloaded, it was observed that some packets get transmitted on an old TX channel/queue id which no longer exists after the channel configuration change, leading to a system crash. Add a safeguard in the driver by validating the queue id through ndo_select_queue(), which is called before ndo_start_xmit().

Signed-off-by: Sudarsana Reddy Kalluru <skalluru@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
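A sketch of the kind of queue-id safeguard described above, written as a plain helper rather than the driver's actual ndo_select_queue() implementation (whose prototype varies by kernel version):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* If the queue recorded in the skb refers to a TX queue that no longer
     * exists after a channel reconfiguration, fold it back into the valid
     * range instead of letting ndo_start_xmit() touch a stale ring.
     */
    static u16 example_validate_txq(const struct net_device *dev, const struct sk_buff *skb)
    {
            u16 qid = skb_get_queue_mapping(skb);

            if (unlikely(qid >= dev->real_num_tx_queues))
                    qid %= dev->real_num_tx_queues;

            return qid;
    }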
-
Submitted by Sudarsana Reddy Kalluru

The maximum number of supported queues is derived incorrectly in the multi-CoS case. The TCs need to be considered while calculating num_queues for the PF.

Signed-off-by: Sudarsana Reddy Kalluru <skalluru@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Sudarsana Reddy Kalluru

In Unified Fabric Port (UFP) mode, the switch provides the traffic class (TC) value to be used for the traffic. Configure the hardware to use this TC value for the vlan priority.

Signed-off-by: Sudarsana Reddy Kalluru <skalluru@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Manish Chopra

When slowpath messages are sent at a high rate, the resulting events can lead to a FW assert if they are not handled fast enough (Event Queue Full assert). Attempt to send queued slowpath messages only after the newly evacuated entries in the EQ ring have been indicated to FW.

Signed-off-by: Manish Chopra <manishc@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Mike Snitzer

bio_trim() has an early return, which makes it _not_ idempotent, if the offset is 0 and the bio's bi_size already matches the requested size. Prior to DM, all users of bio_trim() were fine with this. But DM has exposed the fact that bio_trim()'s early return is incompatible with a cloned bio whose integrity payload must be trimmed via bio_integrity_trim().

Fix this by reverting DM back to doing the equivalent of bio_trim(), but in an idempotent manner (so bio_integrity_trim is always performed). Follow-on work is needed to assess what benefit bio_trim()'s early return is providing to its existing callers.

Reported-by: Milan Broz <gmazyland@gmail.com>
Fixes: 57c36519 ("dm: fix clone_bio() to trigger blk_recount_segments()")
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-