- 16 Aug 2019, 1 commit
-
-
Committed by Sudarsana Reddy Kalluru
Add a driver interface for reading the config attributes from a user-provided buffer and updating these values on the nvm config flash partition. This is an expansion of the existing "ethtool -f" implementation: the management FW has exposed an additional method of configuring some of the nvram options, and this makes use of that. The implementation will come into use when newer FW files containing configuration directives employing this API are provided to "ethtool -f".
Signed-off-by: Sudarsana Reddy Kalluru <skalluru@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 Aug 2019, 2 commits
-
-
Committed by Heiner Kallweit
So far phy_speed_down/up can be used up to 1Gbps only. Remove this restriction by using the new helper __phy_speed_down. The new member adv_old in struct phy_device is used by phy_speed_up to restore the modes that were advertised before phy_speed_down was called. Don't simply advertise what is supported, because a user may have intentionally removed modes from advertisement.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
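A typical caller is a MAC driver's suspend/resume path while Wake-on-LAN is active. A minimal sketch, assuming a hypothetical driver private struct (foo_priv and both functions are illustrative, not from the patch):

  #include <linux/phy.h>

  struct foo_priv {
          struct phy_device *phydev;      /* hypothetical private data */
  };

  /* WoL suspend: drop to the lowest supported speed to save energy;
   * sync = false: don't wait for renegotiation to complete */
  static int foo_suspend(struct foo_priv *priv)
  {
          return phy_speed_down(priv->phydev, false);
  }

  /* resume: re-advertise the modes saved in phydev->adv_old */
  static int foo_resume(struct foo_priv *priv)
  {
          return phy_speed_up(priv->phydev);
  }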
-
Committed by Heiner Kallweit
phy_speed_down_core provides most of the functionality for phy_speed_down. It makes use of the new helper phy_resolve_min_speed, which is based on the sorting of the settings[] array. In certain cases it may be helpful to be able to exclude legacy half-duplex modes, therefore prepare phy_resolve_min_speed() for it.

v2:
- rename __phy_speed_down to phy_speed_down_core
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
-
- 13 Aug 2019, 4 commits
-
-
Committed by Jeremy Sowden
A number of non-UAPI netfilter header files contained superfluous "#ifdef __KERNEL__" guards. Remove them.
Signed-off-by: Jeremy Sowden <jeremy@azazel.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
Committed by Jeremy Sowden
linux/netfilter.h defines a number of structs and inline functions which are only available if CONFIG_NETFILTER is enabled. These structs and functions are used in declarations and definitions in other header files. Add preprocessor checks to make sure these headers will compile if CONFIG_NETFILTER is disabled.
Signed-off-by: Jeremy Sowden <jeremy@azazel.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
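The usual kernel idiom here is a no-op inline stub when the option is off. A simplified illustration of the pattern, not the exact hunk from the patch:

  struct net;
  struct nf_hook_ops;

  #ifdef CONFIG_NETFILTER
  int nf_register_net_hook(struct net *net, const struct nf_hook_ops *ops);
  #else
  static inline int
  nf_register_net_hook(struct net *net, const struct nf_hook_ops *ops)
  {
          return 0;       /* netfilter compiled out: nothing to register */
  }
  #endif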
-
Committed by Jeremy Sowden
A number of netfilter header files used declarations and definitions from other headers without including them. Add include directives to make those declarations and definitions available.
Signed-off-by: Jeremy Sowden <jeremy@azazel.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
Committed by Jeremy Sowden
linux/netfilter/ipset/ip_set.h included four other header files:

  include/linux/netfilter/ipset/ip_set_comment.h
  include/linux/netfilter/ipset/ip_set_counter.h
  include/linux/netfilter/ipset/ip_set_skbinfo.h
  include/linux/netfilter/ipset/ip_set_timeout.h

Of these, the first three were not included anywhere else. The last, ip_set_timeout.h, was included in a couple of other places, but defined inline functions which call other inline functions defined in ip_set.h, so ip_set.h had to be included before it. Inline all four into ip_set.h, and update the other files that included ip_set_timeout.h.
Signed-off-by: Jeremy Sowden <jeremy@azazel.net>
Acked-by: Jozsef Kadlecsik <kadlec@netfilter.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
- 12 Aug 2019, 2 commits
-
-
Committed by Heiner Kallweit
Add helper function phy_modify_paged_changed; its behavior is the same as that of phy_modify_changed.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
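Like phy_modify_changed, it returns a negative errno, 0 if the register already held the target value, or 1 if the value actually changed. A usage sketch with made-up page and register numbers:

  /* illustration only: page/register numbers are made up */
  int ret = phy_modify_paged_changed(phydev, 0x0a43, 0x11, 0, BIT(4));

  if (ret < 0)
          return ret;             /* MDIO access failed */
  if (ret > 0)
          restart_aneg = true;    /* register content changed */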
-
Committed by Heiner Kallweit
The integrated PHY in the 2.5Gbps chip RTL8125 is the first PHY (known to me) that uses standard Clause 22 for all modes up to 1Gbps and adds 2.5Gbps control using vendor-specific registers. To use phylib for the standard part, little extension is needed:
- Move most of genphy_config_aneg to a new function __genphy_config_aneg that takes a parameter indicating whether restarting auto-negotiation is needed (depending on whether the content of the vendor-specific advertisement register changed).
- Don't clear phydev->lp_advertising in genphy_read_status so that we can set non-C22 mode flags beforehand.
Basically both changes mimic the behavior of the equivalent Clause 45 functions.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
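A hedged sketch of a vendor config_aneg built on the new helper; the 2.5G advertisement helper below is an assumption for illustration, not the real RTL8125 code:

  /* foo_write_2500_adv() is assumed to write the vendor-specific
   * 2.5G advertisement register and return 1 if its content
   * changed, 0 if not, negative errno on error */
  static int foo_config_aneg(struct phy_device *phydev)
  {
          int changed = foo_write_2500_adv(phydev);

          if (changed < 0)
                  return changed;

          /* standard C22 aneg; restart only if something changed */
          return __genphy_config_aneg(phydev, changed);
  }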
-
- 11 Aug 2019, 2 commits
-
-
Committed by Greg Kroah-Hartman
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. This cleans up a lot of unneeded code and logic around the debugfs files, making all of this much simpler and easier to understand, as we don't need to keep the dentries saved anymore.
Cc: Saeed Mahameed <saeedm@mellanox.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: netdev@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
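The removed pattern looks roughly like this (simplified illustration, not an actual hunk):

  /* before: error handling that buys nothing */
  dir = debugfs_create_dir("stats", parent);
  if (!dir)
          return -ENOMEM;
  debugfs_create_u32("resets", 0444, dir, &priv->resets);

  /* after: just call it; a debugfs failure is never fatal */
  dir = debugfs_create_dir("stats", parent);
  debugfs_create_u32("resets", 0444, dir, &priv->resets);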
-
Committed by Greg Kroah-Hartman
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. This cleans up a lot of unneeded code and logic around the debugfs wimax files, making all of this much simpler and easier to understand.
Cc: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
Cc: linux-wimax@intel.com
Cc: netdev@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Aug 2019, 3 commits
-
-
Committed by Parav Pandit
Currently mlx5_eswitch_rep stores the same hw ID for all representors, but it is never used from this structure; it is always used from mlx5_vport. Hence, remove the unused field.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Vlad Buslov
To remove the dependency on the rtnl lock, protect the mod_hdr hash table from concurrent modifications with a new mutex. Implement a helper function to get the flow namespace, to prevent code duplication.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Vlad Buslov
The list of flows attached to a mod header entry is used as an implicit reference counter (the entry is deallocated when the list becomes empty) and as a mechanism to obtain the mod header entry that a flow is attached to (through the list head). This is not safe when concurrent modification of the list of flows attached to a mod header entry is possible; a proper atomic reference counter is required to support concurrent access.

As a preparation for extending mod header entries with reference counting, extract the code that looks up and deletes a mod header entry into standalone get/put helpers. To remove the dependency on external locking, extend the mod header entry with a reference counter to manage its lifetime, and extend the flow structure with a direct pointer to the mod header entry the flow is attached to.

To remove code duplication between the legacy and switchdev mode implementations, which both support mod_hdr functionality, store the mod_hdr table in a dedicated structure used by both the fdb and kernel namespaces. The new table structure is extended with a table lock by one of the following patches in this series. Implement a helper function to get the correct mod_hdr table depending on flow namespace.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Jianbo Liu <jianbol@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
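A hedged sketch of the get/put pattern described above; the struct layout and names are assumptions for illustration, not the mlx5 code:

  #include <linux/hashtable.h>
  #include <linux/refcount.h>
  #include <linux/slab.h>

  /* hypothetical entry: the refcount replaces the implicit
   * "number of attached flows" semantics of the old list */
  struct mod_hdr_entry {
          struct hlist_node node;
          refcount_t refcnt;
  };

  static struct mod_hdr_entry *mod_hdr_get(struct mod_hdr_entry *e)
  {
          refcount_inc(&e->refcnt);
          return e;
  }

  static void mod_hdr_put(struct mod_hdr_entry *e)
  {
          /* free only when the last attached flow lets go */
          if (refcount_dec_and_test(&e->refcnt)) {
                  hash_del(&e->node);
                  kfree(e);
          }
  }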
-
- 09 Aug 2019, 3 commits
-
-
Committed by Jose Abreu
Implement the RSS functionality and add the corresponding callbacks in the XGMAC core.

Changes from v1:
- Do not use magic constants (Jakub)
- Use ethtool_rxfh_indir_default() (Jakub)
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
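ethtool_rxfh_indir_default() is simply "index % n_rx_rings", so a default indirection table spreads entries round-robin across the RX queues. A fill-loop sketch (function and variable names assumed):

  #include <linux/ethtool.h>

  /* hypothetical default RSS indirection table setup */
  static void foo_rss_fill_default(u32 *table, u32 entries, u32 rx_queues)
  {
          u32 i;

          for (i = 0; i < entries; i++)
                  table[i] = ethtool_rxfh_indir_default(i, rx_queues);
  }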
-
Committed by Edward Cree
When GRO decides not to coalesce a packet, in napi_frags_finish(), instead of passing it to the stack immediately, place it on a list in the napi struct. Then, at flush time (napi_complete_done(), napi_poll(), or napi_busy_loop()), call netif_receive_skb_list_internal() on the list.

We'd like to do that in napi_gro_flush(), but it's not called if !napi->gro_bitmask, so we have to do it in the callers instead. (There are a handful of drivers that call napi_gro_flush() themselves, but it's not clear why, or whether this will affect them.)

Because a full 64 packets is an inefficiently large batch, also consume the list whenever it exceeds gro_normal_batch, a new net/core sysctl that defaults to 8.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
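The batching step itself is small; a sketch of the core idea (helper and field names are assumptions based on the description above):

  /* queue a non-coalesced skb; flush once the batch fills up */
  static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb)
  {
          list_add_tail(&skb->list, &napi->rx_list);
          if (++napi->rx_count >= gro_normal_batch)
                  gro_normal_list(napi); /* feeds the list to
                                          * netif_receive_skb_list_internal() */
  }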
-
Committed by Rahul Verma
Supported ports in "ethtool <eth1>" are displayed based on media type: for media types fibre and twinaxial, the port type is "FIBRE"; media type Base-T is "TP", and media KR is "Backplane".

V1->V2: Corrected the subject.
Signed-off-by: Rahul Verma <rahulv@marvell.com>
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 Aug 2019, 1 commit
-
-
Committed by Arnd Bergmann
ARM64 randconfig builds regularly run into a build error, especially when NUMA_BALANCING and SPARSEMEM are enabled but not SPARSEMEM_VMEMMAP:

  #error "KASAN: not enough bits in page flags for tag"

The last-cpupid bits are already conditional on the available space, so the result of the calculation is a bit random as to whether they were already left out or not. Adding the kasan tag bits before last-cpupid makes it much more likely to end up with a successful build here, and should be reliable for randconfig at least, as long as that does not randomize NR_CPUS or NODES_SHIFT but uses the defaults.

In order for the modified check not to trigger in the x86 vdso32 code, where all constants are wrong (building with -m32), enclose all the definitions with an #ifdef.

[arnd@arndb.de: build fix]
Link: http://lkml.kernel.org/r/CAK8P3a3Mno1SWTcuAOT0Wa9VS15pdU6EfnkxLbDpyS55yO04+g@mail.gmail.com
Link: http://lkml.kernel.org/r/20190722115520.3743282-1-arnd@arndb.de
Link: https://lore.kernel.org/lkml/20190618095347.3850490-1-arnd@arndb.de/
Fixes: 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 Aug 2019, 5 commits
-
-
Committed by Gavi Teitz
Add a pool of flow counters, based on flow counter bulks, removing the need to allocate a new counter via a costly FW command during the flow creation process. The time it takes to acquire/release a flow counter is cut from ~50 [us] to ~50 [ns].

The pool is part of the mlx5 driver instance and provides flow counters for aging flows. mlx5_fc_create() was modified to provide counters for aging flows from the pool by default, and mlx5_fc_destroy() was modified to release counters back to the pool for later reuse. If bulk allocation is not supported or fails, and for non-aging flows, the fallback behavior is to allocate and free individual counters.

The pool is comprised of three lists of flow counter bulks: one of fully used bulks, one of partially used bulks, and one of unused bulks. Counters are provided from the partially used bulks first, to help limit bulk fragmentation.

The pool maintains a threshold and strives to keep the amount of available counters below it. The pool is increased in size when a counter acquisition request is made and there are no available counters, and it is decreased in size when the last counter in a bulk is released and there are more available counters than the threshold. All pool size changes are done in the context of the acquiring/releasing process.

The value of the threshold is directly correlated to the amount of used counters the pool is providing, while constrained by a hard maximum, and is recalculated every time a bulk is allocated/freed. This ensures that the pool only consumes large amounts of memory for available counters if the pool is being used heavily. When fully populated and at the hard maximum, the buffer of available counters consumes ~40 [MB].
Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
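A hedged sketch of the pool bookkeeping described above (field names are assumptions based on the description):

  #include <linux/list.h>
  #include <linux/mutex.h>

  /* hypothetical shape of the flow counter pool */
  struct fc_pool {
          struct mutex lock;               /* serializes size changes */
          struct list_head fully_used;     /* bulks with no free counters */
          struct list_head partially_used; /* preferred source, limits
                                            * bulk fragmentation */
          struct list_head unused;         /* completely free bulks */
          int available_fcs;               /* counters ready to hand out */
          int used_fcs;                    /* counters currently handed out */
          int threshold;                   /* ceiling for available_fcs */
  };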
-
Committed by Eli Cohen
Check if the firmware supports the requested element type before attempting to create the element type. In addition, explicitly specify the requested element type and tsar type.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Reviewed-by: Paul Blakey <paulb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Saeed Mahameed
The first reserved field is off by one: instead of reserved_at_1 it should be reserved_at_2. Fix that.
Fixes: a12ff35e ("net/mlx5: Introduce TLS TX offload hardware bits and structures")
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Gavi Teitz
Add a handle to invoke the new FW capability of allocating a bulk of flow counters.
Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Gavi Teitz
Towards introducing the ability to allocate bulks of flow counters, refactor the flow counter bulk query process. Remove functions and structs whose names indicated being used for flow counter bulk allocation FW commands, despite them actually only being used to support bulk querying, and migrate their functionality to correctly named functions in their natural location, fs_counters.c.

Additionally, optimize the bulk query process by:
* Extracting the memory used for the query to mlx5_fc_stats so that it is only allocated once, and not for each bulk query.
* Querying all the counters in one function call.
Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
- 01 Aug 2019, 1 commit
-
-
Committed by Juergen Gross
Instead of always calling xen_destroy_contiguous_region() in case the memory is DMA-able for the used device, do so only in case it has been made DMA-able via xen_create_contiguous_region() before. This will avoid a lot of xen_destroy_contiguous_region() calls for 64-bit capable devices. As the memory in question is owned by swiotlb-xen, the PG_owner_priv_1 flag of the first allocated page can be used for remembering.
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
-
- 31 Jul 2019, 7 commits
-
-
Committed by Stefano Garzarella
fwd_cnt and last_fwd_cnt are protected by rx_lock, so we should use the same spinlock also in the TX path. Move buf_alloc under the same lock as well.
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Stefano Garzarella
In order to reduce the number of credit update messages, we send them only when the space available, as seen by the transmitter, is less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
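A hedged sketch of the receive-side check (fwd_cnt, last_fwd_cnt, and buf_alloc are the fields named in the previous patch in this series; the update helper name is an assumption):

  /* decide whether the peer needs a fresh credit update */
  u32 free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);

  if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
          send_credit_update(vsk);        /* assumed helper name */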
-
Committed by Stefano Garzarella
Since virtio-vsock was introduced, the buffers filled by the host and pushed to the guest using the vring are directly queued in a per-socket list. These buffers are preallocated by the guest with a fixed size (4 KB).

The maximum amount of memory used by each socket should be controlled by the credit mechanism. The default credit available per socket is 256 KB, but if we use only 1 byte per packet, the guest can queue up to 262144 4 KB buffers, using up to 1 GB of memory per socket. In addition, the guest will continue to fill the vring with new 4 KB free buffers to avoid starvation of other sockets.

This patch mitigates the issue by copying the payload of small packets (< 128 bytes) into the buffer of the last packet queued, in order to avoid wasting memory.
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Arnd Bergmann
Support for handling the PPPOEIOCSFWD ioctl in compat mode was added in linux-2.5.69 along with hundreds of other commands, but it was always broken: only the structure is compatible, the command number is not, due to the size encoded in it being sizeof(size_t) (originally sizeof(struct sockaddr_pppox)), which differs on 64-bit architectures.

Guillaume Nault adds: "And the implementation was broken until 2016 (see 29e73269 ("pppoe: fix reference counting in PPPoE proxy")), and nobody ever noticed. I should probably have removed this ioctl entirely instead of fixing it. Clearly, it has never been used."

Fix it by adding a compat_ioctl handler for all pppoe variants that translates the command number and then calls the regular ioctl function. All other ioctl commands handled by pppoe are compatible between 32-bit and 64-bit, and require compat_ptr() conversion. This should apply to all stable kernels.
Acked-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
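A sketch of that translation, based on the description above (the handler shape is illustrative):

  #include <linux/compat.h>
  #include <linux/if_pppox.h>

  /* the 32-bit ABI encoded a 4-byte size into the command number */
  #define PPPOEIOCSFWD32 _IOW(0xB1, 0, compat_size_t)

  static int pppox_compat_ioctl(struct socket *sock, unsigned int cmd,
                                unsigned long arg)
  {
          /* map the 32-bit command to the native one, fix up the
           * user pointer, then reuse the regular ioctl handler */
          if (cmd == PPPOEIOCSFWD32)
                  cmd = PPPOEIOCSFWD;

          return pppox_ioctl(sock, cmd, (unsigned long)compat_ptr(arg));
  }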
-
Committed by Jonathan Lemon
Now that page_offset is referenced through accessors, remove the union, and use bv_offset.
Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jonathan Lemon
Add skb_frag_off(), skb_frag_off_add(), skb_frag_off_set(), and skb_frag_off_copy() accessors for page_offset.
Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
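Callers then touch the fragment offset only through the helpers; a short usage sketch (hdr_len is a placeholder):

  #include <linux/skbuff.h>

  /* read, advance, and reset a fragment's page offset */
  skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
  unsigned int off = skb_frag_off(frag);

  skb_frag_off_add(frag, hdr_len);        /* skip a just-parsed header */
  skb_frag_off_set(frag, 0);              /* point back at page start */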
-
Committed by Jan Kara
Commit 33ec3e53 ("loop: Don't change loop device under exclusive opener") made the LOOP_SET_FD ioctl acquire an exclusive block device reference while it updates the loop device binding. However, this can make a perfectly valid mount(2) fail with EBUSY due to a racing LOOP_SET_FD temporarily holding the exclusive bdev reference, in cases like this:

  for i in {a..z}{a..z}; do
          dd if=/dev/zero of=$i.image bs=1k count=0 seek=1024
          mkfs.ext2 $i.image
          mkdir mnt$i
  done

  echo "Run"
  for i in {a..z}{a..z}; do
          mount -o loop -t ext2 $i.image mnt$i &
  done

Fix the problem by not getting a full exclusive bdev reference in LOOP_SET_FD but instead just marking the bdev as being claimed while we update the binding information. This blocks new exclusive openers instead of failing them with EBUSY, thus fixing the problem.
Fixes: 33ec3e53 ("loop: Don't change loop device under exclusive opener")
Cc: stable@vger.kernel.org
Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 30 Jul 2019, 2 commits
-
-
Committed by Toke Høiland-Jørgensen
A common pattern when using xdp_redirect_map() is to create a device map where the lookup key is simply the ifindex. Because device maps are arrays, this leaves holes in the map, and the map has to be sized to fit the largest ifindex, regardless of how many devices are actually needed in the map.

This patch adds a second type of device map where the key is looked up using a hashmap, instead of being used as an array index. This allows maps to be densely packed, so they can be smaller.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
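On the BPF side the new type slots into the usual redirect pattern; a minimal sketch in the libbpf BTF map style (sizing and key layout are illustrative):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* dense device map keyed by hash rather than array index */
  struct {
          __uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
          __uint(max_entries, 32);
          __type(key, __u32);     /* e.g. ingress ifindex */
          __type(value, __u32);   /* egress ifindex */
  } tx_port SEC(".maps");

  SEC("xdp")
  int xdp_redirect_hash(struct xdp_md *ctx)
  {
          return bpf_redirect_map(&tx_port, ctx->ingress_ifindex, 0);
  }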
-
Committed by Toke Høiland-Jørgensen
When we changed the device and CPU maps to use linked lists instead of bitmaps, we also removed the need for the map_insert_ctx() helpers to keep track of the bitmaps inside each map. However, it seems I forgot to remove the function definition stubs, so remove those here.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 29 Jul 2019, 3 commits
-
-
Committed by Andy Shevchenko
Legacy platform data must go away. We are on the safe side here since there are no users of it in the kernel. If anyone for any odd reason needs it, the GPIO lookup tables and built-in device properties are at their service.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
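For reference, a board file can supply the same wiring through a lookup table; a sketch with made-up chip label, pin number, and device name:

  #include <linux/gpio/machine.h>
  #include <linux/init.h>

  /* hypothetical board-file replacement for the platform data */
  static struct gpiod_lookup_table foo_gpios = {
          .dev_id = "foo-device.0",       /* assumed device name */
          .table = {
                  GPIO_LOOKUP("gpiochip0", 17, "reset", GPIO_ACTIVE_LOW),
                  { /* sentinel */ }
          },
  };

  static int __init foo_board_init(void)
  {
          gpiod_add_lookup_table(&foo_gpios);
          return 0;
  }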
-
Committed by John Crispin
HE allows peers to negotiate the aggregation fragmentation level to be used during transmission. The level can be 1-3. The Ext element is added behind the ADDBA request inside the action frame. The responder will then reply with the same level, or a lower one if the requested one is not supported. This patch only handles the negotiation part, as the ADDBA frames get passed to the ath11k firmware, which does the rest of the magic for us as well as generating the requests.
Signed-off-by: Shashidhar Lakkavalli <slakkavalli@datto.com>
Signed-off-by: John Crispin <john@phrozen.org>
Link: https://lore.kernel.org/r/20190729104512.27615-1-john@phrozen.org
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
Committed by John Crispin
Johannes mentioned that the comment should not reference mac80211, as other subsystems might call the helper.
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/20190729102342.8659-1-john@phrozen.org
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
- 28 Jul 2019, 2 commits
-
-
Committed by Bartosz Golaszewski
If gpiolib is disabled, we use the inline stubs from gpio/consumer.h instead of the regular definitions of the GPIO API. The stubs for the 'optional' variants of the gpiod_get routines return NULL in this case, as if the relevant GPIO wasn't found. This is correct so far.

Calling other (non-gpiod_get) stubs from this header triggers a warning, because the GPIO descriptor couldn't have been requested. The warning however is unconditional (WARN_ON(1)) and is emitted even if the passed descriptor pointer is NULL. We don't want to force the users of 'optional' gpiod_get to check the returned pointer before calling e.g. gpiod_set_value(), so let's only WARN on non-NULL descriptors.
Cc: stable@vger.kernel.org
Reported-by: Claus H. Stovgaard <cst@phaseone.com>
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
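The change is easiest to see on a single stub; a simplified sketch of one representative function, not the full header:

  /* gpio/consumer.h stub used when GPIOLIB is disabled */
  static inline void gpiod_set_value(struct gpio_desc *desc, int value)
  {
          /* NULL descriptors from 'optional' getters are fine;
           * only a non-NULL desc indicates a bogus request path */
          WARN_ON(desc);
  }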
-
Committed by Thierry Reding
The Tegra EQOS driver already resets the MDIO bus at probe time via the reset GPIO specified in the phy-reset-gpios device tree property. There is no need to reset the bus again later on. This avoids the need to query the device tree for the snps,reset GPIO, which is not part of the Tegra EQOS device tree bindings, and quiesces an error message from the generic bus reset code when it doesn't find the snps,reset related delays.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 Jul 2019, 2 commits
-
-
Committed by Thierry Reding
"Findfrom" is not a word. Replace the function synopsis by something that makes sense. Signed-off-by: NThierry Reding <treding@nvidia.com> Signed-off-by: NRob Herring <robh@kernel.org>
-
Committed by Subash Abhinov Kasiviswanathan
The udp_ip4_ind bit is set only for IPv4 UDP non-fragmented packets, so that the hardware can flip the checksum to 0xFFFF if the computed checksum is 0, per RFC 768. However, this bit had to be set for IPv6 UDP non-fragmented packets as well, per hardware requirements; otherwise, IPv6 UDP packets whose computed checksum was 0 were transmitted by hardware and dropped in the network. In addition to setting this bit for IPv6 UDP, the field is appropriately renamed to udp_ind as part of this change.
Fixes: 5eb5f860 ("net: qualcomm: rmnet: Add support for TX checksum offload")
Cc: Sean Tranchetti <stranche@codeaurora.org>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
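A hedged sketch of the resulting marking logic; the struct here is a reduced, hypothetical view (the real header lives in the rmnet map code):

  #include <linux/in.h>
  #include <linux/types.h>

  struct ul_csum_header {
          u8 udp_ind:1;   /* renamed from udp_ip4_ind */
  };

  /* set for UDP regardless of IP version so hardware flips a
   * computed checksum of 0 to 0xFFFF per RFC 768 */
  static void mark_udp(struct ul_csum_header *hdr, u8 l4_proto)
  {
          hdr->udp_ind = (l4_proto == IPPROTO_UDP);
  }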
-