- 24 March 2018, 40 commits
-
Committed by Björn Töpel
The current page counting scheme assumes that the reference count cannot decrease until the received frame is sent to the upper layers of the networking stack. This assumption does not hold for the XDP_REDIRECT action, since a page (pointed out by xdp_buff) can have its reference count decreased via the xdp_do_redirect call. To work around that, we now start off with a large page count and then don't allow a refcount less than two.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
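A minimal sketch of the counting idea described above (not the driver's actual code; the rx_buffer layout and pagecnt_bias bookkeeping are assumptions borrowed from the common Intel-driver page-recycling pattern):

    #include <linux/mm.h>
    #include <linux/kernel.h>

    struct rx_buffer {                      /* simplified stand-in for the driver's RX buffer */
            struct page *page;
            unsigned short pagecnt_bias;    /* how many of our pre-charged references remain */
    };

    static bool rx_page_is_reusable(struct rx_buffer *buf)
    {
            /* If anyone besides us still holds a reference (e.g. a frame
             * redirected via xdp_do_redirect() that is still in flight),
             * do not recycle the page.
             */
            if (page_ref_count(buf->page) - buf->pagecnt_bias > 1)
                    return false;

            /* Re-charge the refcount in one large step instead of taking a
             * reference per frame, so the count never drops below two while
             * frames are outstanding.
             */
            if (unlikely(buf->pagecnt_bias == 1)) {
                    page_ref_add(buf->page, USHRT_MAX - 1);
                    buf->pagecnt_bias = USHRT_MAX;
            }
            return true;
    }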
-
Committed by Tony Nguyen
XDP stats are included in TX stats; however, they are not reported in the TX queue stats since they are set up on different queues. Add reporting for XDP queue stats to provide consistency between the total stats and per-queue stats.
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
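To illustrate what "consistency between total and per-queue stats" means in practice, here is a sketch of an ethtool stats fill loop that also walks the XDP rings; the structure and field names are placeholders, not the driver's:

    #include <linux/types.h>

    struct ring_stats { u64 packets; u64 bytes; };
    struct ring { struct ring_stats stats; };
    struct adapter {                        /* placeholder adapter layout */
            int num_tx_queues, num_xdp_queues;
            struct ring **tx_ring, **xdp_ring;
    };

    static int fill_queue_stats(struct adapter *ad, u64 *data)
    {
            int i, j = 0;

            for (i = 0; i < ad->num_tx_queues; i++) {
                    data[j++] = ad->tx_ring[i]->stats.packets;
                    data[j++] = ad->tx_ring[i]->stats.bytes;
            }
            /* Without this second loop the per-queue numbers never add up
             * to the totals once XDP_TX traffic is flowing.
             */
            for (i = 0; i < ad->num_xdp_queues; i++) {
                    data[j++] = ad->xdp_ring[i]->stats.packets;
                    data[j++] = ad->xdp_ring[i]->stats.bytes;
            }
            return j;
    }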
-
Committed by Tony Nguyen
Add support for XDP meta data when using build_skb. Based on commit 366a88fe ("bpf, ixgbe: add meta data support").
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
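The usual build_skb pattern for exposing XDP meta data to the stack looks roughly like this; a generic sketch, not necessarily the exact driver code:

    #include <linux/skbuff.h>
    #include <linux/filter.h>       /* struct xdp_buff */

    static void propagate_xdp_metadata(struct sk_buff *skb, struct xdp_buff *xdp)
    {
            /* Meta data written by the XDP program sits in [data_meta, data). */
            unsigned int metasize = xdp->data - xdp->data_meta;

            if (metasize)
                    skb_metadata_set(skb, metasize);
    }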
-
Committed by Tony Nguyen
The current XDP implementation hits the tail on every XDP_TX; change the driver to only hit the tail after packet processing is complete. Based on commit 7379f97a ("ixgbe: delay tail write to every 'n' packets").
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
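A sketch of the "delay the tail bump" idea: queue XDP_TX frames during the NAPI poll and write the doorbell once at the end. The ring layout and names below are placeholders, not the driver's:

    #include <linux/io.h>
    #include <asm/barrier.h>
    #include <linux/types.h>

    struct xdp_tx_ring {                    /* placeholder for the driver's XDP TX ring */
            u16 next_to_use;
            void __iomem *tail;
    };

    static void xdp_ring_kick(struct xdp_tx_ring *ring, bool frames_queued)
    {
            if (!frames_queued)
                    return;

            /* Descriptor writes must be visible to the device before the
             * doorbell; then a single tail write covers the whole batch.
             */
            wmb();
            writel(ring->next_to_use, ring->tail);
    }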
-
Committed by Tony Nguyen
This implements the XDP_TX action, modeled on the ixgbe implementation. However, instead of using the CPU id to determine which XDP queue to use, it uses the received RX queue index, similar to i40e. Doing this eliminates the restriction, present in ixgbe, that the number of CPUs not exceed the number of XDP queues. Also, based on the number of queues available, the number of TX queues may be reduced when an XDP program is loaded in order to accommodate the XDP queues. Based largely on commit 33fdc82f ("ixgbe: add support for XDP_TX action").
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
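The queue-selection difference from ixgbe boils down to indexing the XDP TX ring by the RX queue rather than by the CPU; an illustrative sketch with placeholder structures:

    struct ring { unsigned int queue_index; };
    struct vf_adapter {                     /* placeholder adapter layout */
            struct ring **xdp_ring;
    };

    /* ixgbe-style:  xdp_ring[smp_processor_id()]   -- requires nr_cpus <= nr_xdp_queues.
     * This driver:  xdp_ring[rx_ring->queue_index] -- one XDP TX ring per RX ring,
     * so the CPU-count restriction disappears.
     */
    static struct ring *select_xdp_ring(struct vf_adapter *ad, struct ring *rx_ring)
    {
            return ad->xdp_ring[rx_ring->queue_index];
    }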
-
Committed by Tony Nguyen
Implement XDP_PASS and XDP_DROP based on the ixgbe implementation. Based largely on commit 92470808 ("ixgbe: add XDP support for pass and drop actions").
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
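The core of a pass/drop implementation is just running the attached program and switching on the verdict; a minimal sketch (the surrounding buffer handling is driver-specific and omitted):

    #include <linux/filter.h>

    /* Returns true if the frame should be handed to the stack as an skb,
     * false if the RX buffer should simply be recycled.
     */
    static bool xdp_frame_passes(struct bpf_prog *prog, struct xdp_buff *xdp)
    {
            u32 act = bpf_prog_run_xdp(prog, xdp);

            switch (act) {
            case XDP_PASS:
                    return true;
            case XDP_DROP:
            default:        /* unknown or aborted verdicts are treated as drop here */
                    return false;
            }
    }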
-
Committed by Shannon Nelson
Fix things up to support TSO offload in conjunction with IPsec hw offload. This raises throughput with IPsec offload enabled to nearly line rate.
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson
There is no need to calculate the trailer length if we're doing a GSO/TSO, as there is no trailer added to the packet data. Also, don't bother clearing the flags field, as it was already cleared earlier.
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson
Since the ipsec data fields will be zero anyway in the non-ipsec case, we can remove the conditional jump.
Suggested-by: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson
With commit f8aa2696b4af ("esp: check the NETIF_F_HW_ESP_TX_CSUM bit before segmenting") we no longer need to protect ourselves from checksum offload requests on IPsec packets, so we can remove the check in our .ndo_features_check callback.
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Paul Greenwalt
Replace an assignment operation with an OR operation. The assignment was overwriting the value read from the PHY register, whereas the OR operation sets only the intended register bits. The bits that were being overwritten are reserved, so the assignment had no functional impact.
Reported-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
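The difference between the two forms, with placeholder register and bit names (illustrative only, not the actual ixgbe code):

    #include <linux/types.h>

    #define PHY_ENABLE_BIT  0x0001u         /* placeholder for the real PHY bit */

    static u16 set_enable_bit(u16 reg_val)  /* reg_val as read back from the PHY */
    {
            /* Broken form:  reg_val  = PHY_ENABLE_BIT;   clobbers what was just read.
             * Fixed form:   reg_val |= PHY_ENABLE_BIT;   touches only the intended bit.
             */
            return reg_val | PHY_ENABLE_BIT;
    }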
-
Committed by Paul Greenwalt
Add status register reads and a delay between reads to ixgbe_check_remove. Registers can read 0xFFFFFFFF during a PCI reset, which causes the driver to remove the adapter. The additional status register reads reduce the chance of this race condition. If the status register is not 0xFFFFFFFF, ixgbe_check_remove returns the value of the register being read.
Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
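The shape of the retry loop, with placeholder register offsets and retry count (a sketch of the idea, not the actual ixgbe code):

    #include <linux/io.h>
    #include <linux/delay.h>
    #include <linux/types.h>

    #define STATUS_REG_OFFSET   0x00008     /* placeholder offset */
    #define STATUS_READ_RETRIES 5           /* placeholder retry count */

    static u32 check_remove(void __iomem *hw_addr, u32 reg)
    {
            u32 status;
            int i;

            /* A device going through PCI reset can transiently return all-ones
             * on every read; re-reading the status register with a short delay
             * avoids declaring the adapter removed on that transient.
             */
            for (i = 0; i < STATUS_READ_RETRIES; i++) {
                    status = readl(hw_addr + STATUS_REG_OFFSET);
                    if (status != 0xFFFFFFFF)
                            return readl(hw_addr + reg);    /* device present: do the real read */
                    mdelay(3);
            }
            return 0xFFFFFFFF;                              /* still all-ones: treat as removed */
    }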
-
Committed by Mathias Kresin
The PHYs embedded in v1.1 of the VR9 SoC use different PHY ids. Add these PHY ids so the driver can be used with this VR9 version as well.
Signed-off-by: Mathias Kresin <dev@kresin.me>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Mathias Kresin
The VR9 PHY ids match only SoC version 1.2. Rename the macros to take this into account.
Signed-off-by: Mathias Kresin <dev@kresin.me>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by kbuild test robot
Fixes: 2bfbd35d ("net: hns3: Changes required in PF mailbox to support VF reset")
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Gustavo A. R. Silva
Assign true or false to boolean variables instead of an integer value. This issue was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Sudarsana Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Gustavo A. R. Silva
Assign true or false to boolean variables instead of an integer value. This issue was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Jon Maloy says:
====================
tipc: introduce 128-bit auto-configurable node id

We introduce a 128-bit free-format node identity as an alternative to the legacy <Zone.Cluster.Node> structured 32-bit node address. We also make configuration of this identity optional: if a bearer is enabled without a pre-configured node id, it will be set automatically based on the used interface's MAC or IP address.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jon Maloy
Selecting and explicitly configuring a TIPC node identity may be unwanted in some cases. In this commit we introduce a default setting, applied if the identity has not been set by the time the first bearer is enabled. We do this by using a raw copy of a unique identifier from the interface in use: the MAC address in the case of an L2 bearer, the IPv4/IPv6 address in the case of a UDP bearer.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jon Maloy
When a 32-bit node address is generated from a 128-bit identifier, there is a risk of collisions which must be discovered and handled. We do this as follows:
- We don't apply the generated address immediately to the node, but instead initiate a 1-second trial period to allow other cluster members to discover and handle such collisions.
- During the trial period the node periodically sends out a new type of message, DSC_TRIAL_MSG, using broadcast or emulated broadcast, to all the other nodes in the cluster.
- When a node receives such a message, it must check that the presented 32-bit identifier either is unused, or was used by the very same peer in a previous session. In both cases it accepts the request by not responding to it.
- If it finds that the same node has been up before using a different address, it responds with a DSC_TRIAL_FAIL_MSG containing that address.
- If it finds that the address has already been taken by some other node, it generates a new, unused address and returns it to the requester.
- During the trial period the requesting node must always be prepared to accept a failure message, i.e., a message where a peer suggests a different (or equal) address to the one tried. In those cases it must apply the suggested value as its trial address and restart the trial period.
This algorithm ensures that in the vast majority of cases a node will have the same address before and after a reboot. If a legacy user configures the address explicitly, there will be no trial period and no trial messages, so this protocol addition is completely backwards compatible.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
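Illustrative pseudocode in C for the receive-side check described above; every helper name here is made up for the sketch and does not correspond to the actual net/tipc code:

    #include <linux/types.h>

    struct node_table;      /* opaque placeholders for this sketch */
    struct node;

    extern struct node *lookup_by_addr(struct node_table *tbl, u32 addr);
    extern bool same_identity(struct node *n, const u8 id[16]);
    extern bool seen_before(struct node_table *tbl, const u8 id[16], u32 *old_addr);
    extern u32 pick_unused_addr(struct node_table *tbl);
    extern void send_trial_fail(const u8 id[16], u32 suggested_addr);

    static void handle_trial_msg(struct node_table *tbl, u32 trial_addr, const u8 peer_id[16])
    {
            struct node *holder = lookup_by_addr(tbl, trial_addr);
            u32 old_addr;

            /* The peer was here before under a different address: suggest that one back. */
            if (seen_before(tbl, peer_id, &old_addr) && old_addr != trial_addr) {
                    send_trial_fail(peer_id, old_addr);
                    return;
            }

            /* Address unused, or already held by this very peer: accept by staying silent. */
            if (!holder || same_identity(holder, peer_id))
                    return;

            /* Address taken by someone else: suggest a fresh, unused one. */
            send_trial_fail(peer_id, pick_unused_addr(tbl));
    }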
-
Committed by Jon Maloy
We add a 128-bit node identity as an alternative to the currently used 32-bit node address. For the sake of compatibility, and to minimize message header changes, we retain the existing 32-bit address field. When not set explicitly by the user, this field is filled with a hash value generated from the much longer node identity and used as a shorthand value for the latter. We permit either the address or the identity to be set by configuration, but not both, so when the address value is set by a legacy user the corresponding 128-bit node identity is generated based on that value.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
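The shorthand-address generation can be pictured as hashing the 128-bit identity down to 32 bits while avoiding the reserved value zero. A sketch of that idea only; the kernel may use a different hash function, jhash here is just for illustration:

    #include <linux/jhash.h>
    #include <linux/types.h>

    #define NODE_ID_LEN 16          /* 128-bit identity */

    static u32 addr_from_node_id(const u8 id[NODE_ID_LEN])
    {
            u32 addr = jhash(id, NODE_ID_LEN, 0);

            /* 0 means "address not yet set", so never hand that value out. */
            return addr ?: 1;
    }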
-
Committed by Jon Maloy
As a preparation for changing the addressing structure of TIPC, we replace all direct accesses to the tipc_net::own_addr field with the function dedicated for this, tipc_own_addr(). There are no changes to program logic in this commit.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jon Maloy
The removal of an internal structure of the node address has an unwanted side effect:
- Currently, if a user is sending an anycast message with destination domain 0, the tipc_nametbl_translate() function uses the 'closest-first' algorithm to first look for a node local destination, and only when no such destination is found does it resort to the cluster-global 'round-robin' lookup algorithm.
- Current users can get around this, and enforce unconditional use of global round-robin, by indicating a destination as Z.0.0 or Z.C.0.
- This option disappears when we make the node address flat, since the lookup algorithm has no way of recognizing this case. So, as long as there are node local destinations, the algorithm will always select one of those, and there is nothing the sender can do to change this.
We solve this by eliminating the 'closest-first' option, which was never a good idea anyway, for non-legacy users, but only for those. To distinguish between legacy and non-legacy users we introduce a new flag 'legacy_addr_format' in struct tipc_core, to be set when the user configures a legacy-style Z.C.N node address. Hence, when a legacy user indicates a zero lookup domain, 'closest-first' is selected, and in all other cases we use 'round-robin'.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jon Maloy
Nominally, TIPC organizes network nodes into a three-level network hierarchy consisting of the levels 'zone', 'cluster' and 'node'. This hierarchy is reflected in the node address format: it is sub-divided into an 8-bit zone id, a 12-bit cluster id, and a 12-bit node id. However, the 'zone' and 'cluster' levels have in reality never been fully implemented, and never will be. The result of this has been that the first 20 bits of the node identity structure have been wasted, and the usable node identity range within a cluster has been limited to 12 bits. This is starting to become a problem. In the following commits, we will need to be able to connect between nodes which are using the whole 32-bit value space of the node address. We therefore remove the restrictions on which values can be assigned to the node identity; it is from now on only a 32-bit integer with no assumed internal structure. Isolation between clusters is now achieved only by setting different values for the 'network id' field used during neighbor discovery, in practice making the latter the new cluster identity. The rules for accepting discovery requests/responses from neighboring nodes now become:
- If the user is using the legacy address format on both peers, reception of discovery messages is subject to the legacy lookup domain check in addition to the cluster id check.
- Otherwise, the discovery request/response is always accepted, provided both peers have the same network id.
This secures backwards compatibility for users who have been using zone or cluster identities as cluster separators, instead of the intended 'network id'.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jon Maloy
To facilitate the coming changes to the neighbor discovery functionality, we do some renaming and refactoring of that code. The functional changes in this commit are trivial, e.g., moving the message sending call in tipc_disc_timeout() outside the spinlock-protected region.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jon Maloy
As a preparation for the next commits we reduce the footprint of the function tipc_enable_bearer(), while hopefully making it simpler to follow.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Gustavo A. R. Silva
_rule_ is being freed and then dereferenced by accessing rule->ctx. Fix this by copying the value returned by PTR_ERR(rule->ctx) into a local variable for safe use after freeing _rule_.
Addresses-Coverity-ID: 1466041 ("Read from pointer after free")
Fixes: 05564d0a ("net/mlx5: Add flow-steering commands for FPGA IPSec implementation")
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
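The bug class is easiest to see side by side; an illustrative fragment with a placeholder struct, not the mlx5 code itself:

    #include <linux/err.h>
    #include <linux/slab.h>

    struct rule { void *ctx; };     /* placeholder */

    static int save_rule_error(struct rule *rule)
    {
            int err;

            /* Broken ordering:
             *      kfree(rule);
             *      return PTR_ERR(rule->ctx);   <-- read from freed memory
             */
            err = PTR_ERR(rule->ctx);       /* capture the code while rule is still valid */
            kfree(rule);
            return err;
    }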
-
Committed by David S. Miller
Kirill Tkhai says:
====================
Converting pernet_operations (part #11)

This series continues the review and conversion of pernet_operations to make it possible to execute them in parallel for several net namespaces at the same time. I thought the last series was the last, but one new pernet_operations has come into the kernel. This is udp_sysctl_ops, and here we convert it. Also, David Howells acked rxrpc_net_ops, so I resend that patch in case it should be queued by patchwork: https://www.spinics.net/lists/netdev/msg490678.html
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Kirill Tkhai
These pernet_operations modify the rxrpc_net_id-pointed per-net entities. There is an external link to AF_RXRPC in fs/afs/Kconfig, but it seems there are no other pernet_operations interested in those per-net entities.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Kirill Tkhai
These pernet_operations just initialize udp4 defaults.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Petr Machata
Since the first element of struct mlxsw_sp_span_parms is a pointer, the correct notation for zero-initializing this structure is not = {0}, but rather = {NULL}, as reported by sparse.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
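What sparse objects to, shown on a simplified stand-in for the struct (not the actual mlxsw definition):

    struct span_parms {                     /* simplified stand-in */
            struct net_device *to_dev;      /* first member is a pointer */
            unsigned int ttl;
    };

    /* "= {0}" initializes the pointer member with a plain integer 0, which
     * sparse flags; "= {NULL}" expresses the same all-zero initialization
     * without the warning.
     */
    static struct span_parms parms = { NULL };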
-
Committed by Davide Caratti
Test d959: Add cBPF action with valid bytecode
Test f84a: Add cBPF action with invalid bytecode
Test e939: Add eBPF action with valid object-file
Test 282d: Add eBPF action with invalid object-file
Test d819: Replace cBPF bytecode and action control
Test 6ae3: Delete cBPF action
Test 3e0d: List cBPF actions
Test 55ce: Flush BPF actions
Test ccc3: Add cBPF action with duplicate index
Test 89c7: Add cBPF action with invalid index
Test 7ab9: Add cBPF action with cookie
Changes since v1:
- use index=2^32-1 in test ccc3, add tests 7a89, 89c7 (thanks Roman Mashak)
- added test 282d
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Nikolay Aleksandrov
We need to use the br_vlan_enabled() helper, otherwise we'll break builds without bridge vlans:
net/bridge//br_if.c: In function ‘br_mtu’:
net/bridge//br_if.c:458:8: error: ‘const struct net_bridge’ has no member named ‘vlan_enabled’
  if (br->vlan_enabled)
        ^
net/bridge//br_if.c:462:1: warning: control reaches end of non-void function [-Wreturn-type]
 }
 ^
scripts/Makefile.build:324: recipe for target 'net/bridge//br_if.o' failed
Fixes: 419d14af ("bridge: Allow max MTU when multiple VLANs present")
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
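The helper exists precisely for this situation: it is provided by <linux/if_bridge.h> regardless of the kernel config and simply returns false when vlan filtering is not built in. A small sketch of its use:

    #include <linux/if_bridge.h>
    #include <linux/netdevice.h>

    static bool bridge_uses_vlans(const struct net_device *br_dev)
    {
            /* Safe with or without CONFIG_BRIDGE_VLAN_FILTERING, unlike
             * touching net_bridge::vlan_enabled directly (the field does
             * not exist when the option is disabled).
             */
            return br_vlan_enabled(br_dev);
    }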
-
Committed by David S. Miller
Dave Watson says:
====================
TLS Rx

TLS tcp socket RX implementation, to match the existing TX code. This patchset completes the software TLS socket, allowing full bi-directional communication over TLS using normal socket syscalls, after the handshake has been done in userspace. Only the symmetric encryption is done in the kernel. This allows usage of TLS sockets from within the kernel (for example with network block device, or from bpf). Performance can be better than userspace, with appropriate crypto routines [1].

sk->sk_socket->ops must be overridden to implement splice_read and poll, but otherwise the interface & implementation match TX closely. strparser is used to parse TLS framing on receive.

There are OpenSSL RX patches that work with this interface [2], as well as a testing tool using the socket interface directly (without cmsg support) [3]. An example tcp socket setup is:

  // Normal tcp socket connect/accept, and TLS handshake
  // using any TLS library.
  setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"));

  struct tls12_crypto_info_aes_gcm_128 crypto_info_rx;
  // Fill in crypto_info_rx based on negotiated keys.
  setsockopt(sock, SOL_TLS, TLS_RX, &crypto_info_rx, sizeof(crypto_info_rx));
  // You can optionally set TLS_TX as well.

  char buffer[16384];
  int ret = recv(sock, buffer, 16384);

  // cmsg can be received using recvmsg, and a msg_control
  // of type TLS_GET_RECORD_TYPE will be set.

V1 -> V2
* For too-small framing errors, return EBADMSG, to match openssl error code semantics. Docs and commit logs about this also updated.
RFC -> V1
* Refactor 'tx' variable names to drop tx
* Error return codes changed per discussion
* Only call skb_cow_data based on in-place decryption, drop unnecessary frag list check.

[1] Recent crypto patchset to remove copies, resulting in optimally zero copies vs. userspace's one, vs. previous kernel's two. https://marc.info/?l=linux-crypto-vger&m=151931242406416&w=2
[2] https://github.com/Mellanox/openssl/commits/tls_rx2
[3] https://github.com/ktls/af_ktls-tool/tree/RX
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
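To complement the setup example above, here is a sketch of the receive side with a cmsg buffer so the TLS record type (TLS_GET_RECORD_TYPE) can be observed. SOL_TLS is defined locally because older userspace headers may not export it; the helper itself is illustrative, not part of this patchset:

    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <linux/tls.h>

    #ifndef SOL_TLS
    #define SOL_TLS 282             /* matches the kernel's definition */
    #endif

    /* Receive one TLS record; *record_type is 0 if no record-type cmsg
     * accompanied the data.
     */
    static ssize_t recv_tls_record(int sock, void *buf, size_t len,
                                   unsigned char *record_type)
    {
            char cbuf[CMSG_SPACE(sizeof(unsigned char))];
            struct iovec iov = { .iov_base = buf, .iov_len = len };
            struct msghdr msg = {
                    .msg_iov = &iov, .msg_iovlen = 1,
                    .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
            };
            struct cmsghdr *cmsg;
            ssize_t n = recvmsg(sock, &msg, 0);

            *record_type = 0;
            if (n <= 0)
                    return n;

            cmsg = CMSG_FIRSTHDR(&msg);
            if (cmsg && cmsg->cmsg_level == SOL_TLS &&
                cmsg->cmsg_type == TLS_GET_RECORD_TYPE)
                    *record_type = *CMSG_DATA(cmsg);

            return n;
    }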
-
Committed by Dave Watson
Add documentation on the RX path setup and the cmsg interface.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dave Watson
Add the RX path for the TLS software implementation: recvmsg, splice_read, and poll are implemented. An additional sockopt, TLS_RX, is added with the same interface as TLS_TX. Either TLS_RX or TLS_TX may be provided separately, or together (with two different setsockopt calls with the appropriate keys). Control messages are passed via CMSG in a similar way to transmit. If no cmsg buffer is passed, then only application data records are passed to userspace, and EIO is returned for other record types. EBADMSG is returned for decryption errors, EMSGSIZE for framing that is too big, and EBADMSG for framing that is too small (matching openssl semantics). EINVAL is returned for TLS versions that do not match the original setsockopt call. All are unrecoverable. strparser is used to parse TLS framing. Decryption is done directly into userspace buffers if they are large enough to support it; otherwise skb_cow_data is called (similar to ipsec), and buffers are decrypted in place and copied. splice_read always decrypts in place, since no buffers are provided to decrypt into. sk_poll is overridden, and only returns POLLIN if a full TLS message has been received; otherwise we wait for strparser to finish reading a full frame. Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dave Watson
Several config variables are prefixed with tx; drop the prefix, since these will be used for both tx and rx.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dave Watson
Pass EBADMSG explicitly to tls_err_abort. The receive path will pass additional codes: EMSGSIZE if framing is larger than the max TLS record size, EINVAL on a TLS version mismatch.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dave Watson
Separate the tx crypto parameters into a dedicated cipher_context struct. The same parameters will be used for rx using the same struct. tls_advance_record_sn is modified to only take the cipher info.
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dave Watson
Refactor zerocopy_from_iter to take arguments for pages and size, so that it can be used for both tx and rx. RX will also support zerocopy directly to the output iter, as long as the full message can be copied at once (i.e., a large enough userspace buffer was provided).
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-