- 13 Oct 2017, 9 commits
-
-
Committed by Ursula Braun

SMC should not open-code the get_netdev function pointer of the IB device. Replacing ib_query_gid(..., NULL) with ib_query_gid(..., gid_attr) allows access to the netdev.

Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Suggested-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
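For illustration only, a minimal sketch (not the actual SMC patch) of how a caller of that era might retrieve the netdev through the gid_attr argument; the exact ib_query_gid() signature and the reference-counting rule on gid_attr.ndev should be checked against the target tree:

    union ib_gid gid;
    struct ib_gid_attr gid_attr;
    struct net_device *ndev;
    int rc;

    /* Query GID index 0 on the given port and ask for its attributes. */
    rc = ib_query_gid(ibdev, port_num, 0, &gid, &gid_attr);
    if (!rc && gid_attr.ndev) {
        ndev = gid_attr.ndev;
        /* ... use ndev ... */
        dev_put(ndev);    /* assumption: the query held a reference */
    }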
-
Committed by David S. Miller

Florian Fainelli says:

====================
Enable ACB for bcm_sf2 and bcmsysport

This patch series enables Broadcom's Advanced Congestion Buffering (ACB) mechanism, which requires cooperation between the CPU/management Ethernet MAC controller and the switch. I took the notifier approach because ultimately the information we need to carry to the master network device is DSA-specific, and I saw little room for generalizing beyond what DSA requires. Chances are that this is highly specific to the Broadcom hardware, as I don't know of any other hardware out there that supports something nearly similar for similar or identical needs.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli

Now that we have established the queue mapping between the switch port egress queues and the SYSTEMPORT egress queues, we can turn on Advanced Congestion Buffering (ACB) at the SYSTEMPORT level. This enables the Ethernet MAC controller to get out-of-band flow control information directly from the switch port and queue that it monitors, such that its internal TDMA can be appropriately backpressured.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli

Turn on the out-of-band Advanced Congestion Buffering (ACB) mechanism at the switch level, now that we have properly established the queue mapping between the switch egress queues and the SYSTEMPORT egress queues. This allows the switch to correctly backpressure the host system when one of its queues drops below the configured thresholds. It also helps achieve so-called "lossless" behavior by adapting the TX interrupt pacing to the actual speed and capacity of the switch port.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli

Establish a queue mapping between the DSA slave network device queues that correspond to switch port queues, and the transmit queues that SYSTEMPORT manages. We need to configure the SYSTEMPORT transmit queue with the switch port number and switch port queue number in order for the switch and SYSTEMPORT hardware to utilize the out-of-band congestion notification. This hardware mechanism works by looking at the switch port egress queue and determining whether there are enough buffers for this queue, with that class of service, for a successful transmission; if not, it backpressures the SYSTEMPORT queue that is being used.

For this to work, we implement a notifier which looks at the DSA_PORT_REGISTER event. When DSA network devices are registered, the framework calls the DSA notifiers; we extract the number of queues for these devices and their associated port number, remember that in the driver private structure, and linearly map those queues to the TX rings/queues that we manage. This scheme works because DSA slave network devices always transmit through SYSTEMPORT, so when DSA slave network devices are destroyed or brought down, the corresponding SYSTEMPORT queues are no longer used. Also, by design of the DSA framework, the master network device (SYSTEMPORT) is registered first.

For faster lookups we use an array of up to DSA_MAX_PORTS * number of queues per port, and map pointers to bcm_sysport_tx_ring such that our ndo_select_queue() implementation can simply index into that array to locate the corresponding ring.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
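As an illustrative sketch only (hypothetical names and sizes, not the actual bcmsysport code), the linear port/queue-to-ring mapping described above could look roughly like this:

    #define MAX_PORTS        8    /* stand-in for DSA_MAX_PORTS */
    #define QUEUES_PER_PORT  8

    struct tx_ring;               /* stand-in for struct bcm_sysport_tx_ring */

    struct sysport_priv {
        struct tx_ring *ring_map[MAX_PORTS * QUEUES_PER_PORT];
    };

    /* On DSA_PORT_REGISTER: remember which TX ring serves (port, queue). */
    static void map_port_queue(struct sysport_priv *priv, unsigned int port,
                               unsigned int queue, struct tx_ring *ring)
    {
        priv->ring_map[port * QUEUES_PER_PORT + queue] = ring;
    }

    /* In the queue-selection path: one array index resolves the ring. */
    static struct tx_ring *lookup_ring(struct sysport_priv *priv,
                                       unsigned int port, unsigned int queue)
    {
        return priv->ring_map[port * QUEUES_PER_PORT + queue];
    }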
-
Committed by Florian Fainelli

We need to tell the DSA master network device doing the actual transmission what the desired switch port and queue number is, so that it can resolve that to the internal transmit queue it is mapped to.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli

In preparation for communicating a given DSA network device's port number and switch index, create a specialized DSA notifier and two events, DSA_PORT_REGISTER and DSA_PORT_UNREGISTER, that communicate the slave network device (slave_dev), the port number, and the switch number in the tree. This will later be used by network device drivers like bcmsysport, which need to cooperate with their DSA network devices to set up queue mapping and scheduling.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
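A minimal sketch of how a master driver might consume such events; the info-structure layout and event constants here are assumptions for illustration, not the exact API added by this series:

    #include <linux/notifier.h>

    /* Hypothetical payload carried with the DSA port events. */
    struct dsa_port_register_info {
        struct net_device *slave_dev;
        int port;
        int sw_index;
    };

    static int my_dsa_notifier_cb(struct notifier_block *nb,
                                  unsigned long event, void *ptr)
    {
        struct dsa_port_register_info *info = ptr;

        switch (event) {
        case 1:    /* stand-in for DSA_PORT_REGISTER */
            /* record info->port / info->sw_index, map slave queues */
            break;
        case 2:    /* stand-in for DSA_PORT_UNREGISTER */
            /* tear the queue mapping down */
            break;
        }
        return NOTIFY_DONE;
    }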
-
Committed by Timur Tabi

This reverts commit df1ec1b9. It turns out that memory allocated via dma_alloc_coherent is always aligned to the size of the buffer, so there's no way the RRD and RFD can ever be in separate 32-bit regions.

Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eric Dumazet

Remove three inline helpers that are no longer needed.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Oct 2017, 31 commits
-
-
Committed by Colin Ian King

Variable old_flags is assigned but never read; it is redundant and can be removed. Cleans up the clang warning: Value stored to 'old_flags' is never read.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

Tariq Toukan says:

====================
mlx4_en XDP TX improvements

This patchset contains performance improvements to the XDP_TX use case in the mlx4 Ethernet driver.

Patch 1 is a simple change in a function parameter type.
Patch 2 replaces a call to a generic function with the relevant parts inlined.
Patch 3 moves the write of descriptors' constant values from the data path to the control path.

Series generated against net-next commit:
833e0e2f net: dst: move cpu inside ifdef to avoid compilation warning
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tariq Toukan

In XDP_TX, some fields in tx_info and tx_desc are constant across all entries of the different XDP_TX rings. Assign values to these fields at ring creation time, rather than in the data path.

Patchset performance tests:
Tested on ConnectX3Pro, Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz, single queue, no-RSS optimization ON.

XDP_TX packet rate:
  Before    | After     | Gain
  13.7 Mpps | 14.0 Mpps | 2.2%

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
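A generic illustration of the control-path versus data-path split described above (hypothetical structures, not the mlx4 descriptor layout): constant per-ring fields are written once at ring creation, so the transmit hot path only fills the per-packet fields.

    struct tx_desc {
        unsigned int       const_flags;  /* identical for every XDP_TX packet */
        unsigned int       len;          /* varies per packet */
        unsigned long long addr;         /* varies per packet */
    };

    /* Control path: runs once per ring at creation time. */
    static void ring_init(struct tx_desc *descs, int n, unsigned int flags)
    {
        int i;

        for (i = 0; i < n; i++)
            descs[i].const_flags = flags;
    }

    /* Data path: only the per-packet fields are written per transmit. */
    static void xmit_one(struct tx_desc *desc, unsigned long long addr,
                         unsigned int len)
    {
        desc->addr = addr;
        desc->len = len;
    }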
-
Committed by Tariq Toukan

Function mlx4_en_tx_write_desc() is not optimized for the XDP xmit use case. Use the relevant parts inline instead.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tariq Toukan

The struct net_device parameter was passed only to extract struct mlx4_en_priv out of it. Here we pass the priv parameter directly.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Colin Ian King

The function ipgre_mpls_encap_hlen is local to the source file and does not need to be in global scope, so make it static. Cleans up the sparse warning: symbol 'ipgre_mpls_encap_hlen' was not declared. Should it be static?

Fixes: bdc47641 ("ip_tunnel: add mpls over gre support")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Colin Ian King

The array sctp_sched_ops is local to the source file and does not need to be in global scope, so make it static. Cleans up the sparse warning: symbol 'sctp_sched_ops' was not declared. Should it be static?

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Westphal

Similar to the previous patch, use the device lookup functions that bump the device refcount, and flag this as DOIT_UNLOCKED to avoid the rtnl mutex.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Westphal

Instead of relying on the rtnl mutex, bump the device reference count. After this change, reported values can change in parallel, but that's not much different from the current state, as anyone can change the settings right after rtnl_unlock (and before userspace has processed the reply). While at it, switch to GFP_KERNEL allocation.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
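The refcount-instead-of-rtnl pattern referred to above looks roughly like this; a simplified sketch, not the patch itself:

    #include <linux/errno.h>
    #include <linux/netdevice.h>

    static int fill_link_info(struct net *net, int ifindex)
    {
        struct net_device *dev;

        dev = dev_get_by_index(net, ifindex);   /* takes a reference */
        if (!dev)
            return -ENODEV;

        /* ... read device state and build the reply here; values may
         * change concurrently, which is acceptable for this use ...
         */

        dev_put(dev);                           /* drop the reference */
        return 0;
    }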
-
Committed by David S. Miller

Jiri Pirko says:

====================
net: sched: get rid of cls_flower->egress_dev

The introduction of cls_flower->egress_dev was a workaround that turned out to be a bit of an ugly hack, so replace it with more generic and reusable infrastructure. This is a dependency of the shared block introduction that will be sent as a follow-up patchset group.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jiri Pirko

The helper and the struct field are no longer used by any code, so remove them.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jiri Pirko

The only user of cls_flower->egress_dev is mlx5, so do the conversion there, along with the code originating the call in the cls_flower function fl_hw_replace_filter, to the newly introduced egress device callback infrastructure.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jiri Pirko

Introduce infrastructure that allows drivers to register callbacks that are called whenever tc would offload an inserted rule and the specified device acts as the tc action egress device.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jiri Pirko

Return dev directly, or NULL if not possible; that is enough. It makes no sense to pass struct net * to the get_dev op, as there is only one possible net, the one the action was created in. So just store it in the mirred priv and use it directly. Also rename the mirred op callback function.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

Subash Abhinov Kasiviswanathan says:

====================
net: qualcomm: rmnet: Rewrite some existing functionality

This series fixes some of the broken rmnet functionality. Bridge mode is rewritten and made usable, and the muxed_ep is converted to an hlist. Patches 1-5 are cleanups in preparation for these changes. Patch 6 does the hlist conversion. Patch 7 has the implementation of the rmnet bridge mode.

v1->v2: Fix the warning and code style issue in rmnet_rx_handler as mentioned by David.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Add support to bridge two devices which can send multiplexing and aggregation (MAP) data. This is done only when the data itself is not going to be consumed in the stack but is being passed on to a different endpoint. This is mainly used for testing.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rather than using a static array, use a hlist to store the muxed endpoints and use the mux id to query the rmnet_device. This is useful as usually very few mux ids are used.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Cc: Dan Williams <dcbw@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
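A generic sketch of the kind of mux-id keyed lookup described above, using the kernel's hashtable helpers; the names are illustrative, not the rmnet code.

    #include <linux/hashtable.h>
    #include <linux/types.h>

    struct mux_ep {
        u8 mux_id;
        struct net_device *rmnet_dev;
        struct hlist_node node;
    };

    /* 2^4 = 16 buckets is plenty when only a handful of mux ids are in use. */
    static DEFINE_HASHTABLE(mux_table, 4);

    static void mux_ep_add(struct mux_ep *ep)
    {
        hash_add(mux_table, &ep->node, ep->mux_id);
    }

    static struct mux_ep *mux_ep_lookup(u8 mux_id)
    {
        struct mux_ep *ep;

        hash_for_each_possible(mux_table, ep, node, mux_id) {
            if (ep->mux_id == mux_id)
                return ep;
        }
        return NULL;
    }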
-
The rmnet_devices information is already stored in muxed_ep, so storing it in rmnet_devices[] again is redundant.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
The endpoint information is set twice: in the local_ep, and also in the mux_id and real_dev fields of the rmnet private structure. Remove the local_ep. While these elements are equivalent, rmnet_endpoint will be used only as part of the rmnet_port for muxed scenarios in VND mode.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mode information on the real device makes it easier to route packets to the rmnet device or bridged device based on the configuration.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Most of these constants were used in the initial patchset where custom netlink configuration was used, and hence are no longer relevant.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
This will be rewritten in the following patches.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

Timur Tabi says:

====================
net: qcom/emac: various minor fixes

A set of patches for 4.15 that clean up some code, apply minor fixes, and so on. Some of the code also prepares the driver for a future version of the EMAC controller.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Timur Tabi

Some of the error messages printed by the interrupt handlers are poorly written. For example, many don't include a device prefix, so there's no indication that they are EMAC errors. Also use rate limiting for all messages that could be printed from interrupt context.

Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
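For illustration (not the actual EMAC patch), combining a device-prefixed message with rate limiting in an interrupt handler can look like this:

    #include <linux/interrupt.h>
    #include <linux/net.h>
    #include <linux/netdevice.h>

    static irqreturn_t my_isr(int irq, void *data)
    {
        struct net_device *netdev = data;

        /* netdev_err() prefixes the message with the interface name;
         * net_ratelimit() keeps an interrupt storm from flooding the log.
         */
        if (net_ratelimit())
            netdev_err(netdev, "error interrupt received\n");

        return IRQ_HANDLED;
    }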
-
Committed by Timur Tabi

The EMAC has a restriction that the upper 32 bits of the base addresses for the RFD and RRD rings must be the same. To ensure that restriction is met, we allocate twice the space for the RRD and locate it at an appropriate address. We also rearrange the allocations so that invalid addresses are even less likely.

Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
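For illustration only (not the EMAC driver's actual allocation code): the hardware constraint boils down to a check like the one below, and doubling the RRD allocation gives the driver room to move the ring within the buffer until the check passes.

    #include <linux/kernel.h>   /* upper_32_bits() */
    #include <linux/types.h>    /* dma_addr_t */

    /* The RFD and RRD ring base addresses must share bits 63:32. */
    static bool rings_share_upper_bits(dma_addr_t rfd_base, dma_addr_t rrd_base)
    {
        return upper_32_bits(rfd_base) == upper_32_bits(rrd_base);
    }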
-
Committed by Timur Tabi

The EMAC is capable of multiple TX and RX rings, but the driver only supports one ring for each. One function had some left-over unused code supporting multiple rings, but all it did was make the code harder to read.

Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Timur Tabi

The 64/32-bit DMA mask hackery in the EMAC driver is not actually necessary, and is technically not accurate. The EMAC hardware is limited to a 45-bit DMA address. Although no EMAC-enabled system can have that much DDR, an IOMMU could possibly provide a larger address. Rather than play games with the DMA mappings, the driver should provide a correct value and trust the DMA/IOMMU layers to do the right thing.

Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
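Advertising a hardware-accurate mask like this is a one-liner with the standard DMA API; a sketch, not the exact patch:

    #include <linux/dma-mapping.h>

    static int emac_set_dma_mask(struct device *dev)
    {
        /* Advertise the real 45-bit addressing limit of the hardware and
         * let the DMA/IOMMU layers handle anything beyond it.
         */
        return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(45));
    }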
-
Committed by David S. Miller

Bjorn Andersson says:

====================
net: qrtr: Fixes and support receiving version 2 packets

On the latest Qualcomm platforms, remote processors are sending packets with version 2 of the message header. This series starts off with some fixes and then refactors the qrtr code to support receiving messages of both version 1 and version 2. As all remotes are backwards compatible, transmitted packets continue to be sent as version 1, but some groundwork has been done to make this a per-link property.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Bjorn Andersson

Add the necessary logic for decoding incoming messages of version 2 as well. Also make sure there's room for the bigger of the version 1 and version 2 headers in the code allocating skbs for outgoing messages.

Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Bjorn Andersson

Rather than parsing the header of incoming messages throughout the implementation, do it once when we retrieve the message and store the relevant information in the "cb" member of the sk_buff. This allows us, in a later commit, to decode version 2 messages into this same structure.

Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
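A generic sketch of the "parse once, stash in skb->cb" pattern described here; the field names are illustrative, not the actual qrtr control-buffer layout.

    #include <linux/skbuff.h>
    #include <linux/types.h>

    /* Illustrative control-block layout; it must fit in sizeof(skb->cb). */
    struct my_qrtr_cb {
        u32 src_node;
        u32 src_port;
        u32 dst_node;
        u32 dst_port;
        u8  type;
    };

    #define MY_QRTR_CB(skb) ((struct my_qrtr_cb *)(skb)->cb)

    static void stash_header(struct sk_buff *skb, u32 src_node, u32 src_port,
                             u32 dst_node, u32 dst_port, u8 type)
    {
        struct my_qrtr_cb *cb = MY_QRTR_CB(skb);

        BUILD_BUG_ON(sizeof(*cb) > sizeof(skb->cb));

        cb->src_node = src_node;
        cb->src_port = src_port;
        cb->dst_node = dst_node;
        cb->dst_port = dst_port;
        cb->type = type;
    }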
-
Committed by Bjorn Andersson

As the message header generation is deferred, the internal functions for generating control packets can be simplified. This patch modifies qrtr_alloc_ctrl_packet() to return, in addition to the sk_buff, a reference to a struct qrtr_ctrl_pkt, which clarifies and simplifies the helpers to the point that these functions can be folded back into the callers.

Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-