- 30 March 2018, 25 commits
-
-
Committed by Sudarsana Reddy Kalluru
This patch adds the required driver support for updating the flash or non-volatile memory of the adapter. At a high level, the flash upgrade comprises reading the flash images from the input file, validating the images and writing them to the respective partitions.
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
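For context, this kind of NVM update is normally driven through ethtool's flash interface (ethtool -f <iface> <file>). The sketch below shows the typical driver-side hook only as an illustration; the validate-and-write step is left as a comment rather than taken from the qed/qede sources.

    /* Hedged sketch of an ethtool .flash_device callback; not the actual
     * qede implementation, just the usual shape of such a hook. */
    #include <linux/ethtool.h>
    #include <linux/firmware.h>
    #include <linux/netdevice.h>

    static int example_flash_device(struct net_device *dev,
                                    struct ethtool_flash *flash)
    {
            const struct firmware *fw;
            int rc;

            /* flash->data carries the firmware file name from userspace */
            rc = request_firmware(&fw, flash->data, &dev->dev);
            if (rc)
                    return rc;

            /* validate each image and write it to its NVM partition
             * (driver-specific logic, omitted here) */
            rc = 0;

            release_firmware(fw);
            return rc;
    }

A driver wires such a callback into its ethtool_ops as .flash_device, which is what the ethtool -f command ends up invoking.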
-
Committed by Sudarsana Reddy Kalluru
This patch adds APIs for flash access.
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Sudarsana Reddy Kalluru
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Sudarsana Reddy Kalluru
This patch adds support for populating the flash image attributes.
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michal Kalderon
This FW contains several fixes and features.

RDMA features:
- SRQ support
- XRC support
- Memory window support
- RDMA low latency queue support
- RDMA bonding support

RDMA bug fixes:
- RDMA remote invalidate during retransmit fix
- iWARP MPA connect interop issue with RTR fix
- iWARP legacy DPM support
- Fix MPA reject flow
- iWARP error handling
- RQ WQE validation checks

MISC:
- Fix the endianness of some HSI types
- New restriction: vlan insertion in core_tx_bd_data can't be set for LB packets

ETH:
- HW QoS offload support
- Fix vlan, dcb and sriov flow of a VF sending a packet with an inband VLAN tag instead of the default VLAN
- Allow GRE version 1 offloads in RX flow
- Allow VXLAN steering

iSCSI / FCoE:
- Fix bd availability checking flow
- Support the 256th sge properly in iscsi/fcoe retransmit
- Performance improvement
- Fix handling of iSCSI command arrival with AHS and with immediate
- Fix ipv6 traffic class configuration

DEBUG:
- Update debug utilities

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Tomer Tayar <Tomer.Tayar@cavium.com>
Signed-off-by: Manish Rangankar <Manish.Rangankar@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Acked-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eric Dumazet
IPv4 was changed in commit 52a773d6 ("net: Export ip fragment sysctl to unprivileged users"). The only sysctl that is not per-netns, ip6frag_secret_interval, is not used.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Intiyaz Basha
During heavy tx traffic, control messages (sent by the liquidio driver to NIC firmware) sometimes do not get processed in a timely manner. The reason is that the low-level metadata of control messages and that of egress network packets indicate that they have the same priority. Fix it by setting a higher priority for control messages through the new ctrl_qpg field in the oct_txpciq struct. It is the NIC firmware that does the actual setting of priority by writing to the new ctrl_qpg field; the host driver treats that value as opaque and just assigns it to pki_ih3->qpg.
Signed-off-by: Intiyaz Basha <intiyaz.basha@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
David Ahern says:

====================
net: Allow FIB notifiers to fail add and replace

I wanted to revisit how resource overload is handled for hardware offload of FIB entries and rules. At the moment, the in-kernel FIB notifier can tell a driver about a route or rule add, replace, and delete, but the notifier can not affect the action. Specifically, in the case of mlxsw, if a route or rule add is going to overflow the ASIC resources the only recourse is to abort hardware offload. Aborting offload is akin to taking down the switch, as the path from data plane to control plane simply can not support the traffic bandwidth of the front panel ports. Further, the current state of FIB notifiers is inconsistent with other resources where a driver can affect a user request - e.g., enslavement of a port into a bridge or a VRF.

As a result of the work done over the past 3+ years, I believe we are at a point where we can bring consistency to the stack and offloads, and reliably allow the FIB notifiers to fail a request, pushing an error along with a suitable error message back to the user. Rather than aborting offload when the switch is out of resources, userspace is simply prevented from adding more routes and has a clear indication of why.

This set does not resolve the corner case where rules or routes not supported by the device are installed prior to the driver getting loaded and registering for FIB notifications. In that case, hardware offload has not been established and it can refuse to offload anything, sending errors back to userspace via extack. Since conceptually the driver owns the netdevices associated with its ASIC, this corner case mainly applies to unsupported rules and any races during the bringup phase.

Patch 1 fixes call_fib_notifiers to extract the errno from the encoded response from handlers.
Patches 2-5 allow the call to call_fib_notifiers to fail the add or replace of a route or rule.
Patch 6 adds a simple resource controller to netdevsim to illustrate how a FIB resource controller can limit the number of route entries.

Changes since RFC:
- correct return code for call_fib_notifier
- dropped patch 6 exporting devlink symbols
- limited example resource controller to init_net only
- updated Kconfig for netdevsim to use MAY_USE_DEVLINK
- updated cover letter regarding startup case noted by Ido
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
Add devlink support to netdevsim and use it to implement a simple, profile-based resource controller. Only one controller is needed per namespace, so the first netdevsim netdevice in a namespace registers with devlink. If that device is deleted, the resource settings are deleted.

The resource controller allows a user to limit the number of IPv4 and IPv6 FIB entries and FIB rules. The resource paths are:
    /IPv4
    /IPv4/fib
    /IPv4/fib-rules
    /IPv6
    /IPv6/fib
    /IPv6/fib-rules

The IPv4 and IPv6 top level resources are unlimited in size and can not be changed. From there, the number of FIB entries and FIB rule entries are unlimited by default. A user can specify a limit for the fib and fib-rules resources:

    $ devlink resource set netdevsim/netdevsim0 path /IPv4/fib size 96
    $ devlink resource set netdevsim/netdevsim0 path /IPv4/fib-rules size 16
    $ devlink resource set netdevsim/netdevsim0 path /IPv6/fib size 64
    $ devlink resource set netdevsim/netdevsim0 path /IPv6/fib-rules size 16
    $ devlink dev reload netdevsim/netdevsim0

such that the number of rules or routes is limited (96 IPv4 routes in the example above):

    $ for n in $(seq 1 32); do ip ro add 10.99.$n.0/24 dev eth1; done
    Error: netdevsim: Exceeded number of supported fib entries.

    $ devlink resource show netdevsim/netdevsim0
    netdevsim/netdevsim0:
      name IPv4 size unlimited unit entry size_min 0 size_max unlimited size_gran 1 dpipe_tables non
      resources:
        name fib size 96 occ 96 unit entry size_min 0 size_max unlimited size_gran 1 dpipe_tables ...

With this template in place for resource management, it is fairly trivial to extend and shows one way to implement a simple counter-based resource controller typical of network profiles.

Currently, devlink only supports the initial namespace. Code is in place to adapt netdevsim to a per-namespace controller once the network namespace issues are resolved.

Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
Move call to call_fib6_entry_notifiers for new IPv6 routes to right before the insertion into the FIB. At this point notifier handlers can decide the fate of the new route with a clean path to delete the potential new entry if the notifier returns non-0.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
Add checking to the call to call_fib_entry_notifiers for IPv4 route replace. This allows a notifier handler to fail the replace.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
Move call to call_fib_entry_notifiers for new IPv4 routes to right before the call to fib_insert_alias. At this point the only remaining failure path is memory allocations in fib_insert_node. Handle that very unlikely failure with a call to call_fib_entry_notifiers to tell drivers about it. At this point notifier handlers can decide the fate of the new route with a clean path to delete the potential new entry if the notifier returns non-0.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
Move call_fib_rule_notifiers up in fib_nl_newrule to the point right before the rule is inserted into the list. At this point there are no more failure paths within the core rule code, so if the notifier does not fail then the rule will be inserted into the list.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
Notifier handlers use notifier_from_errno to convert any potential error to an encoded format. As a consequence the other side, call_fib_notifier{s} in this case, needs to use notifier_to_errno to return the error from the handler back to its caller.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
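As a rough illustration of the errno round-trip described here (a sketch, not the literal patch): a handler encodes its errno with notifier_from_errno(), and the dispatch side decodes it with notifier_to_errno(). The out_of_resources() check below is a hypothetical stand-in for a driver-side capacity test.

    #include <linux/errno.h>
    #include <linux/notifier.h>
    #include <net/fib_notifier.h>

    static bool out_of_resources(void)      /* hypothetical capacity check */
    {
            return false;
    }

    static int example_fib_event(struct notifier_block *nb,
                                 unsigned long event, void *ptr)
    {
            if (event == FIB_EVENT_ENTRY_ADD && out_of_resources())
                    return notifier_from_errno(-ENOSPC);   /* encoded errno */
            return NOTIFY_DONE;
    }

    /* Caller side: decode the encoded value back into 0 or a -errno. */
    static int example_call(struct net *net, enum fib_event_type event,
                            struct fib_notifier_info *info)
    {
            return notifier_to_errno(call_fib_notifiers(net, event, info));
    }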
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Committed by David S. Miller
Saeed Mahameed says:

====================
mlx5-updates-2018-03-27 (Misc updates & SQ recovery)

This series contains misc updates and cleanups for the mlx5e RX path and an SQ recovery feature for the TX path.

From Tariq (RX updates):
- Disable Striding RQ on slow PCI devices: Striding RQ limits the use of the CQE compression feature, which is very critical for the performance of slow PCI devices, so with this change CQE compression is preferred over Striding RQ on specific "slow" PCIe links.
- RX path cleanups
- Private flag to enable/disable Striding RQ

From Eran (TX fast recovery):
- TX timeout logic improvements, fast SQ recovery and TX error reporting. Previously, if a HW error occurred while transmitting on a specific SQ, the driver would ignore the error, wait for a TX timeout to occur and then reset all the rings. This series improves the resiliency for such HW errors by detecting TX completions with errors, reporting them, and performing a fast recovery of the specific faulty SQ even before a TX timeout is detected.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Kirill Tkhai says:

====================
Introduce net_rwsem to protect net_namespace_list

The series introduces a fine-grained rw_semaphore, which will be used instead of rtnl_lock() to protect net_namespace_list. This improves scalability and allows non-exclusive sleepable iteration with for_each_net(), which is enough for most cases.

scripts/get_maintainer.pl gives an enormous list of people, and I add them all to CC.

Note that this patch set is independent of "Close race between {un,}register_netdevice_notifier and pernet_operations":
https://patchwork.ozlabs.org/project/netdev/list/?series=36495

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Kirill Tkhai
rtnl_lock() doesn't protect net::ct::count, and it's not needed for __nf_ct_unconfirmed_destroy() and for nf_queue_nf_hook_drop().
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Kirill Tkhai
Here we iterate over for_each_net() and remove, from alive nets, the vports connected to the exiting net. ovs_net::dps is protected by ovs_mutex(), and the other paths that change it (ovs_dp_cmd_new(), __dp_destroy()) also take that lock. The same applies to the datapath::ports list. So, we can remove rtnl_lock() here.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Kirill Tkhai
rt_genid_bump_all() consists of an IPv4 part and an IPv6 part. The IPv4 part increments net::ipv4::rt_genid, and there are many places where it is read without rtnl_lock(). The IPv6 part calls __fib6_clean_all(), which is also called without rtnl_lock() in other places. So, rtnl_lock() here was only used to iterate net_namespace_list, and we can remove it.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Kirill Tkhai
This function iterates over net_namespace_list and flushes the queue for each of them. What does this rtnl_lock() protect?! Since we may add skbs to net::wext_nlevents without rtnl_lock(), it does not protect us from enqueuers. It guarantees that two threads can't flush the queue in parallel, which could change the order, but since skbs can be queued in any order it doesn't matter how many threads do this in parallel. With several threads this will even be faster. So, we can remove rtnl_lock() here, as it was used only for iteration over net_namespace_list.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Kirill Tkhai
rtnl_lock() is used everywhere, and contention is very high. When someone wants to iterate over alive net namespaces, there is no way to do that without the exclusive lock. But an exclusive rtnl_lock() in such places is overkill, and it just increases the contention. Yes, there is already for_each_net_rcu() in the kernel, but it requires rcu_read_lock(), so the iteration can't sleep. Also, sometimes it may really be necessary to prevent net_namespace_list growth, so for_each_net_rcu() does not fit there either.

This patch introduces a new rw_semaphore, which will be used instead of rtnl_mutex to protect net_namespace_list. It is sleepable and allows non-exclusive iteration over the net namespaces list. It allows rtnl_lock() to be dropped in several places (which is done in the next patches) and reduces the time we hold rtnl_mutex. Here we just add the new lock; the explanation of why rtnl_lock() can be removed in those places is in the next patches. Fine-grained locks are generally better than one big lock, so let's do that with net_namespace_list while the situation allows it.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
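A minimal sketch of the read-side pattern this lock enables, assuming the net_rwsem symbol this patch introduces:

    #include <linux/rwsem.h>
    #include <net/net_namespace.h>

    /* Sleepable, non-exclusive walk over net_namespace_list without
     * taking rtnl_lock(); multiple readers can run concurrently. */
    static void example_walk_netns(void)
    {
            struct net *net;

            down_read(&net_rwsem);
            for_each_net(net) {
                    /* per-namespace work that may sleep goes here */
            }
            up_read(&net_rwsem);
    }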
-
Committed by David S. Miller
Florian Fainelli says:

====================
net: bgmac: Couple of small bgmac changes

This patch series addresses two minor issues with the bgmac driver:
- provides the interface name through /proc/interrupts rather than "bgmac"
- makes sure the interrupts are masked during probe, in case the block was not properly reset
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli
We can have interrupts left enabled from, e.g., a bootloader which used the network device for network boot. Make sure we have those disabled as early as possible to avoid spurious interrupts.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli
When the system contains several BGMAC adapters, it is nice to be able to tell which one is which by looking at /proc/interrupts. Use the network device name as a name to request_irq() with.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
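An illustrative before/after for that request_irq() call; the handler name, IRQ flags and surrounding structure are assumed here, not quoted from the driver:

    #include <linux/interrupt.h>
    #include <linux/netdevice.h>

    static int example_request_irq(int irq, irq_handler_t handler,
                                   struct net_device *net_dev, void *ctx)
    {
            /* before: every adapter appeared as "bgmac" in /proc/interrupts */
            /* return request_irq(irq, handler, IRQF_SHARED, "bgmac", ctx); */

            /* after: the interface name (e.g. "eth0") identifies the line */
            return request_irq(irq, handler, IRQF_SHARED, net_dev->name, ctx);
    }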
-
git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
Committed by David S. Miller
David Howells says:

====================
rxrpc: Tracing updates

Here are some patches that update tracing in AF_RXRPC and AFS:

(1) Add a tracepoint for tracking resend events.

(2) Use debug_ids in traces rather than pointers (as pointers are now hashed) and allow use of the same debug_id in AFS calls as in the corresponding AF_RXRPC calls. This makes filtering the trace output much easier.

(3) Add a tracepoint for tracking call completion.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 29 March 2018, 3 commits
-
-
Committed by Moritz Fischer
Add support for the National Instruments XGE 1/10G network device. It uses the EEPROM on the board via NVMEM.
Signed-off-by: Moritz Fischer <mdf@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Moritz Fischer
This adds bindings for the NI XGE 1G/10G network device.
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Moritz Fischer <mdf@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next
Committed by David S. Miller
Steffen Klassert says:

====================
pull request (net-next): ipsec-next 2018-03-29

1) Remove a redundant pointer initialization in esp_input_set_header(). From Colin Ian King.

2) Mark the xfrm kmem_caches as __ro_after_init. From Alexey Dobriyan.

3) Do the checksum for an ipsec offload packet in software if the device does not advertise NETIF_F_HW_ESP_TX_CSUM. From Shannon Nelson.

4) Use booleans for true and false instead of integers in xfrm_policy_cache_flush(). From Gustavo A. R. Silva.

Please pull or let me know if there are problems.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 March 2018, 12 commits
-
-
Committed by Eran Ben Elisha
An error TX completion (CQE) which arrived on a specific SQ indicates that this SQ got moved by the hardware to the error state, which means all pending and incoming TX requests are dropped or will be dropped and no further "good" CQEs will be generated for that SQ. Before this patch, error TX completions (CQEs) were not monitored and were handled as regular CQEs. This caused the SQ to stay in an error state, making it useless for transmitting new packets.

Mitigation plan: in case of an error completion, schedule a recovery work which does the following:
- Mark the TXQ as DRV_XOFF to disable new packets from arriving from the stack
- Have NAPI flush all pending SQ WQEs (via the flush_in_error_en bit) to release SW and HW resources (SKB, DMA, etc.) and get the SQ and CQ consumer/producer indices synced
- Modify the SQ state ERR -> RST -> RDY (restart the SQ)
- Reactivate the SQ and reset the SQ cc and pc

If we identify two consecutive requests to recover an SQ in less than 500 msecs, drop the recovery request to avoid CPU overload, as this scenario most likely happened due to a severe repeated bug. In addition, add an SQ recover SW counter to monitor successful recoveries.

Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
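A heavily hedged outline of the recovery sequence listed above. The structure and the mlx5-specific steps are stand-ins only; just the netif_tx_stop_queue()/netif_tx_wake_queue() calls are the generic stack API, everything else is illustrative:

    #include <linux/netdevice.h>

    struct example_sq {                     /* hypothetical stand-in */
            struct netdev_queue *txq;
            u16 cc, pc;                     /* consumer/producer counters */
    };

    static void example_sq_recover(struct example_sq *sq)
    {
            netif_tx_stop_queue(sq->txq);   /* 1. DRV_XOFF: stop new xmits    */
            /* 2. let NAPI flush pending WQEs (flush_in_error_en) and release
             *    SKB/DMA resources so cc and pc end up in sync               */
            /* 3. firmware command: modify SQ state ERR -> RST -> RDY         */
            sq->cc = sq->pc = 0;            /* 4. reset the counters ...      */
            netif_tx_wake_queue(sq->txq);   /*    ... and reactivate the SQ   */
    }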
-
Committed by Eran Ben Elisha
Monitor and dump xmit error completions. In addition, add an err_cqe counter to track the number of error completions per send queue.
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Eran Ben Elisha
Move the mlx5_ib dump error CQE implementation to the mlx5 CQ header file in order to use it in a downstream patch from mlx5e. In addition, use print_hex_dump instead of manual dumping of the buffer.
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
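For reference, this is the kind of call involved (a sketch, not the exact hunk): print_hex_dump() renders the raw CQE buffer as a hex dump, replacing a hand-rolled print loop.

    #include <linux/printk.h>
    #include <linux/types.h>

    /* Dump an error CQE; the buffer and its size are whatever the caller has. */
    static void example_dump_err_cqe(const void *cqe, size_t size)
    {
            print_hex_dump(KERN_WARNING, "cqe: ", DUMP_PREFIX_OFFSET,
                           16, 1, cqe, size, false);
    }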
-
Committed by Eran Ben Elisha
Move the query SQ state function from mlx5_ib to mlx5_core in order to have it in shared code. It will be used in a downstream patch from mlx5e.
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Eran Ben Elisha
The driver callback for handling a TX timeout should access some internal resources (SQ, CQ) in order to decide if the TX timeout work should be scheduled. These resources might be unavailable if the channels are closed in parallel (ifdown, for example). The state lock is the mechanism to protect from such races. Move all TX timeout logic to be in the work, under the state lock. In addition, move the work from the global WQ to the mlx5e WQ to make sure this work is flushed when the device is detached. Also, move the mlx5e_tx_timeout_work code to be next to the TX timeout NDO for better code locality.
Fixes: 3947ca18 ("net/mlx5e: Implement ndo_tx_timeout callback")
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
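Illustrative shape of the workqueue part of this change; the struct and field names below are stand-ins for the driver's private data, not quotes from the diff:

    #include <linux/workqueue.h>

    struct example_priv {                   /* stand-in for the driver priv */
            struct workqueue_struct *wq;
            struct work_struct tx_timeout_work;
    };

    static void example_schedule_tx_timeout(struct example_priv *priv)
    {
            /* before: schedule_work(&priv->tx_timeout_work) on the global WQ;
             * after: queue it on the driver-private WQ, which is flushed on
             * detach, and do the SQ/CQ inspection inside the work under the
             * state lock */
            queue_work(priv->wq, &priv->tx_timeout_work);
    }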
-
Committed by Gal Pressman
Commit 58d52291 ("net/mlx5e: Support TX packet copy into WQE") introduced the max inline WQE as an ethtool tunable. One commit later, that functionality was made dependent on BlueFlame. Commit 6982ab60 ("net/mlx5e: Xmit, no write combining") removed BlueFlame support, and with it the max inline WQE. This patch cleans up the leftovers from the removed feature.
Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Tariq Toukan
Add a control private flag in ethtool to enable/disable the Striding RQ feature.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Tariq Toukan
Do not make an implicit call to mlx5e_init_rq_type_params() upon every change in RQ type. It should be called only on channel creation.
Fixes: 2fc4bfb7 ("net/mlx5e: Dynamic RQ type infrastructure")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Tariq Toukan
It can be derived from other params, so calculate it via the dedicated function when needed.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Committed by Tariq Toukan
Introduce functions to calculate them when needed. They can be derived from other params. This will simplify transition between RQ configurations. In general, any parameter that is not explicitly set or controlled, but derived from other parameters, should not have a control-path field itself, but a getter function.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
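A generic sketch of the rule stated above, with hypothetical parameter names: a value that can be derived from other params gets a getter rather than its own control-path field.

    #include <linux/types.h>

    struct example_rq_params {              /* hypothetical parameter struct */
            u8 log_stride_sz;
            u8 log_num_strides;
    };

    /* Derived value, so no stored field: compute the WQE size on demand. */
    static inline u32 example_rq_wqe_size(const struct example_rq_params *p)
    {
            return 1U << (p->log_stride_sz + p->log_num_strides);
    }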
-
Committed by Tariq Toukan
In copying the skb header to skb->data, replace the call to skb_copy_to_linear_data_offset() with a zero offset with a call to the no-offset function skb_copy_to_linear_data().
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
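The two helpers are equivalent when the offset is zero, which is all this patch relies on. An illustrative snippet (not the driver hunk); the two calls are shown side by side purely for comparison:

    #include <linux/skbuff.h>

    static void example_copy_header(struct sk_buff *skb, const void *from,
                                    unsigned int len)
    {
            /* before: explicit zero offset */
            skb_copy_to_linear_data_offset(skb, 0, from, len);

            /* after: same result, simpler call */
            skb_copy_to_linear_data(skb, from, len);
    }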
-
Committed by Tariq Toukan
Pass the base dma address and offset to dma_sync_single_range_for_cpu(), instead of doing the pre-calculation.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
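An illustrative before/after for that change (variable names assumed, shown together only for comparison):

    #include <linux/dma-mapping.h>

    static void example_sync_frag(struct device *dev, dma_addr_t base,
                                  unsigned long offset, size_t len)
    {
            /* before: caller pre-computed the address */
            dma_sync_single_for_cpu(dev, base + offset, len, DMA_FROM_DEVICE);

            /* after: hand the base and offset to the range variant */
            dma_sync_single_range_for_cpu(dev, base, offset, len,
                                          DMA_FROM_DEVICE);
    }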
-