- 16 Feb 2012, 6 commits
-
-
Committed by Ben Hutchings
Each port has a block of 64-bit SRAM that is divided between buffer table and descriptor cache regions at initialisation time. Currently we use a fixed allocation, but it needs to be changed to support larger numbers of queues. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Committed by Ben Hutchings
Abstract some of the channel operations to allow for 'extra' channels that do not have RX or TX queues. - Try to assign a channel to each extra channel type that is enabled for the NIC, but gracefully degrade if we can't allocate sufficient MSI-X vectors - Allow each extra channel type to generate its own channel name - Allow channel types to disable reallocation and reinitialisation of their channels Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
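As a rough illustration of the kind of abstraction described here, a per-channel-type operations structure might look like the following minimal C sketch; all names are made up for the example and are not the driver's own.

#include <stdbool.h>
#include <stddef.h>

struct channel;                 /* opaque for the purposes of the sketch */

/* Hooks and properties an 'extra' channel type can supply. */
struct channel_type_ops {
	int  (*pre_probe)(struct channel *ch);                  /* claim a channel */
	void (*get_name)(struct channel *ch, char *buf, size_t len);
	bool keep_resources;    /* opt out of reallocation/reinitialisation */
};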
-
Committed by Ben Hutchings
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Committed by Ben Hutchings
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Committed by Steve Hodgson
The TX DMA engine issues upstream read requests when there is room in the TX FIFO for the completion. However, the fetches for the rest of the packet might be delayed by any back pressure. Since a flush must wait for an EOP, the entire flush may be delayed by back pressure. Mitigate this by disabling flow control before the flushes are started. Since PF and VF flushes run in parallel, introduce fc_disable, a reference count of the number of flushes outstanding. The same principle could be applied to Falcon, but that would bring with it its own testing. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
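A minimal sketch of how such a flush-scoped reference count could work, assuming illustrative structure and function names (this is not the driver's code):

struct nic {
	unsigned int fc_disable;   /* number of flushes currently outstanding */
	unsigned int wanted_fc;    /* flow-control mode the user configured */
};

/* Placeholder for pushing flow-control settings to the MAC. */
static void nic_push_fc(struct nic *nic, unsigned int fc) { (void)nic; (void)fc; }

static void nic_flush_begin(struct nic *nic)
{
	/* First flush in flight: turn flow control off so back pressure
	 * cannot stall the EOP that the flush must wait for. */
	if (nic->fc_disable++ == 0)
		nic_push_fc(nic, 0);
}

static void nic_flush_end(struct nic *nic)
{
	/* Last flush done: restore the configured flow-control mode. */
	if (--nic->fc_disable == 0)
		nic_push_fc(nic, nic->wanted_fc);
}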
-
Committed by Ben Hutchings
When SR-IOV is enabled we may receive FLR (Function-Level Reset) events, associated queue flush events and requests from VF drivers at any time. Therefore we need to keep event queues and interrupts enabled whenever possible. Currently we stop interrupt-driven event processing before flushing RX and TX queues; efx_nic_flush_queues() then polls event queues for flush events and discards any others it finds. Change it to work with the regular event handling functions. Currently efx_start_channel() fills RX queues synchronously when a device is brought up. This could now race with NAPI, so change it to send fill events. This was almost entirely written by Steve Hodgson, formerly shodgson@solarflare.com. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 27 Jan 2012, 9 commits
-
-
Committed by Ben Hutchings
Replace checksummed and discard booleans from efx_handle_rx_event() with a bitmask, added to the flags field. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
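The pattern is the usual booleans-into-flag-bits fold; a tiny sketch with made-up flag names:

#include <stdint.h>

#define RX_PKT_CSUMMED 0x0001u   /* checksum already verified by hardware */
#define RX_PKT_DISCARD 0x0002u   /* packet should be dropped */

struct rx_pkt {
	uint16_t flags;
};

static inline void rx_pkt_mark(struct rx_pkt *pkt, int csummed, int discard)
{
	pkt->flags |= (csummed ? RX_PKT_CSUMMED : 0) |
		      (discard ? RX_PKT_DISCARD : 0);
}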
-
Committed by Ben Hutchings
Currently we use type u64 for byte counts, which can very quickly exceed 2^32, and unsigned long for packet counts, which do not. But it can still take only 20-something minutes to send or receive 2^32 packets, and not all tools properly handle overflow even if they sample more often than this. The MAC statistics are all updated synchronously, so it costs very little to make them all 64-bit regardless of native word size. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
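A back-of-envelope check on the wrap time; the frame sizes and the 20-byte preamble/inter-frame overhead below are illustrative assumptions for a 10G link:

#include <stdio.h>

int main(void)
{
	const double link_bps = 10e9;            /* 10GbE */
	const double wrap = 4294967296.0;        /* 2^32 packets */
	const double wire_bytes[] = { 64 + 20, 300 + 20, 1518 + 20 };

	for (int i = 0; i < 3; i++) {
		double pps = link_bps / (wire_bytes[i] * 8.0);
		printf("%4.0f-byte frames: %5.2f Mpps, 32-bit count wraps in %5.1f min\n",
		       wire_bytes[i] - 20, pps / 1e6, wrap / pps / 60.0);
	}
	return 0;   /* roughly 5, 18 and 88 minutes respectively */
}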
-
Committed by Ben Hutchings
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Committed by Ben Hutchings
Interrupts are normally generated by the event queues, moderated by timers. However, they may also be triggered by detection of a 'fatal' error condition (e.g. memory parity error) or by the host writing to certain CSR fields as part of a self-test. The IRQ level/index used for these on Falcon rev B0 and Siena is set by the KER_INT_LEVE_SEL field and cached by the driver in efx_nic::fatal_irq_level. Since this value is also relevant to self-tests, rename the field to just 'irq_level'. Avoid unnecessary cache traffic by using a per-channel 'last_irq_cpu' field and only writing to the per-controller field when the interrupt matches efx_nic::irq_level. Remove the volatile qualifier and use ACCESS_ONCE in the places we read these fields. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
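A simplified sketch of the bookkeeping described here; the structures are stand-ins and only the pattern (per-channel write, conditional per-controller write, ACCESS_ONCE on the read side) is intended to match the description:

#include <linux/compiler.h>
#include <linux/smp.h>
#include <linux/types.h>

struct channel { int last_irq_cpu; };
struct nic     { int last_irq_cpu; unsigned int irq_level; };

static void note_channel_irq(struct nic *nic, struct channel *ch, unsigned int level)
{
	ch->last_irq_cpu = raw_smp_processor_id();
	/* Only the special level touches the shared per-controller field,
	 * keeping the hot path free of cross-CPU cache-line traffic. */
	if (level == nic->irq_level)
		nic->last_irq_cpu = raw_smp_processor_id();
}

static bool channel_saw_irq_on(struct channel *ch, int cpu)
{
	return ACCESS_ONCE(ch->last_irq_cpu) == cpu;   /* single, non-torn read */
}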
-
Committed by Ben Hutchings
We currently assume that the timer quantum for Siena is 5 us, the same as for Falcon. This is not correct; timer ticks are generated on a rota which takes a minimum of 768 cycles (each event delivery or other timer change will delay it by 3 cycles). The timer quantum should be 6.144 or 3.072 us depending on whether turbo mode is active. Replace EFX_IRQ_MOD_RESOLUTION with a timer_quantum_ns field in struct efx_nic, initialised by the efx_nic_type::probe function. While we're at it, replace EFX_IRQ_MOD_MAX with a timer_period_max field in struct efx_nic_type. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
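The arithmetic behind the quoted quanta, assuming clock rates inferred from them (768 cycles at 125 MHz and at 250 MHz):

#include <stdio.h>

int main(void)
{
	const unsigned int cycles_per_tick = 768;
	const unsigned int clk_mhz[] = { 125, 250 };   /* normal, turbo (inferred) */

	for (int i = 0; i < 2; i++)
		printf("%3u MHz: timer quantum = %u ns\n",
		       clk_mhz[i], cycles_per_tick * 1000 / clk_mhz[i]);
	return 0;   /* prints 6144 ns and 3072 ns */
}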
-
Committed by Ben Hutchings
The netif_dbg() macro is defined in <linux/netdevice.h>. If the DEBUG macro is defined, it logs a message at 'debug' level, otherwise it does nothing. In net_driver.h we define DEBUG if EFX_ENABLE_DEBUG is defined, but this is too late for those source files that already got a definition of netif_dbg() by including <linux/netdevice.h>. Get rid of EFX_ENABLE_DEBUG, and only define and test DEBUG. In mtd.c, we do not use DEBUG as a condition flag but are forced to use the DEBUG macro-function from <linux/mtd/mtd.h>. Undefine DEBUG before including it. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
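The mtd.c fix boils down to a two-line preprocessor ordering rule, sketched here (the macro-function DEBUG(...) is the one provided by <linux/mtd/mtd.h> of that era):

#undef DEBUG                /* drop our 'log at debug level' flag, if set */
#include <linux/mtd/mtd.h>  /* now free to define DEBUG(n, args...) itself */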
-
Committed by Ben Hutchings
Both implementations of the efx_nic_type::reconfigure_mac operation push the multicast hash filter to the hardware. It is therefore redundant to call efx_nic_type::push_multicast_hash as well. efx_mcdi_mac_reconfigure() also uses this operation, but the implementation for Siena just uses MCDI anyway. Merge that into efx_mcdi_mac_reconfigure(). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Committed by Ben Hutchings
No NICs need to switch efx_mac_operations at run-time, and the MAC operations are fairly closely bound to NIC types. Move efx_mac_operations::reconfigure to efx_nic_type::reconfigure_mac and efx_mac_operations::check_fault to efx_nic_type::check_mac_fault. Change callers to call through efx->type or directly if the NIC type is known. Remove efx_mac_operations::update_stats. The implementations for Falcon used to fetch MAC statistics synchronously and this was used by efx_register_netdev() to clear statistics after running self-tests. However, it now only converts statistics that have already been fetched (and that only for Falcon), and the call from efx_register_netdev() has no effect. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
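Schematically, the move amounts to folding the MAC hooks into the per-NIC-type operations table; a simplified sketch with an illustrative member list:

#include <stdbool.h>

struct my_nic;                                  /* opaque for the sketch */

struct my_nic_type {
	int  (*probe)(struct my_nic *nic);
	int  (*reconfigure_mac)(struct my_nic *nic);  /* was mac_op->reconfigure */
	bool (*check_mac_fault)(struct my_nic *nic);  /* was mac_op->check_fault */
	/* ... */
};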
-
Committed by Ben Hutchings
efx_nic::stats_lock is used to serialise stats updates, but each reader was dropping it before it finished reading efx_nic::mac_stats. If there were concurrent stats reads using procfs, or one using procfs and one using ethtool, an update could race with a read. On a 32-bit system, the reader could see word-tearing of 64-bit stats (32 bits of the old value and 32 bits of the new). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
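A minimal sketch of the locking rule the fix enforces, with simplified structures standing in for the driver's own: the whole copy happens under stats_lock, so a reader can never see half of a 64-bit counter.

#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/types.h>

struct mac_stats { u64 tx_bytes, rx_bytes; /* ... */ };

struct nic {
	spinlock_t stats_lock;
	struct mac_stats mac_stats;
};

static void nic_read_stats(struct nic *nic, struct mac_stats *out)
{
	spin_lock_bh(&nic->stats_lock);
	/* ...refresh nic->mac_stats from the hardware/DMA buffer... */
	memcpy(out, &nic->mac_stats, sizeof(*out));   /* copy under the lock */
	spin_unlock_bh(&nic->stats_lock);
}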
-
- 10 Jan 2012, 1 commit
-
-
Committed by Ben Hutchings
Fix the following warnings: 'WARNING: struct dev_pm_ops should normally be const' and 'WARNING: static const char * array should probably be static const char * const'. Similarly const-qualify struct i2c_board_info, struct i2c_algo_bit_data, struct efx_ethtool_stat, struct efx_mtd_ops and struct siena_nvram_type_info. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
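The pattern being applied, in a couple of illustrative declarations (not the driver's own tables):

#include <linux/pm.h>

static const char *const nic_test_names[] = { "registers", "interrupts" };

static const struct dev_pm_ops example_pm_ops = {
	.suspend = NULL,          /* real callbacks would go here */
	.resume  = NULL,
};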
-
- 17 Nov 2011, 1 commit
-
-
Committed by Michał Mirosław
v2: add a couple of missing conversions in drivers; split unexporting netdev_fix_features(); implemented %pNF; convert sock::sk_route_(no?)caps. Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 Aug 2011, 1 commit
-
-
Committed by Jeff Kirsher
Move the Solarflare drivers into drivers/net/ethernet/sfc/ and make the necessary Kconfig and Makefile changes. CC: Steve Hodgson <shodgson@solarflare.com> CC: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 25 Jun 2011, 2 commits
-
-
Committed by Ben Hutchings
There are certain hardware bugs that may occur on Falcon during normal operation, that require a reset to recover from. We try to minimise disruption by keeping the PHY running, following a reset sequence labelled as 'invisible'. Siena does not suffer from these hardware bugs, so we have not implemented an 'invisible' reset sequence. However, if a similar error does occur (due to a hardware fault or software bug) then the code shared with Falcon will wrongly assume that the PHY is not being reset. Since the mapping of reset reasons (internal) and flags (ethtool) to methods must differ significantly between NIC types, move it into per-NIC-type functions (replacing the insufficient reset_world_flags field). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Committed by Ben Hutchings
Currently an attempt to schedule any reset is ignored if a reset is already pending. This ignores the relative scopes - if the requested reset is greater in scope then the scheduled reset should be upgraded accordingly. There are also some race conditions which could lead to a reset request being lost. Deal with them by using atomic operations on a bitmask. This also makes tests on reset_pending easier to get right. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
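A sketch of the bitmask approach, assuming (for illustration) that higher-numbered methods have wider scope; the names are made up and the real driver's reset types are not listed here.

#include <linux/bitops.h>
#include <linux/compiler.h>

enum reset_method { RESET_RECOVER, RESET_ALL, RESET_DISABLE };

static unsigned long reset_pending;     /* one bit per requested method */

static void schedule_reset(enum reset_method method)
{
	set_bit(method, &reset_pending);  /* atomic: a racing request is never lost */
	/* ...queue the reset work item... */
}

static void reset_work(void)
{
	unsigned long pending = ACCESS_ONCE(reset_pending);

	if (!pending)
		return;
	/* Perform the widest reset requested (e.g. fls(pending) - 1) and clear
	 * only the bits it covers; anything requested meanwhile stays pending. */
}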
-
- 24 Jun 2011, 1 commit
-
-
Committed by Jesper Juhl
It was pointed out by 'make versioncheck' that some includes of linux/version.h are not needed in drivers/net/. This patch removes them. Signed-off-by: Jesper Juhl <jj@chaosbits.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 May 2011, 1 commit
-
-
Committed by David S. Miller
This fixes: drivers/net/sfc/mcdi_mac.c: In function ‘efx_mcdi_set_mac’: drivers/net/sfc/mcdi_mac.c:36:2: warning: case value ‘3’ not in enumerated type ‘enum efx_fc_type’ Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 May 2011, 1 commit
-
-
Committed by Ben Hutchings
We need to keep the TX queues stopped throughout a reset, without triggering the TX watchdog and regardless of the link state. The proper way to do this is to use netif_device_{detach,attach}() just as we do around suspend/resume, rather than the current bodge of faking link-down. Since we also need to do this during an offline self-test and we perform a reset during that, add these function calls outside of efx_reset_down() and efx_reset_up(). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
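A sketch of the detach/attach bracket around a reset; the reset body is a placeholder, not the driver's sequence.

#include <linux/netdevice.h>

static int reset_nic(struct net_device *net_dev)
{
	int rc = 0;

	netif_device_detach(net_dev);   /* TX queues stop; no watchdog fires */

	/* ... efx_reset_down(), hardware reset, efx_reset_up() ... */

	netif_device_attach(net_dev);   /* TX queues restart */
	return rc;
}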
-
- 15 Apr 2011, 1 commit
-
-
Committed by stephen hemminger
The phy, mac, and board information structures should be const. Since the tables contain function pointers, this improves security (at least theoretically). Compile tested only. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Acked-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Apr 2011, 1 commit
-
-
Committed by Ben Hutchings
During self-tests we use efx_process_channel_now() to handle completion and other events synchronously. This disables interrupts and NAPI processing for the channel in question, but it may still be interrupted by another channel. A single socket may receive packets from multiple net devices or even multiple channels of the same net device, so this can result in deadlock on a socket lock. Receiving packets in process context will also result in incorrect classification by the network cgroup classifier. Therefore, we must only use efx_process_channel_now() in the offline loopback tests (which never deliver packets up the stack) and not for the online interrupt and event tests. For the interrupt test, there is no reason to process events. We only care that an interrupt is raised. For the event test, we want to know whether events have been received, and there may be many events ahead of the one we inject. Therefore remove efx_channel::magic_count and instead test whether efx_channel::eventq_read_ptr advances. This is currently an event queue index and might wrap around to exactly the same value, resulting in a false negative. Therefore move the masking to efx_event() and efx_nic_eventq_read_ack() so that it cannot wrap within the time of the test. The event test also tries to diagnose failures by checking whether an event was delivered without causing an interrupt. Add and use a helper function that only does this. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
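The wrap-proofing of the event test reduces to keeping the read pointer unmasked and masking only at the point of use; a simplified sketch:

#include <stdbool.h>

struct eventq {
	unsigned int read_ptr;   /* free-running; NOT pre-masked */
	unsigned int mask;       /* ring size - 1 */
};

static inline unsigned int eventq_index(const struct eventq *q)
{
	return q->read_ptr & q->mask;        /* mask only where an index is needed */
}

static inline bool eventq_advanced(const struct eventq *q, unsigned int before)
{
	return q->read_ptr != before;        /* a full wrap can no longer look idle */
}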
-
- 05 Apr 2011, 1 commit
-
-
Committed by Ben Hutchings
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 31 Mar 2011, 1 commit
-
-
Committed by Lucas De Marchi
Fixes generated by 'codespell' and manually reviewed. Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
-
- 01 Mar 2011, 4 commits
-
-
Committed by Ben Hutchings
All features originally planned for version 3.1 (and some that weren't) have been implemented. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Committed by Ben Hutchings
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Committed by Steve Hodgson
Instead calculate the KVA of received data. It's not like it's a hard sum. [bwh: Fixed to work with GRO.] Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Committed by Steve Hodgson
[bwh: Forward-ported to net-next-2.6.] Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 18 Feb 2011, 1 commit
-
-
Committed by Ben Hutchings
Use the existing filter management functions to insert TCP/IPv4 and UDP/IPv4 4-tuple filters for Receive Flow Steering. For each channel, track how many RFS filters are being added during processing of received packets and scan the corresponding number of table entries for filters that may be reclaimed. Do this in batches to reduce lock overhead. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 16 Feb 2011, 2 commits
-
-
Committed by Ben Hutchings
Implement the ndo_setup_tc() operation with 2 traffic classes. Current Solarstorm controllers do not implement TX queue priority, but they do allow queues to be 'paced' with an enforced delay between packets. Paced and unpaced queues are scheduled in round-robin within two separate hardware bins (paced queues with a large delay may be placed into a third bin temporarily, but we won't use that). If there are queues in both bins, the TX scheduler will alternate between them. If we make high-priority queues unpaced and best-effort queues paced, and high-priority queues are mostly empty, a single high-priority queue can then instantly take 50% of the packet rate regardless of how many of the best-effort queues have descriptors outstanding. We do not actually want an enforced delay between packets on best-effort queues, so we set the pace value to a reserved value that actually results in a delay of 0. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
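A sketch of the ndo_setup_tc() side of this, assuming the hook signature of that kernel generation ((dev, u8 num_tc)) and that the device already exposes one hardware TX queue per core queue per class; the pacing configuration itself is hardware-specific and omitted.

#include <linux/netdevice.h>
#include <linux/errno.h>
#include <linux/types.h>

static int example_setup_tc(struct net_device *dev, u8 num_tc)
{
	unsigned int per_tc = dev->real_num_tx_queues / 2;

	if (num_tc > 2)
		return -EINVAL;
	if (num_tc < 2) {
		netdev_reset_tc(dev);
		return 0;
	}
	netdev_set_num_tc(dev, 2);
	/* class 0 = best effort (paced with a zero delay), class 1 = high priority */
	netdev_set_tc_queue(dev, 0, per_tc, 0);
	netdev_set_tc_queue(dev, 1, per_tc, per_tc);
	return 0;
}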
-
Committed by Ben Hutchings
efx_channel_get_{rx,tx}_queue() currently return NULL if the channel isn't used for traffic in that direction. In most cases this is a bug, but some callers rely on it as an existence test. Add existence test functions efx_channel_has_{rx_queue,tx_queues}() and use them as appropriate. Change efx_channel_get_{rx,tx}_queue() to assert that the requested queue exists. Remove now-redundant initialisation from efx_set_channels(). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
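The existence tests are one-line predicates; a simplified sketch (the real channel structure differs):

#include <stdbool.h>

struct channel {
	int rx_queue_index;     /* < 0 when the channel carries no RX traffic */
	int tx_queue_count;     /* 0 when the channel carries no TX traffic */
};

static inline bool channel_has_rx_queue(const struct channel *ch)
{
	return ch->rx_queue_index >= 0;
}

static inline bool channel_has_tx_queues(const struct channel *ch)
{
	return ch->tx_queue_count != 0;
}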
-
- 25 Jan 2011, 1 commit
-
-
Committed by Michał Mirosław
Quoting Ben Hutchings: we presumably won't be defining features that can only be enabled on 64-bit architectures. Occurrences found by `grep -r` on net/, drivers/net, include/ [ Move features and vlan_features next to each other in struct netdev, as per Eric Dumazet's suggestion -DaveM ] Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 Jan 2011, 1 commit
-
-
Committed by Ben Hutchings
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 11 Dec 2010, 2 commits
-
-
Committed by Ben Hutchings
Long before this driver went into mainline, it had support for multiple TX queues per port, with lockless TX enabled. Since Linux did not know anything of this, filling up any hardware TX queue would stop the core TX queue and multiple hardware TX queues could fill up before the scheduler reacted. Thus it was necessary to keep a count of how many TX queues were stopped and to wake the core TX queue only when all had free space again. The driver also previously (ab)used the per-hardware-queue stopped flag as a counter to deal with various things that can inhibit TX, but it no longer does that. Remove the per-channel tx_stop_count, tx_stop_lock and per-hardware-queue stopped count and just use the networking core queue state directly. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
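Leaning on the core queue state looks roughly like the following sketch (simplified fields; the fill-level thresholds are illustrative):

#include <linux/netdevice.h>

struct hw_tx_queue {
	struct netdev_queue *core_txq;     /* core queue backing this HW queue */
	unsigned int insert_count, read_count, ring_size;
};

/* Transmit path: stop the core queue when the ring is nearly full. */
static void tx_maybe_stop(struct hw_tx_queue *txq, unsigned int descs_needed)
{
	if (txq->insert_count - txq->read_count + descs_needed > txq->ring_size)
		netif_tx_stop_queue(txq->core_txq);
}

/* Completion path: wake it once enough descriptors have drained. */
static void tx_completion(struct hw_tx_queue *txq)
{
	if (netif_tx_queue_stopped(txq->core_txq) &&
	    txq->insert_count - txq->read_count < txq->ring_size / 2)
		netif_tx_wake_queue(txq->core_txq);
}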
-
Committed by Ben Hutchings
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 08 Dec 2010, 1 commit
-
-
Committed by Ben Hutchings
If we are using a legacy interrupt, our IRQ may be shared and our interrupt handler may be called even though interrupts are disabled on the NIC. When we change ring sizes, we reallocate the event queue and the interrupt handler may use an invalid pointer when called for another device's interrupt. Maintain a legacy_irq_enabled flag and test that at the top of the interrupt handler. Note that this problem results from the need to work around broken INT_ISR0 reads, and does not affect the legacy interrupt handler for Falcon A1. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
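A sketch of the guard at the top of a shared legacy interrupt handler (simplified context structure, not the driver's handler):

#include <linux/interrupt.h>
#include <linux/types.h>

struct nic {
	bool legacy_irq_enabled;
	/* ... event queues that may be reallocated while this is false ... */
};

static irqreturn_t legacy_interrupt(int irq, void *dev_id)
{
	struct nic *nic = dev_id;

	if (!nic->legacy_irq_enabled)
		return IRQ_NONE;        /* shared line: not ours right now */

	/* ...read the ISR and schedule NAPI on the owning channels... */
	return IRQ_HANDLED;
}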
-
- 07 Dec 2010, 1 commit
-
-
Committed by Ben Hutchings
Whenever we add DMA descriptors to a TX ring and update the ring pointer, the TX DMA engine must first read the new DMA descriptors and then start reading packet data. However, all released Solarflare 10G controllers have a 'TX push' feature that allows us to reduce latency by writing the first new DMA descriptor along with the pointer update. This is only useful when the queue is empty. The hardware should ignore the pushed descriptor if the queue is not empty, but this check is buggy, so we must do it in software. In order to tell whether a TX queue is empty, we need to compare the previous transmission count (write_count) and completion count (read_count). However, if we do that every time we update the ring pointer then read_count may ping-pong between the caches of two CPUs running the transmission and completion paths for the queue. Therefore, we split the check for an empty queue between the completion path and the transmission path: - Add an empty_read_count field representing a point at which the completion path saw the TX queue as empty. - Add an old_write_count field for use on the completion path. - On the completion path, whenever read_count reaches or passes old_write_count the TX queue may be empty. We then read write_count, set empty_read_count if read_count == write_count, and update old_write_count. - On the transmission path, we read empty_read_count. If it's set, we compare it with the value of write_count before the current set of descriptors was added. If they match, the queue really is empty and we can use TX push. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
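A compressed sketch of the split check; the two *_count fields follow the description above, while EMPTY_VALID and the surrounding code are simplifications, not the driver's implementation.

#include <stdbool.h>

#define EMPTY_VALID 0x80000000u          /* marker: empty_read_count holds a value */

struct txq {
	/* transmission path writes this */
	unsigned int write_count;
	/* completion path writes these */
	unsigned int read_count;
	unsigned int old_write_count;
	unsigned int empty_read_count;   /* read by the transmission path */
};

/* Completion path: record the point at which the queue drained completely. */
static void tx_note_completions(struct txq *q)
{
	if ((int)(q->read_count - q->old_write_count) >= 0) {
		q->old_write_count = q->write_count;
		if (q->read_count == q->old_write_count)
			q->empty_read_count = q->read_count | EMPTY_VALID;
	}
}

/* Transmission path: push the first descriptor only if the queue was really
 * empty before this batch of descriptors was added. */
static bool tx_may_push(const struct txq *q, unsigned int write_count_before)
{
	unsigned int empty = q->empty_read_count;

	return (empty & EMPTY_VALID) &&
	       (empty & ~EMPTY_VALID) == (write_count_before & ~EMPTY_VALID);
}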
-