- 24 September 2011, 8 commits
-
-
By Emil Tantilov
Reloading FW during resets can cause issues. Remove the full reset as it is not needed. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Emil Tantilov
Add support for WOL (Wake-on-LAN) as determined by the EEPROM. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
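A hedged sketch of how EEPROM-derived WOL support is typically surfaced through ethtool. The private struct and its wol field are hypothetical stand-ins (not the ixgbe code); the ethtool_wolinfo callback interface itself is standard.

```c
#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct my_adapter {
	u32 wol;	/* WAKE_* bits decided at probe time from the EEPROM */
};

static void my_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
{
	struct my_adapter *adapter = netdev_priv(netdev);

	wol->supported = WAKE_UCAST | WAKE_MCAST | WAKE_BCAST | WAKE_MAGIC;
	wol->wolopts   = adapter->wol;	/* report only what the EEPROM enabled */
}

static const struct ethtool_ops my_ethtool_ops = {
	.get_wol = my_get_wol,
};
```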
-
By Emil Tantilov
This change is meant to avoid a hardware lockup when Tx work is still pending and we request a reset. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By John Fastabend
This patch adds support for configuring the priority-to-traffic-class mapping. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
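For context, a minimal sketch of where such a mapping lives in the kernel's DCB interface: the IEEE 802.1Qaz ieee_ets structure carries a prio_tc[] priority-to-traffic-class table that a driver's dcbnl handler programs into hardware. The mapping below (priorities 0-3 to TC0, 4-7 to TC1) is illustrative only, not the ixgbe code.

```c
#include <linux/dcbnl.h>

/* Illustrative only: map priorities 0-3 to TC0 and 4-7 to TC1. */
static void fill_example_prio_tc(struct ieee_ets *ets)
{
	int prio;

	for (prio = 0; prio < IEEE_8021QAZ_MAX_TCS; prio++)
		ets->prio_tc[prio] = (prio < 4) ? 0 : 1;
}
```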
-
By Don Skidmore
We don't need SFP+ pluggable support for X540 hardware (copper only), so don't enable the SFP+ interrupts. Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By John Fastabend
The DCB CEE command set_state() completes successfully but is misleading because it enables IEEE mode. After this patch the command fails instead, and IEEE PFC/ETS is managed through the IEEE paths rather than the CEE primitives. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Greg Rose
Use the PCI device flag indicating whether a VF is assigned to a guest VM to guard against destroying VFs upon driver removal. Additionally, detect whether VFs already exist when the driver is loaded; if so, configure them and set the driver state to SR-IOV enabled. Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
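A hedged sketch of the guard described above. The helper names, the example VF device ID, and the loop structure are illustrative rather than the actual ixgbe code, and it assumes the assigned state is reported through the PCI_DEV_FLAGS_ASSIGNED bit added by the companion PCI patch below.

```c
#include <linux/pci.h>

/* Return true if any VF with the given (example) device ID is still
 * flagged as assigned to a guest. Names and the ID are illustrative. */
static bool my_vfs_are_assigned(struct pci_dev *pdev)
{
	struct pci_dev *vfdev = NULL;

	while ((vfdev = pci_get_device(pdev->vendor, 0x10ed /* example VF ID */,
				       vfdev)) != NULL) {
		if (vfdev->dev_flags & PCI_DEV_FLAGS_ASSIGNED) {
			pci_dev_put(vfdev);
			return true;
		}
	}
	return false;
}

static void my_disable_sriov(struct pci_dev *pdev)
{
	if (my_vfs_are_assigned(pdev)) {
		dev_warn(&pdev->dev,
			 "VFs still assigned to guests; leaving SR-IOV enabled\n");
		return;	/* tearing down an assigned VF could crash the guest */
	}
	pci_disable_sriov(pdev);
}
```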
-
By Greg Rose
Device drivers that create and destroy SR-IOV virtual functions via calls to pci_enable_sriov() and pci_disable_sriov() can cause catastrophic failures if they attempt to destroy VFs while those VFs are assigned to guest virtual machines. By adding a flag that the KVM module sets to indicate a device is assigned, a device driver can check the flag and avoid destroying VFs while they are assigned, avoiding system failures. CC: Ian Campbell <ijc@hellion.org.uk> CC: Konrad Wilk <konrad.wilk@oracle.com> Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
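A minimal sketch of the convention this introduces, assuming the flag is the PCI_DEV_FLAGS_ASSIGNED bit in pci_dev->dev_flags: the hypervisor side marks a device when handing it to a guest and clears the mark when the device is returned, so a PF driver can test the bit before calling pci_disable_sriov().

```c
#include <linux/pci.h>

/* Hypervisor-side sketch: mark/unmark a device as assigned to a guest. */
static void mark_device_assigned(struct pci_dev *pdev, bool assigned)
{
	if (assigned)
		pdev->dev_flags |= PCI_DEV_FLAGS_ASSIGNED;
	else
		pdev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
}
```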
-
- 21 September 2011, 7 commits
-
-
By Giuseppe CAVALLARO
This patch adds support for the IC+ IP101A single-port 10/100 PHY and enables APS (a power-saving mode used while the link is down) for both the IP1001 and the IP101A, where this mode is supported. Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Rajesh Borundia
o A performance drop was seen with firmware loaded from flash; this workaround fixes it. o Updated driver version to 4.0.77. Signed-off-by: Rajesh Borundia <rajesh.borundia@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Rajesh Borundia
o Set the VLAN header length to zero. Signed-off-by: Rajesh Borundia <rajesh.borundia@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
Add IP6_TNL_F_USE_ORIG_FWMARK to ip6_tunnel, so that ip6_tnl_xmit2() performs a route lookup that takes skb->fwmark into account and does not cache the lookup result. This permits more flexibility in policies and firewall setups. To set up such a tunnel, the "fwmark inherit" option should be added to the "ip -f inet6 tunnel" command. Reported-by: Anders Franzen <Anders.Franzen@ericsson.com> CC: Hans Schillström <hans.schillstrom@ericsson.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
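A hedged sketch of the mechanism (the helper and its arguments are hypothetical, not the ip6_tunnel code): when the tunnel carries the flag, the transmit path copies the packet's mark into the IPv6 flow key so each route lookup honours fwmark-based policy routing, instead of reusing a cached route.

```c
#include <linux/ip6_tunnel.h>	/* IP6_TNL_F_USE_ORIG_FWMARK */
#include <linux/skbuff.h>
#include <net/flow.h>

/* Hypothetical helper: fill the flow mark for a tunnelled packet. */
static void tnl_fill_flow_mark(struct flowi6 *fl6, const struct sk_buff *skb,
			       u32 tnl_flags)
{
	if (tnl_flags & IP6_TNL_F_USE_ORIG_FWMARK)
		fl6->flowi6_mark = skb->mark;	/* per-packet mark => fresh route lookup */
	else
		fl6->flowi6_mark = 0;		/* cached tunnel route can be reused */
}
```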
-
By Jason Wang
Commit d1b08284 switched to the new frag API but left f uninitialized before use; this patch fixes that. Signed-off-by: Jason Wang <jasowang@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
-
- 20 September 2011, 10 commits
-
-
By Alexander Duyck
Instead of using the multi_tx_table to map possible Tx queues to Tx rings, we can just do simple subtraction for the unlikely event that the Tx queue provided exceeds the number of Tx rings. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
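A hedged sketch of the mapping idea (names are illustrative and the literal driver code may differ): an out-of-range queue number is folded back into [0, num_tx_rings) arithmetically, which for an excess of less than one full range amounts to a single subtraction, so no multi_tx_table lookup is needed.

```c
#include <linux/skbuff.h>

/* Illustrative: map the stack's chosen queue to an existing Tx ring index. */
static unsigned int map_tx_queue(const struct sk_buff *skb,
				 unsigned int num_tx_rings)
{
	unsigned int r_idx = skb->queue_mapping;

	if (r_idx >= num_tx_rings)
		r_idx %= num_tx_rings;	/* fold back into range, no lookup table */

	return r_idx;
}
```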
-
By Alexander Duyck
Since igb only uses advanced descriptors, we might as well just use an IGB-specific define and drop the _ADV suffix from the descriptor declarations. In addition, this can be further reduced by assuming that the code will be working on pointers, since that is normally how the Tx descriptors are handled. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck
Many of the function names in the hot path carry an extra "_adv" suffix to indicate that they use advanced descriptors instead of legacy descriptors. However, since igb only uses advanced descriptors, the suffix doesn't add any information, so it is easiest to just drop it. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck
This change is meant to be a general cleanup and performance improvement for clean_rx_irq. The previous patch updated the allocation so that the rings can be treated as read-only within the clean_rx_irq function. In addition, the operations are re-ordered to accomplish several goals: reducing the overhead of packet accounting, reducing the number of items on the stack, and improving overall performance. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck
This change is meant to improve performance by splitting the Tx and Rx rings into three sections. The first is a primarily read-only section containing basics such as the indexes, pointers to the dev and netdev structures, and other basic information. The second section contains the stats and the next_to_use and next_to_clean values. The third section holds rarely used values that can simply be placed at the end of the ring structure and are not touched in the hot path. The adapter structure also has several sections that are read in the hot path; to improve performance there, the frequently read hot-path items are combined into a single cache line. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
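A hedged sketch of the layout idea, not the actual igb_ring definition: read-mostly hot-path fields first, per-packet state next, and rarely touched fields last, with the structure aligned so the groups fall on separate cache lines.

```c
#include <linux/cache.h>
#include <linux/device.h>
#include <linux/netdevice.h>
#include <linux/types.h>

struct example_ring {
	/* Section 1: read-mostly in the hot path */
	void *desc;			/* descriptor ring base address */
	struct net_device *netdev;
	struct device *dev;
	u16 count;
	u8 queue_index;
	u8 reg_idx;

	/* Section 2: state written per packet */
	u16 next_to_use;
	u16 next_to_clean;
	u64 packets;
	u64 bytes;

	/* Section 3: rarely touched setup/teardown state */
	dma_addr_t dma;
	unsigned int size;
} ____cacheline_internodealigned_in_smp;
```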
-
By Alexander Duyck
This change is meant to streamline Rx buffer allocation and cleanup. This is accomplished by reducing the number of writes: the Rx descriptor ring is only written by software during allocation and only read during cleanup. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck
This change removes support for single-buffer mode from igb and makes the driver always operate in packet-split mode. The advantage is that total memory allocation overhead is reduced significantly, since we only need to allocate one 1K slab per packet and then make use of a reusable half page, instead of allocating a 2K slab per packet. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck
This patch modifies max_frame_size in order to account for an optional VLAN tag. To support this we must also increase MAX_STD_JUMBO_FRAME_SIZE to account for the 4 extra bytes. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
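A small illustration of the sizing rule (not the driver's actual macro): the worst-case on-wire frame is the MTU plus the Ethernet header, the frame check sequence, and the optional 4-byte 802.1Q tag, so a 1500-byte MTU yields 1500 + 14 + 4 + 4 = 1522 bytes.

```c
#include <linux/if_ether.h>
#include <linux/if_vlan.h>

/* Illustrative: worst-case on-wire frame size for a given MTU. */
static unsigned int example_max_frame_size(unsigned int mtu)
{
	return mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;	/* 1500 -> 1522 */
}
```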
-
By Alexander Duyck
This change cleans up the RXDCTL and TXDCTL configurations and optimizes RX performance by allowing back write-backs on all hardware other than the 82576. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
-
- 18 September 2011, 11 commits
-
-
By Ying Xue
Eliminates prototype link event tracking functionality that was never fleshed out and doesn't do anything useful at the current time. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
By Ying Xue
Eliminates the "event_cb" member from TIPC's "subscription" structure, since the function pointer it holds always points to subscr_send_event(). Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
By Ying Xue
Modifies the proto_ops structure used by TIPC DGRAM and RDM sockets so that calls to listen() and accept() are handled by the existing kernel "unsupported operation" routines, and eliminates the related checks in the listen and accept routines used by SEQPACKET and STREAM sockets, which are no longer needed. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
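A hedged sketch of the technique, not the actual TIPC ops table: a datagram-style proto_ops simply points listen and accept at the kernel's generic "unsupported operation" helpers, which return -EOPNOTSUPP, instead of carrying its own checks.

```c
#include <linux/module.h>
#include <linux/net.h>
#include <net/sock.h>

static const struct proto_ops example_dgram_ops = {
	.family	= AF_TIPC,
	.owner	= THIS_MODULE,
	.listen	= sock_no_listen,	/* generic helper: returns -EOPNOTSUPP */
	.accept	= sock_no_accept,	/* generic helper: returns -EOPNOTSUPP */
	/* ...release, bind, connect, sendmsg, recvmsg, etc. omitted... */
};
```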
-
By Ying Xue
Adds support for the SO_SNDTIMEO socket option. (This complements the existing support for SO_RCVTIMEO.) Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
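A user-space illustration of what this enables (a sketch, not TIPC test code): after setting SO_SNDTIMEO, a send that would otherwise block on a congested TIPC link returns once the timeout expires.

```c
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
	struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };	/* 2 s send timeout */
	int sd = socket(AF_TIPC, SOCK_SEQPACKET, 0);

	if (sd < 0) {
		perror("socket(AF_TIPC)");
		return 1;
	}
	if (setsockopt(sd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv)) < 0)
		perror("setsockopt(SO_SNDTIMEO)");

	/* ...connect()/send() as usual; blocked sends now time out after 2 s... */
	close(sd);
	return 0;
}
```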
-
By Allan Stephens
Modifies the initial transfer of name table entries to a new neighboring node so that the messages are enqueued as a unit rather than individually. The revised algorithm locates the link carrying the messages only once, and eliminates unnecessary checks for link congestion, message fragmentation, and message bundling that are not required when sending these messages. Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
By Paul Gortmaker
Functions like this are called through function pointers that take an unsigned long. In this case, the function is passed a node value that TIPC normally treats internally as a u32. Rather than adding more casts into this function for each future use of node, move the cast to a single place by assigning it to a local variable. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
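A minimal illustration of the pattern being described (names are made up): the callback receives an unsigned long, casts it once into a typed local, and every later use works with the properly typed value.

```c
#include <linux/printk.h>
#include <linux/types.h>

/* Callback invoked through an 'unsigned long' function-pointer argument. */
static void example_node_event(unsigned long addr)
{
	u32 node = (u32)addr;	/* single cast; use 'node' from here on */

	pr_debug("event for node 0x%x\n", node);
}
```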
-
By Allan Stephens
Reduces the maximum size of messages sent during the initial exchange of name table information between two nodes to be no larger than the MTU of the first link established between them. This ensures that the messages never need to be fragmented, which would add unnecessary overhead to the name table synchronization mechanism. Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
By Allan Stephens
Reduces the number of bearers a node can support to 2, which may use identical or different media. This change won't impact users, since they are currently limited to a maximum of 2 Ethernet bearers anyway, and it saves memory by eliminating a number of unused entries in TIPC's media and bearer arrays. Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
By Allan Stephens
Removes obsolete code that searches for an Ethernet bearer structure entry to use for a newly enabled bearer, since this search is now performed at the start of the enabling algorithm. Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
By Allan Stephens
Ensures that the device list lock is held while locating the Ethernet device used by a newly enabled bearer, so that the addition or removal of a device does not cause problems. Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
By Allan Stephens
Enhances TIPC to ensure that a node that loses contact with a neighboring node does not allow contact to be re-established until it sees that its peer has also recognized the loss of contact. Previously, nodes that were connected by two or more links could encounter a situation in which node A would lose contact with node B on all of its links, purge its name table of names published by B, and then fail to repopulate those names once contact with B was restored. This would happen because B was able to re-establish one or more links so quickly that it never reached a point where it had no links to A -- meaning that B never saw a loss of contact with A, and consequently didn't re-publish its names to A. This problem is now prevented by enhancing the cleanup done by TIPC following a loss of contact with a neighboring node to ensure that node A ignores all messages sent by B until it receives a LINK_PROTOCOL message that indicates B has lost contact with A, thereby preventing the (re)establishment of links between the nodes. The loss of contact is recognized when a RESET or ACTIVATE message is received that has a "redundant link exists" field of 0, indicating that B's sending link endpoint is in a reset state and that B has no other working links. Additionally, TIPC now suppresses the sending of (most) link protocol messages to a neighboring node while it is cleaning up after an earlier loss of contact with that node. This stops the peer node from prematurely activating its link endpoint, which would prevent TIPC from later activating its own end. TIPC still allows outgoing RESET messages to occur during cleanup, to avoid problems if its own node recognizes the loss of contact first and tries to notify the peer of the situation. Finally, TIPC now recognizes an impending loss of contact with a peer node as soon as it receives a RESET message on a working link that is the peer's only link to the node, and ensures that the link protocol suppression mentioned above goes into effect right away -- that is, even before its own link endpoints have failed. This is necessary to ensure correct operation when there are redundant links between the nodes, since otherwise TIPC would send an ACTIVATE message upon receiving a RESET on its first link and only begin suppressing when a RESET on its second link was received, instead of initiating suppression with the first RESET message as it needs to. Note: The reworked cleanup code also eliminates a check that prevented a link endpoint's discovery object from responding to incoming messages while stale name table entries are being purged. This check is now unnecessary and would have slowed down re-establishment of communication between the nodes in some situations. Signed-off-by: Allan Stephens <allan.stephens@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 17 September 2011, 4 commits
-
-
By Eric Dumazet
tcp_md5sig_pool is currently an 'array' (a percpu object) of pointers to struct tcp_md5sig_pool. Only the pointers are NUMA-aware; the objects themselves are all allocated on a single node. Remove this extra indirection to get proper per-cpu memory (NUMA-aware) and make the code simpler. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
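A hedged illustration of the general per-cpu technique applied here (the struct is a stand-in, not tcp_md5sig_pool): allocate the objects themselves as per-cpu memory with alloc_percpu() so each CPU's copy is NUMA-local and the extra pointer indirection disappears.

```c
#include <linux/errno.h>
#include <linux/percpu.h>

struct example_pool {
	char scratch[128];
};

static struct example_pool __percpu *pool;

static int pool_init(void)
{
	pool = alloc_percpu(struct example_pool);	/* one object per CPU, NUMA-local */
	return pool ? 0 : -ENOMEM;
}

static struct example_pool *pool_get_local(void)
{
	return this_cpu_ptr(pool);	/* direct per-cpu object, no pointer array */
}
```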
-
By Rasesh Mody
Change details: - In a continuous sequence of ifconfig up/down operations there is a small window of race between bnad_set_rx_mode() and bnad_cleanup_rx(), while the former tries to access rx_info->rx and the latter sets it to NULL. This race could lead to bna_rx_mode_set() being called with a NULL rx_info->rx pointer, causing a crash. - Hold bnad->bna_lock while setting/unsetting rx_info->rx in bnad_setup_rx() and bnad_cleanup_rx(), thereby eliminating the race described above. Signed-off-by: Gurunatha Karaje <gkaraje@brocade.com> Signed-off-by: Rasesh Mody <rmody@brocade.com> Signed-off-by: David S. Miller <davem@davemloft.net>
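A hedged sketch of the locking pattern the fix describes (struct and function names are illustrative, not the bna driver code): both the path that publishes the rx pointer and the path that clears it take the same lock, closing the window in which a reader could see a half-torn-down object.

```c
#include <linux/spinlock.h>

struct example_bnad {
	spinlock_t bna_lock;
	void *rx;			/* pointer the two paths race over */
};

static void example_setup_rx(struct example_bnad *bnad, void *rx)
{
	unsigned long flags;

	spin_lock_irqsave(&bnad->bna_lock, flags);
	bnad->rx = rx;			/* publish under the lock */
	spin_unlock_irqrestore(&bnad->bna_lock, flags);
}

static void example_cleanup_rx(struct example_bnad *bnad)
{
	unsigned long flags;

	spin_lock_irqsave(&bnad->bna_lock, flags);
	bnad->rx = NULL;		/* unpublish under the same lock */
	spin_unlock_irqrestore(&bnad->bna_lock, flags);
}
```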
-
By Rasesh Mody
When the Rx queue size is changed, the queues are torn down and set up again with the new queue size. During this operation, clear promiscuous mode and restore the original VLAN filter. Signed-off-by: Gurunatha Karaje <gkaraje@brocade.com> Signed-off-by: Rasesh Mody <rmody@brocade.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Rasesh Mody
Remove a BUG_ON() that is not required. Instead of unconditionally writing to release a semaphore, read the semaphore first and then write; this eliminates the possibility of the semaphore being locked while trying to release it if the previous sem_get operation failed. Signed-off-by: Gurunatha Karaje <gkaraje@brocade.com> Signed-off-by: Rasesh Mody <rmody@brocade.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-