- 14 Sep 2014, 40 commits
-
Committed by Mark Leonard
This patch adds support for the dump-module-eeprom and module-info ethtool options.
Signed-off-by: Mark Leonard <mark.leonard@emulex.com>
Signed-off-by: Suresh Reddy <Suresh.Reddy@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
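For context, a driver typically wires this up through the two ethtool_ops callbacks sketched below. This is only a minimal sketch using the generic ethtool structures, not the actual be2net implementation; the function names and the placeholder body are hypothetical.

    static int xx_get_module_info(struct net_device *netdev,
                                  struct ethtool_modinfo *modinfo)
    {
            /* Report which EEPROM layout the pluggable module exposes. */
            modinfo->type = ETH_MODULE_SFF_8079;
            modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
            return 0;
    }

    static int xx_get_module_eeprom(struct net_device *netdev,
                                    struct ethtool_eeprom *eeprom, u8 *data)
    {
            /* Copy eeprom->len bytes starting at eeprom->offset into data;
             * a real driver would read them from the module/FW here. */
            return -EOPNOTSUPP;     /* placeholder */
    }

    /* In struct ethtool_ops:
     *      .get_module_info   = xx_get_module_info,
     *      .get_module_eeprom = xx_get_module_eeprom,
     * which back the module-info / dump-module-eeprom ethtool options.
     */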
-
Committed by Suresh Reddy
In some configurations the FW doesn't allow changing the flow control settings of a link. Unless a v1 version of the SET_FLOW_CONTROL cmd is used, the FW doesn't report an error to the driver.
Signed-off-by: Suresh Reddy <Suresh.Reddy@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ajit Khaparde
In the RX path, the driver currently consumes up to 64 (budget) packets in one NAPI sweep. When the size of the received packet is larger than a fragment size (2K), more than one fragment is consumed for each packet. As the driver currently posts a max of 64 fragments, all the consumed fragments may not be replenished. This can cause avoidable drops in the RX path. This patch fixes this by posting max(consumed_frags, 64) fragments. This is done only when there are at least 64 free slots in the RXQ.
Signed-off-by: Ajit Khaparde <ajit.khaparde@emulex.com>
Signed-off-by: Kalesh AP <kalesh.purayil@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
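Schematically, the fix amounts to the decision below; this is an illustrative sketch, and rxq_free_slots()/post_rx_frags() are hypothetical helpers, not be2net functions.

    /* After a NAPI sweep: repost at least a full batch, and more if the
     * sweep consumed more than a batch worth of fragments, but only when
     * the RX queue has room for it. */
    if (rxq_free_slots(rxq) >= 64) {
            u32 to_post = max_t(u32, frags_consumed, 64);

            post_rx_frags(rxq, to_post);
    }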
-
Committed by Vasundhara Volam
Replace strcpy with strlcpy, as it avoids a possible buffer overflow.
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
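The substitution is the standard one, sketched here on a hypothetical fixed-size destination field:

    /* Before: no bound check, a long source string overflows the buffer. */
    strcpy(drvinfo->driver, src);

    /* After: the copy is truncated to the size of the destination and is
     * always NUL-terminated. */
    strlcpy(drvinfo->driver, src, sizeof(drvinfo->driver));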
-
Committed by Vasundhara Volam
This patch fixes the following minor issues with log messages in be2net:
1) A period is not required at the end of a log message.
2) Remove the "Unknown grp5 event" logs to reduce noise. The driver can safely ignore async events from the FW that it is not interested in.
3) Reword a log message for better readability to say that SRIOV "is disabled" rather than "not supported".
Signed-off-by: Vasundhara Volam <vasundhara.volam@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Hannes Frederic Sowa
Currently we have two pkt_type_offset functions doing the same thing, spread across the architecture files. Remove those and replace them with a PKT_TYPE_OFFSET macro helper which gets the constant value with offsetof() from a zero-sized sk_buff member placed right in front of the bitfield. This new offset marker does not change the size of struct sk_buff.
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
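The trick relies on a zero-sized member acting as an offset marker; roughly, on a generic struct (the real sk_buff field name may differ):

    struct example {
            unsigned int    headers_end;
            __u8            __type_offset[0];       /* zero-sized: adds no bytes */
            __u8            pkt_type:3,
                            other_bits:5;
    };

    /* The byte offset of the bitfield storage can now be taken with
     * offsetof(), e.g. for use from assembly or BPF code: */
    #define PKT_TYPE_OFFSET()       offsetof(struct example, __type_offset)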
-
Committed by Florian Fainelli
Now that commit 3e8a72d1 ("net: dsa: reduce number of protocol hooks") introduced an additional multiplexing/demultiplexing layer that lives within the DSA code, we no longer need a given switch driver's tag_protocol to be an actual ethertype value; instead, we can replace it with an enum: dsa_tag_protocol. Do this replacement in the drivers, which allows us to get rid of the cpu_to_be16()/htons() dance, and remove ETH_P_BRCMTAG since we do not need it anymore.
Suggested-by: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
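Conceptually, the change replaces an ethertype constant with a plain enum along these lines (illustrative values only; the full list lives in the DSA headers):

    enum dsa_tag_protocol {
            DSA_TAG_PROTO_NONE = 0,
            DSA_TAG_PROTO_DSA,
            DSA_TAG_PROTO_TRAILER,
            DSA_TAG_PROTO_BRCM,
    };

    /* Drivers now report e.g. DSA_TAG_PROTO_BRCM directly, so the previous
     * cpu_to_be16(ETH_P_BRCMTAG)/htons() conversions go away. */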
-
Committed by hayeswang
Support hw VLAN for tx and rx, and enable them by default.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Wei Yongjun
In case of error, the function devm_ioremap_resource() returns ERR_PTR() and never returns NULL. The NULL test in the return value check should be replaced with IS_ERR().
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
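The corrected check follows the usual devm_ioremap_resource() pattern, sketched here:

    void __iomem *base;

    base = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(base))               /* ERR_PTR-encoded errno, never NULL */
            return PTR_ERR(base);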
-
Committed by WANG Cong
If IPv6 is explicitly disabled before the interface comes up, it makes no sense to continue when it comes up, even just to print a message. (I am not sure about the other cases, so I prefer not to touch them.)
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Cong Wang says:
====================
ipv6: clean up locking code in anycast and mcast
This patchset cleans up the locking code in anycast.c and mcast.c and makes the refcount code more readable.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
v1 -> v2:
* refactor some code and split it out into a separate patch
* update comments
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by WANG Cong
Refactor out allocation and initialization, and make the refcount code more readable.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by WANG Cong
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by WANG Cong
Similarly, the code is already protected by the rtnl lock.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by WANG Cong
Similarly, the code is already protected by the rtnl lock.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by WANG Cong
Refactor out allocation and initialization, and make the refcount code more readable.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by WANG Cong
Make it accept inet6_dev, and rename it to __ipv6_dev_ac_inc() to reflect this change.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by WANG Cong
Just move the rtnl lock up, so that the anycast list can now be protected by the rtnl lock.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by WANG Cong
This code is now protected by the rtnl lock; the rcu read lock is no longer needed.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Nikolay Aleksandrov says:
====================
bonding: get rid of curr_slave_lock
This is the second patch-set dealing with bond locking, and the purpose here is to convert curr_slave_lock into a spinlock called "mode_lock" which can be used in the various modes for their specific needs. The first three patches clean up the use of curr_slave_lock and prepare it for the conversion, which is done in patch 4; then the modes that were using their own locks are converted to use the new "mode_lock", giving us the opportunity to remove their locks.
This patch-set has been tested in each mode by running enslave/release of slaves in parallel with traffic transmission and miimon=1, i.e. running all the time. In fact this led to the discovery of a subtle bug related to RCU which will be fixed in -net. Also did an allmodconfig test just in case :-)
v2: fix bond_3ad_state_machine_handler's use of mode_lock and curr_slave_lock
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Nikolay Aleksandrov
Now that the locks have been removed, remove some unnecessary comments and adjust others to reflect reality. Also add a comment to "mode_lock" to describe its current users and give a brief summary of why they need it.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Nikolay Aleksandrov
Now that we have bond->mode_lock, we can remove the state_machine_lock and use mode_lock in its place. There are no fast paths requiring the per-port spinlocks, so it should be okay to consolidate them into mode_lock. Also move the locking inside the unbinding function, as we don't want to expose mode_lock outside of the specific modes.
Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Nikolay Aleksandrov
The ALB/TLB specific spinlocks are no longer necessary as we now have bond->mode_lock for this purpose, so convert the code to use it and remove them from struct alb_bond_info. Also remove the now-unneeded lock/unlock wrapper functions and use spin_lock/unlock directly.
Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Nikolay Aleksandrov
curr_slave_lock is now a misleading name; a much better name is mode_lock, as it will be used for each mode's purposes. It is also no longer necessary to use a rwlock, a simple spinlock is enough.
Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Nikolay Aleksandrov
As we have discussed previously, almost all users of curr_slave_lock already hold RTNL, so there is no point in keeping the lock; the one place where it must stay is the 3ad code, and in fact it is the only one.
It is okay to remove it from bond_do_fail_over_mac() as that is called with RTNL and drops curr_slave_lock anyway. bond_change_active_slave() is one of the main places where curr_slave_lock was used; it is okay to remove it there as all callers hold RTNL these days before calling it, which is why the ASSERT_RTNL() is moved to the beginning to catch any potential offenders of this rule. The RTNL argument applies to all of the places where curr_slave_lock has been removed in this patch.
Also remove the now-unnecessary bond_deref_active_protected() macro and use rtnl_dereference() instead.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
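The dereference this converts to looks roughly like the line below (a sketch; the surrounding code is assumed to hold RTNL):

    /* RTNL is held, so no rcu_read_lock() is needed and lockdep can
     * verify the locking assumption: */
    struct slave *curr = rtnl_dereference(bond->curr_active_slave);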
-
Committed by Nikolay Aleksandrov
First, in rlb_teach_disabled_mac_on_primary() it is okay to remove curr_slave_lock as all callers except bond_alb_monitor() already hold RTNL, and in case bond_alb_monitor() is executing we can at most have a period with bad throughput (very unlikely though).
In bond_alb_monitor() it is okay to remove the read_lock as the slave list is walked with RCU, and the worst that could happen is another transmitter at the same time, and thus for a period which currently is 10 seconds (bond_alb.h: BOND_ALB_LP_TICKS).
And bond_alb_handle_active_change() is okay because it is always called with RTNL. Removed the ASSERT_RTNL() because it will be inserted in the parent function in a following patch.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Nikolay Aleksandrov
Remove the read_lock in bond_3ad_lacpdu_recv(), since when the slave is being released its rx_handler is removed before the 3ad unbind, so even if packets arrive they won't see the slave in an inconsistent state.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Rusty Russell
virtqueue_add() populates the virtqueue descriptor table from the sgs given. If it uses an indirect descriptor table, then it puts a single descriptor in the descriptor table pointing to the kmalloc'ed indirect table where the sg is populated.
Previously vring_add_indirect() did the allocation and the simple linear layout. We replace that with alloc_indirect(), which allocates the indirect table and then chains it like the normal descriptor table so we can reuse the core logic.
This slows down pktgen (which uses direct descriptors) by less than half a percent, as well as vring_bench, but it's far neater.
vring_bench before: 1061485790-1104800648(1.08254e+09+/-6.6e+06)ns
vring_bench after: 1125610268-1183528965(1.14172e+09+/-8e+06)ns
pktgen before: 787781-796334(793165+/-2.4e+03)pps 365-369(367.5+/-1.2)Mb/sec (365530384-369498976(3.68028e+08+/-1.1e+06)bps) errors: 0
pktgen after: 779988-790404(786391+/-2.5e+03)pps 361-366(364.35+/-1.3)Mb/sec (361914432-366747456(3.64885e+08+/-1.2e+06)bps) errors: 0
Now, if we force indirect descriptors by turning off any_header_sg in virtio_net.c:
pktgen before: 713773-721062(718374+/-2.1e+03)pps 331-334(332.95+/-0.92)Mb/sec (331190672-334572768(3.33325e+08+/-9.6e+05)bps) errors: 0
pktgen after: 710542-719195(714898+/-2.4e+03)pps 329-333(331.15+/-1.1)Mb/sec (329691488-333706480(3.31713e+08+/-1.1e+06)bps) errors: 0
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Rusty Russell
We used to have several callers which just used arrays. They're gone, so we can use sg_next() everywhere, simplifying the code.
On my laptop, this slowed down vring_bench by 15%:
vring_bench before: 936153354-967745359(9.44739e+08+/-6.1e+06)ns
vring_bench after: 1061485790-1104800648(1.08254e+09+/-6.6e+06)ns
However, a more realistic test using pktgen on an AMD FX(tm)-8320 saw a few percent improvement:
pktgen before: 767390-792966(785159+/-6.5e+03)pps 356-367(363.75+/-2.9)Mb/sec (356068960-367936224(3.64314e+08+/-3e+06)bps) errors: 0
pktgen after: 787781-796334(793165+/-2.4e+03)pps 365-369(367.5+/-1.2)Mb/sec (365530384-369498976(3.68028e+08+/-1.1e+06)bps) errors: 0
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
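Iterating a chained, properly terminated scatterlist with sg_next() then looks roughly like this (process_desc() is a hypothetical consumer):

    struct scatterlist *sg;

    for (sg = sgs; sg; sg = sg_next(sg)) {
            /* sg_next() follows chained tables and returns NULL after the
             * entry marked with sg_mark_end(). */
            process_desc(sg_phys(sg), sg->length);
    }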
-
Committed by Rusty Russell
This is the only driver which doesn't hand virtqueue_add_inbuf and virtqueue_add_outbuf a well-formed, well-terminated sg. Fix it, so we can make virtio_add_* simpler.
pktgen results:
modprobe pktgen
echo 'add_device eth0' > /proc/net/pktgen/kpktgend_0
echo nowait 1 > /proc/net/pktgen/eth0
echo count 1000000 > /proc/net/pktgen/eth0
echo clone_skb 100000 > /proc/net/pktgen/eth0
echo dst_mac 4e:14:25:a9:30:ac > /proc/net/pktgen/eth0
echo dst 192.168.1.2 > /proc/net/pktgen/eth0
for i in `seq 20`; do echo start > /proc/net/pktgen/pgctrl; tail -n1 /proc/net/pktgen/eth0; done
Before: 746547-793084(786421+/-9.6e+03)pps 346-367(364.4+/-4.4)Mb/sec (346397808-367990976(3.649e+08+/-4.5e+06)bps) errors: 0
After: 767390-792966(785159+/-6.5e+03)pps 356-367(363.75+/-2.9)Mb/sec (356068960-367936224(3.64314e+08+/-3e+06)bps) errors: 0
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller (pulled from git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next)
Jeff Kirsher says:
====================
Intel Wired LAN Driver Updates 2014-09-12
This series contains updates to e1000, ixgbe and ixgbevf.
Mark provides two fixes to reduce compile warnings produced by ixgbe and ixgbevf.
Alex provides two patches for ixgbe. The first removes the receive buffer allocation at the end of ixgbe_clean_rx_irq(); the reason for removing this is to avoid the extra latency introduced by the MMIO write. The second patch addresses several issues in the current ixgbe implementation of busy poll sockets. It was possible for frames to be delivered out of order if they were held in GRO, so address this by flushing the GRO buffers before releasing the q_vector back to the idle state. Also, we were having to take a spinlock when changing the state to and from idle; to resolve this, the state value is replaced with an atomic, using atomic_cmpxchg to change the value from idle and a simple atomic set to restore it back to idle after we have acquired it. This allows us to use a locked operation only when acquiring the vector, with no need for a locked operation to release it.
Florian Westphal provides several patches for e1000 which do some cleanup and updating of the driver. He moved e1000_tbi_adjust_stats() so that he could make the function static, added a helper function to deal with the tbi workaround that was located in two different Rx clean functions, added an e1000_rx_buffer struct for use on receive since transmit and receive have different requirements, and updated e1000 to use the napi_gro_frags API.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
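The lock-free busy-poll locking described above (atomic_cmpxchg to acquire, a plain atomic set to release) boils down to a pattern like this sketch; the names are illustrative, not the exact ixgbe code:

    enum { QV_STATE_IDLE = 0, QV_STATE_NAPI, QV_STATE_POLL };

    static bool qv_try_lock(atomic_t *state, int owner)
    {
            /* Succeeds only if the vector was idle; no spinlock is taken. */
            return atomic_cmpxchg(state, QV_STATE_IDLE, owner) == QV_STATE_IDLE;
    }

    static void qv_unlock(atomic_t *state)
    {
            /* Only the current owner reaches this point, so a plain
             * atomic store is enough to release the vector. */
            atomic_set(state, QV_STATE_IDLE);
    }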
-
Committed by David S. Miller
John Fastabend says:
====================
net/sched rcu classifiers and tcf
This series converts the tcf_proto usage to RCU. This requires updating each classifier individually to handle the new copy/update requirement and also updating the core list traversals. This makes the assumption that updates to the tables are infrequent in comparison to the packets per second being classified. On a 10Gbps link running near line rate we can easily produce 12+ million packets per second, so IMO this is a reasonable assumption. The updates are serialized by RTNL.
I have done some basic testing on this series and do not see any immediate splats or issues. The patch series has been running on my dev systems for a month or so now and I've not seen any issues, although my configurations are not overly complicated. My test cases at this point cover all the filters with a tight loop to add/remove filters, some basic estimator tests where I add an estimator to the qdisc and verify that the statistics are accurate using pktgen, and finally a small script to exercise the 'tc actions' interface. Feel free to send me more tests off list and I can run them.
This is prep work to drop the qdisc lock, with the first target being the ingress qdisc. Still to be done is making the tc actions RCU safe and making statistics per cpu. Those patches are in the works.
Comments:
- Checkpatch is still giving errors on some >80 char lines; I know about this. IMO the way to fix this is to restructure the sched code to avoid being so heavily indented, but doing that here would bloat the patchset, and anyway there are already lots of >80 char lines in these files. I would prefer to keep the patches as is, but let me know if others think I should fix these and I will. A follow-up patch set could restructure the code and fix this throughout the code blocks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by John Fastabend
This patch makes the cls_bpf classifier RCU safe. The tcf_lock was being used to protect a list of cls_bpf_prog; now this list is RCU safe and updates occur with rcu_replace.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
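Under RTNL, such an update typically follows the generic publish-then-free RCU pattern sketched below (hypothetical field and callback names, not the exact cls_bpf code):

    /* Publish the new program, then free the old one after a grace period. */
    old = rtnl_dereference(head->prog);
    rcu_assign_pointer(head->prog, new_prog);
    if (old)
            call_rcu(&old->rcu, prog_free_rcu);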
-
Committed by John Fastabend
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by John Fastabend
Make the cls_u32 classifier safe to run without holding the lock. This patch converts the statistics that are kept in the read section of u32_classify into per cpu counters.
This patch was tested with a tight u32 filter add/delete loop while generating traffic with pktgen. By running pktgen on vlan devices created on top of a physical device we can hit the qdisc layer correctly. For ingress qdiscs a loopback cable was used.
for i in {1..100}; do
        q=`echo $i%8|bc`;
        echo -n "u32 tos: iteration $i on queue $q";
        tc filter add dev p3p2 parent $p prio $i u32 match ip tos 0x10 0xff \
                action skbedit queue_mapping $q;
        sleep 1;
        tc filter del dev p3p2 prio $i;
        echo -n "u32 tos hash table: iteration $i on queue $q";
        tc filter add dev p3p2 parent $p protocol ip prio $i handle 628: u32 divisor 1
        tc filter add dev p3p2 parent $p protocol ip prio $i u32 \
                match ip protocol 17 0xff link 628: offset at 0 mask 0xf00 shift 6 plus 0
        tc filter add dev p3p2 parent $p protocol ip prio $i u32 \
                ht 628:0 match ip tos 0x10 0xff action skbedit queue_mapping $q
        sleep 2;
        tc filter del dev p3p2 prio $i
        sleep 1;
done
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
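The per-CPU counter conversion follows the standard kernel pattern, roughly as sketched below (illustrative names, not the exact cls_u32 fields):

    u64 __percpu *hits;

    /* setup: one counter slot per CPU */
    hits = alloc_percpu(u64);

    /* hot path (classify): lockless increment of this CPU's slot */
    this_cpu_inc(*hits);

    /* dump path: fold the per-CPU values into one total */
    u64 total = 0;
    int cpu;

    for_each_possible_cpu(cpu)
            total += *per_cpu_ptr(hits, cpu);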
-
Committed by John Fastabend
This uses per cpu counters in cls_u32 in preparation for converting over to rcu.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by John Fastabend
Make cls_tcindex RCU safe. This patch adds a new RCU routine, rcu_dereference_bh_rtnl(), to check that the caller either holds the rcu read lock or RTNL. This is needed to handle the case where tcindex_lookup() is called in both contexts.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
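Usage of the new helper is roughly as below (a sketch; tp->root is the classifier's private data pointer):

    /* Valid from the classify path (BH-disabled RCU read side) as well as
     * from the control path, which holds RTNL: */
    struct tcindex_data *p = rcu_dereference_bh_rtnl(tp->root);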
-
Committed by John Fastabend
RCUify the route classifier. For now, however, spinlocks are used to protect the fastmap cache. The issue here is that the fastmap may be read by one CPU while the cache is being updated by another. An array of pointers could be one possible solution.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by John Fastabend
RCU'ify the fw classifier.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by John Fastabend
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-