- 05 Oct 2015, 7 commits
-
-
By Nikolay Aleksandrov
Add IFLA_BR_ROOT_ID and export br->designated_root via netlink. For this purpose add struct ifla_bridge_id, which represents struct bridge_id. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
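For reference, the netlink representation of a bridge id added here is roughly the following; this is a sketch mirroring struct bridge_id (2-byte priority followed by a 6-byte MAC address), not a verbatim copy of the uapi header:

```c
#include <linux/types.h>

/* Sketch of the structure exported with IFLA_BR_ROOT_ID:
 * 2-byte bridge priority + 6-byte bridge MAC address. */
struct ifla_bridge_id {
	__u8	prio[2];
	__u8	addr[6];	/* ETH_ALEN */
};
```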
-
By Nikolay Aleksandrov
Add the IFLA_BR_GROUP_FWD_MASK attribute to allow setting and retrieving the group_fwd_mask via netlink. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
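A minimal sketch of the fill/parse pattern for the new attribute; the function names here are illustrative, not the exact ones used in br_netlink.c:

```c
#include <net/netlink.h>
#include <linux/if_link.h>

/* Export the current mask to user space. */
static int group_fwd_mask_fill(struct sk_buff *skb, u16 group_fwd_mask)
{
	return nla_put_u16(skb, IFLA_BR_GROUP_FWD_MASK, group_fwd_mask);
}

/* Apply a mask received from user space. */
static u16 group_fwd_mask_parse(const struct nlattr *attr)
{
	return nla_get_u16(attr);
}
```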
-
By David S. Miller
Nikolay Aleksandrov says: ==================== bridge: vlan: cleanups & fixes (part 2) This is the second follow-up set, with one fix (patch 01) and more cleanups (patches 02, 03 and 04). These are minor compared to the previous ones and should be the last before taking on the optimization changes on the fast path. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Nikolay Aleksandrov
The checks that lead to a num_vlans change are exactly what br_vlan_should_use checks for, namely whether the vlan is only a context or not; depending on that it is either not counted or counted as a real/used vlan, respectively. Also give a better explanation in br_vlan_should_use's comment. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
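A hedged sketch of the intent behind br_vlan_should_use() after this change; the helper names mirror br_private.h, but treat the exact body as illustrative:

```c
static bool br_vlan_should_use(const struct net_bridge_vlan *v)
{
	/* A master (bridge-device) vlan only counts as a real/used vlan
	 * when it is an actual bridge entry, not when it exists purely
	 * as a global context for per-port vlans. */
	if (br_vlan_is_master(v))
		return br_vlan_is_brentry(v);

	/* Per-port vlans are always used. */
	return true;
}
```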
-
By Nikolay Aleksandrov
There's only one user now and we can include the flag directly. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Nikolay Aleksandrov
Introduce br_vlan_(get|put)_master, which take a reference (creating the master vlan first if it didn't exist) and drop a reference, respectively. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Nikolay Aleksandrov
When I did the conversion to rhashtable I missed the required locking of one important user of the vlan list - br_get_link_af_size_filtered(), which is called via: br_ifinfo_notify() -> br_nlmsg_size() -> br_get_link_af_size_filtered(), and the notifications can be sent without holding rtnl. Before this conversion the function relied on rcu, and since we already use rcu to destroy the vlans, we can simply migrate the list to use the rcu helpers. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
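The conversion follows the usual RCU list pattern; here is a hedged sketch in which the struct and field names are illustrative stand-ins for the bridge vlan code:

```c
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/types.h>

struct vlan_entry {			/* illustrative stand-in for net_bridge_vlan */
	u16 vid;
	struct list_head vlist;
};

/* Writers (vlan add/del) still run under rtnl. */
static void vlan_add(struct list_head *vlan_list, struct vlan_entry *v)
{
	list_add_tail_rcu(&v->vlist, vlan_list);
}

/* Readers such as br_get_link_af_size_filtered() can run without rtnl,
 * so they walk the list under rcu_read_lock(). */
static int vlan_count(struct list_head *vlan_list)
{
	struct vlan_entry *v;
	int n = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(v, vlan_list, vlist)
		n++;
	rcu_read_unlock();
	return n;
}
```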
-
- 04 Oct 2015, 1 commit
-
-
By Eric Dumazet
Before letting request sockets be put in the TCP/DCCP regular ehash table, we need to either:
- add the SLAB_DESTROY_BY_RCU flag to their kmem_cache, or
- add an RCU grace period before freeing them.
Since we carefully respected the SLAB_DESTROY_BY_RCU protocol, like ESTABLISHED and TIMEWAIT sockets, use it here. req_prot_init() being only used by TCP and DCCP, I did not add a new slab_flags into their rsk_prot, but reuse prot->slab_flags. Since all reqsk_alloc() users correctly deal with a failure, add the __GFP_NOWARN flag to avoid traces under pressure. Fixes: 079096f1 ("tcp/dccp: install syn_recv requests into ehash table") Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
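A hedged sketch of the two pieces described above, simplified from the req_prot_init()/reqsk_alloc() idea; the wrapper function names are illustrative, while the request_sock_ops/proto fields follow the kernel structures:

```c
#include <linux/slab.h>
#include <net/sock.h>
#include <net/request_sock.h>

static int req_cache_init(struct request_sock_ops *rsk_prot,
			  const struct proto *prot)
{
	/* Reuse the parent protocol's slab_flags, which already include
	 * SLAB_DESTROY_BY_RCU for TCP/DCCP, so ehash lookups may safely
	 * race with frees. */
	rsk_prot->slab = kmem_cache_create(rsk_prot->slab_name,
					   rsk_prot->obj_size, 0,
					   prot->slab_flags, NULL);
	return rsk_prot->slab ? 0 : -ENOMEM;
}

static struct request_sock *req_alloc(const struct request_sock_ops *ops)
{
	/* All callers handle failure, so suppress OOM warnings. */
	return kmem_cache_alloc(ops->slab, GFP_ATOMIC | __GFP_NOWARN);
}
```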
-
- 03 Oct 2015, 32 commits
-
-
By David S. Miller (merge of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue)
Jeff Kirsher says: ==================== Intel Wired LAN Driver Updates 2015-09-30 This series contains updates to i40e and i40evf only. Vasily Averin provides a couple of rtnl lock/unlock fixes for both i40e and i40evf. Shannon provides several updates and fixes: he first fixes up a type clash in i40e_aq_rc_to_posix(), where the error codes are signed values, so we need to treat them as such; then fixes up a padding issue where an extra byte is added in i40e_aqc_get_cee_dcb_cfg_v1_resp to directly acknowledge the padding; updates i40e to keep debugfs register reads and writes from accessing outside of the io-remapped space; and adds support and a device id for another 20 GbE device. Jesse fixes the transmit hang workaround code for ARM, which was causing Tx hangs to still be reported occasionally when there really was no hang; fixes the receive dropped counter so it shows up in the netstat interface; refactors the interrupt enable function, since it always made the caller add the base_vector from the VSI struct, which is already passed to the function; and fixes kbuild warnings found by the 0day build infrastructure by adding a harmless cast to a dev_info(), as well as 32-bit build warnings found by sparse. Greg fixes a configuration error where, if a port VLAN is set for a VF before the VF driver is loaded, the port VLAN is ignored when the VF driver loads. Mitch fixes the use of the QOS field consistently in i40e_ndo_set_vf_port_vlan() and modifies the init timing of the driver to increase stability on load/unload and SR-IOV enable/disable cycles. Anjali updates i40e to not collect VEB stats if they are disabled in the hardware, for performance reasons. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By David S. Miller
Simon Horman says: ==================== ravb: Add support for r8a7795 SoC Please consider this series for net-next. It enhances the ravb driver to support the r8a7795 SoC. Changes: * Dropped RFC prefix * Details in the changelog of individual patches Base: * net-next/master Availability: To aid review of this in conjunction with other EtherAVB changes, the following branches are available in my renesas tree on kernel.org: * me/r8a7795-ravb-driver-v4: this series * me/r8a7795-ravb-pfc-v2: r8a7795 sh-pfc update for EthernetAVB * me/r8a7795-ravb-integration-v4: enable EthernetAVB on r8a7795 * me/r8a7795-ravb-driver-and-integration-v4.runtime: the above three branches with their runtime dependencies ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Kazuya Mizuguchi
This patch supports the r8a7795 SoC by:
- Using two interrupts:
  + one for the E-MAC,
  + one for everything else.
  Both can be handled by the existing common interrupt handler, which affords a simpler update to support the new SoC; in future some consideration may be given to implementing multiple interrupt handlers.
- Limiting the phy speed to 100Mbit/s for the new SoC; at this time it is not clear how this restriction may be lifted, but I hope it will be possible as more information comes to light.
Signed-off-by: Kazuya Mizuguchi <kazuya.mizuguchi.ks@renesas.com> [horms: reworked] Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: David S. Miller <davem@davemloft.net>
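A hedged sketch of how the dedicated E-MAC interrupt could be wired up to the existing common handler; the interrupt name string and helper name are assumptions for illustration only:

```c
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/platform_device.h>

/* Hypothetical probe-time snippet: request the extra E-MAC interrupt by
 * name and route it to the driver's existing common handler. */
static int ravb_request_emac_irq(struct platform_device *pdev,
				 struct net_device *ndev,
				 irq_handler_t common_handler)
{
	int irq = platform_get_irq_byname(pdev, "emac");	/* name assumed */

	if (irq < 0)
		return irq;
	return devm_request_irq(&pdev->dev, irq, common_handler,
				IRQF_SHARED, ndev->name, ndev);
}
```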
-
By Kazuya Mizuguchi
This patch updates the ravb binding to support the r8a7795 SoC by:
- Adding a compat string for the new hardware.
- Adding 25 named interrupts to the binding for the new SoC; older SoCs continue to use a single multiplexed interrupt.
The example is also updated to reflect the r8a7795, as this is the more complex case. Based on work by Kazuya Mizuguchi and others. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Kazuya Mizuguchi
This patch is in preparation for using this driver on arm64, where the implementation of __dma_alloc_coherent fails if a device parameter is not provided. Signed-off-by: Kazuya Mizuguchi <kazuya.mizuguchi.ks@renesas.com> Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: Masaru Nagai <masaru.nagai.vx@renesas.com> [horms: squashed into a single patch] Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: David S. Miller <davem@davemloft.net>
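A hedged sketch of what this prepares for: descriptor memory allocated against the real device rather than a NULL device, so the arm64 DMA code can resolve the proper coherent ops (the wrapper name is illustrative):

```c
#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

static void *ravb_alloc_desc_ring(struct platform_device *pdev, size_t size,
				  dma_addr_t *dma_handle)
{
	/* Passing &pdev->dev (instead of NULL) is what arm64's coherent
	 * allocator requires. */
	return dma_alloc_coherent(&pdev->dev, size, dma_handle, GFP_KERNEL);
}
```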
-
By Simon Horman
Add a helper to allow ethernet drivers to limit the speed of a phy (that they are attached to). This mainly involves factoring out the business end of of_set_phy_supported() and exporting a new symbol. This code seems to be open-coded in several places, in several different variants. It is envisaged that this will be used in situations where setting the "max-speed" property in DT is not appropriate, e.g. because the maximum speed is not a property of the phy hardware. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: David S. Miller <davem@davemloft.net>
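A hedged sketch of such a helper, along the lines of what gets factored out of of_set_phy_supported(); the body is simplified and the exported symbol in the series may differ in detail:

```c
#include <linux/phy.h>

int phy_set_max_speed(struct phy_device *phydev, u32 max_speed)
{
	/* Start from the default feature set, then add back everything
	 * up to and including the requested maximum speed. */
	phydev->supported &= PHY_DEFAULT_FEATURES;

	switch (max_speed) {
	default:
		return -ENOTSUPP;
	case SPEED_1000:
		phydev->supported |= PHY_1000BT_FEATURES;
		/* fall through */
	case SPEED_100:
		phydev->supported |= PHY_100BT_FEATURES;
		/* fall through */
	case SPEED_10:
		phydev->supported |= PHY_10BT_FEATURES;
	}

	phydev->advertising = phydev->supported;
	return 0;
}
```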
-
By David S. Miller
Daniel Borkmann says: ==================== BPF updates Some minor updates to {cls,act}_bpf to retrieve routing realms and to make skb->priority writable. Thanks! v1 -> v2: - Dropped the preclassify patch from the series for now, as the rest is pretty much independent of it - Rest unchanged, only rebased; already-posted Acked-by's kept ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Daniel Borkmann
{cls,act}_bpf can now set skb->priority from an eBPF program based on various criteria, so that for example classful qdiscs like multiq can update the skb's priority at enqueue time and further push it down into subsequent qdiscs. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
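A minimal sketch of a cls_bpf program using the now-writable field; the section name and return convention follow common tc/eBPF usage and are illustrative:

```c
#include <linux/bpf.h>

/* Rewrite skb->priority so a classful qdisc (e.g. multiq) enqueues the
 * packet into a specific band. */
__attribute__((section("classifier"), used))
int set_prio(struct __sk_buff *skb)
{
	skb->priority = 5;
	return -1;	/* use the classid configured from tc */
}
```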
-
By Daniel Borkmann
Using routing realms as part of the classifier is quite useful: a realm can be viewed as a tag for one or multiple routing entries (think of an analogy to the net_cls cgroup for processes), set by user-space routing daemons or via iproute2 as an indicator for traffic classifiers and later on processed in the eBPF program. Unlike actions, the classifier can inspect device flags and enable netif_keep_dst() if necessary; tc actions don't have that possibility, but in case people know what they are doing, it can be used from there as well (e.g. via devices that must keep dsts by design anyway). If a realm is set, the handler returns the non-zero realm. User space can set the full 32-bit realm for the dst. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
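A hedged sketch of a classifier making use of the realm; the helper id is the one introduced by this series, and the local declaration style mirrors the kernel BPF samples of the time:

```c
#include <linux/bpf.h>

static unsigned int (*bpf_get_route_realm)(void *skb) =
	(void *) BPF_FUNC_get_route_realm;

/* Return the dst's realm as the classid; 0 (no realm) means "no match". */
__attribute__((section("classifier"), used))
int classify_by_realm(struct __sk_buff *skb)
{
	return bpf_get_route_realm(skb);
}
```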
-
By Daniel Borkmann
As we need to add further flags to the bpf_prog structure, let's migrate both bools to a bitfield representation. The size of the base structure (excluding insns) remains unchanged at 40 bytes. Also add tags for kmemcheck, so that it doesn't throw false positives. Even in case gcc would generate suboptimal code, the field is not accessed in performance-critical paths. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
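A hedged sketch of the resulting layout at the top of struct bpf_prog; only the converted members are shown, and the kmemcheck annotations are the tags mentioned above:

```c
#include <linux/kmemcheck.h>
#include <linux/types.h>

struct bpf_prog {
	u16	pages;			/* number of allocated pages */
	kmemcheck_bitfield_begin(meta);
	u16	jited:1,		/* was: bool jited */
		gpl_compatible:1;	/* was: bool gpl_compatible */
	kmemcheck_bitfield_end(meta);
	/* remaining members (len, aux, insns, bpf_func, ...) unchanged */
};
```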
-
By David S. Miller
Jiri Pirko says: ==================== switchdev: bring back switchdev_obj The second version of the patch has been extended into a patchset. Basically this patchset brings back the object structure which disappeared with Vivien's recent patchset. It also does a bit of renaming in order to bring things into line, and puts the object id back into the object structure. Thanks to Scott and Vivien for review and suggestions. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jiri Pirko
Suggested-by: Scott Feldman <sfeldma@gmail.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Acked-by: Scott Feldman <sfeldma@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jiri Pirko
Replace "void *obj" with a generic structure and introduce a couple of helpers along the way. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Acked-by: Scott Feldman <sfeldma@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jiri Pirko
Bring the struct name in sync with the object id name. Suggested-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Acked-by: Scott Feldman <sfeldma@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jiri Pirko
Bring the struct name in sync with the object id name. Suggested-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Acked-by: Scott Feldman <sfeldma@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jiri Pirko
To be aligned with obj. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Acked-by: Scott Feldman <sfeldma@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jiri Pirko
Suggested-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Acked-by: Scott Feldman <sfeldma@gmail.com> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By David S. Miller
Eric Dumazet says: ==================== tcp/dccp: lockless listener TCP listener refactoring: this is becoming interesting! This patch series takes the steps to use the normal TCP/DCCP ehash table to store SYN_RECV requests, instead of the private per-listener hash table we had until now. SYNACK skbs are now attached to their syn_recv request socket, so that we no longer heavily modify the listener's sk_wmem_alloc. The listener lock is no longer held in the fast path, including in SYNCOOKIE mode. During my tests, my server was able to process 3,500,000 SYN packets per second on one listener and still had cpu cycles available. That is about 2 to 3 orders of magnitude more than what we had with older kernels. This effort started two years ago and I am pleased to reach expectations. We'll probably extend SO_REUSEPORT to add proper cpu/numa affinities, so that heavy-duty TCP servers can get proper siloing thanks to multi-queue NICs. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
Everything should now be ready to finally allow SYN packet processing without holding the listener lock. Tested: 3.5 Mpps SYNFLOOD; plenty of cpu cycles available. The next bottleneck is the refcount taken on the listener, which could be avoided if we removed the SLAB_DESTROY_BY_RCU strict semantic for listeners and used regular RCU.
13.18% [kernel] [k] __inet_lookup_listener
 9.61% [kernel] [k] tcp_conn_request
 8.16% [kernel] [k] sha_transform
 5.30% [kernel] [k] inet_reqsk_alloc
 4.22% [kernel] [k] sock_put
 3.74% [kernel] [k] tcp_make_synack
 2.88% [kernel] [k] ipt_do_table
 2.56% [kernel] [k] memcpy_erms
 2.53% [kernel] [k] sock_wfree
 2.40% [kernel] [k] tcp_v4_rcv
 2.08% [kernel] [k] fib_table_lookup
 1.84% [kernel] [k] tcp_openreq_init_rwin
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
If a listener with thousands of children in its accept queue is dismantled, it can take a while to close all of them. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
This control variable was set at the first listen(fd, backlog) call, but not updated if the application later tried to increase or decrease the backlog. It made sense at the time the listener had a non-resizeable hash table. Also, rounding to powers of two was not very friendly. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
It is enough to check the listener's sk_state; no need for an extra condition. max_qlen_log can be moved into struct request_sock_queue. We can remove syn_wait_lock and the alignment it enforced. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
If a listen backlog is very big (to avoid syncookies), then the listener's sk->sk_wmem_alloc is the main source of false sharing, as we need to touch it twice per SYNACK retransmit and TX completion. (One SYN packet takes the listener lock once, but up to 6 SYNACKs are generated.) By attaching the skb to the request socket, we remove this source of contention. Tested: listen(fd, 10485760); // single listener (no SO_REUSEPORT), 16 RX/TX queue NIC. Sustained a SYNFLOOD attack of ~320,000 SYN per second, sending ~1,400,000 SYNACK per second. Perf profiles now show the listener spinlock being the next bottleneck:
20.29% [kernel] [k] queued_spin_lock_slowpath
10.06% [kernel] [k] __inet_lookup_established
 5.12% [kernel] [k] reqsk_timer_handler
 3.22% [kernel] [k] get_next_timer_interrupt
 3.00% [kernel] [k] tcp_make_synack
 2.77% [kernel] [k] ipt_do_table
 2.70% [kernel] [k] run_timer_softirq
 2.50% [kernel] [k] ip_finish_output
 2.04% [kernel] [k] cascade
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
inet6_csk_search_req() and inet6_csk_reqsk_queue_hash_add() no longer exist. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
We no longer use hash_rnd, nr_table_entries and syn_table[]. For a listener with a backlog of 10 million sockets, this saves 80 MBytes of vmalloced memory. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
In this patch, we insert request sockets into the TCP/DCCP regular ehash table (where ESTABLISHED and TIMEWAIT sockets are) instead of using the per-listener hash table. ACK packets find SYN_RECV pseudo sockets without having to find and lock the listener. In nominal conditions, this halves pressure on the listener lock. Note that this will allow for SO_REUSEPORT refinements, so that we can select a listener using cpu/numa affinities instead of the prior 'consistent hash', since only SYN packets will apply this selection logic. We will shrink listen_sock in the following patch to ease code review. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Ying Cai <ycai@google.com> Cc: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
This is no longer used. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
When request sockets are no longer in a per-listener hash table but in the regular TCP ehash, we need to access the listener uid through req->rsk_listener. get_openreq6() also gets a const qualifier for its request socket argument. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
Once the listener is lockless, its sk_state can change anytime. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
We'll soon have to call tcp_v[46]_inbound_md5_hash() twice. Also add a const attribute to the socket, as it might be the unlocked listener for SYN packets. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
We plan to use generic functions to insert request sockets into the ehash table. sk_prot needs to be set (to retrieve sk_prot->h.hashinfo) and sk_node needs to be cleared. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
This fixes a typo: we want to store the NAPI id on the child socket. Presumably nobody really uses busy polling on short-lived flows. Fixes: 3d97379a ("tcp: move sk_mark_napi_id() at the right place") Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-