- 08 May 2018, 14 commits
-
-
Committed by Sudarsana Reddy Kalluru
This patch adds driver changes to support the Unified Fabric Port (UFP). This is a new partitioning mode wherein the MFW provides the set of parameters to be used by the device, such as traffic class, outer-VLAN tag value, priority type, etc. The driver receives this info via notifications from the MFW and configures the hardware accordingly. Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Sudarsana Reddy Kalluru
The patch adds support for a new multi-function mode wherein traffic classification is done based on 802.1ad tagging and the outer VLAN tag provided by the management firmware. Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Sudarsana Reddy Kalluru
The data member 'is_mf_default' is not used by the qed/qede drivers; remove it. Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Sudarsana Reddy Kalluru
The 'mf_mode' field indicates the multi-partitioning mode the device is configured for. This method doesn't scale very well: adding a new MF mode requires going over all the existing conditions and deciding whether each is needed for the new mode or not. The patch defines a set of bit-fields for the modes, derived from the mode info shared by the MFW, and all the configuration is made according to those. To add a new mode, there is a single place where we need to go and choose which bits apply and which don't. Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
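As a rough illustration of the bit-field approach described above (the names and bits here are hypothetical, not the actual qed definitions), a per-mode capability mask lets each call site test a single bit instead of enumerating modes:

    /* Hypothetical sketch; kernel context assumed (BIT() from <linux/bits.h>). */
    enum mf_cap_bits {
            MF_CAP_OVLAN_CLSS   = BIT(0),   /* outer-VLAN based classification */
            MF_CAP_LLH_MAC_CLSS = BIT(1),   /* MAC based classification */
            MF_CAP_UFP_SPECIFIC = BIT(2),   /* UFP-only behaviour */
    };

    struct dev_mf_info {
            unsigned long mf_bits;          /* derived once from the MFW mode info */
    };

    static bool dev_mf_supports(const struct dev_mf_info *info, unsigned long cap)
    {
            return !!(info->mf_bits & cap); /* single check per call site */
    }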
-
Committed by Sun Lianwen
The parameter of v9fs_get_trans_by_name(char *s) is named "s", not "name". Signed-off-by: Sun Lianwen <sunlw.fnst@cn.fujitsu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Sun Lianwen
The vlan_flags enum is defined in include/uapi/linux/if_vlan.h, not in include/linux/if_vlan.h. Signed-off-by: Sun Lianwen <sunlw.fnst@cn.fujitsu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller (merge of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next)
Minor conflict: a CHECK was placed into an if() statement in net-next, whilst a newline was added to that CHECK call in 'net'. Thanks to Daniel for the merge resolution. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Weilin Chang
Support setting the link speed of CN23XX-225 cards (which can do 25Gbps or 10Gbps) via ethtool_ops.set_link_ksettings. Also fix the function assigned to ethtool_ops.get_link_ksettings to use the new link_ksettings API completely (instead of partially, via ethtool_convert_legacy_u32_to_link_mode). Signed-off-by: Weilin Chang <weilin.chang@cavium.com> Acked-by: Raghu Vatsavayi <raghu.vatsavayi@cavium.com> Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
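A hedged sketch of the general shape of such a handler (not the liquidio code itself; my_fw_set_speed() is a hypothetical device-specific helper):

    static int my_set_link_ksettings(struct net_device *netdev,
                                     const struct ethtool_link_ksettings *ks)
    {
            u32 speed = ks->base.speed;

            /* the NIC in question only does 10G or 25G */
            if (speed != SPEED_10000 && speed != SPEED_25000)
                    return -EOPNOTSUPP;

            return my_fw_set_speed(netdev, speed);  /* hypothetical helper */
    }

The function is hooked up through the driver's struct ethtool_ops via the .set_link_ksettings member.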
-
Committed by David S. Miller
Sebastian Andrzej Siewior says: ==================== 3c59x patches and the removal of an unused function. The first patch removes an unused function. The goal of the remaining three patches is to get rid of the local_irq_save() usage in the driver, which benefits -RT. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Anna-Maria Gleixner
When vortex_boomerang_interrupt() is invoked from vortex_tx_timeout() or poll_vortex(), interrupts must be disabled. This detaches the interrupt-disable logic from locking, which requires patching for PREEMPT_RT. The advantage of avoiding spin_lock_irqsave() in the interrupt handler is minimal, but converting it removes all the extra code for callers which do not come from interrupt context. Cc: Steffen Klassert <klassert@mathematik.tu-chemnitz.de> Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
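A generic sketch of the locking pattern described above (not the 3c59x code): once the handler itself uses spin_lock_irqsave(), process-context callers such as a tx_timeout path no longer need local_irq_save() around the call.

    struct my_priv {
            spinlock_t lock;
    };

    static irqreturn_t my_interrupt(int irq, void *dev_id)
    {
            struct net_device *dev = dev_id;
            struct my_priv *vp = netdev_priv(dev);
            unsigned long flags;

            spin_lock_irqsave(&vp->lock, flags);
            /* ... service TX/RX events ... */
            spin_unlock_irqrestore(&vp->lock, flags);
            return IRQ_HANDLED;
    }

    static void my_tx_timeout(struct net_device *dev)
    {
            /* previously: local_irq_save(flags); call handler; restore */
            my_interrupt(dev->irq, dev);
    }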
-
Committed by Anna-Maria Gleixner
Locking is done in the same way in _vortex_interrupt() and _boomerang_interrupt(). To prevent duplication, move the locking into the calling vortex_boomerang_interrupt() function. No functional change. Cc: Steffen Klassert <klassert@mathematik.tu-chemnitz.de> Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Anna-Maria Gleixner
If vp->full_bus_master_tx is set, vp->full_bus_master_rx is set as well (see vortex_probe1()). Therefore the conditionals deciding whether the boomerang or vortex ISR is executed have the same result. Instead of repeating the explicit conditional execution of the boomerang/vortex ISR, move it into its own function. No functional change. Cc: Steffen Klassert <klassert@mathematik.tu-chemnitz.de> Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Anna-Maria Gleixner
Commit 67db3e4b ("tcp: no longer hold ehash lock while calling tcp_get_info()") removed the only users of u64_stats_update_end/begin_raw() without removing the functions from the header file. Remove the no-longer-used functions. Cc: Eric Dumazet <edumazet@google.com> Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
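For context, a minimal sketch of the remaining (non-_raw) u64_stats API from <linux/u64_stats_sync.h>, which is what drivers normally use to keep 64-bit counters consistent on 32-bit systems:

    struct my_stats {
            u64 packets;
            struct u64_stats_sync syncp;
    };

    static void my_stats_inc(struct my_stats *s)
    {
            u64_stats_update_begin(&s->syncp);
            s->packets++;
            u64_stats_update_end(&s->syncp);
    }

    static u64 my_stats_read(struct my_stats *s)
    {
            unsigned int start;
            u64 val;

            do {
                    start = u64_stats_fetch_begin(&s->syncp);
                    val = s->packets;
            } while (u64_stats_fetch_retry(&s->syncp, start));
            return val;
    }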
-
Committed by Anders Roxell
The generated files udpgso* shouldn't be part of TEST_PROGS; they are used by udpgso.sh and udpgso_bench.sh. They should be added to TEST_GEN_FILES to get installed without being added to the main run_kselftest.sh script. Fixes: 3a687bef ("selftests: udp gso benchmark") Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 May 2018, 9 commits
-
-
Committed by David S. Miller (merge of git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next)
Pablo Neira Ayuso says: ==================== Netfilter/IPVS updates for net-next. The following patchset contains Netfilter/IPVS updates for your net-next tree; the most relevant updates in this batch are:
1) Add Maglev support to IPVS. Moreover, store the latest server weight in IPVS since this is needed by Maglev; patches from Inju Song.
2) Preparation work to add iptables flowtable support; patches from Felix Fietkau.
3) Hand flows back to the conntrack slow path when a TCP RST/FIN packet is seen, via the new teardown state; also from Felix.
4) Add support for extended netlink error reporting for nf_tables.
5) Support for timeouts larger than 23 days in nf_tables; patch from Florian Westphal.
6) Always set an upper limit to dynamic sets; also from Florian.
7) Allow the number generator to make map lookups; from Laura Garcia.
8) Use hash_32() instead of open-coded hashing in IPVS; from Vicent Bernat.
9) Extend the ip6tables SRH match to support previous, next and last SID; from Ahmed Abdelsalam.
10) Move the Passive OS fingerprint code to nf_osf.c; from Fernando Fernandez.
11) Expose nf_conntrack_max through ctnetlink; from Florent Fourcot.
12) Several housekeeping patches for xt_NFLOG, x_tables and ebtables; from Taehee Yoo.
13) Unify the meta bridge with core nft_meta, then make nft_meta built-in. Make rt and exthdr built-in too; again from Florian.
14) Missing initialization of tbl->entries in IPVS; from Cong Wang.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Westphal
This must now use a 64-bit jiffies value, else we set a bogus timeout on 32-bit. Fixes: 8e1102d5 ("netfilter: nf_tables: support timeouts larger than 23 days") Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
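An illustration only (not the nf_tables code) of why the conversion must stay 64-bit: on a 32-bit kernel, unsigned long jiffies-based comparisons are only valid for roughly 2^31 future ticks, about 24.8 days at HZ=1000, so a larger user-supplied millisecond timeout wraps into a bogus value if narrowed.

    /* kernel context assumed: div_u64() from <linux/math64.h>, MSEC_PER_SEC, HZ */
    static u64 timeout_ms_to_jiffies64(u64 ms)
    {
            /* broken on 32-bit: msecs_to_jiffies(ms) would truncate ms first */
            return div_u64(ms * HZ, MSEC_PER_SEC);  /* keep the full 64-bit width */
    }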
-
Committed by Florent Fourcot
The IPCTNL_MSG_CT_GET_STATS netlink command allows monitoring the current number of conntrack entries. However, if one wants to compare it with the maximum (and detect exhaustion), the only solution currently is to read the sysctl value. This patch adds the nf_conntrack_max value to the netlink message and simplifies monitoring for applications built on the netlink API. Signed-off-by: Florent Fourcot <florent.fourcot@wifirst.fr> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
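A sketch of what exporting the maximum alongside the stats counters could look like; the attribute name below is illustrative and not necessarily the one added by the patch, while nla_put_be32() and nf_conntrack_max are real kernel symbols.

    static int ctnetlink_put_max(struct sk_buff *skb)
    {
            /* CTA_STATS_GLOBAL_MAX_ENTRIES: illustrative attribute name */
            if (nla_put_be32(skb, CTA_STATS_GLOBAL_MAX_ENTRIES,
                             htonl(nf_conntrack_max)))
                    return -EMSGSIZE;
            return 0;
    }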
-
Committed by Fernando Fernandez Mancera
Add nf_osf_ttl() and nf_osf_match() into nf_osf.c to prepare for nf_tables support. Signed-off-by: Fernando Fernandez Mancera <ffmancera@riseup.net> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
Committed by Phil Sutter
These macros allow conveniently declaring arrays which use NFT_{RT,CT}_* values as indexes. Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
Committed by Florian Westphal
Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
Committed by Ahmed Abdelsalam
The IPv6 Segment Routing Header (SRH) contains a list of SIDs to be crossed by an SR-encapsulated packet. Each SID is encoded as an IPv6 prefix. When a firewall receives an SR-encapsulated packet, it should be able to identify which node previously processed the packet (previous SID), which node is going to process the packet next (next SID), and which node is the last to process the packet (last SID), which represents the final destination of the packet in the case of inline SR mode. An example use case of these features could be a SID list that includes two firewalls. When the second firewall receives a packet, it can check whether the packet has been processed by the first firewall or not. Based on that check, it decides to apply all rules, apply just a subset of the rules, or skip all rules entirely and forward the packet to the next SID. This patch extends the SRH match to support matching the previous SID, next SID, and last SID. Signed-off-by: Ahmed Abdelsalam <amsalam20@gmail.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
Committed by Laura Garcia Liebana
The modulus in the hash function was limited to > 1, as initially there was no point in hashing over just one element. Nevertheless, there are certain cases, especially for load balancing, where this case needs to be addressed. This patch fixes the following error:
Error: Could not process rule: Numerical result out of range
add rule ip nftlb lb01 dnat to jhash ip saddr mod 1 map { 0: 192.168.0.10 }
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The solution is to force the hash to 0 when the modulus is 1. Signed-off-by: Laura Garcia Liebana <nevola@gmail.com>
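A sketch of the described fix (not the exact nft_hash code): when the modulus is 1, every input maps to bucket 0, so the hash can be skipped entirely.

    /* kernel context assumed: jhash() from <linux/jhash.h>, reciprocal_scale() */
    static u32 hash_pick_bucket(const void *data, u32 len, u32 seed,
                                u32 modulus, u32 offset)
    {
            if (modulus == 1)
                    return offset;          /* only one possible result */

            return reciprocal_scale(jhash(data, len, seed), modulus) + offset;
    }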
-
Committed by Laura Garcia Liebana
This patch includes a new attribute in the numgen structure to allow the lookup of an element based on the number generator as a key. For this purpose, different ops have been included to extend the current numgen inc functions. Currently this is only supported for numgen incremental operations, but it will be supported for random in a follow-up patch. Signed-off-by: Laura Garcia Liebana <nevola@gmail.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
- 05 May 2018, 14 commits
-
-
Committed by David Ahern
This slipped through the cracks in the follow-up set to the fib6_info flip. Rename rt6_next to fib6_next. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Daniel Borkmann
If bpf_map_precharge_memlock() did not fail, then we set err to zero. However, any subsequent failure from either alloc_percpu() or bpf_map_area_alloc() will return ERR_PTR(0), which in find_and_alloc_map() will cause a NULL pointer deref. In devmap we have the convention that we return -EINVAL on page count overflow, so keep the same logic here and just set err to -ENOMEM after a successful bpf_map_precharge_memlock(). Fixes: fbfc504a ("bpf: introduce new bpf AF_XDP map type BPF_MAP_TYPE_XSKMAP") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: Björn Töpel <bjorn.topel@intel.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
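A minimal illustration of the bug class fixed above (generic code, not the xskmap itself): if err stays 0, ERR_PTR(err) is NULL, and a caller that only checks IS_ERR() will happily dereference it.

    struct foo {
            int placeholder;
    };

    static struct foo *foo_alloc(void)
    {
            struct foo *f;
            int err = -ENOMEM;              /* set a real errno before any goto */

            f = kzalloc(sizeof(*f), GFP_KERNEL);
            if (!f)
                    goto err_out;

            return f;

    err_out:
            return ERR_PTR(err);            /* ERR_PTR(0) would be NULL */
    }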
-
Committed by Daniel Borkmann
Jakub Kicinski says: ==================== This series centres on NFP offload of bpf_event_output(). The first patch allows perf event arrays to be used by offloaded programs. The next patch makes the nfp driver keep track of such arrays to be able to filter FW events referring to maps. Perf event arrays are not device-bound; having the driver reimplement and manage the perf array seems brittle and unnecessary.
Patch 4 slightly moves the verifier step which replaces map fds with map pointers. This is useful for the nfp JIT since we can then easily replace host pointers with NFP table ids (patch 6). This allows us to lift the limitation on map helpers having to be used with the same map pointer on all paths. A second use of replacing fds with real host map pointers is that we can use the host map pointer as a key for FW events in perf event array offload.
Patch 5 adds perf event output offload support for the NFP. There are some differences between the offloaded and non-offloaded versions of bpf_event_output(). The FW messages which carry events may get dropped and reordered relatively easily. The return codes from the helper are also not guaranteed to match the host. Users are warned about some of those discrepancies with a one-time warning message to kernel logs.
bpftool gains an ability to dump perf ring events in a very simple format. This was very useful for testing and simple debugging; maybe it will be useful to others?
The last patch is a trivial comment fix. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Jakub Kicinski
Comments in the verifier refer to free_bpf_prog_info(), which seems to have never existed in tree. Replace it with free_used_maps(). Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Jakub Kicinski
Users of BPF sooner or later discover the perf_event_output() helpers and BPF_MAP_TYPE_PERF_EVENT_ARRAY. Dumping this array type is not possible; however, we can add simple reading of perf events. Create a new event_pipe subcommand for maps; this subcommand will only work with BPF_MAP_TYPE_PERF_EVENT_ARRAY maps. Parts of the code are taken from samples/bpf/trace_output_user.c. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Jakub Kicinski
Move the get_possible_cpus() function to shared code. No functional changes. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Jakub Kicinski
Instead of spelling [hex] BYTES everywhere, use DATA as the keyword for a generalized value. This will help us keep the messages concise when longer commands are added in the future. It will also be useful once BTF support comes; we will only have to change the definition of DATA. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Jakub Kicinski
The kernel will now replace map fds with actual pointers before calling the offload prepare. We can identify those pointers and replace them with NFP table IDs instead of loading the table ID in the code generated for the CALL instruction. This allows us to support the same CALL being used with different maps. Since we don't want to change the FW ABI, we still need to move the TID from R1 to a portion of R0 before the jump. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Jakub Kicinski
Add support for the perf_event_output family of helpers. The implementation on the NFP will not match the host code exactly. The state of the host map and rings is unknown to the device, hence the device can't return errors when rings are not installed. The device simply packs the data into a firmware notification message and sends it over to the host, returning success to the program. There is no notion of a host CPU on the device when packets are being processed. The device will only offload programs which set BPF_F_CURRENT_CPU. Still, if the map index doesn't match the CPU, no error will be returned (see above). Dropped/lost firmware notification messages will not cause a "lost events" event on the perf ring; they are only visible via device error counters. Firmware notification messages may also get reordered with respect to the packets which caused their generation. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
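For reference, a minimal host-side BPF program using this helper family (the normal, non-offloaded path; per the commit, only BPF_F_CURRENT_CPU is accepted when offloaded). Map and section names here are ours, not from the patch.

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
            __uint(key_size, sizeof(__u32));
            __uint(value_size, sizeof(__u32));
    } events SEC(".maps");

    SEC("xdp")
    int xdp_report(struct xdp_md *ctx)
    {
            __u32 len = ctx->data_end - ctx->data;

            /* push one record to the per-CPU perf ring */
            bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
                                  &len, sizeof(len));
            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";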
-
Committed by Jakub Kicinski
Offloads may find host map pointers more useful than map fds. Map pointers can be used to identify the map, while fds are only valid within the context of the loading process. Jump to skip_full_check on error in case verifier log overflow has to be handled (replace_map_fd_with_map_ptr() prints to the log; driver prep may do that too in the future). Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Jakub Kicinski
bpf_event_output() is useful for offloads to add events to BPF event rings, so export it. Note that the export is placed near the stub since tracing is optional and kernel/bpf/core.c is always going to be built. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Jakub Kicinski
For asynchronous events originating from the device, like perf event output, we need to be able to make sure that objects being referred to by the FW message are valid on the host. FW events can get queued and reordered. Even if we had a FW message "barrier", we should still protect ourselves from bogus FW output. Add a reverse-mapping hash table and record in it all raw map pointers the FW may refer to. Only record neutral maps, i.e. perf event arrays; these are currently the only objects FW can refer to. Use RCU protection on the read side; the update side is under RTNL. Since program vs map destruction order is slightly painful for offload, simply take an extra reference on all the recorded maps to make sure they don't disappear. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
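A rough sketch of the lookup shape described above; struct and function names are ours, not the nfp code. A FW event carries a raw host pointer, which is trusted only if it is found in the driver's table (nbuckets is assumed to be a power of two).

    struct neutral_map_entry {
            struct hlist_node node;
            struct bpf_map *map;            /* key: raw host map pointer */
    };

    static struct bpf_map *lookup_neutral_map(struct hlist_head *tbl,
                                              unsigned int nbuckets, u64 raw_ptr)
    {
            struct neutral_map_entry *e;
            struct bpf_map *found = NULL;
            unsigned int b = hash_64(raw_ptr, ilog2(nbuckets));

            rcu_read_lock();
            hlist_for_each_entry_rcu(e, &tbl[b], node) {
                    if ((u64)(unsigned long)e->map == raw_ptr) {
                            found = e->map; /* entry holds an extra map reference */
                            break;
                    }
            }
            rcu_read_unlock();
            return found;
    }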
-
Committed by Jakub Kicinski
BPF_MAP_TYPE_PERF_EVENT_ARRAY is special as far as offload goes. The map only holds glue to the perf ring, not actual data. Allow non-offloaded perf event arrays to be used in offloaded programs. The offload driver can extract the events from HW and put them in the map for user space to retrieve. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Daniel Borkmann
Jiong Wang says: ==================== This patch set cleans up some code logic related to managing subprog information. Part of the set is inspired by Edwin's code in his RFC "bpf/verifier: subprog/func_call simplifications", but with clearer separation so it could be easier to review.
- Patch 1 unifies the main prog and subprogs. All of them are registered in env->subprog_starts.
- After patch 1, it is clear that subprog_starts and subprog_stack_depth could be merged, as both of them now have main and subprogs unified. Patch 2 therefore does the merge; all subprog information is centred in bpf_subprog_info.
- Patch 3 goes further to introduce a new fake "exit" subprog which serves as an ending marker for the subprog list. We could then turn the following code snippet, used across the verifier:
    if (env->subprog_cnt == cur_subprog + 1)
        subprog_end = insn_cnt;
    else
        subprog_end = env->subprog_info[cur_subprog + 1].start;
into:
    subprog_end = env->subprog_info[cur_subprog + 1].start;
There is no functional change in this patch set. No bpf selftest (both non-jit and jit) regression was found after this set.
v2:
- fixed adjust_subprog_starts to also update the fake "exit" subprog start.
- for John's suggestion on renaming subprog to prog, I could work on a follow-up patch if it is recognized as worth the change.
==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org>
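A simplified sketch of the sentinel idea from the cover letter (not the verifier code itself): because entry i+1 always exists, the "last subprog" special case disappears.

    struct subprog_info_sketch {
            int start;                      /* first insn index of the subprog */
    };

    static void mark_subprog_ends(struct subprog_info_sketch *info,
                                  int subprog_cnt, int insn_cnt)
    {
            /* fake "exit" subprog starting one past the last instruction */
            info[subprog_cnt].start = insn_cnt;
    }

    static int subprog_end(const struct subprog_info_sketch *info, int cur)
    {
            return info[cur + 1].start;     /* valid even for the last real subprog */
    }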
-
- 04 May 2018, 3 commits
-
-
Committed by Eric Dumazet
While trying to support CHECKSUM_COMPLETE for IPv6 fragments, I had to experiment with various hacks in get_fixed_ipv6_csum(). I must admit I could not find how to implement this :/ However, get_fixed_ipv6_csum() does a lot of redundant operations, calling csum_partial() twice. The first csum_partial() computes the checksum of saddr and daddr, put in @csum_pseudo_hdr and undone later, in the second csum_partial() computed on the whole IPv6 header. Then nexthdr is added once, added a second time, then subtracted. payload_len is added once, then subtracted. Really all this can be reduced to two csum_add(), to add back the 6 bytes that were removed by mlx4 when providing hw_checksum in the RX descriptor. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Saeed Mahameed <saeedm@mellanox.com> Cc: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Saeed Mahameed <saeedm@mellanox.com> Acked-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
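A hedged sketch of the shape of the two csum_add() calls described above; exactly which fields are folded back depends on what the hardware strips, so treat this as illustrative rather than the mlx4 code.

    /* kernel context assumed: csum_add() from <net/checksum.h>, struct ipv6hdr */
    static __wsum fixed_ipv6_csum_sketch(__wsum hw_checksum,
                                         const struct ipv6hdr *ipv6h)
    {
            __wsum csum;

            /* fold the pseudo-header pieces back into the hardware sum */
            csum = csum_add(hw_checksum,
                            (__force __wsum)htons(ipv6h->nexthdr));
            csum = csum_add(csum,
                            (__force __wsum)ipv6h->payload_len);
            return csum;
    }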
-
Committed by David S. Miller
Ursula Braun says: ==================== net/smc: splice implementation. Stefan came up with an SMC implementation for splice(). The first three patches are preparatory patches; the 4th patch implements splice(). ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Stefan Raspl
Provide an implementation for splice() when we are using SMC. See smc_splice_read() for further details. Signed-off-by: Stefan Raspl <raspl@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-