- 10 Oct 2017, 40 commits
-
Submitted by Jayaprakash Shanmugam

- When the I2C bus is busy, PHY reads are delayed. The firmware returns EAGAIN in these cases, expecting the software to trigger the read again.
- This patch retries the operation for a maximum period of 500 ms.

Signed-off-by: Jayaprakash Shanmugam <jayaprakash.shanmugam@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
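A minimal userspace sketch of the retry pattern described above, assuming the busy condition is reported as -EAGAIN; the stub function and the 500 ms window constant are illustrative, not the driver's actual code.

```c
#include <errno.h>
#include <stdio.h>
#include <time.h>

#define RETRY_WINDOW_MS 500

static int phy_read_stub(void)
{
	return -EAGAIN;		/* stand-in for "I2C bus still busy" */
}

static long elapsed_ms(const struct timespec *start, const struct timespec *now)
{
	return (now->tv_sec - start->tv_sec) * 1000 +
	       (now->tv_nsec - start->tv_nsec) / 1000000;
}

int main(void)
{
	struct timespec start, now;
	int ret;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		ret = phy_read_stub();
		if (ret != -EAGAIN)
			break;	/* success or a different error: stop retrying */
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while (elapsed_ms(&start, &now) < RETRY_WINDOW_MS);

	printf("final result: %d\n", ret);
	return 0;
}
```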
-
Submitted by Lihong Yang

The find_first_bit function returns the size passed to it when no set bit is found. This patch adds a check for that case, since the return value is used as an index into an array and would otherwise cause an out-of-bounds access.

Detected by CoverityScan, CID 1295969 Out-of-bounds access

Signed-off-by: Lihong Yang <lihong.yang@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
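A userspace sketch of the added bounds check: find_first_bit() signals "not found" by returning the bitmap size, and that value must not be used as an array index. The helper below mimics that convention; names and sizes are illustrative, not the driver's.

```c
#include <stdio.h>

#define TABLE_SIZE 32

static unsigned int find_first_set(unsigned int bitmap, unsigned int size)
{
	for (unsigned int i = 0; i < size; i++)
		if (bitmap & (1u << i))
			return i;
	return size;	/* "not found" is signalled as the full size */
}

int main(void)
{
	int table[TABLE_SIZE] = { 0 };
	unsigned int bitmap = 0;	/* no bit set */
	unsigned int idx = find_first_set(bitmap, TABLE_SIZE);

	if (idx >= TABLE_SIZE) {	/* the check this patch adds */
		fprintf(stderr, "no bit set, skipping lookup\n");
		return 0;
	}
	printf("entry = %d\n", table[idx]);
	return 0;
}
```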
-
Submitted by Jacob Keller

The kernel recently gained support for enabling XPS and QoS at the same time, so we no longer need to worry about the number of traffic classes when enabling XPS.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller

Double the number of descriptors we bundle into one tail bump when receiving. Empirical testing has shown that this reduces CPU utilization without reducing throughput or packet rate. 32 appears to be the sweet spot: it is half the default polling budget, so we essentially go from 4 tail writes per poll down to 2. Increasing this to 64 has a negative impact, as it becomes possible that we do not bump the tail each time we get polled, which could cause a long delay between returning descriptors to the hardware.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
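A back-of-the-envelope sketch of the batching argument above: with a polling budget of 64 packets, a bundle size of 32 means two tail writes per poll instead of four. The constant name is illustrative.

```c
#include <stdio.h>

#define RX_BUFFER_WRITE 32	/* doubled from 16 by this change */

int main(void)
{
	unsigned int cleaned = 0, tail_writes = 0;

	for (int pkt = 0; pkt < 64; pkt++) {	/* one polling budget */
		cleaned++;
		if (cleaned >= RX_BUFFER_WRITE) {	/* bump tail in bundles */
			tail_writes++;
			cleaned = 0;
		}
	}
	printf("tail writes per poll: %u\n", tail_writes);	/* prints 2 */
	return 0;
}
```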
-
Submitted by Jacob Keller

Hardware only fetches descriptors in cachelines of 8, essentially ignoring the lower 3 bits of the tail register. It is therefore pointless to bump the tail by an unaligned amount, since the hardware will ignore some of the newly allocated descriptors. Ideally, tail writes should always be aligned to 8.

At first glance we already do this, since we allocate descriptors in batches that are a multiple of 8, so the value should always stay aligned. However, this ignores allocation failures: if a buffer allocation fails, the tail register becomes unaligned, and it stays unaligned until some later allocation happens to fail by exactly the amount needed to re-align it.

We can do better by rounding down the number of buffers we are about to allocate (cleaned_count) so that "next_to_clean + cleaned_count" is a multiple of 8. We do this by calculating how far off that value is and subtracting the difference from cleaned_count. This defers allocation of buffers that would be ignored by the hardware anyway, and re-aligns next_to_use and the tail value after an allocation failure. The calculation ensures that tail writes are always aligned the way the hardware expects and that we do not needlessly allocate buffers which will not be fetched immediately.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
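A userspace sketch of the rounding described above (assumed arithmetic, not the driver's exact code): shrink cleaned_count so that next_to_clean plus cleaned_count lands on a multiple of 8, deferring the unaligned remainder.

```c
#include <stdio.h>

static unsigned int align_cleaned_count(unsigned int next_to_clean,
					unsigned int cleaned_count)
{
	/* How far past an 8-descriptor boundary would this allocation end? */
	unsigned int overshoot = (next_to_clean + cleaned_count) & 0x7;

	return cleaned_count - overshoot;	/* defer the unaligned tail */
}

int main(void)
{
	/* An earlier allocation failure left the ring index unaligned. */
	printf("%u\n", align_cleaned_count(13, 32));	/* 27: 13 + 27 = 40 */
	printf("%u\n", align_cleaned_count(16, 32));	/* 32: already aligned */
	return 0;
}
```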
-
Submitted by Jacob Keller

The lrxqthresh value tells hardware to interrupt immediately when fewer than N*64 packets remain in the ring. Counter-intuitively, empirical testing has shown that decreasing this value from 2 to 1, and thus moving the immediate-interrupt threshold from fewer than 128 descriptors down to 64, causes a small increase in the maximum total packets per second we can receive. The increase occurs even when we are polling with interrupts masked, as the hardware must still handle interrupts internally even if we have disabled them in software. Also reduce the value for any VFs we allocate.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller

In the past we changed driver behavior not to clear the PBA when re-enabling interrupts. That change was motivated by the flawed belief that clearing the PBA would cause a lost interrupt if a receive interrupt occurred while interrupts were disabled. Empirical testing shows this is not the case, and the data sheet specifically says we should set the CLEARPBA bit when re-enabling interrupts in a polling setup.

This reverts commit 40d72a50 ("i40e/i40evf: don't lose interrupts").

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller

The ITR register expects to be programmed in units of 2 microseconds, so all of the driver's I40E_ITR_* constants are expressed in those 2-microsecond register units. Unfortunately, rx_itr_default is expected to be programmed in microseconds, so the driver effectively defaults to an ITR value of half the expected value (in terms of the minimum microseconds between interrupts). Fix this by calculating the default values with the ITR_REG_TO_USEC macro, which makes it explicit that we are converting from register units into microseconds.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
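A sketch of the unit mismatch described above: the register counts in 2-microsecond units, so a register-unit constant must be doubled before it is stored in a field that expects microseconds. The macro mirrors the ITR_REG_TO_USEC name from the commit text; the default value below is an illustrative number, not the driver's constant.

```c
#include <stdio.h>

#define ITR_REG_TO_USEC(reg)	((reg) * 2)	/* register units -> microseconds */

int main(void)
{
	unsigned int itr_default_reg = 31;	/* example value in register units */

	printf("stored as-is: %u usec (half the intended interval)\n",
	       itr_default_reg);
	printf("converted:    %u usec\n", ITR_REG_TO_USEC(itr_default_reg));
	return 0;
}
```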
-
Submitted by Alan Brady

Because MAC filters are added and deleted asynchronously, a filter is erroneously removed if it is removed and then re-added quickly. The sequence of events is:

- the filter is marked for removal
- the same filter is re-added before the watchdog that cleans up filters runs
- we skip re-adding the filter because it is already in the list
- the watchdog filter cleanup kicks in and the filter is removed

So when we re-added the same filter, it did not actually get added: it already existed in the list, but was marked for removal and had not yet been removed. This patch fixes the issue by making sure that when adding a filter that already exists in our list, we clear its removal mark.

Signed-off-by: Alan Brady <alan.brady@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
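A minimal userspace model of the fix: if a filter being added is already in the list but marked for removal, clear the removal mark instead of skipping it. Structures and sizes here are illustrative, not the driver's.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_FILTERS 8

struct mac_filter {
	unsigned char addr[6];
	bool in_use;
	bool remove;		/* marked for removal, watchdog not yet run */
};

static struct mac_filter filters[MAX_FILTERS];

static struct mac_filter *find_filter(const unsigned char *addr)
{
	for (int i = 0; i < MAX_FILTERS; i++)
		if (filters[i].in_use && !memcmp(filters[i].addr, addr, 6))
			return &filters[i];
	return NULL;
}

static void add_filter(const unsigned char *addr)
{
	struct mac_filter *f = find_filter(addr);

	if (f) {
		f->remove = false;	/* the fix: un-mark instead of skipping */
		return;
	}
	for (int i = 0; i < MAX_FILTERS; i++) {
		if (!filters[i].in_use) {
			memcpy(filters[i].addr, addr, 6);
			filters[i].in_use = true;
			filters[i].remove = false;
			return;
		}
	}
}

int main(void)
{
	unsigned char mac[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };

	add_filter(mac);
	find_filter(mac)->remove = true;	/* filter marked for removal */
	add_filter(mac);			/* re-added before the watchdog */
	printf("remove flag after re-add: %d\n", find_filter(mac)->remove);
	return 0;
}
```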
-
Submitted by Lihong Yang

This patch replaces hash_for_each with hash_for_each_safe when calling __i40e_del_filter. hash_for_each_safe is the right iterator to use when removing hash entries while iterating over a hash table; otherwise, incorrect values may be read from freed memory.

Detected by CoverityScan, CID 1402048 Read from pointer after free

Signed-off-by: Lihong Yang <lihong.yang@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
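A userspace illustration of why a "_safe" iterator is required when freeing entries during a walk: the next pointer is saved before the current node is freed, so freed memory is never dereferenced. The kernel's hash_for_each_safe() provides the same guarantee for hash buckets; this linked-list version only shows the pattern.

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
	int key;
	struct node *next;
};

static void delete_all(struct node **head)
{
	struct node *n = *head, *tmp;

	while (n) {
		tmp = n->next;	/* saved before free(), never read after it */
		free(n);
		n = tmp;
	}
	*head = NULL;
}

int main(void)
{
	struct node *head = NULL;

	for (int i = 0; i < 3; i++) {
		struct node *n = malloc(sizeof(*n));

		if (!n)
			return 1;
		n->key = i;
		n->next = head;
		head = n;
	}
	delete_all(&head);
	printf("all entries removed without touching freed memory\n");
	return 0;
}
```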
-
Submitted by Jacob Keller

Since we don't yet have more than 32 flags, use a u32 for both the hw_features and flags fields. Should we gain more flags in the future, we may need to convert to a u64 or split the flags into two fields. This was overlooked in commit 2781de2134c4 ("i40e/i40evf: organize and re-number feature flags"), where the feature flags field was not converted from u64 to u32.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by David S. Miller

Eric Dumazet says:

====================
ipv6: addrlabel: avoid dirtying ip6addrlbl_entry

The refcount on ip6addrlbl_entry is only used to make sure the entry does not disappear while ip6addrlbl_get() is allocating an skb. We can instead allocate the skb first and then use RCU, so that we no longer need to refcount these structures.
====================

Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet

After the previous patch ("ipv6: addrlabel: rework ip6addrlbl_get()") we can remove the refcount from struct ip6addrlbl_entry, since it is no longer elevated in ip6addrlbl_get().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet

If we allocate the skb before the lookup, we can use RCU without needing ip6addrlbl_hold(). This means the following patch can get rid of the refcounting.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller

Tariq Toukan says:

====================
Fix mlx4 static checker warnings

This patchset contains fixes for static checker warnings in the mlx4 Core and Eth drivers. Patch 1 fixes an actual bug discovered by the checker. Patches 2 and 3 fix the warnings without functional changes.

Series generated against net-next commit:
c49c777f qed: Delete redundant check on dcb_app priority
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tariq Toukan

In the TX data path, we intentionally do not byte-swap, as documented in the code and in the cited commit log. This fixes the sparse warning:

en_tx.c:720:23: warning: incorrect type in argument 1 (different base types)
en_tx.c:720:23:    expected unsigned int [unsigned] [usertype] <noident>
en_tx.c:720:23:    got restricted __be32 [usertype] doorbell_qpn

Fixes: 492f5add ("net/mlx4_en: Doorbell is byteswapped in Little Endian archs")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tariq Toukan

Fix the following sparse warning in the MLX4_GET() macro:

drivers/net/ethernet/mellanox/mlx4/fw.c:233:9: warning: cast to restricted __be64

Fixes: 17d5ceb6 ("net/mlx4_core: Fix unaligned accesses")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tariq Toukan

Take care of endianness before assigning to the params2 field.

Fixes: 53f33ae2 ("net/mlx4_core: Port aggregation upper layer interface")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
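A userspace sketch of the kind of fix described above: convert to big-endian before storing into a field the device reads in network byte order. Field and variable names are illustrative, not mlx4's.

```c
#include <endian.h>
#include <inttypes.h>
#include <stdio.h>

struct cmd_mailbox {
	uint64_t params2;	/* device expects this in big-endian */
};

int main(void)
{
	struct cmd_mailbox box;
	uint64_t port_mask = 0x3;	/* host-order value */

	box.params2 = htobe64(port_mask);	/* swap before the assignment */
	printf("stored: 0x%016" PRIx64 "\n", box.params2);
	return 0;
}
```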
-
Submitted by Mika Westerberg

The 0day kbuild robot reports the following crash:

BUG: unable to handle kernel NULL pointer dereference at 00000004
IP: tb_property_find+0xe/0x41
*pde = 00000000
Oops: 0000 [#1]
CPU: 0 PID: 1 Comm: swapper Not tainted 4.14.0-rc1-00741-ge69b6c02 #412
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
task: 89c80000 task.stack: 89c7c000
EIP: tb_property_find+0xe/0x41
EFLAGS: 00210246 CPU: 0
EAX: 00000000 EBX: 7a368f47 ECX: 00000044 EDX: 7a368f47
ESI: 8851d340 EDI: 7a368f47 EBP: 89c7df0c ESP: 89c7defc
DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
CR0: 80050033 CR2: 00000004 CR3: 027a2000 CR4: 00000690
Call Trace:
 tb_register_property_dir+0x49/0xb9
 ? cdc_mbim_driver_init+0x1b/0x1b
 tbnet_init+0x77/0x9f
 ? cdc_mbim_driver_init+0x1b/0x1b
 do_one_initcall+0x7e/0x145
 ? parse_args+0x10c/0x1b3
 ? kernel_init_freeable+0xbe/0x159
 kernel_init_freeable+0xd1/0x159
 ? rest_init+0x110/0x110
 kernel_init+0xd/0xd0
 ret_from_fork+0x19/0x30

The reason is that both the Thunderbolt bus and thunderbolt-net are built into the kernel image, and the latter is linked first because drivers/net comes before drivers/thunderbolt. Since both use module_init(), thunderbolt-net ends up calling Thunderbolt bus functions too early, triggering the above crash.

Fix this by moving Thunderbolt bus initialization earlier, to make sure all the data structures are ready when Thunderbolt service drivers are initialized. To be on the safe side, also add a check for a properly initialized xdomain_property_dir to tb_register_property_dir().

Reported-by: kernel test robot <fengguang.wu@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet

Per-cpu allocations are already zeroed; there is no need to clear them again.

Fixes: d52d3997 ("ipv6: Create percpu rt6_info")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Tejun Heo <tj@kernel.org>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller

Michal Kalderon says:

====================
qed: Add iWARP support for unaligned MPA packets

This patch series adds support for handling unaligned MPA packets (FPDUs split over more than one TCP packet). When the FW detects that a packet is unaligned, it forwards the packet to the driver via a dedicated light L2 connection. The driver stores this packet until the remainder of the packet is received. Once the driver reconstructs the full FPDU, it sends it back down to the FW via the ll2 connection. The driver also breaks down any packed PDUs into separate packets for the FW.

Patches 1-6 are slight modifications to ll2 to support additional requirements for the unaligned MPA ll2 client. Patch 7 opens the additional ll2 connection for iWARP. Patches 8-12 contain the algorithm for aligning packets.
====================

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Kalderon

We continue to maintain a maximum of three buffers per FPDU, to ensure there are enough buffers for additional unaligned MPA packets. To support this, if an FPDU is split over more than two TCP packets, we use an intermediate buffer to copy the data into the previous buffer, after which the data can be released. An intermediate buffer is needed because the initial buffer's partial packet could be located at the end of the packet, leaving no room for additional data. This is a corner case and will usually not happen.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Kalderon

There is a special case where an MPA header is split over two TCP packets; in this case we need to wait for the next packet to get the FPDU length. We use incomplete_bytes to mark such an FPDU as a "special" one whose length must be updated with the next packet.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Kalderon

When posting a packet on the ll2 tx ring, we can provide a cookie that is returned upon tx completion. This cookie is the ll2 iWARP buffer, which is then reposted to the rx ring. Part of the unaligned MPA flow is determining when a buffer can be reposted: each buffer must be sent only once as a cookie on the tx ring. In the packed-FPDU case, only the last packet is sent with the buffer, meaning we need to handle the case where the cookie is NULL on tx completion. In addition, when an FPDU is split over two buffers but there are no more FPDUs in the second buffer, two buffers need to be released on completion. To avoid changing the ll2 interface to carry two cookies, we introduce a piggy buffer pointer, relevant for iWARP only, that holds a pointer to a second buffer to be released during tx completion.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
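An illustrative sketch (names assumed, not qed's structures) of the piggy pointer idea: tx completion receives one cookie, that cookie may carry a second buffer to release along with it, and for packed FPDUs the cookie may legitimately be NULL.

```c
#include <stdio.h>

struct iwarp_ll2_buf {
	const char *name;
	struct iwarp_ll2_buf *piggy_buf;	/* optional second buffer */
};

static void repost_to_rx(struct iwarp_ll2_buf *buf)
{
	printf("reposting %s to the rx ring\n", buf->name);
}

static void tx_complete(struct iwarp_ll2_buf *cookie)
{
	if (!cookie)		/* packed FPDUs: only the last packet
				 * carries the buffer as its cookie */
		return;
	if (cookie->piggy_buf) {	/* FPDU split across two buffers */
		repost_to_rx(cookie->piggy_buf);
		cookie->piggy_buf = NULL;
	}
	repost_to_rx(cookie);
}

int main(void)
{
	struct iwarp_ll2_buf a = { .name = "buf A", .piggy_buf = NULL };
	struct iwarp_ll2_buf b = { .name = "buf B", .piggy_buf = &a };

	tx_complete(NULL);	/* non-final packet of a packed FPDU */
	tx_complete(&b);	/* final packet: releases B and its piggy A */
	return 0;
}
```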
-
Submitted by Michal Kalderon

The fpdu data structure is preallocated per connection. Each connection stores its current state: either nothing pending, or a partial FPDU that is waiting for the rest of the FPDU (incomplete bytes != 0). The same structure is also used for splitting a packet when there are packed FPDUs. The structure is initialized with all data required for sending the FPDU back to the FW. An FPDU always spans a maximum of 3 tx BDs: one for the header, one for the partial FPDU received, and one for the remaining (unaligned) packet. In the packed-FPDU case, two fragments are used: one for the header and one for the data. Corner cases are not handled in this patch for clarity and will be added in a separate patch.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
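A rough picture of the per-connection state described above; every field name here is an assumption for illustration, not qed's actual definition.

```c
#include <stdint.h>
#include <stdio.h>

struct fpdu_frag {
	uint64_t addr;		/* buffer address handed to the FW as one BD */
	uint16_t len;
};

struct unaligned_fpdu {
	struct fpdu_frag header;	/* BD 1: MPA header */
	struct fpdu_frag partial;	/* BD 2: partial FPDU already received */
	struct fpdu_frag remainder;	/* BD 3: unaligned remainder, if any */
	uint16_t fpdu_length;		/* total length parsed from the header */
	uint16_t incomplete_bytes;	/* 0 means nothing is pending */
};

int main(void)
{
	struct unaligned_fpdu fpdu = { .incomplete_bytes = 0 };

	printf("pending: %s\n", fpdu.incomplete_bytes ? "yes" : "no");
	return 0;
}
```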
-
Submitted by Michal Kalderon

The mpa buff is a descriptor for iWARP ll2 buffers that contains additional information required for aligning FPDUs. In some cases an additional packet will arrive that completes the alignment of an FPDU, but we will not be able to post the FPDU due to insufficient space on the tx ring; in this case we cannot lose the data and must store it for later. Processing is therefore done in two places: during rx completion, where we initialize an mpa buffer descriptor and add it to the pending list, and during tx completion, since freeing an entry in the tx chain lets us process any pending mpa packets. The mpa buff descriptors are preallocated, since we must ensure we never reach a state where we cannot store an incoming unaligned packet; all packets received on the ll2 MUST be processed by the driver at some stage. Since they are preallocated, we hold a free list.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Kalderon

This patch adds only the establishment and termination of the ll2 connection that handles unaligned MPA packets.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Kalderon

The iWARP unaligned MPA flow requires a slowpath event for flushing an MPA connection that entered an unaligned state. The flush ramrod is received on the ll2 queue, and a pre-registered callback function is called to handle the flush event.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Kalderon

When a packet is sent back to the iWARP FW via the tx ll2 connection, the FW needs to know the source of the packet: whether it is OOO or unaligned-MPA related. Since OOO is implemented entirely inside the ll2 code (and shared with iSCSI), those packets are marked as IN_ORDER inside the ll2 code. For unaligned MPA, the value is determined in the iWARP code and sent in the pkt->vlan field.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Kalderon

enable_ip_cksum, enable_l4_cksum and calc_ip_len were added in the commit stated below but never passed through to the FW. This was fine until now, as they were unused, but they are required for the iWARP unaligned flow.

Fixes: 7c7973b2 ("qed: LL2 to use packed information for tx")
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Kalderon

The hardware supports sending a packet on the ll2 and dropping it, but this option was not used until now and therefore not exposed. The iWARP unaligned MPA flow requires this functionality for flushing the tx queue.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Kalderon

When more than one ll2 queue that is not an OOO queue is opened, the ll2 code does not have enough information to determine whether a queue is the main one or not. A new field is therefore added to the acquire input data so the caller can specify whether the queue being acquired is the main queue or a secondary queue.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Kalderon

iWARP uses 3 ll2 connections, and the maximum number of BDs is known at connection setup. This patch turns the static array in the ll2_tx_packet descriptor into a flexible array, significantly reducing memory size. In addition, some redundant fields in ll2_tx_packet were removed, which also contributed to decreasing the descriptor size.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
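A sketch of the data-structure change: a fixed worst-case fragment array is replaced with a C99 flexible array member sized at allocation time, so a connection that needs at most 3 BDs only pays for 3 slots. Structure names are illustrative, not qed's.

```c
#include <stdio.h>
#include <stdlib.h>

struct bd_frag {
	unsigned long addr;
	unsigned int len;
};

struct ll2_tx_packet {
	unsigned int nr_frags;
	struct bd_frag frags[];		/* flexible array member */
};

static struct ll2_tx_packet *alloc_tx_packet(unsigned int max_bds)
{
	struct ll2_tx_packet *p;

	p = calloc(1, sizeof(*p) + max_bds * sizeof(p->frags[0]));
	if (p)
		p->nr_frags = max_bds;
	return p;
}

int main(void)
{
	struct ll2_tx_packet *pkt = alloc_tx_packet(3);	/* iWARP worst case */

	if (!pkt)
		return 1;
	printf("descriptor bytes: %zu\n",
	       sizeof(*pkt) + pkt->nr_frags * sizeof(pkt->frags[0]));
	free(pkt);
	return 0;
}
```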
-
Submitted by David S. Miller

Jiri Pirko says:

====================
mlxsw: Offload bridge device mrouter

Yotam says:

Similarly to a bridged port, the bridge device itself can be configured by the user to be an mrouter port, in which case all multicast traffic should be forwarded to it. Make the mlxsw Spectrum driver offload these directives to the Spectrum hardware.

Patches 1 and 2 add a new switchdev notification for the bridge device mrouter port status and make the bridge module send it. Patches 3-5 change the mlxsw Spectrum driver to handle these notifications by adding the Spectrum router port to the bridge MDB entries.

v1->v2:
- patch1:
  - Don't add the MDB_RTR_TYPE_TEMP state and use timer_pending to distinguish between learning-on and learning-off states
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yotam Gigi

Support the SWITCHDEV_ATTR_ID_BRIDGE_MROUTER port attribute switchdev notification. To do that, add an mrouter flag to struct mlxsw_sp_bridge_device, indicating whether the bridge device was set to be an mrouter port. This field is set when:
- A new bridge is created, where the value is taken from the kernel bridge value.
- A switchdev SWITCHDEV_ATTR_ID_BRIDGE_MROUTER notification is sent.

In addition, change the bridge MID entries to include the router port when the bridge device is configured as an mrouter port. The MID entries are updated in the following cases:
- When a new MID entry is created, set the router port according to the bridge mrouter state.
- When a SWITCHDEV_ATTR_ID_BRIDGE_MROUTER notification is sent, update all of the bridge's MID entries.

This is aligned with the case where a bridge slave is configured to be an mrouter port.

Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Reviewed-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yotam Gigi

In Spectrum, MDB entries point to MID entries that indicate which ports a packet should be forwarded to. Add support for creating MID entries that forward the packet to the Spectrum router port. This will later be used to handle the bridge mrouter port switchdev notifications.

Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Reviewed-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yotam Gigi

In Spectrum hardware, the router port is a virtual port that is the gateway to the routing mechanism; for a packet to be L3 forwarded, it must first be L2 forwarded to the router port inside the hardware. Further patches in this patchset introduce support for a bridge device used as an mrouter port, in which case the router port index is needed in order to update the MDB entries to include the router port. Thus, export the mlxsw_sp_router_port function, which returns the index of the Spectrum router port.

Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Reviewed-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yotam Gigi

Add an access function that, given a bridge netdevice, returns whether the bridge device is currently an mrouter or not. The function uses the existing br_multicast_is_router function for the check. It is needed to allow ports that join an already existing bridge to learn the current mrouter state of the bridge device. Together with the bridge device mrouter port switchdev notifications, this makes it possible to fully offload the semantics of the bridge device mcast router state. Because the bridge multicast router status can change in the packet RX path, take the multicast_router bridge spinlock to protect the read.

Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Reviewed-by: Nogah Frankel <nogahf@mellanox.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
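A userspace sketch of the accessor pattern: read state under the lock that protects it, because that state can change from another context (in the bridge case, the multicast router status can flip in the packet RX path, hence the multicast_router spinlock). Names are illustrative; build with -pthread.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct bridge {
	pthread_mutex_t multicast_lock;
	bool multicast_router;
};

static bool bridge_is_mrouter(struct bridge *br)
{
	bool is_router;

	pthread_mutex_lock(&br->multicast_lock);
	is_router = br->multicast_router;	/* consistent snapshot */
	pthread_mutex_unlock(&br->multicast_lock);
	return is_router;
}

int main(void)
{
	struct bridge br = {
		.multicast_lock = PTHREAD_MUTEX_INITIALIZER,
		.multicast_router = true,
	};

	printf("bridge is mrouter: %d\n", bridge_is_mrouter(&br));
	return 0;
}
```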
-
Submitted by Yotam Gigi

Add the SWITCHDEV_ATTR_ID_BRIDGE_MROUTER switchdev notification type, used to indicate whether the bridge is or isn't an mrouter. Notify when the bridge changes its state, similarly to the already existing bridged-port mrouter notifications. The notification uses the switchdev_attr.u.mrouter boolean flag to indicate the current bridge mrouter status; it only indicates whether the bridge is currently used as an mrouter, not the exact mrouter state of the bridge (learning, permanent, etc.).

Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller

Jakub Kicinski says:

====================
nfp: bpf ABIv2 and multi port

This series migrates our eBPF offload from the old PoC firmware to a redesigned, faster and more feature-rich FW. Marking support is dropped for now. We have to teach the JIT how to encode local memory accesses (one of the NFP memory types). There is also code to populate the ECC of instructions (the PoC had ECC protection on the instruction store disabled), a minor ld_field fix, and all 64-bit shifts can now be encoded.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-