- 02 July 2021, 2 commits
-
-
Submitted by Geetha sowjanya
This patch extends the lmtst_tbl_setup_req mbox to support run-time LMTST configuration. RVU PF/VF and DPDK/ODP allocate a LMT region and create a translation entry for a device via VFIO IOCTLs. This IOVA is shared with the AF through the above mbox. The AF then uses the RVU_SMMU translation widget to get the PA for the IOVA and updates the LMT table entry for that device. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Harman Kalra
Introduce a new mailbox to support updating LMT entries and a common LMT base address scheme, i.e. multiple pcifuncs can share an LMT region to reduce L1 cache pressure for the application. The parameters passed to the mailbox include the primary pcifunc value whose LMT region will be shared by the other, secondary pcifuncs; the secondary pcifunc is the one calling the mailbox. For example, by default each pcifunc has its own LMT base address:
  PCIFUNC1  LMT_BASE_ADDR A
  PCIFUNC2  LMT_BASE_ADDR B
  PCIFUNC3  LMT_BASE_ADDR C
  PCIFUNC4  LMT_BASE_ADDR D
The application chooses PCIFUNC1 as the base/primary pcifunc, and as the other (secondary) pcifuncs get probed this mailbox is called and the LMTST table is updated as:
  PCIFUNC1  LMT_BASE_ADDR A
  PCIFUNC2  LMT_BASE_ADDR A
  PCIFUNC3  LMT_BASE_ADDR A
  PCIFUNC4  LMT_BASE_ADDR A
On FLR the LMTST map table is reset to the default LMT base addresses for all secondary pcifuncs. Signed-off-by: Harman Kalra <hkalra@marvell.com> Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
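For illustration, a minimal sketch (hypothetical structure and function names, not the Marvell AF driver's actual code) of what sharing and restoring a per-pcifunc LMT base address can look like:

	/* Hypothetical sketch: per-pcifunc LMT base addresses, with a share
	 * operation used by the new mailbox and a reset used on FLR. */
	#include <linux/types.h>
	#include <linux/errno.h>

	#define MAX_PCIFUNC 64

	struct lmt_map {
		u64 base[MAX_PCIFUNC];          /* currently programmed LMT base */
		u64 default_base[MAX_PCIFUNC];  /* per-pcifunc defaults, restored on FLR */
	};

	/* Secondary pcifunc asks to reuse the primary pcifunc's LMT region. */
	static int lmt_share_base(struct lmt_map *map, u16 primary, u16 secondary)
	{
		if (primary >= MAX_PCIFUNC || secondary >= MAX_PCIFUNC)
			return -EINVAL;
		map->base[secondary] = map->base[primary];
		return 0;
	}

	/* On FLR, fall back to the secondary pcifunc's own default base. */
	static void lmt_reset_base(struct lmt_map *map, u16 pcifunc)
	{
		map->base[pcifunc] = map->default_base[pcifunc];
	}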
-
- 30 June 2021, 17 commits
-
-
Submitted by Nathan Chancellor
fw_cfg_showrev() is called by an indirect call in kobj_attr_show(), which violates clang's CFI checking because fw_cfg_showrev()'s second parameter is 'struct attribute', whereas the ->show() member of 'struct kobj_attribute' expects the second parameter to be of type 'struct kobj_attribute'.
  $ cat /sys/firmware/qemu_fw_cfg/rev
  3
  $ dmesg | grep "CFI failure"
  [   26.016832] CFI failure (target: fw_cfg_showrev+0x0/0x8):
Fix this by converting fw_cfg_rev_attr to 'struct kobj_attribute', where this would have been caught automatically by the incompatible-pointer-types compiler warning. Update fw_cfg_showrev() accordingly. Fixes: 75f3e8e4 ("firmware: introduce sysfs driver for QEMU's fw_cfg device") Link: https://github.com/ClangBuiltLinux/linux/issues/1299 Signed-off-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Kees Cook <keescook@chromium.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210211194258.4137998-1-nathan@kernel.org
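A sketch of the conversion pattern described above (close to, but not necessarily identical with, the actual driver change):

	/* The attribute is now a struct kobj_attribute, so the ->show()
	 * prototype matches what kobj_attr_show() calls through and clang's
	 * CFI check no longer fires on the indirect call. */
	static u32 fw_cfg_rev;	/* populated elsewhere by the driver */

	static ssize_t fw_cfg_showrev(struct kobject *kobj,
				      struct kobj_attribute *a, char *buf)
	{
		return sprintf(buf, "%u\n", fw_cfg_rev);
	}

	static const struct kobj_attribute fw_cfg_rev_attr = {
		.attr = { .name = "rev", .mode = S_IRUSR },
		.show = fw_cfg_showrev,
	};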
-
Submitted by Dan Carpenter
The rx->dqo.buf_states[] array is allocated in gve_rx_alloc_ring_dqo() and it has rx->dqo.num_buf_states elements, so this '>' needs to be '>=' to prevent an out-of-bounds access. Fixes: 9b8dd5e5 ("gve: DQO: Add RX path") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
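The off-by-one pattern being fixed, reduced to a self-contained sketch (types and names are illustrative, not the gve driver's):

	/* An index into an array of num_buf_states elements must be rejected
	 * when idx >= num_buf_states, not only when idx > num_buf_states. */
	struct ring {
		int num_buf_states;
		int *buf_states;
	};

	static int *get_buf_state(struct ring *rx, int buffer_id)
	{
		if (buffer_id < 0 || buffer_id >= rx->num_buf_states) /* was '>' */
			return NULL;
		return &rx->buf_states[buffer_id];
	}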
-
Submitted by Voon Weifeng
During suspend, set the Intel mgbe to the D3hot state to save power. Signed-off-by: Voon Weifeng <weifeng.voon@intel.com> Signed-off-by: Ling Pei Lee <pei.lee.ling@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ling Pei Lee
Enable PHY Wake-on-LAN on the Intel EHL platform. The PHY Wake-on-LAN option is used because the Intel EHL platform is designed for PHY Wake-on-LAN rather than MAC Wake-on-LAN. Signed-off-by: Ling Pei Lee <pei.lee.ling@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ling Pei Lee
The current stmmac driver WOL implementation enables MAC WOL if the MAC HW PMT feature is on; otherwise, the driver checks for PHY WOL support. There is another case where MAC HW PMT is enabled but the platform still wants the PHY WOL option, e.g. Intel platforms are designed for PHY WOL rather than MAC WOL even though the HW MAC PMT features are enabled. Introduce use_phy_wol platform data to select PHY WOL instead of depending on the HW PMT features. Setting use_phy_wol disables plat->pmt, which is currently used to determine whether the system wakes up by MAC WOL or PHY WOL. Signed-off-by: Ling Pei Lee <pei.lee.ling@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jonathan Lemon
When creating a PTP device, the configuration block allows creation of an associated PPS device. However, there isn't any way to associate the two devices after creation. Set the PPS cookie, so pps_lookup_dev(ptp) performs correctly. Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Alexander Aring
This patch introduces a function wrapper to call the sk_error_report callback. This prepares for adding additional handling whenever sk_error_report is called, for example to trace socket errors. Signed-off-by: Alexander Aring <aahringo@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
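The wrapper is conceptually as simple as the sketch below (assumed shape; the point is that future handling such as a tracepoint only needs to be added in one place):

	#include <net/sock.h>

	void sk_error_report(struct sock *sk)
	{
		sk->sk_error_report(sk);
		/* additional handling, e.g. a tracepoint, can be added here */
	}
	EXPORT_SYMBOL(sk_error_report);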
-
Submitted by Mel Gorman
NUMA statistics are maintained on the zone level for hits, misses, foreign, etc., but nothing relies on them being perfectly accurate for functional correctness. The counters are used by userspace to get a general overview of a workload's NUMA behaviour, but the page allocator incurs a high cost to maintain perfect accuracy similar to what is required for a vmstat counter like NR_FREE_PAGES. There is even a sysctl vm.numa_stat to allow userspace to turn off the collection of NUMA statistics like NUMA_HIT. This patch converts NUMA_HIT and friends to be NUMA events with similar accuracy to VM events. There is a possibility that slight errors will be introduced, but the overall trend as seen by userspace will be similar. The counters are no longer updated from vmstat_refresh context, as it is unnecessary overhead for counters that may never be read by userspace. Note that counters could be maintained at the node level to save space, but it would have a user-visible impact due to /proc/zoneinfo. [lkp@intel.com: Fix misplaced closing brace for !CONFIG_NUMA] Link: https://lkml.kernel.org/r/20210512095458.30632-4-mgorman@techsingularity.net Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jesper Dangaard Brouer <brouer@redhat.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Liam Howlett
Use vma_lookup() to find the VMA at a specific address. As vma_lookup() will return NULL if the address is not within any VMA, the start address no longer needs to be validated. Link: https://lkml.kernel.org/r/20210521174745.2219620-16-Liam.Howlett@Oracle.com Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
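For reference, vma_lookup() is a thin wrapper around find_vma(); roughly (a sketch based on the description above; the exact in-tree definition may differ):

	/* find_vma() returns the first VMA ending above addr, which may start
	 * above addr; vma_lookup() only returns a VMA that contains addr. */
	static inline struct vm_area_struct *vma_lookup(struct mm_struct *mm,
							unsigned long addr)
	{
		struct vm_area_struct *vma = find_vma(mm, addr);

		if (vma && addr < vma->vm_start)
			vma = NULL;
		return vma;
	}

This is why the callers converted in this and the following patches can drop the explicit start-address validation.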
-
Submitted by Liam Howlett
vma_lookup() finds the vma of a specific address with a cleaner interface and is more readable. Link: https://lkml.kernel.org/r/20210521174745.2219620-15-Liam.Howlett@Oracle.com Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Liam Howlett
Use vma_lookup() to find the VMA at a specific address. As vma_lookup() will return NULL if the address is not within any VMA, the start address no longer needs to be validated. Link: https://lkml.kernel.org/r/20210521174745.2219620-14-Liam.Howlett@Oracle.com Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Davidlohr Bueso <dbueso@suse.de> Acked-by: Alex Deucher <alexander.deucher@amd.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Liam Howlett
vma_lookup() finds the vma of a specific address with a cleaner interface and is more readable. Link: https://lkml.kernel.org/r/20210521174745.2219620-12-Liam.Howlett@Oracle.com Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Liam Howlett
vma_lookup() will look up the vma at a specific address. find_vma() will start the search for a specific address and continue upwards. This fixes an issue with the selftest as the returned vma may not be the newly created vma, but simply the vma at a higher address. Link: https://lkml.kernel.org/r/20210521174745.2219620-3-Liam.Howlett@Oracle.com Fixes: 6fedafac ("drm/i915/selftests: Wrap vm_mmap() around GEM objects") Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Dan Schatzberg
The current code only associates with the existing blkcg when aio is used to access the backing file. This patch covers all types of i/o to the backing file and also associates the memcg, so if the backing file is on tmpfs, memory is charged appropriately. This patch also exports cgroup_get_e_css and int_active_memcg so they can be used by the loop module. Link: https://lkml.kernel.org/r/20210610173944.1203706-4-schatzberg.dan@gmail.com Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Jens Axboe <axboe@kernel.dk> Cc: Chris Down <chris@chrisdown.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Ming Lei <ming.lei@redhat.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Dan Schatzberg
Patch series "Charge loop device i/o to issuing cgroup", v14. The loop device runs all i/o to the backing file on a separate kworker thread which results in all i/o being charged to the root cgroup. This allows a loop device to be used to trivially bypass resource limits and other policy. This patch series fixes this gap in accounting. A simple script to demonstrate this behavior on cgroupv2 machine: ''' #!/bin/bash set -e CGROUP=/sys/fs/cgroup/test.slice LOOP_DEV=/dev/loop0 if [[ ! -d $CGROUP ]] then sudo mkdir $CGROUP fi grep oom_kill $CGROUP/memory.events # Set a memory limit, write more than that limit to tmpfs -> OOM kill sudo unshare -m bash -c " echo \$\$ > $CGROUP/cgroup.procs; echo 0 > $CGROUP/memory.swap.max; echo 64M > $CGROUP/memory.max; mount -t tmpfs -o size=512m tmpfs /tmp; dd if=/dev/zero of=/tmp/file bs=1M count=256" || true grep oom_kill $CGROUP/memory.events # Set a memory limit, write more than that limit through loopback # device -> no OOM kill sudo unshare -m bash -c " echo \$\$ > $CGROUP/cgroup.procs; echo 0 > $CGROUP/memory.swap.max; echo 64M > $CGROUP/memory.max; mount -t tmpfs -o size=512m tmpfs /tmp; truncate -s 512m /tmp/backing_file losetup $LOOP_DEV /tmp/backing_file dd if=/dev/zero of=$LOOP_DEV bs=1M count=256; losetup -D $LOOP_DEV" || true grep oom_kill $CGROUP/memory.events ''' Naively charging cgroups could result in priority inversions through the single kworker thread in the case where multiple cgroups are reading/writing to the same loop device. This patch series does some minor modification to the loop driver so that each cgroup can make forward progress independently to avoid this inversion. With this patch series applied, the above script triggers OOM kills when writing through the loop device as expected. This patch (of 3): Existing uses of loop device may have multiple cgroups reading/writing to the same device. Simply charging resources for I/O to the backing file could result in priority inversion where one cgroup gets synchronously blocked, holding up all other I/O to the loop device. In order to avoid this priority inversion, we use a single workqueue where each work item is a "struct loop_worker" which contains a queue of struct loop_cmds to issue. The loop device maintains a tree mapping blk css_id -> loop_worker. This allows each cgroup to independently make forward progress issuing I/O to the backing file. There is also a single queue for I/O associated with the rootcg which can be used in cases of extreme memory shortage where we cannot allocate a loop_worker. The locking for the tree and queues is fairly heavy handed - we acquire a per-loop-device spinlock any time either is accessed. The existing implementation serializes all I/O through a single thread anyways, so I don't believe this is any worse. [colin.king@canonical.com: fixes] Link: https://lkml.kernel.org/r/20210610173944.1203706-1-schatzberg.dan@gmail.com Link: https://lkml.kernel.org/r/20210610173944.1203706-2-schatzberg.dan@gmail.comSigned-off-by: NDan Schatzberg <schatzberg.dan@gmail.com> Reviewed-by: NMing Lei <ming.lei@redhat.com> Acked-by: NJens Axboe <axboe@kernel.dk> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Chris Down <chris@chrisdown.name> Cc: Shakeel Butt <shakeelb@google.com> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Matthew Wilcox (Oracle)
Use __set_page_dirty_no_writeback() instead. This will set the dirty bit on the page, which will be used to avoid calling set_page_dirty() in the future. It will have no effect on actually writing the page back, as the pages are not on any LRU lists. [akpm@linux-foundation.org: export __set_page_dirty_no_writeback() to modules] Link: https://lkml.kernel.org/r/20210615162342.1669332-6-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christoph Hellwig <hch@lst.de> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jan Kara <jack@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Gavin Shan
Page reporting won't be triggered if the freed page can't come up with a free area whose size is equal to or bigger than the threshold (page reporting order). The default page reporting order, equal to @pageblock_order, is too huge on some architectures to trigger page reporting. One example is ARM64 when the 64KB base page size is used:
  PAGE_SIZE:        64KB
  pageblock_order:  13 (512MB)
  MAX_ORDER:        14
This sets the page reporting order to 5 (2MB) for this specific case so that page reporting can be triggered. Link: https://lkml.kernel.org/r/20210625014710.42954-5-gshan@redhat.com Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: David Hildenbrand <david@redhat.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
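For clarity, the size arithmetic implied by the numbers above:

	2^13 * 64KB = 512MB   (default threshold from pageblock_order 13)
	2^5  * 64KB =   2MB   (threshold with page reporting order 5)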
-
- 29 June 2021, 21 commits
-
-
Submitted by Nathan Chancellor
Clang warns:
  drivers/net/ethernet/microchip/sparx5/sparx5_main.c:760:29: warning: variable 'mac_addr' is uninitialized when used here [-Wuninitialized]
          if (of_get_mac_address(np, mac_addr)) {
                                     ^~~~~~~~
  drivers/net/ethernet/microchip/sparx5/sparx5_main.c:669:14: note: initialize the variable 'mac_addr' to silence this warning
          u8 *mac_addr;
                      ^
                       = NULL
  1 warning generated.
mac_addr is only used to store the value retrieved from of_get_mac_address(), which is then copied into the base_mac member of the sparx5 struct using ether_addr_copy(). It is easier to just use the base_mac address directly, which avoids the warning and the extra copy. Fixes: 3cfa11ba ("net: sparx5: add the basic sparx5 driver") Link: https://github.com/ClangBuiltLinux/linux/issues/1413 Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
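The shape of the fix, sketched from the description above (the surrounding code and the fallback path are illustrative, not the actual driver hunk):

	/* Read the MAC address straight into the driver's own storage instead
	 * of going through an uninitialized local pointer; fall back to a
	 * random address if the DT property is absent. */
	if (of_get_mac_address(np, sparx5->base_mac)) {
		dev_info(sparx5->dev, "MAC address not set, using a random MAC\n");
		eth_random_addr(sparx5->base_mac);
	}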
-
Submitted by Vladimir Oltean
The SJA1105P/Q/R/S and SJA1110 may have the same layout for the command to read/write/search for L2 Address Lookup entries, but as explained in the comments at the beginning of the sja1105_dynamic_config.c file, the command portion of the buffer is at the end, and we need to obtain a pointer to it by adding the length of the entry to the buffer. Alas, the length of an L2 Address Lookup entry is larger in SJA1110 than it is for SJA1105P/Q/R/S, so we need to create a common helper to access the command buffer, and this receives as argument the length of the entry buffer. Fixes: 3e77e59b ("net: dsa: sja1105: add support for the SJA1110 switch family") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
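A generic sketch of the "command words follow the entry" layout that the common helper has to account for (names are illustrative):

	/* The dynamic-config buffer is laid out as [entry][command]. The entry
	 * length differs between switch families, so the command pointer must
	 * be derived from the per-family entry size rather than hardcoded. */
	static void *dyn_cmd_ptr(void *buf, size_t entry_len)
	{
		return (char *)buf + entry_len;
	}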
-
Submitted by David Bauer
AR8031/AR8033 have different status registers for copper and fiber operation. However, the extended status register is the same for both operation modes. As a result of that, ESTATUS_1000_XFULL is set to 1 even when operating in copper TP mode. Remove this mode from the supported link modes, as this driver currently only supports copper operation. Signed-off-by: David Bauer <mail@david-bauer.net> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Yingliang
Fix to return a negative error code from the error handling case instead of 0, as done elsewhere in this function. Fixes: d6fce514 ("net: sparx5: add switching support") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Yingliang
In case of error, the function devm_ioremap() returns a NULL pointer, not ERR_PTR(). The IS_ERR() test in the return value check should be replaced with a NULL test. Fixes: 3cfa11ba ("net: sparx5: add the basic sparx5 driver") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
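The pattern being corrected, as a generic sketch: devm_ioremap() signals failure with NULL, not with an ERR_PTR()-encoded value.

	void __iomem *regs;

	regs = devm_ioremap(dev, res->start, resource_size(res));
	if (!regs)		/* was: if (IS_ERR(regs)) */
		return -ENOMEM;	/* NULL means the mapping failed */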
-
Submitted by Yang Yingliang
A null pointer dereference will occur if platform_get_resource() returns NULL, so we need to check the return value. Fixes: 3cfa11ba ("net: sparx5: add the basic sparx5 driver") Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Vladimir Oltean
When a switchdev port leaves a LAG that is a bridge port, the switchdev objects and port attributes offloaded to that port are not removed:
  ip link add br0 type bridge
  ip link add bond0 type bond mode 802.3ad
  ip link set swp0 master bond0
  ip link set bond0 master br0
  bridge vlan add dev bond0 vid 100
  ip link set swp0 nomaster
VLAN 100 will remain installed on swp0 despite it going into standalone mode, because as far as the bridge is concerned, nothing ever happened to its bridge port. Let's extend the bridge vlan, fdb and mdb replay functions to take a 'bool adding' argument, and make DSA and ocelot call the replay functions with 'adding' as false from the switchdev unsync path, for the switch port that leaves the bridge. Note that this patch in itself does not salvage anything, because in the current pull mode of operation, DSA still needs to call the replay helpers with adding=false. This will be done in another patch. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Vladimir Oltean
There is a slight inconvenience in the switchdev replay helpers added recently, and this is when:
  ip link add br0 type bridge
  ip link add bond0 type bond
  ip link set bond0 master br0
  bridge vlan add dev bond0 vid 100
  ip link set swp0 master bond0
  ip link set swp1 master bond0
Since the underlying driver (currently only DSA) asks for a replay of VLANs when swp0 and swp1 join the LAG because it is bridged, what will happen is that DSA will try to react twice on the VLAN event for swp0. This is not really a huge problem right now, because most drivers accept duplicates since the bridge itself does, but it will become a problem when we add support for replaying switchdev object deletions. Let's fix this by adding a blank void *ctx in the replay helpers, which will be passed on by the bridge in the switchdev notifications. If the context is NULL, everything is the same as before. But if the context is populated with a valid pointer, the underlying switchdev driver (currently DSA) can use the pointer to 'see through' the bridge port (which in the example above is bond0) and 'know' that the event is only for a particular physical port offloading that bridge port, and not for all of them. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Vladimir Oltean
In the case where the driver asks for a replay of a certain type of event (port object or attribute) for a bridge port that is a LAG, it may do so because this port has just joined the LAG. But there might already be other switchdev ports in that LAG, and it is preferable that those preexisting switchdev ports do not act upon the replayed event. The solution is to add a context to switchdev events, which is NULL most of the time (when the bridge layer initiates the call) but which can be set to a value controlled by the switchdev driver when a replay is requested. The driver can then check the context to figure out if all ports within the LAG should act upon the switchdev event, or just the ones that match the context. We have to modify all switchdev_handle_* helper functions as well as the prototypes in the drivers that use these helpers too, because these helpers hide the underlying struct switchdev_notifier_info from us and there is no way to retrieve the context otherwise. The context structure will be populated and used in later patches. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
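A sketch of how a driver-side handler can use the context (hypothetical handler and port type; the context value is whatever the switchdev driver passed when requesting the replay):

	/* NULL ctx keeps the old behaviour: every port under the bridge port
	 * (e.g. every LAG member) reacts. A non-NULL ctx restricts the event
	 * to the port that requested the replay. */
	static int port_vlan_add(struct my_port *port, const void *ctx,
				 const struct switchdev_obj_port_vlan *vlan)
	{
		if (ctx && ctx != port)
			return 0;	/* replay targeted at another LAG member */

		return my_port_install_vlan(port, vlan->vid);	/* hypothetical helper */
	}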
-
Submitted by Vladimir Oltean
Not using this driver, I did not realize it doesn't react to SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE notifications, but it implements just the bridge bypass operations (.ndo_fdb_{add,del}). So the call to br_fdb_replay just produces notifications that are ignored; delete it for now. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michael Chan
Call bnxt_ptp_init() to initialize and register with the clock driver to enable PTP support. Call bnxt_ptp_free() to unregister and clean up during shutdown. Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Pavan Chebbi
Set up the TXBD to enable a TX timestamp if requested. At TX packet DMA completion, if we requested a TX timestamp on that packet, we defer to .do_aux_work() to obtain the TX timestamp from the firmware before we free the TX SKB. v2: Use .do_aux_work() to get the TX timestamp from firmware. Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Pavan Chebbi
If the RX packet is timestamped by the hardware, the RX completion record will contain the lower 32 bits of the timestamp. This needs to be combined with the upper 16 bits of the periodic timestamp that we get from the timer. The previous snapshot in ptp->old_time is used to make sure that the snapshot is not ahead of the RX timestamp, and we adjust for wrap-around if needed. v2: Make ptp->old_time read access safe on 32-bit CPUs. Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
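A sketch of the reconstruction described above (mask widths follow the 48-bit clock / 32-bit completion split; the function and variable names are illustrative, not the bnxt driver's):

	/* Splice the 32-bit RX timestamp into the upper bits of the last
	 * periodic snapshot, correcting for a wrap of the low 32 bits. */
	static u64 reconstruct_rx_ts(u64 old_time, u32 rx_ts_lo)
	{
		u64 ts = (old_time & ~0xffffffffULL) | rx_ts_lo;

		/* The snapshot was taken before the packet; if the packet's low
		 * word is smaller, the counter wrapped in between. */
		if (rx_ts_lo < (u32)(old_time & 0xffffffffULL))
			ts += 0x100000000ULL;
		return ts;
	}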
-
Submitted by Pavan Chebbi
From bnxt_timer(), read the 48-bit hardware running clock periodically and store it in ptp->current_time. The previous snapshot of the clock will be stored in ptp->old_time. The old_time snapshot will be used in the next patches to compute the RX packet timestamps. v2: Use .do_aux_work() to read the timer periodically. Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michael Chan
Add the clock APIs to set/get/adjust the hw clock, and the related ioctls and ethtool methods. v2: Propagate the error code from ptp_clock_register(). Add a spinlock to serialize access to the timecounter. The timecounter is accessed in process context and in the RX datapath. Read the PHC using direct registers. Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michael Chan
Store the PTP hardware info in a structure if the hardware and firmware support PTP. Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michael Chan
Adding the PTP-related firmware interface is the main change. There is also a name change for admin_mtu, requiring a code fixup. Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jian Shen
This patch adds support for dumping the MAC UMV counters in debugfs, which will be helpful for debugging. The display style is below:
  $ cat umv_info
  num_alloc_vport  : 2
  max_umv_size     : 256
  wanted_umv_size  : 256
  priv_umv_size    : 85
  share_umv_size   : 86
  vport(0) used_umv_num : 1
  vport(1) used_umv_num : 1
Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jian Shen
Previously, the flow director counter was not enabled. To improve maintainability for checking whether the flow director hit or not, enable a flow director counter for each function, and add a debugfs query interface to query the counters for each function. The debugfs command is below:
  cat fd_counter
  func_id    hit_times
  pf         0
  vf0        0
  vf1        0
Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Guillaume Nault
For consistency with other L3 tunnel devices, reset the mac_header pointer after decapsulation. This makes the mac_header 0 bytes long, thus making it clear that this skb has no mac_header. Compile tested only. Signed-off-by: Guillaume Nault <gnault@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Guillaume Nault
Even though bareudp transports L3 data (typically IP or MPLS), it needs to reset the mac_header pointer so that other parts of the stack don't mistakenly access the outer header after the packet has been decapsulated. This allows pushing an Ethernet header onto bareudp packets and redirecting them to an Ethernet device:
  $ tc filter add dev bareudp0 ingress matchall \
      action vlan push_eth dst_mac 00:00:5e:00:53:01 \
                           src_mac 00:00:5e:00:53:00 \
      action mirred egress redirect dev eth0
Without this patch, push_eth refuses to add an ethernet header because the skb appears to already have a MAC header. Signed-off-by: Guillaume Nault <gnault@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
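The mechanism itself is small; a receive-path sketch (exact placement within the decap path is assumed, not taken from the patch):

	/* After decapsulation skb->data points at the inner L3 header, so
	 * resetting the MAC header makes it coincide with the network header,
	 * i.e. a zero-length MAC header that push_eth can safely replace. */
	skb_reset_network_header(skb);
	skb_reset_mac_header(skb);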
-