- 04 May 2017, 7 commits
-
By Minchan Kim

zram_free_page() already handles the NULL handle case and the same-page case, so use it to reduce the chance of error. (Actually, I made a mistake when I handled the same-page feature.)

Link: http://lkml.kernel.org/r/1492052365-16169-7-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Minchan Kim

With the element field, I sometimes got confused between handle and element access. It might just be my mistake, but I think it's time to introduce accessors to prevent similar mistakes in the future. This is purely a clean-up patch, so it shouldn't change any behavior.

Link: http://lkml.kernel.org/r/1492052365-16169-6-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
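
For illustration, a minimal sketch of what handle/element accessors over a zram slot table could look like; the struct layout and names here are assumptions for the example, not the actual patch.

    /* Hypothetical slot layout: one word stores either a zsmalloc handle
     * (compressed page) or a raw element (e.g. a same-filled pattern). */
    struct zram_slot_sketch {
        unsigned long entry;    /* handle or element, depending on flags */
        unsigned long flags;
    };

    static inline unsigned long zram_get_handle(struct zram_slot_sketch *slot)
    {
        return slot->entry;     /* caller knows this slot holds a handle */
    }

    static inline void zram_set_handle(struct zram_slot_sketch *slot,
                                       unsigned long handle)
    {
        slot->entry = handle;
    }

    static inline unsigned long zram_get_element(struct zram_slot_sketch *slot)
    {
        return slot->entry;     /* caller knows this slot holds an element */
    }

    static inline void zram_set_element(struct zram_slot_sketch *slot,
                                        unsigned long element)
    {
        slot->entry = element;
    }

With such accessors, call sites read zram_get_handle()/zram_get_element() instead of poking the shared field directly, which is where the confusion came from.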
-
By Minchan Kim

It's redundant now, so remove it and use the zram structure directly.

Link: http://lkml.kernel.org/r/1492052365-16169-5-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Minchan Kim

As part of this clean-up phase, use a zram wrapper function to lock table access, which is more consistent with zram's other functions.

Link: http://lkml.kernel.org/r/1492052365-16169-4-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
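
As an illustration of the wrapper approach, a minimal sketch of per-slot lock/unlock helpers built on a bit spinlock; the flag bit and struct layout are assumptions for the example, not the actual zram code.

    #include <linux/bit_spinlock.h>
    #include <linux/types.h>

    #define ZRAM_LOCK_BIT 0 /* assumed per-slot lock bit */

    struct zram_slot_sketch {
        unsigned long entry;
        unsigned long flags;    /* bit 0 used as the slot lock here */
    };

    struct zram_meta_sketch {
        struct zram_slot_sketch *table;
    };

    static void zram_slot_lock(struct zram_meta_sketch *meta, u32 index)
    {
        bit_spin_lock(ZRAM_LOCK_BIT, &meta->table[index].flags);
    }

    static void zram_slot_unlock(struct zram_meta_sketch *meta, u32 index)
    {
        bit_spin_unlock(ZRAM_LOCK_BIT, &meta->table[index].flags);
    }

Callers then wrap every table access in zram_slot_lock()/zram_slot_unlock() rather than taking the bit lock by hand at each site.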
-
By Minchan Kim

For architectures with PAGE_SIZE > 4K, zram has supported partial IO. However, the mixed code for handling normal and partial IO is too messy and error-prone to modify when adding upcoming features, so this patch cleans up zram's IO handling functions.

Link: http://lkml.kernel.org/r/1492052365-16169-3-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Minchan Kim

Patch series "zram clean up", v2.

This patchset aims to clean up zram.

[1] cleans up the handling of bvecs containing multiple pages.
[2] cleans up partial IO handling.
[3-6] clean up zram by using accessors and removing a pointless structure.

With [2-6] applied, we save a few hundred bytes as well as gain a large readability improvement.

x86: 708 bytes saved

add/remove: 1/1 grow/shrink: 0/11 up/down: 478/-1186 (-708)
function                      old     new   delta
zram_special_page_read          -     478    +478
zram_reset_device             317     314      -3
mem_used_max_store            131     128      -3
compact_store                  96      93      -3
mm_stat_show                  203     197      -6
zram_add                      719     712      -7
zram_slot_free_notify         229     214     -15
zram_make_request             819     803     -16
zram_meta_free                128     111     -17
zram_free_page                180     151     -29
disksize_store                432     361     -71
zram_decompress_page.isra     504       -    -504
zram_bvec_rw                 2592    2080    -512
Total: Before=25350773, After=25350065, chg -0.00%

ppc64: 231 bytes saved

add/remove: 2/0 grow/shrink: 1/9 up/down: 681/-912 (-231)
function                      old     new   delta
zram_special_page_read          -     480    +480
zram_slot_lock                  -     200    +200
vermagic                       39      40      +1
mm_stat_show                  256     248      -8
zram_meta_free                200     184     -16
zram_add                      944     912     -32
zram_free_page                348     308     -40
disksize_store                572     492     -80
zram_decompress_page          664     564    -100
zram_slot_free_notify         292     160    -132
zram_make_request            1132    1000    -132
zram_bvec_rw                 2768    2396    -372
Total: Before=17565825, After=17565594, chg -0.00%

This patch (of 6):

Johannes Thumshirn reported that the system panics when using an NVMe over Fabrics loopback target with zram. The reason is that zram expects each bvec in a bio to contain a single page, but NVMe can attach a huge bulk of pages to the bio's bvec, so zram's index arithmetic can go wrong and the resulting out-of-bounds access panics the system.

Commit [1] in mainline solved the problem by limiting max_sectors with SECTORS_PER_PAGE, but that makes zram slow because the bio has to be split into single pages. This patch instead makes zram aware of multiple pages in a bvec, solving the problem without that regression (i.e. without splitting the bio).

[1] 0bc31538, zram: set physical queue limits to avoid array out of bounds accesses

Link: http://lkml.kernel.org/r/20170413134057.GA27499@bbox
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: Johannes Thumshirn <jthumshirn@suse.de>
Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Hannes Reinecke <hare@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
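
To illustrate the idea of handling a bvec that covers more than one page, a minimal sketch that walks such a bvec page by page; the helper is made up for the example and is not the actual zram change.

    #include <linux/bio.h>
    #include <linux/kernel.h>
    #include <linux/mm.h>

    /* Walk a bvec whose bv_len may exceed PAGE_SIZE and hand each page-sized
     * (or smaller) chunk to a per-page handler. Pages backing a multi-page
     * bvec are contiguous, so bv_page + n addresses the n-th page. */
    static int for_each_bvec_page_sketch(struct bio_vec *bvec,
                                         int (*do_page)(struct page *page,
                                                        unsigned int offset,
                                                        unsigned int len))
    {
        unsigned int done = 0;

        while (done < bvec->bv_len) {
            unsigned int off = (bvec->bv_offset + done) & ~PAGE_MASK;
            struct page *page = bvec->bv_page +
                                ((bvec->bv_offset + done) >> PAGE_SHIFT);
            unsigned int len = min_t(unsigned int, bvec->bv_len - done,
                                     PAGE_SIZE - off);
            int err = do_page(page, off, len);

            if (err)
                return err;
            done += len;
        }
        return 0;
    }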
-
By Michal Hocko

Tetsuo has reported that the sysrq-triggered OOM killer prints misleading information when no tasks are selected:

sysrq: SysRq : Manual OOM execution
Out of memory: Kill process 4468 ((agetty)) score 0 or sacrifice child
Killed process 4468 ((agetty)) total-vm:43704kB, anon-rss:1760kB, file-rss:0kB, shmem-rss:0kB
sysrq: SysRq : Manual OOM execution
Out of memory: Kill process 4469 (systemd-cgroups) score 0 or sacrifice child
Killed process 4469 (systemd-cgroups) total-vm:10704kB, anon-rss:120kB, file-rss:0kB, shmem-rss:0kB
sysrq: SysRq : Manual OOM execution
sysrq: OOM request ignored because killer is disabled
sysrq: SysRq : Manual OOM execution
sysrq: OOM request ignored because killer is disabled
sysrq: SysRq : Manual OOM execution
sysrq: OOM request ignored because killer is disabled

The real reason is that there are no eligible tasks for the OOM killer to select, but since commit 7c5f64f8 ("mm: oom: deduplicate victim selection code for memcg and global oom") the semantics of out_of_memory() have changed without updating moom_callback().

This patch updates moom_callback() to report that no task was eligible, which is the case both when the OOM killer is disabled and when there are no eligible tasks. To help distinguish the first case from the second, add a printk to both oom_killer_{enable,disable}(). This information is useful on its own because it might help debug potential memory allocation failures.

Fixes: 7c5f64f8 ("mm: oom: deduplicate victim selection code for memcg and global oom")
Link: http://lkml.kernel.org/r/20170404134705.6361-1-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
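
A simplified sketch of the direction described above, not the exact patch: the sysrq handler reports when out_of_memory() found nothing to kill, and the companion change adds a pr_info() to oom_killer_enable()/oom_killer_disable() so the disabled case can be told apart in the log.

    #include <linux/gfp.h>
    #include <linux/mutex.h>
    #include <linux/oom.h>
    #include <linux/printk.h>
    #include <linux/workqueue.h>

    /* Sketch only: the oom_control setup is trimmed compared to the real
     * handler; order = -1 marks a manual, not allocation-driven, OOM. */
    static void moom_callback_sketch(struct work_struct *ignored)
    {
        struct oom_control oc = {
            .gfp_mask = GFP_KERNEL,
            .order = -1,
        };

        mutex_lock(&oom_lock);
        if (!out_of_memory(&oc))    /* no eligible victim was found */
            pr_info("OOM request ignored. No task eligible\n");
        mutex_unlock(&oom_lock);
    }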
-
- 03 May 2017, 13 commits
-
By Sunil Goutham

The driver takes one extra reference on each page for recycling, which is fine in the usual packet path where each 64KB page is segmented into multiple receive buffers. But in XDP mode there is just one receive buffer per page, so taking an extra page reference for every buffer becomes a big bottleneck, consuming ~50% of CPU cycles due to atomic operations. This patch adds an internal ref count in pgcache for each page, and additional page references are taken in a batch instead of one at a time. The internal counter, i.e. 'pgcache->ref_count', and the page's counter, i.e. 'page->_refcount', are compared to check a page's recyclability.

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
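
A minimal sketch of the batching idea, assuming a hypothetical pgcache structure and batch size; it is not the driver's actual code.

    #include <linux/mm.h>
    #include <linux/page_ref.h>

    #define PGREF_BATCH 128 /* assumed batch size */

    struct pgcache_sketch {
        struct page *page;
        int ref_count;  /* references pre-charged but not yet handed out */
    };

    /* Account one page reference per buffer, but top up the page refcount in
     * batches so the atomic page_ref_add() runs once per PGREF_BATCH buffers
     * instead of once per buffer. Any unused credit must be returned with
     * page_ref_sub() when the page is finally released. */
    static void pgcache_take_ref(struct pgcache_sketch *pgcache)
    {
        if (!pgcache->ref_count) {
            page_ref_add(pgcache->page, PGREF_BATCH);
            pgcache->ref_count = PGREF_BATCH;
        }
        pgcache->ref_count--;
    }

Recyclability is then decided by comparing this internal count against page_ref_count(page), instead of paying an atomic increment for every single receive buffer.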
-
By Sunil Goutham

When in XDP mode, reserve XDP_PACKET_HEADROOM bytes at the start of the receive buffer so the XDP program can modify headers and adjust the packet start. Additional code changes were done to handle such packets.

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Sunil Goutham

Adds support for XDP_TX, i.e. transmitting the packet out of the XDP TX queue mapped to the Rx queue on which the packet was received. Since an SQ used for XDP TX is used only from a single CPU, i.e. for SQ descriptor creation and freeing, an atomic free count is not necessary and would become a bottleneck. Hence a separate 'xdp_free_cnt' is added for SQs designated for XDP to track the descriptor free count. Changes also include:
- A new entry 'xdp_page' is added to save the transmitted packet's page pointer for later cleanup.
- The XDP Tx SQ's doorbell is rung once per NAPI instance.
- The designated SQ is retrieved for packets sent out by the stack via 'nicvf_xmit'.

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Sunil Goutham

Adds support for XDP_DROP. Also, since in XDP mode there is just a single buffer per page, changes were made to recycle the DMA mapping info as well along with the pages.

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Sunil Goutham

Adds basic XDP support, i.e. attaching a BPF program to an interface. Also takes care of allocating separate Tx queues for the XDP path and for network stack packet transmission. This patch doesn't support handling of any of the XDP actions; all are treated as XDP_PASS, i.e. packets are handed over to the network stack. Changes also involve allocating one receive buffer per page in XDP mode and multiple in normal mode, i.e. when no BPF program is attached.

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
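
As a rough illustration of the "everything is XDP_PASS" stage, a sketch of running an attached program on a received buffer; the function and its integration points are assumptions, not the driver's code.

    #include <linux/bpf.h>
    #include <linux/filter.h>

    /* Run the attached program; at this stage every verdict is handled as
     * XDP_PASS, i.e. the frame continues into the normal stack path. */
    static bool xdp_run_pass_only_sketch(struct bpf_prog *prog, u8 *data,
                                         unsigned int len)
    {
        struct xdp_buff xdp;

        xdp.data = data;
        xdp.data_end = data + len;

        switch (bpf_prog_run_xdp(prog, &xdp)) {
        case XDP_TX:    /* not handled yet: treated as pass */
        case XDP_DROP:  /* not handled yet: treated as pass */
        case XDP_PASS:
        default:
            return true; /* hand the packet to the network stack */
        }
    }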
-
By Sunil Goutham

Get rid of unnecessary double pointer references and type casting in the receive buffer allocation code.

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Sunil Goutham

Optimized CQE handling with the changes below:
- Freeing descriptors back to the SQ in bulk, i.e. once per NAPI instance instead of for every CQE_TX; this reduces the number of atomic updates to 'sq->free_cnt'.
- Checking errors in CQE_TX and CQE_RX before calling the appropriate functions to update error stats, i.e. reduced branching.
Also removed debug messages in the packet handling path which otherwise cause issues if DEBUG is enabled.

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
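
A minimal sketch of the per-NAPI bulk accounting idea, with hypothetical structure and callback names.

    #include <linux/atomic.h>

    struct sq_sketch {
        atomic_t free_cnt;  /* descriptors available to the xmit path */
    };

    /* Process a batch of TX completions and return their descriptors in one
     * go, so the atomic add happens once per NAPI poll rather than once per
     * CQE_TX. process_one_cqe() returns the descriptor count of one CQE, or
     * 0 when no completion is pending. */
    static void cq_tx_poll_sketch(struct sq_sketch *sq, int budget,
                                  int (*process_one_cqe)(void))
    {
        int done, freed = 0;

        for (done = 0; done < budget; done++) {
            int descs = process_one_cqe();

            if (!descs)
                break;
            freed += descs;
        }

        if (freed)
            atomic_add(freed, &sq->free_cnt);
    }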
-
By Sunil Goutham

A receive buffer's physical address or IOVA will not go beyond 49 bits anyway, since that is the maximum supported HW address. As per perf, updating the bitfield, i.e. buf_addr:42 in the RBDR descriptor entry, consumes a lot of CPU cycles, hence it was changed to a 64-bit field with the alignment requirements taken care of.

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
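
For illustration, a before/after sketch of the descriptor layout change; the struct names and reserved widths are placeholders.

    #include <linux/types.h>

    /* Before: a packed bitfield. Per perf, the read-modify-write needed to
     * update buf_addr:42 is costly in the buffer refill path. */
    struct rbdr_entry_bitfield_sketch {
        u64 buf_addr:42;
        u64 reserved:22;
    };

    /* After: a plain 64-bit store. Addresses never exceed 49 bits, and the
     * alignment requirements are handled where the address is produced. */
    struct rbdr_entry_plain_sketch {
        u64 buf_addr;
    };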
-
By Sunil Goutham

Adds support for page recycling when allocating receive buffers, to reduce the cost of refilling the RBDR ring. Also got rid of using compound pages when the page size is 4K; only order-0 pages are used now. Only the page is recycled; DMA mappings still need to be done for every receive buffer allocated, due to the following constraints:
- We cannot have just one receive buffer per 64KB page.
- There is just one buffer ring shared across 8 Rx queues, so buffers of the same page can go to any Rx queue.
- HW gives the buffer address where the packet has been DMA'ed, not the index into the buffer ring, which makes it impossible to reuse the DMA mapping info.
So, unfortunately, every buffer has to go through the costly mapping route.

Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
The PTP hardware filter configuration performed by the driver for a given user-requested config is not correct for some of the PTP modes. The following changes are needed for the PTP config-filter implementation:
1. NIG_REG_TX_PTP_EN register - bits 0/1/2 respectively enable TimeSync, "V1 frame format support" and "V2 frame format support" on the TX side. Set the associated bits based on the user request.
2. The ptp4l application fails to operate in Peer Delay mode. The following changes are needed to fix this:
a. The driver should enable (set to 0) the DA #1-related bits for IPv4, IPv6 and MAC destination addresses in these registers: NIG_REG_TX_LLH_PTP_RULE_MASK, NIG_REG_LLH_PTP_RULE_MASK.
b. NIG_REG_LLH_PTP_PARAM_MASK/NIG_REG_TX_LLH_PTP_PARAM_MASK should be set to 0x0 in all modes.

Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
The PTP Tx timestamping data structures are not protected against concurrent access in the Tx paths. Protect them using atomic bit locks.

Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
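
A minimal sketch of the bit-lock pattern for a single in-flight Tx timestamp, with hypothetical names; the real driver's fields differ.

    #include <linux/bitops.h>
    #include <linux/skbuff.h>

    #define PTP_TX_IN_PROGRESS 0 /* assumed lock bit */

    struct ptp_tx_sketch {
        unsigned long flags;
        struct sk_buff *tx_skb; /* skb awaiting its hardware timestamp */
    };

    /* Claim the single timestamping slot; returns false if another Tx path
     * already owns it, in which case the packet goes out untimestamped. */
    static bool ptp_tx_try_claim(struct ptp_tx_sketch *ptp, struct sk_buff *skb)
    {
        if (test_and_set_bit_lock(PTP_TX_IN_PROGRESS, &ptp->flags))
            return false;
        ptp->tx_skb = skb;
        return true;
    }

    static void ptp_tx_release(struct ptp_tx_sketch *ptp)
    {
        ptp->tx_skb = NULL;
        clear_bit_unlock(PTP_TX_IN_PROGRESS, &ptp->flags);
    }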
-
By Jan Kiszka

The IOT2000 is an industrial controller platform derived from the Intel Galileo Gen2 board. The IOT2020 variant comes with one LAN port, the IOT2040 with two. They can be told apart based on the board asset tag in the DMI table. Based on a patch by Sascha Weisenberger.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Sascha Weisenberger <sascha.weisenberger@siemens.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
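
A sketch of how a driver can tell board variants apart by the DMI board asset tag; the tag strings below are placeholders, not the actual IOT2020/IOT2040 identifiers.

    #include <linux/dmi.h>

    static const struct dmi_system_id iot2000_sketch_table[] = {
        {
            .ident = "one-LAN-port variant (placeholder tag)",
            .matches = {
                DMI_MATCH(DMI_BOARD_ASSET_TAG, "EXAMPLE-TAG-1PORT"),
            },
            .driver_data = (void *)1, /* number of LAN ports */
        },
        {
            .ident = "two-LAN-port variant (placeholder tag)",
            .matches = {
                DMI_MATCH(DMI_BOARD_ASSET_TAG, "EXAMPLE-TAG-2PORT"),
            },
            .driver_data = (void *)2,
        },
        { /* sentinel */ }
    };

    static int iot2000_lan_ports_sketch(void)
    {
        const struct dmi_system_id *id = dmi_first_match(iot2000_sketch_table);

        return id ? (long)id->driver_data : 0;
    }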
-
By Timmy Li

hns_get_sset_count() returns HNS_NET_STATS_CNT, but the data space allocated is not enough for ethtool_get_strings(), which causes random memory corruption. When SLAB and DEBUG_SLAB are both enabled, memory corruption like the following can be observed without this patch:

[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b  ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b  ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f  ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69  pkt_err_csum_fai

Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
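
To show why the two callbacks must agree, a minimal sketch of the ethtool contract: get_sset_count() sizes the buffer that get_strings() later fills, so returning a count smaller than the number of strings written corrupts memory. The names are generic, not the hns driver's.

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>
    #include <linux/string.h>

    static const char demo_stat_names[][ETH_GSTRING_LEN] = {
        "tx_pkt", "tx_pkt_ok", "tx_pkt_err_csum_fail",
    };

    static int demo_get_sset_count(struct net_device *dev, int sset)
    {
        /* Must match exactly what demo_get_strings() writes below: the core
         * allocates count * ETH_GSTRING_LEN bytes based on this value. */
        if (sset == ETH_SS_STATS)
            return ARRAY_SIZE(demo_stat_names);
        return -EOPNOTSUPP;
    }

    static void demo_get_strings(struct net_device *dev, u32 sset, u8 *data)
    {
        if (sset == ETH_SS_STATS)
            memcpy(data, demo_stat_names, sizeof(demo_stat_names));
    }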
-
- 02 May 2017, 18 commits
-
By Vivien Didelot

The 6390 family of chips uses only 2 of the 3 VTU Data registers to pack the MemberTag and PortState VLAN data. This means they must be written or read before or after each VTU/STU operation. Implement this variant to add VTU support for such chips. These chips have a 13th bit for the VID, thus set their max_vid to 8191.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Newer chips such as the 88E6390 have a VTU Page bit in the VTU VID register to specify a 13th bit for the VID. This can be used to support 8K VLANs. When dumping the whole VTU, all VID bits must be set to one, including this VTU Page bit. Add support for VIDs greater than 4095.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
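
A small sketch of how a 13-bit VID could be split between the classic 12-bit VID field and a page bit when building the VTU VID register value; the bit positions are assumptions for illustration only.

    #include <linux/bitops.h>
    #include <linux/types.h>

    #define VTU_VID_MASK 0x0fff  /* VID bits 11:0 (assumed) */
    #define VTU_VID_PAGE BIT(13) /* 13th VID bit (assumed position) */

    static u16 vtu_vid_regval_sketch(u16 vid)
    {
        u16 val = vid & VTU_VID_MASK;

        if (vid & BIT(12)) /* VIDs 4096..8191 live on the second page */
            val |= VTU_VID_PAGE;
        return val;
    }

When dumping the whole table, the start value is all ones (VID 8191 here), so the page bit is included exactly as the commit message describes.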
-
By Vivien Didelot

Make the code which fetches or initializes a new VTU entry more concise. This allows us to get rid of the old underscore-prefix naming.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Now that we have chip operations for VTU accesses, mark all helpers from global1_vtu.c as static. Only the various implementations of the GetNext, LoadPurge and Flush operations need to be exposed.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Add a new vtu_loadpurge operation to the chip info structure to distinguish the various implementations of the VTU accesses. Now that the STU handling is abstracted behind VTU operations, kill the obsolete MV88E6XXX_FLAG_STU flag.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Add a new vtu_getnext operation to the chip info structure to distinguish the various implementations of the VTU accesses.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Now that the code writes both VTU and STU data when loading a VTU entry, load the corresponding STU entry at the same time. This allows us to get rid of the STU management in the _mv88e6xxx_vtu_new helper and thus remove the separate implementations of STU Load/Purge and STU GetNext, as well as the unused family checks.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Now that the code reads both VTU and STU data on a VTU GetNext operation, fetch the STU entry data of a VTU entry at the same time. The STU data bits are masked with the VTU data bits, and they are now all read at the same time a VTU GetNext operation is issued.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Extract the generic portion of the code that issues an STU GetNext operation, so it can be used by other implementations.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

The code to access the VTU Data registers currently only supports the 88E6185 family and alike: 2-bit membership adjacent to 2-bit port state. Even though the 88E6352 family introduced an indirect table to program the per-VLAN Spanning Tree states, the usage of the VTU Data registers remains the same regardless of whether it is a VTU or STU operation. Now that the mv88e6xxx_vtu_entry structure contains both port membership and state data, factor out the code that accesses them into global1_vtu.c.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
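
To illustrate the 88E6185-style layout (2-bit membership next to 2-bit state, four ports per 16-bit Data register), a small packing sketch; the register count, bit order and port limit are simplified assumptions.

    #include <linux/types.h>

    /* Pack per-port member (2 bits) and state (2 bits) nibbles into the
     * three 16-bit VTU Data register values: 4 ports per register. */
    static void vtu_data_pack_sketch(const u8 *member, const u8 *state,
                                     int num_ports, u16 regs[3])
    {
        int port;

        regs[0] = regs[1] = regs[2] = 0;
        for (port = 0; port < num_ports && port < 12; port++) {
            unsigned int shift = (port % 4) * 4;
            u16 nibble = (member[port] & 0x3) | ((state[port] & 0x3) << 2);

            regs[port / 4] |= nibble << shift;
        }
    }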
-
By Vivien Didelot

Even though every switch model has a different way to access the VTU Data bits, the base implementation of the VTU GetNext operation remains the same: wait, write the first VID to iterate from, start the operation, and read the next VID. Move this generic implementation into global1_vtu.c and abstract the handling of the start VID (similarly to the ATU GetNext implementation), before introducing a new chip operation for specific chips.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
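
A sketch of that generic sequence, written against placeholder helpers (everything suffixed _sketch is hypothetical, not the driver's API); only the Data-register access differs per chip.

    #include <linux/types.h>

    struct vtu_chip_sketch; /* placeholder for the chip context */
    enum { VTU_OP_GET_NEXT_SKETCH };

    int vtu_wait_sketch(struct vtu_chip_sketch *chip);
    int vtu_vid_write_sketch(struct vtu_chip_sketch *chip, u16 vid);
    int vtu_vid_read_sketch(struct vtu_chip_sketch *chip, u16 *vid);
    int vtu_op_start_sketch(struct vtu_chip_sketch *chip, int op);

    /* Generic VTU GetNext flow: wait until idle, load the VID to iterate
     * from, kick the operation, then read back the next valid VID. */
    static int vtu_getnext_sketch(struct vtu_chip_sketch *chip, u16 *vid)
    {
        int err;

        err = vtu_wait_sketch(chip);
        if (err)
            return err;

        err = vtu_vid_write_sketch(chip, *vid); /* start VID (all ones to dump) */
        if (err)
            return err;

        err = vtu_op_start_sketch(chip, VTU_OP_GET_NEXT_SKETCH);
        if (err)
            return err;

        return vtu_vid_read_sketch(chip, vid); /* VID of the next valid entry */
    }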
-
By Vivien Didelot

Add helpers to access the VTU VID register in the global1_vtu.c file. At the same time, move mv88e6xxx_g1_vtu_vid_write to the beginning of _mv88e6xxx_vtu_loadpurge, which adds no functional change but makes future patches simpler.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Add helpers to access the VTU SID register in the global1_vtu.c file.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Add helpers to access the VTU FID register in the global1_vtu.c file.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Move the VTU Flush operation to global1_vtu.c and call it from a mv88e6xxx_vtu_setup helper, similarly to the ATU and PVT setup.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

Move the helper functions that access the Global 1 VTU Operation register to a new global1_vtu.c file, and get rid of the old underscore-prefix naming convention. This file will be extended with all VTU/STU related code.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vivien Didelot

VLAN-aware Marvell chips can program 802.1Q VLAN membership as well as 802.1s per-VLAN Spanning Tree state using the same 3 VTU Data registers. Some chips such as the 88E6185 use different Data register offsets for port state and membership, and program them in a single operation. Other chips such as the 88E6352 use the same register layout but program them in distinct operations (an indirect table is used for 802.1s). Newer chips such as the 88E6390 use the same offsets for both state and membership in distinct operations, thus requiring multiple data accesses. To correctly abstract this, split the "data" structure member of mv88e6xxx_vtu_entry into two members, "state" and "member", before adding VTU support for newer chips.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
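
A sketch of the resulting entry layout after the split; the array sizing and field widths are illustrative, not the exact driver definition.

    #include <linux/types.h>

    #define SKETCH_MAX_PORTS 11 /* assumed upper bound for the example */

    struct mv88e6xxx_vtu_entry_sketch {
        u16  vid;
        u16  fid;
        u8   sid;
        bool valid;
        u8   member[SKETCH_MAX_PORTS]; /* 802.1Q MemberTag per port */
        u8   state[SKETCH_MAX_PORTS];  /* 802.1s port state per port */
    };

Chip-specific code then decides whether member[] and state[] map to the same Data register offsets (and one or two operations) or to different ones.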
-
By Vivien Didelot

Some chips don't have a VLAN Table Unit, most of them have a 4K table, and some others such as the 88E6390 family have a 13th bit for the VID. Add a new max_vid member to the info structure, used both to check the presence of a VTU and as the value to iterate from in VTU GetNext operations. This makes the MV88E6XXX_FLAG_VTU flag obsolete, thus remove it.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
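
A small sketch of how such a max_vid field can be used, with placeholder structs: zero means no VTU, and a dump starts the GetNext iteration from max_vid.

    #include <linux/types.h>

    struct chip_info_sketch {
        u16 max_vid; /* 0: no VTU, 4095: 4K table, 8191: 13-bit VID */
    };

    struct chip_sketch {
        const struct chip_info_sketch *info;
    };

    static bool chip_has_vtu(const struct chip_sketch *chip)
    {
        return chip->info->max_vid != 0;
    }

    static u16 vtu_dump_start_vid(const struct chip_sketch *chip)
    {
        /* Start from the highest VID so GetNext wraps to the first entry. */
        return chip->info->max_vid;
    }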
-
- 01 May 2017, 2 commits
-
By Ido Schimmel

When a netdev is enslaved to a VRF master, its router interface (RIF) needs to be destroyed (if it exists) and a new one created using the corresponding virtual router (VR). From the driver's perspective, the above is equivalent to an inetaddr event sent for this netdev. Therefore, when a port netdev (or its uppers) is enslaved to a VRF master, call the same function that would have been called had a NETDEV_UP been sent for this netdev in the inetaddr notification chain.

This patch also fixes a bug when a LAG netdev with an existing RIF is enslaved to a VRF. Before this patch, each LAG port would drop its reference on the RIF, but would re-join the same one (in the wrong VR) soon after. With this patch, the corresponding RIF is first destroyed and a new one is created using the correct VR.

Fixes: 7179eb5a ("mlxsw: spectrum_router: Add support for VRFs")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Mintz, Yuval

After removing the PTP-related initialization from slowpath start, the remaining PTT entry is required only when CONFIG_RFS_ACCEL is set. Otherwise, it leads to a warning due to being unused.

Fixes: d179bd16 ("qed: Acquire/release ptt_ptp lock when enabling/disabling PTP")
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-