Submitted by Alexander Duyck
On architectures that have a cache line size larger than 64 Bytes we start running into issues where the amount of headroom for the frame starts shrinking.

The size of skb_shared_info on a system with a 64B L1 cache line size is 320 bytes. This increases to 384 with a 128B cache line and 512 with a 256B cache line. In addition, the NET_SKB_PAD value increases as well, consistent with the cache line size. As a result, when we get to a 256B cache line, as seen on the s390, we end up with 768 bytes used by padding and shared info, leaving us with only 1280 bytes to use for data storage. On architectures such as this we should default to using 3K Rx buffers out of an 8K page instead of trying to do 1.5K buffers out of a 4K page.

To take all of this into account I have added one small check so that we compare the max_frame to the amount of actual data we can store. This was already occurring for igb, but I had overlooked it for ixgbe as it doesn't have strict limits for 82599 once we enable jumbo frames. By adding this check we will automatically enable 3K Rx buffers as soon as the maximum frame size we can handle drops below the standard Ethernet MTU.

I also went through and fixed one small typo that I found where I had left an IGB in a variable name due to a copy/paste error.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
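For illustration, the following is a minimal stand-alone C sketch of the buffer budget arithmetic described above; it is not driver code. The 320/384/512 skb_shared_info sizes are the figures quoted in this message, NET_SKB_PAD is modeled as max(32, L1_CACHE_BYTES) per include/linux/skbuff.h, and the 1518-byte threshold (1500 MTU + 14B Ethernet header + 4B FCS) is an assumed stand-in for the driver's max_frame comparison.

#include <stdio.h>

/* skb_shared_info sizes quoted in this commit for each L1 cache line size */
static unsigned int shinfo_size(unsigned int l1_cache_bytes)
{
	switch (l1_cache_bytes) {
	case 64:  return 320;
	case 128: return 384;
	case 256: return 512;
	default:  return 0;
	}
}

int main(void)
{
	static const unsigned int l1_sizes[] = { 64, 128, 256 };
	/* Standard Ethernet frame: 1500 MTU + 14B header + 4B FCS */
	const unsigned int std_frame = 1518;
	unsigned int i;

	for (i = 0; i < 3; i++) {
		unsigned int l1 = l1_sizes[i];
		/* NET_SKB_PAD is max(32, L1_CACHE_BYTES) */
		unsigned int pad = l1 > 32 ? l1 : 32;
		/* Data room left in a 2K half-page Rx buffer */
		unsigned int room = 2048 - pad - shinfo_size(l1);

		printf("L1=%3uB: pad=%3u shinfo=%3u room=%4u -> %s\n",
		       l1, pad, shinfo_size(l1), room,
		       room < std_frame ? "3K buffers from an 8K page"
		                        : "1.5K buffers from a 4K page");
	}
	return 0;
}

Running this prints room values of 1664, 1536, and 1280 for 64B, 128B, and 256B cache lines respectively, showing that only the 256B case falls below a standard Ethernet frame and therefore triggers the switch to 3K Rx buffers.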
c74042f3