Commit 32344a39 authored by Jesse Brandeburg, committed by David S. Miller

ixgbe: fix bug when using large pages and jumbo frames

It was pointed out on the list that ixgbe was failing when using 64kB pages
and a large 16kB MTU.

Since MAX_SKB_FRAGS = 3 with a 64kB PAGE_SIZE, and the driver configured
page usage on the assumption that 2kB is half a page, it only ever DMAed
that much data into each half page.

(16kB - header size) / 2048 = 7 or 8 fragments, which would far exceed 3.

Adjust the driver to account for these large pages; the hardware supports
DMA of up to 16kB for each descriptor.
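
To make the arithmetic concrete, below is a minimal, self-contained sketch of
the sizing logic this patch introduces. It is an illustration, not driver
code: the rx_bufsz() helper is hypothetical, PAGE_SIZE is passed in as a
plain argument, and only the macro values match the defines added in the
diff further down.

/*
 * Illustrative user-space sketch of the receive-buffer sizing after this
 * patch. rx_bufsz() is a hypothetical helper mirroring the diff below.
 */
#include <stdio.h>

#define IXGBE_RXBUFFER_2048  2048
#define IXGBE_MAX_RXBUFFER   16384  /* largest size for a single descriptor */

static unsigned int rx_bufsz(unsigned int page_size)
{
	unsigned int bufsz = IXGBE_RXBUFFER_2048;

	/* grow the buffer on large-page machines ... */
	if (bufsz < page_size / 2)
		bufsz = page_size / 2;
	/* ... but never past what one descriptor can DMA */
	if (bufsz > IXGBE_MAX_RXBUFFER)
		bufsz = IXGBE_MAX_RXBUFFER;
	return bufsz;
}

int main(void)
{
	unsigned int mtu = 16 * 1024, hdr = 256;
	unsigned int old_bufsz = IXGBE_RXBUFFER_2048;   /* pre-patch: always 2kB */
	unsigned int new_bufsz = rx_bufsz(65536);       /* post-patch, 64kB pages */

	/* old behaviour on 64kB pages: (16384 - 256) / 2048 -> 8 fragments,
	 * far more than MAX_SKB_FRAGS = 3 allows */
	printf("old: bufsz=%u frags=%u\n", old_bufsz,
	       (mtu - hdr + old_bufsz - 1) / old_bufsz);

	/* new behaviour on 64kB pages: buffer grows to the 16kB descriptor
	 * limit, so the whole payload fits in a single fragment */
	printf("new: bufsz=%u frags=%u\n", new_bufsz,
	       (mtu - hdr + new_bufsz - 1) / new_bufsz);
	return 0;
}

With the old fixed 2048-byte buffer, a 16kB frame on a 64kB-page machine
needs about 8 fragments, exceeding MAX_SKB_FRAGS = 3; with the new sizing
the buffer grows to the 16kB per-descriptor limit and one fragment suffices.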
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent c7e4358a
@@ -71,6 +71,7 @@
 #define IXGBE_RXBUFFER_128   128    /* Used for packet split */
 #define IXGBE_RXBUFFER_256   256    /* Used for packet split */
 #define IXGBE_RXBUFFER_2048  2048
+#define IXGBE_MAX_RXBUFFER   16384  /* largest size for a single descriptor */
 #define IXGBE_RX_HDR_SIZE IXGBE_RXBUFFER_256
......
@@ -1550,7 +1550,14 @@ static void ixgbe_configure_srrctl(struct ixgbe_adapter *adapter, int index)
 	srrctl &= ~IXGBE_SRRCTL_BSIZEPKT_MASK;
 	if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) {
-		srrctl |= IXGBE_RXBUFFER_2048 >> IXGBE_SRRCTL_BSIZEPKT_SHIFT;
+		u16 bufsz = IXGBE_RXBUFFER_2048;
+		/* grow the amount we can receive on large page machines */
+		if (bufsz < (PAGE_SIZE / 2))
+			bufsz = (PAGE_SIZE / 2);
+		/* cap the bufsz at our largest descriptor size */
+		bufsz = min((u16)IXGBE_MAX_RXBUFFER, bufsz);
+		srrctl |= bufsz >> IXGBE_SRRCTL_BSIZEPKT_SHIFT;
 		srrctl |= IXGBE_SRRCTL_DESCTYPE_HDR_SPLIT_ALWAYS;
 		srrctl |= ((IXGBE_RX_HDR_SIZE <<
 			    IXGBE_SRRCTL_BSIZEHDRSIZE_SHIFT) &
......
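
For reference, the bufsz >> IXGBE_SRRCTL_BSIZEPKT_SHIFT line works because
SRRCTL expresses the packet buffer size in 1kB units. The tiny standalone
sketch below illustrates that encoding; the shift value of 10 is an
assumption taken from the ixgbe register definitions of this era and is not
part of this patch.

/* Hypothetical standalone illustration of the SRRCTL packet-buffer-size
 * encoding; the shift of 10 (1kB granularity) is assumed from the ixgbe
 * register definitions, it is not introduced by this patch. */
#include <stdio.h>

#define IXGBE_SRRCTL_BSIZEPKT_SHIFT 10   /* buffer size expressed in KB */

int main(void)
{
	unsigned int bufsz = 16384;      /* the 64kB-page case after this patch */

	/* 16384 >> 10 = 16, i.e. the ring advertises 16kB packet buffers */
	printf("SRRCTL BSIZEPACKET = %u\n", bufsz >> IXGBE_SRRCTL_BSIZEPKT_SHIFT);
	return 0;
}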