Commit 1feafb1b authored by Alexander Lobakin, committed by Zheng Zengkai

xsk: Respect device's headroom and tailroom on generic xmit path

stable inclusion
from stable-5.10.37
commit 613f9d1f1587e1365bcf9a81a5ed009d9e36e648
bugzilla: 51868
CVE: NA

--------------------------------

[ Upstream commit 3914d88f ]

xsk_generic_xmit() allocates a new skb and then queues it for
xmitting. The size of new skb's headroom is desc->len, so it comes
to the driver/device with no reserved headroom and/or tailroom.
Lots of drivers need some headroom (and sometimes tailroom) to
prepend (and/or append) some headers or data, e.g. CPU tags,
device-specific headers/descriptors (LSO, TLS etc.), and in case
of no available space skb_cow_head() will reallocate the skb.
Reallocations are unwanted on the fast path, especially when it
comes to XDP, so generic XSK xmit should reserve the space declared
in dev->needed_headroom and dev->needed_tailroom to avoid them.

Note on max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom)):

Usually, output functions reserve LL_RESERVED_SPACE(dev), which
consists of dev->hard_header_len + dev->needed_headroom, aligned
by 16.

However, on XSK xmit the hard header is already present in the
chunk, so hard_header_len is not needed. But it's still better to
align the data up to a cacheline, while reserving no less than the
driver requests for headroom. NET_SKB_PAD here is to double-insure
there will be no reallocations even when the driver advertises no
needed_headroom but in fact needs it (not such a rare case).

Fixes: 35fcde7f ("xsk: support for Tx")
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210218204908.5455-5-alobakin@pm.me
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent affbce49
@@ -380,12 +380,16 @@ static int xsk_generic_xmit(struct sock *sk)
 	struct sk_buff *skb;
 	unsigned long flags;
 	int err = 0;
+	u32 hr, tr;
 
 	mutex_lock(&xs->mutex);
 
 	if (xs->queue_id >= xs->dev->real_num_tx_queues)
 		goto out;
 
+	hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(xs->dev->needed_headroom));
+	tr = xs->dev->needed_tailroom;
+
 	while (xskq_cons_peek_desc(xs->tx, &desc, xs->pool)) {
 		char *buffer;
 		u64 addr;
@@ -397,11 +401,13 @@ static int xsk_generic_xmit(struct sock *sk)
 		}
 
 		len = desc.len;
-		skb = sock_alloc_send_skb(sk, len, 1, &err);
+		skb = sock_alloc_send_skb(sk, hr + len + tr, 1, &err);
 		if (unlikely(!skb))
 			goto out;
 
+		skb_reserve(skb, hr);
 		skb_put(skb, len);
 		addr = desc.addr;
 		buffer = xsk_buff_raw_get_data(xs->pool, addr);
 		err = skb_store_bits(skb, 0, buffer, len);