Commit a6c90dd3 authored by David S. Miller

Merge branch 'sched-introduce-chain-templates-support-with-offloading-to-mlxsw'

Jiri Pirko says:

====================
sched: introduce chain templates support with offloading to mlxsw

For the TC clsact offload these days, some HW drivers need to hold a
magic ball. The reason is that with the first rule inserted into HW,
they need to guess which fields will be used for matching. If this
guess later proves to be wrong and the user adds a filter matching on a
different field, there's a problem. Mlxsw currently resolves this with
a couple of patterns that try to cover as many match fields as
possible. This approach is far from optimal, both performance-wise and
scale-wise. Also, there are combinations of filters that, inserted in a
certain order, will not succeed.

Most of the time, when a user inserts filters into a chain, they know
right away what the filters are going to look like - what type and
which options they will have. For example, they may know that they will
only insert filters of type flower matching on destination IP address.
They can specify a template that covers all the filters in the chain.
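
For illustration, such a template could be expressed like this (the
syntax follows the selftest added later in this series; "dummy0" is
just a placeholder device that is assumed to already have clsact
attached):

$ tc chain add dev dummy0 ingress protocol ip \
        flower dst_ip 0.0.0.0/16

From then on, only flower filters whose masks stay within the template
- here, at most the first 16 bits of the destination IP address - are
accepted into that chain.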

This patchset gives the user the possibility to pass such a template
to the kernel and to propagate it all the way down to the device
drivers.

See the examples below.

Create a dummy device with clsact first:

There is no chain present by default:

Add chain number 11 by explicit command:
chain parent ffff: chain 11

Add a filter to chain number 12, which does not exist yet. That will
implicitly create chain 12:
chain parent ffff: chain 11
chain parent ffff: chain 12

Delete both chains:

Add a chain with a template of type flower, allowing insertion of rules
matching on the last 2 bytes of the destination MAC address:
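
The command itself is not preserved in this capture; based on the
syntax above and the listing below, it would be along the lines of:

$ tc chain add dev dummy0 ingress protocol ip \
        flower dst_mac 00:00:00:00:00:00/00:00:00:00:ff:ff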

The chain with the template is now shown in the list:
chain parent ffff: flower chain 0
  dst_mac 00:00:00:00:00:00/00:00:00:00:ff:ff
  eth_type ipv4

Add another chain (number 22) with a template:
chain parent ffff: flower chain 0
  dst_mac 00:00:00:00:00:00/00:00:00:00:ff:ff
  eth_type ipv4
chain parent ffff: flower chain 22
  eth_type ipv4
  dst_ip 0.0.0.0/16

Add a filter that fits the template:
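
The command is not preserved here either. A filter fits the template
when its mask is a subset of the template mask, so for the chain 0
template above something like the following would be accepted:

$ tc filter add dev dummy0 ingress protocol ip pref 1 \
        flower dst_mac 00:00:00:00:11:22/00:00:00:00:ff:ff \
        action drop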

Adding filters that do not fit the template fails:
Error: cls_flower: Mask does not fit the template.
We have an error talking to the kernel, -1
Error: cls_flower: Mask does not fit the template.
We have an error talking to the kernel, -1

Adding filters that do not fit the template of chain 22 fails as well:
Error: cls_flower: Mask does not fit the template.
We have an error talking to the kernel, -1
Error: cls_flower: Mask does not fit the template.
We have an error talking to the kernel, -1

---
v3->v4:
- patch 2:
  - new patch
- patch 3:
  - new patch, derived from the previous v3 chaintemplate obj patch
- patch 4:
  - only templates part as chains creation/deletion is now a separate patch
  - don't pass template priv as arg of "change" op
- patch 6:
  - rebased on top of flower cvlan patch and ip tos/ttl patch
- patch 7:
  - template priv is no longer passed as an arg to "change" op
- patch 11:
  - split from the originally single patch
- patch 12:
  - split from the originally single patch
v2->v3:
- patch 7:
  - rebase on top of the reoffload patchset
- patch 8:
  - rebase on top of the reoffload patchset
v1->v2:
- patch 8:
  - remove leftover extack arg in fl_hw_create_tmplt()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
......@@ -1455,6 +1455,11 @@ mlxsw_sp_setup_tc_cls_flower(struct mlxsw_sp_acl_block *acl_block,
return 0;
case TC_CLSFLOWER_STATS:
return mlxsw_sp_flower_stats(mlxsw_sp, acl_block, f);
case TC_CLSFLOWER_TMPLT_CREATE:
return mlxsw_sp_flower_tmplt_create(mlxsw_sp, acl_block, f);
case TC_CLSFLOWER_TMPLT_DESTROY:
mlxsw_sp_flower_tmplt_destroy(mlxsw_sp, acl_block, f);
return 0;
default:
return -EOPNOTSUPP;
}
......
......@@ -543,7 +543,8 @@ mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_ruleset *
mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block, u32 chain_index,
enum mlxsw_sp_acl_profile profile);
enum mlxsw_sp_acl_profile profile,
struct mlxsw_afk_element_usage *tmplt_elusage);
void mlxsw_sp_acl_ruleset_put(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_ruleset *ruleset);
u16 mlxsw_sp_acl_ruleset_group_id(struct mlxsw_sp_acl_ruleset *ruleset);
......@@ -667,6 +668,12 @@ void mlxsw_sp_flower_destroy(struct mlxsw_sp *mlxsw_sp,
int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
struct tc_cls_flower_offload *f);
int mlxsw_sp_flower_tmplt_create(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
struct tc_cls_flower_offload *f);
void mlxsw_sp_flower_tmplt_destroy(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
struct tc_cls_flower_offload *f);
/* spectrum_qdisc.c */
int mlxsw_sp_tc_qdisc_init(struct mlxsw_sp_port *mlxsw_sp_port);
......
......@@ -317,7 +317,8 @@ int mlxsw_sp_acl_block_unbind(struct mlxsw_sp *mlxsw_sp,
static struct mlxsw_sp_acl_ruleset *
mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block, u32 chain_index,
const struct mlxsw_sp_acl_profile_ops *ops)
const struct mlxsw_sp_acl_profile_ops *ops,
struct mlxsw_afk_element_usage *tmplt_elusage)
{
struct mlxsw_sp_acl *acl = mlxsw_sp->acl;
struct mlxsw_sp_acl_ruleset *ruleset;
......@@ -337,7 +338,8 @@ mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp,
if (err)
goto err_rhashtable_init;
err = ops->ruleset_add(mlxsw_sp, &acl->tcam, ruleset->priv);
err = ops->ruleset_add(mlxsw_sp, &acl->tcam, ruleset->priv,
tmplt_elusage);
if (err)
goto err_ops_ruleset_add;
......@@ -419,7 +421,8 @@ mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_ruleset *
mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block, u32 chain_index,
enum mlxsw_sp_acl_profile profile)
enum mlxsw_sp_acl_profile profile,
struct mlxsw_afk_element_usage *tmplt_elusage)
{
const struct mlxsw_sp_acl_profile_ops *ops;
struct mlxsw_sp_acl *acl = mlxsw_sp->acl;
......@@ -434,7 +437,8 @@ mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp,
mlxsw_sp_acl_ruleset_ref_inc(ruleset);
return ruleset;
}
return mlxsw_sp_acl_ruleset_create(mlxsw_sp, block, chain_index, ops);
return mlxsw_sp_acl_ruleset_create(mlxsw_sp, block, chain_index, ops,
tmplt_elusage);
}
void mlxsw_sp_acl_ruleset_put(struct mlxsw_sp *mlxsw_sp,
......
......@@ -189,6 +189,8 @@ struct mlxsw_sp_acl_tcam_group {
struct mlxsw_sp_acl_tcam_group_ops *ops;
const struct mlxsw_sp_acl_tcam_pattern *patterns;
unsigned int patterns_count;
bool tmplt_elusage_set;
struct mlxsw_afk_element_usage tmplt_elusage;
};
struct mlxsw_sp_acl_tcam_chunk {
......@@ -234,13 +236,19 @@ mlxsw_sp_acl_tcam_group_add(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_tcam *tcam,
struct mlxsw_sp_acl_tcam_group *group,
const struct mlxsw_sp_acl_tcam_pattern *patterns,
unsigned int patterns_count)
unsigned int patterns_count,
struct mlxsw_afk_element_usage *tmplt_elusage)
{
int err;
group->tcam = tcam;
group->patterns = patterns;
group->patterns_count = patterns_count;
if (tmplt_elusage) {
group->tmplt_elusage_set = true;
memcpy(&group->tmplt_elusage, tmplt_elusage,
sizeof(group->tmplt_elusage));
}
INIT_LIST_HEAD(&group->region_list);
err = mlxsw_sp_acl_tcam_group_id_get(tcam, &group->id);
if (err)
......@@ -449,6 +457,15 @@ mlxsw_sp_acl_tcam_group_use_patterns(struct mlxsw_sp_acl_tcam_group *group,
const struct mlxsw_sp_acl_tcam_pattern *pattern;
int i;
/* In case the template is set, we don't have to look up the pattern
* and just use the template.
*/
if (group->tmplt_elusage_set) {
memcpy(out, &group->tmplt_elusage, sizeof(*out));
WARN_ON(!mlxsw_afk_element_usage_subset(elusage, out));
return;
}
for (i = 0; i < group->patterns_count; i++) {
pattern = &group->patterns[i];
mlxsw_afk_element_usage_fill(out, pattern->elements,
......@@ -865,13 +882,15 @@ struct mlxsw_sp_acl_tcam_flower_rule {
static int
mlxsw_sp_acl_tcam_flower_ruleset_add(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_tcam *tcam,
void *ruleset_priv)
void *ruleset_priv,
struct mlxsw_afk_element_usage *tmplt_elusage)
{
struct mlxsw_sp_acl_tcam_flower_ruleset *ruleset = ruleset_priv;
return mlxsw_sp_acl_tcam_group_add(mlxsw_sp, tcam, &ruleset->group,
mlxsw_sp_acl_tcam_patterns,
MLXSW_SP_ACL_TCAM_PATTERNS_COUNT);
MLXSW_SP_ACL_TCAM_PATTERNS_COUNT,
tmplt_elusage);
}
static void
......
......@@ -64,7 +64,8 @@ int mlxsw_sp_acl_tcam_priority_get(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_profile_ops {
size_t ruleset_priv_size;
int (*ruleset_add)(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_tcam *tcam, void *ruleset_priv);
struct mlxsw_sp_acl_tcam *tcam, void *ruleset_priv,
struct mlxsw_afk_element_usage *tmplt_elusage);
void (*ruleset_del)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv);
int (*ruleset_bind)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv,
struct mlxsw_sp_port *mlxsw_sp_port,
......
......@@ -414,7 +414,7 @@ int mlxsw_sp_flower_replace(struct mlxsw_sp *mlxsw_sp,
ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, block,
f->common.chain_index,
MLXSW_SP_ACL_PROFILE_FLOWER);
MLXSW_SP_ACL_PROFILE_FLOWER, NULL);
if (IS_ERR(ruleset))
return PTR_ERR(ruleset);
......@@ -458,7 +458,7 @@ void mlxsw_sp_flower_destroy(struct mlxsw_sp *mlxsw_sp,
ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, block,
f->common.chain_index,
MLXSW_SP_ACL_PROFILE_FLOWER);
MLXSW_SP_ACL_PROFILE_FLOWER, NULL);
if (IS_ERR(ruleset))
return;
......@@ -484,7 +484,7 @@ int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, block,
f->common.chain_index,
MLXSW_SP_ACL_PROFILE_FLOWER);
MLXSW_SP_ACL_PROFILE_FLOWER, NULL);
if (WARN_ON(IS_ERR(ruleset)))
return -EINVAL;
......@@ -506,3 +506,41 @@ int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
mlxsw_sp_acl_ruleset_put(mlxsw_sp, ruleset);
return err;
}
int mlxsw_sp_flower_tmplt_create(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
struct tc_cls_flower_offload *f)
{
struct mlxsw_sp_acl_ruleset *ruleset;
struct mlxsw_sp_acl_rule_info rulei;
int err;
memset(&rulei, 0, sizeof(rulei));
err = mlxsw_sp_flower_parse(mlxsw_sp, block, &rulei, f);
if (err)
return err;
ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, block,
f->common.chain_index,
MLXSW_SP_ACL_PROFILE_FLOWER,
&rulei.values.elusage);
if (IS_ERR(ruleset))
return PTR_ERR(ruleset);
/* keep the reference to the ruleset */
return 0;
}
void mlxsw_sp_flower_tmplt_destroy(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
struct tc_cls_flower_offload *f)
{
struct mlxsw_sp_acl_ruleset *ruleset;
ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, block,
f->common.chain_index,
MLXSW_SP_ACL_PROFILE_FLOWER, NULL);
if (IS_ERR(ruleset))
return;
/* put the reference to the ruleset kept in create */
mlxsw_sp_acl_ruleset_put(mlxsw_sp, ruleset);
mlxsw_sp_acl_ruleset_put(mlxsw_sp, ruleset);
}
......@@ -721,6 +721,8 @@ enum tc_fl_command {
TC_CLSFLOWER_REPLACE,
TC_CLSFLOWER_DESTROY,
TC_CLSFLOWER_STATS,
TC_CLSFLOWER_TMPLT_CREATE,
TC_CLSFLOWER_TMPLT_DESTROY,
};
struct tc_cls_flower_offload {
......
......@@ -238,6 +238,8 @@ struct tcf_result {
};
};
struct tcf_chain;
struct tcf_proto_ops {
struct list_head head;
char kind[IFNAMSIZ];
......@@ -263,10 +265,18 @@ struct tcf_proto_ops {
tc_setup_cb_t *cb, void *cb_priv,
struct netlink_ext_ack *extack);
void (*bind_class)(void *, u32, unsigned long);
void * (*tmplt_create)(struct net *net,
struct tcf_chain *chain,
struct nlattr **tca,
struct netlink_ext_ack *extack);
void (*tmplt_destroy)(void *tmplt_priv);
/* rtnetlink specific */
int (*dump)(struct net*, struct tcf_proto*, void *,
struct sk_buff *skb, struct tcmsg*);
int (*tmplt_dump)(struct sk_buff *skb,
struct net *net,
void *tmplt_priv);
struct module *owner;
};
......@@ -300,11 +310,13 @@ typedef void tcf_chain_head_change_t(struct tcf_proto *tp_head, void *priv);
struct tcf_chain {
struct tcf_proto __rcu *filter_chain;
struct list_head filter_chain_list;
struct list_head list;
struct tcf_block *block;
u32 index; /* chain index */
unsigned int refcnt;
bool explicitly_created;
const struct tcf_proto_ops *tmplt_ops;
void *tmplt_priv;
};
struct tcf_block {
......@@ -318,6 +330,10 @@ struct tcf_block {
bool keep_dst;
unsigned int offloadcnt; /* Number of oddloaded filters */
unsigned int nooffloaddevcnt; /* Number of devs unable to do offload */
struct {
struct tcf_chain *chain;
struct list_head filter_chain_list;
} chain0;
};
static inline void tcf_block_offload_inc(struct tcf_block *block, u32 *flags)
......
......@@ -150,6 +150,13 @@ enum {
RTM_NEWCACHEREPORT = 96,
#define RTM_NEWCACHEREPORT RTM_NEWCACHEREPORT
RTM_NEWCHAIN = 100,
#define RTM_NEWCHAIN RTM_NEWCHAIN
RTM_DELCHAIN,
#define RTM_DELCHAIN RTM_DELCHAIN
RTM_GETCHAIN,
#define RTM_GETCHAIN RTM_GETCHAIN
__RTM_MAX,
#define RTM_MAX (((__RTM_MAX + 3) & ~3) - 1)
};
......
(This diff has been collapsed and is not shown here.)
......@@ -72,6 +72,13 @@ struct fl_flow_mask {
struct list_head list;
};
struct fl_flow_tmplt {
struct fl_flow_key dummy_key;
struct fl_flow_key mask;
struct flow_dissector dissector;
struct tcf_chain *chain;
};
struct cls_fl_head {
struct rhashtable ht;
struct list_head masks;
......@@ -147,6 +154,23 @@ static void fl_set_masked_key(struct fl_flow_key *mkey, struct fl_flow_key *key,
*lmkey++ = *lkey++ & *lmask++;
}
static bool fl_mask_fits_tmplt(struct fl_flow_tmplt *tmplt,
struct fl_flow_mask *mask)
{
const long *lmask = fl_key_get_start(&mask->key, mask);
const long *ltmplt;
int i;
if (!tmplt)
return true;
ltmplt = fl_key_get_start(&tmplt->mask, mask);
for (i = 0; i < fl_mask_range(mask); i += sizeof(long)) {
if (~*ltmplt++ & *lmask++)
return false;
}
return true;
}
static void fl_clear_masked_range(struct fl_flow_key *key,
struct fl_flow_mask *mask)
{
......@@ -826,51 +850,52 @@ static int fl_init_mask_hashtable(struct fl_flow_mask *mask)
FL_KEY_SET(keys, cnt, id, member); \
} while(0);
static void fl_init_dissector(struct fl_flow_mask *mask)
static void fl_init_dissector(struct flow_dissector *dissector,
struct fl_flow_key *mask)
{
struct flow_dissector_key keys[FLOW_DISSECTOR_KEY_MAX];
size_t cnt = 0;
FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_CONTROL, control);
FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_BASIC, basic);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_ETH_ADDRS, eth);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_PORTS, tp);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_IP, ip);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_TCP, tcp);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_ICMP, icmp);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_ARP, arp);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_MPLS, mpls);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_VLAN, vlan);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_CVLAN, cvlan);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_ENC_KEYID, enc_key_id);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS, enc_ipv4);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS, enc_ipv6);
if (FL_KEY_IS_MASKED(&mask->key, enc_ipv4) ||
FL_KEY_IS_MASKED(&mask->key, enc_ipv6))
if (FL_KEY_IS_MASKED(mask, enc_ipv4) ||
FL_KEY_IS_MASKED(mask, enc_ipv6))
FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_CONTROL,
enc_control);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_ENC_PORTS, enc_tp);
FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt,
FL_KEY_SET_IF_MASKED(mask, keys, cnt,
FLOW_DISSECTOR_KEY_ENC_IP, enc_ip);
skb_flow_dissector_init(&mask->dissector, keys, cnt);
skb_flow_dissector_init(dissector, keys, cnt);
}
static struct fl_flow_mask *fl_create_new_mask(struct cls_fl_head *head,
......@@ -889,7 +914,7 @@ static struct fl_flow_mask *fl_create_new_mask(struct cls_fl_head *head,
if (err)
goto errout_free;
fl_init_dissector(newmask);
fl_init_dissector(&newmask->dissector, &newmask->key);
INIT_LIST_HEAD_RCU(&newmask->filters);
......@@ -938,6 +963,7 @@ static int fl_set_parms(struct net *net, struct tcf_proto *tp,
struct cls_fl_filter *f, struct fl_flow_mask *mask,
unsigned long base, struct nlattr **tb,
struct nlattr *est, bool ovr,
struct fl_flow_tmplt *tmplt,
struct netlink_ext_ack *extack)
{
int err;
......@@ -958,6 +984,11 @@ static int fl_set_parms(struct net *net, struct tcf_proto *tp,
fl_mask_update_range(mask);
fl_set_masked_key(&f->mkey, &f->key, mask);
if (!fl_mask_fits_tmplt(tmplt, mask)) {
NL_SET_ERR_MSG_MOD(extack, "Mask does not fit the template");
return -EINVAL;
}
return 0;
}
......@@ -1023,7 +1054,7 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
}
err = fl_set_parms(net, tp, fnew, &mask, base, tb, tca[TCA_RATE], ovr,
extack);
tp->chain->tmplt_priv, extack);
if (err)
goto errout_idr;
......@@ -1163,6 +1194,91 @@ static int fl_reoffload(struct tcf_proto *tp, bool add, tc_setup_cb_t *cb,
return 0;
}
static void fl_hw_create_tmplt(struct tcf_chain *chain,
struct fl_flow_tmplt *tmplt)
{
struct tc_cls_flower_offload cls_flower = {};
struct tcf_block *block = chain->block;
struct tcf_exts dummy_exts = { 0, };
cls_flower.common.chain_index = chain->index;
cls_flower.command = TC_CLSFLOWER_TMPLT_CREATE;
cls_flower.cookie = (unsigned long) tmplt;
cls_flower.dissector = &tmplt->dissector;
cls_flower.mask = &tmplt->mask;
cls_flower.key = &tmplt->dummy_key;
cls_flower.exts = &dummy_exts;
/* We don't care if driver (any of them) fails to handle this
* call. It serves just as a hint for it.
*/
tc_setup_cb_call(block, NULL, TC_SETUP_CLSFLOWER,
&cls_flower, false);
}
static void fl_hw_destroy_tmplt(struct tcf_chain *chain,
struct fl_flow_tmplt *tmplt)
{
struct tc_cls_flower_offload cls_flower = {};
struct tcf_block *block = chain->block;
cls_flower.common.chain_index = chain->index;
cls_flower.command = TC_CLSFLOWER_TMPLT_DESTROY;
cls_flower.cookie = (unsigned long) tmplt;
tc_setup_cb_call(block, NULL, TC_SETUP_CLSFLOWER,
&cls_flower, false);
}
static void *fl_tmplt_create(struct net *net, struct tcf_chain *chain,
struct nlattr **tca,
struct netlink_ext_ack *extack)
{
struct fl_flow_tmplt *tmplt;
struct nlattr **tb;
int err;
if (!tca[TCA_OPTIONS])
return ERR_PTR(-EINVAL);
tb = kcalloc(TCA_FLOWER_MAX + 1, sizeof(struct nlattr *), GFP_KERNEL);
if (!tb)
return ERR_PTR(-ENOBUFS);
err = nla_parse_nested(tb, TCA_FLOWER_MAX, tca[TCA_OPTIONS],
fl_policy, NULL);
if (err)
goto errout_tb;
tmplt = kzalloc(sizeof(*tmplt), GFP_KERNEL);
if (!tmplt)
goto errout_tb;
tmplt->chain = chain;
err = fl_set_key(net, tb, &tmplt->dummy_key, &tmplt->mask, extack);
if (err)
goto errout_tmplt;
kfree(tb);
fl_init_dissector(&tmplt->dissector, &tmplt->mask);
fl_hw_create_tmplt(chain, tmplt);
return tmplt;
errout_tmplt:
kfree(tmplt);
errout_tb:
kfree(tb);
return ERR_PTR(err);
}
static void fl_tmplt_destroy(void *tmplt_priv)
{
struct fl_flow_tmplt *tmplt = tmplt_priv;
fl_hw_destroy_tmplt(tmplt->chain, tmplt);
kfree(tmplt);
}
static int fl_dump_key_val(struct sk_buff *skb,
void *val, int val_type,
void *mask, int mask_type, int len)
......@@ -1296,29 +1412,9 @@ static int fl_dump_key_flags(struct sk_buff *skb, u32 flags_key, u32 flags_mask)
return nla_put(skb, TCA_FLOWER_KEY_FLAGS_MASK, 4, &_mask);
}
static int fl_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
static int fl_dump_key(struct sk_buff *skb, struct net *net,
struct fl_flow_key *key, struct fl_flow_key *mask)
{
struct cls_fl_filter *f = fh;
struct nlattr *nest;
struct fl_flow_key *key, *mask;
if (!f)
return skb->len;
t->tcm_handle = f->handle;
nest = nla_nest_start(skb, TCA_OPTIONS);
if (!nest)
goto nla_put_failure;
if (f->res.classid &&
nla_put_u32(skb, TCA_FLOWER_CLASSID, f->res.classid))
goto nla_put_failure;
key = &f->key;
mask = &f->mask->key;
if (mask->indev_ifindex) {
struct net_device *dev;
......@@ -1327,9 +1423,6 @@ static int fl_dump(struct net *net, struct tcf_proto *tp, void *fh,
goto nla_put_failure;
}
if (!tc_skip_hw(f->flags))
fl_hw_update_stats(tp, f);
if (fl_dump_key_val(skb, key->eth.dst, TCA_FLOWER_KEY_ETH_DST,
mask->eth.dst, TCA_FLOWER_KEY_ETH_DST_MASK,
sizeof(key->eth.dst)) ||
......@@ -1505,6 +1598,41 @@ static int fl_dump(struct net *net, struct tcf_proto *tp, void *fh,
if (fl_dump_key_flags(skb, key->control.flags, mask->control.flags))
goto nla_put_failure;
return 0;
nla_put_failure:
return -EMSGSIZE;
}
static int fl_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
{
struct cls_fl_filter *f = fh;
struct nlattr *nest;
struct fl_flow_key *key, *mask;
if (!f)
return skb->len;
t->tcm_handle = f->handle;
nest = nla_nest_start(skb, TCA_OPTIONS);
if (!nest)
goto nla_put_failure;
if (f->res.classid &&
nla_put_u32(skb, TCA_FLOWER_CLASSID, f->res.classid))
goto nla_put_failure;
key = &f->key;
mask = &f->mask->key;
if (fl_dump_key(skb, net, key, mask))
goto nla_put_failure;
if (!tc_skip_hw(f->flags))
fl_hw_update_stats(tp, f);
if (f->flags && nla_put_u32(skb, TCA_FLOWER_FLAGS, f->flags))
goto nla_put_failure;
......@@ -1523,6 +1651,31 @@ static int fl_dump(struct net *net, struct tcf_proto *tp, void *fh,
return -1;
}
static int fl_tmplt_dump(struct sk_buff *skb, struct net *net, void *tmplt_priv)
{
struct fl_flow_tmplt *tmplt = tmplt_priv;
struct fl_flow_key *key, *mask;
struct nlattr *nest;
nest = nla_nest_start(skb, TCA_OPTIONS);
if (!nest)
goto nla_put_failure;
key = &tmplt->dummy_key;
mask = &tmplt->mask;
if (fl_dump_key(skb, net, key, mask))
goto nla_put_failure;
nla_nest_end(skb, nest);
return skb->len;
nla_put_failure:
nla_nest_cancel(skb, nest);
return -EMSGSIZE;
}
static void fl_bind_class(void *fh, u32 classid, unsigned long cl)
{
struct cls_fl_filter *f = fh;
......@@ -1543,6 +1696,9 @@ static struct tcf_proto_ops cls_fl_ops __read_mostly = {
.reoffload = fl_reoffload,
.dump = fl_dump,
.bind_class = fl_bind_class,
.tmplt_create = fl_tmplt_create,
.tmplt_destroy = fl_tmplt_destroy,
.tmplt_dump = fl_tmplt_dump,
.owner = THIS_MODULE,
};
......
......@@ -159,7 +159,7 @@ int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm)
switch (sclass) {
case SECCLASS_NETLINK_ROUTE_SOCKET:
/* RTM_MAX always point to RTM_SETxxxx, ie RTM_NEWxxx + 3 */
BUILD_BUG_ON(RTM_MAX != (RTM_NEWCACHEREPORT + 3));
BUILD_BUG_ON(RTM_MAX != (RTM_NEWCHAIN + 3));
err = nlmsg_perm(nlmsg_type, perm, nlmsg_route_perms,
sizeof(nlmsg_route_perms));
break;
......
......@@ -33,7 +33,10 @@ check_tc_version()
echo "SKIP: iproute2 too old; tc is missing JSON support"
exit 1
fi
}
check_tc_shblock_support()
{
tc filter help 2>&1 | grep block &> /dev/null
if [[ $? -ne 0 ]]; then
echo "SKIP: iproute2 too old; tc is missing shared block support"
......@@ -41,6 +44,15 @@ check_tc_version()
fi
}
check_tc_chain_support()
{
tc help 2>&1|grep chain &> /dev/null
if [[ $? -ne 0 ]]; then
echo "SKIP: iproute2 too old; tc is missing chain support"
exit 1
fi
}
if [[ "$(id -u)" -ne 0 ]]; then
echo "SKIP: need root privileges"
exit 0
......
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
ALL_TESTS="unreachable_chain_test gact_goto_chain_test"
ALL_TESTS="unreachable_chain_test gact_goto_chain_test create_destroy_chain \
template_filter_fits"
NUM_NETIFS=2
source tc_common.sh
source lib.sh
......@@ -80,6 +81,66 @@ gact_goto_chain_test()
log_test "gact goto chain ($tcflags)"
}
create_destroy_chain()
{
RET=0
tc chain add dev $h2 ingress
check_err $? "Failed to create default chain"
tc chain add dev $h2 ingress chain 1
check_err $? "Failed to create chain 1"
tc chain del dev $h2 ingress
check_err $? "Failed to destroy default chain"
tc chain del dev $h2 ingress chain 1
check_err $? "Failed to destroy chain 1"
log_test "create destroy chain"
}
template_filter_fits()
{
RET=0
tc chain add dev $h2 ingress protocol ip \
flower dst_mac 00:00:00:00:00:00/FF:FF:FF:FF:FF:FF &> /dev/null
tc chain add dev $h2 ingress chain 1 protocol ip \
flower src_mac 00:00:00:00:00:00/FF:FF:FF:FF:FF:FF &> /dev/null
tc filter add dev $h2 ingress protocol ip pref 1 handle 1101 \
flower dst_mac $h2mac action drop
check_err $? "Failed to insert filter which fits template"
tc filter add dev $h2 ingress protocol ip pref 1 handle 1102 \
flower src_mac $h2mac action drop &> /dev/null
check_fail $? "Incorrectly succeded to insert filter which does not template"
tc filter add dev $h2 ingress chain 1 protocol ip pref 1 handle 1101 \
flower src_mac $h2mac action drop
check_err $? "Failed to insert filter which fits template"
tc filter add dev $h2 ingress chain 1 protocol ip pref 1 handle 1102 \
flower dst_mac $h2mac action drop &> /dev/null
check_fail $? "Incorrectly succeded to insert filter which does not template"
tc filter del dev $h2 ingress chain 1 protocol ip pref 1 handle 1102 \
flower &> /dev/null
tc filter del dev $h2 ingress chain 1 protocol ip pref 1 handle 1101 \
flower &> /dev/null
tc filter del dev $h2 ingress protocol ip pref 1 handle 1102 \
flower &> /dev/null
tc filter del dev $h2 ingress protocol ip pref 1 handle 1101 \
flower &> /dev/null
tc chain del dev $h2 ingress chain 1
tc chain del dev $h2 ingress
log_test "template filter fits"
}
setup_prepare()
{
h1=${NETIFS[p1]}
......@@ -103,6 +164,8 @@ cleanup()
vrf_cleanup
}
check_tc_chain_support
trap cleanup EXIT
setup_prepare
......
......@@ -105,6 +105,8 @@ cleanup()
ip link set $swp2 address $swp2origmac
}
check_tc_shblock_support
trap cleanup EXIT
setup_prepare
......