Commit 7d384846 authored by David S. Miller

Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next

Pablo Neira Ayuso says:

====================
Netfilter updates for net-next

The following patchset contains a second batch of Netfilter updates for
your net-next tree. This includes a rework of the core hook
infrastructure that improves Netfilter performance by ~15% according to
synthetic benchmarks. Then, a large batch of ipset updates via Jozsef
Kadlecsik, including a new hash:ipmac set type. This also includes a
couple of assorted updates.

Regarding the core hook infrastructure rework to improve performance:
using this simple drop-all-packets ruleset from ingress:

        nft add table netdev x
        nft add chain netdev x y { type filter hook ingress device eth0 priority 0\; }
        nft add rule netdev x y drop

and generating traffic with Jesper Brouer's
samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh script (using the
-i option), perf report shows the nf_tables calls in its top 10:

    17.30%  kpktgend_0   [nf_tables]            [k] nft_do_chain
    15.75%  kpktgend_0   [kernel.vmlinux]       [k] __netif_receive_skb_core
    10.39%  kpktgend_0   [nf_tables_netdev]     [k] nft_do_chain_netdev

With this patchset I'm measuring a performance improvement of ~15%,
i.e. an extra ~2.5 Mpps. I used my old laptop, an Intel(R) Core(TM)
i5-3320M CPU @ 2.60GHz (4 cores).

More specifically, this rework consists of the following patches, in strict order:

1) Remove compile-time debugging from core.

2) Remove obsolete comments that predate the rcu era. These days it is
   well known that a Netfilter hook always runs under rcu_read_lock().

3) Remove threshold handling; this is only used by br_netfilter.
   We already have specific code to handle this in br_netfilter,
   so remove it from the core path.

4) Deprecate NF_STOP, as this is only used by br_netfilter.

5) Place the nf_hook_state pointer into the xt_action_param structure, so
   this structure fits into a single cacheline according to pahole.
   This implicitly affects nftables too, since it also relies on the
   xt_action_param structure.
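For intuition, the cacheline claim in item 5 can be checked with a plain-C sketch. The struct below is a userspace approximation of the reshaped xt_action_param (field names follow the patch, but this is illustrative, not the kernel definition); on a typical 64-bit ABI it fits comfortably inside one 64-byte cacheline, which is what pahole confirms for the real struct:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace approximation: in the kernel, the replaced fields
 * (net, in, out, hooknum, family) collapse into one pointer to
 * the hook state. */
struct nf_hook_state_sketch;	/* opaque here */

struct xt_action_param_sketch {
	union {
		const void *matchinfo, *targinfo;
	};
	const struct nf_hook_state_sketch *state;
	int fragoff;
	unsigned int thoff;
	bool hotdrop;
};
```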

6) Move state->hook_entries into nf_queue entry. The hook_entries
   pointer is only required by nf_queue(), so we can store this in the
   queue entry instead.

7) Use a switch() statement to handle verdict cases.
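A minimal sketch of what switch()-based verdict dispatch looks like (the verdict codes mirror uapi/linux/netfilter.h, but the handler itself is illustrative, not the kernel's nf_hook_slow()):

```c
#include <assert.h>

/* Verdict codes as in uapi/linux/netfilter.h. */
enum { NF_DROP, NF_ACCEPT, NF_STOLEN, NF_QUEUE, NF_REPEAT };
#define NF_VERDICT_MASK 0x000000ff

/* Returns 1 to keep traversing the hook list, 0 when the skb was
 * stolen or queued (someone else owns it now), -1 when it must be
 * dropped. */
static int handle_verdict(unsigned int verdict)
{
	switch (verdict & NF_VERDICT_MASK) {
	case NF_ACCEPT:
		return 1;
	case NF_DROP:
		return -1;
	case NF_QUEUE:
	case NF_STOLEN:
	default:
		return 0;
	}
}
```

Compared to an if/else chain, the switch makes every verdict an explicit case and lets the compiler emit a jump table.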

8) Remove hook_entries field from nf_hook_state structure, this is only
   required by nf_queue, so store it in nf_queue_entry structure.

9) Merge nf_iterate() into nf_hook_slow(), which results in a much
   simpler and more readable function.

10) Handle NF_REPEAT away from the core; so far the only client is
    nf_conntrack_in(), and we can restart packet processing with a
    simple goto that jumps back there when TCP tracking requires it.
    This update required a second pass to fix fallout, with a fix from
    Arnd Bergmann.
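The restart pattern item 10 describes can be sketched in a few lines. This is a self-contained illustration of the goto-based retry, not the kernel's nf_conntrack_in(); the worker function is a stand-in that reports NF_REPEAT once, as TCP conntrack may when it must retry a packet:

```c
#include <assert.h>

enum { NF_DROP, NF_ACCEPT, NF_STOLEN, NF_QUEUE, NF_REPEAT };

/* Illustrative stand-in for the per-packet conntrack work: ask for a
 * repeat on the first attempt, then accept. */
static int do_packet_work(int *attempts)
{
	return (*attempts)++ == 0 ? NF_REPEAT : NF_ACCEPT;
}

/* Instead of the core hook loop re-invoking the hook on NF_REPEAT,
 * the caller restarts itself with a local goto. */
static int conntrack_in_sketch(void)
{
	int attempts = 0;
	int verdict;
repeat:
	verdict = do_packet_work(&attempts);
	if (verdict == NF_REPEAT)
		goto repeat;	/* jump back instead of returning to the core */
	return verdict;
}
```

The payoff is that NF_REPEAT disappears from the hot path of every hook traversal and becomes a private concern of its one user.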

11) Set random seed from nft_hash when no seed is specified from
    userspace.

12) Simplify nf_tables expression registration in a much smarter way
    that saves lots of boilerplate code, by Liping Zhang.

13) Simplify layer 4 protocol conntrack tracker registration, from
    Davide Caratti.

14) Add a missing CONFIG_NF_SOCKET_IPV4 dependency for udp4_lib_lookup,
    needed since the recent generalization of the socket infrastructure,
    from Arnd Bergmann.

15) Then, the ipset batch from Jozsef, which he describes as follows:

* Cleanup: Remove extra whitespaces in ip_set.h
* Cleanup: Mark some of the helpers arguments as const in ip_set.h
* Cleanup: Group counter helper functions together in ip_set.h
* Introduce struct ip_set_skbinfo instead of open-coded fields
  in the skbinfo get/init helper functions.
* Use kmalloc() in comment extension helper instead of kzalloc()
  because it is unnecessary to zero out the area just before
  explicit initialization.
* Cleanup: Split extensions into separate files.
* Cleanup: Separate memsize calculation code into dedicated function.
* Cleanup: group ip_set_put_extensions() and ip_set_get_extensions()
  together.
* Add element count to hash headers by Eric B Munson.
* Add element count to all set types header for uniform output
  across all set types.
* Count non-static extension memory into memsize calculation for
  userspace.
* Cleanup: Remove redundant mtype_expire() arguments, because
  they can be derived from the other parameters.
* Cleanup: Simplify mtype_expire() for hash types by removing
  one level of indentation.
* Make NLEN compile time constant for hash types.
* Make sure element data size is a multiple of u32 for the hash set
  types.
* Optimize hash creation routine, exit as early as possible.
* Make struct htype per ipset family so nets array becomes fixed size
  and thus simplifies the struct htype allocation.
* Collapse same condition body into a single one.
* Fix reported memory size for hash:* types, base hash bucket structure
  was not taken into account.
* hash:ipmac type support added to ipset by Tomasz Chilinski.
* Use setup_timer() and mod_timer() instead of init_timer()
  by Muhammad Falak R Wani, individually for the set type families.
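The kmalloc()-instead-of-kzalloc() item above rests on a simple rule: zeroing is wasted work when every byte is overwritten immediately afterwards. A userspace sketch of the comment-extension allocation pattern (the helper name and bound are illustrative; the real code lives in ip_set_comment.h and uses kmalloc(GFP_ATOMIC) and strlcpy()):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Allocate exactly as much as needed and initialize every byte
 * explicitly, so a zeroing allocator (kzalloc/calloc) would be
 * pure overhead here. */
static char *dup_comment(const char *comment, size_t max_len)
{
	size_t len = comment ? strlen(comment) : 0;
	char *c;

	if (len > max_len)
		len = max_len;	/* cap, like IPSET_MAX_COMMENT_SIZE */
	c = malloc(len + 1);	/* kmalloc() in the kernel version */
	if (!c)
		return NULL;
	if (len)
		memcpy(c, comment, len);	/* strlcpy() in the kernel */
	c[len] = '\0';
	return c;
}
```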

16) Remove useless connlabel field in struct netns_ct, patch from
    Florian Westphal.

17) xt_find_table_lock() doesn't return ERR_PTR() anymore, so simplify
    {ip,ip6,arp}tables code that uses this.
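The calling-convention change in item 17 is easy to see in a hedged sketch (all names here are illustrative, not the xtables API): a lookup that returns NULL on failure lets callers drop the IS_ERR()/PTR_ERR() dance and test the pointer directly.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct xt_table_sketch {
	const char *name;
};

static struct xt_table_sketch filter_table = { "filter" };

/* Before this change, a lookup like this could return ERR_PTR(-ENOENT);
 * now NULL simply means "not found", so the caller writes
 * `if (!t) return -ENOENT;` instead of unwrapping an error pointer. */
static struct xt_table_sketch *find_table_sketch(const char *name)
{
	if (name && strcmp(name, "filter") == 0)
		return &filter_table;
	return NULL;
}
```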
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
@@ -49,13 +49,11 @@ struct sock;
 struct nf_hook_state {
 	unsigned int hook;
-	int thresh;
 	u_int8_t pf;
 	struct net_device *in;
 	struct net_device *out;
 	struct sock *sk;
 	struct net *net;
-	struct nf_hook_entry __rcu *hook_entries;
 	int (*okfn)(struct net *, struct sock *, struct sk_buff *);
 };
@@ -82,9 +80,8 @@ struct nf_hook_entry {
 };

 static inline void nf_hook_state_init(struct nf_hook_state *p,
-				      struct nf_hook_entry *hook_entry,
 				      unsigned int hook,
-				      int thresh, u_int8_t pf,
+				      u_int8_t pf,
 				      struct net_device *indev,
 				      struct net_device *outdev,
 				      struct sock *sk,
@@ -92,13 +89,11 @@ static inline void nf_hook_state_init(struct nf_hook_state *p,
 				      int (*okfn)(struct net *, struct sock *, struct sk_buff *))
 {
 	p->hook = hook;
-	p->thresh = thresh;
 	p->pf = pf;
 	p->in = indev;
 	p->out = outdev;
 	p->sk = sk;
 	p->net = net;
-	RCU_INIT_POINTER(p->hook_entries, hook_entry);
 	p->okfn = okfn;
 }
@@ -152,23 +147,20 @@ void nf_unregister_sockopt(struct nf_sockopt_ops *reg);
 extern struct static_key nf_hooks_needed[NFPROTO_NUMPROTO][NF_MAX_HOOKS];
 #endif

-int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state);
+int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state,
+		 struct nf_hook_entry *entry);

 /**
- *	nf_hook_thresh - call a netfilter hook
+ *	nf_hook - call a netfilter hook
  *
  *	Returns 1 if the hook has allowed the packet to pass. The function
  *	okfn must be invoked by the caller in this case. Any other return
  *	value indicates the packet has been consumed by the hook.
  */
-static inline int nf_hook_thresh(u_int8_t pf, unsigned int hook,
-				 struct net *net,
-				 struct sock *sk,
-				 struct sk_buff *skb,
-				 struct net_device *indev,
-				 struct net_device *outdev,
-				 int (*okfn)(struct net *, struct sock *, struct sk_buff *),
-				 int thresh)
+static inline int nf_hook(u_int8_t pf, unsigned int hook, struct net *net,
+			  struct sock *sk, struct sk_buff *skb,
+			  struct net_device *indev, struct net_device *outdev,
+			  int (*okfn)(struct net *, struct sock *, struct sk_buff *))
 {
 	struct nf_hook_entry *hook_head;
 	int ret = 1;
@@ -185,24 +177,16 @@ static inline int nf_hook_thresh(u_int8_t pf, unsigned int hook,
 	if (hook_head) {
 		struct nf_hook_state state;

-		nf_hook_state_init(&state, hook_head, hook, thresh,
-				   pf, indev, outdev, sk, net, okfn);
+		nf_hook_state_init(&state, hook, pf, indev, outdev,
+				   sk, net, okfn);

-		ret = nf_hook_slow(skb, &state);
+		ret = nf_hook_slow(skb, &state, hook_head);
 	}
 	rcu_read_unlock();
 	return ret;
 }

-static inline int nf_hook(u_int8_t pf, unsigned int hook, struct net *net,
-			  struct sock *sk, struct sk_buff *skb,
-			  struct net_device *indev, struct net_device *outdev,
-			  int (*okfn)(struct net *, struct sock *, struct sk_buff *))
-{
-	return nf_hook_thresh(pf, hook, net, sk, skb, indev, outdev, okfn, INT_MIN);
-}
-
 /* Activate hook; either okfn or kfree_skb called, unless a hook
    returns NF_STOLEN (in which case, it's up to the hook to deal with
    the consequences).
@@ -220,19 +204,6 @@ static inline int nf_hook(u_int8_t pf, unsigned int hook, struct net *net,
    coders :)
  */

-static inline int
-NF_HOOK_THRESH(uint8_t pf, unsigned int hook, struct net *net, struct sock *sk,
-	       struct sk_buff *skb, struct net_device *in,
-	       struct net_device *out,
-	       int (*okfn)(struct net *, struct sock *, struct sk_buff *),
-	       int thresh)
-{
-	int ret = nf_hook_thresh(pf, hook, net, sk, skb, in, out, okfn, thresh);
-	if (ret == 1)
-		ret = okfn(net, sk, skb);
-	return ret;
-}
-
 static inline int
 NF_HOOK_COND(uint8_t pf, unsigned int hook, struct net *net, struct sock *sk,
 	     struct sk_buff *skb, struct net_device *in, struct net_device *out,
@@ -242,7 +213,7 @@ NF_HOOK_COND(uint8_t pf, unsigned int hook, struct net *net, struct sock *sk,
 	int ret;

 	if (!cond ||
-	    ((ret = nf_hook_thresh(pf, hook, net, sk, skb, in, out, okfn, INT_MIN)) == 1))
+	    ((ret = nf_hook(pf, hook, net, sk, skb, in, out, okfn)) == 1))
 		ret = okfn(net, sk, skb);
 	return ret;
 }
@@ -252,7 +223,10 @@ NF_HOOK(uint8_t pf, unsigned int hook, struct net *net, struct sock *sk, struct
 	struct net_device *in, struct net_device *out,
 	int (*okfn)(struct net *, struct sock *, struct sk_buff *))
 {
-	return NF_HOOK_THRESH(pf, hook, net, sk, skb, in, out, okfn, INT_MIN);
+	int ret = nf_hook(pf, hook, net, sk, skb, in, out, okfn);
+
+	if (ret == 1)
+		ret = okfn(net, sk, skb);
+	return ret;
 }

 /* Call setsockopt() */
......
@@ -79,10 +79,12 @@ enum ip_set_ext_id {
 	IPSET_EXT_ID_MAX,
 };

+struct ip_set;
+
 /* Extension type */
 struct ip_set_ext_type {
 	/* Destroy extension private data (can be NULL) */
-	void (*destroy)(void *ext);
+	void (*destroy)(struct ip_set *set, void *ext);
 	enum ip_set_extension type;
 	enum ipset_cadt_flags flag;
 	/* Size and minimal alignment */
@@ -92,17 +94,6 @@ struct ip_set_ext_type {
 extern const struct ip_set_ext_type ip_set_extensions[];

-struct ip_set_ext {
-	u64 packets;
-	u64 bytes;
-	u32 timeout;
-	u32 skbmark;
-	u32 skbmarkmask;
-	u32 skbprio;
-	u16 skbqueue;
-	char *comment;
-};
-
 struct ip_set_counter {
 	atomic64_t bytes;
 	atomic64_t packets;
@@ -122,6 +113,15 @@ struct ip_set_skbinfo {
 	u32 skbmarkmask;
 	u32 skbprio;
 	u16 skbqueue;
+	u16 __pad;
+};
+
+struct ip_set_ext {
+	struct ip_set_skbinfo skbinfo;
+	u64 packets;
+	u64 bytes;
+	char *comment;
+	u32 timeout;
 };

 struct ip_set;
@@ -252,6 +252,10 @@ struct ip_set {
 	u8 flags;
 	/* Default timeout value, if enabled */
 	u32 timeout;
+	/* Number of elements (vs timeout) */
+	u32 elements;
+	/* Size of the dynamic extensions (vs timeout) */
+	size_t ext_size;
 	/* Element data size */
 	size_t dsize;
 	/* Offsets to extensions in elements */
@@ -268,7 +272,7 @@ ip_set_ext_destroy(struct ip_set *set, void *data)
 	 */
 	if (SET_WITH_COMMENT(set))
 		ip_set_extensions[IPSET_EXT_ID_COMMENT].destroy(
-			ext_comment(data, set));
+			set, ext_comment(data, set));
 }

 static inline int
@@ -294,104 +298,6 @@ ip_set_put_flags(struct sk_buff *skb, struct ip_set *set)
 	return nla_put_net32(skb, IPSET_ATTR_CADT_FLAGS, htonl(cadt_flags));
 }

-static inline void
-ip_set_add_bytes(u64 bytes, struct ip_set_counter *counter)
-{
-	atomic64_add((long long)bytes, &(counter)->bytes);
-}
-
-static inline void
-ip_set_add_packets(u64 packets, struct ip_set_counter *counter)
-{
-	atomic64_add((long long)packets, &(counter)->packets);
-}
-
-static inline u64
-ip_set_get_bytes(const struct ip_set_counter *counter)
-{
-	return (u64)atomic64_read(&(counter)->bytes);
-}
-
-static inline u64
-ip_set_get_packets(const struct ip_set_counter *counter)
-{
-	return (u64)atomic64_read(&(counter)->packets);
-}
-
-static inline void
-ip_set_update_counter(struct ip_set_counter *counter,
-		      const struct ip_set_ext *ext,
-		      struct ip_set_ext *mext, u32 flags)
-{
-	if (ext->packets != ULLONG_MAX &&
-	    !(flags & IPSET_FLAG_SKIP_COUNTER_UPDATE)) {
-		ip_set_add_bytes(ext->bytes, counter);
-		ip_set_add_packets(ext->packets, counter);
-	}
-	if (flags & IPSET_FLAG_MATCH_COUNTERS) {
-		mext->packets = ip_set_get_packets(counter);
-		mext->bytes = ip_set_get_bytes(counter);
-	}
-}
-
-static inline void
-ip_set_get_skbinfo(struct ip_set_skbinfo *skbinfo,
-		   const struct ip_set_ext *ext,
-		   struct ip_set_ext *mext, u32 flags)
-{
-	mext->skbmark = skbinfo->skbmark;
-	mext->skbmarkmask = skbinfo->skbmarkmask;
-	mext->skbprio = skbinfo->skbprio;
-	mext->skbqueue = skbinfo->skbqueue;
-}
-
-static inline bool
-ip_set_put_skbinfo(struct sk_buff *skb, struct ip_set_skbinfo *skbinfo)
-{
-	/* Send nonzero parameters only */
-	return ((skbinfo->skbmark || skbinfo->skbmarkmask) &&
-		nla_put_net64(skb, IPSET_ATTR_SKBMARK,
-			      cpu_to_be64((u64)skbinfo->skbmark << 32 |
-					  skbinfo->skbmarkmask),
-			      IPSET_ATTR_PAD)) ||
-	       (skbinfo->skbprio &&
-		nla_put_net32(skb, IPSET_ATTR_SKBPRIO,
-			      cpu_to_be32(skbinfo->skbprio))) ||
-	       (skbinfo->skbqueue &&
-		nla_put_net16(skb, IPSET_ATTR_SKBQUEUE,
-			      cpu_to_be16(skbinfo->skbqueue)));
-}
-
-static inline void
-ip_set_init_skbinfo(struct ip_set_skbinfo *skbinfo,
-		    const struct ip_set_ext *ext)
-{
-	skbinfo->skbmark = ext->skbmark;
-	skbinfo->skbmarkmask = ext->skbmarkmask;
-	skbinfo->skbprio = ext->skbprio;
-	skbinfo->skbqueue = ext->skbqueue;
-}
-
-static inline bool
-ip_set_put_counter(struct sk_buff *skb, struct ip_set_counter *counter)
-{
-	return nla_put_net64(skb, IPSET_ATTR_BYTES,
-			     cpu_to_be64(ip_set_get_bytes(counter)),
-			     IPSET_ATTR_PAD) ||
-	       nla_put_net64(skb, IPSET_ATTR_PACKETS,
-			     cpu_to_be64(ip_set_get_packets(counter)),
-			     IPSET_ATTR_PAD);
-}
-
-static inline void
-ip_set_init_counter(struct ip_set_counter *counter,
-		    const struct ip_set_ext *ext)
-{
-	if (ext->bytes != ULLONG_MAX)
-		atomic64_set(&(counter)->bytes, (long long)(ext->bytes));
-	if (ext->packets != ULLONG_MAX)
-		atomic64_set(&(counter)->packets, (long long)(ext->packets));
-}
-
 /* Netlink CB args */
 enum {
 	IPSET_CB_NET = 0,	/* net namespace */
@@ -431,6 +337,8 @@ extern size_t ip_set_elem_len(struct ip_set *set, struct nlattr *tb[],
 			      size_t len, size_t align);
 extern int ip_set_get_extensions(struct ip_set *set, struct nlattr *tb[],
 				 struct ip_set_ext *ext);
+extern int ip_set_put_extensions(struct sk_buff *skb, const struct ip_set *set,
+				 const void *e, bool active);

 static inline int
 ip_set_get_hostipaddr4(struct nlattr *nla, u32 *ipaddr)
@@ -546,10 +454,8 @@ bitmap_bytes(u32 a, u32 b)
 #include <linux/netfilter/ipset/ip_set_timeout.h>
 #include <linux/netfilter/ipset/ip_set_comment.h>
+#include <linux/netfilter/ipset/ip_set_counter.h>
+#include <linux/netfilter/ipset/ip_set_skbinfo.h>

-int
-ip_set_put_extensions(struct sk_buff *skb, const struct ip_set *set,
-		      const void *e, bool active);
-
 #define IP_SET_INIT_KEXT(skb, opt, set)			\
 	{ .bytes = (skb)->len, .packets = 1,		\
......
@@ -6,8 +6,8 @@
 #define IPSET_BITMAP_MAX_RANGE	0x0000FFFF

 enum {
-	IPSET_ADD_STORE_PLAIN_TIMEOUT = -1,
 	IPSET_ADD_FAILED = 1,
+	IPSET_ADD_STORE_PLAIN_TIMEOUT,
 	IPSET_ADD_START_STORED_TIMEOUT,
 };
......
@@ -20,13 +20,14 @@ ip_set_comment_uget(struct nlattr *tb)
  * The kadt functions don't use the comment extensions in any way.
  */
 static inline void
-ip_set_init_comment(struct ip_set_comment *comment,
+ip_set_init_comment(struct ip_set *set, struct ip_set_comment *comment,
 		    const struct ip_set_ext *ext)
 {
 	struct ip_set_comment_rcu *c = rcu_dereference_protected(comment->c, 1);
 	size_t len = ext->comment ? strlen(ext->comment) : 0;

 	if (unlikely(c)) {
+		set->ext_size -= sizeof(*c) + strlen(c->str) + 1;
 		kfree_rcu(c, rcu);
 		rcu_assign_pointer(comment->c, NULL);
 	}
@@ -34,16 +35,17 @@ ip_set_init_comment(struct ip_set_comment *comment,
 		return;
 	if (unlikely(len > IPSET_MAX_COMMENT_SIZE))
 		len = IPSET_MAX_COMMENT_SIZE;
-	c = kzalloc(sizeof(*c) + len + 1, GFP_ATOMIC);
+	c = kmalloc(sizeof(*c) + len + 1, GFP_ATOMIC);
 	if (unlikely(!c))
 		return;
 	strlcpy(c->str, ext->comment, len + 1);
+	set->ext_size += sizeof(*c) + strlen(c->str) + 1;
 	rcu_assign_pointer(comment->c, c);
 }

 /* Used only when dumping a set, protected by rcu_read_lock_bh() */
 static inline int
-ip_set_put_comment(struct sk_buff *skb, struct ip_set_comment *comment)
+ip_set_put_comment(struct sk_buff *skb, const struct ip_set_comment *comment)
 {
 	struct ip_set_comment_rcu *c = rcu_dereference_bh(comment->c);
@@ -58,13 +60,14 @@ ip_set_put_comment(struct sk_buff *skb, struct ip_set_comment *comment)
  * of the set data anymore.
  */
 static inline void
-ip_set_comment_free(struct ip_set_comment *comment)
+ip_set_comment_free(struct ip_set *set, struct ip_set_comment *comment)
 {
 	struct ip_set_comment_rcu *c;

 	c = rcu_dereference_protected(comment->c, 1);
 	if (unlikely(!c))
 		return;
+	set->ext_size -= sizeof(*c) + strlen(c->str) + 1;
 	kfree_rcu(c, rcu);
 	rcu_assign_pointer(comment->c, NULL);
 }
......
#ifndef _IP_SET_COUNTER_H
#define _IP_SET_COUNTER_H

/* Copyright (C) 2015 Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#ifdef __KERNEL__

static inline void
ip_set_add_bytes(u64 bytes, struct ip_set_counter *counter)
{
	atomic64_add((long long)bytes, &(counter)->bytes);
}

static inline void
ip_set_add_packets(u64 packets, struct ip_set_counter *counter)
{
	atomic64_add((long long)packets, &(counter)->packets);
}

static inline u64
ip_set_get_bytes(const struct ip_set_counter *counter)
{
	return (u64)atomic64_read(&(counter)->bytes);
}

static inline u64
ip_set_get_packets(const struct ip_set_counter *counter)
{
	return (u64)atomic64_read(&(counter)->packets);
}

static inline void
ip_set_update_counter(struct ip_set_counter *counter,
		      const struct ip_set_ext *ext,
		      struct ip_set_ext *mext, u32 flags)
{
	if (ext->packets != ULLONG_MAX &&
	    !(flags & IPSET_FLAG_SKIP_COUNTER_UPDATE)) {
		ip_set_add_bytes(ext->bytes, counter);
		ip_set_add_packets(ext->packets, counter);
	}
	if (flags & IPSET_FLAG_MATCH_COUNTERS) {
		mext->packets = ip_set_get_packets(counter);
		mext->bytes = ip_set_get_bytes(counter);
	}
}

static inline bool
ip_set_put_counter(struct sk_buff *skb, const struct ip_set_counter *counter)
{
	return nla_put_net64(skb, IPSET_ATTR_BYTES,
			     cpu_to_be64(ip_set_get_bytes(counter)),
			     IPSET_ATTR_PAD) ||
	       nla_put_net64(skb, IPSET_ATTR_PACKETS,
			     cpu_to_be64(ip_set_get_packets(counter)),
			     IPSET_ATTR_PAD);
}

static inline void
ip_set_init_counter(struct ip_set_counter *counter,
		    const struct ip_set_ext *ext)
{
	if (ext->bytes != ULLONG_MAX)
		atomic64_set(&(counter)->bytes, (long long)(ext->bytes));
	if (ext->packets != ULLONG_MAX)
		atomic64_set(&(counter)->packets, (long long)(ext->packets));
}

#endif /* __KERNEL__ */
#endif /* _IP_SET_COUNTER_H */
#ifndef _IP_SET_SKBINFO_H
#define _IP_SET_SKBINFO_H

/* Copyright (C) 2015 Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#ifdef __KERNEL__

static inline void
ip_set_get_skbinfo(struct ip_set_skbinfo *skbinfo,
		   const struct ip_set_ext *ext,
		   struct ip_set_ext *mext, u32 flags)
{
	mext->skbinfo = *skbinfo;
}

static inline bool
ip_set_put_skbinfo(struct sk_buff *skb, const struct ip_set_skbinfo *skbinfo)
{
	/* Send nonzero parameters only */
	return ((skbinfo->skbmark || skbinfo->skbmarkmask) &&
		nla_put_net64(skb, IPSET_ATTR_SKBMARK,
			      cpu_to_be64((u64)skbinfo->skbmark << 32 |
					  skbinfo->skbmarkmask),
			      IPSET_ATTR_PAD)) ||
	       (skbinfo->skbprio &&
		nla_put_net32(skb, IPSET_ATTR_SKBPRIO,
			      cpu_to_be32(skbinfo->skbprio))) ||
	       (skbinfo->skbqueue &&
		nla_put_net16(skb, IPSET_ATTR_SKBQUEUE,
			      cpu_to_be16(skbinfo->skbqueue)));
}

static inline void
ip_set_init_skbinfo(struct ip_set_skbinfo *skbinfo,
		    const struct ip_set_ext *ext)
{
	*skbinfo = ext->skbinfo;
}

#endif /* __KERNEL__ */
#endif /* _IP_SET_SKBINFO_H */
@@ -40,7 +40,7 @@ ip_set_timeout_uget(struct nlattr *tb)
 }

 static inline bool
-ip_set_timeout_expired(unsigned long *t)
+ip_set_timeout_expired(const unsigned long *t)
 {
 	return *t != IPSET_ELEM_PERMANENT && time_is_before_jiffies(*t);
 }
@@ -63,7 +63,7 @@ ip_set_timeout_set(unsigned long *timeout, u32 value)
 }

 static inline u32
-ip_set_timeout_get(unsigned long *timeout)
+ip_set_timeout_get(const unsigned long *timeout)
 {
 	return *timeout == IPSET_ELEM_PERMANENT ? 0 :
 		jiffies_to_msecs(*timeout - jiffies)/MSEC_PER_SEC;
......
@@ -4,6 +4,7 @@

 #include <linux/netdevice.h>
 #include <linux/static_key.h>
+#include <linux/netfilter.h>
 #include <uapi/linux/netfilter/x_tables.h>

 /* Test a struct->invflags and a boolean for inequality */
@@ -17,14 +18,9 @@
  * @target:	the target extension
  * @matchinfo:	per-match data
  * @targetinfo:	per-target data
- * @net		network namespace through which the action was invoked
- * @in:		input netdevice
- * @out:	output netdevice
+ * @state:	pointer to hook state this packet came from
  * @fragoff:	packet is a fragment, this is the data offset
  * @thoff:	position of transport header relative to skb->data
- * @hook:	hook number given packet came from
- * @family:	Actual NFPROTO_* through which the function is invoked
- *		(helpful when match->family == NFPROTO_UNSPEC)
  *
  * Fields written to by extensions:
  *
@@ -38,15 +34,47 @@ struct xt_action_param {
 	union {
 		const void *matchinfo, *targinfo;
 	};
-	struct net *net;
-	const struct net_device *in, *out;
+	const struct nf_hook_state *state;
 	int fragoff;
 	unsigned int thoff;
-	unsigned int hooknum;
-	u_int8_t family;
 	bool hotdrop;
 };

+static inline struct net *xt_net(const struct xt_action_param *par)
+{
+	return par->state->net;
+}
+
+static inline struct net_device *xt_in(const struct xt_action_param *par)
+{
+	return par->state->in;
+}
+
+static inline const char *xt_inname(const struct xt_action_param *par)
+{
+	return par->state->in->name;
+}
+
+static inline struct net_device *xt_out(const struct xt_action_param *par)
+{
+	return par->state->out;
+}
+
+static inline const char *xt_outname(const struct xt_action_param *par)
+{
+	return par->state->out->name;
+}
+
+static inline unsigned int xt_hooknum(const struct xt_action_param *par)
+{
+	return par->state->hook;
+}
+
+static inline u_int8_t xt_family(const struct xt_action_param *par)
+{
+	return par->state->pf;
+}
+
 /**
  * struct xt_mtchk_param - parameters for match extensions'
  * checkentry functions
......
@@ -26,10 +26,10 @@ static inline int nf_hook_ingress(struct sk_buff *skb)
 	if (unlikely(!e))
 		return 0;

-	nf_hook_state_init(&state, e, NF_NETDEV_INGRESS, INT_MIN,
+	nf_hook_state_init(&state, NF_NETDEV_INGRESS,
 			   NFPROTO_NETDEV, skb->dev, NULL, NULL,
 			   dev_net(skb->dev), NULL);
-	return nf_hook_slow(skb, &state);
+	return nf_hook_slow(skb, &state, e);
 }

 static inline void nf_hook_ingress_init(struct net_device *dev)
......
@@ -125,14 +125,24 @@ struct nf_conntrack_l4proto *nf_ct_l4proto_find_get(u_int16_t l3proto,
 void nf_ct_l4proto_put(struct nf_conntrack_l4proto *p);

 /* Protocol pernet registration. */
+int nf_ct_l4proto_pernet_register_one(struct net *net,
+				      struct nf_conntrack_l4proto *proto);
+void nf_ct_l4proto_pernet_unregister_one(struct net *net,
+					 struct nf_conntrack_l4proto *proto);
 int nf_ct_l4proto_pernet_register(struct net *net,
-				  struct nf_conntrack_l4proto *proto);
+				  struct nf_conntrack_l4proto *proto[],
+				  unsigned int num_proto);
 void nf_ct_l4proto_pernet_unregister(struct net *net,
-				     struct nf_conntrack_l4proto *proto);
+				     struct nf_conntrack_l4proto *proto[],
+				     unsigned int num_proto);

 /* Protocol global registration. */
-int nf_ct_l4proto_register(struct nf_conntrack_l4proto *proto);
-void nf_ct_l4proto_unregister(struct nf_conntrack_l4proto *proto);
+int nf_ct_l4proto_register_one(struct nf_conntrack_l4proto *proto);
+void nf_ct_l4proto_unregister_one(struct nf_conntrack_l4proto *proto);
+int nf_ct_l4proto_register(struct nf_conntrack_l4proto *proto[],
+			   unsigned int num_proto);
+void nf_ct_l4proto_unregister(struct nf_conntrack_l4proto *proto[],
+			      unsigned int num_proto);

 /* Generic netlink helpers */
 int nf_ct_port_tuple_to_nlattr(struct sk_buff *skb,
......
@@ -12,6 +12,7 @@ struct nf_queue_entry {
 	unsigned int		id;

 	struct nf_hook_state	state;
+	struct nf_hook_entry	*hook;
 	u16			size; /* sizeof(entry) + saved route keys */

 	/* extra space to store route keys */
......
@@ -14,27 +14,43 @@

 struct nft_pktinfo {
 	struct sk_buff			*skb;
-	struct net			*net;
-	const struct net_device		*in;
-	const struct net_device		*out;
-	u8				pf;
-	u8				hook;
 	bool				tprot_set;
 	u8				tprot;
 	/* for x_tables compatibility */
 	struct xt_action_param		xt;
 };

+static inline struct net *nft_net(const struct nft_pktinfo *pkt)
+{
+	return pkt->xt.state->net;
+}
+
+static inline unsigned int nft_hook(const struct nft_pktinfo *pkt)
+{
+	return pkt->xt.state->hook;
+}
+
+static inline u8 nft_pf(const struct nft_pktinfo *pkt)
+{
+	return pkt->xt.state->pf;
+}
+
+static inline const struct net_device *nft_in(const struct nft_pktinfo *pkt)
+{
+	return pkt->xt.state->in;
+}
+
+static inline const struct net_device *nft_out(const struct nft_pktinfo *pkt)
+{
+	return pkt->xt.state->out;
+}
+
 static inline void nft_set_pktinfo(struct nft_pktinfo *pkt,
 				   struct sk_buff *skb,
 				   const struct nf_hook_state *state)
 {
 	pkt->skb = skb;
-	pkt->net = pkt->xt.net = state->net;
-	pkt->in = pkt->xt.in = state->in;
-	pkt->out = pkt->xt.out = state->out;
-	pkt->hook = pkt->xt.hooknum = state->hook;
-	pkt->pf = pkt->xt.family = state->pf;
+	pkt->xt.state = state;
 }

 static inline void nft_set_pktinfo_proto_unspec(struct nft_pktinfo *pkt,
......
 #ifndef _NET_NF_TABLES_CORE_H
 #define _NET_NF_TABLES_CORE_H
 
+extern struct nft_expr_type nft_imm_type;
+extern struct nft_expr_type nft_cmp_type;
+extern struct nft_expr_type nft_lookup_type;
+extern struct nft_expr_type nft_bitwise_type;
+extern struct nft_expr_type nft_byteorder_type;
+extern struct nft_expr_type nft_payload_type;
+extern struct nft_expr_type nft_dynset_type;
+extern struct nft_expr_type nft_range_type;
+
 int nf_tables_core_module_init(void);
 void nf_tables_core_module_exit(void);
 
-int nft_immediate_module_init(void);
-void nft_immediate_module_exit(void);
-
 struct nft_cmp_fast_expr {
 	u32			data;
 	enum nft_registers	sreg:8;
@@ -25,24 +31,6 @@ static inline u32 nft_cmp_fast_mask(unsigned int len)
 
 extern const struct nft_expr_ops nft_cmp_fast_ops;
 
-int nft_cmp_module_init(void);
-void nft_cmp_module_exit(void);
-
-int nft_range_module_init(void);
-void nft_range_module_exit(void);
-
-int nft_lookup_module_init(void);
-void nft_lookup_module_exit(void);
-
-int nft_dynset_module_init(void);
-void nft_dynset_module_exit(void);
-
-int nft_bitwise_module_init(void);
-void nft_bitwise_module_exit(void);
-
-int nft_byteorder_module_init(void);
-void nft_byteorder_module_exit(void);
-
 struct nft_payload {
 	enum nft_payload_bases	base:8;
 	u8			offset;
@@ -62,7 +50,4 @@ struct nft_payload_set {
 extern const struct nft_expr_ops nft_payload_fast_ops;
 
 extern struct static_key_false nft_trace_enabled;
 
-int nft_payload_module_init(void);
-void nft_payload_module_exit(void);
-
 #endif /* _NET_NF_TABLES_CORE_H */
@@ -91,7 +91,6 @@ struct netns_ct {
 	struct nf_ip_net	nf_ct_proto;
 #if defined(CONFIG_NF_CONNTRACK_LABELS)
 	unsigned int		labels_used;
-	u8			label_words;
 #endif
 };
 #endif
@@ -13,7 +13,7 @@
 #define NF_STOLEN 2
 #define NF_QUEUE 3
 #define NF_REPEAT 4
-#define NF_STOP 5
+#define NF_STOP 5	/* Deprecated, for userspace nf_queue compatibility. */
 #define NF_MAX_VERDICT NF_STOP
 
 /* we overload the higher bits for encoding auxiliary data such as the queue
......
@@ -561,8 +561,8 @@ static int br_nf_forward_finish(struct net *net, struct sock *sk, struct sk_buff
 	}
 	nf_bridge_push_encap_header(skb);
-	NF_HOOK_THRESH(NFPROTO_BRIDGE, NF_BR_FORWARD, net, sk, skb,
-		       in, skb->dev, br_forward_finish, 1);
+	br_nf_hook_thresh(NF_BR_FORWARD, net, sk, skb, in, skb->dev,
+			  br_forward_finish);
 	return 0;
 }
@@ -845,8 +845,10 @@ static unsigned int ip_sabotage_in(void *priv,
 				   struct sk_buff *skb,
 				   const struct nf_hook_state *state)
 {
-	if (skb->nf_bridge && !skb->nf_bridge->in_prerouting)
-		return NF_STOP;
+	if (skb->nf_bridge && !skb->nf_bridge->in_prerouting) {
+		state->okfn(state->net, state->sk, skb);
+		return NF_STOLEN;
+	}
 
 	return NF_ACCEPT;
 }
@@ -1016,10 +1018,10 @@ int br_nf_hook_thresh(unsigned int hook, struct net *net,
 
 	/* We may already have this, but read-locks nest anyway */
 	rcu_read_lock();
-	nf_hook_state_init(&state, elem, hook, NF_BR_PRI_BRNF + 1,
-			   NFPROTO_BRIDGE, indev, outdev, sk, net, okfn);
+	nf_hook_state_init(&state, hook, NFPROTO_BRIDGE, indev, outdev,
+			   sk, net, okfn);
 
-	ret = nf_hook_slow(skb, &state);
+	ret = nf_hook_slow(skb, &state, elem);
 	rcu_read_unlock();
 	if (ret == 1)
 		ret = okfn(net, sk, skb);
......
@@ -51,7 +51,8 @@ ebt_arpreply_tg(struct sk_buff *skb, const struct xt_action_param *par)
 	if (diptr == NULL)
 		return EBT_DROP;
 
-	arp_send(ARPOP_REPLY, ETH_P_ARP, *siptr, (struct net_device *)par->in,
+	arp_send(ARPOP_REPLY, ETH_P_ARP, *siptr,
+		 (struct net_device *)xt_in(par),
 		 *diptr, shp, info->mac, shp);
 
 	return info->target;
......
@@ -179,7 +179,7 @@ ebt_log_tg(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct ebt_log_info *info = par->targinfo;
 	struct nf_loginfo li;
-	struct net *net = par->net;
+	struct net *net = xt_net(par);
 
 	li.type = NF_LOG_TYPE_LOG;
 	li.u.log.level = info->loglevel;
@@ -190,11 +190,12 @@ ebt_log_tg(struct sk_buff *skb, const struct xt_action_param *par)
 	 * nf_log_packet() with NFT_LOG_TYPE_LOG here. --Pablo
 	 */
 	if (info->bitmask & EBT_LOG_NFLOG)
-		nf_log_packet(net, NFPROTO_BRIDGE, par->hooknum, skb,
-			      par->in, par->out, &li, "%s", info->prefix);
+		nf_log_packet(net, NFPROTO_BRIDGE, xt_hooknum(par), skb,
+			      xt_in(par), xt_out(par), &li, "%s",
+			      info->prefix);
 	else
-		ebt_log_packet(net, NFPROTO_BRIDGE, par->hooknum, skb, par->in,
-			       par->out, &li, info->prefix);
+		ebt_log_packet(net, NFPROTO_BRIDGE, xt_hooknum(par), skb,
+			       xt_in(par), xt_out(par), &li, info->prefix);
 	return EBT_CONTINUE;
 }
......
@@ -23,16 +23,16 @@ static unsigned int
 ebt_nflog_tg(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct ebt_nflog_info *info = par->targinfo;
+	struct net *net = xt_net(par);
 	struct nf_loginfo li;
-	struct net *net = par->net;
 
 	li.type = NF_LOG_TYPE_ULOG;
 	li.u.ulog.copy_len = info->len;
 	li.u.ulog.group = info->group;
 	li.u.ulog.qthreshold = info->threshold;
 
-	nf_log_packet(net, PF_BRIDGE, par->hooknum, skb, par->in,
-		      par->out, &li, "%s", info->prefix);
+	nf_log_packet(net, PF_BRIDGE, xt_hooknum(par), skb, xt_in(par),
+		      xt_out(par), &li, "%s", info->prefix);
 	return EBT_CONTINUE;
 }
......
@@ -23,12 +23,12 @@ ebt_redirect_tg(struct sk_buff *skb, const struct xt_action_param *par)
 	if (!skb_make_writable(skb, 0))
 		return EBT_DROP;
 
-	if (par->hooknum != NF_BR_BROUTING)
+	if (xt_hooknum(par) != NF_BR_BROUTING)
 		/* rcu_read_lock()ed by nf_hook_thresh */
 		ether_addr_copy(eth_hdr(skb)->h_dest,
-				br_port_get_rcu(par->in)->br->dev->dev_addr);
+				br_port_get_rcu(xt_in(par))->br->dev->dev_addr);
 	else
-		ether_addr_copy(eth_hdr(skb)->h_dest, par->in->dev_addr);
+		ether_addr_copy(eth_hdr(skb)->h_dest, xt_in(par)->dev_addr);
 	skb->pkt_type = PACKET_HOST;
 	return info->target;
 }
......
@@ -53,7 +53,7 @@ static int ebt_broute(struct sk_buff *skb)
 	struct nf_hook_state state;
 	int ret;
 
-	nf_hook_state_init(&state, NULL, NF_BR_BROUTING, INT_MIN,
+	nf_hook_state_init(&state, NF_BR_BROUTING,
 			   NFPROTO_BRIDGE, skb->dev, NULL, NULL,
 			   dev_net(skb->dev), NULL);
......
@@ -194,12 +194,8 @@ unsigned int ebt_do_table(struct sk_buff *skb,
 	const struct ebt_table_info *private;
 	struct xt_action_param acpar;
 
-	acpar.family  = NFPROTO_BRIDGE;
-	acpar.net     = state->net;
-	acpar.in      = state->in;
-	acpar.out     = state->out;
+	acpar.state   = state;
 	acpar.hotdrop = false;
-	acpar.hooknum = hook;
 
 	read_lock_bh(&table->lock);
 	private = table->private;
......
@@ -23,7 +23,7 @@ static void nft_meta_bridge_get_eval(const struct nft_expr *expr,
 				     const struct nft_pktinfo *pkt)
 {
 	const struct nft_meta *priv = nft_expr_priv(expr);
-	const struct net_device *in = pkt->in, *out = pkt->out;
+	const struct net_device *in = nft_in(pkt), *out = nft_out(pkt);
 	u32 *dest = &regs->data[priv->dreg];
 	const struct net_bridge_port *p;
......
@@ -315,17 +315,20 @@ static void nft_reject_bridge_eval(const struct nft_expr *expr,
 	case htons(ETH_P_IP):
 		switch (priv->type) {
 		case NFT_REJECT_ICMP_UNREACH:
-			nft_reject_br_send_v4_unreach(pkt->net, pkt->skb,
-						      pkt->in, pkt->hook,
+			nft_reject_br_send_v4_unreach(nft_net(pkt), pkt->skb,
+						      nft_in(pkt),
+						      nft_hook(pkt),
 						      priv->icmp_code);
 			break;
 		case NFT_REJECT_TCP_RST:
-			nft_reject_br_send_v4_tcp_reset(pkt->net, pkt->skb,
-							pkt->in, pkt->hook);
+			nft_reject_br_send_v4_tcp_reset(nft_net(pkt), pkt->skb,
+							nft_in(pkt),
+							nft_hook(pkt));
 			break;
 		case NFT_REJECT_ICMPX_UNREACH:
-			nft_reject_br_send_v4_unreach(pkt->net, pkt->skb,
-						      pkt->in, pkt->hook,
+			nft_reject_br_send_v4_unreach(nft_net(pkt), pkt->skb,
+						      nft_in(pkt),
+						      nft_hook(pkt),
 						      nft_reject_icmp_code(priv->icmp_code));
 			break;
 		}
@@ -333,17 +336,20 @@ static void nft_reject_bridge_eval(const struct nft_expr *expr,
 	case htons(ETH_P_IPV6):
 		switch (priv->type) {
 		case NFT_REJECT_ICMP_UNREACH:
-			nft_reject_br_send_v6_unreach(pkt->net, pkt->skb,
-						      pkt->in, pkt->hook,
+			nft_reject_br_send_v6_unreach(nft_net(pkt), pkt->skb,
+						      nft_in(pkt),
+						      nft_hook(pkt),
 						      priv->icmp_code);
 			break;
 		case NFT_REJECT_TCP_RST:
-			nft_reject_br_send_v6_tcp_reset(pkt->net, pkt->skb,
-							pkt->in, pkt->hook);
+			nft_reject_br_send_v6_tcp_reset(nft_net(pkt), pkt->skb,
+							nft_in(pkt),
+							nft_hook(pkt));
 			break;
 		case NFT_REJECT_ICMPX_UNREACH:
-			nft_reject_br_send_v6_unreach(pkt->net, pkt->skb,
-						      pkt->in, pkt->hook,
+			nft_reject_br_send_v6_unreach(nft_net(pkt), pkt->skb,
+						      nft_in(pkt),
+						      nft_hook(pkt),
 						      nft_reject_icmpv6_code(priv->icmp_code));
 			break;
 		}
......
@@ -217,11 +217,7 @@ unsigned int arpt_do_table(struct sk_buff *skb,
 	 */
 	e = get_entry(table_base, private->hook_entry[hook]);
 
-	acpar.net     = state->net;
-	acpar.in      = state->in;
-	acpar.out     = state->out;
-	acpar.hooknum = hook;
-	acpar.family  = NFPROTO_ARP;
+	acpar.state   = state;
 	acpar.hotdrop = false;
 
 	arp = arp_hdr(skb);
@@ -809,7 +805,7 @@ static int get_info(struct net *net, void __user *user,
 #endif
 	t = try_then_request_module(xt_find_table_lock(net, NFPROTO_ARP, name),
 				    "arptable_%s", name);
-	if (!IS_ERR_OR_NULL(t)) {
+	if (t) {
 		struct arpt_getinfo info;
 		const struct xt_table_info *private = t->private;
 #ifdef CONFIG_COMPAT
@@ -838,7 +834,7 @@ static int get_info(struct net *net, void __user *user,
 		xt_table_unlock(t);
 		module_put(t->me);
 	} else
-		ret = t ? PTR_ERR(t) : -ENOENT;
+		ret = -ENOENT;
 #ifdef CONFIG_COMPAT
 	if (compat)
 		xt_compat_unlock(NFPROTO_ARP);
@@ -863,7 +859,7 @@ static int get_entries(struct net *net, struct arpt_get_entries __user *uptr,
 	get.name[sizeof(get.name) - 1] = '\0';
 
 	t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
-	if (!IS_ERR_OR_NULL(t)) {
+	if (t) {
 		const struct xt_table_info *private = t->private;
 
 		if (get.size == private->size)
@@ -875,7 +871,7 @@ static int get_entries(struct net *net, struct arpt_get_entries __user *uptr,
 		module_put(t->me);
 		xt_table_unlock(t);
 	} else
-		ret = t ? PTR_ERR(t) : -ENOENT;
+		ret = -ENOENT;
 
 	return ret;
 }
@@ -902,8 +898,8 @@ static int __do_replace(struct net *net, const char *name,
 	t = try_then_request_module(xt_find_table_lock(net, NFPROTO_ARP, name),
 				    "arptable_%s", name);
-	if (IS_ERR_OR_NULL(t)) {
-		ret = t ? PTR_ERR(t) : -ENOENT;
+	if (!t) {
+		ret = -ENOENT;
 		goto free_newinfo_counters_untrans;
 	}
@@ -1018,8 +1014,8 @@ static int do_add_counters(struct net *net, const void __user *user,
 		return PTR_ERR(paddc);
 
 	t = xt_find_table_lock(net, NFPROTO_ARP, tmp.name);
-	if (IS_ERR_OR_NULL(t)) {
-		ret = t ? PTR_ERR(t) : -ENOENT;
+	if (!t) {
+		ret = -ENOENT;
 		goto free;
 	}
@@ -1408,7 +1404,7 @@ static int compat_get_entries(struct net *net,
 	xt_compat_lock(NFPROTO_ARP);
 	t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
-	if (!IS_ERR_OR_NULL(t)) {
+	if (t) {
 		const struct xt_table_info *private = t->private;
 		struct xt_table_info info;
@@ -1423,7 +1419,7 @@ static int compat_get_entries(struct net *net,
 		module_put(t->me);
 		xt_table_unlock(t);
 	} else
-		ret = t ? PTR_ERR(t) : -ENOENT;
+		ret = -ENOENT;
 	xt_compat_unlock(NFPROTO_ARP);
 	return ret;
......
@@ -261,11 +261,7 @@ ipt_do_table(struct sk_buff *skb,
 	acpar.fragoff = ntohs(ip->frag_off) & IP_OFFSET;
 	acpar.thoff   = ip_hdrlen(skb);
 	acpar.hotdrop = false;
-	acpar.net     = state->net;
-	acpar.in      = state->in;
-	acpar.out     = state->out;
-	acpar.family  = NFPROTO_IPV4;
-	acpar.hooknum = hook;
+	acpar.state   = state;
 
 	IP_NF_ASSERT(table->valid_hooks & (1 << hook));
 	local_bh_disable();
@@ -977,7 +973,7 @@ static int get_info(struct net *net, void __user *user,
 #endif
 	t = try_then_request_module(xt_find_table_lock(net, AF_INET, name),
 				    "iptable_%s", name);
-	if (!IS_ERR_OR_NULL(t)) {
+	if (t) {
 		struct ipt_getinfo info;
 		const struct xt_table_info *private = t->private;
 #ifdef CONFIG_COMPAT
@@ -1007,7 +1003,7 @@ static int get_info(struct net *net, void __user *user,
 		xt_table_unlock(t);
 		module_put(t->me);
 	} else
-		ret = t ? PTR_ERR(t) : -ENOENT;
+		ret = -ENOENT;
 #ifdef CONFIG_COMPAT
 	if (compat)
 		xt_compat_unlock(AF_INET);
@@ -1032,7 +1028,7 @@ get_entries(struct net *net, struct ipt_get_entries __user *uptr,
 	get.name[sizeof(get.name) - 1] = '\0';
 
 	t = xt_find_table_lock(net, AF_INET, get.name);
-	if (!IS_ERR_OR_NULL(t)) {
+	if (t) {
 		const struct xt_table_info *private = t->private;
 
 		if (get.size == private->size)
 			ret = copy_entries_to_user(private->size,
@@ -1043,7 +1039,7 @@ get_entries(struct net *net, struct ipt_get_entries __user *uptr,
 		module_put(t->me);
 		xt_table_unlock(t);
 	} else
-		ret = t ? PTR_ERR(t) : -ENOENT;
+		ret = -ENOENT;
 
 	return ret;
 }
@@ -1068,8 +1064,8 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
 	t = try_then_request_module(xt_find_table_lock(net, AF_INET, name),
 				    "iptable_%s", name);
-	if (IS_ERR_OR_NULL(t)) {
-		ret = t ? PTR_ERR(t) : -ENOENT;
+	if (!t) {
+		ret = -ENOENT;
 		goto free_newinfo_counters_untrans;
 	}
@@ -1184,8 +1180,8 @@ do_add_counters(struct net *net, const void __user *user,
 		return PTR_ERR(paddc);
 
 	t = xt_find_table_lock(net, AF_INET, tmp.name);
-	if (IS_ERR_OR_NULL(t)) {
-		ret = t ? PTR_ERR(t) : -ENOENT;
+	if (!t) {
+		ret = -ENOENT;
 		goto free;
 	}
@@ -1630,7 +1626,7 @@ compat_get_entries(struct net *net, struct compat_ipt_get_entries __user *uptr,
 	xt_compat_lock(AF_INET);
 	t = xt_find_table_lock(net, AF_INET, get.name);
-	if (!IS_ERR_OR_NULL(t)) {
+	if (t) {
 		const struct xt_table_info *private = t->private;
 		struct xt_table_info info;
 		ret = compat_table_info(private, &info);
@@ -1644,7 +1640,7 @@ compat_get_entries(struct net *net, struct compat_ipt_get_entries __user *uptr,
 		module_put(t->me);
 		xt_table_unlock(t);
 	} else
-		ret = t ? PTR_ERR(t) : -ENOENT;
+		ret = -ENOENT;
 	xt_compat_unlock(AF_INET);
 	return ret;
......
@@ -55,7 +55,8 @@ masquerade_tg(struct sk_buff *skb, const struct xt_action_param *par)
 	range.min_proto = mr->range[0].min;
 	range.max_proto = mr->range[0].max;
 
-	return nf_nat_masquerade_ipv4(skb, par->hooknum, &range, par->out);
+	return nf_nat_masquerade_ipv4(skb, xt_hooknum(par), &range,
+				      xt_out(par));
 }
 
 static struct xt_target masquerade_tg_reg __read_mostly = {
......
@@ -34,7 +34,7 @@ static unsigned int
 reject_tg(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct ipt_reject_info *reject = par->targinfo;
-	int hook = par->hooknum;
+	int hook = xt_hooknum(par);
 
 	switch (reject->with) {
 	case IPT_ICMP_NET_UNREACHABLE:
@@ -59,7 +59,7 @@ reject_tg(struct sk_buff *skb, const struct xt_action_param *par)
 		nf_send_unreach(skb, ICMP_PKT_FILTERED, hook);
 		break;
 	case IPT_TCP_RESET:
-		nf_send_reset(par->net, skb, hook);
+		nf_send_reset(xt_net(par), skb, hook);
 	case IPT_ICMP_ECHOREPLY:
 		/* Doesn't happen. */
 		break;
......
@@ -263,12 +263,12 @@ static unsigned int
 synproxy_tg4(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct xt_synproxy_info *info = par->targinfo;
-	struct net *net = par->net;
+	struct net *net = xt_net(par);
 	struct synproxy_net *snet = synproxy_pernet(net);
 	struct synproxy_options opts = {};
 	struct tcphdr *th, _th;
 
-	if (nf_ip_checksum(skb, par->hooknum, par->thoff, IPPROTO_TCP))
+	if (nf_ip_checksum(skb, xt_hooknum(par), par->thoff, IPPROTO_TCP))
 		return NF_DROP;
 
 	th = skb_header_pointer(skb, par->thoff, sizeof(_th), &_th);
......
@@ -95,7 +95,7 @@ static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par)
 	flow.flowi4_tos = RT_TOS(iph->tos);
 	flow.flowi4_scope = RT_SCOPE_UNIVERSE;
 
-	return rpfilter_lookup_reverse(par->net, &flow, par->in, info->flags) ^ invert;
+	return rpfilter_lookup_reverse(xt_net(par), &flow, xt_in(par), info->flags) ^ invert;
 }
 
 static int rpfilter_check(const struct xt_mtchk_param *par)
......
@@ -336,47 +336,34 @@ MODULE_ALIAS("nf_conntrack-" __stringify(AF_INET));
 MODULE_ALIAS("ip_conntrack");
 MODULE_LICENSE("GPL");
 
+static struct nf_conntrack_l4proto *builtin_l4proto4[] = {
+	&nf_conntrack_l4proto_tcp4,
+	&nf_conntrack_l4proto_udp4,
+	&nf_conntrack_l4proto_icmp,
+};
+
 static int ipv4_net_init(struct net *net)
 {
 	int ret = 0;
 
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_tcp4);
-	if (ret < 0) {
-		pr_err("nf_conntrack_tcp4: pernet registration failed\n");
-		goto out_tcp;
-	}
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_udp4);
-	if (ret < 0) {
-		pr_err("nf_conntrack_udp4: pernet registration failed\n");
-		goto out_udp;
-	}
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_icmp);
-	if (ret < 0) {
-		pr_err("nf_conntrack_icmp4: pernet registration failed\n");
-		goto out_icmp;
-	}
+	ret = nf_ct_l4proto_pernet_register(net, builtin_l4proto4,
+					    ARRAY_SIZE(builtin_l4proto4));
+	if (ret < 0)
+		return ret;
+
 	ret = nf_ct_l3proto_pernet_register(net, &nf_conntrack_l3proto_ipv4);
 	if (ret < 0) {
 		pr_err("nf_conntrack_ipv4: pernet registration failed\n");
-		goto out_ipv4;
+		nf_ct_l4proto_pernet_unregister(net, builtin_l4proto4,
+						ARRAY_SIZE(builtin_l4proto4));
 	}
-	return 0;
-out_ipv4:
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_icmp);
-out_icmp:
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_udp4);
-out_udp:
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_tcp4);
-out_tcp:
 	return ret;
 }
 
 static void ipv4_net_exit(struct net *net)
 {
 	nf_ct_l3proto_pernet_unregister(net, &nf_conntrack_l3proto_ipv4);
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_icmp);
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_udp4);
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_tcp4);
+	nf_ct_l4proto_pernet_unregister(net, builtin_l4proto4,
+					ARRAY_SIZE(builtin_l4proto4));
 }
 
 static struct pernet_operations ipv4_net_ops = {
@@ -410,37 +397,21 @@ static int __init nf_conntrack_l3proto_ipv4_init(void)
 		goto cleanup_pernet;
 	}
 
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_tcp4);
-	if (ret < 0) {
-		pr_err("nf_conntrack_ipv4: can't register tcp4 proto.\n");
-		goto cleanup_hooks;
-	}
-
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_udp4);
-	if (ret < 0) {
-		pr_err("nf_conntrack_ipv4: can't register udp4 proto.\n");
-		goto cleanup_tcp4;
-	}
-
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_icmp);
-	if (ret < 0) {
-		pr_err("nf_conntrack_ipv4: can't register icmpv4 proto.\n");
-		goto cleanup_udp4;
-	}
+	ret = nf_ct_l4proto_register(builtin_l4proto4,
+				     ARRAY_SIZE(builtin_l4proto4));
+	if (ret < 0)
+		goto cleanup_hooks;
 
 	ret = nf_ct_l3proto_register(&nf_conntrack_l3proto_ipv4);
 	if (ret < 0) {
 		pr_err("nf_conntrack_ipv4: can't register ipv4 proto.\n");
-		goto cleanup_icmpv4;
+		goto cleanup_l4proto;
 	}
 
 	return ret;
-cleanup_icmpv4:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_icmp);
-cleanup_udp4:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udp4);
-cleanup_tcp4:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_tcp4);
+cleanup_l4proto:
+	nf_ct_l4proto_unregister(builtin_l4proto4,
+				 ARRAY_SIZE(builtin_l4proto4));
 cleanup_hooks:
 	nf_unregister_hooks(ipv4_conntrack_ops, ARRAY_SIZE(ipv4_conntrack_ops));
 cleanup_pernet:
@@ -454,9 +425,8 @@ static void __exit nf_conntrack_l3proto_ipv4_fini(void)
 {
 	synchronize_net();
 	nf_ct_l3proto_unregister(&nf_conntrack_l3proto_ipv4);
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_icmp);
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udp4);
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_tcp4);
+	nf_ct_l4proto_unregister(builtin_l4proto4,
+				 ARRAY_SIZE(builtin_l4proto4));
 	nf_unregister_hooks(ipv4_conntrack_ops, ARRAY_SIZE(ipv4_conntrack_ops));
 	unregister_pernet_subsys(&ipv4_net_ops);
 	nf_unregister_sockopt(&so_getorigdst);
......
@@ -30,7 +30,7 @@ static void nft_dup_ipv4_eval(const struct nft_expr *expr,
 	};
 	int oif = regs->data[priv->sreg_dev];
 
-	nf_dup_ipv4(pkt->net, pkt->skb, pkt->hook, &gw, oif);
+	nf_dup_ipv4(nft_net(pkt), pkt->skb, nft_hook(pkt), &gw, oif);
 }
 
 static int nft_dup_ipv4_init(const struct nft_ctx *ctx,
......
@@ -45,9 +45,9 @@ void nft_fib4_eval_type(const struct nft_expr *expr, struct nft_regs *regs,
 	__be32 addr;
 
 	if (priv->flags & NFTA_FIB_F_IIF)
-		dev = pkt->in;
+		dev = nft_in(pkt);
 	else if (priv->flags & NFTA_FIB_F_OIF)
-		dev = pkt->out;
+		dev = nft_out(pkt);
 
 	iph = ip_hdr(pkt->skb);
 	if (priv->flags & NFTA_FIB_F_DADDR)
@@ -55,7 +55,7 @@ void nft_fib4_eval_type(const struct nft_expr *expr, struct nft_regs *regs,
 	else
 		addr = iph->saddr;
 
-	*dst = inet_dev_addr_type(pkt->net, dev, addr);
+	*dst = inet_dev_addr_type(nft_net(pkt), dev, addr);
 }
 EXPORT_SYMBOL_GPL(nft_fib4_eval_type);
@@ -89,13 +89,13 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
 	 * Search results for the desired outinterface instead.
 	 */
 	if (priv->flags & NFTA_FIB_F_OIF)
-		oif = pkt->out;
+		oif = nft_out(pkt);
 	else if (priv->flags & NFTA_FIB_F_IIF)
-		oif = pkt->in;
+		oif = nft_in(pkt);
 	else
 		oif = NULL;
 
-	if (pkt->hook == NF_INET_PRE_ROUTING && fib4_is_local(pkt->skb)) {
+	if (nft_hook(pkt) == NF_INET_PRE_ROUTING && fib4_is_local(pkt->skb)) {
 		nft_fib_store_result(dest, priv->result, pkt, LOOPBACK_IFINDEX);
 		return;
 	}
@@ -122,7 +122,7 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
 		fl4.saddr = get_saddr(iph->daddr);
 	}
 
-	if (fib_lookup(pkt->net, &fl4, &res, FIB_LOOKUP_IGNORE_LINKSTATE))
+	if (fib_lookup(nft_net(pkt), &fl4, &res, FIB_LOOKUP_IGNORE_LINKSTATE))
 		return;
 
 	switch (res.type) {
......
@@ -31,8 +31,8 @@ static void nft_masq_ipv4_eval(const struct nft_expr *expr,
 		range.max_proto.all =
 			*(__be16 *)&regs->data[priv->sreg_proto_max];
 	}
-	regs->verdict.code = nf_nat_masquerade_ipv4(pkt->skb, pkt->hook,
-						    &range, pkt->out);
+	regs->verdict.code = nf_nat_masquerade_ipv4(pkt->skb, nft_hook(pkt),
+						    &range, nft_out(pkt));
 }
 
 static struct nft_expr_type nft_masq_ipv4_type;
......
@@ -35,8 +35,7 @@ static void nft_redir_ipv4_eval(const struct nft_expr *expr,
 	mr.range[0].flags |= priv->flags;
 
-	regs->verdict.code = nf_nat_redirect_ipv4(pkt->skb, &mr,
-						  pkt->hook);
+	regs->verdict.code = nf_nat_redirect_ipv4(pkt->skb, &mr, nft_hook(pkt));
 }
 
 static struct nft_expr_type nft_redir_ipv4_type;
...
@@ -27,10 +27,10 @@ static void nft_reject_ipv4_eval(const struct nft_expr *expr,
 	switch (priv->type) {
 	case NFT_REJECT_ICMP_UNREACH:
-		nf_send_unreach(pkt->skb, priv->icmp_code, pkt->hook);
+		nf_send_unreach(pkt->skb, priv->icmp_code, nft_hook(pkt));
 		break;
 	case NFT_REJECT_TCP_RST:
-		nf_send_reset(pkt->net, pkt->skb, pkt->hook);
+		nf_send_reset(nft_net(pkt), pkt->skb, nft_hook(pkt));
 		break;
 	default:
 		break;
...
@@ -580,7 +580,8 @@ EXPORT_SYMBOL_GPL(udp4_lib_lookup_skb);
  * Does increment socket refcount.
  */
 #if IS_ENABLED(CONFIG_NETFILTER_XT_MATCH_SOCKET) || \
-    IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TPROXY)
+    IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TPROXY) || \
+    IS_ENABLED(CONFIG_NF_SOCKET_IPV4)
 struct sock *udp4_lib_lookup(struct net *net, __be32 saddr, __be16 sport,
 			     __be32 daddr, __be16 dport, int dif)
 {
...
@@ -291,11 +291,7 @@ ip6t_do_table(struct sk_buff *skb,
 	 * rule is also a fragment-specific rule, non-fragments won't
 	 * match it. */
 	acpar.hotdrop = false;
-	acpar.net     = state->net;
-	acpar.in      = state->in;
-	acpar.out     = state->out;
-	acpar.family  = NFPROTO_IPV6;
-	acpar.hooknum = hook;
+	acpar.state   = state;
 
 	IP_NF_ASSERT(table->valid_hooks & (1 << hook));
@@ -1007,7 +1003,7 @@ static int get_info(struct net *net, void __user *user,
 #endif
 	t = try_then_request_module(xt_find_table_lock(net, AF_INET6, name),
 				    "ip6table_%s", name);
-	if (!IS_ERR_OR_NULL(t)) {
+	if (t) {
 		struct ip6t_getinfo info;
 		const struct xt_table_info *private = t->private;
 #ifdef CONFIG_COMPAT
@@ -1037,7 +1033,7 @@ static int get_info(struct net *net, void __user *user,
 		xt_table_unlock(t);
 		module_put(t->me);
 	} else
-		ret = t ? PTR_ERR(t) : -ENOENT;
+		ret = -ENOENT;
 #ifdef CONFIG_COMPAT
 	if (compat)
 		xt_compat_unlock(AF_INET6);
@@ -1063,7 +1059,7 @@ get_entries(struct net *net, struct ip6t_get_entries __user *uptr,
 	get.name[sizeof(get.name) - 1] = '\0';
 
 	t = xt_find_table_lock(net, AF_INET6, get.name);
-	if (!IS_ERR_OR_NULL(t)) {
+	if (t) {
 		struct xt_table_info *private = t->private;
 		if (get.size == private->size)
 			ret = copy_entries_to_user(private->size,
@@ -1074,7 +1070,7 @@ get_entries(struct net *net, struct ip6t_get_entries __user *uptr,
 		module_put(t->me);
 		xt_table_unlock(t);
 	} else
-		ret = t ? PTR_ERR(t) : -ENOENT;
+		ret = -ENOENT;
 
 	return ret;
 }
@@ -1099,8 +1095,8 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
 	t = try_then_request_module(xt_find_table_lock(net, AF_INET6, name),
 				    "ip6table_%s", name);
-	if (IS_ERR_OR_NULL(t)) {
-		ret = t ? PTR_ERR(t) : -ENOENT;
+	if (!t) {
+		ret = -ENOENT;
 		goto free_newinfo_counters_untrans;
 	}
@@ -1214,8 +1210,8 @@ do_add_counters(struct net *net, const void __user *user, unsigned int len,
 	if (IS_ERR(paddc))
 		return PTR_ERR(paddc);
 
 	t = xt_find_table_lock(net, AF_INET6, tmp.name);
-	if (IS_ERR_OR_NULL(t)) {
-		ret = t ? PTR_ERR(t) : -ENOENT;
+	if (!t) {
+		ret = -ENOENT;
 		goto free;
 	}
@@ -1651,7 +1647,7 @@ compat_get_entries(struct net *net, struct compat_ip6t_get_entries __user *uptr,
 	xt_compat_lock(AF_INET6);
 	t = xt_find_table_lock(net, AF_INET6, get.name);
-	if (!IS_ERR_OR_NULL(t)) {
+	if (t) {
 		const struct xt_table_info *private = t->private;
 		struct xt_table_info info;
 		ret = compat_table_info(private, &info);
@@ -1665,7 +1661,7 @@ compat_get_entries(struct net *net, struct compat_ip6t_get_entries __user *uptr,
 		module_put(t->me);
 		xt_table_unlock(t);
 	} else
-		ret = t ? PTR_ERR(t) : -ENOENT;
 	xt_compat_unlock(AF_INET6);
 	return ret;
...
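The repeated IS_ERR_OR_NULL() cleanups above rely on xt_find_table_lock() now having a single failure mode: it returns either a valid table or NULL, never an ERR_PTR-encoded error, so callers can collapse `ret = t ? PTR_ERR(t) : -ENOENT` to a plain `-ENOENT`. A small userspace sketch of the resulting calling convention (hypothetical find_table() and get_info(); not the kernel functions):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

struct table { const char *name; };

/* Lookup with one failure mode: a match or NULL, never an
 * encoded error pointer, so callers need no IS_ERR_OR_NULL(). */
static struct table *find_table(struct table *tables, size_t n,
				const char *name)
{
	for (size_t i = 0; i < n; i++)
		if (strcmp(tables[i].name, name) == 0)
			return &tables[i];
	return NULL;
}

static int get_info(struct table *tables, size_t n, const char *name)
{
	struct table *t = find_table(tables, n, name);

	if (t)
		return 0;	/* found: proceed with the table */
	return -ENOENT;		/* NULL is the only failure, map it once */
}
```

Narrowing the return contract like this removes a branch per caller and makes the error path impossible to get wrong.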
@@ -24,7 +24,7 @@
 static unsigned int
 masquerade_tg6(struct sk_buff *skb, const struct xt_action_param *par)
 {
-	return nf_nat_masquerade_ipv6(skb, par->targinfo, par->out);
+	return nf_nat_masquerade_ipv6(skb, par->targinfo, xt_out(par));
 }
 
 static int masquerade_tg6_checkentry(const struct xt_tgchk_param *par)
...
@@ -39,35 +39,40 @@ static unsigned int
 reject_tg6(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct ip6t_reject_info *reject = par->targinfo;
-	struct net *net = par->net;
+	struct net *net = xt_net(par);
 
 	switch (reject->with) {
 	case IP6T_ICMP6_NO_ROUTE:
-		nf_send_unreach6(net, skb, ICMPV6_NOROUTE, par->hooknum);
+		nf_send_unreach6(net, skb, ICMPV6_NOROUTE, xt_hooknum(par));
 		break;
 	case IP6T_ICMP6_ADM_PROHIBITED:
-		nf_send_unreach6(net, skb, ICMPV6_ADM_PROHIBITED, par->hooknum);
+		nf_send_unreach6(net, skb, ICMPV6_ADM_PROHIBITED,
+				 xt_hooknum(par));
 		break;
 	case IP6T_ICMP6_NOT_NEIGHBOUR:
-		nf_send_unreach6(net, skb, ICMPV6_NOT_NEIGHBOUR, par->hooknum);
+		nf_send_unreach6(net, skb, ICMPV6_NOT_NEIGHBOUR,
+				 xt_hooknum(par));
 		break;
 	case IP6T_ICMP6_ADDR_UNREACH:
-		nf_send_unreach6(net, skb, ICMPV6_ADDR_UNREACH, par->hooknum);
+		nf_send_unreach6(net, skb, ICMPV6_ADDR_UNREACH,
+				 xt_hooknum(par));
 		break;
 	case IP6T_ICMP6_PORT_UNREACH:
-		nf_send_unreach6(net, skb, ICMPV6_PORT_UNREACH, par->hooknum);
+		nf_send_unreach6(net, skb, ICMPV6_PORT_UNREACH,
+				 xt_hooknum(par));
 		break;
 	case IP6T_ICMP6_ECHOREPLY:
 		/* Do nothing */
 		break;
 	case IP6T_TCP_RESET:
-		nf_send_reset6(net, skb, par->hooknum);
+		nf_send_reset6(net, skb, xt_hooknum(par));
 		break;
 	case IP6T_ICMP6_POLICY_FAIL:
-		nf_send_unreach6(net, skb, ICMPV6_POLICY_FAIL, par->hooknum);
+		nf_send_unreach6(net, skb, ICMPV6_POLICY_FAIL, xt_hooknum(par));
 		break;
 	case IP6T_ICMP6_REJECT_ROUTE:
-		nf_send_unreach6(net, skb, ICMPV6_REJECT_ROUTE, par->hooknum);
+		nf_send_unreach6(net, skb, ICMPV6_REJECT_ROUTE,
+				 xt_hooknum(par));
 		break;
 	}
...
@@ -277,12 +277,12 @@ static unsigned int
 synproxy_tg6(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct xt_synproxy_info *info = par->targinfo;
-	struct net *net = par->net;
+	struct net *net = xt_net(par);
 	struct synproxy_net *snet = synproxy_pernet(net);
 	struct synproxy_options opts = {};
 	struct tcphdr *th, _th;
 
-	if (nf_ip6_checksum(skb, par->hooknum, par->thoff, IPPROTO_TCP))
+	if (nf_ip6_checksum(skb, xt_hooknum(par), par->thoff, IPPROTO_TCP))
 		return NF_DROP;
 
 	th = skb_header_pointer(skb, par->thoff, sizeof(_th), &_th);
...
@@ -93,7 +93,8 @@ static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par)
 	if (unlikely(saddrtype == IPV6_ADDR_ANY))
 		return true ^ invert; /* not routable: forward path will drop it */
 
-	return rpfilter_lookup_reverse6(par->net, skb, par->in, info->flags) ^ invert;
+	return rpfilter_lookup_reverse6(xt_net(par), skb, xt_in(par),
+					info->flags) ^ invert;
 }
 
 static int rpfilter_check(const struct xt_mtchk_param *par)
...
@@ -336,47 +336,35 @@ static struct nf_sockopt_ops so_getorigdst6 = {
 	.owner		= THIS_MODULE,
 };
 
+static struct nf_conntrack_l4proto *builtin_l4proto6[] = {
+	&nf_conntrack_l4proto_tcp6,
+	&nf_conntrack_l4proto_udp6,
+	&nf_conntrack_l4proto_icmpv6,
+};
+
 static int ipv6_net_init(struct net *net)
 {
 	int ret = 0;
 
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_tcp6);
-	if (ret < 0) {
-		pr_err("nf_conntrack_tcp6: pernet registration failed\n");
-		goto out;
-	}
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_udp6);
-	if (ret < 0) {
-		pr_err("nf_conntrack_udp6: pernet registration failed\n");
-		goto cleanup_tcp6;
-	}
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_icmpv6);
-	if (ret < 0) {
-		pr_err("nf_conntrack_icmp6: pernet registration failed\n");
-		goto cleanup_udp6;
-	}
+	ret = nf_ct_l4proto_pernet_register(net, builtin_l4proto6,
+					    ARRAY_SIZE(builtin_l4proto6));
+	if (ret < 0)
+		return ret;
+
 	ret = nf_ct_l3proto_pernet_register(net, &nf_conntrack_l3proto_ipv6);
 	if (ret < 0) {
 		pr_err("nf_conntrack_ipv6: pernet registration failed.\n");
-		goto cleanup_icmpv6;
+		nf_ct_l4proto_pernet_unregister(net, builtin_l4proto6,
+						ARRAY_SIZE(builtin_l4proto6));
 	}
-	return 0;
- cleanup_icmpv6:
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_icmpv6);
- cleanup_udp6:
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_udp6);
- cleanup_tcp6:
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_tcp6);
- out:
 	return ret;
 }
 
 static void ipv6_net_exit(struct net *net)
 {
 	nf_ct_l3proto_pernet_unregister(net, &nf_conntrack_l3proto_ipv6);
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_icmpv6);
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_udp6);
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_tcp6);
+	nf_ct_l4proto_pernet_unregister(net, builtin_l4proto6,
+					ARRAY_SIZE(builtin_l4proto6));
 }
 
 static struct pernet_operations ipv6_net_ops = {
@@ -409,37 +397,20 @@ static int __init nf_conntrack_l3proto_ipv6_init(void)
 		goto cleanup_pernet;
 	}
 
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_tcp6);
-	if (ret < 0) {
-		pr_err("nf_conntrack_ipv6: can't register tcp6 proto.\n");
-		goto cleanup_hooks;
-	}
-
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_udp6);
-	if (ret < 0) {
-		pr_err("nf_conntrack_ipv6: can't register udp6 proto.\n");
-		goto cleanup_tcp6;
-	}
-
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_icmpv6);
-	if (ret < 0) {
-		pr_err("nf_conntrack_ipv6: can't register icmpv6 proto.\n");
-		goto cleanup_udp6;
-	}
+	ret = nf_ct_l4proto_register(builtin_l4proto6,
+				     ARRAY_SIZE(builtin_l4proto6));
+	if (ret < 0)
+		goto cleanup_hooks;
 
 	ret = nf_ct_l3proto_register(&nf_conntrack_l3proto_ipv6);
 	if (ret < 0) {
 		pr_err("nf_conntrack_ipv6: can't register ipv6 proto.\n");
-		goto cleanup_icmpv6;
+		goto cleanup_l4proto;
 	}
 	return ret;
 
- cleanup_icmpv6:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_icmpv6);
- cleanup_udp6:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udp6);
- cleanup_tcp6:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_tcp6);
+ cleanup_l4proto:
+	nf_ct_l4proto_unregister(builtin_l4proto6,
+				 ARRAY_SIZE(builtin_l4proto6));
 cleanup_hooks:
 	nf_unregister_hooks(ipv6_conntrack_ops, ARRAY_SIZE(ipv6_conntrack_ops));
 cleanup_pernet:
@@ -453,9 +424,8 @@ static void __exit nf_conntrack_l3proto_ipv6_fini(void)
 {
 	synchronize_net();
 	nf_ct_l3proto_unregister(&nf_conntrack_l3proto_ipv6);
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_tcp6);
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udp6);
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_icmpv6);
+	nf_ct_l4proto_unregister(builtin_l4proto6,
+				 ARRAY_SIZE(builtin_l4proto6));
 	nf_unregister_hooks(ipv6_conntrack_ops, ARRAY_SIZE(ipv6_conntrack_ops));
 	unregister_pernet_subsys(&ipv6_net_ops);
 	nf_unregister_sockopt(&so_getorigdst6);
...
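The conntrack hunk above replaces three open-coded register-or-goto-unwind ladders with a builtin_l4proto6[] array and array-taking register/unregister helpers: on failure at item i, the helper unwinds items 0..i-1 itself. A userspace sketch of that pattern (hypothetical proto struct; failure is simulated with a flag rather than a real registration):

```c
#include <assert.h>
#include <stddef.h>

struct proto {
	const char *name;
	int fail;		/* simulate a registration error */
	int registered;
};

/* Unwind in reverse registration order, mirroring the array helpers. */
static void protos_unregister(struct proto **p, size_t n)
{
	while (n--)
		p[n]->registered = 0;
}

/* Register all n items; on the first failure, roll back everything
 * registered so far and report the error to the caller. */
static int protos_register(struct proto **p, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		if (p[i]->fail) {
			protos_unregister(p, i);
			return -1;
		}
		p[i]->registered = 1;
	}
	return 0;
}
```

Centralizing the unwind in one loop removes the per-protocol goto labels, which is exactly what shrinks ipv6_net_init() and the module init/exit paths in the diff.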
@@ -28,7 +28,7 @@ static void nft_dup_ipv6_eval(const struct nft_expr *expr,
 	struct in6_addr *gw = (struct in6_addr *)&regs->data[priv->sreg_addr];
 	int oif = regs->data[priv->sreg_dev];
 
-	nf_dup_ipv6(pkt->net, pkt->skb, pkt->hook, gw, oif);
+	nf_dup_ipv6(nft_net(pkt), pkt->skb, nft_hook(pkt), gw, oif);
 }
 
 static int nft_dup_ipv6_init(const struct nft_ctx *ctx,
...
@@ -80,17 +80,17 @@ static u32 __nft_fib6_eval_type(const struct nft_fib *priv,
 		return RTN_UNREACHABLE;
 
 	if (priv->flags & NFTA_FIB_F_IIF)
-		dev = pkt->in;
+		dev = nft_in(pkt);
 	else if (priv->flags & NFTA_FIB_F_OIF)
-		dev = pkt->out;
+		dev = nft_out(pkt);
 
 	nft_fib6_flowi_init(&fl6, priv, pkt, dev);
 
 	v6ops = nf_get_ipv6_ops();
-	if (dev && v6ops && v6ops->chk_addr(pkt->net, &fl6.daddr, dev, true))
+	if (dev && v6ops && v6ops->chk_addr(nft_net(pkt), &fl6.daddr, dev, true))
 		ret = RTN_LOCAL;
 
-	route_err = afinfo->route(pkt->net, (struct dst_entry **)&rt,
+	route_err = afinfo->route(nft_net(pkt), (struct dst_entry **)&rt,
 				  flowi6_to_flowi(&fl6), false);
 	if (route_err)
 		goto err;
@@ -158,20 +158,20 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
 	int lookup_flags;
 
 	if (priv->flags & NFTA_FIB_F_IIF)
-		oif = pkt->in;
+		oif = nft_in(pkt);
 	else if (priv->flags & NFTA_FIB_F_OIF)
-		oif = pkt->out;
+		oif = nft_out(pkt);
 
 	lookup_flags = nft_fib6_flowi_init(&fl6, priv, pkt, oif);
 
-	if (pkt->hook == NF_INET_PRE_ROUTING && fib6_is_local(pkt->skb)) {
+	if (nft_hook(pkt) == NF_INET_PRE_ROUTING && fib6_is_local(pkt->skb)) {
 		nft_fib_store_result(dest, priv->result, pkt, LOOPBACK_IFINDEX);
 		return;
 	}
 
 	*dest = 0;
 again:
-	rt = (void *)ip6_route_lookup(pkt->net, &fl6, lookup_flags);
+	rt = (void *)ip6_route_lookup(nft_net(pkt), &fl6, lookup_flags);
 	if (rt->dst.error)
 		goto put_rt_err;
...
@@ -32,7 +32,8 @@ static void nft_masq_ipv6_eval(const struct nft_expr *expr,
 		range.max_proto.all =
 			*(__be16 *)&regs->data[priv->sreg_proto_max];
 	}
-	regs->verdict.code = nf_nat_masquerade_ipv6(pkt->skb, &range, pkt->out);
+	regs->verdict.code = nf_nat_masquerade_ipv6(pkt->skb, &range,
+						    nft_out(pkt));
 }
 
 static struct nft_expr_type nft_masq_ipv6_type;
...
@@ -35,7 +35,8 @@ static void nft_redir_ipv6_eval(const struct nft_expr *expr,
 	range.flags |= priv->flags;
 
-	regs->verdict.code = nf_nat_redirect_ipv6(pkt->skb, &range, pkt->hook);
+	regs->verdict.code =
+		nf_nat_redirect_ipv6(pkt->skb, &range, nft_hook(pkt));
 }
 
 static struct nft_expr_type nft_redir_ipv6_type;
...
@@ -27,11 +27,11 @@ static void nft_reject_ipv6_eval(const struct nft_expr *expr,
 	switch (priv->type) {
 	case NFT_REJECT_ICMP_UNREACH:
-		nf_send_unreach6(pkt->net, pkt->skb, priv->icmp_code,
-				 pkt->hook);
+		nf_send_unreach6(nft_net(pkt), pkt->skb, priv->icmp_code,
+				 nft_hook(pkt));
 		break;
 	case NFT_REJECT_TCP_RST:
-		nf_send_reset6(pkt->net, pkt->skb, pkt->hook);
+		nf_send_reset6(nft_net(pkt), pkt->skb, nft_hook(pkt));
 		break;
 	default:
 		break;
...
@@ -302,7 +302,8 @@ EXPORT_SYMBOL_GPL(udp6_lib_lookup_skb);
  * Does increment socket refcount.
  */
 #if IS_ENABLED(CONFIG_NETFILTER_XT_MATCH_SOCKET) || \
-    IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TPROXY)
+    IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TPROXY) || \
+    IS_ENABLED(CONFIG_NF_SOCKET_IPV6)
 struct sock *udp6_lib_lookup(struct net *net, const struct in6_addr *saddr, __be16 sport,
 			     const struct in6_addr *daddr, __be16 dport, int dif)
 {
...
@@ -302,70 +302,40 @@ void _nf_unregister_hooks(struct nf_hook_ops *reg, unsigned int n)
 }
 EXPORT_SYMBOL(_nf_unregister_hooks);
 
-unsigned int nf_iterate(struct sk_buff *skb,
-			struct nf_hook_state *state,
-			struct nf_hook_entry **entryp)
+/* Returns 1 if okfn() needs to be executed by the caller,
+ * -EPERM for NF_DROP, 0 otherwise.  Caller must hold rcu_read_lock. */
+int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state,
+		 struct nf_hook_entry *entry)
 {
 	unsigned int verdict;
+	int ret;
 
-	/*
-	 * The caller must not block between calls to this
-	 * function because of risk of continuing from deleted element.
-	 */
-	while (*entryp) {
-		if (state->thresh > (*entryp)->ops.priority) {
-			*entryp = rcu_dereference((*entryp)->next);
-			continue;
-		}
-
-		/* Optimization: we don't need to hold module
-		   reference here, since function can't sleep. --RR */
-repeat:
-		verdict = (*entryp)->ops.hook((*entryp)->ops.priv, skb, state);
-		if (verdict != NF_ACCEPT) {
-#ifdef CONFIG_NETFILTER_DEBUG
-			if (unlikely((verdict & NF_VERDICT_MASK)
-							> NF_MAX_VERDICT)) {
-				NFDEBUG("Evil return from %p(%u).\n",
-					(*entryp)->ops.hook, state->hook);
-				*entryp = rcu_dereference((*entryp)->next);
-				continue;
-			}
-#endif
-			if (verdict != NF_REPEAT)
-				return verdict;
-			goto repeat;
+	do {
+		verdict = entry->ops.hook(entry->ops.priv, skb, state);
+		switch (verdict & NF_VERDICT_MASK) {
+		case NF_ACCEPT:
+			entry = rcu_dereference(entry->next);
+			break;
+		case NF_DROP:
+			kfree_skb(skb);
+			ret = NF_DROP_GETERR(verdict);
+			if (ret == 0)
+				ret = -EPERM;
+			return ret;
+		case NF_QUEUE:
+			ret = nf_queue(skb, state, &entry, verdict);
+			if (ret == 1 && entry)
+				continue;
+			return ret;
+		default:
+			/* Implicit handling for NF_STOLEN, as well as any other
+			 * non conventional verdicts.
+			 */
+			return 0;
 		}
-		*entryp = rcu_dereference((*entryp)->next);
-	}
-	return NF_ACCEPT;
-}
-
-/* Returns 1 if okfn() needs to be executed by the caller,
- * -EPERM for NF_DROP, 0 otherwise.  Caller must hold rcu_read_lock. */
-int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state)
-{
-	struct nf_hook_entry *entry;
-	unsigned int verdict;
-	int ret = 0;
-
-	entry = rcu_dereference(state->hook_entries);
-next_hook:
-	verdict = nf_iterate(skb, state, &entry);
-	if (verdict == NF_ACCEPT || verdict == NF_STOP) {
-		ret = 1;
-	} else if ((verdict & NF_VERDICT_MASK) == NF_DROP) {
-		kfree_skb(skb);
-		ret = NF_DROP_GETERR(verdict);
-		if (ret == 0)
-			ret = -EPERM;
-	} else if ((verdict & NF_VERDICT_MASK) == NF_QUEUE) {
-		ret = nf_queue(skb, state, &entry, verdict);
-		if (ret == 1 && entry)
-			goto next_hook;
-	}
-	return ret;
+	} while (entry);
+
+	return 1;
 }
 EXPORT_SYMBOL(nf_hook_slow);
...
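The hunk above is the centerpiece of the performance rework: nf_iterate() is folded into nf_hook_slow(), so each packet makes one pass over the hook list with a single switch on the masked verdict, instead of bouncing between nf_iterate() and a caller-side goto loop. A compact userspace model of that control flow (hypothetical types; verdicts stored in the entry to keep it self-contained, and queueing omitted):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified verdicts modeled on NF_ACCEPT / NF_DROP / NF_STOLEN. */
enum verdict { V_ACCEPT, V_DROP, V_STOLEN };

struct hook_entry {
	enum verdict verdict;		/* what this hook would return */
	struct hook_entry *next;	/* singly linked hook chain */
};

/* Returns 1 if the caller should continue delivering the packet,
 * -1 for a drop, 0 when a hook took ownership of it. */
static int hook_run(struct hook_entry *entry)
{
	do {
		switch (entry->verdict) {
		case V_ACCEPT:
			entry = entry->next;	/* fall through to next hook */
			break;
		case V_DROP:
			return -1;		/* packet is freed here */
		default:
			return 0;		/* NF_STOLEN-like: hook owns it */
		}
	} while (entry);

	return 1;				/* every hook accepted */
}
```

Collapsing the two loops removes a function call and the NF_STOP/threshold special cases from the per-packet path, which is where the ~15% gain quoted in the cover letter comes from.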
@@ -99,6 +99,15 @@ config IP_SET_HASH_IPPORTNET
 
 	  To compile it as a module, choose M here.  If unsure, say N.
 
+config IP_SET_HASH_IPMAC
+	tristate "hash:ip,mac set support"
+	depends on IP_SET
+	help
+	  This option adds the hash:ip,mac set type support, by which
+	  one can store IPv4/IPv6 address and MAC (ethernet address) pairs in a set.
+
+	  To compile it as a module, choose M here.  If unsure, say N.
+
 config IP_SET_HASH_MAC
 	tristate "hash:mac set support"
 	depends on IP_SET
...
@@ -14,6 +14,7 @@ obj-$(CONFIG_IP_SET_BITMAP_PORT) += ip_set_bitmap_port.o
 
 # hash types
 obj-$(CONFIG_IP_SET_HASH_IP) += ip_set_hash_ip.o
+obj-$(CONFIG_IP_SET_HASH_IPMAC) += ip_set_hash_ipmac.o
 obj-$(CONFIG_IP_SET_HASH_IPMARK) += ip_set_hash_ipmark.o
 obj-$(CONFIG_IP_SET_HASH_IPPORT) += ip_set_hash_ipport.o
 obj-$(CONFIG_IP_SET_HASH_IPPORTIP) += ip_set_hash_ipportip.o
...
@@ -22,6 +22,7 @@
 #define mtype_kadt	IPSET_TOKEN(MTYPE, _kadt)
 #define mtype_uadt	IPSET_TOKEN(MTYPE, _uadt)
 #define mtype_destroy	IPSET_TOKEN(MTYPE, _destroy)
+#define mtype_memsize	IPSET_TOKEN(MTYPE, _memsize)
 #define mtype_flush	IPSET_TOKEN(MTYPE, _flush)
 #define mtype_head	IPSET_TOKEN(MTYPE, _head)
 #define mtype_same_set	IPSET_TOKEN(MTYPE, _same_set)
@@ -40,11 +41,8 @@ mtype_gc_init(struct ip_set *set, void (*gc)(unsigned long ul_set))
 {
 	struct mtype *map = set->data;
 
-	init_timer(&map->gc);
-	map->gc.data = (unsigned long)set;
-	map->gc.function = gc;
-	map->gc.expires = jiffies + IPSET_GC_PERIOD(set->timeout) * HZ;
-	add_timer(&map->gc);
+	setup_timer(&map->gc, gc, (unsigned long)set);
+	mod_timer(&map->gc, jiffies + IPSET_GC_PERIOD(set->timeout) * HZ);
 }
 
 static void
@@ -82,6 +80,16 @@ mtype_flush(struct ip_set *set)
 	if (set->extensions & IPSET_EXT_DESTROY)
 		mtype_ext_cleanup(set);
 	memset(map->members, 0, map->memsize);
+	set->elements = 0;
+	set->ext_size = 0;
+}
+
+/* Calculate the actual memory size of the set data */
+static size_t
+mtype_memsize(const struct mtype *map, size_t dsize)
+{
+	return sizeof(*map) + map->memsize +
+	       map->elements * dsize;
 }
 
 static int
@@ -89,14 +97,15 @@ mtype_head(struct ip_set *set, struct sk_buff *skb)
 {
 	const struct mtype *map = set->data;
 	struct nlattr *nested;
-	size_t memsize = sizeof(*map) + map->memsize;
+	size_t memsize = mtype_memsize(map, set->dsize) + set->ext_size;
 
 	nested = ipset_nest_start(skb, IPSET_ATTR_DATA);
 	if (!nested)
 		goto nla_put_failure;
 	if (mtype_do_head(skb, map) ||
 	    nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref)) ||
-	    nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize)))
+	    nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize)) ||
+	    nla_put_net32(skb, IPSET_ATTR_ELEMENTS, htonl(set->elements)))
 		goto nla_put_failure;
 	if (unlikely(ip_set_put_flags(skb, set)))
 		goto nla_put_failure;
@@ -140,6 +149,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 	if (ret == IPSET_ADD_FAILED) {
 		if (SET_WITH_TIMEOUT(set) &&
 		    ip_set_timeout_expired(ext_timeout(x, set))) {
+			set->elements--;
 			ret = 0;
 		} else if (!(flags & IPSET_FLAG_EXIST)) {
 			set_bit(e->id, map->members);
@@ -148,6 +158,8 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 		/* Element is re-added, cleanup extensions */
 		ip_set_ext_destroy(set, x);
 	}
+	if (ret > 0)
+		set->elements--;
 
 	if (SET_WITH_TIMEOUT(set))
 #ifdef IP_SET_BITMAP_STORED_TIMEOUT
@@ -159,12 +171,13 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 	if (SET_WITH_COUNTER(set))
 		ip_set_init_counter(ext_counter(x, set), ext);
 	if (SET_WITH_COMMENT(set))
-		ip_set_init_comment(ext_comment(x, set), ext);
+		ip_set_init_comment(set, ext_comment(x, set), ext);
 	if (SET_WITH_SKBINFO(set))
 		ip_set_init_skbinfo(ext_skbinfo(x, set), ext);
 
 	/* Activate element */
 	set_bit(e->id, map->members);
+	set->elements++;
 
 	return 0;
 }
@@ -181,6 +194,7 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 		return -IPSET_ERR_EXIST;
 
 	ip_set_ext_destroy(set, x);
+	set->elements--;
 	if (SET_WITH_TIMEOUT(set) &&
 	    ip_set_timeout_expired(ext_timeout(x, set)))
 		return -IPSET_ERR_EXIST;
@@ -276,6 +290,7 @@ mtype_gc(unsigned long ul_set)
 		if (ip_set_timeout_expired(ext_timeout(x, set))) {
 			clear_bit(id, map->members);
 			ip_set_ext_destroy(set, x);
+			set->elements--;
 		}
 	}
 	spin_unlock_bh(&set->lock);
...
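The bitmap changes above thread a live element counter (set->elements) through add, delete, flush, and garbage collection, so that mtype_memsize() can report memory as fixed header + member bitmap + elements * per-element extension size, and the count itself is exported via IPSET_ATTR_ELEMENTS. A userspace sketch of that accounting (hypothetical struct fields mirroring the kernel's map->memsize, set->dsize, and set->elements):

```c
#include <assert.h>
#include <stddef.h>

struct bitmap_set {
	size_t map_bytes;	/* size of the member bitmap */
	size_t dsize;		/* per-element extension data size */
	size_t elements;	/* live element count, kept in step
				 * with add/del/flush/gc */
};

static void set_add(struct bitmap_set *s)   { s->elements++; }
static void set_del(struct bitmap_set *s)   { s->elements--; }
static void set_flush(struct bitmap_set *s) { s->elements = 0; }

/* Mirrors mtype_memsize(): header + bitmap + per-element data. */
static size_t set_memsize(const struct bitmap_set *s)
{
	return sizeof(*s) + s->map_bytes + s->elements * s->dsize;
}
```

Keeping the counter updated at every mutation point is cheaper than rescanning the bitmap when userspace asks for a header dump, at the cost of the small bookkeeping visible throughout the hunk.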
@@ -324,7 +324,7 @@ ip_set_get_ipaddr6(struct nlattr *nla, union nf_inet_addr *ipaddr)
 }
 EXPORT_SYMBOL_GPL(ip_set_get_ipaddr6);

-typedef void (*destroyer)(void *);
+typedef void (*destroyer)(struct ip_set *, void *);
 /* ipset data extension types, in size order */

 const struct ip_set_ext_type ip_set_extensions[] = {
@@ -426,20 +426,20 @@ ip_set_get_extensions(struct ip_set *set, struct nlattr *tb[],
 		if (!SET_WITH_SKBINFO(set))
 			return -IPSET_ERR_SKBINFO;
 		fullmark = be64_to_cpu(nla_get_be64(tb[IPSET_ATTR_SKBMARK]));
-		ext->skbmark = fullmark >> 32;
-		ext->skbmarkmask = fullmark & 0xffffffff;
+		ext->skbinfo.skbmark = fullmark >> 32;
+		ext->skbinfo.skbmarkmask = fullmark & 0xffffffff;
 	}
 	if (tb[IPSET_ATTR_SKBPRIO]) {
 		if (!SET_WITH_SKBINFO(set))
 			return -IPSET_ERR_SKBINFO;
-		ext->skbprio = be32_to_cpu(nla_get_be32(
-					    tb[IPSET_ATTR_SKBPRIO]));
+		ext->skbinfo.skbprio =
+			be32_to_cpu(nla_get_be32(tb[IPSET_ATTR_SKBPRIO]));
 	}
 	if (tb[IPSET_ATTR_SKBQUEUE]) {
 		if (!SET_WITH_SKBINFO(set))
 			return -IPSET_ERR_SKBINFO;
-		ext->skbqueue = be16_to_cpu(nla_get_be16(
-					    tb[IPSET_ATTR_SKBQUEUE]));
+		ext->skbinfo.skbqueue =
			be16_to_cpu(nla_get_be16(tb[IPSET_ATTR_SKBQUEUE]));
 	}
 	return 0;
 }
@@ -541,7 +541,7 @@ int
 ip_set_test(ip_set_id_t index, const struct sk_buff *skb,
 	    const struct xt_action_param *par, struct ip_set_adt_opt *opt)
 {
-	struct ip_set *set = ip_set_rcu_get(par->net, index);
+	struct ip_set *set = ip_set_rcu_get(xt_net(par), index);
 	int ret = 0;

 	BUG_ON(!set);
@@ -579,7 +579,7 @@ int
 ip_set_add(ip_set_id_t index, const struct sk_buff *skb,
 	   const struct xt_action_param *par, struct ip_set_adt_opt *opt)
 {
-	struct ip_set *set = ip_set_rcu_get(par->net, index);
+	struct ip_set *set = ip_set_rcu_get(xt_net(par), index);
 	int ret;

 	BUG_ON(!set);
@@ -601,7 +601,7 @@ int
 ip_set_del(ip_set_id_t index, const struct sk_buff *skb,
 	   const struct xt_action_param *par, struct ip_set_adt_opt *opt)
 {
-	struct ip_set *set = ip_set_rcu_get(par->net, index);
+	struct ip_set *set = ip_set_rcu_get(xt_net(par), index);
 	int ret = 0;

 	BUG_ON(!set);
...
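The hunks above fold the separate `skbmark`, `skbmarkmask`, `skbprio` and `skbqueue` extension fields into one `ext->skbinfo` sub-structure. A userspace sketch of the same regrouping (hypothetical `*_like` names); splitting the 64-bit `IPSET_ATTR_SKBMARK` payload into mark and mask mirrors the code above:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical analog of the skbinfo regrouping: the mark/prio/queue
 * fields move into one sub-structure, so initializing or copying the
 * whole extension is a single struct assignment instead of several. */
struct skbinfo_like {
	uint32_t skbmark;
	uint32_t skbmarkmask;
	uint32_t skbprio;
	uint16_t skbqueue;
};

struct ext_like {
	struct skbinfo_like skbinfo;
};

/* Split a 64-bit "fullmark" into mark and mask, as the
 * IPSET_ATTR_SKBMARK attribute carries both halves. */
static void ext_set_fullmark(struct ext_like *ext, uint64_t fullmark)
{
	ext->skbinfo.skbmark = fullmark >> 32;
	ext->skbinfo.skbmarkmask = fullmark & 0xffffffff;
}
```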
@@ -85,6 +85,8 @@ struct htable {
 };

 #define hbucket(h, i)		((h)->bucket[i])
+#define ext_size(n, dsize)	\
+	(sizeof(struct hbucket) + (n) * (dsize))

 #ifndef IPSET_NET_COUNT
 #define IPSET_NET_COUNT		1
@@ -150,24 +152,34 @@ htable_bits(u32 hashsize)
 #define INIT_CIDR(cidr, host_mask)	\
 	DCIDR_PUT(((cidr) ? NCIDR_GET(cidr) : host_mask))

-#define SET_HOST_MASK(family)	(family == AF_INET ? 32 : 128)
-
 #ifdef IP_SET_HASH_WITH_NET0
-/* cidr from 0 to SET_HOST_MASK() value and c = cidr + 1 */
-#define NLEN(family)		(SET_HOST_MASK(family) + 1)
+/* cidr from 0 to HOST_MASK value and c = cidr + 1 */
+#define NLEN			(HOST_MASK + 1)
 #define CIDR_POS(c)		((c) - 1)
 #else
-/* cidr from 1 to SET_HOST_MASK() value and c = cidr + 1 */
-#define NLEN(family)		SET_HOST_MASK(family)
+/* cidr from 1 to HOST_MASK value and c = cidr + 1 */
+#define NLEN			HOST_MASK
 #define CIDR_POS(c)		((c) - 2)
 #endif

 #else
-#define NLEN(family)		0
+#define NLEN			0
 #endif /* IP_SET_HASH_WITH_NETS */

 #endif /* _IP_SET_HASH_GEN_H */

+#ifndef MTYPE
+#error "MTYPE is not defined!"
+#endif
+
+#ifndef HTYPE
+#error "HTYPE is not defined!"
+#endif
+
+#ifndef HOST_MASK
+#error "HOST_MASK is not defined!"
+#endif
+
 /* Family dependent templates */
 #undef ahash_data
@@ -191,7 +203,6 @@ htable_bits(u32 hashsize)
 #undef mtype_same_set
 #undef mtype_kadt
 #undef mtype_uadt
-#undef mtype

 #undef mtype_add
 #undef mtype_del
@@ -207,6 +218,7 @@ htable_bits(u32 hashsize)
 #undef mtype_variant
 #undef mtype_data_match

+#undef htype
 #undef HKEY

 #define mtype_data_equal	IPSET_TOKEN(MTYPE, _data_equal)
@@ -233,7 +245,6 @@ htable_bits(u32 hashsize)
 #define mtype_same_set		IPSET_TOKEN(MTYPE, _same_set)
 #define mtype_kadt		IPSET_TOKEN(MTYPE, _kadt)
 #define mtype_uadt		IPSET_TOKEN(MTYPE, _uadt)
-#define mtype			MTYPE

 #define mtype_add		IPSET_TOKEN(MTYPE, _add)
 #define mtype_del		IPSET_TOKEN(MTYPE, _del)
@@ -249,62 +260,54 @@ htable_bits(u32 hashsize)
 #define mtype_variant		IPSET_TOKEN(MTYPE, _variant)
 #define mtype_data_match	IPSET_TOKEN(MTYPE, _data_match)

-#ifndef MTYPE
-#error "MTYPE is not defined!"
-#endif
-
-#ifndef HOST_MASK
-#error "HOST_MASK is not defined!"
-#endif
-
 #ifndef HKEY_DATALEN
 #define HKEY_DATALEN		sizeof(struct mtype_elem)
 #endif
-#define HKEY(data, initval, htable_bits)			\
-	(jhash2((u32 *)(data), HKEY_DATALEN / sizeof(u32), initval)	\
-		& jhash_mask(htable_bits))
-
-#ifndef htype
-#ifndef HTYPE
-#error "HTYPE is not defined!"
-#endif /* HTYPE */
-#define htype			HTYPE
+#define htype			MTYPE
+
+#define HKEY(data, initval, htable_bits)			\
+({								\
+	const u32 *__k = (const u32 *)data;			\
+	u32 __l = HKEY_DATALEN / sizeof(u32);			\
+								\
+	BUILD_BUG_ON(HKEY_DATALEN % sizeof(u32) != 0);		\
+								\
+	jhash2(__k, __l, initval) & jhash_mask(htable_bits);	\
+})
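The rewritten `HKEY()` macro above uses a GNU statement expression so that `BUILD_BUG_ON()` can reject, at compile time, element sizes that are not a multiple of `sizeof(u32)` before they ever reach `jhash2()`. A userspace sketch of the pattern, with `dummy_hash()` standing in for `jhash2()` and a `sizeof`-based stand-in for `BUILD_BUG_ON()` (all names here are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Compile-time check: a negative array size makes the build fail
 * when cond is true, mimicking the kernel's BUILD_BUG_ON(). */
#define BUILD_BUG_ON_LIKE(cond)	((void)sizeof(char[1 - 2 * !!(cond)]))
#define jhash_mask_like(bits)	((1U << (bits)) - 1)

/* Stand-in for jhash2(): hashes an array of u32 words. */
static uint32_t dummy_hash(const uint32_t *k, uint32_t len, uint32_t initval)
{
	uint32_t h = initval;

	while (len--)
		h = h * 31 + *k++;
	return h;
}

/* GNU statement expression: runs the compile-time check, then
 * evaluates to the masked bucket index. */
#define HKEY_LIKE(data, datalen, initval, htable_bits)		\
({								\
	const uint32_t *__k = (const uint32_t *)(data);		\
	uint32_t __l = (datalen) / sizeof(uint32_t);		\
								\
	BUILD_BUG_ON_LIKE((datalen) % sizeof(uint32_t) != 0);	\
								\
	dummy_hash(__k, __l, initval) & jhash_mask_like(htable_bits); \
})
```

The statement-expression form also lets the macro introduce local variables without evaluating its arguments twice.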
 /* The generic hash structure */
 struct htype {
 	struct htable __rcu *table; /* the hash table */
+	struct timer_list gc;	/* garbage collection when timeout enabled */
 	u32 maxelem;		/* max elements in the hash */
-	u32 elements;		/* current element (vs timeout) */
 	u32 initval;		/* random jhash init value */
 #ifdef IP_SET_HASH_WITH_MARKMASK
 	u32 markmask;		/* markmask value for mark mask to store */
 #endif
-	struct timer_list gc;	/* garbage collection when timeout enabled */
-	struct mtype_elem next; /* temporary storage for uadd */
 #ifdef IP_SET_HASH_WITH_MULTI
 	u8 ahash_max;		/* max elements in an array block */
 #endif
 #ifdef IP_SET_HASH_WITH_NETMASK
 	u8 netmask;		/* netmask value for subnets to store */
 #endif
+	struct mtype_elem next; /* temporary storage for uadd */
 #ifdef IP_SET_HASH_WITH_NETS
-	struct net_prefixes nets[0]; /* book-keeping of prefixes */
+	struct net_prefixes nets[NLEN]; /* book-keeping of prefixes */
 #endif
 };
-#endif /* htype */
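With `NLEN` now a compile-time constant derived from `HOST_MASK`, the `nets[]` book-keeping member above can change from the flexible array `nets[0]` to a fixed-size array, so `sizeof()` covers it and the manual allocation arithmetic disappears. A userspace sketch with hypothetical `*_like` names:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the nets[0] -> nets[NLEN] change: once NLEN is a
 * compile-time constant (derived from HOST_MASK instead of the
 * runtime address family), the array can be a fixed-size member and
 * sizeof() covers it. */
#define HOST_MASK_LIKE	32
#define NLEN_LIKE	(HOST_MASK_LIKE + 1)	/* /0 networks supported */

struct prefixes_like {
	unsigned int nets;
	unsigned char cidr;
};

struct htype_like {
	unsigned int maxelem;
	struct prefixes_like nets[NLEN_LIKE];	/* no flexible array */
};
```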
 #ifdef IP_SET_HASH_WITH_NETS
 /* Network cidr size book keeping when the hash stores different
  * sized networks. cidr == real cidr + 1 to support /0.
  */
 static void
-mtype_add_cidr(struct htype *h, u8 cidr, u8 nets_length, u8 n)
+mtype_add_cidr(struct htype *h, u8 cidr, u8 n)
 {
 	int i, j;

 	/* Add in increasing prefix order, so larger cidr first */
-	for (i = 0, j = -1; i < nets_length && h->nets[i].cidr[n]; i++) {
+	for (i = 0, j = -1; i < NLEN && h->nets[i].cidr[n]; i++) {
 		if (j != -1) {
 			continue;
 		} else if (h->nets[i].cidr[n] < cidr) {
@@ -323,11 +326,11 @@ mtype_add_cidr(struct htype *h, u8 cidr, u8 nets_length, u8 n)
 }

 static void
-mtype_del_cidr(struct htype *h, u8 cidr, u8 nets_length, u8 n)
+mtype_del_cidr(struct htype *h, u8 cidr, u8 n)
 {
-	u8 i, j, net_end = nets_length - 1;
+	u8 i, j, net_end = NLEN - 1;

-	for (i = 0; i < nets_length; i++) {
+	for (i = 0; i < NLEN; i++) {
 		if (h->nets[i].cidr[n] != cidr)
 			continue;
 		h->nets[CIDR_POS(cidr)].nets[n]--;
@@ -343,24 +346,9 @@ mtype_del_cidr(struct htype *h, u8 cidr, u8 nets_length, u8 n)

 /* Calculate the actual memory size of the set data */
 static size_t
-mtype_ahash_memsize(const struct htype *h, const struct htable *t,
-		    u8 nets_length, size_t dsize)
+mtype_ahash_memsize(const struct htype *h, const struct htable *t)
 {
-	u32 i;
-	struct hbucket *n;
-	size_t memsize = sizeof(*h) + sizeof(*t);
-
-#ifdef IP_SET_HASH_WITH_NETS
-	memsize += sizeof(struct net_prefixes) * nets_length;
-#endif
-	for (i = 0; i < jhash_size(t->htable_bits); i++) {
-		n = rcu_dereference_bh(hbucket(t, i));
-		if (!n)
-			continue;
-		memsize += sizeof(struct hbucket) + n->size * dsize;
-	}
-
-	return memsize;
+	return sizeof(*h) + sizeof(*t);
 }

 /* Get the ith element from the array block n */
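After the change above, `mtype_ahash_memsize()` is O(1): instead of walking every bucket on each header dump, the allocated extension memory is tracked incrementally in `set->ext_size` as buckets grow and shrink (the `ext_size()` adjustments in the surrounding hunks). A sketch of the accounting scheme, with hypothetical names and `sizeof(int)` standing in for `sizeof(struct hbucket)`:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the memory-accounting change: a running counter
 * (set->ext_size in the patch) is adjusted whenever bucket storage
 * is allocated or freed, so querying the total is O(1) instead of a
 * full bucket walk. */
struct acct_set {
	size_t base_size;	/* fixed part: struct sizes */
	size_t ext_size;	/* incrementally maintained */
};

/* EXT_SIZE mirrors the patch's ext_size() helper */
#define EXT_SIZE(n, dsize)	(sizeof(int) + (n) * (dsize))

static void bucket_grow(struct acct_set *s, size_t n, size_t dsize)
{
	s->ext_size += EXT_SIZE(n, dsize);
}

static void bucket_shrink(struct acct_set *s, size_t n, size_t dsize)
{
	s->ext_size -= EXT_SIZE(n, dsize);
}

static size_t set_memsize(const struct acct_set *s)
{
	return s->base_size + s->ext_size;	/* no bucket walk needed */
}
```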
@@ -398,9 +386,10 @@ mtype_flush(struct ip_set *set)
 		kfree_rcu(n, rcu);
 	}
 #ifdef IP_SET_HASH_WITH_NETS
-	memset(h->nets, 0, sizeof(struct net_prefixes) * NLEN(set->family));
+	memset(h->nets, 0, sizeof(h->nets));
 #endif
-	h->elements = 0;
+	set->elements = 0;
+	set->ext_size = 0;
 }

 /* Destroy the hashtable part of the set */
@@ -444,11 +433,8 @@ mtype_gc_init(struct ip_set *set, void (*gc)(unsigned long ul_set))
 {
 	struct htype *h = set->data;

-	init_timer(&h->gc);
-	h->gc.data = (unsigned long)set;
-	h->gc.function = gc;
-	h->gc.expires = jiffies + IPSET_GC_PERIOD(set->timeout) * HZ;
-	add_timer(&h->gc);
+	setup_timer(&h->gc, gc, (unsigned long)set);
+	mod_timer(&h->gc, jiffies + IPSET_GC_PERIOD(set->timeout) * HZ);
 	pr_debug("gc initialized, run in every %u\n",
 		 IPSET_GC_PERIOD(set->timeout));
 }
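The hunk above collapses the five-line open-coded timer initialization into `setup_timer()` plus `mod_timer()`. A userspace analog of why the helper form is equivalent (hypothetical `*_like` names; the real kernel helpers additionally handle pending-timer state and locking):

```c
#include <assert.h>

/* Userspace analog of the init_timer()/setup_timer() cleanup: one
 * helper fills in the callback and its argument, a second arms the
 * timer, replacing field-by-field initialization. */
struct timer_like {
	void (*function)(unsigned long);
	unsigned long data;
	unsigned long expires;
};

static unsigned long fired_arg;

static void gc_cb(unsigned long arg)
{
	fired_arg = arg;	/* record what the callback received */
}

static void setup_timer_like(struct timer_like *t,
			     void (*fn)(unsigned long), unsigned long data)
{
	t->function = fn;
	t->data = data;
	t->expires = 0;
}

static void mod_timer_like(struct timer_like *t, unsigned long expires)
{
	t->expires = expires;
}
```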
@@ -473,12 +459,13 @@ mtype_same_set(const struct ip_set *a, const struct ip_set *b)

 /* Delete expired elements from the hashtable */
 static void
-mtype_expire(struct ip_set *set, struct htype *h, u8 nets_length, size_t dsize)
+mtype_expire(struct ip_set *set, struct htype *h)
 {
 	struct htable *t;
 	struct hbucket *n, *tmp;
 	struct mtype_elem *data;
 	u32 i, j, d;
+	size_t dsize = set->dsize;
 #ifdef IP_SET_HASH_WITH_NETS
 	u8 k;
 #endif
@@ -494,21 +481,20 @@ mtype_expire(struct ip_set *set, struct htype *h, u8 nets_length, size_t dsize)
 				continue;
 			}
 			data = ahash_data(n, j, dsize);
-			if (ip_set_timeout_expired(ext_timeout(data, set))) {
-				pr_debug("expired %u/%u\n", i, j);
-				clear_bit(j, n->used);
-				smp_mb__after_atomic();
+			if (!ip_set_timeout_expired(ext_timeout(data, set)))
+				continue;
+			pr_debug("expired %u/%u\n", i, j);
+			clear_bit(j, n->used);
+			smp_mb__after_atomic();
 #ifdef IP_SET_HASH_WITH_NETS
-				for (k = 0; k < IPSET_NET_COUNT; k++)
-					mtype_del_cidr(h,
-						NCIDR_PUT(DCIDR_GET(data->cidr,
-								    k)),
-						nets_length, k);
+			for (k = 0; k < IPSET_NET_COUNT; k++)
+				mtype_del_cidr(h,
+					NCIDR_PUT(DCIDR_GET(data->cidr, k)),
+					k);
 #endif
-				ip_set_ext_destroy(set, data);
-				h->elements--;
-				d++;
-			}
+			ip_set_ext_destroy(set, data);
+			set->elements--;
+			d++;
 		}
 		if (d >= AHASH_INIT_SIZE) {
 			if (d >= n->size) {
@@ -532,6 +518,7 @@ mtype_expire(struct ip_set *set, struct htype *h, u8 nets_length, size_t dsize)
 				d++;
 			}
 			tmp->pos = d;
+			set->ext_size -= ext_size(AHASH_INIT_SIZE, dsize);
 			rcu_assign_pointer(hbucket(t, i), tmp);
 			kfree_rcu(n, rcu);
 		}
@@ -546,7 +533,7 @@ mtype_gc(unsigned long ul_set)
 	pr_debug("called\n");
 	spin_lock_bh(&set->lock);
-	mtype_expire(set, h, NLEN(set->family), set->dsize);
+	mtype_expire(set, h);
 	spin_unlock_bh(&set->lock);

 	h->gc.expires = jiffies + IPSET_GC_PERIOD(set->timeout) * HZ;
@@ -563,7 +550,7 @@ mtype_resize(struct ip_set *set, bool retried)
 	struct htype *h = set->data;
 	struct htable *t, *orig;
 	u8 htable_bits;
-	size_t dsize = set->dsize;
+	size_t extsize, dsize = set->dsize;
 #ifdef IP_SET_HASH_WITH_NETS
 	u8 flags;
 	struct mtype_elem *tmp;
@@ -606,6 +593,7 @@ mtype_resize(struct ip_set *set, bool retried)
 	/* There can't be another parallel resizing, but dumping is possible */
 	atomic_set(&orig->ref, 1);
 	atomic_inc(&orig->uref);
+	extsize = 0;
 	pr_debug("attempt to resize set %s from %u to %u, t %p\n",
 		 set->name, orig->htable_bits, htable_bits, orig);
 	for (i = 0; i < jhash_size(orig->htable_bits); i++) {
@@ -636,6 +624,7 @@ mtype_resize(struct ip_set *set, bool retried)
 				goto cleanup;
 			}
 			m->size = AHASH_INIT_SIZE;
+			extsize += ext_size(AHASH_INIT_SIZE, dsize);
 			RCU_INIT_POINTER(hbucket(t, key), m);
 		} else if (m->pos >= m->size) {
 			struct hbucket *ht;
@@ -655,6 +644,7 @@ mtype_resize(struct ip_set *set, bool retried)
 			memcpy(ht, m, sizeof(struct hbucket) +
 				      m->size * dsize);
 			ht->size = m->size + AHASH_INIT_SIZE;
+			extsize += ext_size(AHASH_INIT_SIZE, dsize);
 			kfree(m);
 			m = ht;
 			RCU_INIT_POINTER(hbucket(t, key), ht);
@@ -668,6 +658,7 @@ mtype_resize(struct ip_set *set, bool retried)
 		}
 	}
 	rcu_assign_pointer(h->table, t);
+	set->ext_size = extsize;

 	spin_unlock_bh(&set->lock);
@@ -715,11 +706,11 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 	bool deleted = false, forceadd = false, reuse = false;
 	u32 key, multi = 0;

-	if (h->elements >= h->maxelem) {
+	if (set->elements >= h->maxelem) {
 		if (SET_WITH_TIMEOUT(set))
 			/* FIXME: when set is full, we slow down here */
-			mtype_expire(set, h, NLEN(set->family), set->dsize);
-		if (h->elements >= h->maxelem && SET_WITH_FORCEADD(set))
+			mtype_expire(set, h);
+		if (set->elements >= h->maxelem && SET_WITH_FORCEADD(set))
 			forceadd = true;
 	}
@@ -727,20 +718,15 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 	key = HKEY(value, h->initval, t->htable_bits);
 	n = __ipset_dereference_protected(hbucket(t, key), 1);
 	if (!n) {
-		if (forceadd) {
-			if (net_ratelimit())
-				pr_warn("Set %s is full, maxelem %u reached\n",
-					set->name, h->maxelem);
-			return -IPSET_ERR_HASH_FULL;
-		} else if (h->elements >= h->maxelem) {
+		if (forceadd || set->elements >= h->maxelem)
 			goto set_full;
-		}
 		old = NULL;
 		n = kzalloc(sizeof(*n) + AHASH_INIT_SIZE * set->dsize,
 			    GFP_ATOMIC);
 		if (!n)
 			return -ENOMEM;
 		n->size = AHASH_INIT_SIZE;
+		set->ext_size += ext_size(AHASH_INIT_SIZE, set->dsize);
 		goto copy_elem;
 	}
 	for (i = 0; i < n->pos; i++) {
@@ -778,14 +764,14 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 			for (i = 0; i < IPSET_NET_COUNT; i++)
 				mtype_del_cidr(h,
 					NCIDR_PUT(DCIDR_GET(data->cidr, i)),
-					NLEN(set->family), i);
+					i);
 #endif
 			ip_set_ext_destroy(set, data);
-			h->elements--;
+			set->elements--;
 		}
 		goto copy_data;
 	}
-	if (h->elements >= h->maxelem)
+	if (set->elements >= h->maxelem)
 		goto set_full;
 	/* Create a new slot */
 	if (n->pos >= n->size) {
@@ -804,17 +790,17 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 		memcpy(n, old, sizeof(struct hbucket) +
 		       old->size * set->dsize);
 		n->size = old->size + AHASH_INIT_SIZE;
+		set->ext_size += ext_size(AHASH_INIT_SIZE, set->dsize);
 	}

 copy_elem:
 	j = n->pos++;
 	data = ahash_data(n, j, set->dsize);
 copy_data:
-	h->elements++;
+	set->elements++;
 #ifdef IP_SET_HASH_WITH_NETS
 	for (i = 0; i < IPSET_NET_COUNT; i++)
-		mtype_add_cidr(h, NCIDR_PUT(DCIDR_GET(d->cidr, i)),
-			       NLEN(set->family), i);
+		mtype_add_cidr(h, NCIDR_PUT(DCIDR_GET(d->cidr, i)), i);
 #endif
 	memcpy(data, d, sizeof(struct mtype_elem));
 overwrite_extensions:
@@ -824,7 +810,7 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 	if (SET_WITH_COUNTER(set))
 		ip_set_init_counter(ext_counter(data, set), ext);
 	if (SET_WITH_COMMENT(set))
-		ip_set_init_comment(ext_comment(data, set), ext);
+		ip_set_init_comment(set, ext_comment(data, set), ext);
 	if (SET_WITH_SKBINFO(set))
 		ip_set_init_skbinfo(ext_skbinfo(data, set), ext);
 	/* Must come last for the case when timed out entry is reused */
@@ -883,11 +869,11 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 		smp_mb__after_atomic();
 		if (i + 1 == n->pos)
 			n->pos--;
-		h->elements--;
+		set->elements--;
 #ifdef IP_SET_HASH_WITH_NETS
 		for (j = 0; j < IPSET_NET_COUNT; j++)
 			mtype_del_cidr(h, NCIDR_PUT(DCIDR_GET(d->cidr, j)),
-				       NLEN(set->family), j);
+				       j);
 #endif
 		ip_set_ext_destroy(set, data);
@@ -896,6 +882,7 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 			k++;
 		}
 		if (n->pos == 0 && k == 0) {
+			set->ext_size -= ext_size(n->size, dsize);
 			rcu_assign_pointer(hbucket(t, key), NULL);
 			kfree_rcu(n, rcu);
 		} else if (k >= AHASH_INIT_SIZE) {
@@ -914,6 +901,7 @@ mtype_del(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 				k++;
 			}
 			tmp->pos = k;
+			set->ext_size -= ext_size(AHASH_INIT_SIZE, dsize);
 			rcu_assign_pointer(hbucket(t, key), tmp);
 			kfree_rcu(n, rcu);
 		}
@@ -957,14 +945,13 @@ mtype_test_cidrs(struct ip_set *set, struct mtype_elem *d,
 	int i, j = 0;
 #endif
 	u32 key, multi = 0;
-	u8 nets_length = NLEN(set->family);

 	pr_debug("test by nets\n");
-	for (; j < nets_length && h->nets[j].cidr[0] && !multi; j++) {
+	for (; j < NLEN && h->nets[j].cidr[0] && !multi; j++) {
 #if IPSET_NET_COUNT == 2
 		mtype_data_reset_elem(d, &orig);
 		mtype_data_netmask(d, NCIDR_GET(h->nets[j].cidr[0]), false);
-		for (k = 0; k < nets_length && h->nets[k].cidr[1] && !multi;
+		for (k = 0; k < NLEN && h->nets[k].cidr[1] && !multi;
 		     k++) {
 			mtype_data_netmask(d, NCIDR_GET(h->nets[k].cidr[1]),
 					   true);
@@ -1021,7 +1008,7 @@ mtype_test(struct ip_set *set, void *value, const struct ip_set_ext *ext,
 	 * try all possible network sizes
 	 */
 	for (i = 0; i < IPSET_NET_COUNT; i++)
-		if (DCIDR_GET(d->cidr, i) != SET_HOST_MASK(set->family))
+		if (DCIDR_GET(d->cidr, i) != HOST_MASK)
 			break;
 	if (i == IPSET_NET_COUNT) {
 		ret = mtype_test_cidrs(set, d, ext, mext, flags);
@@ -1062,7 +1049,7 @@ mtype_head(struct ip_set *set, struct sk_buff *skb)

 	rcu_read_lock_bh();
 	t = rcu_dereference_bh_nfnl(h->table);
-	memsize = mtype_ahash_memsize(h, t, NLEN(set->family), set->dsize);
+	memsize = mtype_ahash_memsize(h, t) + set->ext_size;
 	htable_bits = t->htable_bits;
 	rcu_read_unlock_bh();
@@ -1083,7 +1070,8 @@ mtype_head(struct ip_set *set, struct sk_buff *skb)
 		goto nla_put_failure;
 #endif
 	if (nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref)) ||
-	    nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize)))
+	    nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize)) ||
+	    nla_put_net32(skb, IPSET_ATTR_ELEMENTS, htonl(set->elements)))
 		goto nla_put_failure;
 	if (unlikely(ip_set_put_flags(skb, set)))
 		goto nla_put_failure;
@@ -1238,41 +1226,35 @@ IPSET_TOKEN(HTYPE, _create)(struct net *net, struct ip_set *set,
 	struct htype *h;
 	struct htable *t;

-	pr_debug("Create set %s with family %s\n",
-		 set->name, set->family == NFPROTO_IPV4 ? "inet" : "inet6");
-
 #ifndef IP_SET_PROTO_UNDEF
 	if (!(set->family == NFPROTO_IPV4 || set->family == NFPROTO_IPV6))
 		return -IPSET_ERR_INVALID_FAMILY;
 #endif
-#ifdef IP_SET_HASH_WITH_MARKMASK
-	markmask = 0xffffffff;
-#endif
-
-#ifdef IP_SET_HASH_WITH_NETMASK
-	netmask = set->family == NFPROTO_IPV4 ? 32 : 128;
-#endif
+
+	pr_debug("Create set %s with family %s\n",
+		 set->name, set->family == NFPROTO_IPV4 ? "inet" : "inet6");

 	if (unlikely(!ip_set_optattr_netorder(tb, IPSET_ATTR_HASHSIZE) ||
 		     !ip_set_optattr_netorder(tb, IPSET_ATTR_MAXELEM) ||
 		     !ip_set_optattr_netorder(tb, IPSET_ATTR_TIMEOUT) ||
 		     !ip_set_optattr_netorder(tb, IPSET_ATTR_CADT_FLAGS)))
 		return -IPSET_ERR_PROTOCOL;
+
 #ifdef IP_SET_HASH_WITH_MARKMASK
 	/* Separated condition in order to avoid directive in argument list */
 	if (unlikely(!ip_set_optattr_netorder(tb, IPSET_ATTR_MARKMASK)))
 		return -IPSET_ERR_PROTOCOL;
-#endif

-	if (tb[IPSET_ATTR_HASHSIZE]) {
-		hashsize = ip_set_get_h32(tb[IPSET_ATTR_HASHSIZE]);
-		if (hashsize < IPSET_MIMINAL_HASHSIZE)
-			hashsize = IPSET_MIMINAL_HASHSIZE;
+	markmask = 0xffffffff;
+	if (tb[IPSET_ATTR_MARKMASK]) {
+		markmask = ntohl(nla_get_be32(tb[IPSET_ATTR_MARKMASK]));
+		if (markmask == 0)
+			return -IPSET_ERR_INVALID_MARKMASK;
 	}
+#endif

-	if (tb[IPSET_ATTR_MAXELEM])
-		maxelem = ip_set_get_h32(tb[IPSET_ATTR_MAXELEM]);
-
 #ifdef IP_SET_HASH_WITH_NETMASK
+	netmask = set->family == NFPROTO_IPV4 ? 32 : 128;
 	if (tb[IPSET_ATTR_NETMASK]) {
 		netmask = nla_get_u8(tb[IPSET_ATTR_NETMASK]);
@@ -1282,33 +1264,21 @@ IPSET_TOKEN(HTYPE, _create)(struct net *net, struct ip_set *set,
 			return -IPSET_ERR_INVALID_NETMASK;
 	}
 #endif
-#ifdef IP_SET_HASH_WITH_MARKMASK
-	if (tb[IPSET_ATTR_MARKMASK]) {
-		markmask = ntohl(nla_get_be32(tb[IPSET_ATTR_MARKMASK]));
-		if (markmask == 0)
-			return -IPSET_ERR_INVALID_MARKMASK;
-	}
-#endif
+
+	if (tb[IPSET_ATTR_HASHSIZE]) {
+		hashsize = ip_set_get_h32(tb[IPSET_ATTR_HASHSIZE]);
+		if (hashsize < IPSET_MIMINAL_HASHSIZE)
+			hashsize = IPSET_MIMINAL_HASHSIZE;
+	}
+
+	if (tb[IPSET_ATTR_MAXELEM])
+		maxelem = ip_set_get_h32(tb[IPSET_ATTR_MAXELEM]);

 	hsize = sizeof(*h);
-#ifdef IP_SET_HASH_WITH_NETS
-	hsize += sizeof(struct net_prefixes) * NLEN(set->family);
-#endif
 	h = kzalloc(hsize, GFP_KERNEL);
 	if (!h)
 		return -ENOMEM;

-	h->maxelem = maxelem;
-#ifdef IP_SET_HASH_WITH_NETMASK
-	h->netmask = netmask;
-#endif
-#ifdef IP_SET_HASH_WITH_MARKMASK
-	h->markmask = markmask;
-#endif
-	get_random_bytes(&h->initval, sizeof(h->initval));
-	set->timeout = IPSET_NO_TIMEOUT;
-
 	hbits = htable_bits(hashsize);
 	hsize = htable_size(hbits);
 	if (hsize == 0) {
@@ -1320,8 +1290,17 @@ IPSET_TOKEN(HTYPE, _create)(struct net *net, struct ip_set *set,
 		kfree(h);
 		return -ENOMEM;
 	}
+	h->maxelem = maxelem;
+#ifdef IP_SET_HASH_WITH_NETMASK
+	h->netmask = netmask;
+#endif
+#ifdef IP_SET_HASH_WITH_MARKMASK
+	h->markmask = markmask;
+#endif
+	get_random_bytes(&h->initval, sizeof(h->initval));
+
 	t->htable_bits = hbits;
-	rcu_assign_pointer(h->table, t);
+	RCU_INIT_POINTER(h->table, t);
 	set->data = h;
 #ifndef IP_SET_PROTO_UNDEF
@@ -1339,6 +1318,7 @@ IPSET_TOKEN(HTYPE, _create)(struct net *net, struct ip_set *set,
 		       __alignof__(struct IPSET_TOKEN(HTYPE, 6_elem)));
 	}
 #endif
+	set->timeout = IPSET_NO_TIMEOUT;
 	if (tb[IPSET_ATTR_TIMEOUT]) {
 		set->timeout = ip_set_timeout_uget(tb[IPSET_ATTR_TIMEOUT]);
 #ifndef IP_SET_PROTO_UNDEF
...
@@ -82,7 +82,7 @@ hash_ip4_kadt(struct ip_set *set, const struct sk_buff *skb,
 	      const struct xt_action_param *par,
 	      enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_ip *h = set->data;
+	const struct hash_ip4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ip4_elem e = { 0 };
 	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);
@@ -101,7 +101,7 @@ static int
 hash_ip4_uadt(struct ip_set *set, struct nlattr *tb[],
 	      enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ip *h = set->data;
+	const struct hash_ip4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ip4_elem e = { 0 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
@@ -199,7 +199,7 @@ hash_ip6_data_list(struct sk_buff *skb, const struct hash_ip6_elem *e)
 }

 static inline void
-hash_ip6_data_next(struct hash_ip4_elem *next, const struct hash_ip6_elem *e)
+hash_ip6_data_next(struct hash_ip6_elem *next, const struct hash_ip6_elem *e)
 {
 }
@@ -217,7 +217,7 @@ hash_ip6_kadt(struct ip_set *set, const struct sk_buff *skb,
 	      const struct xt_action_param *par,
 	      enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_ip *h = set->data;
+	const struct hash_ip6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ip6_elem e = { { .all = { 0 } } };
 	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);
@@ -234,7 +234,7 @@ static int
 hash_ip6_uadt(struct ip_set *set, struct nlattr *tb[],
 	      enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ip *h = set->data;
+	const struct hash_ip6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ip6_elem e = { { .all = { 0 } } };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
...
/* Copyright (C) 2016 Tomasz Chilinski <tomasz.chilinski@chilan.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

/* Kernel module implementing an IP set type: the hash:ip,mac type */

#include <linux/jhash.h>
#include <linux/module.h>
#include <linux/ip.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/errno.h>
#include <linux/random.h>
#include <linux/if_ether.h>
#include <net/ip.h>
#include <net/ipv6.h>
#include <net/netlink.h>
#include <net/tcp.h>

#include <linux/netfilter.h>
#include <linux/netfilter/ipset/pfxlen.h>
#include <linux/netfilter/ipset/ip_set.h>
#include <linux/netfilter/ipset/ip_set_hash.h>

#define IPSET_TYPE_REV_MIN	0
#define IPSET_TYPE_REV_MAX	0

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Tomasz Chilinski <tomasz.chilinski@chilan.com>");
IP_SET_MODULE_DESC("hash:ip,mac", IPSET_TYPE_REV_MIN, IPSET_TYPE_REV_MAX);
MODULE_ALIAS("ip_set_hash:ip,mac");

/* Type specific function prefix */
#define HTYPE		hash_ipmac

/* Zero valued element is not supported */
static const unsigned char invalid_ether[ETH_ALEN] = { 0 };
/* IPv4 variant */

/* Member elements */
struct hash_ipmac4_elem {
	/* Zero valued IP addresses cannot be stored */
	__be32 ip;
	union {
		unsigned char ether[ETH_ALEN];
		__be32 foo[2];
	};
};

/* Common functions */

static inline bool
hash_ipmac4_data_equal(const struct hash_ipmac4_elem *e1,
		       const struct hash_ipmac4_elem *e2,
		       u32 *multi)
{
	return e1->ip == e2->ip && ether_addr_equal(e1->ether, e2->ether);
}

static bool
hash_ipmac4_data_list(struct sk_buff *skb, const struct hash_ipmac4_elem *e)
{
	if (nla_put_ipaddr4(skb, IPSET_ATTR_IP, e->ip) ||
	    nla_put(skb, IPSET_ATTR_ETHER, ETH_ALEN, e->ether))
		goto nla_put_failure;
	return false;

nla_put_failure:
	return true;
}

static inline void
hash_ipmac4_data_next(struct hash_ipmac4_elem *next,
		      const struct hash_ipmac4_elem *e)
{
	next->ip = e->ip;
}

#define MTYPE		hash_ipmac4
#define PF		4
#define HOST_MASK	32
#define HKEY_DATALEN	sizeof(struct hash_ipmac4_elem)
#include "ip_set_hash_gen.h"
static int
hash_ipmac4_kadt(struct ip_set *set, const struct sk_buff *skb,
		 const struct xt_action_param *par,
		 enum ipset_adt adt, struct ip_set_adt_opt *opt)
{
	ipset_adtfn adtfn = set->variant->adt[adt];
	struct hash_ipmac4_elem e = { .ip = 0, { .foo[0] = 0, .foo[1] = 0 } };
	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);

	/* MAC can be src only */
	if (!(opt->flags & IPSET_DIM_TWO_SRC))
		return 0;

	if (skb_mac_header(skb) < skb->head ||
	    (skb_mac_header(skb) + ETH_HLEN) > skb->data)
		return -EINVAL;

	memcpy(e.ether, eth_hdr(skb)->h_source, ETH_ALEN);
	if (ether_addr_equal(e.ether, invalid_ether))
		return -EINVAL;

	ip4addrptr(skb, opt->flags & IPSET_DIM_ONE_SRC, &e.ip);

	return adtfn(set, &e, &ext, &opt->ext, opt->cmdflags);
}

static int
hash_ipmac4_uadt(struct ip_set *set, struct nlattr *tb[],
		 enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
{
	ipset_adtfn adtfn = set->variant->adt[adt];
	struct hash_ipmac4_elem e = { .ip = 0, { .foo[0] = 0, .foo[1] = 0 } };
	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
	int ret;

	if (unlikely(!tb[IPSET_ATTR_IP] ||
		     !tb[IPSET_ATTR_ETHER] ||
		     nla_len(tb[IPSET_ATTR_ETHER]) != ETH_ALEN ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_TIMEOUT) ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_PACKETS) ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_BYTES) ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_SKBMARK) ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_SKBPRIO) ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_SKBQUEUE)))
		return -IPSET_ERR_PROTOCOL;

	if (tb[IPSET_ATTR_LINENO])
		*lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]);

	ret = ip_set_get_ipaddr4(tb[IPSET_ATTR_IP], &e.ip) ||
		ip_set_get_extensions(set, tb, &ext);
	if (ret)
		return ret;

	memcpy(e.ether, nla_data(tb[IPSET_ATTR_ETHER]), ETH_ALEN);
	if (ether_addr_equal(e.ether, invalid_ether))
		return -IPSET_ERR_HASH_ELEM;

	return adtfn(set, &e, &ext, &ext, flags);
}
/* IPv6 variant */

/* Member elements */
struct hash_ipmac6_elem {
	/* Zero valued IP addresses cannot be stored */
	union nf_inet_addr ip;
	union {
		unsigned char ether[ETH_ALEN];
		__be32 foo[2];
	};
};

/* Common functions */

static inline bool
hash_ipmac6_data_equal(const struct hash_ipmac6_elem *e1,
		       const struct hash_ipmac6_elem *e2,
		       u32 *multi)
{
	return ipv6_addr_equal(&e1->ip.in6, &e2->ip.in6) &&
		ether_addr_equal(e1->ether, e2->ether);
}

static bool
hash_ipmac6_data_list(struct sk_buff *skb, const struct hash_ipmac6_elem *e)
{
	if (nla_put_ipaddr6(skb, IPSET_ATTR_IP, &e->ip.in6) ||
	    nla_put(skb, IPSET_ATTR_ETHER, ETH_ALEN, e->ether))
		goto nla_put_failure;
	return false;

nla_put_failure:
	return true;
}

static inline void
hash_ipmac6_data_next(struct hash_ipmac6_elem *next,
		      const struct hash_ipmac6_elem *e)
{
}

#undef MTYPE
#undef PF
#undef HOST_MASK
#undef HKEY_DATALEN

#define MTYPE		hash_ipmac6
#define PF		6
#define HOST_MASK	128
#define HKEY_DATALEN	sizeof(struct hash_ipmac6_elem)
#define IP_SET_EMIT_CREATE
#include "ip_set_hash_gen.h"
static int
hash_ipmac6_kadt(struct ip_set *set, const struct sk_buff *skb,
		 const struct xt_action_param *par,
		 enum ipset_adt adt, struct ip_set_adt_opt *opt)
{
	ipset_adtfn adtfn = set->variant->adt[adt];
	struct hash_ipmac6_elem e = {
		{ .all = { 0 } },
		{ .foo[0] = 0, .foo[1] = 0 }
	};
	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);

	/* MAC can be src only */
	if (!(opt->flags & IPSET_DIM_TWO_SRC))
		return 0;

	if (skb_mac_header(skb) < skb->head ||
	    (skb_mac_header(skb) + ETH_HLEN) > skb->data)
		return -EINVAL;

	memcpy(e.ether, eth_hdr(skb)->h_source, ETH_ALEN);
	if (ether_addr_equal(e.ether, invalid_ether))
		return -EINVAL;

	ip6addrptr(skb, opt->flags & IPSET_DIM_ONE_SRC, &e.ip.in6);

	return adtfn(set, &e, &ext, &opt->ext, opt->cmdflags);
}

static int
hash_ipmac6_uadt(struct ip_set *set, struct nlattr *tb[],
		 enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
{
	ipset_adtfn adtfn = set->variant->adt[adt];
	struct hash_ipmac6_elem e = {
		{ .all = { 0 } },
		{ .foo[0] = 0, .foo[1] = 0 }
	};
	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
	int ret;

	if (unlikely(!tb[IPSET_ATTR_IP] ||
		     !tb[IPSET_ATTR_ETHER] ||
		     nla_len(tb[IPSET_ATTR_ETHER]) != ETH_ALEN ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_TIMEOUT) ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_PACKETS) ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_BYTES) ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_SKBMARK) ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_SKBPRIO) ||
		     !ip_set_optattr_netorder(tb, IPSET_ATTR_SKBQUEUE)))
		return -IPSET_ERR_PROTOCOL;

	if (tb[IPSET_ATTR_LINENO])
		*lineno = nla_get_u32(tb[IPSET_ATTR_LINENO]);

	ret = ip_set_get_ipaddr6(tb[IPSET_ATTR_IP], &e.ip) ||
		ip_set_get_extensions(set, tb, &ext);
	if (ret)
		return ret;

	memcpy(e.ether, nla_data(tb[IPSET_ATTR_ETHER]), ETH_ALEN);
	if (ether_addr_equal(e.ether, invalid_ether))
		return -IPSET_ERR_HASH_ELEM;

	return adtfn(set, &e, &ext, &ext, flags);
}
static struct ip_set_type hash_ipmac_type __read_mostly = {
	.name		= "hash:ip,mac",
	.protocol	= IPSET_PROTOCOL,
	.features	= IPSET_TYPE_IP | IPSET_TYPE_MAC,
	.dimension	= IPSET_DIM_TWO,
	.family		= NFPROTO_UNSPEC,
	.revision_min	= IPSET_TYPE_REV_MIN,
	.revision_max	= IPSET_TYPE_REV_MAX,
	.create		= hash_ipmac_create,
	.create_policy	= {
		[IPSET_ATTR_HASHSIZE]	= { .type = NLA_U32 },
		[IPSET_ATTR_MAXELEM]	= { .type = NLA_U32 },
		[IPSET_ATTR_PROBES]	= { .type = NLA_U8 },
		[IPSET_ATTR_RESIZE]	= { .type = NLA_U8 },
		[IPSET_ATTR_TIMEOUT]	= { .type = NLA_U32 },
		[IPSET_ATTR_CADT_FLAGS]	= { .type = NLA_U32 },
	},
	.adt_policy	= {
		[IPSET_ATTR_IP]		= { .type = NLA_NESTED },
		[IPSET_ATTR_ETHER]	= { .type = NLA_BINARY,
					    .len  = ETH_ALEN },
		[IPSET_ATTR_TIMEOUT]	= { .type = NLA_U32 },
		[IPSET_ATTR_LINENO]	= { .type = NLA_U32 },
		[IPSET_ATTR_BYTES]	= { .type = NLA_U64 },
		[IPSET_ATTR_PACKETS]	= { .type = NLA_U64 },
		[IPSET_ATTR_COMMENT]	= { .type = NLA_NUL_STRING },
		[IPSET_ATTR_SKBMARK]	= { .type = NLA_U64 },
		[IPSET_ATTR_SKBPRIO]	= { .type = NLA_U32 },
		[IPSET_ATTR_SKBQUEUE]	= { .type = NLA_U16 },
	},
	.me		= THIS_MODULE,
};

static int __init
hash_ipmac_init(void)
{
	return ip_set_type_register(&hash_ipmac_type);
}

static void __exit
hash_ipmac_fini(void)
{
	ip_set_type_unregister(&hash_ipmac_type);
}

module_init(hash_ipmac_init);
module_exit(hash_ipmac_fini);
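The new hash:ip,mac type refuses the all-zero MAC address as an element key (the `invalid_ether` comparison in both the kernel-side and userspace-side add/del/test paths). A minimal userspace sketch of that check, with `memcmp()` standing in for the kernel's `ether_addr_equal()`; the function name here is illustrative, not kernel API:

```c
#include <string.h>

#define ETH_ALEN 6

/* All-zero MAC: a zero-valued element cannot be stored in the set. */
static const unsigned char invalid_ether[ETH_ALEN] = { 0 };

/* Returns 1 when the MAC may be used as part of a set element key. */
int ipmac_ether_valid(const unsigned char *ether)
{
	return memcmp(ether, invalid_ether, ETH_ALEN) != 0;
}
```

Both `hash_ipmac4_kadt()` and `hash_ipmac4_uadt()` apply this test after copying the source MAC into the element, returning `-EINVAL` and `-IPSET_ERR_HASH_ELEM` respectively.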
@@ -85,7 +85,7 @@ hash_ipmark4_kadt(struct ip_set *set, const struct sk_buff *skb,
 		  const struct xt_action_param *par,
 		  enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_ipmark *h = set->data;
+	const struct hash_ipmark4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipmark4_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);
@@ -101,7 +101,7 @@ static int
 hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipmark *h = set->data;
+	const struct hash_ipmark4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipmark4_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
@@ -193,7 +193,7 @@ hash_ipmark6_data_list(struct sk_buff *skb,
 }
 
 static inline void
-hash_ipmark6_data_next(struct hash_ipmark4_elem *next,
+hash_ipmark6_data_next(struct hash_ipmark6_elem *next,
 		       const struct hash_ipmark6_elem *d)
 {
 }
@@ -211,7 +211,7 @@ hash_ipmark6_kadt(struct ip_set *set, const struct sk_buff *skb,
 		  const struct xt_action_param *par,
 		  enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_ipmark *h = set->data;
+	const struct hash_ipmark6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipmark6_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);
@@ -227,7 +227,7 @@ static int
 hash_ipmark6_uadt(struct ip_set *set, struct nlattr *tb[],
 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipmark *h = set->data;
+	const struct hash_ipmark6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipmark6_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
...
@@ -108,7 +108,7 @@ static int
 hash_ipport4_uadt(struct ip_set *set, struct nlattr *tb[],
 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipport *h = set->data;
+	const struct hash_ipport4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipport4_elem e = { .ip = 0 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
@@ -231,7 +231,7 @@ hash_ipport6_data_list(struct sk_buff *skb,
 }
 
 static inline void
-hash_ipport6_data_next(struct hash_ipport4_elem *next,
+hash_ipport6_data_next(struct hash_ipport6_elem *next,
 		       const struct hash_ipport6_elem *d)
 {
 	next->port = d->port;
@@ -266,7 +266,7 @@ static int
 hash_ipport6_uadt(struct ip_set *set, struct nlattr *tb[],
 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipport *h = set->data;
+	const struct hash_ipport6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipport6_elem e = { .ip = { .all = { 0 } } };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
...
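The `hash_ipport6_data_next()` change above is more than a cosmetic rename: the callback copies `d->port` into `next`, so declaring `next` with the IPv4 element type would address the port member at the IPv4 layout's offset even though the caller passes an IPv6 element. A standalone sketch with simplified, hypothetical struct layouts (not the kernel's exact definitions) shows the two offsets really differ:

```c
#include <stddef.h>

/* Simplified stand-ins for the two element structs; the layouts
 * are illustrative only. */
struct elem4 { unsigned int ip; unsigned short port; };       /* IPv4 key */
struct elem6 { unsigned char ip[16]; unsigned short port; };  /* IPv6 key */

size_t port_offset4(void) { return offsetof(struct elem4, port); }
size_t port_offset6(void) { return offsetof(struct elem6, port); }
```

With the mismatched prototype, writing `next->port` through a `struct elem4 *` that actually points at a `struct elem6` would scribble into the middle of the IPv6 address instead of the port field.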
@@ -111,7 +111,7 @@ static int
 hash_ipportip4_uadt(struct ip_set *set, struct nlattr *tb[],
 		    enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipportip *h = set->data;
+	const struct hash_ipportip4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipportip4_elem e = { .ip = 0 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
@@ -241,7 +241,7 @@ hash_ipportip6_data_list(struct sk_buff *skb,
 }
 
 static inline void
-hash_ipportip6_data_next(struct hash_ipportip4_elem *next,
+hash_ipportip6_data_next(struct hash_ipportip6_elem *next,
 			 const struct hash_ipportip6_elem *d)
 {
 	next->port = d->port;
@@ -277,7 +277,7 @@ static int
 hash_ipportip6_uadt(struct ip_set *set, struct nlattr *tb[],
 		    enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipportip *h = set->data;
+	const struct hash_ipportip6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipportip6_elem e = { .ip = { .all = { 0 } } };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
...
@@ -138,7 +138,7 @@ hash_ipportnet4_kadt(struct ip_set *set, const struct sk_buff *skb,
 		     const struct xt_action_param *par,
 		     enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_ipportnet *h = set->data;
+	const struct hash_ipportnet4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipportnet4_elem e = {
 		.cidr = INIT_CIDR(h->nets[0].cidr[0], HOST_MASK),
@@ -163,7 +163,7 @@ static int
 hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
 		     enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipportnet *h = set->data;
+	const struct hash_ipportnet4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipportnet4_elem e = { .cidr = HOST_MASK - 1 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
@@ -370,7 +370,7 @@ hash_ipportnet6_data_list(struct sk_buff *skb,
 }
 
 static inline void
-hash_ipportnet6_data_next(struct hash_ipportnet4_elem *next,
+hash_ipportnet6_data_next(struct hash_ipportnet6_elem *next,
 			  const struct hash_ipportnet6_elem *d)
 {
 	next->port = d->port;
@@ -389,7 +389,7 @@ hash_ipportnet6_kadt(struct ip_set *set, const struct sk_buff *skb,
 		     const struct xt_action_param *par,
 		     enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_ipportnet *h = set->data;
+	const struct hash_ipportnet6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipportnet6_elem e = {
 		.cidr = INIT_CIDR(h->nets[0].cidr[0], HOST_MASK),
@@ -414,7 +414,7 @@ static int
 hash_ipportnet6_uadt(struct ip_set *set, struct nlattr *tb[],
 		     enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_ipportnet *h = set->data;
+	const struct hash_ipportnet6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_ipportnet6_elem e = { .cidr = HOST_MASK - 1 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
...
@@ -117,7 +117,7 @@ hash_net4_kadt(struct ip_set *set, const struct sk_buff *skb,
 	       const struct xt_action_param *par,
 	       enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_net *h = set->data;
+	const struct hash_net4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_net4_elem e = {
 		.cidr = INIT_CIDR(h->nets[0].cidr[0], HOST_MASK),
@@ -139,7 +139,7 @@ static int
 hash_net4_uadt(struct ip_set *set, struct nlattr *tb[],
 	       enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_net *h = set->data;
+	const struct hash_net4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_net4_elem e = { .cidr = HOST_MASK };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
@@ -268,7 +268,7 @@ hash_net6_data_list(struct sk_buff *skb, const struct hash_net6_elem *data)
 }
 
 static inline void
-hash_net6_data_next(struct hash_net4_elem *next,
+hash_net6_data_next(struct hash_net6_elem *next,
 		    const struct hash_net6_elem *d)
 {
 }
@@ -286,7 +286,7 @@ hash_net6_kadt(struct ip_set *set, const struct sk_buff *skb,
 	       const struct xt_action_param *par,
 	       enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_net *h = set->data;
+	const struct hash_net6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_net6_elem e = {
 		.cidr = INIT_CIDR(h->nets[0].cidr[0], HOST_MASK),
...
@@ -156,7 +156,7 @@ hash_netiface4_kadt(struct ip_set *set, const struct sk_buff *skb,
 		    const struct xt_action_param *par,
 		    enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	struct hash_netiface *h = set->data;
+	struct hash_netiface4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netiface4_elem e = {
 		.cidr = INIT_CIDR(h->nets[0].cidr[0], HOST_MASK),
@@ -170,7 +170,7 @@ hash_netiface4_kadt(struct ip_set *set, const struct sk_buff *skb,
 	ip4addrptr(skb, opt->flags & IPSET_DIM_ONE_SRC, &e.ip);
 	e.ip &= ip_set_netmask(e.cidr);
 
-#define IFACE(dir)	(par->dir ? par->dir->name : "")
+#define IFACE(dir)	(par->state->dir ? par->state->dir->name : "")
 #define SRCDIR		(opt->flags & IPSET_DIM_TWO_SRC)
 
 	if (opt->cmdflags & IPSET_FLAG_PHYSDEV) {
@@ -196,7 +196,7 @@ static int
 hash_netiface4_uadt(struct ip_set *set, struct nlattr *tb[],
 		    enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	struct hash_netiface *h = set->data;
+	struct hash_netiface4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netiface4_elem e = { .cidr = HOST_MASK, .elem = 1 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
@@ -348,7 +348,7 @@ hash_netiface6_data_list(struct sk_buff *skb,
 }
 
 static inline void
-hash_netiface6_data_next(struct hash_netiface4_elem *next,
+hash_netiface6_data_next(struct hash_netiface6_elem *next,
 			 const struct hash_netiface6_elem *d)
 {
 }
@@ -367,7 +367,7 @@ hash_netiface6_kadt(struct ip_set *set, const struct sk_buff *skb,
 		    const struct xt_action_param *par,
 		    enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	struct hash_netiface *h = set->data;
+	struct hash_netiface6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netiface6_elem e = {
 		.cidr = INIT_CIDR(h->nets[0].cidr[0], HOST_MASK),
...
@@ -143,7 +143,7 @@ hash_netnet4_kadt(struct ip_set *set, const struct sk_buff *skb,
 		  const struct xt_action_param *par,
 		  enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_netnet *h = set->data;
+	const struct hash_netnet4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netnet4_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);
@@ -165,7 +165,7 @@ static int
 hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[],
 		  enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_netnet *h = set->data;
+	const struct hash_netnet4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netnet4_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
@@ -352,7 +352,7 @@ hash_netnet6_data_list(struct sk_buff *skb,
 }
 
 static inline void
-hash_netnet6_data_next(struct hash_netnet4_elem *next,
+hash_netnet6_data_next(struct hash_netnet6_elem *next,
 		       const struct hash_netnet6_elem *d)
 {
 }
@@ -377,7 +377,7 @@ hash_netnet6_kadt(struct ip_set *set, const struct sk_buff *skb,
 		  const struct xt_action_param *par,
 		  enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_netnet *h = set->data;
+	const struct hash_netnet6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netnet6_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);
...
@@ -133,7 +133,7 @@ hash_netport4_kadt(struct ip_set *set, const struct sk_buff *skb,
 		   const struct xt_action_param *par,
 		   enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_netport *h = set->data;
+	const struct hash_netport4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netport4_elem e = {
 		.cidr = INIT_CIDR(h->nets[0].cidr[0], HOST_MASK),
@@ -157,7 +157,7 @@ static int
 hash_netport4_uadt(struct ip_set *set, struct nlattr *tb[],
 		   enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_netport *h = set->data;
+	const struct hash_netport4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netport4_elem e = { .cidr = HOST_MASK - 1 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
@@ -329,7 +329,7 @@ hash_netport6_data_list(struct sk_buff *skb,
 }
 
 static inline void
-hash_netport6_data_next(struct hash_netport4_elem *next,
+hash_netport6_data_next(struct hash_netport6_elem *next,
 			const struct hash_netport6_elem *d)
 {
 	next->port = d->port;
@@ -348,7 +348,7 @@ hash_netport6_kadt(struct ip_set *set, const struct sk_buff *skb,
 		   const struct xt_action_param *par,
 		   enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_netport *h = set->data;
+	const struct hash_netport6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netport6_elem e = {
 		.cidr = INIT_CIDR(h->nets[0].cidr[0], HOST_MASK),
@@ -372,7 +372,7 @@ static int
 hash_netport6_uadt(struct ip_set *set, struct nlattr *tb[],
 		   enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_netport *h = set->data;
+	const struct hash_netport6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netport6_elem e = { .cidr = HOST_MASK - 1 };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
...
@@ -154,7 +154,7 @@ hash_netportnet4_kadt(struct ip_set *set, const struct sk_buff *skb,
 		      const struct xt_action_param *par,
 		      enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_netportnet *h = set->data;
+	const struct hash_netportnet4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netportnet4_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);
@@ -180,7 +180,7 @@ static int
 hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
 		      enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_netportnet *h = set->data;
+	const struct hash_netportnet4 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netportnet4_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
@@ -406,7 +406,7 @@ hash_netportnet6_data_list(struct sk_buff *skb,
 }
 
 static inline void
-hash_netportnet6_data_next(struct hash_netportnet4_elem *next,
+hash_netportnet6_data_next(struct hash_netportnet6_elem *next,
 			   const struct hash_netportnet6_elem *d)
 {
 	next->port = d->port;
@@ -432,7 +432,7 @@ hash_netportnet6_kadt(struct ip_set *set, const struct sk_buff *skb,
 		      const struct xt_action_param *par,
 		      enum ipset_adt adt, struct ip_set_adt_opt *opt)
 {
-	const struct hash_netportnet *h = set->data;
+	const struct hash_netportnet6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netportnet6_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_KEXT(skb, opt, set);
@@ -458,7 +458,7 @@ static int
 hash_netportnet6_uadt(struct ip_set *set, struct nlattr *tb[],
 		      enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
 {
-	const struct hash_netportnet *h = set->data;
+	const struct hash_netportnet6 *h = set->data;
 	ipset_adtfn adtfn = set->variant->adt[adt];
 	struct hash_netportnet6_elem e = { };
 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
...
@@ -166,6 +166,7 @@ __list_set_del_rcu(struct rcu_head *rcu)
 static inline void
 list_set_del(struct ip_set *set, struct set_elem *e)
 {
+	set->elements--;
	list_del_rcu(&e->list);
	call_rcu(&e->rcu, __list_set_del_rcu);
 }
@@ -227,7 +228,7 @@ list_set_init_extensions(struct ip_set *set, const struct ip_set_ext *ext,
	if (SET_WITH_COUNTER(set))
		ip_set_init_counter(ext_counter(e, set), ext);
	if (SET_WITH_COMMENT(set))
-		ip_set_init_comment(ext_comment(e, set), ext);
+		ip_set_init_comment(set, ext_comment(e, set), ext);
	if (SET_WITH_SKBINFO(set))
		ip_set_init_skbinfo(ext_skbinfo(e, set), ext);
	/* Update timeout last */
@@ -309,6 +310,7 @@ list_set_uadd(struct ip_set *set, void *value, const struct ip_set_ext *ext,
		list_add_rcu(&e->list, &prev->list);
	else
		list_add_tail_rcu(&e->list, &map->members);
+	set->elements++;

	return 0;
 }
@@ -419,6 +421,8 @@ list_set_flush(struct ip_set *set)
	list_for_each_entry_safe(e, n, &map->members, list)
		list_set_del(set, e);
+	set->elements = 0;
+	set->ext_size = 0;
 }

 static void
@@ -441,12 +445,12 @@ list_set_destroy(struct ip_set *set)
	set->data = NULL;
 }

-static int
-list_set_head(struct ip_set *set, struct sk_buff *skb)
+/* Calculate the actual memory size of the set data */
+static size_t
+list_set_memsize(const struct list_set *map, size_t dsize)
 {
-	const struct list_set *map = set->data;
-	struct nlattr *nested;
	struct set_elem *e;
+	size_t memsize;
	u32 n = 0;

	rcu_read_lock();
@@ -454,13 +458,25 @@ list_set_head(struct ip_set *set, struct sk_buff *skb)
		n++;
	rcu_read_unlock();

+	memsize = sizeof(*map) + n * dsize;
+
+	return memsize;
+}
+
+static int
+list_set_head(struct ip_set *set, struct sk_buff *skb)
+{
+	const struct list_set *map = set->data;
+	struct nlattr *nested;
+	size_t memsize = list_set_memsize(map, set->dsize) + set->ext_size;
+
	nested = ipset_nest_start(skb, IPSET_ATTR_DATA);
	if (!nested)
		goto nla_put_failure;
	if (nla_put_net32(skb, IPSET_ATTR_SIZE, htonl(map->size)) ||
	    nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref)) ||
-	    nla_put_net32(skb, IPSET_ATTR_MEMSIZE,
-			  htonl(sizeof(*map) + n * set->dsize)))
+	    nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize)) ||
+	    nla_put_net32(skb, IPSET_ATTR_ELEMENTS, htonl(set->elements)))
		goto nla_put_failure;
	if (unlikely(ip_set_put_flags(skb, set)))
		goto nla_put_failure;
@@ -570,11 +586,8 @@ list_set_gc_init(struct ip_set *set, void (*gc)(unsigned long ul_set))
 {
	struct list_set *map = set->data;

-	init_timer(&map->gc);
-	map->gc.data = (unsigned long)set;
-	map->gc.function = gc;
-	map->gc.expires = jiffies + IPSET_GC_PERIOD(set->timeout) * HZ;
-	add_timer(&map->gc);
+	setup_timer(&map->gc, gc, (unsigned long)set);
+	mod_timer(&map->gc, jiffies + IPSET_GC_PERIOD(set->timeout) * HZ);
 }

 /* Create list:set type of sets */
...
@@ -1305,7 +1305,7 @@ nf_conntrack_in(struct net *net, u_int8_t pf, unsigned int hooknum,
		if (skb->nfct)
			goto out;
	}
-
+repeat:
	ct = resolve_normal_ct(net, tmpl, skb, dataoff, pf, protonum,
			       l3proto, l4proto, &set_reply, &ctinfo);
	if (!ct) {
@@ -1337,6 +1337,12 @@ nf_conntrack_in(struct net *net, u_int8_t pf, unsigned int hooknum,
		NF_CT_STAT_INC_ATOMIC(net, invalid);
		if (ret == -NF_DROP)
			NF_CT_STAT_INC_ATOMIC(net, drop);
+		/* Special case: TCP tracker reports an attempt to reopen a
+		 * closed/aborted connection. We have to go back and create a
+		 * fresh conntrack.
+		 */
+		if (ret == -NF_REPEAT)
+			goto repeat;
		ret = -ret;
		goto out;
	}
@@ -1344,15 +1350,8 @@ nf_conntrack_in(struct net *net, u_int8_t pf, unsigned int hooknum,
	if (set_reply && !test_and_set_bit(IPS_SEEN_REPLY_BIT, &ct->status))
		nf_conntrack_event_cache(IPCT_REPLY, ct);
 out:
-	if (tmpl) {
-		/* Special case: we have to repeat this hook, assign the
-		 * template again to this packet. We assume that this packet
-		 * has no conntrack assigned. This is used by nf_ct_tcp. */
-		if (ret == NF_REPEAT)
-			skb->nfct = (struct nf_conntrack *)tmpl;
-		else
-			nf_ct_put(tmpl);
-	}
+	if (tmpl)
+		nf_ct_put(tmpl);
+
	return ret;
 }
...
@@ -281,15 +281,15 @@ void nf_ct_l4proto_unregister_sysctl(struct net *net,
 /* FIXME: Allow NULL functions and sub in pointers to generic for
    them. --RR */
-int nf_ct_l4proto_register(struct nf_conntrack_l4proto *l4proto)
+int nf_ct_l4proto_register_one(struct nf_conntrack_l4proto *l4proto)
 {
	int ret = 0;

	if (l4proto->l3proto >= PF_MAX)
		return -EBUSY;

-	if ((l4proto->to_nlattr && !l4proto->nlattr_size)
-		|| (l4proto->tuple_to_nlattr && !l4proto->nlattr_tuple_size))
+	if ((l4proto->to_nlattr && !l4proto->nlattr_size) ||
+	    (l4proto->tuple_to_nlattr && !l4proto->nlattr_tuple_size))
		return -EINVAL;

	mutex_lock(&nf_ct_proto_mutex);
@@ -307,7 +307,8 @@ int nf_ct_l4proto_register_one(struct nf_conntrack_l4proto *l4proto)
	}

	for (i = 0; i < MAX_NF_CT_PROTO; i++)
-		RCU_INIT_POINTER(proto_array[i], &nf_conntrack_l4proto_generic);
+		RCU_INIT_POINTER(proto_array[i],
+				 &nf_conntrack_l4proto_generic);

	/* Before making proto_array visible to lockless readers,
	 * we must make sure its content is committed to memory.
@@ -335,10 +336,10 @@ int nf_ct_l4proto_register_one(struct nf_conntrack_l4proto *l4proto)
	mutex_unlock(&nf_ct_proto_mutex);
	return ret;
 }
-EXPORT_SYMBOL_GPL(nf_ct_l4proto_register);
+EXPORT_SYMBOL_GPL(nf_ct_l4proto_register_one);

-int nf_ct_l4proto_pernet_register(struct net *net,
-				  struct nf_conntrack_l4proto *l4proto)
+int nf_ct_l4proto_pernet_register_one(struct net *net,
+				      struct nf_conntrack_l4proto *l4proto)
 {
	int ret = 0;
	struct nf_proto_net *pn = NULL;
@@ -361,9 +362,9 @@ int nf_ct_l4proto_pernet_register_one(struct net *net,
 out:
	return ret;
 }
-EXPORT_SYMBOL_GPL(nf_ct_l4proto_pernet_register);
+EXPORT_SYMBOL_GPL(nf_ct_l4proto_pernet_register_one);

-void nf_ct_l4proto_unregister(struct nf_conntrack_l4proto *l4proto)
+void nf_ct_l4proto_unregister_one(struct nf_conntrack_l4proto *l4proto)
 {
	BUG_ON(l4proto->l3proto >= PF_MAX);
@@ -378,10 +379,10 @@ void nf_ct_l4proto_unregister_one(struct nf_conntrack_l4proto *l4proto)
	synchronize_rcu();
 }
-EXPORT_SYMBOL_GPL(nf_ct_l4proto_unregister);
+EXPORT_SYMBOL_GPL(nf_ct_l4proto_unregister_one);

-void nf_ct_l4proto_pernet_unregister(struct net *net,
-				     struct nf_conntrack_l4proto *l4proto)
+void nf_ct_l4proto_pernet_unregister_one(struct net *net,
+					 struct nf_conntrack_l4proto *l4proto)
 {
	struct nf_proto_net *pn = NULL;
@@ -395,6 +396,66 @@ void nf_ct_l4proto_pernet_unregister_one(struct net *net,
	/* Remove all contrack entries for this protocol */
	nf_ct_iterate_cleanup(net, kill_l4proto, l4proto, 0, 0);
 }
+EXPORT_SYMBOL_GPL(nf_ct_l4proto_pernet_unregister_one);
+
+int nf_ct_l4proto_register(struct nf_conntrack_l4proto *l4proto[],
+			   unsigned int num_proto)
+{
+	int ret = -EINVAL, ver;
+	unsigned int i;
+
+	for (i = 0; i < num_proto; i++) {
+		ret = nf_ct_l4proto_register_one(l4proto[i]);
+		if (ret < 0)
+			break;
+	}
+	if (i != num_proto) {
+		ver = l4proto[i]->l3proto == PF_INET6 ? 6 : 4;
+		pr_err("nf_conntrack_ipv%d: can't register %s%d proto.\n",
+		       ver, l4proto[i]->name, ver);
+		nf_ct_l4proto_unregister(l4proto, i);
+	}
+	return ret;
+}
+EXPORT_SYMBOL_GPL(nf_ct_l4proto_register);
+
+int nf_ct_l4proto_pernet_register(struct net *net,
+				  struct nf_conntrack_l4proto *l4proto[],
+				  unsigned int num_proto)
+{
+	int ret = -EINVAL;
+	unsigned int i;
+
+	for (i = 0; i < num_proto; i++) {
+		ret = nf_ct_l4proto_pernet_register_one(net, l4proto[i]);
+		if (ret < 0)
+			break;
+	}
+	if (i != num_proto) {
+		pr_err("nf_conntrack_%s%d: pernet registration failed\n",
+		       l4proto[i]->name,
+		       l4proto[i]->l3proto == PF_INET6 ? 6 : 4);
+		nf_ct_l4proto_pernet_unregister(net, l4proto, i);
+	}
+	return ret;
+}
+EXPORT_SYMBOL_GPL(nf_ct_l4proto_pernet_register);
+
+void nf_ct_l4proto_unregister(struct nf_conntrack_l4proto *l4proto[],
+			      unsigned int num_proto)
+{
+	while (num_proto-- != 0)
+		nf_ct_l4proto_unregister_one(l4proto[num_proto]);
+}
+EXPORT_SYMBOL_GPL(nf_ct_l4proto_unregister);
+
+void nf_ct_l4proto_pernet_unregister(struct net *net,
+				     struct nf_conntrack_l4proto *l4proto[],
+				     unsigned int num_proto)
+{
+	while (num_proto-- != 0)
+		nf_ct_l4proto_pernet_unregister_one(net, l4proto[num_proto]);
+}
 EXPORT_SYMBOL_GPL(nf_ct_l4proto_pernet_unregister);

 int nf_conntrack_proto_pernet_init(struct net *net)
...
@@ -936,30 +936,21 @@ static struct nf_conntrack_l4proto dccp_proto6 __read_mostly = {
	.init_net		= dccp_init_net,
 };

+static struct nf_conntrack_l4proto *dccp_proto[] = {
+	&dccp_proto4,
+	&dccp_proto6,
+};
+
 static __net_init int dccp_net_init(struct net *net)
 {
-	int ret = 0;
-
-	ret = nf_ct_l4proto_pernet_register(net, &dccp_proto4);
-	if (ret < 0) {
-		pr_err("nf_conntrack_dccp4: pernet registration failed.\n");
-		goto out;
-	}
-	ret = nf_ct_l4proto_pernet_register(net, &dccp_proto6);
-	if (ret < 0) {
-		pr_err("nf_conntrack_dccp6: pernet registration failed.\n");
-		goto cleanup_dccp4;
-	}
-	return 0;
-cleanup_dccp4:
-	nf_ct_l4proto_pernet_unregister(net, &dccp_proto4);
-out:
-	return ret;
+	return nf_ct_l4proto_pernet_register(net, dccp_proto,
+					     ARRAY_SIZE(dccp_proto));
 }

 static __net_exit void dccp_net_exit(struct net *net)
 {
-	nf_ct_l4proto_pernet_unregister(net, &dccp_proto6);
-	nf_ct_l4proto_pernet_unregister(net, &dccp_proto4);
+	nf_ct_l4proto_pernet_unregister(net, dccp_proto,
+					ARRAY_SIZE(dccp_proto));
 }

 static struct pernet_operations dccp_net_ops = {
@@ -975,29 +966,16 @@ static int __init nf_conntrack_proto_dccp_init(void)
	ret = register_pernet_subsys(&dccp_net_ops);
	if (ret < 0)
-		goto out_pernet;
-
-	ret = nf_ct_l4proto_register(&dccp_proto4);
-	if (ret < 0)
-		goto out_dccp4;
-
-	ret = nf_ct_l4proto_register(&dccp_proto6);
+		return ret;
+
+	ret = nf_ct_l4proto_register(dccp_proto, ARRAY_SIZE(dccp_proto));
	if (ret < 0)
-		goto out_dccp6;
-
-	return 0;
-out_dccp6:
-	nf_ct_l4proto_unregister(&dccp_proto4);
-out_dccp4:
-	unregister_pernet_subsys(&dccp_net_ops);
-out_pernet:
+		unregister_pernet_subsys(&dccp_net_ops);
+
	return ret;
 }

 static void __exit nf_conntrack_proto_dccp_fini(void)
 {
-	nf_ct_l4proto_unregister(&dccp_proto6);
-	nf_ct_l4proto_unregister(&dccp_proto4);
+	nf_ct_l4proto_unregister(dccp_proto, ARRAY_SIZE(dccp_proto));
	unregister_pernet_subsys(&dccp_net_ops);
 }
...
@@ -396,7 +396,9 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_gre4 __read_mostly = {
 static int proto_gre_net_init(struct net *net)
 {
	int ret = 0;
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_gre4);
+
+	ret = nf_ct_l4proto_pernet_register_one(net,
+						&nf_conntrack_l4proto_gre4);
	if (ret < 0)
		pr_err("nf_conntrack_gre4: pernet registration failed.\n");
	return ret;
@@ -404,7 +406,7 @@ static int proto_gre_net_init(struct net *net)

 static void proto_gre_net_exit(struct net *net)
 {
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_gre4);
+	nf_ct_l4proto_pernet_unregister_one(net, &nf_conntrack_l4proto_gre4);
	nf_ct_gre_keymap_flush(net);
 }

@@ -422,8 +424,7 @@ static int __init nf_ct_proto_gre_init(void)
	ret = register_pernet_subsys(&proto_gre_net_ops);
	if (ret < 0)
		goto out_pernet;
-
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_gre4);
+	ret = nf_ct_l4proto_register_one(&nf_conntrack_l4proto_gre4);
	if (ret < 0)
		goto out_gre4;

@@ -436,7 +437,7 @@ static int __init nf_ct_proto_gre_init(void)

 static void __exit nf_ct_proto_gre_fini(void)
 {
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_gre4);
+	nf_ct_l4proto_unregister_one(&nf_conntrack_l4proto_gre4);
	unregister_pernet_subsys(&proto_gre_net_ops);
 }
...
@@ -816,32 +816,21 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6 __read_mostly = {
	.init_net		= sctp_init_net,
 };

+static struct nf_conntrack_l4proto *sctp_proto[] = {
+	&nf_conntrack_l4proto_sctp4,
+	&nf_conntrack_l4proto_sctp6,
+};
+
 static int sctp_net_init(struct net *net)
 {
-	int ret = 0;
-
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_sctp4);
-	if (ret < 0) {
-		pr_err("nf_conntrack_sctp4: pernet registration failed.\n");
-		goto out;
-	}
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_sctp6);
-	if (ret < 0) {
-		pr_err("nf_conntrack_sctp6: pernet registration failed.\n");
-		goto cleanup_sctp4;
-	}
-	return 0;
-cleanup_sctp4:
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_sctp4);
-out:
-	return ret;
+	return nf_ct_l4proto_pernet_register(net, sctp_proto,
+					     ARRAY_SIZE(sctp_proto));
 }

 static void sctp_net_exit(struct net *net)
 {
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_sctp6);
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_sctp4);
+	nf_ct_l4proto_pernet_unregister(net, sctp_proto,
+					ARRAY_SIZE(sctp_proto));
 }

 static struct pernet_operations sctp_net_ops = {
@@ -857,29 +846,16 @@ static int __init nf_conntrack_proto_sctp_init(void)
	ret = register_pernet_subsys(&sctp_net_ops);
	if (ret < 0)
-		goto out_pernet;
-
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_sctp4);
-	if (ret < 0)
-		goto out_sctp4;
-
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_sctp6);
+		return ret;
+
+	ret = nf_ct_l4proto_register(sctp_proto, ARRAY_SIZE(sctp_proto));
	if (ret < 0)
-		goto out_sctp6;
-
-	return 0;
-out_sctp6:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp4);
-out_sctp4:
-	unregister_pernet_subsys(&sctp_net_ops);
-out_pernet:
+		unregister_pernet_subsys(&sctp_net_ops);
+
	return ret;
 }

 static void __exit nf_conntrack_proto_sctp_fini(void)
 {
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp6);
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp4);
+	nf_ct_l4proto_unregister(sctp_proto, ARRAY_SIZE(sctp_proto));
	unregister_pernet_subsys(&sctp_net_ops);
 }
...
@@ -336,32 +336,21 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite6 __read_mostly =
	.init_net		= udplite_init_net,
 };

+static struct nf_conntrack_l4proto *udplite_proto[] = {
+	&nf_conntrack_l4proto_udplite4,
+	&nf_conntrack_l4proto_udplite6,
+};
+
 static int udplite_net_init(struct net *net)
 {
-	int ret = 0;
-
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_udplite4);
-	if (ret < 0) {
-		pr_err("nf_conntrack_udplite4: pernet registration failed.\n");
-		goto out;
-	}
-	ret = nf_ct_l4proto_pernet_register(net, &nf_conntrack_l4proto_udplite6);
-	if (ret < 0) {
-		pr_err("nf_conntrack_udplite6: pernet registration failed.\n");
-		goto cleanup_udplite4;
-	}
-	return 0;
-cleanup_udplite4:
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_udplite4);
-out:
-	return ret;
+	return nf_ct_l4proto_pernet_register(net, udplite_proto,
+					     ARRAY_SIZE(udplite_proto));
 }

 static void udplite_net_exit(struct net *net)
 {
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_udplite6);
-	nf_ct_l4proto_pernet_unregister(net, &nf_conntrack_l4proto_udplite4);
+	nf_ct_l4proto_pernet_unregister(net, udplite_proto,
+					ARRAY_SIZE(udplite_proto));
 }

 static struct pernet_operations udplite_net_ops = {
@@ -377,29 +366,16 @@ static int __init nf_conntrack_proto_udplite_init(void)
	ret = register_pernet_subsys(&udplite_net_ops);
	if (ret < 0)
-		goto out_pernet;
-
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_udplite4);
-	if (ret < 0)
-		goto out_udplite4;
-
-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_udplite6);
+		return ret;
+
+	ret = nf_ct_l4proto_register(udplite_proto, ARRAY_SIZE(udplite_proto));
	if (ret < 0)
-		goto out_udplite6;
-
-	return 0;
-out_udplite6:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite4);
-out_udplite4:
-	unregister_pernet_subsys(&udplite_net_ops);
-out_pernet:
+		unregister_pernet_subsys(&udplite_net_ops);
+
	return ret;
 }

 static void __exit nf_conntrack_proto_udplite_exit(void)
 {
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite6);
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite4);
+	nf_ct_l4proto_unregister(udplite_proto, ARRAY_SIZE(udplite_proto));
	unregister_pernet_subsys(&udplite_net_ops);
 }
...
@@ -19,7 +19,7 @@ void nf_dup_netdev_egress(const struct nft_pktinfo *pkt, int oif)
	struct net_device *dev;
	struct sk_buff *skb;

-	dev = dev_get_by_index_rcu(pkt->net, oif);
+	dev = dev_get_by_index_rcu(nft_net(pkt), oif);
	if (dev == NULL)
		return;
...
...@@ -11,11 +11,6 @@ ...@@ -11,11 +11,6 @@
#define NFDEBUG(format, args...) #define NFDEBUG(format, args...)
#endif #endif
/* core.c */
unsigned int nf_iterate(struct sk_buff *skb, struct nf_hook_state *state,
struct nf_hook_entry **entryp);
/* nf_queue.c */ /* nf_queue.c */
int nf_queue(struct sk_buff *skb, struct nf_hook_state *state, int nf_queue(struct sk_buff *skb, struct nf_hook_state *state,
struct nf_hook_entry **entryp, unsigned int verdict); struct nf_hook_entry **entryp, unsigned int verdict);
......
@@ -108,7 +108,7 @@ void nf_queue_nf_hook_drop(struct net *net, const struct nf_hook_entry *entry)
 }

 static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
-		      unsigned int queuenum)
+		      struct nf_hook_entry *hook_entry, unsigned int queuenum)
 {
	int status = -ENOENT;
	struct nf_queue_entry *entry = NULL;
@@ -136,6 +136,7 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
	*entry = (struct nf_queue_entry) {
		.skb	= skb,
		.state	= *state,
+		.hook	= hook_entry,
		.size	= sizeof(*entry) + afinfo->route_key_size,
	};

@@ -163,8 +164,7 @@ int nf_queue(struct sk_buff *skb, struct nf_hook_state *state,
	struct nf_hook_entry *entry = *entryp;
	int ret;

-	RCU_INIT_POINTER(state->hook_entries, entry);
-	ret = __nf_queue(skb, state, verdict >> NF_VERDICT_QBITS);
+	ret = __nf_queue(skb, state, entry, verdict >> NF_VERDICT_QBITS);
	if (ret < 0) {
		if (ret == -ESRCH &&
		    (verdict & NF_VERDICT_FLAG_QUEUE_BYPASS)) {
@@ -177,17 +177,34 @@ int nf_queue(struct sk_buff *skb, struct nf_hook_state *state,
	return 0;
 }

+static unsigned int nf_iterate(struct sk_buff *skb,
+			       struct nf_hook_state *state,
+			       struct nf_hook_entry **entryp)
+{
+	unsigned int verdict;
+
+	do {
+repeat:
+		verdict = (*entryp)->ops.hook((*entryp)->ops.priv, skb, state);
+		if (verdict != NF_ACCEPT) {
+			if (verdict != NF_REPEAT)
+				return verdict;
+			goto repeat;
+		}
+		*entryp = rcu_dereference((*entryp)->next);
+	} while (*entryp);
+
+	return NF_ACCEPT;
+}
+
 void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict)
 {
-	struct nf_hook_entry *hook_entry;
+	struct nf_hook_entry *hook_entry = entry->hook;
+	struct nf_hook_ops *elem = &hook_entry->ops;
	struct sk_buff *skb = entry->skb;
	const struct nf_afinfo *afinfo;
-	struct nf_hook_ops *elem;
	int err;

-	hook_entry = rcu_dereference(entry->state.hook_entries);
-	elem = &hook_entry->ops;
-
	nf_queue_entry_release_refs(entry);

	/* Continue traversal iff userspace said ok... */
@@ -200,8 +217,6 @@ void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict)
		verdict = NF_DROP;
	}

-	entry->state.thresh = INT_MIN;
-
	if (verdict == NF_ACCEPT) {
		hook_entry = rcu_dereference(hook_entry->next);
		if (hook_entry)
...
...@@ -53,10 +53,10 @@ static noinline void __nft_trace_packet(struct nft_traceinfo *info, ...@@ -53,10 +53,10 @@ static noinline void __nft_trace_packet(struct nft_traceinfo *info,
nft_trace_notify(info); nft_trace_notify(info);
nf_log_trace(pkt->net, pkt->pf, pkt->hook, pkt->skb, pkt->in, nf_log_trace(nft_net(pkt), nft_pf(pkt), nft_hook(pkt), pkt->skb,
pkt->out, &trace_loginfo, "TRACE: %s:%s:%s:%u ", nft_in(pkt), nft_out(pkt), &trace_loginfo,
chain->table->name, chain->name, comments[type], "TRACE: %s:%s:%s:%u ",
rulenum); chain->table->name, chain->name, comments[type], rulenum);
} }
static inline void nft_trace_packet(struct nft_traceinfo *info, static inline void nft_trace_packet(struct nft_traceinfo *info,
...@@ -124,7 +124,7 @@ unsigned int ...@@ -124,7 +124,7 @@ unsigned int
nft_do_chain(struct nft_pktinfo *pkt, void *priv) nft_do_chain(struct nft_pktinfo *pkt, void *priv)
{ {
const struct nft_chain *chain = priv, *basechain = chain; const struct nft_chain *chain = priv, *basechain = chain;
const struct net *net = pkt->net; const struct net *net = nft_net(pkt);
const struct nft_rule *rule; const struct nft_rule *rule;
const struct nft_expr *expr, *last; const struct nft_expr *expr, *last;
struct nft_regs regs; struct nft_regs regs;
...@@ -232,68 +232,40 @@ nft_do_chain(struct nft_pktinfo *pkt, void *priv) ...@@ -232,68 +232,40 @@ nft_do_chain(struct nft_pktinfo *pkt, void *priv)
} }
EXPORT_SYMBOL_GPL(nft_do_chain); EXPORT_SYMBOL_GPL(nft_do_chain);
static struct nft_expr_type *nft_basic_types[] = {
&nft_imm_type,
&nft_cmp_type,
&nft_lookup_type,
&nft_bitwise_type,
&nft_byteorder_type,
&nft_payload_type,
&nft_dynset_type,
&nft_range_type,
};
int __init nf_tables_core_module_init(void) int __init nf_tables_core_module_init(void)
{ {
int err; int err, i;
err = nft_immediate_module_init();
if (err < 0)
goto err1;
err = nft_cmp_module_init();
if (err < 0)
goto err2;
err = nft_lookup_module_init();
if (err < 0)
goto err3;
err = nft_bitwise_module_init();
if (err < 0)
goto err4;
err = nft_byteorder_module_init(); for (i = 0; i < ARRAY_SIZE(nft_basic_types); i++) {
if (err < 0) err = nft_register_expr(nft_basic_types[i]);
goto err5; if (err)
goto err;
err = nft_payload_module_init(); }
if (err < 0)
goto err6;
err = nft_dynset_module_init();
if (err < 0)
goto err7;
err = nft_range_module_init();
if (err < 0)
goto err8;
return 0; return 0;
err8:
nft_dynset_module_exit(); err:
err7: while (i-- > 0)
nft_payload_module_exit(); nft_unregister_expr(nft_basic_types[i]);
err6:
nft_byteorder_module_exit();
err5:
nft_bitwise_module_exit();
err4:
nft_lookup_module_exit();
err3:
nft_cmp_module_exit();
err2:
nft_immediate_module_exit();
err1:
return err; return err;
} }
void nf_tables_core_module_exit(void) void nf_tables_core_module_exit(void)
{ {
nft_dynset_module_exit(); int i;
nft_payload_module_exit();
nft_byteorder_module_exit(); i = ARRAY_SIZE(nft_basic_types);
nft_bitwise_module_exit(); while (i-- > 0)
nft_lookup_module_exit(); nft_unregister_expr(nft_basic_types[i]);
nft_cmp_module_exit();
nft_immediate_module_exit();
} }
@@ -171,7 +171,7 @@ void nft_trace_notify(struct nft_traceinfo *info)
	unsigned int size;
	int event = (NFNL_SUBSYS_NFTABLES << 8) | NFT_MSG_TRACE;

-	if (!nfnetlink_has_listeners(pkt->net, NFNLGRP_NFTRACE))
+	if (!nfnetlink_has_listeners(nft_net(pkt), NFNLGRP_NFTRACE))
		return;

	size = nlmsg_total_size(sizeof(struct nfgenmsg)) +
@@ -207,7 +207,7 @@ void nft_trace_notify(struct nft_traceinfo *info)
	nfmsg->version	= NFNETLINK_V0;
	nfmsg->res_id	= 0;

-	if (nla_put_be32(skb, NFTA_TRACE_NFPROTO, htonl(pkt->pf)))
+	if (nla_put_be32(skb, NFTA_TRACE_NFPROTO, htonl(nft_pf(pkt))))
		goto nla_put_failure;

	if (nla_put_be32(skb, NFTA_TRACE_TYPE, htonl(info->type)))
@@ -249,7 +249,7 @@ void nft_trace_notify(struct nft_traceinfo *info)
		goto nla_put_failure;

	if (!info->packet_dumped) {
-		if (nf_trace_fill_dev_info(skb, pkt->in, pkt->out))
+		if (nf_trace_fill_dev_info(skb, nft_in(pkt), nft_out(pkt)))
			goto nla_put_failure;

		if (nf_trace_fill_pkt_info(skb, pkt))
@@ -258,7 +258,7 @@ void nft_trace_notify(struct nft_traceinfo *info)
	}

	nlmsg_end(skb, nlh);
-	nfnetlink_send(skb, pkt->net, 0, NFNLGRP_NFTRACE, 0, GFP_ATOMIC);
+	nfnetlink_send(skb, nft_net(pkt), 0, NFNLGRP_NFTRACE, 0, GFP_ATOMIC);
	return;

 nla_put_failure:
...
@@ -919,7 +919,7 @@ static struct notifier_block nfqnl_dev_notifier = {

 static int nf_hook_cmp(struct nf_queue_entry *entry, unsigned long entry_ptr)
 {
-	return rcu_access_pointer(entry->state.hook_entries) ==
+	return rcu_access_pointer(entry->hook) ==
		     (struct nf_hook_entry *)entry_ptr;
 }
...
...@@ -121,7 +121,6 @@ static int nft_bitwise_dump(struct sk_buff *skb, const struct nft_expr *expr) ...@@ -121,7 +121,6 @@ static int nft_bitwise_dump(struct sk_buff *skb, const struct nft_expr *expr)
return -1; return -1;
} }
static struct nft_expr_type nft_bitwise_type;
static const struct nft_expr_ops nft_bitwise_ops = { static const struct nft_expr_ops nft_bitwise_ops = {
.type = &nft_bitwise_type, .type = &nft_bitwise_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_bitwise)), .size = NFT_EXPR_SIZE(sizeof(struct nft_bitwise)),
...@@ -130,20 +129,10 @@ static const struct nft_expr_ops nft_bitwise_ops = { ...@@ -130,20 +129,10 @@ static const struct nft_expr_ops nft_bitwise_ops = {
.dump = nft_bitwise_dump, .dump = nft_bitwise_dump,
}; };
static struct nft_expr_type nft_bitwise_type __read_mostly = { struct nft_expr_type nft_bitwise_type __read_mostly = {
.name = "bitwise", .name = "bitwise",
.ops = &nft_bitwise_ops, .ops = &nft_bitwise_ops,
.policy = nft_bitwise_policy, .policy = nft_bitwise_policy,
.maxattr = NFTA_BITWISE_MAX, .maxattr = NFTA_BITWISE_MAX,
.owner = THIS_MODULE, .owner = THIS_MODULE,
}; };
int __init nft_bitwise_module_init(void)
{
return nft_register_expr(&nft_bitwise_type);
}
void nft_bitwise_module_exit(void)
{
nft_unregister_expr(&nft_bitwise_type);
}
@@ -169,7 +169,6 @@ static int nft_byteorder_dump(struct sk_buff *skb, const struct nft_expr *expr)
 	return -1;
 }
 
-static struct nft_expr_type nft_byteorder_type;
 static const struct nft_expr_ops nft_byteorder_ops = {
 	.type		= &nft_byteorder_type,
 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_byteorder)),
@@ -178,20 +177,10 @@ static const struct nft_expr_ops nft_byteorder_ops = {
 	.dump		= nft_byteorder_dump,
 };
 
-static struct nft_expr_type nft_byteorder_type __read_mostly = {
+struct nft_expr_type nft_byteorder_type __read_mostly = {
 	.name		= "byteorder",
 	.ops		= &nft_byteorder_ops,
 	.policy		= nft_byteorder_policy,
 	.maxattr	= NFTA_BYTEORDER_MAX,
 	.owner		= THIS_MODULE,
 };
-
-int __init nft_byteorder_module_init(void)
-{
-	return nft_register_expr(&nft_byteorder_type);
-}
-
-void nft_byteorder_module_exit(void)
-{
-	nft_unregister_expr(&nft_byteorder_type);
-}
@@ -107,7 +107,6 @@ static int nft_cmp_dump(struct sk_buff *skb, const struct nft_expr *expr)
 	return -1;
 }
 
-static struct nft_expr_type nft_cmp_type;
 static const struct nft_expr_ops nft_cmp_ops = {
 	.type		= &nft_cmp_type,
 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_cmp_expr)),
@@ -208,20 +207,10 @@ nft_cmp_select_ops(const struct nft_ctx *ctx, const struct nlattr * const tb[])
 	return &nft_cmp_ops;
 }
 
-static struct nft_expr_type nft_cmp_type __read_mostly = {
+struct nft_expr_type nft_cmp_type __read_mostly = {
 	.name		= "cmp",
 	.select_ops	= nft_cmp_select_ops,
 	.policy		= nft_cmp_policy,
 	.maxattr	= NFTA_CMP_MAX,
 	.owner		= THIS_MODULE,
 };
-
-int __init nft_cmp_module_init(void)
-{
-	return nft_register_expr(&nft_cmp_type);
-}
-
-void nft_cmp_module_exit(void)
-{
-	nft_unregister_expr(&nft_cmp_type);
-}
@@ -261,7 +261,6 @@ static int nft_dynset_dump(struct sk_buff *skb, const struct nft_expr *expr)
 	return -1;
 }
 
-static struct nft_expr_type nft_dynset_type;
 static const struct nft_expr_ops nft_dynset_ops = {
 	.type		= &nft_dynset_type,
 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_dynset)),
@@ -271,20 +270,10 @@ static const struct nft_expr_ops nft_dynset_ops = {
 	.dump		= nft_dynset_dump,
 };
 
-static struct nft_expr_type nft_dynset_type __read_mostly = {
+struct nft_expr_type nft_dynset_type __read_mostly = {
 	.name		= "dynset",
 	.ops		= &nft_dynset_ops,
 	.policy		= nft_dynset_policy,
 	.maxattr	= NFTA_DYNSET_MAX,
 	.owner		= THIS_MODULE,
 };
-
-int __init nft_dynset_module_init(void)
-{
-	return nft_register_expr(&nft_dynset_type);
-}
-
-void nft_dynset_module_exit(void)
-{
-	nft_unregister_expr(&nft_dynset_type);
-}
@@ -144,7 +144,7 @@ void nft_fib_store_result(void *reg, enum nft_fib_result r,
 		*dreg = index;
 		break;
 	case NFT_FIB_RESULT_OIFNAME:
-		dev = dev_get_by_index_rcu(pkt->net, index);
+		dev = dev_get_by_index_rcu(nft_net(pkt), index);
 		strncpy(reg, dev ? dev->name : "", IFNAMSIZ);
 		break;
 	default:
...
@@ -21,7 +21,7 @@ static void nft_fib_inet_eval(const struct nft_expr *expr,
 {
 	const struct nft_fib *priv = nft_expr_priv(expr);
 
-	switch (pkt->pf) {
+	switch (nft_pf(pkt)) {
 	case NFPROTO_IPV4:
 		switch (priv->result) {
 		case NFT_FIB_RESULT_OIF:
...
@@ -57,7 +57,6 @@ static int nft_hash_init(const struct nft_ctx *ctx,
 	if (!tb[NFTA_HASH_SREG] ||
 	    !tb[NFTA_HASH_DREG] ||
 	    !tb[NFTA_HASH_LEN]  ||
-	    !tb[NFTA_HASH_SEED] ||
 	    !tb[NFTA_HASH_MODULUS])
 		return -EINVAL;
@@ -80,7 +79,10 @@ static int nft_hash_init(const struct nft_ctx *ctx,
 	if (priv->offset + priv->modulus - 1 < priv->offset)
 		return -EOVERFLOW;
 
-	priv->seed = ntohl(nla_get_be32(tb[NFTA_HASH_SEED]));
+	if (tb[NFTA_HASH_SEED])
+		priv->seed = ntohl(nla_get_be32(tb[NFTA_HASH_SEED]));
+	else
+		get_random_bytes(&priv->seed, sizeof(priv->seed));
 
 	return nft_validate_register_load(priv->sreg, len) &&
 	       nft_validate_register_store(ctx, priv->dreg, NULL,
...
@@ -102,7 +102,6 @@ static int nft_immediate_validate(const struct nft_ctx *ctx,
 	return 0;
 }
 
-static struct nft_expr_type nft_imm_type;
 static const struct nft_expr_ops nft_imm_ops = {
 	.type		= &nft_imm_type,
 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_immediate_expr)),
@@ -113,20 +112,10 @@ static const struct nft_expr_ops nft_imm_ops = {
 	.validate	= nft_immediate_validate,
 };
 
-static struct nft_expr_type nft_imm_type __read_mostly = {
+struct nft_expr_type nft_imm_type __read_mostly = {
 	.name		= "immediate",
 	.ops		= &nft_imm_ops,
 	.policy		= nft_immediate_policy,
 	.maxattr	= NFTA_IMMEDIATE_MAX,
 	.owner		= THIS_MODULE,
 };
-
-int __init nft_immediate_module_init(void)
-{
-	return nft_register_expr(&nft_imm_type);
-}
-
-void nft_immediate_module_exit(void)
-{
-	nft_unregister_expr(&nft_imm_type);
-}
@@ -32,8 +32,9 @@ static void nft_log_eval(const struct nft_expr *expr,
 {
 	const struct nft_log *priv = nft_expr_priv(expr);
 
-	nf_log_packet(pkt->net, pkt->pf, pkt->hook, pkt->skb, pkt->in,
-		      pkt->out, &priv->loginfo, "%s", priv->prefix);
+	nf_log_packet(nft_net(pkt), nft_pf(pkt), nft_hook(pkt), pkt->skb,
+		      nft_in(pkt), nft_out(pkt), &priv->loginfo, "%s",
+		      priv->prefix);
 }
 
 static const struct nla_policy nft_log_policy[NFTA_LOG_MAX + 1] = {
...
@@ -35,9 +35,8 @@ static void nft_lookup_eval(const struct nft_expr *expr,
 	const struct nft_set_ext *ext;
 	bool found;
 
-	found = set->ops->lookup(pkt->net, set, &regs->data[priv->sreg], &ext) ^
-		priv->invert;
-
+	found = set->ops->lookup(nft_net(pkt), set, &regs->data[priv->sreg],
+				 &ext) ^ priv->invert;
 	if (!found) {
 		regs->verdict.code = NFT_BREAK;
 		return;
@@ -155,7 +154,6 @@ static int nft_lookup_dump(struct sk_buff *skb, const struct nft_expr *expr)
 	return -1;
 }
 
-static struct nft_expr_type nft_lookup_type;
 static const struct nft_expr_ops nft_lookup_ops = {
 	.type		= &nft_lookup_type,
 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_lookup)),
@@ -165,20 +163,10 @@ static const struct nft_expr_ops nft_lookup_ops = {
 	.dump		= nft_lookup_dump,
 };
 
-static struct nft_expr_type nft_lookup_type __read_mostly = {
+struct nft_expr_type nft_lookup_type __read_mostly = {
 	.name		= "lookup",
 	.ops		= &nft_lookup_ops,
 	.policy		= nft_lookup_policy,
 	.maxattr	= NFTA_LOOKUP_MAX,
 	.owner		= THIS_MODULE,
 };
-
-int __init nft_lookup_module_init(void)
-{
-	return nft_register_expr(&nft_lookup_type);
-}
-
-void nft_lookup_module_exit(void)
-{
-	nft_unregister_expr(&nft_lookup_type);
-}
@@ -36,7 +36,7 @@ void nft_meta_get_eval(const struct nft_expr *expr,
 {
 	const struct nft_meta *priv = nft_expr_priv(expr);
 	const struct sk_buff *skb = pkt->skb;
-	const struct net_device *in = pkt->in, *out = pkt->out;
+	const struct net_device *in = nft_in(pkt), *out = nft_out(pkt);
 	struct sock *sk;
 	u32 *dest = &regs->data[priv->dreg];
 
@@ -49,7 +49,7 @@ void nft_meta_get_eval(const struct nft_expr *expr,
 		*(__be16 *)dest = skb->protocol;
 		break;
 	case NFT_META_NFPROTO:
-		*dest = pkt->pf;
+		*dest = nft_pf(pkt);
 		break;
 	case NFT_META_L4PROTO:
 		if (!pkt->tprot_set)
@@ -146,7 +146,7 @@ void nft_meta_get_eval(const struct nft_expr *expr,
 			break;
 		}
 
-		switch (pkt->pf) {
+		switch (nft_pf(pkt)) {
 		case NFPROTO_IPV4:
 			if (ipv4_is_multicast(ip_hdr(skb)->daddr))
 				*dest = PACKET_MULTICAST;
...
@@ -148,7 +148,6 @@ static int nft_payload_dump(struct sk_buff *skb, const struct nft_expr *expr)
 	return -1;
 }
 
-static struct nft_expr_type nft_payload_type;
 static const struct nft_expr_ops nft_payload_ops = {
 	.type		= &nft_payload_type,
 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_payload)),
@@ -320,20 +319,10 @@ nft_payload_select_ops(const struct nft_ctx *ctx,
 	return &nft_payload_ops;
 }
 
-static struct nft_expr_type nft_payload_type __read_mostly = {
+struct nft_expr_type nft_payload_type __read_mostly = {
 	.name		= "payload",
 	.select_ops	= nft_payload_select_ops,
 	.policy		= nft_payload_policy,
 	.maxattr	= NFTA_PAYLOAD_MAX,
 	.owner		= THIS_MODULE,
 };
-
-int __init nft_payload_module_init(void)
-{
-	return nft_register_expr(&nft_payload_type);
-}
-
-void nft_payload_module_exit(void)
-{
-	nft_unregister_expr(&nft_payload_type);
-}
@@ -43,7 +43,7 @@ static void nft_queue_eval(const struct nft_expr *expr,
 		queue = priv->queuenum + cpu % priv->queues_total;
 	} else {
 		queue = nfqueue_hash(pkt->skb, queue,
-				     priv->queues_total, pkt->pf,
+				     priv->queues_total, nft_pf(pkt),
 				     jhash_initval);
 	}
 }
...
@@ -122,7 +122,6 @@ static int nft_range_dump(struct sk_buff *skb, const struct nft_expr *expr)
 	return -1;
 }
 
-static struct nft_expr_type nft_range_type;
 static const struct nft_expr_ops nft_range_ops = {
 	.type		= &nft_range_type,
 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_range_expr)),
@@ -131,20 +130,10 @@ static const struct nft_expr_ops nft_range_ops = {
 	.dump		= nft_range_dump,
 };
 
-static struct nft_expr_type nft_range_type __read_mostly = {
+struct nft_expr_type nft_range_type __read_mostly = {
 	.name		= "range",
 	.ops		= &nft_range_ops,
 	.policy		= nft_range_policy,
 	.maxattr	= NFTA_RANGE_MAX,
 	.owner		= THIS_MODULE,
 };
-
-int __init nft_range_module_init(void)
-{
-	return nft_register_expr(&nft_range_type);
-}
-
-void nft_range_module_exit(void)
-{
-	nft_unregister_expr(&nft_range_type);
-}
@@ -23,36 +23,36 @@ static void nft_reject_inet_eval(const struct nft_expr *expr,
 {
 	struct nft_reject *priv = nft_expr_priv(expr);
 
-	switch (pkt->pf) {
+	switch (nft_pf(pkt)) {
 	case NFPROTO_IPV4:
 		switch (priv->type) {
 		case NFT_REJECT_ICMP_UNREACH:
 			nf_send_unreach(pkt->skb, priv->icmp_code,
-					pkt->hook);
+					nft_hook(pkt));
 			break;
 		case NFT_REJECT_TCP_RST:
-			nf_send_reset(pkt->net, pkt->skb, pkt->hook);
+			nf_send_reset(nft_net(pkt), pkt->skb, nft_hook(pkt));
 			break;
 		case NFT_REJECT_ICMPX_UNREACH:
 			nf_send_unreach(pkt->skb,
 					nft_reject_icmp_code(priv->icmp_code),
-					pkt->hook);
+					nft_hook(pkt));
 			break;
 		}
 		break;
 	case NFPROTO_IPV6:
 		switch (priv->type) {
 		case NFT_REJECT_ICMP_UNREACH:
-			nf_send_unreach6(pkt->net, pkt->skb, priv->icmp_code,
-					 pkt->hook);
+			nf_send_unreach6(nft_net(pkt), pkt->skb,
+					 priv->icmp_code, nft_hook(pkt));
 			break;
 		case NFT_REJECT_TCP_RST:
-			nf_send_reset6(pkt->net, pkt->skb, pkt->hook);
+			nf_send_reset6(nft_net(pkt), pkt->skb, nft_hook(pkt));
 			break;
 		case NFT_REJECT_ICMPX_UNREACH:
-			nf_send_unreach6(pkt->net, pkt->skb,
+			nf_send_unreach6(nft_net(pkt), pkt->skb,
 					 nft_reject_icmpv6_code(priv->icmp_code),
-					 pkt->hook);
+					 nft_hook(pkt));
 			break;
 		}
 		break;
...
@@ -43,14 +43,14 @@ void nft_rt_get_eval(const struct nft_expr *expr,
 		break;
 #endif
 	case NFT_RT_NEXTHOP4:
-		if (pkt->pf != NFPROTO_IPV4)
+		if (nft_pf(pkt) != NFPROTO_IPV4)
 			goto err;
 		*dest = rt_nexthop((const struct rtable *)dst,
 				   ip_hdr(skb)->daddr);
 		break;
 	case NFT_RT_NEXTHOP6:
-		if (pkt->pf != NFPROTO_IPV6)
+		if (nft_pf(pkt) != NFPROTO_IPV6)
 			goto err;
 		memcpy(dest, rt6_nexthop((struct rt6_info *)dst,
...
@@ -982,7 +982,7 @@ void xt_free_table_info(struct xt_table_info *info)
 }
 EXPORT_SYMBOL(xt_free_table_info);
 
-/* Find table by name, grabs mutex & ref. Returns ERR_PTR() on error. */
+/* Find table by name, grabs mutex & ref. Returns NULL on error. */
 struct xt_table *xt_find_table_lock(struct net *net, u_int8_t af,
 				    const char *name)
 {
...
@@ -132,9 +132,9 @@ audit_tg(struct sk_buff *skb, const struct xt_action_param *par)
 		goto errout;
 
 	audit_log_format(ab, "action=%hhu hook=%u len=%u inif=%s outif=%s",
-			 info->type, par->hooknum, skb->len,
-			 par->in ? par->in->name : "?",
-			 par->out ? par->out->name : "?");
+			 info->type, xt_hooknum(par), skb->len,
+			 xt_in(par) ? xt_inname(par) : "?",
+			 xt_out(par) ? xt_outname(par) : "?");
 
 	if (skb->mark)
 		audit_log_format(ab, " mark=%#x", skb->mark);
@@ -144,7 +144,7 @@ audit_tg(struct sk_buff *skb, const struct xt_action_param *par)
 			  eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest,
 			  ntohs(eth_hdr(skb)->h_proto));
 
-		if (par->family == NFPROTO_BRIDGE) {
+		if (xt_family(par) == NFPROTO_BRIDGE) {
 			switch (eth_hdr(skb)->h_proto) {
 			case htons(ETH_P_IP):
 				audit_ip4(ab, skb);
@@ -157,7 +157,7 @@ audit_tg(struct sk_buff *skb, const struct xt_action_param *par)
 		}
 	}
 
-	switch (par->family) {
+	switch (xt_family(par)) {
 	case NFPROTO_IPV4:
 		audit_ip4(ab, skb);
 		break;
...
@@ -32,15 +32,15 @@ static unsigned int
 log_tg(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct xt_log_info *loginfo = par->targinfo;
+	struct net *net = xt_net(par);
 	struct nf_loginfo li;
-	struct net *net = par->net;
 
 	li.type = NF_LOG_TYPE_LOG;
 	li.u.log.level = loginfo->level;
 	li.u.log.logflags = loginfo->logflags;
 
-	nf_log_packet(net, par->family, par->hooknum, skb, par->in, par->out,
-		      &li, "%s", loginfo->prefix);
+	nf_log_packet(net, xt_family(par), xt_hooknum(par), skb, xt_in(par),
+		      xt_out(par), &li, "%s", loginfo->prefix);
 	return XT_CONTINUE;
 }
...
@@ -33,8 +33,8 @@ netmap_tg6(struct sk_buff *skb, const struct xt_action_param *par)
 		netmask.ip6[i] = ~(range->min_addr.ip6[i] ^
 				   range->max_addr.ip6[i]);
 
-	if (par->hooknum == NF_INET_PRE_ROUTING ||
-	    par->hooknum == NF_INET_LOCAL_OUT)
+	if (xt_hooknum(par) == NF_INET_PRE_ROUTING ||
+	    xt_hooknum(par) == NF_INET_LOCAL_OUT)
 		new_addr.in6 = ipv6_hdr(skb)->daddr;
 	else
 		new_addr.in6 = ipv6_hdr(skb)->saddr;
@@ -51,7 +51,7 @@ netmap_tg6(struct sk_buff *skb, const struct xt_action_param *par)
 	newrange.min_proto	= range->min_proto;
 	newrange.max_proto	= range->max_proto;
 
-	return nf_nat_setup_info(ct, &newrange, HOOK2MANIP(par->hooknum));
+	return nf_nat_setup_info(ct, &newrange, HOOK2MANIP(xt_hooknum(par)));
 }
 
 static int netmap_tg6_checkentry(const struct xt_tgchk_param *par)
@@ -72,16 +72,16 @@ netmap_tg4(struct sk_buff *skb, const struct xt_action_param *par)
 	const struct nf_nat_ipv4_multi_range_compat *mr = par->targinfo;
 	struct nf_nat_range newrange;
 
-	NF_CT_ASSERT(par->hooknum == NF_INET_PRE_ROUTING ||
-		     par->hooknum == NF_INET_POST_ROUTING ||
-		     par->hooknum == NF_INET_LOCAL_OUT ||
-		     par->hooknum == NF_INET_LOCAL_IN);
+	NF_CT_ASSERT(xt_hooknum(par) == NF_INET_PRE_ROUTING ||
+		     xt_hooknum(par) == NF_INET_POST_ROUTING ||
+		     xt_hooknum(par) == NF_INET_LOCAL_OUT ||
+		     xt_hooknum(par) == NF_INET_LOCAL_IN);
 	ct = nf_ct_get(skb, &ctinfo);
 
 	netmask = ~(mr->range[0].min_ip ^ mr->range[0].max_ip);
 
-	if (par->hooknum == NF_INET_PRE_ROUTING ||
-	    par->hooknum == NF_INET_LOCAL_OUT)
+	if (xt_hooknum(par) == NF_INET_PRE_ROUTING ||
+	    xt_hooknum(par) == NF_INET_LOCAL_OUT)
 		new_ip = ip_hdr(skb)->daddr & ~netmask;
 	else
 		new_ip = ip_hdr(skb)->saddr & ~netmask;
@@ -96,7 +96,7 @@ netmap_tg4(struct sk_buff *skb, const struct xt_action_param *par)
 	newrange.max_proto	= mr->range[0].max;
 
 	/* Hand modified range to generic setup. */
-	return nf_nat_setup_info(ct, &newrange, HOOK2MANIP(par->hooknum));
+	return nf_nat_setup_info(ct, &newrange, HOOK2MANIP(xt_hooknum(par)));
 }
 
 static int netmap_tg4_check(const struct xt_tgchk_param *par)
...
@@ -25,8 +25,8 @@ static unsigned int
 nflog_tg(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct xt_nflog_info *info = par->targinfo;
+	struct net *net = xt_net(par);
 	struct nf_loginfo li;
-	struct net *net = par->net;
 
 	li.type		   = NF_LOG_TYPE_ULOG;
 	li.u.ulog.copy_len = info->len;
@@ -37,8 +37,8 @@ nflog_tg(struct sk_buff *skb, const struct xt_action_param *par)
 	if (info->flags & XT_NFLOG_F_COPY_LEN)
 		li.u.ulog.flags |= NF_LOG_F_COPY_LEN;
 
-	nfulnl_log_packet(net, par->family, par->hooknum, skb, par->in,
-			  par->out, &li, info->prefix);
+	nfulnl_log_packet(net, xt_family(par), xt_hooknum(par), skb,
+			  xt_in(par), xt_out(par), &li, info->prefix);
 	return XT_CONTINUE;
 }
...
(23 further file diffs collapsed by the viewer.)