1. 18 Jan 2011, 3 commits
    • netfilter: allow NFQUEUE bypass if no listener is available · 94b27cc3
      Florian Westphal authored
      If an skb is to be NF_QUEUE'd, but no program has opened the queue, the
      packet is dropped.
      
      This adds a v2 target revision of xt_NFQUEUE that allows packets to
      continue through the ruleset instead.
      
      Because the actual queueing happens outside of the target context, the
      'bypass' flag has to be communicated back to the netfilter core.
      
      Unfortunately the only choice to do this without adding a new function
      argument is to use the target function return value (i.e. the verdict).
      
      In the NF_QUEUE case, the upper 16 bits already contain the queue number
      to use.  The previous patch reduced NF_VERDICT_MASK to 0xff, i.e.
      we now have extra room for a new flag.
      
      If a hook issued a NF_QUEUE verdict, then the netfilter core will
      continue packet processing if the queueing hook
      returns -ESRCH (== "this queue does not exist") and the new
      NF_VERDICT_FLAG_QUEUE_BYPASS flag is set in the verdict value.
      
      Note: If the queue exists, but userspace does not consume packets fast
      enough, the skb will still be dropped.
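      
      As a hedged sketch of the encoding (the NF_VERDICT_* constants mirror
      include/linux/netfilter.h after this series; the nf_queue_nr() helper is
      illustrative, not the kernel's macro), the queue number, the new bypass
      flag and the base verdict share one 32-bit verdict word like this:
      
        #include <stdio.h>
        
        #define NF_QUEUE                     3
        #define NF_VERDICT_MASK              0x000000ff  /* see next patch */
        #define NF_VERDICT_FLAG_QUEUE_BYPASS 0x00008000  /* new in this patch */
        #define NF_VERDICT_QBITS             16
        #define NF_VERDICT_QMASK             0xffff0000
        
        /* illustrative encoder: queue number up top, flag and verdict below */
        static unsigned int nf_queue_nr(unsigned int queuenum, int bypass)
        {
            unsigned int v = ((queuenum << NF_VERDICT_QBITS) & NF_VERDICT_QMASK)
                             | NF_QUEUE;
            return bypass ? v | NF_VERDICT_FLAG_QUEUE_BYPASS : v;
        }
        
        int main(void)
        {
            unsigned int verdict = nf_queue_nr(42, 1);
        
            printf("verdict=%u queue=%u bypass=%d\n",
                   verdict & NF_VERDICT_MASK,    /* 3 == NF_QUEUE */
                   verdict >> NF_VERDICT_QBITS,  /* 42 */
                   !!(verdict & NF_VERDICT_FLAG_QUEUE_BYPASS));
            return 0;
        }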
      Signed-off-by: Florian Westphal <fwestphal@astaro.com>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
    • netfilter: reduce NF_VERDICT_MASK to 0xff · f615df76
      Florian Westphal authored
      NF_VERDICT_MASK is currently 0xffff. This is because the upper
      16 bits are used to store errno (for NF_DROP) or the queue number
      (NF_QUEUE verdict).
      
      As there are up to 0xffff different queues available, there is no more
      room to store additional flags.
      
      At the moment there are only 6 different verdicts, i.e. we can reduce
      NF_VERDICT_MASK to 0xff to allow storing additional flags in the 0xff00 space.
      
      NF_VERDICT_BITS would then be reduced to 8, but because the value is
      exported to userspace, this might cause breakage; e.g.
      'queuenr = (1 << NF_VERDICT_BITS) | NF_QUEUE' would now break.
      
      Thus, remove NF_VERDICT_BITS usage in the kernel and move the old value
      to the 'userspace compat' section.
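      
      A hedged sketch of that breakage (NF_QUEUE is the real verdict code;
      the _OLD/_NEW names are illustrative):
      
        #include <stdio.h>
        
        #define NF_QUEUE            3
        #define NF_VERDICT_BITS_OLD 16  /* kept for userspace compat */
        #define NF_VERDICT_BITS_NEW 8   /* what an in-kernel use would see */
        
        int main(void)
        {
            /* old idiom: encodes "queue 1" in the upper half of the word */
            unsigned int ok  = (1u << NF_VERDICT_BITS_OLD) | NF_QUEUE;
            /* same idiom with 8 bits: lands in the new flags byte instead */
            unsigned int bad = (1u << NF_VERDICT_BITS_NEW) | NF_QUEUE;
        
            printf("old: 0x%05x  broken: 0x%05x\n", ok, bad);
            return 0;
        }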
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
    • netfilter: nf_nat: fix conversion to non-atomic bit ops · a7c2f4d7
      Changli Gao authored
      My previous patch (netfilter: nf_nat: don't use atomic bit operation)
      made a mistake when converting atomic_set to a normal bit 'or'.
      IPS_*_BIT should be replaced with IPS_*.
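      
      A minimal sketch of the distinction (IPS_SRC_NAT_DONE_BIT mirrors
      include/net/netfilter/nf_conntrack_common.h; the demo program is
      illustrative):
      
        #include <stdio.h>
        
        #define IPS_SRC_NAT_DONE_BIT 7                           /* a bit number */
        #define IPS_SRC_NAT_DONE     (1 << IPS_SRC_NAT_DONE_BIT) /* a bit mask */
        
        int main(void)
        {
            unsigned long status = 0;
        
            /* set_bit() takes the bit number; a plain OR needs the mask */
            status |= IPS_SRC_NAT_DONE;       /* correct: sets bit 7 (0x80) */
            /* status |= IPS_SRC_NAT_DONE_BIT;   wrong: ORs in 0x7, bits 0-2 */
        
            printf("status=0x%lx\n", status); /* prints 0x80 */
            return 0;
        }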
      Signed-off-by: Changli Gao <xiaosuo@gmail.com>
      Cc: Tim Gardner <tim.gardner@canonical.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
  2. 17 Jan 2011, 2 commits
    • netfilter: create audit records for x_tables replaces · fbabf31e
      Thomas Graf authored
      The setsockopt() syscall to replace tables is already recorded
      in the audit logs. This patch stores additional information
      such as table name and netfilter protocol.
      
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Eric Paris <eparis@parisplace.org>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Signed-off-by: Thomas Graf <tgraf@redhat.com>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
    • netfilter: audit target to record accepted/dropped packets · 43f393ca
      Thomas Graf authored
      This patch adds a new netfilter target which creates audit records
      for packets traversing a certain chain.
      
      It can be used to record packets which are rejected administratively
      as follows:
      
        -N AUDIT_DROP
        -A AUDIT_DROP -j AUDIT --type DROP
        -A AUDIT_DROP -j DROP
      
      A rule which would typically drop or reject a packet would then
      invoke the new chain to record packets before dropping them:
      
        -j AUDIT_DROP
      
      The module is protocol independent and works for iptables, ip6tables
      and ebtables.
      
      The following information is logged:
       - netfilter hook
       - packet length
       - incoming/outgoing interface
       - MAC src/dst/proto for ethernet packets
       - src/dst/protocol address for IPv4/IPv6
       - src/dst port for TCP/UDP/UDPLITE
       - icmp type/code
      
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Eric Paris <eparis@parisplace.org>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Signed-off-by: Thomas Graf <tgraf@redhat.com>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
  3. 14 Jan 2011, 2 commits
  4. 13 Jan 2011, 23 commits
  5. 07 Jan 2011, 10 commits
    • sched: Consolidate the name of root_task_group and init_task_group · 07e06b01
      Yong Zhang authored
      root_task_group is the leftover of USER_SCHED; it is now always
      the same as init_task_group.
      But as Mike suggested, root_task_group is perhaps the more suitable
      name to keep for a tree.
      So in this patch:
        init_task_group      --> root_task_group
        init_task_group_load --> root_task_group_load
        INIT_TASK_GROUP_LOAD --> ROOT_TASK_GROUP_LOAD
      Suggested-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20110107071736.GA32635@windriver.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • fs: scale mntget/mntput · b3e19d92
      Nick Piggin authored
      The problem that this patch aims to fix is vfsmount refcounting scalability.
      We need to take a reference on the vfsmount for every successful path lookup,
      and these lookups often go to the same mount point.
      
      The fundamental difficulty is that a "simple" reference count can never be made
      scalable, because any time a reference is dropped, we must check whether that
      was the last reference. To do that requires communication with all other CPUs
      that may have taken a reference count.
      
      We can make refcounts more scalable in a couple of ways, involving keeping
      distributed counters, and checking for the global-zero condition less
      frequently.
      
      - check the global sum once every interval (this will delay zero detection
        for some interval, so it's probably a showstopper for vfsmounts).
      
      - keep a local count and only take the global sum when the local count
        reaches 0 (this is difficult for vfsmounts, because we can't hold preempt
        off for the life of a reference, so a counter would need to be per-thread
        or tied strongly to a particular CPU, which requires more locking).
      
      - keep a local difference of increments and decrements, which allows us to sum
        the total difference and hence find the refcount when summing all CPUs. Then,
        keep a single integer "long" refcount for slow and long lasting references,
        and only take the global sum of local counters when the long refcount is 0.
      
      This last scheme is what I implemented here. Attached mounts and process root
      and working directory references are "long" references, and everything else is
      a short reference.
      
      This allows scalable vfsmount references during path walking over mounted
      subtrees and unattached (lazy umounted) mounts with processes still running
      in them.
      
      This results in one fewer atomic op in the fastpath: mntget is now just a
      per-CPU inc, rather than an atomic inc; and mntput just requires a spinlock
      and non-atomic decrement in the common case. However, the code is otherwise
      bigger and heavier, so single-threaded performance is basically a wash.
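      
      A hedged userspace model of the adopted scheme (per-CPU counters hold
      the difference of gets and puts, the "long" count pins long-lived
      references, and the expensive global sum is only taken when that count
      is zero; all locking and preemption details are deliberately elided):
      
        #include <stdio.h>
        
        #define NCPUS 4
        
        struct mnt_ref {
            long pcpu[NCPUS];  /* increments minus decrements, per CPU */
            long longrefs;     /* attached mount, cwd/root references */
        };
        
        static void mntget(struct mnt_ref *m, int cpu) { m->pcpu[cpu]++; }
        
        static void mntput(struct mnt_ref *m, int cpu)
        {
            m->pcpu[cpu]--;
            if (m->longrefs > 0)
                return;                    /* fast path: cannot be the last ref */
            long sum = 0;                  /* slow path: global sum */
            for (int i = 0; i < NCPUS; i++)
                sum += m->pcpu[i];
            if (sum == 0)
                printf("last reference gone, free the vfsmount\n");
        }
        
        int main(void)
        {
            struct mnt_ref m = { .longrefs = 1 };  /* mount is attached */
            mntget(&m, 0); mntput(&m, 2);   /* no global sum taken */
            m.longrefs = 0;                 /* e.g. lazy umount detaches it */
            mntget(&m, 1); mntput(&m, 1);   /* the sum now detects zero */
            return 0;
        }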
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: implement faster dentry memcmp · 9d55c369
      Nick Piggin authored
      The standard memcmp function on a Westmere system shows up hot in
      profiles in the `git diff` workload (both parallel and single threaded),
      and it is likely due to the costs associated with trapping into
      microcode, and little opportunity to improve memory access (dentry
      name is not likely to take up more than a cacheline).
      
      So replace it with an open-coded byte comparison. This increases code
      size by 8 bytes in the critical __d_lookup_rcu function, but the
      speedup is huge, averaging 10 runs of each:
      
      git diff st   user   sys   elapsed  CPU
      before        1.15   2.57  3.82      97.1
      after         1.14   2.35  3.61      96.8
      
      git diff mt   user   sys   elapsed  CPU
      before        1.27   3.85  1.46     349
      after         1.26   3.54  1.43     333
      
      Elapsed time for single threaded git diff at 95.0% confidence:
              -0.21  +/- 0.01
              -5.45% +/- 0.24%
      
      It's -0.66% +/- 0.06% elapsed time on my Opteron, so rep cmp costs on the
      fam10h seem to be relatively smaller, but there is still a win.
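      
      The replacement is essentially this byte-at-a-time loop (shown here as
      a standalone, compilable sketch; dentry comparison only needs
      equal/not-equal, not the <0/>0 ordering that memcmp provides):
      
        #include <stdio.h>
        #include <stddef.h>
        
        static inline int dentry_memcmp(const unsigned char *cs,
                                        const unsigned char *ct, size_t count)
        {
            while (count) {
                int ret = (*cs != *ct);   /* 0 if equal, 1 if different */
                if (ret)
                    return ret;
                cs++; ct++; count--;
            }
            return 0;
        }
        
        int main(void)
        {
            printf("%d %d\n",
                   dentry_memcmp((const unsigned char *)"name",
                                 (const unsigned char *)"name", 4),  /* 0 */
                   dentry_memcmp((const unsigned char *)"name",
                                 (const unsigned char *)"nape", 4)); /* 1 */
            return 0;
        }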
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: improve scalability of pseudo filesystems · 4b936885
      Nick Piggin authored
      Regardless of how much we possibly try to scale dcache, there is likely
      always going to be some fundamental contention when adding or removing
      children under the same parent. Pseudo filesystems do not seem to need
      connected dentries because, by definition, they are disconnected.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: dcache per-inode inode alias locking · 873feea0
      Nick Piggin authored
      dcache_inode_lock can be replaced with per-inode locking. Use existing
      inode->i_lock for this. This is slightly non-trivial because we sometimes
      need to find the inode from the dentry, which requires d_inode to be
      stabilised (either with refcount or d_lock).
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: dcache per-bucket dcache hash locking · ceb5bdc2
      Nick Piggin authored
      We can turn the dcache hash locking from a global dcache_hash_lock into
      per-bucket locking.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • bit_spinlock: add required includes · 626d6074
      Nick Piggin authored
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • kernel: add bl_list · 4e35e607
      Nick Piggin authored
      Introduce a type of hlist that can support the use of the lowest bit in the
      hlist_head. This will be subsequently used to implement per-bucket bit spinlock
      for inode and dentry hashes, and may be useful in other cases such as network
      hashes.
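      
      A hedged userspace illustration of the trick (the kernel API lives in
      <linux/list_bl.h>; the names below are illustrative): list nodes are at
      least pointer-aligned, so bit 0 of the head pointer is always zero and
      can be borrowed as a per-bucket lock bit.
      
        #include <stdio.h>
        #include <stdint.h>
        
        #define LOCK_BIT 1UL
        
        struct node { struct node *next; };
        struct bl_head { uintptr_t first; };  /* bit 0 = lock, rest = pointer */
        
        static struct node *bl_first(struct bl_head *h)
        {
            return (struct node *)(h->first & ~LOCK_BIT);  /* strip lock bit */
        }
        
        int main(void)
        {
            struct node n = { 0 };
            struct bl_head h = { (uintptr_t)&n };
        
            h.first |= LOCK_BIT;   /* the kernel does bit_spin_lock(0, ...) */
            printf("locked=%lu first=%p\n",
                   (unsigned long)(h.first & LOCK_BIT), (void *)bl_first(&h));
            h.first &= ~LOCK_BIT;  /* unlock */
            return 0;
        }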
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: provide simple rcu-walk generic_check_acl implementation · 1e1743eb
      Nick Piggin authored
      This simple implementation just checks whether the inode has no ACLs;
      if so, the rcu-walk may proceed, otherwise it is failed.
      
      This could easily be extended to put acls under RCU and check them
      under seqlock, if need be. But this implementation is enough to show
      the rcu-walk aware permissions code for path lookups is working, and
      will handle cases where there are no ACLs or ACLs in just the final
      element.
      
      This patch implicitly converts tmpfs to an rcu-aware permission check.
      Subsequent patches convert ext*, xfs and btrfs. Each of these uses
      acl/permission code in a different way, so convert them all to provide
      templates and proof of concept.
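      
      A hedged model of that check (IPERM_FLAG_RCU is the real flag of this
      era; the struct and helper here are illustrative): -ECHILD tells the
      caller to retry in ref-walk mode, -EAGAIN means "no ACL veto, fall
      through to the ordinary mode bits".
      
        #include <stdio.h>
        #include <errno.h>
        
        #define IPERM_FLAG_RCU 0x1
        
        struct inode_model { int known_acl_free; };  /* illustrative stand-in */
        
        static int check_acl_model(struct inode_model *inode,
                                   int mask, unsigned int flags)
        {
            if (flags & IPERM_FLAG_RCU) {
                if (!inode->known_acl_free)
                    return -ECHILD;  /* may have ACLs: force ref-walk retry */
                return -EAGAIN;      /* provably no ACLs: mode bits decide */
            }
            /* the ref-walk path would fetch and check the real ACL here */
            return -EAGAIN;
        }
        
        int main(void)
        {
            struct inode_model no_acl = { 1 }, maybe = { 0 };
            printf("%d %d\n",
                   check_acl_model(&no_acl, 4, IPERM_FLAG_RCU),   /* -EAGAIN */
                   check_acl_model(&maybe,  4, IPERM_FLAG_RCU));  /* -ECHILD */
            return 0;
        }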
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: provide rcu-walk aware permission i_ops · b74c79e9
      Nick Piggin authored
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>