1. 03 Aug 2014: 9 commits
  2. 01 Aug 2014: 7 commits
  3. 31 Jul 2014: 4 commits
    • net: filter: don't release unattached filter through call_rcu() · 34c5bd66
      Authored by Pablo Neira
      sk_unattached_filter_destroy() does not always need to release the
      filter object via rcu. Since this filter is never attached to the
      socket, the caller should be responsible for releasing the filter
      in a safe way, which may not necessarily imply rcu.
      
      This is a short summary of clients of this function:
      
      1) xt_bpf.c and cls_bpf.c use the bpf matchers from rules, these rules
         are removed from the packet path before the filter is released. Thus,
         the framework makes sure the filter is safely removed.
      
      2) In the ppp driver, the ppp_lock ensures serialization between the
         xmit and filter attachment/detachment path. This doesn't use rcu
         so deferred release via rcu makes no sense.
      
3) In the isdn/ppp driver, it is called from isdn_ppp_release() and
   isdn_ppp_ioctl(). This driver uses mutexes and spinlocks, no rcu.
   Thus, deferred rcu release makes no sense to me either; the
   deferred releases may just be masking the effects of a wrong
   locking strategy, which should be fixed in the driver itself.
      
4) In the team driver, this is the only place where rcu
   synchronization with an unattached filter is used. Therefore, this
   patch introduces a synchronize_rcu() call in the genetlink path to
   make sure the filter doesn't go away while packets
         are still walking over it. I think we can revisit this once struct
         bpf_prog (that only wraps specific bpf code bits) is in place, then
         add some specific struct rcu_head in the scope of the team driver if
         Jiri thinks this is needed.
      
      Deferred rcu release for unattached filters was originally introduced
      in 302d6637 ("filter: Allow to create sk-unattached filters").
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
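To make the ownership change concrete, here is a minimal sketch of the
shape of the patch, assuming the helper names used upstream; it is not
the exact diff, and lb_bpf_filter_replace()/old_fp on the team side are
illustrative names.

void sk_unattached_filter_destroy(struct sk_filter *fp)
{
        __sk_filter_release(fp);        /* immediate free, no call_rcu() */
}

/* Team driver side: detach the old filter from the packet path first,
 * then wait for in-flight readers before destroying it. */
static void lb_bpf_filter_replace(struct lb_priv *lb_priv,
                                  struct sk_filter *old_fp)
{
        /* ... rcu_assign_pointer() the new filter into place ... */
        synchronize_rcu();              /* no packet still walks old_fp */
        sk_unattached_filter_destroy(old_fp);
}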
    • net: Remove unlikely() for WARN_ON() conditions · 80019d31
      Authored by Thomas Graf
No need for the unlikely(): WARN_ON() and BUG_ON() already apply
unlikely() to the condition internally.
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
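A before/after sketch of the pattern being cleaned up (the condition is
illustrative):

/* Before: the hint is redundant, WARN_ON() already wraps its
 * condition in unlikely() internally. */
if (unlikely(WARN_ON(!skb)))
        return -EINVAL;

/* After: */
if (WARN_ON(!skb))
        return -EINVAL;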
    • dcbnl: Fix misleading dcb_app->priority explanation · 16eecd9b
      Authored by Anish Bhatt
The current explanation of dcb_app->priority is wrong: it says priority
is expected to be a 3-bit unsigned integer, which is only true when
working with DCBx-IEEE. DCBx-CEE instead expects dcb_app->priority to be
an 802.1p user-priority bitmap. Updated accordingly.
      
      This affects the cxgb4 driver, but I will post those changes as part of a
      larger changeset shortly.
      
      Fixes: 3e29027a ("dcbnl: add support for ieee8021Qaz attributes")
Signed-off-by: Anish Bhatt <anish@chelsio.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
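As a hedged illustration of the two interpretations (the selector and
protocol values below are examples, not taken from this patch):

static void dcb_priority_example(void)
{
        struct dcb_app app = {
                .selector = IEEE_8021QAZ_APP_SEL_ETHERTYPE,
                .protocol = 0x8906,     /* e.g. the FCoE ethertype */
        };

        /* DCBx-IEEE: priority is a 3-bit unsigned integer, 0..7 */
        app.priority = 3;

        /* DCBx-CEE: priority is an 802.1p user-priority bitmap */
        app.priority = 1 << 3;          /* priority 3, as a bitmap */
}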
    • netfilter: nfnetlink_acct: dump unmodified nfacct flags · d24675cb
      Authored by Alexey Perevalov
NFNL_MSG_ACCT_GET_CTRZERO modifies the dumped flags: the client sees the
unmodified (uncleared) counter value but an already-cleared overquota
state, so the end user learns nothing about the overquota state unless
they subscribed to the overquota report.
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
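A hedged sketch of the fix's shape inside the dump path (variable names
are illustrative, not the exact upstream diff): snapshot the flags
before the CTRZERO path clears the overquota bit, then dump the
snapshot.

unsigned long old_flags = nfacct->flags;        /* snapshot first */

if (type == NFNL_MSG_ACCT_GET_CTRZERO) {
        /* ... zero the packet/byte counters ... */
        clear_bit(NFACCT_OVERQUOTA_BIT, &nfacct->flags);
}

/* dump the unmodified flags so the client still sees overquota */
if (nla_put_be32(skb, NFACCT_FLAGS, htonl(old_flags)))
        goto nla_put_failure;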
  4. 30 Jul 2014: 11 commits
  5. 29 Jul 2014: 3 commits
    • ip: make IP identifiers less predictable · 04ca6973
      Authored by Eric Dumazet
      In "Counting Packets Sent Between Arbitrary Internet Hosts", Jeffrey and
      Jedidiah describe ways exploiting linux IP identifier generation to
      infer whether two machines are exchanging packets.
      
      With commit 73f156a6 ("inetpeer: get rid of ip_id_count"), we
      changed IP id generation, but this does not really prevent this
      side-channel technique.
      
      This patch adds a random amount of perturbation so that IP identifiers
      for a given destination [1] are no longer monotonically increasing after
      an idle period.
      
Note that prandom_u32_max(1) returns 0, so if the generator is used at
most once per jiffy, this patch inserts no hole in the ID sequence and
does not increase the collision probability.
      
      This is jiffies based, so in the worst case (HZ=1000), the id can
      rollover after ~65 seconds of idle time, which should be fine.
      
We also change the hash used in __ip_select_ident() to hash not only
on daddr, but also on saddr and protocol, so that ICMP probes cannot
be used to infer information about other protocols.

For IPv6, we add saddr into the hash as well, but not nexthdr.
      
If I ping the patched target, we can see the IDs are now hard to predict.
      
      21:57:11.008086 IP (...)
          A > target: ICMP echo request, seq 1, length 64
      21:57:11.010752 IP (... id 2081 ...)
          target > A: ICMP echo reply, seq 1, length 64
      
      21:57:12.013133 IP (...)
          A > target: ICMP echo request, seq 2, length 64
      21:57:12.015737 IP (... id 3039 ...)
          target > A: ICMP echo reply, seq 2, length 64
      
      21:57:13.016580 IP (...)
          A > target: ICMP echo request, seq 3, length 64
      21:57:13.019251 IP (... id 3437 ...)
          target > A: ICMP echo reply, seq 3, length 64
      
[1] TCP sessions use a per-flow ID generator that is not changed by this patch.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Jeffrey Knockel <jeffk@cs.unm.edu>
Reported-by: Jedidiah R. Crandall <crandall@cs.unm.edu>
Cc: Willy Tarreau <w@1wt.eu>
Cc: Hannes Frederic Sowa <hannes@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
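The heart of the change is the reserve helper below, shown close to its
upstream form but simplified: each hash bucket keeps an id counter plus
the jiffies timestamp of its last use, and after an idle period the id
jumps forward by a random amount bounded by that idle time.

static u32 ip_idents_reserve(u32 hash, int segs)
{
        u32 *p_tstamp = ip_tstamps + hash % IP_IDENTS_SZ;
        atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ;
        u32 old = ACCESS_ONCE(*p_tstamp);
        u32 now = (u32)jiffies;
        u32 delta = 0;

        if (old != now && cmpxchg(p_tstamp, old, now) == old)
                delta = prandom_u32_max(now - old);     /* hole <= idle time */

        /* reserve 'segs' ids, starting after the random jump */
        return atomic_add_return(segs + delta, p_id) - segs;
}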
    • tipc: make tipc_buf_append() more robust · 13e9b997
      Authored by Jon Paul Maloy
As per a comment from David Miller, we try to make the buffer
reassembly function more resilient to user errors than it is today.
      
- We check that the "*buf" parameter is always set, since this is
  mandatory input.
      
- We ensure that *buf->next is always set to NULL before linking in
  the buffer, instead of relying on the caller to have done this.
      
      - We ensure that the "tail" pointer in the head buffer's control
        block is initialized to NULL when the first fragment arrives.
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
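A simplified sketch of the hardened entry path (the real function also
links the fragment in and detects message completion):

int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
{
        struct sk_buff *head = *headbuf;
        struct sk_buff *frag = *buf;

        if (!frag) {                    /* "*buf" is mandatory input */
                pr_warn_ratelimited("tipc: NULL fragment buffer\n");
                return 0;
        }
        frag->next = NULL;              /* don't rely on the caller */

        if (!head)                      /* first fragment: init the
                                         * head cb's tail pointer */
                TIPC_SKB_CB(frag)->tail = NULL;

        /* ... chain 'frag', update *headbuf and *buf, return 1 when
         * the full message has been reassembled ... */
        return 0;
}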
    • neighbour: fix ndm_type type error issue · 545469f7
      Authored by Jun Zhao
ndm_type means the L3 address type; in neighbour proxy and vxlan it
should be RTN_UNICAST. NDA_DST is a netlink TLV type, hence not the
right value in this context.
Signed-off-by: Jun Zhao <mypopydev@gmail.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
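The fix in context (sketch):

/* ndm_type classifies the L3 address and takes rtnetlink RTN_*
 * values; NDA_DST is a netlink attribute type that only looked
 * plausible here because both are small integers. */
ndm->ndm_type = RTN_UNICAST;            /* was: ndm->ndm_type = NDA_DST; */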
  6. 28 Jul 2014: 6 commits
    • inet: frag: set limits and make init_net's high_thresh limit global · 1bab4c75
      Authored by Nikolay Aleksandrov
This patch makes init_net's high_thresh limit the maximum for all
namespaces, thus introducing a global memory limit threshold equal to
the sum of the individual high_thresh limits, which are now capped.
It also introduces sane minimums for low_thresh: it shouldn't be able
to drop below 0 (or wrap above high_thresh in the unsigned case), and
low_thresh should never be above high_thresh, so we enforce the
following relations for a namespace:
init_net:
 high_thresh: max = not capped, min = init_net low_thresh
 low_thresh:  max = init_net high_thresh, min = 0

all other namespaces:
 high_thresh: max = init_net high_thresh, min = namespace's low_thresh
 low_thresh:  max = namespace's high_thresh, min = 0
      
      The major issue with having low_thresh > high_thresh is that we'll
      schedule eviction but never evict anything and thus rely only on the
      timers.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
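In sysctl terms these relations map naturally onto the min/max clamps
of proc_dointvec_minmax(); a hedged sketch, close to but not exactly
the upstream table:

static int zero;

static struct ctl_table ip4_frags_ns_ctl_table[] = {
        {
                .procname     = "ipfrag_high_thresh",
                .data         = &init_net.ipv4.frags.high_thresh,
                .maxlen       = sizeof(int),
                .mode         = 0644,
                .proc_handler = proc_dointvec_minmax,
                .extra1       = &init_net.ipv4.frags.low_thresh,
                /* no .extra2 for init_net: its maximum is not capped */
        },
        {
                .procname     = "ipfrag_low_thresh",
                .data         = &init_net.ipv4.frags.low_thresh,
                .maxlen       = sizeof(int),
                .mode         = 0644,
                .proc_handler = proc_dointvec_minmax,
                .extra1       = &zero,
                .extra2       = &init_net.ipv4.frags.high_thresh,
        },
        { }
};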
    • inet: frag: use seqlock for hash rebuild · ab1c724f
      Authored by Florian Westphal
Rehash is a rare operation; don't force readers to take the read
side of an rwlock for it.
      
      Instead, we only have to detect the (rare) case where
      the secret was altered while we are trying to insert
      a new inetfrag queue into the table.
      
      If it was changed, drop the bucket lock and recompute
      the hash to get the 'new' chain bucket that we have to
      insert into.
      
      Joint work with Nikolay Aleksandrov.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
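The retry loop has roughly the following shape (close to the upstream
helper):

/* Compute the hash outside any lock, take the bucket lock, then
 * re-check whether a rehash raced with us; if it did, the hash
 * (and thus the bucket) may be stale, so start over. */
static struct inet_frag_bucket *
get_frag_bucket_locked(struct inet_frag_queue *fq, struct inet_frags *f)
{
        struct inet_frag_bucket *hb;
        unsigned int seq, hash;

restart:
        seq = read_seqbegin(&f->rnd_seqlock);

        hash = inet_frag_hashfn(f, fq);
        hb = &f->hash[hash];

        spin_lock(&hb->chain_lock);
        if (read_seqretry(&f->rnd_seqlock, seq)) {
                spin_unlock(&hb->chain_lock);
                goto restart;
        }

        return hb;      /* bucket lock held, hash known-current */
}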
    • inet: frag: remove periodic secret rebuild timer · e3a57d18
      Authored by Florian Westphal
      Merge this functionality into the eviction workqueue.
      
      Instead of rebuilding every n seconds, take advantage of the upper
      hash chain length limit.
      
If we hit it, mark the table for rebuild and schedule the workqueue.
      To prevent frequent rebuilds when we're completely overloaded,
      don't rebuild more than once every 5 seconds.
      
The ipfrag_secret_interval sysctl is now obsolete and has been marked
as deprecated; it can still be changed so scripts won't break, but it
no longer has any effect. A comment is left above each unused
secret_timer variable to avoid confusion.
      
      Joint work with Nikolay Aleksandrov.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
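The rate limit is a simple jiffies check; this sketch mirrors the
upstream helper, with the interval assumed to be 5 * HZ:

#define INETFRAGS_MIN_REBUILD_INTERVAL  (5 * HZ)

/* Rebuild at most once per interval, even under constant overload. */
static bool inet_frag_may_rebuild(struct inet_frags *f)
{
        return time_after(jiffies,
                          f->last_rebuild_jiffies +
                          INETFRAGS_MIN_REBUILD_INTERVAL);
}

/* On insert: if a hash chain exceeds the depth limit, flag the table
 * for rebuild and kick the worker, which rehashes with a fresh secret
 * when inet_frag_may_rebuild() allows it. */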
    • inet: frag: remove lru list · 3fd588eb
      Authored by Florian Westphal
      The LRU list is no longer used; remove it.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
    • inet: frag: don't account number of fragment queues · 434d3054
      Authored by Florian Westphal
The 'nqueues' counter is protected by the lru list lock; once that's
removed, this would need to be converted to an atomic counter. Given
it isn't used for anything except reporting to userspace via /proc,
just remove it.
      
      We still report the memory currently used by fragment
      reassembly queues.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
    • inet: frag: move eviction of queues to work queue · b13d3cbf
      Authored by Florian Westphal
      When the high_thresh limit is reached we try to toss the 'oldest'
      incomplete fragment queues until memory limits are below the low_thresh
      value.  This happens in softirq/packet processing context.
      
      This has two drawbacks:
      
      1) processors might evict a queue that was about to be completed
      by another cpu, because they will compete wrt. resource usage and
      resource reclaim.
      
      2) LRU list maintenance is expensive.
      
      But when constantly overloaded, even the 'least recently used' element is
      recent, so removing 'lru' queue first is not 'fairer' than removing any
      other fragment queue.
      
      This moves eviction out of the fast path:
      
      When the low threshold is reached, a work queue is scheduled
      which then iterates over the table and removes the queues that exceed
      the memory limits of the namespace. It sets a new flag called
      INET_FRAG_EVICTED on the evicted queues so the proper counters will get
      incremented when the queue is forcefully expired.
      
      When the high threshold is reached, no more fragment queues are
      created until we're below the limit again.
      
      The LRU list is now unused and will be removed in a followup patch.
      
      Joint work with Nikolay Aleksandrov.
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
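The worker walks a bounded number of buckets per run and remembers
where to resume; a sketch close to the upstream version:

static void inet_frag_worker(struct work_struct *work)
{
        unsigned int budget = INETFRAGS_EVICT_BUCKETS;
        unsigned int i, evicted = 0;
        struct inet_frags *f;

        f = container_of(work, struct inet_frags, frags_work);

        local_bh_disable();     /* eviction races with softirq rx */

        for (i = ACCESS_ONCE(f->next_bucket); budget; --budget) {
                evicted += inet_evict_bucket(f, &f->hash[i]);
                i = (i + 1) & (INETFRAGS_HASHSZ - 1);
                if (evicted > INETFRAGS_EVICT_MAX)
                        break;
        }

        f->next_bucket = i;     /* resume here on the next run */

        local_bh_enable();
}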