1. 17 Nov 2012, 1 commit
  2. 08 Nov 2012, 1 commit
    • af-packet: fix oops when socket is not present · a3d744e9
      Authored by Eric Leblond
      Due to a NULL dereference, the following patch causes an oops
      under normal traffic conditions:
      
      commit c0de08d0
      Author: Eric Leblond <eric@regit.org>
      Date:   Thu Aug 16 22:02:58 2012 +0000
      
          af_packet: don't emit packet on orig fanout group
      
      This buggy patch was a feature fix and has reached most stable
      branches.
      
      When skb->sk is NULL and packet fanout is used, there is a
      crash in match_fanout_group, where skb->sk is accessed.
      This patch fixes the issue by returning false as soon as the
      socket is NULL: this corresponds to the wanted behavior, because
      the kernel then has to resend the skb to all listening sockets
      (a sketch of the guard follows this entry).
      Signed-off-by: Eric Leblond <eric@regit.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a3d744e9
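
      A minimal sketch of the guard, modeled on the skb_loop_sk() helper
      this fix introduces; the id_match branch follows my reading of the
      c0de08d0 code and should be taken as an assumption:

              /* Sketch: bail out before dereferencing skb->sk when no socket
               * is attached; returning false makes the caller resend the skb
               * to every listening socket, which is the wanted behavior.
               */
              static bool skb_loop_sk(struct packet_type *ptype, struct sk_buff *skb)
              {
                      if (ptype->af_packet_priv == NULL || skb->sk == NULL)
                              return false;

                      if (ptype->id_match)
                              return ptype->id_match(ptype, skb->sk);

                      return (struct sock *)ptype->af_packet_priv == skb->sk;
              }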
  3. 09 Oct 2012, 2 commits
    • vlan: don't deliver frames for unknown vlans to protocols · 48cc32d3
      Authored by Florian Zumbiehl
      Commit 6a32e4f9 made the vlan code skip marking vlan-tagged frames for
      vlans not configured locally as PACKET_OTHERHOST if there was an
      rx_handler, as the rx_handler could cause the frame to be received on a
      different (virtual) vlan-capable interface where that vlan might be
      configured.
      
      As rx_handlers do not necessarily return RX_HANDLER_ANOTHER, this could cause
      frames for unknown vlans to be delivered to the protocol stack as if they had
      been received untagged.
      
      For example, if an ipv6 router advertisement tagged for a vlan not
      configured locally is received on an interface with macvlan interfaces
      attached, macvlan's rx_handler returns RX_HANDLER_PASS after delivering
      the frame to the macvlan interfaces, which caused it to be passed to the
      protocol stack, leading to ipv6 addresses for the announced prefix being
      configured even though they are completely unusable on the underlying
      interface.
      
      The fix moves the PACKET_OTHERHOST marking to after the rx_handler, so
      the rx_handler, if there is one, sees the frame unchanged; afterwards,
      before the frame is delivered to the protocol stack, it gets marked
      whether there is an rx_handler or not (see the sketch after this entry).
      Signed-off-by: Florian Zumbiehl <florz@florz.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      48cc32d3
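
      A hedged sketch of the reordering in __netif_receive_skb(): the
      marking now sits after the rx_handler call, so the handler sees the
      frame untouched (surrounding logic is elided):

              /* ...the rx_handler, if any, has already run at this point... */

              /* Sketch: a vlan tag still present here means no local interface
               * is configured for that vlan; mark the frame so the protocol
               * stack ignores it, whether or not an rx_handler was installed.
               */
              if (unlikely(vlan_tx_tag_present(skb))) {
                      if (vlan_tx_tag_get_id(skb))
                              skb->pkt_type = PACKET_OTHERHOST;
                      skb->vlan_tci = 0;
              }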
    • net: gro: selective flush of packets · 2e71a6f8
      Authored by Eric Dumazet
      Current GRO can hold packets in gro_list for an almost unlimited
      time if the napi->poll() handler consumes its budget over and over.
      
      In this case, napi_complete()/napi_gro_flush() are not called.
      
      Another problem is that gro_list is flushed in an unfriendly way:
      we scan the list and complete packets in reverse order
      (youngest packets first, oldest packets last).
      This defeats any prioritization the sender may have applied.
      
      Since GRO currently only stores TCP packets, we don't really notice the
      bug thanks to retransmits, but this behavior can add unexpected
      latency, particularly to mice flows squeezed by elephant flows.
      
      This patch makes sure no packet stays queued for more than 1 ms, and
      then only in stress situations.
      
      It also completes packets in the right order to minimize latency
      (a sketch follows this entry).
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Jesse Gross <jesse@nicira.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2e71a6f8
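
      A sketch of the selective flush, assuming a per-skb age stamp taken
      from jiffies when the packet enters gro_list (the list bookkeeping
      follows my reading of the patch):

              /* Sketch: complete held packets oldest-first; with flush_old set,
               * stop at the first packet queued during the current jiffy, i.e.
               * younger than ~1 ms at HZ=1000.
               */
              void napi_gro_flush(struct napi_struct *napi, bool flush_old)
              {
                      struct sk_buff *skb, *prev = NULL;

                      /* gro_list is youngest-first: build a reverse chain so
                       * we can walk it oldest-first.
                       */
                      for (skb = napi->gro_list; skb; skb = skb->next) {
                              skb->prev = prev;
                              prev = skb;
                      }

                      for (skb = prev; skb; skb = skb->prev) {
                              skb->next = NULL;
                              if (flush_old && NAPI_GRO_CB(skb)->age == jiffies)
                                      return;
                              napi->gro_list = skb->prev;
                              napi_gro_complete(skb);
                              napi->gro_count--;
                      }
              }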
  4. 08 Oct 2012, 1 commit
  5. 02 Oct 2012, 1 commit
    • net: add gro_cells infrastructure · c9e6bc64
      Authored by Eric Dumazet
      This adds a new include file (include/net/gro_cells.h) to bring GRO
      (Generic Receive Offload) capability to tunnels in a modular way.
      
      Because the tunnel receive path is lockless and GRO adds serialization
      via a napi_struct, I chose to add an array of up to
      DEFAULT_MAX_NUM_RSS_QUEUES cells, so that multiqueue devices won't be
      slowed down by the GRO layer.
      
      skb_get_rx_queue() is used as the selector (see the sketch after this entry).
      
      In the future, we might add optional fanout capabilities, using rxhash
      for example.
      
      With help from Ben Hutchings, who reminded me of the
      netif_get_num_default_rss_queues() function.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c9e6bc64
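
      A simplified sketch of the receive hook; the backlog check and
      locking of the real file are omitted here, and the field names are
      assumptions from my reading of include/net/gro_cells.h:

              /* Sketch: pick a cell from the recorded rx queue, queue the skb,
               * and schedule that cell's NAPI context so GRO runs under its
               * serialization instead of on the lockless tunnel path.
               */
              static inline void gro_cells_receive(struct gro_cells *gcells,
                                                   struct sk_buff *skb)
              {
                      struct gro_cell *cell = gcells->cells;

                      if (!cell || !(skb->dev->features & NETIF_F_GRO)) {
                              netif_rx(skb);          /* no GRO: plain receive */
                              return;
                      }

                      if (skb_rx_queue_recorded(skb))
                              cell += skb_get_rx_queue(skb) & gcells->gro_cells_mask;

                      __skb_queue_tail(&cell->napi_skbs, skb);
                      if (skb_queue_len(&cell->napi_skbs) == 1)
                              napi_schedule(&cell->napi);
              }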
  6. 21 Sep 2012, 1 commit
    • net: do not disable sg for packets requiring no checksum · c0d680e5
      Authored by Ed Cashin
      A change in a series of VLAN-related changes appears to have
      inadvertently disabled the use of the scatter gather feature of
      network cards for transmission of non-IP ethernet protocols like ATA
      over Ethernet (AoE).  Below is a reference to the commit that
      introduces a "harmonize_features" function that turns off scatter
      gather when the NIC does not support hardware checksumming for the
      ethernet protocol of an sk buff.
      
        commit f01a5236
        Author: Jesse Gross <jesse@nicira.com>
        Date:   Sun Jan 9 06:23:31 2011 +0000
      
            net offloading: Generalize netif_get_vlan_features().
      
      The can_checksum_protocol function is not equipped to consider a
      protocol that does not require checksumming.  Calling it for a
      protocol that requires no checksum is inappropriate.
      
      The patch below has harmonize_features call can_checksum_protocol only
      when the protocol needs a checksum (a sketch follows this entry), so
      that the network layer is not forced to perform unnecessary skb
      linearization when transmitting AoE packets.  Unnecessary linearization
      results in decreased performance and increased memory pressure, as
      reported here:
      
        http://www.spinics.net/lists/linux-mm/msg15184.html
      
      The problem has probably not been widely experienced yet, because
      only recently has the kernel.org-distributed aoe driver acquired the
      ability to use payloads of over a page in size, with the patchset
      recently included in the mm tree:
      
        https://lkml.org/lkml/2012/8/28/140
      
      The coraid.com-distributed aoe driver already could use payloads of
      greater than a page in size, but its users generally do not use the
      newest kernels.
      Signed-off-by: Ed Cashin <ecashin@coraid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c0d680e5
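
      The change amounts to gating the can_checksum_protocol() call on
      whether a checksum is needed at all; a sketch along those lines
      (the illegal_highdma() branch reflects the pre-existing code):

              /* Sketch: consult can_checksum_protocol() only when the skb
               * actually carries a checksum request; a no-checksum protocol
               * such as AoE keeps NETIF_F_SG and avoids linearization.
               */
              static netdev_features_t harmonize_features(struct sk_buff *skb,
                      __be16 protocol, netdev_features_t features)
              {
                      if (skb->ip_summed != CHECKSUM_NONE &&
                          !can_checksum_protocol(features, protocol)) {
                              features &= ~NETIF_F_ALL_CSUM;
                              features &= ~NETIF_F_SG;
                      } else if (illegal_highdma(skb->dev, skb)) {
                              features &= ~NETIF_F_SG;
                      }

                      return features;
              }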
  7. 20 Sep 2012, 4 commits
  8. 19 Sep 2012, 1 commit
  9. 18 Sep 2012, 1 commit
    • userns: Convert the audit loginuid to be a kuid · e1760bd5
      Authored by Eric W. Biederman
      Always store audit loginuids in type kuid_t.
      
      Print loginuids by converting them into uids in the appropriate user
      namespace, and then printing the resulting uid.
      
      Modify audit_get_loginuid to return a kuid_t.
      
      Modify audit_set_loginuid to take a kuid_t.
      
      Modify /proc/<pid>/loginuid on read to convert the loginuid into the
      user namespace of the opener of the file (a sketch of this conversion
      follows this entry).
      
      Modify /proc/<pid>/loginuid on write to convert the loginuid
      from the user namespace of the opener of the file.
      
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      e1760bd5
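
      A sketch of the read side; treating file->f_cred as "the opener" is
      my assumption about the proc plumbing, and tmpbuf/length stand in
      for the surrounding read handler:

              /* Sketch: map the internal kuid_t into the user namespace of
               * whoever opened /proc/<pid>/loginuid before printing it.
               */
              kuid_t kloginuid = audit_get_loginuid(task);
              uid_t uid = from_kuid(file->f_cred->user_ns, kloginuid);

              length = scnprintf(tmpbuf, sizeof(tmpbuf), "%u", uid);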
  10. 17 Sep 2012, 3 commits
  11. 09 Sep 2012, 1 commit
    • net: small bug on rxhash calculation · 68622342
      Authored by Chema Gonzalez
      In the current rxhash calculation function, while the
      sorting of the ports/addrs is coherent (you get the
      same rxhash for packets sharing the same 4-tuple, in
      both directions), ports and addrs are sorted
      independently. This implies that packets from a connection
      between the same addresses but with crossed ports hash to
      the same rxhash.
      
      For example, traffic between A=S:l and B=L:s is hashed
      (in both directions) from {L, S, {s, l}}. The same
      rxhash is obtained for packets between C=S:s and D=L:l.
      
      This patch ensures that you either swap both addrs and ports,
      or you swap neither (a sketch follows this entry). Traffic between
      A and B, and traffic between C and D, now get their rxhash from
      different sources ({L, S, {l, s}} for A<->B, and {L, S, {s, l}}
      for C<->D).
      
      The patch was co-written with Eric Dumazet <edumazet@google.com>.
      Signed-off-by: Chema Gonzalez <chema@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      68622342
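
      A sketch of the coherent swap; the variable names follow the flow
      dissection code as I recall it and may not match the tree exactly:

              /* Sketch: derive a direction-independent hash input by swapping
               * the address pair and the port pair together, never separately;
               * the port pair only breaks ties between equal addresses.
               */
              if (addr2 < addr1 ||
                  (addr2 == addr1 &&
                   ports.port16[1] < ports.port16[0])) {
                      swap(addr1, addr2);
                      swap(ports.port16[0], ports.port16[1]);
              }

              hash = jhash_3words(addr1, addr2, ports.v32, hashrnd);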
  12. 01 Sep 2012, 1 commit
    • net: fix documentation of skb_needs_linearize(). · d1a53dfd
      Authored by Rami Rosen
      skb_needs_linearize() does not check highmem DMA, as it no longer calls
      illegal_highdma(), so there is no need to mention highmem DMA here.
      
      (Indeed, the NETIF_F_SG flag, whose absence is checked in
      skb_needs_linearize(), can be cleared when illegal_highdma() returns
      true, and we are assured that illegal_highdma() is invoked prior to
      skb_needs_linearize(), as skb_needs_linearize() is a static function
      called only once. But NETIF_F_SG is not cleared only there on this
      invocation path: it is also cleared when can_checksum_protocol()
      returns false. A sketch of the function follows this entry.)
      
      See commit 02932ce9,
      "Convert skb_need_linearize() to use precomputed features".
      Signed-off-by: Rami Rosen <rosenr@marvell.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d1a53dfd
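
      For reference, a sketch of skb_needs_linearize() after the change
      being documented, with no highmem check of its own (the body is my
      reconstruction and should be checked against the tree):

              /* Sketch: linearize only when the skb is nonlinear and the
               * precomputed features rule out frag lists or scatter/gather.
               */
              static bool skb_needs_linearize(struct sk_buff *skb,
                                              netdev_features_t features)
              {
                      return skb_is_nonlinear(skb) &&
                             ((skb_has_frag_list(skb) &&
                               !(features & NETIF_F_FRAGLIST)) ||
                              (skb_shinfo(skb)->nr_frags &&
                               !(features & NETIF_F_SG)));
              }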
  13. 31 Aug 2012, 1 commit
    • net: dev: fix the incorrect hold of net namespace's lo device · 6549dd43
      Authored by Gao feng
      When moving a net device from one net namespace to another,
      dev_change_net_namespace fires the NETDEV_DOWN event, so the
      original net namespace's dst entries that belonged to this net
      device are put on the dst_garbage list.
      
      dev_change_net_namespace then sets this net device's
      net to the new net namespace.
      
      If we then unregister this net device's driver, this triggers
      the NETDEV_UNREGISTER_FINAL event; dst_ifdown is called, takes
      this net device's dst entries from the dst_garbage list, and
      points these entries' dev at the new net namespace's lo device.
      
      That is not what we want: these dst entries should hold
      the original net namespace's lo device. This incorrect device
      holding triggers messages like the following.
      unregister_netdevice: waiting for lo to become free. Usage count = 1
      
      So we should fire the NETDEV_UNREGISTER_FINAL event in
      dev_change_net_namespace too; to make sure the dst entries are
      already on the dst_garbage list, we need an rcu_barrier before
      firing NETDEV_UNREGISTER_FINAL (a sketch follows this entry).
      
      With help from Eric Dumazet.
      Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6549dd43
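
      A sketch of the ordering in dev_change_net_namespace(), per the
      message (the surrounding shutdown code is elided):

              /* ...the device is down and NETDEV_DOWN has been delivered, so
               * rt_free() has posted call_rcu() callbacks for its dst entries...
               */

              /* Sketch: wait for those callbacks so the dst entries are on
               * the dst_garbage list, then fire the final event while the
               * device still belongs to the old namespace; dst_ifdown thus
               * rebinds the entries to the old namespace's lo device.
               */
              rcu_barrier();
              call_netdevice_notifiers(NETDEV_UNREGISTER_FINAL, dev);

              /* only now switch the device over to the new namespace */
              dev_net_set(dev, net);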
  14. 25 Aug 2012, 1 commit
    • net: Set device operstate at registration time · 8f4cccbb
      Authored by Ben Hutchings
      The operstate of a device is initially IF_OPER_UNKNOWN and is updated
      asynchronously by linkwatch after each change of carrier state
      reported by the driver.  The default carrier state of a net device is
      on, and this will never be changed on drivers that do not support
      carrier detection, thus the operstate remains IF_OPER_UNKNOWN.
      
      For devices that do support carrier detection, the driver must set the
      carrier state to off initially, then poll the hardware state when the
      device is opened.  However, we must not activate linkwatch for an
      unregistered device, and commit b4730016 ('net: Do not fire linkwatch
      events until the device is registered.') ensured that we don't.  But
      this means that the operstate for many devices that support carrier
      detection remains IF_OPER_UNKNOWN when it should be IF_OPER_DOWN.
      
      The same issue exists with the dormant state.
      
      The proper initialisation sequence, avoiding a race with opening of
      the device, is:
      
              rtnl_lock();
              rc = register_netdevice(dev);
              if (rc)
                      goto out_unlock;
              netif_carrier_off(dev); /* or netif_dormant_on(dev) */
              rtnl_unlock();
      
      but it seems silly that this should have to be repeated in so many
      drivers.  Further, the operstate seen immediately after opening the
      device may still be IF_OPER_UNKNOWN due to the asynchronous nature of
      linkwatch.
      
      Commit 22604c86 ('net: Fix for initial link state in 2.6.28') attempted
      to fix this by setting the operstate synchronously, but it was
      reverted as it could lead to deadlock.
      
      This initialises the operstate synchronously at registration time
      only (a sketch follows this entry).
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8f4cccbb
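
      A sketch of the registration-time initialisation, assuming it
      reuses linkwatch's RFC 2863 policy helper (the helper and function
      names are my assumptions):

              /* Sketch: called from register_netdevice() under RTNL; if the
               * driver already set carrier off or dormant on, compute the
               * operstate synchronously instead of waiting for linkwatch.
               */
              void linkwatch_init_dev(struct net_device *dev)
              {
                      /* Handle pre-registration link state changes */
                      if (!netif_carrier_ok(dev) || netif_dormant(dev))
                              rfc2863_policy(dev);
              }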
  15. 24 Aug 2012, 1 commit
  16. 23 Aug 2012, 1 commit
    • net: remove delay at device dismantle · 0115e8e3
      Authored by Eric Dumazet
      I noticed an extra one-second delay in device dismantle, tracked down
      to a call to dst_dev_event() while some call_rcu() callbacks were
      still in RCU queues.
      
      These call_rcu() callbacks were posted by rt_free(struct rtable *rt)
      calls.
      
      We then wait a little (a full second) in netdev_wait_allrefs() before
      kicking NETDEV_UNREGISTER again.
      
      Once those call_rcu() callbacks have completed, dst_dev_event() can do
      the needed device swap on busy dst entries.
      
      To solve this problem, add a new NETDEV_UNREGISTER_FINAL event, called
      after an rcu_barrier(), but outside of the RTNL lock.
      
      Use NETDEV_UNREGISTER_FINAL with care!
      
      Change the dst_dev_event() handler to react to NETDEV_UNREGISTER_FINAL
      (a sketch follows this entry).
      
      Also remove NETDEV_UNREGISTER_BATCH, as it is no longer used after
      the IP cache removal.
      
      With help from Gao feng
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Mahesh Bandewar <maheshb@google.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Gao feng <gaofeng@cn.fujitsu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0115e8e3
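
      A sketch of the notifier change; that the previous code keyed the
      swap on NETDEV_UNREGISTER is an assumption on my part, and the case
      body is reduced to a comment:

              /* Sketch: dst garbage handling now waits for the _FINAL event,
               * delivered only after an rcu_barrier(), so every pending
               * rt_free() call_rcu() has run by the time devices are swapped.
               */
              static int dst_dev_event(struct notifier_block *this,
                                       unsigned long event, void *ptr)
              {
                      struct net_device *dev = ptr;

                      switch (event) {
                      case NETDEV_UNREGISTER_FINAL:
                      case NETDEV_DOWN:
                              /* walk dst_garbage and rebind still-referenced
                               * entries' dev, as before
                               */
                              break;
                      }
                      return NOTIFY_DONE;
              }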
  17. 20 Aug 2012, 2 commits
  18. 15 Aug 2012, 3 commits
  19. 10 Aug 2012, 2 commits
  20. 09 Aug 2012, 1 commit
  21. 02 Aug 2012, 1 commit
    • net: Allow driver to limit number of GSO segments per skb · 30b678d8
      Authored by Ben Hutchings
      A peer (or local user) may cause TCP to use a nominal MSS of as little
      as 88 (actual MSS of 76 with timestamps).  Given that we have a
      sufficiently prodigious local sender and the peer ACKs quickly enough,
      it is nevertheless possible to grow the window for such a connection
      to the point that we will try to send just under 64K at once.  This
      results in a single skb that expands to 861 segments.
      
      In some drivers with TSO support, such an skb will require hundreds of
      DMA descriptors; a substantial fraction of a TX ring or even more than
      a full ring.  The TX queue selected for the skb may stall and trigger
      the TX watchdog repeatedly (since the problem skb will be retried
      after the TX reset).  This particularly affects sfc, for which the
      issue is designated as CVE-2012-3412.
      
      Therefore:
      1. Add the field net_device::gso_max_segs holding the device-specific
         limit.
      2. In netif_skb_features(), if the number of segments is too high, then
         mask out the GSO features to force fallback to software GSO
         (see the sketch after this entry).
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      30b678d8
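
      A sketch of point 2; the placement inside netif_skb_features() and
      the use of NETIF_F_GSO_MASK follow my reading of the patch:

              netdev_features_t netif_skb_features(struct sk_buff *skb)
              {
                      netdev_features_t features = skb->dev->features;

                      /* Sketch: if this skb would expand to more segments than
                       * the device advertises it can handle, strip the GSO bits
                       * so the stack falls back to software GSO.
                       */
                      if (skb_shinfo(skb)->gso_segs > skb->dev->gso_max_segs)
                              features &= ~NETIF_F_GSO_MASK;

                      /* ...existing vlan/protocol harmonization follows... */
                      return harmonize_features(skb, skb->protocol, features);
              }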
  22. 01 Aug 2012, 1 commit
    • netvm: set PF_MEMALLOC as appropriate during SKB processing · b4b9e355
      Authored by Mel Gorman
      In order to make sure pfmemalloc packets receive all the memory they
      need to proceed, ensure processing of pfmemalloc SKBs happens under
      PF_MEMALLOC (a sketch follows this entry).  This is limited to a subset
      of protocols that are expected to be used for writing to swap.  Taps
      are not allowed to use PF_MEMALLOC, as they are expected to communicate
      with userspace processes, which could be paged out.
      
      [a.p.zijlstra@chello.nl: Ideas taken from various patches]
      [jslaby@suse.cz: Lock imbalance fix]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4b9e355
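
      A sketch of the receive-side wrapper; the helper names here
      (__netif_receive_skb_core, tsk_restore_flags) are assumptions based
      on how this area of net/core/dev.c later looked:

              /* Sketch: for pfmemalloc skbs (reserve-backed, e.g. traffic for
               * swap over network), run protocol processing with PF_MEMALLOC
               * set so allocations may dip into reserves, then restore it.
               */
              static int __netif_receive_skb(struct sk_buff *skb)
              {
                      int ret;

                      if (sk_memalloc_socks() && skb_pfmemalloc(skb)) {
                              unsigned long pflags = current->flags;

                              current->flags |= PF_MEMALLOC;
                              ret = __netif_receive_skb_core(skb, true);
                              tsk_restore_flags(current, pflags, PF_MEMALLOC);
                      } else {
                              ret = __netif_receive_skb_core(skb, false);
                      }

                      return ret;
              }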
  23. 24 Jul 2012, 1 commit
  24. 23 Jul 2012, 1 commit
  25. 19 Jul 2012, 1 commit
  26. 15 Jul 2012, 1 commit
  27. 11 Jul 2012, 2 commits
  28. 10 Jul 2012, 1 commit
  29. 05 Jul 2012, 1 commit