1. 24 Dec 2014, 6 commits
  2. 17 Dec 2014, 1 commit
  3. 11 Dec 2014, 4 commits
    • net: sock: fix access via invalid file descriptor · 198bf1b0
      Alexei Starovoitov committed
      0day robot reported the following crash:
      [   21.233581] BUG: unable to handle kernel NULL pointer dereference at 0000000000000007
      [   21.234709] IP: [<ffffffff8156ebda>] sk_attach_bpf+0x39/0xc2
      
      It's due to bpf_prog_get() returning ERR_PTR.
      Check it properly.
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Fixes: 89aa0758 ("net: sock: allow eBPF programs to be attached to sockets")
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
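
      The fix amounts to checking for an ERR_PTR before the program is used.
      A minimal sketch of the pattern, assuming the surrounding sk_attach_bpf()
      body (not the exact kernel diff):

        static int sk_attach_bpf(u32 ufd, struct sock *sk)
        {
                struct bpf_prog *prog = bpf_prog_get(ufd);

                /* bpf_prog_get() returns an ERR_PTR on failure, not NULL,
                 * so it must be checked with IS_ERR() before use. */
                if (IS_ERR(prog))
                        return PTR_ERR(prog);

                /* ... attach prog to the socket as before ... */
                return 0;
        }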
    • net: introduce helper macro for_each_cmsghdr · f95b414e
      Gu Zheng committed
      Introduce the helper macro for_each_cmsghdr as a wrapper for
      enumerating the cmsghdr entries of a msghdr; this is just a cleanup.
      Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
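
      The wrapper is the usual CMSG_FIRSTHDR/CMSG_NXTHDR walk.  A sketch of
      the macro and a typical call site (the definition in the patch may
      differ slightly in layout):

        #define for_each_cmsghdr(cmsg, msg) \
                for (cmsg = CMSG_FIRSTHDR(msg); \
                     cmsg; \
                     cmsg = CMSG_NXTHDR(msg, cmsg))

        /* Typical use: walk the control messages attached to a msghdr. */
        struct cmsghdr *cmsg;

        for_each_cmsghdr(cmsg, msg) {
                if (!CMSG_OK(msg, cmsg))
                        return -EINVAL;
                /* dispatch on cmsg->cmsg_level / cmsg->cmsg_type ... */
        }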
    • net: Pull out core bits of __netdev_alloc_skb and add __napi_alloc_skb · fd11a83d
      Alexander Duyck committed
      This change pulls the core functionality out of __netdev_alloc_skb and
      places it in a new function named __alloc_rx_skb.  The reason for doing
      this is to make these bits accessible to a new function, __napi_alloc_skb.
      In addition __alloc_rx_skb now has a new flags value that is used to
      determine which page frag pool to allocate from.  If the SKB_ALLOC_NAPI
      flag is set then the NAPI pool is used.  The advantage of this is that we
      do not have to use local_irq_save/restore when accessing the NAPI pool from
      NAPI context.
      
      In my test setup I saw at least 11ns of savings using the napi_alloc_skb
      function versus the netdev_alloc_skb function, most of this being due to
      the fact that we didn't have to call local_irq_save/restore.
      
      The main use case for napi_alloc_skb would be for things such as copybreak
      or page fragment based receive paths where an skb is allocated after the
      data has been received instead of before.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
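
      A greatly simplified sketch of the split (the flag value, headroom
      handling and failure paths are omitted; the real functions live in
      net/core/skbuff.c):

        #define SKB_ALLOC_NAPI  0x04    /* illustrative flag value */

        static struct sk_buff *__alloc_rx_skb(unsigned int len, gfp_t gfp_mask,
                                              int flags)
        {
                void *data;

                /* NAPI callers run in softirq context, so the NAPI pool can
                 * be used without local_irq_save()/restore(). */
                if (flags & SKB_ALLOC_NAPI)
                        data = __napi_alloc_frag(len, gfp_mask);
                else
                        data = netdev_alloc_frag(len);

                return data ? build_skb(data, len) : NULL;
        }

        struct sk_buff *__napi_alloc_skb(struct napi_struct *napi,
                                         unsigned int len, gfp_t gfp_mask)
        {
                return __alloc_rx_skb(len, gfp_mask, SKB_ALLOC_NAPI);
        }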
    • net: Split netdev_alloc_frag into __alloc_page_frag and add __napi_alloc_frag · ffde7328
      Alexander Duyck committed
      This patch splits the netdev_alloc_frag function up so that it can be used
      on one of two page frag pools instead of being fixed on the
      netdev_alloc_cache.  By doing this we can add a NAPI specific function
      __napi_alloc_frag that accesses a pool that is only used from softirq
      context.  The advantage to this is that we do not need to call
      local_irq_save/restore which can be a significant savings.
      
      I also took the opportunity to refactor the core bits that were placed in
      __alloc_page_frag.  First I updated the allocation to do either a 32K
      allocation or an order 0 page.  This is based on the changes in commit
      d9b2938a where it was found that latencies could be reduced in case of
      failures.  Then I also rewrote the logic to work from the end of the page to
      the start.  By doing this the size value doesn't have to be used unless we
      have run out of space for page fragments.  Finally I cleaned up the atomic
      bits so that we just do an atomic_sub_and_test and if that returns true then
      we set the page->_count via an atomic_set.  This way we can remove the extra
      conditional for the atomic_read since it would have led to an atomic_inc in
      the case of success anyway.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
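
      A sketch of the refcount trick with assumed, simplified names (the
      real state lives in struct netdev_alloc_cache and the refill path is
      omitted):

        #define FRAG_CACHE_MAX_BIAS  (PAGE_SIZE)   /* illustrative bias */

        struct frag_cache {
                struct page *page;
                unsigned int size;          /* size of the backing page(s) */
                unsigned int offset;        /* next free byte, from the end */
                unsigned int pagecnt_bias;  /* pre-charged page references */
        };

        static void *frag_alloc(struct frag_cache *nc, unsigned int fragsz)
        {
                if (nc->offset < fragsz) {
                        /* Only when the page is exhausted do we touch the
                         * refcount.  atomic_sub_and_test() returning true
                         * means we were the last user, so recycle the page
                         * with a plain atomic_set(). */
                        if (!atomic_sub_and_test(nc->pagecnt_bias,
                                                 &nc->page->_count))
                                return NULL;    /* real code refills here */
                        atomic_set(&nc->page->_count, FRAG_CACHE_MAX_BIAS);
                        nc->pagecnt_bias = FRAG_CACHE_MAX_BIAS;
                        nc->offset = nc->size;
                }

                nc->offset -= fragsz;   /* walk from the end toward the start */
                nc->pagecnt_bias--;
                return page_address(nc->page) + nc->offset;
        }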
  4. 10 Dec 2014, 7 commits
    • rocker: remove swdev mode · 1d460b98
      Roopa Prabhu committed
      Remove use of 'swdev' mode in rocker.  rocker dev offloads can use
      BRIDGE_FLAGS_SELF to indicate offload to hardware.
      Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
      Signed-off-by: Scott Feldman <sfeldma@gmail.com>
      Signed-off-by: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: avoid to call skb_queue_len again · e008f3f0
      Li RongQing committed
      The queue length of sd->input_pkt_queue has already been stored in
      qlen and cannot change, since we hold the lock.
      Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Cc: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
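
      A sketch of the pattern, assuming the enqueue_to_backlog() context the
      patch touches:

        qlen = skb_queue_len(&sd->input_pkt_queue);
        if (qlen <= netdev_max_backlog && !skb_flow_limit(skb, qlen)) {
                /* was: if (skb_queue_len(&sd->input_pkt_queue)) */
                if (qlen) {
                        __skb_queue_tail(&sd->input_pkt_queue, skb);
                        /* ... */
                }
        }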
    • skb_copy_datagram_iovec() can die · d3a9632f
      Al Viro committed
      No callers other than itself.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • switch memcpy_to_msg() and skb_copy{,_and_csum}_datagram_msg() to primitives · e5a4b0bb
      Al Viro committed
      ... making both non-draining.  That means that tcp_recvmsg() becomes
      non-draining.  And _that_ would break iscsit_do_rx_data() unless we
      	a) make sure tcp_recvmsg() is uniformly non-draining (it is)
      	b) make sure it copes with arbitrary (including shifted)
      iov_iter (it does, all it uses is iov_iter primitives)
      	c) make iscsit_do_rx_data() initialize ->msg_iter only once.
      
      Fortunately, (c) is doable with minimal work, and we are rid of one of
      the two places where kernel send/recvmsg users would be unhappy with
      non-draining behaviour.
      
      Actually, that makes all but one of ->recvmsg() instances iov_iter-clean.
      The exception is skcipher_recvmsg() and it also isn't hard to convert
      to primitives (iov_iter_get_pages() is needed there).  That'll wait
      a bit - there's some interplay with ->sendmsg() path for that one.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
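
      As a rough illustration of what "switching to primitives" means here,
      memcpy_to_msg() can be expressed directly on top of the iov_iter copy
      helpers; a sketch (close to, but not necessarily identical to, the
      helper in include/linux/skbuff.h):

        /* copy_to_iter() advances msg->msg_iter, so the caller's iterator
         * state (including any shift) is respected and preserved, which is
         * what makes the ->recvmsg() paths non-draining. */
        static inline int memcpy_to_msg(struct msghdr *msg, void *data, int len)
        {
                return copy_to_iter(data, len, &msg->msg_iter) == len ?
                       0 : -EFAULT;
        }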
    • dst: no need to take reference on DST_NOCACHE dsts · dbfc4fb7
      Hannes Frederic Sowa committed
      Since commit f8864972 ("ipv4: fix dst race in sk_dst_get()")
      DST_NOCACHE dst_entries get freed by RCU. So there is no need to get a
      reference on them when we are in rcu protected sections.
      
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Reviewed-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: David S. Miller <davem@davemloft.net>
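
      The resulting pattern, roughly (a generic sketch, assuming the caller
      is inside an RCU read-side critical section; the patch applies this at
      existing call sites rather than adding new code):

        rcu_read_lock();
        dst = rcu_dereference(sk->sk_dst_cache);
        if (dst) {
                /* DST_NOCACHE entries are freed via RCU since commit
                 * f8864972, so the dst cannot disappear inside this
                 * section and no dst_hold() is needed. */
        }
        rcu_read_unlock();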
    • net: avoid two atomic operations in fast clones · 6ffe75eb
      Eric Dumazet committed
      Commit ce1a4ea3 ("net: avoid one atomic operation in skb_clone()")
      took the wrong way to save one atomic operation.
      
      It is actually possible to avoid two atomic operations, if we
      do not change skb->fclone values, and only rely on clone_ref
      content to signal if the clone is available or not.
      
      skb_clone() can simply use the fast clone if clone_ref is 1.
      
      kfree_skbmem() can avoid the atomic_dec_and_test() if clone_ref is 1.
      
      Note that because we usually free the clone before the original skb,
      this particular attempt is only done for the original skb to have better
      branch prediction.
      
      SKB_FCLONE_FREE is removed.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Sabrina Dubroca <sd@queasysnail.net>
      Cc: Vijay Subramanian <subramanian.vijay@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
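
      A simplified sketch of the two fast paths this enables (fclones is the
      containing struct sk_buff_fclones; surrounding code omitted):

        /* skb_clone(): if clone_ref is 1 the companion clone is free, so
         * claim it with a plain atomic_set() instead of an increment. */
        if (skb->fclone == SKB_FCLONE_ORIG &&
            atomic_read(&fclones->fclone_ref) == 1) {
                n = &fclones->skb2;
                atomic_set(&fclones->fclone_ref, 2);
        } else {
                /* fall back to a slab-allocated clone ... */
        }

        /* kfree_skbmem(), original skb: if we are the only remaining user,
         * skip atomic_dec_and_test() entirely. */
        if (atomic_read(&fclones->fclone_ref) == 1)
                goto fastpath;
        if (!atomic_dec_and_test(&fclones->fclone_ref))
                return;
        fastpath:
                kmem_cache_free(skbuff_fclone_cache, fclones);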
    • rtnetlink: delay RTM_DELLINK notification until after ndo_uninit() · 395eea6c
      Mahesh Bandewar committed
      The commit 56bfa7ee ("unregister_netdevice : move RTM_DELLINK to
      until after ndo_uninit") tried to do this earlier, but while doing so
      it created a problem.  Unfortunately the delayed rtmsg_ifinfo() also
      delayed the call to fill_info().  So this translated into asking the
      driver to remove its private state and then querying that private
      state.  This could have catastrophic consequences.
      
      This change breaks rtmsg_ifinfo() into two parts - one takes a precise
      snapshot of the device by calling fill_info() before ndo_uninit() runs,
      and the second part sends the notification using the collected
      snapshot.

      The problem was noticed when the last link is deleted from an ipvlan
      device after it has freed the port, and the subsequent .fill_info()
      call tries to get the info from the port.
      
      kernel: [  255.139429] ------------[ cut here ]------------
      kernel: [  255.139439] WARNING: CPU: 12 PID: 11173 at net/core/rtnetlink.c:2238 rtmsg_ifinfo+0x100/0x110()
      kernel: [  255.139493] Modules linked in: ipvlan bonding w1_therm ds2482 wire cdc_acm ehci_pci ehci_hcd i2c_dev i2c_i801 i2c_core msr cpuid bnx2x ptp pps_core mdio libcrc32c
      kernel: [  255.139513] CPU: 12 PID: 11173 Comm: ip Not tainted 3.18.0-smp-DEV #167
      kernel: [  255.139514] Hardware name: Intel RML,PCH/Ibis_QC_18, BIOS 1.0.10 05/15/2012
      kernel: [  255.139515]  0000000000000009 ffff880851b6b828 ffffffff815d87f4 00000000000000e0
      kernel: [  255.139516]  0000000000000000 ffff880851b6b868 ffffffff8109c29c 0000000000000000
      kernel: [  255.139518]  00000000ffffffa6 00000000000000d0 ffffffff81aaf580 0000000000000011
      kernel: [  255.139520] Call Trace:
      kernel: [  255.139527]  [<ffffffff815d87f4>] dump_stack+0x46/0x58
      kernel: [  255.139531]  [<ffffffff8109c29c>] warn_slowpath_common+0x8c/0xc0
      kernel: [  255.139540]  [<ffffffff8109c2ea>] warn_slowpath_null+0x1a/0x20
      kernel: [  255.139544]  [<ffffffff8150d570>] rtmsg_ifinfo+0x100/0x110
      kernel: [  255.139547]  [<ffffffff814f78b5>] rollback_registered_many+0x1d5/0x2d0
      kernel: [  255.139549]  [<ffffffff814f79cf>] unregister_netdevice_many+0x1f/0xb0
      kernel: [  255.139551]  [<ffffffff8150acab>] rtnl_dellink+0xbb/0x110
      kernel: [  255.139553]  [<ffffffff8150da90>] rtnetlink_rcv_msg+0xa0/0x240
      kernel: [  255.139557]  [<ffffffff81329283>] ? rhashtable_lookup_compare+0x43/0x80
      kernel: [  255.139558]  [<ffffffff8150d9f0>] ? __rtnl_unlock+0x20/0x20
      kernel: [  255.139562]  [<ffffffff8152cb11>] netlink_rcv_skb+0xb1/0xc0
      kernel: [  255.139563]  [<ffffffff8150a495>] rtnetlink_rcv+0x25/0x40
      kernel: [  255.139565]  [<ffffffff8152c398>] netlink_unicast+0x178/0x230
      kernel: [  255.139567]  [<ffffffff8152c75f>] netlink_sendmsg+0x30f/0x420
      kernel: [  255.139571]  [<ffffffff814e0b0c>] sock_sendmsg+0x9c/0xd0
      kernel: [  255.139575]  [<ffffffff811d1d7f>] ? rw_copy_check_uvector+0x6f/0x130
      kernel: [  255.139577]  [<ffffffff814e11c9>] ? copy_msghdr_from_user+0x139/0x1b0
      kernel: [  255.139578]  [<ffffffff814e1774>] ___sys_sendmsg+0x304/0x310
      kernel: [  255.139581]  [<ffffffff81198723>] ? handle_mm_fault+0xca3/0xde0
      kernel: [  255.139585]  [<ffffffff811ebc4c>] ? destroy_inode+0x3c/0x70
      kernel: [  255.139589]  [<ffffffff8108e6ec>] ? __do_page_fault+0x20c/0x500
      kernel: [  255.139597]  [<ffffffff811e8336>] ? dput+0xb6/0x190
      kernel: [  255.139606]  [<ffffffff811f05f6>] ? mntput+0x26/0x40
      kernel: [  255.139611]  [<ffffffff811d2b94>] ? __fput+0x174/0x1e0
      kernel: [  255.139613]  [<ffffffff814e2129>] __sys_sendmsg+0x49/0x90
      kernel: [  255.139615]  [<ffffffff814e2182>] SyS_sendmsg+0x12/0x20
      kernel: [  255.139617]  [<ffffffff815df092>] system_call_fastpath+0x12/0x17
      kernel: [  255.139619] ---[ end trace 5e6703e87d984f6b ]---
      Signed-off-by: Mahesh Bandewar <maheshb@google.com>
      Reported-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Roopa Prabhu <roopa@cumulusnetworks.com>
      Cc: David S. Miller <davem@davemloft.net>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
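
      In rollback_registered_many() the resulting shape is roughly the
      following (helper names as the split is usually described; treat this
      as a sketch rather than the exact diff):

        struct sk_buff *skb = NULL;

        /* Build the RTM_DELLINK message while the device state is still
         * intact, i.e. before ndo_uninit() tears it down. */
        skb = rtmsg_ifinfo_build_skb(RTM_DELLINK, dev, ~0U, GFP_KERNEL);

        /* ... netdev teardown, including dev->netdev_ops->ndo_uninit(dev) ... */

        /* Only now send the notification that was built earlier. */
        if (skb)
                rtmsg_ifinfo_send(skb, dev, GFP_KERNEL);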
  5. 09 Dec 2014, 1 commit
    • ethtool: Support for configurable RSS hash function · 892311f6
      Eyal Perry committed
      This patch extends the set/get_rxfh ethtool-options for getting or
      setting the RSS hash function.
      
      It modifies drivers implementation of set/get_rxfh accordingly.
      
      This change also delegates the responsibility of checking whether a
      modification to a certain RX flow hash parameter is supported to the
      driver implementation of set_rxfh.
      
      The user-kernel API is done through the new hfunc bitmask field in the
      ethtool_rxfh struct.  A bit set in the hfunc field corresponds to an
      index in the new string set ETH_SS_RSS_HASH_FUNCS.
      
      Approval was obtained from most of the relevant driver maintainers
      that their driver uses Toeplitz; for the few that did not answer,
      Toeplitz is assumed as well.
      
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Ariel Elior <ariel.elior@qlogic.com>
      Cc: Prashant Sreedharan <prashant@broadcom.com>
      Cc: Michael Chan <mchan@broadcom.com>
      Cc: Hariprasad S <hariprasad@chelsio.com>
      Cc: Sathya Perla <sathya.perla@emulex.com>
      Cc: Subbu Seetharaman <subbu.seetharaman@emulex.com>
      Cc: Ajit Khaparde <ajit.khaparde@emulex.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Cc: Bruce Allan <bruce.w.allan@intel.com>
      Cc: Carolyn Wyborny <carolyn.wyborny@intel.com>
      Cc: Don Skidmore <donald.c.skidmore@intel.com>
      Cc: Greg Rose <gregory.v.rose@intel.com>
      Cc: Matthew Vick <matthew.vick@intel.com>
      Cc: John Ronciak <john.ronciak@intel.com>
      Cc: Mitch Williams <mitch.a.williams@intel.com>
      Cc: Amir Vadai <amirv@mellanox.com>
      Cc: Solarflare linux maintainers <linux-net-drivers@solarflare.com>
      Cc: Shradha Shah <sshah@solarflare.com>
      Cc: Shreyas Bhatewara <sbhatewara@vmware.com>
      Cc: "VMware, Inc." <pv-drivers@vmware.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Eyal Perry <eyalpe@mellanox.com>
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
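
      For a driver this typically looks like the sketch below (op signatures
      and constants should be checked against the tree; ETH_RSS_HASH_TOP is
      the Toeplitz bit, and the foo_ names are illustrative):

        static int foo_get_rxfh(struct net_device *dev, u32 *indir, u8 *key,
                                u8 *hfunc)
        {
                if (hfunc)
                        *hfunc = ETH_RSS_HASH_TOP;      /* Toeplitz */
                /* ... fill indir/key from device state if non-NULL ... */
                return 0;
        }

        static int foo_set_rxfh(struct net_device *dev, const u32 *indir,
                                const u8 *key, const u8 hfunc)
        {
                /* The driver now vetoes unsupported hash-function changes. */
                if (hfunc != ETH_RSS_HASH_NO_CHANGE &&
                    hfunc != ETH_RSS_HASH_TOP)
                        return -EOPNOTSUPP;
                /* ... program indir/key into the hardware ... */
                return 0;
        }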
  6. 06 Dec 2014, 1 commit
    • net: sock: allow eBPF programs to be attached to sockets · 89aa0758
      Alexei Starovoitov committed
      Introduce a new setsockopt() command:
      
      setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF, &prog_fd, sizeof(prog_fd))
      
      where prog_fd was received from syscall bpf(BPF_PROG_LOAD, attr, ...)
      and attr->prog_type == BPF_PROG_TYPE_SOCKET_FILTER
      
      setsockopt() calls bpf_prog_get() which increments refcnt of the program,
      so it doesn't get unloaded while socket is using the program.
      
      The same eBPF program can be attached to multiple sockets.
      
      When the user task exits, its sockets are closed automatically, which
      calls sk_filter_uncharge() and decrements the refcnt of the eBPF
      program.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
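
      From userspace the flow is roughly the following sketch (error handling
      and the actual filter instructions are omitted; the constants come from
      the kernel uapi headers):

        #include <linux/bpf.h>
        #include <sys/socket.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        /* Load a socket-filter eBPF program and attach it to a socket. */
        static int attach_filter(int sock, const struct bpf_insn *insns,
                                 unsigned int insn_cnt)
        {
                union bpf_attr attr = {
                        .prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
                        .insn_cnt  = insn_cnt,
                        .insns     = (__u64)(unsigned long)insns,
                        .license   = (__u64)(unsigned long)"GPL",
                };
                int prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr,
                                      sizeof(attr));

                if (prog_fd < 0)
                        return -1;
                return setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF,
                                  &prog_fd, sizeof(prog_fd));
        }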
  7. 05 Dec 2014, 6 commits
  8. 03 Dec 2014, 5 commits
  9. 30 Nov 2014, 1 commit
  10. 27 Nov 2014, 2 commits
  11. 25 Nov 2014, 1 commit
    • crypto: algif - add and use sock_kzfree_s() instead of memzero_explicit() · 79e88659
      Daniel Borkmann committed
      Commit e1bd95bf ("crypto: algif - zeroize IV buffer") and
      2a6af25b ("crypto: algif - zeroize message digest buffer")
      added memzero_explicit() calls on buffers that are later on
      passed back to sock_kfree_s().
      
      This is a discussed follow-up that, instead, extends the sock
      API and adds sock_kzfree_s(), which internally uses kzfree()
      instead of kfree() for passing the buffers back to slab.
      
      Having sock_kzfree_s() allows us to keep the changes minimal by
      providing a drop-in replacement instead of adding memzero_explicit()
      calls before every sock_kfree_s().
      
      In kzfree(), the compiler is not allowed to optimize the memset()
      away, so there is no need for memzero_explicit().  Both sock_kfree_s()
      and sock_kzfree_s() are wrappers around __sock_kfree_s() and call into
      kfree() and kzfree(), respectively.  Here, __sock_kfree_s() needs to
      be explicitly inlined, as we want the compiler to optimize the call
      and condition away; it thus produces, e.g. on x86_64, the _same_
      assembler output for sock_kfree_s() before and after, while also
      avoiding code duplication.
      
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
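
      The shape of the helper as described above, in a sketch (the real
      definitions live in net/core/sock.c and may differ in detail):

        /* One inlined worker and two thin wrappers, so the compiler emits
         * the same code for sock_kfree_s() as before. */
        static inline void __sock_kfree_s(struct sock *sk, void *mem, int size,
                                          const bool nullify)
        {
                if (nullify)
                        kzfree(mem);    /* memset that cannot be optimized away */
                else
                        kfree(mem);
                atomic_sub(size, &sk->sk_omem_alloc);
        }

        void sock_kfree_s(struct sock *sk, void *mem, int size)
        {
                __sock_kfree_s(sk, mem, size, false);
        }

        void sock_kzfree_s(struct sock *sk, void *mem, int size)
        {
                __sock_kfree_s(sk, mem, size, true);
        }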
  12. 24 Nov 2014, 3 commits
  13. 22 Nov 2014, 2 commits