1. 02 Aug, 2017 1 commit
  2. 20 Jun, 2017 1 commit
  3. 18 Apr, 2017 1 commit
  4. 07 Apr, 2017 1 commit
  5. 10 Mar, 2017 1 commit
    • net: Work around lockdep limitation in sockets that use sockets · cdfbabfb
      Committed by David Howells
      Lockdep issues a circular dependency warning when AFS issues an operation
      through AF_RXRPC from a context in which the VFS/VM holds the mmap_sem.
      
      The theory lockdep comes up with is as follows:
      
       (1) If the pagefault handler decides it needs to read pages from AFS, it
           calls AFS with mmap_sem held and AFS begins an AF_RXRPC call, but
           creating a call requires the socket lock:
      
      	mmap_sem must be taken before sk_lock-AF_RXRPC
      
       (2) afs_open_socket() opens an AF_RXRPC socket and binds it.  rxrpc_bind()
           binds the underlying UDP socket whilst holding its socket lock.
           inet_bind() takes its own socket lock:
      
      	sk_lock-AF_RXRPC must be taken before sk_lock-AF_INET
      
       (3) Reading from a TCP socket into a userspace buffer might cause a fault
           and thus cause the kernel to take the mmap_sem, but the TCP socket is
           locked whilst doing this:
      
      	sk_lock-AF_INET must be taken before mmap_sem
      
      However, lockdep's theory is wrong in this instance because it deals only
      with lock classes and not individual locks.  The AF_INET lock in (2) isn't
      really equivalent to the AF_INET lock in (3) as the former deals with a
      socket entirely internal to the kernel that never sees userspace.  This is
      a limitation in the design of lockdep.
      
      Fix the general case by:
      
       (1) Double up all the locking keys used in sockets so that one set are
           used if the socket is created by userspace and the other set is used
           if the socket is created by the kernel.
      
       (2) Store the kern parameter passed to sk_alloc() in a variable in the
           sock struct (sk_kern_sock).  This informs sock_lock_init(),
           sock_init_data() and sk_clone_lock() as to the lock keys to be used.
      
           Note that the child created by sk_clone_lock() inherits the parent's
           kern setting.
      
       (3) Add a 'kern' parameter to ->accept() that is analogous to the one
           passed in to ->create() that distinguishes whether kernel_accept() or
           sys_accept4() was the caller and can be passed to sk_alloc().
      
           Note that a lot of accept functions merely dequeue an already
           allocated socket.  I haven't touched these as the new socket already
           exists before we get the parameter.
      
           Note also that there are a couple of places where I've made the accepted
           socket unconditionally kernel-based:
      
      	irda_accept()
      	rds_tcp_accept_one()
      	tcp_accept_from_sock()
      
           because they follow a sock_create_kern() and accept off of that.
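
      To illustrate (1) and (2), a condensed sketch in plain C with made-up
      names (not the actual kernel symbols): the lock class handed to
      lockdep is selected by the saved kern flag.

      	/* Stand-in for lockdep's opaque per-class key type. */
      	struct lock_class_key { int dummy; };

      	/* One key per address family for userspace-created sockets ... */
      	static struct lock_class_key af_family_keys[46];
      	/* ... and a second set for kernel-created sockets. */
      	static struct lock_class_key af_family_kern_keys[46];

      	struct sock_sketch {
      		int family;		/* e.g. AF_INET */
      		int sk_kern_sock;	/* 'kern' as passed to sk_alloc() */
      	};

      	/* Kernel-internal sockets get their own lockdep class, so the
      	 * AF_INET lock in (2) is no longer conflated with the one in (3). */
      	static struct lock_class_key *sock_lock_key(struct sock_sketch *sk)
      	{
      		return sk->sk_kern_sock ? &af_family_kern_keys[sk->family]
      					: &af_family_keys[sk->family];
      	}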
      
      Whilst creating this, I noticed that lustre and ocfs don't create sockets
      through sock_create_kern() and thus they aren't marked as for-kernel,
      though they appear to be internal.  I wonder if these should do that so
      that they use the new set of lock keys.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 29 Aug, 2016 1 commit
  7. 01 Jul, 2016 1 commit
    • tun: switch to use skb array for tx · 1576d986
      Committed by Jason Wang
      We used to queue tx packets in sk_receive_queue, but this is less
      efficient since it requires spinlocks to synchronize between producer
      and consumer.
      
      This patch tries to address this by:
      
      - switch from sk_receive_queue to an skb_array, and resize it when
        tx_queue_len is changed.
      - introduce a new proto_ops method, peek_len, which is used to peek
        at the skb length.
      - implement a tun version of peek_len for vhost_net to use, and
        convert vhost_net to use peek_len where possible.
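
      The win comes from the lock-free single-producer/single-consumer
      design of the ring. A toy model of the idea (illustrative only; the
      real skb_array/ptr_ring adds memory barriers, batching and resize
      support):

      	#define RING_SIZE 256

      	struct toy_ring {
      		void *slot[RING_SIZE];
      		unsigned int head;	/* only written by the producer */
      		unsigned int tail;	/* only written by the consumer */
      	};

      	/* Producer and consumer each test a slot pointer instead of
      	 * sharing an index, so neither side needs a spinlock. */
      	static int toy_produce(struct toy_ring *r, void *p)
      	{
      		if (r->slot[r->head])
      			return -1;			/* ring full */
      		r->slot[r->head] = p;
      		r->head = (r->head + 1) % RING_SIZE;
      		return 0;
      	}

      	static void *toy_consume(struct toy_ring *r)
      	{
      		void *p = r->slot[r->tail];

      		if (p) {
      			r->slot[r->tail] = NULL;
      			r->tail = (r->tail + 1) % RING_SIZE;
      		}
      		return p;				/* NULL if empty */
      	}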
      
      A pktgen test shows about a 15.3% improvement in guest receive pps
      for small buffers:
      
      Before: ~1300000pps
      After : ~1500000pps
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 16 Jun, 2016 1 commit
    • net: Don't forget pr_fmt on net_dbg_ratelimited for CONFIG_DYNAMIC_DEBUG · daddef76
      Committed by Jason A. Donenfeld
      The implementation of net_dbg_ratelimited in the CONFIG_DYNAMIC_DEBUG
      case was added with 2c94b537 ("net: Implement net_dbg_ratelimited() for
      CONFIG_DYNAMIC_DEBUG case"). The implementation strategy was to take the
      usual definition of the dynamic_pr_debug macro, but alter it by adding a
      call to "net_ratelimit()" in the if statement. This is, in fact, the
      correct approach.
      
      However, while doing this, the author of the commit forgot to surround
      fmt with pr_fmt, resulting in unprefixed log messages appearing on the
      console. So, this commit adds back the pr_fmt(fmt) invocation, making
      net_dbg_ratelimited properly consistent across the DEBUG, no-DEBUG, and
      DYNAMIC_DEBUG cases, and bringing it to parity with the behavior of
      dynamic_pr_debug.
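
      The corrected CONFIG_DYNAMIC_DEBUG definition then looks roughly like
      this (a sketch; the in-tree macro may differ in detail):

      	#define net_dbg_ratelimited(fmt, ...)				\
      	do {								\
      		DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt);		\
      		if (unlikely(descriptor.flags & _DPRINTK_FLAGS_PRINT) &&\
      		    net_ratelimit())					\
      			__dynamic_pr_debug(&descriptor, pr_fmt(fmt),	\
      					   ##__VA_ARGS__);		\
      	} while (0)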
      
      Fixes: 2c94b537 ("net: Implement net_dbg_ratelimited() for CONFIG_DYNAMIC_DEBUG case")
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Cc: Tim Bingham <tbingham@akamai.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 02 May, 2016 1 commit
    • net: Implement net_dbg_ratelimited() for CONFIG_DYNAMIC_DEBUG case · 2c94b537
      Committed by Tim Bingham
      Prior to commit d92cff89 ("net_dbg_ratelimited: turn into no-op
      when !DEBUG") the implementation of net_dbg_ratelimited() was buggy
      for both the DEBUG and CONFIG_DYNAMIC_DEBUG cases.
      
      The bug was that net_ratelimit() was being called and, despite
      returning true, nothing was being printed to the console. This
      resulted in messages like the following -
      
      "net_ratelimit: %d callbacks suppressed"
      
      with no other output nearby.
      
      After commit d92cff89 ("net_dbg_ratelimited: turn into no-op when
      !DEBUG") the bug is fixed for the DEBUG case. However, there is no
      output at all for the CONFIG_DYNAMIC_DEBUG case.
      
      This patch restores debug output (if enabled) for the
      CONFIG_DYNAMIC_DEBUG case.
      
      Add a definition of net_dbg_ratelimited() for the CONFIG_DYNAMIC_DEBUG
      case. The implementation takes care to check that dynamic debugging is
      enabled before calling net_ratelimit().
      
      Fixes: d92cff89 ("net_dbg_ratelimited: turn into no-op when !DEBUG")
      Signed-off-by: Tim Bingham <tbingham@akamai.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 29 Mar, 2016 1 commit
  11. 10 Mar, 2016 1 commit
  12. 02 Dec, 2015 2 commits
    • net: fix sock_wake_async() rcu protection · ceb5d58b
      Committed by Eric Dumazet
      Dmitry provided a syzkaller (http://github.com/google/syzkaller)
      program triggering a fault in sock_wake_async() when async IO is
      requested.
      
      The program stressed af_unix sockets, but the issue is generic and
      should be addressed in the core networking stack.
      
      The problem is that by the time sock_wake_async() is called,
      we should not access the @flags field of 'struct socket',
      as the inode containing this socket might be freed without
      further notice, and without an RCU grace period.
      
      We already maintain an RCU-protected structure, "struct socket_wq",
      so moving SOCKWQ_ASYNC_NOSPACE & SOCKWQ_ASYNC_WAITDATA into it
      is the safe route.
      
      It also reduces the number of cache lines that need dirtying, so it
      might provide a performance improvement anyway.
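
      For reference, a sketch of where the bits end up (fields as described
      above; exact layout hedged):

      	struct socket_wq {
      		wait_queue_head_t	wait;
      		struct fasync_struct	*fasync_list;
      		unsigned long		flags;	/* SOCKWQ_ASYNC_NOSPACE,
      						 * SOCKWQ_ASYNC_WAITDATA */
      		struct rcu_head		rcu;	/* freed after a grace period */
      	};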
      
      In followup patches, we might move the remaining flags (SOCK_NOSPACE,
      SOCK_PASSCRED, SOCK_PASSSEC) to save 8 bytes and make 'struct socket'
      mostly read-only, letting it be shared between cpus.
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: rename SOCK_ASYNC_NOSPACE and SOCK_ASYNC_WAITDATA · 9cd3e072
      Committed by Eric Dumazet
      This patch is a cleanup to make the following patch easier to
      review.

      The goal is to move SOCK_ASYNC_NOSPACE and SOCK_ASYNC_WAITDATA
      from (struct socket)->flags to (struct socket_wq)->flags to
      benefit from RCU protection in sock_wake_async().

      To ease backports, we rename both constants.
      
      Two new helpers, sk_set_bit(int nr, struct sock *sk) and
      sk_clear_bit(int nr, struct sock *sk), are added so that the
      following patch can change their implementation.
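
      The helpers start out as trivial wrappers; the point is the
      indirection (a sketch of the shape described above):

      	static inline void sk_set_bit(int nr, struct sock *sk)
      	{
      		set_bit(nr, &sk->sk_socket->flags);
      	}

      	static inline void sk_clear_bit(int nr, struct sock *sk)
      	{
      		clear_bit(nr, &sk->sk_socket->flags);
      	}
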
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 08 Oct, 2015 1 commit
  14. 07 Aug, 2015 1 commit
    • net_dbg_ratelimited: turn into no-op when !DEBUG · d92cff89
      Committed by Jason A. Donenfeld
      The pr_debug family of functions turns into a no-op when -DDEBUG is not
      specified, opting instead to call "no_printk", which gets compiled to a
      no-op (but retains gcc's nice warnings about printf-style arguments).
      
      The problem with net_dbg_ratelimited is that it is defined to be a
      variant of net_ratelimited_function, which expands to essentially:
      
          if (net_ratelimit())
              pr_debug(fmt, ...);
      
      When DEBUG is not defined, this becomes,
      
          if (net_ratelimit())
              ;
      
      This seems benign, except it isn't. Firstly, there's the obvious
      overhead of calling net_ratelimit needlessly, which does quite a bit
      of bookkeeping for the rate limiting. Given that the pr_debug and
      net_dbg_ratelimited family of functions are sprinkled liberally
      through performance-critical code, with developers assuming they'll
      be compiled out to a no-op most of the time, we certainly do not want
      this needless bookkeeping. Secondly, and most visibly, even though no
      debug message is printed when DEBUG is not defined, if there is a
      flood of invocations, dmesg winds up peppered with messages such as
      "net_ratelimit: 320 callbacks suppressed". This is because our
      aforementioned net_ratelimit() function actually prints this text in
      some circumstances. It's especially odd to see this when there isn't
      any other accompanying debug message.
      
      So, in sum, it doesn't make sense to keep this function's current
      behavior; instead it should match what every other debug family of
      functions in the kernel does with !DEBUG -- nothing.

      This patch replaces calls to net_dbg_ratelimited when !DEBUG with
      no_printk, keeping with the idiom of all the other debug print helpers.

      Also, though not strictly necessary, it guards the call with an if (0)
      so that evaluation of the arguments is sure to be compiled out.
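
      The resulting !DEBUG definition is therefore roughly:

      	#define net_dbg_ratelimited(fmt, ...)				\
      		do {							\
      			if (0)						\
      				no_printk(KERN_DEBUG pr_fmt(fmt),	\
      					  ##__VA_ARGS__);		\
      		} while (0)
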
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 11 May, 2015 2 commits
  16. 12 Apr, 2015 1 commit
  17. 03 Mar, 2015 1 commit
  18. 14 May, 2014 1 commit
    • net: avoid dependency of net_get_random_once on nop patching · 3d440522
      Committed by Hannes Frederic Sowa
      net_get_random_once depends on the static keys infrastructure to patch up
      the branch to the slow path during boot. This was realized by abusing the
      static keys api and defining a new initializer to not enable the call
      site while still indicating that the branch point should get patched
      up. This was needed to have the fast path considered likely by gcc.
      
      The static key initialization during boot-up normally walks through
      all the registered keys and either patches in ideal nops or enables
      the jump site, but it omitted that step on x86 if ideal nops were
      already placed at static_key branch points. Thus, the
      net_get_random_once branches did not always become active.
      
      This patch switches net_get_random_once to the ordinary static_key
      api and thus places the kernel fast path in the path gcc considers
      unlikely. Microbenchmarks on Intel and AMD x86-64 showed that the
      unlikely path actually beats the likely path in terms of cycle cost,
      and that different nop patterns did not make much difference, so this
      switch should not be noticeable.
      
      Fixes: a48e4292 ("net: introduce new macro net_get_random_once")
      Reported-by: Tuomas Räsänen <tuomasjjrasanen@tjjr.fi>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 15 Jan, 2014 1 commit
  20. 11 Dec, 2013 1 commit
  21. 21 Nov, 2013 1 commit
    • net: rework recvmsg handler msg_name and msg_namelen logic · f3d33426
      Committed by Hannes Frederic Sowa
      This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
      set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
      to return msg_name to the user.
      
      This prevents numerous uninitialized memory leaks we had in the
      recvmsg handlers and makes it harder for new code to accidentally leak
      uninitialized memory.
      
      Optimize for the case where recvfrom is called with a NULL address.
      We don't need to copy the address at all, so set it to NULL before
      invoking the recvmsg handler. We can do so because all the recvmsg
      handlers must cope with the case where a plain read() is called on
      them. read() also sets msg_name to NULL.
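
      A sketch of the resulting handler-side convention (an illustrative
      handler with a simplified signature, not in-tree code):

      	static int example_recvmsg(struct socket *sock, struct msghdr *msg,
      				   size_t len, int flags)
      	{
      		/* msg->msg_name is NULL when the caller passed no address;
      		 * msg->msg_namelen arrives as 0 either way. */
      		DECLARE_SOCKADDR(struct sockaddr_in *, sin, msg->msg_name);

      		/* ... receive data into the iovec ... */

      		if (sin) {
      			sin->sin_family = AF_INET;
      			/* ... fill sin->sin_port and sin->sin_addr ... */
      			msg->msg_namelen = sizeof(*sin);
      		}
      		return 0;
      	}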
      
      Also document these changes in include/linux/net.h as suggested by David
      Miller.
      
      Changes since RFC:
      
      Set msg->msg_name = NULL if the user specified NULL in msg_name but a
      non-zero msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
      affect sendto as it would bail out earlier while trying to copy in the
      address. It also more naturally reflects the logic of the callers of
      verify_iovec.
      
      With this change in place I could remove "
      if (!uaddr || msg_sys->msg_namelen == 0)
      	msg->msg_name = NULL
      ".
      
      This change does not alter the user-visible error logic, as we ignore
      msg_namelen as long as msg_name is NULL.
      
      Also remove two unnecessary curly brackets in ___sys_recvmsg and change
      comments to netdev style.
      
      Cc: David Miller <davem@davemloft.net>
      Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 26 Oct, 2013 1 commit
  23. 22 Oct, 2013 1 commit
  24. 20 Oct, 2013 1 commit
    • net: introduce new macro net_get_random_once · a48e4292
      Committed by Hannes Frederic Sowa
      net_get_random_once is a new macro which handles the initialization
      of secret keys. It is possible to call it in the fast path: only the
      initialization depends on a spinlock and is rather slow. It should be
      used just before the key is needed, to delay the entropy extraction
      as late as possible and get better randomness. It returns true if the
      key got initialized.
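
      The macro's overall shape, slightly simplified from the patch:

      	#define net_get_random_once(buf, nbytes)			\
      		({							\
      			bool ___ret = false;				\
      			static bool ___done = false;			\
      			static struct static_key ___done_key =		\
      				{ .enabled = ATOMIC_INIT(0) };		\
      			if (!static_key_true(&___done_key))		\
      				___ret = __net_get_random_once(buf,	\
      							nbytes,		\
      							&___done,	\
      							&___done_key);	\
      			___ret;						\
      		})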
      
      The usage of static_keys for net_get_random_once is a bit uncommon,
      so it needs some further explanation of why this actually works:
      
      === The simple non-HAVE_JUMP_LABEL case ===
      Here we have no constraints on using static_key_(true|false) on keys
      initialized with STATIC_KEY_INIT_(FALSE|TRUE). So this path just
      expands in favor of the likely case that the initialization is
      already done. The key is initialized like this:
      
      ___done_key = { .enabled = ATOMIC_INIT(0) }
      
      The check
      
                      if (!static_key_true(&___done_key))                     \
      
      expands into (pseudo code)
      
                      if (!likely(___done_key > 0))
      
      , so we take the fast path as soon as ___done_key is increased from the
      helper function.
      
      === The HAVE_JUMP_LABEL case ===
      Here everything depends on patching jumps into the prepared NOPs,
      which is done in jump_label_init at boot-up time (from start_kernel).
      It is forbidden and dangerous to use net_get_random_once in functions
      which are called before that!
      
      At compilation time, NOPs are generated at the call sites of
      net_get_random_once. E.g. net/ipv6/inet6_hashtables.c:inet6_ehashfn
      (we need to call net_get_random_once twice in inet6_ehashfn, so two
      NOPs):
      
            71:       0f 1f 44 00 00          nopl   0x0(%rax,%rax,1)
            76:       0f 1f 44 00 00          nopl   0x0(%rax,%rax,1)
      
      Both will be patched to the actual jumps to the end of the function to
      call __net_get_random_once at boot time as explained above.
      
      arch_static_branch is optimized and inlined to return false, and it
      actually does return false while the NOP sits in the instruction
      stream. So in the fast case we get a "return false". But because we
      initialize ___done_key with (enabled != (entries & 1)), this call
      site gets patched up at boot, thus returning true. The final check
      looks like this:
      
                      if (!static_key_true(&___done_key))                     \
                              ___ret = __net_get_random_once(buf,             \
      
      expands to
      
                      if (!!static_key_false(&___done_key))                     \
                              ___ret = __net_get_random_once(buf,             \
      
      So we get true at boot time, and as soon as static_key_slow_inc is
      called on the key, it inverts the logic and returns false for the
      fast path. static_key_slow_inc changes the branch because the key was
      initialized with .enabled == 0. After static_key_slow_inc is called
      on the key, the branch is replaced with a nop again.
      
      === Misc: ===
      The helper defers the increment to a workqueue, so we don't have
      problems calling this code from atomic sections. A separate boolean
      (___done) guards the case where we enter net_get_random_once again
      before the increment has happened.
      
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  25. 27 Sep, 2013 1 commit
    • net.h/skbuff.h: Remove extern from function prototypes · 7965bd4d
      Committed by Joe Perches
      There is a mix of function prototypes with and without extern
      in the kernel sources.  Standardize on not using extern for
      function prototypes.
      
      Function prototypes don't need to be written with extern.
      extern is assumed by the compiler.  Its use is as unnecessary as
      using auto to declare automatic/local variables in a block.
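
      For example, with one prototype from net.h (before and after):

      	extern int sock_wake_async(struct socket *sk, int how, int band);
      	int sock_wake_async(struct socket *sk, int how, int band);
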
      Signed-off-by: Joe Perches <joe@perches.com>
  26. 05 Jun, 2013 1 commit
  27. 30 Apr, 2013 1 commit
  28. 13 Oct, 2012 1 commit
  29. 27 Sep, 2012 1 commit
  30. 23 Jul, 2012 1 commit
    • net: netprio_cgroup: rework update socket logic · 406a3c63
      Committed by John Fastabend
      Instead of updating the sk_cgrp_prioidx struct field on every send,
      this patch only updates the field when a task is moved via the cgroup
      infrastructure.

      This allows sockets that may be used by a kernel worker thread
      to be managed. For example, in the iscsi case today, a user can
      put iscsid in a netprio cgroup and control traffic will be sent
      with the correct sk_cgrp_prioidx value set, but as soon as data
      is sent the kernel worker thread issues a send and sk_cgrp_prioidx
      is updated with the kernel worker thread's value, which is the
      default case.
      
      It seems more correct to only update the field when the user
      explicitly sets it via control group infrastructure. This allows
      the users to manage sockets that may be used with other threads.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  31. 21 Jul, 2012 1 commit
    • tun: fix a crash bug and a memory leak · b09e786b
      Committed by Mikulas Patocka
      This patch fixes a crash in the path
      tun_chr_close -> netdev_run_todo -> tun_free_netdev -> sk_release_kernel ->
      sock_release -> iput(SOCK_INODE(sock))
      introduced by commit 1ab5ecb9.

      The problem is that this socket is embedded in struct tun_struct and
      has no inode, so iput is called on an invalid inode, which modifies
      invalid memory and optionally causes a crash.

      sock_release also decrements sockets_in_use; this causes a bug where
      the "sockets: used" field in /proc/*/net/sockstat keeps decreasing
      when creating and closing tun devices.
      
      This patch introduces a flag SOCK_EXTERNALLY_ALLOCATED that instructs
      sock_release to not free the inode and not decrement sockets_in_use,
      fixing both memory corruption and sockets_in_use underflow.
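
      A sketch of the resulting guard in sock_release() (simplified from
      the patch):

      	void sock_release(struct socket *sock)
      	{
      		/* ... release protocol state, wait queue, etc. ... */

      		if (test_bit(SOCK_EXTERNALLY_ALLOCATED, &sock->flags))
      			return;		/* embedded socket: no inode to put */

      		this_cpu_sub(sockets_in_use, 1);
      		if (!sock->file)
      			iput(SOCK_INODE(sock));
      	}
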
      
      It should be backported to 3.3 and 3.4 stable.
      Signed-off-by: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
      Cc: stable@kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
  32. 30 May, 2012 1 commit
    • net: add MODULE_ALIAS_NET_PF_PROTO_NAME · 2033e9bf
      Committed by Neil Horman
      The MODULE_ALIAS_NET_PF macro set is missing a variant that allows
      appending an arbitrary string to the net-pf-<x>-proto-<y> base. While
      MODULE_ALIAS_NET_PF_PROTO_NAME_TYPE allows appending a numerical
      type, we need to be able to append a generic string to support
      generic netlink families that have neither a fixed numerical protocol
      nor a type number.
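
      The new variant is a one-liner next to the existing macros; the usage
      example below is hypothetical:

      	#define MODULE_ALIAS_NET_PF_PROTO_NAME(pf, proto, name)		\
      		MODULE_ALIAS(__stringify(net-pf-) __stringify(pf)	\
      			     "-proto-" __stringify(proto) name)

      	/* e.g. in a generic netlink family module: */
      	MODULE_ALIAS_NET_PF_PROTO_NAME(PF_NETLINK, NETLINK_GENERIC,
      				       "-family-foo");
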
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: David Miller <davem@davemloft.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  33. 16 May, 2012 1 commit
    • net: Add net_ratelimited_function and net_<level>_ratelimited macros · 3a3bfb61
      Committed by Joe Perches
      __ratelimit() can be considered an inverted bool test because
      it returns true when not ratelimited.  Several tests in the
      kernel tree use this __ratelimit() function incorrectly.
      
      No net_ratelimit uses are incorrect currently though.
      
      Most uses of net_ratelimit are to log something via printk or
      pr_<level>.
      
      In order to minimize the uses of net_ratelimit, and to start
      standardizing the code style used for __ratelimit() and net_ratelimit(),
      add a net_ratelimited_function() macro and net_<level>_ratelimited()
      logging macros similar to pr_<level>_ratelimited that use the global
      net_ratelimit instead of a static per call site "struct ratelimit_state".
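
      The core of it, roughly:

      	#define net_ratelimited_function(function, ...)			\
      	do {								\
      		if (net_ratelimit())					\
      			function(__VA_ARGS__);				\
      	} while (0)

      	#define net_info_ratelimited(fmt, ...)				\
      		net_ratelimited_function(pr_info, fmt, ##__VA_ARGS__)
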
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  34. 22 Feb, 2012 1 commit
    • sock: Introduce the SO_PEEK_OFF sock option · ef64a54f
      Committed by Pavel Emelyanov
      This option specifies where to start MSG_PEEK-ing queue data from.
      When set to a negative value, MSG_PEEK works as usual -- it always
      peeks from the head of the queue.
      
      When some bytes are peeked from the queue and the peeking offset is
      non-negative, it is moved forward so that the next peek returns the
      next portion of data.
      
      When a non-peeking recvmsg occurs and the peeking offset is
      non-negative, it is moved backward so that the next peek still peeks
      the proper data (i.e. the data that would have been picked if there
      had been no non-peeking recv in between).
      
      The offset is set using a per-proto operation to let the protocol
      handle the locking issues and to check whether the peeking-offset
      feature is supported by the protocol the socket belongs to.
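
      A hypothetical userspace sketch (fd is an open socket, buf a scratch
      buffer):

      	#include <sys/socket.h>

      	static void peek_walk(int fd, char *buf, size_t len)
      	{
      		int off = 0;	/* non-negative: enable the peek offset */

      		setsockopt(fd, SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off));
      		recv(fd, buf, len, MSG_PEEK);	/* first chunk; offset advances */
      		recv(fd, buf, len, MSG_PEEK);	/* next chunk */
      		recv(fd, buf, len, 0);		/* consumes; offset moves back */
      	}
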
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  35. 28 May, 2011 1 commit
  36. 06 May, 2011 1 commit
    • net: Add sendmmsg socket system call · 228e548e
      Committed by Anton Blanchard
      This patch adds a multiple message send syscall and is the send
      version of the existing recvmmsg syscall. This is heavily
      based on the patch by Arnaldo that added recvmmsg.
      
      I wrote a microbenchmark to test the performance gains of using
      this new syscall:
      
      http://ozlabs.org/~anton/junkcode/sendmmsg_test.c
      
      The test was run on a ppc64 box with a 10 Gbit network card. The
      benchmark can send both UDP and RAW ethernet packets.
      
      64B UDP
      
      batch   pkts/sec
      1       804570
      2       872800 (+ 8 %)
      4       916556 (+14 %)
      8       939712 (+17 %)
      16      952688 (+18 %)
      32      956448 (+19 %)
      64      964800 (+20 %)
      
      64B raw socket
      
      batch   pkts/sec
      1       1201449
      2       1350028 (+12 %)
      4       1461416 (+22 %)
      8       1513080 (+26 %)
      16      1541216 (+28 %)
      32      1553440 (+29 %)
      64      1557888 (+30 %)
      
      We see a 20% improvement in throughput on UDP send and 30%
      on raw socket send.
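
      A hypothetical usage sketch, via the glibc wrapper that was added
      later (names here are illustrative):

      	#define _GNU_SOURCE
      	#include <sys/socket.h>

      	/* fd: a connected UDP socket; each msgs[i].msg_hdr already points
      	 * at its payload iovec. */
      	static int send_batch(int fd, struct mmsghdr *msgs, unsigned int n)
      	{
      		/* returns how many messages were sent, or -1 on error */
      		return sendmmsg(fd, msgs, n, 0);
      	}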
      
      [ Add sparc syscall entries. -DaveM ]
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  37. 23 Feb, 2011 1 commit
  38. 07 Jan, 2011 1 commit