1. 11 Feb 2010, 1 commit
  2. 27 Jan 2010, 21 commits
  3. 09 Dec 2009, 2 commits
  4. 26 Nov 2009, 1 commit
  5. 12 Nov 2009, 1 commit
    • net/atm: move all compat_ioctl handling to atm/ioctl.c · 805003a4
      Committed by Arnd Bergmann
      We have two implementations of the compat_ioctl handling for ATM, the
      one that we have had for ages in fs/compat_ioctl.c and the one added to
      net/atm/ioctl.c by David Woodhouse. Unfortunately, both versions are
      incomplete, and in practice we use a very confusing combination of the
      two.
      
      For ioctl numbers that have the same identifier on 32- and 64-bit systems,
      we go directly through the compat_ioctl socket operation; for those that
      differ, we do a conversion in fs/compat_ioctl.c.
      
      This patch moves both variants into the vcc_compat_ioctl() function,
      while preserving the current behaviour. It also kills off the COMPATIBLE_IOCTL
      definitions that we never use here.
      Doing it this way is clearly not a good solution, but I hope it is a
      step in the right direction, so that someone is able to clean up this
      mess for real.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      805003a4
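      A rough sketch of the dispatch pattern the commit describes, not the actual
      code from net/atm/ioctl.c: the names my_vcc_compat_ioctl(), my_vcc_ioctl(),
      MY_GETNAMES and MY_GETNAMES32 are illustrative placeholders. Commands that
      are identical on 32- and 64-bit kernels fall straight through to the native
      handler; the few that differ are remapped first.

      /* Hypothetical sketch of a consolidated compat_ioctl handler. */
      static int my_vcc_compat_ioctl(struct socket *sock, unsigned int cmd,
                                     unsigned long arg)
      {
              switch (cmd) {
              case MY_GETNAMES32:
                      /* 32-bit layout differs: translate the command (and, in
                       * real code, the argument structure) before dispatching.
                       */
                      cmd = MY_GETNAMES;
                      break;
              default:
                      /* identical on 32-bit and 64-bit kernels: pass through */
                      break;
              }
              return my_vcc_ioctl(sock, cmd, arg);
      }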
  6. 06 Nov 2009, 1 commit
  7. 13 Oct 2009, 1 commit
    • net: Generalize socket rx gap / receive queue overflow cmsg · 3b885787
      Committed by Neil Horman
      Create a new socket-level option to report the number of queue overflows
      
      Recently I augmented the AF_PACKET protocol to report the number of frames lost
      on the socket receive queue between any two enqueued frames.  This value was
      exported via a SOL_PACKET level cmsg.  After I completed that work it was
      requested that this feature be generalized so that any datagram-oriented socket
      could make use of this option.  As such I've created this patch.  It creates a
      new SOL_SOCKET level option called SO_RXQ_OVFL, which when enabled exports a
      SOL_SOCKET level cmsg that reports the number of times the sk_receive_queue
      overflowed between any two given frames.  It also augments the AF_PACKET
      protocol to take advantage of this new feature (as it previously did not touch
      sk->sk_drops, which this patch uses to record the overflow count).  Tested
      successfully by me.
      
      Notes:
      
      1) Unlike my previous patch, this patch simply records the sk_drops value, which
      is not a number of drops between packets, but rather a total number of drops.
      Deltas must be computed in user space.
      
      2) While this patch currently works with datagram-oriented protocols, it will
      also be accepted by non-datagram-oriented protocols. I'm not sure if that's
      agreeable to everyone, but my argument in favor of doing so is that, for those
      protocols to which this option doesn't apply, sk_drops will always be zero,
      and reporting no drops on a receive queue that isn't used by those
      non-participating protocols seems reasonable to me.  This also saves us having
      to code a per-protocol opt-in mechanism.
      
      3) This applies cleanly to net-next, assuming that commit
      97775007 (my AF_PACKET cmsg patch) is reverted.
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3b885787
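      For reference, a minimal userspace sketch of consuming this cmsg on a UDP
      socket, assuming a kernel with this patch applied. Error handling is
      omitted; the port number and the fallback #define of SO_RXQ_OVFL are
      illustrative only.

      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>
      #include <sys/socket.h>
      #include <netinet/in.h>

      #ifndef SO_RXQ_OVFL
      #define SO_RXQ_OVFL 40          /* value from asm-generic/socket.h */
      #endif

      int main(void)
      {
              int fd = socket(AF_INET, SOCK_DGRAM, 0);
              int on = 1;

              /* ask the kernel to attach the drop counter to each datagram */
              setsockopt(fd, SOL_SOCKET, SO_RXQ_OVFL, &on, sizeof(on));

              struct sockaddr_in addr = {
                      .sin_family = AF_INET,
                      .sin_port = htons(9999),        /* arbitrary test port */
                      .sin_addr.s_addr = htonl(INADDR_ANY),
              };
              bind(fd, (struct sockaddr *)&addr, sizeof(addr));

              char buf[2048];
              char cbuf[CMSG_SPACE(sizeof(uint32_t))];
              struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
              struct msghdr msg = {
                      .msg_iov = &iov, .msg_iovlen = 1,
                      .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
              };

              for (;;) {
                      ssize_t len = recvmsg(fd, &msg, 0);
                      if (len < 0)
                              break;
                      for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm;
                           cm = CMSG_NXTHDR(&msg, cm)) {
                              if (cm->cmsg_level == SOL_SOCKET &&
                                  cm->cmsg_type == SO_RXQ_OVFL) {
                                      uint32_t drops;

                                      memcpy(&drops, CMSG_DATA(cm), sizeof(drops));
                                      /* cumulative counter: compute deltas here */
                                      printf("%zd bytes, drops: %u\n", len, drops);
                              }
                      }
                      msg.msg_controllen = sizeof(cbuf);      /* reset for reuse */
              }
              return 0;
      }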
  8. 07 Oct 2009, 1 commit
  9. 01 Oct 2009, 1 commit
  10. 03 Sep 2009, 1 commit
  11. 02 Sep 2009, 1 commit
  12. 01 Sep 2009, 1 commit
  13. 06 Aug 2009, 1 commit
  14. 10 Jul 2009, 1 commit
    • net: adding memory barrier to the poll and receive callbacks · a57de0b4
      Committed by Jiri Olsa
      Add a memory barrier after the poll_wait function, paired with the
      receive callbacks. Add the functions sock_poll_wait and sk_has_sleeper
      to wrap the memory barrier.

      Without the memory barrier, the following race can happen.
      The race fires when the two code paths below meet while the tp->rcv_nxt
      and __add_wait_queue updates stay in the CPU caches.
      
      CPU1                         CPU2
      
      sys_select                   receive packet
        ...                        ...
        __add_wait_queue           update tp->rcv_nxt
        ...                        ...
        tp->rcv_nxt check          sock_def_readable
        ...                        {
        schedule                      ...
                                      if (sk->sk_sleep && waitqueue_active(sk->sk_sleep))
                                              wake_up_interruptible(sk->sk_sleep)
                                      ...
                                   }
      
      If there were no caches, the code would work fine, since the wait_queue
      and rcv_nxt accesses are ordered opposite to each other.
      
      Meaning that once tp->rcv_nxt is updated by CPU2, CPU1 has either already
      passed the tp->rcv_nxt check and sleeps, or it will see the new value of
      tp->rcv_nxt and return with the new data mask.
      In both cases the process (CPU1) has already been added to the wait queue,
      so the waitqueue_active (CPU2) call cannot miss it and will wake up CPU1.
      
      The bad case is when the __add_wait_queue changes done by CPU1 stay in its
      cache, and so does the tp->rcv_nxt update on the CPU2 side. CPU1 will then
      end up calling schedule and sleeping forever if there is no more data on
      the socket.
      
      Calls to poll_wait in the following modules were omitted:
      	net/bluetooth/af_bluetooth.c
      	net/irda/af_irda.c
      	net/irda/irnet/irnet_ppp.c
      	net/mac80211/rc80211_pid_debugfs.c
      	net/phonet/socket.c
      	net/rds/af_rds.c
      	net/rfkill/core.c
      	net/sunrpc/cache.c
      	net/sunrpc/rpc_pipe.c
      	net/tipc/socket.c
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a57de0b4
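      A simplified sketch of the helper pair this commit describes; the exact
      definitions live in include/net/sock.h in the patch and may differ in
      detail. The point is the paired smp_mb() calls: one on the poll side
      after __add_wait_queue, one on the wakeup side before the
      waitqueue_active check.

      /* Sketch of the wrappers: the barrier in sock_poll_wait() orders
       * __add_wait_queue() before the tp->rcv_nxt check on the poll side,
       * and the barrier in sk_has_sleeper() orders the tp->rcv_nxt update
       * before the waitqueue_active() check on the wakeup side.
       */
      static inline int sk_has_sleeper(struct sock *sk)
      {
              smp_mb();       /* pairs with the barrier in sock_poll_wait() */
              return sk->sk_sleep && waitqueue_active(sk->sk_sleep);
      }

      static inline void sock_poll_wait(struct file *filp,
                                        wait_queue_head_t *wait_address,
                                        poll_table *p)
      {
              if (p && wait_address) {
                      poll_wait(filp, wait_address, p);
                      smp_mb();       /* pairs with the barrier in sk_has_sleeper() */
              }
      }

      Receive-side callbacks such as sock_def_readable then test
      sk_has_sleeper(sk) instead of checking sk->sk_sleep and
      waitqueue_active(sk->sk_sleep) directly before calling
      wake_up_interruptible, as shown in the race diagram above.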
  15. 06 Jul 2009, 2 commits
  16. 18 Jun 2009, 1 commit
  17. 13 Jun 2009, 1 commit
  18. 12 Jun 2009, 1 commit