1. 12 September 2009, 1 commit
    • net: unix: fix sending fds in multiple buffers · 8ba69ba6
      Authored by Miklos Szeredi
      Kalle Olavi Niemitalo reported that:
      
        "..., when one process calls sendmsg once to send 43804 bytes of
        data and one file descriptor, and another process then calls recvmsg
        three times to receive the 16032+16032+11740 bytes, each of those
        recvmsg calls returns the file descriptor in the ancillary data.  I
        confirmed this with strace.  The behaviour differs from Linux
        2.6.26, where reportedly only one of those recvmsg calls (I think
        the first one) returned the file descriptor."
      
      This bug was introduced by a patch from me titled "net: unix: fix inflight
      counting bug in garbage collector", commit 6209344f.
      
      And the reason is, quoting Kalle:
      
        "Before your patch, unix_attach_fds() would set scm->fp = NULL, so
        that if the loop in unix_stream_sendmsg() ran multiple iterations,
        it could not call unix_attach_fds() again.  But now,
        unix_attach_fds() leaves scm->fp unchanged, and I think this causes
        it to be called multiple times and duplicate the same file
        descriptors to each struct sk_buff."
      
      Fix this by introducing a flag that is cleared at the start and set
      when the fds are attached to the first buffer.  The resulting code
      should work equivalently to the one in 2.6.26.  (A small userspace
      demonstration of the expected behaviour follows this entry.)
      Reported-by: Kalle Olavi Niemitalo <kon@iki.fi>
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Signed-off-by: David S. Miller <davem@davemloft.net>
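
      A small userspace program along the following lines (written for this
      changelog entry, not part of the patch) reproduces the reported
      scenario: a single sendmsg() carrying 43804 bytes plus one file
      descriptor, followed by three recvmsg() calls with smaller buffers.
      On a fixed kernel only the first recvmsg() should report SCM_RIGHTS
      ancillary data; with the bug every chunk carries a copy of the
      descriptor.

      #include <stdio.h>
      #include <string.h>
      #include <sys/types.h>
      #include <sys/socket.h>
      #include <sys/uio.h>

      int main(void)
      {
          int sv[2];

          if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
              perror("socketpair");
              return 1;
          }

          /* Sender: one sendmsg() with 43804 bytes of payload and one fd
           * (stdin, fd 0) passed via SCM_RIGHTS. */
          static char payload[43804];
          memset(payload, 'x', sizeof(payload));

          struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) };
          union { struct cmsghdr align; char buf[CMSG_SPACE(sizeof(int))]; } cbuf;
          struct msghdr msg = {
              .msg_iov = &iov, .msg_iovlen = 1,
              .msg_control = cbuf.buf, .msg_controllen = sizeof(cbuf.buf),
          };
          struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
          int fd_to_send = 0;

          cmsg->cmsg_level = SOL_SOCKET;
          cmsg->cmsg_type = SCM_RIGHTS;
          cmsg->cmsg_len = CMSG_LEN(sizeof(int));
          memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

          if (sendmsg(sv[0], &msg, 0) < 0) {
              perror("sendmsg");
              return 1;
          }

          /* Receiver: drain the stream in three smaller reads and report
           * which of them carried ancillary data.  Descriptors received via
           * SCM_RIGHTS are not closed here; this is only a demonstration. */
          for (int i = 0; i < 3; i++) {
              char rbuf[16032];
              struct iovec riov = { .iov_base = rbuf, .iov_len = sizeof(rbuf) };
              union { struct cmsghdr align; char buf[CMSG_SPACE(sizeof(int))]; } rc;
              struct msghdr rmsg = {
                  .msg_iov = &riov, .msg_iovlen = 1,
                  .msg_control = rc.buf, .msg_controllen = sizeof(rc.buf),
              };
              ssize_t n = recvmsg(sv[1], &rmsg, 0);

              printf("recvmsg %d: %zd bytes, fd attached: %s\n",
                     i + 1, n, CMSG_FIRSTHDR(&rmsg) ? "yes" : "no");
          }
          return 0;
      }
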
  2. 10 July 2009, 1 commit
    • net: adding memory barrier to the poll and receive callbacks · a57de0b4
      Authored by Jiri Olsa
      Add a memory barrier after the poll_wait call, paired with the one in
      the receive callbacks.  Add the functions sock_poll_wait and
      sk_has_sleeper to wrap the memory barrier (a simplified sketch of the
      pairing follows this entry).
      
      Without the memory barrier, the following race can happen.  The race
      fires when the two code paths below interleave and the tp->rcv_nxt and
      __add_wait_queue updates stay in the CPU caches.
      
      CPU1                         CPU2
      
      sys_select                   receive packet
        ...                        ...
        __add_wait_queue           update tp->rcv_nxt
        ...                        ...
        tp->rcv_nxt check          sock_def_readable
        ...                        {
        schedule                      ...
                                      if (sk->sk_sleep && waitqueue_active(sk->sk_sleep))
                                              wake_up_interruptible(sk->sk_sleep)
                                      ...
                                   }
      
      If there were no caching, the code would work correctly, since the
      wait_queue addition and the rcv_nxt update/check are performed in
      opposite order on the two CPUs.
      
      Meaning that once tp->rcv_nxt is updated by CPU2, CPU1 has either
      already passed the tp->rcv_nxt check and gone to sleep, or it will see
      the new value of tp->rcv_nxt and return with the new data mask.
      In both cases the process (CPU1) has already been added to the wait
      queue, so the waitqueue_active call on CPU2 cannot miss it and will
      wake up CPU1.
      
      The bad case is when the __add_wait_queue changes done by CPU1 stay in
      its cache, and so does the tp->rcv_nxt update on the CPU2 side.  CPU1
      will then end up calling schedule and sleeping forever if no more data
      arrives on the socket.
      
      Calls to poll_wait in the following modules were omitted:
      	net/bluetooth/af_bluetooth.c
      	net/irda/af_irda.c
      	net/irda/irnet/irnet_ppp.c
      	net/mac80211/rc80211_pid_debugfs.c
      	net/phonet/socket.c
      	net/rds/af_rds.c
      	net/rfkill/core.c
      	net/sunrpc/cache.c
      	net/sunrpc/rpc_pipe.c
      	net/tipc/socket.c
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
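
      The two helpers named in the commit pair a memory barrier on each side.
      A simplified sketch of the idea, modelled on the description above
      rather than quoted verbatim from the patch:

      /* Simplified sketch, not the verbatim patch. */
      static inline int sk_has_sleeper(struct sock *sk)
      {
          /* Pairs with the barrier in sock_poll_wait(): the state update
           * (e.g. tp->rcv_nxt) done before calling the wakeup must be
           * visible before the wait queue is tested. */
          smp_mb();
          return sk->sk_sleep && waitqueue_active(sk->sk_sleep);
      }

      static inline void sock_poll_wait(struct file *filp,
                                        wait_queue_head_t *wait_address,
                                        poll_table *p)
      {
          if (p && wait_address) {
              poll_wait(filp, wait_address, p);
              /* Pairs with the barrier in sk_has_sleeper(): the
               * __add_wait_queue() done by poll_wait() must be visible
               * before the caller re-checks tp->rcv_nxt and schedules. */
              smp_mb();
          }
      }

      /* The receive callback (e.g. sock_def_readable) then does:
       *
       *     if (sk_has_sleeper(sk))
       *         wake_up_interruptible(sk->sk_sleep);
       */
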
  3. 18 June 2009, 1 commit
  4. 01 April 2009, 1 commit
  5. 27 February 2009, 1 commit
  6. 01 January 2009, 1 commit
  7. 27 November 2008, 1 commit
  8. 26 November 2008, 1 commit
  9. 24 November 2008, 2 commits
  10. 20 November 2008, 2 commits
  11. 17 November 2008, 3 commits
  12. 14 November 2008, 1 commit
  13. 10 November 2008, 1 commit
    • net: unix: fix inflight counting bug in garbage collector · 6209344f
      Authored by Miklos Szeredi
      Previously I assumed that the receive queues of candidates don't
      change during the GC.  This is only half true: nothing can be received
      from the queues (see the comment in unix_gc()), but buffers could still
      be added through the other half of the socket pair, which may still
      have file descriptors referring to it.
      
      This can result in inc_inflight_move_tail() erroneously increasing the
      "inflight" counter for a unix socket for which dec_inflight() wasn't
      previously called.  This in turn can trigger the "BUG_ON(total_refs <
      inflight_refs)" in a later garbage collection run.
      
      Fix this by only manipulating the "inflight" counter for sockets which
      are candidates themselves.  Duplicating the file references in
      unix_attach_fds() is also needed to prevent a socket from becoming a GC
      candidate while the skb that contains it is not yet queued (a rough
      sketch of both changes follows this entry).
      Reported-by: Andrea Bittau <a.bittau@cs.ucl.ac.uk>
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      CC: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
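
      A rough sketch of the two parts of the fix, written to match the
      description above (helper and field names are approximations, not the
      verbatim patch):

      /* 1) Only adjust the inflight counter of sockets that are GC
       *    candidates themselves, so buffers queued through the other half
       *    of a socket pair cannot skew the accounting. */
      static void inc_inflight_move_tail(struct unix_sock *u)
      {
          if (!u->gc_candidate)        /* leave non-candidates alone */
              return;
          atomic_inc(&u->inflight);
          list_move_tail(&u->link, &gc_candidates);
      }

      /* 2) Duplicate the file references when attaching fds to an skb, so a
       *    socket cannot become a GC candidate while the skb that carries it
       *    is not queued anywhere yet. */
      static int unix_attach_fds(struct scm_cookie *scm, struct sk_buff *skb)
      {
          int i;

          UNIXCB(skb).fp = scm_fp_dup(scm->fp);
          if (!UNIXCB(skb).fp)
              return -ENOMEM;
          for (i = scm->fp->count - 1; i >= 0; i--)
              unix_inflight(scm->fp->fp[i]);
          return 0;
      }
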
  14. 02 November 2008, 2 commits
  15. 23 October 2008, 1 commit
  16. 14 October 2008, 1 commit
  17. 27 July 2008, 1 commit
  18. 26 July 2008, 1 commit
  19. 28 June 2008, 1 commit
    • af_unix: fix 'poll for write'/connected DGRAM sockets · ec0d215f
      Authored by Rainer Weikusat
      For n:1 'datagram connections' (eg /dev/log), the unix_dgram_sendmsg
      routine implements a form of receiver-imposed flow control by
      comparing the length of the receive queue of the 'peer socket' with
      the max_ack_backlog value stored in the corresponding sock structure,
      either blocking the thread which caused the send-routine to be called
      or returning EAGAIN. This routine is used by both SOCK_DGRAM and
      SOCK_SEQPACKET sockets. The poll implementation for these socket types
      is datagram_poll from core/datagram.c. A socket is deemed to be
      writeable by this routine when the memory presently consumed by
      datagrams owned by it is less than the configured socket send buffer
      size. This is always wrong for PF_UNIX non-stream sockets connected to
      server sockets dealing with (potentially) multiple clients if the
      abovementioned receive queue is currently considered to be full.
      'poll' will then return, indicating that the socket is writeable, but
      a subsequent write results in EAGAIN, effectively causing a (usual)
      application to 'poll for writeability by repeated send requests with
      O_NONBLOCK set' until it has consumed its time quantum.
      
      The change below uses a suitably modified variant of the datagram_poll
      routines for both types of PF_UNIX sockets, which tests whether the
      recv-queue of the peer a socket is connected to is presently
      considered to be 'full' as part of the 'is this socket
      writeable' check. The socket being polled is additionally
      put onto the peer_wait wait queue associated with its peer, because the
      unix_dgram_recvmsg routine does a wake-up on this queue after a
      datagram was received and the 'other wakeup call' is done implicitly
      as part of skb destruction; this means a process blocked in poll
      because of a full peer receive queue could otherwise sleep forever
      if no datagram owned by its socket was already sitting on this queue.
      Part of this change is a small (inline) helper routine named
      'unix_recvq_full', which consolidates the actual testing code
      (previously in three different places) into a single location
      (sketched after this entry).
      Signed-off-by: Rainer Weikusat <rweikusat@mssgmbh.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
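
      The consolidated test and the peer-aware writability check described
      above boil down to roughly the following.  unix_recvq_full matches the
      helper named in the commit message; unix_dgram_poll_writable is a
      hypothetical wrapper used here only for illustration (the real check
      sits inside unix_dgram_poll()):

      /* Single home for the "is the receive queue full" test that used to
       * be open-coded in three places. */
      static inline int unix_recvq_full(struct sock const *sk)
      {
          return skb_queue_len(&sk->sk_receive_queue) > sk->sk_max_ack_backlog;
      }

      /* Hypothetical wrapper for illustration: a connected non-stream socket
       * is only reported writable when its peer's receive queue is not full,
       * and the poller is also parked on the peer's peer_wait queue so the
       * receive path can wake it when space frees up. */
      static unsigned int unix_dgram_poll_writable(struct file *file,
                                                   struct sock *sk,
                                                   poll_table *wait)
      {
          struct sock *other = unix_peer_get(sk);    /* takes a reference */
          int writable = 1;

          if (other) {
              poll_wait(file, &unix_sk(other)->peer_wait, wait);
              if (unix_recvq_full(other))
                  writable = 0;
              sock_put(other);
          }
          return writable ? (POLLOUT | POLLWRNORM | POLLWRBAND) : 0;
      }
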
  20. 18 June 2008, 1 commit
    • af_unix: fix 'poll for write'/ connected DGRAM sockets · 3c73419c
      Authored by Rainer Weikusat
      The unix_dgram_sendmsg routine implements a (somewhat crude)
      form of receiver-imposed flow control by comparing the length of the
      receive queue of the 'peer socket' with the max_ack_backlog value
      stored in the corresponding sock structure, either blocking
      the thread which caused the send-routine to be called or returning
      EAGAIN (a sketch of this backlog check follows this entry). This
      routine is used by both SOCK_DGRAM and SOCK_SEQPACKET sockets. The
      poll implementation for these socket types is datagram_poll from
      core/datagram.c. A socket is deemed to be writeable by this routine
      when the memory presently consumed by datagrams owned by it is less
      than the configured socket send buffer size. This is always wrong for
      connected PF_UNIX non-stream sockets when the abovementioned receive
      queue is currently considered to be full. 'poll' will then return,
      indicating that the socket is writeable, but a subsequent write
      results in EAGAIN, effectively causing a (usual) application to 'poll
      for writeability by repeated send requests with O_NONBLOCK set' until
      it has consumed its time quantum.
      
      The change below uses a suitably modified variant of the datagram_poll
      routines for both types of PF_UNIX sockets, which tests whether the
      recv-queue of the peer a socket is connected to is presently
      considered to be 'full' as part of the 'is this socket
      writeable' check. The socket being polled is additionally
      put onto the peer_wait wait queue associated with its peer, because the
      unix_dgram_sendmsg routine does a wake-up on this queue after a
      datagram was received and the 'other wakeup call' is done implicitly
      as part of skb destruction; this means a process blocked in poll
      because of a full peer receive queue could otherwise sleep forever
      if no datagram owned by its socket was already sitting on this queue.
      Part of this change is a small (inline) helper routine named
      'unix_recvq_full', which consolidates the actual testing code
      (previously in three different places) into a single location.
      Signed-off-by: Rainer Weikusat <rweikusat@mssgmbh.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
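
      For context, the receiver-imposed flow control that this commit and its
      later rework (ec0d215f above) refer to is, in essence, the following
      check in the datagram send path (an abridged fragment, not the verbatim
      code):

      /* In unix_dgram_sendmsg(): if the peer's receive queue already holds
       * more than sk_max_ack_backlog datagrams, either fail with EAGAIN
       * (non-blocking send) or sleep on the peer's peer_wait queue. */
      if (unix_recvq_full(other)) {
          if (!timeo) {                    /* non-blocking send */
              err = -EAGAIN;
              goto out_unlock;
          }

          timeo = unix_wait_for_peer(other, timeo);

          err = sock_intr_errno(timeo);
          if (signal_pending(current))
              goto out_free;

          goto restart;
      }
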
  21. 12 June 2008, 1 commit
  22. 24 April 2008, 1 commit
  23. 19 April 2008, 1 commit
  24. 13 April 2008, 1 commit
  25. 26 March 2008, 3 commits
  26. 06 March 2008, 1 commit
  27. 15 February 2008, 2 commits
  28. 29 January 2008, 5 commits