1. 04 Aug 2018, 1 commit
    • af_unix: ensure POLLOUT on remote close() for connected dgram socket · 51f7e951
      Jason Baron authored
      Applications use -ECONNREFUSED as returned from write() in order to
      determine that a socket should be closed. However, when using connected
      dgram unix sockets in a poll/write loop, a final POLLOUT event can be
      missed when the remote end closes. Thus, the poll is stuck forever:
      
                thread 1 (client)                   thread 2 (server)
      
      connect() to server
      write() returns -EAGAIN
      unix_dgram_poll()
       -> unix_recvq_full() is true
                                             close()
                                              ->unix_release_sock()
                                               ->wake_up_interruptible_all()
      unix_dgram_poll() (due to the
           wake_up_interruptible_all)
       -> unix_recvq_full() still is true
                                               ->free all skbs
      
      Now thread 1 is stuck and will not receive any more wakeups. In this
      case, when thread 1 gets the -EAGAIN, it has not queued any skbs
      otherwise the 'free all skbs' step would in fact cause a wakeup and
      a POLLOUT return. So the race here is probably fairly rare because
      it means there are no skbs that thread 1 queued and that thread 1
      schedules before the 'free all skbs' step.
      
      This issue was reported as a hang when /dev/log is closed.
      
      The fix is to signal POLLOUT if the socket is marked as SOCK_DEAD, which
      means a subsequent write() will get -ECONNREFUSED (see the sketch after
      this entry).
      Reported-by: Ian Lance Taylor <iant@golang.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rainer Weikusat <rweikusat@mobileactivedefense.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Jason Baron <jbaron@akamai.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      51f7e951
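      A minimal sketch of the idea (not the literal diff): unix_dgram_poll() in
      net/unix/af_unix.c is where the writability decision is made, and a dead
      peer must not suppress POLLOUT:

          /* Only treat the socket as non-writable when the peer's receive
           * queue is full AND the peer is still alive; a SOCK_DEAD peer must
           * report POLLOUT so the next write() returns -ECONNREFUSED.
           */
          if (unix_recvq_full(other) && !sock_flag(other, SOCK_DEAD))
                  writable = 0;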
  2. 31 Jul 2018, 1 commit
  3. 29 Jun 2018, 1 commit
    • Revert changes to convert to ->poll_mask() and aio IOCB_CMD_POLL · a11e1d43
      Linus Torvalds authored
      The poll() changes were not well thought out, and completely
      unexplained.  They also caused a huge performance regression, because
      "->poll()" was no longer a trivial file operation that just called down
      to the underlying file operations, but instead did at least two indirect
      calls.
      
      Indirect calls are sadly slow now with the Spectre mitigation, but the
      performance problem could at least be largely mitigated by changing the
      "->get_poll_head()" operation to just have a per-file-descriptor pointer
      to the poll head instead.  That gets rid of one of the new indirections.
      
      But that doesn't fix the new complexity that is completely unwarranted
      for the regular case.  The (undocumented) reason for the poll() changes
      was some alleged AIO poll race fixing, but we don't make the common case
      slower and more complex for some uncommon special case, so this all
      really needs way more explanations and most likely a fundamental
      redesign.
      
      [ This revert is a revert of about 30 different commits, not reverted
        individually because that would just be unnecessarily messy  - Linus ]
      
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a11e1d43
  4. 26 May 2018, 1 commit
  5. 16 May 2018, 1 commit
  6. 04 Apr 2018, 1 commit
  7. 28 Mar 2018, 1 commit
  8. 14 Feb 2018, 1 commit
  9. 13 Feb 2018, 2 commits
    • net: Convert unix_net_ops · 167f7ac7
      Kirill Tkhai authored
      These pernet_operations just create and destroy /proc and sysctl
      entries, and are not touched by foreign pernet_operations.

      So we are able to make them async (see the sketch after this entry).
      Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Acked-by: Andrei Vagin <avagin@virtuozzo.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      167f7ac7
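      A minimal sketch of the converted registration, assuming the conversion is
      done by setting the async flag on pernet_operations as in the other patches
      of this series (layout abbreviated):

          static struct pernet_operations unix_net_ops = {
                  .init  = unix_net_init,
                  .exit  = unix_net_exit,
                  .async = true,
          };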
    • net: make getname() functions return length rather than use int* parameter · 9b2c45d4
      Denys Vlasenko authored
      Changes since v1:
      Added changes in these files:
          drivers/infiniband/hw/usnic/usnic_transport.c
          drivers/staging/lustre/lnet/lnet/lib-socket.c
          drivers/target/iscsi/iscsi_target_login.c
          drivers/vhost/net.c
          fs/dlm/lowcomms.c
          fs/ocfs2/cluster/tcp.c
          security/tomoyo/network.c
      
      Before:
      All these functions either return a negative error indicator,
      or store the length of the sockaddr into an "int *socklen" parameter
      and return zero on success.

      The "int *socklen" parameter is awkward. For example, if the caller does
      not care, it still needs to provide on-stack storage for a value
      it does not need.

      None of the many FOO_getname() functions of various protocols
      ever used the old value of *socklen. They always just overwrite it.

      This change drops this parameter and makes all these functions, on success,
      return the length of the sockaddr. It is always >= 0 and can be
      differentiated from an error (the new convention is sketched after this
      entry).
      
      Tests in callers are changed from "if (err)" to "if (err < 0)", where needed.
      
      rpc_sockname() lost "int buflen" parameter, since its only use was
      to be passed to kernel_getsockname() as &buflen and subsequently
      not used in any way.
      
      Userspace API is not changed.
      
          text    data     bss      dec     hex filename
      30108430 2633624  873672 33615726 200ef6e vmlinux.before.o
      30108109 2633612  873672 33615393 200ee21 vmlinux.o
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      CC: David S. Miller <davem@davemloft.net>
      CC: linux-kernel@vger.kernel.org
      CC: netdev@vger.kernel.org
      CC: linux-bluetooth@vger.kernel.org
      CC: linux-decnet-user@lists.sourceforge.net
      CC: linux-wireless@vger.kernel.org
      CC: linux-rdma@vger.kernel.org
      CC: linux-sctp@vger.kernel.org
      CC: linux-nfs@vger.kernel.org
      CC: linux-x25@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9b2c45d4
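      A sketch of how a typical in-kernel caller changes; kernel_getsockname()
      follows the same convention as the ->getname() methods after this patch,
      and the wrapper function below is purely illustrative:

          static int example_local_addr_len(struct socket *sock)
          {
                  struct sockaddr_storage addr;
                  int len;

                  /* before: err = kernel_getsockname(sock,
                   *                 (struct sockaddr *)&addr, &len);
                   *         if (err)
                   *                 return err;
                   */
                  len = kernel_getsockname(sock, (struct sockaddr *)&addr);
                  if (len < 0)
                          return len;     /* negative errno on failure */

                  return len;             /* length of the sockaddr on success */
          }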
  10. 12 Feb 2018, 1 commit
    • vfs: do bulk POLL* -> EPOLL* replacement · a9a08845
      Linus Torvalds authored
      This is the mindless scripted replacement of kernel use of POLL*
      variables as described by Al, done by this script:
      
          for V in IN OUT PRI ERR RDNORM RDBAND WRNORM WRBAND HUP RDHUP NVAL MSG; do
              L=`git grep -l -w POLL$V | grep -v '^t' | grep -v /um/ | grep -v '^sa' | grep -v '/poll.h$'|grep -v '^D'`
              for f in $L; do sed -i "-es/^\([^\"]*\)\(\<POLL$V\>\)/\\1E\\2/" $f; done
          done
      
      with de-mangling cleanups yet to come; an example of the resulting change
      is sketched after this entry.
      
      NOTE! On almost all architectures, the EPOLL* constants have the same
      values as the POLL* constants do.  But the keyword here is "almost".
      For various bad reasons they aren't the same, and epoll() doesn't
      actually work quite correctly in some cases due to this on Sparc et al.
      
      The next patch from Al will sort out the final differences, and we
      should be all done.
      Scripted-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a9a08845
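      An illustration of the kind of change the script produces, in a
      hypothetical ->poll() handler that builds an event mask:

          __poll_t mask = 0;

          if (!skb_queue_empty(&sk->sk_receive_queue))
                  mask |= EPOLLIN | EPOLLRDNORM;  /* was: POLLIN | POLLRDNORM */
          if (sk->sk_shutdown & RCV_SHUTDOWN)
                  mask |= EPOLLRDHUP;             /* was: POLLRDHUP */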
  11. 17 Jan 2018, 1 commit
    • net: delete /proc THIS_MODULE references · 96890d62
      Alexey Dobriyan authored
      /proc has been ignoring struct file_operations::owner field for 10 years.
      Specifically, it started with commit 786d7e16
      ("Fix rmmod/read/write races in /proc entries"). Notice the chunk where
      inode->i_fop is initialized with proxy struct file_operations for
      regular files:
      
      	-               if (de->proc_fops)
      	-                       inode->i_fop = de->proc_fops;
      	+               if (de->proc_fops) {
      	+                       if (S_ISREG(inode->i_mode))
      	+                               inode->i_fop = &proc_reg_file_ops;
      	+                       else
      	+                               inode->i_fop = de->proc_fops;
      	+               }
      
      VFS stopped pinning the module at this point (the resulting cleanup is
      sketched after this entry).
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      96890d62
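      The practical effect on a /proc-only file_operations is simply dropping the
      .owner initializer; a hedged sketch (the struct and open helper names here
      are hypothetical):

          static const struct file_operations example_proc_fops = {
                  /* .owner = THIS_MODULE,  <- no longer needed, /proc ignores it */
                  .open    = example_proc_open,
                  .read    = seq_read,
                  .llseek  = seq_lseek,
                  .release = seq_release_net,
          };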
  12. 28 Nov 2017, 2 commits
  13. 22 Oct 2017, 1 commit
  14. 19 Aug 2017, 1 commit
    • datagram: When peeking datagrams with offset < 0 don't skip empty skbs · a0917e0b
      Matthew Dawson authored
      Due to commit e6afc8ac ("udp: remove
      headers from UDP packets before queueing"), when udp packets are being
      peeked the requested extra offset is always 0 as there is no need to skip
      the udp header.  However, when the offset is 0 and the next skb is
      of length 0, it is only returned once.  The behaviour can be seen with
      the following python script:
      
      from socket import *;
      f=socket(AF_INET6, SOCK_DGRAM | SOCK_NONBLOCK, 0);
      g=socket(AF_INET6, SOCK_DGRAM | SOCK_NONBLOCK, 0);
      f.bind(('::', 0));
      addr=('::1', f.getsockname()[1]);
      g.sendto(b'', addr)
      g.sendto(b'b', addr)
      print(f.recvfrom(10, MSG_PEEK));
      print(f.recvfrom(10, MSG_PEEK));
      
      The expected output should be the empty string twice.
      
      Instead, make sk_peek_offset return negative values, and pass those values
      to __skb_try_recv_datagram/__skb_try_recv_from_queue.  If the passed offset
      to __skb_try_recv_from_queue is negative, the checked skb is never skipped.
      __skb_try_recv_from_queue will then ensure the offset is reset back to 0
      if a peek is requested without an offset, unless no packets are found.
      
      Also simplify the if condition in __skb_try_recv_from_queue.  If _off is
      greater than 0, and off is greater than or equal to skb->len, then
      (_off || skb->len) must always be true assuming skb->len >= 0 is always
      true.
      
      Also remove a redundant check around a call to sk_peek_offset in af_unix.c,
      as it double checked if MSG_PEEK was set in the flags.
      
      V2:
       - Moved the negative fixup into __skb_try_recv_from_queue, and remove now
      redundant checks
       - Fix peeking in udp{,v6}_recvmsg to report the right value when the
      offset is 0
      
      V3:
       - Marked new branch in __skb_try_recv_from_queue as unlikely.
      Signed-off-by: Matthew Dawson <matthew@mjdsystems.ca>
      Acked-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a0917e0b
  15. 17 Jul 2017, 1 commit
    • net/unix: drop obsolete fd-recursion limits · 27eac47b
      David Herrmann authored
      All unix sockets now account inflight FDs to the respective sender.
      This was introduced in:
      
          commit 712f4aad
          Author: willy tarreau <w@1wt.eu>
          Date:   Sun Jan 10 07:54:56 2016 +0100
      
              unix: properly account for FDs passed over unix sockets
      
      and further refined in:
      
          commit 415e3d3e
          Author: Hannes Frederic Sowa <hannes@stressinduktion.org>
          Date:   Wed Feb 3 02:11:03 2016 +0100
      
              unix: correctly track in-flight fds in sending process user_struct
      
      Hence, regardless of the stacking depth of FDs, the total number of
      inflight FDs is limited, and accounted. There is no known way for a
      local user to exceed those limits or exploit the accounting.
      
      Furthermore, the GC logic is independent of the recursion/stacking depth
      as well. It solely depends on the total number of inflight FDs,
      regardless of their layout.
      
      Lastly, the current `recursion_level' suffers from a TOCTOU race, since it
      checks and inherits depths only at queue time. If we consider `A<-B' to
      mean `queue-B-on-A', the following sequence circumvents the recursion
      level easily:
      
          A<-B
             B<-C
                C<-D
                   ...
                     Y<-Z
      
      resulting in:
      
          A<-B<-C<-...<-Z
      
      With all of this in mind, let's drop the recursion limit. It has no
      additional security value anymore. On the contrary, it randomly
      confuses message brokers that try to forward file-descriptors, since
      any sendmsg(2) call can fail spuriously with ETOOMANYREFS if a client
      maliciously modifies the FD while inflight.
      
      Cc: Alban Crequy <alban.crequy@collabora.co.uk>
      Cc: Simon McVittie <simon.mcvittie@collabora.co.uk>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Reviewed-by: Tom Gundersen <teg@jklm.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      27eac47b
  16. 01 Jul 2017, 3 commits
  17. 20 Jun 2017, 1 commit
    • sched/wait: Rename wait_queue_t => wait_queue_entry_t · ac6424b9
      Ingo Molnar authored
      Rename:
      
      	wait_queue_t		=>	wait_queue_entry_t
      
      'wait_queue_t' was always a slight misnomer: its name implies that it's a "queue",
      but in reality it's a queue *entry*. The 'real' queue is the wait queue head,
      which had to carry the name.
      
      Start sorting this out by renaming it to 'wait_queue_entry_t' (see the
      sketch after this entry).
      
      This also allows the real structure name 'struct __wait_queue' to
      lose its double underscore and become 'struct wait_queue_entry',
      which is the more canonical nomenclature for such data types.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ac6424b9
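      In terms of the typedefs involved, the rename amounts to (simplified from
      include/linux/wait.h):

          /* before this series */
          typedef struct __wait_queue wait_queue_t;

          /* after this series */
          typedef struct wait_queue_entry wait_queue_entry_t;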
  18. 09 Jun 2017, 1 commit
  19. 07 Apr 2017, 1 commit
  20. 10 Mar 2017, 1 commit
    • net: Work around lockdep limitation in sockets that use sockets · cdfbabfb
      David Howells authored
      Lockdep issues a circular dependency warning when AFS issues an operation
      through AF_RXRPC from a context in which the VFS/VM holds the mmap_sem.
      
      The theory lockdep comes up with is as follows:
      
       (1) If the pagefault handler decides it needs to read pages from AFS, it
           calls AFS with mmap_sem held and AFS begins an AF_RXRPC call, but
           creating a call requires the socket lock:
      
      	mmap_sem must be taken before sk_lock-AF_RXRPC
      
       (2) afs_open_socket() opens an AF_RXRPC socket and binds it.  rxrpc_bind()
           binds the underlying UDP socket whilst holding its socket lock.
           inet_bind() takes its own socket lock:
      
      	sk_lock-AF_RXRPC must be taken before sk_lock-AF_INET
      
       (3) Reading from a TCP socket into a userspace buffer might cause a fault
           and thus cause the kernel to take the mmap_sem, but the TCP socket is
           locked whilst doing this:
      
      	sk_lock-AF_INET must be taken before mmap_sem
      
      However, lockdep's theory is wrong in this instance because it deals only
      with lock classes and not individual locks.  The AF_INET lock in (2) isn't
      really equivalent to the AF_INET lock in (3) as the former deals with a
      socket entirely internal to the kernel that never sees userspace.  This is
      a limitation in the design of lockdep.
      
      Fix the general case by:
      
       (1) Double up all the locking keys used in sockets so that one set are
           used if the socket is created by userspace and the other set is used
           if the socket is created by the kernel.
      
       (2) Store the kern parameter passed to sk_alloc() in a variable in the
           sock struct (sk_kern_sock).  This informs sock_lock_init(),
           sock_init_data() and sk_clone_lock() as to the lock keys to be used.
      
           Note that the child created by sk_clone_lock() inherits the parent's
           kern setting.
      
       (3) Add a 'kern' parameter to ->accept(), analogous to the one passed
           in to ->create(), that distinguishes whether kernel_accept() or
           sys_accept4() was the caller and can be passed to sk_alloc() (the
           new signature is sketched after this entry).
      
           Note that a lot of accept functions merely dequeue an already
           allocated socket.  I haven't touched these as the new socket already
           exists before we get the parameter.
      
           Note also that there are a couple of places where I've made the accepted
           socket unconditionally kernel-based:
      
      	irda_accept()
      	rds_rcp_accept_one()
      	tcp_accept_from_sock()
      
           because they follow a sock_create_kern() and accept off of that.
      
      Whilst creating this, I noticed that lustre and ocfs don't create sockets
      through sock_create_kern() and thus they aren't marked as for-kernel,
      though they appear to be internal.  I wonder if these should do that so
      that they use the new set of lock keys.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cdfbabfb
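      A sketch of the resulting ->accept() signature; the surrounding struct is
      abbreviated here, while in the kernel the method lives in struct proto_ops
      in include/linux/net.h:

          struct socket;

          struct proto_ops_sketch {
                  /* before: int (*accept)(struct socket *sock,
                   *                       struct socket *newsock, int flags);
                   */
                  int (*accept)(struct socket *sock, struct socket *newsock,
                                int flags, bool kern);
          };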
  21. 02 Mar 2017, 1 commit
  22. 03 Feb 2017, 1 commit
    • unix: add ioctl to open a unix socket file with O_PATH · ba94f308
      Andrey Vagin authored
      This ioctl opens a file to which a socket is bound and
      returns a file descriptor. The caller has to have CAP_NET_ADMIN
      in the socket network namespace.
      
      Currently it is impossible to get a path and a mount point
      for a socket file. socket_diag reports address, device ID and inode
      number for unix sockets. An address can contain a relative path, or the
      file may have been moved elsewhere, and these properties say nothing about
      the mount namespace or the mount point of the socket file.
      
      With the introduced ioctl, we can get a path by reading
      /proc/self/fd/X and get mnt_id from /proc/self/fdinfo/X.
      
      In CRIU we are going to use this ioctl to dump and restore unix sockets.
      
      Here is an example how it can be used:
      
      $ strace -e socket,bind,ioctl ./test /tmp/test_sock
      socket(AF_UNIX, SOCK_STREAM, 0)         = 3
      bind(3, {sa_family=AF_UNIX, sun_path="test_sock"}, 11) = 0
      ioctl(3, SIOCUNIXFILE, 0)           = 4
      ^Z
      
      $ ss -a | grep test_sock
      u_str  LISTEN     0      1      test_sock 17798                 * 0
      
      $ ls -l /proc/760/fd/{3,4}
      lrwx------ 1 root root 64 Feb  1 09:41 3 -> 'socket:[17798]'
      l--------- 1 root root 64 Feb  1 09:41 4 -> /tmp/test_sock
      
      $ cat /proc/760/fdinfo/4
      pos:	0
      flags:	012000000
      mnt_id:	40
      
      $ cat /proc/self/mountinfo | grep "^40\s"
      40 19 0:37 / /tmp rw shared:23 - tmpfs tmpfs rw
      Signed-off-by: Andrei Vagin <avagin@openvz.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ba94f308
  23. 25 Jan 2017, 1 commit
  24. 25 Dec 2016, 1 commit
  25. 16 Dec 2016, 1 commit
  26. 19 Nov 2016, 1 commit
    • af_unix: conditionally use freezable blocking calls in read · 06a77b07
      WANG Cong authored
      Commit 2b15af6f ("af_unix: use freezable blocking calls in read")
      converted schedule_timeout() to its freezable version. That was probably
      correct at the time, but later commit 2b514574
      ("net: af_unix: implement splice for stream af_unix sockets") broke
      the strong requirement for a freezable sleep, according to
      commit 0f9548ca:
      
          We shouldn't try_to_freeze if locks are held.  Holding a lock can cause a
          deadlock if the lock is later acquired in the suspend or hibernate path
          (e.g.  by dpm).  Holding a lock can also cause a deadlock in the case of
          cgroup_freezer if a lock is held inside a frozen cgroup that is later
          acquired by a process outside that group.
      
      The pipe_lock is still held at that point.
      
      So use the freezable version only for the recvmsg call path, to avoid any
      impact on Android (see the sketch after this entry).
      
      Fixes: 2b514574 ("net: af_unix: implement splice for stream af_unix sockets")
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Colin Cross <ccross@android.com>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      06a77b07
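      A hedged sketch of the approach: only the recvmsg path asks for a freezable
      wait, while the splice path (which still holds pipe_lock) keeps a plain
      schedule_timeout(). The helper below is a simplified stand-in for the wait
      loop body:

          static long example_stream_wait(long timeo, bool freezable)
          {
                  if (freezable)
                          timeo = freezable_schedule_timeout(timeo);
                  else
                          timeo = schedule_timeout(timeo);
                  return timeo;
          }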
  27. 08 Nov 2016, 1 commit
    • udp: do fwd memory scheduling on dequeue · 7c13f97f
      Paolo Abeni authored
      A new argument is added to __skb_recv_datagram to provide
      an explicit skb destructor, invoked under the receive queue
      lock.
      The UDP protocol uses this argument to perform memory reclaiming on
      dequeue, so that it no longer sets skb->destructor.
      Instead, explicit memory reclaiming is performed at close() time and
      when skbs are removed from the receive queue.
      In-kernel UDP protocol users now need to call the skb_recv_udp()
      variant instead of skb_recv_datagram() to properly perform memory
      accounting on dequeue (see the sketch after this entry).

      Overall, this allows acquiring the receive queue lock only once on
      dequeue.
      
      Tested using pktgen with random src port, 64 bytes packet,
      wire-speed on a 10G link as sender and udp_sink as the receiver,
      using an l4 tuple rxhash to stress the contention, and one or more
      udp_sink instances with reuseport.
      
      nr sinks	vanilla		patched
      1		440		560
      3		2150		2300
      6		3650		3800
      9		4450		4600
      12		6250		6450
      
      v1 -> v2:
       - do rmem and allocated memory scheduling under the receive lock
       - do bulk scheduling in first_packet_length() and in udp_destruct_sock()
       - avoid the typedef for the dequeue callback
      Suggested-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7c13f97f
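      A hedged sketch of what an in-kernel UDP consumer changes to; the
      signatures shown match this era of the tree, and the wrapper function is
      purely illustrative:

          static struct sk_buff *example_udp_dequeue(struct sock *sk,
                                                     int noblock, int *err)
          {
                  /* before: return skb_recv_datagram(sk, 0, noblock, err); */
                  return skb_recv_udp(sk, 0, noblock, err); /* accounts on dequeue */
          }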
  28. 02 Nov 2016, 1 commit
  29. 04 Oct 2016, 1 commit
    • skb_splice_bits(): get rid of callback · 25869262
      Al Viro authored
      since pipe_lock is the outermost now, we don't need to drop/regain
      socket locks around the call of splice_to_pipe() from skb_splice_bits(),
      which kills the need to have a socket-specific callback; we can just
      call splice_to_pipe() and be done with that.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      25869262
  30. 05 Sep 2016, 2 commits
  31. 27 Jul 2016, 1 commit
    • af_unix: charge buffers to kmemcg · 3aa9799e
      Vladimir Davydov authored
      Unix sockets can consume a significant amount of system memory, hence
      they should be accounted to kmemcg.
      
      Since unix socket buffers are always allocated from process context, all
      we need to do to charge them to kmemcg is set __GFP_ACCOUNT in the
      sock->sk_allocation mask (see the sketch after this entry).
      
      Eric asked:
      
      > 1) What happens when a buffer, allocated from socket <A> lands in a
      > different socket <B>, maybe owned by another user/process.
      >
      > Who owns it now, in term of kmemcg accounting ?
      
      We never move memcg charges.  E.g.  if two processes from different
      cgroups are sharing a memory region, each page will be charged to the
      process which touched it first.  Or if two processes are working with
      the same directory tree, inodes and dentries will be charged to the
      first user.  The same is fair for unix socket buffers - they will be
      charged to the sender.
      
      > 2) Has performance impact been evaluated ?
      
      I ran netperf STREAM_STREAM with default options in a kmemcg on a 4 core
      x2 HT box.  The results are below:
      
       # clients            bandwidth (10^6bits/sec)
                          base              patched
               1      67643 +-  725      64874 +-  353    - 4.0 %
               4     193585 +- 2516     186715 +- 1460    - 3.5 %
               8     194820 +-  377     187443 +- 1229    - 3.7 %
      
      So the accounting doesn't come for free - it takes ~4% of performance.
      I believe we could optimize it by using per cpu batching not only on
      charge, but also on uncharge in memcg core, but that's beyond the scope
      of this patch set - I'll take a look at this later.
      
      Anyway, if performance impact is found to be unacceptable, it is always
      possible to disable kmem accounting at boot time (cgroup.memory=nokmem)
      or not use memory cgroups at runtime at all (thanks to jump labels
      there'll be no overhead even if they are compiled in).
      
      Link: http://lkml.kernel.org/r/fcfe6cae27a59fbc5e40145664b3cf085a560c68.1464079538.git.vdavydov@virtuozzo.com
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3aa9799e
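      A minimal sketch of the change, assuming it is applied once at socket
      creation time (the function name below stands in for the unix socket
      allocation path):

          static void example_mark_sock_accounted(struct sock *sk)
          {
                  /* GFP_KERNEL_ACCOUNT == GFP_KERNEL | __GFP_ACCOUNT */
                  sk->sk_allocation = GFP_KERNEL_ACCOUNT;
          }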
  32. 21 May 2016, 1 commit
    • af_unix: fix hard linked sockets on overlay · eb0a4a47
      Miklos Szeredi authored
      Overlayfs uses separate inodes even in the case of hard links on the
      underlying filesystems.  This is a problem for the AF_UNIX socket
      implementation, which indexes sockets based on the inode.  This resulted
      in hard-linked sockets not working.
      
      The fix is to use the real, underlying inode (see the sketch after this
      entry).
      
      Test case follows:
      
      -- ovl-sock-test.c --
      #include <unistd.h>
      #include <err.h>
      #include <sys/socket.h>
      #include <sys/un.h>
      
      #define SOCK "test-sock"
      #define SOCK2 "test-sock2"
      
      int main(void)
      {
      	int fd, fd2;
      	struct sockaddr_un addr = {
      		.sun_family = AF_UNIX,
      		.sun_path = SOCK,
      	};
      	struct sockaddr_un addr2 = {
      		.sun_family = AF_UNIX,
      		.sun_path = SOCK2,
      	};
      
      	unlink(SOCK);
      	unlink(SOCK2);
      	if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) == -1)
      		err(1, "socket");
      	if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) == -1)
      		err(1, "bind");
      	if (listen(fd, 0) == -1)
      		err(1, "listen");
      	if (link(SOCK, SOCK2) == -1)
      		err(1, "link");
      	if ((fd2 = socket(AF_UNIX, SOCK_STREAM, 0)) == -1)
      		err(1, "socket");
      	if (connect(fd2, (struct sockaddr *) &addr2, sizeof(addr2)) == -1)
      		err (1, "connect");
      	return 0;
      }
      ----
      
      Reported-by: Alexander Morozov <alexandr.morozov@docker.com> 
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      Cc: <stable@vger.kernel.org>
      eb0a4a47
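      A hedged sketch of the lookup change; the d_real_inode() helper name is an
      assumption, not quoted from the commit, and the function below merely
      stands in for the path-based socket lookup in af_unix.c:

          static struct inode *example_socket_inode(const struct path *path)
          {
                  /* was: return d_backing_inode(path->dentry); */
                  return d_real_inode(path->dentry);  /* real, underlying inode */
          }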
  33. 28 Mar 2016, 1 commit
  34. 20 Feb 2016, 1 commit
  35. 17 Feb 2016, 1 commit
    • af_unix: Guard against other == sk in unix_dgram_sendmsg · a5527dda
      Rainer Weikusat authored
      The unix_dgram_sendmsg routine uses the following test
      
      if (unlikely(unix_peer(other) != sk && unix_recvq_full(other))) {
      
      to determine if sk and other are in an n:1 association (either
      established via connect or by using sendto to send messages to an
      unrelated socket identified by address). This isn't correct as the
      specified address could have been bound to the sending socket itself or
      because this socket could have been connected to itself by the time of
      the unix_peer_get but disconnected before the unix_state_lock(other). In
      both cases, the if-block would be entered despite other == sk which
      might either block the sender unintentionally or lead to an attempt to
      unlock the same spinlock twice for a non-blocking send. Add an
      other != sk check to guard against this (see the sketch after this entry).
      
      Fixes: 7d267278 ("unix: avoid use-after-free in ep_remove_wait_queue")
      Reported-by: Philipp Hahn <pmhahn@pmhahn.de>
      Signed-off-by: Rainer Weikusat <rweikusat@mobileactivedefense.com>
      Tested-by: Philipp Hahn <pmhahn@pmhahn.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a5527dda
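      A sketch of the guarded test, simplified from unix_dgram_sendmsg():

          if (other != sk &&
              unlikely(unix_peer(other) != sk && unix_recvq_full(other))) {
                  /* n:1 sender may have to wait for receive-queue space */
          }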