1. 19 Aug 2017, 1 commit
    • datagram: When peeking datagrams with offset < 0 don't skip empty skbs · a0917e0b
      Authored by Matthew Dawson
      Due to commit e6afc8ac ("udp: remove
      headers from UDP packets before queueing"), when udp packets are being
      peeked, the requested extra offset is always 0, as there is no need to
      skip the udp header.  However, when the offset is 0 and the next skb
      has length 0, it is returned only once; subsequent peeks skip it.  The
      behaviour can be seen with the following python script:
      the following python script:
      
      from socket import *;
      f=socket(AF_INET6, SOCK_DGRAM | SOCK_NONBLOCK, 0);
      g=socket(AF_INET6, SOCK_DGRAM | SOCK_NONBLOCK, 0);
      f.bind(('::', 0));
      addr=('::1', f.getsockname()[1]);
      g.sendto(b'', addr)
      g.sendto(b'b', addr)
      print(f.recvfrom(10, MSG_PEEK));
      print(f.recvfrom(10, MSG_PEEK));
      
      The expected output should be the empty string twice; with the bug
      present, the second peek instead skips the empty datagram.
      
      Instead, make sk_peek_offset return negative values, and pass those values
      to __skb_try_recv_datagram/__skb_try_recv_from_queue.  If the passed offset
      to __skb_try_recv_from_queue is negative, the checked skb is never skipped.
      __skb_try_recv_from_queue will then ensure the offset is reset back to 0
      if a peek is requested without an offset, unless no packets are found.
      
      Also simplify the if condition in __skb_try_recv_from_queue.  If _off is
      greater than 0, and _off is greater than or equal to skb->len, then
      (_off || skb->len) must always be true, assuming skb->len >= 0 is always
      true.
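
      For illustration, a sketch of the resulting dequeue-side peek check,
      modelled on __skb_try_recv_from_queue (not the verbatim kernel code;
      locking and error handling omitted):

      /* A negative *off means "peek without an offset": no skb is ever
       * skipped, so zero-length datagrams are seen by every peek. */
      bool peek_at_off = false;
      struct sk_buff *skb;
      int _off = 0;

      if (unlikely((flags & MSG_PEEK) && *off >= 0)) {
      	peek_at_off = true;
      	_off = *off;
      }

      skb_queue_walk(queue, skb) {
      	if (flags & MSG_PEEK) {
      		/* _off > 0 already implies (_off || skb->len) is true */
      		if (peek_at_off && _off >= skb->len &&
      		    (_off || skb->len)) {
      			_off -= skb->len;
      			continue;
      		}
      	}
      	/* ... hand this skb back to the caller ... */
      }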
      
      Also remove a redundant check around a call to sk_peek_offset in af_unix.c,
      as it double-checked whether MSG_PEEK was set in the flags.
      
      V2:
       - Moved the negative fixup into __skb_try_recv_from_queue, and removed
      the now-redundant checks
       - Fixed peeking in udp{,v6}_recvmsg to report the right value when the
      offset is 0
      
      V3:
       - Marked new branch in __skb_try_recv_from_queue as unlikely.
      Signed-off-by: Matthew Dawson <matthew@mjdsystems.ca>
      Acked-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a0917e0b
  2. 17 Jul 2017, 1 commit
    • net/unix: drop obsolete fd-recursion limits · 27eac47b
      Authored by David Herrmann
      All unix sockets now account inflight FDs to the respective sender.
      This was introduced in:
      
          commit 712f4aad
          Author: willy tarreau <w@1wt.eu>
          Date:   Sun Jan 10 07:54:56 2016 +0100
      
              unix: properly account for FDs passed over unix sockets
      
      and further refined in:
      
          commit 415e3d3e
          Author: Hannes Frederic Sowa <hannes@stressinduktion.org>
          Date:   Wed Feb 3 02:11:03 2016 +0100
      
              unix: correctly track in-flight fds in sending process user_struct
      
      Hence, regardless of the stacking depth of FDs, the total number of
      inflight FDs is limited and accounted.  There is no known way for a
      local user to exceed those limits or exploit the accounting.
      
      Furthermore, the GC logic is independent of the recursion/stacking depth
      as well. It solely depends on the total number of inflight FDs,
      regardless of their layout.
      
      Lastly, the current `recursion_level' suffers from a TOCTOU race, since it
      checks and inherits depths only at queue time. If we consider `A<-B' to
      mean `queue-B-on-A', the following sequence circumvents the recursion
      level easily:
      
          A<-B
             B<-C
                C<-D
                   ...
                     Y<-Z
      
      resulting in:
      
          A<-B<-C<-...<-Z
      
      With all of this in mind, let's drop the recursion limit.  It no longer
      has any additional security value.  On the contrary, it randomly
      confuses message brokers that try to forward file descriptors, since
      any sendmsg(2) call can fail spuriously with ETOOMANYREFS if a client
      maliciously modifies the FD while it is in flight.
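
      For reference, a minimal userspace sketch of the A<-B step above: one
      end of socketpair B is queued in-flight on socketpair A via SCM_RIGHTS
      (an illustration only, with error handling trimmed):

      #include <string.h>
      #include <sys/socket.h>
      #include <sys/uio.h>

      static int send_fd(int via, int fd)
      {
      	char dummy = 'x';
      	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
      	union {
      		char buf[CMSG_SPACE(sizeof(int))];
      		struct cmsghdr align;
      	} u;
      	struct msghdr msg = {
      		.msg_iov = &iov, .msg_iovlen = 1,
      		.msg_control = u.buf, .msg_controllen = sizeof(u.buf),
      	};
      	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

      	cmsg->cmsg_level = SOL_SOCKET;
      	cmsg->cmsg_type = SCM_RIGHTS;
      	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
      	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

      	return sendmsg(via, &msg, 0) == 1 ? 0 : -1;
      }

      int main(void)
      {
      	int a[2], b[2];

      	if (socketpair(AF_UNIX, SOCK_DGRAM, 0, a) == -1 ||
      	    socketpair(AF_UNIX, SOCK_DGRAM, 0, b) == -1)
      		return 1;
      	return send_fd(a[0], b[0]) ? 1 : 0;	/* queue B on A */
      }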
      
      Cc: Alban Crequy <alban.crequy@collabora.co.uk>
      Cc: Simon McVittie <simon.mcvittie@collabora.co.uk>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Reviewed-by: Tom Gundersen <teg@jklm.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      27eac47b
  3. 01 Jul 2017, 3 commits
  4. 20 Jun 2017, 1 commit
    • sched/wait: Rename wait_queue_t => wait_queue_entry_t · ac6424b9
      Authored by Ingo Molnar
      Rename:
      
      	wait_queue_t		=>	wait_queue_entry_t
      
      'wait_queue_t' was always a slight misnomer: its name implies that it's a "queue",
      but in reality it's a queue *entry*. The 'real' queue is the wait queue head,
      which had to carry the name.
      
      Start sorting this out by renaming it to 'wait_queue_entry_t'.
      
      This also allows the real structure name 'struct __wait_queue' to
      lose its double underscore and become 'struct wait_queue_entry',
      which is the more canonical nomenclature for such data types.
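
      A sketch of how the renamed types read in use (illustrative kernel
      fragment, not part of this patch):

      /* The head is the queue; the entry is one waiter parked on it. */
      struct wait_queue_head  wq_head;	/* the actual queue */
      struct wait_queue_entry wq_entry;	/* one waiter; formerly wait_queue_t */

      init_waitqueue_head(&wq_head);
      init_waitqueue_entry(&wq_entry, current);
      add_wait_queue(&wq_head, &wq_entry);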
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ac6424b9
  5. 09 Jun 2017, 1 commit
  6. 07 Apr 2017, 1 commit
  7. 10 Mar 2017, 1 commit
    • net: Work around lockdep limitation in sockets that use sockets · cdfbabfb
      Authored by David Howells
      Lockdep issues a circular dependency warning when AFS issues an operation
      through AF_RXRPC from a context in which the VFS/VM holds the mmap_sem.
      
      The theory lockdep comes up with is as follows:
      
       (1) If the pagefault handler decides it needs to read pages from AFS, it
           calls AFS with mmap_sem held and AFS begins an AF_RXRPC call, but
           creating a call requires the socket lock:
      
      	mmap_sem must be taken before sk_lock-AF_RXRPC
      
       (2) afs_open_socket() opens an AF_RXRPC socket and binds it.  rxrpc_bind()
           binds the underlying UDP socket whilst holding its socket lock.
           inet_bind() takes its own socket lock:
      
      	sk_lock-AF_RXRPC must be taken before sk_lock-AF_INET
      
       (3) Reading from a TCP socket into a userspace buffer might cause a fault
           and thus cause the kernel to take the mmap_sem, but the TCP socket is
           locked whilst doing this:
      
      	sk_lock-AF_INET must be taken before mmap_sem
      
      However, lockdep's theory is wrong in this instance because it deals only
      with lock classes and not individual locks.  The AF_INET lock in (2) isn't
      really equivalent to the AF_INET lock in (3) as the former deals with a
      socket entirely internal to the kernel that never sees userspace.  This is
      a limitation in the design of lockdep.
      
      Fix the general case by:
      
       (1) Double up all the locking keys used in sockets so that one set are
           used if the socket is created by userspace and the other set is used
           if the socket is created by the kernel.
      
       (2) Store the kern parameter passed to sk_alloc() in a variable in the
           sock struct (sk_kern_sock).  This informs sock_lock_init(),
           sock_init_data() and sk_clone_lock() as to the lock keys to be used.
      
           Note that the child created by sk_clone_lock() inherits the parent's
           kern setting.
      
       (3) Add a 'kern' parameter to ->accept() that is analogous to the one
           passed in to ->create() that distinguishes whether kernel_accept() or
           sys_accept4() was the caller and can be passed to sk_alloc().
      
           Note that a lot of accept functions merely dequeue an already
           allocated socket.  I haven't touched these as the new socket already
           exists before we get the parameter.
      
           Note also that there are a couple of places where I've made the accepted
           socket unconditionally kernel-based:
      
      	irda_accept()
	rds_tcp_accept_one()
      	tcp_accept_from_sock()
      
           because they follow a sock_create_kern() and accept off of that.
      
      Whilst creating this, I noticed that lustre and ocfs don't create sockets
      through sock_create_kern() and thus they aren't marked as for-kernel,
      though they appear to be internal.  I wonder if these should do that so
      that they use the new set of lock keys.
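
      A sketch of the key selection described in (1) and (2); the array names
      here are illustrative, the real ones live in net/core/sock.c:

      static void sock_lock_init(struct sock *sk)
      {
      	/* pick the lockdep key set based on who created the socket */
      	if (sk->sk_kern_sock)
      		sock_lock_init_class_and_name(sk,
      			af_family_kern_slock_key_strings[sk->sk_family],
      			&af_family_kern_slock_keys[sk->sk_family],
      			af_family_kern_key_strings[sk->sk_family],
      			&af_family_kern_keys[sk->sk_family]);
      	else
      		sock_lock_init_class_and_name(sk,
      			af_family_slock_key_strings[sk->sk_family],
      			&af_family_slock_keys[sk->sk_family],
      			af_family_key_strings[sk->sk_family],
      			&af_family_keys[sk->sk_family]);
      }
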
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cdfbabfb
  8. 02 Mar 2017, 1 commit
  9. 03 Feb 2017, 1 commit
    • unix: add ioctl to open a unix socket file with O_PATH · ba94f308
      Authored by Andrey Vagin
      This ioctl opens the file to which a socket is bound and
      returns a file descriptor.  The caller must have CAP_NET_ADMIN
      in the socket's network namespace.
      
      Currently it is impossible to get the path and mount point
      of a socket file.  socket_diag reports the address, device ID and inode
      number for unix sockets.  An address can contain a relative path, or
      the file may have been moved elsewhere.  And these properties say
      nothing about the mount namespace or mount point of the socket file.
      
      With the introduced ioctl, we can get a path by reading
      /proc/self/fd/X and get mnt_id from /proc/self/fdinfo/X.
      
      In CRIU we are going to use this ioctl to dump and restore unix sockets.
      
      Here is an example of how it can be used:
      
      $ strace -e socket,bind,ioctl ./test /tmp/test_sock
      socket(AF_UNIX, SOCK_STREAM, 0)         = 3
      bind(3, {sa_family=AF_UNIX, sun_path="test_sock"}, 11) = 0
      ioctl(3, SIOCUNIXFILE, 0)           = 4
      ^Z
      
      $ ss -a | grep test_sock
      u_str  LISTEN     0      1      test_sock 17798                 * 0
      
      $ ls -l /proc/760/fd/{3,4}
      lrwx------ 1 root root 64 Feb  1 09:41 3 -> 'socket:[17798]'
      l--------- 1 root root 64 Feb  1 09:41 4 -> /tmp/test_sock
      
      $ cat /proc/760/fdinfo/4
      pos:	0
      flags:	012000000
      mnt_id:	40
      
      $ cat /proc/self/mountinfo | grep "^40\s"
      40 19 0:37 / /tmp rw shared:23 - tmpfs tmpfs rw
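
      The source of the ./test helper is not part of this message; a minimal
      program along its lines might look as follows (a guess, with error
      handling trimmed):

      #include <string.h>
      #include <sys/ioctl.h>
      #include <sys/socket.h>
      #include <sys/un.h>
      #include <unistd.h>
      #include <linux/sockios.h>

      #ifndef SIOCUNIXFILE
      #define SIOCUNIXFILE (SIOCPROTOPRIVATE + 0)	/* open bound file, O_PATH */
      #endif

      int main(int argc, char **argv)
      {
      	struct sockaddr_un addr = { .sun_family = AF_UNIX };
      	int sk, fd;

      	if (argc < 2)
      		return 1;
      	strncpy(addr.sun_path, argv[1], sizeof(addr.sun_path) - 1);

      	sk = socket(AF_UNIX, SOCK_STREAM, 0);
      	if (sk == -1 || bind(sk, (struct sockaddr *)&addr, sizeof(addr)))
      		return 1;

      	fd = ioctl(sk, SIOCUNIXFILE, 0);	/* O_PATH fd for the socket file */
      	if (fd == -1)
      		return 1;

      	pause();	/* keep fds open so /proc/<pid>/fd can be inspected */
      	return 0;
      }
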
      Signed-off-by: Andrei Vagin <avagin@openvz.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ba94f308
  10. 25 Jan 2017, 1 commit
  11. 25 Dec 2016, 1 commit
  12. 16 Dec 2016, 1 commit
  13. 19 Nov 2016, 1 commit
    • af_unix: conditionally use freezable blocking calls in read · 06a77b07
      Authored by WANG Cong
      Commit 2b15af6f ("af_unix: use freezable blocking calls in read")
      converted schedule_timeout() to its freezable version.  That was probably
      correct at the time, but commit 2b514574
      ("net: af_unix: implement splice for stream af_unix sockets") later broke
      the strong requirement for a freezable sleep, as explained in
      commit 0f9548ca:
      
          We shouldn't try_to_freeze if locks are held.  Holding a lock can cause a
          deadlock if the lock is later acquired in the suspend or hibernate path
          (e.g.  by dpm).  Holding a lock can also cause a deadlock in the case of
          cgroup_freezer if a lock is held inside a frozen cgroup that is later
          acquired by a process outside that group.
      
      The pipe_lock is still held at that point.
      
      So use the freezable version only for the recvmsg call path, to avoid
      any impact for Android.
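
      A sketch of the resulting shape of unix_stream_data_wait(); the splice
      path passes freezable == false because it sleeps with the pipe_lock
      held:

      static long unix_stream_data_wait(struct sock *sk, long timeo,
      				  struct sk_buff *last, unsigned int last_len,
      				  bool freezable)
      {
      	/* ... prepare_to_wait() etc. elided ... */
      	if (freezable)
      		timeo = freezable_schedule_timeout(timeo);
      	else
      		timeo = schedule_timeout(timeo);
      	/* ... */
      	return timeo;
      }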
      
      Fixes: 2b514574 ("net: af_unix: implement splice for stream af_unix sockets")
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Colin Cross <ccross@android.com>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      06a77b07
  14. 08 Nov 2016, 1 commit
    • udp: do fwd memory scheduling on dequeue · 7c13f97f
      Authored by Paolo Abeni
      A new argument is added to __skb_recv_datagram to provide
      an explicit skb destructor, invoked under the receive queue
      lock.
      The UDP protocol uses this argument to perform memory
      reclaiming on dequeue, so that the UDP protocol no longer
      sets skb->destructor.
      Instead, explicit memory reclaiming is performed at close() time and
      when skbs are removed from the receive queue.
      In-kernel users of the UDP protocol now need to call a
      skb_recv_udp() variant instead of skb_recv_datagram() to
      properly perform memory accounting on dequeue.
      
      Overall, this allows acquiring the receive queue lock only once
      per dequeue.
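
      A sketch of the mechanism (names follow the patch, trimmed for
      illustration; udp_rmem_release() is the internal accounting helper):

      /* invoked under the receive queue lock on dequeue */
      static void udp_skb_destructor(struct sock *sk, struct sk_buff *skb)
      {
      	udp_rmem_release(sk, skb->truesize, 1);
      }

      /* in-kernel callers dequeue via the UDP-aware variant, e.g.: */
      skb = __skb_recv_datagram(sk, flags, udp_skb_destructor,
      			  &peeked, &off, &err);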
      
      Tested using pktgen with random src ports and 64 byte packets at
      wire speed on a 10G link as the sender, and udp_sink as the receiver,
      using an L4-tuple rxhash to stress the contention, and one or more
      udp_sink instances with reuseport.
      
      nr sinks	vanilla		patched
      1		440		560
      3		2150		2300
      6		3650		3800
      9		4450		4600
      12		6250		6450
      
      v1 -> v2:
       - do rmem and allocated memory scheduling under the receive lock
       - do bulk scheduling in first_packet_length() and in udp_destruct_sock()
       - avoid the typedef for the dequeue callback
      Suggested-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7c13f97f
  15. 02 Nov 2016, 1 commit
  16. 04 Oct 2016, 1 commit
    • skb_splice_bits(): get rid of callback · 25869262
      Authored by Al Viro
      Since pipe_lock is now the outermost lock, we don't need to drop/regain
      socket locks around the call of splice_to_pipe() from skb_splice_bits(),
      which removes the need for a socket-specific callback; we can just
      call splice_to_pipe() and be done with it.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      25869262
  17. 05 Sep 2016, 2 commits
  18. 27 Jul 2016, 1 commit
    • af_unix: charge buffers to kmemcg · 3aa9799e
      Authored by Vladimir Davydov
      Unix sockets can consume a significant amount of system memory, hence
      they should be accounted to kmemcg.
      
      Since unix socket buffers are always allocated from process context, all
      we need to do to charge them to kmemcg is set __GFP_ACCOUNT in
      sock->sk_allocation mask.
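
      The change is essentially a one-liner in unix_create1()
      (GFP_KERNEL_ACCOUNT is GFP_KERNEL | __GFP_ACCOUNT):

      sk->sk_allocation = GFP_KERNEL_ACCOUNT;	/* charge skbs to the owner's kmemcg */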
      
      Eric asked:
      
      > 1) What happens when a buffer, allocated from socket <A> lands in a
      > different socket <B>, maybe owned by another user/process.
      >
      > Who owns it now, in term of kmemcg accounting ?
      
      We never move memcg charges.  E.g.  if two processes from different
      cgroups are sharing a memory region, each page will be charged to the
      process which touched it first.  Or if two processes are working with
      the same directory tree, inodes and dentries will be charged to the
      first user.  The same is fair for unix socket buffers - they will be
      charged to the sender.
      
      > 2) Has performance impact been evaluated ?
      
      I ran netperf STREAM_STREAM with default options in a kmemcg on a 4 core
      x2 HT box.  The results are below:
      
       # clients            bandwidth (10^6bits/sec)
                          base              patched
               1      67643 +-  725      64874 +-  353    - 4.0 %
               4     193585 +- 2516     186715 +- 1460    - 3.5 %
               8     194820 +-  377     187443 +- 1229    - 3.7 %
      
      So the accounting doesn't come for free - it takes ~4% of performance.
      I believe we could optimize it by using per cpu batching not only on
      charge, but also on uncharge in memcg core, but that's beyond the scope
      of this patch set - I'll take a look at this later.
      
      Anyway, if performance impact is found to be unacceptable, it is always
      possible to disable kmem accounting at boot time (cgroup.memory=nokmem)
      or not use memory cgroups at runtime at all (thanks to jump labels
      there'll be no overhead even if they are compiled in).
      
      Link: http://lkml.kernel.org/r/fcfe6cae27a59fbc5e40145664b3cf085a560c68.1464079538.git.vdavydov@virtuozzo.com
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3aa9799e
  19. 21 May 2016, 1 commit
    • af_unix: fix hard linked sockets on overlay · eb0a4a47
      Authored by Miklos Szeredi
      Overlayfs uses separate inodes even in the case of hard links on the
      underlying filesystems.  This is a problem for the AF_UNIX socket
      implementation, which indexes sockets based on the inode.  This resulted
      in hard-linked sockets not working.
      
      The fix is to use the real, underlying inode.
      
      Test case follows:
      
      -- ovl-sock-test.c --
      #include <unistd.h>
      #include <err.h>
      #include <sys/socket.h>
      #include <sys/un.h>
      
      #define SOCK "test-sock"
      #define SOCK2 "test-sock2"
      
      int main(void)
      {
      	int fd, fd2;
      	struct sockaddr_un addr = {
      		.sun_family = AF_UNIX,
      		.sun_path = SOCK,
      	};
      	struct sockaddr_un addr2 = {
      		.sun_family = AF_UNIX,
      		.sun_path = SOCK2,
      	};
      
      	unlink(SOCK);
      	unlink(SOCK2);
      	if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) == -1)
      		err(1, "socket");
      	if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) == -1)
      		err(1, "bind");
      	if (listen(fd, 0) == -1)
      		err(1, "listen");
      	if (link(SOCK, SOCK2) == -1)
      		err(1, "link");
      	if ((fd2 = socket(AF_UNIX, SOCK_STREAM, 0)) == -1)
      		err(1, "socket");
      	if (connect(fd2, (struct sockaddr *) &addr2, sizeof(addr2)) == -1)
      		err (1, "connect");
      	return 0;
      }
      ----
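
      The fix itself is small; a rough sketch of the lookup side, relying on
      the d_real_inode() helper (the bind side is analogous):

      /* index the socket by the real, underlying inode */
      inode = d_real_inode(path.dentry);	/* was: d_inode(path.dentry) */
      u = unix_find_socket_byinode(inode);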
      
      Reported-by: Alexander Morozov <alexandr.morozov@docker.com>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      Cc: <stable@vger.kernel.org>
      eb0a4a47
  20. 28 Mar 2016, 1 commit
  21. 20 Feb 2016, 1 commit
  22. 17 Feb 2016, 2 commits
  23. 08 Feb 2016, 2 commits
  24. 25 Jan 2016, 1 commit
  25. 11 Jan 2016, 1 commit
  26. 05 Jan 2016, 1 commit
    • af_unix: Fix splice-bind deadlock · c845acb3
      Authored by Rainer Weikusat
      On 2015/11/06, Dmitry Vyukov reported a deadlock involving the splice
      system call and AF_UNIX sockets,
      
      http://lists.openwall.net/netdev/2015/11/06/24
      
      The situation was analyzed as
      
      (a while ago) A: socketpair()
      B: splice() from a pipe to /mnt/regular_file
      	does sb_start_write() on /mnt
      C: try to freeze /mnt
      	wait for B to finish with /mnt
      A: bind() try to bind our socket to /mnt/new_socket_name
      	lock our socket, see it not bound yet
      	decide that it needs to create something in /mnt
      	try to do sb_start_write() on /mnt, block (it's
      	waiting for C).
      D: splice() from the same pipe to our socket
      	lock the pipe, see that socket is connected
      	try to lock the socket, block waiting for A
      B:	get around to actually feeding a chunk from
      	pipe to file, try to lock the pipe.  Deadlock.
      
      on 2015/11/10 by Al Viro,
      
      http://lists.openwall.net/netdev/2015/11/10/4
      
      The patch fixes this by removing the kern_path_create related code from
      unix_mknod and executing it as part of unix_bind prior to acquiring the
      readlock of the socket in question. This means that A (as used above)
      will sb_start_write on /mnt before it acquires the readlock, hence, it
      won't indirectly block B which first did a sb_start_write and then
      waited for a thread trying to acquire the readlock. Consequently, A
      being blocked by C waiting for B won't cause a deadlock anymore
      (effectively, both A and B acquire two locks in opposite order in the
      situation described above).
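
      A sketch of the resulting ordering in unix_bind(); the filesystem work,
      which takes sb_start_write() internally, now precedes the socket's
      readlock:

      if (sun_path[0]) {
      	err = unix_mknod(sun_path, mode, &path);	/* fs locks first */
      	if (err)
      		goto out;
      }

      err = mutex_lock_interruptible(&u->readlock);	/* socket lock second */
      if (err)
      	goto out_put;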
      
      Dmitry Vyukov (<dvyukov@google.com>) tested the original patch.
      Signed-off-by: Rainer Weikusat <rweikusat@mobileactivedefense.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c845acb3
  27. 18 Dec 2015, 1 commit
  28. 07 Dec 2015, 1 commit
    • af_unix: fix unix_dgram_recvmsg entry locking · 64874280
      Authored by Rainer Weikusat
      The current unix_dgram_recvmsg code acquires the u->readlock mutex in
      order to protect access to the peek offset prior to calling
      __skb_recv_datagram for actually receiving data.  This implies that a
      blocking reader will go to sleep with this mutex held if there's
      presently no data to return to userspace.  Two undesirable side effects
      of this are that a later non-blocking read call on the same socket will
      block on the ->readlock mutex until the earlier blocking call releases it
      (or the reader is interrupted) and that later blocking read calls
      will wait longer than the effective socket read timeout says they
      should: The timeout will only start 'ticking' once such a reader hits
      the schedule_timeout in wait_for_more_packets (core.c) while the time it
      already had to wait until it could acquire the mutex is unaccounted for.
      
      The patch avoids both by using the __skb_try_recv_datagram and
      __skb_wait_for_more_packets functions created by the first patch to
      implement a unix_dgram_recvmsg read loop which releases the readlock
      mutex prior to going to sleep and reacquires it as needed
      afterwards. Non-blocking readers will thus immediately return with
      -EAGAIN if there's no data available regardless of any concurrent
      blocking readers and all blocking readers will end up sleeping via
      schedule_timeout, thus honouring the configured socket receive timeout.
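
      A sketch of the resulting receive loop; the mutex is held only while
      probing the queue, never across the sleep:

      do {
      	mutex_lock(&u->readlock);
      	skip = sk_peek_offset(sk, flags);
      	skb = __skb_try_recv_datagram(sk, flags, &peeked, &skip,
      				      &err, &last);
      	if (skb)
      		break;			/* got data; readlock still held */
      	mutex_unlock(&u->readlock);
      	if (err != -EAGAIN)
      		break;
      } while (timeo &&
      	 !__skb_wait_for_more_packets(sk, &err, &timeo, last));
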
      Signed-off-by: Rainer Weikusat <rweikusat@mobileactivedefense.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      64874280
  29. 02 Dec 2015, 2 commits
  30. 01 Dec 2015, 2 commits
  31. 24 Nov 2015, 1 commit
    • unix: avoid use-after-free in ep_remove_wait_queue · 7d267278
      Authored by Rainer Weikusat
      Rainer Weikusat <rweikusat@mobileactivedefense.com> writes:
      An AF_UNIX datagram socket being the client in an n:1 association with
      some server socket is only allowed to send messages to the server if the
      receive queue of this socket contains at most sk_max_ack_backlog
      datagrams. This implies that prospective writers might be forced to go
      to sleep despite none of the messages presently enqueued on the server
      receive queue having been sent by them. In order to ensure that these will
      be woken up once space becomes available again, the present unix_dgram_poll
      routine does a second sock_poll_wait call with the peer_wait wait queue
      of the server socket as queue argument (unix_dgram_recvmsg does a wake
      up on this queue after a datagram was received). This is inherently
      problematic because the server socket is only guaranteed to remain alive
      for as long as the client still holds a reference to it. In case the
      connection is dissolved via connect or by the dead peer detection logic
      in unix_dgram_sendmsg, the server socket may be freed even though "the
      polling mechanism" (in particular, epoll) still has a pointer to the
      corresponding peer_wait queue. There's no way to forcibly deregister a
      wait queue with epoll.
      
      Based on an idea by Jason Baron, the patch below changes the code such
      that a wait_queue_t belonging to the client socket is enqueued on the
      peer_wait queue of the server whenever the peer receive queue full
      condition is detected by either a sendmsg or a poll. A wake up on the
      peer queue is then relayed to the ordinary wait queue of the client
      socket via a wake function. The connection to the peer wait queue is
      dissolved again if a wake-up is about to be relayed, the client
      socket reconnects, a dead peer is detected, or the client socket is
      itself closed. This enables removing the second sock_poll_wait from
      unix_dgram_poll, thus avoiding the use-after-free, while still ensuring
      that no blocked writer sleeps forever.
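
      A sketch of the relaying wake function (names follow the patch, details
      trimmed):

      static int unix_dgram_peer_wake_relay(wait_queue_t *q, unsigned mode,
      				      int flags, void *key)
      {
      	struct unix_sock *u = container_of(q, struct unix_sock, peer_wake);
      	wait_queue_head_t *u_sleep;

      	/* dissolve the link so the relay fires at most once */
      	__remove_wait_queue(&unix_sk(u->peer_wake.private)->peer_wait, q);
      	u->peer_wake.private = NULL;

      	/* forward the wake-up to the client socket's own wait queue */
      	u_sleep = sk_sleep(&u->sk);
      	if (u_sleep)
      		wake_up_interruptible_poll(u_sleep, key);

      	return 0;
      }
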
      Signed-off-by: Rainer Weikusat <rweikusat@mobileactivedefense.com>
      Fixes: ec0d215f ("af_unix: fix 'poll for write'/connected DGRAM sockets")
      Reviewed-by: Jason Baron <jbaron@akamai.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7d267278
  32. 18 Nov 2015, 1 commit
  33. 17 Nov 2015, 1 commit