1. 02 December 2015, 3 commits
    • net: fix sock_wake_async() rcu protection · ceb5d58b
      Eric Dumazet authored
      Dmitry provided a syzkaller (http://github.com/google/syzkaller)
      program that triggers a fault in sock_wake_async() when async IO is
      requested.
      
      Said program stressed af_unix sockets, but the issue is generic
      and should be addressed in the core networking stack.
      
      The problem is that by the time sock_wake_async() is called,
      we should not access the @flags field of 'struct socket',
      as the inode containing this socket might be freed without
      further notice, and without an RCU grace period.
      
      We already maintain an RCU-protected structure, 'struct socket_wq',
      so moving SOCKWQ_ASYNC_NOSPACE & SOCKWQ_ASYNC_WAITDATA into it
      is the safe route (a minimal sketch of the resulting access pattern
      follows this entry).
      
      It also reduces the number of cache lines that need dirtying, so it
      might provide a performance improvement as well.
      
      In follow-up patches, we might move the remaining flags (SOCK_NOSPACE,
      SOCK_PASSCRED, SOCK_PASSSEC) to save 8 bytes and leave 'struct socket'
      mostly read-only, so it can be shared between CPUs.
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ceb5d58b
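      For illustration only, a minimal sketch of the access pattern this
      change enables, assuming the async flags now live in the RCU-protected
      struct socket_wq as described above (the helper name and the exact
      wakeup logic are this sketch's assumptions, not the patch itself):

        /* Read the async flags through the RCU-protected socket_wq instead
         * of through 'struct socket', so the possibly-freed inode/socket is
         * never touched. */
        static void example_wake_space(struct sock *sk, int band)
        {
            struct socket_wq *wq;

            rcu_read_lock();
            wq = rcu_dereference(sk->sk_wq);
            /* only signal writers if the "no space" condition was armed */
            if (wq && test_and_clear_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags) &&
                wq->fasync_list)
                kill_fasync(&wq->fasync_list, SIGIO, band);
            rcu_read_unlock();
        }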
    • net: rename SOCK_ASYNC_NOSPACE and SOCK_ASYNC_WAITDATA · 9cd3e072
      Eric Dumazet authored
      This patch is a cleanup to make the following patch easier to
      review.
      
      The goal is to move SOCK_ASYNC_NOSPACE and SOCK_ASYNC_WAITDATA
      from (struct socket)->flags to (struct socket_wq)->flags
      to benefit from RCU protection in sock_wake_async().
      
      To ease backports, we rename both constants.
      
      Two new helpers, sk_set_bit(int nr, struct sock *sk)
      and sk_clear_bit(int nr, struct sock *sk), are added so that the
      following patch can change their implementation (a sketch of a
      plausible initial form follows this entry).
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9cd3e072
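      The commit only names the helpers; a plausible initial form, assuming
      they are thin wrappers around set_bit()/clear_bit() on the socket's
      flags word (so that the follow-up patch can redirect them to
      socket_wq), might look like this:

        /* Hedged sketch: one call site a later patch can re-implement. */
        static inline void sk_set_bit(int nr, struct sock *sk)
        {
            set_bit(nr, &sk->sk_socket->flags);
        }

        static inline void sk_clear_bit(int nr, struct sock *sk)
        {
            clear_bit(nr, &sk->sk_socket->flags);
        }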
    • Revert "ipv6: ndisc: inherit metadata dst when creating ndisc requests" · 304d888b
      Nicolas Dichtel authored
      This reverts commit ab450605.
      
      In IPv6, we cannot inherit the dst of the original packet: ndisc
      packets are IPv6 packets themselves and may take a different route
      than the original packet.
      
      The reverted patch breaks the following scenario: a packet comes in on
      eth0 and is forwarded through vxlan1. The encapsulated packet triggers
      an NS which cannot be sent because of the wrong route.
      
      CC: Jiri Benc <jbenc@redhat.com>
      CC: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      304d888b
  2. 01 December 2015, 2 commits
  3. 30 November 2015, 1 commit
  4. 25 November 2015, 6 commits
    • RDS: fix race condition when sending a message on unbound socket · 8c7188b2
      Quentin Casasnovas authored
      Sasha found a NULL pointer dereference in the RDS connection code when
      sending a message to an apparently unbound socket.  The problem is
      caused by the code checking whether the socket is bound in
      rds_sendmsg(), which reads the rs_bound_addr field without taking a
      lock on the socket.  This opens a race where rs_bound_addr has been
      temporarily set but the transport has not yet been set in rds_bind(),
      leading to a NULL pointer dereference when trying to dereference
      'trans' in __rds_conn_create().
      
      Vegard wrote a reproducer for this issue; kindly ask him to share it
      if you're interested.
      
      I cannot reproduce the NULL pointer dereference using Vegard's
      reproducer with this patch applied, whereas I could without it (a
      hedged sketch of the locked check follows this entry).
      
      This completes the earlier, incomplete fix for CVE-2015-6937:
      
        74e98eb0 ("RDS: verify the underlying transport exists before creating a connection")
      
      Cc: David S. Miller <davem@davemloft.net>
      Cc: stable@vger.kernel.org
      Reviewed-by: Vegard Nossum <vegard.nossum@oracle.com>
      Reviewed-by: Sasha Levin <sasha.levin@oracle.com>
      Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
      Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8c7188b2
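      A hedged sketch of the kind of locked check described above, using the
      field names from the commit message; the helper itself and its exact
      placement in rds_sendmsg() are assumptions of this sketch:

        /* Read the destination and bound address while holding the socket
         * lock, so rs_bound_addr cannot be observed half-initialized
         * relative to the transport set up in rds_bind(). */
        static int example_check_bound(struct rds_sock *rs, __be32 *daddr)
        {
            int ret = 0;

            lock_sock(rds_rs_to_sk(rs));
            if (*daddr == 0)
                *daddr = rs->rs_conn_addr;    /* connected-socket case */
            if (*daddr == 0 || rs->rs_bound_addr == 0)
                ret = -ENOTCONN;              /* unbound: refuse to send */
            release_sock(rds_rs_to_sk(rs));

            return ret;
        }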
    • net: openvswitch: Remove invalid comment · 20f79566
      Aaron Conole authored
      During pre-upstream development, the openvswitch datapath used a custom
      hashtable to store vports that could fail on delete due to lack of
      memory. However, prior to upstream submission, this code was reworked to
      use an hlist-based hashtable with flexible-array based buckets. As such,
      the failure condition was eliminated from the vport_del path, rendering
      this comment invalid.
      Signed-off-by: Aaron Conole <aconole@bytheb.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      20f79566
    • net: ipmr, ip6mr: fix vif/tunnel failure race condition · fbdd29bf
      Nikolay Aleksandrov authored
      Since (at least) commit b17a7c17 ("[NET]: Do sysfs registration as
      part of register_netdevice."), netdev_run_todo() deals only with
      unregistration, so we don't need to do the rtnl_unlock/lock cycle to
      finish registration when pimreg or dvmrp device creation fails. In
      fact, that cycle opens a race condition where someone can delete the
      device while rtnl is unlocked, because the device is already fully
      registered. The problem gets worse when netlink support is introduced,
      as there are more entry points that can trigger the race, and it also
      makes it impossible to reuse that code correctly.
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Reviewed-by: Cong Wang <cwang@twopensource.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fbdd29bf
    • rxrpc: Correctly handle ack at end of client call transmit phase · 33c40e24
      David Howells authored
      Normally, the transmit phase of a client call is implicitly ack'd by the
      reception of the first data packet of the response being received.
      However, if a security negotiation happens, the transmit phase, if it is
      entirely contained in a single packet, may get an ack packet in response
      and then may get aborted due to security negotiation failure.
      
      Because the client has shifted state to RXRPC_CALL_CLIENT_AWAIT_REPLY due
      to having transmitted all the data, the code that handles processing of
      the received ack packet doesn't note the hard ack of the data packet.
      
      The following abort packet in the case of security negotiation failure then
      incurs an assertion failure when it tries to drain the Tx queue because the
      hard ack state is out of sync (hard ack means the packets have been
      processed and can be discarded by the sender; a soft ack means that the
      packets are received but could still be discarded and rerequested by the
      receiver).
      
      To fix this, we should record the hard ack we received for the ack packet.
      
      The assertion failure looks like:
      
      	RxRPC: Assertion failed
      	1 <= 0 is false
      	0x1 <= 0x0 is false
      	------------[ cut here ]------------
      	kernel BUG at ../net/rxrpc/ar-ack.c:431!
      	...
      	RIP: 0010:[<ffffffffa006857b>]  [<ffffffffa006857b>] rxrpc_rotate_tx_window+0xbc/0x131 [af_rxrpc]
      	...
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      33c40e24
    • ipv6: distinguish frag queues by device for multicast and link-local packets · 264640fc
      Michal Kubeček authored
      If a fragmented multicast packet is received on an ethernet device which
      has an active macvlan on top of it, each fragment is duplicated and
      received both on the underlying device and the macvlan. If some
      fragments for macvlan are processed before the whole packet for the
      underlying device is reassembled, the "overlapping fragments" test in
      ip6_frag_queue() discards the whole fragment queue.
      
      To resolve this, add the device ifindex to the search key and require
      it to match when reassembling multicast packets and packets sent to
      link-local addresses (a hedged sketch of the match follows this entry).
      
      Note: a similar patch was already submitted by Yoshifuji Hideaki in
      
        http://patchwork.ozlabs.org/patch/220979/
      
      but it got lost and forgotten for some reason.
      Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      264640fc
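      A hedged sketch of the resulting match rule: the incoming device's
      ifindex becomes part of the fragment-queue key, but is only required to
      match for multicast and link-local destinations (struct and field names
      here are illustrative, not the exact patch):

        /* Two fragments belong to the same queue only if they also arrived
         * on the same device when the destination is multicast or
         * link-local. */
        static bool example_frag_match(const struct frag_queue *fq, __be32 id,
                                       const struct in6_addr *saddr,
                                       const struct in6_addr *daddr, int iif)
        {
            if (fq->id != id ||
                !ipv6_addr_equal(&fq->saddr, saddr) ||
                !ipv6_addr_equal(&fq->daddr, daddr))
                return false;

            if (ipv6_addr_is_multicast(daddr) ||
                (ipv6_addr_type(daddr) & IPV6_ADDR_LINKLOCAL))
                return fq->iif == iif;    /* device must match too */

            return true;
        }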
    • tipc: fix error handling of expanding buffer headroom · 7098356b
      Ying Xue authored
      Coverity says:
      
      *** CID 1338065:  Error handling issues  (CHECKED_RETURN)
      /net/tipc/udp_media.c: 162 in tipc_udp_send_msg()
      156     	struct udp_media_addr *dst = (struct udp_media_addr *)&dest->value;
      157     	struct udp_media_addr *src = (struct udp_media_addr *)&b->addr.value;
      158     	struct sk_buff *clone;
      159     	struct rtable *rt;
      160
      161     	if (skb_headroom(skb) < UDP_MIN_HEADROOM)
      >>>     CID 1338065:  Error handling issues  (CHECKED_RETURN)
      >>>     Calling "pskb_expand_head" without checking return value (as is done elsewhere 51 out of 56 times).
      162     		pskb_expand_head(skb, UDP_MIN_HEADROOM, 0, GFP_ATOMIC);
      163
      164     	clone = skb_clone(skb, GFP_ATOMIC);
      165     	skb_set_inner_protocol(clone, htons(ETH_P_TIPC));
      166     	ub = rcu_dereference_rtnl(b->media_ptr);
      167     	if (!ub) {
      
      When expanding the buffer headroom over a UDP tunnel with
      pskb_expand_head(), we unfortunately don't check its return value. As a
      result, if the function returns an error code due to lack of memory, it
      may cause unpredictable consequences, because we unconditionally assume
      it always succeeds (a minimal sketch of the checked call follows this
      entry).
      
      Fixes: e5356794 ("tipc: conditionally expand buffer headroom over udp tunnel")
      Reported-by: <scan-admin@coverity.com>
      Cc: Stephen Hemminger <stephen@networkplumber.org>
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7098356b
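      A minimal sketch of the checked call, assuming the caller simply
      propagates the error and frees the skb on failure (the exact error path
      in the patch may differ):

        /* Ensure enough headroom for the tunnel header and report failure
         * instead of silently assuming pskb_expand_head() succeeded. */
        static int example_ensure_headroom(struct sk_buff *skb)
        {
            if (skb_headroom(skb) >= UDP_MIN_HEADROOM)
                return 0;

            /* may fail under memory pressure; the caller must bail out */
            return pskb_expand_head(skb, UDP_MIN_HEADROOM, 0, GFP_ATOMIC);
        }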
  5. 24 November 2015, 4 commits
    • tipc: avoid packets leaking on socket receive queue · f4195d1e
      Ying Xue authored
      Even if we drain the receive queue thoroughly in tipc_release() after
      the tipc socket has been removed from the rhashtable, it is possible
      that some packets are still in flight, because some CPU may be running
      the receive path and may have done its rhashtable lookup before we
      removed the socket. Those packets will reach the receive queue, but
      nobody ever deletes them. To avoid this leak, we register a private
      socket destructor that purges the receive queue, meaning that releasing
      packets pending on the receive queue is delayed until the last
      reference to the tipc socket is released (a hedged sketch follows this
      entry).
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f4195d1e
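      A hedged sketch of the destructor idea (the callback name is
      illustrative, not necessarily the one used in the patch):

        /* Runs when the last reference to the sock is dropped; purges any
         * packet that raced onto the receive queue after the socket had
         * already been removed from the rhashtable. */
        static void example_sock_destruct(struct sock *sk)
        {
            __skb_queue_purge(&sk->sk_receive_queue);
        }

        /* registered once at socket creation time, e.g.:
         *     sk->sk_destruct = example_sock_destruct;
         */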
    • net/hsr: fix a warning message · 3d1a54e8
      Dan Carpenter authored
      WARN_ON_ONCE() takes a condition; it doesn't take an error message. I
      have converted this to WARN() instead (a small illustration follows
      this entry).
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3d1a54e8
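      For illustration, the difference between the two macros (a generic
      example, not the actual hsr code):

        static void example_warn_usage(int ret)
        {
            /* Wrong: WARN_ON_ONCE() expects a condition; a string literal
             * is always non-zero, so this fires regardless of any error and
             * the intended message text is never printed. */
            WARN_ON_ONCE("something went wrong");

            /* Right: WARN() takes a condition plus a printf-style message. */
            WARN(ret < 0, "something went wrong (ret=%d)\n", ret);
        }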
    • unix: avoid use-after-free in ep_remove_wait_queue · 7d267278
      Rainer Weikusat authored
      Rainer Weikusat <rweikusat@mobileactivedefense.com> writes:
      An AF_UNIX datagram socket that is the client in an n:1 association with
      some server socket is only allowed to send messages to the server if the
      receive queue of that server socket contains at most sk_max_ack_backlog
      datagrams. This implies that prospective writers might be forced to go
      to sleep even though none of the messages presently enqueued on the
      server receive queue were sent by them. In order to ensure that these
      writers will be woken up once space becomes available again, the present
      unix_dgram_poll routine does a second sock_poll_wait call with the
      peer_wait wait queue of the server socket as the queue argument
      (unix_dgram_recvmsg does a wake up on this queue after a datagram was
      received). This is inherently problematic because the server socket is
      only guaranteed to remain alive for as long as the client still holds a
      reference to it. In case the connection is dissolved via connect or by
      the dead peer detection logic in unix_dgram_sendmsg, the server socket
      may be freed even though "the polling mechanism" (in particular, epoll)
      still has a pointer to the corresponding peer_wait queue. There is no
      way to forcibly deregister a wait queue with epoll.
      
      Based on an idea by Jason Baron, the patch below changes the code such
      that a wait_queue_t belonging to the client socket is enqueued on the
      peer_wait queue of the server whenever the peer receive queue full
      condition is detected by either a sendmsg or a poll. A wake up on the
      peer queue is then relayed to the ordinary wait queue of the client
      socket via a wake function (a minimal sketch of this relay follows this
      entry). The connection to the peer wait queue is again
      dissolved if either a wake up is about to be relayed or the client
      socket reconnects or a dead peer is detected or the client socket is
      itself closed. This enables removing the second sock_poll_wait from
      unix_dgram_poll, thus avoiding the use-after-free, while still ensuring
      that no blocked writer sleeps forever.
      Signed-off-by: Rainer Weikusat <rweikusat@mobileactivedefense.com>
      Fixes: ec0d215f ("af_unix: fix 'poll for write'/connected DGRAM sockets")
      Reviewed-by: Jason Baron <jbaron@akamai.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7d267278
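      A minimal sketch of the relay concept, assuming a peer_wake wait-queue
      entry embedded in the client's unix_sock as described above; the
      enqueue/dequeue bookkeeping is left out, so this is illustrative rather
      than the actual patch:

        /* The wait-queue entry is owned by the client socket, so epoll never
         * holds a pointer into memory whose lifetime the client does not
         * control; a wakeup on the peer's peer_wait queue is simply
         * forwarded to the client's own wait queue. */
        static int example_peer_wake_relay(wait_queue_t *q, unsigned int mode,
                                           int flags, void *key)
        {
            struct unix_sock *u = container_of(q, struct unix_sock, peer_wake);
            wait_queue_head_t *client_wq = sk_sleep(&u->sk);

            if (client_wq)
                wake_up_interruptible_poll(client_wq,
                                           POLLOUT | POLLWRNORM | POLLWRBAND);
            return 0;
        }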
    • cgroups: Allow dynamically changing net_classid · 3b13758f
      Nina Schiff authored
      The classid of a process is changed either when a process is moved to
      or from a cgroup or when the net_cls.classid file is updated.
      Previously net_cls only supported propagating these changes to the
      cgroup's related sockets when a process was added to or removed from
      the cgroup. This meant it was necessary to remove and re-add all
      processes to a cgroup in order to update its classid. This change
      introduces support for doing this dynamically, i.e. when the value in
      the net_cls.classid file is changed, this also triggers an update to
      the classid associated with all sockets controlled by the cgroup (a
      hedged sketch follows this entry).
      This mimics the behaviour of other cgroup subsystems.
      net_prio circumvents this issue by storing an index into a table with
      each socket (so any updates to the table don't require updating the
      value associated with the socket). net_cls, however, passes the classid
      to the socket directly, and so this additional step is needed.
      Signed-off-by: Nina Schiff <ninasc@fb.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3b13758f
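      A hedged sketch of the mechanism: when net_cls.classid is written, walk
      every task attached to the cgroup and every open socket file of each
      task, storing the new classid. The helper names and the exact wiring
      are assumptions of this sketch:

        /* Per-fd callback: if the file is a socket, overwrite its cached
         * classid with the value just written to net_cls.classid. */
        static int example_update_classid_sock(const void *v, struct file *file,
                                               unsigned int n)
        {
            int err;
            struct socket *sock = sock_from_file(file, &err);

            if (sock)
                sock->sk->sk_classid = (u32)(unsigned long)v;
            return 0;
        }

        /* Write-handler side: iterate the cgroup's tasks and their fds. */
        static void example_update_classid(struct cgroup_subsys_state *css,
                                           void *v)
        {
            struct css_task_iter it;
            struct task_struct *p;

            css_task_iter_start(css, &it);
            while ((p = css_task_iter_next(&it))) {
                task_lock(p);
                iterate_fd(p->files, 0, example_update_classid_sock, v);
                task_unlock(p);
            }
            css_task_iter_end(&it);
        }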
  6. 23 November 2015, 3 commits
    • net: ip6mr: fix static mfc/dev leaks on table destruction · 4c698046
      Nikolay Aleksandrov authored
      Similar to ipv4, when destroying an mrt table the static mfc entries and
      the static devices are kept, which leads to devices that can never be
      destroyed (because of the reference taken on them) and to leaked memory.
      Make sure that everything is cleaned up on netns destruction.
      
      Fixes: 8229efda ("netns: ip6mr: enable namespace support in ipv6 multicast forwarding code")
      CC: Benjamin Thery <benjamin.thery@bull.net>
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Reviewed-by: Cong Wang <cwang@twopensource.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4c698046
    • net: ipmr: fix static mfc/dev leaks on table destruction · 0e615e96
      Nikolay Aleksandrov authored
      When destroying an mrt table the static mfc entries and the static
      devices are kept, which leads to devices that can never be destroyed
      (because of the reference taken on them) and to leaked memory, for
      example:
      unreferenced object 0xffff880034c144c0 (size 192):
        comm "mfc-broken", pid 4777, jiffies 4320349055 (age 46001.964s)
        hex dump (first 32 bytes):
          98 53 f0 34 00 88 ff ff 98 53 f0 34 00 88 ff ff  .S.4.....S.4....
          ef 0a 0a 14 01 02 03 04 00 00 00 00 01 00 00 00  ................
        backtrace:
          [<ffffffff815c1b9e>] kmemleak_alloc+0x4e/0xb0
          [<ffffffff811ea6e0>] kmem_cache_alloc+0x190/0x300
          [<ffffffff815931cb>] ip_mroute_setsockopt+0x5cb/0x910
          [<ffffffff8153d575>] do_ip_setsockopt.isra.11+0x105/0xff0
          [<ffffffff8153e490>] ip_setsockopt+0x30/0xa0
          [<ffffffff81564e13>] raw_setsockopt+0x33/0x90
          [<ffffffff814d1e14>] sock_common_setsockopt+0x14/0x20
          [<ffffffff814d0b51>] SyS_setsockopt+0x71/0xc0
          [<ffffffff815cdbf6>] entry_SYSCALL_64_fastpath+0x16/0x7a
          [<ffffffffffffffff>] 0xffffffffffffffff
      
      Make sure that everything is cleaned up on netns destruction (a hedged
      sketch of the approach follows this entry).
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Reviewed-by: Cong Wang <cwang@twopensource.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0e615e96
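      A hedged sketch of the shape of the fix, assuming the table-cleanup
      helper grows an 'all' flag that forces removal of static vifs and cache
      entries on netns teardown; the flag name and the ipmr internals
      referenced here are assumptions of this sketch:

        static void example_clean_tables(struct mr_table *mrt, bool all)
        {
            LIST_HEAD(list);
            int i;

            for (i = 0; i < mrt->maxvif; i++) {
                /* on netns teardown (all == true) remove static vifs too,
                 * dropping the device reference that otherwise pins the
                 * device, and its memory, forever */
                if (!all && (mrt->vif_table[i].flags & VIFF_STATIC))
                    continue;
                vif_delete(mrt, i, 0, &list);    /* ipmr's internal helper */
            }
            unregister_netdevice_many(&list);

            /* an analogous loop over the mfc cache would skip MFC_STATIC
             * entries only when !all, and free them otherwise */
        }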
    • net, scm: fix PaX detected msg_controllen overflow in scm_detach_fds · 6900317f
      Daniel Borkmann authored
      David and HacKurx reported the following/similar size overflow,
      triggered in a grsecurity kernel and caught thanks to PaX's gcc
      size-overflow plugin:
      
      (Already fixed in later grsecurity versions by Brad and PaX Team.)
      
      [ 1002.296137] PAX: size overflow detected in function scm_detach_fds net/core/scm.c:314
                     cicus.202_127 min, count: 4, decl: msg_controllen; num: 0; context: msghdr;
      [ 1002.296145] CPU: 0 PID: 3685 Comm: scm_rights_recv Not tainted 4.2.3-grsec+ #7
      [ 1002.296149] Hardware name: Apple Inc. MacBookAir5,1/Mac-66F35F19FE2A0D05, [...]
      [ 1002.296153]  ffffffff81c27366 0000000000000000 ffffffff81c27375 ffffc90007843aa8
      [ 1002.296162]  ffffffff818129ba 0000000000000000 ffffffff81c27366 ffffc90007843ad8
      [ 1002.296169]  ffffffff8121f838 fffffffffffffffc fffffffffffffffc ffffc90007843e60
      [ 1002.296176] Call Trace:
      [ 1002.296190]  [<ffffffff818129ba>] dump_stack+0x45/0x57
      [ 1002.296200]  [<ffffffff8121f838>] report_size_overflow+0x38/0x60
      [ 1002.296209]  [<ffffffff816a979e>] scm_detach_fds+0x2ce/0x300
      [ 1002.296220]  [<ffffffff81791899>] unix_stream_read_generic+0x609/0x930
      [ 1002.296228]  [<ffffffff81791c9f>] unix_stream_recvmsg+0x4f/0x60
      [ 1002.296236]  [<ffffffff8178dc00>] ? unix_set_peek_off+0x50/0x50
      [ 1002.296243]  [<ffffffff8168fac7>] sock_recvmsg+0x47/0x60
      [ 1002.296248]  [<ffffffff81691522>] ___sys_recvmsg+0xe2/0x1e0
      [ 1002.296257]  [<ffffffff81693496>] __sys_recvmsg+0x46/0x80
      [ 1002.296263]  [<ffffffff816934fc>] SyS_recvmsg+0x2c/0x40
      [ 1002.296271]  [<ffffffff8181a3ab>] entry_SYSCALL_64_fastpath+0x12/0x85
      
      Further investigation showed that this can happen when an *odd* number of
      fds are being passed over AF_UNIX sockets.
      
      In these cases CMSG_LEN(i * sizeof(int)) and CMSG_SPACE(i * sizeof(int)),
      where i is the number of successfully passed fds, differ by 4 bytes due
      to the extra CMSG_ALIGN() padding in CMSG_SPACE() to an 8 byte boundary
      on 64 bit. The padding is used to align subsequent cmsg headers in the
      control buffer.
      
      When the control buffer passed in from the receiver side *lacks* these 4
      bytes (e.g. due to buggy/wrong API usage), then msg->msg_controllen will
      overflow in scm_detach_fds():
      
        int cmlen = CMSG_LEN(i * sizeof(int));  <--- cmlen w/o tail-padding
        err = put_user(SOL_SOCKET, &cm->cmsg_level);
        if (!err)
          err = put_user(SCM_RIGHTS, &cm->cmsg_type);
        if (!err)
          err = put_user(cmlen, &cm->cmsg_len);
        if (!err) {
          cmlen = CMSG_SPACE(i * sizeof(int));  <--- cmlen w/ 4 byte extra tail-padding
          msg->msg_control += cmlen;
          msg->msg_controllen -= cmlen;         <--- iff no tail-padding space here ...
        }                                            ... wrap-around
      
      F.e. it will wrap to a length of 18446744073709551612 bytes in case the
      receiver passed in msg->msg_controllen of 20 bytes, and the sender
      properly transferred 1 fd to the receiver, so that its CMSG_LEN results
      in 20 bytes and CMSG_SPACE in 24 bytes.
      
      In case of MSG_CMSG_COMPAT (scm_detach_fds_compat()), I haven't seen an
      issue in my tests as alignment seems always on 4 byte boundary. Same
      should be in case of native 32 bit, where we end up with 4 byte boundaries
      as well.
      
      In practice, passing msg->msg_controllen of 20 to recvmsg() while receiving
      a single fd would mean that on successful return, msg->msg_controllen is
      being set by the kernel to 24 bytes instead, thus more than the input
      buffer advertised. It could f.e. become an issue if such an application later
      on zeroes or copies the control buffer based on the returned msg->msg_controllen
      elsewhere.
      
      The maximum number of fds we can send is capped by the hard upper limit SCM_MAX_FD (253).
      
      Going over the code, it seems like msg->msg_controllen is not being read
      after scm_detach_fds() in scm_recv() anymore by the kernel, good!
      
      The relevant recvmsg() handlers are unix_dgram_recvmsg() (unix_seqpacket_recvmsg())
      and unix_stream_recvmsg(). Both return back to their recvmsg() caller,
      and ___sys_recvmsg() places the updated length, that is, new msg_control -
      old msg_control pointer into msg->msg_controllen (hence the 24 bytes seen
      in the example).
      
      Long time ago, Wei Yongjun fixed something related in commit 1ac70e7a
      ("[NET]: Fix function put_cmsg() which may cause usr application memory
      overflow").
      
      RFC3542, section 20.2. says:
      
        The fields shown as "XX" are possible padding, between the cmsghdr
        structure and the data, and between the data and the next cmsghdr
        structure, if required by the implementation. While sending an
        application may or may not include padding at the end of last
        ancillary data in msg_controllen and implementations must accept both
        as valid. On receiving a portable application must provide space for
        padding at the end of the last ancillary data as implementations may
        copy out the padding at the end of the control message buffer and
        include it in the received msg_controllen. When recvmsg() is called
        if msg_controllen is too small for all the ancillary data items
        including any trailing padding after the last item an implementation
        may set MSG_CTRUNC.
      
      Since we haven't set MSG_CTRUNC in this situation for quite a long time
      already, just do the same as in 1ac70e7a to avoid the overflow. (A small
      user-space illustration of the CMSG_LEN()/CMSG_SPACE() difference
      follows this entry.)
      
      Btw, even the man-page author got this wrong :/ See db939c9b26e9 ("cmsg.3: Fix
      error in SCM_RIGHTS code sample"). Some people must have copied this (?),
      thus it got triggered in the wild (reported several times during boot by
      David and HacKurx).
      
      No Fixes tag this time as pre 2002 (that is, pre history tree).
      Reported-by: David Sterba <dave@jikos.cz>
      Reported-by: HacKurx <hackurx@gmail.com>
      Cc: PaX Team <pageexec@freemail.hu>
      Cc: Emese Revfy <re.emese@gmail.com>
      Cc: Brad Spengler <spender@grsecurity.net>
      Cc: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
      Cc: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6900317f
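      The 4-byte difference is easy to see from user space; a small
      stand-alone program (purely illustrative, unrelated to the kernel fix
      itself) printing CMSG_LEN() and CMSG_SPACE() for a single fd:

        #include <stdio.h>
        #include <sys/socket.h>

        int main(void)
        {
            /* For one file descriptor (4 bytes of payload) on 64 bit:
             * CMSG_LEN   = header + payload            -> typically 20
             * CMSG_SPACE = header + payload + padding  -> typically 24
             * Handing the kernel only CMSG_LEN bytes of control buffer is
             * what lets msg_controllen wrap in the unpatched
             * scm_detach_fds(). */
            printf("CMSG_LEN(1 fd)   = %zu\n", (size_t)CMSG_LEN(sizeof(int)));
            printf("CMSG_SPACE(1 fd) = %zu\n", (size_t)CMSG_SPACE(sizeof(int)));
            return 0;
        }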
  7. 21 November 2015, 1 commit
    • tipc: correct settings of broadcast link state · 9a650838
      Jon Paul Maloy authored
      Since commit 52666986 ("tipc: let broadcast packet
      reception use new link receive function") the broadcast send
      link state was meant to always be set to LINK_ESTABLISHED, since
      we don't need this link to follow the regular link FSM rules. It
      was also the intention that this state shouldn't impact the run-time
      working state of the link anyway, since the latter is in reality
      controlled by the number of registered peers.
      
      We have now discovered that this assumption is not quite correct.
      If the broadcast link is reset because of too many retransmissions,
      its state will inadvertently go to LINK_RESETTING, and never go
      back to LINK_ESTABLISHED, because the LINK_FAILURE event was not
      anticipated. This will work well once, but if it happens a second
      time, the reset on a link in LINK_RESETTING state has no effect, and
      neither the broadcast link nor the unicast links will go down as
      they should.
      
      Furthermore, it is confusing that the management tool shows that
      this link is in UP state when that obviously isn't the case.
      
      We now ensure that this state strictly follows the true working
      state of the link. The state is set to LINK_ESTABLISHED when
      the number of peers is non-zero, and to LINK_RESET otherwise (a
      minimal sketch follows this entry).
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9a650838
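      A minimal sketch of the rule as stated above (the helper and the way
      the peer count is obtained are illustrative, not the patch itself):

        /* Keep the broadcast send link's state in lockstep with its real
         * working state: established while at least one peer is registered. */
        static void example_bcast_link_update(struct tipc_link *l, int peers)
        {
            l->state = peers ? LINK_ESTABLISHED : LINK_RESET;
        }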
  8. 20 November 2015, 3 commits
  9. 19 November 2015, 4 commits
    • tcp: md5: fix lockdep annotation · 1b8e6a01
      Eric Dumazet authored
      When a passive TCP socket is created, we eventually call tcp_md5_do_add()
      with sk pointing to the child. It is not owned by the user yet (we
      will add this socket to the listener's accept queue a bit later anyway).

      But we do own the spinlock, so amend the lockdep annotation to avoid
      the following splat (a hedged sketch of the amended annotation follows
      this entry):
      
      [ 8451.090932] net/ipv4/tcp_ipv4.c:923 suspicious rcu_dereference_protected() usage!
      [ 8451.090932]
      [ 8451.090932] other info that might help us debug this:
      [ 8451.090932]
      [ 8451.090934]
      [ 8451.090934] rcu_scheduler_active = 1, debug_locks = 1
      [ 8451.090936] 3 locks held by socket_sockopt_/214795:
      [ 8451.090936]  #0:  (rcu_read_lock){.+.+..}, at: [<ffffffff855c6ac1>] __netif_receive_skb_core+0x151/0xe90
      [ 8451.090947]  #1:  (rcu_read_lock){.+.+..}, at: [<ffffffff85618143>] ip_local_deliver_finish+0x43/0x2b0
      [ 8451.090952]  #2:  (slock-AF_INET){+.-...}, at: [<ffffffff855acda5>] sk_clone_lock+0x1c5/0x500
      [ 8451.090958]
      [ 8451.090958] stack backtrace:
      [ 8451.090960] CPU: 7 PID: 214795 Comm: socket_sockopt_
      
      [ 8451.091215] Call Trace:
      [ 8451.091216]  <IRQ>  [<ffffffff856fb29c>] dump_stack+0x55/0x76
      [ 8451.091229]  [<ffffffff85123b5b>] lockdep_rcu_suspicious+0xeb/0x110
      [ 8451.091235]  [<ffffffff8564544f>] tcp_md5_do_add+0x1bf/0x1e0
      [ 8451.091239]  [<ffffffff85645751>] tcp_v4_syn_recv_sock+0x1f1/0x4c0
      [ 8451.091242]  [<ffffffff85642b27>] ? tcp_v4_md5_hash_skb+0x167/0x190
      [ 8451.091246]  [<ffffffff85647c78>] tcp_check_req+0x3c8/0x500
      [ 8451.091249]  [<ffffffff856451ae>] ? tcp_v4_inbound_md5_hash+0x11e/0x190
      [ 8451.091253]  [<ffffffff85647170>] tcp_v4_rcv+0x3c0/0x9f0
      [ 8451.091256]  [<ffffffff85618143>] ? ip_local_deliver_finish+0x43/0x2b0
      [ 8451.091260]  [<ffffffff856181b6>] ip_local_deliver_finish+0xb6/0x2b0
      [ 8451.091263]  [<ffffffff85618143>] ? ip_local_deliver_finish+0x43/0x2b0
      [ 8451.091267]  [<ffffffff85618d38>] ip_local_deliver+0x48/0x80
      [ 8451.091270]  [<ffffffff85618510>] ip_rcv_finish+0x160/0x700
      [ 8451.091273]  [<ffffffff8561900e>] ip_rcv+0x29e/0x3d0
      [ 8451.091277]  [<ffffffff855c74b7>] __netif_receive_skb_core+0xb47/0xe90
      
      Fixes: a8afca03 ("tcp: md5: protects md5sig_info with RCU")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1b8e6a01
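      A hedged sketch of what "amend the lockdep annotation" can look like
      here: the rcu_dereference_protected() condition is widened so that
      holding the child's spinlock (as sk_clone_lock() does) also satisfies
      it; the exact condition used by the patch is an assumption of this
      sketch:

        /* Accept either ownership by the user (socket lock held the usual
         * way) or the bare slock, which is what we hold for a freshly
         * cloned passive socket. */
        static struct tcp_md5sig_info *example_md5sig_deref(struct sock *sk)
        {
            struct tcp_sock *tp = tcp_sk(sk);

            return rcu_dereference_protected(tp->md5sig_info,
                                             sock_owned_by_user(sk) ||
                                             lockdep_is_held(&sk->sk_lock.slock));
        }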
    • net: dns_resolver: convert time_t to time64_t · 451c2b5c
      Aya Mahfouz authored
      Changes the definition of the pointer _expiry from time_t to
      time64_t. This is to handle the Y2038 problem where time_t
      will overflow in the year 2038. The change is safe because
      the kernel subsystems that call dns_query pass NULL.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Aya Mahfouz <mahfouz.saif.elyazal@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      451c2b5c
    • net/ip6_tunnel: fix dst leak · 206b4950
      Paolo Abeni authored
      Commit cdf3464e ("ipv6: Fix dst_entry refcnt bugs in ip6_tunnel")
      introduced per-CPU storage for the ip6_tunnel dst cache, but while
      clearing that cache it used raw_cpu_ptr to walk the per-CPU entries, so
      cached dsts on non-current CPUs were not actually reset.
      
      This patch replaces raw_cpu_ptr with per_cpu_ptr, properly cleaning
      such storage (a minimal sketch follows this entry).
      
      Fixes: cdf3464e ("ipv6: Fix dst_entry refcnt bugs in ip6_tunnel")
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      206b4950
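      A minimal sketch of the corrected walk; the function name and the
      layout of the per-CPU cache entry are assumptions of this sketch:

        /* per_cpu_ptr() reaches every CPU's slot; raw_cpu_ptr() only ever
         * resolved to the slot of the CPU running this code, so dsts cached
         * by other CPUs were never released. */
        static void example_ip6_tnl_dst_reset(struct ip6_tnl *t)
        {
            int cpu;

            for_each_possible_cpu(cpu) {
                struct ip6_tnl_dst *idst = per_cpu_ptr(t->dst_cache, cpu);

                dst_release(xchg(&idst->dst, NULL));    /* drop cached dst */
            }
        }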
    • 945fae44
  10. 18 November 2015, 8 commits
  11. 17 November 2015, 5 commits