1. 12 March 2014, 8 commits
  2. 11 March 2014, 3 commits
  3. 07 March 2014, 12 commits
    • ipv6: don't set DST_NOCOUNT for remotely added routes · c88507fb
      Sabrina Dubroca authored
      DST_NOCOUNT should only be used if an authorized user adds routes
      locally. For routes added on behalf of router advertisements, this
      flag must not be used, as it would allow an unlimited number of
      routes to be added remotely.
      Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
      Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c88507fb
    • ipv6: Fix exthdrs offload registration. · d2d273ff
      Anton Nayshtut authored
      Without this fix, ipv6_exthdrs_offload_init doesn't register IPPROTO_DSTOPTS
      offload, but returns 0 (as the IPPROTO_ROUTING registration actually succeeds).
      
      This then causes ipv6_gso_segment to drop IPv6 packets with an
      IPPROTO_DSTOPTS header.
      
      The issue was detected and the fix verified by running the MS HCK Offload
      LSO test on top of QEMU Windows guests, as this test sends IPv6 packets
      with an IPPROTO_DSTOPTS header.
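      
      For illustration only (not taken from the patch), a sketch of how this
      symptom can arise if the success check after the first registration is
      inverted; inet6_add_offload() is the usual helper, other labels are
      approximate:
      
      	ret = inet6_add_offload(&rthdr_offload, IPPROTO_ROUTING);
      	if (!ret)		/* inverted: bails out on success, so the       */
      		goto out;	/* IPPROTO_DSTOPTS offload below never registers */
      
      	ret = inet6_add_offload(&dstopt_offload, IPPROTO_DSTOPTS);
      	if (ret)
      		goto out_rt;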
      Signed-off-by: Anton Nayshtut <anton@swortex.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d2d273ff
    • net: unix socket code abuses csum_partial · 0a13404d
      Anton Blanchard authored
      The unix socket code is using the result of csum_partial to
      hash into a lookup table:
      
      	unix_hash_fold(csum_partial(sunaddr, len, 0));
      
      csum_partial is only guaranteed to produce something that can be
      folded into a checksum, as its prototype explains:
      
       * returns a 32-bit number suitable for feeding into itself
       * or csum_tcpudp_magic
      
      The 32bit value should not be used directly.
      
      Depending on the alignment, the ppc64 csum_partial will return
      different 32bit partial checksums that will fold into the same
      16bit checksum.
      
      This difference causes the following testcase (courtesy of
      Gustavo) to sometimes fail:
      
      #include <sys/socket.h>
      #include <stdio.h>
      
      int main()
      {
      	int fd = socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC, 0);
      
      	int i = 1;
      	setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &i, 4);
      
      	struct sockaddr addr;
      	addr.sa_family = AF_LOCAL;
      	bind(fd, &addr, 2);
      
      	listen(fd, 128);
      
      	struct sockaddr_storage ss;
      	socklen_t sslen = (socklen_t)sizeof(ss);
      	getsockname(fd, (struct sockaddr*)&ss, &sslen);
      
      	fd = socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC, 0);
      
      	if (connect(fd, (struct sockaddr*)&ss, sslen) == -1){
      		perror(NULL);
      		return 1;
      	}
      	printf("OK\n");
      	return 0;
      }
      
      As suggested by davem, fix this by using csum_fold to fold the
      partial 32bit checksum into a 16bit checksum before using it.
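      
      As a rough userspace illustration of why folding helps (a sketch, not
      the kernel's csum_fold implementation): the fold collapses the 32-bit
      partial sum into 16 bits by adding the carries back in, so two
      alignment-dependent partial sums that are fold-equivalent become
      identical before they are hashed:
      
      	/* add the high half into the low half (twice, to absorb any new
      	 * carry), then complement -- a plain C sketch of a 32->16 fold */
      	static unsigned short fold32_to_16(unsigned int sum)
      	{
      		sum = (sum & 0xffff) + (sum >> 16);
      		sum = (sum & 0xffff) + (sum >> 16);
      		return (unsigned short)~sum;
      	}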
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0a13404d
    • inet: frag: make sure forced eviction removes all frags · e588e2f2
      Florian Westphal authored
      Quoting Alexander Aring:
        While fragmentation and unloading of 6lowpan module I got this kernel Oops
        after few seconds:
      
        BUG: unable to handle kernel paging request at f88bbc30
        [..]
        Modules linked in: ipv6 [last unloaded: 6lowpan]
        Call Trace:
         [<c012af4c>] ? call_timer_fn+0x54/0xb3
         [<c012aef8>] ? process_timeout+0xa/0xa
         [<c012b66b>] run_timer_softirq+0x140/0x15f
      
      The problem is that incomplete frags are still around after unload; when
      their expire timer fires, we get a crash.
      
      When a netns is removed (also done when unloading the module), inet_frag
      calls the evictor with the 'force' argument to purge remaining frags.
      
      The evictor loop terminates when accounted memory ('work') drops to 0
      or the lru-list becomes empty.  However, the mem accounting is done
      via percpu counters and may not be accurate, i.e. loop may terminate
      prematurely.
      
      Alter evictor to only stop once the lru list is empty when force is
      requested.
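      
      A sketch of the intended loop shape (names approximate, not the literal
      patch): when 'force' is set, ignore the possibly inaccurate work counter
      and stop only once the LRU list is empty:
      
      	while (work > 0 || force) {
      		if (list_empty(&nf->lru_list))
      			break;		/* the only exit under 'force' */
      		/* ... evict the oldest frag queue, decrement 'work' ... */
      	}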
      Reported-by: Phoebe Buckheister <phoebe.buckheister@itwm.fraunhofer.de>
      Reported-by: Alexander Aring <alex.aring@gmail.com>
      Tested-by: Alexander Aring <alex.aring@gmail.com>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e588e2f2
    • tipc: don't log disabled tasklet handler errors · 2892505e
      Erik Hugne authored
      Failure to schedule a TIPC tasklet with tipc_k_signal because the
      tasklet handler is disabled is not an error. It means TIPC is
      currently in the process of shutting down. We remove the error
      logging in this case.
      Signed-off-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2892505e
    • tipc: fix memory leak during module removal · 1bb8dce5
      Erik Hugne authored
      When the TIPC module is removed, the tasklet handler is disabled
      before all other subsystems. This will cause lingering publications
      in the name table, because the node_down tasklets responsible for
      cleaning up publications from an unreachable node will never run.
      When the name table is shut down, these publications are detected
      and an error message is logged:
      tipc: nametbl_stop(): orphaned hash chain detected
      This is actually a memory leak, introduced with commit
      993b858e ("tipc: correct the order
      of stopping services at rmmod")
      
      Instead of just logging an error and leaking memory, we free
      the orphaned entries during nametable shutdown.
      Signed-off-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1bb8dce5
    • tipc: drop subscriber connection id invalidation · edcc0511
      Erik Hugne authored
      When a topology server subscriber is disconnected, the associated
      connection id is set to zero. A check vs zero is then done in the
      subscription timeout function to see if the subscriber has been
      shut down. This is unnecessary, because all subscription timers
      will be cancelled when a subscriber terminates. Setting the
      connection id to zero is actually harmful because id zero is the
      identity of the topology server listening socket, and can cause a
      race that leads to this socket being closed instead.
      Signed-off-by: Erik Hugne <erik.hugne@ericsson.com>
      Acked-by: Ying Xue <ying.xue@windriver.com>
      Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      edcc0511
    • tipc: avoid to unnecessary process switch under non-block mode · fe8e4649
      Ying Xue authored
      When messages are received via a tipc socket in non-block mode,
      schedule_timeout() is called in tipc_wait_for_rcvmsg(); that is,
      the receiving process is scheduled out once even though the
      timeout value passed to schedule_timeout() is 0. The same issue
      exists in accept()/wait_for_accept(). To avoid this unnecessary
      process switch, we only call schedule_timeout() if the timeout
      value is non-zero.
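      
      The guard is simply (a sketch, not the literal diff):
      
      	if (timeo)			/* non-blocking callers pass 0 */
      		timeo = schedule_timeout(timeo);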
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fe8e4649
    • tipc: fix connection refcount leak · 4652edb7
      Ying Xue authored
      When tipc_conn_sendmsg() calls tipc_conn_lookup() to look up a
      connection instance, the connection's reference count is increased
      if it is found. But if the connection subsequently turns out to be
      closed, the send work is not queued on the server's send workqueue
      and the connection reference count is never decreased. This causes
      a reference count leak. To reproduce the problem, an application
      would need to open and close topology server connections at a high
      rate.
      
      We fix this by immediately decrementing the connection reference
      count if a send fails due to the connection being closed.
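      
      In sketch form (helper and field names approximate):
      
      	con = tipc_conn_lookup(s, conid);	/* takes a reference */
      	if (!con)
      		return -EINVAL;
      
      	if (!test_bit(CF_CONNECTED, &con->flags)) {
      		conn_put(con);			/* drop the ref taken above */
      		return 0;
      	}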
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Acked-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4652edb7
    • tipc: allow connection shutdown callback to be invoked in advance · 6d4ebeb4
      Ying Xue authored
      Currently the connection shutdown callback is called when the
      connection instance is released in tipc_conn_kref_release(), while
      receiving and sending packets run in different threads. Even if a
      connection is closed by the receiving thread, its shutdown callback
      may not be called immediately, as the connection reference count is
      non-zero at that moment. So although the connection has been shut
      down by the receiving thread, the sending thread doesn't know it.
      Until the shutdown callback is invoked to tell the sending thread
      that its connection has been closed, the sending thread may keep
      delivering messages via tipc_conn_sendmsg(), which is why the
      following error message appears:
      
      "Sending subscription event failed, no memory"
      
      To eliminate it, allow the connection shutdown callback to be
      called before the connection id is removed in tipc_close_conn(),
      so that the sending thread learns in time that its socket has been
      closed and stops sending messages to it. We also remove the
      "Sending XXX failed..." error reporting for the topology and
      config services.
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6d4ebeb4
    • l2tp: fix userspace reception on plain L2TP sockets · 9e9cb622
      Guillaume Nault authored
      As pppol2tp_recv() never queues up packets to plain L2TP sockets,
      pppol2tp_recvmsg() never returns data to userspace, thus making
      the recv*() system calls unusable.
      
      Instead of dropping packets when the L2TP socket isn't bound to a PPP
      channel, this patch adds them to its reception queue.
      Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9e9cb622
    • l2tp: fix manual sequencing (de)activation in L2TPv2 · bb5016ea
      Guillaume Nault authored
      Commit e0d4435f "l2tp: Update PPP-over-L2TP driver to work over L2TPv3"
      broke the PPPOL2TP_SO_SENDSEQ setsockopt. The L2TP header length was
      previously computed by pppol2tp_l2t_header_len() before each call to
      l2tp_xmit_skb(). Now that header length is retrieved from the hdr_len
      session field, this field must be updated every time the L2TP header
      format is modified, or l2tp_xmit_skb() won't push the right amount of
      data for the L2TP header.
      
      This patch uses l2tp_session_set_header_len() to adjust hdr_len every
      time sequencing is (de)activated from userspace (either by the
      PPPOL2TP_SO_SENDSEQ setsockopt or the L2TP_ATTR_SEND_SEQ netlink
      attribute).
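      
      Roughly, the setsockopt path becomes (a sketch, field names approximate):
      
      	case PPPOL2TP_SO_SENDSEQ:
      		session->send_seq = !!val;
      		/* keep hdr_len consistent with the new header format */
      		l2tp_session_set_header_len(session, session->tunnel->version);
      		break;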
      Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bb5016ea
  4. 06 March 2014, 3 commits
    • net: sctp: fix skb leakage in COOKIE ECHO path of chunk->auth_chunk · c485658b
      Daniel Borkmann authored
      While working on ec0223ec ("net: sctp: fix sctp_sf_do_5_1D_ce to
      verify if we/peer is AUTH capable"), we noticed that there's a skb
      memory leakage in the error path.
      
      Running the same reproducer as in ec0223ec and unconditionally
      jumping to the error label (to simulate an error condition) in the
      sctp_sf_do_5_1D_ce() receive path makes the kmemleak detector bark
      about the unfreed chunk->auth_chunk skb clone:
      
      Unreferenced object 0xffff8800b8f3a000 (size 256):
        comm "softirq", pid 0, jiffies 4294769856 (age 110.757s)
        hex dump (first 32 bytes):
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
          89 ab 75 5e d4 01 58 13 00 00 00 00 00 00 00 00  ..u^..X.........
        backtrace:
          [<ffffffff816660be>] kmemleak_alloc+0x4e/0xb0
          [<ffffffff8119f328>] kmem_cache_alloc+0xc8/0x210
          [<ffffffff81566929>] skb_clone+0x49/0xb0
          [<ffffffffa0467459>] sctp_endpoint_bh_rcv+0x1d9/0x230 [sctp]
          [<ffffffffa046fdbc>] sctp_inq_push+0x4c/0x70 [sctp]
          [<ffffffffa047e8de>] sctp_rcv+0x82e/0x9a0 [sctp]
          [<ffffffff815abd38>] ip_local_deliver_finish+0xa8/0x210
          [<ffffffff815a64af>] nf_reinject+0xbf/0x180
          [<ffffffffa04b4762>] nfqnl_recv_verdict+0x1d2/0x2b0 [nfnetlink_queue]
          [<ffffffffa04aa40b>] nfnetlink_rcv_msg+0x14b/0x250 [nfnetlink]
          [<ffffffff815a3269>] netlink_rcv_skb+0xa9/0xc0
          [<ffffffffa04aa7cf>] nfnetlink_rcv+0x23f/0x408 [nfnetlink]
          [<ffffffff815a2bd8>] netlink_unicast+0x168/0x250
          [<ffffffff815a2fa1>] netlink_sendmsg+0x2e1/0x3f0
          [<ffffffff8155cc6b>] sock_sendmsg+0x8b/0xc0
          [<ffffffff8155d449>] ___sys_sendmsg+0x369/0x380
      
      What happens is that commit bbd0d598 clones the skb containing
      the AUTH chunk in sctp_endpoint_bh_rcv() in the edge case where
      an endpoint requires COOKIE-ECHO chunks to be authenticated:
      
        ---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ---------->
        <------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] ---------
        ------------------ AUTH; COOKIE-ECHO ---------------->
        <-------------------- COOKIE-ACK ---------------------
      
      When we enter sctp_sf_do_5_1D_ce() and before we actually get to
      the point where we process (and subsequently free) a non-NULL
      chunk->auth_chunk, we could hit the "goto nomem_init" path from
      an error condition and thus leave the cloned skb around w/o
      freeing it.
      
      The fix is to centrally free such clones in the sctp_chunk_destroy()
      handler that is invoked from sctp_chunk_free() after all refs have
      been dropped, and to move both kfree_skb(chunk->auth_chunk) calls
      there, so that chunk->auth_chunk is either NULL (since sctp_chunkify()
      allocates new chunks through kmem_cache_zalloc()) or non-NULL with
      a valid skb pointer. chunk->skb and chunk->auth_chunk are the
      only skbs in the sctp_chunk structure that need to be handled.
      
      While at it, we should use consume_skb() for both. It is the same
      as dev_kfree_skb() but more appropriately named, as we are not
      a device but a protocol. This also effectively replaces the
      kfree_skb() in both invocations with consume_skb(); the functions
      behave the same, except that kfree_skb() assumes the frame was
      dropped after a failure (e.g. for tools like drop monitor), so
      consume_skb() is the more appropriate choice in sctp_chunk_destroy().
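      
      The destroy path then looks roughly like this (a sketch, not the full
      function; cache name approximate):
      
      	static void sctp_chunk_destroy(struct sctp_chunk *chunk)
      	{
      		/* free the data skb and, if present, the AUTH clone taken
      		 * in sctp_endpoint_bh_rcv(); consume_skb() accepts NULL */
      		consume_skb(chunk->skb);
      		consume_skb(chunk->auth_chunk);
      
      		kmem_cache_free(sctp_chunk_cachep, chunk);
      	}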
      
      Fixes: bbd0d598 ("[SCTP]: Implement the receive and verification of AUTH chunk")
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Vlad Yasevich <yasevich@gmail.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: Vlad Yasevich <vyasevich@gmail.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c485658b
    • bridge: multicast: add sanity check for query source addresses · 6565b9ee
      Linus Lüssing authored
      MLD queries are supposed to have an IPv6 link-local source address
      according to RFC2710, section 4 and RFC3810, section 5.1.14. This patch
      adds a sanity check to ignore such broken MLD queries.
      
      Without this check, such malformed MLD queries can result in a
      denial of service: the queries are ignored by any MLD listener,
      so none of them responds with an MLD report. However, without
      this patch these malformed MLD queries would still enable the
      snooping part of the bridge code, potentially shutting down the
      corresponding ports towards these hosts for multicast traffic,
      as the bridge did not learn about these listeners.
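      
      The check itself is small (a sketch; exact placement in the bridge
      multicast code differs):
      
      	/* MLD queries must have an IPv6 link-local source (RFC 2710,
      	 * RFC 3810); otherwise ignore the query instead of snooping on it */
      	if (!(ipv6_addr_type(&ip6h->saddr) & IPV6_ADDR_LINKLOCAL))
      		goto out;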
      Reported-by: Jan Stancek <jstancek@redhat.com>
      Signed-off-by: Linus Lüssing <linus.luessing@web.de>
      Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6565b9ee
    • net: fix for a race condition in the inet frag code · 24b9bf43
      Nikolay Aleksandrov authored
      I stumbled upon this very serious bug while hunting for another one;
      it's a very subtle race condition between inet_frag_evictor,
      inet_frag_intern and the IPv4/6 frag_queue and expire functions
      (basically the users of inet_frag_kill/inet_frag_put).
      
      What happens is that after a fragment has been added to the hash chain,
      but before it's been added to the lru_list (inet_frag_lru_add) in
      inet_frag_intern, it may get deleted (either by an expired timer if
      the system load is high or the timer is sufficiently short, or by the
      frag_queue function for various reasons) before it's added to the
      lru_list. Then, after it does get added, it's only a matter of time
      before the evictor reaches a piece of memory that has been freed,
      leading to a number of different bugs depending on what's left there.
      
      I've been able to trigger this on both IPv4 and IPv6 (which is normal
      as the frag code is the same), but it's been much more difficult to
      trigger on IPv4 due to the protocol differences about how fragments
      are treated.
      
      The setup I used to reproduce this is: 2 machines with 4 x 10G bonded
      in a RR bond, so the same flow can be seen on multiple cards at the
      same time. Then I used multiple instances of ping/ping6 to generate
      fragmented packets and flood the machines with them while running
      other processes to load the attacked machine.
      
      *It is very important to have the _same flow_ coming in on multiple CPUs
      concurrently. Usually the attacked machine would die in less than 30
      minutes, if configured properly to have many evictor calls and timeouts
      it could happen in 10 minutes or so.
      
      An important point to make is that any caller (frag_queue or timer) of
      inet_frag_kill will remove both the timer refcount and the
      original/guarding refcount thus removing everything that's keeping the
      frag from being freed at the next inet_frag_put.  All of this could
      happen before the frag was ever added to the LRU list, then it gets
      added and the evictor uses a freed fragment.
      
      An example for IPv6: a fragment is being added and is at the stage
      of being inserted in the hash after the hash lock has been released,
      but before inet_frag_lru_add executes (or is able to obtain the lru
      lock). Another overlapping fragment for the same flow arrives on a
      different CPU and finds it in the hash; since it's overlapping, it
      drops it by invoking inet_frag_kill, thus removing all guarding
      refcounts, and afterwards frees it by invoking inet_frag_put, which
      removes the last refcount added previously by inet_frag_find. Then
      inet_frag_lru_add gets executed by inet_frag_intern and we have a
      freed fragment in the lru_list.
      
      The fix is simple: just move the lru_add under the hash chain locked
      region, so when a removing function is called it has to wait for the
      fragment to be added to the lru_list and only then removes it (this
      works because the hash chain removal is done before the lru_list one,
      and there's no window between the two list adds in which the frag can
      get dropped). With this fix applied I couldn't kill the same machine
      in 24 hours with the same setup.
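      
      In sketch form, the ordering change in inet_frag_intern (names
      approximate, not the literal hunk):
      
      	spin_lock(&hb->chain_lock);
      	hlist_add_head(&qp->list, &hb->chain);
      	inet_frag_lru_add(nf, qp);	/* now inside the chain lock, so a  */
      	spin_unlock(&hb->chain_lock);	/* concurrent kill sees it on the LRU */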
      
      Fixes: 3ef0eb0d ("net: frag, move LRU list maintenance outside of
      rwlock")
      
      CC: Florian Westphal <fw@strlen.de>
      CC: Jesper Dangaard Brouer <brouer@redhat.com>
      CC: David S. Miller <davem@davemloft.net>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      24b9bf43
  5. 05 March 2014, 1 commit
  6. 04 March 2014, 4 commits
    • net: sctp: fix sctp_sf_do_5_1D_ce to verify if we/peer is AUTH capable · ec0223ec
      Daniel Borkmann authored
      RFC4895 introduced AUTH chunks for SCTP; during the SCTP
      handshake RANDOM; CHUNKS; HMAC-ALGO are negotiated (CHUNKS
      being optional though):
      
        ---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ---------->
        <------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] ---------
        -------------------- COOKIE-ECHO -------------------->
        <-------------------- COOKIE-ACK ---------------------
      
      A special case is when an endpoint requires COOKIE-ECHO
      chunks to be authenticated:
      
        ---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ---------->
        <------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] ---------
        ------------------ AUTH; COOKIE-ECHO ---------------->
        <-------------------- COOKIE-ACK ---------------------
      
      RFC4895, section 6.3. Receiving Authenticated Chunks says:
      
        The receiver MUST use the HMAC algorithm indicated in
        the HMAC Identifier field. If this algorithm was not
        specified by the receiver in the HMAC-ALGO parameter in
        the INIT or INIT-ACK chunk during association setup, the
        AUTH chunk and all the chunks after it MUST be discarded
        and an ERROR chunk SHOULD be sent with the error cause
        defined in Section 4.1. [...] If no endpoint pair shared
        key has been configured for that Shared Key Identifier,
        all authenticated chunks MUST be silently discarded. [...]
      
        When an endpoint requires COOKIE-ECHO chunks to be
        authenticated, some special procedures have to be followed
        because the reception of a COOKIE-ECHO chunk might result
        in the creation of an SCTP association. If a packet arrives
        containing an AUTH chunk as a first chunk, a COOKIE-ECHO
        chunk as the second chunk, and possibly more chunks after
        them, and the receiver does not have an STCB for that
        packet, then authentication is based on the contents of
        the COOKIE-ECHO chunk. In this situation, the receiver MUST
        authenticate the chunks in the packet by using the RANDOM
        parameters, CHUNKS parameters and HMAC_ALGO parameters
        obtained from the COOKIE-ECHO chunk, and possibly a local
        shared secret as inputs to the authentication procedure
        specified in Section 6.3. If authentication fails, then
        the packet is discarded. If the authentication is successful,
        the COOKIE-ECHO and all the chunks after the COOKIE-ECHO
        MUST be processed. If the receiver has an STCB, it MUST
        process the AUTH chunk as described above using the STCB
        from the existing association to authenticate the
        COOKIE-ECHO chunk and all the chunks after it. [...]
      
      Commit bbd0d598 introduced the reception and verification of
      AUTH chunks, including the edge case of authenticated
      COOKIE-ECHO. On reception of a COOKIE-ECHO,
      the function sctp_sf_do_5_1D_ce() handles processing,
      unpacks and creates a new association if it passed sanity
      checks and also tests for authentication chunks being
      present. After a new association has been processed, it
      invokes sctp_process_init() on the new association and
      walks through the parameter list it received from the INIT
      chunk. It checks SCTP_PARAM_RANDOM, SCTP_PARAM_HMAC_ALGO
      and SCTP_PARAM_CHUNKS, and copies them into asoc->peer
      meta data (peer_random, peer_hmacs, peer_chunks) in case
      sysctl -w net.sctp.auth_enable=1 is set. If in INIT's
      SCTP_PARAM_SUPPORTED_EXT parameter SCTP_CID_AUTH is set,
      peer_random != NULL and peer_hmacs != NULL the peer is to be
      assumed asoc->peer.auth_capable=1, in any other case
      asoc->peer.auth_capable=0.
      
      Now, if in sctp_sf_do_5_1D_ce() chunk->auth_chunk is
      available, we set up a fake auth chunk and pass it on to
      sctp_sf_authenticate(), which, at the latest in
      sctp_auth_calculate_hmac(), reliably dereferences a NULL pointer
      at position 0..0008 when setting up the crypto key in
      crypto_hash_setkey(): it uses asoc->asoc_shared_key, which is
      NULL, because the condition key_id == asoc->active_key_id is true
      when the AUTH chunk was injected correctly from the remote side.
      This happens no matter what the net.sctp.auth_enable sysctl says.
      
      The fix is to check for net->sctp.auth_enable and for
      asoc->peer.auth_capable before doing any operations like
      sctp_sf_authenticate() as no key is activated in
      sctp_auth_asoc_init_active_key() for each case.
      
      RFC4895 section 6.3 states that if the HMAC-ALGO passed in the
      INIT chunk was not used in the AUTH chunk, we SHOULD send an
      error; however, in this case it is better to just silently discard
      such a maliciously prepared handshake, as we didn't even receive
      the parameter at all. Also, as our endpoint has no shared key
      configured, section 6.3 says we MUST silently discard, which is
      what we do from now on.
      
      Before calling sctp_sf_pdiscard(), we need not only to free
      the association, but also the chunk->auth_chunk skb, as
      commit bbd0d598 created a skb clone in that case.
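      
      Conceptually, the guard looks like this (a sketch, not the literal
      hunk):
      
      	if (chunk->auth_chunk) {
      		if (!net->sctp.auth_enable || !new_asoc->peer.auth_capable) {
      			/* no key is active for this association: silently
      			 * discard, releasing the clone and the new asoc */
      			kfree_skb(chunk->auth_chunk);
      			sctp_association_free(new_asoc);
      			return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
      		}
      		/* ... otherwise build the fake AUTH chunk and authenticate ... */
      	}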
      
      I have tested this locally by using netfilter's nfqueue and
      re-injecting packets into the local stack after maliciously
      modifying the INIT chunk (removing RANDOM; HMAC-ALGO param)
      and the SCTP packet containing the COOKIE_ECHO (injecting
      AUTH chunk before COOKIE_ECHO). Fixed with this patch applied.
      
      Fixes: bbd0d598 ("[SCTP]: Implement the receive and verification of AUTH chunk")
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Vlad Yasevich <yasevich@gmail.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: Vlad Yasevich <vyasevich@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ec0223ec
    • ip_tunnel:multicast process cause panic due to skb->_skb_refdst NULL pointer · 10ddceb2
      Xin Long authored
      When ip_tunnel processes multicast packets, it may check whether a packet is
      a looped-back packet via 'rt_is_output_route(skb_rtable(skb))' in ip_tunnel_rcv().
      But before that, skb->_skb_refdst has already been dropped in
      iptunnel_pull_header(), which leads to a panic.
      
      Fixes the bug reported at https://bugzilla.kernel.org/show_bug.cgi?id=70681
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      10ddceb2
    • tcp: fix bogus RTT on special retransmission · c84a5711
      Yuchung Cheng authored
      RTT may be bogus with tail loss probe (TLP) when a packet
      is retransmitted and later (s)acked without the TCPCB_SACKED_RETRANS flag.
      
      For example, TLP calls __tcp_retransmit_skb() instead of
      tcp_retransmit_skb(). The skb timestamps are updated but the sacked
      flag is not marked with TCPCB_SACKED_RETRANS. As a result we'll
      get bogus RTT in tcp_clean_rtx_queue() or in tcp_sacktag_one() on
      spurious retransmission.
      
      The fix is to apply the sticky flag TCP_EVER_RETRANS to enforce Karn's
      check on RTT sampling. However, this would disable F-RTO if a timeout
      occurs after TLP, by resetting undo_marker in tcp_enter_loss(). We relax
      this check to apply only if any pending retransmissions are still in flight.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Nandita Dukkipati <nanditad@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c84a5711
    • hsr: off by one sanity check in hsr_register_frame_in() · de39d7a4
      Dan Carpenter authored
      This is a sanity check and we never pass invalid values so this patch
      doesn't change anything.  However the node->time_in[] array has
      HSR_MAX_SLAVE (2) elements and not HSR_MAX_DEV (3).
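      
      In other words (a sketch of the corrected bound, identifiers approximate):
      
      	/* node->time_in[] has HSR_MAX_SLAVE entries, so bound-check against that */
      	if (dev_idx < 0 || dev_idx >= HSR_MAX_SLAVE)	/* was: HSR_MAX_DEV */
      		return;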
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      de39d7a4
  7. 03 March 2014, 3 commits
  8. 28 February 2014, 5 commits
  9. 27 February 2014, 1 commit
    • net: tcp: use NET_INC_STATS() · 9a9bfd03
      Eric Dumazet authored
      While LINUX_MIB_TCPSPURIOUS_RTX_HOSTQUEUES can only be incremented
      in tcp_transmit_skb() from softirq (incoming message or timer
      activation), it is better to use NET_INC_STATS() instead of
      NET_INC_STATS_BH() as tcp_transmit_skb() can be called from process
      context.
      
      This will avoid copy/paste confusion when/if we want to add
      other SNMP counters in tcp_transmit_skb().
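      
      For example, using the counter named above (a sketch):
      
      	/* safe from both softirq and process context */
      	NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSPURIOUS_RTX_HOSTQUEUES);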
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9a9bfd03