1. 14 Apr 2008 (5 commits)
  2. 04 Apr 2008 (5 commits)
  3. 29 Mar 2008 (1 commit)
  4. 23 Mar 2008 (1 commit)
    •
      [SOCK]: Add udp_hash member to struct proto. · 39d8cda7
      Committed by Pavel Emelyanov
      Inspired by the commit ab1e0a13 ([SOCK] proto: Add hashinfo member to 
      struct proto) from Arnaldo, I made similar thing for UDP/-Lite IPv4 
      and -v6 protocols.
      
      The result is not that exciting, but it removes some levels of
      indirection in udpxxx_get_port and saves some space in code and text.
      
      The first step is to put the existing hashinfo and the new udp_hash
      into a union in struct proto and give that union a name, since the
      coming initialization of tcpxxx_prot, dccp_vx_protinfo and
      udpxxx_protinfo would otherwise trigger a gcc warning about the
      inability to initialize an anonymous member this way.
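      The named-union step can be sketched as follows. This is a minimal,
      standalone illustration, not the kernel code: struct and member names
      other than hashinfo/udp_hash are invented stand-ins.

      ```c
      #include <stdio.h>

      /* Opaque stand-ins for the kernel's hash-table types. */
      struct inet_hashinfo;
      struct hlist_head;

      struct proto_sketch {
              const char *name;
              union {
                      struct inet_hashinfo *hashinfo; /* TCP/DCCP */
                      struct hlist_head    *udp_hash; /* UDP/UDP-Lite */
              } h; /* the union must be NAMED: gcc of that era could not
                    * apply a designated initializer to an anonymous member */
      };

      /* With the union named, this designated initializer compiles: */
      static struct proto_sketch udp_sketch = {
              .name       = "UDP",
              .h.udp_hash = 0, /* would point at the UDP hash table */
      };

      int main(void)
      {
              printf("%s\n", udp_sketch.name);
              return 0;
      }
      ```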
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      39d8cda7
  5. 01 Mar 2008 (1 commit)
  6. 03 Feb 2008 (1 commit)
    •
      [SOCK] proto: Add hashinfo member to struct proto · ab1e0a13
      Committed by Arnaldo Carvalho de Melo
      This way we can remove TCP and DCCP specific versions of
      
      sk->sk_prot->get_port: both v4 and v6 use inet_csk_get_port
      sk->sk_prot->hash:     inet_hash is directly used, only v6 need
                             a specific version to deal with mapped sockets
      sk->sk_prot->unhash:   both v4 and v6 use inet_hash directly
      
      struct inet_connection_sock_af_ops also gets a new member, bind_conflict,
      so that inet_csk_get_port can find the per-family routine.
      
      Now only the lookup routines receive as a parameter a struct inet_hashtable.
      
      With this we further reuse code, reducing the difference among INET transport
      protocols.
      
      Eventually work has to be done on UDP and SCTP to make them share this
      infrastructure and, as a bonus, gain inet_diag interfaces so that
      iproute can be used with these protocols.
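      The indirection described above can be illustrated with a minimal,
      hypothetical sketch: one generic get_port routine consults a
      per-family bind_conflict callback, standing in for the member this
      commit adds to inet_connection_sock_af_ops. All names here are
      invented; only the shape of the indirection matches.

      ```c
      #include <stdio.h>
      #include <stdbool.h>

      struct sock_sketch;

      /* Stand-in for inet_connection_sock_af_ops with its new member. */
      struct af_ops_sketch {
              bool (*bind_conflict)(const struct sock_sketch *sk,
                                    unsigned short port);
      };

      struct sock_sketch {
              const struct af_ops_sketch *af_ops;
              unsigned short port;
      };

      /* One shared routine replaces tcp_v4_get_port/tcp_v6_get_port/
       * dccp_v{4,6}_get_port: family differences live behind the callback. */
      static int generic_get_port(struct sock_sketch *sk, unsigned short port)
      {
              if (sk->af_ops->bind_conflict(sk, port))
                      return -1; /* EADDRINUSE in the real code */
              sk->port = port;
              return 0;
      }

      static bool v4_conflict(const struct sock_sketch *sk, unsigned short port)
      {
              (void)sk;
              return port == 80; /* pretend port 80 is already bound */
      }

      int main(void)
      {
              static const struct af_ops_sketch v4_ops = {
                      .bind_conflict = v4_conflict,
              };
              struct sock_sketch sk = { .af_ops = &v4_ops, .port = 0 };

              printf("bind 8080 -> %d\n", generic_get_port(&sk, 8080));
              printf("bind 80   -> %d\n", generic_get_port(&sk, 80));
              return 0;
      }
      ```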
      
      net-2.6/net/ipv4/inet_hashtables.c:
        struct proto			     |   +8
        struct inet_connection_sock_af_ops |   +8
       2 structs changed
        __inet_hash_nolisten               |  +18
        __inet_hash                        | -210
        inet_put_port                      |   +8
        inet_bind_bucket_create            |   +1
        __inet_hash_connect                |   -8
       5 functions changed, 27 bytes added, 218 bytes removed, diff: -191
      
      net-2.6/net/core/sock.c:
        proto_seq_show                     |   +3
       1 function changed, 3 bytes added, diff: +3
      
      net-2.6/net/ipv4/inet_connection_sock.c:
        inet_csk_get_port                  |  +15
       1 function changed, 15 bytes added, diff: +15
      
      net-2.6/net/ipv4/tcp.c:
        tcp_set_state                      |   -7
       1 function changed, 7 bytes removed, diff: -7
      
      net-2.6/net/ipv4/tcp_ipv4.c:
        tcp_v4_get_port                    |  -31
        tcp_v4_hash                        |  -48
        tcp_v4_destroy_sock                |   -7
        tcp_v4_syn_recv_sock               |   -2
        tcp_unhash                         | -179
       5 functions changed, 267 bytes removed, diff: -267
      
      net-2.6/net/ipv6/inet6_hashtables.c:
        __inet6_hash |   +8
       1 function changed, 8 bytes added, diff: +8
      
      net-2.6/net/ipv4/inet_hashtables.c:
        inet_unhash                        | +190
        inet_hash                          | +242
       2 functions changed, 432 bytes added, diff: +432
      
      vmlinux:
       16 functions changed, 485 bytes added, 492 bytes removed, diff: -7
      
      /home/acme/git/net-2.6/net/ipv6/tcp_ipv6.c:
        tcp_v6_get_port                    |  -31
        tcp_v6_hash                        |   -7
        tcp_v6_syn_recv_sock               |   -9
       3 functions changed, 47 bytes removed, diff: -47
      
      /home/acme/git/net-2.6/net/dccp/proto.c:
        dccp_destroy_sock                  |   -7
        dccp_unhash                        | -179
        dccp_hash                          |  -49
        dccp_set_state                     |   -7
        dccp_done                          |   +1
       5 functions changed, 1 bytes added, 242 bytes removed, diff: -241
      
      /home/acme/git/net-2.6/net/dccp/ipv4.c:
        dccp_v4_get_port                   |  -31
        dccp_v4_request_recv_sock          |   -2
       2 functions changed, 33 bytes removed, diff: -33
      
      /home/acme/git/net-2.6/net/dccp/ipv6.c:
        dccp_v6_get_port                   |  -31
        dccp_v6_hash                       |   -7
        dccp_v6_request_recv_sock          |   +5
       3 functions changed, 5 bytes added, 38 bytes removed, diff: -33
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ab1e0a13
  7. 01 Feb 2008 (1 commit)
  8. 29 Jan 2008 (2 commits)
  9. 07 Nov 2007 (1 commit)
  10. 24 Oct 2007 (1 commit)
  11. 22 Oct 2007 (1 commit)
  12. 16 Oct 2007 (1 commit)
  13. 11 Oct 2007 (3 commits)
    •
      [DCCP]: Twice the wrong reset code in receiving connection-Requests · 4a5409a5
      Committed by Gerrit Renker
      This fixes two bugs in processing of connection-Requests in
      v{4,6}_conn_request:
      
       1. Because the local variable `reset_code' is used, the Reset code
          generated internally by dccp_parse_options() is overwritten with
          the initialised value ("Too Busy") of reset_code, which is not
          what is intended.
      
       2. When receiving a connection-Request on a multicast or broadcast
          address, no Reset should be generated, to avoid storms of such
          packets. Instead of jumping to the `drop' label, the
          v{4,6}_conn_request functions now return 0. Below is why in my
          understanding this is correct:
      
          When the conn_request function returns < 0, then the caller,
          dccp_rcv_state_process(), returns 1. In all instances where
          dccp_rcv_state_process is called (dccp_v4_do_rcv, dccp_v6_do_rcv,
          and dccp_child_process), a return value of != 0 from
          dccp_rcv_state_process() means that a Reset is generated.
      
          If on the other hand the conn_request function returns 0, the
          packet is discarded and no Reset is generated.
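      The return-value convention traced above can be sketched as follows.
      These are hypothetical, flattened stand-ins for the real conn_request
      and dccp_rcv_state_process call chain, kept only to show the contract.

      ```c
      #include <stdio.h>

      /* Contract from the commit message:
       *   conn_request < 0  -> caller returns 1 -> a Reset is generated
       *   conn_request == 0 -> packet silently discarded, no Reset       */
      static int conn_request(int is_multicast)
      {
              if (is_multicast)
                      return 0;  /* drop silently: avoid Reset storms */
              return -1;         /* failure: caller will generate a Reset */
      }

      static int rcv_state_process(int is_multicast)
      {
              if (conn_request(is_multicast) < 0)
                      return 1;  /* != 0 means: send a Reset */
              return 0;          /* consumed/discarded, no Reset */
      }

      int main(void)
      {
              printf("multicast Request -> %s\n",
                     rcv_state_process(1) ? "Reset" : "no Reset");
              printf("failed Request    -> %s\n",
                     rcv_state_process(0) ? "Reset" : "no Reset");
              return 0;
      }
      ```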
      
      Note: There may be a related problem when sending the Response, due to
      the following.
      
      	if (dccp_v6_send_response(sk, req, NULL))
      		goto drop_and_free;
      	/* ... */
      	drop_and_free:
      		return -1;
      
      In this case, if send_response fails due to transmission errors, the
      next thing that is generated is a Reset with a code "Too Busy". I
      haven't been able to conjure up such a condition, but it might be good
      to change the behaviour here also (not done by this patch).
      Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
      Signed-off-by: Ian McDonald <ian.mcdonald@jandi.co.nz>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4a5409a5
    •
      [DCCP]: Factor out common code for generating Resets · e356d37a
      Committed by Gerrit Renker
      This factors code common to dccp_v{4,6}_ctl_send_reset into a separate function,
      and adds support for filling in the Data 1 ... Data 3 fields from RFC 4340, 5.6.
      
      It is useful to have this separate, since the following Reset codes will always
      be generated from the control socket rather than via dccp_send_reset:
       * Code 3, "No Connection", cf. 8.3.1;
       * Code 4, "Packet Error" (identification for Data 1 added);
       * Code 5, "Option Error" (identification for Data 1..3 added, will be used later);
       * Code 6, "Mandatory Error" (same as Option Error);
       * Code 7, "Connection Refused" (what on Earth is the difference to "No Connection"?);
       * Code 8, "Bad Service Code";
       * Code 9, "Too Busy";
       * Code 10, "Bad Init Cookie" (not used).
      
      Code 0 is not recommended by the RFC, the following codes would be used in
      dccp_send_reset() instead, since they all relate to an established DCCP connection:
       * Code 1, "Closed";
       * Code 2, "Aborted";
       * Code 11, "Aggression Penalty" (12.3).
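      The code points listed above can be written out as an enum. This is a
      sketch following RFC 4340, 5.6; the kernel has its own equivalent
      enumeration, and the identifier names here are illustrative.

      ```c
      #include <stdio.h>

      /* Reset codes per RFC 4340, 5.6. */
      enum reset_code {
              RST_UNSPECIFIED        = 0,  /* not recommended by the RFC */
              RST_CLOSED             = 1,
              RST_ABORTED            = 2,
              RST_NO_CONNECTION      = 3,
              RST_PACKET_ERROR       = 4,
              RST_OPTION_ERROR       = 5,
              RST_MANDATORY_ERROR    = 6,
              RST_CONNECTION_REFUSED = 7,
              RST_BAD_SERVICE_CODE   = 8,
              RST_TOO_BUSY           = 9,
              RST_BAD_INIT_COOKIE    = 10,
              RST_AGGRESSION_PENALTY = 11,
      };

      /* Codes 1, 2 and 11 relate to an established connection and go out
       * via dccp_send_reset(); codes 3..10 come from the control socket. */
      static int from_ctl_socket(enum reset_code code)
      {
              return code >= RST_NO_CONNECTION && code <= RST_BAD_INIT_COOKIE;
      }

      int main(void)
      {
              printf("Too Busy via ctl socket: %d\n",
                     from_ctl_socket(RST_TOO_BUSY));
              printf("Aborted via ctl socket:  %d\n",
                     from_ctl_socket(RST_ABORTED));
              return 0;
      }
      ```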
      Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      e356d37a
    •
      [DCCP]: Sequence number wrap-around when sending reset · 9bf55cda
      Committed by Gerrit Renker
      This replaces normal addition with mod-48 addition so that sequence number
      wraparound is respected.
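      DCCP sequence numbers are 48 bits wide, so mod-48 addition can be
      sketched as below. This is a standalone illustration; the kernel
      defines equivalent helpers for 48-bit arithmetic in its DCCP code.

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      #define UINT48_MAX 0xFFFFFFFFFFFFULL

      /* Add n to a 48-bit sequence number, wrapping at 2^48. */
      static uint64_t add48(uint64_t seq, uint64_t n)
      {
              return (seq + n) & UINT48_MAX;
      }

      int main(void)
      {
              /* Plain addition would yield 2^48; mod-48 addition wraps to 0. */
              assert(add48(UINT48_MAX, 1) == 0);
              assert(add48(100, 1) == 101);
              printf("wrap ok\n");
              return 0;
      }
      ```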
      Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
      Signed-off-by: Ian McDonald <ian.mcdonald@jandi.co.nz>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      9bf55cda
  14. 11 Jul 2007 (1 commit)
  15. 25 May 2007 (1 commit)
    •
      [XFRM]: Allow packet drops during larval state resolution. · 14e50e57
      Committed by David S. Miller
      The current IPSEC rule resolution behavior we have does not work for
      a lot of people, even though technically it's an improvement over the
      -EAGAIN business we had before.
      
      Right now we'll block until the key manager resolves the route.  That
      works for simple cases, but many folks would rather packets get
      silently dropped until the key manager resolves the IPSEC rules.
      
      We can't tell these folks to "set the socket non-blocking" because
      they don't have control over the non-block setting of things like the
      sockets used to resolve DNS deep inside of the resolver libraries in
      libc.
      
      With that in mind I coded up the patch below with some help from
      Herbert Xu which provides packet-drop behavior during larval state
      resolution, controllable via sysctl and off by default.
      
      This lays the framework to either:
      
      1) Make this default at some point or...
      
      2) Move this logic into xfrm{4,6}_policy.c and implement the
         ARP-like resolution queue we've all been dreaming of.
         The idea would be to queue packets to the policy, then
         once the larval state is resolved by the key manager we
         re-resolve the route and push the packets out.  The
         packets would timeout if the rule didn't get resolved
         in a certain amount of time.
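      Assuming the knob is exposed as net.core.xfrm_larval_drop (off by
      default, per the description above), it would be toggled like this:

      ```shell
      # Enable packet-drop behavior while larval IPsec states are
      # still being resolved by the key manager (instead of blocking):
      sysctl -w net.core.xfrm_larval_drop=1

      # Or persistently, via /etc/sysctl.conf:
      #   net.core.xfrm_larval_drop = 1
      ```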
      Signed-off-by: David S. Miller <davem@davemloft.net>
      14e50e57
  16. 26 Apr 2007 (2 commits)
  17. 11 Feb 2007 (1 commit)
  18. 09 Feb 2007 (1 commit)
  19. 12 Dec 2006 (1 commit)
  20. 03 Dec 2006 (9 commits)