1. 16 Jun, 2016 — 37 commits
  2. 15 Jun, 2016 — 3 commits
    • act_police: rename tcf_act_police_locate() to tcf_act_police_init() · d9fa17ef
      Committed by WANG Cong
      This function is just ->init(); rename it to make that obvious.
      
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d9fa17ef
    • net_sched: remove internal use of TC_POLICE_* · 95df1b16
      Committed by WANG Cong
      These should have gone away when we removed CONFIG_NET_CLS_POLICE.
      We cannot remove them entirely since they are exposed
      to userspace.
      
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      95df1b16
    • Merge branch 'rds-mprds-foundations' · 161cd45f
      Committed by David S. Miller
      Sowmini Varadhan says:
      
      ====================
      RDS: multiple connection paths for scaling
      
       Today RDS-over-TCP is implemented by demultiplexing multiple PF_RDS
       sockets between any 2 endpoints (where endpoint == [IP address, port])
       over a single TCP socket between the 2 IP addresses involved. This has
       the limitation that it funnels multiple RDS flows over a single
       TCP flow, so the rds/tcp connection
          (a) is upper-bounded by the single-flow bandwidth, and
          (b) suffers from head-of-line blocking for the RDS sockets.
      
       Better throughput (for a fixed small packet size and MTU) can be achieved
       by having multiple TCP/IP flows per rds/tcp connection, i.e., multipathed
      RDS (mprds).  Each such TCP/IP flow constitutes a path for the rds/tcp
      connection. RDS sockets will be attached to a path based on some hash
      (e.g., of local address and RDS port number) and packets for that RDS
      socket will be sent over the attached path using TCP to segment/reassemble
      RDS datagrams on that path.
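       The path-selection idea above can be sketched as follows. This is an
       illustrative user-space model, not the kernel's actual code: the
       function name rds_path_for_socket, the hash mix, and NUM_PATHS are all
       assumptions; the real series hashes some combination of local address
       and RDS port.

       ```c
       #include <assert.h>
       #include <stdint.h>

       /* Hypothetical sketch of mapping an RDS socket to one of N TCP paths.
        * Because the hash depends only on per-socket identity, every packet
        * for a given RDS socket lands on the same path, preserving ordering
        * within that socket while spreading sockets across paths. */
       #define NUM_PATHS 8

       static uint32_t rds_path_for_socket(uint32_t local_addr, uint16_t rds_port)
       {
           /* Simple integer mix of address and port; the kernel would use
            * its own hash function. */
           uint32_t h = local_addr ^ (((uint32_t)rds_port << 16) | rds_port);
           h ^= h >> 16;
           h *= 0x45d9f3bu;
           h ^= h >> 16;
           return h % NUM_PATHS;
       }
       ```

       Selection must be deterministic per socket: two packets from the same
       socket always choose the same path index.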
      
      The table below, generated using a prototype that implements mprds,
      shows that this is significant for scaling to 40G.  Packet sizes
      used were: 8K byte req, 256 byte resp. MTU: 1500.  The parameters for
       RDS-concurrency used below are described in the rds-stress(1) man page;
       the number listed is proportional to the number of threads at which max
       throughput was attained.
      
        -------------------------------------------------------------------
           RDS-concurrency   Num of       tx+rx K/s (iops)       throughput
           (-t N -d N)       TCP paths
        -------------------------------------------------------------------
              16             1             600K -  700K            4 Gbps
              28             8            5000K - 6000K           32 Gbps
        -------------------------------------------------------------------
      
      FAQ: what is the relation between mprds and mptcp?
        mprds is orthogonal to mptcp. Whereas mptcp creates
        sub-flows for a single TCP connection, mprds parallelizes tx/rx
        at the RDS layer. MPRDS with N paths will allow N datagrams to
        be sent in parallel; each path will continue to send one
        datagram at a time, with sender and receiver keeping track of
        the retransmit and dgram-assembly state based on the RDS header.
        If desired, mptcp can additionally be used to speed up each TCP
        path. That acceleration is orthogonal to the parallelization benefits
        of mprds.
      
      This patch series lays down the foundational data-structures to support
      mprds in the kernel. It implements the changes to split up the
      rds_connection structure into a common (to all paths) part,
      and a per-path rds_conn_path. All I/O workqs are driven from
      the rds_conn_path.
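       The split described above can be sketched as two C structures. Field
       names and the array bound here are illustrative assumptions, not the
       exact kernel layout: state shared by all paths stays in
       rds_connection, while per-path work state moves into rds_conn_path.

       ```c
       #include <assert.h>
       #include <stdint.h>

       #define RDS_MPATH_WORKERS 8  /* assumed max paths per connection */

       struct rds_connection;  /* forward declaration for the back-pointer */

       /* Per-path state: each path drives its own I/O work independently
        * (send/recv workqueues, retransmit state would live here). */
       struct rds_conn_path {
           struct rds_connection *cp_conn;   /* back-pointer to owner */
           int                    cp_index;  /* which path this is */
       };

       /* Common (to all paths) state: the two endpoints and the path array. */
       struct rds_connection {
           uint32_t c_laddr;                 /* local IP address */
           uint32_t c_faddr;                 /* peer IP address */
           int      c_npaths;                /* 1 today; N once mprds is enabled */
           struct rds_conn_path c_path[RDS_MPATH_WORKERS];
       };
       ```

       With this layout, a transport that ignores multipathing simply keeps
       c_npaths == 1 and uses c_path[0], which matches the patchset's note
       that all transports continue on a single path for now.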
      
      Note that this patchset does not (yet) actually enable multipathing
      for any of the transports; all transports will continue to use a
       single path with the refactored data-structures. A subsequent patchset
       will add the changes to the rds-tcp module to actually use mprds
       in rds-tcp.
      ====================
       Signed-off-by: David S. Miller <davem@davemloft.net>
      161cd45f