1. 03 Sep 2014 (1 commit)
  2. 02 Sep 2014 (2 commits)
  3. 30 Aug 2014 (1 commit)
  4. 02 Jul 2014 (1 commit)
  5. 01 Apr 2014 (1 commit)
  6. 11 Jan 2014 (1 commit)
    • net: core: explicitly select a txq before doing l2 forwarding · f663dd9a
      Committed by Jason Wang
      Currently, the tx queue is selected implicitly in ndo_dfwd_start_xmit(). This
      causes several issues:

      - NETIF_F_LLTX was removed for macvlan, so the txq lock is taken for the
        macvlan device instead of the lower device, which misses the necessary txq
        synchronization for the lower device, such as txq stopping or freezing
        required by the dev watchdog or the control path.
      - dev_hard_start_xmit() was called with a NULL txq, which bypasses the net
        device watchdog.
      - dev_hard_start_xmit() does not check txq everywhere, which can lead to a
        crash when tso is disabled for the lower device.

      Fix this by explicitly introducing a new parameter to .ndo_select_queue() for
      selecting queues in the case of l2 forwarding offload. netdev_pick_tx() was
      also extended to accept this parameter, and dev_queue_xmit_accel() is used to
      do the l2 forwarding transmission.

      With these fixes, NETIF_F_LLTX can be preserved for macvlan and there is no
      need to check txq against NULL in dev_hard_start_xmit(). There is also no
      need to keep a dedicated ndo_dfwd_start_xmit(); we can just reuse the code of
      dev_queue_xmit() to do the transmission.

      In the future, this will also be required for macvtap l2 forwarding support,
      since it provides a necessary synchronization method.
      
      Cc: John Fastabend <john.r.fastabend@intel.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: e1000-devel@lists.sourceforge.net
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f663dd9a
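
      A minimal sketch of the transmit path this change enables, assuming the
      paraphrased interface from the description above (an accel_priv context
      threaded through .ndo_select_queue()/netdev_pick_tx(), with
      dev_queue_xmit_accel() as the accel-aware twin of dev_queue_xmit()); it is
      not the exact upstream code:

      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      /* Hypothetical helper for a macvlan-style driver: instead of a private
       * ndo_dfwd_start_xmit(), hand the skb plus the accel context back to the
       * core, which picks a txq on the lower device and then goes through the
       * normal txq locking, watchdog and stop/frozen handling. */
      static int example_l2fwd_xmit(struct sk_buff *skb, void *fwd_priv)
      {
              /* accel-aware variant of dev_queue_xmit(), per the text above */
              return dev_queue_xmit_accel(skb, fwd_priv);
      }
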
  7. 14 Dec 2013 (1 commit)
  8. 11 Dec 2013 (1 commit)
  9. 08 Nov 2013 (1 commit)
  10. 08 Oct 2013 (1 commit)
    • net: Separate the close_list and the unreg_list v2 · 5cde2829
      Committed by Eric W. Biederman
      Separate the unreg_list and the close_list in dev_close_many, preventing
      dev_close_many from permuting the unreg_list.  The permutations of the
      unreg_list have resulted in cases where the loopback device is accessed
      after it has been freed, in code such as dst_ifdown, resulting in subtle
      memory corruption.
      
      This is the second bug from sharing the storage between the close_list
      and the unreg_list.  The issues that crop up with sharing are
      apparently too subtle to show up in normal testing or usage, so let's
      forget about being clever and use two separate lists.
      
      v2: Make all callers pass in a close_list to dev_close_many
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5cde2829
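
      A rough sketch of the v2 calling convention, assuming struct net_device
      gained a dedicated close_list member alongside unreg_list (the helper name
      below is hypothetical):

      #include <linux/list.h>
      #include <linux/netdevice.h>

      static void example_close_one(struct net_device *dev)
      {
              LIST_HEAD(single);

              /* Link via close_list, not unreg_list: dev_close_many() may
               * reorder this list freely without disturbing unregistration. */
              list_add(&dev->close_list, &single);
              dev_close_many(&single);
              list_del(&single);        /* unhook the on-stack head again */
      }
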
  11. 21 Sep 2013 (1 commit)
  12. 01 Sep 2013 (2 commits)
  13. 31 Aug 2013 (1 commit)
    • qdisc: allow setting default queuing discipline · 6da7c8fc
      Committed by stephen hemminger
      Until now, the pfifo_fast queue discipline has been used by default
      for all devices. But we have better choices now.

      This patch allows setting the default queueing discipline with a sysctl.
      This allows easy use of better queueing disciplines on all devices
      without having to use tc qdisc scripts. It is intended to provide
      an easy path for distributions to make fq_codel or sfq the default
      qdisc.

      This patch also makes pfifo_fast more of a first class qdisc, since
      it is now possible to manually override the default and explicitly
      use pfifo_fast. The behavior for systems that do not use the sysctl
      is unchanged; they still get pfifo_fast.

      Also removes a leftover random # in the net core sysctl code.
      Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6da7c8fc
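
      A sketch of the idea behind the sysctl (the handler below is illustrative,
      not the exact kernel symbol): the name written to the knob is resolved
      against the registered Qdisc_ops once, and devices created afterwards
      attach that qdisc as their root instead of pfifo_fast. In mainline the
      knob is net.core.default_qdisc.

      #include <linux/errno.h>
      #include <net/sch_generic.h>

      /* Illustrative default-ops switch; pfifo_fast_ops stays the fallback so
       * systems that never touch the sysctl see no behaviour change. */
      static const struct Qdisc_ops *example_default_qdisc_ops = &pfifo_fast_ops;

      static int example_set_default_qdisc(const struct Qdisc_ops *ops)
      {
              if (!ops)
                      return -ENOENT;   /* unknown name written: keep current default */
              example_default_qdisc_ops = ops;  /* e.g. fq_codel or sfq from now on */
              return 0;
      }
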
  14. 15 Aug 2013 (1 commit)
    • net_sched: restore "linklayer atm" handling · 8a8e3d84
      Committed by Jesper Dangaard Brouer
      commit 56b765b7 ("htb: improved accuracy at high rates")
      broke the "linklayer atm" handling.
      
       tc class add ... htb rate X ceil Y linklayer atm
      
      The linklayer setting is implemented by modifying the rate table
      which is sent to the kernel.  No direct parameter was passed to the
      kernel indicating the linklayer setting.

      The commit 56b765b7 ("htb: improved accuracy at high rates")
      removed the use of the rate table system.

      To stay compatible with older iproute2 utils, this patch detects
      the linklayer by parsing the rate table.  It also supports future
      versions of iproute2 sending this linklayer parameter to the
      kernel directly. This is done by using the __reserved field in
      struct tc_ratespec to convey the chosen linklayer option, but
      only using the lower 4 bits of this field.

      Linklayer detection is limited to speeds below 100Mbit/s, because
      at high rates the rtab gets too inaccurate, so bad that several
      fields contain the same values, which resembles the pattern the ATM
      detection looks for.  Fields even start to contain a "0" time to
      send; e.g. at 1000Mbit/s sending a 96 byte packet costs "0", so the
      rtab has been more broken than we first realized.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8a8e3d84
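
      A sketch of how the choice travels in the lower 4 bits of the old
      __reserved byte of struct tc_ratespec; the names below, including the
      member used to carry the value, are paraphrased from the description
      rather than copied from the tree:

      #include <linux/pkt_sched.h>

      #define EXAMPLE_TC_LINKLAYER_MASK 0x0F    /* only the low 4 bits are used */

      enum example_link_layer {
              EXAMPLE_LINKLAYER_UNAWARE,        /* old iproute2: no hint sent */
              EXAMPLE_LINKLAYER_ETHERNET,
              EXAMPLE_LINKLAYER_ATM,
      };

      static int example_get_linklayer(const struct tc_ratespec *r)
      {
              int ll = r->linklayer & EXAMPLE_TC_LINKLAYER_MASK;

              /* UNAWARE means an old iproute2 sent no hint: the kernel then
               * falls back to parsing the rate table, which is only trusted
               * below ~100Mbit/s where ATM cell padding is still visible. */
              return ll;
      }
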
  15. 06 Aug 2013 (1 commit)
  16. 12 Jun 2013 (1 commit)
  17. 03 Jun 2013 (1 commit)
  18. 28 Mar 2013 (1 commit)
    • sch: add missing u64 in psched_ratecfg_precompute() · ea872d77
      Committed by Sergey Popovich
      It seems that the following commit introduced a small regression:

      commit 292f1c7f
      Author: Jiri Pirko <jiri@resnulli.us>
      Date:   Tue Feb 12 00:12:03 2013 +0000

          sch: make htb_rate_cfg and functions around that generic
      
      Before:
      
      # tc qdisc add dev eth0 root handle 1: htb default ffff
      # tc class add dev eth0 classid 1:ffff htb rate 5Gbit
      # tc -s class show dev eth0
      class htb 1:ffff root prio 0 rate 5000Mbit ceil 5000Mbit burst 625b cburst
      625b
       Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
       rate 0bit 0pps backlog 0b 0p requeues 0
       lended: 0 borrowed: 0 giants: 0
       tokens: 31 ctokens: 31
      
      After:
      
      # tc qdisc add dev eth0 root handle 1: htb default ffff
      # tc class add dev eth0 classid 1:ffff htb rate 5Gbit
      # tc -s class show dev eth0
      class htb 1:ffff root prio 0 rate 1544Mbit ceil 1544Mbit burst 625b cburst
      625b
       Sent 5073 bytes 41 pkt (dropped 0, overlimits 0 requeues 0)
       rate 1976bit 2pps backlog 0b 0p requeues 0
       lended: 41 borrowed: 0 giants: 0
       tokens: 1802 ctokens: 1802
      
      This is probably due to a lost u64 cast of the rate parameter in
      psched_ratecfg_precompute() (net/sched/sch_generic.c).
      Signed-off-by: Sergey Popovich <popovich_sergei@mail.ru>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ea872d77
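
      The arithmetic behind the regression: with a 32-bit rate in bytes per
      second, the shift to bits per second wraps for anything above roughly
      4Gbit/s. A sketch of the fix, with the surrounding function paraphrased
      rather than copied:

      #include <linux/types.h>

      struct example_ratecfg {
              u64 rate_bps;
              u32 mult;
              u8  shift;
      };

      static void example_ratecfg_precompute(struct example_ratecfg *r, u32 rate)
      {
              /* 'rate' is bytes per second; widen *before* shifting so that
               * e.g. 5Gbit/s (625000000 bytes/s) does not wrap in 32 bits. */
              r->rate_bps = (u64)rate << 3;
              /* ... mult/shift derivation unchanged ... */
      }
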
  19. 13 Feb 2013 (1 commit)
  20. 12 Dec 2012 (1 commit)
    • pkt_sched: avoid requeues if possible · 1abbe139
      Committed by Eric Dumazet
      With BQL being deployed, the following behavior becomes more likely:
      
      We dequeue a packet from the qdisc in dequeue_skb(), then realize in
      sch_direct_xmit() that the target tx queue is in XOFF state, and we have
      to hold the skb in gso_skb for later.
      
      This shows in stats (tc -s qdisc dev eth0) as requeues.
      
      The problem with these requeues is that high priority packets cannot be
      dequeued as long as this (possibly low priority, big TSO) packet is not
      removed from gso_skb.
      
      At 1Gbps, a full size TSO packet adds 500 us of extra latency.
      
      In some cases, we know that all packets dequeued from a qdisc are
      destined to a particular, known txq:
      
      - If device is non multi queue
      - For all MQ/MQPRIO slave qdiscs
      
      This patch introduces a new qdisc flag, TCQ_F_ONETXQUEUE to mark
      this capability, so that dequeue_skb() is allowed to dequeue a packet
      only if the associated txq is not stopped.
      
      This indeed reduces latencies for high priority packets (and improves
      fairness with sfq/fq_codel), and almost removes qdisc 'requeues'.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Cc: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1abbe139
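
      A sketch of the dequeue-side check this flag enables, paraphrased from
      the description rather than copied from the upstream dequeue_skb():

      #include <linux/netdevice.h>
      #include <net/sch_generic.h>

      static struct sk_buff *example_dequeue(struct Qdisc *q)
      {
              /* All packets of this qdisc go to one known txq: if that txq is
               * stopped or frozen, leave the packet queued instead of pulling
               * it out and parking it in gso_skb as a 'requeue'. */
              if ((q->flags & TCQ_F_ONETXQUEUE) &&
                  netif_xmit_frozen_or_stopped(q->dev_queue))
                      return NULL;

              return q->dequeue(q);
      }
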
  21. 06 Sep 2012 (1 commit)
    • net: qdisc busylock needs lockdep annotations · 23d3b8bf
      Committed by Eric Dumazet
      It seems we need to provide the ability for stacked devices
      to use a specific lock_class_key for sch->busylock.

      We could instead default the l2tp_eth tx_queue_len to 0 (no qdisc), but
      a user might attach a qdisc anyway.

      (So the same fix is probably needed on other non-LLTX stacked drivers.)

      Noticed while stressing an L2TPv3 setup:
      
      ======================================================
       [ INFO: possible circular locking dependency detected ]
       3.6.0-rc3+ #788 Not tainted
       -------------------------------------------------------
       netperf/4660 is trying to acquire lock:
        (l2tpsock){+.-...}, at: [<ffffffffa0208db2>] l2tp_xmit_skb+0x172/0xa50 [l2tp_core]
      
       but task is already holding lock:
        (&(&sch->busylock)->rlock){+.-...}, at: [<ffffffff81596595>] dev_queue_xmit+0xd75/0xe00
      
       which lock already depends on the new lock.
      
       the existing dependency chain (in reverse order) is:
      
       -> #1 (&(&sch->busylock)->rlock){+.-...}:
              [<ffffffff810a5df0>] lock_acquire+0x90/0x200
              [<ffffffff817499fc>] _raw_spin_lock_irqsave+0x4c/0x60
              [<ffffffff81074872>] __wake_up+0x32/0x70
              [<ffffffff8136d39e>] tty_wakeup+0x3e/0x80
              [<ffffffff81378fb3>] pty_write+0x73/0x80
              [<ffffffff8136cb4c>] tty_put_char+0x3c/0x40
              [<ffffffff813722b2>] process_echoes+0x142/0x330
              [<ffffffff813742ab>] n_tty_receive_buf+0x8fb/0x1230
              [<ffffffff813777b2>] flush_to_ldisc+0x142/0x1c0
              [<ffffffff81062818>] process_one_work+0x198/0x760
              [<ffffffff81063236>] worker_thread+0x186/0x4b0
              [<ffffffff810694d3>] kthread+0x93/0xa0
              [<ffffffff81753e24>] kernel_thread_helper+0x4/0x10
      
       -> #0 (l2tpsock){+.-...}:
              [<ffffffff810a5288>] __lock_acquire+0x1628/0x1b10
              [<ffffffff810a5df0>] lock_acquire+0x90/0x200
              [<ffffffff817498c1>] _raw_spin_lock+0x41/0x50
              [<ffffffffa0208db2>] l2tp_xmit_skb+0x172/0xa50 [l2tp_core]
              [<ffffffffa021a802>] l2tp_eth_dev_xmit+0x32/0x60 [l2tp_eth]
              [<ffffffff815952b2>] dev_hard_start_xmit+0x502/0xa70
              [<ffffffff815b63ce>] sch_direct_xmit+0xfe/0x290
              [<ffffffff81595a05>] dev_queue_xmit+0x1e5/0xe00
              [<ffffffff815d9d60>] ip_finish_output+0x3d0/0x890
              [<ffffffff815db019>] ip_output+0x59/0xf0
              [<ffffffff815da36d>] ip_local_out+0x2d/0xa0
              [<ffffffff815da5a3>] ip_queue_xmit+0x1c3/0x680
              [<ffffffff815f4192>] tcp_transmit_skb+0x402/0xa60
              [<ffffffff815f4a94>] tcp_write_xmit+0x1f4/0xa30
              [<ffffffff815f5300>] tcp_push_one+0x30/0x40
              [<ffffffff815e6672>] tcp_sendmsg+0xe82/0x1040
              [<ffffffff81614495>] inet_sendmsg+0x125/0x230
              [<ffffffff81576cdc>] sock_sendmsg+0xdc/0xf0
              [<ffffffff81579ece>] sys_sendto+0xfe/0x130
              [<ffffffff81752c92>] system_call_fastpath+0x16/0x1b
        Possible unsafe locking scenario:
      
              CPU0                    CPU1
              ----                    ----
         lock(&(&sch->busylock)->rlock);
                                      lock(l2tpsock);
                                      lock(&(&sch->busylock)->rlock);
         lock(l2tpsock);
      
        *** DEADLOCK ***
      
       5 locks held by netperf/4660:
        #0:  (sk_lock-AF_INET){+.+.+.}, at: [<ffffffff815e581c>] tcp_sendmsg+0x2c/0x1040
        #1:  (rcu_read_lock){.+.+..}, at: [<ffffffff815da3e0>] ip_queue_xmit+0x0/0x680
        #2:  (rcu_read_lock_bh){.+....}, at: [<ffffffff815d9ac5>] ip_finish_output+0x135/0x890
        #3:  (rcu_read_lock_bh){.+....}, at: [<ffffffff81595820>] dev_queue_xmit+0x0/0xe00
        #4:  (&(&sch->busylock)->rlock){+.-...}, at: [<ffffffff81596595>] dev_queue_xmit+0xd75/0xe00
      
       stack backtrace:
       Pid: 4660, comm: netperf Not tainted 3.6.0-rc3+ #788
       Call Trace:
        [<ffffffff8173dbf8>] print_circular_bug+0x1fb/0x20c
        [<ffffffff810a5288>] __lock_acquire+0x1628/0x1b10
        [<ffffffff810a334b>] ? check_usage+0x9b/0x4d0
        [<ffffffff810a3f44>] ? __lock_acquire+0x2e4/0x1b10
        [<ffffffff810a5df0>] lock_acquire+0x90/0x200
        [<ffffffffa0208db2>] ? l2tp_xmit_skb+0x172/0xa50 [l2tp_core]
        [<ffffffff817498c1>] _raw_spin_lock+0x41/0x50
        [<ffffffffa0208db2>] ? l2tp_xmit_skb+0x172/0xa50 [l2tp_core]
        [<ffffffffa0208db2>] l2tp_xmit_skb+0x172/0xa50 [l2tp_core]
        [<ffffffffa021a802>] l2tp_eth_dev_xmit+0x32/0x60 [l2tp_eth]
        [<ffffffff815952b2>] dev_hard_start_xmit+0x502/0xa70
        [<ffffffff81594e0e>] ? dev_hard_start_xmit+0x5e/0xa70
        [<ffffffff81595961>] ? dev_queue_xmit+0x141/0xe00
        [<ffffffff815b63ce>] sch_direct_xmit+0xfe/0x290
        [<ffffffff81595a05>] dev_queue_xmit+0x1e5/0xe00
        [<ffffffff81595820>] ? dev_hard_start_xmit+0xa70/0xa70
        [<ffffffff815d9d60>] ip_finish_output+0x3d0/0x890
        [<ffffffff815d9ac5>] ? ip_finish_output+0x135/0x890
        [<ffffffff815db019>] ip_output+0x59/0xf0
        [<ffffffff815da36d>] ip_local_out+0x2d/0xa0
        [<ffffffff815da5a3>] ip_queue_xmit+0x1c3/0x680
        [<ffffffff815da3e0>] ? ip_local_out+0xa0/0xa0
        [<ffffffff815f4192>] tcp_transmit_skb+0x402/0xa60
        [<ffffffff815fa25e>] ? tcp_md5_do_lookup+0x18e/0x1a0
        [<ffffffff815f4a94>] tcp_write_xmit+0x1f4/0xa30
        [<ffffffff815f5300>] tcp_push_one+0x30/0x40
        [<ffffffff815e6672>] tcp_sendmsg+0xe82/0x1040
        [<ffffffff81614495>] inet_sendmsg+0x125/0x230
        [<ffffffff81614370>] ? inet_create+0x6b0/0x6b0
        [<ffffffff8157e6e2>] ? sock_update_classid+0xc2/0x3b0
        [<ffffffff8157e750>] ? sock_update_classid+0x130/0x3b0
        [<ffffffff81576cdc>] sock_sendmsg+0xdc/0xf0
        [<ffffffff81162579>] ? fget_light+0x3f9/0x4f0
        [<ffffffff81579ece>] sys_sendto+0xfe/0x130
        [<ffffffff810a69ad>] ? trace_hardirqs_on+0xd/0x10
        [<ffffffff8174a0b0>] ? _raw_spin_unlock_irq+0x30/0x50
        [<ffffffff810757e3>] ? finish_task_switch+0x83/0xf0
        [<ffffffff810757a6>] ? finish_task_switch+0x46/0xf0
        [<ffffffff81752cb7>] ? sysret_check+0x1b/0x56
        [<ffffffff81752c92>] system_call_fastpath+0x16/0x1b
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      23d3b8bf
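
      A sketch of the annotation, assuming the change lets a stacked driver
      supply its own lock_class_key through a per-netdev pointer; the member
      name below reflects the description's intent and has not been verified
      against the tree:

      #include <linux/lockdep.h>
      #include <net/sch_generic.h>

      static struct lock_class_key example_default_busylock_key;

      static void example_annotate_busylock(struct Qdisc *sch, struct net_device *dev)
      {
              /* Stacked devices (l2tp_eth, and likely other non-LLTX stacked
               * drivers) pass their own key so lockdep can tell the outer
               * busylock apart from the lower device's busylock. */
              struct lock_class_key *key =
                      dev->qdisc_tx_busylock ?: &example_default_busylock_key;

              lockdep_set_class(&sch->busylock, key);
      }
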
  22. 15 Aug 2012 (1 commit)
  23. 16 May 2012 (1 commit)
  24. 02 Apr 2012 (1 commit)
  25. 30 Nov 2011 (1 commit)
  26. 17 Nov 2011 (1 commit)
  27. 15 Jul 2011 (1 commit)
  28. 27 Jun 2011 (1 commit)
    • net_sched: fix dequeuer fairness · d5b8aa1d
      Committed by jamal
      Results on a dummy device can be seen in my netconf 2011
      slides. The results below are for a 10GigE Intel IXGBE NIC,
      on another i5 machine with very similar specs to the one
      used for the netconf 2011 results.
      It turns out this is a lot worse than dummy, so this patch
      is even more beneficial for 10G.
      
      Test setup:
      ----------
      
      System under test sending packets out.
      Additional box connected directly dropping packets.
      A prio qdisc was installed on the eth device and the default
      netdev queue length of 1000 was used as is.
      The 3 prio bands were each set to 100 (this did not factor
      into the results).
      
      5 packet runs were made and the middle 3 picked.
      
      results
      -------
      
      The "cpu" column indicates the which cpu the sample
      was taken on,
      The "Pkt runx" carries the number of packets a cpu
      dequeued when forced to be in the "dequeuer" role.
      The "avg" for each run is the number of times each
      cpu should be a "dequeuer" if the system was fair.
      
      3.0-rc4      (plain)
      cpu         Pkt run1        Pkt run2        Pkt run3
      ================================================
      cpu0        21853354        21598183        22199900
      cpu1          431058          473476          393159
      cpu2          481975          477529          458466
      cpu3        23261406        23412299        22894315
      avg         11506948        11490372        11486460
      
      3.0-rc4 with patch and default weight 64
      cpu         Pkt run1        Pkt run2        Pkt run3
      ================================================
      cpu0        13205312        13109359        13132333
      cpu1        10189914        10159127        10122270
      cpu2        10213871        10124367        10168722
      cpu3        13165760        13164767        13096705
      avg         11693714        11639405        11630008
      
      As you can see, the system is still not perfect, but it is
      a lot better than it was before...
      
      At the moment we use the old backlog weight, weight_p,
      which is 64 packets. It seems to be reasonably fine
      with that value.
      The system could be made fairer if we reduced weight_p
      (as per my presentation), but that would also affect the
      shared backlog weight. Unless deemed necessary, I think
      the default value is fine. If not, we could add yet
      another knob.
      Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d5b8aa1d
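
      A sketch of the quota-based dequeue loop the patch describes, reusing the
      existing backlog weight (weight_p = 64); this is paraphrased, not the
      exact upstream __qdisc_run():

      #include <linux/sched.h>
      #include <net/pkt_sched.h>

      static void example_qdisc_run(struct Qdisc *q)
      {
              int quota = weight_p;    /* shared with the rx backlog weight */

              while (qdisc_restart(q)) {
                      /* Give up the dequeuer role once the quota is spent or
                       * another task needs the cpu; reschedule via softirq so
                       * some other cpu can pick the work up. */
                      if (--quota <= 0 || need_resched()) {
                              __netif_schedule(q);
                              break;
                      }
              }
      }
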
  29. 07 Jun 2011 (1 commit)
  30. 23 May 2011 (1 commit)
    • net: avoid synchronize_rcu() in dev_deactivate_many · 3137663d
      Committed by Eric Dumazet
      dev_deactivate_many() issues one synchronize_rcu() call after the qdiscs
      are set to noop_qdisc.

      This call is here to make sure there are no outstanding qdisc-less
      dev_queue_xmit() calls before returning to the caller.

      But in the dismantle phase we don't have to wait, because we won't
      activate the device again, and we are going to wait one rcu grace period
      later in rollback_registered_many() anyway.

      After this patch, device dismantle uses only one synchronize_net() and
      one rcu_barrier() call, so we get a ~30% speedup and smaller RTNL
      latency.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McHardy <kaber@trash.net>
      CC: Ben Greear <greearb@candelatech.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3137663d
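
      A sketch of the optimization (the exact predicate used upstream may
      differ): skip the per-device grace-period wait when the device is being
      dismantled, since the unregister path waits once for everyone anyway.

      #include <linux/netdevice.h>

      static void example_deactivate_wait(bool dismantling)
      {
              /* A live device being deactivated may still have qdisc-less
               * dev_queue_xmit() calls in flight, so wait for them ... */
              if (!dismantling)
                      synchronize_net();
              /* ... but in the dismantle path rollback_registered_many()
               * already provides an rcu grace period, so don't wait twice. */
      }
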
  31. 04 Mar 2011 (1 commit)
  32. 21 Feb 2011 (1 commit)
    • net: Fix more stale on-stack list_head objects. · 5f04d506
      Committed by Eric W. Biederman
      From: Eric W. Biederman <ebiederm@xmission.com>
      
      In the beginning, with batching, unreg_list was a list that was used only
      once in the lifetime of a network device (I think).  Now we have calls
      using the unreg_list that can happen multiple times in the life of a
      network device, like dev_deactivate and dev_close, which also use the
      unreg_list.  In addition, in unregister_netdevice_queue we also do a
      list_move, because for devices like veth pairs it is possible that
      unregister_netdevice_queue will be called multiple times.

      So I think the change below, fixing the dev_deactivate case which Eric D.
      missed, will fix this problem.  Now to go test that.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5f04d506
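
      A sketch of the fix pattern described above: when a single device is
      wrapped in an on-stack list head to reuse a *_many() helper, it must be
      unlinked again before the stack frame disappears.

      #include <linux/list.h>
      #include <linux/netdevice.h>
      #include <net/sch_generic.h>

      static void example_deactivate_one(struct net_device *dev)
      {
              LIST_HEAD(single);

              list_add(&dev->unreg_list, &single);
              dev_deactivate_many(&single);
              /* The missing step: without this, dev->unreg_list keeps pointing
               * at dead stack memory once this function returns. */
              list_del(&single);
      }
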
  33. 22 Jan 2011 (1 commit)
    • net_sched: TCQ_F_CAN_BYPASS generalization · 23624935
      Committed by Eric Dumazet
      Now that the qdisc stab is handled before the TCQ_F_CAN_BYPASS test in
      __dev_xmit_skb(), we can generalize TCQ_F_CAN_BYPASS to qdiscs other
      than pfifo_fast: pfifo, bfifo, pfifo_head_drop and sfq.

      SFQ is special because it can have external classifiers, and in these
      cases we cannot bypass the queue discipline (a packet could be dropped
      by a classifier) without the admin asking for it, or further changes.

      It's worth doing this, especially for SFQ, to avoid dirtying memory when
      no packets are already waiting in the queue.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      23624935
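
      A sketch of the fast path this flag gates in __dev_xmit_skb(); it is
      paraphrased, and the real code also deals with busylock contention and
      byte/packet statistics:

      #include <linux/netdevice.h>
      #include <net/pkt_sched.h>
      #include <net/sch_generic.h>

      static bool example_try_bypass(struct sk_buff *skb, struct Qdisc *q,
                                     struct net_device *dev,
                                     struct netdev_queue *txq,
                                     spinlock_t *root_lock)
      {
              /* Empty qdisc that opted in: hand the packet straight to the
               * device, never touching the qdisc's internal queues. */
              if ((q->flags & TCQ_F_CAN_BYPASS) && !qdisc_qlen(q) &&
                  qdisc_run_begin(q)) {
                      if (sch_direct_xmit(skb, q, dev, txq, root_lock))
                              __qdisc_run(q);   /* driver pushed back: drain */
                      else
                              qdisc_run_end(q);
                      return true;
              }
              return false;             /* fall back to enqueue + qdisc_run */
      }
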
  34. 21 Jan 2011 (1 commit)
    • net_sched: RCU conversion of stab · a2da570d
      Committed by Eric Dumazet
      This patch converts stab qdisc management to RCU, so that we can perform
      the qdisc_calculate_pkt_len() call before taking the qdisc lock.

      This shortens the time the lock is held in __dev_xmit_skb().

      This permits more qdiscs to get TCQ_F_CAN_BYPASS status, avoiding a lot
      of cache misses and so reducing latencies.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McHardy <kaber@trash.net>
      CC: Jesper Dangaard Brouer <hawk@diku.dk>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      CC: Jamal Hadi Salim <hadi@cyberus.ca>
      CC: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a2da570d
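
      A sketch of the RCU read side this conversion enables, assuming the
      qdisc's stab pointer became an __rcu pointer (paraphrased from the
      description):

      #include <linux/rcupdate.h>
      #include <net/sch_generic.h>

      static void example_calculate_pkt_len(struct sk_buff *skb, struct Qdisc *sch)
      {
              /* Read under the rcu_read_lock_bh() that dev_queue_xmit() already
               * holds, so this can run before the qdisc spinlock is taken. */
              struct qdisc_size_table *stab = rcu_dereference_bh(sch->stab);

              if (stab)
                      __qdisc_calculate_pkt_len(skb, stab);
      }
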
  35. 20 Jan 2011 (2 commits)
    • net_sched: cleanups · cc7ec456
      Committed by Eric Dumazet
      Clean up net/sched code to current CodingStyle and practices.

      Reduce inline abuse.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cc7ec456
    • net_sched: implement a root container qdisc sch_mqprio · b8970f0b
      Committed by John Fastabend
      This implements an mqprio queueing discipline that by default creates
      a pfifo_fast qdisc per tx queue and provides the needed configuration
      interface.

      Using the mqprio qdisc, the number of traffic classes (tcs) currently in
      use, along with the range of queues allotted to each class, can be
      configured. By default skbs are mapped to traffic classes using the skb
      priority. This mapping is configurable.
      
      Configurable parameters,
      
      struct tc_mqprio_qopt {
      	__u8    num_tc;
      	__u8    prio_tc_map[TC_BITMASK + 1];
      	__u8    hw;
      	__u16   count[TC_MAX_QUEUE];
      	__u16   offset[TC_MAX_QUEUE];
      };
      
      Here the count/offset pairing gives the queue alignment and the
      prio_tc_map gives the mapping from skb->priority to tc.
      
      The hw bit determines if the hardware should configure the count
      and offset values. If the hardware bit is set then the operation
      will fail if the hardware does not implement the ndo_setup_tc
      operation. This is to avoid undetermined states where the hardware
      may or may not control the queue mapping. Also minimal bounds
      checking is done on the count/offset to verify a queue does not
      exceed num_tx_queues and that queue ranges do not overlap. Otherwise
      it is left to user policy or hardware configuration to create
      useful mappings.
      
      It is expected that hardware QoS schemes can be implemented by
      creating appropriate queue mappings in ndo_setup_tc().
      
      One expected use case is that drivers will use ndo_setup_tc to map
      queue ranges onto 802.1Q traffic classes. This provides a generic
      mechanism to map network traffic onto these traffic classes and
      removes the need for lower layer drivers to know specifics about
      traffic types.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b8970f0b
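
      A sketch of how this configuration is consumed at transmit time, using
      the traffic-class helpers this infrastructure builds on; the hash source
      and the helper below are illustrative:

      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      static u16 example_mqprio_pick_queue(const struct net_device *dev,
                                           const struct sk_buff *skb, u32 hash)
      {
              /* skb->priority -> traffic class via prio_tc_map ... */
              u8 tc = netdev_get_prio_tc_map(dev, skb->priority);
              /* ... then the class's offset/count pair bounds the txq range. */
              u16 offset = dev->tc_to_txq[tc].offset;
              u16 count  = dev->tc_to_txq[tc].count;

              if (!count)
                      return 0;         /* class not configured: fall back */
              return offset + (u16)(hash % count);
      }
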
  36. 17 Dec 2010 (1 commit)
  37. 02 Dec 2010 (1 commit)