1. 07 Aug 2008 (1 commit)
  2. 05 Aug 2008 (1 commit)
  3. 04 Aug 2008 (1 commit)
  4. 03 Aug 2008 (2 commits)
  5. 01 Aug 2008 (1 commit)
  6. 30 Jul 2008 (1 commit)
    • pkt_sched: Fix OOPS on ingress qdisc add. · 8d50b53d
      Committed by David S. Miller
      Bug report from Steven Jan Springl:
      
      	Issuing the following command causes a kernel oops:
      		tc qdisc add dev eth0 handle ffff: ingress
      
      The problem mostly stems from all of the special case handling of
      ingress qdiscs.
      
      So, to fix this, do the grafting operation the same way we do for TX
      qdiscs.  Which means that dev_activate() and dev_deactivate() now do
      the "qdisc_sleeping <--> qdisc" transitions on dev->rx_queue too.
      
      Future simplifications are possible now, mainly because it is
      impossible for dev_queue->{qdisc,qdisc_sleeping} to be NULL.  There
      are NULL checks all over to handle the ingress qdisc special case
      that used to exist before this commit.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 26 Jul 2008 (1 commit)
  8. 24 Jul 2008 (1 commit)
    • netdev: Remove warning from __netif_schedule(). · 5b3ab1db
      Committed by David S. Miller
      It isn't helping anything and we aren't going to be able to change all
      the drivers that do queue wakeups in strange situations.
      
      Just letting a noop_qdisc get scheduled will work because when
      qdisc_run() executes via net_tx_work() it will simply find no packets
      pending when it makes the ->dequeue() call in qdisc_restart.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 23 Jul 2008 (2 commits)
  10. 22 Jul 2008 (4 commits)
  11. 20 Jul 2008 (1 commit)
  12. 19 Jul 2008 (1 commit)
    • pkt_sched: Manage qdisc list inside of root qdisc. · 30723673
      Committed by David S. Miller
      Idea is from Patrick McHardy.
      
      Instead of managing the list of qdiscs on the device level, manage it
      in the root qdisc of a netdev_queue.  This solves all kinds of
      visibility issues during qdisc destruction.
      
      The way to iterate over all qdiscs of a netdev_queue is to visit
      the netdev_queue->qdisc, and then traverse its list.
      
      The only special case is to ignore builtin qdiscs at the root when
      dumping or doing a qdisc_lookup().  That was not needed previously
      because builtin qdiscs were not added to the device's qdisc_list.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 18 Jul 2008 (7 commits)
    • pkt_sched: Kill netdev_queue lock. · 83874000
      Committed by David S. Miller
      We can simply use the qdisc->q.lock for all of the
      qdisc tree synchronization.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netdevice: Move qdisc_list back into net_device proper. · ead81cc5
      Committed by David S. Miller
      And give it its own lock.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • pkt_sched: Schedule qdiscs instead of netdev_queue. · 37437bb2
      Committed by David S. Miller
      When we have shared qdiscs, packets come out of the qdiscs
      for multiple transmit queues.
      
      Therefore it doesn't make any sense to schedule the transmit
      queue when logically we cannot know ahead of time the TX
      queue of the SKB that the qdisc->dequeue() will give us.
      
      Just for sanity I added a BUG check to make sure we never
      get into a state where the noop_qdisc is scheduled.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Implement simple sw TX hashing. · 8f0f2223
      Committed by David S. Miller
      It just xor hashes over IPv4/IPv6 addresses and ports of transport.
      
      The only assumption it makes is that skb_network_header() is set
      correctly.
      
      With bug fixes from Eric Dumazet.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netdev: Add netdev->select_queue() method. · eae792b7
      Committed by David S. Miller
      Devices or device layers can set this to control the queue selection
      performed by dev_pick_tx().
      
      This function runs under RCU protection, which allows overriding
      functions to have some way of synchronizing with things like dynamic
      ->real_num_tx_queues adjustments.
      
      This makes the spinlock prefetch in dev_queue_xmit() a little bit
      less effective, but that's the price right now for correctness.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Use queue aware tests throughout. · fd2ea0a7
      Committed by David S. Miller
      This effectively "flips the switch" by making the core networking
      and multiqueue-aware drivers use the new TX multiqueue structures.
      
      Non-multiqueue drivers need no changes.  The interfaces they use such
      as netif_stop_queue() degenerate into an operation on TX queue zero.
      So everything "just works" for them.
      
      Code that really wants to do "X" to all TX queues now invokes a
      routine that does so, such as netif_tx_wake_all_queues(),
      netif_tx_stop_all_queues(), etc.
      
      pktgen and netpoll required a little bit more surgery than the others.
      
      In particular the pktgen changes, whilst functional, could be largely
      improved.  The initial check in pktgen_xmit() will sometimes check the
      wrong queue, which is mostly harmless.  The thing to do is probably to
      invoke fill_packet() earlier.
      
      The bulk of the netpoll changes is to make the code operate solely on
      the TX queue indicated by the SKB queue mapping.
      
      Setting of the SKB queue mapping is entirely confined inside of
      net/core/dev.c:dev_pick_tx().  If we end up needing any kind of
      special semantics (drops, for example) it will be implemented here.
      
      Finally, we now have a "real_num_tx_queues" which is where the driver
      indicates how many TX queues are actually active.
      
      With IGB changes from Jeff Kirsher.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netdev: Allocate multiple queues for TX. · e8a0464c
      Committed by David S. Miller
      alloc_netdev_mq() now allocates an array of netdev_queue
      structures for TX, based upon the queue_count argument.
      
      Furthermore, all accesses to the TX queues are now vectored
      through the netdev_get_tx_queue() and netdev_for_each_tx_queue()
      interfaces.  This makes it easy to grep the tree for all
      things that want to get to a TX queue of a net device.
      
      Problem spots which are not really multiqueue aware yet, and
      only work with one queue, can easily be spotted by grepping
      for all netdev_get_tx_queue() calls that pass in a zero index.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  14. 15 Jul 2008 (4 commits)
  15. 09 Jul 2008 (9 commits)
  16. 07 Jul 2008 (1 commit)
  17. 02 Jul 2008 (1 commit)
  18. 28 Jun 2008 (1 commit)