1. 02 Jul 2011, 3 commits
    • decnet: Reduce switch/case indent · 06f8fe11
      Committed by Joe Perches
      Make the case labels the same indent as the switch.
      
      git diff -w shows differences for line wrapping.
      (fit multiple lines to 80 columns, join where possible)
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • appletalk: Reduce switch/case indent · 4a9e4b09
      Committed by Joe Perches
      Make the case labels the same indent as the switch.
      
      (git diff -w net/appletalk shows no difference)
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rtnl: provide link dump consistency info · 4e985ada
      Committed by Thomas Graf
      This patch adds a change sequence counter to each net namespace
      which is bumped whenever a netdevice is added to or removed from
      the list. If such a change occurs while a link dump is taking
      place, the dump will have the NLM_F_DUMP_INTR flag set in the
      first message which has been interrupted and in all subsequent
      messages of the same dump.
      
      Note that links may still be modified or renamed while a dump is
      taking place, but we can guarantee that userspace receives a
      complete list of links and does not miss any.
      
      Testing:
      I have added 500 VLAN netdevices to make sure the dump is split
      over multiple messages. Then while continuously dumping links in
      one process I also continuously deleted and re-added a dummy
      netdevice in another process. Multiple dumps per second had
      the NLM_F_DUMP_INTR flag set.
      
      I guess we can wait for Johannes' patch to hit net-next via the
      wireless tree. I just wanted to give this some testing right away.
      Signed-off-by: Thomas Graf <tgraf@infradead.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 29 Jun 2011, 1 commit
  3. 27 Jun 2011, 1 commit
    • net_sched: fix dequeuer fairness · d5b8aa1d
      Committed by jamal
      Results on a dummy device can be seen in my netconf 2011
      slides. These results are for a 10GbE Intel IXGBE NIC on
      another i5 machine, with specs very similar to the one used
      for the netconf 2011 results.
      It turns out this is a lot worse than dummy, so this patch
      is even more beneficial for 10G.
      
      Test setup:
      ----------
      
      System under test sending packets out.
      Additional box connected directly, dropping packets.
      Installed a prio qdisc on the eth device; the default netdev
      queue length of 1000 was used as is.
      The 3 prio bands were each set to 100 (this didn't factor
      into the results).
      
      5 packet runs were made and the middle 3 picked.
      
      results
      -------
      
      The "cpu" column indicates which cpu the sample was
      taken on.
      The "Pkt runx" columns carry the number of packets a cpu
      dequeued when forced to be in the "dequeuer" role.
      The "avg" row for each run is the number of packets each
      cpu should dequeue if the system were fair.
      
      3.0-rc4      (plain)
      cpu         Pkt run1        Pkt run2        Pkt run3
      ================================================
      cpu0        21853354        21598183        22199900
      cpu1          431058          473476          393159
      cpu2          481975          477529          458466
      cpu3        23261406        23412299        22894315
      avg         11506948        11490372        11486460
      
      3.0-rc4 with patch and default weight 64
      cpu         Pkt run1        Pkt run2        Pkt run3
      ================================================
      cpu0        13205312        13109359        13132333
      cpu1        10189914        10159127        10122270
      cpu2        10213871        10124367        10168722
      cpu3        13165760        13164767        13096705
      avg         11693714        11639405        11630008
      
      As you can see the system is still not perfect but
      is a lot better than what it was before...
      
      At the moment we use the old backlog weight, weight_p
      which is 64 packets. It seems to be reasonably fine
      with that value.
      The system could be made more fair if we reduce the
      weight_p (as per my presentation), but we are going
      to affect the shared backlog weight. Unless deemed
      necessary, I think the default value is fine. If not
      we could add yet another knob.
      Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 25 Jun 2011, 22 commits
  5. 23 Jun 2011, 3 commits
    • dcb: use nlmsg_free() instead of kfree() · 4d054f2f
      Committed by Dan Carpenter
      These sk_buff structs were allocated with nlmsg_new() so they should
      be freed with nlmsg_free().
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • nl80211: use netlink consistent dump feature for BSS dumps · 9720bb3a
      Committed by Johannes Berg
      Use the new consistent dump feature from (generic) netlink
      to advertise when dumps are incomplete.
      
      Readers may note that this does not initialize the
      rdev->bss_generation counter to a non-zero value. This is
      still OK since the value is modified only under spinlock
      when the list is modified. Since the dump code holds the
      spinlock, the value will either be > 0 already, or the
      list will still be empty in which case a consistent dump
      will actually be made (and be empty).
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
    • netlink: advertise incomplete dumps · 670dc283
      Committed by Johannes Berg
      Consider the following situation:
       * a dump that would show 8 entries, four in the first
         round, and four in the second
       * between the first and second rounds, 6 entries are
         removed
       * now the second round will not show any entry, and
         even if there is a sequence/generation counter the
         application will not know
      
      To solve this problem, add a new flag NLM_F_DUMP_INTR
      to the netlink header that indicates the dump wasn't
      consistent. This flag can also be set on the MSG_DONE
      message that terminates the dump, so the situation
      above can be detected.
      
      To achieve this, add a sequence counter to the netlink
      callback struct. Of course, netlink code still needs
      to use this new functionality. The correct way to do
      that is to always set cb->seq when a dumpit callback
      is invoked and call nl_dump_check_consistent() for
      each new message. The core code will also call this
      function for the final MSG_DONE message.
      
      To make it usable with generic netlink, a new function
      genlmsg_nlhdr() is needed to obtain the netlink header
      from the genetlink user header.
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
  6. 22 Jun 2011, 10 commits