1. 04 Jan 2017 · 3 commits
  2. 02 Dec 2016 · 1 commit
  3. 05 Nov 2016 · 1 commit
  4. 18 Oct 2016 · 1 commit
    • ethernet/intel: use core min/max MTU checking · 91c527a5
      Committed by Jarod Wilson
      e100: min_mtu 68, max_mtu 1500
      - remove e100_change_mtu entirely; it is identical to the old
        eth_change_mtu and no longer serves a purpose. There is no need to set
        min_mtu or max_mtu explicitly, as ether_setup() already sets them to
        68 and 1500 (see the sketch after this entry).
      
      e1000: min_mtu 46, max_mtu 16110
      
      e1000e: min_mtu 68, max_mtu varies based on adapter
      
      fm10k: min_mtu 68, max_mtu 15342
      - remove fm10k_change_mtu entirely; it does nothing now
      
      i40e: min_mtu 68, max_mtu 9706
      
      i40evf: min_mtu 68, max_mtu 9706
      
      igb: min_mtu 68, max_mtu 9216
      - The driver claims and checks two different "max" frame sizes; the
        larger value was never actually relevant, so max_mtu is set to the
        smaller of the two here to retain identical behavior.
      
      igbvf: min_mtu 68, max_mtu 9216
      - Same issue as igb, duplicated in this driver
      
      ixgb: min_mtu 68, max_mtu 16114
      - Also remove the pointless old == new check, as dev_set_mtu already does it
      
      ixgbe: min_mtu 68, max_mtu 9710
      
      ixgbevf: min_mtu 68, max_mtu dependent on hardware/firmware
      - Some hardware can only handle a max_mtu of 1504 on a VF, others 9710
      
      CC: netdev@vger.kernel.org
      CC: intel-wired-lan@lists.osuosl.org
      CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: Jarod Wilson <jarod@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
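
      A minimal sketch of the pattern these conversions follow (illustrative
      only, not the actual Intel driver code; the jumbo limit below is a
      made-up value): the driver declares its MTU range once, and
      dev_set_mtu() in the core rejects out-of-range values before any
      ndo_change_mtu callback runs.

          #include <linux/netdevice.h>
          #include <linux/if_ether.h>

          /* Hypothetical probe-time helper: declare the range instead of
           * checking it in a private ndo_change_mtu implementation. */
          static void example_declare_mtu_range(struct net_device *netdev)
          {
                  /* ether_setup() already defaults these to 68 / 1500 */
                  netdev->min_mtu = ETH_MIN_MTU;                     /* 68 */
                  netdev->max_mtu = 9216 - (ETH_HLEN + ETH_FCS_LEN); /* made-up jumbo cap */
          }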
  5. 23 Sep 2016 · 1 commit
  6. 19 Aug 2016 · 2 commits
  7. 22 Jul 2016 · 5 commits
  8. 21 May 2016 · 2 commits
  9. 04 May 2016 · 4 commits
  10. 25 Apr 2016 · 3 commits
  11. 08 Apr 2016 · 1 commit
  12. 05 Apr 2016 · 2 commits
  13. 30 Mar 2016 · 2 commits
  14. 18 Mar 2016 · 1 commit
    • mm: introduce page reference manipulation functions · fe896d18
      Committed by Joonsoo Kim
      The success of CMA allocation largely depends on the success of
      migration, and the key factor there is the page reference count.  Until
      now, page references have been manipulated by calling atomic functions
      directly, so we cannot track who manipulates them and where, which
      makes it hard to find the actual reason for a CMA allocation failure.
      CMA allocation should be guaranteed to succeed, so finding the
      offending place is really important.

      In this patch, call sites that manipulate the page reference are
      converted to the newly introduced wrapper functions.  This is a
      preparation step for adding a tracepoint to each page reference
      manipulation function.  With that facility, we can easily find the
      reason for a CMA allocation failure.  There is no functional change in
      this patch.

      In addition, this patch also converts the reference read sites.  This
      will help a second step that renames page._count to something else and
      prevents later attempts to access it directly (suggested by Andrew).
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
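
      A small sketch of the conversion pattern described above (the
      page_ref_* helpers come from <linux/page_ref.h>, which this patch
      introduces; the surrounding functions are hypothetical): direct atomic
      accesses to page->_count become wrapper calls that a later patch can
      hook with tracepoints.

          #include <linux/mm_types.h>
          #include <linux/page_ref.h>

          /* Hypothetical callers showing the before/after shape. */
          static inline void example_get_extra_ref(struct page *page)
          {
                  /* was: atomic_inc(&page->_count); */
                  page_ref_inc(page);
          }

          static inline int example_read_refcount(struct page *page)
          {
                  /* was: atomic_read(&page->_count); */
                  return page_ref_count(page);
          }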
  15. 30 Dec 2015 · 2 commits
  16. 03 Dec 2015 · 3 commits
  17. 24 Nov 2015 · 4 commits
  18. 19 Nov 2015 · 1 commit
    • net: provide generic busy polling to all NAPI drivers · 93d05d4a
      Committed by Eric Dumazet
      NAPI drivers no longer need to observe a particular protocol to benefit
      from busy polling (CONFIG_NET_RX_BUSY_POLL=y).

      napi_hash_add() and napi_hash_del() are now called automatically by the
      core networking stack, from netif_napi_add() and netif_napi_del()
      respectively.

      This patch depends on free_netdev() and netif_napi_del() being called
      from process context, which seems to be the norm.

      Drivers might still prefer to call napi_hash_del() on their own, since
      they can combine all the RCU grace periods into a single one, knowing
      the lifetime of their NAPI structures, while the core networking stack
      has no idea whether such combining is possible.

      Once this patch proves not to bring serious regressions, we will clean
      up drivers to either remove napi_hash_del() or provide appropriate RCU
      grace period combining.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
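
      A hedged sketch of what this means on the driver side (generic example
      code, not taken from any particular driver; the names and weight value
      are made up): the driver only registers its poll routine, and the core
      adds and removes the NAPI context from the busy-poll hash on its
      behalf.

          #include <linux/netdevice.h>

          #define EXAMPLE_NAPI_WEIGHT 64  /* hypothetical polling budget */

          static int example_poll(struct napi_struct *napi, int budget)
          {
                  int done = 0;

                  /* ... clean the RX ring, up to 'budget' packets ... */
                  if (done < budget)
                          napi_complete_done(napi, done);
                  return done;
          }

          static void example_register_napi(struct net_device *dev,
                                            struct napi_struct *napi)
          {
                  /* the core calls napi_hash_add() in here; no extra
                   * busy-poll code is needed in the driver */
                  netif_napi_add(dev, napi, example_poll, EXAMPLE_NAPI_WEIGHT);
          }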
  19. 23 Oct 2015 · 1 commit
    • ixgbe, ixgbevf: Add new mbox API xcast mode · 8443c1a4
      Committed by Hiroshi Shimamoto
      The limit on the number of multicast addresses per VF is not enough for
      large-scale servers using the SR-IOV feature. IPv6 requires a multicast
      MAC address for each IP address in order to handle Neighbor
      Solicitation messages, so we could not assign more than 30 IPv6
      addresses to a single VF.

      This patch introduces a new mailbox API, IXGBE_VF_UPDATE_XCAST_MODE, to
      update the multicast mode of a VF. It adds three modes:
        - NONE     only L2 exact match addresses or Flow Director enabled
        - MULTI    BAM and ROMPE set
        - ALLMULTI BAM, ROMPE and MPE set

      If a guest VF user wants more than 30 multicast MAC addresses, they set
      IFF_ALLMULTI to request that the PF update the xcast mode and enable VF
      multicast promiscuous mode (see the sketch after this entry).

      On the other hand, enabling VF multicast promiscuous mode may affect
      security and performance on the NIC's network, so only a trusted VF can
      enable it. The behavior of an untrusted VF is the same as in the
      previous version.
      Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
      Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
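
      A sketch of the three modes listed above as they might appear in the
      shared mailbox header (the enum follows the commit description; the
      exact identifiers and the VF-side helper are illustrative, not
      confirmed driver code):

          #include <linux/netdevice.h>

          /* Modes carried by the IXGBE_VF_UPDATE_XCAST_MODE mailbox message. */
          enum example_xcast_modes {
                  EXAMPLE_XCAST_MODE_NONE = 0,  /* L2 exact match / Flow Director only */
                  EXAMPLE_XCAST_MODE_MULTI,     /* BAM and ROMPE set */
                  EXAMPLE_XCAST_MODE_ALLMULTI,  /* BAM, ROMPE and MPE set */
          };

          /* Hypothetical VF-side helper: pick the mode to request from the PF. */
          static int example_pick_xcast_mode(struct net_device *netdev)
          {
                  if (netdev->flags & IFF_ALLMULTI)
                          return EXAMPLE_XCAST_MODE_ALLMULTI;  /* needs a trusted VF */
                  return EXAMPLE_XCAST_MODE_MULTI;
          }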