1. 10 April 2017, 2 commits
    • Revert "rtnl: Add support for netdev event to link messages" · bf74b20d
      Committed by David S. Miller
      This reverts commit def12888.
      
      As per discussion between Roopa Prabhu and David Ahern, it is
      advisable that we instead have the code collect the setlink-triggered
      events into a bitmask emitted in the IFLA_EVENT netlink attribute
      (a minimal sketch of that approach follows this entry).
      Signed-off-by: David S. Miller <davem@davemloft.net>
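      A minimal sketch of the direction described above, assuming the u32
      IFLA_EVENT attribute named in the commit text; the helper name and the
      way the caller builds the bitmask are illustrative, not upstream code:

      #include <net/netlink.h>

      /* Hypothetical fill helper: emit the setlink-triggered events, already
       * collected into a bitmask by the caller, as a single u32 attribute. */
      static int rtnl_fill_event_mask(struct sk_buff *skb, u32 event_mask)
      {
      	if (!event_mask)
      		return 0;		/* nothing happened, emit nothing */

      	if (nla_put_u32(skb, IFLA_EVENT, event_mask))
      		return -EMSGSIZE;	/* link message buffer too small */

      	return 0;
      }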
    • Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue · 0492b71c
      Committed by David S. Miller
      Jeff Kirsher says:
      
      ====================
      40GbE Intel Wired LAN Driver Updates 2017-04-08
      
      This series contains updates to i40e and i40evf only.
      
      Mitch fixes an issue where the client driver (i40iw) was attempting to
      load on x710 devices (which do not support iWARP); we now only register
      with the client if iWARP is supported.
      
      Jake fixes up error messages to better clarify to the user when adding an
      invalid flow type.  Updates the driver to look up the MAC address from
      eth_get_platform_mac_address() first before checking what the firmware
      provides.  Cleans up code so that we are not repeating a duplicate loop,
      by checking both transmit and receive queues in a single loop.  Also
      removes the definitions of flags that were never used.
      
      Alex does cleanup so that we are always updating pf->flags when a change
      is made to the private flags.  Adds support for 3K buffers to the receive
      path so that we can provide the additional padding needed in the event
      of NET_IP_ALIGN being non-zero or a cache line being greater than 64
      bytes.  Adds support for build_skb() to i40e/i40evf (a build_skb() sketch
      follows this entry).
      
      Maciej adjusts the scope of the rtnl lock held during reset because it
      was stopping other PFs from running their reset procedures.
      
      Alan reduces code complexity in i40e_detect_recover_hung_queue().
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
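      A minimal sketch of the build_skb() receive pattern referred to in the
      changelog above, assuming the driver already owns a DMA-filled buffer
      with headroom in front of the frame; apart from build_skb(),
      skb_reserve() and skb_put(), the names and exact layout are illustrative:

      /* Wrap an existing receive buffer in an sk_buff instead of copying it.
       * The extra room for headroom plus struct skb_shared_info is what
       * motivates the larger 3K buffers mentioned above. */
      static struct sk_buff *rx_build_skb(void *buf, unsigned int size)
      {
      	unsigned int truesize = SKB_DATA_ALIGN(NET_SKB_PAD + size) +
      				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
      	struct sk_buff *skb;

      	skb = build_skb(buf, truesize);
      	if (unlikely(!skb))
      		return NULL;

      	skb_reserve(skb, NET_SKB_PAD);	/* step past the headroom */
      	skb_put(skb, size);		/* the received bytes become data */

      	return skb;
      }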
  2. 09 April 2017, 5 commits
    • Merge branch 'dsa-receive-path-simplifications' · 417d978f
      Committed by David S. Miller
      Florian Fainelli says:
      
      ====================
      net: dsa: Receive path simplifications
      
      This patch series factors the common code found in all tag implementations
      into dsa_switch_rcv().  The original motivation was to add GRO support, but
      that may be a lot of work with unclear benefits at this point.
      
      Changes in v2:
      - take care of tag_mtk.c in the process
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dsa: Factor bottom tag receive functions · a86d8bec
      Committed by Florian Fainelli
      All DSA tag receive functions do strictly the same thing after they have
      located the originating source port from their tag-specific protocol:
      
      - push ETH_HLEN bytes
      - set pkt_type to PACKET_HOST
      - call eth_type_trans()
      - bump up counters
      - call netif_receive_skb()
      
      Factor all of that into dsa_switch_rcv().  This also makes the tag receive
      functions return a pointer to an sk_buff, which keeps them symmetric with
      the xmit function (a sketch of the factored path follows this entry).
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
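      A sketch of the factored receive tail described above, assuming a
      hypothetical dsa_rcv_parse() hook standing in for the tag-specific
      parser; it mirrors the five common steps listed in the commit message
      and is illustrative rather than the exact upstream code:

      static int dsa_switch_rcv_sketch(struct sk_buff *skb, struct net_device *dev)
      {
      	/* The per-protocol parser only locates the source port, points
      	 * skb->dev at the slave device and strips its tag. */
      	skb = dsa_rcv_parse(skb, dev);
      	if (!skb)
      		return 0;

      	skb_push(skb, ETH_HLEN);		/* restore the Ethernet header */
      	skb->pkt_type = PACKET_HOST;
      	skb->protocol = eth_type_trans(skb, skb->dev);

      	skb->dev->stats.rx_packets++;		/* bump up counters */
      	skb->dev->stats.rx_bytes += skb->len;

      	netif_receive_skb(skb);
      	return 0;
      }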
    • net: dsa: Move skb_unshare() to dsa_switch_rcv() · 16c5dcb1
      Committed by Florian Fainelli
      All DSA tag receive functions need to unshare the skb before mangling it;
      move this into the generic dsa_switch_rcv() function, which allows the tag
      receive functions to return their mangled skb without having to care about
      freeing a NULL skb (a short sketch follows this entry).
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
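      A short sketch of the unshare-once pattern, again using the hypothetical
      dsa_rcv_parse() hook from the previous sketch; dsa_switch_rcv() owns the
      error handling, so the parser just returns NULL when it gives up:

      	skb = skb_unshare(skb, GFP_ATOMIC);
      	if (!skb)
      		return 0;		/* skb_unshare() freed the original */

      	nskb = dsa_rcv_parse(skb, dev);	/* tag parser, may reject the frame */
      	if (!nskb) {
      		kfree_skb(skb);		/* central cleanup, not the parser's job */
      		return 0;
      	}
      	skb = nskb;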
    • net: dsa: Do not check for NULL dst in tag parsers · 9d7f9c4f
      Committed by Florian Fainelli
      dsa_switch_rcv() already tests for dst == NULL, so there is no need to duplicate
      the same check within the tag receive functions.
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • skbuff: Extend gso_type to unsigned int. · 7f564528
      Committed by Steffen Klassert
      All available gso_type flags are currently in use, so
      extend gso_type from 'unsigned short' to 'unsigned int'
      to be able to add further flags.

      We reorder the struct skb_shared_info to use
      two bytes of the four-byte hole before dataref.
      All fields before dataref are cleared, i.e.
      four bytes more than before the change.

      The remaining two-byte hole is moved to the
      beginning of the structure; this protects us
      from immediate overwrites on out-of-bounds writes
      to the sk_buff head (a sketch of the resulting
      declaration follows this entry).
      
      Structure layout on x86-64 before the change:
      
      struct skb_shared_info {
      	unsigned char              nr_frags;             /*     0     1 */
      	__u8                       tx_flags;             /*     1     1 */
      	short unsigned int         gso_size;             /*     2     2 */
      	short unsigned int         gso_segs;             /*     4     2 */
      	short unsigned int         gso_type;             /*     6     2 */
      	struct sk_buff *           frag_list;            /*     8     8 */
      	struct skb_shared_hwtstamps hwtstamps;           /*    16     8 */
      	u32                        tskey;                /*    24     4 */
      	__be32                     ip6_frag_id;          /*    28     4 */
      	atomic_t                   dataref;              /*    32     4 */
      
      	/* XXX 4 bytes hole, try to pack */
      
      	void *                     destructor_arg;       /*    40     8 */
      	skb_frag_t                 frags[17];            /*    48   272 */
      	/* --- cacheline 5 boundary (320 bytes) --- */
      
      	/* size: 320, cachelines: 5, members: 12 */
      	/* sum members: 316, holes: 1, sum holes: 4 */
      };
      
      Structure layout on x86-64 after the change:
      
      struct skb_shared_info {
      	short unsigned int         _unused;              /*     0     2 */
      	unsigned char              nr_frags;             /*     2     1 */
      	__u8                       tx_flags;             /*     3     1 */
      	short unsigned int         gso_size;             /*     4     2 */
      	short unsigned int         gso_segs;             /*     6     2 */
      	struct sk_buff *           frag_list;            /*     8     8 */
      	struct skb_shared_hwtstamps hwtstamps;           /*    16     8 */
      	unsigned int               gso_type;             /*    24     4 */
      	u32                        tskey;                /*    28     4 */
      	__be32                     ip6_frag_id;          /*    32     4 */
      	atomic_t                   dataref;              /*    36     4 */
      	void *                     destructor_arg;       /*    40     8 */
      	skb_frag_t                 frags[17];            /*    48   272 */
      	/* --- cacheline 5 boundary (320 bytes) --- */
      
      	/* size: 320, cachelines: 5, members: 13 */
      };
      Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
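      The source-level declaration implied by the "after" layout above would
      look roughly like this; field order and types are taken from the pahole
      output, everything else is illustrative:

      struct skb_shared_info {
      	unsigned short	_unused;	/* relocated two-byte hole, shields
      					 * the rest of the struct from
      					 * overflows of the sk_buff head */
      	unsigned char	nr_frags;
      	__u8		tx_flags;
      	unsigned short	gso_size;
      	unsigned short	gso_segs;
      	struct sk_buff	*frag_list;
      	struct skb_shared_hwtstamps hwtstamps;
      	unsigned int	gso_type;	/* widened from unsigned short */
      	u32		tskey;
      	__be32		ip6_frag_id;
      	atomic_t	dataref;
      	void		*destructor_arg;
      	skb_frag_t	frags[17];	/* MAX_SKB_FRAGS in this config */
      };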
  3. 08 April 2017, 28 commits
  4. 07 April 2017, 5 commits