1. 31 Jan 2017, 1 commit
    • gianfar: synchronize DMA API usage by free_skb_rx_queue w/ gfar_new_page · 4af0e5bb
      Arseny Solokha authored
      In spite of switching to paged allocation of Rx buffers, the driver still
      called dma_unmap_single() in the Rx queues tear-down path.
      
      The DMA region unmapping code in free_skb_rx_queue() basically predates
      the introduction of paged allocation to the driver. When that refactoring
      took place, it apparently was not updated to reflect the change in DMA API
      usage made by its counterpart, gfar_new_page().
      
      As a result, setting an interface to the DOWN state now yields the following:
      
        # ip link set eth2 down
        fsl-gianfar ffe24000.ethernet: DMA-API: device driver frees DMA memory with wrong function [device address=0x000000001ecd0000] [size=40]
        ------------[ cut here ]------------
        WARNING: CPU: 1 PID: 189 at lib/dma-debug.c:1123 check_unmap+0x8e0/0xa28
        CPU: 1 PID: 189 Comm: ip Tainted: G           O    4.9.5 #1
        task: dee73400 task.stack: dede2000
        NIP: c02101e8 LR: c02101e8 CTR: c0260d74
        REGS: dede3bb0 TRAP: 0700   Tainted: G           O     (4.9.5)
        MSR: 00021000 <CE,ME>  CR: 28002222  XER: 00000000
      
        GPR00: c02101e8 dede3c60 dee73400 000000b6 dfbd033c dfbd36c4 1f622000 dede2000
        GPR08: 00000007 c05b1634 1f622000 00000000 22002484 100a9904 00000000 00000000
        GPR16: 00000000 db4c849c 00000002 db4c8480 00000001 df142240 db4c84bc 00000000
        GPR24: c0706148 c0700000 00029000 c07552e8 c07323b4 dede3cb8 c07605e0 db535540
        NIP [c02101e8] check_unmap+0x8e0/0xa28
        LR [c02101e8] check_unmap+0x8e0/0xa28
        Call Trace:
        [dede3c60] [c02101e8] check_unmap+0x8e0/0xa28 (unreliable)
        [dede3cb0] [c02103b8] debug_dma_unmap_page+0x88/0x9c
        [dede3d30] [c02dffbc] free_skb_resources+0x2c4/0x404
        [dede3d80] [c02e39b4] gfar_close+0x24/0xc8
        [dede3da0] [c0361550] __dev_close_many+0xa0/0xf8
        [dede3dd0] [c03616f0] __dev_close+0x2c/0x4c
        [dede3df0] [c036b1b8] __dev_change_flags+0xa0/0x174
        [dede3e10] [c036b2ac] dev_change_flags+0x20/0x60
        [dede3e30] [c03e130c] devinet_ioctl+0x540/0x824
        [dede3e90] [c0347dcc] sock_ioctl+0x134/0x298
        [dede3eb0] [c0111814] do_vfs_ioctl+0xac/0x854
        [dede3f20] [c0111ffc] SyS_ioctl+0x40/0x74
        [dede3f40] [c000f290] ret_from_syscall+0x0/0x3c
        --- interrupt: c01 at 0xff45da0
            LR = 0xff45cd0
        Instruction dump:
        811d001c 7c66482e 813d0020 9061000c 807f000c 5463103a 7cc6182e 3c60c052
        386309ac 90c10008 4cc63182 4826b845 <0fe00000> 4bfffa60 3c80c052 388402c4
        ---[ end trace 695ae6d7ac1d0c47 ]---
        Mapped at:
         [<c02e22a8>] gfar_alloc_rx_buffs+0x178/0x248
         [<c02e3ef0>] startup_gfar+0x368/0x570
         [<c036aeb4>] __dev_open+0xdc/0x150
         [<c036b1b8>] __dev_change_flags+0xa0/0x174
         [<c036b2ac>] dev_change_flags+0x20/0x60
      
      Even though the issue was discovered in the 4.9 kernel, the code in question
      is identical in the current net and net-next trees.
      
      Fixes: 75354148 ("gianfar: Add paged allocation and Rx S/G")
      Signed-off-by: Arseny Solokha <asolokha@kb.kras.ru>
      Acked-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4af0e5bb
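
      A minimal sketch of the kind of teardown the commit message describes, unmapping
      Rx buffers with the page-based DMA API to match gfar_new_page(). The struct and
      field names (gfar_priv_rx_q, rx_buff, dma, page, the queue's dev pointer) are
      assumptions for illustration, not taken from the patch itself:

        /* Hedged sketch: undo dma_map_page() with dma_unmap_page(), not
         * dma_unmap_single(), when tearing down the paged Rx ring. */
        static void free_rx_buffs(struct gfar_priv_rx_q *rx_queue)
        {
            int i;

            for (i = 0; i < rx_queue->rx_ring_size; i++) {
                struct gfar_rx_buff *rxb = &rx_queue->rx_buff[i]; /* assumed layout */

                if (!rxb->page)
                    continue;

                dma_unmap_page(rx_queue->dev, rxb->dma,
                               PAGE_SIZE, DMA_FROM_DEVICE);
                __free_page(rxb->page);
                rxb->page = NULL;
            }
        }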
  2. 20 Jan 2017, 1 commit
  3. 25 Dec 2016, 1 commit
  4. 30 Nov 2016, 1 commit
  5. 18 Oct 2016, 1 commit
    • ethernet: use core min/max MTU checking · 44770e11
      Jarod Wilson authored
      et131x: min_mtu 64, max_mtu 9216
      
      altera_tse: min_mtu 64, max_mtu 1500
      
      amd8111e: min_mtu 60, max_mtu 9000
      
      bnad: min_mtu 46, max_mtu 9000
      
      macb: min_mtu 68, max_mtu 1500 or 10240 depending on hardware capability
      
      xgmac: min_mtu 46, max_mtu 9000
      
      cxgb2: min_mtu 68, max_mtu 9582 (pm3393) or 9600 (vsc7326)
      
      enic: min_mtu 68, max_mtu 9000
      
      gianfar: min_mtu 50, max_mtu 9586
      
      hns_enet: min_mtu 68, max_mtu 9578 (v1) or 9706 (v2)
      
      ksz884x: min_mtu 60, max_mtu 1894
      
      myri10ge: min_mtu 68, max_mtu 9000
      
      natsemi: min_mtu 64, max_mtu 2024
      
      nfp: min_mtu 68, max_mtu hardware-specific
      
      forcedeth: min_mtu 64, max_mtu 1500 or 9100, depending on hardware
      
      pch_gbe: min_mtu 46, max_mtu 10300
      
      pasemi_mac: min_mtu 64, max_mtu 9000
      
      qcaspi: min_mtu 46, max_mtu 1500
      - remove qcaspi_netdev_change_mtu as it is now redundant
      
      rocker: min_mtu 68, max_mtu 9000
      
      sxgbe: min_mtu 68, max_mtu 9000
      
      stmmac: min_mtu 46, max_mtu depends on hardware
      
      tehuti: min_mtu 60, max_mtu 16384
      - driver had no max mtu checking, but product docs say 16k jumbo packets
        are supported by the hardware
      
      netcp: min_mtu 68, max_mtu 9486
      - remove netcp_ndo_change_mtu as it is now redundant
      
      via-velocity: min_mtu 64, max_mtu 9000
      
      octeon: min_mtu 46, max_mtu 65370
      
      CC: netdev@vger.kernel.org
      CC: Mark Einon <mark.einon@gmail.com>
      CC: Vince Bridgers <vbridger@opensource.altera.com>
      CC: Rasesh Mody <rasesh.mody@qlogic.com>
      CC: Nicolas Ferre <nicolas.ferre@atmel.com>
      CC: Santosh Raspatur <santosh@chelsio.com>
      CC: Hariprasad S <hariprasad@chelsio.com>
      CC:  Christian Benvenuti <benve@cisco.com>
      CC: Sujith Sankar <ssujith@cisco.com>
      CC: Govindarajulu Varadarajan <_govind@gmx.com>
      CC: Neel Patel <neepatel@cisco.com>
      CC: Claudiu Manoil <claudiu.manoil@freescale.com>
      CC: Yisen Zhuang <yisen.zhuang@huawei.com>
      CC: Salil Mehta <salil.mehta@huawei.com>
      CC: Hyong-Youb Kim <hykim@myri.com>
      CC: Jakub Kicinski <jakub.kicinski@netronome.com>
      CC: Olof Johansson <olof@lixom.net>
      CC: Jiri Pirko <jiri@resnulli.us>
      CC: Byungho An <bh74.an@samsung.com>
      CC: Girish K S <ks.giri@samsung.com>
      CC: Vipul Pandya <vipul.pandya@samsung.com>
      CC: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      CC: Alexandre Torgue <alexandre.torgue@st.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      CC: Wingman Kwok <w-kwok2@ti.com>
      CC: Murali Karicheri <m-karicheri2@ti.com>
      CC: Francois Romieu <romieu@fr.zoreil.com>
      Signed-off-by: Jarod Wilson <jarod@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      44770e11
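
      For context, a hedged sketch of what adopting the core checking looks like in a
      driver's probe path, using the gianfar limits listed above (the surrounding probe
      code is assumed; min_mtu/max_mtu are the net_device fields this series relies on):

        /* Advertise the device's MTU range and let the core reject out-of-range
         * values, so the driver's own ndo_change_mtu bounds checks can go away. */
        ndev->min_mtu = 50;    /* gianfar minimum, per the list above */
        ndev->max_mtu = 9586;  /* gianfar maximum, per the list above */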
  6. 24 Aug 2016, 1 commit
    • gianfar: fix size of scatter-gathered frames · 6c389fc9
      Zefir Kurtisi authored
      The current scatter-gather logic in gianfar is flawed, since
      it does not account for the eTSEC's RxBD 'Data Length' field
      being context dependent: for the last fragment it contains the
      full frame size, while for every other fragment it contains the
      fragment size, which equals the value written to the MRBLR
      register.
      
      This causes data corruption as soon as the hardware starts
      to fragment received frames.  As a result, the size of
      fragmented frames is increased by
      (nr_frags - 1) * MRBLR
      
      We first noticed this issue working with DSA, where an ICMP
      request sized 1472 bytes causes the scatter-gather logic to
      kick in. The full Ethernet frame (1518) gets increased by
      DSA (4), GMAC_FCB_LEN (8), and FSL_GIANFAR_DEV_HAS_TIMER
      (priv->padding=8) to a total of 1538 octets, which is
      fragmented by the hardware and reconstructed by the driver
      to a 3074 octet frame.
      
      This patch fixes the problem by adjusting the size of
      the last fragment.
      
      It was tested by setting MRBLR to different multiples of
      64, proving correct scatter-gather operation on frames
      with up to 9000 octets in size.
      Signed-off-by: Zefir Kurtisi <zefir.kurtisi@neratec.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6c389fc9
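
      A hedged sketch of the adjustment described above, at the point where the driver
      adds a trailing buffer to the skb (the last/size/rxb variables and the truesize
      constant are assumptions):

        /* The RxBD 'Data Length' of the last buffer descriptor holds the full
         * frame length, not the size of the final fragment, so keep only the
         * remainder that has not been gathered yet. */
        if (last)
            size -= skb->len;

        skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rxb->page,
                        rxb->page_offset, size, GFAR_RXB_TRUESIZE);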
  7. 17 Jun 2016, 1 commit
  8. 04 Jun 2016, 1 commit
  9. 17 May 2016, 1 commit
  10. 05 May 2016, 1 commit
  11. 18 Mar 2016, 1 commit
    • mm: introduce page reference manipulation functions · fe896d18
      Joonsoo Kim authored
      The success of CMA allocation largely depends on the success of
      migration, and a key factor in that is the page reference count.  Until
      now, page references have been manipulated by calling atomic functions
      directly, so we cannot track who manipulates them and where.  That makes
      it hard to find the actual reason for a CMA allocation failure.  CMA
      allocation should be guaranteed to succeed, so finding the offending
      place is really important.

      In this patch, call sites where the page reference is manipulated are
      converted to the newly introduced wrapper functions.  This is a
      preparation step for adding a tracepoint to each page reference
      manipulation function.  With this facility, we can easily find the
      reason for a CMA allocation failure.  There is no functional change in
      this patch.

      In addition, this patch also converts reference read sites.  This will
      help a second step that renames page._count to something else and
      prevents later attempts to access it directly (suggested by Andrew).
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fe896d18
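
      A hedged illustration of the kind of conversion the patch performs.  The
      page_ref_inc()/page_ref_count() wrappers are the interface this commit
      introduces; the call site shown is invented for illustration:

        /* Before: the reference count is touched through atomic ops directly,
         * which leaves no single place to hook a tracepoint. */
        atomic_inc(&page->_count);
        refs = atomic_read(&page->_count);

        /* After: go through the wrappers so every manipulation can be traced. */
        page_ref_inc(page);
        refs = page_ref_count(page);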
  12. 07 Mar 2016, 1 commit
  13. 26 Feb 2016, 1 commit
  14. 25 Feb 2016, 3 commits
  15. 08 Jan 2016, 1 commit
  16. 17 Dec 2015, 1 commit
  17. 01 Dec 2015, 1 commit
  18. 23 Nov 2015, 1 commit
  19. 19 Nov 2015, 1 commit
  20. 28 Oct 2015, 1 commit
  21. 26 Oct 2015, 2 commits
    • gianfar: Fix Rx BSY error handling · 1de65a5e
      Claudiu Manoil authored
      The Rx BSY error interrupt indicates that a frame was
      received and discarded due to lack of buffers, so it's
      an Rx ring overflow condition that has nothing to do
      with bad rx packets.  Use the right counter.
      
      BSY conditions happen when the SoC is under performance
      stress.  Doing *more* work in stress situations by trying
      to schedule NAPI is not a good idea as the stressed system
      becomes still more stressed.  The Rx interrupt is already
      at work making sure the NAPI is scheduled.
      So calling gfar_receive() here does not help.  This issue
      was present since day 1.
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1de65a5e
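
      A hedged sketch of the error-path handling described above.  The choice of
      rx_over_errors as the right counter and the exact interrupt-event test are
      assumptions based on the commit text:

        if (events & IEVENT_BSY) {
            /* A frame was dropped for lack of Rx buffers: a ring overflow,
             * not a bad packet, so count it as an overrun. */
            dev->stats.rx_over_errors++;

            /* No gfar_receive() call here: the Rx interrupt already schedules
             * NAPI, and scheduling more work under stress only adds stress. */
        }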
    • gianfar: Don't enable the Filer w/o the Parser · 15bf176d
      Claudiu Manoil authored
      Under one unusual circumstance it's possible to wrongly set
      FILREN without enabling PRSDEP as well in the RCTRL register,
      against the hardware specifications.  With the default config
      this does not happen because the default Rx offloads (Rx csum
      and Rx VLAN) properly enable PRSDEP.  But if anyone disables
      all these offloads (via ethtool), we end up with an invalid
      configuration where Rx flow classification and hashing, and
      other filer-based features (e.g. the wake-on-filer interrupt),
      won't work.  This patch fixes the issue.
      Also, account for Rx FCB insertion which happens every time
      PRSDEP is set.
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      15bf176d
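
      A hedged sketch of the RCTRL programming described above.  The bit and field
      names (RCTRL_FILREN, RCTRL_PRSDEP_INIT, priv->uses_rxfcb) follow the driver's
      usual naming but are assumptions here:

        /* Whenever the filer is enabled, the parser must be enabled too, and
         * an enabled parser means an Rx FCB is inserted ahead of each frame. */
        if (priv->rx_filer_enable) {
            rctrl |= RCTRL_FILREN | RCTRL_PRSDEP_INIT;
            priv->uses_rxfcb = 1;
        }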
  22. 07 Oct 2015, 1 commit
    • gianfar: Add WAKE_UCAST and "wake-on-filer" support · 3e905b80
      Claudiu Manoil authored
      This enables eTSEC's filer (Rx parser) and the FGPI Rx
      interrupt (Filer General Purpose Interrupt) as a wakeup
      source event.
      
      Upon entering suspend state, the eTSEC filer is given
      a rule to match incoming L2 unicast packets.  A packet
      matching the rule will be enqueued in the Rx ring and
      a FGPI Rx interrupt will be asserted by the filer to
      wakeup the system.  Other packet types will be dropped.
      On resume the filer table is restored to the content
      before entering suspend state.
      The set of rules from gfar_filer_config_wol() could be
      extended to implement other WoL capabilities as well.
      
      The "fsl,wake-on-filer" DT binding enables this capability
      on certain platforms that feature the necessary power
      management infrastructure, targeting mainly printing and
      imaging applications.
      (refer to Power Management section of the SoC Ref Man)
      
      Cc: Li Yang <leoli@freescale.com>
      Cc: Zhao Chenhui <chenhui.zhao@freescale.com>
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3e905b80
  23. 25 Sep 2015, 3 commits
  24. 14 Aug 2015, 1 commit
    • gianfar: Restore link state settings after MAC reset · 2a4eebf0
      Claudiu Manoil authored
      There are some MAC registers that need to be kept in sync
      with the link state parameters, see adjust_link().
      However, after a MAC soft reset default values for
      these registers are assumed.  In some cases (an if-down/
      if-up sequence being a notable exception) adjust_link()
      does not see that these values were reset to their defaults,
      because the priv->old* link parameters were left unchanged.
      So, reset the priv->old* link params as well during a
      MAC reset to let adjust_link() restore the MAC link
      settings to the actual link state values.
      
      Fixes the following case, for example:
      setting the link to 100M, then changing the MTU (which implies
      a MAC reset); the link state remains 100M but the MAC registers
      were reset to their defaults (1G), breaking connectivity with
      the PHY.  Closing and re-opening the interface would restore
      the MAC link parameters to the correct values.
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2a4eebf0
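
      A hedged sketch of the reset-path change described above, invalidating the
      cached link parameters so adjust_link() re-programs the MAC registers (the
      priv->old* field names are taken from the commit text; the surrounding reset
      code is assumed):

        /* After a MAC soft reset the speed/duplex related registers fall back
         * to their defaults, so forget the cached link state and let
         * adjust_link() rewrite the registers from the current PHY state. */
        priv->oldlink = 0;
        priv->oldspeed = 0;
        priv->oldduplex = -1;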
  25. 01 Aug 2015, 3 commits
    • gianfar: Enable device wakeup when appropriate · b0734b6d
      Claudiu Manoil authored
      The wol_en flag is 0 by default anyway, and we have the
      following inconsistency: a MAGIC packet wol capable eth
      interface is registered as a wake-up source but unable
      to wake up the system as wol_en is 0 (wake-on flag set to 'd').
      Calling set_wakeup_enable() at netdev open is just redundant
      because wol_en is 0 by default.
      Let only ethtool call set_wakeup_enable() for now.
      
      The bflock is clearly obsolete; its usefulness has eroded
      over time.  The bitfield flags used today in gianfar are accessed
      only on the init/config path, with no real possibility of
      concurrency, so nothing justifies something like bflock.
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b0734b6d
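
      A hedged sketch of leaving wake-up enabling to the ethtool path, as the
      commit describes (the function shape, priv->wol_en and priv->dev are
      assumptions; device_set_wakeup_enable() is the generic kernel helper):

        static int gfar_set_wol(struct net_device *ndev, struct ethtool_wolinfo *wol)
        {
            struct gfar_private *priv = netdev_priv(ndev);

            /* Only ethtool toggles the wake-up source now; open() no longer
             * calls set_wakeup_enable() since wol_en defaults to 0 anyway. */
            priv->wol_en = !!(wol->wolopts & WAKE_MAGIC);
            device_set_wakeup_enable(priv->dev, priv->wol_en);

            return 0;
        }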
    • gianfar: Fix suspend/resume for wol magic packet · 614b4242
      Claudiu Manoil authored
      If we disable NAPI in the first place we can mask the device's
      interrupts (and halt it) without fearing that imask may be
      concurrently accessed from interrupt context, so there's
      no need to do local_irq_save() around gfar_halt_nodisable().
      lock_tx_qs()/unlock_tx_qs() are just obsolete and potentially
      buggy routines.  The txlock is currently used in the driver only
      to manage TX congestion, it has nothing to do with halting the
      device.  With these changes, the TX processing is stopped before
      gfar_halt().
      
      Compact gfar_halt() is used instead of gfar_halt_nodisable(),
      as it disables Rx/TX DMA h/w blocks and the Rx/TX h/w queues.
      gfar_start() re-enables all these blocks on resume.  Enabling
      the magic-packet mode remains the same, note that the RX block
      is re-enabled just before entering sleep mode.
      
      Add IRQF_NO_SUSPEND flag for the error interrupt line, to signal
      that the interrupt line must remain active during sleep in order
      to wake the system by magic packet (MAG) reception interrupt.
      (On some systems the MAG interrupt did trigger w/o this flag
      as well, but on others it didn't.)
      
      Without these fixes, when suspended during fair Tx traffic the
      interface occasionally failed to be woken up by magic packet.
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      614b4242
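
      A hedged sketch of the suspend ordering and the interrupt flag described above.
      The disable_napi() helper, the irq variable and the gfar_error handler name are
      assumptions; gfar_halt() and IRQF_NO_SUSPEND come from the commit text:

        /* Suspend path: stop NAPI and Tx processing first, then halt.  With
         * NAPI already off, imask cannot be touched from interrupt context,
         * so no local_irq_save() is needed around the halt. */
        disable_napi(priv);
        netif_device_detach(ndev);
        gfar_halt(priv);  /* disables Rx/Tx DMA and the h/w queues */

        /* Keep the error interrupt line armed during sleep so the magic-packet
         * (MAG) reception interrupt can wake the system. */
        err = request_irq(irq, gfar_error, IRQF_NO_SUSPEND, "gfar_error", grp);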
    • gianfar: Fix warning when CONFIG_PM off · 84868305
      Claudiu Manoil authored
      CC      drivers/net/ethernet/freescale/gianfar.o
      drivers/net/ethernet/freescale/gianfar.c:568:13: warning: 'lock_tx_qs'
      defined but not used [-Wunused-function]
       static void lock_tx_qs(struct gfar_private *priv)
                   ^
      drivers/net/ethernet/freescale/gianfar.c:576:13: warning: 'unlock_tx_qs'
      defined but not used [-Wunused-function]
       static void unlock_tx_qs(struct gfar_private *priv)
                   ^
      Reported-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      84868305
  26. 30 Jul 2015, 1 commit
  27. 16 Jul 2015, 4 commits
    • gianfar: Add paged allocation and Rx S/G · 75354148
      Claudiu Manoil authored
      The eTSEC h/w is capable of scatter/gather on the receive side
      too if MAXFRM > MRBLR, when the allowed maximum Rx frame size
      is set to be greater than the maximum Rx buffer size (MRBLR).
      It's about time the driver makes use of this h/w capability,
      by supporting fixed buffer sizes and Rx S/G.
      
      The buffer size given to eTSEC for reception is fixed to
      1536B (must be multiple of 64), which is the same default
      buffer size as before, used to accommodate standard MTU
      (1500B) size frames.  As before, eTSEC can receive frames of
      up to 9600B.  Individual Rx buffers are mapped to page halves
      (page size for eTSEC systems is 4KB).  The skb is built around
      the first buffer of a frame (using build_skb()).  In case the
      frame spans multiple buffers, the trailing buffers are added
      as Rx fragments to the skb.  The last buffer in frame is marked
      by the L status flag.  A mechanism is in place to reuse the pages
      owned by the driver (for Rx) for subsequent receptions.
      
      Supporting fixed size buffers allows the implementation of Rx S/G,
      which in turn removes the memory pressure issues the driver had
      before when MTU was set for jumbo frame reception.
      Also, in most cases, the Rx path becomes faster due to Rx page
      reuse, since the overhead of allocating new Rx buffers is removed
      from the fast path.
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      75354148
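
      A hedged sketch of the Rx frame assembly described above, building the skb
      around the first buffer and appending trailing buffers as fragments (the rxb
      bookkeeping fields and the GFAR_RXB_TRUESIZE constant are assumptions):

        if (first) {
            /* head of the frame: wrap the first half-page buffer */
            skb = build_skb(page_address(rxb->page) + rxb->page_offset,
                            GFAR_RXB_TRUESIZE);
        } else {
            /* continuation: append this buffer as an Rx fragment */
            skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rxb->page,
                            rxb->page_offset, size, GFAR_RXB_TRUESIZE);
        }
        /* the page half can later be reused for another reception once the
         * stack no longer holds a reference to it */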
    • gianfar: Use ndev, more Rx path cleanup · f23223f1
      Claudiu Manoil authored
      Use "ndev" instead of "dev", as the rx queue back pointer
      to a net_device struct, to avoid name clashing with a
      "struct device" reference.  This prepares the addition of a
      "struct device" back pointer to the rx queue structure.
      
      Remove duplicated rxq registration in the process.
      Move napi_gro_receive() outside gfar_process_frame().
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f23223f1
    • gianfar: Fix and cleanup rxbd status handling · f966082e
      Claudiu Manoil authored
      There are several (long standing) problems about how the status
      field of the rx buffer descriptor (rxbd) is currently handled on
      the error path:
      - too many unnecessary 16-bit reads of the two halves of the rxbd
      status field (32-bit), also resulting in overuse of endianness
      conversion macros;
      - "bdp->status = RXBD_LARGE" makes no sense, since the "large"
      flag is read only (only eTSEC can write it), and trying to clear
      the other status bits is also error prone in this context
      (most of the rx status bits are read only anyway).
      
      This is fixed with a single 32bit read of the "status" field,
      and then the appropriate 16bit shifting is applied to access
      the various status bits or the rx frame length. Also corrected
      the use of the RXBD_LARGE flag.
      
      Additional fix:
      "rx_over_errors" stat is incremented instead of "rx_crc_errors"
      in case of RXBD_OVERRUN occurrence.
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f966082e
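
      A hedged sketch of the single 32-bit read plus 16-bit shifting described above
      (the lstatus field layout and the stats pointer are assumptions; RXBD_OVERRUN
      and the rx_over_errors choice come from the commit text):

        u32 lstatus = be32_to_cpu(bdp->lstatus);  /* one read, both halves */
        u16 status  = lstatus >> 16;              /* status flags */
        u16 length  = lstatus & 0xffff;           /* rx frame length */

        if (status & RXBD_OVERRUN)
            stats->rx_over_errors++;              /* not rx_crc_errors */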
    • gianfar: Bundle Rx allocation, cleanup · 76f31e8b
      Claudiu Manoil authored
      Use a more common consumer/producer index design to improve
      rx buffer allocation.  Instead of allocating a single new buffer
      (skb) on each iteration, bundle the allocation of several rx
      buffers at a time.  This also opens the path for further memory
      optimizations.
      
      Remove useless check of rxq->rfbptr, since this patch touches
      rx pause frame handling code as well.  rxq->rfbptr is always
      initialized as part of Rx BD ring init.
      Remove redundant (and misleading) 'amount_pull' parameter.
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      76f31e8b
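
      A hedged sketch of the bundled refill described above.  gfar_alloc_rx_buffs()
      appears in the stack trace of the first entry; the counter variable and the
      batch threshold are assumptions:

        /* Count consumed descriptors and refill them in one batch instead of
         * allocating a buffer for every received frame. */
        cleaned_cnt++;
        if (cleaned_cnt >= GFAR_RX_BUFF_ALLOC) {
            gfar_alloc_rx_buffs(rx_queue, cleaned_cnt);
            cleaned_cnt = 0;
        }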
  28. 10 May 2015, 2 commits
    • gianfar: Enable changing mac addr when if up · 3d23a05c
      Claudiu Manoil authored
      Use device flag IFF_LIVE_ADDR_CHANGE to signal that
      the device supports changing the hardware address when
      the device is running.
      This allows eth_mac_addr() to change the mac address
      also when the network device's interface is open.
      This capability is required by certain applications,
      like bonding mode 6 (Adaptive Load Balancing).
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3d23a05c
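
      A hedged sketch of the probe-time flag described above (IFF_LIVE_ADDR_CHANGE
      is the core flag named in the commit; where exactly it is set is assumed):

        /* Let the core accept MAC address changes while the interface is up,
         * e.g. for bonding mode 6 (Adaptive Load Balancing). */
        ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;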
    • gianfar: Move TxFIFO underrun handling to reset path · bc602280
      Claudiu Manoil authored
      Handle TxFIFO underrun exceptions outside the fast path.
      A controller reset is more reliable in this exceptional
      case, as opposed to re-enabling on-the-fly the Tx DMA.
      
      As the controller reset is handled outside the fast path
      by the reset_gfar() workqueue handler, the locking
      scheme on the Tx path is significantly simplified.
      Because the Tx processing (xmit queues and tx napi) is
      disabled during controller reset, tstat access from xmit
      does not require locking.  So the scope of the txlock on
      the processing path is now reduced to num_txbdfree, which
      is shared only between process context (xmit) and softirq
      (clean_tx_ring).  As a result, the txlock must not guard
      against interrupt context, and the spin_lock_irqsave()
      from xmit can be replaced by spin_lock_bh().  Likewise,
      the locking has been downgraded for clean_tx_ring().
      Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bc602280
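
      A hedged sketch of the downgraded locking on the xmit path described above
      (txlock and num_txbdfree are named in the commit text; the surrounding xmit
      code and nr_txbds are assumptions):

        /* txlock now only guards num_txbdfree, shared between xmit (process
         * context) and clean_tx_ring (softirq), so _bh locking is enough. */
        spin_lock_bh(&tx_queue->txlock);
        tx_queue->num_txbdfree -= nr_txbds;
        spin_unlock_bh(&tx_queue->txlock);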
  29. 18 Mar 2015, 1 commit