1. 03 Feb 2018, 1 commit
    • net: qlge: use memmove instead of skb_copy_to_linear_data · cfabb177
      Authored by Arnd Bergmann
      gcc-8 points out that the skb_copy_to_linear_data() argument points to
      the skb itself, which makes it run into a problem with overlapping
      memcpy arguments:
      
      In file included from include/linux/ip.h:20,
                       from drivers/net/ethernet/qlogic/qlge/qlge_main.c:26:
      drivers/net/ethernet/qlogic/qlge/qlge_main.c: In function 'ql_realign_skb':
      include/linux/skbuff.h:3378:2: error: 'memcpy' source argument is the same as destination [-Werror=restrict]
        memcpy(skb->data, from, len);
      
      It's unclear to me what the best solution is; maybe it ought to use a
      different helper that adjusts the skb data in a safe way. Simply using
      memmove() here seems like the easiest workaround.
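
      A minimal userspace sketch of why memmove() is the safe choice: unlike
      memcpy(), it is defined for overlapping source and destination, which is
      exactly the case when realigning data within a single skb buffer (this
      demo only illustrates the overlap property, it is not driver code):

          #include <stdio.h>
          #include <string.h>

          int main(void)
          {
                  char buf[] = "XXheader+data";

                  /* Shift the payload two bytes toward the start of the same
                   * buffer; source and destination overlap, so memmove() is
                   * required (memcpy() would be undefined behaviour here).
                   */
                  memmove(buf, buf + 2, sizeof(buf) - 2);

                  printf("%s\n", buf);    /* prints "header+data" */
                  return 0;
          }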
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 16 Nov 2017, 1 commit
    • mm: remove __GFP_COLD · 453f85d4
      Authored by Mel Gorman
      As the page free path makes no distinction between cache hot and cold
      pages, there is no real useful ordering of pages in the free list that
      allocation requests can take advantage of.  Judging from the users of
      __GFP_COLD, it is likely that a number of them are the result of copying
      other sites instead of actually measuring the impact.  Remove the
      __GFP_COLD parameter which simplifies a number of paths in the page
      allocator.
      
      This is potentially controversial but bear in mind that the size of the
      per-cpu pagelists versus modern cache sizes means that the whole per-cpu
      list can often fit in the L3 cache.  Hence, there is only a potential
      benefit for microbenchmarks that alloc/free pages in a tight loop.  It's
      even worse when THP is taken into account, since THP has little or no chance
      of getting a cache-hot page: the per-cpu list is bypassed and the
      zeroing of multiple pages will thrash the cache anyway.
      
      The truncate microbenchmarks are not shown as this patch affects the
      allocation path and not the free path.  A page fault microbenchmark was
      tested but it showed no significant difference, which is not surprising
      given that the __GFP_COLD branches are a minuscule percentage of the
      fault path.
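
      For callers the change is mechanical; a hedged sketch of what a converted
      allocation site looks like (the surrounding context is illustrative):

          /* Before: the caller hinted that a cache-cold page was acceptable. */
          page = alloc_pages(GFP_KERNEL | __GFP_COLD, 0);

          /* After: the hint is gone; the free path never kept hot and cold
           * pages apart, so nothing is lost.
           */
          page = alloc_pages(GFP_KERNEL, 0);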
      
      Link: http://lkml.kernel.org/r/20171018075952.10627-9-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 05 Oct 2017, 1 commit
    • timer: Remove init_timer_deferrable() in favor of timer_setup() · df7e828c
      Authored by Kees Cook
      This refactors the only users of init_timer_deferrable() to use
      the new timer_setup() and from_timer(). It also removes the
      definition of init_timer_deferrable().
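
      A hedged sketch of the conversion pattern this series applies (the struct
      and field names below are illustrative, not necessarily the ones in any
      converted driver):

          /* Old API, now removed: */
          init_timer_deferrable(&adapter->timer);
          adapter->timer.function = my_timer_fn;
          adapter->timer.data = (unsigned long)adapter;

          /* New API: the callback receives the timer_list pointer and
           * recovers its containing structure with from_timer().
           */
          static void my_timer_fn(struct timer_list *t)
          {
                  struct my_adapter *adapter = from_timer(adapter, t, timer);
                  /* ... do the periodic work, possibly rearm ... */
          }

          timer_setup(&adapter->timer, my_timer_fn, TIMER_DEFERRABLE);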
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: David S. Miller <davem@davemloft.net> # for networking parts
      Acked-by: Sebastian Reichel <sre@kernel.org> # for drivers/hsi parts
      Cc: linux-mips@linux-mips.org
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Kalle Valo <kvalo@qca.qualcomm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: linux1394-devel@lists.sourceforge.net
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: linux-s390@vger.kernel.org
      Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
      Cc: Wim Van Sebroeck <wim@iguana.be>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Ursula Braun <ubraun@linux.vnet.ibm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Harish Patil <harish.patil@cavium.com>
      Cc: Stephen Boyd <sboyd@codeaurora.org>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Manish Chopra <manish.chopra@cavium.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: linux-pm@vger.kernel.org
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Julian Wiedmann <jwi@linux.vnet.ibm.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Mark Gross <mark.gross@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: linux-watchdog@vger.kernel.org
      Cc: linux-scsi@vger.kernel.org
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: linux-wireless@vger.kernel.org
      Cc: Sebastian Reichel <sre@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Cc: Michael Reed <mdr@sgi.com>
      Cc: netdev@vger.kernel.org
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
      Link: https://lkml.kernel.org/r/1507159627-127660-6-git-send-email-keescook@chromium.org
  4. 16 Jun 2017, 1 commit
    • networking: introduce and use skb_put_data() · 59ae1d12
      Authored by Johannes Berg
      A common pattern with skb_put() is to just want to memcpy()
      some data into the new space; introduce skb_put_data() for
      this.
      
      An spatch similar to the one for skb_put_zero() converts many
      of the places using it:
      
          @@
          identifier p, p2;
          expression len, skb, data;
          type t, t2;
          @@
          (
          -p = skb_put(skb, len);
          +p = skb_put_data(skb, data, len);
          |
          -p = (t)skb_put(skb, len);
          +p = skb_put_data(skb, data, len);
          )
          (
          p2 = (t2)p;
          -memcpy(p2, data, len);
          |
          -memcpy(p, data, len);
          )
      
          @@
          type t, t2;
          identifier p, p2;
          expression skb, data;
          @@
          t *p;
          ...
          (
          -p = skb_put(skb, sizeof(t));
          +p = skb_put_data(skb, data, sizeof(t));
          |
          -p = (t *)skb_put(skb, sizeof(t));
          +p = skb_put_data(skb, data, sizeof(t));
          )
          (
          p2 = (t2)p;
          -memcpy(p2, data, sizeof(*p));
          |
          -memcpy(p, data, sizeof(*p));
          )
      
          @@
          expression skb, len, data;
          @@
          -memcpy(skb_put(skb, len), data, len);
          +skb_put_data(skb, data, len);
      
      (again, manually post-processed to retain some comments)
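
      In plain C the transformation collapses two calls into one (a generic
      sketch, not any particular converted call site):

          /* Before: reserve tail room, then copy into it by hand. */
          p = skb_put(skb, len);
          memcpy(p, data, len);

          /* After: skb_put_data() does both steps in one helper. */
          skb_put_data(skb, data, len);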
      Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 07 Apr 2017, 1 commit
  6. 31 Jan 2017, 1 commit
  7. 21 Oct 2016, 1 commit
    • ethernet: use net core MTU range checking in more drivers · d894be57
      Authored by Jarod Wilson
      Somehow, I missed a healthy number of ethernet drivers in the last pass.
      Most of these drivers simply needed an updated max_mtu so that jumbo
      frames can be enabled again; in a few cases a different min_mtu is also
      set to match the previous lower bounds. A few drivers had no upper-bound
      checking at all, so they get a brand new ETH_MAX_MTU that is identical to
      IP_MAX_MTU but lives in a header all ethernet and ethernet-like drivers
      already include. The per-driver changes are listed below, followed by a
      sketch of the common pattern.
      
      acenic:
      - min_mtu = 0, max_mtu = 9000
      
      amazon/ena:
      - min_mtu = 128, max_mtu = adapter->max_mtu
      
      amd/xgbe:
      - min_mtu = 0, max_mtu = 9000
      
      sb1250:
      - min_mtu = 0, max_mtu = 1518
      
      cxgb3:
      - min_mtu = 81, max_mtu = 65535
      
      cxgb4:
      - min_mtu = 81, max_mtu = 9600
      
      cxgb4vf:
      - min_mtu = 81, max_mtu = 65535
      
      benet:
      - min_mtu = 256, max_mtu = 9000
      
      ibmveth:
      - min_mtu = 68, max_mtu = 65535
      
      ibmvnic:
      - min_mtu = adapter->min_mtu, max_mtu = adapter->max_mtu
      - remove now redundant ibmvnic_change_mtu
      
      jme:
      - min_mtu = 1280, max_mtu = 9202
      
      mv643xx_eth:
      - min_mtu = 64, max_mtu = 9500
      
      mlxsw:
      - min_mtu = 0, max_mtu = 65535
      - Basically bypassing the core checks, and instead relying on dynamic
        checks in the respective switch drivers' ndo_change_mtu functions
      
      ns83820:
      - min_mtu = 0
      - remove redundant ns83820_change_mtu, only checked for mtu > 1500
      
      netxen:
      - min_mtu = 0, max_mtu = 8000 (P2), max_mtu = 9600 (P3)
      
      qlge:
      - min_mtu = 1500, max_mtu = 9000
      - driver only supports setting mtu to 1500 or 9000, so the core check only
        rules out < 1500 and > 9000, qlge_change_mtu still needs to check that
        the value is 1500 or 9000
      
      qualcomm/emac:
      - min_mtu = 46, max_mtu = 9194
      
      xilinx_axienet:
      - min_mtu = 64, max_mtu = 9000
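
      A hedged sketch of the common pattern, using the qlge values above (ndev
      is the driver's net_device; the core then rejects any MTU outside this
      range before ndo_change_mtu is ever called):

          /* Advertise the supported MTU range at probe time. */
          ndev->min_mtu = ETH_DATA_LEN;   /* 1500 */
          ndev->max_mtu = 9000;           /* jumbo frames */

          /* qlge still needs its own ndo_change_mtu to insist that the
           * value is exactly 1500 or 9000, as noted above.
           */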
      
      Fixes: 61e84623 ("net: centralize net_device min/max MTU checking")
      CC: netdev@vger.kernel.org
      CC: Jes Sorensen <jes@trained-monkey.org>
      CC: Netanel Belgazal <netanel@annapurnalabs.com>
      CC: Tom Lendacky <thomas.lendacky@amd.com>
      CC: Santosh Raspatur <santosh@chelsio.com>
      CC: Hariprasad S <hariprasad@chelsio.com>
      CC: Sathya Perla <sathya.perla@broadcom.com>
      CC: Ajit Khaparde <ajit.khaparde@broadcom.com>
      CC: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
      CC: Somnath Kotur <somnath.kotur@broadcom.com>
      CC: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
      CC: John Allen <jallen@linux.vnet.ibm.com>
      CC: Guo-Fu Tseng <cooldavid@cooldavid.org>
      CC: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
      CC: Jiri Pirko <jiri@mellanox.com>
      CC: Ido Schimmel <idosch@mellanox.com>
      CC: Manish Chopra <manish.chopra@qlogic.com>
      CC: Sony Chacko <sony.chacko@qlogic.com>
      CC: Rajesh Borundia <rajesh.borundia@qlogic.com>
      CC: Timur Tabi <timur@codeaurora.org>
      CC: Anirudha Sarangi <anirudh@xilinx.com>
      CC: John Linn <John.Linn@xilinx.com>
      Signed-off-by: Jarod Wilson <jarod@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 02 Aug 2016, 1 commit
  9. 25 May 2016, 1 commit
    • net/qlge: Avoids recursive EEH error · 3275c0c6
      Authored by Gavin Shan
      A timer whose handler keeps reading an MMIO register, so that the EEH
      core can detect errors in time, is started when the PCI device driver
      is loaded. MMIO registers must not be accessed during the PE reset in
      EEH recovery, otherwise an unexpected recursive error is triggered.
      The timer is not stopped at that point if the interface has never been
      brought up, so the unexpected recursive error is seen during EEH
      recovery when the interface is down.
      
      This patch avoids the unexpected recursive EEH error by stopping the
      timer unconditionally in qlge_io_error_detected() before the EEH PE
      reset. The timer is restarted unconditionally after the EEH PE reset in
      qlge_io_resume(). The timer is also stopped unconditionally when the
      device is removed from the system permanently in qlge_io_error_detected().
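
      A hedged sketch of the two call sites described above (the timer field
      and interval names are assumptions for illustration):

          /* In qlge_io_error_detected(): stop the monitoring timer before
           * the PE reset (and before permanent removal), so its handler can
           * no longer touch MMIO while the PE is frozen.
           */
          del_timer_sync(&qdev->timer);

          /* In qlge_io_resume(): restart monitoring once MMIO access is
           * safe again.
           */
          mod_timer(&qdev->timer, jiffies + qdev->timer_interval);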
      Reported-by: Shriya R. Kulkarni <shriyakul@in.ibm.com>
      Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 16 Apr 2016, 1 commit
  11. 19 Mar 2016, 1 commit
  12. 16 Dec 2015, 1 commit
  13. 22 May 2015, 1 commit
  14. 04 Mar 2015, 1 commit
  15. 03 Feb 2015, 1 commit
  16. 14 Jan 2015, 1 commit
  17. 23 Sep 2014, 1 commit
  18. 26 Aug 2014, 1 commit
    • qlge: Fix TSO for non-accelerated vlan traffic · 1ee1cfe7
      Authored by Vlad Yasevich
      This device claims TSO support for vlans.  It also allows a user to
      control vlan acceleration offloading.  As such, it is possible to turn
      off vlan acceleration and configure a vlan which will continue to send
      TSO traffic.
      
      In such a situation the packet passed down to the device will contain
      a vlan header, and skb->protocol will be set to ETH_P_8021Q.
      The device assumes that skb->protocol contains the network protocol
      value and uses that value to set up TSO information.
      This results in corrupted frames sent on the wire.
      
      This patch extracts the protocol value correctly by using a
      vlan_get_protocol() helper and corrects corrupt TSO frames.
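
      A hedged sketch of the pattern the helper enables in a TSO setup path
      (hardware descriptor details omitted):

          /* skb->protocol may be ETH_P_8021Q when vlan acceleration is off;
           * vlan_get_protocol() digs out the encapsulated network protocol.
           */
          __be16 proto = vlan_get_protocol(skb);

          if (proto == htons(ETH_P_IP))
                  ;       /* program IPv4 TSO fields */
          else if (proto == htons(ETH_P_IPV6))
                  ;       /* program IPv6 TSO fields */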
      
      CC: Shahed Shaikh <shahed.shaikh@qlogic.com>
      CC: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
      CC: Ron Mercer <ron.mercer@qlogic.com>
      CC: linux-driver@qlogic.com
      Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
      Acked-by: Shahed Shaikh <shahed.shaikh@qlogic.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 13 Aug 2014, 1 commit
  20. 09 Aug 2014, 1 commit
  21. 14 May 2014, 1 commit
  22. 28 Apr 2014, 1 commit
  23. 30 Mar 2014, 1 commit
  24. 29 Mar 2014, 1 commit
  25. 19 Feb 2014, 2 commits
  26. 17 Jan 2014, 1 commit
  27. 15 Jan 2014, 1 commit
  28. 06 Dec 2013, 1 commit
  29. 22 Oct 2013, 1 commit
  30. 28 Sep 2013, 1 commit
  31. 26 May 2013, 1 commit
  32. 23 May 2013, 1 commit
  33. 14 May 2013, 1 commit
    • qlge: fix dma map leak when the last chunk is not allocated · ef380794
      Authored by Thadeu Lima de Souza Cascardo
      qlge allocates chunks from a page that it maps, and it unmaps that page
      when the last chunk is released. When the driver is unloaded or the card is
      removed, all chunks are released and the page is unmapped for the last
      chunk.
      
      However, when the last chunk of a page is not allocated and the device
      is removed, that page is not unmapped. In fact, its last reference is
      not put and there's also a page leak. This bug prevents a device from
      being properly hotplugged.
      
      When the DMA API debug option is enabled, the following messages show
      the pending DMA allocation after we remove the driver.
      
      This patch fixes the bug by unmapping and putting the page from the ring
      if its last chunk has not been allocated.
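
      A hedged sketch of the extra cleanup step (field names such as pg_chunk
      and lbq_buf_size are illustrative, loosely modeled on the driver's ring
      structures):

          /* If the last chunk of the current page was never handed out, the
           * page is still mapped; unmap it and drop the reference so the
           * device can be removed cleanly.
           */
          if (rx_ring->pg_chunk.page) {
                  pci_unmap_page(qdev->pdev, rx_ring->pg_chunk.map,
                                 qdev->lbq_buf_size, PCI_DMA_FROMDEVICE);
                  put_page(rx_ring->pg_chunk.page);
                  rx_ring->pg_chunk.page = NULL;
          }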
      
      pci 0005:98:00.0: DMA-API: device driver has pending DMA allocations while released from device [count=1]
      One of leaked entries details: [device address=0x0000000060a80000] [size=65536 bytes] [mapped with DMA_FROM_DEVICE] [mapped as page]
      ------------[ cut here ]------------
      WARNING: at lib/dma-debug.c:746
      Modules linked in: qlge(-) rpadlpar_io rpaphp pci_hotplug fuse [last unloaded: qlge]
      NIP: c0000000003fc3ec LR: c0000000003fc3e8 CTR: c00000000054de60
      REGS: c0000003ee9c74e0 TRAP: 0700   Tainted: G           O  (3.7.2)
      MSR: 8000000000029032 <SF,EE,ME,IR,DR,RI>  CR: 28002424  XER: 00000001
      SOFTE: 1
      CFAR: c0000000007a39c8
      TASK = c0000003ee8d5c90[8406] 'rmmod' THREAD: c0000003ee9c4000 CPU: 31
      GPR00: c0000000003fc3e8 c0000003ee9c7760 c000000000c789f8 00000000000000ee
      GPR04: 0000000000000000 00000000000000ef 0000000000004000 0000000000010000
      GPR08: 00000000000000be c000000000b22088 c000000000c4c218 00000000007c0000
      GPR12: 0000000028002422 c00000000ff26c80 0000000000000000 000001001b0f1b40
      GPR16: 00000000100cb9d8 0000000010093088 c000000000cdf910 0000000000000001
      GPR20: 0000000000000000 c000000000dbfc00 0000000000000000 c000000000dbfb80
      GPR24: c0000003fafc9d80 0000000000000001 000000000001ff80 c0000003f38f7888
      GPR28: c000000000ddfc00 0000000000000400 c000000000bd7790 c000000000ddfb80
      NIP [c0000000003fc3ec] .dma_debug_device_change+0x22c/0x2b0
      LR [c0000000003fc3e8] .dma_debug_device_change+0x228/0x2b0
      Call Trace:
      [c0000003ee9c7760] [c0000000003fc3e8] .dma_debug_device_change+0x228/0x2b0 (unreliable)
      [c0000003ee9c7840] [c00000000079a098] .notifier_call_chain+0x78/0xf0
      [c0000003ee9c78e0] [c0000000000acc20] .__blocking_notifier_call_chain+0x70/0xb0
      [c0000003ee9c7990] [c0000000004a9580] .__device_release_driver+0x100/0x140
      [c0000003ee9c7a20] [c0000000004a9708] .driver_detach+0x148/0x150
      [c0000003ee9c7ac0] [c0000000004a8144] .bus_remove_driver+0xc4/0x150
      [c0000003ee9c7b60] [c0000000004aa58c] .driver_unregister+0x8c/0xe0
      [c0000003ee9c7bf0] [c0000000004090b4] .pci_unregister_driver+0x34/0xf0
      [c0000003ee9c7ca0] [d000000002231194] .qlge_exit+0x1c/0x34 [qlge]
      [c0000003ee9c7d20] [c0000000000e36d8] .SyS_delete_module+0x1e8/0x290
      [c0000003ee9c7e30] [c0000000000098d4] syscall_exit+0x0/0x94
      Instruction dump:
      7f26cb78 e818003a e87e81a0 e8f80028 e9180030 796b1f24 78001f24 7d6a5a14
      7d2a002a e94b0020 483a7595 60000000 <0fe00000> 2fb80000 40de0048 80120050
      ---[ end trace 4294f9abdb01031d ]---
      Mapped at:
       [<d000000002222f54>] .ql_update_lbq+0x384/0x580 [qlge]
       [<d000000002227bd0>] .ql_clean_inbound_rx_ring+0x300/0xc60 [qlge]
       [<d0000000022288cc>] .ql_napi_poll_msix+0x39c/0x5a0 [qlge]
       [<c0000000006b3c50>] .net_rx_action+0x170/0x300
       [<c000000000081840>] .__do_softirq+0x170/0x300
      Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
      Acked-by: Jitendra Kalsaria <Jitendra.kalsaria@qlogic.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  34. 20 Apr 2013, 4 commits
  35. 10 Mar 2013, 1 commit
  36. 09 Feb 2013, 1 commit