1. 16 November 2013, 2 commits
  2. 15 November 2013, 15 commits
    • virtio-net: mergeable buffer size should include virtio-net header · 5061de36
      Authored by Michael Dalton
      Commit 2613af0e ("virtio_net: migrate mergeable rx buffers to page
      frag allocators") changed the mergeable receive buffer size from PAGE_SIZE
      to MTU-size. However, the merge buffer size does not take into account the
      size of the virtio-net header. Consequently, packets that are MTU-size
      will take two buffers instead of one (to store the virtio-net header),
      substantially decreasing the throughput of MTU-size traffic due to TCP
      window / SKB truesize effects.
      
      This commit changes the mergeable buffer size to include the virtio-net
      header. The buffer size is cacheline-aligned because skb_page_frag_refill
      will not automatically align the requested size.
      
      Benchmarks taken from an average of 5 netperf 30-second TCP_STREAM runs
      between two QEMU VMs on a single physical machine. Each VM has two VCPUs and
      vhost enabled. All VMs and vhost threads run in a single 4 CPU cgroup
      cpuset, using cgroups to ensure that other processes in the system will not
      be scheduled on the benchmark CPUs. Transmit offloads and mergeable receive
      buffers are enabled, but guest_tso4 / guest_csum are explicitly disabled to
      force MTU-sized packets on the receiver.
      
      net-next trunk before 2613af0e (PAGE_SIZE buf): 3861.08Gb/s
      net-next trunk (MTU 1500 - packet uses two bufs due to size bug): 4076.62Gb/s
      net-next trunk (MTU 1480 - packet fits in one buf): 6301.34Gb/s
      net-next trunk w/ size fix (MTU 1500 - packet fits in one buf): 6445.44Gb/s
      Suggested-by: Eric Northup <digitaleric@google.com>
      Signed-off-by: Michael Dalton <mwdalton@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5061de36
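
      A minimal sketch in C of the size calculation described above. The macro
      names here are illustrative rather than the driver's exact identifiers:
      the receive buffer must hold an MTU-sized frame plus the mergeable-rx
      virtio-net header, rounded up to a cacheline because skb_page_frag_refill()
      does not align the requested size.

        #include <linux/kernel.h>
        #include <linux/cache.h>
        #include <linux/etherdevice.h>
        #include <linux/if_vlan.h>
        #include <linux/virtio_net.h>

        /* largest Ethernet payload expected per merge buffer */
        #define GOOD_PACKET_LEN   (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)

        /* room for the frame plus the per-buffer virtio-net header,
         * cacheline-aligned because skb_page_frag_refill() will not
         * align the requested size itself */
        #define MERGE_BUFFER_LEN  ALIGN(GOOD_PACKET_LEN + \
                                        sizeof(struct virtio_net_hdr_mrg_rxbuf), \
                                        L1_CACHE_BYTES)

      With a 1500-byte MTU this keeps an MTU-sized packet and its header in a
      single merge buffer instead of spilling into a second one.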
    • connector: improved unaligned access error fix · 1ca1a4cf
      Authored by Chris Metcalf
      In af3e095a, Erik Jacobsen fixed one type of unaligned access
      bug for ia64 by converting a 64-bit write to use put_unaligned().
      Unfortunately, since gcc will convert a short memset() to a series
      of appropriately-aligned stores, the problem is now visible again
      on tilegx, where the memset that zeros out proc_event is converted
      to three 64-bit stores, causing an unaligned access panic.
      
      A better fix for the original problem is to ensure that proc_event
      is aligned to 8 bytes here.  We can do that relatively easily by
      arranging to start the struct cn_msg aligned to 8 bytes and then
      offset by 4 bytes.  Doing so means that the immediately following
      proc_event structure is then correctly aligned to 8 bytes.
      
      The result is that the memset() stores are now aligned, and as an
      added benefit, we can remove the put_unaligned() calls in the code.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1ca1a4cf
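
      A hedged sketch of the alignment trick described above; the helper and the
      buffer-size constant are written out here for illustration and are not
      necessarily the exact names used in cn_proc.c. struct cn_msg is 20 bytes,
      so starting it 4 bytes into an 8-byte-aligned buffer puts the proc_event
      payload that follows it on an 8-byte boundary, letting the compiler's
      memset() expansion use aligned stores.

        #include <linux/bug.h>
        #include <linux/connector.h>
        #include <linux/cn_proc.h>
        #include <linux/string.h>
        #include <linux/types.h>

        #define CN_PROC_MSG_SIZE (sizeof(struct cn_msg) + \
                                  sizeof(struct proc_event) + 4)

        static inline struct cn_msg *buffer_to_cn_msg(__u8 *buffer)
        {
                /* 4-byte offset + 20-byte header = 24, so data[] is 8-aligned */
                BUILD_BUG_ON(sizeof(struct cn_msg) != 20);
                return (struct cn_msg *)(buffer + 4);
        }

        static void report_event_sketch(void)
        {
                __u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
                struct cn_msg *msg = buffer_to_cn_msg(buffer);
                struct proc_event *ev = (struct proc_event *)msg->data;

                memset(ev, 0, sizeof(*ev));     /* aligned stores, no panic */
        }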
    • alx: Reset phy speed after resume · b54629e2
      Authored by hahnjo
      This fixes bug 62491 (https://bugzilla.kernel.org/show_bug.cgi?id=62491).
      After resuming, some users got the following error flooding the kernel log:
      alx 0000:02:00.0: invalid PHY speed/duplex: 0xffff
      Signed-off-by: Jonas Hahnfeld <linux@hahnjo.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b54629e2
    • genetlink: make all genl_ops users const · 4534de83
      Authored by Johannes Berg
      Now that genl_ops are no longer modified in place when
      registering, they can be made const. This patch was done
      mostly with spatch:
      
      @@
      identifier ops;
      @@
      +const
       struct genl_ops ops[] = {
       ...
       };
      
      (except the struct thing in net/openvswitch/datapath.c)
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4534de83
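
      The result, sketched for a hypothetical family (the FOO_* names and
      handlers below are made up, not an in-tree user): with the ops array no
      longer written to at registration time, it can live in read-only data.

        #include <net/genetlink.h>

        enum { FOO_CMD_UNSPEC, FOO_CMD_GET, FOO_CMD_SET };

        static const struct nla_policy foo_genl_policy[1];
        static int foo_cmd_get_doit(struct sk_buff *skb, struct genl_info *info);
        static int foo_cmd_set_doit(struct sk_buff *skb, struct genl_info *info);

        static const struct genl_ops foo_genl_ops[] = {
                {
                        .cmd    = FOO_CMD_GET,
                        .doit   = foo_cmd_get_doit,
                        .policy = foo_genl_policy,
                },
                {
                        .cmd    = FOO_CMD_SET,
                        .doit   = foo_cmd_set_doit,
                        .policy = foo_genl_policy,
                        .flags  = GENL_ADMIN_PERM,
                },
        };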
    • isdnloop: use strlcpy() instead of strcpy() · f9a23c84
      Authored by Dan Carpenter
      These strings come from a copy_from_user() and there is no way to be
      sure they are NUL terminated.
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f9a23c84
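
      A minimal sketch of the pattern with hypothetical buffer names: data
      copied in from userspace may not be NUL terminated, so an unbounded
      strcpy() can read and write past the end of the destination.

        #include <linux/string.h>

        struct ids {
                char user_id[20];  /* came in via copy_from_user(); NUL not guaranteed */
                char card_id[20];
        };

        static void copy_id_sketch(struct ids *ids)
        {
                /* before: strcpy(ids->card_id, ids->user_id); -- may overrun */
                strlcpy(ids->card_id, ids->user_id, sizeof(ids->card_id));
        }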
    • net:fec: fix WARNING caused by lack of calls to dma_mapping_error() · d842a31f
      Authored by Duan Fugang-B38611
      The driver fails to check the result of DMA mapping, which triggers the
      following warning when CONFIG_DMA_API_DEBUG is enabled:
      
      ------------[ cut here ]------------
      WARNING: at lib/dma-debug.c:937 check_unmap+0x43c/0x7d8()
      fec 2188000.ethernet: DMA-API: device driver failed to check map
      error[device address=0x00000000383a8040] [size=2048 bytes] [mapped as single]
      
      Modules linked in:
      CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.17-16827-g9cdb0ba-dirty #188
      [<80013c4c>] (unwind_backtrace+0x0/0xf8) from [<80011704>] (show_stack+0x10/0x14)
      [<80011704>] (show_stack+0x10/0x14) from [<80025614>] (warn_slowpath_common+0x4c/0x6c)
      [<80025614>] (warn_slowpath_common+0x4c/0x6c) from [<800256c8>] (warn_slowpath_fmt+0x30/0x40)
      [<800256c8>] (warn_slowpath_fmt+0x30/0x40) from [<8026bfdc>] (check_unmap+0x43c/0x7d8)
      [<8026bfdc>] (check_unmap+0x43c/0x7d8) from [<8026c584>] (debug_dma_unmap_page+0x6c/0x78)
      [<8026c584>] (debug_dma_unmap_page+0x6c/0x78) from [<8038049c>] (fec_enet_rx_napi+0x254/0x8a8)
      [<8038049c>] (fec_enet_rx_napi+0x254/0x8a8) from [<804dc8c0>] (net_rx_action+0x94/0x160)
      [<804dc8c0>] (net_rx_action+0x94/0x160) from [<8002c758>] (__do_softirq+0xe8/0x1d0)
      [<8002c758>] (__do_softirq+0xe8/0x1d0) from [<8002c8e8>] (do_softirq+0x4c/0x58)
      [<8002c8e8>] (do_softirq+0x4c/0x58) from [<8002cb50>] (irq_exit+0x90/0xc8)
      [<8002cb50>] (irq_exit+0x90/0xc8) from [<8000ea88>] (handle_IRQ+0x3c/0x94)
      [<8000ea88>] (handle_IRQ+0x3c/0x94) from [<8000855c>] (gic_handle_irq+0x28/0x5c)
      [<8000855c>] (gic_handle_irq+0x28/0x5c) from [<8000de00>] (__irq_svc+0x40/0x50)
      Exception stack(0x815a5f38 to 0x815a5f80)
      5f20:                                                       815a5f80 3b9aca00
      5f40: 0fe52383 00000002 0dd8950e 00000002 81e7b080 00000000 00000000 815ac4d8
      5f60: 806032ec 00000000 00000017 815a5f80 80059028 8041fc4c 60000013 ffffffff
      [<8000de00>] (__irq_svc+0x40/0x50) from [<8041fc4c>] (cpuidle_enter_state+0x50/0xf0)
      [<8041fc4c>] (cpuidle_enter_state+0x50/0xf0) from [<8041fd94>] (cpuidle_idle_call+0xa8/0x14c)
      [<8041fd94>] (cpuidle_idle_call+0xa8/0x14c) from [<8000edac>] (arch_cpu_idle+0x10/0x4c)
      [<8000edac>] (arch_cpu_idle+0x10/0x4c) from [<800582f8>] (cpu_startup_entry+0x60/0x130)
      [<800582f8>] (cpu_startup_entry+0x60/0x130) from [<80bc7a48>] (start_kernel+0x2d0/0x328)
      [<80bc7a48>] (start_kernel+0x2d0/0x328) from [<10008074>] (0x10008074)
      ---[ end trace c6edec32436e0042 ]---
      
      dma-debug added new interfaces for debugging DMA mapping errors; please refer
      to: http://lwn.net/Articles/516640/
      
      After mapping, the driver must call dma_mapping_error() to check for a
      mapping failure; otherwise map_err_type stays MAP_ERR_NOT_CHECKED,
      check_unmap() treats the mapping as unchecked, and it dumps the error
      message above. So add dma_mapping_error() checks to fix the WARNING.
      
      RX DMA buffers are reused and the driver copies their contents into an skb,
      so fec_enet_rx() should not map or unmap them for every packet; use
      dma_sync_single_for_cpu()/dma_sync_single_for_device() instead of
      dma_map_single()/dma_unmap_single().
      
      There is another potential issue: fec_enet_rx() passes the DMA address to
      __va(). Physical and DMA addresses are *not* the same thing. They may differ
      if the device is behind an IOMMU, if bounce buffering was required, or simply
      because there is a fixed offset between the device and host physical
      addresses. This patch fixes that as well.
      
      =============================================
      V2: add net_ratelimit() to limit map error messages.
          use dma_sync_single_for_cpu() instead of dma_map_single().
          fix passing DMA addresses to __va() to get a virtual address.
      V1: initial send
      =============================================
      Signed-off-by: Fugang Duan <B38611@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d842a31f
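
      A hedged sketch of the two RX-path changes, using simplified names in a
      hypothetical helper (fec_rx_sketch) rather than the exact driver code:
      check the result of every mapping, and sync the long-lived RX buffer
      around the copy instead of remapping it per packet.

        static void fec_rx_sketch(struct device *dma_dev, struct net_device *ndev,
                                  void *bufaddr, int pkt_len)
        {
                dma_addr_t addr;

                /* at ring setup: map once and verify the result */
                addr = dma_map_single(dma_dev, bufaddr, FEC_ENET_RX_FRSIZE,
                                      DMA_FROM_DEVICE);
                if (dma_mapping_error(dma_dev, addr)) {
                        if (net_ratelimit())
                                netdev_err(ndev, "RX DMA mapping failed\n");
                        return;  /* drop: never hand a bad address to the HW */
                }

                /* in fec_enet_rx(): the buffer stays mapped; hand ownership to
                 * the CPU, copy the frame into an skb, then give it back */
                dma_sync_single_for_cpu(dma_dev, addr, pkt_len, DMA_FROM_DEVICE);
                /* ... memcpy pkt_len bytes into a freshly allocated skb ... */
                dma_sync_single_for_device(dma_dev, addr, pkt_len, DMA_FROM_DEVICE);
        }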
    • bonding: fix two race conditions in bond_store_updelay/downdelay · b869ccfa
      Authored by Nikolay Aleksandrov
      This patch fixes two race conditions between bond_store_updelay/downdelay
      and bond_store_miimon which could lead to a division by zero: miimon can be
      set to 0 while updelay/downdelay is being set, slipping past the zero check
      at the beginning, and the division by zero then happens because
      updelay/downdelay is stored as new_value / bond->params.miimon. Use rtnl to
      synchronize with the miimon setting.
      
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      CC: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Acked-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b869ccfa
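
      A simplified sketch of the synchronization, condensed from the sysfs store
      pattern rather than quoting the bonding code verbatim: take rtnl before
      reading params.miimon so a concurrent miimon write cannot zero it under us.

        static ssize_t bonding_store_downdelay(struct device *d,
                                               struct device_attribute *attr,
                                               const char *buf, size_t count)
        {
                struct bonding *bond = to_bond(d);
                int new_value, ret = count;

                if (!rtnl_trylock())
                        return restart_syscall();

                if (!bond->params.miimon) {
                        pr_err("%s: unable to set down delay as MII monitoring is disabled\n",
                               bond->dev->name);
                        ret = -EPERM;
                        goto out;
                }

                if (sscanf(buf, "%d", &new_value) != 1 || new_value < 0) {
                        ret = -EINVAL;
                        goto out;
                }

                /* safe: miimon cannot be changed to 0 while we hold rtnl */
                bond->params.downdelay = new_value / bond->params.miimon;
        out:
                rtnl_unlock();
                return ret;
        }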
    • ixp4xx_eth: Validate hwtstamp_config completely before applying it · a5be8cd3
      Authored by Ben Hutchings
      hwtstamp_ioctl() should validate all fields of hwtstamp_config
      before making any changes.  Currently it sets the TX configuration
      before validating the rx_filter field.
      
      Untested as I don't have a cross-compiler to hand.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a5be8cd3
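
      The same pattern applies to the five hwtstamp patches that follow
      (ti_cpsw, stmmac, pch_gbe, e1000e and tg3). A generic sketch with
      illustrative names, not any one driver's code: validate every field of the
      user-supplied config before touching driver or hardware state.

        static int hwtstamp_set_sketch(struct net_device *dev, struct ifreq *ifr)
        {
                struct hwtstamp_config cfg;

                if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
                        return -EFAULT;

                if (cfg.flags)                    /* reserved for future use */
                        return -EINVAL;

                switch (cfg.tx_type) {            /* validate only, apply later */
                case HWTSTAMP_TX_OFF:
                case HWTSTAMP_TX_ON:
                        break;
                default:
                        return -ERANGE;
                }

                switch (cfg.rx_filter) {          /* validate before any write */
                case HWTSTAMP_FILTER_NONE:
                case HWTSTAMP_FILTER_PTP_V2_EVENT:
                        break;
                default:
                        return -ERANGE;
                }

                /* everything checked out: only now program TX/RX timestamping,
                 * and return -EOPNOTSUPP (not -ENOTSUPP) if the hardware
                 * cannot timestamp at all */

                return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ?
                       -EFAULT : 0;
        }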
    • ti_cpsw: Validate hwtstamp_config completely before applying it · 2ee91e54
      Authored by Ben Hutchings
      cpsw_hwtstamp_ioctl() should validate all fields of hwtstamp_config,
      and the hardware version, before making any changes.  Currently it
      sets the TX configuration before validating the rx_filter field
      or that the hardware supports timestamping.
      
      Also correct the error code for hardware versions that don't
      support timestamping.  ENOTSUPP is used by the NFS implementation
      and is not part of userland API; we want EOPNOTSUPP (which glibc
      also calls ENOTSUP, with one 'P').
      
      Untested as I don't have a cross-compiler to hand.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Acked-by: Mugunthan V N <mugunthanvnm@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2ee91e54
    • stmmac: Validate hwtstamp_config completely before applying it · 5f3da328
      Authored by Ben Hutchings
      stmmac_hwtstamp_ioctl() should validate all fields of hwtstamp_config
      before making any changes.  Currently it sets the TX configuration
      before validating the rx_filter field.
      
      Compile-tested only.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5f3da328
    • pch_gbe: Validate hwtstamp_config completely before applying it · 810abe9b
      Authored by Ben Hutchings
      hwtstamp_ioctl() should validate all fields of hwtstamp_config
      before making any changes.  Currently it sets the TX configuration
      before validating the rx_filter field.
      
      Compile-tested only.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      810abe9b
    • e1000e: Validate hwtstamp_config completely before applying it · 62d7e3a2
      Authored by Ben Hutchings
      e1000e_hwtstamp_ioctl() should validate all fields of hwtstamp_config
      before making any changes.  Currently it copies the configuration to
      the e1000_adapter structure before validating it at all.
      
      Change e1000e_config_hwtstamp() to take a pointer to the
      hwstamp_config and to copy the config after validating it.
      
      Compile-tested only.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      62d7e3a2
    • tg3: Validate hwtstamp_config completely before applying it · 58b187c6
      Authored by Ben Hutchings
      tg3_hwtstamp_ioctl() should validate all fields of hwtstamp_config
      before making any changes.  Currently it sets the TX configuration
      before validating the rx_filter field.
      
      Compile-tested only.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Acked-by: Nithin Nayak Sujir <nsujir@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      58b187c6
    • macvtap: limit head length of skb allocated · 16a3fa28
      Authored by Jason Wang
      We currently use hdr_len, which is advertised by the guest, as a hint for
      the skb head length. But when the guest advertises a very large value, it
      can lead to a 64K+ kmalloc() allocation, which has a very high chance of
      failing when host memory is fragmented or under heavy stress. A huge
      hdr_len also reduces the effectiveness of zerocopy, or even disables it
      entirely if a GSO skb is linearized in the guest.
      
      To solve those issues, this patch introduces an upper limit (PAGE_SIZE) on
      the head length, which guarantees an order-0 allocation each time.
      
      Cc: Stefan Hajnoczi <stefanha@redhat.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      16a3fa28
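
      A hedged sketch of the limit; the helper below is hypothetical, not the
      macvtap code itself. Cap the linear part of the skb at one page so the
      head allocation is always order 0, whatever hdr_len the guest advertised.
      The tuntap patch below applies the same cap.

        #include <linux/kernel.h>
        #include <linux/mm.h>

        static size_t limit_head_len(size_t guest_hdr_len)
        {
                /* the guest's hdr_len is only a hint; never let it force a
                 * multi-page kmalloc() for the skb's linear area */
                return min_t(size_t, guest_hdr_len, PAGE_SIZE);
        }

      The clamped value is then passed as the linear (head) size to the skb
      allocation helper; anything beyond it lands in paged fragments rather
      than a large kmalloc().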
    • tuntap: limit head length of skb allocated · 96f8d9ec
      Authored by Jason Wang
      We currently use hdr_len, which is advertised by the guest, as a hint for
      the skb head length. But when the guest advertises a very large value, it
      can lead to a 64K+ kmalloc() allocation, which has a very high chance of
      failing when host memory is fragmented or under heavy stress. A huge
      hdr_len also reduces the effectiveness of zerocopy, or even disables it
      entirely if a GSO skb is linearized in the guest.
      
      To solve those issues, this patch introduces an upper limit (PAGE_SIZE) on
      the head length, which guarantees an order-0 allocation each time.
      
      Cc: Stefan Hajnoczi <stefanha@redhat.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      96f8d9ec
  3. 14 November 2013, 4 commits
  4. 13 November 2013, 19 commits