1. 10 October 2014 (3 commits)
  2. 09 October 2014 (3 commits)
  3. 08 October 2014 (2 commits)
    •
      net: better IFF_XMIT_DST_RELEASE support · 02875878
      Committed by Eric Dumazet
      Testing xmit_more support with netperf and connected UDP sockets,
      I found strange dst refcount false sharing.
      
      Current handling of IFF_XMIT_DST_RELEASE is not optimal.
      
      Dropping the dst in validate_xmit_skb() is certainly too late in case the
      packet was queued by CPU X but dequeued by CPU Y.
      
      The logical point to take care of drop/force is in __dev_queue_xmit(),
      before even taking the qdisc lock.
      
      As Julian Anastasov pointed out, the need for skb_dst() might come from some
      packet schedulers or classifiers.
      
      This patch adds a new helper to cleanly express the needs of various drivers
      or qdiscs/classifiers; a sketch of the helper follows this entry.
      
      Drivers that need skb_dst() in their ndo_start_xmit() should call the
      following helper in their setup routine instead of the prior:
      
      	dev->priv_flags &= ~IFF_XMIT_DST_RELEASE;
      ->
      	netif_keep_dst(dev);
      
      Instead of using a single bit, we use two bits: one that may eventually be
      rebuilt by the bonding/team drivers.
      
      The other one is permanent and prevents IFF_XMIT_DST_RELEASE from being
      rebuilt in bonding/team. Something smarter could be added later.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      02875878
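      A minimal sketch of what netif_keep_dst() could look like, assuming the
      permanent bit is named IFF_XMIT_DST_RELEASE_PERM (that name is an
      assumption, not confirmed by this log):
      
        static inline void netif_keep_dst(struct net_device *dev)
        {
                /* clear both release bits so the stack keeps skb_dst()
                 * across this device's xmit path */
                dev->priv_flags &= ~(IFF_XMIT_DST_RELEASE |
                                     IFF_XMIT_DST_RELEASE_PERM);
        }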
    •
      net: phy: adjust fixed_phy_register() return value · fd2ef0ba
      Committed by Petri Gynther
      Adjust fixed_phy_register() to return struct phy_device *, so that
      it becomes easy to use fixed PHYs without device tree support:
      
        phydev = fixed_phy_register(PHY_POLL, &fixed_phy_status, NULL);
        fixed_phy_set_link_update(phydev, fixed_phy_link_update);
        phy_connect_direct(netdev, phydev, handler_fn, phy_interface);
      
      This change is a prerequisite for modifying the bcmgenet driver to work
      without a device tree on Broadcom's MIPS-based 7xxx platforms; the assumed
      prototype change is sketched after this entry.
      Signed-off-by: Petri Gynther <pgynther@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fd2ef0ba
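      A sketch of the assumed prototype change (the prior return type and the
      exact parameter list are assumptions inferred from the usage above):
      
        /* before: returned a status code, so callers still needed a device
         * tree lookup to get at the PHY */
        int fixed_phy_register(unsigned int irq,
                               struct fixed_phy_status *status,
                               struct device_node *np);
      
        /* after: returns the registered PHY directly */
        struct phy_device *fixed_phy_register(unsigned int irq,
                                              struct fixed_phy_status *status,
                                              struct device_node *np);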
  4. 07 October 2014 (1 commit)
  5. 06 October 2014 (1 commit)
  6. 05 October 2014 (1 commit)
    •
      net: Cleanup skb cloning by adding SKB_FCLONE_FREE · c8753d55
      Committed by Vijay Subramanian
      SKB_FCLONE_UNAVAILABLE has an overloaded meaning depending on the type of skb:
      1: If the skb is allocated from head_cache, it indicates that an fclone is not available.
      2: If the skb is a companion fclone skb (allocated from fclone_cache), it indicates
      that it is available to be used.
      
      To avoid confusion for case 2 above, this patch replaces
      SKB_FCLONE_UNAVAILABLE with SKB_FCLONE_FREE where appropriate. For fclone
      companion skbs, this indicates they are free for use; the resulting enum is
      sketched after this entry.
      
      SKB_FCLONE_UNAVAILABLE now simply indicates that the skb is from head_cache and
      cannot / will not have a companion fclone.
      Signed-off-by: Vijay Subramanian <subramanian.vijay@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c8753d55
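      A sketch of the resulting fclone state enum, with the meanings described
      above as comments (the ordering of the values is an assumption):
      
        enum {
                SKB_FCLONE_UNAVAILABLE, /* skb from head_cache, never has a companion fclone */
                SKB_FCLONE_ORIG,        /* orig skb from fclone_cache */
                SKB_FCLONE_CLONE,       /* companion fclone skb, currently in use */
                SKB_FCLONE_FREE,        /* companion fclone skb, free for use */
        };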
  7. 04 October 2014 (7 commits)
  8. 02 October 2014 (3 commits)
    •
      d068b02c
    •
      udp: Generalize skb_udp_segment · 8bce6d7d
      Committed by Tom Herbert
      skb_udp_segment is the function called from udp4_ufo_fragment to
      segment a UDP tunnel packet. This function currently assumes the
      encapsulated payload is transparent Ethernet bridging (i.e. VXLAN
      encapsulation). This patch generalizes the function to
      operate on either an Ethertype or an IP protocol.
      
      The inner_protocol field must be set to the protocol of the inner
      header. This can now be either an Ethertype or an IP protocol
      (in a union). A new flag in the skbuff indicates which type is
      in effect. The skb_set_inner_protocol and skb_set_inner_ipproto
      helper functions (sketched after this entry) were added to set the
      inner_protocol. These functions are called from the point where the
      tunnel encapsulation is occurring.
      
      When skb_udp_tunnel_segment is called, the function to segment the
      inner packet is selected based on the inner IP protocol or Ethertype. In the
      case of an IP protocol encapsulation, the function is derived from
      inet[6]_offloads. In the case of an Ethertype, skb->protocol is
      set to the inner_protocol and skb_mac_gso_segment is called. (GRE
      currently does this, but it might be possible to look up the protocol
      in offload_base and call the appropriate segmentation function
      directly.)
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8bce6d7d
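      A sketch of the two helpers, assuming the union members and the type flag
      are named inner_protocol, inner_ipproto and inner_protocol_type (the
      ENCAP_TYPE_* names are also assumptions):
      
        static inline void skb_set_inner_protocol(struct sk_buff *skb,
                                                  __be16 protocol)
        {
                skb->inner_protocol = protocol;         /* Ethertype */
                skb->inner_protocol_type = ENCAP_TYPE_ETHER;
        }
      
        static inline void skb_set_inner_ipproto(struct sk_buff *skb,
                                                 __u8 ipproto)
        {
                skb->inner_ipproto = ipproto;           /* IP protocol number */
                skb->inner_protocol_type = ENCAP_TYPE_IPPROTO;
        }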
    •
      net: cleanup and document skb fclone layout · d0bf4a9e
      Committed by Eric Dumazet
      Let's use a proper structure to clearly document and implement
      skb fast clones.
      
      We can then more easily experiment with alternative layouts.
      
      This patch adds a new skb_fclone_busy() helper, used by tcp and xfrm,
      to stop leaking implementation details; a sketch of the structure and
      the helper follows this entry.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d0bf4a9e
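      A sketch of the documented layout and the new helper (the structure and
      field names are assumptions consistent with the description above):
      
        struct sk_buff_fclones {
                struct sk_buff  skb1;           /* the original skb */
                struct sk_buff  skb2;           /* its companion fast clone */
                atomic_t        fclone_ref;
        };
      
        /* true while the companion fast clone is still in flight,
         * so the pair cannot be reused yet */
        static inline bool skb_fclone_busy(const struct sk_buff *skb)
        {
                const struct sk_buff_fclones *fclones;
      
                fclones = container_of(skb, struct sk_buff_fclones, skb1);
      
                return skb->fclone == SKB_FCLONE_ORIG &&
                       fclones->skb2.fclone == SKB_FCLONE_CLONE;
        }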
  9. 01 October 2014 (3 commits)
    •
      HID: wacom: implement generic HID handling for pen generic devices · 7704ac93
      Committed by Benjamin Tissoires
      ISDv4 and v5 are plain HID devices. We can directly implement generic
      HID parsing/handling and remove the need to manually add those PIDs to
      the list of supported devices.
      
      This patch implements the pen support only. The finger part will come in
      a later patch.
      
      To be properly notified of an .event() and a .report(), we need to force
      hid-core to go through the HID parsing. By default, wacom.ko binds only
      hidraw, so the HID parsing is not done by hid-core. When a true HID device
      is there, we add the flag HID_CLAIMED_DRIVER to hid->claimed, which forces
      hid-core to parse the incoming reports. A sketch of the connect path
      follows this entry.
      (Note that this can easily be backported by directly setting the .claimed
      flag to HID_CLAIMED_DRIVER even if hid-core does not support
      HID_CONNECT_DRIVER.)
      Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
      Acked-by: Jason Gerecke <killertofu@gmail.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      7704ac93
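      A rough sketch of the connect path described above (the probe function
      name and error handling are illustrative assumptions):
      
        static int wacom_probe(struct hid_device *hdev,
                               const struct hid_device_id *id)
        {
                int error;
      
                /* keep binding hidraw, but also ask hid-core to parse
                 * reports so .event()/.report() get called */
                error = hid_hw_start(hdev, HID_CONNECT_HIDRAW |
                                           HID_CONNECT_DRIVER);
                if (error)
                        return error;
      
                return 0;
        }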
    •
      net/mlx4_core: New init and exit flow for mlx4_core · e1c00e10
      Committed by Majd Dibbiny
      In the new flow, we separate the PCI initialization and teardown
      from the initialization and teardown of the other resources.
      
      __mlx4_init_one handles the PCI resource initialization. It then
      calls mlx4_load_one to initialize the remainder of the resources.
      
      When a device is removed, mlx4_remove_one is invoked. It now calls
      mlx4_unload_one to free all the resources except the PCI resources,
      and frees the PCI resources itself once mlx4_unload_one returns.
      A rough sketch of the split follows this entry.
      
      The above separation will allow us to implement a 'reset flow' in the future.
      It will also enable more EQs for VFs and is a pre-step towards the modern API
      for enabling/disabling SRIOV.
      
      Also add nvfs, an integer array of size MLX4_MAX_PORTS + 1, to the mlx4_dev
      struct. This new field avoids parsing the num_vfs module parameter
      each time mlx4_restart_one is called.
      Signed-off-by: Majd Dibbiny <majd@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e1c00e10
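      A rough sketch of the split described above (the mlx4_load_one /
      mlx4_unload_one parameter lists are condensed and assumed):
      
        static int __mlx4_init_one(struct pci_dev *pdev,
                                   const struct pci_device_id *id)
        {
                int err;
      
                /* PCI-only setup: enable the device, request regions,
                 * set the DMA mask */
                err = pci_enable_device(pdev);
                if (err)
                        return err;
      
                /* everything else: EQs, ports, SR-IOV resources, ... */
                return mlx4_load_one(pdev);
        }
      
        static void mlx4_remove_one(struct pci_dev *pdev)
        {
                mlx4_unload_one(pdev);          /* non-PCI resources */
                pci_disable_device(pdev);       /* PCI resources */
        }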
    •
      bcma: register bcma as device tree driver · 2101e533
      Committed by Hauke Mehrtens
      This driver is used by the bcm53xx ARM SoC code. It is now possible to
      give the address of the chipcommon core in the device tree, and bcma will
      search for all the other cores.
      Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      2101e533
  10. 30 September 2014 (7 commits)
    •
      tty: serial: 8250: use 32bit variable for rpm_tx_active · baeb7ef3
      Committed by Sebastian Andrzej Siewior
      The kbuild test robot wrote me:
      |  make.cross ARCH=powerpc
      |>> ERROR: ".__xchg_called_with_bad_pointer" [drivers/tty/serial/8250/8250.ko] undefined!
      
      The generic implementation of xchg() on arm and x86 works for variables of
      size one byte (char). According to the report, powerpc does not support
      xchg() for one-byte variables, and looking further the same seems to be
      the case for sparc and tile (or for 10 out of the 26 architectures that
      provide a custom implementation).
      For that reason I increase the size of the variable from one to four
      bytes to get it to work on powerpc (and the others); see the sketch
      after this entry.
      Reported-by: kbuild test robot <fengguang.wu@intel.com>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      baeb7ef3
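      A sketch of the change (the struct and variable names around the field
      are assumptions):
      
        /* before: "unsigned char rpm_tx_active;" -- a one-byte xchg() is
         * not implemented on powerpc, sparc or tile */
        unsigned int            rpm_tx_active;
      
        /* the xchg()-based guard itself stays the same */
        if (xchg(&up->rpm_tx_active, 1))
                return;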
    •
      macvlan: add source mode · 79cf79ab
      Committed by Michael Braun
      This patch adds a new mode of operation to macvlan, called "source".
      It allows one to set a list of allowed MAC addresses, which is matched
      against the source MAC address of frames received on the underlying
      interface.
      This enables creating MAC-based VLAN associations instead of the standard
      port- or tag-based ones. The feature is useful for deploying 802.1x MAC-based
      behaviour where the drivers of the underlying interfaces do not allow that.
      
      Configuration is done through the netlink interface using e.g.:
       ip link add link eth0 name macvlan0 type macvlan mode source
       ip link add link eth0 name macvlan1 type macvlan mode source
       ip link set link dev macvlan0 type macvlan macaddr add 00:11:11:11:11:11
       ip link set link dev macvlan0 type macvlan macaddr add 00:22:22:22:22:22
       ip link set link dev macvlan0 type macvlan macaddr add 00:33:33:33:33:33
       ip link set link dev macvlan1 type macvlan macaddr add 00:33:33:33:33:33
       ip link set link dev macvlan1 type macvlan macaddr add 00:44:44:44:44:44
      
      This allows the clients with MAC addresses 00:11:11:11:11:11 and
      00:22:22:22:22:22 to be part of only the VLAN associated with the macvlan0
      interface, the client with MAC address 00:44:44:44:44:44 to be part of only
      the VLAN associated with the macvlan1 interface, and the client with MAC
      address 00:33:33:33:33:33 to be associated with both VLANs.
      
      Based on work of Stefan Gula <steweg@gmail.com>
      
      v8: last version of Stefan Gula for Kernel 3.2.1
      v9: rework onto linux-next 2014-03-12 by Michael Braun
          add MACADDR_SET command, enable to configure mac for source mode
          while creating interface
      v10:
        - reduce indention level
        - rename source_list to source_entry
        - use aligned 64bit ether address
        - use hash_64 instead of addr[5]
      v11:
        - rebase for 3.14 / linux-next 20.04.2014
      v12:
        - rebase for linux-next 2014-09-25
      Signed-off-by: Michael Braun <michael-dev@fami-braun.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      79cf79ab
    •
      ARCNET: add support for multi interfaces on com20020 · c51da42a
      Committed by Michael Grzeschik
      The com20020-pci driver is currently designed to instantiate
      one netdev per PCI device. This patch adds support for instantiating
      many card interfaces from one PCI device, depending on the device
      data in the private data.
      Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c51da42a
    •
      ARCNET: add com20020 PCI IDs with metadata · 8c14f9c7
      Committed by Michael Grzeschik
      This patch adds metadata for the com20020 to prepare for devices with
      multiple I/O address areas and multiple card interfaces.
      Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8c14f9c7
    •
      NFSD: Implement SEEK · 24bab491
      Committed by Anna Schumaker
      This patch adds server support for the NFS v4.2 operation SEEK, which
      returns the position of the next hole or data segment in a file.
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      24bab491
    •
      NFSD: Add generic v4.2 infrastructure · 87a15a80
      Committed by Anna Schumaker
      It's cleaner to introduce everything at once and have the server reply
      with "not supported" than it would be to introduce extra operations when
      implementing a specific one in the middle of the list.
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      87a15a80
    •
      net: reorganize sk_buff for faster __copy_skb_header() · b1937227
      Committed by Eric Dumazet
      With the proliferation of bit fields in sk_buff, __copy_skb_header() became
      quite expensive, showing up as the most expensive function in a GSO
      workload.
      
      __copy_skb_header() performance is also critical for non-GSO TCP
      operations, as it is used from skb_clone().
      
      This patch carefully moves all the fields that were not copied into a
      separate zone: cloned, nohdr, fclone, peeked, head_frag, xmit_more.
      
      All other copied fields are then moved into a section delimited by
      headers_start[0]/headers_end[0] so that we can use a single memcpy() call,
      inlined by the compiler using long word loads/stores; a sketch follows
      this entry.
      
      I also tried to keep all copies in the natural order of sk_buff,
      to help hardware prefetching.
      
      I made sure the sk_buff size did not change.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b1937227
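      A sketch of the single-memcpy copy inside __copy_skb_header(), assuming
      zero-length marker fields headers_start[0]/headers_end[0] bracket the
      copied region (surrounding per-field copies are condensed):
      
        static void __copy_skb_header(struct sk_buff *new,
                                      const struct sk_buff *old)
        {
                /* cloned, nohdr, fclone, peeked, head_frag and xmit_more
                 * live outside the bracketed region and are not copied */
                new->tstamp = old->tstamp;
                new->dev    = old->dev;
                memcpy(new->cb, old->cb, sizeof(old->cb));
      
                /* one memcpy covers everything between the two markers */
                memcpy(&new->headers_start, &old->headers_start,
                       offsetof(struct sk_buff, headers_end) -
                       offsetof(struct sk_buff, headers_start));
        }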
  11. 29 September 2014 (4 commits)
  12. 28 September 2014 (1 commit)
  13. 27 September 2014 (4 commits)
    •
      fuse: honour max_read and max_write in direct_io mode · 2c80929c
      Committed by Miklos Szeredi
      The third argument of fuse_get_user_pages(), "nbytesp", refers to the number of
      bytes a caller asked to pack into the fuse request. This value may be less
      than the capacity of the fuse request or of the iov_iter. So fuse_get_user_pages()
      must ensure that *nbytesp does not grow.
      
      Now that the helper iov_iter_get_pages() performs all the hard work of
      extracting pages from the iov_iter, this can be done by passing a properly
      calculated "maxsize" to the helper, as sketched after this entry.
      
      The other caller of iov_iter_get_pages() (dio_refill_pages()) doesn't need
      this capability, so LONG_MAX is passed as the maxsize argument there.
      
      Fixes: c9c37e2e ("fuse: switch to iov_iter_get_pages()")
      Reported-by: Werner Baumann <werner.baumann@onlinehome.de>
      Tested-by: Maxim Patlasov <mpatlasov@parallels.com>
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      2c80929c
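      A sketch of the call described above (the request/page bookkeeping names
      are illustrative assumptions; only the maxsize argument matters here):
      
        /* cap what iov_iter_get_pages() may extract to what the caller
         * asked for, so *nbytesp cannot grow past the request capacity */
        ret = iov_iter_get_pages(ii, &req->pages[req->num_pages],
                                 *nbytesp - nbytes,             /* maxsize */
                                 req->max_pages - req->num_pages,
                                 &start);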
    •
      net: introduce __skb_header_release() · f4a775d1
      Committed by Eric Dumazet
      While profiling the TCP stack, I noticed one useless atomic operation
      in tcp_sendmsg(), caused by skb_header_release().
      
      It turns out all current skb_header_release() users have a fresh skb
      that no other user can see, so we can avoid one atomic operation.
      
      Introduce __skb_header_release() to clearly document this; a sketch of
      the two variants follows this entry.
      
      This gave me a 1.5% improvement on a TCP_RR workload.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f4a775d1
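      A sketch of the two variants side by side (the dataref encoding shown is
      an assumption about skb_shared_info internals):
      
        /* original: an atomic read-modify-write on shinfo->dataref */
        static inline void skb_header_release(struct sk_buff *skb)
        {
                BUG_ON(skb->nohdr);
                skb->nohdr = 1;
                atomic_add(1 << SKB_DATAREF_SHIFT, &skb_shinfo(skb)->dataref);
        }
      
        /* new: the skb is fresh and unshared, so a plain store is enough */
        static inline void __skb_header_release(struct sk_buff *skb)
        {
                skb->nohdr = 1;
                atomic_set(&skb_shinfo(skb)->dataref,
                           1 + (1 << SKB_DATAREF_SHIFT));
        }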
    •
      net: Change netdev_<level> logging functions to return void · 6ea754eb
      Committed by Joe Perches
      No caller or macro uses the return value, so make all
      the functions return void.
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6ea754eb
    •
      bpf: verifier (add verifier core) · 17a52670
      Committed by Alexei Starovoitov
      This patch adds the verifier core, which simulates execution of every insn and
      records the state of registers and of the program stack. Every branch instruction
      seen during simulation is pushed onto the state stack. When the verifier reaches
      BPF_EXIT, it pops the state from the stack and continues until it reaches BPF_EXIT again.
      For program:
      1: bpf_mov r1, xxx
      2: if (r1 == 0) goto 5
      3: bpf_mov r0, 1
      4: goto 6
      5: bpf_mov r0, 2
      6: bpf_exit
      The verifier will walk insns: 1, 2, 3, 4, 6
      then it will pop the state recorded at insn#2 and will continue: 5, 6
      
      This way it walks all possible paths through the program and checks all
      possible values of registers. While doing so, it checks for:
      - invalid instructions
      - uninitialized register access
      - uninitialized stack access
      - misaligned stack access
      - out of range stack access
      - invalid calling convention
      - instruction encodings that use reserved fields
      
      A kernel subsystem configures the verifier with two callbacks:
      
      - bool (*is_valid_access)(int off, int size, enum bpf_access_type type);
        which tells the verifier which fields of 'ctx'
        are accessible (remember, 'ctx' is the first argument to an eBPF program)
      
      - const struct bpf_func_proto *(*get_func_proto)(enum bpf_func_id func_id);
        which returns the argument constraints of the kernel helper functions that an
        eBPF program may call, so that the verifier can check that the R1-R5 types match the prototype
      
      More details in Documentation/networking/filter.txt and in kernel/bpf/verifier.c
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      17a52670