1. 30 September 2015 (9 commits)
    • bridge: vlan: add per-vlan struct and move to rhashtables · 2594e906
      Nikolay Aleksandrov authored
      This patch changes the bridge vlan implementation to use rhashtables
      instead of bitmaps. The main motivation behind this change is that we
      need extensible per-vlan structures (both per-port and global) so more
      advanced features can be introduced and the vlan support can be
      extended. I've tried to break this up, but the moment net_port_vlans is
      changed the whole API goes away, so this ended up as a larger patch.
      A few short goals of this patch are:
      - Extensible per-vlan structs stored in rhashtables and a sorted list
      - Keep user-visible behaviour (compressed vlans etc)
      - Keep fastpath ingress/egress logic the same (optimizations to come
        later)
      
      Here's a brief list of some of the new features we'd like to introduce:
      - per-vlan counters
      - vlan ingress/egress mapping
      - per-vlan igmp configuration
      - vlan priorities
      - avoid fdb entries replication (e.g. local fdb scaling issues)
      
      A single structure is kept for both global and per-port entries, both to
      avoid code duplication where possible and because we'll soon introduce
      "port0 / aka bridge as port" which should simplify things further
      (thanks to Vlad for the suggestion!).
      
      Now we have a global per-vlan rhashtable (bridge-wide) and a per-port
      per-vlan rhashtable; when an entry is added to a port it gets a pointer
      to its global context so it can be accessed quickly later. There's also
      a sorted vlan list which is used for stable walks, for some user-visible
      behaviour such as vlan ranges, and for error paths.
      VLANs are stored in a "vlan group" which currently contains the
      rhashtable, sorted vlan list and the number of "real" vlan entries.
      A good side-effect of this change is that it resembles how hw keeps
      per-vlan data.
      One important note after this change: if a VLAN is looked up in the
      bridge's rhashtable for filtering purposes (or to check whether it's an
      existing usable entry rather than just a global context), the new helper
      br_vlan_should_use() must be applied to the entry that is found. If the
      lookup is done only against a port's vlan group, this check can be
      skipped.
      
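      As a rough illustration, the resulting layout can be sketched as below.
      Apart from br_vlan_should_use(), the struct and field names are
      approximations for illustration only, not the exact upstream
      definitions:

          #include <linux/rhashtable.h>
          #include <linux/list.h>
          #include <linux/types.h>

          /* simplified sketch of a "vlan group" */
          struct net_bridge_vlan_group {
                  struct rhashtable       vlan_hash;  /* vid -> vlan entry */
                  struct list_head        vlan_list;  /* sorted; stable walks/ranges */
                  u16                     num_vlans;  /* "real" (usable) entries only */
          };

          /* one entry per vlan, shared shape for bridge-wide and per-port tables */
          struct net_bridge_vlan {
                  struct rhash_head       vnode;      /* rhashtable linkage */
                  struct list_head        vlist;      /* position in the sorted list */
                  u16                     vid;
                  u16                     flags;
                  struct net_bridge_vlan  *brvlan;    /* per-port entry: pointer to
                                                       * its bridge-wide context */
          };

      A filtering lookup in the bridge-wide table would then only treat the
      found entry as usable after br_vlan_should_use() agrees; a lookup in a
      port's vlan group can skip that check, as noted above.
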
      Things tested so far:
      - basic vlan ingress/egress
      - pvids
      - untagged vlans
      - undef CONFIG_BRIDGE_VLAN_FILTERING
      - adding/deleting vlans in different scenarios (with/without global ctx,
        while transmitting traffic, in ranges etc)
      - loading/removing the module while having/adding/deleting vlans
      - extracting bridge vlan information (user ABI), compressed requests
      - adding/deleting fdbs on vlans
      - bridge mac change, promisc mode
      - default pvid change
      - kmemleak ON during the whole time
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'mvneta_percpu_irq' · 191988e0
      David S. Miller authored
      Gregory CLEMENT says:
      
      ====================
      net: mvneta: Switch to per-CPU irq and make rxq_def useful
      
      As stated in the first version: "this patchset reworks the Marvell
      neta driver in order to really support its per-CPU interrupts, instead
      of faking them as SPI, and allow the use of any RX queue instead of
      the hardcoded RX queue 0 that we have currently."
      
      Following the review which has been done, Maxime started adding the
      CPU hotplug support. I continued his work a few weeks ago and here is
      the result.
      
      Since the 1st version the main change is this CPU hotplug support. In
      order to validate it I powered the CPUs up and down while running iperf.
      I ran the tests for hours: the kernel didn't crash and the network
      interfaces remained usable. Of course it impacted performance, but
      continuously powering the CPUs down and up is not something we usually
      do.
      
      I also reorganized the series: the first 3 patches should go through the
      irq subsystem, whereas the other 4 should go through the network
      subsystem.
      
      However, there is a runtime dependency between the two parts: patch 5
      depends on patch 3 to be able to use the per-CPU irq.
      
      Thanks,
      
      Gregory
      
      PS: Thanks to Willy who gave me some pointers on how to deal with the
      NAPI.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvneta: Statically assign queues to CPUs · f8642885
      Maxime Ripard authored
      Since the switch to per-CPU interrupts, we lost the ability to choose
      which CPU was going to receive our RX interrupt; it ended up being
      whichever CPU the mvneta_open function happened to run on.
      
      We can now assign our queues to their respective CPUs, and make sure only
      this CPU is going to handle our traffic.
      
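      A rough sketch of the idea, assuming RX queues are spread round-robin
      across the online CPUs (MVNETA_CPU_MAP, mvreg_write and rxq_number come
      from the existing driver; the function name and the per-queue bit layout
      here are assumptions):

          static void mvneta_map_queues_to_cpus(struct mvneta_port *pp)
          {
                  int cpu;

                  for_each_online_cpu(cpu) {
                          int rxq, rxq_map = 0;

                          /* queue i is handled by CPU (i % number of online CPUs) */
                          for (rxq = 0; rxq < rxq_number; rxq++)
                                  if (rxq % num_online_cpus() == cpu)
                                          rxq_map |= BIT(rxq);

                          mvreg_write(pp, MVNETA_CPU_MAP(cpu), rxq_map);
                  }
          }
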
      This also paves the way for changing that at runtime, and later on for
      supporting RSS.
      
      [gregory.clement@free-electrons.com]: hardened the CPU hotplug support.
      Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvneta: Allow different queues · d8936657
      Maxime Ripard authored
      The mvneta driver allows changing the default RX queue through the
      rxq_def kernel parameter.
      
      However, the current code doesn't allow any value other than 0. This is
      actively checked for in the driver's probe function, because the driver
      makes a number of assumptions and takes a number of shortcuts in order
      to just use that RX queue.
      
      Remove these limitations in order to be able to specify any available
      queue.
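      
      For reference, the knob itself is already exposed by the driver roughly
      as below (the exact permission flag is an assumption); with the
      restriction lifted, booting with e.g. mvneta.rxq_def=2 becomes
      meaningful:

          /* default RX queue, settable when loading the driver */
          static int rxq_def;
          module_param(rxq_def, int, S_IRUGO);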
      Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvneta: Handle per-cpu interrupts · 12bb03b4
      Maxime Ripard authored
      Now that our interrupt controller allows us to use per-CPU interrupts,
      actually use them in the mvneta driver.
      
      This obviously involves reworking the driver to have a CPU-local NAPI
      structure and to report incoming packets through that structure.
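      
      A minimal sketch of the shape this takes, assuming a per-CPU container
      allocated with alloc_percpu() (the struct name, the "ports" field and
      the exact call sites are illustrative, not the upstream definitions):

          struct mvneta_pcpu_port {
                  struct mvneta_port      *pp;
                  struct napi_struct      napi;
          };

          /* at probe time: one NAPI context per CPU */
          int cpu;

          pp->ports = alloc_percpu(struct mvneta_pcpu_port);
          for_each_possible_cpu(cpu) {
                  struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);

                  port->pp = pp;
                  netif_napi_add(dev, &port->napi, mvneta_poll, NAPI_POLL_WEIGHT);
          }

          /* at open time: request the interrupt as per-CPU, passing the
           * per-CPU cookie that the handler will get back */
          int err = request_percpu_irq(dev->irq, mvneta_isr, dev->name, pp->ports);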
      Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvneta: Fix CPU_MAP registers initialisation · 2502d0ef
      Maxime Ripard authored
      The CPU_MAP register is duplicated for each CPU, each instance being at
      a different address.
      
      However, the code so far was using CONFIG_NR_CPUS to iterate over the
      CPU_MAP instances, while the SoCs embed at most 4 CPUs.
      
      This is especially an issue with multi_v7_defconfig, where CONFIG_NR_CPUS
      is currently set to 16, resulting in writes to registers that are not
      CPU_MAP.
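      
      The fix boils down to iterating over the CPUs that actually exist
      instead of the compile-time maximum, along these lines (the access mask
      names follow the driver, but treat the exact value written as an
      assumption):

          /* before: for (cpu = 0; cpu < CONFIG_NR_CPUS; cpu++) ... */
          for_each_present_cpu(cpu)
                  mvreg_write(pp, MVNETA_CPU_MAP(cpu),
                              MVNETA_CPU_RXQ_ACCESS_ALL_MASK |
                              MVNETA_CPU_TXQ_ACCESS_ALL_MASK);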
      
      Fixes: c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
      Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Cc: <stable@vger.kernel.org> # v3.8+
      Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • irqchip: armada-370-xp: Rework per-cpu interrupts handling · 080481f9
      Maxime Ripard authored
      The MPIC driver currently has a hard-coded list of interrupts to handle
      as per-cpu.
      
      Since the timer, fabric and neta interrupts are the only per-cpu
      interrupts in the system, we can remove that switch statement and just
      check the hardware irq number to determine whether a given interrupt is
      per-cpu or not.
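      
      In code terms, the irq_domain map callback can then decide from the
      hwirq range alone, roughly as follows (ARMADA_370_XP_MAX_PER_CPU_IRQS
      and armada_370_xp_irq_chip are taken from the existing driver; the exact
      cut-off and handler choice here are assumptions):

          static int armada_370_xp_mpic_irq_map(struct irq_domain *h,
                                                unsigned int virq,
                                                irq_hw_number_t hw)
          {
                  if (hw < ARMADA_370_XP_MAX_PER_CPU_IRQS) {
                          /* low hwirqs are per-CPU: timer, fabric, neta */
                          irq_set_percpu_devid(virq);
                          irq_set_chip_and_handler(virq, &armada_370_xp_irq_chip,
                                                   handle_percpu_devid_irq);
                  } else {
                          irq_set_chip_and_handler(virq, &armada_370_xp_irq_chip,
                                                   handle_level_irq);
                  }
                  return 0;
          }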
      Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • irq: Export per-cpu irq allocation and de-allocation functions · aec2e2ad
      Maxime Ripard authored
      Some drivers might use per-cpu interrupts and still be built as a
      module. Export request_percpu_irq and free_percpu_irq to these users,
      which also makes them consistent with enable/disable_percpu_irq, which
      were already exported.
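      
      Presumably this amounts to adding the usual export markers next to the
      two functions in kernel/irq/manage.c; whether the GPL-only variant is
      used is an assumption here:

          EXPORT_SYMBOL_GPL(request_percpu_irq);
          EXPORT_SYMBOL_GPL(free_percpu_irq);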
      Reported-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • genirq: Fix the documentation of request_percpu_irq · a1b7febd
      Maxime Ripard authored
      The documentation of request_percpu_irq is confusing and suggests that
      the interrupt is not enabled at all, while it is actually enabled on the
      local CPU.
      
      Clarify that.
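      
      Read that way, a typical caller would look roughly like this (the irq
      number, handler and cookie names are placeholders): the request enables
      the interrupt on the requesting CPU, and every other CPU still enables
      it for itself with enable_percpu_irq():

          err = request_percpu_irq(irq, my_handler, "my-dev", my_pcpu_cookie);
          if (err)
                  return err;

          /* on each of the other CPUs, e.g. from a hotplug callback */
          enable_percpu_irq(irq, IRQ_TYPE_NONE);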
      Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 29 September 2015 (28 commits)
  3. 28 September 2015 (1 commit)
  4. 27 September 2015 (2 commits)
    • Merge branch 'vxlan-ipv4-ipv6' · 8f350437
      David S. Miller authored
      Jiri Benc says:
      
      ====================
      vxlan: support both IPv4 and IPv6 sockets
      
      Note: this needs net merged into net-next in order to apply.
      
      It's currently not easy enough to work with metadata based vxlan tunnels. In
      particular, it's necessary to create separate network interfaces for IPv4
      and IPv6 tunneling. Assigning an IPv6 address to an IPv4 interface is
      allowed yet won't do what's expected. With route based tunneling, one has to
      pay attention to using the vxlan interface opened with the correct family.
      Other users of this (openvswitch) would need to always create two vxlan
      interfaces.
      
      Furthermore, there's no sane API for creating an IPv6 vxlan metadata based
      interface.
      
      This patchset simplifies this by opening both an IPv4 and an IPv6 socket
      if the vxlan interface has the metadata flag (IFLA_VXLAN_COLLECT_METADATA)
      set. Assignment of addresses etc. works as expected after this.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vxlan: support both IPv4 and IPv6 sockets in a single vxlan device · b1be00a6
      Jiri Benc authored
      For a metadata based vxlan interface, open both an IPv4 and an IPv6
      socket. This is much more user friendly: it's no longer necessary to
      create two vxlan interfaces and pay attention to using the right one in
      routing rules.
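      
      A minimal sketch of the socket-opening logic this implies, assuming a
      helper __vxlan_sock_add(vxlan, ipv6) that opens one family (the flag and
      helper names approximate the driver's, but treat them as assumptions):

          static int vxlan_sock_add(struct vxlan_dev *vxlan)
          {
                  bool ipv6 = vxlan->flags & VXLAN_F_IPV6;
                  bool metadata = vxlan->flags & VXLAN_F_COLLECT_METADATA;
                  int ret = 0;

          #if IS_ENABLED(CONFIG_IPV6)
                  if (ipv6 || metadata)
                          ret = __vxlan_sock_add(vxlan, true);   /* AF_INET6 */
          #endif
                  if (!ret && (!ipv6 || metadata))
                          ret = __vxlan_sock_add(vxlan, false);  /* AF_INET  */
                  if (ret < 0)
                          vxlan_sock_release(vxlan);
                  return ret;
          }

      With that in place, a single metadata device (e.g. created with
      "ip link add vxlan0 type vxlan external dstport 4789") can carry both
      address families.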
      Signed-off-by: Jiri Benc <jbenc@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>