1. 02 June 2014, 4 commits
    • net: ks8851: Don't use regulator_get_optional() · 2a82e40d
      Stephen Boyd authored
      We shouldn't be using regulator_get_optional() here. These
      regulators are always present as part of the physical design, and
      there is no way to use an internal regulator or to change the
      source of the reference voltage in software. Given that the only
      in-kernel users of this driver are DT based, this change should be
      transparent to them even if they don't specify any supplies,
      because the regulator framework will insert dummy supplies as
      needed (a hedged sketch of the pattern follows this entry).
      
      Cc: Nishanth Menon <nm@ti.com>
      Cc: Mark Brown <broonie@kernel.org>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Reviewed-by: Mark Brown <broonie@linaro.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
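      A minimal sketch of the pattern this commit describes, assuming a
      hypothetical supply name "vdd" and probe function (the real driver's
      names may differ): with the non-optional devm_regulator_get(), the
      framework substitutes a dummy regulator when DT names no supply, so
      the enable path needs no special-casing.

          #include <linux/err.h>
          #include <linux/regulator/consumer.h>
          #include <linux/spi/spi.h>

          /* Hedged sketch, not the actual ks8851 diff. */
          static int example_probe(struct spi_device *spi)
          {
                  struct regulator *vdd;
                  int ret;

                  /* Non-optional get: a dummy supply is returned when the
                   * DT node names none, so no NULL/optional checks here. */
                  vdd = devm_regulator_get(&spi->dev, "vdd");
                  if (IS_ERR(vdd))
                          return PTR_ERR(vdd);

                  ret = regulator_enable(vdd);
                  if (ret)
                          return ret;

                  /* ... rest of probe ... */
                  return 0;
          }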
    • Merge branch 'mlx4-next' · b07166b2
      David S. Miller authored
      Amir Vadai says:
      
      ====================
      cpumask,net: Affinity hint helper function
      
      This patchset sets the affinity hint to influence IRQs to be
      allocated on the same NUMA node as the one where the card resides,
      as discussed in http://www.spinics.net/lists/netdev/msg271497.html
      
      If more IRQs are allocated than there are local NUMA cores, all
      local cores will be used first, and the remaining IRQs will be
      placed on a remote NUMA node. If there is no NUMA support, IRQs
      and cores are mapped 1:1.
      
      Since the utility function that calculates the mapping could be
      useful to other multiqueue (mq) drivers in the kernel, it was added
      to cpumask.[ch] (its interface is sketched after this entry).
      
      This patchset was tested and applied on top of net-next, since the
      first consumer is a network device (mlx4_en), over commit 506724c4:
      "tg3: Override clock, link aware and link idle mode during NVRAM
      dump".
      
      I couldn't find a maintainer for cpumask.c, so I only added the
      kernel mailing list to CC.
      
      Amir
      
      Changes from V5:
      - Moved the utility function from kernel/irq/manage.c to lib/cpumask.c,
        and renamed it accordingly to cpumask_set_cpu_local_first()
      - Added some comments, as Thomas Gleixner suggested
      - Changed -EINVAL to -EAGAIN, which describes the error situation better
      
      Changes from V4:
      - Patch 1/2: irq: Utility function to get affinity_hint by policy
        Thank you Ben for the great review:
        - Moved the function to kernel/irq/manage.c since it could be useful
          for block mq devices
        - Fixed typos
        - Used cpumask_t * instead of cpumask_var_t in the function header
        - Restructured the function to remove a NULL assignment in a
          cpumask_var_t
        - Fixed handling of offline local CPUs
      
      Changes from V3:
      - Patch 2/2: net/mlx4_en: Use affinity hint
        - The patch file was somehow corrupted
      
      Changes from V2:
      - Patch 1/2: net: Utility function to get affinity_hint by policy
        - Fixed style issues
      
      Changes from V1:
      - Patch 1/2: net: Utility function to get affinity_hint by policy
        - Fixed the error flow to return -EINVAL on error (thanks, Govind)
      - Patch 2/2: net/mlx4_en: Use affinity hint
        - Set ring->affinity_hint to NULL on error
      
      Changes from V0:
      - Fixed small style issues
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
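      As a reading aid, a sketch of the helper's interface as the cover
      letter describes it; this is inferred from the text above, and the
      exact prototype in lib/cpumask.c may differ.

          /* Inferred from the cover letter, not copied from lib/cpumask.c:
           * set the i'th CPU in *dstp, preferring CPUs local to numa_node;
           * without NUMA support the mapping degenerates to 1:1. Returns 0
           * on success, or a negative errno (the series settled on -EAGAIN)
           * on failure. */
          int cpumask_set_cpu_local_first(int i, int numa_node, cpumask_t *dstp);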
    • net/mlx4_en: Use affinity hint · 70a640d0
      Yuval Atias authored
      The “affinity hint” mechanism is used by the user space daemon
      irqbalance to indicate a preferred CPU mask for IRQs. irqbalance
      can use this hint to balance the IRQs between the CPUs indicated
      by the mask.
      
      We want the HCA to preferentially map the IRQs it uses to NUMA
      cores close to it. To accomplish this, we use
      cpumask_set_cpu_local_first(), which sets the affinity hint
      according to the following policy: first it maps IRQs to “close”
      NUMA cores; once these are exhausted, the remaining IRQs are mapped
      to “far” NUMA cores (a sketch of this pattern follows this entry).
      Signed-off-by: Yuval Atias <yuvala@mellanox.com>
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
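      A hedged sketch of the consumption pattern this commit describes;
      the types and field names (example_priv, rx_ring, affinity_mask,
      numa_node) are illustrative, not the actual mlx4_en code.

          #include <linux/cpumask.h>
          #include <linux/interrupt.h>

          /* Illustrative types, not the mlx4_en definitions. */
          struct example_ring {
                  unsigned int irq;
                  cpumask_t affinity_mask;
          };

          struct example_priv {
                  int numa_node;
                  int num_rx_rings;
                  struct example_ring **rx_ring;
          };

          static void example_set_affinity_hints(struct example_priv *priv)
          {
                  int i;

                  for (i = 0; i < priv->num_rx_rings; i++) {
                          struct example_ring *ring = priv->rx_ring[i];

                          /* Pick the i'th CPU, local-node CPUs first. */
                          if (cpumask_set_cpu_local_first(i, priv->numa_node,
                                                          &ring->affinity_mask))
                                  continue; /* e.g. -EAGAIN: leave hint unset */

                          irq_set_affinity_hint(ring->irq, &ring->affinity_mask);
                  }
          }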
    • cpumask: Utility function to set n'th cpu - local cpu first · c8865b64
      Amir Vadai authored
      This function sets the n'th CPU, considering local CPUs first. For
      example, on a 16-core server where the even-numbered CPUs are
      local, it produces the following mapping:
      cpumask_set_cpu_local_first(0, numa, cpumask) => cpu 0 is set
      cpumask_set_cpu_local_first(1, numa, cpumask) => cpu 2 is set
      ...
      cpumask_set_cpu_local_first(7, numa, cpumask) => cpu 14 is set
      cpumask_set_cpu_local_first(8, numa, cpumask) => cpu 1 is set
      cpumask_set_cpu_local_first(9, numa, cpumask) => cpu 3 is set
      ...
      cpumask_set_cpu_local_first(15, numa, cpumask) => cpu 15 is set
      
      Currently this function will be used by multiqueue networking
      devices to calculate the IRQ affinity mask, such that as many local
      CPUs as possible are utilized to handle the mq device's IRQs (an
      illustrative reimplementation follows this entry).
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
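      To make the enumeration above concrete, here is an illustrative
      reimplementation of the local-first policy, written from the
      description in this log rather than copied from lib/cpumask.c. With
      the 16-core even-local example above, i=1 lands on cpu 2 and i=8 on
      cpu 1, matching the table.

          #include <linux/cpumask.h>
          #include <linux/errno.h>
          #include <linux/topology.h>

          /* Sets the i'th online CPU in *dstp, enumerating CPUs on 'node'
           * before all remaining online CPUs. Illustrative only. */
          static int example_set_cpu_local_first(int i, int node, cpumask_t *dstp)
          {
                  int cpu, n = 0;

                  cpumask_clear(dstp);

                  /* First pass: online CPUs local to 'node'. */
                  for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask) {
                          if (n++ == i) {
                                  cpumask_set_cpu(cpu, dstp);
                                  return 0;
                          }
                  }

                  /* Second pass: the remaining online CPUs. */
                  for_each_cpu(cpu, cpu_online_mask) {
                          if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
                                  continue;
                          if (n++ == i) {
                                  cpumask_set_cpu(cpu, dstp);
                                  return 0;
                          }
                  }

                  return -EAGAIN; /* no i'th online CPU */
          }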
  2. 31 May 2014, 31 commits
  3. 30 May 2014, 4 commits
  4. 29 May 2014, 1 commit