1. 10 October 2017, 28 commits
    • sched/fair: Force balancing on NOHZ balance if local group has capacity · 583ffd99
      Authored by Brendan Jackman
      The "goto force_balance" here is intended to mitigate the fact that
      avg_load calculations can result in bad placement decisions when
      priority is asymmetrical.
      
      The original commit that adds it:
      
        fab47622 ("sched: Force balancing on newidle balance if local group has capacity")
      
      explains:
      
          Under certain situations, such as a niced down task (i.e. nice =
          -15) in the presence of nr_cpus NICE0 tasks, the niced task lands
          on a sched group and kicks away other tasks because of its large
          weight. This leads to sub-optimal utilization of the
          machine. Even though the sched group has capacity, it does not
          pull tasks because sds.this_load >> sds.max_load, and f_b_g()
          returns NULL.
      
      A similar but inverted issue also affects ARM big.LITTLE (asymmetrical CPU
      capacity) systems - consider 8 always-running, same-priority tasks on a
      system with 4 "big" and 4 "little" CPUs. Suppose that 5 of them end up on
      the "big" CPUs (which will be represented by one sched_group in the DIE
      sched_domain) and 3 on the "little" (the other sched_group in DIE), leaving
      one CPU unused. Because the "big" group has a higher group_capacity its
      avg_load may not present an imbalance that would cause migrating a
      task to the idle "little".
      
      The force_balance case here solves the problem but currently only for
      CPU_NEWLY_IDLE balances, which in theory might never happen on the
      unused CPU. Including CPU_IDLE in the force_balance case means
      there's an upper bound on the time before we can attempt to solve the
      underutilization: after DIE's sd->balance_interval has passed the
      next nohz balance kick will help us out.
      Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170807163900.25180-1-brendan.jackman@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      583ffd99
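      For context, a minimal sketch of the check this message refers to in
      find_busiest_group(), assuming the 4.14-era kernel/sched/fair.c helpers
      (group_has_capacity(), the lb_env idle field); the exact guard in the
      real tree may differ slightly:

        /* Sketch: force a pull whenever the local group still has capacity,
         * on newly-idle *and* regular idle (nohz) balancing. */
        if (env->idle != CPU_NOT_IDLE &&        /* was: == CPU_NEWLY_IDLE */
            group_has_capacity(env, local) &&
            busiest->group_no_capacity)
                goto force_balance;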
    • sched/fair: Sync task util before slow-path wakeup · ea16f0ea
      Authored by Brendan Jackman
      We use task_util() in find_idlest_group() via capacity_spare_wake().
      This task_util() is updated in wake_cap(). However, wake_cap() is not the
      only reason for ending up in find_idlest_group() - we could have been sent
      there by wake_wide(). So explicitly sync the task util with prev_cpu
      when we are about to head to find_idlest_group().
      
      We could simply do this at the beginning of
      select_task_rq_fair() (i.e. irrespective of whether we're heading to
      select_idle_sibling() or find_idlest_group() & co), but I didn't want to
      slow down the select_idle_sibling() path more than necessary.
      
      Don't do this during fork balancing: we won't need the task_util, and
      we'd just clobber the last_update_time, which is supposed to be 0.
      Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andres Oportus <andresoportus@google.com>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Josef Bacik <josef@toxicpanda.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Link: http://lkml.kernel.org/r/20170808095519.10077-1-brendan.jackman@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ea16f0ea
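      A rough sketch of where such a sync sits in select_task_rq_fair()'s slow
      path, assuming the 4.14-era helpers sync_entity_load_avg() and the
      SD_BALANCE_FORK flag; treat it as illustrative rather than the literal
      diff:

        if (unlikely(sd)) {
                /* Slow path: find_idlest_group() will look at task_util()
                 * via capacity_spare_wake(), so sync the task's util with
                 * prev_cpu first. Skip fork balancing, where
                 * last_update_time must stay 0. */
                if (!(sd_flag & SD_BALANCE_FORK))
                        sync_entity_load_avg(&p->se);

                /* ... proceed to find_idlest_group() and friends ... */
        }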
    • sched/fair: Search a task from the tail of the queue · 93824900
      Authored by Uladzislau Rezki
      As a first step this patch makes the cfs_tasks list an MRU one: when a
      task is picked to run on a physical CPU, it is moved to the front of
      the list.
      
      As a result, the cfs_tasks list is more or less sorted (except for
      woken tasks), from tasks that most recently received CPU time toward
      tasks with the longest wait time in the run-queue, i.e. an MRU list.
      
      Second, as part of the load balance operation, this approach
      starts detach_tasks()/detach_one_task() from the tail of the
      queue instead of the head, giving some advantages:
      
       - it tends to pick a task with the highest wait time;
      
       - tasks located at the tail are less likely to be cache-hot, so they
         are more likely to pass the can_migrate_task() check.
      
      hackbench shows slightly better performance. For example, with 1000
      samples and 40 groups on an i5-3320M CPU, the figures are:
      
       default: 0.657 avg
       patched: 0.646 avg
      Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Kirill Tkhai <tkhai@yandex.ru>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Link: http://lkml.kernel.org/r/20170913102430.8985-2-urezki@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      93824900
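      A condensed sketch of the two pieces described above, assuming the
      4.14-era kernel/sched/fair.c names (cfs_tasks, se.group_node); the
      snippet is illustrative, not the literal diff:

        /* Pick path: move the chosen task to the front, so cfs_tasks
         * stays ordered most-recently-run first (MRU). */
        list_move(&p->se.group_node, &rq->cfs_tasks);

        /* detach_tasks(): start from the tail, i.e. the longest-waiting
         * and least cache-hot tasks. */
        p = list_last_entry(tasks, struct task_struct, se.group_node);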
    • sched/topology: Introduce NUMA identity node sched domain · 051f3ca0
      Authored by Suravee Suthikulpanit
      On an AMD Family 17h-based (EPYC) system, a logical NUMA node can contain
      up to 8 cores (16 threads) with the following topology.
      
                   ----------------------------
               C0  | T0 T1 |    ||    | T0 T1 | C4
                   --------|    ||    |--------
               C1  | T0 T1 | L3 || L3 | T0 T1 | C5
                   --------|    ||    |--------
               C2  | T0 T1 | #0 || #1 | T0 T1 | C6
                   --------|    ||    |--------
               C3  | T0 T1 |    ||    | T0 T1 | C7
                   ----------------------------
      
      Here, there are 2 last-level (L3) caches per logical NUMA node.
      A socket can contain up to 4 NUMA nodes, and a system can support
      up to 2 sockets. With a full system configuration, the current
      scheduler creates 4 sched domains:
      
        domain0 SMT       (span a core)
        domain1 MC        (span a last-level-cache)
        domain2 NUMA      (span a socket: 4 nodes)
        domain3 NUMA      (span a system: 8 nodes)
      
      Note that there is no domain to represent cpus spanning a logical
      NUMA node.  With this hierarchy of sched domains, the scheduler does
      not balance properly in the following cases:
      
      Case1:
      
       When running 8 tasks, a properly balanced system should
       schedule a task per logical NUMA node. This is not the case for
       the current scheduler.
      
      Case2:
      
       In some cases, threads are scheduled on the same cpu, while other
       cpus are idle. This results in run-to-run inconsistency. For example:
      
        taskset -c 0-7 sysbench --num-threads=8 --test=cpu \
                                --cpu-max-prime=100000 run
      
      Total execution time ranges from 25.1s to 33.5s depending on threads
      placement, where 25.1s is when all 8 threads are balanced properly
      on 8 cpus.
      
      Introduce a NUMA identity node sched domain, based on how the
      SRAT/SLIT tables define a logical NUMA node. This results in the
      following hierarchy of sched domains on the system described above.
      
        domain0 SMT       (span a core)
        domain1 MC        (span a last-level-cache)
        domain2 NODE      (span a logical NUMA node)
        domain3 NUMA      (span a socket: 4 nodes)
        domain4 NUMA      (span a system: 8 nodes)
      
      This fixes the improper load balancing cases mentioned above.
      Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: bp@suse.de
      Link: http://lkml.kernel.org/r/1504768805-46716-1-git-send-email-suravee.suthikulpanit@amd.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      051f3ca0
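      A hedged sketch of how such an identity level can be expressed when the
      NUMA topology levels are built (kernel/sched/topology.c style); the
      field names follow struct sched_domain_topology_level, but the exact
      placement inside sched_init_numa() may differ from the real patch:

        /* A NODE level that spans exactly one logical NUMA node
         * (NUMA distance 0), sitting between MC and the NUMA levels. */
        tl[i++] = (struct sched_domain_topology_level){
                .mask = sd_numa_mask,
                .numa_level = 0,
                SD_INIT_NAME(NODE)
        };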
    • sched/topology: Restore SD_PREFER_SIBLING on MC domains · ed4ad1ca
      Authored by Peter Zijlstra
      The normal x86_topology on NHM+ machines degenerates because the MC
      and CPU domains are of the same size, therefore MC inherits
      SD_PREFER_SIBLING from CPU (which then gets taken out). The result is
      that we'll spread tasks across the first NUMA level in order to
      maximize cache utilization.
      
      However, for the x86_numa_in_package_topology we lose the CPU domain,
      and we'll not have SD_PREFER_SIBLING set anywhere, giving a distinct
      difference in behaviour.
      
      Commit:
      
        8e7fbcbc ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs")
      
      failed to preserve the SD_PREFER_SIBLING flag for the !power_saving
      case on both CPU and MC.
      
      Then commit:
      
        6956dc56 ("sched/numa: Add SD_PERFER_SIBLING to CPU domain")
      
      adds it back to the CPU but not MC.
      
      Restore that now, such that we get consistent spreading behaviour wrt
      L3 and NUMA.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ed4ad1ca
    • sched/deadline: Use C bitfields for the state flags · 799ba82d
      Authored by luca abeni
      Ask the compiler to use a single bit for storing true / false values,
      instead of wasting the size of a whole int value.
      Tested with gcc 5.4.0 on x86_64, and the compiler produces the expected
      Assembly (similar to the Assembly code generated when explicitly accessing
      the bits with bitmasks, "&" and "|").
      Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Cc: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1504778971-13573-5-git-send-email-luca.abeni@santannapisa.it
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      799ba82d
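      For illustration, a small self-contained example of the change this
      describes (plain int flags replaced by 1-bit bitfields); the field names
      mirror sched_dl_entity's flags, but the struct here is just a stand-in:

        #include <stdio.h>

        struct dl_flags {
                unsigned int dl_throttled      : 1;
                unsigned int dl_boosted        : 1;
                unsigned int dl_yielded        : 1;
                unsigned int dl_non_contending : 1;
        };

        int main(void)
        {
                struct dl_flags f = { .dl_throttled = 1 };

                /* Reads and writes compile to the same mask-and-shift code
                 * you would write by hand with "&" and "|". */
                f.dl_yielded = 1;
                printf("throttled=%u yielded=%u size=%zu\n",
                       f.dl_throttled, f.dl_yielded, sizeof(f));
                return 0;
        }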
    • sched/deadline: Rename __dl_clear() to __dl_sub() · 8c0944ce
      Authored by Peter Zijlstra
      __dl_sub() is more meaningful as a name, and is more consistent
      with the naming of the dual function (__dl_add()).
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Luca Abeni <luca.abeni@santannapisa.it>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Cc: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1504778971-13573-4-git-send-email-luca.abeni@santannapisa.it
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8c0944ce
    • sched/deadline: Fix switching to -deadline · 295d6d5e
      Authored by Luca Abeni
      Fix a bug introduced in:
      
        72f9f3fd ("sched/deadline: Remove dl_new from struct sched_dl_entity")
      
      After that commit, when switching to -deadline, if the scheduling
      deadline of a task is in the past then switched_to_dl() calls
      setup_new_entity() to properly initialize the scheduling deadline
      and runtime.
      
      The problem is that the task is enqueued _before_ having its parameters
      initialized by setup_new_entity(), and this can cause problems.
      For example, a task with its out-of-date deadline in the past will
      potentially be enqueued as the highest priority one; however, its
      adjusted deadline may not be the earliest one.
      
      This patch fixes the problem by initializing the task's parameters before
      enqueuing it.
      Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Cc: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1504778971-13573-3-git-send-email-luca.abeni@santannapisa.it
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      295d6d5e
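      The fix is about ordering rather than new mechanism; a conceptual sketch
      (assuming the dl.c helpers setup_new_dl_entity() and enqueue_task_dl();
      the placement is illustrative, not the literal diff):

        /* Before (buggy): enqueue first, so the task is ordered by a stale,
         * in-the-past deadline. */
        enqueue_task_dl(rq, p, flags);
        if (dl_time_before(p->dl.deadline, rq_clock(rq)))
                setup_new_dl_entity(&p->dl);    /* too late */

        /* After (fixed): refresh deadline/runtime first, then enqueue. */
        if (dl_time_before(p->dl.deadline, rq_clock(rq)))
                setup_new_dl_entity(&p->dl);
        enqueue_task_dl(rq, p, flags);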
    • sched/headers: Remove duplicate prototype of __dl_clear_params() · e964d350
      Authored by luca abeni
      Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Cc: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1504778971-13573-2-git-send-email-luca.abeni@santannapisa.it
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e964d350
    • sched/debug: Rename task-state printing helpers · 1d48b080
      Authored by Peter Zijlstra
      Steve requested better names for the new task-state helper functions.
      
      So introduce the concept of task-state index for the printing and
      rename __get_task_state() to task_state_index() and
      __task_state_to_char() to task_index_to_char().
      Requested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170929115016.pzlqc7ss3ccystyg@hirez.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1d48b080
    • sched/idle: Move quiet_vmstate() into the NOHZ code · 62cb1188
      Authored by Peter Zijlstra
      quiet_vmstat() is an expensive function that only makes sense when we
      go into NOHZ.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: aubrey.li@linux.intel.com
      Cc: cl@linux.com
      Cc: fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      62cb1188
    • 151aeab7
    • sched/core: Ensure load_balance() respects the active_mask · 024c9d2f
      Authored by Peter Zijlstra
      While load_balance() masks the source CPUs against active_mask, it had
      a hole against the destination CPU. Ensure the destination CPU is also
      part of the 'domain-mask & active-mask' set.
      Reported-by: Levin, Alexander (Sasha Levin) <alexander.levin@verizon.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 77d1dfda ("sched/topology, cpuset: Avoid spurious/wrong domain rebuilds")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      024c9d2f
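      A hedged sketch of the kind of masking this implies in load_balance()
      (4.14-era kernel/sched/fair.c; the exact lines in the real patch may
      differ):

        struct cpumask *cpus = this_cpu_cpumask_var_ptr(load_balance_mask);

        /* Both the source CPUs and the destination CPU must come from
         * 'domain-mask & active-mask'. */
        cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask);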
    • sched/core: Address more wake_affine() regressions · f2cdd9cc
      Authored by Peter Zijlstra
      The trivial wake_affine_idle() implementation is very good for a
      number of workloads, but it comes apart at the moment there are no
      idle CPUs left, IOW the overloaded case.
      
      hackbench:
      
      		NO_WA_WEIGHT		WA_WEIGHT
      
      hackbench-20  : 7.362717561 seconds	6.450509391 seconds
      
      (win)
      
      netperf:
      
      		  NO_WA_WEIGHT		WA_WEIGHT
      
      TCP_SENDFILE-1	: Avg: 54524.6		Avg: 52224.3
      TCP_SENDFILE-10	: Avg: 48185.2          Avg: 46504.3
      TCP_SENDFILE-20	: Avg: 29031.2          Avg: 28610.3
      TCP_SENDFILE-40	: Avg: 9819.72          Avg: 9253.12
      TCP_SENDFILE-80	: Avg: 5355.3           Avg: 4687.4
      
      TCP_STREAM-1	: Avg: 41448.3          Avg: 42254
      TCP_STREAM-10	: Avg: 24123.2          Avg: 25847.9
      TCP_STREAM-20	: Avg: 15834.5          Avg: 18374.4
      TCP_STREAM-40	: Avg: 5583.91          Avg: 5599.57
      TCP_STREAM-80	: Avg: 2329.66          Avg: 2726.41
      
      TCP_RR-1	: Avg: 80473.5          Avg: 82638.8
      TCP_RR-10	: Avg: 72660.5          Avg: 73265.1
      TCP_RR-20	: Avg: 52607.1          Avg: 52634.5
      TCP_RR-40	: Avg: 57199.2          Avg: 56302.3
      TCP_RR-80	: Avg: 25330.3          Avg: 26867.9
      
      UDP_RR-1	: Avg: 108266           Avg: 107844
      UDP_RR-10	: Avg: 95480            Avg: 95245.2
      UDP_RR-20	: Avg: 68770.8          Avg: 68673.7
      UDP_RR-40	: Avg: 76231            Avg: 75419.1
      UDP_RR-80	: Avg: 34578.3          Avg: 35639.1
      
      UDP_STREAM-1	: Avg: 64684.3          Avg: 66606
      UDP_STREAM-10	: Avg: 52701.2          Avg: 52959.5
      UDP_STREAM-20	: Avg: 30376.4          Avg: 29704
      UDP_STREAM-40	: Avg: 15685.8          Avg: 15266.5
      UDP_STREAM-80	: Avg: 8415.13          Avg: 7388.97
      
      (wins and losses)
      
      sysbench:
      
      		    NO_WA_WEIGHT		WA_WEIGHT
      
      sysbench-mysql-2  :  2135.17 per sec.		 2142.51 per sec.
      sysbench-mysql-5  :  4809.68 per sec.            4800.19 per sec.
      sysbench-mysql-10 :  9158.59 per sec.            9157.05 per sec.
      sysbench-mysql-20 : 14570.70 per sec.           14543.55 per sec.
      sysbench-mysql-40 : 22130.56 per sec.           22184.82 per sec.
      sysbench-mysql-80 : 20995.56 per sec.           21904.18 per sec.
      
      sysbench-psql-2   :  1679.58 per sec.            1705.06 per sec.
      sysbench-psql-5   :  3797.69 per sec.            3879.93 per sec.
      sysbench-psql-10  :  7253.22 per sec.            7258.06 per sec.
      sysbench-psql-20  : 11166.75 per sec.           11220.00 per sec.
      sysbench-psql-40  : 17277.28 per sec.           17359.78 per sec.
      sysbench-psql-80  : 17112.44 per sec.           17221.16 per sec.
      
      (increase on the top end)
      
      tbench:
      
      NO_WA_WEIGHT
      
      Throughput 685.211 MB/sec   2 clients   2 procs  max_latency=0.123 ms
      Throughput 1596.64 MB/sec   5 clients   5 procs  max_latency=0.119 ms
      Throughput 2985.47 MB/sec  10 clients  10 procs  max_latency=0.262 ms
      Throughput 4521.15 MB/sec  20 clients  20 procs  max_latency=0.506 ms
      Throughput 9438.1  MB/sec  40 clients  40 procs  max_latency=2.052 ms
      Throughput 8210.5  MB/sec  80 clients  80 procs  max_latency=8.310 ms
      
      WA_WEIGHT
      
      Throughput 697.292 MB/sec   2 clients   2 procs  max_latency=0.127 ms
      Throughput 1596.48 MB/sec   5 clients   5 procs  max_latency=0.080 ms
      Throughput 2975.22 MB/sec  10 clients  10 procs  max_latency=0.254 ms
      Throughput 4575.14 MB/sec  20 clients  20 procs  max_latency=0.502 ms
      Throughput 9468.65 MB/sec  40 clients  40 procs  max_latency=2.069 ms
      Throughput 8631.73 MB/sec  80 clients  80 procs  max_latency=8.605 ms
      
      (increase on the top end)
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f2cdd9cc
    • sched/core: Fix wake_affine() performance regression · d153b153
      Authored by Peter Zijlstra
      Eric reported a sysbench regression against commit:
      
        3fed382b ("sched/numa: Implement NUMA node level wake_affine()")
      
      Similarly, Rik was looking at the NAS-lu.C benchmark, which regressed
      against his v3.10 enterprise kernel.
      
      PRE (current tip/master):
      
       ivb-ep sysbench:
      
         2: [30 secs]     transactions:                        64110  (2136.94 per sec.)
         5: [30 secs]     transactions:                        143644 (4787.99 per sec.)
        10: [30 secs]     transactions:                        274298 (9142.93 per sec.)
        20: [30 secs]     transactions:                        418683 (13955.45 per sec.)
        40: [30 secs]     transactions:                        320731 (10690.15 per sec.)
        80: [30 secs]     transactions:                        355096 (11834.28 per sec.)
      
       hsw-ex NAS:
      
       OMP_PROC_BIND/lu.C.x_threads_144_run_1.log: Time in seconds =                    18.01
       OMP_PROC_BIND/lu.C.x_threads_144_run_2.log: Time in seconds =                    17.89
       OMP_PROC_BIND/lu.C.x_threads_144_run_3.log: Time in seconds =                    17.93
       lu.C.x_threads_144_run_1.log: Time in seconds =                   434.68
       lu.C.x_threads_144_run_2.log: Time in seconds =                   405.36
       lu.C.x_threads_144_run_3.log: Time in seconds =                   433.83
      
      POST (+patch):
      
       ivb-ep sysbench:
      
         2: [30 secs]     transactions:                        64494  (2149.75 per sec.)
         5: [30 secs]     transactions:                        145114 (4836.99 per sec.)
        10: [30 secs]     transactions:                        278311 (9276.69 per sec.)
        20: [30 secs]     transactions:                        437169 (14571.60 per sec.)
        40: [30 secs]     transactions:                        669837 (22326.73 per sec.)
        80: [30 secs]     transactions:                        631739 (21055.88 per sec.)
      
       hsw-ex NAS:
      
       lu.C.x_threads_144_run_1.log: Time in seconds =                    23.36
       lu.C.x_threads_144_run_2.log: Time in seconds =                    22.96
       lu.C.x_threads_144_run_3.log: Time in seconds =                    22.52
      
      This patch takes out all the shiny wake_affine() stuff and goes back to
      utter basics. Between the two CPUs involved with the wakeup (the CPU
      doing the wakeup and the CPU we ran on previously) pick the CPU we can
      run on _now_.
      
      This restores much of the regressions against the older kernels,
      but leaves some ground in the overloaded case. The default-enabled
      WA_WEIGHT (which will be introduced in the next patch) is an attempt
      to address the overloaded situation.
      Reported-by: Eric Farman <farman@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: jinpuwang@gmail.com
      Cc: vcaputo@pengaru.com
      Fixes: 3fed382b ("sched/numa: Implement NUMA node level wake_affine()")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d153b153
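      A hedged sketch of the "utter basics" heuristic described above: between
      the waking CPU and prev_cpu, prefer whichever can run the task now. The
      real function in kernel/sched/fair.c takes additional parameters and may
      differ in detail:

        static bool wake_affine_idle(int this_cpu, int prev_cpu, int sync)
        {
                /* The waking CPU is idle: pull the task here. */
                if (idle_cpu(this_cpu))
                        return true;

                /* Sync wakeup and the waker is about to sleep: also pull. */
                if (sync && cpu_rq(this_cpu)->nr_running == 1)
                        return true;

                return false;   /* otherwise stay with prev_cpu */
        }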
    • Merge branch 'ppc-bundle' (bundle from Michael Ellerman) · 529a86e0
      Authored by Linus Torvalds
      Merge powerpc transactional memory fixes from Michael Ellerman:
       "I figured I'd still send you the commits using a bundle to make sure
        it works in case I need to do it again in future"
      
      This fixes transactional memory state restore for powerpc.
      
      * bundle'd patches from Michael Ellerman:
        powerpc/tm: Fix illegal TM state in signal handler
        powerpc/64s: Use emergency stack for kernel TM Bad Thing program checks
      529a86e0
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net · ff33952e
      Authored by Linus Torvalds
      Pull networking fixes from David Miller:
      
       1) Fix object leak on IPSEC offload failure, from Steffen Klassert.
      
       2) Fix range checks in ipset address range addition operations, from
          Jozsef Kadlecsik.
      
       3) Fix pernet ops unregistration order in ipset, from Florian Westphal.
      
       4) Add missing netlink attribute policy for nl80211 packet pattern
          attrs, from Peng Xu.
      
       5) Fix PPP device destruction race, from Guillaume Nault.
      
       6) Write marks get lost when BPF verifier processes R1=R2 register
          assignments, causing incorrect liveness information and less state
          pruning. Fix from Alexei Starovoitov.
      
       7) Fix blackhole routes so that they are marked dead and therefore not
          cached in sockets, otherwise IPSEC stops working. From Steffen
          Klassert.
      
       8) Fix broadcast handling of UDP socket early demux, from Paolo Abeni.
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (37 commits)
        cdc_ether: flag the u-blox TOBY-L2 and SARA-U2 as wwan
        net: thunderx: mark expected switch fall-throughs in nicvf_main()
        udp: fix bcast packet reception
        netlink: do not set cb_running if dump's start() errs
        ipv4: Fix traffic triggered IPsec connections.
        ipv6: Fix traffic triggered IPsec connections.
        ixgbe: incorrect XDP ring accounting in ethtool tx_frame param
        net: ixgbe: Use new PCI_DEV_FLAGS_NO_RELAXED_ORDERING flag
        Revert commit 1a8b6d76 ("net:add one common config...")
        ixgbe: fix masking of bits read from IXGBE_VXLANCTRL register
        ixgbe: Return error when getting PHY address if PHY access is not supported
        netfilter: xt_bpf: Fix XT_BPF_MODE_FD_PINNED mode of 'xt_bpf_info_v1'
        netfilter: SYNPROXY: skip non-tcp packet in {ipv4, ipv6}_synproxy_hook
        tipc: Unclone message at secondary destination lookup
        tipc: correct initialization of skb list
        gso: fix payload length when gso_size is zero
        mlxsw: spectrum_router: Avoid expensive lookup during route removal
        bpf: fix liveness marking
        doc: Fix typo "8023.ad" in bonding documentation
        ipv6: fix net.ipv6.conf.all.accept_dad behaviour for real
        ...
      ff33952e
    • cdc_ether: flag the u-blox TOBY-L2 and SARA-U2 as wwan · fdfbad32
      Authored by Aleksander Morgado
      The u-blox TOBY-L2 is an LTE Cat 4 module with HSPA+ and 2G fallback.
      This module allows switching to different USB profiles with the
      'AT+UUSBCONF' command, and provides an ECM network interface when the
      'AT+UUSBCONF=2' profile is selected.
      
      The u-blox SARA-U2 is an HSPA module with 2G fallback. The default USB
      configuration includes an ECM network interface.
      
      Both these modules are controlled via AT commands through one of the
      TTYs exposed. Connecting these modules may be done just by activating
      the desired PDP context with 'AT+CGACT=1,<cid>' and then running DHCP
      on the ECM interface.
      Signed-off-by: Aleksander Morgado <aleksander@aleksander.es>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fdfbad32
    • Merge tag 'nfs-for-4.14-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs · 68ebe3cb
      Authored by Linus Torvalds
      Pull NFS client bugfixes from Trond Myklebust:
       "Hightlights include:
      
        stable fixes:
         - nfs/filelayout: fix oops when freeing filelayout segment
         - NFS: Fix uninitialized rpc_wait_queue
      
        bugfixes:
         - NFSv4/pnfs: Fix an infinite layoutget loop
         - nfs: RPC_MAX_AUTH_SIZE is in bytes"
      
      * tag 'nfs-for-4.14-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
        NFSv4/pnfs: Fix an infinite layoutget loop
        nfs/filelayout: fix oops when freeing filelayout segment
        sunrpc: remove redundant initialization of sock
        NFS: Fix uninitialized rpc_wait_queue
        NFS: Cleanup error handling in nfs_idmap_request_key()
        nfs: RPC_MAX_AUTH_SIZE is in bytes
      68ebe3cb
    • net: thunderx: mark expected switch fall-throughs in nicvf_main() · 1a2ace56
      Authored by Gustavo A. R. Silva
      In preparation to enabling -Wimplicit-fallthrough, mark switch cases
      where we are expecting to fall through.
      
      Cc: Sunil Goutham <sgoutham@cavium.com>
      Cc: Robert Richter <rric@kernel.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: netdev@vger.kernel.org
      Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1a2ace56
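      For readers unfamiliar with the convention, a minimal illustration of
      what "marking an expected switch fall-through" means; the exact comment
      text the driver uses may differ, and newer kernels use the fallthrough
      pseudo-keyword instead:

        static int handle_event(int event, int *count)
        {
                switch (event) {
                case 1:
                        (*count)++;
                        /* fall through - case 1 intentionally also does
                         * case 2's work; silences -Wimplicit-fallthrough */
                case 2:
                        (*count)++;
                        break;
                default:
                        return -1;
                }
                return 0;
        }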
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf · fb60bccc
      Authored by David S. Miller
      Pablo Neira Ayuso says:
      
      ====================
      Netfilter/IPVS fixes for net
      
      The following patchset contains Netfilter/IPVS fixes for your net tree,
      they are:
      
      1) Fix packet drops due to incorrect ECN handling in IPVS, from Vadim
         Fedorenko.
      
      2) Fix splat with mark restoration in xt_socket with non-full-sock,
         patch from Subash Abhinov Kasiviswanathan.
      
      3) ipset bogusly bails out when adding IPv4 range containing more than
         2^31 addresses, from Jozsef Kadlecsik.
      
      4) Incorrect pernet unregistration order in ipset, from Florian Westphal.
      
      5) Races between dump and swap in ipset results in BUG_ON splats, from
         Ross Lagerwall.
      
      6) Fix chain renames in nf_tables, from JingPiao Chen.
      
      7) Fix race in pernet codepath with ebtables table registration, from
         Artem Savkov.
      
      8) Memory leak in error path in set name allocation in nf_tables, patch
         from Arvind Yadav.
      
      9) Don't dump chain counters if they are not available, this fixes a
         crash when listing the ruleset.
      
      10) Fix out of bound memory read in strlcpy() in x_tables compat code,
          from Eric Dumazet.
      
      11) Make sure we only process TCP packets in SYNPROXY hooks, patch from
          Lin Zhang.
      
      12) Cannot load rules incrementally anymore after xt_bpf with pinned
          objects, added in revision 1. From Shmulik Ladkani.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fb60bccc
    • Merge branch '10GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue · 5766cd68
      Authored by David S. Miller
      Jeff Kirsher says:
      
      ====================
      Intel Wired LAN Driver Updates 2017-10-09
      
      This series contains updates to ixgbe and arch/Kconfig.
      
      Mark fixes a case where PHY register access is not supported and we were
      returning a PHY address, when we should have been returning -EOPNOTSUPP.
      
      Sabrina Dubroca fixes the use of a logical "and" when it should have been
      the bitwise "and" operator.
      
      Ding Tianhong reverts the commit that added the Kconfig bool option
      ARCH_WANT_RELAX_ORDER, since there is now a new flag
      PCI_DEV_FLAGS_NO_RELAXED_ORDERING that has been added to indicate that
      Relaxed Ordering Attributes should not be used for Transaction Layer
      Packets.  Then follows up with making the needed changes to ixgbe to
      use the new PCI_DEV_FLAGS_NO_RELAXED_ORDERING flag.
      
      John Fastabend fixes an issue in the ring accounting when the transmit
      ring parameters are changed via ethtool when an XDP program is attached.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5766cd68
    • udp: fix bcast packet reception · 996b44fc
      Authored by Paolo Abeni
      The commit bc044e8d ("udp: perform source validation for
      mcast early demux") does not take into account that broadcast packets
      land in the same code path and need different checks for the
      source address - notably, a zero source address is valid for bcast
      and invalid for mcast.
      
      As a result, 2nd and later broadcast packets with 0 source address
      landing on the same socket are dropped. This breaks DHCP servers.
      
      Since we don't have stringent performance requirements for ingress
      broadcast traffic, fix it by disabling UDP early demux for such traffic.
      Reported-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Fixes: bc044e8d ("udp: perform source validation for mcast early demux")
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      996b44fc
    • netlink: do not set cb_running if dump's start() errs · 41c87425
      Authored by Jason A. Donenfeld
      It turns out that multiple places can call netlink_dump(), which means
      it's still possible to dereference partially initialized values in
      dump() that were the result of a faulty returned start().
      
      This fixes the issue by calling start() _before_ setting cb_running to
      true, so that there's no chance at all of hitting the dump() function
      through any indirect paths.
      
      It also moves the call to start() to be when the mutex is held. This has
      the nice side effect of serializing invocations to start(), which is
      likely desirable anyway. It also prevents any possible other races that
      might come out of this logic.
      
      In testing this with several different pieces of tricky code to trigger
      these issues, this commit fixes all avenues that I'm aware of.
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      41c87425
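      A hedged sketch of the ordering change described (check start() before
      marking the dump as running, with the callback mutex held); names
      approximate net/netlink/af_netlink.c rather than quoting the exact diff:

        /* Inside __netlink_dump_start(), with the cb_mutex held. */
        if (cb->start) {
                ret = cb->start(cb);
                if (ret)
                        goto error_put; /* never mark a failed dump running */
        }

        nlk->cb_running = true;         /* only after start() succeeded */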
    • Merge tag 'mac80211-for-davem-2017-10-09' of... · 6df4d17c
      Authored by David S. Miller
      Merge tag 'mac80211-for-davem-2017-10-09' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211
      
      Johannes Berg says:
      
      ====================
      pull-request: mac80211 2017-10-09
      
      The QCA folks found another netlink problem - we were missing validation
      of some attributes. It's not super problematic since one can only read a
      few bytes beyond the message (and that memory must exist), but here's the
      fix for it.
      
      I thought perhaps we can make nla_parse_nested() require a policy, but
      given the two-stage validation/parsing in regular netlink that won't work.
      
      Please pull and let me know if there's any problem.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6df4d17c
    • Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec · 93b03193
      Authored by David S. Miller
      Steffen Klassert says:
      
      ====================
      pull request (net): ipsec 2017-10-09
      
      1) Fix some error paths of the IPsec offloading API.
      
      2) Fix a NULL pointer dereference when IPsec is used
         with vti. From Alexey Kodanev.
      
      3) Don't call xfrm_policy_cache_flush under xfrm_state_lock,
         it triggers several locking warnings. From Artem Savkov.
      
      Please pull or let me know if there are problems.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      93b03193
    • ipv4: Fix traffic triggered IPsec connections. · 6c0e7284
      Authored by Steffen Klassert
      A recent patch removed the dst_free() on the allocated
      dst_entry in ipv4_blackhole_route(). The dst_free() marked the
      dst_entry as dead and added it to the gc list, i.e. it was set up
      for one-time usage. As a result we may now have a blackhole
      route cached at a socket in some IPsec scenarios. This makes the
      connection unusable.
      
      Fix this by marking the dst_entry directly at allocation time
      as 'dead', so it is used only once.
      
      Fixes: b838d5e1 ("ipv4: mark DST_NOGC and remove the operation of dst_free()")
      Reported-by: Tobias Brunner <tobias@strongswan.org>
      Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6c0e7284
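      A hedged sketch of what marking the dst_entry as dead at allocation time
      can look like in ipv4_blackhole_route(); DST_OBSOLETE_DEAD is the
      existing dst flag for this, but treat the exact call as illustrative:

        /* Allocate the blackhole route already marked obsolete/dead, so a
         * socket that hits it will not keep it cached for reuse. */
        rt = dst_alloc(&ipv4_dst_blackhole_ops, NULL, 1, DST_OBSOLETE_DEAD, 0);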
    • ipv6: Fix traffic triggered IPsec connections. · 62cf27e5
      Authored by Steffen Klassert
      A recent patch removed the dst_free() on the allocated
      dst_entry in ipv6_blackhole_route(). The dst_free() marked
      the dst_entry as dead and added it to the gc list, i.e. it
      was set up for one-time usage. As a result we may now have
      a blackhole route cached at a socket in some IPsec scenarios.
      This makes the connection unusable.
      
      Fix this by marking the dst_entry directly at allocation time
      as 'dead', so it is used only once.
      
      Fixes: 587fea74 ("ipv6: mark DST_NOGC and remove the operation of dst_free()")
      Reported-by: Tobias Brunner <tobias@strongswan.org>
      Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      62cf27e5
  2. 09 October 2017, 12 commits