1. 14 Jul 2016 (4 commits)
    •
      sched/cputime: Clean up the old vtime gen irqtime accounting completely · 0cfdf9a1
      Frederic Weisbecker committed
      Vtime generic irqtime accounting has been removed, but a few remnants
      remain to clean up:
      
      * The vtime_accounting_cpu_enabled() check in irq entry was only used
        by CONFIG_VIRT_CPU_ACCOUNTING_GEN. We can safely remove it.
      
      * Without the vtime_accounting_cpu_enabled() check, we no longer need
        the vtime_common_account_irq_enter() indirection.
      
      * Move the vtime_account_irq_enter() implementation under
        CONFIG_VIRT_CPU_ACCOUNTING_NATIVE, which is the last user
        (see the sketch after this list).
      
      * The vtime_account_user() call was only used on irq entry for
        CONFIG_VIRT_CPU_ACCOUNTING_GEN. We can remove that too.
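      A condensed sketch of the resulting layout (not the literal diff; the
      body follows the upstream native-vtime code of that era, so treat it
      as a sketch):
      
        #ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
        /*
         * Charge the time elapsed since the last snapshot to the
         * interrupted context: idle time if we interrupted the idle
         * task outside of irq context, system time otherwise.
         */
        void vtime_account_irq_enter(struct task_struct *tsk)
        {
                if (!in_interrupt() && is_idle_task(tsk))
                        vtime_account_idle(tsk);
                else
                        vtime_account_system(tsk);
        }
        #endif /* only the NATIVE case still implements this hook; the
                  vtime_common_account_irq_enter() indirection is gone */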
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krcmar <rkrcmar@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Link: http://lkml.kernel.org/r/1468421405-20056-4-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0cfdf9a1
    •
      sched/cputime: Replace VTIME_GEN irq time code with IRQ_TIME_ACCOUNTING code · b58c3584
      Rik van Riel committed
      The CONFIG_VIRT_CPU_ACCOUNTING_GEN irq time tracking code does not
      currently appear to work correctly.
      
      On CPUs without nohz_full=, only tick-based irq time sampling is
      done, which breaks down when dealing with a nohz_idle CPU.
      
      On firewalls and similar systems, no ticks may happen on a CPU for a
      while, and the irq time spent may never get accounted properly. This
      can cause issues with capacity planning and power saving, which use
      the CPU statistics as inputs in decision making.
      
      Remove the VTIME_GEN vtime irq time code and replace it with the
      IRQ_TIME_ACCOUNTING code, when the user selects that config option.
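      For context, the IRQ_TIME_ACCOUNTING path samples a fine-grained clock
      at each irq entry and exit rather than relying on the tick, which is
      why it keeps working on tickless CPUs. A simplified sketch of that
      sampling pattern (modeled on the upstream code of that era; treat the
      details as illustrative):
      
        void irqtime_account_irq(struct task_struct *curr)
        {
                s64 delta;
                int cpu;
        
                if (!sched_clock_irqtime)
                        return;
        
                cpu = smp_processor_id();
                /* Time since the previous irq entry/exit event on this
                 * CPU, measured with sched_clock(), not with the tick. */
                delta = sched_clock_cpu(cpu) - __this_cpu_read(irq_start_time);
                __this_cpu_add(irq_start_time, delta);
        
                /* Charge the delta to hardirq or softirq time; softirq
                 * work done by ksoftirqd is accounted as normal task
                 * time instead. */
                if (hardirq_count())
                        __this_cpu_add(cpu_hardirq_time, delta);
                else if (in_serving_softirq() && curr != this_cpu_ksoftirqd())
                        __this_cpu_add(cpu_softirq_time, delta);
        }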
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krcmar <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Link: http://lkml.kernel.org/r/1468421405-20056-3-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b58c3584
    •
      sched/cputime: Count actually elapsed irq & softirq time · 57430218
      Rik van Riel committed
      Currently, if there was any irq or softirq time during 'ticks'
      jiffies, the entire period is accounted as irq or softirq
      time.
      
      This is inaccurate if only a subset of the time was actually spent
      handling irqs, and could conceivably mis-count all of the ticks during
      a period as irq time when there was a mix of irq and softirq time.
      
      This can actually happen when irqtime_account_process_tick() is called
      from account_idle_ticks(), which can pass a large number of ticks down
      all at once.
      
      Fix this by changing irqtime_account_hi_update(), irqtime_account_si_update(),
      and steal_account_process_ticks() to work with cputime_t time units, and
      return the amount of time spent in each mode.
      
      Rename steal_account_process_ticks() to steal_account_process_time(), to
      reflect that time is now accounted in cputime_t, instead of ticks.
      
      Additionally, have irqtime_account_process_tick() take into account how
      much time was spent in each of steal, irq, and softirq time.
      
      The latter could help improve the accuracy of cputime
      accounting when returning from idle on a NO_HZ_IDLE CPU.
      
      Properly accounting how much time was spent in hardirq and
      softirq time will also allow the NO_HZ_FULL code to re-use
      these same functions for hardirq and softirq accounting.
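      A condensed sketch of how the reworked helpers compose (the combining
      helper shown here is illustrative, built only from the function names
      mentioned above):
      
        static cputime_t account_other_time(cputime_t max)
        {
                cputime_t accounted;
        
                /* Steal time first, then hardirq, then softirq: each
                 * helper returns how much it accounted, in cputime_t,
                 * and the running total never exceeds the actually
                 * elapsed window 'max'. */
                accounted = steal_account_process_time(max);
        
                if (accounted < max)
                        accounted += irqtime_account_hi_update(max - accounted);
        
                if (accounted < max)
                        accounted += irqtime_account_si_update(max - accounted);
        
                return accounted;
        }
      
      The caller can then charge only the remainder (if any) of the elapsed
      window to user, system, or idle time.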
      Signed-off-by: Rik van Riel <riel@redhat.com>
      [ Make nsecs_to_cputime64() actually return cputime64_t. ]
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krcmar <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Link: http://lkml.kernel.org/r/1468421405-20056-2-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      57430218
  2. 11 Jul 2016 (4 commits)
  3. 09 Jul 2016 (11 commits)
  4. 08 Jul 2016 (18 commits)
  5. 07 Jul 2016 (3 commits)
    •
      arm64: kernel: Save and restore UAO and addr_limit on exception entry · e19a6ee2
      James Morse committed
      If we take an exception while at EL1, the exception handler inherits
      the original context's addr_limit and PSTATE.UAO values. To be
      consistent, always reset addr_limit and PSTATE.UAO on (re-)entry to
      EL1. This prevents accidental re-use of the original context's
      addr_limit.
      
      Based on a similar patch for arm from Russell King.
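      The real change lives in the arm64 entry assembly; expressed as
      C-style pseudocode (the pt_regs field name is an assumption here),
      the idea is:
      
        /* kernel_entry, when the exception was taken from EL1: */
        regs->orig_addr_limit = get_fs();  /* save interrupted context's limit */
        set_fs(USER_DS);                   /* reset to the default user limit */
        /* PSTATE.UAO is already cleared by hardware on exception entry. */
        
        /* kernel_exit, when returning to EL1: */
        set_fs(regs->orig_addr_limit);     /* restore the saved limit */
        /* PSTATE.UAO is restored from the SPSR on exception return. */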
      
      Cc: <stable@vger.kernel.org> # 4.6-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e19a6ee2
    •
      xenbus: don't BUG() on user mode induced condition · 0beef634
      Jan Beulich committed
      Inability to locate a transaction ID specified from user mode should
      not lead to a kernel crash. For message types other than
      XS_TRANSACTION_START, also avoid issuing anything to xenbus if the
      specified ID doesn't match that of any active transaction.
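      A sketch of the shape of the fix (names modeled on the xenbus frontend
      code; the exact error code is an assumption):
      
        struct xenbus_transaction_holder *trans = NULL, *t;
        
        /* Find the transaction matching the user-supplied ID. */
        list_for_each_entry(t, &u->transactions, list) {
                if (t->handle.id == u->u.msg.tx_id) {
                        trans = t;
                        break;
                }
        }
        
        /* A failed lookup used to hit BUG_ON(); a bogus ID from user
         * space now just fails the request, and nothing is issued to
         * xenbus for message types other than XS_TRANSACTION_START. */
        if (!trans && msg_type != XS_TRANSACTION_START)
                return -ESRCH;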
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      0beef634
    •
      perf/core: Fix pmu::filter_match for SW-led groups · 2c81a647
      Mark Rutland committed
      The following commit:
      
        66eb579e ("perf: allow for PMU-specific event filtering")
      
      added the pmu::filter_match() callback. This was intended to
      avoid HW constraints on events from resulting in extremely
      pessimistic scheduling.
      
      However, pmu::filter_match() is only called for the leader of each event
      group. When the leader is a SW event, we do not filter the group, and
      may fail at pmu::add() time; when this happens we give up on
      scheduling any event groups later in the list until they are rotated
      ahead of the failing group.
      
      This can result in extremely sub-optimal event scheduling behaviour,
      e.g. if running the following on a big.LITTLE platform:
      
      $ taskset -c 0 ./perf stat \
       -e 'a57{context-switches,armv8_cortex_a57/config=0x11/}' \
       -e 'a53{context-switches,armv8_cortex_a53/config=0x11/}' \
       ls
      
           <not counted>      context-switches                                              (0.00%)
           <not counted>      armv8_cortex_a57/config=0x11/                                 (0.00%)
                      24      context-switches                                              (37.36%)
                57589154      armv8_cortex_a53/config=0x11/                                 (37.36%)
      
      Here the 'a53' event group was always eligible to be scheduled, but
      the 'a57' group was never eligible, as the task was always affine to
      a Cortex-A53 CPU. The SW (group leader) event in the 'a57' group was
      eligible, but the HW event failed at pmu::add() time, resulting in
      ctx_flexible_sched_in() giving up on scheduling further groups with
      HW events.
      
      One way of avoiding this is to check pmu::filter_match() on siblings
      as well as the group leader. If any of these fails its
      pmu::filter_match() call, we must skip the entire group before
      attempting to add any events.
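      That check can be sketched as a helper that walks the sibling list
      (helper names here are illustrative):
      
        static inline int __pmu_filter_match(struct perf_event *event)
        {
                struct pmu *pmu = event->pmu;
        
                /* PMUs without a filter callback match everything. */
                return pmu->filter_match ? pmu->filter_match(event) : 1;
        }
        
        /* A group is only schedulable if the leader and every sibling
         * pass their PMU's filter. */
        static inline int pmu_filter_match(struct perf_event *event)
        {
                struct perf_event *child;
        
                if (!__pmu_filter_match(event))
                        return 0;
        
                list_for_each_entry(child, &event->sibling_list, group_entry) {
                        if (!__pmu_filter_match(child))
                                return 0;
                }
        
                return 1;
        }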
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Fixes: 66eb579e ("perf: allow for PMU-specific event filtering")
      Link: http://lkml.kernel.org/r/1465917041-15339-1-git-send-email-mark.rutland@arm.com
      [ Small readability edits. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2c81a647