1. 19 Feb 2013, 1 commit
  2. 11 Feb 2013, 1 commit
  3. 08 Feb 2013, 4 commits
  4. 05 Feb 2013, 3 commits
  5. 04 Feb 2013, 1 commit
  6. 31 Jan 2013, 1 commit
  7. 28 Jan 2013, 7 commits
    • cputime: Safely read cputime of full dynticks CPUs · 6a61671b
      Authored by Frederic Weisbecker
      While remotely reading the cputime of a task running on a
      full dynticks CPU, the values stored in the utime/stime fields
      of struct task_struct may be stale. They may reflect the last
      kernel <-> user transition snapshot, so we need to add the
      tickless time spent since that snapshot.
      
      To fix this, flush the cputime of the dynticks CPUs on every
      kernel <-> user transition and record the time and context
      where we did this. Then, on top of this snapshot and the current
      time, perform the fixup on the reader side in the task_times()
      accessors (sketched after this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      [fixed kvm module related build errors]
      Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
      6a61671b
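      A minimal userspace model of the reader-side fixup described
      above, assuming a snapshot taken at each kernel <-> user
      transition; all names here are illustrative, not the kernel's:

        #include <stdint.h>

        struct vtime_snapshot {
                uint64_t utime;   /* cputime accounted up to the snapshot */
                uint64_t stime;
                uint64_t snap_ns; /* sched_clock()-style time of snapshot */
                int in_user;      /* context recorded at the snapshot */
        };

        /* Reader side: start from the snapshot, then add the tickless
         * time elapsed since, attributed to the recorded context. */
        static void task_times_fixup(const struct vtime_snapshot *s,
                                     uint64_t now_ns,
                                     uint64_t *utime, uint64_t *stime)
        {
                uint64_t delta = now_ns - s->snap_ns;

                *utime = s->utime;
                *stime = s->stime;
                if (s->in_user)
                        *utime += delta; /* ran tickless in userspace */
                else
                        *stime += delta; /* ran tickless in the kernel */
        }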
    • kvm: Prepare to add generic guest entry/exit callbacks · c11f11fc
      Authored by Frederic Weisbecker
      Do some preparatory groundwork before adding the guest_enter()
      and guest_exit() context tracking callbacks. These will later
      be used to read the guest cputime safely when we run in full
      dynticks mode (see the sketch after this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      c11f11fc
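      A rough sketch of what such guest entry/exit callbacks can look
      like, assuming vtime_account() flushes the time elapsed since the
      last accounting point (the bodies are indicative only, not the
      final API):

        static inline void guest_enter(void)
        {
                vtime_account(current);    /* flush host-side time */
                current->flags |= PF_VCPU; /* from here on, guest time */
        }

        static inline void guest_exit(void)
        {
                vtime_account(current);    /* flush guest time */
                current->flags &= ~PF_VCPU;
        }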
    • cputime: Use accessors to read task cputime stats · 6fac4829
      Authored by Frederic Weisbecker
      This is in preparation for the full dynticks feature. While
      remotely reading the cputime of a task running on a full
      dynticks CPU, we'll need to do some extra computation. This
      way we can account the time it spent tickless in userspace
      since its last cputime snapshot (accessor sketched below).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      6fac4829
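      The accessor is trivial for now; the point is that all readers
      funnel through one place where the fixup can later be applied.
      Roughly:

        static inline void task_cputime(struct task_struct *t,
                                        cputime_t *utime, cputime_t *stime)
        {
                *utime = t->utime;
                *stime = t->stime;
        }

      so that callers write task_cputime(t, &utime, &stime) instead of
      reading t->utime and t->stime directly.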
    • cputime: Allow dynamic switch between tick/virtual based cputime accounting · 3f4724ea
      Authored by Frederic Weisbecker
      Allow the kernel to switch dynamically between tick based and
      virtual based cputime accounting. This way we can provide a kind
      of "on-demand" virtual cputime accounting. In this mode, the
      kernel relies on the context tracking subsystem to dynamically
      probe kernel boundaries.
      
      This is in preparation for being able to stop the timer tick in
      more places than just the idle state. Doing so will depend on
      CONFIG_VIRT_CPU_ACCOUNTING_GEN, which makes it possible to account
      the cputime without the tick by hooking into kernel/user boundaries.
      
      Depending on whether the tick is stopped or not, we can switch
      between tick and vtime based accounting at any time, in order to
      minimize the overhead associated with the user hooks (a decision
      sketch follows this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      3f4724ea
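      A simplified model of the resulting decision on the tick path,
      assuming a vtime_accounting_enabled() predicate; the accounting
      helpers' signatures are simplified here, this is not the exact
      kernel code:

        static void account_process_tick_model(struct task_struct *p, int user)
        {
                /* When vtime accounting is active, the kernel/user
                 * boundary hooks do the accounting; the tick path
                 * must not account the same time twice. */
                if (vtime_accounting_enabled())
                        return;

                if (user)
                        account_user_time(p, jiffies_to_cputime(1));
                else
                        account_system_time(p, jiffies_to_cputime(1));
        }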
    • cputime: Generic on-demand virtual cputime accounting · abf917cd
      Authored by Frederic Weisbecker
      If we want to stop the tick outside of idle as well, we need
      to be able to account the cputime without using the tick.
      
      Virtual based cputime accounting solves that problem by
      hooking into kernel/user boundaries.
      
      However, implementing CONFIG_VIRT_CPU_ACCOUNTING requires
      low level arch hooks and involves more overhead. But we already
      have a generic context tracking subsystem that is required
      for RCU by archs which plan to shut down the tick
      outside idle.
      
      This patch implements generic virtual based cputime
      accounting that relies on these generic kernel/user hooks
      (sketched after this entry).
      
      There are some upsides to doing this:
      
      - No arch code is required to implement CONFIG_VIRT_CPU_ACCOUNTING
      if context tracking is already built (it is already necessary for
      RCU in full tickless mode).
      
      - We can rely on the generic context tracking subsystem to dynamically
      (de)activate the hooks, so that we can switch anytime between virtual
      and tick based accounting. This way we don't pay the overhead
      of the virtual accounting when the tick is running periodically.
      
      And one downside:
      
      - There is probably more overhead than with a native virtual based
      cputime accounting implementation. But this relies on hooks that are
      already set anyway.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      abf917cd
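      A sketch of how such accounting can ride the existing context
      tracking probes, assuming vtime_user_enter()/vtime_user_exit()
      helpers that close the current accounting window (simplified;
      the real probe sites also handle RCU and recursion):

        void user_enter(void)
        {
                /* About to run in userspace: everything up to now
                 * was kernel time. */
                vtime_user_enter(current);
        }

        void user_exit(void)
        {
                /* Back from userspace: everything since user_enter()
                 * was user time. */
                vtime_user_exit(current);
        }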
    • cputime: Move default nsecs_to_cputime() to jiffies based cputime file · ae8dda5c
      Authored by Frederic Weisbecker
      If the architecture doesn't provide an implementation of
      nsecs_to_cputime(), the cputime accounting core uses a
      default one that converts the nanoseconds to jiffies. However,
      this only makes sense if we use jiffies based cputime.
      
      For now it doesn't matter much, because this API is only
      called from code that uses jiffies based cputime accounting.
      
      But the code may evolve and this API may be used more
      broadly in the future. Keeping this default implementation
      around is very error prone, as it may introduce a bug and
      hide it on architectures that don't override this API.
      
      Fix this by moving the definition to the jiffies based
      cputime headers, as that is the only place it belongs
      (see the snippet after this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      ae8dda5c
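      The moved default boils down to one line in the jiffies based
      cputime header (shown here from memory, modulo the exact guard):

        #ifndef nsecs_to_cputime
        # define nsecs_to_cputime(__nsec)  nsecs_to_jiffies(__nsec)
        #endif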
    • cputime: Librarize per nsecs resolution cputime definitions · 39613766
      Authored by Frederic Weisbecker
      The full dynticks cputime accounting that we'll soon introduce
      will rely on sched_clock(), whose clock can have per
      nanosecond granularity.
      
      To prepare for this, we need a cputime_t implementation
      that has this precision.
      
      ia64 virtual cputime accounting already uses that granularity,
      so all we need to do is librarize its implementation into the
      asm-generic headers.
      
      Librarize the default per jiffy granularity cputime_t as well,
      so that we can easily pick either implementation depending on
      the cputime accounting config we choose (rough shape sketched
      below).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      39613766
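      The rough shape of the nanosecond-granularity definitions, as
      lifted from the ia64 implementation (abbreviated; the real
      header covers many more conversions):

        typedef u64 cputime_t;                  /* nanoseconds */

        #define cputime_one_jiffy        jiffies_to_cputime(1)
        #define cputime_to_jiffies(ct)   ((ct) / (NSEC_PER_SEC / HZ))
        #define jiffies_to_cputime(jif)  ((jif) * (NSEC_PER_SEC / HZ))
        #define cputime_to_usecs(ct)     ((ct) / NSEC_PER_USEC)
        #define usecs_to_cputime(us)     ((us) * NSEC_PER_USEC)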
  8. 27 Jan 2013, 2 commits
    • cputime: Avoid multiplication overflow on utime scaling · 62188451
      Authored by Frederic Weisbecker
      We scale the stime and utime values based on rtime
      (sum_exec_runtime converted to jiffies). During scaling we
      multiply rtime * utime, which seems fine since both values are
      converted to u64, but it's not.
      
      Let's assume HZ is 1000, i.e. a 1 ms tick. A process consists of
      64 threads, runs for 1 day, and the threads utilize 100% CPU time
      in user space. The machine has 64 CPUs.
      
      The process rtime = utime will be 64 * 24 * 60 * 60 * 1000
      jiffies, which is 0x149970000. The multiplication rtime * utime
      results in 0x1a855771100000000, which cannot be represented in
      64 bits (demonstrated after this entry).
      
      The result of the overflow is that the utime value visible in
      user space (prev_utime in the kernel) stalls, even though the
      application still consumes a lot of CPU time.
      
      A solution is to perform the multiplication on stime instead of
      utime. It's easy to grow the utime value fast with a CPU bound
      thread in userspace, for example. We assume that doing so with
      stime is much harder: in most cases a task shouldn't spend much
      time in kernel space, as it tends to sleep waiting for jobs to
      complete when they take long to finish. IO is the typical
      example of that.
      
      Hence scaling the cputime by performing the multiplication on
      stime instead of utime should considerably reduce the chances of
      an overflow on most workloads.
      
      This is largely inspired by a patch from Stanislaw Gruszka:
      http://lkml.kernel.org/r/20130107113144.GA7544@redhat.com
      Inspired-by: Stanislaw Gruszka <sgruszka@redhat.com>
      Reported-by: Stanislaw Gruszka <sgruszka@redhat.com>
      Acked-by: Stanislaw Gruszka <sgruszka@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1359217182-25184-1-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      62188451
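      The overflow is easy to reproduce outside the kernel; a
      standalone demo of the arithmetic above:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
                /* 64 threads * 24 h * 60 m * 60 s * 1000 ticks (HZ=1000) */
                uint64_t rtime = 64ULL * 24 * 60 * 60 * 1000; /* 0x149970000 */
                uint64_t utime = rtime;                       /* 100% user time */

                /* The true product needs ~65 bits, so it wraps in u64: */
                printf("wrapped: %#llx\n",
                       (unsigned long long)(rtime * utime));

                /* 128-bit reference (GCC/Clang extension): */
                unsigned __int128 exact = (unsigned __int128)rtime * utime;
                printf("exact:   %#llx%016llx\n",
                       (unsigned long long)(exact >> 64),
                       (unsigned long long)exact);
                return 0;
        }

      This prints 0xa855771100000000 for the wrapped product and
      0x1a855771100000000 for the exact one, matching the figures in
      the commit message.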
    • context_tracking: Export context state for generic vtime · 95a79fd4
      Authored by Frederic Weisbecker
      Export the context state: whether we run in user or kernel mode,
      from the context tracking subsystem's point of view.
      
      This is going to be used by the generic virtual cputime
      accounting subsystem that is needed to implement full
      dynticks.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      95a79fd4
  9. 25 Jan 2013, 3 commits
    • sched/rt: Avoid updating RT entry timeout twice within one tick period · 57d2aa00
      Authored by Ying Xue
      The issue below was found in the 2.6.34-rt kernel rather than the
      mainline rt kernel, but it still exists upstream as well.
      
      Here is how it was noticed on 2.6.34-rt:
      
      On this version, each softirq has its own thread, which means
      there is at least one RT FIFO task per cpu. The priority of these
      tasks is set to 49 by default. If a user launches an RT FIFO task
      with a priority lower than 49, it's possible that two RT FIFO
      tasks are enqueued on one cpu runqueue at the same moment. Under
      the current strategy of balancing RT tasks, we really want to put
      them off to a CPU that they can run on as soon as possible. Even
      if it means a bit of cache line flushing, we want RT tasks to be
      run with the least latency.
      
      When the user RT FIFO task launched earlier is running, the sched
      timer tick of the current cpu fires. In this tick period, the
      timeout value of the user RT task is updated once. Subsequently,
      we try to wake up one softirq RT task on its local cpu. As the
      priority of the current user RT task is lower than that of the
      softirq RT task, the current task is preempted by the higher
      priority softirq RT task. Before preemption, we check whether
      current can readily move to a different cpu. If so, we reschedule
      to allow the RT push logic to try to move current somewhere else.
      Whenever the woken softirq RT task runs, it first tries to
      migrate the user FIFO RT task over to a cpu that is running a
      task of lesser priority. If the migration is done, it sends a
      reschedule request to the found cpu via IPI. Once the target cpu
      responds to the IPI, it picks the migrated user RT task to
      preempt its current task. When the user RT task is running on the
      new cpu, the sched timer tick of that cpu fires and ticks the
      user RT task again, updating its timeout value once more. Since
      the migration may complete within one tick period, the user RT
      task timeout value can thus be updated twice within one tick.
      
      If we set a limit on the amount of cpu time for the user RT task
      by setrlimit(RLIMIT_RTTIME), the SIGXCPU signal should be posted
      upon reaching the soft limit.
      
      But exactly when the SIGXCPU signal should be sent depends on the
      RT task timeout value. In fact, the timeout mechanism of sending
      the SIGXCPU signal assumes the RT task timeout is increased once
      every tick.
      
      However, currently the timeout value may be bumped twice per
      tick, so the SIGXCPU signal is sent earlier than expected.
      
      To solve this issue, we prevent the timeout value from increasing
      twice within one tick by remembering the jiffies value at which
      the timeout was last updated. Only when the RT task's recorded
      jiffies value differs from the global jiffies value do we allow
      its timeout to be updated (see the sketch after this entry).
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Fan Du <fan.du@windriver.com>
      Reviewed-by: Yong Zhang <yong.zhang0@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1342508623-2887-1-git-send-email-ying.xue@windriver.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      57d2aa00
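      A sketch of the fix along the lines of the patch description:
      stamp the RT entity with the jiffies value of the last timeout
      update (watchdog_stamp as the field name; surrounding code
      abbreviated):

        static void watchdog(struct rq *rq, struct task_struct *p)
        {
                unsigned long soft = task_rlimit(p, RLIMIT_RTTIME);

                if (soft != RLIM_INFINITY) {
                        /* Charge at most once per global tick, even if
                         * the task migrated and was ticked on two CPUs
                         * within the same tick period. */
                        if (p->rt.watchdog_stamp != jiffies) {
                                p->rt.timeout++;
                                p->rt.watchdog_stamp = jiffies;
                        }
                        /* ... SIGXCPU check based on p->rt.timeout ... */
                }
        }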
    • sched/fair: Set se->vruntime directly in place_entity() · 16c8f1c7
      Authored by Viresh Kumar
      We first store the new vruntime in a local variable and then
      store it into se->vruntime. Simply update se->vruntime directly
      (the change is sketched after this entry).
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: linaro-dev@lists.linaro.org
      Cc: patches@linaro.org
      Cc: peterz@infradead.org
      Link: http://lkml.kernel.org/r/ae59db1945518d6f6250920d46eb1f1a9cc0024e.1352361704.git.viresh.kumar@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      16c8f1c7
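      The change amounts roughly to collapsing two statements into one
      at the end of place_entity():

        /* Before: ensure we never gain time by being placed backwards. */
        vruntime = max_vruntime(se->vruntime, vruntime);
        se->vruntime = vruntime;

        /* After: */
        se->vruntime = max_vruntime(se->vruntime, vruntime);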
    • sched/rt: Add reschedule check to switched_from_rt() · 1158ddb5
      Authored by Kirill Tkhai
      Reschedule rq->curr if the first RT task has just been
      pulled to the rq (sketched after this entry).
      Signed-off-by: Kirill V Tkhai <tkhai@yandex.ru>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tkhai Kirill <tkhai@yandex.ru>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/118761353614535@web28f.yandex.ru
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1158ddb5
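      A sketch of the resulting check, following the commit's
      description (guard details are paraphrased):

        static void switched_from_rt(struct rq *rq, struct task_struct *p)
        {
                /* Only worth pulling if p was the last RT task here. */
                if (!p->on_rq || rq->rt.rt_nr_running)
                        return;

                /* If a pull brought in an RT task, the current task
                 * may no longer be the right one to run. */
                if (pull_rt_task(rq))
                        resched_task(rq->curr);
        }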
  10. 24 Jan 2013, 11 commits
  11. 23 Jan 2013, 6 commits
    • ALSA: hda - Fix inconsistent pin states after resume · 31614bb8
      Authored by Takashi Iwai
      The commit [26a6cb6c: ALSA: hda - Implement a poll loop for jacks as a
      module parameter] introduced the polling jack detection code, but it
      also moved the call of snd_hda_jack_set_dirty_all() in the resume path
      to after the resume/init ops call.  This caused a regression when the
      jack state changed during power-down (e.g. in the power save mode).
      Since the driver doesn't probe the new jack state but keeps using the
      cached value due to the missing dirty flag, the pin state also remains
      as if the jack were still plugged in.
      
      The fix is simply moving snd_hda_jack_set_dirty_all() back to its
      original position (ordering sketched after this entry).
      Reported-by: Manolo Díaz <diaz.manolo@gmail.com>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      31614bb8
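      Illustrative ordering in the resume path (heavily simplified;
      hda_codec_resume_ops() is a hypothetical stand-in for the actual
      resume/init op dispatch):

        static void hda_call_codec_resume_model(struct hda_codec *codec)
        {
                /* Mark all jack-detect caches dirty *before* the ops
                 * run, so the first query after resume re-probes the
                 * hardware instead of trusting stale cached state. */
                snd_hda_jack_set_dirty_all(codec);

                hda_codec_resume_ops(codec); /* resume/init ops call */
        }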
    • Revert "drivers/misc/ti-st: remove gpio handling" · a7e2ca17
      Authored by Luciano Coelho
      This reverts commit eccf2979.
      
      The reason is that it broke TI WiLink shared transport on Panda.
      Also, callback functions should not be added to board files anymore,
      so revert to implementing the power functions in the driver itself.
      
      Additionally, a variable was renamed ('status' to 'err') so that
      this revert compiles properly.
      
      Cc: stable <stable@vger.kernel.org> [3.7]
      Acked-by: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Luciano Coelho <coelho@ti.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      a7e2ca17
    • Merge tag '3.8-pci-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci · 1d854908
      Authored by Linus Torvalds
      Pull PCI updates from Bjorn Helgaas:
       "The most important is a fix for a pciehp deadlock that occurs when
        unplugging a Thunderbolt adapter.  We also applied the same fix to
        shpchp, removed CONFIG_EXPERIMENTAL dependencies, fixed a
        pcie_aspm=force problem, and fixed a refcount leak.
      
        Details:
      
         - Hotplug
            PCI: pciehp: Use per-slot workqueues to avoid deadlock
            PCI: shpchp: Make shpchp_wq non-ordered
            PCI: shpchp: Handle push button event asynchronously
            PCI: shpchp: Use per-slot workqueues to avoid deadlock
      
         - Power management
            PCI: Allow pcie_aspm=force even when FADT indicates it is unsupported
      
         - Misc
            PCI/AER: pci_get_domain_bus_and_slot() call missing required pci_dev_put()
            PCI: remove depends on CONFIG_EXPERIMENTAL"
      
      * tag '3.8-pci-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
        PCI: remove depends on CONFIG_EXPERIMENTAL
        PCI: Allow pcie_aspm=force even when FADT indicates it is unsupported
        PCI: shpchp: Use per-slot workqueues to avoid deadlock
        PCI: shpchp: Handle push button event asynchronously
        PCI: shpchp: Make shpchp_wq non-ordered
        PCI/AER: pci_get_domain_bus_and_slot() call missing required pci_dev_put()
        PCI: pciehp: Use per-slot workqueues to avoid deadlock
      1d854908
    • async: fix __lowest_in_progress() · f56c3196
      Authored by Tejun Heo
      Commit 083b804c ("async: use workqueue for worker pool") made it
      possible that async jobs are moved from pending to running out-of-order.
      While pending async jobs will be queued and dispatched for execution in
      the same order, nothing guarantees they'll enter "1) move self to the
      running queue" of async_run_entry_fn() in the same order.
      
      Before the conversion, async implemented its own worker pool.  An async
      worker, upon being woken up, fetched the first item from the pending
      list, which kept the executing lists sorted.  The conversion to
      workqueue was done by adding work_struct to each async_entry and async
      just schedules the work item.  The queueing and dispatching of such work
      items are still in order but now each worker thread is associated with a
      specific async_entry and moves that specific async_entry to the
      executing list.  So, depending on which worker reaches that point
      earlier, which is non-deterministic, we may end up moving an async_entry
      with larger cookie before one with smaller one.
      
      This broke __lowest_in_progress().  running->domain may not be properly
      sorted and is not guaranteed to contain lower cookies than the pending
      list when not empty.  Fix it by sort-inserting into the running list
      and always looking at both pending and running when trying to
      determine the lowest cookie (modeled after this entry).
      
      Over time, the async synchronization implementation became quite messy.
      We better restructure it such that each async_entry is linked to two
      lists - one global and one per domain - and not move it when execution
      starts.  There's no reason to distinguish pending and running.  They
      behave the same for synchronization purposes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f56c3196
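      A simplified model of the fixed lookup, assuming entries are now
      sort-inserted into the running list (names and list layout are
      illustrative, not async.c's exact structure):

        static async_cookie_t lowest_in_progress_model(struct async_domain *d)
        {
                async_cookie_t ret = next_cookie; /* "infinity" */
                struct async_entry *e;

                /* Both lists are cookie-sorted, so the head of each
                 * carries that list's lowest cookie. */
                if (!list_empty(&d->pending)) {
                        e = list_first_entry(&d->pending,
                                             struct async_entry, list);
                        ret = e->cookie;
                }
                if (!list_empty(&d->running)) {
                        e = list_first_entry(&d->running,
                                             struct async_entry, list);
                        if (e->cookie < ret)
                                ret = e->cookie;
                }
                return ret;
        }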
    • Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux · ed06ef31
      Authored by Linus Torvalds
      Pull perf/urgent fixes from Arnaldo Carvalho de Melo:
      
       . revert 20b279 - require exclude_guest to use PEBS - kernel side, now
         older binaries will continue working for things like cycles:pp
         without needing to pass extra modifiers, from David Ahern.
      
       . Fix building from 'make perf-*-src-pkg' tarballs, broken by UAPI,
         from Sebastian Andrzej Siewior
      
      [ Pulling directly, Ingo would normally pull but has been unresponsive ]
      
      * tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
        perf tools: Fix building from 'make perf-*-src-pkg' tarballs
        perf x86: revert 20b279 - require exclude_guest to use PEBS - kernel side
      ed06ef31
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux · 343391b1
      Authored by Linus Torvalds
      Pull parisc fixes from Helge Deller:
       "Improve the stability of the linux kernel on the parisc architecture"
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
        parisc: sigaltstack doesn't round ss.ss_sp as required
        parisc: improve ptrace support for gdb single-step
        parisc: don't claim cpu irqs more than once
        parisc: avoid undefined shift in cnv_float.h
      343391b1