1. 15 Sep 2010 (6 commits)
    • watchdog: Avoid kernel crash when disabling watchdog · d9ca07a0
      Stephane Eranian authored
      If you boot with the watchdog disabled (i.e., nowatchdog) and then
      try to disable it via /proc/sys/kernel/watchdog, you get a kernel
      crash. The reason is that you are trying to cancel an hrtimer which
      has never been initialized.
      
      This patch fixes this by skipping execution of
      watchdog_disable_all_cpus() when the watchdog is marked
      disabled from boot.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4c8f7a23.cae9d80a.2c11.0bb4@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d9ca07a0
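      A minimal userspace model of the guard described in the commit above (illustrative C, not the actual kernel/watchdog.c code): skip the disable path entirely when the watchdog was never armed, so an uninitialized timer is never cancelled.

        #include <stdbool.h>
        #include <stdio.h>

        struct wd_timer {
                void (*cancel)(struct wd_timer *t);   /* NULL until the watchdog is started */
        };

        static bool watchdog_enabled;     /* stays false with a "nowatchdog"-style boot flag */
        static struct wd_timer wd;        /* never initialized when the watchdog is disabled */

        static void watchdog_disable_all_cpus(void)
        {
                if (!watchdog_enabled)    /* the guard: nothing was ever armed, nothing to cancel */
                        return;
                wd.cancel(&wd);           /* without the guard this would call a NULL pointer */
        }

        int main(void)
        {
                watchdog_disable_all_cpus();          /* models writing 0 to the proc file */
                printf("disable skipped, no crash\n");
                return 0;
        }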
    • tracing: Remove leftover FTRACE_ENABLE/DISABLE_MCOUNT enums · 79e406d7
      Steven Rostedt authored
      The enums for FTRACE_ENABLE_MCOUNT and FTRACE_DISABLE_MCOUNT were
      used as commands to ftrace_run_update_code(). But these commands
      were used by the old nasty ftrace daemon that has long been slain.
      
      This is a clean up patch to remove the references to these enums
      and simplify the code a little.
      Reported-by: Wu Zhangjin <wuzhangjin@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      79e406d7
    • tracing: Do not trace in irq when funcgraph-irq option is zero · b304d044
      Steven Rostedt authored
      When the function graph tracer funcgraph-irq option is zero, disable
      tracing in IRQs. This makes the option have two effects.
      
      1) When reading the trace file, do not display the functions that
         happen in interrupt context (when detected)
      
      2) [*new*] When recording a trace, skip those that are detected
         to be in interrupt by the 'in_irq()' function
      
      Note: in_irq() is updated at irq_enter() and irq_exit(), so the
      function graph tracer still records some functions that run in
      interrupt context but outside the irq_enter()/irq_exit() window.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b304d044
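      A sketch of the new recording-time check in plain C (the option flag and in_irq() are modelled here; the real tracer hooks differ):

        #include <stdbool.h>
        #include <stdio.h>

        static bool funcgraph_irq;        /* the funcgraph-irq option, here set to zero */
        static bool hardirq_active;       /* stand-in for what in_irq() reports */

        static bool in_irq(void) { return hardirq_active; }

        /* Decide at trace time whether a function entry should be recorded. */
        static bool graph_entry_allowed(void)
        {
                if (!funcgraph_irq && in_irq())
                        return false;     /* skip functions detected to run in hard-irq context */
                return true;
        }

        int main(void)
        {
                hardirq_active = true;
                printf("record in irq: %s\n", graph_entry_allowed() ? "yes" : "no");
                hardirq_active = false;
                printf("record outside irq: %s\n", graph_entry_allowed() ? "yes" : "no");
                return 0;
        }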
    • tracing: Add funcgraph-irq option for function graph tracer. · 2bd16212
      Jiri Olsa authored
      It's handy to be able to disable the irq-related output so you
      don't have to skip over irq-related code when you have no
      interest in it.
      
      The option is by default enabled, so there's no change to
      current behaviour. It affects only the final output, so all
      the irq related data stay in the ring buffer.
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      LKML-Reference: <20100907145344.GC1912@jolsa.brq.redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2bd16212
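      Assuming the option is exposed as a tracefs file under the usual debugfs mount (the path below is an assumption, and the file only exists while the function_graph tracer is active), it can be flipped with a short write, e.g. from a small C program:

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                /* assumed location of the per-tracer option file */
                const char *opt = "/sys/kernel/debug/tracing/options/funcgraph-irq";
                int fd = open(opt, O_WRONLY);

                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                /* "0" hides irq output, "1" restores the default behaviour */
                if (write(fd, "0", 1) != 1)
                        perror("write");
                close(fd);
                return 0;
        }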
    • tracing: Fix reading of set_ftrace_filter across lists · 57c072c7
      Steven Rostedt authored
      If we do:
      
       # cd /sys/kernel/debug
       # echo 'do_IRQ:traceon schedule:traceon sys_write:traceon' > \
          set_ftrace_filter
       # cat set_ftrace_filter
      
      We get the following output:
      
       #### all functions enabled ####
       sys_write:traceon:unlimited
       schedule:traceon:unlimited
       do_IRQ:traceon:unlimited
      
      This outputs two lists: one states that all functions are
      currently enabled for function tracing, and the other holds three
      probed functions, which happen to have 'traceon' as their commands.
      
      Currently, when reading the first list (functions enabled), the
      seq_file code receives a NULL from t_next(), causing it to exit
      early. This makes read() from userspace stop at this boundary.
      Although read() is allowed to do this, some (broken) applications
      might treat it as an end of file and stop early.
      
      This patch adds the start of the second list to t_next() when it
      finishes the first list. It is a simple change and gives the
      set_ftrace_filter file nicer reading ability.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      57c072c7
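      A reader that copes with either behaviour (illustrative userspace C): it treats only a zero return from read() as end of file, so a short read at the boundary between the two lists does not cut the output.

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                int fd = open("/sys/kernel/debug/tracing/set_ftrace_filter", O_RDONLY);
                char buf[4096];
                ssize_t n;

                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                /* read() may legally return less than requested; only 0 means EOF */
                while ((n = read(fd, buf, sizeof(buf))) > 0)
                        fwrite(buf, 1, (size_t)n, stdout);
                if (n < 0)
                        perror("read");
                close(fd);
                return 0;
        }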
    • tracing: Keep track of set_ftrace_filter position and allow lseek again · 98c4fd04
      Steven Rostedt authored
      This patch keeps track of the index within the elements of
      set_ftrace_filter and if the position goes backwards, it nicely
      resets and starts from the beginning again.
      
      This allows for lseek and pread to work properly now.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      98c4fd04
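      With the position tracked, reading at an explicit offset works as expected; a minimal pread() check (illustrative userspace C):

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                int fd = open("/sys/kernel/debug/tracing/set_ftrace_filter", O_RDONLY);
                char buf[256];
                ssize_t n;

                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                /* re-reading from offset 0 now restarts the listing from the beginning */
                n = pread(fd, buf, sizeof(buf) - 1, 0);
                if (n > 0) {
                        buf[n] = '\0';
                        printf("first bytes: %s", buf);
                }
                close(fd);
                return 0;
        }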
  2. 14 Sep 2010 (3 commits)
  3. 13 Sep 2010 (2 commits)
  4. 12 Sep 2010 (1 commit)
    • PM / Hibernate: Avoid hitting OOM during preallocation of memory · 6715045d
      Rafael J. Wysocki authored
      The problem in hibernate_preallocate_memory() is that it may call
      preallocate_image_memory() with an argument greater than the total
      number of available non-highmem memory pages.  If that happens, the
      OOM condition is guaranteed to trigger, which in turn can cause a
      significant slowdown during hibernation.
      
      To avoid that, make preallocate_image_memory() adjust its argument
      before calling preallocate_image_pages(), so that the total number of
      saveable non-highmem pages left is not less than the minimum size of
      a hibernation image.  Change hibernate_preallocate_memory() to try to
      allocate from highmem if the number of pages allocated by
      preallocate_image_memory() is too low.
      
      Modify free_unnecessary_pages() to take all possible memory
      allocation patterns into account.
      Reported-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Tested-by: M. Vefa Bicakci <bicave@superonline.com>
      6715045d
  5. 11 Sep 2010 (1 commit)
  6. 10 Sep 2010 (27 commits)
    • perf: Fix perf_init_event() · e5f4d339
      Peter Zijlstra authored
      We ought to return -ENOENT when none of the registered PMUs
      recognises the requested event.
      
      This fixes a boot crash that occurs if no PMU is available
      but the NMI watchdog tries to register an event.
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e5f4d339
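      A simplified model of the lookup described above (illustrative C, not the kernel implementation): walk the registered PMUs and fall back to -ENOENT when none of them accepts the event.

        #include <errno.h>
        #include <stdio.h>

        struct pmu {
                int (*event_init)(int type);   /* returns 0 if this PMU handles the event */
        };

        static int cpu_pmu_init(int type) { return type == 0 ? 0 : -ENOENT; }

        static struct pmu pmus[] = { { cpu_pmu_init } };

        static int perf_init_event_model(int type)
        {
                for (unsigned int i = 0; i < sizeof(pmus) / sizeof(pmus[0]); i++)
                        if (pmus[i].event_init(type) == 0)
                                return 0;
                return -ENOENT;                /* no PMU recognised the event */
        }

        int main(void)
        {
                printf("known event: %d\n", perf_init_event_model(0));
                printf("unknown event: %d\n", perf_init_event_model(7));  /* -ENOENT, no crash */
                return 0;
        }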
    • perf: Ensure we call add_event_to_ctx() with the right locks held · cee010ec
      Peter Zijlstra authored
      Even though we call it from the inherit path, where the child is
      not yet accessible, we still need to hold ctx->lock:
      add_event_to_ctx() assumes IRQs are disabled.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cee010ec
    • tracing: t_start: reset FTRACE_ITER_HASH in case of seek/pread · df091625
      Chris Wright authored
      Be sure to avoid entering t_show() with FTRACE_ITER_HASH set without
      having properly started the iterator to iterate the hash.  This case is
      degenerate and, as discovered by Robert Swiecki, can cause t_hash_show()
      to misuse a pointer.  This causes a NULL ptr deref with possible security
      implications.  Tracked as CVE-2010-3079.
      
      Cc: Robert Swiecki <swiecki@google.com>
      Cc: Eugene Teo <eugene@redhat.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      df091625
    • swap: revert special hibernation allocation · 910321ea
      Hugh Dickins authored
      Please revert 2.6.36-rc commit d2997b10
      "hibernation: freeze swap at hibernation".  It complicated matters by
      adding a second swap allocation path, just for hibernation; without in any
      way fixing the issue that it was intended to address - page reclaim after
      fixing the hibernation image might free swap from a page already imaged as
      swapcache, letting its swap be reallocated to store a different page of
      the image: resulting in data corruption if the imaged page were freed as
      clean then swapped back in.  Pages freed to si->swap_map were still in
      danger of being reallocated by the alternative allocation path.
      
      I guess it inadvertently fixed slow SSD swap allocation for hibernation,
      as reported by Nigel Cunningham: by missing out the discards that occur on
      the usual swap allocation path; but that was unintentional, and needs a
      separate fix.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Ondrej Zary <linux@rainbow-software.org>
      Cc: Andrea Gelmini <andrea.gelmini@gmail.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Nigel Cunningham <nigel@tuxonice.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      910321ea
    • kernel/groups.c: fix integer overflow in groups_search · 1c24de60
      Jerome Marchand authored
      gid_t is an unsigned int.  If group_info contains a gid greater than
      INT_MAX, the groups_search() function may look on the wrong side of
      the search tree.
      
      This solves some unfair "permission denied" problems.
      Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1c24de60
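      The failure mode is easy to reproduce in userspace (illustrative C, not the kernel source): computing the comparison as a signed difference of two unsigned gids can flip its sign for gids above INT_MAX, sending a binary search down the wrong branch, while a direct comparison stays correct.

        #include <limits.h>
        #include <stdio.h>

        typedef unsigned int my_gid_t;         /* stand-in for gid_t */

        /* buggy pattern: the unsigned difference truncated into an int may change sign */
        static int cmp_by_subtraction(my_gid_t a, my_gid_t b)
        {
                return (int)(a - b);
        }

        /* safe pattern: compare the values directly */
        static int cmp_direct(my_gid_t a, my_gid_t b)
        {
                return (a > b) - (a < b);
        }

        int main(void)
        {
                my_gid_t grp = 3000000000u;    /* a gid greater than INT_MAX */
                my_gid_t mid = 5u;

                /* typically prints a negative value, i.e. "look left" although grp > mid */
                printf("subtraction: %d\n", cmp_by_subtraction(grp, mid));
                printf("direct:      %d\n", cmp_direct(grp, mid));
                return 0;
        }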
    • cgroups: fix API thinko · 31583bb0
      Michael S. Tsirkin authored
      Add cgroup_attach_task_all()
      
      The existing cgroup_attach_task_current_cg() API is called by a thread to
      attach another thread to all of its cgroups; this is unsuitable for cases
      where a privileged task wants to attach itself to the cgroups of a less
      privileged one, since the call must be made from the context of the target
      task.
      
      This patch adds a more generic cgroup_attach_task_all() API that allows
      both the source task and to-be-moved task to be specified.
      cgroup_attach_task_current_cg() becomes a specialization of the more
      generic new function.
      
      [menage@google.com: rewrote changelog]
      [akpm@linux-foundation.org: address reviewer comments]
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Tested-by: Alex Williamson <alex.williamson@redhat.com>
      Acked-by: Paul Menage <menage@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Ben Blum <bblum@google.com>
      Cc: Sridhar Samudrala <sri@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      31583bb0
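      The relationship between the two calls, as described in the changelog, can be sketched like this (stand-in types and a userspace stub for "current"; the signature is taken from the changelog, not verified against the headers):

        #include <stdio.h>

        struct task_struct { const char *comm; };             /* simplified stand-in */

        static struct task_struct init_task = { "caller" };
        static struct task_struct *current_task = &init_task; /* stand-in for "current" */

        /* generic form: attach tsk to every cgroup that "from" belongs to */
        static int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk)
        {
                printf("attach %s to all cgroups of %s\n", tsk->comm, from->comm);
                return 0;
        }

        /* the old API becomes a specialisation where "from" is always the caller */
        static int cgroup_attach_task_current_cg(struct task_struct *tsk)
        {
                return cgroup_attach_task_all(current_task, tsk);
        }

        int main(void)
        {
                struct task_struct worker = { "worker" };
                return cgroup_attach_task_current_cg(&worker);
        }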
    • gcov: fix null-pointer dereference for certain module types · 85a0fdfd
      Peter Oberparleiter authored
      The gcov-kernel infrastructure expects that each object file is loaded
      only once.  This may not be true, e.g.  when loading multiple kernel
      modules which are linked to the same object file.  As a result, loading
      such kernel modules will result in incorrect gcov results while unloading
      will cause a null-pointer dereference.
      
      This patch fixes these problems by changing the gcov-kernel infrastructure
      so that multiple profiling data sets can be associated with one debugfs
      entry.  It applies to 2.6.36-rc1.
      Signed-off-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
      Reported-by: Werner Spies <werner.spies@thalesgroup.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      85a0fdfd
    • perf: Fix up delayed_put_task_struct() · 4e231c79
      Peter Zijlstra authored
      I missed a perf_event_ctxp user when converting it to an array. Pull this
      last user into perf_event.c as well and fix it up.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4e231c79
    • perf: Optimize context ops · 1b9a644f
      Peter Zijlstra authored
      Assuming we don't mix events of different pmus onto a single context
      (with the exception of software events inside a hardware group), we
      can now assume that all events on a particular context belong to the
      same pmu, hence we can disable that pmu for entire context operations.
      
      This reduces the amount of hardware writes.
      
      The exception for swevents comes from the fact that the sw pmu disable
      is a nop.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1b9a644f
    • perf: Provide a separate task context for swevents · 89a1e187
      Peter Zijlstra authored
      Since software events are always schedulable, mixing them up with
      hardware events (which are not) can lead to funny scheduling oddities.
      
      Giving them their own context solves this.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      89a1e187
    • perf: Multiple task contexts · 8dc85d54
      Peter Zijlstra authored
      Provide the infrastructure for multiple task contexts.
      
      A more flexible approach would have resulted in more pointer chases
      in the scheduling hot-paths. This approach has the limitation of a
      static number of task contexts.
      
      Since I expect most external PMUs to be system wide, or at least
      node wide (as per the Intel uncore unit), they won't actually need
      a task context.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8dc85d54
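      A sketch of the static layout this implies, built only from the changelog (the field and enum names below are assumptions, not verified against the tree): each task carries a small fixed array of context pointers, one slot per context type, instead of a single ctx pointer.

        /* illustrative only; not the kernel headers */
        enum perf_event_task_context {
                perf_hw_context,          /* hardware PMU events */
                perf_sw_context,          /* software events */
                perf_nr_task_contexts     /* the static number of per-task contexts */
        };

        struct perf_event_context;        /* opaque here */

        struct task_struct_model {
                /* a fixed array keeps the scheduler hot paths free of extra pointer chasing */
                struct perf_event_context *perf_event_ctxp[perf_nr_task_contexts];
        };

        int main(void) { return 0; }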
    • perf: Clean up perf_event_context allocation · eb184479
      Peter Zijlstra authored
      Unify the two perf_event_context allocation sites.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      eb184479
    • perf: Move some code around · 97dee4f3
      Peter Zijlstra authored
      Move all inherit code near each other.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      97dee4f3
    • perf: Per-pmu-per-cpu contexts · 108b02cf
      Peter Zijlstra authored
      Allocate per-cpu contexts per pmu.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      108b02cf
    • perf: Per cpu-context rotation timer · b5ab4cd5
      Peter Zijlstra authored
      Give each cpu-context its own timer so that it is a self-contained
      entity; this eases the way for per-pmu-per-cpu contexts and provides
      the basic infrastructure to allow different rotation times per pmu.
      
      Things to look at:
       - folding the tick and these TICK_NSEC timers
       - separate task context rotation
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b5ab4cd5
    • perf: Remove the swevent hash-table from the cpu context · b28ab83c
      Peter Zijlstra authored
      Separate the swevent hash-table from the cpu_context bits in
      preparation for per pmu cpu contexts.
      
      This keeps the swevent hash a global entity.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b28ab83c
    • perf: Separate find_get_context() from event initialization · c3f00c70
      Peter Zijlstra authored
      Separate find_get_context() from the event allocation and
      initialization so that we may make find_get_context() depend
      on the event pmu in a later patch.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c3f00c70
    • perf: Remove the sysfs bits · 15ac9a39
      Peter Zijlstra authored
      Neither the overcommit nor the reservation sysfs parameter was
      actually working; remove them, as they'll only get in the way.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      15ac9a39
    • perf: Rework the PMU methods · a4eaf7f1
      Peter Zijlstra authored
      Replace pmu::{enable,disable,start,stop,unthrottle} with
      pmu::{add,del,start,stop}, all of which take a flags argument.
      
      The new interface extends the capability to stop a counter while
      keeping it scheduled on the PMU. We replace the throttled state with
      the generic stopped state.
      
      This also allows us to efficiently stop/start counters over certain
      code paths (like IRQ handlers).
      
      It also allows scheduling a counter without it starting, allowing for
      a generic frozen state (useful for rotating stopped counters).
      
      The stopped state is implemented in two different ways, depending on
      how the architecture implemented the throttled state:
      
       1) We disable the counter:
          a) the pmu has per-counter enable bits, we flip that
          b) we program a NOP event, preserving the counter state
      
       2) We store the counter state and ignore all read/overflow events
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a4eaf7f1
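      The reshaped method set can be sketched as follows (an outline of the interface shape only; the flag names are assumptions based on the semantics described above, not copied from the header):

        /* illustrative sketch, not the kernel header */
        struct perf_event;

        #define PERF_EF_START   0x01      /* add: start the counter immediately */
        #define PERF_EF_RELOAD  0x02      /* start: reload the sample period first */
        #define PERF_EF_UPDATE  0x04      /* stop: fold the current count into the event */

        struct pmu_model {
                /* schedule the event on the PMU; it stays stopped unless PERF_EF_START */
                int  (*add)(struct perf_event *event, int flags);
                /* unschedule the event from the PMU */
                void (*del)(struct perf_event *event, int flags);
                /* start/stop a counter while it remains scheduled on the PMU */
                void (*start)(struct perf_event *event, int flags);
                void (*stop)(struct perf_event *event, int flags);
        };

        int main(void) { return 0; }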
    • perf: Shrink hw_perf_event · fa407f35
      Peter Zijlstra authored
      Use hw_perf_event::period_left instead of hw_perf_event::remaining
      and win back 8 bytes.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fa407f35
    • perf: Default PMU ops · ad5133b7
      Peter Zijlstra authored
      Provide default implementations for the pmu txn methods; this
      allows us to remove some conditional code.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ad5133b7
    • perf: Per PMU disable · 33696fc0
      Peter Zijlstra authored
      Changes perf_disable() into perf_pmu_disable().
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      33696fc0
    • perf: Reduce perf_disable() usage · 24cd7f54
      Peter Zijlstra authored
      Since the current perf_disable() usage is only an optimization,
      remove it for now. This eases the removal of the __weak
      hw_perf_enable() interface.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      24cd7f54
    • perf: Unindent labels · 9ed6060d
      Peter Zijlstra authored
      Fixup random annoying style bits.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9ed6060d
    • perf: Register PMU implementations · b0a873eb
      Peter Zijlstra authored
      Simple registration interface for struct pmu; this provides the
      infrastructure for removing all the weak functions.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b0a873eb
    • perf: Deconstify struct pmu · 51b0fe39
      Peter Zijlstra authored
      sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      51b0fe39
    • sched: Move sched_avg_update() to update_cpu_load() · da2b71ed
      Suresh Siddha authored
      Currently sched_avg_update() (which updates the rt_avg stats in the
      rq) is called from scale_rt_power() (in the load balance context),
      which doesn't take rq->lock.
      
      Fix it by moving the sched_avg_update() to more appropriate
      update_cpu_load() where the CFS load gets updated as well.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1282596171.2694.3.camel@sbsiddha-MOBL3>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      da2b71ed