- 21 December 2011, 7 commits
-
-
Committed by Steven Rostedt
Change set_ftrace_early_filter() to ftrace_set_early_filter() and make it a global function. This allows other subsystems in the kernel to enable function tracing at startup and reuse the ftrace function-parsing code. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt
The set_ftrace_filter file shows "hashed" functions, which are functions that have operations attached to them (like traceon and traceoff). As other subsystems may be able to show which functions they are using for function tracing, the hash items should no longer be shown just because the FILTER flag is set; they have nothing to do with other subsystems' filters. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt
The function tracer is set up to allow any other subsystem (like perf) to use it. Ftrace already has a way to list what functions are enabled by the global_ops. It would be very helpful to let other users of the function tracer use the same code. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt
As new functions come in to be initialized from mcount to nop, they are handled in groups of pages, whether they belong to the core kernel or to a module. There's no need to keep track of them on a per-record basis. At startup, and as any module is loaded, the functions to be traced are stored in a group of pages and added to the end of the function list. We just need a pointer to the first page of the group that was added, and use that to know where to start when initializing functions. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
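A rough sketch of the per-group bookkeeping this describes (field names follow the ftrace code of this era; treat the details as illustrative):

    /* One node per group of pages holding dyn_ftrace records. */
    struct ftrace_page {
            struct ftrace_page      *next;          /* next group in the list */
            struct dyn_ftrace       *records;       /* the records themselves */
            int                     index;          /* records used so far */
            int                     size;           /* records this group can hold */
    };

Initialization then only has to remember the first ftrace_page added for the new module (or the core kernel) and walk forward from there.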
-
Committed by Steven Rostedt
Records that are added to the function trace table are there permanently, except for modules. By separating the modules out onto their own pages, which can be freed in one shot, we can remove the "freed" flag and simplify some of the record management. Another benefit is that we can also move the records around and sort them. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt
The stop_machine method of modifying every function in the kernel (some 20,000 of them) is the safest way to do so across all archs. But some archs may not need this big-hammer approach to modify code on SMP machines and can simply update the code they need. Adding a weak function arch_ftrace_update_code() that performs the stop_machine call by default lets any arch override this method. If the arch needs to check the system first and then decide whether it can avoid stop_machine, it can still call ftrace_run_stop_machine() to use the old method. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
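A minimal sketch of the weak hook described above, using the names from this message:

    /* Default: take the big hammer. Archs that can patch code without
     * stopping the machine simply override this weak symbol. */
    void __weak arch_ftrace_update_code(int command)
    {
            ftrace_run_stop_machine(command);
    }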
-
Committed by Steven Rostedt
When gcc inlines a function, it does not add the mcount prologue, which in turn means that inlined functions are not traced by the function tracer. But if CONFIG_OPTIMIZE_INLINING is set, gcc is allowed not to inline a function that is marked inline. Depending on the options and the compiler, a function may or may not be traced by the function tracer, depending on whether gcc decides to inline it or not. This has caused several problems in the past because gcc is not always consistent about what it inlines between different gcc versions. Some places should not be traced (like the paravirt native_* functions), and these are mostly marked inline. When gcc decides not to inline such a function, and that function should not be traced, the ftrace function tracer will suddenly break where it used to work fine. This becomes even harder to debug when different gcc versions make different inlining decisions, so the same kernel and config work for some gcc versions and not for others. Making all functions marked inline untraceable removes the ambiguity gcc adds: all gcc versions will be consistent about which functions are traced, and the erratic behavior goes away. Note, only the inline macro used when CONFIG_OPTIMIZE_INLINING is set needs notrace added; the __always_inline attribute forces the function to be inlined, and it is then not traced anyway. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
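A sketch of what the compiler-header change amounts to (the exact config guards are illustrative):

    /* When gcc may ignore the inline hint, make sure any un-inlined
     * copy is never fed to the function tracer. */
    #if !defined(CONFIG_OPTIMIZE_INLINING)
    # define inline inline __attribute__((always_inline)) notrace
    #else
    # define inline inline notrace
    #endif

__always_inline needs no notrace of its own, because a body that is truly inlined never gets an mcount call site.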
-
- 07 December 2011, 1 commit
-
-
Committed by Peter Zijlstra
Provide two initializers for jump_label_key that initialize it enabled or disabled. Also modify all jump_label code to allow jump_labels to be initialized enabled. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Jason Baron <jbaron@redhat.com> Link: http://lkml.kernel.org/n/tip-p40e3yj21b68y03z1yv825e7@git.kernel.org Signed-off-by: Ingo Molnar <mingo@elte.hu>
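A hedged usage sketch; the initializer name and the hot-path function are assumptions based on the description above, and my_feature_key/do_extra_work() are hypothetical:

    /* A key that starts life enabled, with no runtime increment needed. */
    struct jump_label_key my_feature_key = JUMP_LABEL_INIT_TRUE;

    static void hot_path(void)
    {
            if (static_branch(&my_feature_key))     /* patched jump once enabled */
                    do_extra_work();
    }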
-
- 06 December 2011, 4 commits
-
-
Committed by Gleb Natapov
jump_label patching is a very expensive operation that involves pausing all cpus. The patching of the perf_sched_events jump_label is easily triggered from userspace by an unprivileged user. When the user runs a loop like this: "while true; do perf stat -e cycles true; done" ... the performance of my test application, which just increments a counter for one second, drops by 4%. This is on a 16-cpu box with my test application using only one of them. The impact on a real server doing real work will be worse. Performance of the KVM PMU drops nearly 50% due to jump_label patching during "perf record", since the KVM PMU implementation creates and destroys perf events frequently. This patch introduces a way to rate-limit jump_label patching and uses it to fix the above problem. I believe that as jump_label use spreads, the problem will become more common, and thus solving it in generic code is appropriate. Fixing it in the perf code would also mean moving the jump_label accounting logic into perf, with all the ifdefs for the JUMP_LABEL=n case. With this patch all details are nicely hidden inside the jump_label code. Signed-off-by: Gleb Natapov <gleb@redhat.com> Acked-by: Jason Baron <jbaron@redhat.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20111127155909.GO2557@redhat.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
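A sketch of the rate-limited API this introduces (names as I understand them from the patch; treat as illustrative):

    struct jump_label_key_deferred perf_sched_events;

    /* At init time: coalesce disables so patching happens at most once
     * per rate-limit period. */
    jump_label_rate_limit(&perf_sched_events, HZ);

    jump_label_inc(&perf_sched_events.key);        /* enabling stays immediate */
    jump_label_dec_deferred(&perf_sched_events);   /* disabling is delayed and batched */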
-
Committed by Robert Richter
This patch introduces helper functions for the x86 perf scheduler code. We need this to later add more complex functionality supporting overlapping counter constraints (next patch). The algorithm is modified so that the range of weight values is now generated from the constraints. There shouldn't be other functional changes. The scheduler is controlled through the helper functions: there are functions to initialize it, traverse the event list, find unused counters, etc. The scheduler keeps its own state. V3: * Added macro for_each_set_bit_cont(). * Changed the interfaces of perf_sched_find_counter() and perf_sched_next_event() to return bool. * Added some comments to make the code easier to understand. V4: * Fix broken event assignment when the weight of the first event is not wmin (perf_sched_init()). Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/1321616122-1533-2-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
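The V3 helper macro mentioned above, roughly as it appears in the patch:

    /* Continue a bitmap walk from the bit after the current one. */
    #define for_each_set_bit_cont(bit, addr, size)                       \
            for ((bit) = find_next_bit((addr), (size), (bit) + 1);       \
                 (bit) < (size);                                         \
                 (bit) = find_next_bit((addr), (size), (bit) + 1))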
-
Committed by Peter Zijlstra
Gleb writes: > Currently the pmu is disabled and re-enabled on each timer interrupt, even > when no rotation or frequency adjustment is needed. On an Intel CPU this > results in two writes to the PERF_GLOBAL_CTRL MSR per tick. On bare metal > it does not cause significant slowdown, but when running perf in a virtual > machine it leads to a 20% slowdown on my machine. Cure this by keeping a perf_event_context::nr_freq counter that counts the number of active events that require frequency adjustments, and use it in a similar fashion to the existing nr_events != nr_active test in perf_rotate_context(). By being able to exclude both rotation and frequency adjustment a priori for the common case, we can avoid the otherwise superfluous PMU disable. Suggested-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/n/tip-515yhoatehd3gza7we9fapaa@git.kernel.org Signed-off-by: Ingo Molnar <mingo@elte.hu>
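A condensed sketch of the resulting fast path in perf_rotate_context() (details illustrative, not the verbatim patch):

    int rotate = (cpuctx->ctx.nr_events != cpuctx->ctx.nr_active);

    if (!rotate && !cpuctx->ctx.nr_freq)
            return;         /* common case: no PERF_GLOBAL_CTRL writes at all */

    perf_pmu_disable(cpuctx->ctx.pmu);
    /* ... frequency adjustment and/or event rotation ... */
    perf_pmu_enable(cpuctx->ctx.pmu);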
-
Committed by Li Zefan
Though not all events have a 'prev_pid' field, it used to be possible to do this: # echo 'prev_pid == 100' > events/sched/filter but commit 75b8e982 (tracing/filter: Swap entire filter of events) broke it for no reason. Link: http://lkml.kernel.org/r/4EAF46CF.8040408@cn.fujitsu.com Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 05 December 2011, 1 commit
-
-
Committed by Peter Zijlstra
When you do: $ perf record -e cycles,cycles,cycles noploop 10 you expect about 10,000 samples for each event, i.e., 10s at 1000 samples/sec. However, this is not what happens; you get far fewer samples, maybe 3700 samples/event:

    $ perf report -D | tail -15
    Aggregated stats:
           TOTAL events:      10998
            MMAP events:         66
            COMM events:          2
          SAMPLE events:      10930
    cycles stats:
           TOTAL events:       3644
          SAMPLE events:       3644
    cycles stats:
           TOTAL events:       3642
          SAMPLE events:       3642
    cycles stats:
           TOTAL events:       3644
          SAMPLE events:       3644

On an Intel Nehalem or even AMD64, there are 4 counters capable of measuring cycles, so there is plenty of room to measure those events without multiplexing (even with the NMI watchdog active). And even with multiplexing, we'd expect roughly the same number of samples per event. The root of the problem was that when the event that caused the buffer to become full was not the first event passed on the cmdline, the user notification would get lost. The notification was sent to the file descriptor of the overflowed event, but the perf tool was not polling on it. The perf tool aggregates all samples into a single buffer, i.e., the buffer of the first event, and consequently assumes notifications for any event will come via that descriptor. The seemingly straightforward solution of moving the waitq into the ringbuffer object doesn't work because of lifetime issues: one could perf_event_set_output() on an fd that you're also blocking on and cause the old rb object to be freed while its waitq is still referenced by the blocked thread -> FAIL. Therefore, link all events to the ringbuffer and broadcast the wakeup from the ringbuffer object to all possible events that could be waited upon. This is rather ugly, and we're open to better solutions, but it works for now. Reported-by: Stephane Eranian <eranian@google.com> Finished-by: Stephane Eranian <eranian@google.com> Reviewed-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20111126014731.GA7030@quad Signed-off-by: Ingo Molnar <mingo@elte.hu>
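A sketch of the broadcast wakeup described above; the shape follows the commit's description, so treat field names as illustrative:

    static void ring_buffer_wakeup(struct perf_event *event)
    {
            struct ring_buffer *rb;

            rcu_read_lock();
            rb = rcu_dereference(event->rb);
            if (rb) {
                    /* every event redirected into this rb sits on rb->event_list */
                    list_for_each_entry_rcu(event, &rb->event_list, rb_entry)
                            wake_up_all(&event->waitq);
            }
            rcu_read_unlock();
    }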
-
- 11 November 2011, 6 commits
-
-
Committed by Inki Dae
Signed-off-by: Inki Dae <inki.dae@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Marcin Slusarz
Nouveau, when configured with debugfs, creates debugfs files for every channel, so the structure holding the list of files needs to be protected from simultaneous changes by multiple threads. Without this patch it's possible to hit a kernel oops in drm_debugfs_remove_files just by running a couple of xterms with glxinfo in a loop. Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
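The fix boils down to serializing mutation of the file list, roughly like this (the lock and field names here are hypothetical, chosen only to illustrate the pattern):

    /* Hypothetical names: debugfs_lock, debugfs_list, info_ent, dent. */
    mutex_lock(&minor->debugfs_lock);
    list_for_each_entry_safe(tmp, pos, &minor->debugfs_list, list) {
            if (tmp->info_ent == ent) {
                    debugfs_remove(tmp->dent);
                    list_del(&tmp->list);
                    kfree(tmp);
            }
    }
    mutex_unlock(&minor->debugfs_lock);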
-
Committed by Kuninori Morimoto
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Kuninori Morimoto
This patch moves the PORT_xx helper macros to sh_pfc.h; it expects a CPU_ALL_PORT() macro from each CPU. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Kuninori Morimoto
This patch moves the PORT_DATA_xx helper macros to sh_pfc.h and updates pfc-sh7372.c to use them. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt
Now that all of the named string associations with clocks have been migrated to clkdev lookups, there's no meaningful named topology to construct for a debugfs tree view. Get rid of the leftover bits, and shrink struct clk a bit in the process. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 09 November 2011, 1 commit
-
-
Committed by Keng-Yu Lin
Signed-off-by: Keng-Yu Lin <kengyu@canonical.com> Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
-
- 08 November 2011, 2 commits
-
-
Committed by Axel Lin
Fix the build warning below:

      CC      arch/arm/mach-omap2/hwspinlock.o
    In file included from arch/arm/mach-omap2/hwspinlock.c:22:
    include/linux/hwspinlock.h: In function '__hwspin_unlock':
    include/linux/hwspinlock.h:121: warning: 'return' with a value, in function returning void

Signed-off-by: Axel Lin <axel.lin@gmail.com> Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
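The warning points at a 'return' carrying a value inside a void function; the fix is the one-liner sketched here:

    static inline void __hwspin_unlock(struct hwspinlock *hwlock,
                                       int mode, unsigned long *flags)
    {
            /* before: return 0;   -- invalid in a function returning void */
            return;                /* after: bare return (or drop it entirely) */
    }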
-
Committed by Jonathan Corbet
The "private_date" field in struct devfreq_dev_status almost certainly wants to be "private_data"; since there are no in-tree users of this functionality, now seems like an easy time to make the fix. Signed-off-by: Jonathan Corbet <corbet@lwn.net> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 07 November 2011, 7 commits
-
-
Committed by Deepthi Dharwar
This patch makes the cpuidle_states structure global (a single copy) instead of per-cpu. The statistics the governor needs on a per-cpu basis are kept per-cpu. This simplifies the cpuidle subsystem, as state registration is done by a single cpu only. Having a single copy of cpuidle_states saves memory. The rare case of asymmetric C-states can be handled within the cpuidle driver, and architectures such as POWER do not have asymmetric C-states. With single/global registration of all the idle states, dynamic C-state transitions on x86 are handled by the boot cpu: the boot cpu disables all the devices, re-populates the states and later enables all the devices, irrespective of which cpu receives the notification first. Reference: https://lkml.org/lkml/2011/4/25/83 Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com> Tested-by: Jean Pihet <j-pihet@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Acked-by: Arjan van de Ven <arjan@linux.intel.com> Acked-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Deepthi Dharwar
This is the first step towards global registration of cpuidle states. The statistics used primarily by the governor are per-cpu and have to be split from the rest of the fields inside cpuidle_state, which will be made global, i.e. a single copy. The driver_data field is also per-cpu and is moved as well. Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com> Tested-by: Jean Pihet <j-pihet@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Acked-by: Arjan van de Ven <arjan@linux.intel.com> Acked-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Len Brown <len.brown@intel.com>
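A sketch of the resulting split; the struct shape follows the commit's description, so treat the exact fields as illustrative:

    /* Per-cpu bookkeeping, split out of the now-global cpuidle_state. */
    struct cpuidle_state_usage {
            void                    *driver_data;   /* moved here: per-cpu */
            unsigned long long      usage;          /* times this state was entered */
            unsigned long long      time;           /* total residency, in us */
    };

    struct cpuidle_device {
            /* ... */
            struct cpuidle_state_usage states_usage[CPUIDLE_STATE_MAX];
    };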
-
Committed by Deepthi Dharwar
The cpuidle_device->prepare() mechanism causes updates to the cpuidle_state[].flags, setting and clearing CPUIDLE_FLAG_IGNORE to tell the governor not to choose a state on a per-cpu basis at run-time. State demotion is now handled by the driver, which returns the state actually entered, so this mechanism is no longer required. Removing it also removes per-cpu flags from cpuidle_state, enabling it to be made global. Reference: https://lkml.org/lkml/2011/3/25/52 Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm> Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com> Tested-by: Jean Pihet <j-pihet@ti.com> Acked-by: Arjan van de Ven <arjan@linux.intel.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Deepthi Dharwar
The cpuidle governor only suggests the state to enter via the governor->select() interface, but allows the low-level driver to override the recommended state. The state actually entered may differ because of software or hardware demotion. Software demotion is done by the back-end cpuidle driver and can be accounted for correctly. The current cpuidle code uses the last_state field to capture the state actually entered and, based on that, updates the statistics for that state. Ideally, the driver's enter routine should update the counters, and it should return the state actually entered rather than the time spent there. The generic cpuidle code should simply handle where the counters live in the sysfs namespace, not update them. Reference: https://lkml.org/lkml/2011/3/25/52 Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com> Tested-by: Jean Pihet <j-pihet@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Acked-by: Arjan van de Ven <arjan@linux.intel.com> Acked-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Len Brown <len.brown@intel.com>
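Under the new contract a driver's enter routine might look roughly like this (signature as it ends up after this patch series; my_idle_enter is hypothetical):

    static int my_idle_enter(struct cpuidle_device *dev,
                             struct cpuidle_driver *drv, int index)
    {
            ktime_t t0 = ktime_get();

            /* ... enter the idle state; hardware may demote us here ... */

            dev->last_residency = (int)ktime_to_us(ktime_sub(ktime_get(), t0));
            return index;   /* the state actually entered, not the time spent */
    }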
-
Committed by Bart Van Assche
Recently the ACPI ops structs were constified, but the inline version of register_hotplug_dock_device() was overlooked (see also commit 9c8b04be, June 25 2011). Update the inline register_hotplug_dock_device() used with CONFIG_ACPI_DOCK=n too. This patch fixes at least the following compiler warnings:

    drivers/ata/libata-acpi.c: In function 'ata_acpi_associate':
    drivers/ata/libata-acpi.c:266:11: warning: passing argument 2 of 'register_hotplug_dock_device' discards qualifiers from pointer target type
    include/acpi/acpi_drivers.h:146:19: note: expected 'struct acpi_dock_ops *' but argument is of type 'const struct acpi_dock_ops *'
    drivers/ata/libata-acpi.c:275:11: warning: passing argument 2 of 'register_hotplug_dock_device' discards qualifiers from pointer target type
    include/acpi/acpi_drivers.h:146:19: note: expected 'struct acpi_dock_ops *' but argument is of type 'const struct acpi_dock_ops *'

Cc: stable@vger.kernel.org Signed-off-by: Len Brown <len.brown@intel.com>
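The CONFIG_ACPI_DOCK=n stub after the fix, sketched; the constified second argument is the whole change:

    static inline int register_hotplug_dock_device(acpi_handle handle,
                                                   const struct acpi_dock_ops *ops,
                                                   void *context)
    {
            return -ENODEV;
    }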
-
Committed by Rafael J. Wysocki
ACPI_NO_HARDWARE_INIT is only used by acpi_early_init() and acpi_bus_init() when calling acpi_enable_subsystem(), but acpi_enable_subsystem() doesn't check that flag, so it can be dropped. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Ben Hutchings
Use of the GPL or a compatible licence doesn't necessarily make the code any good. We already consider staging modules to be suspect, and this should also be true for out-of-tree modules, which may receive very little review. Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Reviewed-by: Dave Jones <davej@redhat.com> Acked-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (patched oops-tracing.txt)
-
- 05 November 2011, 7 commits
-
-
Committed by Mark Brown
Occasionally we may see an accessory reported before we have a stable impedance for it. If this happens, re-read the status to ensure the handler can take the appropriate action for the status change. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
-
Committed by Dan Carpenter
slave->duplex is a u8 type, so in bond_info_show_slave() the check "if (slave->duplex == -1)" is always false. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
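A self-contained userspace demo of why the check can never fire: integer promotion turns the u8 into a non-negative int before the comparison.

    #include <stdio.h>

    int main(void)
    {
            unsigned char duplex = 0xff;    /* same representation as the u8 field */

            if (duplex == -1)               /* promoted: 255 == -1, always false */
                    printf("unknown duplex\n");     /* unreachable */
            else
                    printf("the check is always false\n");
            return 0;
    }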
-
Committed by Johannes Berg
The documentation for how attribute lengths are checked is wrong ("Exact length" isn't true; the policy checks enforce a minimum length) and a bit misleading. Make it more complete and explain what really happens. Cc: Thomas Graf <tgraf@suug.ch> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Oleg Nesterov
Commit 27920651 "PM / Freezer: Make fake_signal_wake_up() wake TASK_KILLABLE tasks too" updated fake_signal_wake_up(), used by the freezer, to wake up KILLABLE tasks. Sending unsolicited wakeups to tasks in killable sleep is dangerous, as there are code paths that depend on tasks not waking up spuriously from KILLABLE sleep. For example, sys_read() or a page fault can sleep in TASK_KILLABLE, assuming that wait/down/whatever_killable can only fail if we cannot return to usermode. TASK_TRACED is another obvious example. The offending commit was meant to resolve a freezer hang during system PM operations caused by KILLABLE sleeps in network filesystems. wait_event_freezekillable(), which depends on the spurious KILLABLE wakeup, was added by f06ac72e "cifs, freezer: add wait_event_freezekillable and have cifs use it" to implement killable & freezable sleeps in network filesystems. To prepare for reverting 27920651, this patch reimplements wait_event_freezekillable() using freezer_do_not_count()/freezer_count() so that it doesn't depend on the spurious KILLABLE wakeup. This isn't very nice but should do for now. [tj: Refreshed patch to apply to linus/master and updated commit description on Rafael's request.] Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
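The reimplementation, sketched from the commit's description: mark the task as not counted by the freezer while it blocks, so no spurious wakeup is needed.

    #define wait_event_freezekillable(wq, condition)                \
    ({                                                              \
            int __retval;                                           \
            freezer_do_not_count();                                 \
            __retval = wait_event_killable(wq, condition);          \
            freezer_count();                                        \
            __retval;                                               \
    })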
-
Committed by Tony Lindgren
Commit 03ca370f (PM / OPP: Add OPP availability change notifier) does not compile if CONFIG_PM_OPP is not set:

    arch/arm/plat-omap/omap-pm-noop.o: In function `opp_get_notifier':
    include/linux/opp.h:103: multiple definition of `opp_get_notifier'
    include/linux/opp.h:103: first defined here

Also fix an incorrect comment. Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
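The multiple-definition error comes from a header stub lacking internal linkage; the fixed CONFIG_PM_OPP=n stub looks roughly like this (return type assumed from the notifier API):

    /* Without 'static inline', every includer emitted its own definition. */
    static inline struct srcu_notifier_head *opp_get_notifier(struct device *dev)
    {
            return ERR_PTR(-EINVAL);
    }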
-
Committed by Srivatsa S. Bhat
Remove the suspend_cpu_hotplug declaration, which doesn't correspond to an existing variable. [rjw: Added the changelog.] Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Johannes Berg
The warning really shouldn't happen, but until we find the reason why it does, don't spew it all the time; once is enough to know we've hit it. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
- 04 November 2011, 4 commits
-
-
Committed by Kuninori Morimoto
This provides a clk_rate_mult_range_round() helper for use by some of the CPG PLL ranged multipliers, following the same approach as the div ranges. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Phil Edworthy
This fixes up support for the SH-2(A) SCIFs by introducing a new regtype. As expected, it's close to the SH-4A SCIF with fifodata, but still different enough to warrant its own type. Fixes a number of FIFO overflows and similar issues on both SH7203/SH7264. Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com> Tested-by: Federico Fuga <fuga@studiofuga.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Christoph Hellwig
All ->execute_task instances now need to complete the I/O explicitly, which can happen either synchronously or asynchronously. Note that a lot of the CDB emulations appear to return success even if some low-level operations failed; given that this is a pre-existing issue, this patch doesn't change that fact. (nab: Adding missing switch breaks in PR-IN + PR-OUT) Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Committed by Christoph Hellwig
We want to be able to handle all CDBs through it and remove hacks like always using the first task in a CDB in target_report_luns. Also rename the callback to ->execute_task to better describe its use. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-