- 17 March 2011, 2 commits
-
-
Committed by Rafael J. Wysocki
None of the existing cpufreq drivers uses the second argument of its .suspend() callback (which isn't useful anyway), so remove it.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Dave Jones <davej@redhat.com>
-
Committed by Thomas Renninger
and it is also misleading, due to another message above which makes the index look like it is the CPU.

https://bugzilla.kernel.org/show_bug.cgi?id=24562

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dave Jones <davej@redhat.com>
CC: cpufreq@vger.kernel.org
-
- 16 March 2011, 3 commits
-
-
Committed by Stephen Rothwell
[AV: on architectures where default conflicts with existing flags, that is]

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Boris Ostrovsky
Support for the Always Running APIC timer (ARAT) was introduced in commit db954b58. This feature allows us to avoid switching timers from the LAPIC to something else (e.g. HPET) and going into timer broadcasts when entering deep C-states. AMD processors don't provide a CPUID bit for that feature, but they also keep APIC timers running in deep C-states (except for cases when the processor is affected by erratum 400). Therefore we should set the ARAT feature bit on AMD CPUs.

Tested-by: Borislav Petkov <borislav.petkov@amd.com>
Acked-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Acked-by: Mark Langsdorf <mark.langsdorf@amd.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
LKML-Reference: <1300205624-4813-1-git-send-email-ostr@amd64.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Andreas Herrmann
Commit 7f74f8f2 (x86 quirk: Fix polarity for IRQ0 pin2 override on SB800 systems) introduced a regression. It removed some SB600-specific code to determine the revision ID without adapting a corresponding revision ID check for SB600. See this mail thread:

http://marc.info/?l=linux-kernel&m=129980296006380&w=2

This patch adapts the corresponding check to cover all SB600 revisions.

Tested-by: Wang Lei <f3d27b@gmail.com>
Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: stable@kernel.org # 38.x, 37.x, 32.x
LKML-Reference: <20110315143137.GD29499@alberich.amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 15 March 2011, 35 commits
-
-
Committed by Mathieu Desnoyers
Intel Architecture Software Developer's Manual section 7.1.3 specifies that a core-serializing instruction such as "cpuid" should be executed on _each_ core before the new instruction is made visible. Failure to do so can lead to unspecified behavior (the Intel XMC errata include General Protection Fault in the list), so we should avoid this at all cost.

This problem can affect modified code executed by interrupt handlers after interrupts are re-enabled at the end of stop_machine, because no core-serializing instruction is executed between the code modification and the moment interrupts are re-enabled.

Because stop_machine_text_poke performs the text modification from the first CPU decrementing stop_machine_first, modified code executed in thread context is also affected by this problem. To explain why, we have to split the CPUs into two categories: the CPU that initiates the text modification (calls text_poke_smp) and all the others. The scheduler, executed on all other CPUs after stop_machine, issues an "iret" core-serializing instruction, and therefore handles core serialization for all these CPUs. However, the text modification initiator can continue its execution on the same thread and access the modified text without any scheduler call. Given that the CPU that initiates the code modification is not guaranteed to be the one actually performing the code modification, it falls into the XMC errata.

Q: Isn't this executed from an IPI handler, which will return with IRET (a serializing instruction) anyway?
A: No, stop_machine now uses a per-cpu workqueue, so that handler will be executed from worker threads. There is no iret anymore.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
LKML-Reference: <20110303160137.GB1590@Krystal>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: <stable@kernel.org>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
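For illustration, a minimal sketch of a cpuid-based core-serialization helper on x86, loosely modeled on the kernel's sync_core(); the function name and the exact register constraints are assumptions for this sketch, not code from the patch.

    /*
     * Sketch only: cpuid is a serializing instruction, so executing it drains
     * the pipeline and forces stale, speculatively fetched code to be refetched.
     */
    static inline void serialize_this_core(void)
    {
        unsigned int eax = 1;

        asm volatile("cpuid"
                     : "+a" (eax)
                     : /* no further inputs */
                     : "ebx", "ecx", "edx", "memory");
    }

The point of the patch is that something like this must run on every core (not just the one that patched the text) before the modified instructions may be executed.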
-
Committed by Michal Simek
The reset vector can be set up by the bootloader, and the kernel doesn't need to touch it. If you need to set up the reset vector, please use CONFIG_MANUAL_RESET_VECTOR through menuconfig. It is not possible to set up address 0x0 as the reset address, because it makes no sense to set it up at all.

Signed-off-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: John Williams <john.williams@petalogix.com>
-
Committed by Michal Simek
If soft reset falls through with no hardware-assisted reset, the best we can do is jump to the reset vector and see what the bootloader left for us.

Signed-off-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: John Williams <john.williams@petalogix.com>
-
Committed by Michal Simek
The Microblaze vector table stores several vectors (reset, user exception, interrupt, debug exception and hardware exception). All of these functions can be below address 0x10000. If they are, a wrong vector table is generated, because the jump is not set up from two instructions (imm for the upper 16 bits and brai for the lower 16 bits). Adding a specific offset prevents the problem if the address is below 0x10000; in that case only the brai instruction is used.

Signed-off-by: Michal Simek <monstr@monstr.eu>
-
Committed by Xiao Guangrong
native_flush_tlb_others() is called from:

  flush_tlb_current_task()
  flush_tlb_mm()
  flush_tlb_page()

All these functions disable preemption explicitly, so we can use smp_processor_id() instead of get_cpu() and put_cpu().

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Cliff Wickman <cpw@sgi.com>
LKML-Reference: <4D7EC791.4040003@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
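A minimal sketch of the pattern being replaced; the function and variable names below are illustrative, not the actual native_flush_tlb_others() body.

    #include <linux/kernel.h>
    #include <linux/smp.h>
    #include <linux/cpumask.h>

    /* Before: get_cpu()/put_cpu() toggle preemption even though every caller
     * (flush_tlb_mm(), flush_tlb_page(), ...) already runs with preemption off. */
    static void flush_others_old(const struct cpumask *mask)
    {
        unsigned int cpu = get_cpu();       /* disables preemption */

        if (cpumask_test_cpu(cpu, mask))
            pr_debug("skipping self (cpu %u)\n", cpu);

        put_cpu();                          /* re-enables preemption */
    }

    /* After: preemption is already disabled by the callers, so the cheaper
     * smp_processor_id() is sufficient. */
    static void flush_others_new(const struct cpumask *mask)
    {
        unsigned int cpu = smp_processor_id();

        if (cpumask_test_cpu(cpu, mask))
            pr_debug("skipping self (cpu %u)\n", cpu);
    }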
-
Committed by Aneesh Kumar K.V
This patch adds the new syscalls to x86_64.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Aneesh Kumar K.V
This patch adds the new syscalls to x86_32.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Rafael J. Wysocki
The variable pm_flags is used to prevent APM from being enabled along with ACPI, which would lead to problems. However, acpi_init() is always called before apm_init(), and after acpi_init() has returned it is known whether or not ACPI will be used. Namely, if acpi_disabled is not set after acpi_init() has returned, this means that ACPI is enabled. Thus, it is sufficient to check acpi_disabled in apm_init() to prevent APM from being enabled in parallel with ACPI.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Len Brown <len.brown@intel.com>
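A minimal sketch of the check described above, assuming the usual shape of apm_init(); this is not the full function from arch/x86/kernel/apm_32.c, only the part relevant here.

    #include <linux/init.h>
    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/acpi.h>     /* acpi_disabled */

    /* Sketch only: by the time apm_init() runs, acpi_init() has already decided
     * whether ACPI is in use, so testing acpi_disabled is enough to keep APM
     * from coming up in parallel with ACPI. */
    static int __init apm_init(void)
    {
        if (!acpi_disabled) {
            pr_notice("apm: overridden by ACPI.\n");
            return -ENODEV;
        }

        /* ... the rest of APM setup ... */
        return 0;
    }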
-
Committed by Rafael J. Wysocki
From the users' point of view, CONFIG_PM is really only used for making it possible to set CONFIG_SUSPEND, CONFIG_HIBERNATION, CONFIG_PM_RUNTIME and (surprisingly enough) CONFIG_XEN_SAVE_RESTORE (CONFIG_PM_OPP also depends on CONFIG_PM, but quite artificially). However, both CONFIG_SUSPEND and CONFIG_HIBERNATION require platform support (independent of CONFIG_PM), and it is not quite obvious that CONFIG_PM has to be set for CONFIG_XEN_SAVE_RESTORE to be available. Thus, from the users' point of view, it would be more logical to automatically select CONFIG_PM if any of the above options depending on it are set.

Make CONFIG_PM depend on (CONFIG_PM_SLEEP || CONFIG_PM_RUNTIME), which will cause it to be selected when any of CONFIG_SUSPEND, CONFIG_HIBERNATION, CONFIG_PM_RUNTIME, CONFIG_XEN_SAVE_RESTORE is set, and will clarify its meaning.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Thomas Gleixner
Commit 522d7dec (futex: Remove redundant pagefault_disable in futex_atomic_cmpxchg_inatomic()) added a bogus comment:

  /* Note that preemption is disabled by futex_atomic_cmpxchg_inatomic
   * call sites. */

Bogus in two aspects:

1) pagefault_disable != preempt_disable, even if the mechanism we use is the same
2) we have a call site which deliberately does not disable pagefaults as it wants the possible fault to be handled - though that has been changed for consistency reasons now

Sigh. I really should have seen that when committing the above. :(

Catched-by-and-rightfully-ranted-at-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <alpine.LFD.2.00.1103141126590.2787@localhost6.localdomain6>
Cc: Michel Lespinasse <walken@google.com>
Cc: Darren Hart <darren@dvhart.com>
-
Committed by Florian Fainelli
Since commit 32fd6901 (MIPS: Alchemy: get rid of common/reset.c), Alchemy-based boards use their own reset function. For MTX-1 and XXS1500, the reset function pokes at the BCSR.SYSTEM_RESET register, but this does not work. According to Bruno Randolf, this was not tested when written. Previously, the generic au1000_restart() routine called the board-specific reset function, which for MTX-1 and XXS1500 did not work, but finally made a jump to the reset vector, which really triggers a system restart. Fix reboot for both targets by jumping to the reset vector.

Signed-off-by: Florian Fainelli <florian@openwrt.org>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2093/
Acked-by: Bruno Randolf <br1@einfach.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Florian Fainelli
When au1000_eth probes the MII bus for a PHY address, if we do not set the au1000_eth platform data's phy_search_highest_address, the MII probing logic will exit early and will assume a valid PHY is found at address 0. For MTX-1, the PHY is at address 31, and without this patch the link detection/speed/duplex would not work correctly.

CC: stable@kernel.org
Signed-off-by: Florian Fainelli <florian@openwrt.org>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2111/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Maurus Cuelenaere
Jz4740 supports the clock framework but doesn't have HAVE_CLK defined, so define it!

Signed-off-by: Maurus Cuelenaere <mcuelenaere@gmail.com>
To: linux-mips@linux-mips.org
To: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/2112/
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Maksim Rayskiy
To avoid forking a usermode thread when creating an idle task, move fork_idle to a work queue. If the kernel starts with the maxcpus= option, which does not bring all available CPUs online at boot time, idle tasks for the offline CPUs are not created. If the offline CPUs are later hotplugged through sysfs, __cpu_up is called in the context of the user task, and fork_idle copies its non-zero mm pointer. This causes a BUG() in per_cpu_trap_init. Deferring fork_idle to a work queue, as sketched below, also avoids issues with resource limits of the CPU writing to sysfs, containers, and maybe others.

Signed-off-by: Maksim Rayskiy <mrayskiy@broadcom.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2070/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
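A sketch of running fork_idle() from a kworker by pairing a work item with a completion; the structure and function names here are illustrative assumptions, not the code from the patch.

    #include <linux/kernel.h>
    #include <linux/workqueue.h>
    #include <linux/completion.h>
    #include <linux/sched.h>

    /* Illustrative names: defer fork_idle() to a kworker so the new idle task
     * never inherits the mm of the task writing to sysfs. */
    struct idle_fork_request {
        struct work_struct work;
        int cpu;
        struct task_struct *idle;
        struct completion done;
    };

    static void do_fork_idle(struct work_struct *work)
    {
        struct idle_fork_request *req =
            container_of(work, struct idle_fork_request, work);

        /* Runs in kworker context, where current->mm == NULL. */
        req->idle = fork_idle(req->cpu);
        complete(&req->done);
    }

    static struct task_struct *fork_idle_via_workqueue(int cpu)
    {
        struct idle_fork_request req = { .cpu = cpu };

        INIT_WORK_ONSTACK(&req.work, do_fork_idle);
        init_completion(&req.done);
        schedule_work(&req.work);
        wait_for_completion(&req.done);
        destroy_work_on_stack(&req.work);

        return req.idle;
    }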
-
Committed by Deng-Cheng Zhu
Leverage the commit for ARM by Will Deacon:

- 446a5a8b ARM: 6205/1: perf: ensure counter delta is treated as unsigned

  Hardware performance counters on ARM are 32 bits wide but atomic64_t variables are used to represent counter data in the hw_perf_event structure. The armpmu_event_update function right-shifts a signed 64-bit delta variable and adds the result to the event count. This can lead to shifting in sign bits if the MSB of the 32-bit counter value is set. This results in perf output such as:

    Performance counter stats for 'sleep 20':

    18446744073460670464  cycles            <-- 0xFFFFFFFFF12A6000
                 7783773  instructions      #  0.000 IPC
                     465  context-switches
                     161  page-faults
                 1172393  branches

            20.154242147  seconds time elapsed

  This patch ensures that the delta value is treated as unsigned so that the right shift sets the upper bits to zero.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: a.p.zijlstra@chello.nl
To: fweisbec@gmail.com
To: will.deacon@arm.com
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: wuzhangjin@gmail.com
Cc: paulus@samba.org
Cc: mingo@elte.hu
Cc: acme@redhat.com
Cc: matt@console-pimps.org
Cc: sshtylyov@mvista.com
Patchwork: http://patchwork.linux-mips.org/patch/2015/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
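A standalone (user-space) demonstration of the signed-shift problem described above; the counter values are made up to mirror the 0xFFFFFFFFF12A6000 symptom in the perf output, and this is not the kernel's armpmu_event_update() itself.

    #include <stdint.h>
    #include <stdio.h>

    /* A 32-bit counter value is kept in the upper bits of a 64-bit word;
     * shifting a *signed* delta back down sign-extends when the counter's
     * MSB is set, producing a huge bogus event count. */
    int main(void)
    {
        const int shift = 32;               /* 64 - counter width */
        uint64_t prev = 0x00001000ULL;
        uint64_t now  = 0xF12A6000ULL;      /* MSB of the 32-bit counter set */

        int64_t bad = (int64_t)((now << shift) - (prev << shift));
        bad >>= shift;                      /* arithmetic shift: sign-extends */

        uint64_t good = (now << shift) - (prev << shift);
        good >>= shift;                     /* logical shift: zero-fills */

        printf("signed delta:   %lld (0x%llx)\n",
               (long long)bad, (unsigned long long)bad);
        printf("unsigned delta: %llu\n", (unsigned long long)good);
        return 0;
    }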
-
Committed by Deng-Cheng Zhu
This is the MIPS part of the following commits by Frederic Weisbecker:

- f72c1a93 perf: Factorize callchain context handling

  Store the kernel and user contexts from the generic layer instead of the archs; this gathers some repetitive code.

- 56962b44 perf: Generalize some arch callchain code

  Most archs use one callchain buffer per cpu, except x86 that needs to deal with NMIs. Provide a default perf_callchain_buffer() implementation that x86 overrides. Centralize all the kernel/user regs handling and invoke new arch handlers from there: perf_callchain_user() / perf_callchain_kernel(). That avoids all the user_mode(), current->mm checks and so on. Also invert some parameters in the perf_callchain_*() helpers: entry to the left, regs to the right, following the traditional (dst, src).

- 70791ce9 perf: Generalize callchain_store()

  callchain_store() is the same on every arch, so inline it in perf_event.h and rename it to perf_callchain_store() to avoid any collision. This removes repetitive code.

- c1a65932 perf: Drop unappropriate tests on arch callchains

  Drop the TASK_RUNNING test on user tasks for callchains, as this check doesn't seem to make any sense. Also remove the tests for !current, which is not supposed to happen, and for current->pid, as this should be handled at the generic level with the exclude_idle attribute.

Reported-by: Wu Zhangjin <wuzhangjin@gmail.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: a.p.zijlstra@chello.nl
To: will.deacon@arm.com
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: paulus@samba.org
Cc: mingo@elte.hu
Cc: acme@redhat.com
Cc: dengcheng.zhu@gmail.com
Cc: matt@console-pimps.org
Cc: sshtylyov@mvista.com
Patchwork: http://patchwork.linux-mips.org/patch/2014/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Deng-Cheng Zhu
Ignore events that are in off/error state or belong to a different PMU. This patch originates from the following commit for ARM by Will Deacon:

- 65b4711f ARM: 6352/1: perf: fix event validation

  The validate_event function in the ARM perf events backend has the following problems:

  1.) Events that are disabled count towards the cost.
  2.) Events associated with other PMUs [for example, software events or breakpoints] do not count towards the cost, but do fail validation, causing the group to fail.

  This patch changes validate_event so that it ignores events in the PERF_EVENT_STATE_OFF state or that are scheduled for other PMUs.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: a.p.zijlstra@chello.nl
To: fweisbec@gmail.com
To: will.deacon@arm.com
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: wuzhangjin@gmail.com
Cc: paulus@samba.org
Cc: mingo@elte.hu
Cc: acme@redhat.com
Cc: dengcheng.zhu@gmail.com
Cc: matt@console-pimps.org
Cc: ddaney@caviumnetworks.com
Patchwork: http://patchwork.linux-mips.org/patch/2013/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
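A sketch of the validation rule described above. The surrounding names (cpu_hw_events, the global pmu, alloc_counter) are illustrative, loosely modeled on the ARM/MIPS perf backends of that era, and this is not the exact patched function.

    #include <linux/perf_event.h>

    /* Sketch only: cpu_hw_events, pmu and alloc_counter() are stand-ins for
     * the arch-local backend types and helpers. */
    static int validate_event(struct cpu_hw_events *cpuc, struct perf_event *event)
    {
        struct hw_perf_event fake_hwc = event->hw;

        /*
         * Events owned by another PMU (software events, breakpoints, ...)
         * or sitting in the off/error states must neither consume a hardware
         * counter nor fail the group's validation.
         */
        if (event->pmu != &pmu || event->state <= PERF_EVENT_STATE_OFF)
            return 1;

        /* Otherwise the event only validates if a counter can be allocated. */
        return alloc_counter(cpuc, &fake_hwc) >= 0;
    }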
-
Committed by Deng-Cheng Zhu
This is the MIPS part of the following commits by Peter Zijlstra:

- a4eaf7f1 perf: Rework the PMU methods

  Replace pmu::{enable,disable,start,stop,unthrottle} with pmu::{add,del,start,stop}, all of which take a flags argument. The new interface extends the capability to stop a counter while keeping it scheduled on the PMU. We replace the throttled state with the generic stopped state. This also allows us to efficiently stop/start counters over certain code paths (like IRQ handlers). It also allows scheduling a counter without it starting, allowing for a generic frozen state (useful for rotating stopped counters). The stopped state is implemented in two different ways, depending on how the architecture implemented the throttled state:

  1) We disable the counter:
     a) the pmu has per-counter enable bits, we flip that
     b) we program a NOP event, preserving the counter state
  2) We store the counter state and ignore all read/overflow events

  For MIPSXX, the stopped state is implemented in the way of 1.b as above.

- 33696fc0 perf: Per PMU disable

  Changes perf_disable() into perf_pmu_disable().

- 24cd7f54 perf: Reduce perf_disable() usage

  Since the current perf_disable() usage is only an optimization, remove it for now. This eases the removal of the __weak hw_perf_enable() interface.

- b0a873eb perf: Register PMU implementations

  Simple registration interface for struct pmu; this provides the infrastructure for removing all the weak functions.

- 51b0fe39 perf: Deconstify struct pmu

  sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`

Reported-by: Wu Zhangjin <wuzhangjin@gmail.com>
Acked-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: a.p.zijlstra@chello.nl
To: fweisbec@gmail.com
To: will.deacon@arm.com
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: wuzhangjin@gmail.com
Cc: paulus@samba.org
Cc: mingo@elte.hu
Cc: acme@redhat.com
Cc: dengcheng.zhu@gmail.com
Cc: matt@console-pimps.org
Cc: sshtylyov@mvista.com
Cc: ddaney@caviumnetworks.com
Patchwork: http://patchwork.linux-mips.org/patch/2012/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Deng-Cheng Zhu
This is the MIPS part of the following commit by Peter Zijlstra:

- e360adbe irq_work: Add generic hardirq context callbacks

  Provide a mechanism that allows running code in IRQ context. It is most useful for NMI code that needs to interact with the rest of the system -- like wakeup a task to drain buffers. Perf currently has such a mechanism, so extract that and provide it as a generic feature, independent of perf so that others may also benefit. The IRQ context callback is generated through self-IPIs where possible, or on architectures like powerpc the decrementer (the built-in timer facility) is set to generate an interrupt immediately. Architectures that don't have anything like this get to do with a callback from the timer tick. These architectures can call irq_work_run() at the tail of any IRQ handlers that might enqueue such work (like the perf IRQ handler) to avoid undue latencies in processing the work.

For MIPSXX, we need to call irq_work_run() at the tail of the perf IRQ handler as described above.

Reported-by: Wu Zhangjin <wuzhangjin@gmail.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: fweisbec@gmail.com
To: will.deacon@arm.com
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: paulus@samba.org
Cc: mingo@elte.hu
Cc: acme@redhat.com
Cc: matt@console-pimps.org
Cc: sshtylyov@mvista.com
Patchwork: http://patchwork.linux-mips.org/patch/2011/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
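A sketch of what the tail call looks like in a counter-overflow interrupt handler; the handler name and its body are placeholders, not the actual MIPS perf IRQ handler.

    #include <linux/interrupt.h>
    #include <linux/irq_work.h>

    /* Sketch only: placeholder overflow handler showing where irq_work_run()
     * goes on an architecture without self-IPIs. */
    static irqreturn_t pmu_overflow_handler(int irq, void *dev)
    {
        irqreturn_t handled = IRQ_NONE;

        /* ... read overflowed counters and feed samples to perf ... */

        /*
         * No self-IPI is available, so drain any irq_work queued by perf
         * (e.g. wakeups) before leaving hard-IRQ context.
         */
        irq_work_run();

        return handled;
    }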
-
Committed by Yoichi Yuasa
Signed-off-by: Yoichi Yuasa <yuasa@linux-mips.org>
Cc: linux-mips <linux-mips@linux-mips.org>
Patchwork: https://patchwork.linux-mips.org/patch/2055/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Stefan Weil
This error was reported by cppcheck:

  arch/mips/loongson/common/machtype.c:56: error: Dangerous usage of 'str' (strncpy doesn't always 0-terminate it)

If strncpy copied MACHTYPE_LEN bytes, the destination string str was not terminated. The patch adds one more byte to str and makes sure that this byte is always 0.

Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Cc: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: Arnaud Patard <apatard@mandriva.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/2053/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
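A standalone illustration of the fix pattern; the buffer size and input string below are made-up values, not the ones from machtype.c.

    #include <stdio.h>
    #include <string.h>

    #define MACHTYPE_LEN 50   /* illustrative; the real constant lives in machtype.c */

    int main(void)
    {
        const char *machtype = "machtype=lemote-yeeloong-2f";  /* example input */
        char str[MACHTYPE_LEN + 1];     /* one extra byte for the terminator */

        /* strncpy() does not terminate the destination when it copies exactly
         * MACHTYPE_LEN bytes, so force the terminator ourselves. */
        strncpy(str, machtype, MACHTYPE_LEN);
        str[MACHTYPE_LEN] = '\0';

        printf("%s\n", str);
        return 0;
    }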
-
Committed by David Daney
Under some combinations of CONFIG_*, lastpfn in page_is_ram is 'set but not used'. Mark it as __maybe_unused to quiet the warning/error.

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2033/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by David Daney
GCC-4.6 can find more unused code than previous versions could. In the case of arch/mips/math-emu/ieee754int.h, the COMPXSP and COMPXDP macros are used in several places, but a couple of them leave xs unused. The easiest thing to do is to mark it as __maybe_unused to quiet the warning.

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2032/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by David Daney
The variable arg3 in _sys_sysmips() is unused. Remove it.

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2034/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by David Daney
GCC-4.6 can find more unused code than previous versions could. In the case of protected_restore_fp_context{,32}, the variable tmp is really used. Its use is tricky in that we really care about the side effects of the __put_user() calls. So we must mark tmp with __maybe_unused to quiet the warning.

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2035/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
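In the kernel, __maybe_unused expands to __attribute__((unused)) (include/linux/compiler-gcc.h), which silences the warning for that one variable instead of globally. Below is a loose, illustrative sketch of the annotated pattern: a variable that exists only for the side effect (a possible fault) of a user-space access, never for its value. The function name and the use of __get_user here are assumptions for the sketch, not the code from signal.c.

    #include <linux/compiler.h>     /* __maybe_unused, __user */
    #include <linux/uaccess.h>      /* __get_user() */

    /* Sketch only: 'tmp' is never read; we only care whether touching the
     * user-space word faults, which is reported through err. Without the
     * __maybe_unused annotation, GCC 4.6 warns "set but not used". */
    static int touch_user_word(int __user *ptr)
    {
        int err, tmp __maybe_unused;

        err = __get_user(tmp, ptr);
        return err;
    }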
-
Committed by Anoop P A
Signed-off-by: Anoop P A <anoop.pa@gmail.com>
To: Ben Hutchings <ben@decadent.org.uk>
To: linux-mips@linux-mips.org
To: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/1804/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Anoop P A
Signed-off-by: Anoop P A <anoop.pa@gmail.com>
To: linux-mips@linux-mips.org
To: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/1803/
Tested-by: Shane McDonald <mcdonald.shane@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Robert Millan
Loongson builds have an ad-hoc cmdline default of "console=ttyS0,115200 root=/dev/hda1". These settings come from a vendor; I remember builds from the Lemote branch requiring a "console=tty" override in order to get a working console. At least on Yeeloong, they're particularly useless: there's no external serial port, and the IDE drive is now recognised as /dev/sda.

Signed-off-by: Robert Millan <rmh@gnu.org>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/1759/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Stefan Oberhumer
The sysmips(MIPS_FIXADE, ...) case contains an obvious copy-and-paste error in the handling of the TIF_LOGADE flag. Fix that.

Patchwork: https://patchwork.linux-mips.org/patch/1997/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by David Daney
It was reported that GCC-4.3.3 (with CodeSourcery extensions) fails without this.

Reported-by: Jonas Gorski <jonas.gorski@gmail.com>
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2010/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Wu Zhangjin
trace.func should be set to the recorded ip of the mcount calling site in the __mcount_loc section, in order to filter the function entries configured through the tracing/set_graph_function interface. Before this patch it was set to self_ra (the return address of mcount), which made set_graph_function not work as expected. This fixes it by calculating the right recorded ip in the __mcount_loc section and assigning it to trace.func.

Reported-by: Zhiping Zhong <xzhong86@163.com>
Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: Sergei Shtylyov <sshtylyov@mvista.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2017/
Signed-off-by: Ralf Baechle <ralf@duck.linux-mips.net>
-
Committed by Wu Zhangjin
This moves the comments out of ftrace_make_nop() and cleans it up. At the same time, a macro MCOUNT_OFFSET_INSNS is defined for sharing with the next patch.

Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2008/
Signed-off-by: Ralf Baechle <ralf@duck.linux-mips.net>
-
Committed by Wu Zhangjin
The old prepare_ftrace_return() for MIPS is confusing and has introduced some problems. This patch cleans up the names of the arguments, variables and related functions.

For MIPS, the 2nd argument of prepare_ftrace_return() is not really the 'selfpc' described in ftrace-design.txt; instead it is the self return address. This breaks compatibility with the generic interface, but it removes one unneeded calculation: to get the current function name, the parent return address and the self return address are enough, so there is no need to transform the self return address into the self address.

set_graph_function of the function graph tracer is an exception, though: it does need the 2nd argument of prepare_ftrace_return() as 'selfpc', because it uses 'selfpc' to match the user's configuration of function graph entries. In reality, however, it doesn't need the 'selfpc' but the recorded ip address of the mcount calling site in the __mcount_loc section. So the 2nd argument of prepare_ftrace_return() is not important; the real requirement is that the right recorded ip address be calculated and assigned to trace.func. This will be fixed in the next patches.

Reported-by: Zhiping Zhong <xzhong86@163.com>
Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2007/
Signed-off-by: Ralf Baechle <ralf@duck.linux-mips.net>
-
Committed by Wu Zhangjin
The old in_module() may not work in some situations (e.g. when module and kernel are in the same address space with CONFIG_MAPPED_KERNEL=y). in_kernel_space() is more generic, and it is also easy to implement by cloning the existing core_kernel_text(), so replace in_module() with in_kernel_space().

Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2005/
Signed-off-by: Ralf Baechle <ralf@duck.linux-mips.net>
-
Committed by Wu Zhangjin
This simply moves the "ip -= 4" statement down to the end of the do { ... } while (...); loop, which removes one unneeded subtraction and the subsequent memory load and comparison.

Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2006/
Signed-off-by: Ralf Baechle <ralf@duck.linux-mips.net>
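A schematic sketch of the restructuring; the scan helper below is illustrative, not the actual MIPS ftrace code. Probing the instruction first and stepping back afterwards means the first iteration does not pay for an extra subtraction before its load and compare.

    /* Illustrative sketch only: scan backwards from 'ip', one 4-byte MIPS
     * instruction at a time, looking for a particular encoding. */
    static unsigned long scan_backwards(unsigned long ip, unsigned int wanted,
                                        unsigned int max_insns)
    {
        unsigned int insn;

        do {
            insn = *(unsigned int *)ip;   /* load the candidate instruction */
            if (insn == wanted)
                return ip;
            ip -= 4;                      /* moved to the end of the loop body */
        } while (--max_insns);

        return 0;
    }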
-