- 30 August 2017, 4 commits
-
-
By Paul Burton
Allow the boot_secondary SMP op to return an error to __cpu_up(), which will in turn return it to its caller. This will allow SMP implementations to return errors quickly in cases they know to have failed, rather than relying upon __cpu_up() eventually timing out waiting for the cpu_running completion. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17014/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
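In code, the idea is roughly the following (a hedged sketch, not the exact kernel implementation; mp_ops and cpu_running are the existing MIPS SMP ops pointer and completion mentioned above, and the timeout value is an assumption):

    int __cpu_up(unsigned int cpu, struct task_struct *tidle)
    {
            int err;

            /* Fail fast if the SMP implementation already knows the CPU
             * cannot be brought up, instead of timing out below. */
            err = mp_ops->boot_secondary(cpu, tidle);
            if (err)
                    return err;

            /* Otherwise wait for the secondary CPU to announce itself. */
            if (!wait_for_completion_timeout(&cpu_running,
                                             msecs_to_jiffies(1000)))
                    return -EIO;

            return 0;
    }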
-
By Paul Burton
With CM >= 3.5 we have the notion of multiple clusters & can access their CM, CPC & GIC registers via the appropriate redirect/other register blocks. In order to allow for this, introduce cluster & block arguments to mips_cm_lock_other() which configures the redirect/other region to point at the appropriate cluster, core, VP & register block. Since we now have 4 arguments to mips_cm_lock_other() & a common use is likely to be to target the cluster, core & VP corresponding to a particular Linux CPU number, we also add a new mips_cm_lock_other_cpu() helper function which handles that without the caller needing to manually pull out the cluster, core & VP numbers. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17013/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
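The shape of the resulting interface, as a hedged sketch (argument order and the cpu_cluster()/cpu_core()/cpu_vpe_id() accessors are taken from the surrounding patches; details may differ):

    void mips_cm_lock_other(unsigned int cluster, unsigned int core,
                            unsigned int vp, unsigned int block);
    void mips_cm_unlock_other(void);

    /* Convenience wrapper: target the cluster/core/VP of a Linux CPU. */
    static inline void mips_cm_lock_other_cpu(unsigned int cpu,
                                              unsigned int block)
    {
            struct cpuinfo_mips *d = &cpu_data[cpu];

            mips_cm_lock_other(cpu_cluster(d), cpu_core(d),
                               cpu_vpe_id(d), block);
    }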
-
By Paul Burton
Up until now we have open-coded checks for whether CPUs are siblings, with slight variations on whether we consider the package ID or not. This will only get more complex when we introduce cluster support, so in preparation for that this patch introduces a cpus_are_siblings() function which can be used to check whether or not two CPUs are siblings in a consistent manner. By checking globalnumber with the VP ID masked out this also has the neat side effect of being ready for multi-cluster systems already. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Acked-by: Rafael J. Wysocki <rjw@rjwysocki.net> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17011/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
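The check described above reduces to comparing globalnumber values with the VP ID field masked off; a hedged sketch (the mask name is assumed):

    /* Siblings share everything except the VP ID. */
    static inline bool cpus_are_siblings(int cpua, int cpub)
    {
            return (cpu_data[cpua].globalnumber & ~MIPS_GLOBALNUMBER_VP) ==
                   (cpu_data[cpub].globalnumber & ~MIPS_GLOBALNUMBER_VP);
    }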
-
By Paul Burton
We currently have fields in struct cpuinfo_mips for the core & VP(E) ID of a particular CPU, and various pieces of code directly access those fields. This patch abstracts such access by introducing accessor functions cpu_core(), cpu_set_core(), cpu_vpe_id() & cpu_set_vpe_id() and having code that needs to access these values call those functions rather than directly accessing the struct cpuinfo_mips fields. This prepares us for changes to the way in which those values are stored in later patches. The cpu_vpe_id() function is introduced even though we already had a cpu_vpe_id() macro, for a couple of reasons: 1) It's more consistent with the core, and future cluster, accessors. 2) It ensures a sensible return type without explicit casts. 3) It's generally preferable to use functions rather than macros. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17009/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
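A hedged sketch of what such accessors look like while the values still live in plain struct cpuinfo_mips fields (the field names are assumptions and the storage changes in later patches):

    static inline unsigned int cpu_core(struct cpuinfo_mips *cpuinfo)
    {
            return cpuinfo->core;
    }

    static inline void cpu_set_core(struct cpuinfo_mips *cpuinfo,
                                    unsigned int core)
    {
            cpuinfo->core = core;
    }

    static inline unsigned int cpu_vpe_id(struct cpuinfo_mips *cpuinfo)
    {
            return cpuinfo->vpe_id;
    }

    static inline void cpu_set_vpe_id(struct cpuinfo_mips *cpuinfo,
                                      unsigned int vpe)
    {
            cpuinfo->vpe_id = vpe;
    }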
-
- 29 August 2017, 2 commits
-
-
By Matt Redfearn
smp_ops providers do not modify their ops structures, so they should be made const for robustness. Since currently the MIPS kernel is not mapped with memory protection, this does not in itself provide any security benefit, but it still makes sense to make this change. There are also slight code size efficiencies from the structure being made read-only, saving 128 bytes of kernel text on a pistachio_defconfig.
Before:
   text    data     bss     dec     hex filename
7187239 1772752  470224 9430215  8fe4c7 vmlinux
After:
   text    data     bss     dec     hex filename
7187111 1772752  470224 9430087  8fe447 vmlinux
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Cc: Bart Van Assche <bart.vanassche@sandisk.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Huacai Chen <chenhc@lemote.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Kevin Cernekee <cernekee@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Doug Ledford <dledford@redhat.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Joe Perches <joe@perches.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Steven J. Hill <steven.hill@cavium.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/16784/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
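On the provider side the change amounts to declaring the ops structure const, roughly as in this sketch (the provider name and callbacks are hypothetical; only the const qualifier is the point):

    static void example_send_ipi_single(int cpu, unsigned int action) { }
    static void example_send_ipi_mask(const struct cpumask *mask,
                                      unsigned int action) { }

    static const struct plat_smp_ops example_smp_ops = {
            .send_ipi_single = example_send_ipi_single,
            .send_ipi_mask   = example_send_ipi_mask,
            /* ... remaining callbacks of the provider ... */
    };

    void __init example_smp_init(void)
    {
            register_smp_ops(&example_smp_ops); /* now takes a const pointer */
    }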
-
By Ying Huang
struct call_single_data is used in IPIs to transfer information between CPUs. Its size is bigger than sizeof(unsigned long) and less than cache line size. Currently it is not allocated with any explicit alignment requirements. This makes it possible for allocated call_single_data to cross two cache lines, which results in double the number of cache lines that need to be transferred among CPUs. This can be fixed by requiring call_single_data to be aligned with the size of call_single_data. Currently the size of call_single_data is a power of 2. If we add new fields to call_single_data, we may need to add padding to make sure the size of the new definition is a power of 2 as well. Fortunately, this is enforced by GCC, which will report bad sizes. To set the alignment requirement of call_single_data to the size of call_single_data, a struct definition and a typedef is used. To test the effect of the patch, I used the vm-scalability multiple thread swap test case (swap-w-seq-mt). The test will create multiple threads and each thread will eat memory until all RAM and part of swap is used, so that a huge number of IPIs are triggered when unmapping memory. In the test, the throughput of memory writing improves ~5% compared with misaligned call_single_data, because of faster IPIs. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Huang, Ying <ying.huang@intel.com> [ Add call_single_data_t and align with size of call_single_data. ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Aaron Lu <aaron.lu@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Juergen Gross <jgross@suse.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/87bmnqd6lz.fsf@yhuang-mobile.sh.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
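The alignment trick can be sketched as follows, a simplified version of the generic definition with the field list abbreviated:

    struct __call_single_data {
            struct llist_node llist;
            smp_call_func_t func;
            void *info;
            unsigned int flags;
    };

    /* Alignment equals the structure size, so a call_single_data_t can
     * never straddle two cache lines while the size stays a power of 2. */
    typedef struct __call_single_data call_single_data_t
            __aligned(sizeof(struct __call_single_data));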
-
- 07 August 2017, 1 commit
-
-
By Matija Glavinic Pecotic
While testing cpu hotplug (cpu down and up in loops) on kernel 4.4, it was observed that occasionally the check for cpu online will fail in kernel/cpu.c, _cpu_up: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/kernel/cpu.c?h=v4.4.79#n485
    518 /* Arch-specific enabling code. */
    519 ret = __cpu_up(cpu, idle);
    520
    521 if (ret != 0)
    522         goto out_notify;
    523 BUG_ON(!cpu_online(cpu));
The reason is a race between start_secondary and _cpu_up. cpu_callin_map is set before cpu_online_mask. In __cpu_up, cpu_callin_map is waited for, but the cpu online mask is not, resulting in a race in which the secondary processor has started and set cpu_callin_map, but has not yet set the online mask, so the above BUG is hit. Upstream differs in this area: the cpu_online check is in bringup_wait_for_ap, which runs after the cpu has reached AP_ONLINE_IDLE, where the secondary has passed its start function. Nonetheless, the fix makes start_secondary safe and not dependent on other locks throughout the code. It also protects against cpu_online checks put in between sometime in the future. Fix this by moving the completion after all flags are set. Signed-off-by: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com> Cc: Alexander Sverdlin <alexander.sverdlin@nokia.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16925/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 29 June 2017, 1 commit
-
-
By Paul Burton
If we're running on a system with only 1 possible CPU then it makes no sense to reserve or initialise IPIs since we'll never use them. Avoid doing so. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16192/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 13 April 2017, 1 commit
-
-
By Paul Burton
Commit fbde2d7d ("MIPS: Add generic SMP IPI support") introduced a sanity check that an IPI IRQ domain can be found during boot, in order to ensure that IPIs are able to be set up in systems using such domains. However it was added at a point where systems may have used an IPI IRQ domain in some situations but not others, and we could not know which were the case until runtime, so commit 578bffc8 ("MIPS: Don't BUG_ON when no IPI domain is found") made that check simply skip IPI init if no domain was found, in order to fix the boot for systems such as QEMU Malta. We now use IPI IRQ domains for the MIPS CPU interrupt controller, which means systems which make use of IPI IRQ domains will always do so when running on multiple CPUs. As a result we now strengthen the sanity check to ensure that an IPI IRQ domain is found when multiple CPUs are present in the system. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/15838/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
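The strengthened check amounts to something like the following sketch (the wrapper function is invented for illustration, and the exact predicate - present vs. possible CPUs - is an assumption):

    static void __init check_ipi_domain(struct irq_domain *ipidomain)
    {
            /* A missing IPI IRQ domain is only tolerable when there is
             * no other CPU to signal. */
            if (!ipidomain)
                    BUG_ON(num_present_cpus() > 1);
    }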
-
- 02 March 2017, 1 commit
-
-
By Ingo Molnar
Update code that relied on sched.h including various MM types for them. This will allow us to remove the <linux/mm_types.h> include from <linux/sched.h>. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 03 January 2017, 3 commits
-
-
By Marcin Nowakowski
SMP_DUMP was added as a new IPI signal when kexec support was added for Cavium Octeon CPUs (commit 7aa1c8f4, "MIPS: kdump: Add support"). However, the new signal doesn't appear to ever have had a proper handler added (the octeon_message_functions[] array has an empty handler for it), and the generic IPI handlers now trigger a BUG() on an unhandled signal. As the method is unused, remove it completely and replace its only invocation with a smp_call_function(). [ralf@linux-mips.org: Renumber SMP_ASK_C0COUNT to avoid numbering gaps.] Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14630/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Matt Redfearn
The previous commit made cpu_callin_map redundant, since it is no longer used to signal secondary CPUs starting, or going offline. Remove it now. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Qais Yousef <qsyousef@gmail.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Huacai Chen <chenhc@lemote.com> Cc: Kevin Cernekee <cernekee@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Anna-Maria Gleixner <anna-maria@linutronix.de> Cc: Adam Buchbinder <adam.buchbinder@gmail.com> Cc: Yang Shi <yang.shi@windriver.com> Cc: David Daney <david.daney@cavium.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14503/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Matt Redfearn
If a secondary CPU failed to start, for any reason, the CPU requesting the secondary to start would get stuck in the loop waiting for the secondary to be present in the cpu_callin_map. Rather than that, use a completion event to signal that the secondary CPU has started and is waiting to synchronise counters. Since the CPU presence will no longer be marked in cpu_callin_map, remove the redundant test from arch_cpu_idle_dead(). Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Maciej W. Rozycki <macro@imgtec.com> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Qais Yousef <qsyousef@gmail.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14502/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
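The two halves of the handshake can be sketched as below; the helper names are invented for illustration and the timeout and error values are assumptions:

    static DECLARE_COMPLETION(cpu_running);

    /* Boot CPU side (called from __cpu_up()): a bounded wait instead of
     * looping on cpu_callin_map forever. */
    static int wait_for_secondary(unsigned int cpu)
    {
            if (!wait_for_completion_timeout(&cpu_running,
                                             msecs_to_jiffies(1000)))
                    return -EIO;    /* secondary never announced itself */

            synchronise_count_master(cpu);
            return 0;
    }

    /* Secondary CPU side (called from start_secondary()). */
    static void announce_secondary(unsigned int cpu)
    {
            complete(&cpu_running);
            synchronise_count_slave(cpu);
    }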
-
- 05 October 2016, 2 commits
-
-
By Paul Gortmaker
Historically a lot of these existed because we did not have a distinction between what was modular code and what was providing support to modules via EXPORT_SYMBOL and friends. That changed when we forked out support for the latter into the export.h file. This means we should be able to reduce the usage of module.h in code that is obj-y Makefile or bool Kconfig. The advantage in doing so is that module.h itself sources about 15 other headers, adding significantly to what we feed cpp, and it can obscure what headers we are effectively using. Since module.h was the source for init.h (for __init) and for export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance for the presence of either and replace as needed. In the case of the n32/o32 files, we have to get rid of a couple of no-op MODULE_ tags to facilitate the module.h removal. They piggy-back off the fs/ elf binary support, which is also a bool Kconfig. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14032/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Matt Redfearn
For the MIPS remote processor implementation, we need additional IPIs to talk to the remote processor. Since the MIPS GIC reserves exactly the right number of IPI IRQs required by Linux for the number of VPs in the system, this is not possible without releasing some resources. This commit introduces mips_smp_ipi_allocate() which allocates IPIs to a given cpumask. It is called as normal with the cpu_possible_mask at bootup to initialise IPIs to all CPUs. mips_smp_ipi_free() may then be used to free IPIs to a subset of those CPUs so that their hardware resources can be reused. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Cc: Ohad Ben-Cohen <ohad@wizery.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Lisa Parratt <Lisa.Parratt@imgtec.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Qais Yousef <qsyousef@gmail.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-remoteproc@vger.kernel.org Cc: lisa.parratt@imgtec.com Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14285/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
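A hedged usage sketch of the new pair; the prototypes shown are assumptions based on the description, and the calling functions are hypothetical:

    int mips_smp_ipi_allocate(const struct cpumask *mask); /* assumed */
    int mips_smp_ipi_free(const struct cpumask *mask);     /* assumed */

    /* Hand the IPI hardware resources of @remote_cpus to a remote
     * processor, and reclaim them again when Linux takes the CPUs back. */
    static int example_release_ipis(const struct cpumask *remote_cpus)
    {
            return mips_smp_ipi_free(remote_cpus);
    }

    static int example_reclaim_ipis(const struct cpumask *remote_cpus)
    {
            return mips_smp_ipi_allocate(remote_cpus);
    }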
-
- 04 October 2016, 1 commit
-
-
By Matt Redfearn
All calls to mips_cpc_lock_other should be wrapped in mips_cm_lock_other. This only matters if the system has CM3 and is using cpu idle, since otherwise a) the CPC lock is sufficient for CM < 3 and b) any systems with CM > 3 have not been able to use cpu idle until now. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Reviewed-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: Thomas Gleixner <tglx@linutronix.de> Cc: James Hogan <james.hogan@imgtec.com> Cc: Qais Yousef <qsyousef@gmail.com> Patchwork: https://patchwork.linux-mips.org/patch/14227/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
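The required nesting, as a hedged sketch (the two-argument core/VP form of mips_cm_lock_other() of this era is assumed; the function itself is illustrative only):

    static void poke_other_core_cpc(unsigned int core)
    {
            /* On CM3 the CM "other" region must be configured before the
             * CPC "other" registers of @core can be touched. */
            mips_cm_lock_other(core, 0);
            mips_cpc_lock_other(core);

            /* ... access the CPC other register block for @core here ... */

            mips_cpc_unlock_other();
            mips_cm_unlock_other();
    }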
-
- 25 September 2016, 1 commit
-
-
By Matt Redfearn
This patch fixes the possibility of a deadlock when bringing up secondary CPUs. The deadlock occurs because set_cpu_online() is called before synchronise_count_slave(). This can cause a deadlock if the boot CPU, having scheduled another thread, attempts to send an IPI to the secondary CPU, which it sees has been marked online. The secondary is blocked in synchronise_count_slave() waiting for the boot CPU to enter synchronise_count_master(), but the boot cpu is blocked in smp_call_function_many() waiting for the secondary to respond to its IPI request. Fix this by marking the CPU online in cpu_callin_map and synchronising counters before declaring the CPU online and calculating the maps for IPIs. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Reported-by: Justin Chen <justinpopo6@gmail.com> Tested-by: Justin Chen <justinpopo6@gmail.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: stable@vger.kernel.org # v4.1+ Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14302/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 29 July 2016, 4 commits
-
-
By James Hogan
When performing SMP calls to foreign cores, exclude sibling CPUs from the provided map, as we already handle the local core on the current CPU. This prevents an SMP call from, for example, core 0 VPE 1 to VPE 0 on the same core. In the process the cpu_foreign_map cpumask is turned into an array of cpumasks, so that each CPU has its own version of it which excludes sibling CPUs. r4k_op_needs_ipi() is also updated to reflect that cache management SMP calls are not needed when all CPUs are siblings (i.e. there are no foreign CPUs according to the new cpu_foreign_map[] semantics which exclude siblings). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: Felix Fietkau <nbd@nbd.name> Cc: Jayachandran C. <jchandra@broadcom.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13801/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
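Roughly, the per-CPU map is rebuilt as sketched below: pick one representative VPE per online core, then subtract each CPU's own siblings. This is a hedged reconstruction, written here with the cpus_are_siblings() helper described earlier for brevity.

    static cpumask_t cpu_foreign_map[NR_CPUS];

    static void calculate_cpu_foreign_map(void)
    {
            int i, k, core_present;
            cpumask_t temp_foreign_map;

            /* One representative VPE from each online core. */
            cpumask_clear(&temp_foreign_map);
            for_each_online_cpu(i) {
                    core_present = 0;
                    for_each_cpu(k, &temp_foreign_map)
                            if (cpus_are_siblings(i, k))
                                    core_present = 1;
                    if (!core_present)
                            cpumask_set_cpu(i, &temp_foreign_map);
            }

            /* Each CPU's foreign map excludes its own siblings. */
            for_each_online_cpu(i)
                    cpumask_andnot(&cpu_foreign_map[i],
                                   &temp_foreign_map, &cpu_sibling_map[i]);
    }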
-
By James Hogan
Commit cccf34e9 ("MIPS: c-r4k: Fix cache flushing for MT cores") added the cpu_foreign_map cpumask containing a single VPE from each online core, and recalculated it when secondary CPUs are brought up. stop_this_cpu() was also updated to recalculate cpu_foreign_map, but with an additional hack before marking the CPU as offline to copy cpu_online_mask into cpu_foreign_map and perform an SMP memory barrier. This appears to have been intended to prevent cache management IPIs being missed when the VPE representing the core in cpu_foreign_map is taken offline while other VPEs remain online. Unfortunately there is nothing in this hack to prevent r4k_on_each_cpu() from reading the old cpu_foreign_map, and smp_call_function_many() from reading the new cpu_online_mask with the core's representative VPE marked offline. It then wouldn't send an IPI to any online VPEs of that core. stop_this_cpu() is only actually called in panic and system shutdown / halt / reboot situations, in which case all CPUs are going down and we don't really need to care about cache management, so drop this hack. Note that the __cpu_disable() case for CPU hotplug is handled in the previous commit, and no synchronisation is needed there due to the use of stop_machine() which prevents hotplug from taking place while any CPU has disabled preemption (as r4k_on_each_cpu() does). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13796/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By James Hogan
When a CPU is disabled via CPU hotplug, cpu_foreign_map is not updated. This could result in cache management SMP calls being sent to offline CPUs instead of online siblings in the same core. Add a call to calculate_cpu_foreign_map() in the various MIPS cpu disable callbacks after set_cpu_online(). All cases are updated for consistency and to keep cpu_foreign_map strictly up to date, not just those which may support hardware multithreading. Fixes: cccf34e9 ("MIPS: c-r4k: Fix cache flushing for MT cores") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: David Daney <david.daney@cavium.com> Cc: Kevin Cernekee <cernekee@gmail.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Huacai Chen <chenhc@lemote.com> Cc: Hongliang Tao <taohl@lemote.com> Cc: Hua Yan <yanh@lemote.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13799/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By James Hogan
The SMP flush_tlb_*() functions may clear the memory map's ASIDs for other CPUs if the mm has only a single user (the current CPU), in order to avoid SMP calls. However this makes it appear to has_valid_asid(), which is used by various cache flush functions, as if the CPUs have never run in the mm, and therefore can't have cached any of its memory. For flush_tlb_mm() this doesn't sound unreasonable. flush_tlb_range() corresponds to flush_cache_range() which does do full indexed cache flushes, but only on the icache if the specified mapping is executable, otherwise it doesn't guarantee that there are no cache contents left for the mm. flush_tlb_page() corresponds to flush_cache_page(), which will perform address based cache ops on the specified page only, and also only touches the icache if the page is executable. It does not guarantee that there are no cache contents left for the mm. For example, this affects flush_cache_range() which uses the has_valid_asid() optimisation. It is required to flush the icache when mappings are made executable (e.g. using mprotect) so they are immediately usable. If some code is changed to non-executable in order to be modified, then it will not be flushed from the icache during that time, but the ASID on other CPUs may still be cleared for TLB flushing. When the code is changed back to executable, flush_cache_range() will assume the code hasn't run on those other CPUs due to the zero ASID, and won't invalidate the icache on them. This is fixed by clearing the other CPUs' ASIDs to 1 instead of 0 for the above two flush_tlb_*() functions when the corresponding cache flushes are likely to be incomplete (non-executable range flush, or any page flush). This ASID appears valid to has_valid_asid(), but still triggers ASID regeneration due to the upper ASID version bits being 0, which is less than the minimum ASID version of 1 and so always treated as stale. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13795/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
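In code terms the change is roughly the following (a hedged sketch; the helper name is invented, while cpu_context() is the existing per-CPU ASID accessor):

    /* Mark an mm's ASID on another CPU as "stale but previously used":
     * version bits of 0 (below the minimum version of 1) still force
     * regeneration, while the non-zero value keeps has_valid_asid()
     * treating the mm as having run there for cache-flush purposes. */
    static inline void set_stale_context(struct mm_struct *mm, int cpu)
    {
            cpu_context(cpu, mm) = 1;       /* previously: = 0 */
    }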
-
- 09 May 2016, 1 commit
-
-
By Paul Burton
Commit fbde2d7d ("MIPS: Add generic SMP IPI support") introduced code that BUG_ON's in the case of a kernel that supports IPI domains but does not have one at runtime. This case is possible on Malta where for IPIs we may use either the GIC (which has an IPI IRQ domain implementation) or core-local software interrupts between VPEs (which do not currently have an IPI IRQ domain implementation). We can not know which will be used until runtime when we know whether a GIC is actually present, and if we run on a system with multiple VPEs and no GIC then the BUG_ON is hit. Commit 19fb5818 ("IPS: Fix broken malta qemu") worked around this for the single-core single-VPE case typically seen using QEMU, but does not catch the multi-VPE case. This patch removes the insufficient CPU presence check that was added and works around the bug differently, effectively reverting that commit. A simple way to reproduce this bug is by using QEMU, which partially implements the MT ASE but does not implement the GIC as of version 2.5. Using "-cpu 34Kf -smp 2" will present a system with 2 VPEs in one core & no GIC, hitting the BUG_ON. Given that we're post-merge-window on the way to v4.6, avoid this by just returning from mips_smp_ipi_init when no IPI IRQ domain is found. Ideally at some point all IPI implementations would be converted to the same IPI IRQ domain interface & we'd be able to restore the check. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Qais Yousef <qsyousef@gmail.com> Fixes: fbde2d7d ("MIPS: Add generic SMP IPI support") Fixes: 19fb5818 ("IPS: Fix broken malta qemu") Reverts: 19fb5818 ("IPS: Fix broken malta qemu") Cc: Qais Yousef <qsyousef@gmail.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Alex Smith <alex.smith@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13007/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 29 March 2016, 1 commit
-
-
By Qais Yousef
The Malta defconfig compiles with GIC on. Hence when compiling for SMP it causes the new IPI code to be activated. But on QEMU Malta there's no GIC, causing a BUG_ON(!ipidomain) to be hit in mips_smp_ipi_init(). Since in that configuration one can only run single-core SMP (!), skip IPI initialisation if we detect that this is the case. It is a sensible behaviour to introduce and should keep such a possible configuration running rather than dying hard unnecessarily. Signed-off-by: Qais Yousef <qsyousef@gmail.com> Reported-by: Guenter Roeck <linux@roeck-us.net> Tested-by: Guenter Roeck <linux@roeck-us.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/12892/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 13 March 2016, 1 commit
-
-
By James Hogan
When calculate_cpu_foreign_map() recalculates the cpu_foreign_map cpumask it uses the local variable temp_foreign_map without initialising it to zero. Since the calculation only ever sets bits in this cpumask, any existing bits at that memory location will remain set and find their way into cpu_foreign_map too. This could potentially lead to cache operations suboptimally doing smp calls to multiple VPEs in the same core, even though the VPEs share primary caches. Therefore initialise temp_foreign_map using cpumask_clear() before use. Fixes: cccf34e9 ("MIPS: c-r4k: Fix cache flushing for MT cores") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/12759/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 02 March 2016, 1 commit
-
-
By Thomas Gleixner
Let the non-boot cpus call into idle with the corresponding hotplug state, so the hotplug core can handle the further bringup. That's a first step to convert the boot side of the hotplugged cpus to do all the synchronization with the other side through the state machine. For now it'll only start the hotplug thread and kick the full bringup of the cpu. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-arch@vger.kernel.org Cc: Rik van Riel <riel@redhat.com> Cc: Rafael Wysocki <rafael.j.wysocki@intel.com> Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Sebastian Siewior <bigeasy@linutronix.de> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Tejun Heo <tj@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Turner <pjt@google.com> Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 25 February 2016, 1 commit
-
-
By Qais Yousef
Use the new generic IPI layer to provide generic SMP IPI support if the irqchip supports it. Signed-off-by: Qais Yousef <qais.yousef@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: <jason@lakedaemon.net> Cc: <marc.zyngier@arm.com> Cc: <jiang.liu@linux.intel.com> Cc: <linux-mips@linux-mips.org> Cc: <lisa.parratt@imgtec.com> Cc: Qais Yousef <qsyousef@gmail.com> Link: http://lkml.kernel.org/r/1449580830-23652-17-git-send-email-qais.yousef@imgtec.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 27 September 2015, 1 commit
-
-
By Paul Burton
MAARs should be initialised on each CPU (or rather, core) in the system in order to achieve consistent behaviour & performance. Previously they have only been initialised on the boot CPU, which leads to performance problems if tasks are later scheduled on a secondary CPU, particularly if those tasks make use of unaligned vector accesses where some CPUs don't handle any cases in hardware for non-speculative memory regions. Fix this by recording the MAAR configuration from the boot CPU and applying it to secondary CPUs as part of their bringup. Reported-by: Doug Gilmore <doug.gilmore@imgtec.com> Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Steven J. Hill <Steven.Hill@imgtec.com> Cc: Andrew Bresticker <abrestic@chromium.org> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: David Hildenbrand <dahi@linux.vnet.ibm.com> Cc: linux-kernel@vger.kernel.org Cc: Aaro Koskinen <aaro.koskinen@iki.fi> Cc: James Hogan <james.hogan@imgtec.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: Hemmo Nieminen <hemmo.nieminen@iki.fi> Cc: Alex Smith <alex.smith@imgtec.com> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Patchwork: https://patchwork.linux-mips.org/patch/11239/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 03 August 2015, 1 commit
-
-
By Alex Smith
The majority of SMP platforms handle their IPIs through do_IRQ() which calls irq_{enter/exit}(). When a call function IPI is received, smp_call_function_interrupt() is called, which also calls irq_{enter,exit}(), meaning irq_count is raised twice. When tick broadcasting is used (which is implemented via a call function IPI), this incorrectly causes all CPU idle time on the core receiving broadcast ticks to be accounted as time spent servicing IRQs, as account_process_tick() will account as such if irq_count is greater than 1. This results in 100% CPU usage being reported on a core which receives its ticks via broadcast. This patch removes the SMP smp_call_function_interrupt() wrapper which calls irq_{enter,exit}(). Platforms which handle their IPIs through do_IRQ() now call generic_smp_call_function_interrupt() directly to avoid incrementing irq_count a second time. Platforms which don't (loongson, sgi-ip27, sibyte) call generic_smp_call_function_interrupt() wrapped in irq_{enter,exit}(). Signed-off-by: Alex Smith <alex.smith@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/10770/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
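For a platform whose IPIs already arrive via do_IRQ(), the call-function handler then reduces to a direct call of the generic function; a minimal sketch:

    /* Runs inside do_IRQ(), which has already done irq_enter()/irq_exit(),
     * so no second accounting wrapper is needed here. */
    static irqreturn_t ipi_call_interrupt(int irq, void *dev_id)
    {
            generic_smp_call_function_interrupt();
            return IRQ_HANDLED;
    }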
-
- 10 July 2015, 1 commit
-
-
By Markos Chandras
MT_SMP is not the only SMP option for MT cores. The MT_SMP option allows more than one VPE per core to appear as a secondary CPU in the system. Because of how the CM works, it propagates the address-based cache ops to the secondary cores but not the index-based ones. Because of that, the code does not use IPIs to flush the L1 caches on secondary cores, because the CM would have done that already. However, the CM functionality is independent of the type of SMP kernel, so even in non-MT kernels IPIs are not necessary. As a result of which, we change the conditional to depend on the CM presence. Moreover, since VPEs on the same core share the same L1 caches, there is no need to send an IPI to all of them, so we calculate a suitable cpumask with only one VPE per core. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: <stable@vger.kernel.org> # 3.15+ Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/10654/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 12 May 2015, 1 commit
-
-
By Ralf Baechle
  CC      arch/mips/kernel/smp.o
arch/mips/kernel/smp.c: In function ‘start_secondary’:
arch/mips/kernel/smp.c:149:2: error: passing argument 2 of ‘cpumask_set_cpu’ discards ‘volatile’ qualifier from pointer target type [-Werror]
  cpumask_set_cpu(cpu, &cpu_callin_map);
  ^
In file included from ./arch/mips/include/asm/processor.h:14:0, from ./arch/mips/include/asm/thread_info.h:15, from include/linux/thread_info.h:54, from include/asm-generic/preempt.h:4, from arch/mips/include/generated/asm/preempt.h:1, from include/linux/preempt.h:18, from include/linux/interrupt.h:8, from arch/mips/kernel/smp.c:24:
include/linux/cpumask.h:272:91: note: expected ‘struct cpumask *’ but argument is of type ‘volatile struct cpumask_t *’
  static inline void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
  ^
arch/mips/kernel/smp.c: In function ‘smp_prepare_boot_cpu’:
arch/mips/kernel/smp.c:211:2: error: passing argument 2 of ‘cpumask_set_cpu’ discards ‘volatile’ qualifier from pointer target type [-Werror]
  cpumask_set_cpu(0, &cpu_callin_map);
  ^
In file included from ./arch/mips/include/asm/processor.h:14:0, from ./arch/mips/include/asm/thread_info.h:15, from include/linux/thread_info.h:54, from include/asm-generic/preempt.h:4, from arch/mips/include/generated/asm/preempt.h:1, from include/linux/preempt.h:18, from include/linux/interrupt.h:8, from arch/mips/kernel/smp.c:24:
include/linux/cpumask.h:272:91: note: expected ‘struct cpumask *’ but argument is of type ‘volatile struct cpumask_t *’
  static inline void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
  ^
arch/mips/kernel/smp.c: In function ‘__cpu_up’:
arch/mips/kernel/smp.c:221:10: error: passing argument 2 of ‘cpumask_test_cpu’ discards ‘volatile’ qualifier from pointer target type [-Werror]
  while (!cpumask_test_cpu(cpu, &cpu_callin_map))
  ^
In file included from ./arch/mips/include/asm/processor.h:14:0, from ./arch/mips/include/asm/thread_info.h:15, from include/linux/thread_info.h:54, from include/asm-generic/preempt.h:4, from arch/mips/include/generated/asm/preempt.h:1, from include/linux/preempt.h:18, from include/linux/interrupt.h:8, from arch/mips/kernel/smp.c:24:
include/linux/cpumask.h:294:90: note: expected ‘const struct cpumask *’ but argument is of type ‘volatile struct cpumask_t *’
  static inline int cpumask_test_cpu(int cpu, const struct cpumask *cpumask)
  ^
cc1: all warnings being treated as errors
make[2]: *** [arch/mips/kernel/smp.o] Error 1
make[1]: *** [arch/mips/kernel] Error 2
make: *** [arch/mips] Error 2
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 01 April 2015, 1 commit
-
-
By Andrew Bresticker
Since cpu_wait() enables interrupts upon return, CPUs which have entered stop_this_cpu() may still end up handling interrupts. This can lead to the softlockup detector firing on a panic or restart/poweroff/halt. Just disable interrupts and spin to ensure nothing else runs on the CPU once it has entered stop_this_cpu(). Signed-off-by: Andrew Bresticker <abrestic@chromium.org> Cc: James Hogan <james.hogan@imgtec.com> Cc: Maciej W. Rozycki <macro@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/9601/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
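The resulting shape of the function is roughly as follows (a hedged sketch omitting details that changed in other patches):

    static void stop_this_cpu(void *dummy)
    {
            /* Remove this CPU from the online set ... */
            set_cpu_online(smp_processor_id(), false);

            /* ... then never let anything else run here: no cpu_wait(),
             * which could re-enable interrupts on return. */
            local_irq_disable();
            while (1)
                    ;
    }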
-
- 05 March 2015, 1 commit
-
-
By Rusty Russell
Thanks to spatch, plus manual removal of "&*". Then a sweep for for_each_cpu_mask => for_each_cpu. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Kevin Cernekee <cernekee@gmail.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: linux-mips@linux-mips.org
-
- 30 January 2015, 1 commit
-
-
By Hemmo Nieminen
As a printk() invocation can cause e.g. a TLB miss, printk() cannot be called before the exception handlers have been properly initialized. This can happen e.g. when netconsole has been loaded as a kernel module and the TLB table has been cleared when a CPU was offline. Call cpu_report() in start_secondary() only after the exception handlers have been initialized to fix this. Without the patch the kernel will randomly either lock up or crash after a CPU is onlined and the console driver is a module. Signed-off-by: Hemmo Nieminen <hemmo.nieminen@iki.fi> Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Cc: stable@vger.kernel.org Cc: David Daney <david.daney@cavium.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/8953/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 31 July 2014, 1 commit
-
-
By Huacai Chen
This patch prepares for Loongson's NUMA support; it offers meaningful sysfs files such as physical_package_id, core_id, core_siblings and thread_siblings in /sys/devices/system/cpu/cpu?/topology. Signed-off-by: Huacai Chen <chenhc@lemote.com> Reviewed-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com> Cc: John Crispin <john@phrozen.org> Cc: Steven J. Hill <Steven.Hill@imgtec.com> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: linux-mips@linux-mips.org Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Patchwork: https://patchwork.linux-mips.org/patch/7184/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
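Those sysfs files are backed by the architecture's topology macros; a hedged sketch of what an asm/topology.h definition of this era provides (the exact field and map names are assumptions):

    #define topology_physical_package_id(cpu)  (cpu_data[cpu].package)
    #define topology_core_id(cpu)              (cpu_data[cpu].core)
    #define topology_core_cpumask(cpu)         (&cpu_core_map[cpu])
    #define topology_thread_cpumask(cpu)       (&cpu_sibling_map[cpu])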
-
- 27 May 2014, 1 commit
-
-
By Ralf Baechle
Nothing was using the method and there isn't any need for this hook. This leaves smp_cpus_done() empty for the moment. As suggested by Paul Bolle <pebolle@tiscali.nl>. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 24 May 2014, 1 commit
-
-
By Ralf Baechle
Nobody is maintaining SMTC anymore and there also seems to be no user base. Which is a pity - the SMTC technology, primarily developed by Kevin D. Kissell <kevink@paralogos.com>, is an ingenious demonstration of the MT ASE's power and elegance. Based on Markos Chandras <Markos.Chandras@imgtec.com>'s patch https://patchwork.linux-mips.org/patch/6719/ which, while very similar, no longer applied cleanly when I tried to merge it, plus some additional post-SMTC cleanup - SMTC was a feature as tricky to remove as it was to merge once upon a time. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 02 May 2014, 2 commits
-
-
By Paul Burton
Add a mask of CPUs which are currently known to be operating coherently. This is set up initially to be all present CPUs, but in a subsequent patch CPUs in a MIPS Coherent Processing System will be cleared from this mask as they enter non-coherent idle states. This will be used in order to determine when a CPU within a CPS system may need to be powered back up, but may also be used in future to optimise away wakeups for cache operations or TLB invalidations. Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
This patch adds support for generic clockevents broadcast using a dummy clockevent device and the tick_broadcast function introduced by commit 12ad1000 "clockevents: Add generic timer broadcast function". Signed-off-by: Paul Burton <paul.burton@imgtec.com>
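A rough sketch of how such a tick_broadcast() implementation can look on the architecture side (heavily hedged: the per-CPU data, the callee and the use of smp_call_function_single_async() are illustrative choices, not necessarily what this commit does):

    static DEFINE_PER_CPU(call_single_data_t, tick_broadcast_csd);

    static void tick_broadcast_callee(void *info)
    {
            tick_receive_broadcast();   /* feeds the dummy clockevent device */
    }

    void tick_broadcast(const struct cpumask *mask)
    {
            call_single_data_t *csd;
            int cpu;

            for_each_cpu(cpu, mask) {
                    csd = &per_cpu(tick_broadcast_csd, cpu);
                    csd->func = tick_broadcast_callee;
                    smp_call_function_single_async(cpu, csd);
            }
    }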
-
- 30 October 2013, 1 commit
-
-
By Jiang Liu
Since commit 9a46ad6d "smp: make smp_call_function_many() use logic similar to smp_call_function_single()", generic_smp_call_function_single_interrupt() is an alias of generic_smp_call_function_interrupt(), so kill the redundant call. Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Cc: Jiang Liu <liuj97@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Shaohua Li <shli@kernel.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Jiri Kosina <trivial@kernel.org> Cc: Wang YanQing <udknight@gmail.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: linux-arch@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/5820/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 15 July 2013, 1 commit
-
-
By Paul Gortmaker
commit 3747069b25e419f6b51395f48127e9812abc3596 upstream. The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from the arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit related content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless. Here, we remove all the MIPS __cpuinit from C code and __CPUINIT from asm files. MIPS is interesting in this respect, because there are also uasm users hiding behind their own renamed versions of the __cpuinit macros. [1] https://lkml.org/lkml/2013/5/20/589 [ralf@linux-mips.org: Folded in Paul's followup fix.] Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/5494/ Patchwork: https://patchwork.linux-mips.org/patch/5495/ Patchwork: https://patchwork.linux-mips.org/patch/5509/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-