- 31 July 2014, 2 commits
-
-
By Paul Burton
When determining the VPE ID of a CPU, make use of the cpu_vpe_id macro which will return 0 in a non-MT kernel build. Most code is already doing so but a couple of places weren't. Fixing this prevents a build failure for non-MT kernels where struct cpuinfo_mips does not contain the vpe_id field:

arch/mips/kernel/pm-cps.c: In function 'cps_pm_enter_state':
arch/mips/kernel/pm-cps.c:153:51: error: 'struct cpuinfo_mips' has no member named 'vpe_id'
  vpe_cfg = &core_cfg->vpe_config[current_cpu_data.vpe_id];
arch/mips/kernel/smp-cps.c: In function 'wait_for_sibling_halt':
arch/mips/kernel/smp-cps.c:363:33: error: 'struct cpuinfo_mips' has no member named 'vpe_id'
  unsigned vpe_id = cpu_data[cpu].vpe_id;

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
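A minimal sketch of the pattern this fix applies, with a hypothetical helper name; cpu_vpe_id() is the macro named in the commit and cpu_data is the kernel's per-CPU info array:

    /*
     * Sketch only: cpu_vpe_id() expands to the vpe_id field on MT kernels
     * and to a constant 0 otherwise (exact config guards assumed), so this
     * builds regardless of the MT configuration.
     */
    #include <asm/cpu-info.h>

    static unsigned int sibling_vpe_id(unsigned int cpu)
    {
        /* was: return cpu_data[cpu].vpe_id;   (no such field without MT) */
        return cpu_vpe_id(&cpu_data[cpu]);
    }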
-
By Paul Burton
These symbols will not be defined when CONFIG_MIPS_CPS=n, but although the CPS_PM_POWER_GATED state will never be used in that case the compiler doesn't have enough information to figure that out. Add checks which evaluate to a constant false for CONFIG_MIPS_CPS=n cases in order to help the compiler out & eliminate the symbol references.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7278/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
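One way such a constant-false guard can look, shown as a sketch with an invented caller; whether the original patch used IS_ENABLED() or an equivalent compile-time test is an assumption here:

    /*
     * Sketch only (not the actual diff): a check that is constant-false
     * for CONFIG_MIPS_CPS=n lets the compiler discard the branch, so the
     * CPS-only symbols referenced inside it never need to be defined.
     */
    #include <linux/kconfig.h>
    #include <asm/pm-cps.h>

    static void maybe_power_gate(int state)
    {
        if (IS_ENABLED(CONFIG_MIPS_CPS) && state == CPS_PM_POWER_GATED)
            cps_pm_enter_state(CPS_PM_POWER_GATED);  /* CPS-only symbol */
    }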
-
- 26 June 2014, 2 commits
-
-
By Markos Chandras
Previously, the lower limit for the MIPS SC initialization loop was set incorrectly, allowing one extra loop iteration and leading to writes beyond the MSC ioremap'd space. More precisely, the value of 'imp' in the last iteration increased beyond the msc_irqmap_t boundaries, and as a result the 'n' variable was loaded with an incorrect value. This value was later used to calculate the offset into MSC01_IC_SUP, which led to random crashes like the following one:

CPU 0 Unable to handle kernel paging request at virtual address e75c0200, epc == 8058dba4, ra == 8058db90
[...]
Call Trace:
[<8058dba4>] init_msc_irqs+0x104/0x154
[<8058b5bc>] arch_init_irq+0xd8/0x154
[<805897b0>] start_kernel+0x220/0x36c
Kernel panic - not syncing: Attempted to kill the idle task!

This patch fixes the problem.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: stable@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7118/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
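An illustrative reconstruction of the loop-bound bug with made-up names (not the kernel code):

    /*
     * With ">= 0" as the termination test the pointer 'imp' walks one
     * element past the end of the map and 'n' is loaded from garbage.
     */
    struct irqmap { unsigned int im_irq; unsigned int im_type; };

    static unsigned int last_irq(struct irqmap *imp, int nirq)
    {
        unsigned int n = 0;

        /* buggy: for (; nirq >= 0; nirq--, imp++)  -- nirq + 1 iterations */
        for (; nirq > 0; nirq--, imp++)   /* fixed: exactly nirq iterations */
            n = imp->im_irq;              /* later used as a register offset */

        return n;
    }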
-
By Paul Burton
This reverts commit eec43a22 "MIPS: Save/restore MSA context around signals" and the MSA parts of ca750649 "MIPS: kernel: signal: Prevent save/restore FPU context in user memory" (the restore path of which appears incorrect anyway...). The reverted patch took care not to break compatibility with userland users of struct sigcontext, but inadvertently changed the offset of the uc_sigmask field of struct ucontext. Thus Linux v3.15 breaks the userland ABI. The MSA context will need to be saved via some other opt-in mechanism, but for now revert the change to reduce the fallout.

This will have minimal impact upon use of MSA since the only supported CPU which includes it (the P5600) is 32-bit and therefore requires that the experimental CONFIG_MIPS_O32_FP64_SUPPORT Kconfig option be selected before the kernel will set FR=1 for a task, a requirement for MSA use. Thus the users of MSA are limited to known small groups of people & this patch won't be breaking any previously working MSA-using userland outside of experimental settings.

[ralf@linux-mips.org: Fixed rejects.]

Cc: stable@vger.kernel.org
Reported-by: Joseph S. Myers <joseph@codesourcery.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7107/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
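A small, self-contained illustration of why the revert matters; the struct layouts below are simplified stand-ins, not the real MIPS sigcontext/ucontext definitions:

    /*
     * Growing a member that precedes uc_sigmask shifts uc_sigmask's offset
     * and silently breaks userland built against the old layout.
     */
    #include <stdio.h>
    #include <stddef.h>

    struct old_ucontext {
        unsigned long uc_flags;
        unsigned long uc_mcontext[32];      /* pretend sigcontext */
        unsigned long uc_sigmask[4];
    };

    struct new_ucontext {
        unsigned long uc_flags;
        unsigned long uc_mcontext[32 + 8];  /* sigcontext grew for MSA... */
        unsigned long uc_sigmask[4];        /* ...so this field moved */
    };

    int main(void)
    {
        printf("old uc_sigmask offset: %zu\n", offsetof(struct old_ucontext, uc_sigmask));
        printf("new uc_sigmask offset: %zu\n", offsetof(struct new_ucontext, uc_sigmask));
        return 0;
    }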
-
- 16 June 2014, 2 commits
-
-
By Paul Burton
Commit 91bbefe6 "arch,mips: Convert smp_mb__*()" replaced the smp_mb__* functions with a simpler API, whilst commit 3179d37e "MIPS: pm-cps: add PM state entry code for CPS systems" introduced new uses of smp_mb__before_atomic_inc & smp_mb__after_clear_bit. Replace those calls with the corresponding before & after atomic functions of the new, simpler API in order to avoid a build failure:

arch/mips/kernel/pm-cps.c: In function 'coupled_barrier':
arch/mips/kernel/pm-cps.c:104:2: error: 'smp_mb__before_atomic_inc' is deprecated (declared at include/linux/atomic.h:11) [-Werror=deprecated-declarations]
arch/mips/kernel/pm-cps.c: In function 'cps_pm_enter_state':
arch/mips/kernel/pm-cps.c:161:2: error: 'smp_mb__after_clear_bit' is deprecated (declared at include/linux/bitops.h:48) [-Werror=deprecated-declarations]

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7086/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
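The shape of the API substitution, sketched around a simplified stand-in function rather than the actual pm-cps code:

    #include <linux/atomic.h>
    #include <linux/bitops.h>

    static void publish_then_count(atomic_t *counter, unsigned long *flags)
    {
        /* old: smp_mb__before_atomic_inc(); */
        smp_mb__before_atomic();
        atomic_inc(counter);

        clear_bit(0, flags);
        /* old: smp_mb__after_clear_bit(); */
        smp_mb__after_atomic();
    }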
-
By Paul Burton
Commit 91bbefe6 "arch,mips: Convert smp_mb__*()" replaced the smp_mb__after_atomic_dec function with smp_mb__after_atomic, whilst commit 1d8f1f5a "MIPS: smp-cps: hotplug support" introduced a new use of it. Replace that use with smp_mb__after_atomic in order to avoid a build failure:

arch/mips/kernel/smp-cps.c: In function 'cps_cpu_disable':
arch/mips/kernel/smp-cps.c:304:2: error: 'smp_mb__after_atomic_dec' is deprecated (declared at include/linux/atomic.h:35) [-Werror=deprecated-declarations]

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7085/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 04 June 2014, 1 commit
-
-
By Davidlohr Bueso
Performing vma lookups without taking the mm->mmap_sem is asking for trouble. While doing the search, the vma in question can be modified or even removed before returning to the caller. Take the lock (exclusively) in order to avoid races while iterating through the vmacache and/or rbtree.

Updates two functions:
- process_fpemu_return()
- octeon_flush_cache_sigtramp()

Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Tested-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: akpm@linux-foundation.org
Cc: zeus@gnu.org
Cc: aswin@hp.com
Cc: davidlohr@hp.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Patchwork: http://patchwork.linux-mips.org/patch/6811/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
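The general shape of the fix as a sketch, not the exact kernel diff; the helper and locals are illustrative, and in 3.15-era kernels the lock is mm->mmap_sem:

    #include <linux/mm.h>

    static unsigned long vma_start_of(struct mm_struct *mm, unsigned long addr)
    {
        struct vm_area_struct *vma;
        unsigned long start = 0;

        down_write(&mm->mmap_sem);          /* the commit takes the lock exclusively */
        vma = find_vma(mm, addr);
        if (vma)
            start = vma->vm_start;          /* vma cannot vanish while the lock is held */
        up_write(&mm->mmap_sem);

        return start;
    }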
-
- 31 May 2014, 3 commits
-
-
By David Daney
This returns the CPUNum from the low order Ebase bits.

Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7012/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
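A sketch of the described accessor; the helper name here is an assumption, while read_c0_ebase() is the standard CP0 accessor and CPUNum occupies the low bits of EBase:

    #include <asm/mipsregs.h>

    static inline unsigned int ebase_cpunum(void)
    {
        return read_c0_ebase() & 0x3ff;   /* EBase[9:0] = CPUNum */
    }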
-
By David Daney
Some versions of the assembler will not assemble CFC1 for OCTEON, so override the ISA for these. Add r4k_fpu.o to handle low level FPU initialization. Modify octeon_switch.S to save the FPU registers. And include r4k_switch.S to pick up more FPU support. Get rid of "#define cpu_has_fpu 0".

Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7006/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Maciej W. Rozycki
Update to commit 9c9b415c [MIPS: Reimplement get_cycles().]

On systems where, for whatever reason, we can't use the cycle counter, fall back to the c0_random register as an entropy source. It has however a very small range that makes it suitable for random_get_entropy only and not get_cycles. This optimised version compiles to 8 instructions in the fast path even in the worst case of all the conditions to check being variable (including an MFC0 move delay slot that is only required for very old processors):

 828:  8cf90000  lw    t9,0(a3)
       828: R_MIPS_LO16  jiffies
 82c:  40057800  mfc0  a1,c0_prid
 830:  3c0200ff  lui   v0,0xff
 834:  00a21024  and   v0,a1,v0
 838:  1040007d  beqz  v0,a30 <add_interrupt_randomness+0x22c>
 83c:  3c030000  lui   v1,0x0
       83c: R_MIPS_HI16  cpu_data
 840:  40024800  mfc0  v0,c0_count
 844:  00000000  nop
 848:  00409021  move  s2,v0
 84c:  8ce20000  lw    v0,0(a3)
       84c: R_MIPS_LO16  jiffies

On most targets the sequence will be shorter and on some it will reduce to a single `MFC0 <reg>,c0_count', as all MIPS architecture (i.e. non-legacy MIPS) processors require the CP0 Count register to be present. The only known exception that reports MIPS architecture compliance, but contrary to that lacks CP0 Count, is the Ingenic JZ4740 thingy. For broken platforms like that this code requires cpu_has_counter to be hardcoded to 0 (i.e. no variable setting is permitted) so as not to penalise all the other good platforms out there.

The asm barrier is required so that the compiler does not pull any potentially costly (cold cache!) `cpu_data' variable access into the fast path.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: John Crispin <blogic@openwrt.org>
Cc: Andrew McGregor <andrewmcgr@gmail.com>
Cc: Dave Taht <dave.taht@bufferbloat.net>
Cc: Felix Fietkau <nbd@nbd.name>
Cc: Simon Kelley <simon@thekelleys.org.uk>
Cc: Jim Gettys <jg@freedesktop.org>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6702/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
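A simplified sketch of the fallback being described; the real implementation is more involved (it also checks the processor ID, as the disassembly above shows), so treat this as an illustration only:

    /*
     * Prefer the CP0 Count cycle counter; when it cannot be used, read
     * CP0 Random instead, which is good enough for random_get_entropy()
     * but far too narrow for get_cycles().
     */
    #include <asm/mipsregs.h>
    #include <asm/cpu-features.h>

    static inline unsigned long mips_entropy_sample(void)
    {
        if (cpu_has_counter)
            return read_c0_count();

        return read_c0_random();
    }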
-
- 30 May 2014, 1 commit
-
-
By Yonghong Song
Add support for the XLP5XX processor which is an 8 core variant of the XLP9XX. Add XLP5XX cases to code which earlier handled XLP9XX.

Signed-off-by: Yonghong Song <ysong@broadcom.com>
Signed-off-by: Jayachandran C <jchandra@broadcom.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6871/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 28 May 2014, 12 commits
-
-
By Paul Burton
This patch adds a cpuidle driver for systems based around the MIPS Coherent Processing System (CPS) architecture. It supports four idle states:

- The standard MIPS wait instruction.
- The non-coherent wait, clock gated & power gated states exposed by the recently added pm-cps layer.

The pm-cps layer is used to enter all the deep idle states. Since cores in the clock or power gated states cannot service interrupts, the gic_send_ipi_single function is modified to send a power up command for the appropriate core to the CPC in cases where the target CPU has marked itself potentially incoherent.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
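For orientation, a rough sketch of what a cpuidle state table for such a driver looks like; the names, latencies and enter callback below are invented for illustration and are not taken from the actual cpuidle-cps driver:

    #include <linux/cpuidle.h>

    static int cps_enter_wait(struct cpuidle_device *dev,
                              struct cpuidle_driver *drv, int index)
    {
        __asm__ __volatile__("wait");   /* plain MIPS wait instruction */
        return index;
    }

    static struct cpuidle_driver cps_idle_driver_sketch = {
        .name = "cps_idle_sketch",
        .states = {
            [0] = {
                .enter            = cps_enter_wait,
                .exit_latency     = 1,
                .target_residency = 1,
                .name             = "wait",
                .desc             = "MIPS wait",
            },
            /* deeper non-coherent / clock-gated / power-gated states follow */
        },
        .state_count = 1,
    };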
-
By Paul Burton
Defines a macro intended to allow trivial use of the regular MIPS wait instruction from cpuidle drivers, which may simply invoke the macro within their array of states.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
Rather than hardcoding CCA=0x5 for secondary cores, re-use the CCA from the boot CPU. This allows overrides of the CCA using the cca= kernel parameter to take effect on all CPUs for consistency.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
This patch sets a default CCA suited for use with multi-core SMP on all current MIPS CPS based systems. It may still be overridden by the cca= argument on the kernel command line.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
If the user or bootloader sets the CCA to a value which is not suited for multi-core SMP (ie. anything non-coherent) then limit the system to using only a single core and warn the user.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
This patch adds support for offlining CPUs via hotplug when using the CONFIG_MIPS_CPS SMP implementation. When a CPU is offlined one of three things will happen:

- If the CPU is part of a core which implements the MT ASE and there is at least one other VPE online within that core, then the VPE will be halted by setting its TCHalt bit.
- Otherwise, if supported, the core will be powered down via the CPC.
- Otherwise the CPU will hang by executing an infinite loop.

Bringing CPUs back online is then a process of either clearing the appropriate VPE's TCHalt bit or powering up the appropriate core via the CPC. Throughout the process the struct core_boot_config vpe_mask field must be maintained such that mips_cps_boot_vpes will start & stop the correct VPEs.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
This patch adds code to generate entry & exit code for various low power states available on systems based around the MIPS Coherent Processing System architecture (ie. those with a Coherence Manager, Global Interrupt Controller & for >=CM2 a Cluster Power Controller). States supported are:

- Non-coherent wait. This state first leaves the coherent domain and then executes a regular MIPS wait instruction. Power savings are found from the elimination of coherency interventions between the core and any other coherent requestors in the system.
- Clock gated. This state leaves the coherent domain and then gates the clock input to the core. This removes all dynamic power from the core but leaves the core at the mercy of another to restart its clock. Register state is preserved, but the core can not service interrupts whilst its clock is gated.
- Power gated. This deepest state removes all power input to the core. All register state is lost and the core will restart execution from its BEV when another core powers it back up. Because register state is lost this state requires cooperation with the CONFIG_MIPS_CPS SMP implementation in order for the core to exit the state successfully.

The code will detect which states are available on the current system during boot & generate the entry/exit code for those states. This will be used by cpuidle & hotplug implementations.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
The core which the CPC core-other region relates to is based upon the core-local core-other addressing register. As its name suggests this register is shared between all VPEs within a core, and if there is a possibility that multiple VPEs within a core will attempt to access another core simultaneously then locking is required. This wasn't previously a problem with the only user being cpu0 during boot, but will be an issue once hotplug is implemented & may race with other users such as cpuidle.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
The start of mips_cps_core_entry is patched in order to provide the code with the address of the CM register region at a point where it will be running non-coherent with the rest of the system. However the cache wasn't being flushed after that patching, which could in principle lead to secondary cores using an invalid CM base address. The patching is moved to cps_prepare_cpus since local_flush_icache_range has not been initialised at the point cps_smp_setup is called.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
The core power down state for cpuidle will require that the CPS SMP implementation is in use. This patch provides a mips_cps_smp_in_use function which determines whether or not the CPS SMP implementation is currently in use.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
When hotplug and/or a powered down idle state are supported, cases will arise where a non-zero VPE must be brought online without VPE 0, and where multiple VPEs must be onlined simultaneously. This patch prepares for that by:

- Splitting struct boot_config into core & VPE boot config structures, allocated one per core or VPE respectively. This allows for multiple VPEs to be onlined simultaneously without clobbering each other's configuration.
- Indicating which VPEs should be online within a core at any given time using a bitmap. This allows multiple VPEs to be brought online simultaneously and also indicates to VPE 0 whether it should halt after starting any non-zero VPEs that should be online within the core. For example, if all VPEs within a core are offlined via hotplug and the user onlines the second VPE within that core:
  1) The core will be powered up.
  2) VPE 0 will run from the BEV (ie. mips_cps_core_entry) to initialise the core.
  3) VPE 0 will start VPE 1 because its bit is set in the core's bitmap.
  4) VPE 0 will halt itself because its bit is clear in the core's bitmap.
- Moving the core & VPE initialisation to assembly code which does not make any use of the stack. This is because if a non-zero VPE is to be brought online in a powered down core then when VPE 0 of that core runs it may not have a valid stack, and even if it did then it's messy to run through parts of generic kernel code on VPE 0 before starting the correct VPE.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
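A structural sketch of the split described above; the field names and types are assumptions based on the commit text rather than a copy of the patch:

    #include <linux/atomic.h>

    struct vpe_boot_config {
        unsigned long pc;   /* where the VPE should start executing */
        unsigned long sp;
        unsigned long gp;
    };

    struct core_boot_config {
        atomic_t vpe_mask;                    /* bit n set => VPE n should be online */
        struct vpe_boot_config *vpe_config;   /* one entry per VPE in the core */
    };

    /* One core_boot_config per core, indexed by core number. */
    extern struct core_boot_config *mips_cps_core_bootcfg;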
-
By Alex Smith
ptrace_{get,set}_watch_regs access current_cpu_data to get the watch register count/masks, which calls smp_processor_id(). However they are run in preemptible context and therefore trigger warnings like so:

[ 6340.092000] BUG: using smp_processor_id() in preemptible [00000000] code: gdb/367
[ 6340.092000] caller is ptrace_get_watch_regs+0x44/0x220

Since the watch register count/masks should be the same across all CPUs, use boot_cpu_data instead. Note that this may need to change in future should a heterogeneous system be supported where the count/masks are not the same across all CPUs (the current code is also incorrect for this scenario - current_cpu_data here would not necessarily be correct for the CPU that the target task will execute on).

Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6879/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
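A sketch of the substitution; the function is a stand-in rather than the real ptrace code, and the field name used here is an assumption:

    /*
     * boot_cpu_data is a plain global, so reading it never calls
     * smp_processor_id() and is safe in preemptible context.
     */
    #include <asm/cpu-info.h>

    static unsigned int watch_reg_count(void)
    {
        /* was: return current_cpu_data.watch_reg_use_cnt;  (preempt-unsafe) */
        return boot_cpu_data.watch_reg_use_cnt;
    }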
-
- 27 May 2014, 1 commit
-
-
By Ralf Baechle
Nothing was using the method and there isn't any need for this hook. This leaves smp_cpus_done() empty for the moment. As suggested by Paul Bolle <pebolle@tiscali.nl>.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 25 May 2014, 1 commit
-
-
By Markos Chandras
Introduced by the following two commits:

75b5b5e0 "MIPS: Add support for FTLBs"
6de20451 "MIPS: Add printing of ES bit for Imgtec cores when cache error occurs"

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reported-by: Matheus Almeida <Matheus.Almeida@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: stable@vger.kernel.org # v3.14+
Patchwork: https://patchwork.linux-mips.org/patch/6980/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 24 May 2014, 1 commit
-
-
By Ralf Baechle
Nobody is maintaining SMTC anymore and there also seems to be no userbase. Which is a pity - the SMTC technology primarily developed by Kevin D. Kissell <kevink@paralogos.com> is an ingenious demonstration of the MT ASE's power and elegance.

Based on Markos Chandras' <Markos.Chandras@imgtec.com> patch https://patchwork.linux-mips.org/patch/6719/ which, while very similar, no longer applied cleanly when I tried to merge it, plus some additional post-SMTC cleanup - SMTC was a feature as tricky to remove as it was to merge once upon a time.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 23 May 2014, 4 commits
-
-
By Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Ralf Baechle
mm_isBranchInstr() did reside in the math emu code even though it logically is separate and also is used outside the math emu code. In addition GCC 4.9.0 leaves the following unnecessarily bloated function body for a non-microMIPS configuration:

<mm_isBranchInstr>:
    105c:  afa50004  sw    a1,4(sp)
    1060:  afa60008  sw    a2,8(sp)
    1064:  afa7000c  sw    a3,12(sp)
    1068:  03e00008  jr    ra
    106c:  00001021  move  v0,zero

which stores arguments that are never going to be used on the stack frame. Move mm_isBranchInstr() from cp1emu.c to branch.c, then split mm_isBranchInstr() into a __mm_isBranchInstr() core and a mm_isBranchInstr() wrapper inline function which only invokes __mm_isBranchInstr() on microMIPS configurations. This shaves 112 bytes off the kernel and improves code flow a bit.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
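A sketch of the wrapper/core split described above; the exact parameter list is an assumption:

    /*
     * With cpu_has_mmips known to be 0 at compile time the wrapper folds
     * to "return 0" and the out-of-line core is never referenced.
     */
    int __mm_isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
                           unsigned long *contpc);

    static inline int mm_isBranchInstr(struct pt_regs *regs,
                                       struct mm_decoded_insn dec_insn,
                                       unsigned long *contpc)
    {
        if (!cpu_has_mmips)
            return 0;

        return __mm_isBranchInstr(regs, dec_insn, contpc);
    }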
-
By Ralf Baechle
Two issues:

o For beq_op, beql_op, bne_op, bnel_op, blez_op, blezl_op, bgtz_op and bgtzl_op the wrong field was being checked for the instruction opcode.
o For blez_op / blezl_op and bgtz_op / bgtzl_op the test was testing for the wrong opcode.

This bug got introduced by d8d4e3ae [MIPS Kprobes: Refactor branch emulation].

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Acked-by: Victor Kamensky <kamensky@cisco.com>
-
- 13 May 2014, 2 commits
-
-
By Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Ralf Baechle
Reverts commit 795038a6 because d6d3c9af provides the same functionality in a more generic way. With both patches applied, however, the VPE and TC IDs currently get printed twice.
-
- 02 May 2014, 8 commits
-
-
By Paul Burton
This patch provides functions to lock & unlock access to the "core-other" register region of the CPC. Without performing appropriate locking it is possible for code using this region to be preempted or to race with code on another VPE within the same core, with one changing the core which the "core-other" region is acting upon at an inopportune time for the other.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
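A usage sketch, assuming the helpers are named mips_cpc_lock_other()/mips_cpc_unlock_other() (the commit text only says "lock & unlock"); the register write shown is just an example of a core-other access:

    #include <asm/mips-cpc.h>

    static void power_up_core_sketch(unsigned int core)
    {
        mips_cpc_lock_other(core);              /* select & protect the core-other region */
        write_cpc_co_cmd(CPC_Cx_CMD_PWRUP);     /* example core-other register access */
        mips_cpc_unlock_other(core);
    }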
-
By Paul Burton
Add a mask of CPUs which are currently known to be operating coherently. This is initially set up to contain all present CPUs, but in a subsequent patch CPUs in a MIPS Coherent Processing System will be cleared in this mask as they enter non-coherent idle states. This will be used in order to determine when a CPU within a CPS system may need to be powered back up, but may also be used in future to optimise away wakeups for cache operations or TLB invalidations.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
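A sketch of how such a mask is declared and maintained; the mask name follows the description, while the helpers around it are illustrative:

    #include <linux/cpumask.h>

    cpumask_t cpu_coherent_mask;

    static void mark_cpu_noncoherent(unsigned int cpu)
    {
        cpumask_clear_cpu(cpu, &cpu_coherent_mask);   /* entering deep idle */
    }

    static void mark_cpu_coherent(unsigned int cpu)
    {
        cpumask_set_cpu(cpu, &cpu_coherent_mask);     /* back in the coherent domain */
    }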
-
By Paul Burton
This patch adds support for generic clockevents broadcast using a dummy clockevent device and the tick_broadcast function introduced by commit 12ad1000 "clockevents: Add generic timer broadcast function".

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
Having the GIC clockevent driver compiled should not prevent the R4K timer clockevent driver from functioning. One will be selected as the CPU local timer based upon their priorities and the other may simply be unused or in the case of the GIC timer may be used as the tick broadcast device.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
The CLOCK_EVT_FEAT_PERCPU flag indicates that a clockevent device is only configurable by the CPU for which it is registered, and thus cannot be used as the tick broadcast device. That property is true of the R4K timer, which is inaccessible from other cores.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
When a core enters a clock off or power down state its CP0 counter will be stopped along with it.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
This patch allows the GIC clockevent device for a CPU to be configured by another CPU. This makes GIC clockevent devices suitable for use as the tick broadcast device, where formerly the GIC timer local to the configuring CPU would have been configured incorrectly.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-
By Paul Burton
Although the GIC counter will continue when a core is in a low power state and it will still trigger interrupts, the core will be incapable of servicing those interrupts, rendering them useless.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
-