- 20 March 2014, 1 commit
-
Committed by Alexander Shiyan

Statements following return will never be executed. This patch removes this code.

Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 February 2014, 1 commit
-
Committed by Dave Martin

The functions in mcpm_entry.c are mostly intended for use during scary cache and coherency disabling sequences, or do other things which confuse trace ... like powering a CPU down and not returning. Similarly for the backend code. For simplicity, this patch just makes whole files notrace. There should be more than enough traceable points on the paths to these functions, but we can be more fine-grained later if there is a need for it.

Jon Medhurst: Also added spc.o to the list of files, as it contains functions used by MCPM code which have comments like: "might be used in code paths where normal cacheable locks are not working"

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
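The whole-file approach is typically done in the Makefile by stripping the profiling flag from the affected objects; the finer-grained alternative hinted at above is the per-function notrace attribute. A minimal sketch, with a hypothetical helper name:

    #include <linux/compiler.h>     /* notrace */

    /*
     * Hypothetical example: keep a single power-down helper out of ftrace
     * instead of building its whole file without tracing.
     */
    static void notrace my_powerdown_prepare(unsigned int cpu)
    {
        /* cache/coherency disabling work that must not be traced */
        (void)cpu;
    }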
-
- 13 February 2014, 1 commit
-
Committed by Linus Walleij

This modifies the SP804 driver so that the clock will be taken from the device tree node for the timer.

Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
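A rough sketch of the kind of lookup this implies, not the exact driver code: prefer the clock referenced by the timer's DT node, and fall back to the legacy "sp804" system clock name if none is given.

    #include <linux/clk.h>
    #include <linux/err.h>
    #include <linux/of.h>

    /* Sketch only: take the timer clock from the DT node when available. */
    static struct clk *my_sp804_get_clock(struct device_node *np, const char *name)
    {
        struct clk *clk = of_clk_get(np, 0);

        if (IS_ERR(clk))
            clk = clk_get_sys("sp804", name);

        return clk;
    }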
-
- 29 December 2013, 2 commits
-
Committed by Will Deacon

sync_cache_w already includes a dsb, so we can just use sev() directly following a cache sync.

Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Nicolas Pitre

We have a handy macro to replace open-coded __cpuc_flush_dcache_area() and outer_clean_range() sequences. Let's use it. No functional change.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 December 2013, 1 commit
-
Committed by Yijing Wang

Use dev_is_pci() instead of checking the bus type directly.

Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
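A minimal before/after sketch of the pattern being replaced (illustrative only; the function name and vendor check are made up):

    #include <linux/device.h>
    #include <linux/pci.h>

    static bool my_device_needs_quirk(struct device *dev)
    {
        /* Before: if (dev->bus == &pci_bus_type) ... */

        /* After: use the dedicated helper */
        if (dev_is_pci(dev)) {
            struct pci_dev *pdev = to_pci_dev(dev);

            return pdev->vendor == 0x1234;  /* hypothetical vendor check */
        }
        return false;
    }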
-
- 22 November 2013, 1 commit
-
Committed by Stephen Boyd

The 32 bit sched_clock interface now supports 64 bits. Upgrade to the 64 bit function to allow us to remove the 32 bit registration interface. Also mark the read function notrace while we're here, since failure to do so would cause ftrace to break.

Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
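A sketch of the 64-bit registration pattern described above; the register offset, base pointer and function names are placeholders, not the driver's actual code.

    #include <linux/init.h>
    #include <linux/io.h>
    #include <linux/sched_clock.h>

    #define MY_COUNTER_OFFSET   0x04        /* hypothetical counter register */

    static void __iomem *my_clksrc_base;    /* mapped elsewhere during init */

    /* The read callback must be notrace, or ftrace recursion breaks. */
    static u64 notrace my_sched_clock_read(void)
    {
        return readl_relaxed(my_clksrc_base + MY_COUNTER_OFFSET);
    }

    static void __init my_sched_clock_init(unsigned long rate)
    {
        /* 32 is the hardware counter width; the core extends it to 64 bits. */
        sched_clock_register(my_sched_clock_read, 32, rate);
    }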
-
- 07 November 2013, 1 commit
-
Committed by Tushar Behera

Commit 6dedcca6 ("hotplug, powerpc, x86: Remove cpu_hotplug_driver_lock()") removes the definition of the cpu_hotplug_driver_{lock,unlock} APIs, thereby causing a build error. Replace these calls with {lock,unlock}_device_hotplug().

Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
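The replacement amounts to a mechanical substitution along these lines (a sketch, not the actual diff):

    #include <linux/device.h>   /* lock_device_hotplug() / unlock_device_hotplug() */

    static void my_hotplug_protected_update(void)
    {
        /* Before: cpu_hotplug_driver_lock(); ... cpu_hotplug_driver_unlock(); */
        lock_device_hotplug();

        /* ... manipulate CPU device state safely against hotplug ... */

        unlock_device_hotplug();
    }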
-
- 30 October 2013, 1 commit
-
Committed by Vinod Koul

The edma header defines DMA_COMPLETE; this causes issues, as commit adfedd9a moved DMA_SUCCESS to DMA_COMPLETE. edma should properly namespace its defines and needs a future fix.

Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 29 October 2013, 3 commits
-
Committed by Dave Martin

CPU hotplug and kexec rely on smp_ops.cpu_kill(), which is supposed to wait for the CPU to park or power down, and perform the last rites (such as disabling clocks etc., where the platform doesn't do this automatically).

kexec in particular is unsafe without performing this synchronisation to park secondaries. Without it, the secondaries might not be parked when kexec trashes the kernel.

There is no generic way to do this synchronisation, so a new mcpm platform_ops method power_down_finish() is added by this patch. The new method is mandatory. A platform which provides no way to detect when CPUs are parked is likely broken.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
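Roughly, a backend now has to supply something like the following. This is sketched from the description above (the exact mcpm_platform_ops layout and calling context should be checked against asm/mcpm.h of that kernel), and the power-controller query is a hypothetical helper.

    #include <linux/delay.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    /* Hypothetical query of the platform's power controller. */
    extern bool my_pm_cpu_is_off(unsigned int cpu, unsigned int cluster);

    /*
     * Sketch of the new mandatory hook: poll until the given CPU is observably
     * parked or powered off, so cpu_kill()/kexec can trust that it is gone.
     */
    static int my_pm_power_down_finish(unsigned int cpu, unsigned int cluster)
    {
        int timeout = 1000;

        while (timeout--) {
            if (my_pm_cpu_is_off(cpu, cluster))
                return 0;
            usleep_range(1000, 2000);
        }
        return -ETIMEDOUT;
    }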
-
Committed by Dave Martin

This patch factors the logical-to-physical CPU translation out of mcpm_boot_secondary(), so that it can be reused elsewhere.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Michael Opdenacker

This patch proposes to remove the use of the IRQF_DISABLED flag. It has been a no-op since 2.6.35 and it will be removed one day.

Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
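The change is mechanical: drop the flag from request_irq() calls. A hedged sketch, with a placeholder handler and device name:

    #include <linux/interrupt.h>

    static irqreturn_t my_timer_isr(int irq, void *dev_id)
    {
        /* acknowledge the (hypothetical) device here */
        return IRQ_HANDLED;
    }

    static int my_request_timer_irq(unsigned int irq)
    {
        /* Before: request_irq(irq, my_timer_isr, IRQF_DISABLED | IRQF_TIMER, ...) */
        return request_irq(irq, my_timer_isr, IRQF_TIMER, "my-timer", NULL);
    }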
-
- 20 October 2013, 1 commit
-
Committed by Victor Kamensky

In big-endian mode, mcpm_entry_point is the first function called on secondary CPUs, so it must first switch the CPU into big-endian code.

[ben.dooks@codethink.co.uk: merge fix patch from Victor into this]

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
- 03 October 2013, 2 commits
-
Committed by Andrea Adami

This fixes a regression for kernels after v3.2.

After commit 72662e01 ("ARM: head.S: only include __turn_mmu_on in the initial identity mapping"), Zaurus PXA devices call sharpsl_save_param() during fixup and hang on boot, because memcpy refers to physical addresses that are no longer valid once the MMU has been set up. Zaurus collie (SA1100) is unaffected (the function is called in init_machine).

The code was making assumptions, and for PXA the virtual address should have been used before.

Signed-off-by: Marko Katic <dromede@gmail.com>
Signed-off-by: Andrea Adami <andrea.adami@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Nicolas Pitre

Currently mcpm_cpu_power_down() and mcpm_cpu_suspend() trigger BUG() if mcpm_platform_register() is not called beforehand. This may occur for many reasons, such as an incomplete device tree passed to the kernel or the like. Let's be nicer to users and avoid killing the kernel if that happens, by logging a warning and returning to the caller.

The mcpm_cpu_suspend() user is already set to deal with this situation, and so is cpu_die() invoking mcpm_cpu_die(). The problematic case would have been the B.L switcher's usage of mcpm_cpu_power_down(); however, it has to call mcpm_cpu_power_up() first, which is already set to catch an error resulting from a missing mcpm_platform_register() call.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
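The shape of the change is the classic BUG-to-WARN downgrade; a minimal sketch, assuming the registered ops are kept in a platform_ops pointer (function and variable names here are illustrative, not the exact patch):

    #include <linux/bug.h>

    struct mcpm_platform_ops;                               /* opaque here */
    static const struct mcpm_platform_ops *platform_ops;    /* set by mcpm_platform_register() */

    static void my_mcpm_cpu_power_down(void)
    {
        /* Before: BUG_ON(!platform_ops); */
        if (WARN_ON_ONCE(!platform_ops))
            return;     /* warn and go back to the caller instead of dying */

        /* ... proceed with the real power-down sequence ... */
    }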
-
- 01 October 2013, 1 commit
-
Committed by Joel Fernandes

HWMOD removal for MMC is breaking edma_start, as events are being manually triggered due to the unused channel list not being cleared.

The above issue is fixed by reading the "dmas" property from the DT node if it exists, and clearing the bits in the unused channel list if the DMA controller used by any device is EDMA. For this purpose we use the of_* helpers to parse the arguments in the dmas phandle list.

Also introduced is a minor clean-up of a checkpatch error in old code.

Reviewed-by: Sekhar Nori <nsekhar@ti.com>
Reported-by: Balaji T K <balajitk@ti.com>
Cc: Sekhar Nori <nsekhar@ti.com>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Nishanth Menon <nm@ti.com>
Cc: Pantel Antoniou <panto@antoniou-consulting.com>
Cc: Jason Kridner <jkridner@beagleboard.org>
Cc: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 24 September 2013, 10 commits
-
Committed by Dave Martin

When the switcher is active, there is no straightforward way to figure out which logical CPU a given physical CPU maps to.

This patch provides a function bL_switcher_get_logical_index(mpidr), which is analogous to get_logical_index(). It returns the logical CPU on which the specified physical CPU is grouped (or -EINVAL if unknown). If the switcher is inactive or not present, -EUNATCH is returned instead.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
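A caller-side sketch of how the two lookups described above might be combined; the bL_switcher prototype is assumed from the description, and get_logical_index() comes from asm/smp_plat.h.

    #include <linux/errno.h>
    #include <linux/types.h>
    #include <asm/smp_plat.h>       /* get_logical_index() */

    /* Assumed prototype, per the description above. */
    extern int bL_switcher_get_logical_index(u32 mpidr);

    static int my_mpidr_to_logical_cpu(u32 mpidr)
    {
        int cpu = bL_switcher_get_logical_index(mpidr);

        /* -EUNATCH: switcher inactive or not built in; use the normal map. */
        if (cpu == -EUNATCH)
            cpu = get_logical_index(mpidr);

        return cpu;     /* may still be -EINVAL if the MPIDR is unknown */
    }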
-
Committed by Dave Martin

This patch exports a bL_switcher_trace_trigger() function, to provide a means for drivers using the trace events to get the current status when starting a trace session. Calling this function is equivalent to pinging the trace_trigger file in sysfs.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
-
Committed by Dave Martin

When tracing switching, an external tracer needs a way to bootstrap its knowledge of the logical<->physical CPU mapping.

This patch adds a sysfs attribute, trace_trigger. A write to this attribute will generate a power:cpu_migrate_current event for each online CPU, indicating the current physical CPU for each logical CPU.

Activating or deactivating the switcher also generates these events, so that the tracer knows about the resulting remapping of affected CPUs.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
-
Committed by Dave Martin

This patch adds simple trace events to the b.L switcher code to allow tracing of CPU migration events.

To make use of the trace events, you will need:

  CONFIG_FTRACE=y
  CONFIG_ENABLE_DEFAULT_TRACERS=y

The following events are added:

  * power:cpu_migrate_begin
  * power:cpu_migrate_finish

each with the following data:

  u64 timestamp;
  u32 cpu_hwid;

power:cpu_migrate_begin occurs immediately before the switcher-specific migration operations start. power:cpu_migrate_finish occurs immediately when migration is completed.

The cpu_hwid field contains the ID fields of the MPIDR.

  * For power:cpu_migrate_begin, cpu_hwid is the ID of the outbound physical CPU (equivalent to (from_phys_cpu, from_phys_cluster)).
  * For power:cpu_migrate_finish, cpu_hwid is the ID of the inbound physical CPU (equivalent to (to_phys_cpu, to_phys_cluster)).

By design, the cpu_hwid field is masked in the same way as the device tree cpu node reg property, allowing direct correlation to the DT description of the hardware.

The timestamp is added in order to minimise timing noise. An accurate system-wide clock should be used for generating this (hopefully getnstimeofday is appropriate, but it could be changed). It could be any monotonic shared clock, since the aim is to allow accurate deltas to be computed. We don't necessarily care about accurate synchronisation with wall clock time.

In practice, each switch takes place on a single logical CPU, and the trace infrastructure should guarantee that events are well-ordered with respect to a single logical CPU.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
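A hedged sketch of how such events are typically emitted from the migration path; the trace wrappers are assumed to be generated from the event definitions above, and the timestamp/MPIDR plumbing is illustrative rather than the actual switcher code.

    #include <linux/time.h>
    #include <linux/types.h>
    #include <asm/cputype.h>        /* MPIDR_HWID_BITMASK */

    /* Assumed tracepoint wrappers for the two events listed above. */
    extern void trace_cpu_migrate_begin(u64 timestamp, u32 cpu_hwid);
    extern void trace_cpu_migrate_finish(u64 timestamp, u32 cpu_hwid);

    static u64 my_trace_timestamp(void)
    {
        struct timespec ts;

        getnstimeofday(&ts);        /* see the note on clock choice above */
        return (u64)ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec;
    }

    static void my_do_switch(u32 outbound_mpidr, u32 inbound_mpidr)
    {
        trace_cpu_migrate_begin(my_trace_timestamp(),
                                outbound_mpidr & MPIDR_HWID_BITMASK);

        /* ... switcher-specific migration operations ... */

        trace_cpu_migrate_finish(my_trace_timestamp(),
                                 inbound_mpidr & MPIDR_HWID_BITMASK);
    }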
-
Committed by Nicolas Pitre

In some cases, a significant delay may be observed between the moment a request for a CPU to come up is made and the moment it is ready to start executing kernel code. This is especially true when a whole cluster has to be powered up, which may take on the order of milliseconds. It is therefore a good idea to let the outbound CPU continue to execute code in the meantime, and be notified when the inbound is ready before performing the actual switch.

This is achieved by registering a completion block with the appropriate IPI callback, and programming the sending of an IPI by the early assembly code prior to entering the main kernel code. Once the IPI is delivered to the outbound CPU, the completion block is "completed" and the switcher thread is resumed.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Nicolas Pitre

This allows poking a predetermined value into a specific address upon entering the early boot code in bL_head.S.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Nicolas Pitre

Let's wait for the inbound CPU to come up and snoop some of the outbound CPU's cache before bringing the outbound CPU down. That should be more efficient than going down right away.

Possible improvements might involve some monitoring of the CCI event counters.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Dave Martin

There is no explicit way to know when a switch started via bL_switch_request() is complete. This can lead to unpredictable behaviour when the switcher is controlled by a subsystem which makes dynamic decisions (such as cpufreq).

The CPU PM notifier is not really suitable for signalling completion, because the CPU could get suspended and resumed for other, independent reasons while a switch request is in flight. Adding a whole new notifier for this seems excessive, and may tempt people to put heavyweight code on this path.

This patch implements a new bL_switch_request_cb() function that allows for a per-request lightweight callback, private between the switcher and the caller of bL_switch_request_cb().

Overlapping switches on a single CPU are considered incorrect if they are requested via bL_switch_request_cb() with a callback (they will lead to an unpredictable final state without explicit external synchronisation to force the requests into a particular order). Queuing requests robustly would be overkill because only one subsystem should be attempting to control the switcher at any time. Overlapping requests of this kind will be failed with -EBUSY, to indicate that the second request won't take effect and the completer will never be called for it.

bL_switch_request() is retained as a wrapper round the new function, with the old, fire-and-forget semantics. In this case the last request will always win. The request may still be denied if a previous request with a completer is still pending.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
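From a caller's point of view (e.g. a cpufreq driver), the completion callback might be used roughly like this; the prototype shown is an assumption based on the description above, not a copy of the switcher header.

    #include <linux/completion.h>
    #include <linux/errno.h>

    /* Assumed interface, per the description above. */
    typedef void (*bL_switch_completion_handler)(void *cookie);
    extern int bL_switch_request_cb(unsigned int cpu, unsigned int new_cluster_id,
                                    bL_switch_completion_handler completer,
                                    void *completer_cookie);

    static void my_switch_done(void *cookie)
    {
        complete(cookie);       /* wake up the waiting requester */
    }

    static int my_synchronous_switch(unsigned int cpu, unsigned int cluster)
    {
        DECLARE_COMPLETION_ONSTACK(done);
        int ret = bL_switch_request_cb(cpu, cluster, my_switch_done, &done);

        if (ret == -EBUSY)      /* a previous request with a completer is pending */
            return ret;
        if (!ret)
            wait_for_completion(&done);
        return ret;
    }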
-
Committed by Dave Martin

Some subsystems will need to respond synchronously to runtime enabling and disabling of the switcher.

This patch adds a dedicated notifier interface to support such subsystems. Pre- and post- enable/disable notifications are sent to registered callbacks, allowing safe transition of non-b.L-transparent subsystems across these control transitions.

Notifier callbacks may veto switcher (de)activation on pre notifications only. Post notifications won't revert the action.

If enabling or disabling of the switcher fails after the pre-change notification has been sent, subsystems which have registered notifiers can be left in an inappropriate state. This patch sends a suitable post-change notification on failure, indicating that the old state has been reestablished. For example, a failed initialisation will result in the following sequence:

  BL_NOTIFY_PRE_ENABLE
  /* switcher initialisation fails */
  BL_NOTIFY_POST_DISABLE

It is the responsibility of notified subsystems to respond in an appropriate way.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
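A sketch of how a client subsystem might hook into such a notifier chain; the registration function and BL_NOTIFY_* codes follow the text above, but their exact definitions (stubbed here with placeholder values) live in the switcher header.

    #include <linux/errno.h>
    #include <linux/notifier.h>
    #include <linux/types.h>

    /* Placeholder values; the real codes come from the switcher header. */
    #define BL_NOTIFY_PRE_ENABLE    1
    #define BL_NOTIFY_PRE_DISABLE   2

    /* Assumed registration call, per the description above. */
    extern int bL_switcher_register_notifier(struct notifier_block *nb);

    /* Hypothetical readiness check belonging to the client subsystem. */
    extern bool my_subsystem_can_tolerate_switching(void);

    static int my_bL_notifier_call(struct notifier_block *nb,
                                   unsigned long action, void *data)
    {
        switch (action) {
        case BL_NOTIFY_PRE_ENABLE:
            if (!my_subsystem_can_tolerate_switching())
                return notifier_from_errno(-EBUSY);     /* veto on pre only */
            break;
        case BL_NOTIFY_PRE_DISABLE:
            /* quiesce switcher-aware state before it goes away */
            break;
        }
        return NOTIFY_OK;
    }

    static struct notifier_block my_bL_nb = {
        .notifier_call = my_bL_notifier_call,
    };

    /* somewhere in init code: bL_switcher_register_notifier(&my_bL_nb); */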
-
Committed by Dave Martin

Some subsystems will need to know for sure whether the switcher is enabled or disabled during certain critical regions.

This patch provides a simple mutex-based mechanism to discover whether the switcher is enabled and temporarily lock out further enable/disable:

  * bL_switcher_get_enabled() returns true iff the switcher is enabled and temporarily inhibits enable/disable.
  * bL_switcher_put_enabled() permits enable/disable of the switcher again after a previous call to bL_switcher_get_enabled().

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
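A possible usage pattern in a client, sketched from the two calls described above (prototypes assumed, not quoted from the header):

    #include <linux/types.h>

    /* Assumed prototypes, per the description above. */
    extern bool bL_switcher_get_enabled(void);
    extern void bL_switcher_put_enabled(void);

    static void my_critical_region(void)
    {
        bool switcher_active = bL_switcher_get_enabled();

        /*
         * Between get and put the switcher cannot be enabled or disabled,
         * so switcher_active stays valid for the whole region.
         */
        if (switcher_active) {
            /* ... path that must account for paired logical CPUs ... */
        } else {
            /* ... plain SMP path ... */
        }

        bL_switcher_put_enabled();
    }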
-
- 17 September 2013, 1 commit
-
Committed by Linus Walleij

The Shark machine sub-architecture (also known as DNARD, the DIGITAL Network Appliance Reference Design) lacks a maintainer able to apply and test patches to modernize the architecture. It is suspected that the current kernel, while it compiles, does not even boot on this machine. The listed maintainer has expressed that he will not be able to spend any time on the maintenance for the coming year. So let's delete it from the kernel for now. It can always be resurrected with git revert if maintenance is resumed.

As the VIA82c505 PCI adapter was only used by this architecture, that gets deleted too.

Cc: arm@kernel.org
Cc: Alexander Schulz <alex@shark-linux.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
- 04 September 2013, 1 commit
-
Committed by Joel Fernandes

Events can be missed as a result of splitting a scatter-gather list and DMA'ing it in batches. Add a helper function to manually trigger a channel in case any such events are missed.

Signed-off-by: Joel Fernandes <joelf@ti.com>
Acked-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 22 August 2013, 1 commit
-
Committed by Viresh Kumar

When a CPU goes to a deep idle state where its local timer is shut down, it notifies the timer framework to use the broadcast timer instead. Unfortunately, the broadcast device could wake up any CPU, including an idle one which is not concerned by the wake-up at all. This implies, in the worst case, that an idle CPU will wake up to send an IPI to another idle CPU.

This patch fixes this for ARM platforms using timer-sp by setting the CLOCK_EVT_FEAT_DYNIRQ feature.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
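Concretely, the flag is added to the clock event device's feature mask; a sketch in which everything except the new flag is a placeholder:

    #include <linux/clockchips.h>

    static struct clock_event_device my_sp804_clockevent = {
        .name       = "my-sp804",               /* placeholder name */
        .features   = CLOCK_EVT_FEAT_PERIODIC |
                      CLOCK_EVT_FEAT_ONESHOT |
                      CLOCK_EVT_FEAT_DYNIRQ,    /* new: broadcast IRQ affinity
                                                 * may follow the CPU with the
                                                 * earliest expiring event */
        .rating     = 300,
    };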
-
- 12 August 2013, 1 commit
-
Committed by Will Deacon

In a similar manner to our spinlock implementation, mcpm uses sev to wake up cores waiting on a lock when the lock is unlocked. In order to ensure that the final write unlocking the lock is visible, a dsb instruction is executed immediately prior to the sev. This patch changes these dsbs to use the -st option, since we only require that the store unlocking the lock is made visible.

Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
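In C terms the before/after looks roughly like this (the actual change is in the MCPM/vlock assembly); this sketch assumes the ARM dsb(option) and sev() macros from asm/barrier.h are available:

    #include <asm/barrier.h>

    static void my_unlock_and_wake(unsigned long *lock)
    {
        *lock = 0;      /* the store that unlocks the lock */

        /* Before: dsb();  a full barrier ordering loads and stores */
        dsb(st);        /* only the unlocking store needs to be visible */
        sev();          /* wake up cores waiting in wfe() */
    }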
-
- 05 August 2013, 3 commits
-
Committed by Nicolas Pitre

Only the basic call to aid debugging. *** NOT FOR PRODUCTION ***

Usage:

  echo <cpuid>,<clusterid> > /dev/b.L_switcher

where <cpuid> is the logical CPU number, and <clusterid> is 0 for the first cluster and 1 for the second cluster.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Nicolas Pitre

Trying to support both the switcher and CPU hotplug at the same time is tricky due to ambiguous semantics. So let's at least prevent users from messing around with those logical CPUs the switcher has removed and those which were not active when the switcher was activated.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Nicolas Pitre

Up to now, the logical CPU was somehow tied to the physical CPU number within a cluster. This causes problems when forcing the boot CPU to be different from the first enumerated CPU in the device tree, creating a discrepancy between logical and physical CPU numbers.

Let's make the pairing completely independent from physical CPU numbers. Let's keep only those logical CPUs with the same initial CPU cluster to create a uniform scheduler profile without having to modify any of the probed topology and compute capacity data.

This has the potential to create a non-contiguous CPU numbering space when the switcher is active, with potential impact on buggy user space tools. It is however better to fix those tools rather than making the switcher code more intrusive.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
- 30 July 2013, 7 commits
-
Committed by Nicolas Pitre

By adding no_bL_switcher to the kernel cmdline string, the switcher won't be activated automatically at boot time. It is still possible to activate it later with:

  echo 1 > /sys/kernel/bL_switcher/active

Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Nicolas Pitre

The /sys/kernel/bL_switcher/enable file allows enabling or disabling the switcher by writing 1 or 0 to it, respectively. It is still enabled by default on boot.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Nicolas Pitre

Currently, GIC IDs are hardcoded, making the code dependent on the 4+4 b.L configuration. Let's allow GIC IDs to be discovered upon switcher initialization to support other b.L configurations, such as the 1+1 one, or 2+3 as on the VExpress TC2.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Nicolas Pitre

In a regular kernel configuration, all the CPUs are initially available. But the switcher execution model uses half of them at any time. Instead of hacking the DTB to remove half of the CPUs, let's remove them at run time and make sure we still have a working switcher configuration. This way, the same DTB can be used whether or not the switcher is used.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Nicolas Pitre

We now have a dedicated thread for each logical CPU. That's plenty of stack space for our needs.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Nicolas Pitre

The workqueues are problematic as they may be contended. They can't be scheduled with top priority either. Also, the optimization in bL_switch_request() to skip the workqueue entirely when the target CPU and the calling CPU were the same didn't allow bL_switch_request() to be called from atomic context, as might be the case for some cpufreq drivers.

Let's move to dedicated kthreads instead.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
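A rough sketch of the per-CPU kthread approach being adopted; the thread naming and the wait loop are illustrative only, not the switcher's actual code.

    #include <linux/err.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>

    /* Hypothetical per-CPU switcher thread: sleeps until a switch is requested. */
    static int my_switcher_thread(void *arg)
    {
        while (!kthread_should_stop()) {
            set_current_state(TASK_INTERRUPTIBLE);
            schedule();             /* woken by the request path */

            /* ... perform the requested cluster switch for this CPU ... */
        }
        return 0;
    }

    static struct task_struct *my_start_switcher_thread(int cpu)
    {
        struct task_struct *task;

        task = kthread_create(my_switcher_thread, NULL, "kswitcher/%d", cpu);
        if (!IS_ERR(task)) {
            kthread_bind(task, cpu);        /* pin the thread to its CPU */
            wake_up_process(task);
        }
        return task;
    }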
-
Committed by Lorenzo Pieralisi

Per-CPU timers that are shut down when a CPU is switched over must be disabled upon switching and reprogrammed on the inbound CPU by relying on the clock events management API. The save/restore sequence is executed with irqs disabled, as mandated by the clock events API.

The next_event is an absolute time, hence, when the inbound CPU resumes, if the timer has expired, the min delta is forced into the tick device to fire after a few cycles.

This patch adds switching support for clock events that are per-CPU and have to be migrated when a switch takes place; the cpumask of the clock event device is checked against the cpumask of the current CPU, and if they match, the clockevent device mode is saved and it is put in shutdown mode. Resume code reprogrammes the tick device accordingly.

Tested on A15/A7 fast models and architected timers.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
-