- 28 Jul 2014, 1 commit
-
By Peter Ujfalusi
Use the lowest priority queue as default for clients.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 17 Jun 2014, 1 commit
-
By Fabio Estevam
Remove the 'temp' variable in order to fix the following build warning:
arch/arm/common/scoop.c:185:6: warning: unused variable 'temp' [-Wunused-variable]
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 28 May 2014, 1 commit
-
By Nicolas Pitre
The content of /sys/devices/system/cpu/cpu*/online is still 1 for those CPUs that the switcher has removed, even though the global state in /sys/devices/system/cpu/online is updated correctly.
It turns out that commit 0902a904 ("Driver core: Use generic offline/online for CPU offline/online") has changed the way those files retrieve their content, by relying on the generic attribute handling code. The switcher, by calling cpu_down() directly, bypasses this handling and the attribute value doesn't get updated.
Fix this by calling device_offline()/device_online() instead.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
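A minimal sketch of the fix as described, assuming the switcher works from plain CPU numbers; device_offline()/device_online() go through the generic attribute handling, so the per-CPU online file stays in sync:

```c
#include <linux/cpu.h>
#include <linux/device.h>

/* Sketch only: take a CPU down the way the commit describes, so that
 * /sys/devices/system/cpu/cpuN/online is updated as well. */
static int switcher_remove_cpu(unsigned int cpu)
{
	struct device *cpu_dev = get_cpu_device(cpu);

	if (!cpu_dev)
		return -ENODEV;

	/* device_offline() eventually calls cpu_down(), but unlike a direct
	 * cpu_down() call it also updates the generic online attribute. */
	return device_offline(cpu_dev);
}
```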
-
- 26 May 2014, 1 commit
-
By Dave Martin
The name "power_down_finish" seems to be causing some confusion, because it suggests that this function is responsible for taking some action to cause the specified CPU to complete its power down.
This patch renames the affected functions to "wait_for_powerdown" and similar, since this function's intended purpose is just to wait for the hardware to finish a powerdown initiated by a previous cpu_power_down.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 May 2014, 7 commits
-
By Peter Ujfalusi
From the CCCFG register of eDMA3 we can get all the information the driver needs about the IP:
Number of channels: NUM_DMACH
Number of regions: NUM_REGN
Number of slots (PaRAM sets): NUM_PAENTRY
Number of TC/EQ: NUM_EVQUE
When booted with DT, or when the queue_priority_mapping is not provided, set up a default priority map.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
-
By Peter Ujfalusi
Be consistent in the code: take parameters from the edma_cc[j] struct rather than sometimes from info[j] as well.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
-
By Peter Ujfalusi
The struct edma is allocated on a per-CC basis, so the member num_cc does not make any sense. One CC is one CC; it does not have sub-CCs.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
-
By Peter Ujfalusi
There is no need to change the default TC -> Queue mapping. By default the mapping is: TC0 -> Q0, TC1 -> Q1, etc. Changing this has no benefit at all, and all the board files are just setting the same mapping back to the HW.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
-
By Peter Ujfalusi
Instead of saving the for-loop length, take the num_tc value from the pdata. In case of DT boot, set n_tc to 3 as it is hardwired in edma_of_parse_dt(). This is a temporary state, since upcoming patch(es) will change how we deal with these parameters.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
-
By Peter Ujfalusi
The pdata has just been allocated with devm_kzalloc() in edma_setup_info_from_dt() and passed to this function.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
-
By Peter Ujfalusi
Get the two interrupt line numbers at the same time by merging the two if (node) {} else {} blocks. Replace &pdev->dev with the already existing dev, which makes it possible to collapse the lines with devm_request_irq().
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
-
- 30 Apr 2014, 1 commit
-
By Thomas Gleixner
As Joel pointed out, edma_read_position() uses memcpy_fromio() to read the parameter RAM. That's not synchronized with the internal update, as it does a byte-by-byte copy. We need to do a 32-bit read to get a consistent value.
Further, reading both destination and source is pointless. In DEV_TO_MEM transfers we are only interested in the destination, in MEM_TO_DEV we care about the source. In MEM_TO_MEM it really does not matter which one you read.
Simple solution: remove the pointers, select dest/source via a bool and return the read value.
Remove the export of this function while at it. The only potential user is the dmaengine, and that's always builtin.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
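A hedged sketch of the reworked helper as the message describes it: one 32-bit read, direction selected by a bool, value returned instead of being stored through pointers. PARM_SRC, PARM_DST and parm_base() are illustrative placeholders, not the driver's real accessors.

```c
/* Sketch: read the current source or destination address of a channel's
 * PaRAM set with a single 32-bit access. */
static dma_addr_t edma_read_position(unsigned int ctlr, unsigned int chan,
				     bool dst)
{
	u32 offs = dst ? PARM_DST : PARM_SRC;	/* placeholder offsets */

	/* One readl() is consistent with the hardware update, unlike a
	 * byte-by-byte memcpy_fromio() of the PaRAM area. */
	return readl(parm_base(ctlr, chan) + offs);
}
```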
-
- 29 Apr 2014, 1 commit
-
By Thomas Gleixner
This is another great example of trainwreck engineering: commit 2646a0e529 ("ARM: edma: Add EDMA crossbar event mux support") added support for using EDMA on peripherals which have no direct EDMA event mapping.
The code compiles and does not explode in your face, but that's it.
1) Reading a u16 array from a u32 device tree array simply does not work. Even if the function is named "edma_of_read_u32_to_s16_array", it merely calls of_property_read_u16_array(). So the resulting 16-bit array will have every other entry = 0.
2) The DT entry for the xbar registers has length 0x10 instead of the real length: 0xfd0 - 0xf90 = 0x40. Not a real problem as it does not cross a page boundary, but wrong nevertheless.
3) But none of this matters, as the mapping never happens: after reading nonsense, edma_of_read_u32_to_s16_array() invalidates the first array entry pair, so nobody can ever notice the braindamage by immediate explosion.
Seems the QA criteria for this code was solely not to explode when someone adds edma-xbar-event-map entries to the DT. Goal achieved, congratulations! Not really helpful if someone wants to use EDMA on a device which requires an xbar mapping.
Fix the issues by:
- annotating the device tree entry with "/bits/ 16" as documented in the of_property_read_u16_array kernel doc
- making the size of the xbar register mapping correct
- invalidating the end of the array and not the start
This convoluted mess wants to be completely rewritten, as there is no point in keeping the xbar_chan array memory and the iomapping of the xbar regs around forever. Marking the xbar-mapped channels as used should be done right there. But that's a different issue; this patch is small enough to make it work and allows a simple backport for stable.
Cc: stable@vger.kernel.org # v3.12+
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
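For reference, of_property_read_u16_array() only sees 16-bit cells, so the DT annotation matters; a sketch of both sides of the fix, with the property name and values shown as assumptions:

```c
/* DT side (sketch): annotate the entry as 16-bit cells, otherwise every
 * other value reads back as 0:
 *
 *     edma-xbar-event-map = /bits/ 16 <1 12>, <2 13>;
 *
 * Kernel side (sketch): read it back with the matching u16 helper. */
u16 xbar_map[4];
int ret = of_property_read_u16_array(node, "edma-xbar-event-map",
				     xbar_map, ARRAY_SIZE(xbar_map));
```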
-
- 23 Apr 2014, 3 commits
-
By Nicolas Pitre
The switcher should not depend on MAX_CLUSTER to determine if it should be activated or not. In a multiplatform kernel binary it is possible to have dual-cluster and quad-cluster platforms configured in. In that case MAX_CLUSTER, which is a build-time limit, should be 4, and that shouldn't prevent the switcher from working if the kernel is booted on a b.L dual-cluster system.
In bL_switcher_halve_cpus() we already have a runtime validation check to make sure we're dealing with only two clusters, so booting on a quad-cluster system will be caught and switcher activation aborted.
However, the b.L switcher must ensure the MCPM layer is initialized on the booted hardware before doing anything. The mcpm_is_available() function is added to that effect.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Abhilash Kesavan <kesavan.abhilash@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Peter Ujfalusi
Indicate that the edma dmaengine driver has support for cyclic mode.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
By Peter Ujfalusi
For later use, save the number of queues available for the CC.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 20 Mar 2014, 1 commit
-
By Alexander Shiyan
Statements following a return will never be executed. This patch removes this code.
Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 Feb 2014, 1 commit
-
By Dave Martin
The functions in mcpm_entry.c are mostly intended for use during scary cache and coherency disabling sequences, or do other things which confuse trace ... like powering a CPU down and not returning. Similarly for the backend code.
For simplicity, this patch just makes whole files notrace. There should be more than enough traceable points on the paths to these functions, but we can be more fine-grained later if there is a need for it.
Jon Medhurst: Also added spc.o to the list of files, as it contains functions used by MCPM code which have comments like: "might be used in code paths where normal cacheable locks are not working"
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 13 Feb 2014, 1 commit
-
By Linus Walleij
This modifies the SP804 driver so that the clock will be taken from the device tree node for the timer.
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
- 29 Dec 2013, 2 commits
-
By Will Deacon
sync_cache_w already includes a dsb, so we can just use sev() directly following a cache-sync.
Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
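Roughly the pattern this enables, as a sketch (the flag variable and helper name are illustrative):

```c
/* Publish a value to another (possibly cache-incoherent) CPU and wake it.
 * sync_cache_w() already ends with a dsb, so sev() can follow directly
 * without an extra barrier. */
static void publish_and_signal(unsigned long *flag, unsigned long value)
{
	*flag = value;
	sync_cache_w(flag);
	sev();
}
```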
-
By Nicolas Pitre
We have a handy macro to replace open-coded __cpuc_flush_dcache_area() and outer_clean_range() sequences. Let's use it. No functional change.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Dec 2013, 1 commit
-
By Yijing Wang
Use dev_is_pci() instead of checking the bus type directly.
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
- 22 Nov 2013, 1 commit
-
By Stephen Boyd
The 32-bit sched_clock interface now supports 64 bits. Upgrade to the 64-bit function to allow us to remove the 32-bit registration interface.
Also mark the read function notrace while we're here; failure to do so would cause ftrace to break.
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
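A minimal sketch of what such a conversion looks like (the counter base, bit width and clock rate are illustrative):

```c
#include <linux/io.h>
#include <linux/sched_clock.h>

static void __iomem *counter_base;	/* illustrative: mapped free-running counter */

/* notrace is required: ftrace itself ends up using sched_clock(), so
 * tracing the read callback would recurse. */
static u64 notrace my_sched_clock_read(void)
{
	return readl_relaxed(counter_base);
}

static void __init my_timer_init(void)
{
	/* 64-bit registration: 32 significant bits at an assumed 24 MHz */
	sched_clock_register(my_sched_clock_read, 32, 24000000);
}
```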
-
- 07 Nov 2013, 1 commit
-
By Tushar Behera
Commit 6dedcca6 ("hotplug, powerpc, x86: Remove cpu_hotplug_driver_lock()") removes the definition of the cpu_hotplug_driver_{lock,unlock} APIs, thereby causing a build error.
Replace these calls with {lock,unlock}_device_hotplug().
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 30 Oct 2013, 1 commit
-
By Vinod Koul
The edma header defines DMA_COMPLETE; this causes issues, as commit adfedd9a moved DMA_SUCCESS to DMA_COMPLETE. edma should properly namespace its defines; this needs a future fix.
Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 29 Oct 2013, 3 commits
-
By Dave Martin
CPU hotplug and kexec rely on smp_ops.cpu_kill(), which is supposed to wait for the CPU to park or power down, and perform the last rites (such as disabling clocks etc., where the platform doesn't do this automatically).
kexec in particular is unsafe without performing this synchronisation to park secondaries. Without it, the secondaries might not be parked when kexec trashes the kernel.
There is no generic way to do this synchronisation, so a new MCPM platform_ops method power_down_finish() is added by this patch. The new method is mandatory. A platform which provides no way to detect when CPUs are parked is likely broken.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
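A hedged sketch of where the new method sits in the MCPM ops table; the surrounding members are recalled from the interface of that era and may not match exactly:

```c
/* arch/arm/include/asm/mcpm.h (sketch; members other than
 * power_down_finish are from memory and may differ) */
struct mcpm_platform_ops {
	int (*power_up)(unsigned int cpu, unsigned int cluster);
	void (*power_down)(void);
	/* New and mandatory: block until (cpu, cluster) is parked or powered
	 * off, so CPU hotplug and kexec can rely on the CPU being gone. */
	int (*power_down_finish)(unsigned int cpu, unsigned int cluster);
	void (*suspend)(u64 expected_residency);
	void (*powered_up)(void);
};
```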
-
By Dave Martin
This patch factors the logical-to-physical CPU translation out of mcpm_boot_secondary(), so that it can be reused elsewhere.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Michael Opdenacker
This patch removes the use of the IRQF_DISABLED flag. It has been a no-op since 2.6.35 and it will be removed one day.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
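The conversion is mechanical; a before/after sketch with illustrative handler and device names:

```c
/* Before: IRQF_DISABLED has been a no-op since 2.6.35 */
ret = request_irq(irq, my_handler, IRQF_DISABLED, "my-dev", dev);

/* After: simply drop the flag */
ret = request_irq(irq, my_handler, 0, "my-dev", dev);
```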
-
- 20 Oct 2013, 1 commit
-
By Victor Kamensky
In big-endian mode, mcpm_entry_point is the first function called on secondary CPUs. It should first switch the CPU into big-endian code.
[ben.dooks@codethink.co.uk: merge fix patch from Victor into this]
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
- 03 Oct 2013, 2 commits
-
By Andrea Adami
This fixes a regression for kernels after v3.2.
After commit 72662e01 ("ARM: head.S: only include __turn_mmu_on in the initial identity mapping"), Zaurus PXA devices call sharpsl_save_param() during fixup and hang on boot, because memcpy refers to physical addresses that are no longer valid once the MMU is set up. Zaurus collie (SA1100) is unaffected (the function is called in init_machine).
The code was making assumptions, and for PXA the virtual address should have been used before as well.
Signed-off-by: Marko Katic <dromede@gmail.com>
Signed-off-by: Andrea Adami <andrea.adami@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Nicolas Pitre
Currently mcpm_cpu_power_down() and mcpm_cpu_suspend() trigger BUG() if mcpm_platform_register() is not called beforehand. This may occur for many reasons, such as an incomplete device tree passed to the kernel or the like.
Let's be nicer to users and avoid killing the kernel if that happens, by logging a warning and returning to the caller. The mcpm_cpu_suspend() user is already set to deal with this situation, and so is cpu_die() invoking mcpm_cpu_die().
The problematic case would have been the b.L switcher's usage of mcpm_cpu_power_down(); however it has to call mcpm_cpu_power_up() first, which is already set to catch an error resulting from a missing mcpm_platform_register() call.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 Oct 2013, 1 commit
-
By Joel Fernandes
HWMOD removal for MMC is breaking edma_start, as events are being manually triggered due to the unused channel list not being cleared.
The above issue is fixed by reading the "dmas" property from the DT node, if it exists, and clearing the bits in the unused channel list if the DMA controller used by any device is EDMA. For this purpose we use the of_* helpers to parse the arguments in the dmas phandle list.
Also introduced is a minor cleanup of a checkpatch error in old code.
Reviewed-by: Sekhar Nori <nsekhar@ti.com>
Reported-by: Balaji T K <balajitk@ti.com>
Cc: Sekhar Nori <nsekhar@ti.com>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Nishanth Menon <nm@ti.com>
Cc: Pantel Antoniou <panto@antoniou-consulting.com>
Cc: Jason Kridner <jkridner@beagleboard.org>
Cc: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
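A hedged sketch of the DT walk described above: iterate a device node's "dmas" phandle list with the of_* helpers and mark the channel as used when the phandle points at this EDMA controller (the helper name and the bitmap are illustrative):

```c
#include <linux/of.h>
#include <linux/bitops.h>

/* Sketch: clear the "unused" bit for every EDMA channel that some device
 * node claims through its "dmas" property. */
static void edma_claim_dt_channels(struct device_node *dev_node,
				   struct device_node *edma_node,
				   unsigned long *unused_chans)
{
	struct of_phandle_args dma_spec;
	int i = 0;

	while (of_parse_phandle_with_args(dev_node, "dmas", "#dma-cells",
					  i++, &dma_spec) == 0) {
		if (dma_spec.np == edma_node && dma_spec.args_count > 0)
			clear_bit(dma_spec.args[0], unused_chans);
		of_node_put(dma_spec.np);
	}
}
```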
-
- 24 Sep 2013, 8 commits
-
By Dave Martin
When the switcher is active, there is no straightforward way to figure out which logical CPU a given physical CPU maps to.
This patch provides a function bL_switcher_get_logical_index(mpidr), which is analogous to get_logical_index(). This function returns the logical CPU on which the specified physical CPU is grouped (or -EINVAL if unknown). If the switcher is inactive or not present, -EUNATCH is returned instead.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
By Dave Martin
This patch exports a bL_switcher_trace_trigger() function to provide a means for drivers using the trace events to get the current status when starting a trace session.
Calling this function is equivalent to pinging the trace_trigger file in sysfs.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
-
By Dave Martin
When tracing switching, an external tracer needs a way to bootstrap its knowledge of the logical<->physical CPU mapping.
This patch adds a sysfs attribute trace_trigger. A write to this attribute will generate a power:cpu_migrate_current event for each online CPU, indicating the current physical CPU for each logical CPU.
Activating or deactivating the switcher also generates these events, so that the tracer knows about the resulting remapping of affected CPUs.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
-
By Dave Martin
This patch adds simple trace events to the b.L switcher code to allow tracing of CPU migration events.
To make use of the trace events, you will need:
CONFIG_FTRACE=y
CONFIG_ENABLE_DEFAULT_TRACERS=y
The following events are added:
* power:cpu_migrate_begin
* power:cpu_migrate_finish
each with the following data:
u64 timestamp;
u32 cpu_hwid;
power:cpu_migrate_begin occurs immediately before the switcher-specific migration operations start. power:cpu_migrate_finish occurs immediately when migration is completed.
The cpu_hwid field contains the ID fields of the MPIDR.
* For power:cpu_migrate_begin, cpu_hwid is the ID of the outbound physical CPU (equivalent to (from_phys_cpu, from_phys_cluster)).
* For power:cpu_migrate_finish, cpu_hwid is the ID of the inbound physical CPU (equivalent to (to_phys_cpu, to_phys_cluster)).
By design, the cpu_hwid field is masked in the same way as the device tree cpu node reg property, allowing direct correlation to the DT description of the hardware.
The timestamp is added in order to minimise timing noise. An accurate system-wide clock should be used for generating this (hopefully getnstimeofday is appropriate, but it could be changed). It could be any monotonic shared clock, since the aim is to allow accurate deltas to be computed. We don't necessarily care about accurate synchronisation with wall clock time.
In practice, each switch takes place on a single logical CPU, and the trace infrastructure should guarantee that events are well-ordered with respect to a single logical CPU.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
By Nicolas Pitre
In some cases, a significant delay may be observed between the moment a request for a CPU to come up is made and the moment it is ready to start executing kernel code. This is especially true when a whole cluster has to be powered up, which may take on the order of milliseconds. It is therefore a good idea to let the outbound CPU continue to execute code in the meantime, and be notified when the inbound is ready before performing the actual switch.
This is achieved by registering a completion block with the appropriate IPI callback, and programming the sending of an IPI by the early assembly code prior to entering the main kernel code. Once the IPI is delivered to the outbound CPU, the completion block is "completed" and the switcher thread is resumed.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
By Nicolas Pitre
This allows poking a predetermined value into a specific address upon entering the early boot code in bL_head.S.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
By Nicolas Pitre
Let's wait for the inbound CPU to come up and snoop some of the outbound CPU cache before bringing the outbound CPU down. That should be more efficient than going down right away.
Possible improvements might involve some monitoring of the CCI event counters.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
-
By Dave Martin
There is no explicit way to know when a switch started via bL_switch_request() is complete. This can lead to unpredictable behaviour when the switcher is controlled by a subsystem which makes dynamic decisions (such as cpufreq).
The CPU PM notifier is not really suitable for signalling completion, because the CPU could get suspended and resumed for other, independent reasons while a switch request is in flight. Adding a whole new notifier for this seems excessive, and may tempt people to put heavyweight code on this path.
This patch implements a new bL_switch_request_cb() function that allows for a per-request lightweight callback, private between the switcher and the caller of bL_switch_request_cb().
Overlapping switches on a single CPU are considered incorrect if they are requested via bL_switch_request_cb() with a callback (they will lead to an unpredictable final state without explicit external synchronisation to force the requests into a particular order). Queuing requests robustly would be overkill because only one subsystem should be attempting to control the switcher at any time. Overlapping requests of this kind will be failed with -EBUSY to indicate that the second request won't take effect and the completer will never be called for it.
bL_switch_request() is retained as a wrapper round the new function, with the old, fire-and-forget semantics. In this case the last request will always win. The request may still be denied if a previous request with a completer is still pending.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
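A sketch of how a caller such as cpufreq might use the per-request completer. The prototype shown for bL_switch_request_cb() is recalled from the switcher API of that time and should be treated as an assumption:

```c
#include <linux/completion.h>

/* Assumed prototype:
 * int bL_switch_request_cb(unsigned int cpu, unsigned int new_cluster_id,
 *                          bL_switch_completion_handler completer,
 *                          void *completer_cookie);
 * The completer runs once the switch has actually taken effect. */
static void my_switch_done(void *cookie)
{
	complete(cookie);
}

static int my_request_switch_and_wait(unsigned int cpu, unsigned int cluster)
{
	DECLARE_COMPLETION_ONSTACK(done);
	int ret;

	ret = bL_switch_request_cb(cpu, cluster, my_switch_done, &done);
	if (ret)
		return ret;	/* e.g. -EBUSY if another completer request overlaps */

	wait_for_completion(&done);
	return 0;
}
```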
-