- 26 Feb 2010, 4 commits
-
Committed by Magnus Damm

Allow building the ecovec board support code even though I2C support is disabled.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm

This patch adds board-specific R-standby resume code for ecovec.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm

This patch adds board-specific R-standby resume code for ms7724se.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm

Add code to save/restore registers during R-standby sleep on SH-Mobile processors.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 25 Feb 2010, 4 commits
-
Committed by Paul Mundt

All of the SH clocksource drivers follow the scheme that the IRQ is set up prior to registering the clockevent. The interrupt handler in the clockevent case relies on the event handler function pointer being filled in by the registration code, so an already-asserted IRQ can step into the handler before registration has had a chance to complete, hitting a NULL pointer dereference. In practice this is not an issue for most platforms, but some of them with fairly special loaders (or that are chain-loading from another kernel) can enter this situation. This fixes up the oops reported by Rafael on hp6xx.

Reported-and-tested-by: Rafael Ignacio Zurita <rafaelignacio.zurita@gmail.com>
Cc: stable@kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
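One defensive pattern against this kind of race is to have the timer interrupt handler tolerate firing before registration has completed. A minimal sketch follows; the driver structure and the my_timer_*() helpers are hypothetical, not the actual SH clocksource code, which may instead simply reorder setup.

```c
#include <linux/interrupt.h>
#include <linux/clockchips.h>

/* Hypothetical per-device state; the real drivers keep something similar. */
struct my_timer_priv {
	struct clock_event_device ced;
};

static void my_timer_ack(struct my_timer_priv *p)
{
	/* acknowledge/clear the timer interrupt in hardware (hypothetical) */
}

static irqreturn_t my_timer_interrupt(int irq, void *dev_id)
{
	struct my_timer_priv *p = dev_id;

	my_timer_ack(p);

	/*
	 * ced.event_handler is only filled in once clockevents
	 * registration has run.  If a stale IRQ (e.g. one left asserted
	 * by the boot loader or a chain-loaded kernel) gets in first,
	 * bail out instead of dereferencing a NULL pointer.
	 */
	if (!p->ced.event_handler)
		return IRQ_HANDLED;

	p->ced.event_handler(&p->ced);
	return IRQ_HANDLED;
}
```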
-
Committed by Kuninori Morimoto

The KEYSC::SCN register of the SH7724 is 3 bits wide; thus scan_timing should be in the range 0-7 here.

Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Kuninori Morimoto

Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Kuninori Morimoto

Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 23 Feb 2010, 2 commits
-
Committed by Paul Mundt

This hooks up the SET/GET_UNALIGN_CTL knobs, cribbing the bulk of the implementation from PPC and ia64. The thread flags happen to be the logical inverse of what the global fault mode is set to, so this works out fairly cleanly. By default the global fault mode is used, with tasks now being able to override their own settings via prctl().

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
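For reference, the per-task knob is driven from userspace through prctl(); a minimal usage sketch using the generic PR_* constants from <sys/prctl.h>, nothing SH-specific:

```c
#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	unsigned int mode = 0;

	/* Request SIGBUS on unaligned accesses instead of silent fixups. */
	if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS) != 0)
		perror("PR_SET_UNALIGN");

	/* Read the current per-task setting back. */
	if (prctl(PR_GET_UNALIGN, &mode) != 0)
		perror("PR_GET_UNALIGN");
	else
		printf("unaligned access mode: %#x\n", mode);

	return 0;
}
```

On architectures that do not wire up these hooks, both calls simply fail with EINVAL.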
-
Committed by Paul Mundt

Follow the ARM change, which is what our alignment helpers are based on in the first place.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 22 Feb 2010, 5 commits
-
Committed by Kuninori Morimoto

When FSI and the network (an NFS file system) were used at the same time, FSI I/O was unstable. This patch updates the SPU2 clock (which FSI uses) to solve the issue. Special thanks to Jeremy.

Signed-off-by: Jeremy Baker <Jeremy.Baker@renesas.com>
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm

Update the sh7724 processor code to always enable vpu_clk. On the Ecovec board, set vpu_clk to 166 MHz. The 166 MHz setting results in a divide-by-6 setup for vpu_clk and improves VPU performance compared to the power-on-reset/boot loader configuration.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
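Board code typically does this kind of adjustment through the generic clk API. A rough sketch of the idea follows, assuming the clock is looked up by the name "vpu_clk"; the exact lookup and error handling in the real Ecovec setup code may differ.

```c
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/init.h>

/* Illustrative only: raise the VPU clock during board setup. */
static int __init vpu_clk_setup_sketch(void)
{
	struct clk *clk;

	clk = clk_get(NULL, "vpu_clk");
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	/* 166 MHz corresponds to the divide-by-6 setting mentioned above */
	clk_set_rate(clk, clk_round_rate(clk, 166000000));
	clk_put(clk);

	return 0;
}
```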
-
Committed by Magnus Damm

This patch adds a ->kick() callback to clk_div4_table and ties it into sh_clk_div4_set_rate(). An sh7724-specific kick function is also added that updates the KICK bit whenever the div4 clocks in FRQCRA and FRQCRB have been set. This allows us to set the VPU clock.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
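Conceptually the hook looks something like the sketch below. The div_mult_table/kick field names follow the commit message, while the helper functions and the register write are placeholders rather than the real drivers/sh clock code.

```c
struct clk;			/* opaque here */
struct clk_div_mult_table;	/* divisor/multiplier data, as before */

/* div4-specific data now carries an optional ->kick() callback */
struct clk_div4_table {
	struct clk_div_mult_table *div_mult_table;
	void (*kick)(struct clk *clk);
};

/* sh7724-style callback: poke the KICK bit so new ratios take effect */
static void sh7724_div4_kick_sketch(struct clk *clk)
{
	/* set the KICK bit in FRQCRA here (register/bit details omitted) */
}

static int div4_set_rate_sketch(struct clk *clk,
				struct clk_div4_table *table,
				unsigned int divisor_index)
{
	/* ... write divisor_index into the FRQCRA/FRQCRB field here ... */

	if (table->kick)
		table->kick(clk);	/* apply the newly written divider */

	return 0;
}
```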
-
Committed by Magnus Damm

This patch introduces struct clk_div4_table. The structure will be used to keep div4-specific data; with this patch it replaces the struct clk_div_mult_table pointer argument used by the sh_clk_div4_register() functions.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm

Make sure the div4 bitfield is shifted according to the enable_bit value in sh_clk_div4_set_rate().

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 18 Feb 2010, 5 commits
-
Committed by Matt Fleming

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt
-
Committed by Paul Mundt

This implements a bit of rework for the PMB code, which permits us to kill off the legacy PMB mode completely. Rather than trusting the boot loader to do the right thing, we do a quick verification of the PMB contents to determine whether to have the kernel set up the initial mappings or whether it needs to mangle them later on instead.

If we're booting from legacy mappings, the kernel will now take control of them and make them match the kernel's initial mapping configuration. This is accomplished by breaking the initialization phase out into multiple steps: synchronization, merging, and resizing.

With the recent rework, the synchronization code already establishes page links for compound mappings, so we build on top of this for promoting mappings and reclaiming unused slots. At the same time, the changes introduced for the uncached helpers also permit us to dynamically resize the uncached mapping without any particular headaches. The smallest page size is more than sufficient for mapping all of kernel text, and as we're careful not to jump to any far-off locations in the setup code, the mapping can safely be resized regardless of whether we are executing from it or not.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt

The PMB code is an example of something that spends an absurd amount of time running uncached when only a couple of operations really need to be. This switches over to the shiny new uncached helpers, permitting us to spend far more time running cached. Additionally, MMUCR twiddling is perfectly safe from cached space given that it's paired with a control register barrier, so fix that up, too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt

There are lots of registers that can only be updated from the uncached mapping, so we add some helpers for those cases in order to make it easier to ensure that we only make the jump when it's absolutely necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 17 Feb 2010, 10 commits
-
Committed by Paul Mundt

This implements some locking for the PMB code. A high-level rwlock is added for dealing with read/write accesses on the entry map, while a per-entry spinlock is added to deal with the PMB entry changing out from underneath us.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
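The shape of the locking can be sketched as follows; the structure and function names are illustrative, the real code lives in arch/sh/mm/pmb.c.

```c
#include <linux/spinlock.h>

/* Illustrative stand-in for a PMB entry. */
struct pmb_entry_sketch {
	unsigned long vpn;
	unsigned long ppn;
	unsigned long flags;
	spinlock_t lock;		/* protects this single entry */
};

static DEFINE_RWLOCK(pmb_rwlock);	/* protects the entry map as a whole */

static void pmb_entry_set_flags(struct pmb_entry_sketch *pmbe,
				unsigned long flags)
{
	unsigned long irqflags;

	/* lookups/updates of individual entries take the map lock shared ... */
	read_lock(&pmb_rwlock);

	/* ... and serialize on the entry itself before touching it */
	spin_lock_irqsave(&pmbe->lock, irqflags);
	pmbe->flags = flags;
	/* write the updated entry back to the PMB registers here */
	spin_unlock_irqrestore(&pmbe->lock, irqflags);

	read_unlock(&pmb_rwlock);

	/* allocation/free of map slots would take write_lock(&pmb_rwlock) */
}
```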
-
Committed by Paul Mundt

Write-through PMB mappings still require the cache bit to be set, even if they're to be flagged with a different cache policy and bufferability bit. To reduce some of the confusion surrounding the flag encoding, we centralize the cache mask based on the system cache policy while we're at it.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt

This plugs in entry sizing support for existing mappings and then builds on top of that for linking together entries that map contiguous areas. This will ultimately permit us to coalesce mappings and promote head pages while reclaiming PMB slots for dynamic remapping.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt

This adds some helper routines for uncached mapping support. This simplifies some of the cases where we need to check the uncached mapping boundaries, in addition to giving us a centralized location to build more complex manipulation on top of.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt

Some overdue cleanup of the PMB code, killing off unused functionality and duplication sprinkled about the tree.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt

Both the store queue API and the PMB remapping take unsigned long for their pgprot flags, which cuts off the extended protection bits. In the case of the PMB this isn't really a problem, since the cache attribute bits that we care about are all in the lower 32 bits, but we do it just to be safe. The store queue remapping, on the other hand, depends on the extended prot bits for enabling userspace access to the mappings.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm

Update the sh7723 INTC tables with force_enable support to mask out pending unsupported SDHI interrupt sources. Without this patch the kernel locks up due to a pending SDHI interrupt that the tmio_mmc driver cannot handle.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm

Update the sh7722 INTC tables with force_enable support to mask out pending unsupported SDHI interrupt sources. Without this patch the kernel locks up due to a pending SDHI interrupt that the tmio_mmc driver cannot handle.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt

Presently there's an ordering issue with the chained handler change, which places set_irq_chip() after set_irq_chained_handler(). This causes a warning to be emitted, as the IRQ chip needs to be set first. However, there is the caveat that redirect IRQs can't use the parent IRQ's chip since they are just dummy redirects, resulting in intc_enable() blowing up when set_irq_chained_handler() attempts to start up the redirect IRQ. In these cases we can just use dummy_irq_chip directly, as we already extract the parent IRQ and chip from the redirect handler.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
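The resulting registration order boils down to roughly the following; this is a sketch using the pre-generic-irq-cleanup helpers of that era, not the actual drivers/sh/intc.c code, and the flow handler choice is illustrative.

```c
#include <linux/irq.h>

static void intc_register_sketch(unsigned int irq, int is_redirect,
				 struct irq_chip *intc_chip)
{
	/*
	 * 1. Set the chip first; redirect IRQs are just dummy entries,
	 *    so they get dummy_irq_chip rather than the parent's chip.
	 */
	set_irq_chip(irq, is_redirect ? &dummy_irq_chip : intc_chip);

	/* 2. Only then install the chained handler. */
	set_irq_chained_handler(irq, handle_simple_irq);
}
```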
-
Committed by Paul Mundt

vmemmap and the vmsplit code, amongst others, need to be able to take page faults much earlier than trap_init() time, so move this into the early CPU initialization. VBR setup for secondary CPUs is already handled through start_secondary(), so we only need to do this for the boot CPU.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 16 Feb 2010, 8 commits
-
Committed by Paul Mundt

The __va()/__pa() offsets and the boot memory offsets are consistent for all PMB users, so there is no need to special-case these for legacy PMB. Kill the special casing off and depend on CONFIG_PMB across the board. This also fixes up yet another addressing bug for sh64.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt

This merges the code for iterating over the legacy PMB mappings with the code for synchronizing software state with the hardware mappings. There's really no reason to do the same iteration twice, and this also buys us the legacy entry logging facility for the dynamic PMB case.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt

The PMB initialization code walks the entries and synchronizes the software PMB state with the hardware mappings, preserving the slot index. Unfortunately pmb_alloc() only tested the bit position in the entry map and failed to set it, so subsequent remaps could be dynamically assigned a slot that trampled an existing boot mapping, with general badness ensuing.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
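The underlying pattern is the usual claim-from-bitmap allocation, where the chosen bit has to actually be set, not just tested. A sketch, not the real pmb_alloc():

```c
#include <linux/bitops.h>
#include <linux/errno.h>

#define NR_PMB_ENTRIES	16

static unsigned long pmb_map;	/* one bit per PMB slot */

static int pmb_alloc_slot_sketch(int requested)
{
	int pos;

	if (requested >= 0) {
		/* claim a specific slot; fail if it is already taken */
		if (test_and_set_bit(requested, &pmb_map))
			return -EBUSY;
		return requested;
	}

	/* otherwise find any free slot and *claim it* (the missing step) */
	do {
		pos = find_first_zero_bit(&pmb_map, NR_PMB_ENTRIES);
		if (pos >= NR_PMB_ENTRIES)
			return -ENOSPC;
	} while (test_and_set_bit(pos, &pmb_map));

	return pos;
}
```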
-
Committed by Nobuhiro Iwamatsu

Signed-off-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm

Update the sh7724 INTC tables with force_enable support to mask out pending unsupported SDHI interrupt sources. Without this patch the kernel locks up due to a pending SDHI interrupt that the tmio_mmc driver cannot handle.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm

Extend the shared INTC code with force_disable support to allow keeping mask bits statically disabled. Needed for SDHI support to mask out unsupported interrupt sources.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
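On the SoC side the new knob is consumed from the interrupt controller descriptor. Only the force_enable/force_disable field names below come from these patches; the descriptor name, the ENABLED/DISABLED enum IDs and the INTC_HW_DESC() usage are placeholders patterned after the mainline sh_intc descriptors.

```c
/*
 * Placeholder sketch: ENABLED/DISABLED stand for intc_enum group IDs
 * covering the sources to keep unmasked or permanently masked
 * (e.g. the unsupported SDHI sources described above).
 */
static struct intc_desc intc_desc_sketch __initdata = {
	.name		= "sh772x",
	.force_enable	= ENABLED,	/* keep these sources enabled */
	.force_disable	= DISABLED,	/* keep these sources masked  */
	.hw		= INTC_HW_DESC(vectors, groups, mask_registers,
				       prio_registers, sense_registers,
				       ack_registers),
};
```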
-
Committed by Phil Edworthy

Fixed SH-Mobile panning. Previously the address of the frame to be displayed was updated in the VSync end interrupt, which meant there was a minimum of one frame between calling the FBIOPAN_DISPLAY ioctl and the pan occurring. As a result, apps were not able to use the FBIO_WAITFORVSYNC ioctl to wait for the pan to complete. This patch moves the write to the LDSA1R mirror register into the pan ioctl. Tested on the MS7724 board against 2.6.33-rc7.

Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
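In framebuffer terms, the change amounts to programming the new scanout address directly in the pan callback instead of deferring it to the vsync-end interrupt. A rough sketch follows; the channel struct and register mapping are placeholders, and LDSA1R is the start-address register named above.

```c
#include <linux/fb.h>
#include <linux/io.h>

/* Placeholder for the per-channel state the real driver keeps in info->par. */
struct lcdc_chan_sketch {
	void __iomem *ldsa1r_mirror;	/* mapped LDSA1R mirror register */
};

static int sh_mobile_pan_display_sketch(struct fb_var_screeninfo *var,
					struct fb_info *info)
{
	struct lcdc_chan_sketch *ch = info->par;
	unsigned long base;

	/* scanout address implied by the requested pan offsets */
	base = info->fix.smem_start
	     + var->yoffset * info->fix.line_length
	     + var->xoffset * (var->bits_per_pixel / 8);

	/*
	 * Write it here, in the FBIOPAN_DISPLAY path, rather than in the
	 * vsync-end interrupt, so a subsequent FBIO_WAITFORVSYNC really
	 * does cover the point where the pan takes effect.
	 */
	iowrite32(base, ch->ldsa1r_mirror);

	return 0;
}
```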
-
Committed by Phil Edworthy

Added the FBIO_WAITFORVSYNC ioctl for SH-Mobile devices. Tested on the MS7724 and MigoR boards against 2.6.33-rc7.

Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
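From userspace the ioctl is used roughly like this; a minimal sketch, noting that on kernels of that era the FBIO_WAITFORVSYNC definition may need to be supplied locally if <linux/fb.h> does not provide it.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

#ifndef FBIO_WAITFORVSYNC
#define FBIO_WAITFORVSYNC _IOW('F', 0x20, __u32)
#endif

int main(void)
{
	__u32 crtc = 0;				/* first/only CRTC */
	int fd = open("/dev/fb0", O_RDWR);

	if (fd < 0) {
		perror("open /dev/fb0");
		return 1;
	}

	/* Block until the next vertical sync has occurred. */
	if (ioctl(fd, FBIO_WAITFORVSYNC, &crtc) < 0)
		perror("FBIO_WAITFORVSYNC");

	close(fd);
	return 0;
}
```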
-
- 15 Feb 2010, 2 commits
-
Committed by Paul Mundt

The change fixing up sh64 inadvertently inverted the logic for legacy PMB; fix that back up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt
-