- 14 Aug 2009: 2 commits
Submitted by Paul Mundt
This was using internal symbols for unaligned accesses, bypassing the exposed interface for variable sized safe accesses. This converts all of the __get_unaligned_cpuXX() users over to get_unaligned() directly, relying on the cast to select the proper internal routine.

Additionally, the __put_unaligned_cpuXX() case is superfluous given that the destination address is aligned in all of the current cases, so just drop that outright.

Furthermore, this switches to the asm/unaligned.h header instead of the asm-generic version, which was silently bypassing the SH-4A optimized unaligned ops.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
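A minimal sketch of the conversion described above, assuming a hypothetical helper that reads a 32-bit field from a possibly unaligned position in a byte buffer; the function name and the offset are illustrative and not taken from the original patch:

#include <asm/unaligned.h>
#include <linux/types.h>

/* Hypothetical helper, not from the patch: read a 32-bit field that may
 * sit at an unaligned offset inside a byte buffer. */
static u32 read_len_field(const u8 *buf)
{
	/* Before: a fixed-width internal helper was called directly. */
	/* u32 len = __get_unaligned_cpu32(buf + 4); */

	/* After: get_unaligned() infers the 32-bit access width from the
	 * cast, and including asm/unaligned.h lets the architecture supply
	 * an optimized routine (e.g. the SH-4A unaligned ops) underneath. */
	return get_unaligned((const u32 *)(buf + 4));
}

The design point is that callers state only the width they want via the pointer type; the header decides how the access is actually performed on each CPU.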
Submitted by Matt Fleming
This is a first cut at a generic DWARF unwinder for the kernel. It's still lacking DWARF64 support and the DWARF expression support hasn't been tested very well, but it is generating proper stacktraces on SH for WARN_ON() and NULL dereferences.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
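To give a rough idea of the shape of a CFI-based unwinder like the one described above, here is a conceptual sketch only, not the actual code from this commit; struct dwarf_fde, dwarf_lookup_fde() and dwarf_execute_cfa() are hypothetical names invented for the example.

#include <linux/kernel.h>

/* Conceptual sketch: the helpers below stand in for the real CFI
 * machinery (FDE lookup and CFA instruction interpretation). */
struct dwarf_fde;                       /* frame description entry (opaque) */

struct dwarf_frame {
	unsigned long pc;               /* address within this frame */
	unsigned long cfa;              /* canonical frame address */
	unsigned long ra;               /* recovered return address */
};

struct dwarf_fde *dwarf_lookup_fde(unsigned long pc);
void dwarf_execute_cfa(struct dwarf_fde *fde, struct dwarf_frame *frame);

static void dwarf_unwind_stack(unsigned long pc, unsigned long sp)
{
	struct dwarf_frame frame = { .pc = pc, .cfa = sp, .ra = 0 };

	while (frame.pc) {
		struct dwarf_fde *fde = dwarf_lookup_fde(frame.pc);

		if (!fde)
			break;          /* no CFI data covers this address */

		/* Interpret the FDE's CFA instructions to recover the
		 * caller's frame address and return address for this pc. */
		dwarf_execute_cfa(fde, &frame);

		printk("  [<%08lx>]\n", frame.pc);

		frame.pc = frame.ra;    /* step up to the caller */
	}
}

The loop repeats a single step: find the frame description entry covering the current pc, replay its call-frame information to recover where the caller's frame and return address live, print the frame, then continue from the caller.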