- 09 Nov 2005, 1 commit
Committed by Russell King
save_and_disable_irqs does not need to use mov + msr (which was introduced to work around a documentation bug that was propagated into binutils). Use msr with an immediate constant, and if we're building for ARMv6 or later, use the new CPSID instruction.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
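As a hedged illustration of the difference, here is a minimal C inline-asm sketch with a hypothetical helper name, assuming the kernel's __LINUX_ARM_ARCH__ macro and the usual PSR constant values; the real code is an assembler macro:

```c
/* Sketch: save CPSR, then disable IRQs without a scratch register.
 * 0x93 = PSR_I_BIT | SVC_MODE (assumed values, for illustration). */
static inline unsigned long sketch_save_and_disable_irqs(void)
{
	unsigned long flags;

	asm volatile("mrs %0, cpsr" : "=r" (flags));
#if __LINUX_ARM_ARCH__ >= 6
	asm volatile("cpsid i" ::: "memory", "cc");	/* ARMv6+: CPSID */
#else
	/* msr accepts an immediate, so no mov into a temp is needed */
	asm volatile("msr cpsr_c, #0x93" ::: "memory", "cc");
#endif
	return flags;
}
```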
-
- 05 Nov 2005, 1 commit
Committed by Nicolas Pitre
Patch from Nicolas Pitre

ARM processors that have the pld instruction no longer use those copy_user implementations. Remove the useless PLD lines, which were half wrong anyway.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 Nov 2005, 1 commit
Committed by Russell King
The 'K' extension adds several new instructions to the ARMv6 ISA which are primarily useful for SMP.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
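For context: ARMv6K adds, among other things, byte/halfword/doubleword exclusive accesses (ldrexb, strexb and friends) plus hint instructions such as WFE/SEV used by SMP spinlocks. A hedged sketch, with a hypothetical helper name, of what the sub-word exclusives enable:

```c
/* Sketch: atomic byte exchange using the ARMv6K ldrexb/strexb
 * exclusive pair (illustrative only, not the kernel's own helper). */
static inline unsigned char sketch_xchg_byte(volatile unsigned char *p,
					     unsigned char new)
{
	unsigned char old;
	unsigned int failed;

	do {
		asm volatile(
			"ldrexb	%0, [%2]\n\t"	/* load-exclusive byte */
			"strexb	%1, %3, [%2]"	/* 0 on success, 1 if lost */
			: "=&r" (old), "=&r" (failed)
			: "r" (p), "r" (new)
			: "memory");
	} while (failed);	/* retry if exclusivity was lost */

	return old;
}
```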
-
- 02 Nov 2005, 3 commits
Committed by Nicolas Pitre
Patch from Nicolas Pitre

This patch provides a preemption-safe implementation of copy_to_user and copy_from_user based on the copy template also used for memcpy. It is enabled unconditionally when CONFIG_PREEMPT=y. Otherwise, if the configured architecture is not ARMv3, it is enabled as well, since it gives better performance at least on StrongARM and XScale cores. If ARMv3 is not too badly affected, or if it doesn't matter too much, then uaccess.S could be removed altogether.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
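The key to preemption safety is performing the user-side access with ldrt/strt, which apply user-mode permission checks (see the memcpy template commit below for the full rationale). A minimal hedged sketch of the load side, with a hypothetical name and the fault-fixup machinery omitted:

```c
/* The load half of the idea: ldrt performs the access with
 * user-mode permission checking, so a protected or COW'd user page
 * faults instead of being read with full kernel privileges.
 * (Hypothetical helper; the exception-table fixup that handles the
 * fault is omitted.) */
static inline unsigned long sketch_user_load_word(const unsigned long *p)
{
	unsigned long val;

	asm volatile("ldrt %0, [%1]" : "=r" (val) : "r" (p));
	return val;
}
```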
-
Committed by Nicolas Pitre
Patch from Nicolas Pitre

This patch provides a new implementation of the optimized memory copy functions on ARM. It is made of two levels: a template that consists of the core copy code, and separate files that define macros to be used with the core code depending on the type of copy needed. This allows for the best performance while sharing the same core for implementing memcpy(), copy_from_user() and copy_to_user(), for instance.

Two reasons for this work:

1) The current copy_to_user/copy_from_user implementation assumes that no task switch will ever occur in the middle of each copied page, making it completely unsafe with CONFIG_PREEMPT=y.

2) The current copy implementations are measurably suboptimal, and optimizing different implementations separately is a pain that creates more opportunities for bugs.

The reason for (1) is the fact that copies inside user pages are performed with the ldm instruction, which has no means of testing user protections and could race with process preemption, bypassing the COW mechanism for example. This is a longstanding issue that we have been saying ought to be fixed for about two years now. The solution is to substitute those ldm instructions with a series of ldrt or strt instructions to enforce user memory protection. At least on StrongARM and XScale cores, ldm is not faster than the equivalent ldr/str instructions with a warm i-cache, so there is no measurable performance degradation with that change. The fact that the copy code is a template makes it pretty easy to reuse the same core code for memcpy and benefit from the same performance optimizations.

Now (2) is best demonstrated with actual throughput measurements. First, here is a summary of memcpy tests performed on a StrongARM core:

PTR alignment  buffer size  kernel version  this version
------------------------------------------------------------
aligned             32          59.73         107.43
unaligned           32          61.31          74.72
aligned            100         132.47         136.15
unaligned          100         103.84         123.76
aligned           4096         130.67         130.80
unaligned         4096         130.68         130.64
aligned        1048576          68.03          68.18
unaligned      1048576          68.03          68.18

The buffer size is in bytes and the measured speed in MB/s. The copy was performed repeatedly with the given buffer and the throughput averaged over 3 seconds.

Here we can see that the current kernel version has a higher entry cost that shows up with small buffers. As the buffer size grows, both implementations converge to the same throughput.

Now here's the exact same test performed on an XScale core (PXA255):

PTR alignment  buffer size  kernel version  this version
------------------------------------------------------------
aligned             32          46.99          77.58
unaligned           32          53.61          59.59
aligned            100         107.19         136.59
unaligned          100          83.61          97.58
aligned           4096         129.13         129.98
unaligned         4096         128.36         128.53
aligned        1048576          53.76          59.41
unaligned      1048576          33.67          56.96

Again we can see the entry setup cost being higher for the current kernel before getting to the main copy loop. Throughput results then converge as long as the buffer remains in the cache. The 1MB case shows larger differences, probably due to better pld placement and/or fewer instruction interlocks in this proposed implementation.

Disclaimer: the PXA system was running with slower clocks than the StrongARM system, so trying to infer any conclusion by comparing those separate sets of results side by side would be completely inappropriate.

So... what this patch does is replace both memcpy and memmove with an implementation based on the provided copy code template. The memmove code is kept separate, since it is used only if the memory areas involved overlap, in which case the code is a transposition of the template with the copy occurring in the opposite direction (trying to fit that mode into the template turned it into a mess not worth it for memmove alone). And obviously both memcpy and memmove were tested with all kinds of pointer alignments and buffer sizes to exercise all code paths for correctness.

The next patch will provide the now-trivial replacement implementations of copy_to_user and copy_from_user.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
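A hedged C rendering of the template idea (the real template is assembly; all names here are hypothetical): write the core loop once and parameterise the one primitive that differs per copy flavour.

```c
#include <stddef.h>

/* The shared "core": one copy loop, written once. Each flavour
 * supplies its own word-load primitive, just as the assembly
 * template is included with different per-flavour macro
 * definitions. */
#define DEFINE_WORD_COPY(name, load_word)				\
static void name(unsigned long *dst, const unsigned long *src,		\
		 size_t words)						\
{									\
	while (words--)							\
		*dst++ = load_word(src++);				\
}

static inline unsigned long kernel_load(const unsigned long *p)
{
	return *p;	/* memcpy flavour: plain load */
}

/* memcpy() core; copy_from_user() would instantiate the same loop
 * with the ldrt-based user load sketched in the commit above. */
DEFINE_WORD_COPY(copy_words_kernel, kernel_load)
```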
-
Committed by Nicolas Pitre
Patch from Nicolas Pitre

Required for future enhancement patches.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 31 Oct 2005, 1 commit
Committed by Nicolas Pitre
Patch from Nicolas Pitre

This patch gets rid of the last C implementations of the libgcc functions the kernel needs, replacing them with optimized assembly versions. Those functions are:

  __ashldi3
  __ashrdi3
  __lshrdi3
  __muldi3
  __ucmpdi2

The first three were lifted from gcc; the other two were written from scratch.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
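For reference, a C sketch (hypothetical name, illustration only) of the semantics one of these helpers must implement: __ashldi3 is the 64-bit left shift gcc emits for `long long` values, composed from 32-bit halves on a 32-bit target.

```c
/* Reference semantics for __ashldi3, built from 32-bit halves the
 * way a 32-bit machine must do it (illustrative only). */
unsigned long long sketch_ashldi3(unsigned long long val, int shift)
{
	unsigned int lo = (unsigned int)val;
	unsigned int hi = (unsigned int)(val >> 32);

	if (shift == 0)
		return val;
	if (shift >= 32) {		/* low half shifts entirely out */
		hi = lo << (shift - 32);
		lo = 0;
	} else {			/* bits carry from low into high */
		hi = (hi << shift) | (lo >> (32 - shift));
		lo <<= shift;
	}
	return ((unsigned long long)hi << 32) | lo;
}
```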
-
- 28 Oct 2005, 1 commit
Committed by Nicolas Pitre
Patch from Nicolas Pitre

Here's an ARM assembly SHA1 implementation to replace the default C version. It is approximately 50% faster than the generic C version. On an XScale processor running at 400MHz:

  generic C version:  9.8 MB/s
  my version:        14.5 MB/s

This code is useful to quite a few callers in the tree:

  crypto/sha1.c:          sha_transform(sctx->state, sctx->buffer, temp);
  crypto/sha1.c:          sha_transform(sctx->state, &data[i], temp);
  drivers/char/random.c:  sha_transform(buf, (__u8 *)r->pool+i, buf + 5);
  drivers/char/random.c:  sha_transform(buf, (__u8 *)data, buf + 5);
  net/ipv4/syncookies.c:  sha_transform(tmp + 16, (__u8 *)tmp, tmp + 16 + 5);

Signed-off-by: Nicolas Pitre <nico@cam.org>

Seems to work fine on big-endian as well.

Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
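A hedged usage sketch of that common interface, assuming the sha_init()/sha_transform() declarations and the SHA_DIGEST_WORDS/SHA_WORKSPACE_WORDS constants from <linux/cryptohash.h>; the assembly version is a drop-in behind the same prototype:

```c
#include <linux/cryptohash.h>

/* Hash one 64-byte block (illustrative only; real callers also run
 * SHA-1's padding and length-append steps around this). */
static void sketch_hash_block(const char block[64])
{
	__u32 digest[SHA_DIGEST_WORDS];	/* 5-word SHA-1 state */
	__u32 W[SHA_WORKSPACE_WORDS];	/* scratch workspace   */

	sha_init(digest);		/* standard initial state */
	sha_transform(digest, block, W);	/* mix in one block */
}
```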
-
- 10 Sep 2005, 1 commit
Committed by Sam Ravnborg
Delete obsolete stuff from the arch Makefile and rename constants.h to asm-offsets.h.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
-
- 10 Aug 2005, 1 commit
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Jul 2005, 1 commit
Committed by Russell King
We sometimes forgot to check whether the exclusive store succeeded. Ensure that we always check. Also ensure that we always use the out-of-line versions, since the inline versions are not SMP-safe.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
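The pattern in question, as a hedged C inline-asm sketch (hypothetical helper, not the kernel's actual bitops code): strex writes 0 to its status register only when the store succeeded, and omitting that check can silently lose an update.

```c
/* Atomically OR a mask into a word, retrying until the exclusive
 * store succeeds (illustrative only). */
static inline void sketch_atomic_or(unsigned long *addr, unsigned long mask)
{
	unsigned long tmp, failed;

	do {
		asm volatile(
			"ldrex	%0, [%2]\n\t"	/* load-exclusive */
			"orr	%0, %0, %3\n\t"
			"strex	%1, %0, [%2]"	/* %1 = 0 on success */
			: "=&r" (tmp), "=&r" (failed)
			: "r" (addr), "r" (mask)
			: "memory");
	} while (failed);	/* the check this commit enforces */
}
```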
-
- 28 Jul 2005, 1 commit
Committed by Russell King
If we found that the bit was already in the desired state, we would skip performing the operation and write random data back.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 17 Jul 2005, 1 commit
Committed by Alexander Schulz
Patch from Alexander Schulz

This patch brings a new default config file for the Shark, fixes a compilation issue with I/O addressing, and fixes a runtime problem with the serial ports, where I corrected a wrong regshift value. These are all Shark-specific files, so I hope it is OK to put them in one patch.

Signed-off-by: Alexander Schulz <alex@shark-linux.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 16 Jul 2005, 1 commit
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 30 Jun 2005, 1 commit
Committed by Nicolas Pitre
Patch from Nicolas Pitre

Those are big, slow and generally not recommended for kernel code. They are not even present on i386. So it should be concluded that one could just as well get away with do_div() alone.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
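For context, do_div() is the kernel's one retained 64-by-32 division primitive: it divides a u64 in place by a 32-bit divisor and returns the 32-bit remainder. A hedged usage sketch (hypothetical function and values):

```c
#include <linux/types.h>
#include <asm/div64.h>

/* Split a nanosecond count into microseconds plus remainder.
 * do_div() writes the quotient back into *ns and returns the
 * remainder, so no libgcc 64-bit division helper is needed. */
static u32 sketch_ns_to_us(u64 *ns)
{
	return do_div(*ns, 1000);	/* *ns /= 1000; returns *ns % 1000 */
}
```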
-
- 20 Jun 2005, 2 commits
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Convert ugly GCC types to Linux types:

  UQImode   -> u8
  SImode    -> s32
  USImode   -> u32
  DImode    -> s64
  UDImode   -> u64
  word_type -> int

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 09 Jun 2005, 1 commit
Committed by Nicolas Pitre
Patch from Nicolas Pitre

Signed-off-by: Nicolas Pitre
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 Apr 2005, 1 commit
Committed by Russell King
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
-
- 17 Apr 2005, 2 commits
Committed by Russell King
Convert the ARM bitop assembly to a macro. All bitops follow the same format, so it's silly to duplicate the code when only one or two instructions differ.

Signed-off-by: Russell King <rmk@arm.linux.org.uk>
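A hedged C rendering of the macro idea (the real conversion is an assembler .macro; all names here are hypothetical): one shared body, with only the single logical operation varying per bitop.

```c
#include <linux/irqflags.h>	/* local_irq_save/restore (kernel context) */

/* One shared body, parameterised by the only varying operation:
 * orr for set, bic for clear, eor for change (illustrative only;
 * the historical ARM bitops worked on bytes with IRQs masked). */
#define DEFINE_BITOP(name, apply)					\
static void name(int nr, volatile unsigned char *p)			\
{									\
	unsigned char mask = 1 << (nr & 7);				\
	unsigned long flags;						\
									\
	p += nr >> 3;							\
	local_irq_save(flags);		/* pre-SMP exclusivity */	\
	*p = apply(*p, mask);		/* the one varying op */	\
	local_irq_restore(flags);					\
}

#define op_orr(v, m)	((v) | (m))	/* set_bit    */
#define op_bic(v, m)	((v) & ~(m))	/* clear_bit  */
#define op_eor(v, m)	((v) ^ (m))	/* change_bit */

DEFINE_BITOP(sketch_set_bit,    op_orr)
DEFINE_BITOP(sketch_clear_bit,  op_bic)
DEFINE_BITOP(sketch_change_bit, op_eor)
```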
-
Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!
-