- 16 September 2008, 13 commits
-
-
By Sebastien Dugue

irq_radix_revmap() currently serves two purposes: irq mapping lookup and insertion, which happen in interrupt and process context respectively. Separate the function into its two components, one for lookup only and one for insertion only. Fix the only user of the revmap tree (XICS) to use the new functions.

Also, at tree initialization time, insert into the radix tree those irqs that were requested before the tree existed.

Mutual exclusion between the tree initialization and readers/writers is handled via a state variable (revmap_trees_allocated), set to 1 when the tree has been initialized and set to 2 after the already-requested irqs have been inserted in the tree by the init path. This state is checked before any reader or writer access, just like we used to check for tree.gfp_mask != 0 before.

Finally, now that we no longer insert nodes into the radix tree in interrupt context, turn the GFP_ATOMIC allocations into GFP_KERNEL ones.

Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
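A minimal C sketch of the described split, guarded by the revmap_trees_allocated state variable (names follow the commit's description of the powerpc irq code; treat this as an illustration, not the exact patch):

```c
/* irq_map[] and revmap_trees_allocated are globals in the powerpc irq code */

static unsigned int irq_radix_revmap_lookup(struct irq_host *host,
					    irq_hw_number_t hwirq)
{
	struct irq_map_entry *ptr;

	/* Interrupt context: read-only, no allocation. */
	if (revmap_trees_allocated < 1)
		return irq_find_mapping(host, hwirq);	/* tree not ready yet */

	ptr = radix_tree_lookup(&host->revmap_data.tree, hwirq);
	if (ptr)
		return ptr - irq_map;	/* virq is the index into irq_map[] */
	return NO_IRQ;
}

static void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
				    irq_hw_number_t hwirq)
{
	if (revmap_trees_allocated < 1)
		return;		/* the init path will insert this irq later */

	/* Process context only, so GFP_KERNEL allocation is now safe. */
	radix_tree_insert(&host->revmap_data.tree, hwirq, &irq_map[virq]);
}
```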
-
By Becky Bruce

It's the size of the hardware PTE; make that clear in the name.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Thiemo Seufer

Those two are required on my fresh gcc 4.3.1.

Signed-off-by: Thiemo Seufer <ths@linutronix.de>
Signed-off-by: Sebastian Siewior <bigeasy@linutronix.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Christoph Hellwig

sys32_pause is a useless copy of the generic sys_pause.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Paul Mackerras

This implements CONFIG_RELOCATABLE for 64-bit by building the kernel as a position-independent executable (PIE) when that option is set. This involves processing the dynamic relocations in the image in the early stages of booting, even if the kernel is being run at the address it is linked at, since the linker does not necessarily fill in words in the image for which there are dynamic relocations. (In fact the linker does fill in such words for 64-bit executables, though not for 32-bit executables, so in principle we could avoid calling relocate() entirely when we're running a 64-bit kernel at the linked address.)

The dynamic relocations are processed by a new function relocate(addr), where the addr parameter is the virtual address where the image will be run. In fact we call it twice: once before calling prom_init, and again when starting the main kernel. This means that reloc_offset() returns 0 in prom_init (since it has been relocated to the address it is running at), which necessitated a few adjustments.

This also changes __va and __pa to use an equivalent definition that is simpler. With the relocatable kernel, PAGE_OFFSET and MEMORY_START are constants (for 64-bit) whereas PHYSICAL_START is a variable (and KERNELBASE ideally should be too, but isn't yet).

With this, relocatable kernels still copy themselves down to physical address 0 and run there.

Signed-off-by: Paul Mackerras <paulus@samba.org>
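The relocation pass itself boils down to walking the image's .rela.dyn entries and patching each target word. A simplified C sketch, assuming for clarity that the image is linked at address 0 and that only R_PPC64_RELATIVE entries appear (the real kernel performs this in early assembly, and the section bounds below are assumed to come from the linker script):

```c
#include <elf.h>	/* Elf64_Rela, ELF64_R_TYPE, R_PPC64_RELATIVE */

/* Assumed linker-script symbols bracketing the .rela.dyn section. */
extern Elf64_Rela __rela_dyn_start[], __rela_dyn_end[];

/* Patch the image so it can run at virtual address 'addr'. */
static void relocate(unsigned long addr)
{
	Elf64_Rela *r;

	for (r = __rela_dyn_start; r < __rela_dyn_end; r++) {
		if (ELF64_R_TYPE(r->r_info) != R_PPC64_RELATIVE)
			continue;
		/* R_PPC64_RELATIVE: *(base + offset) = base + addend */
		*(unsigned long *)(addr + r->r_offset) = addr + r->r_addend;
	}
}
```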
-
By Paul Mackerras

Using LOAD_REG_IMMEDIATE to get the address of kernel symbols generates 5 instructions where LOAD_REG_ADDR can do it in one, and will generate R_PPC64_ADDR16_* relocations in the output when we get to building the kernel as a position-independent executable, which we'd rather not have to handle.

This changes various bits of assembly code to use LOAD_REG_ADDR when we need to get the address of a symbol, or to use suitable position-independent code for cases where we can't access the TOC for various reasons, or if we're not running at the address we were linked at.

It also cleans up a few minor things: there's no reason to save and restore SRR0/1 around RTAS calls, __mmu_off can get the return address from LR more conveniently than the caller can supply it in R4 (and we already assume elsewhere that EA == RA if the MMU is on in early boot), and enable_64b_mode was using 5 instructions where 2 would do.

Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Paul Mackerras

This changes the way that the exception prologs transfer control to the handlers in 64-bit kernels, with the aim of making it possible to have the prologs separate from the main body of the kernel.

Now, instead of computing the address of the handler by taking the top 32 bits of the paca address (to get the 0xc0000000........ part) and ORing in something in the bottom 16 bits, we get the base address of the kernel by doing a load from the paca and add an offset.

This also replaces an mfmsr and an ori to compute the MSR value for the handler with a load from the paca. That makes it unnecessary to have a separate version of EXCEPTION_PROLOG_PSERIES that forces 64-bit mode.

We can no longer use direct branches in the exception prolog code, which means that the SLB miss handlers can't branch directly to .slb_miss_realmode any more. Instead we have to compute the address and do an indirect branch. This is conditional on CONFIG_RELOCATABLE; for non-relocatable kernels we use a direct branch as before. (A later change will allow CONFIG_RELOCATABLE to be set on 64-bit powerpc.)

Since the secondary CPUs on pSeries start execution in the first 0x100 bytes of real memory and then have to get to wherever the kernel is, we can't use a direct branch to get there. Instead this changes __secondary_hold_spinloop from a flag to a function pointer. When it is set to a non-NULL value, the secondary CPUs jump to the function pointed to by that value.

Finally this eliminates one code difference between 32-bit and 64-bit by making __secondary_hold be the text address of the secondary CPU spinloop rather than a function descriptor for it.

Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Paul Mackerras

This rearranges head_64.S so that we have all the first-level exception prologs together starting at 0x100, followed by all the second-level handlers that are invoked from the first-level prologs, followed by other code. This doesn't make any functional change but will make the following changes for relocatable kernel support easier.

Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Chandru

The kdump kernel needs to use only those memory regions that it is allowed to use (crashkernel, rtas, tce, etc.). Each of these regions has its own size and is currently added under the 'linux,usable-memory' property under each memory@xxx node of the device tree.

The ibm,dynamic-memory property of the ibm,dynamic-reconfiguration-memory node (on POWER6) now stores the representation for most of the logical memory blocks, with the size of each memory block being a constant (lmb_size). If one or more of the above-mentioned regions, in whole or in part, lie under one of the lmbs from the ibm,dynamic-memory property, there is a need to identify those regions within the given lmb.

This makes the kernel recognize a new 'linux,drconf-usable-memory' property added by kexec-tools. Each entry in this property is of the form of a count followed by that many (base, size) pairs for the above-mentioned regions. The number of cells in the count value is given by the #size-cells property of the root node.

Signed-off-by: Chandru Siddalingappa <chandru@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
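A hedged C sketch of how such an entry could be walked; the helper names here are made up for illustration, and the real parser lives in the powerpc early device-tree code:

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Read 'cells' big-endian 32-bit cells as one value, advancing the cursor. */
static u64 read_cells(const __be32 **p, int cells)
{
	u64 v = 0;

	while (cells--)
		v = (v << 32) | be32_to_cpup((*p)++);
	return v;
}

/* One linux,drconf-usable-memory entry: <count>, then count (base, size) pairs. */
static void walk_usable_entry(const __be32 *p, int size_cells)
{
	u64 count = read_cells(&p, size_cells);

	while (count--) {
		u64 base = read_cells(&p, size_cells);
		u64 size = read_cells(&p, size_cells);

		/* Register [base, base + size) as usable for the kdump kernel. */
	}
}
```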
-
By Nathan Fontenot

The return code from invocation of the notifier for pSeries_reconfig_chain during update of the device tree is not checked. This causes writes to /proc/ppc64/ofdt that update memory properties (i.e. ibm,dynamic-reconfiguration-memory) to always return success, instead of the result of the notifier chain. This happens specifically when we remove/add memory from the device tree on machines using memory specified in the ibm,dynamic-reconfiguration-memory property of the device tree.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
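The general shape of the fix is to propagate the chain's verdict instead of discarding it. A sketch using the standard notifier helpers (the exact call site and function name are assumptions for illustration):

```c
#include <linux/notifier.h>

extern struct blocking_notifier_head pSeries_reconfig_chain;

static int notify_reconfig(unsigned long action, void *node)
{
	int rc;

	rc = blocking_notifier_call_chain(&pSeries_reconfig_chain,
					  action, node);
	if (rc == NOTIFY_BAD || rc == NOTIFY_STOP)
		return notifier_to_errno(rc);	/* propagate the failure */
	return 0;
}
```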
-
By Mark Nelson

This new copy_4K_page() function was originally tuned for the best performance on the Cell processor, but after testing on more 64-bit powerpc chips it was found that, with a small modification, it either matched the performance offered by the current mainline version or bettered it by a small amount.

It was found that on a Cell-based QS22 blade the amount of system time measured when compiling a 2.6.26 pseries_defconfig decreased by 4%. Using the same test, a 4-way 970MP machine saw a decrease of 2% in system time. No noticeable change was seen on Power4, Power5 or Power6.

The 4096-byte page is copied in thirty-two 128-byte strides. An initial setup loop executes dcbt instructions for the whole source page and dcbz instructions for the whole destination page. To do this, the cache line size is retrieved from ppc64_caches.

A new CPU feature bit, CPU_FTR_CP_USE_DCBTZ (introduced in the previous patch), is used to make the modification to this new copy routine - on Power4, 970 and Cell the feature bit is set so the setup loop is executed, but on all other 64-bit chips the setup loop is nop'ed out.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Mark Nelson

Add a new CPU feature bit, CPU_FTR_CP_USE_DCBTZ, to be added to the 64-bit powerpc chips that benefit from having dcbt and dcbz instructions used in their memory copy routines. This will be used in a subsequent patch that updates copy_4K_page(). The new bit is added to Cell, PPC970 and Power4 because they show better performance with the new copy_4K_page() when dcbt and dcbz instructions are used.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By roel kluin

Evidently MACIO_FLAG_SCCA_ON was meant.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 10 September 2008, 2 commits
-
-
By Prarit Bhargava

When using kdump, modifying the e820 map yields strange results. For example, starting with the BIOS-provided physical RAM map:

BIOS-e820: 0000000000000100 - 0000000000093400 (usable)
BIOS-e820: 0000000000093400 - 00000000000a0000 (reserved)
BIOS-e820: 0000000000100000 - 000000003fee0000 (usable)
BIOS-e820: 000000003fee0000 - 000000003fef3000 (ACPI data)
BIOS-e820: 000000003fef3000 - 000000003ff80000 (ACPI NVS)
BIOS-e820: 000000003ff80000 - 0000000040000000 (reserved)
BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved)
BIOS-e820: 00000000fec00000 - 00000000fec10000 (reserved)
BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
BIOS-e820: 00000000ff000000 - 0000000100000000 (reserved)

and booting with the args

memmap=exactmap memmap=640K@0K memmap=5228K@16384K memmap=125188K@22252K memmap=76K#1047424K memmap=564K#1047500K

resulted in:

user-defined physical RAM map:
user: 0000000000000000 - 0000000000093400 (usable)
user: 0000000000093400 - 00000000000a0000 (reserved)
user: 0000000000100000 - 000000003fee0000 (usable)
user: 000000003fee0000 - 000000003fef3000 (ACPI data)
user: 000000003fef3000 - 000000003ff80000 (ACPI NVS)
user: 000000003ff80000 - 0000000040000000 (reserved)
user: 00000000e0000000 - 00000000f0000000 (reserved)
user: 00000000fec00000 - 00000000fec10000 (reserved)
user: 00000000fee00000 - 00000000fee01000 (reserved)
user: 00000000ff000000 - 0000000100000000 (reserved)

but should have resulted in:

user-defined physical RAM map:
user: 0000000000000000 - 00000000000a0000 (usable)
user: 0000000001000000 - 000000000151b000 (usable)
user: 00000000015bb000 - 0000000008ffc000 (usable)
user: 000000003fee0000 - 000000003ff80000 (ACPI data)

This is happening because of an improper usage of strcmp() in the e820 parsing code. The strcmp() always returns !0 and never resets the value of e820.nr_map, so an incorrect user-defined map is returned. This patch fixes the problem.

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
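The underlying pitfall is easy to reproduce: strcmp() returns 0 on a match, so a bare truth test inverts the intended condition. An illustrative standalone snippet, not the actual e820 code:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *opt = "exactmap";

	/* BUG: strcmp() returns 0 on a match, so this branch is taken
	 * only when the strings do NOT match. */
	if (strcmp(opt, "exactmap"))
		printf("buggy test: option not recognized\n");

	/* Correct: negate the result (or compare against 0). */
	if (!strcmp(opt, "exactmap"))
		printf("correct test: reset the map here\n");

	return 0;
}
```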
-
By James Bottomley

It was introduced by "vsprintf: add support for '%pS' and '%pF' pointer formats" in commit 0fe1ef24. However, the way it's currently coded doesn't work on parisc64, for two reasons: 1) parisc isn't in the #ifdef and 2) parisc has a different format for function descriptors.

Make dereference_function_descriptor() more accommodating by allowing architecture overrides. I put the three overrides (for parisc64, ppc64 and ia64) in arch/kernel/module.c because that's where the kernel-internal linker which knows how to deal with function descriptors sits.

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
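On architectures with function descriptors, a "function pointer" actually points at a small descriptor record whose first word holds the real entry address. A hedged sketch of what an override could look like; the descriptor layout and function name below are illustrative, since each arch defines its own:

```c
#include <linux/uaccess.h>	/* probe_kernel_address() */

/* Illustrative descriptor layout: entry address, then TOC/GP value. */
struct func_desc {
	unsigned long entry;
	unsigned long toc;
};

/* Default for architectures without descriptors: the pointer is the code. */
#ifndef dereference_function_descriptor
#define dereference_function_descriptor(p) (p)
#endif

/* Arch-override sketch: follow the descriptor to the real entry point. */
void *my_dereference_function_descriptor(void *ptr)
{
	struct func_desc *desc = ptr;
	void *p;

	/* probe_kernel_address() guards against following a bogus pointer. */
	if (!probe_kernel_address(&desc->entry, p))
		ptr = p;
	return ptr;
}
```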
-
- 09 September 2008, 3 commits
-
-
By Jarod Wilson

When running a 31-bit ptrace, on either an s390 or s390x kernel, reads and writes into a padding area in struct user_regs_struct32 will result in a kernel panic. This is also known as CVE-2008-1514.

Test case available here: http://sources.redhat.com/cgi-bin/cvsweb.cgi/~checkout~/tests/ptrace-tests/tests/user-area-padding.c?cvsroot=systemtap

Steps to reproduce:
1) wget the above
2) gcc -o user-area-padding-31bit user-area-padding.c -Wall -ggdb2 -D_GNU_SOURCE -m31
3) ./user-area-padding-31bit
<panic>

Test status
-----------
Without the patch, both s390 and s390x kernels panic. With the patch, the test case, as well as the gdb testsuite, pass without incident, padding-area reads returning zero and writes being ignored.

Nb: the original version returned -EINVAL on write attempts, which broke the gdb test and made the test case slightly unhappy; Jan Kratochvil suggested the change to return 0 on write attempts.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Tested-by: Jan Kratochvil <jan.kratochvil@redhat.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Linus Torvalds

On 32-bit, at least the generic nops are fairly reasonable, but the default nops for 64-bit really look pretty sad, and the P6 nops really do look better. So I would suggest perhaps moving the static P6 nop selection into the CONFIG_X86_64 thing. The alternative is to just get rid of that static nop selection, and just have two cases: 32-bit and 64-bit, and just pick obviously safe cases for them.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Thomas Bogendoerfer

The second HPC3 could be found only on Guiness systems (Challenge-S), but not on fullhouse (Indigo2) systems.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 08 September 2008, 2 commits
-
-
By Hugh Dickins

A make -j20 powerpc kernel build broke a couple of months ago, saying:

In file included from arch/powerpc/boot/gunzip_util.h:13,
                 from arch/powerpc/boot/prpmc2800.c:21:
arch/powerpc/boot/zlib.h:85: error: expected ‘:’, ‘,’, ‘;’, ‘}’ or ‘__attribute__’ before ‘*’ token
arch/powerpc/boot/zlib.h:630: warning: type defaults to ‘int’ in declaration of ‘Byte’
arch/powerpc/boot/zlib.h:630: error: expected ‘;’, ‘,’ or ‘)’ before ‘*’ token

It happened again yesterday: too rare for me to confirm the fix, but it looks like the list of dependants on gunzip_util.h was incomplete.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Andre Detsch

We currently have a race when scheduling a context to a SPE - after we have found a runnable context in spusched_tick, the same context may have been scheduled by spu_activate(). This may result in a panic if we try to unschedule a context that has been freed in the meantime. This change exits spu_schedule() if the context has already been scheduled, so we don't end up scheduling it twice.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
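A sketch of the guard being described, taken under the context's state mutex; the state name and helper are approximations of the SPU scheduler's internals, not the exact patch:

```c
static void spu_schedule(struct spu *spu, struct spu_context *ctx)
{
	/* Serialize against spu_activate() scheduling the same context. */
	mutex_lock(&ctx->state_mutex);
	if (ctx->state == SPU_STATE_SAVED)	/* still unscheduled? */
		__spu_schedule(spu, ctx);	/* safe to bind it now */
	/* else: spu_activate() already scheduled it - do nothing */
	mutex_unlock(&ctx->state_mutex);
}
```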
-
- 07 September 2008, 3 commits
-
-
By Andreas Herrmann

Exception stacks are allocated each time a CPU is set online, but the allocated space is never freed. Thus with one CPU hotplug offline/online cycle there is a memory leak of 24K (6 pages) per CPU. The fix is to allocate exception stacks only once -- when the CPU is set online for the first time.

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: akpm@linux-foundation.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Andreas Herrmann

pda->irqstackptr is allocated whenever a CPU is set online, but it is never freed. This results in a memory leak of 16K for each CPU offline/online cycle. The fix is to allocate pda->irqstackptr only once.

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: akpm@linux-foundation.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
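The allocate-once pattern behind this fix and the previous one is just a NULL check before the allocation. A hedged sketch (field and constant names follow the commit text; the real code lives in the x86-64 CPU bring-up path):

```c
/* Sketch: called from the bring-up path each time 'cpu' goes online. */
static void setup_irqstack_once(int cpu)
{
	if (cpu_pda(cpu)->irqstackptr)
		return;		/* already allocated on a previous online */

	cpu_pda(cpu)->irqstackptr = (char *)
		__get_free_pages(GFP_ATOMIC, IRQSTACK_ORDER);
	if (!cpu_pda(cpu)->irqstackptr)
		panic("cannot allocate irqstack for cpu %d", cpu);
}
```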
-
By Eduardo Habkost

Using native_pte_val triggers the BUG_ON() in the paravirt_ops version of pte_flags().

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 06 September 2008, 14 commits
-
-
By Yinghai Lu

Krzysztof Helt found that MTRR is not detected on k6-2.

Root cause: we moved mtrr_bp_init() early for MTRR trimming, and in early_detect we only read the CPU capability from cpuid, so some CPUs don't have that bit in cpuid. So we need to add early_init_xxxx to preset those bits before mtrr_bp_init for those earlier CPUs.

This patch is for v2.6.27.

Reported-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Krzysztof Helt

Move the early CPU initialization after the early capability read, so that the early CPU initialization can fix up the CPU caps.

Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Lennert Buytenhek

This patch provides an ARM implementation of ioremap_wc(). We use different page table attributes depending on which CPU we are running on:

- Non-XScale ARMv5 and earlier systems: The ARMv5 ARM documents four possible mapping types (CB=00/01/10/11). We can't use any of the cached memory types (CB=10/11), since that breaks coherency with peripheral devices. Both CB=00 and CB=01 are suitable for _wc, and CB=01 (Uncached/Buffered) allows the hardware more freedom than CB=00, so we'll use that. (The ARMv5 ARM seems to suggest that CB=01 is allowed to delay stores but isn't allowed to merge them, but there is no other mapping type we can use that allows the hardware to delay and merge stores, so we'll go with CB=01.)

- XScale v1/v2 (ARMv5): same as the ARMv5 case above, with the slight difference that on these platforms, CB=01 actually _does_ allow merging stores. (If you want noncoalescing bufferable behavior on XScale v1/v2, you need to use XCB=101.)

- XScale v3 (ARMv5) and ARMv6+: on these systems, we use TEXCB=00100 mappings (Inner/Outer Uncacheable in xsc3 parlance, Uncached Normal in ARMv6 parlance). The ARMv6 ARM explicitly says that any accesses to Normal memory can be merged, which makes Normal memory more suitable for _wc mappings than Device or Strongly Ordered memory, as the latter two mapping types are guaranteed to maintain transaction number, size and order. We use the Uncached variety of Normal mappings for the same reason that we can't use C=1 mappings on ARMv5. The xsc3 Architecture Specification documents TEXCB=00100 as being Uncacheable and allowing coalescing of writes, which is also just what we need.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
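Typical driver usage of ioremap_wc() looks the same on every architecture that provides it: map, write through the __iomem cookie, unmap. A minimal sketch, where the physical base and size are placeholder values:

```c
#include <linux/io.h>

#define FB_PHYS_BASE	0x40000000UL		/* placeholder address */
#define FB_SIZE		(4 * 1024 * 1024)	/* placeholder size */

static int map_framebuffer(void)
{
	void __iomem *fb = ioremap_wc(FB_PHYS_BASE, FB_SIZE);

	if (!fb)
		return -ENOMEM;

	iowrite32(0, fb);	/* stores may now be buffered/combined */
	iounmap(fb);
	return 0;
}
```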
-
By Thomas Gleixner

After fixing the u32 thinko I still had occasional hiccups on ATI chipsets with small deltas. There seems to be a delay between writing the compare register and the transfer to the internal register which triggers the interrupt. Reading back the value makes sure that it hit the internal match register before we compare against the counter value.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
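In code, the workaround is just a dummy read of the comparator after programming it. A hedged sketch; register names follow the HPET spec and the exact in-tree accessors may differ:

```c
static void hpet_set_comparator(u32 cnt)
{
	hpet_writel(cnt, HPET_T0_CMP);
	/* Dummy read-back: forces the written value into the internal
	 * match register before we compare against the main counter. */
	(void) hpet_readl(HPET_T0_CMP);
}
```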
-
By Thomas Gleixner

We use the HPET only in 32-bit mode because:
1) some HPETs are 32-bit only
2) on i386 there is no way to read/write the HPET atomically 64 bits wide

The HPET code unification done by the "moron of the year" did not take into account that unsigned long is different on 32-bit and 64-bit. This thinko results in a possible endless loop in the clockevents code, when the return comparison fails due to the 64-bit/32-bit unawareness.

unsigned long cnt = (u32) hpet_read() + delta

can wrap over 32 bits, but the final compare will then fail and return -ETIME, causing endless loops.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
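A hedged sketch of the corrected reprogramming path: keeping both the arithmetic and the comparison in u32 makes the wraparound well-defined on 32-bit and 64-bit kernels alike (names approximate the x86 HPET code):

```c
static int hpet_next_event(unsigned long delta)
{
	u32 cnt;

	cnt = hpet_readl(HPET_COUNTER);
	cnt += (u32) delta;		/* u32 arithmetic: wrapping is intended */
	hpet_writel(cnt, HPET_T0_CMP);

	/* Signed 32-bit distance check, correct even across the wrap:
	 * if the counter already passed the compare value, report -ETIME
	 * so the caller retries with a larger delta. */
	return (s32)(hpet_readl(HPET_COUNTER) - cnt) >= 0 ? -ETIME : 0;
}
```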
-
By H. Peter Anvin

Use X86_FEATURE_NOPL to determine if it is safe to use P6 NOPs in alternatives. Also, replace the table and loop with a simple if statement.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By H. Peter Anvin

The long NOPs ("NOPL") are supposed to be detected by family >= 6. Unfortunately, several non-Intel x86 implementations, both hardware and software, don't obey this dictum. Instead, probe for NOPL directly by executing a NOPL instruction and seeing if we get #UD.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
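The kernel does this probe at boot with an exception fixup; the same idea can be demonstrated from userspace, where #UD surfaces as SIGILL. An illustrative standalone program (x86 only):

```c
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf jb;

static void on_sigill(int sig)
{
	siglongjmp(jb, 1);	/* #UD reached us as SIGILL */
}

int main(void)
{
	signal(SIGILL, on_sigill);

	if (sigsetjmp(jb, 1) == 0) {
		/* 3-byte NOPL: nopl (%eax) = 0f 1f 00 */
		asm volatile(".byte 0x0f, 0x1f, 0x00");
		puts("NOPL supported");
	} else {
		puts("NOPL raised #UD - not supported");
	}
	return 0;
}
```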
-
By H. Peter Anvin

The CPU feature detection code in the boot code is somewhat minimal and doesn't include all possible CPUID words. In particular, it doesn't contain the code for CPU feature words 2 (Transmeta), 3 (Linux-specific), 5 (VIA), or 7 (scattered). Zero them out, so we can still set those bits as known at compile time; in particular, this allows creating a Linux-specific NOPL flag and having it required (and therefore resolvable at compile time) in 64-bit mode.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Atsushi Nemoto

Currently init_initrd() probes the initrd header at the last page of the kernel image, but it is valid only if addinitrd was used. If addinitrd was not used, the area contains garbage, so probing there might misdetect an initrd header (the magic number is not strictly robust). This patch introduces CONFIG_PROBE_INITRD_HEADER to explicitly enable this probing.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Atsushi Nemoto

Commit 59e39ecd933ba49eb6efe84cbfa5597a6c9ef18a ("Fix WARNING: at kernel/smp.c:290") introduced local_flush_icache_range but lacks initialization for some TX39 cases.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Atsushi Nemoto

The txx9_pcode variable was introduced in commit fe1c2bc64f65003b39f331a8e4b0d15b235a4afd ("TXx9: Add 64-bit support") but was not initialized properly.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Thomas Bogendoerfer

trap_init issues flush_icache_range(), which uses IPI functions to get icache flushing done on all CPUs. But this is done before interrupts are enabled, which caused WARN_ON messages. This changeset introduces a new local_flush_icache_range() and uses it before interrupts (and additional CPUs) are enabled to avoid this problem.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Thomas Bogendoerfer

With -ffunction-sections the entries in __dbe_table are no longer sorted, so the lookup of exception addresses in do_be() failed for some addresses. To avoid this we now sort __dbe_table.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
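Sorting an exception table at boot is a standard kernel pattern built on lib/sort. A hedged sketch of what this could look like for __dbe_table; the symbol names come from the commit, and the entry layout is assumed to mirror the generic exception table:

```c
#include <linux/init.h>
#include <linux/module.h>	/* struct exception_table_entry */
#include <linux/sort.h>

extern struct exception_table_entry __start___dbe_table[];
extern struct exception_table_entry __stop___dbe_table[];

/* Order entries by faulting instruction address so the binary
 * search in do_be() works again. */
static int dbe_cmp(const void *a, const void *b)
{
	const struct exception_table_entry *x = a, *y = b;

	if (x->insn < y->insn)
		return -1;
	return x->insn > y->insn;
}

void __init sort_dbe_table(void)
{
	sort(__start___dbe_table,
	     __stop___dbe_table - __start___dbe_table,
	     sizeof(struct exception_table_entry), dbe_cmp, NULL);
}
```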
-
By David Woodhouse

This reverts commit ae82cbfc. It needs the new byteorder headers to be exported to userspace, and they aren't yet -- and probably shouldn't be, at this point in the 2.6.27 release cycle (or ever, for that matter).

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 05 September 2008, 3 commits
-
-
By Thomas Gleixner

The minimum reprogramming delta was hardcoded in HPET ticks, which is stupid as it does not work with faster-running HPETs. The C1E idle patches made this prominent on AMD/RS690 chipsets, where the HPET runs at 25MHz. Set it to 5us, which seems to be a reasonable value and fixes the problems on the bug reporters' machines. We now have a further sanity check in the clockevents code, which increases the delta when it is not sufficient.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Luiz Fernando N. Capitulino <lcapitulino@mandriva.com.br>
Tested-by: Dmitry Nezhevenko <dion@inhex.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Paul Mundt

Follows the SH change.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Carmelo Amoroso

This patch fixes a problem within the SH implementation of the resume_kernel code, which implements in assembly the bulk of the preempt_schedule_irq function without taking care of the extra code needed to handle the preemptible BKL. The patch basically consists of removing this asm code and calling the common C implementation (see kernel/sched.c), as other archs do. Another change is the missing 'cli' macro invocation at the beginning of resume_kernel.

Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Carmelo Amoroso <carmelo.amoroso@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-