- 20 June 2006, 4 commits
Committed by David S. Miller
Inspired by the PowerPC XICS interrupt support code. All IRQs are virtualized in order to keep NR_IRQS from needing to be too large. Interrupts on sparc64 are arbitrary 11-bit values, but we don't need to define NR_IRQS as 2048 if we virtualize the IRQs. As the PCI and SBUS controller drivers build device IRQs, we divvy out virtual IRQ numbers incrementally, starting at 1. Zero is a special virtual IRQ reserved for the timer interrupt. So device drivers all see virtual IRQs, and all the normal interfaces such as request_irq(), enable_irq(), etc. translate them into real IRQ numbers in order to configure the IRQ. At this point knowledge of struct ino_bucket is almost entirely contained within arch/sparc64/kernel/irq.c. A few small bits in the PCI controller drivers still need to be swept away before we can move ino_bucket's definition out of asm-sparc64/irq.h and make it private to kernel/irq.c. Signed-off-by: David S. Miller <davem@davemloft.net>
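For illustration, a minimal user-space sketch of the incremental allocation described above; the table sizes and the assign_virt_irq() helper are invented for this sketch and are not the kernel's actual symbols:

    /* Real sparc64 interrupt numbers (INOs) are arbitrary 11-bit values, so
     * instead of sizing NR_IRQS at 2048, each INO a controller driver
     * registers is handed the next small virtual IRQ number.  Zero stays
     * reserved for the timer interrupt.  No overflow handling here. */
    #include <stdio.h>

    #define NUM_INOS      2048      /* 11-bit real interrupt numbers */
    #define MAX_VIRT_IRQS 256       /* small virtual space seen by drivers */

    static unsigned short ino_to_virt[NUM_INOS];      /* 0 = not yet assigned */
    static unsigned int   virt_to_ino[MAX_VIRT_IRQS];
    static unsigned int   next_virt_irq = 1;          /* 0 is the timer IRQ */

    static unsigned int assign_virt_irq(unsigned int ino)
    {
        if (ino_to_virt[ino])
            return ino_to_virt[ino];                  /* already mapped */
        ino_to_virt[ino] = next_virt_irq;
        virt_to_ino[next_virt_irq] = ino;
        return next_virt_irq++;
    }

    int main(void)
    {
        /* A PCI/SBUS controller driver would do this while building IRQs. */
        printf("ino 0x7c3 -> virq %u\n", assign_virt_irq(0x7c3));
        printf("ino 0x021 -> virq %u\n", assign_virt_irq(0x021));
        printf("ino 0x7c3 -> virq %u (reused)\n", assign_virt_irq(0x7c3));
        return 0;
    }

Interfaces like request_irq() and enable_irq() would then translate the small virtual number back through a virt-to-real table before touching the hardware.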
-
Committed by David S. Miller
And reuse that struct member for virt_irq, which will be used in future changesets for the implementation of the mapping between real and virtual IRQ numbers. This nicely kills off a ton of SBUS and PCI controller PIL assignment code which is no longer necessary. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Only pil0_dummy_bucket had a pil of zero, and we just killed that off, so we can delete all the special-case code that used bp->pil==0 as a way to identify a dummy bucket. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
This is the first in a series of cleanups that will hopefully allow a seamless attempt at using the generic IRQ handling infrastructure in the Linux kernel. Define PIL_DEVICE_IRQ and vector all device interrupts through there. Get rid of the ugly pil0_dummy_{bucket,desc}; instead, vector the timer interrupt directly to a specific handler, since the timer interrupt is the only event that will be signaled on PIL 14. The irq_worklist is now in the per-cpu trap_block[]. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 June 2006, 1 commit
Committed by David S. Miller
It is already exported by fs/open.c. Noticed by Ben Collins. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 June 2006, 1 commit
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 June 2006, 2 commits
Committed by David S. Miller
Doing PCI config space accesses to non-present PCI slots can result in fatal JBUS errors if the PCI config access hypervisor call is performed on cpus other than the boot cpu. PCI config space accesses to present PCI slots work just fine. Recursively traverse the OBP device tree under the PCI controller node and record all present device IDs into a small hash table. Avoid the hypervisor call for any PCI config space access attempt for a device not recorded in the hash table. Signed-off-by: David S. Miller <davem@davemloft.net>
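A rough, self-contained sketch of that probe-time filter; the hashing scheme and the names below are illustrative only, not the actual sparc64 code:

    /* Record every device found while walking the firmware (OBP) tree under
     * the PCI controller, then consult the table before any config-space
     * access so the hypervisor call is skipped for absent slots. */
    #include <stdbool.h>
    #include <stdio.h>

    #define HASH_BUCKETS 64

    struct present_dev {
        unsigned int devhandle;      /* bus/device/function packed together */
        struct present_dev *next;
    };

    static struct present_dev *present_hash[HASH_BUCKETS];
    static struct present_dev pool[256];   /* plenty for one PCI segment */
    static unsigned int pool_used;

    static unsigned int hash_devhandle(unsigned int devhandle)
    {
        return (devhandle ^ (devhandle >> 6)) % HASH_BUCKETS;
    }

    /* Called at probe time for each node found in the firmware tree. */
    static void record_present(unsigned int devhandle)
    {
        struct present_dev *p = &pool[pool_used++];
        unsigned int h = hash_devhandle(devhandle);

        p->devhandle = devhandle;
        p->next = present_hash[h];
        present_hash[h] = p;
    }

    /* Called before each config access; a miss means "don't even try". */
    static bool device_is_present(unsigned int devhandle)
    {
        struct present_dev *p;

        for (p = present_hash[hash_devhandle(devhandle)]; p; p = p->next)
            if (p->devhandle == devhandle)
                return true;
        return false;
    }

    int main(void)
    {
        record_present(0x0810);
        printf("0x0810 present? %d\n", device_is_present(0x0810));
        printf("0x0818 present? %d\n", device_is_present(0x0818));
        return 0;
    }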
-
Committed by David S. Miller
This makes the debugging information more usable. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 June 2006, 1 commit
Committed by David S. Miller
Both csum_partial() and the csum_partial_copy*() family of routines forget to do a final fold on the computed checksum value on sparc64. So do the standard Sparc "add + set condition codes, add carry" sequence, then make sure the high 32 bits of the return value are clear. Based upon some excellent detective work and debugging done by Richard Braun and Samuel Thibault. Signed-off-by: David S. Miller <davem@davemloft.net>
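In plain C (rather than the sparc64 assembler the patch actually touches), the missing final fold amounts to something like this small sketch:

    /* Fold a 64-bit running checksum so the high 32 bits end up clear; any
     * carry out of the low word is added back in (end-around carry). */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t csum_fold64(uint64_t sum)
    {
        sum = (sum & 0xffffffffULL) + (sum >> 32);   /* fold high into low */
        sum = (sum & 0xffffffffULL) + (sum >> 32);   /* absorb the carry   */
        return (uint32_t)sum;                        /* high bits now zero */
    }

    int main(void)
    {
        uint64_t sum = 0x1ffffffffULL;   /* a sum whose upper bits are set */
        printf("folded: 0x%08x\n", csum_fold64(sum));   /* prints 0x00000001 */
        return 0;
    }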
-
- 31 May 2006, 1 commit
Committed by David S. Miller
Uses of smp_processor_id() get pushed earlier and earlier in the start_kernel() sequence. So just get it working before we call start_kernel() to avoid all possible problems. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 May 2006, 1 commit
Committed by David S. Miller
Using asm-generic/dma-mapping.h does not work because pushing the call down to pci_alloc_coherent() causes the gfp_t argument of dma_alloc_coherent() to be ignored. Fix this by implementing things directly and adding a gfp_t argument we can use in the internal call down to the PCI DMA implementation of pci_alloc_coherent(). This fixes massive memory corruption when using the sound driver layer, which passes things like __GFP_COMP down into these routines and (correctly) expects that to work. Signed-off-by: David S. Miller <davem@davemloft.net>
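A minimal stand-alone sketch of the interface change; the types and function names here are simplified stand-ins, not the real kernel symbols:

    /* The wrapper must forward the caller's gfp flags (e.g. __GFP_COMP from
     * the sound layer) down to the low-level PCI DMA allocator instead of
     * dropping them. */
    #include <stddef.h>
    #include <stdlib.h>

    typedef unsigned int  gfp_t;        /* stand-in for the kernel's gfp_t */
    typedef unsigned long dma_addr_t;   /* stand-in for a bus/DMA address  */

    static void *pci_dma_alloc_sketch(size_t size, dma_addr_t *handle, gfp_t gfp)
    {
        (void)gfp;                      /* a real allocator would honour it */
        *handle = 0;
        return malloc(size);
    }

    static void *dma_alloc_coherent_sketch(size_t size, dma_addr_t *handle,
                                           gfp_t gfp)
    {
        return pci_dma_alloc_sketch(size, handle, gfp);   /* gfp survives */
    }

    int main(void)
    {
        dma_addr_t handle;
        void *buf = dma_alloc_coherent_sketch(4096, &handle, 0);

        free(buf);
        return 0;
    }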
-
- 22 May 2006, 1 commit
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 May 2006, 1 commit
Committed by David S. Miller
For sparc32 we need R_SPARC_UA32 relocation support; for sparc64 we need to handle R_SPARC_DISP32 relocations. Based upon reports and an initial patch by Martin Habets. Signed-off-by: David S. Miller <davem@davemloft.net>
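Roughly, handling those two relocation types in a module loader looks like the sketch below; the constants and the helper are illustrative (not the kernel's apply_relocate_add()), and byte order is glossed over:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative relocation type numbers; see the SPARC ELF ABI. */
    #define R_SPARC_DISP32 6     /* 32-bit PC-relative displacement        */
    #define R_SPARC_UA32   23    /* 32-bit absolute value, unaligned store */

    static void apply_one_reloc(int type, uint8_t *location, uint64_t value)
    {
        uint32_t v32;

        switch (type) {
        case R_SPARC_DISP32:
            /* Displacement is measured from the patched location itself. */
            v32 = (uint32_t)(value - (uint64_t)(uintptr_t)location);
            memcpy(location, &v32, sizeof(v32));
            break;
        case R_SPARC_UA32:
            /* Absolute value at a possibly unaligned address, hence the
             * byte-wise store via memcpy instead of a plain assignment. */
            v32 = (uint32_t)value;
            memcpy(location, &v32, sizeof(v32));
            break;
        }
    }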
-
- 09 May 2006, 1 commit
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 May 2006, 1 commit
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 May 2006, 2 commits
Committed by Al Viro
... it's always current, and that's a good thing - it allows simpler locking. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by David S. Miller
A context switch will force a call to flush_tlb_pending() (via switch_to()), so if we test tlb_nr to be non-zero and then sleep, it could become zero, and later, back in the original context, we'd pass zero down into the TLB flushing code, which should never see an nr argument of zero. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 April 2006, 1 commit
Committed by Prasanna S Panchamukhi
Andrew Morton pointed out that the compiler might not inline the functions marked inline in kprobes, thereby allowing the insertion of probes on these kprobes routines, which might cause recursion. This patch removes all such inline markings and adds the functions to the kprobes section, thereby disallowing probes on all such routines. Some of the routines could even still be inlined, since they are executed only after kprobes has done the necessary setup for reentrancy. Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
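For context, a hedged sketch of the mechanism: the kernel's __kprobes annotation places a function in a dedicated text section, and the probe-registration path can then refuse addresses inside that section. The helper names below are illustrative:

    /* Functions the kprobes machinery itself depends on are collected into
     * one linker section instead of being inlined at their call sites. */
    #define __kprobes __attribute__((__section__(".kprobes.text")))

    /* Section bounds provided by the linker script. */
    extern char __kprobes_text_start[], __kprobes_text_end[];

    /* Previously "static inline"; now out of line but un-probeable. */
    static int __kprobes probe_setup_helper_sketch(unsigned long addr)
    {
        /* ...work that must never itself trigger a probe (recursion)... */
        return 0;
    }

    /* Registration-side check: reject probes on the section above. */
    static int in_kprobes_text(unsigned long addr)
    {
        return addr >= (unsigned long)__kprobes_text_start &&
               addr <  (unsigned long)__kprobes_text_end;
    }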
-
- 19 April 2006, 1 commit
Committed by Jean-Luc Léger
This patch fixes the dependencies of HUGETLB_PAGE_SIZE_64K. Signed-off-by: Jean-Luc Léger <jean-luc.leger@dspnet.fr.eu.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 April 2006, 1 commit
Committed by David S. Miller
The SYM2 driver uses it. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 April 2006, 1 commit
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 April 2006, 2 commits
Committed by Kyle McMartin
While cleaning up parisc_ksyms.c earlier, I noticed that strpbrk wasn't being exported from lib/string.c. Investigating further, I noticed a changeset that removed its export and added it to _ksyms.c on a few more architectures. The justification was that "other arches do it." I think this is wrong: since no architecture currently defines __HAVE_ARCH_STRPBRK, there's no reason for any of them to be exporting it themselves. Therefore, consolidate the export to lib/string.c. Signed-off-by: Kyle McMartin <kyle@parisc-linux.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by KAMEZAWA Hiroyuki
for_each_cpu() actually iterates across all possible CPUs. We've had mistakes in the past where people were using for_each_cpu() where they should have been iterating across only online or present CPUs. This is inefficient and possibly buggy. We're renaming for_each_cpu() to for_each_possible_cpu() to avoid this in the future. This patch replaces for_each_cpu with for_each_possible_cpu for sparc64. Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Acked-by: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
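A small stand-alone illustration of the distinction the rename makes explicit; the two arrays stand in for the kernel's possible and online CPU maps:

    #include <stdio.h>

    #define NR_CPUS 8
    static const int cpu_possible[NR_CPUS] = { 1, 1, 1, 1, 1, 1, 1, 1 };
    static const int cpu_online[NR_CPUS]   = { 1, 1, 0, 0, 0, 0, 0, 0 };

    int main(void)
    {
        int cpu;

        /* for_each_possible_cpu(): every CPU that may ever come online,
         * e.g. when setting up per-cpu data.  This is what the old,
         * ambiguously named for_each_cpu() actually iterated over. */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
            if (cpu_possible[cpu])
                printf("init per-cpu data for cpu %d\n", cpu);

        /* for_each_online_cpu(): only the CPUs running right now. */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
            if (cpu_online[cpu])
                printf("send work to cpu %d\n", cpu);
        return 0;
    }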
-
- 10 April 2006, 6 commits
Committed by David S. Miller
Otherwise the build breaks with EXPERIMENTAL disabled, because SPARSEMEM will not get selected properly. See mm/Kconfig for how that works. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
1) Take the DocBook function comment from the i386 implementation.
2) Cache-line align call_lock, taken from powerpc.
3) Add the needed memory barrier after setting call_data.
4) Remove the timeout.
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
This makes debugging things a little bit easier. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
GDB uses a PTRACE_PEEKUSR call with offset 0 to see if a thread is alive, so provide a success return for this particular special case. Signed-off-by: David S. Miller <davem@davemloft.net>
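From user space, the GDB-style liveness probe being accommodated looks roughly like this; PTRACE_PEEKUSER is the glibc spelling of the kernel's PTRACE_PEEKUSR, and the target thread must already be traced for the call to mean anything:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>

    /* Peek the register area at offset 0 purely as an "is it still there?"
     * probe; the value read is ignored, only errno matters. */
    static int thread_alive(pid_t tid)
    {
        errno = 0;
        ptrace(PTRACE_PEEKUSER, tid, (void *)0, NULL);
        return errno == 0;
    }

    int main(int argc, char **argv)
    {
        if (argc > 1) {
            pid_t tid = (pid_t)atoi(argv[1]);
            printf("%d alive: %d\n", (int)tid, thread_alive(tid));
        }
        return 0;
    }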
-
- 01 April 2006, 6 commits
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
We are about to fill in all HPAGE_SIZE's worth of PAGE_SIZE ptes, so we have to pass in the first pte of that set, or else we scribble over random memory when we fill in the ptes. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
switch_mm() changes the mm state and does a tsb_context_switch() first; then we do the cpu register state switch, which changes current_thread_info() and current(). So it's safer to check the PGD physical address stored in the trap block (which will be updated by the tsb_context_switch() in switch_mm()) than current->active_mm. Technically we should never run here in between those two updates, because interrupts are disabled during the entire context switch operation. But some day we might like to leave interrupts enabled during the context switch, and this change allows that to happen without any surprises. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 29 March 2006, 1 commit
Committed by Matt Mackall
Signed-off-by: Matt Mackall <mpm@selenic.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Alessandro Zummo <a.zummo@towertech.it> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 28 March 2006, 1 commit
Committed by Alan Stern
The kernel's implementation of notifier chains is unsafe. There is no protection against entries being added to or removed from a chain while the chain is in use. The issues were discussed in this thread: http://marc.theaimsgroup.com/?l=linux-kernel&m=113018709002036&w=2

We noticed that notifier chains in the kernel fall into two basic usage classes: "Blocking" chains are always called from a process context and the callout routines are allowed to sleep; "Atomic" chains can be called from an atomic context and the callout routines are not allowed to sleep. We decided to codify this distinction and make it part of the API.

Therefore this set of patches introduces three new, parallel APIs: one for blocking notifiers, one for atomic notifiers, and one for "raw" notifiers (which is really just the old API under a new name). New kinds of data structures are used for the heads of the chains, and new routines are defined for registration, unregistration, and calling a chain. The three APIs are explained in include/linux/notifier.h and their implementation is in kernel/sys.c.

With atomic and blocking chains, the implementation guarantees that the chain links will not be corrupted and that chain callers will not get messed up by entries being added or removed. For raw chains the implementation provides no guarantees at all; users of this API must provide their own protections. (The idea was that situations may come up where the assumptions of the atomic and blocking APIs are not appropriate, so it should be possible for users to handle these things in their own way.)

There are some limitations, which should not be too hard to live with. For atomic/blocking chains, registration and unregistration must always be done in a process context since the chain is protected by a mutex/rwsem. Also, a callout routine for a non-raw chain must not try to register or unregister entries on its own chain. (This did happen in a couple of places and the code had to be changed to avoid it.) Since atomic chains may be called from within an NMI handler, they cannot use spinlocks for synchronization. Instead we use RCU. The overhead falls almost entirely in the unregister routine, which is okay since unregistration is much less frequent than calling a chain.

Here is the list of chains that we adjusted and their classifications. None of them use the raw API, so for the moment it is only a placeholder.

ATOMIC CHAINS
-------------
arch/i386/kernel/traps.c: i386die_chain
arch/ia64/kernel/traps.c: ia64die_chain
arch/powerpc/kernel/traps.c: powerpc_die_chain
arch/sparc64/kernel/traps.c: sparc64die_chain
arch/x86_64/kernel/traps.c: die_chain
drivers/char/ipmi/ipmi_si_intf.c: xaction_notifier_list
kernel/panic.c: panic_notifier_list
kernel/profile.c: task_free_notifier
net/bluetooth/hci_core.c: hci_notifier
net/ipv4/netfilter/ip_conntrack_core.c: ip_conntrack_chain
net/ipv4/netfilter/ip_conntrack_core.c: ip_conntrack_expect_chain
net/ipv6/addrconf.c: inet6addr_chain
net/netfilter/nf_conntrack_core.c: nf_conntrack_chain
net/netfilter/nf_conntrack_core.c: nf_conntrack_expect_chain
net/netlink/af_netlink.c: netlink_chain

BLOCKING CHAINS
---------------
arch/powerpc/platforms/pseries/reconfig.c: pSeries_reconfig_chain
arch/s390/kernel/process.c: idle_chain
arch/x86_64/kernel/process.c: idle_notifier
drivers/base/memory.c: memory_chain
drivers/cpufreq/cpufreq.c: cpufreq_policy_notifier_list
drivers/cpufreq/cpufreq.c: cpufreq_transition_notifier_list
drivers/macintosh/adb.c: adb_client_list
drivers/macintosh/via-pmu.c: sleep_notifier_list
drivers/macintosh/via-pmu68k.c: sleep_notifier_list
drivers/macintosh/windfarm_core.c: wf_client_list
drivers/usb/core/notify.c: usb_notifier_list
drivers/video/fbmem.c: fb_notifier_list
kernel/cpu.c: cpu_chain
kernel/module.c: module_notify_list
kernel/profile.c: munmap_notifier
kernel/profile.c: task_exit_notifier
kernel/sys.c: reboot_notifier_list
net/core/dev.c: netdev_chain
net/decnet/dn_dev.c: dnaddr_chain
net/ipv4/devinet.c: inetaddr_chain

It's possible that some of these classifications are wrong. If they are, please let us know or submit a patch to fix them. Note that any chain that gets called very frequently should be atomic, because the rwsem read-locking used for blocking chains is very likely to incur cache misses on SMP systems. (However, if the chain's callout routines may sleep then the chain cannot be atomic.)

The patch set was written by Alan Stern and Chandra Seetharaman, incorporating material written by Keith Owens and suggestions from Paul McKenney and Andrew Morton.

[jes@sgi.com: restructure the notifier chain initialization macros]
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Signed-off-by: Jes Sorensen <jes@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
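As a usage sketch of the blocking variant of the new API (kernel-style code, not a standalone program; the chain and callback names are invented for illustration):

    #include <linux/notifier.h>

    /* Callouts on a blocking chain run in process context and may sleep. */
    static int sample_event(struct notifier_block *nb,
                            unsigned long action, void *data)
    {
        return NOTIFY_OK;
    }

    static struct notifier_block sample_nb = {
        .notifier_call = sample_event,
    };

    /* Chain head; blocking chains are protected by an rwsem, so register
     * and unregister must themselves be called from process context. */
    static BLOCKING_NOTIFIER_HEAD(sample_chain);

    static void sample_setup(void)
    {
        blocking_notifier_chain_register(&sample_chain, &sample_nb);
    }

    static void sample_fire(void)
    {
        /* Walks the chain, invoking each notifier_call in turn. */
        blocking_notifier_call_chain(&sample_chain, 0, NULL);
    }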
-
- 27 March 2006, 3 commits
Committed by David S. Miller
The worst part about this bug is that it would cause a hugepage TSB to be allocated for every address space, since "0 >= 0". Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Kbuild now points these out. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-