- 10 January 2011, 10 commits
-
Committed by Mike Frysinger
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-
Committed by Mike Frysinger
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-
Committed by Mike Frysinger
The array of pointers is never written, so constify it. Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-
Committed by Mike Frysinger
Use the same naming convention for DMA traffic MMRs (most were legacy anyway) so we can avoid useless ifdef trees. The same goes for MDMA names; this actually allows us to undo a bunch of ifdef redirects that existed for this purpose alone. Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-
Committed by Mike Frysinger
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-
Committed by Mike Frysinger
Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-
Committed by Mike Frysinger
A bunch of arches define reads[bwl]/writes[bwl] helpers for accessing memory-mapped registers. Since the Blackfin ones aren't specific to Blackfin code, move them to the common asm-generic/io.h so everyone can use them. Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Mike Frysinger <vapier@gentoo.org>
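As a rough illustration of what such string-I/O helpers look like, here is a minimal readsl-style sketch (the name my_readsl and the exact loop are illustrative, not the verbatim asm-generic/io.h code):

```c
#include <linux/io.h>
#include <linux/types.h>

/* Read `count` 32-bit words from one MMIO address (FIFO-style) into a
 * buffer. Mirrors the readsl() pattern described in the commit. */
static inline void my_readsl(const volatile void __iomem *addr,
			     void *buffer, unsigned int count)
{
	u32 *buf = buffer;

	while (count--)
		*buf++ = readl(addr);	/* same register each time, by design */
}
```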
-
Committed by Mike Frysinger
Each Blackfin port has been duplicating UART structures and defines when there really is no need for it. So start a new bfin_serial.h header to unify all these pieces and give ourselves a fresh start. Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-
Committed by Bob Liu
In order not to touch the driver file for different xtal usage, push the clkin value to the board file and calculate the register value instead of hardcoding it. Signed-off-by: Bob Liu <lliubbo@gmail.com> Signed-off-by: Mike Frysinger <vapier@gentoo.org>
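A hypothetical sketch of the idea, with invented names (my_board_pdata, my_calc_divider): the board file supplies clkin_hz, and the driver derives the divider at runtime instead of carrying a hardcoded register value per crystal:

```c
/* All names here are illustrative, not the actual Blackfin driver API. */
struct my_board_pdata {
	unsigned long clkin_hz;	/* crystal (xtal) frequency from the board file */
};

static unsigned int my_calc_divider(const struct my_board_pdata *pdata,
				    unsigned long target_hz)
{
	/* Round to nearest so a different xtal needs no driver change. */
	return (pdata->clkin_hz + target_hz / 2) / target_hz;
}
```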
-
Committed by Graf Yang
Signed-off-by: Graf Yang <graf.yang@analog.com> Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-
- 07 January 2011, 18 commits
-
Committed by Nick Piggin
The problem that this patch aims to fix is vfsmount refcounting scalability. We need to take a reference on the vfsmount for every successful path lookup, and these often go to the same mount point. The fundamental difficulty is that a "simple" reference count can never be made scalable, because any time a reference is dropped, we must check whether that was the last reference; doing so requires communication with all other CPUs that may have taken a reference count. We can make refcounts more scalable in a couple of ways, involving keeping distributed counters and checking for the global-zero condition less frequently:
- Check the global sum once every interval (this delays zero detection for some interval, so it's probably a showstopper for vfsmounts).
- Keep a local count and only take the global sum when the local count reaches 0 (this is difficult for vfsmounts, because we can't hold preempt off for the life of a reference, so a counter would need to be per-thread or tied strongly to a particular CPU, which requires more locking).
- Keep a local difference of increments and decrements, which allows us to sum the total difference and hence find the refcount when summing all CPUs. Then keep a single integer "long" refcount for slow and long-lasting references, and only take the global sum of local counters when the long refcount is 0.
This last scheme is what I implemented here. Attached mounts and process root and working directory references are "long" references; everything else is a short reference. This allows scalable vfsmount references during path walking over mounted subtrees and unattached (lazy umounted) mounts with processes still running in them. This results in one fewer atomic op in the fastpath: mntget is now just a per-CPU inc rather than an atomic inc, and mntput just requires a spinlock and a non-atomic decrement in the common case. However, the code is otherwise bigger and heavier, so single-threaded performance is basically a wash. Signed-off-by: Nick Piggin <npiggin@kernel.dk>
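A conceptual sketch of the counting scheme described above (simplified, with illustrative names; the real code lives in fs/namespace.c):

```c
#include <linux/percpu.h>
#include <linux/spinlock.h>

/* Short references are per-CPU deltas; "long" references (attached mounts,
 * process root/cwd) live in a single integer. The per-CPU deltas are only
 * summed when the long count is 0. */
struct my_mount {
	int __percpu *pcpu_count;	/* may be negative on some CPUs */
	int longterm;			/* long-lasting references */
	spinlock_t lock;
};

static void my_mntget(struct my_mount *m)
{
	this_cpu_inc(*m->pcpu_count);	/* fast path: no atomic op */
}

static int my_mnt_sum(struct my_mount *m)	/* only valid when longterm == 0 */
{
	int cpu, sum = 0;

	for_each_possible_cpu(cpu)
		sum += *per_cpu_ptr(m->pcpu_count, cpu);
	return sum;
}
```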
-
Committed by Nick Piggin
Reduce some branches and memory accesses in dcache lookup by adding dentry flags to indicate that common d_ops are set, rather than having to check them. This saves a pointer memory access (dentry->d_op) in common path lookup situations, and saves another pointer load and branch in cases where we have d_op but not the particular operation. Patched with: git grep -E '[.>]([[:space:]])*d_op([[:space:]])*=' | xargs sed -e 's/\([^\t ]*\)->d_op = \(.*\);/d_set_d_op(\1, \2);/' -e 's/\([^\t ]*\)\.d_op = \(.*\);/d_set_d_op(\&\1, \2);/' -i Signed-off-by: Nick Piggin <npiggin@kernel.dk>
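A sketch of the flag-caching pattern (flag names invented here; the real patch defines DCACHE_OP_* bits and a d_set_d_op() helper):

```c
#include <linux/dcache.h>

#define MY_OP_HASH	0x0001	/* illustrative stand-ins for DCACHE_OP_* */
#define MY_OP_COMPARE	0x0002

/* Record once, at d_op assignment time, which hooks exist, so lookup paths
 * can test a cached bit instead of dereferencing dentry->d_op. */
static void my_d_set_d_op(struct dentry *dentry,
			  const struct dentry_operations *op)
{
	dentry->d_op = op;
	dentry->d_flags &= ~(MY_OP_HASH | MY_OP_COMPARE);
	if (op && op->d_hash)
		dentry->d_flags |= MY_OP_HASH;
	if (op && op->d_compare)
		dentry->d_flags |= MY_OP_COMPARE;
}
```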
-
Committed by Nick Piggin
RCU free the struct inode. This will allow:
- The subsequent store-free path walking patch. The inode must be consulted for permissions when walking, so an RCU inode reference is a must.
- sb_inode_list_lock to be moved inside i_lock, because sb list walkers who want to take i_lock no longer need to take sb_inode_list_lock to walk the list in the first place. This will simplify and optimize locking.
- Removing some nested trylock loops in dcache code.
- Potentially simplifying things a bit in VM land: we would not need to take the page lock to follow page->mapping.
The downside of this is the performance cost of using RCU. In a simple creat/unlink microbenchmark, performance drops by about 10% due to the inability to reuse cache-hot slab objects. As iterations increase and RCU freeing starts kicking over, this increases to about 20%. In cases where inode lifetimes are longer (i.e. many inodes may be allocated during the average life span of a single inode), much of this cache reuse is not applicable, so the regression caused by this patch is smaller. The cache-hot regression could largely be avoided by using SLAB_DESTROY_BY_RCU; however, this adds some complexity to list walking and store-free path walking, so I prefer to implement it at a later date, if it is shown to be a win in real situations. I haven't found a regression in any non-micro benchmark, so I doubt it will be a problem. Signed-off-by: Nick Piggin <npiggin@kernel.dk>
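The shape of the change is the usual call_rcu() deferral pattern; a minimal sketch, assuming an i_rcu rcu_head in struct inode (which this series adds) and a generic inode slab cache:

```c
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

static struct kmem_cache *my_inode_cachep;	/* illustrative cache name */

static void my_i_callback(struct rcu_head *head)
{
	struct inode *inode = container_of(head, struct inode, i_rcu);

	kmem_cache_free(my_inode_cachep, inode);
}

static void my_destroy_inode(struct inode *inode)
{
	/* Defer the free until concurrent RCU path walkers are done. */
	call_rcu(&inode->i_rcu, my_i_callback);
}
```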
-
Committed by Nick Piggin
dget_locked was a shortcut to avoid the lazy lru manipulation when we already held dcache_lock (lru manipulation was relatively cheap at that point). However, now that the lru lock is an innermost one, we never hold it at any caller, so the lock cost can be avoided. We already have a well-working lazy dcache LRU, so it should be fine to defer LRU manipulations to scan time. Signed-off-by: Nick Piggin <npiggin@kernel.dk>
-
Committed by Nick Piggin
dcache_lock no longer protects anything. Remove it. Signed-off-by: Nick Piggin <npiggin@kernel.dk>
-
Committed by Nick Piggin
Protect the d_unhashed(dentry) condition with d_lock. This means keeping the DCACHE_UNHASHED bit in sync with hash manipulations. Signed-off-by: Nick Piggin <npiggin@kernel.dk>
-
Committed by Nick Piggin
Make d_count non-atomic and protect it with d_lock. This allows us to ensure a 0-refcount dentry remains 0 without dcache_lock. It is also fairly natural once we start protecting many other dentry members with d_lock. Signed-off-by: Nick Piggin <npiggin@kernel.dk>
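A minimal sketch of the new discipline: every d_count manipulation happens under d_lock, so a plain integer suffices and a zero count can be trusted while the lock is held:

```c
#include <linux/dcache.h>
#include <linux/spinlock.h>

static void my_dget(struct dentry *dentry)
{
	spin_lock(&dentry->d_lock);
	dentry->d_count++;	/* plain increment; d_lock serializes it */
	spin_unlock(&dentry->d_lock);
}
```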
-
Committed by Nick Piggin
Change d_delete from a dentry deletion notification to dentry caching advice, more like ->drop_inode. Require it to be constant and idempotent, and not to take d_lock. This is how all existing filesystems use the callback anyway. This makes fine-grained dentry locking of dput and dentry lru scanning much simpler. Signed-off-by: Nick Piggin <npiggin@kernel.dk>
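Under the new rules an implementation can only be a constant, idempotent predicate; for example, a filesystem that never wants unused dentries cached might do the following (a sketch, assuming the const prototype this series settles on):

```c
#include <linux/dcache.h>

/* Caching advice only: return 1 to prune the dentry when its count drops,
 * 0 to keep it cached. No locks taken, no side effects. */
static int my_d_delete(const struct dentry *dentry)
{
	return 1;	/* never cache */
}
```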
-
Committed by Paul Mundt
There have likewise been some API updates, so we refactor to use the consolidated smp_prepare_cpus(). Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt
This fixes up the SMP support to use the refactored GIC APIs. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt
Use the new linux/clkdev.h to get it building again. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm
Remove the now unused IRQ demux code. All R-Mobile and SH-Mobile processors should register IRQ demux handlers at run-time. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm
Break out the GIC-specific IRQ demux code from the file entry-macro-intc.S and register it at run-time. Covers sh73a0. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm
Break out the INTC-specific IRQ demux code from the file entry-macro-intc.S and register it at run-time. Covers sh7367, sh7377 and sh7372. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm
Always enable MULTI_IRQ_HANDLER on SH-Mobile. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Magnus Damm
Use the GIC demux code in asm/hardware/entry-macro-gic.S on the R-Mobile / SH-Mobile processors. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Simon Horman
When CONFIG_ZBOOT_ROM is selected, the resulting zImage file will be a small boot loader and may be burned to ROM or flash. Cc: Magnus Damm <magnus.damm@gmail.com> Cc: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Russell King
Add ARM support for the DMA debug infrastructure, which allows DMA API usage to be debugged. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 06 January 2011, 4 commits
-
Committed by Magnus Damm
This patch enables the Migo-R specific touch screen driver in the Migo-R defconfig. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Kuninori Morimoto
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Kuninori Morimoto
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Kuninori Morimoto
fsib_clk will be used when fdiv_clk fails in fsi_hdmi_set_rate. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 05 January 2011, 8 commits
-
Committed by Huang Ying
Prevent the long delay in io_check_error from making the NMI watchdog time out. Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Don Zickus <dzickus@redhat.com> LKML-Reference: <1294198689-15447-3-git-send-email-dzickus@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Dongdong Deng
The spin_lock_debug/rcu_cpu_stall detectors use trigger_all_cpu_backtrace() to dump cpu backtraces, so it is possible for trigger_all_cpu_backtrace() to be called at the same time on different CPUs, which triggers an 'unknown reason NMI' warning. The following case illustrates the problem:

    CPU1                      CPU2                      ...   CPU N
    trigger_all_cpu_backtrace()
    set "backtrace_mask" to cpu mask
              |
    generate NMI interrupts   generate NMI interrupts   ...
         \                         |                         /

The "backtrace_mask" is cleared by the first NMI interrupt, in nmi_watchdog_tick(); the subsequent NMI interrupts generated by the other cpus' arch_trigger_all_cpu_backtrace() calls are then treated as unknown-reason NMI interrupts. This patch uses a test_and_set to avoid the problem, stopping arch_trigger_all_cpu_backtrace() from being called again (and from dumping a duplicate cpu backtrace) while a trigger_all_cpu_backtrace() is already in progress. Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com> Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com> Cc: fweisbec@gmail.com LKML-Reference: <1294198689-15447-2-git-send-email-dzickus@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Don Zickus <dzickus@redhat.com>
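The serialization itself is a one-bit gate; a minimal sketch of the pattern (names are illustrative and details differ from the actual arch code):

```c
#include <linux/bitops.h>

static unsigned long backtrace_busy;	/* bit 0: a backtrace is in flight */

void my_trigger_all_cpu_backtrace(void)
{
	/* Only one caller may proceed; concurrent callers bail out instead
	 * of raising NMIs that would be reported as "unknown reason". */
	if (test_and_set_bit(0, &backtrace_busy))
		return;

	/* ... set backtrace_mask and send the NMI IPIs here ... */

	clear_bit(0, &backtrace_busy);
}
```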
-
Committed by Don Zickus
There are some paths that walk the die_chain with preemption on. Make sure we are in an NMI call before we start doing anything. This was triggered by do_general_protection calling notify_die with DIE_GPF. Reported-by: Jan Kiszka <jan.kiszka@web.de> Signed-off-by: Don Zickus <dzickus@redhat.com> LKML-Reference: <1294198689-15447-1-git-send-email-dzickus@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu
Found one x2apic pre-enabled system where x2apic_mode suddenly got corrupted after registering some cpus when compiled with CONFIG_NR_CPUS=255 instead of 512. It turns out that generic_processor_info() ==> physid_set(apicid, phys_cpu_present_map) causes the problem. phys_cpu_present_map is sized by MAX_APICS bits, and on a pre-enabled system some cpus have an apic id > 255. The variables after phys_cpu_present_map may get corrupted silently:

    ffffffff828e8420 B phys_cpu_present_map
    ffffffff828e8440 B apic_verbosity
    ffffffff828e8444 B local_apic_timer_c2_ok
    ffffffff828e8448 B disable_apic
    ffffffff828e844c B x2apic_mode
    ffffffff828e8450 B x2apic_disabled
    ffffffff828e8454 B num_processors
    ...

phys_cpu_present_map is actually referenced via apic id, not index, so we should size it by MAX_LOCAL_APIC instead of MAX_APICS. For 64-bit it will be 32768 in all cases. BSS will increase by 4k bytes on 64-bit:

    text       data     bss       dec       filename
    21696943   4193748  12787712  38678403  vmlinux.before
    21696943   4193748  12791808  38682499  vmlinux.after

No change on 32-bit. Finally, we can remove MAX_APICS, which was rather confusing. Signed-off-by: Yinghai Lu <yinghai@kernel.org> Cc: H. Peter Anvin <hpa@linux.intel.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> LKML-Reference: <4D23BD9C.3070102@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
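The bug in miniature: a bitmap indexed by APIC id must be sized by the largest possible id, not by a count of APICs. A sketch with invented my_* names (the real map is the physid bitmap in the x86 APIC code):

```c
#include <linux/bitmap.h>
#include <linux/bitops.h>

#define MY_MAX_LOCAL_APIC	32768	/* the 64-bit value cited above */

static DECLARE_BITMAP(my_phys_cpu_present_map, MY_MAX_LOCAL_APIC);

static void my_mark_apic_present(int apicid)
{
	/* Indexed by apic id: with a map sized for only 256 ids, an id > 255
	 * would write past the map and corrupt adjacent BSS variables. */
	set_bit(apicid, my_phys_cpu_present_map);
}
```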
-
Committed by Heiko Carstens
When the seqfile /proc/cpuinfo gets accessed, loops_per_jiffy gets recalculated for each possible cpu. However, its value is only needed on first access. In addition, loops_per_jiffy should be recalculated when the machine reports a capability change. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
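A sketch of the caching shape this describes (the helper names are hypothetical; the real change is in the s390 cpuinfo code):

```c
/* Compute loops_per_jiffy once, on first access; re-run the expensive
 * calibration only when a capability change is reported. */
static unsigned long my_cached_lpj;

static unsigned long my_calibrate_lpj(void);	/* expensive, hypothetical */

static unsigned long my_get_lpj(void)
{
	if (!my_cached_lpj)		/* first access only */
		my_cached_lpj = my_calibrate_lpj();
	return my_cached_lpj;
}

static void my_capability_changed(void)
{
	my_cached_lpj = 0;	/* force recalculation on next access */
}
```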
-
Committed by Heiko Carstens
Use get_online_cpus() instead of preempt_disable() to make sure cpus don't go offline while their per-cpu data is being accessed. The preempt_disable() usage is old code from before get_online_cpus() was available. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
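The pattern, as a minimal sketch (my_read_cpu_data is a hypothetical per-cpu accessor):

```c
#include <linux/cpu.h>

static void my_collect_percpu_data(void)
{
	int cpu;

	/* Unlike preempt_disable(), this blocks *other* cpus from going
	 * offline while we walk their per-cpu data. */
	get_online_cpus();
	for_each_online_cpu(cpu)
		my_read_cpu_data(cpu);	/* hypothetical accessor */
	put_online_cpus();
}
```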
-
Committed by Heiko Carstens
Get rid of the messages that indicate whether a cpu went online or offline. There is nothing special about this anymore, and these messages can flood the kernel log buffer, which makes debugging harder since more important messages might be overwritten. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Gerald Schaefer
This enables the spinning mutex feature on s390 by removing HAVE_DEFAULT_NO_SPIN_MUTEXES from arch/s390/Kconfig. Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-