- 14 Jan 2006: 16 commits
-
-
Committed by Andreas Schwab

TTY layer buffering revamp broke ia64 in commit 33f0f88f:

      CC      arch/ia64/hp/sim/simserial.o
    arch/ia64/hp/sim/simserial.c: In function `receive_chars':
    arch/ia64/hp/sim/simserial.c:170: error: structure has no member named `flip'
    ... and so on ...
    make[1]: *** [arch/ia64/hp/sim/simserial.o] Error 1

Patch from Andreas Schwab.

Signed-off-by: Tony Luck <tony.luck@intel.com>
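For context, the buffering revamp removed the open-coded tty->flip fields in favor of helper calls; a minimal sketch of the kind of change the fix entails (using the 2.6-era flip-buffer helpers, not the actual simserial.c diff):

    /* Before the revamp (no longer compiles):
     *   *tty->flip.char_buf_ptr = ch;
     *   *tty->flip.flag_buf_ptr = TTY_NORMAL;
     *   tty->flip.count++;
     * After: use the flip-buffer helpers instead.
     */
    #include <linux/tty.h>
    #include <linux/tty_flip.h>

    static void receive_one_char(struct tty_struct *tty, unsigned char ch)
    {
            tty_insert_flip_char(tty, ch, TTY_NORMAL);  /* queue one char */
            tty_flip_buffer_push(tty);  /* hand it to the line discipline */
    }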
-
Committed by Zhang Yanmin

When a jprobe is hit, the function parameters of the original function should be saved before the jprobe handler executes, and restored after the jprobe handler executes, because the jprobe handler might change the register values due to tail-call optimization by gcc.

Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
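A rough sketch of the save/restore pattern described, mirroring what other architectures do in their jprobe handlers (a single static save area is used here for brevity; real code keeps it per-cpu, and ia64's stacked-register handling is more involved):

    #include <linux/kprobes.h>
    #include <linux/string.h>

    static struct pt_regs jprobe_saved_regs;    /* per-cpu in real code */

    static int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
    {
            /* Save the caller's registers before the jprobe handler runs... */
            memcpy(&jprobe_saved_regs, regs, sizeof(*regs));
            return 1;
    }

    static int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
    {
            /* ...and restore them afterwards, undoing anything the handler
             * (or gcc's tail-call optimization of it) clobbered. */
            memcpy(regs, &jprobe_saved_regs, sizeof(*regs));
            return 1;
    }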
-
Committed by Keith Owens

Add hotplug cpu support to salinfo.c. The cpu_event field is a cpumask, so use the cpu_* macros consistently, replacing the existing mixture of cpu_* and *_bit macros. Instead of counting the number of outstanding events in a semaphore and trying to track that count over user space context, interrupt context, non-maskable interrupt context and cpu hotplug, replace the semaphore with a test for "any bits set" combined with a mutex. Modify the locking to make the test for "work to do" an atomic operation.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
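A sketch of the "any bits set" plus mutex pattern being described (names and data structures here are illustrative, not salinfo.c's actual ones):

    #include <linux/cpumask.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(salinfo_mutex);  /* hypothetical name */
    static cpumask_t cpu_event;          /* one bit per cpu with a pending event */

    static int salinfo_work_pending(void)
    {
            int pending;

            mutex_lock(&salinfo_mutex);
            /* "any bits set" replaces the semaphore's event count */
            pending = !cpus_empty(cpu_event);
            mutex_unlock(&salinfo_mutex);
            return pending;
    }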
-
Committed by Jason Uhlenkott

We need to handle debug traps in fsys mode non-fatally. They can happen now that we have fsyscalls which contain probe instructions.

Signed-off-by: Jason Uhlenkott <jasonuhl@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Prarit Bhargava

This patch separates the sn_flush_device_list struct into kernel and common (both kernel and PROM accessible) structures. As it was, if the size of a spinlock_t changed (due to additional CONFIG options, etc.) the SAL call which populated the sn_flush_device_list structs would erroneously write data (and cause memory corruption and/or a panic). This patch does the following:

1. Removes sn_flush_device_list and adds sn_flush_device_common and sn_flush_device_kernel.
2. Adds a new SAL call to populate a sn_flush_device_common struct per device, not per widget as previously done.
3. Correctly initializes each device's sn_flush_device_kernel spinlock_t struct (before, it was only doing each widget's first device).

Signed-off-by: Prarit Bhargava <prarit@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Jack Steiner

I originally thought this was a bug only in the SN code, but I think I also see a hole in the generic IA64 TLB code. (A separate patch was sent for the SN problem.) It looks like there is a bug in the TLB flushing code. During context switch, kernel threads (kswapd, for example) inherit the mm of the task that was previously running on the cpu. Normally, this is ok because the previous context is still loaded into the RR registers. However, if the owner of the mm migrates to another cpu, changes its context number, and references a page before kswapd issues a tlb_purge for that same page, the purge will be done with a stale context number (& RR registers).

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Russ Anderson

Altix (shub2) pushes the BTE clean-up into SAL. This patch correctly interfaces with the now-implemented SAL call. It also fixes a bug when delaying clean-up to allow busy BTEs to complete (or error out).

Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Francois Wellenrieter

On return from the INIT handler we must convert the address of the minstate area from a kernel virtual uncached address (0xC...) to physical uncached (0x8...). A typo (or thinko?) in the code converted to physical cached.

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Dean Nelson

Clean up a few items after moving xpc.h from arch/ia64/sn/kernel to include/asm-ia64/sn.

Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Dean Nelson

Move xpc.h from arch/ia64/sn/kernel to include/asm-ia64/sn without change.

Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Dean Nelson

Move xpc_system_reboot() to be closer to the file it calls, for readability reasons (which are indeed subjective).

Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Dean Nelson

Allow the loss of heartbeat while in kdebug to be ignored by remote partitions.

Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Dean Nelson

Only unregister from notifier lists if XPC is unloading.

Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Dean Nelson

Clean up the XPC disengage-related messages that are printed to the log.

Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Dean Nelson

This patch fixes a problem in XPC disengage processing whereby it was not seeing the request to disengage from a remote partition, so the disengage wasn't happening. The disengagement is supposed to take place while an XPC channel is disconnecting, and should be completed before the channel is declared disconnected.

Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Tony Luck

When this new syscall was added to ia64 in commit 39743889, fsys.S was forgotten. Add a ".data8 0" there to keep it in step. [Reported by Stephane Eranian]

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 13 Jan 2006: 4 commits
-
-
Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro

On ia64, thread_info is at a constant offset from task_struct, and the stack is embedded in the same beast. Set __HAVE_THREAD_FUNCTIONS and make task_thread_info() just add a constant.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
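With thread_info at a fixed offset inside task_struct, the override reduces to pointer arithmetic; a sketch of what the ia64 definition looks like (IA64_TASK_SIZE being the offset constant):

    /* asm/thread_info.h (sketch): no dereference needed, just an offset. */
    #define __HAVE_THREAD_FUNCTIONS
    #define task_thread_info(tsk) \
            ((struct thread_info *)((char *)(tsk) + IA64_TASK_SIZE))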
-
Committed by akpm@osdl.org

From: Ingo Molnar <mingo@elte.hu>

This is the latest version of the scheduler cache-hot-auto-tune patch.

The first problem was that detection time scaled with O(N^2), which is unacceptable on larger SMP and NUMA systems. To solve this:

- I've added a 'domain distance' function, which is used to cache measurement results. Each distance is only measured once. This means that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT distances 0 and 1, and on SMP distance 0 is measured. The code walks the domain tree to determine the distance, so it automatically follows whatever hierarchy an architecture sets up. This cuts down on the boot time significantly and removes the O(N^2) limit. The only assumption is that migration costs can be expressed as a function of domain distance - this covers the overwhelming majority of existing systems, and is a good guess even for more asymmetric systems.

[ People hacking systems that have asymmetries that break this assumption (e.g. different CPU speeds) should experiment a bit with the cpu_distance() function. Adding a ->migration_distance factor to the domain structure would be one possible solution - but let's first see the problem systems, if they exist at all. Let's not overdesign. ]

Another problem was that only a single cache-size was used for measuring the cost of migration, and most architectures didn't set that variable up. Furthermore, a single cache-size does not fit NUMA hierarchies with L3 caches and does not fit HT setups, where different CPUs will often have different 'effective cache sizes'. To solve this problem:

- Instead of relying on a single cache-size provided by the platform and sticking to it, the code now auto-detects the 'effective migration cost' between two measured CPUs, via iterating through a wide range of cachesizes. The code searches for the maximum migration cost, which occurs when the working set of the test-workload falls just below the 'effective cache size'. I.e. a real-life optimized search is done for the maximum migration cost, between two real CPUs.

This, amongst other things, has the positive effect that if e.g. two CPUs share an L2/L3 cache, a different (and accurate) migration cost will be found than between two CPUs on the same system that don't share any caches. (The reliable measurement of migration costs is tricky - see the source for details.)

Furthermore, I've added various boot-time options to override/tune migration behavior.

Firstly, there's a blanket override for autodetection:

    migration_cost=1000,2000,3000

will override the depth 0/1/2 values with 1msec/2msec/3msec values.

Secondly, there's a global factor that can be used to increase (or decrease) the autodetected values:

    migration_factor=120

will increase the autodetected values by 20%. This option is useful to tune things in a workload-dependent way - e.g. if a workload is cache-insensitive then CPU utilization can be maximized by specifying migration_factor=0.

I've tested the autodetection code quite extensively on x86, on 3 P3/Xeon/2MB, and the autodetected values look pretty good:

Dual Celeron (128K L2 cache):

    ---------------------
    migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
    ---------------------
              [00]    [01]
    [00]:       -     1.7(1)
    [01]:     1.7(1)    -
    ---------------------
    cacheflush times [2]: 0.0 (0) 1.7 (1784008)
    ---------------------

Here the slow memory subsystem dominates system performance, and even though caches are small, the migration cost is 1.7 msecs.

Dual HT P4 (512K L2 cache):

    ---------------------
    migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
    ---------------------
              [00]    [01]    [02]    [03]
    [00]:       -     0.4(1)  0.0(0)  0.4(1)
    [01]:     0.4(1)    -     0.4(1)  0.0(0)
    [02]:     0.0(0)  0.4(1)    -     0.4(1)
    [03]:     0.4(1)  0.0(0)  0.4(1)    -
    ---------------------
    cacheflush times [2]: 0.0 (33900) 0.4 (448514)
    ---------------------

Here it can be seen that there is no migration cost between two HT siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.

8-way P3/Xeon [2MB L2 cache]:

    ---------------------
    migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
    ---------------------
              [00]    [01]    [02]    [03]    [04]    [05]    [06]    [07]
    [00]:       -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
    [01]:    19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
    [02]:    19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
    [03]:    19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1)
    [04]:    19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1)
    [05]:    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1)
    [06]:    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1)
    [07]:    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -
    ---------------------
    cacheflush times [2]: 0.0 (0) 19.2 (19281756)
    ---------------------

This one has huge caches and a relatively slow memory subsystem - so the migration cost is 19 msecs.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Cc: <wilder@us.ibm.com>
Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Ingo Molnar

Add a per-arch sched_cacheflush(), which is a write-back cacheflush used by the migration-cost calibration code at bootup time.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
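On ia64 this hook can be implemented as a SAL cache flush; a sketch (assuming the argument 3 to ia64_sal_cache_flush() selects "flush all cache levels"):

    #include <asm/sal.h>

    /* Write back (and invalidate) all caches, so the migration-cost
     * measurement starts from a cold cache. */
    static inline void sched_cacheflush(void)
    {
            ia64_sal_cache_flush(3);
    }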
-
- 12 Jan 2006: 2 commits
-
-
Committed by Randy Dunlap

arch: Use <linux/capability.h> where capable() is used.

Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Keshavamurthy Anil S

There is a window where a probe gets removed right after the probe is hit on some different cpu. In this case probe handlers can't find a matching probe instance related to the break address, so we need to read the original instruction at the break address to see whether it is a break/int3 instruction, and recover safely if it is not. The previous code had a bug where we were not checking for the above race in the case of reentrant probes; the patch below fixes this race. Tested on IA64, Powerpc, x86_64.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 11 Jan 2006: 3 commits
-
-
Committed by Anil S Keshavamurthy

Currently arch_remove_kprobe() is only implemented/required for x86_64 and powerpc. All other architectures, like IA64, i386 and sparc64, implement a dummy function which is called from the arch-independent kprobes.c file. This patch removes the dummy functions and replaces them with

    #define arch_remove_kprobe(p, s) do { } while(0)

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Christoph Hellwig

Now that all these entries in the arch ioctl32.c files are gone [1], we can build fs/compat_ioctl.c as a normal object and kill tons of cruft. We need a special do_ioctl32_pointer handler for s390 so the compat_ptr call is done. This is not needed but harmless on all other architectures. Also remove some superfluous includes in fs/compat_ioctl.c. Tested on ppc64.

[1] parisc still had its PPP handler left, which is not fully correct for ppp, and besides that ppp uses the generic SIOCPRIV ioctl so it'd kick in for all netdevice users. We can introduce a proper handler in one of the next patch series by adding a compat_ioctl method to struct net_device, but for now let's just kill it - parisc doesn't compile in mainline anyway and I don't want this to block this patchset.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@debian.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Christoph Hellwig

The comment in compat.c is wrong; every architecture provides a get_compat_sigevent() for the IPC compat code already. This basically moves the x86_64 version to common code and removes all the others.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Paul Mackerras <paulus@samba.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Acked-by: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
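A sketch of roughly what the common helper does (close to the x86_64 version this patch promotes; the real one also copies sigev_notify_thread_id):

    #include <linux/compat.h>
    #include <linux/signal.h>
    #include <linux/string.h>
    #include <asm/uaccess.h>

    /* Copy a 32-bit sigevent from user space and widen it into the
     * native struct sigevent. */
    int get_compat_sigevent(struct sigevent *event,
                            const struct compat_sigevent __user *u_event)
    {
            memset(event, 0, sizeof(*event));
            if (!access_ok(VERIFY_READ, u_event, sizeof(*u_event)) ||
                __get_user(event->sigev_value.sival_int,
                           &u_event->sigev_value.sival_int) ||
                __get_user(event->sigev_signo, &u_event->sigev_signo) ||
                __get_user(event->sigev_notify, &u_event->sigev_notify))
                    return -EFAULT;
            return 0;
    }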
-
- 09 Jan 2006: 5 commits
-
-
Committed by Bjorn Helgaas

Add a hook so architectures can validate /dev/mem mmap requests. This is analogous to validation we already perform in the read/write paths. The identity mapping scheme used on ia64 requires that each 16MB or 64MB granule be accessed with exactly one attribute (write-back or uncacheable). This avoids "attribute aliasing", which can cause a machine check.

Sample problem scenario:
- Machine supports VGA, so it has uncacheable (UC) MMIO at 640K-768K
- efi_memmap_init() discards any write-back (WB) memory in the first granule
- Application (e.g., "hwinfo") mmaps /dev/mem, offset 0
- hwinfo receives UC mapping (the default, since memmap says "no WB here")
- Machine check abort (on chipsets that don't support UC access to WB memory, e.g., sx1000)

In the scenario above, the only choices are:
- Use WB for the hwinfo mmap. Can't do this because it causes attribute aliasing with the UC mapping for the VGA MMIO space.
- Use UC for the hwinfo mmap. Can't do this because the chipset may not support UC for that region.
- Disallow the hwinfo mmap with -EINVAL. That's what this patch does.

Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
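A sketch of what such a hook looks like from /dev/mem's side (hook name and exact signature are assumptions here and may differ from the patch):

    #include <linux/fs.h>
    #include <linux/mm.h>

    /* drivers/char/mem.c side (sketch): let the architecture veto the map. */
    static int mmap_mem(struct file *file, struct vm_area_struct *vma)
    {
            size_t size = vma->vm_end - vma->vm_start;

            if (!valid_mmap_phys_addr_range(vma->vm_pgoff, size))
                    return -EINVAL; /* e.g. range mixes WB and UC attributes */
            /* ... establish the mapping as before ... */
            return 0;
    }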
-
Committed by Andrew Morton

Remove various things which were checking for gcc-1.x and gcc-2.x compilers.

From: Adrian Bunk <bunk@stusta.de>

Some documentation updates, and removal of some code paths for gcc < 3.2.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Christoph Hellwig

The ptrace_get_task_struct() helper that I added as part of the ptrace consolidation is useful in a variety of places that currently open-code it. Switch them to the common helpers.

Add a ptrace_traceme() helper that needs to be explicitly called, and simplify the ptrace_get_task_struct() interface. We don't need the request argument now, and we return the task_struct directly, using ERR_PTR() for error returns. It's a bit more code in the callers, but we have two sane routines that do one thing well now.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
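With the simplified interface, callers follow the usual ERR_PTR pattern; a sketch (the wrapper function name is illustrative):

    #include <linux/err.h>
    #include <linux/ptrace.h>
    #include <linux/sched.h>

    long handle_ptrace_request(long request, pid_t pid)
    {
            struct task_struct *child;

            if (request == PTRACE_TRACEME)
                    return ptrace_traceme();        /* now an explicit call */

            child = ptrace_get_task_struct(pid);    /* no request argument */
            if (IS_ERR(child))
                    return PTR_ERR(child);          /* error encoded in the pointer */
            /* ... operate on child, then put_task_struct(child) ... */
            return 0;
    }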
-
Committed by Christoph Lameter

sys_migrate_pages: implementation using swap-based page migration.

This is the original API proposed by Ray Bryant in his posts during the first half of 2005 on linux-mm@kvack.org and linux-kernel@vger.kernel.org.

The intent of sys_migrate_pages is to migrate the memory of a process. A process may have migrated to another node. Memory was allocated optimally for the prior context. sys_migrate_pages allows shifting the memory to the new node.

sys_migrate_pages is also useful if a process's available memory nodes have changed through cpuset operations, to manually move the process's memory. Paul Jackson is working on an automated mechanism that will allow an automatic migration if the cpuset of a process is changed. However, a user may decide to manually control the migration.

This implementation is put into the policy layer since it uses concepts and functions that are also needed for mbind and friends. The patch also provides a do_migrate_pages function that may be useful for cpusets to automatically move memory. sys_migrate_pages does not modify policies, in contrast to Ray's implementation.

The current code here is based on the swap-based page migration capability and thus is not able to preserve the physical layout relative to its containing nodeset (which may be a cpuset). When direct page migration becomes available, the implementation needs to be changed to do an isomorphic move of pages between different nodesets. The current implementation simply evicts all pages in the source nodeset that are not in the target nodeset.

The patch supports ia64, i386 and x86_64.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
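From user space the syscall takes old/new node bitmasks; a minimal usage sketch (assuming the libnuma wrapper in <numaif.h>, link with -lnuma):

    #include <numaif.h>     /* migrate_pages() wrapper */
    #include <stdio.h>

    int main(void)
    {
            unsigned long old_nodes = 1UL << 0;  /* move pages away from node 0 */
            unsigned long new_nodes = 1UL << 1;  /* ...onto node 1 */

            /* pid 0 means "the calling process"; 2 is maxnode (bits in the masks). */
            if (migrate_pages(0, 2, &old_nodes, &new_nodes) < 0)
                    perror("migrate_pages");
            return 0;
    }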
-
Committed by Sam Ravnborg

This was causing some ordering problems. Remove the up-front evaluation and just re-evaluate the compiler version each time we need it. (The up-front evaluation was problematic because some architectures modify the value of $(CC).)

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
-
- 06 Jan 2006: 1 commit
-
-
Committed by Tony Luck

    arch/ia64/kernel/setup.c: In function `show_cpuinfo':
    arch/ia64/kernel/setup.c:576: warning: long unsigned int format, different type arg (arg 12)
    arch/ia64/kernel/setup.c:576: warning: long unsigned int format, different type arg (arg 13)

Introduced by 95235ca2.

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 05 Jan 2006: 1 commit
-
-
Committed by Paul Jackson

The first of these changes, s/hotplug/uevent/, was needed to compile sn2_defconfig (ia64/sn). The other three files changed are blind changes of all remaining bus_type.hotplug references I could find to bus_type.uevent. This patch attempts to finish similar changes made in the gregkh-driver-kill-hotplug-word-from-driver-core Nov 22 patch.

Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 04 Jan 2006: 1 commit
-
-
Committed by Alex Williamson

The function ia64_pci_legacy_write() returns 0 for everything except errors. This return value gets sent back to the user from pci_write_legacy_io(), making it look like every write fails. The trivial patch below copies the behavior of the SGI sn machvec and does what would be expected from something implementing a write() function.

Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
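The convention a write()-style function is expected to follow is to return the number of bytes consumed on success; a sketch of the fix's effect (the port-write helper name is hypothetical):

    /* Return the byte count on success instead of 0, so
     * pci_write_legacy_io() reports a successful write to user space. */
    static int legacy_write_fixed(u32 port, u32 val, u8 size)
    {
            int ret = raw_legacy_port_write(port, val, size);  /* hypothetical helper */

            return ret ? ret : size;  /* error code, or bytes written */
    }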
-
- 17 Dec 2005: 5 commits
-
-
Committed by Christoph Lameter

sparc64, i386 and x86_64 have support for a special data section dedicated to rarely-updated data that is frequently read. The section was created to avoid false sharing of that read-mostly data with frequently-written kernel data. This patch creates such a data section for ia64 and groups rarely-written data into it.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
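The usual way such a section is wired up is an attribute macro plus a matching linker-script entry; a sketch (the variable name is illustrative):

    /* asm/cache.h (sketch): tag rarely-written variables so the linker
     * groups them away from frequently-written data. */
    #define __read_mostly __attribute__((__section__(".data.read_mostly")))

    /* Usage: read on hot paths, written essentially once at boot. */
    static unsigned long boot_param_cached __read_mostly;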
-
Committed by hawkes@sgi.com

Raise the NR_CPUS default for ia64/sn to 1024.

Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: John Hesterberg <jh@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Jack Steiner

I see why the problem exists only on SN. SN uses a different hardware mechanism to purge TLB entries across nodes. It looks like there is a bug in the SN TLB flushing code. During context switch, kernel threads inherit the mm of the task that was previously running on the cpu. This confuses the code in sn2_global_tlb_purge(). The result is a missed TLB purge for the task that owns the "borrowed" mm. (I hit the problem running heavy stress where kswapd was purging code pages of a user task that woke kswapd. The user task took a SIGILL fault trying to execute code in the page that had been ripped out from underneath it.)

Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Jes Sorensen

Use raw_smp_processor_id() instead of get_cpu(), as we don't need the extra features of get_cpu().

Signed-off-by: Jes Sorensen <jes@trained-monkey.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by John Hawkes

The udelay() inline for ia64 uses the ITC. If CONFIG_PREEMPT is enabled, the platform has unsynchronized ITCs, and the calling task migrates to another CPU while doing the udelay loop, then the effective delay may be too short or very, very long. This patch disables preemption around 100 usec chunks of the overall desired udelay time. This minimizes preemption-holdoffs. udelay() is now too big to be inline; move it out of line and export it.

Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
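The chunking pattern described could look roughly like this (a sketch; the ITC busy-wait helper name is hypothetical):

    #include <linux/module.h>
    #include <linux/preempt.h>

    void udelay(unsigned long usecs)
    {
            while (usecs > 0) {
                    unsigned long chunk = usecs > 100 ? 100 : usecs;

                    /* Stay on one cpu (and one ITC) for each 100 usec chunk,
                     * but allow preemption between chunks. */
                    preempt_disable();
                    itc_spin_usecs(chunk);  /* hypothetical ITC busy-wait */
                    preempt_enable();
                    usecs -= chunk;
            }
    }
    EXPORT_SYMBOL(udelay);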
-
- 16 Dec 2005: 1 commit
-
-
Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 15 Dec 2005: 1 commit
-
-
Committed by Robin Holt

Missed this when fixing the SET_PERSONALITY change.

Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-