- 27 March 2009, 3 commits
-
-
Committed by Isaku Yamahata
Paravirtualize "mov reg = ar.itc".

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
Paravirtualize fsys.S.

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
Add two hooks, paravirt_get_fsyscall_table() and paravirt_get_fsys_bubble_down(), to paravirtualize the fsyscall implementation. This patch only adds the hooks to fsyscall; it does not paravirtualize it yet.

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
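As a rough illustration of the hook pattern these two functions introduce, here is a small user-space C sketch: an ops structure of function pointers that defaults to the native implementation and that a paravirtualized guest could override at boot. The names (pv_fsys_ops, native_get_*) and the addresses are hypothetical, not the kernel's actual code.

    /* Hypothetical sketch of the paravirt hook pattern; not the real
     * arch/ia64 implementation. pv_fsys_ops and the addresses below are
     * made up for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    struct pv_fsys_ops {
        uint64_t (*get_fsyscall_table)(void);
        uint64_t (*get_fsys_bubble_down)(void);
    };

    /* Native defaults; placeholder addresses. */
    static uint64_t native_get_fsyscall_table(void)   { return 0xa000000000020000ULL; }
    static uint64_t native_get_fsys_bubble_down(void) { return 0xa000000000020800ULL; }

    /* A paravirtualized guest would overwrite these pointers during early boot. */
    static struct pv_fsys_ops pv_fsys_ops = {
        .get_fsyscall_table   = native_get_fsyscall_table,
        .get_fsys_bubble_down = native_get_fsys_bubble_down,
    };

    uint64_t paravirt_get_fsyscall_table(void)   { return pv_fsys_ops.get_fsyscall_table(); }
    uint64_t paravirt_get_fsys_bubble_down(void) { return pv_fsys_ops.get_fsys_bubble_down(); }

    int main(void)
    {
        printf("fsyscall table at 0x%llx\n",
               (unsigned long long)paravirt_get_fsyscall_table());
        return 0;
    }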
-
- 10 April 2008, 1 commit
-
-
Committed by Pavel Emelyanov
The sys_getpid() and sys_set_tid_address() behavior changed from

    return current->tgid;

to

    struct pid *pid;
    pid = current->pids[PIDTYPE_PID].pid;
    return pid->numbers[pid->level].nr;

but the fast system calls on ia64 still operate the old way. Patch them appropriately so that ia64 works with pid namespaces. Besides, this is one more step in deprecating pid and tgid on task_struct.

The fsys_getppid() is to be patched as well, but its logic is much more complex now, so I will do that later.

One thing I'm not 100% sure about is the trick with IA64_UPID_SHIFT. In order to access the pid->level'th element of the array I have to perform the following calculation:

    pid + sizeof(struct upid) * pid->level

The problem is that ia64 can only multiply floating-point registers, while all the offsets I have in the code are in rXX ones. Fortunately, sizeof(struct upid) is 32 bytes on ia64 (and is very unlikely to ever change), so the calculation gets simpler:

    pid + (pid->level << 5)

So I introduce IA64_UPID_SHIFT and use the shl instruction. I also looked at how gcc compiles the similar place and found that it uses a shift as well. Is this OK to do?

Tested with the ski emulator with a 2.6.24 kernel, but fits 2.6.25-rc4 and 2.6.25-rc4-mm1 as well.

Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Cc: David Mosberger-Tang <davidm@hpl.hp.com>
Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Amy Griffis <amy.griffis@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
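The offset trick in that message can be checked with a small stand-alone C sketch. The structures below are simplified stand-ins (the real kernel layouts differ), and it assumes an LP64 host so that sizeof(struct upid) comes out to 32 bytes.

    /* Sketch of the byte-offset arithmetic: indexing numbers[level] is the
     * same as adding level << 5 bytes, as long as sizeof(struct upid) == 32.
     * Simplified stand-in structures; assumes an LP64 host. */
    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    struct pid_namespace;                    /* opaque here */

    struct upid {
        int nr;
        struct pid_namespace *ns;
        char pad[16];                        /* pad the struct to 32 bytes */
    };

    struct toy_pid {
        unsigned int level;
        struct upid numbers[4];
    };

    #define UPID_SHIFT 5                     /* log2(sizeof(struct upid)) */

    int main(void)
    {
        struct toy_pid pid = { .level = 2 };
        pid.numbers[2].nr = 1234;

        /* Base + (level << 5), as the shl-based assembly would compute it. */
        struct upid *via_shift = (struct upid *)
            ((char *)pid.numbers + ((size_t)pid.level << UPID_SHIFT));

        assert(sizeof(struct upid) == 32);
        assert(via_shift == &pid.numbers[pid.level]);
        printf("nr = %d\n", via_shift->nr);  /* prints 1234 */
        return 0;
    }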
-
- 11 March 2008, 1 commit
-
-
Committed by Hidetoshi Seto
This patch does the following:

- Remove outdated comments (which I marked with "?" some time ago).
- Reassemble instructions to fit them in fewer bundles.
- If the McKinley Errata 9 workaround is not needed, the workaround bundles are patched out with NOPs. However, it is also unnecessary to have an all-NOP bundle (nop * 3) before the branch.

As a result, this makes the code path 3 (or 2) bundles shorter (and removes 1 unnecessary stop bit). It seems to be 1% faster. (10 sec loop test, with nojitter, @ Madison 1.5GHz x 4)

Before:
  CPU 0: 0.14 (usecs) (0 errors / 69598875 iterations)
  CPU 1: 0.14 (usecs) (0 errors / 69630721 iterations)
  CPU 2: 0.14 (usecs) (0 errors / 69607850 iterations)
  CPU 3: 0.14 (usecs) (0 errors / 69619832 iterations)

After:
  CPU 0: 0.14 (usecs) (0 errors / 70257728 iterations)
  CPU 1: 0.14 (usecs) (0 errors / 70309498 iterations)
  CPU 2: 0.14 (usecs) (0 errors / 70280639 iterations)
  CPU 3: 0.14 (usecs) (0 errors / 70260682 iterations)

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 21 February 2008, 1 commit
-
-
Committed by Hidetoshi Seto
This patch implements VIRT_CPU_ACCOUNTING for ia64, which enables us to use more accurate CPU time accounting.

VIRT_CPU_ACCOUNTING is a kernel config option that the s390 and powerpc architectures already have. Turning it on switches the CPU time accounting mechanism from a tick-sampling based one to a state-transition based one.

State-transition based accounting works by reading the time (the cycle counter in the processor) at every state-transition point, such as entry to and exit from the kernel, interrupts, softirqs, etc. The difference between two such points is the actual time consumed in that state. There is no doubt that this value is more accurate than the one produced by tick-sampling based accounting.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
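A rough user-space sketch of the state-transition idea (clearly not kernel code; the state names are made up and clock_gettime() stands in for ar.itc): the time source is read at every transition and the delta is charged to the state being left.

    /* Conceptual sketch of state-transition accounting, not kernel code. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    enum cpu_state { CPU_USER, CPU_SYSTEM, CPU_IRQ, CPU_NSTATES };

    static uint64_t state_time[CPU_NSTATES];   /* accumulated ns per state */
    static enum cpu_state cur_state = CPU_USER;
    static uint64_t last_stamp;

    static uint64_t read_cycles(void)          /* stand-in for ar.itc */
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    /* Called at every transition: kernel entry/exit, interrupt, softirq, ... */
    static void account_transition(enum cpu_state next)
    {
        uint64_t now = read_cycles();
        state_time[cur_state] += now - last_stamp;  /* charge the state we leave */
        last_stamp = now;
        cur_state  = next;
    }

    int main(void)
    {
        last_stamp = read_cycles();
        account_transition(CPU_SYSTEM);   /* e.g. syscall entry */
        account_transition(CPU_USER);     /* syscall exit */
        printf("system time: %llu ns\n", (unsigned long long)state_time[CPU_SYSTEM]);
        return 0;
    }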
-
- 21 July 2007, 1 commit
-
-
Committed by Tony Luck
This is a merge of Peter Keilty's initial patch for this (which was revived by Bob Picco) with Hidetoshi Seto's fixes and scaling improvements.

Acked-by: Bob Picco <bob.picco@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 14 July 2007, 1 commit
-
-
Committed by Hidetoshi Seto
The ".acq" semantics of the load only apply with respect to other data accesses. Reading the clock (ar.itc) isn't a data access, so strange things can happen here. Specifically, the read of ar.itc can be launched as soon as the read of xtime_lock.sequence is ISSUED. Since that read may cache miss, which might cause a thread switch, and there may be cache contention for the line containing xtime_lock, it may be a long time before the actual value is returned, so the ar.itc value may be very stale.

Move the consumption of r28 up before the read of ar.itc to make sure that we really have got the current value of xtime_lock.sequence before looking at ar.itc.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 08 March 2007, 1 commit
-
-
Committed by Fenghua Yu
On a 1.6GHz Montecito Tiger4, the following performance data was measured with a kernel built with defconfig, which has NUMA configured:

  Fastest sys_getcpu:  502 itc counts.
  Fastest fsys_getcpu:  28 itc counts.

fsys_getcpu performance is largely affected by whether the data (node_to_cpu_map etc.) is in cache; fsys_getcpu can take up to ~150 itc counts in the cold-cache case.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 01 March 2006, 1 commit
-
-
Committed by Ken Chen
Beautify the coding style for zeroing the end of the fsyscall_table entries. Remove the misleading __NR_syscall_last and add more comments. Drop the (now unneeded) "guard against failure to increase NR_syscalls".

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 07 February 2006, 1 commit
-
-
Committed by Chen, Kenneth W
Wire up the ia64 syscalls for the *at() functions.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 14 January 2006, 1 commit
-
-
Committed by Tony Luck
When this new syscall was added to ia64 in commit 39743889, fsys.S was forgotten. Add a ".data8 0" there to keep it in step.

[Reported by Stephane Eranian]

Signed-off-by: Tony Luck <tony.luck@intel.com>
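To make the "keep it in step" point concrete, here is a tiny C analogue of a table indexed by syscall number, where a placeholder entry (the C counterpart of ".data8 0") keeps every later entry at the right index. The handler names and numbering are made up; the real fsyscall table is an array of 8-byte code addresses in fsys.S.

    /* Illustrative only: a dispatch table indexed by syscall number needs a
     * placeholder for every unimplemented slot, or later entries shift. */
    #include <stddef.h>
    #include <stdio.h>

    typedef long (*fsys_handler_t)(void);

    static long fast_gettimeofday(void) { return 0; }   /* dummy handlers */
    static long fast_getpid(void)       { return 42; }

    static fsys_handler_t fsyscall_table[] = {
        fast_gettimeofday,   /* syscall N     (hypothetical numbering) */
        NULL,                /* syscall N + 1: no fast path, placeholder */
        fast_getpid,         /* syscall N + 2 */
    };

    int main(void)
    {
        size_t nr = 2;       /* offset of the syscall we want */
        if (fsyscall_table[nr])
            printf("fast path result: %ld\n", fsyscall_table[nr]());
        else
            printf("fall back to the slow path\n");
        return 0;
    }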
-
- 10 September 2005, 1 commit
-
-
Committed by Sam Ravnborg
Delete obsolete stuff from the arch Makefile and rename the file to asm-offsets.h. The trick used in the arch Makefile to circumvent the circular dependency is kept.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
-
- 10 June 2005, 1 commit
-
-
Committed by Christoph Lameter
current->blocked will be set to the value of current->thread_info->flags if the cmpxchg used to update thread_info->flags fails. For performance reasons, the store into current->blocked was placed inside the cmpxchg loop. However, the cmpxchg overwrites the register holding the value to be stored. In the rare case of a retry, the value of thread_info->flags is written into current->blocked instead.

The fix is to use another register so that the register containing the current->blocked value is not overwritten.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
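The same pitfall can be shown with C11 atomics, where compare_exchange overwrites its "expected" argument on failure, much like the ia64 cmpxchg result landing in the source register. The function and variable names below are illustrative, not the actual fsys.S code.

    /* Minimal C11 analogue of the bug: any value that must survive a
     * cmpxchg retry has to live in a variable the CAS does not clobber. */
    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic unsigned long thread_flags;
    static unsigned long blocked;              /* stand-in for current->blocked */

    static void set_flag_and_blocked(unsigned long new_blocked, unsigned long set_bits)
    {
        unsigned long old, newval;

        do {
            old    = atomic_load(&thread_flags);
            newval = old | set_bits;
            /* Keep the value destined for current->blocked in its own
             * variable: reusing "old" here would be the bug, because a
             * failed compare-exchange rewrites it with the current flags. */
            blocked = new_blocked;
        } while (!atomic_compare_exchange_weak(&thread_flags, &old, newval));
    }

    int main(void)
    {
        set_flag_and_blocked(0xff, 0x1);
        printf("flags=%lx blocked=%lx\n",
               (unsigned long)atomic_load(&thread_flags), blocked);
        return 0;
    }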
-
- 06 May 2005, 1 commit
-
-
Committed by David Mosberger-Tang
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 04 May 2005, 1 commit
-
-
Committed by David Woodhouse
Attached is a patch against David's audit.17 kernel that adds checks for the TIF_SYSCALL_AUDIT thread flag to the ia64 system call and signal handling code paths. The patch enables auditing of system calls set up via fsys_bubble_down, as well as ensuring that audit_syscall_exit() is called on return from sigreturn.

Neglecting to check for TIF_SYSCALL_AUDIT at these points results in incorrect information in audit_context, causing frequent system panics when system call auditing is enabled on an ia64 system. I have tested this patch and have seen no problems with it.

[Original patch from Amy Griffis ported to the current kernel by David Woodhouse]

From: Amy Griffis <amy.griffis@hp.com>
From: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Chris Wright <chrisw@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 29 April 2005, 1 commit
-
-
Committed by Amy Griffis
Attached is a patch against David's audit.17 kernel that adds checks for the TIF_SYSCALL_AUDIT thread flag to the ia64 system call and signal handling code paths. The patch enables auditing of system calls set up via fsys_bubble_down, as well as ensuring that audit_syscall_exit() is called on return from sigreturn.

Neglecting to check for TIF_SYSCALL_AUDIT at these points results in incorrect information in audit_context, causing frequent system panics when system call auditing is enabled on an ia64 system.

Signed-off-by: Amy Griffis <amy.griffis@hp.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
- 28 April 2005, 2 commits
-
-
Committed by David Mosberger-Tang
This patch changes comments and formatting only. There is no code change.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by David Mosberger-Tang
The improvements come from eliminating srlz.i, not scheduling AR/CR-reads too early (while there are others still pending), scheduling the backing-store switch as well as possible, and splitting the BBB bundle into a MIB/MBB pair.

Why is it safe to eliminate the srlz.i? Observe that we used to clear bits ~PSR_PRESERVED_BITS in PSR.L. Since PSR_PRESERVED_BITS==PSR.{UP,MFL,MFH,PK,DT,PP,SP,RT,IC}, we ended up clearing PSR.{BE,AC,I,DFL,DFH,DI,DB,SI,TB}. However:

  PSR.BE : already turned off in __kernel_syscall_via_epc()
  PSR.AC : don't care (kernel normally turns PSR.AC on)
  PSR.I  : already turned off by the time fsys_bubble_down gets invoked
  PSR.DFL: always 0 (kernel never turns it on)
  PSR.DFH: don't care --- kernel never touches f32-f127 on its own initiative
  PSR.DI : always 0 (kernel never turns it on)
  PSR.SI : always 0 (kernel never turns it on)
  PSR.DB : don't care --- kernel never enables kernel-level breakpoints
  PSR.TB : must be 0 already; if it wasn't zero on entry to __kernel_syscall_via_epc, the branch to fsys_bubble_down will trigger a taken branch; the taken-trap-handler then converts the syscall into a break-based system call.

In other words: all the bits we're clearing are either 0 already or are don't-cares! Thus, we don't have to write PSR.L at all and we don't have to do a srlz.i either. Good for another ~20 cycle improvement for EPC-based heavy-weight syscalls.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 17 April 2005, 1 commit
-
-
Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!
-