1. 24 May, 2008 · 1 commit
  2. 25 Apr, 2008 · 4 commits
  3. 20 Apr, 2008 · 1 commit
  4. 17 Apr, 2008 · 2 commits
  5. 19 Feb, 2008 · 1 commit
  6. 10 Feb, 2008 · 1 commit
  7. 30 Jan, 2008 · 4 commits
  8. 17 Oct, 2007 · 1 commit
    • paravirt: refactor struct paravirt_ops into smaller pv_*_ops · 93b1eab3
      Committed by Jeremy Fitzhardinge
      This patch refactors the paravirt_ops structure into groups of
      functionally related ops:
      
      pv_info - random info, rather than function entrypoints
      pv_init_ops - functions used at boot time (some for module_init too)
      pv_misc_ops - lazy mode, which didn't fit well anywhere else
      pv_time_ops - time-related functions
      pv_cpu_ops - various privileged instruction ops
      pv_irq_ops - operations for managing interrupt state
      pv_apic_ops - APIC operations
      pv_mmu_ops - operations for managing pagetables
      
      There are several motivations for this:
      
      1. Some of these ops will be general to all x86, and some will be
         i386/x86-64 specific.  This makes it easier to share common stuff
         while allowing separate implementations where needed.
      
      2. At the moment we must export all of paravirt_ops, but modules only
         need selected parts of it.  This allows us to export on a case by case
         basis (and also choose which export license we want to apply).
      
      3. Functional groupings make things a bit more readable.
      
      Struct paravirt_ops is now only used as a template to generate
      patch-site identifiers, and to extract function pointers for inserting
      into jmp/calls when patching.  It is only instantiated when needed.
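      
      As a rough illustration only (struct and field names below are
      simplified stand-ins, not the exact kernel declarations), the split
      turns one big table of function pointers into several small ones
      that can be exported and patched independently:
      
      struct pv_irq_ops {                    /* interrupt-state management */
      	unsigned long (*save_fl)(void);
      	void (*restore_fl)(unsigned long flags);
      	void (*irq_disable)(void);
      	void (*irq_enable)(void);
      };
      
      struct pv_mmu_ops {                    /* pagetable management */
      	void (*flush_tlb_user)(void);
      	void (*set_pte)(unsigned long *ptep, unsigned long pteval);
      };
      
      /* Each group is a separate object, so it can be exported (and given
         an export license) on its own. */
      extern struct pv_irq_ops pv_irq_ops;
      extern struct pv_mmu_ops pv_mmu_ops;
      
      static inline void irq_disable_sketch(void)
      {
      	pv_irq_ops.irq_disable();      /* indirect call, patchable later */
      }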
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Zach Amsden <zach@vmware.com>
      Cc: Avi Kivity <avi@qumranet.com>
      Cc: Anthony Liguori <aliguori@us.ibm.com>
      Cc: "Glauber de Oliveira Costa" <glommer@gmail.com>
      Cc: Jun Nakajima <jun.nakajima@intel.com>
      93b1eab3
  9. 12 Oct, 2007 · 1 commit
  10. 11 Oct, 2007 · 3 commits
  11. 19 Jul, 2007 · 1 commit
  12. 18 Jul, 2007 · 2 commits
    • xen: use iret directly when possible · 9ec2b804
      Committed by Jeremy Fitzhardinge
      Most of the time we can simply use the iret instruction to exit the
      kernel, rather than having to use the iret hypercall - the only
      exception is if we're returning into vm86 mode, or from delivering an
      NMI (which we don't support yet).
      
      When running native, iret has the behaviour of testing for a pending
      interrupt atomically with re-enabling interrupts.  Unfortunately
      there's no way to do this with Xen, so there's a window in which we
      could get a recursive exception after enabling events but before
      actually returning to userspace.
      
      This causes a problem: if the nested interrupt causes one of the
      task's TIF_WORK_MASK flags to be set, they will not be checked again
      before returning to userspace.  This means that pending work may be
      left pending indefinitely, until the process enters and leaves the
      kernel again.  The net effect is that a pending signal or reschedule
      event could be delayed for an unbounded amount of time.
      
      To deal with this, the xen event upcall handler checks to see if the
      EIP is within the critical section of the iret code, after events
      are (potentially) enabled up to the iret itself.  If it's within this
      range, it calls the iret critical section fixup, which adjusts the
      stack to deal with any unrestored registers, and then shifts the
      stack frame up to replace the previous invocation.
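      
      In rough C terms (the real check and fixup are written in assembly;
      the symbol and helper names below are illustrative), the upcall
      handler's test amounts to:
      
      extern char xen_iret_start_crit[], xen_iret_end_crit[];
      
      struct regs_sketch {
      	unsigned long eip;
      	/* ... the rest of the saved register frame ... */
      };
      
      static void check_iret_critical_sketch(struct regs_sketch *regs)
      {
      	if (regs->eip >= (unsigned long)xen_iret_start_crit &&
      	    regs->eip <  (unsigned long)xen_iret_end_crit) {
      		/* We interrupted the window between unmasking events and
      		   the iret itself: fix up the stack so the nested frame
      		   replaces the interrupted one and pending TIF_WORK_MASK
      		   flags are rechecked on the way out. */
      		/* iret_crit_fixup(regs);  -- stands in for the asm fixup */
      	}
      }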
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      9ec2b804
    • xen: Core Xen implementation · 5ead97c8
      Committed by Jeremy Fitzhardinge
      This patch is a rollup of all the core pieces of the Xen
      implementation, including:
       - booting and setup
       - pagetable setup
       - privileged instructions
       - segmentation
       - interrupt flags
       - upcalls
       - multicall batching
      
      BOOTING AND SETUP
      
      The vmlinux image is decorated with ELF notes which tell the Xen
      domain builder what the kernel's requirements are; the domain builder
      then constructs the address space accordingly and starts the kernel.
      
      Xen has its own entrypoint for the kernel (contained in an ELF note).
      The ELF notes are set up by xen-head.S, which is included into head.S.
      In principle it could be linked separately, but it seems to provoke
      lots of binutils bugs.
      
      Because the domain builder starts the kernel in a fairly sane state
      (32-bit protected mode, paging enabled, flat segments set up), there's
      not a lot of setup needed before starting the kernel proper.  The main
      steps are:
        1. Install the Xen paravirt_ops, which is simply a matter of a
           structure assignment.
        2. Set init_mm to use the Xen-supplied pagetables (analogous to the
           head.S generated pagetables in a native boot).
        3. Reserve address space for Xen, since it takes a chunk at the top
           of the address space for its own use.
        4. Call start_kernel()
      
      PAGETABLE SETUP
      
      Once we hit the main kernel boot sequence, it will end up calling back
      via paravirt_ops to set up various pieces of Xen specific state.  One
      of the critical things which requires a bit of extra care is the
      construction of the initial init_mm pagetable.  Because Xen places
      tight constraints on pagetables (an active pagetable must always be
      valid, and must always be mapped read-only to the guest domain), we
      need to be careful when constructing the new pagetable to keep these
      constraints in mind.  It turns out that the easiest way to do this is to
      use the initial Xen-provided pagetable as a template, and then just
      insert new mappings for memory where a mapping doesn't already exist.
      
      This means that during pagetable setup, it uses a special version of
      xen_set_pte which ignores any attempt to remap a read-only page as
      read-write (since Xen will map its own initial pagetable as RO), but
      lets other changes to the ptes happen, so that things like NX are set
      properly.
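      
      A minimal sketch of that boot-time helper (bit masks, names and the
      direct store are illustrative; the real code works on pte_t values
      and issues the update through the normal pte-update path):
      
      #define PTE_PRESENT 0x001UL
      #define PTE_RW      0x002UL
      
      static void xen_set_pte_init_sketch(unsigned long *ptep, unsigned long pteval)
      {
      	unsigned long old = *ptep;  /* what the Xen template already maps */
      
      	/* If the template maps this page read-only (as it does for its
      	   own pagetable frames), keep it read-only; every other change,
      	   such as setting NX, goes through unmodified. */
      	if ((old & PTE_PRESENT) && !(old & PTE_RW))
      		pteval &= ~PTE_RW;
      
      	*ptep = pteval;
      }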
      
      PRIVILEGED INSTRUCTIONS AND SEGMENTATION
      
      When the kernel runs under Xen, it runs in ring 1 rather than ring 0.
      This means that it is more privileged than user-mode in ring 3, but it
      still can't run privileged instructions directly.  Non-performance
      critical instructions are dealt with by taking a privilege exception
      and trapping into the hypervisor and emulating the instruction, but
      more performance-critical instructions have their own specific
      paravirt_ops.  In many cases we can avoid having to do any hypercalls
      for these instructions, or the Xen implementation is quite different
      from the normal native version.
      
      The privileged instructions fall into the broad classes of:
        Segmentation: setting up the GDT and the GDT entries, LDT,
           TLS and so on.  Xen doesn't allow the GDT to be directly
           modified; all GDT updates are done via hypercalls where the new
           entries can be validated.  This is important because Xen uses
           segment limits to prevent the guest kernel from damaging the
           hypervisor itself.
        Traps and exceptions: Xen uses a special format for trap entrypoints,
           so when the kernel wants to set an IDT entry, it needs to be
           converted to the form Xen expects.  Xen sets int 0x80 up specially
           so that the trap goes straight from userspace into the guest kernel
           without going via the hypervisor.  sysenter isn't supported.
        Kernel stack: The esp0 entry is extracted from the tss and provided to
           Xen.
        TLB operations: the various TLB calls are mapped into corresponding
           Xen hypercalls.
        Control registers: all the control registers are privileged.  The most
           important is cr3, which points to the base of the current pagetable,
           and we handle it specially.
      
      Another instruction we treat specially is CPUID, even though it's not
      privileged.  We want to control what CPU features are visible to the
      rest of the kernel, and so CPUID ends up going into a paravirt_op.
      Xen implements this mainly to disable the ACPI and APIC subsystems.
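      
      A sketch of the idea (the leaf-1 EDX bit positions for APIC and ACPI
      are assumptions here, and the wrapper names are illustrative):
      
      #include <stdint.h>
      
      static void cpuid_raw(uint32_t *ax, uint32_t *bx, uint32_t *cx, uint32_t *dx)
      {
      	__asm__ volatile("cpuid"
      			 : "=a" (*ax), "=b" (*bx), "=c" (*cx), "=d" (*dx)
      			 : "0" (*ax), "2" (*cx));
      }
      
      static void xen_cpuid_sketch(uint32_t *ax, uint32_t *bx, uint32_t *cx,
      			     uint32_t *dx)
      {
      	uint32_t leaf = *ax;
      
      	cpuid_raw(ax, bx, cx, dx);
      
      	if (leaf == 1) {
      		*dx &= ~(1u << 9);   /* hide the local APIC from the kernel */
      		*dx &= ~(1u << 22);  /* hide ACPI support as well */
      	}
      }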
      
      INTERRUPT FLAGS
      
      Xen maintains its own separate flag for masking events, which is
      contained within the per-cpu vcpu_info structure.  Because the guest
      kernel runs in ring 1 and not 0, the IF flag in EFLAGS is completely
      ignored (and must be, because even if a guest domain disables
      interrupts for itself, it can't disable them overall).
      
      (A note on terminology: "events" and interrupts are effectively
      synonymous.  However, rather than using an "enable flag", Xen uses a
      "mask flag", which blocks event delivery when it is non-zero.)
      
      There are paravirt_ops for each of cli/sti/save_fl/restore_fl, which
      are implemented to manage the Xen event mask state.  The only thing
      worth noting is that when events are unmasked, we need to explicitly
      see if there's a pending event and call into the hypervisor to make
      sure it gets delivered.
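      
      A minimal sketch of those ops, assuming a per-vcpu structure with a
      pending flag and a mask flag (the layout and the "kick the
      hypervisor" call are simplified stand-ins):
      
      #include <stdint.h>
      
      struct vcpu_info_sketch {
      	uint8_t evtchn_upcall_pending;  /* an event is waiting */
      	uint8_t evtchn_upcall_mask;     /* non-zero blocks delivery */
      };
      
      static struct vcpu_info_sketch *this_vcpu;  /* per-cpu in reality */
      
      #define EFLAGS_IF 0x200UL
      
      static unsigned long xen_save_fl_sketch(void)
      {
      	/* Report the mask where native code expects the IF bit. */
      	return this_vcpu->evtchn_upcall_mask ? 0 : EFLAGS_IF;
      }
      
      static void xen_restore_fl_sketch(unsigned long flags)
      {
      	this_vcpu->evtchn_upcall_mask = (flags & EFLAGS_IF) ? 0 : 1;
      
      	/* Unmasking is not atomic with delivery, so anything that arrived
      	   while masked must be delivered by asking the hypervisor. */
      	if ((flags & EFLAGS_IF) && this_vcpu->evtchn_upcall_pending) {
      		/* force_evtchn_callback();  -- hypothetical hypercall wrapper */
      	}
      }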
      
      UPCALLS
      
      Xen needs a couple of upcall (or callback) functions to be implemented
      by each guest.  One is the event upcall, which is how events
      (interrupts, effectively) are delivered to the guests.  The other is
      the failsafe callback, which is used to report errors in either
      reloading a segment register, or caused by iret.  These are
      implemented in i386/kernel/entry.S so they can jump into the normal
      iret_exc path when necessary.
      
      MULTICALL BATCHING
      
      Xen provides a multicall mechanism, which allows multiple hypercalls
      to be issued at once in order to mitigate the cost of trapping into
      the hypervisor.  This is particularly useful for context switches,
      since the 4-5 hypercalls they would normally need (reload cr3, update
      TLS, maybe update LDT) can be reduced to one.  This patch implements a
      generic batching mechanism for hypercalls, which gets used in many
      places in the Xen code.
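      
      A sketch of such a batching interface (queue size, names and the
      flush path are illustrative):
      
      struct multicall_entry_sketch {
      	unsigned long op;       /* which hypercall to make */
      	unsigned long args[6];  /* its arguments */
      };
      
      #define MC_BATCH 32
      static struct multicall_entry_sketch mc_queue[MC_BATCH];
      static int mc_count;
      
      static void xen_mc_flush_sketch(void)
      {
      	if (mc_count) {
      		/* one trap into the hypervisor for the whole batch:   */
      		/* HYPERVISOR_multicall(mc_queue, mc_count);  -- sketch */
      		mc_count = 0;
      	}
      }
      
      static struct multicall_entry_sketch *xen_mc_entry_sketch(void)
      {
      	if (mc_count == MC_BATCH)
      		xen_mc_flush_sketch();  /* queue is full, issue what we have */
      	return &mc_queue[mc_count++];
      }
      
      A context switch, for example, queues its cr3 reload and TLS updates
      as separate entries and then flushes them in a single trap.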
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Cc: Ian Pratt <ian.pratt@xensource.com>
      Cc: Christian Limpach <Christian.Limpach@cl.cam.ac.uk>
      Cc: Adrian Bunk <bunk@stusta.de>
      5ead97c8
  13. 07 Jul, 2007 · 1 commit
    • i386: fix regression, endless loop in ptrace singlestep over an int80 · 1e2e99f0
      Committed by Jason Wessel
      The commit 635cf99a introduced a
      regression.  Executing a ptrace single step after certain int80
      accesses will infinitely loop and never advance the PC.
      
      The TIF_SINGLESTEP check should be done on the return from the syscall
      and not before it.
      
      It loops on each single step on the pop right after the int80 which writes out
      to the console.  At that point you can issue as many single steps as you want
      and it will not advance any further.
      
      The test case is below:
      
      /* Test whether singlestep through an int80 syscall works.
       */
      #define _GNU_SOURCE
      #include <stdio.h>
      #include <unistd.h>
      #include <fcntl.h>
      #include <sys/ptrace.h>
      #include <sys/wait.h>
      #include <sys/mman.h>
      #include <asm/user.h>
      #include <string.h>
      #include <signal.h>	/* for kill() */
      
      static int child, status;
      static struct user_regs_struct regs;
      
      static void do_child()
      {
      	char str[80] = "child: int80 test\n";
      
      	ptrace(PTRACE_TRACEME, 0, 0, 0);
      	kill(getpid(), SIGUSR1);
      	write(fileno(stdout),str,strlen(str));
      	asm ("int $0x80" : : "a" (20)); /* getpid */
      }
      
      static void do_parent()
      {
      	unsigned long eip, expected = 0;
      again:
      	waitpid(child, &status, 0);
      	if (WIFEXITED(status) || WIFSIGNALED(status))
      		return;
      
      	if (WIFSTOPPED(status)) {
      		ptrace(PTRACE_GETREGS, child, 0, &regs);
      		eip = regs.eip;
      		if (expected)
      			fprintf(stderr, "child stop @ %08lx, expected %08lx %s\n",
      					eip, expected,
      					eip == expected ? "" : " <== ERROR");
      
      		if (*(unsigned short *)eip == 0x80cd) {
      			fprintf(stderr, "int 0x80 at %08x\n", (unsigned int)eip);
      			expected = eip + 2;
      		} else
      			expected = 0;
      
      		ptrace(PTRACE_SINGLESTEP, child, NULL, NULL);
      	}
      	goto again;
      }
      
      int main(int argc, char * const argv[])
      {
      	child = fork();
      	if (child)
      		do_parent();
      	else
      		do_child();
      	return 0;
      }
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: <stable@kernel.org>
      Cc: Chuck Ebbert <76306.1226@compuserve.com>
      Acked-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1e2e99f0
  14. 03 May, 2007 · 8 commits
  15. 13 Feb, 2007 · 3 commits
  16. 27 Jan, 2007 · 1 commit
    • [PATCH] Fix CONFIG_COMPAT_VDSO · a1f3bb9a
      Committed by Roland McGrath
      I wouldn't mind if CONFIG_COMPAT_VDSO went away entirely.  But if it's there,
      it should work properly.  Currently it's quite haphazard: both real vma and
      fixmap are mapped, both are put in the two different AT_* slots, sysenter
      returns to the vma address rather than the fixmap address, and core dumps are
      yet another story.
      
      This patch makes CONFIG_COMPAT_VDSO disable the real vma and use the fixmap
      area consistently.  This makes it actually compatible with what the old vdso
      implementation did.
      Signed-off-by: Roland McGrath <roland@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a1f3bb9a
  17. 16 Dec, 2006 · 1 commit
    • Remove stack unwinder for now · d1526e2c
      Committed by Linus Torvalds
      It has caused more problems than it ever really solved, and is
      apparently not getting cleaned up and fixed.  We can put it back when
      it's stable and isn't likely to make warning or bug events worse.
      
      In the meantime, enable frame pointers for more readable stack traces.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d1526e2c
  18. 07 Dec, 2006 · 4 commits
    • [PATCH] paravirt: Patch inline replacements for paravirt intercepts · 139ec7c4
      Committed by Rusty Russell
      It turns out that the most called ops, by several orders of magnitude,
      are the interrupt manipulation ops.  These are obvious candidates for
      patching, so mark them up and create infrastructure for it.
      
      The method used is that the ops structure has a patch function, which
      is called for each place which needs to be patched: this returns a
      number of instructions (the rest are NOP-padded).
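      
      A sketch of that contract (names are illustrative): each call site
      reserves len bytes, the hook writes its replacement and reports how
      much it used, and the caller pads the rest with NOPs.
      
      #include <string.h>
      
      typedef unsigned (*paravirt_patch_fn)(unsigned type, void *insns,
      				      unsigned len);
      
      static unsigned apply_patch_sketch(unsigned type, void *site, unsigned len,
      				   paravirt_patch_fn patch)
      {
      	unsigned used = patch(type, site, len);
      
      	/* whatever the hook did not fill becomes 1-byte NOPs (0x90) */
      	memset((char *)site + used, 0x90, len - used);
      	return used;
      }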
      
      Usually we can spare a register (%eax) for the binary patched code to
      use, but in a couple of critical places in entry.S we can't: we make
      the clobbers explicit at the call site, and manually clobber the
      allowed registers in debug mode as an extra check.
      
      And:
      
      Don't abuse CONFIG_DEBUG_KERNEL, add CONFIG_DEBUG_PARAVIRT.
      
      And:
      
      AK:  Fix warnings in x86-64 alternative.c build
      
      And:
      
      AK: Fix compilation with defconfig
      
      And:
      
      From: Andrew Morton <akpm@osdl.org>
      
      Some binutils versions still like to emit references to __stop_parainstructions
      and __start_parainstructions.
      
      And:
      
      AK: Fix warnings about unused variables when PARAVIRT is disabled.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      139ec7c4
    • [PATCH] paravirt: header and stubs for paravirtualisation · d3561b7f
      Committed by Rusty Russell
      Create a paravirt.h header for all the critical operations which need to be
      replaced with hypervisor calls, and include that instead of defining native
      operations, when CONFIG_PARAVIRT.
      
      This patch does the dumbest possible replacement of paravirtualized
      instructions: calls through a "paravirt_ops" structure.  Currently these are
      function implementations of native hardware: hypervisors will override the ops
      structure with their own variants.
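      
      As a sketch of the scheme (names simplified, two ops only): the table
      starts out filled with native instruction wrappers, and a hypervisor
      overwrites the entries it cares about at boot.
      
      struct pv_ops_sketch {
      	void (*irq_disable)(void);
      	void (*irq_enable)(void);
      };
      
      static void native_irq_disable(void) { __asm__ volatile("cli"); }
      static void native_irq_enable(void)  { __asm__ volatile("sti"); }
      
      static struct pv_ops_sketch pv_ops_sketch = {
      	.irq_disable = native_irq_disable,  /* defaults: bare hardware */
      	.irq_enable  = native_irq_enable,
      };
      
      /* e.g. a hypervisor port does:
      	pv_ops_sketch.irq_disable = hv_irq_disable;   (illustrative) */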
      
      All the pv-ops functions are declared "fastcall" so that a specific
      register-based ABI is used, to make inlining assembler easier.
      
      And:
      
      From: Andy Whitcroft <apw@shadowen.org>
      
      The paravirt ops introduce a 'weak' attribute onto memory_setup().
      Code ordering leads to the following warnings on x86:
      
          arch/i386/kernel/setup.c:651: warning: weak declaration of
                      `memory_setup' after first use results in unspecified behavior
      
      Move memory_setup() to avoid this.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      d3561b7f
    • [PATCH] i386: Fix entry.S code with !CONFIG_VM86 · 74b47a78
      Committed by Joe Korty
      The entry.S code at work_notifysig is surely wrong.  It drops into unrelated
      code if the branch to work_notifysig_v86 is taken, and CONFIG_VM86=n.
      
      	[PATCH] Make vm86 support optional
      	tree 9b5daef5280800a0006343a17f63072658d91a1d
      	pushed to git Jan 8, 2006, and first appears in 2.6.16
      
      The 'fix' here is to also compile out the vm86 test & branch when
      CONFIG_VM86=n.
      Signed-off-by: Joe Korty <joe.korty@ccur.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      74b47a78
    • [PATCH] i386: Implement smp_processor_id() with the PDA · b2938f88
      Committed by Jeremy Fitzhardinge
      Use the cpu_number in the PDA to implement raw_smp_processor_id.  This is a
      little simpler than using thread_info, though the cpu field in thread_info
      cannot be removed since it is used for things other than getting the current
      CPU in common code.
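      
      A sketch of the resulting fast path, assuming a PDA reachable through
      a segment register with cpu_number at a fixed offset (the register
      choice and layout here are illustrative):
      
      struct i386_pda_sketch {
      	struct i386_pda_sketch *_self;  /* pointer back to this PDA */
      	int cpu_number;                 /* which CPU owns this PDA */
      };
      
      static inline int raw_smp_processor_id_sketch(void)
      {
      	int cpu;
      
      	/* one segment-relative load, no need to derive thread_info
      	   from the stack pointer first */
      	__asm__("movl %%gs:%c1, %0"
      		: "=r" (cpu)
      		: "i" (__builtin_offsetof(struct i386_pda_sketch, cpu_number)));
      	return cpu;
      }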
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Chuck Ebbert <76306.1226@compuserve.com>
      Cc: Zachary Amsden <zach@vmware.com>
      Cc: Jan Beulich <jbeulich@novell.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      b2938f88