1. 19 May 2007 (1 commit)
  2. 11 May 2007 (4 commits)
    • uml: IRQ stacks · c14b8494
      Committed by Jeff Dike
      Add a separate IRQ stack.  This differs from i386 in having the entire
      interrupt run on a separate stack rather than starting on the normal kernel
      stack and switching over once some preparation has been done.  The underlying
mechanism is, of course, sigaltstack.
      
      Another difference is that interrupts that happen in userspace are handled on
      the normal kernel stack.  These cause a wait wakeup instead of a signal
      delivery so there is no point in trying to switch stacks for these.  There's
      no other stuff on the stack, so there is no extra stack consumption.
      
      This quirk makes it possible to have the entire interrupt run on a separate
      stack - process preemption (and calls to schedule()) happens on a normal
      kernel stack.  If we enable CONFIG_PREEMPT, this will need to be rethought.
      
      The IRQ stack for CPU 0 is declared in the same way as the initial kernel
      stack.  IRQ stacks for other CPUs will be allocated dynamically.
      
      An extra field was added to the thread_info structure.  When the active
      thread_info is copied to the IRQ stack, the real_thread field points back to
      the original stack.  This makes it easy to tell where to copy the thread_info
      struct back to when the interrupt is finished.  It also serves as a marker of
      a nested interrupt.  It is NULL for the first interrupt on the stack, and
      non-NULL for any nested interrupts.
      
      Care is taken to behave correctly if a second interrupt comes in when the
      thread_info structure is being set up or taken down.  I could just disable
      interrupts here, but I don't feel like giving up any of the performance gained
      by not flipping signals on and off.
      
      If an interrupt comes in during these critical periods, the handler can't run
      because it has no idea what shape the stack is in.  So, it sets a bit for its
      signal in a global mask and returns.  The outer handler will deal with this
      signal itself.
      
Atomicity is achieved with xchg.  A nested interrupt that needs to bail out will
      xchg its signal mask into pending_mask and repeat in case yet another
      interrupt hit at the same time, until the mask stabilizes.
      
      The outermost interrupt will set up the thread_info and xchg a zero into
      pending_mask when it is done.  At this point, nested interrupts will look at
      ->real_thread and see that no setup needs to be done.  They can just continue
      normally.
      
      Similar care needs to be taken when exiting the outer handler.  If another
      interrupt comes in while it is copying the thread_info, it will drop a bit
      into pending_mask.  The outer handler will check this and if it is non-zero,
      will loop, set up the stack again, and handle the interrupt.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
      Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c14b8494
    • uml: tidy IRQ code · 2ea5bc5e
      Committed by Jeff Dike
Some tidying of the IRQ code before introducing IRQ stacks.  Mostly
      style fixes, but the timer handler now calls the timer code directly
      rather than going through the generic sig_handler_common_skas.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
      Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2ea5bc5e
    • uml: use UM_THREAD_SIZE in userspace code · e1a79c40
      Committed by Jeff Dike
      Now that we have UM_THREAD_SIZE, we can replace the calculations in
      user-space code (an earlier patch took care of the kernel side of the
      house).
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
      Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e1a79c40
    • uml: remove task_protections · 57598fd7
      Committed by Jeff Dike
      Replaced task_protections with stack_protections since they do the same
      thing, and task_protections was misnamed anyway.
      
This needs THREAD_SIZE, so that is now imported via common-offsets.h.
      
      Also tidied up the code in the vicinity.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
      Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      57598fd7
3. 10 May 2007 (1 commit)
  4. 09 May 2007 (1 commit)
  5. 08 May 2007 (29 commits)
  6. 29 Mar 2007 (1 commit)
    • [PATCH] uml: fix LVM crash · af84eab2
      Committed by Jason Lunz
      Permit lvm to create logical volumes without crashing UML.
      
When device-mapper's DM_DEV_CREATE_CMD ioctl is called to create a new device,
      it ends up calling dev_create()->dm_create()->alloc_dev()->
      blk_queue_bounce_limit(md->queue, BLK_BOUNCE_ANY).
      
      blk_queue_bounce_limit(BLK_BOUNCE_ANY) calls init_emergency_isa_pool() if
      blk_max_pfn < blk_max_low_pfn.  This is the case on UML, but
      init_emergency_isa_pool() hits BUG_ON(!isa_page_pool) because there doesn't
      seem to be a dma zone on UML for mempool_create() to allocate from.
      
      Most architectures seem to have max_low_pfn == max_pfn, but UML doesn't
      because of the uml_reserved chunk it keeps for itself.  From what I can see,
      max_pfn and max_low_pfn don't get much use after the bootmem-allocator stops
      being used anyway, except that they initialize the block layer's
      blk_max_low_pfn/blk_max_pfn.
      
This patch prevents init_emergency_isa_pool() from crashing UML in this
      situation by setting max_low_pfn = max_pfn in mem_init().
Signed-off-by: Jason Lunz <lunz@falooley.org>
      Signed-off-by: Jeff Dike <jdike@linux.intel.com>
      Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Cc: Alasdair G Kergon <agk@redhat.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      af84eab2
7. 08 Mar 2007 (1 commit)
  8. 07 Mar 2007 (1 commit)
  9. 13 Feb 2007 (1 commit)