1. 05 Mar 2009: 9 commits
  2. 04 Mar 2009: 2 commits
  3. 03 Mar 2009: 14 commits
  4. 02 Mar 2009: 15 commits
    • x86: add forward decl for tss_struct · 2fb6b2a0
      Committed by Jeremy Fitzhardinge
      It's the correct thing to do before using the struct in a prototype.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2fb6b2a0
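The rule referenced here is plain C: a forward declaration is enough for a struct to be named in a pointer-taking prototype. A minimal userspace sketch (the sp0 field is illustrative, not the kernel's actual layout):

```c
#include <assert.h>

/* Forward declaration: enough for the struct to appear in a
 * pointer-taking prototype before its full definition. */
struct tss_struct;

/* The prototype can name struct tss_struct * here... */
unsigned long tss_sp0(const struct tss_struct *tss);

/* ...and the full definition follows later. */
struct tss_struct {
    unsigned long sp0;
};

unsigned long tss_sp0(const struct tss_struct *tss)
{
    return tss->sp0;
}
```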
    • x86: unify chunks of kernel/process*.c · 389d1fb1
      Committed by Jeremy Fitzhardinge
      With x86-32 and -64 using the same mechanism for managing the
      tss io permissions bitmap, large chunks of process*.c are
      trivially unifyable, including:
      
       - exit_thread
       - flush_thread
       - __switch_to_xtra (along with tsc enable/disable)
      
      and as bonus pickups:
      
       - sys_fork
       - sys_vfork
      
      (Note: asmlinkage expands to empty on x86-64)
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      389d1fb1
    • x86-32: use non-lazy io bitmap context switching · db949bba
      Committed by Jeremy Fitzhardinge
      Impact: remove 32-bit optimization to prepare unification
      
      x86-32 and -64 differ in the way they context-switch tasks
      with io permission bitmaps.  x86-64 simply copies the next
      task's io bitmap into place (if any) on context switch.  x86-32
      invalidates the bitmap on context switch, so that the next
      IO instruction will fault; at that point it installs the
      appropriate IO bitmap.
      
      This makes context switching IO-bitmap-using tasks a bit
      less expensive, at the cost of making the next IO instruction
      slower due to the extra fault.  This tradeoff only makes sense
      if IO-bitmap-using processes are relatively common, but they
      don't actually use IO instructions very often.
      
      However, in a typical desktop system, the only process likely
      to be using IO bitmaps is the X server, and nothing at all on
      a server.  Therefore the lazy context switch doesn't really win
      all that much, and it's just a gratuitous difference from
      64-bit code.
      
      This patch removes the lazy context switch, with a view to
      unifying this code in a later change.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      db949bba
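The non-lazy (x86-64 style) scheme described above can be sketched in userspace; the structures and sizes below are toy stand-ins for the real TSS code:

```c
#include <assert.h>
#include <string.h>

#define IO_BITMAP_BYTES 32  /* toy size; the real bitmap covers 65536 ports */

struct task {
    unsigned char io_bitmap[IO_BITMAP_BYTES];
    int has_io_bitmap;
};

struct cpu_tss {
    unsigned char io_bitmap[IO_BITMAP_BYTES];
};

/* Eager (x86-64 style) switch: copy the next task's bitmap into the TSS
 * immediately, or deny all ports when the task has no bitmap. A set bit
 * means "access faults", a clear bit means "access allowed". */
void switch_io_bitmap(struct cpu_tss *tss, const struct task *next)
{
    if (next->has_io_bitmap)
        memcpy(tss->io_bitmap, next->io_bitmap, IO_BITMAP_BYTES);
    else
        memset(tss->io_bitmap, 0xff, IO_BITMAP_BYTES); /* deny everything */
}

int port_allowed(const struct cpu_tss *tss, unsigned port)
{
    return !(tss->io_bitmap[port / 8] & (1u << (port % 8)));
}
```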
    • x86_32: apic/numaq_32, fix section mismatch · b6122b38
      Committed by Jiri Slaby
      Remove the __cpuinitdata section placement of the translation_table
      structure, since it is referenced from functions within .text.
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@zytor.com>
      b6122b38
    • x86_32: apic/summit_32, fix section mismatch · 2fcb1f1f
      Committed by Jiri Slaby
      Remove __init section placement for some functions/data, so that
      we don't get section mismatch warnings.
      
      Also make setup_summit an inline function instead of an empty macro.
      
      [v2]
      One of them was not caught by the DEBUG_SECTION_MISMATCH=y magic.
      Fix it.
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@zytor.com>
      2fcb1f1f
    • x86_32: apic/es7000_32, fix section mismatch · 871d78c6
      Committed by Jiri Slaby
      Remove __init section placement for some functions, so that we don't
      get section mismatch warnings.
      
      [v2]:
      Two of them were not caught by the DEBUG_SECTION_MISMATCH=y magic.
      Fix them.
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@zytor.com>
      871d78c6
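The pattern behind the three section-mismatch fixes above can be reproduced with GCC's section attribute. The section name below is illustrative, and the example only models the data-placement side: in the kernel, modpost warns because a .text function keeps referencing data whose init section is discarded after boot.

```c
#include <assert.h>

/* Data placed in a custom ELF section, analogous to the kernel's
 * __cpuinitdata / __initdata annotations (section name illustrative). */
static int translation_table[4] __attribute__((section("mydata.init"))) =
    { 1, 2, 3, 4 };

/* A plain .text function referencing that data: in the kernel this is
 * exactly the reference pattern that triggers a section mismatch warning,
 * since the init section may be freed while .text code still points at it. */
int lookup(int i)
{
    return translation_table[i];
}
```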
    • x86_32: apic/summit_32, fix cpu_mask_to_apicid · fae176d6
      Committed by Jiri Slaby
      Perform same-cluster checking even for masks with all (nr_cpu_ids)
      bits set and report the correct apicid on success instead.
      
      While at it, convert it to for_each_cpu and newer cpumask api.
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fae176d6
    • x86_32: apic/es7000_32, fix cpu_mask_to_apicid · 0edc0b32
      Committed by Jiri Slaby
      Perform same-cluster checking even for masks with all (nr_cpu_ids)
      bits set and report BAD_APICID on failure.
      
      While at it, convert it to for_each_cpu.
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0edc0b32
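The same-cluster rule that these two fixes enforce can be sketched as a plain C function. The cpu_to_apicid[] table and the cluster extraction below are hypothetical stand-ins for the subarch lookup tables, not the actual summit/es7000 code:

```c
#include <assert.h>

#define BAD_APICID 0xffu
#define APIC_CLUSTER(apicid) ((apicid) >> 4)  /* illustrative split */

/* Walk every cpu set in the mask; fail with BAD_APICID as soon as two
 * cpus fall in different clusters, otherwise merge the cluster-local
 * bits into one apicid. */
unsigned cpu_mask_to_apicid(unsigned long mask,
                            const unsigned char *cpu_to_apicid,
                            int nr_cpu_ids)
{
    unsigned apicid = BAD_APICID;
    for (int cpu = 0; cpu < nr_cpu_ids; cpu++) {
        if (!(mask & (1ul << cpu)))
            continue;
        unsigned a = cpu_to_apicid[cpu];
        if (apicid == BAD_APICID)
            apicid = a;                        /* first cpu picks the cluster */
        else if (APIC_CLUSTER(a) != APIC_CLUSTER(apicid))
            return BAD_APICID;                 /* mixed clusters: fail */
        else
            apicid |= a & 0xf;                 /* merge cluster-local bits */
    }
    return apicid;
}
```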
    • x86_32: apic/es7000_32, cpu_mask_to_apicid cleanup · c2b20cbd
      Committed by Jiri Slaby
      Remove es7000_cpu_mask_to_apicid_cluster completely, because it's
      almost the same as es7000_cpu_mask_to_apicid except for two code paths.
      One of them is about to be removed soon, and the other should be
      BAD_APICID (it's a failure path).
      
      The _cluster one was not invoked on apic->cpu_mask_to_apicid_and
      anyway, since there was no _cluster_and variant.
      
      Also use newer cpumask functions.
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c2b20cbd
    • x86_32: apic/bigsmp_32, de-inline functions · 9694cd6c
      Committed by Jiri Slaby
      The ones which go only into struct apic are de-inlined
      by the compiler anyway, so remove the inline specifier from them.
      
      Afterwards, remove bigsmp_setup_portio_remap completely as it
      is unused.
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9694cd6c
    • x86, mm: don't use non-temporal stores in pagecache accesses · f1800536
      Committed by Ingo Molnar
      Impact: standardize IO on cached ops
      
      On modern CPUs it is almost always a bad idea to use non-temporal stores,
      as the regression addressed in this commit has shown:
      
        30d697fa: x86: fix performance regression in write() syscall
      
      The kernel simply has no good information about whether using non-temporal
      stores is a good idea or not - and trying to add heuristics only increases
      complexity and inserts fragility.
      
      The regression on cached write()s took very long to be found - over two
      years. So don't take any chances and let the hardware decide how it makes
      use of its caches.
      
      The only exception is drivers/gpu/drm/i915/i915_gem.c: there we are
      absolutely sure that another entity (the GPU) will pick up the dirty
      data immediately and that the CPU will not touch that data before the
      GPU will.
      
      Also, keep the _nocache() primitives to make it easier for people to
      experiment with these details. There may be more clear-cut cases where
      non-cached copies can be used, outside of filemap.c.
      
      Cc: Salman Qazi <sqazi@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f1800536
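The cached vs. non-temporal distinction the commit discusses looks like this with SSE2 intrinsics. This is x86-only and the buffer handling is illustrative, not the kernel's copy routines; the point is that the streaming store bypasses the cache while a plain store lets the hardware manage it:

```c
#include <assert.h>
#include <emmintrin.h>  /* SSE2: _mm_stream_si128, _mm_sfence */
#include <stddef.h>
#include <stdint.h>

/* Ordinary stores: data goes through the cache, hardware decides
 * what to keep. This is the behaviour the commit standardizes on. */
void copy_cached(uint64_t *dst, const uint64_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}

/* Non-temporal copy of n16 16-byte chunks: streaming stores bypass the
 * cache (dst must be 16-byte aligned), then a store fence orders them.
 * This is the _nocache() flavour kept only for special cases like i915. */
void copy_nocache(void *dst, const void *src, size_t n16)
{
    __m128i *d = (__m128i *)dst;
    const __m128i *s = (const __m128i *)src;
    for (size_t i = 0; i < n16; i++)
        _mm_stream_si128(&d[i], _mm_loadu_si128(&s[i]));
    _mm_sfence();
}
```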
    • x86 mmiotrace: fix race with release_kmmio_fault_page() · 340430c5
      Committed by Pekka Paalanen
      There was a theoretical possibility of a race between arming a page in
      post_kmmio_handler() and disarming the page in
      release_kmmio_fault_page():
      
      cpu0                             cpu1
      ------------------------------------------------------------------
      mmiotrace shutdown
      enter release_kmmio_fault_page
                                       fault on the page
                                       disarm the page
      disarm the page
                                       handle the MMIO access
                                       re-arm the page
      put the page on release list
      remove_kmmio_fault_pages()
                                       fault on the page
                                       page not known to mmiotrace
                                       fall back to do_page_fault()
                                       *KABOOM*
      
      (This scenario also shows the double disarm case which is allowed.)
      
      Fixed by acquiring kmmio_lock in post_kmmio_handler() and checking
      if the page is being released from mmiotrace.
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Stuart Bennett <stuart@freedesktop.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      340430c5
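The fix can be modeled as a check under the lock before re-arming: if the page is already being torn down, the handler must not arm it again. All names and fields below are illustrative stand-ins, not the actual kmmio code:

```c
#include <assert.h>
#include <pthread.h>

/* Minimal model of the fix: post_kmmio_handler() takes kmmio_lock and
 * skips the re-arm when the page is already scheduled for release, so
 * the teardown path cannot race with a late re-arm. */
struct kmmio_fault_page {
    int armed;
    int scheduled_for_release;
};

static pthread_mutex_t kmmio_lock = PTHREAD_MUTEX_INITIALIZER;

void post_kmmio_handler(struct kmmio_fault_page *p)
{
    pthread_mutex_lock(&kmmio_lock);
    if (!p->scheduled_for_release)
        p->armed = 1;   /* safe: nobody is releasing this page */
    pthread_mutex_unlock(&kmmio_lock);
}
```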
    • x86 mmiotrace: improve handling of secondary faults · 3e39aa15
      Committed by Stuart Bennett
      Upgrade some kmmio.c debug messages to warnings.
      Allow secondary faults on probed pages to fall through, and only log
      secondary faults that are not due to non-present pages.
      
      Patch edited by Pekka Paalanen.
      Signed-off-by: Stuart Bennett <stuart@freedesktop.org>
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3e39aa15
    • x86 mmiotrace: split set_page_presence() · 0b700a6a
      Committed by Pekka Paalanen
      From 36772dcb6ffbbb68254cbfc379a103acd2fbfefc Mon Sep 17 00:00:00 2001
      From: Pekka Paalanen <pq@iki.fi>
      Date: Sat, 28 Feb 2009 21:34:59 +0200
      
      Split set_page_presence() in kmmio.c into two new functions, set_pmd_presence()
      and set_pte_presence(). Purely code reorganization; no functional changes.
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Stuart Bennett <stuart@freedesktop.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0b700a6a
    • x86 mmiotrace: fix save/restore page table state · 5359b585
      Committed by Pekka Paalanen
      From baa99e2b32449ec7bf147c234adfa444caecac8a Mon Sep 17 00:00:00 2001
      From: Pekka Paalanen <pq@iki.fi>
      Date: Sun, 22 Feb 2009 20:02:43 +0200
      
      Blindly setting _PAGE_PRESENT in disarm_kmmio_fault_page() overlooks the
      possibility that the page was not present when it was armed.
      
      Make arm_kmmio_fault_page() store the previous page presence in struct
      kmmio_fault_page and use it on disarm.
      
      This patch was originally written by Stuart Bennett, but Pekka Paalanen
      rewrote it a little differently.
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Stuart Bennett <stuart@freedesktop.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5359b585
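The save/restore idea reduces to remembering the presence state at arm time and restoring exactly that on disarm. A minimal sketch, with illustrative stand-ins for the pte handling:

```c
#include <assert.h>

/* Sketch of the fix: record whether the page was present when armed,
 * and restore that recorded state on disarm instead of blindly setting
 * _PAGE_PRESENT. Fields and helpers are illustrative, not kernel code. */
struct kmmio_fault_page {
    int old_presence;   /* presence recorded at arm time */
};

void arm_page(struct kmmio_fault_page *f, int *present)
{
    f->old_presence = *present;  /* save the current state */
    *present = 0;                /* clear presence so accesses fault */
}

void disarm_page(struct kmmio_fault_page *f, int *present)
{
    *present = f->old_presence;  /* restore, don't force present */
}
```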