1. 12 Aug 2008 (6 commits)
  2. 11 Aug 2008 (1 commit)
  3. 09 Aug 2008 (5 commits)
  4. 01 Aug 2008 (1 commit)
    • x86: fdiv bug detection fix · e0d22d03
      Committed by Krzysztof Helt
      The fdiv detection code writes an s32 integer into
      boot_cpu_data.fdiv_bug.
      However, boot_cpu_data.fdiv_bug is only a char (s8)
      field, so the detection overwrites already-set fields for
      other bugs, e.g. the f00f bug field.
      
      Use a local s32 variable to receive the result.
      
      This is a partial fix for Bugzilla #9928 - it fixes the wrong
      information about the f00f bug (tested) and probably
      about the coma bug (I have no CPU to test this).
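      
      The shape of the fix, as a minimal sketch (the real code lives in
      check_fpu() in arch/x86/kernel/cpu/bugs.c and fills the value from
      inline FPU assembly rather than a plain assignment):
      
        static void __init check_fpu(void)
        {
                s32 fdiv_bug;   /* wide enough for the asm's 32-bit store */

                /* ... inline FPU assembly stores its integer result into
                 * fdiv_bug; before the fix the output operand targeted the
                 * s8 field directly and clobbered neighbouring bug flags ... */

                boot_cpu_data.fdiv_bug = fdiv_bug;  /* only the s8 field is written */

                if (boot_cpu_data.fdiv_bug)
                        printk(KERN_WARNING "Hmm, FPU with FDIV bug.\n");
        }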
      Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e0d22d03
  5. 31 Jul 2008 (2 commits)
  6. 29 Jul 2008 (10 commits)
    • generic, x86: fix add iommu_num_pages helper function · 8978b742
      Committed by FUJITA Tomonori
      This IOMMU helper function doesn't work for some architectures:
      
        http://marc.info/?l=linux-kernel&m=121699304403202&w=2
      
      It also breaks POWER and SPARC builds:
      
        http://marc.info/?l=linux-kernel&m=121730388001890&w=2
      
      Currently, only x86 IOMMUs use this, so let's move it to x86 for
      now.
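      
      For context, the helper in question just counts how many IOMMU pages a
      DMA buffer spans; a minimal sketch of that calculation (illustrative
      names, not the exact function being moved):
      
        #include <linux/kernel.h>       /* ALIGN() */

        /* Number of IOMMU pages needed to map 'len' bytes starting at 'addr':
         * the buffer may begin and end mid-page, so round the start down and
         * the end up before dividing (io_page_size assumed a power of two). */
        static unsigned long example_iommu_num_pages(unsigned long addr,
                                                     unsigned long len,
                                                     unsigned long io_page_size)
        {
                unsigned long size = ALIGN(addr + len, io_page_size) -
                                     (addr & ~(io_page_size - 1));

                return size / io_page_size;
        }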
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8978b742
    • KVM: Advertise synchronized mmu support to userspace · ed848624
      Committed by Avi Kivity
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      ed848624
    • KVM: Synchronize guest physical memory map to host virtual memory map · e930bffe
      Committed by Andrea Arcangeli
      Synchronize changes to host virtual addresses which are part of
      a KVM memory slot to the KVM shadow mmu.  This allows pte operations
      like swapping, page migration, and madvise() to transparently work
      with KVM.
      Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      e930bffe
    • KVM: Allow browsing memslots with mmu_lock · 604b38ac
      Committed by Andrea Arcangeli
      This allows memslots to be read with only the mmu_lock held, for mmu
      notifiers that run in atomic context with the mmu_lock held.
      Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      604b38ac
    • KVM: Allow reading aliases with mmu_lock · a1708ce8
      Committed by Andrea Arcangeli
      This allows the mmu notifier code to run unalias_gfn with only the
      mmu_lock held.  Only alias writes need the mmu_lock held. Readers will
      either take the slots_lock in read mode or the mmu_lock.
      Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      a1708ce8
    • Fix 'get_user_pages_fast()' with non-page-aligned start address · 9b79022c
      Committed by Linus Torvalds
      Alexey Dobriyan reported trouble with LTP with the new fast-gup code,
      and Johannes Weiner debugged it to non-page-aligned addresses, where the
      new get_user_pages_fast() code would do all the wrong things, including
      just traversing past the end of the requested area due to 'addr' never
      matching 'end' exactly.
      
      This is not a pretty fix, and we may actually want to move the alignment
      into generic code, leaving just the core code per-arch, but Alexey
      verified that the vmsplice01 LTP test doesn't crash with this.
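      
      The gist of the fix, as a sketch rather than the literal diff: page-align
      the start address before deriving the walk boundaries, so that stepping
      one page at a time actually terminates at 'end':
      
        int get_user_pages_fast(unsigned long start, int nr_pages, int write,
                                struct page **pages)
        {
                unsigned long addr, len, end;

                start &= PAGE_MASK;     /* the essential change: align first */
                addr = start;
                len = (unsigned long)nr_pages << PAGE_SHIFT;
                end = start + len;

                /* ... lockless page-table walk from addr to end ... */
                return 0;
        }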
      Reported-and-tested-by: Alexey Dobriyan <adobriyan@gmail.com>
      Debugged-by: Johannes Weiner <hannes@saeurebad.de>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9b79022c
    • lguest: set max_pfn_mapped, growl loudly at Yinghai Lu · 5d006d8d
      Committed by Rusty Russell
      6af61a76 'x86: clean up max_pfn_mapped
      usage - 32-bit' makes the following comment:
      
          XEN PV and lguest may need to assign max_pfn_mapped too.
      
      But no CC.  Yinghai, wasting fellow developers' time is a VERY bad
      habit.  If you do it again, I will hunt you down and try to extract
      the three hours of my life I just lost :)
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
      5d006d8d
    • mmu-notifiers: core · cddb8a5c
      Committed by Andrea Arcangeli
      With KVM/GRU/XPMEM there isn't just the primary CPU MMU pointing to pages.
      There are secondary MMUs (with secondary sptes and secondary tlbs) too.
      sptes in the kvm case are shadow pagetables, but when I say spte in
      mmu-notifier context, I mean "secondary pte".  In the GRU case there's no
      actual secondary pte and there's only a secondary tlb, because the GRU
      secondary MMU has no knowledge of sptes and every secondary tlb miss
      event in the MMU always generates a page fault that has to be resolved by
      the CPU (this is not the case for KVM, where a secondary tlb miss will
      walk sptes in hardware and refill the secondary tlb transparently
      to software if the corresponding spte is present).  In the same way that
      zap_page_range has to invalidate the pte before freeing the page, the spte
      (and secondary tlb) must also be invalidated before any page is freed and
      reused.
      
      Currently we take a page_count pin on every page mapped by sptes, but that
      means the pages can't be swapped while they're mapped by any spte,
      because they're part of the guest working set.  Furthermore, an spte unmap
      event can immediately lead to a page being freed when the pin is released
      (requiring the same complex and relatively slow SMP-safe tlb_gather
      logic we have in zap_page_range, which can be avoided completely if the
      spte unmap event doesn't require an unpin of the page previously mapped in
      the secondary MMU).
      
      The mmu notifiers allow kvm/GRU/XPMEM to attach to the tsk->mm and know
      when the VM is swapping or freeing or doing anything on the primary MMU, so
      that the secondary MMU code can drop sptes before the pages are freed,
      avoiding all page pinning and allowing 100% reliable swapping of guest
      physical address space.  Furthermore, it spares the code that tears down the
      secondary MMU mappings from implementing tlb_gather-like logic as in
      zap_page_range, which would require many IPIs to flush other CPUs' tlbs
      for each fixed number of sptes unmapped.
      
      For example: if what happens on the primary MMU is a protection
      downgrade (from writeable to wrprotect), the secondary MMU mappings will be
      invalidated, and the next secondary-mmu page fault will call
      get_user_pages and trigger a do_wp_page through get_user_pages if it
      called get_user_pages with write=1, re-establishing an updated
      spte or secondary-tlb mapping on the copied page.  Or it will set up a
      readonly spte or readonly tlb mapping if it's a guest read, i.e. if it calls
      get_user_pages with write=0.  This is just an example.
      
      This allows any page pointed to by any pte (and in turn visible in the
      primary CPU MMU) to be mapped into a secondary MMU (be it a pure tlb like
      GRU, or a full MMU with both sptes and a secondary tlb like the
      shadow-pagetable layer in kvm), or into a software remote DMA engine like
      XPMEM (hence the need to schedule in the XPMEM code to send the invalidate
      to the remote node, while there is no need to schedule in kvm/gru as it's
      an immediate event like invalidating a primary-mmu pte).
      
      At least for KVM, without this patch it's impossible to swap guests
      reliably.  And having this feature and removing the page pin allows
      several other optimizations that simplify life considerably.
      
      Dependencies:
      
      1) mm_take_all_locks() to register the mmu notifier when the whole VM
         isn't doing anything with "mm".  This allows mmu notifier users to keep
         track of whether the VM is in the middle of the invalidate_range_begin/end
         critical section with an atomic counter increased in range_begin and
         decreased in range_end.  No secondary MMU page fault is allowed to map
         any spte or secondary tlb reference while the VM is in the middle of
         range_begin/end, as any page returned by get_user_pages in that critical
         section could later be freed immediately without any further
         ->invalidate_page notification (invalidate_range_begin/end works on
         ranges and ->invalidate_page isn't called immediately before freeing
         the page).  To stop all page freeing and pagetable overwrites, the
         mmap_sem must be taken in write mode and all other anon_vma/i_mmap
         locks must be taken too.
      
      2) It'd be a waste to add branches in the VM if nobody could possibly
         run KVM/GRU/XPMEM on the kernel, so mmu notifiers will only be enabled
         if CONFIG_KVM=m/y.  In the current kernel kvm won't yet take advantage of
         mmu notifiers, but this already allows a KVM external module to be
         compiled against a kernel with mmu notifiers enabled, and from the next
         pull from kvm.git we'll start using them.  GRU/XPMEM will also be able to
         continue development by enabling KVM=m in their config, until they
         submit all GRU/XPMEM GPLv2 code to the mainline kernel.  Then they can
         also enable MMU_NOTIFIERS in the same way KVM does it (even if KVM=n).
         This guarantees nobody selects MMU_NOTIFIER=y if KVM, GRU and XPMEM
         are all =n.
      
      The mmu_notifier_register call can fail because mm_take_all_locks may be
      interrupted by a signal and return -EINTR.  Because mmu_notifier_register
      is used at driver startup, a failure can be gracefully handled.  Here is
      an example of the change applied to kvm to register the mmu notifiers.
      Usually when a driver starts up, other allocations are required anyway and
      -ENOMEM failure paths exist already.
      
       struct  kvm *kvm_arch_create_vm(void)
       {
              struct kvm *kvm = kzalloc(sizeof(struct kvm), GFP_KERNEL);
      +       int err;
      
              if (!kvm)
                      return ERR_PTR(-ENOMEM);
      
              INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
      
      +       kvm->arch.mmu_notifier.ops = &kvm_mmu_notifier_ops;
      +       err = mmu_notifier_register(&kvm->arch.mmu_notifier, current->mm);
      +       if (err) {
      +               kfree(kvm);
      +               return ERR_PTR(err);
      +       }
      +
              return kvm;
       }
      
      mmu_notifier_unregister returns void and it's reliable.
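      
      For orientation, this is roughly what a secondary-MMU driver hooks up
      (a sketch against the mmu-notifier API introduced here; the example_*
      handlers are stand-ins, not code from the patch):
      
        #include <linux/mmu_notifier.h>
        #include <linux/mm.h>

        /* A real driver would drop/flush its sptes and secondary-tlb entries
         * in these handlers before the primary MMU frees or remaps the pages. */
        static void example_invalidate_page(struct mmu_notifier *mn,
                                            struct mm_struct *mm,
                                            unsigned long address)
        {
                /* tear down the spte/tlb entry covering 'address' */
        }

        static void example_invalidate_range_start(struct mmu_notifier *mn,
                                                   struct mm_struct *mm,
                                                   unsigned long start,
                                                   unsigned long end)
        {
                /* bump the "invalidate in flight" counter, drop sptes in [start, end) */
        }

        static void example_invalidate_range_end(struct mmu_notifier *mn,
                                                 struct mm_struct *mm,
                                                 unsigned long start,
                                                 unsigned long end)
        {
                /* decrease the counter; secondary faults may map pages again */
        }

        static const struct mmu_notifier_ops example_mmu_notifier_ops = {
                .invalidate_page        = example_invalidate_page,
                .invalidate_range_start = example_invalidate_range_start,
                .invalidate_range_end   = example_invalidate_range_end,
        };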
      
      The patch also adds a few needed but missing includes that would have
      prevented the kernel from compiling after these changes on non-x86 archs
      (x86 didn't need them, by luck).
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix mm/filemap_xip.c build]
      [akpm@linux-foundation.org: fix mm/mmu_notifier.c build]
      Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Jack Steiner <steiner@sgi.com>
      Cc: Robin Holt <holt@sgi.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Kanoj Sarcar <kanojsarcar@yahoo.com>
      Cc: Roland Dreier <rdreier@cisco.com>
      Cc: Steve Wise <swise@opengridcomputing.com>
      Cc: Avi Kivity <avi@qumranet.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Anthony Liguori <aliguori@us.ibm.com>
      Cc: Chris Wright <chrisw@redhat.com>
      Cc: Marcelo Tosatti <marcelo@kvack.org>
      Cc: Eric Dumazet <dada1@cosmosbay.com>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Cc: Izik Eidus <izike@qumranet.com>
      Cc: Anthony Liguori <aliguori@us.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cddb8a5c
    • x86/PCI: use dev_printk when possible · 12c0b20f
      Committed by Bjorn Helgaas
      Convert printks to use dev_printk().
      
      I converted DBG() to dev_dbg().  This DBG() is from arch/x86/pci/pci.h and
      requires source-code modification to enable, so dev_dbg() seems roughly
      equivalent.
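      
      A representative before/after for this kind of conversion (illustrative
      only, not a hunk from this patch):
      
        /* before: no indication of which device the message refers to */
        printk(KERN_INFO "PCI: enabling device %s\n", pci_name(dev));

        /* after: dev_info() prefixes the message with the device name */
        dev_info(&dev->dev, "enabling device\n");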
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      12c0b20f
    • cpu masks: optimize and clean up cpumask_of_cpu() · e56b3bc7
      Committed by Linus Torvalds
      Clean up and optimize cpumask_of_cpu(), by sharing all the zero words.
      
      Instead of stupidly generating all possible i=0...NR_CPUS 2^i patterns
      creating a huge array of constant bitmasks, realize that the zero words
      can be shared.
      
      In other words, on a 64-bit architecture, we only ever need 64 of these
      arrays - with a different bit set in one single word (with enough zero
      words around it so that we can create any bitmask by just offsetting into
      that big array). And then we just put enough zeroes around it that we
      can point every single cpumask at one of those things.
      
      So when we have 4k CPUs, instead of having 4k arrays (of 4k bits each,
      with one bit set in each array - 2MB of memory total), we have exactly 64
      arrays instead, each 8k bits in size (64kB total).
      
      And then we just point cpumask(n) to the right position (which we can
      calculate dynamically). Once we have the right arrays, getting
      "cpumask(n)" ends up being:
      
        static inline const cpumask_t *get_cpu_mask(unsigned int cpu)
        {
                const unsigned long *p = cpu_bit_bitmap[1 + cpu % BITS_PER_LONG];
                p -= cpu / BITS_PER_LONG;
                return (const cpumask_t *)p;
        }
      
      This brings other advantages and simplifications as well:
      
       - we are not wasting memory that is just filled with a single bit in
         various different places
      
       - we don't need all those games to re-create the arrays in some dense
         format, because they're already going to be dense enough.
      
      If we compile a kernel for up to 4k CPUs, "wasting" that 64kB of memory
      is a non-issue (especially since by doing this "overlapping" trick we
      probably get better cache behaviour anyway).
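      
      The get_cpu_mask() snippet above indexes into a single shared backing
      array; conceptually it looks like this (a sketch, not the exact
      kernel/cpu.c initializer):
      
        /* Row 0 is all zeroes; row 1 + b has exactly one nonzero word - its
         * first word, with bit b set.  Backing the pointer up by
         * cpu / BITS_PER_LONG words puts that word at the position covering
         * CPU 'cpu', and every other word of the mask reads from the
         * surrounding zero words. */
        const unsigned long cpu_bit_bitmap[BITS_PER_LONG + 1][BITS_TO_LONGS(NR_CPUS)] = {
                [1] = { 1UL << 0 },     /* serves CPUs 0, 64, 128, ... (on 64-bit) */
                [2] = { 1UL << 1 },     /* serves CPUs 1, 65, 129, ... */
                /* ... one row per bit position, up to [BITS_PER_LONG] ... */
        };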
      
      [ mingo@elte.hu:
      
        Converted Linus's mails into a commit. See:
      
           http://lkml.org/lkml/2008/7/27/156
           http://lkml.org/lkml/2008/7/28/320
      
        Also applied a family filter - which also has the side-effect of leaving
        out the bits where Linus calls me an idio... Oh, never mind ;-)
      ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Cc: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e56b3bc7
  7. 28 Jul 2008 (1 commit)
  8. 27 Jul 2008 (14 commits)
    • KVM: VMX: Fix undefined behaviour of EPT after reloading kvm-intel.ko · 5fdbcb9d
      Committed by Sheng Yang
      Also move setting the base/mask ptes to vmx_init().
      Signed-off-by: Sheng Yang <sheng.yang@intel.com>
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      5fdbcb9d
    • KVM: task switch: translate guest segment limit to virt-extension byte granular field · c93cd3a5
      Committed by Marcelo Tosatti
      If 'g' is one, the limit is 4KB-granular.
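      
      In other words (a sketch of the conversion, not the patch itself): when
      the granularity bit is set, the raw descriptor limit is scaled from 4KB
      units to bytes before being stored in the byte-granular field:
      
        /* descriptor 'limit' is in 4KB units when G=1; expand it to bytes */
        if (g)
                limit = (limit << 12) | 0xfff;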
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      c93cd3a5
    • KVM: Avoid instruction emulation when event delivery is pending · 577bdc49
      Committed by Avi Kivity
      When an event (such as an interrupt) is injected, and the stack is
      shadowed (and therefore write protected), the guest will exit.  The
      current code will see that the stack is shadowed and emulate a few
      instructions, each time postponing the injection.  Eventually the
      injection may succeed, but at that time the guest may be unwilling
      to accept the interrupt (for example, the TPR may have changed).
      
      This occurs every once in a while during a Windows 2008 boot.
      
      Fix by unshadowing the fault address if the fault was due to an event
      injection.
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      577bdc49
    • KVM: task switch: use seg regs provided by subarch instead of reading from GDT · 34198bf8
      Committed by Marcelo Tosatti
      There is no guarantee that the old TSS descriptor in the GDT contains
      the proper base address. This is the case for Windows installation's
      reboot-via-triplefault.
      
      Use guest registers instead. Also translate the address properly.
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      34198bf8
    • KVM: task switch: segment base is linear address · 98899aa0
      Committed by Marcelo Tosatti
      The segment base is always a linear address, so translate before
      accessing guest memory.
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      98899aa0
    • KVM: SVM: allow enabling/disabling NPT by reloading only the architecture module · 5f4cb662
      Committed by Joerg Roedel
      If NPT is enabled after loading both KVM modules on AMD and should then be
      disabled, both KVM modules must currently be reloaded.  If only the
      architecture module is reloaded, the behavior is undefined.  With this
      patch it is possible to disable NPT by reloading only the kvm_amd module.
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      5f4cb662
    • x86: use generic show_mem() · 8dad322f
      Committed by Johannes Weiner
      Remove arch-specific show_mem() in favor of the generic version.
      
      This also removes the following redundant information display:
      
      	- pages in swapcache, printed by show_swap_cache_info()
      	- dirty pages, writeback pages, mapped pages, slab pages,
      	  pagetable pages, printed by show_free_areas()
      
      where show_mem() calls show_free_areas(), which calls
      show_swap_cache_info().
      Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8dad322f
    • tracehook: exec · 6341c393
      Committed by Roland McGrath
      This moves all the ptrace hooks related to exec into tracehook.h inlines.
      
      This also lifts the calls for tracing out of the binfmt load_binary hooks
      into search_binary_handler() after it calls into the binfmt module.  This
      change has no effect, since all the binfmt modules' load_binary functions
      did the call at the end on success, and now search_binary_handler() does
      it immediately after return if successful.  We consolidate the repeated
      code, and binfmt modules no longer need to import ptrace_notify().
      Signed-off-by: Roland McGrath <roland@redhat.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Reviewed-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6341c393
    • x86: support 1GB hugepages with get_user_pages_lockless() · 652ea695
      Committed by Nick Piggin
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      652ea695
    • x86: lockless get_user_pages_fast() · 8174c430
      Committed by Nick Piggin
      Implement get_user_pages_fast without locking in the fastpath on x86.
      
      Do an optimistic lockless pagetable walk, without taking mmap_sem or any
      page table locks.  Page table existence is guaranteed by
      turning interrupts off (combined with the fact that we're always looking
      up the current mm, this means we can do the lockless page table walk within
      the constraints of the TLB shootdown design).  Basically we can do this
      lockless pagetable walk in a similar manner to the way the CPU's pagetable
      walker does not have to take any locks to find present ptes.
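      
      The shape of the fast path, as a sketch (gup_pgd_range() is a placeholder
      name and error handling is simplified; this is not the exact code in
      arch/x86/mm/gup.c):
      
        int get_user_pages_fast(unsigned long start, int nr_pages, int write,
                                struct page **pages)
        {
                struct mm_struct *mm = current->mm;
                unsigned long addr = start & PAGE_MASK;
                unsigned long end = addr + ((unsigned long)nr_pages << PAGE_SHIFT);
                int nr, ret;

                /* Interrupts off: a remote TLB-shootdown IPI cannot complete
                 * while we walk, so the page tables we dereference cannot be
                 * freed out from under us. */
                local_irq_disable();
                nr = gup_pgd_range(mm, addr, end, write, pages);  /* placeholder */
                local_irq_enable();

                if (nr == nr_pages)
                        return nr;

                /* Slow path for whatever the lockless walk could not pin. */
                down_read(&mm->mmap_sem);
                ret = get_user_pages(current, mm,
                                     addr + ((unsigned long)nr << PAGE_SHIFT),
                                     nr_pages - nr, write, 0, pages + nr, NULL);
                up_read(&mm->mmap_sem);

                return (ret < 0) ? ret : nr + ret;
        }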
      
      This patch (combined with the subsequent ones to convert direct IO to use
      it) was found to give about 10% performance improvement on a 2 socket 8
      core Intel Xeon system running an OLTP workload on DB2 v9.5
      
       "To test the effects of the patch, an OLTP workload was run on an IBM
        x3850 M2 server with 2 processors (quad-core Intel Xeon processors at
        2.93 GHz) using IBM DB2 v9.5 running Linux 2.6.24rc7 kernel.  Comparing
        runs with and without the patch resulted in an overall performance
        benefit of ~9.8%.  Correspondingly, oprofiles showed that samples from
        __up_read and __down_read routines that is seen during thread contention
        for system resources was reduced from 2.8% down to .05%.  Monitoring the
        /proc/vmstat output from the patched run showed that the counter for
        fast_gup contained a very high number while the fast_gup_slow value was
        zero."
      
      (fast_gup is the old name for get_user_pages_fast, fast_gup_slow is a
      counter we had for the number of times the slowpath was invoked).
      
      The main reason for the improvement is that DB2 has multiple threads each
      issuing direct-IO.  Direct-IO uses get_user_pages, and thus the threads
      contend the mmap_sem cacheline, and can also contend on page table locks.
      
      I would anticipate larger performance gains on larger systems, however I
      think DB2 uses an adaptive mix of threads and processes, so it could be
      that thread contention remains pretty constant as machine size increases.
      In which case, we are stuck with "only" a 10% gain.
      
      The downside of using get_user_pages_fast is that if there is not a pte
      with the correct permissions for the access, we end up falling back to
      get_user_pages and so the get_user_pages_fast is a bit of extra work.
      However this should not be the common case in most performance critical
      code.
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: Kconfig fix]
      [akpm@linux-foundation.org: Makefile fix/cleanup]
      [akpm@linux-foundation.org: warning fix]
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Dave Kleikamp <shaggy@austin.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Dave Kleikamp <shaggy@austin.ibm.com>
      Cc: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Zach Brown <zach.brown@oracle.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8174c430
    • kexec jump: save/restore device state · 89081d17
      Committed by Huang Ying
      This patch implements device state save/restore before and after kexec.
      
      This patch, together with the features in the kexec_jump patch, can be used
      for the following:
      
      - A simple hibernation implementation without ACPI support.  You can kexec a
        hibernating kernel, save the memory image of the original system and shut
        down the system.  When resuming, you restore the memory image of the
        original system via an ordinary kexec load and then jump back.
      
      - Kernel/system debugging through system snapshots.  You can make a system
        snapshot, jump back, do something and make another system snapshot.
      
      - Cooperative multi-kernel/system.  With kexec jump, you can switch between
        several kernels/systems quickly without the boot process, except the first
        time.  This is like swapping a whole kernel/system out/in.
      
      - A general method to call a program in physical mode (paging turned
        off).  This can be used to invoke BIOS code under Linux.
      
      The following user-space tools can be used with kexec jump:
      
      - kexec-tools needs to be patched to support kexec jump.  The patches
        and the precompiled kexec can be downloaded from the following URLs:
             source: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-src_git_kh10.tar.bz2
             patches: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-patches_git_kh10.tar.bz2
             binary: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec_git_kh10
      
      - makedumpfile with patches is used as the memory image saving tool; it
        can exclude free pages from the original kernel's memory image file.
        The patches and the precompiled makedumpfile can be downloaded from the
        following URLs:
             source: http://khibernation.sourceforge.net/download/release_v10/makedumpfile/makedumpfile-src_cvs_kh10.tar.bz2
             patches: http://khibernation.sourceforge.net/download/release_v10/makedumpfile/makedumpfile-patches_cvs_kh10.tar.bz2
             binary: http://khibernation.sourceforge.net/download/release_v10/makedumpfile/makedumpfile_cvs_kh10
      
      - An initramfs image can be used as the root file system of the kexeced
        kernel.  An initramfs image built with "BuildRoot" can be downloaded
        from the following URL:
             initramfs image: http://khibernation.sourceforge.net/download/release_v10/initramfs/rootfs_cvs_kh10.gz
        All user space tools above are included in the initramfs image.
      
      Usage example of simple hibernation:
      
      1. Compile and install the patched kernel with the following options selected:
      
      CONFIG_X86_32=y
      CONFIG_RELOCATABLE=y
      CONFIG_KEXEC=y
      CONFIG_CRASH_DUMP=y
      CONFIG_PM=y
      CONFIG_HIBERNATION=y
      CONFIG_KEXEC_JUMP=y
      
      2. Build an initramfs image that contains kexec-tools and makedumpfile,
         or download the pre-built initramfs image, called rootfs.gz in the
         following text.
      
      3. Prepare a partition to save the memory image of the original kernel,
         called the hibernating partition in the following text.
      
      4. Boot kernel compiled in step 1 (kernel A).
      
      5. In kernel A, load the kernel compiled in step 1 (kernel B) with
         /sbin/kexec.  The shell command line can be as follows:
      
         /sbin/kexec --load-preserve-context /boot/bzImage --mem-min=0x100000
           --mem-max=0xffffff --initrd=rootfs.gz
      
      6. Boot kernel B with the following shell command line:
      
         /sbin/kexec -e
      
      7. Kernel B will boot as a normal kexec.  In kernel B, the memory
         image of kernel A can be saved into the hibernating partition as
         follows:
      
         jump_back_entry=`cat /proc/cmdline | tr ' ' '\n' | grep kexec_jump_back_entry | cut -d '=' -f 2`
         echo $jump_back_entry > kexec_jump_back_entry
         cp /proc/vmcore dump.elf
      
         Then you can shut down the machine as normal.
      
      8. Boot the kernel compiled in step 1 (kernel C).  Use rootfs.gz as the
         root file system.
      
      9. In kernel C, load the memory image of kernel A as follows:
      
         /sbin/kexec -l --args-none --entry=`cat kexec_jump_back_entry` dump.elf
      
      10. Jump back to kernel A as follows:
      
         /sbin/kexec -e
      
         Then, kernel A is resumed.
      
      Implementation point:
      
      To support jumping between two kernels, the devices are put into a
      quiescent state and the state of the devices and the CPU is saved before
      jumping to (executing) the new kernel and before jumping back to the
      original kernel.  After jumping back from the kexeced kernel and after
      jumping to the new kernel, the state of the devices and the CPU is
      restored accordingly.  The device/CPU state save/restore code of software
      suspend is reused to implement this.
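      
      Roughly, the jump ends up bracketed by the same calls software suspend
      uses (a sketch of the sequence under that assumption, using the suspend
      helpers of this kernel generation; not the literal kernel_kexec() code,
      and error handling is omitted):
      
        device_suspend(PMSG_FREEZE);    /* quiesce drivers */
        device_power_down(PMSG_FREEZE); /* late power-down callbacks */
        local_irq_disable();
        save_processor_state();         /* CPU registers and control state */

        machine_kexec(image);           /* jump; returns here on jump-back */

        restore_processor_state();
        local_irq_enable();
        device_power_up();
        device_resume();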
      
      Known issues:
      
      - Because the number of segments supported by sys_kexec_load is limited,
        a hibernation image with many segments may not be loadable.  This is
        planned to be addressed by adding a new flag to sys_kexec_load so that
        an image can be loaded with multiple sys_kexec_load invocations.
      
      Now, only the i386 architecture is supported.
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Nigel Cunningham <nigel@nigel.suspend2.net>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      89081d17
    • kexec jump · 3ab83521
      Committed by Huang Ying
      This patch provides an enhancement to kexec/kdump.  It implements the
      following features:
      
      - Backup/restore memory used by the original kernel before/after
        kexec.
      
      - Save/restore CPU state before/after kexec.
      
      The features of this patch can be used as a general method to call a
      program in physical mode (paging turned off).  This can be used to call
      BIOS code under Linux.
      
      kexec-tools needs to be patched to support kexec jump.  The patches and
      the precompiled kexec can be downloaded from the following URLs:
      
             source: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-src_git_kh10.tar.bz2
             patches: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-patches_git_kh10.tar.bz2
             binary: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec_git_kh10
      
      Usage example of calling some physical mode code and returning:
      
      1. Compile and install the patched kernel with the following options selected:
      
      CONFIG_X86_32=y
      CONFIG_KEXEC=y
      CONFIG_PM=y
      CONFIG_KEXEC_JUMP=y
      
      2. Build the patched kexec-tools or download the pre-built one.
      
      3. Build a physical mode executable, named e.g. "phy_mode".
      
      4. Boot kernel compiled in step 1.
      
      5. Load the physical mode executable with /sbin/kexec.  The shell command
         line can be as follows:
      
         /sbin/kexec --load-preserve-context --args-none phy_mode
      
      6. Call the physical mode executable with the following shell command line:
      
         /sbin/kexec -e
      
      Implementation point:
      
      To support jumping without reserving memory, one shadow backup page (source
      page) is allocated for each page used by the kexeced code image (destination
      page).  At kexec_load time, the kexeced code image is loaded into the source
      pages, and before executing, the destination pages and the source pages are
      swapped, so the contents of the destination pages are backed up.  Before
      jumping to the kexeced code image and after jumping back to the original
      kernel, the destination pages and the source pages are swapped again.
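      
      A minimal illustration of that swap (purely illustrative; the real work is
      done page by page in the relocation code, not by a C helper like this):
      
        /* Exchange the contents of a destination page and its shadow backup
         * (source) page word by word, so the original kernel's data survives
         * the jump and can be put back afterwards. */
        static void swap_page_contents(unsigned long *dest, unsigned long *source)
        {
                unsigned long i, tmp;

                for (i = 0; i < PAGE_SIZE / sizeof(unsigned long); i++) {
                        tmp = dest[i];
                        dest[i] = source[i];
                        source[i] = tmp;
                }
        }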
      
      The C ABI (calling convention) is used as the communication protocol
      between the kernel and the called code.
      
      A flag named KEXEC_PRESERVE_CONTEXT for sys_kexec_load is added to
      indicate that the loaded kernel image is used for jumping back.
      
      Now, only the i386 architecture is supported.
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Nigel Cunningham <nigel@nigel.suspend2.net>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3ab83521
    • x86 calgary: fix handling of devices that aren't behind the Calgary · 1956a96d
      Committed by Alexis Bruemmer
      The calgary code can give drivers addresses above 4GB, which is very bad
      for hardware that is only 32-bit DMA addressable.
      
      With this patch, the calgary code sets the global dma_ops to swiotlb or
      nommu properly, and the dma_ops of devices behind the Calgary/CalIOC2
      to calgary_dma_ops.  This way the calgary code can safely handle devices
      that aren't behind the Calgary/CalIOC2.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Alexis Bruemmer <alexisb@us.ibm.com>
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Muli Ben-Yehuda <muli@il.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1956a96d