1. 12 Jan, 2015 · 2 commits
  2. 08 Jan, 2015 · 4 commits
  3. 23 Dec, 2014 · 1 commit
  4. 11 Dec, 2014 · 1 commit
    • xen: switch to post-init routines in xen mmu.c earlier · cdfa0bad
      Authored by Juergen Gross
      With the virtually mapped linear p2m list the post-init mmu operations
      must be used for setting up the p2m mappings, because with
      CONFIG_FLATMEM the init routines may trigger BUGs.
      
      paging_init() sets up all infrastructure needed to switch to the
      post-init mmu ops done by xen_post_allocator_init(). With the virtual
      mapped linear p2m list we need some mmu ops during setup of this list,
      so we have to switch to the correct mmu ops as soon as possible.
      
      The p2m list is usable from the beginning; only expanding it requires
      the new linear mapping to be established first. That is why the call
      to xen_remap_memory() had to be introduced, not because the mmu ops
      require it.
      
      Summing it up: not calling xen_post_allocator_init() directly after
      paging_init() was conceptually wrong from the beginning; it just
      didn't matter until now because none of the functions called between
      the two needed any critical mmu ops (e.g. alloc_pte). This has
      changed now, so I corrected it.
      Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  5. 08 Dec, 2014 · 2 commits
  6. 04 Dec, 2014 · 14 commits
  7. 26 Nov, 2014 · 1 commit
    • kvm: fix kvm_is_mmio_pfn() and rename to kvm_is_reserved_pfn() · d3fccc7e
      Authored by Ard Biesheuvel
      This reverts commit 85c8555f ("KVM: check for !is_zero_pfn() in
      kvm_is_mmio_pfn()") and renames the function to kvm_is_reserved_pfn.
      
      The problem being addressed by the patch above was that some ARM code
      based the memory mapping attributes of a pfn on the return value of
      kvm_is_mmio_pfn(), whose name indeed suggests that such pfns should
      be mapped as device memory.
      
      However, kvm_is_mmio_pfn() doesn't do quite what it says on the tin,
      and the existing non-ARM users were already using it in a way which
      suggests that its name should probably have been 'kvm_is_reserved_pfn'
      from the beginning, e.g., whether or not to call get_page/put_page on
      it etc. This means that returning false for the zero page is a mistake
      and the patch above should be reverted.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  8. 24 Nov, 2014 · 4 commits
  9. 21 Nov, 2014 · 1 commit
  10. 19 Nov, 2014 · 3 commits
  11. 17 Nov, 2014 · 1 commit
    • x86-64: make csum_partial_copy_from_user() error handling consistent · 3b91270a
      Authored by Linus Torvalds
      Al Viro pointed out that the x86-64 csum_partial_copy_from_user() is
      somewhat confused about what it should do on errors: it mostly
      clears the uncopied end of the result buffer, but misses doing that
      for the initial alignment case.
      
      All users should check for errors, so it's dubious whether the clearing
      is even necessary, and Al also points out that we should probably clean
      up the calling conventions, but regardless of any future changes to this
      function, the fact that it is inconsistent is just annoying.
      
      So make the __get_user() failure path use the same error exit as all the
      other errors do.
      Reported-by: Al Viro <viro@zeniv.linux.org.uk>
      Cc: David Miller <davem@davemloft.net>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 16 Nov, 2014 · 3 commits
  13. 10 Nov, 2014 · 2 commits
  14. 06 Nov, 2014 · 1 commit