1. 13 Jan 2015 (1 commit)
    • x86/xen: properly retrieve NMI reason · f221b04f
      Committed by Jan Beulich
      Using the native code here can't work properly, as the hypervisor would
      normally have cleared the two reason bits by the time Dom0 gets to see
      the NMI (if passed to it at all). There's a shared info field for this,
      and there's an existing hook to use - just fit the two together. This
      is particularly relevant so that NMIs intended to be handled by APEI /
      GHES actually make it to the respective handler.
      
      Note that the hook can (and should) be used irrespective of whether
      running in Dom0, as accessing port 0x61 in a DomU would be even
      worse, while the shared info field would simply hold zero all the
      time. Note further that hardware NMI handling for PVH doesn't
      currently work anyway due to missing code in the hypervisor (but it
      is expected to work in the native rather than the PV way).
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
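      As a rough illustration of the technique this commit describes, a
      hook of the following shape could read the NMI reason bits from the
      shared info field instead of port 0x61. This is a hedged sketch, not
      the verbatim patch; the field and bit names follow the Xen public
      headers and x86 <asm/mach_traps.h>.

      	/* Sketch: report the NMI reason from the shared info page rather
      	 * than the emulated port 0x61, whose reason bits the hypervisor
      	 * may already have cleared by the time Dom0 sees the NMI.
      	 */
      	static unsigned char xen_get_nmi_reason(void)
      	{
      		unsigned char reason = 0;

      		/* Construct a value that looks like it came from port 0x61. */
      		if (test_bit(_XEN_NMIREASON_io_error,
      			     &HYPERVISOR_shared_info->arch.nmi_reason))
      			reason |= NMI_REASON_IOCHK;
      		if (test_bit(_XEN_NMIREASON_pci_serr,
      			     &HYPERVISOR_shared_info->arch.nmi_reason))
      			reason |= NMI_REASON_SERR;

      		return reason;
      	}

      Wiring this up is then just a matter of pointing the existing
      x86_platform.get_nmi_reason hook at such a function during early
      Xen init.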
  2. 12 Jan 2015 (4 commits)
  3. 08 Jan 2015 (4 commits)
  4. 23 Dec 2014 (1 commit)
  5. 11 Dec 2014 (1 commit)
    • xen: switch to post-init routines in xen mmu.c earlier · cdfa0bad
      Committed by Juergen Gross
      With the virtually mapped linear p2m list, the post-init mmu
      operations must be used for setting up the p2m mappings, since with
      CONFIG_FLATMEM the init routines may trigger BUGs.
      
      paging_init() sets up all the infrastructure needed to switch to the
      post-init mmu ops done by xen_post_allocator_init(). With the
      virtually mapped linear p2m list we need some mmu ops during setup of
      this list, so we have to switch to the correct mmu ops as soon as
      possible.
      
      The p2m list is usable from the beginning; only expanding it requires
      the new linear mapping to already be established. That is why the
      call to xen_remap_memory() had to be introduced, not because the mmu
      ops require it.
      
      Summing it up: not calling xen_post_allocator_init() directly after
      paging_init() was conceptually wrong from the beginning; it just
      didn't matter until now because none of the functions used between
      the two calls needed any critical mmu ops (e.g. alloc_pte). This has
      changed now, so the ordering is corrected.
      Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
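      A simplified sketch of the corrected ordering; the first two calls
      match the function names in arch/x86/xen/mmu.c, while the rest of the
      body is abbreviated rather than quoted from the actual diff:

      	/* Sketch: switch to the post-init mmu ops immediately after
      	 * paging_init(), because the p2m list setup that follows may
      	 * already need critical mmu ops such as alloc_pte.
      	 */
      	static void __init xen_pagetable_init(void)
      	{
      		paging_init();
      		xen_post_allocator_init();	/* moved directly after paging_init() */

      		xen_pagetable_p2m_setup();	/* may expand the linear p2m list */
      	}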
  6. 08 Dec 2014 (2 commits)
  7. 04 Dec 2014 (9 commits)
  8. 16 Nov 2014 (1 commit)
  9. 10 Nov 2014 (1 commit)
  10. 04 Nov 2014 (1 commit)
  11. 23 Oct 2014 (5 commits)
  12. 06 Oct 2014 (1 commit)
  13. 03 Oct 2014 (1 commit)
  14. 23 Sep 2014 (4 commits)
    • x86: remove the Xen-specific _PAGE_IOMAP PTE flag · f955371c
      Committed by David Vrabel
      The _PAGE_IOMAP PTE flag was only used by Xen PV guests to mark PTEs
      that were used to map I/O regions that are 1:1 in the p2m.  This
      allowed Xen to obtain the correct PFN when converting the MFNs read
      from a PTE back to their PFN.
      
      Xen guests no longer use _PAGE_IOMAP for this. Instead, mfn_to_pfn()
      returns the correct PFN by using a combination of the m2p and p2m to
      determine if an MFN corresponds to a 1:1 mapping in the p2m.
      
      Remove _PAGE_IOMAP, replacing it with _PAGE_UNUSED2 to allow for
      future uses of the PTE flag.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: "H. Peter Anvin" <hpa@zytor.com>
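      The identity-mapping check described above can be pictured roughly as
      follows. Note that m2p_lookup() and p2m_lookup() are hypothetical
      helper names used purely for illustration; the real code goes through
      the kernel's m2p/p2m accessors.

      	/* Hedged sketch of the lookup logic that replaces _PAGE_IOMAP. */
      	static unsigned long sketch_mfn_to_pfn(unsigned long mfn)
      	{
      		unsigned long pfn = m2p_lookup(mfn);	/* hypothetical m2p accessor */

      		/*
      		 * If the p2m does not map pfn back to mfn, treat the MFN as
      		 * part of a 1:1 (identity-mapped) I/O region and use the MFN
      		 * itself as the PFN.
      		 */
      		if (p2m_lookup(pfn) != mfn)		/* hypothetical p2m accessor */
      			pfn = mfn;

      		return pfn;
      	}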
    • x86/xen: do not use _PAGE_IOMAP PTE flag for I/O mappings · 7f2f8822
      Committed by David Vrabel
      Since mfn_to_pfn() returns the correct PFN for identity mappings (as
      used for MMIO regions), the use of _PAGE_IOMAP is not required in
      pte_mfn_to_pfn().
      
      Do not set the _PAGE_IOMAP flag in pte_pfn_to_mfn() and do not use it
      in pte_mfn_to_pfn().
      
      This will allow _PAGE_IOMAP to be removed, making it available for
      future use.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
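      In code terms the change amounts to dropping the flag handling from
      the two converters. A simplified, hedged version of pte_pfn_to_mfn()
      after the change might look like this; the real function in
      arch/x86/xen/mmu.c handles further corner cases:

      	static pteval_t pte_pfn_to_mfn(pteval_t val)
      	{
      		if (val & _PAGE_PRESENT) {
      			unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
      			pteval_t flags = val & PTE_FLAGS_MASK;

      			/* No _PAGE_IOMAP special case any more: translate the
      			 * PFN unconditionally; mfn_to_pfn() handles identity
      			 * ranges on the way back.
      			 */
      			val = ((pteval_t)pfn_to_mfn(pfn) << PAGE_SHIFT) | flags;
      		}
      		return val;
      	}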
    • xen/efi: Directly include needed headers · 342cd340
      Committed by Daniel Kiper
      I discovered that some needed definitions and declarations live in
      headers which are not included directly. Currently this works, but if
      somebody removes the required headers from the currently included
      ones, the build will break. So, just in case, directly include all
      needed headers.
      Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
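      The principle at work is "include what you use". Illustratively (the
      header name below is an example, not one of the headers touched by
      this patch):

      	#include <linux/string.h>	/* memcpy() is called directly in this
      					 * file, so include its header directly
      					 * rather than relying on another header
      					 * pulling it in transitively. */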
    • xen/setup: Remap Xen Identity Mapped RAM · 4fbb67e3
      Committed by Matt Rushton
      Instead of ballooning dom0 memory up and down, this remaps the
      existing mfns that were replaced by the identity map. The reason for
      this is that the existing implementation ballooned memory up and
      down, which caused dom0 to have discontiguous pages. In some cases
      this resulted in the use of bounce buffers, which reduced network I/O
      performance significantly. This change will honor the existing order
      of the pages, with the exception of some boundary conditions.
      
      To do this we need to update both the Linux p2m table and the Xen m2p table.
      Particular care must be taken when updating the p2m table since it's important
      to limit table memory consumption and reuse the existing leaf pages which get
      freed when an entire leaf page is set to the identity map. To implement this,
      mapping updates are grouped into blocks with table entries getting cached
      temporarily and then released.
      
      On my test system before:
      Total pages: 2105014
      Total contiguous: 1640635
      
      After:
      Total pages: 2105014
      Total contiguous: 2098904
      Signed-off-by: Matthew Rushton <mrushton@amazon.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
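      The block-wise update strategy can be sketched as below. Every helper
      name here is hypothetical and merely stands in for the patch's actual
      p2m bookkeeping:

      	/* Conceptual sketch: batch p2m updates so a leaf page can be held
      	 * while it is being filled, then released (and possibly freed)
      	 * once every entry in it maps 1:1.
      	 */
      	#define P2M_BATCH 512	/* illustrative batch size */

      	static void remap_in_blocks(unsigned long pfn, unsigned long count)
      	{
      		while (count) {
      			unsigned long n = min(count, (unsigned long)P2M_BATCH);

      			p2m_cache_leaf(pfn);		/* hypothetical: hold leaf page */
      			p2m_update_entries(pfn, n);	/* hypothetical: batched update */
      			p2m_release_leaf(pfn);		/* hypothetical: free when fully done */

      			pfn += n;
      			count -= n;
      		}
      	}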
  15. 16 Sep 2014 (1 commit)
  16. 10 Sep 2014 (1 commit)
    • x86/xen: don't copy bogus duplicate entries into kernel page tables · 0b5a5063
      Committed by Stefan Bader
      When RANDOMIZE_BASE (KASLR) is enabled, or when the sum of all loaded
      modules exceeds 512 MiB, loading modules fails with a warning (and
      hence a vmalloc allocation failure), because the PTEs for the
      newly-allocated vmalloc address space are not zero.
      
        WARNING: CPU: 0 PID: 494 at linux/mm/vmalloc.c:128
                 vmap_page_range_noflush+0x2a1/0x360()
      
      This is caused by xen_setup_kernel_pagetables() copying
      level2_kernel_pgt into level2_fixmap_pgt, overwriting many non-present
      entries.
      
      Without KASLR, the normal kernel image size only covers the first half
      of level2_kernel_pgt and module space starts after that.
      
      L4[511]->level3_kernel_pgt[510]->level2_kernel_pgt[  0..255]->kernel
                                                        [256..511]->module
                                [511]->level2_fixmap_pgt[  0..505]->module
      
      This allows 512 MiB of module vmalloc space to be used before having
      to use the corrupted level2_fixmap_pgt entries.
      
      With KASLR enabled, the kernel image uses the full PUD range of 1G and
      module space starts in the level2_fixmap_pgt. So basically:
      
      L4[511]->level3_kernel_pgt[510]->level2_kernel_pgt[0..511]->kernel
                                [511]->level2_fixmap_pgt[0..505]->module
      
      And now no module vmalloc space can be used without using the corrupt
      level2_fixmap_pgt entries.
      
      Fix this by properly converting the level2_fixmap_pgt entries to MFNs,
      and setting level1_fixmap_pgt as read-only.
      
      A number of comments were also using the wrong L3 offset for
      level2_kernel_pgt.  These have been corrected.
      Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: stable@vger.kernel.org
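      The shape of the fix, heavily abridged (set_page_prot() is the static
      helper in arch/x86/xen/mmu.c; the surrounding conversion of the
      copied entries to MFN-based PTEs is omitted here):

      	/* In xen_setup_kernel_pagetable(): after converting the needed
      	 * level2_fixmap_pgt entries to MFNs, pin the fixmap tables
      	 * read-only so Xen will accept them as page tables.
      	 */
      	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
      	set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO);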
  17. 27 Aug 2014 (1 commit)
    • x86: Replace __get_cpu_var uses · 89cbc767
      Committed by Christoph Lameter
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address for the instance of the percpu variable of the current processor
      based on an offset.
      
      Other use cases are storing and retrieving data from the current
      processor's percpu area.  __get_cpu_var() can be used as an lvalue
      when writing data or on the right side of an assignment.
      
      __get_cpu_var() is defined as:
      
      #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
      
      __get_cpu_var() only ever performs an address calculation. However,
      store and retrieve operations could use a segment prefix (or a global
      register on other platforms) to avoid the address calculation.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations
      that take the offset directly.  Address calculations are thereby
      avoided and fewer registers are used when code is generated.
      
      Transformations done to __get_cpu_var()
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processor's instance of a per
      cpu variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y);
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y)
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	__this_cpu_write(y, x);
      
      6. Increment/Decrement etc of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	__this_cpu_inc(y)
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86@kernel.org
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  18. 11 Aug 2014 (1 commit)