1. 20 Aug, 2013 (2 commits)
    • x86/xen: during early setup, only 1:1 map the ISA region · e201bfcc
      David Vrabel authored
      During early setup, when the reserved regions and MMIO holes are being
      set up as 1:1 in the p2m, clear any mappings instead of making them 1:1
      (except for the ISA region, which is expected to be mapped).
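      
      A minimal sketch of the intended p2m handling, assuming the era's
      helpers set_phys_to_machine(), IDENTITY_FRAME() and INVALID_P2M_ENTRY
      (the exact loop in arch/x86/xen/setup.c differs in detail):
      
      	/* Sketch: identity-map only the ISA hole; clear everything else. */
      	static void __init sketch_setup_hole(unsigned long start_pfn,
      					     unsigned long end_pfn)
      	{
      		unsigned long pfn;
      
      		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
      			if (pfn >= PFN_DOWN(ISA_START_ADDRESS) &&
      			    pfn < PFN_DOWN(ISA_END_ADDRESS))
      				/* ISA region stays 1:1 mapped */
      				set_phys_to_machine(pfn, IDENTITY_FRAME(pfn));
      			else
      				/* unusable/reserved: no mapping at all */
      				set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
      		}
      	}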
      
      This fixes a regression introduced in 3.5 by 83d51ab4 (xen/setup:
      update VA mapping when releasing memory during setup) which caused
      hosts with tboot to fail to boot.
      
      tboot marks a region in the e820 map as unusable; the dom0 kernel
      would attempt to map this region, and Xen does not permit unusable
      regions to be mapped by guests.
      
      (XEN)  0000000000000000 - 0000000000060000 (usable)
      (XEN)  0000000000060000 - 0000000000068000 (reserved)
      (XEN)  0000000000068000 - 000000000009e000 (usable)
      (XEN)  0000000000100000 - 0000000000800000 (usable)
      (XEN)  0000000000800000 - 0000000000972000 (unusable)
      
      tboot marked this region as unusable.
      
      (XEN)  0000000000972000 - 00000000cf200000 (usable)
      (XEN)  00000000cf200000 - 00000000cf38f000 (reserved)
      (XEN)  00000000cf38f000 - 00000000cf3ce000 (ACPI data)
      (XEN)  00000000cf3ce000 - 00000000d0000000 (reserved)
      (XEN)  00000000e0000000 - 00000000f0000000 (reserved)
      (XEN)  00000000fe000000 - 0000000100000000 (reserved)
      (XEN)  0000000100000000 - 0000000630000000 (usable)
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    • x86/xen: disable preemption when enabling local irqs · fb58e300
      David Vrabel authored
      If CONFIG_PREEMPT is enabled then xen_enable_irq() (and
      xen_restore_fl()) could be preempted and rescheduled on a different
      VCPU in between the clear of the mask and the check for pending
      events.  This may result in events being lost as the upcall will check
      for pending events on the wrong VCPU.
      
      Fix this by disabling preemption around the unmask and check for
      events.
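      
      A minimal sketch of the fixed path, assuming the vcpu_info layout and a
      xen_force_evtchn_callback()-style helper as in arch/x86/xen/irq.c of
      the time (details are illustrative):
      
      	static void sketch_xen_irq_enable(void)
      	{
      		struct vcpu_info *vcpu;
      
      		/* Stay on this VCPU from the unmask to the pending check. */
      		preempt_disable();
      		vcpu = this_cpu_read(xen_vcpu);
      		vcpu->evtchn_upcall_mask = 0;
      		barrier();	/* unmask before reading pending */
      		if (unlikely(vcpu->evtchn_upcall_pending))
      			xen_force_evtchn_callback();
      		preempt_enable();
      	}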
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  2. 09 Aug, 2013 (2 commits)
    • xen/p2m: avoid unnecessary TLB flush in m2p_remove_override() · 65a45fa2
      David Vrabel authored
      In m2p_remove_override(), when removing the grant map from the kernel
      mapping and replacing it with a mapping to the original page, the grant
      unmap will already have flushed the TLB, so it is not necessary to do
      it again after updating the mapping.
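      
      A minimal sketch of the resulting flow (the helper name and arguments
      are illustrative, not the exact m2p_remove_override() code):
      
      	/* Restore the original mapping after a grant unmap.  The
      	 * GNTTABOP_unmap_grant_ref hypercall has already flushed the
      	 * TLB, so no flush follows the PTE update. */
      	static void sketch_restore_mapping(unsigned long address,
      					   pte_t *ptep, unsigned long pfn)
      	{
      		set_pte_at(&init_mm, address, ptep,
      			   pfn_pte(pfn, PAGE_KERNEL));
      		/* deliberately no flush_tlb_all() here */
      	}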
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    • xen: Support 64-bit PV guest receiving NMIs · 6efa20e4
      Konrad Rzeszutek Wilk authored
      This is based on a patch that Zhenzhong Duan had sent - which
      was missing some of the remaining pieces. The kernel has the
      logic to handle Xen-type exceptions using the paravirt interface
      in the assembler code (see PARAVIRT_ADJUST_EXCEPTION_FRAME -
      pv_irq_ops.adjust_exception_frame and INTERRUPT_RETURN -
      pv_cpu_ops.iret).
      
      That means the NMI handler (and other exception handlers) use
      the hypervisor iret.
      
      The other changes that would be necessary for this would
      be to translate the NMI_VECTOR to one of the entries on the
      ipi_vector and make xen_send_IPI_mask_allbutself use different
      events.
      
      Fortunately for us commit 1db01b49
      (xen: Clean up apic ipi interface) implemented this and we piggyback
      on the cleanup such that the apic IPI interface will pass the right
      vector value for NMI.
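      
      A minimal sketch of the vector translation, modelled on the
      xen_map_vector() helper in arch/x86/xen/smp.c (the exact switch in
      the patch may differ):
      
      	static int sketch_map_vector(int vector)
      	{
      		switch (vector) {
      		case RESCHEDULE_VECTOR:
      			return XEN_RESCHEDULE_VECTOR;
      		case CALL_FUNCTION_VECTOR:
      			return XEN_CALL_FUNCTION_VECTOR;
      		case NMI_VECTOR:
      			/* NMIs get a dedicated entry in enum ipi_vector */
      			return XEN_NMI_VECTOR;
      		default:
      			printk(KERN_ERR "%s: unhandled vector %d\n",
      			       __func__, vector);
      			return -1;
      		}
      	}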
      
      With this patch we can trigger NMIs within a PV guest (only tested
      on x86_64).
      
      For this to work with normal PV guests (not initial domain)
      we need the domain to be able to use the APIC ops - they are
      already implemented to use the Xen event channels. For that
      to be turned on in a PV domU we need to remove the masking
      of X86_FEATURE_APIC.
      
      Incidentally that means kgdb will also now work within
      a PV guest without using the 'nokgdbroundup' workaround.
      
      Note that the 32-bit version is different and this patch
      does not enable that.
      
      CC: Lisa Nguyen <lisa@xenapiadmin.com>
      CC: Ben Guthro <benjamin.guthro@citrix.com>
      CC: Zhenzhong Duan <zhenzhong.duan@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      [v1: Fixed up per David Vrabel comments]
      Reviewed-by: Ben Guthro <benjamin.guthro@citrix.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
  3. 03 Aug, 2013 (2 commits)
  4. 01 Aug, 2013 (18 commits)
  5. 31 Jul, 2013 (8 commits)
  6. 30 Jul, 2013 (1 commit)
  7. 27 Jul, 2013 (2 commits)
  8. 26 Jul, 2013 (5 commits)
    • arm64: Change kernel stack size to 16K · 845ad05e
      Feng Kan authored
      Written by Catalin Marinas and tested by APM on the Storm platform.
      This is needed because of failures encountered when running the
      SpecWeb benchmark.
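      
      A minimal sketch of the change, assuming the
      arch/arm64/include/asm/thread_info.h definitions of the time:
      
      	/* Grow the kernel stack from 8K to 16K. */
      	#define THREAD_SIZE		16384
      	#define THREAD_START_SP		(THREAD_SIZE - 16)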
      Signed-off-by: Feng Kan <fkan@apm.com>
      Acked-by: Kumar Sankaran <ksankaran@apm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • ARM: 7791/1: a.out: remove partial a.out support · acfdd4b1
      Will Deacon authored
      a.out support on ARM requires that argc, argv and envp are passed in
      r0-r2 respectively, which requires hacking load_aout_binary to
      prevent argc being clobbered by the return code. Whilst mainline kernels
      do set the registers up in start_thread, the aout loader has never
      carried the hack in mainline.
      
      Initialising the registers in this way actually goes against the libc
      expectations for ELF binaries, where argc, argv and envp are passed on
      the stack, with r0 being used to hold a pointer to an exit function for
      cleaning up after the dynamic linker if required. If the pointer is
      NULL, then it is ignored. When execing an ELF binary, Linux currently
      zeroes r0, then sets it to argc and then finally clobbers it with the
      return value of the execve syscall, so we actually end up with:
      
      	r0 = 0
      	stack[0] = argc
      	r1 = stack[1] = argv
      	r2 = stack[2] = envp
      
      libc treats r1 and r2 as undefined. The clobbering of r0 by sys_execve
      works for user-spawned threads, but when executing an ELF binary from a
      kernel thread (via call_usermodehelper), the execve is performed on the
      ret_from_fork path, which restores r0 from the saved pt_regs, resulting
      in argc being presented to the C library. This has horrible consequences
      when the application exits, since we have an exit function registered
      using argc, resulting in a jump to hyperspace.
      
      This patch solves the problem by removing the partial a.out support from
      arch/arm/ altogether.
      
      Cc: <stable@vger.kernel.org>
      Cc: Ashish Sangwan <ashishsangwan2@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 7790/1: Fix deferred mm switch on VIVT processors · bdae73cd
      Catalin Marinas authored
      As of commit b9d4d42a (ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on
      pre-ARMv6 CPUs), the mm switching on VIVT processors is done in the
      finish_arch_post_lock_switch() function to avoid whole cache flushing
      with interrupts disabled. The need for deferred mm switch is stored as a
      thread flag (TIF_SWITCH_MM). However, with preemption enabled, we can
      have another thread switch before finish_arch_post_lock_switch(). If the
      new thread has the same mm as the previous 'next' thread, the scheduler
      will not call switch_mm() and the TIF_SWITCH_MM flag won't be set for
      the new thread.
      
      This patch moves the switch-pending flag into the mm_context_t
      structure, since it is specific to the mm rather than to the thread.
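      
      A minimal sketch of the resulting deferred switch, assuming a
      switch_pending field in mm_context_t and the usual arm
      finish_arch_post_lock_switch() shape (details are illustrative):
      
      	static inline void sketch_post_lock_switch(void)
      	{
      		struct mm_struct *mm = current->mm;
      
      		/* The flag lives in the mm, so any thread of that mm
      		 * will observe and complete a pending switch. */
      		if (mm && mm->context.switch_pending) {
      			mm->context.switch_pending = 0;
      			cpu_switch_mm(mm->pgd, mm);
      		}
      	}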
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Cc: <stable@vger.kernel.org> # 3.5+
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 7789/1: Do not run dummy_flush_tlb_a15_erratum() on non-Cortex-A15 · 1f49856b
      Fabio Estevam authored
      Commit 93dc6887 (ARM: 7684/1: errata: Workaround for Cortex-A15
      erratum 798181 (TLBI/DSB operations)) causes the following undefined
      instruction error on a mx53 (Cortex-A8):
      
      Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
      CPU: 0 PID: 275 Comm: modprobe Not tainted 3.11.0-rc2-next-20130722-00009-g9b0f371 #881
      task: df46cc00 ti: df48e000 task.ti: df48e000
      PC is at check_and_switch_context+0x17c/0x4d0
      LR is at check_and_switch_context+0xdc/0x4d0
      
      This problem happens because check_and_switch_context() calls
      dummy_flush_tlb_a15_erratum() without checking whether we are really
      running on a Cortex-A15.
      
      To avoid this issue, only call dummy_flush_tlb_a15_erratum() inside
      check_and_switch_context() if erratum_a15_798181() returns true, which
      means that we are really running on a Cortex-A15.
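      
      A minimal sketch of the guard (both functions are named above; the
      surrounding check_and_switch_context() context is elided):
      
      	/* Only apply the 798181 workaround on affected Cortex-A15s. */
      	if (erratum_a15_798181())
      		dummy_flush_tlb_a15_erratum();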
      Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Roger Quadros <rogerq@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 7787/1: virt: ensure visibility of __boot_cpu_mode · 8fbac214
      Mark Rutland authored
      Secondary CPUs write to __boot_cpu_mode with caches disabled, and thus a
      cached value of __boot_cpu_mode may be incoherent with that in memory.
      This could lead to a failure to detect mismatched boot modes.
      
      This patch adds flushing to ensure that writes by secondaries to
      __boot_cpu_mode are made visible before we test against it.
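      
      A minimal sketch of the idea, assuming a sync_cache_r()-style helper
      that makes a variable's cache line coherent before the read (the
      patch's exact helper may differ):
      
      	static bool sketch_hyp_mode_mismatched(void)
      	{
      		/* Secondaries wrote __boot_cpu_mode with caches off, so
      		 * synchronise our cached copy before comparing. */
      		sync_cache_r(&__boot_cpu_mode);
      		return !!(__boot_cpu_mode & BOOT_CPU_MODE_MISMATCH);
      	}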
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoffer Dall <cdall@cs.columbia.edu>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>