  1. 09 Nov 2012 (3 commits)
    • ARM: mm: introduce L_PTE_VALID for page table entries · dbf62d50
      Will Deacon committed
      For long-descriptor translation table formats, the ARMv7 architecture
      defines the last two bits of the second- and third-level descriptors to
      be:
      
      	x0b	- Invalid
      	01b	- Block (second-level), Reserved (third-level)
      	11b	- Table (second-level), Page (third-level)
      
      This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to
      create ptes directly. However, when determining whether a given pte
      value is present in the low-level page table accessors, we only need to
      check the least significant bit of the descriptor, allowing us to write
      faulting, present entries which are required for PROT_NONE mappings.
      
      This patch introduces L_PTE_VALID, which can be used to test whether a
      pte should fault, and updates the low-level page table accessors
      accordingly.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      dbf62d50
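      A minimal C sketch of the split described above, assuming the LPAE
      bit positions from the table (macro names mirror the kernel's, but
      treat the exact definitions as illustrative):

        #define L_PTE_VALID    (1UL << 0)   /* hardware checks only bit 0 */
        #define L_PTE_PRESENT  (3UL << 0)   /* 11b: page/table descriptor */

        /* Present to Linux if either low bit is set (a bitwise test,
         * not an equality test)... */
        #define pte_present(pte)   ((pte) & L_PTE_PRESENT)

        /* ...but the entry only translates in hardware when bit 0 is set,
         * so PROT_NONE can be encoded as present-but-faulting. */
        #define pte_valid(pte)     ((pte) & L_PTE_VALID)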
    • ARM: mm: don't use the access flag permissions mechanism for classic MMU · 0cbbbad6
      Will Deacon committed
      The simplified access permissions model is not used for the classic MMU
      translation regime, so ensure that it is turned off in the sctlr prior
      to turning on address translation for ARMv7.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      0cbbbad6
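      For illustration, the effect expressed in C with inline assembly,
      on the assumption that SCTLR.AFE is bit 29 as in the ARMv7 ARM (the
      real change lives in the v7 setup assembly, run before the MMU is
      enabled):

        unsigned long sctlr;

        /* Read SCTLR, clear AFE (bit 29) so the classic MMU keeps the
         * full AP[2:0] permission model, then write it back. */
        asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r" (sctlr));
        sctlr &= ~(1UL << 29);
        asm volatile("mcr p15, 0, %0, c1, c0, 0" : : "r" (sctlr));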
    • ARM: mm: use pteval_t to represent page protection values · 864aa04c
      Will Deacon committed
      When updating the page protection map after calculating the user_pgprot
      value, the base protection map is temporarily stored in an unsigned long
      type, causing truncation of the protection bits when LPAE is enabled.
      This effectively means that calls to mprotect() will corrupt the upper
      page attributes, clearing the XN bit unconditionally.
      
      This patch uses pteval_t to store the intermediate protection values,
      preserving the upper bits for 64-bit descriptors.
      
      Cc: stable@vger.kernel.org
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      864aa04c
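      A self-contained illustration of the truncation, assuming a 32-bit
      unsigned long (as on ARM) and the LPAE execute-never bit at
      position 54; everything here is a stand-in for the real definitions:

        #include <stdint.h>
        #include <stdio.h>

        typedef uint64_t pteval_t;                  /* 64-bit with LPAE */
        #define L_PTE_XN       ((pteval_t)1 << 54)  /* execute-never    */
        #define L_PTE_PRESENT  ((pteval_t)3 << 0)

        int main(void)
        {
            /* Buggy pattern: a 32-bit unsigned long drops bits 32..63,
             * so XN silently disappears from the protection value. */
            unsigned long truncated =
                (unsigned long)(L_PTE_XN | L_PTE_PRESENT);

            /* Fixed pattern: keep the intermediate value in pteval_t. */
            pteval_t preserved = L_PTE_XN | L_PTE_PRESENT;

            printf("truncated=%#lx preserved=%#llx\n",
                   truncated, (unsigned long long)preserved);
            return 0;
        }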
  2. 03 Nov 2012 (2 commits)
  3. 02 Nov 2012 (4 commits)
    • FRV: Fix the new-style kernel_thread() stuff · e7aa51b2
      David Howells committed
      The kernel_thread() changes for FRV don't work, and FRV fails to boot,
      starting with:
      
      	commit 02ce496f
      	Author: Al Viro <viro@zeniv.linux.org.uk>
      	Date:   Tue Sep 18 22:18:51 2012 -0400
      	Subject: frv: split ret_from_fork, simplify kernel_thread() a lot
      
      The problem is that the userspace registers are completely cleared when a
      kernel thread is created and all subsequent user threads are then copied from
      that.  Unfortunately, the TBR and PSR registers are restored from the
      pt_regs, and the values they should be set to are clobbered by the
      memset.
      
      Instead, copy across the old user registers as normal, and then merely alter
      GR8 and GR9 in it if we're going to execute a kernel thread.
      Signed-off-by: David Howells <dhowells@redhat.com>
      e7aa51b2
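      The shape of the fix, sketched in C (struct and field names are
      hypothetical stand-ins for the FRV pt_regs layout):

        struct regs_sketch {
            unsigned long tbr, psr;    /* must survive from the parent  */
            unsigned long gr8, gr9;    /* entry point / argument slots  */
        };

        static void copy_thread_sketch(struct regs_sketch *child,
                                       const struct regs_sketch *parent,
                                       unsigned long fn, unsigned long arg,
                                       int is_kernel_thread)
        {
            *child = *parent;          /* no memset: TBR/PSR stay valid */
            if (is_kernel_thread) {
                child->gr8 = fn;       /* only GR8/GR9 are overridden   */
                child->gr9 = arg;
            }
        }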
    • FRV: Fix the preemption handling · 1ee6f566
      David Howells committed
      Fix the preemption handling in FRV code where the PREEMPT_ACTIVE value is
      incorrectly loaded into the threadinfo flags rather than the threadinfo
      preemption count.
      
      Unfortunately, the code cannot be simply converted to use
      preempt_schedule_irq() as is because FRV uses virtual interrupt disablement to
      cut down on the cost of actually disabling interrupts and thus
      local_irq_enable() doesn't actually enable interrupts.
      Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      1ee6f566
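      In C terms the bug looks roughly like this (illustrative types and
      value; the real code is in the FRV entry assembly):

        #define PREEMPT_ACTIVE 0x10000000      /* illustrative value  */

        struct thread_info_sketch {
            unsigned long flags;               /* TIF_* bits          */
            int preempt_count;                 /* 0 means preemptible */
        };

        static void enter_preemption(struct thread_info_sketch *ti)
        {
            ti->preempt_count = PREEMPT_ACTIVE;    /* the fix         */
            /* ti->flags = PREEMPT_ACTIVE;            the old bug     */
        }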
    • FRV: Don't objcopy the GNU build_id note · 5f0231d9
      David Howells committed
      Don't let objcopy transfer the GNU build_id note into the loadable
      image: the note is located at address 0, so the image ends up >3G in
      size.
      Signed-off-by: David Howells <dhowells@redhat.com>
      5f0231d9
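      One plausible form of such a fix, shown as a hedged example (the
      actual Makefile change may differ): strip the note section when
      producing the flat binary image, e.g.

        objcopy -O binary -R .note.gnu.build-id vmlinux Image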
    • FRV: Add missing linux/export.h #inclusions · a5788caa
      David Howells committed
      Add missing linux/export.h #inclusions to the FRV arch.
      Signed-off-by: David Howells <dhowells@redhat.com>
      a5788caa
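      For context, the kind of code that needs this include: any file
      using EXPORT_SYMBOL() must pull in linux/export.h directly since
      the 3.x header cleanups (the function below is hypothetical):

        #include <linux/export.h>

        int frv_helper(void)
        {
            return 0;
        }
        EXPORT_SYMBOL(frv_helper);  /* build breaks without the include */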
  4. 01 Nov 2012 (2 commits)
    • KVM: x86: fix vcpu->mmio_fragments overflow · 87da7e66
      Xiao Guangrong committed
      After commit b3356bf0 (KVM: emulator: optimize "rep ins" handling),
      pieces of I/O data can be collected and written to guest memory or
      MMIO together.
      
      Unfortunately, KVM splits the MMIO access into 8-byte pieces and
      stores them in vcpu->mmio_fragments. If the guest uses "rep ins" to
      move a large amount of data, vcpu->mmio_fragments can overflow.
      
      The bug can be exposed by isapc (-M isapc):
      
      [23154.818733] general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC
      [ ......]
      [23154.858083] Call Trace:
      [23154.859874]  [<ffffffffa04f0e17>] kvm_get_cr8+0x1d/0x28 [kvm]
      [23154.861677]  [<ffffffffa04fa6d4>] kvm_arch_vcpu_ioctl_run+0xcda/0xe45 [kvm]
      [23154.863604]  [<ffffffffa04f5a1a>] ? kvm_arch_vcpu_load+0x17b/0x180 [kvm]
      
      Instead, we can use a single mmio_fragment to store a large MMIO
      access and split it only when passing the MMIO exit info to
      userspace. After that, two entries are enough to describe an access
      that crosses MMIO pages (see the sketch below).
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      87da7e66
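      A sketch of the splitting idea with a hypothetical fragment struct
      (the kernel's is struct kvm_mmio_fragment; field names here are
      illustrative):

        struct mmio_fragment_sketch {
            unsigned long gpa;         /* guest physical address      */
            void *data;
            unsigned int len;          /* may be far larger than 8    */
        };

        /* Hand userspace at most 8 bytes per MMIO exit, advancing one
         * large fragment instead of pre-splitting it in the kernel. */
        static unsigned int next_mmio_chunk(struct mmio_fragment_sketch *f,
                                            unsigned long *gpa, void **data)
        {
            unsigned int n = f->len < 8 ? f->len : 8;

            *gpa  = f->gpa;
            *data = f->data;
            f->gpa  += n;
            f->data  = (char *)f->data + n;
            f->len  -= n;
            return n;                  /* 0: fragment fully consumed  */
        }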
    • xen/mmu: Use Xen specific TLB flush instead of the generic one. · 95a7d768
      Konrad Rzeszutek Wilk committed
      As Mukesh explained, MMUEXT_TLB_FLUSH_ALL lets the hypervisor do a
      TLB flush on all active vCPUs. If we instead use the generic path
      (which ends up in xen_flush_tlb), we make the MMUEXT_TLB_FLUSH_LOCAL
      hypercall, but before that hypercall the kernel IPIs all of the
      vCPUs, even those that are asleep from the hypervisor's perspective.
      The end result is that we needlessly wake them up and have each do a
      local TLB flush, when we could just let the hypervisor do it
      correctly (see the sketch below).
      
      This patch gives around a 50% speed improvement when migrating idle
      guests from one host to another.
      
      Oracle-bug: 14630170
      
      Cc: stable@vger.kernel.org
      Tested-by: Jingjie Jiang <jingjie.jiang@oracle.com>
      Suggested-by: Mukesh Rathor <mukesh.rathor@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      95a7d768
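      A sketch of the change as described, using the Xen multicall
      helpers; it approximates the actual patch rather than quoting it:

        static void xen_flush_tlb_all(void)
        {
            struct mmuext_op *op;
            struct multicall_space mcs;

            preempt_disable();

            mcs = xen_mc_entry(sizeof(*op));
            op = mcs.args;
            op->cmd = MMUEXT_TLB_FLUSH_ALL;        /* flush all vCPUs */
            MULTI_mmuext_op(mcs.mc, op, 1, NULL);  /* no guest IPIs   */

            xen_mc_issue(PARAVIRT_LAZY_MMU);

            preempt_enable();
        }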
  5. 30 Oct 2012 (2 commits)
  6. 27 Oct 2012 (4 commits)
  7. 26 Oct 2012 (9 commits)
  8. 25 Oct 2012 (14 commits)