1. 11 December 2012 (6 commits)
    • mm: numa: Add fault driven placement and migration · cbee9f88
      Peter Zijlstra authored
      NOTE: This patch is based on "sched, numa, mm: Add fault driven
      	placement and migration policy" but, as it throws away all the
      	policy to leave just a basic foundation, I had to drop the
      	signed-offs-by.
      
      This patch creates a bare-bones method for setting PTEs pte_numa in the
      context of the scheduler so that, when they fault later, the faulting
      pages can be placed on the node the CPU is running on.  In itself this
      does nothing useful, but any placement policy will fundamentally depend
      on receiving such hints from fault context and doing something
      intelligent about it.
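      
      As a rough illustration of the mechanism (a minimal sketch only; helper
      names such as change_prot_numa() follow the kernel code of that era,
      but the details here are assumptions, not the exact upstream code):
      
          /* Periodically mark a task's pages pte_numa; the later NUMA
           * hinting faults reveal which node actually touches them. */
          static void task_numa_work_sketch(struct task_struct *p)
          {
                  struct mm_struct *mm = p->mm;
                  struct vm_area_struct *vma;
      
                  down_read(&mm->mmap_sem);
                  for (vma = mm->mmap; vma; vma = vma->vm_next) {
                          if (!vma_migratable(vma))
                                  continue;
                          /* atomically clear present, set the NUMA bit */
                          change_prot_numa(vma, vma->vm_start, vma->vm_end);
                  }
                  up_read(&mm->mmap_sem);
          }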
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: Rik van Riel <riel@redhat.com>
      cbee9f88
    • mm: numa: pte_numa() and pmd_numa() · be3a7284
      Andrea Arcangeli authored
      Implement pte_numa and pmd_numa.
      
      We must atomically set the numa bit and clear the present bit to
      define a pte_numa or pmd_numa.
      
      Once a pte or pmd has been set as pte_numa or pmd_numa, the next time
      a thread touches a virtual address in the corresponding virtual range,
      a NUMA hinting page fault will trigger. The NUMA hinting page fault
      will clear the NUMA bit and set the present bit again to resolve the
      page fault.
      
      The expectation is that a NUMA hinting page fault is used as part
      of a placement policy that decides if a page should remain on the
      current node or migrated to a different node.
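      
      A minimal sketch of the test helper implied by the description above
      (pmd_numa() is analogous; treat this as illustrative):
      
          /* a NUMA hinting pte has the NUMA bit set and the present
           * bit cleared, both changed atomically */
          static inline int pte_numa(pte_t pte)
          {
                  return (pte_flags(pte) & (_PAGE_NUMA | _PAGE_PRESENT))
                          == _PAGE_NUMA;
          }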
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      be3a7284
    • mm: numa: define _PAGE_NUMA · dbe4d203
      Andrea Arcangeli authored
      The objective of _PAGE_NUMA is to be able to trigger NUMA hinting page
      faults to identify the per-NUMA-node working set of a thread at
      runtime.
      
      Arming the NUMA hinting page fault mechanism works similarly to
      setting up an mprotect(PROT_NONE) virtual range: the present bit is
      cleared at the same time that _PAGE_NUMA is set, so when the fault
      triggers we can identify it as a NUMA hinting page fault.
      
      _PAGE_NUMA on x86 shares the same bit number as _PAGE_PROTNONE (but it
      could also use a different bitflag; it is up to the architecture to
      decide).
      
      It would be confusing to call NUMA hinting page faults "do_prot_none
      faults": they are different events, and _PAGE_NUMA does not alter the
      semantics of mprotect(PROT_NONE) in any way.
      
      Sharing the same bitflag with _PAGE_PROTNONE in fact complicates
      things: it requires us to ensure that the code paths executed for
      _PAGE_PROTNONE remain mutually exclusive with the code paths executed
      for _PAGE_NUMA at all times, to keep _PAGE_NUMA and _PAGE_PROTNONE
      from stepping on each other's toes.
      
      Because we want to be able to set this bitflag in any established pte
      or pmd (while clearing the present bit at the same time) without
      losing information, this bitflag must never be set when the pte and
      pmd are present; therefore the bitflag picked for _PAGE_NUMA must not
      be used by the swap entry format.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      dbe4d203
    • x86/mm: Introduce pte_accessible() · 2c3cf556
      Rik van Riel authored
      We need pte_present to return true for _PAGE_PROTNONE pages, to indicate that
      the pte is associated with a page.
      
      However, for TLB flushing purposes, we would like to know whether the pte
      points to an actually accessible page.  This allows us to skip remote TLB
      flushes for pages that are not actually accessible.
      
      Fill in this method for x86 and provide a safe (but slower) method
      on other architectures.
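      
      A sketch consistent with this description (x86 checks the present bit;
      the generic fallback conservatively never skips a flush; details are
      assumptions):
      
          /* x86: only a really present pte can be cached in the TLB */
          #define pte_accessible pte_accessible
          static inline int pte_accessible(pte_t a)
          {
                  return pte_flags(a) & _PAGE_PRESENT;
          }
      
          /* other architectures: safe (but slower) default */
          #ifndef pte_accessible
          # define pte_accessible(pte)    ((void)(pte), 1)
          #endif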
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Fixed-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/n/tip-66p11te4uj23gevgh4j987ip@git.kernel.org
      [ Added Linus's review fixes. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2c3cf556
    • x86: mm: drop TLB flush from ptep_set_access_flags · e4a1cc56
      Rik van Riel authored
      Intel has an architectural guarantee that the TLB entry causing
      a page fault gets invalidated automatically. This means
      we should be able to drop the local TLB invalidation.
      
      Because of the way other areas of the page fault code work,
      chances are good that all x86 CPUs do this.  However, if
      someone somewhere has an x86 CPU that does not invalidate
      the TLB entry causing a page fault, this one-liner should
      be easy to revert.
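      
      A minimal sketch of the resulting x86 helper (illustrative; treat the
      exact body as an assumption about the upstream code):
      
          int ptep_set_access_flags(struct vm_area_struct *vma,
                                    unsigned long address, pte_t *ptep,
                                    pte_t entry, int dirty)
          {
                  int changed = !pte_same(*ptep, entry);
      
                  if (changed && dirty) {
                          *ptep = entry;
                          pte_update_defer(vma->vm_mm, address, ptep);
                          /* no TLB flush: the CPU invalidates the
                           * faulting entry itself */
                  }
                  return changed;
          }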
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Linus Torvalds <torvalds@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      e4a1cc56
    • x86: mm: only do a local tlb flush in ptep_set_access_flags() · 0f9a921c
      Rik van Riel authored
      The function ptep_set_access_flags() is only ever invoked to set access
      flags or add write permission on a PTE.  The write bit is only ever set
      together with the dirty bit.
      
      Because we only ever upgrade a PTE, it is safe to skip flushing entries on
      remote TLBs. The worst that can happen is a spurious page fault on other
      CPUs, which would flush that TLB entry.
      
      Lazily letting another CPU incur a spurious page fault occasionally is
      (much!) cheaper than aggressively flushing everybody else's TLB.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      0f9a921c
  2. 17 November 2012 (3 commits)
    • revert "mm: fix-up zone present pages" · 5576646f
      Andrew Morton authored
      Revert commit 7f1290f2 ("mm: fix-up zone present pages")
      
      That patch tried to fix an issue in calculating zone->present_pages,
      but it caused a regression on 32-bit systems with HIGHMEM.  With that
      change, reset_zone_present_pages() resets all zone->present_pages to
      zero, and fixup_zone_present_pages() is called to recalculate
      zone->present_pages when the boot allocator frees core memory pages
      into the buddy allocator.  Because highmem pages are not freed by the
      bootmem allocator, all highmem zones' present_pages become zero.
      
      Various options for improving the situation are being discussed, but
      for now let's return to the 3.6 code.
      
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Cc: Petr Tesarik <ptesarik@suse.cz>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: David Rientjes <rientjes@google.com>
      Tested-by: Chris Clayton <chris2553@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5576646f
    • mips, arc: fix build failure · 18f69427
      David Rientjes authored
      When building mips defconfig with a cross-compiler (while fixing
      another issue), the following build error occurred:
      
        arch/mips/fw/arc/misc.c: In function 'ArcHalt':
        arch/mips/fw/arc/misc.c:25:2: error: implicit declaration of function 'local_irq_disable'
      
      Fix it up by including irqflags.h.
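      
      The fix amounts to one include line (a sketch of the obvious change):
      
          #include <linux/irqflags.h>     /* provides local_irq_disable() */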
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      18f69427
    • KVM: x86: Fix invalid secondary exec controls in vmx_cpuid_update() · 29282fde
      Takashi Iwai authored
      Commit ad756a16 ("KVM: VMX: Implement PCID/INVPCID for guests with
      EPT") introduced an unconditional access to SECONDARY_VM_EXEC_CONTROL,
      which triggers kernel warnings like the one below on old CPUs:
      
          vmwrite error: reg 401e value a0568000 (err 12)
          Pid: 13649, comm: qemu-kvm Not tainted 3.7.0-rc4-test2+ #154
          Call Trace:
           [<ffffffffa0558d86>] vmwrite_error+0x27/0x29 [kvm_intel]
           [<ffffffffa054e8cb>] vmcs_writel+0x1b/0x20 [kvm_intel]
           [<ffffffffa054f114>] vmx_cpuid_update+0x74/0x170 [kvm_intel]
           [<ffffffffa03629b6>] kvm_vcpu_ioctl_set_cpuid2+0x76/0x90 [kvm]
           [<ffffffffa0341c67>] kvm_arch_vcpu_ioctl+0xc37/0xed0 [kvm]
           [<ffffffff81143f7c>] ? __vunmap+0x9c/0x110
           [<ffffffffa0551489>] ? vmx_vcpu_load+0x39/0x1a0 [kvm_intel]
           [<ffffffffa0340ee2>] ? kvm_arch_vcpu_load+0x52/0x1a0 [kvm]
           [<ffffffffa032dcd4>] ? vcpu_load+0x74/0xd0 [kvm]
           [<ffffffffa032deb0>] kvm_vcpu_ioctl+0x110/0x5e0 [kvm]
           [<ffffffffa032e93d>] ? kvm_dev_ioctl+0x4d/0x4a0 [kvm]
           [<ffffffff8117dc6f>] do_vfs_ioctl+0x8f/0x530
           [<ffffffff81139d76>] ? remove_vma+0x56/0x60
           [<ffffffff8113b708>] ? do_munmap+0x328/0x400
           [<ffffffff81187c8c>] ? fget_light+0x4c/0x100
           [<ffffffff8117e1a1>] sys_ioctl+0x91/0xb0
           [<ffffffff815a942d>] system_call_fastpath+0x1a/0x1f
      
      This patch adds a check for the availability of secondary exec
      control to avoid these warnings.
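      
      A sketch of the guard (cpu_has_secondary_exec_ctrls() is a real helper
      in vmx.c of that era; the body here is illustrative):
      
          /* only touch SECONDARY_VM_EXEC_CONTROL when the CPU actually
           * supports secondary execution controls */
          if (cpu_has_secondary_exec_ctrls()) {
                  u32 exec_control =
                          vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
                  /* ... adjust SECONDARY_EXEC_ENABLE_INVPCID ... */
                  vmcs_write32(SECONDARY_VM_EXEC_CONTROL, exec_control);
          }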
      
      Cc: <stable@vger.kernel.org> [v3.6+]
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      29282fde
  3. 16 November 2012 (5 commits)
  4. 13 November 2012 (6 commits)
    • MIPS: Malta: Fix interrupt number of CBUS UART. · 225ae5fd
      Ralf Baechle authored
      The CBUS UART's interrupt number was wrong, conflicting with the
      interrupt tied to the Intel PIIX4.  Since the PIIX4's interrupt is
      registered before that of the CBUS UART, which is not used on most
      systems, this went unnoticed.
      
      Attempts to open the ttyS2 CBUS UART would result in:
      
      genirq: Flags mismatch irq 18. 00000000 (serial) vs. 00010000 (XT-PIC cascade)
      serial_link_irq_chain: request failed: -16 for irq: 18
      
      QEMU was written to match the kernel, so it will need to be fixed as
      well.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      225ae5fd
    • s390/mm: have 16 byte aligned struct pages · 4bffbb34
      Heiko Carstens authored
      Select HAVE_ALIGNED_STRUCT_PAGE on s390, so that the SLUB allocator can
      make use of compare-and-swap double for lockless updates. This increases
      the size of struct page to 64 bytes (instead of 56 bytes); however, the
      performance gain justifies the increased size:
      
      - now exactly four struct pages fit into a single cache line; the
        case where accessing a struct page causes two cache line loads
        no longer exists.
      - calculating the offset of a struct page within the memmap array
        is only a simple shift instead of a more expensive multiplication.
      
      A "hackbench 200 process 200" run on a 32 cpu system did show an 8% runtime
      improvement.
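      
      To illustrate the arithmetic (a sketch; assumes the 256-byte s390
      cache line size):
      
          /* offset = pfn * sizeof(struct page)
           *        = pfn * 64  ->  pfn << 6   (a cheap shift)
           * versus pfn * 56                   (a multiplication),
           * and 256 / 64 = 4 struct pages per cache line, so no
           * struct page ever straddles two lines. */
          struct page *page = mem_map + pfn;  /* indexing is a shift */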
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      4bffbb34
    • s390/gup: fix access_ok() usage in __get_user_pages_fast() · 516bad44
      Heiko Carstens authored
      access_ok() always returns "true" on s390, so all access_ok()
      invocations are rather pointless.
      However, when walking page tables we need to make sure that everything
      is within the bounds of the ASCE limit of the task's address space.
      So remove the access_ok() call and add the same check we have in
      get_user_pages_fast().
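      
      The added bounds check looks roughly like this (a sketch mirroring the
      check in get_user_pages_fast()):
      
          start &= PAGE_MASK;
          len = (unsigned long) nr_pages << PAGE_SHIFT;
          end = start + len;
          if (end < start || end > TASK_SIZE)
                  return 0;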
      Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      516bad44
    • s390/gup: add missing TASK_SIZE check to get_user_pages_fast() · d55c4c61
      Heiko Carstens authored
      When walking page tables we need to make sure that everything is
      within the bounds of the ASCE limit of the task's address space.
      Otherwise we might calculate e.g. a pud pointer which is not within
      a pud and dereference it.
      So check against TASK_SIZE (which is the ASCE limit) before walking
      page tables.
      Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      d55c4c61
    • KVM: x86: invalid opcode oops on SET_SREGS with OSXSAVE bit set (CVE-2012-4461) · 6d1068b3
      Petr Matousek authored
      On hosts without XSAVE support, an unprivileged local user can trigger
      an oops similar to the one below by setting the X86_CR4_OSXSAVE bit in
      the guest cr4 register using the KVM_SET_SREGS ioctl and later issuing
      the KVM_RUN ioctl.
      
      invalid opcode: 0000 [#2] SMP
      Modules linked in: tun ip6table_filter ip6_tables ebtable_nat ebtables
      ...
      Pid: 24935, comm: zoog_kvm_monito Tainted: G      D      3.2.0-3-686-pae
      EIP: 0060:[<f8b9550c>] EFLAGS: 00210246 CPU: 0
      EIP is at kvm_arch_vcpu_ioctl_run+0x92a/0xd13 [kvm]
      EAX: 00000001 EBX: 000f387e ECX: 00000000 EDX: 00000000
      ESI: 00000000 EDI: 00000000 EBP: ef5a0060 ESP: d7c63e70
       DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
      Process zoog_kvm_monito (pid: 24935, ti=d7c62000 task=ed84a0c0
      task.ti=d7c62000)
      Stack:
       00000001 f70a1200 f8b940a9 ef5a0060 00000000 00200202 f8769009 00000000
       ef5a0060 000f387e eda5c020 8722f9c8 00015bae 00000000 ed84a0c0 ed84a0c0
       c12bf02d 0000ae80 ef7f8740 fffffffb f359b740 ef5a0060 f8b85dc1 0000ae80
      Call Trace:
       [<f8b940a9>] ? kvm_arch_vcpu_ioctl_set_sregs+0x2fe/0x308 [kvm]
      ...
       [<c12bfb44>] ? syscall_call+0x7/0xb
      Code: 89 e8 e8 14 ee ff ff ba 00 00 04 00 89 e8 e8 98 48 ff ff 85 c0 74
      1e 83 7d 48 00 75 18 8b 85 08 07 00 00 31 c9 8b 95 0c 07 00 00 <0f> 01
      d1 c7 45 48 01 00 00 00 c7 45 1c 01 00 00 00 0f ae f0 89
      EIP: [<f8b9550c>] kvm_arch_vcpu_ioctl_run+0x92a/0xd13 [kvm] SS:ESP
      0068:d7c63e70
      
      QEMU first retrieves the supported features via KVM_GET_SUPPORTED_CPUID
      and sets them later. So the guest's X86_FEATURE_XSAVE should be masked
      out on hosts without X86_FEATURE_XSAVE, making kvm_set_cr4 with
      X86_CR4_OSXSAVE fail. Userspaces that allow specifying a guest cpuid
      with X86_FEATURE_XSAVE even on hosts that do not support it might be
      susceptible to this attack from inside the guest as well.
      
      Allow setting the X86_CR4_OSXSAVE bit only if the host has XSAVE
      support.
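      
      A sketch of the check (helper names follow the KVM code of that era;
      treat the details as illustrative):
      
          /* kvm_set_cr4() rejects X86_CR4_OSXSAVE when this is false */
          static bool guest_cpuid_has_xsave(struct kvm_vcpu *vcpu)
          {
                  struct kvm_cpuid_entry2 *best;
      
                  if (!static_cpu_has(X86_FEATURE_XSAVE))
                          return false;   /* host lacks XSAVE */
                  best = kvm_find_cpuid_entry(vcpu, 1, 0);
                  return best && (best->ecx & bit(X86_FEATURE_XSAVE));
          }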
      Signed-off-by: Petr Matousek <pmatouse@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      6d1068b3
    • ARM: boot: Fix usage of kecho · 2d4d07b9
      Fabio Estevam authored
      Since commit edc88ceb ("ARM: be really quiet when building with
      'make -s'"), the following output is generated when building a kernel
      for ARM:
      
      echo '  Kernel: arch/arm/boot/Image is ready'
        Kernel: arch/arm/boot/Image is ready
        Building modules, stage 2.
      echo '  Kernel: arch/arm/boot/zImage is ready'
        Kernel: arch/arm/boot/zImage is ready
      
      As per Documentation/kbuild/makefiles.txt, the correct way of using
      kecho is '@$(kecho)'.
      
      Make this change so that the unwanted 'echo' lines are no longer
      displayed.
      Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      2d4d07b9
  5. 12 November 2012 (2 commits)
    • s390/topology: fix core id vs physical package id mix-up · 658e5ce7
      Heiko Carstens authored
      The current topology code confuses core id with physical package id:
      /sys/devices/system/cpu/cpuX/topology/core_id displays the
      physical_package_id (aka socket id) instead of the core id, while the
      physical_package_id sysfs attribute always displays "-1" instead of
      the socket id.
      
      Fix this mix-up with a small patch which defines and initializes
      topology_physical_package_id correctly and fixes the broken
      core id handling.
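      
      A sketch of the corrected wiring (the field names here are
      hypothetical, chosen only to illustrate the two attributes):
      
          /* each sysfs attribute reads the matching topology field */
          #define topology_physical_package_id(cpu) \
                  (cpu_topology[cpu].socket_id)   /* socket id */
          #define topology_core_id(cpu) \
                  (cpu_topology[cpu].core_id)     /* real core id */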
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      658e5ce7
    • s390/signal: set correct address space control · fa968ee2
      Martin Schwidefsky authored
      If user space is running in primary mode it can switch to secondary or
      access register mode; this is used, for example, in the clock_gettime
      code of the vdso. If a signal is delivered to the user space process
      while it has been running in access register mode, the signal handler
      is executed in access register mode as well, which will result in a
      crash most of the time.
      
      Set the address space control bits in the PSW to the default for the
      execution of the signal handler, and make sure that the previous
      address space control is restored on signal return. Take care that
      user space cannot switch to the kernel address space by modifying the
      registers in the signal frame.
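      
      The core of the fix, sketched (PSW_MASK_ASC and PSW_USER_BITS are real
      s390 constants, but treat the exact expression as an assumption):
      
          /* run the handler with the default address space control,
           * keeping the old ASC bits for restoration on sigreturn */
          regs->psw.mask = (regs->psw.mask & ~PSW_MASK_ASC) |
                           (PSW_USER_BITS & PSW_MASK_ASC);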
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      fa968ee2
  6. 10 November 2012 (4 commits)
  7. 09 November 2012 (14 commits)