1. 07 Nov 2015, 1 commit
  2. 06 Nov 2015, 6 commits
    • mm: mlock: add mlock flags to enable VM_LOCKONFAULT usage · b0f205c2
      Eric B Munson committed
      The previous patch introduced a flag that specified pages in a VMA should
      be placed on the unevictable LRU, but they should not be made present when
      the area is created.  This patch adds the ability to set this state via
      the new mlock system calls.
      
      We add MLOCK_ONFAULT for mlock2 and MCL_ONFAULT for mlockall.
      MLOCK_ONFAULT will set the VM_LOCKONFAULT modifier for VM_LOCKED.
      MCL_ONFAULT should be used as a modifier to the two other mlockall flags.
      When used with MCL_CURRENT, all current mappings will be marked with
      VM_LOCKED | VM_LOCKONFAULT.  When used with MCL_FUTURE, the mm->def_flags
      will be marked with VM_LOCKED | VM_LOCKONFAULT.  When used with both
      MCL_CURRENT and MCL_FUTURE, all current mappings and mm->def_flags will be
      marked with VM_LOCKED | VM_LOCKONFAULT.
      
      Prior to this patch, mlockall() unconditionally cleared mm->def_flags any
      time it was called without MCL_FUTURE.  This behavior is maintained after
      adding MCL_ONFAULT.  If a call to mlockall(MCL_FUTURE) is followed by
      mlockall(MCL_CURRENT), the mm->def_flags will be cleared and new VMAs will
      be unlocked.  This remains true with or without MCL_ONFAULT in either
      mlockall() invocation.
      
      munlock() will unconditionally clear both vma flags.  munlockall()
      unconditionally clears both VMA flags on all VMAs and in the mm->def_flags
      field.  A hedged usage sketch of the new flags follows this entry.
      Signed-off-by: Eric B Munson <emunson@akamai.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b0f205c2
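      A hedged user-space sketch of the new flags described above (generic/x86
      flag values taken from the patch description; arch-specific headers may
      differ, and glibc had no mlock2() wrapper at the time, hence the raw
      syscall):

      	#include <stdio.h>
      	#include <string.h>
      	#include <sys/mman.h>
      	#include <sys/syscall.h>
      	#include <unistd.h>

      	#ifndef MLOCK_ONFAULT
      	#define MLOCK_ONFAULT	0x01	/* lock pages as they fault in */
      	#endif
      	#ifndef MCL_ONFAULT
      	#define MCL_ONFAULT	4	/* modifier for MCL_CURRENT/MCL_FUTURE */
      	#endif

      	int main(void)
      	{
      		size_t len = 16 * 4096;
      		void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
      				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      		if (buf == MAP_FAILED)
      			return 1;

      	#ifdef SYS_mlock2
      		/* Mark the range VM_LOCKED | VM_LOCKONFAULT: pages are only
      		 * locked once they are actually touched. */
      		if (syscall(SYS_mlock2, buf, len, MLOCK_ONFAULT))
      			perror("mlock2");
      	#endif

      		/* Same on-fault semantics for all current and future mappings. */
      		if (mlockall(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT))
      			perror("mlockall");

      		memset(buf, 0, len);	/* faulting the pages in locks them */
      		return 0;
      	}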
    • mm: mlock: add new mlock system call · a8ca5d0e
      Eric B Munson committed
      With the refactored mlock code, introduce a new system call for mlock.
      The new call will allow the user to specify which lock states are being
      added.  mlock2 is trivial at the moment, but a follow-on patch will add a
      new mlock state making it useful.  A sketch of a wrapper for the new call
      follows this entry.
      Signed-off-by: Eric B Munson <emunson@akamai.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a8ca5d0e
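      A minimal sketch of how a thin wrapper for the new call might look (the
      wrapper name is hypothetical; glibc did not ship an mlock2() wrapper when
      this went in, and flag semantics arrive with the follow-on patch):

      	#include <errno.h>
      	#include <sys/syscall.h>
      	#include <unistd.h>

      	/* With flags == 0 the new call behaves like plain mlock(); the
      	 * follow-on patch adds MLOCK_ONFAULT as the only defined flag. */
      	static int my_mlock2(const void *addr, size_t len, int flags)
      	{
      	#ifdef SYS_mlock2
      		return syscall(SYS_mlock2, addr, len, flags);
      	#else
      		errno = ENOSYS;		/* kernel/headers too old */
      		return -1;
      	#endif
      	}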
    • kasan: move KASAN_SANITIZE in arch/x86/boot/Makefile · c63f06dd
      Andrey Konovalov committed
      Move KASAN_SANITIZE in arch/x86/boot/Makefile above the comment
      related to SVGA_MODE, since the comment refers to 'the next line'.
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c63f06dd
    • kasan: update log messages · 25add7ec
      Andrey Konovalov committed
      We decided to use KASAN as the short name of the tool and
      KernelAddressSanitizer as the full one.  Update log messages
      accordingly.
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      25add7ec
    • arch/powerpc/mm/numa.c: do not allocate bootmem memory for non existing nodes · c118baf8
      Raghavendra K T committed
      With setup_nr_nodes(), we have already initialized node_possible_map, so
      it is safe to use for_each_node() here.
      
      There are many places in the kernel that use a hardcoded 'for' loop over
      nr_node_ids, because all other architectures have NUMA nodes populated
      serially.  That is presumably why the same approach was kept for
      powerpc.
      
      But since sparse NUMA node ids are possible on powerpc, we unnecessarily
      allocate memory for non-existent NUMA nodes.
      
      For example, on a system with NUMA nodes 0, 1, 16 and 17, nr_node_ids is
      18 and we also allocate memory for nodes 2-14.  With this patch we
      allocate memory only for existing NUMA nodes; a sketch of the loop change
      follows this entry.
      
      The patch is boot tested on a 4-node Tuleta, confirming with printks
      that it works as expected.
      Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Cc: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Cc: Grant Likely <grant.likely@linaro.org>
      Cc: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c118baf8
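      A hedged before/after sketch of the loop pattern described above (not the
      actual numa.c hunk; setup_one_node() stands in for the per-node bootmem
      allocation the loop body performs):

      	/* Before: visits every id below nr_node_ids, including the holes
      	 * left by a sparse numbering such as 0,1,16,17. */
      	static void __init alloc_node_data_old(void)
      	{
      		int nid;

      		for (nid = 0; nid < nr_node_ids; nid++)
      			setup_one_node(nid);
      	}

      	/* After: for_each_node() walks node_possible_map, so memory is
      	 * only allocated for nodes that actually exist. */
      	static void __init alloc_node_data_new(void)
      	{
      		int nid;

      		for_each_node(nid)
      			setup_one_node(nid);
      	}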
    • uaccess: reimplement probe_kernel_address() using probe_kernel_read() · 0ab32b6f
      Andrew Morton committed
      probe_kernel_address() is basically the same as the (later added)
      probe_kernel_read().
      
      The return value on EFAULT is a bit different: probe_kernel_address()
      returns number-of-bytes-not-copied whereas probe_kernel_read() returns
      -EFAULT.  All callers have been checked, none cared.
      
      probe_kernel_read() can be overridden by the architecture whereas
      probe_kernel_address() cannot.  parisc, blackfin and um do this, to insert
      additional checking.  Hence this patch possibly fixes obscure bugs,
      although there are only two probe_kernel_address() callsites outside
      arch/.
      
      My first attempt involved removing probe_kernel_address() entirely and
      converting all callsites to use probe_kernel_read() directly, but that got
      tiresome.
      
      This patch shrinks mm/slab_common.o by 218 bytes, for a single
      probe_kernel_address() callsite.  A sketch of the wrapper follows this
      entry.
      
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0ab32b6f
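      A minimal sketch of the reimplementation described above, assuming
      probe_kernel_address() keeps its (pointer, lvalue) macro interface:

      	/* Forwarding to probe_kernel_read() means any architecture override
      	 * of probe_kernel_read() is picked up for free; callers now see
      	 * -EFAULT on failure instead of a bytes-not-copied count. */
      	#define probe_kernel_address(addr, retval)			\
      		probe_kernel_read(&(retval), (addr), sizeof(retval))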
  3. 05 Nov 2015, 4 commits
    • KVM: VMX: Fix commit which broke PML · a3eaa864
      Kai Huang committed
      I found PML has been broken since the commit below:
      
      	commit feda805f
      	Author: Xiao Guangrong <guangrong.xiao@linux.intel.com>
      	Date:   Wed Sep 9 14:05:55 2015 +0800
      
      	KVM: VMX: unify SECONDARY_VM_EXEC_CONTROL update
      
      	Unify the update in vmx_cpuid_update()
      Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
      	[Rewrite to use vmcs_set_secondary_exec_control. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      
      The reason is that in the above commit vmx_cpuid_update() calls
      vmx_secondary_exec_control(), in which the SECONDARY_EXEC_ENABLE_PML bit
      is currently cleared unconditionally (even though PML is enabled when the
      vcpu is created).  Therefore, if vmx_cpuid_update() is called after the
      vcpu is created, PML will be disabled unexpectedly while the log-dirty
      code still thinks PML is in use.
      
      Fix this by clearing SECONDARY_EXEC_ENABLE_PML in
      vmx_secondary_exec_control() only when PML is not supported or not enabled
      (!enable_pml).  This is more reasonable, as PML is currently either always
      enabled or always disabled.  With this change, explicitly updating
      SECONDARY_EXEC_ENABLE_PML in vmx_enable{disable}_pml is no longer needed,
      so also rename vmx_enable{disable}_pml to vmx_create{destroy}_pml_buffer.
      A sketch of the conditional follows this entry.
      
      Fixes: feda805f
      Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
      [While at it, change a wrong ASSERT to an "if".  The condition can happen
       if creating the VCPU fails with ENOMEM. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a3eaa864
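      A hedged sketch of the conditional described above (not the exact upstream
      diff; exec_control and enable_pml stand in for the local variable and
      module parameter used by vmx.c):

      	static u32 adjust_secondary_exec_control(u32 exec_control, bool enable_pml)
      	{
      		/* Previously the PML bit was cleared here unconditionally;
      		 * now it is only masked out when PML cannot be used at all. */
      		if (!enable_pml)
      			exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
      		return exec_control;
      	}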
    • sparc64: Fix numa distance values · 52708d69
      Nitin Gupta committed
      Orabug: 21896119
      
      Use machine descriptor (MD) to get node latency
      values instead of just using default values.
      
      Testing:
      On a T5-8 system with:
       - total nodes = 8
       - self latencies = 0x26d18
       - latency to other nodes = 0x3a598
         => latency ratio = ~1.5
      
      output of numactl --hardware
      
       - before fix:
      
      node distances:
      node   0   1   2   3   4   5   6   7
        0:  10  20  20  20  20  20  20  20
        1:  20  10  20  20  20  20  20  20
        2:  20  20  10  20  20  20  20  20
        3:  20  20  20  10  20  20  20  20
        4:  20  20  20  20  10  20  20  20
        5:  20  20  20  20  20  10  20  20
        6:  20  20  20  20  20  20  10  20
        7:  20  20  20  20  20  20  20  10
      
       - after fix:
      
      node distances:
      node   0   1   2   3   4   5   6   7
        0:  10  15  15  15  15  15  15  15
        1:  15  10  15  15  15  15  15  15
        2:  15  15  10  15  15  15  15  15
        3:  15  15  15  10  15  15  15  15
        4:  15  15  15  15  10  15  15  15
        5:  15  15  15  15  15  10  15  15
        6:  15  15  15  15  15  15  10  15
        7:  15  15  15  15  15  15  15  10
      Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com>
      Reviewed-by: Chris Hyser <chris.hyser@oracle.com>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      52708d69
    • sparc64: Don't restrict fp regs for no-fault loads · cae9af6a
      Rob Gardner committed
      The function handle_ldf_stq() deals with no-fault ASI
      loads and stores, but restricts fp registers to quad
      word regs (i.e., %f0, %f4, etc).  This is valid for the
      STQ case, but unnecessarily restricts loads, which
      may be single precision, double, or quad. This results
      in SIGFPE being raised for this instruction when the
      source address is invalid:
      	ldda [%g1] ASI_PNF, %f2
      but not for this one:
      	ldda [%g1] ASI_PNF, %f4
      The validation check for quad register is moved to
      within the STQ block so that loads are not affected
      by the check.
      
      An additional problem is that the calculation for freg
      is incorrect when a single precision load is being
      handled. This causes %f1 to be seen as %f32 etc,
      and the incorrect register ends up being overwritten.
      This code sequence demonstrates the problem:
      	ldd [%g1], %f32		! g1 = valid address
      	lda [%i3] ASI_PNF, %f1  ! i3 = invalid address
      	std %f32, [%g1]
      This is corrected by basing the freg calculation on
      the load size.
      Signed-off-by: Rob Gardner <rob.gardner@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cae9af6a
    • iommu-common: Fix error code used in iommu_tbl_range_{alloc,free}(). · d618382b
      David S. Miller committed
      The value returned from iommu_tbl_range_alloc() (and the one passed
      in as a fourth argument to iommu_tbl_range_free) is not a DMA address,
      it is rather an index into the IOMMU page table.
      
      Therefore using DMA_ERROR_CODE is not appropriate.
      
      Use an error code define that matches the type, IOMMU_ERROR_CODE, and
      update all users of this interface.  A sketch of the caller-side check
      follows this entry.
      Reported-by: Andre Przywara <andre.przywara@arm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d618382b
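      A hedged caller-side sketch of the change described above (argument values
      are illustrative and the error path is simplified): the value returned is
      an IOMMU table index, so it is compared against IOMMU_ERROR_CODE rather
      than DMA_ERROR_CODE.

      	unsigned long entry;

      	entry = iommu_tbl_range_alloc(dev, tbl, npages, NULL, ~0UL, 0);
      	if (unlikely(entry == IOMMU_ERROR_CODE)) {
      		/* no free range of table slots; give up before programming
      		 * the IOMMU or building a DMA address from 'entry' */
      		return DMA_ERROR_CODE;
      	}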
  4. 04 Nov 2015, 11 commits
  5. 03 Nov 2015, 18 commits