1. 07 Sep 2017, 1 commit
  2. 19 Feb 2016, 1 commit
  3. 18 Feb 2016, 1 commit
    • x86/mm/pkeys: Add arch-specific VMA protection bits · 8f62c883
      Committed by Dave Hansen
      Lots of things seem to do:
      
              vma->vm_page_prot = vm_get_page_prot(flags);
      
      and the ptes get created right from things we pull out
      of ->vm_page_prot.  So it is very convenient if we can
      store the protection key in flags and vm_page_prot, just
      like the existing permission bits (_PAGE_RW/PRESENT).  It
      greatly reduces the amount of plumbing and arch-specific
      hacking we have to do in generic code.
      
      This also takes the new PROT_PKEY{0,1,2,3} flags and
      turns *those* in to VM_ flags for vma->vm_flags.
      
      The protection key values are stored in 4 places:
      	1. "prot" argument to system calls
      	2. vma->vm_flags, filled from the mmap "prot"
      	3. vma->vm_page_prot, filled from vma->vm_flags
      	4. the PTE itself.
      
      The pseudocode for these four steps is as follows:
      
      	mmap(PROT_PKEY*)
      	vma->vm_flags 	  = ... | arch_calc_vm_prot_bits(mmap_prot);
      	vma->vm_page_prot = ... | arch_vm_get_page_prot(vma->vm_flags);
      	pte = pfn | vma->vm_page_prot
      
      Note that this provides a new definition for x86:
      
      	arch_vm_get_page_prot()
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210210.FE483A42@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8f62c883
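      As a hedged illustration of steps 1-4 above, the sketch below uses the
      pkey_alloc(2)/pkey_mprotect(2) interface that eventually shipped in
      mainline (the PROT_PKEY* mmap flags from this series were later dropped
      in favor of it); it assumes Linux >= 4.9, pkeys-capable hardware, and
      glibc >= 2.27 for the syscall wrappers.

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
            }

            /* Allocate a protection key with writes disabled. */
            int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);
            if (pkey < 0) {
                perror("pkey_alloc");   /* no hardware/kernel support */
                return 1;
            }

            /* Tag the mapping with the key: the kernel stores it in
             * vma->vm_flags, propagates it to vma->vm_page_prot, and
             * from there into each PTE (steps 2-4 above). */
            if (pkey_mprotect(p, 4096, PROT_READ | PROT_WRITE, pkey) < 0) {
                perror("pkey_mprotect");
                return 1;
            }

            printf("read ok: %d\n", p[0]);  /* reads still permitted */
            /* p[0] = 1; */                 /* a write would now SIGSEGV */
            return 0;
        }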
  4. 15 Dec 2012, 1 commit
  5. 12 Dec 2012, 1 commit
    • mm: support more pagesizes for MAP_HUGETLB/SHM_HUGETLB · 42d7395f
      Committed by Andi Kleen
      There was some desire in large applications using MAP_HUGETLB or
      SHM_HUGETLB to use 1GB huge pages on some mappings, and stay with 2MB on
      others.  This is useful together with NUMA policy: use 2MB interleaving
      on some mappings, but 1GB on local mappings.
      
      This patch extends the IPC/SHM syscall interfaces slightly to allow
      specifying the page size.
      
      It borrows some upper bits in the existing flag arguments and allows
      encoding the log of the desired page size in addition to the *_HUGETLB
      flag.  When 0 is specified, the default size is used; this makes the
      change fully backward compatible.
      
      Extending the internal hugetlb code to handle this is straightforward.
      Instead of a single mount it just keeps an array of them and selects the
      right mount based on the specified page size.  When no page size is
      specified it uses the mount of the default page size.
      
      The change is not visible in /proc/mounts because internal mounts don't
      appear there.  It also has very little overhead: the additional mounts
      just consume a super block, but not more memory when not used.
      
      I also exported the new flags to the user headers (they were previously
      under __KERNEL__).  Right now the 2MB and 1GB symbols are defined only
      for x86 and a few other architectures, but the interface already works
      everywhere.  Only architectures that define multiple hugetlb sizes
      actually need it (currently x86, tile, and powerpc).  However, tile and
      powerpc have user-configurable hugetlb sizes, so it's not easy to add
      defines; a program on those architectures would need to query sysfs
      and use the appropriate log2.
      
      [akpm@linux-foundation.org: cleanups]
      [rientjes@google.com: fix build]
      [akpm@linux-foundation.org: checkpatch fixes]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      42d7395f
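      As a hedged usage sketch: the MAP_HUGE_* values below are the ones
      this patch exports, i.e. the log2 of the page size shifted by
      MAP_HUGE_SHIFT (26); running it requires 1GB hugepages to have been
      reserved, e.g. via hugepagesz=1G hugepages=1 on the kernel command
      line.

        #include <stdio.h>
        #include <sys/mman.h>

        #ifndef MAP_HUGE_SHIFT
        #define MAP_HUGE_SHIFT 26
        #endif
        #ifndef MAP_HUGE_1GB
        #define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT) /* log2(1GB) == 30 */
        #endif

        int main(void)
        {
            size_t len = 1UL << 30; /* one 1GB huge page */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
                           MAP_HUGE_1GB, -1, 0);
            if (p == MAP_FAILED) {
                perror("mmap(MAP_HUGETLB | MAP_HUGE_1GB)");
                return 1;
            }
            printf("1GB huge page mapped at %p\n", p);
            munmap(p, len);
            return 0;
        }

      Passing no MAP_HUGE_* bits (i.e. 0 in the size field) keeps the old
      behavior and maps the default hugepage size.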
  6. 19 Jun 2009, 2 commits
  7. 12 Jun 2009, 1 commit
  8. 23 Oct 2008, 2 commits
  9. 16 Aug 2008, 2 commits
    • x86: add MAP_STACK mmap flag · cd98a04a
      Committed by Ingo Molnar
      as per this discussion:
      
         http://lkml.org/lkml/2008/8/12/423
      
      Pardo reported that 64-bit threaded apps, if their stacks exceed the
      combined size of ~4GB, slow down drastically in pthread_create() - because
      glibc uses MAP_32BIT to allocate the stacks. The use of MAP_32BIT is
      a legacy hack - to speed up context switching on certain early model
      64-bit P4 CPUs.
      
      So introduce a new flag to be used by glibc instead, to not constrain
      64-bit apps like this.
      
      glibc can switch to this new flag straight away - it will be ignored
      by the kernel. If those old CPUs ever matter to anyone, support for
      it can be implemented.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Ulrich Drepper <drepper@gmail.com>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      cd98a04a
    • x86: add MAP_STACK mmap flag · 2fdc8690
      Committed by Ingo Molnar
      as per this discussion:
      
         http://lkml.org/lkml/2008/8/12/423
      
      Pardo reported that 64-bit threaded apps, if their stacks exceed the
      combined size of ~4GB, slow down drastically in pthread_create() - because
      glibc uses MAP_32BIT to allocate the stacks. The use of MAP_32BIT is
      a legacy hack - to speed up context switching on certain early model
      64-bit P4 CPUs.
      
      So introduce a new flag to be used by glibc instead, to not constrain
      64-bit apps like this.
      
      glibc can switch to this new flag straight away - it will be ignored
      by the kernel. If those old CPUs ever matter to anyone, support for
      it can be implemented.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Ulrich Drepper <drepper@gmail.com>
      2fdc8690
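      A hedged sketch of how a threading library can adopt the flag; since
      the kernel accepts and currently ignores MAP_STACK, passing it
      unconditionally is safe:

        #define _GNU_SOURCE
        #include <pthread.h>
        #include <stdio.h>
        #include <sys/mman.h>

        static void *worker(void *arg)
        {
            (void)arg;
            puts("running on a MAP_STACK-allocated stack");
            return NULL;
        }

        int main(void)
        {
            size_t stack_size = 8 * 1024 * 1024; /* typical 8MB stack */
            /* MAP_STACK instead of MAP_32BIT: no low-4GB constraint. */
            void *stack = mmap(NULL, stack_size, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK,
                               -1, 0);
            if (stack == MAP_FAILED) {
                perror("mmap");
                return 1;
            }

            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setstack(&attr, stack, stack_size);

            pthread_t t;
            if (pthread_create(&t, &attr, worker, NULL) != 0) {
                fprintf(stderr, "pthread_create failed\n");
                return 1;
            }
            pthread_join(t, NULL);
            return 0;
        }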
  10. 23 Jul 2008, 1 commit
    • x86: consolidate header guards · 77ef50a5
      Committed by Vegard Nossum
      This patch is the result of an automatic script that consolidates the
      format of all the headers in include/asm-x86/.
      
      The format:
      
      1. No leading underscore. Names with leading underscores are reserved.
      2. Pathname components are separated by two underscores. So we can
         distinguish between mm_types.h and mm/types.h.
      3. Everything except letters and numbers is turned into a single
         underscore.
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      77ef50a5
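      As an illustration (assuming, hypothetically, that this history is for
      include/asm-x86/mman.h), the three rules above would produce a guard
      like:

        /* include/asm-x86/mman.h: no leading underscore, '/' becomes
         * '__', and '-' and '.' each become a single '_'. */
        #ifndef ASM_X86__MMAN_H
        #define ASM_X86__MMAN_H

        /* ... header contents ... */

        #endif /* ASM_X86__MMAN_H */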
  11. 18 Oct 2007, 1 commit
  12. 11 Oct 2007, 1 commit