1. 19 Jun 2017 (1 commit)
    • mm: larger stack guard gap, between vmas · 1be7107f
      Authored by Hugh Dickins
      Stack guard page is a useful feature to reduce the risk of the stack
      smashing into a different mapping. We have been using a single-page gap,
      which is sufficient to prevent the stack being adjacent to a different
      mapping. But this seems to be insufficient in the light of stack usage
      in userspace: e.g. glibc uses alloca() allocations as large as 64kB in
      many commonly used functions. Others use constructs like
      gid_t buffer[NGROUPS_MAX], which is 256kB, or stack strings with
      MAX_ARG_STRLEN.
      
      This is especially dangerous for suid binaries combined with the default
      unlimited stack size rlimit, because such applications can be tricked
      into consuming a large portion of the stack, after which a single glibc
      call could jump over the guard page. These attacks are not theoretical,
      unfortunately.
      
      Make those attacks less probable by increasing the stack guard gap
      to 1MB (on systems with 4k pages; but make it depend on the page size,
      because systems with larger base pages might cap stack allocations in
      PAGE_SIZE units), which should cover larger alloca() and VLA stack
      allocations. It is obviously not a full fix because the problem is
      somewhat inherent, but it should reduce the attack space a lot.
      
      One could argue that the gap size should be configurable from
      userspace, but that can be done later if somebody finds that the new
      1MB is wrong for some special-case applications.  For now, add a
      kernel command line option (stack_guard_gap) to specify the stack gap
      size (in page units).
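
      For illustration (the values here are assumptions, not from the
      commit): on 4k pages the 1MB default corresponds to 256 pages, so a
      boot line overriding the gap to 4MB could look like:

        # kernel command line fragment (illustrative)
        ... stack_guard_gap=1024 ...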
      
      Implementation-wise, first delete all the old code for the stack guard
      page: although we could get away with accounting one extra page in a
      stack vma, accounting a larger gap can break userspace - case in point,
      a program run with "ulimit -S -v 20000" failed when the 1MB gap was
      counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
      and strict non-overcommit mode.
      
      Instead of keeping the gap inside the stack vma, maintain the stack
      guard gap as a gap between vmas: use vm_start_gap() in place of vm_start
      (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
      places which need to respect the gap - mainly arch_get_unmapped_area()
      and the vma tree's subtree_gap support for it.
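
      A rough sketch of the gap-aware helper, following the shape of the
      change described above (stack_guard_gap holds the gap size in bytes):

        static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
        {
                unsigned long vm_start = vma->vm_start;

                /* only stacks growing down need the guard gap below them */
                if (vma->vm_flags & VM_GROWSDOWN) {
                        vm_start -= stack_guard_gap;
                        if (vm_start > vma->vm_start)   /* guard against underflow */
                                vm_start = 0;
                }
                return vm_start;
        }
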
      Original-patch-by: Oleg Nesterov <oleg@redhat.com>
      Original-patch-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Helge Deller <deller@gmx.de> # parisc
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 14 Mar 2017 (1 commit)
  3. 13 Mar 2017 (2 commits)
    • x86/mm: Make mmap(MAP_32BIT) work correctly · 3e6ef9c8
      Authored by Dmitry Safonov
      mmap(MAP_32BIT) is broken due to the dependency on the TIF_ADDR32 thread
      flag.
      
      For 64-bit applications, MAP_32BIT will force legacy bottom-up
      allocations and the 1GB address space restriction even if the
      application issued a compat syscall, which should not be subject to
      these restrictions.
      
      For 32-bit applications which issue 64-bit syscalls, the newly
      introduced separation of the mmap base into 64-bit and compat bases
      changed the behaviour: now a 64-bit mapping is returned, but due to the
      TIF_ADDR32 dependency MAP_32BIT is ignored. Before the separation a
      32-bit mapping was returned, so the MAP_32BIT handling was irrelevant.
      
      Replace the check for TIF_ADDR32 with a check for the compat syscall.
      That solves both problems: the 64-bit task issuing a compat syscall and
      the 32-bit task issuing a 64-bit syscall.
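
      A minimal sketch of the idea, keyed on the syscall ABI instead of the
      thread flag (the 1GB window matches the traditional x86-64 MAP_32BIT
      convention; details of the real call site may differ):

        static void find_start_end(unsigned long flags, unsigned long *begin,
                                   unsigned long *end)
        {
                /* decide by the issued syscall ABI, not by TIF_ADDR32 */
                if (!in_compat_syscall() && (flags & MAP_32BIT)) {
                        /* legacy 1GB window for MAP_32BIT mappings */
                        *begin = 0x40000000;
                        *end   = 0x80000000;
                        return;
                }
                /* otherwise: the normal mmap base and TASK_SIZE limit */
        }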
      
      [ tglx: Massaged changelog ]
      Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
      Cc: 0x7f454c46@gmail.com
      Cc: linux-mm@kvack.org
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Link: http://lkml.kernel.org/r/20170306141721.9188-5-dsafonov@virtuozzo.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/mm: Introduce mmap_compat_base() for 32-bit mmap() · 1b028f78
      Authored by Dmitry Safonov
      mmap() uses a base address from which it starts to look for free space
      for an allocation.

      The base address is stored in mm->mmap_base and is calculated during
      exec(). It depends on the task size, the rlimit set for the stack, and
      ASLR randomization; both the task size and the number of random bits
      differ between 64-bit and 32-bit applications.
      
      Because the base address is fixed, an mmap() from a compat (32-bit)
      syscall issued by a 64-bit task will return an address based on the
      64-bit base address, which does not fit into the 32-bit address space
      (4GB). The returned pointer is truncated to 32 bits, which results in
      an invalid address.
      
      To solve this, store a separate compat address base plus a compat
      legacy address base in mm_struct. These bases are calculated at exec()
      time and can be used later to handle 32-bit compat mmap() calls issued
      by 64-bit applications.
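
      A sketch of the new fields (the names follow this patch series; the
      surrounding members and the config guard are abbreviated):

        struct mm_struct {
                /* ... */
                unsigned long mmap_base;        /* base of mmap area */
                unsigned long mmap_legacy_base; /* base for bottom-up allocations */
        #ifdef CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES
                /* bases for the compat (32-bit) syscall ABI, set at exec() time */
                unsigned long mmap_compat_base;
                unsigned long mmap_compat_legacy_base;
        #endif
                /* ... */
        };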
      
      As a consequence of this change, 32-bit applications issuing a 64-bit
      syscall (after doing a long jump) will now get a 64-bit mapping. Before
      this change, 32-bit applications always got a 32-bit mapping.
      
      [ tglx: Massaged changelog and added a comment ]
      Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
      Cc: 0x7f454c46@gmail.com
      Cc: linux-mm@kvack.org
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Link: http://lkml.kernel.org/r/20170306141721.9188-4-dsafonov@virtuozzo.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  4. 02 Mar 2017 (1 commit)
  5. 12 Oct 2016 (1 commit)
  6. 31 Mar 2015 (1 commit)
    • x86/mm: Improve AMD Bulldozer ASLR workaround · 4e26d11f
      Authored by Hector Marco-Gisbert
      The ASLR implementation needs to special-case AMD F15h processors by
      clearing out bits [14:12] of the virtual address in order to avoid I$
      cross-invalidations and the resulting performance penalty for certain
      workloads.
      For details, see:
      
        dfb09f9b ("x86, amd: Avoid cache aliasing penalties on AMD family 15h")
      
      This special case reduces the mmapped file's entropy by 3 bits.
      
      The following output is from a run on an AMD Opteron 62xx class CPU
      under x86_64 Linux 4.0.0:
      
        $ for i in `seq 1 10`; do cat /proc/self/maps | grep "r-xp.*libc" ; done
        b7588000-b7736000 r-xp 00000000 00:01 4924       /lib/i386-linux-gnu/libc.so.6
        b7570000-b771e000 r-xp 00000000 00:01 4924       /lib/i386-linux-gnu/libc.so.6
        b75d0000-b777e000 r-xp 00000000 00:01 4924       /lib/i386-linux-gnu/libc.so.6
        b75b0000-b775e000 r-xp 00000000 00:01 4924       /lib/i386-linux-gnu/libc.so.6
        b7578000-b7726000 r-xp 00000000 00:01 4924       /lib/i386-linux-gnu/libc.so.6
        ...
      
      Bits [14:12] are always 0, i.e. the address always ends in 0x8000 or
      0x0000.
      
      32-bit systems, as in the example above, are especially sensitive to
      this issue because the 32-bit randomness for the VA space is 8 bits
      (see mmap_rnd()). With the Bulldozer special case, this diminishes to
      only 32 different slots of mmap virtual addresses.
      
      This patch randomizes the three affected bits once per boot rather
      than setting them to zero. Since all the shared pages still have the
      same value in bits [14:12], there are no cache aliasing problems. The
      value is generated during system boot and is thus not known to a
      potential remote attacker. Therefore, the impact of the Bulldozer
      workaround is diminished and ASLR randomness increased.
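
      Roughly, the change amounts to the following sketch (modeled on the
      description above; va_align is the per-boot alignment descriptor whose
      mask covers bits [14:12]):

        /* at boot, on affected AMD F15h parts: pick the slice once */
        va_align.bits = get_random_int() & va_align.mask;

        /* at mmap time: align, then force the slice to the boot-time
         * value instead of leaving it cleared to zero */
        addr = (addr + va_align.mask) & ~va_align.mask;
        addr |= va_align.bits;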
      
      More details at:
      
        http://hmarco.org/bugs/AMD-Bulldozer-linux-ASLR-weakness-reducing-mmaped-files-by-eight.html
      
      Original white paper by AMD dealing with the issue:
      
        http://developer.amd.com/wordpress/media/2012/10/SharedL1InstructionCacheonAMD15hCPU.pdf
      Mentored-by: Ismael Ripoll <iripoll@disca.upv.es>
      Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jan-Simon <dl9pf@gmx.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-fsdevel@vger.kernel.org
      Link: http://lkml.kernel.org/r/1427456301-3764-1-git-send-email-hecmargi@upv.es
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 23 Aug 2013 (2 commits)
  8. 14 Aug 2013 (1 commit)
  9. 24 Jan 2013 (1 commit)
  10. 12 Dec 2012 (2 commits)
    • mm: fix cache coloring on x86_64 architecture · 7d025059
      Authored by Michel Lespinasse
      Fix the x86-64 cache alignment code to take pgoff into account.  Use the
      x86 and MIPS cache alignment code as the basis for a generic cache
      alignment function.
      
      The old x86 code would always align the mmap to aliasing boundaries,
      even if the program mmaps the file with a non-zero pgoff.

      If program A mmaps the file with pgoff 0 and program B mmaps it with
      pgoff 1, the old code would align both mmaps, resulting in misaligned
      pages:
      
        A:  0123
        B:  123
      
      After this patch, they are aligned so the pages line up:
      
        A: 0123
        B:  123
      
      Proposed by Rik van Riel.
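
      The generic helper achieves this by colouring addr minus the file
      offset rather than addr itself; a sketch in terms of the
      vm_unmapped_area_info interface from the same series:

        /* two mappings of one file avoid aliasing when
         * (addr - (pgoff << PAGE_SHIFT)) mod (align_mask + 1) agrees,
         * so that quantity is what gets aligned, not addr alone */
        info.align_mask   = get_align_mask();
        info.align_offset = pgoff << PAGE_SHIFT;
        addr = vm_unmapped_area(&info);
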
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() on x86_64 architecture · f9902472
      Authored by Michel Lespinasse
      Update the x86_64 arch_get_unmapped_area[_topdown] functions to make
      use of vm_unmapped_area() instead of implementing a brute-force search.
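
      For the bottom-up case the search collapses to filling in a request
      descriptor (a sketch; the field values are the obvious translation of
      the old begin/end/len parameters):

        struct vm_unmapped_area_info info;

        info.flags = 0;                 /* bottom-up search */
        info.length = len;
        info.low_limit = begin;
        info.high_limit = end;
        info.align_mask = 0;            /* cache colouring handled separately */
        info.align_offset = 0;
        return vm_unmapped_area(&info);
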
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 22 Mar 2012 (1 commit)
  12. 21 Feb 2012 (1 commit)
  13. 28 Sep 2011 (1 commit)
  14. 07 Aug 2011 (1 commit)
  15. 06 Aug 2011 (1 commit)
    • x86, amd: Avoid cache aliasing penalties on AMD family 15h · dfb09f9b
      Authored by Borislav Petkov
      This patch provides performance tuning for the "Bulldozer" CPU. With its
      shared instruction cache there is a chance of generating an excessive
      number of cache cross-invalidates when running specific workloads on the
      cores of a compute module.
      
      This excessive amount of cross-invalidations can be observed if cache
      lines backed by shared physical memory alias in bits [14:12] of their
      virtual addresses, as those bits are used for the index generation.
      
      This patch addresses the issue by clearing all the bits in the [14:12]
      slice of the file mapping's virtual address at generation time, thus
      forcing those bits to be the same for all mappings of a single shared
      library across processes and, in doing so, avoiding instruction cache
      aliases.
      
      It also adds the command line option "align_va_addr=(32|64|on|off)" with
      which virtual address alignment can be enabled for 32-bit or 64-bit x86
      individually, or both, or be completely disabled.
      
      This change leaves virtual region address allocation on other families
      and/or vendors unaffected.
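
      The core of the alignment is a round-up that zeroes the [14:12] slice
      of file-mapping addresses, in the spirit of this sketch (va_align.mask
      is 0x7000 on 4k pages; the exact guard conditions are simplified):

        if (filp && (va_align.flags & ALIGN_VA_64))
                /* round up to the next address with bits [14:12] == 0 */
                addr = (addr + va_align.mask) & ~va_align.mask;
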
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1312550110-24160-2-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  16. 13 Mar 2010 (1 commit)
    • improve sys_newuname() for compat architectures · e28cbf22
      Authored by Christoph Hellwig
      On an architecture that supports 32-bit compat we need to override the
      machine name reported in uname with the 32-bit value.  Instead of doing
      this separately in every architecture, introduce a COMPAT_UTS_MACHINE
      define in <asm/compat.h> and apply it directly in sys_newuname().
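
      The resulting generic code is essentially shaped like this sketch
      (helper name as in the patch; error paths trimmed):

        static int override_architecture(struct new_utsname __user *name)
        {
                if (personality(current->personality) != PER_LINUX32)
                        return 0;

                /* report the 32-bit machine string, e.g. "i686" on x86_64 */
                if (copy_to_user(name->machine, COMPAT_UTS_MACHINE,
                                 strlen(COMPAT_UTS_MACHINE) + 1))
                        return -EFAULT;
                return 0;
        }
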
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Andreas Schwab <schwab@linux-m68k.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 11 Dec 2009 (1 commit)
  18. 12 Aug 2009 (1 commit)
  19. 01 Aug 2008 (1 commit)
  20. 22 Jul 2008 (1 commit)
  21. 04 May 2008 (1 commit)
    • unified (weak) sys_pipe implementation · d35c7b0e
      Authored by Ulrich Drepper
      This replaces the duplicated arch-specific versions of "sys_pipe()" with
      one unified implementation.  This removes almost 250 lines of duplicated
      code.
      
      It's marked __weak, so that *if* an architecture wants to override the
      default implementation it can do so by simply providing its own
      replacement version, since many architectures use alternate calling
      conventions for the pipe() system call for legacy reasons (i.e.
      traditional UNIX implementations often return the two file descriptors
      in registers).
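
      The unified fallback is essentially the following sketch (do_pipe was
      the in-kernel helper of that era):

        asmlinkage long __weak sys_pipe(int __user *fildes)
        {
                int fd[2];
                int error;

                error = do_pipe(fd);
                if (!error) {
                        /* unlike the register-based legacy ABIs, copy the
                         * two descriptors back through the user pointer */
                        if (copy_to_user(fildes, fd, sizeof(fd)))
                                error = -EFAULT;
                }
                return error;
        }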
      
      I still haven't changed the cris version even though Linus says the BKL
      isn't needed.  The arch maintainer can easily do it if there are really
      no obstacles.
      Signed-off-by: Ulrich Drepper <drepper@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 30 Jan 2008 (2 commits)
    • x86: PIE executable randomization, checkpatch fixes · bb1ad820
      Authored by Andrew Morton
      #39: FILE: arch/ia64/ia32/binfmt_elf32.c:229:
      +elf32_map (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type, unsigned long unused)
      
      WARNING: no space between function name and open parenthesis '('
      #39: FILE: arch/ia64/ia32/binfmt_elf32.c:229:
      +elf32_map (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type, unsigned long unused)
      
      WARNING: line over 80 characters
      #67: FILE: arch/x86/kernel/sys_x86_64.c:80:
      +			new_begin = randomize_range(*begin, *begin + 0x02000000, 0);
      
      ERROR: use tabs not spaces
      #110: FILE: arch/x86/kernel/sys_x86_64.c:185:
      + ^I        mm->cached_hole_size = 0;$
      
      ERROR: use tabs not spaces
      #111: FILE: arch/x86/kernel/sys_x86_64.c:186:
      + ^I^Imm->free_area_cache = mm->mmap_base;$
      
      ERROR: use tabs not spaces
      #112: FILE: arch/x86/kernel/sys_x86_64.c:187:
      + ^I}$
      
      ERROR: use tabs not spaces
      #141: FILE: arch/x86/kernel/sys_x86_64.c:216:
      + ^I^I/* remember the largest hole we saw so far */$
      
      ERROR: use tabs not spaces
      #142: FILE: arch/x86/kernel/sys_x86_64.c:217:
      + ^I^Iif (addr + mm->cached_hole_size < vma->vm_start)$
      
      ERROR: use tabs not spaces
      #143: FILE: arch/x86/kernel/sys_x86_64.c:218:
      + ^I^I        mm->cached_hole_size = vma->vm_start - addr;$
      
      ERROR: use tabs not spaces
      #157: FILE: arch/x86/kernel/sys_x86_64.c:232:
      +  ^Imm->free_area_cache = TASK_UNMAPPED_BASE;$
      
      ERROR: need a space before the open parenthesis '('
      #291: FILE: arch/x86/mm/mmap_64.c:101:
      +	} else if(mmap_is_legacy()) {
      
      WARNING: braces {} are not necessary for single statement blocks
      #302: FILE: arch/x86/mm/mmap_64.c:112:
      +	if (current->flags & PF_RANDOMIZE) {
      +		mm->mmap_base += ((long)rnd) << PAGE_SHIFT;
      +	}
      
      WARNING: line over 80 characters
      #314: FILE: fs/binfmt_elf.c:48:
      +static unsigned long elf_map (struct file *, unsigned long, struct elf_phdr *, int, int, unsigned long);
      
      WARNING: no space between function name and open parenthesis '('
      #314: FILE: fs/binfmt_elf.c:48:
      +static unsigned long elf_map (struct file *, unsigned long, struct elf_phdr *, int, int, unsigned long);
      
      WARNING: line over 80 characters
      #429: FILE: fs/binfmt_elf.c:438:
      +					   eppnt, elf_prot, elf_type, total_size);
      
      ERROR: need space after that ',' (ctx:VxV)
      #480: FILE: fs/binfmt_elf.c:939:
      +				elf_prot, elf_flags,0);
       				                   ^
      
      total: 9 errors, 7 warnings, 461 lines checked
      Your patch has style problems, please review.  If any of these errors
      are false positives report them to the maintainer, see
      CHECKPATCH in MAINTAINERS.
      
      Please run checkpatch prior to sending patches
      
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Jakub Jelinek <jakub@redhat.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Roland McGrath <roland@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86: PIE executable randomization · cc503c1b
      Authored by Jiri Kosina
      Load the main executable of (specially compiled/linked -pie/-fpie)
      ET_DYN binaries onto a random address (in cases in which mmap() is
      allowed to perform a randomization).

      The code has been extracted from Ingo's exec-shield patch
      http://people.redhat.com/mingo/exec-shield/
      
      [akpm@linux-foundation.org: fix used-uninitialsied warning]
      [kamezawa.hiroyu@jp.fujitsu.com: fixed ia32 ELF on x86_64 handling]
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Jakub Jelinek <jakub@redhat.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  23. 14 Oct 2007 (1 commit)
    • Delete filenames in comments. · 835c34a1
      Authored by Dave Jones
      Since the x86 merge, lots of files that referenced their own filenames
      are no longer correct.  Rather than keep them up to date, just delete
      them, as they add no real value.
      
      Additionally:
      - fix up comment formatting in scx200_32.c
      - Remove a credit from myself in setup_64.c from a time when we had no SCM
      - remove longwinded history from tsc_32.c which can be figured out from
        git.
      Signed-off-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  24. 11 Oct 2007 (1 commit)
  25. 30 Jul 2007 (1 commit)
    • Remove fs.h from mm.h · 4e950f6f
      Authored by Alexey Dobriyan
      Remove fs.h from mm.h. For this:
       1) Uninline vma_wants_writenotify(). It's pretty huge anyway.
       2) Add back fs.h or less bloated headers (err.h) to files that need it.

      As a result, on an x86_64 allyesconfig, the number of files rebuilt due
      to fs.h dependencies drops from 3929 to 3444 (-12.3%).
      
      Cross-compile tested without regressions on my two usual configs and (sigh):
      
      alpha              arm-mx1ads        mips-bigsur          powerpc-ebony
      alpha-allnoconfig  arm-neponset      mips-capcella        powerpc-g5
      alpha-defconfig    arm-netwinder     mips-cobalt          powerpc-holly
      alpha-up           arm-netx          mips-db1000          powerpc-iseries
      arm                arm-ns9xxx        mips-db1100          powerpc-linkstation
      arm-assabet        arm-omap_h2_1610  mips-db1200          powerpc-lite5200
      arm-at91rm9200dk   arm-onearm        mips-db1500          powerpc-maple
      arm-at91rm9200ek   arm-picotux200    mips-db1550          powerpc-mpc7448_hpc2
      arm-at91sam9260ek  arm-pleb          mips-ddb5477         powerpc-mpc8272_ads
      arm-at91sam9261ek  arm-pnx4008       mips-decstation      powerpc-mpc8313_rdb
      arm-at91sam9263ek  arm-pxa255-idp    mips-e55             powerpc-mpc832x_mds
      arm-at91sam9rlek   arm-realview      mips-emma2rh         powerpc-mpc832x_rdb
      arm-ateb9200       arm-realview-smp  mips-excite          powerpc-mpc834x_itx
      arm-badge4         arm-rpc           mips-fulong          powerpc-mpc834x_itxgp
      arm-carmeva        arm-s3c2410       mips-ip22            powerpc-mpc834x_mds
      arm-cerfcube       arm-shannon       mips-ip27            powerpc-mpc836x_mds
      arm-clps7500       arm-shark         mips-ip32            powerpc-mpc8540_ads
      arm-collie         arm-simpad        mips-jazz            powerpc-mpc8544_ds
      arm-corgi          arm-spitz         mips-jmr3927         powerpc-mpc8560_ads
      arm-csb337         arm-trizeps4      mips-malta           powerpc-mpc8568mds
      arm-csb637         arm-versatile     mips-mipssim         powerpc-mpc85xx_cds
      arm-ebsa110        i386              mips-mpc30x          powerpc-mpc8641_hpcn
      arm-edb7211        i386-allnoconfig  mips-msp71xx         powerpc-mpc866_ads
      arm-em_x270        i386-defconfig    mips-ocelot          powerpc-mpc885_ads
      arm-ep93xx         i386-up           mips-pb1100          powerpc-pasemi
      arm-footbridge     ia64              mips-pb1500          powerpc-pmac32
      arm-fortunet       ia64-allnoconfig  mips-pb1550          powerpc-ppc64
      arm-h3600          ia64-bigsur       mips-pnx8550-jbs     powerpc-prpmc2800
      arm-h7201          ia64-defconfig    mips-pnx8550-stb810  powerpc-ps3
      arm-h7202          ia64-gensparse    mips-qemu            powerpc-pseries
      arm-hackkit        ia64-sim          mips-rbhma4200       powerpc-up
      arm-integrator     ia64-sn2          mips-rbhma4500       s390
      arm-iop13xx        ia64-tiger        mips-rm200           s390-allnoconfig
      arm-iop32x         ia64-up           mips-sb1250-swarm    s390-defconfig
      arm-iop33x         ia64-zx1          mips-sead            s390-up
      arm-ixp2000        m68k              mips-tb0219          sparc
      arm-ixp23xx        m68k-amiga        mips-tb0226          sparc-allnoconfig
      arm-ixp4xx         m68k-apollo       mips-tb0287          sparc-defconfig
      arm-jornada720     m68k-atari        mips-workpad         sparc-up
      arm-kafa           m68k-bvme6000     mips-wrppmc          sparc64
      arm-kb9202         m68k-hp300        mips-yosemite        sparc64-allnoconfig
      arm-ks8695         m68k-mac          parisc               sparc64-defconfig
      arm-lart           m68k-mvme147      parisc-allnoconfig   sparc64-up
      arm-lpd270         m68k-mvme16x      parisc-defconfig     um-x86_64
      arm-lpd7a400       m68k-q40          parisc-up            x86_64
      arm-lpd7a404       m68k-sun3         powerpc              x86_64-allnoconfig
      arm-lubbock        m68k-sun3x        powerpc-cell         x86_64-defconfig
      arm-lusl7200       mips              powerpc-celleb       x86_64-up
      arm-mainstone      mips-atlas        powerpc-chrp32
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  26. 09 May 2007 (1 commit)
  27. 08 May 2007 (1 commit)
  28. 02 Oct 2006 (1 commit)
  29. 15 Nov 2005 (1 commit)
  30. 22 Jun 2005 (2 commits)
    • [PATCH] Avoiding mmap fragmentation · 1363c3cd
      Authored by Wolfgang Wander
      Ingo recently introduced a great speedup for allocating new mmaps using
      the free_area_cache pointer, which boosts the specweb SSL benchmark by
      4-5% and causes huge performance increases in thread creation.
      
      The downside of this patch is that it does lead to fragmentation in the
      mmap-ed areas (visible via /proc/self/maps), such that some applications
      that work fine under 2.4 kernels quickly run out of memory on any 2.6
      kernel.
      
      The problem is twofold:
      
        1) the free_area_cache is used to continue a search for memory where
           the last search ended.  Before the change new areas were always
           searched from the base address on.
      
           So now new small areas are cluttering holes of all sizes
           throughout the whole mmap-able region whereas before small holes
           tended to close holes near the base leaving holes far from the base
           large and available for larger requests.
      
        2) the free_area_cache is also set to the location of the last
           munmap-ed area, so in scenarios where we allocate e.g. five
           regions of 1K each and then free regions 4, 2, 3 in this order,
           the next request for 1K will be placed in the position of the old
           region 3, whereas before we appended it to the still active
           region 1, placing it at the location of the old region 2.  Before
           we had one free region of 2K; now we only get two free regions of
           1K -> fragmentation.
      
      The patch addresses these issues by introducing yet another cache
      descriptor, cached_hole_size, that contains the largest known hole size
      below the current free_area_cache.  If a new request comes in, its size
      is compared against cached_hole_size; if the request can be filled with
      a hole below free_area_cache, the search is started from the base
      instead.
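
      In the arch_get_unmapped_area() implementations this becomes a check
      of roughly the following shape (a sketch; names follow the mm of that
      era):

        if (len > mm->cached_hole_size) {
                /* no known hole below the cache fits: continue the
                 * search from where the last one ended */
                start_addr = mm->free_area_cache;
        } else {
                /* a large-enough hole may exist lower down: restart
                 * from the base and re-learn the largest hole size */
                start_addr = TASK_UNMAPPED_BASE;
                mm->cached_hole_size = 0;
        }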
      
      The results look promising: whereas under 2.6.12-rc4 memory fragments
      quickly and my (earlier posted) leakme.c test program terminates after
      50000+ iterations with 96 distinct and fragmented maps in
      /proc/self/maps, thread creation still performs nicely (as expected):
      Ingo's test_str02 with 20000 threads requires 0.7s system time.
      
      Taking out Ingo's patch (un-patch available per request) by basically
      deleting all mentions of free_area_cache from the kernel and starting
      the search for new memory always at the respective bases, we observe:
      leakme terminates successfully with 11 distinct, hardly fragmented
      areas in /proc/self/maps, but thread creation is grindingly slow:
      30+s(!) system time for Ingo's test_str02 with 20000 threads.
      
      Now - drumroll ;-) the appended patch works fine with leakme: it ends with
      only 7 distinct areas in /proc/self/maps and also thread creation seems
      sufficiently fast with 0.71s for 20000 threads.
      Signed-off-by: Wolfgang Wander <wwc@rentec.com>
      Credit-to: "Richard Purdie" <rpurdie@rpsys.net>
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu> (partly)
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] x86_64: TASK_SIZE fixes for compatibility mode processes · 84929801
      Authored by Suresh Siddha
      The appended patch sets up the compatibility mode TASK_SIZE properly.
      This fixes at least three known bugs that can be encountered while
      running compatibility mode apps:
      a) A malicious 32-bit app can have an ELF section at 0xffffe000.  During
         exec of this app, we will have a memory leak as insert_vm_struct() is
         not checking the return value in syscall32_setup_pages() and thus not
         freeing the vma allocated for the vsyscall page.  And instead of the
         exec failing (as it has addresses > TASK_SIZE), we were previously
         allowing it to succeed.

      b) With a 32-bit app, hugetlb_get_unmapped_area/arch_get_unmapped_area
         may return addresses beyond 32 bits, ultimately causing corruption
         because of wrap-around and resulting in a SEGFAULT instead of
         returning ENOMEM.
      
      c) A 32-bit app doing the mmap below will now fail (see the sketch of
         the new limit after this item).
      
        mmap((void *)(0xFFFFE000UL), 0x10000UL, PROT_READ|PROT_WRITE,
      	MAP_FIXED|MAP_PRIVATE|MAP_ANON, 0, 0);
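
      A sketch of the personality-dependent limit, modeled on the x86_64
      definitions of that era (the exact constants are assumptions from
      contemporary headers):

        #define IA32_PAGE_OFFSET  ((current->personality & ADDR_LIMIT_3GB) ? \
                                   0xc0000000 : 0xFFFFe000)
        #define TASK_SIZE         (test_thread_flag(TIF_IA32) ? \
                                   IA32_PAGE_OFFSET : TASK_SIZE64)
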
      Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  31. 01 May 2005 (1 commit)
  32. 17 Apr 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!