1. 30 December 2013, 1 commit
  2. 10 July 2012, 2 commits
  3. 30 April 2012, 1 commit
  4. 19 December 2011, 1 commit
      powerpc: POWER7 optimised copy_to_user/copy_from_user using VMX · a66086b8
      Committed by Anton Blanchard
      Implement a POWER7 optimised copy_to_user/copy_from_user using VMX.
      For large aligned copies this new loop is over 10% faster, and for
      large unaligned copies it is over 200% faster.
      
      If we take a fault we fall back to the old version, this keeps
      things relatively simple and easy to verify.
      
      On POWER7 unaligned stores rarely slow down - they only flush when
      a store crosses a 4KB page boundary. Furthermore this flush is
      handled completely in hardware and should be 20-30 cycles.
      
      Unaligned loads on the other hand flush much more often - whenever
      crossing a 128 byte cache line, or a 32 byte sector if either sector
      is an L1 miss.
      
      Considering this information we really want to get the loads aligned
      and not worry about the alignment of the stores. Microbenchmarks
      confirm that this approach is much faster than the current unaligned
      copy loop that uses shifts and rotates to ensure both loads and
      stores are aligned.
      
      We also want to try and do the stores in cacheline aligned, cacheline
      sized chunks. If the store queue is unable to merge an entire
      cacheline of stores then the L2 cache will have to do a
      read/modify/write. Even worse, we will serialise this with the stores
      in the next iteration of the copy loop since both iterations hit
      the same cacheline.
      
      Based on this, the new loop does the following things:
      
      1 - 127 bytes
      Get the source 8 byte aligned and use 8 byte loads and stores. Pretty
      boring and similar to how the current loop works.
      
      128 - 4095 bytes
      Get the source 8 byte aligned and use 8 byte loads and stores,
      1 cacheline at a time. We aren't doing the stores in cacheline
      aligned chunks so we will potentially serialise once per cacheline.
      Even so it is much better than the loop we have today.
      
      4096 bytes and up
      If both source and destination have the same alignment get them both
      16 byte aligned, then get the destination cacheline aligned. Do
      cacheline sized loads and stores using VMX.
      
      If source and destination do not have the same alignment, we get the
      destination cacheline aligned, and use permute to do aligned loads.
      
      In both cases the VMX loop should be optimal - we always do aligned
      loads and stores and are always doing stores in cacheline aligned,
      cacheline sized chunks.
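      
      A compact C sketch of that size dispatch (the real code is hand-written
      ppc64 assembly; the memcpy() calls below are placeholders for the loops
      described above, and the function name is invented for illustration):
      
              #include <stddef.h>
              #include <string.h>
      
              static void copy_power7_sketch(void *to, const void *from, size_t n)
              {
                      if (n < 128) {
                              /* 1 - 127 bytes: get the source 8 byte aligned,
                               * then plain 8 byte loads and stores */
                              memcpy(to, from, n);
                      } else if (n < 4096) {
                              /* 128 - 4095 bytes: 8 byte loads and stores,
                               * one 128 byte cacheline per iteration */
                              memcpy(to, from, n);
                      } else {
                              /* 4096 bytes and up: get the destination
                               * cacheline aligned, then cacheline sized VMX
                               * loads and stores, using permute when source
                               * and destination alignment differ */
                              memcpy(to, from, n);
                      }
              }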
      
      To be able to use VMX we must be careful about interrupts and
      sleeping. We don't use the VMX loop when in an interrupt (which should
      be rare anyway), and we wrap the VMX loop in pagefault_disable()/pagefault_enable()
      and fall back to the existing copy_tofrom_user loop if we do need to
      sleep.
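      
      A kernel-side sketch of that guard, assuming the generic helpers
      in_interrupt(), preempt_disable()/preempt_enable(), pagefault_disable()/
      pagefault_enable() and enable_kernel_altivec(); the routine names
      __copy_tofrom_user_base() and __copy_tofrom_user_vmx() are illustrative,
      not necessarily what the patch uses:
      
              static unsigned long copy_tofrom_user_power7(void *to,
                              const void *from, unsigned long n)
              {
                      unsigned long left;
      
                      if (in_interrupt() || n < 4096)
                              return __copy_tofrom_user_base(to, from, n);
      
                      preempt_disable();
                      enable_kernel_altivec();  /* saves the user VMX state */
                      pagefault_disable();      /* faults must not sleep here */
                      left = __copy_tofrom_user_vmx(to, from, n);
                      pagefault_enable();
                      preempt_enable();
      
                      if (left)  /* took a fault: redo with the old loop */
                              left = __copy_tofrom_user_base(to, from, n);
      
                      return left;
              }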
      
      The VMX breakpoint of 4096 bytes was chosen using this microbenchmark:
      
      http://ozlabs.org/~anton/junkcode/copy_to_user.c
      
      Since we are using VMX and there is a cost to saving and restoring
      the user VMX state there are two broad cases we need to benchmark:
      
      - Best case - userspace never uses VMX
      
      - Worst case - userspace always uses VMX
      
      In reality a userspace process will sit somewhere between these two
      extremes. Since we need to test both aligned and unaligned copies we
      end up with 4 combinations. The point at which the VMX loop begins to
      win is:
      
      0% VMX
      aligned		2048 bytes
      unaligned	2048 bytes
      
      100% VMX
      aligned		16384 bytes
      unaligned	8192 bytes
      
      Considering that this is a microbenchmark, that the data is hot in the
      cache, and that the VMX loop has better store queue merging properties,
      we set the breakpoint to 4096 bytes, a little below the unaligned
      breakpoints.
      
      Some future optimisations we can look at:
      
      - Looking at the perf data, a significant part of the cost when a
        task is always using VMX is the extra exception we take to restore
        the VMX state. As such we should do something similar to the x86
        optimisation that restores FPU state for heavy users. ie:
      
              /*
               * If the task has used fpu the last 5 timeslices, just do a full
               * restore of the math state immediately to avoid the trap; the
               * chances of needing FPU soon are obviously high now
               */
              preload_fpu = tsk_used_math(next_p) && next_p->fpu_counter > 5;
      
        and
      
              /*
               * fpu_counter contains the number of consecutive context switches
               * that the FPU is used. If this is over a threshold, the lazy fpu
               * saving becomes unlazy to save the trap. This is an unsigned char
               * so that after 256 times the counter wraps and the behavior turns
               * lazy again; this to deal with bursty apps that only use FPU for
               * a short time
               */
      
      - We could create a paca bit to mirror the VMX enabled MSR bit and check
        that first, avoiding multiple calls to enable_kernel_altivec. That
        should help with iovec based system calls like readv (see the sketch
        after this list).
      
      - We could have two VMX breakpoints, one for when we know the user VMX
        state is loaded into the registers and one when it isn't. This could
        be a second bit in the paca so we can calculate the break points quickly.
      
      - One suggestion from Ben was to save and restore the VSX registers
        we use inline instead of using enable_kernel_altivec.
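      
      A purely hypothetical sketch of the paca bit idea from the list above
      (the field vmx_enabled_hint and the helper are invented for illustration;
      clearing the bit on context switch is not shown):
      
              static inline void maybe_enable_kernel_altivec(void)
              {
                      /* hypothetical paca field mirroring the MSR VEC bit */
                      if (get_paca()->vmx_enabled_hint)
                              return;
                      enable_kernel_altivec();
                      get_paca()->vmx_enabled_hint = 1;
              }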
      
      [BenH: Fixed a problem with preempt and fixed build without CONFIG_ALTIVEC]
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  5. 17 February 2010, 1 commit
      powerpc: Improve 64bit copy_tofrom_user · 789c299c
      Committed by Anton Blanchard
      Here is a patch from Paul Mackerras that improves the ppc64
      copy_tofrom_user. The loop now does 32 bytes at a time, as well as
      pairing loads and stores.
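      
      A rough C picture of that pattern, assuming 8 byte aligned buffers (the
      real routine is hand-scheduled ppc64 assembly and also handles unaligned
      heads, tails and the exception fixups needed for user copies):
      
      #include <stddef.h>
      
      static void copy_32_per_iteration(unsigned long *dst,
      				  const unsigned long *src, size_t n)
      {
      	while (n >= 32) {
      		/* four paired 8 byte loads ... */
      		unsigned long a = src[0], b = src[1];
      		unsigned long c = src[2], d = src[3];
      		/* ... then four paired 8 byte stores */
      		dst[0] = a; dst[1] = b;
      		dst[2] = c; dst[3] = d;
      		src += 4;
      		dst += 4;
      		n -= 32;
      	}
      }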
      
      A quick test case that reads 8kB over and over shows the improvement:
      
      POWER6: 53% faster
      POWER7: 51% faster
      
      #define _XOPEN_SOURCE 500
      #include <stdlib.h>
      #include <stdio.h>
      #include <unistd.h>
      #include <fcntl.h>
      #include <sys/types.h>
      #include <sys/stat.h>
      
      #define BUFSIZE (8 * 1024)
      #define ITERATIONS 10000000
      
      int main()
      {
      	char tmpfile[] = "/tmp/copy_to_user_testXXXXXX";
      	int fd;
      	char buf[BUFSIZE];
      	unsigned long i;
      
      	fd = mkstemp(tmpfile);
      	if (fd < 0) {
      		perror("open");
      		exit(1);
      	}
      
      	if (write(fd, buf, BUFSIZE) != BUFSIZE) {
      		perror("open");
      		exit(1);
      	}
      
      	for (i = 0; i < ITERATIONS; i++) {
      		if (pread(fd, buf, BUFSIZE, 0) != BUFSIZE) {
      			perror("pread");
      			exit(1);
      		}
      	}
      
      	unlink(tmpfile);
      
      	return 0;
      }
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  6. 26 February 2009, 1 commit
  7. 19 November 2008, 1 commit
      powerpc: Update 64bit __copy_tofrom_user() using CPU_FTR_UNALIGNED_LD_STD · a4e22f02
      Committed by Mark Nelson
      In exactly the same way that we updated memcpy() with new feature
      sections in commit 25d6e2d7 ("powerpc:
      Update 64bit memcpy() using CPU_FTR_UNALIGNED_LD_STD"), we do the same
      thing here for __copy_tofrom_user().  Once again this is purely a
      performance tweak for Cell and Power6; it has no effect on any of the
      other 64bit powerpc chips.
      
      We can make these same changes to __copy_tofrom_user() because the
      basic copy algorithm is the same as in memcpy() - this version just
      has all the exception handling logic needed when copying to or from
      userspace as well as a special case for copying whole 4K pages that
      are page aligned.
      
      The CPU_FTR_UNALIGNED_LD_STD CPU feature was added in commit 4ec577a2
      ("powerpc: Add new CPU feature: CPU_FTR_UNALIGNED_LD_STD").
      
      We also make the same simple one line change from cmpldi r1,... to
      cmpldi cr1,... for consistency.
      Signed-off-by: Mark Nelson <markn@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  8. 13 April 2007, 1 commit
  9. 10 February 2006, 1 commit
  10. 07 November 2005, 1 commit
      [PATCH] ppc64: support 64k pages · 3c726f8d
      Committed by Benjamin Herrenschmidt
      Adds a new CONFIG_PPC_64K_PAGES which, when enabled, changes the kernel
      base page size to 64K.  The resulting kernel still boots on any
      hardware.  On current machines with 4K pages support only, the kernel
      will maintain 16 "subpages" for each 64K page transparently.
      
      Note that while real 64K capable hardware has been tested, the current
      patch will not enable it yet, as such hardware has not been released,
      and I'm still verifying with the firmware architects the proper way to
      get the information from the newer hypervisors.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  11. 10 October 2005, 1 commit
  12. 26 September 2005, 1 commit
      powerpc: Merge enough to start building in arch/powerpc. · 14cf11af
      Committed by Paul Mackerras
      This creates the directory structure under arch/powerpc and a bunch
      of Kconfig files.  It does a first-cut merge of arch/powerpc/mm,
      arch/powerpc/lib and arch/powerpc/platforms/powermac.  This is enough
      to build a 32-bit powermac kernel with ARCH=powerpc.
      
      For now we are getting some unmerged files from arch/ppc/kernel and
      arch/ppc/syslib, or arch/ppc64/kernel.  This makes some minor changes
      to files in those directories and files outside arch/powerpc.
      
      The boot directory is still not merged.  That's going to be interesting.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  13. 17 April 2005, 1 commit
      Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!