1. 05 Jan 2012, 8 commits
  2. 03 Jan 2012, 4 commits
    • powerpc: Fix unpaired probe_hcall_entry and probe_hcall_exit · e4f387d8
      Committed by Li Zhong
      Unpaired calls to probe_hcall_entry and probe_hcall_exit might happen
      as follows, leaving the preempt count incorrect.
      
      __trace_hcall_entry => trace_hcall_entry -> probe_hcall_entry =>
      get_cpu_var => preempt_disable
      
      __trace_hcall_exit => trace_hcall_exit -> probe_hcall_exit =>
      put_cpu_var => preempt_enable
      
      where A => B means A calls B directly (by function name), so B is
      definitely called, while A -> B means A calls B through a function
      pointer, so B might not be called if the pointer is not set.
      
      So an error occurs when only one of probe_hcall_entry and
      probe_hcall_exit is called during an hcall.
      
      This patch moves the preempt count operations from probe_hcall_entry
      and probe_hcall_exit to their callers, as sketched below.
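      A minimal sketch of the resulting shape, with simplified signatures and
      hypothetical probe-pointer names (the real code goes through the
      tracepoint machinery):
      
      	/* preempt count is now managed where execution is unconditional */
      	void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
      	{
      		preempt_disable();		/* always runs */
      		if (hcall_entry_probe)		/* probe may be unregistered */
      			hcall_entry_probe(opcode, args);
      	}
      
      	void __trace_hcall_exit(long opcode, unsigned long retval,
      				unsigned long *retbuf)
      	{
      		if (hcall_exit_probe)		/* independently of the entry probe */
      			hcall_exit_probe(opcode, retval, retbuf);
      		preempt_enable();		/* always runs, so the count stays paired */
      	}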
      Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      CC: stable@kernel.org [v2.6.32+]
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • offb: Fix setting of the pseudo-palette for >8bpp · 1bb0b7d2
      Committed by Benjamin Herrenschmidt
      When using a >8bpp framebuffer, offb advertises truecolor, not directcolor,
      and doesn't touch the color map even if it has a corresponding access method
      for the real hardware.
      
      Thus it needs to set the pseudo-palette with all 3 components of the color,
      like other truecolor framebuffers, not with copies of the color index like
      a directcolor framebuffer would do.
      
      This went unnoticed for a long time because it's pretty hard to get offb
      to kick in with anything but 8bpp (old BootX under MacOS will do that and
      qemu does it).
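      A sketch of the truecolor convention being followed (not the exact offb
      code; fbdev drivers pack all three scaled components into each
      pseudo-palette entry):
      
      	static int example_setcolreg(u_int regno, u_int red, u_int green,
      				     u_int blue, struct fb_info *info)
      	{
      		u32 *pal = info->pseudo_palette;
      
      		if (regno >= 16)
      			return -EINVAL;
      
      		/* fb color components are 16-bit; scale to the visual's width */
      		red   >>= 16 - info->var.red.length;
      		green >>= 16 - info->var.green.length;
      		blue  >>= 16 - info->var.blue.length;
      
      		pal[regno] = (red   << info->var.red.offset)   |
      			     (green << info->var.green.offset) |
      			     (blue  << info->var.blue.offset);
      		return 0;
      	}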
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: stable@kernel.org
    • offb: Add palette hack for qemu "standard vga" framebuffer · 9b961ed2
      Committed by Benjamin Herrenschmidt
      We rename the mach64 hack to "simple" since that's also applicable
      to anything using VGA-style DAC IO ports (set to 8-bit DAC) and we
      use it for qemu vga.
      
      Note that this is keyed on a device-tree "compatible" property that
      is currently only set by an upcoming version of SLOF when using the
      qemu "pseries" platform. This is on purpose as other qemu ppc platforms
      using OpenBIOS aren't properly setting the DAC to 8-bit at the time of
      the writing of this patch.
      
      We can fix OpenBIOS later to do that and add the required property, in
      which case it will be matched by this change.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • offb: Fix bug in calculating requested vram size · c055fe07
      Committed by Benjamin Herrenschmidt
      We used to request 8 times more vram than needed, which fails if the
      card's BAR is too small (observed with qemu & kvm).
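      Illustratively (not the exact offb expression), the request only needs
      to cover the framebuffer itself, since the pitch already accounts for
      bytes per pixel:
      
      	/* correct: pitch is in bytes, so this covers the whole framebuffer */
      	vram_size = pitch * height;
      	/* buggy form: scaling again by the depth (8 for 8bpp) asks for 8x */
      	/* vram_size = pitch * height * depth; */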
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: stable@kernel.org
  3. 22 Dec 2011, 1 commit
  4. 20 Dec 2011, 8 commits
    • powerpc/44x: Fix build error on currituck platform · eb975652
      Committed by Josh Boyer
      The MPIC_PRIMARY define was recently made the default and its meaning
      was inverted to MPIC_SECONDARY. This now causes compile errors on
      currituck, so fix it to use the new manner of allocating MPICs.
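      The flavour of the change, sketched (the surrounding argument values
      are illustrative; only the flags argument matters here):
      
      	/* before: primary MPICs had to pass an explicit flag */
      	mpic = mpic_alloc(np, 0, MPIC_PRIMARY, 0, 256, " MPIC     ");
      
      	/* after: primary is the default, so no flag is needed;
      	 * secondary MPICs would pass MPIC_SECONDARY instead */
      	mpic = mpic_alloc(np, 0, 0, 0, 256, " MPIC     ");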
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc/boot: Change the load address for the wrapper to fit the kernel · c55aef0e
      Committed by Suzuki Poulose
      The wrapper code which uncompresses the kernel in the case of a 'ppc'
      boot is loaded at 0x00400000 by default, and the kernel is uncompressed
      to fit below 0x00400000. But with dynamic relocations, the size of the
      kernel may exceed 0x00400000 (4MB). This would cause the uncompressed
      kernel to overlap the boot wrapper, making the boot fail.
      
      The message looks like:
      
         zImage starting: loaded at 0x00400000 (sp: 0x0065ffb0)
         Allocating 0x5ce650 bytes for kernel ...
         Insufficient memory for kernel at address 0! (_start=00400000, uncompressed size=00591a20)
      
      This patch shifts the load address of the boot wrapper code to the next
      higher MB, according to the size of the uncompressed vmlinux.
      
      With the patch, we get the following message while building the image:
      
       WARN: Uncompressed kernel (size 0x5b0344) overlaps the address of the wrapper(0x400000)
       WARN: Fixing the link_address of wrapper to (0x600000)
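      The round-up itself is simple arithmetic; a sketch in C, assuming
      vmlinux_size holds the uncompressed kernel size in bytes:
      
      	#define SZ_1M	0x100000UL
      
      	/* next MB boundary at or above the uncompressed kernel size */
      	link_address = (vmlinux_size + SZ_1M - 1) & ~(SZ_1M - 1);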
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc/44x: Enable CRASH_DUMP for 440x · 5b2e478d
      Committed by Suzuki Poulose
      Now that we have a relocatable kernel, supporting CRASH_DUMP only
      requires turning the switches on for UP machines.
      
      We don't have kexec support on 47x yet. Enabling SMP support will be
      done as part of enabling PPC_47x support.
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Josh Boyer <jwboyer@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc/44x: Enable CONFIG_RELOCATABLE for PPC44x · 26ecb6c4
      Committed by Suzuki Poulose
      This patch adds relocatable kernel support, based on processing of
      dynamic relocations, for the PPC44x kernel.
      
      We find the runtime address of _stext and relocate ourselves based
      on the following calculation.
      
      	virtual_base = ALIGN(KERNELBASE,256M) +
      			MODULO(_stext.run,256M)
      
      relocate() is called with the Effective Virtual Base Address (as
      shown below)
      
                  | Phys. Addr| Virt. Addr |
      Page (256M) |------------------------|
      Boundary    |           |            |
                  |           |            |
                  |           |            |
      Kernel Load |___________|_ __ _ _ _ _|<- Effective
      Addr(_stext)|           |      ^     |Virt. Base Addr
                  |           |      |     |
                  |           |      |     |
                  |           |reloc_offset|
                  |           |      |     |
                  |           |      |     |
                  |           |______v_____|<-(KERNELBASE)%256M
                  |           |            |
                  |           |            |
                  |           |            |
      Page(256M)  |-----------|------------|
      Boundary    |           |            |
      
      The virt_phys_offset is updated accordingly, i.e.,
      
      	virt_phys_offset = effective kernel virt base - kernstart_addr
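      Expressed in C (KERNELBASE is assumed to be 256M-aligned here, and
      stext_run stands for the runtime address of _stext):
      
      	#define SZ_256M	0x10000000UL
      
      	/* 256M-aligned base, plus the kernel's offset within its TLB page */
      	virtual_base = (KERNELBASE & ~(SZ_256M - 1)) +
      		       (stext_run & (SZ_256M - 1));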
      
      I have tested the patches on 440x platforms only. However this should
      work fine for PPC_47x also, as we only depend on the runtime address
      and the current TLB XLAT entry for the startup code, which is available
      in r25. I don't have access to a 47x board yet. So, it would be great if
      somebody could test this on 47x.
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: Tony Breeds <tony@bakeyournoodle.com>
      Cc: Josh Boyer <jwboyer@gmail.com>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc: Define virtual-physical translations for RELOCATABLE · 368ff8f1
      Committed by Suzuki Poulose
      We find the runtime address of _stext and relocate ourselves based
      on the following calculation.
      
      	virtual_base = ALIGN(KERNELBASE,KERNEL_TLB_PIN_SIZE) +
      			MODULO(_stext.run,KERNEL_TLB_PIN_SIZE)
      
      relocate() is called with the Effective Virtual Base Address (as
      shown below)
      
                  | Phys. Addr| Virt. Addr |
      Page        |------------------------|
      Boundary    |           |            |
                  |           |            |
                  |           |            |
      Kernel Load |___________|_ __ _ _ _ _|<- Effective
      Addr(_stext)|           |      ^     |Virt. Base Addr
                  |           |      |     |
                  |           |      |     |
                  |           |reloc_offset|
                  |           |      |     |
                  |           |      |     |
                  |           |______v_____|<-(KERNELBASE)%TLB_SIZE
                  |           |            |
                  |           |            |
                  |           |            |
      Page        |-----------|------------|
      Boundary    |           |            |
      
      On BookE, we need __va() & __pa() early in the boot process to access
      the device tree.
      
      Currently this is defined as:
      
      #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - \
      						PHYSICAL_START + KERNELBASE))
      where:
       PHYSICAL_START is kernstart_addr, a variable updated at runtime, and
       KERNELBASE	is the compile-time virtual base address of the kernel.
      
      This won't work for us, as kernstart_addr is dynamic and will yield
      different results for __va()/__pa() for the same mapping.
      
      e.g.,
      
      Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
      PAGE_OFFSET).
      
      In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M
      
      Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
      		= 0xbc100000 , which is wrong.
      
      It should be: 0xc0000000 + 0x100000 = 0xc0100000
      
      On platforms which support AMP, like PPC_47x (based on 44x), the kernel
      could be loaded in high memory. Hence we cannot always depend on the
      compile-time constants for the mapping.
      
      Here are the possible solutions:
      
      1) Update kernstart_addr (PHYSICAL_START) to match the physical address
      of the compile-time KERNELBASE value, instead of the actual physical
      address of _stext.
      
      The disadvantage is that we may break other users of PHYSICAL_START. They
      could be replaced with __pa(_stext).
      
      2) Redefine __va() & __pa() with relocation offset
      
      #ifdef	CONFIG_RELOCATABLE_PPC32
      #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + (KERNELBASE + RELOC_OFFSET)))
      #define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + RELOC_OFFSET))
      #endif
      
      where, RELOC_OFFSET could be
      
        a) A variable, say relocation_offset (like kernstart_addr), updated
           at boot time. This impacts performance, as we have to load an additional
           variable from memory.
      
      		OR
      
        b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
                            (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))
      
         This introduces more calculations for doing the translation.
      
      3) Redefine __va() & __pa() with a new variable
      
      i.e.,
      
      #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))
      
      where VIRT_PHYS_OFFSET :
      
      #ifdef CONFIG_RELOCATABLE_PPC32
      #define VIRT_PHYS_OFFSET virt_phys_offset
      #else
      #define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
      #endif /* CONFIG_RELOCATABLE_PPC32 */
      
      where virt_phys_offset is updated at runtime to:
      
      	Effective KERNELBASE - kernstart_addr.
      
      Taking our example above:
      
      virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
      		 = 0xc0400000 - 0x400000
      		 = 0xc0000000
      	and
      
      	__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
      	 which is what we want.
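      The same arithmetic as a self-contained userspace check (values taken
      from the example above):
      
      	#include <assert.h>
      
      	int main(void)
      	{
      		unsigned long effective_kernelstart_vaddr = 0xc0400000UL;
      		unsigned long kernstart_addr              = 0x00400000UL;
      		unsigned long virt_phys_offset =
      			effective_kernelstart_vaddr - kernstart_addr;
      
      		assert(virt_phys_offset == 0xc0000000UL);
      		/* __va(0x100000) */
      		assert(0x100000UL + virt_phys_offset == 0xc0100000UL);
      		return 0;
      	}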
      
      I have implemented (3) in the following patch, which has the same
      operational cost as the existing one.
      
      I have tested the patches on 440x platforms only. However this should
      work fine for PPC_47x also, as we only depend on the runtime address
      and the current TLB XLAT entry for the startup code, which is available
      in r25. I don't have access to a 47x board yet. So, it would be great if
      somebody could test this on 47x.
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc: Process dynamic relocations for kernel · 9c5f7d39
      Committed by Suzuki Poulose
      This patch implements dynamic relocation processing for the PPC32
      kernel. relocate() accepts the target virtual address and relocates
      the kernel image to it.
      
      Currently the following relocation types are handled:
      
      	R_PPC_RELATIVE
      	R_PPC_ADDR16_LO
      	R_PPC_ADDR16_HI
      	R_PPC_ADDR16_HA
      
      The last three relocation types above depend on the value of a symbol
      whose index is encoded in the relocation entry. Hence we need the
      symbol table to process such relocations.
      
      Note: The GNU ld for ppc32 produces buggy relocations for relocation
      types that depend on symbols. The value of symbols with STB_LOCAL scope
      should be assumed to be zero (Alan Modra); see the sketch below.
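      A simplified sketch of such a processing loop (the in-kernel version
      runs very early and differs in detail; names here are illustrative):
      
      	#include <elf.h>
      
      	/* delta = runtime load address - link-time address */
      	static void apply_relocs(Elf32_Rela *rela, unsigned long count,
      				 Elf32_Sym *symtab, unsigned long delta)
      	{
      		for (unsigned long i = 0; i < count; i++) {
      			unsigned long *loc =
      				(unsigned long *)(rela[i].r_offset + delta);
      			/* symbol value; per the note above, STB_LOCAL
      			 * symbols are assumed to have value zero */
      			unsigned long v =
      				symtab[ELF32_R_SYM(rela[i].r_info)].st_value
      				+ rela[i].r_addend + delta;
      
      			switch (ELF32_R_TYPE(rela[i].r_info)) {
      			case R_PPC_RELATIVE:
      				*loc = rela[i].r_addend + delta;
      				break;
      			case R_PPC_ADDR16_LO:
      				*(unsigned short *)loc = v & 0xffff;
      				break;
      			case R_PPC_ADDR16_HI:
      				*(unsigned short *)loc = (v >> 16) & 0xffff;
      				break;
      			case R_PPC_ADDR16_HA:	/* "high adjusted" for lo sign */
      				*(unsigned short *)loc =
      					((v + 0x8000) >> 16) & 0xffff;
      				break;
      			}
      		}
      	}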
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Alan Modra <amodra@au1.ibm.com>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc/44x: Enable DYNAMIC_MEMSTART for 440x · 23913245
      Committed by Suzuki Poulose
      DYNAMIC_MEMSTART (formerly RELOCATABLE) was restricted to the PPC_47x
      variants of 44x. This patch enables DYNAMIC_MEMSTART for 440x-based
      chipsets.
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Josh Boyer <jwboyer@gmail.com>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: linux ppc dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc: Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE · 0f890c8d
      Committed by Suzuki Poulose
      The current implementation of CONFIG_RELOCATABLE on BookE is based
      on mapping the page-aligned kernel load address to KERNELBASE. This
      approach, however, is not enough for platforms where the TLB page size
      is large (e.g., 256M on 44x). So we rename the RELOCATABLE currently
      used on BookE to DYNAMIC_MEMSTART to reflect the actual method.
      
      CONFIG_RELOCATABLE for PPC32 (BookE), based on processing of dynamic
      relocations, will be introduced later in the patch series.
      
      This change would allow the use of the old method of RELOCATABLE for
      platforms which can afford to enforce the page alignment (platforms with
      smaller TLB size).
      
      Changes since v3:
      
      * Introduced a new config, NONSTATIC_KERNEL, to denote a kernel which is
        either RELOCATABLE or DYNAMIC_MEMSTART (suggested by Josh Boyer)
      Suggested-by: Scott Wood <scottwood@freescale.com>
      Tested-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Scott Wood <scottwood@freescale.com>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: Josh Boyer <jwboyer@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: linux ppc dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
  5. 19 Dec 2011, 8 commits
    • powerpc: Fix old bug in prom_init setting of the color · 3f53638c
      Committed by Benjamin Herrenschmidt
      We have an array of 16 entries and a loop of 32 iterations... oops.
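      The shape of the bug in miniature (hypothetical names):
      
      	u32 colors[16];
      
      	for (int i = 0; i < 32; i++)	/* walks 16 entries past the end */
      		set_color(i, colors[i]);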
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Only use initrd_end as the limit for alloc_bottom if it's inside the RMO. · 64968f60
      Committed by Paul Mackerras
      As kernels and initrds get bigger, boot loaders and possibly kexec-tools
      will need to place the initrd outside the RMO. When this happens we end
      up with no lowmem and the boot doesn't get very far.
      
      Only use initrd_end as the limit for alloc_bottom if it's inside the
      RMO.
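      The guard amounts to something like this sketch (names modelled on
      prom_init's style, not the exact code):
      
      	/* only let the initrd raise alloc_bottom if it sits inside the RMO */
      	if (prom_initrd_start && prom_initrd_end <= rmo_top)
      		alloc_bottom = PAGE_ALIGN(prom_initrd_end);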
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Fix comment explaining our VSID layout · b206590c
      Committed by Anton Blanchard
      We support 16TB of user address space and half a million contexts,
      so update the comment to reflect this.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Fix wrong divisor in usecs_to_cputime · 9f5072d4
      Committed by Andreas Schwab
      Commit d57af9b2 (taskstats: use real microsecond granularity for CPU times)
      renamed msecs_to_cputime to usecs_to_cputime, but failed to update all
      numbers on the way.  This causes nonsensical cpu idle/iowait values to be
      displayed in /proc/stat (the only user of usecs_to_cputime so far).
      
      This also renames __cputime_msec_factor to __cputime_usec_factor, adapting
      its value and using it directly in cputime_to_usecs instead of doing two
      multiplications.
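      A minimal sketch of the intended relationship (names follow the commit;
      the bodies are simplified, and the real powerpc code scales by the
      timebase):
      
      	extern u64 __cputime_usec_factor;	/* cputime ticks per microsecond */
      
      	static inline u64 usecs_to_cputime(unsigned long us)
      	{
      		return (u64)us * __cputime_usec_factor;
      	}
      
      	static inline unsigned long cputime_to_usecs(u64 ct)
      	{
      		/* single division by the usec factor, no double multiply */
      		return ct / __cputime_usec_factor;
      	}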
      Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
      Acked-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/mm: Fix section mismatch for read_n_cells · 2011b1d0
      Committed by David Rientjes
      read_n_cells() cannot be marked as .devinit.text since it is referenced
      from two functions that are not in that section: of_get_lmb_size() and
      hot_add_drconf_scn_to_nid().
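      The general shape of a section mismatch, with hypothetical functions:
      
      	/* lives in a section that is discarded after boot */
      	static int __init early_setup_thing(void)
      	{
      		return 0;
      	}
      
      	/* calling it from regular .text is flagged by modpost, because
      	 * the callee may already have been freed when this runs */
      	int runtime_path(void)
      	{
      		return early_setup_thing();	/* section mismatch */
      	}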
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/mm: Fix section mismatch for mark_reserved_regions_for_nid · 28e86bdb
      Committed by David Rientjes
      mark_reserved_regions_for_nid() is only called from do_init_bootmem(),
      which is in .init.text, so it must be in the same section to avoid a
      section mismatch warning.
      Reported-by: Subrata Modak <subrata@linux.vnet.ibm.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Add __SANE_USERSPACE_TYPES__ to asm/types.h for LL64 · 2c9c6ce0
      Committed by Matt Evans
      PPC64 uses long long for u64 in the kernel, but powerpc's asm/types.h
      prevents 64-bit userland from seeing this definition, instead defaulting
      to u64 == long in userspace.  Some user programs (e.g. kvmtool) may actually
      want LL64, so this patch adds a check for __SANE_USERSPACE_TYPES__ so that,
      if defined, int-ll64.h is included instead.
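      A simplified sketch of the header logic (the real asm/types.h condition
      may differ slightly):
      
      	#if defined(__powerpc64__) && !defined(__KERNEL__) && \
      	    !defined(__SANE_USERSPACE_TYPES__)
      	# include <asm-generic/int-l64.h>	/* legacy: u64 == unsigned long */
      	#else
      	# include <asm-generic/int-ll64.h>	/* u64 == unsigned long long */
      	#endif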
      Signed-off-by: Matt Evans <matt@ozlabs.org>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: POWER7 optimised copy_to_user/copy_from_user using VMX · a66086b8
      Committed by Anton Blanchard
      Implement a POWER7 optimised copy_to_user/copy_from_user using VMX.
      For large aligned copies this new loop is over 10% faster, and for
      large unaligned copies it is over 200% faster.
      
      If we take a fault we fall back to the old version, this keeps
      things relatively simple and easy to verify.
      
      On POWER7, unaligned stores rarely slow down; they only flush when a
      store crosses a 4KB page boundary, and this flush is handled completely
      in hardware and should take 20-30 cycles.
      
      Unaligned loads, on the other hand, flush much more often: whenever a
      load crosses a 128 byte cache line, or a 32 byte sector if either
      sector is an L1 miss.
      
      Considering this information we really want to get the loads aligned
      and not worry about the alignment of the stores. Microbenchmarks
      confirm that this approach is much faster than the current unaligned
      copy loop that uses shifts and rotates to ensure both loads and
      stores are aligned.
      
      We also want to try to do the stores in cacheline-aligned,
      cacheline-sized chunks. If the store queue is unable to merge an entire
      cacheline of stores then the L2 cache will have to do a
      read/modify/write. Even worse, we will serialise this with the stores
      in the next iteration of the copy loop since both iterations hit
      the same cacheline.
      
      Based on this, the new loop does the following things:
      
      1 - 127 bytes
      Get the source 8 byte aligned and use 8 byte loads and stores. Pretty
      boring and similar to how the current loop works.
      
      128 - 4095 bytes
      Get the source 8 byte aligned and use 8 byte loads and stores,
      1 cacheline at a time. We aren't doing the stores in cacheline
      aligned chunks so we will potentially serialise once per cacheline.
      Even so it is much better than the loop we have today.
      
      4096 bytes and up
      If both source and destination have the same alignment get them both
      16 byte aligned, then get the destination cacheline aligned. Do
      cacheline sized loads and stores using VMX.
      
      If source and destination do not have the same alignment, we get the
      destination cacheline aligned, and use permute to do aligned loads.
      
      In both cases the VMX loop should be optimal - we always do aligned
      loads and stores and are always doing stores in cacheline aligned,
      cacheline sized chunks.
      
      To be able to use VMX we must be careful about interrupts and
      sleeping. We don't use the VMX loop when in an interrupt (which should
      be rare anyway) and we wrap the VMX loop in disable/enable_pagefault
      and fall back to the existing copy_tofrom_user loop if we do need to
      sleep.
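      The guard logic, sketched; base_copy() and vmx_copy() are hypothetical
      stand-ins for the real assembly routines, both returning the number of
      bytes not copied:
      
      	static unsigned long copyuser_power7(void *to, const void *from,
      					     unsigned long len)
      	{
      		unsigned long left;
      
      		if (in_interrupt())		/* rare; skip VMX entirely */
      			return base_copy(to, from, len);
      
      		pagefault_disable();		/* faults fail fast instead of sleeping */
      		enable_kernel_altivec();	/* let the kernel clobber VMX state */
      		left = vmx_copy(to, from, len);
      		pagefault_enable();
      
      		if (left)			/* took a fault: redo with the old loop */
      			left = base_copy(to, from, len);
      		return left;
      	}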
      
      The VMX breakpoint of 4096 bytes was chosen using this microbenchmark:
      
      http://ozlabs.org/~anton/junkcode/copy_to_user.c
      
      Since we are using VMX and there is a cost to saving and restoring
      the user VMX state there are two broad cases we need to benchmark:
      
      - Best case: userspace never uses VMX
      
      - Worst case: userspace always uses VMX
      
      In reality a userspace process will sit somewhere between these two
      extremes. Since we need to test both aligned and unaligned copies we
      end up with 4 combinations. The point at which the VMX loop begins to
      win is:
      
      0% VMX
      aligned		2048 bytes
      unaligned	2048 bytes
      
      100% VMX
      aligned		16384 bytes
      unaligned	8192 bytes
      
      Considering that this is a microbenchmark, that the data is hot in
      cache, and that the VMX loop has better store queue merging properties,
      we set the breakpoint to 4096 bytes, a little below the unaligned
      breakpoints.
      
      Some future optimisations we can look at:
      
      - Looking at the perf data, a significant part of the cost when a
        task is always using VMX is the extra exception we take to restore
        the VMX state. As such we should do something similar to the x86
        optimisation that restores FPU state for heavy users, i.e.:
      
              /*
               * If the task has used fpu the last 5 timeslices, just do a full
               * restore of the math state immediately to avoid the trap; the
               * chances of needing FPU soon are obviously high now
               */
              preload_fpu = tsk_used_math(next_p) && next_p->fpu_counter > 5;
      
        and
      
              /*
               * fpu_counter contains the number of consecutive context switches
               * that the FPU is used. If this is over a threshold, the lazy fpu
               * saving becomes unlazy to save the trap. This is an unsigned char
               * so that after 256 times the counter wraps and the behavior turns
               * lazy again; this to deal with bursty apps that only use FPU for
               * a short time
               */
      
      - We could create a paca bit to mirror the VMX enabled MSR bit and check
        that first, avoiding multiple calls to enable_kernel_altivec. That
        should help with iovec based system calls like readv.
      
      - We could have two VMX breakpoints, one for when we know the user VMX
        state is loaded into the registers and one when it isn't. This could
        be a second bit in the paca so we can calculate the break points quickly.
      
      - One suggestion from Ben was to save and restore the VSX registers
        we use inline instead of using enable_kernel_altivec.
      
      [BenH: Fixed a problem with preempt and fixed build without CONFIG_ALTIVEC]
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  6. 16 Dec 2011, 8 commits
  7. 09 Dec 2011, 3 commits