1. 30 Jan 2015 (1 commit)
  2. 10 Jan 2014 (6 commits)
    • powerpc/e6500: TLB miss handler with hardware tablewalk support · 28efc35f
      Committed by Scott Wood
      There are a few things that make the existing hw tablewalk handlers
      unsuitable for e6500:
      
       - Indirect entries go in TLB1 (though the resulting direct entries go in
         TLB0).
      
       - It has threads, but no "tlbsrx." -- so we need a spinlock and
         a normal "tlbsx".  Because we need this lock, hardware tablewalk
         is mandatory on e6500 unless we want to add spinlock+tlbsx to
         the normal bolted TLB miss handler.
      
       - TLB1 has no HES (nor next-victim hint) so we need software round robin
         (TODO: integrate this round robin data with hugetlb/KVM); a toy sketch
         of such round-robin bookkeeping follows this entry
      
       - The existing tablewalk handlers map half of a page table at a time,
         because IBM hardware has a fixed 1MiB indirect page size.  e6500
         has variable size indirect entries, with a minimum of 2MiB.
         So we can't do the half-page indirect mapping, and even if we
         could it would be less efficient than mapping the full page.
      
       - Like on e5500, the linear mapping is bolted, so we don't need the
         overhead of supporting nested tlb misses.
      
      Note that hardware tablewalk does not work in rev1 of e6500.
      We do not expect to support e6500 rev1 in mainline Linux.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Cc: Mihai Caraman <mihai.caraman@freescale.com>
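
      The entry above notes that TLB1 has no next-victim hint, so the handler
      must pick victims itself under a lock.  A toy, standalone C sketch of
      that kind of round-robin bookkeeping (hypothetical slot numbers and
      names, not the kernel's actual handler state):

          #include <stdio.h>

          /* Assumed parameters: a few TLB1 slots reserved for indirect entries. */
          #define TLB1_IND_FIRST  60   /* first reserved slot (illustrative) */
          #define TLB1_IND_COUNT   4   /* number of reserved slots           */

          struct tlb1_rr {
              unsigned int next;  /* next victim, relative to TLB1_IND_FIRST */
              /* In the real handler this state, like the tlbsx search, would
               * sit under the per-core spinlock, since e6500 threads share
               * the TLB and there is no tlbsrx. to do it locklessly. */
          };

          /* Pick the next TLB1 slot to overwrite, cycling over the reserved range. */
          static unsigned int tlb1_next_victim(struct tlb1_rr *rr)
          {
              unsigned int esel = TLB1_IND_FIRST + rr->next;
              rr->next = (rr->next + 1) % TLB1_IND_COUNT;
              return esel;
          }

          int main(void)
          {
              struct tlb1_rr rr = { 0 };
              for (int i = 0; i < 6; i++)
                  printf("victim slot %u\n", tlb1_next_victim(&rr));
              return 0;
          }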
    • powerpc/fsl_booke: smp support for booting a relocatable kernel above 64M · 0be7d969
      Committed by Kevin Hao
      When booting a secondary cpu above 64M, we face the same issue as on
      the boot cpu: PAGE_OFFSET maps to two different physical addresses for
      the init tlb and the final map.  So we have to use
      switch_to_as1/restore_to_as0 for the transition between these two
      maps.  When restoring to AS0 on a secondary cpu we only need to return
      to the caller, so add a new parameter to restore_to_as0 for this
      purpose.

      Use LOAD_REG_ADDR_PIC to get the address of variables which may be
      used before we set the final map in cams for the secondary cpu, and
      move the setting of cams a bit earlier in order to avoid unnecessary
      uses of LOAD_REG_ADDR_PIC.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Signed-off-by: Scott Wood <scottwood@freescale.com>
    • powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel · 7d2471f9
      Committed by Kevin Hao
      This is always true for a non-relocatable kernel (otherwise the kernel
      would get stuck), but for a relocatable kernel it is a little more
      involved.  When booting a relocatable kernel, we just align the kernel
      start address to 64M and map PAGE_OFFSET from there; the relocation is
      based on this virtual address.  But if this address is not the same as
      memstart_addr, we will have to change the PAGE_OFFSET mapping to the
      real memstart_addr and perform a second relocation (a small sketch of
      this condition follows this entry).
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      [scottwood@freescale.com: make offset long and non-negative in simple case]
      Signed-off-by: Scott Wood <scottwood@freescale.com>
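
      A minimal sketch of the condition described above, with hypothetical
      values and names (the real logic lives in the early relocation code,
      not in C like this):

          #include <stdio.h>
          #include <stdint.h>

          #define SZ_64M 0x04000000ULL

          int main(void)
          {
              /* Purely illustrative example values. */
              uint64_t memstart_addr = 0x00000000;  /* start of physical memory    */
              uint64_t kernel_load   = 0x05000000;  /* where the kernel was loaded */

              /* Initial map: PAGE_OFFSET is mapped from the 64M-aligned load address. */
              uint64_t initial_base = kernel_load & ~(SZ_64M - 1);

              if (initial_base != memstart_addr) {
                  /* PAGE_OFFSET must be remapped to memstart_addr and the kernel
                   * relocated a second time by the difference. */
                  printf("second relocation needed, offset 0x%llx\n",
                         (unsigned long long)(initial_base - memstart_addr));
              } else {
                  printf("initial map already starts at memstart_addr\n");
              }
              return 0;
          }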
    • powerpc/fsl_booke: introduce map_mem_in_cams_addr · 813125d8
      Committed by Kevin Hao
      Introduce this function so we can set both the physical and virtual
      address for the map in cams. This will be used by the relocation code.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Signed-off-by: Scott Wood <scottwood@freescale.com>
    • powerpc/fsl_booke: set the tlb entry for the kernel address in AS1 · 78a235ef
      Committed by Kevin Hao
      We use tlb1 entries to map low memory into kernel space.  The current
      code assumes that the first tlb entry covers the kernel image, but
      this is not true in some special cases, such as when we run a
      relocatable kernel above 64M or set CONFIG_KERNEL_START above 64M.
      So we choose to switch to address space 1 before setting these tlb
      entries.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Signed-off-by: Scott Wood <scottwood@freescale.com>
    • powerpc: enable the relocatable support for the fsl booke 32bit kernel · dd189692
      Committed by Kevin Hao
      This is based on the code in head_44x.S.  The difference is that the
      init tlb size we use is 64M.  With this patch we can only load the
      kernel at an address between memstart_addr and memstart_addr + 64M.
      We will fix this restriction in the following patches.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Signed-off-by: Scott Wood <scottwood@freescale.com>
  3. 16 Mar 2012 (1 commit)
  4. 13 Oct 2011 (1 commit)
  5. 12 Oct 2011 (1 commit)
    • powerpc/fsl-booke: Fix setup_initial_memory_limit to not blindly map · 1dc91c3e
      Committed by Kumar Gala
      On FSL Book-E devices we support multiple large TLB sizes and so we can
      get into situations in which the initial 1G TLB size is too big and
      we're asked for a size that is not mappable by a single entry (like
      512M).  The single entry is important because when we bring up
      secondary cores they need to ensure any data structure they need to
      access (e.g. PACA or stack) is always mapped.

      So we really need to determine what size will actually be mapped by the
      first TLB entry to ensure we limit early memory references to that
      region.  We refactor the map_mem_in_cams() code to provide a helper
      function that we can use to determine the size of the first TLB entry
      while taking into account size and alignment constraints (a minimal
      sketch of that calculation follows this entry).
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
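
      Roughly, the helper's job is to find the largest supported page size
      that fits the requested size and the alignment of both addresses.  A
      standalone sketch assuming e500v2-style power-of-four sizes (names are
      illustrative, not necessarily the kernel's helper):

          #include <stdio.h>
          #include <stdint.h>

          /* Largest power-of-4 size (bytes) that is <= ram and that both the
           * virtual and physical addresses are aligned to. */
          static uint64_t first_cam_size(uint64_t ram, uint64_t virt, uint64_t phys)
          {
              for (uint64_t sz = 1ULL << 32; sz >= 4096; sz >>= 2)   /* 4G down to 4K */
                  if (sz <= ram && (virt % sz) == 0 && (phys % sz) == 0)
                      return sz;
              return 0;
          }

          int main(void)
          {
              /* 512M requested at 1G-aligned addresses: the first entry maps 256M. */
              uint64_t sz = first_cam_size(512ULL << 20, 0xc0000000ULL, 0x40000000ULL);
              printf("first entry maps %llu MB\n", (unsigned long long)(sz >> 20));
              return 0;
          }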
  6. 14 Oct 2010 (2 commits)
    • powerpc/fsl-booke64: Use TLB CAMs to cover linear mapping on FSL 64-bit chips · 55fd766b
      Committed by Kumar Gala
      Freescale parts typically have a TLB array for large mappings that we
      can bolt the linear mapping into.  We reuse the code that already
      exists on PPC32 on the 64-bit side to set up the linear mapping so
      that it is covered by bolted TLB entries.  We use a quarter of the
      variable-size TLB array for this purpose.

      Additionally, we limit the amount of memory to what we can cover via
      bolted entries so we don't get secondary faults in the TLB miss
      handlers.  We should fix this limitation in the future (a rough sketch
      of the coverage limit follows this entry).
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
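
      A rough sketch of the coverage limit mentioned above: with only a
      quarter of the TLB1/CAM array bolted for the linear mapping, usable
      memory is capped.  The numbers below are illustrative assumptions, not
      values read from any particular chip:

          #include <stdio.h>
          #include <stdint.h>

          int main(void)
          {
              unsigned int tlb1_entries = 64;                /* total CAM slots (assumed)   */
              unsigned int bolted       = tlb1_entries / 4;  /* quarter used for linear map */
              uint64_t     max_page     = 1ULL << 30;        /* assume 1G bolted pages      */
              uint64_t     ram          = 24ULL << 30;       /* 24G installed (example)     */

              uint64_t coverable = (uint64_t)bolted * max_page;
              uint64_t usable    = ram < coverable ? ram : coverable;

              /* Memory beyond 'usable' is left unused so the bolted entries always
               * cover the linear mapping and the miss handlers never recurse. */
              printf("bolted %u entries cover %llu GB; usable RAM %llu GB\n",
                     bolted, (unsigned long long)(coverable >> 30),
                     (unsigned long long)(usable >> 30));
              return 0;
          }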
    • powerpc: Fix invalid page flags in create TLB CAM path for PTE_64BIT · 92437d41
      Committed by Paul Gortmaker
      There exists a four-line chunk of code which, when configured for a
      64-bit address space, can incorrectly set certain page flags during
      TLB creation.  It turns out that this code isn't used, but it might
      still serve a purpose.  Since it isn't obvious why it exists or why
      it causes problems, the description below covers both in detail.
      
      For powerpc bootstrap, the physical memory (at most 768M) is mapped
      into the kernel space via the following path:
      
      MMU_init()
          |
          + adjust_total_lowmem()
                  |
                  + map_mem_in_cams()
                          |
                          + settlbcam(i, virt, phys, cam_sz, PAGE_KERNEL_X, 0);
      
      In settlbcam(), the kernel creates TLB entries according to the flag
      PAGE_KERNEL_X.
      
      settlbcam()
      {
              ...
              TLBCAM[index].MAS1 = MAS1_VALID
                              | MAS1_IPROT | MAS1_TSIZE(tsize) | MAS1_TID(pid);
                                      ^
			These entries cannot be invalidated by the
			kernel, since MAS1_IPROT is set in the TLB entry.
              ...
              if (flags & _PAGE_USER) {
                 TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
                 TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
              }
      
      For classic BookE, (flags & _PAGE_USER) is zero, so this is fine.
      But on boards like the Freescale P4080, we want to support 36-bit
      physical addressing. So the following options may be set:
      
      CONFIG_FSL_BOOKE=y
      CONFIG_PTE_64BIT=y
      CONFIG_PHYS_64BIT=y
      
      As a result, boards like the P4080 will use the Book3E PTE format.
      As per the file arch/powerpc/include/asm/pgtable-ppc32.h:
      
        * #elif defined(CONFIG_FSL_BOOKE) && defined(CONFIG_PTE_64BIT)
        * #include <asm/pte-book3e.h>
      
      So PAGE_KERNEL_X is __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX) and the
      book3E version of _PAGE_KERNEL_RWX is defined with:
      
        (_PAGE_BAP_SW | _PAGE_BAP_SR | _PAGE_DIRTY | _PAGE_BAP_SX)
      
      Note the _PAGE_BAP_SR, which is also defined in the book3E _PAGE_USER:
      
        #define _PAGE_USER        (_PAGE_BAP_UR | _PAGE_BAP_SR) /* Can be read */
      
      So the possibility exists to wrongly assign the user MAS3_U<RWX> bits
      to kernel (PAGE_KERNEL_X) address space via the following code fragment:
      
              if (flags & _PAGE_USER) {
                 TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
                 TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
              }
      
      Here is a dump of the TLB info from Simics with the above code present:
      ------
      L2 TLB1
                                                  GT                   SSS UUU V I
       Row  Logical           Physical            SS TLPID  TID  WIMGE XWR XWR F P   V
      ----- ----------------- ------------------- -- ----- ----- ----- --- --- - -   -
        0   c0000000-cfffffff 000000000-00fffffff 00     0     0   M   XWR XWR 0 1   1
        1   d0000000-dfffffff 010000000-01fffffff 00     0     0   M   XWR XWR 0 1   1
        2   e0000000-efffffff 020000000-02fffffff 00     0     0   M   XWR XWR 0 1   1
      
      Actually this conditional code was used for two legacy purposes:

        1: supporting KGDB breakpoints.
           KGDB has already dropped this; it now uses its core write to set
           breakpoints.

        2: io_block_mapping(), which created TLB entries at segment size
           (not PAGE_SIZE) for device IO space.
           This use case has also been removed from the latest PowerPC kernel.
      
      However, there may still be a use case for it in the future, like
      large user pages, so we can't remove it entirely.  As an alternative,
      we match on all bits of _PAGE_USER instead of just any bit, so the
      case where only _PAGE_BAP_SR is set can't sneak through (a small
      comparison sketch follows this entry).
      
      With this done, the TLB entries no longer have the user XWR bits set,
      as shown below:
      
      -------
      L2 TLB1
                                                  GT                   SSS UUU V I
       Row  Logical           Physical            SS TLPID  TID  WIMGE XWR XWR F P   V
      ----- ----------------- ------------------- -- ----- ----- ----- --- --- - -   -
        0   c0000000-cfffffff 000000000-00fffffff 00     0     0   M   XWR     0 1   1
        1   d0000000-dfffffff 010000000-01fffffff 00     0     0   M   XWR     0 1   1
        2   e0000000-efffffff 020000000-02fffffff 00     0     0   M   XWR     0 1   1
      Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
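
      The fix boils down to the difference between testing for any user bit
      and testing for all of them.  A small standalone illustration with
      simplified stand-in flag values (the real definitions live in the
      pte-book3e headers):

          #include <stdio.h>

          #define _PAGE_BAP_SR  0x001                          /* supervisor read (stand-in) */
          #define _PAGE_BAP_UR  0x002                          /* user read (stand-in)       */
          #define _PAGE_USER    (_PAGE_BAP_UR | _PAGE_BAP_SR)

          int main(void)
          {
              unsigned long kernel_flags = _PAGE_BAP_SR;   /* e.g. part of PAGE_KERNEL_X */

              /* Old test: any bit of _PAGE_USER set -> the kernel mapping wrongly
               * gets user permissions, because _PAGE_BAP_SR is shared with _PAGE_USER. */
              if (kernel_flags & _PAGE_USER)
                  printf("old test: treated as a user mapping (wrong)\n");

              /* Fixed test: all bits of _PAGE_USER must be set. */
              if ((kernel_flags & _PAGE_USER) == _PAGE_USER)
                  printf("new test: treated as a user mapping\n");
              else
                  printf("new test: kernel-only mapping (correct)\n");

              return 0;
          }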
  7. 05 Aug 2010 (2 commits)
    • memblock: Remove rmo_size, burry it in arch/powerpc where it belongs · cd3db0c4
      Committed by Benjamin Herrenschmidt
      The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact
      server ppc64, though I hijack it on embedded ppc64 for similar
      purposes) and represents the area of memory that can be accessed in
      real mode (i.e. with the MMU off), or on embedded, from the exception
      vectors (which are bolted into the TLB), which pretty much boils down
      to the same thing.

      We take that out of the generic MEMBLOCK data structure and move it
      into arch/powerpc where it belongs, renaming it to "RMA" while at it.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • memblock: Introduce default allocation limit and use it to replace explicit ones · e63075a3
      Committed by Benjamin Herrenschmidt
      This introduces memblock.current_limit, which is used to limit
      allocations from memblock_alloc() or memblock_alloc_base(...,
      MEMBLOCK_ALLOC_ACCESSIBLE).
      
      The old MEMBLOCK_ALLOC_ANYWHERE changes value from 0 to ~(u64)0 and can still
      be used with memblock_alloc_base() to allocate really anywhere.
      
      It is -no-longer- cropped to MEMBLOCK_REAL_LIMIT which disappears.
      
      Note to archs: I'm leaving the default limit at MEMBLOCK_ALLOC_ANYWHERE.
      I strongly recommend that you set an appropriate limit during boot in
      order to guarantee that a memblock_alloc() at any time results in
      something that is accessible with a simple __va() (a toy model of such
      a limit follows this entry).
      
      The reason is that a subsequent patch will introduce the ability for
      the array to resize itself by reallocating itself. The MEMBLOCK core will
      honor the current limit when performing those allocations.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
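
      A toy model of what the limit buys: allocations that ask for
      "accessible" memory are clamped below the current limit, while an
      explicit "anywhere" request is not.  This only illustrates the idea;
      it is not the real memblock implementation or API:

          #include <stdio.h>
          #include <stdint.h>

          #define ALLOC_ANYWHERE_MODEL   (~(uint64_t)0)  /* mirrors MEMBLOCK_ALLOC_ANYWHERE   */
          #define ALLOC_ACCESSIBLE_MODEL 0               /* mirrors MEMBLOCK_ALLOC_ACCESSIBLE */

          static uint64_t region_end    = 4ULL << 30;    /* one 4G region of RAM       */
          static uint64_t current_limit = ~(uint64_t)0;  /* the arch sets this at boot */

          /* Toy top-down allocator: return the highest base that fits below max_addr. */
          static uint64_t alloc_base_model(uint64_t size, uint64_t max_addr)
          {
              if (max_addr == ALLOC_ACCESSIBLE_MODEL)
                  max_addr = current_limit;              /* clamp to the current limit */
              if (max_addr > region_end)
                  max_addr = region_end;
              return max_addr - size;
          }

          int main(void)
          {
              current_limit = 768ULL << 20;   /* e.g. what the bolted entries cover */

              printf("accessible: base 0x%llx\n",
                     (unsigned long long)alloc_base_model(1 << 20, ALLOC_ACCESSIBLE_MODEL));
              printf("anywhere:   base 0x%llx\n",
                     (unsigned long long)alloc_base_model(1 << 20, ALLOC_ANYWHERE_MODEL));
              return 0;
          }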
  8. 09 Jul 2010 (1 commit)
  9. 17 May 2010 (1 commit)
  10. 14 May 2010 (1 commit)
  11. 30 Apr 2010 (1 commit)
  12. 20 Apr 2010 (1 commit)
  13. 15 Dec 2009 (1 commit)
  14. 21 Nov 2009 (1 commit)
  15. 20 Aug 2009 (1 commit)
  16. 24 Mar 2009 (1 commit)
    • powerpc/mm: Tweak PTE bit combination definitions · 8d1cf34e
      Committed by Benjamin Herrenschmidt
      This patch tweaks the way some PTE bit combinations are defined, in
      such a way that the 32 and 64-bit variants become almost identical and
      that will make it easier to bring in a new common pte-* file for the
      new variant of the Book3-E support.

      The combinations of bits defining access to kernel pages are now
      clearly separated from the combinations used by userspace and the core
      VM.  The resulting generated code should remain identical unless I
      made a mistake.

      Note: While at it, I removed a nonsensical statement related to
      CONFIG_KGDB in ppc_mmu_32.c which could cause kernel mappings to be
      made user accessible when that option is enabled.  Probably something
      that bitrotted.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  17. 13 Feb 2009 (2 commits)
    • powerpc/fsl-booke: Fix compile warning · 96a8bac5
      Committed by Kumar Gala
      arch/powerpc/mm/fsl_booke_mmu.c: In function 'adjust_total_lowmem':
      arch/powerpc/mm/fsl_booke_mmu.c:221: warning: format '%ld' expects type 'long int', but argument 3 has type 'phys_addr_t'
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
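
      The warning comes from phys_addr_t being 32 or 64 bits wide depending
      on the configuration, so no single length modifier fits it directly.
      The usual pattern is to cast to unsigned long long; a minimal
      user-space analogue:

          #include <stdio.h>
          #include <stdint.h>

          typedef uint64_t phys_addr_t;   /* 64-bit here; 32-bit on other configs */

          int main(void)
          {
              phys_addr_t total_lowmem = 768ULL << 20;

              /* One format string works for either width once we cast. */
              printf("total lowmem = %llu bytes\n", (unsigned long long)total_lowmem);
              return 0;
          }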
    • powerpc/fsl-booke: Add new ISA 2.06 page sizes and MAS defines · d66c82ea
      Committed by Kumar Gala
      The Power ISA 2.06 added power-of-two page sizes to the embedded MMU
      architecture, done in such a way as to be code compatible with the
      existing HW.  Make the minor code changes to support both power-of-two
      and power-of-four page sizes (a small conversion sketch follows this
      entry).  Also add some new MAS bits and macros that are defined as
      part of the 2.06 ISA, and rename some things to use the 'Book-3e'
      concept to convey the new MMU that is based on the Freescale Book-E
      MMU programming model.

      Note, it's still invalid to try to use a page size that isn't
      supported by the CPU.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
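
      To a first approximation the two encodings differ only in the base:
      the original FSL TSIZE counts power-of-four KB pages, while the 2.06
      encoding counts power-of-two KB pages.  A hedged sketch of the
      conversion (my reading of the encodings, not kernel code):

          #include <stdio.h>

          static unsigned int ilog2u(unsigned long long v)   /* log2 of a power of two */
          {
              unsigned int r = 0;
              while (v >>= 1)
                  r++;
              return r;
          }

          /* Power-of-4 encoding (classic FSL Book-E): size = 4^TSIZE KB */
          static unsigned int tsize_pow4(unsigned long long size_kb)
          {
              return ilog2u(size_kb) / 2;
          }

          /* Power-of-2 encoding (ISA 2.06): size = 2^TSIZE KB */
          static unsigned int tsize_pow2(unsigned long long size_kb)
          {
              return ilog2u(size_kb);
          }

          int main(void)
          {
              unsigned long long sizes_kb[] = { 4, 64, 1024, 1024 * 1024 };  /* 4K..1G */

              for (int i = 0; i < 4; i++)
                  printf("%8lluKB: pow4 TSIZE=%u  pow2 TSIZE=%u\n",
                         sizes_kb[i], tsize_pow4(sizes_kb[i]), tsize_pow2(sizes_kb[i]));
              return 0;
          }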
  18. 10 Feb 2009 (1 commit)
  19. 29 Jan 2009 (3 commits)
    • powerpc/fsl-booke: Make CAM entries used for lowmem configurable · 96051465
      Committed by Trent Piepho
      On booke processors, the code that maps low memory only uses up to three
      CAM entries, even though there are sixteen and nothing else uses them.
      
      Make this number configurable in the advanced options menu along with max
      low memory size.  If one wants 1 GB of lowmem, then it's typically
      necessary to have four CAM entries.
      Signed-off-by: Trent Piepho <tpiepho@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
    • powerpc/fsl-booke: Allow larger CAM sizes than 256 MB · c8f3570b
      Committed by Trent Piepho
      The code that maps kernel low memory would only use page sizes up to 256
      MB.  On E500v2 pages up to 4 GB are supported.
      
      However, a page must be aligned to a multiple of the page's size.  I.e.
      256 MB pages must be aligned to a 256 MB boundary.  This was enforced by a
      requirement that the physical and virtual addresses of the start of lowmem
      be aligned to 256 MB.  Clearly requiring 1GB or 4GB alignment to allow
      pages of that size isn't acceptable.
      
      To solve this, I simply have adjust_total_lowmem() take alignment into
      account when it decides what size pages to use.  Give it PAGE_OFFSET =
      0x7000_0000, PHYSICAL_START = 0x3000_0000, and 2GB of RAM, and it will
      map pages like this (a standalone sketch reproducing this calculation
      follows this entry):
      PA 0x3000_0000 VA 0x7000_0000 Size 256 MB
      PA 0x4000_0000 VA 0x8000_0000 Size 1 GB
      PA 0x8000_0000 VA 0xC000_0000 Size 256 MB
      PA 0x9000_0000 VA 0xD000_0000 Size 256 MB
      PA 0xA000_0000 VA 0xE000_0000 Size 256 MB
      
      Because the lowmem mapping code now takes alignment into account,
      PHYSICAL_ALIGN can be lowered from 256 MB to 64 MB.  Even lower might be
      possible.  The lowmem code will work down to 4 kB but it's possible some of
      the boot code will fail before then.  Poor alignment will force small pages
      to be used, which combined with the limited number of TLB1 pages available,
      will result in very little memory getting mapped.  So alignments less than
      64 MB probably aren't very useful anyway.
      Signed-off-by: Trent Piepho <tpiepho@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
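
      A standalone sketch that reproduces the mapping above by always
      choosing the largest power-of-four size allowed by the remaining
      memory and by the alignment of both addresses (illustrative code, not
      the kernel's adjust_total_lowmem()):

          #include <stdio.h>
          #include <stdint.h>

          /* Largest power-of-4 size that fits 'remaining' and both alignments. */
          static uint64_t pick_size(uint64_t remaining, uint64_t virt, uint64_t phys)
          {
              for (uint64_t sz = 1ULL << 32; sz >= 4096; sz >>= 2)
                  if (sz <= remaining && (virt % sz) == 0 && (phys % sz) == 0)
                      return sz;
              return 0;
          }

          int main(void)
          {
              uint64_t virt = 0x70000000, phys = 0x30000000;  /* PAGE_OFFSET / PHYSICAL_START */
              uint64_t remaining = 2ULL << 30;                /* 2 GB of RAM */

              while (remaining) {
                  uint64_t sz = pick_size(remaining, virt, phys);
                  if (!sz)
                      break;
                  printf("PA 0x%08llx VA 0x%08llx Size %4llu MB\n",
                         (unsigned long long)phys, (unsigned long long)virt,
                         (unsigned long long)(sz >> 20));
                  virt += sz;
                  phys += sz;
                  remaining -= sz;
              }
              return 0;
          }

      Running it prints the same 256M / 1G / 256M / 256M / 256M sequence as
      the example above.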
    • powerpc/fsl-booke: Remove code duplication in lowmem mapping · f88747e7
      Committed by Trent Piepho
      The code to map lowmem uses three CAM aka TLB[1] entries to cover it.  The
      size of each is stored in three globals named __cam0, __cam1, and __cam2.
      All the code that uses them is duplicated three times for each of the three
      variables.
      
      We have these things called arrays and loops....
      
      Once converted to use an array, it will be easier to make the number of
      CAMs configurable.
      Signed-off-by: Trent Piepho <tpiepho@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
  20. 08 Jan 2009 (2 commits)
  21. 16 Sep 2008 (1 commit)
  22. 24 Apr 2008 (1 commit)
    • [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) · 37dd2bad
      Committed by Kumar Gala
      Added support to allow an 85xx kernel to be run from a non-zero physical
      address (useful for cooperative asymmetric multiprocessing situations and
      kdump).  The support can be configured at compile time by setting
      CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as
      desired.
      
      Alternatively, the kernel build can set CONFIG_RELOCATABLE.  Setting this
      config option causes the kernel to determine at runtime the physical
      addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START.  If
      CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning.
      However, CONFIG_PHYSICAL_START will always be used to set the LOAD program
      header physical address field in the resulting ELF image.
      
      Currently we are limited to running at a physical address that is a
      multiple of 256M.  This is due to how we map TLBs to cover
      lowmem.  This should be fixed to allow 64M or maybe even 16M alignment
      in the future.  It is considered an error to try to run a kernel at a
      non-aligned physical address.
      
      All the magic for this support is accomplished by proper initialization
      of the kernel memory subsystem and use of ARCH_PFN_OFFSET.
      
      The use of ARCH_PFN_OFFSET only affects normal memory and not IO
      mappings; ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET
      (a brief illustration of the ARCH_PFN_OFFSET idea follows this entry).
      
      /dev/mem continues to allow access to any physical address in the system
      regardless of how CONFIG_PHYSICAL_START is set.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
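
      The ARCH_PFN_OFFSET idea, roughly: with a flat mem_map, the struct-page
      index is the pfn minus the pfn of the first byte of RAM, so a kernel
      running at a non-zero physical start still indexes its mem_map from
      zero.  A simplified user-space illustration (not the kernel's actual
      macros):

          #include <stdio.h>

          #define PAGE_SHIFT      12
          #define MEMORY_START    0x20000000UL             /* kernel at 512M, say */
          #define ARCH_PFN_OFFSET (MEMORY_START >> PAGE_SHIFT)

          struct page { int dummy; };
          static struct page mem_map[16];                  /* tiny stand-in for mem_map */

          /* Flatmem-style pfn_to_page: index mem_map from the first pfn of RAM. */
          static struct page *pfn_to_page_model(unsigned long pfn)
          {
              return &mem_map[pfn - ARCH_PFN_OFFSET];
          }

          int main(void)
          {
              unsigned long pa  = MEMORY_START + 3 * (1UL << PAGE_SHIFT);
              unsigned long pfn = pa >> PAGE_SHIFT;

              printf("pa 0x%lx -> mem_map index %ld\n",
                     pa, (long)(pfn_to_page_model(pfn) - mem_map));
              return 0;
          }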
  23. 17 Apr 2008 (3 commits)
  24. 24 Jan 2008 (1 commit)
  25. 08 Oct 2007 (1 commit)
    • [POWERPC] 85xx: Failure with odd memory sizes and CONFIG_HIGHMEM · 873553b3
      Committed by Dale Farnsworth
      The CONFIG_FSL_BOOKE mmu setup code fails when CONFIG_HIGHMEM=y
      and the 3 fixed TLB entries cannot exactly map the lowmem size.
      Each TLB entry can map 4MB, 16MB, 64MB or 256MB, so the failure
      is observed when the kernel lowmem size is not equal to the
      sum of up to 3 of those values.
      
      Normally, memory is sized in nice numbers, but I observed this
      problem while testing a crash dump kernel.  The failure can
      also be observed by artificially reducing the kernel's main
      memory via the mem= kernel command line parameter.
      
      This commit fixes the problem by setting __initial_memory_limit in
      adjust_total_lowmem() (a small sketch of the sizing logic follows this
      entry).
      Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
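
      A small sketch of the sizing logic the fix relies on: map lowmem
      greedily with up to three entries of 4M/16M/64M/256M, then treat
      whatever actually got mapped as the early memory limit instead of
      assuming all of lowmem was covered.  Illustrative only:

          #include <stdio.h>
          #include <stdint.h>

          static const uint64_t cam_sizes[] = {            /* allowed entry sizes */
              256 << 20, 64 << 20, 16 << 20, 4 << 20,
          };

          int main(void)
          {
              uint64_t lowmem = 300ULL << 20;   /* an "odd" size, e.g. from mem=300M */
              uint64_t mapped = 0;

              for (int entry = 0; entry < 3 && mapped < lowmem; entry++)
                  for (int i = 0; i < 4; i++)
                      if (cam_sizes[i] <= lowmem - mapped) {
                          mapped += cam_sizes[i];
                          break;
                      }

              /* The fix: limit early allocations to what was really mapped. */
              printf("lowmem %llu MB, mapped %llu MB -> initial memory limit %llu MB\n",
                     (unsigned long long)(lowmem >> 20),
                     (unsigned long long)(mapped >> 20),
                     (unsigned long long)(mapped >> 20));
              return 0;
          }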
  26. 14 Jun 2007 (1 commit)
  27. 01 Jul 2006 (1 commit)