1. 16 Mar 2012, 1 commit
  2. 13 Oct 2011, 1 commit
  3. 12 Oct 2011, 1 commit
    • powerpc/fsl-booke: Fix setup_initial_memory_limit to not blindly map · 1dc91c3e
      Committed by Kumar Gala
      On FSL Book-E devices we support multiple large TLB sizes and so we can
      get into situations in which the initial 1G TLB size is too big and
      we're asked for a size that is not mappable by a single entry (like
      512M).  The single entry is important because when we bring up
      secondary cores, they need to ensure that any data structure they need
      to access (e.g. the PACA or their stack) is always mapped.
      
      So we really need to determine what size will actually be mapped by the
      first TLB entry to ensure we limit early memory references to that
      region.  We refactor the map_mem_in_cams() code to provide a helper
      function that we can use to determine the size of the first TLB
      entry while taking size and alignment constraints into account.
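      
      A minimal standalone sketch of the sizing rule (the function name is
      hypothetical and the power-of-four size set is modeled on e500v2; the
      real helper also consults the sizes the MMU actually supports):
      
      #include <stdint.h>
      #include <stdio.h>
      
      /* Largest e500v2-style CAM size (a power of four) that fits both the
       * requested size and the alignment of the virtual and physical bases. */
      static uint64_t first_tlb_entry_size(uint64_t ram, uint64_t virt, uint64_t phys)
      {
              uint64_t align = (virt | phys) & (~(virt | phys) + 1); /* lowest set bit */
              uint64_t sz = 1 << 12;                                 /* 4 kB minimum */
      
              while (sz * 4 <= ram && sz * 4 <= align)
                      sz *= 4;
              return sz;
      }
      
      int main(void)
      {
              /* A 512M request from a 1G-aligned base maps 256M in the first entry. */
              printf("%llu MB\n", (unsigned long long)
                     (first_tlb_entry_size(512ULL << 20, 0xc0000000, 0x40000000) >> 20));
              return 0;
      }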
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      1dc91c3e
  4. 14 Oct 2010, 2 commits
    • powerpc/fsl-booke64: Use TLB CAMs to cover linear mapping on FSL 64-bit chips · 55fd766b
      Committed by Kumar Gala
      Freescale parts typically have a TLB array for large mappings that we
      can bolt the linear mapping into.  We reuse the code that already exists
      on PPC32 on the 64-bit side to set up the linear mapping so it is
      covered by bolted TLB entries.  We use a quarter of the variable-size
      TLB array for this purpose.
      
      Additionally, we limit the amount of memory to what we can cover via
      bolted entries so we don't get secondary faults in the TLB miss
      handlers.  We should fix this limitation in the future.
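      
      A standalone sketch of the coverage limit (the entry count and maximum
      entry size are illustrative assumptions; the real code reads the TLB1
      geometry from the hardware):
      
      #include <stdint.h>
      #include <stdio.h>
      
      /* How much linear mapping a quarter of a 64-entry TLB1 array can bolt,
       * assuming each entry maps at most 256M (illustrative numbers). */
      int main(void)
      {
              uint64_t tlb1_entries = 64, max_entry = 256ULL << 20;
              uint64_t num_cams = tlb1_entries / 4;   /* quarter reserved */
              uint64_t ram = 8ULL << 30;              /* 8G present */
              uint64_t covered = num_cams * max_entry;
      
              if (ram > covered)
                      ram = covered;                  /* crop like the patch does */
              printf("linear mapping limited to %llu MB\n",
                     (unsigned long long)(ram >> 20));
              return 0;
      }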
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      55fd766b
    • powerpc: Fix invalid page flags in create TLB CAM path for PTE_64BIT · 92437d41
      Committed by Paul Gortmaker
      There exists a four-line chunk of code which, when configured for a
      64-bit address space, can incorrectly set certain page flags during
      TLB creation.  It turns out that this code isn't currently used,
      but might still serve a purpose.  Since it isn't obvious why it exists
      or why it causes problems, the description below covers both in detail.
      
      For powerpc bootstrap, the physical memory (at most 768M) is mapped
      into the kernel space via the following path:
      
      MMU_init()
          |
          + adjust_total_lowmem()
                  |
                  + map_mem_in_cams()
                          |
                          + settlbcam(i, virt, phys, cam_sz, PAGE_KERNEL_X, 0);
      
      On settlbcam(), the kernel will create TLB entries according to the flag,
      PAGE_KERNEL_X.
      
      settlbcam()
      {
              ...
              TLBCAM[index].MAS1 = MAS1_VALID
                              | MAS1_IPROT | MAS1_TSIZE(tsize) | MAS1_TID(pid);
                              /* ^ These entries cannot be invalidated by the
                               * kernel, since MAS1_IPROT is set in the TLB entry. */
              ...
              if (flags & _PAGE_USER) {
                      TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
                      TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
              }
      
      For classic Book-E, (flags & _PAGE_USER) is zero, so this is fine.
      But on boards like the Freescale P4080 we want to support 36-bit
      physical addressing, so the following options may be set:
      
      CONFIG_FSL_BOOKE=y
      CONFIG_PTE_64BIT=y
      CONFIG_PHYS_64BIT=y
      
      As a result, boards like the P4080 will use the Book3E PTE format,
      as per the file arch/powerpc/include/asm/pgtable-ppc32.h:
      
        * #elif defined(CONFIG_FSL_BOOKE) && defined(CONFIG_PTE_64BIT)
        * #include <asm/pte-book3e.h>
      
      So PAGE_KERNEL_X is __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX) and the
      book3E version of _PAGE_KERNEL_RWX is defined with:
      
        (_PAGE_BAP_SW | _PAGE_BAP_SR | _PAGE_DIRTY | _PAGE_BAP_SX)
      
      Note the _PAGE_BAP_SR, which is also defined in the book3E _PAGE_USER:
      
        #define _PAGE_USER        (_PAGE_BAP_UR | _PAGE_BAP_SR) /* Can be read */
      
      So the possibility exists to wrongly assign the user MAS3_U<RWX> bits
      to kernel (PAGE_KERNEL_X) address space via the following code fragment:
      
              if (flags & _PAGE_USER) {
                 TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
                 TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
              }
      
      Here is a dump of the TLB info from Simics with the above code present:
      ------
      L2 TLB1
                                                  GT                   SSS UUU V I
       Row  Logical           Physical            SS TLPID  TID  WIMGE XWR XWR F P   V
      ----- ----------------- ------------------- -- ----- ----- ----- --- --- - -   -
        0   c0000000-cfffffff 000000000-00fffffff 00     0     0   M   XWR XWR 0 1   1
        1   d0000000-dfffffff 010000000-01fffffff 00     0     0   M   XWR XWR 0 1   1
        2   e0000000-efffffff 020000000-02fffffff 00     0     0   M   XWR XWR 0 1   1
      
      Actually this conditional code was used for two legacy purposes:
      
        1: supporting KGDB breakpoints.  KGDB has since dropped this and
           now uses its core memory-write path to set breakpoints.
      
        2: io_block_mapping(), which created TLB entries at segment size
           (rather than PAGE_SIZE) for device I/O space.  This use case
           has also been removed from the latest PowerPC kernel.
      
      However, there may still be a use case for it in the future, like
      large user pages, so we can't remove it entirely.  As an alternative,
      we match on all bits of _PAGE_USER instead of just any bits, so the
      case where just _PAGE_BAP_SR is set can't sneak through.
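      
      A sketch of the corrected test, mirroring the fragment quoted above;
      the exact-match comparison is the substance of the fix:
      
              /* Require all of _PAGE_USER, not just any one bit of it, so a
               * kernel mapping that happens to carry _PAGE_BAP_SR no longer
               * picks up the user MAS3_U<RWX> bits. */
              if ((flags & _PAGE_USER) == _PAGE_USER) {
                      TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
                      TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
              }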
      
      With this done, the TLB dump shows no XWR bits in the U (user) columns:
      
      -------
      L2 TLB1
                                                  GT                   SSS UUU V I
       Row  Logical           Physical            SS TLPID  TID  WIMGE XWR XWR F P   V
      ----- ----------------- ------------------- -- ----- ----- ----- --- --- - -   -
        0   c0000000-cfffffff 000000000-00fffffff 00     0     0   M   XWR     0 1   1
        1   d0000000-dfffffff 010000000-01fffffff 00     0     0   M   XWR     0 1   1
        2   e0000000-efffffff 020000000-02fffffff 00     0     0   M   XWR     0 1   1
      Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      92437d41
  5. 05 Aug 2010, 2 commits
    • memblock: Remove rmo_size, burry it in arch/powerpc where it belongs · cd3db0c4
      Committed by Benjamin Herrenschmidt
      The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact
      server ppc64, though I hijack it on embedded ppc64 for similar purposes)
      and represents the area of memory that can be accessed in real mode
      (i.e. with the MMU off) or, on embedded, from the exception vectors
      (which are bolted into the TLB), which pretty much boils down to the
      same thing.
      
      We take that out of the generic MEMBLOCK data structure and move it into
      arch/powerpc where it belongs, renaming it to "RMA" while at it.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      cd3db0c4
    • memblock: Introduce default allocation limit and use it to replace explicit ones · e63075a3
      Committed by Benjamin Herrenschmidt
      This introduces memblock.current_limit, which is used to limit allocations
      from memblock_alloc() or memblock_alloc_base(..., MEMBLOCK_ALLOC_ACCESSIBLE).
      
      The old MEMBLOCK_ALLOC_ANYWHERE changes value from 0 to ~(u64)0 and can still
      be used with memblock_alloc_base() to allocate really anywhere.
      
      It is -no-longer- cropped to MEMBLOCK_REAL_LIMIT which disappears.
      
      Note to archs: I'm leaving the default limit at MEMBLOCK_ALLOC_ANYWHERE.  I
      strongly recommend that you set an appropriate limit
      during boot in order to guarantee that a memblock_alloc() at any time
      results in something that is accessible with a simple __va().
      
      The reason is that a subsequent patch will introduce the ability for
      the memblock array to resize itself by reallocating itself.  The memblock
      core will honor the current limit when performing those allocations.
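      
      A sketch of the intended boot-time usage (the setter name
      memblock_set_current_limit and the lowmem_top value are assumptions
      for illustration; only the allocator entry points are named above):
      
      /* Arch boot code: cap memblock allocations to what __va() can reach. */
      memblock_set_current_limit(lowmem_top);   /* e.g. top of the linear map */
      
      /* Honors memblock.current_limit: */
      addr = memblock_alloc(size, align);
      
      /* Still allocates truly anywhere, as before: */
      addr = memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE);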
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      e63075a3
  6. 09 Jul 2010, 1 commit
  7. 17 May 2010, 1 commit
  8. 14 May 2010, 1 commit
  9. 30 Apr 2010, 1 commit
  10. 20 Apr 2010, 1 commit
  11. 15 Dec 2009, 1 commit
  12. 21 Nov 2009, 1 commit
  13. 20 Aug 2009, 1 commit
  14. 24 Mar 2009, 1 commit
    • powerpc/mm: Tweak PTE bit combination definitions · 8d1cf34e
      Committed by Benjamin Herrenschmidt
      This patch tweaks the way some PTE bit combinations are defined, in such a
      way that the 32- and 64-bit variants become almost identical, which will
      make it easier to bring in a new common pte-* file for the new variant
      of the Book3E support.
      
      The combinations of bits defining access to kernel pages are now clearly
      separated from those used by userspace and the core VM.  The
      resulting generated code should remain identical unless I made a mistake.
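      
      A sketch of the resulting shape of the definitions, using the Book3E
      values quoted in commit 92437d41 earlier on this page (the 32-bit side
      is analogous; exact macro bodies may differ):
      
      /* Kernel access combinations, kept apart from the user/VM ones: */
      #define _PAGE_KERNEL_RWX (_PAGE_BAP_SW | _PAGE_BAP_SR | _PAGE_DIRTY | _PAGE_BAP_SX)
      #define PAGE_KERNEL_X    __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX)
      
      /* User access remains expressed through _PAGE_USER: */
      #define _PAGE_USER       (_PAGE_BAP_UR | _PAGE_BAP_SR)  /* Can be read */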
      
      Note: While at it, I removed a nonsensical statement related to CONFIG_KGDB
      in ppc_mmu_32.c which could cause kernel mappings to be user-accessible when
      that option is enabled.  Probably something that bitrotted.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      8d1cf34e
  15. 13 Feb 2009, 2 commits
    • powerpc/fsl-booke: Fix compile warning · 96a8bac5
      Committed by Kumar Gala
      arch/powerpc/mm/fsl_booke_mmu.c: In function 'adjust_total_lowmem':
      arch/powerpc/mm/fsl_booke_mmu.c:221: warning: format '%ld' expects type 'long int', but argument 3 has type 'phys_addr_t'
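      
      The usual fix for this class of warning (a sketch; the variable and
      message are illustrative, not necessarily the exact change) is an
      explicit cast to a fixed-width type:
      
      /* phys_addr_t is 32 or 64 bits wide depending on CONFIG_PHYS_64BIT,
       * so print it through an explicit cast rather than a bare %ld: */
      printk(KERN_INFO "Reduced lowmem to %llu kB\n",
             (unsigned long long)(total_lowmem >> 10));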
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      96a8bac5
    • powerpc/fsl-booke: Add new ISA 2.06 page sizes and MAS defines · d66c82ea
      Committed by Kumar Gala
      The Power ISA 2.06 added power-of-two page sizes to the embedded MMU
      architecture.  It is done in such a way as to be code-compatible with
      the existing hardware.  This makes the minor code changes needed to
      support both power-of-two and power-of-four page sizes.  It also adds
      some new MAS bits and macros that are defined as part of the 2.06 ISA,
      and renames some things to use the 'Book-3e' concept to convey the new
      MMU that is based on the Freescale Book-E MMU programming model.
      
      Note that it is still invalid to try to use a page size that isn't
      supported by the CPU.
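      
      A standalone sketch of the encoding difference (the helper is
      hypothetical; the encodings assumed are 4^TSIZE kB on the classic
      Freescale MMU and 2^TSIZE kB on ISA 2.06):
      
      #include <stdio.h>
      
      /* Hypothetical helper: page size in bytes -> MAS1 TSIZE value.
       * size must be an exact power of two (power of four for classic). */
      static unsigned int size_to_tsize(unsigned long long size, int isa206)
      {
              unsigned int log2kb = 0;
      
              for (size >>= 10; size > 1; size >>= 1)   /* log2 of size in kB */
                      log2kb++;
              return isa206 ? log2kb : log2kb / 2;      /* power-of-4 halves it */
      }
      
      int main(void)
      {
              printf("256M: classic TSIZE=%u, ISA 2.06 TSIZE=%u\n",
                     size_to_tsize(256ULL << 20, 0), size_to_tsize(256ULL << 20, 1));
              return 0;
      }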
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      d66c82ea
  16. 10 Feb 2009, 1 commit
  17. 29 Jan 2009, 3 commits
    • powerpc/fsl-booke: Make CAM entries used for lowmem configurable · 96051465
      Committed by Trent Piepho
      On booke processors, the code that maps low memory only uses up to three
      CAM entries, even though there are sixteen and nothing else uses them.
      
      Make this number configurable in the advanced options menu along with max
      low memory size.  If one wants 1 GB of lowmem, then it's typically
      necessary to have four CAM entries.
      Signed-off-by: Trent Piepho <tpiepho@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      96051465
    • powerpc/fsl-booke: Allow larger CAM sizes than 256 MB · c8f3570b
      Committed by Trent Piepho
      The code that maps kernel low memory would only use page sizes up to 256
      MB.  On E500v2 pages up to 4 GB are supported.
      
      However, a page must be aligned to a multiple of the page's size.  That
      is, 256 MB pages must be aligned to a 256 MB boundary.  This was enforced
      by a requirement that the physical and virtual addresses of the start of
      lowmem be aligned to 256 MB.  Clearly, requiring 1 GB or 4 GB alignment
      to allow pages of that size isn't acceptable.
      
      To solve this, I simply have adjust_total_lowmem() take alignment into
      account when it decides what size pages to use.  Give it PAGE_OFFSET =
      0x7000_0000, PHYSICAL_START = 0x3000_0000, and 2GB of RAM, and it will map
      pages like this:
      PA 0x3000_0000 VA 0x7000_0000 Size 256 MB
      PA 0x4000_0000 VA 0x8000_0000 Size 1 GB
      PA 0x8000_0000 VA 0xC000_0000 Size 256 MB
      PA 0x9000_0000 VA 0xD000_0000 Size 256 MB
      PA 0xA000_0000 VA 0xE000_0000 Size 256 MB
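      
      A standalone sketch of the alignment-aware sizing that reproduces the
      table above (the helper is a stand-in for the adjusted
      adjust_total_lowmem() logic; e500v2 sizes are assumed to be powers of
      four):
      
      #include <stdio.h>
      #include <stdint.h>
      
      /* Largest power-of-4 size that fits the remaining RAM and the
       * alignment of both the virtual and physical addresses. */
      static uint64_t cam_size(uint64_t ram, uint64_t virt, uint64_t phys)
      {
              uint64_t align = (virt | phys) & (~(virt | phys) + 1);
              uint64_t sz = 1 << 12;
      
              while (sz * 4 <= ram && sz * 4 <= align)
                      sz *= 4;
              return sz;
      }
      
      int main(void)
      {
              uint64_t phys = 0x30000000, virt = 0x70000000, ram = 2ULL << 30;
      
              while (ram) {
                      uint64_t sz = cam_size(ram, virt, phys);
      
                      /* Prints the five rows above (the 1 GB row as 1024 MB). */
                      printf("PA 0x%08llx VA 0x%08llx Size %4llu MB\n",
                             (unsigned long long)phys, (unsigned long long)virt,
                             (unsigned long long)(sz >> 20));
                      phys += sz;
                      virt += sz;
                      ram -= sz;
              }
              return 0;
      }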
      
      Because the lowmem mapping code now takes alignment into account,
      PHYSICAL_ALIGN can be lowered from 256 MB to 64 MB.  Even lower might be
      possible.  The lowmem code will work down to 4 kB, but it's possible some
      of the boot code will fail before then.  Poor alignment will force small
      pages to be used, which, combined with the limited number of TLB1 entries
      available, will result in very little memory getting mapped.  So
      alignments of less than 64 MB probably aren't very useful anyway.
      Signed-off-by: Trent Piepho <tpiepho@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      c8f3570b
    • powerpc/fsl-booke: Remove code duplication in lowmem mapping · f88747e7
      Committed by Trent Piepho
      The code to map lowmem uses three CAM (aka TLB[1]) entries to cover it.
      The size of each is stored in three globals named __cam0, __cam1, and
      __cam2.  All the code that uses them is duplicated three times, once for
      each of the three variables.
      
      We have these things called arrays and loops....
      
      Once converted to use an array, it will be easier to make the number of
      CAMs configurable.
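      
      A sketch of the shape of the change (the array and loop body are
      illustrative; settlbcam() is the call quoted elsewhere on this page):
      
      /* Before: three globals, with every use duplicated three times. */
      unsigned long __cam0, __cam1, __cam2;
      
      /* After: one array, so the entry count can later become configurable. */
      unsigned long cam[3];
      for (i = 0; i < ARRAY_SIZE(cam); i++) {
              settlbcam(i, virt, phys, cam[i], PAGE_KERNEL_X, 0);
              virt += cam[i];
              phys += cam[i];
      }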
      Signed-off-by: Trent Piepho <tpiepho@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      f88747e7
  18. 08 Jan 2009, 2 commits
  19. 16 Sep 2008, 1 commit
  20. 24 Apr 2008, 1 commit
    • [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) · 37dd2bad
      Committed by Kumar Gala
      Added support to allow an 85xx kernel to be run from a non-zero physical
      address (useful for cooperative asymmetric multiprocessing situations and
      kdump).  The support can be configured at compile time by setting
      CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as
      desired.
      
      Alternatively, the kernel build can set CONFIG_RELOCATABLE.  Setting this
      config option causes the kernel to determine at runtime the physical
      addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START.  If
      CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning.
      However, CONFIG_PHYSICAL_START will always be used to set the LOAD program
      header physical address field in the resulting ELF image.
      
      Currently we are limited to running at a physical address that is a
      multiple of 256M.  This is due to how we map TLBs to cover
      lowmem.  This should be fixed to allow 64M or maybe even 16M alignment
      in the future.  It is considered an error to try to run a kernel at a
      non-aligned physical address.
      
      All the magic for this support is accomplished by proper initialization
      of the kernel memory subsystem and use of ARCH_PFN_OFFSET.
      
      The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings.
      ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET.
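      
      For reference, the generic flatmem model already honors the offset,
      which is why normal memory "just works" (a sketch of the
      asm-generic/memory_model.h idea, not the 85xx patch itself):
      
      /* Flatmem pfn<->page conversions subtract ARCH_PFN_OFFSET, so
       * mem_map[0] corresponds to the first page of RAM even when RAM
       * starts at a non-zero physical address. */
      #define __pfn_to_page(pfn)   (mem_map + ((pfn) - ARCH_PFN_OFFSET))
      #define __page_to_pfn(page)  ((unsigned long)((page) - mem_map) + ARCH_PFN_OFFSET)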
      
      /dev/mem continues to allow access to any physical address in the system
      regardless of how CONFIG_PHYSICAL_START is set.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      37dd2bad
  21. 17 Apr 2008, 3 commits
  22. 24 Jan 2008, 1 commit
  23. 08 Oct 2007, 1 commit
    • [POWERPC] 85xx: Failure with odd memory sizes and CONFIG_HIGHMEM · 873553b3
      Committed by Dale Farnsworth
      The CONFIG_FSL_BOOKE mmu setup code fails when CONFIG_HIGHMEM=y
      and the 3 fixed TLB entries cannot exactly map the lowmem size.
      Each TLB entry can map 4MB, 16MB, 64MB or 256MB, so the failure
      is observed when the kernel lowmem size is not equal to the
      sum of up to 3 of those values.
      
      Normally, memory is sized in nice numbers, but I observed this
      problem while testing a crash dump kernel.  The failure can
      also be observed by artificially reducing the kernel's main
      memory via the mem= kernel command line parameter.
      
      This commit fixes the problem by setting __initial_memory_limit
      in adjust_total_lowmem().
      Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      873553b3
  24. 14 Jun 2007, 1 commit
  25. 01 Jul 2006, 1 commit
  26. 14 Nov 2005, 1 commit
  27. 26 Sep 2005, 1 commit
    • powerpc: Merge enough to start building in arch/powerpc. · 14cf11af
      Committed by Paul Mackerras
      This creates the directory structure under arch/powerpc and a bunch
      of Kconfig files.  It does a first-cut merge of arch/powerpc/mm,
      arch/powerpc/lib and arch/powerpc/platforms/powermac.  This is enough
      to build a 32-bit powermac kernel with ARCH=powerpc.
      
      For now we are getting some unmerged files from arch/ppc/kernel and
      arch/ppc/syslib, or arch/ppc64/kernel.  This makes some minor changes
      to files in those directories and files outside arch/powerpc.
      
      The boot directory is still not merged.  That's going to be interesting.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      14cf11af
  28. 26 Jun 2005, 2 commits
  29. 22 Jun 2005, 1 commit
  30. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4