1. 14 Dec 2015, 2 commits
  2. 28 Oct 2015, 1 commit
    • powerpc/booke: Only use VIRT_PHYS_OFFSET on booke32 · ffda09a9
      Authored by Scott Wood
      The way VIRT_PHYS_OFFSET is defined is not correct on book3e-64,
      because it does not account for CONFIG_RELOCATABLE other than via
      the 32-bit-only virt_phys_offset.
      
      book3e-64 can (and if the comment about a GCC miscompilation is still
      relevant, should) use the normal ppc64 __va/__pa.
      
      At this point, only booke-32 will use VIRT_PHYS_OFFSET, so given the
      issues with its calculation, restrict its definition to booke-32.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      ffda09a9
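      A minimal sketch of the resulting arrangement, assuming the kernel's
      CONFIG_BOOKE/CONFIG_PPC32 symbols (the __pa() body is inferred by
      symmetry from the __va() quoted under commit 368ff8f1 below, not
      quoted from this patch):
      
      	#if defined(CONFIG_BOOKE) && defined(CONFIG_PPC32)
      	/* booke32 keeps the VIRT_PHYS_OFFSET-based translation */
      	#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))
      	#define __pa(x) ((unsigned long)(x) - VIRT_PHYS_OFFSET)
      	#endif	/* book3e-64 falls back to the normal ppc64 __va()/__pa() */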
  3. 12 Oct 2015, 1 commit
  4. 09 Oct 2015, 1 commit
  5. 01 Oct 2015, 1 commit
  6. 11 May 2015, 1 commit
  7. 14 Nov 2014, 2 commits
  8. 09 Aug 2014, 1 commit
    • arm64,ia64,ppc,s390,sh,tile,um,x86,mm: remove default gate area · a6c19dfe
      Authored by Andy Lutomirski
      The core mm code will provide a default gate area based on
      FIXADDR_USER_START and FIXADDR_USER_END if
      !defined(__HAVE_ARCH_GATE_AREA) && defined(AT_SYSINFO_EHDR).
      
      This default is only useful for ia64.  arm64, ppc, s390, sh, tile, 64-bit
      UML, and x86_32 have their own code just to disable it.  arm, 32-bit UML,
      and x86_64 have gate areas, but they have their own implementations.
      
      This gets rid of the default and moves the code into ia64.
      
      This should save some code on architectures without a gate area: it's now
      possible to inline the gate_area functions in the default case.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Acked-by: Nathan Lynch <nathan_lynch@mentor.com>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [in principle]
      Acked-by: Richard Weinberger <richard@nod.at> [for um]
      Acked-by: Will Deacon <will.deacon@arm.com> [for arm64]
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Nathan Lynch <Nathan_Lynch@mentor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a6c19dfe
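      With the default gone, an architecture without a gate area can get by
      with trivial inline stubs; a hedged sketch using the kernel's
      gate-area function names (placement illustrative):
      
      	static inline struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
      	{
      		return NULL;	/* no gate VMA on this architecture */
      	}
      	static inline int in_gate_area(struct mm_struct *mm, unsigned long addr)
      	{
      		return 0;	/* no address ever falls in a gate area */
      	}
      	static inline int in_gate_area_no_mm(unsigned long addr)
      	{
      		return 0;
      	}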
  9. 30 Oct 2013, 2 commits
  10. 27 Aug 2013, 1 commit
    • powerpc: Work around gcc miscompilation of __pa() on 64-bit · bdbc29c1
      Authored by Paul Mackerras
      On 64-bit, __pa(&static_var) gets miscompiled by recent versions of
      gcc as something like:
      
              addis 3,2,.LANCHOR1+4611686018427387904@toc@ha
              addi 3,3,.LANCHOR1+4611686018427387904@toc@l
      
      This ends up effectively ignoring the offset, since its bottom 32 bits
      are zero, and means that the result of __pa() still has 0xC in the top
      nibble.  This happens with gcc 4.8.1, at least.
      
      To work around this, for 64-bit we make __pa() use an AND operator,
      and for symmetry, we make __va() use an OR operator.  Using an AND
      operator rather than a subtraction ends up with slightly shorter code
      since it can be done with a single clrldi instruction, whereas it
      takes three instructions to form the constant (-PAGE_OFFSET) and add
      it on.  (Note that MEMORY_START is always 0 on 64-bit.)
      
      CC: <stable@vger.kernel.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      bdbc29c1
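      A sketch of the resulting definitions, reconstructed from the
      description above (MEMORY_START is 0, so masking off the top nibble
      is equivalent to subtracting PAGE_OFFSET):
      
      	/* OR in the 0xC... offset instead of adding it */
      	#define __va(x) ((void *)((unsigned long)(x) | PAGE_OFFSET))
      	/* AND off the top nibble: a single clrldi instruction */
      	#define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)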
  11. 30 Apr 2013, 3 commits
    • powerpc: Reduce PTE table memory wastage · 5c1f6ee9
      Authored by Aneesh Kumar K.V
      We allocate one page for the last level of linux page table. With THP and
      large page size of 16MB, that would mean we are wasting large part
      of that page. To map 16MB area, we only need a PTE space of 2K with 64K
      page size. This patch reduces the space wastage by sharing the page
      allocated for the last level of the linux page table among multiple pmd
      entries. We call these smaller chunks PTE page fragments, and the
      allocated page a PTE page.
      
      In order to support systems that don't have 64K HPTE support, we also
      add another 2K to the PTE page fragment. The second half of the PTE fragments
      is used for storing slot and secondary bit information of an HPTE. With this
      we now have a 4K PTE fragment.
      
      We use a simple approach to share the PTE page. On allocation, we bump the
      PTE page refcount to 16 and share the PTE page with the next 16 pte alloc
      requests. This should help the node locality of the PTE page fragment,
      assuming that the immediate pte alloc requests will mostly come from the
      same NUMA node. We don't try to reuse the freed PTE page fragment, hence
      we could be wasting some space.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      5c1f6ee9
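      An illustrative sketch of the sharing scheme (simplified to a single
      global cursor with no locking; the cursor and helper names are
      hypothetical, and PAGE_SIZE is 64K in this configuration):
      
      	#define PTE_FRAG_SIZE	4096	/* 2K of PTEs + 2K of HPTE slot info */
      	#define PTE_FRAG_NR	16	/* 4K fragments per 64K PTE page */
      
      	static void *pte_frag_cursor;	/* next unused fragment, if any */
      
      	static void *pte_fragment_alloc(void)
      	{
      		void *ret;
      
      		if (!pte_frag_cursor) {
      			struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
      
      			if (!page)
      				return NULL;
      			/* share this page with the next 16 pte alloc requests */
      			set_page_count(page, PTE_FRAG_NR);
      			pte_frag_cursor = page_address(page);
      		}
      		ret = pte_frag_cursor;
      		pte_frag_cursor += PTE_FRAG_SIZE;
      		/* 16th fragment handed out: start a fresh page next time */
      		if (!((unsigned long)pte_frag_cursor & (PAGE_SIZE - 1)))
      			pte_frag_cursor = NULL;
      		return ret;
      	}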
    • powerpc: Switch 16GB and 16MB explicit hugepages to a different page table format · e2b3d202
      Authored by Aneesh Kumar K.V
      We will be switching PMD_SHIFT to 24 bits to facilitate the THP implementation.
      With PMD_SHIFT set to 24, we now have 16MB huge pages allocated at the PGD level.
      That means a 32-bit process cannot allocate normal pages at
      all, because we cover the entire address space with one pgd entry. Fix this
      by switching to a new page table format for hugepages. With the new page table
      format for 16GB and 16MB hugepages we won't allocate a hugepage directory. Instead
      we encode the PTE information directly at the directory level. This forces the 16MB
      hugepage to the PMD level. This will also make the page table walk much simpler later
      when we add THP support.
      
      With the new table format we have 4 cases for pgds and pmds:
      (1) invalid (all zeroes)
      (2) pointer to next table, as normal; bottom 6 bits == 0
      (3) leaf pte for huge page, bottom two bits != 00
      (4) hugepd pointer, bottom two bits == 00, next 4 bits indicate size of table
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      e2b3d202
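      The four cases above are distinguishable from the low bits alone; a
      hedged decode sketch (the enum and helper names are illustrative):
      
      	enum pd_kind { PD_INVALID, PD_NEXT_TABLE, PD_LEAF_HUGE_PTE, PD_HUGEPD };
      
      	static inline enum pd_kind classify_pd(unsigned long entry)
      	{
      		if (entry == 0)
      			return PD_INVALID;	/* case 1: all zeroes */
      		if (entry & 0x3)
      			return PD_LEAF_HUGE_PTE;	/* case 3: bottom two bits != 00 */
      		if ((entry & 0x3f) == 0)
      			return PD_NEXT_TABLE;	/* case 2: bottom 6 bits == 0 */
      		return PD_HUGEPD;	/* case 4: bits 2-5 encode the table size */
      	}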
    • powerpc: New hugepage directory format · cf9427b8
      Authored by Aneesh Kumar K.V
      Change the hugepage directory format so that we can have leaf ptes directly
      at the page directory level, avoiding the allocation of a hugepage directory.
      
      With the new table format we have 3 cases for pgds and pmds:
      (1) invalid (all zeroes)
      (2) pointer to next table, as normal; bottom 6 bits == 0
      (3) hugepd pointer, bottom two bits == 00, next 4 bits indicate size of table
      
      Instead of storing the shift value in the hugepd pointer, we use an
      mmu_psize_def index, so that we can fit all the supported hugepage sizes in 4 bits.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      cf9427b8
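      A sketch of recovering the page shift from that 4-bit index, assuming
      the standard powerpc mmu_psize_defs[] table (the bit position of the
      index within the hugepd is illustrative):
      
      	static inline unsigned int hugepd_page_shift(unsigned long hpd)
      	{
      		unsigned int psize = (hpd >> 2) & 0xf;	/* 4-bit psize index */
      
      		return mmu_psize_defs[psize].shift;	/* e.g. 24 for 16MB */
      	}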
  12. 20 Dec 2011, 2 commits
    • powerpc: Define virtual-physical translations for RELOCATABLE · 368ff8f1
      Authored by Suzuki Poulose
      We find the runtime address of _stext and relocate ourselves based
      on the following calculation.
      
      	virtual_base = ALIGN(KERNELBASE,KERNEL_TLB_PIN_SIZE) +
      			MODULO(_stext.run,KERNEL_TLB_PIN_SIZE)
      
      relocate() is called with the Effective Virtual Base Address (as
      shown below)
      
                  | Phys. Addr| Virt. Addr |
      Page        |------------------------|
      Boundary    |           |            |
                  |           |            |
                  |           |            |
      Kernel Load |___________|_ __ _ _ _ _|<- Effective
      Addr(_stext)|           |      ^     |Virt. Base Addr
                  |           |      |     |
                  |           |      |     |
                  |           |reloc_offset|
                  |           |      |     |
                  |           |      |     |
                  |           |______v_____|<-(KERNELBASE)%TLB_SIZE
                  |           |            |
                  |           |            |
                  |           |            |
      Page        |-----------|------------|
      Boundary    |           |            |
      
      On BookE, we need __va() & __pa() early in the boot process to access
      the device tree.
      
      Currently this has been defined as:
      
      #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) -
      						PHYSICAL_START + KERNELBASE)
      where:
       PHYSICAL_START is kernstart_addr - a variable updated at runtime.
       KERNELBASE	is the compile time Virtual base address of kernel.
      
      This won't work for us, as kernstart_addr is dynamic and will yield different
      results for __va()/__pa() for the same mapping.
      
      e.g.,
      
      Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
      PAGE_OFFSET).
      
      In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M
      
      Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
      		= 0xbc100000, which is wrong.
      
      It should be: 0xc0000000 + 0x100000 = 0xc0100000.
      
      On platforms which support AMP, like PPC_47x (based on 44x), the kernel
      could be loaded at highmem. Hence we cannot always depend on the compile
      time constants for mapping.
      
      Here are the possible solutions:
      
      1) Update kernstart_addr (PHYSICAL_START) to match the physical address of the
      compile-time KERNELBASE value, instead of the actual physical address of _stext.
      
      The disadvantage is that we may break other users of PHYSICAL_START. They
      could be replaced with __pa(_stext).
      
      2) Redefine __va() & __pa() with relocation offset
      
      #ifdef	CONFIG_RELOCATABLE_PPC32
      #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + (KERNELBASE + RELOC_OFFSET)))
      #define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + RELOC_OFFSET))
      #endif
      
      where, RELOC_OFFSET could be
      
        a) A variable, say relocation_offset (like kernstart_addr), updated
           at boot time. This impacts performance, as we have to load an additional
           variable from memory.
      
      		OR
      
        b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
                            (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))
      
         This introduces more calculations for doing the translation.
      
      3) Redefine __va() & __pa() with a new variable
      
      i.e,
      
      #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))
      
      where VIRT_PHYS_OFFSET :
      
      #ifdef CONFIG_RELOCATABLE_PPC32
      #define VIRT_PHYS_OFFSET virt_phys_offset
      #else
      #define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
      #endif /* CONFIG_RELOCATABLE_PPC32 */
      
      where virt_phys_offset is updated at runtime to:
      
      	Effective KERNELBASE - kernstart_addr.
      
      Taking our example, above:
      
      virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
      		 = 0xc0400000 - 0x400000
      		 = 0xc0000000
      	and
      
      	__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
      	 which is what we want.
      
      I have implemented (3) in the following patch, which has the same cost of
      operation as the existing one.
      
      I have tested the patches on 440x platforms only. However this should
      work fine for PPC_47x also, as we only depend on the runtime address
      and the current TLB XLAT entry for the startup code, which is available
      in r25. I don't have access to a 47x board yet. So, it would be great if
      somebody could test this on 47x.
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
      368ff8f1
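      A sketch of the boot-time update behind option (3), just to make the
      arithmetic concrete (the real assignment lives in early BookE setup
      code; the function name here is hypothetical):
      
      	extern phys_addr_t kernstart_addr;	/* runtime physical load addr */
      	unsigned long virt_phys_offset;
      
      	void early_set_virt_phys_offset(unsigned long effective_kernelbase)
      	{
      		/* e.g. 0xc0400000 - 0x400000 = 0xc0000000, as above */
      		virt_phys_offset = effective_kernelbase -
      				   (unsigned long)kernstart_addr;
      	}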
    • powerpc: Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE · 0f890c8d
      Authored by Suzuki Poulose
      The current implementation of CONFIG_RELOCATABLE in BookE is based
      on mapping the page aligned kernel load address to KERNELBASE. This
      approach however is not enough for platforms, where the TLB page size
      is large (e.g., 256M on 44x). So we are renaming the RELOCATABLE used
      currently in BookE to DYNAMIC_MEMSTART to reflect the actual method.
      
      The CONFIG_RELOCATABLE for PPC32 (BookE), based on processing of the
      dynamic relocations, will be introduced later in the patch series.
      
      This change would allow the use of the old method of RELOCATABLE for
      platforms which can afford to enforce the page alignment (platforms with
      smaller TLB size).
      
      Changes since v3:
      
      * Introduced a new config, NONSTATIC_KERNEL, to denote a kernel which is
        either RELOCATABLE or DYNAMIC_MEMSTART (suggested by Josh Boyer)
      Suggested-by: Scott Wood <scottwood@freescale.com>
      Tested-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Scott Wood <scottwood@freescale.com>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: Josh Boyer <jwboyer@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: linux ppc dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
      0f890c8d
  13. 28 Nov 2011, 1 commit
    • powerpc: Implement CONFIG_STRICT_DEVMEM · 1d54cf2b
      Authored by sukadev@linux.vnet.ibm.com
      As described in the help text in the patch, this token restricts general
      access to /dev/mem as a way of increasing security. Specifically, access
      to exclusive IOMEM and kernel RAM is denied unless CONFIG_STRICT_DEVMEM is
      set to 'n'.
      
      Implement the 'devmem_is_allowed()' interface for powerpc. It will be
      called from range_is_allowed() when userspace attempts to access /dev/mem.
      
      This patch is based on an earlier patch from Steve Best and with input from
      Paul Mackerras and Scott Wood.
      
      [BenH] Fixed a typo or two and removed the generic change which should
             be submitted as a separate patch
      Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      1d54cf2b
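      A hedged sketch of the hook (the real patch also denies exclusive
      IOMEM; page_is_ram() is the existing powerpc helper):
      
      	/* Called via range_is_allowed() on /dev/mem access. */
      	int devmem_is_allowed(unsigned long pfn)
      	{
      		if (page_is_ram(pfn))
      			return 0;	/* kernel RAM: refuse */
      		return 1;		/* other physical ranges: allow */
      	}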
  14. 20 Sep 2011, 1 commit
    • powerpc: Hugetlb for BookE · 41151e77
      Authored by Becky Bruce
      Enable hugepages on Freescale BookE processors.  This allows the kernel to
      use huge TLB entries to map pages, which can greatly reduce the number of
      TLB misses and the amount of TLB thrashing experienced by applications with
      large memory footprints.  Care should be taken when using this on FSL
      processors, as the number of large TLB entries supported by the core is low
      (16-64) on current processors.
      
      The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g.
      Page sizes larger than the max zone size are called "gigantic" pages and
      must be allocated on the command line (and cannot be deallocated).
      
      This is currently only fully implemented for Freescale 32-bit BookE
      processors, but there is some infrastructure in the code for
      64-bit BookE.
      Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      41151e77
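      For example, gigantic pages would be reserved with kernel command-line
      parameters along these lines (hugepagesz=/hugepages= are the generic
      hugetlb parameters; the counts here are illustrative):
      
      	hugepagesz=256m hugepages=4 hugepagesz=1g hugepages=2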
  15. 30 Mar 2011, 1 commit
  16. 07 Feb 2011, 1 commit
  17. 27 Apr 2010, 1 commit
    • powerpc/fsl-booke: Fix CONFIG_RELOCATABLE support on FSL Book-E ppc32 · dbc9632a
      Authored by Kumar Gala
      The following commit broke CONFIG_RELOCATABLE support on FSL Book-E
      parts:
      
      commit 549e8152
      Author: Paul Mackerras <paulus@samba.org>
      Date:   Sat Aug 30 11:43:47 2008 +1000
      
          powerpc: Make the 64-bit kernel as a position-independent executable
      
      The change to __va and __pa to use PAGE_OFFSET & MEMORY_START causes
      problems on the Book-E parts because we don't know MEMORY_START until
      after we parse the device tree.  We need __va to work properly even to
      parse the device tree, so we have a chicken-and-egg problem.  So go back
      to using the other definition of __va/__pa on CONFIG_BOOKE and use the
      PAGE_OFFSET/MEMORY_START version on "Classic" PPC64.
      
      Also updated casts to handle phys_addr_t being a different size from
      unsigned long (i.e. 36-bit physical on PPC32).
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      dbc9632a
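      A hedged reconstruction of the resulting split (the Book-E branch is
      the PHYSICAL_START/KERNELBASE form quoted under commit 368ff8f1
      above; classic PPC64 keeps the form described under commit 549e8152
      below):
      
      	#ifdef CONFIG_BOOKE
      	/* usable before the device tree gives us MEMORY_START */
      	#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + KERNELBASE))
      	#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - KERNELBASE)
      	#endif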
  18. 30 Oct 2009, 1 commit
    • powerpc/mm: Allow more flexible layouts for hugepage pagetables · a4fe3ce7
      Authored by David Gibson
      Currently each available hugepage size uses a slightly different
      pagetable layout: that is, the bottom level table of pointers to
      hugepages is a different size, and may branch off from the normal page
      tables at a different level.  Every hugepage aware path that needs to
      walk the pagetables must therefore look up the hugepage size from the
      slice info first, and work out the correct way to walk the pagetables
      accordingly.  Future hardware is likely to add more possible hugepage
      sizes, more layout options and more mess.
      
      This patch therefore reworks the handling of hugepage pagetables to
      reduce this complexity.  In the new scheme, instead of having to
      consult the slice mask, pagetable walking code can check a flag in the
      PGD/PUD/PMD entries to see where to branch off to hugepage pagetables,
      and the entry also contains the information (essentially the hugepage
      shift) necessary to then interpret that table without recourse to the
      slice mask.  This scheme can be extended neatly to handle multiple
      levels of self-describing "special" hugepage pagetables, although for
      now we assume only one level exists.
      
      This approach means that only the pagetable allocation path needs to
      know how the pagetables should be set out.  All other (hugepage)
      pagetable walking paths can just interpret the structure as they go.
      
      There already was a flag bit in PGD/PUD/PMD entries for hugepage
      directory pointers, but it was only used for debug.  We alter that
      flag bit to instead be a 0 in the MSB to indicate a hugepage pagetable
      pointer (normally it would be 1 since the pointer lies in the linear
      mapping).  This means that asm pagetable walking can test for (and
      punt on) hugepage pointers with the same test that checks for
      unpopulated page directory entries (beq becomes bge), since hugepage
      pointers will always be positive, and normal pointers always negative.
      
      While we're at it, we get rid of the confusing (and grep defeating)
      #defining of hugepte_shift to be the same thing as mmu_huge_psizes.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      a4fe3ce7
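      A sketch of the sign trick (the helper name is hypothetical): pointers
      into the linear mapping have the MSB set and test negative, hugepage
      pagetable pointers test positive, so one signed comparison covers both
      the empty and the hugepage case.
      
      	static inline int pd_is_hugepage_table(unsigned long entry)
      	{
      		/* zero: unpopulated; positive: hugepage pagetable pointer */
      		return entry != 0 && (long)entry >= 0;
      	}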
  19. 20 Aug 2009, 1 commit
  20. 21 May 2009, 1 commit
  21. 15 Feb 2009, 1 commit
    • powerpc/44x: Support for 256KB PAGE_SIZE · e1240122
      Authored by Yuri Tikhonov
      This patch adds support for 256KB pages on ppc44x-based boards.
      
      For simplification of implementation with 256KB pages we still assume
      2-level paging. As a side effect this leads to wasting extra memory space
      reserved for PTE tables: only 1/4 of pages allocated for PTEs are
      actually used. But this may be an acceptable trade-off to achieve the
      high performance we have with big PAGE_SIZEs in some applications (e.g.
      RAID).
      
      Also with 256KB PAGE_SIZE we increase THREAD_SIZE up to 32KB to minimize
      the risk of stack overflows in the case of on-stack arrays whose size
      depends on the page size (e.g. multipage BIOs, NTFS, etc.).
      
      With 256KB PAGE_SIZE we need to decrease PKMAP_ORDER at least down
      to 9, otherwise all high memory (2 ^ 10 * PAGE_SIZE == 256MB) would be
      occupied by PKMAP addresses, leaving no place for vmalloc. We do not
      separate PKMAP_ORDER for 256K from 16K/64K PAGE_SIZE here; actually that
      value of 10 in the 16K/64K support had been selected rather intuitively.
      Thus now for all cases of PAGE_SIZE on ppc44x (including the default, 4KB,
      one) we have 512 pages for PKMAP.
      
      Because the ELF standard supports only page sizes up to 64K, you should
      use binutils newer than 2.17.50.0.3 with '-zmax-page-size' set to 256K
      for building applications which are to be run with the 256KB-page-sized
      kernel. If using older binutils, you should patch them as follows:
      
      	--- binutils/bfd/elf32-ppc.c.orig
      	+++ binutils/bfd/elf32-ppc.c
      
      	-#define ELF_MAXPAGESIZE                0x10000
      	+#define ELF_MAXPAGESIZE                0x40000
      
      One more restriction we currently have with 256KB page sizes is the inability
      to use shmem safely, so, for now, 256KB is available only if you turn
      the CONFIG_SHMEM option off (another variant is to use BROKEN).
      Though, if you need shmem with 256KB pages, you can always remove the !SHMEM
      dependency in 'config PPC_256K_PAGES' and use the workaround available here:
      http://lkml.org/lkml/2008/12/19/20
      Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
      Signed-off-by: Ilya Yanok <yanok@emcraft.com>
      Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
      e1240122
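      With a new-enough binutils the same limit can be raised from the
      linker command line instead of patching the source; a hedged usage
      example (0x40000 == 256K):
      
      	powerpc-linux-gcc -Wl,-zmax-page-size=0x40000 -o app app.c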
  22. 29 Dec 2008, 1 commit
  23. 21 Oct 2008, 1 commit
  24. 16 Oct 2008, 1 commit
    • powerpc: fix linux-next build failure · 463baa8a
      Authored by Stephen Rothwell
      Today's linux-next build (powerpc allyesconfig) failed like this:
      
      In file included from arch/powerpc/include/asm/mmu-hash64.h:17,
                       from arch/powerpc/include/asm/mmu.h:8,
                       from arch/powerpc/include/asm/pgtable.h:8,
                       from arch/powerpc/mm/slb.c:20:
      arch/powerpc/include/asm/page.h:76: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'memstart_addr'
      arch/powerpc/include/asm/page.h:77: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'kernstart_addr'
      
      Caused by commit 600715dc ("generic: add
      phys_addr_t for holding physical addresses") from the tip-core tree.
      This only fails if CONFIG_RELOCATABLE is set.
      
      So include linux/types.h (which now provides phys_addr_t) instead of
      asm/types.h in asm/page.h for the CONFIG_RELOCATABLE case.
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: ppc-dev <linuxppc-dev@ozlabs.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      463baa8a
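      A sketch of the fix, assuming phys_addr_t now comes from linux/types.h
      (per the commit named above; the extern declarations mirror the two
      error lines):
      
      	#ifdef CONFIG_RELOCATABLE
      	#include <linux/types.h>	/* phys_addr_t lives here now */
      	extern phys_addr_t memstart_addr;
      	extern phys_addr_t kernstart_addr;
      	#endif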
  25. 16 Sep 2008, 1 commit
    • powerpc: Make the 64-bit kernel as a position-independent executable · 549e8152
      Authored by Paul Mackerras
      This implements CONFIG_RELOCATABLE for 64-bit by building the kernel as
      a position-independent executable (PIE) when it is set.  This involves
      processing the dynamic relocations in the image in the early stages of
      booting, even if the kernel is being run at the address it is linked at,
      since the linker does not necessarily fill in words in the image for
      which there are dynamic relocations.  (In fact the linker does fill in
      such words for 64-bit executables, though not for 32-bit executables,
      so in principle we could avoid calling relocate() entirely when we're
      running a 64-bit kernel at the linked address.)
      
      The dynamic relocations are processed by a new function relocate(addr),
      where the addr parameter is the virtual address where the image will be
      run.  In fact we call it twice; once before calling prom_init, and again
      when starting the main kernel.  This means that reloc_offset() returns
      0 in prom_init (since it has been relocated to the address it is running
      at), which necessitated a few adjustments.
      
      This also changes __va and __pa to use an equivalent definition that is
      simpler.  With the relocatable kernel, PAGE_OFFSET and MEMORY_START are
      constants (for 64-bit) whereas PHYSICAL_START is a variable (and
      KERNELBASE ideally should be too, but isn't yet).
      
      With this, relocatable kernels still copy themselves down to physical
      address 0 and run there.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      549e8152
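      A hedged reconstruction of the simpler definitions described above
      (PAGE_OFFSET and MEMORY_START are link-time constants on 64-bit, so
      these work wherever the relocated kernel lands):
      
      	#define __va(x) ((void *)((unsigned long)(x) - MEMORY_START + PAGE_OFFSET))
      	#define __pa(x) ((unsigned long)(x) + MEMORY_START - PAGE_OFFSET)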
  26. 04 Aug 2008, 1 commit
  27. 25 Jul 2008, 1 commit
    • PAGE_ALIGN(): correctly handle 64-bit values on 32-bit architectures · 27ac792c
      Authored by Andrea Righi
      On 32-bit architectures PAGE_ALIGN() truncates 64-bit values to the 32-bit
      boundary. For example:
      
      	u64 val = PAGE_ALIGN(size);
      
      always returns a value < 4GB even if size is greater than 4GB.
      
      The problem resides in PAGE_MASK definition (from include/asm-x86/page.h for
      example):
      
      #define PAGE_SHIFT      12
      #define PAGE_SIZE       (_AC(1,UL) << PAGE_SHIFT)
      #define PAGE_MASK       (~(PAGE_SIZE-1))
      ...
      #define PAGE_ALIGN(addr)       (((addr)+PAGE_SIZE-1)&PAGE_MASK)
      
      The "~" is performed on a 32-bit value, so everything in "and" with
      PAGE_MASK greater than 4GB will be truncated to the 32-bit boundary.
      Using the ALIGN() macro seems to be the right way, because it uses
      typeof(addr) for the mask.
      
      Also move the PAGE_ALIGN() definitions out of include/asm-*/page.h in
      include/linux/mm.h.
      
      See also lkml discussion: http://lkml.org/lkml/2008/6/11/237
      
      [akpm@linux-foundation.org: fix drivers/media/video/uvc/uvc_queue.c]
      [akpm@linux-foundation.org: fix v850]
      [akpm@linux-foundation.org: fix powerpc]
      [akpm@linux-foundation.org: fix arm]
      [akpm@linux-foundation.org: fix mips]
      [akpm@linux-foundation.org: fix drivers/media/video/pvrusb2/pvrusb2-dvb.c]
      [akpm@linux-foundation.org: fix drivers/mtd/maps/uclinux.c]
      [akpm@linux-foundation.org: fix powerpc]
      Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
      Cc: <linux-arch@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      27ac792c
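      The fix in miniature: building the mask in typeof(addr) keeps the "~"
      in 64 bits. This expansion is equivalent to, not quoted from, the
      kernel's ALIGN():
      
      	/* mask computed in typeof(x): a u64 input keeps all 64 bits */
      	#define ALIGN(x, a)	(((x) + ((typeof(x))(a) - 1)) & ~((typeof(x))(a) - 1))
      	#define PAGE_ALIGN(addr)	ALIGN(addr, PAGE_SIZE)
      
      	/* u64 val = PAGE_ALIGN(0x100000001ULL);
      	 * => 0x100001000 here, 0x00001000 with the old 32-bit mask */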
  28. 24 Apr 2008, 1 commit
    • [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) · 37dd2bad
      Authored by Kumar Gala
      Added support to allow an 85xx kernel to be run from a non-zero physical
      address (useful for cooperative asymmetric multiprocessing situations and
      kdump).  The support can be configured at compile time by setting
      CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as
      desired.
      
      Alternatively, the kernel build can set CONFIG_RELOCATABLE.  Setting this
      config option causes the kernel to determine at runtime the physical
      addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START.  If
      CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning.
      However, CONFIG_PHYSICAL_START will always be used to set the LOAD program
      header physical address field in the resulting ELF image.
      
      Currently we are limited to running at a physical address that is a
      multiple of 256M.  This is due to how we map TLBs to cover
      lowmem.  This should be fixed to allow 64M or maybe even 16M alignment
      in the future.  It is considered an error to try and run a kernel at a
      non-aligned physical address.
      
      All the magic for this support is accomplished by proper initialization
      of the kernel memory subsystem and use of ARCH_PFN_OFFSET.
      
      The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings.
      ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET.
      
      /dev/mem continues to allow access to any physical address in the system
      regardless of how CONFIG_PHYSICAL_START is set.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      37dd2bad
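      The ARCH_PFN_OFFSET mentioned above shifts the first valid page frame
      number up to the runtime memory start; a sketch of the standard
      powerpc definition:
      
      	/* normal memory starts at MEMORY_START, so the first pfn is: */
      	#define ARCH_PFN_OFFSET	((unsigned long)(MEMORY_START >> PAGE_SHIFT))
      	/* mem_map[] is then indexed by (pfn - ARCH_PFN_OFFSET); ioremap
      	 * goes through map_page and is unaffected */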
  29. 17 Apr 2008, 1 commit
  30. 09 Feb 2008, 1 commit
    • CONFIG_HIGHPTE vs. sub-page page tables. · 2f569afd
      Authored by Martin Schwidefsky
      Background: I've implemented 1K/2K page tables for s390.  These sub-page
      page tables are required to properly support the s390 virtualization
      instruction with KVM.  The SIE instruction requires that the page tables
      have 256 page table entries (pte) followed by 256 page status table entries
      (pgste).  The pgstes are only required if the process is using the SIE
      instruction.  The pgstes are updated by the hardware and by the hypervisor
      for a number of reasons, one of them is dirty and reference bit tracking.
      To avoid wasting memory the standard pte table allocation should return
      1K/2K (31/64 bit) and 2K/4K if the process is using SIE.
      
      Problem: Page size on s390 is 4K, page table size is 1K or 2K.  That means
      the s390 version for pte_alloc_one cannot return a pointer to a struct
      page.  Trouble is that with the CONFIG_HIGHPTE feature on x86 pte_alloc_one
      cannot return a pointer to a pte either, since that would require more than
      32 bit for the return value of pte_alloc_one (and the pte * would not be
      accessible since its not kmapped).
      
      Solution: The only solution I found to this dilemma is a new typedef: a
      pgtable_t.  For s390 pgtable_t will be a (pte *) - to be introduced with a
      later patch.  For everybody else it will be a (struct page *).  The
      additional problem with the initialization of the ptl lock and the
      NR_PAGETABLE accounting is solved with a constructor pgtable_page_ctor and
      a destructor pgtable_page_dtor.  The page table allocation and free
      functions need to call these two whenever a page table page is allocated or
      freed.  pmd_populate will get a pgtable_t instead of a struct page pointer.
       To get the pgtable_t back from a pmd entry that has been installed with
      pmd_populate a new function pmd_pgtable is added.  It replaces the pmd_page
      call in free_pte_range and apply_to_pte_range.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: <linux-arch@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2f569afd
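      A hedged sketch of the protocol for the common (struct page *) case,
      error handling trimmed (on s390, pgtable_t is a pte * instead):
      
      	static pgtable_t pte_alloc_one_sketch(struct mm_struct *mm)
      	{
      		struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
      
      		if (!page)
      			return NULL;
      		pgtable_page_ctor(page);	/* ptl lock + NR_PAGETABLE */
      		return page;
      	}
      
      	static void pte_free_sketch(struct mm_struct *mm, pgtable_t page)
      	{
      		pgtable_page_dtor(page);	/* undo the ctor, then free */
      		__free_page(page);
      	}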
  31. 08 Feb 2008, 1 commit
  32. 27 Jul 2007, 1 commit
    • fix 'dynreloc miscount' link error on Powerpc · 045e72ac
      Authored by Sam Ravnborg
      Nathan Lynch <ntl@pobox.com> reported:
      2.6.23-rc1 breaks the build for 64-bit powerpc for me (using
      maple_defconfig):
      
        LD      vmlinux.o
      powerpc64-unknown-linux-gnu-ld: dynreloc miscount for
      kernel/built-in.o, section .opd
      powerpc64-unknown-linux-gnu-ld: can not edit opd Bad value
      make: *** [vmlinux.o] Error 1
      
      However, I see a possibly related binutils patch:
      http://article.gmane.org/gmane.comp.gnu.binutils/33650
      
      It was tracked down to be caused by the weak prototype
      declaration in mm.h:
      __attribute__((weak)) const char *arch_vma_name(struct vm_area_struct *vma);
      
      But there is no need to make the declaration weak - only the definition
      needs to be marked weak.  So drop the weak declaration.  And in the process
      drop the duplicate definition in page.h for powerpc.
      
      Note: the arch_vma_name fix for x86_64 needs to be applied first to avoid
      breaking x86_64
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Cc: Nathan Lynch <ntl@pobox.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      045e72ac
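      The fix in miniature (reconstructed from the description above: the
      declaration stays plain, only the definition carries the attribute):
      
      	/* mm.h: plain prototype, no attribute */
      	const char *arch_vma_name(struct vm_area_struct *vma);
      
      	/* default definition, overridable by any architecture */
      	__attribute__((weak)) const char *arch_vma_name(struct vm_area_struct *vma)
      	{
      		return NULL;
      	}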
  33. 08 May 2007, 1 commit
  34. 02 May 2007, 1 commit
    • [POWERPC] Fix STRICT_MM_TYPECHECKS · 69d48b40
      Authored by David Gibson
      Since we don't have it active by default, the STRICT_MM_TYPECHECKS
      option has bitrotted again.  This patch fixes a couple of simple build
      problems when the option is selected.  First, pud_t mustn't be defined in
      page.h on 32-bit systems, because it conflicts with the version in the
      generic pud-folding code.  Second, pci_32.c is missing a __pgprot()
      wrapper call.  Third, a couple of PS3 files use constants of type
      pgprot_t when they need the raw values; we add pgprot_val() calls to
      fix this.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      69d48b40
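      For instance, with STRICT_MM_TYPECHECKS enabled pgprot_t is a wrapper
      struct rather than a plain integer, so raw values must round-trip
      through the accessors (the constant here is illustrative):
      
      	static void typecheck_example(void)
      	{
      		pgprot_t prot = __pgprot(0x12345678UL);	/* wrap raw bits */
      		unsigned long raw = pgprot_val(prot);	/* unwrap raw bits */
      		(void)raw;
      	}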