1. 20 Aug 2009, 1 commit
  2. 26 Jun 2009, 1 commit
  3. 16 Jun 2009, 1 commit
    • powerpc: Add configurable -Werror for arch/powerpc · ba55bd74
      Michael Ellerman committed
      Add the option to build the code under arch/powerpc with -Werror.
      
      The intention is to make it harder for people to inadvertently introduce
      warnings in the arch/powerpc code. It needs to be configurable so that
      if a warning is introduced, people can easily work around it while it's
      being fixed.
      
      The option is a negative, i.e. "don't enable -Werror", so that it will be
      turned on for allyesconfig and allmodconfig builds.

      The default is n, in the hope that developers will build with -Werror.
      That will probably lead to some build breaks; I am prepared to be flamed.
      
      It's not enabled for math-emu, which is a steaming pile of warnings.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ba55bd74
  4. 09 Jun 2009, 1 commit
  5. 27 May 2009, 1 commit
  6. 24 Mar 2009, 1 commit
  7. 11 Mar 2009, 1 commit
  8. 21 Dec 2008, 3 commits
  9. 16 Dec 2008, 1 commit
  10. 03 Dec 2008, 1 commit
  11. 30 Jul 2008, 1 commit
  12. 14 Feb 2008, 1 commit
  13. 24 Jan 2008, 1 commit
    • [POWERPC] Provide a way to protect 4k subpages when using 64k pages · fa28237c
      Paul Mackerras committed
      Using 64k pages on 64-bit PowerPC systems makes life difficult for
      emulators that are trying to emulate an ISA, such as x86, which uses a
      smaller page size, since the emulator can no longer use the MMU and
      the normal system calls for controlling page protections.  Of course,
      the emulator can emulate the MMU by checking and possibly remapping
      the address for each memory access in software, but that is pretty
      slow.
      
      This provides a facility for such programs to control the access
      permissions on individual 4k sub-pages of 64k pages.  The idea is
      that the emulator supplies an array of protection masks to apply to a
      specified range of virtual addresses.  These masks are applied at the
      level where hardware PTEs are inserted into the hardware page table
      based on the Linux PTEs, so the Linux PTEs are not affected.  Note
      that this new mechanism does not allow any access that would otherwise
      be prohibited; it can only prohibit accesses that would otherwise be
      allowed.  This new facility is only available on 64-bit PowerPC and
      only when the kernel is configured for 64k pages.
      
      The masks are supplied using a new subpage_prot system call, which
      takes a starting virtual address and length, and a pointer to an array
      of protection masks in memory.  The array has a 32-bit word per 64k
      page to be protected; each 32-bit word consists of 16 2-bit fields,
      for which 0 allows any access (that is otherwise allowed), 1 prevents
      write accesses, and 2 or 3 prevent any access.
      
      Implicit in this is that the regions of the address space that are
      protected are switched to use 4k hardware pages rather than 64k
      hardware pages (on machines with hardware 64k page support).  In fact
      the whole process is switched to use 4k hardware pages when the
      subpage_prot system call is used, but this could be improved in future
      to switch only the affected segments.
      
      The subpage protection bits are stored in a 3 level tree akin to the
      page table tree.  The top level of this tree is stored in a structure
      that is appended to the top level of the page table tree, i.e., the
      pgd array.  Since it will often only be 32-bit addresses (below 4GB)
      that are protected, the pointers to the first four bottom level pages
      are also stored in this structure (each bottom level page contains the
      protection bits for 1GB of address space), so the protection bits for
      addresses below 4GB can be accessed with one fewer load than those
      for higher addresses.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      fa28237c
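      A hedged user-space sketch of the call described above. The syscall
      number, the header set and the exact ordering of the sixteen 2-bit
      fields within each 32-bit word are assumptions to check against a
      real kernel, not details taken from the commit:

      #include <stdint.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      #ifndef __NR_subpage_prot
      #define __NR_subpage_prot 310            /* assumed ppc64 number */
      #endif

      #define PAGE_64K (64 * 1024)

      int main(void)
      {
              /* One 64k page to protect; its map is a single 32-bit word
               * holding 16 two-bit fields, one per 4k sub-page. */
              void *p = mmap(NULL, PAGE_64K, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (p == MAP_FAILED)
                      return 1;

              /* 0x55555555 sets every field to 1, making all sixteen 4k
               * sub-pages read-only.  Individual fields can be set instead
               * (0 = leave alone, 1 = no write, 2 or 3 = no access); which
               * end of the word corresponds to sub-page 0 should be verified
               * against the kernel's subpage-protection code. */
              uint32_t map = 0x55555555;

              long rc = syscall(__NR_subpage_prot, (unsigned long)p,
                                (unsigned long)PAGE_64K, &map);
              printf("subpage_prot returned %ld\n", rc);
              return rc ? 1 : 0;
      }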
  14. 03 Oct 2007, 1 commit
  15. 20 Aug 2007, 1 commit
  16. 14 Jun 2007, 1 commit
    • [POWERPC] Rewrite IO allocation & mapping on powerpc64 · 3d5134ee
      Benjamin Herrenschmidt committed
      This rewrites pretty much from scratch the handling of MMIO and PIO
      space allocations on powerpc64.  The main goals are:
      
       - Get rid of imalloc and use more common code where possible
       - Simplify the current mess so that PIO space is allocated and
         mapped in a single place for PCI bridges
       - Handle allocation constraints of PIO for all bridges including
         hot plugged ones within the 2GB space reserved for IO ports,
         so that devices on hotplugged busses will now work with drivers
         that assume IO ports fit in an int.
       - Cleanup and separate tracking of the ISA space in the reserved
         low 64K of IO space. No ISA -> Nothing mapped there.
      
      So far I have booted a cell blade with IDE on PIO and MMIO, and a
      dual G5; that's it :-)
      
      With this patch, all allocations are done using the code in
      mm/vmalloc.c, though we use the low level __get_vm_area with
      explicit start/stop constraints in order to manage separate
      areas for vmalloc/vmap, ioremap, and PCI IOs.
      
      This greatly simplifies a lot of things, as you can see in the
      diffstat of that patch :-)
      
      A new pair of functions pcibios_map/unmap_io_space() now replace
      all of the previous code that used to manipulate PCI IOs space.
      The allocation is done at mapping time, which is now called from
      scan_phb's, just before the devices are probed (instead of after,
      which is by itself a bug fix). The only other caller is the PCI
      hotplug code for hot adding PCI-PCI bridges (slots).
      
      imalloc is gone, as is the "sub-allocation" thing, but I do believe
      that hotplug should still work in the sense that the space allocation
      is always done by the PHB, but if you unmap a child bus of this PHB
      (which seems to be possible), then the code should properly tear
      down all the HPTE mappings for that area of the PHB allocated IO space.
      
      I now always reserve the first 64K of IO space for the bridge with
      the ISA bus on it. I have moved the code for tracking ISA in a separate
      file which should also make it smarter if we ever are capable of
      hot unplugging or re-plugging an ISA bridge.
      
      This should have a side effect on platforms like powermac where VGA IOs
      will no longer work. This is done on purpose though as they would have
      worked semi-randomly before. The idea at this point is to isolate drivers
      that might need to access those and fix them by providing a proper
      function to obtain an offset to the legacy IOs of a given bus.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      3d5134ee
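      The __get_vm_area() point above is easiest to see in a small sketch.
      This is an illustration only, assuming an era where the low-level
      __get_vm_area()/ioremap_page_range() interfaces look as shown; the
      EXAMPLE_* window bounds and the helper name are made up and are not
      the symbols introduced by the patch:

      #include <linux/vmalloc.h>
      #include <linux/io.h>

      #define EXAMPLE_PHB_IO_BASE 0xd000080000000000UL  /* assumed IO window */
      #define EXAMPLE_PHB_IO_END  0xd000080080000000UL  /* start and end     */

      static void __iomem *example_map_phb_io(phys_addr_t phys, unsigned long size)
      {
              struct vm_struct *area;

              /* Constrain the allocation to the reserved PCI IO window so it
               * cannot collide with ordinary vmalloc/ioremap space. */
              area = __get_vm_area(size, VM_IOREMAP,
                                   EXAMPLE_PHB_IO_BASE, EXAMPLE_PHB_IO_END);
              if (!area)
                      return NULL;

              /* Insert the translations for the bridge's IO aperture. */
              if (ioremap_page_range((unsigned long)area->addr,
                                     (unsigned long)area->addr + size,
                                     phys, pgprot_noncached(PAGE_KERNEL))) {
                      free_vm_area(area);
                      return NULL;
              }
              return (void __iomem *)area->addr;
      }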
  17. 09 May 2007, 1 commit
    • [POWERPC] Introduce address space "slices" · d0f13e3c
      Benjamin Herrenschmidt committed
      The basic issue is to be able to do what hugetlbfs does but with
      different page sizes for some other special filesystems; more
      specifically, my need is:
      
       - Huge pages
      
       - SPE local store mappings using 64K pages on a 4K base page size
      kernel on Cell
      
       - Some special 4K segments in 64K-page kernels for mapping a dodgy
      type of powerpc-specific infiniband hardware that requires 4K MMU
      mappings for various reasons I won't explain here.
      
      The main issues are:
      
       - To maintain/keep track of the page size per "segment" (as we can
      only have one page size per segment on powerpc; segments are 256MB
      divisions of the address space).
      
       - To make sure special mappings stay within their allotted
      "segments" (including MAP_FIXED crap)
      
       - To make sure everybody else doesn't mmap/brk/grow_stack into a
      "segment" that is used for a special mapping
      
      Some of the necessary mechanisms to handle that were present in the
      hugetlbfs code, but mostly in ways not suitable for anything else.
      
      The patch relies on some changes to the generic get_unmapped_area()
      that just got merged.  It still hijacks hugetlb callbacks here or
      there as the generic code hasn't been entirely cleaned up yet but
      that shouldn't be a problem.
      
      So what is a slice?  Well, I re-used the mechanism formerly used by our
      hugetlbfs implementation which divides the address space in
      "meta-segments" which I called "slices".  The division is done using
      256MB slices below 4G, and 1T slices above.  Thus the address space is
      divided currently into 16 "low" slices and 16 "high" slices.  (Special
      case: high slice 0 is the area between 4G and 1T).
      
      Doing so significantly simplifies the tracking of segments and avoids
      having to keep track of all the 256MB segments in the address space.
      
      While I used the "concepts" of hugetlbfs, I mostly re-implemented
      everything in a more generic way and "ported" hugetlbfs to it.
      
      Slices can have an associated page size, which is encoded in the mmu
      context and used by the SLB miss handler to set the segment sizes.  The
      hash code currently doesn't care, it has a specific check for hugepages,
      though I might add a mechanism to provide per-slice hash mapping
      functions in the future.
      
      The slice code provides a pair of "generic" get_unmapped_area() (bottomup
      and topdown) functions that should work with any slice size.  There is
      some trickiness here, so I would appreciate people having a look at the
      implementation of these and letting me know if I got something wrong.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      d0f13e3c
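      The slice geometry above (256MB slices below 4GB, 1TB slices above,
      with high slice 0 covering 4GB to 1TB) is easy to mirror in a small
      user-space sketch; the shift values and the helper are illustrative
      only, derived from the numbers in the commit message rather than
      copied from the kernel:

      #include <stdio.h>

      #define SLICE_LOW_SHIFT  28              /* 256MB low slices */
      #define SLICE_HIGH_SHIFT 40              /* 1TB high slices  */

      /* Returns the slice index; *high says which of the two ranges it
       * indexes.  Everything between 4GB and 1TB lands in high slice 0,
       * the special case called out above. */
      static unsigned int addr_to_slice(unsigned long long addr, int *high)
      {
              if (addr < (1ULL << 32)) {
                      *high = 0;
                      return (unsigned int)(addr >> SLICE_LOW_SHIFT);
              }
              *high = 1;
              return (unsigned int)(addr >> SLICE_HIGH_SHIFT);
      }

      int main(void)
      {
              unsigned long long samples[] = {
                      0x10000000ULL,           /* 256MB  -> low slice 1   */
                      0xf0000000ULL,           /* 3.75GB -> low slice 15  */
                      0x100000000ULL,          /* 4GB    -> high slice 0  */
                      0x20000000000ULL,        /* 2TB    -> high slice 2  */
              };
              for (int i = 0; i < 4; i++) {
                      int high;
                      unsigned int s = addr_to_slice(samples[i], &high);
                      printf("%#llx -> %s slice %u\n", samples[i],
                             high ? "high" : "low", s);
              }
              return 0;
      }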
  18. 04 Dec 2006, 1 commit
    • [POWERPC] ps3: multiplatform build fixes · e22ba7e3
      Arnd Bergmann committed
      A few code paths need to check whether or not they are running
      on the PS3's LV1 hypervisor before making hcalls. This introduces
      a new firmware feature bit for this, FW_FEATURE_PS3_LV1.
      
      Now when both PS3 and IBM_CELL_BLADE are enabled, but not PSERIES,
      FW_FEATURE_PS3_LV1 and FW_FEATURE_LPAR get enabled at compile time,
      which is a bug. The same problem can also happen for (PPC_ISERIES &&
      !PPC_PSERIES && PPC_SOMETHING_ELSE). In order to solve this, I
      introduce a new CONFIG_PPC_NATIVE option that is set when at least
      one platform is selected that can run without a hypervisor and then
      turns the firmware feature check into a run-time option.
      
      The new cell oprofile support that was recently merged does not
      work on hypervisor based platforms like the PS3, therefore make
      it depend on PPC_CELL_NATIVE instead of PPC_CELL. This may change
      if we get oprofile support for PS3.
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      e22ba7e3
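      A minimal sketch of the run-time check this enables, assuming only the
      firmware_has_feature() interface and the two feature bits named above;
      example_platform_op() and ps3_example_hcall() are placeholders, not
      code from the commit:

      #include <linux/errno.h>
      #include <asm/firmware.h>

      static int ps3_example_hcall(void)
      {
              /* placeholder: a real caller would issue an lv1_* hcall here */
              return 0;
      }

      static int example_platform_op(void)
      {
              /* Decided at run time on a multiplatform (PPC_NATIVE) kernel,
               * rather than fixed at compile time by whichever platform
               * options happen to be enabled. */
              if (firmware_has_feature(FW_FEATURE_PS3_LV1))
                      return ps3_example_hcall();

              if (!firmware_has_feature(FW_FEATURE_LPAR))
                      return 0;                /* bare-metal ("native") path */

              return -ENODEV;                  /* some other hypervisor */
      }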
  19. 20 Oct 2005, 1 commit
  20. 12 Oct 2005, 1 commit
    • powerpc: make iSeries boot again · 3a5f8c5f
      Stephen Rothwell committed
      On ARCH=ppc64 we were getting htab_hash_mask recalculated
      to the correct value for our particular machine by accident.
      In the merge tree, that code was commented out, so htab_hash_mask
      was being corrupted.
      
      We now set ppc64_pft_size instead, which gets htab_hash_mask
      calculated correctly for us later.  We should put an
      ibm,pft-size property in the device tree at some point.
      
      Also set -mno-minimal-toc in some makefiles.
      Allow iSeries to configure PROC_DEVICETREE.
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      3a5f8c5f
  21. 10 Oct 2005, 2 commits
  22. 06 Oct 2005, 1 commit
    • powerpc: Merge lmb.c and make MM initialization use it. · 7c8c6b97
      Paul Mackerras committed
      This also creates merged versions of do_init_bootmem, paging_init
      and mem_init and moves them to arch/powerpc/mm/mem.c.  It gets rid
      of the mem_pieces stuff.
      
      I made memory_limit a parameter to lmb_enforce_memory_limit rather
      than a global referenced by that function.  This will require some
      small changes to ppc64 if we want to continue building ARCH=ppc64
      using the merged lmb.c.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      7c8c6b97
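      The parameter-versus-global point reads more concretely as a
      signature.  A minimal sketch, assuming the historical prototype:

      /* The caller supplies the limit explicitly: */
      void lmb_enforce_memory_limit(unsigned long memory_limit);

      /* Arch setup code then passes in whatever value it derived (for
       * example from the mem= command-line option) instead of lmb.c
       * reading a ppc32- or ppc64-specific global. */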
  23. 26 Sep 2005, 1 commit
    • powerpc: Merge enough to start building in arch/powerpc. · 14cf11af
      Paul Mackerras committed
      This creates the directory structure under arch/powerpc and a bunch
      of Kconfig files.  It does a first-cut merge of arch/powerpc/mm,
      arch/powerpc/lib and arch/powerpc/platforms/powermac.  This is enough
      to build a 32-bit powermac kernel with ARCH=powerpc.
      
      For now we are getting some unmerged files from arch/ppc/kernel and
      arch/ppc/syslib, or arch/ppc64/kernel.  This makes some minor changes
      to files in those directories and files outside arch/powerpc.
      
      The boot directory is still not merged.  That's going to be interesting.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      14cf11af