1. 13 Sep 2012, 1 commit
  2. 07 Sep 2012, 1 commit
  3. 16 Aug 2012, 1 commit
  4. 20 Mar 2012, 1 commit
  5. 09 Dec 2011, 1 commit
    • powerpc: Use HAVE_MEMBLOCK_NODE_MAP · 1d7cfe18
      Committed by Tejun Heo
      powerpc doesn't access early_node_map[] directly, so enabling
      HAVE_MEMBLOCK_NODE_MAP is trivial: replacing the add_active_range() calls
      with memblock_set_node() and selecting HAVE_MEMBLOCK_NODE_MAP is
      enough (a minimal sketch of the change follows this entry).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      1d7cfe18
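      A minimal sketch, assuming a generic call site and the memblock API of that
      era; the variable names and exact arguments are illustrative, not the actual diff:

             /* old early_node_map[] registration (assumed call site) */
             add_active_range(nid, start_pfn, end_pfn);

             /* memblock-based equivalent once HAVE_MEMBLOCK_NODE_MAP is selected */
             memblock_set_node(PFN_PHYS(start_pfn),
                               PFN_PHYS(end_pfn - start_pfn), nid);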
  6. 08 Dec 2011, 1 commit
    • powerpc: Punch a hole in /dev/mem for librtas · 8a3e3d31
      Committed by sukadev@linux.vnet.ibm.com
      With CONFIG_STRICT_DEVMEM=y, user space cannot read any part of /dev/mem.
      Since this breaks librtas, punch a hole in /dev/mem to allow access to the
      rmo_buffer that librtas needs (a sketch of the resulting check follows this entry).
      
      Anton Blanchard reported the problem and helped with the fix.
      
      A quick test for this patch:
      
             # cat /proc/rtas/rmo_buffer
             000000000f190000 10000
      
             # python -c "print 0x000000000f190000 / 0x10000"
             3865
      
             # dd if=/dev/mem of=/tmp/foo count=1 bs=64k skip=3865
             1+0 records in
             1+0 records out
             65536 bytes (66 kB) copied, 0.000205235 s, 319 MB/s
      
             # dd if=/dev/mem of=/tmp/foo
             dd: reading `/dev/mem': Operation not permitted
             0+0 records in
             0+0 records out
             0 bytes (0 B) copied, 0.00022519 s, 0.0 kB/s
      Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      8a3e3d31
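      A hedged sketch of what the allowed-range check could look like with the RTAS
      hole; page_is_rtas_user_buf() is an assumed helper name for recognizing the
      rmo_buffer pages, and the real patch may be structured differently:

             int devmem_is_allowed(unsigned long pfn)
             {
                     if (iomem_is_exclusive(PFN_PHYS(pfn)))
                             return 0;       /* exclusive IOMEM stays off limits */
                     if (!page_is_ram(pfn))
                             return 1;       /* plain MMIO is fine */
                     if (page_is_rtas_user_buf(pfn))
                             return 1;       /* the librtas RMO buffer hole */
                     return 0;               /* everything else in RAM is denied */
             }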
  7. 07 Dec 2011, 1 commit
  8. 28 Nov 2011, 1 commit
    • powerpc: Implement CONFIG_STRICT_DEVMEM · 1d54cf2b
      Committed by sukadev@linux.vnet.ibm.com
      As described in the help text in the patch, this option restricts general
      access to /dev/mem as a way of increasing security. Specifically, access
      to exclusive IOMEM and kernel RAM is denied unless CONFIG_STRICT_DEVMEM is
      set to 'n'.
      
      Implement the 'devmem_is_allowed()' interface for powerpc. It will be
      called from range_is_allowed() when userspace attempts to access /dev/mem
      (a sketch of this hook follows this entry).
      
      This patch is based on an earlier patch from Steve Best and with input from
      Paul Mackerras and Scott Wood.
      
      [BenH] Fixed a typo or two and removed the generic change which should
             be submitted as a separate patch
      Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      1d54cf2b
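      A minimal sketch of the policy described above (deny exclusive IOMEM and
      kernel RAM, allow the rest); this is an assumption-laden illustration, not
      the patch itself, and predates the RTAS hole shown earlier in this log:

             int devmem_is_allowed(unsigned long pfn)
             {
                     if (iomem_is_exclusive(PFN_PHYS(pfn)))
                             return 0;       /* exclusive IOMEM: denied */
                     if (!page_is_ram(pfn))
                             return 1;       /* not RAM (e.g. MMIO): allowed */
                     return 0;               /* kernel RAM: denied */
             }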
  9. 08 Nov 2011, 1 commit
  10. 01 Nov 2011, 1 commit
  11. 20 Sep 2011, 2 commits
    • powerpc: Fix oops when echoing bad values to /sys/devices/system/memory/probe · a1194097
      Committed by Anton Blanchard
      If we echo an address the hypervisor doesn't like into
      /sys/devices/system/memory/probe, we oops the box:
      
      # echo 0x10000000000 > /sys/devices/system/memory/probe
      
      kernel BUG at arch/powerpc/mm/hash_utils_64.c:541!
      
      The backtrace is:
      
      create_section_mapping
      arch_add_memory
      add_memory
      memory_probe_store
      sysdev_class_store
      sysfs_write_file
      vfs_write
      SyS_write
      
      In create_section_mapping() we BUG if htab_bolt_mapping() returns
      an error. A better approach is to return an error that can
      propagate back to userspace (sketched after this entry).
      
      Rerunning the test with this patch applied:
      
      # echo 0x10000000000 > /sys/devices/system/memory/probe
      -bash: echo: write error: Invalid argument
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Cc: stable@kernel.org
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      a1194097
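      A sketch of the idea, with the htab_bolt_mapping() argument list reconstructed
      from memory and therefore only indicative:

             int create_section_mapping(unsigned long start, unsigned long end)
             {
                     /* propagate the hypervisor's refusal instead of BUG()ing */
                     return htab_bolt_mapping(start, end, __pa(start),
                                              pgprot_val(PAGE_KERNEL),
                                              mmu_linear_psize, mmu_kernel_ssize);
             }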
    • powerpc: Hugetlb for BookE · 41151e77
      Committed by Becky Bruce
      Enable hugepages on Freescale BookE processors.  This allows the kernel to
      use huge TLB entries to map pages, which can greatly reduce the number of
      TLB misses and the amount of TLB thrashing experienced by applications with
      large memory footprints.  Care should be taken when using this on FSL
      processors, as the number of large TLB entries supported by the core is low
      (16-64) on current processors.
      
      The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g.
      Page sizes larger than the max zone size are called "gigantic" pages and
      must be allocated on the command line (and cannot be deallocated); an
      example boot line follows this entry.
      
      This is currently only fully implemented for Freescale 32-bit BookE
      processors, but there is some infrastructure in the code for
      64-bit BookE.
      Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      41151e77
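      An illustrative kernel command line (not taken from the patch) reserving two of
      the sizes listed above at boot, using the standard hugepagesz=/hugepages=
      parameters:

             hugepagesz=1g hugepages=2 hugepagesz=256m hugepages=8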
  12. 19 Jul 2011, 1 commit
  13. 08 Jul 2011, 1 commit
  14. 30 Jun 2011, 2 commits
  15. 29 Jun 2011, 1 commit
  16. 09 Jun 2011, 1 commit
    • powerpc: Force page alignment for initrd reserved memory · 307cfe71
      Committed by Benjamin Herrenschmidt
      When using 64K pages with a separate cpio rootfs, U-Boot will align
      the rootfs on a 4K page boundary. When that memory is reserved and a
      subsequent early memblock_alloc() is called, it can allocate memory
      between the 64K page boundary and the reserved memory. When the reserved
      memory is later freed, it is freed by whole pages, causing those early
      memblock_alloc() allocations to be re-used, which in my case caused
      the device tree to be clobbered.
      
      This patch forces the reserved memory for the initrd to be kernel page
      aligned (sketched after this entry), and will move the device tree if it
      overlaps with the range extension of the initrd. This patch also
      consolidates the identical function free_initrd_mem() from mm/init_32.c
      and init_64.c into mm/mem.c, and adds the same range extension when
      freeing the initrd. free_initrd_mem() is also moved to the __init section.
      
      Many thanks to Milton Miller for his input on this patch.
      
      [BenH: Fixed build without CONFIG_BLK_DEV_INITRD]
      Signed-off-by: Dave Carroll <dcarroll@astekcorp.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      307cfe71
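      A minimal sketch of the alignment idea using the powerpc helpers of that era
      (_ALIGN_DOWN/_ALIGN_UP); the surrounding code and variable names are assumptions:

             /* Reserve the initrd rounded out to whole kernel pages, so later
              * early memblock allocations cannot land inside pages that will
              * be freed back when the initrd is released. */
             unsigned long base = _ALIGN_DOWN(initrd_start, PAGE_SIZE);
             unsigned long size = _ALIGN_UP(initrd_end, PAGE_SIZE) - base;

             memblock_reserve(base, size);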
  17. 31 Mar 2011, 1 commit
  18. 13 Oct 2010, 1 commit
    • memblock, bootmem: Round pfn properly for memory and reserved regions · c7fc2de0
      Committed by Yinghai Lu
      We need to round memory regions correctly -- specifically, we need to
      round reserved region in the more expansive direction (lower limit
      down, upper limit up) whereas usable memory regions need to be rounded
      in the more restrictive direction (lower limit up, upper limit down).
      
      This introduces two sets of inlines:
      
      	memblock_region_memory_base_pfn()
      	memblock_region_memory_end_pfn()
      	memblock_region_reserved_base_pfn()
      	memblock_region_reserved_end_pfn()
      
      Although they are antisymmetric (and therefore technically
      duplicates), the use of the different inlines explicitly documents the
      programmer's intention (see the sketch after this entry).
      
      The lack of proper rounding caused a bug on ARM, which was then found
      to also affect other architectures.
      Reported-by: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <4CB4CDFD.4020105@kernel.org>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      c7fc2de0
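      A sketch of the four inlines and their rounding directions; the struct layout
      (memblock_region with base/size) matches that era's memblock, but treat the
      exact bodies as illustrative:

             /* usable memory: round inward (base up, end down) */
             static inline unsigned long
             memblock_region_memory_base_pfn(const struct memblock_region *reg)
             {
                     return PFN_UP(reg->base);
             }

             static inline unsigned long
             memblock_region_memory_end_pfn(const struct memblock_region *reg)
             {
                     return PFN_DOWN(reg->base + reg->size);
             }

             /* reserved regions: round outward (base down, end up) */
             static inline unsigned long
             memblock_region_reserved_base_pfn(const struct memblock_region *reg)
             {
                     return PFN_DOWN(reg->base);
             }

             static inline unsigned long
             memblock_region_reserved_end_pfn(const struct memblock_region *reg)
             {
                     return PFN_UP(reg->base + reg->size);
             }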
  19. 05 Aug 2010, 1 commit
  20. 04 Aug 2010, 2 commits
  21. 14 Jul 2010, 1 commit
  22. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Committed by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming their availability (an illustrative
      example follows this entry).  As this conversion needs to touch a large
      number of source files, the following script was used as the basis of the
      conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make things
         build (like ipr on powerpc/64, which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
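      Illustrative only (not a file from the patch): a .c file that calls
      kmalloc()/kfree() now states its dependency explicitly rather than inheriting
      slab.h through percpu.h:

             #include <linux/gfp.h>   /* only needed if just GFP_* flags / page allocs are used */
             #include <linux/slab.h>  /* kmalloc(), kfree(), kmem_cache_* users */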
  23. 19 Mar 2010, 1 commit
  24. 21 Feb 2010, 1 commit
    • MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself · 4b3073e1
      Committed by Russell King
      On VIVT ARM, when we have multiple shared mappings of the same file
      in the same MM, we need to ensure that we have coherency across all
      copies.  We do this via make_coherent() by making the pages
      uncacheable.
      
      This used to work fine, until we allowed highmem with highpte - we
      now have a page table which is mapped as required, and is not available
      for modification via update_mmu_cache().
      
      Ralf Baechle suggested getting rid of the PTE value passed to
      update_mmu_cache():
      
        On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
        to construct a pointer to the pte again.  Passing a pte_t * is much
        more elegant.  Maybe we might even replace the pte argument with the
        pte_t?
      
      Ben Herrenschmidt would also like the pte pointer for PowerPC:
      
        Passing the ptep in there is exactly what I want.  I want that
        -instead- of the PTE value, because I have issue on some ppc cases,
        for I$/D$ coherency, where set_pte_at() may decide to mask out the
        _PAGE_EXEC.
      
      So, pass the mapped page table pointer into update_mmu_cache(), and
      remove the PTE value, updating all implementations and call sites to
      suit (the prototype change is sketched after this entry).
      
      Includes a fix from Stephen Rothwell:
      
        sparc: fix fallout from update_mmu_cache API change
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4b3073e1
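      The API change in sketch form (only the last argument changes; the vma and
      address parameters are the long-standing ones):

             /* before: the PTE value, unusable once the page table may be kmapped */
             void update_mmu_cache(struct vm_area_struct *vma,
                                   unsigned long address, pte_t pte);

             /* after: a pointer to the PTE, so architectures can read or adjust it */
             void update_mmu_cache(struct vm_area_struct *vma,
                                   unsigned long address, pte_t *ptep);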
  25. 30 Oct 2009, 1 commit
    • powerpc/mm: Bring hugepage PTE accessor functions back into sync with normal accessors · 0895ecda
      Committed by David Gibson
      The hugepage arch code provides a number of hook functions/macros
      which mirror the functionality of various normal page pte access
      functions.  Various changes in the normal page accessors (in
      particular BenH's recent changes to the handling of lazy icache
      flushing and PAGE_EXEC) have caused the hugepage versions to get out
      of sync with the originals.  In some cases, this is a bug, at least on
      some MMU types.
      
      One of the reasons that some hooks were not identical to the normal
      page versions is that the fact we're dealing with a hugepage needed
      to be passed down to use the correct dcache-icache flush function.
      This patch makes the main flush_dcache_icache_page() function hugepage
      aware (by checking for the PageCompound flag).  That in turn means we
      can make set_huge_pte_at() just a call to set_pte_at(), bringing it
      back into sync (see the sketch after this entry).  As a bonus, this
      lets us remove the hash_huge_page_do_lazy_icache() function, replacing
      it with a call to the hash_page_do_lazy_icache() function it was based on.
      
      Some other hugepage pte access hooks - huge_ptep_get_and_clear() and
      huge_ptep_clear_flush() - are not so easily unified, but this patch at
      least brings them back into sync with the current versions of the
      corresponding normal page functions.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      0895ecda
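      A sketch of the unification the message describes: once
      flush_dcache_icache_page() handles compound pages itself, the hugepage setter
      can simply defer to the normal one (illustrative, not the exact patch hunk):

             void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
                                  pte_t *ptep, pte_t pte)
             {
                     /* the normal accessor now does the right lazy I$/D$ flushing */
                     set_pte_at(mm, addr, ptep, pte);
             }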
  26. 23 Sep 2009, 1 commit
    • walk system ram range · 908eedc6
      Committed by KAMEZAWA Hiroyuki
      Originally, walk_memory_resource() was introduced to traverse all memory
      of "System RAM" in order to detect memory hotplug/unplug ranges.  For that,
      the flags IORESOURCE_MEM|IORESOURCE_BUSY were used, and this was enough for
      memory hotplug.
      
      But for other purposes, such as /proc/kcore, this may include some firmware
      areas marked as IORESOURCE_BUSY | IORESOURCE_MEM.  This patch makes the
      check stricter, to find only busy "System RAM".
      
      Note: PPC64 keeps its own walk_memory_resource(), which walks through
      ppc64's lmb information.  Because the old kclist_add() is called per lmb,
      this patch ultimately makes no difference in behavior.
      
      This patch also removes the CONFIG_MEMORY_HOTPLUG check from this function,
      because pfn_valid() just shows "there is a memmap or not" and cannot be used
      for "there is physical memory or not"; this function is useful generically
      for scanning physical memory ranges.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: WANG Cong <xiyou.wangcong@gmail.com>
      Cc: Américo Wang <xiyou.wangcong@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Roland Dreier <rolandd@cisco.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      908eedc6
  27. 22 Sep 2009, 1 commit
  28. 27 May 2009, 2 commits
    • powerpc: Fix up dma_alloc_coherent() on platforms without cache coherency. · 8b31e49d
      Committed by Benjamin Herrenschmidt
      The implementation we just revived has issues, such as using a
      Kconfig-defined virtual address area in kernel space that nothing
      actually carves out (and thus will overlap whatever is there),
      or having some dependencies on being self contained in a single
      PTE page which adds unnecessary constraints on the kernel virtual
      address space.
      
      This fixes it by using more classic PTE accessors and automatically
      locating the area for consistent memory, carving an appropriate hole
      in the kernel virtual address space, leaving only the size of that
      area as a Kconfig option. It also brings some dma-mask related fixes
      from the ARM implementation which was almost identical initially but
      grew its own fixes.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      8b31e49d
    • powerpc: Minor cleanups of kernel virt address space definitions · f637a49e
      Committed by Benjamin Herrenschmidt
      Make FIXADDR_TOP a compile-time constant and clean up a
      couple of definitions related to the layout of the kernel
      address space on ppc32. We also print out that layout at
      boot time for debugging purposes.
      
      This is a pre-requisite for properly fixing non-coherent
      DMA allocations.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      f637a49e
  29. 15 May 2009, 1 commit
  30. 11 Feb 2009, 1 commit
    • powerpc/mm: Rework I$/D$ coherency (v3) · 8d30c14c
      Committed by Benjamin Herrenschmidt
      This patch reworks the way we do I and D cache coherency on PowerPC.
      
      The "old" way was split in 3 different parts depending on the processor type:
      
         - Hash with per-page exec support (64-bit and >= POWER4 only) does it
      at hashing time, by preventing exec on unclean pages and cleaning pages
      on exec faults.
      
         - Everything without per-page exec support (32-bit hash, 8xx, and
      64-bit < POWER4) does it for all pages going to user space in update_mmu_cache().
      
         - Embedded with per-page exec support does it from do_page_fault() on
      exec faults, in a way similar to what the hash code does.
      
      That leads to confusion, and bugs. For example, the method using update_mmu_cache()
      is racy on SMP, where another processor can see the new PTE and hash it in before
      we have cleaned the cache, and then blow up trying to execute. This is hard to hit but
      I think it has bitten us in the past.
      
      Also, it's inefficient for embedded where we always end up having to do at least
      one more page fault.
      
      This reworks the whole thing by moving the cache sync into two main call sites,
      though we keep different behaviours depending on the HW capability. The call
      sites are set_pte_at() which is now made out of line, and ptep_set_access_flags()
      which joins the former in pgtable.c
      
      The base idea for Embedded with per-page exec support, is that we now do the
      flush at set_pte_at() time when coming from an exec fault, which allows us
      to avoid the double fault problem completely (we can even improve the situation
      more by implementing TLB preload in update_mmu_cache() but that's for later).
      
      If for some reason we didn't do it there and we try to execute, we'll hit
      the page fault, which will do a minor fault, which will hit ptep_set_access_flags()
      to do things like update _PAGE_ACCESSED or _PAGE_DIRTY if needed; we just make
      these guys also perform the I/D cache sync for exec faults now. This second path
      is the catch-all for things that weren't cleaned at set_pte_at() time.
      
      For cpus without per-page exec support, we always do the sync at set_pte_at(),
      thus guaranteeing that when the PTE is visible to other processors, the cache
      is clean.
      
      For the 64-bit hash with per-page exec support case, we keep the old mechanism
      for now. I'll look into changing it later, once I've reworked a bit how we
      use _PAGE_EXEC.
      
      This is also a first step for adding _PAGE_EXEC support for embedded
      platforms (a rough sketch of the new set_pte_at() structure follows this entry).
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      8d30c14c
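      A very rough sketch of the structure described above; set_pte_filter() is an
      assumed name for the "clean the caches before the PTE becomes visible" helper,
      and the real code is considerably more involved:

             void set_pte_at(struct mm_struct *mm, unsigned long addr,
                             pte_t *ptep, pte_t pte)
             {
                     /* may do the I$/D$ sync (or strip exec) for executable pages */
                     pte = set_pte_filter(pte);
                     __set_pte_at(mm, addr, ptep, pte);
             }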
  31. 07 Jan 2009, 1 commit
    • mm: show node to memory section relationship with symlinks in sysfs · c04fc586
      Committed by Gary Hade
      Show node to memory section relationship with symlinks in sysfs
      
      Add /sys/devices/system/node/nodeX/memoryY symlinks for all
      the memory sections located on nodeX.  For example:
      /sys/devices/system/node/node1/memory135 -> ../../memory/memory135
      indicates that memory section 135 resides on node1.
      
      Also revises documentation to cover this change as well as updating
      Documentation/ABI/testing/sysfs-devices-memory to include descriptions
      of memory hotremove files 'phys_device', 'phys_index', and 'state'
      that were previously not described there.
      
      In addition to it always being a good policy to provide users with
      the maximum possible amount of physical location information for
      resources that can be hot-added and/or hot-removed, the following
      are some (but likely not all) of the user benefits provided by
      this change.
      Immediate:
        - Provides information needed to determine the specific node
          on which a defective DIMM is located.  This will reduce system
          downtime when the node or defective DIMM is swapped out.
        - Prevents unintended onlining of a memory section that was
          previously offlined due to a defective DIMM.  This could happen
          during node hot-add when the user or node hot-add assist script
          onlines _all_ offlined sections due to user or script inability
          to identify the specific memory sections located on the hot-added
          node.  The consequences of reintroducing the defective memory
          could be ugly.
        - Provides information needed to vary the amount and distribution
          of memory on specific nodes for testing or debugging purposes.
      Future:
        - Will provide information needed to identify the memory
          sections that need to be offlined prior to physical removal
          of a specific node.
      
      Symlink creation during boot was tested on 2-node x86_64, 2-node
      ppc64, and 2-node ia64 systems.  Symlink creation during physical
      memory hot-add tested on a 2-node x86_64 system.
      Signed-off-by: Gary Hade <garyhade@us.ibm.com>
      Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c04fc586
  32. 21 Dec 2008, 2 commits
    • powerpc/mm: Rework usage of _PAGE_COHERENT/NO_CACHE/GUARDED · 64b3d0e8
      Committed by Benjamin Herrenschmidt
      Currently, we never set _PAGE_COHERENT in the PTEs, we just OR it in
      in the hash code based on some CPU feature bit.  We also manipulate
      _PAGE_NO_CACHE and _PAGE_GUARDED by hand in all sorts of places.
      
      This changes the logic so that instead, the PTE now contains
      _PAGE_COHERENT for all normal RAM pages that have I = 0 on platforms
      that need it.  The hash code clears it if the feature bit is not set.
      
      It also adds some clean accessors to set up various valid combinations
      of access flags, and changes various bits of code to use them instead.
      
      This should help having the PTE actually containing the bit
      combinations that we really want.
      
      I also removed _PAGE_GUARDED from _PAGE_BASE on 44x and instead
      set it explicitly from the TLB miss.  I will ultimately remove it
      completely, as it appears that it might not be needed after all,
      but in the meantime having it in the TLB miss makes things a
      lot easier.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      64b3d0e8
    • powerpc/mm: Add SMP support to no-hash TLB handling · f048aace
      Committed by Benjamin Herrenschmidt
      This commit moves the whole no-hash TLB handling out of line into a
      new tlb_nohash.c file, and implements some basic SMP support using
      IPIs and/or broadcast tlbivax instructions.
      
      Note that I'm using local invalidations for D->I cache coherency.
      
      At worst, if another processor is trying to execute the same and
      has the old entry in its TLB, it will just take a fault and re-do
      the TLB flush locally (it won't re-do the cache flush in any case).
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      f048aace
  33. 20 Oct 2008, 1 commit
    • mm: cleanup to make remove_memory() arch-neutral · 71088785
      Committed by Badari Pulavarty
      There is nothing architecture-specific about remove_memory().  The
      remove_memory() function is common to all architectures that support
      hotplug memory removal.  Instead of duplicating it in every architecture,
      collapse the copies into an arch-neutral function (sketched after this
      entry).
      [akpm@linux-foundation.org: fix the export]
      Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Gary Hade <garyhade@us.ibm.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      71088785
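      A sketch of what the arch-neutral version could look like; the offline_pages()
      arguments (in particular the timeout) are assumptions based on that era's
      memory-hotplug API, not a quote of the patch:

             int remove_memory(u64 start, u64 size)
             {
                     unsigned long start_pfn = PFN_DOWN(start);
                     unsigned long end_pfn   = start_pfn + PFN_DOWN(size);

                     return offline_pages(start_pfn, end_pfn, 120 * HZ);
             }
             EXPORT_SYMBOL_GPL(remove_memory);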
  34. 07 Oct 2008, 1 commit
    • powerpc: Avoid integer overflow in page_is_ram() · a880e762
      Committed by Roland Dreier
      Commit 8b150478 ("ppc: make phys_mem_access_prot() work with pfns
      instead of addresses") fixed page_is_ram() in arch/ppc to avoid overflow
      for addresses above 4G on 32-bit kernels.  However arch/powerpc's
      page_is_ram() is missing the same fix -- it computes a physical address
      by doing pfn << PAGE_SHIFT, which overflows if pfn corresponds to a page
      above 4G.
      
      In particular this causes pages above 4G to be mapped with the wrong
      caching attribute; for example many ppc440-based SoCs have PCI space
      above 4G, and mmap()ing MMIO space may end up with a mapping that has
      caching enabled.
      
      Fix this by working with the pfn and avoiding the conversion to a
      physical address that causes the overflow (sketched after this entry).
      This patch compares the pfn to max_pfn, which is a semantic change from
      the old code -- that code compared the physical address to high_memory,
      which corresponds to max_low_pfn.  However, I think that was another
      bug, since highmem pages are still RAM.
      Reported-by: vb <vb@vsbe.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      a880e762
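      A sketch of the 32-bit path after the fix; the 64-bit path (which walks the
      LMB regions) is assumed unchanged, and this is illustrative rather than the
      exact hunk:

             int page_is_ram(unsigned long pfn)
             {
                     /* compare pfns directly; the old code shifted pfn left by
                      * PAGE_SHIFT, which overflows for pages above 4G on 32-bit */
                     return pfn < max_pfn;
             }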
  35. 04 Aug 2008, 1 commit