1. 09 Aug 2014, 1 commit
  2. 13 Nov 2013, 1 commit
  3. 20 Mar 2012, 1 commit
  4. 01 Nov 2011, 1 commit
    •
      powerpc: include export.h for files using EXPORT_SYMBOL/THIS_MODULE · 93087948
      By Paul Gortmaker
      Fix failures in powerpc associated with the previously allowed
      implicit module.h presence that now lead to things like this:
      
      arch/powerpc/mm/mmu_context_hash32.c:76:1: error: type defaults to 'int' in declaration of 'EXPORT_SYMBOL_GPL'
      arch/powerpc/mm/tlb_hash32.c:48:1: error: type defaults to 'int' in declaration of 'EXPORT_SYMBOL'
      arch/powerpc/kernel/pci_32.c:51:1: error: type defaults to 'int' in declaration of 'EXPORT_SYMBOL_GPL'
      arch/powerpc/kernel/iomap.c:36:1: error: type defaults to 'int' in declaration of 'EXPORT_SYMBOL'
      arch/powerpc/platforms/44x/canyonlands.c:126:1: error: type defaults to 'int' in declaration of 'EXPORT_SYMBOL'
      arch/powerpc/kvm/44x.c:168:59: error: 'THIS_MODULE' undeclared (first use in this function)
      
      [with several contributions from Stephen Rothwell <sfr@canb.auug.org.au>]
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
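      A minimal sketch of the kind of fix this commit applies (the file and symbol names here are hypothetical; each affected file needed its own specific headers):

      ```c
      /* Files using EXPORT_SYMBOL* or THIS_MODULE must now include
       * export.h explicitly instead of relying on the previously
       * implicit module.h presence. Symbol name is illustrative only. */
      #include <linux/export.h>  /* EXPORT_SYMBOL, EXPORT_SYMBOL_GPL, THIS_MODULE */

      int my_pci_feature;
      EXPORT_SYMBOL_GPL(my_pci_feature);
      ```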
  5. 30 Mar 2011, 1 commit
  6. 30 Mar 2010, 1 commit
    •
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      By Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered:
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
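      As a sketch of the edit the script performs, consider a hypothetical file that calls kmalloc() but previously relied on the implicit inclusion chain:

      ```c
      /* Hypothetical before/after of the script's edit: a file that calls
       * kmalloc()/kfree() must include slab.h directly once percpu.h stops
       * pulling it in. */
      #include <linux/module.h>
      #include <linux/kernel.h>
      #include <linux/slab.h>    /* added by the script: kmalloc(), kfree() */

      static void *make_buf(size_t n)
      {
              /* GFP_KERNEL comes from gfp.h, which slab.h includes */
              return kmalloc(n, GFP_KERNEL);
      }
      ```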
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests in
      step 7, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds
      of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  7. 27 May 2009, 3 commits
    •
      powerpc: Fix up dma_alloc_coherent() on platforms without cache coherency. · 8b31e49d
      By Benjamin Herrenschmidt
      The implementation we just revived has issues, such as using a
      Kconfig-defined virtual address area in kernel space that nothing
      actually carves out (and thus will overlap whatever is there),
      or having some dependencies on being self-contained in a single
      PTE page, which adds unnecessary constraints on the kernel virtual
      address space.
      
      This fixes it by using more classic PTE accessors and automatically
      locating the area for consistent memory, carving an appropriate hole
      in the kernel virtual address space, leaving only the size of that
      area as a Kconfig option. It also brings some dma-mask related fixes
      from the ARM implementation which was almost identical initially but
      grew its own fixes.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    •
      powerpc: Move dma-noncoherent.c from arch/powerpc/lib to arch/powerpc/mm · b16e7766
      By Benjamin Herrenschmidt
      (pre-requisite to make the next patches more palatable)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    •
      Revert "powerpc: Rework dma-noncoherent to use generic vmalloc layer" · 84532a0f
      By Benjamin Herrenschmidt
      This reverts commit 33f00dce.
      
          While it was a good idea to try to use the mm/vmalloc.c allocator instead
          of our own (in fact, ours is itself a dup of an old variant of the vmalloc
          one), unfortunately the approach is terminally busted since
          dma_alloc_coherent() can be called at interrupt time or in atomic contexts,
          and there's little chance we'll make the code in mm/vmalloc.c cope with
          that :-(
      
          Until we can get the generic code to forbid that idiocy and fix all
          drivers abusing it, we pretty much have no choice but to revert to
          our custom virtual space allocator.
      
          There's also a problem with SMP safety since freeing such mapping
          would require an IPI which cannot be done at interrupt time.
      
          However, right now, I don't think we support any platform that is
          both SMP and has non-coherent DMA (don't laugh, I know such things
          do exist !) so we can sort that out later.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  8. 23 Feb 2009, 1 commit
  9. 21 Dec 2008, 1 commit
    •
      powerpc: Rename struct vm_region to avoid conflict with NOMMU · 8168b540
      By David Howells
      Rename PowerPC's struct vm_region so that I can introduce my own
      global version for NOMMU.  It's feasible that the PowerPC version may
      wish to use my global one instead.
      
      The NOMMU vm_region struct defines areas of the physical memory map
      that are under mmap.  This may include chunks of RAM or regions of
      memory mapped devices, such as flash.  It is also used to retain
      copies of file content so that shareable private memory mappings of
      files can be made.  As such, it may be compatible with what is
      described in the banner comment for PowerPC's vm_region struct.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  10. 19 Nov 2008, 1 commit
  11. 14 Oct 2008, 1 commit
  12. 01 Jul 2008, 1 commit
    •
      powerpc: Prevent memory corruption due to cache invalidation of unaligned DMA buffer · 03d70617
      By Andrew Lewis
      On PowerPC processors with non-coherent cache architectures the DMA
      subsystem calls invalidate_dcache_range() before performing a DMA read
      operation.  If the address and length of the DMA buffer are not aligned
      to a cache-line boundary this can result in memory outside of the DMA
      buffer being invalidated in the cache.  If this memory has an
      uncommitted store then the data will be lost and a subsequent read of
      that address will result in an old value being returned from main memory.
      
      Only when the DMA buffer starts on a cache-line boundary and is an exact
      multiple of the cache-line size can invalidate_dcache_range() be called;
      otherwise flush_dcache_range() must be called.  flush_dcache_range()
      will first flush uncommitted writes, and then invalidate the cache.
      
      Signed-off-by: Andrew Lewis <andrew-lewis at netspace.net.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
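      The decision rule described above can be sketched in standalone C (this is an illustrative userspace model, not the kernel code; the L1_CACHE_BYTES value of 32 is an assumption for illustration):

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Only a buffer that starts on a cache-line boundary AND whose size is
       * an exact multiple of the line size may be invalidated directly;
       * anything else must be flushed first so uncommitted stores in the
       * partial lines are not lost. */
      #define L1_CACHE_BYTES 32  /* illustrative; varies by CPU */

      static int may_invalidate(uintptr_t addr, size_t size)
      {
          return (addr % L1_CACHE_BYTES == 0) && (size % L1_CACHE_BYTES == 0);
      }

      int main(void)
      {
          assert(may_invalidate(0x1000, 64));   /* aligned start, whole lines */
          assert(!may_invalidate(0x1004, 64));  /* misaligned start: must flush */
          assert(!may_invalidate(0x1000, 100)); /* partial trailing line: must flush */
          printf("ok\n");
          return 0;
      }
      ```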
  13. 08 May 2007, 1 commit
  14. 07 Feb 2007, 1 commit
    •
      [POWERPC] 8xx: generic 8xx code arch/powerpc port · 5902ebce
      By Vitaly Bordug
      Includes support for non-coherent cache and some mm-related things,
      plus the relevant fields in Kconfig and Makefiles. Also includes
      rheap.o compilation if 8xx is defined.

      Non-coherent mappings were refined and renamed according to Christoph
      Hellwig's feedback. Orphaned functions were cleaned up.
      
      [Also removed arch/ppc/kernel/dma-mapping.c, because otherwise
      compiling with ARCH=ppc for a non DMA-cache-coherent platform ends up
      with two copies of __dma_alloc_coherent etc.
       -- paulus.]
      Signed-off-by: Vitaly Bordug <vbordug@ru.mvista.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  15. 01 Jul 2006, 1 commit
  16. 22 Mar 2006, 1 commit
  17. 30 Oct 2005, 1 commit
    •
      [PATCH] mm: init_mm without ptlock · 872fec16
      By Hugh Dickins
      First step in pushing down the page_table_lock.  init_mm.page_table_lock has
      been used throughout the architectures (usually for ioremap): not to serialize
      kernel address space allocation (that's usually vmlist_lock), but because
      pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.
      
      Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
      architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take
      and drop it when allocating a new one, to check lest a racing task already
      did.  Similarly no page_table_lock in vmalloc's map_vm_area.
      
      Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
      user mms, which are converted only by a later patch, for now they have to lock
      differently according to whether or not it's init_mm.
      
      If sources get muddled, there's a danger that an arch source taking
      init_mm.page_table_lock will be mixed with common source also taking it (or
      neither take it).  So break the rules and make another change, which should
      break the build for such a mismatch: remove the redundant mm arg from
      pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).
      
      Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
      used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
      pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
      map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
      took page_table_lock for no good reason.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
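      The caller-side effect of dropping the redundant mm argument can be sketched as follows (a hypothetical call site, for illustration):

      ```c
      /* Before this patch: callers passed init_mm and held its
       * page_table_lock themselves. */
      spin_lock(&init_mm.page_table_lock);
      pte = pte_alloc_kernel(&init_mm, pmd, addr);
      /* ... */
      spin_unlock(&init_mm.page_table_lock);

      /* After: no mm argument; pte_alloc_kernel takes and drops the lock
       * internally when allocating a new page table. */
      pte = pte_alloc_kernel(pmd, addr);
      ```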
  18. 28 Oct 2005, 1 commit
  19. 12 Oct 2005, 1 commit
    •
      [PATCH] ppc highmem fix · a0c111c6
      By Paolo Galtieri
      I've noticed that the calculations for seg_size and nr_segs in
      __dma_sync_page_highmem() (arch/ppc/kernel/dma-mapping.c) are wrong.  The
      incorrect calculations can result in either an oops or a panic when running
      fsck depending on the size of the partition.
      
      The problem with the seg_size calculation is that it can result in a
      negative number if offset > size.  The problem with the nr_segs
      calculation is that it returns the wrong number of segments, e.g. it
      returns 1 when size is 200 and offset is 4095, when it should return
      2 or more.
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
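      The corrected segment arithmetic can be sketched in standalone C (an illustrative userspace model, not the kernel source; the first segment runs from offset to the end of its page and the remainder is split into whole pages):

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdio.h>

      #define PAGE_SIZE 4096UL

      /* Size of the first segment: from offset to the end of its page,
       * clamped to the total size so it can never go negative. */
      static size_t first_seg_size(size_t offset, size_t size)
      {
          size_t to_page_end = PAGE_SIZE - offset;
          return size < to_page_end ? size : to_page_end;
      }

      /* Number of segments: the first partial segment plus however many
       * pages the remaining bytes span, rounded up. */
      static size_t nr_segs(size_t offset, size_t size)
      {
          size_t seg = first_seg_size(offset, size);
          return 1 + ((size - seg) + PAGE_SIZE - 1) / PAGE_SIZE;
      }

      int main(void)
      {
          /* The failing case from the message: size 200, offset 4095
           * must yield 2 segments, not 1. */
          assert(first_seg_size(4095, 200) == 1);
          assert(nr_segs(4095, 200) == 2);
          printf("ok\n");
          return 0;
      }
      ```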
  20. 11 Sep 2005, 1 commit
  21. 17 Apr 2005, 1 commit
    •
      Linux-2.6.12-rc2 · 1da177e4
      By Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!