1. 31 October 2013, 1 commit
  2. 29 August 2013, 1 commit
  3. 21 August 2013, 2 commits
    • of: move of_get_cpu_node implementation to DT core library · 183912d3
      Sudeep KarkadaNagesha committed
      This patch moves the generalized implementation of of_get_cpu_node from
      PowerPC to the DT core library, adding support for retrieving the CPU
      node for a given logical CPU index on any architecture.
      
      The CPU subsystem can now use this function to assign of_node in the
      cpu device while registering CPUs.
      
      It is recommended to use this helper function only in pre-SMP/early
      initialisation stages to retrieve CPU device node pointers in logical
      ordering. Once the CPU devices are registered, the node can be retrieved
      easily from the CPU device's of_node, which avoids unnecessary parsing
      and matching.
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Grant Likely <grant.likely@linaro.org>
      Acked-by: Rob Herring <rob.herring@calxeda.com>
      Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
      183912d3
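      A minimal sketch of how the relocated helper might be used from pre-SMP
      architecture code, per the recommendation above. Only of_get_cpu_node()
      and of_node_put() are the real DT API; the surrounding
      early_cpu_dt_scan() function and what is done with the node are
      illustrative assumptions.

      #include <linux/init.h>
      #include <linux/of.h>
      #include <linux/cpumask.h>

      /* Illustrative pre-SMP walk: resolve the DT node for each logical CPU
       * before the cpu devices are registered. */
      static void __init early_cpu_dt_scan(void)
      {
          int cpu;

          for_each_possible_cpu(cpu) {
              /* NULL: we don't care which hardware thread matched */
              struct device_node *np = of_get_cpu_node(cpu, NULL);

              if (!np)
                  continue;

              /* read per-cpu properties here, e.g. clocks or cache info */

              of_node_put(np);    /* drop the reference taken above */
          }
      }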
    • powerpc: refactor of_get_cpu_node to support other architectures · 819d5965
      Sudeep KarkadaNagesha committed
      Currently, different drivers that need to access the CPU device node are
      parsing the device tree themselves. Since the ordering in the DT need
      not match the logical CPU ordering, the parsing logic needs to account
      for that. This has resulted in a lot of code duplication and, in some
      cases, even incorrect logic.
      
      It's better to consolidate them by adding support for getting the CPU
      device node for a given logical CPU index to the DT core library.
      However, the logical-to-physical index mapping can be architecture
      specific.
      
      PowerPC has its own implementation to get the CPU node for a given
      logical index.
      
      This patch refactors the current implementation of of_get_cpu_node in
      preparation for moving it to the DT core library. It separates out the
      logical-to-physical mapping so that a default match of the physical id
      against the logical CPU index can be added when the code is moved to
      common code. Architecture-specific code can override it.
      
      Cc: Rob Herring <rob.herring@calxeda.com>
      Cc: Grant Likely <grant.likely@linaro.org>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
      819d5965
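      A sketch of the split described above, assuming the shape the helper
      eventually took in common code: a weak default that simply compares the
      physical id with the logical index, which architecture code (PowerPC
      here) can override with its own mapping. Treat the helper name and
      placement as assumptions based on this description.

      #include <linux/types.h>
      #include <linux/compiler.h>

      /* Weak default match: accept the DT node whose physical id ("reg")
       * equals the logical cpu index.  An architecture with its own
       * logical-to-physical numbering overrides this with a non-weak
       * version. */
      bool __weak arch_match_cpu_phys_id(int cpu, u64 phys_id)
      {
          return (u32)phys_id == cpu;
      }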
  4. 14 August 2013, 3 commits
  5. 24 July 2013, 1 commit
  6. 02 July 2013, 1 commit
    • powerpc: Handle both new style and old style reserve maps · c039e3a8
      Benjamin Herrenschmidt committed
      When Jeremy introduced the new device-tree based reserve map, he made
      the code in early_reserve_mem_dt() bail out if it found one, thus
      neither reserving the initrd nor processing the old-style map.
      
      I hit problems with variants of kexec that didn't put the initrd in
      the new style map either. While these could/will be fixed, I believe
      we should be safe here and rather reserve more than not enough.
      
      We could have firmware passing reservations via the new-style map and,
      in the middle, a kexec that knows nothing about it adding other things
      to the old-style map.
      
      I don't see a big issue with processing both and reserving everything
      that needs to be. memblock_reserve() supports overlaps fine these days.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      c039e3a8
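      A sketch of the "reserve everything" policy this describes. The struct
      and the loops are illustrative stand-ins for the new-style and
      old-style map walks; only memblock_reserve(), which copes with
      overlapping ranges, is the real API.

      #include <linux/init.h>
      #include <linux/memblock.h>

      struct resv_entry { phys_addr_t base, size; };    /* illustrative */

      static void __init reserve_both_maps(const struct resv_entry *new_map, int n_new,
                                           const struct resv_entry *old_map, int n_old)
      {
          int i;

          /* new-style, device-tree based entries */
          for (i = 0; i < n_new; i++)
              memblock_reserve(new_map[i].base, new_map[i].size);

          /* old-style entries are still processed too: a kexec that knows
           * nothing of the new map may have put the initrd only in here.
           * Overlaps with the loop above are harmless. */
          for (i = 0; i < n_old; i++)
              memblock_reserve(old_map[i].base, old_map[i].size);
      }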
  7. 20 June 2013, 1 commit
  8. 15 November 2012, 2 commits
  9. 07 September 2012, 1 commit
  10. 29 March 2012, 1 commit
  11. 23 February 2012, 2 commits
  12. 09 December 2011, 3 commits
    • memblock: s/memblock_analyze()/memblock_allow_resize()/ and update users · 1aadc056
      Tejun Heo committed
      The only function of memblock_analyze() is now allowing resize of
      memblock region arrays.  Rename it to memblock_allow_resize() and
      update its users.
      
      * The following users remain the same other than renaming.
      
        arm/mm/init.c::arm_memblock_init()
        microblaze/kernel/prom.c::early_init_devtree()
        powerpc/kernel/prom.c::early_init_devtree()
        openrisc/kernel/prom.c::early_init_devtree()
        sh/mm/init.c::paging_init()
        sparc/mm/init_64.c::paging_init()
        unicore32/mm/init.c::uc32_memblock_init()
      
      * In the following users, analyze was used to update the total size,
        which is no longer necessary.
      
        powerpc/kernel/machine_kexec.c::reserve_crashkernel()
        powerpc/kernel/prom.c::early_init_devtree()
        powerpc/mm/init_32.c::MMU_init()
        powerpc/mm/tlb_nohash.c::__early_init_mmu()  
        powerpc/platforms/ps3/mm.c::ps3_mm_add_memory()
        powerpc/platforms/embedded6xx/wii.c::wii_memory_fixups()
        sh/kernel/machine_kexec.c::reserve_crashkernel()
      
      * x86/kernel/e820.c::memblock_x86_fill() was directly setting
        memblock_can_resize before populating memblock and calling analyze
        afterwards.  Call memblock_allow_resize() before starting to populate.
      
      memblock_can_resize is now static inside memblock.c.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      1aadc056
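      A sketch of the new calling convention on the x86 side of the list
      above: allow resizing before populating, instead of poking
      memblock_can_resize and calling analyze afterwards. The address ranges
      are invented for illustration; memblock_allow_resize(), memblock_add()
      and memblock_reserve() are the real calls.

      #include <linux/init.h>
      #include <linux/memblock.h>

      static void __init fill_memblock_example(void)
      {
          /* enable region-array resizing up front ... */
          memblock_allow_resize();

          /* ... then populate; no trailing memblock_analyze() needed */
          memblock_add(0x00000000, 0x40000000);        /* 1G of RAM (example) */
          memblock_reserve(0x01000000, 0x00400000);    /* kernel image (example) */
      }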
    • powerpc: Cleanup memblock usage · 6fbef13c
      Tejun Heo committed
      * early_init_devtree(): Total memory size is aligned to PAGE_SIZE;
        however, alignment isn't enforced if memory_limit is explicitly
        specified.  Simplify the logic and always apply PAGE_SIZE alignment.
      
      * MMU_init(): memblock regions are truncated by directly modifying
        memblock.memory.cnt.  This is incomplete (the reserved array is not
        truncated) and unnecessarily low level, hindering further memblock
        improvements.  Use memblock_enforce_memory_limit() instead.
      
      * wii_memory_fixups(): Unnecessarily low level direct manipulation of
        memblock regions.  The same result can be achieved using properly
        abstracted operations.  Reimplement using memblock API.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      6fbef13c
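      A sketch of the simplified early_init_devtree() logic described in the
      first bullet above: align the limit to PAGE_SIZE whether or not mem=
      was given, then let memblock_enforce_memory_limit() truncate both
      region arrays. memory_limit stands in for the value parsed from the
      command line; the wrapper function is illustrative.

      #include <linux/init.h>
      #include <linux/memblock.h>
      #include <linux/mm.h>

      static void __init apply_memory_limit_example(phys_addr_t memory_limit)
      {
          /* explicit mem= limit if given, otherwise total detected memory */
          phys_addr_t limit = memory_limit ?: memblock_phys_mem_size();

          limit = PAGE_ALIGN(limit);              /* always page-align it */
          memblock_enforce_memory_limit(limit);   /* truncates memory + reserved */
      }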
    • memblock: Kill memblock_init() · fe091c20
      Tejun Heo committed
      memblock_init() initializes the region arrays and memblock itself;
      however, all of this can be done with struct initializers, so
      memblock_init() can be removed.  This patch kills memblock_init() and
      initializes memblock with a struct initializer.
      
      The only difference is that the first dummy entries don't have .nid
      set to MAX_NUMNODES initially.  This doesn't cause any behavior
      difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      fe091c20
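      An abbreviated sketch of the struct-initializer approach. The field and
      constant names follow the memblock of this era (INIT_MEMBLOCK_REGIONS,
      a dummy first entry in each array), but treat the exact layout as an
      assumption rather than the verbatim patch.

      #include <linux/memblock.h>

      #ifndef INIT_MEMBLOCK_REGIONS
      #define INIT_MEMBLOCK_REGIONS   128
      #endif

      static struct memblock_region memblock_memory_init_regions[INIT_MEMBLOCK_REGIONS + 1];
      static struct memblock_region memblock_reserved_init_regions[INIT_MEMBLOCK_REGIONS + 1];

      struct memblock memblock = {
          .memory.regions     = memblock_memory_init_regions,
          .memory.cnt         = 1,    /* empty dummy entry */
          .memory.max         = INIT_MEMBLOCK_REGIONS,

          .reserved.regions   = memblock_reserved_init_regions,
          .reserved.cnt       = 1,    /* empty dummy entry */
          .reserved.max       = INIT_MEMBLOCK_REGIONS,

          .current_limit      = MEMBLOCK_ALLOC_ANYWHERE,
      };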
  13. 01 November 2011, 1 commit
  14. 12 October 2011, 1 commit
    • powerpc: respect mem= setting for early memory limit setup · ba14f649
      Kumar Gala committed
      For those MMUs that require some form of bolted linear mapping (TLB),
      it's rare that one ever sets mem= smaller than the size of that
      mapping.
      
      However, on Book-E 64 parts the initial linear mapping is quite large
      (1G), so it's quite reasonable for mem= to be set smaller than that.
      
      We need to parse the command line for mem= limit and constrain the
      amount of memory we map initially by it if need be.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      ba14f649
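      A sketch of the standard powerpc pattern that produces the limit this
      commit makes the early mapping respect: memparse() converts the mem=
      string and the result is page-aligned into memory_limit. How the bolted
      linear mapping is then clamped to that value is only summarized in the
      trailing comment; the surrounding declarations are illustrative, not
      the commit's diff.

      #include <linux/init.h>
      #include <linux/kernel.h>
      #include <linux/types.h>
      #include <linux/mm.h>

      /* the existing powerpc variable, redeclared here to keep the sketch
       * self-contained */
      static phys_addr_t memory_limit;

      static int __init early_parse_mem(char *p)
      {
          if (!p)
              return 1;

          memory_limit = PAGE_ALIGN(memparse(p, &p));
          return 0;
      }
      early_param("mem", early_parse_mem);

      /* Later, before bolting the initial linear mapping (not shown here):
       * clamp the mapped size to memory_limit when memory_limit != 0. */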
  15. 20 September 2011, 2 commits
  16. 29 June 2011, 1 commit
  17. 17 June 2011, 1 commit
  18. 09 June 2011, 1 commit
    • powerpc: Force page alignment for initrd reserved memory · 307cfe71
      Benjamin Herrenschmidt committed
      When using 64K pages with a separate cpio rootfs, U-Boot will align
      the rootfs on a 4K page boundary. When that memory is reserved and a
      subsequent early memblock_alloc is called, it will allocate memory
      between the 64K page alignment and the reserved memory. When the
      reserved memory is later freed, it is freed by pages, causing the
      early memblock_alloc allocations to be re-used, which in my case
      caused the device-tree to be clobbered.
      
      This patch forces the reserved memory for the initrd to be kernel-page
      aligned, and moves the device tree if it overlaps with the range
      extension of the initrd. It also consolidates the identical function
      free_initrd_mem() from mm/init_32.c and init_64.c into mm/mem.c, and
      adds the same range extension when freeing the initrd. free_initrd_mem()
      is also moved to the __init section.
      
      Many thanks to Milton Miller for his input on this patch.
      
      [BenH: Fixed build without CONFIG_BLK_DEV_INITRD]
      Signed-off-by: Dave Carroll <dcarroll@astekcorp.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      307cfe71
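      A sketch of the reservation described above, using the powerpc
      alignment helpers of the time (_ALIGN_DOWN/_ALIGN_UP): round the initrd
      out to whole kernel pages so that later page-granular freeing cannot
      hand back a partial page that an early memblock_alloc() landed in. The
      wrapper function and the __pa() placement are assumptions of this
      sketch.

      #include <linux/init.h>
      #include <linux/memblock.h>
      #include <asm/page.h>

      static void __init reserve_initrd_page_aligned(unsigned long initrd_start,
                                                     unsigned long initrd_end)
      {
          /* round the start down and the end up to kernel page boundaries */
          phys_addr_t base = _ALIGN_DOWN(__pa(initrd_start), PAGE_SIZE);
          phys_addr_t end  = _ALIGN_UP(__pa(initrd_end), PAGE_SIZE);

          memblock_reserve(base, end - base);
      }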
  19. 20 May 2011, 1 commit
  20. 19 May 2011, 1 commit
  21. 11 May 2011, 1 commit
  22. 27 April 2011, 1 commit
  23. 20 April 2011, 1 commit
  24. 31 March 2011, 1 commit
  25. 02 March 2011, 1 commit
  26. 16 January 2011, 1 commit
  27. 22 October 2010, 1 commit
  28. 05 August 2010, 2 commits
    • memblock: Remove rmo_size, bury it in arch/powerpc where it belongs · cd3db0c4
      Benjamin Herrenschmidt committed
      The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact
      server ppc64, though I hijack it on embedded ppc64 for similar
      purposes) and represents the area of memory that can be accessed in
      real mode (i.e. with the MMU off), or, on embedded, from the exception
      vectors (whose mapping is bolted in the TLB), which pretty much boils
      down to the same thing.
      
      We take that out of the generic MEMBLOCK data structure and move it into
      arch/powerpc where it belongs, renaming it to "RMA" while at it.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      cd3db0c4
    • memblock: Introduce default allocation limit and use it to replace explicit ones · e63075a3
      Benjamin Herrenschmidt committed
      This introduces memblock.current_limit, which is used to limit
      allocations from memblock_alloc() or
      memblock_alloc_base(..., MEMBLOCK_ALLOC_ACCESSIBLE).
      
      The old MEMBLOCK_ALLOC_ANYWHERE changes value from 0 to ~(u64)0 and can still
      be used with memblock_alloc_base() to allocate really anywhere.
      
      It is -no-longer- cropped to MEMBLOCK_REAL_LIMIT which disappears.
      
      Note to archs: I'm leaving the default limit at MEMBLOCK_ALLOC_ANYWHERE.
      I strongly recommend that you set an appropriate limit during boot in
      order to guarantee that a memblock_alloc() at any time results in
      something that is accessible with a simple __va().
      
      The reason is that a subsequent patch will introduce the ability for
      the array to resize itself by reallocating itself. The MEMBLOCK core will
      honor the current limit when performing those allocations.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      e63075a3
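      A sketch contrasting the two allocation flavours after this change. The
      limit value is an example only (say, memory known to be covered by the
      boot-time linear mapping); memblock_set_current_limit(),
      memblock_alloc() and memblock_alloc_base() are the memblock API of this
      era, where the allocators return a physical address.

      #include <linux/init.h>
      #include <linux/memblock.h>
      #include <asm/page.h>

      static void __init memblock_limit_example(void)
      {
          phys_addr_t p;

          /* plain memblock_alloc() now stays below the default limit ... */
          memblock_set_current_limit(0x10000000);    /* first 256MB (example) */
          p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);

          /* ... while callers that really mean "anywhere" say so explicitly */
          p = memblock_alloc_base(PAGE_SIZE, PAGE_SIZE, MEMBLOCK_ALLOC_ANYWHERE);
          (void)p;
      }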
  29. 23 July 2010, 1 commit
  30. 14 July 2010, 1 commit
  31. 09 March 2010, 1 commit
    • powerpc: Dynamically allocate pacas · 1426d5a3
      Michael Ellerman committed
      On 64-bit kernels we currently have a 512 byte struct paca_struct for
      each cpu (usually just called "the paca"). Currently they are statically
      allocated, which means a kernel built for a large number of cpus will
      waste a lot of space if it's booted on a machine with few cpus.
      
      We can avoid that by only allocating the number of pacas we need at
      boot. However this is complicated by the fact that we need to access
      the paca before we know how many cpus there are in the system.
      
      The solution is to dynamically allocate enough space for NR_CPUS pacas,
      but then later in boot when we know how many cpus we have, we free any
      unused pacas.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      1426d5a3
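      A sketch of the scheme described above, written against memblock-style
      calls for brevity (the original March 2010 code predates the
      lmb-to-memblock rename and differs in detail). The function names
      mirror the powerpc allocate_pacas()/free_unused_pacas() pair; the
      bodies, the 256MB allocation ceiling and the bookkeeping variables are
      illustrative assumptions.

      #include <linux/init.h>
      #include <linux/memblock.h>
      #include <linux/threads.h>
      #include <linux/cpumask.h>
      #include <asm/page.h>
      #include <asm/paca.h>

      static phys_addr_t paca_base;
      static unsigned long paca_size;

      void __init allocate_pacas(void)
      {
          /* worst case: one paca per possible CPU, before we know how many
           * actually exist */
          paca_size = PAGE_ALIGN(sizeof(struct paca_struct) * NR_CPUS);
          paca_base = memblock_alloc_base(paca_size, PAGE_SIZE, 0x10000000);
      }

      void __init free_unused_pacas(void)
      {
          /* nr_cpu_ids is known by now; give the unused tail back */
          unsigned long new_size = PAGE_ALIGN(sizeof(struct paca_struct) * nr_cpu_ids);

          if (new_size < paca_size)
              memblock_free(paca_base + new_size, paca_size - new_size);
          paca_size = new_size;
      }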