1. 13 Jan 2012, 1 commit
    • ARM: Add arm_memblock_steal() to allocate memory away from the kernel · 716a3dc2
      Authored by Russell King
      Several platforms are now using the memblock_alloc+memblock_free+
      memblock_remove trick to obtain memory which won't be mapped in the
      kernel's page tables.  Most platforms do this (correctly) in the
      ->reserve callback.  However, OMAP has started to call these functions
      outside of this callback, and this is extremely unsafe - memory will
      not be unmapped, and could well be given out after memblock is no
      longer responsible for its management.
      
      So, provide arm_memblock_steal() to perform this function, and ensure
      that it panic()s if it is used inappropriately.  Convert everyone
      over, including OMAP.
      
      As a result, OMAP with OMAP4_ERRATA_I688 enabled will panic on boot
      with this change.  Mark this option as BROKEN and make it depend on
      BROKEN.  OMAP needs to be fixed, or 137d105d (ARM: OMAP4: Fix
      errata i688 with MPU interconnect barriers.) reverted until such
      time as it can be fixed correctly.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      716a3dc2
  2. 08 Jan 2012, 1 commit
  3. 07 Jan 2012, 2 commits
  4. 05 Jan 2012, 4 commits
  5. 04 Jan 2012, 2 commits
  6. 03 Jan 2012, 1 commit
  7. 28 Dec 2011, 1 commit
  8. 24 Dec 2011, 1 commit
  9. 23 Dec 2011, 1 commit
  10. 22 Dec 2011, 1 commit
    • arm: convert sysdev_class to a regular subsystem · 4a858cfc
      Authored by Kay Sievers
      After all sysdev classes are ported to regular driver core entities, the
      sysdev implementation will be entirely removed from the kernel.
      
      Cc: Kukjin Kim <kgene.kim@samsung.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ben Dooks <ben-linux@fluff.org>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: Boojin Kim <boojin.kim@samsung.com>
      Cc: Linus Walleij <linus.walleij@linaro.org>
      Cc: Lucas De Marchi <lucas.demarchi@profusion.mobi>
      Cc: Heiko Stuebner <heiko@sntech.de>
      Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      4a858cfc
  11. 19 Dec 2011, 1 commit
  12. 13 Dec 2011, 3 commits
  13. 11 Dec 2011, 1 commit
  14. 08 Dec 2011, 7 commits
  15. 06 Dec 2011, 6 commits
    • ARM: SMP: use idmap_pgd for mapping MMU enable during secondary booting · 4e8ee7de
      Authored by Will Deacon
      The ARM SMP booting code allocates a temporary set of page tables
      containing an identity mapping of the kernel image and provides this
      to secondary CPUs for initial booting.
      
      In reality, we only need to include the __turn_mmu_on function in the
      identity mapping since the rest of the kernel is executing from virtual
      addresses after this point.
      
      This patch adds __turn_mmu_on to the .idmap.text section, allowing the
      SMP booting code to use the idmap_pgd directly and not have to populate
      its own set of page tables.
      
      As a result of this patch, we can make the identity_mapping_add function
      static (since it is only used within mm/idmap.c) and also remove the
      identity_mapping_del function. The identity map population is moved to
      an early initcall so that it is setup in time for secondary CPU bringup.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      4e8ee7de
    • ARM: idmap: populate identity map pgd at init time using .init.text · 8903826d
      Authored by Will Deacon
      When disabling and re-enabling the MMU, it is necessary to take out an
      identity mapping for the code that manipulates the SCTLR in order to
      avoid it disappearing from under our feet. This is useful when soft
      rebooting and returning from CPU suspend.
      
      This patch allocates a set of page tables during boot and populates them
      with an identity mapping for the .idmap.text section. This means that
      users of the identity map do not need to manage their own pgd and can
      instead annotate their functions with __idmap or, in the case of assembly
      code, place them in the correct section.
      Acked-by: Dave Martin <dave.martin@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8903826d
    • ARM: 7187/1: fix unwinding for XIP kernels · de66a979
      Authored by Uwe Kleine-König
      The linker places the unwind tables in readonly sections. So when using
      an XIP kernel these are located in ROM and cannot be modified.
      For that reason the current approach to convert the relative offsets in
      the unwind index to absolute addresses early in the boot process doesn't
      work with XIP.
      
      The offsets in the unwind index section are signed 31 bit numbers and
      the structs are sorted by this offset. So it first has offsets between
      0x40000000 and 0x7fffffff (i.e. the negative offsets) and then offsets
      between 0x00000000 and 0x3fffffff. When separating these two blocks the
      numbers are sorted even when interpreting the offsets as unsigned longs.
      
      So determine the first non-negative entry once and track that using the
      new origin pointer. The actual bisection can then use a plain unsigned
      long comparison. The only thing that makes the new bisection more
      complicated is that the offsets are relative to their position in the
      index section, so the key to search needs to be adapted accordingly in
      each step.
      
      Moreover, several consts are added to catch future writes, and the
      member "addr" of struct unwind_idx is renamed to "addr_offset" to
      better match the new semantics. (This has the additional benefit of
      breaking eventual users at compile time to make them aware of the
      change.)
      
      In my tests the new algorithm was a tad faster than the original. It
      has the additional upside of not needing the initial conversion, which
      saves some boot time and makes it possible to unwind even earlier.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Nicolas Pitre <nico@fluxnic.net>
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      de66a979
    • ARM: 7173/1: Add optimised swahb32() byteswap helper for v6 and above · df0e74da
      Authored by Dave Martin
      ARMv6 and later processors have the REV16 instruction, which swaps
      the bytes within each halfword of a register value.
      
      This is already used to implement swab16(), but since the native
      operation performed by REV16 is actually swahb32(), this patch
      renames the existing swab16() helper accordingly and defines
      __arch_swab16() in terms of it.  This allows calls to both swab16()
      and swahb32() to be optimised.
      
      The compiler's generated code might improve someday, but as of
      4.5.2 the code generated for pure C implementing these 16-bit
      byteswaps remains pessimal.
      
      swahb32() is useful for converting 32-bit Thumb instructions
      between integer and memory representation on BE8 platforms (among
      other uses).
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      df0e74da
    • ARM: 7169/1: topdown mmap support · 7dbaa466
      Authored by Rob Herring
      Similar to other architectures, this adds topdown mmap support in user
      process address space allocation policy. This allows mmap sizes greater
      than 2GB. This support is largely copied from MIPS and the generic
      implementations.
      
      The address space randomization is moved into arch_pick_mmap_layout.
      
      Tested on V-Express with Ubuntu and an mmap test from here:
      https://bugs.launchpad.net/bugs/861296
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      7dbaa466
    • ARM: 7140/1: remove NR_IRQS dependency for ARM-specific HARDIRQ_BITS definition · 023bfa3d
      Authored by Kevin Hilman
      As a first step towards removing NR_IRQS, remove the ARM customization
      of HARDIRQ_BITS based on NR_IRQS.
      
      The generic code in <linux/hardirq.h> already has a default value of
      10 for HARDIRQ_BITS which is the max used on ARM, so let's just remove
      the NR_IRQS based customization and use the generic default.
      Signed-off-by: Kevin Hilman <khilman@ti.com>
      Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      023bfa3d
  16. 02 Dec 2011, 3 commits
  17. 01 Dec 2011, 1 commit
  18. 29 Nov 2011, 1 commit
  19. 27 Nov 2011, 2 commits
    • ARM: move VMALLOC_END down temporarily for shmobile · 0af362f8
      Authored by Nicolas Pitre
      THIS IS A TEMPORARY HACK.  The purpose of this is _only_ to avoid a
      regression on an existing machine while a better fix is implemented.
      
      On shmobile the consistent DMA memory area was set to 158MB in commit
      28f0721a with no explanation.  The documented size for this area should
      vary between 2MB and 14MB, and none of the other ARM targets exceed that.
      
      The included #warning is therefore meant to be noisy on purpose to get
      the shmobile maintainers' attention, and this commit should be reverted
      once this consistent DMA size conflict is resolved.
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: Magnus Damm <damm@opensource.se>
      Cc: Paul Mundt <lethal@linux-sh.org>
      0af362f8
    • ARM: move iotable mappings within the vmalloc region · 0536bdf3
      Authored by Nicolas Pitre
      In order to remove the build time variation between different SOCs with
      regards to VMALLOC_END, the iotable mappings are now allocated inside
      the vmalloc region.  This allows for VMALLOC_END to be identical across
      all machines.
      
      The value for VMALLOC_END is now set to 0xff000000 which is right where
      the consistent DMA area starts.
      
      To accommodate all static mappings on machines with possible highmem usage,
      the default vmalloc area size is changed to 240 MB so that VMALLOC_START
      is no higher than 0xf0000000 by default.
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Tested-by: Stephen Warren <swarren@nvidia.com>
      Tested-by: Kevin Hilman <khilman@ti.com>
      Tested-by: Jamie Iles <jamie@jamieiles.com>
      0536bdf3