1. 17 December 2015 (1 commit)
    • ARM: 8477/1: runtime patch udiv/sdiv instructions into __aeabi_{u}idiv() · 42f25bdd
      Committed by Nicolas Pitre
      The ARM compiler inserts calls to __aeabi_idiv() and
      __aeabi_uidiv() when it needs to perform division on signed and
      unsigned integers. If a processor has support for the sdiv and
      udiv instructions, the kernel may overwrite the beginning of those
      functions with those instructions and a "bx lr" to get better
      performance.
      
      To ensure that those functions are aligned to a 32-bit word for easier
      patching (which might not always be the case in Thumb mode) and that
      the two patched instructions end up in the same cache line, an 8-byte
      alignment is enforced when ARM_PATCH_IDIV is selected.
      
      This was heavily inspired by a previous patch from Stephen Boyd.
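      
      As an illustration of the patching step, a minimal sketch for the
      ARM (non-Thumb) case follows; the opcode constants are the standard
      ARM encodings of "udiv r0, r0, r1" and "bx lr", but the function
      name and surrounding structure are illustrative, not the literal
      patch:
      
      #include <linux/types.h>
      #include <asm/cacheflush.h>
      
      static void __init patch_udiv(void *fn)
      {
      	u32 *insn = fn;
      
      	insn[0] = 0xe730f110;	/* udiv r0, r0, r1 */
      	insn[1] = 0xe12fff1e;	/* bx lr */
      	/* make the rewritten code visible to instruction fetches */
      	flush_icache_range((unsigned long)fn, (unsigned long)fn + 8);
      }
      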
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  2. 18 August 2015 (1 commit)
    • ARM: 8415/1: early fixmap support for earlycon · a5f4c561
      Committed by Stefan Agner
      Add early fixmap support, initially to provide a permanent, fixed
      mapping for the early console. A temporary early pte is created and
      later migrated to a permanent mapping in paging_init. This is also
      needed because the attributes may change as the memory types are
      initialized. The 3MiB fixmap range spans two pte tables, but
      currently only one pte is created for early fixmap support.
      
      Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c since
      the index for kmap does not start at zero anymore. This reverts
      4221e2e6 ("ARM: 8031/1: fixmap: remove FIX_KMAP_BEGIN and
      FIX_KMAP_END") to some extent.
      
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Rob Herring <robh@kernel.org>
      Signed-off-by: Stefan Agner <stefan@agner.ch>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  3. 03 August 2015 (1 commit)
    • ARM: migrate to common PSCI client code · be120397
      Committed by Mark Rutland
      Now that the common PSCI client code has been factored out to
      drivers/firmware, and made safe for 32-bit use, move the 32-bit ARM code
      over to it. This results in a moderate reduction of duplicated lines,
      and will prevent further duplication as the PSCI client code is updated
      for PSCI 1.0 and beyond.
      
      The two legacy platform users of the PSCI invocation code are updated to
      account for interface changes. In both cases the power state parameter
      (which is constant) is now generated using macros, so that the
      pack/unpack logic can be killed in preparation for PSCI 1.0 power state
      changes.
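      
      For reference, a sketch of the kind of power-state packing macro
      described above, following the PSCI 0.2 power_state layout (state ID
      in bits [15:0], state type in bit 16, affinity level in bits
      [25:24]); the macro names are illustrative:
      
      #define PSCI_POWER_STATE(id, type, aff_level)	\
      	(((id) & 0xffff) |			\
      	 (((type) & 0x1) << 16) |		\
      	 (((aff_level) & 0x3) << 24))
      
      /* e.g. a constant "power down the cluster" parameter */
      #define MYPLAT_CLUSTER_OFF	PSCI_POWER_STATE(0, 1, 1)
      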
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Rob Herring <robh@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ashwin Chaugule <ashwin.chaugule@linaro.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 01 August 2015 (1 commit)
    • ARM: 8392/3: smp: Only expose /sys/.../cpuX/online if hotpluggable · 787047ee
      Committed by Stephen Boyd
      Writes to /sys/.../cpuX/online fail if we determine the platform
      doesn't support hotplug for that CPU. Furthermore, if the cpu_die
      op isn't specified the system hangs when we try to offline a CPU
      and it comes right back online unexpectedly. Let's figure this
      stuff out before we make the sysfs nodes so that the online file
      doesn't even exist if it isn't (at least sometimes) possible to
      hotplug the CPU.
      
      Add a new 'cpu_can_disable' op and repoint all 'cpu_disable'
      implementations at it because all implementers use the op to
      indicate if a CPU can be hotplugged or not in a static fashion.
      With PSCI we may need to add a 'cpu_disable' op so that the
      secure OS can be migrated off the CPU we're trying to hotplug.
      In this case, the 'cpu_can_disable' op will indicate that all
      CPUs are hotpluggable by returning true, but the 'cpu_disable' op
      will make a PSCI migration call and occasionally fail, denying
      the hotplug of a CPU. This shouldn't be any worse than x86 where
      we may indicate that all CPUs are hotpluggable but occasionally
      we can't offline a CPU due to check_irq_vectors_for_cpu_disable()
      failing to find a CPU to move vectors to.
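      
      A sketch of the resulting hook; the struct field matches the
      description above, while the platform implementation is
      illustrative:
      
      struct smp_operations {
      	/* ... existing ops ... */
      	/* static query: can this CPU ever be hotplugged? */
      	bool (*cpu_can_disable)(unsigned int cpu);
      };
      
      /* e.g. a platform that can only hotplug secondary CPUs */
      static bool myplat_cpu_can_disable(unsigned int cpu)
      {
      	return cpu != 0;
      }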
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Simon Horman <horms@verge.net.au> [shmobile portion]
      Tested-by: Simon Horman <horms@verge.net.au>
      Cc: Magnus Damm <magnus.damm@gmail.com>
      Cc: <linux-sh@vger.kernel.org>
      Tested-by: Tyler Baker <tyler.baker@linaro.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  5. 02 June 2015 (1 commit)
  6. 28 May 2015 (1 commit)
  7. 08 May 2015 (1 commit)
  8. 28 March 2015 (2 commits)
  9. 18 March 2015 (1 commit)
  10. 21 January 2015 (1 commit)
  11. 05 January 2015 (1 commit)
  12. 02 December 2014 (1 commit)
  13. 21 November 2014 (1 commit)
  14. 18 September 2014 (1 commit)
    • ARM: 8150/3: fiq: Replace default FIQ handler · c0e7f7ee
      Committed by Daniel Thompson
      This patch introduces a new default FIQ handler that is structured in
      a similar way to the existing ARM exception handlers and results in
      the FIQ being handled by C code running on the SVC stack (although
      code run from the FIQ handler is subject to severe limitations with
      respect to locking, making normal interaction with the kernel
      impossible).
      
      This default handler allows concepts that on x86 would be handled using
      NMIs to be realized on ARM.
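      
      The shape of such a handler is a C entry point bracketed by
      nmi_enter()/nmi_exit(), mirroring how NMI handlers behave on x86.
      A sketch under those assumptions, not the literal patch:
      
      #include <linux/hardirq.h>
      #include <asm/irq_regs.h>
      
      asmlinkage void handle_fiq_as_nmi(struct pt_regs *regs)
      {
      	struct pt_regs *old_regs = set_irq_regs(regs);
      
      	nmi_enter();
      	/* NMI-like work only: no sleeping, minimal locking */
      	nmi_exit();
      
      	set_irq_regs(old_regs);
      }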
      
      Credit:
      
          This patch is a near-complete rewrite of a patch originally
          provided by Anton Vorontsov. Today only a couple of small
          fragments survive; however, without Anton's work to build from,
          this patch would not exist. Thanks also to Russell King for
          spoonfeeding me a variety of fixes during the review cycle.
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  15. 18 July 2014 (1 commit)
  16. 02 June 2014 (1 commit)
    • ARM: ensure C page table setup code follows assembly code · ca8f0b0a
      Committed by Russell King
      Fix a long-standing bug where, for ARMv6+, we don't fully ensure that
      the C code sets the same cache policy as the assembly code.  This was
      introduced partially by commit 11179d8c ([ARM] 4497/1: Only allow
      safe cache configurations on ARMv6 and later) and also by adding SMP
      support.
      
      This patch sets the default cache policy based on the flags used by
      the assembly code, and then verifies that any cache policy given on
      the command line matches that initial setup on ARMv6.
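      
      The check reduces to deriving the boot-time policy from the PMD
      flags the assembly code actually used and matching it against a
      policy table; a sketch in which the table entries and the mask name
      are illustrative:
      
      struct cachepolicy {
      	const char	*policy;
      	unsigned int	pmd;	/* PMD section flags this policy implies */
      };
      
      static struct cachepolicy cache_policies[] = {
      	{ "uncached",	PMD_SECT_UNCACHED },
      	{ "writeback",	PMD_SECT_WB },
      	{ "writealloc",	PMD_SECT_WBWA },
      };
      
      /* pick the default cache policy from the flags head.S booted with */
      static int __init init_default_cache_policy(unsigned long pmd)
      {
      	int i;
      
      	for (i = 0; i < ARRAY_SIZE(cache_policies); i++)
      		if (cache_policies[i].pmd == (pmd & PMD_SECT_CACHE_MASK))
      			return i;	/* index of matching policy */
      	return -1;			/* unknown: keep the default */
      }
      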
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  17. 01 June 2014 (1 commit)
  18. 22 May 2014 (1 commit)
  19. 25 February 2014 (2 commits)
  20. 10 February 2014 (1 commit)
    • ARM: 7952/1: mm: Fix the memblock allocation for LPAE machines · ca474408
      Committed by Santosh Shilimkar
      Commit ad6492b8 added the much-needed memblock_virt_alloc_low(), and
      commit 07bacb38 ("memblock, bootmem: restore goal for alloc_low")
      fixed the low memory limit issue, thanks to Yinghai. But even after
      all these fixes, there is still one case where the limit check done
      with ARCH_LOW_ADDRESS_LIMIT for low memory fails. Russell pointed out
      the issue with 32-bit LPAE machines in the thread below:
      	https://lkml.org/lkml/2014/1/28/364
      
      On some LPAE machines where the memory start address is beyond 4GB,
      the low memory marker in memblock gets set to the default
      ARCH_LOW_ADDRESS_LIMIT, which is wrong. We could fix this by letting
      architectures set ARCH_LOW_ADDRESS_LIMIT through another export
      similar to memblock_set_current_limit(), but I am not sure whether it
      is worth the trouble; tell me if you think otherwise.
      
      Instead, this patch just fixes the one broken case by using
      memblock_virt_alloc() in the setup code: since memblock.current_limit
      is updated appropriately, this works on all 32-bit ARM machines.
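      
      The fix itself is essentially a substitution of the allocator call
      in the setup code, roughly of this shape (the call site is
      illustrative; both allocator functions are the real generic APIs):
      
      /* before: forced below ARCH_LOW_ADDRESS_LIMIT, which is wrong on
       * LPAE machines whose RAM starts above 4GB */
      ptr = memblock_virt_alloc_low(size, align);
      
      /* after: bounded by memblock.current_limit, which ARM keeps up to
       * date, so the allocation lands in lowmem on every 32-bit machine */
      ptr = memblock_virt_alloc(size, align);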
      
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Strashko, Grygorii <grygorii.strashko@ti.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  21. 28 January 2014 (1 commit)
  22. 22 January 2014 (2 commits)
    • arch/arm/kernel/: use memblock apis for early memory allocations · 9233d2be
      Committed by Santosh Shilimkar
      Switch to memblock interfaces for the early memory allocator instead
      of the bootmem allocator. No functional change in behavior from the
      bootmem users' point of view.
      
      Archs already converted to NO_BOOTMEM now use memblock interfaces
      directly instead of bootmem wrappers built on top of memblock. On
      archs that still use bootmem, these new APIs simply fall back to the
      existing bootmem APIs.
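      
      The conversion pattern is mechanical; a representative before/after,
      where the call site is illustrative and the APIs are the generic
      ones:
      
      /* bootmem style */
      ptr = alloc_bootmem(size);
      
      /* memblock style: same semantics, panics on failure, and falls
       * back to bootmem on architectures that have not switched yet;
       * an align of 0 means the default SMP_CACHE_BYTES alignment */
      ptr = memblock_virt_alloc(size, 0);
      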
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Grygorii Strashko <grygorii.strashko@ti.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Paul Walmsley <paul@pwsan.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ARM: ignore memory below PHYS_OFFSET · 571b1437
      Committed by Russell King
      If the kernel is loaded higher in physical memory than normal, and we
      calculate PHYS_OFFSET higher than the start of RAM, this leads to
      boot problems as we attempt to map part of this RAM into userspace.
      Rather than struggle with this, just truncate the mapping.
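      
      The truncation amounts to a small clamp when a memory bank is
      registered; a sketch, with the surrounding function being
      illustrative:
      
      static int __init add_mem_bank(phys_addr_t start, phys_addr_t size)
      {
      	/* ignore RAM below PHYS_OFFSET rather than mapping it */
      	if (start < PHYS_OFFSET) {
      		if (start + size <= PHYS_OFFSET)
      			return -EINVAL;	/* bank lies entirely below */
      		size  -= PHYS_OFFSET - start;
      		start  = PHYS_OFFSET;
      	}
      	return memblock_add(start, size);
      }
      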
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  23. 29 December 2013 (1 commit)
  24. 10 December 2013 (1 commit)
  25. 24 November 2013 (1 commit)
    • ARM: mm: Remove bootmem code and switch to NO_BOOTMEM · 84f452b1
      Committed by Santosh Shilimkar
      Now that the dma_mask series is merged and max*pfn has a consistent
      meaning on ARM, as on the rest of the architectures thanks to RMK's
      mega series, let's switch the ARM code to NO_BOOTMEM. With the
      NO_BOOTMEM change we now use the memblock allocator to reserve space
      for the crash kernel, which removes one more dependency on the
      nobootmem allocator wrapper.
      
      Tested with both flat memory and sparse (faked) memory models with
      highmem enabled.
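      
      The crash kernel reservation mentioned above becomes a direct
      memblock call; a sketch with error handling trimmed:
      
      static void __init reserve_crashkernel(void)
      {
      	unsigned long long crash_size, crash_base;
      
      	if (parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
      			      &crash_size, &crash_base))
      		return;
      
      	/* carve the region out of memblock instead of bootmem */
      	memblock_reserve(crash_base, crash_size);
      }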
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  26. 29 October 2013 (3 commits)
  27. 11 October 2013 (1 commit)
  28. 26 September 2013 (1 commit)
  29. 02 September 2013 (1 commit)
  30. 26 July 2013 (2 commits)
  31. 22 July 2013 (1 commit)
  32. 10 July 2013 (1 commit)
  33. 24 June 2013 (1 commit)
  34. 20 June 2013 (1 commit)
    • ARM: kernel: build MPIDR hash function data structure · 8cf72172
      Committed by Lorenzo Pieralisi
      On ARM SMP systems, cores are identified by their MPIDR register.
      The MPIDR guidelines in the ARM ARM do not strictly enforce an MPIDR
      layout, only recommendations that, if followed, split the MPIDR on
      32-bit ARM platforms into three affinity levels. In multi-cluster
      systems like big.LITTLE, if the affinity guidelines are followed, the
      MPIDR can no longer be considered an index. The association between a
      logical CPU in the kernel and the HW CPU identifier therefore becomes
      somewhat more complicated, requiring methods like hashing to
      associate a given MPIDR with a CPU logical index so that the look-up
      can be carried out in an efficient and scalable way.
      
      This patch provides a kernel function that, starting from the
      cpu_logical_map, implements collision-free hashing of MPIDR values by
      checking all significant bits of the MPIDR affinity level bitfields.
      The hashing can then be carried out through bit shifting and ORing;
      the resulting hash algorithm is collision-free, though not minimal,
      and can be executed with a few assembly instructions. The MPIDR is
      filtered through an MPIDR mask built by checking all bits that toggle
      across the set of MPIDRs corresponding to possible CPUs. Bits that do
      not toggle carry no information, so they do not contribute to the
      resulting hash.
      
      Pseudo code:
      
      u32 i, mpidr_mask;
      u32 fs0, fs1, fs2, ls0, ls1, ls2;
      u32 bits0, bits1, bits2;
      u32 aff0_shift, aff1_shift, aff2_shift;
      
      /* check all bits that toggle, since only those are required */
      for (i = 1, mpidr_mask = 0; i < num_possible_cpus(); i++)
      	mpidr_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));
      
      /*
       * Build shifts to be applied to the aff0, aff1, aff2 values to hash
       * the mpidr. fls() returns the position of the most significant bit
       * set in a word, 0 if none; ffs() returns the position of the least
       * significant bit set, 0 if none.
       */
      fs0 = (mpidr_mask & 0xff) ? ffs(mpidr_mask & 0xff) - 1 : 0;
      fs1 = ((mpidr_mask >> 8) & 0xff) ? ffs((mpidr_mask >> 8) & 0xff) - 1 : 0;
      fs2 = ((mpidr_mask >> 16) & 0xff) ? ffs((mpidr_mask >> 16) & 0xff) - 1 : 0;
      ls0 = fls(mpidr_mask & 0xff);
      ls1 = fls((mpidr_mask >> 8) & 0xff);
      ls2 = fls((mpidr_mask >> 16) & 0xff);
      bits0 = ls0 - fs0;
      bits1 = ls1 - fs1;
      bits2 = ls2 - fs2;
      aff0_shift = fs0;
      aff1_shift = 8 + fs1 - bits0;
      aff2_shift = 16 + fs2 - (bits0 + bits1);
      
      u32 hash(u32 mpidr)
      {
      	u32 mpidr_masked = mpidr & mpidr_mask;
      	u32 l0 = mpidr_masked & 0xff;
      	u32 l1 = mpidr_masked & 0xff00;
      	u32 l2 = mpidr_masked & 0xff0000;
      
      	return l0 >> aff0_shift | l1 >> aff1_shift | l2 >> aff2_shift;
      }
      
      The hashing algorithm relies on the inherent properties set out in
      the ARM ARM recommendations for the MPIDR. Exotic configurations,
      where for instance the MPIDR values at a given affinity level contain
      large holes, can end up requiring big hash tables, since the
      compression of values achievable through shifting is somewhat
      crippled when holes are present. The kernel warns if the number of
      buckets in the resulting hash table exceeds the number of possible
      CPUs by a factor of 4, which is a symptom of a very sparse HW MPIDR
      configuration.
      
      The hash algorithm is quite simple and can easily be implemented in
      assembly code, to be used in code paths where the kernel virtual
      address space is not set up (i.e. cpu_resume) and instruction and
      data fetches are strongly ordered, so the code must be compact and
      must carry out few data accesses.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Colin Cross <ccross@android.com>
      Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Amit Kucheria <amit.kucheria@linaro.org>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Shawn Guo <shawn.guo@linaro.org>
      Tested-by: Kevin Hilman <khilman@linaro.org>
      Tested-by: Stephen Warren <swarren@wwwdotorg.org>