1. 21 Nov 2014, 1 commit
  2. 18 Sep 2014, 1 commit
    • ARM: 8150/3: fiq: Replace default FIQ handler · c0e7f7ee
      Committed by Daniel Thompson
      This patch introduces a new default FIQ handler that is structured in a
      similar way to the existing ARM exception handlers and results in the FIQ
      being handled by C code running on the SVC stack (although the code run
      from the FIQ handler is subject to severe limitations with respect to
      locking, making normal interaction with the kernel impossible).
      
      This default handler allows concepts that on x86 would be handled using
      NMIs to be realized on ARM.
      
      Credit:
      
          This patch is a near-complete rewrite of a patch originally
          provided by Anton Vorontsov. Today only a couple of small fragments
          survive; however, without Anton's work to build from, this patch
          would not exist. Thanks also to Russell King for spoonfeeding me a
          variety of fixes during the review cycle.
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
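      For context, a driver that wants to own the FIQ vector (and so bypass
      this default handler) goes through the long-standing ARM FIQ API; a
      minimal sketch, assuming a device-supplied assembly stub bounded by the
      hypothetical symbols fiq_start/fiq_end:

      #include <linux/errno.h>
      #include <asm/fiq.h>

      /* fiq_start/fiq_end are hypothetical bounds of a device-supplied
       * assembly stub copied into the FIQ vector */
      extern unsigned char fiq_start, fiq_end;

      static struct fiq_handler my_fh = {
      	.name = "my-fiq",	/* hypothetical owner name */
      };

      static int my_fiq_setup(int irq)
      {
      	struct pt_regs regs = { };

      	if (claim_fiq(&my_fh))		/* take ownership of the vector */
      		return -EBUSY;
      	set_fiq_handler(&fiq_start, &fiq_end - &fiq_start);
      	set_fiq_regs(&regs);		/* preload the banked FIQ registers */
      	enable_fiq(irq);		/* route this interrupt as a FIQ */
      	return 0;
      }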
  3. 18 Jul 2014, 1 commit
  4. 02 Jun 2014, 1 commit
    • ARM: ensure C page table setup code follows assembly code · ca8f0b0a
      Committed by Russell King
      Fix a long-standing bug where, for ARMv6+, we don't fully ensure that
      the C code sets the same cache policy as the assembly code.  This was
      introduced partially by commit 11179d8c ([ARM] 4497/1: Only allow
      safe cache configurations on ARMv6 and later) and also by the addition
      of SMP support.
      
      This patch sets the default cache policy based on the flags used by the
      assembly code, and then, when a cache policy command-line argument is
      used, verifies on ARMv6 that it matches the initial setup.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
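      A sketch of the first half of that approach, assuming the
      cache_policies[] table and cachepolicy index used by arch/arm/mm/mmu.c;
      the body is a reconstruction for illustration, not the verbatim patch:

      static unsigned long initial_pmd_value;

      static void __init init_default_cache_policy(unsigned long pmd)
      {
      	int i;

      	initial_pmd_value = pmd;	/* remember what assembly set up */

      	/* keep only the cacheability-related section bits */
      	pmd &= PMD_SECT_TEX(1) | PMD_SECT_BUFFERABLE | PMD_SECT_CACHEABLE;

      	for (i = 0; i < ARRAY_SIZE(cache_policies); i++)
      		if (cache_policies[i].pmd == pmd) {
      			cachepolicy = i;	/* default follows assembly */
      			break;
      		}
      	if (i == ARRAY_SIZE(cache_policies))
      		pr_err("ERROR: could not find cache policy\n");
      }

      The cachepolicy= early parameter can then compare its request against
      initial_pmd_value and, on ARMv6+, refuse a conflicting override.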
  5. 01 Jun 2014, 1 commit
  6. 22 May 2014, 1 commit
  7. 25 Feb 2014, 2 commits
  8. 10 Feb 2014, 1 commit
    • ARM: 7952/1: mm: Fix the memblock allocation for LPAE machines · ca474408
      Committed by Santosh Shilimkar
      Commit ad6492b8 added the much-needed memblock_virt_alloc_low(), and
      commit 07bacb38 (memblock, bootmem: restore goal for alloc_low) further
      fixed the low-memory limit issue, thanks to Yinghai. But even after all
      these fixes, there is still one case where the limit check done with
      ARCH_LOW_ADDRESS_LIMIT for low memory fails. Russell pointed out the
      issue with 32-bit LPAE machines in the thread below.
      	https://lkml.org/lkml/2014/1/28/364
      
      On some LPAE machines the memory start address is beyond 4GB, so the
      low-memory marker in memblock gets set to the default
      ARCH_LOW_ADDRESS_LIMIT, which is wrong. We could fix this by letting
      architectures set ARCH_LOW_ADDRESS_LIMIT through another export
      similar to memblock_set_current_limit(), but I am not sure whether
      it is worth the trouble. Tell me if you think otherwise.
      
      Instead, this patch just fixes the one broken case by using
      memblock_virt_alloc() in the setup code; since memblock.current_limit
      is updated appropriately, this works on all 32-bit ARM machines.
      
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Strashko, Grygorii <grygorii.strashko@ti.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
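      The shape of the fix, as a hedged sketch (early_buf_alloc is a
      hypothetical wrapper, used only to show the before/after):
      memblock_virt_alloc_low() clamps the allocation goal to
      ARCH_LOW_ADDRESS_LIMIT, while memblock_virt_alloc() honors
      memblock.current_limit.

      static void * __init early_buf_alloc(size_t size)
      {
      	/*
      	 * Before: memblock_virt_alloc_low() caps the goal at
      	 * ARCH_LOW_ADDRESS_LIMIT (4GB by default), which is wrong when
      	 * RAM itself starts above 4GB on an LPAE machine:
      	 *
      	 *	return memblock_virt_alloc_low(size, SMP_CACHE_BYTES);
      	 */

      	/* After: the goal is memblock.current_limit, which the ARM
      	 * setup code keeps pointed at the true lowmem boundary */
      	return memblock_virt_alloc(size, SMP_CACHE_BYTES);
      }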
  9. 28 Jan 2014, 1 commit
  10. 22 Jan 2014, 2 commits
    • arch/arm/kernel/: use memblock apis for early memory allocations · 9233d2be
      Committed by Santosh Shilimkar
      Switch to the memblock interfaces for early memory allocations instead
      of the bootmem allocator.  There is no functional change in behavior
      from the bootmem users' point of view.
      
      Architectures already converted to NO_BOOTMEM now use the memblock
      interfaces directly instead of the bootmem wrappers built on top of
      memblock.  For the architectures that still use bootmem, these new APIs
      simply fall back to the existing bootmem APIs.
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Grygorii Strashko <grygorii.strashko@ti.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Paul Walmsley <paul@pwsan.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
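      The conversions follow one simple pattern; a sketch with a hypothetical
      call site (alloc_irqmap is illustrative, the two interfaces are real):

      static void * __init alloc_irqmap(size_t map_size)
      {
      	/*
      	 * Before, through the bootmem wrapper:
      	 *
      	 *	return alloc_bootmem(map_size);
      	 */

      	/* After: align of 0 falls back to SMP_CACHE_BYTES, and the
      	 * call panics on failure just as alloc_bootmem() did */
      	return memblock_virt_alloc(map_size, 0);
      }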
    • ARM: ignore memory below PHYS_OFFSET · 571b1437
      Committed by Russell King
      If the kernel is loaded higher in physical memory than normal, and we
      calculate PHYS_OFFSET higher than the start of RAM, this leads to
      boot problems as we attempt to map part of this RAM into userspace.
      Rather than struggle with this, just truncate the mapping.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
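      A sketch of the truncation idea, with an illustrative membank type
      standing in for the meminfo entries of this era (names and body are
      assumptions, not the exact hunk):

      struct membank {
      	unsigned long long start;
      	unsigned long long size;
      };

      /* returns 0 if the bank is usable (possibly truncated), -1 if it
       * lies entirely below PHYS_OFFSET and must be ignored */
      static int __init clip_bank(struct membank *bank,
      			    unsigned long long phys_offset)
      {
      	if (bank->start >= phys_offset)
      		return 0;				/* untouched */
      	if (bank->start + bank->size <= phys_offset)
      		return -1;				/* drop whole bank */
      	bank->size -= phys_offset - bank->start;	/* truncate mapping */
      	bank->start = phys_offset;
      	return 0;
      }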
  11. 29 Dec 2013, 1 commit
  12. 10 Dec 2013, 1 commit
  13. 24 Nov 2013, 1 commit
    • ARM: mm: Remove bootmem code and switch to NO_BOOTMEM · 84f452b1
      Committed by Santosh Shilimkar
      Now that the dma_mask series is merged and max*pfn has a consistent
      meaning on ARM, as on the rest of the architectures thanks to RMK's mega
      series, let's switch the ARM code to NO_BOOTMEM. With the NO_BOOTMEM
      change, we now use the memblock allocator to reserve space for the crash
      kernel, removing one dependency on the nobootmem allocator wrapper.
      
      Tested with both the flat and (faked) sparse memory models, with highmem
      enabled.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
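      A hedged sketch of what the crash-kernel reservation looks like once it
      goes through memblock; parse_crashkernel() and memblock_reserve() are
      the real interfaces, while total_mem is assumed to have been computed
      from memblock beforehand:

      static void __init reserve_crashkernel(void)
      {
      	unsigned long long crash_size, crash_base;
      	int ret;

      	ret = parse_crashkernel(boot_command_line, total_mem,
      				&crash_size, &crash_base);
      	if (ret || !crash_size)
      		return;		/* no "crashkernel=" on the command line */

      	/* memblock replaces the old bootmem reservation here */
      	memblock_reserve(crash_base, crash_size);
      }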
  14. 29 Oct 2013, 3 commits
  15. 11 Oct 2013, 1 commit
  16. 26 Sep 2013, 1 commit
  17. 02 Sep 2013, 1 commit
  18. 26 Jul 2013, 2 commits
  19. 22 Jul 2013, 1 commit
  20. 10 Jul 2013, 1 commit
  21. 24 Jun 2013, 1 commit
  22. 20 Jun 2013, 1 commit
    • ARM: kernel: build MPIDR hash function data structure · 8cf72172
      Committed by Lorenzo Pieralisi
      On ARM SMP systems, cores are identified by their MPIDR register.
      The MPIDR guidelines in the ARM ARM do not strictly enforce an MPIDR
      layout; they only provide recommendations that, if followed, split the
      MPIDR on 32-bit ARM platforms into three affinity levels. In
      multi-cluster systems like big.LITTLE, if the affinity guidelines are
      followed, the MPIDR can no longer be considered an index. This means
      that the association between a logical CPU in the kernel and the HW CPU
      identifier becomes somewhat more complicated, requiring methods like
      hashing to associate a given MPIDR with a logical CPU index so that the
      look-up can be carried out in an efficient and scalable way.
      
      This patch provides a kernel function that, starting from the
      cpu_logical_map, implements collision-free hashing of MPIDR values by
      checking all significant bits of the MPIDR affinity-level bitfields. The
      hashing can then be carried out through bit shifts and ORs; the
      resulting hash algorithm is a collision-free, though not minimal, hash
      that can be executed with a few assembly instructions. The MPIDR is
      filtered through an MPIDR mask that is built by checking all bits that
      toggle across the set of MPIDRs corresponding to possible CPUs. Bits
      that do not toggle carry no information, so they do not contribute to
      the resulting hash.
      
      Pseudo code:
      
      /* check all bits that toggle, so they are required */
      for (i = 1, mpidr_mask = 0; i < num_possible_cpus(); i++)
      	mpidr_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));
      
      /*
       * Build shifts to be applied to aff0, aff1, aff2 values to hash the
       * mpidr; aff0 is mpidr_mask bits [7:0], aff1 bits [15:8], aff2
       * bits [23:16].
       * fls() returns the last bit set in a word, 0 if none.
       * ffs() returns the first bit set in a word, 0 if none.
       */
      aff0 = mpidr_mask & 0xff;
      aff1 = (mpidr_mask >> 8) & 0xff;
      aff2 = (mpidr_mask >> 16) & 0xff;
      fs0 = aff0 ? ffs(aff0) - 1 : 0;
      fs1 = aff1 ? ffs(aff1) - 1 : 0;
      fs2 = aff2 ? ffs(aff2) - 1 : 0;
      ls0 = fls(aff0);
      ls1 = fls(aff1);
      ls2 = fls(aff2);
      bits0 = ls0 - fs0;
      bits1 = ls1 - fs1;
      bits2 = ls2 - fs2;
      aff0_shift = fs0;
      aff1_shift = 8 + fs1 - bits0;
      aff2_shift = 16 + fs2 - (bits0 + bits1);
      
      u32 hash(u32 mpidr)
      {
      	u32 l0, l1, l2;
      	u32 mpidr_masked = mpidr & mpidr_mask;
      	l0 = mpidr_masked & 0xff;
      	l1 = mpidr_masked & 0xff00;
      	l2 = mpidr_masked & 0xff0000;
      	return (l0 >> aff0_shift | l1 >> aff1_shift | l2 >> aff2_shift);
      }
      
      The hashing algorithm relies on the inherent properties set out in the
      ARM ARM recommendations for the MPIDR. Exotic configurations, where for
      instance the MPIDR values at a given affinity level contain large holes,
      can end up requiring big hash tables, since the compression of values
      achievable through shifting is somewhat crippled when holes are present.
      The kernel warns if the number of buckets in the resulting hash table
      exceeds the number of possible CPUs by a factor of 4, which is a symptom
      of a very sparse HW MPIDR configuration.
      
      The hash algorithm is quite simple and can easily be implemented in
      assembly code, so it can be used in code paths where the kernel virtual
      address space is not yet set up (i.e. cpu_resume) and where instruction
      and data fetches are strongly ordered, meaning the code must be compact
      and must carry out few data accesses.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Colin Cross <ccross@android.com>
      Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Amit Kucheria <amit.kucheria@linaro.org>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Shawn Guo <shawn.guo@linaro.org>
      Tested-by: Kevin Hilman <khilman@linaro.org>
      Tested-by: Stephen Warren <swarren@wwwdotorg.org>
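      As a concrete illustration, the following stand-alone C program runs the
      pseudo code above in user space on a hypothetical two-cluster MPIDR set
      {0x000, 0x001, 0x100, 0x101}; the helper names and values are for
      illustration only, not kernel code:

      #include <stdio.h>
      #include <stdint.h>

      static int fls32(uint32_t x) { return x ? 32 - __builtin_clz(x) : 0; }

      int main(void)
      {
      	uint32_t cpu[] = { 0x000, 0x001, 0x100, 0x101 };
      	uint32_t mask = 0;
      	int i;

      	/* bits that toggle across the possible CPUs: here mask == 0x101 */
      	for (i = 1; i < 4; i++)
      		mask |= cpu[i] ^ cpu[0];

      	int fs0 = (mask & 0xff) ? __builtin_ffs(mask & 0xff) - 1 : 0;
      	int fs1 = ((mask >> 8) & 0xff) ?
      		  __builtin_ffs((mask >> 8) & 0xff) - 1 : 0;
      	int fs2 = ((mask >> 16) & 0xff) ?
      		  __builtin_ffs((mask >> 16) & 0xff) - 1 : 0;
      	int bits0 = fls32(mask & 0xff) - fs0;
      	int bits1 = fls32((mask >> 8) & 0xff) - fs1;
      	int aff0_shift = fs0;
      	int aff1_shift = 8 + fs1 - bits0;
      	int aff2_shift = 16 + fs2 - (bits0 + bits1);

      	/* prints hashes 0, 1, 2, 3: collision-free, with
      	 * 2^(bits0+bits1+bits2) = 4 buckets for 4 possible CPUs */
      	for (i = 0; i < 4; i++) {
      		uint32_t m = cpu[i] & mask;
      		uint32_t h = (m & 0xff) >> aff0_shift |
      			     (m & 0xff00) >> aff1_shift |
      			     (m & 0xff0000) >> aff2_shift;
      		printf("mpidr 0x%03x -> hash %u\n", cpu[i], h);
      	}
      	return 0;
      }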
  23. 30 May 2013, 1 commit
  24. 21 May 2013, 2 commits
  25. 16 May 2013, 1 commit
    • ARM: 7669/1: keep __my_cpu_offset consistent with generic one · 9394c1c6
      Committed by Ming Lei
      Commit 14318efb (ARM: 7587/1: implement optimized percpu variable
      access) introduced ARM's own __my_cpu_offset to optimize percpu variable
      access. It works really well on hackbench, but it causes __my_cpu_offset
      to return a garbage value before it is initialized in cpu_init(), called
      from setup_arch(), so accessing a percpu variable before setup_arch()
      may hang the kernel. The generic __my_cpu_offset, by contrast, always
      returns zero before the percpu area is brought up, and won't hang the
      kernel.
      
      So this patch clears __my_cpu_offset on the boot CPU early in order to
      avoid the boot hang.
      
      As things stand, percpu variables are already accessed by lockdep before
      setup_arch(), and enabling CONFIG_LOCK_STAT or CONFIG_DEBUG_LOCKDEP can
      trigger a kernel hang.
      Signed-off-by: Ming Lei <tom.leiming@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
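      A sketch of the idea: set_my_cpu_offset() is the real ARM accessor from
      commit 14318efb (the percpu offset lives in the TPIDRPRW coprocessor
      register on ARMv6K/v7), while the wrapper and its placement are assumed
      for illustration rather than the exact hunk:

      static inline void set_my_cpu_offset(unsigned long off)
      {
      	/* TPIDRPRW holds the percpu offset */
      	asm volatile("mcr p15, 0, %0, c13, c0, 4"
      		     : : "r" (off) : "memory");
      }

      /* hypothetical placement: run on the boot CPU before lockdep (or
       * any other early percpu user) so reads see zero, matching the
       * generic pre-initialization behavior */
      static void __init early_clear_cpu_offset(void)
      {
      	set_my_cpu_offset(0);
      }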
  26. 30 Apr 2013, 1 commit
    • ARM: default machine descriptor for multiplatform · 883a106b
      Committed by Arnd Bergmann
      Since we now have default implementations for init_time and init_irq,
      the init_machine callback is the only one that is not yet optional; but
      since simple DT-based platforms all have the same of_platform_populate
      function call in there, we can consolidate them as well, and then
      actually boot with a completely empty machine_desc.
      Unfortunately we cannot just default to an empty init_machine: we
      cannot call of_platform_populate before init_machine because that
      does not work in the case of auxdata, and we cannot call it after
      init_machine either because the machine might need to run code
      after adding the devices.
      
      To take the final step, this adds support for booting without defining
      any machine_desc whatsoever.
      
      For the case that CONFIG_MULTIPLATFORM is enabled, it adds a
      global machine descriptor that never matches any machine but is
      used as a fallback if nothing else matches. We assume that without
      CONFIG_MULTIPLATFORM, we only want to boot on the systems that the kernel
      is built for, so we still retain the build-time warning for missing
      machine descriptors and the run-time warning when the platform does not
      match in that case.
      
      In the case that we run on a multiplatform kernel and the machine
      provides a fully populated device tree, we attempt to keep booting,
      hoping that no machine specific callbacks are necessary.
      
      Finally, this also removes the misguided "select ARCH_VEXPRESS" that
      was only added to avoid a build error for allnoconfig kernels.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Olof Johansson <olof@lixom.net>
      Cc: "Russell King - ARM Linux" <linux@arm.linux.org.uk>
      Cc: Rob Herring <robherring2@gmail.com>
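      The never-matching fallback descriptor itself is tiny; a sketch using
      the real DT_MACHINE_START/MACHINE_END macros, where an empty body
      leaves every callback at its kernel-provided default (the #ifdef guard
      reflects the multiplatform-only behavior described above):

      #ifdef CONFIG_ARCH_MULTIPLATFORM
      DT_MACHINE_START(GENERIC_DT, "Generic DT based system")
      MACHINE_END
      #endif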
  27. 26 Apr 2013, 1 commit
  28. 18 Apr 2013, 1 commit
  29. 17 Apr 2013, 1 commit
  30. 03 Apr 2013, 1 commit
  31. 23 Mar 2013, 2 commits
  32. 01 Feb 2013, 1 commit
  33. 16 Dec 2012, 1 commit