1. 20 Dec 2010: 15 commits
  2. 03 Dec 2010: 3 commits
  3. 08 Oct 2010: 1 commit
  4. 05 Oct 2010: 1 commit
  5. 27 Jul 2010: 2 commits
    • ARM: call machine_shutdown() from machine_halt(), etc · 3d3f78d7
      Committed by Russell King
      x86 calls machine_shutdown() from the various machine_*() calls which
      take the machine down ready for halting, restarting, etc, and uses
      this to bring the system safely to a point where those actions can be
      performed.  One such action is stopping the secondary CPUs.
      
      So, change the ARM implementation of these to reflect what x86 does.
      
      This solves kexec problems on ARM SMP platforms, where the secondary
      CPUs were left running across the kexec call.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      3d3f78d7
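      A minimal C sketch of the pattern described above, loosely mirroring arch/arm/kernel/process.c after this change (the bodies are paraphrased, not the verbatim patch):

          void machine_shutdown(void)
          {
          #ifdef CONFIG_SMP
                  smp_send_stop();        /* park the secondary CPUs */
          #endif
          }

          void machine_halt(void)
          {
                  machine_shutdown();     /* bring the system to a safe point first */
                  while (1);
          }

          void machine_restart(char *cmd)
          {
                  machine_shutdown();
                  arm_pm_restart(reboot_mode, cmd);
          }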
    • ARM: SMP: Always enable clock event broadcast support · 5388a6b2
      Committed by Russell King
      The TWD local timers are unable to wake up the CPU when it is placed
      into a low power mode, e.g. C3.  Therefore, we need to adapt things
      such that the TWD code can cope with this.
      
      We do this by always providing a broadcast tick function, and marking
      the fact that the TWD local timer will stop in low power modes.  This
      means that when the CPU is placed into a low power mode, the core
      timer code marks this fact, and allows an IPI to be given to the core.
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      5388a6b2
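      A sketch of the two pieces this implies, paraphrasing the arch/arm/kernel/smp_twd.c and smp.c code of that era (treat the exact helper names as illustrative):

          /* smp_twd.c: tell the clockevents core this timer stops in deep idle */
          clk->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT |
                          CLOCK_EVT_FEAT_C3STOP;

          /* smp.c: always provide a broadcast hook that raises a timer IPI,
           * so the core tick-broadcast code can kick a sleeping CPU */
          static void smp_timer_broadcast(const struct cpumask *mask)
          {
                  send_ipi_message(mask, IPI_TIMER);
          }

          clk->broadcast = smp_timer_broadcast;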
  6. 15 May 2010: 1 commit
  7. 01 May 2010: 1 commit
    • ARM: 6066/1: Fix "BUG: scheduling while atomic: swapper/0/0x00000002 · 13ea9cc8
      Committed by Santosh Shilimkar
      This patch fixes the preempt leak in the cpuidle path invoked from
      cpu-hotplug.  The fix was suggested by Russell King and is based on
      the x86 idea of calling init_idle() on the idle task when it is
      re-used, which also resets the preempt count amongst other things.
      
      dump:
      BUG: scheduling while atomic: swapper/0/0x00000002
      Modules linked in:
      Backtrace:
      [<c0024f90>] (dump_backtrace+0x0/0x110) from [<c0173bc4>] (dump_stack+0x18/0x1c)
       r7:c02149e4 r6:c033df00 r5:c7836000 r4:00000000
      [<c0173bac>] (dump_stack+0x0/0x1c) from [<c003b4f0>] (__schedule_bug+0x60/0x70)
      [<c003b490>] (__schedule_bug+0x0/0x70) from [<c0174214>] (schedule+0x98/0x7b8)
       r5:c7836000 r4:c7836000
      [<c017417c>] (schedule+0x0/0x7b8) from [<c00228c4>] (cpu_idle+0xb4/0xd4)
      [<c0022810>] (cpu_idle+0x0/0xd4) from [<c0171dd8>] (secondary_start_kernel+0xe0/0xf0)
       r5:c7836000 r4:c0205f40
      [<c0171cf8>] (secondary_start_kernel+0x0/0xf0) from [<c002d57c>] (prm_rmw_mod_reg_bits+0x88/0xa4)
       r7:c02149e4 r6:00000001 r5:00000001 r4:c7836000
      Backtrace aborted due to bad frame pointer <c7837fbc>
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      13ea9cc8
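      A paraphrased C sketch of the fix as described, in the style of the __cpu_up() path in arch/arm/kernel/smp.c of that era (not the verbatim hunk):

          struct cpuinfo_arm *ci = &per_cpu(cpu_data, cpu);

          if (!ci->idle) {
                  ci->idle = fork_idle(cpu);      /* first bring-up: new idle task */
          } else {
                  /*
                   * The idle thread is being re-used after a previous
                   * hot-unplug; init_idle() reinitialises it, which also
                   * resets its preempt count.
                   */
                  init_idle(ci->idle, cpu);
          }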
  8. 15 Mar 2010: 1 commit
  9. 29 Sep 2009: 2 commits
    • ARM: Fix __cpuexit section mismatch warnings · 90140c30
      Committed by Russell King
      Fix:
      
      WARNING: vmlinux.o(.text+0x247c): Section mismatch in reference from the function cpu_idle() to the function .cpuexit.text:cpu_die()
      The function cpu_idle() references a function in an exit section.
      Often the function cpu_die() has valid usage outside the exit section
      and the fix is to remove the __cpuexit annotation of cpu_die.
      
      WARNING: vmlinux.o(.cpuexit.text+0x3c): Section mismatch in reference from the function cpu_die() to the function .cpuinit.text:secondary_start_kernel()
      The function __cpuexit cpu_die() references
      a function __cpuinit secondary_start_kernel().
      This is often seen when error handling in the exit function
      uses functionality in the init path.
      The fix is often to remove the __cpuinit annotation of
      secondary_start_kernel() so it may be used outside an init section.
      
      Sam says:
      > The annotation of cpu_die() is wrong.
      > To be annotated __cpuexit the function shall:
      > - be used in exit context and only in exit context with HOTPLUG_CPU=n
      > - be used outside exit context with HOTPLUG_CPU=y
      
      So, this also means __cpu_disable(), __cpu_die() and twd_timer_stop() are
      also wrong.  However, removing __cpuexit from cpu_die() creates:
      
      WARNING: vmlinux.o(.text+0x6834): Section mismatch in reference from the function cpu_die() to the function .cpuinit.text:secondary_start_kernel()
      The function cpu_die() references
      the function __cpuinit secondary_start_kernel().
      This is often because cpu_die lacks a __cpuinit
      annotation or the annotation of secondary_start_kernel is wrong.
      
      so fix this using __ref.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Acked-by: Sam Ravnborg <sam@ravnborg.org>
      90140c30
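      The resulting annotation, sketched (the real function body, which jumps back via assembly, is elided; only the marking matters here):

          /*
           * cpu_die() is used outside exit context when HOTPLUG_CPU=y, so it
           * cannot be __cpuexit; marking it __ref tells modpost that its
           * reference to the __cpuinit secondary_start_kernel() is intentional.
           */
          void __ref cpu_die(void)
          {
                  /* ... take the CPU down, wait to be woken again ... */
                  secondary_start_kernel();
          }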
    • ARM: Don't allow highmem on SMP platforms without h/w TLB ops broadcast · e616c591
      Committed by Russell King
      We suffer an unfortunate combination of "features" which makes highmem
      support difficult on platforms without hardware TLB maintenance broadcast:
      
      - we need kmap_high_get() support for DMA cache coherence
      - this requires kmap_high() to take a spinlock with IRQs disabled
      - kmap_high() occasionally calls flush_all_zero_pkmaps() to clear
        out old mappings
      - flush_all_zero_pkmaps() calls flush_tlb_kernel_range(), which
        on s/w IPI'd systems eventually calls smp_call_function_many()
      - smp_call_function_many() must not be called with IRQs disabled:
      
      WARNING: at kernel/smp.c:380 smp_call_function_many+0xc4/0x240()
      Modules linked in:
      Backtrace:
      [<c00306f0>] (dump_backtrace+0x0/0x108) from [<c0286e6c>] (dump_stack+0x18/0x1c)
       r6:c007cd18 r5:c02ff228 r4:0000017c
      [<c0286e54>] (dump_stack+0x0/0x1c) from [<c0053e08>] (warn_slowpath_common+0x50/0x80)
      [<c0053db8>] (warn_slowpath_common+0x0/0x80) from [<c0053e50>] (warn_slowpath_null+0x18/0x1c)
       r7:00000003 r6:00000001 r5:c1ff4000 r4:c035fa34
      [<c0053e38>] (warn_slowpath_null+0x0/0x1c) from [<c007cd18>] (smp_call_function_many+0xc4/0x240)
      [<c007cc54>] (smp_call_function_many+0x0/0x240) from [<c007cec0>] (smp_call_function+0x2c/0x38)
      [<c007ce94>] (smp_call_function+0x0/0x38) from [<c005980c>] (on_each_cpu+0x1c/0x38)
      [<c00597f0>] (on_each_cpu+0x0/0x38) from [<c0031788>] (flush_tlb_kernel_range+0x50/0x58)
       r6:00000001 r5:00000800 r4:c05f3590
      [<c0031738>] (flush_tlb_kernel_range+0x0/0x58) from [<c009c600>] (flush_all_zero_pkmaps+0xc0/0xe8)
      [<c009c540>] (flush_all_zero_pkmaps+0x0/0xe8) from [<c009c6b4>] (kmap_high+0x8c/0x1e0)
      [<c009c628>] (kmap_high+0x0/0x1e0) from [<c00364a8>] (kmap+0x44/0x5c)
      [<c0036464>] (kmap+0x0/0x5c) from [<c0109dfc>] (cramfs_readpage+0x3c/0x194)
      [<c0109dc0>] (cramfs_readpage+0x0/0x194) from [<c0090c14>] (__do_page_cache_readahead+0x1f0/0x290)
      [<c0090a24>] (__do_page_cache_readahead+0x0/0x290) from [<c0090ce4>] (ra_submit+0x30/0x38)
      [<c0090cb4>] (ra_submit+0x0/0x38) from [<c0089384>] (filemap_fault+0x3dc/0x438)
       r4:c1819988
      [<c0088fa8>] (filemap_fault+0x0/0x438) from [<c009d21c>] (__do_fault+0x58/0x43c)
      [<c009d1c4>] (__do_fault+0x0/0x43c) from [<c009e8cc>] (handle_mm_fault+0x104/0x318)
      [<c009e7c8>] (handle_mm_fault+0x0/0x318) from [<c0033c98>] (do_page_fault+0x188/0x1e4)
      [<c0033b10>] (do_page_fault+0x0/0x1e4) from [<c0033ddc>] (do_translation_fault+0x7c/0x84)
      [<c0033d60>] (do_translation_fault+0x0/0x84) from [<c002b474>] (do_DataAbort+0x40/0xa4)
       r8:c1ff5e20 r7:c0340120 r6:00000805 r5:c1ff5e54 r4:c03400d0
      [<c002b434>] (do_DataAbort+0x0/0xa4) from [<c002bcac>] (__dabt_svc+0x4c/0x60)
      ...
      
      So we disable highmem support on these systems.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      e616c591
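      A heavily paraphrased sketch of the kind of check this implies during boot-time memory-bank validation in the ARM mm code; bank_is_highmem() is a hypothetical stand-in for the real lowmem-limit comparison, and the message text is illustrative:

          #if defined(CONFIG_SMP) && defined(CONFIG_HIGHMEM)
                  /*
                   * Without hardware TLB-ops broadcast, kmap_high() would do
                   * IPI-based TLB maintenance with IRQs disabled, so drop any
                   * memory bank that would have to be mapped as highmem.
                   */
                  if (bank_is_highmem(bank)) {
                          printk(KERN_CRIT "Ignoring RAM at %08lx-%08lx: "
                                 "highmem needs h/w TLB ops broadcast\n",
                                 bank->start, bank->start + bank->size - 1);
                          bank->size = 0;
                  }
          #endif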
  10. 24 Sep 2009: 1 commit
  11. 30 May 2009: 1 commit
  12. 28 May 2009: 1 commit
  13. 18 May 2009: 1 commit
  14. 17 May 2009: 1 commit
  15. 12 Feb 2009: 1 commit
  16. 13 Dec 2008: 1 commit
    • cpumask: centralize cpu_online_map and cpu_possible_map · 98a79d6a
      Committed by Rusty Russell
      Impact: cleanup
      
      Each SMP arch defines these themselves.  Move them to a central
      location.
      
      Twists:
      1) Some archs (m32, parisc, s390) set possible_map to all 1, so we add a
         CONFIG_INIT_ALL_POSSIBLE for this rather than break them.
      
      2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'.
         Those archs simply have phys_cpu_present_map replaced everywhere.
      
      3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky
         so I just manipulate them both in sync.
      
      4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map'
         declarations.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
      Tested-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Mike Travis <travis@sgi.com>
      Cc: ink@jurassic.park.msu.ru
      Cc: rmk@arm.linux.org.uk
      Cc: starvik@axis.com
      Cc: tony.luck@intel.com
      Cc: takata@linux-m32r.org
      Cc: ralf@linux-mips.org
      Cc: grundler@parisc-linux.org
      Cc: paulus@samba.org
      Cc: schwidefsky@de.ibm.com
      Cc: lethal@linux-sh.org
      Cc: wli@holomorphy.com
      Cc: davem@davemloft.net
      Cc: jdike@addtoit.com
      Cc: mingo@redhat.com
      98a79d6a
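      A sketch of what the centralised definitions look like in kernel/cpu.c after this change (paraphrased, not the exact hunk):

          #ifdef CONFIG_INIT_ALL_POSSIBLE
          cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL;
          #else
          cpumask_t cpu_possible_map __read_mostly;
          #endif
          EXPORT_SYMBOL(cpu_possible_map);

          cpumask_t cpu_online_map __read_mostly;
          EXPORT_SYMBOL(cpu_online_map);

      Each SMP architecture then drops its own copy and simply manipulates these shared maps.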
  17. 01 Dec 2008: 1 commit
  18. 09 Sep 2008: 1 commit
    • kernel/cpu.c: create a CPU_STARTING cpu_chain notifier · e545a614
      Committed by Manfred Spraul
      Right now, there is no notifier that is called on a new cpu before the
      new cpu begins processing interrupts/softirqs.
      Various kernel functions would need that notification; e.g. kvm works
      around it by calling smp_call_function_single(), and rcu polls
      cpu_online_map.
      
      The patch adds a CPU_STARTING notification. It also adds a helper function
      that sends the message to all cpu_chain handlers.
      
      Tested on x86-64.
      All other archs are untested. Especially on sparc, I'm not sure if I got
      it right.
      Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e545a614
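      A minimal sketch of the helper the commit describes (kernel/cpu.c; the real version also handles the frozen-CPU case used by suspend/resume):

          void notify_cpu_starting(unsigned int cpu)
          {
                  /*
                   * Called by each architecture from its secondary-CPU startup
                   * path, before the new CPU enables interrupts, so cpu_chain
                   * subscribers see CPU_STARTING at the right moment.
                   */
                  raw_notifier_call_chain(&cpu_chain, CPU_STARTING,
                                          (void *)(long)cpu);
          }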
  19. 08 Aug 2008: 1 commit
    • [ARM] Fix SMP booting with non-zero PHYS_OFFSET · 058ddee5
      Committed by Russell King
      The existing code tries to get the pmd for the temporary page table
      by doing:
      
              pgd = pgd_alloc(&init_mm);
              pmd = pmd_offset(pgd, PHYS_OFFSET);
      
      Since we have a two level page table, pmd_offset() is a no-op, so
      this just has a casting effect from a pgd to a pmd - the address
      argument is unused.  So this can't work.
      
      Normally, we'd do:
      
      	pgd = pgd_offset(&init_mm, PHYS_OFFSET);
      	...
      	pmd = pmd_offset(pgd, PHYS_OFFSET);
      
      to get the pmd you want.  However, pgd_offset() takes the mm_struct,
      not the (unattached) pgd we just allocated.  So, instead use:
      
              pgd = pgd_alloc(&init_mm);
              pmd = pmd_offset(pgd + pgd_index(PHYS_OFFSET), PHYS_OFFSET);
      Reported-by: Antti P Miettinen <ananaza@iki.fi>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      058ddee5
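      For context, a sketch of how the corrected pmd is then used in __cpu_up() to build the temporary 1:1 section mapping for the secondary CPU (paraphrased; the constants are the usual ARM two-level page-table macros):

          pgd = pgd_alloc(&init_mm);
          pmd = pmd_offset(pgd + pgd_index(PHYS_OFFSET), PHYS_OFFSET);

          /* 1:1 section mapping of the kernel's physical base, so the
           * secondary CPU survives the moment it turns its MMU on */
          *pmd = __pmd((PHYS_OFFSET & PGDIR_MASK) |
                       PMD_TYPE_SECT | PMD_SECT_AP_WRITE);

          /* ... bring the secondary CPU up, then tear this mapping down ... */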
  20. 26 Jun 2008: 2 commits
  21. 06 Feb 2008: 1 commit