- 03 March 2017, 1 commit
-
-
Committed by Ingo Molnar
Move the following task->mm helper APIs into a new header file, <linux/sched/mm.h>, to further reduce the size and complexity of <linux/sched.h>. Here is how the APIs are used in various kernel files:
    # mm_alloc(): arch/arm/mach-rpc/ecard.c fs/exec.c include/linux/sched/mm.h kernel/fork.c
    # __mmdrop(): arch/arc/include/asm/mmu_context.h include/linux/sched/mm.h kernel/fork.c
    # mmdrop(): arch/arm/mach-rpc/ecard.c arch/m68k/sun3/mmu_emu.c arch/x86/mm/tlb.c drivers/gpu/drm/amd/amdkfd/kfd_process.c drivers/gpu/drm/i915/i915_gem_userptr.c drivers/infiniband/hw/hfi1/file_ops.c drivers/vfio/vfio_iommu_spapr_tce.c fs/exec.c fs/proc/base.c fs/proc/task_mmu.c fs/proc/task_nommu.c fs/userfaultfd.c include/linux/mmu_notifier.h include/linux/sched/mm.h kernel/fork.c kernel/futex.c kernel/sched/core.c mm/khugepaged.c mm/ksm.c mm/mmu_context.c mm/mmu_notifier.c mm/oom_kill.c virt/kvm/kvm_main.c
    # mmdrop_async_fn(): include/linux/sched/mm.h
    # mmdrop_async(): include/linux/sched/mm.h kernel/fork.c
    # mmget_not_zero(): fs/userfaultfd.c include/linux/sched/mm.h mm/oom_kill.c
    # mmput(): arch/arc/include/asm/mmu_context.h arch/arc/kernel/troubleshoot.c arch/frv/mm/mmu-context.c arch/powerpc/platforms/cell/spufs/context.c arch/sparc/include/asm/mmu_context_32.h drivers/android/binder.c drivers/gpu/drm/etnaviv/etnaviv_gem.c drivers/gpu/drm/i915/i915_gem_userptr.c drivers/infiniband/core/umem.c drivers/infiniband/core/umem_odp.c drivers/infiniband/core/uverbs_main.c drivers/infiniband/hw/mlx4/main.c drivers/infiniband/hw/mlx5/main.c drivers/infiniband/hw/usnic/usnic_uiom.c drivers/iommu/amd_iommu_v2.c drivers/iommu/intel-svm.c drivers/lguest/lguest_user.c drivers/misc/cxl/fault.c drivers/misc/mic/scif/scif_rma.c drivers/oprofile/buffer_sync.c drivers/vfio/vfio_iommu_type1.c drivers/vhost/vhost.c drivers/xen/gntdev.c fs/exec.c fs/proc/array.c fs/proc/base.c fs/proc/task_mmu.c fs/proc/task_nommu.c fs/userfaultfd.c include/linux/sched/mm.h kernel/cpuset.c kernel/events/core.c kernel/events/uprobes.c kernel/exit.c kernel/fork.c kernel/ptrace.c kernel/sys.c kernel/trace/trace_output.c kernel/tsacct.c mm/memcontrol.c mm/memory.c mm/mempolicy.c mm/migrate.c mm/mmu_notifier.c mm/nommu.c mm/oom_kill.c mm/process_vm_access.c mm/rmap.c mm/swapfile.c mm/util.c virt/kvm/async_pf.c
    # mmput_async(): include/linux/sched/mm.h kernel/fork.c mm/oom_kill.c
    # get_task_mm(): arch/arc/kernel/troubleshoot.c arch/powerpc/platforms/cell/spufs/context.c drivers/android/binder.c drivers/gpu/drm/etnaviv/etnaviv_gem.c drivers/infiniband/core/umem.c drivers/infiniband/core/umem_odp.c drivers/infiniband/hw/mlx4/main.c drivers/infiniband/hw/mlx5/main.c drivers/infiniband/hw/usnic/usnic_uiom.c drivers/iommu/amd_iommu_v2.c drivers/iommu/intel-svm.c drivers/lguest/lguest_user.c drivers/misc/cxl/fault.c drivers/misc/mic/scif/scif_rma.c drivers/oprofile/buffer_sync.c drivers/vfio/vfio_iommu_type1.c drivers/vhost/vhost.c drivers/xen/gntdev.c fs/proc/array.c fs/proc/base.c fs/proc/task_mmu.c include/linux/sched/mm.h kernel/cpuset.c kernel/events/core.c kernel/exit.c kernel/fork.c kernel/ptrace.c kernel/sys.c kernel/trace/trace_output.c kernel/tsacct.c mm/memcontrol.c mm/memory.c mm/mempolicy.c mm/migrate.c mm/mmu_notifier.c mm/nommu.c mm/util.c
    # mm_access(): fs/proc/base.c include/linux/sched/mm.h kernel/fork.c mm/process_vm_access.c
    # mm_release(): arch/arc/include/asm/mmu_context.h fs/exec.c include/linux/sched/mm.h include/uapi/linux/sched.h kernel/exit.c kernel/fork.c
Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
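For reference, a minimal sketch (not taken from the patch itself) of the pin/use/release pattern most of the listed call sites follow once they include the new header; 'task' is assumed to be a valid task_struct pointer:

    #include <linux/sched/mm.h>

    /* sketch: pin a task's address space, use it, then drop the reference */
    struct mm_struct *mm = get_task_mm(task);   /* 'task' assumed valid */
    if (mm) {
            /* ... inspect or operate on the mm ... */
            mmput(mm);
    }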
-
- 02 March 2017, 1 commit
-
-
Committed by Ingo Molnar
We are going to split <linux/sched/hotplug.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/hotplug.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
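A minimal sketch of what such a placeholder header looks like (illustrative only; the include-guard name is assumed, not quoted from the patch):

    /* include/linux/sched/hotplug.h -- placeholder, simply maps to sched.h for now */
    #ifndef _LINUX_SCHED_HOTPLUG_H
    #define _LINUX_SCHED_HOTPLUG_H

    #include <linux/sched.h>

    #endif /* _LINUX_SCHED_HOTPLUG_H */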
-
- 28 February 2017, 1 commit
-
-
Committed by Vegard Nossum
Apart from adding the helper function itself, the rest of the kernel is converted mechanically using:
    git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_count);/mmgrab\(\1\);/'
    git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_count);/mmgrab\(\&\1\);/'
This is needed for a later patch that hooks into the helper, but might be a worthwhile cleanup on its own. (Michal Hocko provided most of the kerneldoc comment.) Link: http://lkml.kernel.org/r/20161218123229.22952-1-vegard.nossum@oracle.com Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
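A minimal sketch of the helper that the sed conversion targets (kerneldoc comment omitted):

    /* sketch: mmgrab() pins the mm_struct itself (mm_count), not the address space */
    static inline void mmgrab(struct mm_struct *mm)
    {
            atomic_inc(&mm->mm_count);
    }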
-
- 24 February 2017, 2 commits
-
-
Committed by Vijay Kumar
On panic, all other CPUs are stopped except the one which hit the panic. To keep the console alive, we need to migrate the hvcons irq to the panicked CPU. Signed-off-by: Vijay Kumar <vijay.ac.kumar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Vijay Kumar
A CPU needs to be marked offline before stopping it. When it is not marked offline, the xcall receives HV_EWOULDBLOCK, assumes that not all CPUs received the message, and retries. After 10000 retries, it finally fails with a fatal mondo timeout. Signed-off-by: Vijay Kumar <vijay.ac.kumar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
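A hedged sketch of the ordering the fix establishes in the CPU-stop path (the call shown is the generic helper, not the sparc-specific code from the patch):

    /* sketch: mark the dying CPU offline first, so the mondo/xcall code
     * no longer waits for an acknowledgement from it */
    set_cpu_online(smp_processor_id(), false);
    /* ... then actually stop/park the CPU ... */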
-
- 25 December 2016, 1 commit
-
-
Committed by Linus Torvalds
This was entirely automated, using the script by Al:
    PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
    sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
        $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)
to do the replacement at the end of the merge window. Requested-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
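The net effect of the script on each affected file is simply the following include swap (illustrative):

    /* before */
    #include <asm/uaccess.h>
    /* after  */
    #include <linux/uaccess.h>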
-
- 25 October 2016, 1 commit
-
-
Committed by Atish Patra
Each scheduler domain should consist of a different hierarchy of cores sharing a similar property. Currently, no scheduler domain is defined separately for the cores that share the last level cache. As a result, the scheduler fails to take advantage of cache locality while migrating tasks during load balancing. Here are the cpu masks currently present for sparc that are/can be used in scheduler domain construction:
    cpu_core_map     : set based on the cores that share the L1 cache.
    cpu_core_sib_map : set based on the socket id.
The prior SPARC notion of socket was defined as the highest level of shared cache. However, the MD record on T7 platforms now describes the CPUs that share the physical socket, and this is no longer tied to shared cache. That's why a separate cpu mask needs to be created that truly represents the highest level of shared cache for all platforms. Signed-off-by: Atish Patra <atish.patra@oracle.com> Reviewed-by: Chris Hyser <chris.hyser@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 September 2016, 1 commit
-
-
Committed by Atish Patra
If the kernel boot parameter nr_cpus is set, it should define the number of CPUs that can ever be available in the system, i.e. cpu_possible_mask. setup_nr_cpu_ids() overrides nr_cpu_ids based on cpu_possible_mask during kernel initialization. If cpu_possible_mask is not set based on the nr_cpus value, the earlier part of the kernel would be initialized using the nr_cpus value, leading to a kernel crash. Set cpu_possible_mask based on the nr_cpus value. Thus setup_nr_cpu_ids() becomes redundant and does not corrupt the nr_cpu_ids value. Signed-off-by: Atish Patra <atish.patra@oracle.com> Reviewed-by: Bob Picco <bob.picco@oracle.com> Reviewed-by: Vijay Kumar <vijay.ac.kumar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 March 2016, 1 commit
-
-
Committed by Thomas Gleixner
Let the non-boot cpus call into idle with the corresponding hotplug state, so the hotplug core can handle the further bringup. That's a first step to convert the boot side of the hotplugged cpus to do all the synchronization with the other side through the state machine. For now it'll only start the hotplug thread and kick the full bringup of the cpu. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-arch@vger.kernel.org Cc: Rik van Riel <riel@redhat.com> Cc: Rafael Wysocki <rafael.j.wysocki@intel.com> Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Sebastian Siewior <bigeasy@linutronix.de> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Tejun Heo <tj@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Turner <pjt@google.com> Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
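A hedged sketch of what the tail of an architecture's secondary-CPU startup path looks like after this change (the exact hotplug state name is assumed, not quoted from the patch):

    /* sketch: enter the idle loop through the hotplug state machine */
    cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);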
-
- 23 April 2015, 1 commit
-
-
Committed by chris hyser
commit 5f4826a362405748bbf73957027b77993e61e1af
Author: chris hyser <chris.hyser@oracle.com>
Date: Tue Apr 21 10:31:38 2015 -0400
sparc64: Setup sysfs to mark LDOM sockets, cores and threads correctly
The current sparc kernel has no representation for sockets though tools like lscpu can pull this from sysfs. This patch walks the machine description cache and socket hierarchy and marks sockets as well as cores and threads such that a representative sysfs is created by drivers/base/topology.c.
Before this patch:
    $ lscpu
    Architecture:          sparc64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Big Endian
    CPU(s):                1024
    On-line CPU(s) list:   0-1023
    Thread(s) per core:    8
    Core(s) per socket:    1      <--- wrong
    Socket(s):             128    <--- wrong
    NUMA node(s):          4
    NUMA node0 CPU(s):     0-255
    NUMA node1 CPU(s):     256-511
    NUMA node2 CPU(s):     512-767
    NUMA node3 CPU(s):     768-1023
After this patch:
    $ lscpu
    Architecture:          sparc64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Big Endian
    CPU(s):                1024
    On-line CPU(s) list:   0-1023
    Thread(s) per core:    8
    Core(s) per socket:    32
    Socket(s):             4
    NUMA node(s):          4
    NUMA node0 CPU(s):     0-255
    NUMA node1 CPU(s):     256-511
    NUMA node2 CPU(s):     512-767
    NUMA node3 CPU(s):     768-1023
Most of this patch was done by Chris with updates by David. Signed-off-by: Chris Hyser <chris.hyser@oracle.com> Signed-off-by: David Ahern <david.ahern@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 March 2015, 1 commit
-
-
Committed by Dave Kleikamp
"echo c > /proc/sysrq-trigger" does not result in a system crash. There are two problems. One is that the trap handler ignores the global variable, panic_on_oops. The other is that smp_send_stop() is a no-op which leaves the other cpus running normally when one cpu panics. Signed-off-by: NDave Kleikamp <dave.kleikamp@oracle.com> Signed-off-by: NDavid S. Miller <davem@davemloft.net>
-
- 08 November 2014, 1 commit
-
-
Committed by David S. Miller
Otherwise rcu_irq_{enter,exit}() do not happen and we get dumps like:
    ====================
    [ 188.275021] ===============================
    [ 188.309351] [ INFO: suspicious RCU usage. ]
    [ 188.343737] 3.18.0-rc3-00068-g20f3963d-dirty #54 Not tainted
    [ 188.394786] -------------------------------
    [ 188.429170] include/linux/rcupdate.h:883 rcu_read_lock() used illegally while idle!
    [ 188.505235] other info that might help us debug this:
    [ 188.554230] RCU used illegally from idle CPU! rcu_scheduler_active = 1, debug_locks = 0
    [ 188.637587] RCU used illegally from extended quiescent state!
    [ 188.690684] 3 locks held by swapper/7/0:
    [ 188.721932] #0: (&x->wait#11){......}, at: [<0000000000495de8>] complete+0x8/0x60
    [ 188.797994] #1: (&p->pi_lock){-.-.-.}, at: [<000000000048510c>] try_to_wake_up+0xc/0x400
    [ 188.881343] #2: (rcu_read_lock){......}, at: [<000000000048a910>] select_task_rq_fair+0x90/0xb40
    [ 188.973043] stack backtrace:
    [ 188.993879] CPU: 7 PID: 0 Comm: swapper/7 Not tainted 3.18.0-rc3-00068-g20f3963d-dirty #54
    [ 189.076187] Call Trace:
    [ 189.089719] [0000000000499360] lockdep_rcu_suspicious+0xe0/0x100
    [ 189.147035] [000000000048a99c] select_task_rq_fair+0x11c/0xb40
    [ 189.202253] [00000000004852d8] try_to_wake_up+0x1d8/0x400
    [ 189.252258] [000000000048554c] default_wake_function+0xc/0x20
    [ 189.306435] [0000000000495554] __wake_up_common+0x34/0x80
    [ 189.356448] [00000000004955b4] __wake_up_locked+0x14/0x40
    [ 189.406456] [0000000000495e08] complete+0x28/0x60
    [ 189.448142] [0000000000636e28] blk_end_sync_rq+0x8/0x20
    [ 189.496057] [0000000000639898] __blk_mq_end_request+0x18/0x60
    [ 189.550249] [00000000006ee014] scsi_end_request+0x94/0x180
    [ 189.601286] [00000000006ee334] scsi_io_completion+0x1d4/0x600
    [ 189.655463] [00000000006e51c4] scsi_finish_command+0xc4/0xe0
    [ 189.708598] [00000000006ed958] scsi_softirq_done+0x118/0x140
    [ 189.761735] [00000000006398ec] __blk_mq_complete_request_remote+0xc/0x20
    [ 189.827383] [00000000004c75d0] generic_smp_call_function_single_interrupt+0x150/0x1c0
    [ 189.906581] [000000000043e514] smp_call_function_single_client+0x14/0x40
    ====================
Based almost entirely upon a patch by Paul E. McKenney. Reported-by: Meelis Roos <mroos@linux.ee> Tested-by: Meelis Roos <mroos@linux.ee> Signed-off-by: David S. Miller <davem@davemloft.net>
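A hedged sketch of the shape of the fix: wrap the cross-call client handlers in irq_enter()/irq_exit(), which perform the missing rcu_irq_{enter,exit}() transitions (handler name taken from the sparc code, body simplified):

    void smp_call_function_single_client(int irq, struct pt_regs *regs)
    {
            irq_enter();            /* implies rcu_irq_enter() */
            generic_smp_call_function_single_interrupt();
            irq_exit();             /* implies rcu_irq_exit() */
    }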
-
- 06 October 2014, 1 commit
-
-
Committed by David S. Miller
This has become necessary with chips that support more than 43 bits of physical addressing. Based almost entirely upon a patch by Bob Picco. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Bob Picco <bob.picco@oracle.com>
-
- 14 August 2014, 1 commit
-
-
Committed by Peter Zijlstra
Many of the atomic op implementations are the same except for one instruction; fold the lot into a few CPP macros and reduce LoC. This also prepares for easy addition of new ops. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: David S. Miller <davem@davemloft.net> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Kirill Tkhai <tkhai@yandex.ru> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Sam Ravnborg <sam@ravnborg.org> Cc: sparclinux@vger.kernel.org Link: http://lkml.kernel.org/r/20140508135852.825281379@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
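A hedged illustration of the folding technique (generic shape only; the stand-in body uses a compiler builtin, whereas sparc's real versions are hand-written assembly):

    #define ATOMIC_OP(op)                                               \
    static inline void atomic_##op(int i, atomic_t *v)                  \
    {                                                                   \
            /* stand-in body, not the sparc implementation */           \
            __atomic_fetch_##op(&v->counter, i, __ATOMIC_SEQ_CST);      \
    }

    ATOMIC_OP(add)
    ATOMIC_OP(sub)

    #undef ATOMIC_OP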
-
- 12 August 2014, 1 commit
-
-
Committed by David S. Miller
Christopher reports that perf_event_print_debug() can crash in uniprocessor builds. The crash is due to pcr_ops being NULL. This happens because pcr_arch_init() is only invoked by smp_cpus_done(), which only executes in SMP builds. init_hw_perf_events() is closely intertwined with pcr_ops being set up properly, therefore:
1) Call pcr_arch_init() early on from init_hw_perf_events(), instead of from smp_cpus_done().
2) Do not hook up a PMU type if pcr_ops is NULL after pcr_arch_init().
3) Move init_hw_perf_events to a later initcall so that we will be sure to invoke pcr_arch_init() after all cpus are brought up.
Finally, guard the one naked sequence of pcr_ops dereferences in __global_pmu_self() with an appropriate NULL check. Reported-by: Christopher Alexander Tobias Schulze <cat.schulze@alice-dsl.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 May 2014, 3 commits
-
-
Committed by Sam Ravnborg
Fix following warnings:
    init_64.c:191:10: warning: symbol 'dcpage_flushes' was not declared. Should it be static?
    init_64.c:193:10: warning: symbol 'dcpage_flushes_xcall' was not declared. Should it be static?
Add extern declaration to asm/setup.h and drop local declaration in smp_64.h. Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: David S. Miller <davem@davemloft.net>
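A minimal sketch of the kind of shared declarations this adds to asm/setup.h (the atomic_t type is an assumption based on how these debug counters are usually defined):

    /* sketch: shared declarations so both init_64.c and smp_64.c see them */
    extern atomic_t dcpage_flushes;
    extern atomic_t dcpage_flushes_xcall;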
-
Committed by Sam Ravnborg
Fix following warnings:
    smp_32.c:177:5: warning: symbol 'setup_profiling_timer' was not declared. Should it be static?
    smp_64.c:1202:5: warning: symbol 'setup_profiling_timer' was not declared. Should it be static?
    smp_64.c:989:6: warning: symbol 'kgdb_roundup_cpus' was not declared. Should it be static?
Add prototype of setup_profiling_timer to include/linux/profile.h. Add missing include to smp_64.c. Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: David S. Miller <davem@davemloft.net>
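The declaration added to <linux/profile.h> is essentially the standard one (sketch):

    /* sketch: prototype shared by the architectures that implement it */
    int setup_profiling_timer(unsigned int multiplier);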
-
Committed by Sam Ravnborg
Fix following warnings:
    smp_64.c:88:6: warning: symbol 'smp_callin' was not declared. Should it be static?
    smp_64.c:133:6: warning: symbol 'cpu_panic' was not declared. Should it be static?
    smp_64.c:187:6: warning: symbol 'smp_synchronize_tick_client' was not declared. Should it be static?
    smp_64.c:821:18: warning: symbol 'smp_call_function_client' was not declared. Should it be static?
    smp_64.c:827:18: warning: symbol 'smp_call_function_single_client' was not declared. Should it be static?
    smp_64.c:964:18: warning: symbol 'smp_new_mmu_context_version_client' was not declared. Should it be static?
    smp_64.c:1149:6: warning: symbol 'smp_capture' was not declared. Should it be static?
    smp_64.c:1171:6: warning: symbol 'smp_release' was not declared. Should it be static?
    smp_64.c:1190:18: warning: symbol 'smp_penguin_jailcell' was not declared. Should it be static?
    smp_64.c:1410:18: warning: symbol 'smp_receive_signal_client' was not declared. Should it be static?
Add prototypes in kernel.h or asm/smp_64.h as appropriate. Delete duplicate function kimage_addr_to_ra(), and adapt parameter to const void * to match the broader use. Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 May 2014, 1 commit
-
-
Committed by Kirill Tkhai
One more place where we must not be able to be preempted or interrupted in RT. Always actually disable interrupts during the synchronization cycle. Signed-off-by: Kirill Tkhai <tkhai@yandex.ru> Signed-off-by: David S. Miller <davem@davemloft.net>
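A hedged sketch of the pattern: disable interrupts around the whole tick-synchronization cycle instead of relying on preemption being off (the actual exchange logic is elided):

    unsigned long flags;

    local_irq_save(flags);
    /* ... exchange tick values with the other CPU ... */
    local_irq_restore(flags);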
-
- 11 March 2014, 1 commit
-
-
Committed by Bjorn Helgaas
Remove sparc64_multi_core because it's not used any more. It was added by a2f9f6bb ("Fix {mc,smt}_capable()"), and the last uses were removed by e637d96bf462 ("sched: Remove unused mc_capable() and smt_capable()"). Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: David S. Miller <davem@davemloft.net> Link: http://lkml.kernel.org/r/20140304210744.16893.75929.stgit@bhelgaas-glaptop.roam.corp.google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 05 January 2014, 1 commit
-
-
Committed by Kirill Tkhai
Most of the other architectures already use the suggested order, so let's do the same to fit the generic idle loop scheme better. Signed-off-by: Kirill Tkhai <tkhai@yandex.ru> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 November 2013, 1 commit
-
-
Committed by Kirill Tkhai
CONFIG_NO_HZ_FULL requires the possibility of calling smp_send_reschedule() for the calling CPU. Currently, it is used only in the inc_nr_running() scheduler primitive. Nobody calls smp_send_reschedule() from preemptible context (furthermore, it looks like it will be safe if anybody uses it another way in the future). But anyway I add WARN_ON() here just to return here if anything changes. Signed-off-by: Kirill Tkhai <tkhai@yandex.ru> CC: David Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 July 2013, 1 commit
-
-
Committed by Paul Gortmaker
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless. This removes all the arch/sparc uses of the __cpuinit macros from C files and removes __CPUINIT from assembly files. Note that even though arch/sparc/kernel/trampoline_64.S has instances of ".previous" in it, they are all paired off against explicit ".section" directives, and not implicitly paired with __CPUINIT (unlike mips and arm, where they were). [1] https://lkml.org/lkml/2013/5/20/589 Cc: "David S. Miller" <davem@davemloft.net> Cc: sparclinux@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 20 April 2013, 1 commit
-
-
Committed by David S. Miller
As reported by Dave Kleikamp, when we emit cross calls to do batched TLB flush processing we have a race because we do not synchronize on the sibling cpus completing the cross call. So meanwhile the TLB batch can be reset (tb->tlb_nr set to zero, etc.) and either flushes are missed or flushes will flush the wrong addresses. Fix this by using generic infrastructure to synchronize on the completion of the cross call. This first required getting the flush_tlb_pending() call out from switch_to(), which operates with locks held and interrupts disabled. The problem is that smp_call_function_many() cannot be invoked with IRQs disabled and this is explicitly checked for with WARN_ON_ONCE(). We get the batch processing outside of locked IRQ disabled sections by using some ideas from the powerpc port. Namely, we only batch inside of arch_{enter,leave}_lazy_mmu_mode() calls. If we're not in such a region, we flush TLBs synchronously.
1) Get rid of xcall_flush_tlb_pending and per-cpu type implementations.
2) Do TLB batch cross calls instead via: smp_call_function_many() -> tlb_pending_func() -> __flush_tlb_pending()
3) Batch only in lazy mmu sequences:
   a) Add 'active' member to struct tlb_batch
   b) Define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
   c) Set 'active' in arch_enter_lazy_mmu_mode()
   d) Run batch and clear 'active' in arch_leave_lazy_mmu_mode()
   e) Check 'active' in tlb_batch_add_one() and do a synchronous flush if it's clear.
4) Add infrastructure for synchronous TLB page flushes.
   a) Implement __flush_tlb_page and per-cpu variants, patch as needed.
   b) Likewise for xcall_flush_tlb_page.
   c) Implement smp_flush_tlb_page() to invoke the cross-call.
   d) Wire up global_flush_tlb_page() to the right routine based upon CONFIG_SMP.
5) It turns out that singleton batches are very common: 2 out of every 3 batch flushes have only a single entry in them. The batch flush waiting is very expensive, both because of the poll on sibling cpu completion, as well as because passing the tlb batch pointer to the sibling cpus invokes a shared memory dereference. Therefore, in flush_tlb_pending(), if there is only one entry in the batch, perform a completely asynchronous global_flush_tlb_page() instead.
Reported-by: Dave Kleikamp <dave.kleikamp@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Dave Kleikamp <dave.kleikamp@oracle.com>
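A hedged sketch of the batching window described in point 3 above (the hook names are the generic lazy-MMU ones; the sparc-specific bookkeeping is elided):

    arch_enter_lazy_mmu_mode();     /* marks the per-cpu tlb_batch as active */
    /* ... page-table updates queue their TLB flushes into the batch ... */
    arch_leave_lazy_mmu_mode();     /* runs the batched cross call, clears active */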
-
- 14 April 2013, 1 commit
-
-
Committed by Sam Ravnborg
Add generic cpu_idle support.
sparc32:
- replace the call to cpu_idle() with cpu_startup_entry()
- add arch_cpu_idle()
sparc64:
- smp_callin() now includes the cpu_startup_entry() call, so we can skip calling cpu_idle from assembler
- add arch_cpu_idle() and arch_cpu_idle_dead()
Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Reviewed-by: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com> Cc: torvalds@linux-foundation.org Cc: rusty@rustcorp.com.au Cc: paulmck@linux.vnet.ibm.com Cc: peterz@infradead.org Cc: magnus.damm@gmail.com Acked-by: David Miller <davem@davemloft.net> Link: http://lkml.kernel.org/r/20130411193850.GA2330@merkur.ravnborg.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
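A hedged sketch of the shape this gives the architecture code (the sparc-specific low-power handling is elided; the hotplug state name reflects that era of the generic idle loop):

    /* sketch: generic idle-loop hook; called with interrupts disabled and
     * expected to re-enable them */
    void arch_cpu_idle(void)
    {
            local_irq_enable();
            /* ... architecture low-power wait would go here ... */
    }

    /* sketch: a secondary CPU's callin path now ends in the generic loop */
    cpu_startup_entry(CPUHP_ONLINE);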
-
- 13 January 2013, 1 commit
-
-
Committed by Sam Ravnborg
__devinit, __devexit annotations are nops - so drop them. Likewise for __devexit_p. Adjusted alignment of arguments when needed. Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 January 2013, 1 commit
-
-
Committed by Greg Kroah-Hartman
CONFIG_HOTPLUG is going away as an option. As a result, the __dev* markings need to be removed. This change removes the use of __devinit, __devexit_p, __devinitdata, and __devexit from these drivers. Based on patches originally written by Bill Pemberton, but redone by me in order to handle some of the coding style issues better, by hand. Cc: Bill Pemberton <wfp5p@virginia.edu> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
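The mechanical effect on a driver is of this form (the function name is hypothetical, shown only for illustration):

    /* before */
    static int __devinit foo_probe(struct platform_device *op);
    /* after: no section annotation, the code is always kept */
    static int foo_probe(struct platform_device *op);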
-
- 17 October 2012, 1 commit
-
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 June 2012, 1 commit
-
-
Committed by Yong Zhang
ipi_call_lock/unlock() lock resp. unlock call_function.lock. This lock protects only the call_function data structure itself, but it's completely unrelated to cpu_online_mask. The mask to which the IPIs are sent is calculated before call_function.lock is taken in smp_call_function_many(), so the locking around set_cpu_online() is pointless and can be removed. Delay irq enable to after set_cpu_online(). [ tglx: Massaged changelog ] Signed-off-by: Yong Zhang <yong.zhang0@gmail.com> Cc: ralf@linux-mips.org Cc: sshtylyov@mvista.com Cc: david.daney@cavium.com Cc: nikunj@linux.vnet.ibm.com Cc: paulmck@linux.vnet.ibm.com Cc: axboe@kernel.dk Cc: peterz@infradead.org Cc: sparclinux@vger.kernel.org Link: http://lkml.kernel.org/r/20120529082732.GA4250@zhy Acked-by: "David S. Miller" <davem@davemloft.net> Acked-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
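A hedged sketch of the resulting ordering in the secondary-CPU callin path:

    /* sketch: publish the CPU as online first, only then allow interrupts */
    set_cpu_online(smp_processor_id(), true);
    local_irq_enable();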
-
- 26 April 2012, 2 commits
-
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Tested-by: David S. Miller <davem@davemloft.net> Link: http://lkml.kernel.org/r/20120420124558.055198736@linutronix.de
-
Committed by Thomas Gleixner
Preparatory patch to make the idle thread allocation for secondary cpus generic. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Mike Frysinger <vapier@gentoo.org> Cc: Jesper Nilsson <jesper.nilsson@axis.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Howells <dhowells@redhat.com> Cc: James E.J. Bottomley <jejb@parisc-linux.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: David S. Miller <davem@davemloft.net> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Richard Weinberger <richard@nod.at> Cc: x86@kernel.org Link: http://lkml.kernel.org/r/20120420124556.964170564@linutronix.de
-
- 02 December 2011, 1 commit
-
-
Committed by Justin P. Mattock
The below patch fixes some typos in various parts of the kernel, as well as fixes some comments. Please let me know if I missed anything, and I will try to get it changed and resent. Signed-off-by: Justin P. Mattock <justinmattock@gmail.com> Acked-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 01 November 2011, 1 commit
-
-
Committed by Paul Gortmaker
Many of the core sparc kernel files are not modules, but just including module.h for exporting symbols. Now these files can use the lighter footprint export.h for this role. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
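For files that only export symbols, the change is simply this include swap (illustrative):

    /* before */
    #include <linux/module.h>
    /* after: EXPORT_SYMBOL() and friends come from the lighter header */
    #include <linux/export.h>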
-
- 27 July 2011, 1 commit
-
-
Committed by Arun Sharma
This allows us to move duplicated code in <asm/atomic.h> (atomic_inc_not_zero() for now) to <linux/atomic.h>. Signed-off-by: Arun Sharma <asharma@fb.com> Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: David Miller <davem@davemloft.net> Cc: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Mike Frysinger <vapier@gentoo.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
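A simplified sketch of the kind of duplicated helper that can now live in the shared header instead of being repeated per architecture (expressed in terms of atomic_add_unless()):

    #include <linux/atomic.h>

    /* sketch: one shared definition instead of one copy per <asm/atomic.h> */
    #define atomic_inc_not_zero(v)  atomic_add_unless((v), 1, 0)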
-
- 17 May 2011, 1 commit
-
-
Committed by KOSAKI Motohiro
Adapt to the new API. Almost every change is trivial; the most important change is to remove '='-operator assignments like the following:
    cpumask_t cpu_mask = *mm_cpumask(mm);
    cpus_allowed = current->cpus_allowed;
because cpumask_var_t is not safe to assign with the '=' operator. Such usage might prevent kernel core improvements. No functional change. Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
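A hedged sketch of the conversion pattern for the assignment case mentioned above:

    /* before: copy by value with the '=' operator (unsafe for cpumask_var_t) */
    cpumask_t cpu_mask = *mm_cpumask(mm);

    /* after: explicit copy that still works once the type becomes cpumask_var_t */
    cpumask_copy(&cpu_mask, mm_cpumask(mm));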
-
- 14 April 2011, 1 commit
-
-
Committed by Peter Zijlstra
For future rework of try_to_wake_up() we'd like to push part of that function onto the CPU the task is actually going to run on. In order to do so we need a generic callback from the existing scheduler IPI. This patch introduces such a generic callback, scheduler_ipi(), and implements it as a NOP. BenH notes: PowerPC might use this IPI on offline CPUs under rare conditions! Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Acked-by: Chris Metcalf <cmetcalf@tilera.com> Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Reviewed-by: Frank Rowand <frank.rowand@am.sony.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110405152728.744338123@chello.nl
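A hedged sketch of how an architecture's resched-IPI handler hooks into the new callback (handler name is assumed for illustration, not taken from the patch):

    /* sketch: the low-level IPI handler just forwards to the generic hook */
    void smp_resched_interrupt(void)
    {
            scheduler_ipi();        /* a NOP at this point in the series */
    }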
-
- 17 March 2011, 1 commit
-
-
Committed by David S. Miller
Most of the warnings emitted (we fail arch/sparc file builds with -Werror) were legitimate but harmless, however one case (n2_pcr_write) was a genuine bug. Based almost entirely upon a patch by Sam Ravnborg. Reported-by: Dennis Gilmore <dennis@ausil.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 February 2011, 1 commit
-
-
Committed by David S. Miller
Doing NMI startup as an early initcall doesn't work because we need to have SMP started up by then. So we'd only NMI startup one cpu, which causes perf PMU grab to BUG because the nmi_active count isn't what it's supposed to be. This also points out that we don't have proper CPU up/down notifiers for the NMI code, which will need to be fixed at some point. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 April 2010, 1 commit
-
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 March 2010, 1 commit
-
-
Committed by Tejun Heo
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h.
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies. The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion: http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.
The conversion was done in the following steps:
1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.
2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).
   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.
Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch. Signed-off-by: Tejun Heo <tj@kernel.org> Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
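After the sweep, a typical .c file that allocates memory states its dependencies directly instead of inheriting them through percpu.h (illustrative):

    #include <linux/slab.h>     /* kmalloc(), kfree() */
    #include <linux/gfp.h>      /* GFP_KERNEL and friends */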
-