- 06 Jul 2015, 1 commit
-
-
By Paul Gortmaker
Modular users will always be users of init functionality, but users of init functionality are not necessarily always modules. Hence any functionality like module_init and module_exit would be more at home in the module.h file. And module.h should explicitly include init.h to make the dependency clear. We've already done all the legwork needed to ensure that this move does not cause any build regressions due to implicit header file include assumptions about where module_init lives.

Cc: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
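For context, a minimal sketch of the pattern this serves: a module that only needs module_init/module_exit gets them (and, via the explicit include, init.h) from <linux/module.h>. The driver name and functions below are illustrative, not from the commit.

```c
#include <linux/module.h>   /* provides module_init/module_exit, now explicitly pulls in init.h */

static int __init demo_init(void)
{
    pr_info("demo: loaded\n");
    return 0;
}

static void __exit demo_exit(void)
{
    pr_info("demo: unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```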
-
- 28 Jun 2015, 1 commit
-
-
By Rusty Russell
As Dan Streetman points out, the entire point of this locking is to stop sysfs accesses, so the locks are elided entirely in the !SYSFS case.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 23 Jun 2015, 1 commit
-
-
By Dan Streetman
Add a "param_lock" mutex to each module, and update params.c to use the correct built-in or module mutex while locking kernel params. Remove the kparam_block_sysfs_r/w() macros, replace them with direct calls to kernel_param_[un]lock(module). The kernel param code currently uses a single mutex to protect modification of any and all kernel params. While this generally works, there is one specific problem with it; a module callback function cannot safely load another module, i.e. with request_module() or even with indirect calls such as crypto_has_alg(). If the module to be loaded has any of its params configured (e.g. with a /etc/modprobe.d/* config file), then the attempt will result in a deadlock between the first module param callback waiting for modprobe, and modprobe trying to lock the single kernel param mutex to set the new module's param. This fixes that by using per-module mutexes, so that each individual module is protected against concurrent changes in its own kernel params, but is not blocked by changes to other module params. All built-in modules continue to use the built-in mutex, since they will always be loaded at runtime and references (e.g. request_module(), crypto_has_alg()) to them will never cause load-time param changing. This also simplifies the interface used by modules to block sysfs access to their params; while there are currently functions to block and unblock sysfs param access which are split up by read and write and expect a single kernel param to be passed, their actual operation is identical and applies to all params, not just the one passed to them; they simply lock and unlock the global param mutex. They are replaced with direct calls to kernel_param_[un]lock(THIS_MODULE), which locks THIS_MODULE's param_lock, or if the module is built-in, it locks the built-in mutex. Suggested-by: NRusty Russell <rusty@rustcorp.com.au> Signed-off-by: NDan Streetman <ddstreet@ieee.org> Signed-off-by: NRusty Russell <rusty@rustcorp.com.au>
-
- 28 May 2015, 3 commits
-
-
By Peter Zijlstra
Andrew worried about the overhead on small systems; only use the fancy code when either perf or tracing is enabled.

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Requested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
By Peter Zijlstra
Currently __module_address() is using a linear search through all modules in order to find the module corresponding to the provided address. With a lot of modules this can take a lot of time.

One of the users of this is kernel_text_address(), which is employed in many stack unwinders, which in turn are used by perf-callchain and ftrace (possibly from NMI context). So by optimizing __module_address() we optimize many stack unwinders which are used by both perf and tracing in performance sensitive code.

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
By Peter Zijlstra
Currently the RCU usage in module is an inconsistent mess of RCU and RCU-sched; this is broken for CONFIG_PREEMPT where synchronize_rcu() does not imply synchronize_sched(). Most usage sites use preempt_{dis,en}able() which is RCU-sched, but (most of) the modification sites use synchronize_rcu(). With the exception of the module bug list, which actually uses RCU.

Convert everything over to RCU-sched. Furthermore add lockdep asserts to all sites, because it's not at all clear to me the required locking is observed, esp. on exported functions.

Cc: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
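For context, a minimal sketch of the RCU-sched discipline the commit converges on: readers disable preemption, updaters wait with synchronize_sched(). The list and node names are illustrative, not the actual module-list code.

```c
#include <linux/preempt.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_node {
    struct list_head entry;
    int value;
};

static LIST_HEAD(demo_list);

static bool demo_find(int value)
{
    struct demo_node *n;
    bool found = false;

    preempt_disable();                  /* RCU-sched read-side critical section */
    list_for_each_entry_rcu(n, &demo_list, entry)
        if (n->value == value)
            found = true;
    preempt_enable();
    return found;
}

static void demo_remove(struct demo_node *n)
{
    list_del_rcu(&n->entry);
    synchronize_sched();                /* waits for all preempt-disabled readers */
    kfree(n);
}
```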
-
- 25 May 2015, 1 commit
-
-
By Dmitry Torokhov
Commit f2411da7 ("driver-core: add driver module asynchronous probe support") broke build in case modules are disabled, because in this case "struct module" is not defined and we can't dereference it. Let's define module_requested_async_probing() helper and stub it out if modules are disabled. Reported-by: Nkbuild test robot <fengguang.wu@intel.com> Reported-by: NStephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: NDmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 20 May 2015, 1 commit
-
-
By Luis R. Rodriguez
Some init systems may wish to express the desire to have device drivers run their probe() code asynchronously. This implements support for this and allows userspace to request async probe as a preference through a generic shared device driver module parameter, async_probe.

Async probe is implemented through a module parameter because synchronous probe has been prevalent for years, so some userspace may exist which relies on the fact that the device driver will probe synchronously and on the assumption that the devices it provides will be immediately available afterwards.

Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 14 May 2015, 1 commit
-
-
By Steven Rostedt (Red Hat)
The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The structures ftrace_event_call and ftrace_event_class have nothing to do with the function hooks, and are really trace_event structures. Rename ftrace_event_* to trace_event_*. Signed-off-by: NSteven Rostedt <rostedt@goodmis.org>
-
- 23 Apr 2015, 1 commit
-
-
By Herbert Xu
Currently we're hiding mod->sig_ok under an ifdef in open code. This patch adds a module_sig_ok accessor function and removes that ifdef.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
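A sketch of the accessor shape the message describes, assuming sig_ok only exists when CONFIG_MODULE_SIG is set; the real definition lives in linux/module.h and may differ in detail.

```c
#ifdef CONFIG_MODULE_SIG
static inline bool module_sig_ok(struct module *module)
{
    return module->sig_ok;
}
#else
static inline bool module_sig_ok(struct module *module)
{
    return true;    /* without module signing, every module counts as ok */
}
#endif
```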
-
- 08 Apr 2015, 1 commit
-
-
By Steven Rostedt (Red Hat)
Update the infrastructure such that modules that declare TRACE_DEFINE_ENUM() will have those enums converted into their values in the tracepoint print fmt strings.

Link: http://lkml.kernel.org/r/87vbhjp74q.fsf@rustcorp.com.au
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
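For illustration, a hedged sketch of how a module's trace header typically declares an enum so its value can be resolved in the print fmt; the enum and names below are made up, not from the commit.

```c
#include <linux/tracepoint.h>

enum demo_state {
    DEMO_IDLE,
    DEMO_RUNNING,
};

/* Make the enum value known to the tracing core so print fmts can use it. */
TRACE_DEFINE_ENUM(DEMO_RUNNING);
```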
-
- 17 Mar 2015, 1 commit
-
-
By Petr Mladek
There is a notifier that handles live patches for coming and going modules. It takes klp_mutex lock to avoid races with coming and going patches but it does not keep the lock all the time. Therefore the following races are possible:

  1. The notifier is called sometime in STATE_MODULE_COMING. The module is
     visible by find_module() in this state all the time. It means that a new
     patch can be registered and enabled even before the notifier is called.
     It might create a wrong order of stacked patches, see below for an example.

  2. A new patch could still see the module in the GOING state even after the
     notifier has been called. It will try to initialize the related object
     structures but the module could disappear at any time. The structures
     would be left in a mess, and it might even cause an invalid memory access.

This patch solves the problem by adding a boolean variable into struct module. The value is true after the coming handler and before the going handler is called. New patches need to be applied when the value is true and they need to ignore the module when the value is false.

Note that we need to know the state of all modules on the system. The races are related to new patches, therefore we do not know which modules will get patched.

Also note that we could not simply ignore going modules. The code from the module could be called even in the GOING state until mod->exit() finishes. If we start supporting patches with semantic changes between function calls, we need to apply new patches to any still usable code. See below for an example.

Finally note that the patch solves only the situation when a new patch is registered. There are no such problems when the patch is being removed. It does not matter who disables the patch first, whether the normal disable_patch() or the module notifier. There is nothing to do once the patch is disabled.

Alternative solutions:
======================

  + reject new patches when a patched module is coming or going; this is ugly

  + wait with adding a new patch until the module leaves the COMING and GOING
    states; this might be dangerous and complicated; we would need to release
    kgr_lock in the middle of the patch registration to avoid a deadlock with
    the coming and going handlers; also we might need a waitqueue for each
    module, which seems to be even bigger overhead than the boolean

  + stop modules from entering COMING and GOING states; wait until modules
    leave these states when they are already there; looks complicated; we
    would need to ignore the module that asked to stop the others to avoid a
    deadlock; also it is unclear what to do when two modules asked to stop
    others and both are in COMING state (situation when two new patches are
    applied)

  + always register/enable new patches and fix up the potential mess
    (registered patches order) in klp_module_init(); this is nasty and prone
    to regressions in future development

  + add another MODULE_STATE where the kallsyms are visible but the module is
    not used yet; this looks too complex; the module states are checked in
    "many" locations

Example of patch stacking breakage:
===================================

The notifier could _not_ _simply_ ignore already initialized module objects. For example, let's have three patches (P1, P2, P3) for functions a() and b() where a() is from vmcore and b() is from a module M. Something like:

        a()     b()
  P1    a1()    b1()
  P2    a2()    b2()
  P3    a3()    b3()

If you load the module M after all patches are registered and enabled, the ftrace ops for functions a() and b() have the functions listed in this order:

  ops_a->func_stack -> list(a3,a2,a1)
  ops_b->func_stack -> list(b3,b2,b1)

so the pointer to b3() is the first and will be used.

Then you might have the following scenario. Let's start with the state when patches P1 and P2 are registered and enabled but the module M is not loaded. Then ftrace ops for b() does not exist. Then we get into the following race:

  CPU0                                  CPU1

  load_module(M)

    complete_formation()

    mod->state = MODULE_STATE_COMING;
    mutex_unlock(&module_mutex);

                                        klp_register_patch(P3);
                                        klp_enable_patch(P3);

                                        # STATE 1

    klp_module_notify(M)
      klp_module_notify_coming(P1);
      klp_module_notify_coming(P2);
      klp_module_notify_coming(P3);

                                        # STATE 2

The ftrace ops for a() and b() then look like:

  STATE 1:
    ops_a->func_stack -> list(a3,a2,a1);
    ops_b->func_stack -> list(b3);

  STATE 2:
    ops_a->func_stack -> list(a3,a2,a1);
    ops_b->func_stack -> list(b2,b1,b3);

Therefore b2() is used for the module but a3() is used for vmcore, because they were the last added.

Example of the race with going modules:
=======================================

  CPU0                                  CPU1

  delete_module()  #SYSCALL

    try_stop_module()
    mod->state = MODULE_STATE_GOING;

    mutex_unlock(&module_mutex);

                                        klp_register_patch()
                                        klp_enable_patch()

                                        # save place to switch universe

                                        b()     # from module that is going
                                          a()   # from core (patched)

    mod->exit();

Note that the function b() can be called until we call mod->exit(). If we do not apply the patch against b() because it is in MODULE_STATE_GOING, it will call patched a() with modified semantics and things might get wrong.

[jpoimboe@redhat.com: use one boolean instead of two]

Signed-off-by: Petr Mladek <pmladek@suse.cz>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 14 Feb 2015, 1 commit
-
-
By Andrey Ryabinin
The MODULE_DEVICE_TABLE() macro is used to create aliases to device tables. Normally an alias should have the same type as the aliased symbol. Device tables are arrays, so they have 'struct type##_device_id[x]' types. The alias created by MODULE_DEVICE_TABLE() has a non-array type, 'struct type##_device_id'.

This inconsistency confuses the compiler: it could make a wrong assumption about the variable's size, which leads KASan to produce a false positive report about an out of bounds access.

For every global variable the compiler calls __asan_register_globals(), passing information about the variable (address, size, size with redzone, name, ...). __asan_register_globals() poisons the symbol's redzone to detect possible out of bounds accesses. When a symbol has an alias, __asan_register_globals() will be called for the symbol as well as for the alias. The compiler determines the size of a variable by the size of the variable's type. Alias and symbol have the same address, so if the alias has the wrong size, part of the memory that actually belongs to the symbol could be poisoned as the redzone of the alias symbol.

By fixing the type of the alias symbol we fix its size, so __asan_register_globals() will not poison valid memory.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
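For reference, a sketch of the kind of declaration the macro is applied to. The device IDs below are example values; the point is only that the table symbol is an array, so the generated alias must have an array type too.

```c
#include <linux/module.h>
#include <linux/pci.h>

/* Illustrative table: an array of struct pci_device_id. */
static const struct pci_device_id demo_pci_ids[] = {
    { PCI_DEVICE(0x1af4, 0x1000) },   /* example vendor/device pair */
    { }                               /* terminating entry */
};
MODULE_DEVICE_TABLE(pci, demo_pci_ids);
```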
-
- 22 Jan 2015, 1 commit
-
-
By Rusty Russell
James Bottomley points out that it will be -1 during unload. It's only used for diagnostics, so let's not hide that, as it could be a clue as to what's gone wrong.

Cc: Jason Wessel <jason.wessel@windriver.com>
Acked-and-documentation-added-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Reviewed-by: Masami Hiramatsu <maasami.hiramatsu.pt@hitachi.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 11 Nov 2014, 1 commit
-
-
By Masami Hiramatsu
Replace module_ref per-cpu complex reference counter with an atomic_t simple refcnt. This is for code simplification.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 27 Jul 2014, 2 commits
-
-
By Petr Mladek
The within_module*() functions return only true or false. Let's use bool as the return type. Note that it should not change kABI because these are inline functions.

Signed-off-by: Petr Mladek <pmladek@suse.cz>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
By Petr Mladek
It is just a small optimization that allows us to replace a few occurrences of within_module_init() || within_module_core() with a single call.

Signed-off-by: Petr Mladek <pmladek@suse.cz>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
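A minimal sketch of what such a combined helper presumably looks like, built on the existing init/core checks; the exact kernel definition may differ slightly.

```c
/* Sketch: one call instead of the open-coded pair of checks. */
static inline bool within_module(unsigned long addr, const struct module *mod)
{
    return within_module_init(addr, mod) || within_module_core(addr, mod);
}
```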
-
- 13 Mar 2014, 2 commits
-
-
By Rusty Russell
MODULE_DEVICE_TABLE() calls MODULE_GENERIC_TABLE(); make it do the work directly. This also removes a wart introduced in the last patch, where the alias is defined to be an unknown struct type "struct type##__##name##_device_id" instead of "struct type##_device_id" (it's an extern so GCC doesn't care, but it's wrong). The other user of MODULE_GENERIC_TABLE (ISAPNP_CARD_TABLE) is unused, so delete it.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
By Tom Gundersen
Commit 78551277: "Input: i8042 - add PNP modaliases" had a bug, where the second call to MODULE_DEVICE_TABLE() overrode the first resulting in not all the modaliases being exposed. This fixes the problem by including the name of the device_id table in the __mod_*_device_table alias, allowing us to export several device_id tables per module. Suggested-by: NKay Sievers <kay@vrfy.org> Acked-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: NTom Gundersen <teg@jklm.no> Signed-off-by: NRusty Russell <rusty@rustcorp.com.au>
-
- 07 Mar 2014, 1 commit
-
-
By Steven Rostedt (Red Hat)
There's nothing in the module.h header that requires tracepoint.h to be included, and there may be cases that tracepoint.h may need to include module.h, which will cause recursive header issues. But module.h requires seeing HAVE_JUMP_LABEL which is set in jump_label.h which it just coincidentally gets from tracepoint.h.

Link: http://lkml.kernel.org/r/20140307084712.5c68641a@gandalf.local.home
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 16 Jan 2014, 1 commit
-
-
By Seunghun Lee
Fix coding style of module.h.

Signed-off-by: Seunghun Lee <waydi1@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 04 Dec 2013, 1 commit
-
-
By Joe Perches
[All 8 callers already have semicolons. -- RR]

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 23 Sep 2013, 1 commit
-
-
By Rusty Russell
The option to wait for a module reference count to reach zero was in the initial module implementation, but it was never supported in modprobe (you had to use rmmod --wait). After discussion with Lucas, it has been deprecated (with a 10 second sleep) in kmod for the last year. This finally removes it: the flag will evoke a printk warning and a normal (non-blocking) remove attempt.

Cc: Lucas De Marchi <lucas.de.marchi@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 03 Sep 2013, 1 commit
-
-
By Li Zhong
DEBUG_KOBJECT_RELEASE helps to find the issue attached below. After some investigation, it seems the reason is:

The mod->mkobj.kobj (ffffffffa01600d0 below) is freed together with mod itself in free_module(). However, its children still hold references to it, due to the delay caused by DEBUG_KOBJECT_RELEASE. So when the child ('holders' below) tries to decrease the reference count to its parent in kobject_del(), the BUG happens as it tries to access already freed memory.

This patch tries to fix it by waiting for the mod->mkobj.kobj to be really released in the module removing process (and some error code paths).

[ 1844.175287] kobject: 'holders' (ffff88007c1f1600): kobject_release, parent ffffffffa01600d0 (delayed)
[ 1844.178991] kobject: 'notes' (ffff8800370b2a00): kobject_release, parent ffffffffa01600d0 (delayed)
[ 1845.180118] kobject: 'holders' (ffff88007c1f1600): kobject_cleanup, parent ffffffffa01600d0
[ 1845.182130] kobject: 'holders' (ffff88007c1f1600): auto cleanup kobject_del
[ 1845.184120] BUG: unable to handle kernel paging request at ffffffffa01601d0
[ 1845.185026] IP: [<ffffffff812cda81>] kobject_put+0x11/0x60
[ 1845.185026] PGD 1a13067 PUD 1a14063 PMD 7bd30067 PTE 0
[ 1845.185026] Oops: 0000 [#1] PREEMPT
[ 1845.185026] Modules linked in: xfs libcrc32c [last unloaded: kprobe_example]
[ 1845.185026] CPU: 0 PID: 18 Comm: kworker/0:1 Tainted: G O 3.11.0-rc6-next-20130819+ #1
[ 1845.185026] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
[ 1845.185026] Workqueue: events kobject_delayed_cleanup
[ 1845.185026] task: ffff88007ca51f00 ti: ffff88007ca5c000 task.ti: ffff88007ca5c000
[ 1845.185026] RIP: 0010:[<ffffffff812cda81>] [<ffffffff812cda81>] kobject_put+0x11/0x60
[ 1845.185026] RSP: 0018:ffff88007ca5dd08 EFLAGS: 00010282
[ 1845.185026] RAX: 0000000000002000 RBX: ffffffffa01600d0 RCX: ffffffff8177d638
[ 1845.185026] RDX: ffff88007ca5dc18 RSI: 0000000000000000 RDI: ffffffffa01600d0
[ 1845.185026] RBP: ffff88007ca5dd18 R08: ffffffff824e9810 R09: ffffffffffffffff
[ 1845.185026] R10: ffff8800ffffffff R11: dead4ead00000001 R12: ffffffff81a95040
[ 1845.185026] R13: ffff88007b27a960 R14: ffff88007c1f1600 R15: 0000000000000000
[ 1845.185026] FS: 0000000000000000(0000) GS:ffffffff81a23000(0000) knlGS:0000000000000000
[ 1845.185026] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 1845.185026] CR2: ffffffffa01601d0 CR3: 0000000037207000 CR4: 00000000000006b0
[ 1845.185026] Stack:
[ 1845.185026]  ffff88007c1f1600 ffff88007c1f1600 ffff88007ca5dd38 ffffffff812cdb7e
[ 1845.185026]  0000000000000000 ffff88007c1f1640 ffff88007ca5dd68 ffffffff812cdbfe
[ 1845.185026]  ffff88007c974800 ffff88007c1f1640 ffff88007ff61a00 0000000000000000
[ 1845.185026] Call Trace:
[ 1845.185026]  [<ffffffff812cdb7e>] kobject_del+0x2e/0x40
[ 1845.185026]  [<ffffffff812cdbfe>] kobject_delayed_cleanup+0x6e/0x1d0
[ 1845.185026]  [<ffffffff81063a45>] process_one_work+0x1e5/0x670
[ 1845.185026]  [<ffffffff810639e3>] ? process_one_work+0x183/0x670
[ 1845.185026]  [<ffffffff810642b3>] worker_thread+0x113/0x370
[ 1845.185026]  [<ffffffff810641a0>] ? rescuer_thread+0x290/0x290
[ 1845.185026]  [<ffffffff8106bfba>] kthread+0xda/0xe0
[ 1845.185026]  [<ffffffff814ff0f0>] ? _raw_spin_unlock_irq+0x30/0x60
[ 1845.185026]  [<ffffffff8106bee0>] ? kthread_create_on_node+0x130/0x130
[ 1845.185026]  [<ffffffff8150751a>] ret_from_fork+0x7a/0xb0
[ 1845.185026]  [<ffffffff8106bee0>] ? kthread_create_on_node+0x130/0x130
[ 1845.185026] Code: 81 48 c7 c7 28 95 ad 81 31 c0 e8 9b da 01 00 e9 4f ff ff ff 66 0f 1f 44 00 00 55 48 89 e5 53 48 89 fb 48 83 ec 08 48 85 ff 74 1d <f6> 87 00 01 00 00 01 74 1e 48 8d 7b 38 83 6b 38 01 0f 94 c0 84
[ 1845.185026] RIP [<ffffffff812cda81>] kobject_put+0x11/0x60
[ 1845.185026] RSP <ffff88007ca5dd08>
[ 1845.185026] CR2: ffffffffa01601d0
[ 1845.185026] ---[ end trace 49a70afd109f5653 ]---

Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 20 Aug 2013, 1 commit
-
-
By Andreas Robinson
Additional and optional dependencies that are not found while building the kernel and modules can now be declared explicitly.

Signed-off-by: Andreas Robinson <andr345@gmail.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
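Assuming this is the soft-dependency mechanism (a MODULE_SOFTDEP()-style tag consumed by modprobe), a hedged usage sketch looks like the following; the dependency name is purely illustrative.

```c
#include <linux/module.h>

/* Declare an optional dependency the build system cannot discover:
 * ask modprobe to load crc32c before this module if it is available. */
MODULE_SOFTDEP("pre: crc32c");
```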
-
- 01 Aug 2013, 1 commit
-
-
By Andreas Robinson
Additional and optional dependencies that are not found while building the kernel and modules can now be declared explicitly.

Signed-off-by: Andreas Robinson <andr345@gmail.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 15 Mar 2013, 1 commit
-
-
By Rusty Russell
We have CONFIG_SYMBOL_PREFIX, which three archs define to the string "_". But Al Viro broke this in "consolidate cond_syscall and SYSCALL_ALIAS declarations" (in linux-next), and he's not the first to do so.

Using CONFIG_SYMBOL_PREFIX is awkward, since we usually just want to prefix it to something. So various places define helpers which are defined to nothing if CONFIG_SYMBOL_PREFIX isn't set:

1) include/asm-generic/unistd.h defines __SYMBOL_PREFIX.
2) include/asm-generic/vmlinux.lds.h defines VMLINUX_SYMBOL(sym)
3) include/linux/export.h defines MODULE_SYMBOL_PREFIX.
4) include/linux/kernel.h defines SYMBOL_PREFIX (which differs from #7)
5) kernel/modsign_certificate.S defines ASM_SYMBOL(sym)
6) scripts/modpost.c defines MODULE_SYMBOL_PREFIX
7) scripts/Makefile.lib defines SYMBOL_PREFIX on the commandline if CONFIG_SYMBOL_PREFIX is set, so that we have a non-string version for pasting.

(arch/h8300/include/asm/linkage.h defines SYMBOL_NAME(), too.)

Let's solve this properly:

1) No more generic prefix, just CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX.
2) Make linux/export.h usable from asm.
3) Define VMLINUX_SYMBOL() and VMLINUX_SYMBOL_STR().
4) Make everyone use them.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Tested-by: James Hogan <james.hogan@imgtec.com> (metag)
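A hedged sketch of the macro shapes this implies (the exact definitions live in linux/export.h and may differ in detail); the extra indirection layer exists so arguments get macro-expanded before pasting or stringifying.

```c
/* Sketch of the prefix helpers, assuming only the underscore-prefix case. */
#ifdef CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX
#define __VMLINUX_SYMBOL(x)      _##x
#define __VMLINUX_SYMBOL_STR(x)  "_" #x
#else
#define __VMLINUX_SYMBOL(x)      x
#define __VMLINUX_SYMBOL_STR(x)  #x
#endif

/* Indirect so that macro arguments are expanded before ## and #. */
#define VMLINUX_SYMBOL(x)        __VMLINUX_SYMBOL(x)
#define VMLINUX_SYMBOL_STR(x)    __VMLINUX_SYMBOL_STR(x)
```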
-
- 21 Jan 2013, 1 commit
-
-
By Sasha Levin
These helper functions just check a set intersection with a range, and don't actually modify struct module.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 12 Jan 2013, 1 commit
-
-
By Rusty Russell
You should never look at such a module, so it's excised from all paths which traverse the modules list. We add the state at the end, to avoid gratuitous ABI break (ksplice).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 10 Oct 2012, 1 commit
-
-
By Rusty Russell
We do a very simple search for a particular string appended to the module (which is cache-hot and about to be SHA'd anyway). There's both a config option and a boot parameter which control whether we accept or fail with unsigned modules and modules that are signed with an unknown key.

If module signing is enabled, the kernel will be tainted if a module is loaded that is unsigned or has a signature for which we don't have the key.

(Useful feedback and tweaks by David Howells <dhowells@redhat.com>)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 26 Mar 2012, 1 commit
-
-
By Steven Rostedt
With the preempt, tracepoint and everything, it's getting a bit chubby. For an Ubuntu-based config:

Before:
  $ size -t `find * -name '*.ko'` | grep TOTAL
  56199906   3870760   1606616   61677282   3ad1ee2   (TOTALS)
  $ size vmlinux
     text      data       bss       dec       hex   filename
  8509342    850368   3358720  12718430    c2115e   vmlinux

After:
  $ size -t `find * -name '*.ko'` | grep TOTAL
  56183760   3867892   1606616   61658268   3acd49c   (TOTALS)
  $ size vmlinux
     text      data       bss       dec       hex   filename
  8501842    849088   3358720  12709650    c1ef12   vmlinux

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (made all out-of-line)
-
- 13 Jan 2012, 1 commit
-
-
By Eric Dumazet
module_ref contains two "unsigned int" fields. That's now too small, since some machines can open more than 2^32 files. See commit 518de9b3 ("fs: allow for more than 2^31 files") for reference.

We can add an aligned(2 * sizeof(unsigned long)) attribute to force alloc_percpu() to allocate module_ref areas in single cache lines.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Tejun Heo <tj@kernel.org>
CC: Robin Holt <holt@sgi.com>
CC: David Miller <davem@davemloft.net>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
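A sketch of the resulting layout under the assumptions stated in the message; the field names incs/decs are assumed here for illustration.

```c
/* Sketch: widened per-cpu reference counters, aligned so that both
 * counters of one module stay within a single cache line allocation. */
struct module_ref {
    unsigned long incs;
    unsigned long decs;
} __attribute__((aligned(2 * sizeof(unsigned long))));
```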
-
- 31 Oct 2011, 2 commits
-
-
By Paul Gortmaker
There are files which use module_param and MODULE_PARM_DESC back to back. They only include moduleparam.h which makes sense, but the implicit presence of module.h everywhere hid the fact that MODULE_PARM_DESC wasn't in moduleparam.h at all. Relocate the macro to moduleparam.h so that the moduleparam infrastructure can be used independently of module.h.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
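A minimal sketch of the back-to-back usage the commit is about, which after this change needs only moduleparam.h; the parameter is illustrative.

```c
#include <linux/moduleparam.h>  /* now also provides MODULE_PARM_DESC */

static bool verbose;
module_param(verbose, bool, 0644);
MODULE_PARM_DESC(verbose, "Enable verbose logging");
```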
-
By Paul Gortmaker
A lot of files pull in module.h when all they are really looking for is the basic EXPORT_SYMBOL functionality. The recent data from Ingo [1] shows that this is one of several instances that has a significant impact on compile times, and it should be targeted for factoring out (as done here).

Note that several commonly used header files in include/* directly include <linux/module.h> themselves (some 34 of them!). The most commonly used ones of these will have to be made independent of module.h before the full benefit of this change can be realized.

We also transition THIS_MODULE from module.h to export.h, since there are lots of files with subsystem structs that in turn will have a struct module *owner and only be doing: .owner = THIS_MODULE; and absolutely nothing else modular. So, we also want to have the THIS_MODULE definition present in the lightweight header.

[1] https://lkml.org/lkml/2011/5/23/76

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
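A sketch of the lighter-weight pattern this enables; the exported helper is illustrative.

```c
#include <linux/export.h>   /* EXPORT_SYMBOL and THIS_MODULE, without all of module.h */

int demo_compute(int x)
{
    return x * 2;
}
EXPORT_SYMBOL(demo_compute);
```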
-
- 11 Aug 2011, 1 commit
-
-
By Mathieu Desnoyers
Copy the information needed from struct module into a local module list held within tracepoint.c from within the module coming/going notifier.

This vastly simplifies locking of tracepoint registration / unregistration, because we don't have to take the module mutex to register and unregister tracepoints anymore. Steven Rostedt ran into dependency problems related to modules mutex vs kprobes mutex vs ftrace mutex vs tracepoint mutex that seems to be hard to fix without removing this dependency between tracepoint and module mutex. (note: it should be investigated whether kprobes could benefit of being dissociated from the modules mutex too.)

This also fixes module handling of tracepoint list iterators, because it was expecting the list to be sorted by pointer address. Given we have control on our own list now, it's OK to sort this list which has tracepoints as its only purpose. The reason why this sorting is required is to handle the fact that seq files (and any read() operation from user-space) cannot hold the tracepoint mutex across multiple calls, so list entries may vanish between calls. With sorting, the tracepoint iterator becomes usable even if the list don't contain the exact item pointed to by the iterator anymore.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Jason Baron <jbaron@redhat.com>
CC: Ingo Molnar <mingo@elte.hu>
CC: Lai Jiangshan <laijs@cn.fujitsu.com>
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Link: http://lkml.kernel.org/r/20110810191839.GC8525@Krystal
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 24 Jul 2011, 2 commits
-
-
By Kay Sievers
Userspace wants to manage module parameters with udev rules. This currently only works for loaded modules, but not for built-in ones. To allow access to the built-in modules we need to re-trigger all module load events that happened before any userspace was running. We already do the same thing for all devices, subsystems (buses) and drivers.

This adds the currently missing /sys/module/<name>/uevent files to all module entries.

Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (split & trivial fix)
-
By Kay Sievers
This simplifies the next patch, where we have an attribute on a builtin module (ie. module == NULL).

Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (split into 2)
-
- 19 May 2011, 3 commits
-
-
By Alessio Igor Bogani
This patch places every exported symbol in its own section (i.e. "___ksymtab+printk"). Thus the linker will use its SORT() directive to sort and finally merge all symbols into the right and final section (i.e. "__ksymtab").

The symbol-prefixed archs use an underscore as prefix for symbols. To avoid collisions we use a different character to create the temporary section names.

This work was supported by a hardware donation from the CE Linux Forum.

Signed-off-by: Alessio Igor Bogani <abogani@kernel.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (folded in '+' fixup)
Tested-by: Dirk Behme <dirk.behme@googlemail.com>
-
By Rusty Russell
Instead of having a callback function for each symbol in the kernel, have a callback for each array of symbols. This eases the logic when we move to sorted symbols and binary search.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Alessio Igor Bogani <abogani@kernel.org>
-
By Richard Kennedy
Reorder struct module to remove 24 bytes of alignment padding on 64 bit builds when the CONFIG_TRACE options are selected. This allows the structure to fit into one fewer cache lines, and its size drops from 592 to 568 on x86_64.

Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
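As a generic illustration of the padding effect described (this is not the actual struct module layout): on a 64-bit build, grouping same-size members avoids alignment holes.

```c
/* A 4-byte int followed by an 8-byte pointer forces padding:
 * sizeof(struct padded) == 24, sizeof(struct packed_fields) == 16. */
struct padded {
    int a;      /* 4 bytes + 4 bytes padding before the pointer */
    void *p;    /* 8 bytes */
    int b;      /* 4 bytes + 4 bytes tail padding */
};

struct packed_fields {
    void *p;    /* 8 bytes */
    int a;      /* 4 bytes */
    int b;      /* 4 bytes, no padding needed */
};
```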
-