1. 04 February 2017, 1 commit
    • modversions: treat symbol CRCs as 32 bit quantities · 71810db2
      Authored by Ard Biesheuvel
      The modversion symbol CRCs are emitted as ELF symbols, which allows us
      to easily populate the kcrctab sections by relying on the linker to
      associate each kcrctab slot with the correct value.
      
      This has a couple of downsides:
      
       - Given that the CRCs are treated as memory addresses, we waste 4 bytes
         for each CRC on 64 bit architectures,
      
       - On architectures that support runtime relocation, a R_<arch>_RELATIVE
         relocation entry is emitted for each CRC value, which identifies it
         as a quantity that requires fixing up based on the actual runtime
         load offset of the kernel. This results in corrupted CRCs unless we
         explicitly undo the fixup (and this is currently being handled in the
         core module code)
      
       - Such runtime relocation entries take up 24 bytes of __init space
         each, resulting in an 8x overhead in [uncompressed] kernel size for
         CRCs.
      
      Switching to explicit 32 bit values on 64 bit architectures fixes most
      of these issues, given that 32 bit values are not treated as quantities
      that require fixing up based on the actual runtime load offset.  Note
      that on some ELF64 architectures [such as PPC64], these 32-bit values
      are still emitted as [absolute] runtime relocatable quantities, even if
      the value resolves to a build time constant.  Since relative relocations
      are always resolved at build time, this patch enables MODULE_REL_CRCS on
      powerpc when CONFIG_RELOCATABLE=y, which turns the absolute CRC
      references into relative references into .rodata where the actual CRC
      value is stored.
      
      So redefine all CRC fields and variables as u32, and redefine the
      __CRC_SYMBOL() macro for 64 bit builds to emit the CRC reference using
      inline assembler (which is necessary since 64-bit C code cannot use
      32-bit types to hold memory addresses, even if they are ultimately
      resolved using values that do not exceed 0xffffffff).  To avoid
      potential problems with legacy 32-bit architectures using legacy
      toolchains, the equivalent C definition of the kcrctab entry is retained
      for 32-bit architectures.
      
      Note that this mostly reverts commit d4703aef ("module: handle ppc64
      relocating kcrctabs when CONFIG_RELOCATABLE=y").
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      71810db2
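
      A minimal sketch of the idea behind the 64-bit change: the CRC slot is
      emitted from inline assembler as an explicit 32-bit (.long) reference,
      because C code on a 64-bit target cannot legally store a symbol address
      in a u32 object. This is a simplified illustration, not a verbatim copy
      of the __CRC_SYMBOL() definition in include/linux/export.h; the section
      and symbol naming below is assumed.

        /* Simplified sketch (assumed naming): place a 32-bit reference to
         * the __crc_<sym> symbol into a per-symbol kcrctab section. */
        #define SKETCH_CRC_SYMBOL(sym, sec)                                \
                asm("   .section \"___kcrctab" sec "+" #sym "\", \"a\"\n"  \
                    "   .weak   __crc_" #sym "\n"                          \
                    "   .long   __crc_" #sym "\n"                          \
                    "   .previous\n")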
  2. 27 November 2016, 2 commits
  3. 04 August 2016, 2 commits
    • modules: add ro_after_init support · 444d13ff
      Authored by Jessica Yu
      Add ro_after_init support for modules by adding a new page-aligned section
      in the module layout (after rodata) for ro_after_init data and enabling RO
      protection for that section after module init runs.
      Signed-off-by: Jessica Yu <jeyu@redhat.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      444d13ff
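
      A minimal usage sketch of the new section from a module's point of view;
      the module and variable names are made up for illustration, while the
      __ro_after_init annotation itself comes from <linux/cache.h>:

        #include <linux/module.h>
        #include <linux/cache.h>        /* __ro_after_init */

        /* Writable during module init, read-only once init has returned. */
        static int widget_limit __ro_after_init = 16;

        static int __init widget_init(void)
        {
                widget_limit = 32;      /* still allowed at this point */
                return 0;
        }
        module_init(widget_init);

        static void __exit widget_exit(void)
        {
        }
        module_exit(widget_exit);

        MODULE_LICENSE("GPL");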
    • exceptions: fork exception table content from module.h into extable.h · 0ef76537
      Authored by Paul Gortmaker
      For historical reasons (i.e. pre-git) the exception table stuff was
      buried in the middle of the module.h file.  I noticed this while
      doing an audit for needless includes of module.h and found core
      kernel files (both arch specific and arch independent) were just
      including module.h for this.
      
      The converse is also true, in that conventional drivers, be they
      for filesystems or actual hardware peripherals or similar, do not
      normally care about the exception tables.
      
      Here we fork the exception table content out of module.h into a
      new file called extable.h -- and temporarily include it in
      module.h itself.
      
      Then we will work our way across the arch independent and arch
      specific files needing just exception table content, and move
      them off module.h and onto extable.h
      
      Once that is done, we can remove the extable.h from module.h
      and in doing it like this, we avoid introducing build failures
      into the git history.
      
      The gain here is that module.h gets a bit smaller, across all
      modular drivers that we build for allmodconfig.  Also the core
      files that only need exception table stuff don't have an include
      of module.h that brings in lots of extra stuff and just looks
      generally out of place.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      0ef76537
  4. 27 July 2016, 1 commit
  5. 01 April 2016, 1 commit
    • module: preserve Elf information for livepatch modules · 1ce15ef4
      Authored by Jessica Yu
      For livepatch modules, copy Elf section, symbol, and string information
      from the load_info struct in the module loader. Persist copies of the
      original symbol table and string table.
      
      Livepatch manages its own relocation sections in order to reuse module
      loader code to write relocations. Livepatch modules must preserve Elf
      information such as section indices in order to apply livepatch relocation
      sections using the module loader's apply_relocate_add() function.
      
      In order to apply livepatch relocation sections, livepatch modules must
      keep a complete copy of their original symbol table in memory. Normally, a
      stripped down copy of a module's symbol table (containing only "core"
      symbols) is made available through module->core_symtab. But for livepatch
      modules, the symbol table copied into memory on module load must be exactly
      the same as the symbol table produced when the patch module was compiled.
      This is because the relocations in each livepatch relocation section refer
      to their respective symbols with their symbol indices, and the original
      symbol indices (and thus the symtab ordering) must be preserved in order
      for apply_relocate_add() to find the right symbol.
      Signed-off-by: Jessica Yu <jeyu@redhat.com>
      Reviewed-by: Miroslav Benes <mbenes@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Reviewed-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      1ce15ef4
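
      A sketch of the kind of Elf bookkeeping this implies; the struct and
      field names below follow the description above and are illustrative
      rather than a guaranteed match for the kernel's own definitions:

        #include <linux/module.h>       /* Elf_Ehdr, Elf_Shdr via asm/module.h */

        /* Per-module copy of load-time Elf information, kept for livepatch. */
        struct sketch_klp_modinfo {
                Elf_Ehdr hdr;           /* copy of the ELF header */
                Elf_Shdr *sechdrs;      /* copy of all section headers */
                char *secstrings;       /* copy of the section-name strings */
                unsigned int symndx;    /* index of the original symtab section */
        };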
  6. 03 February 2016, 1 commit
    • modules: fix longstanding /proc/kallsyms vs module insertion race. · 8244062e
      Authored by Rusty Russell
      For CONFIG_KALLSYMS, we keep two symbol tables and two string tables.
      There's one full copy, marked SHF_ALLOC and laid out at the end of the
      module's init section.  There's also a cut-down version that only
      contains core symbols and strings, and lives in the module's core
      section.
      
      After module init (and before we free the module memory), we switch
      the mod->symtab, mod->num_symtab and mod->strtab to point to the core
      versions.  We do this under the module_mutex.
      
      However, kallsyms doesn't take the module_mutex: it uses
      preempt_disable() and rcu tricks to walk through the modules, because
      it's used in the oops path.  It's also used in /proc/kallsyms.
      There's nothing atomic about the change of these variables, so we can
      get the old (larger!) num_symtab and the new symtab pointer; in fact
      this is what I saw when trying to reproduce.
      
      By grouping these variables together, we can use a
      carefully-dereferenced pointer to ensure we always get one or the
      other (the free of the module init section is already done in an RCU
      callback, so that's safe).  We allocate the init one at the end of the
      module init section, and keep the core one inside the struct module
      itself (it could also have been allocated at the end of the module
      core, but that's probably overkill).
      Reported-by: Weilong Chen <chenweilong@huawei.com>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=111541
      Cc: stable@kernel.org
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      8244062e
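
      A sketch of the "group the variables behind one pointer" idea; the names
      are modelled on the description above and may not match the kernel's
      exact identifiers:

        #include <linux/module.h>
        #include <linux/rcupdate.h>

        /* All three values are published and read as a single unit. */
        struct sketch_mod_kallsyms {
                Elf_Sym *symtab;
                unsigned int num_symtab;
                char *strtab;
        };

        /*
         * After module init the loader would switch to the core copy with
         * something like rcu_assign_pointer(mod->kallsyms, &core_copy), and
         * kallsyms readers running under preempt_disable() would fetch it
         * with rcu_dereference_sched(mod->kallsyms), so they always see a
         * matching (symtab, num_symtab, strtab) triple.
         */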
  7. 05 December 2015, 2 commits
  8. 06 July 2015, 1 commit
    • module: relocate module_init from init.h to module.h · 0fd972a7
      Authored by Paul Gortmaker
      Modular users will always be users of init functionality, but
      users of init functionality are not necessarily always modules.
      
      Hence any functionality like module_init and module_exit would
      be more at home in the module.h file.  And module.h should
      explicitly include init.h to make the dependency clear.
      
      We've already done all the legwork needed to ensure that this
      move does not cause any build regressions due to implicit
      header file include assumptions about where module_init lives.
      
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      0fd972a7
  9. 28 June 2015, 1 commit
  10. 23 June 2015, 1 commit
    • module: add per-module param_lock · b51d23e4
      Authored by Dan Streetman
      Add a "param_lock" mutex to each module, and update params.c to use
      the correct built-in or module mutex while locking kernel params.
      Remove the kparam_block_sysfs_r/w() macros, replace them with direct
      calls to kernel_param_[un]lock(module).
      
      The kernel param code currently uses a single mutex to protect
      modification of any and all kernel params.  While this generally works,
      there is one specific problem with it: a module callback function
      cannot safely load another module, i.e. with request_module() or even
      with indirect calls such as crypto_has_alg().  If the module to be
      loaded has any of its params configured (e.g. with a /etc/modprobe.d/*
      config file), then the attempt will result in a deadlock between the
      first module param callback waiting for modprobe, and modprobe trying to
      lock the single kernel param mutex to set the new module's param.
      
      This fixes that by using per-module mutexes, so that each individual module
      is protected against concurrent changes in its own kernel params, but is
      not blocked by changes to other module params.  All built-in modules
      continue to use the built-in mutex, since they are always present at
      runtime, and references (e.g. request_module(), crypto_has_alg()) to them
      will never cause load-time param changing.
      
      This also simplifies the interface used by modules to block sysfs access
      to their params; while there are currently functions to block and unblock
      sysfs param access which are split up by read and write and expect a single
      kernel param to be passed, their actual operation is identical and applies
      to all params, not just the one passed to them; they simply lock and unlock
      the global param mutex.  They are replaced with direct calls to
      kernel_param_[un]lock(THIS_MODULE), which locks THIS_MODULE's param_lock, or
      if the module is built-in, it locks the built-in mutex.
      Suggested-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Dan Streetman <ddstreet@ieee.org>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      b51d23e4
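
      A usage sketch of the resulting interface, assuming a hypothetical module
      with made-up parameters; kernel_param_lock()/kernel_param_unlock() take
      the module whose param_lock should be held (or the built-in mutex for
      built-in code):

        #include <linux/module.h>
        #include <linux/moduleparam.h>

        static int threshold = 10;
        static int window = 4;
        module_param(threshold, int, 0644);
        module_param(window, int, 0644);

        /* Take a consistent snapshot of this module's own parameters. */
        static int snapshot_params(void)
        {
                int t, w;

                kernel_param_lock(THIS_MODULE);
                t = threshold;
                w = window;
                kernel_param_unlock(THIS_MODULE);

                return t * w;
        }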
  11. 28 May 2015, 3 commits
  12. 25 May 2015, 1 commit
  13. 20 May 2015, 1 commit
  14. 14 May 2015, 1 commit
  15. 23 April 2015, 1 commit
  16. 08 April 2015, 1 commit
  17. 17 March 2015, 1 commit
    • livepatch: Fix subtle race with coming and going modules · 8cb2c2dc
      Authored by Petr Mladek
      There is a notifier that handles live patches for coming and going modules.
      It takes the klp_mutex lock to avoid races with coming and going patches, but
      it does not keep the lock all the time. Therefore the following races are
      possible:
      
        1. The notifier is called sometime in STATE_MODULE_COMING. The module
           is visible by find_module() in this state all the time. It means that
           a new patch can be registered and enabled even before the notifier is
           called. It might create a wrong order of stacked patches; see below
           for an example.
      
        2. A new patch could still see the module in the GOING state even after
           the notifier has been called. It will try to initialize the related
           object structures, but the module could disappear at any time. The
           structures would be left in a mess. It might even cause an invalid
           memory access.
      
      This patch solves the problem by adding a boolean variable to struct module.
      The value is true after the coming handler has run and before the going
      handler is called.
      New patches need to be applied when the value is true and they need to ignore
      the module when the value is false.
      
      Note that we need to know the state of all modules on the system. The races
      are related to new patches, so we do not know in advance which modules will
      get patched.
      
      Also note that we could not simply ignore going modules. The code from the
      module could be called even in the GOING state until mod->exit() finishes.
      If we start supporting patches with semantic changes between function
      calls, we need to apply new patches to any still usable code.
      See below for an example.
      
      Finally note that the patch solves only the situation when a new patch is
      registered. There are no such problems when the patch is being removed.
      It does not matter who disables the patch first, whether the normal
      disable_patch() or the module notifier. There is nothing to do
      once the patch is disabled.
      
      Alternative solutions:
      ======================
      
      + reject new patches when a patched module is coming or going; this is ugly
      
      + wait with adding new patch until the module leaves the COMING and GOING
        states; this might be dangerous and complicated; we would need to release
        kgr_lock in the middle of the patch registration to avoid a deadlock
        with the coming and going handlers; also we might need a waitqueue for
        each module which seems to be even bigger overhead than the boolean
      
      + stop modules from entering COMING and GOING states; wait until modules
        leave these states when they are already there; looks complicated; we would
        need to ignore the module that asked to stop the others to avoid a deadlock;
        also it is unclear what to do when two modules asked to stop others and
        both are in COMING state (situation when two new patches are applied)
      
      + always register/enable new patches and fix up the potential mess (registered
        patches order) in klp_module_init(); this is nasty and prone to regressions
        in the future development
      
      + add another MODULE_STATE where the kallsyms are visible but the module is not
        used yet; this looks too complex; the module states are checked on "many"
        locations
      
      Example of patch stacking breakage:
      ===================================
      
      The notifier could _not_ _simply_ ignore already initialized module objects.
      For example, let's have three patches (P1, P2, P3) for functions a() and b()
      where a() is from vmcore and b() is from a module M. Something like:
      
      	a()	b()
      P1	a1()	b1()
      P2	a2()	b2()
      	P3	a3()	b3()
      
      If you load the module M after all patches are registered and enabled,
      the ftrace ops for functions a() and b() have the functions listed in this
      order:
      
      	ops_a->func_stack -> list(a3,a2,a1)
      	ops_b->func_stack -> list(b3,b2,b1)
      
      , so the pointer to b3() is the first and will be used.
      
      Then you might have the following scenario. Let's start with the state where
      patches P1 and P2 are registered and enabled but the module M is not loaded,
      so the ftrace ops for b() do not exist yet. Then we get into the following
      race:
      
      CPU0					CPU1
      
      load_module(M)
      
        complete_formation()
      
        mod->state = MODULE_STATE_COMING;
        mutex_unlock(&module_mutex);
      
      					klp_register_patch(P3);
      					klp_enable_patch(P3);
      
      					# STATE 1
      
        klp_module_notify(M)
          klp_module_notify_coming(P1);
          klp_module_notify_coming(P2);
          klp_module_notify_coming(P3);
      
      					# STATE 2
      
      The ftrace ops for a() and b() then look like this:
      
        STATE1:
      
      	ops_a->func_stack -> list(a3,a2,a1);
      	ops_b->func_stack -> list(b3);
      
        STATE2:
      	ops_a->func_stack -> list(a3,a2,a1);
      	ops_b->func_stack -> list(b2,b1,b3);
      
      therefore, b2() is used for the module but a3() is used for vmcore,
      because each was the last one added to its list.
      
      Example of the race with going modules:
      =======================================
      
      CPU0					CPU1
      
      delete_module()  #SYSCALL
      
         try_stop_module()
           mod->state = MODULE_STATE_GOING;
      
         mutex_unlock(&module_mutex);
      
      					klp_register_patch()
      					klp_enable_patch()
      
      					#save place to switch universe
      
      					b()     # from module that is going
      					  a()   # from core (patched)
      
         mod->exit();
      
      Note that the function b() can be called until we call mod->exit().
      
      If we do not apply a patch against b() because it is in MODULE_STATE_GOING,
      it will call the patched a() with modified semantics and things might go wrong.
      
      [jpoimboe@redhat.com: use one boolean instead of two]
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      8cb2c2dc
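
      A sketch of the check this boolean enables; the field name follows the
      patch description (true between the coming and going handlers) and is
      assumed here rather than quoted from struct module:

        #include <linux/module.h>

        /* Only consider a module patchable while its code is usable. */
        static bool sketch_module_patchable(struct module *mod)
        {
                return mod->klp_alive;  /* assumed field name, CONFIG_LIVEPATCH only */
        }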
  18. 14 February 2015, 1 commit
    • module: fix types of device tables aliases · 6301939d
      Authored by Andrey Ryabinin
      The MODULE_DEVICE_TABLE() macro is used to create aliases to device tables.
      Normally an alias should have the same type as the aliased symbol.
      
      Device tables are arrays, so they have 'struct type##_device_id[x]'
      types. The alias created by MODULE_DEVICE_TABLE() has the non-array type
      'struct type##_device_id'.
      
      This inconsistency confuses the compiler: it can make a wrong assumption
      about the variable's size, which leads KASan to produce a false positive
      report about an out-of-bounds access.
      
      For every global variable the compiler calls __asan_register_globals(),
      passing information about the variable (address, size, size with redzone,
      name, ...). __asan_register_globals() poisons the symbol's redzone to
      detect possible out-of-bounds accesses.
      
      When a symbol has an alias, __asan_register_globals() is called for the
      alias as well as for the symbol. The compiler determines the size of a
      variable from the size of its type. The alias and the symbol have the same
      address, so if the alias has the wrong size, part of the memory that
      actually belongs to the symbol could be poisoned as the alias's redzone.
      
      By fixing the type of the alias symbol we also fix its size, so
      __asan_register_globals() will not poison valid memory.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6301939d
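
      A sketch of the before/after shape of the alias declaration; this is a
      simplified rendering, not a verbatim copy of the MODULE_DEVICE_TABLE()
      definition in linux/module.h:

        #include <linux/stringify.h>

        /* Before: the alias is declared as a single element, so its type
         * (and therefore its registered size) is smaller than the array. */
        #define SKETCH_DEVICE_TABLE_OLD(type, name)                        \
        extern const struct type##_device_id                               \
        __mod_##type##__##name##_device_table                              \
        __attribute__((unused, alias(__stringify(name))))

        /* After: typeof(name) preserves the full array type and size. */
        #define SKETCH_DEVICE_TABLE_NEW(type, name)                        \
        extern const typeof(name)                                          \
        __mod_##type##__##name##_device_table                              \
        __attribute__((unused, alias(__stringify(name))))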
  19. 22 January 2015, 1 commit
  20. 11 November 2014, 1 commit
  21. 27 July 2014, 2 commits
  22. 13 March 2014, 2 commits
  23. 07 March 2014, 1 commit
  24. 16 January 2014, 1 commit
  25. 04 December 2013, 1 commit
  26. 23 September 2013, 1 commit
    • module: remove rmmod --wait option. · 3f2b9c9c
      Authored by Rusty Russell
      The option to wait for a module reference count to reach zero was in
      the initial module implementation, but it was never supported in
      modprobe (you had to use rmmod --wait).  After discussion with Lucas,
      it has been deprecated (with a 10 second sleep) in kmod for the last
      year.
      
      This finally removes it: the flag will evoke a printk warning and a
      normal (non-blocking) remove attempt.
      
      Cc: Lucas De Marchi <lucas.de.marchi@gmail.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      3f2b9c9c
  27. 03 September 2013, 1 commit
    • module: Fix mod->mkobj.kobj potentially freed too early · 942e4431
      Authored by Li Zhong
      DEBUG_KOBJECT_RELEASE helps to find the issue attached below.
      
      After some investigation, it seems the reason is this:
      The mod->mkobj.kobj (ffffffffa01600d0 below) is freed together with mod
      itself in free_module(). However, its children still hold references to
      it because of the delay caused by DEBUG_KOBJECT_RELEASE. So when the
      child (holders below) tries to decrease the reference count of its parent
      in kobject_del(), a BUG happens as it tries to access already freed memory.
      
      This patch tries to fix it by waiting for the mod->mkobj.kobj to be
      really released in the module removing process (and some error code
      paths).
      
      [ 1844.175287] kobject: 'holders' (ffff88007c1f1600): kobject_release, parent ffffffffa01600d0 (delayed)
      [ 1844.178991] kobject: 'notes' (ffff8800370b2a00): kobject_release, parent ffffffffa01600d0 (delayed)
      [ 1845.180118] kobject: 'holders' (ffff88007c1f1600): kobject_cleanup, parent ffffffffa01600d0
      [ 1845.182130] kobject: 'holders' (ffff88007c1f1600): auto cleanup kobject_del
      [ 1845.184120] BUG: unable to handle kernel paging request at ffffffffa01601d0
      [ 1845.185026] IP: [<ffffffff812cda81>] kobject_put+0x11/0x60
      [ 1845.185026] PGD 1a13067 PUD 1a14063 PMD 7bd30067 PTE 0
      [ 1845.185026] Oops: 0000 [#1] PREEMPT
      [ 1845.185026] Modules linked in: xfs libcrc32c [last unloaded: kprobe_example]
      [ 1845.185026] CPU: 0 PID: 18 Comm: kworker/0:1 Tainted: G           O 3.11.0-rc6-next-20130819+ #1
      [ 1845.185026] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
      [ 1845.185026] Workqueue: events kobject_delayed_cleanup
      [ 1845.185026] task: ffff88007ca51f00 ti: ffff88007ca5c000 task.ti: ffff88007ca5c000
      [ 1845.185026] RIP: 0010:[<ffffffff812cda81>]  [<ffffffff812cda81>] kobject_put+0x11/0x60
      [ 1845.185026] RSP: 0018:ffff88007ca5dd08  EFLAGS: 00010282
      [ 1845.185026] RAX: 0000000000002000 RBX: ffffffffa01600d0 RCX: ffffffff8177d638
      [ 1845.185026] RDX: ffff88007ca5dc18 RSI: 0000000000000000 RDI: ffffffffa01600d0
      [ 1845.185026] RBP: ffff88007ca5dd18 R08: ffffffff824e9810 R09: ffffffffffffffff
      [ 1845.185026] R10: ffff8800ffffffff R11: dead4ead00000001 R12: ffffffff81a95040
      [ 1845.185026] R13: ffff88007b27a960 R14: ffff88007c1f1600 R15: 0000000000000000
      [ 1845.185026] FS:  0000000000000000(0000) GS:ffffffff81a23000(0000) knlGS:0000000000000000
      [ 1845.185026] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [ 1845.185026] CR2: ffffffffa01601d0 CR3: 0000000037207000 CR4: 00000000000006b0
      [ 1845.185026] Stack:
      [ 1845.185026]  ffff88007c1f1600 ffff88007c1f1600 ffff88007ca5dd38 ffffffff812cdb7e
      [ 1845.185026]  0000000000000000 ffff88007c1f1640 ffff88007ca5dd68 ffffffff812cdbfe
      [ 1845.185026]  ffff88007c974800 ffff88007c1f1640 ffff88007ff61a00 0000000000000000
      [ 1845.185026] Call Trace:
      [ 1845.185026]  [<ffffffff812cdb7e>] kobject_del+0x2e/0x40
      [ 1845.185026]  [<ffffffff812cdbfe>] kobject_delayed_cleanup+0x6e/0x1d0
      [ 1845.185026]  [<ffffffff81063a45>] process_one_work+0x1e5/0x670
      [ 1845.185026]  [<ffffffff810639e3>] ? process_one_work+0x183/0x670
      [ 1845.185026]  [<ffffffff810642b3>] worker_thread+0x113/0x370
      [ 1845.185026]  [<ffffffff810641a0>] ? rescuer_thread+0x290/0x290
      [ 1845.185026]  [<ffffffff8106bfba>] kthread+0xda/0xe0
      [ 1845.185026]  [<ffffffff814ff0f0>] ? _raw_spin_unlock_irq+0x30/0x60
      [ 1845.185026]  [<ffffffff8106bee0>] ? kthread_create_on_node+0x130/0x130
      [ 1845.185026]  [<ffffffff8150751a>] ret_from_fork+0x7a/0xb0
      [ 1845.185026]  [<ffffffff8106bee0>] ? kthread_create_on_node+0x130/0x130
      [ 1845.185026] Code: 81 48 c7 c7 28 95 ad 81 31 c0 e8 9b da 01 00 e9 4f ff ff ff 66 0f 1f 44 00 00 55 48 89 e5 53 48 89 fb 48 83 ec 08 48 85 ff 74 1d <f6> 87 00 01 00 00 01 74 1e 48 8d 7b 38 83 6b 38 01 0f 94 c0 84
      [ 1845.185026] RIP  [<ffffffff812cda81>] kobject_put+0x11/0x60
      [ 1845.185026]  RSP <ffff88007ca5dd08>
      [ 1845.185026] CR2: ffffffffa01601d0
      [ 1845.185026] ---[ end trace 49a70afd109f5653 ]---
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      942e4431
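
      A sketch of the "wait until the kobject is really gone" pattern the fix
      uses; simplified, and the completion field and helper name are assumed
      rather than quoted from kernel/module.c:

        #include <linux/module.h>
        #include <linux/kobject.h>
        #include <linux/completion.h>

        static void sketch_mod_kobject_put(struct module *mod)
        {
                DECLARE_COMPLETION_ONSTACK(c);

                /* The kobject release handler completes this. */
                mod->mkobj.kobj_completion = &c;
                kobject_put(&mod->mkobj.kobj);  /* drop our reference ... */
                wait_for_completion(&c);        /* ... and wait until it is truly gone */
        }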
  28. 20 August 2013, 1 commit
  29. 01 August 2013, 1 commit
  30. 15 March 2013, 1 commit
    • CONFIG_SYMBOL_PREFIX: cleanup. · b92021b0
      Authored by Rusty Russell
      We have CONFIG_SYMBOL_PREFIX, which three archs define to the string
      "_".  But Al Viro broke this in "consolidate cond_syscall and
      SYSCALL_ALIAS declarations" (in linux-next), and he's not the first to
      do so.
      
      Using CONFIG_SYMBOL_PREFIX is awkward, since we usually just want to
      prepend it to something.  So various places define helpers which are
      defined to nothing if CONFIG_SYMBOL_PREFIX isn't set:
      
      1) include/asm-generic/unistd.h defines __SYMBOL_PREFIX.
      2) include/asm-generic/vmlinux.lds.h defines VMLINUX_SYMBOL(sym)
      3) include/linux/export.h defines MODULE_SYMBOL_PREFIX.
      4) include/linux/kernel.h defines SYMBOL_PREFIX (which differs from #7)
      5) kernel/modsign_certificate.S defines ASM_SYMBOL(sym)
      6) scripts/modpost.c defines MODULE_SYMBOL_PREFIX
      7) scripts/Makefile.lib defines SYMBOL_PREFIX on the commandline if
         CONFIG_SYMBOL_PREFIX is set, so that we have a non-string version
         for pasting.
      
      (arch/h8300/include/asm/linkage.h defines SYMBOL_NAME(), too).
      
      Let's solve this properly:
      1) No more generic prefix, just CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX.
      2) Make linux/export.h usable from asm.
      3) Define VMLINUX_SYMBOL() and VMLINUX_SYMBOL_STR().
      4) Make everyone use them.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Tested-by: James Hogan <james.hogan@imgtec.com> (metag)
      b92021b0
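
      A sketch of the resulting helpers, essentially what linux/export.h ends
      up providing; written from memory, so treat the exact spelling as
      assumed:

        #ifdef CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX
        #define __VMLINUX_SYMBOL(x)     _##x
        #define __VMLINUX_SYMBOL_STR(x) "_" #x
        #else
        #define __VMLINUX_SYMBOL(x)     x
        #define __VMLINUX_SYMBOL_STR(x) #x
        #endif

        /* Indirection so that macro arguments get expanded first. */
        #define VMLINUX_SYMBOL(x)       __VMLINUX_SYMBOL(x)
        #define VMLINUX_SYMBOL_STR(x)   __VMLINUX_SYMBOL_STR(x)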
  31. 21 January 2013, 1 commit
  32. 12 January 2013, 1 commit
  33. 10 October 2012, 1 commit
    • module: signature checking hook · 106a4ee2
      Authored by Rusty Russell
      We do a very simple search for a particular string appended to the module
      (which is cache-hot and about to be SHA'd anyway).  There's both a config
      option and a boot parameter which control whether we accept or fail with
      unsigned modules and modules that are signed with an unknown key.
      
      If module signing is enabled, the kernel will be tainted if a module is
      loaded that is unsigned or has a signature for which we don't have the
      key.
      
      (Useful feedback and tweaks by David Howells <dhowells@redhat.com>)
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      106a4ee2
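
      A sketch of the "very simple search" described above: look for a fixed
      marker string appended to the module image and, if found, hand the
      preceding signature blob to the verifier. The marker text and helper
      name are written from memory and should be treated as assumptions:

        #include <linux/types.h>
        #include <linux/string.h>
        #include <linux/errno.h>

        #define SKETCH_SIG_MARKER "~Module signature appended~\n"

        static int sketch_module_sig_check(const u8 *mod, unsigned long *len)
        {
                const unsigned long markerlen = sizeof(SKETCH_SIG_MARKER) - 1;

                if (*len >= markerlen &&
                    !memcmp(mod + *len - markerlen, SKETCH_SIG_MARKER, markerlen)) {
                        /* Strip the marker; verify the signature that precedes it. */
                        *len -= markerlen;
                        return 0;       /* would go on to check the signature */
                }

                /* Unsigned: accept (and taint) or reject, per config/boot param. */
                return -ENOKEY;
        }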