1. 13 January 2018 (2 commits)
    • error-injection: Add injectable error types · 663faf9f
      Authored by Masami Hiramatsu
      Add injectable error types for each error-injectable function.
      
      One motivation of error injection testing is to find software flaws,
      mistakes, or mishandling of expectable errors. If the test finds such
      a flaw, that is a program bug, so we need to fix it.
      
      But if the tester injects the wrong kind of error (e.g. just returning
      a success code without processing anything), it causes unexpected
      behavior even if the caller is correctly programmed to handle any
      errors. That is not what we want to test by error injection.
      
      To clarify what type of errors the caller must expect for each
      injectable function, this introduces injectable error types:
      
       - EI_ETYPE_NULL : means the function will return NULL if it
         fails. No ERR_PTR, just a NULL.
       - EI_ETYPE_ERRNO : means the function will return -ERRNO
         if it fails.
       - EI_ETYPE_ERRNO_NULL : means the function will return -ERRNO
         (ERR_PTR) or NULL.
      
      The ALLOW_ERROR_INJECTION() macro takes one of NULL, ERRNO, or
      ERRNO_NULL to record the error type for each function, e.g.
      
       ALLOW_ERROR_INJECTION(open_ctree, ERRNO)
      
      These error types are shown in debugfs as below.
      
        ====
        / # cat /sys/kernel/debug/error_injection/list
        open_ctree [btrfs]	ERRNO
        io_ctl_init [btrfs]	ERRNO
        ====
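      
      As a minimal sketch (my_reset_hw is a hypothetical function, not one
      from this patch), annotating an error-injectable function looks like:
      
      #include <linux/error-injection.h>
      
      /* Callers must be prepared for a -ERRNO return from this function. */
      static int my_reset_hw(struct device *dev)
      {
              if (!dev)
                      return -EINVAL;
              /* ... real reset sequence ... */
              return 0;
      }
      /* Declares that injected failures are plain negative errnos. */
      ALLOW_ERROR_INJECTION(my_reset_hw, ERRNO);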
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • error-injection: Separate error-injection from kprobe · 540adea3
      Authored by Masami Hiramatsu
      The error-injection framework is not limited to kprobes or BPF; other
      kernel subsystems such as livepatch and ftrace can use it freely for
      checking the safety of error injection. So separate the error-injection
      framework from kprobes.
      
      The following changes have been made:
      
      - "kprobe" word is removed from any APIs/structures.
      - BPF_ALLOW_ERROR_INJECTION() is renamed to
        ALLOW_ERROR_INJECTION() since it is not limited for BPF too.
      - CONFIG_FUNCTION_ERROR_INJECTION is the config item of this
        feature. It is automatically enabled if the arch supports
        error injection feature for kprobe or ftrace etc.
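      
      As a sketch of the rename at call sites (open_ctree and its ERRNO
      type are taken from the companion commit above):
      
      /* before: BPF-specific marking, no error type */
      BPF_ALLOW_ERROR_INJECTION(open_ctree);
      /* after: generic marking, with the expected error type */
      ALLOW_ERROR_INJECTION(open_ctree, ERRNO);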
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 13 December 2017 (1 commit)
  3. 09 November 2017 (1 commit)
    • module: export module signature enforcement status · fda784e5
      Authored by Bruno E. O. Meneguele
      A static variable sig_enforce is used as a status variable reflecting
      the effective value of CONFIG_MODULE_SIG_FORCE: once that option is
      set, the variable holds true; if the option is not set, the variable
      holds whatever value the module.sig_enforce kernel cmdline param
      provides: true when =1, and false when =0 or not present.
      
      Since this cmdline param takes precedence when the CONFIG option is
      not set, other places in the kernel could misbehave if they relied
      only on the CONFIG_MODULE_SIG_FORCE value. Exporting this status
      allows the kernel to rely on the effective value of module signature
      enforcement, whether it comes from the CONFIG value or the cmdline
      param.
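      
      A minimal sketch of such an accessor (the name is_module_sig_enforced()
      follows the mainline helper; treat the exact signature as an
      assumption):
      
      /* Other subsystems (e.g. IMA) query the effective state instead of
         re-reading CONFIG_MODULE_SIG_FORCE. */
      bool is_module_sig_enforced(void)
      {
              return sig_enforce;
      }
      EXPORT_SYMBOL(is_module_sig_enforced);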
      Signed-off-by: Bruno E. O. Meneguele <brdeoliv@redhat.com>
      Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
  4. 30 July 2017 (1 commit)
    • module: Remove const attribute from alias for MODULE_DEVICE_TABLE · 0bf8bf50
      Authored by Matthias Kaehlcke
      MODULE_DEVICE_TABLE(type, name) creates an alias of type 'extern const
      typeof(name)'. If 'name' is already constant the 'const' attribute is
      specified twice, which is not allowed in C89 (see discussion at
      https://lkml.org/lkml/2017/5/23/1440). Since the kernel is built with
      -std=gnu89, clang generates warnings like this:
      
      drivers/thermal/x86_pkg_temp_thermal.c:509:1: warning: duplicate 'const'
        declaration specifier
            [-Wduplicate-decl-specifier]
      MODULE_DEVICE_TABLE(x86cpu, pkg_temp_thermal_ids);
      ^
      ./include/linux/module.h:212:8: note: expanded from macro 'MODULE_DEVICE_TABLE'
      extern const typeof(name) __mod_##type##__##name##_device_table
      
      Remove the const attribute from the alias to avoid the duplicate
      specifier. After all, it is only an alias, and the attribute shouldn't
      have any effect.
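      
      A sketch of the resulting macro (based on the definition quoted above,
      minus the const; take the exact attribute list as an assumption):
      
      /* No 'const' on the alias, so no duplicate specifier when 'name'
         is itself already const. */
      #define MODULE_DEVICE_TABLE(type, name)                           \
      extern typeof(name) __mod_##type##__##name##_device_table        \
        __attribute__ ((unused, alias(__stringify(name))))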
      Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
      Signed-off-by: Jessica Yu <jeyu@kernel.org>
  5. 01 July 2017 (1 commit)
    • randstruct: Mark various structs for randomization · 3859a271
      Authored by Kees Cook
      This marks many critical kernel structures for randomization. These are
      structures that have been targeted in the past in security exploits, or
      contain function pointers, pointers to function pointer tables, lists,
      workqueues, ref-counters, credentials, permissions, or are otherwise
      sensitive. This initial list was extracted from Brad Spengler/PaX Team's
      code in the last public patch of grsecurity/PaX based on my understanding
      of the code. Changes or omissions from the original code are mine and
      don't reflect the original grsecurity/PaX code.
      
      Left out of this list is task_struct, which requires special handling
      and will be covered in a subsequent patch.
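      
      As a hedged illustration (struct my_sensitive_ops is hypothetical;
      the __randomize_layout tag is the opt-in used by this series):
      
      /* Member layout is shuffled at build time when the randstruct
         plugin is enabled. */
      struct my_sensitive_ops {
              int (*open)(struct inode *inode, struct file *file);
              int (*release)(struct inode *inode, struct file *file);
              void *private_data;
      } __randomize_layout;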
      Signed-off-by: Kees Cook <keescook@chromium.org>
  6. 14 June 2017 (2 commits)
  7. 24 April 2017 (1 commit)
  8. 16 March 2017 (1 commit)
    • locking/lockdep: Handle statically initialized PER_CPU locks properly · 383776fa
      Authored by Thomas Gleixner
      If a PER_CPU struct which contains a spin_lock is statically initialized
      via:
      
      DEFINE_PER_CPU(struct foo, bla) = {
      	.lock = __SPIN_LOCK_UNLOCKED(bla.lock)
      };
      
      then lockdep assigns a separate key to each lock, because the logic for
      assigning a key to statically initialized locks is to use the address as
      the key. With per-CPU locks the address is obviously different on each CPU.
      
      That's wrong, because all locks should have the same key.
      
      To solve this the following modifications are required:
      
       1) Extend the is_kernel/module_percpu_addr() functions to hand back the
          canonical address of the per CPU address, i.e. the per CPU address
          minus the per CPU offset.
      
       2) Check the lock address with these functions and if the per CPU check
          matches use the returned canonical address as the lock key, so all per
          CPU locks have the same key (see the sketch after this list).
      
       3) Move the static_obj(key) check into look_up_lock_class() so this check
          can be avoided for statically initialized per CPU locks.  That's
          required because the canonical address fails the static_obj(key) check
          for obvious reasons.
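      
      A hedged sketch of the key selection (the function names follow the
      description above; control flow simplified from the actual lockdep
      change):
      
      /* If the lock sits in static per-CPU data, key it by the canonical
         address (per-CPU address minus per-CPU offset) so that every
         CPU's copy maps to the same lock class. */
      unsigned long can_addr, addr = (unsigned long)lock;
      
      if (__is_kernel_percpu_address(addr, &can_addr))
              lock->key = (void *)can_addr;
      else if (__is_module_percpu_address(addr, &can_addr))
              lock->key = (void *)can_addr;
      else if (static_obj(lock))
              lock->key = (void *)lock;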
      Reported-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      [ Merged Dan's fixups for !MODULES and !SMP into this patch. ]
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dan Murphy <dmurphy@ti.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20170227143736.pectaimkjkan5kow@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 10 February 2017 (1 commit)
  10. 08 February 2017 (1 commit)
  11. 07 February 2017 (1 commit)
  12. 04 February 2017 (1 commit)
    • modversions: treat symbol CRCs as 32 bit quantities · 71810db2
      Authored by Ard Biesheuvel
      The modversion symbol CRCs are emitted as ELF symbols, which allows us
      to easily populate the kcrctab sections by relying on the linker to
      associate each kcrctab slot with the correct value.
      
      This has a couple of downsides:
      
       - Given that the CRCs are treated as memory addresses, we waste 4 bytes
         for each CRC on 64 bit architectures,
      
       - On architectures that support runtime relocation, a R_<arch>_RELATIVE
         relocation entry is emitted for each CRC value, which identifies it
         as a quantity that requires fixing up based on the actual runtime
         load offset of the kernel. This results in corrupted CRCs unless we
         explicitly undo the fixup (and this is currently being handled in the
         core module code)
      
       - Such runtime relocation entries take up 24 bytes of __init space
         each, resulting in an 8x overhead in [uncompressed] kernel size for
         CRCs.
      
      Switching to explicit 32 bit values on 64 bit architectures fixes most
      of these issues, given that 32 bit values are not treated as quantities
      that require fixing up based on the actual runtime load offset.  Note
      that on some ELF64 architectures [such as PPC64], these 32-bit values
      are still emitted as [absolute] runtime relocatable quantities, even if
      the value resolves to a build time constant.  Since relative relocations
      are always resolved at build time, this patch enables MODULE_REL_CRCS on
      powerpc when CONFIG_RELOCATABLE=y, which turns the absolute CRC
      references into relative references into .rodata where the actual CRC
      value is stored.
      
      So redefine all CRC fields and variables as u32, and redefine the
      __CRC_SYMBOL() macro for 64 bit builds to emit the CRC reference using
      inline assembler (which is necessary since 64-bit C code cannot use
      32-bit types to hold memory addresses, even if they are ultimately
      resolved using values that do not exceed 0xffffffff).  To avoid
      potential problems with legacy 32-bit architectures using legacy
      toolchains, the equivalent C definition of the kcrctab entry is retained
      for 32-bit architectures.
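      
      A hedged sketch of the 64-bit emission (simplified from the
      description above; not the exact upstream macro text):
      
      /* Emit the CRC slot as an explicit 32-bit value via inline asm, so
         the linker does not attach a pointer-sized runtime relocation. */
      #define __CRC_SYMBOL(sym, sec)                                    \
              asm("   .section \"___kcrctab" sec "+" #sym "\", \"a\"\n" \
                  "   .weak   __crc_" #sym "\n"                         \
                  "   .long   __crc_" #sym "\n"                         \
                  "   .previous\n")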
      
      Note that this mostly reverts commit d4703aef ("module: handle ppc64
      relocating kcrctabs when CONFIG_RELOCATABLE=y").
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 04 January 2017 (1 commit)
  14. 27 November 2016 (2 commits)
  15. 04 August 2016 (2 commits)
    • modules: add ro_after_init support · 444d13ff
      Authored by Jessica Yu
      Add ro_after_init support for modules by adding a new page-aligned section
      in the module layout (after rodata) for ro_after_init data and enabling RO
      protection for that section after module init runs.
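      
      A hedged usage sketch (my_table and my_init are hypothetical) of what
      this enables inside a module:
      
      /* Writable during module init, read-only once init completes. */
      static unsigned long my_table[8] __ro_after_init;
      
      static int __init my_init(void)
      {
              my_table[0] = 42;       /* still allowed at this point */
              return 0;
      }
      module_init(my_init);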
      Signed-off-by: Jessica Yu <jeyu@redhat.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    • exceptions: fork exception table content from module.h into extable.h · 0ef76537
      Authored by Paul Gortmaker
      For historical reasons (i.e. pre-git) the exception table stuff was
      buried in the middle of the module.h file.  I noticed this while
      doing an audit for needless includes of module.h and found core
      kernel files (both arch specific and arch independent) were just
      including module.h for this.
      
      The converse is also true, in that conventional drivers, be they
      for filesystems or actual hardware peripherals or similar, do not
      normally care about the exception tables.
      
      Here we fork the exception table content out of module.h into a
      new file called extable.h -- and temporarily include it into
      module.h itself.
      
      Then we will work our way across the arch independent and arch
      specific files needing just exception table content, and move
      them off module.h and onto extable.h
      
      Once that is done, we can remove the extable.h from module.h
      and in doing it like this, we avoid introducing build failures
      into the git history.
      
      The gain here is that module.h gets a bit smaller, across all
      modular drivers that we build for allmodconfig.  Also the core
      files that only need exception table stuff don't have an include
      of module.h that brings in lots of extra stuff and just looks
      generally out of place.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  16. 27 July 2016 (1 commit)
  17. 01 April 2016 (1 commit)
    • module: preserve Elf information for livepatch modules · 1ce15ef4
      Authored by Jessica Yu
      For livepatch modules, copy Elf section, symbol, and string information
      from the load_info struct in the module loader. Persist copies of the
      original symbol table and string table.
      
      Livepatch manages its own relocation sections in order to reuse module
      loader code to write relocations. Livepatch modules must preserve Elf
      information such as section indices in order to apply livepatch relocation
      sections using the module loader's apply_relocate_add() function.
      
      In order to apply livepatch relocation sections, livepatch modules must
      keep a complete copy of their original symbol table in memory. Normally, a
      stripped down copy of a module's symbol table (containing only "core"
      symbols) is made available through module->core_symtab. But for livepatch
      modules, the symbol table copied into memory on module load must be exactly
      the same as the symbol table produced when the patch module was compiled.
      This is because the relocations in each livepatch relocation section refer
      to their respective symbols with their symbol indices, and the original
      symbol indices (and thus the symtab ordering) must be preserved in order
      for apply_relocate_add() to find the right symbol.
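      
      A hedged sketch of the copy step in the module loader (the helper
      names is_livepatch_module() and copy_module_elf() follow the mainline
      patch; error handling simplified):
      
      /* Keep the Elf section headers, symtab and strtab around for
         livepatch modules so apply_relocate_add() can be reused later. */
      if (is_livepatch_module(mod)) {
              err = copy_module_elf(mod, info);
              if (err < 0)
                      goto free_modinfo;
      }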
      Signed-off-by: Jessica Yu <jeyu@redhat.com>
      Reviewed-by: Miroslav Benes <mbenes@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Reviewed-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
  18. 03 February 2016 (1 commit)
    • modules: fix longstanding /proc/kallsyms vs module insertion race. · 8244062e
      Authored by Rusty Russell
      For CONFIG_KALLSYMS, we keep two symbol tables and two string tables.
      There's one full copy, marked SHF_ALLOC and laid out at the end of the
      module's init section.  There's also a cut-down version that only
      contains core symbols and strings, and lives in the module's core
      section.
      
      After module init (and before we free the module memory), we switch
      the mod->symtab, mod->num_symtab and mod->strtab to point to the core
      versions.  We do this under the module_mutex.
      
      However, kallsyms doesn't take the module_mutex: it uses
      preempt_disable() and rcu tricks to walk through the modules, because
      it's used in the oops path.  It's also used in /proc/kallsyms.
      There's nothing atomic about the change of these variables, so we can
      get the old (larger!) num_symtab and the new symtab pointer; in fact
      this is what I saw when trying to reproduce.
      
      By grouping these variables together, we can use a
      carefully-dereferenced pointer to ensure we always get one or the
      other (the free of the module init section is already done in an RCU
      callback, so that's safe).  We allocate the init one at the end of the
      module init section, and keep the core one inside the struct module
      itself (it could also have been allocated at the end of the module
      core, but that's probably overkill).
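      
      A hedged sketch of the grouping (field names follow the description;
      the struct is simplified):
      
      /* One pointer swap publishes a consistent symtab/strtab snapshot. */
      struct mod_kallsyms {
              Elf_Sym *symtab;
              unsigned int num_symtab;
              char *strtab;
      };
      
      /* Readers (oops path, /proc/kallsyms) dereference exactly once: */
      struct mod_kallsyms *kallsyms = rcu_dereference_sched(mod->kallsyms);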
      Reported-by: Weilong Chen <chenweilong@huawei.com>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=111541
      Cc: stable@kernel.org
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  19. 05 December 2015 (2 commits)
  20. 06 July 2015 (1 commit)
    • module: relocate module_init from init.h to module.h · 0fd972a7
      Authored by Paul Gortmaker
      Modular users will always be users of init functionality, but
      users of init functionality are not necessarily always modules.
      
      Hence any functionality like module_init and module_exit would
      be more at home in the module.h file.  And module.h should
      explicitly include init.h to make the dependency clear.
      
      We've already done all the legwork needed to ensure that this
      move does not cause any build regressions due to implicit
      header file include assumptions about where module_init lives.
      
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  21. 28 June 2015 (1 commit)
  22. 23 June 2015 (1 commit)
    • module: add per-module param_lock · b51d23e4
      Authored by Dan Streetman
      Add a "param_lock" mutex to each module, and update params.c to use
      the correct built-in or module mutex while locking kernel params.
      Remove the kparam_block_sysfs_r/w() macros, replace them with direct
      calls to kernel_param_[un]lock(module).
      
      The kernel param code currently uses a single mutex to protect
      modification of any and all kernel params.  While this generally works,
      there is one specific problem with it: a module callback function
      cannot safely load another module, i.e. with request_module() or even
      with indirect calls such as crypto_has_alg().  If the module to be
      loaded has any of its params configured (e.g. with a /etc/modprobe.d/*
      config file), then the attempt will result in a deadlock between the
      first module param callback waiting for modprobe, and modprobe trying to
      lock the single kernel param mutex to set the new module's param.
      
      This fixes that by using per-module mutexes, so that each individual
      module is protected against concurrent changes to its own kernel params
      but is not blocked by changes to other modules' params. All built-in
      modules continue to use the built-in mutex, since they are always
      present at runtime and references (e.g. request_module(),
      crypto_has_alg()) to them will never cause load-time param changing.
      
      This also simplifies the interface used by modules to block sysfs
      access to their params. The current functions to block and unblock
      sysfs param access are split up by read and write and expect a single
      kernel param to be passed, but their actual operation is identical and
      applies to all params, not just the one passed to them: they simply
      lock and unlock the global param mutex. They are replaced with direct
      calls to kernel_param_[un]lock(THIS_MODULE), which locks THIS_MODULE's
      param_lock, or the built-in mutex if the module is built-in.
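      
      A hedged usage sketch of the replacement calls (the lock covers all
      of this module's params, not a single one):
      
      /* old: kparam_block_sysfs_w(some_param); ...
         new: one per-module lock */
      kernel_param_lock(THIS_MODULE);
      /* ... read or update this module's params consistently ... */
      kernel_param_unlock(THIS_MODULE);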
      Suggested-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Dan Streetman <ddstreet@ieee.org>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  23. 28 May 2015 (3 commits)
  24. 25 May 2015 (1 commit)
  25. 20 May 2015 (1 commit)
  26. 14 May 2015 (1 commit)
  27. 23 April 2015 (1 commit)
  28. 08 April 2015 (1 commit)
  29. 17 March 2015 (1 commit)
    • livepatch: Fix subtle race with coming and going modules · 8cb2c2dc
      Authored by Petr Mladek
      There is a notifier that handles live patches for coming and going
      modules. It takes the klp_mutex lock to avoid races with coming and
      going patches, but it does not hold the lock all the time. Therefore
      the following races are possible:
      
        1. The notifier is called sometime in STATE_MODULE_COMING. The module
           is visible via find_module() during this whole state. It means that
           a new patch can be registered and enabled even before the notifier
           is called. It might create a wrong order of stacked patches; see
           below for an example.
      
        2. A new patch could still see the module in the GOING state even
           after the notifier has been called. It will try to initialize the
           related object structures, but the module could disappear at any
           time. The structures would be left in a mess, and it might even
           cause an invalid memory access.
      
      This patch solves the problem by adding a boolean variable into struct
      module. The value is true after the coming handler and before the going
      handler is called. New patches need to be applied when the value is
      true, and they need to ignore the module when the value is false.
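      
      A hedged sketch of the flag (the field name klp_alive follows the
      mainline patch; surrounding fields omitted):
      
      struct module {
              /* ... */
      #ifdef CONFIG_LIVEPATCH
              bool klp_alive; /* true between the coming and going notifiers */
      #endif
              /* ... */
      };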
      
      Note that we need to know the state of all modules on the system: the
      races are related to new patches, so we do not know in advance which
      modules will get patched.
      
      Also note that we could not simply ignore going modules. The code from the
      module could be called even in the GOING state until mod->exit() finishes.
      If we start supporting patches with semantic changes between function
      calls, we need to apply new patches to any still usable code.
      See below for an example.
      
      Finally, note that the patch solves only the situation when a new patch
      is registered. There are no such problems when a patch is being removed.
      It does not matter who disables the patch first, whether the normal
      disable_patch() or the module notifier; there is nothing to do once the
      patch is disabled.
      
      Alternative solutions:
      ======================
      
      + reject new patches when a patched module is coming or going; this is
        ugly
      
      + wait with adding the new patch until the module leaves the COMING and
        GOING states; this might be dangerous and complicated; we would need
        to release klp_mutex in the middle of the patch registration to avoid
        a deadlock with the coming and going handlers; also we might need a
        waitqueue for each module, which seems to be even bigger overhead than
        the boolean
      
      + stop modules from entering the COMING and GOING states; wait until
        modules leave these states when they are already there; looks
        complicated; we would need to ignore the module that asked to stop the
        others to avoid a deadlock; also it is unclear what to do when two
        modules have asked to stop the others and both are in the COMING state
        (the situation when two new patches are applied)
      
      + always register/enable new patches and fix up the potential mess
        (registered patches order) in klp_module_init(); this is nasty and
        prone to regressions in future development
      
      + add another MODULE_STATE where the kallsyms are visible but the module
        is not used yet; this looks too complex; the module states are checked
        in "many" locations
      
      Example of patch stacking breakage:
      ===================================
      
      The notifier could _not_ _simply_ ignore already initialized module
      objects. For example, let's have three patches (P1, P2, P3) for
      functions a() and b(), where a() is from vmcore and b() is from a
      module M. Something like:
      
      	a()	b()
      P1	a1()	b1()
      P2	a2()	b2()
      P3	a3()	b3()
      
      If you load the module M after all patches are registered and enabled,
      the ftrace ops for functions a() and b() have the functions listed in
      this order:
      
      	ops_a->func_stack -> list(a3,a2,a1)
      	ops_b->func_stack -> list(b3,b2,b1)
      
      so the pointer to b3() is first and will be used.
      
      Then you might have the following scenario. Let's start from the state
      where patches P1 and P2 are registered and enabled but the module M is
      not loaded, so the ftrace ops for b() do not exist yet. Then we get
      into the following race:
      
      CPU0					CPU1
      
      load_module(M)
      
        complete_formation()
      
        mod->state = MODULE_STATE_COMING;
        mutex_unlock(&module_mutex);
      
      					klp_register_patch(P3);
      					klp_enable_patch(P3);
      
      					# STATE 1
      
        klp_module_notify(M)
          klp_module_notify_coming(P1);
          klp_module_notify_coming(P2);
          klp_module_notify_coming(P3);
      
      					# STATE 2
      
      The ftrace ops for a() and b() then look like this:
      
        STATE1:
      
      	ops_a->func_stack -> list(a3,a2,a1);
      	ops_b->func_stack -> list(b3);
      
        STATE2:
      	ops_a->func_stack -> list(a3,a2,a1);
      	ops_b->func_stack -> list(b2,b1,b3);
      
      therefore, b2() is used for the module but a3() is used for vmcore,
      because they were the last ones added.
      
      Example of the race with going modules:
      =======================================
      
      CPU0					CPU1
      
      delete_module()  #SYSCALL
      
         try_stop_module()
           mod->state = MODULE_STATE_GOING;
      
         mutex_unlock(&module_mutex);
      
      					klp_register_patch()
      					klp_enable_patch()
      
      					#save place to switch universe
      
      					b()     # from module that is going
      					  a()   # from core (patched)
      
         mod->exit();
      
      Note that the function b() can be called until we call mod->exit().
      
      If we do not apply the patch against b() because the module is in
      MODULE_STATE_GOING, it will call the patched a() with modified
      semantics, and things might go wrong.
      
      [jpoimboe@redhat.com: use one boolean instead of two]
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
  30. 14 February 2015 (1 commit)
    • module: fix types of device tables aliases · 6301939d
      Authored by Andrey Ryabinin
      The MODULE_DEVICE_TABLE() macro is used to create aliases to device
      tables. Normally an alias should have the same type as the aliased
      symbol.
      
      Device tables are arrays, so they have 'struct type##_device_id[x]'
      types. The alias created by MODULE_DEVICE_TABLE() has the non-array
      type 'struct type##_device_id'.
      
      This inconsistency confuses the compiler: it could make a wrong
      assumption about the variable's size, which leads KASan to produce a
      false positive report about an out-of-bounds access.
      
      For every global variable the compiler calls __asan_register_globals(),
      passing information about the variable (address, size, size with
      redzone, name, ...). __asan_register_globals() poisons the symbol's
      redzone to detect possible out-of-bounds accesses.
      
      When a symbol has an alias, __asan_register_globals() is called both
      for the symbol and for the alias. The compiler determines the size of
      a variable from the size of its type. Alias and symbol have the same
      address, so if the alias has the wrong size, part of the memory that
      actually belongs to the symbol could be poisoned as the redzone of the
      alias.
      
      By fixing the type of the alias symbol we fix its size, so
      __asan_register_globals() will not poison valid memory.
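      
      A hedged sketch of the fix (my_ids is hypothetical; typeof(name)
      preserves the array type, and the attribute list is simplified):
      
      static const struct of_device_id my_ids[] = { /* ... */ { } };
      
      /* The alias now gets typeof(my_ids), i.e. the full array type, so
         KASan registers the correct size. (Previously the alias was
         declared as a single 'struct of_device_id', losing the bound.) */
      extern const typeof(my_ids) __mod_of__my_ids_device_table
              __attribute__((unused, alias("my_ids")));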
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  31. 22 January 2015 (1 commit)
  32. 11 November 2014 (1 commit)
  33. 27 July 2014 (1 commit)