1. 05 Oct, 2019: 2 commits
  2. 25 Sep, 2019: 1 commit
  3. 18 Sep, 2019: 1 commit
  4. 10 Sep, 2019: 1 commit
  5. 06 Sep, 2019: 1 commit
    • keys: Fix missing null pointer check in request_key_auth_describe() · d41a3eff
      Committed by Hillf Danton
      If a request_key authentication token key gets revoked, there's a window in
      which request_key_auth_describe() can see it with a NULL payload - but it
      makes no check for this and something like the following oops may occur:
      
      	BUG: Kernel NULL pointer dereference at 0x00000038
      	Faulting instruction address: 0xc0000000004ddf30
      	Oops: Kernel access of bad area, sig: 11 [#1]
      	...
      	NIP [...] request_key_auth_describe+0x90/0xd0
      	LR [...] request_key_auth_describe+0x54/0xd0
      	Call Trace:
      	[...] request_key_auth_describe+0x54/0xd0 (unreliable)
      	[...] proc_keys_show+0x308/0x4c0
      	[...] seq_read+0x3d0/0x540
      	[...] proc_reg_read+0x90/0x110
      	[...] __vfs_read+0x3c/0x70
      	[...] vfs_read+0xb4/0x1b0
      	[...] ksys_read+0x7c/0x130
      	[...] system_call+0x5c/0x70
      
      Fix this by checking for a NULL pointer when describing such a key.
      
      Also make the read routine check for a NULL pointer to be on the safe side.
      
      [DH: Modified to not take already-held rcu lock and modified to also check
       in the read routine]
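      The shape of the fix can be sketched in userspace C. This is a minimal illustration, not the kernel code: the struct layout and the function and field names (`describe_key`, `payload`, `op`, `key_serial`) are stand-ins for the real `request_key_auth` machinery, and the real fix also runs under the RCU read lock.

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdio.h>
      #include <string.h>

      /* Hypothetical stand-ins for the kernel structures. */
      struct request_key_auth {
          const char *op;           /* e.g. "create" */
          unsigned int key_serial;
      };

      struct key {
          struct request_key_auth *payload;  /* becomes NULL once revoked */
      };

      /* Sketch of the fix: snapshot the payload pointer once and bail out
       * before dereferencing it if revocation has already cleared it. */
      int describe_key(const struct key *key, char *buf, size_t len)
      {
          const struct request_key_auth *rka = key->payload;

          if (!rka)   /* key was revoked in the window described above */
              return snprintf(buf, len, "key:<revoked>");

          return snprintf(buf, len, "key:%s:%u", rka->op, rka->key_serial);
      }
      ```

      The point is that the payload pointer is read exactly once into a local; checking `key->payload` and then dereferencing `key->payload` again would reintroduce the race.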
      
      Fixes: 04c567d9 ("[PATCH] Keys: Fix race between two instantiators of a key")
      Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
      Signed-off-by: Hillf Danton <hdanton@sina.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 05 Sep, 2019: 5 commits
  7. 31 Aug, 2019: 1 commit
  8. 30 Aug, 2019: 2 commits
    • ima: ima_api: Use struct_size() in kzalloc() · 2a7f0e53
      Committed by Gustavo A. R. Silva
      One of the more common cases of allocation size calculations is finding
      the size of a structure that has a zero-sized array at the end, along
      with memory for some number of elements for that array. For example:
      
      struct ima_template_entry {
      	...
              struct ima_field_data template_data[0]; /* template related data */
      };
      
      instance = kzalloc(sizeof(struct ima_template_entry) + count * sizeof(struct ima_field_data), GFP_NOFS);
      
      Instead of leaving these open-coded and prone to type mistakes, we can
      now use the new struct_size() helper:
      
      instance = kzalloc(struct_size(instance, template_data, count), GFP_NOFS);
      
      This code was detected with the help of Coccinelle.
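      The transformation can be demonstrated in userspace. The `struct_size()` macro below is a simplified stand-in for the kernel's helper in `<linux/overflow.h>` (the real one also checks for multiplication overflow); the struct names mirror the generic example above, not the IMA code.

      ```c
      #include <stddef.h>

      /* Simplified userspace stand-in for the kernel's struct_size():
       * both element type and struct type are derived from the pointer,
       * so they cannot silently drift apart the way the open-coded
       * sizeof(struct foo) + count * sizeof(struct boo) form can. */
      #define struct_size(ptr, member, count) \
          (sizeof(*(ptr)) + sizeof(*(ptr)->member) * (count))

      struct boo { int data; };

      struct foo {
          int stuff;
          struct boo entry[];   /* flexible array member */
      };

      /* Size of a struct foo carrying `count` trailing entries.
       * sizeof is evaluated at compile time, so the NULL pointer is
       * never dereferenced. */
      size_t foo_alloc_size(size_t count)
      {
          struct foo *instance = NULL;
          return struct_size(instance, entry, count);
      }
      ```

      Because the types are taken from `instance` itself, changing the element type of `entry` later automatically corrects every allocation site.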
      Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
      Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
    • ima: use struct_size() in kzalloc() · fa5b5717
      Committed by Gustavo A. R. Silva
      One of the more common cases of allocation size calculations is finding
      the size of a structure that has a zero-sized array at the end, along
      with memory for some number of elements for that array. For example:
      
      struct foo {
         int stuff;
         struct boo entry[];
      };
      
      instance = kzalloc(sizeof(struct foo) + count * sizeof(struct boo), GFP_KERNEL);
      
      Instead of leaving these open-coded and prone to type mistakes, we can
      now use the new struct_size() helper:
      
      instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);
      
      This code was detected with the help of Coccinelle.
      Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
      Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
  9. 29 Aug, 2019: 1 commit
  10. 28 Aug, 2019: 1 commit
    • selinux: avoid atomic_t usage in sidtab · 116f21bb
      Committed by Ondrej Mosnacek
      As noted in Documentation/atomic_t.txt, if we don't need the RMW atomic
      operations, we should only use READ_ONCE()/WRITE_ONCE() +
      smp_rmb()/smp_wmb() where necessary (or the combined variants
      smp_load_acquire()/smp_store_release()).
      
      This patch converts the sidtab code to use regular u32 for the counter
      and reverse lookup cache and use the appropriate operations instead of
      atomic_read()/atomic_set(). Note that when reading/updating the reverse
      lookup cache we don't need memory barriers as it doesn't need to be
      consistent or accurate. We can now also replace some atomic ops with
      regular loads (when under spinlock) and stores (for conversion target
      fields that are always accessed under the master table's spinlock).
      
      We can now also bump SIDTAB_MAX to U32_MAX as we can use the full u32
      range again.
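      The publish/consume pattern the commit relies on can be sketched in portable C11, where release/acquire atomics are the closest userspace analogue of the kernel's smp_store_release()/smp_load_acquire(). The struct and function names below are illustrative, not the real sidtab API, and the real code additionally serializes writers with a spinlock.

      ```c
      #include <assert.h>
      #include <stdatomic.h>

      /* Illustrative miniature of the pattern: a table plus a plain
       * counter published with release/acquire ordering. */
      struct sidtab_sketch {
          unsigned int entries[128];
          _Atomic unsigned int count;   /* number of valid entries */
      };

      /* Writer: fill in the entry first, then release-store the new count,
       * so any reader that observes the count also observes the entry.
       * (Writers are assumed to be serialized, as by the sidtab spinlock.) */
      void sidtab_push(struct sidtab_sketch *t, unsigned int value)
      {
          unsigned int c = atomic_load_explicit(&t->count,
                                                memory_order_relaxed);
          t->entries[c] = value;
          atomic_store_explicit(&t->count, c + 1, memory_order_release);
      }

      /* Reader: acquire-load the count; every entry below it is then
       * guaranteed visible without further barriers. Returns the index
       * of `value`, or -1 if not present. */
      int sidtab_lookup(struct sidtab_sketch *t, unsigned int value)
      {
          unsigned int c = atomic_load_explicit(&t->count,
                                                memory_order_acquire);
          for (unsigned int i = 0; i < c; i++)
              if (t->entries[i] == value)
                  return (int)i;
          return -1;
      }
      ```

      A full RMW atomic (atomic_inc, cmpxchg, etc.) would be wasted here: only the ordering between the entry write and the counter publish matters, which is exactly what the release/acquire pair provides.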
      Suggested-by: Jann Horn <jannh@google.com>
      Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
      Reviewed-by: Jann Horn <jannh@google.com>
      Signed-off-by: Paul Moore <paul@paul-moore.com>
  11. 20 Aug, 2019: 24 commits