1. 24 Apr 2020 (2 commits)
  2. 08 Apr 2020 (25 commits)
  3. 03 Apr 2020 (2 commits)
  4. 27 Mar 2020 (3 commits)
  5. 26 Mar 2020 (2 commits)
    • Fix linked-list KUnit test when run multiple times · cb88577b
      Committed by David Gow
      A few of the lists used in the linked-list KUnit tests (the
      for_each_entry{,_reverse} tests) are declared 'static', and so are
      not reinitialised if the test runs multiple times. This was not a
      problem when KUnit tests were run once on startup, but when tests can
      be run manually (e.g. from debugfs[1]), this is no longer the case.
      
      Making these lists no longer 'static' causes them to be reinitialised,
      so the test passes each time it is run. While there may be some value
      in testing that initialising static lists works, the for_each_entry_*
      tests are unlikely to be the right place for it (see the sketch after
      this entry).
      Signed-off-by: David Gow <davidgow@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
      cb88577b
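      A minimal sketch of the kind of change described above; the element type
      and test name (list_test_struct, list_test_for_each_entry_sketch) are
      made up for illustration and are not the actual kernel test:

        #include <kunit/test.h>
        #include <linux/list.h>

        /* Hypothetical element type, for illustration only. */
        struct list_test_struct {
        	int data;
        	struct list_head list;
        };

        static void list_test_for_each_entry_sketch(struct kunit *test)
        {
        	struct list_test_struct entries[3], *cur;
        	/*
        	 * Before the fix this was 'static LIST_HEAD(list);', so nodes
        	 * added by a previous run were still linked in on the next run.
        	 * A plain local LIST_HEAD() is reinitialised on every invocation.
        	 */
        	LIST_HEAD(list);
        	int i;

        	for (i = 0; i < 3; i++) {
        		entries[i].data = i;
        		list_add_tail(&entries[i].list, &list);
        	}

        	i = 0;
        	list_for_each_entry(cur, &list, list)
        		KUNIT_EXPECT_EQ(test, cur->data, i++);

        	KUNIT_EXPECT_EQ(test, i, 3);
        }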
    • kunit: Always print actual pointer values in asserts · 2d68df6c
      Committed by David Gow
      KUnit assertions and expectations print the values being tested. If
      these are pointers (e.g., KUNIT_EXPECT_PTR_EQ(test, a, b)), they are
      currently printed with the %pK format specifier which, to prevent
      information leaks that could compromise e.g. ASLR, typically hashes
      the value or replaces it with ____ptrval____ or similar, making
      debugging tests difficult.
      
      By replacing %pK with %px, as Documentation/core-api/printk-formats.rst
      suggests, we disable this security feature for KUnit assertions and
      expectations, allowing the actual pointer values to be printed (a
      sketch follows this entry). Given that KUnit is not intended for use
      in production kernels, and the pointers are only printed on failing
      tests, this seems like a worthwhile tradeoff.
      Signed-off-by: David Gow <davidgow@google.com>
      Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
      Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
      2d68df6c
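      A minimal sketch of the idea, not the actual KUnit formatting code; the
      function and buffer names below are made up:

        #include <linux/kernel.h>

        /*
         * Illustrative only: how a pointer-comparison failure message might
         * be built. The point is the format specifier: %pK hashes or hides
         * the value, while %px prints the raw pointer so a failing test can
         * be debugged.
         */
        static void sketch_format_ptr_failure(char *buf, size_t len,
        				      const void *left, const void *right)
        {
        	/* Before: "%pK == %pK" printed "____ptrval____ == ____ptrval____" */
        	snprintf(buf, len, "Expected %px == %px", left, right);
        }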
  6. 25 Mar 2020 (2 commits)
  7. 24 Mar 2020 (1 commit)
  8. 21 Mar 2020 (2 commits)
    • lockdep: Introduce wait-type checks · de8f5e4f
      Committed by Peter Zijlstra
      Extend lockdep to validate lock wait-type context.
      
      The current wait-types are:
      
      	LD_WAIT_FREE,		/* wait free, rcu etc.. */
      	LD_WAIT_SPIN,		/* spin loops, raw_spinlock_t etc.. */
      	LD_WAIT_CONFIG,		/* CONFIG_PREEMPT_LOCK, spinlock_t etc.. */
      	LD_WAIT_SLEEP,		/* sleeping locks, mutex_t etc.. */
      
      Where lockdep validates that the current lock (the one being acquired)
      fits in the current wait-context (as generated by the held stack).
      
      This ensures that there is no attempt to acquire mutexes while holding
      spinlocks, or to acquire spinlocks while holding raw_spinlocks, and so
      on. In other words, it is a fancier might_sleep().
      
      Obviously RCU made the entire ordeal more complex than a simple single
      value test, because RCU can be acquired in (pretty much) any context,
      and while it presents a context to nested locks, that context is not
      the same as the one it was acquired in.
      
      Therefore it is necessary to split the wait_type into two values, one
      representing the acquire (outer) and one representing the nested context
      (inner). For most 'normal' locks these two are the same.
      
      [ To make static initialization easier we have the rule that:
        .outer == INV means .outer == .inner; because INV == 0. ]
      
      It further means that it is necessary to find the minimal .inner of the
      held stack to compare against the .outer of the new lock; because while
      'normal' RCU presents a CONFIG type to nested locks, if it is taken
      while already holding a SPIN type it obviously doesn't relax the rules.
      
      Below is an example output generated by the trivial test code (a
      buildable sketch of the same pattern follows this entry):
      
        raw_spin_lock(&foo);
        spin_lock(&bar);
        spin_unlock(&bar);
        raw_spin_unlock(&foo);
      
       [ BUG: Invalid wait context ]
       -----------------------------
       swapper/0/1 is trying to lock:
       ffffc90000013f20 (&bar){....}-{3:3}, at: kernel_init+0xdb/0x187
       other info that might help us debug this:
       1 lock held by swapper/0/1:
        #0: ffffc90000013ee0 (&foo){+.+.}-{2:2}, at: kernel_init+0xd1/0x187
      
      The way to read it is to look at the new -{n:m} part in the lock
      description; -{3:3} for the attempted lock, and try to match that up to
      the held locks, which in this case is the one: -{2:2}.
      
      This tells us that the acquiring lock requires a more relaxed
      environment than is presented by the lock stack.
      
      Currently only the normal locks and RCU are converted; the rest of the
      lockdep users default to .inner = INV, which is ignored. More
      conversions can be done when desired.
      
      The check for spinlock_t nesting is not enabled by default. It's a
      separate config option for now, as there are known problems which are
      currently being addressed. The config option allows these problems to
      be identified and the solutions found for them to be verified.
      
      The config switch will be removed and the checks permanently enabled
      once the vast majority of issues have been addressed.
      
      [ bigeasy: Move LD_WAIT_FREE,… out of CONFIG_LOCKDEP to avoid compile
      	   failure with CONFIG_DEBUG_SPINLOCK + !CONFIG_LOCKDEP]
      [ tglx: Add the config option ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20200321113242.427089655@linutronix.de
      de8f5e4f
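      A minimal, self-contained sketch of the flagged pattern from the
      example above (the lock and function names are illustrative; this is
      not the exact test code from the patch):

        #include <linux/spinlock.h>

        static DEFINE_RAW_SPINLOCK(foo);
        static DEFINE_SPINLOCK(bar);

        /*
         * With the (opt-in) spinlock_t nesting check enabled, acquiring a
         * spinlock_t (wait-type LD_WAIT_CONFIG) while holding a
         * raw_spinlock_t (wait-type LD_WAIT_SPIN) produces the
         * "BUG: Invalid wait context" splat shown above: the new lock's
         * wait-type is more relaxed than the innermost wait-type already held.
         */
        static void invalid_wait_context_sketch(void)
        {
        	raw_spin_lock(&foo);
        	spin_lock(&bar);	/* CONFIG-type lock inside a SPIN-type context */
        	spin_unlock(&bar);
        	raw_spin_unlock(&foo);
        }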
    • lib/vdso: Enable common headers · 8c59ab83
      Committed by Vincenzo Frascino
      The vDSO library should only include the headers required for a
      userspace library (UAPI and a minimal set of kernel headers). To make
      this possible, the common parts that are strictly necessary to build
      the library are isolated from the kernel headers.
      
      Refactor the unified vDSO code to use the common headers (an
      illustrative sketch follows this entry).
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20200320145351.32292-26-vincenzo.frascino@arm.com
      8c59ab83
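      An illustrative sketch of the resulting include discipline, assuming
      the common header layout under include/vdso/; the function below is
      made up and is not code from the series:

        /* Generic vDSO code pulls in only UAPI and common vdso/ headers. */
        #include <vdso/datapage.h>

        static inline u64 vdso_sketch_read_cycle_last(const struct vdso_data *vd)
        {
        	/* Only the data-page layout shared with userspace is visible. */
        	return vd->cycle_last;
        }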
  9. 20 Mar 2020 (1 commit)