1. 20 Aug 2021, 2 commits
  2. 18 Jun 2021, 3 commits
  3. 03 Jun 2021, 1 commit
    • sched/fair: Fix util_est UTIL_AVG_UNCHANGED handling · 68d7a190
      Committed by Dietmar Eggemann
      The util_est internal UTIL_AVG_UNCHANGED flag, which is used to prevent
      unnecessary util_est updates, uses the LSB of util_est.enqueued. It is
      exposed via _task_util_est() (and task_util_est()).
      
      Commit 92a801e5 ("sched/fair: Mask UTIL_AVG_UNCHANGED usages")
      mentions that the LSB is lost for util_est resolution but
      find_energy_efficient_cpu() checks if task_util_est() returns 0 to
      return prev_cpu early.
      
      _task_util_est() returns the max value of util_est.ewma and
      util_est.enqueued ORed with UTIL_AVG_UNCHANGED.
      So task_util_est() returning the max of task_util() and
      _task_util_est() will never return 0 under the default
      SCHED_FEAT(UTIL_EST, true).
      
      To fix this use the MSB of util_est.enqueued instead and keep the flag
      util_est internal, i.e. don't export it via _task_util_est().
      
      The maximal possible util_avg value for a task is 1024 so the MSB of
      'unsigned int util_est.enqueued' isn't used to store a util value.
      
      As a caveat, the code behind the util_est_se trace point has to filter
      UTIL_AVG_UNCHANGED to see the real util_est.enqueued value, which should
      be easy to do.
      
      This also fixes an issue reported by Xuewen Yan that util_est_update()
      only used UTIL_AVG_UNCHANGED for the subtrahend of the equation:
      
        last_enqueued_diff = ue.enqueued - (task_util() | UTIL_AVG_UNCHANGED)
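
      A minimal sketch of the resulting flag handling (a hedged illustration
      based on the fair.c/pelt.h helpers named above, not necessarily the exact
      hunks of the real patch):

        /* The flag lives in the MSB, which a util value capped at 1024 never sets. */
        #define UTIL_AVG_UNCHANGED 0x80000000

        static inline unsigned long _task_util_est(struct task_struct *p)
        {
                struct util_est ue = READ_ONCE(p->se.avg.util_est);

                /* Strip the internal flag so callers can see a genuine 0. */
                return max(ue.ewma, (ue.enqueued & ~UTIL_AVG_UNCHANGED));
        }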
      
      Fixes: b89997aa ("sched/pelt: Fix task util_est update filtering")
      Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Xuewen Yan <xuewen.yan@unisoc.com>
      Reviewed-by: Vincent Donnefort <vincent.donnefort@arm.com>
      Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
      Link: https://lore.kernel.org/r/20210602145808.1562603-1-dietmar.eggemann@arm.com
  4. 12 May 2021, 5 commits
  5. 06 May 2021, 1 commit
    • mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN · 1a08ae36
      Committed by Pavel Tatashin
      PF_MEMALLOC_NOCMA is used to guarantee that the allocator will not
      return pages that might belong to a CMA region.  This is currently used
      for long term gup to make sure that such pins are not going to be done
      on any CMA pages.
      
      When PF_MEMALLOC_NOCMA was introduced we hadn't realized that it
      focuses too much on CMA pages and that there is a larger class of
      pages that needs the same treatment.  The MOVABLE zone cannot contain any
      long term pins either, so it makes sense to reuse and redefine this flag
      for that usecase as well.  Rename the flag to PF_MEMALLOC_PIN, which
      defines an allocation context which can only get pages suitable for
      long-term pins.
      
      Also rename memalloc_nocma_save()/memalloc_nocma_restore() to
      memalloc_pin_save()/memalloc_pin_restore() and make the new functions
      common.
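
      A hedged sketch of the renamed helper pair in use (the surrounding
      pinning logic is elided; the comments describe the intended semantics):

        unsigned int flags;

        flags = memalloc_pin_save();     /* was: memalloc_nocma_save()    */
        /*
         * Allocations made in this scope must only return pages that are
         * safe to hold across a long-term pin: no CMA pages today, and no
         * ZONE_MOVABLE pages once the rest of the series lands.
         */
        memalloc_pin_restore(flags);     /* was: memalloc_nocma_restore() */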
      
      [rppt@linux.ibm.com: fix renaming of PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN]
        Link: https://lkml.kernel.org/r/20210331163816.11517-1-rppt@kernel.org
      
      Link: https://lkml.kernel.org/r/20210215161349.246722-6-pasha.tatashin@soleen.com
      Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
      Reviewed-by: John Hubbard <jhubbard@nvidia.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Jason Gunthorpe <jgg@nvidia.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sasha Levin <sashal@kernel.org>
      Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Tyler Hicks <tyhicks@linux.microsoft.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 01 May 2021, 1 commit
  7. 15 Apr 2021, 1 commit
    • signal: Allow tasks to cache one sigqueue struct · 4bad58eb
      Committed by Thomas Gleixner
      The idea for this originates from the real time tree to make signal
      delivery for realtime applications more efficient. In quite a few of these
      application scenarios a control task signals workers to start their
      computations. There is usually only one signal per worker in flight.  This
      works nicely as long as the kmem cache allocations do not hit the slow path
      and cause latencies.
      
      To cure this, an optimistic caching was introduced (limited to RT tasks)
      which allows a task to cache a single sigqueue in a pointer in task_struct
      instead of handing it back to the kmem cache after consuming a signal. When
      the next signal is sent to the task, the cached sigqueue is used
      instead of allocating a new one. This solved the problem for this set of
      application scenarios nicely.
      
      The task cache is not preallocated so the first signal sent to a task always
      goes to the cache allocator. The cached sigqueue stays around until the
      task exits and is freed when task::sighand is dropped.
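
      Conceptually the cache is one extra pointer in task_struct with an
      alloc/free fast path of roughly this shape (a hedged sketch, not the
      actual helpers in the patch):

        /* task_struct gains:  struct sigqueue *sigqueue_cache; */

        static struct sigqueue *sigqueue_cache_get(struct task_struct *t)
        {
                /* Hand out the cached entry, if any, and leave the slot empty. */
                return xchg(&t->sigqueue_cache, NULL);
        }

        static bool sigqueue_cache_put(struct task_struct *t, struct sigqueue *q)
        {
                /* Keep at most one entry; on failure the caller frees q back
                 * to the kmem cache as before. */
                return cmpxchg(&t->sigqueue_cache, NULL, q) == NULL;
        }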
      
      After posting this solution for mainline the discussion came up whether
      this would be useful in general and should not be limited to realtime
      tasks: https://lore.kernel.org/r/m11rcu7nbr.fsf@fess.ebiederm.org
      
      One concern leading to the original limitation was to avoid a large number
      of pointlessly cached sigqueues in alive tasks. The other concern was the
      interaction with RLIMIT_SIGPENDING, as these cached sigqueues are not
      accounted for.
      
      The accounting problem is real, but on the other hand slightly academic.
      After gathering some statistics it turned out that after boot of a regular
      distro install there are less than 10 sigqueues cached in ~1500 tasks.
      
      In case of a 'mass fork and fire signal to child' scenario the extra 80
      bytes of memory per task are well in the noise of the overall memory
      consumption of the fork bomb.
      
      If this should be limited then this would need an extra counter in struct
      user, more atomic instructions and a separate rlimit. Yet another tunable
      which is mostly unused.
      
      The caching is actually used. After boot and a full kernel compile on a
      64CPU machine with make -j128 the number of 'allocations' looks like this:
      
        From slab:	   23996
        From task cache: 52223
      
      I.e. it reduces the number of slab cache operations by ~68%.
      
      A typical pattern there is:
      
      <...>-58490 __sigqueue_alloc:  for 58488 from slab ffff8881132df460
      <...>-58488 __sigqueue_free:   cache ffff8881132df460
      <...>-58488 __sigqueue_alloc:  for 1149 from cache ffff8881103dc550
        bash-1149 exit_task_sighand: free ffff8881132df460
        bash-1149 __sigqueue_free:   cache ffff8881103dc550
      
      The interesting sequence is that the exiting task 58488 grabs the sigqueue
      from bash's task cache to signal exit and bash sticks it back into its own
      cache. Lather, rinse and repeat.
      
      The caching is probably not noticeable for the general use case, but the
      benefit for latency sensitive applications is clear. While kmem caches are
      usually just serving from the fast path, slab merging (the default) can,
      depending on the usage pattern of the merged slabs, cause occasional slow
      path allocations.
      
      The time spared per cached entry is a few microseconds per signal, which is
      not relevant for e.g. a kernel build, but for signal heavy workloads it's
      measurable.
      
      As there is no real downside to this caching mechanism, making it
      unconditionally available is preferred over more conditional code or new
      magic tunables.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Oleg Nesterov <oleg@redhat.com>
      Link: https://lkml.kernel.org/r/87sg4lbmxo.fsf@nanos.tec.linutronix.de
  8. 22 Mar 2021, 1 commit
    • sched: Fix various typos · 3b03706f
      Committed by Ingo Molnar
      Fix ~42 single-word typos in scheduler code comments.
      
      We have accumulated a few fun ones over the years. :-)
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Juri Lelli <juri.lelli@redhat.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ben Segall <bsegall@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: linux-kernel@vger.kernel.org
  9. 17 Mar 2021, 1 commit
  10. 06 Mar 2021, 1 commit
  11. 27 Feb 2021, 1 commit
    • bpf: Enable task local storage for tracing programs · a10787e6
      Committed by Song Liu
      To access per-task data, BPF programs usually create a hash table with
      the pid as the key. This is not ideal because:
       1. The user needs to estimate the proper size of the hash table, which
          may be inaccurate;
       2. Big hash tables are slow;
       3. To clean up the data properly during task terminations, the user
          needs to write extra logic.
      
      Task local storage overcomes these issues and offers a better option for
      such per-task data. Task local storage is only available to BPF_LSM. Now
      enable it for tracing programs.
      
      Unlike LSM programs, tracing programs can be called in IRQ contexts.
      Helpers that access task local storage are updated to use
      raw_spin_lock_irqsave() instead of raw_spin_lock_bh().
      
      Tracing programs can attach to functions on the task free path, e.g.
      exit_creds(). To avoid allocating task local storage after
      bpf_task_storage_free(), bpf_task_storage_get() is updated to not allocate
      new storage when the task is not refcounted (task->usage == 0).
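
      As an illustration, a tracing program can now use task local storage much
      like an LSM program would; this is a hedged sketch (the map name, value
      type and traced function are made up for the example):

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>

        struct {
                __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
                __uint(map_flags, BPF_F_NO_PREALLOC);
                __type(key, int);
                __type(value, __u64);
        } start_ns SEC(".maps");

        SEC("fentry/blk_mq_start_request")      /* illustrative attach point */
        int BPF_PROG(trace_start)
        {
                __u64 *ts;

                /* Per-task slot, created on first use for the current task. */
                ts = bpf_task_storage_get(&start_ns, bpf_get_current_task_btf(),
                                          0, BPF_LOCAL_STORAGE_GET_F_CREATE);
                if (ts)
                        *ts = bpf_ktime_get_ns();
                return 0;
        }

        char LICENSE[] SEC("license") = "GPL";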
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: KP Singh <kpsingh@kernel.org>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Link: https://lore.kernel.org/bpf/20210225234319.336131-2-songliubraving@fb.com
  12. 22 Feb 2021, 1 commit
    • io-wq: fork worker threads from original task · 3bfe6106
      Committed by Jens Axboe
      Instead of using regular kthread kernel threads, create kernel threads
      that are like a real thread that the task would create. This ensures that
      we get all the context that we need, without having to carry that state
      around. This greatly reduces the code complexity, and the risk of missing
      state for a given request type.
      
      With the move away from kthread, we can also dump everything related to
      assigned state to the new threads.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  13. 17 Feb 2021, 2 commits
  14. 04 Feb 2021, 2 commits
  15. 28 Jan 2021, 1 commit
  16. 14 Jan 2021, 1 commit
  17. 23 Dec 2020, 1 commit
  18. 02 Dec 2020, 1 commit
    • kernel: Implement selective syscall userspace redirection · 1446e1df
      Committed by Gabriel Krisman Bertazi
      Introduce a mechanism to quickly disable/enable syscall handling for a
      specific process and redirect to userspace via SIGSYS.  This is useful
      for processes with parts that require syscall redirection and parts that
      don't, but that need to perform this boundary crossing really fast,
      without paying the cost of a system call to reconfigure syscall handling
      on each boundary transition.  This is particularly important for Windows
      games running over Wine.
      
      The proposed interface looks like this:
      
        prctl(PR_SET_SYSCALL_USER_DISPATCH, <op>, <offset>, <length>, [selector])
      
      The range [<offset>,<offset>+<length>) is a part of the process memory
      map that is allowed to bypass the redirection code and dispatch
      syscalls directly, such that in fast paths a process doesn't need to
      disable the trap, nor does the kernel have to check the selector.  This is
      essential to return from SIGSYS to a blocked area without triggering
      another SIGSYS from rt_sigreturn.
      
      selector is an optional pointer to a char-sized userspace memory region
      that holds a key switch for the mechanism. This key switch is set to
      either PR_SYS_DISPATCH_ON or PR_SYS_DISPATCH_OFF to enable or disable the
      redirection without calling the kernel.
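
      A hedged userspace sketch of the interface described above (the helper and
      region variables are illustrative, and the selector constants follow the
      naming in this message, which may differ from the final uapi headers):

        #include <sys/prctl.h>

        /* Illustrative: dispatcher_start/dispatcher_len must cover the code
         * that is allowed to issue syscalls directly (the SIGSYS trampoline). */
        static volatile char selector;

        static int enable_dispatch(void *dispatcher_start, unsigned long dispatcher_len)
        {
                return prctl(PR_SET_SYSCALL_USER_DISPATCH, PR_SYS_DISPATCH_ON,
                             (unsigned long)dispatcher_start, dispatcher_len,
                             &selector);
        }

        /* Fast path, no kernel entry needed:
         *   selector = PR_SYS_DISPATCH_ON;   syscalls outside the region raise SIGSYS
         *   selector = PR_SYS_DISPATCH_OFF;  syscalls run natively
         */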
      
      The feature is meant to be set per-thread and it is disabled on
      fork/clone/execv.
      
      Internally, this doesn't add overhead to the syscall hot path, and it
      requires very little per-architecture support.  I avoided using seccomp,
      even though it duplicates some functionality, due to previous feedback
      that maybe it shouldn't mix with seccomp since it is not a security
      mechanism.  And obviously, this should never be considered a security
      mechanism, since any part of the program can bypass it by using the
      syscall dispatcher.
      
      For the sysinfo benchmark, which measures the overhead added to
      executing a native syscall that doesn't require interception, the
      overhead using only the direct dispatcher region to issue syscalls is
      pretty much irrelevant.  The overhead of using the selector goes around
      40ns for a native (unredirected) syscall in my system, and it is (as
      expected) dominated by the supervisor-mode user-address access.  In
      fact, with SMAP off, the overhead is consistently less than 5ns on my
      test box.
      Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Kees Cook <keescook@chromium.org>
      Link: https://lore.kernel.org/r/20201127193238.821364-4-krisman@collabora.com
  19. 24 Nov 2020, 2 commits
  20. 17 Nov 2020, 2 commits
    • sched/deadline: Fix priority inheritance with multiple scheduling classes · 2279f540
      Committed by Juri Lelli
      Glenn reported that "an application [he developed produces] a BUG in
      deadline.c when a SCHED_DEADLINE task contends with CFS tasks on nested
      PTHREAD_PRIO_INHERIT mutexes.  I believe the bug is triggered when a CFS
      task that was boosted by a SCHED_DEADLINE task boosts another CFS task
      (nested priority inheritance)."
      
       ------------[ cut here ]------------
       kernel BUG at kernel/sched/deadline.c:1462!
       invalid opcode: 0000 [#1] PREEMPT SMP
       CPU: 12 PID: 19171 Comm: dl_boost_bug Tainted: ...
       Hardware name: ...
       RIP: 0010:enqueue_task_dl+0x335/0x910
       Code: ...
       RSP: 0018:ffffc9000c2bbc68 EFLAGS: 00010002
       RAX: 0000000000000009 RBX: ffff888c0af94c00 RCX: ffffffff81e12500
       RDX: 000000000000002e RSI: ffff888c0af94c00 RDI: ffff888c10b22600
       RBP: ffffc9000c2bbd08 R08: 0000000000000009 R09: 0000000000000078
       R10: ffffffff81e12440 R11: ffffffff81e1236c R12: ffff888bc8932600
       R13: ffff888c0af94eb8 R14: ffff888c10b22600 R15: ffff888bc8932600
       FS:  00007fa58ac55700(0000) GS:ffff888c10b00000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: 00007fa58b523230 CR3: 0000000bf44ab003 CR4: 00000000007606e0
       DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
       DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
       PKRU: 55555554
       Call Trace:
        ? intel_pstate_update_util_hwp+0x13/0x170
        rt_mutex_setprio+0x1cc/0x4b0
        task_blocks_on_rt_mutex+0x225/0x260
        rt_spin_lock_slowlock_locked+0xab/0x2d0
        rt_spin_lock_slowlock+0x50/0x80
        hrtimer_grab_expiry_lock+0x20/0x30
        hrtimer_cancel+0x13/0x30
        do_nanosleep+0xa0/0x150
        hrtimer_nanosleep+0xe1/0x230
        ? __hrtimer_init_sleeper+0x60/0x60
        __x64_sys_nanosleep+0x8d/0xa0
        do_syscall_64+0x4a/0x100
        entry_SYSCALL_64_after_hwframe+0x49/0xbe
       RIP: 0033:0x7fa58b52330d
       ...
       ---[ end trace 0000000000000002 ]---
      
      He also provided a simple reproducer creating the situation below:
      
       So the execution order of locking steps is the following
       (N1 and N2 are non-deadline tasks. D1 is a deadline task. M1 and M2
       are mutexes that are enabled with priority inheritance.)
      
       Time moves forward as this timeline goes down:
      
       N1              N2               D1
       |               |                |
       |               |                |
       Lock(M1)        |                |
       |               |                |
       |             Lock(M2)           |
       |               |                |
       |               |              Lock(M2)
       |               |                |
       |             Lock(M1)           |
       |             (!!bug triggered!) |
      
      Daniel reported a similar situation as well, by just letting ksoftirqd
      run with DEADLINE (and eventually block on a mutex).
      
      The problem is that boosted entities (priority inheritance) use the static
      DEADLINE parameters of the top-priority waiter. However, there might be
      cases where the top waiter is a non-DEADLINE entity that is currently
      boosted by a DEADLINE entity from a different lock chain (i.e., nested
      priority chains involving entities of non-DEADLINE classes). In this
      case, the top waiter's static DEADLINE parameters could be null (initialized
      to 0 at fork()) and replenish_dl_entity() would hit a BUG().
      
      Fix this by keeping track of the original donor and using its parameters
      when a task is boosted.
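
      A minimal sketch of that idea (the field and helper names are meant to
      mirror the upstream fix, but treat this as an illustration rather than
      the exact patch):

        /* sched_dl_entity gains:  struct sched_dl_entity *pi_se;
         * pi_se points at dl_se itself when not boosted, otherwise at the
         * original DEADLINE donor whose parameters must be used. */
        static inline bool is_dl_boosted(struct sched_dl_entity *dl_se)
        {
                return dl_se->pi_se != dl_se;
        }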
      Reported-by: Glenn Elliott <glenn@aurora.tech>
      Reported-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Link: https://lkml.kernel.org/r/20201117061432.517340-1-juri.lelli@redhat.com
    • sched: Fix data-race in wakeup · f97bb527
      Committed by Peter Zijlstra
      Mel reported that on some ARM64 platforms loadavg goes bananas and
      Will tracked it down to the following race:
      
        CPU0					CPU1
      
        schedule()
          prev->sched_contributes_to_load = X;
          deactivate_task(prev);
      
      					try_to_wake_up()
      					  if (p->on_rq &&) // false
      					  if (smp_load_acquire(&p->on_cpu) && // true
      					      ttwu_queue_wakelist())
      					        p->sched_remote_wakeup = Y;
      
          smp_store_release(prev->on_cpu, 0);
      
      where both p->sched_contributes_to_load and p->sched_remote_wakeup are
      in the same word, and thus the stores X and Y race (and can clobber
      one another's data).
      
      Whereas prior to commit c6e7bd7a ("sched/core: Optimize ttwu()
      spinning on p->on_cpu") the p->on_cpu handoff serialized access to
      p->sched_remote_wakeup (just as it still does with
      p->sched_contributes_to_load), that commit broke this by calling
      ttwu_queue_wakelist() with p->on_cpu != 0.
      
      However, due to
      
        p->XXX = X			ttwu()
        schedule()			  if (p->on_rq && ...) // false
          smp_mb__after_spinlock()	  if (smp_load_acquire(&p->on_cpu) &&
          deactivate_task()		      ttwu_queue_wakelist())
            p->on_rq = 0;		        p->sched_remote_wakeup = Y;
      
      We can be sure any 'current' store is complete and 'current' is
      guaranteed asleep. Therefore we can move p->sched_remote_wakeup into
      the current flags word.
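
      A hedged sketch of the resulting task_struct bit layout (comments are
      illustrative; the exact surrounding fields differ in the real tree):

        /* Scheduler word: written from schedule() with ->on_cpu still set. */
        unsigned                        sched_contributes_to_load:1;
        unsigned                        sched_migrated:1;

        /* Force alignment to the next boundary: */
        unsigned                        :0;

        /* 'Current' word: bits written only by current, or - as for
         * sched_remote_wakeup - by a remote ttwu() once the task is
         * guaranteed asleep and all 'current' stores are visible. */
        unsigned                        sched_remote_wakeup:1;
        unsigned                        in_execve:1;
        unsigned                        in_iowait:1;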
      
      Note: while the observed failure was loadavg accounting gone wrong due
      to ttwu() clobbering p->sched_contributes_to_load, the reverse problem
      is also possible where schedule() clobbers p->sched_remote_wakeup;
      this could result in enqueue_entity() wrecking ->vruntime and causing
      scheduling artifacts.
      
      Fixes: c6e7bd7a ("sched/core: Optimize ttwu() spinning on p->on_cpu")
      Reported-by: Mel Gorman <mgorman@techsingularity.net>
      Debugged-by: Will Deacon <will@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20201117083016.GK3121392@hirez.programming.kicks-ass.net
  21. 11 Nov 2020, 3 commits
  22. 17 Oct 2020, 1 commit
  23. 14 Oct 2020, 1 commit
  24. 13 Oct 2020, 1 commit
  25. 07 Oct 2020, 1 commit
    • x86/mce: Recover from poison found while copying from user space · c0ab7ffc
      Committed by Tony Luck
      Existing kernel code can only recover from a machine check on code that
      is tagged in the exception table with a fault handling recovery path.
      
      Add two new fields in the task structure to pass information from the
      machine check handler to the "task_work" that is queued to run before
      the task returns to user mode (see the sketch after this list):
      
      + mce_vaddr: will be initialized to the user virtual address of the fault
        in the case where the fault occurred in the kernel copying data from
        a user address.  This is so that kill_me_maybe() can provide that
        information to the user SIGBUS handler.
      
      + mce_kflags: copy of the struct mce.kflags needed by kill_me_maybe()
        to determine if mce_vaddr is applicable to this error.
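
      A hedged sketch of how the two fields would sit in task_struct (the
      config guard and comments are assumptions of this sketch):

        #ifdef CONFIG_X86_MCE
                void __user             *mce_vaddr;    /* user address hit while copying */
                __u64                   mce_kflags;    /* copy of struct mce.kflags      */
        #endif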
      
      Add code to recover from a machine check while copying data from user
      space to the kernel. Action for this case is the same as if the user
      touched the poison directly; unmap the page and send a SIGBUS to the task.
      
      Use a new helper function to share common code between the "fault
      in user mode" case and the "fault while copying from user" case.
      
      New code paths will be activated by the next patch which sets
      MCE_IN_KERNEL_COPYIN.
      Suggested-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: https://lkml.kernel.org/r/20201006210910.21062-6-tony.luck@intel.com
  26. 03 Oct 2020, 1 commit
  27. 01 Oct 2020, 1 commit
    • io_uring: don't rely on weak ->files references · 0f212204
      Committed by Jens Axboe
      Grab actual references to the files_struct. To avoid circular reference
      issues due to this, we add a per-task note that keeps track of what
      io_uring contexts a task has used. When the task execs or exits its
      assigned files, we cancel requests based on this tracking.
      
      With that, we can grab proper references to the files table, and no
      longer need to rely on stashing away ring_fd and ring_file to check
      if the ring_fd may have been closed.
      
      Cc: stable@vger.kernel.org # v5.5+
      Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>