1. 02 Apr 2021, 1 commit
  2. 24 Mar 2021, 1 commit
  3. 19 Mar 2021, 4 commits
    • seq_buf: Add seq_buf_terminate() API · f2616c77
      Steven Rostedt (VMware) authored
      In the case that the seq_buf buffer needs to be printed directly, add a way
      to make sure that the buffer is safe to read by forcing a nul terminating
      character at the end of the string, or the last byte of the buffer if the
      string has overflowed.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
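The idea in the commit above can be illustrated with a small userspace sketch (the struct and function names are illustrative, not the kernel's seq_buf API): before reading the buffer as a C string, force a nul terminator after the written data, or at the last byte of the buffer if the writer overflowed it.

```c
#include <stdio.h>
#include <string.h>

/* Userspace analogue of the seq_buf_terminate() idea described above.
 * Names are hypothetical, not the kernel API. */
struct sketch_buf {
	char   buffer[16];
	size_t len;	/* bytes written; may exceed sizeof(buffer) on overflow */
};

static void sketch_buf_terminate(struct sketch_buf *s)
{
	if (s->len < sizeof(s->buffer))
		s->buffer[s->len] = '\0';		/* terminate after the data */
	else
		s->buffer[sizeof(s->buffer) - 1] = '\0'; /* overflowed: clamp to last byte */
}
```

After this call the buffer is always safe to hand to printf-style consumers, whether or not the writer overran it.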
    • ring-buffer: Allow ring_buffer_event_time_stamp() to return time stamp of all events · efe6196a
      Steven Rostedt (VMware) authored
      Currently, ring_buffer_event_time_stamp() only returns an accurate time
      stamp of the event if it has an absolute extended time stamp attached to
      it. To make it more robust, use the event_stamp() in case the event does
      not have an absolute value attached to it.
      
      This will allow ring_buffer_event_time_stamp() to be used in more cases
      than just histograms, and it will also allow histograms to not require
      including absolute values all the time.
      
      Link: https://lkml.kernel.org/r/20210316164113.704830885@goodmis.org
      Reviewed-by: Tom Zanussi <zanussi@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
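The fallback pattern described above can be sketched in userspace C (all names here are illustrative, not the kernel's ring-buffer internals): prefer an absolute extended time stamp attached to the event, and otherwise reconstruct the time from a stamp saved for the buffer plus the event's delta.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical event layout for illustration only. */
struct sketch_event {
	bool     has_abs_ts;	/* absolute extended time stamp present? */
	uint64_t abs_ts;	/* valid only when has_abs_ts is set */
	uint64_t delta;		/* delta from the buffer's saved stamp */
};

static uint64_t sketch_event_time_stamp(const struct sketch_event *e,
					uint64_t saved_stamp)
{
	if (e->has_abs_ts)
		return e->abs_ts;		/* accurate absolute value */
	return saved_stamp + e->delta;		/* fallback: reconstruct it */
}
```

This is why callers no longer need every event to carry an absolute value: the saved stamp covers the common case.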
    • tracing: Pass buffer of event to trigger operations · b47e3302
      Steven Rostedt (VMware) authored
      The ring_buffer_event_time_stamp() is going to be updated to extract the
      time stamp for the event without needing it to be set to have absolute
      values for all events. But to do so, it needs the buffer that the event is
      on as the buffer saves information for the event before it is committed to
      the buffer.
      
      If the trace buffer is disabled, a temporary buffer is used, and there's
      no access to this buffer from the current histogram triggers, even though
      it is passed to the trace event code.
      
      Pass the buffer that the event is on all the way down to the histogram
      triggers.
      
      Link: https://lkml.kernel.org/r/20210316164113.542448131@goodmis.org
      Reviewed-by: Tom Zanussi <zanussi@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
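The plumbing change described above amounts to widening the trigger callback signature so the buffer travels with the event. A minimal sketch, with purely hypothetical types and names rather than the kernel's trigger API:

```c
#include <stddef.h>

/* Illustrative stand-ins for the trace buffer and event. */
struct trig_buffer { int id; };
struct trig_event  { int payload; };

/* The trigger callback now receives the buffer the event is on,
 * which may be the temporary buffer when tracing is disabled. */
typedef int (*trigger_fn)(struct trig_buffer *buf, struct trig_event *ev);

static int hist_trigger(struct trig_buffer *buf, struct trig_event *ev)
{
	/* With the buffer in hand, a trigger can consult per-buffer
	 * state (e.g. a saved time stamp) for this event. */
	return buf->id + ev->payload;
}

static int run_trigger(trigger_fn fn, struct trig_buffer *buf,
		       struct trig_event *ev)
{
	return fn(buf, ev);	/* buffer threaded through each layer */
}
```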
    • workqueue/tracing: Copy workqueue name to buffer in trace event · 83b62687
      Steven Rostedt (VMware) authored
      The trace event "workqueue_queue_work" references an unsafe string in
      dereferencing the name of the workqueue. As the name is allocated, it
      could later be freed, and the pointer to that string could stay on the
      tracing buffer. If the trace buffer is read after the string is freed, it
      will reference an unsafe pointer.
      
      I added a new verifier to make sure that all strings referenced in the
      output of the trace buffer are safe to read, and it triggered on the
      workqueue_queue_work trace event:
      
      workqueue_queue_work: work struct=00000000b2b235c7 function=gc_worker workqueue=(0xffff888100051160:events_power_efficient)[UNSAFE-MEMORY] req_cpu=256 cpu=1
      workqueue_queue_work: work struct=00000000c344caec function=flush_to_ldisc workqueue=(0xffff888100054d60:events_unbound)[UNSAFE-MEMORY] req_cpu=256 cpu=4294967295
      workqueue_queue_work: work struct=00000000b2b235c7 function=gc_worker workqueue=(0xffff888100051160:events_power_efficient)[UNSAFE-MEMORY] req_cpu=256 cpu=1
      workqueue_queue_work: work struct=000000000b238b3f function=vmstat_update workqueue=(0xffff8881000c3760:mm_percpu_wq)[UNSAFE-MEMORY] req_cpu=1 cpu=1
      
      Also, if this event is read via a user space application like perf or
      trace-cmd, the name would only be an address and useless information:
      
      workqueue_queue_work: work struct=0xffff953f80b4b918 function=disk_events_workfn workqueue=ffff953f8005d378 req_cpu=8192 cpu=5
      
      Cc: Zqiang <qiang.zhang@windriver.com>
      Cc: Tejun Heo <tj@kernel.org>
      Fixes: 7bf9c4a8 ("workqueue: tracing the name of the workqueue instead of it's address")
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  4. 14 Mar 2021, 7 commits
  5. 11 Mar 2021, 5 commits
  6. 10 Mar 2021, 7 commits
  7. 09 Mar 2021, 2 commits
  8. 08 Mar 2021, 3 commits
  9. 06 Mar 2021, 2 commits
  10. 05 Mar 2021, 1 commit
    • kernel: provide create_io_thread() helper · cc440e87
      Jens Axboe authored
      Provide a generic helper for setting up an io_uring worker. Returns a
      task_struct so that the caller can do whatever setup is needed, then call
      wake_up_new_task() to kick it into gear.
      
      Add a kernel_clone_args member, io_thread, which tells copy_process() to
      mark the task with PF_IO_WORKER.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
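The create-then-wake pattern described above can be mimicked in userspace with pthreads (this is a loose analogue under assumed semantics, not the kernel's create_io_thread()/wake_up_new_task() implementation): the helper creates a worker that parks until the caller has finished setup and explicitly wakes it.

```c
#include <pthread.h>

/* Hypothetical worker handle for illustration. */
struct sketch_worker {
	pthread_t       tid;
	pthread_mutex_t lock;
	pthread_cond_t  cond;
	int             started;	/* set by the caller when setup is done */
	int             ran;		/* set by the worker once it runs */
};

static void *worker_fn(void *arg)
{
	struct sketch_worker *w = arg;

	pthread_mutex_lock(&w->lock);
	while (!w->started)			/* parked until woken */
		pthread_cond_wait(&w->cond, &w->lock);
	w->ran = 1;				/* stand-in for real work */
	pthread_mutex_unlock(&w->lock);
	return NULL;
}

/* Create the worker parked; the caller does setup, then wakes it. */
static int sketch_create_worker(struct sketch_worker *w)
{
	pthread_mutex_init(&w->lock, NULL);
	pthread_cond_init(&w->cond, NULL);
	w->started = 0;
	w->ran = 0;
	return pthread_create(&w->tid, NULL, worker_fn, w);
}

static void sketch_wake_worker(struct sketch_worker *w)
{
	pthread_mutex_lock(&w->lock);
	w->started = 1;			/* setup done; kick it into gear */
	pthread_cond_signal(&w->cond);
	pthread_mutex_unlock(&w->lock);
}
```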
  11. 04 Mar 2021, 5 commits
  12. 03 Mar 2021, 2 commits
    • swap: fix swapfile read/write offset · caf6912f
      Jens Axboe authored
      We're not factoring in the start of the file for where to write and
      read the swapfile, which leads to very unfortunate side effects of
      writing where we should not be...
      
      Fixes: 48d15436 ("mm: remove get_swap_bio")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
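The arithmetic behind the fix can be sketched as follows (constants and function names are illustrative, not the kernel's swap code): swap I/O must be offset by where the swapfile's data actually starts on the device, not issued from offset zero.

```c
#include <stdint.h>

#define SKETCH_PAGE_SIZE 4096ULL	/* assumed page size for illustration */

/* Buggy form: ignores where the swapfile starts on the device. */
static uint64_t swap_offset_buggy(uint64_t page_index)
{
	return page_index * SKETCH_PAGE_SIZE;
}

/* Fixed form: factor in the swapfile's starting byte offset. */
static uint64_t swap_offset_fixed(uint64_t file_start, uint64_t page_index)
{
	return file_start + page_index * SKETCH_PAGE_SIZE;
}
```

With a nonzero start, the buggy form reads and writes pages belonging to whatever lives before the swapfile, hence the "very unfortunate side effects".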
    • KVM: x86/xen: Add support for vCPU runstate information · 30b5c851
      David Woodhouse authored
      This is how Xen guests do steal time accounting. The hypervisor records
      the amount of time spent in each of running/runnable/blocked/offline
      states.
      
      In the Xen accounting, a vCPU is still in state RUNSTATE_running while
      in Xen for a hypercall or I/O trap, etc. Only if Xen explicitly schedules
      does the state become RUNSTATE_blocked. In KVM this means that even when
      the vCPU exits the kvm_run loop, the state remains RUNSTATE_running.
      
      The VMM can explicitly set the vCPU to RUNSTATE_blocked by using the
      KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT attribute, and can also use
      KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST to retrospectively add a given
      amount of time to the blocked state and subtract it from the running
      state.
      
      The state_entry_time corresponds to get_kvmclock_ns() at the time the
      vCPU entered the current state, and the total times of all four states
      should always add up to state_entry_time.
      Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
      Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Message-Id: <20210301125309.874953-2-dwmw2@infradead.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
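The accounting invariant described above can be sketched in C (the state names mirror Xen's, but the structure and helpers are illustrative, not KVM's uAPI): the four runstate times sum to state_entry_time, and a retrospective adjustment moves time between states without breaking that sum.

```c
#include <stdint.h>

enum { RUNNING, RUNNABLE, BLOCKED, OFFLINE, NR_STATES };

/* Hypothetical runstate record for illustration. */
struct sketch_runstate {
	uint64_t time[NR_STATES];
	uint64_t state_entry_time;	/* clock at entry to current state */
};

static int runstate_consistent(const struct sketch_runstate *rs)
{
	uint64_t sum = 0;

	for (int i = 0; i < NR_STATES; i++)
		sum += rs->time[i];
	return sum == rs->state_entry_time;
}

/* Retrospectively move `delta` ns from running to blocked, in the
 * spirit of the RUNSTATE_ADJUST attribute described above. */
static void runstate_adjust_blocked(struct sketch_runstate *rs, uint64_t delta)
{
	rs->time[RUNNING] -= delta;
	rs->time[BLOCKED] += delta;
}
```

Because the adjustment subtracts from one state exactly what it adds to another, the consistency check holds before and after.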