1. 01 Dec 2012, 2 commits
    • ring-buffer: Fix race between integrity check and readers · 9366c1ba
      Committed by Steven Rostedt
      The function rb_check_pages() was added to make sure the ring buffer's
      pages are sane. This check is done when the ring buffer size is modified
      as well as when the iterator is released (closing the "trace" file),
      as those are not fast paths and are good places to do a sanity
      check.
      
      The problem is that the check does not have any locks around it.
      If one process were to read the trace file, and another were to read
      the raw binary file, the check could happen while the reader is reading
      the file.
      
      The issue with this is that the check requires clearing the HEAD page
      before doing the full check, and it restores it afterward. But readers
      require the HEAD page to exist before they can read the buffer; otherwise
      they give a nasty warning and disable the buffer.
      
      Adding the reader lock around the check keeps this race from
      happening.
      
      Cc: stable@vger.kernel.org # 3.6
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9366c1ba
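
      A minimal sketch of the shape of the fix, assuming the per-cpu
      buffer's reader_lock that the rest of ring_buffer.c uses (the wrapper
      name here is hypothetical, not the verbatim patch):

          /* Serialize the sanity check against readers, so a reader can
           * never observe the temporarily cleared HEAD flag. */
          static void rb_check_pages_locked(struct ring_buffer_per_cpu *cpu_buffer)
          {
                  unsigned long flags;

                  raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
                  rb_check_pages(cpu_buffer);
                  raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
          }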
    • ring-buffer: Fix NULL pointer if rb_set_head_page() fails · 54f7be5b
      Committed by Steven Rostedt
      The function rb_set_head_page() searches the list of ring buffer
      pages for the page that has the HEAD page flag set. If it does
      not find it, it will do a WARN_ON(), disable the ring buffer and
      return NULL, as this should never happen.
      
      But if this bug does happen, not all callers of this function
      can handle the NULL pointer being returned from it. That needs to be
      fixed.
      
      Cc: stable@vger.kernel.org # 3.0+
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      54f7be5b
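
      A sketch of the defensive pattern this fix adds at the call sites
      (the surrounding caller shape is illustrative):

          struct buffer_page *head;

          head = rb_set_head_page(cpu_buffer);
          if (!head)              /* buffer already disabled by the WARN_ON path */
                  return NULL;    /* bail out instead of dereferencing NULL */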
  2. 02 Nov 2012, 1 commit
  3. 01 Nov 2012, 2 commits
  4. 12 Oct 2012, 1 commit
  5. 07 Aug 2012, 1 commit
  6. 30 Jun 2012, 2 commits
  7. 29 Jun 2012, 1 commit
    • ring-buffer: Fix uninitialized read_stamp · a5fb8331
      Committed by Steven Rostedt
      The ring buffer reader page is used to swap a page from the writable
      ring buffer. If the writer happens to be on that page, it ends up on the
      reader page, but will simply move off of it, back into the writable ring
      buffer as writes are added.
      
      The time stamp passed back to the readers is stored in the cpu_buffer per
      CPU descriptor. This stamp is updated when a swap of the reader page takes
      place, reading the current stamp from the page taken out of the writable
      ring buffer. Every time a writer goes to a new page, it updates the time
      stamp of that page.
      
      The problem happens if a reader reads a page from an empty per-CPU ring
      buffer. If the buffer is empty, the swap still takes place, placing the
      writer at the start of the reader page. If a write happens at a later
      time, it updates the page's time stamp and continues. But the problem is
      that the read_stamp does not get updated, because the page was already
      swapped.
      
      The solution was to not swap the page if the ring buffer happens to
      be empty. This also removes the side effect that writes left on the
      reader page never get updated, because the writer never gets back onto
      the reader page without a swap. That is, suppose a read happens on an
      empty buffer, but then no reads happen for a while. If a swap took place
      and the writer were to start writing a lot of data (function tracer), it
      would start overflowing the ring buffer and overwriting the older data.
      But because the writer never goes back onto the reader page, the data
      left on the reader page never gets overwritten. This causes the reader
      to see really old data, followed by a jump to newer data.
      
      Link: http://lkml.kernel.org/r/1340060577-9112-1-git-send-email-dhsharp@google.com
      Google-Bug-Id: 6410455
      Reported-by: David Sharp <dhsharp@google.com>
      Tested-by: David Sharp <dhsharp@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a5fb8331
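
      A sketch of the guard in the reader-page swap path; rb_num_of_entries()
      is a plausible emptiness test from ring_buffer.c, but treat the exact
      placement as illustrative:

          /* In rb_get_reader_page(): don't bother swapping if the buffer
           * is empty, otherwise read_stamp goes stale once the writer
           * lands on the swapped-in page. */
          if (rb_num_of_entries(cpu_buffer) == 0)
                  goto out;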
  8. 24 May 2012, 1 commit
    • ring-buffer: Check for valid buffer before changing size · 6a31e1f1
      Committed by Steven Rostedt
      On some machines the number of possible CPUs is not the same as the
      number of CPUs actually on the machine. Ftrace uses possible_cpus to
      update the tracing structures, but the ring buffer only allocates
      per-cpu buffers for online CPUs when they come up.
      
      When the wakeup tracer was enabled in such a case, the ftrace code
      enabled all possible cpu buffers, but the code in ring_buffer_resize()
      did not check whether the buffer in question was allocated. Since the
      boot-up CPUs did not match the possible CPUs, it caused the following
      crash:
      
      BUG: unable to handle kernel NULL pointer dereference at 00000020
      IP: [<c1097851>] ring_buffer_resize+0x16a/0x28d
      *pde = 00000000
      Oops: 0000 [#1] PREEMPT SMP
      Dumping ftrace buffer:
         (ftrace buffer empty)
      Modules linked in: [last unloaded: scsi_wait_scan]
      
      Pid: 1387, comm: bash Not tainted 3.4.0-test+ #13                  /DG965MQ
      EIP: 0060:[<c1097851>] EFLAGS: 00010217 CPU: 0
      EIP is at ring_buffer_resize+0x16a/0x28d
      EAX: f5a14340 EBX: f6026b80 ECX: 00000ff4 EDX: 00000ff3
      ESI: 00000000 EDI: 00000002 EBP: f4275ecc ESP: f4275eb0
       DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
      CR0: 80050033 CR2: 00000020 CR3: 34396000 CR4: 000007d0
      DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
      DR6: ffff0ff0 DR7: 00000400
      Process bash (pid: 1387, ti=f4274000 task=f4380cb0 task.ti=f4274000)
      Stack:
       c109cf9a f6026b98 00000162 00160f68 00000006 00160f68 00000002 f4275ef0
       c109d013 f4275ee8 c123b72a c1c0bf00 c1cc81dc 00000005 f4275f98 00000007
       f4275f70 c109d0c7 7700000e 75656b61 00000070 f5e90900 f5c4e198 00000301
      Call Trace:
       [<c109cf9a>] ? tracing_set_tracer+0x115/0x1e9
       [<c109d013>] tracing_set_tracer+0x18e/0x1e9
       [<c123b72a>] ? _copy_from_user+0x30/0x46
       [<c109d0c7>] tracing_set_trace_write+0x59/0x7f
       [<c10ec01e>] ? fput+0x18/0x1c6
       [<c11f8732>] ? security_file_permission+0x27/0x2b
       [<c10eaacd>] ? rw_verify_area+0xcf/0xf2
       [<c10ec01e>] ? fput+0x18/0x1c6
       [<c109d06e>] ? tracing_set_tracer+0x1e9/0x1e9
       [<c10ead77>] vfs_write+0x8b/0xe3
       [<c10ebead>] ? fget_light+0x30/0x81
       [<c10eaf54>] sys_write+0x42/0x63
       [<c1834fbf>] sysenter_do_call+0x12/0x28
      
      This happens with the latency tracer, as the ftrace code updates the
      saved max buffer via its cpumask and not with a global setting.
      
      Adding a check in ring_buffer_resize() to make sure the buffer being
      resized exists fixes the problem.
      
      Cc: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      6a31e1f1
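
      A sketch of the added guard, assuming the resize loop iterates over
      per-cpu buffers (control flow illustrative):

          /* In ring_buffer_resize(): skip per-cpu buffers that were never
           * allocated, i.e. possible CPUs that have not come online. */
          if (!buffer->buffers[cpu])
                  continue;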
  9. 19 May 2012, 1 commit
  10. 17 May 2012, 4 commits
    • ring-buffer: Reset head page before running self test · 308f7eeb
      Committed by Steven Rostedt
      When the ring buffer does its consistency test on itself, it
      removes the head page, runs the tests, and then adds it back
      to where the "head_page" pointer was. But because the head_page
      pointer may lag behind the real head page (held by the linked
      list pointer), the reset may be incorrect.
      
      Instead, if the head_page exists (it does not on first allocation),
      reset it back to the real head page before running the consistency
      tests. It is then put back to its original location after the tests
      are complete.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      308f7eeb
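
      A sketch of the reset, placed at the start of the consistency test
      (assumes the head_page field named in the message):

          /* Not the first allocation: realign the head_page pointer with
           * the real head before clearing it for the test. */
          if (cpu_buffer->head_page)
                  rb_set_head_page(cpu_buffer);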
    • ring-buffer: Add integrity check at end of iter read · 659f451f
      Committed by Steven Rostedt
      There used to be ring buffer integrity checks after updating the
      size of the ring buffer. But now that the ring buffer can modify
      the size while the system is running, the integrity checks were
      removed, as they require the ring buffer to be disabled to perform
      the check.
      
      Move the integrity check to the reading of the ring buffer via the
      iterator reads (the "trace" file). As reading via an iterator requires
      disabling the ring buffer, it is a perfect place to have it.
      
      If the ring buffer happens to be disabled when updating the size,
      we still perform the integrity check.
      
      Cc: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      659f451f
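
      A sketch of where the check now lives; the teardown details are
      elided and illustrative:

          void ring_buffer_read_finish(struct ring_buffer_iter *iter)
          {
                  struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;

                  /* The iterator keeps the buffer disabled, so this is a
                   * safe point to verify the page list. */
                  rb_check_pages(cpu_buffer);

                  atomic_dec(&cpu_buffer->record_disabled);
                  /* ... existing teardown ... */
          }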
    • ring-buffer: Make addition of pages in ring buffer atomic · 5040b4b7
      Committed by Vaibhav Nagarnaik
      This patch adds the capability to add new pages to a ring buffer
      atomically while write operations are going on. This makes it possible
      to expand the ring buffer size without reinitializing the ring buffer.
      
      The new pages are attached between the head page and its previous page.
      
      Link: http://lkml.kernel.org/r/1336096792-25373-2-git-send-email-vnagarnaik@google.com
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Laurent Chavey <chavey@google.com>
      Cc: Justin Teravest <teravest@google.com>
      Cc: David Sharp <dhsharp@google.com>
      Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5040b4b7
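
      A conceptual sketch of the splice (variable names hypothetical; the
      real code must also preserve the HEAD flag bits encoded in the list
      pointers):

          struct buffer_page *head = rb_set_head_page(cpu_buffer);
          struct list_head *prev = head->list.prev;

          /* point the new sublist at its future neighbors */
          new_first->list.prev = prev;
          new_last->list.next  = &head->list;

          /* atomically swing prev->next from the head to the new pages;
           * if a writer moved things meanwhile, the cmpxchg fails and
           * the operation is retried */
          if (cmpxchg(&prev->next, &head->list, &new_first->list) == &head->list)
                  head->list.prev = &new_last->list;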
    • ring-buffer: Make removal of ring buffer pages atomic · 83f40318
      Committed by Vaibhav Nagarnaik
      This patch adds the capability to remove pages from a ring buffer
      without destroying any existing data in it.
      
      This is done by removing the pages after the tail page, which ensures
      that the empty pages in the ring buffer are removed first. If the
      head page is among the pages to be removed, then the page after the
      removed ones is made the head page. This removes the oldest data
      from the ring buffer and keeps the latest data around to be read.
      
      To do this in a non-racy manner, tracing is stopped for a very short
      time while the pages to be removed are identified and unlinked from the
      ring buffer. The pages are freed after tracing is restarted, to
      minimize the time needed to stop tracing.
      
      The removal of pages from a per-cpu ring buffer runs on the respective
      CPU. This limits the contexts whose events go untraced to NMI trace
      contexts only.
      
      Link: http://lkml.kernel.org/r/1336096792-25373-1-git-send-email-vnagarnaik@google.com
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Laurent Chavey <chavey@google.com>
      Cc: Justin Teravest <teravest@google.com>
      Cc: David Sharp <dhsharp@google.com>
      Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      83f40318
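
      A sketch of the removal sequence described above (the helpers are
      hypothetical stand-ins for the identify/unlink/free steps):

          atomic_inc(&cpu_buffer->record_disabled);      /* stop tracing    */
          to_remove = rb_pick_pages_after_tail(cpu_buffer, nr_pages);
          rb_unlink_pages(cpu_buffer, to_remove);
          atomic_dec(&cpu_buffer->record_disabled);      /* tracing resumes */
          rb_free_pages(to_remove);    /* free outside the stopped window */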
  11. 24 Apr 2012, 1 commit
  12. 23 Feb 2012, 1 commit
    • tracing/ring-buffer: Only have tracing_on disable tracing buffers · 499e5470
      Committed by Steven Rostedt
      As the ring-buffer code is being used by other facilities in the
      kernel, having the tracing_on file disable *all* buffers is not a
      desired effect. It should only disable the ftrace buffers that are
      being used.
      
      Move the code into the trace.c file and use the buffer disabling
      for tracing_on() and tracing_off(). This way only the ftrace buffers
      will be affected by them, and other kernel utilities will not be
      confused as to why their output suddenly stopped.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      499e5470
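
      A sketch of the resulting split, assuming the ring_buffer_record_off()
      style of interface this change pairs with (body abridged):

          void tracing_off(void)
          {
                  /* Flip only the ftrace buffer; other ring buffer users
                   * in the kernel keep recording. */
                  if (global_trace.buffer)
                          ring_buffer_record_off(global_trace.buffer);
          }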
  13. 13 Sep 2011, 1 commit
  14. 31 Aug 2011, 1 commit
    • trace: Add ring buffer stats to measure rate of events · c64e148a
      Committed by Vaibhav Nagarnaik
      The stats file under per_cpu folder provides the number of entries,
      overruns and other statistics about the CPU ring buffer. However, the
      numbers do not provide any indication of how full the ring buffer is in
      bytes compared to the overall size in bytes. Also, it is helpful to know
      the rate at which the cpu buffer is filling up.
      
      This patch adds an entry "bytes: " to the printed stats for the per_cpu
      ring buffer, which provides the actual number of bytes consumed in the
      ring buffer. This field includes the bytes used by recorded events and
      the padding bytes added when moving the tail pointer to the next page.
      
      It also adds the following time stamps:
      "oldest event ts:" - the oldest timestamp in the ring buffer
      "now ts:"  - the timestamp at the time of reading
      
      The field "now ts" provides a consistent time snapshot to the userspace
      when being read. This is read from the same trace clock used by tracing
      event timestamps.
      
      Together, these values provide the rate at which the buffer is filling
      up, from the formula:
      bytes / (now_ts - oldest_event_ts)
      Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Cc: Michael Rubin <mrubin@google.com>
      Cc: David Sharp <dhsharp@google.com>
      Link: http://lkml.kernel.org/r/1313531179-9323-3-git-send-email-vnagarnaik@google.com
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c64e148a
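
      A userspace sketch of the rate computation from the new fields
      (parsing of the stats file elided; variable names illustrative):

          unsigned long long bytes, oldest_ts, now_ts;

          /* ... parse "bytes:", "oldest event ts:" and "now ts:" from
           * trace/per_cpu/cpu0/stats ... */
          double rate_bytes_per_ns =
                  (double)bytes / (double)(now_ts - oldest_ts);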
  15. 15 Jun 2011, 3 commits
  16. 26 May 2011, 1 commit
    • ftrace: Add internal recursive checks · b1cff0ad
      Committed by Steven Rostedt
      Witold reported a reboot caused by the selftests of the dynamic function
      tracer. He sent me a config and I used ktest to do a config_bisect on it
      (as my config did not cause the crash). It pointed out that the problem
      config was CONFIG_PROVE_RCU.
      
      What happened was that if multiple callbacks are attached to the
      function tracer, we iterate a list of callbacks. Because the list is
      managed by synchronize_sched() and preempt_disable, the access to the
      pointers uses rcu_dereference_raw().
      
      When PROVE_RCU is enabled, the rcu_dereference_raw() calls some
      debugging functions, which happen to be traced. The tracing of the debug
      function would then call rcu_dereference_raw() which would then call the
      debug function and then... well you get the idea.
      
      I first wrote two different patches to solve this bug.
      
      1) add a __rcu_dereference_raw() that would not do any checks.
      2) add notrace to the offending debug functions.
      
      Both of these patches worked.
      
      Talking with Paul McKenney on IRC, he suggested adding recursion
      detection instead. This seemed to be a better solution, so I decided to
      implement it. As the task_struct already has a trace_recursion field to
      detect recursion in the ring buffer, and it allows only a very small
      count, I decided to use that same variable to add flags that can detect
      recursion inside the infrastructure of the function tracer.
      
      I plan to change it so that the task struct bit can be checked in
      mcount, but as that requires changes to all archs, I will hold that off
      to the next merge window.
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1306348063.1465.116.camel@gandalf.stny.rr.com
      Reported-by: Witold Baryluk <baryluk@smp.if.uj.edu.pl>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b1cff0ad
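
      A sketch of the guard, using the trace_recursion flag helpers this
      commit introduces (the traced body is a hypothetical placeholder):

          if (trace_recursion_test(TRACE_INTERNAL_BIT))
                  return;                 /* already inside: bail out */

          trace_recursion_set(TRACE_INTERNAL_BIT);
          call_function_trace_callbacks(ip, parent_ip);  /* hypothetical */
          trace_recursion_clear(TRACE_INTERNAL_BIT);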
  17. 31 Mar 2011, 1 commit
  18. 10 Mar 2011, 3 commits
  19. 18 Feb 2011, 1 commit
  20. 09 Feb 2011, 1 commit
    • tracing: Add unstable sched clock note to the warning · 5e38ca8f
      Committed by Jiri Olsa
      The warning "Delta way too big" warning might appear on a system with
      unstable shed clock right after the system is resumed and tracing
      was enabled during the suspend.
      
      Since it's not realy bug, and the unstable sched clock is working
      fast and reliable otherwise, Steven suggested to keep using the
      sched clock in any case and just to make note in the warning itself.
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      LKML-Reference: <1296649698-6003-1-git-send-email-jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5e38ca8f
  21. 19 Jan 2011, 1 commit
  22. 24 Dec 2010, 1 commit
    • ring_buffer: Off-by-one and duplicate events in ring_buffer_read_page · e1e35927
      Committed by David Sharp
      Fix two related problems in the event-copying loop of
      ring_buffer_read_page.
      
      The loop condition for copying events is off-by-one.
      "len" is the remaining space in the caller-supplied page.
      "size" is the size of the next event (or two events).
      If len == size, then there is just enough space for the next event.
      
      size was set to rb_event_ts_length, which may include the size of two
      events if the first event is a time-extend, in order to assure
      time-extends are kept together with the event after it. However,
      rb_advance_reader always advances by one event. This would result in the
      event after any time-extend being duplicated. Instead, get the size of
      a single event for the memcpy, but use rb_event_ts_length for the loop
      condition.
      Signed-off-by: David Sharp <dhsharp@google.com>
      LKML-Reference: <1293064704-8101-1-git-send-email-dhsharp@google.com>
      LKML-Reference: <AANLkTin7nLrRPc9qGjdjHbeVDDWiJjAiYyb-L=gH85bx@mail.gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e1e35927
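
      A sketch of the corrected loop shape (buffer bookkeeping simplified
      and names illustrative):

          size = rb_event_ts_length(event);   /* may span extend + event */
          while (len >= size) {               /* ">=": an exact fit is enough */
                  /* copy a single event's length ... */
                  size = rb_event_length(event);
                  memcpy(to + pos, rb_event_data(event), size);
                  pos += size;
                  len -= size;

                  /* ... but advance one event and re-test with ts length */
                  rb_advance_reader(cpu_buffer);
                  event = rb_reader_event(cpu_buffer);
                  size = rb_event_ts_length(event);
          }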
  23. 21 Oct 2010, 5 commits
    • ring-buffer: Remove unused macro RB_TIMESTAMPS_PER_PAGE · b8b2663b
      Committed by Steven Rostedt
      With the binding of time extends to events we no longer need to use
      the macro RB_TIMESTAMPS_PER_PAGE. Remove it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b8b2663b
    • ring-buffer: Micro-optimize with some strategic inlining · d9abde21
      Committed by Steven Rostedt
      By using inline and noinline, we are able to make the fast path of
      recording an event 4% faster.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d9abde21
    • ring-buffer: Remove condition to add timestamp in fast path · 140ff891
      Committed by Steven Rostedt
      There is a condition in the fast path that checks whether we should
      add a time extend. But this condition is racy (in the sense that we
      can add an unnecessary time extend, though nothing that can break
      anything). We later check whether the time or event time delta should
      be zero or have real data in it (not racy), making this first check
      redundant.
      
      This check may help save space once in a while, but it is really not
      worth the hassle of trying to save some space that is used at most
      once every 134 ms.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      140ff891
    • ring-buffer: Bind time extend and data events together · 69d1b839
      Committed by Steven Rostedt
      When the time between two timestamps is greater than
      2^27 nanoseconds (~134 ms), a time extend event is added that extends
      the time difference to 59 bits (~18 years). This is because
      events have only a 27-bit field to store time.
      
      Currently this time extend is a separate event. We add it just before
      the event data that is being written to the buffer. But before
      the event data is committed, the event data can also be discarded (as
      with the case of filters). But because the time extend has already been
      committed, it will stay in the buffer.
      
      If lots of events are being filtered and no event is being
      written, then every 134ms a time extend can be added to the buffer
      without any data attached. To keep from filling the entire buffer
      with time extends, a time extend will never be the first event
      in a page because the page timestamp can be used. Time extends can
      only fill the rest of a page with some data at the beginning.
      
      This patch binds the time extend with the data. The difference here
      is that the time extend is not committed before the data is added.
      Instead, when a time extend is needed, the space reserved on
      the ring buffer is the time extend + the data event size. The
      time extend is added to the first part of the reserved block and
      the data is added to the second. The time extend event is passed
      back to the reserver, but since the reserver also uses a function
      to find the data portion of the reserved block, no changes to the
      ring buffer interface need to be made.
      
      When a commit is discarded, we now remove both the time extend and
      the event. With this approach no more than one time extend can
      be in the buffer in a row. Data must always follow a time extend.
      
      Thanks to Mathieu Desnoyers for suggesting this idea.
      Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      69d1b839
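
      A sketch of the reserve path after this change (simplified; the
      add_timestamp flag and helper signatures follow the commit's
      description, not the exact code):

          if (unlikely(add_timestamp))
                  length += RB_LEN_TIME_EXTEND;   /* reserve extend + data */

          event = __rb_reserve_next(cpu_buffer, length, tail_page);
          if (add_timestamp)
                  /* writes the extend first and returns the data slot
                   * that follows it */
                  event = rb_add_time_stamp(event, delta);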
    • ring-buffer: Pass delta by value and not by reference · f25106ae
      Committed by Steven Rostedt
      The delta between events is passed to the timestamp code by reference,
      and the timestamp code will reset the value. But it can be reset
      from the caller. There is no need to pass it in by reference.
      
      Changing the call to pass by value lets gcc optimize the code
      a bit more: it can store the delta in a register and not
      worry about updating the reference.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f25106ae
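
      The shape of the change, as a diff-style sketch (the function name
      here is hypothetical):

          -static void rb_handle_timestamp(struct ring_buffer_per_cpu *cpu_buffer,
          -                                u64 *delta)
          +static void rb_handle_timestamp(struct ring_buffer_per_cpu *cpu_buffer,
          +                                u64 delta)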
  24. 20 Oct 2010, 2 commits
    • ring-buffer: Pass timestamp by value and not by reference · e8bc43e8
      Committed by Steven Rostedt
      The original code for the ring buffer had locations that modified
      the timestamp, and that change was used by the callers. Now
      the timestamp is not reused by the callers, so there is no reason
      to pass it by reference.
      
      Changing the call to pass by value lets gcc optimize the code
      a bit more: it can store the timestamp in a register and not
      worry about updating the reference.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e8bc43e8
    • ring-buffer: Make write slow path out of line · 747e94ae
      Committed by Steven Rostedt
      Gcc inlines the slow path of the ring buffer write, which can
      hurt performance. This patch simply forces the slow path function
      rb_move_tail() to always remain a separate function.
      
      The ring_buffer_benchmark module with reader_disabled=1 shows that
      this patch changes the time to record an event from 135 ns to
      132 ns. (3 ns or 2.22% improvement)
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      747e94ae
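
      A sketch of the annotation (signature abridged):

          /* Keep the rarely taken page-crossing path out of line so the
           * hot path of reserving an event stays small. */
          static noinline struct ring_buffer_event *
          rb_move_tail(struct ring_buffer_per_cpu *cpu_buffer,
                       unsigned long length, unsigned long tail,
                       struct buffer_page *tail_page, u64 ts)
          {
                  /* ... slow-path handling of a filled page ... */
          }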
  25. 15 Oct 2010, 1 commit
    • llseek: automatically add .llseek fop · 6038f373
      Committed by Arnd Bergmann
      All file_operations should get a .llseek operation so we can make
      nonseekable_open the default for future file operations without a
      .llseek pointer.
      
      The three cases that we can automatically detect are no_llseek, seq_lseek
      and default_llseek. For cases where we can automatically prove that
      the file offset is always ignored, we use noop_llseek, which maintains
      the current behavior of not returning an error from a seek.
      
      New drivers should normally not use noop_llseek but instead use no_llseek
      and call nonseekable_open at open time.  Existing drivers can be converted
      to do the same when the maintainer knows for certain that no user code
      relies on calling seek on the device file.
      
      The generated code is often incorrectly indented and right now contains
      comments that clarify for each added line why a specific variant was
      chosen. In the version that gets submitted upstream, the comments will
      be gone and I will manually fix the indentation, because there does not
      seem to be a way to do that using coccinelle.
      
      Some amount of new code is currently sitting in linux-next that should get
      the same modifications, which I will do at the end of the merge window.
      
      Many thanks to Julia Lawall for helping me learn to write a semantic
      patch that does all this.
      
      ===== begin semantic patch =====
      // This adds an llseek= method to all file operations,
      // as a preparation for making no_llseek the default.
      //
      // The rules are
      // - use no_llseek explicitly if we do nonseekable_open
      // - use seq_lseek for sequential files
      // - use default_llseek if we know we access f_pos
      // - use noop_llseek if we know we don't access f_pos,
      //   but we still want to allow users to call lseek
      //
      @ open1 exists @
      identifier nested_open;
      @@
      nested_open(...)
      {
      <+...
      nonseekable_open(...)
      ...+>
      }
      
      @ open exists@
      identifier open_f;
      identifier i, f;
      identifier open1.nested_open;
      @@
      int open_f(struct inode *i, struct file *f)
      {
      <+...
      (
      nonseekable_open(...)
      |
      nested_open(...)
      )
      ...+>
      }
      
      @ read disable optional_qualifier exists @
      identifier read_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      expression E;
      identifier func;
      @@
      ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
      {
      <+...
      (
         *off = E
      |
         *off += E
      |
         func(..., off, ...)
      |
         E = *off
      )
      ...+>
      }
      
      @ read_no_fpos disable optional_qualifier exists @
      identifier read_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      @@
      ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
      {
      ... when != off
      }
      
      @ write @
      identifier write_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      expression E;
      identifier func;
      @@
      ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
      {
      <+...
      (
        *off = E
      |
        *off += E
      |
        func(..., off, ...)
      |
        E = *off
      )
      ...+>
      }
      
      @ write_no_fpos @
      identifier write_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      @@
      ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
      {
      ... when != off
      }
      
      @ fops0 @
      identifier fops;
      @@
      struct file_operations fops = {
       ...
      };
      
      @ has_llseek depends on fops0 @
      identifier fops0.fops;
      identifier llseek_f;
      @@
      struct file_operations fops = {
      ...
       .llseek = llseek_f,
      ...
      };
      
      @ has_read depends on fops0 @
      identifier fops0.fops;
      identifier read_f;
      @@
      struct file_operations fops = {
      ...
       .read = read_f,
      ...
      };
      
      @ has_write depends on fops0 @
      identifier fops0.fops;
      identifier write_f;
      @@
      struct file_operations fops = {
      ...
       .write = write_f,
      ...
      };
      
      @ has_open depends on fops0 @
      identifier fops0.fops;
      identifier open_f;
      @@
      struct file_operations fops = {
      ...
       .open = open_f,
      ...
      };
      
      // use no_llseek if we call nonseekable_open
      ////////////////////////////////////////////
      @ nonseekable1 depends on !has_llseek && has_open @
      identifier fops0.fops;
      identifier nso ~= "nonseekable_open";
      @@
      struct file_operations fops = {
      ...  .open = nso, ...
      +.llseek = no_llseek, /* nonseekable */
      };
      
      @ nonseekable2 depends on !has_llseek @
      identifier fops0.fops;
      identifier open.open_f;
      @@
      struct file_operations fops = {
      ...  .open = open_f, ...
      +.llseek = no_llseek, /* open uses nonseekable */
      };
      
      // use seq_lseek for sequential files
      /////////////////////////////////////
      @ seq depends on !has_llseek @
      identifier fops0.fops;
      identifier sr ~= "seq_read";
      @@
      struct file_operations fops = {
      ...  .read = sr, ...
      +.llseek = seq_lseek, /* we have seq_read */
      };
      
      // use default_llseek if there is a readdir
      ///////////////////////////////////////////
      @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier readdir_e;
      @@
      // any other fop is used that changes pos
      struct file_operations fops = {
      ... .readdir = readdir_e, ...
      +.llseek = default_llseek, /* readdir is present */
      };
      
      // use default_llseek if at least one of read/write touches f_pos
      /////////////////////////////////////////////////////////////////
      @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read.read_f;
      @@
      // read fops use offset
      struct file_operations fops = {
      ... .read = read_f, ...
      +.llseek = default_llseek, /* read accesses f_pos */
      };
      
      @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier write.write_f;
      @@
      // write fops use offset
      struct file_operations fops = {
      ... .write = write_f, ...
      +	.llseek = default_llseek, /* write accesses f_pos */
      };
      
      // Use noop_llseek if neither read nor write accesses f_pos
      ///////////////////////////////////////////////////////////
      
      @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read_no_fpos.read_f;
      identifier write_no_fpos.write_f;
      @@
      // write fops use offset
      struct file_operations fops = {
      ...
       .write = write_f,
       .read = read_f,
      ...
      +.llseek = noop_llseek, /* read and write both use no f_pos */
      };
      
      @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier write_no_fpos.write_f;
      @@
      struct file_operations fops = {
      ... .write = write_f, ...
      +.llseek = noop_llseek, /* write uses no f_pos */
      };
      
      @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read_no_fpos.read_f;
      @@
      struct file_operations fops = {
      ... .read = read_f, ...
      +.llseek = noop_llseek, /* read uses no f_pos */
      };
      
      @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      @@
      struct file_operations fops = {
      ...
      +.llseek = noop_llseek, /* no read or write fn */
      };
      ===== End semantic patch =====
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Julia Lawall <julia@diku.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
      6038f373
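
      An example of the end state the patch produces: a file_operations
      that previously lacked .llseek gets an explicit entry (identifiers
      here are illustrative, not from a specific driver):

          static const struct file_operations sample_fops = {
                  .owner  = THIS_MODULE,
                  .open   = sample_open,
                  .read   = seq_read,
                  .llseek = seq_lseek,    /* we have seq_read */
          };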