1. 05 Oct 2015, 1 commit
    • drivers/tty: make sysrq.c slightly more explicitly non-modular · 3bce6f64
      Authored by Paul Gortmaker
      The Kconfig currently controlling compilation of this code is:
      
      config.debug:config MAGIC_SYSRQ
            bool "Magic SysRq key"
      
      ...meaning that it currently is not being built as a module by anyone.
      
      Let's remove the traces of modularity that we can, so that when reading
      the driver there is less doubt that it is builtin-only.
      
      Since module_init translates to device_initcall in the non-modular
      case, the init ordering remains unchanged with this commit.
      
      We don't delete the module.h include since other parts of the file are
      using content from there.
      
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jiri Slaby <jslaby@suse.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 09 Sep 2015, 2 commits
  3. 25 Jun 2015, 1 commit
    • mm: oom_kill: simplify OOM killer locking · dc56401f
      Authored by Johannes Weiner
      The zonelist locking and the oom_sem are two overlapping locks that are
      used to serialize global OOM killing against different things.
      
      The historical zonelist locking serializes OOM kills from allocations with
      overlapping zonelists against each other to prevent killing more tasks
      than necessary in the same memory domain.  Only when neither tasklists nor
      zonelists from two concurrent OOM kills overlap (tasks in separate memcgs
      bound to separate nodes) are OOM kills allowed to execute in parallel.
      
      The younger oom_sem is a read-write lock to serialize OOM killing against
      the PM code trying to disable the OOM killer altogether.
      
      However, the OOM killer is a fairly cold error path; there is really no
      reason to optimize for highly performant and concurrent OOM kills.  And
      the oom_sem is just flat-out redundant.
      
      Replace both locking schemes with a single global mutex serializing OOM
      kills regardless of context.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 22 Jun 2015, 1 commit
  5. 03 Jun 2015, 1 commit
    • tty: remove platform_sysrq_reset_seq · ffb6e0c9
      Authored by Arnd Bergmann
      The platform_sysrq_reset_seq code was intended as a way for an embedded
      platform to provide its own sysrq sequence at compile time. After over two
      years, nobody has started using it in an upstream kernel, and the platforms
      that were interested in it have moved on to devicetree, which can be used
      to configure the sequence without requiring kernel changes. The method is
      also incompatible with the way that most architectures build support for
      multiple platforms into a single kernel.
      
      Now the code is producing warnings when built with gcc-5.1:
      
      drivers/tty/sysrq.c: In function 'sysrq_init':
      drivers/tty/sysrq.c:959:33: warning: array subscript is above array bounds [-Warray-bounds]
         key = platform_sysrq_reset_seq[i];
      
      We could fix this, but it seems unlikely that it will ever be used, so
      let's just remove the code instead. We still have the option to pass the
      sequence either in DT, using the kernel command line, or using the
      /sys/module/sysrq/parameters/reset_seq file.
      
      Fixes: 154b7a48 ("Input: sysrq - allow specifying alternate reset sequence")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
  6. 28 May 2015, 1 commit
    • kernel/params: constify struct kernel_param_ops uses · 9c27847d
      Authored by Luis R. Rodriguez
      Most code already uses const for struct kernel_param_ops; sweep the
      kernel for the last offending stragglers. Other than
      include/linux/moduleparam.h and kernel/params.c, all other changes
      were generated with the following Coccinelle SmPL patch. Merge
      conflicts between trees can be handled with Coccinelle.
      
      In the future git could get Coccinelle merge support to deal with
      patch --> fail --> grammar --> Coccinelle --> new patch conflicts
      automatically for us on patches where the grammar is available and
      the patch is of high confidence. Consider this a feature request.
      
      Test compiled on x86_64 against:
      
      	* allnoconfig
      	* allmodconfig
      	* allyesconfig
      
      @ const_found @
      identifier ops;
      @@
      
      const struct kernel_param_ops ops = {
      };
      
      @ const_not_found depends on !const_found @
      identifier ops;
      @@
      
      -struct kernel_param_ops ops = {
      +const struct kernel_param_ops ops = {
      };
      
      Generated-by: Coccinelle SmPL
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Junio C Hamano <gitster@pobox.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: cocci@systeme.lip6.fr
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  7. 25 May 2015, 1 commit
    • tty: remove platform_sysrq_reset_seq · abab381f
      Authored by Arnd Bergmann
      The platform_sysrq_reset_seq code was intended as a way for an embedded
      platform to provide its own sysrq sequence at compile time. After over
      two years, nobody has started using it in an upstream kernel, and
      the platforms that were interested in it have moved on to devicetree,
      which can be used to configure the sequence without requiring kernel
      changes. The method is also incompatible with the way that most
      architectures build support for multiple platforms into a single
      kernel.
      
      Now the code is producing warnings when built with gcc-5.1:
      
      drivers/tty/sysrq.c: In function 'sysrq_init':
      drivers/tty/sysrq.c:959:33: warning: array subscript is above array bounds [-Warray-bounds]
         key = platform_sysrq_reset_seq[i];
      
      We could fix this, but it seems unlikely that it will ever be used,
      so let's just remove the code instead. We still have the option to
      pass the sequence either in DT, using the kernel command line,
      or using the /sys/module/sysrq/parameters/reset_seq file.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Fixes: 154b7a48 ("Input: sysrq - allow specifying alternate reset sequence")
      ----
      v2: moved sysrq_reset_downtime_ms variable to avoid introducing a compile
          warning when CONFIG_INPUT is disabled
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  8. 09 Mar 2015, 1 commit
    • workqueue: dump workqueues on sysrq-t · 3494fc30
      Authored by Tejun Heo
      Workqueues are used extensively throughout the kernel but sometimes
      it's difficult to debug stalls involving work items because visibility
      into its inner workings is fairly limited.  Although sysrq-t task dump
      annotates each active worker task with the information on the work
      item being executed, it is challenging to find out which work items
      are pending or delayed on which queues and how pools are being
      managed.
      
      This patch implements show_workqueue_state() which dumps all busy
      workqueues and pools and is called from the sysrq-t handler.  At the
      end of sysrq-t dump, something like the following is printed.
      
       Showing busy workqueues and worker pools:
       ...
       workqueue filler_wq: flags=0x0
         pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
           in-flight: 491:filler_workfn, 507:filler_workfn
         pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=2/256
           in-flight: 501:filler_workfn
           pending: filler_workfn
       ...
       workqueue test_wq: flags=0x8
         pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1
           in-flight: 510(RESCUER):test_workfn BAR(69) BAR(500)
           delayed: test_workfn1 BAR(492), test_workfn2
       ...
       pool 0: cpus=0 node=0 flags=0x0 nice=0 workers=2 manager: 137
       pool 2: cpus=1 node=0 flags=0x0 nice=0 workers=3 manager: 469
       pool 3: cpus=1 node=0 flags=0x0 nice=-20 workers=2 idle: 16
       pool 8: cpus=0-3 flags=0x4 nice=0 workers=2 manager: 62
      
      The above shows that test_wq is executing test_workfn() on pid 510
      which is the rescuer and also that there are two tasks 69 and 500
      waiting for the work item to finish in flush_work().  As test_wq has
      max_active of 1, there are two work items for test_workfn1() and
      test_workfn2() which are delayed till the current work item is
      finished.  In addition, pid 492 is flushing test_workfn1().
      
      The work item for test_workfn() is being executed on pwq of pool 2
      which is the normal priority per-cpu pool for CPU 1.  The pool has
      three workers, two of which are executing filler_workfn() for
      filler_wq and the last one is assuming the manager role trying to
      create more workers.
      
      This extra workqueue state dump will hopefully help chasing down hangs
      involving workqueues.
      
      v3: cpulist_pr_cont() replaced with "%*pbl" printf formatting.
      
      v2: As suggested by Andrew, minor formatting change in pr_cont_work(),
          printk()'s replaced with pr_info()'s, and cpumask printing now
          uses cpulist_pr_cont().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      CC: Ingo Molnar <mingo@redhat.com>
  9. 12 Feb 2015, 2 commits
    • oom, PM: make OOM detection in the freezer path raceless · c32b3cbe
      Authored by Michal Hocko
      Commit 5695be14 ("OOM, PM: OOM killed task shouldn't escape PM
      suspend") left a race window in which the OOM killer can manage to
      note_oom_kill after freeze_processes checks the counter.  The race
      window is quite small and really unlikely, and a partial solution was
      deemed sufficient at the time of submission.
      
      Tejun wasn't happy about this partial solution though and insisted on a
      full solution.  That requires the full OOM and freezer's task freezing
      exclusion, though.  This is done by this patch which introduces oom_sem
      RW lock and turns oom_killer_disable() into a full OOM barrier.
      
      oom_killer_disabled check is moved from the allocation path to the OOM
      level and we take oom_sem for reading for both the check and the whole
      OOM invocation.
      
      oom_killer_disable() takes oom_sem for writing so it waits for all
      currently running OOM killer invocations.  Then it disables all further
      OOMs by setting oom_killer_disabled and checks for any oom victims.
      Victims are counted via mark_tsk_oom_victim resp.  unmark_oom_victim.  The
      last victim wakes up all waiters enqueued by oom_killer_disable().
      Therefore this function acts as the full OOM barrier.
      
      The page fault path is covered now as well although it was assumed to be
      safe before.  As per Tejun, "We used to have freezing points deep in file
      system code which may be reachable from page fault," so it would be
      better and more robust to not rely on freezing points here.  The same
      applies to the memcg OOM killer.
      
      out_of_memory tells the caller whether the OOM was allowed to trigger and
      the callers are supposed to handle the situation.  The page allocation
      path simply fails the allocation same as before.  The page fault path will
      retry the fault (more on that later) and Sysrq OOM trigger will simply
      complain to the log.
      
      Normally there wouldn't be any unfrozen user tasks after
      try_to_freeze_tasks so the function will not block. But if there was an
      OOM killer racing with try_to_freeze_tasks and the OOM victim didn't
      finish yet then we have to wait for it. This should complete in a finite
      time, though, because
      
      	- the victim cannot loop in the page fault handler (it would die
      	  on the way out from the exception)
      	- it cannot loop in the page allocator because all the further
      	  allocation would fail and __GFP_NOFAIL allocations are not
      	  acceptable at this stage
      	- it shouldn't be blocked on any locks held by frozen tasks
      	  (try_to_freeze expects lockless context) and kernel threads and
      	  work queues are not frozen yet
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Suggested-by: Tejun Heo <tj@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sysrq: convert printk to pr_* equivalent · 401e4a7c
      Authored by Michal Hocko
      While touching this area let's convert printk to pr_*.  This also makes
      the printing of continuation lines work properly.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 07 Aug 2014, 1 commit
  11. 07 Jun 2014, 2 commits
    • sysrq,rcu: suppress RCU stall warnings while sysrq runs · 722773af
      Authored by Rik van Riel
      Some sysrq handlers can run for a long time, because they dump a lot of
      data onto a serial console.  Having RCU stall warnings pop up in the
      middle of them only makes the problem worse.
      
      This patch temporarily disables RCU stall warnings while a sysrq request
      is handled.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Suggested-by: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Madper Xie <cxie@redhat.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Richard Weinberger <richard@nod.at>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sysrq: rcu-ify __handle_sysrq · 984d74a7
      Authored by Rik van Riel
      Echoing values into /proc/sysrq-trigger seems to be a popular way to get
      information out of the kernel.  However, dumping information about
      thousands of processes, or hundreds of CPUs to serial console can result
      in IRQs being blocked for minutes, resulting in various kinds of cascade
      failures.
      
      The most common failure is due to interrupts being blocked for a very
      long time.  This can lead to things like failed IO requests, and other
      things the system cannot easily recover from.
      
      This problem is easily fixable by making __handle_sysrq use RCU instead
      of spin_lock_irqsave.
      
      This leaves the warning that RCU grace periods have not elapsed for a
      long time, but the system will come back from that automatically.
      
      It also leaves sysrq-from-irq-context when the sysrq keys are pressed,
      but that is probably desired since people want that to work in
      situations where the system is already hosed.
      
      The callers of register_sysrq_key and unregister_sysrq_key appear to be
      capable of sleeping.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Reported-by: Madper Xie <cxie@redhat.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 05 Jun 2014, 1 commit
  13. 17 Oct 2013, 1 commit
  14. 13 Aug 2013, 1 commit
  15. 07 Jun 2013, 1 commit
  16. 04 Jun 2013, 1 commit
  17. 02 Apr 2013, 1 commit
  18. 16 Mar 2013, 1 commit
  19. 28 Feb 2013, 1 commit
    • sysrq: don't depend on weak undefined arrays to have an address that compares as NULL · adf96e6f
      Authored by Linus Torvalds
      When taking an address of an extern array, gcc quite naturally should be
      able to say "an address of an object can never be NULL" and just
      optimize away the test entirely.
      
      However, the new alternate sysrq reset code (commit 154b7a48:
      "Input: sysrq - allow specifying alternate reset sequence") did exactly
      that: it declared platform_sysrq_reset_seq[] as a weak array and
      expected that testing the address of the array would show whether it
      actually got linked against something or not.
      
      And that doesn't work with all gcc versions.  Clearly it works with
      *some* versions of gcc, and maybe it's even supposed to work, but it
      really is a very fragile concept.
      
      So instead of testing the address of the weak variable, just create a
      weak instance of that array that is empty.  If some platform then has a
      real platform_sysrq_reset_seq[] that overrides our weak one, the linker
      will switch to that one, and it all works without any run-time
      conditionals at all.
      Reported-by: Dave Airlie <airlied@gmail.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  20. 08 Feb 2013, 1 commit
  21. 17 Jan 2013, 1 commit
  22. 16 Nov 2012, 1 commit
  23. 17 Oct 2012, 1 commit
  24. 06 Apr 2012, 1 commit
    • sysrq: use SEND_SIG_FORCED instead of force_sig() · b82c3287
      Authored by Anton Vorontsov
      Change send_sig_all() to use do_send_sig_info(SEND_SIG_FORCED) instead
      of force_sig(SIGKILL).  With the recent changes we do not need force_ to
      kill the CLONE_NEWPID tasks.
      
      And this is more correct.  force_sig() can race with the exiting thread,
      while do_send_sig_info(group => true) kills the whole process.
      
      Some more notes from Oleg Nesterov:
      
      > Just one note. This change makes no difference for sysrq_handle_kill().
      > But it obviously changes the behaviour sysrq_handle_term(). I think
      > this is fine, if you want to really kill the task which blocks/ignores
      > SIGTERM you can use sysrq_handle_kill().
      >
      > Even ignoring the reasons why force_sig() is simply wrong here,
      > force_sig(SIGTERM) looks strange. The task won't be killed if it has
      > a handler, but SIG_IGN can't help. However if it has the handler
      > but blocks SIGTERM temporary (this is very common) it will be killed.
      
      Also,
      
      > force_sig() can't kill the process if the main thread has already
      > exited. IOW, it is trivial to create the process which can't be
      > killed by sysrq.
      
      So, this patch fixes the issue.
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Anton Vorontsov <anton.vorontsov@linaro.org>
      Cc: Alan Cox <alan@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  25. 22 Mar 2012, 1 commit
  26. 09 Mar 2012, 1 commit
    • vt:tackle kbd_table · 079c9534
      Authored by Alan Cox
      Keyboard struct lifetime is easy, but the locking is not and is completely
      ignored by the existing code. Tackle this one head on
      
      - Make the kbd_table private so we can run down all direct users
      - Hoick the relevant ioctl handlers into the keyboard layer
      - Lock them with the keyboard lock so they don't change mid keypress
      - Add helpers for things like console stop/start so we isolate the poking
        around properly
      - Tweak the braille console so it still builds
      
      There are a couple of FIXME locking cases left for ioctls that are so hideous
      they should be addressed in a later patch. After this patch the kbd_table is
      private and all the keyboard jiggery pokery is in one place.
      
      This update fixes speakup and also a memory leak in the original.
      Signed-off-by: Alan Cox <alan@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  27. 10 Feb 2012, 2 commits
  28. 04 Jan 2012, 1 commit
  29. 25 Mar 2011, 1 commit
  30. 05 Nov 2010, 1 commit
  31. 15 Oct 2010, 1 commit
    • llseek: automatically add .llseek fop · 6038f373
      Authored by Arnd Bergmann
      All file_operations should get a .llseek operation so we can make
      nonseekable_open the default for future file operations without a
      .llseek pointer.
      
      The three cases that we can automatically detect are no_llseek, seq_lseek
      and default_llseek. For cases where we can automatically prove that
      the file offset is always ignored, we use noop_llseek, which maintains
      the current behavior of not returning an error from a seek.
      
      New drivers should normally not use noop_llseek but instead use no_llseek
      and call nonseekable_open at open time.  Existing drivers can be converted
      to do the same when the maintainer knows for certain that no user code
      relies on calling seek on the device file.
      
      The generated code is often incorrectly indented and right now contains
      comments that clarify for each added line why a specific variant was
      chosen. In the version that gets submitted upstream, the comments will
      be gone and I will manually fix the indentation, because there does not
      seem to be a way to do that using coccinelle.
      
      Some amount of new code is currently sitting in linux-next that should get
      the same modifications, which I will do at the end of the merge window.
      
      Many thanks to Julia Lawall for helping me learn to write a semantic
      patch that does all this.
      
      ===== begin semantic patch =====
      // This adds an llseek= method to all file operations,
      // as a preparation for making no_llseek the default.
      //
      // The rules are
      // - use no_llseek explicitly if we do nonseekable_open
      // - use seq_lseek for sequential files
      // - use default_llseek if we know we access f_pos
      // - use noop_llseek if we know we don't access f_pos,
      //   but we still want to allow users to call lseek
      //
      @ open1 exists @
      identifier nested_open;
      @@
      nested_open(...)
      {
      <+...
      nonseekable_open(...)
      ...+>
      }
      
      @ open exists@
      identifier open_f;
      identifier i, f;
      identifier open1.nested_open;
      @@
      int open_f(struct inode *i, struct file *f)
      {
      <+...
      (
      nonseekable_open(...)
      |
      nested_open(...)
      )
      ...+>
      }
      
      @ read disable optional_qualifier exists @
      identifier read_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      expression E;
      identifier func;
      @@
      ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
      {
      <+...
      (
         *off = E
      |
         *off += E
      |
         func(..., off, ...)
      |
         E = *off
      )
      ...+>
      }
      
      @ read_no_fpos disable optional_qualifier exists @
      identifier read_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      @@
      ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
      {
      ... when != off
      }
      
      @ write @
      identifier write_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      expression E;
      identifier func;
      @@
      ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
      {
      <+...
      (
        *off = E
      |
        *off += E
      |
        func(..., off, ...)
      |
        E = *off
      )
      ...+>
      }
      
      @ write_no_fpos @
      identifier write_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      @@
      ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
      {
      ... when != off
      }
      
      @ fops0 @
      identifier fops;
      @@
      struct file_operations fops = {
       ...
      };
      
      @ has_llseek depends on fops0 @
      identifier fops0.fops;
      identifier llseek_f;
      @@
      struct file_operations fops = {
      ...
       .llseek = llseek_f,
      ...
      };
      
      @ has_read depends on fops0 @
      identifier fops0.fops;
      identifier read_f;
      @@
      struct file_operations fops = {
      ...
       .read = read_f,
      ...
      };
      
      @ has_write depends on fops0 @
      identifier fops0.fops;
      identifier write_f;
      @@
      struct file_operations fops = {
      ...
       .write = write_f,
      ...
      };
      
      @ has_open depends on fops0 @
      identifier fops0.fops;
      identifier open_f;
      @@
      struct file_operations fops = {
      ...
       .open = open_f,
      ...
      };
      
      // use no_llseek if we call nonseekable_open
      ////////////////////////////////////////////
      @ nonseekable1 depends on !has_llseek && has_open @
      identifier fops0.fops;
      identifier nso ~= "nonseekable_open";
      @@
      struct file_operations fops = {
      ...  .open = nso, ...
      +.llseek = no_llseek, /* nonseekable */
      };
      
      @ nonseekable2 depends on !has_llseek @
      identifier fops0.fops;
      identifier open.open_f;
      @@
      struct file_operations fops = {
      ...  .open = open_f, ...
      +.llseek = no_llseek, /* open uses nonseekable */
      };
      
      // use seq_lseek for sequential files
      /////////////////////////////////////
      @ seq depends on !has_llseek @
      identifier fops0.fops;
      identifier sr ~= "seq_read";
      @@
      struct file_operations fops = {
      ...  .read = sr, ...
      +.llseek = seq_lseek, /* we have seq_read */
      };
      
      // use default_llseek if there is a readdir
      ///////////////////////////////////////////
      @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier readdir_e;
      @@
      // any other fop is used that changes pos
      struct file_operations fops = {
      ... .readdir = readdir_e, ...
      +.llseek = default_llseek, /* readdir is present */
      };
      
      // use default_llseek if at least one of read/write touches f_pos
      /////////////////////////////////////////////////////////////////
      @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read.read_f;
      @@
      // read fops use offset
      struct file_operations fops = {
      ... .read = read_f, ...
      +.llseek = default_llseek, /* read accesses f_pos */
      };
      
      @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier write.write_f;
      @@
      // write fops use offset
      struct file_operations fops = {
      ... .write = write_f, ...
      +	.llseek = default_llseek, /* write accesses f_pos */
      };
      
      // Use noop_llseek if neither read nor write accesses f_pos
      ///////////////////////////////////////////////////////////
      
      @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read_no_fpos.read_f;
      identifier write_no_fpos.write_f;
      @@
      // write fops use offset
      struct file_operations fops = {
      ...
       .write = write_f,
       .read = read_f,
      ...
      +.llseek = noop_llseek, /* read and write both use no f_pos */
      };
      
      @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier write_no_fpos.write_f;
      @@
      struct file_operations fops = {
      ... .write = write_f, ...
      +.llseek = noop_llseek, /* write uses no f_pos */
      };
      
      @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read_no_fpos.read_f;
      @@
      struct file_operations fops = {
      ... .read = read_f, ...
      +.llseek = noop_llseek, /* read uses no f_pos */
      };
      
      @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      @@
      struct file_operations fops = {
      ...
      +.llseek = noop_llseek, /* no read or write fn */
      };
      ===== End semantic patch =====
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Julia Lawall <julia@diku.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
  32. 30 Sep 2010, 1 commit
  33. 21 Aug 2010, 1 commit
  34. 20 Aug 2010, 1 commit
  35. 22 Jul 2010, 1 commit
  36. 09 Jun 2010, 1 commit