1. 04 May 2017, 1 commit
    • oom: improve oom disable handling · d75da004
      Authored by Michal Hocko
      Tetsuo has reported that a sysrq-triggered OOM killer will print
      misleading information when no tasks are selected:
      
        sysrq: SysRq : Manual OOM execution
        Out of memory: Kill process 4468 ((agetty)) score 0 or sacrifice child
        Killed process 4468 ((agetty)) total-vm:43704kB, anon-rss:1760kB, file-rss:0kB, shmem-rss:0kB
        sysrq: SysRq : Manual OOM execution
        Out of memory: Kill process 4469 (systemd-cgroups) score 0 or sacrifice child
        Killed process 4469 (systemd-cgroups) total-vm:10704kB, anon-rss:120kB, file-rss:0kB, shmem-rss:0kB
        sysrq: SysRq : Manual OOM execution
        sysrq: OOM request ignored because killer is disabled
        sysrq: SysRq : Manual OOM execution
        sysrq: OOM request ignored because killer is disabled
        sysrq: SysRq : Manual OOM execution
        sysrq: OOM request ignored because killer is disabled
      
      The real reason is that there are no eligible tasks for the OOM killer
      to select, but since commit 7c5f64f8 ("mm: oom: deduplicate victim
      selection code for memcg and global oom") the semantics of out_of_memory
      have changed without moom_callback being updated.
      
      This patch updates moom_callback to report that no task was eligible,
      which is the case both when the OOM killer is disabled and when no
      task is eligible.  To help distinguish the first case from the second,
      add a printk to both oom_killer_{enable,disable}.  This information is
      useful on its own because it might help debug potential memory
      allocation failures.
      
      Fixes: 7c5f64f8 ("mm: oom: deduplicate victim selection code for memcg and global oom")
      Link: http://lkml.kernel.org/r/20170404134705.6361-1-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 05 Apr 2017, 1 commit
  3. 02 Mar 2017, 3 commits
  4. 23 Feb 2017, 1 commit
  5. 11 Jan 2017, 1 commit
  6. 28 Sep 2016, 1 commit
  7. 27 Sep 2016, 1 commit
    • drivers/tty: Explicitly pass current to show_stack · 9f12cea9
      Authored by Mark Rutland
      As noted in commit:
      
        81539169 ("x86/dumpstack: Remove NULL task pointer convention")
      
      ... having a NULL task parameter imply current leads to subtle bugs in stack
      walking code (so far seen on both x86 and arm64), makes callsites harder to
      read, and is unnecessary as all callers have access to current.
      
      As a step towards removing the problematic NULL-implies-current idiom entirely,
      have the sysrq code explicitly pass current to show_stack.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jiri Slaby <jslaby@suse.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  8. 27 Jul 2016, 1 commit
  9. 30 Dec 2015, 1 commit
    • sysrq: Fix warning in sysrq generated crash. · 984cf355
      Authored by Ani Sinha
      Commit 984d74a7 ("sysrq: rcu-ify __handle_sysrq") replaced
      spin_lock_irqsave() calls with rcu_read_lock() calls in sysrq. Since
      rcu_read_lock() does not disable preemption, faulthandler_disabled() in
      __do_page_fault() in x86/fault.c returns false. When the code later calls
      might_sleep() in the pagefault handler, we get the following warning:
      
      BUG: sleeping function called from invalid context at ../arch/x86/mm/fault.c:1187
      in_atomic(): 0, irqs_disabled(): 0, pid: 4706, name: bash
      Preemption disabled at:[<ffffffff81484339>] printk+0x48/0x4a
      
      To fix this, we release the RCU read lock before we crash.
      
      Tested this patch on linux 3.18 by booting off one of our boards.
      
      Fixes: 984d74a7 ("sysrq: rcu-ify __handle_sysrq")
      Signed-off-by: Ani Sinha <ani@arista.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  10. 05 Oct 2015, 1 commit
    • drivers/tty: make sysrq.c slightly more explicitly non-modular · 3bce6f64
      Authored by Paul Gortmaker
      The Kconfig currently controlling compilation of this code is:
      
      lib/Kconfig.debug:config MAGIC_SYSRQ
            bool "Magic SysRq key"
      
      ...meaning that it currently is not being built as a module by anyone.
      
      Let's remove the traces of modularity we can, so that when reading the
      driver there is less doubt that it is builtin-only.
      
      Since module_init translates to device_initcall in the non-modular
      case, the init ordering remains unchanged with this commit.
      
      We don't delete the module.h include since other parts of the file are
      using content from there.
      
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jiri Slaby <jslaby@suse.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  11. 09 Sep 2015, 2 commits
  12. 25 Jun 2015, 1 commit
    • mm: oom_kill: simplify OOM killer locking · dc56401f
      Authored by Johannes Weiner
      The zonelist locking and the oom_sem are two overlapping locks that are
      used to serialize global OOM killing against different things.
      
      The historical zonelist locking serializes OOM kills from allocations with
      overlapping zonelists against each other to prevent killing more tasks
      than necessary in the same memory domain.  Only when neither tasklists nor
      zonelists from two concurrent OOM kills overlap (tasks in separate memcgs
      bound to separate nodes) are OOM kills allowed to execute in parallel.
      
      The younger oom_sem is a read-write lock to serialize OOM killing against
      the PM code trying to disable the OOM killer altogether.
      
      However, the OOM killer is a fairly cold error path, there is really no
      reason to optimize for highly performant and concurrent OOM kills.  And
      the oom_sem is just flat-out redundant.
      
      Replace both locking schemes with a single global mutex serializing OOM
      kills regardless of context.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 22 Jun 2015, 1 commit
  14. 03 Jun 2015, 1 commit
    • tty: remove platform_sysrq_reset_seq · ffb6e0c9
      Authored by Arnd Bergmann
      The platform_sysrq_reset_seq code was intended as a way for an embedded
      platform to provide its own sysrq sequence at compile time. After over two
      years, nobody has started using it in an upstream kernel, and the platforms
      that were interested in it have moved on to devicetree, which can be used
      to configure the sequence without requiring kernel changes. The method is
      also incompatible with the way that most architectures build support for
      multiple platforms into a single kernel.
      
      Now the code is producing warnings when built with gcc-5.1:
      
      drivers/tty/sysrq.c: In function 'sysrq_init':
      drivers/tty/sysrq.c:959:33: warning: array subscript is above array bounds [-Warray-bounds]
         key = platform_sysrq_reset_seq[i];
      
      We could fix this, but it seems unlikely that it will ever be used, so
      let's just remove the code instead. We still have the option to pass the
      sequence either in DT, using the kernel command line, or using the
      /sys/module/sysrq/parameters/reset_seq file.
      
      Fixes: 154b7a48 ("Input: sysrq - allow specifying alternate reset sequence")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
  15. 28 May 2015, 1 commit
    • kernel/params: constify struct kernel_param_ops uses · 9c27847d
      Authored by Luis R. Rodriguez
      Most code already uses const for struct kernel_param_ops;
      sweep the kernel for the last offending stragglers. Other than
      include/linux/moduleparam.h and kernel/params.c, all other changes
      were generated with the following Coccinelle SmPL patch. Merge
      conflicts between trees can be handled with Coccinelle.
      
      In the future git could get Coccinelle merge support to deal with
      patch --> fail --> grammar --> Coccinelle --> new patch conflicts
      automatically for us on patches where the grammar is available and
      the patch is of high confidence. Consider this a feature request.
      
      Test compiled on x86_64 against:
      
      	* allnoconfig
      	* allmodconfig
      	* allyesconfig
      
      @ const_found @
      identifier ops;
      @@
      
      const struct kernel_param_ops ops = {
      };
      
      @ const_not_found depends on !const_found @
      identifier ops;
      @@
      
      -struct kernel_param_ops ops = {
      +const struct kernel_param_ops ops = {
      };
      
      Generated-by: Coccinelle SmPL
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Junio C Hamano <gitster@pobox.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: cocci@systeme.lip6.fr
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  16. 25 May 2015, 1 commit
    • tty: remove platform_sysrq_reset_seq · abab381f
      Authored by Arnd Bergmann
      The platform_sysrq_reset_seq code was intended as a way for an embedded
      platform to provide its own sysrq sequence at compile time. After over
      two years, nobody has started using it in an upstream kernel, and
      the platforms that were interested in it have moved on to devicetree,
      which can be used to configure the sequence without requiring kernel
      changes. The method is also incompatible with the way that most
      architectures build support for multiple platforms into a single
      kernel.
      
      Now the code is producing warnings when built with gcc-5.1:
      
      drivers/tty/sysrq.c: In function 'sysrq_init':
      drivers/tty/sysrq.c:959:33: warning: array subscript is above array bounds [-Warray-bounds]
         key = platform_sysrq_reset_seq[i];
      
      We could fix this, but it seems unlikely that it will ever be used,
      so let's just remove the code instead. We still have the option to
      pass the sequence either in DT, using the kernel command line,
      or using the /sys/module/sysrq/parameters/reset_seq file.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Fixes: 154b7a48 ("Input: sysrq - allow specifying alternate reset sequence")
      ----
      v2: moved sysrq_reset_downtime_ms variable to avoid introducing a compile
          warning when CONFIG_INPUT is disabled
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  17. 09 Mar 2015, 1 commit
    • workqueue: dump workqueues on sysrq-t · 3494fc30
      Authored by Tejun Heo
      Workqueues are used extensively throughout the kernel but sometimes
      it's difficult to debug stalls involving work items because visibility
      into its inner workings is fairly limited.  Although sysrq-t task dump
      annotates each active worker task with the information on the work
      item being executed, it is challenging to find out which work items
      are pending or delayed on which queues and how pools are being
      managed.
      
      This patch implements show_workqueue_state() which dumps all busy
      workqueues and pools and is called from the sysrq-t handler.  At the
      end of sysrq-t dump, something like the following is printed.
      
       Showing busy workqueues and worker pools:
       ...
       workqueue filler_wq: flags=0x0
         pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
           in-flight: 491:filler_workfn, 507:filler_workfn
         pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=2/256
           in-flight: 501:filler_workfn
           pending: filler_workfn
       ...
       workqueue test_wq: flags=0x8
         pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1
           in-flight: 510(RESCUER):test_workfn BAR(69) BAR(500)
           delayed: test_workfn1 BAR(492), test_workfn2
       ...
       pool 0: cpus=0 node=0 flags=0x0 nice=0 workers=2 manager: 137
       pool 2: cpus=1 node=0 flags=0x0 nice=0 workers=3 manager: 469
       pool 3: cpus=1 node=0 flags=0x0 nice=-20 workers=2 idle: 16
       pool 8: cpus=0-3 flags=0x4 nice=0 workers=2 manager: 62
      
      The above shows that test_wq is executing test_workfn() on pid 510
      which is the rescuer and also that there are two tasks 69 and 500
      waiting for the work item to finish in flush_work().  As test_wq has
      max_active of 1, there are two work items for test_workfn1() and
      test_workfn2() which are delayed till the current work item is
      finished.  In addition, pid 492 is flushing test_workfn1().
      
      The work item for test_workfn() is being executed on pwq of pool 2
      which is the normal priority per-cpu pool for CPU 1.  The pool has
      three workers, two of which are executing filler_workfn() for
      filler_wq and the last one is assuming the manager role trying to
      create more workers.
      
      This extra workqueue state dump will hopefully help chasing down hangs
      involving workqueues.
      
      v3: cpulist_pr_cont() replaced with "%*pbl" printf formatting.
      
      v2: As suggested by Andrew, minor formatting change in pr_cont_work(),
          printk()'s replaced with pr_info()'s, and cpumask printing now
          uses cpulist_pr_cont().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      CC: Ingo Molnar <mingo@redhat.com>
  18. 12 Feb 2015, 2 commits
    • oom, PM: make OOM detection in the freezer path raceless · c32b3cbe
      Authored by Michal Hocko
      Commit 5695be14 ("OOM, PM: OOM killed task shouldn't escape PM
      suspend") has left a race window when OOM killer manages to
      note_oom_kill after freeze_processes checks the counter.  The race
      window is quite small and really unlikely and partial solution deemed
      sufficient at the time of submission.
      
      Tejun wasn't happy about this partial solution though and insisted on a
      full solution.  That requires the full OOM and freezer's task freezing
      exclusion, though.  This is done by this patch which introduces oom_sem
      RW lock and turns oom_killer_disable() into a full OOM barrier.
      
      oom_killer_disabled check is moved from the allocation path to the OOM
      level and we take oom_sem for reading for both the check and the whole
      OOM invocation.
      
      oom_killer_disable() takes oom_sem for writing so it waits for all
      currently running OOM killer invocations.  Then it disables all further
      OOMs by setting oom_killer_disabled and checks for any oom victims.
      Victims are counted via mark_tsk_oom_victim resp.  unmark_oom_victim.  The
      last victim wakes up all waiters enqueued by oom_killer_disable().
      Therefore this function acts as the full OOM barrier.
      
      The page fault path is covered now as well although it was assumed to be
      safe before.  As per Tejun, "We used to have freezing points deep in file
      system code which may be reachable from page fault." so it would be
      better and more robust to not rely on freezing points here.  Same applies
      to the memcg OOM killer.
      
      out_of_memory tells the caller whether the OOM was allowed to trigger and
      the callers are supposed to handle the situation.  The page allocation
      path simply fails the allocation same as before.  The page fault path will
      retry the fault (more on that later) and Sysrq OOM trigger will simply
      complain to the log.
      
      Normally there wouldn't be any unfrozen user tasks after
      try_to_freeze_tasks so the function will not block. But if there was an
      OOM killer racing with try_to_freeze_tasks and the OOM victim didn't
      finish yet then we have to wait for it. This should complete in a finite
      time, though, because
      
      	- the victim cannot loop in the page fault handler (it would die
      	  on the way out from the exception)
      	- it cannot loop in the page allocator because all the further
      	  allocation would fail and __GFP_NOFAIL allocations are not
      	  acceptable at this stage
      	- it shouldn't be blocked on any locks held by frozen tasks
      	  (try_to_freeze expects lockless context) and kernel threads and
      	  work queues are not frozen yet
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Suggested-by: Tejun Heo <tj@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sysrq: convert printk to pr_* equivalent · 401e4a7c
      Authored by Michal Hocko
      While touching this area, let's convert printk to pr_*.  This also
      makes continuation lines print properly.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 07 Aug 2014, 1 commit
  20. 07 Jun 2014, 2 commits
    • sysrq,rcu: suppress RCU stall warnings while sysrq runs · 722773af
      Authored by Rik van Riel
      Some sysrq handlers can run for a long time, because they dump a lot of
      data onto a serial console.  Having RCU stall warnings pop up in the
      middle of them only makes the problem worse.
      
      This patch temporarily disables RCU stall warnings while a sysrq request
      is handled.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Suggested-by: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Madper Xie <cxie@redhat.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Richard Weinberger <richard@nod.at>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sysrq: rcu-ify __handle_sysrq · 984d74a7
      Authored by Rik van Riel
      Echoing values into /proc/sysrq-trigger seems to be a popular way to get
      information out of the kernel.  However, dumping information about
      thousands of processes, or hundreds of CPUs to serial console can result
      in IRQs being blocked for minutes, resulting in various kinds of cascade
      failures.
      
      The most common failure is due to interrupts being blocked for a very
      long time.  This can lead to things like failed IO requests, and other
      things the system cannot easily recover from.
      
      This problem is easily fixable by making __handle_sysrq use RCU instead
      of spin_lock_irqsave.
      
      This leaves the warning that RCU grace periods have not elapsed for a
      long time, but the system will come back from that automatically.
      
      It also leaves sysrq-from-irq-context when the sysrq keys are pressed,
      but that is probably desired since people want that to work in
      situations where the system is already hosed.
      
      The callers of register_sysrq_key and unregister_sysrq_key appear to be
      capable of sleeping.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Reported-by: Madper Xie <cxie@redhat.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  21. 05 Jun 2014, 1 commit
  22. 17 Oct 2013, 1 commit
  23. 13 Aug 2013, 1 commit
  24. 07 Jun 2013, 1 commit
  25. 04 Jun 2013, 1 commit
  26. 02 Apr 2013, 1 commit
  27. 16 Mar 2013, 1 commit
  28. 28 Feb 2013, 1 commit
    • sysrq: don't depend on weak undefined arrays to have an address that compares as NULL · adf96e6f
      Authored by Linus Torvalds
      When taking an address of an extern array, gcc quite naturally should be
      able to say "an address of an object can never be NULL" and just
      optimize away the test entirely.
      
      However, the new alternate sysrq reset code (commit 154b7a48:
      "Input: sysrq - allow specifying alternate reset sequence") did exactly
      that: it declared platform_sysrq_reset_seq[] as a weak array, expecting
      that testing the address of the array would show whether it actually
      got linked against something or not.
      
      And that doesn't work with all gcc versions.  Clearly it works with
      *some* versions of gcc, and maybe it's even supposed to work, but it
      really is a very fragile concept.
      
      So instead of testing the address of the weak variable, just create a
      weak instance of that array that is empty.  If some platform then has a
      real platform_sysrq_reset_seq[] that overrides our weak one, the linker
      will switch to that one, and it all works without any run-time
      conditionals at all.
      Reported-by: Dave Airlie <airlied@gmail.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  29. 08 Feb 2013, 1 commit
  30. 17 Jan 2013, 1 commit
  31. 16 Nov 2012, 1 commit
  32. 17 Oct 2012, 1 commit
  33. 06 Apr 2012, 1 commit
    • sysrq: use SEND_SIG_FORCED instead of force_sig() · b82c3287
      Authored by Anton Vorontsov
      Change send_sig_all() to use do_send_sig_info(SEND_SIG_FORCED) instead
      of force_sig(SIGKILL).  With the recent changes we do not need force_ to
      kill the CLONE_NEWPID tasks.
      
      And this is more correct.  force_sig() can race with the exiting thread,
      while do_send_sig_info(group => true) kills the whole process.
      
      Some more notes from Oleg Nesterov:
      
      > Just one note. This change makes no difference for sysrq_handle_kill().
      > But it obviously changes the behaviour sysrq_handle_term(). I think
      > this is fine, if you want to really kill the task which blocks/ignores
      > SIGTERM you can use sysrq_handle_kill().
      >
      > Even ignoring the reasons why force_sig() is simply wrong here,
      > force_sig(SIGTERM) looks strange. The task won't be killed if it has
      > a handler, but SIG_IGN can't help. However if it has the handler
      > but blocks SIGTERM temporary (this is very common) it will be killed.
      
      Also,
      
      > force_sig() can't kill the process if the main thread has already
      > exited. IOW, it is trivial to create the process which can't be
      > killed by sysrq.
      
      So, this patch fixes the issue.
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Anton Vorontsov <anton.vorontsov@linaro.org>
      Cc: Alan Cox <alan@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  34. 22 Mar 2012, 1 commit
  35. 09 Mar 2012, 1 commit
    • vt:tackle kbd_table · 079c9534
      Authored by Alan Cox
      Keyboard struct lifetime is easy, but the locking is not and is
      completely ignored by the existing code.  Tackle this one head on:
      
      - Make the kbd_table private so we can run down all direct users
      - Hoick the relevant ioctl handlers into the keyboard layer
      - Lock them with the keyboard lock so they don't change mid keypress
      - Add helpers for things like console stop/start so we isolate the poking
        around properly
      - Tweak the braille console so it still builds
      
      There are a couple of FIXME locking cases left for ioctls that are so hideous
      they should be addressed in a later patch. After this patch the kbd_table is
      private and all the keyboard jiggery pokery is in one place.
      
      This update fixes speakup and also a memory leak in the original.
      Signed-off-by: Alan Cox <alan@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>