1. 13 February 2018, 3 commits
    • debugobjects: Use global free list in free_object() · 636e1970
      Committed by Yang Shi
      The newly added global free list makes it possible to avoid lengthy pool_list
      iterations in free_obj_work(): objects are either put back on the pool list,
      when the fill level of the pool is below the maximum, or put on the global
      free list immediately.
      
      As the pool is now guaranteed to never exceed the maximum fill level, the
      batch removal from the pool list in free_obj_work() can be dropped.
      
      Split free_object() into two parts, so the actual queueing function can be
      reused without invoking schedule_work() on every invocation.
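
      A minimal sketch of the resulting split (the helper name and details are
      illustrative, not necessarily the exact upstream code):

        static bool __free_object(struct debug_obj *obj)
        {
                unsigned long flags;
                bool work;

                raw_spin_lock_irqsave(&pool_lock, flags);
                /* Pool already full? Route the object to the global free list. */
                work = (obj_pool_free > debug_objects_pool_size) && obj_cache;
                obj_pool_used--;
                if (work) {
                        obj_nr_tofree++;
                        hlist_add_head(&obj->node, &obj_to_free);
                } else {
                        obj_pool_free++;
                        hlist_add_head(&obj->node, &obj_pool);
                }
                raw_spin_unlock_irqrestore(&pool_lock, flags);
                return work;
        }

        static void free_object(struct debug_obj *obj)
        {
                /* Only kick the worker when something was queued for freeing. */
                if (__free_object(obj))
                        schedule_work(&debug_obj_work);
        }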
      
      [ tglx: Remove the batch removal from pool list and massage changelog ]
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: longman@redhat.com
      Link: https://lkml.kernel.org/r/1517872708-24207-4-git-send-email-yang.shi@linux.alibaba.com
      636e1970
    • debugobjects: Add global free list and the counter · 36c4ead6
      Committed by Yang Shi
      free_object() adds objects to the pool list and schedules work when the
      pool list is larger than the pool size.  The worker handles the actual
      kfree() of the object by iterating the pool list until the pool size is
      below the maximum pool size again.
      
      To iterate the pool list, pool_lock has to be held and the objects which
      should be freed need to be put into temporary storage so pool_lock can be
      dropped for the actual kmem_cache_free() invocation. That's a pointless and
      expensive exercise if there is a large number of objects to free.
      
      In such a case it's better to evaluate the fill level of the pool in
      free_object() and queue the object to be freed either on the pool list or,
      if the pool is full, on a separate global free list.
      
      The worker can then do the following simpler operation:
      
        - Move objects back from the global free list to the pool list if the
          pool list is no longer full.
      
        - Remove the remaining objects in a single list move operation from the
          global free list and do the kmem_cache_free() operation lockless from
          the temporary list head.
      
      In fill_pool() the global free list is checked as well to avoid real
      allocations from the kmem cache.
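
      A sketch of the resulting worker and the batch move described above (names
      and details are illustrative):

        static void free_obj_work(struct work_struct *work)
        {
                struct hlist_node *tmp;
                struct debug_obj *obj;
                unsigned long flags;
                HLIST_HEAD(tofree);

                if (!raw_spin_trylock_irqsave(&pool_lock, flags))
                        return;

                /* Refill the pool from the global free list while it has room. */
                while (obj_nr_tofree && obj_pool_free < debug_objects_pool_size) {
                        obj = hlist_entry(obj_to_free.first, struct debug_obj, node);
                        hlist_del(&obj->node);
                        hlist_add_head(&obj->node, &obj_pool);
                        obj_pool_free++;
                        obj_nr_tofree--;
                }

                /* Move the rest in one go and free it outside the lock. */
                hlist_move_list(&obj_to_free, &tofree);
                obj_nr_tofree = 0;
                raw_spin_unlock_irqrestore(&pool_lock, flags);

                hlist_for_each_entry_safe(obj, tmp, &tofree, node) {
                        hlist_del(&obj->node);
                        kmem_cache_free(obj_cache, obj);
                }
        }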
      
      Add the necessary list head and a counter for the number of objects on the
      global free list and export that counter via debugfs:
      
      max_chain     :79
      max_loops     :8147
      warnings      :0
      fixups        :0
      pool_free     :1697
      pool_min_free :346
      pool_used     :15356
      pool_max_used :23933
      on_free_list  :39
      objs_allocated:32617
      objs_freed    :16588
      
      Nothing queues objects on the global free list yet. This happens in a
      follow up change.
      
      [ tglx: Simplified implementation and massaged changelog ]
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: longman@redhat.com
      Link: https://lkml.kernel.org/r/1517872708-24207-3-git-send-email-yang.shi@linux.alibaba.com
      36c4ead6
    • debugobjects: Export max loops counter · bd9dcd04
      Committed by Yang Shi
      __debug_check_no_obj_freed() can be an expensive operation depending on the
      size of memory freed. It already exports the maximum chain walk length via
      debugfs, but this only records the maximum of a single memory chunk.
      
      However, there is no information about the total number of objects inspected
      for a __debug_check_no_obj_freed() operation, which might be significantly
      larger when a huge memory region is freed.
      
      Aggregate the number of objects inspected for a single invocation of
      __debug_check_no_obj_freed() and export it via debugfs.
      
      The resulting output of /sys/kernel/debug/debug_objects/stats looks like:
      
      max_chain     :121
      max_checked   :543267
      warnings      :0
      fixups        :0
      pool_free     :1764
      pool_min_free :341
      pool_used     :86438
      pool_max_used :268887
      objs_allocated:6068254
      objs_freed    :5981076
      
      [ tglx: Renamed the variable to max_checked and adjusted changelog ]
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: longman@redhat.com
      Link: https://lkml.kernel.org/r/1517872708-24207-2-git-send-email-yang.shi@linux.alibaba.com
      bd9dcd04
  2. 14 August 2017, 1 commit
    • debugobjects: Make kmemleak ignore debug objects · caba4cbb
      Committed by Waiman Long
      The allocated debug objects are either on the free list or in the
      hashed bucket lists, so they won't get lost. However, if both debug
      objects and kmemleak are enabled and a kmemleak scan is done while
      some of the debug objects are transitioning from one list to the
      other, false positive reports of memory leaks may happen for those
      objects. For example,
      
      [38687.275678] kmemleak: 12 new suspected memory leaks (see
      /sys/kernel/debug/kmemleak)
      unreferenced object 0xffff92e98aabeb68 (size 40):
        comm "ksmtuned", pid 4344, jiffies 4298403600 (age 906.430s)
        hex dump (first 32 bytes):
          00 00 00 00 00 00 00 00 d0 bc db 92 e9 92 ff ff  ................
          01 00 00 00 00 00 00 00 38 36 8a 61 e9 92 ff ff  ........86.a....
        backtrace:
          [<ffffffff8fa5378a>] kmemleak_alloc+0x4a/0xa0
          [<ffffffff8f47c019>] kmem_cache_alloc+0xe9/0x320
          [<ffffffff8f62ed96>] __debug_object_init+0x3e6/0x400
          [<ffffffff8f62ef01>] debug_object_activate+0x131/0x210
          [<ffffffff8f330d9f>] __call_rcu+0x3f/0x400
          [<ffffffff8f33117d>] call_rcu_sched+0x1d/0x20
          [<ffffffff8f4a183c>] put_object+0x2c/0x40
          [<ffffffff8f4a188c>] __delete_object+0x3c/0x50
          [<ffffffff8f4a18bd>] delete_object_full+0x1d/0x20
          [<ffffffff8fa535c2>] kmemleak_free+0x32/0x80
          [<ffffffff8f47af07>] kmem_cache_free+0x77/0x350
          [<ffffffff8f453912>] unlink_anon_vmas+0x82/0x1e0
          [<ffffffff8f440341>] free_pgtables+0xa1/0x110
          [<ffffffff8f44af91>] exit_mmap+0xc1/0x170
          [<ffffffff8f29db60>] mmput+0x80/0x150
          [<ffffffff8f2a7609>] do_exit+0x2a9/0xd20
      
      The references in the debug objects may also hide a real memory leak.
      
      As there is no point in having kmemleak track debug object
      allocations, kmemleak checking is now disabled for debug objects.
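
      A sketch of the change, assuming the objects come from the dedicated kmem
      cache as in fill_pool() (the helper name and call site are illustrative;
      requires <linux/kmemleak.h>):

        static struct debug_obj *alloc_debug_obj(gfp_t gfp)
        {
                struct debug_obj *new;

                new = kmem_cache_zalloc(obj_cache, gfp);
                if (new) {
                        /* debugobjects tracks these itself; keep kmemleak out of it. */
                        kmemleak_ignore(new);
                }
                return new;
        }
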
      Signed-off-by: Waiman Long <longman@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1502718733-8527-1-git-send-email-longman@redhat.com
      caba4cbb
  3. 02 March 2017, 1 commit
  4. 10 February 2017, 1 commit
  5. 06 February 2017, 1 commit
    • debugobjects: Reduce contention on the global pool_lock · 858274b6
      Committed by Waiman Long
      On a large SMP system with many CPUs, the global pool_lock may become
      a performance bottleneck as all the CPUs that need to allocate or
      free debug objects have to take the lock. That can sometimes cause
      soft lockups like:
      
       NMI watchdog: BUG: soft lockup - CPU#35 stuck for 22s! [rcuos/1:21]
       ...
       RIP: 0010:[<ffffffff817c216b>]  [<ffffffff817c216b>]
      	_raw_spin_unlock_irqrestore+0x3b/0x60
       ...
       Call Trace:
        [<ffffffff813f40d1>] free_object+0x81/0xb0
        [<ffffffff813f4f33>] debug_check_no_obj_freed+0x193/0x220
        [<ffffffff81101a59>] ? trace_hardirqs_on_caller+0xf9/0x1c0
        [<ffffffff81284996>] ? file_free_rcu+0x36/0x60
        [<ffffffff81251712>] kmem_cache_free+0xd2/0x380
        [<ffffffff81284960>] ? fput+0x90/0x90
        [<ffffffff81284996>] file_free_rcu+0x36/0x60
        [<ffffffff81124c23>] rcu_nocb_kthread+0x1b3/0x550
        [<ffffffff81124b71>] ? rcu_nocb_kthread+0x101/0x550
        [<ffffffff81124a70>] ? sync_exp_work_done.constprop.63+0x50/0x50
        [<ffffffff810c59d1>] kthread+0x101/0x120
        [<ffffffff81101a59>] ? trace_hardirqs_on_caller+0xf9/0x1c0
        [<ffffffff817c2d32>] ret_from_fork+0x22/0x50
      
      To reduce the amount of contention on the pool_lock, the actual
      kmem_cache_free() of the debug objects is delayed if the pool_lock
      is busy. This temporarily increases the number of free objects
      available in the free pool when the system is busy. As a result,
      the number of kmem_cache allocations and frees is reduced.
      
      To further reduce the lock operations, free debug objects in batches of
      four.
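
      A sketch of the batched, trylock-based free path (the constant and
      structure are illustrative, built on the existing pool_lock and obj_pool):

        #define ODEBUG_FREE_BATCH       4

        static void free_obj_work(struct work_struct *work)
        {
                struct debug_obj *objs[ODEBUG_FREE_BATCH];
                unsigned long flags;
                int i;

                /* Lock contended? Skip this round and retry on the next run. */
                if (!raw_spin_trylock_irqsave(&pool_lock, flags))
                        return;
                while (obj_pool_free >= ODEBUG_POOL_SIZE + ODEBUG_FREE_BATCH) {
                        for (i = 0; i < ODEBUG_FREE_BATCH; i++) {
                                objs[i] = hlist_entry(obj_pool.first,
                                                      struct debug_obj, node);
                                hlist_del(&objs[i]->node);
                        }
                        obj_pool_free -= ODEBUG_FREE_BATCH;
                        /* Drop the lock for the actual kmem_cache_free() calls. */
                        raw_spin_unlock_irqrestore(&pool_lock, flags);
                        for (i = 0; i < ODEBUG_FREE_BATCH; i++)
                                kmem_cache_free(obj_cache, objs[i]);
                        if (!raw_spin_trylock_irqsave(&pool_lock, flags))
                                return;
                }
                raw_spin_unlock_irqrestore(&pool_lock, flags);
        }
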
      Signed-off-by: Waiman Long <longman@redhat.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: "Du Changbin" <changbin.du@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jan Stancek <jstancek@redhat.com>
      Link: http://lkml.kernel.org/r/1483647425-4135-4-git-send-email-longman@redhat.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      858274b6
  6. 04 February 2017, 2 commits
  7. 01 December 2016, 1 commit
  8. 18 September 2016, 1 commit
  9. 20 May 2016, 3 commits
    • debugobjects: insulate non-fixup logic related to static obj from fixup callbacks · b9fdac7f
      Committed by Du, Changbin
      When activating a static object we need to make sure that the object is
      tracked in the object tracker.  If it is a non-static object then the
      activation is illegal.
      
      In the previous implementation, each subsystem needed to take care of this in
      its fixup callbacks.  Actually we can put it into the debugobjects core.
      Thus we can save duplicated code, and have *pure* fixup callbacks.
      
      To achieve this, a new callback "is_static_object" is introduced to let
      the type specific code decide whether an object is static or not.  If
      it is, the object is taken into the object tracker; otherwise a warning
      is issued and the fixup callback is invoked.
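
      A sketch of the new callback in the descriptor, with the timer case as an
      example of type specific "is this statically initialized?" logic (details
      are illustrative):

        struct debug_obj_descr {
                const char *name;
                void *(*debug_hint)(void *addr);
                /* New: let the user decide whether an untracked object is static. */
                bool (*is_static_object)(void *addr);
                bool (*fixup_init)(void *addr, enum debug_obj_state state);
                bool (*fixup_activate)(void *addr, enum debug_obj_state state);
                /* ... remaining fixup callbacks ... */
        };

        static bool timer_is_static_object(void *addr)
        {
                struct timer_list *timer = addr;

                /* A statically initialized timer has never been enqueued. */
                return (timer->entry.pprev == NULL &&
                        timer->entry.next == TIMER_ENTRY_STATIC);
        }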
      
      This change has passed the debugobjects selftest, and I also did some
      testing with all debugobjects support enabled.
      
      Finally, I have a concern about the fixups: can they change an object
      which is in an incorrect state on fixup?  The 'addr' may not point to
      any valid object if a non-static object is not tracked, so changing
      such an object can overwrite someone else's memory and cause unexpected
      behaviour.  For example, timer_fixup_activate binds the timer to the
      function stub_timer.
      
      Link: http://lkml.kernel.org/r/1462576157-14539-1-git-send-email-changbin.du@intel.com
      [changbin.du@intel.com: improve code comments where the new is_static_object callback is invoked]
        Link: http://lkml.kernel.org/r/1462777431-8171-1-git-send-email-changbin.du@intel.com
      Signed-off-by: Du, Changbin <changbin.du@intel.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Josh Triplett <josh@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b9fdac7f
    • debugobjects: correct the usage of fixup call results · e7a8e78b
      Committed by Du, Changbin
      debug_object_fixup() returns non-zero when the problem has been fixed.
      But the code got it backwards: it takes 0 as a successful fixup.
      So fix it.
      Signed-off-by: Du, Changbin <changbin.du@intel.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Josh Triplett <josh@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e7a8e78b
    • debugobjects: make fixup functions return bool instead of int · b1e4d9d8
      Committed by Du, Changbin
      I am going to introduce the debugobjects infrastructure to the USB
      subsystem.  But before this, I found the debugobjects code could be
      improved.  This patchset makes the fixup functions return bool type
      instead of int, because a fixup only needs to report success or failure;
      boolean is the 'real' type.
      
      This patch (of 7):
      
      The object debugging infrastructure core provides some fixup callbacks
      for the subsystems which use it.  These callbacks are called from the debug
      code whenever a problem in debug_object_init() is detected, and the
      debugobjects core expects them to return 1 when the fixup was successful,
      otherwise 0.  So the return type is effectively boolean.
      
      A bad thing is that debug_object_fixup() uses the return value for an
      arithmetic operation, which made it confusing what the real return type is.
      
      Reading over the whole code, I found some places that use the return value
      incorrectly (see next patch).  So why not use the bool type instead?
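
      A sketch of the signature change for one fixup callback (the work_struct
      case; treat the body as illustrative):

        /*
         * Before: int return value, 1 == fixed, 0 == not fixed:
         *     static int work_fixup_init(void *addr, enum debug_obj_state state);
         * After: the same information with its natural type:
         */
        static bool work_fixup_init(void *addr, enum debug_obj_state state)
        {
                struct work_struct *work = addr;

                switch (state) {
                case ODEBUG_STATE_ACTIVE:
                        cancel_work_sync(work);
                        debug_object_init(work, &work_debug_descr);
                        return true;
                default:
                        return false;
                }
        }
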
      Signed-off-by: Du, Changbin <changbin.du@intel.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Josh Triplett <josh@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b1e4d9d8
  10. 27 January 2016, 1 commit
  11. 05 June 2014, 3 commits
  12. 13 November 2013, 1 commit
  13. 19 August 2013, 1 commit
    • debugobjects: Make debug_object_activate() return status · b778ae25
      Committed by Paul E. McKenney
      In order to better respond to things like duplicate invocations
      of call_rcu(), RCU needs to see the status of a call to
      debug_object_activate().  This would allow RCU to leak the callback in
      order to avoid adding freelist-reuse mischief to the duplicate invocations.
      This commit therefore makes debug_object_activate() return status,
      zero for success and -EINVAL for failure.
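
      A sketch of how a caller can use the new return value (the queueing helper
      here is hypothetical; RCU's real code differs in detail):

        static void my_queue_rcu_callback(struct rcu_head *head,
                                          void (*func)(struct rcu_head *))
        {
                /*
                 * Non-zero (-EINVAL) means activation failed, e.g. a duplicate
                 * call_rcu() on the same head: leak the callback instead of
                 * queueing it twice and corrupting the freelist.
                 */
                if (debug_object_activate(head, &rcuhead_debug_descr)) {
                        WARN_ON_ONCE(1);
                        return;
                }
                head->func = func;
                /* ... hand the callback to the grace-period machinery ... */
        }
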
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Sedat Dilek <sedat.dilek@gmail.com>
      Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      b778ae25
  14. 28 February 2013, 1 commit
    • hlist: drop the node parameter from iterators · b67bfe0d
      Committed by Sasha Levin
      I'm not sure why, but the hlist for_each_entry iterators were conceived
      differently from the list iterator:
      
              list_for_each_entry(pos, head, member)
      
      The hlist ones were greedy and wanted an extra parameter:
      
              hlist_for_each_entry(tpos, pos, head, member)
      
      Why did they need an extra pos parameter? I'm not quite sure. Not only
      do they not really need it, it also prevents the iterator from looking
      exactly like the list iterator, which is unfortunate.
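
      A before/after usage example under the new form (the obj/bucket structure
      is hypothetical):

        struct obj {
                struct hlist_node node;
                int id;
        };

        static struct obj *lookup(struct hlist_head *bucket, int id)
        {
                struct obj *obj;

                /*
                 * Before: hlist_for_each_entry(obj, pos, bucket, node) needed an
                 * extra struct hlist_node *pos cursor that served no purpose.
                 */
                hlist_for_each_entry(obj, bucket, node) {
                        if (obj->id == id)
                                return obj;
                }
                return NULL;
        }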
      
      Besides the semantic patch, there was some manual work required:
      
       - Fix up the actual hlist iterators in linux/list.h
       - Fix up the declaration of other iterators based on the hlist ones.
       - A very small amount of places were using the 'node' parameter, this
       was modified to use 'obj->member' instead.
       - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
       properly, so those had to be fixed up manually.
      
      The semantic patch which is mostly the work of Peter Senna Tschudin is here:
      
      @@
      iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
      
      type T;
      expression a,c,d,e;
      identifier b;
      statement S;
      @@
      
      -T b;
          <+... when != b
      (
      hlist_for_each_entry(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue(a,
      - b,
      c) S
      |
      hlist_for_each_entry_from(a,
      - b,
      c) S
      |
      hlist_for_each_entry_rcu(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_rcu_bh(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue_rcu_bh(a,
      - b,
      c) S
      |
      for_each_busy_worker(a, c,
      - b,
      d) S
      |
      ax25_uid_for_each(a,
      - b,
      c) S
      |
      ax25_for_each(a,
      - b,
      c) S
      |
      inet_bind_bucket_for_each(a,
      - b,
      c) S
      |
      sctp_for_each_hentry(a,
      - b,
      c) S
      |
      sk_for_each(a,
      - b,
      c) S
      |
      sk_for_each_rcu(a,
      - b,
      c) S
      |
      sk_for_each_from
      -(a, b)
      +(a)
      S
      + sk_for_each_from(a) S
      |
      sk_for_each_safe(a,
      - b,
      c, d) S
      |
      sk_for_each_bound(a,
      - b,
      c) S
      |
      hlist_for_each_entry_safe(a,
      - b,
      c, d, e) S
      |
      hlist_for_each_entry_continue_rcu(a,
      - b,
      c) S
      |
      nr_neigh_for_each(a,
      - b,
      c) S
      |
      nr_neigh_for_each_safe(a,
      - b,
      c, d) S
      |
      nr_node_for_each(a,
      - b,
      c) S
      |
      nr_node_for_each_safe(a,
      - b,
      c, d) S
      |
      - for_each_gfn_sp(a, c, d, b) S
      + for_each_gfn_sp(a, c, d) S
      |
      - for_each_gfn_indirect_valid_sp(a, c, d, b) S
      + for_each_gfn_indirect_valid_sp(a, c, d) S
      |
      for_each_host(a,
      - b,
      c) S
      |
      for_each_host_safe(a,
      - b,
      c, d) S
      |
      for_each_mesh_entry(a,
      - b,
      c, d) S
      )
          ...+>
      
      [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
      [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
      [akpm@linux-foundation.org: checkpatch fixes]
      [akpm@linux-foundation.org: fix warnings]
      [akpm@linux-foundation.org: redo intrusive kvm changes]
      Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b67bfe0d
  15. 18 April 2012, 1 commit
  16. 11 April 2012, 2 commits
  17. 06 March 2012, 1 commit
  18. 24 November 2011, 2 commits
  19. 20 June 2011, 1 commit
    • debugobjects: Fix boot crash when kmemleak and debugobjects enabled · 161b6ae0
      Committed by Marcin Slusarz
      The order of initialization looks like this:
      ...
      debugobjects
      kmemleak
      ...(lots of other subsystems)...
      workqueues (through early initcall)
      ...
      
      debugobjects uses schedule_work() for batch freeing of its data and kmemleak
      heavily uses debugobjects, so when freeing happens while workqueues are
      not initialized yet, the kernel crashes:
      
      BUG: unable to handle kernel NULL pointer dereference at           (null)
      IP: [<ffffffff810854d1>] __queue_work+0x29/0x41a
       [<ffffffff81085910>] queue_work_on+0x16/0x1d
       [<ffffffff81085abc>] queue_work+0x29/0x55
       [<ffffffff81085afb>] schedule_work+0x13/0x15
       [<ffffffff81242de1>] free_object+0x90/0x95
       [<ffffffff81242f6d>] debug_check_no_obj_freed+0x187/0x1d3
       [<ffffffff814b6504>] ? _raw_spin_unlock_irqrestore+0x30/0x4d
       [<ffffffff8110bd14>] ? free_object_rcu+0x68/0x6d
       [<ffffffff8110890c>] kmem_cache_free+0x64/0x12c
       [<ffffffff8110bd14>] free_object_rcu+0x68/0x6d
       [<ffffffff810b58bc>] __rcu_process_callbacks+0x1b6/0x2d9
      ...
      
      because system_wq is NULL.
      
      Fix it by checking whether the workqueue subsystem has been initialized
      before using it.
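
      A sketch of the guarded scheduling in free_object(); keventd_up() here
      stands for "the workqueue subsystem (system_wq) has been set up", and the
      surrounding details are illustrative:

        static void free_object(struct debug_obj *obj)
        {
                unsigned long flags;
                int sched = 0;

                raw_spin_lock_irqsave(&pool_lock, flags);
                /*
                 * Only schedule the batch free when the pool is over its limit,
                 * the slab cache exists and workqueues are already usable.
                 */
                if (obj_pool_free > ODEBUG_POOL_SIZE && obj_cache)
                        sched = keventd_up() && !work_pending(&debug_obj_work);
                hlist_add_head(&obj->node, &obj_pool);
                obj_pool_free++;
                obj_pool_used--;
                raw_spin_unlock_irqrestore(&pool_lock, flags);
                if (sched)
                        schedule_work(&debug_obj_work);
        }
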
      Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: stable@kernel.org
      Link: http://lkml.kernel.org/r/20110528112342.GA3068@joi.lan
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      161b6ae0
  20. 08 March 2011, 1 commit
    • debugobjects: Add hint for better object identification · 99777288
      Committed by Stanislaw Gruszka
      In complex subsystems like mac80211, structures can contain several
      timers and work structs, so identifying a specific instance from the
      call trace and object type output of debugobjects can be hard.
      
      Allow the subsystems which support debugobjects to provide a hint
      function. This function returns a pointer to a kernel address
      (preferably the object's callback function) which is printed along
      with the debugobject type.
      
      Add hint methods for timer_list, work_struct and hrtimer.
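
      A sketch of a hint callback and its descriptor hook (the timer case mirrors
      what the changelog describes; treat the details as illustrative):

        /* Return the address that best identifies this instance: its callback. */
        static void *timer_debug_hint(void *addr)
        {
                return ((struct timer_list *) addr)->function;
        }

        static struct debug_obj_descr timer_debug_descr = {
                .name           = "timer_list",
                .debug_hint     = timer_debug_hint,
                /* .fixup_init, .fixup_activate, ... unchanged */
        };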
      
      [ tglx: Massaged changelog, made it compile ]
      Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
      LKML-Reference: <20110307085809.GA9334@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      99777288
  21. 11 May 2010, 1 commit
    • Debugobjects transition check · a5d8e467
      Committed by Mathieu Desnoyers
      Implement a basic state machine checker in the debugobjects.
      
      This state machine checker detects races and inconsistencies within the "active"
      life of a debugobject. The checker only keeps track of the current state; all
      the state machine logic is kept at the object instance level.
      
      The checker works by adding a supplementary "unsigned int astate" field to the
      debug_obj structure. It keeps track of the current "active state" of the object.
      
      The only constraints that are imposed on the states by the debugobjects system
      are that:
      
      - activation of an object sets the current active state to 0,
      - deactivation of an object expects the current active state to be 0.
      
      For the rest of the states, the state mapping is determined by the specific
      object instance. Therefore, the logic keeping track of the state machine is
      within the specialized instance, without any need to know about it at the
      debugobject level.
      
      The current object active state is changed by calling:
      
      debug_object_active_state(addr, descr, expect, next)
      
      where "expect" is the expected state and "next" is the next state to move to if
      the expected state is found. A warning is generated if the expected state is
      not found.
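
      An illustrative use, in the style of the RCU head tracking that motivated
      this check (the state values and helper names are the caller's own):

        /* Instance-specific "active" states; activation starts at 0. */
        enum {
                STATE_RCU_HEAD_READY = 0,   /* activated, not queued yet */
                STATE_RCU_HEAD_QUEUED,      /* callback queued, not invoked yet */
        };

        static void debug_rcu_head_queue(struct rcu_head *head)
        {
                /* Expect READY, move to QUEUED; warn on duplicate queueing. */
                debug_object_active_state(head, &rcuhead_debug_descr,
                                          STATE_RCU_HEAD_READY,
                                          STATE_RCU_HEAD_QUEUED);
        }

        static void debug_rcu_head_unqueue(struct rcu_head *head)
        {
                /* Expect QUEUED, move back to READY before the callback runs. */
                debug_object_active_state(head, &rcuhead_debug_descr,
                                          STATE_RCU_HEAD_QUEUED,
                                          STATE_RCU_HEAD_READY);
        }
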
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: David S. Miller <davem@davemloft.net>
      CC: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      CC: akpm@linux-foundation.org
      CC: mingo@elte.hu
      CC: laijs@cn.fujitsu.com
      CC: dipankar@in.ibm.com
      CC: josh@joshtriplett.org
      CC: dvhltc@us.ibm.com
      CC: niv@us.ibm.com
      CC: peterz@infradead.org
      CC: rostedt@goodmis.org
      CC: Valdis.Kletnieks@vt.edu
      CC: dhowells@redhat.com
      CC: eric.dumazet@gmail.com
      CC: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      a5d8e467
  22. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Committed by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  23. 27 March 2010, 1 commit
    • debugobjects: Section mismatch cleanup · 1fb2f77c
      Committed by Henrik Kretzschmar
      This patch marks two functions, which only get called at
      initialization, as __init.
      
      It is also interesting that modpost doesn't report the right
      function name here.
      
      WARNING: lib/built-in.o(.text+0x585f): Section mismatch in reference
      from the function T.506() to the variable .init.data:obj
      The function T.506() references the variable __initdata obj.
      This is often because T.506 lacks a __initdata annotation or the 
      annotation of obj is wrong.
      Signed-off-by: Henrik Kretzschmar <henne@nachtwindheim.de>
      LKML-Reference: <1269632315-19403-1-git-send-email-henne@nachtwindheim.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      1fb2f77c
  24. 15 December 2009, 1 commit
  25. 12 October 2009, 1 commit
  26. 17 March 2009, 2 commits
    • debugobjects: delay free of internal objects · 337fff8b
      Committed by Thomas Gleixner
      Impact: avoid recursive kfree calls, less slab activity on heavy load
      
      debugobjects checks on kfree whether tracked objects are freed. When a
      tracked object is freed debugobjects frees the internal reference
      object as well. The debug object slab cache is marked to not recurse
      into debugobjects when a slab object is freed, but the recursive call
      can be problematic versus locking in the memory allocator.
      
      Defer the freeing of debug slab objects via schedule_work. The reasons
      not to use RCU are:
      
      1) rcu makes the data structure larger
      2) there is no real need for rcu as nothing references the obj after
         we freed it
      3) under heavy load it is easier to reuse the to-be-freed objects instead
         of allocating new objects from the slab. This lowered the slab activity
         significantly in a heavy load networking test where lots of timers are
         created/destroyed. The workqueue based delayed free allows us just to
         put the to-be-freed objects back into the object pool and reuse them
         right away.
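
      A sketch of the deferred free path described above (the work item and
      loop details are illustrative):

        static void free_obj_work(struct work_struct *work)
        {
                struct debug_obj *obj;
                unsigned long flags;

                spin_lock_irqsave(&pool_lock, flags);
                while (obj_pool_free > ODEBUG_POOL_SIZE) {
                        obj = hlist_entry(obj_pool.first, struct debug_obj, node);
                        hlist_del(&obj->node);
                        obj_pool_free--;
                        /* Drop pool_lock across the actual kmem_cache_free(). */
                        spin_unlock_irqrestore(&pool_lock, flags);
                        kmem_cache_free(obj_cache, obj);
                        spin_lock_irqsave(&pool_lock, flags);
                }
                spin_unlock_irqrestore(&pool_lock, flags);
        }

        static DECLARE_WORK(debug_obj_work, free_obj_work);
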
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <200903162049.58058.nickpiggin@yahoo.com.au>
      337fff8b
    • debugobjects: replace static objects when slab cache becomes available · 1be1cb7b
      Committed by Thomas Gleixner
      Impact: refactor/consolidate object management, prepare for delayed free
      
      debugobjects allocates static reference objects to track objects which
      are initialized or activated before the slab cache becomes
      available. These static reference objects have to be handled
      seperately in free_object(). The handling of these objects is in the
      way of implementing a delayed free functionality. The delayed free is
      required to avoid callbacks into the mm code from
      debug_check_no_obj_freed().
      
      Replace the static object references with dynamic ones after the slab
      cache has been initialized. The static objects are now marked initdata.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <200903162049.58058.nickpiggin@yahoo.com.au>
      1be1cb7b
  27. 02 March 2009, 1 commit
  28. 26 November 2008, 1 commit
    • debugobjects: add boot parameter default value · 3ae70205
      Committed by Ingo Molnar
      Impact: add .config driven boot parameter default value
      
      Right now debugobjects can only be activated if the debug_objects
      boot parameter is passed in via the boot command line.
      
      Make this more convenient (and randomizable) by also providing
      a .config method. Enable it by default. (DEBUG_OBJECTS itself
      is default-off)
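
      A sketch of the .config driven default combined with the existing boot
      parameters (the Kconfig symbol is assumed to be DEBUG_OBJECTS_ENABLE_DEFAULT
      in lib/Kconfig.debug; treat the details as illustrative):

        static int debug_objects_enabled __read_mostly
                                = CONFIG_DEBUG_OBJECTS_ENABLE_DEFAULT;

        static int __init enable_object_debug(char *str)
        {
                debug_objects_enabled = 1;
                return 0;
        }

        static int __init disable_object_debug(char *str)
        {
                debug_objects_enabled = 0;
                return 0;
        }

        early_param("debug_objects", enable_object_debug);
        early_param("no_debug_objects", disable_object_debug);
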
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3ae70205
  29. 01 September 2008, 1 commit
    • debugobjects: fix lockdep warning · 673d62cc
      Committed by Vegard Nossum
      Daniel J. Blueman reported:
      > =======================================================
      > [ INFO: possible circular locking dependency detected ]
      > 2.6.27-rc4-224c #1
      > -------------------------------------------------------
      > hald/4680 is trying to acquire lock:
      >  (&n->list_lock){++..}, at: [<ffffffff802bfa26>] add_partial+0x26/0x80
      >
      > but task is already holding lock:
      >  (&obj_hash[i].lock){++..}, at: [<ffffffff8041cfdc>]
      > debug_object_free+0x5c/0x120
      
      We fix it by moving the actual freeing to outside the lock (the lock
      now only protects the list).
      
      The pool lock is also promoted to irq-safe (suggested by Dan). It's
      necessary because free_pool is now called outside the irq disabled
      region. So we need to protect against an interrupt handler which calls
      debug_object_init().
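
      A sketch of the pattern: collect the objects under the (now irq-safe) lock
      with the new hlist_move_list() helper, then free them after dropping it
      (the bucket-draining helper is hypothetical):

        static void drain_bucket(struct debug_bucket *db)
        {
                HLIST_HEAD(freelist);
                struct debug_obj *obj;
                unsigned long flags;

                /* The lock only protects the list manipulation ... */
                spin_lock_irqsave(&db->lock, flags);
                hlist_move_list(&db->list, &freelist);
                spin_unlock_irqrestore(&db->lock, flags);

                /* ... the actual freeing happens outside of it. */
                while (!hlist_empty(&freelist)) {
                        obj = hlist_entry(freelist.first, struct debug_obj, node);
                        hlist_del(&obj->node);
                        free_object(obj);
                }
        }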
      
      [tglx@linutronix.de: added hlist_move_list helper to avoid looping
      		     through the list twice]
      Reported-by: Daniel J Blueman <daniel.blueman@gmail.com>
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      673d62cc
  30. 27 July 2008, 1 commit