- 23 January 2010 (1 commit)
-
-
Committed by David Rientjes
`s' cannot be NULL if kmalloc_caches is not NULL. This conditional would trigger a NULL pointer dereference on `s' anyway, since `s' is dereferenced immediately if the condition were true. Acked-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 20 December 2009 (4 commits)
-
-
Committed by Christoph Lameter
this_cpu_inc() translates into a single instruction on x86 and does not need any register. So use it in stat(). We also want to avoid the calculation of the per cpu kmem_cache_cpu structure pointer, so pass a kmem_cache pointer instead of a kmem_cache_cpu pointer. Signed-off-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
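A minimal sketch of what the reworked stat() helper looks like after this change; the CONFIG_SLUB_STATS guard is taken from the surrounding SLUB code, everything else follows the commit text:

```c
/* The caller now passes the kmem_cache itself; __this_cpu_inc()
 * resolves the per-cpu slot without first computing a
 * kmem_cache_cpu pointer. */
static inline void stat(struct kmem_cache *s, enum stat_item si)
{
#ifdef CONFIG_SLUB_STATS
	__this_cpu_inc(s->cpu_slab->stat[si]);
#endif
}
```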
-
Committed by Christoph Lameter
Remove the fields in struct kmem_cache_cpu that were used to cache data from struct kmem_cache when they were in different cachelines. The cacheline that holds the per cpu array pointer now also holds these values, so we can cut the size of struct kmem_cache_cpu almost in half. The get_freepointer() and set_freepointer() functions that used to be intended only for the slow path are now also useful for the hot path, since access to the size field no longer requires touching an additional cacheline. This results in consistent use of functions for setting the freepointer of objects throughout SLUB. We also initialize all possible kmem_cache_cpu structures when a slab is created; there is no need to initialize them when a processor or node comes online. Signed-off-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
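For reference, the two helpers the commit refers to are small pointer accessors along these lines (a sketch based on SLUB's free-list layout, where s->offset locates the free pointer inside an object):

```c
/* The free pointer lives at s->offset bytes into each free object. */
static inline void *get_freepointer(struct kmem_cache *s, void *object)
{
	return *(void **)(object + s->offset);
}

static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
{
	*(void **)(object + s->offset) = fp;
}
```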
-
Committed by Christoph Lameter
Dynamic DMA kmalloc cache allocation is troublesome since the new percpu allocator does not support allocations in atomic contexts. Reserve some statically allocated kmalloc_cpu structures instead. Signed-off-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Committed by Christoph Lameter
Using per cpu allocations removes the need for the per cpu arrays in the kmem_cache struct. These could get quite big if we have to support systems with thousands of cpus. The use of this_cpu_xx operations results in:
1. The size of kmem_cache for SMP configurations shrinks since we only need 1 pointer instead of NR_CPUS. The same pointer can be used by all processors, reducing the cache footprint of the allocator (see the sketch below).
2. We can dynamically size kmem_cache according to the actual nodes in the system, meaning less memory overhead for configurations that may potentially support up to 1k NUMA nodes / 4k cpus.
3. We can remove the fiddling with allocating and releasing kmem_cache_cpu structures when bringing up and shutting down cpus; the cpu alloc logic does it all for us. This removes some portions of the cpu hotplug functionality.
4. Fastpath performance increases since per cpu pointer lookups and address calculations are avoided.
V7-V8 - Convert missed get_cpu_slab() under CONFIG_SLUB_STATS
Signed-off-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
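A sketch of the shape of the change in point 1: kmem_cache keeps a single per-cpu pointer rather than an NR_CPUS-sized array (field names follow the commit text; the __percpu annotation is an assumption about the era's conventions):

```c
struct kmem_cache_cpu;

struct kmem_cache {
	/* ... other fields ... */

	/* Before: struct kmem_cache_cpu *cpu_slab[NR_CPUS];
	 * one slot per possible cpu, sized at compile time.
	 * After: a single per-cpu pointer shared by all processors. */
	struct kmem_cache_cpu __percpu *cpu_slab;
};

/* The fastpath then reaches its per-cpu state without an array
 * index, e.g.:
 *	struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
 */
```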
-
- 11 December 2009 (1 commit)
-
-
Committed by Li Zefan
Define kmem_trace_alloc_{,node}_notrace() if CONFIG_TRACING is enabled; otherwise perf-kmem will show wrong stats when CONFIG_KMEM_TRACE is not defined, because a kmalloc() memory allocation may be traced by both trace_kmalloc() and trace_kmem_cache_alloc(). Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: linux-mm@kvack.org <linux-mm@kvack.org> Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro> LKML-Reference: <4B21F89A.7000801@cn.fujitsu.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
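The header-side pattern is roughly the following (a sketch; the fallback body is an assumption about how a _notrace variant degrades when tracing is compiled out):

```c
/* Declare the _notrace variant only when tracing is compiled in;
 * otherwise it simply aliases the normal allocation path. */
#ifdef CONFIG_TRACING
extern void *kmem_cache_alloc_notrace(struct kmem_cache *s, gfp_t gfpflags);
#else
static __always_inline void *
kmem_cache_alloc_notrace(struct kmem_cache *s, gfp_t gfpflags)
{
	return kmem_cache_alloc(s, gfpflags);
}
#endif
```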
-
- 29 November 2009 (1 commit)
-
-
Committed by Pekka Enberg
The unlikely() annotation in slab_alloc() covers too much of the expression. It's actually very likely that the object is not NULL, so use unlikely() only for the __GFP_ZERO expression like SLAB does. The patch reduces kernel text by 29 bytes on x86-64:
   text    data     bss     dec     hex filename
  24185    8560     176   32921    8099 mm/slub.o.orig
  24156    8560     176   32892    807c mm/slub.o
Acked-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
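Concretely, the narrowing looks something like this (a sketch of the before/after condition in slab_alloc(); the variable names are assumptions):

```c
/* Before: the whole condition is marked unlikely, including the
 * "object != NULL" part, which is in fact the common case. */
if (unlikely((gfpflags & __GFP_ZERO) && object))
	memset(object, 0, objsize);

/* After: only the rare __GFP_ZERO test carries the hint. */
if (unlikely(gfpflags & __GFP_ZERO) && object)
	memset(object, 0, objsize);
```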
-
- 16 October 2009 (1 commit)
-
-
Committed by David Rientjes
When collecting slub stats for particular workloads, it's necessary to collect each statistic for all caches before the job is even started, because the counters are usually greater than zero just from boot and initialization. This change allows a statistic to be cleared on each cpu by writing '0' to its sysfs file, creating a baseline for the statistics of interest before the workload is started. Setting a statistic to a particular value is not supported, so any value written to these files other than '0' returns -EINVAL. Cc: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
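The sysfs store side of this can be sketched as follows (clear_stat() and the STAT_ATTR_STORE macro name are illustrative, inferred from the commit text; get_cpu_slab() is the per-cpu accessor of that era):

```c
/* Reset one statistic on every cpu. */
static void clear_stat(struct kmem_cache *s, enum stat_item si)
{
	int cpu;

	for_each_online_cpu(cpu)
		get_cpu_slab(s, cpu)->stat[si] = 0;
}

/* Per-statistic store handler, generated for each stat file:
 * writing anything other than '0' is rejected. */
#define STAT_ATTR_STORE(si, text)					\
static ssize_t text##_store(struct kmem_cache *s,			\
			    const char *buf, size_t length)		\
{									\
	if (buf[0] != '0')						\
		return -EINVAL;						\
	clear_stat(s, si);						\
	return length;							\
}
```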
-
- 22 September 2009 (1 commit)
-
-
Committed by Benjamin Herrenschmidt
Right now, if you inadvertently pass NULL to kmem_cache_create() at boot time, it crashes much later after boot, somewhere deep inside sysfs, which makes it very non-obvious to figure out what's going on. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 16 September 2009 (1 commit)
-
-
Committed by Ingo Molnar
This build bug:
  mm/slub.c: In function 'kmem_cache_open':
  mm/slub.c:2476: error: 'disable_higher_order_debug' undeclared (first use in this function)
  mm/slub.c:2476: error: (Each undeclared identifier is reported only once
  mm/slub.c:2476: error: for each function it appears in.)
triggers because there is no !CONFIG_SLUB_DEBUG definition for disable_higher_order_debug. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 14 September 2009 (1 commit)
-
-
Committed by Eric Dumazet
When SLAB_POISON is used and slab_pad_check() finds an overwrite of the slab padding, we call restore_bytes() on the whole slab, not only on the padding. Acked-by: Christoph Lameter <cl@linux-foundation.org> Reported-by: Zdenek Kabelac <zdenek.kabelac@gmail.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 04 September 2009 (2 commits)
-
-
Committed by Eric Dumazet
kmem_cache_destroy() should call rcu_barrier() *after* kmem_cache_close() and *before* sysfs_slab_remove(), or risk rcu_free_slab() being called after the kmem_cache is deleted (kfreed). rmmod nf_conntrack can crash the machine because it has to kmem_cache_destroy() a SLAB_DESTROY_BY_RCU enabled cache. Cc: <stable@kernel.org> Reported-by: Zdenek Kabelac <zdenek.kabelac@gmail.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
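The required ordering can be sketched like this (locking and refcounting elided; only the ordering is asserted by the commit text):

```c
void kmem_cache_destroy(struct kmem_cache *s)
{
	/* ... refcounting and list removal elided ... */

	kmem_cache_close(s);		/* free all remaining slabs */

	/* Wait for in-flight RCU callbacks (rcu_free_slab) to finish
	 * before the cache and its sysfs state disappear. */
	if (s->flags & SLAB_DESTROY_BY_RCU)
		rcu_barrier();

	sysfs_slab_remove(s);		/* only now is teardown safe */
}
```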
-
Committed by Xiaotian Feng
When CONFIG_SLUB_DEBUG is enabled, sysfs_slab_add() should unlink and put the kobject if sysfs_create_group() fails. Otherwise, sysfs_slab_add() returns an error and kmem_cache s is freed, leaking the memory backing s->kobj. Acked-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: Xiaotian Feng <dfeng@redhat.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
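A sketch of the corrected error path in sysfs_slab_add() (the slab_attr_group name is assumed from SLUB's sysfs code):

```c
	err = sysfs_create_group(&s->kobj, &slab_attr_group);
	if (err) {
		/* Undo the kobject registration and drop the reference
		 * so the kobject's memory is released with the cache. */
		kobject_del(&s->kobj);
		kobject_put(&s->kobj);
		return err;
	}
```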
-
- 30 August 2009 (1 commit)
-
-
Committed by Aaro Koskinen
If the minalign is 64 bytes, then the 96 byte cache should not be created, because it would conflict with the 128 byte cache. If the minalign is 256 bytes, patching the size_index table should not result in a buffer overrun. The calculation "(i - 1) / 8" used to access size_index[] is moved to a separate function, as suggested by Christoph Lameter. Acked-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
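The extracted helper is a one-liner along these lines (name taken from the commit series):

```c
/* Map an allocation size in bytes to its slot in size_index[]. */
static inline int size_index_elem(size_t bytes)
{
	return (bytes - 1) / 8;
}
```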
-
- 20 August 2009 (1 commit)
-
-
Committed by Amerigo Wang
Signed-off-by: WANG Cong <amwang@redhat.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 19 August 2009 (1 commit)
-
-
Committed by WANG Cong
SLUB does not support writes to /proc/slabinfo, so there should not be write permission on that file either. Signed-off-by: WANG Cong <amwang@redhat.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
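The change amounts to dropping the write bit from the file mode when the proc entry is created; a sketch (the operations-struct name is an assumption):

```c
/* Read-only for everyone: S_IRUGO instead of S_IWUSR | S_IRUGO. */
proc_create("slabinfo", S_IRUGO, NULL, &proc_slabinfo_operations);
```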
-
- 01 August 2009 (1 commit)
-
-
Committed by Zhang, Yanmin
kmem_cache->align records the original align parameter value specified by the user, but calculate_alignment() might change it based on cache line size. Update kmem_cache->align correspondingly. Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 28 July 2009 (1 commit)
-
-
Committed by David Rientjes
This patch moves the masking of debugging flags which increase a cache's min order due to metadata, applied when `slub_debug=O' is used, from kmem_cache_flags() to kmem_cache_open(). Instead of defining the maximum metadata size increase in a preprocessor macro, this approach uses the cache's ->size and ->objsize members to determine whether the min order increased due to debugging options. If so, the flags specified in the more appropriately named DEBUG_METADATA_FLAGS are masked off. This approach was suggested by Christoph Lameter <cl@linux-foundation.org>. Cc: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
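The core of the logic can be sketched as follows (a simplification of the kmem_cache_open() fragment; the calculate_sizes() retry follows the commit's description):

```c
#define DEBUG_METADATA_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)

	/* Inside kmem_cache_open(): if debug metadata pushed the slab
	 * past the order needed for the bare object, drop the metadata
	 * flags and recompute the layout. */
	if (disable_higher_order_debug &&
	    get_order(s->size) > get_order(s->objsize)) {
		s->flags &= ~DEBUG_METADATA_FLAGS;
		s->offset = 0;
		if (!calculate_sizes(s, -1))
			goto error;
	}
```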
-
- 10 July 2009 (1 commit)
-
-
Committed by David Rientjes
When debugging is enabled, slub requires that additional metadata be stored in slabs for certain options: SLAB_RED_ZONE, SLAB_POISON, and SLAB_STORE_USER. Consequently, it may require that the minimum possible slab order needed to allocate a single object be greater when using these options. The most notable example is objects that are PAGE_SIZE bytes in size. Higher minimum slab orders may cause page allocation failures when the system is oom or under heavy fragmentation. This patch adds a new slub_debug option, which disables debugging by default for caches that would otherwise have resulted in higher minimum orders: slub_debug=O. When this option is used on systems with 4K pages, kmalloc-4096, for example, will not have debugging enabled by default even if CONFIG_SLUB_DEBUG_ON is defined, because it would have resulted in an order-1 minimum slab order. Reported-by: Larry Finger <Larry.Finger@lwfinger.net> Tested-by: Larry Finger <Larry.Finger@lwfinger.net> Cc: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 08 July 2009 (1 commit)
-
-
Committed by Catalin Marinas
The kmalloc_large() and kmalloc_large_node() functions were missed when adding the kmemleak hooks to the slub allocator. However, they should be traced to avoid false positives. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Lameter <cl@linux-foundation.org> Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
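A sketch of kmalloc_large() with the hook added (the body is simplified from the era's header; only the kmemleak_alloc() call is the subject of the commit):

```c
static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
{
	unsigned int order = get_order(size);
	void *ret = (void *)__get_free_pages(flags | __GFP_COMP, order);

	/* Tell kmemleak about this object so pointers stored in it are
	 * scanned and it is not reported as an unreferenced leak. */
	kmemleak_alloc(ret, size, 1, flags);
	return ret;
}
```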
-
- 26 June 2009 (1 commit)
-
-
Committed by Paul E. McKenney
Jesper noted that kmem_cache_destroy() invokes synchronize_rcu() rather than rcu_barrier() in the SLAB_DESTROY_BY_RCU case, which could result in RCU callbacks accessing a kmem_cache after it had been destroyed. (synchronize_rcu() only waits for a grace period to elapse, while rcu_barrier() also waits for all already-queued RCU callbacks to complete.) Cc: <stable@kernel.org> Acked-by: Matt Mackall <mpm@selenic.com> Reported-by: Jesper Dangaard Brouer <hawk@comx.dk> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 25 June 2009 (1 commit)
-
-
Committed by Pekka Enberg
SLUB uses higher order allocations by default but falls back to small orders under memory pressure. Make sure the GFP mask used in the initial allocation doesn't include __GFP_NOFAIL. Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
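A sketch of the allocation attempt inside allocate_slab() after this fix (stripping __GFP_NOFAIL is per the commit; the __GFP_NOWARN/__GFP_NORETRY bits in the first try are assumptions based on SLUB's fallback scheme):

```c
	/* Opportunistic high-order attempt: it must be allowed to fail,
	 * so strip __GFP_NOFAIL and silence warnings/retries. */
	alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
	page = alloc_slab_page(alloc_gfp, node, oo);
	if (unlikely(!page)) {
		/* Fall back to the minimum order with the caller's
		 * original flags, __GFP_NOFAIL included. */
		oo = s->min;
		page = alloc_slab_page(flags, node, oo);
	}
```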
-
- 24 June 2009 (1 commit)
-
-
Committed by Tejun Heo
Currently, the following three different ways to define percpu arrays are in use:
1. DEFINE_PER_CPU(elem_type[array_len], array_name);
2. DEFINE_PER_CPU(elem_type, array_name[array_len]);
3. DEFINE_PER_CPU(elem_type, array_name)[array_len];
Unify to #1, which correctly separates the roles of the two parameters and thus allows more flexibility in the way percpu variables are defined. [ Impact: cleanup ] Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Tony Luck <tony.luck@intel.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: linux-mm@kvack.org Cc: Christoph Lameter <cl@linux-foundation.org> Cc: David S. Miller <davem@davemloft.net>
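As a concrete example of the preferred form #1 (the variable and the NR_SAMPLE_SLOTS constant are hypothetical, for illustration only):

```c
#include <linux/percpu.h>

#define NR_SAMPLE_SLOTS 4	/* hypothetical */

/* Form #1: the array-ness belongs to the type parameter; the second
 * parameter is purely the variable name. */
DEFINE_PER_CPU(int[NR_SAMPLE_SLOTS], sample_counts);

/* The discouraged spellings (forms #2 and #3) attach the array
 * length to the name, blurring the type/name split:
 *	DEFINE_PER_CPU(int, sample_counts[NR_SAMPLE_SLOTS]);
 *	DEFINE_PER_CPU(int, sample_counts)[NR_SAMPLE_SLOTS];
 */
```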
-
- 19 June 2009 (1 commit)
-
-
Committed by Benjamin Herrenschmidt
The page allocator also needs the masking of gfp flags during boot, so this moves it out of slab/slub and uses it with the page allocator as well. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
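The shared mechanism can be sketched as a global allowed-flags mask applied on every allocation (the gfp_allowed_mask name matches the approach this commit introduced; the exact bit set and the late-init function name are assumptions):

```c
/* Until interrupts are enabled late in boot, anything that could
 * sleep or do I/O is masked out of every allocation's gfp flags. */
gfp_t gfp_allowed_mask __read_mostly =
	__GFP_BITS_MASK & ~(__GFP_WAIT | __GFP_IO | __GFP_FS);

/* In the page allocator entry point:
 *	gfp_mask &= gfp_allowed_mask;
 */

/* Once scheduling and I/O are up, the restriction is lifted: */
void __init gfp_allowed_mask_init_late(void)	/* hypothetical name */
{
	gfp_allowed_mask = __GFP_BITS_MASK;
}
```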
-
- 17 June 2009 (1 commit)
-
-
Committed by Christoph Lameter
num_online_nodes() is called in a number of places, but most often by the page allocator when deciding whether the zonelist needs to be filtered based on cpusets or the zonelist cache. This is actually a heavy function and touches a number of cache lines. This patch stores the number of online nodes at boot time and updates the value when nodes get onlined and offlined. The value is then used in a number of important paths in place of num_online_nodes(). [rientjes@google.com: do not override definition of node_set_online() with macro] Signed-off-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: Mel Gorman <mel@csn.ul.ie> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Cc: Dave Hansen <dave@linux.vnet.ibm.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
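A sketch of the cached counter and its update points (nr_online_nodes is the name mainline ended up with; treat the helper bodies as illustrative):

```c
extern int nr_online_nodes;	/* cheap stand-in for num_online_nodes() */

static inline void node_set_online(int nid)
{
	node_set_state(nid, N_ONLINE);
	nr_online_nodes = num_node_state(N_ONLINE);	/* refresh cache */
}

static inline void node_set_offline(int nid)
{
	node_clear_state(nid, N_ONLINE);
	nr_online_nodes = num_node_state(N_ONLINE);
}
```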
-
- 15 June 2009 (3 commits)
-
-
Committed by Vegard Nossum
This adds support for tracking the initializedness of memory that was allocated with the page allocator. Highmem requests are not tracked. Cc: Dave Hansen <dave@linux.vnet.ibm.com> Acked-by: Pekka Enberg <penberg@cs.helsinki.fi> [build fix for !CONFIG_KMEMCHECK] Signed-off-by: Ingo Molnar <mingo@elte.hu> [rebased for mainline inclusion] Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
Committed by Nick Piggin
A recent change to use slab allocations earlier exposed a bug where SLUB can call schedule_work() and try to call sysfs before it is safe to do so. Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com> Tested-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Committed by Vegard Nossum
Parts of this patch were contributed by Pekka Enberg, but merged for atomicity. Cc: Christoph Lameter <clameter@sgi.com> Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Ingo Molnar <mingo@elte.hu> [rebased for mainline inclusion] Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
-
- 14 June 2009 (2 commits)
-
-
Committed by Pekka Enberg
We must check for __GFP_NOFAIL like the page allocator does; otherwise we end up with false positives. While at it, add the printk_ratelimit() check in SLUB as well. Cc: Alexander Beregalov <a.beregalov@gmail.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Committed by Alexander Beregalov
Fix this build error when CONFIG_SLUB_DEBUG is not set:
  mm/slub.c: In function 'slab_out_of_memory':
  mm/slub.c:1551: error: 'struct kmem_cache_node' has no member named 'nr_slabs'
  mm/slub.c:1552: error: 'struct kmem_cache_node' has no member named 'total_objects'
[ penberg@cs.helsinki.fi: cleanups ] Signed-off-by: Alexander Beregalov <a.beregalov@gmail.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 12 June 2009 (3 commits)
-
-
Committed by Pekka Enberg
As explained by Benjamin Herrenschmidt: "Oh and btw, your patch alone doesn't fix powerpc, because it's missing a whole bunch of GFP_KERNEL's in the arch code... You would have to grep the entire kernel for things that check slab_is_available() and even then you'll be missing some. For example, slab_is_available() didn't always exist, and so in the early days on powerpc, we used a mem_init_done global that is set from mem_init() (not perfect but works in practice). And we still have code using that to do the test." Therefore, mask out __GFP_WAIT, __GFP_IO, and __GFP_FS in the slab allocators in early boot code to avoid enabling interrupts. Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
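In SLUB this took roughly the following shape: a boot-time mask applied in the allocation path and widened once boot is far enough along (a sketch; the kmem_cache_init_late() name is how the series exposed the transition, the rest is an assumption):

```c
/* Flags that could sleep or do I/O (and thus enable interrupts) are
 * stripped from every slab allocation until boot has progressed. */
#define SLAB_GFP_BOOT_MASK \
	(__GFP_BITS_MASK & ~(__GFP_WAIT | __GFP_IO | __GFP_FS))

static gfp_t slab_gfp_mask __read_mostly = SLAB_GFP_BOOT_MASK;

/* Applied at the top of the allocation path:
 *	gfpflags &= slab_gfp_mask;
 */

void __init kmem_cache_init_late(void)
{
	slab_gfp_mask = __GFP_BITS_MASK;	/* boot done: allow all */
}
```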
-
Committed by Pekka Enberg
This patch makes kmalloc() available earlier in the boot sequence so we can get rid of some bootmem allocations. The bulk of the changes are due to kmem_cache_init() being called with interrupts disabled, which requires some changes to allocator bootstrap code. Note: 32-bit x86 does the WP protect test in mem_init(), so we must set up traps before we call mem_init() during boot, as reported by Ingo Molnar: we have a hard crash in the WP-protect code:
[ 0.000000] Checking if this processor honours the WP bit even in supervisor mode...BUG: Int 14: CR2 ffcff000
[ 0.000000] EDI 00000188 ESI 00000ac7 EBP c17eaf9c ESP c17eaf8c
[ 0.000000] EBX 000014e0 EDX 0000000e ECX 01856067 EAX 00000001
[ 0.000000] err 00000003 EIP c10135b1 CS 00000060 flg 00010002
[ 0.000000] Stack: c17eafa8 c17fd410 c16747bc c17eafc4 c17fd7e5 000011fd f8616000 c18237cc
[ 0.000000] 00099800 c17bb000 c17eafec c17f1668 000001c5 c17f1322 c166e039 c1822bf0
[ 0.000000] c166e033 c153a014 c18237cc 00020800 c17eaff8 c17f106a 00020800 01ba5003
[ 0.000000] Pid: 0, comm: swapper Not tainted 2.6.30-tip-02161-g7a74539-dirty #52203
[ 0.000000] Call Trace:
[ 0.000000] [<c15357c2>] ? printk+0x14/0x16
[ 0.000000] [<c10135b1>] ? do_test_wp_bit+0x19/0x23
[ 0.000000] [<c17fd410>] ? test_wp_bit+0x26/0x64
[ 0.000000] [<c17fd7e5>] ? mem_init+0x1ba/0x1d8
[ 0.000000] [<c17f1668>] ? start_kernel+0x164/0x2f7
[ 0.000000] [<c17f1322>] ? unknown_bootoption+0x0/0x19c
[ 0.000000] [<c17f106a>] ? __init_begin+0x6a/0x6f
Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Matt Mackall <mpm@selenic.com> Cc: Nick Piggin <npiggin@suse.de> Cc: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Committed by Catalin Marinas
This patch adds callbacks to the kmemleak_(alloc|free) functions from the slub allocator. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Lameter <cl@linux-foundation.org> Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 11 June 2009 (1 commit)
-
-
Committed by Pekka Enberg
As suggested by Mel Gorman, add out-of-memory diagnostics to the SLUB allocator to make debugging OOM conditions easier. This patch helped hunt down a nasty OOM issue that popped up every now and then, caused by SLUB debugging code which forced 4096 byte allocations to use order 1 pages even in the fallback case. An example print out looks like this:
  <snip page allocator out-of-memory message>
  SLUB: Unable to allocate memory on node -1 (gfp=20)
    cache: kmalloc-4096, object size: 4096, buffer size: 4168, default order: 3, min order: 1
    node 0: slabs: 95, objs: 665, free: 0
Acked-by: Christoph Lameter <cl@linux-foundation.org> Acked-by: Mel Gorman <mel@csn.ul.ie> Tested-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 06 May 2009 (1 commit)
-
-
Committed by Nick Piggin
SLUB does not correctly account reclaim_state.reclaimed_slab, so it will break memory reclaim. Account it like SLAB does. Cc: stable@kernel.org Cc: linux-mm@kvack.org Cc: Matt Mackall <mpm@selenic.com> Acked-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
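The accounting fix can be sketched at the point where SLUB returns a slab's pages to the page allocator (mirroring SLAB's behaviour, per the commit text; debug teardown elided):

```c
static void __free_slab(struct kmem_cache *s, struct page *page)
{
	int order = compound_order(page);
	int pages = 1 << order;

	/* ... debug checks and page-state teardown elided ... */

	/* Credit freed slab pages to the reclaimer, as SLAB does, so
	 * reclaim progress is measured correctly. */
	if (current->reclaim_state)
		current->reclaim_state->reclaimed_slab += pages;
	__free_pages(page, order);
}
```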
-
- 23 April 2009 (1 commit)
-
-
Committed by David Rientjes
slub_max_order may not be equal to or greater than MAX_ORDER. Additionally, if a single object cannot be placed in a slab of slub_max_order, it still must allocate slabs below MAX_ORDER. Acked-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
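A sketch of the clamp on the command-line parameter (the setup-function shape is standard kernel __setup boilerplate; treat the body as illustrative):

```c
static int __init setup_slub_max_order(char *str)
{
	get_option(&str, &slub_max_order);
	/* Orders of MAX_ORDER or above can never be allocated. */
	slub_max_order = min(slub_max_order, MAX_ORDER - 1);

	return 1;
}
__setup("slub_max_order=", setup_slub_max_order);
```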
-
- 12 April 2009 (1 commit)
-
-
Committed by Zhaolei
Impact: refactor code for future changes. The current kmemtrace.h is used both as the header file of kmemtrace and as the definition of kmem's tracepoints. A tracepoint definition file may be used by other code and should contain only tracepoint definitions, so separate include/trace/kmemtrace.h into two files:
  include/linux/kmemtrace.h: header file for kmemtrace
  include/trace/kmem.h: definition of kmem tracepoints
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Acked-by: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro> Acked-by: Pekka Enberg <penberg@cs.helsinki.fi> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Tom Zanussi <tzanussi@gmail.com> LKML-Reference: <49DEE68A.5040902@cn.fujitsu.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 03 April 2009 (2 commits)
-
-
Committed by Pekka Enberg
Impact: also output kfree(NULL) entries. This patch moves the trace_kfree() calls before the ZERO_OR_NULL_PTR check so that we can trace call-sites that call kfree() with NULL many times, which might be an indication of a bug. Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi> Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro> LKML-Reference: <1237971957.30175.18.camel@penberg-laptop> Signed-off-by: Ingo Molnar <mingo@elte.hu>
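The reordering can be sketched as follows in SLUB's kfree() (body heavily elided):

```c
void kfree(const void *x)
{
	/* Trace first, so kfree(NULL) call sites show up in the
	 * trace stream instead of being filtered out below. */
	trace_kfree(_RET_IP_, x);

	if (unlikely(ZERO_OR_NULL_PTR(x)))
		return;

	/* ... look up the page and free the object ... */
}
```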
-
Committed by Eduard - Gabriel Munteanu
kmemtrace now uses tracepoints instead of markers. We no longer need to use format specifiers to pass arguments. Signed-off-by: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro> [ folded: Use the new TP_PROTO and TP_ARGS to fix the build. ] [ folded: fix build when CONFIG_KMEMTRACE is disabled. ] [ folded: define tracepoints when CONFIG_TRACEPOINTS is enabled. ] Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi> LKML-Reference: <ae61c0f37156db8ec8dc0d5778018edde60a92e3.1237813499.git.eduard.munteanu@linux360.ro> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 23 March 2009 (1 commit)
-
-
Committed by Akinobu Mita
Use get_track() in set_track(). Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
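A sketch of set_track() after the cleanup: instead of recomputing the track slot address by hand, it reuses the existing get_track() helper (the fields filled in follow SLUB's struct track):

```c
static void set_track(struct kmem_cache *s, void *object,
		      enum track_item alloc, unsigned long addr)
{
	struct track *p = get_track(s, object, alloc);	/* shared lookup */

	if (addr) {
		p->addr = addr;
		p->cpu = smp_processor_id();
		p->pid = current->pid;
		p->when = jiffies;
	} else
		memset(p, 0, sizeof(struct track));
}
```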
-