- 02 October 2010, 4 commits
-
Authored by Christoph Lameter
kmalloc caches are statically defined and may take up a lot of space just because the size of the node array has to be dimensioned for the largest node count supported. This patch makes the size of the kmem_cache structure dynamic throughout by creating a kmem_cache slab cache for the kmem_cache objects. The bootstrap occurs by allocating the initial one or two kmem_cache objects from the page allocator.

C2->C3:
- Fix various issues indicated by David
- Make create_kmalloc_cache() return a kmem_cache * pointer.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Authored by Christoph Lameter
The percpu allocator can now handle allocations during early boot. So drop the static kmem_cache_cpu array.

Cc: Tejun Heo <tj@kernel.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Authored by Christoph Lameter
Remove the dynamic dma slab allocation since it causes too many issues (nested locks, etc.). The change avoids passing gfpflags into many functions.

V3->V4:
- Create dma caches in kmem_cache_init() instead of kmem_cache_init_late().

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
Authored by Christoph Lameter
The compiler folds the debugging functions into the critical paths. Avoid that by adding noinline to the functions that check for problems.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
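
For illustration, a minimal sketch of the pattern (the function name here is hypothetical; the actual SLUB debug hooks differ):

    /* Marking a debug-only check noinline keeps the compiler from folding
     * its body into the hot allocation path; the fast path then only pays
     * for a call when debugging is active.  Sketch, not the kernel's code. */
    static noinline int check_object_sketch(void *object, unsigned long flags)
    {
            /* ...expensive consistency checks on the object... */
            return 1;
    }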
-
- 03 August 2010, 2 commits
-
Authored by Christoph Lameter
Serialize kmem_cache_create and kmem_cache_destroy using the slub_lock. This is only possible after the use of the slub_lock during dynamic dma creation has been removed. Then make sure that the setup of the slab sysfs entries does not race with kmem_cache_create and kmem_cache_destroy. If a slab cache is removed before we have set up sysfs, simply skip over the sysfs handling.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Roland Dreier <rdreier@cisco.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
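
A hedged sketch of the serialization pattern (slub_lock was a rw_semaphore in slub.c of this era; the helper below is hypothetical and the real function does much more):

    #include <linux/rwsem.h>

    static DECLARE_RWSEM(slub_lock);

    /* Writers (cache creation/destruction) exclude each other and any
     * readers walking the global list of slab caches. */
    struct kmem_cache *kmem_cache_create_sketch(const char *name, size_t size)
    {
            struct kmem_cache *s;

            down_write(&slub_lock);
            s = find_mergeable_or_allocate(name, size); /* hypothetical helper */
            up_write(&slub_lock);
            return s;
    }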
-
Authored by Pekka Enberg
This reverts commit f5b801ac.
-
- 29 July 2010, 1 commit
-
Authored by Christoph Lameter
The network developers have seen sporadic allocations resulting in objects coming from unexpected NUMA nodes despite asking for objects from a specific node. This is due to get_partial() calling get_any_partial() if partial slabs are exhausted for a node, even if a node was specified and therefore one would expect allocations only from the specified node. get_any_partial() may sporadically return a slab from a foreign node to gradually reduce the size of partial lists on remote nodes and thereby reduce total memory use for a slab cache. The behavior is controlled by the remote_defrag_ratio of each cache. Strictly speaking this is permitted behavior since __GFP_THISNODE was not specified for the allocation, but it is certainly surprising. This patch makes sure that the remote defrag behavior only occurs if no node was specified.

Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
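
A minimal sketch of the resulting logic, assuming helpers shaped like slub.c's get_partial_node()/get_any_partial() (simplified here):

    /* Sketch: fall back to remote-node partial slabs only when the caller
     * expressed no node preference (NUMA_NO_NODE). */
    static struct page *get_partial(struct kmem_cache *s, gfp_t flags, int node)
    {
            struct page *page;
            int searchnode = (node == NUMA_NO_NODE) ? numa_node_id() : node;

            page = get_partial_node(get_node(s, searchnode));
            if (page || node != NUMA_NO_NODE)
                    return page;    /* honor the explicit node request */

            return get_any_partial(s, flags);   /* remote defrag allowed */
    }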
-
- 16 July 2010, 5 commits
-
Authored by Christoph Lameter
The cacheline with the flags is reachable from the hot paths after the percpu allocator changes went in. So there is no longer any need to put a flag into each slab page. Get rid of the SlubDebug flag and use the flags in kmem_cache instead.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Authored by Christoph Lameter
If a slab cache is removed before we have set up sysfs, simply skip over the sysfs handling.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Roland Dreier <rdreier@cisco.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Authored by Christoph Lameter
Small allocations may fail during slab bringup, which is fatal. Add a BUG_ON() so that we fail immediately rather than failing later during sysfs processing.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Authored by Christoph Lameter
The UL suffix is missing from some constants. Conform to how slab.h uses constants.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Authored by Christoph Lameter
kmalloc_node() and friends can be passed a constant -1 to indicate that no choice was made for the node from which the object needs to come. Use NUMA_NO_NODE instead of -1.

Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 09 June 2010, 1 commit
-
Authored by Li Zefan
We have been resisting new ftrace plugins and removing existing ones, and kmemtrace has been superseded by kmem trace events and perf-kmem, so we remove it.

Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
[ remove kmemtrace from the makefile, handle slob too ]
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
-
- 25 May 2010, 2 commits
-
Authored by Miao Xie
Before applying this patch, cpuset updates task->mems_allowed and mempolicy by setting all new bits in the nodemask first and clearing all old disallowed bits later. But during that window, the allocator may find that there is no node to allocate memory from. The reason is that when cpuset rebinds the task's mempolicy, it clears the nodes the allocator can allocate pages on, for example:

    (mpol: mempolicy)
    task1                    task1's mpol    task2
    alloc page               1
      alloc on node0? NO     1
                             1               change mems from 1 to 0
                             1               rebind task1's mpol
                             0-1               set new bits
                             0                 clear disallowed bits
      alloc on node1? NO     0
      ...
    can't alloc page
      goto oom

This patch fixes the problem by expanding the nodes range first (set newly allowed bits) and shrinking it lazily (clear newly disallowed bits). So we use a variable to tell the write-side task that a read-side task is reading the nodemask, and the write-side task clears newly disallowed nodes after the read-side task ends its current memory allocation.

[akpm@linux-foundation.org: fix spello]
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Paul Menage <menage@google.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Ravikiran Thirumalai <kiran@scalex86.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
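
A hedged sketch of the expand-first, shrink-lazily idea (names simplified; the real kernel change used per-task synchronization around mems_allowed with considerably more machinery, and wait_for_nodemask_readers() below is hypothetical):

    #include <linux/sched.h>
    #include <linux/nodemask.h>

    /* Sketch: the writer never leaves the task with an empty nodemask.
     * Step 1 grows mems_allowed to the union of old and new nodes, so a
     * concurrent allocator always sees at least one usable node; step 2
     * drops the newly disallowed nodes only after in-flight readers of
     * the nodemask have finished their current allocation. */
    static void change_task_nodemask_sketch(struct task_struct *tsk,
                                            const nodemask_t *newmems)
    {
            nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems); /* grow */

            wait_for_nodemask_readers(tsk); /* hypothetical: readers drain */

            tsk->mems_allowed = *newmems;   /* shrink to the final mask */
    }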
-
Authored by Alexander Duyck
This patch is meant to improve the performance of SLUB by moving the local kmem_cache_node lock into its own cacheline, separate from kmem_cache. This is accomplished by simply removing the local_node when NUMA is enabled. On my system with 2 nodes I saw around a 5% performance increase, with hackbench times dropping from 6.2 seconds to 5.9 seconds on average. I suspect the performance gain would increase as the number of nodes increases, but I do not currently have the data to back that up.

Bugzilla-Reference: http://bugzilla.kernel.org/show_bug.cgi?id=15713
Cc: <stable@kernel.org>
Reported-by: Alex Shi <alex.shi@intel.com>
Tested-by: Alex Shi <alex.shi@intel.com>
Acked-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 22 May 2010, 3 commits
-
Authored by Minchan Kim
The alloc_slab_page() in SLUB uses alloc_pages() if node is '-1'. This means that the node validity check in alloc_pages_node() is unnecessary, and we can use alloc_pages_exact_node() to avoid the comparison and branch, as commit 6484eb3e ("page allocator: do not check NUMA node ID when the caller knows the node is valid") did for the page allocator.

Cc: Christoph Lameter <cl@linux-foundation.org>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
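
A small sketch of the call-site shape this implies (simplified from alloc_slab_page(); flags and order handling elided):

    /* Sketch: when the caller already knows the node is valid (or has
     * explicitly asked for "no node"), the extra check inside
     * alloc_pages_node() buys nothing. */
    static struct page *alloc_slab_page_sketch(gfp_t flags, int node,
                                               unsigned int order)
    {
            if (node == NUMA_NO_NODE)       /* was: node == -1 */
                    return alloc_pages(flags, order);

            /* Skips alloc_pages_node()'s node validity comparison/branch. */
            return alloc_pages_exact_node(node, flags, order);
    }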
-
Authored by Xiaotian Feng
Commit 94b528d0 ("kmemtrace: SLUB hooks for caller-tracking functions") missed tracing kmalloc_large_node in __kmalloc_node_track_caller. We should trace it the same as __kmalloc_node.

Acked-by: David Rientjes <rientjes@google.com>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Authored by Eric Dumazet
I discovered that we can overflow the stack if CONFIG_SLUB_DEBUG=y and slabs with many objects are used, since list_slab_objects() and process_slab() use DECLARE_BITMAP(map, page->objects). With 65535 bits, we use 8192 bytes of stack... Switch these allocations to dynamic allocations.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
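
A sketch of the before/after shape (simplified; the real patch also has to pick the right GFP flags per call site and handle the failure path appropriately):

    /* Before: up to 8 KiB of bitmap lands on the kernel stack.
     *   DECLARE_BITMAP(map, page->objects);
     *
     * After (sketch): size the bitmap at runtime and take it off the heap. */
    unsigned long *map = kzalloc(BITS_TO_LONGS(page->objects) *
                                 sizeof(unsigned long), GFP_ATOMIC);
    if (!map)
            return;                 /* sketch: caller-specific fallback */
    /* ... mark allocated objects in map, report leaks ... */
    kfree(map);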
-
- 20 May 2010, 1 commit
-
Authored by David Woodhouse
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 06 May 2010, 1 commit
-
Authored by Zhang, Yanmin
Function init_kmem_cache_nodes() is incorrect when checking the upper limit of kmalloc_caches. The breakage was introduced by commit 91efd773 ("dma kmalloc handling fixes").

Acked-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 10 April 2010, 1 commit
-
Authored by Pekka Enberg
As suggested by Linus, fix up kmem_ptr_validate() to handle non-kernel pointers more gracefully. The patch changes kmem_ptr_validate() to use the newly introduced kern_ptr_validate() helper to check that a pointer is a valid kernel pointer before we attempt to convert it into a 'struct page'.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
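
A hedged sketch of what such a helper has to verify before the pointer can be treated as a kernel address (the actual kern_ptr_validate() may differ in detail):

    /* Sketch: reject anything that cannot possibly be a valid, aligned
     * kernel pointer of the given size before dereferencing it. */
    static int kern_ptr_validate_sketch(const void *ptr, unsigned long size)
    {
            unsigned long addr = (unsigned long)ptr;

            if (addr < PAGE_OFFSET)                         /* user/NULL */
                    return 0;
            if (addr > (unsigned long)high_memory - size)   /* past lowmem */
                    return 0;
            if (addr & (sizeof(void *) - 1))                /* misaligned */
                    return 0;
            return 1;
    }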
-
- 08 March 2010, 2 commits
-
Authored by Emese Revfy
Constify struct sysfs_ops. This is part of the ops structure constification effort started by Arjan van de Ven et al. Benefits of this constification:

* prevents modification of data that is shared (referenced) by many other structure instances at runtime
* detects/prevents accidental (but not intentional) modification attempts on archs that enforce read-only kernel data at runtime
* potentially better optimized code as the compiler can assume that the const data cannot be changed
* the compiler/linker move const data into .rodata and therefore exclude them from false sharing

Signed-off-by: Emese Revfy <re.emese@gmail.com>
Acked-by: David Teigland <teigland@redhat.com>
Acked-by: Matt Domsch <Matt_Domsch@dell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Hans J. Koch <hjk@linutronix.de>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Jens Axboe <jens.axboe@oracle.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
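
For illustration, what the constification looks like at a SLUB use site (a sketch; the handler names follow slub.c's conventions):

    /* The ops table becomes read-only data: it can move to .rodata and
     * can no longer be modified, accidentally or otherwise, at runtime. */
    static const struct sysfs_ops slab_sysfs_ops = {
            .show  = slab_attr_show,
            .store = slab_attr_store,
    };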
-
Authored by Emese Revfy
Constify struct kset_uevent_ops. This is part of the ops structure constification effort started by Arjan van de Ven et al. Benefits of this constification:

* prevents modification of data that is shared (referenced) by many other structure instances at runtime
* detects/prevents accidental (but not intentional) modification attempts on archs that enforce read-only kernel data at runtime
* potentially better optimized code as the compiler can assume that the const data cannot be changed
* the compiler/linker move const data into .rodata and therefore exclude them from false sharing

Signed-off-by: Emese Revfy <re.emese@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 04 March 2010, 1 commit
-
Authored by Stephen Rothwell
The slab tree adds a percpu variable usage case (commit 9dfc6e68 "SLUB: Use this_cpu operations in slub"), but the percpu tree removes the prefixing of percpu variables (commit dd17c8f7 "percpu: remove per_cpu__ prefix"), thus causing the following compilation error:

      CC      mm/slub.o
    mm/slub.c: In function 'alloc_kmem_cache_cpus':
    mm/slub.c:2078: error: implicit declaration of function 'per_cpu_var'
    mm/slub.c:2078: warning: assignment makes pointer from integer without a cast
    make[1]: *** [mm/slub.o] Error 1

Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 27 February 2010, 1 commit
-
Authored by Dmitry Monakhov
This patch allows faults to be injected only for specific slabs. In order to preserve the default behavior, the cache filter is off by default (all caches are faulty). One may define a specific set of slabs like this:

    # mark skbuff_head_cache as faulty
    echo 1 > /sys/kernel/slab/skbuff_head_cache/failslab
    # Turn on cache filter (off by default)
    echo 1 > /sys/kernel/debug/failslab/cache-filter
    # Turn on fault injection
    echo 1 > /sys/kernel/debug/failslab/times
    echo 1 > /sys/kernel/debug/failslab/probability

Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Akinobu Mita <akinobu.mita@gmail.com>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 05 February 2010, 1 commit
-
Authored by Adam Buchbinder
Some comments misspell "should" or "shouldn't"; this fixes them. No code changes.

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 23 January 2010, 2 commits
-
Authored by Christoph Lameter
1. We need kmalloc_percpu for all of the now extended kmalloc caches array not just for each shift value.

2. init_kmem_cache_nodes() must assume node 0 locality for statically allocated dma kmem_cache structures even after boot is complete.

Reported-and-tested-by: Alex Chiang <achiang@hp.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Authored by David Rientjes
`s' cannot be NULL if kmalloc_caches is not NULL. This conditional would trigger a NULL pointer on `s' anyway, since it is immediately dereferenced if true.

Acked-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 20 December 2009, 4 commits
-
Authored by Christoph Lameter
this_cpu_inc() translates into a single instruction on x86 and does not need any register. So use it in stat(). We also want to avoid the calculation of the per cpu kmem_cache_cpu structure pointer. So pass a kmem_cache pointer instead of a kmem_cache_cpu pointer.

Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
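
The resulting helper is small enough to sketch (modeled on slub.c's stat(); treat it as an approximation):

    /* One x86 instruction (an inc with a segment-relative operand): no
     * need to compute this cpu's kmem_cache_cpu pointer first. */
    static inline void stat(struct kmem_cache *s, enum stat_item si)
    {
    #ifdef CONFIG_SLUB_STATS
            __this_cpu_inc(s->cpu_slab->stat[si]);
    #endif
    }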
-
Authored by Christoph Lameter
Remove the fields in struct kmem_cache_cpu that were used to cache data from struct kmem_cache when they were in different cachelines. The cacheline that holds the per cpu array pointer now also holds these values. We can cut the struct kmem_cache_cpu size down to almost half.

The get_freepointer() and set_freepointer() functions that used to be intended only for the slow path are now also useful for the hot path, since access to the size field no longer requires accessing an additional cacheline. This results in consistent use of functions for setting the freepointer of objects throughout SLUB.

Also, we initialize all possible kmem_cache_cpu structures when a slab is created. There is no need to initialize them when a processor or node comes online.

Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Authored by Christoph Lameter
Dynamic DMA kmalloc cache allocation is troublesome since the new percpu allocator does not support allocations in atomic contexts. Reserve some statically allocated kmalloc_cpu structures instead.

Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Authored by Christoph Lameter
Using per cpu allocations removes the need for the per cpu arrays in the kmem_cache struct. These could get quite big if we have to support systems with thousands of cpus. The use of this_cpu_xx operations results in:

1. The size of kmem_cache for SMP configurations shrinks since we will only need 1 pointer instead of NR_CPUS. The same pointer can be used by all processors. Reduces cache footprint of the allocator.

2. We can dynamically size kmem_cache according to the actual nodes in the system, meaning less memory overhead for configurations that may potentially support up to 1k NUMA nodes / 4k cpus.

3. We can remove the diddle widdle with allocating and releasing of kmem_cache_cpu structures when bringing up and shutting down cpus. The cpu alloc logic will do it all for us. Removes some portions of the cpu hotplug functionality.

4. Fastpath performance increases since per cpu pointer lookups and address calculations are avoided.

V7->V8:
- Convert missed get_cpu_slab() under CONFIG_SLUB_STATS

Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
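
A sketch of the structural change (field names follow slub.c of this period; surrounding fields omitted):

    /* Before: one kmem_cache_cpu slot per possible cpu inside kmem_cache:
     *   struct kmem_cache_cpu *cpu_slab[NR_CPUS];
     *
     * After (sketch): a single percpu pointer shared by all processors. */
    struct kmem_cache {
            struct kmem_cache_cpu __percpu *cpu_slab;
            /* ... flags, sizes, node data ... */
    };

    /* Hot-path access becomes a plain percpu pointer lookup, e.g.:
     *   struct kmem_cache_cpu *c = __this_cpu_ptr(s->cpu_slab); */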
-
- 11 December 2009, 1 commit
-
Authored by Li Zefan
Define kmem_trace_alloc_{,node}_notrace() if CONFIG_TRACING is enabled; otherwise perf-kmem will show wrong stats when CONFIG_KMEM_TRACE is not set, because a kmalloc() memory allocation may be traced by both trace_kmalloc() and trace_kmem_cache_alloc().

Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: linux-mm@kvack.org
Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
LKML-Reference: <4B21F89A.7000801@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 29 November 2009, 1 commit
-
Authored by Pekka Enberg
The unlikely() annotation in slab_alloc() covers too much of the expression. It's actually very likely that the object is not NULL, so use unlikely() only for the __GFP_ZERO expression, like SLAB does. The patch reduces kernel text by 29 bytes on x86-64:

       text    data     bss     dec     hex filename
      24185    8560     176   32921    8099 mm/slub.o.orig
      24156    8560     176   32892    807c mm/slub.o

Acked-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
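
The change itself is essentially a one-liner; a sketch of the before/after in slab_alloc(), assuming the condition looked as below:

    /* Before: the branch hint also covers "object != NULL", which is in
     * fact the common case, so the hint is mostly wrong. */
    if (unlikely((gfpflags & __GFP_ZERO) && object))
            memset(object, 0, s->objsize);

    /* After: only the rare __GFP_ZERO request is hinted as unlikely. */
    if (unlikely(gfpflags & __GFP_ZERO) && object)
            memset(object, 0, s->objsize);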
-
- 16 October 2009, 1 commit
-
Authored by David Rientjes
When collecting slub stats for particular workloads, it's necessary to collect each statistic for all caches before the job is even started, because the counters are usually greater than zero just from boot and initialization. This allows a statistic to be cleared on each cpu by writing '0' to its sysfs file. This creates a baseline for statistics of interest before the workload is started. Setting a statistic to a particular value is not supported, so writing any value other than '0' to these files returns -EINVAL.

Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
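
A hedged sketch of what the store side has to do (the sysfs attribute wiring in slub.c is macro-generated and not shown, and the per-cpu access helper has varied across versions; this is an approximation):

    /* Sketch: reset one statistic on every cpu; anything but "0" is
     * rejected since setting arbitrary values is not supported. */
    static ssize_t stat_store_sketch(struct kmem_cache *s, enum stat_item si,
                                     const char *buf, size_t length)
    {
            int cpu;

            if (buf[0] != '0')
                    return -EINVAL;

            for_each_online_cpu(cpu)
                    per_cpu_ptr(s->cpu_slab, cpu)->stat[si] = 0;

            return length;
    }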
-
- 22 September 2009, 1 commit
-
Authored by Benjamin Herrenschmidt
Right now, if you inadvertently pass NULL to kmem_cache_create() at boot time, it crashes much later after boot, somewhere deep inside sysfs, which makes it very non-obvious to figure out what's going on.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 16 September 2009, 1 commit
-
Authored by Ingo Molnar
This build bug:

    mm/slub.c: In function 'kmem_cache_open':
    mm/slub.c:2476: error: 'disable_higher_order_debug' undeclared (first use in this function)
    mm/slub.c:2476: error: (Each undeclared identifier is reported only once
    mm/slub.c:2476: error: for each function it appears in.)

triggers because there's no !CONFIG_SLUB_DEBUG definition for disable_higher_order_debug.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 14 September 2009, 1 commit
-
Authored by Eric Dumazet
When SLAB_POISON is used and slab_pad_check() finds an overwrite of the slab padding, we call restore_bytes() on the whole slab, not only on the padding.

Acked-by: Christoph Lameter <cl@linux-foundation.org>
Reported-by: Zdenek Kabelac <zdenek.kabelac@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 04 September 2009, 2 commits
-
Authored by Eric Dumazet
kmem_cache_destroy() should call rcu_barrier() *after* kmem_cache_close() and *before* sysfs_slab_remove(), or risk rcu_free_slab() being called after the kmem_cache is deleted (kfreed). "rmmod nf_conntrack" can crash the machine because it has to kmem_cache_destroy() a SLAB_DESTROY_BY_RCU enabled cache.

Cc: <stable@kernel.org>
Reported-by: Zdenek Kabelac <zdenek.kabelac@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
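
A sketch of the required ordering (destroy path simplified; locking and refcounting omitted):

    /* Sketch: all slabs are torn down first; for SLAB_DESTROY_BY_RCU
     * caches we then wait for every in-flight rcu_free_slab() callback,
     * which still uses the kmem_cache, before it goes away. */
    void kmem_cache_destroy_sketch(struct kmem_cache *s)
    {
            kmem_cache_close(s);            /* free slabs, queue RCU frees */

            if (s->flags & SLAB_DESTROY_BY_RCU)
                    rcu_barrier();          /* drain pending rcu_free_slab() */

            sysfs_slab_remove(s);           /* only now release s */
    }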
-
Authored by Xiaotian Feng
When CONFIG_SLUB_DEBUG is enabled, sysfs_slab_add() should unlink and put the kobject if sysfs_create_group() failed. Otherwise, sysfs_slab_add() returns an error and then frees kmem_cache s, so the memory backing s->kobj is leaked.

Acked-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
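
A sketch of the fixed error path (excerpted shape, not the full sysfs_slab_add()):

    /* Sketch: on failure, undo the earlier kobject registration so the
     * kobject's memory is released instead of leaking with the cache. */
    err = sysfs_create_group(&s->kobj, &slab_attr_group);
    if (err) {
            kobject_del(&s->kobj);          /* unlink from sysfs */
            kobject_put(&s->kobj);          /* drop the last reference */
            return err;
    }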
-