- 07 Aug 2014: 1 commit
-
-
Committed by WANG Chao
Currently map_vm_area() takes (struct page *** pages) as its third argument, and after mapping it moves (*pages) to point to (*pages + nr_mapped_pages). This kind of increment is useless to its callers these days: the callers don't care about it, and they actually try to avoid it by passing another copy to map_vm_area(). The caller can always guarantee that all the pages can be mapped into the vm_area specified in the first argument, and only cares about whether map_vm_area() fails or not.

This patch cleans up the pointer movement in map_vm_area() and updates its callers accordingly.

Signed-off-by: WANG Chao <chaowang@redhat.com>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
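A minimal sketch of the interface change being described, with prototypes reconstructed from the message (the exact declarations may differ):

    /* Before: the third argument pointed at the caller's page-array
     * pointer, which map_vm_area() advanced past the mapped pages. */
    int map_vm_area(struct vm_struct *area, pgprot_t prot,
                    struct page ***pages);

    /* After: the page array is passed directly and never modified;
     * the caller only checks the return value for failure. */
    int map_vm_area(struct vm_struct *area, pgprot_t prot,
                    struct page **pages);

A caller such as zsmalloc can then pass its page array as-is instead of keeping a scratch copy just to protect it from the increment.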
-
- 05 Jun 2014: 2 commits
-
-
Committed by Weijie Yang
According to the calculation, the ZS_SIZE_CLASSES value is 255 on systems with 4K page size, not 254. The old value likely failed to count ZS_MIN_ALLOC_SIZE in. This patch fixes this trivial issue in the comments.

Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
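A sketch of the arithmetic behind the corrected comment; the constant values shown are assumptions for a typical 4K-page configuration:

    #define ZS_MIN_ALLOC_SIZE    32      /* smallest size class */
    #define ZS_MAX_ALLOC_SIZE    4096    /* PAGE_SIZE on a 4K system */
    #define ZS_SIZE_CLASS_DELTA  16      /* PAGE_SIZE >> 8 */

    /*
     * (4096 - 32) / 16 = 254 classes above the minimum, plus one for
     * the ZS_MIN_ALLOC_SIZE class itself = 255 classes in total.
     */
    #define ZS_SIZE_CLASSES \
            (((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
              ZS_SIZE_CLASS_DELTA) + 1)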
-
Committed by Christoph Lameter
Replace places where __get_cpu_var() is used for an address calculation with this_cpu_ptr().

Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
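A hedged example of the conversion in zsmalloc terms; zs_map_area is the per-cpu mapping area named elsewhere in this log, and the exact call sites may differ:

    static DEFINE_PER_CPU(struct mapping_area, zs_map_area);

    /* before: __get_cpu_var() used purely for an address calculation */
    struct mapping_area *area = &__get_cpu_var(zs_map_area);

    /* after: this_cpu_ptr() is the idiomatic way to take the address */
    struct mapping_area *area = this_cpu_ptr(&zs_map_area);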
-
- 20 Mar 2014: 1 commit
-
-
Committed by Srivatsa S. Bhat
Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

    get_online_cpus();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    register_cpu_notifier(&foobar_cpu_notifier);

    put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations).

Instead, the correct and race-free way of performing the callback registration is:

    cpu_notifier_register_begin();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    /* Note the use of the double underscored version of the API */
    __register_cpu_notifier(&foobar_cpu_notifier);

    cpu_notifier_register_done();

Fix the zsmalloc code by using this latter form of callback registration.

Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 31 Jan 2014: 2 commits
-
-
Committed by Minchan Kim
Add my copyright to the zsmalloc source code which I maintain.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Minchan Kim
This patch moves zsmalloc under the mm directory. Before that, this description explains why we have needed a custom allocator.

Zsmalloc is a new slab-based memory allocator for storing compressed pages. It is designed for low fragmentation and a high allocation success rate on large (but <= PAGE_SIZE) allocations. zsmalloc differs from the kernel slab allocator in two primary ways to achieve these design goals.

zsmalloc never requires high-order page allocations to back slabs, or "size classes" in zsmalloc terms. Instead it allows multiple single-order pages to be stitched together into a "zspage" which backs the slab. This allows for a higher allocation success rate under memory pressure.

Also, zsmalloc allows objects to span page boundaries within the zspage. This allows for lower fragmentation than could be had with the kernel slab allocator for objects between PAGE_SIZE/2 and PAGE_SIZE. With the kernel slab allocator, if a page compresses to 60% of its original size, the memory savings gained through compression are lost in fragmentation because another object of the same size can't be stored in the leftover space.

This ability to span pages results in zsmalloc allocations not being directly addressable by the user. The user is given a non-dereferenceable handle in response to an allocation request. That handle must be mapped, using zs_map_object(), which returns a pointer to the mapped region that can be used. The mapping is necessary since the object data may reside in two different noncontiguous pages.

zsmalloc fulfills the allocation needs for zram perfectly.

[sjenning@linux.vnet.ibm.com: borrow Seth's quote]
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Luigi Semenzato <semenzato@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
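A hedged sketch of the handle-based usage model described above; the prototypes approximate the API of this era, and the error handling is elided:

    #include <linux/zsmalloc.h>

    struct zs_pool *pool = zs_create_pool(GFP_KERNEL);

    /* the returned handle is opaque and must not be dereferenced */
    unsigned long handle = zs_malloc(pool, len);

    /*
     * Map the object to get a usable pointer. The mapping is required
     * because the object may span two noncontiguous pages.
     */
    void *dst = zs_map_object(pool, handle, ZS_MM_WO);
    memcpy(dst, compressed_data, len);
    zs_unmap_object(pool, handle);

    zs_free(pool, handle);
    zs_destroy_pool(pool);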
-
- 18 Dec 2013: 1 commit
-
-
Committed by Paul Gortmaker
None of these files are actually using any __init type directives and hence don't need to include <linux/init.h>. Most are just a leftover from the __devinit and __cpuinit removal, or simply due to code getting copied from one driver to the next.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 11 Dec 2013: 2 commits
-
-
Committed by Nitin Gupta
This patch adds lots of comments that will help others review and enhance the code.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Minchan Kim
Zsmalloc has two methods, 1) copy-based and 2) pte-based, to access objects that span two pages. The history of why we supported the two approaches can be seen in [1].

But hard-coding which architectures use the pte-based method was a bad choice: there are lots of SoCs within one architecture, and they can have different cache sizes, CPU speeds and so on, so it is better to expose the choice to the user as a selectable Kconfig option, as Andrew Morton suggested.

[1] https://lkml.org/lkml/2012/7/11/58

Acked-by: Nitin Gupta <ngupta@vflare.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
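A sketch of the kind of Kconfig entry the message describes; the symbol name and help text are assumptions and may not match the actual option:

    config PGTABLE_MAPPING
            bool "Use page table mapping to access objects in zsmalloc"
            depends on ZSMALLOC
            help
              By default, zsmalloc copies objects that span two pages into
              a temporary per-cpu buffer. On some architectures with fast
              page-table manipulation (notably ARM), mapping the pages via
              the page table can be faster. Say Y to use the pte-based
              method instead.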
-
- 26 Nov 2013: 1 commit
-
-
Committed by Olav Haugan
zsmalloc encodes a handle using the pfn and an object index. On hardware platforms with physical memory starting at 0x0, the pfn can be 0. This causes the encoded handle to be 0, which is incorrectly interpreted as an allocation failure.

This issue affects all current and future SoCs with physical memory starting at 0x0. All MSM8974 SoCs, which includes Google Nexus 5 devices, are affected.

To prevent this false error we ensure that the encoded handle will not be 0 when allocation succeeds.

Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
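A hedged sketch of the encoding fix; the helper and constant names follow the conventions mentioned elsewhere in this log, but the exact code may differ:

    /*
     * Encode <pfn, obj_idx> as a single opaque handle. Storing
     * obj_idx + 1 guarantees a successful allocation never yields
     * handle 0, even when the pfn is 0.
     */
    static unsigned long obj_location_to_handle(struct page *page,
                                                unsigned long obj_idx)
    {
            unsigned long handle;

            handle = page_to_pfn(page) << OBJ_INDEX_BITS;
            handle |= (obj_idx + 1) & OBJ_INDEX_MASK;
            return handle;
    }

    /* decoding subtracts the 1 back out */
    static void obj_handle_to_location(unsigned long handle,
                                       struct page **page,
                                       unsigned long *obj_idx)
    {
            *page = pfn_to_page(handle >> OBJ_INDEX_BITS);
            *obj_idx = (handle & OBJ_INDEX_MASK) - 1;
    }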
-
- 24 Jul 2013: 1 commit
-
-
Committed by Sunghan Suh
Signed-off-by: Sunghan Suh <sunghan.suh@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 22 May 2013: 1 commit
-
-
Committed by Sara Bird
The existing comments are using an odd style. Fixed them up to adhere to the StyleGuide. No code changes.

Signed-off-by: Sara Bird <sara.bird.iar@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 21 May 2013: 1 commit
-
-
Committed by Marlies Ruck
Fixes the following checkpatch warning:

    WARNING: quoted string split across lines

Signed-off-by: Marlies Ruck <marlies.ruck@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 24 Apr 2013: 1 commit
-
-
Committed by Arnd Bergmann
Building zsmalloc as a module does not work on ARM because it uses an interface that is not exported:

    ERROR: "flush_tlb_kernel_range" [drivers/staging/zsmalloc/zsmalloc.ko] undefined!

Since this is only used as a performance optimization and only on ARM, we can avoid the problem simply by not using that optimization when zsmalloc is built as a loadable module.

flush_tlb_kernel_range is often an inline function, but out of the architectures that use an extern function, only powerpc exports it.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
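A sketch of the guard this implies, reconstructed from the description (the actual preprocessor condition in the driver may differ):

    /*
     * flush_tlb_kernel_range() is not exported to modules on ARM, so
     * only use the page-table mapping optimization when zsmalloc is
     * built in; loadable modules fall back to the copy-based method.
     */
    #if defined(CONFIG_ARM) && !defined(MODULE)
    #define USE_PGTABLE_MAPPING
    #endif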
-
- 29 Mar 2013: 1 commit
-
-
Committed by Joerg Roedel
Testing the arm chromebook config against the upstream kernel produces a linker error for the zsmalloc module from staging: the symbol flush_tlb_kernel_range is not available there. Fix this by removing the reimplementation of unmap_kernel_range in the zsmalloc module and using the function directly. The unmap_kernel_range function is not usable by modules, so also disallow building the driver as a module for now.

Cc: stable <stable@vger.kernel.org>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 24 Feb 2013: 1 commit
-
-
Committed by Mel Gorman
The function names page_xchg_last_nid(), page_last_nid() and reset_page_last_nid() were judged to be inconsistent, so rename them to a struct_field_op style pattern. As it looked jarring to have reset_page_mapcount() and page_nid_reset_last() beside each other in memmap_init_zone(), this patch also renames reset_page_mapcount() to page_mapcount_reset(). There are others like init_page_count(), but as it is used throughout the arch code a rename would likely cause more conflicts than it is worth.

[akpm@linux-foundation.org: fix zcache]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 31 Jan 2013: 1 commit
-
-
Committed by Seth Jennings
zs_create_pool() currently takes a name argument which is never used in any useful way. This patch removes it.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 30 Jan 2013: 2 commits
-
-
Committed by Minchan Kim
Recently, Matt Sealey reported that he failed to build zsmalloc because it uses local_flush_tlb_kernel_range, an architecture-dependent function that ARM with !CONFIG_SMP does not implement, so the build ends up with the following error:

    MODPOST 216 modules
    LZMA    arch/arm/boot/compressed/piggy.lzma
    AS      arch/arm/boot/compressed/lib1funcs.o
    ERROR: "v7wbi_flush_kern_tlb_range" [drivers/staging/zsmalloc/zsmalloc.ko] undefined!
    make[1]: *** [__modpost] Error 1
    make: *** [modules] Error 2
    make: *** Waiting for unfinished jobs....

The reason we used that function is that the copy method added by [1] was really slow on ARM at that time. A more severe problem is that ARM can prefetch speculatively on other CPUs, so if we only flush the local CPU's TLB, other CPUs' TLBs can still hold stale entries. Russell King pointed that out. Thanks!

We don't have many choices except using flush_tlb_kernel_range. My experiment on a 4-core ARMv7 processor didn't show any difference with zsmapbench [2] between local_flush_tlb_kernel_range and flush_tlb_kernel_range, but page-table-based mapping is still much better than copy-based:

* bigger is better.
1. local_flush_tlb_kernel_range: 3918795 mappings
2. flush_tlb_kernel_range:       3989538 mappings
3. copy-based:                    635158 mappings

This patch replaces local_flush_tlb_kernel_range with flush_tlb_kernel_range, which is available on all architectures because it is already used by the generic vmalloc allocator. The build problem should go away and the performance loss should be negligible.

[1] f553646a, zsmalloc: add page table mapping method
[2] https://github.com/spartacus06/zsmapbench

Cc: stable@vger.kernel.org
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Reported-by: Matt Sealey <matt@genesi-usa.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Seth Jennings
Right now ZS_SIZE_CLASS_DELTA is hardcoded to be 16. This creates 254 classes for systems with 4k pages. However, on PPC64 with 64k pages, it creates 4095 classes, which is far too many. This patch makes ZS_SIZE_CLASS_DELTA relative to PAGE_SIZE so that, regardless of the page size, there will be the same number of classes.

Acked-by: Nitin Gupta <ngupta@vflare.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
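A minimal sketch of the change, with both definitions reconstructed from the description:

    /* before: fixed 16-byte granularity, so the number of classes
     * grows with the page size (254 at 4k, ~4095 at 64k) */
    #define ZS_SIZE_CLASS_DELTA  16

    /* after: granularity scales with the page size, keeping the
     * class count the same on every system */
    #define ZS_SIZE_CLASS_DELTA  (PAGE_SIZE >> 8)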
-
- 16 Jan 2013: 1 commit
-
-
Committed by Davidlohr Bueso
Just as with zs_malloc() and zs_map_object(), it is worth formally commenting the zs_create_pool() function.

Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 14 Aug 2012: 4 commits
-
-
Committed by Seth Jennings
This patch collapses the internal zsmalloc_int.h into the zsmalloc-main.c file. This is done in preparation for the promotion to mm/, where separate internal headers are discouraged.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Seth Jennings
This patchset provides page mapping via the page table. On some archs, most notably ARM, this method has been demonstrated to be faster than copying. The logic controlling the method selection (copy vs page table) is controlled by the definition of USE_PGTABLE_MAPPING, which is/can be defined for any arch that performs better with page table mapping.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Seth Jennings
Because we use per-cpu mapping areas shared among the pools/users, we can't allow mapping in interrupt context because it can corrupt another user's mappings.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
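A hedged sketch of the kind of guard this adds to zs_map_object(); the exact placement is an assumption based on the message:

    void *zs_map_object(struct zs_pool *pool, unsigned long handle,
                        enum zs_mapmode mm)
    {
            /*
             * The per-cpu mapping area is shared by all pools/users on
             * this CPU; mapping from interrupt context could land on
             * top of another user's in-progress mapping.
             */
            BUG_ON(in_interrupt());

            /* ... locate the object and map it ... */
    }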
-
Committed by Seth Jennings
firstpage already has precedent, meaning the first page of a zspage. In the case of the copy mapping functions, it is the first of a pair of pages needing to be mapped. This patch just renames the firstpage argument to "page" to avoid confusion.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 10 Jul 2012: 4 commits
-
-
Committed by Seth Jennings
This patch improves mapping performance in zsmalloc by getting usage information from the user in the form of a "mapping mode" and using it to avoid unnecessary copying for objects that span pages.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
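A sketch of the "mapping mode" hint being introduced; the enum values follow the names zsmalloc later carried in-tree, which may differ slightly from this revision:

    enum zs_mapmode {
            ZS_MM_RW,   /* normal read-write mapping */
            ZS_MM_RO,   /* read-only: nothing is copied back on unmap */
            ZS_MM_WO    /* write-only: stale data isn't copied in on map */
    };

For example, a write-only mapping of an object that spans two pages lets zs_map_object() skip copying the object's old contents into the per-cpu buffer, since the caller promises to overwrite all of it.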
-
Committed by Seth Jennings
Add information on the usage limits of zs_map_object().

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Seth Jennings
Improve zs_unmap_object() performance by adding a fast path for objects that don't span pages.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Seth Jennings
This patch replaces the page table assisted object mapping method, which has x86 dependencies, with an arch-independent method that does a simple copy into a temporary per-cpu buffer.

While a copy seems like it would be worse than mapping the pages, tests demonstrate the copying is always faster and, in the case of running inside a KVM guest, roughly 4x faster.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
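A hedged sketch of the copy-based approach; the helper name and signature are illustrative, not the driver's actual code:

    /*
     * Copy an object that spans two pages into a per-cpu buffer so the
     * caller sees it through one contiguous pointer.
     */
    static void *copy_object_in(char *buf, struct page *pages[2],
                                int off, int size)
    {
            int sizes[2];
            void *addr;

            sizes[0] = PAGE_SIZE - off;   /* tail of the first page */
            sizes[1] = size - sizes[0];   /* head of the second page */

            addr = kmap_atomic(pages[0]);
            memcpy(buf, addr + off, sizes[0]);
            kunmap_atomic(addr);

            addr = kmap_atomic(pages[1]);
            memcpy(buf + sizes[0], addr, sizes[1]);
            kunmap_atomic(addr);

            return buf;
    }

On unmap, the same copy runs in reverse to write any modifications back to the two pages.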
-
- 21 Jun 2012: 1 commit
-
-
Committed by Ben Hutchings
ZSMALLOC is tristate, but the code has no MODULE_LICENSE and, since it depends on GPL-only symbols, it cannot be loaded as a module. This in turn breaks zram, which now depends on it. I assume it's meant to be Dual BSD/GPL like the other z-stuff.

There is also no module_exit, which will make it impossible to unload. Add the appropriate module_init and module_exit declarations suggested by comments.

Reported-by: Christian Ohm <chr.ohm@gmx.net>
References: http://bugs.debian.org/677273
Cc: stable@vger.kernel.org # v3.4
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
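A minimal sketch of the boilerplate being added; the init/exit function names are illustrative and the license string follows the message's "Dual BSD/GPL" assumption:

    #include <linux/module.h>

    static int __init zs_init(void)
    {
            /* setup suggested by the existing comments */
            return 0;
    }

    static void __exit zs_exit(void)
    {
            /* teardown, so the module can actually be unloaded */
    }

    module_init(zs_init);
    module_exit(zs_exit);

    MODULE_LICENSE("Dual BSD/GPL");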
-
- 14 Jun 2012: 1 commit
-
-
Committed by Seth Jennings
This patch fixes an uninitialized variable warning in alloc_zspage(). It also fixes the secondary issue of prev_page leaving scope on each loop iteration. The only reason this ever worked was because prev_page was occupying the same space on the stack on each iteration.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 12 Jun 2012: 1 commit
-
-
Committed by Nitin Gupta
Documentation of various struct page fields used by zsmalloc.

Changes for v2:
  - Regroup descriptions as suggested by Konrad

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 11 Jun 2012: 1 commit
-
-
Committed by Minchan Kim
We should use unsigned long as the handle instead of void * to avoid any confusion. Without this, users may treat the zs_malloc return value as a pointer and try to dereference it.

This patch passed compile tests (zram, zcache and ramster) and zram was tested on qemu.

changelog
  * from v2
    - remove hval pointed out by Nitin
    - based on next-20120607
  * from v1
    - change zcache's zv_create return value
    - based on next-20120604

Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
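A sketch of the signature change, with prototypes reconstructed from the description:

    /* before: the handle looked like a pointer, inviting callers to
     * dereference it directly */
    void *zs_malloc(struct zs_pool *pool, size_t size);
    void zs_free(struct zs_pool *pool, void *obj);

    /* after: an opaque integer handle that must be passed through
     * zs_map_object() before the data can be touched */
    unsigned long zs_malloc(struct zs_pool *pool, size_t size);
    void zs_free(struct zs_pool *pool, unsigned long obj);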
-
- 10 May 2012: 2 commits
-
-
Committed by Minchan Kim
Add/fix the comment.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Minchan Kim
zspage_order defines how many pages are needed to make a zspage, so _order_ is rather awkward naming. It has already deceived Jonathan (http://lwn.net/Articles/477067/):

    "For each size, the code calculates an optimum number of pages (up to 16)"

Let's change _order_ to _pages_, and rename some functions accordingly.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 26 Apr 2012: 1 commit
-
-
Committed by Minchan Kim
MM code always uses PageXXX to handle page flags. Let's keep the consistency.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 11 Apr 2012: 1 commit
-
-
Committed by Nitin Gupta
This patch fixes a memory leak in zsmalloc where the first subpage of each zspage is leaked when the zspage is freed.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 08 Mar 2012: 2 commits
-
-
Committed by Seth Jennings
This patch moves where max_zspage_order is declared and changes its meaning. "Order" typically implies 2^order of something; however, it is currently being used as the "maximum number of single pages in a zspage". To add clarity, ZS_MAX_ZSPAGE_ORDER is now used to calculate ZS_MAX_PAGES_PER_ZSPAGE, which is 2^ZS_MAX_ZSPAGE_ORDER and is the upper bound on the number of pages in a zspage.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
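A sketch of the relationship being introduced; the order value shown is an assumption:

    /* 2^ZS_MAX_ZSPAGE_ORDER single pages is the upper bound on the
     * number of pages stitched into one zspage */
    #define ZS_MAX_ZSPAGE_ORDER     2
    #define ZS_MAX_PAGES_PER_ZSPAGE (1UL << ZS_MAX_ZSPAGE_ORDER)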
-
Committed by Seth Jennings
This patch moves the definitions of _PFN_BITS, OBJ_INDEX_BITS and OBJ_INDEX_MASK from zsmalloc-main.c to zsmalloc_int.h. They will be needed to determine ZS_MIN_ALLOC_SIZE in the next patch.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 13 Feb 2012: 1 commit
-
-
Committed by Seth Jennings
linux/vmalloc.h added to zsmalloc-main.c to resolve implicit declaration errors. X86 dependency added to zsmalloc and dependent drivers zcache and zram.

This X86-only requirement is not ideal. Working to find portable functions for __flush_tlb_one and set_pte.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 09 Feb 2012: 1 commit
-
-
Committed by Nitin Gupta
This patch creates a new memory allocation library named zsmalloc.

NOTE: zsmalloc currently depends on SPARSEMEM for the MAX_PHYSMEM_BITS value needed to determine the format of the object handle. There may be a better way to do this. Feedback is welcome.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-