1. 09 Oct 2014, 1 commit
    • percpu: fix how @gfp is interpreted by the percpu allocator · 6ae833c7
      Authored by Tejun Heo
      When @gfp is specified, the percpu allocator is interested in whether
      it contains all of GFP_KERNEL or not.  If it does, the normal
      allocation path is taken; otherwise, the atomic allocation path.
      Unfortunately, pcpu_alloc() was incorrectly testing for whether @gfp
      contains any part of GFP_KERNEL.
      
      Fix it by testing "(gfp & GFP_KERNEL) != GFP_KERNEL" instead of
      "!(gfp & GFP_KERNEL)" to decide whether the allocation should be
      atomic or not.
      Signed-off-by: Tejun Heo <tj@kernel.org>
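
      A minimal sketch of the two tests described above (the variable names are illustrative; only the GFP expressions come from the message):

          /* correct: atomic unless the caller passed all of GFP_KERNEL */
          bool is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;

          /* buggy variant: any single GFP_KERNEL bit made the allocation
           * look non-atomic, taking the sleeping path by mistake */
          bool was_atomic = !(gfp & GFP_KERNEL);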
  2. 22 Sep 2014, 1 commit
  3. 09 Sep 2014, 1 commit
  4. 03 Sep 2014, 10 commits
    • percpu: implement asynchronous chunk population · 1a4d7607
      Authored by Tejun Heo
      The percpu allocator now supports atomic allocations by only
      allocating from already populated areas, but the mechanism to ensure
      that there's an adequate amount of populated areas was missing.
      
      This patch expands pcpu_balance_work so that in addition to freeing
      excess free chunks it also populates chunks to maintain an adequate
      level of populated areas.  pcpu_alloc() schedules pcpu_balance_work if
      the amount of free populated areas is too low or after an atomic
      allocation failure.
      
      * PERCPU_DYNAMIC_RESERVE is increased by two pages to account for
        PCPU_EMPTY_POP_PAGES_LOW.
      
      * pcpu_async_enabled is added to gate both async jobs -
        chunk->map_extend_work and pcpu_balance_work - so that we don't end
        up scheduling them while the needed subsystems aren't up yet.
      Signed-off-by: Tejun Heo <tj@kernel.org>
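
      A hedged sketch of the trigger described above; pcpu_balance_work, pcpu_async_enabled, PCPU_EMPTY_POP_PAGES_LOW and pcpu_nr_empty_pop_pages are named in this series, while the failure flag below is purely illustrative:

          /* simplified, not the literal mm/percpu.c code */
          if (pcpu_async_enabled &&
              (atomic_alloc_failed ||
               pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW))
                  schedule_work(&pcpu_balance_work);  /* repopulate in process context */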
    • percpu: rename pcpu_reclaim_work to pcpu_balance_work · fe6bd8c3
      Authored by Tejun Heo
      pcpu_reclaim_work will also be used to populate chunks asynchronously.
      Rename it to pcpu_balance_work in preparation.  pcpu_reclaim() is
      renamed to pcpu_balance_workfn() and some of its local variables are
      renamed too.
      
      This is a pure rename.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • percpu: implement pcpu_nr_empty_pop_pages and chunk->nr_populated · b539b87f
      Authored by Tejun Heo
      pcpu_nr_empty_pop_pages counts the number of empty populated pages
      across all chunks and chunk->nr_populated counts the number of
      populated pages in a chunk.  Both will be used to implement pre/async
      population for atomic allocations.
      
      pcpu_chunk_[de]populated() are added to update chunk->populated,
      chunk->nr_populated and pcpu_nr_empty_pop_pages together.  All
      successful chunk [de]populations should be followed by the
      corresponding pcpu_chunk_[de]populated() calls.
      Signed-off-by: Tejun Heo <tj@kernel.org>
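
      A rough sketch of the paired bookkeeping the message describes (the body mirrors the intent; treat the exact shape as an assumption):

          static void pcpu_chunk_populated(struct pcpu_chunk *chunk,
                                           int page_start, int page_end)
          {
                  int nr = page_end - page_start;

                  lockdep_assert_held(&pcpu_lock);
                  bitmap_set(chunk->populated, page_start, nr);
                  chunk->nr_populated += nr;
                  pcpu_nr_empty_pop_pages += nr;  /* new pages start out empty */
          }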
    • percpu: make sure chunk->map array has available space · 9c824b6a
      Authored by Tejun Heo
      An allocation attempt may require extending chunk->map array which
      requires GFP_KERNEL context which isn't available for atomic
      allocations.  This patch ensures that the chunk->map array usually keeps
      some amount of available space by directly allocating buffer space
      during GFP_KERNEL allocations and scheduling async extension during
      atomic ones.  This should make atomic allocation failures from map
      space exhaustion rare.
      Signed-off-by: Tejun Heo <tj@kernel.org>
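
      Conceptually the policy looks like this (a sketch; the margin constants and the extension call are simplifications, only chunk->map_extend_work is named in this series):

          if (is_atomic) {
                  /* cannot sleep: ask the workqueue to grow chunk->map later */
                  if (chunk->map_alloc < chunk->map_used + MAP_MARGIN_LOW)
                          schedule_work(&chunk->map_extend_work);
          } else {
                  /* GFP_KERNEL: extend the map synchronously, keeping headroom */
                  pcpu_extend_area_map(chunk, chunk->map_used + MAP_MARGIN_HIGH);
          }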
    • percpu: implement [__]alloc_percpu_gfp() · 5835d96e
      Authored by Tejun Heo
      Now that pcpu_alloc_area() can allocate only from populated areas,
      it's easy to add atomic allocation support to [__]alloc_percpu().
      Update pcpu_alloc() so that it accepts @gfp and skips all the blocking
      operations and allocates only from the populated areas if @gfp doesn't
      contain GFP_KERNEL.  New interface functions [__]alloc_percpu_gfp()
      are added.
      
      While this means that atomic allocations are possible, this isn't
      complete yet as there's no mechanism to ensure that a certain amount
      of populated areas is kept available, and atomic allocations may keep
      failing under certain conditions.
      Signed-off-by: Tejun Heo <tj@kernel.org>
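
      Example usage of the new interface (the stats struct is a stand-in; alloc_percpu_gfp(), free_percpu() and GFP_NOWAIT are the interfaces named above):

          struct foo_stats {
                  u64 hits;
                  u64 misses;
          };

          static struct foo_stats __percpu *foo_alloc_stats(void)
          {
                  /* atomic context: allocate only from already populated space */
                  return alloc_percpu_gfp(struct foo_stats, GFP_NOWAIT);
          }
          /* ... later, from sleepable context: free_percpu(stats); */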
    • percpu: indent the population block in pcpu_alloc() · e04d3208
      Authored by Tejun Heo
      The next patch will conditionalize the population block in
      pcpu_alloc() which will end up making a rather large indentation
      change obfuscating the actual logic change.  This patch puts the block
      under "if (true)" so that the next patch can avoid indentation
      changes.  The definitions of the local variables which are used only in
      the block are moved into the block.
      
      This patch is purely cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • percpu: make pcpu_alloc_area() capable of allocating only from populated areas · a16037c8
      Authored by Tejun Heo
      Update pcpu_alloc_area() so that it can skip unpopulated areas if the
      new parameter @pop_only is true.  This is implemented by a new
      function, pcpu_fit_in_area(), which determines the amount of head
      padding considering the alignment and populated state.
      
      @pop_only is currently always false but this will be used to implement
      atomic allocation.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • percpu: restructure locking · b38d08f3
      Authored by Tejun Heo
      At first, the percpu allocator required a sleepable context for both
      alloc and free paths and used pcpu_alloc_mutex to protect everything.
      Later, pcpu_lock was introduced to protect the index data structure so
      that the free path can be invoked from atomic contexts.  The
      conversion only updated what's necessary and left most of the
      allocation path under pcpu_alloc_mutex.
      
      The percpu allocator is planned to add support for atomic allocation
      and this patch restructures locking so that the coverage of
      pcpu_alloc_mutex is further reduced.
      
      * pcpu_alloc() now grabs pcpu_alloc_mutex only while creating a new
        chunk and populating the allocated area.  Everything else is now
        protected solely by pcpu_lock.
      
        After this change, multiple instances of pcpu_extend_area_map() may
        race but the function already implements sufficient synchronization
        using pcpu_lock.
      
        This also allows multiple allocators to arrive at new chunk
        creation.  To avoid creating multiple empty chunks back-to-back, a
        new chunk is created iff there is no other empty chunk after
        grabbing pcpu_alloc_mutex.
      
      * pcpu_lock is now held while modifying chunk->populated bitmap.
        After this, all data structures are protected by pcpu_lock.
      Signed-off-by: Tejun Heo <tj@kernel.org>
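
      The resulting split, in outline (simplified sketch; error paths and the actual chunk scan are omitted):

          unsigned long flags;

          spin_lock_irqsave(&pcpu_lock, flags);
          /* scan chunks and reserve an area: index data under pcpu_lock only */
          spin_unlock_irqrestore(&pcpu_lock, flags);

          mutex_lock(&pcpu_alloc_mutex);          /* may sleep */
          /* create a new chunk iff no empty one exists, then populate the area */
          mutex_unlock(&pcpu_alloc_mutex);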
    • percpu: move region iterations out of pcpu_[de]populate_chunk() · a93ace48
      Authored by Tejun Heo
      Previously, pcpu_[de]populate_chunk() was called with a range which
      may contain multiple target regions, and pcpu_[de]populate_chunk()
      iterated over those regions.  This has the
      benefit of batching up cache flushes for all the regions; however,
      we're planning to add more bookkeeping logic around [de]population to
      support atomic allocations and this delegation of iterations gets in
      the way.
      
      This patch moves the region iterations out of
      pcpu_[de]populate_chunk() into its callers - pcpu_alloc() and
      pcpu_reclaim() - so that we can later add logic to track more states
      around them.  This change may make cache and tlb flushes more frequent
      but multi-region [de]populations are rare anyway and if this actually
      becomes a problem, it's not difficult to factor out cache flushes as
      separate callbacks which are directly invoked from percpu.c.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • percpu: move common parts out of pcpu_[de]populate_chunk() · dca49645
      Authored by Tejun Heo
      percpu-vm and percpu-km implement separate versions of
      pcpu_[de]populate_chunk(), and some parts which are or should be
      common currently live in the specific implementations.  Make the
      following changes.
      
      * Allocated area clearing is moved from the pcpu_populate_chunk()
        implementations to pcpu_alloc().  This makes percpu-km's version a
        noop.
      
      * Quick exit tests in pcpu_[de]populate_chunk() of percpu-vm are moved
        to their respective callers so that they are applied to percpu-km
        too.  This doesn't make any meaningful difference as both functions
        are noop for percpu-km; however, this is more consistent and will
        help implementing atomic allocation support.
      Signed-off-by: Tejun Heo <tj@kernel.org>
  5. 16 Aug 2014, 1 commit
  6. 19 Jun 2014, 1 commit
  7. 15 Apr 2014, 1 commit
    • percpu: make pcpu_alloc_chunk() use pcpu_mem_free() instead of kfree() · 5a838c3b
      Authored by Jianyu Zhan
      pcpu_chunk_struct_size = sizeof(struct pcpu_chunk) +
      	BITS_TO_LONGS(pcpu_unit_pages) * sizeof(unsigned long)
      
      It could hardly ever be bigger than PAGE_SIZE even on a large-scale
      machine, but for consistency with its counterpart pcpu_mem_zalloc(),
      use pcpu_mem_free() instead.
      
      Commit b4916cb1 ("percpu: make pcpu_free_chunk() use
      pcpu_mem_free() instead of kfree()") addressed this problem, but
      missed this one.
      
      tj: commit message updated
      Signed-off-by: Jianyu Zhan <nasa4836@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: 099a19d9 ("percpu: allow limited allocation before slab is online")
      Cc: stable@vger.kernel.org
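
      For context, the helper pair is expected to look roughly like this (a sketch assuming the size-based kzalloc()/vzalloc() split of pcpu_mem_zalloc()):

          static void pcpu_mem_free(void *ptr, size_t size)
          {
                  if (size <= PAGE_SIZE)
                          kfree(ptr);     /* came from kzalloc() */
                  else
                          vfree(ptr);     /* came from vzalloc() */
          }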
  8. 29 Mar 2014, 1 commit
  9. 18 Mar 2014, 1 commit
    • percpu: allocation size should be even · 2f69fa82
      Authored by Al Viro
      723ad1d9 ("percpu: store offsets instead of lengths in ->map[]")
      updated the percpu area allocator to use the lowest bit, instead of
      the sign, to signify whether an area is occupied, and forced the
      minimum alignment to 2; unfortunately, it forgot to force the
      allocation size to be even, causing malfunctions for the very rare
      odd-sized allocations.
      
      Always force the allocations to be even sized.
      
      tj: Wrote patch description.
      Original-patch-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Tejun Heo <tj@kernel.org>
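
      The fix amounts to rounding the request up to an even size (a sketch; ALIGN() is the standard kernel rounding macro):

          /* bit 0 of every ->map[] offset is the in-use flag, so offsets -
           * and therefore sizes and alignments - must stay even */
          if (unlikely(align < 2))
                  align = 2;
          size = ALIGN(size, 2);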
  10. 07 Mar 2014, 3 commits
    • percpu: speed alloc_pcpu_area() up · 3d331ad7
      Authored by Al Viro
      If we know that the first N areas are all in use, we can obviously
      skip them when searching for a free one.  And that kind of hint is
      very easy to maintain.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Tejun Heo <tj@kernel.org>
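
      In sketch form (the first_free field name follows the idea described; the helpers are hypothetical):

          /* skip the prefix of areas known to be fully in use */
          for (i = chunk->first_free; i < chunk->map_used; i++) {
                  if (!area_in_use(chunk, i) && area_fits(chunk, i, size, align))
                          return pcpu_take_area(chunk, i, size);
          }

      The hint stays cheap to maintain because it only ever needs to advance when the area it points at gets allocated.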
    • percpu: store offsets instead of lengths in ->map[] · 723ad1d9
      Authored by Al Viro
      The current code keeps a +/- length for each area in chunk->map[].
      It has several unpleasant consequences:
      	* even if we know that the first 50 areas are all in use,
      allocation still needs to walk through all of them just to sum their
      sizes, merely to get the offset of a free one.
      	* freeing needs to find the array entry referring to the area
      in question; again, we need to sum the sizes until we reach the offset
      we are interested in.  Note that offsets are monotonically increasing,
      so a simple binary search would do here.
      
      	New data representation: an array of <offset, in-use flag> pairs.
      Each pair is represented by one int - we use offset|1 for <offset, in use>
      and offset for <offset, free> (we make sure that all offsets are even).
      At the end we put a sentinel entry - <total size, in use>.  The first
      entry is <0, flag>; it would be possible to store the flag for the Nth
      area together with the offset for the (N+1)st, but that leads to much
      hairier code.
      
      In other words, where the old variant would have
      	4, -8, -4, 4, -12, 100
      (4 bytes free, 8 in use, 4 in use, 4 free, 12 in use, 100 free) we store
      	<0,0>, <4,1>, <12,1>, <16,0>, <20,1>, <32,0>, <132,1>
      i.e.
      	0, 5, 13, 16, 21, 32, 133
      
      This commit switches to the new data representation and takes care of
      a couple of low-hanging fruits in free_pcpu_area() - one is the switch
      to binary search, another is not doing two memmove()s when one would
      do.  Speeding the alloc side up (by keeping track of how many areas at
      the beginning are known to be all in use) also becomes possible -
      that'll be done in the next commit.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Tejun Heo <tj@kernel.org>
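
      The encoding can be made explicit with a couple of trivial helpers (hypothetical names, shown only to spell out the bit layout described above):

          /* each ->map[] entry: even offset, bit 0 set when the area is in use */
          static inline int  pcpu_ent_off(int ent)    { return ent & ~1; }
          static inline bool pcpu_ent_in_use(int ent) { return ent & 1; }

          /* size of area i; map[nr_areas] is the sentinel <total size, in use> */
          static inline int pcpu_area_size(const int *map, int i)
          {
                  return pcpu_ent_off(map[i + 1]) - pcpu_ent_off(map[i]);
          }

      With the example above, map[] = {0, 5, 13, 16, 21, 32, 133} decodes back to the run lengths 4 (free), 8, 4 (in use), 4 (free), 12 (in use), 100 (free).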
    • A
      perpcu: fold pcpu_split_block() into the only caller · 706c16f2
      Al Viro 提交于
      ... and simplify the results a bit.  Makes the next step easier
      to deal with - we will be changing the data representation for
      chunk->map[] and it's easier to do if the code in question is
      not split between pcpu_alloc_area() and pcpu_split_block().
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  11. 22 Jan 2014, 1 commit
    • mm/percpu.c: use memblock apis for early memory allocations · 999c17e3
      Authored by Santosh Shilimkar
      Switch to memblock interfaces for the early memory allocator instead
      of the bootmem allocator.  There is no functional change in behavior
      from the bootmem users' point of view.

      Archs already converted to NO_BOOTMEM now directly use memblock
      interfaces instead of bootmem wrappers built on top of memblock.  For
      the archs which still use bootmem, these new APIs simply fall back to
      the existing bootmem APIs.
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Grygorii Strashko <grygorii.strashko@ti.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Paul Walmsley <paul@pwsan.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
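
      The shape of the conversion, for one representative call (a sketch; treat the exact wrapper signature as an assumption of this note):

          /* before: bootmem */
          ptr = alloc_bootmem(size);

          /* after: memblock wrapper; falls back to bootmem on not-yet-converted archs */
          ptr = memblock_virt_alloc(size, 0);     /* 0 selects the default alignment */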
  12. 21 Jan 2014, 1 commit
  13. 23 Sep 2013, 1 commit
    • percpu: fix bootmem error handling in pcpu_embed_first_chunk() · f851c8d8
      Authored by Michael Holzheu
      If a memory allocation in pcpu_embed_first_chunk() fails, the already
      allocated memory is not released correctly.  In the release loop the
      non-allocated elements are also released, which leads to the following
      kernel BUG on systems with very little memory:
      
      [    0.000000] kernel BUG at mm/bootmem.c:307!
      [    0.000000] illegal operation: 0001 [#1] PREEMPT SMP DEBUG_PAGEALLOC
      [    0.000000] Modules linked in:
      [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.10.0 #22
      [    0.000000] task: 0000000000a20ae0 ti: 0000000000a08000 task.ti: 0000000000a08000
      [    0.000000] Krnl PSW : 0400000180000000 0000000000abda7a (__free+0x116/0x154)
      [    0.000000]            R:0 T:1 IO:0 EX:0 Key:0 M:0 W:0 P:0 AS:0 CC:0 PM:0 EA:3
      ...
      [    0.000000]  [<0000000000abdce2>] mark_bootmem_node+0xde/0xf0
      [    0.000000]  [<0000000000abdd9c>] mark_bootmem+0xa8/0x118
      [    0.000000]  [<0000000000abcbba>] pcpu_embed_first_chunk+0xe7a/0xf0c
      [    0.000000]  [<0000000000abcc96>] setup_per_cpu_areas+0x4a/0x28c
      
      To fix the problem, only allocated elements are now released.  This
      then leads to the correct kernel panic:
      
      [    0.000000] Kernel panic - not syncing: Failed to initialize percpu areas.
      ...
      [    0.000000] Call Trace:
      [    0.000000] ([<000000000011307e>] show_trace+0x132/0x150)
      [    0.000000]  [<0000000000113160>] show_stack+0xc4/0xd4
      [    0.000000]  [<00000000007127dc>] dump_stack+0x74/0xd8
      [    0.000000]  [<00000000007123fe>] panic+0xea/0x264
      [    0.000000]  [<0000000000b14814>] setup_per_cpu_areas+0x5c/0x28c
      
      tj: Flipped if conditional so that it doesn't need "continue".
      Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
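
      The fix, in outline (names follow pcpu_embed_first_chunk()'s locals only loosely; the point is the flipped condition that skips never-allocated slots):

          /* error path: free only the per-group areas that were actually allocated */
          for (group = 0; group < ai->nr_groups; group++)
                  if (areas[group])
                          free_fn(areas[group],
                                  ai->groups[group].nr_units * ai->unit_size);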
  14. 02 Dec 2012, 1 commit
  15. 29 Oct 2012, 1 commit
  16. 06 Oct 2012, 1 commit
  17. 10 May 2012, 2 commits
  18. 30 Mar 2012, 1 commit
  19. 16 Dec 2011, 1 commit
    • percpu: fix per_cpu_ptr_to_phys() handling of non-page-aligned addresses · 9f57bd4d
      Authored by Eugene Surovegin
      per_cpu_ptr_to_phys() incorrectly rounds its result down to the page
      boundary for the non-kmalloc case, which is bogus for any
      non-page-aligned address.

      This affects the only in-tree user of this function - the sysfs
      handler for the per-cpu 'crash_notes' physical address.  The trouble
      is that the crash_notes per-cpu variable is not page-aligned:
      
      crash_notes = 0xc08e8ed4
      PER-CPU OFFSET VALUES:
       CPU 0: 3711f000
       CPU 1: 37129000
       CPU 2: 37133000
       CPU 3: 3713d000
      
      So, the per-cpu addresses are:
       crash_notes on CPU 0: f7a07ed4 => phys 36b57ed4
       crash_notes on CPU 1: f7a11ed4 => phys 36b4ded4
       crash_notes on CPU 2: f7a1bed4 => phys 36b43ed4
       crash_notes on CPU 3: f7a25ed4 => phys 36b39ed4
      
      However, /sys/devices/system/cpu/cpu*/crash_notes says:
       /sys/devices/system/cpu/cpu0/crash_notes: 36b57000
       /sys/devices/system/cpu/cpu1/crash_notes: 36b4d000
       /sys/devices/system/cpu/cpu2/crash_notes: 36b43000
       /sys/devices/system/cpu/cpu3/crash_notes: 36b39000
      
      As you can see, all values are rounded down to a page
      boundary. Consequently, this is where kexec sets up the NOTE segments,
      and thus where the secondary kernel is looking for them. However, when
      the first kernel crashes, it saves the notes to the unaligned
      addresses, where they are not found.
      
      Fix it by adding offset_in_page() to the translated page address.
      
      tj: Combined Eugene's and Petr's commit messages.
      Signed-off-by: Eugene Surovegin <ebs@ebshome.net>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Petr Tesarik <ptesarik@suse.cz>
      Cc: stable@kernel.org
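
      The vmalloc-backed branch of the translation then becomes (a sketch; only the added offset_in_page() term is the point):

          /* physical address of the backing page plus the offset inside it */
          return page_to_phys(vmalloc_to_page(addr)) + offset_in_page(addr);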
  20. 03 Dec 2011, 1 commit
  21. 24 Nov 2011, 1 commit
  22. 23 Nov 2011, 2 commits
    • percpu: fix chunk range calculation · a855b84c
      Authored by Tejun Heo
      The percpu allocator recorded the CPUs which map to the first and
      last units in pcpu_first/last_unit_cpu respectively and used them to
      determine the address range of a chunk - e.g. it assumed that the
      first unit has the lowest address in a chunk while the last unit has
      the highest address.
      
      This simply isn't true.  Groups in a chunk can have arbitrary positive
      or negative offsets from the previous one and there is no guarantee
      that the first unit occupies the lowest offset while the last one the
      highest.
      
      Fix it by actually comparing unit offsets to determine the CPUs
      occupying the lowest and highest offsets.  Also, rename
      pcpu_first/last_unit_cpu to pcpu_low/high_unit_cpu to avoid confusion.
      
      The chunk address range is used to flush the cache on vmalloc area
      map/unmap and to decide whether a given address is in the first chunk
      by per_cpu_ptr_to_phys(), and the bug was discovered by an invalid
      per_cpu_ptr_to_phys() translation for crash_notes.
      
      Kudos to Dave Young for tracking down the problem.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: WANG Cong <xiyou.wangcong@gmail.com>
      Reported-by: Dave Young <dyoung@redhat.com>
      Tested-by: Dave Young <dyoung@redhat.com>
      LKML-Reference: <4EC21F67.10905@redhat.com>
      Cc: stable@kernel.org
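
      In sketch form, the corrected setup compares unit offsets directly (simplified; the unit_off[] array and the initialisation of the two variables are glossed over):

          for_each_possible_cpu(cpu) {
                  if (unit_off[cpu] < unit_off[pcpu_low_unit_cpu])
                          pcpu_low_unit_cpu = cpu;        /* lowest-offset unit */
                  if (unit_off[cpu] > unit_off[pcpu_high_unit_cpu])
                          pcpu_high_unit_cpu = cpu;       /* highest-offset unit */
          }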
    • percpu: rename pcpu_mem_alloc to pcpu_mem_zalloc · 90459ce0
      Authored by Bob Liu
      Currently pcpu_mem_alloc() is implemented to always return zeroed
      memory.  Rename it to pcpu_mem_zalloc() so that users like
      pcpu_get_pages_and_bitmap() know they don't need to reinitialize the
      returned memory.
      Signed-off-by: Bob Liu <lliubbo@gmail.com>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  23. 31 Mar 2011, 1 commit
  24. 29 Mar 2011, 1 commit
    • percpu: Cast away printk format warning · 787e5b06
      Authored by Mike Frysinger
      On 32-bit systems which don't happen to implicitly define or cast
      VMALLOC_START and/or VMALLOC_END to long in their arch headers, the
      printk in the percpu code will cause a warning to be emitted:
      
      mm/percpu.c: In function 'pcpu_embed_first_chunk':
      mm/percpu.c:1648: warning: format '%lx' expects type 'long unsigned int',
              but argument 3 has type 'unsigned int'
      
      So add an explicit cast to unsigned long here.
      Signed-off-by: Mike Frysinger <vapier@gentoo.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
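
      The usual pattern for this kind of warning (a sketch; the message text is illustrative, not the actual printk in mm/percpu.c):

          /* %lx expects unsigned long; VMALLOC_START/END may be plain int on
           * some 32-bit arches, so cast explicitly */
          pr_warn("percpu: region 0x%lx-0x%lx\n",
                  (unsigned long)VMALLOC_START, (unsigned long)VMALLOC_END);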
  25. 28 Mar 2011, 1 commit
    • NOMMU: percpu should use is_vmalloc_addr(). · eac522ef
      Authored by David Howells
      per_cpu_ptr_to_phys() uses VMALLOC_START and VMALLOC_END to determine if an
      address is in the vmalloc() region or not.  This is incorrect on NOMMU as
      there is no real vmalloc() capability (vmalloc() is emulated by kmalloc()).
      
      The correct way to do this is to use is_vmalloc_addr().  This encapsulates the
      vmalloc() region test in MMU mode and just returns 0 in NOMMU mode.
      
      On FRV in NOMMU mode, the percpu compilation fails without this patch:
      
      mm/percpu.c: In function 'per_cpu_ptr_to_phys':
      mm/percpu.c:1011: error: 'VMALLOC_START' undeclared (first use in this function)
      mm/percpu.c:1011: error: (Each undeclared identifier is reported only once
      mm/percpu.c:1011: error: for each function it appears in.)
      mm/percpu.c:1012: error: 'VMALLOC_END' undeclared (first use in this function)
      mm/percpu.c:1018: warning: control reaches end of non-void function
      Signed-off-by: David Howells <dhowells@redhat.com>
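
      The portable test then reads roughly as follows (a sketch of the relevant branch only; first-chunk handling is omitted):

          /* works on both MMU and NOMMU: no direct VMALLOC_START/END compare */
          if (!is_vmalloc_addr(addr))
                  return __pa(addr);              /* directly mapped */
          return page_to_phys(vmalloc_to_page(addr)) + offset_in_page(addr);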
  26. 25 Mar 2011, 1 commit
    • percpu: Always align percpu output section to PAGE_SIZE · 0415b00d
      Authored by Tejun Heo
      The percpu allocator honors alignment requests up to PAGE_SIZE, and
      both the percpu addresses in the percpu address space and the
      translated kernel addresses should be aligned accordingly.  The
      calculation of the former depends on the alignment of the percpu
      output section in the kernel image.
      
      The linker script macros PERCPU_VADDR() and PERCPU() are used to
      define this output section and the latter takes @align parameter.
      Several architectures are using @align smaller than PAGE_SIZE breaking
      percpu memory alignment.
      
      This patch removes @align parameter from PERCPU(), renames it to
      PERCPU_SECTION() and makes it always align to PAGE_SIZE.  While at it,
      add PCPU_SETUP_BUG_ON() checks such that alignment problems are
      reliably detected and remove percpu alignment comment recently added
      in workqueue.c as the condition would trigger BUG way before reaching
      there.
      
      For um, this patch raises the alignment of percpu area.  As the area
      is in .init, there shouldn't be any noticeable difference.
      
      This problem was discovered by David Howells while debugging boot
      failure on mn10300.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Mike Frysinger <vapier@gentoo.org>
      Cc: uclinux-dist-devel@blackfin.uclinux.org
      Cc: David Howells <dhowells@redhat.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: user-mode-linux-devel@lists.sourceforge.net
  27. 22 Dec 2010, 1 commit