1. 18 Jun 2010, 1 commit
    • percpu: fix first chunk match in per_cpu_ptr_to_phys() · 9983b6f0
      Tejun Heo authored
       per_cpu_ptr_to_phys() determines whether the passed in @addr
       belongs to the first_chunk or not by matching the address only
       against the address range of the base unit (unit0, used by
       cpu0).  When an address from another cpu is passed in, it always
       concludes that the address doesn't belong to the first chunk
       even when it does.  This makes the function return a bogus
       physical address which may lead to a crash.
      
       This problem was discovered by Cliff Wickman while investigating
       a crash during kdump on an SGI UV system.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reported-by: Cliff Wickman <cpw@sgi.com>
       Tested-by: Cliff Wickman <cpw@sgi.com>
      Cc: stable@kernel.org
      9983b6f0
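       The shape of the corrected check is roughly the following.  This
       is a hedged sketch, not the exact upstream code (and it glosses
       over the __percpu address-space casts): pcpu_base_addr and
       pcpu_unit_size are real mm/percpu.c globals, but the helper name
       here is made up.

        /* Sketch: match @addr against every possible cpu's unit in the
         * first chunk instead of only unit0's range. */
        static bool first_chunk_contains(void *addr)
        {
        	void *base = pcpu_base_addr;	/* first chunk base */
        	unsigned int cpu;

        	for_each_possible_cpu(cpu) {
        		void *start = per_cpu_ptr(base, cpu);

        		if (addr >= start && addr < start + pcpu_unit_size)
        			return true;	/* inside this cpu's unit */
        	}
        	return false;
        }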
  2. 17 Jun 2010, 1 commit
  3. 01 May 2010, 5 commits
    • percpu: implement kernel memory based chunk allocation · b0c9778b
      Tejun Heo authored
       Implement an alternate percpu chunk management scheme based on
       kernel memory for nommu SMP architectures.  Instead of mapping
       into the vmalloc area, chunks are allocated as contiguous kernel
       memory using alloc_pages().  As such, the percpu allocator on
       nommu has the following restrictions (a chunk-creation sketch
       follows this entry).
      
      * It can't fill chunks on-demand page-by-page.  It has to allocate
        each chunk fully upfront.
      
       * It can't support sparse chunks for NUMA configurations.  SMP
         w/o mmu is crazy enough.  Let's hope no one does NUMA w/o mmu.
         :-P
      
       * If the chunk size isn't a power-of-two multiple of PAGE_SIZE,
         the unaligned amount will be wasted on each chunk.  So, archs
         which use this had better keep the chunk size aligned.
      
      For instructions on how to use this, read the comment on top of
      mm/percpu-km.c.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reviewed-by: David Howells <dhowells@redhat.com>
      Cc: Graff Yang <graff.yang@gmail.com>
      Cc: Sonic Zhang <sonic.adi@gmail.com>
      b0c9778b
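       Chunk creation in this scheme plausibly looks like the sketch
       below.  alloc_pages(), page_address() and get_order() are real
       kernel APIs; pcpu_chunk_size and the pcpu_alloc/free_chunk()
       helpers are approximations of the percpu-km.c internals.

        /* Sketch: one contiguous, fully populated allocation per chunk. */
        static struct pcpu_chunk *pcpu_create_chunk(void)
        {
        	struct pcpu_chunk *chunk;
        	struct page *pages;

        	chunk = pcpu_alloc_chunk();		/* struct + area map */
        	if (!chunk)
        		return NULL;

        	pages = alloc_pages(GFP_KERNEL, get_order(pcpu_chunk_size));
        	if (!pages) {
        		pcpu_free_chunk(chunk);
        		return NULL;
        	}

        	chunk->data = pages;			/* management-private */
        	chunk->base_addr = page_address(pages);	/* directly mapped */
        	return chunk;
        }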
    • percpu: move vmalloc based chunk management into percpu-vm.c · 9f645532
      Tejun Heo authored
       Separate out and move the chunk management (creation/destruction
       and [de]population) code into percpu-vm.c, which is included by
       percpu.c and compiled together with it.  The interface for chunk
       management is defined as follows (a prototype sketch follows
       this entry).
      
       * pcpu_populate_chunk		- populate the specified range of a chunk
       * pcpu_depopulate_chunk	- depopulate the specified range of a chunk
       * pcpu_create_chunk		- create a new chunk
       * pcpu_destroy_chunk		- destroy a chunk, always preceded by full depop
        * pcpu_addr_to_page		- translate an address to its backing page
       * pcpu_verify_alloc_info	- check alloc_info is acceptable during init
      
       Other than wrapping vmalloc_to_page() inside pcpu_addr_to_page()
       and a dummy pcpu_verify_alloc_info() implementation, this patch
       only moves code around.  The separation is to allow an alternate
       chunk management implementation.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reviewed-by: David Howells <dhowells@redhat.com>
      Cc: Graff Yang <graff.yang@gmail.com>
      Cc: Sonic Zhang <sonic.adi@gmail.com>
      9f645532
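       A plausible C rendering of that interface, reconstructed from
       the list above rather than copied from percpu-vm.c:

        static int pcpu_populate_chunk(struct pcpu_chunk *chunk,
        			       int off, int size);
        static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk,
        				  int off, int size);
        static struct pcpu_chunk *pcpu_create_chunk(void);
        static void pcpu_destroy_chunk(struct pcpu_chunk *chunk);
        static struct page *pcpu_addr_to_page(void *addr);
        static int __init pcpu_verify_alloc_info(const struct pcpu_alloc_info *ai);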
    • percpu: misc preparations for nommu support · 88999a89
      Tejun Heo authored
      Make the following misc preparations for percpu nommu support.
      
       * Remove references to vmalloc in common comments as nommu percpu
         won't use it.
      
      * Rename chunk->vms to chunk->data and make it void *.  Its use is
        determined by chunk management implementation.
      
      * Relocate utility functions and add __maybe_unused to functions which
        might not be used by different chunk management implementations.
      
       This patch doesn't cause any functional change.  It prepares for
       an alternate chunk management implementation for percpu nommu
       support.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reviewed-by: David Howells <dhowells@redhat.com>
      Cc: Graff Yang <graff.yang@gmail.com>
      Cc: Sonic Zhang <sonic.adi@gmail.com>
      88999a89
    • percpu: reorganize chunk creation and destruction · 6081089f
      Tejun Heo authored
       Reorganize alloc/free_pcpu_chunk() such that chunk struct
       allocation and freeing live in pcpu_alloc/free_chunk() and the
       rest in pcpu_create/destroy_chunk().  While at it, add the
       missing error handling for chunk->map allocation failure.
      
      This is to allow alternate chunk management implementation for percpu
      nommu support.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reviewed-by: David Howells <dhowells@redhat.com>
      Cc: Graff Yang <graff.yang@gmail.com>
      Cc: Sonic Zhang <sonic.adi@gmail.com>
      6081089f
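       The split plausibly looks as follows.  This is a hedged sketch:
       the field and constant names follow the mm/percpu.c of the time,
       but the details are approximate.

        /* Sketch: pcpu_alloc_chunk() owns the struct and its area map,
         * including the previously missing map-allocation error path. */
        static struct pcpu_chunk *pcpu_alloc_chunk(void)
        {
        	struct pcpu_chunk *chunk;

        	chunk = kzalloc(pcpu_chunk_struct_size, GFP_KERNEL);
        	if (!chunk)
        		return NULL;

        	chunk->map = pcpu_mem_alloc(PCPU_DFL_MAP_ALLOC *
        				    sizeof(chunk->map[0]));
        	if (!chunk->map) {		/* previously unhandled */
        		kfree(chunk);
        		return NULL;
        	}

        	chunk->map_alloc = PCPU_DFL_MAP_ALLOC;
        	chunk->map[chunk->map_used++] = pcpu_unit_size;
        	INIT_LIST_HEAD(&chunk->list);
        	chunk->free_size = pcpu_unit_size;
        	chunk->contig_hint = pcpu_unit_size;
        	return chunk;
        }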
    • percpu: factor out pcpu_addr_in_first/reserved_chunk() and update per_cpu_ptr_to_phys() · 020ec653
      Tejun Heo authored
       Factor out pcpu_addr_in_first/reserved_chunk() from
       pcpu_chunk_addr_search() and use them to update
       per_cpu_ptr_to_phys() so that it handles the first chunk
       differently from the rest.
      
      This patch doesn't cause any functional change and is to prepare for
      percpu nommu support.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reviewed-by: David Howells <dhowells@redhat.com>
      Cc: Graff Yang <graff.yang@gmail.com>
      Cc: Sonic Zhang <sonic.adi@gmail.com>
      020ec653
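       A hedged sketch of the factored-out predicates; the reserved
       chunk limit symbol is approximate.  Note that the unit0-only
       range check here is exactly the assumption that entry 1 above
       (9983b6f0) later fixes.

        static bool pcpu_addr_in_first_chunk(void *addr)
        {
        	void *first_start = pcpu_first_chunk->base_addr;

        	return addr >= first_start &&
        	       addr < first_start + pcpu_unit_size;
        }

        static bool pcpu_addr_in_reserved_chunk(void *addr)
        {
        	void *first_start = pcpu_first_chunk->base_addr;

        	return addr >= first_start &&
        	       addr < first_start + pcpu_reserved_chunk_limit;
        }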
  4. 29 Mar 2010, 1 commit
  5. 26 Feb 2010, 1 commit
  6. 17 Feb 2010, 1 commit
  7. 13 Feb 2010, 1 commit
  8. 12 Jan 2010, 1 commit
  9. 08 Dec 2009, 1 commit
  10. 25 Nov 2009, 1 commit
    • percpu: Fix kdump failure if booted with percpu_alloc=page · 3b034b0d
      Vivek Goyal authored
       o kdump functionality reserves a per cpu area at boot time and
         exports the physical address of that area to user space through
         the sysfs interface.  This area stores some dump related
         information, like cpu register states, at the time of crash.
       
       o We were assuming that the per cpu area always comes from the
         linearly mapped memory region and were using __pa() to
         determine the physical address.  With percpu_alloc=page, the
         per cpu area can also come from the vmalloc region, and __pa()
         breaks.
       
       o This patch implements a new function to convert a per cpu
         address to a physical address.
      
      Before the patch, crash_notes addresses looked as follows.
      
      cpu0 60fffff49800
      cpu1 60fffff60800
      cpu2 60fffff77800
      
       These are bogus physical addresses.
      
       After the patch, the addresses are as follows.
      
      cpu0 13eb44000
      cpu1 13eb43000
      cpu2 13eb42000
      cpu3 13eb41000
      
       These look fine.  The machine has 4G of memory, and /proc/iomem
       reports the following.
      
      100000000-13fffffff : System RAM
      
      tj: * added missing asm/io.h include reported by Stephen Rothwell
          * repositioned per_cpu_ptr_phys() in percpu.c and added comment.
       Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
       Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      3b034b0d
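       The heart of the new conversion is plausibly the following
       distinction.  This is a hedged sketch that omits the first and
       reserved chunk special-casing covered by the entries above;
       is_vmalloc_addr(), vmalloc_to_page(), page_to_phys() and
       offset_in_page() are real kernel APIs.

        /* Sketch: linearly mapped addresses can use __pa(); vmalloc
         * backed percpu areas (percpu_alloc=page) must walk the page
         * tables instead. */
        phys_addr_t per_cpu_ptr_to_phys(void *addr)
        {
        	if (is_vmalloc_addr(addr))
        		return page_to_phys(vmalloc_to_page(addr)) +
        		       offset_in_page(addr);
        	return __pa(addr);
        }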
  11. 12 Nov 2009, 1 commit
    • percpu: restructure pcpu_extend_area_map() to fix bugs and improve readability · 833af842
      Tejun Heo authored
      pcpu_extend_area_map() had the following two bugs.
      
       * It should return 1 if pcpu_lock was dropped and reacquired, but
         it returned 0.  This could lead to an oops if free_percpu()
         races with area map extension.
      
      * pcpu_mem_free() was called under pcpu_lock.  pcpu_mem_free() might
        end up calling vfree() which isn't IRQ safe.  This could lead to
        deadlock through lock order inversion via IRQ.
      
       In addition, Linus pointed out that the temporary lock dropping
       and the subtle three-way return value of pcpu_extend_area_map()
       were very ugly and suggested splitting the function in two:
       pcpu_need_to_extend() and pcpu_extend_area_map().
      
      This patch restructures pcpu_extend_area_map() as suggested and fixes
      the two bugs.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      833af842
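       A hedged sketch of the restructured path: the question is asked
       under pcpu_lock, while allocation and freeing happen with the
       lock dropped, since pcpu_mem_free() may vfree() and vfree()
       isn't IRQ safe.  Function names follow the commit; the margin
       value is illustrative.

        /* Sketch: caller holds pcpu_lock while asking the question. */
        static int pcpu_need_to_extend(struct pcpu_chunk *chunk)
        {
        	int margin = 3;			/* illustrative headroom */

        	if (chunk->map_alloc < chunk->map_used + margin)
        		return chunk->map_alloc * 2;	/* new size needed */
        	return 0;
        }

        /* Sketch: called with no locks held; returns 0 or -errno. */
        static int pcpu_extend_area_map(struct pcpu_chunk *chunk,
        				int new_alloc)
        {
        	int *old = NULL, *new;
        	size_t size = new_alloc * sizeof(new[0]);
        	unsigned long flags;

        	new = pcpu_mem_alloc(size);
        	if (!new)
        		return -ENOMEM;

        	spin_lock_irqsave(&pcpu_lock, flags);
        	if (new_alloc > chunk->map_alloc) {	/* not raced */
        		memcpy(new, chunk->map,
        		       chunk->map_used * sizeof(new[0]));
        		old = chunk->map;
        		chunk->map = new;
        		chunk->map_alloc = new_alloc;
        		new = NULL;
        	}
        	spin_unlock_irqrestore(&pcpu_lock, flags);

        	pcpu_mem_free(old, size);	/* never under pcpu_lock */
        	pcpu_mem_free(new, size);
        	return 0;
        }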
  12. 29 Oct 2009, 1 commit
    • percpu: remove some sparse warnings · 0f5e4816
      Tejun Heo authored
      Make the following changes to remove some sparse warnings.
      
      * Make DEFINE_PER_CPU_SECTION() declare __pcpu_unique_* before
        defining it.
      
      * Annotate pcpu_extend_area_map() that it is entered with pcpu_lock
        held, releases it and then reacquires it.
      
      * Make percpu related macros use unique nested variable names.
      
       * While at it, add a pcpu prefix to the __size_call[_return]()
         macros, as to-be-implemented sparse annotations will add
         percpu-specific stuff to them (see the macro sketch after this
         entry).
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      0f5e4816
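       The unique-nested-variable-names point is easiest to see in the
       renamed return macro.  A hedged reconstruction follows; the odd
       pscr_ret__ spelling is what keeps nested macro expansions from
       shadowing each other's temporaries.

        #define __pcpu_size_call_return(stem, variable)		\
        ({								\
        	typeof(variable) pscr_ret__;				\
        	switch (sizeof(variable)) {				\
        	case 1: pscr_ret__ = stem##1(variable); break;		\
        	case 2: pscr_ret__ = stem##2(variable); break;		\
        	case 4: pscr_ret__ = stem##4(variable); break;		\
        	case 8: pscr_ret__ = stem##8(variable); break;		\
        	default:						\
        		__bad_size_call_parameter(); break;		\
        	}							\
        	pscr_ret__;						\
        })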
  13. 28 Oct 2009, 1 commit
    • percpu: allow pcpu_alloc() to be called with IRQs off · 403a91b1
      Jiri Kosina authored
       pcpu_alloc() and pcpu_extend_area_map() perform a series of
       spin_lock_irq()/spin_unlock_irq() calls, which makes them unsafe
       to call from contexts which have IRQs off.
       
       This patch converts the code to save and restore flags instead,
       making it possible for pcpu_alloc() (and thus __alloc_percpu())
       to be called from the early kernel startup stage, where IRQs are
       off.
      
      This is needed for proper initialization of per-cpu rq_weight data from
      sched_init().
      
      tj: added comment explaining why irqsave/restore is used in alloc path.
       Signed-off-by: Jiri Kosina <jkosina@suse.cz>
       Acked-by: Ingo Molnar <mingo@elte.hu>
       Signed-off-by: Tejun Heo <tj@kernel.org>
      403a91b1
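       The conversion boils down to the following pattern.  This is a
       hedged sketch with a made-up function name; the point is that
       irqsave/irqrestore preserves the caller's IRQ state instead of
       unconditionally re-enabling IRQs on unlock.

        static void *pcpu_area_op_example(void)
        {
        	unsigned long flags;
        	void *ptr = NULL;

        	spin_lock_irqsave(&pcpu_lock, flags);	/* was: spin_lock_irq() */
        	/* ... area allocation bookkeeping under pcpu_lock ... */
        	spin_unlock_irqrestore(&pcpu_lock, flags);

        	return ptr;
        }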
  14. 12 Oct 2009, 1 commit
  15. 02 Oct 2009, 1 commit
  16. 29 Sep 2009, 5 commits
    • percpu: make allocation failures more verbose · f2badb0c
      Tejun Heo authored
       Warn and dump the stack when a percpu allocation fails.  The
       percpu allocator is still young, and unchecked NULL percpu
       pointer usage can result in random memory corruption when
       combined with the pointer shifting in the access macros.
       Allocation failures should be rare, and the warning is disabled
       after a certain number of occurrences.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      f2badb0c
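       The failure path plausibly looks like the following hedged
       sketch; the helper is made up (the real code sits inline in the
       allocation fail path) and the cap is illustrative.

        static int warn_limit = 10;		/* illustrative cap */

        static void pcpu_warn_alloc_failed(size_t size, size_t align,
        				   const char *err)
        {
        	if (!warn_limit)
        		return;

        	pr_warning("PERCPU: allocation failed, size=%zu align=%zu, %s\n",
        		   size, align, err);
        	dump_stack();
        	if (!--warn_limit)
        		pr_warning("PERCPU: limit reached, disable warning\n");
        }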
    • percpu: make pcpu_setup_first_chunk() failures more verbose · 635b75fc
      Tejun Heo authored
      The parameters to pcpu_setup_first_chunk() come from different sources
      depending on architecture and can be quite complex.  The function runs
      various sanity checks on the parameters and triggers BUG() if
       something isn't right.  However, this is very early during boot,
       and not reporting exactly what the problem is makes debugging
       even harder.
      
       Add a PCPU_SETUP_BUG() macro which prints out enough information
       about the parameters.  As the macro still emits a separate BUG()
       for each check, no information is lost even in situations where
       only the program counter can be retrieved.
      
      While at it, also bump pcpu_dump_alloc_info() message to KERN_INFO so
      that it's visible on the console if boot fails to complete.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      635b75fc
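       A hedged reconstruction of the macro's shape; the in-tree
       spelling and message text may differ slightly, and ai is the
       alloc_info being validated.

        #define PCPU_SETUP_BUG_ON(cond) do {				\
        	if (unlikely(cond)) {					\
        		pr_emerg("PERCPU: failed to initialize, %s\n",	\
        			 #cond);				\
        		pcpu_dump_alloc_info(KERN_EMERG, ai);		\
        		BUG();						\
        	}							\
        } while (0)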
    • percpu: make embedding first chunk allocator check vmalloc space size · 6ea529a2
      Tejun Heo authored
       The embedding first chunk allocator maintains the distances
       between units in the vmalloc area and thus needs the vmalloc
       space to be larger than the maximum distance between units;
       otherwise, it wouldn't be able to create any dynamic chunks.
       This patch makes the embedding first chunk allocator check the
       vmalloc space size and, if the maximum distance between units is
       larger than 75% of it, print a warning and, if the page mapping
       allocator is available, fail initialization so that the system
       falls back onto it.
       
       This should work around percpu allocation failure problems on
       certain sparc64 configurations where distances between NUMA
       nodes are larger than the vmalloc area, and it makes the percpu
       allocator more robust for future configurations.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      6ea529a2
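       The check itself is plausibly a few lines in
       pcpu_embed_first_chunk(), sketched here with approximate message
       text; VMALLOC_START/VMALLOC_END and the config symbol are real.

        /* Sketch: refuse setup if units span most of vmalloc space. */
        if (max_distance > (VMALLOC_END - VMALLOC_START) * 3 / 4) {
        	pr_warning("PERCPU: max_distance=0x%zx too large for "
        		   "vmalloc space 0x%lx\n", max_distance,
        		   (unsigned long)(VMALLOC_END - VMALLOC_START));
        #ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
        	rc = -EINVAL;		/* fall back to page allocator */
        	goto out_free;
        #endif
        }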
    • percpu: make pcpu_build_alloc_info() clear static buffers · fb59e72e
      Tejun Heo authored
       pcpu_build_alloc_info() may be called multiple times while percpu
       is falling back to a different first chunk allocator.  Make it
       clear its static buffers so that they don't contain values from
       previous runs.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      fb59e72e
    • percpu: fix unit_map[] verification in pcpu_setup_first_chunk() · ffe0d5a5
      Tejun Heo authored
       pcpu_setup_first_chunk() incorrectly used NR_CPUS as the
       impossible unit number, while unit numbers can equal and exceed
       NR_CPUS with a sparse unit map.  This triggered BUG_ON()
       spuriously on machines with a non-power-of-two number of cpus.
       Use UINT_MAX instead.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reported-and-tested-by: Tony Vroon <tony@linx.net>
      ffe0d5a5
  17. 01 Sep 2009, 1 commit
    • percpu: don't assume existence of cpu0 · 04a13c7c
      Tejun Heo authored
       percpu incorrectly assumed that cpu0 was always present, which
       led to the following warning and eventual oops on sparc machines
       w/o cpu0.
      
        WARNING: at mm/percpu.c:651 pcpu_map+0xdc/0x100()
        Modules linked in:
        Call Trace:
          [000000000045eb70] warn_slowpath_common+0x50/0xa0
          [000000000045ebdc] warn_slowpath_null+0x1c/0x40
          [00000000004d493c] pcpu_map+0xdc/0x100
          [00000000004d59a4] pcpu_alloc+0x3e4/0x4e0
          [00000000004d5af8] __alloc_percpu+0x18/0x40
          [00000000005b112c] __percpu_counter_init+0x4c/0xc0
        ...
        Unable to handle kernel NULL pointer dereference
        ...
         I7: <sysfs_new_dirent+0x30/0x120>
         Disabling lock debugging due to kernel taint
         Caller[000000000053c1b0]: sysfs_new_dirent+0x30/0x120
         Caller[000000000053c7a4]: create_dir+0x24/0xc0
         Caller[000000000053c870]: sysfs_create_dir+0x30/0x80
         Caller[00000000005990e8]: kobject_add_internal+0xc8/0x200
        ...
         Kernel panic - not syncing: Attempted to kill the idle task!
      
       This patch fixes the problem by backporting parts from the devel
       branch to make the percpu core not depend on the existence of
       cpu0.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reported-by: Meelis Roos <mroos@linux.ee>
      Cc: David Miller <davem@davemloft.net>
      04a13c7c
  18. 14 Aug 2009, 15 commits
    • percpu: kill lpage first chunk allocator · e933a73f
      Tejun Heo authored
       With x86 converted to the embedding allocator, lpage doesn't have
       any users left.  Kill it along with the cpa handling code.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jan Beulich <JBeulich@novell.com>
      e933a73f
    • percpu: update embedding first chunk allocator to handle sparse units · c8826dd5
      Tejun Heo authored
       Now that the percpu core can handle very sparse units, provided
       the vmalloc space is large enough, the embedding first chunk
       allocator can use any memory to build the first chunk.  This
       patch teaches pcpu_embed_first_chunk() about distances between
       cpus and makes it use the alloc/free callbacks to allocate
       node-specific areas for each group and use them for the first
       chunk.
       
       This brings the benefits of the embedding allocator to NUMA
       configurations - no extra TLB pressure with the flexibility of
       the unified dynamic allocator and no need to restructure arch
       code to build a memory layout suitable for percpu.  With units
       put into atom_size aligned groups according to cpu distances,
       using large pages for dynamic chunks also becomes easily
       possible, with a fallback to regular pages if a large allocation
       fails.
      
       Embedding allocator users are converted to specify a NULL
       cpu_distance_fn for now, so this patch doesn't cause any visible
       behavior difference.  Following patches will convert them.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      c8826dd5
    • percpu: use group information to allocate vmap areas sparsely · 6563297c
      Tejun Heo authored
       ai->groups[] describes which units need to be placed
       consecutively and at what offset from the chunk base address.
       Compile this information into pcpu_group_offsets[] and
       pcpu_group_sizes[] in pcpu_setup_first_chunk() and use them to
       allocate sparse vm areas using pcpu_get_vm_areas().
      
      This will be used to allow directly using sparse NUMA memories as
      percpu areas.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Nick Piggin <npiggin@suse.de>
      6563297c
    • percpu: add chunk->base_addr · bba174f5
      Tejun Heo authored
       The only thing the percpu allocator wants to know about a vmalloc
       area is its base address.  Instead of requiring chunk->vm, add
       chunk->base_addr which contains the necessary value.  This
       simplifies the code a bit and makes the dummy first_vm
       unnecessary.  This change will ease allowing a chunk to be
       mapped by multiple vms.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      bba174f5
    • percpu: add pcpu_unit_offsets[] · fb435d52
      Tejun Heo authored
       Currently units are mapped sequentially into the address space.
       This patch adds pcpu_unit_offsets[], which allows units to be
       mapped at arbitrary offsets from the chunk base address.  This
       is necessary to allow sparse embedding, which may need to
       allocate address ranges and memory areas which aren't aligned to
       the unit size but to the allocation atom size (page or large
       page size).  This also simplifies things a bit by removing the
       need to calculate an offset from a unit number.
      
       With this change, there's no need for the arch code to know
       pcpu_unit_size.  Update pcpu_setup_first_chunk() and the first
       chunk allocators to return a regular 0 or -errno return code
       instead of unit size or -errno.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: David S. Miller <davem@davemloft.net>
      fb435d52
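       With the offsets in place, percpu address arithmetic plausibly
       becomes the following.  This is a hedged sketch; whether the
       array is indexed directly by cpu or through a cpu->unit map is
       approximated here.

        /* Sketch: base + per-unit offset + offset within the unit,
         * instead of base + unit * pcpu_unit_size. */
        static unsigned long pcpu_chunk_addr(struct pcpu_chunk *chunk,
        				     unsigned int cpu, int page_idx)
        {
        	return (unsigned long)chunk->base_addr +
        	       pcpu_unit_offsets[cpu] +
        	       ((unsigned long)page_idx << PAGE_SHIFT);
        }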
    • percpu: introduce pcpu_alloc_info and pcpu_group_info · fd1e8a1f
      Tejun Heo authored
       Till now, the non-linear cpu->unit map was expressed using an
       integer array which maps each cpu to a unit and was used only by
       the lpage allocator.  Although how many units have been placed
       in a single contiguous area (group) is known while building the
       unit_map, the information is lost when the result is recorded
       into the unit_map array.  For the lpage allocator, as all
       allocations are done in lpages and whether two adjacent lpages
       are in the same group or not is irrelevant, this didn't cause
       any problem.  Non-linear cpu->unit mapping will be used for
       sparse embedding, and this grouping information is necessary for
       that.
      
      This patch introduces pcpu_alloc_info which contains all the
      information necessary for initializing percpu allocator.
      pcpu_alloc_info contains array of pcpu_group_info which describes how
      units are grouped and mapped to cpus.  pcpu_group_info also has
      base_offset field to specify its offset from the chunk's base address.
      pcpu_build_alloc_info() initializes this field as if all groups are
      allocated back-to-back as is currently done but this will be used to
      sparsely place groups.
      
      pcpu_alloc_info is a rather complex data structure which contains a
      flexible array which in turn points to nested cpu_map arrays.
      
       * pcpu_alloc_alloc_info() and pcpu_free_alloc_info() are provided
         to help deal with pcpu_alloc_info.
      
      * pcpu_lpage_build_unit_map() is updated to build pcpu_alloc_info,
        generalized and renamed to pcpu_build_alloc_info().
        @cpu_distance_fn may be NULL indicating that all cpus are of
        LOCAL_DISTANCE.
      
      * pcpul_lpage_dump_cfg() is updated to process pcpu_alloc_info,
        generalized and renamed to pcpu_dump_alloc_info().  It now also
        prints which group each alloc unit belongs to.
      
      * pcpu_setup_first_chunk() now takes pcpu_alloc_info instead of the
        separate parameters.  All first chunk allocators are updated to use
        pcpu_build_alloc_info() to build alloc_info and call
        pcpu_setup_first_chunk() with it.  This has the side effect of
         packing units for sparse possible cpus, i.e., if cpus 0, 2 and
         4 are possible, they'll be assigned units 0, 1 and 2 instead
         of 0, 2 and 4.
      
      * x86 setup_pcpu_lpage() is updated to deal with alloc_info.
      
      * sparc64 setup_per_cpu_areas() is updated to build alloc_info.
      
      Although the changes made by this patch are pretty pervasive, it
      doesn't cause any behavior difference other than packing of sparse
      cpus.  It mostly changes how information is passed among
      initialization functions and makes room for more flexibility.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Miller <davem@davemloft.net>
      fd1e8a1f
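       A hedged reconstruction of the two structures, with the field
       set approximated from the description above:

        struct pcpu_group_info {
        	int			nr_units;	/* units in this group */
        	unsigned long		base_offset;	/* offset from chunk base */
        	unsigned int		*cpu_map;	/* unit -> cpu map */
        };

        struct pcpu_alloc_info {
        	size_t			static_size;
        	size_t			reserved_size;
        	size_t			dyn_size;
        	size_t			unit_size;
        	size_t			atom_size;
        	size_t			alloc_size;
        	size_t			__ai_size;	/* size of this struct */
        	int			nr_groups;
        	struct pcpu_group_info	groups[];	/* flexible array */
        };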
    • percpu: move pcpu_lpage_build_unit_map() and pcpul_lpage_dump_cfg() upward · 033e48fb
      Tejun Heo authored
       Unit map handling will be generalized, extended and used for
       embedding a sparse first chunk and other purposes.  Relocate two
       unit_map related functions upward in preparation.  This patch
       just moves the code without any actual change.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      033e48fb
    • percpu: add @align to pcpu_fc_alloc_fn_t · 3cbc8565
      Tejun Heo authored
       pcpu_fc_alloc_fn_t is about to see more interesting usage; add an
       @align parameter.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      3cbc8565
    • percpu: make @dyn_size mandatory for pcpu_setup_first_chunk() · 1d9d3257
      Tejun Heo authored
       Now that all actual first chunk allocation and copying happen in
       the first chunk allocators and helpers, there's no reason for
       pcpu_setup_first_chunk() to try to determine @dyn_size
       automatically.  The only user left is the page first chunk
       allocator.  Make it determine dyn_size like the other
       allocators, and make @dyn_size mandatory for
       pcpu_setup_first_chunk().
       Signed-off-by: Tejun Heo <tj@kernel.org>
      1d9d3257
    • percpu: drop @static_size from first chunk allocators · 9a773769
      Tejun Heo authored
       First chunk allocators assume percpu areas have been linked using
       one of the PERCPU_*() macros and depend on the __per_cpu_load
       symbol defined by those macros, so there isn't much point in
       passing in the static area size explicitly when it can easily be
       calculated from __per_cpu_start and __per_cpu_end.  Drop
       @static_size from all percpu first chunk allocators and helpers.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      9a773769
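       The calculation is trivial once the linker symbols are in scope.
       A hedged sketch; the helper name is made up, and allocators
       simply compute the value inline.

        extern char __per_cpu_start[], __per_cpu_end[];

        /* Sketch: static area size recovered from the PERCPU_*()
         * linker symbols instead of being passed in by arch code. */
        static inline size_t pcpu_static_size(void)
        {
        	return __per_cpu_end - __per_cpu_start;
        }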
    • percpu: generalize first chunk allocator selection · f58dc01b
      Tejun Heo authored
       Now that all first chunk allocators are in mm/percpu.c, it makes
       sense to generalize the percpu_alloc kernel parameter.  Define
       PCPU_FC_* and set pcpu_chosen_fc using early_param() in
       mm/percpu.c.  Arch code can use the set value to determine which
       first chunk allocator to use.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      f58dc01b
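       The generalized selector is plausibly the following; a hedged
       sketch that omits the lpage case and the exact warning text,
       though early_param() and the PCPU_FC_* pattern follow the
       commit.

        enum pcpu_fc {
        	PCPU_FC_AUTO,
        	PCPU_FC_EMBED,
        	PCPU_FC_PAGE,
        	PCPU_FC_NR,
        };

        enum pcpu_fc pcpu_chosen_fc __initdata = PCPU_FC_AUTO;

        static int __init percpu_alloc_setup(char *str)
        {
        	if (!strcmp(str, "embed"))
        		pcpu_chosen_fc = PCPU_FC_EMBED;
        	else if (!strcmp(str, "page"))
        		pcpu_chosen_fc = PCPU_FC_PAGE;
        	else
        		pr_warning("PERCPU: unknown allocator %s specified\n",
        			   str);
        	return 0;
        }
        early_param("percpu_alloc", percpu_alloc_setup);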
    • percpu: build first chunk allocators selectively · 08fc4580
      Tejun Heo authored
      There's no need to build unused first chunk allocators in.  Define
      CONFIG_NEED_PER_CPU_*_FIRST_CHUNK and let archs enable them
      selectively.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      08fc4580
    • percpu: rename 4k first chunk allocator to page · 00ae4064
      Tejun Heo authored
       Page size isn't always 4k; it depends on arch and configuration.
       Rename the 4k first chunk allocator to page.
       Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: David Howells <dhowells@redhat.com>
      00ae4064
    • percpu: improve boot messages · 004018e2
      Tejun Heo authored
      Improve percpu boot messages such that they're uniform and contain
      more information.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      004018e2
    • percpu: fix pcpu_reclaim() locking · 971f3918
      Tejun Heo authored
       pcpu_reclaim() calls pcpu_depopulate_chunk(), which makes use of
       the pages array and bitmap returned by
       pcpu_get_pages_and_bitmap() and thus should be called under
       pcpu_alloc_mutex.  pcpu_reclaim() released the mutex before
       calling depopulate, leading to double frees and other strange
       problems caused by unexpected concurrent use of the pages array
       and bitmap.  Fix it.
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      971f3918
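       A hedged sketch of the corrected flow: the mutex now covers the
       depopulate/free loop instead of being dropped before it.  The
       chunk-collection step is elided.

        static void pcpu_reclaim(struct work_struct *work)
        {
        	LIST_HEAD(todo);
        	struct pcpu_chunk *chunk, *next;

        	mutex_lock(&pcpu_alloc_mutex);
        	spin_lock_irq(&pcpu_lock);
        	/* ... move fully free, non-first chunks onto @todo ... */
        	spin_unlock_irq(&pcpu_lock);

        	list_for_each_entry_safe(chunk, next, &todo, list) {
        		pcpu_depopulate_chunk(chunk, 0, pcpu_unit_size);
        		free_pcpu_chunk(chunk);
        	}

        	mutex_unlock(&pcpu_alloc_mutex);	/* was dropped earlier */
        }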