1. 06 Nov 2015, 1 commit
    • arch/powerpc/mm/numa.c: do not allocate bootmem memory for non existing nodes · c118baf8
      Authored by Raghavendra K T
      With setup_nr_nodes(), we have already initialized
      node_possible_map, so it is safe to use for_each_node() here.

      Many places in the kernel use a hardcoded 'for' loop over
      nr_node_ids, because all other architectures have NUMA nodes
      populated serially.  That is presumably why powerpc kept the
      same pattern.

      But since sparse NUMA node IDs are possible on powerpc, we
      unnecessarily allocate memory for non-existent NUMA nodes.

      For example, on a system with NUMA nodes 0, 1, 16 and 17,
      nr_node_ids = 18 and we also allocate memory for the non-existent
      nodes 2-15.  With this patch we allocate memory only for existing
      NUMA nodes (see the sketch after this entry).

      The patch was boot-tested on a 4-node Tuleta, confirming with
      printks that it works as expected.
      Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Cc: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Cc: Grant Likely <grant.likely@linaro.org>
      Cc: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c118baf8
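      A minimal C sketch of the loop change described above, assuming a
      hypothetical per-node allocation (illustrative, not the literal diff):

          #include <linux/nodemask.h>
          #include <linux/slab.h>

          static void *node_data_ptr[MAX_NUMNODES];

          static void __init alloc_per_node_data(void)
          {
              int nid;

              /* Old pattern -- also visits the holes (nodes 2-15 here):
               *   for (nid = 0; nid < nr_node_ids; nid++) ...
               */

              /* New pattern: for_each_node() walks node_possible_map,
               * which setup_nr_nodes() has already initialized, so only
               * nodes 0, 1, 16 and 17 are visited. */
              for_each_node(nid)
                  node_data_ptr[nid] = kzalloc(1024, GFP_KERNEL);
          }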
  2. 28 Oct 2015, 2 commits
    • powerpc/e6500: hw tablewalk: make sure we invalidate and write to the same tlb entry · e1f580e8
      Authored by Kevin Hao
      In order to work around Erratum A-008139, we have to invalidate the
      TLB entry with tlbilx before overwriting it.  For performance
      reasons, we do not add any memory barrier when acquiring/releasing
      the tcd lock.  This means the two load instructions for esel_next
      may return different values, which is not acceptable in the face
      of Erratum A-008139.  We have two options to fix this issue:
        a) Add a memory barrier when acquiring/releasing the tcd lock to
           order the load/store to esel_next.
        b) Just make sure we invalidate and write to the same TLB entry,
           and tolerate the race that we may read a stale value and
           overwrite the TLB entry just updated by the other thread.

      We observed better performance using option b, so reserve an
      additional register to hold the value of esel_next (a sketch
      follows this entry).
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      e1f580e8
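      A hedged C rendering of option (b) (helper names hypothetical; the
      real fix reserves a register in the assembly tablewalk path):

          /* Read esel_next exactly once and keep the value in a local
           * (a register in the real code).  Both the invalidate and the
           * overwrite then target the same TLB entry, so a stale value
           * can at worst clobber an entry the other thread just wrote --
           * the tolerated race described above. */
          int esel = tcd->esel_next;

          tlb_invalidate_entry(esel);   /* tlbilx on the real hardware */
          tlb_write_entry(esel);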
    • powerpc/fsl-booke-64: Don't limit ppc64_rma_size to one TLB entry · eba5de8d
      Authored by Scott Wood
      This is required for kdump to work when loaded at an address that
      does not fall within the first TLB entry -- which can easily happen
      because, while the lower limit is enforced via reserved memory
      (which doesn't affect how much is mapped), the upper limit is
      enforced via a different mechanism that does.  Thus, more TLB
      entries are needed than would normally be used, as the total
      memory to be mapped might not be a power of two (see the sketch
      after this entry).
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      eba5de8d
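      A hedged sketch of why a non-power-of-two size needs several TLB1
      (CAM) entries -- each entry covers a power-of-two range, so the
      range is covered greedily (function name hypothetical):

          #include <linux/bitops.h>

          /* Count the power-of-two CAM entries needed to map 'ram' bytes,
           * e.g. 3 GB -> 2 GB + 1 GB = 2 entries. */
          static int count_cam_entries(unsigned long ram)
          {
              int n = 0;

              while (ram) {
                  unsigned long cam = 1UL << __fls(ram); /* largest pow2 <= ram */

                  ram -= cam;
                  n++;
              }
              return n;
          }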
  3. 23 Oct 2015, 1 commit
    • powerpc/85xx: Load all early TLB entries at once · d9e1831a
      Authored by Scott Wood
      Use an AS=1 trampoline TLB entry to allow all normal TLB1 entries
      to be loaded at once.  This avoids the need to keep the translation
      that code is executing from in the same TLB entry in the final TLB
      configuration as during early boot, which in turn is helpful for
      relocatable kernels (e.g. kdump) where the kernel is not running
      from what would be the first TLB entry.  (The boot-time sequence
      is outlined after this entry.)

      On e6500, we limit map_mem_in_cams() to the primary hwthread of a
      core (the boot cpu is always considered primary, as a kdump kernel
      can be entered on any cpu).  Each TLB only needs to be set up once,
      and when we do, we don't want another thread to be running while we
      create a temporary trampoline TLB1 entry.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      d9e1831a
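      A hedged outline of the trampoline idea in C-style pseudocode (all
      helper names hypothetical -- the real code is assembly):

          /* Run from a temporary AS=1 (TS=1) mapping so that every AS=0
           * TLB1 entry -- including the one we are currently executing
           * from -- can be torn down and reloaded in one pass. */
          static void __init reload_tlb1_entries(void)
          {
              setup_as1_trampoline();   /* temporary TLB1 entry with TS=1 */
              switch_to_as1();          /* rfi into address space 1 */
              load_all_tlb1_entries();  /* safe: not running from AS=0 */
              return_to_as0();          /* jump back, drop the trampoline */
          }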
  4. 15 Oct 2015, 1 commit
  5. 12 Oct 2015, 2 commits
  6. 09 Oct 2015, 1 commit
    • powerpc: Fix checkstop in native_hpte_clear() with lockdep · fdf880a6
      Authored by Cyril Bur
      native_hpte_clear() is called in real mode from two places:
      - Early in boot during htab initialisation, if firmware-assisted
        dump is active.
      - Late in the kexec path.

      In both contexts there is no need to disable interrupts, as they
      are already disabled.  Furthermore, locking around the tlbie() is
      only required for pre-POWER5 hardware.

      On POWER5 or newer hardware concurrent tlbie()s work as expected,
      while on pre-POWER5 hardware concurrent tlbie()s could result in
      deadlock.  This code is only executed at crashdump time, during
      which all bets are off: concurrent tlbie()s are unlikely and taking
      locks is unsafe, so the best course of action is to simply do
      nothing.  Concurrent tlbie()s are not possible in the first case,
      as secondary CPUs have not come up yet.  (A simplified sketch
      follows this entry.)
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      fdf880a6
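      A simplified, hedged sketch of the reasoning (helper names and the
      loop shape are illustrative, not the actual implementation):

          /* Real mode; interrupts are already off in both callers, so no
           * local_irq_save() and no tlbie lock: concurrency is impossible
           * in early boot (no secondaries yet) and merely tolerated on
           * the kexec/crash path, where taking a lock would be unsafe
           * anyway. */
          static void native_hpte_clear(void)
          {
              unsigned long slot, slots = total_hpte_slots(); /* hypothetical */

              for (slot = 0; slot < slots; slot++)
                  clear_hpte_slot(slot);  /* hypothetical per-slot clear */
          }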
  7. 01 Oct 2015, 2 commits
  8. 16 Sep 2015, 1 commit
  9. 28 Aug 2015, 1 commit
    • mm: ZONE_DEVICE for "device memory" · 033fbae9
      Authored by Dan Williams
      While pmem is usable as a block device or via DAX mappings to
      userspace, there are several usage scenarios that cannot target
      pmem due to its lack of struct page coverage.  In preparation for
      "hot plugging" pmem into the vmemmap, add ZONE_DEVICE as a new zone
      to tag these pages separately from the ones that are subject to
      standard page allocations.  Importantly, "device memory" can be
      removed at will by userspace unbinding the driver of the device.

      Having a separate zone prevents allocation and otherwise marks
      these pages as distinct from typical uniform memory.  Device memory
      has different lifetime and performance characteristics than RAM.
      However, since we have run out of ZONES_SHIFT bits, this
      functionality currently depends on sacrificing ZONE_DMA (see the
      sketch after this entry).
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Jerome Glisse <j.glisse@gmail.com>
      [hch: various simplifications in the arch interface]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      033fbae9
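      A hedged, simplified sketch of the resulting zone enum (the
      ordering and the neighbouring zones shown are illustrative):

          enum zone_type {
          #ifdef CONFIG_ZONE_DMA
              ZONE_DMA,       /* sacrificed when ZONE_DEVICE is enabled */
          #endif
              ZONE_NORMAL,
              /* ... other zones elided ... */
          #ifdef CONFIG_ZONE_DEVICE
              ZONE_DEVICE,    /* pmem pages: removable, non-RAM lifetime */
          #endif
              __MAX_NR_ZONES
          };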
  10. 18 Aug 2015, 4 commits
    • powerpc/numa: initialize distance lookup table from drconf path · 1d805440
      Authored by Nikunj A Dadhania
      In some situations, a NUMA guest that supports the
      ibm,dynamic-memory-reconfiguration node will end up having flat
      NUMA distances between nodes.  This is because of two problems in
      the current code.

      1) Different representations of associativity lists.

         initialize_distance_lookup_table() makes an assumption about
         the associativity list, which has two forms:

         a) [cpu,memory]@x/ibm,associativity has the following
            format:
                 <N> <N integers>

         b) ibm,dynamic-reconfiguration-memory/ibm,associativity-lookup-arrays

                 <M> <N> <M associativity lists each having N integers>
                 M = the number of associativity lists
                 N = the number of entries per associativity list

         Fix initialize_distance_lookup_table() so that it does not
         assume "case a", and update the caller to skip the length field
         before passing the associativity list.

      2) Distance table not getting updated from the drconf path.

         The node distance table will not get initialized in certain
         cases, as the ibm,dynamic-reconfiguration-memory path does not
         initialize the lookup table.

         Call initialize_distance_lookup_table() from the drconf path
         with the appropriate associativity list (a parsing sketch
         follows this entry).
      Reported-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
      Acked-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      1d805440
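      A hedged sketch of walking layout (b) above (the function shape and
      the loop over all M lists are illustrative; only the <M> <N>
      framing comes from the text):

          static void init_drconf_distances(int nid, const __be32 *prop)
          {
              int i, M, N;

              M = be32_to_cpu(prop[0]);   /* number of associativity lists */
              N = be32_to_cpu(prop[1]);   /* entries per list */

              /* Skip the two header cells, then hand each N-entry list
               * to the distance-table initialization. */
              for (i = 0; i < M; i++)
                  initialize_distance_lookup_table(nid, &prop[2 + i * N]);
          }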
    • powerpc/mm: Drop CONFIG_PPC_HAS_HASH_64K · 73b341ef
      Authored by Michael Ellerman
      The relation between CONFIG_PPC_HAS_HASH_64K and CONFIG_PPC_64K_PAGES is
      painfully complicated.
      
      But if we rearrange it enough we can see that PPC_HAS_HASH_64K
      essentially depends on PPC_STD_MMU_64 && PPC_64K_PAGES.
      
      We can then notice that PPC_HAS_HASH_64K is used in files that are only
      built for PPC_STD_MMU_64, meaning it's equivalent to PPC_64K_PAGES.
      
      So replace all uses and drop it.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      73b341ef
    • powerpc/cell: Drop support for 64K local store on 4K kernels · f444f1f8
      Authored by Michael Ellerman
      Back in the olden days we added support for using 64K pages to map the
      SPU (Synergistic Processing Unit) local store on Cell, when the main
      kernel was using 4K pages.
      
      This was useful at the time because distros were using 4K pages, but
      using 64K pages on the SPUs could reduce TLB pressure there.
      
      However these days the number of Cell users is approaching zero, and
      supporting this option adds unpleasant complexity to the memory
      management code.
      
      So drop the option, CONFIG_SPU_FS_64K_LS, and all related code.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Acked-by: Jeremy Kerr <jk@ozlabs.org>
      f444f1f8
    • powerpc/e6500: hw tablewalk: optimize a bit for tcd lock acquiring codes · 69399ee9
      Authored by Kevin Hao
      It makes no sense to put the instructions for calculating the lock
      value (cpu number + 1) and for clearing the eq bit of cr1 inside
      the lbarx/stbcx. loop.  Also, when the lock is held by the other
      thread, the current lock value can never equal the lock value used
      by the current cpu, so we can skip comparing these two lock values
      in the lbz/bne loop (see the C analogue after this entry).
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      69399ee9
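      A hedged C analogue of the optimization (the real code is PowerPC
      assembly; atomics are shown with GCC builtins for clarity):

          #include <stdint.h>

          static inline void cpu_relax(void) { } /* stand-in for the real hint */

          static void tcd_lock(volatile uint8_t *lock, int cpu)
          {
              uint8_t token = cpu + 1;  /* computed once, outside the loop */

              for (;;) {
                  /* Spin (lbz/bne) waiting for 0.  The other thread's
                   * token can never equal ours, so no comparison with
                   * our own token is needed here. */
                  while (*lock)
                      cpu_relax();

                  /* Atomic claim (lbarx/stbcx. in the real code). */
                  if (__sync_bool_compare_and_swap(lock, 0, token))
                      break;
              }
          }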
  11. 12 Aug 2015, 3 commits
  12. 08 Aug 2015, 3 commits
  13. 06 Jul 2015, 1 commit
  14. 25 Jun 2015, 4 commits
  15. 17 Jun 2015, 1 commit
    • powerpc: don't use module_init for non-modular core hugetlb code · 6f114281
      Authored by Paul Gortmaker
      hugetlbpage.o is obj-y (always built in).  It will never be
      modular, so using module_init as an alias for __initcall is
      somewhat misleading.
      
      Fix this up now, so that we can relocate module_init from
      init.h into module.h in the future.  If we don't do this, we'd
      have to add module.h to obviously non-modular code, and that
      would be a worse thing.
      
      Note that direct use of __initcall is discouraged in favour of one
      of the priority-categorized subgroups.  As __initcall gets mapped
      onto device_initcall, our use of arch_initcall (which makes sense
      for arch code) will change this registration from level 6-device
      to level 3-arch (i.e. slightly earlier).  However, no observable
      impact of that small difference was seen during testing, nor is
      any expected.  (The mechanical change is sketched after this
      entry.)
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      6f114281
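      The mechanical pattern, sketched as a hedged diff (the init
      function name in hugetlbpage.c is assumed):

          -module_init(hugetlbpage_init);
          +arch_initcall(hugetlbpage_init);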
  16. 11 Jun 2015, 1 commit
    • powerpc/mmu: Add userspace-to-physical addresses translation cache · 15b244a8
      Authored by Alexey Kardashevskiy
      We are adding support for DMA memory pre-registration to be used in
      conjunction with VFIO.  The idea is that the userspace which is
      going to run a guest may want to pre-register a userspace memory
      region so it all gets pinned once and never goes away.  Having done
      this, a hypervisor will not have to pin/unpin pages on every DMA
      map/unmap request.  This is going to help with multiple pinning of
      the same memory.

      Another use of it is in-kernel real mode (MMU off) acceleration of
      DMA requests, where real-time translation of guest physical to
      host physical addresses is non-trivial and may fail, as Linux ptes
      may be temporarily invalid.  Also, by having cached host physical
      addresses (compared to just pinning at the start and then walking
      the page table again on every H_PUT_TCE), we can be sure that the
      addresses which we put into the TCE table are the ones we already
      pinned.

      This adds a list of memory regions to mm_context_t.  Each region
      consists of a header and a list of physical addresses.  This adds
      an API to:
      1. register/unregister memory regions;
      2. do final cleanup (which puts all pre-registered pages);
      3. do userspace-to-physical address translation;
      4. manage usage counters; multiple registration of the same memory
      is allowed (once per container).

      This implements 2 counters per registered memory region (see the
      struct sketch after this entry):
      - @mapped: incremented on every DMA mapping; decremented on
      unmapping; initialized to 1 when a region is just registered; once
      it becomes zero, no more mappings are allowed;
      - @used: incremented on every "register" ioctl; decremented on
      "unregister"; unregistration is allowed for DMA-mapped regions
      unless it is the very last reference.  For the very last reference
      this checks that the region is still mapped and returns -EBUSY so
      the userspace gets to know that memory is still pinned and
      unregistration needs to be retried; @used remains 1.

      Host physical addresses are stored in a vmalloc'ed array.  In
      order to access these in real mode (MMU off), there is a
      real_vmalloc_addr() helper.  The in-kernel acceleration patchset
      will move it from KVM to MMU code.
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      15b244a8
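      A hedged sketch of a registered region with its two counters (the
      struct name and exact field set are illustrative; only @mapped,
      @used and the vmalloc'ed address array are described above):

          struct mm_iommu_region {          /* illustrative name */
              struct list_head next;        /* linked off mm_context_t */
              atomic64_t mapped;    /* 1 on register; DMA map/unmap inc/dec */
              unsigned long used;   /* one per "register" ioctl, per container */
              unsigned long ua;     /* userspace address of the region */
              unsigned long entries;        /* number of pinned pages */
              u64 *hpas;    /* vmalloc'ed host physical addresses */
          };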
  17. 10 Jun 2015, 1 commit
    • powerpc/mm: Add trace point for tracking hash pte fault · cfcb3d80
      Authored by Aneesh Kumar K.V
      This enables us to understand how many hash faults we are taking
      when running benchmarks (a tracepoint sketch of this shape follows
      this entry).

      For example:
      -bash-4.2# ./perf stat -e  powerpc:hash_fault -e page-faults /tmp/ebizzy.ppc64 -S 30  -P -n 1000
      ...
      
       Performance counter stats for '/tmp/ebizzy.ppc64 -S 30 -P -n 1000':
      
             1,10,04,075      powerpc:hash_fault
             1,10,03,429      page-faults
      
            30.865978991 seconds time elapsed
      
      NOTE:
      The impact of the tracepoint was not noticeable when running tests;
      it was within the run-time variance of the test.  For example:
      
      without-patch:
      --------------
      
       Performance counter stats for './a.out 3000 300':
      
      	       643      page-faults               #    0.089 M/sec
      	  7.236562      task-clock (msec)         #    0.928 CPUs utilized
      	 2,179,213      stalled-cycles-frontend   #    0.00% frontend cycles idle
      	17,174,367      stalled-cycles-backend    #    0.00% backend  cycles idle
      		 0      context-switches          #    0.000 K/sec
      
             0.007794658 seconds time elapsed
      
      And with-patch:
      ---------------
      
       Performance counter stats for './a.out 3000 300':
      
      	       643      page-faults               #    0.089 M/sec
      	  7.233746      task-clock (msec)         #    0.921 CPUs utilized
      		 0      context-switches          #    0.000 K/sec
      
             0.007854876 seconds time elapsed
      
       Performance counter stats for './a.out 3000 300':
      
      	       643      page-faults               #    0.087 M/sec
      	       649      powerpc:hash_fault        #    0.087 M/sec
      	  7.430376      task-clock (msec)         #    0.938 CPUs utilized
      	 2,347,174      stalled-cycles-frontend   #    0.00% frontend cycles idle
      	17,524,282      stalled-cycles-backend    #    0.00% backend  cycles idle
      		 0      context-switches          #    0.000 K/sec
      
             0.007920284 seconds time elapsed
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      cfcb3d80
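      A hedged sketch of a tracepoint of this shape (field names and the
      exact TP_printk format are assumptions, not the actual definition):

          TRACE_EVENT(hash_fault,

              TP_PROTO(unsigned long addr, unsigned long access,
                       unsigned long trap),
              TP_ARGS(addr, access, trap),

              TP_STRUCT__entry(
                  __field(unsigned long, addr)
                  __field(unsigned long, access)
                  __field(unsigned long, trap)
              ),

              TP_fast_assign(
                  __entry->addr = addr;
                  __entry->access = access;
                  __entry->trap = trap;
              ),

              /* shows up to perf as powerpc:hash_fault */
              TP_printk("hash fault with addr 0x%lx access 0x%lx trap 0x%lx",
                        __entry->addr, __entry->access, __entry->trap)
          );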
  18. 03 Jun 2015, 4 commits
  19. 02 Jun 2015, 1 commit
  20. 27 May 2015, 1 commit
  21. 20 May 2015, 1 commit
    • module: add extra argument for parse_params() callback · ecc86170
      Authored by Luis R. Rodriguez
      This adds an extra argument onto parse_params() to make the
      otherwise unused callback a bit more useful and generic, by
      allowing the caller to pass on a data structure of its choice.  An
      example use case is to allow us to easily make module parameters
      for every module, which we will do next.
      
      @ parse @
      identifier name, args, params, num, level_min, level_max;
      identifier unknown, param, val, doing;
      type s16;
      @@
       extern char *parse_args(const char *name,
       			 char *args,
       			 const struct kernel_param *params,
       			 unsigned num,
       			 s16 level_min,
       			 s16 level_max,
      +			 void *arg,
       			 int (*unknown)(char *param, char *val,
      					const char *doing
      +					, void *arg
      					));
      
      @ parse_mod @
      identifier name, args, params, num, level_min, level_max;
      identifier unknown, param, val, doing;
      type s16;
      @@
       char *parse_args(const char *name,
       			 char *args,
       			 const struct kernel_param *params,
       			 unsigned num,
       			 s16 level_min,
       			 s16 level_max,
      +			 void *arg,
       			 int (*unknown)(char *param, char *val,
      					const char *doing
      +					, void *arg
      					))
      {
      	...
      }
      
      @ parse_args_found @
      expression R, E1, E2, E3, E4, E5, E6;
      identifier func;
      @@
      
      (
      	R =
      	parse_args(E1, E2, E3, E4, E5, E6,
      +		   NULL,
      		   func);
      |
      	R =
      	parse_args(E1, E2, E3, E4, E5, E6,
      +		   NULL,
      		   &func);
      |
      	R =
      	parse_args(E1, E2, E3, E4, E5, E6,
      +		   NULL,
      		   NULL);
      |
      	parse_args(E1, E2, E3, E4, E5, E6,
      +		   NULL,
      		   func);
      |
      	parse_args(E1, E2, E3, E4, E5, E6,
      +		   NULL,
      		   &func);
      |
      	parse_args(E1, E2, E3, E4, E5, E6,
      +		   NULL,
      		   NULL);
      )
      
      @ parse_args_unused depends on parse_args_found @
      identifier parse_args_found.func;
      @@
      
      int func(char *param, char *val, const char *unused
      +		 , void *arg
      		 )
      {
      	...
      }
      
      @ mod_unused depends on parse_args_found @
      identifier parse_args_found.func;
      expression A1, A2, A3;
      @@
      
      -	func(A1, A2, A3);
      +	func(A1, A2, A3, NULL);
      
      Generated-by: Coccinelle SmPL
      Cc: cocci@systeme.lip6.fr
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Felipe Contreras <felipe.contreras@gmail.com>
      Cc: Ewan Milne <emilne@redhat.com>
      Cc: Jean Delvare <jdelvare@suse.de>
      Cc: Hannes Reinecke <hare@suse.de>
      Cc: Jani Nikula <jani.nikula@intel.com>
      Cc: linux-kernel@vger.kernel.org
      Reviewed-by: Tejun Heo <tj@kernel.org>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      ecc86170
  22. 19 May 2015, 2 commits
  23. 12 May 2015, 1 commit