1. 12 Oct 2019, 1 commit
    • powerpc/pseries: Fix cpu_hotplug_lock acquisition in resize_hpt() · f5f31a6e
      Authored by Gautham R. Shenoy
      [ Upstream commit c784be435d5dae28d3b03db31753dd7a18733f0c ]
      
      The calls to arch_add_memory()/arch_remove_memory() are always made
      with the read-side cpu_hotplug_lock acquired via memory_hotplug_begin().
      On pSeries, arch_add_memory()/arch_remove_memory() eventually call
      resize_hpt() which in turn calls stop_machine() which acquires the
      read-side cpu_hotplug_lock again, thereby resulting in the recursive
      acquisition of this lock.
      
      In the absence of CONFIG_PROVE_LOCKING, we hadn't observed a system
      lockup during a memory hotplug operation, because cpus_read_lock()
      takes a per-CPU rwsem read which, in the fast path (in the absence
      of a writer, which in our case is a CPU-hotplug operation), simply
      increments the read_count on the semaphore. Thus a recursive read in
      the fast path doesn't cause any problems.
      
      However, we can hit this problem in practice if there is a concurrent
      CPU-hotplug operation in progress which is waiting to acquire the
      write side of the lock. This causes the second recursive read to
      block until the writer finishes; meanwhile, the writer is blocked
      because the first read still holds the lock. Thus both the reader
      and the writer fail to make any progress, blocking both CPU-hotplug
      and memory-hotplug operations.
      
      Memory-Hotplug				CPU-Hotplug
      CPU 0					CPU 1
      ------                                  ------
      
      1. down_read(cpu_hotplug_lock.rw_sem)
         [memory_hotplug_begin]
      					2. down_write(cpu_hotplug_lock.rw_sem)
      					[cpu_up/cpu_down]
      3. down_read(cpu_hotplug_lock.rw_sem)
         [stop_machine()]
      
      Lockdep complains as follows in these code-paths.
      
       swapper/0/1 is trying to acquire lock:
       (____ptrval____) (cpu_hotplug_lock.rw_sem){++++}, at: stop_machine+0x2c/0x60
      
      but task is already holding lock:
      (____ptrval____) (cpu_hotplug_lock.rw_sem){++++}, at: mem_hotplug_begin+0x20/0x50
      
       other info that might help us debug this:
        Possible unsafe locking scenario:
      
              CPU0
              ----
         lock(cpu_hotplug_lock.rw_sem);
         lock(cpu_hotplug_lock.rw_sem);
      
        *** DEADLOCK ***
      
        May be due to missing lock nesting notation
      
       3 locks held by swapper/0/1:
        #0: (____ptrval____) (&dev->mutex){....}, at: __driver_attach+0x12c/0x1b0
        #1: (____ptrval____) (cpu_hotplug_lock.rw_sem){++++}, at: mem_hotplug_begin+0x20/0x50
        #2: (____ptrval____) (mem_hotplug_lock.rw_sem){++++}, at: percpu_down_write+0x54/0x1a0
      
      stack backtrace:
       CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.0.0-rc5-58373-gbc99402235f3-dirty #166
       Call Trace:
         dump_stack+0xe8/0x164 (unreliable)
         __lock_acquire+0x1110/0x1c70
         lock_acquire+0x240/0x290
         cpus_read_lock+0x64/0xf0
         stop_machine+0x2c/0x60
         pseries_lpar_resize_hpt+0x19c/0x2c0
         resize_hpt_for_hotplug+0x70/0xd0
         arch_add_memory+0x58/0xfc
         devm_memremap_pages+0x5e8/0x8f0
         pmem_attach_disk+0x764/0x830
         nvdimm_bus_probe+0x118/0x240
         really_probe+0x230/0x4b0
         driver_probe_device+0x16c/0x1e0
         __driver_attach+0x148/0x1b0
         bus_for_each_dev+0x90/0x130
         driver_attach+0x34/0x50
         bus_add_driver+0x1a8/0x360
         driver_register+0x108/0x170
         __nd_driver_register+0xd0/0xf0
         nd_pmem_driver_init+0x34/0x48
         do_one_initcall+0x1e0/0x45c
         kernel_init_freeable+0x540/0x64c
         kernel_init+0x2c/0x160
         ret_from_kernel_thread+0x5c/0x68
      
      Fix this issue by:
        1) Requiring all calls to pseries_lpar_resize_hpt() to be made
           with cpu_hotplug_lock held.
      
        2) Invoking stop_machine_cpuslocked() in pseries_lpar_resize_hpt(),
           as a consequence of 1).
      
        3) To satisfy 1), calling mmu_hash_ops.resize_hpt() in
           hpt_order_set() with cpu_hotplug_lock held.
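      A minimal sketch of the resulting locking pattern (hedged: condensed
      from the description above; mmu_hash_ops.resize_hpt() and
      stop_machine_cpuslocked() are real kernel interfaces, the wrapper
      below is illustrative):
      
        #include <linux/cpu.h>           /* cpus_read_lock()/cpus_read_unlock() */
        #include <linux/stop_machine.h>  /* stop_machine_cpuslocked() */
      
        /* The caller pins CPU hotplug; the resize path then uses the
         * _cpuslocked variant, so the read-side cpu_hotplug_lock is
         * never re-acquired recursively via stop_machine(). */
        static int resize_hpt_example(unsigned long shift)
        {
                int rc;
      
                cpus_read_lock();                    /* read-side cpu_hotplug_lock */
                rc = mmu_hash_ops.resize_hpt(shift); /* reaches pseries_lpar_resize_hpt() */
                cpus_read_unlock();
                return rc;
        }
      
        /* ...and inside pseries_lpar_resize_hpt(), stop_machine() becomes:
         * stop_machine_cpuslocked(pseries_lpar_resize_hpt_commit, &state, NULL);
         */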
      
      Fixes: dbcf929c ("powerpc/pseries: Add support for hash table resizing")
      Cc: stable@vger.kernel.org # v4.11+
      Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/1557906352-29048-1-git-send-email-ego@linux.vnet.ibm.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  2. 16 Sep 2019, 1 commit
  3. 30 Jul 2018, 1 commit
  4. 24 Jul 2018, 1 commit
  5. 16 Jul 2018, 1 commit
  6. 24 May 2018, 1 commit
  7. 15 May 2018, 2 commits
  8. 03 May 2018, 1 commit
    • powerpc/fadump: Do not use hugepages when fadump is active · 85975387
      Authored by Hari Bathini
      The FADump capture kernel boots in a restricted memory environment,
      preserving the context of the previous kernel in order to save the
      vmcore. Supporting hugepages in such an environment makes things
      unnecessarily complicated, as hugepages need memory set aside for
      them, which means most of the capture kernel's memory would go to
      supporting hugepages. In most cases this results in out-of-memory
      issues while booting the FADump capture kernel. But hugepages are of
      little use in a capture kernel whose only job is to save the vmcore.
      So disabling hugepage support when fadump is active is a reliable
      solution to the out-of-memory issues. Introduce a flag variable to
      disable HugeTLB support when fadump is active.
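      A minimal sketch of the flag-based approach (hedged: the flag name
      follows the commit's description; placement of the check is
      illustrative):
      
        /* Global flag, set early during boot when fadump is active. */
        bool hugetlb_disabled;
      
        static int __init hugetlbpage_init(void)
        {
                if (hugetlb_disabled)
                        return 0;   /* skip HugeTLB setup in the capture kernel */
                /* ... normal hugepage size registration ... */
                return 0;
        }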
      Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  9. 31 Mar 2018, 1 commit
  10. 30 Mar 2018, 2 commits
  11. 27 Mar 2018, 1 commit
    • powerpc/64: Call H_REGISTER_PROC_TBL when running as a HPT guest on POWER9 · dbfcf3cb
      Authored by Paul Mackerras
      On POWER9, since commit cc3d2940 ("powerpc/64: Enable use of radix
      MMU under hypervisor on POWER9", 2017-01-30), we set both the radix and
      HPT bits in the client-architecture-support (CAS) vector, which tells
      the hypervisor that we can do either radix or HPT.  According to PAPR,
      if we use this combination we are promising to do a H_REGISTER_PROC_TBL
      hcall later on to let the hypervisor know whether we are doing radix
      or HPT.  We currently do this call if we are doing radix but not if
      we are doing HPT.  If the hypervisor is able to support both radix
      and HPT guests, it would be entitled to defer allocation of the HPT
      until the H_REGISTER_PROC_TBL call, and to fail any attempts to create
      HPTEs until the H_REGISTER_PROC_TBL call.  Thus we need to do a
      H_REGISTER_PROC_TBL call when we are doing HPT; otherwise we may
      crash at boot time.
      
      This adds the code to call H_REGISTER_PROC_TBL in this case, before
      we attempt to create any HPT entries using H_ENTER.
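      A hedged sketch of the added call (H_REGISTER_PROC_TBL and
      plpar_hcall_norets() are real pseries interfaces; the flag values
      below are illustrative, not copied from the patch):
      
        /* When booting as an HPT guest on POWER9, tell the hypervisor we
         * chose HPT before any H_ENTER tries to create HPTEs. */
        static void register_proc_table_hpt_sketch(void)
        {
                unsigned long flags = PROC_TABLE_NEW | PROC_TABLE_HPT_SLB;
                long rc = plpar_hcall_norets(H_REGISTER_PROC_TBL, flags, 0, 0);
      
                if (rc != H_SUCCESS)
                        pr_err("Failed to register process table: rc=%ld\n", rc);
        }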
      
      Fixes: cc3d2940 ("powerpc/64: Enable use of radix MMU under hypervisor on POWER9")
      Cc: stable@vger.kernel.org # v4.11+
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  12. 06 Mar 2018, 1 commit
  13. 13 Feb 2018, 1 commit
    • powerpc/mm: Fix crashes with 16G huge pages · fae22116
      Authored by Aneesh Kumar K.V
      To support memory keys, we moved the hash PTE slot information to the
      second half of the page table. This was fine for PTE entries at level
      4 (PTE page) and level 3 (PMD): we already allocate larger page table
      pages at those levels to accommodate the extra details. At level 4 we
      already had the extra space, used to track 4K hash page table entry
      details, and at level 3 the extra space was allocated to track THP
      details.
      
      With hugetlbfs PTEs, we used this extra space at the PMD level to
      store the slot details. But we also support hugetlbfs PTEs at the PUD
      level for 16GB pages, and PUD level pages didn't allocate extra
      space. This resulted in memory corruption.
      
      Fix this by allocating extra space at PUD level when HUGETLB is
      enabled.
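      A sketch of the shape of the fix (hedged: macro names assume the
      book3s64 hash headers; treat as illustrative):
      
        /* With HUGETLB enabled, size the PUD cache one step larger so the
         * second half of the PUD page can hold the slot details. */
        #ifdef CONFIG_HUGETLB_PAGE
        #define H_PUD_CACHE_INDEX       (H_PUD_INDEX_SIZE + 1)
        #else
        #define H_PUD_CACHE_INDEX       (H_PUD_INDEX_SIZE)
        #endif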
      
      Fixes: bf9a95f9 ("powerpc: Free up four 64K PTE bits in 64K backed HPTE pages")
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Ram Pai <linuxram@us.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  14. 22 Jan 2018, 1 commit
  15. 21 Jan 2018, 1 commit
    • powerpc/hash: Skip non initialized page size in init_hpte_page_sizes · 10527e80
      Authored by Aneesh Kumar K.V
      One of the easiest ways to test a config with 4K HPTEs is to disable
      the 64K hardware page size, like below.
      
      int __init htab_dt_scan_page_sizes(unsigned long node,
      
       		size -= 3; prop += 3;
       		base_idx = get_idx_from_shift(base_shift);
      -		if (base_idx < 0) {
      +		if (base_idx < 0 || base_idx == MMU_PAGE_64K) {
       			/* skip the pte encoding also */
       			prop += lpnum * 2; size -= lpnum * 2;
      
      But then this results in errors in other parts of the code, such as
      MPSS parsing, where we look at 4K base page size and 64K actual page
      size support.
      
      This patch fixes MPSS parsing by ignoring actual page sizes that are
      marked unsupported. In reality this can happen only with a corrupt
      device tree, but it is good to tighten the error check.
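      A minimal sketch of the tightened check in init_hpte_page_sizes()
      (hedged: the loop is condensed; mmu_psize_defs is the real page size
      table):
      
        /* While building the MPSS (multiple page sizes per segment)
         * encodings, skip any "actual" page size whose entry was never
         * initialized from the device tree. */
        for (ap = bp; ap < MMU_PAGE_COUNT; ap++) {
                if (!mmu_psize_defs[ap].shift)
                        continue;       /* unsupported actual size: ignore */
                /* ... record the (base, actual) penc encoding ... */
        }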
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  16. 20 Jan 2018, 3 commits
  17. 17 Jan 2018, 3 commits
    • powerpc/pseries: lift RTAS limit for hash · c610d65c
      Authored by Nicholas Piggin
      With the previous patch to switch to 64-bit mode after returning from
      RTAS and before doing any memory accesses, the RMA limit need not be
      clamped to 1GB to avoid RTAS bugs.
      
      Keep the 1GB limit for older firmware (although this is more of a kernel
      concern than RTAS), and remove it starting with POWER9.
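      A hedged sketch of the resulting clamp (the feature test and constant
      are illustrative of the description above, not copied from the
      patch):
      
        /* Keep the 1 GB RMA clamp only for pre-POWER9 (pre-ISA v3.0)
         * systems; from POWER9 onwards the limit is lifted. */
        if (!early_cpu_has_feature(CPU_FTR_ARCH_300))
                ppc64_rma_size = min_t(unsigned long, ppc64_rma_size,
                                       0x40000000UL);   /* 1 GB */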
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/powernv: Remove real mode access limit for early allocations · 1513c33d
      Authored by Nicholas Piggin
      This removes the RMA limit on powernv platform, which constrains
      early allocations such as PACAs and stacks. There are still other
      restrictions that must be followed, such as bolted SLB limits, but
      real mode addressing has no constraints.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: Improve local TLB flush for boot and MCE on POWER9 · d4748276
      Authored by Nicholas Piggin
      There are several cases outside the normal address space management
      where a CPU's entire local TLB is to be flushed:
      
        1. Booting the kernel, in case something has left stale entries in
           the TLB (e.g., kexec).
      
        2. Machine check, to clean corrupted TLB entries.
      
      One other place where the TLB is flushed is waking from deep idle
      states. The flush is a side-effect of calling ->cpu_restore with the
      intention of re-setting various SPRs. The flush itself is unnecessary
      because, in the first case, the TLB should not acquire new corrupted
      entries as part of sleep/wake (though they may be lost).
      
      This type of TLB flush is coded inflexibly, several times for each
      CPU type, and it has a number of problems with ISA v3.0B:
      
      - The current radix mode of the MMU is not taken into account; the
        flush is always done as a hash flush. For IS=2 (LPID-matching flush
        from host) and IS=3 with HV=0 (guest kernel flush), tlbie(l) is
        undefined if the R field does not match the current radix mode.
      
      - ISA v3.0B hash must flush the partition and process table caches as
        well.
      
      - ISA v3.0B radix must flush partition and process scoped translations,
        partition and process table caches, and also the page walk cache.
      
      So consolidate the flushing code and implement it in C and inline asm
      under the mm/ directory with the rest of the flush code. Add ISA v3.0B
      cases for radix and hash, and use the radix flush in radix environment.
      
      Provide a way for IS=2 (LPID flush) to specify the radix mode of the
      partition. Have KVM pass in the radix mode of the guest.
      
      Take out the flushes from the early cputable/dt_cpu_ftrs detection
      hooks, and move them later in the boot process, after the MMU
      registers are set up and before relocation is first turned on.
      
      The TLB flush is no longer called when restoring from deep idle states.
      This could not be done as a separate step because booting secondaries
      uses the same cpu_restore as idle restore, which needs the TLB flush.
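      A hedged sketch of one of the consolidated ISA v3.0B flush helpers
      (closely following the approach described above; PPC_TLBIEL() and
      PPC_BITLSHIFT() are real kernel macros, but the condensed body is
      illustrative):
      
        /* Flush one TLB set with tlbiel; IS=2 (encoded in rb) selects an
         * LPID-matching flush, and the R field must match the MMU's
         * current mode (1 = radix, 0 = hash). Bit numbers are big-endian
         * per ISA v3.0B. */
        static inline void tlbiel_radix_set_sketch(unsigned int set, unsigned int is,
                                                   unsigned int ric, unsigned int prs)
        {
                unsigned int r = 1;     /* radix format */
                unsigned long rb = ((unsigned long)set << PPC_BITLSHIFT(51)) |
                                   ((unsigned long)is << PPC_BITLSHIFT(53));
                unsigned long rs = 0;   /* PID 0: kernel context */
      
                asm volatile(PPC_TLBIEL(%0, %1, %2, %3, %4)
                             : : "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r)
                             : "memory");
        }
      
        /* A full local flush loops this over all sets, then issues ptesync. */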
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  18. 20 Dec 2017, 3 commits
    • powerpc: use helper functions to get and set hash slots · a8548686
      Authored by Ram Pai
      Replace redundant code in __hash_page_4K() and flush_hash_page() with
      the helper functions pte_get_hash_gslot() and pte_set_hidx().
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Ram Pai <linuxram@us.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Free up four 64K PTE bits in 4K backed HPTE pages · 9d2edb18
      Authored by Ram Pai
      Rearrange the 64K PTE bits to free up bits 3, 4, 5 and 6 in the 4K
      backed HPTE pages. These bits continue to be used for 64K backed
      HPTE pages in this patch, but will be freed up in the next patch.
      The bit numbers are big-endian, as defined in ISA v3.0.
      
      The patch makes the following change to the format of 64K PTEs
      backed by 4K HPTEs.
      
      H_PAGE_BUSY moves from bit 3 to bit 9 (B bit in the figure
      		below)
      V0 which occupied bit 4 is not used anymore.
      V1 which occupied bit 5 is not used anymore.
      V2 which occupied bit 6 is not used anymore.
      V3 which occupied bit 7 is not used anymore.
      
      Before the patch, the 4k backed 64k PTE format was as follows
      
       0 1 2 3 4  5  6  7  8 9 10...........................63
       : : : : :  :  :  :  : : :                            :
       v v v v v  v  v  v  v v v                            v
      
      ,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
      |x|x|x|B|V0|V1|V2|V3|x| | |x|x|................|x|x|x|x| <- primary pte
      '_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
      |S|G|I|X|S |G |I |X |S|G|I|X|..................|S|G|I|X| <- secondary pte
      '_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
      
      After the patch, the 4k backed 64k PTE format is as follows
      
       0 1 2 3 4  5  6  7  8 9 10...........................63
       : : : : :  :  :  :  : : :                            :
       v v v v v  v  v  v  v v v                            v
      
      ,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
      |x|x|x| |  |  |  |  |x|B| |x|x|................|.|.|.|.| <- primary pte
      '_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
      |S|G|I|X|S |G |I |X |S|G|I|X|..................|S|G|I|X| <- secondary pte
      '_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
      
      The four bits S, G, I, X (one quadruplet per 4K HPTE) that cache
      the hash-bucket slot value are initialized to 1,1,1,1, indicating
      an invalid slot. If a HPTE gets cached in a 1111 slot (i.e. the
      7th slot of the secondary hash bucket), it is released immediately.
      In other words, even though 1111 is a valid slot value in the hash
      bucket, we consider it invalid and release the slot and the HPTE.
      This gives us the opportunity to determine the validity of the
      S, G, I, X bits based on their contents and not on any of the bits
      V0, V1, V2 or V3 in the primary PTE.
      
      When we release a HPTE cached in the 1111 slot, we also release a
      legitimate slot in the primary hash bucket and unmap its
      corresponding HPTE. This is to ensure that we do get a HPTE cached
      in a slot of the primary hash bucket the next time we retry.
      
      Though treating the 1111 slot as invalid reduces the number of
      available slots in the hash bucket and may have an effect on
      performance, the probability of hitting a 1111 slot is extremely
      low.
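      A tiny hypothetical helper to make the convention concrete (the name
      and constant are illustrative only, not taken from the patch):
      
        /* Under this scheme, hidx == 0b1111 never denotes a usable cached
         * slot; it marks "no valid slot cached" in the S,G,I,X bits. */
        #define HIDX_INVALID    0xfUL
      
        static inline bool hidx_is_valid(unsigned long hidx)
        {
                return hidx != HIDX_INVALID;
        }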
      
      Compared to the current scheme, the above scheme reduces the number
      of false hash table updates significantly and has the added
      advantage of releasing four valuable PTE bits for other purposes.
      
      NOTE: even though bits 3, 4, 5, 6 and 7 are not used when the 64K
      PTE is backed by a 4K HPTE, they continue to be used if the PTE gets
      backed by a 64K HPTE. The next patch will decouple that as well, and
      truly release the bits.
      
      This idea was jointly developed by Paul Mackerras, Aneesh, Michael
      Ellerman and myself.
      
      4K PTE format remains unchanged currently.
      
      The patch makes the following code changes:
      a) PTE flags are split between the 64K and 4K header files.
      b) __hash_page_4K() is reimplemented to reflect the above logic.
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Ram Pai <linuxram@us.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: introduce pte_get_hash_gslot() helper · 318995b4
      Authored by Ram Pai
      Introduce pte_get_hash_gslot(), which returns the global slot number
      of the HPTE in the global hash table.
      
      This function will come in handy as we work towards re-arranging the
      PTE bits in the later patches.
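      A hedged sketch of what the helper looks like (condensed; the hash
      MMU symbols used here, such as hpt_hash() and HPTES_PER_GROUP, exist
      in the surrounding code, but the body is illustrative):
      
        /* Recover the global HPTE slot from the hash value plus the hidx
         * bits cached in the PTE (the secondary bit inverts the hash). */
        unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift,
                                         int ssize, real_pte_t rpte,
                                         unsigned int subpg_index)
        {
                unsigned long hash = hpt_hash(vpn, shift, ssize);
                unsigned long hidx = __rpte_to_hidx(rpte, subpg_index);
                unsigned long gslot;
      
                if (hidx & _PTEIDX_SECONDARY)
                        hash = ~hash;
                gslot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
                gslot += hidx & _PTEIDX_GROUP_IX;
                return gslot;
        }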
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Ram Pai <linuxram@us.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  19. 06 Nov 2017, 1 commit
    • powerpc/mm/hash: Add pr_fmt() to hash_utils64.c · 7f142661
      Authored by Aneesh Kumar K.V
      Make the printks look a bit nicer by adding a prefix.
      
      A radix config now prints:
       radix-mmu: Page sizes from device-tree:
       radix-mmu: Page size shift = 12 AP=0x0
       radix-mmu: Page size shift = 16 AP=0x5
       radix-mmu: Page size shift = 21 AP=0x1
       radix-mmu: Page size shift = 30 AP=0x2
      
      This patch updates the hash config to produce similar dmesg output. With the patch we have:
      
       hash-mmu: Page sizes from device-tree:
       hash-mmu: base_shift=12: shift=12, sllp=0x0000, avpnm=0x00000000, tlbiel=1, penc=0
       hash-mmu: base_shift=12: shift=16, sllp=0x0000, avpnm=0x00000000, tlbiel=1, penc=7
       hash-mmu: base_shift=12: shift=24, sllp=0x0000, avpnm=0x00000000, tlbiel=1, penc=56
       hash-mmu: base_shift=16: shift=16, sllp=0x0110, avpnm=0x00000000, tlbiel=1, penc=1
       hash-mmu: base_shift=16: shift=24, sllp=0x0110, avpnm=0x00000000, tlbiel=1, penc=8
       hash-mmu: base_shift=20: shift=20, sllp=0x0111, avpnm=0x00000000, tlbiel=0, penc=2
       hash-mmu: base_shift=24: shift=24, sllp=0x0100, avpnm=0x00000001, tlbiel=0, penc=0
       hash-mmu: base_shift=34: shift=34, sllp=0x0120, avpnm=0x000007ff, tlbiel=0, penc=3
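      
      The mechanism is the standard pr_fmt() hook; a minimal sketch
      (assuming the prefix shown in the output above):
      
        /* Define pr_fmt() before any includes so every pr_info()/pr_err()
         * in this file is automatically prefixed with "hash-mmu: ". */
        #define pr_fmt(fmt) "hash-mmu: " fmt
      
        #include <linux/kernel.h>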
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  20. 23 Aug 2017, 1 commit
  21. 17 Aug 2017, 1 commit
    • powerpc/mm: Rename find_linux_pte_or_hugepte() · 94171b19
      Authored by Aneesh Kumar K.V
      Add newer helpers to make the function usage simpler. It is always
      recommended to use find_current_mm_pte() for walking the page table.
      If we cannot use find_current_mm_pte(), it should be documented why
      the said usage of __find_linux_pte() is safe against a parallel THP
      split.
      
      For now we have KVM code using __find_linux_pte(). This is because kvm
      code ends up calling __find_linux_pte() in real mode with MSR_EE=0 but
      with PACA soft_enabled = 1. We may want to fix that later and make
      sure we keep the MSR_EE and PACA soft_enabled in sync. When we do that
      we can switch kvm to use find_linux_pte().
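      A hedged sketch of the recommended wrapper (the sanity checks are
      condensed from the intent described above; the exact warning
      messages are illustrative):
      
        /* find_current_mm_pte() wraps __find_linux_pte() with checks that
         * the lockless walk is safe against a parallel THP split: IRQs
         * must be off and the table must belong to current->mm. */
        static inline pte_t *find_current_mm_pte(pgd_t *pgdir, unsigned long ea,
                                                 bool *is_thp, unsigned *hshift)
        {
                VM_WARN(!arch_irqs_disabled(), "lockless walk with IRQs enabled");
                VM_WARN(pgdir != current->mm->pgd, "walk on a foreign mm");
      
                return __find_linux_pte(pgdir, ea, is_thp, hshift);
        }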
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  22. 16 Aug 2017, 1 commit
    • powerpc/mm/hugetlb: Add support for reserving gigantic huge pages via kernel command line · 79cc38de
      Authored by Aneesh Kumar K.V
      With commit aa888a74 ("hugetlb: support larger than MAX_ORDER") we
      added support for allocating gigantic hugepages via the kernel
      command line. Switch the ppc64 arch-specific code to use that.
      
      W.r.t. FSL support, we now limit our allocation range using BOOTMEM_ALLOC_ACCESSIBLE.
      
      We use the kernel command line to reserve hugetlb pages on powernv
      platforms. In pseries hash MMU mode the supported gigantic huge page
      size is 16GB, which can only be allocated with hypervisor assist, so
      for pseries the command line option doesn't do the allocation.
      Instead, pseries does gigantic hugepage allocation based on a
      hypervisor hint specified via the "ibm,expected#pages" property of
      the memory node.
      
      Cc: Scott Wood <oss@buserror.net>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  23. 08 Aug 2017, 1 commit
  24. 31 Jul 2017, 1 commit
  25. 23 Jun 2017, 1 commit
    • powerpc/mm: Trace tlbie(l) instructions · 0428491c
      Authored by Balbir Singh
      Add a trace point for tlbie(l) (Translation Lookaside Buffer Invalidate
      Entry (Local)) instructions.
      
      The tlbie instruction has changed over the years, so not all versions
      accept the same operands. Use the ISA v3 field operands because they
      are the most verbose; we may change them in future.
      
      Example output:
      
        qemu-system-ppc-5371  [016]  1412.369519: tlbie:
        	tlbie with lpid 0, local 1, rb=67bd8900174c11c1, rs=0, ric=0 prs=0 r=0
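      
      A hedged sketch of how the trace point is used at a tlbie site (the
      trace_tlbie() argument order follows the fields in the example
      output; the header location is assumed):
      
        #include <asm/trace.h>    /* assumed home of trace_tlbie() */
      
        /* Record the ISA v3 operand fields alongside the instruction. */
        asm volatile("tlbie %0, %1" : : "r"(rb), "r"(rs) : "memory");
        trace_tlbie(0 /* lpid */, 0 /* local */, rb, rs, ric, prs, r);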
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      [mpe: Add some missing trace_tlbie()s, reword change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  26. 11 Apr 2017, 1 commit
  27. 03 Apr 2017, 1 commit
  28. 01 Apr 2017, 1 commit
    • powerpc/pseries: Skip using reserved virtual address range · 82228e36
      Authored by Aneesh Kumar K.V
      Now that we use all of the available virtual address range, we need
      to make sure we don't generate a VSID that overlaps with the
      reserved VSID range. The reserved VSID range includes the virtual
      address range used by the adjunct partition and also the VRMA
      virtual segment. We find the context value that can result in
      generating such a VSID and reserve it early in boot.
      
      We don't look at the adjunct range, because for now we disable
      adjunct usage in a Linux LPAR via the CAS interface.
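      A hedged sketch of the reservation helper mentioned in mpe's note
      below (the ida-based body is illustrative; mmu_context_ida is the
      hash context allocator):
      
        /* Claim one specific context id so the normal allocator can never
         * hand it out; the VSIDs derived from it then stay unused. */
        static int hash__reserve_context_id(int id)
        {
                int result = ida_simple_get(&mmu_context_ida, id, id + 1,
                                            GFP_KERNEL);
      
                return (result < 0) ? result : 0;
        }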
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      [mpe: Rewrite hash__reserve_context_id(), move the rest into pseries]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  29. 31 Mar 2017, 2 commits
  30. 02 Mar 2017, 1 commit
  31. 10 Feb 2017, 1 commit