- 16 January 2018, 9 commits
-
-
Committed by Nathan Fontenot
Add the required bits to the architecture vector to enable support of the ibm,dynamic-memory-v2 device tree property.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nathan Fontenot
The Power Hypervisor has introduced a new device tree format for the property describing the dynamic reconfiguration LMBs for a system: ibm,dynamic-memory-v2. The new format condenses the property, especially on large memory systems, by reporting sets of LMBs that share the same properties (flags and associativity array index).

This patch updates the powerpc/mm/drmem.c code to provide routines that can parse the new device tree format during the walk_drmem_lmb* routines used during boot and during creation of the LMB array, and to update the device tree by creating a new property in the proper ibm,dynamic-memory-v2 format.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
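Conceptually, the v2 property is an array of records, each describing a run of consecutive LMBs that share the same flags and associativity array index. A rough sketch of such a record is shown below; the struct and field names are illustrative, not necessarily the exact kernel definitions.

	/* Illustrative layout of one ibm,dynamic-memory-v2 record. It covers
	 * "seq_lmbs" consecutive LMBs starting at "base_addr", all sharing
	 * the same associativity array index and flags. Types are the
	 * kernel's big-endian wrappers from <linux/types.h>.
	 */
	struct drconf_cell_v2 {
		__be32	seq_lmbs;	/* number of consecutive LMBs in this set */
		__be64	base_addr;	/* starting address of the first LMB */
		__be32	drc_index;	/* DRC index of the first LMB */
		__be32	aa_index;	/* associativity array index for the whole set */
		__be32	flags;		/* flags shared by the whole set */
	} __packed;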
-
Committed by Nathan Fontenot
Now that the powerpc code parses dynamic reconfiguration memory LMB information from the LMB array and not the device tree directly, we can move the of_drconf_cell struct to drmem.h where it fits better. In addition, the struct is renamed to of_drconf_cell_v1 in anticipation of upcoming support for version 2 of the dynamic reconfiguration property, and the members are typed as __be* values to reflect how they exist in the device tree.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
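For reference, a sketch of what the renamed v1 cell could look like with big-endian member types mirroring the device tree layout; treat this as illustrative rather than the literal header contents.

	struct of_drconf_cell_v1 {
		__be64	base_addr;	/* starting address of this LMB */
		__be32	drc_index;	/* DRC index used for hotplug operations */
		__be32	reserved;
		__be32	aa_index;	/* index into ibm,associativity-lookup-arrays */
		__be32	flags;
	};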
-
Committed by Nathan Fontenot
Update the pseries memory hotplug code to use the newly added dynamic reconfiguration LMB array. Doing this is required for the upcoming support of version 2 of the dynamic reconfiguration device tree property. In addition, this change cleans up the code that parses the LMB information, as we no longer need to worry about the device tree format. This allows us to discard one of the first steps of memory hotplug, where we make a working copy of the device tree property and convert the entire property to CPU format. Instead we just use the LMB array directly while holding the memory hotplug lock.

This patch also moves the updating of the device tree property to powerpc/mm/drmem.c. This allows the hotplug code to work without needing to know the device tree format and provides a single routine for updating the device tree property. The new routine will determine the proper device tree format and generate a properly formatted device tree property.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nathan Fontenot
Update the code in powerpc/numa.c to use the walk_drmem_lmbs() routine instead of parsing the device tree directly. This is in anticipation of introducing a new ibm,dynamic-memory-v2 property with a different format, and it allows the numa code to use a single per-LMB initialization routine regardless of the device tree format.

Additionally, to support other routines in numa.c that need to look up LMB information, a late_init routine is added to drmem.c to allocate the array of LMB information. This LMB array provides per-LMB data that is separated from the device tree format.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nathan Fontenot
We currently have code to parse the dynamic reconfiguration LMB information from the ibm,dynamic-memory device tree property in multiple locations: numa.c, prom.c, and pseries/hotplug-memory.c. In anticipation of adding support for version 2 of the ibm,dynamic-memory property, this patch aims to separate the device tree information from the device tree format.

Doing this requires a two-step process to avoid a possibly very large bootmem allocation early in boot. During initial boot, new routines are provided to walk the device tree property and invoke a callback for each LMB. The second step (introduced in later patches) will allocate an array of LMB information that can be used directly without needing to know the DT format.

This approach provides the benefit of consolidating the device tree property parsing in a single location and (eventually) providing a common data structure for retrieving LMB information. This patch introduces a routine to walk the ibm,dynamic-memory property in the flattened device tree and updates the prom.c code to use it to initialize memory.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
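The walk interface hides the property format from its consumers, which only ever see one LMB at a time. A hedged sketch of what the per-LMB data and a callback might look like follows; the struct, callback and walker names here are illustrative.

	/* Illustrative per-LMB record handed to each callback invocation,
	 * regardless of whether the underlying property is v1 or v2.
	 */
	struct drmem_lmb {
		u64	base_addr;	/* starting address of the LMB */
		u32	drc_index;	/* DRC index used for hotplug operations */
		u32	aa_index;	/* index into ibm,associativity-lookup-arrays */
		u32	flags;		/* e.g. DRCONF_MEM_ASSIGNED */
	};

	/* Example callback: count the LMBs that are currently assigned. */
	static void count_assigned_lmbs(struct drmem_lmb *lmb)
	{
		static unsigned long count;

		if (lmb->flags & DRCONF_MEM_ASSIGNED)
			count++;
	}

	/* During early boot, something along these lines walks the flattened
	 * device tree property and calls the callback once per LMB:
	 *
	 *	walk_drmem_lmbs_early(node, count_assigned_lmbs);
	 */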
-
Committed by Nathan Fontenot
Look up the associativity arrays in of_drconf_to_nid_single() when deriving the nid for an LMB, instead of having them passed in as a parameter.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nathan Fontenot
Look up the device node for the usable memory property instead of having it passed in as a parameter. This change precedes an update in which the callers of of_get_usable_memory() will not have the device node pointer to pass in.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nathan Fontenot
Look up the device node for the associativity array property instead of having it passed in as a parameter. This change precedes an update in which the callers of of_get_assoc_arrays() will not have the device node pointer to pass in.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 03 January 2018, 1 commit
-
-
Committed by Michael Ellerman
Add a test case for the error code reported when we take a SEGV on a mapped but inaccessible area. We broke this recently.

Based on a test case from John Sperbeck <jsperbeck@google.com>.
Acked-by: John Sperbeck <jsperbeck@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
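The gist of such a test: map a page with no access permissions, touch it, and check that the kernel reports SEGV_ACCERR (mapped but inaccessible) rather than SEGV_MAPERR (no mapping). A minimal sketch of the idea, not the actual selftest source, with error handling omitted:

	#include <signal.h>
	#include <sys/mman.h>
	#include <unistd.h>

	static void segv_handler(int sig, siginfo_t *info, void *ctx)
	{
		/* SEGV_ACCERR means the address was mapped but access was denied */
		_exit(info->si_code == SEGV_ACCERR ? 0 : 1);
	}

	int main(void)
	{
		struct sigaction sa = {
			.sa_sigaction	= segv_handler,
			.sa_flags	= SA_SIGINFO,
		};
		char *p = mmap(NULL, getpagesize(), PROT_NONE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		sigaction(SIGSEGV, &sa, NULL);
		*p = 1;		/* mapped but inaccessible: expect SEGV_ACCERR */
		return 1;	/* not reached */
	}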
-
- 22 December 2017, 3 commits
-
-
Committed by Aneesh Kumar K.V
pte_access_permitted() is called in the get_user_pages_fast() path. If we have marked the pte PROT_NONE, we should not allow read access to the address. With the current implementation we only check for WRITE access and never check READ. This is needed on archs like ppc64 that implement PROT_NONE using _PAGE_USER access instead of _PAGE_PRESENT. Also add a pte_user() check just to make sure we are not accessing a kernel mapping.

Even though there is code duplication, keeping the low-level pte accessors different for different platforms helps code readability.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V
pte_access_permitted() is called in the get_user_pages_fast() path. If we have marked the pte PROT_NONE, we should not allow read access to the address. With the current implementation we only check for WRITE access and never check READ. This is needed on archs like ppc64 that implement PROT_NONE using RWX access instead of _PAGE_PRESENT. Also add a pte_user() check just to make sure we are not accessing a kernel mapping.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
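A hedged sketch of the shape of such a check, using Book3S 64 bit names; this is illustrative rather than the exact kernel code.

	/* Require the PTE to be present, readable and user-accessible, plus
	 * writable when a write is requested. PROT_NONE mappings clear the
	 * read/user permission, so they fail for both read and write in the
	 * fast GUP path.
	 */
	static inline bool pte_access_permitted(pte_t pte, bool write)
	{
		unsigned long need = _PAGE_PRESENT | _PAGE_READ;

		if (write)
			need |= _PAGE_WRITE;

		if ((pte_val(pte) & need) != need)
			return false;

		return pte_user(pte);	/* never allow fast GUP on kernel mappings */
	}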
-
Committed by Aneesh Kumar K.V
No functional change in this patch. This updates gup_hugepte() to use the helper. It will help later when we add memory keys.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 20 December 2017, 8 commits
-
-
Committed by Ram Pai
H_PAGE_F_SECOND and H_PAGE_F_GIX are no longer in the 64K main PTE. Capture these changes in the PTE dump report.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ram Pai
Replace redundant code in __hash_page_4K() and flush_hash_page() with the helper functions pte_get_hash_gslot() and pte_set_hidx().
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ram Pai
We need PTE bits 3, 4, 5, 6 and 57 to support protection keys, because these are the bits we want to consolidate on across all configurations to support protection keys.

Bits 3, 4, 5 and 6 are currently used on 4K-PTE kernels, but bits 9 and 10 are available. Hence we use the two available bits and free up bits 5 and 6. We will still not be able to free up bits 3 and 4. In the absence of any other free bits, we will have to stay satisfied with what we have :-(. This means we will not be able to support 32 protection keys, but only 8. The bit numbers are big-endian as defined in ISA 3.0.

This patch makes the following change to the 4K PTE: H_PAGE_F_SECOND (S), which occupied bit 4, moves to bit 7. H_PAGE_F_GIX (G, I, X), which occupied bits 5, 6 and 7, moves to bits 8, 9 and 10 respectively. H_PAGE_HASHPTE (H), which occupied bit 8, moves to bit 4.

Before the patch, the 4K PTE format was as follows:

0 1 2 3 4 5 6 7 8 9 10....................57.....63
: : : : : : : : : : : : :
v v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x|B|S |G |I |X |H| | |x|x|................| |x|x|x|
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'

After the patch, the 4K PTE format is as follows:

0 1 2 3 4 5 6 7 8 9 10....................57.....63
: : : : : : : : : : : : :
v v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x|B|H | | |S |G|I|X|x|x|................| |.|.|.|
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'

The patch has no code changes; it just swizzles the bits around.
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ram Pai
0xf is considered an invalid hidx value. It indicates the absence of a backing HPTE. A PTE is initialized to 0xf either a) when it is newly allocated to hold a 4k-backing HPTE, or b) any time it gets demoted to a 4k-backing HPTE.

This patch shifts the representation by one-modulo-0xf; i.e. hidx 0 is represented as 1, 1 as 2, ..., and 0xf as 0. This convention lets us initialize the secondary part of the PTE to all zeroes. PTEs are zeroed when allocated anyway, so we do not have to zero them again, thus saving on the initialization.
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
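The encoding amounts to an add/subtract-one wrap within the 4-bit field. A hedged sketch of the conversion helpers (the macro names are illustrative):

	/* Store hidx+1 (wrapping in 4 bits) so an all-zero field means "no
	 * valid slot", and recover the real hidx by subtracting one the
	 * same way.
	 */
	#define HIDX_SHIFT_BY_ONE(hidx)		(((hidx) + 0x1UL) & 0xfUL)   /* hidx -> stored */
	#define HIDX_UNSHIFT_BY_ONE(stored)	(((stored) + 0xfUL) & 0xfUL) /* stored -> hidx */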
-
Committed by Ram Pai
Rearrange the 64K PTE bits to free up bits 3, 4, 5 and 6 in the 64K-HPTE-backed pages. This, along with the earlier patch, entirely frees up those four bits from the 64K PTE. The bit numbers are big-endian as defined in ISA 3.0.

This patch makes the following change to 64K PTEs backed by 64K HPTEs: H_PAGE_F_SECOND (S), which occupied bit 4, moves to the second part of the pte, to bit 60. H_PAGE_F_GIX (G, I, X), which occupied bits 5, 6 and 7, also moves to the second part of the pte, to bits 61, 62 and 63 respectively. Since bit 7 is now freed up, we move H_PAGE_BUSY (B) from bit 9 to bit 7. The second part of the PTE holds (H_PAGE_F_SECOND | H_PAGE_F_GIX) at bits 60, 61, 62 and 63. NOTE: none of the bits in the secondary PTE were previously used by 64K-HPTE-backed PTEs.

Before the patch, the 64K-HPTE-backed 64K PTE format was as follows:

0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x| |S |G |I |X |x|B| |x|x|................|x|x|x|x| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
| | | | | | | | | | | | |..................| | | | | <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'

After the patch, the 64K-HPTE-backed 64K PTE format is as follows:

0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x| | | | |B |x| | |x|x|................|.|.|.|.| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
| | | | | | | | | | | | |..................|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'

The above PTE changes apply to hugetlb pages as well.

The patch makes the following code changes:
a) moves H_PAGE_F_SECOND and H_PAGE_F_GIX to the 4K PTE header since they are no longer needed by the 64K PTEs.
b) abstracts out __real_pte() and __rpte_to_hidx() so the caller need not know the bit location of the slot.
c) moves the slot bits to the secondary pte.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ram Pai
Rearrange the 64K PTE bits to free up bits 3, 4, 5 and 6 in the 4K-HPTE-backed pages. These bits continue to be used for 64K-HPTE-backed pages in this patch, but will be freed up in the next patch. The bit numbers are big-endian as defined in ISA 3.0.

The patch makes the following change to the 4K-HPTE-backed 64K PTE format: H_PAGE_BUSY moves from bit 3 to bit 9 (the B bit in the figure below). V0, which occupied bit 4, is not used anymore. V1, which occupied bit 5, is not used anymore. V2, which occupied bit 6, is not used anymore. V3, which occupied bit 7, is not used anymore.

Before the patch, the 4K-backed 64K PTE format was as follows:

0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x|B|V0|V1|V2|V3|x| | |x|x|................|x|x|x|x| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
|S|G|I|X|S |G |I |X |S|G|I|X|..................|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'

After the patch, the 4K-backed 64K PTE format is as follows:

0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x| | | | | |x|B| |x|x|................|.|.|.|.| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
|S|G|I|X|S |G |I |X |S|G|I|X|..................|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'

The four bits S, G, I, X (one quadruplet per 4K HPTE) that cache the hash-bucket slot value are initialized to 1,1,1,1, indicating an invalid slot. If an HPTE gets cached in a 1111 slot (i.e. the 7th slot of the secondary hash bucket), it is released immediately. In other words, even though 1111 is a valid slot value in the hash bucket, we consider it invalid and release the slot and the HPTE. This gives us the opportunity to determine the validity of the S, G, I, X bits based on their contents and not on any of the bits V0, V1, V2 or V3 in the primary PTE.

When we release an HPTE cached in the 1111 slot we also release a legitimate slot in the primary hash bucket and unmap its corresponding HPTE. This is to ensure that we do get an HPTE cached in a slot of the primary hash bucket the next time we retry.

Though treating the 1111 slot as invalid reduces the number of available slots in the hash bucket and may have an effect on performance, the probability of hitting a 1111 slot is extremely low. Compared to the current scheme, the above scheme reduces the number of false hash table updates significantly and has the added advantage of releasing four valuable PTE bits for other purposes.

NOTE: even though bits 3, 4, 5, 6 and 7 are not used when the 64K PTE is backed by a 4K HPTE, they continue to be used if the PTE gets backed by a 64K HPTE. The next patch will decouple that as well, and truly release the bits.

This idea was jointly developed by Paul Mackerras, Aneesh, Michael Ellerman and myself.

The 4K PTE format remains unchanged currently. The patch makes the following code changes:
a) PTE flags are split between the 64K and 4K header files.
b) __hash_page_4K() is reimplemented to reflect the above logic.
Acked-by: Balbir Singh <bsingharora@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ram Pai
Introduce pte_get_hash_gslot(), which returns the global slot number of the HPTE in the global hash table. This function will come in handy as we work towards rearranging the PTE bits in the later patches.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
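The global slot is derived from the hash of the virtual page number plus the hidx cached in the PTE. A hedged sketch of the computation, built on the existing hash-MMU helpers (not necessarily the exact final code):

	/* Compute the global HPTE slot for one subpage. The cached hidx
	 * encodes whether the primary or secondary hash bucket was used and
	 * which slot within that bucket holds the HPTE.
	 */
	static unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift,
						int ssize, real_pte_t rpte,
						unsigned int subpg_index)
	{
		unsigned long hash, gslot, hidx;

		hash = hpt_hash(vpn, shift, ssize);
		hidx = __rpte_to_hidx(rpte, subpg_index);
		if (hidx & _PTEIDX_SECONDARY)
			hash = ~hash;				/* secondary bucket */
		gslot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
		gslot += hidx & _PTEIDX_GROUP_IX;		/* slot within the bucket */

		return gslot;
	}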
-
Committed by Ram Pai
Introduce pte_set_hidx(). It sets the (H_PAGE_F_SECOND | H_PAGE_F_GIX) bits at the appropriate location in the PTE for 4K PTEs. For 64K PTEs, it sets the bits in the second part of the PTE. Though the implementation for the former just needs the slot parameter, it takes some additional parameters to keep the prototype consistent. This function will be handy as we work towards rearranging the bits in the subsequent patches.
Acked-by: Balbir Singh <bsingharora@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 11 December 2017, 19 commits
-
-
Committed by Simon Guo
GCC 7 treats "r2" in a clobber list as an error, so the powerpc ptrace selftests fail to build even with the -fno-pic option:

ptrace-tm-vsx.c: In function ‘tm_vsx’:
ptrace-tm-vsx.c:42:2: error: PIC register clobbered by ‘r2’ in ‘asm’
  asm __volatile__(
  ^~~
make[1]: *** [ptrace-tm-vsx] Error 1
ptrace-tm-spd-vsx.c: In function ‘tm_spd_vsx’:
ptrace-tm-spd-vsx.c:55:2: error: PIC register clobbered by ‘r2’ in ‘asm’
  asm __volatile__(
  ^~~
make[1]: *** [ptrace-tm-spd-vsx] Error 1
ptrace-tm-spr.c: In function ‘tm_spr’:
ptrace-tm-spr.c:46:2: error: PIC register clobbered by ‘r2’ in ‘asm’
  asm __volatile__(
  ^~~

Fix the build errors by removing "r2" from the clobber lists. None of these asm blocks actually clobber r2.
Reported-by: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Tested-by: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
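The change itself is mechanical. A sketch of the kind of asm block involved is below; the instruction is a stand-in, and only the clobber list matters here.

	/* Illustrative only: the real selftest asm bodies are longer.
	 * Nothing in them writes r2 (the TOC pointer), so "r2" can simply
	 * be dropped from the clobber list.
	 */
	static inline void example_asm_block(void)
	{
		asm __volatile__(
			"li 4, 1;"		/* placeholder instruction, uses r4 only */
			:
			:
			: "memory", "r4"	/* "r2" was previously listed here too */
		);
	}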
-
Committed by Bryant G. Ly
Add a pci_vf_drivers_autoprobe() interface. Setting autoprobe to false on the PF prevents drivers from binding to VFs when they are enabled.
Signed-off-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com>
Signed-off-by: Juan J. Alvarez <jjalvare@linux.vnet.ibm.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
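A sketch of how a platform might use the interface before enabling VFs; the surrounding hook name is illustrative, only the pci_vf_drivers_autoprobe() call comes from the patch.

	#include <linux/pci.h>

	/* Keep drivers from automatically binding to the VFs of this PF, so
	 * the platform can finish configuring them before any driver attaches.
	 */
	static void example_pre_enable_sriov(struct pci_dev *pf)
	{
		pci_vf_drivers_autoprobe(pf, false);
	}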
-
Committed by Bryant G. Ly
Add calls for the pseries platform to configure/deconfigure SR-IOV.
Signed-off-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com>
Signed-off-by: Juan J. Alvarez <jjalvare@us.ibm.com>
Acked-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Bryant G. Ly
SR-IOV can now be enabled on both the powernv and pseries platforms. Therefore move the appropriate calls into machine-dependent code instead of relying on compile-time definitions.
Signed-off-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com>
Signed-off-by: Juan J. Alvarez <jjalvare@us.ibm.com>
Acked-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Alan Modra
powerpc64 gcc can generate code that offsets an address to access part of an object in memory. If the address is a -mcmodel=medium toc-pointer-relative address, then code like the following is possible:

	addis r9,r2,var@toc@ha
	ld r3,var@toc@l(r9)
	ld r4,(var+8)@toc@l(r9)

This works fine so long as var is naturally aligned *and* r2 is sufficiently aligned. If not, there is a possibility that the offset added to access var+8 wraps over an n*64k+32k boundary. Modules don't have any guarantee that r2 is sufficiently aligned. Moreover, code generated by older compilers produces a .toc section with 2**0 alignment, which can result in relocation failures at module load time even without the wrap problem.

Thus, this patch links modules with an aligned .toc section (Makefile and module.lds changes), and forces alignment for out-of-tree modules or those without a .toc section (module_64.c changes).
Signed-off-by: Alan Modra <amodra@gmail.com>
[desnesn: updated patch to apply to powerpc-next kernel v4.15]
Signed-off-by: Desnes A. Nunes do Rosario <desnesn@linux.vnet.ibm.com>
[mpe: Fix out-of-tree build, swap -256 for ~0xff, reflow comment]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
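The linker-script half of the change is small. A hedged sketch of what such a module.lds rule might contain; the exact contents and alignment value are assumptions, not taken from the patch.

	SECTIONS
	{
		/* Force the module's .toc to a generous alignment so that
		 * r2-relative offsets cannot wrap across an n*64k+32k boundary.
		 */
		.toc 0 : ALIGN(256)
		{
			*(.got .toc)
		}
	}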
-
Committed by Nicholas Piggin
When an interrupt is returning to a soft-disabled context (which can happen for non-maskable interrupts or synchronous interrupts), it goes through the motions of soft-disabling again, including calling TRACE_DISABLE_INTS (i.e., trace_hardirqs_off()).

This is not necessary, because we must already be soft-disabled in the interrupt context. It may also be causing crashes in the irq tracing code when it re-enters as an NMI. Replace it with a warning to ensure that soft interrupts are still disabled.
Fixes: 7c0482e3 ("powerpc/irq: Fix another case of lazy IRQ state getting out of sync")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ivan Mikhaylov
Add irq error handlers for cmu, plb, opb, mcue and conf, which output debug information in case of problems.
Signed-off-by: Ivan Mikhaylov <ivan@de.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ivan Mikhaylov
TVSENSE (temperature and voltage sensors) reset is blocked (clock gated) by the POR default of the TVS sleep config bit. As a consequence, TVSENSE will provide erratic sensor values, which may result in spurious (parity) errors recorded in the CMU FIR and lead to erroneous interrupt requests once the CMU interrupt is unmasked. The purpose of this patch is to set up the CMU in a working state in all cases, even in the presence of parity errors.
Reviewed-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Ivan Mikhaylov <ivan@de.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ivan Mikhaylov
* Clear out any possible plb6 errors.
* Set up board interrupt handling within the l2 register set.
* Set up fsp2 parity error handling.

All of these are needed for correct board-level interrupt handling, including error reporting.
Reviewed-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Ivan Mikhaylov <ivan@de.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ivan Mikhaylov
Add cmu, plbX, l2, ddr3/4 and crcs register definitions. Add mfcmu and mtcmu functions for indirect access to the CMU, and mfl2/mtl2 doing the same for the L2 cache core register set.
Signed-off-by: Ivan Mikhaylov <ivan@de.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
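Indirect access on these 4xx-style blocks typically means writing the target register number to an address DCR and transferring data through a paired data DCR. A hedged sketch of what mfcmu/mtcmu might look like; the DCR names are assumptions, not taken from the patch.

	/* Illustrative: read/write a CMU register through an address/data DCR
	 * pair. Interrupts are blocked so the two-step sequence is not split.
	 */
	static inline u32 mfcmu(u32 reg)
	{
		unsigned long flags;
		u32 data;

		local_irq_save(flags);
		mtdcr(DCRN_CMU_ADDR, reg);	/* select the CMU register */
		data = mfdcr(DCRN_CMU_DATA);	/* read its contents */
		local_irq_restore(flags);

		return data;
	}

	static inline void mtcmu(u32 reg, u32 data)
	{
		unsigned long flags;

		local_irq_save(flags);
		mtdcr(DCRN_CMU_ADDR, reg);
		mtdcr(DCRN_CMU_DATA, data);
		local_irq_restore(flags);
	}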
-
Committed by Nicholas Piggin
Match powerpc/64 and include .data.rel* input sections in the .data output section explicitly. This solves the warning:

powerpc-linux-gnu-ld: warning: orphan section `.data.rel.ro' from `arch/powerpc/kernel/head_44x.o' being placed in section `.data.rel.ro'.

Link: https://lists.01.org/pipermail/kbuild-all/2017-November/040010.html
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
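The shape of the fix in the 32-bit vmlinux.lds.S is roughly the following; this is a sketch of the relevant output-section rule, not the literal diff.

	.data : AT(ADDR(.data) - LOAD_OFFSET) {
		DATA_DATA
		*(.data.rel*)	/* pick up relocated-data input sections explicitly */
		*(.sdata)
	}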
-
Committed by Michael Ellerman
As far as I can tell, CONFIG_CPM is the right symbol to use to conditionally compile the cpm-serial.c code.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman
Only build in the OPAL console code when necessary. This looks like it should use CONFIG_PPC_POWERNV, but because the opal-call.S code is 64-bit only, we must only build it when we're building the boot wrapper 64-bit.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman
The serial code in uartlite.c only matches if we find one of two Xilinx (xlnx) nodes in the device tree, so there's no need to build or link the code on other platforms. As far as I can tell, CONFIG_XILINX_VIRTEX is the appropriate symbol to use to conditionally compile the code.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Josh Poimboeuf
Print the function address associated with the restore_r2() error to make it easier to debug the problem, and clarify the wording a bit.

Before:
module_64: patch_foo: Expect noop after relocate, got 3c820000

After:
module_64: patch_foo: Expected nop after call, got 7c630034 at netdev_has_upper_dev+0x54/0xb0 [patch_foo]
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
[mpe: Change noop to nop, as that's the name of the instruction]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Josh Poimboeuf
When attempting to load a livepatch module, I got the following error:

module_64: patch_module: Expect noop after relocate, got 3c820000

The error was triggered by the following code in unregister_netdevice_queue():

	14c: 00 00 00 48  b      14c <unregister_netdevice_queue+0x14c>
		14c: R_PPC64_REL24 net_set_todo
	150: 00 00 82 3c  addis  r4,r2,0

GCC didn't insert a nop after the branch to net_set_todo() because it's a sibling call, so it never returns. The nop isn't needed after the branch in that case.
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-and-tested-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Kamalesh Babulal
Livepatch reuses the module loader function apply_relocate_add() to write relocations, instead of managing them via the arch-dependent klp_write_module_reloc() function.

apply_relocate_add() doesn't understand livepatch symbols (marked with the SHN_LIVEPATCH symbol section index) and by default assumes them to be local symbols for the R_PPC64_REL24 relocation type. It fails with an error when trying to calculate the offset with local_entry_offset():

module_64: kpatch_meminfo: REL24 -1152921504897399800 out of range!

Livepatch symbols are essentially SHN_UNDEF and should be called via the stub used for global calls. This issue can be fixed by teaching apply_relocate_add() to handle both SHN_UNDEF and SHN_LIVEPATCH symbols via the same stub. This patch extends the SHN_UNDEF code to handle livepatch symbols too.
Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
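Conceptually, the fix is a one-condition change in the R_PPC64_REL24 handling: a symbol needs the global-call stub (and the accompanying TOC restore after the call) if it is either truly undefined or a livepatch symbol. A small sketch of that decision, not the literal diff:

	#include <linux/elf.h>

	/* Both undefined and livepatch symbols live outside the module being
	 * relocated, so an R_PPC64_REL24 branch to them must go via a stub;
	 * ordinary local symbols are branched to directly (plus their local
	 * entry point offset).
	 */
	static bool reloc_needs_stub(const Elf64_Sym *sym)
	{
		return sym->st_shndx == SHN_UNDEF || sym->st_shndx == SHN_LIVEPATCH;
	}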
-
Committed by Benjamin Herrenschmidt
This message isn't terribly useful.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt
This statement causes some not very useful messages to always be printed on the serial port at boot, even on quiet boots.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-