- 29 Jul 2020, 16 commits
-
-
Committed by Michael Ellerman

Now that the powerpc code behaves the same as other architectures we can drop the special cases we had.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724092528.1578671-5-mpe@ellerman.id.au
-
Committed by Michael Ellerman

We have powerpc specific logic in our page fault handling to decide if an access to an unmapped address below the stack pointer should expand the stack VMA. The logic aims to prevent userspace from doing bad accesses below the stack pointer. However as long as the stack is < 1MB in size, we allow all accesses without further checks. Adding some debug I see that I can do a full kernel build and LTP run, and not a single process has used more than 1MB of stack. So for the majority of processes the logic never even fires.

We also recently found a nasty bug in this code which could cause userspace programs to be killed during signal delivery. It went unnoticed presumably because most processes use < 1MB of stack.

The generic mm code has also grown support for stack guard pages since this code was originally written, so the most heinous case of the stack expanding into other mappings is now handled for us.

Finally, although some other arches have special logic in this path, from what I can tell none of x86, arm64, arm or s390 impose any extra checks other than those in expand_stack().

So drop our complicated logic and, like other architectures, just let the stack expand as long as it's within the rlimit.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Daniel Axtens <dja@axtens.net>
Link: https://lore.kernel.org/r/20200724092528.1578671-4-mpe@ellerman.id.au
-
Committed by Michael Ellerman

Update the stack expansion load/store test to take into account the new allowance of 4224 bytes below the stack pointer.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724092528.1578671-3-mpe@ellerman.id.au
-
Committed by Michael Ellerman

We have powerpc specific logic in our page fault handling to decide if an access to an unmapped address below the stack pointer should expand the stack VMA.

The code was originally added in 2004 "ported from 2.4". The rough logic is that the stack is allowed to grow to 1MB with no extra checking. Over 1MB the access must be within 2048 bytes of the stack pointer, or be from a user instruction that updates the stack pointer. The 2048 byte allowance below the stack pointer is there to cover the 288 byte "red zone" as well as the "about 1.5kB" needed by the signal delivery code.

Unfortunately since then the signal frame has expanded, and is now 4224 bytes on 64-bit kernels with transactional memory enabled. This means if a process has consumed more than 1MB of stack, and its stack pointer lies less than 4224 bytes from the next page boundary, signal delivery will fault when trying to expand the stack and the process will see a SEGV.

The total size of the signal frame is the size of struct rt_sigframe (which includes the red zone) plus __SIGNAL_FRAMESIZE (128 bytes on 64-bit).

The 2048 byte allowance was correct until 2008 as the signal frame was:

  struct rt_sigframe {
          struct ucontext            uc;              /*     0  1440 */
          /* --- cacheline 11 boundary (1408 bytes) was 32 bytes ago --- */
          long unsigned int          _unused[2];      /*  1440    16 */
          unsigned int               tramp[6];        /*  1456    24 */
          struct siginfo *           pinfo;           /*  1480     8 */
          void *                     puc;             /*  1488     8 */
          struct siginfo             info;            /*  1496   128 */
          /* --- cacheline 12 boundary (1536 bytes) was 88 bytes ago --- */
          char                       abigap[288];     /*  1624   288 */

          /* size: 1920, cachelines: 15, members: 7 */
          /* padding: 8 */
  };

  1920 + 128 = 2048

Then in commit ce48b210 ("powerpc: Add VSX context save/restore, ptrace and signal support") (Jul 2008) the signal frame expanded to 2304 bytes:

  struct rt_sigframe {
          struct ucontext            uc;              /*     0  1696 */    <--
          /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
          long unsigned int          _unused[2];      /*  1696    16 */
          unsigned int               tramp[6];        /*  1712    24 */
          struct siginfo *           pinfo;           /*  1736     8 */
          void *                     puc;             /*  1744     8 */
          struct siginfo             info;            /*  1752   128 */
          /* --- cacheline 14 boundary (1792 bytes) was 88 bytes ago --- */
          char                       abigap[288];     /*  1880   288 */

          /* size: 2176, cachelines: 17, members: 7 */
          /* padding: 8 */
  };

  2176 + 128 = 2304

At this point we should have been exposed to the bug, though as far as I know it was never reported. I no longer have a system old enough to easily test on.

Then in 2010 commit 320b2b8d ("mm: keep a guard page below a grow-down stack segment") caused our stack expansion code to never trigger, as there was always a VMA found for a write up to PAGE_SIZE below r1.

That meant the bug was hidden as we continued to expand the signal frame in commit 2b0a576d ("powerpc: Add new transactional memory state to the signal context") (Feb 2013):

  struct rt_sigframe {
          struct ucontext            uc;              /*     0  1696 */
          /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
          struct ucontext            uc_transact;     /*  1696  1696 */    <--
          /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */
          long unsigned int          _unused[2];      /*  3392    16 */
          unsigned int               tramp[6];        /*  3408    24 */
          struct siginfo *           pinfo;           /*  3432     8 */
          void *                     puc;             /*  3440     8 */
          struct siginfo             info;            /*  3448   128 */
          /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */
          char                       abigap[288];     /*  3576   288 */

          /* size: 3872, cachelines: 31, members: 8 */
          /* padding: 8 */
          /* last cacheline: 32 bytes */
  };

  3872 + 128 = 4000

And commit 573ebfa6 ("powerpc: Increase stack redzone for 64-bit userspace to 512 bytes") (Feb 2014):

  struct rt_sigframe {
          struct ucontext            uc;              /*     0  1696 */
          /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
          struct ucontext            uc_transact;     /*  1696  1696 */
          /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */
          long unsigned int          _unused[2];      /*  3392    16 */
          unsigned int               tramp[6];        /*  3408    24 */
          struct siginfo *           pinfo;           /*  3432     8 */
          void *                     puc;             /*  3440     8 */
          struct siginfo             info;            /*  3448   128 */
          /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */
          char                       abigap[512];     /*  3576   512 */    <--

          /* size: 4096, cachelines: 32, members: 8 */
          /* padding: 8 */
  };

  4096 + 128 = 4224

Then finally in 2017, commit 1be7107f ("mm: larger stack guard gap, between vmas") exposed us to the existing bug, because it changed the stack VMA to be the correct/real size, meaning our stack expansion code is now triggered.

Fix it by increasing the allowance to 4224 bytes.

Hard-coding 4224 is obviously unsafe against future expansions of the signal frame in the same way as the existing code. We can't easily use sizeof() because the signal frame structure is not in a header. We will either fix that, or rip out all the custom stack expansion checking logic entirely.

Fixes: ce48b210 ("powerpc: Add VSX context save/restore, ptrace and signal support")
Cc: stable@vger.kernel.org # v2.6.27+
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724092528.1578671-2-mpe@ellerman.id.au
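In code terms the fix is essentially one constant in the stack-expansion check; a minimal sketch of the shape, assuming a helper along these lines (the helper name is illustrative, not the kernel's):

  /* Sketch: treat any access within 4224 bytes below the stack
   * pointer (r1, i.e. uregs->gpr[1]) as valid stack expansion, so
   * a worst-case signal frame can always be written. */
  static bool within_sigframe_allowance(unsigned long address,
                                        const struct pt_regs *uregs)
  {
          return address + 4224 >= uregs->gpr[1];
  }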
-
Committed by Michael Ellerman

We have custom stack expansion checks that it turns out are extremely badly tested and contain bugs, surprise. So add some tests that exercise the code and capture the current boundary conditions.

The signal test currently fails on 64-bit kernels because the 2048 byte allowance for the signal frame is too small; we will fix that in a subsequent patch.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724092528.1578671-1-mpe@ellerman.id.au
-
Committed by Oliver O'Halloran

For drivers that don't have the error handling callbacks we implement recovery by removing the device and re-probing it. This causes the sysfs directory for the PCI device to be removed, which causes the following spurious error to be printed when checking the PE state:

  Breaking 0005:03:00.0...
  ./eeh-basic.sh: line 13: can't open /sys/bus/pci/devices/0005:03:00.0/eeh_pe_state: no such file
  0005:03:00.0, waited 0/60
  0005:03:00.0, waited 1/60
  0005:03:00.0, waited 2/60
  0005:03:00.0, waited 3/60
  0005:03:00.0, waited 4/60
  0005:03:00.0, waited 5/60
  0005:03:00.0, waited 6/60
  0005:03:00.0, waited 7/60
  0005:03:00.0, Recovered after 8 seconds

We currently try to avoid this by checking if the PE state file exists before reading from it. This is, however, inherently racy, so re-work the state checking so that we only read from the file once, and squash any errors that occur while reading.

Fixes: 85d86c8a ("selftests/powerpc: Add basic EEH selftest")
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200727010127.23698-1-oohall@gmail.com
-
Committed by Sandipan Das

Commit c46241a3 ("powerpc/pkeys: Check vma before returning key fault error to the user") fixes a bug which causes the kernel to set the wrong pkey in siginfo when a pkey fault occurs after two competing threads that have allocated different pkeys, one fully permissive and the other restrictive, attempt to protect a common page at the same time. This adds a test to detect the bug.

Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ce40b6ee270bda52e8f4088578ed2faf7d1d509a.1595821792.git.sandipan@linux.ibm.com
-
Committed by Sandipan Das

The gettid() syscall wrapper was first introduced in glibc 2.30. This adds a wrapper for use in distros running older versions.

Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8ca3b0eeda989707815d1cf337cc33f090408965.1595821792.git.sandipan@linux.ibm.com
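Such a wrapper is typically a one-liner around the raw syscall; a minimal sketch, assuming a glibc-version guard like the one below (the selftest's exact guard may differ):

  #include <sys/syscall.h>
  #include <unistd.h>

  /* glibc >= 2.30 already declares gettid(); fall back to the raw
   * syscall on older versions. */
  #if !defined(__GLIBC_PREREQ) || !__GLIBC_PREREQ(2, 30)
  static pid_t gettid(void)
  {
          return (pid_t)syscall(SYS_gettid);
  }
  #endif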
-
Committed by Sandipan Das

This adds a helper similar to FAIL_IF() which lets a program exit with code 1 (to indicate failure) when the given condition is true.

Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/dac282d5c2e96e7816dc522e4e20d56d7c79c898.1595821792.git.sandipan@linux.ibm.com
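A sketch of what such a helper can look like; the macro name and message format mirror FAIL_IF() but are illustrative:

  #include <stdio.h>
  #include <stdlib.h>

  /* Like FAIL_IF(), but exit(1) instead of returning, for use in
   * places where a return value cannot propagate (e.g. child
   * processes or signal handlers). */
  #define FAIL_IF_EXIT(x)                                 \
  do {                                                    \
          if ((x)) {                                      \
                  fprintf(stderr, "[FAIL] %s:%d: %s\n",   \
                          __FILE__, __LINE__, #x);        \
                  exit(1);                                \
          }                                               \
  } while (0)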
-
Committed by Sandipan Das

Commit 192b6a78 ("powerpc/book3s64/pkeys: Fix pkey_access_permitted() for execute disable pkey") fixed a bug that caused repetitive faults for pkeys with no execute rights alongside some combination of read and write rights.

This removes the last two cases of the test (which check the behaviour of pkeys with read and write but no execute rights, and with all rights) in favour of checking all the possible combinations of read, write and execute rights, to be able to detect bugs like the one mentioned above.

Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/db467500f8af47727bba6b35796e8974a78b71e5.1595821792.git.sandipan@linux.ibm.com
-
Committed by Sandipan Das

This adds some new pkey-related helpers to print the access rights of a pkey in the "rwx" format and to generate different valid combinations of pkey rights starting from a given combination.

Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6cc1c7d1f686618668a3e090f1d0c2a4cd9dea3f.1595821792.git.sandipan@linux.ibm.com
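A hedged sketch of the "rwx" printer half (the helper name is illustrative; PKEY_DISABLE_EXECUTE is the powerpc-specific execute-disable bit, and PKEY_DISABLE_ACCESS removes both read and write):

  #include <stdio.h>

  #define PKEY_DISABLE_ACCESS   0x1
  #define PKEY_DISABLE_WRITE    0x2
  #define PKEY_DISABLE_EXECUTE  0x4  /* powerpc-specific */

  /* Print a pkey's rights as e.g. "rwx", "rw-" or "---". */
  static void pkey_print_rights(unsigned long restrictions)
  {
          printf("%c%c%c\n",
                 (restrictions & PKEY_DISABLE_ACCESS) ? '-' : 'r',
                 (restrictions & (PKEY_DISABLE_ACCESS |
                                  PKEY_DISABLE_WRITE)) ? '-' : 'w',
                 (restrictions & PKEY_DISABLE_EXECUTE) ? '-' : 'x');
  }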
-
Committed by Sandipan Das

This moves all the pkey-related helpers to a new header file, and also moves a helper for printing error messages in signal handlers to the existing utils header file.

Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/28e633fa9ec1a6500c12188e09ea1887b10a10c1.1595821792.git.sandipan@linux.ibm.com
-
Committed by Nicholas Piggin

KVM guests have certain restrictions and performance quirks when using doorbells. This patch moves the EPAPR KVM guest test so it can be shared with PSERIES, and uses that in the doorbell setup code to apply the KVM guest quirks. It improves IPI performance in two cases:

 - PowerVM guests may now use doorbells even if they are secure.
 - KVM guests no longer use doorbells if XIVE is available.

There is a valid complaint that "KVM guest" is not a very reasonable thing to test for; it's preferable for the hypervisor to advertise particular behaviours to the guest, so they could change if the hypervisor implementation or configuration changes. However in this case we were already assuming a KVM guest worst case, so this patch is about containing those quirks. If KVM later advertises fast doorbells, we should test for that and override the quirks.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726035155.1424103-4-npiggin@gmail.com
-
Committed by Nicholas Piggin

KVM supports msgsndp in guests by trapping and emulating the instruction, so it was decided to always use XIVE for IPIs if it is available. However on PowerVM systems, msgsndp can be used and gives better performance. On large systems, high XIVE interrupt rates can have sub-linear scaling, and using msgsndp can reduce the load on the interrupt controller.

So switch to using core-local doorbells even if XIVE is available. This reduces performance for KVM guests with an SMT topology by about 50% for ping-pong context switching between SMT vCPUs. An option vector (or dt-cpu-ftrs) could be defined to disable msgsndp to get KVM performance back.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726035155.1424103-3-npiggin@gmail.com
-
Committed by Nicholas Piggin

These are only called in one place for a given platform, so inline them for performance.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Cédric Le Goater <clg@kaod.org>
[mpe: Fix build errors related to KVM]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726035155.1424103-2-npiggin@gmail.com
-
Committed by Athira Rajeev

Commit 9908c826 ("powerpc/perf: Add Power10 PMU feature to DT CPU features") defines MMCRA_BHRB_DISABLE as `0x2000000000UL`. Binutils versions earlier than 2.28 don't support the UL suffix:

  arch/powerpc/kernel/cpu_setup_power.S: Assembler messages:
  arch/powerpc/kernel/cpu_setup_power.S:250: Error: found 'L', expected: ')'
  arch/powerpc/kernel/cpu_setup_power.S:250: Error: junk at end of line, first unrecognized character is `L'
  arch/powerpc/kernel/cpu_setup_power.S:250: Error: found 'L', expected: ')'
  arch/powerpc/kernel/cpu_setup_power.S:250: Error: found 'L', expected: ')'
  arch/powerpc/kernel/cpu_setup_power.S:250: Error: junk at end of line, first unrecognized character is `L'
  arch/powerpc/kernel/cpu_setup_power.S:250: Error: found 'L', expected: ')'
  arch/powerpc/kernel/cpu_setup_power.S:250: Error: found 'L', expected: ')'
  arch/powerpc/kernel/cpu_setup_power.S:250: Error: operand out of range (0x0000002000000000 is not between 0xffffffffffff8000 and 0x000000000000ffff)

Fix this by wrapping it with the `_UL` macro.

Fixes: 9908c826 ("powerpc/perf: Add Power10 PMU feature to DT CPU features")
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1595996214-5833-1-git-send-email-atrajeev@linux.vnet.ibm.com
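_UL() comes from include/uapi/linux/const.h and exists precisely so one constant can feed both C and assembly; simplified sketch of the mechanism and of what the fixed define looks like:

  /* include/uapi/linux/const.h (simplified) */
  #ifdef __ASSEMBLY__
  #define _AC(X, Y)       X               /* assembler: drop the suffix */
  #else
  #define __AC(X, Y)      (X##Y)
  #define _AC(X, Y)       __AC(X, Y)      /* C: paste the suffix on */
  #endif
  #define _UL(x)          (_AC(x, UL))

  /* so the definition becomes something like: */
  #define MMCRA_BHRB_DISABLE      _UL(0x2000000000)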
-
- 27 Jul 2020, 1 commit
-
-
Committed by Michael Ellerman

skiroot_defconfig fails with:

  arch/powerpc/kernel/fadump.c:48:17: error: ‘cpus_in_fadump’ defined but not used
     48 | static atomic_t cpus_in_fadump;

Fix it by moving the definition into the #ifdef where it's used.

Fixes: ba608c4f ("powerpc/fadump: fix race between pstore write and fadump crash trigger")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200727070341.595634-1-mpe@ellerman.id.au
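The shape of such a fix, as a sketch; the guard symbol below is illustrative, the point being to scope the variable to the same conditional block as its users so configs that compile them out don't warn:

  #ifdef CONFIG_SOME_FADUMP_OPTION      /* illustrative guard */
  static atomic_t cpus_in_fadump;

  static void fadump_crash_path(void)
  {
          atomic_inc(&cpus_in_fadump);  /* user of the variable */
          /* ... */
  }
  #endif /* CONFIG_SOME_FADUMP_OPTION */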
-
- 26 Jul 2020, 23 commits
-
-
Committed by Randy Dunlap

Drop the repeated word "for".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726003809.20454-10-rdunlap@infradead.org
-
Committed by Randy Dunlap

Drop the repeated word "the".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726003809.20454-9-rdunlap@infradead.org
-
Committed by Randy Dunlap

Drop the repeated word "a".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726003809.20454-8-rdunlap@infradead.org
-
Committed by Randy Dunlap

Drop the repeated word "in".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726003809.20454-7-rdunlap@infradead.org
-
Committed by Randy Dunlap

Drop the repeated word "the".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726003809.20454-6-rdunlap@infradead.org
-
Committed by Randy Dunlap

Drop the repeated words "file" and "the".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726003809.20454-5-rdunlap@infradead.org
-
Committed by Randy Dunlap

Drop the repeated word "use".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726003809.20454-4-rdunlap@infradead.org
-
Committed by Randy Dunlap

Drop the repeated word "per".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726003809.20454-3-rdunlap@infradead.org
-
Committed by Randy Dunlap

Drop the repeated word "below".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200726003809.20454-2-rdunlap@infradead.org
-
Committed by Li RongQing

Align it with other architectures; none of the callers is interested in its return value.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1556278590-14727-1-git-send-email-lirongqing@baidu.com
-
Committed by Christophe Leroy

In note_page(), the pg_state is updated the same way in two places. Add note_page_update_state() to do it.

Also include the display of boundary markers there, as it is missing from the "no level" leg, leading to a mismatch when the first two markers are at the same address and the first displayed area uses that address.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a284a809f01c705bbaab303b06fda216f147a99a.1593429426.git.christophe.leroy@csgroup.eu
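A hedged sketch of the factored-out helper, assuming field names along the lines of powerpc's ptdump pg_state (treat the exact members as assumptions):

  /* Record the state for a newly started mapping area, and print any
   * boundary markers crossed, so no leg of note_page() can miss them. */
  static void note_page_update_state(struct pg_state *st, unsigned long addr,
                                     unsigned int level, u64 val)
  {
          st->level = level;
          st->current_flags = val & pg_level[level].mask;
          st->start_address = addr;

          while (addr >= st->marker[1].start_address) {
                  st->marker++;
                  pr_info("---[ %s ]---\n", st->marker->name);
          }
  }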
-
Committed by Christophe Leroy

st->last_pa is always updated in note_page(), so it can be done outside the if/else if/else block.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/610d6b1a60ad0bedef865a90153c1110cfaa507e.1593429426.git.christophe.leroy@csgroup.eu
-
Committed by Christophe Leroy

When STRICT_KERNEL_RWX is set, we want to set the NX bit on vmalloc segments, but modules require exec. Use a dedicated segment for modules. There is not much space above the kernel, and we don't want to waste vmalloc space on alignment; therefore, take the segment just before PAGE_OFFSET for modules.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/eb8faba9148b6cf17c696ba776b4e8ee2f6313bf.1593428200.git.christophe.leroy@csgroup.eu
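That layout choice boils down to a pair of defines, roughly as below (the exact constants are an assumption for book3s/32 with its 256MB segments):

  /* Sketch: dedicate the 256MB segment just below PAGE_OFFSET to
   * modules, so it can stay executable while vmalloc space gets NX. */
  #ifdef CONFIG_MODULES
  #define MODULES_END     ALIGN_DOWN(PAGE_OFFSET, SZ_256M)
  #define MODULES_VADDR   (MODULES_END - SZ_256M)
  #endif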
-
Committed by Christophe Leroy

Kernel space starts at TASK_SIZE. Select the kernel page table when the address is over TASK_SIZE.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/893425e32cd0a003539573b2d115e0ffa98bc26c.1593428200.git.christophe.leroy@csgroup.eu
-
Committed by Christophe Leroy

User space stops at TASK_SIZE. At the moment, kernel space starts at PAGE_OFFSET. In order to use the space between TASK_SIZE and PAGE_OFFSET for modules, make TASK_SIZE the limit between user and kernel space.

Note that fault.c already considers TASK_SIZE as the boundary between user and kernel space.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b38b52cd8dabbb56fbd6f9219d6f3cdccbb43b44.1593428200.git.christophe.leroy@csgroup.eu
-
Committed by Christophe Leroy

Instead of leaving NX unset on all segments above the start of vmalloc space, only leave NX unset on segments used for modules.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7172c0f5253419315e434a1816ee3d6ed6505bc0.1593428200.git.christophe.leroy@csgroup.eu
-
Committed by Christophe Leroy

In order to allow allocation of modules outside of vmalloc space, use MODULES_VADDR and MODULES_END when MODULES_VADDR is defined. Redefine module_alloc() in that case, and unmap the corresponding KASAN shadow memory.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7ecf5fff1eef67d450e73fc412b6ec3818483d75.1593428200.git.christophe.leroy@csgroup.eu
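Roughly what module_alloc() looks like once a dedicated range exists; a simplified sketch (the real version also deals with the KASAN shadow and strict-RWX permissions):

  #ifdef MODULES_VADDR
  void *module_alloc(unsigned long size)
  {
          /* allocate from the dedicated module range instead of the
           * general vmalloc range */
          return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
                                      GFP_KERNEL, PAGE_KERNEL_EXEC,
                                      VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
                                      __builtin_return_address(0));
  }
  #endif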
-
Committed by Christophe Leroy

Use is_vmalloc_or_module_addr() instead of is_vmalloc_addr().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7d884db0e5a6f521331639d8c0f13e520d5a4fef.1593428200.git.christophe.leroy@csgroup.eu
-
Committed by Wei Yongjun

The sparse tool complains as follows:

  arch/powerpc/platforms/pseries/papr_scm.c:97:1: warning: symbol 'papr_nd_regions' was not declared. Should it be static?
  arch/powerpc/platforms/pseries/papr_scm.c:98:1: warning: symbol 'papr_ndr_lock' was not declared. Should it be static?

Those variables are not used outside of papr_scm.c, so this commit marks them static.

Fixes: 85343a8d ("powerpc/papr/scm: Add bad memory ranges to nvdimm bad ranges")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200725091949.75234-1-weiyongjun1@huawei.com
-
Committed by Bill Wendling

Clang's objdump emits slightly different output from GNU's objdump, causing a list of warnings to be emitted during relocatable builds. E.g., clang's objdump emits this:

  c000000000000004: 2c 00 00 48   b  0xc000000000000030
  ...
  c000000000005c6c: 10 00 82 40   bf 2, 0xc000000000005c7c

while GNU objdump emits:

  c000000000000004: 2c 00 00 48   b    c000000000000030 <__start+0x30>
  ...
  c000000000005c6c: 10 00 82 40   bne  c000000000005c7c <masked_interrupt+0x3c>

Adjust llvm-objdump's output to remove the extraneous '0x' and convert 'bf' and 'bt' to 'bne' and 'beq' respectively, to more closely match GNU objdump's output. Note that clang's objdump doesn't yet output the relocation symbols on PPC.

Signed-off-by: Bill Wendling <morbo@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/191c67db31264b69cf6b566fd69851beb3dd0abb.1595630874.git.morbo@google.com
-
Committed by Nicholas Piggin

This implements smp_cond_load_relaxed() with the slow-path busy loop using the preferred SMT priority pattern.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Waiman Long <longman@redhat.com>
[mpe: Make it 64-bit only to fix build errors on 32-bit]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724131423.1362108-7-npiggin@gmail.com
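The pattern wraps the polling loop in spin_begin()/spin_end(), which drop and restore SMT thread priority (HMT_low()/HMT_medium()); a sketch of the shape:

  /* Sketch: poll at low SMT priority so a busy-waiting thread cedes
   * execution resources to its hardware siblings. */
  #define smp_cond_load_relaxed(ptr, cond_expr) ({        \
          typeof(ptr) __PTR = (ptr);                      \
          __unqual_scalar_typeof(*ptr) VAL;               \
          VAL = READ_ONCE(*__PTR);                        \
          if (unlikely(!(cond_expr))) {                   \
                  spin_begin();   /* HMT_low() */         \
                  do {                                    \
                          VAL = READ_ONCE(*__PTR);        \
                  } while (!(cond_expr));                 \
                  spin_end();     /* HMT_medium() */      \
          }                                               \
          (typeof(*ptr))VAL;                              \
  })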
-
Committed by Nicholas Piggin

This brings the behaviour of the uncontended fast path back to roughly equivalent to simple spinlocks -- a single atomic op with lock hint.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724131423.1362108-6-npiggin@gmail.com
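A sketch of the fast path being described (simplified; atomic_try_cmpxchg_lock here stands for the arch helper that issues the larx with a lock hint):

  /* Uncontended acquire is a single compare-and-swap; only contended
   * acquires fall through to the queueing slow path. */
  static __always_inline void queued_spin_lock(struct qspinlock *lock)
  {
          u32 val = 0;

          if (likely(atomic_try_cmpxchg_lock(&lock->val, &val, _Q_LOCKED_VAL)))
                  return;
          queued_spin_lock_slowpath(lock, val);
  }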
-
Committed by Nicholas Piggin

This implements the generic paravirt qspinlocks using H_PROD and H_CONFER to kick and wait.

This uses an un-directed yield to any CPU, rather than the directed yield to a pre-empted lock holder that paravirtualised simple spinlocks use (which requires no kick hcall). This is something that could be investigated and improved in future.

Performance results can be found in the commit which added queued spinlocks.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724131423.1362108-5-npiggin@gmail.com
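A hedged sketch of the kick/wait primitives this maps onto (the helper names are assumptions; H_PROD wakes a specific vCPU, H_CONFER with -1 yields to any runnable vCPU):

  /* Kick: wake the vCPU that owns the next waiter. */
  static inline void prod_cpu(int cpu)
  {
          plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
  }

  /* Wait: give up the rest of our timeslice to any runnable vCPU. */
  static inline void yield_to_any(void)
  {
          plpar_hcall_norets(H_CONFER, -1, 0);
  }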
-