- 02 Aug 2018, 1 commit
-
-
Committed by Christoph Hellwig

Instead of duplicating the source statements in every architecture, just do it once in the top-level Kconfig file. Note that with this the inclusion of arch/$(SRCARCH)/Kconfig moves out of the top-level Kconfig into arch/Kconfig, so that we don't violate ordering constraints while keeping a sensible menu structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 13 Jun 2018, 1 commit
-
-
Committed by Sebastian Andrzej Siewior

Alpha provides a custom implementation of dec_and_lock(). The function is split into two parts:

- atomic_add_unless() + return 0 (fast path in assembly)
- remaining part including locking (slow path in C)

Comparing the result of the alpha implementation with the generic implementation compiled by gcc, it looks like the fast path is optimized by avoiding a stack frame (and reloading the GP), register stores and so on; this is only done in the slow path. After marking the slow path (atomic_dec_and_lock_1()) as "noinline" and doing the slow path in C (the atomic_add_unless(atomic, -1, 1) part) I noticed differences in the resulting assembly:

- the GP is still reloaded
- atomic_add_unless() adds more memory barriers compared to the custom assembly
- the custom assembly here does "load, sub, beq" while atomic_add_unless() does "load, cmpeq, add, bne". This is okay because it compares against zero after subtraction, while the generic code compares against 1 before.

I'm not sure if avoiding the stack frame (and GP reloading) brings a lot in terms of performance. Regarding the different barriers, Peter Zijlstra says:

|refcount decrement needs to be a RELEASE operation, such that all the
|load/stores to the object happen before we decrement the refcount.
|
|Otherwise things like:
|
|	obj->foo = 5;
|	refcnt_dec(&obj->ref);
|
|can be re-ordered, which then allows fun scenarios like:
|
|	CPU0				CPU1
|
|	refcnt_dec(&obj->ref);
|					if (dec_and_test(&obj->ref))
|						free(obj);
|	obj->foo = 5; // oops UaF
|
|This means (for alpha) that there should be a memory barrier _before_
|the decrement, however the dec_and_lock asm thing only has one _after_,
|which, per the above, is too late.
|
|The generic version using add_unless will result in memory barrier
|before and after (because that is the rule for atomic ops with a return
|value) which is strictly too many barriers for the refcount story, but
|who knows what other ordering requirements code has.

Remove the custom alpha implementation of dec_and_lock(), and if it is an issue (performance wise) then the fast path could still be inlined.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: linux-alpha@vger.kernel.org
Link: https://lkml.kernel.org/r/20180606115918.GG12198@hirez.programming.kicks-ass.net
Link: https://lkml.kernel.org/r20180612161621.22645-2-bigeasy@linutronix.de
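For reference, the generic path this commit switches alpha over to has roughly this shape (a sketch of the lib/dec_and_lock.c logic, not alpha's removed assembly): atomic_add_unless() covers the common case, and only a final decrement from 1 takes the lock.

```c
#include <linux/spinlock.h>
#include <linux/atomic.h>

int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
	/* Fast path: decrement unless the counter would drop to zero. */
	if (atomic_add_unless(atomic, -1, 1))
		return 0;

	/* Slow path: counter was 1, so take the lock and re-check. */
	spin_lock(lock);
	if (atomic_dec_and_test(atomic))
		return 1;	/* caller holds the lock */
	spin_unlock(lock);
	return 0;
}
```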
-
- 01 Jun 2018, 1 commit
-
-
Committed by Luc Van Oostenryck

By default, sparse assumes a 64-bit machine when compiled on x86-64 and 32-bit when compiled on anything else. This can of course create all sorts of problems for the other archs, like issuing false warnings ('shift too big (32) for type unsigned long'), or worse, failing to emit legitimate warnings.

Fix this by adding the -m32/-m64 flag, depending on CONFIG_64BIT, to CHECKFLAGS in the main Makefile (and so for all archs). Also, remove the now unneeded -m32/-m64 in arch-specific Makefiles.

Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
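As an illustration of the false positive being fixed (hypothetical example, not from the patch): on a 64-bit target this shift is perfectly legal, but a sparse run that assumes a 32-bit machine flags it.

```c
/* Legal when sizeof(unsigned long) == 8; sparse defaulting to a 32-bit
 * machine reports "shift too big (32) for type unsigned long" here. */
static unsigned long high_half(unsigned long v)
{
	return v >> 32;
}
```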
-
- 23 May 2018, 3 commits
-
-
Committed by Sinan Kaya

memory-barriers.txt has been updated with the following requirement:

"When using writel(), a prior wmb() is not needed to guarantee that the cache coherent memory writes have completed before writing to the MMIO region."

Current writeX() and iowriteX() implementations on alpha do not satisfy this requirement, as the barrier is after the register write. Move the mb() in the writeX() and iowriteX() functions to guarantee that HW observes memory changes before performing register operations.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Matt Turner <mattst88@gmail.com>
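A minimal sketch of the barrier movement (simplified; the real alpha accessors go through the io machinery): the mb() is issued before the MMIO store rather than after it.

```c
#include <linux/types.h>
#include <asm/barrier.h>

/* Sketched shape of the fixed accessor, not the exact alpha macro. */
static inline void writel_sketch(u32 val, volatile void __iomem *addr)
{
	mb();					/* order prior coherent-memory writes */
	*(volatile u32 __force *)addr = val;	/* then hit the device register */
}
```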
-
Committed by Christoph Hellwig

Remove the dma_ops indirection.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Matt Turner <mattst88@gmail.com>
-
Committed by Christoph Hellwig

The generic dma_direct implementation does the same thing as the alpha pci-noop implementation, just with more bells and whistles. And unlike the current code it at least has a theoretical chance to actually compile.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Matt Turner <mattst88@gmail.com>
-
- 09 May 2018, 4 commits
-
-
Committed by Christoph Hellwig

Define this symbol if the architecture either uses 64-bit pointers or PHYS_ADDR_T_64BIT is set. This covers 95% of the old arch magic. We only need an additional select for Xen on ARM (why, anyway?), and we now always set ARCH_DMA_ADDR_T_64BIT on mips boards with 64-bit physical addressing, instead of only doing it when highmem is set.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: James Hogan <jhogan@kernel.org>
-
Committed by Christoph Hellwig

This way we have one central definition of it, and users can select it as needed. Note that we now also always select it when CONFIG_DMA_API_DEBUG is selected, which fixes some incorrect checks in a few network drivers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
-
Committed by Christoph Hellwig

This way we have one central definition of it, and users can select it as needed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
-
Committed by Christoph Hellwig

This avoids selecting IOMMU_HELPER just for this function. And we only use it once or twice in normal builds, so this often even is a size reduction.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 07 May 2018, 1 commit
-
-
Committed by Christoph Hellwig

This was used by the ide, scsi and networking code in the past to determine if they should bounce payloads. Now that the dma mapping always has to support dma to all physical memory (thanks to swiotlb for non-iommu systems), there is no need for this crude hack any more.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Palmer Dabbelt <palmer@sifive.com> (for riscv)
Reviewed-by: Jens Axboe <axboe@kernel.dk>
-
- 25 Apr 2018, 5 commits
-
-
Committed by Eric W. Biederman

Filling in struct siginfo before calling force_sig_info is a tedious and error-prone process, where once in a great while the wrong fields are filled out, and siginfo has been inconsistently cleared. Simplify this process by using the helper force_sig_fault, which takes as parameters all of the information it needs, ensures all of the fiddly bits of filling in struct siginfo are done properly, and then calls force_sig_info. In short, about a 5-line reduction in code for every time force_sig_info is called, which makes the calling function clearer.

Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: linux-alpha@vger.kernel.org
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
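A hedged before/after sketch of such a conversion. The function and the `address` variable are hypothetical, and the generic 4-argument form of the era is shown; alpha's actual calls also carry a trap number, since alpha defines __ARCH_SI_TRAPNO.

```c
#include <linux/sched/signal.h>

/* Hypothetical fault-handler tail, before and after the conversion. */
static void report_fault_sketch(unsigned long address)
{
	/* Before: every call site fills siginfo by hand. */
	struct siginfo info;

	clear_siginfo(&info);
	info.si_signo = SIGSEGV;
	info.si_errno = 0;
	info.si_code  = SEGV_MAPERR;
	info.si_addr  = (void __user *)address;
	force_sig_info(SIGSEGV, &info, current);

	/* After: one call; the helper fills the fiddly bits. */
	force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)address, current);
}
```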
-
Committed by Eric W. Biederman

Filling in struct siginfo before calling send_sig_info is a tedious and error-prone process, where once in a great while the wrong fields are filled out, and siginfo has been inconsistently cleared. Simplify this process by using the helper send_sig_fault, which takes as parameters all of the information it needs, ensures all of the fiddly bits of filling in struct siginfo are done properly, and then calls send_sig_info. In short, about a 5-line reduction in code for every time send_sig_info is called, which makes the calling function clearer.

Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: linux-alpha@vger.kernel.org
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-
Committed by Eric W. Biederman

Using an si_code of 0 that aliases with SI_USER is clearly the wrong thing to do, and causes problems in interesting ways. It really is not clear to me whether using TRAP_UNK for bugcheck, or for the default case of gentrap, is the best way to handle things; there is certainly enough information that a more specific si_code could potentially be used. That said, TRAP_UNK is definitely an improvement over 0, as it removes the ambiguity of what an si_code of 0 with SIGTRAP means on alpha.

Recent history suggests no one actually cares about crazy corner cases of kernel behavior like this, so I don't expect any regressions from changing this. However, if something does happen, this change is easy to revert.

Cc: Helge Deller <deller@gmx.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: linux-alpha@vger.kernel.org
Fixes: 0a635c7a84cf ("Fill in siginfo_t.")
History Tree: https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-
Committed by Eric W. Biederman

Using an si_code of 0 that aliases with SI_USER is clearly the wrong thing to do, and causes problems in interesting ways. The newly defined FPE_FLTUNK semantically appears to fit the bill, so use it instead. Given recent experience in this area, odds are it will not break anything, and fixing it removes a hazard to kernel maintenance.

Cc: Helge Deller <deller@gmx.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: linux-alpha@vger.kernel.org
History Tree: https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git
Fixes: 0a635c7a84cf ("Fill in siginfo_t.")
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-
Committed by Eric W. Biederman

Call clear_siginfo to ensure every stack-allocated siginfo is properly initialized before being passed to the signal sending functions. Note: it is not safe to depend on C initializers to initialize struct siginfo on the stack, because C is allowed to skip holes when initializing a structure.

The initialization of struct siginfo in tracehook_report_syscall_exit was moved from the helper user_single_step_siginfo into tracehook_report_syscall_exit itself, to make it clear that the local variable siginfo gets fully initialized. In a few cases the scope of struct siginfo has been reduced, to make it clear that siginfo is not used on other paths in the function in which it is declared. Instances of using memset to initialize siginfo have been replaced with calls to clear_siginfo for clarity.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
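The padding-hole point is worth a short illustration (a sketch; the wrapper function is hypothetical):

```c
#include <linux/signal.h>

static void init_siginfo_sketch(void)
{
	/* Risky: a designated initializer must zero all members, but the
	 * compiler may leave padding holes untouched, and siginfo is later
	 * copied to userspace byte-for-byte. */
	struct siginfo bad = { .si_signo = SIGTRAP };

	/* Safe: clear_siginfo() zeroes the whole object, padding included. */
	struct siginfo good;

	clear_siginfo(&good);
	good.si_signo = SIGTRAP;

	(void)bad; (void)good;
}
```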
-
- 20 Apr 2018, 1 commit
-
-
Committed by Arnd Bergmann

The alpha ipcbuf/msgbuf/sembuf/shmbuf header files are all identical to the versions from asm-generic. This patch removes the files and replaces them with 'generic-y' statements as part of the y2038 series. Since there is no 32-bit syscall support for alpha, we don't need the other changes, but it's good to clean this up anyway.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 19 Apr 2018, 1 commit
-
-
Committed by Arnd Bergmann

We have a couple of files that try to include asm/compat.h on architectures where this is available. Those should generally use the higher-level linux/compat.h file, but that in turn fails to include asm/compat.h when CONFIG_COMPAT is disabled, unless we can provide that header on all architectures. This adds the asm/compat.h for all remaining architectures to simplify the dependencies.

Architectures that are getting removed in linux-4.17 are not changed here, to avoid needless conflicts with the removal patches. Those architectures are broken by this patch, but we have already shown that they have no users.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 18 Apr 2018, 1 commit
-
-
Committed by Eric W. Biederman

Setting si_code to 0 is the same as setting si_code to SI_USER. POSIX and common sense require that SI_USER not be a signal-specific si_code. As such, this use of 0 for the si_code is a pretty horribly broken ABI.

Cc: Helge Deller <deller@gmx.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: linux-alpha@vger.kernel.org
History Tree: https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git
Ref: 0a635c7a84cf ("Fill in siginfo_t.")
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-
- 17 Apr 2018, 1 commit
-
-
Committed by Mike Rapoport

Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 12 Apr 2018, 1 commit
-
-
Committed by Michal Hocko

Patch series "mm: introduce MAP_FIXED_NOREPLACE", v2.

This has started as a follow-up discussion [3][4] resulting in the runtime failure caused by the hardening patch [5], which removes MAP_FIXED from the elf loader, because MAP_FIXED is inherently dangerous as it might silently clobber an existing underlying mapping (e.g. the stack). The reason for the failure is that some architectures enforce an alignment for the given address hint without MAP_FIXED used (e.g. for shared or file-backed mappings).

One way around this would be excluding those archs which do alignment tricks from the hardening [6]. The patch is really trivial, but it has been objected, rightfully so, that this screams for a more generic solution. We basically want a non-destructive MAP_FIXED.

The first patch introduces MAP_FIXED_NOREPLACE, which enforces the given address but, unlike MAP_FIXED, fails with EEXIST if the given range conflicts with an existing one. The flag is introduced as a completely new one rather than a MAP_FIXED extension because of backward compatibility. We really want a never-clobber semantic even on older kernels which do not recognize the flag. Unfortunately mmap sucks wrt flags evaluation, because we do not EINVAL on unknown flags. On those kernels we would simply use the traditional hint-based semantic, so the caller can still get a different address (which sucks), but at least not silently corrupt an existing mapping. I do not see a good way around that, except not exposing the new semantic to userspace at all.

It seems there are users who would like to have something like that. Jemalloc has been mentioned by Michael Ellerman [7]. Florian Weimer has mentioned the following:

: glibc ld.so currently maps DSOs without hints. This means that the kernel
: will map right next to each other, and the offsets between them a completely
: predictable. We would like to change that and supply a random address in a
: window of the address space. If there is a conflict, we do not want the
: kernel to pick a non-random address. Instead, we would try again with a
: random address.

John Hubbard has mentioned a CUDA example:

: a) Searches /proc/<pid>/maps for a "suitable" region of available
: VA space. "Suitable" generally means it has to have a base address
: within a certain limited range (a particular device model might
: have odd limitations, for example), it has to be large enough, and
: alignment has to be large enough (again, various devices may have
: constraints that lead us to do this).
:
: This is of course subject to races with other threads in the process.
:
: Let's say it finds a region starting at va.
:
: b) Next it does:
:     p = mmap(va, ...)
:
: *without* setting MAP_FIXED, of course (so va is just a hint), to
: attempt to safely reserve that region. If p != va, then in most cases,
: this is a failure (almost certainly due to another thread getting a
: mapping from that region before we did), and so this layer now has to
: call munmap(), before returning a "failure: retry" to upper layers.
:
: IMPROVEMENT: --> if instead, we could call this:
:
:     p = mmap(va, ... MAP_FIXED_NOREPLACE ...)
:
: , then we could skip the munmap() call upon failure. This
: is a small thing, but it is useful here. (Thanks to Piotr
: Jaroszynski and Mark Hairgrove for helping me get that detail
: exactly right, btw.)
:
: c) After that, CUDA suballocates from p, via:
:
:     q = mmap(sub_region_start, ... MAP_FIXED ...)
:
: Interestingly enough, "freeing" is also done via MAP_FIXED, and
: setting PROT_NONE to the subregion.
:
: Anyway, I just included (c) for general interest.

Atomic address range probing in multithreaded programs in general sounds like an interesting thing to me.

The second patch simply replaces the MAP_FIXED use in the elf loader by MAP_FIXED_NOREPLACE. I believe other places which rely on MAP_FIXED should follow; actually, real MAP_FIXED usages should be documented properly, and they should be more of an exception.

[1] http://lkml.kernel.org/r/20171116101900.13621-1-mhocko@kernel.org
[2] http://lkml.kernel.org/r/20171129144219.22867-1-mhocko@kernel.org
[3] http://lkml.kernel.org/r/20171107162217.382cd754@canb.auug.org.au
[4] http://lkml.kernel.org/r/1510048229.12079.7.camel@abdul.in.ibm.com
[5] http://lkml.kernel.org/r/20171023082608.6167-1-mhocko@kernel.org
[6] http://lkml.kernel.org/r/20171113094203.aofz2e7kueitk55y@dhcp22.suse.cz
[7] http://lkml.kernel.org/r/87efp1w7vy.fsf@concordia.ellerman.id.au

This patch (of 2):

MAP_FIXED is used quite often to enforce mapping at a particular range. The main problem of this flag is, however, that it is inherently dangerous, because it unmaps existing mappings covered by the requested range. This can cause silent memory corruptions, some of them even with serious security implications. While the current semantic might be really desirable in many cases, there are others which would want to enforce the given range but rather see a failure than a silent memory corruption on a clashing range. Please note that there is no guarantee that a given range is obeyed by the mmap even when it is free; e.g. arch-specific code is allowed to apply an alignment.

Introduce a new MAP_FIXED_NOREPLACE flag for mmap to achieve this behavior. It has the same semantic as MAP_FIXED wrt. the given address request, with a single exception: it fails with EEXIST if the requested address is already covered by an existing mapping. We still rely on get_unmapped_area to handle all the arch-specific MAP_FIXED treatment, and check for a conflicting vma after it returns.

The flag is introduced as a completely new one rather than a MAP_FIXED extension because of backward compatibility. We really want a never-clobber semantic even on older kernels which do not recognize the flag. Unfortunately mmap sucks wrt. flags evaluation, because we do not EINVAL on unknown flags. On those kernels we would simply use the traditional hint-based semantic, so the caller can still get a different address (which sucks), but at least not silently corrupt an existing mapping. I do not see a good way around that.

[mpe@ellerman.id.au: fix whitespace]
[fail on clashing range with EEXIST as per Florian Weimer]
[set MAP_FIXED before round_hint_to_min as per Khalid Aziz]
Link: http://lkml.kernel.org/r/20171213092550.2774-2-mhocko@kernel.org
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Russell King - ARM Linux <linux@armlinux.org.uk>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Joel Stanley <joel@jms.id.au>
Cc: Kees Cook <keescook@chromium.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Jason Evans <jasone@google.com>
Cc: David Goldblatt <davidtgoldblatt@gmail.com>
Cc: Edward Tomasz Napierała <trasz@FreeBSD.org>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
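For illustration, a minimal userspace sketch of the intended calling pattern (the fallback #define value and the p != hint check are assumptions covering older kernels that silently ignore the unknown flag bit):

```c
#include <sys/mman.h>
#include <errno.h>
#include <stddef.h>

#ifndef MAP_FIXED_NOREPLACE
#define MAP_FIXED_NOREPLACE 0x100000	/* value on most architectures */
#endif

/* Try to reserve [hint, hint+len) without clobbering existing mappings. */
static void *reserve_at(void *hint, size_t len)
{
	void *p = mmap(hint, len, PROT_NONE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
		       -1, 0);

	if (p == MAP_FAILED)
		return NULL;		/* EEXIST: range already occupied */
	if (p != hint) {		/* old kernel fell back to hint semantics */
		munmap(p, len);
		return NULL;
	}
	return p;
}
```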
-
- 08 Apr 2018, 4 commits
-
-
Committed by Sinan Kaya

memory-barriers.txt has been updated with the following requirement:

"When using writel(), a prior wmb() is not needed to guarantee that the cache coherent memory writes have completed before writing to the MMIO region."

Current writeX() and iowriteX() implementations on alpha do not satisfy this requirement, as the barrier is after the register write. Move the mb() in the writeX() and iowriteX() functions to guarantee that HW observes memory changes before performing register operations.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Matt Turner <mattst88@gmail.com>
-
Committed by Michael Cree

Implement the CPU vulnerability show functions for meltdown, spectre_v1 and spectre_v2 on Alpha. Tests on an XP1000 (EV67/667MHz) and an ES45 (EV68CB/1.25GHz) show them to be vulnerable to Meltdown and Spectre V1. In the case of Meltdown, I saw a 1 to 2% success rate in reading bytes on the XP1000 and a 50 to 60% success rate on the ES45. (This compares to the 99.97% success reported for Intel CPUs.) Report EV6 and later CPUs as vulnerable. Tests on a PWS600au (EV56/600MHz) for the Spectre V1 attack were unsuccessful (though I did not try particularly hard), so mark EV4 through EV56 as not vulnerable.

Signed-off-by: Michael Cree <mcree@orcon.net.nz>
Signed-off-by: Matt Turner <mattst88@gmail.com>
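Such show functions have the standard sysfs attribute shape; a hedged sketch of one of them (cpu_show_meltdown overrides a weak default in drivers/base/cpu.c; the EV6 test is paraphrased and the helper name is hypothetical):

```c
#include <linux/cpu.h>
#include <linux/device.h>

ssize_t cpu_show_meltdown(struct device *dev,
			  struct device_attribute *attr, char *buf)
{
	if (alpha_implver_is_ev6_or_later())	/* hypothetical helper */
		return sprintf(buf, "Vulnerable\n");
	return sprintf(buf, "Not affected\n");
}
```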
-
Committed by Alexandre Belloni

The RTC core always calls rtc_valid_tm after the read_time callback, so it is not necessary to call it just before returning from the callback.

Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Matt Turner <mattst88@gmail.com>
-
Committed by Alexandre Belloni

The .set_mmss and .set_mmss64 ops are only called when the RTC does not provide an implementation of the .set_time callback. On alpha, .set_time is provided, so .set_mmss64 is never called. Remove the unused code.

Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Matt Turner <mattst88@gmail.com>
-
- 03 Apr 2018, 1 commit
-
-
Committed by Dominik Brodowski

Using this helper allows us to avoid the in-kernel calls to the sys_mmap_pgoff() syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall; in particular, it uses the same calling convention as sys_mmap_pgoff().

This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
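In practice the conversion is mechanical; a sketch of what an alpha-style call site becomes (the syscall name is altered and the real code's validity checks on off are omitted):

```c
#include <linux/syscalls.h>
#include <linux/mm.h>

SYSCALL_DEFINE6(osf_mmap_sketch, unsigned long, addr, unsigned long, len,
		unsigned long, prot, unsigned long, flags, unsigned long, fd,
		unsigned long, off)
{
	/* ksys_mmap_pgoff() is the in-kernel twin of sys_mmap_pgoff(). */
	return ksys_mmap_pgoff(addr, len, prot, flags, fd, off >> PAGE_SHIFT);
}
```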
-
- 28 Mar 2018, 2 commits
-
-
Committed by Al Viro

It used to clear a3, so that signal handling on return to userland would have passed zero r0 to do_work_pending(), preventing the syscall restart logic from triggering. It had been pointless all along, since we only go there after a successful do_execve(), which clears regs->r0 on alpha and thus prevents the syscall restart logic just fine; no extra help needed.

Good thing, that, since back in 2012 do_work_pending() lost its second argument, shifting the register used to pass that thing from a3 to a2. The commit that did that adjusted the entry.S code accordingly, but missed this one spot. As a result, we were left with a useless insn in ret_from_kernel_thread and a confusing comment to go with it. Get rid of both...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 17 Mar 2018, 1 commit
-
-
Committed by Peter Zijlstra

Mark noticed that the change to sibling_list changed some iteration semantics: because previously we used group_list as the list entry, sibling events would always have an empty sibling_list. But because we now use sibling_list for both list head and list entry, siblings will report as having siblings. Fix this with a custom for_each_sibling_event() iterator.

Fixes: 8343aae6 ("perf/core: Remove perf_event::group_entry")
Reported-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: vincent.weaver@maine.edu
Cc: alexander.shishkin@linux.intel.com
Cc: torvalds@linux-foundation.org
Cc: alexey.budankov@linux.intel.com
Cc: valery.cherepennikov@intel.com
Cc: eranian@google.com
Cc: acme@redhat.com
Cc: linux-tip-commits@vger.kernel.org
Cc: davidcc@google.com
Cc: kan.liang@intel.com
Cc: Dmitry.Prohorov@intel.com
Cc: jolsa@redhat.com
Link: https://lkml.kernel.org/r/20180315170129.GX4043@hirez.programming.kicks-ass.net
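The iterator is essentially a guarded list walk; a sketch close to the macro this commit adds:

```c
#include <linux/list.h>

/* Only the group leader's sibling_list is a list head, so guard the
 * walk: for a sibling event the condition fails and the body never
 * runs, restoring the "siblings have no siblings" semantics. */
#define for_each_sibling_event(sibling, event)				\
	if ((event)->group_leader == (event))				\
		list_for_each_entry((sibling), &(event)->sibling_list,	\
				    sibling_list)
```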
-
- 16 Mar 2018, 1 commit
-
-
Committed by Peter Zijlstra

Mark noticed that the change to sibling_list changed some iteration semantics: because previously we used group_list as the list entry, sibling events would always have an empty sibling_list. But because we now use sibling_list for both list head and list entry, siblings will report as having siblings. Fix this with a custom for_each_sibling_event() iterator.

Fixes: 8343aae6 ("perf/core: Remove perf_event::group_entry")
Reported-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: vincent.weaver@maine.edu
Cc: alexander.shishkin@linux.intel.com
Cc: torvalds@linux-foundation.org
Cc: alexey.budankov@linux.intel.com
Cc: valery.cherepennikov@intel.com
Cc: eranian@google.com
Cc: acme@redhat.com
Cc: linux-tip-commits@vger.kernel.org
Cc: davidcc@google.com
Cc: kan.liang@intel.com
Cc: Dmitry.Prohorov@intel.com
Cc: jolsa@redhat.com
Link: https://lkml.kernel.org/r/20180315170129.GX4043@hirez.programming.kicks-ass.net
-
- 12 Mar 2018, 2 commits
-
-
Committed by Peter Zijlstra

Now that all the grouping is done with RB trees, we no longer need group_entry and can replace the whole thing with sibling_list.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Carrillo-Cisneros <davidcc@google.com>
Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andrea Parri

The following two commits:

  79d44246 ("locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB")
  472e8c55 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs")

... ended up adding unnecessary barriers to the _local() variants on Alpha, which the previous code took care to avoid. Fix them by adding the smp_mb() into the cmpxchg() macro rather than into the ____cmpxchg() variants.

Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-alpha@vger.kernel.org
Fixes: 472e8c55 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs")
Fixes: 79d44246 ("locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB")
Link: http://lkml.kernel.org/r/1519704058-13430-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
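A simplified sketch of where the barriers end up (paraphrasing the alpha cmpxchg header, not verbatim: the real macro routes through typed casts): the fully ordered macro wraps the underlying __cmpxchg() with smp_mb() on both sides, while the _local() variant expands it directly and stays barrier-free.

```c
#define cmpxchg(ptr, o, n)						\
({									\
	__typeof__(*(ptr)) __ret;					\
	smp_mb();	/* order prior accesses before the RMW */	\
	__ret = __cmpxchg((ptr), (o), (n), sizeof(*(ptr)));		\
	smp_mb();	/* order the RMW before later accesses */	\
	__ret;								\
})

#define cmpxchg_local(ptr, o, n)					\
	__cmpxchg((ptr), (o), (n), sizeof(*(ptr)))	/* no barriers */
```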
-
- 23 Feb 2018, 2 commits
-
-
Committed by Andrea Parri

Successful RMW operations are supposed to be fully ordered, but Alpha's xchg() and cmpxchg() do not meet this requirement. Will Deacon noticed the bug:

> So MP using xchg:
>
>	WRITE_ONCE(x, 1)
>	xchg(y, 1)
>
>	smp_load_acquire(y) == 1
>	READ_ONCE(x) == 0
>
> would be allowed.

... which thus violates the above requirement. Fix it by adding a leading smp_mb() to the xchg() and cmpxchg() implementations.

Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-alpha@vger.kernel.org
Link: http://lkml.kernel.org/r/1519291488-5752-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andrea Parri

Replace each occurrence of __ASM__MB with a (trailing) smp_mb() in xchg() and cmpxchg(), and remove the now unused __ASM__MB definitions; this improves readability, with no additional synchronization cost.

Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-alpha@vger.kernel.org
Link: http://lkml.kernel.org/r/1519291469-5702-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 21 Feb 2018, 1 commit
-
-
Committed by Andrea Parri

Continuing along with the fight against smp_read_barrier_depends() [1] (or rather, against its improper use), add an unconditional barrier to cmpxchg. This guarantees that dependency ordering is preserved when a dependency is headed by an unsuccessful cmpxchg. As it turns out, the change could enable further simplification of LKMM as proposed in [2].

[1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
    https://marc.info/?l=linux-kernel&m=150884946319353&w=2
    https://marc.info/?l=linux-kernel&m=151215810824468&w=2
    https://marc.info/?l=linux-kernel&m=151215816324484&w=2
[2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2

Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-alpha@vger.kernel.org
Link: http://lkml.kernel.org/r/1519152356-4804-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 26 Jan 2018, 2 commits
-
-
Committed by Arnd Bergmann

Some of the syscall helper functions (do_utimes, poll_select_set_timeout, core_sys_select) have changed over the past year or two to use 'timespec64' pointers rather than 'timespec'. This was fine on alpha, since 64-bit architectures treat the two as the same type. However, I'd like to change that behavior and make 'timespec64' a proper type of its own even on 64-bit architectures, and that will introduce harmless type mismatch warnings here. Also, I'm trying to kill off the do_gettimeofday() helper in favor of ktime_get() and related interfaces throughout the kernel.

This changes the get_tv32/put_tv32 helper functions to also take a timespec64 argument rather than timeval, which allows us to simplify some of the syscall helpers a bit and avoid the type warnings. For the moment, wait4 and adjtimex are still better off with the old behavior, so I'm adding a special put_tv_to_tv32() helper for those.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
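A sketch of the converted helper, close to (but simplified from) the arch/alpha/kernel/osf_sys.c shape: kernel-internal time stays timespec64, and only the copy-out narrows to the OSF/1 user-visible layout.

```c
#include <linux/time64.h>
#include <linux/uaccess.h>

struct timeval32 {		/* OSF/1 user-visible layout */
	int tv_sec, tv_usec;
};

static int put_tv32(struct timeval32 __user *o, struct timespec64 *i)
{
	/* Narrow seconds and convert nanoseconds on the way out. */
	return copy_to_user(o, &(struct timeval32){
					.tv_sec  = i->tv_sec,
					.tv_usec = i->tv_nsec / NSEC_PER_USEC },
			    sizeof(struct timeval32));
}
```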
-
Committed by Arnd Bergmann

There was a typo in the new version of put_tv32() that caused an unguarded access of a user space pointer, and failed to return the correct result in gettimeofday(), wait4(), usleep_thread() and old_adjtimex(). This fixes it to give the correct behavior again.

Cc: stable@vger.kernel.org
Fixes: 1cc6c463 ("osf_sys.c: switch handling of timeval32/itimerval32 to copy_{to,from}_user()")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 21 Jan 2018, 3 commits
-
-
Committed by Mikulas Patocka

On alpha, a process will crash if it attempts to start a thread and a signal is delivered at the same time. The crash can be reproduced with this program: https://cygwin.com/ml/cygwin/2014-11/msg00473.html

The reason for the crash is this:

* we call the clone syscall
* we go to the function copy_process
* copy_process calls copy_thread_tls, which is a wrapper around copy_thread
* copy_thread sets the tls pointer: childti->pcb.unique = regs->r20
* copy_thread sets regs->r20 to zero
* we go back to copy_process
* copy_process checks "if (signal_pending(current))" and returns -ERESTARTNOINTR
* the clone syscall is restarted, but this time, regs->r20 is zero, so the new thread is created with a zero tls pointer
* the new thread crashes in start_thread when attempting to access tls

The comment in the code says that setting the register r20 is some compatibility with OSF/1. But OSF/1 doesn't use the CLONE_SETTLS flag, so we don't have to zero r20 if CLONE_SETTLS is set. This patch fixes the bug by zeroing regs->r20 only if CLONE_SETTLS is not set.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Matt Turner <mattst88@gmail.com>
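The fix itself is a small conditional in copy_thread; a sketch close to the committed change:

```c
	/* Consume r20 as the TLS pointer only when the caller asked for
	 * it; otherwise leave it intact, so a clone() restarted by a
	 * pending signal still carries its original r20. */
	if (clone_flags & CLONE_SETTLS)
		childti->pcb.unique = regs->r20;
	else
		regs->r20 = 0;	/* OSF/1 has some strange syscall convention */
```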
-
Committed by Mikulas Patocka

Since version 4.9, the kernel automatically breaks printk calls up onto multiple lines unless pr_cont is used. Fix the alpha stacktrace code so that it prints the stack trace in four columns, as originally intended.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org # v4.9+
Signed-off-by: Matt Turner <mattst88@gmail.com>
-
Committed by Mikulas Patocka

We need to define NEED_SRM_SAVE_RESTORE on the Avanti, otherwise we get a machine check exception when attempting to reboot the machine.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Matt Turner <mattst88@gmail.com>
-