- 18 November 2014, 2 commits
-
-
Submitted by Dave Hansen

The previous patch allocates bounds tables on demand. As noted in an earlier description, these can add up to *HUGE* amounts of memory, and this has caused OOMs in practice when running tests.

This patch adds support for freeing bounds tables when they are no longer in use. There are two types of mappings in play when unmapping tables:

    1. The mapping with the actual data, which userspace is
       munmap()ing or brk()ing away, etc...
    2. The mapping for the bounds table *backing* the data
       (tagged with VM_MPX; see the patch "add MPX specific
       mmap interface").

If userspace uses the prctl() introduced earlier in this patch set to enable kernel management of bounds tables, then when it unmaps the first type of mapping (the actual data), the kernel needs to free the mapping for the bounds table backing that data. This patch hooks in at the very end of do_munmap() to do so. We look at the addresses being unmapped and find the bounds directory entries and tables which cover those addresses. If an entire table is unused, we clear the associated directory entry and free the table.

Once we unmap a bounds table, we are left with a bounds directory entry pointing at empty address space. That address space might now be allocated for some other (random) use, and the MPX hardware might then try to walk it as if it were a bounds table. That would be bad. So any unmapping of an entire bounds table has to be accompanied by a corresponding write to the bounds directory entry to invalidate it. That write to the bounds directory can fault, which causes the following problem: since we are doing the freeing from munmap() (and other paths like it), we hold mmap_sem for write. If we fault, the page fault handler will attempt to acquire mmap_sem for read and we will deadlock. To avoid the deadlock, we pagefault_disable() when touching the bounds directory entry and use get_user_pages() to resolve the fault.

The unmapping of bounds tables happens under vm_munmap(), and we also (indirectly) call vm_munmap() to _do_ the unmapping of the bounds tables. We avoid unbounded recursion by disallowing the freeing of bounds tables *for* bounds tables. This would not occur normally, so it should have no practical impact. Being strict about it here helps ensure that we do not have an exploitable stack overflow.

Based-on-patch-by: Qiaowei Ren <qiaowei.ren@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: linux-mm@kvack.org Cc: linux-mips@linux-mips.org Cc: Dave Hansen <dave@sr71.net> Link: http://lkml.kernel.org/r/20141114151831.E4531C4A@viggo.jf.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
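A minimal sketch of the fault-handling pattern described above, not the actual mpx.c code (the real patch pokes the directory with userspace atomics): disable page faults around the userspace store, and if the store fails, resolve the fault explicitly with get_user_pages() and retry. sketch_clear_bd_entry() is a made-up name; get_user_pages() is shown with its 3.18-era signature.

    /* Hedged sketch: invalidate a bounds-directory entry while
     * holding mmap_sem for write without deadlocking on a fault. */
    static int sketch_clear_bd_entry(struct mm_struct *mm,
                                     long __user *bd_entry)
    {
            int ret;

            pagefault_disable();    /* a fault now returns -EFAULT
                                       instead of sleeping */
            ret = put_user(0, bd_entry);
            pagefault_enable();
            if (!ret)
                    return 0;

            /* The store faulted: fault the page in ourselves, retry. */
            ret = get_user_pages(current, mm, (unsigned long)bd_entry,
                                 1, 1, 1, NULL, NULL); /* write, force */
            if (ret < 0)
                    return ret;

            pagefault_disable();
            ret = put_user(0, bd_entry);
            pagefault_enable();
            return ret;
    }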
-
Submitted by Dave Hansen

This is really the meat of the MPX patch set. If there is one patch to review in the entire series, this is the one. There is a new ABI here, and this kernel code also interacts with userspace memory in a relatively unusual manner (small FAQ below).

Long Description:

This patch adds two prctl() commands to enable or disable kernel management of bounds tables, including on-demand kernel allocation (see the patch "on-demand kernel allocation of bounds tables") and cleanup (see the patch "cleanup unused bound tables"). Applications do not strictly need the kernel to manage bounds tables, and we expect some applications to use MPX without taking advantage of this kernel support. This means the kernel cannot simply infer from the MPX registers whether an application needs bounds table management; the prctl() is an explicit signal from userspace.

PR_MPX_ENABLE_MANAGEMENT is a signal from userspace that it requires the kernel's help in managing bounds tables. PR_MPX_DISABLE_MANAGEMENT is the opposite, meaning that userspace no longer wants the kernel's help. With PR_MPX_DISABLE_MANAGEMENT, the kernel won't allocate and free bounds tables even if the CPU supports MPX.

PR_MPX_ENABLE_MANAGEMENT fetches the base address of the bounds directory out of a userspace register (bndcfgu) and caches it in a new field (->bd_addr) in the 'mm_struct'. PR_MPX_DISABLE_MANAGEMENT sets "bd_addr" to an invalid address. Using this scheme, we can use "bd_addr" to determine whether kernel management of bounds tables is enabled. Also, the only way to access the bndcfgu register is via xsaves, which can be expensive; caching "bd_addr" like this also helps reduce the cost of those xsaves when doing table cleanup at munmap() time. Unfortunately, we cannot apply this optimization at #BR fault time because we need an xsave to get the value of BNDSTATUS.

==== Why does the hardware even have these Bounds Tables? ====

MPX only has 4 hardware registers for storing bounds information. If MPX-enabled code needs more than these 4 registers, it needs to spill them somewhere. It has two special instructions for this which allow the bounds to be moved between the bounds registers and some new "bounds tables". The resulting #BR exceptions are similar conceptually to a page fault and will be raised by the MPX hardware both on bounds violations and when the tables are not present. This patch handles those #BR exceptions for not-present tables by carving the space out of the normal process's address space (essentially calling the new mmap() interface introduced earlier in this patch set) and then pointing the bounds directory over to it.

The tables *need* to be accessed and controlled by userspace because the instructions for moving bounds in and out of them are extremely frequent. They potentially happen every time a register pointing to memory is dereferenced. Any direct kernel involvement (like a syscall) to access the tables would obviously destroy performance.

==== Why not do this in userspace? ====

This patch is obviously doing this allocation in the kernel. However, MPX does not strictly *require* anything in the kernel. It can theoretically be done completely from userspace. Here are a few ways this *could* be done. I don't think any of them are practical in the real world, but here they are.

Q: Can virtual space simply be reserved for the bounds tables so that we never have to allocate them?
A: As noted earlier, these tables are *HUGE*. An X-GB virtual area needs 4*X GB of virtual space, plus 2GB for the bounds directory. If we were to preallocate them for the 128TB of user virtual address space, we would need to reserve 512TB+2GB, which is larger than the entire virtual address space today. This means they cannot be reserved ahead of time. Also, a single process's pre-populated bounds directory consumes 2GB of virtual *AND* physical memory. IOW, it's completely infeasible to prepopulate bounds directories.

Q: Can we preallocate bounds table space at the same time memory is allocated which might contain pointers that might eventually need bounds tables?
A: This would work if we could hook the site of each and every memory allocation syscall. This can be done for small, constrained applications. But it isn't practical at a larger scale, since a given app has no way of controlling how all the parts of the app might allocate memory (think libraries). The kernel is really the only place to intercept these calls.

Q: Could a bounds fault be handed to userspace and the tables allocated there in a signal handler instead of in the kernel?
A: (thanks to tglx) mmap() is not on the list of safe async handler functions, and even if mmap() would work, it still requires locking or nasty tricks to keep track of the allocation state there.

Having ruled out all of the userspace-only approaches for managing bounds tables that we could think of, we create them on demand in the kernel.

Based-on-patch-by: Qiaowei Ren <qiaowei.ren@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: linux-mm@kvack.org Cc: linux-mips@linux-mips.org Cc: Dave Hansen <dave@sr71.net> Link: http://lkml.kernel.org/r/20141114151829.AD4310DE@viggo.jf.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
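A minimal userspace sketch of the new ABI described above; the PR_MPX_* values are the ones this series proposes for uapi/linux/prctl.h and should be treated as illustrative, not authoritative:

    #include <stdio.h>
    #include <sys/prctl.h>

    /* Assumed values from the patch set's prctl.h additions. */
    #ifndef PR_MPX_ENABLE_MANAGEMENT
    #define PR_MPX_ENABLE_MANAGEMENT  43
    #define PR_MPX_DISABLE_MANAGEMENT 44
    #endif

    int main(void)
    {
            /* Opt in: the kernel allocates/frees bounds tables for us. */
            if (prctl(PR_MPX_ENABLE_MANAGEMENT, 0, 0, 0, 0)) {
                    perror("prctl(PR_MPX_ENABLE_MANAGEMENT)");
                    return 1;   /* no MPX, or management unsupported */
            }
            /* ... run MPX-instrumented code ... */
            prctl(PR_MPX_DISABLE_MANAGEMENT, 0, 0, 0, 0); /* opt out */
            return 0;
    }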
-
- 14 October 2014, 1 commit
-
-
Submitted by Peter Feiner

For VMAs that don't want write notifications, PTEs created for read faults have their write bit set. If the read fault happens after VM_SOFTDIRTY is cleared, then the PTE's softdirty bit will remain clear after subsequent writes. Here's a simple code snippet to demonstrate the bug:

    char* m = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
                   MAP_ANONYMOUS | MAP_SHARED, -1, 0);
    system("echo 4 > /proc/$PPID/clear_refs"); /* clear VM_SOFTDIRTY */
    assert(*m == '\0');     /* new PTE allows write access */
    assert(!soft_dirty(m));
    *m = 'x';               /* should dirty the page */
    assert(soft_dirty(m));  /* fails */

With this patch, write notifications are enabled when VM_SOFTDIRTY is cleared. Furthermore, to avoid unnecessary faults, write notifications are disabled when VM_SOFTDIRTY is set.

As a side effect of enabling and disabling write notifications with care, this patch fixes a bug in mprotect where vm_page_prot bits set by drivers were zapped on mprotect. An analogous bug was fixed in mmap by commit c9d0bf24 ("mm: uncached vma support with writenotify").

Signed-off-by: Peter Feiner <pfeiner@google.com> Reported-by: Peter Feiner <pfeiner@google.com> Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Cyrill Gorcunov <gorcunov@openvz.org> Cc: Pavel Emelyanov <xemul@parallels.com> Cc: Jamie Liu <jamieliu@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
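For completeness, a hedged sketch of the soft_dirty() helper the snippet assumes: a page's soft-dirty state is exported as bit 55 of its /proc/self/pagemap entry.

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Sketch: read the soft-dirty bit (pagemap bit 55) for one page. */
    static int soft_dirty(void *addr)
    {
            uint64_t entry = 0;
            off_t off = ((uintptr_t)addr / getpagesize()) * sizeof(entry);
            int fd = open("/proc/self/pagemap", O_RDONLY);

            if (fd >= 0) {
                    if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry))
                            entry = 0;
                    close(fd);
            }
            return (entry >> 55) & 1;
    }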
-
- 10 October 2014, 3 commits
-
-
Submitted by Geert Uytterhoeven

The different architectures used their own (and different) declarations:

    extern __visible const void __nosave_begin, __nosave_end;
    extern const void __nosave_begin, __nosave_end;
    extern long __nosave_begin, __nosave_end;

Consolidate them in <asm/sections.h>, using the first variant.

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Laura Abbott

For architectures without coherent DMA, memory for DMA may need to be remapped with coherent attributes. Factor out the remapping code from arm and put it in a common location to reduce code duplication.

As part of this, the arm APIs are migrated away from ioremap_page_range to the common APIs, which use map_vm_area for remapping. This should be an equivalent change, and using map_vm_area is more correct, as ioremap_page_range is intended to bring I/O addresses into the CPU space, not regular kernel-managed memory.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Riley <davidriley@chromium.org> Cc: Olof Johansson <olof@lixom.net> Cc: Ritesh Harjain <ritesh.harjani@gmail.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Laura Abbott <lauraa@codeaurora.org> Cc: Mitchel Humpherys <mitchelh@codeaurora.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
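A hedged reconstruction of the factored-out helper (the name follows the patch description; the body may differ in detail): map an array of pages into a contiguous kernel virtual range with the requested pgprot via map_vm_area().

    /* Sketch: remap pages with (e.g. uncached) attributes for DMA. */
    void *dma_common_pages_remap(struct page **pages, size_t size,
                                 unsigned long vm_flags, pgprot_t prot,
                                 const void *caller)
    {
            struct vm_struct *area;

            area = get_vm_area_caller(size, vm_flags, caller);
            if (!area)
                    return NULL;

            area->pages = pages;

            if (map_vm_area(area, prot, pages)) {
                    vunmap(area->addr);
                    return NULL;
            }

            return area->addr;
    }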
-
Submitted by Mel Gorman

ARCH_USES_NUMA_PROT_NONE was defined for architectures that implemented _PAGE_NUMA using _PROT_NONE. This saved an additional PTE bit and relied on the fact that PROT_NONE vmas were skipped by the NUMA hinting fault scanner. This was found to be conceptually confusing, with a lot of implicit assumptions, and it was asked that an alternative be found.

Commit c46a7c81 ("x86: define _PAGE_NUMA by reusing software bits on the PMD and PTE levels") redefined _PAGE_NUMA on x86 to be one of the swap PTE bits and shrank the maximum possible swap size, but it did not go far enough. There are no architectures that reuse _PROT_NONE as _PROT_NUMA, but the relics still exist.

This patch removes ARCH_USES_NUMA_PROT_NONE and removes some unnecessary duplication between powerpc and the generic implementation by mapping the types the core NUMA helpers expect (taken originally from x86) onto their ppc64 equivalents. This necessitated creating a PTE bit mask that identifies the bits distinguishing present from NUMA pte entries, but it is expected this will only differ between arches based on _PAGE_PROTNONE. The naming of the generic helpers was taken from x86 originally, but ppc64 has types that are equivalent for the purposes of the helpers, so they are mapped instead of duplicating code.

Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Rik van Riel <riel@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Cyrill Gorcunov <gorcunov@gmail.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 October 2014, 3 commits
-
-
Submitted by Pranith Kumar

Use the much more reader-friendly ACCESS_ONCE() instead of the cast to volatile. This is purely a stylistic change.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no> Acked-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-arch@vger.kernel.org Link: http://lkml.kernel.org/r/1411482607-20948-1-git-send-email-bobby.prani@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
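A before/after illustration for a flag polled in a loop ('flag' is a stand-in variable, not from the patch):

    /* before: open-coded volatile cast */
    while (*(volatile int *)&flag == 0)
            cpu_relax();

    /* after: same semantics, far more readable */
    while (ACCESS_ONCE(flag) == 0)
            cpu_relax();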
-
Submitted by Rik van Riel

On 32-bit systems, cmpxchg cannot handle 64-bit values, so some additional magic is required to let a 32-bit system with CONFIG_VIRT_CPU_ACCOUNTING_GEN=y build. Make sure the correct cmpxchg function is used when doing an atomic swap of a cputime_t.

Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rik van Riel <riel@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: umgwanakikbuti@gmail.com Cc: fweisbec@gmail.com Cc: srao@redhat.com Cc: lwoodman@redhat.com Cc: atheurer@redhat.com Cc: oleg@redhat.com Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: linux390@de.ibm.com Cc: linux-arch@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-s390@vger.kernel.org Link: http://lkml.kernel.org/r/20140930155947.070cdb1f@annuminas.surriel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
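A hedged sketch of the idea, keyed on the config symbol for brevity (the actual patch splits this between the jiffies- and nsec-based cputime headers): a cputime_t that is a u64 must be swapped with cmpxchg64() so 32-bit kernels work.

    #ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN  /* cputime_t is u64 nsecs */
    # define cmpxchg_cputime(ptr, old, new) cmpxchg64(ptr, old, new)
    #else                                  /* cputime_t is unsigned long */
    # define cmpxchg_cputime(ptr, old, new) cmpxchg(ptr, old, new)
    #endif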
-
Submitted by Yalin Wang

This patch changes the __init_end address to a page-aligned address, so that free_initmem() can free the whole .init section: if the end address is not page aligned, it is rounded down to a page-aligned address, and the unaligned tail page is then never freed.

Signed-off-by: wang <yalin.wang2010@gmail.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 October 2014, 1 commit
-
-
Submitted by Liviu Dudau

Add pci_remap_iospace() to map bus I/O resources into the CPU virtual address space. Architectures with special needs may provide their own version, but most should be able to use this one. This function is useful for PCI host bridge drivers that need to map the PCI I/O resources into virtual memory space.

[bhelgaas: phys_addr description, drop temporary "err" variable]

Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Rob Herring <robh@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> CC: Arnd Bergmann <arnd@arndb.de>
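A hedged reconstruction of the generic helper (details may differ from the final code): remap the bus I/O resource into the fixed PCI_IOBASE virtual window.

    int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
    {
    #if defined(PCI_IOBASE) && defined(CONFIG_MMU)
            unsigned long vaddr = (unsigned long)PCI_IOBASE + res->start;

            if (!(res->flags & IORESOURCE_IO))
                    return -EINVAL;
            if (res->end > IO_SPACE_LIMIT)
                    return -EINVAL;

            return ioremap_page_range(vaddr, vaddr + resource_size(res),
                                      phys_addr,
                                      pgprot_device(PAGE_KERNEL));
    #else
            /* this architecture has no memory-mapped I/O space */
            return -ENODEV;
    #endif
    }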
-
- 30 September 2014, 1 commit
-
-
Submitted by Liviu Dudau

The !CONFIG_GENERIC_IOMAP version of ioport_map() is wrong. It returns a mapped, i.e., virtual, address that can start from zero and completely ignores the PCI_IOBASE and IO_SPACE_LIMIT that most architectures using !CONFIG_GENERIC_IOMAP define.

Tested-by: Tanmay Inamdar <tinamdar@apm.com> Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Arnd Bergmann <arnd@arndb.de>
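A hedged sketch of the corrected generic ioport_map(): return an offset into the PCI_IOBASE window instead of the raw port number.

    static inline void __iomem *ioport_map(unsigned long port,
                                           unsigned int nr)
    {
            return PCI_IOBASE + (port & IO_SPACE_LIMIT);
    }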
-
- 26 September 2014, 1 commit
-
-
Submitted by Mike Turquette

If CONFIG_COMMON_CLK is selected then __clk_get and __clk_put are defined in drivers/clk/clk.c and declared in include/linux/clkdev.h. Sylwester's series[0] to properly support clk_{get,put} in the common clock framework made changes to the asm-specific clkdev.h headers, but not the asm-generic version. Tomeu's recent changes[1] to introduce a provider/consumer split in the clock framework uncovered this problem, causing the following build error on any architecture using the asm-generic clkdev.h (e.g. the x86 architecture and the ACPI LPSS driver):

    In file included from drivers/acpi/acpi_lpss.c:15:0:
    include/linux/clkdev.h:59:5: error: conflicting types for '__clk_get'
     int __clk_get(struct clk_core *clk);
         ^
    In file included from arch/x86/include/generated/asm/clkdev.h:1:0,
                     from include/linux/clkdev.h:15,
                     from drivers/acpi/acpi_lpss.c:15:
    include/asm-generic/clkdev.h:20:19: note: previous definition of '__clk_get' was here
     static inline int __clk_get(struct clk *clk) { return 1; }
                       ^

Fixed by only declaring __clk_get and __clk_put when CONFIG_COMMON_CLK is set.

[0] http://lkml.kernel.org/r/<1386177127-2894-5-git-send-email-s.nawrocki@samsung.com>
[1] http://lkml.kernel.org/r/<1409758148-20104-1-git-send-email-tomeu.vizoso@collabora.com>

Signed-off-by: Mike Turquette <mturquette@linaro.org>
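One way the conflict resolves, sketched from the asm-generic side (hedged; the exact hunk may differ): the static inline stubs exist only when the common clock framework does not provide the real functions.

    /* include/asm-generic/clkdev.h, sketch */
    #ifndef CONFIG_COMMON_CLK
    static inline int __clk_get(struct clk *clk) { return 1; }
    static inline void __clk_put(struct clk *clk) { }
    #endif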
-
- 24 September 2014, 1 commit
-
-
Submitted by Richard Guy Briggs

syscall_get_arch() used to take a task as an argument; it now uses current. Fix the doc text.

Signed-off-by: Richard Guy Briggs <rgb@redhat.com> Signed-off-by: Eric Paris <eparis@redhat.com>
-
- 23 September 2014, 1 commit
-
-
Submitted by Mika Westerberg

The current generic ARCH_NR_GPIOS limit of 256 starts to be too small for newer Intel SoCs like Braswell, which already have more than 256 GPIOs available. Instead of adding more architecture-specific gpio.h files with a custom ARCH_NR_GPIOS, we increase the gpiolib default limit to twice the current value, 512, which should be sufficient for now.

The kernel size increases a bit with this change. Below is an example for an x86_64 kernel image:

    ARCH_NR_GPIOS=256
       text    data     bss     dec     hex filename
    11476173 1971328 1265664 14713165  e0814d vmlinux

    ARCH_NR_GPIOS=512
       text    data     bss     dec     hex filename
    11476173 1971328 1269760 14717261  e0914d vmlinux

So the BSS size, and thus the kernel image size, increases by 4k.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
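The resulting default in the generic gpio.h (the #ifndef guard shape is assumed here; architectures can still supply their own value):

    #ifndef ARCH_NR_GPIOS
    #define ARCH_NR_GPIOS   512
    #endif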
-
- 22 September 2014, 1 commit
-
-
Submitted by Zubair Lutfullah Kakakhel

This header is used by arm64 and x86 individually. Add it to asm-generic to avoid further code repetition when adding CMA to MIPS.

Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com> Acked-by: Michal Nazarewicz <mina86@mina86.com> Cc: catalin.marinas@arm.com Cc: will.deacon@arm.com Cc: tglx@linutronix.de Cc: mingo@redhat.com Cc: hpa@zytor.com Cc: arnd@arndb.de Cc: gregkh@linuxfoundation.org Cc: m.szyprowski@samsung.com Cc: x86@kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: linux-arch@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/7357/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 14 September 2014, 1 commit
-
-
Submitted by Peter Zijlstra

The nohz full code needs irq work to trigger its own interrupt so that the subsystem can work even when the tick is stopped. Let's introduce arch_irq_work_has_interrupt(), which archs can override to tell about their support for this ability.

Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
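The generic fallback (a hedged reconstruction of the new asm-generic header): archs without a self-interrupt for irq work report false, so nohz full can refuse to engage.

    /* include/asm-generic/irq_work.h, sketch */
    static inline bool arch_irq_work_has_interrupt(void)
    {
            return false;
    }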
-
- 26 August 2014, 1 commit
-
-
Submitted by Thierry Reding

Provide an implementation of dma_{alloc,free,mmap}_writecombine() when the architecture supports DMA attributes.

Signed-off-by: Thierry Reding <treding@nvidia.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
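A hedged sketch of the pattern: express the writecombine variants in terms of the attrs-based API by setting DMA_ATTR_WRITE_COMBINE (struct dma_attrs era helpers shown).

    static inline void *dma_alloc_writecombine(struct device *dev,
                                               size_t size,
                                               dma_addr_t *dma_addr,
                                               gfp_t gfp)
    {
            DEFINE_DMA_ATTRS(attrs);
            dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
            return dma_alloc_attrs(dev, size, dma_addr, gfp, &attrs);
    }

    static inline void dma_free_writecombine(struct device *dev,
                                             size_t size, void *cpu_addr,
                                             dma_addr_t dma_addr)
    {
            DEFINE_DMA_ATTRS(attrs);
            dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
            dma_free_attrs(dev, size, cpu_addr, dma_addr, &attrs);
    }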
-
- 14 August 2014, 1 commit
-
-
Submitted by Peter Zijlstra

Rewrite generic atomic support to require only cmpxchg(), and generate all other primitives from that. Furthermore, reduce the endless repetition of all these primitives to a few CPP macros. This way we get more for fewer lines.

Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20140508135852.940119622@infradead.org Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Howells <dhowells@redhat.com> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-arch@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
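A sketch of the pattern (close to the asm-generic result, hedged on details): one CPP template generates each primitive from cmpxchg().

    #define ATOMIC_OP(op, c_op)                                       \
    static inline void atomic_##op(int i, atomic_t *v)                \
    {                                                                 \
            int c, old;                                               \
                                                                      \
            c = v->counter;                                           \
            while ((old = cmpxchg(&v->counter, c, c c_op i)) != c)    \
                    c = old;                                          \
    }

    ATOMIC_OP(add, +)
    ATOMIC_OP(sub, -)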
-
- 09 August 2014, 1 commit
-
-
Submitted by Joe Perches

Add this helper for consistency with dma_zalloc_coherent and for the ability to remove unnecessary memset(,0,) uses.

Signed-off-by: Joe Perches <joe@perches.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: "James E.J. Bottomley" <JBottomley@parallels.com> Cc: "John W. Linville" <linville@tuxdriver.com> Cc: "Stephen M. Cameron" <scameron@beardog.cce.hp.com> Cc: Adam Radford <linuxraid@lsi.com> Cc: Chaoming Li <chaoming_li@realsil.com.cn> Cc: Chas Williams <chas@cmf.nrl.navy.mil> Cc: Christian Benvenuti <benve@cisco.com> Cc: Christopher Harrer <charrer@alacritech.com> Cc: Dario Ballabio <ballabio_dario@emc.com> Cc: David Airlie <airlied@linux.ie> Cc: Don Fry <pcnet32@frontier.com> Cc: Faisal Latif <faisal.latif@intel.com> Cc: Forest Bond <forest@alittletooquiet.net> Cc: Govindarajulu Varadarajan <_govind@gmx.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Hal Rosenstock <hal.rosenstock@gmail.com> Cc: Hans Verkuil <hverkuil@xs4all.nl> Cc: Jayamohan Kallickal <jayamohan.kallickal@emulex.com> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Cc: Larry Finger <Larry.Finger@lwfinger.net> Cc: Lennert Buytenhek <buytenh@wantstofly.org> Cc: Lior Dotan <liodot@gmail.com> Cc: Manish Chopra <manish.chopra@qlogic.com> Cc: Manohar Vanga <manohar.vanga@gmail.com> Cc: Martyn Welch <martyn.welch@ge.com> Cc: Mauro Carvalho Chehab <m.chehab@samsung.com> Cc: Michael Neuffer <mike@i-Connect.Net> Cc: Mirko Lindner <mlindner@marvell.com> Cc: Neel Patel <neepatel@cisco.com> Cc: Neela Syam Kolli <megaraidlinux@lsi.com> Cc: Rajesh Borundia <rajesh.borundia@qlogic.com> Cc: Roland Dreier <roland@kernel.org> Cc: Ron Mercer <ron.mercer@qlogic.com> Cc: Samuel Ortiz <samuel@sortiz.org> Cc: Sean Hefty <sean.hefty@intel.com> Cc: Shahed Shaikh <shahed.shaikh@qlogic.com> Cc: Sony Chacko <sony.chacko@qlogic.com> Cc: Stanislav Yakovlev <stas.yakovlev@gmail.com> Cc: Stephen Hemminger <stephen@networkplumber.org> Cc: Steve Wise <swise@opengridcomputing.com> Cc: Sujith Sankar <ssujith@cisco.com> Cc: Tom Tucker <tom@opengridcomputing.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
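The helper as described (a hedged reconstruction of the pci-dma-compat wrapper):

    static inline void *
    pci_zalloc_consistent(struct pci_dev *hwdev, size_t size,
                          dma_addr_t *dma_handle)
    {
            return dma_zalloc_coherent(hwdev == NULL ? NULL : &hwdev->dev,
                                       size, dma_handle, GFP_ATOMIC);
    }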
-
- 25 July 2014, 1 commit
-
-
Submitted by Alexandre Courbot

gpio_ensure_requested() was introduced in February 2008 by commit d2876d08 to force users of the GPIO API to explicitly request GPIOs before using them. Hopefully by now all GPIOs are correctly requested and this extra check can be omitted; in any case, the GPIO maintainers won't feel bad if machines start failing after 6 years of warnings. This patch removes that function from the dark ages.

Signed-off-by: Alexandre Courbot <acourbot@nvidia.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
- 23 July 2014, 2 commits
-
-
Submitted by Alexandre Courbot

gpio_ensure_requested() only makes sense when using the integer-based GPIO API, so make sure it is called from there instead of from the gpiod API, which we know cannot be called with a non-requested GPIO anyway. The uses of gpio_ensure_requested() in the gpiod API were kind of out of place anyway, so putting them in gpio-legacy.c helps clean up the code.

Actually, considering how long this ensure-requested mechanism has been around, maybe we should just turn this patch into "remove gpio_ensure_requested()" if we know for sure that no user depends on it anymore?

Signed-off-by: Alexandre Courbot <acourbot@nvidia.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
Submitted by Alexandre Courbot

gpio_lock/unlock_as_irq() work with (chip, offset) arguments and thus do not use the old integer namespace. Therefore, there is no reason to have gpiod variants of these functions working with descriptors, especially since the (chip, offset) tuple is more suitable for the users of these functions (GPIO drivers, whereas GPIO descriptors are targeted at GPIO consumers).

Signed-off-by: Alexandre Courbot <acourbot@nvidia.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
- 04 July 2014, 1 commit
-
-
Submitted by Jason Baron

Even on x86-64, I've found the need to break up a readq() into 2 readl() calls. According to the Intel datasheet for the E3-1200 processor:

    "Software must not access B0/D0/F0 32-bit memory-mapped registers
     with requests that cross a DW boundary."

(http://www.intel.com/content/www/us/en/processors/xeon/xeon-e3-1200-family-vol-2-datasheet.html, p. 16)

I can confirm this is true via several hard machine lockups. Thus, add explicit hi_lo_{readq,writeq} and lo_hi_{readq,writeq} so that these uses are spelled out.

Signed-off-by: Jason Baron <jbaron@akamai.com> Link: http://lkml.kernel.org/r/281f09da7ad01e5cea99737ec34d2399bdbbbf63.1403818526.git.jbaron@akamai.com Signed-off-by: Borislav Petkov <bp@suse.de>
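A sketch of the hi-then-lo variants (the lo_hi_* counterparts simply swap the order); this follows the description above, details hedged:

    static inline u64 hi_lo_readq(const volatile void __iomem *addr)
    {
            const volatile u32 __iomem *p = addr;
            u32 low, high;

            high = readl(p + 1);    /* upper dword first */
            low = readl(p);

            return low + ((u64)high << 32);
    }

    static inline void hi_lo_writeq(u64 val, volatile void __iomem *addr)
    {
            writel(val >> 32, addr + 4);
            writel(val, addr);
    }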
-
- 02 July 2014, 1 commit
-
-
Submitted by Zhengyu He

This fixes a typo that named the read_mostly section of percpu "readmostly". It works fine with SMP because the linker script specifies .data..percpu..readmostly. However, UP kernel builds don't have percpu sections defined, and the non-percpu version of the section is called .data..read_mostly, so .data..readmostly will float around and may break things unexpectedly. Looking at the original change that introduced .data..percpu..readmostly (commit c957ef2c), this appears to have been the original intention.

Tested: built a UP kernel and confirmed the sections got merged.

Before the patch:

    $ objdump -h vmlinux.o | grep '\.data\.\.read.*mostly'
     38 .data..read_mostly 00004418 0000000000000000 0000000000000000 00431ac0 2**6
     50 .data..readmostly  00000014 0000000000000000 0000000000000000 00444000 2**3

After the patch:

    $ objdump -h vmlinux.o | grep '\.data\.\.read.*mostly'
     38 .data..read_mostly 00004438 0000000000000000 0000000000000000 00431ac0 2**6

Signed-off-by: Zhengyu He <hzy@google.com> Signed-off-by: Filipe Brandenburger <filbranden@google.com> Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 20 June 2014, 1 commit
-
-
Submitted by Andreas Noever

Add pci_fixup_suspend_late as a new PCI fixup pass. The pass is called from suspend_noirq and poweroff_noirq. Using the same pass for suspend and hibernate is consistent with resume_early, which is called by resume_noirq and restore_noirq. The new quirk pass is required for Thunderbolt support on Apple hardware.

Signed-off-by: Andreas Noever <andreas.noever@gmail.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
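A hedged usage sketch of the new pass; the quirk function and device IDs below are placeholders, not from this patch:

    /* Placeholder quirk: runs from suspend_noirq/poweroff_noirq. */
    static void quirk_foo_suspend_late(struct pci_dev *dev)
    {
            /* save device-specific state before the noirq phase */
    }
    DECLARE_PCI_FIXUP_SUSPEND_LATE(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
                                   quirk_foo_suspend_late);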
-
- 18 June 2014, 6 commits
-
-
Submitted by Tejun Heo

percpu macros are difficult to read. It's partly because they're fairly complex, but also because they simply lack visual and conventional consistency to an unusual degree. The preceding patches tried to organize macro definitions consistently by their roles. This patch makes the following cosmetic changes to improve overall readability.

    * Use a consistent convention for multi-line macro definitions -
      "do {" or "({" are now put on their own lines and the
      line-continuing '\' are all put in the same column.
    * Temp variables used inside macros are consistently given the
      "__" prefix.
    * When a macro argument is passed to another macro or a function,
      putting extra parentheses around it doesn't help anything.
      Don't put them.
    * _this_cpu_generic_*() are renamed to this_cpu_generic_*() so
      that they're consistent with raw_cpu_generic_*().
    * Reorganize raw_cpu_*() and this_cpu_*() definitions so that
      trivial wrappers are collected in one place after the actual
      operation definitions.
    * Other misc cleanups, including reorganizing comments.

All changes in this patch are cosmetic and cause no functional difference.

Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Christoph Lameter <cl@linux.com>
-
Submitted by Tejun Heo

    * In include/asm-generic/percpu.h, collect the
      {raw|_this}_cpu_generic_*() macros into one place. They were
      dispersed through the {raw|this}_cpu_*_N() definitions, and the
      visual inconsistency was making following the code unnecessarily
      difficult.
    * In include/linux/percpu-defs.h, move __verify_pcpu_ptr() later
      in the file so that it's right above the accessor definitions
      where it's actually used.

This is pure reorganization.

Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Christoph Lameter <cl@linux.com>
-
Submitted by Tejun Heo

{raw|this}_cpu_*_N() operations are expected to be provided by archs, and the generic definitions are provided as fallbacks. As such, these firmly belong to include/asm-generic/percpu.h; move the generic definitions there. The code is moved mostly verbatim; however, raw_cpu_*_N() are placed above this_cpu_*_N(), which is more conventional, as the raw operations may be used to define other variants.

This is pure reorganization.

Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Christoph Lameter <cl@linux.com>
-
Submitted by Tejun Heo

The roles of the various percpu header files have become unclear. There are four header files involved:

    include/linux/percpu-defs.h
    include/linux/percpu.h
    include/asm-generic/percpu.h
    arch/*/include/asm/percpu.h

The original intention for include/asm-generic/percpu.h was to provide generic definitions for the arch-overridable parts; however, it now hosts various things which can't be overridden by archs. Also, include/linux/percpu-defs.h was initially added to contain the section and percpu variable definition macros so that arch header files could make use of them without introducing a cyclic inclusion dependency by including include/linux/percpu.h; however, arch headers sometimes need to access percpu variables too, and this is one of the reasons why some accessors were implemented in include/asm-generic/percpu.h.

Let's clear up the situation by making include/asm-generic/percpu.h contain only the arch-overridable parts and moving accessors and operations into include/linux/percpu-defs.h. Note that this patch only moves things from include/asm-generic/percpu.h; include/linux/percpu.h will be taken care of by later patches.

This patch moves the following:

    * SHIFT_PERCPU_PTR() / VERIFY_PERCPU_PTR()
    * per_cpu()
    * raw_cpu_ptr()
    * this_cpu_ptr()
    * __get_cpu_var()
    * __raw_get_cpu_var()
    * __this_cpu_ptr()
    * PER_CPU_[SHARED_]ALIGNED_SECTION
    * PER_CPU_FIRST_SECTION

This patch is pure reorganization.

Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Christoph Lameter <cl@linux.com>
-
Submitted by Tejun Heo

Currently, archs can override raw_cpu_ptr() directly; however, we want to build a layer of indirection in the generic part of percpu so that we can implement generic features there without affecting archs. Introduce arch_raw_cpu_ptr(), which generic percpu code uses to define raw_cpu_ptr(). The two are identical for now. x86 is currently the only arch which overrides raw_cpu_ptr() and is converted to define arch_raw_cpu_ptr() instead.

This doesn't introduce any functional difference.

Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com>
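A hedged sketch of the indirection: the generic layer routes through an arch-overridable hook, and x86 now overrides arch_raw_cpu_ptr() instead of raw_cpu_ptr().

    /* generic side, sketch */
    #ifndef arch_raw_cpu_ptr
    #define arch_raw_cpu_ptr(ptr)  SHIFT_PERCPU_PTR(ptr, __my_cpu_offset)
    #endif

    #define raw_cpu_ptr(ptr)       arch_raw_cpu_ptr(ptr)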
-
Submitted by Tejun Heo

It has been about half a decade since all archs started using the dynamic percpu allocator and thus the same SHIFT_PERCPU_PTR() implementation. There's no benefit in overriding SHIFT_PERCPU_PTR() anymore. Remove the #ifndef around it to clarify that this is identical regardless of the arch.

This patch doesn't cause any functional difference.

Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Christoph Lameter <cl@linux.com>
-
- 07 June 2014, 1 commit
-
-
Submitted by Hans Verkuil

When running sparse over drivers/media/v4l2-core/v4l2-ioctl.c I get these errors:

    drivers/media/v4l2-core/v4l2-ioctl.c:2043:9: error: bad integer constant expression
    drivers/media/v4l2-core/v4l2-ioctl.c:2044:9: error: bad integer constant expression
    drivers/media/v4l2-core/v4l2-ioctl.c:2045:9: error: bad integer constant expression
    drivers/media/v4l2-core/v4l2-ioctl.c:2046:9: error: bad integer constant expression

etc.

The root cause of that turns out to be in include/asm-generic/ioctl.h:

    #include <uapi/asm-generic/ioctl.h>

    /* provoke compile error for invalid uses of size argument */
    extern unsigned int __invalid_size_argument_for_IOC;
    #define _IOC_TYPECHECK(t) \
            ((sizeof(t) == sizeof(t[1]) && \
              sizeof(t) < (1 << _IOC_SIZEBITS)) ? \
              sizeof(t) : __invalid_size_argument_for_IOC)

If it is defined like this instead (as is already done if __KERNEL__ is not defined):

    #define _IOC_TYPECHECK(t) (sizeof(t))

then all is well with the world. This patch allows sparse to work correctly.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
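A hedged sketch of the resulting header shape: sparse defines __CHECKER__, so give it the simple definition while real compiles keep the type check.

    #ifdef __CHECKER__
    #define _IOC_TYPECHECK(t) (sizeof(t))
    #else
    /* provoke compile error for invalid uses of size argument */
    extern unsigned int __invalid_size_argument_for_IOC;
    #define _IOC_TYPECHECK(t) \
            ((sizeof(t) == sizeof(t[1]) && \
              sizeof(t) < (1 << _IOC_SIZEBITS)) ? \
              sizeof(t) : __invalid_size_argument_for_IOC)
    #endif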
-
- 06 June 2014, 1 commit
-
-
Submitted by Waiman Long

This rwlock uses the arch_spin_lock_t as a waitqueue, and assuming the arch_spin_lock_t is a fair lock (ticket, MCS, etc.), the resulting rwlock is a fair lock. It fits in the same 8 bytes as the regular rwlock_t by folding the reader and writer count into a single integer, using the remaining 4 bytes for the arch_spinlock_t. Architectures that can single-copy address bytes can optimize queue_write_unlock() with a 0 write to the LSB (the write count).

Performance as measured by Davidlohr Bueso (rwlock_t -> qrwlock_t):

    +--------------+-------------+---------------+
    | Workload     | #users      | delta         |
    +--------------+-------------+---------------+
    | alltests     | > 1400      | -4.83%        |
    | custom       | 0-100,> 100 | +1.43%,-1.57% |
    | high_systime | > 1000      | -2.61         |
    | shared       | all         | +0.32         |
    +--------------+-------------+---------------+

    http://www.stgolabs.net/qrwlock-stuff/aim7-results-vs-rwsem_optsin/

Signed-off-by: Waiman Long <Waiman.Long@hp.com> [peterz: near complete rewrite] Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: "Paul E.McKenney" <paulmck@linux.vnet.ibm.com> Cc: linux-arch@vger.kernel.org Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/n/tip-gac1nnl3wvs2ij87zv2xkdzq@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
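A hedged sketch of the 8-byte layout described above:

    typedef struct qrwlock {
            atomic_t        cnts;   /* writer byte + reader count */
            arch_spinlock_t lock;   /* fair spinlock as wait queue */
    } arch_rwlock_t;

With this layout, queue_write_unlock() can be a single 0 store to the least-significant (writer) byte of cnts on architectures with single-copy atomic byte stores.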
-
- 05 June 2014, 1 commit
-
-
Submitted by Mel Gorman

_PAGE_NUMA is currently an alias of _PROT_PROTNONE to trap NUMA hinting faults on x86. Care is taken such that _PAGE_NUMA is used only in situations where the VMA flags distinguish between NUMA hinting faults and prot_none faults. This decision was x86-specific, and conceptually it is difficult, requiring special casing to distinguish between PROTNONE and NUMA ptes based on context.

Fundamentally, we only need the _PAGE_NUMA bit to tell the difference between an entry that is really unmapped and a page that is protected for NUMA hinting faults, since if the PTE is not present a fault will be trapped anyway. Swap PTEs on x86-64 use the bits after _PAGE_GLOBAL for the offset. This patch shrinks the maximum possible swap size and uses the bit to uniquely distinguish between NUMA hinting ptes and swap ptes.

Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: David Vrabel <david.vrabel@citrix.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Anvin <hpa@zytor.com> Cc: Fengguang Wu <fengguang.wu@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Steven Noonan <steven@uplinklabs.net> Cc: Rik van Riel <riel@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Cyrill Gorcunov <gorcunov@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 21 May 2014, 5 commits
-
-
Submitted by Bjorn Helgaas

dma_declare_coherent_memory() takes two addresses for a region of memory: a "bus_addr" and a "device_addr". I think the intent is that "bus_addr" is the physical address a *CPU* would use to access the region, and "device_addr" is the bus address the *device* would use to address the region.

Rename "bus_addr" to "phys_addr" and change its type to phys_addr_t. Most callers already supply a phys_addr_t for this argument. The others supply a 32-bit integer (a constant, unsigned int, or __u32) and need no change. Use "unsigned long", not phys_addr_t, to hold PFNs.

No functional change (this could theoretically fix a truncation in a config with a 32-bit dma_addr_t and a 64-bit phys_addr_t, but I don't think there are any such cases involving this code).

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: James Bottomley <jbottomley@Parallels.com> Acked-by: Randy Dunlap <rdunlap@infradead.org>
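The signature after this change (a hedged reconstruction):

    int dma_declare_coherent_memory(struct device *dev,
                                    phys_addr_t phys_addr,
                                    dma_addr_t device_addr,
                                    size_t size, int flags);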
-
Submitted by Rob Herring

This adds the infrastructure to generic earlycon for earlycon setup using DT. The actual setup is not enabled until a following commit adds the FDT parsing.

Signed-off-by: Rob Herring <robh@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Arnd Bergmann <arnd@arndb.de> Acked-by: Grant Likely <grant.likely@linaro.org>
-
Submitted by Rob Herring

OF table sections all follow the same pattern, so create a macro to define them and ensure consistency.

Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rob Herring <robh@kernel.org>
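A hedged sketch of the common definition in include/asm-generic/vmlinux.lds.h (the indirection lets config_enabled() expand CONFIG_x to 1 or 0, deciding whether the section is emitted at all):

    #define ___OF_TABLE(cfg, name)  _OF_TABLE_##cfg(name)
    #define __OF_TABLE(cfg, name)   ___OF_TABLE(cfg, name)
    #define OF_TABLE(cfg, name)     __OF_TABLE(config_enabled(cfg), name)
    #define _OF_TABLE_0(name)
    #define _OF_TABLE_1(name)                                   \
            . = ALIGN(8);                                       \
            VMLINUX_SYMBOL(__##name##_of_table) = .;            \
            *(__##name##_of_table)                              \
            *(__##name##_of_table_end)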
-
Submitted by Rob Herring

The cpu_method_of_table is the oddball among the various OF linker sections. In preparation for common linker section definitions, align cpu_method_of_table with the other definitions in its naming and in ending with a blank struct.

Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rob Herring <robh@kernel.org> Cc: Russell King <linux@arm.linux.org.uk>
-
Submitted by Rob Herring

Align the irqchip OF match table section naming with the other OF match table sections, in preparation for a common definition.

Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rob Herring <robh@kernel.org>
-
- 15 May 2014, 1 commit
-
-
Submitted by James Hogan

_STK_LIM_MAX could be used to override the RLIMIT_STACK hard limit from an arch's include/uapi/asm-generic/resource.h file, but it is no longer used since both parisc and metag removed the override. Therefore remove it entirely, setting the hard RLIMIT_STACK limit to RLIM_INFINITY directly in include/asm-generic/resource.h.

Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: linux-arch@vger.kernel.org Cc: Helge Deller <deller@gmx.de> Cc: John David Anglin <dave.anglin@bell.net>
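The resulting generic initializer reduces to (a hedged reconstruction): the soft limit stays _STK_LIM, while the hard limit is plain RLIM_INFINITY with no arch override point.

    /* include/asm-generic/resource.h, sketch */
    [RLIMIT_STACK] = { _STK_LIM, RLIM_INFINITY },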
-