- 11 January 2016, 1 commit
-
-
Committed by Max Filippov

Do not always use fake NMI when it is safe; provide a Kconfig option instead. Print a warning if fake NMI is chosen in an unsafe configuration, but allow it, because it may work if the user knows that interrupts with priorities at or above the PMM IRQ are not used. Add a check to the NMI handler that BUGs if any of these IRQs fire.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
- 09 November 2015, 2 commits
-
-
Committed by Max Filippov

This fixes the following build error seen in -next:

  drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c:143:2: error: implicit declaration of function 'dma_to_phys'

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

- don't bugcheck if a high memory page is passed to xtensa_map_page;
- turn empty dcache flush macros into functions so that they can be passed as function parameters;
- use kmap_atomic to map high memory pages for cache invalidation/flushing performed by xtensa_sync_single_for_{cpu,device}.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
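For illustration, a minimal sketch of the kmap_atomic flushing pattern described above, assuming a stand-in cache primitive named do_flush_dcache_range (a hypothetical name, not the exact xtensa helper or the code from this patch):

```c
#include <linux/highmem.h>

/*
 * Sketch: flush a DMA buffer that may live in a high memory page.
 * do_flush_dcache_range() stands in for the arch cache-flush primitive;
 * the exact helper used by the xtensa code differs.
 */
static void flush_dma_buffer(struct page *page, unsigned long offset, size_t size)
{
	if (PageHighMem(page)) {
		void *vaddr = kmap_atomic(page);	/* temporary kernel mapping */

		do_flush_dcache_range((unsigned long)vaddr + offset, size);
		kunmap_atomic(vaddr);
	} else {
		do_flush_dcache_range((unsigned long)page_address(page) + offset,
				      size);
	}
}
```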
-
- 08 November 2015, 1 commit
-
-
Committed by Jens Axboe

No functional changes in this patch, but it prepares us for returning a more useful cookie related to the IO that was queued up.

Signed-off-by: Jens Axboe <axboe@fb.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Acked-by: Keith Busch <keith.busch@intel.com>
-
- 07 November 2015, 1 commit
-
-
Committed by Kirill A. Shutemov

Hugh has pointed out that the compound_head() call can be unsafe in some contexts. Here's one example:

  CPU0                                    CPU1

  isolate_migratepages_block()
    page_count()
      compound_head()
        !!PageTail() == true
                                          put_page()
                                            tail->first_page = NULL
        head = tail->first_page
                                          alloc_pages(__GFP_COMP)
                                            prep_compound_page()
                                              tail->first_page = head
                                              __SetPageTail(p);
        !!PageTail() == true
      <head == NULL dereferencing>

The race is purely theoretical. I don't think it's possible to trigger it in practice. But who knows.

We can fix the race by changing how we encode PageTail() and compound_head() within struct page, so that they can be updated in one shot.

The patch introduces page->compound_head into the third double word block, in front of compound_dtor and compound_order. Bit 0 encodes PageTail() and the rest of the bits are a pointer to the head page if bit zero is set.

The patch moves page->pmd_huge_pte out of the word, just in case an architecture defines pgtable_t as something that can have bit 0 set.

hugetlb_cgroup uses page->lru.next in the second tail page to store a pointer to struct hugetlb_cgroup. The patch switches it to use page->private in the second tail page instead. The space is free since ->first_page is removed from the union.

The patch also opens the possibility to remove the HUGETLB_CGROUP_MIN_ORDER limitation, since there's now space in the first tail page to store the struct hugetlb_cgroup pointer. But that's out of scope of this patch.

That means page->compound_head shares storage space with:
 - page->lru.next;
 - page->next;
 - page->rcu_head.next.

That's too long a list to be absolutely sure, but it looks like nobody uses bit 0 of the word.

page->rcu_head.next is guaranteed[1] to have bit 0 clear as long as we use call_rcu(), call_rcu_bh(), call_rcu_sched(), or call_srcu(). But a future call_rcu_lazy() is not allowed, as it makes use of the bit and we could get a false positive PageTail().

[1] http://lkml.kernel.org/g/20150827163634.GD4029@linux.vnet.ibm.com

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
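A minimal sketch of the bit-0 encoding described above, paraphrased from the description rather than quoted from the patch (field and helper shapes may differ slightly from the final mm code):

```c
/*
 * Sketch of the encoding: bit 0 of page->compound_head set means
 * "this is a tail page", and the remaining bits point at the head page.
 * Both pieces of state live in one word, so they are updated atomically.
 */
static inline void set_compound_head(struct page *tail, struct page *head)
{
	WRITE_ONCE(tail->compound_head, (unsigned long)head + 1);
}

static inline int my_page_tail(const struct page *page)
{
	return READ_ONCE(page->compound_head) & 1;
}

static inline struct page *my_compound_head(struct page *page)
{
	unsigned long head = READ_ONCE(page->compound_head);

	if (head & 1)
		return (struct page *)(head - 1);
	return page;
}
```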
-
- 06 November 2015, 1 commit
-
-
Committed by Eric B Munson

The previous patch introduced a flag that specified pages in a VMA should be placed on the unevictable LRU, but they should not be made present when the area is created. This patch adds the ability to set this state via the new mlock system calls.

We add MLOCK_ONFAULT for mlock2 and MCL_ONFAULT for mlockall. MLOCK_ONFAULT will set the VM_LOCKONFAULT modifier for VM_LOCKED. MCL_ONFAULT should be used as a modifier to the two other mlockall flags. When used with MCL_CURRENT, all current mappings will be marked with VM_LOCKED | VM_LOCKONFAULT. When used with MCL_FUTURE, the mm->def_flags will be marked with VM_LOCKED | VM_LOCKONFAULT. When used with both MCL_CURRENT and MCL_FUTURE, all current mappings and mm->def_flags will be marked with VM_LOCKED | VM_LOCKONFAULT.

Prior to this patch, mlockall() would unconditionally clear the mm->def_flags any time it is called without MCL_FUTURE. This behavior is maintained after adding MCL_ONFAULT. If a call to mlockall(MCL_FUTURE) is followed by mlockall(MCL_CURRENT), the mm->def_flags will be cleared and new VMAs will be unlocked. This remains true with or without MCL_ONFAULT in either mlockall() invocation.

munlock() will unconditionally clear both VMA flags. munlockall() unconditionally clears both VMA flags on all VMAs and in the mm->def_flags field.

Signed-off-by: Eric B Munson <emunson@akamai.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
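As a rough userspace illustration of the new flags (a sketch, not code from the patch; it assumes headers that define SYS_mlock2, and the fallback flag values are taken from the uapi definitions of that era):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>

#ifndef MLOCK_ONFAULT
#define MLOCK_ONFAULT 0x01	/* assumed to match the uapi value */
#endif
#ifndef MCL_ONFAULT
#define MCL_ONFAULT 4		/* assumed to match the uapi value */
#endif

int main(void)
{
	size_t len = 4 * sysconf(_SC_PAGESIZE);
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	/* Lock the range, but only fault pages in as they are touched. */
	if (syscall(SYS_mlock2, buf, len, MLOCK_ONFAULT))
		perror("mlock2");

	/* Or: lock all current mappings lazily. */
	if (mlockall(MCL_CURRENT | MCL_ONFAULT))
		perror("mlockall");

	memset(buf, 0, len);	/* pages become resident and stay locked */
	return 0;
}
```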
-
- 03 November 2015, 4 commits
-
-
Committed by Max Filippov

Drop the unaligned dcache management functions, as they are no longer used. This reverts commit bd974240 ("xtensa: cache inquiry and unaligned cache handling functions").

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

There are no .bootstrap or .ResetVector.text sections linked into the vmlinux image; drop these sections from vmlinux.lds.S. Drop the RESET_VECTOR_VADDR definition, which was only used for .ResetVector.text. Drop the remapped copies of the primary and secondary reset vectors, as modern gdb doesn't have problems stepping through instructions at arbitrary locations. Drop the corresponding sections from the corresponding linker scripts.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

There are multiple factors adding to the issue in different configurations:

- commit 17290231 ("xtensa: add fixup for double exception raised in window overflow") added the function window_overflow_restore_a0_fixup to the double exception vector, overlapping the reset vector location of secondary processor cores.
- on MMUv2 cores RESET_VECTOR1_VADDR may point to uncached kernel memory, making the code overlap depend on cache type and size, so that without cache or with WT cache the reset vector code overwrites the double exception code, making the issue even harder to detect.
- on MMUv3 cores RESET_VECTOR1_VADDR may point to an unmapped area, as MMUv3 cores change the virtual address map to match the MMUv2 layout, but the reset vector virtual address is given for the original MMUv3 mapping.
- the physical memory region of the secondary reset vector is not reserved in the physical memory map, and thus may be allocated and overwritten at an arbitrary moment.

Fix it as follows:

- move the window_overflow_restore_a0_fixup code to the .text section.
- define RESET_VECTOR1_VADDR so that it points to the reset vector in the cacheable MMUv2 map for cores with MMU.
- reserve the reset vector region in the physical memory map.

Drop the separate literal section and build mxhead.S with text section literals.

Cc: <stable@vger.kernel.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

Make the maximal memory allocation order configurable, so that drivers can allocate huge buffers when they need to.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
- 02 November 2015, 10 commits
-
-
Committed by Max Filippov

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

Diamond core 212 is a general-purpose core without a full MMU, used for the sample noMMU configuration.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

Not having HAVE_FUTEX_CMPXCHG makes futex_detect_cmpxchg probe cmpxchg_futex_value_locked with a NULL address. That access is not guaranteed to fault without an MMU; instead it locks up on Xtensa when there's no RAM at address 0. Select HAVE_FUTEX_CMPXCHG in noMMU Xtensa configurations.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
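For context, the generic probe reads roughly as follows (a paraphrase of the kernel/futex.c code of that era, shown only to illustrate why the NULL-address probe matters on noMMU; details may differ from the exact source):

```c
/*
 * Paraphrased sketch: without HAVE_FUTEX_CMPXCHG the kernel deliberately
 * pokes a NULL user address and expects -EFAULT.  Without an MMU that
 * access may simply hang instead of faulting, which is the lock-up the
 * patch above avoids by selecting HAVE_FUTEX_CMPXCHG.
 */
static void __init futex_detect_cmpxchg(void)
{
#ifndef CONFIG_HAVE_FUTEX_CMPXCHG
	u32 curval;

	if (cmpxchg_futex_value_locked(&curval, NULL, 0, 0) == -EFAULT)
		futex_cmpxchg_enabled = 1;
#endif
}
```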
-
Committed by Max Filippov

RAM starts at 0x60000000 on noMMU cores, not at 0x40000000. Fix the default.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

The KIO region location is different for noMMU cores. Provide a different default physical address and make the KIO virtual address equal to the physical one. Move the xtensa_get_kio_paddr function close to the XCHAL_KIO_PADDR definition and define it not only for MMUv3, but for all MMU options except MMUv2.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

There's no kernel/user separation in noMMU, and PS.RING may not exist. Even if it exists it should not be used, because TLB entries are not set up for the user ring on user pages.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

This fixes the following warning when the default memory region crosses 0x80000000:

  arch/xtensa/include/asm/processor.h:40:47: warning: integer overflow in expression [-Woverflow]
   #define TASK_SIZE (PLATFORM_DEFAULT_MEM_START + PLATFORM_DEFAULT_MEM_SIZE)
                                                 ^

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

- make cache-related assembly macros empty if the core doesn't have the corresponding cache type;
- don't initialize cache attributes in instruction/data TLB entries if there's no corresponding cache type.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

Build-time fixes:
- make lbeg/lend/lcount save/restore conditional on kernel entry;
- don't clear lcount in platform_restart functions unconditionally.

Run-time fixes:
- use the correct end-of-range register in __endla paired with __loopt, not the unused temporary register. This fixes .bss zero-initialization. Update comments in asmmacro.h;
- don't clobber a10 in usercopy, which leads to access to unmapped memory.

Cc: <stable@vger.kernel.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
- 28 October 2015, 1 commit
-
-
Committed by Rob Herring

Enable building all dtb files when CONFIG_OF_ALL_DTBS is enabled. The dtbs are not really dependent on a platform being enabled or any other kernel config, so for testing coverage it is convenient to build all of the dtbs. This builds all dts files in the tree, not just the listed targets.

Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: linux-xtensa@linux-xtensa.org
-
- 01 October 2015, 1 commit
-
-
Committed by Marc Zyngier

Seeing the 'of' characters in a symbol that is being called from ACPI seems to freak out people. So let's do a bit of pointless renaming so that these folks do feel at home.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 23 September 2015, 1 commit
-
-
Committed by Peter Zijlstra

This patch makes sure that atomic_{read,set}() are at least {READ,WRITE}_ONCE(). We already had the 'requirement' that atomic_read() should use ACCESS_ONCE(), and most archs had this, but a few were lacking. All are now converted to use READ_ONCE(). And, by a symmetry and general paranoia argument, upgrade atomic_set() to use WRITE_ONCE().

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: james.hogan@imgtec.com
Cc: linux-kernel@vger.kernel.org
Cc: oleg@redhat.com
Cc: will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
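In concrete terms, the per-arch definitions end up looking roughly like this (a sketch of the pattern, not a quote of any particular arch header; some architectures use static inline functions instead of macros):

```c
/*
 * Sketch: atomic_read/atomic_set expressed through READ_ONCE/WRITE_ONCE,
 * so the compiler can neither tear the access nor cache the value.
 */
#define atomic_read(v)		READ_ONCE((v)->counter)
#define atomic_set(v, i)	WRITE_ONCE((v)->counter, (i))
```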
-
- 16 September 2015, 1 commit
-
-
Committed by Bjorn Helgaas

Revert dff22d20 ("PCI: Call pci_read_bridge_bases() from core instead of arch code").

Reading PCI bridge windows is not arch-specific in itself, but there is PCI core code that doesn't work correctly if we read them too early. For example, Hannes found this case on an ARM Freescale i.mx6 board:

  pci_bus 0000:00: root bus resource [mem 0x01000000-0x01efffff]
  pci 0000:00:00.0: PCI bridge to [bus 01-ff]
  pci 0000:00:00.0: BAR 8: no space for [mem size 0x01000000] (mem window)
  pci 0000:01:00.0: BAR 2: failed to assign [mem size 0x00200000]
  pci 0000:01:00.0: BAR 1: failed to assign [mem size 0x00004000]
  pci 0000:01:00.0: BAR 0: failed to assign [mem size 0x00000100]

The 00:00.0 mem window needs to be at least 3MB: the 01:00.0 device needs 0x204100 of space, and mem windows are megabyte-aligned.

Bus sizing can increase a bridge window size, but never *decrease* it (see d65245c3 ("PCI: don't shrink bridge resources")). Prior to dff22d20, ARM didn't read bridge windows at all, so the "original size" was zero, and we assigned a 3MB window.

After dff22d20, we read the bridge windows before sizing the bus. The firmware programmed a 16MB window (size 0x01000000) in 00:00.0, and since we never decrease the size, we kept 16MB even though we only needed 3MB. But 16MB doesn't fit in the host bridge aperture, so we failed to assign space for the window and the downstream devices.

I think this is a defect in the PCI core: we shouldn't rely on the firmware to assign sensible windows. Ray reported a similar problem, also on ARM, with Broadcom iProc. Issues like this are too hard to fix right now, so revert dff22d20.

Reported-by: Hannes <oe5hpm@gmail.com>
Reported-by: Ray Jui <rjui@broadcom.com>
Link: http://lkml.kernel.org/r/CAAa04yFQEUJm7Jj1qMT57-LG7ZGtnhNDBe=PpSRa70Mj+XhW-A@mail.gmail.com
Link: http://lkml.kernel.org/r/55F75BB8.4070405@broadcom.com
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
- 11 September 2015, 5 commits
-
-
Committed by Christoph Hellwig

Almost everyone implements dma_set_mask the same way, although sometimes that is hidden in ->set_dma_mask methods.

This patch consolidates those into a common implementation that either calls ->set_dma_mask if present or otherwise uses the default implementation. Some architectures used to only call ->set_dma_mask after the initial checks, and those instances have been fixed to do the full work. h8300 implemented dma_set_mask bogusly as a no-op and has been fixed.

Unfortunately some architectures overload unrelated semantics, like changing the dma_ops, into it, so we still need to allow for an architecture override for now.

[jcmvbkbc@gmail.com: fix xtensa]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
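The consolidated helper described here looks roughly like the following (a sketch reconstructed from the description; details may differ from the actual include/linux/dma-mapping.h of that release):

```c
#ifndef HAVE_ARCH_DMA_SET_MASK
static inline int dma_set_mask(struct device *dev, u64 mask)
{
	struct dma_map_ops *ops = get_dma_ops(dev);

	/* Let the architecture/bus hook take over if it provides one. */
	if (ops->set_dma_mask)
		return ops->set_dma_mask(dev, mask);

	/* Default: validate the mask and record it on the device. */
	if (!dev->dma_mask || !dma_supported(dev, mask))
		return -EIO;
	*dev->dma_mask = mask;
	return 0;
}
#endif
```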
-
Committed by Christoph Hellwig

Most architectures just call into ->dma_supported, but some also return 1 if the method is not present, or 0 if no dma ops are present (although that should never happen). Consolidate this broader version into common code.

Also fix h8300, which incorrectly always returned 0, which would have been a problem if its dma_set_mask implementation wasn't a similarly buggy no-op.

As a few architectures have much more elaborate implementations, we still allow for arch overrides.

[jcmvbkbc@gmail.com: fix xtensa]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
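Sketched out, the common version of the behaviour described above reads roughly as follows (a hedged reconstruction, not a quote of the patch):

```c
#ifndef HAVE_ARCH_DMA_SUPPORTED
static inline int dma_supported(struct device *dev, u64 mask)
{
	struct dma_map_ops *ops = get_dma_ops(dev);

	if (!ops)
		return 0;		/* no dma_map_ops registered at all */
	if (!ops->dma_supported)
		return 1;		/* no method: assume the mask is fine */
	return ops->dma_supported(dev, mask);
}
#endif
```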
-
Committed by Christoph Hellwig

Currently there are three valid implementations of dma_mapping_error:

(1) call ->mapping_error
(2) check for a hardcoded error code
(3) always return 0

This patch provides a common implementation that calls ->mapping_error if present, then checks for DMA_ERROR_CODE if defined, or otherwise returns 0.

[jcmvbkbc@gmail.com: fix xtensa]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
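The three cases map onto a common helper roughly like this (a sketch of the described consolidation, assuming the usual get_dma_ops/debug_dma helpers; not a verbatim quote of the patch):

```c
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
	struct dma_map_ops *ops = get_dma_ops(dev);

	debug_dma_mapping_error(dev, dma_addr);
	if (ops->mapping_error)
		return ops->mapping_error(dev, dma_addr);	/* case (1) */

#ifdef DMA_ERROR_CODE
	return dma_addr == DMA_ERROR_CODE;			/* case (2) */
#else
	return 0;						/* case (3) */
#endif
}
```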
-
Committed by Christoph Hellwig

Most architectures do not support non-coherent allocations and either define dma_{alloc,free}_noncoherent to their coherent versions or stub them out.

Openrisc uses dma_{alloc,free}_attrs to implement them, and only MIPS implements them directly. This patch moves the Openrisc version to common code, and handles the DMA_ATTR_NON_CONSISTENT case in the MIPS dma_map_ops instance.

Note that actual non-coherent allocations require a dma_cache_sync implementation, so if non-coherent allocations didn't work on an architecture before this patch they still won't work after it.

[jcmvbkbc@gmail.com: fix xtensa]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Christoph Hellwig

Since 2009 we have a nice asm-generic header implementing lots of DMA API functions for architectures using struct dma_map_ops, but unfortunately it's still missing a lot of APIs that all architectures still have to duplicate. This series consolidates the remaining functions, although we still need arch opt-outs for two of them, as a few architectures have very non-standard implementations.

This patch (of 5):

The coherent DMA allocator works the same over all architectures supporting dma_map operations. This patch consolidates them and converges the minor differences:

- the debug_dma helpers are now called from all architectures, including those that were previously missing them
- dma_alloc_from_coherent and dma_release_from_coherent are now always called from the generic alloc/free routines instead of the ops; dma-mapping-common.h always includes dma-coherent.h to get the definitions for them, or the stubs if the architecture doesn't support this feature
- checks for ->alloc / ->free presence are removed. There is only one instance of dma_map_ops without them (mic_dma_ops), and that one is x86 only anyway.

Besides that, only x86 needs special treatment to substitute a default device if none is passed and to tweak the gfp_flags. An optional arch hook is provided for that.

[linux@roeck-us.net: fix build]
[jcmvbkbc@gmail.com: fix xtensa]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 20 August 2015, 1 commit
-
-
Committed by Max Filippov

The current sed script makes assumptions about the structure of rules that group .text sections in the vmlinux linker script. These assumptions get broken occasionally, e.g. by 779c88c9 ("ARM: 8321/1: asm-generic: introduce .text.fixup input section") or 9bebe9e5 ("kbuild: Fix .text.unlikely placement"). Rewrite the sed rules so that they don't depend on the number/arrangement of text sections in *(...) blocks.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
- 17 August 2015, 10 commits
-
-
Committed by Max Filippov

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

In case the perf IRQ is the highest of the medium-level IRQs, and is alone on its level, it may be treated as NMI:

- LOCKLEVEL is defined to be one level less than the EXCM level,
- IRQ masking never lowers the current IRQ level,
- a new fake exception cause code, EXCCAUSE_MAPPED_NMI, is assigned to that IRQ; a new second-level exception handler, do_nmi, assigned to it handles it as NMI,
- atomic operations in configurations without s32c1i still need to mask all interrupts.

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

There's no way _switch_to can produce double exceptions now; don't enter/leave the EXC_TABLE_FIXUP critical section.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

call12 can't be safely used as the first call in an inline function, because the compiler does not extend the stack frame of the bounding function accordingly, which may result in corruption of local variables. If a call needs to be done, do call8 first followed by call12. For pure assembly code in _switch_to, increase the stack frame size of the bounding function.

Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

entry.S only disables IRQs on hardware IRQ; move the trace_hardirqs_off call into do_interrupt. Check the actual intlevel that will be restored on return from the exception handler to decide whether trace_hardirqs_on should be called. Annotate IRQ on/off points in the TIF_* handling loop on return from the exception handler.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

The Xtensa Performance Monitor Module has up to 8 32-bit wide performance counters. Each counter may be enabled independently and can count any single type of hardware performance event. Event counting may be enabled and disabled globally (per PMM).

Each counter has a status register with bits indicating whether the counter has overflowed, and may be programmed to raise a profiling IRQ on overflow. This IRQ is used to rewind counters and allow counting more than 2^32 samples for counting events, and to report samples for sampling events.

For more details see the Tensilica Debug User's Guide, chapter 8 "Performance monitor module".

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
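As a rough userspace illustration of what this backend enables (a generic perf_event_open example, not code from the patch; it assumes headers that define SYS_perf_event_open, and on xtensa the kernel would back the counter with a PMM counter):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Count CPU cycles spent in a small busy loop via the perf syscall. */
int main(void)
{
	struct perf_event_attr attr;
	long long count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.disabled = 1;
	attr.exclude_kernel = 1;

	fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	for (volatile int i = 0; i < 1000000; i++)
		;
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	read(fd, &count, sizeof(count));
	printf("cycles: %lld\n", count);
	close(fd);
	return 0;
}
```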
-
Committed by Max Filippov

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Max Filippov

The old oprofile interface will share user stack tracing with the new perf interface. Move oprofile user/kernel stack tracing to stacktrace.c to make this possible.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-