- 25 Feb 2017, 4 commits
-
By Geert Uytterhoeven

Extract the glob test code into its own source file, so that it can be compiled either as a loadable module or built into the kernel.

Link: http://lkml.kernel.org/r/1483470276-10517-2-git-send-email-geert@linux-m68k.org
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Geert Uytterhoeven

Extract the crc32 test code into its own source file, so that it can be compiled either as a loadable module or built into the kernel.

Link: http://lkml.kernel.org/r/1483470276-10517-1-git-send-email-geert@linux-m68k.org
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Kees Cook

The CHECK_DATA_CORRUPTION() macro was designed to have callers do something meaningful/protective on failure. However, using "return false" in the macro too strictly limits the design patterns of callers. Instead, let callers handle the logic test directly, but make sure the result is checked by forcing __must_check (which apparently cannot be applied directly to macro expressions).

Link: http://lkml.kernel.org/r/20170206204547.GA125312@beast
Signed-off-by: Kees Cook <keescook@chromium.org>
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
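A minimal sketch of the resulting caller pattern (the helper routing follows the commit's description; the list check below is illustrative, not the exact upstream code):

    /* __must_check cannot sit on the macro itself, so bug.h routes the
     * macro's boolean result through a checked helper */
    static inline __must_check bool check_data_corruption(bool v)
    {
            return v;
    }

    /* a caller now picks its own protective reaction */
    bool __list_add_valid(struct list_head *new, struct list_head *prev,
                          struct list_head *next)
    {
            if (CHECK_DATA_CORRUPTION(next->prev != prev,
                            "list_add corruption: next->prev should be prev (%p)\n",
                            prev))
                    return false;   /* abort the list_add, caller's choice */
            return true;
    }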
-
By Greg Thelen

Make a kasan test which uses a SLAB_ACCOUNT slab cache. If the test is run within a non-default memcg, then it uncovers the bug fixed by "kasan: drain quarantine of memcg slab objects"[1].

If run without fix [1], it shows "Slab cache still has objects", and the kmem_cache structure is leaked. Here's an unpatched kernel test:

  $ dmesg -c > /dev/null
  $ mkdir /sys/fs/cgroup/memory/test
  $ echo $$ > /sys/fs/cgroup/memory/test/tasks
  $ modprobe test_kasan 2> /dev/null
  $ dmesg | grep -B1 still
  [ 123.456789] kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
  [ 124.456789] kmem_cache_destroy test_cache: Slab cache still has objects

Kernels with fix [1] don't have the "Slab cache still has objects" warning or the underlying leak. The new test runs and passes in the default (root) memcg, though in the root memcg it won't uncover the problem fixed by [1].

Link: http://lkml.kernel.org/r/1482257462-36948-2-git-send-email-gthelen@google.com
Signed-off-by: Greg Thelen <gthelen@google.com>
Reviewed-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
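A sketch of what the new test amounts to (the function and cache names match the dmesg above; the object size is illustrative):

    static noinline void __init memcg_accounted_kmem_cache(void)
    {
            struct kmem_cache *cache;
            void *p;

            cache = kmem_cache_create("test_cache", 200, 0, SLAB_ACCOUNT, NULL);
            if (!cache)
                    return;

            pr_info("allocate memcg accounted object\n");
            p = kmem_cache_alloc(cache, GFP_KERNEL);
            if (p)
                    kmem_cache_free(cache, p);

            /* without the quarantine drain, this leaks the kmem_cache and
             * warns "Slab cache still has objects" in a non-root memcg */
            kmem_cache_destroy(cache);
    }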
-
- 24 Feb 2017, 1 commit
-
By Dave Airlie

Linus doesn't like it being user-selectable, so kill it until someone needs it for something else.

Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 23 Feb 2017, 4 commits
-
By Jiri Pirko

As reported by Geert, remove the string so the user does not see this config option. The option is explicitly selected only as a dependency of in-kernel users.

Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Fixes: 44091d29 ("lib: Introduce priority array area manager")
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Michal Hocko

show_mem() allows node-specific data that is irrelevant to the allocation request to be filtered out via SHOW_MEM_FILTER_NODES. The filtering is done in skip_free_areas_node, which skips all nodes that are not in the mems_allowed of the current process. This works as expected most of the time, because the nodemask shouldn't be outside of the allocating task, but there are some exceptions. E.g. memory hotplug might want to request allocations from outside of the allowed nodes (see new_node_page).

Get rid of this hardcoded behavior: push the allocation mask down the show_mem path and use it instead of cpuset_current_mems_allowed. A NULL nodemask is interpreted as cpuset_current_mems_allowed.

[akpm@linux-foundation.org: coding-style fixes]
Link: http://lkml.kernel.org/r/20170117091543.25850-5-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
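A sketch of the resulting skip logic (the helper name follows the commit's description; the merged code may differ in detail):

    static bool show_mem_node_skip(unsigned int flags, int nid,
                                   nodemask_t *nodemask)
    {
            if (!(flags & SHOW_MEM_FILTER_NODES))
                    return false;
            /* NULL means "filter against the cpuset of current", as before */
            if (!nodemask)
                    nodemask = &cpuset_current_mems_allowed;
            return !node_isset(nid, *nodemask);
    }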
-
By Miles Chen

Add a comment about the failure to check a DMA map error, to help driver developers.

Link: http://lkml.kernel.org/r/1484622289-22085-1-git-send-email-miles.chen@mediatek.com
Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Arnd Bergmann

On a NOMMU ARM kernel, we get this link error:

  ERROR: "__get_user_bad" [lib/test_user_copy.ko] undefined!

The problem is that the extended get_user/put_user definitions were only added for the normal (MMU-based) case. We could add them for NOMMU as well, but it seems easier to just not call them, since no other code needs them.

Fixes: 4c5d7bc6 ("usercopy: Add tests for all get_user() sizes")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 22 Feb 2017, 1 commit
-
By Kees Cook

The existing test was only exercising the native unsigned-long-sized get_user(). For completeness, we should check all sizes. But we must skip some 32-bit architectures that don't implement a 64-bit get_user(). These new tests actually uncovered a bug in ARM's 64-bit get_user() zeroing.

Signed-off-by: Kees Cook <keescook@chromium.org>
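A sketch of the per-size checks, assuming the test module's test() logging helper and its usermem pointer; CONFIG_ARCH_HAS_64BIT_GET_USER stands in for the commit's actual list of skipped 32-bit architectures:

    u8 v8 = 0; u16 v16 = 0; u32 v32 = 0;

    ret |= test(get_user(v8, (u8 __user *)usermem), "get_user(u8) failed");
    ret |= test(get_user(v16, (u16 __user *)usermem), "get_user(u16) failed");
    ret |= test(get_user(v32, (u32 __user *)usermem), "get_user(u32) failed");
    #if BITS_PER_LONG == 64 || defined(CONFIG_ARCH_HAS_64BIT_GET_USER)
    {
            u64 v64 = 0;
            ret |= test(get_user(v64, (u64 __user *)usermem),
                        "get_user(u64) failed");
    }
    #endif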
-
- 18 Feb 2017, 1 commit
-
By Herbert Xu

This patch adds code that handles GFP_ATOMIC kmalloc failure on insertion. As we cannot use vmalloc, we solve it by making our hash table nested. That is, we allocate single pages at each level and reach our desired table size by nesting them.

When a nested table is created, only a single page is allocated at the top level. Lower levels are allocated on demand during insertion. Therefore, for each insertion to succeed, only two (non-consecutive) pages are needed.

After a nested table is created, a rehash will be scheduled in order to switch to a vmalloced table as soon as possible. Also, the rehash code will never rehash into a nested table. If we detect a nested table during a rehash, the rehash will be aborted and a new rehash will be scheduled.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
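A simplified sketch of the two-level idea (the merged rhashtable code adds RCU annotations and folds the leaf level into the bucket array):

    union nested_table {
            union nested_table *table;      /* inner level */
            struct rhash_head *bucket;      /* leaf level */
    };

    static union nested_table *nested_table_get(union nested_table **prev)
    {
            union nested_table *ntbl = READ_ONCE(*prev);

            if (ntbl)
                    return ntbl;

            /* one order-0 page per level, so GFP_ATOMIC rarely fails */
            ntbl = kzalloc(PAGE_SIZE, GFP_ATOMIC);
            if (ntbl && cmpxchg(prev, NULL, ntbl) != NULL) {
                    kfree(ntbl);            /* lost the install race */
                    ntbl = READ_ONCE(*prev);
            }
            return ntbl;
    }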
-
- 17 Feb 2017, 2 commits
-
By Kees Cook

Under SMAP/PAN/etc., we cannot write directly to userspace memory, so this rearranges the test bytes to get written through copy_to_user(). Additionally drops the bad copy_from_user() test that would trigger a memcpy() against userspace on failure.

Signed-off-by: Kees Cook <keescook@chromium.org>
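The rearranged write, sketched (kmem/usermem are the test's kernel and user buffers):

    /* before: direct stores like usermem[0] = val, which SMAP/PAN forbid */
    memset(kmem, 0x5A, PAGE_SIZE);
    ret |= test(copy_to_user(usermem, kmem, PAGE_SIZE),
                "legitimate copy_to_user failed");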
-
By Hoeun Ryu

During usercopy, the destination buffer will be zeroed if copy_from_user() or get_user() fails. This patch adds testcases for it. The destination buffer is set to a non-zero value before the illegal copy_from_user() or get_user() is executed, and the buffer is compared to zero after the usercopy is done.

Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
[kees: clarified commit log, dropped second kmalloc]
Signed-off-by: Kees Cook <keescook@chromium.org>
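One such testcase, sketched (bad_usermem is the test's deliberately illegal source pointer):

    memset(kmem, 0x5A, PAGE_SIZE);
    /* the copy must fail... */
    ret |= test(!copy_from_user(kmem, bad_usermem, PAGE_SIZE),
                "illegal copy_from_user passed");
    /* ...and the destination must have been zeroed on failure */
    ret |= test(memchr_inv(kmem, 0, PAGE_SIZE) != NULL,
                "zeroing failure for illegal copy_from_user");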
-
- 16 Feb 2017, 1 commit
-
By David S. Miller

This reverts commits:

  6a254780
  9dbbfb0a
  40137906

It's too risky to put in this late in the release cycle. We'll put these changes into the next merge window instead.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 Feb 2017, 1 commit
-
By Herbert Xu

This patch adds code that handles GFP_ATOMIC kmalloc failure on insertion. As we cannot use vmalloc, we solve it by making our hash table nested. That is, we allocate single pages at each level and reach our desired table size by nesting them.

When a nested table is created, only a single page is allocated at the top level. Lower levels are allocated on demand during insertion. Therefore, for each insertion to succeed, only two (non-consecutive) pages are needed.

After a nested table is created, a rehash will be scheduled in order to switch to a vmalloced table as soon as possible. Also, the rehash code will never rehash into a nested table. If we detect a nested table during a rehash, the rehash will be aborted and a new rehash will be scheduled.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Feb 2017, 3 commits
-
By Kees Cook

Currently CONFIG_TIMER_STATS exposes process information across namespaces:

  kernel/time/timer_list.c print_timer():
    SEQ_printf(m, ", %s/%d", tmp, timer->start_pid);

  /proc/timer_list:
    #11: <0000000000000000>, hrtimer_wakeup, S:01, do_nanosleep, cron/2570

Given that the tracer can give the same information, this patch entirely removes CONFIG_TIMER_STATS.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: John Stultz <john.stultz@linaro.org>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: linux-doc@vger.kernel.org
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Xing Gao <xgao01@email.wm.edu>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Jessica Frazelle <me@jessfraz.com>
Cc: kernel-hardening@lists.openwall.com
Cc: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Michal Marek <mmarek@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-api@vger.kernel.org
Cc: Arjan van de Ven <arjan@linux.intel.com>
Link: http://lkml.kernel.org/r/20170208192659.GA32582@beast
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Waiman Long

As suggested by Ingo, the debug_objects_alloc counter is now renamed to debug_objects_allocated, with minor twists to the comment and debug output.

Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1486503630-1501-1-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
By Peter Zijlstra

Provide refcount_t, an atomic_t-like primitive built just for refcounting. It provides saturation semantics such that overflow becomes impossible, and thereby 'spurious' use-after-free is avoided.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
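The saturation semantics, sketched (simplified; the merged refcount.h spells this out as cmpxchg loops with warnings on each misuse case):

    typedef struct refcount_struct {
            atomic_t refs;
    } refcount_t;

    static inline void refcount_inc_sketch(refcount_t *r)
    {
            unsigned int val = atomic_read(&r->refs);
            unsigned int old;

            for (;;) {
                    if (val == UINT_MAX)    /* saturated: pin forever, never wrap */
                            return;
                    WARN_ONCE(val == 0, "refcount_inc on 0; use-after-free\n");
                    old = atomic_cmpxchg(&r->refs, val, val + 1);
                    if (old == val)
                            break;
                    val = old;              /* raced; retry with the fresh value */
            }
    }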
-
- 8 Feb 2017, 1 commit
-
By Sergey Senozhatsky

A preparation patch for the printk_safe work. No functional change:

- rename nmi.c to printk_safe.c
- add a `printk_safe' prefix to some of the exported functions (those used both by printk-safe and printk-nmi)

Link: http://lkml.kernel.org/r/20161227141611.940-3-sergey.senozhatsky@gmail.com
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Tejun Heo <tj@kernel.org>
Cc: Calvin Owens <calvinowens@fb.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
-
- 6 Feb 2017, 1 commit
-
By Waiman Long

On a large SMP system with many CPUs, the global pool_lock may become a performance bottleneck, as all the CPUs that need to allocate or free debug objects have to take the lock. That can sometimes cause soft lockups like:

  NMI watchdog: BUG: soft lockup - CPU#35 stuck for 22s! [rcuos/1:21]
  ...
  RIP: 0010:[<ffffffff817c216b>] [<ffffffff817c216b>] _raw_spin_unlock_irqrestore+0x3b/0x60
  ...
  Call Trace:
   [<ffffffff813f40d1>] free_object+0x81/0xb0
   [<ffffffff813f4f33>] debug_check_no_obj_freed+0x193/0x220
   [<ffffffff81101a59>] ? trace_hardirqs_on_caller+0xf9/0x1c0
   [<ffffffff81284996>] ? file_free_rcu+0x36/0x60
   [<ffffffff81251712>] kmem_cache_free+0xd2/0x380
   [<ffffffff81284960>] ? fput+0x90/0x90
   [<ffffffff81284996>] file_free_rcu+0x36/0x60
   [<ffffffff81124c23>] rcu_nocb_kthread+0x1b3/0x550
   [<ffffffff81124b71>] ? rcu_nocb_kthread+0x101/0x550
   [<ffffffff81124a70>] ? sync_exp_work_done.constprop.63+0x50/0x50
   [<ffffffff810c59d1>] kthread+0x101/0x120
   [<ffffffff81101a59>] ? trace_hardirqs_on_caller+0xf9/0x1c0
   [<ffffffff817c2d32>] ret_from_fork+0x22/0x50

To reduce the amount of contention on the pool_lock, the actual kmem_cache_free() of the debug objects will be delayed if the pool_lock is busy. This will temporarily increase the number of free objects available in the free pool while the system is busy. As a result, the number of kmem_cache allocations and freeings is reduced. To further reduce the lock operations, free debug objects in batches of four.

Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: "Du Changbin" <changbin.du@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Stancek <jstancek@redhat.com>
Link: http://lkml.kernel.org/r/1483647425-4135-4-git-send-email-longman@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
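A rough sketch of the idea only, not the merged code (raw_spin_is_contended() stands in for the actual lock-busy check; obj_pool and obj_cache are the debugobjects globals):

    static void free_object_sketch(struct debug_obj *obj)
    {
            unsigned long flags;

            raw_spin_lock_irqsave(&pool_lock, flags);
            hlist_add_head(&obj->node, &obj_pool);
            obj_pool_free++;

            /* trim the pool only while nobody else wants the lock,
             * handing objects back to the slab in batches of four */
            while (obj_pool_free > debug_objects_pool_size &&
                   !raw_spin_is_contended(&pool_lock)) {
                    struct debug_obj *batch[4];
                    int i, n = 0;

                    while (n < 4 && obj_pool_free > debug_objects_pool_size) {
                            batch[n] = hlist_entry(obj_pool.first,
                                                   struct debug_obj, node);
                            hlist_del(&batch[n]->node);
                            obj_pool_free--;
                            n++;
                    }
                    raw_spin_unlock_irqrestore(&pool_lock, flags);
                    for (i = 0; i < n; i++)
                            kmem_cache_free(obj_cache, batch[i]);
                    raw_spin_lock_irqsave(&pool_lock, flags);
            }
            raw_spin_unlock_irqrestore(&pool_lock, flags);
    }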
-
- 4 Feb 2017, 3 commits
-
By Waiman Long

On large SMP systems with hundreds of CPUs, the current thresholds for allocating and freeing debug objects (256 and 1024 respectively) may not work well. This can cause a lot of needless calls to kmem_cache_alloc() and kmem_cache_free() on those systems.

To alleviate this thrashing problem, the object freeing threshold is now increased to "1024 + # of CPUs * 32", and the object allocation threshold is increased to "256 + # of CPUs * 4". That should make the debug objects subsystem scale better with the number of CPUs available in the system.

Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: "Du Changbin" <changbin.du@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Stancek <jstancek@redhat.com>
Link: http://lkml.kernel.org/r/1483647425-4135-3-git-send-email-longman@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
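In code, the scaling is a one-time adjustment, roughly (constant names follow lib/debugobjects.c):

    /* e.g. from debug_objects_mem_init() */
    debug_objects_pool_size = ODEBUG_POOL_SIZE +
                              num_possible_cpus() * 32;     /* free above this */
    debug_objects_pool_min_level = ODEBUG_POOL_MIN_LEVEL +
                                   num_possible_cpus() * 4; /* refill below this */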
-
By Waiman Long

New debugfs stat counters are added to track the numbers of kmem_cache_alloc() and kmem_cache_free() function calls, to get a sense of how the internal debug objects cache management is performing.

Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: "Du Changbin" <changbin.du@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Stancek <jstancek@redhat.com>
Link: http://lkml.kernel.org/r/1483647425-4135-2-git-send-email-longman@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Jiri Pirko

This introduces an infrastructure for the management of linear priority areas. Priority order in an array matters; however, the order of items inside a priority group does not. As an initial implementation, the L-sort algorithm is used; it is quite trivial. A more advanced algorithm, called P-sort, will be introduced as a follow-up. The infrastructure is prepared for other algorithms. Alongside this, a testing module is introduced as well.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
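A hedged usage sketch, assuming the parman API as merged (parman_create(), parman_prio_init(), parman_item_add()); exact field names may differ slightly:

    /* callbacks let parman drive the user's backing array */
    static int my_resize(void *priv, unsigned long new_count)
    {
            return 0;       /* reallocate the backing array to new_count entries */
    }

    static void my_move(void *priv, unsigned long from_index,
                        unsigned long to_index, unsigned long count)
    {
            /* relocate count entries within the backing array */
    }

    static const struct parman_ops my_ops = {
            .base_count     = 16,
            .resize_step    = 16,
            .resize         = my_resize,
            .move           = my_move,
            .algo           = PARMAN_ALGO_TYPE_LSORT,
    };

    static int example(void)
    {
            struct parman *p = parman_create(&my_ops, NULL);
            struct parman_prio prio;
            struct parman_item item;

            if (!p)
                    return -ENOMEM;
            parman_prio_init(p, &prio, 10);          /* the priority group */
            return parman_item_add(p, &prio, &item); /* item gets an index */
    }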
-
- 3 Feb 2017, 1 commit
-
By Jason A. Donenfeld

The "half md4" transform should not be used by any new code. And fortunately, it's only used now by ext4. Since ext4 supports several hashing methods, at some point it might be desirable to move to something like SipHash. As an intermediate step, remove half md4 from cryptohash.h and lib, and make it just a local function in ext4's hash.c. There's precedent for doing this; the other function ext4 can use for its hashes -- TEA -- is also implemented in the same place. Also, by being a local function, this might allow gcc to perform some additional optimizations.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
-
- 27 Jan 2017, 1 commit
-
By Omar Sandoval

This is useful debugging information that will be used in the blk-mq debugfs directory.

Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Omar Sandoval <osandov@fb.com>

Changed 'weight' to 'busy'.

Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 25 Jan 2017, 5 commits
-
By Luis R. Rodriguez

We have no test interface for the custom fallback mechanism. Provide one. This tests both the custom fallback mechanism and cancelling it.

Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Luis R. Rodriguez

This simplifies init and exit.

Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Luis R. Rodriguez

This will make further changes easier to review.

Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By zhong jiang

Recently, I've found cases in which ioremap_page_range was used incorrectly in external modules, leading to crashes. This can be partly attributed to the fact that ioremap_page_range is lower-level, with fewer protections, than the other functions an external module would typically call. Those include:

  ioremap_cache
  ioremap_nocache
  ioremap_prot
  ioremap_uc
  ioremap_wc
  ioremap_wt

...each of which wraps __ioremap_caller, which in turn provides a safer way to achieve the mapping. Therefore, stop EXPORT-ing ioremap_page_range.

Link: http://lkml.kernel.org/r/1485173220-29010-1-git-send-email-zhongjiang@huawei.com
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Suggested-by: John Hubbard <jhubbard@nvidia.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
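For module authors, the safer pattern looks like this (a sketch; phys_base and REG_OFFSET are placeholders):

    void __iomem *regs;

    /* a checked wrapper around __ioremap_caller, instead of raw
     * ioremap_page_range() against hand-built virtual addresses */
    regs = ioremap_nocache(phys_base, SZ_4K);
    if (!regs)
            return -ENOMEM;

    writel(0x1, regs + REG_OFFSET);  /* hypothetical register poke */
    iounmap(regs);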
-
By Matthew Wilcox

The newly introduced warning in radix_tree_free_nodes() was testing the wrong variable; it should have been 'old' instead of 'node'.

Fixes: ea07b862 ("mm: workingset: fix use-after-free in shadow node shrinker")
Link: http://lkml.kernel.org/r/20170118163746.GA32495@cmpxchg.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 Jan 2017, 1 commit
-
By Matt Fleming

While debugging a performance issue I needed to understand why RCU softirqs were firing so frequently. Unfortunately, the RCU callback tracepoints are hidden behind CONFIG_RCU_TRACE, which defaults to off in the upstream kernel and is likely to also be disabled in enterprise distribution configs. Enable it by default for CONFIG_TREE_RCU. However, we must keep it disabled for tiny RCU, because it would otherwise pull in a large amount of code that would make tiny RCU less than tiny.

I ran some file-system-metadata-intensive workloads (git checkout, FS-Mark) on a variety of machines with this patch and saw no detectable change in performance.

Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
- 23 Jan 2017, 1 commit
-
By Chris Wilson

The allocation for the bitmap may become very large (larger than MAX_ORDER) for large requests. We fail gracefully by falling back to trial division, so disable the warning from kmalloc:

  [ 521.961092] WARNING: CPU: 0 PID: 30637 at mm/page_alloc.c:3548 __alloc_pages_slowpath+0x237/0x9a0
  [ 521.961105] Modules linked in: i915(+) drm_kms_helper intel_gtt prime_numbers [last unloaded: drm_kms_helper]
  [ 521.961126] CPU: 0 PID: 30637 Comm: drv_selftest Tainted: G U W 4.10.0-rc3+ #321
  [ 521.961137] Hardware name: / , BIOS PYBSWCEL.86A.0027.2015.0507.1758 05/07/2015
  [ 521.961148] Call Trace:
  [ 521.961161]  dump_stack+0x4d/0x6f
  [ 521.961172]  __warn+0xc1/0xe0
  [ 521.961181]  warn_slowpath_null+0x18/0x20
  [ 521.961189]  __alloc_pages_slowpath+0x237/0x9a0
  [ 521.961200]  ? sg_init_table+0x1a/0x40
  [ 521.961208]  ? get_page_from_freelist+0x3fa/0x910
  [ 521.961275]  ? i915_gem_object_get_sg+0x272/0x2b0 [i915]
  [ 521.961285]  __alloc_pages_nodemask+0x1ea/0x220
  [ 521.961295]  kmalloc_order+0x1c/0x50
  [ 521.961304]  __kmalloc+0x115/0x170
  [ 521.961314]  expand_to_next_prime+0x43/0x180 [prime_numbers]
  [ 521.961324]  next_prime_number+0x47/0xc0 [prime_numbers]
  [ 521.961377]  igt_vma_rotate+0x386/0x590 [i915]
  [ 521.961429]  i915_subtests+0x37/0xc0 [i915]
  [ 521.961481]  i915_vma_mock_selftests+0x3d/0x70 [i915]
  [ 521.961532]  run_selftests+0x16e/0x1f0 [i915]
  [ 521.961541]  ? 0xffffffffa02a4000
  [ 521.961592]  i915_mock_selftests+0x29/0x40 [i915]
  [ 521.961638]  i915_init+0xa/0x5e [i915]
  [ 521.961646]  ? 0xffffffffa02a4000
  [ 521.961655]  do_one_initcall+0x3e/0x160
  [ 521.961664]  ? __vunmap+0x7c/0xc0
  [ 521.961672]  ? vfree+0x29/0x70
  [ 521.961680]  ? kmem_cache_alloc+0xcf/0x120
  [ 521.961690]  do_init_module+0x55/0x1c4
  [ 521.961699]  load_module+0x1f3f/0x25b0
  [ 521.961707]  ? __symbol_put+0x40/0x40
  [ 521.961716]  ? kernel_read_file+0x100/0x190
  [ 521.961725]  SYSC_finit_module+0xbc/0xf0
  [ 521.961734]  SyS_finit_module+0x9/0x10
  [ 521.961744]  entry_SYSCALL_64_fastpath+0x17/0x98
  [ 521.961752] RIP: 0033:0x7f111aca4119
  [ 521.961760] RSP: 002b:00007ffd8be6cbe8 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
  [ 521.961773] RAX: ffffffffffffffda RBX: 0000000000000006 RCX: 00007f111aca4119
  [ 521.961781] RDX: 0000000000000000 RSI: 000055dfc18bc8e0 RDI: 0000000000000006
  [ 521.961789] RBP: 00007ffd8be6bbe0 R08: 0000000000000000 R09: 0000000000000000
  [ 521.961796] R10: 0000000000000006 R11: 0000000000000246 R12: 0000000000000005
  [ 521.961805] R13: 000055dfc18bd3a0 R14: 00007ffd8be6bbc0 R15: 0000000000000005

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20170113235119.22528-1-chris@chris-wilson.co.uk
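The fix itself is small; roughly (a sketch, where bitmap_bytes is a placeholder for the computed bitmap size):

    /* a failed allocation here is handled gracefully by falling back
     * to trial division, so keep the page allocator quiet about it */
    new = kmalloc(sizeof(*new) + bitmap_bytes,
                  GFP_KERNEL | __GFP_NOWARN);
    if (!new)
            return false;   /* caller falls back to slow_is_prime_number() */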
-
- 20 Jan 2017, 1 commit
-
By Geliang Tang

Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: John Stultz <john.stultz@linaro.org>
Link: http://lkml.kernel.org/r/0d5cf199ac43792df0b6f7e2145545c30fa1dbbe.1482222135.git.geliangtang@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 19 Jan 2017, 2 commits
-
By Omar Sandoval

When we resize a struct sbitmap_queue, we update the wakeup batch size, but we don't update the wait count in the struct sbq_wait_state. If we resized down from a size which could use a bigger batch size, these counts could be too large and cause us to miss necessary wakeups. To fix this, update the wait counts when we resize (ensuring some careful memory ordering so that it's safe w.r.t. concurrent clears).

This also fixes a theoretical issue where two threads could end up bumping the wait count up by the batch size, which could also potentially lead to hangs.

Reported-by: Martin Raiber <martin@urbackup.org>
Fixes: e3a2b3f9 ("blk-mq: allow changing of queue depth through sysfs")
Fixes: 2971c35f ("blk-mq: bitmap tag: fix race on blk_mq_bitmap_tags::wake_cnt")
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
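A sketch of the fixed resize path (close to the merged lib/sbitmap.c; minor details may differ):

    void sbitmap_queue_resize(struct sbitmap_queue *sbq, unsigned int depth)
    {
            unsigned int wake_batch = sbq_calc_wake_batch(depth);
            int i;

            if (sbq->wake_batch != wake_batch) {
                    WRITE_ONCE(sbq->wake_batch, wake_batch);
                    /* pairs with the barrier in sbq_wake_up(): publish the
                     * new batch before resetting the per-waitqueue counts */
                    smp_mb__before_atomic();
                    for (i = 0; i < SBQ_WAIT_QUEUES; i++)
                            atomic_set(&sbq->ws[i].wait_cnt, 1);
            }
            sbitmap_resize(&sbq->sb, depth);
    }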
-
By Omar Sandoval

We always do an atomic clear_bit() right before we call sbq_wake_up(), so we can use smp_mb__after_atomic(). While we're here, comment the memory barriers in here a little more.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
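The pattern in question, sketched:

    sbitmap_clear_bit(&sbq->sb, nr);        /* atomic RMW */
    /* cheaper than a full smp_mb(): it only upgrades the ordering the
     * atomic clear_bit() already gives us, making the freed bit visible
     * before we inspect the waitqueue state */
    smp_mb__after_atomic();
    sbq_wake_up(sbq);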
-
- 16 Jan 2017, 1 commit
-
By Nikita Yushchenko

Some drivers depend on page mappings being page aligned. Swiotlb already enforces such alignment for mappings greater than a page; extend that to page-sized mappings as well. Without this fix, nvme hits BUG() in nvme_setup_prps(), because that routine assumes page-aligned mappings.

Signed-off-by: Nikita Yushchenko <nikita.yoush@cogentembedded.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
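In the bounce-buffer slot search this is a one-character change, roughly (sketched from lib/swiotlb.c's stride logic):

    /* search io-tlb slots in page-sized strides for mappings of at
     * least a page, so the bounce buffer stays page aligned; this
     * previously read 'size > PAGE_SIZE', leaving exactly page-sized
     * mappings unaligned */
    if (size >= PAGE_SIZE)
            stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
    else
            stride = 1;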
-
- 15 Jan 2017, 1 commit
-
By Al Viro

The logic in pipe_advance() used to release all buffers past the new position failed when the number of buffers to release was equal to pipe->buffers. If that happened, none of them were released, leaving the pipe full. Worse, it was trivial to trigger, and we end up with a pipe full of uninitialized pages. IOW, it's an infoleak.

Cc: stable@vger.kernel.org # v4.9
Reported-by: "Alan J. Wylie" <alan@wylie.me.uk>
Tested-by: "Alan J. Wylie" <alan@wylie.me.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 14 Jan 2017, 1 commit
-
By Chris Wilson

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Nicolai Hähnle <nhaehnle@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20161201114711.28697-4-chris@chris-wilson.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 12 Jan 2017, 1 commit
-
By Felix Fietkau

Many embedded boards have a disconnected TTL-level serial port which can generate garbage, leading to spurious false sysrq detects.

Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 11 Jan 2017, 1 commit
-
By Laura Abbott

DEBUG_VIRTUAL currently depends on DEBUG_KERNEL && X86. arm64 is getting the same support. Rather than add a list of architectures, switch this to ARCH_HAS_DEBUG_VIRTUAL and let architectures select it as appropriate.

Acked-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-