- 18 October 2013 (2 commits)
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound (committed by Linus Torvalds)
Pull sound fixes from Takashi Iwai:
 "All reasonably small fixes as rc6: a HD-audio mic fix, a us122l mmap regression fix, and kernel memory leak fix in hdsp driver. Hopefully this will be the last pull request for 3.12..."

* tag 'sound-3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
  ALSA: hdsp - info leak in snd_hdsp_hwdep_ioctl()
  ALSA: us122l: Fix pcm_usb_stream mmapping regression
  ALSA: hda - Fix inverted internal mic not indicated on some machines
-
git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security (committed by Linus Torvalds)
Pull apparmor fixes from James Morris:
 "A couple more regressions fixed"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
  apparmor: fix bad lock balance when introspecting policy
  apparmor: fix memleak of the profile hash
-
- 17 October 2013 (25 commits)
-
-
Committed by Linus Torvalds
Merge misc fixes from Andrew Morton.

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (21 commits)
  mm: revert mremap pud_free anti-fix
  mm: fix BUG in __split_huge_page_pmd
  swap: fix set_blocksize race during swapon/swapoff
  procfs: call default get_unmapped_area on MMU-present architectures
  procfs: fix unintended truncation of returned mapped address
  writeback: fix negative bdi max pause
  percpu_refcount: export symbols
  fs: buffer: move allocation failure loop into the allocator
  mm: memcg: handle non-error OOM situations more gracefully
  tools/testing/selftests: fix uninitialized variable
  block/partitions/efi.c: treat size mismatch as a warning, not an error
  mm: hugetlb: initialize PG_reserved for tail pages of gigantic compound pages
  mm/zswap: bugfix: memory leak when re-swapon
  mm: /proc/pid/pagemap: inspect _PAGE_SOFT_DIRTY only on present pages
  mm: migration: do not lose soft dirty bit if page is in migration state
  gcov: MAINTAINERS: Add an entry for gcov
  mm/hugetlb.c: correct missing private flag clearing
  mm/vmscan.c: don't forget to free shrinker->nr_deferred
  ipc/sem.c: synchronize semop and semctl with IPC_RMID
  ipc: update locking scheme comments
  ...
-
Committed by Hugh Dickins
Revert commit 1ecfd533 ("mm/mremap.c: call pud_free() after fail calling pmd_alloc()"). The original code was correct: pud_alloc(), pmd_alloc(), pte_alloc_map() ensure that the pud, pmd, pt is already allocated, and seldom do they need to allocate; on failure, upper levels are freed if appropriate by the subsequent do_munmap(). Whereas commit 1ecfd533 did an unconditional pud_free() of a most-likely still-in-use pud: saved only by the near-impossibility of pmd_alloc() failing.

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Hugh Dickins
Occasionally we hit the BUG_ON(pmd_trans_huge(*pmd)) at the end of __split_huge_page_pmd(): seen when doing madvise(,,MADV_DONTNEED).

It's invalid: we don't always have down_write of mmap_sem there: a racing do_huge_pmd_wp_page() might have copied-on-write to another huge page before our split_huge_page() got the anon_vma lock.

Forget the BUG_ON, just go back and try again if this happens.

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: David Rientjes <rientjes@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Krzysztof Kozlowski
Fix a race between swapoff and swapon. Swapoff used old_block_size from swap_info outside of swapon_mutex, so it could be overwritten by a concurrent swapon.

The race has a visible effect only if more than one swap block device exists with different block sizes (e.g. /dev/sda1 with block size 4096 and /dev/sdb1 with 512). In such a case it leads to setting the block size of the swapped-off device to the wrong value.

The bug can be triggered with multiple concurrent swapoff and swapon:

0. Swap for some device is on.
1. swapoff: First swapoff is called on this device and "struct swap_info_struct *p" is assigned. This is done under swap_lock, however this lock is released for the call to try_to_unuse().
2. swapon: After the assignment above (and before swapoff acquires swapon_mutex & swap_lock) swapon is called on the same device. p->old_block_size is assigned the value of the device's block size. This block size should be the same as before, but sometimes it is not. The swapon ends successfully.
3. swapoff: Swapoff resumes, grabs the locks and mutex and continues to disable this swap device. Now it sets the block size to the value taken from swap_info, which was overwritten by swapon in step 2.

Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reported-by: Weijie Yang <weijie.yang.kh@gmail.com>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Minchan Kim <minchan@kernel.org>
Acked-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
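A minimal sketch of the ordering change described above, as applied in sys_swapoff() (heavily simplified and only illustrative; the real function does much more, but the point is that p->old_block_size is read only while swapon_mutex and swap_lock are held):

	mutex_lock(&swapon_mutex);
	spin_lock(&swap_lock);
	/* ... mark the device as no longer in use ... */
	old_block_size = p->old_block_size;	/* read under the locks, so a concurrent
						   swapon() cannot have overwritten it first */
	spin_unlock(&swap_lock);
	mutex_unlock(&swapon_mutex);
	/* ... */
	set_blocksize(bdev, old_block_size);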
-
Committed by HATAYAMA Daisuke
Commit c4fe2448 ("sparc: fix PCI device proc file mmap(2)") added proc_reg_get_unmapped_area in proc_reg_file_ops and proc_reg_file_ops_no_compat, by which mmap now always returns EIO if the get_unmapped_area method is not defined for the target procfs file. This causes a regression of mmap on /proc/vmcore.

To address this issue, like get_unmapped_area(), call the default current->mm->get_unmapped_area on MMU-present architectures if pde->proc_fops->get_unmapped_area, i.e. the one in the actual file operations of the procfs file, is not defined.

Reported-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by HATAYAMA Daisuke
Currently, proc_reg_get_unmapped_area truncates the upper 32 bits of the mapped virtual address returned from the get_unmapped_area method in pde->proc_fops, because the variable rv is a signed integer on x86_64. That is too small to hold a virtual address of type unsigned long on x86_64, where a signed integer is 4 bytes while an unsigned long is 8 bytes. To fix this issue, use unsigned long instead.

Fixes a regression added in commit c4fe2448 ("sparc: fix PCI device proc file mmap(2)").

Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
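The truncation described above is a plain C effect and can be reproduced outside the kernel; a small userspace sketch (illustrative only, not the procfs code; the exact conversion behaviour is what gcc on x86_64 does):

	#include <stdio.h>

	int main(void)
	{
		unsigned long addr = 0x7f12b4567000UL;	/* a plausible 64-bit mapping address */
		int rv = addr;				/* the old signed 32-bit temporary */
		unsigned long fixed = addr;		/* what the fix stores the address in */

		printf("returned : %#lx\n", addr);
		printf("via int  : %#lx\n", (unsigned long)rv);	/* upper bits lost, low half sign-extended */
		printf("via ulong: %#lx\n", fixed);
		return 0;
	}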
-
Committed by Fengguang Wu
Toralf runs trinity on UML/i386. After some time it hangs and the last message line is:

	BUG: soft lockup - CPU#0 stuck for 22s! [trinity-child0:1521]

It's found that pages_dirtied becomes very large, more than 1000000000 pages in this case:

	period = HZ * pages_dirtied / task_ratelimit;
	BUG_ON(pages_dirtied > 2000000000);
	BUG_ON(pages_dirtied > 1000000000);     <---------

UML debug printf shows that we got a negative pause here:

	ick: pause : -984
	ick: pages_dirtied : 0
	ick: task_ratelimit: 0

	 pause:
	+	if (pause < 0)  {
	+		extern int printf(char *, ...);
	+		printf("ick : pause : %li\n", pause);
	+		printf("ick: pages_dirtied : %lu\n", pages_dirtied);
	+		printf("ick: task_ratelimit: %lu\n", task_ratelimit);
	+		BUG_ON(1);
	+	}
	 	trace_balance_dirty_pages(bdi,

Since pause is bounded by [min_pause, max_pause], where min_pause is in turn bounded by max_pause, it's suspected and demonstrated that the max_pause calculation goes wrong:

	ick: pause : -717
	ick: min_pause : -177
	ick: max_pause : -717
	ick: pages_dirtied : 14
	ick: task_ratelimit: 0

The problem lies in the two "long = unsigned long" assignments in bdi_max_pause(), which may go negative if the highest bit is 1, and the min_t(long, ...) check fails to protect against falling below 0. Fix all of them by using "unsigned long" throughout the function.

Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Toralf Förster <toralf.foerster@gmx.de>
Tested-by: Toralf Förster <toralf.foerster@gmx.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Richard Weinberger <richard@nod.at>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
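The underlying pitfall, a "long = unsigned long" assignment followed by a signed min_t() that then keeps the negative value, can be shown in isolation (a standalone userspace sketch, not the actual bdi_max_pause() code; min_t is redefined here only for the demo):

	#include <stdio.h>

	#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

	int main(void)
	{
		unsigned long udelay = 0x8000000000000123UL;	/* highest bit set */
		long t = udelay;			/* wraps negative on x86_64 */
		long pause = min_t(long, t, 200);	/* the signed "clamp" picks the negative value */

		printf("t = %ld, pause = %ld\n", t, pause);
		return 0;
	}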
-
Committed by Matias Bjorling
Export the interface to be used within modules.

Signed-off-by: Matias Bjorling <m@bjorling.me>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Johannes Weiner
Buffer allocation has a very crude indefinite loop around waking the flusher threads and performing global NOFS direct reclaim because it can not handle allocation failures.

The most immediate problem with this is that the allocation may fail due to a memory cgroup limit, where flushers + direct reclaim might not make any progress towards resolving the situation at all. Because unlike the global case, a memory cgroup may not have any cache at all, only anonymous pages but no swap. This situation will lead to a reclaim livelock with insane IO from waking the flushers and thrashing unrelated filesystem cache in a tight loop.

Use __GFP_NOFAIL allocations for buffers for now. This makes sure that any looping happens in the page allocator, which knows how to orchestrate kswapd, direct reclaim, and the flushers sensibly. It also allows memory cgroups to detect allocations that can't handle failure and will allow them to ultimately bypass the limit if reclaim can not make progress.

Reported-by: azurIt <azurit@pobox.sk>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
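A hedged sketch of the before/after shape described above (simplified; the real call sites are in fs/buffer.c and the surrounding code differs):

	/* old pattern: loop indefinitely around the allocator */
	for (;;) {
		page = find_or_create_page(inode->i_mapping, index, GFP_NOFS);
		if (page)
			break;
		free_more_memory();	/* wake flushers, do global direct reclaim, retry */
	}

	/* new pattern: let the page allocator own the retry policy */
	page = find_or_create_page(inode->i_mapping, index,
				   GFP_NOFS | __GFP_NOFAIL);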
-
Committed by Johannes Weiner
Commit 3812c8c8 ("mm: memcg: do not trap chargers with full callstack on OOM") assumed that only a few places that can trigger a memcg OOM situation do not return VM_FAULT_OOM, like optional page cache readahead. But there are many more, and it's impractical to annotate them all.

First of all, we don't want to invoke the OOM killer when the failed allocation is gracefully handled, so defer the actual kill to the end of the fault handling as well. This simplifies the code quite a bit as an added bonus.

Second, since a failed allocation might not be the abrupt end of the fault, the memcg OOM handler needs to be re-entrant until the fault finishes for subsequent allocation attempts. If an allocation is attempted after the task already OOMed, allow it to bypass the limit so that it can quickly finish the fault and invoke the OOM killer.

Reported-by: azurIt <azurit@pobox.sk>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Felipe Pena
The err variable is intended to receive the timer_create() return value before it is checked.

Signed-off-by: Felipe Pena <felipensp@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Doug Anderson
In commit 27a7c642 ("partitions/efi: account for pmbr size in lba") we started treating bad sizes in the lba field of the partition that has the 0xEE (GPT protective) type as errors.

However, we may run into these "bad sizes" in the real world if someone uses dd to copy an image from a smaller disk to a bigger disk. Since this case used to work (even without using force_gpt), keep it working and treat the size mismatch as a warning instead of an error.

Reported-by: Josh Triplett <josh@joshtriplett.org>
Reported-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Doug Anderson <dianders@chromium.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Acked-by: Davidlohr Bueso <davidlohr@hp.com>
Tested-by: Artem Bityutskiy <dedekind1@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Andrea Arcangeli
Commit 11feeb49 ("kvm: optimize away THP checks in kvm_is_mmio_pfn()") introduced a memory leak when KVM is run on gigantic compound pages.

That commit depends on the assumption that PG_reserved is identical for all head and tail pages of a compound page. So that if get_user_pages returns a tail page, we don't need to check the head page in order to know if we deal with a reserved page that requires different refcounting.

The assumption that PG_reserved is the same for head and tail pages is certainly correct for THP and regular hugepages, but gigantic hugepages allocated through bootmem don't clear the PG_reserved on the tail pages (the clearing of PG_reserved is done later only if the gigantic hugepage is freed).

This patch corrects the gigantic compound page initialization so that we can retain the optimization in 11feeb49. The cacheline was already modified in order to set PG_tail so this won't affect the boot time of large memory systems.

[akpm@linux-foundation.org: tweak comment layout and grammar]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: andy123 <ajs124.ajs124@gmail.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Acked-by: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Weijie Yang
zswap_tree is not freed when swapoff, and it got re-kmalloced in swapon, so a memory leak occurs. Free the memory of zswap_tree in zswap_frontswap_invalidate_area().

Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Reviewed-by: Bob Liu <bob.liu@oracle.com>
Cc: Minchan Kim <minchan@kernel.org>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Cc: <stable@vger.kernel.org>

From: Weijie Yang <weijie.yang@samsung.com>
Subject: mm/zswap: bugfix: memory leak when invalidate and reclaim occur concurrently

Consider the following scenario:

thread 0: reclaim entry x (get refcount, but not call zswap_get_swap_cache_page)
thread 1: call zswap_frontswap_invalidate_page to invalidate entry x.
          finished, entry x and its zbud is not freed as its refcount != 0
          now, the swap_map[x] = 0
thread 0: now call zswap_get_swap_cache_page
          swapcache_prepare return -ENOENT because entry x is not used any more
          zswap_get_swap_cache_page return ZSWAP_SWAPCACHE_NOMEM
          zswap_writeback_entry do nothing except put refcount

Now, the memory of zswap_entry x and its zpage leak.

Modify:
- check the refcount in the fail path, free memory if it is not referenced.
- use ZSWAP_SWAPCACHE_FAIL instead of ZSWAP_SWAPCACHE_NOMEM as the fail path can be caused not only by nomem but also by invalidate.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Reviewed-by: Bob Liu <bob.liu@oracle.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: <stable@vger.kernel.org>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Cyrill Gorcunov
If a page we are inspecting is in swap we may occasionally report it as having the soft dirty bit (even if it is clean). The pte_soft_dirty helper should be called on present pte only.

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Cyrill Gorcunov
If page migration is turned on in the config and the page is migrating, we may lose the soft dirty bit. If fork and mprotect are called on migrating pages (once migration is complete) the pages do not obtain the soft dirty bit in the corresponding pte entries. Fix it by adding an appropriate test on swap entries.

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Peter Oberparleiter
Signed-off-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Joonsoo Kim
We should clear the page's private flag when returning the page to the hugepage pool. Otherwise, a marked hugepage can be allocated to a user who tries to allocate a non-reserved hugepage. If this user fails to map the hugepage, they will try to return the page to the hugepage pool. Since this page has the private flag set, resv_huge_pages would mistakenly increase. This patch fixes this situation.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hillf Danton <dhillf@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Andrew Vagin
This leak was added by commit 1d3d4437 ("vmscan: per-node deferred work").

	unreferenced object 0xffff88006ada3bd0 (size 8):
	  comm "criu", pid 14781, jiffies 4295238251 (age 105.641s)
	  hex dump (first 8 bytes):
	    00 00 00 00 00 00 00 00    ........
	  backtrace:
	    [<ffffffff8170caee>] kmemleak_alloc+0x5e/0xc0
	    [<ffffffff811c0527>] __kmalloc+0x247/0x310
	    [<ffffffff8117848c>] register_shrinker+0x3c/0xa0
	    [<ffffffff811e115b>] sget+0x5ab/0x670
	    [<ffffffff812532f4>] proc_mount+0x54/0x170
	    [<ffffffff811e1893>] mount_fs+0x43/0x1b0
	    [<ffffffff81202dd2>] vfs_kern_mount+0x72/0x110
	    [<ffffffff81202e89>] kern_mount_data+0x19/0x30
	    [<ffffffff812530a0>] pid_ns_prepare_proc+0x20/0x40
	    [<ffffffff81083c56>] alloc_pid+0x466/0x4a0
	    [<ffffffff8105aeda>] copy_process+0xc6a/0x1860
	    [<ffffffff8105beab>] do_fork+0x8b/0x370
	    [<ffffffff8105c1a6>] SyS_clone+0x16/0x20
	    [<ffffffff8171f739>] stub_clone+0x69/0x90
	    [<ffffffffffffffff>] 0xffffffffffffffff

Signed-off-by: Andrew Vagin <avagin@openvz.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Glauber Costa <glommer@openvz.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
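Given the subject line, the missing piece is a kfree() of the per-node deferred counters when the shrinker is unregistered; a hedged sketch of what unregister_shrinker() needs to do (assuming the mm/vmscan.c layout of that era):

	void unregister_shrinker(struct shrinker *shrinker)
	{
		down_write(&shrinker_rwsem);
		list_del(&shrinker->list);
		up_write(&shrinker_rwsem);
		kfree(shrinker->nr_deferred);	/* allocated in register_shrinker(), was never freed */
	}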
-
Committed by Manfred Spraul
After acquiring the semlock spinlock, operations must test that the array is still valid.

- semctl() and exit_sem() would walk stale linked lists (ugly, but should be ok: all lists are empty)
- semtimedop() would sleep forever - and, if woken up due to a signal - access memory after free.

The patch also:
- standardizes the tests for .deleted, so that all tests in one function leave the function with the same approach.
- unconditionally tests for .deleted immediately after every call to sem_lock - even if it means that for semctl(GETALL), .deleted will be tested twice.

Both changes make the review simpler: after every sem_lock, there must be a test of .deleted, followed by a goto to the cleanup code (if the function uses "goto cleanup"). The only exception is semctl_down(): if sem_ids().rwsem is locked, then the presence in ids->ipcs_idr is equivalent to !.deleted, thus no additional test is required.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Mike Galbraith <efault@gmx.de>
Acked-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
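The pattern the message standardizes on, as a sketch (identifiers follow ipc/sem.c of that era and should be treated as illustrative rather than the exact patched code):

	sem_lock(sma, sops, nsops);
	if (sma->sem_perm.deleted) {
		/* the array was removed by IPC_RMID while we waited for the lock */
		err = -EIDRM;
		goto out_unlock;
	}
	/* ... operate on the still-valid array ... */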
-
Committed by Davidlohr Bueso
The initial documentation was a bit incomplete, update accordingly.

[akpm@linux-foundation.org: make it more readable in 80 columns]
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Acked-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by David Rientjes
for_each_online_cpu() needs the protection of {get,put}_online_cpus() so cpu_online_mask doesn't change during the iteration.

cpu_hotplug.lock is held while a cpu is going down; it's a coarse lock that is used kernel-wide to synchronize cpu hotplug activity. Memcg has a cpu hotplug notifier, called while there may not be any cpu hotplug refcounts, which drains per-cpu event counts to memcg->nocpu_base.events to maintain a cumulative event count as cpus disappear. Without get_online_cpus() in mem_cgroup_read_events(), it's possible to account for the event count on a dying cpu twice, and this value may be significantly large.

In fact, all memcg->pcp_counter_lock use should be nested by {get,put}_online_cpus(). This fixes that issue and ensures the reported statistics are not vastly over-reported during cpu hotplug.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
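The protected-iteration pattern being restored, sketched with a placeholder per-cpu counter rather than the memcg internals:

	unsigned long sum = 0;
	int cpu;

	get_online_cpus();		/* cpu_online_mask cannot change while we walk it */
	for_each_online_cpu(cpu)
		sum += per_cpu(counter, cpu);	/* "counter" is a placeholder per-cpu variable */
	put_online_cpus();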
-
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (committed by Linus Torvalds)
Pull tmpfile fix from Al Viro:
 "A fix for double iput() in ->tmpfile() on ext3 and ext4; I'd fucked it up, Miklos has caught it"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  ext[34]: fix double put in tmpfile
-
git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm (committed by Linus Torvalds)
Pull device-mapper fix from Alasdair Kergon:
 "A patch to avoid data corruption in a device-mapper snapshot. This is primarily a data corruption bug that all users of device-mapper snapshots will want to fix.

  The CVE is due to a data leak under specific circumstances if, for example, the snapshot is presented to a virtual machine: a block written as data inside the VM can get interpreted incorrectly on the host outside the VM as metadata, causing the host to provide the VM with access to blocks it would not otherwise see. This is likely to affect few, if any, people"

* tag 'dm-3.12-fix-cve' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm snapshot: fix data corruption
-
git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio (committed by Linus Torvalds)
Pull gpio fixes from Linus Walleij:
 "Three GPIO fixes for the v3.12 series:

   - A fix to the Lynxpoint IRQ handler
   - Two late fixes to fallout from the gpiod refactoring"

* tag 'gpio-v3.12-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
  gpiolib: let gpiod_request() return -EPROBE_DEFER
  gpiolib: safer implementation of desc_to_gpio()
  gpio/lynxpoint: check if the interrupt is enabled in IRQ handler
-
- 16 October 2013 (9 commits)
-
-
Committed by Dan Carpenter
In GCC the sizeof(hdsp_version) is 8 because there is a 2 byte hole at the end of the struct after ->firmware_rev.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
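Why a trailing hole matters, shown with a made-up struct in plain C (the real hdsp_version layout lives in the ALSA headers; the point is only that sizeof() includes compiler padding, which must be zeroed before the struct is copied to user space or stack garbage leaks):

	#include <stdio.h>
	#include <string.h>

	struct version_info {		/* hypothetical layout: int + short */
		int io_type;
		unsigned short firmware_rev;
	};				/* sizeof == 8 on common ABIs: 2 bytes of padding */

	int main(void)
	{
		struct version_info v;

		memset(&v, 0, sizeof(v));	/* clears the padding bytes too; without this,
						   copying sizeof(v) bytes out of the kernel
						   would expose uninitialized memory */
		v.io_type = 1;
		v.firmware_rev = 0x23;
		printf("sizeof(struct version_info) = %zu\n", sizeof(v));
		return 0;
	}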
-
Committed by Mikulas Patocka
This patch fixes a particular type of data corruption that has been encountered when loading a snapshot's metadata from disk.

When we allocate a new chunk in persistent_prepare, we increment ps->next_free and we make sure that it doesn't point to a metadata area by further incrementing it if necessary.

When we load metadata from disk on device activation, ps->next_free is positioned after the last used data chunk. However, if this last used data chunk is followed by a metadata area, ps->next_free is positioned erroneously to the metadata area. A newly-allocated chunk is placed at the same location as the metadata area, resulting in data or metadata corruption.

This patch changes the code so that ps->next_free skips the metadata area when metadata are loaded in function read_exceptions.

The patch also moves a piece of code from persistent_prepare_exception to a separate function skip_metadata to avoid code duplication.

CVE-2013-4299

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Cc: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Committed by John Johansen
BugLink: http://bugs.launchpad.net/bugs/1235977

The profile introspection seq file has a locking bug when policy is viewed from a virtual root (task in a policy namespace); introspection from the real root is not affected.

The test for root

	while (parent) {

is correct for the real root, but incorrect for tasks in a policy namespace. This allows the task to walk back up the policy tree past its virtual root, causing it to be unlocked before the virtual root should be, in the p_stop fn. This results in the following lockdep back trace:

	[   78.479744] [ BUG: bad unlock balance detected! ]
	[   78.479792] 3.11.0-11-generic #17 Not tainted
	[   78.479838] -------------------------------------
	[   78.479885] grep/2223 is trying to release lock (&ns->lock) at:
	[   78.479952] [<ffffffff817bf3be>] mutex_unlock+0xe/0x10
	[   78.480002] but there are no more locks to release!
	[   78.480037]
	[   78.480037] other info that might help us debug this:
	[   78.480037] 1 lock held by grep/2223:
	[   78.480037]  #0:  (&p->lock){+.+.+.}, at: [<ffffffff812111bd>] seq_read+0x3d/0x3d0
	[   78.480037]
	[   78.480037] stack backtrace:
	[   78.480037] CPU: 0 PID: 2223 Comm: grep Not tainted 3.11.0-11-generic #17
	[   78.480037] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
	[   78.480037]  ffffffff817bf3be ffff880007763d60 ffffffff817b97ef ffff8800189d2190
	[   78.480037]  ffff880007763d88 ffffffff810e1c6e ffff88001f044730 ffff8800189d2190
	[   78.480037]  ffffffff817bf3be ffff880007763e00 ffffffff810e5bd6 0000000724fe56b7
	[   78.480037] Call Trace:
	[   78.480037]  [<ffffffff817bf3be>] ? mutex_unlock+0xe/0x10
	[   78.480037]  [<ffffffff817b97ef>] dump_stack+0x54/0x74
	[   78.480037]  [<ffffffff810e1c6e>] print_unlock_imbalance_bug+0xee/0x100
	[   78.480037]  [<ffffffff817bf3be>] ? mutex_unlock+0xe/0x10
	[   78.480037]  [<ffffffff810e5bd6>] lock_release_non_nested+0x226/0x300
	[   78.480037]  [<ffffffff817bf2fe>] ? __mutex_unlock_slowpath+0xce/0x180
	[   78.480037]  [<ffffffff817bf3be>] ? mutex_unlock+0xe/0x10
	[   78.480037]  [<ffffffff810e5d5c>] lock_release+0xac/0x310
	[   78.480037]  [<ffffffff817bf2b3>] __mutex_unlock_slowpath+0x83/0x180
	[   78.480037]  [<ffffffff817bf3be>] mutex_unlock+0xe/0x10
	[   78.480037]  [<ffffffff81376c91>] p_stop+0x51/0x90
	[   78.480037]  [<ffffffff81211408>] seq_read+0x288/0x3d0
	[   78.480037]  [<ffffffff811e9d9e>] vfs_read+0x9e/0x170
	[   78.480037]  [<ffffffff811ea8cc>] SyS_read+0x4c/0xa0
	[   78.480037]  [<ffffffff817ccc9d>] system_call_fastpath+0x1a/0x1f

Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
-
Committed by John Johansen
BugLink: http://bugs.launchpad.net/bugs/1235523

This fixes the following kmemleak trace:

	unreferenced object 0xffff8801e8c35680 (size 32):
	  comm "apparmor_parser", pid 691, jiffies 4294895667 (age 13230.876s)
	  hex dump (first 32 bytes):
	    e0 d3 4e b5 ac 6d f4 ed 3f cb ee 48 1c fd 40 cf  ..N..m..?..H..@.
	    5b cc e9 93 00 00 00 00 00 00 00 00 00 00 00 00  [...............
	  backtrace:
	    [<ffffffff817a97ee>] kmemleak_alloc+0x4e/0xb0
	    [<ffffffff811ca9f3>] __kmalloc+0x103/0x290
	    [<ffffffff8138acbc>] aa_calc_profile_hash+0x6c/0x150
	    [<ffffffff8138074d>] aa_unpack+0x39d/0xd50
	    [<ffffffff8137eced>] aa_replace_profiles+0x3d/0xd80
	    [<ffffffff81376937>] profile_replace+0x37/0x50
	    [<ffffffff811e9f2d>] vfs_write+0xbd/0x1e0
	    [<ffffffff811ea96c>] SyS_write+0x4c/0xa0
	    [<ffffffff817ccb1d>] system_call_fastpath+0x1a/0x1f
	    [<ffffffffffffffff>] 0xffffffffffffffff

Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
-
git://git.secretlab.ca/git/linux (committed by Linus Torvalds)
Pull device tree fixes and reverts from Grant Likely:
 "One bug fix and three reverts. The reverts back out the slightly controversial feeding the entire device tree into the random pool and the reserved-memory binding which isn't fully baked yet. Expect the reserved-memory patches at least to resurface for v3.13.

  The bug fix removes a scary but harmless warning on SPARC that was introduced in the v3.12 merge window. v3.13 will contain a proper fix that makes the new code work on SPARC.

  On the plus side, the diffstat looks *awesome*. I love removing lines of code"

* tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux:
  Revert "drivers: of: add initialization code for dma reserved memory"
  Revert "ARM: init: add support for reserved memory defined by device tree"
  Revert "of: Feed entire flattened device tree into the random pool"
  of: fix unnecessary warning on missing /cpus node
-
git://git.linaro.org/people/mszyprowski/linux-dma-mapping (committed by Linus Torvalds)
Pull DMA-mapping fix from Marek Szyprowski:
 "A bugfix for the IOMMU-based implementation of dma-mapping subsystem for ARM architecture"

* 'fixes-for-v3.12' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
  ARM: dma-mapping: Always pass proper prot flags to iommu_map()
-
git://git.kernel.org/pub/scm/virt/kvm/kvm (committed by Linus Torvalds)
Pull kvm fix from Gleb Natapov.

* git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: Enable pvspinlock after jump_label_init() to avoid VM hang
-
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip (committed by Linus Torvalds)
Pull Xen fixes from Stefano Stabellini:
 "A small fix for Xen on x86_32 and a build fix for xen-tpmfront on arm64"

* tag 'stable/for-linus-3.12-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  xen: Fix possible user space selector corruption
  tpm: xen-tpmfront: fix missing declaration of xen_domain
-
Committed by Miklos Szeredi
d_tmpfile() already swallowed the inode ref.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 15 October 2013 (4 commits)
-
-
Committed by Raghavendra K T
We use a jump label to enable pv-spinlock. With the changes in (442e0973 Merge branch 'x86/jumplabel'), the jump label behaviour changed in a way that would result in an eventual hang of the VM, since we would end up in a situation where slow-path locks halt the vcpus but we are not able to wake a vcpu up via the lock releaser's unlock kick.

A similar problem in Xen, with a more detailed description, is covered in a945928e (xen: Do not enable spinlocks before jump_label_init() has executed).

This patch splits kvm_spinlock_init to separate the jump label changes from the pvops patching, and also makes the jump label enabling happen after jump_label_init().

Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Committed by Takashi Iwai
The pcm_usb_stream plugin requires the mremap explicitly for the read buffer, as it expands itself once after reading the required size. But the commit [314e51b9: mm: kill vma flag VM_RESERVED and mm->reserved_vm counter] converted it blindly to a combination of VM_DONTEXPAND | VM_DONTDUMP like other normal drivers, and this resulted in the failure of mremap().

For fixing this regression, we need to remove VM_DONTEXPAND for the read-buffer mmap.

Reported-and-tested-by: James Miller <jamesstewartmiller@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
-
Committed by Marek Szyprowski
This reverts commit 9d8eab7a. There is still no consensus on the bindings for the reserved memory, and various drawbacks of the proposed solution have been shown, so the best now is to revert it completely and start again from scratch later.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
-
Committed by Marek Szyprowski
This reverts commit 10bcdfb8. There is no consensus on the bindings for the reserved memory, so the code for handling it will be reverted.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
-