- 25 Dec 2013, 1 commit
-
-
Committed by Ard Biesheuvel
Commit 2171364d ("powerpc: Add HWCAP2 aux entry") introduced a new AT_ auxv entry type AT_HWCAP2 but failed to update AT_VECTOR_SIZE_BASE accordingly. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Fixes: 2171364d (powerpc: Add HWCAP2 aux entry) Cc: stable@vger.kernel.org Acked-by: Michael Neuling <michael@neuling.org> Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 22 Dec 2013, 1 commit
-
-
Committed by Benjamin LaHaise
The arbitrary restriction on page counts offered by the core migrate_page_move_mapping() code results in rather suspicious looking fiddling with page reference counts in the aio_migratepage() operation. To fix this, make migrate_page_move_mapping() take an extra_count parameter that allows aio to tell the code about its own reference count on the page being migrated. While cleaning up aio_migratepage(), make it validate that the old page being passed in is actually what aio_migratepage() expects, to prevent misbehaviour in the case of races. Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
-
- 21 Dec 2013, 3 commits
-
-
Committed by Luck, Tony
Some pstore backing devices use on-board flash as persistent storage. These have limited numbers of write cycles, so it is a poor idea to use them for high-frequency operations. Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Kirill A. Shutemov
In struct page we have enough space to fit a long-sized page->ptl there, but we use a dynamically allocated page->ptl if sizeof(spinlock_t) is larger than sizeof(int). It hurts 64-bit architectures with CONFIG_GENERIC_LOCKBREAK, where sizeof(spinlock_t) == 8, but it easily fits into struct page. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Acked-by: Hugh Dickins <hughd@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Kirill A. Shutemov
Sasha Levin found a NULL pointer dereference that is due to a missing page table lock, which in turn is due to the pmd entry in question being a transparent hugepage entry. The code - introduced in commit 1998cc04 ("mm: make madvise(MADV_WILLNEED) support swap file prefetch") - correctly checks for this situation using pmd_none_or_trans_huge_or_clear_bad(), but it turns out that that function doesn't work correctly. pmd_none_or_trans_huge_or_clear_bad() expected that pmd_bad() would trigger if the transparent hugepage bit was set, but it doesn't do that if pmd_numa() is also set. Note that the NUMA bit only gets set on real NUMA machines, so people trying to reproduce this on most normal development systems would never actually trigger this. Fix it by removing the very subtle (and subtly incorrect) expectation, and instead just checking pmd_trans_huge() explicitly. Reported-by: Sasha Levin <sasha.levin@oracle.com> Acked-by: Andrea Arcangeli <aarcange@redhat.com> [ Additionally remove the now stale test for pmd_trans_huge() inside the pmd_bad() case - Linus ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
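A simplified sketch of the corrected helper, for illustration only (barrier details around the atomic pmd read are elided; this is not the literal patch):

```c
/* Sketch: check the transparent-hugepage case explicitly instead of
 * expecting pmd_bad() to trip on it (it does not when the NUMA bit is
 * also set). */
static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
{
	pmd_t pmdval = pmd_read_atomic(pmd);

	if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
		return 1;	/* nothing to walk at the PTE level */

	if (unlikely(pmd_bad(pmdval))) {
		pmd_clear_bad(pmd);
		return 1;
	}

	return 0;		/* safe to walk the PTEs below this pmd */
}
```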
-
- 19 Dec 2013, 5 commits
-
-
Committed by Nicholas Bellinger
This patch allows FILEIO to update hw_max_sectors based on the current max_bytes_per_io. This is required because vfs_[writev,readv]() can accept a maximum of 2048 iovecs per call, so the enforced hw_max_sectors really needs to be calculated based on block_size. This addresses a >= v3.5 bug where block_size=512 was rejecting > 1M sized I/O requests, because FD_MAX_SECTORS was hardcoded to 2048 for the block_size=4096 case. (v2: Use max_bytes_per_io instead of ->update_hw_max_sectors) Reported-by: Henrik Goldman <hg@x-formation.com> Cc: <stable@vger.kernel.org> #3.5+ Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
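A rough sketch of the resulting calculation (constant name and helper are assumptions based on the description above): with vfs_writev()/vfs_readv() limited to 2048 iovecs of one page each, the byte budget per I/O is fixed, and the sector limit follows from the block size.

```c
#define FD_MAX_BYTES	8388608UL	/* assumed: 2048 iovecs * 4K page */

/* Sketch: derive hw_max_sectors from the device block size instead of
 * hardcoding a sector count that only suits block_size=4096. */
static unsigned int fd_hw_max_sectors(unsigned int block_size)
{
	return FD_MAX_BYTES / block_size;	/* e.g. 16384 for 512-byte blocks */
}
```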
-
Committed by Mel Gorman
According to the documentation on barriers, stores issued before a LOCK operation can complete after the LOCK, implying that it is possible for tlb_flush_pending to become visible only after a page table update. As per the revised documentation, this patch adds an smp_mb__before_spinlock() to guarantee the correct ordering. Signed-off-by: Mel Gorman <mgorman@suse.de> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
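A minimal sketch of where the barrier goes, assuming the tlb_flush_pending helper described above (the exact function body is not quoted in the message):

```c
/* Sketch: make the store to tlb_flush_pending visible to other CPUs
 * before the page table lock (a LOCK operation) is taken, so it cannot
 * be reordered past the PTE updates done under that lock. */
static inline void set_tlb_flush_pending(struct mm_struct *mm)
{
	mm->tlb_flush_pending = true;
	smp_mb__before_spinlock();
}
```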
-
Committed by Rik van Riel
There are a few subtle races, between change_protection_range (used by mprotect and change_prot_numa) on one side, and NUMA page migration and compaction on the other side. The basic race is that there is a time window between when the PTE gets made non-present (PROT_NONE or NUMA), and the TLB is flushed. During that time, a CPU may continue writing to the page. This is fine most of the time, however compaction or the NUMA migration code may come in, and migrate the page away. When that happens, the CPU may continue writing, through the cached translation, to what is no longer the current memory location of the process. This only affects x86, which has a somewhat optimistic pte_accessible. All other architectures appear to be safe, and will either always flush, or flush whenever there is a valid mapping, even with no permissions (SPARC). The basic race looks like this (events across three CPUs, in time order):
- load TLB entry
- make entry PTE/PMD_NUMA
- fault on entry
- read/write old page
- start migrating page
- change PTE/PMD to new page
- read/write old page [*]
- flush TLB
- reload TLB from new entry
- read/write new page
- lose data
[*] the old page may belong to a new user at this point!
The obvious fix is to flush remote TLB entries, by making pte_accessible aware of the fact that PROT_NONE and PROT_NUMA memory may still be accessible if there is a TLB flush pending for the mm. This should fix both NUMA migration and compaction. [mgorman@suse.de: fix build] Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Alex Thorlton <athorlton@sgi.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
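A hedged sketch of the x86 side of the idea (signature and flag names assumed): a non-present PROT_NONE/NUMA pte must still be treated as accessible while a TLB flush for the mm is pending, because another CPU may keep using a cached translation.

```c
static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
{
	if (pte_flags(a) & _PAGE_PRESENT)
		return true;

	/* PROT_NONE/NUMA entries stay "accessible" until the pending
	 * remote TLB flush for this mm has actually happened. */
	if ((pte_flags(a) & (_PAGE_PROTNONE | _PAGE_NUMA)) &&
	    mm_tlb_flush_pending(mm))
		return true;

	return false;
}
```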
-
Committed by Mel Gorman
do_huge_pmd_numa_page() handles the case where there is parallel THP migration. However, by the time it is checked the NUMA hinting information has already been disrupted. This patch adds an earlier check with some helpers. Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Alex Thorlton <athorlton@sgi.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Vivek Goyal
Commit 1b3a5d02 ("reboot: move arch/x86 reboot= handling to generic kernel") moved reboot= handling to generic code. In the process it also removed the code in native_machine_shutdown() which was moving the reboot process to reboot_cpu/cpu0. I guess the thought must have been that all reboot paths call migrate_to_reboot_cpu(), so we don't need this special handling. But the kexec reboot path (kernel_kexec()) does not call migrate_to_reboot_cpu(), so the above change broke kexec. Now reboot can happen on a non-boot cpu, and when INIT is sent in the second kernel to bring up the BP, it brings down the machine. So start calling migrate_to_reboot_cpu() in the kexec reboot path to avoid this problem. Bisected by WANG Chao. Reported-by: Matthew Whitehead <mwhitehe@redhat.com> Reported-by: Dave Young <dyoung@redhat.com> Signed-off-by: Vivek Goyal <vgoyal@redhat.com> Tested-by: Baoquan He <bhe@redhat.com> Tested-by: WANG Chao <chaowang@redhat.com> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 Dec 2013, 3 commits
-
-
Committed by Vince Weaver
Commit fdfbbd07 ("perf: Add generic transaction flags") added support for PERF_SAMPLE_TRANSACTION but forgot to add documentation for the sample type to include/uapi/linux/perf_event.h. Signed-off-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Andi Kleen <ak@linux.intel.com> Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1312131548450.10372@pianoman.cluster.toy Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Marc Carino
Certain drives cannot handle queued TRIM commands properly, even though support is indicated in the IDENTIFY DEVICE buffer. This patch allows disabling the commands for the affected drives and applies it to the Micron/Crucial M500 SSDs, which exhibit incorrect protocol behavior when issued queued TRIM commands, potentially leading to silent data corruption. tj: Merged two unnecessarily split patches and made minor edits including shortening the horkage name. Signed-off-by: Marc Carino <marc.ceeeee@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org> Link: http://lkml.kernel.org/g/1387246554-7311-1-git-send-email-marc.ceeeee@gmail.com Cc: stable@vger.kernel.org # 3.12+
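A hedged sketch of how such a quirk is typically wired into libata (the flag value and the blacklist glob are assumptions, not taken from the patch):

```c
/* Assumed flag name/value (the note above says the name was shortened). */
#define ATA_HORKAGE_NO_NCQ_TRIM	(1 << 19)	/* device errs on queued TRIM */

/* Assumed shape of the blacklist addition; the model glob is illustrative. */
static const struct ata_blacklist_entry ata_device_blacklist[] = {
	{ "Crucial_CT???M500SSD*",	NULL,	ATA_HORKAGE_NO_NCQ_TRIM },
	{ }
};
```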
-
Committed by Yann Droneaud
The userspace input buffer is not modified by the kernel, so it can be 'const'. This is also a prerequisite to remove the implicit cast from INIT_UDATA(). Link: http://marc.info/?i=cover.1386798254.git.ydroneaud@opteya.com Signed-off-by: Yann Droneaud <ydroneaud@opteya.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
-
- 13 Dec 2013, 7 commits
-
-
Committed by Julien Grall
On ARM (32-bit and 64-bit), the double-word is 8-byte aligned. This results in different structure layouts between the Xen and Linux repositories. As Linux is using the __packed__ attribute, it must have a 4-byte padding before each "id" field. This change breaks guest block support with older kernels. IMHO, it's acceptable because Xen on ARM is still a Tech Preview and the hypercall ABI is not yet frozen. Only one architecture (x86_32) doesn't have a 64-bit ABI for the block interface. Don't add padding if Linux is compiled for this architecture. Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Roger Pau Monne <roger.pau@citrix.com> Acked-by: David Vrabel <david.vrabel@citrix.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Ian Campbell <ian.campbell@citrix.com> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> [I had asked for confirmation that it did not break x86 and Ian went beyond the call of duty to confirm it. Also an internal regression bucket with 32/64 dom0 and 32/64 domU (PV and HVM) confirmed no regressions. ABI changes are a drag..] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by Krzysztof Kozlowski
Rename the old regmap field of "struct sec_pmic_dev" to "regmap_pmic" and add a new regmap for the RTC. On the S5M8767A, registers were not properly updated and read due to usage of the same regmap as the PMIC. This could be observed as various hangs, e.g. an infinite loop while waiting for the UDR field to change. On this chip family the RTC has a different I2C address than the PMIC, so an additional regmap is needed. Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Reviewed-by: Mark Brown <broonie@linaro.org> Acked-by: Sangbeom Kim <sbkim73@samsung.com> Cc: Samuel Ortiz <sameo@linux.intel.com> Cc: Lee Jones <lee.jones@linaro.org> Cc: Liam Girdwood <lgirdwood@gmail.com> Cc: Alessandro Zummo <a.zummo@towertech.it> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Axel Lin
The machine cannot fault if !MMU, so make might_fault() a no-op for !MMU. This fixes the build error below if !CONFIG_MMU && (CONFIG_PROVE_LOCKING=y || CONFIG_DEBUG_ATOMIC_SLEEP=y): arch/arm/kernel/built-in.o: In function `arch_ptrace': arch/arm/kernel/ptrace.c:852: undefined reference to `might_fault' arch/arm/kernel/built-in.o: In function `restore_sigframe': arch/arm/kernel/signal.c:173: undefined reference to `might_fault' ... arch/arm/kernel/built-in.o:arch/arm/kernel/signal.c:177: more undefined references to `might_fault' follow make: *** [vmlinux] Error 1 Signed-off-by: Axel Lin <axel.lin@ingics.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
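Roughly the shape of the fix, as a simplified sketch of the declaration logic:

```c
/* Only provide the real might_fault() when an MMU is present and one of
 * the debug options that needs it is enabled; otherwise it is a no-op,
 * so !MMU kernels no longer emit calls to an undefined symbol. */
#if defined(CONFIG_MMU) && \
	(defined(CONFIG_PROVE_LOCKING) || defined(CONFIG_DEBUG_ATOMIC_SLEEP))
void might_fault(void);
#else
static inline void might_fault(void) { }
#endif
```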
-
Committed by Naoya Horiguchi
With CONFIG_HUGETLBFS=n: mm/migrate.c: In function `do_move_page_to_node_array': include/linux/hugetlb.h:140:33: warning: statement with no effect [-Wunused-value] #define isolate_huge_page(p, l) false ^ mm/migrate.c:1170:4: note: in expansion of macro `isolate_huge_page' isolate_huge_page(page, &pagelist); Reported-by: Borislav Petkov <bp@alien8.de> Tested-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
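A sketch of the usual way such a warning is silenced (the direction implied by the message): turn the bare `false` macro into a static inline stub so an ignored return value no longer trips -Wunused-value, while keeping type checking of the arguments.

```c
#ifndef CONFIG_HUGETLBFS
static inline bool isolate_huge_page(struct page *page,
				     struct list_head *list)
{
	return false;	/* no hugetlbfs, nothing can be isolated */
}
#endif
```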
-
Committed by Sebastian Siewior
neigh_priv_len is defined as u8. With all debugging enabled, struct ipoib_neigh has 200 bytes. The largest part is sk_buff_head with 96 bytes, and within it the spinlock with 72 bytes. This size value still fits in the u8, leaving some room for more. On -RT, struct ipoib_neigh put on weight and has 392 bytes. The main reason is sk_buff_head with 288 bytes, and the fatty here is the spinlock with 192 bytes. This no longer fits into neigh_priv_len and gcc complains. This patch changes neigh_priv_len from being 8 bit to 16 bit. Since the following element (dev_id) is 16 bit, followed by a spinlock which is aligned, the struct remains at a total size of 3200 (allmodconfig) / 2048 (with as much debug off as possible) bytes on x86-64. On x86-32 the struct is 1856 (allmodconfig) / 1216 (with as much debug off as possible) bytes long. The numbers were gathered with and without the patch to prove that this change does not increase the size of the struct. Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
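A minimal sketch of the field change being described (surrounding struct net_device members abridged):

```c
struct net_device {
	/* ... */
	unsigned short	neigh_priv_len;	/* widened from unsigned char (u8) */
	unsigned short	dev_id;		/* stays adjacent and aligned, so no
					 * new padding is introduced */
	/* ... */
};
```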
-
Committed by Will Deacon
Whilst architectures may be able to do better than this (which they can, by simply defining their own macro), this is a generic stab at a zero_bytemask implementation for the asm-generic, big-endian word-at-a-time implementation. On arm64, a clz instruction is used to implement the fls efficiently. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Will Deacon
When explicitly hashing the end of a string with the word-at-a-time interface, we have to be careful which end of the word we pick up. On big-endian CPUs, the upper bits will contain the data we're after, so ensure we generate our masks accordingly (and avoid hashing whatever random junk may have been sitting after the string). This patch adds a new dcache helper, bytemask_from_count, which creates a mask appropriate for the CPU endianness. Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
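A hedged sketch of what such a helper looks like (exact guards assumed): build a mask selecting the `cnt` bytes of the final word that actually contain name data, honouring endianness.

```c
#ifdef __LITTLE_ENDIAN
/* valid bytes sit in the low part of the word */
# define bytemask_from_count(cnt)	(~(~0ul << (cnt) * 8))
#else
/* big-endian: valid bytes sit in the upper part of the word */
# define bytemask_from_count(cnt)	(~(~0ul >> (cnt) * 8))
#endif
```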
-
- 12 Dec 2013, 2 commits
-
-
Committed by Roland Dreier
Commit 04f3b31b ("iscsi-target: Convert iscsi_session statistics to atomic_long_t") removed the updating of these fields in iscsi (the only fabric driver that ever touched these counters), and the core has no way to report or otherwise use the values. Remove the last remnants of these counters. Signed-off-by: Roland Dreier <roland@purestorage.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Committed by Sergei Shtylyov
Renesas R-Car development boards use the KSZ8041RNLI PHY which, for some reason, has an ID of 0x00221537 that is not documented for KSZ8041-family PHYs and does not match the documented ID of 0x0022151x (where 'x' is the revision). We have to add a new PHY_ID_* #define and a new ksphy_driver[] entry, almost the same as the KSZ8041 one, differing only in the 'phy_id' and 'name' fields. Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Tested-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: David S. Miller <davem@davemloft.net>
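A hedged sketch of the kind of addition described (the mask and the driver-entry fields are assumptions patterned on a typical KSZ804x entry):

```c
#define PHY_ID_KSZ8041RNLI	0x00221537	/* undocumented ID, see above */

/* Assumed shape of the extra ksphy_driver[] entry: */
static struct phy_driver ksz8041rnli_sketch = {
	.phy_id		= PHY_ID_KSZ8041RNLI,
	.phy_id_mask	= 0x00fffff0,	/* assumed, matching other KSZ804x */
	.name		= "Micrel KSZ8041RNLI",
	/* remaining callbacks identical to the KSZ8041 entry */
};
```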
-
- 11 Dec 2013, 6 commits
-
-
Committed by Peter Zijlstra
Christian suffers from a bad BIOS that wrecks his i5's TSC sync. This results in him occasionally seeing time going backwards - which crashes the scheduler ... Most of our time accounting can actually handle that, except the most common one: the tick time update of sched_fair. There is a further problem with that code; previously we assumed that because we get a tick every TICK_NSEC our time delta could never exceed 32 bits, and the math was simpler. However, ever since Frederic managed to get NO_HZ_FULL merged, this is no longer the case, since now a task can run for a long time indeed without getting a tick. It only takes about ~4.2 seconds to overflow our u32 in nanoseconds. This means we not only need to deal better with time going backwards, but also need to be able to deal with large deltas. This patch reworks the entire code and uses mul_u64_u32_shr() as proposed by Andy a long while ago. We express our virtual time scale factor in a u32 multiplier and shift right, and the 32-bit mul_u64_u32_shr() implementation reduces to a single 32x32->64 multiply if the time delta is still short (common case). For 64-bit, a 64x64->128 multiply can be used if ARCH_SUPPORTS_INT128. Reported-and-Tested-by: Christian Engelmayer <cengelma@gmx.at> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: fweisbec@gmail.com Cc: Paul Turner <pjt@google.com> Cc: Stanislaw Gruszka <sgruszka@redhat.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20131118172706.GI3866@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Peter Zijlstra
Introduce mul_u64_u32_shr() as proposed by Andy a while back; it allows using 64x64->128 muls on 64-bit archs and recent GCC, which defines __SIZEOF_INT128__ and __int128. (This new method will be used by the scheduler.) Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: fweisbec@gmail.com Cc: Andy Lutomirski <luto@amacapital.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/n/tip-hxjoeuzmrcaumR0uZwjpe2pv@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
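A hedged sketch of the helper's generic fallback (the __int128 fast path for 64-bit/recent-GCC builds is elided):

```c
static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
{
	u32 ah = a >> 32, al = a;	/* split the 64-bit operand */
	u64 ret;

	ret = ((u64)al * mul) >> shift;
	if (ah)				/* high half only if it is non-zero */
		ret += ((u64)ah * mul) << (32 - shift);

	return ret;
}
```

With this, the scheduler's virtual-time scaling in the previous entry reduces to a single 32x32->64 multiply whenever the time delta is still short, which is the common case.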
-
Committed by Peter Zijlstra
While hunting a preemption issue with Alexander, Ben noticed that the current generic PREEMPT_NEED_RESCHED machinery is horribly broken for load-store architectures. We currently rely on the IPI to fold TIF_NEED_RESCHED into PREEMPT_NEED_RESCHED, but when this IPI lands while we already have a load for the preempt-count but before the store, the store will erase the PREEMPT_NEED_RESCHED change. The current preempt-count only works on load-store archs because interrupts are assumed to be completely balanced wrt their preempt_count fiddling; the previous preempt_count load will match the preempt_count state after the interrupt and therefore nothing gets lost. This patch removes the PREEMPT_NEED_RESCHED usage from generic code and pushes it into x86 arch code; the generic code goes back to relying on TIF_NEED_RESCHED. Boot tested on x86_64 and compile tested on ppc64. Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reported-and-Tested-by: Alexander Graf <agraf@suse.de> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/20131128132641.GP10022@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Neil Horman
Currently, SCTP associations latch a socket's autoclose value to the association at association init time, subject to capping constraints from the max_autoclose sysctl value. This leads to an odd situation where an application may set a socket-level autoclose timeout, but silently SCTP will limit the autoclose timeout to something less than that. Fix this by modifying the autoclose setsockopt function to check the limit, cap it, and warn the user via syslog that the timeout is capped. This will allow getsockopt to return valid autoclose timeout values that reflect what subsequent associations actually use. While we're at it, also eliminate the assoc->autoclose variable; it duplicates what's in the timeout array, which leads to multiple sources for the same information that may differ (as the former isn't subject to any capping). This gives us the timeout information in a canonical place and saves some space in the association structure as well. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Vlad Yasevich <vyasevich@gmail.com> CC: Wang Weidong <wangweidong1@huawei.com> CC: David Miller <davem@davemloft.net> CC: Vlad Yasevich <vyasevich@gmail.com> CC: netdev@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Sasha Levin
unix_dgram_recvmsg() will hold the read lock of the socket until recv is complete. In the meantime, we may try to setsockopt(SO_PEEK_OFF), which will hang until unix_dgram_recvmsg() completes (which can take a while) without allowing us to break out of it, triggering a hung task spew. Instead, allow set_peek_off to fail; this way userspace will not hang. Signed-off-by: Sasha Levin <sasha.levin@oracle.com> Acked-by: Pavel Emelyanov <xemul@parallels.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by H. Peter Anvin
When compiling with icc, <linux/compiler-gcc.h> ends up included because the icc environment defines __GNUC__. Thus, we neither need nor want to have this macro defined in both compiler-gcc.h and compiler-intel.h, and the fact that they are inconsistent just makes the compiler spew warnings. Reported-by: Sunil K. Pandey <sunil.k.pandey@intel.com> Cc: Kevin B. Smith <kevin.b.smith@intel.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Link: http://lkml.kernel.org/n/tip-0mbwou1zt7pafij09b897lg3@git.kernel.org Cc: <stable@vger.kernel.org>
-
- 10 Dec 2013, 3 commits
-
-
Committed by Takashi Iwai
snd_pcm_uframes_t is defined as unsigned long, so it takes different sizes depending on 32- or 64-bit architectures. As we don't want this ABI incompatibility, and there is no real 64-bit user yet, let's make it a fixed size with __u32. Also bump the protocol version number to 0.1.2. Acked-by: Vinod Koul <vinod.koul@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
-
Committed by Stefano Panella
When running a 32-bit kernel, the hda_intel driver still reports a 64-bit dma_mask if the HW supports it. From sound/pci/hda/hda_intel.c: /* allow 64bit DMA address if supported by H/W */ if ((gcap & ICH6_GCAP_64OK) && !pci_set_dma_mask(pci, DMA_BIT_MASK(64))) pci_set_consistent_dma_mask(pci, DMA_BIT_MASK(64)); else { pci_set_dma_mask(pci, DMA_BIT_MASK(32)); pci_set_consistent_dma_mask(pci, DMA_BIT_MASK(32)); } which means that when there is a call to dma_alloc_coherent from snd_malloc_dev_pages, a machine address bigger than 32 bits can be returned. This can be true in particular if running the 32-bit kernel as a PV dom0 under the Xen hypervisor, or with PAE on bare metal. The problem is that when calling setup_bdle to program the BDLE, the dma_addr_t returned from dma_alloc_coherent is wrongly truncated by snd_sgbuf_get_addr if running a 32-bit kernel: static inline dma_addr_t snd_sgbuf_get_addr(struct snd_dma_buffer *dmab, size_t offset) { struct snd_sg_buf *sgbuf = dmab->private_data; dma_addr_t addr = sgbuf->table[offset >> PAGE_SHIFT].addr; addr &= PAGE_MASK; return addr + offset % PAGE_SIZE; } where PAGE_MASK in a 32-bit kernel zeroes the upper 32 bits of addr. Without this patch the HW will fetch the 32-bit truncated address, which is not the one obtained from dma_alloc_coherent; the result is non-working audio, and host memory can be corrupted at a random location. The current patch applies to v3.13-rc3-74-g6c843f5 Signed-off-by: Stefano Panella <stefano.panella@citrix.com> Reviewed-by: Frediano Ziglio <frediano.ziglio@citrix.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
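A hedged sketch of the direction of the fix (the actual diff is not quoted above): mask with a dma_addr_t-wide constant so the upper bits of a >4 GB bus address survive on 32-bit kernels.

```c
static inline dma_addr_t snd_sgbuf_get_addr(struct snd_dma_buffer *dmab,
					    size_t offset)
{
	struct snd_sg_buf *sgbuf = dmab->private_data;
	dma_addr_t addr = sgbuf->table[offset >> PAGE_SHIFT].addr;

	/* was: addr &= PAGE_MASK; -- an unsigned-long mask that truncates
	 * a 64-bit dma_addr_t on 32-bit kernels */
	addr &= ~(dma_addr_t)(PAGE_SIZE - 1);
	return addr + offset % PAGE_SIZE;
}
```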
-
Committed by Philipp Zabel
Currently it is not possible for userspace to map a DMABUF exported buffer with write permissions. This patch allows passing O_RDONLY/O_RDWR when exporting the buffer, so that userspace may map it with write permissions. Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
-
- 09 Dec 2013, 2 commits
-
-
Committed by Tom Lendacky
Now that scatterwalk_sg_chain sets the chain pointer bit, the sg_page call in scatterwalk_sg_next hits a BUG_ON when CONFIG_DEBUG_SG is enabled. Use sg_chain_ptr instead of sg_page on a chain entry. Cc: stable@vger.kernel.org Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
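A hedged sketch of the walker after the change (exact body assumed): a chain entry stores a pointer to the next scatterlist, so it must be followed with sg_chain_ptr() rather than interpreted as a page.

```c
static inline struct scatterlist *scatterwalk_sg_next(struct scatterlist *sg)
{
	if (sg_is_last(sg))
		return NULL;

	/* A zero-length entry is a chain link: follow the pointer instead
	 * of treating it as a page (sg_page() would BUG with DEBUG_SG). */
	return (++sg)->length ? sg : sg_chain_ptr(sg);
}
```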
-
Committed by Jakob Bornecrantz
Userspace uses this to work around overcommit issues by flushing the command stream early. Signed-off-by: Jakob Bornecrantz <jakob@vmware.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
-
- 08 Dec 2013, 2 commits
-
-
Committed by Rafael J. Wysocki
Commit 5a87182a (cpufreq: suspend governors on system suspend/hibernate) causes hibernation problems on Bjørn Mork's and Paul Bolle's systems, so revert it. Fixes: 5a87182a (cpufreq: suspend governors on system suspend/hibernate) Reported-by: Bjørn Mork <bjorn@mork.no> Reported-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Khalid Aziz
Add a flag to tell the PCI subsystem that the kernel is shutting down in preparation to kexec a kernel. Add code in the PCI subsystem to use this flag to clear the Bus Master bit on PCI devices only in the case of a kexec reboot. This fixes a power-off problem on the Acer Aspire V5-573G and likely other machines, and avoids any other issues caused by clearing the Bus Master bit on PCI devices in the normal shutdown path. The problem was introduced by b566a22c ("PCI: disable Bus Master on PCI device shutdown"). This patch is based on the discussion at http://marc.info/?l=linux-pci&m=138425645204355&w=2 Link: https://bugzilla.kernel.org/show_bug.cgi?id=63861 Reported-by: Chang Liu <cl91tp@gmail.com> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Konstantin Khlebnikov <koct9i@gmail.com> Cc: stable@vger.kernel.org # v3.5+
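A hedged sketch of how the PCI shutdown path would consume such a flag (simplified; the flag name is an assumption, and the real code may also check the device's power state):

```c
/* Assumed: a global flag set by the kexec reboot path. */
extern bool kexec_in_progress;

static void pci_device_shutdown_sketch(struct pci_dev *pci_dev)
{
	/* ... invoke the driver's own shutdown callback first ... */

	/* Clear Bus Master only on the way into kexec, where a device
	 * still doing DMA could scribble over the new kernel's memory;
	 * leave it alone for a normal shutdown/power-off. */
	if (kexec_in_progress)
		pci_clear_master(pci_dev);
}
```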
-
- 06 Dec 2013, 4 commits
-
-
Committed by Eric W. Biederman
Kill memcg_tcp_enter_memory_pressure. The only function of memcg_tcp_enter_memory_pressure was to deal with the unnecessary abstraction that was tcp_memcontrol. Now that struct tcp_memcontrol is gone, remove this unnecessary function and the unnecessary function pointer, and modify sk_enter_memory_pressure to set this field directly, just as sk_leave_memory_pressure clears this field directly. This fixes a small bug I introduced when killing struct tcp_memcontrol that caused memcg_tcp_enter_memory_pressure to never be called and thus failed to ever set cg_proto->memory_pressure. Remove the cg_proto enter_memory_pressure function as it now serves no useful purpose. Don't test cg_proto->memory_pressure in sk_leave_memory_pressure before clearing it. The test was originally there to ensure that the pointer was non-NULL. Now that cg_proto is not a pointer the pointer does not matter. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Paul Durrant
The code to detect fragments in checksum_setup() was missing for IPv4 and too eager for IPv6. (It transpires that Windows seems to send IPv6 packets with a fragment header even if they are not a fragment - i.e. offset is zero, and the M bit is not set.) This patch also incorporates a fix to callers of maybe_pull_tail() where skb->network_header was being erroneously added to the length argument. Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Cc: Ian Campbell <ian.campbell@citrix.com> Cc: David Vrabel <david.vrabel@citrix.com> Cc: David Miller <davem@davemloft.net> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ping Cheng
Some devices, such as the new Intuos series tablets, have a hardware switch to turn touch data on/off. To report the state, SW_MUTE_DEVICE is added in include/uapi/linux/input.h. Reviewed-by: Chris Bagwell <chris@cnpbagwell.com> Acked-by: Peter Hutterer <peter.hutterer@who-t.net> Tested-by: Jason Gerecke <killertofu@gmail.com> Signed-off-by: Ping Cheng <pingc@wacom.com> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-
Committed by Tejun Heo
When CONFIG_DEBUG_FORCE_WEAK_PER_CPU or CONFIG_ARCH_NEEDS_WEAK_PER_CPU is set, DEFINE_PER_CPU() explodes into a cryptic series of definitions to still allow using "static" for percpu variables while keeping all per-cpu symbols unique in the kernel image, which is required for weak symbols. This ultimately converts the actual symbol to global whether DEFINE_PER_CPU() is prefixed with static or not. Unfortunately, the macro forgot to add an explicit extern declaration of the actual symbol, ending up defining a global symbol without a preceding declaration for static definitions, which naturally don't have a matching DECLARE_PER_CPU(). The only ill effect is triggering of the following warnings: fs/inode.c:74:8: warning: symbol 'nr_inodes' was not declared. Should it be static? fs/inode.c:75:8: warning: symbol 'nr_unused' was not declared. Should it be static? Fix it by adding an extern declaration in the DEFINE_PER_CPU() macro. Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: Wanlong Gao <gaowanlong@cn.fujitsu.com> Tested-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
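A heavily abridged sketch of the macro shape after the fix (the dummy/unique scope symbols of the real weak-per-cpu machinery are omitted): an explicit extern declaration now precedes the weak definition, so even `static DEFINE_PER_CPU(...)` users end up with a declared global symbol.

```c
#define DEFINE_PER_CPU_SECTION(type, name, sec)				\
	extern __PCPU_ATTRS(sec) __typeof__(type) name;			\
	__PCPU_ATTRS(sec) PER_CPU_DEF_ATTRIBUTES __weak			\
	__typeof__(type) name
```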
-
- 03 Dec 2013, 1 commit
-
-
Committed by Amit Pundir
Drop EPOLLWAKEUP from the epoll events mask if CONFIG_PM_SLEEP is disabled. Signed-off-by: Amit Pundir <amit.pundir@linaro.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
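A hedged sketch of how such masking is typically expressed (helper name assumed): with CONFIG_PM_SLEEP disabled there are no wakeup sources to pin, so the flag is masked out unconditionally.

```c
#ifdef CONFIG_PM_SLEEP
static inline void ep_take_care_of_epollwakeup(struct epoll_event *epev)
{
	if ((epev->events & EPOLLWAKEUP) && !capable(CAP_BLOCK_SUSPEND))
		epev->events &= ~EPOLLWAKEUP;	/* caller lacks the privilege */
}
#else
static inline void ep_take_care_of_epollwakeup(struct epoll_event *epev)
{
	epev->events &= ~EPOLLWAKEUP;		/* no PM sleep support built in */
}
#endif
```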
-