- 18 Feb 2011, 2 commits
-
-
Committed by Roland Dreier
-
Committed by Mike Marciniszyn
There is a double completion associated with error handling for RC QPs. The sequence is:

- The do_rc_ack() routine fields an RNR NAK and there are 0 rnr_retries configured on the QP.
- qib_error_qp() stops the pending timer.
- qib_rc_send_complete() is called from sdma_complete().
- qib_rc_send_complete() starts the timer because the msb of the psn just completed says an ack is needed.
- A bunch of flushes occur as ipoib posts WQEs to an errored QP.
- rc_timeout() calls qib_restart_rc().
- qib_restart_rc() calls qib_send_complete() with an IB_WC_RETRY_EXC_ERR on a wqe that has already been completed in the past.

The fix avoids starting the timer, since another packet will never arrive.

Signed-off-by: Mike Marciniszyn <mike.marciniszyn@qlogic.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
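A minimal sketch of the shape of that fix; the helper names (ack_expected(), start_timer()) are placeholders, not the literal qib patch:

```c
/* Hedged sketch -- not the literal patch; helper names are assumed. */
static void rc_send_complete_sketch(struct qib_qp *qp, u32 psn)
{
	/*
	 * Once the QP is in the error state no ack can arrive, so arming
	 * the timer only guarantees a spurious rc_timeout() and a second
	 * completion for an already-completed WQE.
	 */
	if (ack_expected(psn) && qp->state == IB_QPS_RTS)
		start_timer(qp);
}
```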
-
- 11 Feb 2011, 1 commit
-
-
Committed by Mike Marciniszyn
The following BUG_ON panic occurs during qib testing:

Kernel BUG at include/linux/timer.h:82
RIP [<ffffffff881f7109>] :ib_qib:start_timer+0x73/0x89
RSP <ffffffff80425bd0>
<0>Kernel panic - not syncing: Fatal exception
<0>Dumping qib trace buffer from panic
qib_set_lid INFO: IB0:1 got a lid: 0xf8
Done dumping qib trace buffer
BUG: warning at kernel/panic.c:137/panic() (Tainted: G

The flaw is a missing state test when processing responses, which results in an add_timer() call while the same timer is already queued. This code was executing in parallel with a QP destroy on another CPU that had changed the state to reset, and the missing test let the response handling code run on into the panic.

Signed-off-by: Mike Marciniszyn <mike.marciniszyn@qlogic.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
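For context, add_timer() asserts that the timer is not already pending (the include/linux/timer.h:82 BUG_ON above), so the response path must either test the QP state before arming or use mod_timer(), which tolerates a pending timer. A hedged sketch of such a guard, with names modeled on the qib driver style rather than taken from the patch:

```c
/* Sketch only: the state-ops test is an assumption, not the literal fix. */
static void start_timer_checked(struct qib_qp *qp)
{
	/* A QP already moved to reset/error must not re-arm its timer. */
	if (!(ib_qib_state_ops[qp->state] & QIB_PROCESS_RECV_OK))
		return;

	/* mod_timer(), unlike add_timer(), is safe on a pending timer. */
	mod_timer(&qp->s_timer, jiffies + qp->timeout_jiffies);
}
```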
-
- 04 Feb 2011, 6 commits
-
-
Committed by Maciej Sosnowski
nes_port_ibevent() should not be called when the nes RDMA device is not registered with the RDMA core. Add missing checks of the of_device_registered flag.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
Committed by Suresh Siddha
Clearing the cpu in prev's mm_cpumask early will avoid the flush-TLB IPIs while cr3 is still pointing to the prev mm, and this window can lead to bogus TLB fills resulting in strange failures. One such problematic scenario:

T1. CPU-1 is context switching from mm1 to mm2 and takes an NMI etc. between the point of clearing the cpu from mm_cpumask(mm1) and reloading cr3 with the new mm2.
T2. CPU-2 is tearing down a specific vma for mm1 and proceeds to flush the TLB for mm1. It doesn't send the flush-TLB IPI to CPU-1, as it doesn't see that cpu listed in mm_cpumask(mm1).
T3. After the TLB flush is complete, CPU-2 goes ahead and frees the page-table pages associated with the removed vma mapping.
T4. CPU-2 now allocates those freed page-table pages for something else.
T5. As the cr3 and TLB caches for mm1 are still active on CPU-1, CPU-1 can potentially speculate and walk through the page-table caches and insert new TLB entries. As the page-table pages are already freed and in use on CPU-2, this page walk can insert a bogus global TLB entry depending on the (random) contents of the page in use on CPU-2.
T6. This bogus TLB entry, being global, stays active across future cr3 changes and can result in weird memory corruption etc.

To avoid this issue, for the prev mm that is handing over the cpu to another mm, clear the cpu from mm_cpumask(prev) after cr3 is changed.

Marking for -stable, though we haven't seen any reported failure that can be attributed to this.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: stable@kernel.org [v2.6.32+]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
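The fix is a reordering in switch_mm() so the bit is cleared only after CR3 already points at the next mm. A simplified sketch (x86; unrelated details elided):

```c
/* Simplified sketch of the reordered switch_mm() on x86. */
static inline void switch_mm_sketch(struct mm_struct *prev,
				    struct mm_struct *next, unsigned int cpu)
{
	cpumask_set_cpu(cpu, mm_cpumask(next));

	/* Load the new page tables first... */
	load_cr3(next->pgd);

	/*
	 * ...and only then opt out of prev's flush IPIs. Clearing the bit
	 * while CR3 still references prev's page tables opens the T1..T6
	 * window described above.
	 */
	cpumask_clear_cpu(cpu, mm_cpumask(prev));
}
```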
-
Merge git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband (committed by Linus Torvalds)
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband:
  RDMA: Update missed conversion of flush_scheduled_work()
  RDMA/ucma: Copy iWARP route information on queries
  RDMA/amso1100: Fix compile warnings
  RDMA/cxgb4: Set the correct device physical function for iWARP connections
  RDMA/cxgb4: Limit MAXBURST EQ context field to 256B
  IB/qib: Hold link for TX SERDES settings
  mlx4_core: Add ConnectX-3 device IDs
-
Committed by Linus Torvalds
Merge branch 'irq-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'irq-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  genirq: Prevent irq storm on migration
-
Committed by Linus Torvalds
Merge branch 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  sched: Fix update_curr_rt()
  sched, docs: Update schedstats documentation to version 15
-
Committed by Linus Torvalds
Merge branch 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  perf: Fix reading in perf_event_read()
  watchdog: Don't change watchdog state on read of sysctl
  watchdog: Fix sysctl consistency
  watchdog: Fix broken nowatchdog logic
  perf: Fix Pentium4 raw event validation
  perf: Fix alloc_callchain_buffers()
-
- 03 Feb 2011, 24 commits
-
-
Committed by Peter Zijlstra
The call chain

  cpu_stopper_thread()
    migration_cpu_stop()
      __migrate_task()
        deactivate_task()
          dequeue_task()
            dequeue_task_rt()
              update_curr_rt()

will call update_curr_rt() on rq->curr, which at that time is rq->stop. The problem is that rq->stop.prio matches an RT prio, so update_curr_rt() falsely assumes it's a rt_sched_class task.

Reported-Debugged-Tested-Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Cc: stable@kernel.org # .37
Signed-off-by: Ingo Molnar <mingo@elte.hu>
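The fix boils down to an early class test in update_curr_rt() so the stop task is never accounted as an RT task; a sketch consistent with the description:

```c
/* Sketch of the guard described above; accounting body elided. */
static void update_curr_rt(struct rq *rq)
{
	struct task_struct *curr = rq->curr;

	/*
	 * rq->stop has an RT-range prio but does not belong to
	 * rt_sched_class; bail out instead of charging it RT runtime.
	 */
	if (curr->sched_class != &rt_sched_class)
		return;

	/* ... regular RT runtime accounting continues here ... */
}
```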
-
Committed by Peter Zijlstra
It is quite possible for the event to have been disabled between perf_event_read() sending the IPI and the CPU servicing the IPI and calling __perf_event_read(), hence revalidate the state.

Reported-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
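A sketch of that revalidation in the cross-CPU read handler (simplified; context locking elided):

```c
/* Sketch: re-check the event state on the target CPU before reading. */
static void __perf_event_read_sketch(void *info)
{
	struct perf_event *event = info;

	/*
	 * The event may have been disabled between perf_event_read()
	 * sending the IPI and this CPU servicing it; reading the PMU
	 * for an inactive event would yield stale or bogus counts.
	 */
	if (event->state != PERF_EVENT_STATE_ACTIVE)
		return;

	event->pmu->read(event);
}
```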
-
Committed by Boaz Harrosh
This reverts commit 115e19c5. Apparently setting inode->bdi to one's own sb->s_bdi stops VFS from sending *read-aheads*. The problem was bisected to this commit, and a revert fixes it. I'll investigate further why this is happening for the next kernel, but for now, a revert. Sending to stable@kernel.org as well, since the problem also exists in 2.6.37; 2.6.36 is good and does not have this patch.

CC: Stable Tree <stable@kernel.org>
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Merge git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6 (committed by Linus Torvalds)
* 'media_fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6:
  [media] fix saa7111 non-detection
  [media] rc/streamzap: fix reporting response times
  [media] mceusb: really fix remaining keybounce issues
  [media] rc: use time unit conversion macros correctly
  [media] rc/ir-lirc-codec: add back debug spew
  [media] ir-kbd-i2c: improve remote behavior with z8 behind usb
  [media] lirc_zilog: z8 on usb doesn't like back-to-back i2c_master_send
  [media] hdpvr: fix up i2c device registration
  [media] rc/mce: add mappings for missing keys
  [media] gspca - zc3xx: Discard the partial frames
  [media] gspca - zc3xx: Fix bad images with the sensor hv7131r
  [media] gspca - zc3xx: Bad delay when given by a table
-
Merge git://git390.marist.edu/pub/scm/linux-2.6 (committed by Linus Torvalds)
* 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6:
  [S390] reset default for CONFIG_CHSC_SCH
  [S390] qdio: prevent compile warning under CONFIG_32BIT
  [S390] use asm-generic/cacheflush.h
  [S390] tlb: fix build error caused by THP
  [S390] missing sacf in uaccess
  [S390] pgtable_list corruption
  [S390] dasd: prevent panic with unresumed devices
-
Committed by Josef Bacik
Some filesystems don't deal well with being asked to map less than blocksize blocks (GFS2 for example). Since we are always mapping at least blocksize sections anyway, just make sure len is at least as big as a blocksize so we don't trip up any filesystems. Thanks,

Signed-off-by: Josef Bacik <josef@redhat.com>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
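A hedged sketch of the clamp (variable names assumed; the real check sits in the fiemap ioctl path):

```c
/* Sketch: never ask the filesystem to map less than one block. */
u64 blocksize = 1ULL << inode->i_blkbits;

if (len < blocksize)
	len = blocksize;
```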
-
Committed by Namhyung Kim
FMODE_EXEC is a constant of type fmode_t but was being used with plain integer constants, which results in the following warnings from sparse. Fix it using the new macro __FMODE_EXEC.

fs/exec.c:116:58: warning: restricted fmode_t degrades to integer
fs/exec.c:689:58: warning: restricted fmode_t degrades to integer
fs/fcntl.c:777:9: warning: restricted fmode_t degrades to integer

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
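The macro strips the __bitwise restriction once, centrally, instead of open-coding casts at every call site. A sketch of the definition and a typical use (the exec-path flag combination is for illustration only):

```c
/* Sketch of the helper this patch introduces. */
#define __FMODE_EXEC		((__force int) FMODE_EXEC)

/* e.g. opening a binary for execution: */
int flags = O_LARGEFILE | O_RDONLY | __FMODE_EXEC;
```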
-
Committed by Namhyung Kim
AND-ing FMODE_* constants with plain integers results in the following sparse warnings. Fix it.

fs/open.c:662:21: warning: restricted fmode_t degrades to integer
fs/anon_inodes.c:123:34: warning: restricted fmode_t degrades to integer

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by KAMEZAWA Hiroyuki
Commit e401f176 ("memcg: modify accounting function for supporting THP better") added nr_pages to support multiple page sizes in memory_cgroup_charge_statistics. But counting the number of events needs abs(nr_pages) when increasing the counters. This patch fixes the event counting.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Johannes Weiner
Huge page coverage should obviously have less priority than the continued execution of a process. Never kill a process when charging it a huge page fails. Instead, give up after the first failed reclaim attempt and fall back to regular pages.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
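The behavioral change can be sketched as a single guard in the charge path (placement assumed; the huge-page charge simply never requests the OOM killer):

```c
/* Sketch: a failed huge-page charge must not be fatal. */
if (PageTransHuge(page))
	oom = false;	/* caller falls back to regular pages instead */
```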
-
Committed by Johannes Weiner
If reclaim after a failed charging was unsuccessful, the limits are checked again, just in case they settled by means of other tasks. This is all fine as long as every charge is of size PAGE_SIZE, because in that case, being below the limit means having at least PAGE_SIZE bytes available. But with transparent huge pages, we may end up in an endless loop where charging and reclaim fail, but we keep going because the limits are not yet exceeded, although not allowing for a huge page. Fix this up by explicitly checking for enough room, not just whether we are within limits.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
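A hedged sketch of the idea (the helper name is an assumption, field access is simplified and unlocked; the real code goes through the res_counter API): retry only when the remaining room fits the whole charge.

```c
/* Sketch: "below the limit" is not the same as room for a huge page. */
static bool enough_margin(struct res_counter *cnt, unsigned long long bytes)
{
	return cnt->limit - cnt->usage >= bytes;
}
```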
-
Committed by Johannes Weiner
The charging code can encounter a charge size that is bigger than a regular page in two situations: one is a batched charge to fill the per-cpu stocks, the other is a huge page charge. This code is distributed over two functions, however, and only the outer one is aware of huge pages. In case the charging fails, the inner function will tell the outer function to retry if the charge size is bigger than regular pages, assuming batched charging is the only such case. And the outer function will retry forever charging a huge page. This patch makes sure the inner function can distinguish between batch charging and a single huge page charge. It will only signal another attempt if batch charging failed, and go into regular reclaim when it is called on behalf of a huge page.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Jin Dongming
When a tail page of THP is poisoned, memory-failure does nothing except set PG_hwpoison, while the expected behavior is that the process using the poisoned tail page should be killed.

The problem is caused by the lru check on the poisoned tail page of THP. Because the PG_lru flag is only set on the head page of THP, the check always considers the poisoned tail page a non-lru page. So the lru check should be avoided for the tail page of THP, just as it is for hugetlb.

This patch adds !PageTransCompound() before the lru check for THP; thanks to the combined check (!PageHuge() && !PageTransCompound()), the whole branch can be optimized away at build time when both hugetlbfs and THP are set to "N" (or on archs supporting neither).

[akpm@linux-foundation.org: fix unrelated typo in shake_page() comment]
Signed-off-by: Jin Dongming <jin.dongming@np.css.fujitsu.com>
Reviewed-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
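A sketch of the added guard (surrounding code elided): the LRU isolation path is only meaningful for pages that can carry PG_lru themselves.

```c
/* Sketch: PG_lru lives on the head page, so skip hugetlb and THP here,
 * exactly as the combined check above describes. */
if (!PageHuge(p) && !PageTransCompound(p)) {
	/* ...existing lru_add_drain_all()/isolate_lru_page() path... */
}
```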
-
Committed by Jin Dongming
When a tail page of THP is poisoned, the head page gets poisoned too, and the wrong address, the address of the head page, is always reported with the SIGBUS. So when the poisoned page is used by a guest OS running on KVM, after the address translation (hva->gpa) by qemu, an unexpected process on the guest OS is killed by SIGBUS.

What we expect is that the process using the poisoned tail page is killed on the guest OS, not the process using the healthy head page. Since it is not good to poison a healthy page, avoid poisoning any page other than the one that is really poisoned. (While we poison all pages of a huge page in the hugetlb case, we can do this for THP thanks to split_huge_page().)

Here we fix two parts:
1. Isolate only the poisoned page, to make sure the reported address is the address of the poisoned page.
2. Make the poisoned page behave like a poisoned regular page.

[akpm@linux-foundation.org: fix spello in comment]
Signed-off-by: Jin Dongming <jin.dongming@np.css.fujitsu.com>
Reviewed-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Jin Dongming
The poisoned THP is now split with split_huge_page() in collect_procs_anon(). If kmalloc() fails in collect_procs(), split_huge_page() is never called, and the subsequent work of collecting the processes using the poisoned page is not done either, so those processes cannot be killed.

The situation gets worse with CONFIG_DEBUG_VM=y: because the poisoned THP was not split, a system panic is triggered by VM_BUG_ON(PageTransHuge(page)) in try_to_unmap().

This patch:
1. Moves split_huge_page() to before collect_procs(), so that a failure to split the THP is caused by the split itself.
2. Stops the subsequent operations when splitting the THP fails, avoiding an unexpected system panic and pointless work.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Jin Dongming <jin.dongming@np.css.fujitsu.com>
Reviewed-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Ben Dooks
The support@simtec.co.uk address is for direct customer support only; the EB2410ITX and EB110ATX entries should direct to the Simtec Linux Team address, linux@simtec.co.uk. Also add the correct email address for Vincent Sanders.

[akpm@linux-foundation.org: fix Vincent's address]
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Cc: Vincent Sanders <vince@simtec.co.uk>
Cc: Simtec Support <support@simtec.co.uk>
Cc: Simtec Linux Team <linux@simtec.co.uk>
Cc: Jack Stone <jwjstone@fastmail.fm>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Ben Dooks
Add the correct files for the Simtec BAST machine, ensuring the IDE and IRQ routing entries are included, and move to the machine-specific files instead of trying to catch all of arch/arm/mach-s3c2410.

Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Cc: Simtec Linux Team <linux@simtec.co.uk>
Cc: Simtec Support <support@simtec.co.uk>
Cc: Jack Stone <jwjstone@fastmail.fm>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Ben Dooks
There are currently two entries under the "SIMTEC EB2410ITX (BAST)" machine entry for drivers/*/*s3c2410*, which catch everything s3c2410 driver related. This entry is for a specific S3C2410-based machine, so move these two file entries to the "ARM/SAMSUNG ARM ARCHITECTURES" entry, where they will reach a wider audience of interested parties.

Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Cc: Simtec Linux Team <linux@simtec.co.uk>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Cc: Jack Stone <jwjstone@fastmail.fm>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Eric Dumazet
commit 95aac7b1 ("epoll: make epoll_wait() use the hrtimer range feature") added a performance regression because it uses timespec_add_ns() with potentially very large 'ns' values.

[akpm@linux-foundation.org: s/epoll_set_mstimeout/ep_set_mstimeout/, per Davide]
Reported-by: Simon Kirby <sim@hostway.ca>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Shawn Bohrer <shawn.bohrer@gmail.com>
Acked-by: Davide Libenzi <davidel@xmailserver.org>
Cc: <stable@kernel.org> [2.6.37.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
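The replacement helper mentioned in the bracketed note, sketched from the description: it builds the absolute timeout with timespec_add_safe(), which saturates instead of iterating over a huge nanosecond count.

```c
/* Sketch of ep_set_mstimeout(); called only for a positive timeout. */
static inline struct timespec ep_set_mstimeout(long ms)
{
	struct timespec now, ts = {
		.tv_sec  = ms / MSEC_PER_SEC,
		.tv_nsec = NSEC_PER_MSEC * (ms % MSEC_PER_SEC),
	};

	ktime_get_ts(&now);
	return timespec_add_safe(now, ts);	/* saturates on overflow */
}
```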
-
Committed by Minchan Kim
If migrate_huge_page() invoked by memory-failure fails, it calls put_page() itself to drop the page reference, but the caller of migrate_huge_page() also calls putback_lru_pages(). The page can thus be freed twice, corrupting it for the page holder. In addition, cleaning up the pages in the caller is consistent with the behavior of migrate_pages() since cf608ac1 ("mm: compaction: fix COMPACTPAGEFAILED counting").

Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Andrea Arcangeli
In some cases migrate_pages could return zero while still leaving a few pages in the pagelist (and some callers wouldn't notice they have to call putback_lru_pages after commit cf608ac1 ("mm: compaction: fix COMPACTPAGEFAILED counting")). Add one missing putback_lru_pages not added by commit cf608ac1 ("mm: compaction: fix COMPACTPAGEFAILED counting").

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Michal Hocko
noswapaccount couldn't be used to control memsw for both the on and off cases, so we added the swapaccount[=0|1] parameter. This way we can turn the feature off in two ways: noswapaccount resp. swapaccount=0. We have kept the original noswapaccount, but I think we should remove it after some time, as it just adds more command line parameters without any advantage, and the parameter-handling code is uglier when we want both parameters.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
Requested-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Michal Hocko
__setup-based kernel command line parameter handlers, which are dispatched from obsolete_checksetup, are given the parameter value including the '=' (more precisely, everything right after the parameter name). This means the current implementation of swapaccount[=1|0] doesn't work at all: when a value is given we test for "0" resp. "1" but actually receive "=0" resp. "=1", and when no value is given we receive an empty string rather than NULL. The original noswapaccount parameter, which doesn't care about the value, works correctly.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
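A sketch of a handler that accepts what __setup() actually delivers ("=0", "=1", or an empty string):

```c
/* Sketch: compare against the raw "=<value>" strings __setup() passes. */
static int __init enable_swap_account(char *s)
{
	if (!strcmp(s, "=1"))
		really_do_swap_account = 1;
	else if (!strcmp(s, "=0"))
		really_do_swap_account = 0;
	return 1;	/* handled */
}
__setup("swapaccount", enable_swap_account);
```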
-
Committed by Thomas Gleixner
move_native_irq() masks and unmasks the interrupt line unconditionally, but the interrupt line might be masked due to a threaded oneshot handler in progress. Unmasking the line in that case can lead to interrupt storms. Observed on PREEMPT_RT.

Originally-from: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@kernel.org
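The fix preserves the current mask state across the migration instead of unconditionally masking and unmasking; a sketch against the 2.6.38-era genirq structures (names assumed from that code base):

```c
/* Sketch: only unmask afterwards if we were the ones who masked. */
bool masked = desc->status & IRQ_MASKED;

if (!masked)
	desc->irq_data.chip->irq_mask(&desc->irq_data);
move_masked_irq(irq);
if (!masked)
	desc->irq_data.chip->irq_unmask(&desc->irq_data);
```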
-
- 02 Feb 2011, 4 commits
-
-
Merge git://git.monstr.eu/linux-2.6-microblaze (committed by Linus Torvalds)
* 'next' of git://git.monstr.eu/linux-2.6-microblaze:
  microblaze: Fix ASM optimized code for LE
  microblaze: Fix unaligned issue on MMU system with BS=0 DIV=1
  microblaze: Fix DTB passing from bootloader
-
Merge git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6 (committed by Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6:
  cifs: fix length checks in checkSMB
  [CIFS] Update cifs minor version
  cifs: No need to check crypto blockcipher allocation
  cifs: clean up some compiler warnings
  cifs: make CIFS depend on CRYPTO_MD4
  cifs: force a reconnect if there are too many MIDs in flight
  cifs: don't pop a printk when sending on a socket is interrupted
  cifs: simplify SMB header check routine
  cifs: send an NT_CANCEL request when a process is signalled
  cifs: handle cancelled requests better
  cifs: fix two compiler warning about uninitialized vars
-
Committed by Michel Lespinasse
As Tao Ma noticed, commit 5ecfda04 breaks blktrace. This is because blktrace mmaps a file with PROT_WRITE permissions but without PROT_READ, so my attempt to not unnecessarily break COW during mlock ended up making mlock fail with a permission problem.

I propose letting mlock ignore vma protections in all cases except PROT_NONE. In particular, mlock should not fail for PROT_WRITE regions (as in the blktrace case, which broke at 5ecfda04) or for PROT_EXEC regions (which seem to me to have always been broken).

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
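A sketch of the proposed check in the mlock page-walk setup (flag placement assumed): force the follow-page lookup for any mapping that grants some access, i.e. everything except PROT_NONE.

```c
/* Sketch: mlock should work on any mapping with some access rights. */
if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC))
	gup_flags |= FOLL_FORCE;	/* everything but PROT_NONE */
```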
-
Committed by Randy Dunlap
The comments under "config STUB_POULSBO" are close to correct, but they are not being followed. This patch updates them to reflect the requirements for THERMAL. The build error is caused by STUB_POULSBO selecting ACPI_VIDEO when ACPI_VIDEO's config requirements are not met:

ERROR: "thermal_cooling_device_register" [drivers/acpi/video.ko] undefined!
ERROR: "thermal_cooling_device_unregister" [drivers/acpi/video.ko] undefined!

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 01 Feb 2011, 3 commits
-
-
Committed by Javi Merino
Version 15 of schedstats was introduced by commit 67aa0f76 ("sched: remove unused fields from struct rq"), which removed three unused counters from sched_yield(). Update the documentation accordingly.

Signed-off-by: Javi Merino <cibervicho@gmail.com>
Cc: henrix@sapo.pt
Cc: rdunlap@xenotime.net
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <1296515496-8229-1-git-send-email-cibervicho@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Linus Torvalds
-
Merge git://git.linux-nfs.org/projects/trondmy/nfs-2.6 (committed by Linus Torvalds)
* 'bugfixes' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6:
  NFS: NFSv4 readdir loses entries
  NFS: Micro-optimize nfs4_decode_dirent()
  NFS: Fix an NFS client lockdep issue
  NFS construct consistent co_ownerid for v4.1
  NFS: nfs_wcc_update_inode() should set nfsi->attr_gencount
  NFS improve pnfs_put_deviceid_cache debug print
  NFS fix cb_sequence error processing
  NFS do not find client in NFSv4 pg_authenticate
  NLM: Fix "kernel BUG at fs/lockd/host.c:417!" or ".../host.c:283!"
  NFS: Prevent memory allocation failure in nfsacl_encode()
  NFS: nfsacl_{encode,decode} should return signed integer
  NFS: Fix "kernel BUG at fs/nfs/nfs3xdr.c:1338!"
  NFS: Fix "kernel BUG at fs/aio.c:554!"
  NFS4: Avoid potential NULL pointer dereference in decode_and_add_ds().
  NFS: fix handling of malloc failure during nfs_flush_multi()
-