- July 10, 2012: 6 commits
-
-
Committed by Deepak Sikri
For the higher MTU sizes requiring a buffer size greater than 8192, the buffers are sent or received using multiple DMA descriptors, or a single descriptor with the multi-buffer handling option. It was observed during tests that the driver dropped data packets during normal ping operations if the data buffers in use were sized for jumbo frame handling. Memory barriers are added between the preparation of the DMA descriptors in the jumbo frame handling path to ensure that all descriptor writes are complete before the DMA is enabled. Signed-off-by: Deepak Sikri <deepak.sikri@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Deepak Sikri
It was observed that NFS hangs across multiple reboots. The status of the receive descriptors showed that all descriptors were under CPU control and none were assigned to the DMA; the DMA status register also confirmed that the Rx buffer was unavailable. This patch fixes the issue by adding memory barriers to ensure that all writes are complete before the Rx or Tx DMA is enabled, which includes the proper setting of the ownership bit in the DMA descriptors. Signed-off-by: Deepak Sikri <deepak.sikri@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
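Both fixes above follow the same pattern: make every write to a DMA descriptor visible in memory before the descriptor is handed to the engine. Below is a minimal sketch of that idea with a hypothetical descriptor layout, not the driver's real structures.

```c
/* Illustrative sketch only -- hypothetical descriptor layout. */
#include <linux/types.h>
#include <asm/barrier.h>

struct demo_dma_desc {
	u32 status;	/* bit 31: descriptor owned by the DMA engine */
	u32 length;
	u32 buf_addr;
};

#define DEMO_DESC_OWN	(1u << 31)

static void demo_give_desc_to_dma(struct demo_dma_desc *desc, u32 buf, u32 len)
{
	desc->buf_addr = buf;
	desc->length = len;

	/*
	 * Make the buffer address and length globally visible before the
	 * ownership bit flips, so the engine never fetches a half-written
	 * descriptor.
	 */
	wmb();

	desc->status = DEMO_DESC_OWN;
}
```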
-
Committed by Emmanuel Grumbach
When we remove a key, we put a key index which was supposed to tell the firmware that we are actually removing the key. But instead the firmware took that index as a valid index and corrupted the SRAM of the device. This memory corruption on the device mangled the data of the SCD. The impact on the user is that SCD queue 2 got stuck after keys had been removed. Reported-by: Paul Bolle <pebolle@tiscali.nl> Cc: stable@vger.kernel.org Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Stanislaw Gruszka
This is the iwlegacy version of commit 342bbf3f ("iwlwifi: always monitor for stuck queues"), Author: Johannes Berg <johannes.berg@intel.com>, Date: Sun Mar 4 08:50:46 2012 -0800. If we only monitor while associated, the following can happen:
- we're associated, and the queue-stuck check runs, setting the queue "touch" time to X
- we disassociate, stopping the monitoring, which leaves the time set to X
- almost 2s later, we associate and enqueue a frame
- before the frame is transmitted, we monitor for stuck queues and find the time set to X; since it is now later than X + 2000ms, we decide that the queue is stuck and erroneously restart the device
Cc: stable@vger.kernel.org Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Stanislaw Gruszka
In rt2x00_dmastart() we increase the index specified by Q_INDEX, and in rt2x00_dmadone() we increase the index specified by Q_INDEX_DONE. So entries between Q_INDEX_DONE and Q_INDEX are those currently being processed by the hardware, and entries between Q_INDEX and Q_INDEX_DONE are those we can submit to the hardware. Fix rt2x00usb_kick_queue() accordingly, as we need to submit RX entries that are not being processed by the hardware; it previously worked only for an empty queue and was broken otherwise. Note that for TX queues the index ordering is fine: we need to kick entries that have a filled skb but were not yet submitted to the hardware, i.e. starting from Q_INDEX_DONE with the ENTRY_DATA_PENDING bit set. From a practical standpoint this fixes an RX queue stall, usually reproducible in AP mode, as reported for example at https://bugzilla.redhat.com/show_bug.cgi?id=828824. Reported-and-tested-by: Franco Miceli <fmiceli@plan.ceibal.edu.uy> Reported-and-tested-by: Tom Horsley <horsley1953@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Bing Zhao
> *. CID 709078: Resource leak (RESOURCE_LEAK)
> - drivers/net/wireless/mwifiex/cfg80211.c, line: 935
> Assigning: "bss_cfg" = storage returned from "kzalloc(132UL, 208U)"
> - but was not free
> drivers/net/wireless/mwifiex/cfg80211.c:935
Signed-off-by: Bing Zhao <bzhao@marvell.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
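A minimal sketch of the leak pattern flagged above, with hypothetical structure and helper names: every exit taken after the kzalloc() has to free the buffer.

```c
/* Sketch only: demo_bss_cfg and demo_push_bss_cfg are illustrative stand-ins. */
#include <linux/slab.h>

struct demo_bss_cfg {
	u32 channel;
	/* ... */
};

static int demo_push_bss_cfg(struct demo_bss_cfg *cfg);	/* assumed helper */

static int demo_start_ap(void)
{
	struct demo_bss_cfg *bss_cfg;
	int ret;

	bss_cfg = kzalloc(sizeof(*bss_cfg), GFP_KERNEL);
	if (!bss_cfg)
		return -ENOMEM;

	ret = demo_push_bss_cfg(bss_cfg);
	if (ret) {
		kfree(bss_cfg);		/* the free the error path was missing */
		return ret;
	}

	kfree(bss_cfg);
	return 0;
}
```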
-
- July 9, 2012: 6 commits
-
-
Committed by Michael Chan
Commit c0357e97 ("bnx2: stop using net_device.{base_addr, irq}") removed netdev->base_addr, so we need to update cnic to get the MMIO base address from pci_resource_start(). Otherwise, mmap of the uio device will fail. Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
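A sketch of the replacement lookup; the BAR index is an assumption for illustration.

```c
#include <linux/pci.h>

/* Sketch: read the MMIO base from the PCI BAR instead of the removed
 * netdev->base_addr field.  BAR 0 is assumed here. */
static resource_size_t demo_mmio_base(struct pci_dev *pdev)
{
	return pci_resource_start(pdev, 0);
}
```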
-
Committed by Bjørn Mork
Add a device with limited QMI support. It does not support the normal QMI_WDS commands for connection management. Instead, sending a QMI_CTL SET_INSTANCE_ID command is required to enable the network interface: 01 0f 00 00 00 00 00 00 20 00 04 00 01 01 00 00. A number of QMI_DMS and QMI_NAS commands are also supported for optional device management. Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Daney
With lockdep enabled we get:
=============================================
[ INFO: possible recursive locking detected ]
3.4.4-Cavium-Octeon+ #313 Not tainted
---------------------------------------------
kworker/u:1/36 is trying to acquire lock: (&bus->mdio_lock){+.+...}, at: [<ffffffff813da7e8>] mdio_mux_read+0x38/0xa0
but task is already holding lock: (&bus->mdio_lock){+.+...}, at: [<ffffffff813d79e4>] mdiobus_read+0x44/0x88
other info that might help us debug this: Possible unsafe locking scenario:
CPU0: lock(&bus->mdio_lock); lock(&bus->mdio_lock);
*** DEADLOCK ***
May be due to missing lock nesting notation . . .
This is a false positive: since we are indeed using 'nested' locking, we need to use mutex_lock_nested(). In theory we could stack multiple MDIO multiplexers, but that would require passing the nesting level (which is difficult to know) to mutex_lock_nested(). Instead we assume the simple case of a single level of nesting. Since these are only warning messages, it isn't so important to solve the general case. Signed-off-by: David Daney <david.daney@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
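A sketch of the single-depth nesting annotation, assuming one level of multiplexing as the message describes; the function name is illustrative.

```c
#include <linux/mutex.h>
#include <linux/phy.h>

/* Sketch: forward a register read to the parent MDIO bus while the child
 * bus lock is already held by mdiobus_read().  Marking the parent lock as
 * one level of nesting keeps lockdep from reporting a false recursion. */
static int demo_mdio_mux_read(struct mii_bus *parent, int phy_id, int reg)
{
	int ret;

	mutex_lock_nested(&parent->mdio_lock, SINGLE_DEPTH_NESTING);
	ret = parent->read(parent, phy_id, reg);
	mutex_unlock(&parent->mdio_lock);

	return ret;
}
```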
-
Committed by Alexander Duyck
DCB and SR-IOV cannot currently be enabled at the same time, as the queueing schemes are incompatible. If they are both enabled it will result in Tx hangs, since only the first Tx queue will be able to transmit any traffic. The simple fix for this is to block enabling TCs in ixgbe_setup_tc if SR-IOV is enabled. This change will be reverted once we can support SR-IOV and DCB coexistence. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
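A sketch of the guard, using an illustrative adapter structure and flag name standing in for the driver's own.

```c
#include <linux/errno.h>
#include <linux/types.h>

/* Illustrative stand-in for the driver's adapter state. */
struct demo_tc_adapter {
	u32 flags;
};
#define DEMO_FLAG_SRIOV_ENABLED	(1u << 0)

static int demo_setup_tc(struct demo_tc_adapter *adapter, u8 num_tc)
{
	/* Traffic classes and SR-IOV cannot coexist yet: refuse the request
	 * instead of letting the Tx queues hang. */
	if (num_tc && (adapter->flags & DEMO_FLAG_SRIOV_ENABLED))
		return -EINVAL;

	/* ... normal traffic-class setup would continue here ... */
	return 0;
}
```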
-
Committed by Cloud Ren
Some people report that atl1c can cause a system hang with the following kernel trace:
---------------------------------------
WARNING: at .../net/sched/sch_generic.c:258 dev_watchdog+0x1db/0x1d0() ... NETDEV WATCHDOG: eth0 (atl1c): transmit queue 0 timed out ...
---------------------------------------
This is caused by calling netif_stop_queue when the cable link is down. So remove netif_stop_queue, because link_watch will take over. Signed-off-by: xiong <xiong@qca.qualcomm.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Cloud Ren <cjren@qca.qualcomm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
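A sketch of the link-down handling after the change: only the carrier state is touched and the queue is left to the link-watch machinery (everything except the netdev helpers is illustrative).

```c
#include <linux/netdevice.h>

/* Sketch: on link loss, drop the carrier and let link_watch manage the
 * transmit queue.  Stopping the queue here is what provoked the spurious
 * "transmit queue 0 timed out" watchdog warnings. */
static void demo_handle_link_down(struct net_device *netdev)
{
	netif_carrier_off(netdev);
	/* intentionally no netif_stop_queue(netdev) */
}
```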
-
Committed by Eric Dumazet
Commit a1c7fff7 ("net: netdev_alloc_skb() use build_skb()") broke b44 on some 64-bit machines. It appears b44 and b43 use __netdev_alloc_skb() instead of alloc_skb() for their bounce buffers. There is no need to add an extra NET_SKB_PAD reservation for bounce buffers: in the TX path NET_SKB_PAD is useless, and in the RX path in b44 we force a copy of incoming frames if GFP_DMA allocations were needed. Reported-and-bisected-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- July 4, 2012: 3 commits
-
-
Committed by NeilBrown
Fix the build error introduced by commit b357f04a. That function doesn't get its extra args until a later patch. Bother. Reported-by: Fengguang Wu <wfg@linux.intel.com> Reported-by: Simon Kirby <sim@hostway.ca> Reported-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Linus Torvalds
In commit 070ad7e7 ("floppy: convert to delayed work and single-thread wq") the 'fd_timeout' timer was converted to a delayed work. However, the "del_timer(&fd_timeout)" was lost in the process, and any previous pending timeouts would stay active when we then re-queued the timeout. This resulted in the floppy probe sequence having a (stale) 20s timeout rather than the intended 3s timeout, and thus made booting with the floppy driver (but no actual floppy controller) take much longer than it should. Of course, there's little reason for most people to compile the floppy driver into the kernel at all, which is why most people never noticed. Canceling the delayed work where we used to do the del_timer() fixes the issue, and makes the floppy probing use the proper new timeout instead. The three second timeout is still very wasteful, but better than the 20s one. Reported-and-tested-by: Andi Kleen <ak@linux.intel.com> Reported-and-tested-by: Calvin Walton <calvin.walton@kepstin.ca> Cc: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
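A sketch of the delayed-work equivalent of the lost del_timer(): cancel any previously queued timeout before re-arming it. The work item and function names are stand-ins, not the floppy driver's own.

```c
#include <linux/workqueue.h>

static void demo_timeout_fn(struct work_struct *work)
{
	/* ... handle the probe timeout ... */
}

static DECLARE_DELAYED_WORK(demo_fd_timeout, demo_timeout_fn);

/* Sketch: re-arm the probe timeout.  Without the cancel, a stale timeout
 * queued earlier keeps running instead of the newly requested one. */
static void demo_rearm_timeout(unsigned long delay_jiffies)
{
	cancel_delayed_work(&demo_fd_timeout);	/* what del_timer() used to do */
	schedule_delayed_work(&demo_fd_timeout, delay_jiffies);
}
```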
-
Committed by Rajendra Nayak
The commit below introduced a bug in __clk_set_parent() which could cause it to *skip* the parent validation that makes sure the parent passed to the API is a valid one: commit 7975059d ("clk: Allow late cache allocation for clk->parents"), Author: Rajendra Nayak <rnayak@ti.com>, Date: Wed Jun 6 14:41:31 2012 +0530. This was identified by the following compiler warning, as reported by Marc Kleine-Budde: drivers/clk/clk.c: In function '__clk_set_parent': drivers/clk/clk.c:1083:5: warning: 'i' may be used uninitialized in this function [-Wuninitialized]. Various options were discussed on how to fix this, one being initializing 'i' to clk->num_parents, but the approach below was found to be more appropriate as it also makes the parent-validation code simpler to read. Reported-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Rajendra Nayak <rnayak@ti.com> Signed-off-by: Mike Turquette <mturquette@linaro.org> Cc: stable@kernel.org
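A sketch of the parent-validation step that must not be skipped: walk the registered parent names and fail when the requested parent is not among them. The structure here is a simplified stand-in for the framework's internal bookkeeping.

```c
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

/* Simplified stand-in for the clock framework's internal state. */
struct demo_clk {
	const char	*name;
	u8		num_parents;
	const char	**parent_names;
};

/* Return the index of 'parent' among clk's registered parents, or
 * -EINVAL if it is not a valid parent -- the check the bug skipped. */
static int demo_find_parent(struct demo_clk *clk, struct demo_clk *parent)
{
	u8 i;

	for (i = 0; i < clk->num_parents; i++)
		if (!strcmp(clk->parent_names[i], parent->name))
			return i;

	return -EINVAL;
}
```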
-
- July 3, 2012: 20 commits
-
-
Committed by Mike Snitzer
If CONFIG_DM_DEBUG_SPACE_MAPS is enabled and memory is fragmented and a sufficiently-large metadata device is used in a thin pool then the space map checker will fail to allocate the memory it requires. Switch from kmalloc to vmalloc to allow larger virtually contiguous allocations for the space map checker's internal count arrays. Reported-by: Vivek Goyal <vgoyal@redhat.com> Cc: stable@kernel.org Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com>
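A sketch of the allocation switch: the checker's count arrays only need to be virtually contiguous, so vmalloc() can succeed where a large kmalloc() fails once physical memory is fragmented (the element type is illustrative).

```c
#include <linux/types.h>
#include <linux/vmalloc.h>

/* Sketch: one counter per metadata block.  For a large metadata device
 * this array can be many megabytes, which kmalloc() cannot reliably
 * provide from fragmented memory. */
static u32 *demo_alloc_counts(unsigned long nr_blocks)
{
	return vmalloc(nr_blocks * sizeof(u32));
}

static void demo_free_counts(u32 *counts)
{
	vfree(counts);
}
```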
-
Committed by Mike Snitzer
If CONFIG_DM_DEBUG_SPACE_MAPS is enabled and dm_sm_checker_create() fails, dm_tm_create_internal() would still return success even though it cleaned up all resources it was supposed to have created. This will lead to a kernel crash:
general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC ...
RIP: 0010:[<ffffffff81593659>] [<ffffffff81593659>] dm_bufio_get_block_size+0x9/0x20
Call Trace:
[<ffffffff81599bae>] dm_bm_block_size+0xe/0x10
[<ffffffff8159b8b8>] sm_ll_init+0x78/0xd0
[<ffffffff8159c1a6>] sm_ll_new_disk+0x16/0xa0
[<ffffffff8159c98e>] dm_sm_disk_create+0xfe/0x160
[<ffffffff815abf6e>] dm_pool_metadata_open+0x16e/0x6a0
[<ffffffff815aa010>] pool_ctr+0x3f0/0x900
[<ffffffff8158d565>] dm_table_add_target+0x195/0x450
[<ffffffff815904c4>] table_load+0xe4/0x330
[<ffffffff815917ea>] ctl_ioctl+0x15a/0x2c0
[<ffffffff81591963>] dm_ctl_ioctl+0x13/0x20
[<ffffffff8116a4f8>] do_vfs_ioctl+0x98/0x560
[<ffffffff8116aa51>] sys_ioctl+0x91/0xa0
[<ffffffff81869f52>] system_call_fastpath+0x16/0x1b
Fix the space map checker code to return an appropriate ERR_PTR and have dm_sm_disk_create() and dm_tm_create_internal() check for it with IS_ERR. Reported-by: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Alasdair G Kergon <agk@redhat.com>
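A sketch of the error-propagation pattern the fix introduces, with hypothetical names: the constructor returns an ERR_PTR() on failure, and the caller checks IS_ERR() instead of assuming success.

```c
#include <linux/err.h>
#include <linux/slab.h>

struct demo_checker {
	int dummy;	/* stand-in contents */
};

static struct demo_checker *demo_checker_create(void)
{
	struct demo_checker *c = kzalloc(sizeof(*c), GFP_KERNEL);

	if (!c)
		return ERR_PTR(-ENOMEM);	/* report the failure explicitly */
	return c;
}

static int demo_tm_create(void)
{
	struct demo_checker *c = demo_checker_create();

	if (IS_ERR(c))
		return PTR_ERR(c);	/* propagate instead of claiming success */

	/* ... use the checker ... */
	kfree(c);
	return 0;
}
```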
-
Committed by Mike Snitzer
Clean up the shadow table before destroying the transaction manager. The reference leak was identified with kmemleak when running test_discard_random_sectors in the thinp-test-suite. Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Committed by Joe Thornber
Userland sometimes sees a corrupt metadata block if metadata is changing rapidly when a metadata snapshot is reserved for userland. To make the problem go away, commit before we take the metadata snapshot (which is a sensible thing to do anyway). The checksums mean userland spots this corruption immediately, so there's no risk of acting on incorrect data. No corruption exists from the kernel's point of view, and thin_check passes after pool shutdown. I believe this is to do with shared blocks at the first level of the {device, mapping} btree. Prior to the metadata-snap support no sharing at this level was possible, so this patch is only required after commit cc8394d8 ("dm thin: provide userspace access to pool metadata"). Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Committed by Daniel Vetter
Especially vesafb likes to map everything as uc- (yikes), and if that mapping still hangs around while we try to map the gtt as wc, the kernel will downgrade our request to uc-, resulting in abysmal performance. Unfortunately we can't do this as early as radeon does (i.e. as the first thing we do when initializing the hw) because our fb/mmio space region moves around on a per-gen basis. So I've had to move it below the gtt initialization, but that seems to work, too. The important thing is that we do this before we set up the gtt wc mapping. Now an altogether different question is why people compile their kernels with vesafb enabled, but I guess making things just work isn't bad per se ... v2: - s/radeondrmfb/inteldrmfb/ - fix up error handling v3: Kill #ifdef X86, this is Intel after all. Noticed by Ben Widawsky. v4: Jani Nikula complained about the pointless bool primary initialization. v5: Don't oops if we can't allocate, noticed by Chris Wilson. v6: Resolve conflicts with agp rework and fixup whitespace. This is commit e188719a in drm-next. Backport to the 3.5 -fixes queue requested by Dave Airlie - due to grub using vesa on fedora, their initrd seems to load vesafb before loading the real kms driver. So tons more people actually experience a dead-slow gpu. Hence also the Cc: stable. Cc: stable@vger.kernel.org Reported-and-tested-by: "Kilarski, Bernard R" <bernard.r.kilarski@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Takashi Iwai
When a monitor's EDID doesn't give the preferred bit, the driver assumes that the mode with the highest resolution and rate is the preferred mode. Meanwhile the recent changes allowing more modes in the GTF/CVT ranges actually give more modes, and some of them may be over the native size. Thus such a mode could be picked up as the preferred mode although it's not the native resolution. To avoid this problem, this patch limits the addition of inferred modes by checking that they are not greater than the other modes. Also, it checks for duplicated mode entries at the same time. Reviewed-by: Adam Jackson <ajax@redhat.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Jerome Glisse
In the GEM idle/busy ioctl the radeon object was dereferenced after drm_gem_object_unreference_unlocked, which, in case the object had been destroyed, leads to use of a possibly freed pointer with possibly wrong data. Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
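A sketch of the ordering rule behind the fix: read whatever is needed from the object while the reference is still held, because dropping the reference may free it. The busy-check helper is hypothetical.

```c
#include <drm/drmP.h>

static int demo_object_is_busy(struct drm_gem_object *gobj);	/* assumed */

/* Sketch: query the object before dropping the reference.  Touching gobj
 * after drm_gem_object_unreference_unlocked() risks a use-after-free if
 * that was the final reference. */
static int demo_gem_busy_ioctl(struct drm_gem_object *gobj)
{
	int busy = demo_object_is_busy(gobj);

	drm_gem_object_unreference_unlocked(gobj);
	return busy;
}
```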
-
Committed by NeilBrown
The value returned by "mddev_check_plug" is only valid until the next 'schedule', as that will unplug things. This could happen at any call to mempool_alloc, so just calling mddev_check_plug at the start doesn't really make sense. So call it just before, or just after, queuing things for the thread. As the action that happens at unplug is to wake the thread, this makes lots of sense. If we cannot add a plug (which requires a small GFP_ATOMIC alloc) we wake the thread immediately. RAID5 is a bit different: requests are queued for the thread and the thread is woken by release_stripe, so we don't need to wake the thread on failure. However the thread doesn't perform certain actions when there is any active plug, so it is important to install a plug before waking the thread. So for RAID5 we install the plug *before* queuing the request and waking the thread. Without this patch it is possible for raid1 or raid10 to queue a request without then waking the thread, resulting in the array locking up. Also change raid10 to only flush_pending_write when there are no active plugs, just like raid1. This patch is suitable for 3.0 or later. I plan to submit it to -stable, but I'd like to let it spend a few weeks in mainline first to be sure it is completely safe. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
We currently only allow a device to be re-added if it appears to be in-sync. This is overly restrictive, as it may be desirable to re-add a device that is in the middle of recovery. So remove the test for "InSync" - the test on rdev->raid_disk is sufficient to ensure that the re-add will succeed. Reported-by: Alexander Lyakas <alex.bolshoy@gmail.com> Tested-by: Alexander Lyakas <alex.bolshoy@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
When we added hot_replace we doubled the number of devices that could be in a RAID1 array, so we doubled how far read_balance would search. Unfortunately we didn't double the point at which it looped back to the beginning - so it effectively loops over all non-replacement disks twice. This doesn't cause bad behaviour, but it is pointless and means we never read from replacement devices. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Shaohua Li
There is no locking when setting the STRIPE_DELAYED and STRIPE_PREREAD_ACTIVE bits, but the two bits are related. A delayed stripe can be moved to the hold list only when the preread-active stripe count is below IO_THRESHOLD. If a stripe has both bits set, it will sit in the delayed list while also keeping the preread count non-zero, so it will never leave the delayed list. Signed-off-by: Shaohua Li <shli@fusionio.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by majianpeng
We may not be able to fix a bad block if the array is degraded or the over-write fails. In these cases we currently eject the device, but we should record a bad block if possible. Signed-off-by: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Having the 'name' arg optional and defaulting to the current personality name is not necessary and leads to errors: when changing the level of an array we can end up using the name of the old level instead of the new one. So make it non-optional and always explicitly pass the name of the level that the array will be. Reported-by: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Commit 58c54fcc ("md/raid10: handle further errors during fix_read_error better") in 3.1 added "r10_sync_page_io", which takes an IO size in sectors. But we were passing the IO size in bytes!!! This resulted in bio_add_page failing, an empty request being sent down, and a consequent BUG_ON in scsi_lib. [Fix a missing space in an error message at the same time.] This fix is suitable for 3.1.y and later. Cc: stable@vger.kernel.org Reported-by: Christian Balzer <chibi@gol.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Commit 43220aa0 ("md/raid5: fix a hang on device failure") fixed a hang, but introduced a refcounting imbalance: if the presence of bad blocks ever caused an rdev to be 'blocked', we would increment the refcount on the rdev and never decrement it. So add the needed rdev_dec_pending when md_wait_for_blocked_rdev is not called. Reported-by: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by majianpeng
Adding a blk_plug in sync_thread increases the performance of sync: because sync_thread did not plug, the bios issued during a RAID sync did not merge well. Testing environment: SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller. OS: Linux xxx 3.5.0-rc2+ #340 SMP Tue Jun 12 09:00:25 CST 2012 x86_64 x86_64 x86_64 GNU/Linux. RAID5: four ST31000524NS disks. Without blk_plug: recovery speed about 63M/sec; with blk_plug: recovery speed about 120M/sec. Using blktrace: blktrace -d /dev/sdb -w 60 -o -|blkparse -i -
Without blk_plug, Total (8,16):
Reads Queued: 309811, 1239MiB; Writes Queued: 0, 0KiB
Read Dispatches: 283583, 1189MiB; Write Dispatches: 0, 0KiB
Reads Requeued: 0; Writes Requeued: 0
Reads Completed: 273351, 1149MiB; Writes Completed: 0, 0KiB
Read Merges: 23533, 94132KiB; Write Merges: 0, 0KiB
IO unplugs: 0; Timer unplugs: 0
With blk_plug, Total (8,16):
Reads Queued: 428697, 1714MiB; Writes Queued: 0, 0KiB
Read Dispatches: 3954, 1714MiB; Write Dispatches: 0, 0KiB
Reads Requeued: 0; Writes Requeued: 0
Reads Completed: 3956, 1715MiB; Writes Completed: 0, 0KiB
Read Merges: 424743, 1698MiB; Write Merges: 0, 0KiB
IO unplugs: 0; Timer unplugs: 3384
The merge ratio is markedly increased. Signed-off-by: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
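A sketch of the plugging added around a batch of resync submissions; batching the bios inside a blk_plug lets the block layer merge adjacent requests before dispatch. The submission helper is hypothetical.

```c
#include <linux/blkdev.h>

static void demo_issue_resync_io(unsigned long chunk);	/* assumed helper */

/* Sketch: plug around a batch of resync requests so adjacent bios can be
 * merged on unplug instead of being dispatched one by one. */
static void demo_sync_batch(unsigned long start, unsigned long nr)
{
	struct blk_plug plug;
	unsigned long i;

	blk_start_plug(&plug);
	for (i = 0; i < nr; i++)
		demo_issue_resync_io(start + i);
	blk_finish_plug(&plug);
}
```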
-
Committed by majianpeng
In ops_run_io(), the call to md_wait_for_blocked_rdev will decrement nr_pending, so we lose the reference we hold on the rdev. So atomic_inc it first to maintain the reference. This bug was introduced by commit 73e92e51 ("md/raid5: don't write to known bad block on doubtful devices"), which appeared in 3.0, so the patch is suitable for stable kernels since then. Cc: stable@vger.kernel.org Signed-off-by: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
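A sketch of the reference balance described above; it follows md's rdev naming but is purely illustrative.

```c
#include "md.h"	/* drivers/md internal header, shown for illustration */

/* Sketch: take an extra nr_pending reference before the wait, because
 * md_wait_for_blocked_rdev() drops one. */
static void demo_wait_on_blocked_rdev(struct md_rdev *rdev,
				      struct mddev *mddev)
{
	atomic_inc(&rdev->nr_pending);		/* keep our reference alive */
	md_wait_for_blocked_rdev(rdev, mddev);	/* this drops nr_pending */
}
```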
-
Committed by majianpeng
In chunk_aligned_read() we are adding data_offset before calling is_badblock. But is_badblock also adds data_offset, so that is bad. So move the addition of data_offset to after the call to is_badblock. This bug was introduced by commit 31c176ec ("md/raid5: avoid reading from known bad blocks"), which first appeared in 3.0, so this patch is suitable for any -stable kernel from 3.0.y onwards. However it will need minor revision for most of those (as the comment didn't appear until recently). Cc: stable@vger.kernel.org Signed-off-by: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
If a RAID5 has both a failed device and a device marked as 'WantReplacement', then we should preferentially replace the failed device. However the current code replaces whichever is found first. So split the search into two loops: check for failed/missing devices first, and only check for WantReplacement if nothing is failed or missing. Reported-by: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
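A sketch of the two-pass selection order, with illustrative per-slot flags: an actually failed or missing device always wins, and WantReplacement devices are only considered when nothing has failed.

```c
#include <linux/types.h>

/* Illustrative per-slot state, not the real md structures. */
struct demo_slot {
	bool failed;
	bool missing;
	bool want_replacement;
};

static int demo_pick_slot_to_replace(const struct demo_slot *slots, int nr)
{
	int i;

	/* Pass 1: a failed or missing device always takes priority. */
	for (i = 0; i < nr; i++)
		if (slots[i].failed || slots[i].missing)
			return i;

	/* Pass 2: otherwise honour WantReplacement requests. */
	for (i = 0; i < nr; i++)
		if (slots[i].want_replacement)
			return i;

	return -1;	/* nothing to replace */
}
```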
-
Committed by NeilBrown
If a RAID10 has an odd number of chunks - as might happen when there are an odd number of devices - the last chunk has no pair and so is not mirrored. We don't store data there, but when recovering the last device in an array we try to recover that last chunk from a non-existent location. This results in an error, and the recovery aborts. When we get to that last chunk we should just stop - there is nothing more to do anyway. This bug has been present since the introduction of RAID10, so the patch is appropriate for any -stable kernel. Cc: stable@vger.kernel.org Reported-by: Christian Balzer <chibi@gol.com> Tested-by: Christian Balzer <chibi@gol.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
- July 1, 2012: 2 commits
-
-
Committed by Bruce Allan
Currently only used when packet split mode is enabled with jumbo frames, IP payload checksum (for fragmented UDP packets) is mutually exclusive with receive hashing offload since the hardware uses the same space in the receive descriptor for the hardware-provided packet checksum and the RSS hash, respectively. Users currently must disable jumbos when receive hashing offload is enabled, or vice versa, because of this incompatibility. Since testing has shown that IP payload checksum does not provide any real benefit, just remove it so that there is no longer a choice between jumbos or receive hashing offload but not both as done in other Intel GbE drivers (e.g. e1000, igb). Also, add a missing check for IP checksum error reported by the hardware; let the stack verify the checksum when this happens. CC: stable <stable@vger.kernel.org> [3.4] Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Mitch A Williams
Using ethtool -C ethX rx-usecs 0 crashes with a divide by zero. Refactor this function to fix this issue and make it more clear what the intent of each conditional is. Add comment regarding using a setting of zero. CC: stable <stable@vger.kernel.org> [3.3+] CC: David Ahern <daahern@cisco.com> Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
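A sketch of the zero guard; the field names and the usecs-to-rate scaling here are assumptions for illustration, not the driver's actual conversion. The point is that a setting of 0 means "no interrupt throttling" and must be handled before any division.

```c
#include <linux/types.h>

/* Illustrative adapter state; the real driver derives an ITR register
 * value from the rx-usecs setting. */
struct demo_vf_adapter {
	u32 itr_setting;	/* 0 means interrupt throttling disabled */
	u32 itr;		/* interrupts per second (illustrative)  */
};

static int demo_set_rx_coalesce(struct demo_vf_adapter *adapter, u32 rx_usecs)
{
	if (rx_usecs == 0) {
		adapter->itr_setting = 0;
		adapter->itr = 0;
		return 0;		/* handled without dividing by zero */
	}

	adapter->itr_setting = rx_usecs;
	adapter->itr = 1000000U / rx_usecs;

	return 0;
}
```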
-
- June 30, 2012: 2 commits
-
-
Committed by Stuart Hayes
The acpi_pad driver can get stuck in destroy_power_saving_task() waiting for kthread_stop() to stop a power_saving thread. The problem is that the isolated_cpus_lock mutex is owned when destroy_power_saving_task() calls kthread_stop(), which waits for a power_saving thread to end, and the power_saving thread tries to acquire the isolated_cpus_lock when it calls round_robin_cpu(). This patch fixes the issue by making round_robin_cpu() use its own mutex. https://bugzilla.kernel.org/show_bug.cgi?id=42981 Cc: stable@vger.kernel.org Signed-off-by: Stuart Hayes <Stuart_Hayes@Dell.com> Signed-off-by: Len Brown <len.brown@intel.com>
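A sketch of the lock split: round_robin_cpu() takes its own mutex, so a power-saving thread passing through it can never block on the mutex held across kthread_stop(). Names are illustrative.

```c
#include <linux/mutex.h>

/* Separate lock for CPU round-robin selection, distinct from the lock
 * held while power-saving threads are created and stopped. */
static DEFINE_MUTEX(demo_round_robin_lock);

static void demo_round_robin_cpu(unsigned int tsk_index)
{
	mutex_lock(&demo_round_robin_lock);
	/* ... choose the next CPU for power-saving thread tsk_index ... */
	mutex_unlock(&demo_round_robin_lock);
}
```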
-
Committed by Zhang Rui
This fixes a regression in 3.4-rc1 caused by commit ea9f8856 ("ACPI video: Harden video bus adding"). Some platforms don't have a _DOS control method, but the ACPI backlight still works. We should not invoke _DOS on these platforms. https://bugzilla.kernel.org/show_bug.cgi?id=43168 Cc: Igor Murzov <intergalactic.anonymous@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Zhang Rui <rui.zhang@intel.com> Signed-off-by: Len Brown <len.brown@intel.com>
-
- June 29, 2012: 1 commit
-
-
Committed by Alex Deucher
Cayman and Trinity allow for variably sized VM page tables, but SI requires that all page tables be the same size. The current code assumes variably sized VM page tables, so on SI part of each page table may end up overlapping with other memory, which could be interpreted by the VM hardware as garbage. Change the code to better accommodate SI: allocate enough space for at least 2 full page tables and always set last_pfn to max_pfn on SI so each VM is backed by a full page table. This limits us to only 2 VMs active at any given time on SI. This will be rectified and the code can be reunified once we move to two-level page tables. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Dave Airlie <airlied@redhat.com>
-