1. 28 Nov, 2012 — 21 commits
  2. 24 Nov, 2012 — 3 commits
  3. 23 Nov, 2012 — 10 commits
  4. 22 Nov, 2012 — 6 commits
    • OMAPDSS: do not fail if dpll4_m4_ck is missing · 8ad9375f
      Authored by Aaro Koskinen
      Do not fail if dpll4_m4_ck is missing. The clock is not there on omap24xx,
      so this should not be a hard error.
      
      The patch retains the functionality before the commit 185bae10 (OMAPDSS:
      DSS: Cleanup cpu_is_xxxx checks).
      Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
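The pattern the commit describes — treating an absent clock as optional rather than a fatal error — can be sketched as below. This is an illustrative model, not the real driver: the stub stands in for the kernel's `clk_get()`, and `dss_get_clocks` is a hypothetical name.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the kernel clock API; the real driver
 * calls clk_get() and checks the result with IS_ERR(). */
struct clk;

static struct clk *clk_get_stub(const char *name, int present)
{
    (void)name;
    return present ? (struct clk *)0x1 : NULL; /* NULL models a missing clock */
}

/* The fix: a missing dpll4_m4_ck (as on omap24xx) is not a hard error;
 * initialisation proceeds and callers check the pointer before use. */
static int dss_get_clocks(int clock_present, struct clk **out)
{
    *out = clk_get_stub("dpll4_m4_ck", clock_present);
    return 0; /* succeed either way */
}
```

The key design point is that the error is deferred to the point of use instead of aborting probe on hardware where the clock legitimately does not exist.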
    • md/raid10: decrement correct pending counter when writing to replacement. · 884162df
      Authored by NeilBrown
      When a write to a replacement device completes, we carefully
      and correctly found the rdev that the write actually went to,
      and then blithely called rdev_dec_pending on the primary rdev,
      even if the write was to the replacement.
      
      This means that any writes to an array while a replacement
      was ongoing would cause the nr_pending count for the primary
      device to go negative, so it could never be removed.
      
      This bug has been present since replacement was introduced in
      3.3, so it is suitable for any -stable kernel since then.
      Reported-by: "George Spelvin" <linux@horizon.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
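The accounting bug can be modelled in a few lines. This is a minimal sketch, not the kernel's raid10 code: the structs and function names here are illustrative stand-ins for the rdev/nr_pending bookkeeping the commit describes.

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative stand-ins for the kernel structures. */
struct rdev { atomic_int nr_pending; };
struct mirror { struct rdev *primary; struct rdev *replacement; };

/* The bug: completion always decremented the primary's counter,
 * regardless of which device the write actually went to. */
static void end_write_buggy(struct mirror *m, struct rdev *wrote_to)
{
    (void)wrote_to;
    atomic_fetch_sub(&m->primary->nr_pending, 1);
}

/* The fix: decrement the counter of the rdev the write went to. */
static void end_write_fixed(struct mirror *m, struct rdev *wrote_to)
{
    (void)m;
    atomic_fetch_sub(&wrote_to->nr_pending, 1);
}
```

With the buggy version, a write submitted to the replacement leaves the replacement's count stuck at 1 and drives the primary's count negative, which is exactly why the primary could never be removed.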
    • md/raid10: close race that loses writes when a replacement completes. · e7c0c3fa
      Authored by NeilBrown
      When a replacement operation completes there is a small window
      in which the original device is marked 'faulty' while the
      replacement still looks like a replacement.  The faulty device
      should be removed and the replacement moved into place very
      quickly, but it isn't instant.

      So the code that writes out to the array must handle the
      possibility that the replacement is the only working device for
      some slot - but it doesn't.  If the primary device is faulty it
      just gives up.  This can lead to corruption.
      
      So make the code more robust: if either the primary or the
      replacement is present and working, write to it.  Only when
      neither is present do we give up.
      
      This bug has been present since replacement was introduced in
      3.3, so it is suitable for any -stable kernel since then.
      Reported-by: "George Spelvin" <linux@horizon.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
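The target-selection logic before and after the fix can be sketched as follows. The `slot` struct and function names are hypothetical simplifications of the per-slot rdev state in md/raid10, kept only to show the control flow.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of one array slot during the completion window. */
struct slot {
    bool primary_working;
    bool replacement_working;
};

enum target { TARGET_NONE, TARGET_PRIMARY, TARGET_REPLACEMENT };

/* Buggy behaviour described in the commit: give up as soon as the
 * primary is faulty, even if the replacement could take the write. */
static enum target pick_target_buggy(const struct slot *s)
{
    return s->primary_working ? TARGET_PRIMARY : TARGET_NONE;
}

/* The fix: fall back to the replacement; fail only when neither
 * device is present and working. */
static enum target pick_target_fixed(const struct slot *s)
{
    if (s->primary_working)
        return TARGET_PRIMARY;
    if (s->replacement_working)
        return TARGET_REPLACEMENT;
    return TARGET_NONE;
}
```

In the race window (primary faulty, replacement not yet promoted) the buggy path silently drops the write; the fixed path routes it to the replacement, which is the only copy that will survive.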
    • drm/nouveau: use the correct fence implementation for nv50 · ace5a9b8
      Authored by Maarten Lankhorst
      Only compile-time tested.  Noticed that nv50_fence_create was never
      used, so fix this.  This will probably fix vblank on nv50 cards.
      
      Hopefully this is still in time for 3.7 final release.
      Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
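The class of bug here — a per-chipset dispatch table that never selects one of its implementations — can be sketched abstractly. The chipset ranges and function bodies below are assumptions for illustration only; the real driver wires `nv50_fence_create` into nouveau's per-chipset engine table.

```c
#include <assert.h>

typedef int (*fence_create_fn)(void);

/* Stub implementations standing in for the driver's fence backends;
 * return values exist only so the test can see which one was picked. */
static int nv10_fence_create(void) { return 10; }
static int nv50_fence_create(void) { return 50; }
static int nv84_fence_create(void) { return 84; }

/* The bug pattern: if the nv50 range fell through to a neighbouring
 * backend, nv50_fence_create was never referenced anywhere.  The fix
 * selects it for the nv50 family (ranges here are illustrative). */
static fence_create_fn pick_fence_create(unsigned chipset)
{
    if (chipset < 0x50)
        return nv10_fence_create;
    if (chipset < 0x84)
        return nv50_fence_create;
    return nv84_fence_create;
}
```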
    • md/raid5: make sure we clear R5_Discard when discard is finished. · ca64cae9
      Authored by NeilBrown
      commit 9e444768
          MD: raid5 avoid unnecessary zero page for trim

      changed raid5 to clear R5_Discard when the complete request is
      handled rather than when submitting the per-device discard request.
      However, it did not clear R5_Discard for the parity device.
      
      This means that if the stripe_head was reused before it expired from
      the cache, the setting would be wrong and a hang would result.
      
      Also if the R5_Uptodate bit happens to be set, R5_Discard again
      won't be cleared.  But R5_Uptodate really should be clear at this point.
      
      So make sure R5_Discard is cleared in all cases, and clear
      R5_Uptodate when a 'discard' completes.
      Signed-off-by: NeilBrown <neilb@suse.de>
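The flag hygiene the commit enforces can be sketched as below. The flag names mirror the `R5_*` bits the commit mentions, but the bit values and the `finish_discard` helper are illustrative, not the kernel's actual stripe-handling code.

```c
#include <assert.h>

/* Hypothetical flag bits mirroring the R5_* names from the commit. */
enum {
    R5_Discard  = 1u << 0,
    R5_Uptodate = 1u << 1,
};

/* On discard completion the fix clears R5_Discard on every device in
 * the stripe - parity included - and clears R5_Uptodate as well, so a
 * stripe_head reused from the cache starts from a clean state instead
 * of carrying stale bits that would cause a hang. */
static void finish_discard(unsigned *dev_flags, int ndevs)
{
    for (int i = 0; i < ndevs; i++)
        dev_flags[i] &= ~(R5_Discard | R5_Uptodate);
}
```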
    • md/raid5: move resolving of reconstruct_state earlier in stripe_handle. · ef5b7c69
      Authored by NeilBrown
      
      The chunk of code in stripe_handle which responds to a
      *_result value in reconstruct_state is really the completion
      of some processing that happened outside of handle_stripe
      (possibly asynchronously) and so should be one of the first
      things done in handle_stripe().
      
      After the next patch it will be important that it happens before
      handle_stripe_clean_event(), as that will clear some dev->flags
      bit that this code tests.
      Signed-off-by: NeilBrown <neilb@suse.de>
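The ordering constraint the commit establishes — resolve `reconstruct_state` results before `handle_stripe_clean_event()` clears the `dev->flags` bits that resolution tests — can be modelled minimally. Everything below (the struct, the flag, the function bodies) is a hypothetical sketch of that ordering, not the real raid5 code.

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model: one device with one flag bit. */
struct dev { unsigned flags; };

#define R5_RESULT_BIT 1u /* stand-in for a dev->flags bit the resolve step tests */

/* Completion of async reconstruct work: must observe the flag. */
static bool resolve_reconstruct_state(struct dev *d)
{
    return (d->flags & R5_RESULT_BIT) != 0;
}

/* Later cleanup: clears the same flag. */
static void handle_stripe_clean_event(struct dev *d)
{
    d->flags &= ~R5_RESULT_BIT;
}

/* The order the commit establishes: resolve first, then clean, so the
 * resolution step sees the bit before cleanup wipes it. */
static bool handle_stripe(struct dev *d)
{
    bool saw_result = resolve_reconstruct_state(d);
    handle_stripe_clean_event(d);
    return saw_result;
}
```

Reversing the two calls would make `resolve_reconstruct_state` read an already-cleared flag, which is the hazard the next patch in the series depends on avoiding.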