1. 08 Feb 2006, 1 commit
  2. 06 Feb 2006, 4 commits
  3. 04 Feb 2006, 1 commit
  4. 01 Feb 2006, 3 commits
    • [PATCH] USB: ub 05 Bulk reset · 2c2e4a2e
      Pete Zaitcev committed
      For crying out loud, they have devices which do not like port resets.
      So, do what usb-storage does and try both bulk and port resets.
      We start with a port reset (which usb-storage does at the end of transport),
      then do a Bulk reset, then a port reset again. This seems to work for me.
      
      The code is getting dirtier and dirtier here, but I swear that I'll
      do something about it (see those two new XXX). Honest.
      Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
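      The Bulk reset referred to here is the class-specific Bulk-Only Mass Storage
      Reset: a control request with bRequest 0xff addressed to the interface, with
      no data stage. A minimal sketch follows, with a hypothetical helper name and
      the interface number passed in by the caller; it is not ub's actual code,
      which also wraps this in its state machine and follows it with another
      port reset:

      #include <linux/usb.h>

      static int ub_bulk_reset_sketch(struct usb_device *dev, int ifnum)
      {
              /* Bulk-Only Mass Storage Reset: class request 0xff, no data stage */
              return usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
                              0xff, USB_TYPE_CLASS | USB_RECIP_INTERFACE,
                              0, ifnum, NULL, 0, 5000 /* ms */);
      }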
    • [PATCH] USB: ub 04 Loss of timer and a hang · b31f821c
      Pete Zaitcev committed
      If SCSI commands are submitted while other commands are still being
      processed, the dispatch loop turns and we stop the work_timer. Then, if the
      URB fails to complete, ub hangs until the device is unplugged.
      
      This does not happen often, because we only allow one SCSI command per
      block device, but it does happen (on multi-LUN devices, for example).
      
      The fix is to stop the timer only when we are actually going to change the state.
      
      The nicest code would stop the timer in the URB callback, but that is
      impossible, because the callback can be called from inside a timer, through
      the urb_unlink. Then we get a BUG in timer.c:cascade(). So, we do it a
      little dirtier.
      Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
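      In other words, del_timer() moves off the common path of the dispatch loop
      and onto the branch that actually changes state. A sketch of the shape of
      the change, with the state names and fields invented for illustration (ub's
      real states and structures differ):

      #include <linux/timer.h>

      enum ub_sketch_state { UBS_IDLE, UBS_CMD_ACTIVE };

      struct ub_dev_sketch {                    /* invented for illustration */
              enum ub_sketch_state state;
              struct timer_list work_timer;
      };

      static void ub_dispatch_sketch(struct ub_dev_sketch *sc)
      {
              if (sc->state != UBS_IDLE)
                      return;                   /* keep work_timer armed for the URB in flight */

              del_timer(&sc->work_timer);       /* stopped only when we change state */
              sc->state = UBS_CMD_ACTIVE;
              /* ... submit the next URB and re-arm work_timer ... */
      }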
    • [PATCH] USB: ub 03 Oops with CFQ · 65b4fe55
      Pete Zaitcev committed
      The blk_cleanup_queue does not necessarily destroy the queue, so when we
      destroy the corresponding ub_dev, the queue may be left with a dangling
      spinlock pointer.
      
      This patch moves the spinlocks out of ub_dev into static memory. The locking
      scheme is not changed; these spinlocks are still separate from the ub_lock.
      Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
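      The shape of the fix, as described: the queue locks live in static storage
      with module lifetime and are merely handed out to devices, so a request
      queue that outlives its ub_dev never follows a freed pointer. The names
      below approximate the patch rather than quote it:

      #include <linux/spinlock.h>

      #define UB_QLOCK_NUM 5

      static spinlock_t ub_qlockv[UB_QLOCK_NUM];   /* spin_lock_init'ed at module init */
      static int ub_qlock_next;

      static spinlock_t *ub_next_qlock(void)
      {
              /* handed to blk_init_queue(); freeing the ub_dev leaves it valid */
              spinlock_t *p = &ub_qlockv[ub_qlock_next];

              ub_qlock_next = (ub_qlock_next + 1) % UB_QLOCK_NUM;
              return p;
      }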
  5. 17 Jan 2006, 1 commit
  6. 15 Jan 2006, 3 commits
  7. 13 Jan 2006, 4 commits
  8. 12 Jan 2006, 1 commit
  9. 10 Jan 2006, 4 commits
  10. 09 Jan 2006, 8 commits
  11. 07 Jan 2006, 2 commits
    • [PATCH] parport: Kconfig dependency fixes · 6a19b41b
      Marko Kohtala committed
      Make drivers that directly use PC parport hardware depend on PARPORT_PC
      rather than on the hardware-independent PARPORT.
      Signed-off-by: Marko Kohtala <marko.kohtala@gmail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] nbd: fix TX/RX race condition · 4b2f0260
      Herbert Xu committed
      Janos Haar of First NetCenter Bt. reported numerous crashes involving the
      NBD driver. With his help, this was tracked down to bogus bio vectors,
      which in turn were the result of a race condition between the
      receive and transmit routines in the NBD driver.
      
      The bug manifests itself like this:
      
      CPU0				CPU1
      do_nbd_request
      	add req to queuelist
      	nbd_send_request
      		send req head
      		for each bio
      			kmap
      			send
      				nbd_read_stat
      					nbd_find_request
      					nbd_end_request
      			kunmap
      
      When CPU1 finishes nbd_end_request, the request and all its associated
      bios are freed. So when CPU0 calls kunmap, whose argument is derived from
      the last bio, it may crash.
      
      Under normal circumstances, the race occurs only on the last bio. However,
      if an error is encountered on the remote NBD server (such as an incorrect
      magic number in the request), or if there is a bug in the server, it is
      possible for the nbd_end_request to occur any time after the request's
      addition to the queuelist.
      
      The following patch fixes this problem by making sure that requests are not
      added to the queuelist until after their transmission has completed.
      
      In order for the receiving side to be ready for responses involving
      requests still being transmitted, the patch introduces the concept of the
      active request.
      
      When a response matches the current active request, its processing is
      delayed until after the transmission has come to a stop.
      
      This has been tested by Janos and it has been successful in curing this
      race condition.
      
      From: Herbert Xu <herbert@gondor.apana.org.au>
      
        Here is an updated patch which removes the active_req wait in
        nbd_clear_queue and the associated memory barrier.
      
        I've also clarified this in the comment.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: <djani22@dynamicweb.hu>
      Cc: Paul Clements <Paul.Clements@SteelEye.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
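      A condensed sketch of the resulting ordering: the request is published as
      the active request while it is on the wire, moved onto the queuelist only
      once transmission is done, and a reply that matches the still-active
      request waits for that to happen. Field and helper names here follow the
      description above, not necessarily the final nbd.c:

      #include <linux/blkdev.h>
      #include <linux/list.h>
      #include <linux/spinlock.h>
      #include <linux/wait.h>

      struct nbd_dev_sketch {                   /* only the fields the fix relies on */
              spinlock_t queue_lock;
              struct list_head queue_head;      /* requests already fully transmitted */
              struct request *active_req;       /* request currently being sent, or NULL */
              wait_queue_head_t active_wq;
      };

      static void nbd_handle_req_sketch(struct nbd_dev_sketch *lo, struct request *req)
      {
              lo->active_req = req;             /* visible to the receive side */
              /* ... send the request header, then kmap and send each bio ... */

              spin_lock(&lo->queue_lock);       /* only now may a reply find it */
              list_add_tail(&req->queuelist, &lo->queue_head);
              spin_unlock(&lo->queue_lock);

              lo->active_req = NULL;
              wake_up_all(&lo->active_wq);
      }

      static void nbd_wait_if_active_sketch(struct nbd_dev_sketch *lo, struct request *xreq)
      {
              /* a reply for the request still on the wire is held back here */
              wait_event(lo->active_wq, lo->active_req != xreq);
      }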
  12. 06 Jan 2006, 1 commit
    • [BLOCK] add @uptodate to end_that_request_last() and @error to rq_end_io_fn() · 8ffdc655
      Tejun Heo committed
      Add an @uptodate argument to end_that_request_last() and an @error
      argument to rq_end_io_fn(). There is no generic way to pass an error
      code to the request completion function, which makes generic error
      handling of non-fs requests difficult (rq->errors is driver-specific
      and each driver uses it differently).

      For fs requests this doesn't really matter, so passing the same
      uptodate value used in the last call to end_that_request_first()
      suffices. IMHO, this can also help the generic command-carrying
      request Jens is working on.
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Signed-off-by: Jens Axboe <axboe@suse.de>
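      A sketch of what the call sites look like after this change; the driver
      functions around them are illustrative, while the two block-layer
      signatures (@uptodate on end_that_request_last(), @error on rq_end_io_fn)
      are the ones this patch introduces:

      #include <linux/blkdev.h>

      /* fs-request path: reuse the uptodate value from end_that_request_first() */
      static void my_driver_complete(struct request *req, int error)
      {
              int uptodate = (error == 0);

              if (end_that_request_first(req, uptodate, req->hard_nr_sectors))
                      return;                           /* more segments still pending */

              end_that_request_last(req, uptodate);     /* new second argument */
      }

      /* non-fs requests: the completion callback now receives the error code */
      static void my_rq_end_io(struct request *req, int error)  /* matches rq_end_io_fn */
      {
              /* error is 0 on success; no need to decode driver-specific rq->errors */
      }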
  13. 05 Jan 2006, 5 commits
  14. 04 Jan 2006, 1 commit
    • [PATCH] add AOP_TRUNCATED_PAGE, prepend AOP_ to WRITEPAGE_ACTIVATE · 994fc28c
      Zach Brown committed
      readpage(), prepare_write(), and commit_write() callers are updated to
      understand the special return code AOP_TRUNCATED_PAGE in the style of
      writepage() and WRITEPAGE_ACTIVATE.  AOP_TRUNCATED_PAGE tells the caller that
      the callee has unlocked the page and that the operation should be tried again
      with a new page.  OCFS2 uses this to detect and work around a lock inversion in
      its aop methods.  There should be no change in behaviour for methods that don't
      return AOP_TRUNCATED_PAGE.
      
      WRITEPAGE_ACTIVATE is also prepended with AOP_ for consistency and they are
      made enums so that kerneldoc can be used to document their semantics.
      Signed-off-by: Zach Brown <zach.brown@oracle.com>
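      On the caller side this becomes a retry loop of roughly the following
      shape; the surrounding helper is invented, but the AOP_TRUNCATED_PAGE
      branch is the pattern the patch adds to readpage() and prepare_write()
      callers:

      #include <linux/fs.h>
      #include <linux/pagemap.h>

      static int read_one_page_sketch(struct file *file,
                                      struct address_space *mapping, pgoff_t index)
      {
              struct page *page;
              int error;

      retry:
              page = grab_cache_page(mapping, index);   /* returns the page locked */
              if (!page)
                      return -ENOMEM;

              error = mapping->a_ops->readpage(file, page);
              if (error == AOP_TRUNCATED_PAGE) {
                      /* callee already unlocked the page: drop it and grab a fresh one */
                      page_cache_release(page);
                      goto retry;
              }
              if (error) {
                      page_cache_release(page);
                      return error;
              }

              wait_on_page_locked(page);                /* readpage unlocks when I/O completes */
              error = PageUptodate(page) ? 0 : -EIO;
              page_cache_release(page);
              return error;
      }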
  15. 13 Dec 2005, 1 commit