1. 24 Oct 2011 (1 commit)
  2. 28 Sep 2011 (1 commit)
    • block: Free queue resources at blk_release_queue() · 777eb1bf
      Hannes Reinecke committed
      A kernel crash is observed when a mounted ext3/ext4 filesystem is
      physically removed. The problem is that blk_cleanup_queue() frees up
      some resources, e.g. by calling elevator_exit(), whose absence is not
      checked for in normal operation. So we should rather move these calls
      to the destructor function blk_release_queue(), as at that point all
      remaining references are gone. However, in doing so we have to ensure
      that any externally supplied queue_lock is disconnected, as the driver
      might free the lock after the call to blk_cleanup_queue().
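
      A minimal sketch of the resulting split, reconstructed from the
      description above rather than from the patch itself (the helper names
      are the kernel's, the bodies are abridged):

      void blk_cleanup_queue(struct request_queue *q)
      {
              spinlock_t *lock = q->queue_lock;

              /* ... mark the queue dead and drain it ... */

              /*
               * The driver may free an externally supplied queue_lock once
               * this returns, so point the queue back at its internal lock.
               */
              spin_lock_irq(lock);
              if (q->queue_lock != &q->__queue_lock)
                      q->queue_lock = &q->__queue_lock;
              spin_unlock_irq(lock);

              blk_put_queue(q);       /* may or may not drop the last reference */
      }

      static void blk_release_queue(struct kobject *kobj)
      {
              struct request_queue *q =
                      container_of(kobj, struct request_queue, kobj);

              /* Safe only here: all remaining references are gone. */
              if (q->elevator)
                      elevator_exit(q->elevator);

              /* ... free the remaining queue resources ... */
              kmem_cache_free(blk_requestq_cachep, q);
      }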
      Signed-off-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  3. 21 Sep 2011 (1 commit)
  4. 14 Sep 2011 (1 commit)
  5. 24 Aug 2011 (3 commits)
  6. 23 Aug 2011 (1 commit)
  7. 19 Aug 2011 (1 commit)
    • Revert "cfq: Remove special treatment for metadata rqs." · b53d1ed7
      Jens Axboe committed
      We have a kernel build regression since 3.1-rc1 of about 10%. The
      kernel source is on an ext3 filesystem. Alex Shi bisected it to this
      commit:
      commit a07405b7
      Author: Justin TerAvest <teravest@google.com>
      Date:   Sun Jul 10 22:09:19 2011 +0200
      
          cfq: Remove special treatment for metadata rqs.
      
      Apparently this is caused by the lack of metadata preemption; ext3/ext4
      do use READ_META. I didn't see a way to fix the issue, so I suggest
      reverting the patch.
      
      This reverts commit a07405b7.
      
      Reported-by: Alex Shi <alex.shi@intel.com>
      Reported-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  8. 16 Aug 2011 (1 commit)
    • block: fix flush machinery for stacking drivers with differing flush flags · 4853abaa
      Jeff Moyer committed
      Commit ae1b1539 ("block: reimplement FLUSH/FUA to support merge")
      introduced a performance regression when running any sort of fsyncing
      workload using dm-multipath and certain storage (in our case, an HP
      EVA).  The test I ran was fs_mark, and it dropped from ~800 files/sec
      on ext4 to ~100 files/sec.  It turns out that dm-multipath always
      advertised flush+fua support and passed commands on down the stack,
      where those flags used to get stripped off.  The above commit changed
      that behavior:
      
      static inline struct request *__elv_next_request(struct request_queue *q)
      {
              struct request *rq;
      
              while (1) {
      -               while (!list_empty(&q->queue_head)) {
      +               if (!list_empty(&q->queue_head)) {
                              rq = list_entry_rq(q->queue_head.next);
      -                       if (!(rq->cmd_flags & (REQ_FLUSH | REQ_FUA)) ||
      -                           (rq->cmd_flags & REQ_FLUSH_SEQ))
      -                               return rq;
      -                       rq = blk_do_flush(q, rq);
      -                       if (rq)
      -                               return rq;
      +                       return rq;
                      }
      
      Note that previously, a command would come in here, have
      REQ_FLUSH|REQ_FUA set, and then get handed off to blk_do_flush:
      
      struct request *blk_do_flush(struct request_queue *q, struct request *rq)
      {
              unsigned int fflags = q->flush_flags; /* may change, cache it */
              bool has_flush = fflags & REQ_FLUSH, has_fua = fflags & REQ_FUA;
              bool do_preflush = has_flush && (rq->cmd_flags & REQ_FLUSH);
              bool do_postflush = has_flush && !has_fua &&
                                  (rq->cmd_flags & REQ_FUA);
              unsigned skip = 0;
      ...
              if (blk_rq_sectors(rq) && !do_preflush && !do_postflush) {
                      rq->cmd_flags &= ~REQ_FLUSH;
                      if (!has_fua)
                              rq->cmd_flags &= ~REQ_FUA;
                      return rq;
              }
      
      So, the flush machinery was bypassed in such cases (q->flush_flags == 0
      && rq->cmd_flags & (REQ_FLUSH|REQ_FUA)).
      
      Now, however, we don't get into the flush machinery at all.  Instead,
      __elv_next_request just hands a request with flush and fua bits set to
      the scsi_request_fn, even if the underlying request_queue does not
      support flush or fua.
      
      The agreed upon approach is to fix the flush machinery to allow
      stacking.  While this isn't used in practice (since there is only one
      request-based dm target, and that target will now reflect the flush
      flags of the underlying device), it does future-proof the solution, and
      make it function as designed.
      
      In order to make this work, I had to add a field to the struct request,
      inside the flush structure (to store the original req->end_io).  Shaohua
      had suggested overloading the union with rb_node and completion_data,
      but the completion data is used by device mapper and can also be used by
      other drivers.  So, I didn't see a way around the additional field.
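
      A sketch of that addition; the layout and the saved_end_io name follow
      the description above, so verify them against the actual struct request:

      struct request {
              /* ... */
              struct {
                      unsigned int seq;               /* flush sequence position */
                      struct list_head list;          /* flush pending/running list */
                      rq_end_io_fn *saved_end_io;     /* original req->end_io, restored
                                                         when the flush sequence ends */
              } flush;
              /* ... */
      };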
      
      I tested this patch on an HP EVA with both ext4 and xfs, and it recovers
      the lost performance.  Comments and other testers, as always, are
      appreciated.
      
      Cheers,
      Jeff
      Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  9. 11 Aug 2011 (1 commit)
    • block: improve rq_affinity placement · bcf30e75
      Shaohua Li committed
      This patch reverts commit 35ae66e0 ("block: Make rq_affinity = 1 work
      as expected"). The purpose is to avoid an unnecessary IPI.
      Let's take an example. My test box has CPUs 0-7 on one socket. Say a
      request is added from CPU 1 and blk_complete_request() occurs on CPU 7.
      Without the reverted patch, the softirq is run on CPU 7. With it, an
      IPI is directed to CPU 0 and the softirq is run on CPU 0. In this case,
      running the softirq on CPU 0 or on CPU 7 makes no difference from a
      cache-sharing point of view, and we can avoid an IPI by doing it on
      CPU 7.
      An immediate concern is that this looks just like QUEUE_FLAG_SAME_FORCE,
      but it actually isn't. blk_complete_request() runs in the interrupt
      handler, and current I/O controllers don't support multiple interrupts
      (I checked several LSI cards and AHCI), so only one CPU can run
      blk_complete_request(). This is still quite different from
      QUEUE_FLAG_SAME_FORCE. Since only one CPU runs the softirq, the only
      difference with this patch is that the softirq does not always run on
      the first CPU of a group.
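
      A hedged sketch of the restored placement decision; same_group() and
      __local_complete() are placeholders for the real shared-cache test and
      local completion path in __blk_complete_request():

      /* ccpu: CPU that queued the request; cpu: CPU completing it. */
      if (ccpu == cpu || same_group(ccpu, cpu)) {
              /* Equal cache locality either way, so complete locally and
               * save the IPI that forcing the group's first CPU would cost. */
              __local_complete(req);
      } else {
              raise_blk_irq(ccpu, req);       /* cross-group: IPI to ccpu */
      }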
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  10. 10 Aug 2011 (1 commit)
    • allow blk_flush_policy to return REQ_FSEQ_DATA independent of *FLUSH · fa1bf42f
      Jeff Moyer committed
      blk_insert_flush has the following check:
      
      	/*
      	 * If there's data but flush is not necessary, the request can be
      	 * processed directly without going through flush machinery.  Queue
      	 * for normal execution.
      	 */
      	if ((policy & REQ_FSEQ_DATA) &&
      	    !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
      		list_add_tail(&rq->queuelist, &q->queue_head);
      		return;
      	}
      
      However, blk_flush_policy will not return with policy set to only
      REQ_FSEQ_DATA:
      
      static unsigned int blk_flush_policy(unsigned int fflags, struct request *rq)
      {
      	unsigned int policy = 0;
      
      	if (fflags & REQ_FLUSH) {
      		if (rq->cmd_flags & REQ_FLUSH)
      			policy |= REQ_FSEQ_PREFLUSH;
      		if (blk_rq_sectors(rq))
      			policy |= REQ_FSEQ_DATA;
      		if (!(fflags & REQ_FUA) && (rq->cmd_flags & REQ_FUA))
      			policy |= REQ_FSEQ_POSTFLUSH;
      	}
      	return policy;
      }
      
      Notice that REQ_FSEQ_DATA is only set if REQ_FLUSH is set.  Fix this
      mismatch by moving the setting of REQ_FSEQ_DATA outside of the REQ_FLUSH
      check.
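
      With the fix applied, the function reads as follows (reconstructed from
      the description above; REQ_FSEQ_DATA now depends only on the request
      carrying data):

      static unsigned int blk_flush_policy(unsigned int fflags, struct request *rq)
      {
      	unsigned int policy = 0;

      	/* Always set for requests with data, so blk_insert_flush() can
      	 * queue them for normal execution when no flushes are needed. */
      	if (blk_rq_sectors(rq))
      		policy |= REQ_FSEQ_DATA;

      	if (fflags & REQ_FLUSH) {
      		if (rq->cmd_flags & REQ_FLUSH)
      			policy |= REQ_FSEQ_PREFLUSH;
      		if (!(fflags & REQ_FUA) && (rq->cmd_flags & REQ_FUA))
      			policy |= REQ_FSEQ_POSTFLUSH;
      	}
      	return policy;
      }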
      
      Tejun notes:
      
        Hmmm... yes, this can become a correctness issue if (and only if)
        blk_queue_flush() is called to change q->flush_flags while requests
        are in-flight; otherwise, requests wouldn't reach the function at all.
        Also, I think it would be a generally good idea to always set
        FSEQ_DATA if the request has data.
      
      Cheers,
      Jeff
      Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  11. 05 Aug 2011 (1 commit)
    • block: Make rq_affinity = 1 work as expected · 35ae66e0
      Tao Ma committed
      Commit 5757a6d7 introduced a new rq_affinity = 2 so as to complete the
      request on the __make_request CPU. But it makes the old rq_affinity = 1
      not work any more. The root cause is that if 'cpu' and 'req->cpu' are
      in the same group and cpu != req->cpu, ccpu will be the same as
      group_cpu, so the completion will be executed on 'cpu', not 'group_cpu'.

      This patch fixes the problem by simply removing group_cpu, and the code
      is more explicit now: if ccpu == cpu, we complete on cpu; otherwise we
      raise_blk_irq to ccpu.
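
      In sketch form (paraphrasing the summary above; raise_blk_irq() is the
      real softirq IPI helper, the local completion call is illustrative):

      if (ccpu == cpu) {
              /* Queuing CPU == completing CPU: run the softirq right here. */
              __local_complete(req);                  /* illustrative name */
      } else {
              /* Strict rq_affinity = 1: always IPI the queuing CPU, even
               * when both CPUs share a cache group. */
              raise_blk_irq(ccpu, req);
      }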
      
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Roland Dreier <roland@purestorage.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Jens Axboe <jaxboe@fusionio.com>
      Signed-off-by: Tao Ma <boyu.mt@taobao.com>
      Reviewed-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  12. 04 Aug 2011 (1 commit)
    • fault-injection: add ability to export fault_attr in arbitrary directory · dd48c085
      Akinobu Mita committed
      init_fault_attr_dentries() is used to export fault_attr via debugfs,
      but it can only export it in the debugfs root directory.
      
      Per Forlin is working on mmc_fail_request which adds support to inject
      data errors after a completed host transfer in MMC subsystem.
      
      The fault_attr for mmc_fail_request should be defined per mmc host and
      exported in a per-host debugfs directory, like
      /sys/kernel/debug/mmc0/mmc_fail_request.
      
      init_fault_attr_dentries() doesn't help for mmc_fail_request, so this
      introduces fault_create_debugfs_attr(), which can create the attribute
      directory under an arbitrary parent directory, and replaces
      init_fault_attr_dentries().
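
      A hedged usage sketch along the lines the commit describes (the mmc
      names here are the example's, not a real driver's):

      #include <linux/debugfs.h>
      #include <linux/fault-inject.h>

      static DECLARE_FAULT_ATTR(mmc_fail_request_attr);

      static int mmc_setup_fault_injection(void)
      {
              struct dentry *host_dir = debugfs_create_dir("mmc0", NULL);
              struct dentry *d;

              /* Creates .../mmc0/mmc_fail_request instead of an entry in the
               * debugfs root, which init_fault_attr_dentries() could not do. */
              d = fault_create_debugfs_attr("mmc_fail_request", host_dir,
                                            &mmc_fail_request_attr);
              return IS_ERR(d) ? PTR_ERR(d) : 0;
      }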
      
      [akpm@linux-foundation.org: extraneous semicolon, per Randy]
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Tested-by: Per Forlin <per.forlin@linaro.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Randy Dunlap <rdunlap@xenotime.net>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 02 Aug 2011 (3 commits)
    • block/genhd.c: remove useless cast in diskstats_show() · f95fe9cf
      Herbert Poetzl committed
      Remove the (unsigned long long) cast in diskstats_show() and adjust the
      seq_printf() format string to 'unsigned long'.

      diskstats_show() uses part_stat_read() to get the stats, which either
      accesses the specified field in the struct disk_stats directly (non-SMP)
      or sums up the per-CPU values in a variable of the same type as the
      field, so in any case the result has the same type and range as the
      specified field, which for all disk_stats entries is unsigned long.

      Also, for values in unsigned long range the output of %lu is identical
      to that of %llu, so the actual proc entry contents do not change.
      Signed-off-by: Herbert Poetzl <herbert@13thfloor.at>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • bsg-lib: add module.h include · e2a5429f
      Jens Axboe committed
      Due to conflicts with the moduleh tree in linux-next, we
      run into an include file mess. We really need export.h
      in that tree, but if we add module.h locally then the
      issue is easier to resolve.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • cfq-iosched: Reduce linked group count upon group destruction · a5395b83
      Vivek Goyal committed
      CFQ keeps track of the number of groups which are linked on
      blkcg->blkg_list. This is useful to avoid races between the queue exit
      and cgroup exit code paths: if at request queue exit time the linked
      group count is not zero, some groups out there are yet to be deleted
      under the RCU read period, and the queue exit code should wait for an
      RCU period.

      In my previous patch I forgot to decrease the group count. So in its
      current form nr_blkcg_linked_grps is always non-zero and we always wait
      one RCU period (if BLK_CGROUP=y). The side effect is that it can
      increase boot time. I am surprised nobody has complained so far.
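
      The fix is essentially a one-line decrement on the group-destruction
      path; a sketch, assuming the counter name from the description above:

      /* When a group is destroyed, drop the linked-group count so queue
       * exit stops waiting an extra RCU period for groups already gone. */
      cfqd->nr_blkcg_linked_grps--;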
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  14. 01 Aug 2011 (2 commits)
  15. 27 Jul 2011 (1 commit)
  16. 26 Jul 2011 (1 commit)
    • block: fix warning with calling smp_processor_id() in preemptible section · 11ccf116
      Jens Axboe committed
      Commit 5757a6d7 introduced an unsafe call of smp_processor_id(); with
      preempt debugging turned on, we spew a lot of:
      
      BUG: using smp_processor_id() in preemptible [00000000] code: kjournald/514
      caller is __make_request+0x1b8/0x308
      [<c0019f44>] (unwind_backtrace+0x0/0xe8) from [<c024b4cc>] (debug_smp_processor_id+0xbc/0xf0)
      [<c024b4cc>] (debug_smp_processor_id+0xbc/0xf0) from [<c0223d14>] (__make_request+0x1b8/0x308)
      [<c0223d14>] (__make_request+0x1b8/0x308) from [<c02215ac>] (generic_make_request+0x4dc/0x558)
      [<c02215ac>] (generic_make_request+0x4dc/0x558) from [<c022173c>] (submit_bio+0x114/0x138)
      [<c022173c>] (submit_bio+0x114/0x138) from [<c011f504>] (submit_bh+0x148/0x16c)
      [<c011f504>] (submit_bh+0x148/0x16c) from [<c0121ed8>] (__sync_dirty_buffer+0x88/0xd8)
      [<c0121ed8>] (__sync_dirty_buffer+0x88/0xd8) from [<c01aff78>] (journal_commit_transaction+0x1198/0x1688)
      [<c01aff78>] (journal_commit_transaction+0x1198/0x1688) from [<c01b4034>] (kjournald+0xb4/0x224)
      [<c01b4034>] (kjournald+0xb4/0x224) from [<c0069ea0>] (kthread+0x8c/0x94)
      [<c0069ea0>] (kthread+0x8c/0x94) from [<c00137f8>] (kernel_thread_exit+0x0/0x8)
      
      Fix this by just using raw_smp_processor_id(); it's just a hint, after
      all. There's no pinning of the CPU or access to per-cpu structures
      involved.
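
      The fix amounts to a one-line substitution in __make_request(), roughly:

      /* The CPU recorded here is only a completion-placement hint, so the
       * unchecked variant is fine even if we migrate right afterwards. */
      req->cpu = raw_smp_processor_id();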
      Reported-by: Ming Lei <tom.leiming@gmail.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  17. 24 Jul 2011 (2 commits)
  18. 22 Jul 2011 (1 commit)
    • [SCSI] fix crash in scsi_dispatch_cmd() · bfe159a5
      James Bottomley committed
      USB surprise removal of sr is triggering an oops in
      scsi_dispatch_cmd().  What seems to be happening is that USB hangs on
      to a queue reference until the last close of the upper device, so the
      crash is caused by surprise removal of a mounted CD followed by an
      attempted unmount.
      
      The problem is that USB doesn't issue its final commands as part of
      the SCSI teardown path, but on last close when the block queue is long
      gone.  The long term fix is probably to make sr do the teardown in the
      same way as sd (so remove all the lower bits on ejection, but keep the
      upper disk alive until last close of user space).  However, the
      current oops can be simply fixed by not allowing any commands to be
      sent to a dead queue.
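
      A hedged sketch of such a guard at a request-submission entry point
      (e.g. blk_execute_rq_nowait()); the exact call sites the patch touches
      may differ:

      if (unlikely(blk_queue_dead(q))) {
              rq->errors = -ENXIO;            /* queue already torn down */
              if (rq->end_io)
                      rq->end_io(rq, rq->errors);
              return;                         /* never reaches the driver */
      }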
      
      Cc: stable@kernel.org
      Signed-off-by: James Bottomley <JBottomley@Parallels.com>
  19. 21 Jul 2011 (1 commit)
  20. 12 Jul 2011 (4 commits)
    • CFQ: add think time check for group · 7700fc4f
      Shaohua Li committed
      Currently, when the last queue of a group has no request, we don't
      expire the queue, in the hope that a request from the group comes soon,
      so the group doesn't miss its share. But if the think time is big, that
      assumption isn't correct and we just waste bandwidth. In such a case,
      we don't idle; a sketch of the decision follows the results below.
      
      [global]
      runtime=30
      direct=1
      
      [test1]
      cgroup=test1
      cgroup_weight=1000
      rw=randread
      ioengine=libaio
      size=500m
      runtime=30
      directory=/mnt
      filename=file1
      thinktime=9000
      
      [test2]
      cgroup=test2
      cgroup_weight=1000
      rw=randread
      ioengine=libaio
      size=500m
      runtime=30
      directory=/mnt
      filename=file2
      
      	patched		base
      test1	64k		39k
      test2	548k		540k
      total	604k		578k
      
      group1 gets much better throughput because it waits less time.

      To check whether the patch changes the behavior of queues without think
      time, I also tried giving test1 a 2ms think time or no think time. The
      test result is stable: the throughput doesn't change with or without
      the patch.
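
      A sketch of the idle decision, reconstructed from the description (the
      helper name is illustrative; struct cfq_ttime is the consolidated state
      from commit 383cd721 below):

      static inline bool io_thinktime_big(const struct cfq_ttime *ttime,
                                          unsigned long idle_slice)
      {
              /* Enough evidence that the next request will, on average,
               * arrive later than we are willing to wait: don't idle. */
              return ttime->ttime_samples && ttime->ttime_mean > idle_slice;
      }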
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • CFQ: add think time check for service tree · f5f2b6ce
      Shaohua Li committed
      Currently, when the last queue of a service tree has no request, we
      don't expire the queue, in the hope that a request from the service
      tree comes soon, so the service tree doesn't miss its share. But if the
      think time is big, that assumption isn't correct and we just waste
      bandwidth. In such a case, we don't idle; the check mirrors the group
      check above.
      
      [global]
      runtime=10
      direct=1
      
      [test1]
      rw=randread
      ioengine=libaio
      size=500m
      directory=/mnt
      filename=file1
      thinktime=9000
      
      [test2]
      rw=read
      ioengine=libaio
      size=1G
      directory=/mnt
      filename=file2
      
      	patched		base
      test1	41k/s		33k/s
      test2	15868k/s	15789k/s
      total	15902k/s	15817k/s
      
      A slight improvement.

      To check whether the patch changes the behavior of queues without think
      time, I also tried giving test1 a 2ms think time or no think time. The
      test has variation even without the patch, but the average throughput
      doesn't change with or without the patch.
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • CFQ: move think time check variables to a separate struct · 383cd721
      Shaohua Li committed
      Move the variables used for the think time check into a separate
      struct. This prepares for adding think time checks for the service tree
      and group. No functional change.
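
      A sketch of the consolidated struct, reconstructed from the description
      (verify the field names against cfq-iosched.c):

      struct cfq_ttime {
              unsigned long last_end_request; /* when the last request completed */

              unsigned long ttime_total;      /* accumulated think time */
              unsigned long ttime_samples;    /* number of samples in the mean */
              unsigned long ttime_mean;       /* decayed average think time */
      };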
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • fixlet: Remove fs_excl from struct task. · 4aede84b
      Justin TerAvest committed
      fs_excl is a poor man's priority inheritance for filesystems to hint to
      the block layer that an operation is important. It was never clearly
      specified, not widely adopted, and will not prevent starvation in many
      cases (like across cgroups).
      
      fs_excl was introduced with the time-sliced CFQ IO scheduler to
      indicate when a process held FS-exclusive resources and thus needed a
      boost.

      It doesn't cover all filesystems, and it was never fully complete.
      Let's kill it.
      Signed-off-by: Justin TerAvest <teravest@google.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  21. 11 Jul 2011 (1 commit)
  22. 08 Jul 2011 (1 commit)
    • block: avoid building too big plug list · 55c022bb
      Shaohua Li committed
      When I test a fio script with a big I/O depth, I found that the total
      throughput drops compared to a relatively small I/O depth. The reason
      is that the thread accumulates many requests in its plug list, which
      causes some delays (surely this depends on CPU speed).
      I thought we'd better have a threshold on the number of requests. When
      the threshold is reached, it means there is little request merging and
      queue lock contention isn't severe when pushing per-task requests to
      the queue, so the main advantages of block plugging don't apply. We can
      force a plug list flush in this case.
      With this, my test throughput actually increases and almost equals that
      of a small I/O depth. Another side effect is that the irqs-off time in
      blk_flush_plug_list() decreases for big I/O depths.
      BLK_MAX_REQUEST_COUNT is chosen arbitrarily; 16 was efficient at
      reducing lock contention for me. But I'm open here; 32 was OK in my
      test too.
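
      In sketch form (the counter's home is illustrative; the real patch
      counts the requests already sitting on the plug list in
      __make_request()):

      #define BLK_MAX_REQUEST_COUNT 16

      if (plug->count >= BLK_MAX_REQUEST_COUNT) {
              /* At this depth nothing is merging and lock contention is
               * low, so plugging buys nothing: push to the queue now. */
              blk_flush_plug_list(plug, false);
              plug->count = 0;
      }
      list_add_tail(&req->queuelist, &plug->list);
      plug->count++;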
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  23. 07 Jul 2011 (1 commit)
    • block: eliminate potential for infinite loop in blkdev_issue_discard · 0f799603
      Mike Snitzer committed
      Due to the recently identified overflow in read_capacity_16(), it was
      possible for max_discard_sectors to be zero while discards were still
      enabled on the associated device's queue.

      Eliminate the possibility of blkdev_issue_discard() looping forever.

      Interestingly, this issue wasn't identified until a device whose
      discard_granularity was 0 due to the read_capacity_16 overflow was
      consumed by blk_stack_limits() to construct limits for a higher-level
      DM multipath device.  The multipath device's resulting limits never had
      the discard limits stacked, because blk_stack_limits() will only do so
      if the bottom device's discard_granularity != 0.  This resulted in the
      multipath device's limits.max_discard_sectors being 0.
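
      A sketch of the guard at the top of blkdev_issue_discard(),
      reconstructed from the description:

      unsigned int max_discard_sectors =
              min(q->limits.max_discard_sectors, UINT_MAX >> 9);

      if (unlikely(!max_discard_sectors)) {
              /* Discards are nominally enabled, but each bio would cover
               * zero sectors and the split loop would never advance. */
              return -EOPNOTSUPP;
      }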
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  24. 02 Jul 2011 (1 commit)
    • compat_ioctl: fix warning caused by qemu · 390192b3
      Johannes Stezenbach committed
      On a Linux x86_64 host with 32-bit userspace, running qemu or even just
      "qemu-img create -f qcow2 some.img 1G" causes a kernel warning:
      
      ioctl32(qemu-img:5296): Unknown cmd fd(3) cmd(00005326){t:'S';sz:0} arg(7fffffff) on some.img
      ioctl32(qemu-img:5296): Unknown cmd fd(3) cmd(801c0204){t:02;sz:28} arg(fff77350) on some.img
      
      ioctl 00005326 is CDROM_DRIVE_STATUS,
      ioctl 801c0204 is FDGETPRM.
      
      The warning appears because the Linux compat-ioctl handler for these
      ioctls only applies to block devices, while qemu also uses the ioctls on
      plain files.
      Signed-off-by: Johannes Stezenbach <js@sig21.net>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  25. 01 Jul 2011 (1 commit)
    • block: flush MEDIA_CHANGE from drivers on close(2) · 85ef06d1
      Tejun Heo committed
      Currently, only open(2) is defined as the 'clearing' point.  It has
      two roles - first, it's an acknowledgement from userland indicating
      that the event has been received and kernel can clear pending states
      and proceed to generate more events.  Secondly, it's passed on to
      device drivers as a hint indicating that a synchronization point has
      been reached and it might want to take a deeper look at the device.
      
      The latter currently is only used by sr which uses two different
      mechanisms - GET_EVENT_MEDIA_STATUS_NOTIFICATION and TEST_UNIT_READY
      to discover events, where the former is lighter weight and safe to be
      used repeatedly but may not provide full coverage.  Among other
      things, GET_EVENT can't detect media removal while TUR can.
      
      This patch makes close(2) - blkdev_put() - indicate a clearing hint for
      MEDIA_CHANGE to drivers.  disk_check_events() is renamed to
      disk_flush_events() and updated to take @mask for the events to flush,
      which is OR'd into ev->clearing and will be passed to the driver on the
      next ->check_events() invocation.
      
      This change makes sr generate MEDIA_CHANGE when media is ejected from
      userland - e.g. with eject(1).
      
      Note: Given the current usage, it seems the @clearing hint is
      needlessly complex.  disk_clear_events() can simply clear all events,
      and the hint can be a boolean @flush.
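
      In sketch form, the close-side call this adds (the call and event name
      follow the description above; the surrounding function is elided):

      /* In blkdev_put(), on the last close of the device: hint that
       * MEDIA_CHANGE may be cleared, giving drivers like sr a chance to
       * take a deeper look (e.g. TEST_UNIT_READY) on the next check. */
      disk_flush_events(bdev->bd_disk, DISK_EVENT_MEDIA_CHANGE);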
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  26. 27 Jun 2011 (2 commits)
  27. 20 Jun 2011 (3 commits)
  28. 14 Jun 2011 (1 commit)