1. 18 December 2012 (1 commit)
  2. 15 December 2012 (3 commits)
  3. 10 December 2012 (1 commit)
  4. 06 December 2012 (7 commits)
    • partitions: enable EFI/GPT support by default · 5f6f38db
      Committed by Diego Calleja
      The Kconfig currently enables MSDOS partitions by default because they
      are assumed to be essential, but it's necessary to enable "advanced
      partition selection" in order to get GPT support. IMO GPT partitions
      are becoming common enough to deserve the same treatment MSDOS
      partitions get.
      
      (Side note: I got bit by a disk that had both MSDOS and GPT partition
      tables, but for some reason the MSDOS table was different from the
      GPT one. I was stupid enough to disable "advanced partition
      selection" in my .config, which disabled GPT partitioning and made
      my btrfs pool unbootable because it couldn't find the partitions.)
      Signed-off-by: Diego Calleja <diegocg@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      5f6f38db
    • bsg: Remove unused function bsg_goose_queue() · 80729beb
      Committed by Bart Van Assche
      The function bsg_goose_queue() does not have any in-tree callers,
      so let's remove it.
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      80729beb
    • block: Make blk_cleanup_queue() wait until request_fn finished · 24faf6f6
      Committed by Bart Van Assche
      Some request_fn implementations, e.g. scsi_request_fn(), unlock
      the queue lock internally. This may result in multiple threads
      executing request_fn for the same queue simultaneously. Keep
      track of the number of active request_fn calls and make sure that
      blk_cleanup_queue() waits until all active request_fn invocations
      have finished. This is necessary because a block driver may start
      cleaning up the resources needed by its request_fn as soon as
      blk_cleanup_queue() has finished. A sketch of the counting pattern
      follows this entry.
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Reported-by: Chanho Min <chanho.min@lge.com>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      24faf6f6
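      A minimal C sketch of the counting pattern described above; the field
      and helper names are assumptions for illustration, not necessarily the
      exact ones used by the patch:

          /* Called with q->queue_lock held. */
          static void run_request_fn(struct request_queue *q)
          {
                  q->request_fn_active++;         /* assumed counter field */
                  q->request_fn(q);               /* may drop and re-acquire the lock */
                  q->request_fn_active--;
          }

          /* blk_cleanup_queue()/blk_drain_queue() keep draining until this
           * returns false, with q->queue_lock held. */
          static bool request_fn_still_running(struct request_queue *q)
          {
                  return q->request_fn_active != 0;
          }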
    • block: Avoid scheduling delayed work on a dead queue · 70460571
      Committed by Bart Van Assche
      Running a queue must continue after it has been marked dying until
      it has been marked dead. So the function blk_run_queue_async() must
      not schedule delayed work after blk_cleanup_queue() has marked a queue
      dead. Hence add a test for that queue state in blk_run_queue_async()
      and make sure that queue_unplugged() invokes that function with the
      queue lock held. This prevents the queue state from changing after
      it has been tested and before mod_delayed_work() is invoked. Drop
      the queue dying test in queue_unplugged() since it is now
      superfluous: __blk_run_queue() already tests whether or not the
      queue is dead. A sketch of the resulting check follows this entry.
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      70460571
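      A short C sketch of the check described above (the idea, not
      necessarily the exact resulting code):

          /* Must be called with q->queue_lock held, so the state cannot
           * change between the test and mod_delayed_work(). */
          void blk_run_queue_async(struct request_queue *q)
          {
                  if (likely(!blk_queue_stopped(q) && !blk_queue_dead(q)))
                          mod_delayed_work(kblockd_workqueue, &q->delay_work, 0);
          }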
    • block: Avoid that request_fn is invoked on a dead queue · c246e80d
      Committed by Bart Van Assche
      A block driver may start cleaning up the resources needed by its
      request_fn as soon as blk_cleanup_queue() has finished, so request_fn
      must not be invoked after draining has finished. This is important
      when blk_run_queue() is invoked without any requests in progress.
      As an example, if blk_drain_queue() and scsi_run_queue() run in
      parallel, blk_drain_queue() may have finished all requests after
      scsi_run_queue() has taken a SCSI device off the starved list but
      before that last function has had a chance to run the queue. A
      sketch of the dead-queue check follows this entry.
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Chanho Min <chanho.min@lge.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      c246e80d
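      A C sketch of the dead-queue check (the helper name below is an
      assumption):

          /* Must be called with q->queue_lock held. */
          void __blk_run_queue(struct request_queue *q)
          {
                  if (unlikely(blk_queue_dead(q)))
                          return;

                  __blk_run_queue_uncond(q);      /* assumed helper that invokes q->request_fn */
          }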
    • block: Let blk_drain_queue() caller obtain the queue lock · 807592a4
      Committed by Bart Van Assche
      Let the caller of blk_drain_queue() obtain the queue lock to improve
      readability of the patch called "Avoid that request_fn is invoked on
      a dead queue".
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Chanho Min <chanho.min@lge.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      807592a4
    • block: Rename queue dead flag · 3f3299d5
      Committed by Bart Van Assche
      QUEUE_FLAG_DEAD is used to indicate that queuing new requests must
      stop. After this flag has been set queue draining starts. However,
      during the queue draining phase it is still safe to invoke the
      queue's request_fn, so QUEUE_FLAG_DYING is a better name for this
      flag.
      
      This patch has been generated by running the following command
      over the kernel source tree:
      
      git grep -lEw 'blk_queue_dead|QUEUE_FLAG_DEAD' |
          xargs sed -i.tmp -e 's/blk_queue_dead/blk_queue_dying/g'      \
              -e 's/QUEUE_FLAG_DEAD/QUEUE_FLAG_DYING/g';                \
      sed -i.tmp -e "s/QUEUE_FLAG_DYING$(printf \\t)*5/QUEUE_FLAG_DYING$(printf \\t)5/g" \
          include/linux/blkdev.h;                                       \
      sed -i.tmp -e 's/ DEAD/ DYING/g' -e 's/dead queue/a dying queue/' \
          -e 's/Dead queue/A dying queue/' block/blk-core.c
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Chanho Min <chanho.min@lge.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      3f3299d5
  5. 23 November 2012 (3 commits)
    • block: Don't access request after it might be freed · 893d290f
      Committed by Roland Dreier
      After we've done __elv_add_request() and __blk_run_queue() in
      blk_execute_rq_nowait(), the request might finish and be freed
      immediately.  Therefore checking if the type is REQ_TYPE_PM_RESUME
      isn't safe afterwards, because if it isn't, rq might be gone.
      Instead, check beforehand and stash the result in a temporary.
      
      This fixes crashes in blk_execute_rq_nowait() I get occasionally when
      running with lots of memory debugging options enabled -- I think this
      race is usually harmless because the window for rq to be reallocated
      is so small.
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      Cc: stable@kernel.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      893d290f
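      A C sketch of the fix: read the request type into a local variable
      before the request can complete and be freed (names are illustrative,
      not the exact patch):

          void execute_rq_nowait_sketch(struct request_queue *q,
                                        struct request *rq, int where)
          {
                  /* rq may be freed as soon as it has been queued and run,
                   * so decide up front whether the extra run is needed. */
                  bool is_pm_resume = rq->cmd_type == REQ_TYPE_PM_RESUME;

                  spin_lock_irq(q->queue_lock);
                  __elv_add_request(q, rq, where);
                  __blk_run_queue(q);
                  if (is_pm_resume)       /* do not dereference rq here */
                          q->request_fn(q);
                  spin_unlock_irq(q->queue_lock);
          }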
    • block: partition: msdos: provide UUIDs for partitions · d33b98fc
      Committed by Stephen Warren
      The MSDOS/MBR partition table includes a 32-bit unique ID, often referred
      to as the NT disk signature.  When combined with a partition number within
      the table, this can form a unique ID similar in concept to EFI/GPT's
      partition UUID.  Constructing and recording this value in struct
      partition_meta_info allows MSDOS partitions to be referred to on the
      kernel command-line using the following syntax:
      
      root=PARTUUID=0002dd75-01
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Will Drewry <wad@chromium.org>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      d33b98fc
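      A C sketch of how such an ID can be composed from the 32-bit disk
      signature and the partition number (the helper and its destination
      are assumptions for illustration):

          /* Produces e.g. "0002dd75-01" for disk signature 0x0002dd75,
           * partition 1; the result is stored in partition_meta_info. */
          static void msdos_part_uuid(char *buf, size_t len,
                                      u32 disksig, int partno)
          {
                  snprintf(buf, len, "%08x-%02x", disksig, partno);
          }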
    • block: store partition_meta_info.uuid as a string · 1ad7e899
      Committed by Stephen Warren
      This will allow other types of UUID to be stored here, aside from true
      UUIDs.  This also simplifies code that uses this field, since it's
      usually constructed from, used as, or compared to other strings.
      
      Note: A simplistic approach here would be to set uuid_str[36]=0 whenever a
      /PARTNROFF option was found to be present.  However, this modifies the
      input string, and causes subsequent calls to devt_from_partuuid() not to
      see the /PARTNROFF option, which causes different results.  In order to
      avoid misleading future maintainers, this parameter is marked const.
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Will Drewry <wad@chromium.org>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      1ad7e899
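      A sketch of what the structure looks like once the field is a plain
      string (the sizes and the second field are assumptions):

          struct partition_meta_info {
                  /* NUL-terminated; big enough for a 36-character UUID
                   * or an "nt-signature-partno" style ID */
                  char uuid[37];
                  char volname[64];       /* volume label, if any */
          };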
  6. 20 November 2012 (1 commit)
  7. 10 November 2012 (1 commit)
  8. 09 November 2012 (1 commit)
    • block: recursive merge requests · bee0393c
      Committed by Shaohua Li
      In a workload, thread 1 accesses a, a+2, ..., and thread 2 accesses
      a+1, a+3, .... When the requests are flushed to the queue, a and a+1
      are merged into (a, a+1), and a+2 and a+3 into (a+2, a+3), but
      (a, a+1) and (a+2, a+3) are not merged.
      
      If we do a recursive merge for such interleaved access, some workloads
      see a throughput improvement. A recent workload I'm checking is swap;
      this change boosts throughput by around 5% ~ 10%. (A sketch of the
      recursive merge loop follows this entry.)
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      bee0393c
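      A C sketch of the recursive back-merge idea (the helper names follow
      the existing elevator code, but treat the exact calls as assumptions):

          /* After rq absorbs a neighbour, the grown request may itself be
           * mergeable with yet another queued request, so keep trying. */
          static void recursive_insert_merge(struct request_queue *q,
                                             struct request *rq)
          {
                  struct request *__rq;

                  while ((__rq = elv_rqhash_find(q, blk_rq_pos(rq))) != NULL) {
                          if (!blk_attempt_req_merge(q, __rq, rq))
                                  break;
                          rq = __rq;      /* retry with the merged request */
                  }
          }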
  9. 06 November 2012 (2 commits)
  10. 26 October 2012 (1 commit)
    • block: Add blk_rq_pos(rq) to sort rq when flushing · 975927b9
      Committed by Jianpeng Ma
      My workload is a RAID5 array with 16 disks, written through our
      filesystem in direct-I/O mode.
      
      blktrace shows the following messages:
      8,16   0     6647     2.453665504  2579  M   W 7493152 + 8 [md0_raid5]
      8,16   0     6648     2.453672411  2579  Q   W 7493160 + 8 [md0_raid5]
      8,16   0     6649     2.453672606  2579  M   W 7493160 + 8 [md0_raid5]
      8,16   0     6650     2.453679255  2579  Q   W 7493168 + 8 [md0_raid5]
      8,16   0     6651     2.453679441  2579  M   W 7493168 + 8 [md0_raid5]
      8,16   0     6652     2.453685948  2579  Q   W 7493176 + 8 [md0_raid5]
      8,16   0     6653     2.453686149  2579  M   W 7493176 + 8 [md0_raid5]
      8,16   0     6654     2.453693074  2579  Q   W 7493184 + 8 [md0_raid5]
      8,16   0     6655     2.453693254  2579  M   W 7493184 + 8 [md0_raid5]
      8,16   0     6656     2.453704290  2579  Q   W 7493192 + 8 [md0_raid5]
      8,16   0     6657     2.453704482  2579  M   W 7493192 + 8 [md0_raid5]
      8,16   0     6658     2.453715016  2579  Q   W 7493200 + 8 [md0_raid5]
      8,16   0     6659     2.453715247  2579  M   W 7493200 + 8 [md0_raid5]
      8,16   0     6660     2.453721730  2579  Q   W 7493208 + 8 [md0_raid5]
      8,16   0     6661     2.453721974  2579  M   W 7493208 + 8 [md0_raid5]
      8,16   0     6662     2.453728202  2579  Q   W 7493216 + 8 [md0_raid5]
      8,16   0     6663     2.453728436  2579  M   W 7493216 + 8 [md0_raid5]
      8,16   0     6664     2.453734782  2579  Q   W 7493224 + 8 [md0_raid5]
      8,16   0     6665     2.453735019  2579  M   W 7493224 + 8 [md0_raid5]
      8,16   0     6666     2.453741401  2579  Q   W 7493232 + 8 [md0_raid5]
      8,16   0     6667     2.453741632  2579  M   W 7493232 + 8 [md0_raid5]
      8,16   0     6668     2.453748148  2579  Q   W 7493240 + 8 [md0_raid5]
      8,16   0     6669     2.453748386  2579  M   W 7493240 + 8 [md0_raid5]
      8,16   0     6670     2.453851843  2579  I   W 7493144 + 104 [md0_raid5]
      8,16   0        0     2.453853661     0  m   N cfq2579 insert_request
      8,16   0     6671     2.453854064  2579  I   W 7493120 + 24 [md0_raid5]
      8,16   0        0     2.453854439     0  m   N cfq2579 insert_request
      8,16   0     6672     2.453854793  2579  U   N [md0_raid5] 2
      8,16   0        0     2.453855513     0  m   N cfq2579 Not idling.st->count:1
      8,16   0        0     2.453855927     0  m   N cfq2579 dispatch_insert
      8,16   0        0     2.453861771     0  m   N cfq2579 dispatched a request
      8,16   0        0     2.453862248     0  m   N cfq2579 activate rq,drv=1
      8,16   0     6673     2.453862332  2579  D   W 7493120 + 24 [md0_raid5]
      8,16   0        0     2.453865957     0  m   N cfq2579 Not idling.st->count:1
      8,16   0        0     2.453866269     0  m   N cfq2579 dispatch_insert
      8,16   0        0     2.453866707     0  m   N cfq2579 dispatched a request
      8,16   0        0     2.453867061     0  m   N cfq2579 activate rq,drv=2
      8,16   0     6674     2.453867145  2579  D   W 7493144 + 104 [md0_raid5]
      8,16   0     6675     2.454147608     0  C   W 7493120 + 24 [0]
      8,16   0        0     2.454149357     0  m   N cfq2579 complete rqnoidle 0
      8,16   0     6676     2.454791505     0  C   W 7493144 + 104 [0]
      8,16   0        0     2.454794803     0  m   N cfq2579 complete rqnoidle 0
      8,16   0        0     2.454795160     0  m   N cfq schedule dispatch
      
      From the messages above we can see that rq[W 7493144 + 104] and
      rq[W 7493120 + 24] do not merge, because the bio order is:
        8,16   0     6638     2.453619407  2579  Q   W 7493144 + 8 [md0_raid5]
        8,16   0     6639     2.453620460  2579  G   W 7493144 + 8 [md0_raid5]
        8,16   0     6640     2.453639311  2579  Q   W 7493120 + 8 [md0_raid5]
        8,16   0     6641     2.453639842  2579  G   W 7493120 + 8 [md0_raid5]
      bio(7493144) comes first and bio(7493120) later, so the subsequent
      bios are divided into two parts. When the plug list is flushed,
      elv_attempt_insert_merge() only supports back merges, not front
      merges, so rq[7493120 + 24] cannot merge with rq[7493144 + 104].
      
      In my tests this situation accounts for about 25% of cases on our
      system. With this patch it no longer occurs (the sorting comparator
      is sketched after this entry).
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Cc: Shaohua Li <shli@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      975927b9
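      A C sketch of a plug-list sort comparator based on blk_rq_pos(), as
      the title describes (the idea, not necessarily the exact patch):

          /* Order by queue first, then by start sector, so requests that
           * are physically adjacent end up next to each other and merge. */
          static int plug_rq_cmp(void *priv, struct list_head *a,
                                 struct list_head *b)
          {
                  struct request *rqa = container_of(a, struct request, queuelist);
                  struct request *rqb = container_of(b, struct request, queuelist);

                  return !(rqa->q < rqb->q ||
                          (rqa->q == rqb->q && blk_rq_pos(rqa) < blk_rq_pos(rqb)));
          }

          /* used as: list_sort(NULL, &plug->list, plug_rq_cmp); before the
           * plug list is flushed to the queues */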
  11. 24 October 2012 (1 commit)
  12. 23 October 2012 (2 commits)
    • blkcg: stop iteration early if root_rl is the only request list · 65c77fd9
      Committed by Jun'ichi Nomura
      __blk_queue_next_rl() finds the next request list based on blkg_list
      while skipping root_blkg in the list.
      OTOH, root_rl is special, as it may exist even without root_blkg.
      
      Though the latter part of the function handles such a case correctly,
      exiting early is good for the readability of the code.
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Cc: Tejun Heo <tj@kernel.org>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      65c77fd9
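      A C sketch of the early exit (the surrounding iterator code is
      abbreviated):

          /* Inside __blk_queue_next_rl(): root_rl may exist even without a
           * root_blkg, so if we are on root_rl and no blkgs are linked,
           * there is nothing left to visit. */
          if (rl == &q->root_rl) {
                  ent = &q->blkg_list;
                  if (list_empty(ent))
                          return NULL;
          }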
    • blkcg: Fix use-after-free of q->root_blkg and q->root_rl.blkg · 65635cbc
      Committed by Jun'ichi Nomura
      blk_put_rl() does not call blkg_put() for q->root_rl because we
      don't take request list reference on q->root_blkg.
      However, if root_blkg is once attached then detached (freed),
      blk_put_rl() is confused by the bogus pointer in q->root_blkg.
      
      For example, with !CONFIG_BLK_DEV_THROTTLING &&
      CONFIG_CFQ_GROUP_IOSCHED,
      switching IO scheduler from cfq to deadline will cause system stall
      after the following warning with 3.6:
      
      > WARNING: at /work/build/linux/block/blk-cgroup.h:250
      > blk_put_rl+0x4d/0x95()
      > Modules linked in: bridge stp llc sunrpc acpi_cpufreq freq_table mperf
      > ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4
      > Pid: 0, comm: swapper/0 Not tainted 3.6.0 #1
      > Call Trace:
      >  <IRQ>  [<ffffffff810453bd>] warn_slowpath_common+0x85/0x9d
      >  [<ffffffff810453ef>] warn_slowpath_null+0x1a/0x1c
      >  [<ffffffff811d5f8d>] blk_put_rl+0x4d/0x95
      >  [<ffffffff811d614a>] __blk_put_request+0xc3/0xcb
      >  [<ffffffff811d71a3>] blk_finish_request+0x232/0x23f
      >  [<ffffffff811d76c3>] ? blk_end_bidi_request+0x34/0x5d
      >  [<ffffffff811d76d1>] blk_end_bidi_request+0x42/0x5d
      >  [<ffffffff811d7728>] blk_end_request+0x10/0x12
      >  [<ffffffff812cdf16>] scsi_io_completion+0x207/0x4d5
      >  [<ffffffff812c6fcf>] scsi_finish_command+0xfa/0x103
      >  [<ffffffff812ce2f8>] scsi_softirq_done+0xff/0x108
      >  [<ffffffff811dcea5>] blk_done_softirq+0x8d/0xa1
      >  [<ffffffff810915d5>] ?
      >  generic_smp_call_function_single_interrupt+0x9f/0xd7
      >  [<ffffffff8104cf5b>] __do_softirq+0x102/0x213
      >  [<ffffffff8108a5ec>] ? lock_release_holdtime+0xb6/0xbb
      >  [<ffffffff8104d2b4>] ? raise_softirq_irqoff+0x9/0x3d
      >  [<ffffffff81424dfc>] call_softirq+0x1c/0x30
      >  [<ffffffff81011beb>] do_softirq+0x4b/0xa3
      >  [<ffffffff8104cdb0>] irq_exit+0x53/0xd5
      >  [<ffffffff8102d865>] smp_call_function_single_interrupt+0x34/0x36
      >  [<ffffffff8142486f>] call_function_single_interrupt+0x6f/0x80
      >  <EOI>  [<ffffffff8101800b>] ? mwait_idle+0x94/0xcd
      >  [<ffffffff81018002>] ? mwait_idle+0x8b/0xcd
      >  [<ffffffff81017811>] cpu_idle+0xbb/0x114
      >  [<ffffffff81401fbd>] rest_init+0xc1/0xc8
      >  [<ffffffff81401efc>] ? csum_partial_copy_generic+0x16c/0x16c
      >  [<ffffffff81cdbd3d>] start_kernel+0x3d4/0x3e1
      >  [<ffffffff81cdb79e>] ? kernel_init+0x1f7/0x1f7
      >  [<ffffffff81cdb2dd>] x86_64_start_reservations+0xb8/0xbd
      >  [<ffffffff81cdb3e3>] x86_64_start_kernel+0x101/0x110
      
      This patch clears q->root_blkg and q->root_rl.blkg when root blkg
      is destroyed.
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: stable@kernel.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      65635cbc
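      A C sketch of the fix, applied where the root blkg is torn down (the
      exact placement is an assumption):

          /* When the root blkg is destroyed, drop the cached shortcuts so
           * blk_put_rl() never sees a stale q->root_blkg pointer. */
          if (blkg == q->root_blkg) {
                  q->root_blkg = NULL;
                  q->root_rl.blkg = NULL;
          }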
  13. 26 September 2012 (1 commit)
    • s390/partitions: make partition detection independent from DASD ioctls · 46e88947
      Committed by Stefan Weinhuber
      In some usage scenarios it is desirable to work with disk images or
      virtualized DASD devices. One problem that prevents such applications
      is the partition detection in ibm.c. Currently it works only for
      devices that support the BIODASDINFO2 ioctl; in other words, it only
      works for devices that belong to the DASD device driver.
      
      The information gained from the BIODASDINFO2 ioctl is absolutely
      necessary only for a small set of legacy cases. All current VOL1,
      LNX1 and CMS1 disk labels can be interpreted correctly without this
      information, as long as the generic HDIO_GETGEO ioctl works and
      provides a correct disk geometry (a small sketch of this detection
      order follows this entry).
      
      This patch makes the ibm.c partition detection as independent as
      possible from the BIODASDINFO2 ioctl. Only the following two cases are
      still restricted to real DASDs:
      - An FBA DASD, or LDL formatted ECKD DASD without any disk label.
      - An old style LNX1 label (without large volume support) on a disk
        with inconsistent device geometry.
      Signed-off-by: Stefan Weinhuber <wein@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      46e88947
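      A small C sketch of the detection order described above: rely on the
      generic geometry ioctl and treat the DASD-specific ioctl as optional
      (error handling trimmed, variable declarations omitted, names assumed):

          /* Without a usable geometry no label can be interpreted. */
          if (ioctl_by_bdev(bdev, HDIO_GETGEO, (unsigned long)geo) != 0)
                  return 0;

          /* BIODASDINFO2 is now optional; only the remaining legacy cases
           * (e.g. an unlabeled FBA or LDL DASD) still depend on it. */
          dasd_info_valid = (ioctl_by_bdev(bdev, BIODASDINFO2,
                                           (unsigned long)info) == 0);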
  14. 21 9月, 2012 2 次提交
    • block: fix request_queue->flags initialization · 60ea8226
      Committed by Tejun Heo
      A queue newly allocated with blk_alloc_queue_node() has only
      QUEUE_FLAG_BYPASS set.  For request-based drivers,
      blk_init_allocated_queue() is called and q->queue_flags is overwritten
      with QUEUE_FLAG_DEFAULT which doesn't include BYPASS even though the
      initial bypass is still in effect.
      
      In blk_init_allocated_queue(), OR QUEUE_FLAG_DEFAULT into
      q->queue_flags instead of overwriting it (sketched after this entry).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: stable@vger.kernel.org
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      60ea8226
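      A two-line C sketch of the change (before/after):

          /* in blk_init_allocated_queue(), before: */
          q->queue_flags = QUEUE_FLAG_DEFAULT;    /* clobbers QUEUE_FLAG_BYPASS */

          /* after: OR the defaults in, preserving the bypass bit */
          q->queue_flags |= QUEUE_FLAG_DEFAULT;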
    • block: lift the initial queue bypass mode on blk_register_queue() instead of blk_init_allocated_queue() · 749fefe6
      Committed by Tejun Heo
      b82d4b19 ("blkcg: make request_queue bypassing on allocation") made
      request_queues bypassed on allocation to avoid switching bypass mode
      on and off on a queue that is being initialized.  Some drivers
      allocate and then destroy a lot of queues without fully initializing
      them, and incurring the bypass latency on each of them could add up
      to significant overhead.
      
      Unfortunately, blk_init_allocated_queue() is never used by queues of
      bio-based drivers, which means that all bio-based driver queues are in
      bypass mode even after initialization and registration complete
      successfully.
      
      Due to the limited way request_queues are used by bio drivers, this
      problem is hidden pretty well but it shows up when blk-throttle is
      used in combination with a bio-based driver.  Trying to configure
      (echoing to cgroupfs file) blk-throttle for a bio-based driver hangs
      indefinitely in blkg_conf_prep() waiting for bypass mode to end.
      
      This patch moves the initial blk_queue_bypass_end() call from
      blk_init_allocated_queue() to blk_register_queue(), which is called
      for any userland-visible queue regardless of its type.
      
      I believe this is correct because I don't think there is any block
      driver which needs or wants working elevator and blk-cgroup on a queue
      which isn't visible to userland.  If there are such users, we need a
      different solution.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Joseph Glanville <joseph.glanville@orionvm.com.au>
      Cc: stable@vger.kernel.org
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      749fefe6
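      A C sketch of where the call ends up after this patch (surrounding
      registration code omitted; a sketch, not the full function):

          int blk_register_queue(struct gendisk *disk)
          {
                  struct request_queue *q = disk->queue;

                  /* The queue is about to become visible to userland, so
                   * initialization is done; end the allocation-time bypass
                   * for request-based and bio-based queues alike. */
                  blk_queue_bypass_end(q);

                  /* ... sysfs and elevator registration ... */
                  return 0;
          }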
  15. 20 September 2012 (5 commits)
  16. 18 September 2012 (1 commit)
  17. 15 September 2012 (1 commit)
    • cgroup: mark subsystems with broken hierarchy support and whine if cgroups are nested for them · 8c7f6edb
      Committed by Tejun Heo
      Currently, cgroup hierarchy support is a mess.  cpu related subsystems
      behave correctly - configuration, accounting and control on a parent
      properly cover its children.  blkio and freezer completely ignore
      hierarchy and treat all cgroups as if they're directly under the root
      cgroup.  Others show yet different behaviors.
      
      These differing interpretations of the cgroup hierarchy make using
      cgroups confusing and make it impossible to co-mount controllers into
      the same hierarchy and obtain sane behavior.
      
      Eventually, we want full hierarchy support from all subsystems and
      probably a unified hierarchy.  Having users of separate hierarchies
      expect completely different behaviors depending on the mounted
      subsystem is detrimental to making any progress on this front.
      
      This patch adds cgroup_subsys.broken_hierarchy and sets it to %true
      for controllers which are lacking in hierarchy support.  The goal of
      this patch is two-fold.
      
      * Move users away from using hierarchy on currently non-hierarchical
        subsystems, so that implementing proper hierarchy support on those
        doesn't surprise them.
      
      * Keep track of which controllers are broken how and nudge the
        subsystems to implement proper hierarchy support.
      
      For now, start with a single warning message.  We can whine louder
      later on.
      
      v2: Fixed a typo spotted by Michal. Warning message updated.
      
      v3: Updated memcg part so that it doesn't generate warning in the
          cases where .use_hierarchy=false doesn't make the behavior
          different from root.use_hierarchy=true.  Fixed a typo spotted by
          Glauber.
      
      v4: Check ->broken_hierarchy after cgroup creation is complete so that
          ->create() can affect the result per Michal.  Dropped unnecessary
          memcg root handling per Michal.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Li Zefan <lizefan@huawei.com>
      Acked-by: Serge E. Hallyn <serue@us.ibm.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Turner <pjt@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Thomas Graf <tgraf@suug.ch>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      8c7f6edb
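      A C sketch of how a controller opts in to the new flag (using blkio as
      an illustrative example; treat the details as assumptions):

          struct cgroup_subsys blkio_subsys = {
                  .name                   = "blkio",
                  /* ... the usual create/destroy/attach callbacks ... */

                  /*
                   * blkio currently treats all cgroups as if they sit
                   * directly under the root, so flag it until proper
                   * hierarchy support is implemented.
                   */
                  .broken_hierarchy       = true,
          };

          /* The cgroup core then prints a one-time warning when such a
           * controller is enabled on a nested cgroup. */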
  18. 13 September 2012 (1 commit)
    • block/blk-tag.c: Remove useless kfree · d41570b7
      Committed by Peter Senna Tschudin
      Remove useless kfree() and clean up code related to the removal.
      
      The semantic patch that finds this problem is as follows:
      (http://coccinelle.lip6.fr/)
      
      // <smpl>
      @r exists@
      position p1,p2;
      expression x;
      @@
      
      if (x@p1 == NULL) { ... kfree@p2(x); ... return ...; }
      
      @unchanged exists@
      position r.p1,r.p2;
      expression e <= r.x,x,e1;
      iterator I;
      statement S;
      @@
      
      if (x@p1 == NULL) { ... when != I(x,...) S
                              when != e = e1
                              when != e += e1
                              when != e -= e1
                              when != ++e
                              when != --e
                              when != e++
                              when != e--
                              when != &e
         kfree@p2(x); ... return ...; }
      
      @ok depends on unchanged exists@
      position any r.p1;
      position r.p2;
      expression x;
      @@
      
      ... when != true x@p1 == NULL
      kfree@p2(x);
      
      @depends on !ok && unchanged@
      position r.p2;
      expression x;
      @@
      
      *kfree@p2(x);
      // </smpl>
      Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      d41570b7
  19. 09 September 2012 (5 commits)