1. 04 Jun 2012, 1 commit
  2. 20 Apr 2012, 11 commits
    • blkcg: collapse blkcg_policy_ops into blkcg_policy · f9fcc2d3
      Committed by Tejun Heo
      There's no reason to keep blkcg_policy_ops separate.  Collapse it into
      blkcg_policy.
      
      This patch doesn't introduce any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      f9fcc2d3
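      A minimal sketch of the resulting layout; member names beyond the methods discussed in this series are assumptions, not the exact kernel definition:

      struct blkcg_policy {
              int             plid;           /* assigned at registration */
              size_t          pd_size;        /* per-blkg policy data size */
              struct cftype   *cftypes;       /* cgroupfs files, if any */

              /* formerly split out into blkcg_policy_ops */
              void    (*pd_init_fn)(struct blkcg_gq *blkg);
              void    (*pd_exit_fn)(struct blkcg_gq *blkg);
              void    (*pd_reset_stats_fn)(struct blkcg_gq *blkg);
      };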
    • blkcg: embed struct blkg_policy_data in policy specific data · f95a04af
      Committed by Tejun Heo
      Currently blkg_policy_data carries policy specific data as char flex
      array instead of being embedded in policy specific data.  This was
      forced by oddities around blkg allocation which are all gone now.
      
      This patch makes blkg_policy_data embedded in policy specific data -
      throtl_grp and cfq_group so that it's more conventional and consistent
      with how io_cq is handled.
      
      * blkcg_policy->pdata_size is renamed to ->pd_size.
      
      * Functions which used to take void *pdata now take struct
        blkg_policy_data *pd.
      
      * blkg_to_pdata/pdata_to_blkg() updated to blkg_to_pd/pd_to_blkg().
      
      * Dummy struct blkg_policy_data definition added.  Dummy
        pdata_to_blkg() definition was unused and inconsistent with the
        non-dummy version - correct dummy pd_to_blkg() added.
      
      * throtl and cfq updated accordingly.
      
      * As dummy blkg_to_pd/pd_to_blkg() are provided,
        blkg_to_cfqg/cfqg_to_blkg() don't need to be ifdef'd.  Moved outside
        ifdef block.
      
      This patch doesn't introduce any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      f95a04af
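      A short sketch of the embedded layout and the container_of()-style accessor it enables; field lists are abbreviated and illustrative:

      struct cfq_group {
              struct blkg_policy_data pd;     /* must be the first member */
              unsigned int            dev_weight;
              /* ... rest of the cfq-specific fields ... */
      };

      /* blkg_to_pd()/pd_to_blkg() and the per-policy wrappers reduce to
       * pointer arithmetic over the embedded member, e.g.: */
      static inline struct cfq_group *pd_to_cfqg(struct blkg_policy_data *pd)
      {
              return pd ? container_of(pd, struct cfq_group, pd) : NULL;
      }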
    • blkcg: mass rename of blkcg API · 3c798398
      Committed by Tejun Heo
      During the recent blkcg cleanup, most of the blkcg API has changed to
      such an extent that mass renaming wouldn't cause any noticeable pain.
      Take the chance and clean up the naming.
      
      * Rename blkio_cgroup to blkcg.
      
      * Drop blkio / blkiocg prefixes and consistently use blkcg.
      
      * Rename blkio_group to blkcg_gq, which is consistent with io_cq but
        keep the blkg prefix / variable name.
      
      * Rename policy method type and field names to signify they're dealing
        with policy data.
      
      * Rename blkio_policy_type to blkcg_policy.
      
      This patch doesn't cause any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      3c798398
    • blkcg: remove blkio_group->path[] · 54e7ed12
      Committed by Tejun Heo
      blkio_group->path[] stores the path of the associated cgroup and is
      used only for debug messages.  Just format the path from blkg->cgroup
      when printing debug messages.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      54e7ed12
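      A hedged sketch of formatting the path on demand; the helper name and the exact member chain are assumptions, with names following this era's blkio_* API:

      static inline void blkg_path(struct blkio_group *blkg, char *buf, int buflen)
      {
              /* build the cgroup path only when a debug message needs it,
               * instead of caching it in blkg->path[] */
              if (cgroup_path(blkg->blkcg->css.cgroup, buf, buflen) < 0)
                      buf[0] = '\0';
      }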
    • blkcg: drop stuff unused after per-queue policy activation update · 3c96cb32
      Committed by Tejun Heo
      * All_q_list is unused.  Drop all_q_{mutex|list}.
      
      * @for_root of blkg_lookup_create() is always %false when called from
        outside blk-cgroup.c proper.  Factor out __blkg_lookup_create() so
        that it doesn't check whether @q is bypassing and use the
        underscored version for the @for_root callsite.
      
      * blkg_destroy_all() is used only from blkcg proper and @destroy_root
        is always %true.  Make it static and drop @destroy_root.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      3c96cb32
    • blkcg: implement per-queue policy activation · a2b1693b
      Committed by Tejun Heo
      All blkcg policies were assumed to be enabled on all request_queues.
      Due to various implementation obstacles, during the recent blkcg core
      updates, this was temporarily implemented as shooting down all !root
      blkgs on elevator switch and policy [de]registration combined with
      half-broken in-place root blkg updates.  In addition to being buggy
      and racy, this meant losing all blkcg configurations across those
      events.
      
      Now that blkcg is cleaned up enough, this patch replaces the temporary
      implementation with proper per-queue policy activation.  Each blkcg
      policy should call the new blkcg_[de]activate_policy() to enable and
      disable the policy on a specific queue.  blkcg_activate_policy()
      allocates and installs policy data for the policy for all existing
      blkgs.  blkcg_deactivate_policy() does the reverse.  If a policy is
      not enabled for a given queue, blkg printing / config functions skip
      the respective blkg for the queue.
      
      blkcg_activate_policy() also takes care of root blkg creation, and
      cfq_init_queue() and blk_throtl_init() are updated accordingly.
      
      This makes blkcg_bypass_{start|end}() and update_root_blkg_pd()
      unnecessary.  Dropped.
      
      v2: cfq_init_queue() was returning uninitialized @ret on root_group
          alloc failure if !CONFIG_CFQ_GROUP_IOSCHED.  Fixed.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      a2b1693b
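      A sketch of the activation pattern from a policy's point of view; the wrapper function below is hypothetical, error handling is simplified, and the policy descriptor uses the post-rename name:

      static int cfq_enable_blkcg_policy(struct request_queue *q)
      {
              int ret;

              /* allocate and install cfq's policy data for every existing
               * blkg on @q (including the root blkg), then mark the policy
               * active on this queue */
              ret = blkcg_activate_policy(q, &blkcg_policy_cfq);
              if (ret)
                      return ret;

              /* ... normal operation; blkg printing/config now covers @q ... */

              /* on elevator/queue teardown, the reverse */
              blkcg_deactivate_policy(q, &blkcg_policy_cfq);
              return 0;
      }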
    • blkcg: add request_queue->root_blkg · 03d8e111
      Committed by Tejun Heo
      With per-queue policy activation, root blkg creation will be moved to
      blkcg core.  Add q->root_blkg in preparation.  For blk-throtl, this
      replaces throtl_data->root_tg; however, cfq needs to keep
      cfqd->root_group for !CONFIG_CFQ_GROUP_IOSCHED.
      
      This is to prepare for per-queue policy activation and doesn't cause
      any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      03d8e111
    • blkcg: make blkg_conf_prep() take @pol and return with queue lock held · da8b0662
      Committed by Tejun Heo
      Add @pol to blkg_conf_prep() and let it return with queue lock held
      (to be released by blkg_conf_finish()).  Note that @pol isn't used
      yet.
      
      This is to prepare for per-queue policy activation and doesn't cause
      any visible difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      da8b0662
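      A sketch of the conf-write pattern after this change; the surrounding function is hypothetical and struct/field details are assumptions, with names following this era's blkio_* API:

      static int example_conf_write(struct blkio_cgroup *blkcg,
                                    const struct blkio_policy_type *pol,
                                    const char *input)
      {
              struct blkg_conf_ctx ctx;
              int ret;

              ret = blkg_conf_prep(blkcg, pol, input, &ctx);  /* returns with the
                                                                 queue lock held */
              if (ret)
                      return ret;

              /* apply ctx.v to ctx.blkg's policy data while the lock is held */

              blkg_conf_finish(&ctx);                         /* releases the lock */
              return 0;
      }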
    • blkcg: remove static policy ID enums · 8bd435b3
      Committed by Tejun Heo
      Remove BLKIO_POLICY_* enums and let blkio_policy_register() allocate
      @pol->plid dynamically on registration.  The maximum number of blkcg
      policies which can be registered at the same time is defined by
      BLKCG_MAX_POLS constant added to include/linux/blkdev.h.
      
      Note that blkio_policy_register() now may fail.  Policy init functions
      updated accordingly and unnecessary ifdefs removed from cfq_init().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      8bd435b3
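      A simplified sketch of dynamic plid assignment; locking is omitted and the array name and error code are assumptions:

      static struct blkio_policy_type *blkio_policy[BLKCG_MAX_POLS];

      int blkio_policy_register(struct blkio_policy_type *pol)
      {
              int i;

              /* find a free slot; registration can now fail */
              for (i = 0; i < BLKCG_MAX_POLS; i++)
                      if (!blkio_policy[i])
                              break;
              if (i >= BLKCG_MAX_POLS)
                      return -ENOSPC;

              pol->plid = i;
              blkio_policy[i] = pol;
              return 0;
      }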
    • blkcg: use @pol instead of @plid in update_root_blkg_pd() and blkcg_print_blkgs() · ec399347
      Committed by Tejun Heo
      The two functions were taking "enum blkio_policy_id plid".  Make them
      take "const struct blkio_policy_type *pol" instead.
      
      This is to prepare for per-queue policy activation and doesn't cause
      any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      ec399347
    • cfq: fix build breakage & warnings · f48ec1d7
      Committed by Tejun Heo
      * CFQ_WEIGHT_* defined inside CONFIG_BLK_CGROUP causes cfq-iosched.c
        compile failure when the config is disabled.  Move it outside the
        ifdef block.
      
      * Dummy cfqg_stats_*() definitions were lacking inline modifiers,
        causing unused-function warnings if !CONFIG_CFQ_GROUP_IOSCHED.  Add
        them.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      f48ec1d7
  3. 02 Apr 2012, 10 commits
    • blkcg: drop BLKCG_STAT_{PRIV|POL|OFF} macros · 5bc4afb1
      Committed by Tejun Heo
      Now that all stat handling code lives in policy implementations,
      there's no need to encode policy ID in cft->private.
      
      * Export blkcg_prfill_[rw]stat() from blkcg, remove
        blkcg_print_[rw]stat(), and implement cfqg_print_[rw]stat() which
        hard-code BLKIO_POLICY_PROP.
      
      * Use cft->private for offset of the target field directly and drop
        BLKCG_STAT_{PRIV|POL|OFF}().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      5bc4afb1
    • blkcg: pass around pd->pdata instead of pd itself in prfill functions · d366e7ec
      Committed by Tejun Heo
      Now that all conf and stat fields are moved into policy specific
      blkio_policy_data->pdata areas, there's no reason to use
      blkio_policy_data itself in prfill functions.  Pass around @pd->pdata
      instead of @pd.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      d366e7ec
    • blkcg: move blkio_group_conf->weight to cfq · 3381cb8d
      Committed by Tejun Heo
      blkio_group_conf->weight is owned by cfq and has no reason to be
      defined in blkcg core.  Replace it with cfq_group->dev_weight and let
      conf setting functions directly set it.  If dev_weight is zero, the
      cfqg doesn't have device specific weight configured.
      
      Also, rename BLKIO_WEIGHT_* constants to CFQ_WEIGHT_* and rename
      blkio_cgroup->weight to blkio_cgroup->cfq_weight.  We eventually want
      per-policy storage in blkio_cgroup but just mark the ownership of the
      field for now.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      3381cb8d
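      A sketch of the resulting weight lookup; the helper is hypothetical, and only the dev_weight/cfq_weight split and the zero-means-unset rule come from the commit:

      static unsigned int cfqg_effective_weight(struct cfq_group *cfqg,
                                                struct blkio_cgroup *blkcg)
      {
              /* dev_weight == 0 means no device-specific weight is configured,
               * so fall back to the cgroup-wide cfq weight */
              return cfqg->dev_weight ?: blkcg->cfq_weight;
      }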
    • blkcg: move blkio_group_stats to cfq-iosched.c · 155fead9
      Committed by Tejun Heo
      blkio_group_stats contains only fields used by cfq and has no reason
      to be defined in blkcg core.
      
      * Move blkio_group_stats to cfq-iosched.c and rename it to cfqg_stats.
      
      * blkg_policy_data->stats is replaced with cfq_group->stats.
        blkg_prfill_[rw]stat() are updated to use offset against pd->pdata
        instead.
      
      * All related macros / functions are renamed so that they have cfqg_
        prefix and the unnecessary @pol arguments are dropped.
      
      * All stat functions now take cfq_group * instead of blkio_group *.
      
      * lockdep assertion on queue lock dropped.  Elevator runs under queue
        lock by default.  There isn't much to be gained by adding lockdep
        assertions at stat function level.
      
      * cfqg_stats_reset() implemented for blkio_reset_group_stats_fn method
        so that cfqg->stats can be reset.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      155fead9
    • blkcg: cfq doesn't need per-cpu dispatch stats · 41b38b6d
      Committed by Tejun Heo
      blkio_group_stats_cpu is used to count dispatch stats using per-cpu
      counters.  This is used by both blk-throtl and cfq-iosched but the
      sharing is rather silly.
      
      * cfq-iosched doesn't need per-cpu dispatch stats.  cfq always updates
        those stats while holding queue_lock.
      
      * blk-throtl needs per-cpu dispatch stats but only service_bytes and
        serviced.  It doesn't make use of sectors.
      
      This patch makes cfq add and use global stats for service_bytes,
      serviced and sectors, removes per-cpu sectors counter and moves
      per-cpu stat printing code to blk-throttle.c.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      41b38b6d
    • blkcg: move statistics update code to policies · 629ed0b1
      Committed by Tejun Heo
      As with conf/stats file handling code, there's no reason for stat
      update code to live in blkcg core with policies calling into it to
      update them.  The current organization is both inflexible and complex.
      
      This patch moves stat update code to specific policies.  All
      blkiocg_update_*_stats() functions which deal with BLKIO_POLICY_PROP
      stats are collapsed into their cfq_blkiocg_update_*_stats()
      counterparts.  blkiocg_update_dispatch_stats() is used by both
      policies and duplicated as throtl_update_dispatch_stats() and
      cfq_blkiocg_update_dispatch_stats().  This will be cleaned up later.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      629ed0b1
    • cfq: collapse cfq.h into cfq-iosched.c · 2ce4d50f
      Committed by Tejun Heo
      block/cfq.h contains some functions which interact with blkcg;
      however, this is only part of it and cfq-iosched.c already has quite
      some #ifdef CONFIG_CFQ_GROUP_IOSCHED.  With conf/stat handling being
      moved to specific policies, having these relay functions isolated in
      cfq.h doesn't make much sense.  Collapse cfq.h into cfq-iosched.c for
      now.  Let's split blkcg support properly later if necessary.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      2ce4d50f
    • blkcg: move conf/stat file handling code to policies · 60c2bc2d
      Committed by Tejun Heo
      blkcg conf/stat handling is convoluted in that details which belong to
      specific policy implementations are all out in blkcg core and then
      policies hook into core layer to access and manipulate confs and
      stats.  This sadly achieves both inflexibility (confs/stats can't be
      modified without messing with blkcg core) and complexity (all the
      call-ins and call-backs).
      
      The previous patches restructured conf and stat handling code such
      that they can be separated out.  This patch relocates the file
      handling part.  All conf/stat file handling code which belongs to
      BLKIO_POLICY_PROP is moved to cfq-iosched.c and all
      BLKIO_POLICY_THROTL code to blk-throttle.c.
      
      The move is verbatim except for blkio_update_group_{weight|bps|iops}()
      callbacks, which relay conf changes to policies.  The configuration
      settings are handled in policies themselves so the relaying isn't
      necessary.  Conf setting functions are modified to directly call
      per-policy update functions and the relaying mechanism is dropped.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      60c2bc2d
    • blkcg: remove unused @pol and @plid parameters · aaec55a0
      Committed by Tejun Heo
      @pol to blkg_to_pdata() and @plid to blkg_lookup_create() are no
      longer necessary.  Drop them.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      aaec55a0
    • block: Make cfq_target_latency tunable through sysfs. · 5bf14c07
      Committed by Tao Ma
      In cfq, when we calculate a time slice for a process (or, to be
      precise, a cfqq), we have to consider cfq_target_latency so that all
      the sync requests have an estimated latency (300 ms), which is
      controlled by cfq_target_latency.  But in some Hadoop tests we found
      that if there are many processes doing sequential reads (24, for
      example), the throughput is bad because every process can only work
      for about 25 ms before the cfqq is switched, which leads to more disk
      seeking.  We can achieve good throughput by setting low_latency=0, but
      then the latency of some reads becomes too high for the application.

      So this patch makes cfq_target_latency tunable through sysfs so that
      we can tune it and find a magic number which is not bad for both
      throughput and read latency.
      
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Tao Ma <boyu.mt@taobao.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      5bf14c07
  4. 23 Mar 2012, 1 commit
    • cfq: fix cfqg ref handling when BLK_CGROUP && !CFQ_GROUP_IOSCHED · eb7d8c07
      Committed by Tejun Heo
      When BLK_CGROUP is enabled but CFQ_GROUP_IOSCHED is not, cfq ends up
      calling blkg_get/put() on a dummy cfqg, leading to the following crash.
      
        BUG: unable to handle kernel NULL pointer dereference at 00000000000000b0
        IP: [<ffffffff813d44d8>] cfq_init_queue+0x258/0x430
        PGD 0
        Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
        CPU 0
        Modules linked in:
      
        Pid: 1, comm: swapper/0 Not tainted 3.3.0-rc6-work+ #125 Bochs Bochs
        RIP: 0010:[<ffffffff813d44d8>]  [<ffffffff813d44d8>] cfq_init_queue+0x258/0x430
        RSP: 0018:ffff88001f9dfd80  EFLAGS: 00010046
        RAX: ffff88001aefbbf0 RBX: ffff88001aeedbf0 RCX: 0000000000000100
        RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff820ffd40
        RBP: ffff88001f9dfdd0 R08: 0000000000000000 R09: 0000000000000001
        R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000000
        R13: 0000000000000009 R14: ffff88001aefbc30 R15: 0000000000000003
        FS:  0000000000000000(0000) GS:ffff88001fc00000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
        CR2: 00000000000000b0 CR3: 000000000206f000 CR4: 00000000000006f0
        DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
        Process swapper/0 (pid: 1, threadinfo ffff88001f9de000, task ffff88001f9dc040)
        Stack:
         ffff88001aeedbf0 ffff88001aefbdb0 ffff88001aef1548 ffff88001aefbbf0
         ffff88001f9dfdd0 ffff88001aef1548 ffffffff820d6320 ffffffff8165ce30
         ffffffff82c555e0 ffff88001aeebbf0 ffff88001f9dfe00 ffffffff813b0507
        Call Trace:
         [<ffffffff813b0507>] elevator_init+0xd7/0x140
         [<ffffffff813b83d5>] blk_init_allocated_queue+0x125/0x150
         [<ffffffff813b94d3>] blk_init_queue_node+0x43/0x80
         [<ffffffff813b9523>] blk_init_queue+0x13/0x20
         [<ffffffff821aec00>] floppy_init+0x82/0xec7
         [<ffffffff810001d2>] do_one_initcall+0x42/0x170
         [<ffffffff821835fc>] kernel_init+0xcb/0x14f
         [<ffffffff81b40b24>] kernel_thread_helper+0x4/0x10
        Code: 00 e8 1d 9e 76 00 48 8b 43 48 48 85 c0 48 89 83 28 03 00 00 74 07 4c 8b a0 10 ff ff ff 8b 15 b0 2e d0 00 85 d2 0f 85 49 01 00 00 <41> 8b 84 24 b0 00 00 00 85 c0 0f 8e 8c 01 00 00 83 e8 01 85 c0
        RIP  [<ffffffff813d44d8>] cfq_init_queue+0x258/0x430
      
      Because cfq's blkcg support has an on/off switch, CFQ_GROUP_IOSCHED,
      separate from BLK_CGROUP, blkg access through cfqg needs to be
      conditioned on it.
      
      * Make blkg_to_cfqg() and cfqg_to_blkg() conditioned on
        CFQ_GROUP_IOSCHED.  If disabled, they always return %NULL.
      
      * Introduce cfqg_get() and cfqg_put() conditioned on
        CFQ_GROUP_IOSCHED.  If disabled, they are noops.
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      eb7d8c07
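      Roughly, the conditional helpers described above (exact definitions may differ):

      #ifdef CONFIG_CFQ_GROUP_IOSCHED
      static inline void cfqg_get(struct cfq_group *cfqg)
      {
              blkg_get(cfqg_to_blkg(cfqg));
      }

      static inline void cfqg_put(struct cfq_group *cfqg)
      {
              blkg_put(cfqg_to_blkg(cfqg));
      }
      #else
      /* no blkg behind the dummy cfqg, so these are no-ops */
      static inline void cfqg_get(struct cfq_group *cfqg) { }
      static inline void cfqg_put(struct cfq_group *cfqg) { }
      #endif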
  5. 20 Mar 2012, 2 commits
    • cfq: don't use icq_get_changed() · 598971bf
      Committed by Tejun Heo
      cfq caches the associated cfqq's for a given cic.  The cache needs to
      be flushed if the cic's ioprio or blkcg has changed.  It is currently
      done by requiring the changing action to set the respective
      ICQ_*_CHANGED bit in the icq and testing it from cfq_set_request(),
      which involves iterating through all the affected icqs.
      
      All cfq wants to know is whether ioprio and/or blkcg have changed
      since the last flush and can be easily achieved by just remembering
      the current ioprio and blkcg ID in cic.
      
      This patch adds cic->{ioprio|blkcg_id}, updates all ioprio users to
      use the remembered value instead, and updates the cfq_set_request()
      path such that, instead of using icq_get_changed(), the current values
      are compared against the remembered ones and the appropriate flush
      action is triggered when they differ.  Condition tests are moved inside both _changed
      functions which are now named check_ioprio_changed() and
      check_blkcg_changed().
      
      ioprio.h::task_ioprio*() can't be used anymore and replaced with
      open-coded IOPRIO_CLASS_NONE case in cfq_async_queue_prio().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      598971bf
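      A hedged sketch of the cached-ID comparison that replaces the ICQ_*_CHANGED test; how the current blkcg ID is obtained and the flush helper are assumptions:

      static void check_blkcg_changed(struct cfq_io_cq *cic, u64 cur_blkcg_id)
      {
              if (unlikely(cic->blkcg_id != cur_blkcg_id)) {
                      /* blkcg changed since the last request: drop the cached
                       * cfqqs so they get reallocated in the new group */
                      cfq_drop_cached_cfqqs(cic);     /* hypothetical helper */
                      cic->blkcg_id = cur_blkcg_id;
              }
      }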
    • cfq: pass around cfq_io_cq instead of io_context · abede6da
      Committed by Tejun Heo
      Now that io_cq is managed by block core and guaranteed to exist for
      any in-flight request, it is easier and carries more information to
      pass around cfq_io_cq than io_context.
      
      This patch updates cfq_init_prio_data(), cfq_find_alloc_queue() and
      cfq_get_queue() to take @cic instead of @ioc.  This change removes a
      duplicate cfq_cic_lookup() from cfq_find_alloc_queue().
      
      This change enables the use of cic-cached ioprio in the next patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      abede6da
  6. 07 Mar 2012, 15 commits
    • block: make block cgroup policies follow bio task association · 4f85cb96
      Committed by Tejun Heo
      Implement bio_blkio_cgroup() which returns the blkcg associated with
      the bio if one exists or %current's blkcg otherwise, and use it in blk-throttle and
      cfq-iosched propio.  This makes both cgroup policies honor task
      association for the bio instead of always assuming %current.
      
      As nobody is using bio_set_task() yet, this doesn't introduce any
      behavior change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      4f85cb96
    • block: implement bio_associate_current() · 852c788f
      Committed by Tejun Heo
      IO scheduling and cgroup are tied to the issuing task via io_context
      and cgroup of %current.  Unfortunately, there are cases where IOs need
      to be routed via a different task, which causes scheduling and cgroup
      limit enforcement to be applied completely incorrectly.
      
      For example, all bios delayed by blk-throttle end up being issued by a
      delayed work item and get assigned the io_context of the worker task
      which happens to serve the work item, and are dumped to the default
      block cgroup.  This is doubly confusing, as bios which aren't delayed
      end up in the correct cgroup, and it makes using blk-throttle and cfq propio
      together impossible.
      
      Any code which punts IO issuing to another task is affected, and such
      punting is getting more and more common (e.g. btrfs).  As both io_context and
      cgroup are firmly tied to task including userland visible APIs to
      manipulate them, it makes a lot of sense to match up tasks to bios.
      
      This patch implements bio_associate_current() which associates the
      specified bio with %current.  The bio will record the associated ioc
      and blkcg at that point and block layer will use the recorded ones
      regardless of which task actually ends up issuing the bio.  bio
      release puts the associated ioc and blkcg.
      
      It grabs and remembers ioc and blkcg instead of the task itself
      because the task may already be dead by the time the bio is issued,
      making ioc and blkcg inaccessible, and those are all the block layer
      cares about.
      
      elevator_set_req_fn() is updated so that the bio for which elvdata is
      being allocated is available to the elevator.
      
      This doesn't update block cgroup policies yet.  Further patches will
      implement the support.
      
      -v2: #ifdef CONFIG_BLK_CGROUP added around bio->bi_ioc dereference in
           rq_ioc() to fix build breakage.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Kent Overstreet <koverstreet@google.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      852c788f
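      A sketch of the intended usage when IO issuing is punted to another task; the surrounding helpers are hypothetical:

      static void punt_bio_to_worker(struct bio *bio)
      {
              /* record %current's ioc and blkcg in the bio before handing it
               * off, so the block layer charges the IO to this task rather
               * than to whichever worker eventually submits it */
              bio_associate_current(bio);

              queue_bio_for_worker(bio);      /* hypothetical hand-off */
      }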
    • block: add io_context->active_ref · f6e8d01b
      Committed by Tejun Heo
      Currently ioc->nr_tasks is used to decide two things - whether an ioc
      is done issuing IOs and whether it's shared by multiple tasks.  This
      patch separates out the first into ioc->active_ref, which is acquired
      and released using {get|put}_io_context_active() respectively.
      
      This will be used to associate bio's with a given task.  This patch
      doesn't introduce any visible behavior change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      f6e8d01b
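      A sketch of the new active-reference pair; only the get/put names come from the commit, the surrounding helper is hypothetical:

      static void issue_ios_for(struct io_context *ioc)
      {
              get_io_context_active(ioc);     /* @ioc is still issuing IOs */

              /* ... submit IOs charged to @ioc ... */

              put_io_context_active(ioc);     /* drop the active reference */
      }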
    • blkcg: drop unnecessary RCU locking · c875f4d0
      Committed by Tejun Heo
      Now that blkg additions / removals are always done under both q and
      blkcg locks, the only places RCU locking is necessary are
      blkg_lookup[_create]() for lookup w/o blkcg lock.  This patch drops
      unnecessary RCU locking, replacing it with plain blkcg locking as
      necessary.
      
      * blkiocg_pre_destroy() already performs proper locking and doesn't need
        RCU.  Dropped.
      
      * blkio_read_blkg_stats() now uses blkcg->lock instead of RCU read
        lock.  This isn't a hot path.
      
      * Now unnecessary synchronize_rcu() from queue exit paths removed.
        This makes q->nr_blkgs unnecessary.  Dropped.
      
      * RCU annotation on blkg->q removed.
      
      -v2: Vivek pointed out that blkg_lookup_create() still needs to be
           called under rcu_read_lock().  Updated.
      
      -v3: After the update, stats_lock locking in blkio_read_blkg_stats()
           shouldn't be using _irq variant as it otherwise ends up enabling
           irq while blkcg->lock is locked.  Fixed.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      c875f4d0
    • blkcg: unify blkg's for blkcg policies · e8989fae
      Committed by Tejun Heo
      Currently, blkg is per cgroup-queue-policy combination.  This is
      unnatural and leads to various convolutions in partially used
      duplicate fields in blkg, config / stat access, and general management
      of blkgs.
      
      This patch makes blkg's per cgroup-queue and lets them serve all
      policies.  blkgs are now created and destroyed by blkcg core proper.
      This will allow further consolidation of common management logic into
      blkcg core and API with better defined semantics and layering.
      
      As a transitional step to untangle blkg management, elvswitch and
      policy [de]registration, all blkgs except the root blkg are being shot
      down during elvswitch and bypass.  This patch adds blkg_root_update()
      to update root blkg in place on policy change.  This is hacky and racy
      but should be good enough as interim step until we get locking
      simplified and switch over to proper in-place update for all blkgs.
      
      -v2: Root blkgs need to be updated on elvswitch too and blkg_alloc()
           comment wasn't updated according to the function change.  Fixed.
           Both pointed out by Vivek.
      
      -v3: v2 updated blkg_destroy_all() to invoke update_root_blkg_pd() for
           all policies.  This freed root pd during elvswitch before the
           last queue finished exiting and led to oops.  Directly invoke
           update_root_blkg_pd() only on BLKIO_POLICY_PROP from
           cfq_exit_queue().  This also is closer to what will be done with
           proper in-place blkg update.  Reported by Vivek.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      e8989fae
    • blkcg: let blkcg core manage per-queue blkg list and counter · 03aa264a
      Committed by Tejun Heo
      With the previous patch to move blkg list heads and counters to
      request_queue and blkg, logic to manage them in both policies are
      almost identical and can be moved to blkcg core.
      
      This patch moves blkg link logic into blkg_lookup_create(), implements
      common blkg unlink code in blkg_destroy(), and updates
      blkg_destroy_all() so that it's policy specific and can skip the root
      group.  The updated blkg_destroy_all() is now used both to clear the
      queue for bypassing and elevator switching, and to release all blkgs
      on q exit.
      
      This patch introduces a race window where policy [de]registration may
      race against queue blkg clearing.  This can only be a problem on cfq
      unload and shouldn't be a real problem in practice (and we have many
      other places where this race already exists).  Future patches will
      remove these unlikely races.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      03aa264a
    • blkcg: move per-queue blkg list heads and counters to queue and blkg · 4eef3049
      Committed by Tejun Heo
      Currently, specific policy implementations are responsible for
      maintaining list and number of blkgs.  This duplicates code
      unnecessarily, and hinders factoring common code and providing blkcg
      API with better defined semantics.
      
      After this patch, request_queue hosts list heads and counters and blkg
      has list nodes for both policies.  This patch only relocates the
      necessary fields and the next patch will actually move management code
      into blkcg core.
      
      Note that request_queue->blkg_list[] and ->nr_blkgs[] are hardcoded to
      have 2 elements.  This is to avoid include dependency and will be
      removed by the next patch.
      
      This patch doesn't introduce any behavior change.
      
      -v2: Now unnecessary conditional on CONFIG_BLK_CGROUP_MODULE removed
           as pointed out by Vivek.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      4eef3049
    • blkcg: don't use blkg->plid in stat related functions · c1768268
      Committed by Tejun Heo
      blkg is scheduled to be unified for all policies and thus there won't
      be one-to-one mapping from blkg to policy.  Update stat related
      functions to take explicit @pol or @plid arguments and not use
      blkg->plid.
      
      This is painful for now but most of the specific stat interface functions
      will be replaced with a handful of generic helpers.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      c1768268
    • blkcg: move refcnt to blkcg core · 1adaf3dd
      Committed by Tejun Heo
      Currently, blkcg policy implementations manage the blkg refcnt,
      duplicating mostly identical code in both policies.  This patch moves
      the refcnt to blkg and lets blkcg core handle refcnt and freeing of
      blkgs.
      
      * cfq blkgs now also get freed via RCU.
      
      * cfq blkgs lose RB_EMPTY_ROOT() sanity check on blkg free.  If
        necessary, we can add blkio_exit_group_fn() to resurrect this.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      1adaf3dd
    • blkcg: let blkcg core handle policy private data allocation · 0381411e
      Committed by Tejun Heo
      Currently, blkg's are embedded in the policies' private data
      structures and thus allocated and freed by policies.  This leads
      to duplicate code in policies, hinders implementing common parts in
      blkcg core with strong semantics, and forces duplicate blkg's for the
      same cgroup-q association.
      
      This patch introduces struct blkg_policy_data which is a separate data
      structure chained from blkg.  Each policy specifies the amount of private
      data it needs in its blkio_policy_type->pdata_size and blkcg core
      takes care of allocating them along with blkg which can be accessed
      using blkg_to_pdata().  blkg can be determined from pdata using
      pdata_to_blkg().  blkio_alloc_group_fn() method is accordingly updated
      to blkio_init_group_fn().
      
      For consistency, tg_of_blkg() and cfqg_of_blkg() are replaced with
      blkg_to_tg() and blkg_to_cfqg() respectively, and functions to map in
      the reverse direction are added.
      
      Except that policy specific data now lives in a separate data
      structure from blkg, this patch doesn't introduce any functional
      difference.
      
      This will be used to unify blkg's for different policies.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0381411e
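      A sketch of the chained layout and accessor described above; members beyond pdata_size/pdata and the exact accessor definitions are assumptions:

      struct blkg_policy_data {
              struct blkio_group      *blkg;      /* back-pointer for pdata_to_blkg() */
              char                    pdata[];    /* pdata_size bytes of policy data */
      };

      /* a policy wraps the generic accessor with its own type, e.g.: */
      static inline struct cfq_group *blkg_to_cfqg(struct blkio_group *blkg)
      {
              return blkg_to_pdata(blkg, &blkio_policy_cfq);
      }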
    • blkcg: let blkio_group point to blkio_cgroup directly · 7ee9c562
      Committed by Tejun Heo
      Currently, blkg points to the associated blkcg via its css_id.  This
      unnecessarily complicates dereferencing blkcg.  Let blkg hold a
      reference to the associated blkcg and point directly to it and disable
      css_id on blkio_subsys.
      
      This change requires splitting blkiocg_destroy() into
      blkiocg_pre_destroy() and blkiocg_destroy() so that all blkg's can be
      destroyed and all the blkcg references held by them dropped during
      cgroup removal.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      7ee9c562
    • blkcg: kill the mind-bending blkg->dev · 7a4dd281
      Committed by Tejun Heo
      blkg->dev is a dev_t recording the device number of the block device for
      the associated request_queue.  It is used to identify the associated
      block device when printing out configuration or stats.
      
      This is redundant to begin with.  A blkg is an association between a
      cgroup and a request_queue and it of course is possible to reach
      request_queue from blkg and synchronization conventions are in place
      for safe q dereferencing, so this shouldn't be necessary from the
      beginning.  Furthermore, it's initialized by sscanf()ing the device
      name of backing_dev_info.  The mind boggles.
      
      Anyways, if blkg is visible under rcu lock, we *know* that the
      associated request_queue hasn't gone away yet and its bdi is
      registered and alive - blkg can't be created for request_queue which
      hasn't been fully initialized and it can't go away before blkg is
      removed.
      
      Let stat and conf read functions get device name from
      blkg->q->backing_dev_info.dev and pass it down to printing functions
      and remove blkg->dev.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      7a4dd281
    • blkcg: don't allow or retain configuration of missing devices · e56da7e2
      Committed by Tejun Heo
      blkcg is very peculiar in that it allows setting and remembering
      configurations for non-existent devices by maintaining separate data
      structures for configuration.
      
      This behavior is completely out of the usual norms and outright
      confusing; furthermore, it uses the dev_t number to match the
      configuration to devices, which is unpredictable to begin with and
      becomes completely unusable if EXT_DEVT is fully used.
      
      It is wholly unnecessary - we already have a fully functional userland
      mechanism to program devices being hotplugged, which has full access to
      device identification, connection topology and filesystem information.
      
      Add a new struct blkio_group_conf which contains all blkcg
      configurations to blkio_group and let blkio_group, which can be
      created iff the associated device exists and is removed when the
      associated device goes away, carry all configurations.
      
      Note that, after this patch, all newly created blkg's will always have
      the default configuration (unlimited for throttling and blkcg's weight
      for propio).
      
      This patch makes blkio_policy_node meaningless but doesn't remove it.
      The next patch will.
      
      -v2: Updated to retry after short sleep if blkg lookup/creation failed
           due to the queue being temporarily bypassed as indicated by
           -EBUSY return.  Pointed out by Vivek.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      e56da7e2
    • blkcg: factor out blkio_group creation · cd1604fa
      Committed by Tejun Heo
      Currently both blk-throttle and cfq-iosched implement their own
      blkio_group creation code in throtl_get_tg() and cfq_get_cfqg().  This
      patch factors out the common code into blkg_lookup_create(), which
      returns ERR_PTR value so that transitional failures due to queue
      bypass can be distinguished from other failures.
      
      * New blkio_policy_ops methods blkio_alloc_group_fn() and
        blkio_link_group_fn() added.  Both are transitional and will be
        removed once the blkg management code is fully moved into
        blk-cgroup.c.
      
      * blkio_alloc_group_fn() allocates policy-specific blkg which is
        usually a larger data structure with blkg as the first entry and
        initializes it.  Note that initialization of blkg proper, including
        percpu stats, is responsibility of blk-cgroup proper.
      
        Note that default config (weight, bps...) initialization is done
        from this method; otherwise, we end up violating locking order
        between blkcg and q locks via blkcg_get_CONF() functions.
      
      * blkio_link_group_fn() is called under queue_lock and responsible for
        linking the blkg to the queue.  blkcg side is handled by blk-cgroup
        proper.
      
      * The common blkg creation function is named blkg_lookup_create() and
        blkiocg_lookup_group() is renamed to blkg_lookup() for consistency.
        Also, throtl / cfq related functions are similarly [re]named for
        consistency.
      
      This simplifies blkcg policy implementations and enables further
      cleanup.
      
      -v2: Vivek noticed that blkg_lookup_create() incorrectly tested
           blk_queue_dead() instead of blk_queue_bypass(), leading a user of
           the function to create a new blkg on a bypassing queue.
           This is a bug introduced while relocating bypass patches before
           this one.  Fixed.
      
      -v3: ERR_PTR patch folded into this one.  @for_root added to
           blkg_lookup_create() to allow creating root group on a bypassed
           queue during elevator switch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      cd1604fa
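      A sketch of how a policy consumes the factored-out creation path; the parameter list shown (blkcg, queue, policy id, @for_root) is a summary of this era's interface and the calling function is hypothetical:

      static struct blkio_group *example_get_blkg(struct blkio_cgroup *blkcg,
                                                  struct request_queue *q)
      {
              struct blkio_group *blkg;

              blkg = blkg_lookup_create(blkcg, q, BLKIO_POLICY_PROP, false);
              if (IS_ERR(blkg))       /* e.g. -EBUSY while @q is bypassing */
                      return NULL;    /* transitional failure: caller retries */

              return blkg;
      }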
    • blkcg: use the usual get blkg path for root blkio_group · f51b802c
      Committed by Tejun Heo
      For root blkg, blk_throtl_init() was using throtl_alloc_tg()
      explicitly and cfq_init_queue() was manually initializing embedded
      cfqd->root_group, adding unnecessarily different code paths to blkg
      handling.
      
      Make both use the usual blkio_group get functions - throtl_get_tg()
      and cfq_get_cfqg() - for the root blkio_group too.  Note that
      blk_throtl_init() callsite is pushed downwards in
      blk_alloc_queue_node() so that @q is sufficiently initialized for
      throtl_get_tg().
      
      This simplifies root blkg handling noticeably for cfq and will allow
      further modularization of blkcg API.
      
      -v2: Vivek pointed out that using cfq_get_cfqg() won't work if
           CONFIG_CFQ_GROUP_IOSCHED is disabled.  Fix it by factoring out
           initialization of base part of cfqg into cfq_init_cfqg_base() and
           alloc/init/free explicitly if !CONFIG_CFQ_GROUP_IOSCHED.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      f51b802c