1. 09 Aug 2013, 1 commit
  2. 02 Aug 2013, 1 commit
  3. 01 Aug 2013, 3 commits
    • M
      vmpressure: make sure there are no events queued after memcg is offlined · 33cb876e
      Committed by Michal Hocko
      vmpressure is called synchronously from reclaim where the target_memcg
      is guaranteed to be alive but the eventfd is signaled from the work
      queue context.  This means that memcg (along with vmpressure structure
      which is embedded into it) might go away while the work item is still
      pending, which would result in a use-after-release bug.
      
      There are two possible ways to fix this.  Either vmpressure pins the memcg
      before it schedules vmpr->work and unpins it in vmpressure_work_fn, or the
      work item is explicitly flushed from the css_offline context (as
      suggested by Tejun).
      
      This patch implements the latter and introduces vmpressure_cleanup(),
      which flushes the pending vmpressure work queue item.  It hooks into
      mem_cgroup_css_offline after the memcg itself is cleaned up.
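
      As a rough sketch of that shape (assuming the work item is the vmpr->work
      field mentioned above; this is illustrative, not the exact upstream hunk):

        #include <linux/workqueue.h>

        /* Wait for any queued vmpressure work to finish so it cannot signal
         * an eventfd on a memcg that is about to be released. */
        void vmpressure_cleanup(struct vmpressure *vmpr)
        {
                flush_work(&vmpr->work);
        }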
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Reported-by: Tejun Heo <tj@kernel.org>
      Cc: Anton Vorontsov <anton.vorontsov@linaro.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      33cb876e
    • M
      vmpressure: change vmpressure::sr_lock to spinlock · 22f2020f
      Committed by Michal Hocko
      There is nothing that can sleep inside the critical sections protected by
      this lock and those sections are really small, so it doesn't make much
      sense to use a mutex for them.  Change the lock to a spinlock.
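
      As a hedged illustration of what the conversion amounts to (the helper
      name and counter fields below are assumed for the example):

        /* The critical section only updates a couple of counters and never
         * sleeps, so a spinlock_t (spin_lock_init()ed at setup) is enough. */
        static void vmpressure_account(struct vmpressure *vmpr,
                                       unsigned long scanned,
                                       unsigned long reclaimed)
        {
                spin_lock(&vmpr->sr_lock);      /* was mutex_lock() */
                vmpr->scanned   += scanned;
                vmpr->reclaimed += reclaimed;
                spin_unlock(&vmpr->sr_lock);    /* was mutex_unlock() */
        }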
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Reported-by: Tejun Heo <tj@kernel.org>
      Cc: Anton Vorontsov <anton.vorontsov@linaro.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Reviewed-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      22f2020f
    • E
      mlx5_core: Implement new initialization sequence · cd23b14b
      Committed by Eli Cohen
      Introduce enable_hca and disable_hca commands to signify when the
      driver starts or ceases to operate on the device.
      
      In addition, the driver will use boot and init page counts; boot pages
      are required to allow the firmware to complete boot commands, and init
      pages to complete init_hca.  The command interface revision is bumped
      to 4 to enforce using supported firmware.
      
      This patch breaks compatibility with old versions of firmware (< 4);
      however, the first GA firmware we will publish will support version 4
      so this should not be a problem.
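
      A sketch of the bring-up order this describes, with purely illustrative
      helper names (not the actual mlx5_core symbols):

        /* Illustrative only: enable_hca/disable_hca bracket the driver's use
         * of the device, and each page budget is satisfied before the stage
         * that needs it. */
        static int hca_bringup_sketch(struct mlx5_core_dev *dev)
        {
                int err;

                err = cmd_enable_hca(dev);              /* driver starts operating */
                if (err)
                        return err;

                err = give_pages(dev, BOOT_PAGE_COUNT); /* firmware can finish boot commands */
                if (err)
                        goto out_disable;

                err = give_pages(dev, INIT_PAGE_COUNT); /* ... and then complete INIT_HCA */
                if (!err)
                        err = cmd_init_hca(dev);
                if (err)
                        goto out_disable;

                return 0;

        out_disable:
                cmd_disable_hca(dev);                   /* driver ceases to operate */
                return err;
        }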
      Signed-off-by: Eli Cohen <eli@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      cd23b14b
  4. 30 Jul 2013, 1 commit
    • C
      freezer: set PF_SUSPEND_TASK flag on tasks that call freeze_processes · 2b44c4db
      Committed by Colin Cross
      Calling freeze_processes sets a global flag that will cause any
      process that calls try_to_freeze to enter the refrigerator.  It
      skips sending a signal to the current task, but if the current
      task ever hits try_to_freeze, all threads will be frozen and the
      system will deadlock.
      
      Set a new flag, PF_SUSPEND_TASK, on the task that calls
      freeze_processes.  The flag notifies the freezer that the thread
      is involved in suspend and should not be frozen.  Also add a
      WARN_ON in thaw_processes if the caller does not have the
      PF_SUSPEND_TASK flag set to catch if a different task calls
      thaw_processes than the one that called freeze_processes, leaving
      a task with PF_SUSPEND_TASK permanently set on it.
      
      Threads spawned off by a task with PF_SUSPEND_TASK set (which
      swsusp does) will also have PF_SUSPEND_TASK set, preventing them
      from freezing while they are helping with suspend, but they need
      to be dead by the time suspend is triggered, otherwise they may
      run when userspace is expected to be frozen.  Add a WARN_ON in
      thaw_processes if more than one thread has the PF_SUSPEND_TASK
      flag set.
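
      Skeletons of the two functions, with only the new lines shown (a hedged
      sketch; the real bodies do much more):

        int freeze_processes(void)
        {
                /* mark the suspending task so try_to_freeze() skips it */
                current->flags |= PF_SUSPEND_TASK;
                /* ... existing freezing logic ... */
                return 0;
        }

        void thaw_processes(void)
        {
                struct task_struct *curr = current;

                /* catch a mismatched caller leaving the flag stuck on a task */
                WARN_ON(!(curr->flags & PF_SUSPEND_TASK));
                /* ... existing thawing logic ... */
                curr->flags &= ~PF_SUSPEND_TASK;
        }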
      Reported-and-tested-by: Michael Leun <lkml20130126@newton.leun.net>
      Signed-off-by: Colin Cross <ccross@android.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      2b44c4db
  5. 29 Jul 2013, 1 commit
    • R
      Revert "cpuidle: Quickly notice prediction failure for repeat mode" · 14851912
      Committed by Rafael J. Wysocki
      Revert commit 69a37bea (cpuidle: Quickly notice prediction failure for
      repeat mode), because it has been identified as the source of a
      significant performance regression in v3.8 and later as explained by
      Jeremy Eder:
      
        We believe we've identified a particular commit to the cpuidle code
        that seems to be impacting performance of a variety of workloads.
        The simplest way to reproduce is using netperf TCP_RR test, so
        we're using that, on a pair of Sandy Bridge based servers.  We also
        have data from a large database setup where performance is also
        measurably/positively impacted, though that test data isn't easily
        share-able.
      
        Included below are test results from 3 test kernels:
      
        kernel       reverts
        -----------------------------------------------------------
        1) vanilla   upstream (no reverts)
      
        2) perfteam2 reverts e11538d1
      
        3) test      reverts 69a37bea
                             e11538d1
      
        In summary, netperf TCP_RR numbers improve by approximately 4%
        after reverting 69a37bea.  When
        69a37bea is included, C0 residency
        never seems to get above 40%.  Taking that patch out gets C0 near
        100% quite often, and performance increases.
      
        The below data are histograms representing the %c0 residency @
        1-second sample rates (using turbostat), while under netperf test.
      
        - If you look at the first 4 histograms, you can see %c0 residency
          almost entirely in the 30,40% bin.
        - The last pair, which reverts 69a37bea,
          shows %c0 in the 80,90,100% bins.
      
        Below each kernel name are netperf TCP_RR trans/s numbers for the
        particular kernel that can be disclosed publicly, comparing the 3
        test kernels.  We ran a 4th test with the vanilla kernel where
        we've also set /dev/cpu_dma_latency=0 to show overall impact
        boosting single-threaded TCP_RR performance over 11% above
        baseline.
      
        3.10-rc2 vanilla RX + c0 lock (/dev/cpu_dma_latency=0):
        TCP_RR trans/s 54323.78
      
        -----------------------------------------------------------
        3.10-rc2 vanilla RX (no reverts)
        TCP_RR trans/s 48192.47
      
        Receiver %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    59]: ***********************************************************
           40.0000 -    50.0000 [     1]: *
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        Sender %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    11]: ***********
           40.0000 -    50.0000 [    49]: *************************************************
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        -----------------------------------------------------------
        3.10-rc2 perfteam2 RX (reverts commit e11538d1)
        TCP_RR trans/s 49698.69
      
        Receiver %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     1]: *
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    59]: ***********************************************************
           40.0000 -    50.0000 [     0]:
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        Sender %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [     2]: **
           40.0000 -    50.0000 [    58]: **********************************************************
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        -----------------------------------------------------------
        3.10-rc2 test RX (reverts 69a37bea and e11538d1)
        TCP_RR trans/s 47766.95
      
        Receiver %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     1]: *
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    27]: ***************************
           40.0000 -    50.0000 [     2]: **
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     2]: **
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [    28]: ****************************
      
        Sender:
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    11]: ***********
           40.0000 -    50.0000 [     0]:
           50.0000 -    60.0000 [     1]: *
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     3]: ***
           80.0000 -    90.0000 [     7]: *******
           90.0000 -   100.0000 [    38]: **************************************
      
        These results demonstrate gaining back the tendency of the CPU to
        stay in more responsive, performant C-states (and thus yield
        measurably better performance), by reverting commit
        69a37bea.
      Requested-by: Jeremy Eder <jeder@redhat.com>
      Tested-by: Len Brown <len.brown@intel.com>
      Cc: 3.8+ <stable@vger.kernel.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      14851912
  6. 28 Jul 2013, 1 commit
  7. 26 Jul 2013, 1 commit
  8. 25 Jul 2013, 1 commit
  9. 24 Jul 2013, 3 commits
    • H
      Revert "crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework" · e70308ec
      Committed by Herbert Xu
      This reverts commits
          67822649
          39761214
          0b95a7f8
          31d93962
          2d31e518
      
      Unfortunately this change broke boot on some systems that used an
      initrd which does not include the newly created crct10dif modules.
      As these modules are required by sd_mod under certain configurations
      this is a serious problem.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      e70308ec
    • B
      EDAC: Fix lockdep splat · 88d84ac9
      Committed by Borislav Petkov
      Fix the following:
      
      BUG: key ffff88043bdd0330 not in .data!
      ------------[ cut here ]------------
      WARNING: at kernel/lockdep.c:2987 lockdep_init_map+0x565/0x5a0()
      DEBUG_LOCKS_WARN_ON(1)
      Modules linked in: glue_helper sb_edac(+) edac_core snd acpi_cpufreq lrw gf128mul ablk_helper iTCO_wdt evdev i2c_i801 dcdbas button cryptd pcspkr iTCO_vendor_support usb_common lpc_ich mfd_core soundcore mperf processor microcode
      CPU: 2 PID: 599 Comm: modprobe Not tainted 3.10.0 #1
      Hardware name: Dell Inc. Precision T3600/0PTTT9, BIOS A08 01/24/2013
       0000000000000009 ffff880439a1d920 ffffffff8160a9a9 ffff880439a1d958
       ffffffff8103d9e0 ffff88043af4a510 ffffffff81a16e11 0000000000000000
       ffff88043bdd0330 0000000000000000 ffff880439a1d9b8 ffffffff8103dacc
      Call Trace:
        dump_stack
        warn_slowpath_common
        warn_slowpath_fmt
        lockdep_init_map
        ? trace_hardirqs_on_caller
        ? trace_hardirqs_on
        debug_mutex_init
        __mutex_init
        bus_register
        edac_create_sysfs_mci_device
        edac_mc_add_mc
        sbridge_probe
        pci_device_probe
        driver_probe_device
        __driver_attach
        ? driver_probe_device
        bus_for_each_dev
        driver_attach
        bus_add_driver
        driver_register
        __pci_register_driver
        ? 0xffffffffa0010fff
        sbridge_init
        ? 0xffffffffa0010fff
        do_one_initcall
        load_module
        ? unset_module_init_ro_nx
        SyS_init_module
        tracesys
      ---[ end trace d24a70b0d3ddf733 ]---
      EDAC MC0: Giving out device to 'sbridge_edac.c' 'Sandy Bridge Socket#0': DEV 0000:3f:0e.0
      EDAC sbridge: Driver loaded.
      
      What happens is that bus_register needs a statically allocated lock_key
      because that key is handed to lockdep.  However, struct mem_ctl_info
      embeds struct bus_type (the whole struct, not a pointer to it) and the
      whole thing gets dynamically allocated.
      
      Fix this by using a statically allocated struct bus_type for the MC bus.
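
      A hedged sketch of that approach (the array bound and helper name are
      assumed here, not the exact EDAC code):

        #define EDAC_MAX_MCS 16                       /* assumed upper bound */
        static struct bus_type mc_bus[EDAC_MAX_MCS];  /* static => static lock_class_key */

        static int register_mc_bus_sketch(struct mem_ctl_info *mci, const char *name)
        {
                /* mem_ctl_info now refers to a static bus_type instead of
                 * embedding one in its kmalloc'ed allocation (layout assumed) */
                mci->bus = &mc_bus[mci->mc_idx];
                mci->bus->name = name;
                return bus_register(mci->bus);
        }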
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Mauro Carvalho Chehab <mchehab@infradead.org>
      Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
      Cc: stable@kernel.org # v3.10
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      88d84ac9
    • A
      ARM: pxa: propagate errors from regulator_enable() to pxamci · a829abf8
      Committed by Arnd Bergmann
      The em_x270_mci_setpower() and em_x270_usb_hub_init() functions
      call regulator_enable(), which may return an error that must
      be checked.
      
      This changes the em_x270_usb_hub_init() function to bail out
      if it fails, and changes the pxamci_platform_data->setpower
      callback so that a failed em_x270_mci_setpower() call
      can be propagated by the pxamci driver into the mmc core.
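
      For illustration, the setpower path might end up looking roughly like
      this (the regulator handle name and error-handling details are assumed):

        #include <linux/regulator/consumer.h>

        static struct regulator *em_x270_sdio_ldo;    /* assumed supply handle */

        static int em_x270_mci_setpower(struct device *dev, unsigned int vdd)
        {
                int err;

                if (vdd) {
                        /* may fail; the error now reaches pxamci and the mmc core */
                        err = regulator_enable(em_x270_sdio_ldo);
                        if (err)
                                return err;
                } else {
                        regulator_disable(em_x270_sdio_ldo);
                }
                return 0;
        }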
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Mike Rapoport <mike@compulab.co.il>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Haojian Zhuang <haojian.zhuang@gmail.com>
      Acked-by: Chris Ball <cjb@laptop.org>
      [olof: fixed order of regulator_enable() and test in em_x270_usb_hub_init]
      Signed-off-by: Olof Johansson <olof@lixom.net>
      a829abf8
  10. 23 Jul 2013, 1 commit
  11. 20 Jul 2013, 1 commit
  12. 19 Jul 2013, 2 commits
    • A
      ssb: fix alignment of struct bcma_device_id · b01a60be
      Committed by Arnd Bergmann
      The ARM OABI and EABI disagree on the alignment of structures
      with small members, so module init tools may interpret the
      ssb device table incorrectly, as shown by this warning when
      building the b43 device driver in an OABI kernel:
      
      FATAL: drivers/net/wireless/b43/b43: sizeof(struct ssb_device_id)=6 is
      not a modulo of the size of section __mod_ssb_device_table=88.
      
      Forcing the default (EABI) alignment on the structure makes this
      problem go away. Since the ssb_device_id may have the same problem,
      better fix both structures.
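
      The kind of annotation involved, as a hedged sketch (the exact attribute
      and padding chosen upstream may differ):

        /* Pin the layout so OABI-built module tools and the EABI kernel agree
         * on sizeof(struct ssb_device_id). */
        struct ssb_device_id {
                __u16   vendor;
                __u16   coreid;
                __u8    revision;
                __u8    __pad;
        } __attribute__((packed, aligned(2)));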
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: John W. Linville <linville@tuxdriver.com>
      Cc: Michael Buesch <mb@bu3sch.de>
      Cc: Larry Finger <Larry.Finger@lwfinger.net>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      b01a60be
    • E
      vlan: mask vlan prio bits · d4b812de
      Committed by Eric Dumazet
      In commit 48cc32d3
      ("vlan: don't deliver frames for unknown vlans to protocols")
      Florian made sure we set pkt_type to PACKET_OTHERHOST
      if the vlan id is set and we could not find a vlan device for this
      particular id.
      
      But we also have a problem if prio bits are set.
      
      Steinar reported an issue on a router receiving IPv6 frames with a
      vlan tag of 4000 (id 0, prio 2), and tunneled into a sit device,
      because skb->vlan_tci is set.
      
      The forwarded frame is completely corrupted: we can see (8100:4000)
      being inserted in the middle of the IPv6 source address:
      
      16:48:00.780413 IP6 2001:16d8:8100:4000:ee1c:0:9d9:bc87 >
      9f94:4d95:2001:67c:29f4::: ICMP6, unknown icmp6 type (0), length 64
             0x0000:  0000 0029 8000 c7c3 7103 0001 a0ae e651
             0x0010:  0000 0000 ccce 0b00 0000 0000 1011 1213
             0x0020:  1415 1617 1819 1a1b 1c1d 1e1f 2021 2223
             0x0030:  2425 2627 2829 2a2b 2c2d 2e2f 3031 3233
      
      It seems we are not really ready to properly cope with this right now.
      
      We can probably do better in future kernels:
      vlan_get_ingress_priority() should be a netdev property instead of
      a per-vlan_dev one.
      
      For stable kernels, let's clear vlan_tci to fix the bugs.
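
      The idea of the stable fix, as a minimal sketch (its placement in the
      receive path is simplified here):

        static void drop_stale_vlan_tag(struct sk_buff *skb)
        {
                /* clear vid and prio bits once the tag has been dealt with, so
                 * they cannot be re-inserted by a later xmit path such as sit */
                skb->vlan_tci = 0;
        }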
      Reported-by: Steinar H. Gunderson <sesse@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d4b812de
  13. 18 Jul 2013, 1 commit
    • R
      ACPI / video / i915: No ACPI backlight if firmware expects Windows 8 · 8c5bd7ad
      Committed by Rafael J. Wysocki
      According to Matthew Garrett, "Windows 8 leaves backlight control up
      to individual graphics drivers rather than making ACPI calls itself.
      There's plenty of evidence to suggest that the Intel driver for
      Windows [8] doesn't use the ACPI interface, including the fact that
      it's broken on a bunch of machines when the OS claims to support
      Windows 8.  The simplest thing to do appears to be to disable the
      ACPI backlight interface on these systems".
      
      There's a problem with that approach, however, because simply
      not registering the ACPI backlight interface when the firmware
      calls _OSI for Windows 8 may not work in the following situations:
       (1) The ACPI backlight interface actually works on the given system
           and the i915 driver is not loaded (e.g. another graphics driver
           is used).
       (2) The ACPI backlight interface doesn't work on the given system,
           but there is a vendor platform driver that will register its
           own, equally broken, backlight interface if not prevented from
           doing so by the ACPI subsystem.
      Therefore we need to allow the ACPI backlight interface to be
      registered until the i915 driver is loaded which then will unregister
      it if the firmware has called _OSI for Windows 8 (or will register
      the ACPI video driver without backlight support if not already
      present).
      
      For this reason, introduce an alternative function for registering
      ACPI video, acpi_video_register_with_quirks(), that will check
      whether or not the ACPI video driver has already been registered
      and whether or not the backlight Windows 8 quirk has to be applied.
      If the quirk has to be applied, it will block the ACPI backlight
      support and either unregister the backlight interface if the ACPI
      video driver has already been registered, or register the ACPI
      video driver without the backlight interface otherwise.  Make
      the i915 driver use acpi_video_register_with_quirks() instead of
      acpi_video_register() in i915_driver_load().
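
      In i915 terms the change boils down to something like this (shown
      schematically; the exact argument list of the new helper is not spelled
      out in this log):

        /* in i915_driver_load(): */
        -       acpi_video_register();
        +       acpi_video_register_with_quirks();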
      
      This change is based on earlier patches from Matthew Garrett,
      Chun-Yi Lee and Seth Forshee, and includes a fix from Aaron Lu.
      
      References: https://bugzilla.kernel.org/show_bug.cgi?id=51231
      Tested-by: Aaron Lu <aaron.lu@intel.com>
      Tested-by: Igor Gnatenko <i.gnatenko.brain@gmail.com>
      Tested-by: Yves-Alexis Perez <corsac@debian.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Aaron Lu <aaron.lu@intel.com>
      Acked-by: Matthew Garrett <matthew.garrett@nebula.com>
      8c5bd7ad
  14. 17 Jul 2013, 11 commits
  15. 15 Jul 2013, 3 commits
  16. 14 Jul 2013, 1 commit
  17. 13 Jul 2013, 4 commits
    • O
      llist: llist_add() can use llist_add_batch() · e9a17bd7
      Committed by Oleg Nesterov
      llist_add(new, head) can simply use llist_add_batch(new, new, head),
      no need to duplicate the code.
      
      This obviously uninlines llist_add() and to me this is a win. But we
      can make llist_add_batch() inline if this is desirable, in which case
      gcc can notice that new_first == new_last if the caller is llist_add().
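
      The resulting wrapper is essentially a one-liner (a sketch of the shape
      described above):

        static inline bool llist_add(struct llist_node *new, struct llist_head *head)
        {
                return llist_add_batch(new, new, head);
        }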
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andrey Vagin <avagin@openvz.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      e9a17bd7
    • O
      llist: fix/simplify llist_add() and llist_add_batch() · fb4214db
      Committed by Oleg Nesterov
      1. This is mostly theoretical, but llist_add*() need ACCESS_ONCE().
      
         Otherwise it is not guaranteed that the first cmpxchg() uses the
         same value for old_entry and new_last->next.
      
      2. These helpers cache the result of cmpxchg() and read the initial
         value of head->first before the main loop. I do not think this
         makes sense. In the likely case cmpxchg() succeeds, otherwise
         it doesn't hurt to reload head->first.
      
         I think it would be better to simplify the code and simply read
         ->first before cmpxchg().
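
      Putting both points together, the batch helper ends up shaped roughly
      like this (a sketch consistent with the description above):

        bool llist_add_batch(struct llist_node *new_first, struct llist_node *new_last,
                             struct llist_head *head)
        {
                struct llist_node *first;

                do {
                        /* point 1: sample head->first exactly once per attempt */
                        new_last->next = first = ACCESS_ONCE(head->first);
                } while (cmpxchg(&head->first, first, new_first) != first);

                /* true if the list was empty before this add */
                return !first;
        }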
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andrey Vagin <avagin@openvz.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      fb4214db
    • O
      fput: turn "list_head delayed_fput_list" into llist_head · 4f5e65a1
      Committed by Oleg Nesterov
      fput() and delayed_fput() can use llist and avoid the locking.
      
      This is an unlikely path; it is not that this change can improve
      performance, but this way the code looks simpler.
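
      A hedged sketch of the consumer side with llist (the struct file member
      name and the exact work-scheduling call are assumptions):

        static LLIST_HEAD(delayed_fput_list);

        static void delayed_fput(struct work_struct *unused)
        {
                /* detach the whole pending list atomically, no spinlock needed */
                struct llist_node *node = llist_del_all(&delayed_fput_list);
                struct llist_node *next;

                for (; node; node = next) {
                        next = llist_next(node);
                        __fput(llist_entry(node, struct file, f_u.fu_llist));
                }
        }

        /* producer side (in fput()), schematically:
         *      if (llist_add(&file->f_u.fu_llist, &delayed_fput_list))
         *              schedule the delayed_fput work item;
         */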
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Suggested-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andrey Vagin <avagin@openvz.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      4f5e65a1
    • T
      cgroup: replace task_cgroup_path_from_hierarchy() with task_cgroup_path() · 913ffdb5
      Committed by Tejun Heo
      task_cgroup_path_from_hierarchy() was added for the planned new users
      and none of the currently planned users wants to know about multiple
      hierarchies.  This patch drops the multiple hierarchy part and makes
      it always return the path in the first non-dummy hierarchy.
      
      As unified hierarchy will always have id 1, this is guaranteed to
      return the path for the unified hierarchy if mounted; otherwise, it
      will return the path from the hierarchy which happens to occupy the
      lowest hierarchy id, which will usually be the first hierarchy mounted
      after boot.
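
      A hedged usage sketch (the buffer handling and the 0-on-success return
      convention are assumptions here):

        static void log_cgroup_path_sketch(struct task_struct *task)
        {
                char buf[PATH_MAX];

                /* fills buf with the path in the lowest-id non-dummy hierarchy */
                if (!task_cgroup_path(task, buf, sizeof(buf)))
                        pr_debug("cgroup path: %s\n", buf);
        }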
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      Cc: Lennart Poettering <lennart@poettering.net>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Cc: Jan Kaluža <jkaluza@redhat.com>
      913ffdb5
  18. 12 Jul 2013, 2 commits
  19. 11 Jul 2013, 1 commit