1. 06 Jun, 2018 (1 commit)
    • PM / Domains: Add support for multi PM domains per device to genpd · 3c095f32
      Ulf Hansson authored
      To support devices being partitioned across multiple PM domains, let's
      begin by extending genpd to cope with this kind of configuration.
      
      Therefore, add a new exported function genpd_dev_pm_attach_by_id(), which
      is similar to the existing genpd_dev_pm_attach(), but with the difference
      that it allows its callers to provide an index identifying the PM domain
      to attach to.
      
      Note that genpd_dev_pm_attach_by_id() shall only be called by the driver
      core / PM core, similar to how the existing dev_pm_domain_attach() makes
      use of genpd_dev_pm_attach(). However, this is implemented by subsequent
      changes on top.
      
      Because only one PM domain can be attached per device, genpd needs to
      create a virtual device that it can attach/detach instead. More precisely,
      let the new function genpd_dev_pm_attach_by_id() register a virtual struct
      device by calling device_register(). Then let it attach this device to the
      corresponding PM domain, rather than the one provided by the caller. The
      actual attaching is done by re-using the existing genpd OF functions.
      
      On successful attachment, genpd_dev_pm_attach_by_id() returns the created
      virtual device, which allows the caller to operate on it to deal with
      power management. Subsequent changes on top provide more details in this
      regard.
      
      To deal with detaching of a PM domain in the multiple PM domains case,
      let's also extend the existing genpd_dev_pm_detach() function to cover the
      cleanup of the created virtual device, by making it call
      device_unregister() on it. In this way, there is no need to introduce a
      new function to deal with detach for the multiple PM domains case;
      instead the existing one is re-used.
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      Acked-by: Jon Hunter <jonathanh@nvidia.com>
      Tested-by: Jon Hunter <jonathanh@nvidia.com>
      Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  2. 05 Jun, 2018 (1 commit)
    • swait: strengthen language to discourage use · c5e7a7ea
      Linus Torvalds authored
      We already earlier discouraged people from using this interface in
      commit 88796e7e ("sched/swait: Document it clearly that the swait
      facilities are special and shouldn't be used"), but I just got a pull
      request with a new broken user.
      
      So make the comment *really* clear.
      
      The swait interfaces are bad, and should not be used unless you have
      some *very* strong reasons that include tons of hard performance numbers
      on just why you want to use them, and you show that you actually
      understand that they aren't at all like the normal wait/wakeup
      interfaces.
      
      So far, every single user has been suspect.  The main user is KVM, which
      is completely pointless (there is only ever one waiter, which avoids the
      interface subtleties, but also means that having a queue instead of a
      pointer is counter-productive and certainly not an "optimization").
      
      So make the comments much stronger.
      
      Not that anybody likely reads them anyway, but there's always some
      slight hope that it will cause somebody to think twice.
      
      I'd like to remove this interface entirely, but there is the theoretical
      possibility that it's actually the right thing to use in some situation,
      most likely some deep RT use.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 03 Jun, 2018 (1 commit)
    • block: don't use blocking queue entered for recursive bio submits · cd4a4ae4
      Jens Axboe authored
      If we end up splitting a bio and the queue goes away between
      the initial submission and the later split submission, then we
      can block forever in blk_queue_enter() waiting for the reference
      to drop to zero. This will never happen, since we already hold
      a reference.
      
      Mark a split bio as already having entered the queue, so we can
      just use the live non-blocking queue enter variant.
      
      Thanks to Tetsuo Handa for the analysis.
      
      Reported-by: syzbot+c4f9cebf9d651f6e54de@syzkaller.appspotmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  4. 01 Jun, 2018 (5 commits)
  5. 31 May, 2018 (6 commits)
  6. 30 May, 2018 (5 commits)
  7. 29 May, 2018 (8 commits)
  8. 28 May, 2018 (2 commits)
  9. 27 May, 2018 (2 commits)
    • PM / runtime: Fixup reference counting of device link suppliers at probe · 1e837861
      Ulf Hansson authored
      In the driver core, before it invokes really_probe(), it runtime resumes
      the suppliers for the device by calling pm_runtime_get_suppliers(), which
      also increases the runtime PM usage count for each of the available
      suppliers.
      
      This makes sense, as it allows the consumer device to be probed by its
      driver. However, if the driver decides to add a new supplier link during
      ->probe(), hence updating the list of suppliers, we get into trouble in
      the following call to pm_runtime_put_suppliers(), invoked after
      really_probe() in the driver core.
      
      More precisely, pm_runtime_put() gets called also for the new supplier(s),
      which is wrong, as the driver core didn't trigger pm_runtime_get_sync() to
      be called for them in the first place. In other words, the new supplier
      may be runtime suspended even in cases when it shouldn't be.
      
      Fix this behaviour by runtime resuming suppliers according to the same
      conditions as managed by the runtime PM core when runtime resume callbacks
      are being invoked.
      
      Additionally, don't try to runtime suspend any of the suppliers after
      really_probe(); instead rely on that happening via the consumer device,
      when it becomes runtime suspended.
      
      Fixes: 21d5c57b (PM / runtime: Use device links)
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • PM / suspend: Prevent might sleep splats · c1a957d1
      Thomas Gleixner authored
      Timekeeping suspend/resume calls read_persistent_clock(), which takes
      rtc_lock. That results in might-sleep warnings because at that point
      we run with interrupts disabled.
      
      We cannot convert rtc_lock to a raw spinlock, as that would trigger
      other might-sleep warnings.
      
      As a workaround, we disable the might-sleep warnings by setting
      system_state to SYSTEM_SUSPEND before calling sysdev_suspend() and
      restoring it to SYSTEM_RUNNING after sysdev_resume(). There is no lock
      contention because hibernate / suspend to RAM is single-CPU at this
      point.
      
      In s2idle's case, system_state is set to SYSTEM_SUSPEND before
      timekeeping_suspend(), which is invoked by the last CPU. In the resume
      case it is set back to SYSTEM_RUNNING after timekeeping_resume(), which
      is invoked by the first CPU. The other CPUs will block on
      tick_freeze_lock.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      [bigeasy: cover s2idle in tick_freeze() / tick_unfreeze()]
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  10. 26 May, 2018 (9 commits)