1. 26 April 2019 (4 commits)
  2. 05 April 2019 (7 commits)
    •
      drivers: base: power: add proper SPDX identifiers on files that did not have them. · 5de363b6
      Greg Kroah-Hartman committed
      There were a few files in the driver core power code that did not have
      SPDX identifiers on them, so fix that up.  At the same time, remove the
      "free form" text that specified the license of the file, as that is
      impossible for any tool to properly parse.
      
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    •
      drivers: base: firmware_loader: add proper SPDX identifiers on files that did not have them. · 50f86aed
      Greg Kroah-Hartman committed
      There were two files in the firmware_loader code that did not have SPDX
      identifiers on them, so fix that up.
      
      Cc: Luis Chamberlain <mcgrof@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    •
      drivers: base: test: add proper SPDX identifier to Makefile · 47bcc18c
      Greg Kroah-Hartman committed
      The Makefile in the drivers/base/test/ directory did not have an SPDX
      identifier on it, so fix that up.
      
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    •
      arch_topology: Make cpu_capacity sysfs node as read-only · 5d777b18
      Lingutla Chandrasekhar committed
      If a user updates any CPU's cpu_capacity, the new value is applied to
      all of its online sibling CPUs. This is not always correct, because
      sibling CPUs (on ARM, CPUs of the same micro-architecture) may have
      different cpu_capacity values with different performance
      characteristics. So applying the user-supplied cpu_capacity to all
      CPU siblings is not correct.
      
      Another problem is that the current code assumes that all CPUs in a
      cluster, or with the same package_id (core_siblings), have the same
      cpu_capacity. But since commit 5bdd2b3f ("arm64: topology: add
      support to remove cpu topology sibling masks"), when a CPU is
      hotplugged out, its information is cleared from its sibling CPUs. So
      a user-supplied cpu_capacity would be applied only to the CPUs that
      are online siblings at the time. If a CPU is hotplugged back in
      afterwards, it ends up with a different cpu_capacity than its
      siblings, which breaks the above assumption.
      
      So, instead of mucking around with the core sibling mask for the
      user-supplied value, use the device tree to set CPU capacity, and
      make the cpu_capacity node read-only so that the asymmetry between
      CPUs in the system can still be inspected. While at it, remove the
      cpu_scale_mutex, which was only used for sysfs write protection.
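      With the sysfs node read-only, per-CPU capacity is expected to come
      from the device tree via the standard `capacity-dmips-mhz` property.
      A minimal, illustrative sketch for an asymmetric big.LITTLE-style
      pair follows; the node names, compatibles, and capacity values are
      hypothetical examples, not taken from this log:

      ```dts
      cpus {
              #address-cells = <1>;
              #size-cells = <0>;

              /* "little" core: lower relative capacity */
              cpu@0 {
                      device_type = "cpu";
                      compatible = "arm,cortex-a53";
                      reg = <0x0>;
                      capacity-dmips-mhz = <592>;
              };

              /* "big" core: reference (highest) capacity */
              cpu@100 {
                      device_type = "cpu";
                      compatible = "arm,cortex-a72";
                      reg = <0x100>;
                      capacity-dmips-mhz = <1024>;
              };
      };
      ```

      The kernel normalizes these values so the largest becomes the scale
      reference, which is what the read-only cpu_capacity node then reports.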
      Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Tested-by: Quentin Perret <quentin.perret@arm.com>
      Reviewed-by: Quentin Perret <quentin.perret@arm.com>
      Acked-by: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    •
      node: Add memory-side caching attributes · acc02a10
      Keith Busch committed
      System memory may have caches to help improve access speed to frequently
      requested address ranges. While the system provided cache is transparent
      to the software accessing these memory ranges, applications can optimize
      their own access based on cache attributes.
      
      Provide a new API for the kernel to register these memory-side caches
      under the memory node that provides it.
      
      The new sysfs representation is modeled from the existing cpu cacheinfo
      attributes, as seen from /sys/devices/system/cpu/<cpu>/cache/.  Unlike CPU
      cacheinfo though, the node cache level is reported from the view of the
      memory. A higher level number is nearer to the CPU, while lower levels
      are closer to the last level memory.
      
      The exported attributes are the cache size, the line size, the
      associativity indexing, and the write-back policy. Descriptions of
      the system memory cache attributes are added to the stable sysfs
      documentation.
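      As a sketch of how these attributes might be consumed from a script:
      the helper below takes the cache directory as a parameter, so the
      exact sysfs path (shown in the usage comment, assumed from the
      description above rather than confirmed by this log) is not baked in:

      ```shell
      # print_cache_attrs DIR: print each memory-side cache attribute
      # present in DIR, in a fixed order. Attribute names are the four
      # listed in the commit message: size, line size, associativity
      # indexing, and write-back policy.
      print_cache_attrs() {
          dir=$1
          for attr in size line_size indexing write_policy; do
              if [ -r "$dir/$attr" ]; then
                  printf '%s: %s\n' "$attr" "$(cat "$dir/$attr")"
              fi
          done
      }

      # On a real system one would point this at a node's cache directory,
      # e.g. (path assumed):
      #   print_cache_attrs /sys/devices/system/node/node0/memory_side_cache/index1
      ```

      Missing attributes are simply skipped, so the same helper works on
      systems that export only a subset of the fields.
      
      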
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Brice Goglin <Brice.Goglin@inria.fr>
      Tested-by: Brice Goglin <Brice.Goglin@inria.fr>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    •
      node: Add heterogeneous memory access attributes · e1cf33aa
      Keith Busch committed
      Heterogeneous memory systems provide memory nodes with different latency
      and bandwidth performance attributes. Provide a new kernel interface
      for subsystems to register the attributes under the memory target
      node's initiator access class. If the system provides this information,
      applications may query these attributes when deciding which node to
      request memory from.
      
      The following example shows the new sysfs hierarchy for a node exporting
      performance attributes:
      
        # tree -P "read*|write*" /sys/devices/system/node/nodeY/accessZ/initiators/
        /sys/devices/system/node/nodeY/accessZ/initiators/
        |-- read_bandwidth
        |-- read_latency
        |-- write_bandwidth
        `-- write_latency
      
      The bandwidth is exported as MB/s and latency is reported in
      nanoseconds. The values are taken from the platform as reported by the
      manufacturer.
      
      Memory accesses from an initiator node that is not one of the memory's
      access "Z" initiator nodes linked in the same directory may observe
      different performance than reported here. When a subsystem makes use
      of this interface, initiators of a different access number may not
      have the same performance relative to initiators in other access
      numbers, or may be omitted from any access class's initiators
      entirely.
      
      Descriptions for memory access initiator performance access attributes
      are added to sysfs stable documentation.
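      A consumer-side sketch of these attributes: given a set of candidate
      access-class initiator directories, pick the one advertising the
      lowest read_latency. Only the four attribute names come from the
      hierarchy above; the comparison policy and paths are illustrative:

      ```shell
      # lowest_read_latency DIR...: print the directory with the smallest
      # read_latency (in nanoseconds) among the given initiator
      # directories. Directories without a readable read_latency file are
      # skipped; nothing is printed if no candidate qualifies.
      lowest_read_latency() {
          best= best_lat=
          for dir in "$@"; do
              [ -r "$dir/read_latency" ] || continue
              lat=$(cat "$dir/read_latency")
              if [ -z "$best" ] || [ "$lat" -lt "$best_lat" ]; then
                  best=$dir best_lat=$lat
              fi
          done
          [ -n "$best" ] && printf '%s\n' "$best"
      }

      # On a real system (paths assumed from the example above):
      #   lowest_read_latency /sys/devices/system/node/node*/access0/initiators
      ```

      The same shape works for read_bandwidth or the write attributes by
      swapping the file name and the comparison direction.
      
      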
      Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Tested-by: Brice Goglin <Brice.Goglin@inria.fr>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    •
      node: Link memory nodes to their compute nodes · 08d9dbe7
      Keith Busch committed
      Systems may be constructed with various specialized nodes. Some nodes
      may provide memory, some provide compute devices that access and use
      that memory, and others may provide both. Nodes that provide memory are
      referred to as memory targets, and nodes that can initiate memory access
      are referred to as memory initiators.
      
      Memory targets will often have varying access characteristics from
      different initiators, and platforms may have ways to express those
      relationships. In preparation for such systems, provide interfaces
      for the kernel to export the memory relationships among nodes,
      linking memory targets and their initiators with symlinks to each
      other.
      
      If a system provides access locality for each initiator-target pair, nodes
      may be grouped into ranked access classes relative to other nodes. The
      new interface allows a subsystem to register relationships of varying
      classes if available and desired to be exported.
      
      A memory initiator may have multiple memory targets in the same
      access class. The target memory's initiators in a given class
      indicate that those nodes' access characteristics share the same
      performance relative to other linked initiator nodes. The targets
      within an initiator's access class, though, do not necessarily
      perform the same as each other.
      
      A memory target node may have multiple memory initiators. All linked
      initiators in a target's class have the same access characteristics to
      that target.
      
      The following example shows the nodes' new sysfs hierarchy for a
      memory target node 'Y' with access class 0 from initiator node 'X':
      
        # symlinks -v /sys/devices/system/node/nodeX/access0/
        relative: /sys/devices/system/node/nodeX/access0/targets/nodeY -> ../../nodeY
      
        # symlinks -v /sys/devices/system/node/nodeY/access0/
        relative: /sys/devices/system/node/nodeY/access0/initiators/nodeX -> ../../nodeX
      
      The new attributes are added to the sysfs stable documentation.
      Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Tested-by: Brice Goglin <Brice.Goglin@inria.fr>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  3. 01 April 2019 (1 commit)
    •
      driver: base: Disable CONFIG_UEVENT_HELPER by default · 1be01d4a
      Geert Uytterhoeven committed
      Since commit 7934779a ("Driver-Core: disable /sbin/hotplug by
      default"), the help text for the /sbin/hotplug fork-bomb says
      "This should not be used today [...] creates a high system load, or
      [...] out-of-memory situations during bootup".  The rationale for this
      was that no recent mainstream system used this anymore (in 2010!).
      
      A few years later, the complete uevent helper support was made
      optional in commit 86d56134 ("kobject: Make support for
      uevent_helper optional.").  However, it was still left enabled by
      default, to support ancient userland.
      
      Time passed by, and nothing should use this anymore, so it can be
      disabled by default.
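      Whether a helper is still configured on a running system can be
      checked via the /sys/kernel/uevent_helper knob (present when
      CONFIG_UEVENT_HELPER=y; an empty value means no helper is forked).
      A small sketch, with the file path as a parameter so it does not
      depend on any particular system:

      ```shell
      # uevent_helper_status [FILE]: report whether a uevent helper is
      # configured. FILE defaults to the real sysfs knob.
      uevent_helper_status() {
          file=${1:-/sys/kernel/uevent_helper}
          if [ ! -r "$file" ]; then
              echo "uevent helper support not compiled in"
          elif [ -s "$file" ]; then
              # non-empty file: a helper path is still set
              echo "helper configured: $(cat "$file")"
          else
              echo "no uevent helper configured"
          fi
      }
      ```

      With CONFIG_UEVENT_HELPER now off by default, new kernels are
      expected to fall into the first branch unless explicitly configured
      otherwise.
      
      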
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  4. 20 March 2019 (1 commit)
    •
      PM / Domains: Avoid a potential deadlock · 2071ac98
      Jiada Wang committed
      Lockdep warns that prepare_lock and genpd->mlock can cause a
      deadlock. The deadlock scenario is as follows.
      The first thread is probing cs2000:
      cs2000_probe()
        clk_register()
          __clk_core_init()
            clk_prepare_lock()                            ----> acquires prepare_lock
              cs2000_recalc_rate()
                i2c_smbus_read_byte_data()
                  rcar_i2c_master_xfer()
                    dma_request_chan()
                      rcar_dmac_of_xlate()
                        rcar_dmac_alloc_chan_resources()
                          pm_runtime_get_sync()
                            __pm_runtime_resume()
                              rpm_resume()
                                rpm_callback()
                                  genpd_runtime_resume()   ----> acquires genpd->mlock
      
      The second thread is attaching a device to the same PM domain:
      genpd_add_device()
        genpd_lock()                                       ----> acquires genpd->mlock
          cpg_mssr_attach_dev()
            of_clk_get_from_provider()
              __of_clk_get_from_provider()
                __clk_create_clk()
                  clk_prepare_lock()                       ----> acquires prepare_lock
      
      Since currently no PM provider accesses genpd's critical section in
      its .attach_dev and .detach_dev callbacks, there is no need to
      protect these two callbacks with genpd->mlock.
      This patch avoids the potential deadlock by moving the .attach_dev
      and .detach_dev calls out from under genpd->mlock, so that
      genpd->mlock is not held when prepare_lock is acquired in
      .attach_dev and .detach_dev.
      Signed-off-by: Jiada Wang <jiada_wang@mentor.com>
      Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
      Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  5. 19 March 2019 (1 commit)
  6. 12 March 2019 (2 commits)
    •
      PM / wakeup: Drop wakeup_source_drop() · 623217a0
      Rafael J. Wysocki committed
      After commit d856f39ac1cc ("PM / wakeup: Rework wakeup source timer
      cancellation"), wakeup_source_drop() is a trivial wrapper around
      __pm_relax(), and it has no users except for wakeup_source_destroy()
      and wakeup_source_trash(), the latter of which also has no users.
      So drop wakeup_source_drop() along with wakeup_source_trash() and
      make wakeup_source_destroy() call __pm_relax() directly.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
    •
      PM / wakeup: Rework wakeup source timer cancellation · 1fad17fb
      Viresh Kumar committed
      If wakeup_source_add() is called right after wakeup_source_remove()
      for the same wakeup source, timer_setup() may be called for a
      potentially scheduled timer which is incorrect.
      
      To avoid that, move the wakeup source timer cancellation from
      wakeup_source_drop() to wakeup_source_remove().
      
      Moreover, make wakeup_source_remove() clear the timer function after
      canceling the timer to let wakeup_source_not_registered() treat
      unregistered wakeup sources in the same way as the ones that have
      never been registered.
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: 4.4+ <stable@vger.kernel.org> # 4.4+
      [ rjw: Subject, changelog, merged two patches together ]
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  7. 11 March 2019 (4 commits)
  8. 07 March 2019 (3 commits)
  9. 02 March 2019 (1 commit)
  10. 01 March 2019 (1 commit)
    •
      device-dax: "Hotplug" persistent memory for use like normal RAM · c221c0b0
      Dave Hansen committed
      This is intended for use with NVDIMMs that are physically persistent
      (physically like flash) so that they can be used as a cost-effective
      RAM replacement.  Intel Optane DC persistent memory is one
      implementation of this kind of NVDIMM.
      
      Currently, a persistent memory region is "owned" by a device driver,
      either the "Direct DAX" or "Filesystem DAX" drivers.  These drivers
      allow applications to explicitly use persistent memory, generally
      by being modified to use special, new libraries. (DIMM-based
      persistent memory hardware/software is described in great detail
      here: Documentation/nvdimm/nvdimm.txt).
      
      However, this limits persistent memory use to applications which
      *have* been modified.  To make it more broadly usable, this driver
      "hotplugs" memory into the kernel, to be managed and used just like
      normal RAM would be.
      
      To make this work, management software must remove the device from
      being controlled by the "Device DAX" infrastructure:
      
      	echo dax0.0 > /sys/bus/dax/drivers/device_dax/unbind
      
      and then tell the new driver that it can bind to the device:
      
      	echo dax0.0 > /sys/bus/dax/drivers/kmem/new_id
      
      After this, there will be a number of new memory sections visible
      in sysfs that can be onlined, or that may get onlined by existing
      udev-initiated memory hotplug rules.
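      The onlining step can be scripted against the standard memory
      hotplug sysfs interface (/sys/devices/system/memory/memoryN/state).
      The helper below takes the sysfs root as a parameter, so this sketch
      can be exercised without touching a live system:

      ```shell
      # online_all_memory [ROOT]: bring every offline memory block under
      # ROOT online by writing "online" to its state file, and echo each
      # block that was changed. ROOT defaults to the real sysfs location.
      online_all_memory() {
          root=${1:-/sys/devices/system/memory}
          for state in "$root"/memory*/state; do
              [ -w "$state" ] || continue
              if [ "$(cat "$state")" = "offline" ]; then
                  echo online > "$state"
                  echo "onlined ${state%/state}"
              fi
          done
      }
      ```

      On a real system this is exactly what udev-initiated hotplug rules
      typically do per block; the function above just batches it.
      
      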
      
      This rebinding procedure is currently a one-way trip.  Once memory
      is bound to "kmem", it's there permanently and can not be
      unbound and assigned back to device_dax.
      
      The kmem driver will never bind to a dax device unless the device
      is *explicitly* bound to the driver.  There are two reasons for
      this: One, since it is a one-way trip, it can not be undone if
      bound incorrectly.  Two, the kmem driver destroys data on the
      device.  Think of if you had good data on a pmem device.  It
      would be catastrophic if you compile-in "kmem", but leave out
      the "device_dax" driver.  kmem would take over the device and
      write volatile data all over your good data.
      
      This inherits any existing NUMA information for the newly-added
      memory from the persistent memory device that came from the
      firmware.  On Intel platforms, the firmware guarantees that each
      socket's persistent memory resides in a separate memory-only NUMA
      node.  That means that this patch is not expected to create NUMA
      nodes, but will simply hotplug memory into existing nodes.
      
      Because the memory is exposed through NUMA nodes, the existing NUMA
      APIs and tools are sufficient to create policies for applications or
      memory areas to have affinity for, or an aversion to, this memory.
      
      There is currently some metadata at the beginning of pmem regions.
      The section-size memory hotplug restrictions, plus this small
      reserved area can cause the "loss" of a section or two of capacity.
      This should be fixable in follow-on patches.  But, as a first step,
      losing 256MB of memory (worst case) out of hundreds of gigabytes
      is a good tradeoff vs. the required code to fix this up precisely.
      This calculation is also the reason we export
      memory_block_size_bytes().
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Dan Williams <dan.j.williams@intel.com>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: Ross Zwisler <zwisler@kernel.org>
      Cc: Vishal Verma <vishal.l.verma@intel.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: linux-nvdimm@lists.01.org
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mm@kvack.org
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  11. 26 February 2019 (2 commits)
  12. 22 February 2019 (1 commit)
  13. 21 February 2019 (2 commits)
  14. 20 February 2019 (2 commits)
  15. 19 February 2019 (2 commits)
    •
      drivers/component: kerneldoc polish · e4246b05
      Daniel Vetter committed
      Polish the kerneldoc a bit with suggestions from Randy.
      
      v2: Randy found another typo: s/compent/component/
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Ramalingam C <ramalingam.c@intel.com>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    •
      PM / core: Add support to skip power management in device/driver model · 85945c28
      Sudeep Holla committed
      All device objects in the driver model contain fields that control
      the handling of various power management activities. However, these
      are not always useful. There are a few instances where pseudo
      devices are added to the model just to take advantage of other
      features like kobjects, udev events, and so on. One such example is
      CPU devices and their caches.
      
      The sysfs entries for the CPU caches are managed by adding devices
      with the CPU as the parent in cpu_device_create() when a secondary
      CPU is brought online. Generally, when the secondary CPUs are
      hotplugged back in as part of resume from suspend-to-RAM,
      cpu_device_create() is called from the CPU hotplug state machine
      while the device associated with that CPU is not yet ready to be
      resumed, as the device_resume() call happens a bit later. There is
      no real need to set the is_prepared flag for CPU devices, as they
      are mostly pseudo devices, and the hotplug framework deals with the
      state machine, which is not managed through the CPU device.
      
      This often results in annoying warnings when resuming:
      Enabling non-boot CPUs ...
      CPU1: Booted secondary processor
       cache: parent cpu1 should not be sleeping
      CPU1 is up
      CPU2: Booted secondary processor
       cache: parent cpu2 should not be sleeping
      CPU2 is up
      .... and so on.
      
      So, in order to fix these kinds of warnings, we can simply avoid
      doing any power-management-related initialisations and operations if
      they are not used by these devices.
      
      Add a no_pm flag to indicate that the device does not require any
      sort of PM activity, so that all of it can be completely skipped.
      The same flag can also be used to avoid adding unused *power* sysfs
      entries for these devices. For now, let's use this for CPU cache
      devices.
      Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Tested-by: Eugeniu Rosca <erosca@de.adit-jv.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  16. 15 February 2019 (2 commits)
  17. 14 February 2019 (4 commits)