- 22 September 2021, 4 commits
-
-
Committed by Dan Williams
Commit 3d135db5 ("cxl/core: Move memdev management to core") left this straggling include for cxl_memdev setup. Clean it up.
Cc: Ben Widawsky <ben.widawsky@intel.com>
Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/163116434668.2460985.12264757586266849616.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
In preparation for implementing a unit test backend transport for ioctl operations, and making the mailbox available to the cxl/pmem infrastructure, move the existing PCI-specific portion of mailbox handling to an "mbox_send" operation. With this split, all the PCI-specific transport details are comprehended by a single operation and the rest of the mailbox infrastructure is 'struct cxl_mem' and 'struct cxl_memdev' generic.
Acked-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/163116434098.2460985.9004760022659400540.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
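A minimal sketch of the resulting split, assuming the callback lives in 'struct cxl_mem' as the text implies (field and function names are illustrative):

    struct cxl_mbox_cmd;

    struct cxl_mem {
        struct device *dev;
        /* the single operation that carries all transport-specific detail */
        int (*mbox_send)(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd);
    };

    /* generic mailbox core: dispatch through the op, no PCI knowledge here */
    static int cxl_mem_mbox_send(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd)
    {
        return cxlm->mbox_send(cxlm, cmd);
    }

A PCI transport fills in mbox_send with its doorbell/register sequence, while a unit test transport can supply a pure software implementation.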
-
Committed by Dan Williams
Commit 0b9159d0 ("cxl/pci: Store memory capacity values") missed updating the kernel-doc for 'struct cxl_mem', leading to the following warnings:

    ./scripts/kernel-doc -v drivers/cxl/cxlmem.h 2>&1 | grep warn
    drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'total_bytes' not described in 'cxl_mem'
    drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'volatile_only_bytes' not described in 'cxl_mem'
    drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'persistent_only_bytes' not described in 'cxl_mem'
    drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'partition_align_bytes' not described in 'cxl_mem'
    drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'active_volatile_bytes' not described in 'cxl_mem'
    drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'active_persistent_bytes' not described in 'cxl_mem'
    drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'next_volatile_bytes' not described in 'cxl_mem'
    drivers/cxl/cxlmem.h:107: warning: Function parameter or member 'next_persistent_bytes' not described in 'cxl_mem'

Also, it is redundant to describe those same parameters in the kernel-doc for cxl_mem_get_partition_info(). Given the only user of that routine updates the values in @cxlm, just do that implicitly, internal to the helper.
Cc: Ira Weiny <ira.weiny@intel.com>
Reported-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/163157174216.2653013.1277706528753990974.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
In preparation for adding a unit test provider of a cxl_memdev, convert the 'struct cxl_mem' driver context to carry a generic device rather than a pci device. Note, some dev_dbg() lines needed extra reformatting per clang-format. This conversion also allows the cxl_mem_create() and devm_cxl_add_memdev() calling conventions to be simplified. The "host" for a cxl_memdev must be the same device for the driver that allocated @cxlm.
Acked-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/163116432973.2460985.7553504957932024222.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 08 September 2021, 2 commits
-
-
Committed by Li Qiang (Johnny Li)
The indicator strings for the mbox and memdev register sets were incorrectly set to "status" in the error message.
Cc: <stable@vger.kernel.org>
Fixes: 30af9729 ("cxl/pci: Map registers based on capabilities")
Signed-off-by: Li Qiang (Johnny Li) <johnny.li@montage-tech.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/163072205089.2250120.8103605864156687395.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
A proposed rework of security_locked_down() users identified that the cxl_pci driver was passing the wrong lockdown_reason. Update cxl_mem_raw_command_allowed() to fail raw command access when raw PCI access is also disabled.
Fixes: 13237183 ("cxl/mem: Add a "RAW" send command")
Cc: Ben Widawsky <ben.widawsky@intel.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: <stable@vger.kernel.org>
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Cc: Paul Moore <paul@paul-moore.com>
Link: https://lore.kernel.org/r/163072204525.2250120.16615792476976546735.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
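A hedged sketch of that check; the function name comes from the text above, and the remaining opcode checks are omitted:

    static bool cxl_mem_raw_command_allowed(u16 opcode)
    {
        /* if raw PCI access is locked down, raw CXL commands are denied too */
        if (security_locked_down(LOCKDOWN_PCI_ACCESS))
            return false;

        /* ... other policy checks on the raw opcode would follow here ... */
        return true;
    }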
-
- 11 August 2021, 2 commits
-
-
Committed by Ira Weiny
The CXL spec defines the volatile DPA range to be 0 to the volatile memory size. It further defines the persistent DPA range to follow directly after the end of the volatile DPA range, through the persistent memory size. Essentially:

    Volatile DPA range   = [0, Volatile size)
    Persistent DPA range = [Volatile size, Volatile size + Persistent size)

Adjust the pmem_range start to reflect this and remove the TODO.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20210617221620.1904031-4-ira.weiny@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
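A minimal sketch of how those two ranges compose in the driver context, assuming the capacity fields from the earlier partition patches (field names are illustrative):

    /* volatile (ram) range starts at DPA 0 */
    cxlm->ram_range.start = 0;
    cxlm->ram_range.end = cxlm->active_volatile_bytes - 1;

    /* persistent range follows immediately after the volatile range */
    cxlm->pmem_range.start = cxlm->active_volatile_bytes;
    cxlm->pmem_range.end = cxlm->active_volatile_bytes + cxlm->active_persistent_bytes - 1;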
-
Committed by Ira Weiny
Memory devices may specify volatile-only, persistent-only, and partitionable space which, when added together, result in a total capacity. If Identify Memory Device.Partition Alignment != 0 the device supports partitionable space. This partitionable space can be split between volatile and persistent space. The total volatile and persistent sizes are reported in Get Partition Info, i.e.:

    active volatile memory   = volatile only + partitionable volatile
    active persistent memory = persistent only + partitionable persistent

Define cxl_mem_get_partition(), check for partitionable support, and use cxl_mem_get_partition() if applicable.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 07 August 2021, 1 commit
-
-
Committed by Ira Weiny
The Identify Memory Device command returns information about the volatile-only and persistent-only memory capacities. Store those values in the cxl_mem structure for later use. While at it, reuse those calculations to calculate the ram and pmem ranges.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20210617221620.1904031-2-ira.weiny@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 06 August 2021, 5 commits
-
-
Committed by Ben Widawsky
It is desirable to retain the mappings from the calling function. By simplifying this code, it will be much more straightforward to do that.
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20210716231548.174778-3-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Ben Widawsky
In an effort to explicitly avoid supporting vendor-specific register blocks (which can happily be mapped from userspace), entirely skip probing unknown types. The secondary benefit of this will be revealed in the future with code simplification.
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20210716231548.174778-2-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Ben Widawsky
The motivation for moving cxl_memdev allocation to the core (beyond better file organization of sysfs attributes in core/ and drivers in cxl/) is that device lifetime is longer than module lifetime. The cxl_pci module should be free to come and go without needing to coordinate with devices that need the text associated with cxl_memdev_release() to stay resident. The move fixes a use-after-free bug when looping driver load / unload with CONFIG_DEBUG_KOBJECT_RELEASE=y. Another motivation for disconnecting cxl_memdev creation from cxl_pci is to enable other drivers, like a unit test driver, to register memdevs.
Fixes: b39cb105 ("cxl/mem: Register CXL memX devices")
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162792540495.368511.9748638751088219595.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
In preparation for moving cxl_memdev allocation to the core, introduce cdevm_file_operations to coordinate file operations shutdown relative to driver data release. The motivation for moving cxl_memdev allocation to the core (beyond better file organization of sysfs attributes in core/ and drivers in cxl/) is that device lifetime is longer than module lifetime. The cxl_pci module should be free to come and go without needing to coordinate with devices that need the text associated with cxl_memdev_release() to stay resident. The move will fix a use-after-free bug when looping driver load / unload with CONFIG_DEBUG_KOBJECT_RELEASE=y. Another motivation for passing in file_operations to the core cxl_memdev creation flow is to allow alternate drivers, like unit test code, to define their own ioctl backends.
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162792539962.368511.2962268954245340288.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
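A sketch of what such a container could look like, inferred from the description above (the exact layout in the patch may differ):

    struct cdevm_file_operations {
        struct file_operations fops;
        /* called by the core before driver data (@cxlm) is released, so the
         * ioctl path can be shut down ahead of the memory going away */
        void (*shutdown)(struct device *dev);
    };

A provider (cxl_pci, or a unit test module) passes its own cdevm_file_operations into the core cxl_memdev creation flow, giving each backend its own ioctl implementation with a common shutdown protocol.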
-
Committed by Ben Widawsky
CXL core is growing, and it's already arguably unmanageable. To support future growth, move core functionality to a new directory and rename the file to represent just bus support. Future work will remove non-bus functionality. Note that mem.h is renamed to cxlmem.h to avoid a namespace collision with the global ARCH=um mem.h header.
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162792537866.368511.8915631504621088321.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 18 June 2021, 1 commit
-
-
Committed by Ben Widawsky
The current naming is confusing and wrong. The Register Locator is identified by the DVSEC identifier, not an offset.
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/20210618003009.956929-1-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 16 June 2021, 1 commit
-
-
Committed by Dan Williams
While a memX device on /sys/bus/cxl represents a CXL memory expander control interface, a pmemX device represents the persistent memory sub-functionality. It bridges the CXL subsystem to the libnvdimm nmemX control interface. With this skeleton ndctl can now see persistent memory devices on a "CXL" bus. Later patches add support for translating libnvdimm native commands to CXL commands.

    # ndctl list -BDiu -b CXL
    {
      "provider":"CXL",
      "dev":"ndbus1",
      "dimms":[
        {
          "dev":"nmem1",
          "state":"disabled"
        },
        {
          "dev":"nmem0",
          "state":"disabled"
        }
      ]
    }

Given nvdimm_bus_unregister() removes all devices on an ndbus0, the cxl_pmem infrastructure needs to arrange ->remove() to be triggered on cxl_nvdimm devices to keep their enabled state synchronized with the registration state of their corresponding device on the nvdimm_bus. In other words, always arrange for cxl_nvdimm_driver.remove() to unregister nvdimms from an nvdimm_bus ahead of the bus being unregistered.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162380012696.3039556.4293801691038740850.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 15 June 2021, 1 commit
-
-
Committed by Ben Widawsky
Some of the commands have already been defined for the support of RAW commands (to be blocked). Unlike their usage in the RAW interface, when used through the supported interface, they will be coordinated and marshalled along with other commands being issued by userspace and the driver itself. That coordination will be added later. The list of commands was determined based on the learnings from libnvdimm, and this list was provided directly by Dan.
Recommended-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20210413140907.534404-1-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 06 June 2021, 5 commits
-
-
Committed by Ben Widawsky
An HDM decoder is defined in the CXL 2.0 specification as a mechanism that allows devices and upstream ports to claim memory address ranges and participate in interleave sets. HDM decoder registers are within the component register block defined in CXL 2.0 section 8.2.3 "CXL 2.0 Component Registers" as part of the CXL.cache and CXL.mem subregion. The Component Register Block is found via the Register Locator DVSEC in a similar fashion to how the CXL Device Register Block is found. The primary difference is that the capability id size of the Component Register Block is a single DWORD instead of 4 DWORDS. It's now possible to configure a CXL type 3 device's HDM decoder. Such programming is expected for CXL devices with persistent memory, and hot plugged CXL devices that participate in CXL.mem with volatile memory. Add probe and mapping functions for the component register blocks.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Co-developed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Co-developed-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/20210528004922.3980613-6-ira.weiny@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Ira Weiny
Some hardware implementations mix component and device registers into the same BAR, and the driver stack is going to need independent mapping implementations for those 2 cases. Furthermore, it will be nice to have finer grained mappings should user space want to map some register blocks. Now that individual register blocks are mapped, those block regions should be reserved individually to fully separate the register blocks. Release the 'global' memory reservation and create individual register block region reservations through devm. NOTE: pci_release_mem_regions() is still compatible with pcim_enable_device() because it removes the automatic region release when called. So preserve the pcim_enable_device() so that the pcim interface can be called if needed.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Link: https://lore.kernel.org/r/20210604005316.4187340-1-ira.weiny@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Ira Weiny
The information required to map registers based on capabilities is contained within the BARs themselves. This means the BAR must be mapped to read the information needed, and then unmapped to map the individual parts of the BAR based on capabilities. Change cxl_setup_device_regs() to return a new cxl_register_map, and change the name to cxl_probe_device_regs(). Allocate and place cxl_register_maps on a list while processing all of the specified register blocks. After probing all the register blocks, go back and map smaller register blocks based on their capabilities and dispose of the cxl_register_maps. NOTE: pci_iomap() is not managed automatically via pcim_enable_device(), so be careful to call pci_iounmap() correctly.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Link: https://lore.kernel.org/r/20210604005036.4187184-1-ira.weiny@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Ira Weiny
In order to remap individual register sets, each BAR region must be reserved prior to mapping. Because the details of individual register sets are contained within the BARs themselves, the BAR must be mapped twice: once to extract this information and a second time for each register set. Rather than attempt to reserve each BAR individually and track whether that BAR has been reserved, open code pcim_iomap_regions() by first reserving all memory regions on the device and then mapping the BARs individually as needed. NOTE: pci_request_mem_regions() does not need a corresponding pci_release_mem_regions() because the pci device is managed via pcim_enable_device().
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Link: https://lore.kernel.org/r/20210528004922.3980613-3-ira.weiny@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
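A hedged sketch of that open-coded pattern, using the standard PCI helpers named above (error handling abbreviated):

    rc = pcim_enable_device(pdev);
    if (rc)
        return rc;

    /* reserve every memory BAR up front; the managed enable above means no
     * explicit pci_release_mem_regions() is required */
    rc = pci_request_mem_regions(pdev, pci_name(pdev));
    if (rc)
        return rc;

    /* map an individual BAR only when a register block actually needs it */
    base = pci_iomap(pdev, bar, 0);
    if (!base)
        return -ENOMEM;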
-
Committed by Ira Weiny
Each register block located in the DVSEC needs to be decoded from 2 words, 'register offset high' and 'register offset low'. Create a function, cxl_decode_register_block(), to perform this decode and return the bar, offset, and register type of the register block. Then use the decoded values in cxl_mem_map_regblock() instead of passing the raw registers.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Link: https://lore.kernel.org/r/20210528004922.3980613-2-ira.weiny@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
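A sketch of such a decode helper, assuming the CXL 2.0 Register Locator field layout (the masks below are illustrative, not copied from the patch):

    static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi, u8 *bar,
                                          u64 *offset, u8 *reg_type)
    {
        /* the offset is 64K-aligned: combine the high dword with the
         * already-aligned low bits, without shifting them down */
        *offset = ((u64)reg_hi << 32) | (reg_lo & GENMASK(31, 16));
        *bar = FIELD_GET(GENMASK(2, 0), reg_lo);
        *reg_type = FIELD_GET(GENMASK(15, 8), reg_lo);
    }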
-
- 27 May 2021, 6 commits
-
-
Committed by Ben Widawsky
@cxlm.base only existed to support holding the base found in the register block mapping code and passing it along to the register setup code. Now that the register setup function has all logic around managing the registers, from DVSEC to iomapping up to populating our CXL specific information, it is easy to turn the @base values into local variables and remove them from our device driver state.
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/20210520212953.1181695-1-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Ben Widawsky
Start moving code around to ultimately get rid of @cxlm.base. The @cxlm.base member serves no purpose other than intermediate storage of the offset found in cxl_mem_map_regblock(), later used by cxl_mem_setup_regs(). Aside from wanting to get rid of this useless member, it will help later when adding new register block identifiers. While @cxlm.base still exists, it will become trivial to remove it in a future patch. No functional change is meant to be introduced in this patch.
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20210407222625.320177-4-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Ben Widawsky
Add a new function specifically for mapping the register blocks and the offsets within them. The new function can be used more generically for other register block identifiers. No functional change is meant to be introduced in this patch, with the exception of a dev_err printed when the device register block isn't found.
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20210407222625.320177-3-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Ben Widawsky
Trivial cleanup.
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20210407222625.320177-2-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Vishal Verma
The 'Identify Device' mailbox command returns an 'lsa_size', which is the size of the label storage area on the device. Export it as a sysfs attribute so that userspace tooling that reads/writes the LSA can determine the size without having to run an 'Identify Device' on its own.
Cc: Ben Widawsky <ben.widawsky@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Link: https://lore.kernel.org/r/20210520194745.1095517-1-vishal.l.verma@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
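A minimal sketch of such an attribute, assuming the value was cached from 'Identify Device' into the driver context (the attribute and field names here are illustrative):

    static ssize_t label_storage_size_show(struct device *dev,
                                           struct device_attribute *attr, char *buf)
    {
        struct cxl_memdev *cxlmd = to_cxl_memdev(dev);

        return sysfs_emit(buf, "%llu\n", cxlmd->cxlm->lsa_size);
    }
    static DEVICE_ATTR_RO(label_storage_size);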
-
Committed by Ben Widawsky
As the driver has undergone development, it's become clear that the majority [entirety?] of the current functionality in mem.c is actually a layer encapsulating functionality exposed through PCI based interactions. This layer can be used either in isolation or as a foundation for higher level functionality. CXL capabilities exist in a parallel domain to PCIe. CXL devices are enumerable and controllable via "legacy" PCIe mechanisms; however, their CXL capabilities are a superset of PCIe. For example, a CXL device may be connected to a non-CXL capable PCIe root port, and therefore will not be able to participate in CXL.mem or CXL.cache operations, but can still be accessed through PCIe mechanisms for CXL.io operations. To properly represent the PCI nature of this driver, and in preparation for introducing a new driver for the CXL.mem / HDM decoder (Host-managed Device Memory) capabilities of a CXL memory expander, rename mem.c to pci.c so that mem.c is available for this new driver. The result of the change is that there is a clear layering distinction in the driver, and a systems administrator may load only the cxl_pci module and gain access to such operations as firmware update, offline provisioning of devices, and error collection. In addition to freeing up the file name for another purpose, there are two primary reasons this is useful:
1. Acting upon devices which don't have full CXL capabilities. This may happen, for instance, if the CXL device is connected in a CXL unaware part of the platform topology.
2. Userspace-first provisioning for devices without kernel driver interference. This may be useful when provisioning a new device in a specific manner that might otherwise be blocked or prevented by the real CXL mem driver.
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/20210526174413.802913-1-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 15 May 2021, 3 commits
-
-
Committed by Dan Williams
While CXL Memory Device endpoints locate the CXL MMIO registers in a PCI BAR, CXL root bridges have their MMIO base address described by platform firmware. Refactor the existing register lookup into a generic facility for endpoints and bridges to share.
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162096972534.1865304.3218686216153688039.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
CXL MMIO register blocks are organized by device type and capabilities. There are Component registers, Device registers (yes, an ambiguous name), and Memory Device registers (a specific extension of Device registers). It is possible for a given device instance (endpoint or port) to implement register sets from multiple of the above categories. The driver code that enumerates and maps the registers is type specific, so it is useful to have a dedicated type and helpers for each block type. At the same time, once the registers are mapped the origin type does not matter. It is overly pedantic to reference the register block type in code that is using the registers. In preparation for the endpoint driver to incorporate Component registers into its MMIO operations, reorganize the registers to allow typed enumeration + mapping, but anonymous usage. With the end state of 'struct cxl_regs' to be:

    struct cxl_regs {
        union {
            struct {
                CXL_DEVICE_REGS();
            };
            struct cxl_device_regs device_regs;
        };
        union {
            struct {
                CXL_COMPONENT_REGS();
            };
            struct cxl_component_regs component_regs;
        };
    };

With this arrangement the driver can share component init code with ports, but when using the registers it can directly reference the component register block type by name without the 'component_regs' prefix. So, map + enumerate can be shared across drivers of different CXL classes, e.g.:

    void cxl_setup_device_regs(struct device *dev, void __iomem *base,
                               struct cxl_device_regs *regs);

    void cxl_setup_component_regs(struct device *dev, void __iomem *base,
                                  struct cxl_component_regs *regs);

...while inline usage in the driver need not indicate where the registers came from:

    readl(cxlm->regs.mbox + MBOX_OFFSET);
    readl(cxlm->regs.hdm + HDM_OFFSET);

...instead of:

    readl(cxlm->regs.device_regs.mbox + MBOX_OFFSET);
    readl(cxlm->regs.component_regs.hdm + HDM_OFFSET);

This complexity of the definition in .h yields improvement in code readability in .c while maintaining type-safety for organization of setup code. It prepares the implementation to maintain organization in the face of CXL devices that compose register interfaces consisting of multiple types. Given that this new container is named 'regs', rename the common register base pointer @base, and fixup the kernel-doc for the missing @cxlmd description.
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/162096971451.1865304.13540251513463515153.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
In preparation for sharing cxl.h with other generic CXL consumers, move / consolidate some of the memory device specifics to mem.h. The motivation for moving out of cxl.h is to maintain least privilege access to memory-device details since cxl.h is used in multiple files. The motivation for moving definitions into a new mem.h header is code readability and organization, i.e. minimize implementation details when reading data structures and other definitions.
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162096970932.1865304.14510894426562947262.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 17 April 2021, 1 commit
-
-
Committed by Dan Williams
The CXL Identify Memory Device output payload emits capacity in 256MB units. The driver is treating the capacity field as bytes. This was missed because QEMU reports bytes when it should report bytes / 256MB.
Fixes: 8adaf747 ("cxl/mem: Find device capabilities")
Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
Cc: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/161862021044.3259705.7008520073059739760.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
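A sketch of the scaling implied above; the macro and payload field names are assumptions, only the 256MB factor comes from the text:

    #define CXL_CAPACITY_MULTIPLIER SZ_256M    /* spec-defined capacity unit */

    /* convert the Identify payload's 256MB-granularity fields to bytes */
    cxlm->total_bytes = le64_to_cpu(id.total_capacity) * CXL_CAPACITY_MULTIPLIER;
    cxlm->volatile_only_bytes = le64_to_cpu(id.volatile_capacity) * CXL_CAPACITY_MULTIPLIER;
    cxlm->persistent_only_bytes = le64_to_cpu(id.persistent_capacity) * CXL_CAPACITY_MULTIPLIER;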
-
- 16 April 2021, 1 commit
-
-
Committed by Ben Widawsky
The "Register Offset Low" register of a "DVSEC Register Locator" contains the 64K-aligned offset for the registers, along with the BAR indicator and an id. The implementation was treating the "Register Block Offset Low" field as a value rather than as a pre-aligned component of the 64-bit offset. So, just mask, don't mask and shift (FIELD_GET). The user visible result of this bug is that the driver fails to bind to the device after none of the required blocks are found. This was missed earlier because the primary development done in the QEMU environment only uses 0 offsets, i.e. 0 shifted is still 0.
Fixes: 8adaf747 ("cxl/mem: Find device capabilities")
Reported-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/20210415232610.603273-1-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 07 April 2021, 5 commits
-
-
Committed by Robert Richter
Typically the mem_commands[] array is in sync with 'enum { CXL_CMDS }'. Current code works well. However, the array size of mem_commands[] may not strictly be the same as CXL_MEM_COMMAND_ID_MAX. E.g. if a new CXL_CMD() is added that is guarded by #ifdefs, the array could be shorter. This could then lead to an out-of-bounds array access in cxl_validate_cmd_from_user(). Fix this by forcing the array size to CXL_MEM_COMMAND_ID_MAX. This also adds range checks for array items in mem_commands[] at compile time.
Signed-off-by: Robert Richter <rrichter@amd.com>
Link: https://lore.kernel.org/r/20210324141635.22335-1-rrichter@amd.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
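A compile-only sketch of the sizing fix described above (the struct and initializer here are stand-ins, not the real command table):

    enum { CXL_MEM_COMMAND_ID_IDENTIFY, CXL_MEM_COMMAND_ID_MAX };

    struct cxl_mem_command { u16 opcode; };

    /* an explicit length guarantees the array always spans the enum range,
     * even if an #ifdef-guarded entry drops out of the initializer list */
    static struct cxl_mem_command mem_commands[CXL_MEM_COMMAND_ID_MAX] = {
        [CXL_MEM_COMMAND_ID_IDENTIFY] = { .opcode = 0x4000 },
    };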
Committed by Dan Williams
There is no power management of cxl virtual devices; disable device-power-management and runtime-power-management to prevent userspace from growing expectations of those attributes appearing. They can be added back in the future if needed.
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/161728761025.2474381.808344500111924819.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
While device_add() will happen to catch dev_set_name() failures, it is a broken pattern to follow given that the core may try to fall back to a different name. Add explicit checking for dev_set_name() failures to be cleaned up by put_device(). Skip cdev_device_add() and proceed directly to put_device() if the name set fails. This type of bug is easier to see if 'alloc' is split from 'add' operations that require put_device() on failure. So cxl_memdev_alloc() is split out as a result.
Fixes: b39cb105 ("cxl/mem: Register CXL memX devices")
Reported-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/161728760514.2474381.1163928273337158134.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
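A hedged sketch of the alloc/add split and the failure handling described above (helper names follow the text, the details are illustrative):

    cxlmd = cxl_memdev_alloc(cxlm);        /* device initialized, not yet added */
    if (IS_ERR(cxlmd))
        return cxlmd;

    dev = &cxlmd->dev;
    rc = dev_set_name(dev, "mem%d", cxlmd->id);
    if (rc)
        goto err;                          /* name not set: skip cdev_device_add() */

    rc = cdev_device_add(&cxlmd->cdev, dev);
    if (rc)
        goto err;

    return cxlmd;
err:
    put_device(dev);                       /* releases everything the alloc set up */
    return ERR_PTR(rc);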
-
Committed by Dan Williams
The percpu_ref to gate whether cxl_memdev_ioctl() is free to use the driver context (@cxlm) to issue I/O is overkill, implemented incorrectly (missing a device reference before accessing the percpu_ref), and the complexities of shutting down a percpu_ref contributed to a bug in the error unwind in cxl_mem_add_memdev() (missing put_device(), to be fixed separately). Use an rwsem to explicitly synchronize the usage of cxlmd->cxlm, and add the missing reference counting for cxlmd in cxl_memdev_open() and cxl_memdev_release_file().
Fixes: b39cb105 ("cxl/mem: Register CXL memX devices")
Reported-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/161728759948.2474381.17481500816783671817.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
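A minimal sketch of the rwsem gate described above (the lock and helper names are assumptions):

    static DECLARE_RWSEM(cxl_memdev_rwsem);

    static long cxl_memdev_ioctl(struct file *file, unsigned int cmd,
                                 unsigned long arg)
    {
        struct cxl_memdev *cxlmd = file->private_data;
        int rc = -ENXIO;

        down_read(&cxl_memdev_rwsem);
        if (cxlmd->cxlm)                   /* NULL once the driver has unbound */
            rc = __cxl_memdev_ioctl(cxlmd, cmd, arg);
        up_read(&cxl_memdev_rwsem);

        return rc;
    }

On driver unbind the writer side takes the semaphore, clears cxlmd->cxlm, and releases it, so in-flight ioctls either complete first or observe the NULL and fail with -ENXIO.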
-
Committed by Dan Williams
While none of the CXL sysfs attributes are threatening to overrun a PAGE_SIZE of output, it is good form to use the recommended helpers.
Fixes: b39cb105 ("cxl/mem: Register CXL memX devices")
Reported-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/161728759424.2474381.11231441014951343463.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
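The recommended helper is sysfs_emit(), which bounds output to PAGE_SIZE; a sketch of a converted show() routine (attribute and field names are illustrative):

    static ssize_t firmware_version_show(struct device *dev,
                                         struct device_attribute *attr, char *buf)
    {
        struct cxl_memdev *cxlmd = to_cxl_memdev(dev);

        /* sysfs_emit() checks the buffer bounds; a bare sprintf() does not */
        return sysfs_emit(buf, "%.16s\n", cxlmd->cxlm->firmware_version);
    }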
-
- 23 February 2021, 1 commit
-
-
Committed by Ben Widawsky
When submitting a command for userspace, input and output payload bounce buffers are allocated. For a given command, both input and output buffers may exist, so when allocation of the input buffer fails, the output buffer must be freed too. As far as I can tell, userspace can't easily exploit the leak to OOM a machine unless the machine was already near an OOM state.
Fixes: 583fa5e7 ("cxl/mem: Add basic IOCTL interface")
Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Link: https://lore.kernel.org/r/20210221035846.680145-1-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
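A sketch of the error path in question, with illustrative names (the allocation order follows the description above):

    mbox_cmd->payload_out = kvzalloc(out_size, GFP_KERNEL);
    if (!mbox_cmd->payload_out)
        return -ENOMEM;

    mbox_cmd->payload_in = vmemdup_user(u64_to_user_ptr(in_payload), in_size);
    if (IS_ERR(mbox_cmd->payload_in)) {
        kvfree(mbox_cmd->payload_out);     /* the fix: don't leak the output buffer */
        return PTR_ERR(mbox_cmd->payload_in);
    }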
-
- 19 February 2021, 1 commit
-
-
Committed by Dan Carpenter
The copy_to_user() function returns the number of bytes remaining to be copied, but we want to return -EFAULT if the copy doesn't complete.
Fixes: b754ffbbc0ee ("cxl/mem: Add basic IOCTL interface")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Link: https://lore.kernel.org/r/YC+K3kgzqm20zCWY@mwanda
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
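The idiomatic pattern, shown as a short sketch (variable names are placeholders):

    /* copy_to_user() returns the number of bytes NOT copied, never an errno */
    if (copy_to_user(user_ptr, &cmd, sizeof(cmd)))
        return -EFAULT;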
-