path: root/drivers/cxl/Makefile
2022-04-22  PM: CXL: Disable suspend  (Dan Williams; 1 file, -1/+1)
The CXL specification claims S3 support at a hardware level, but at a system software level there are some missing pieces. Section 9.4 (CXL 2.0) rightly claims that "CXL mem adapters may need aux power to retain memory context across S3", but there is no enumeration mechanism for the OS to determine if a given adapter has that support. Moreover the save state and resume image for the system may inadvertently end up in a CXL device that needs to be restored before the save state is recoverable, i.e. a circular dependency that is not resolvable without a third-party save-area.

Arrange for the cxl_mem driver to fail S3 attempts. This still nominally allows for suspend, but requires unbinding all CXL memory devices before the suspend to ensure the typical DRAM flow is taken. The cxl_mem unbind flow is intended to also tear down all CXL memory regions associated with a given cxl_memdev.

It is reasonable to assume that any device participating in a System RAM range published in the EFI memory map is covered by aux power and a save-area outside the device itself, so this restriction can be minimized in the future once pre-existing region enumeration support arrives, and perhaps a spec update can clarify whether the EFI memory map is sufficient for determining the range of devices managed by platform firmware for S3 support.

Per Rafael, if the CXL configuration prevents suspend then it should fail early before tasks are frozen, and mem_sleep should stop showing 'mem' as an option [1]. Effectively CXL augments the platform suspend ->valid() op since, for example, the ACPI ops are not aware of the CXL / PCI dependencies. Given the split role of platform firmware vs OS-provisioned CXL memory, it is up to the cxl_mem driver to determine if the CXL configuration has elements that platform firmware may not be prepared to restore.

Link: https://lore.kernel.org/r/CAJZ5v0hGVN_=3iU8OLpHY3Ak35T5+JcBM-qs8SbojKrpd0VXsA@mail.gmail.com [1]
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Len Brown <len.brown@intel.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/165066828317.3907920.5690432272182042556.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
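As a sketch of the gate described above, assuming a simple reference count that cxl_mem bumps at ->probe() time and that the suspend core consults before offering 'mem' (all names here are illustrative placeholders, not necessarily what the commit landed):

    /*
     * Sketch: refuse S3 while any cxl_mem device is bound. The inc/dec
     * helpers would be called from the cxl_mem driver's probe and
     * remove paths; the suspend core checks cxl_mem_active().
     */
    #include <linux/atomic.h>
    #include <linux/types.h>

    static atomic_t mem_active;

    bool cxl_mem_active(void)
    {
            return atomic_read(&mem_active) != 0;
    }

    void cxl_mem_active_inc(void)   /* from cxl_mem ->probe() */
    {
            atomic_inc(&mem_active);
    }

    void cxl_mem_active_dec(void)   /* from cxl_mem ->remove() */
    {
            atomic_dec(&mem_active);
    }

With a helper like this in place, the platform suspend ->valid() path can fail early, before tasks are frozen, and mem_sleep can stop advertising 'mem' while any device remains bound.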
2022-02-08  cxl/mem: Add the cxl_mem driver  (Ben Widawsky; 1 file, -0/+2)
At this point the subsystem can enumerate all CXL ports (CXL.mem decode resources in upstream switch ports and host bridges) in a system. The last mile is connecting those ports to endpoints.

The cxl_mem driver connects an endpoint device to the platform CXL.mem protocol decode-topology. At ->probe() time it walks its device-topology ancestry and adds a CXL Port object at every Upstream Port hop until it gets to the CXL root. The CXL root object is only present after a platform firmware driver registers platform CXL resources. For ACPI based platforms this is managed by the ACPI0017 device and the cxl_acpi driver.

The ports are registered such that disabling a given port automatically unregisters all descendant ports, and the chain can only be registered after the root is established. Given that ACPI device scanning may run asynchronously compared to PCI device scanning, the root driver is tasked with rescanning the bus after the root successfully probes. Conversely, if any port in a chain between the root and an endpoint becomes disconnected, it subsequently triggers the endpoint to unregister. Given lock dependencies, the endpoint unregistration happens in a workqueue asynchronously. If userspace cares about synchronizing delayed work after port events, the /sys/bus/cxl/flush attribute is available for that purpose.

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
[djbw: clarify changelog, rework hotplug support]
Link: https://lore.kernel.org/r/164398782997.903003.9725273241627693186.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
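A hedged sketch of the ->probe() ancestry walk described above; every helper named here (find_cxl_root(), is_cxl_upstream_port(), devm_cxl_add_port(), devm_cxl_add_endpoint()) is a placeholder for whatever the subsystem actually provides, not the real API:

    #include <linux/device.h>
    #include <linux/errno.h>

    /* Placeholders for subsystem helpers; names are assumptions */
    extern void *find_cxl_root(struct device *dev);
    extern bool is_cxl_upstream_port(struct device *dev);
    extern void *devm_cxl_add_port(struct device *uport);
    extern int devm_cxl_add_endpoint(struct device *dev);

    static int cxl_mem_probe_sketch(struct device *dev)
    {
            struct device *iter;

            /* No platform (e.g. ACPI0017) root registered yet: fail */
            if (!find_cxl_root(dev))
                    return -ENXIO;

            /* Add a cxl_port at each Upstream Port hop up to the root */
            for (iter = dev->parent; iter; iter = iter->parent) {
                    if (!is_cxl_upstream_port(iter))
                            continue;
                    if (!devm_cxl_add_port(iter))
                            return -ENXIO;
            }

            return devm_cxl_add_endpoint(dev);
    }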
2022-02-08  cxl/port: Add a driver for 'struct cxl_port' objects  (Ben Widawsky; 1 file, -0/+2)
The need for a CXL port driver and a dedicated cxl_bus_type is driven by a need to simultaneously support 2 independent physical memory decode domains (cache coherent CXL.mem and uncached PCI.mmio) that also intersect at a single PCIe device node. A CXL Port is a device that advertises a CXL Component Register block with an "HDM Decoder Capability Structure".

From Documentation/driver-api/cxl/memory-devices.rst:

Similar to how a RAID driver takes disk objects and assembles them into a new logical device, the CXL subsystem is tasked to take PCIe and ACPI objects and assemble them into a CXL.mem decode topology. The need for runtime configuration of the CXL.mem topology is also similar to RAID in that different environments with the same hardware configuration may decide to assemble the topology in contrasting ways. One may choose performance (RAID0), striping memory across multiple Host Bridges and endpoints, while another may opt for fault tolerance and disable any striping in the CXL.mem topology.

The port driver identifies whether an endpoint Memory Expander is connected to a CXL topology. If an active CXL Port (bound to the 'cxl_port' driver) is not found at every PCIe Switch Upstream Port, along with an active "root" CXL Port, then the device is just a plain PCIe endpoint, only capable of participating in PCI.mmio and DMA cycles, not CXL.mem coherent interleave sets.

The 'cxl_port' driver lets the CXL subsystem leverage driver-core infrastructure for setup and teardown of register resources and communicating device activation status to userspace. The cxl_bus_type can rendezvous the async arrival of platform level CXL resources (via the 'cxl_acpi' driver) with the asynchronous enumeration of Memory Expander endpoints, while also implementing a hierarchical locking model independent of the associated 'struct pci_dev' locking model. The locking for dport and decoder enumeration is now handled in the core rather than callers.

For now the port driver only enumerates and registers CXL resources (downstream port metadata and decoder resources); later it will be used to take action on its decoders in response to CXL.mem region provisioning requests.

Note1: cxlpci.h has long depended on pci.h, but port.c was the first to not include pci.h. Carry that dependency in cxlpci.h.

Note2: cxl port enumeration and probing complicates CXL subsystem init to the point that it helps to have centralized debug logging of probe events in cxl_bus_probe().

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Co-developed-by: Dan Williams <dan.j.williams@intel.com>
Link: https://lore.kernel.org/r/164374948116.464348.1772618057599155408.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
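Note2 hints at a pattern worth showing: the bus ->probe() wrapper is the one choke point that sees every cxl driver attach, so a single dev_dbg() there covers the whole topology. A minimal sketch, assuming a cxl_driver wrapper type (the struct and to_cxl_drv() helper below are illustrative):

    #include <linux/device.h>

    /* Assumed wrapper type; the real driver struct may carry more */
    struct cxl_driver {
            struct device_driver drv;
            int (*probe)(struct device *dev);
    };

    #define to_cxl_drv(d) container_of(d, struct cxl_driver, drv)

    static int cxl_bus_probe(struct device *dev)
    {
            int rc;

            rc = to_cxl_drv(dev->driver)->probe(dev);
            /* One log line per attach, for every port and endpoint */
            dev_dbg(dev, "probe: %d\n", rc);
            return rc;
    }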
2022-02-08  cxl: Rename CXL_MEM to CXL_PCI  (Ben Widawsky; 1 file, -1/+1)
The cxl_mem module was renamed cxl_pci in commit 21e9f76733a8 ("cxl: Rename mem to pci"). In preparation for adding an ancillary driver for cxl_memdev devices (registered on the cxl bus by cxl_pci), go ahead and rename CONFIG_CXL_MEM to CONFIG_CXL_PCI. Free up the CXL_MEM name for that new driver to manage CXL.mem endpoint operations.

Suggested-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/164298412409.3018233.12407355692407890752.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-08-06  cxl: Move cxl_core to new directory  (Ben Widawsky; 1 file, -3/+1)
CXL core is growing, and it's already arguably unmanageable. To support future growth, move core functionality to a new directory and rename the file to represent just bus support. Future work will remove non-bus functionality.

Note that mem.h is renamed to cxlmem.h to avoid a namespace collision with the global ARCH=um mem.h header.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162792537866.368511.8915631504621088321.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-06-15  cxl/pmem: Add initial infrastructure for pmem support  (Dan Williams; 1 file, -0/+2)
Register an 'nvdimm-bridge' device to act as an anchor for a libnvdimm bus hierarchy. Also, flesh out the cxl_bus definition to allow a cxl_nvdimm_bridge_driver to attach to the bridge and trigger the nvdimm-bus registration.

The creation of the bridge is gated on the detection of a PMEM capable address space registered to the root. The bridge indirection allows the libnvdimm module to remain unloaded on platforms without PMEM support.

Given that the probing of ACPI0017 is asynchronous to CXL endpoint devices, and the expectation that CXL endpoint devices register other PMEM resources on the 'CXL' nvdimm bus, a workqueue is added. The workqueue is needed to run bus_rescan_devices() outside of the device_lock() of the nvdimm-bridge device to rendezvous nvdimm resources as they arrive. For now only the bus is taken online/offline in the workqueue.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162379909706.2993820.14051258608641140169.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
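The lock-ordering point above is the crux: bus_rescan_devices() takes device locks, so calling it under the nvdimm-bridge's own device_lock() would deadlock. A minimal sketch of deferring the rescan to a workqueue, assuming cxl_bus_type is exported by the cxl core (the work-item names are illustrative):

    #include <linux/device.h>
    #include <linux/workqueue.h>

    extern struct bus_type cxl_bus_type;    /* provided by the cxl core */

    static void cxl_nvb_rescan(struct work_struct *work)
    {
            /* Safe here: no device_lock() is held in workqueue context */
            bus_rescan_devices(&cxl_bus_type);
    }

    static DECLARE_WORK(cxl_nvb_work, cxl_nvb_rescan);

    /* Called as nvdimm resources arrive, possibly under device_lock() */
    static void cxl_nvb_schedule_rescan(void)
    {
            schedule_work(&cxl_nvb_work);
    }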
2021-06-09  cxl/acpi: Introduce the root of a cxl_port topology  (Dan Williams; 1 file, -0/+2)
While CXL builds upon the PCI software model for enumeration and endpoint control, a static platform component is required to bootstrap the CXL memory layout. Similar to how ACPI identifies root-level PCI memory resources, ACPI data enumerates the address space and interleave configuration for CXL Memory.

In addition to identifying host bridges, ACPI is responsible for enumerating the CXL memory space that can be addressed by downstream decoders. This is similar to the requirement for ACPI to publish resources via the _CRS method for PCI host bridges. Specifically, ACPI publishes a table, the CXL Early Discovery Table (CEDT), which includes a list of CXL Memory resources, the CXL Fixed Memory Window Structures (CFMWS).

For now, introduce the core infrastructure for a cxl_port hierarchy starting with a root-level anchor represented by the ACPI0017 device. Follow-on changes model support for the configurable decode capabilities of cxl_port instances, i.e. CXL switch support.

Co-developed-by: Alison Schofield <alison.schofield@intel.com>
Signed-off-by: Alison Schofield <alison.schofield@intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162325449515.2293126.15303270193010154608.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
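A sketch of the ACPI0017 anchor point described above: a platform driver binds the ACPI0017 device and registers the root cxl_port from which all later ports descend. devm_cxl_add_port() stands in for the actual registration call:

    #include <linux/acpi.h>
    #include <linux/err.h>
    #include <linux/module.h>
    #include <linux/platform_device.h>

    extern void *devm_cxl_add_port(struct device *dev);    /* placeholder */

    static int cxl_acpi_probe(struct platform_device *pdev)
    {
            /* The root port has no parent port; it anchors the hierarchy */
            return PTR_ERR_OR_ZERO(devm_cxl_add_port(&pdev->dev));
    }

    static const struct acpi_device_id cxl_acpi_ids[] = {
            { "ACPI0017" },
            { }
    };
    MODULE_DEVICE_TABLE(acpi, cxl_acpi_ids);

    static struct platform_driver cxl_acpi_driver = {
            .probe = cxl_acpi_probe,
            .driver = {
                    .name = "cxl_acpi",
                    .acpi_match_table = cxl_acpi_ids,
            },
    };
    module_platform_driver(cxl_acpi_driver);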
2021-05-26  cxl: Rename mem to pci  (Ben Widawsky; 1 file, -2/+2)
As the driver has undergone development, it's become clear that the majority [entirety?] of the current functionality in mem.c is actually a layer encapsulating functionality exposed through PCI based interactions. This layer can be used either in isolation or as a foundation for higher level functionality.

CXL capabilities exist in a parallel domain to PCIe. CXL devices are enumerable and controllable via "legacy" PCIe mechanisms; however, their CXL capabilities are a superset of PCIe. For example, a CXL device may be connected to a non-CXL capable PCIe root port, and therefore will not be able to participate in CXL.mem or CXL.cache operations, but can still be accessed through PCIe mechanisms for CXL.io operations.

To properly represent the PCI nature of this driver, and in preparation for introducing a new driver for the CXL.mem / HDM decoder (Host-managed Device Memory) capabilities of a CXL memory expander, rename mem.c to pci.c so that mem.c is available for this new driver. The result of the change is that there is a clear layering distinction in the driver, and a systems administrator may load only the cxl_pci module and gain access to such operations as firmware update, offline provisioning of devices, and error collection.

In addition to freeing up the file name for another purpose, there are two primary reasons this is useful:

1. Acting upon devices which don't have full CXL capabilities. This may happen, for instance, if the CXL device is connected in a CXL unaware part of the platform topology.

2. Userspace-first provisioning for devices without kernel driver interference. This may be useful when provisioning a new device in a specific manner that might otherwise be blocked or prevented by the real CXL mem driver.

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/20210526174413.802913-1-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-05-14  cxl/core: Rename bus.c to core.c  (Dan Williams; 1 file, -2/+2)
In preparation for more generic shared functionality across endpoint consumers of core cxl resources, and platform-firmware producers of those resources, rename bus.c to core.c. In addition to the central rendezvous for interleave coordination, the core will also define common routines like CXL register block mapping.

Acked-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162096972018.1865304.11079951161445408423.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-02-16  cxl/mem: Register CXL memX devices  (Dan Williams; 1 file, -0/+3)
Create the /sys/bus/cxl hierarchy to enumerate:

* Memory Devices (per-endpoint control devices)

* Memory Address Space Devices (platform address ranges with interleaving, performance, and persistence attributes)

* Memory Regions (active provisioned memory from an address space device that is in use as System RAM or delegated to libnvdimm as Persistent Memory regions)

For now, only the per-endpoint control devices are registered on the 'cxl' bus. However, going forward it will provide a mechanism to coordinate cross-device interleave.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> (v2)
Link: https://lore.kernel.org/r/20210217040958.1354670-4-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
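A sketch of how per-endpoint 'memX' control devices might be registered on the cxl bus; the ida-based naming mirrors common driver-core practice, while the actual structure embedding in the driver may differ:

    #include <linux/device.h>
    #include <linux/idr.h>
    #include <linux/slab.h>

    extern struct bus_type cxl_bus_type;    /* backs /sys/bus/cxl */
    static DEFINE_IDA(cxl_memdev_ida);

    static void cxl_memdev_release(struct device *dev)
    {
            kfree(dev);
    }

    static struct device *cxl_memdev_alloc(struct device *parent)
    {
            struct device *dev;
            int id;

            id = ida_alloc(&cxl_memdev_ida, GFP_KERNEL);
            if (id < 0)
                    return NULL;

            dev = kzalloc(sizeof(*dev), GFP_KERNEL);
            if (!dev) {
                    ida_free(&cxl_memdev_ida, id);
                    return NULL;
            }

            device_initialize(dev);
            dev->parent = parent;
            dev->bus = &cxl_bus_type;
            dev->release = cxl_memdev_release;
            /* Shows up as /sys/bus/cxl/devices/memX */
            dev_set_name(dev, "mem%d", id);
            return dev;     /* caller completes with device_add() */
    }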
2021-02-16  cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints  (Dan Williams; 1 file, -0/+4)
The CXL.mem protocol allows a device to act as a provider of "System RAM" and/or "Persistent Memory" that is fully coherent as if the memory was attached to the typical CPU memory controller. With the CXL-2.0 specification a PCI endpoint can implement a "Type-3" device interface and give the operating system control over "Host Managed Device Memory". See section 2.3 Type 3 CXL Device.

The memory range exported by the device may optionally be described by the platform firmware memory map, or by infrastructure like LIBNVDIMM to provision persistent memory capacity from one, or more, CXL.mem devices.

A pre-requisite for Linux-managed memory-capacity provisioning is this cxl_mem driver that can speak the mailbox protocol defined in section 8.2.8.4 Mailbox Registers. For now just land the initial driver boiler-plate and Documentation/ infrastructure.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: David Rientjes <rientjes@google.com> (v1)
Cc: Jonathan Corbet <corbet@lwn.net>
Link: https://www.computeexpresslink.org/download-the-specification
Link: https://lore.kernel.org/r/20210217040958.1354670-2-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
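A sketch of the driver boiler-plate this entry describes: binding by PCI class code rather than vendor/device ID, so any spec-conformant Type-3 endpoint matches. The class encoding used below (memory controller 0x05, CXL subclass 0x02, prog-if 0x10) is my reading of the spec and should be treated as an assumption:

    #include <linux/module.h>
    #include <linux/pci.h>

    #define CXL_MEMORY_PROGIF 0x10  /* assumed prog-if for CXL.mem */

    static int cxl_mem_probe(struct pci_dev *pdev,
                             const struct pci_device_id *id)
    {
            /* Mailbox setup (CXL 2.0 section 8.2.8.4) would start here */
            return pcim_enable_device(pdev);
    }

    static const struct pci_device_id cxl_mem_pci_tbl[] = {
            /* Any vendor; match on the full 24-bit class code */
            { PCI_DEVICE_CLASS((0x0502 << 8) | CXL_MEMORY_PROGIF, ~0) },
            { }
    };
    MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);

    static struct pci_driver cxl_mem_driver = {
            .name     = "cxl_mem",
            .id_table = cxl_mem_pci_tbl,
            .probe    = cxl_mem_probe,
    };
    module_pci_driver(cxl_mem_driver);
    MODULE_LICENSE("GPL v2");

Class-code matching is what lets a single driver cover every Type-3 vendor, which is why the boiler-plate carries no device-ID list.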