CXL changes for v6.18

Misc changes:
 - Use str_plural() instead of open-coded string selection
 - Use str_enabled_disabled() instead of a ternary operator
 - Fix emitting a resource_size_t argument for validate_region_offset()
 - Typo fixup in CXL driver-api documentation
 - Rename CFMWS coherency restriction defines
 - Add convention doc on dealing with the x86 low memory hole and CXL
 
 Poison Inject support series:
 - Move hpa_to_spa callback to new root decoder ops structure
 - Define a SPA to HPA callback for interleave calculation with XOR math
 - Add support for SPA to DPA address translation with XOR
 - Add locked variants of poison inject and clear functions
 - Add inject and clear poison support by region offset
 
 CXL access coordinates update fix series:
 - A comment update for hotplug memory callback priority defines
 - Add node_update_perf_attrs() for updating perf attrs on a node
 - Update cxl_access_coordinates() to use the new node update function
 - Remove hmat_update_target_coordinates() and related code
 
 CXL delayed downstream port enumeration and initialization series:
 - Add helper to detect top of CXL device topology and remove open coding
 - Add helper to delete single dport
 - Add a cached copy of target_map to cxl_decoder
 - Refactor decoder setup to reduce cxl_test burden
 - Defer dport allocation for switch ports
 - Add mock version of devm_cxl_add_dport_by_dev() for cxl_test
 - Adjust the mock version of devm_cxl_switch_port_decoders_setup() due to
   cxl core usage
 - Setup target_map for cxl_test decoder initialization
 - Change SSLBIS handler to handle single dport
 - Move port register setup to when first dport appears
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEE5DAy15EJMCV1R6v9YGjFFmlTOEoFAmjdqxAACgkQYGjFFmlT
 OEoayhAAqW6nPPM2XNiNigGp5oxQTt0GiblhS/PDAq+VHZaFQtrM6lvbrvqj1Gus
 g49ID4SkKVq2SKZlGVk5xcPE2BEeKp4YOF6mmAqKNy6geeG0mVXf/gNbhd/8pnpm
 zmbX9FdAR2x4ZimPzBZZO0vlm5NG61sVHWyz1VcU9rQUpB8shSQF3QIoKypq1MpU
 G7PgN92Pc8Ztr1cI9RSFXV6p5Bd26IMt7Bi3Wub5z4rtnQAFzhtQ5oFpen6Dc4Gj
 py+BwY9x25HsVCWD6oQIFvDfH5iiZfSbL62h2ttbalkqM0dFJedKmHq1rNMpsV/4
 mNY2COr2uTBOB7Zht10+Q46pAAYdBTVKFIhRAEUidnCmzF8PPEAEYISo4vE5Oqih
 lUJYhU8tREacLJ9jR4ro0NwgM43mESX4Aj84CV+BtPA2SyI2qsqY8xCHXyyiaLsn
 GUGSbVXRbhtAWs+gM8ciERx9U/AE+yV8oABaz/zeUO5RMSB2ho6y9XD6PFHfp5Fb
 w+Ud6CNkk00HR8Api38zfHJeMOR+GzsaebZgW8pOcfucC6dxS1rhUa3iN0ifLMIC
 QdRSemwBjbPWJ21JwHxJCGVv/OUocziTg9H5ydZfaxOoXjIKYZcKo5ePUe4KH7bi
 2tNjlA8BCycBiwaUMIMEHcIZNNm2GGddeN6TwP8QLVAevj3Hkl0=
 =VYmp
 -----END PGP SIGNATURE-----

Merge tag 'cxl-for-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull CXL updates from Dave Jiang:
 "The changes include adding poison injection support, fixing CXL access
  coordinates when onlining CXL memory, and delaing the enumeration of
  downstream switch ports for CXL hierarchy to ensure that the CXL link
  is established at the time of enumeration to address a few issues
  observed on AMD and Intel platforms.

  Misc changes:
   - Use str_plural() instead of open-coded string selection
   - Use str_enabled_disabled() instead of a ternary operator
   - Fix emitting a resource_size_t argument for
     validate_region_offset()
   - Typo fixup in CXL driver-api documentation
   - Rename CFMWS coherency restriction defines
   - Add convention doc on dealing with the x86 low memory hole
     and CXL

  Poison Inject support:
   - Move hpa_to_spa callback to new root decoder ops structure
   - Define a SPA to HPA callback for interleave calculation with
     XOR math
   - Add support for SPA to DPA address translation with XOR
   - Add locked variants of poison inject and clear functions
   - Add inject and clear poison support by region offset

  CXL access coordinates update fix:
   - A comment update for hotplug memory callback priority defines
   - Add node_update_perf_attrs() for updating perf attrs on a node
   - Update cxl_access_coordinates() to use the new node update function
   - Remove hmat_update_target_coordinates() and related code

  CXL delayed downstream port enumeration and initialization:
   - Add helper to detect top of CXL device topology and remove
     open coding
   - Add helper to delete single dport
   - Add a cached copy of target_map to cxl_decoder
   - Refactor decoder setup to reduce cxl_test burden
   - Defer dport allocation for switch ports
   - Add mock version of devm_cxl_add_dport_by_dev() for cxl_test
   - Adjust the mock version of devm_cxl_switch_port_decoders_setup()
     due to cxl core usage
   - Setup target_map for cxl_test decoder initialization
   - Change SSLBIS handler to handle single dport
   - Move port register setup to when first dport appears"

* tag 'cxl-for-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (25 commits)
  cxl: Move port register setup to when first dport appear
  cxl: Change sslbis handler to only handle single dport
  cxl/test: Setup target_map for cxl_test decoder initialization
  cxl/test: Adjust the mock version of devm_cxl_switch_port_decoders_setup()
  cxl/test: Add mock version of devm_cxl_add_dport_by_dev()
  cxl: Defer dport allocation for switch ports
  cxl/test: Refactor decoder setup to reduce cxl_test burden
  cxl: Add a cached copy of target_map to cxl_decoder
  cxl: Add helper to delete dport
  cxl: Add helper to detect top of CXL device topology
  cxl: Documentation/driver-api/cxl: Describe the x86 Low Memory Hole solution
  cxl/acpi: Rename CFMW coherency restrictions
  Documentation/driver-api: Fix typo error in cxl
  acpi/hmat: Remove now unused hmat_update_target_coordinates()
  cxl, acpi/hmat: Update CXL access coordinates directly instead of through HMAT
  drivers/base/node: Add a helper function node_update_perf_attrs()
  mm/memory_hotplug: Update comment for hotplug memory callback priorities
  cxl: Fix emit of type resource_size_t argument for validate_region_offset()
  cxl/region: Add inject and clear poison by region offset
  cxl/core: Add locked variants of the poison inject and clear funcs
  ...
Linus Torvalds 2025-10-04 12:02:50 -07:00
commit d104e3d17f
28 changed files with 1261 additions and 390 deletions


@ -19,6 +19,20 @@ Description:
is returned to the user. The inject_poison attribute is only
visible for devices supporting the capability.
TEST-ONLY INTERFACE: This interface is intended for testing
and validation purposes only. It is not a data repair mechanism
and should never be used on production systems or live data.
DATA LOSS RISK: For CXL persistent memory (PMEM) devices,
poison injection can result in permanent data loss. Injected
poison may render data permanently inaccessible even after
clearing, as the clear operation writes zeros and does not
recover original data.
SYSTEM STABILITY RISK: For volatile memory, poison injection
can cause kernel crashes, system instability, or unpredictable
behavior if the poisoned addresses are accessed by running code
or critical kernel structures.
What: /sys/kernel/debug/cxl/memX/clear_poison
Date: April, 2023
@ -35,6 +49,79 @@ Description:
The clear_poison attribute is only visible for devices
supporting the capability.
TEST-ONLY INTERFACE: This interface is intended for testing
and validation purposes only. It is not a data repair mechanism
and should never be used on production systems or live data.
CLEAR IS NOT DATA RECOVERY: This operation writes zeros to the
specified address range and removes the address from the poison
list. It does NOT recover or restore original data that may have
been present before poison injection. Any original data at the
cleared address is permanently lost and replaced with zeros.
CLEAR IS NOT A REPAIR MECHANISM: This interface is for testing
purposes only and should not be used as a data repair tool.
Clearing poison is fundamentally different from data recovery
or error correction.
What: /sys/kernel/debug/cxl/regionX/inject_poison
Date: August, 2025
Contact: linux-cxl@vger.kernel.org
Description:
(WO) When a Host Physical Address (HPA) is written to this
attribute, the region driver translates it to a Device
Physical Address (DPA) and identifies the corresponding
memdev. It then sends an inject poison command to that memdev
at the translated DPA. Refer to the memdev ABI entry at:
/sys/kernel/debug/cxl/memX/inject_poison for the detailed
behavior. This attribute is only visible if all memdevs
participating in the region support both inject and clear
poison commands.
TEST-ONLY INTERFACE: This interface is intended for testing
and validation purposes only. It is not a data repair mechanism
and should never be used on production systems or live data.
DATA LOSS RISK: For CXL persistent memory (PMEM) devices,
poison injection can result in permanent data loss. Injected
poison may render data permanently inaccessible even after
clearing, as the clear operation writes zeros and does not
recover original data.
SYSTEM STABILITY RISK: For volatile memory, poison injection
can cause kernel crashes, system instability, or unpredictable
behavior if the poisoned addresses are accessed by running code
or critical kernel structures.
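As an editorial illustration of the write described above (not part of the kernel ABI text), a userspace test could open the debugfs attribute and write the HPA as a string. The "region0" instance and the HPA value below are hypothetical, and the attribute only exists when debugfs is enabled and all memdevs in the region support the poison commands:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* hypothetical region instance and HPA; adjust to a real region/address */
	const char *path = "/sys/kernel/debug/cxl/region0/inject_poison";
	const char *hpa = "0x2080000000\n";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* the write is rejected if the HPA does not fall inside the region */
	if (write(fd, hpa, strlen(hpa)) < 0)
		perror("write");
	close(fd);
	return 0;
}

Given the data-loss and stability risks listed above, such a write should only ever be issued on a test system.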
What: /sys/kernel/debug/cxl/regionX/clear_poison
Date: August, 2025
Contact: linux-cxl@vger.kernel.org
Description:
(WO) When a Host Physical Address (HPA) is written to this
attribute, the region driver translates it to a Device
Physical Address (DPA) and identifies the corresponding
memdev. It then sends a clear poison command to that memdev
at the translated DPA. Refer to the memdev ABI entry at:
/sys/kernel/debug/cxl/memX/clear_poison for the detailed
behavior. This attribute is only visible if all memdevs
participating in the region support both inject and clear
poison commands.
TEST-ONLY INTERFACE: This interface is intended for testing
and validation purposes only. It is not a data repair mechanism
and should never be used on production systems or live data.
CLEAR IS NOT DATA RECOVERY: This operation writes zeros to the
specified address range and removes the address from the poison
list. It does NOT recover or restore original data that may have
been present before poison injection. Any original data at the
cleared address is permanently lost and replaced with zeros.
CLEAR IS NOT A REPAIR MECHANISM: This interface is for testing
purposes only and should not be used as a data repair tool.
Clearing poison is fundamentally different from data recovery
or error correction.
What: /sys/kernel/debug/cxl/einj_types
Date: January, 2024
KernelVersion: v6.9


@ -45,3 +45,138 @@ Detailed Description of the Change
----------------------------------
<Propose spec language that corrects the conflict.>
Resolve conflict between CFMWS, Platform Memory Holes, and Endpoint Decoders
============================================================================
Document
--------
CXL Revision 3.2, Version 1.0
License
-------
SPDX-License Identifier: CC-BY-4.0
Creator/Contributors
--------------------
- Fabio M. De Francesco, Intel
- Dan J. Williams, Intel
- Mahesh Natu, Intel
Summary of the Change
---------------------
According to the current Compute Express Link (CXL) Specifications (Revision
3.2, Version 1.0), the CXL Fixed Memory Window Structure (CFMWS) describes zero
or more Host Physical Address (HPA) windows associated with each CXL Host
Bridge. Each window represents a contiguous HPA range that may be interleaved
across one or more targets, including CXL Host Bridges. Each window has a set
of restrictions that govern its usage. It is the Operating System-directed
configuration and Power Management (OSPM) responsibility to utilize each window
for the specified use.
Table 9-22 of the current CXL Specifications states that the Window Size field
contains the total number of consecutive bytes of HPA this window describes.
This value must be a multiple of the Number of Interleave Ways (NIW) * 256 MB.
Platform Firmware (BIOS) might reserve physical addresses below 4 GB where a
memory gap such as the Low Memory Hole for PCIe MMIO may exist. In such cases,
the CFMWS Range Size may not adhere to the NIW * 256 MB rule.
The HPA represents the actual physical memory address space that the CXL devices
can decode and respond to, while the System Physical Address (SPA), a related
but distinct concept, represents the system-visible address space that users can
direct transaction to and so it excludes reserved regions.
BIOS publishes CFMWS to communicate the active SPA ranges that, on platforms
with LMH's, map to a strict subset of the HPA. The SPA range trims out the hole,
resulting in lost capacity in the Endpoints with no SPA to map to that part of
the HPA range that intersects the hole.
E.g, an x86 platform with two CFMWS and an LMH starting at 2 GB:
+--------+------------+-------------------+------------------+-------------------+------+
| Window | CFMWS Base | CFMWS Size | HDM Decoder Base | HDM Decoder Size | Ways |
+========+============+===================+==================+===================+======+
| 0 | 0 GB | 2 GB | 0 GB | 3 GB | 12 |
+--------+------------+-------------------+------------------+-------------------+------+
| 1 | 4 GB | NIW*256MB Aligned | 4 GB | NIW*256MB Aligned | 12 |
+--------+------------+-------------------+------------------+-------------------+------+
HDM decoder base and HDM decoder size represent all the 12 Endpoint Decoders of
a 12 ways region and all the intermediate Switch Decoders. They are configured
by the BIOS according to the NIW * 256MB rule, resulting in a HPA range size of
3GB. Instead, the CFMWS Base and CFMWS Size are used to configure the Root
Decoder HPA range that results smaller (2GB) than that of the Switch and
Endpoint Decoders in the hierarchy (3GB).
This creates 2 issues which lead to a failure to construct a region:
1) A mismatch in region size between root and any HDM decoder. The root decoders
will always be smaller due to the trim.
2) The trim causes the root decoder to violate the (NIW * 256MB) rule.
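To make the mismatch concrete, here is an editorial worked check using only the example numbers from the table above (not proposed spec text):

\[ 12 \times 256\,\mathrm{MB} = 3\,\mathrm{GB};\quad 2\,\mathrm{GB} \bmod 3\,\mathrm{GB} \neq 0;\quad 3\,\mathrm{GB} \bmod 3\,\mathrm{GB} = 0 \]

So the trimmed 2 GB root decoder fails the multiple-of-NIW * 256 MB rule that the untrimmed 3 GB switch and endpoint decoder ranges still satisfy.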
This change allows a region with a base address of 0GB to bypass these checks to
allow for region creation with the trimmed root decoder address range.
This change does not allow for any other arbitrary region to violate these
checks - it is intended exclusively to enable x86 platforms which map CXL memory
under 4GB.
Despite the HDM decoders covering the PCIE hole HPA region, it is expected that
the platform will never route address accesses to the CXL complex because the
root decoder only covers the trimmed region (which excludes this). This is
outside the ability of Linux to enforce.
On the example platform, only the first 2GB will be potentially usable, but
Linux, aiming to adhere to the current specifications, fails to construct
Regions and attach Endpoint and intermediate Switch Decoders to them.
There are several points of failure that due to the expectation that the Root
Decoder HPA size, that is equal to the CFMWS from which it is configured, has
to be greater or equal to the matching Switch and Endpoint HDM Decoders.
In order to succeed with construction and attachment, Linux must construct a
Region with Root Decoder HPA range size, and then attach to that all the
intermediate Switch Decoders and Endpoint Decoders that belong to the hierarchy
regardless of their range sizes.
Benefits of the Change
----------------------
Without the change, the OSPM wouldn't match intermediate Switch and Endpoint
Decoders with Root Decoders configured with CFMWS HPA sizes that don't align
with the NIW * 256MB constraint, and so it leads to lost memdev capacity.
This change allows the OSPM to construct Regions and attach intermediate Switch
and Endpoint Decoders to them, so that the addressable part of the memory
devices total capacity is made available to the users.
References
----------
Compute Express Link Specification Revision 3.2, Version 1.0
<https://www.computeexpresslink.org/>
Detailed Description of the Change
----------------------------------
The description of the Window Size field in table 9-22 needs to account for
platforms with Low Memory Holes, where SPA ranges might be subsets of the
endpoints HPA. Therefore, it has to be changed to the following:
"The total number of consecutive bytes of HPA this window represents. This value
shall be a multiple of NIW * 256 MB.
On platforms that reserve physical addresses below 4 GB, such as the Low Memory
Hole for PCIe MMIO on x86, an instance of CFMWS whose Base HPA range is 0 might
have a size that doesn't align with the NIW * 256 MB constraint.
Note that the matching intermediate Switch Decoders and the Endpoint Decoders
HPA range sizes must still align to the above-mentioned rule, but the memory
capacity that exceeds the CFMWS window size won't be accessible.".


@ -173,7 +173,7 @@ Accelerator
User Flow Support
-----------------
* [0] Inject & clear poison by HPA
* [2] Inject & clear poison by region offset
Details
=======


@ -202,7 +202,7 @@ future and such a configuration should be avoided.
Memory Holes
------------
If your platform includes memory holes intersparsed between your CXL memory, it
If your platform includes memory holes interspersed between your CXL memory, it
is recommended to utilize multiple decoders to cover these regions of memory,
rather than try to program the decoders to accept the entire range and expect
Linux to manage the overlap.


@ -74,7 +74,6 @@ struct memory_target {
struct node_cache_attrs cache_attrs;
u8 gen_port_device_handle[ACPI_SRAT_DEVICE_HANDLE_SIZE];
bool registered;
bool ext_updated; /* externally updated */
};
struct memory_initiator {
@ -368,35 +367,6 @@ static void hmat_update_target_access(struct memory_target *target,
}
}
int hmat_update_target_coordinates(int nid, struct access_coordinate *coord,
enum access_coordinate_class access)
{
struct memory_target *target;
int pxm;
if (nid == NUMA_NO_NODE)
return -EINVAL;
pxm = node_to_pxm(nid);
guard(mutex)(&target_lock);
target = find_mem_target(pxm);
if (!target)
return -ENODEV;
hmat_update_target_access(target, ACPI_HMAT_READ_LATENCY,
coord->read_latency, access);
hmat_update_target_access(target, ACPI_HMAT_WRITE_LATENCY,
coord->write_latency, access);
hmat_update_target_access(target, ACPI_HMAT_READ_BANDWIDTH,
coord->read_bandwidth, access);
hmat_update_target_access(target, ACPI_HMAT_WRITE_BANDWIDTH,
coord->write_bandwidth, access);
target->ext_updated = true;
return 0;
}
EXPORT_SYMBOL_GPL(hmat_update_target_coordinates);
static __init void hmat_add_locality(struct acpi_hmat_locality *hmat_loc)
{
struct memory_locality *loc;
@ -773,10 +743,6 @@ static void hmat_update_target_attrs(struct memory_target *target,
u32 best = 0;
int i;
/* Don't update if an external agent has changed the data. */
if (target->ext_updated)
return;
/* Don't update for generic port if there's no device handle */
if ((access == NODE_ACCESS_CLASS_GENPORT_SINK_LOCAL ||
access == NODE_ACCESS_CLASS_GENPORT_SINK_CPU) &&


@ -248,6 +248,44 @@ void node_set_perf_attrs(unsigned int nid, struct access_coordinate *coord,
}
EXPORT_SYMBOL_GPL(node_set_perf_attrs);
/**
* node_update_perf_attrs - Update the performance values for given access class
* @nid: Node identifier to be updated
* @coord: Heterogeneous memory performance coordinates
* @access: The access class for the given attributes
*/
void node_update_perf_attrs(unsigned int nid, struct access_coordinate *coord,
enum access_coordinate_class access)
{
struct node_access_nodes *access_node;
struct node *node;
int i;
if (WARN_ON_ONCE(!node_online(nid)))
return;
node = node_devices[nid];
list_for_each_entry(access_node, &node->access_list, list_node) {
if (access_node->access != access)
continue;
access_node->coord = *coord;
for (i = 0; access_attrs[i]; i++) {
sysfs_notify(&access_node->dev.kobj,
NULL, access_attrs[i]->name);
}
break;
}
/* When setting CPU access coordinates, update mempolicy */
if (access != ACCESS_COORDINATE_CPU)
return;
if (mempolicy_set_node_perf(nid, coord))
pr_info("failed to set mempolicy attrs for node %d\n", nid);
}
EXPORT_SYMBOL_GPL(node_update_perf_attrs);
/**
* struct node_cache_info - Internal tracking for memory node caches
* @dev: Device represeting the cache level


@ -20,8 +20,7 @@ static const guid_t acpi_cxl_qtg_id_guid =
GUID_INIT(0xF365F9A6, 0xA7DE, 0x4071,
0xA6, 0x6A, 0xB4, 0x0C, 0x0B, 0x4F, 0x8E, 0x52);
static u64 cxl_apply_xor_maps(struct cxl_root_decoder *cxlrd, u64 addr)
static u64 cxl_xor_hpa_to_spa(struct cxl_root_decoder *cxlrd, u64 hpa)
{
struct cxl_cxims_data *cximsd = cxlrd->platform_data;
int hbiw = cxlrd->cxlsd.nr_targets;
@ -30,19 +29,23 @@ static u64 cxl_xor_hpa_to_spa(struct cxl_root_decoder *cxlrd, u64 hpa)
/* No xormaps for host bridge interleave ways of 1 or 3 */
if (hbiw == 1 || hbiw == 3)
return hpa;
return addr;
/*
* For root decoders using xormaps (hbiw: 2,4,6,8,12,16) restore
* the position bit to its value before the xormap was applied at
* HPA->DPA translation.
* In regions using XOR interleave arithmetic the CXL HPA may not
* be the same as the SPA. This helper performs the SPA->CXL HPA
* or the CXL HPA->SPA translation. Since XOR is self-inverting,
* so is this function.
*
* For root decoders using xormaps (hbiw: 2,4,6,8,12,16) applying the
* xormaps will toggle a position bit.
*
* pos is the lowest set bit in an XORMAP
* val is the XORALLBITS(HPA & XORMAP)
* val is the XORALLBITS(addr & XORMAP)
*
* XORALLBITS: The CXL spec (3.1 Table 9-22) defines XORALLBITS
* as an operation that outputs a single bit by XORing all the
* bits in the input (hpa & xormap). Implement XORALLBITS using
* bits in the input (addr & xormap). Implement XORALLBITS using
* hweight64(). If the hamming weight is even the XOR of those
* bits results in val==0, if odd the XOR result is val==1.
*/
@ -51,11 +54,11 @@ static u64 cxl_xor_hpa_to_spa(struct cxl_root_decoder *cxlrd, u64 hpa)
if (!cximsd->xormaps[i])
continue;
pos = __ffs(cximsd->xormaps[i]);
val = (hweight64(hpa & cximsd->xormaps[i]) & 1);
val = (hweight64(addr & cximsd->xormaps[i]) & 1);
hpa = (hpa & ~(1ULL << pos)) | (val << pos);
addr = (addr & ~(1ULL << pos)) | (val << pos);
}
return hpa;
return addr;
}
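The self-inverting behavior called out in the new comment can be checked in isolation. The following standalone sketch (userspace C, with compiler builtins standing in for __ffs()/hweight64(), and a made-up xormap and address) applies the same parity/bit-toggle step twice and gets the original address back:

#include <stdint.h>
#include <stdio.h>

static uint64_t apply_xor_maps(uint64_t addr, const uint64_t *xormaps, int n)
{
	for (int i = 0; i < n; i++) {
		uint64_t map = xormaps[i];

		if (!map)
			continue;
		int pos = __builtin_ctzll(map);                      /* lowest set bit, like __ffs() */
		uint64_t val = __builtin_popcountll(addr & map) & 1; /* XORALLBITS via parity */

		addr = (addr & ~(1ULL << pos)) | (val << pos);
	}
	return addr;
}

int main(void)
{
	const uint64_t xormaps[] = { 0x1100 };	/* made-up xormap: bits 8 and 12 */
	uint64_t spa = 0x123456789aULL;		/* made-up address */
	uint64_t hpa = apply_xor_maps(spa, xormaps, 1);

	/* applying the maps a second time restores the original address */
	printf("spa=%#llx hpa=%#llx back=%#llx\n",
	       (unsigned long long)spa, (unsigned long long)hpa,
	       (unsigned long long)apply_xor_maps(hpa, xormaps, 1));
	return 0;
}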
struct cxl_cxims_context {
@ -113,9 +116,9 @@ static unsigned long cfmws_to_decoder_flags(int restrictions)
{
unsigned long flags = CXL_DECODER_F_ENABLE;
if (restrictions & ACPI_CEDT_CFMWS_RESTRICT_TYPE2)
if (restrictions & ACPI_CEDT_CFMWS_RESTRICT_DEVMEM)
flags |= CXL_DECODER_F_TYPE2;
if (restrictions & ACPI_CEDT_CFMWS_RESTRICT_TYPE3)
if (restrictions & ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM)
flags |= CXL_DECODER_F_TYPE3;
if (restrictions & ACPI_CEDT_CFMWS_RESTRICT_VOLATILE)
flags |= CXL_DECODER_F_RAM;
@ -398,7 +401,6 @@ DEFINE_FREE(del_cxl_resource, struct resource *, if (_T) del_cxl_resource(_T))
static int __cxl_parse_cfmws(struct acpi_cedt_cfmws *cfmws,
struct cxl_cfmws_context *ctx)
{
int target_map[CXL_DECODER_MAX_INTERLEAVE];
struct cxl_port *root_port = ctx->root_port;
struct cxl_cxims_context cxims_ctx;
struct device *dev = ctx->dev;
@ -416,8 +418,6 @@ static int __cxl_parse_cfmws(struct acpi_cedt_cfmws *cfmws,
rc = eig_to_granularity(cfmws->granularity, &ig);
if (rc)
return rc;
for (i = 0; i < ways; i++)
target_map[i] = cfmws->interleave_targets[i];
struct resource *res __free(del_cxl_resource) = alloc_cxl_resource(
cfmws->base_hpa, cfmws->window_size, ctx->id++);
@ -443,6 +443,8 @@ static int __cxl_parse_cfmws(struct acpi_cedt_cfmws *cfmws,
.end = cfmws->base_hpa + cfmws->window_size - 1,
};
cxld->interleave_ways = ways;
for (i = 0; i < ways; i++)
cxld->target_map[i] = cfmws->interleave_targets[i];
/*
* Minimize the x1 granularity to advertise support for any
* valid region granularity
@ -472,10 +474,16 @@ static int __cxl_parse_cfmws(struct acpi_cedt_cfmws *cfmws,
cxlrd->qos_class = cfmws->qtg_id;
if (cfmws->interleave_arithmetic == ACPI_CEDT_CFMWS_ARITHMETIC_XOR)
cxlrd->hpa_to_spa = cxl_xor_hpa_to_spa;
rc = cxl_decoder_add(cxld, target_map);
if (cfmws->interleave_arithmetic == ACPI_CEDT_CFMWS_ARITHMETIC_XOR) {
cxlrd->ops = kzalloc(sizeof(*cxlrd->ops), GFP_KERNEL);
if (!cxlrd->ops)
return -ENOMEM;
cxlrd->ops->hpa_to_spa = cxl_apply_xor_maps;
cxlrd->ops->spa_to_hpa = cxl_apply_xor_maps;
}
rc = cxl_decoder_add(cxld);
if (rc)
return rc;


@ -338,7 +338,7 @@ static int match_cxlrd_hb(struct device *dev, void *data)
guard(rwsem_read)(&cxl_rwsem.region);
for (int i = 0; i < cxlsd->nr_targets; i++) {
if (host_bridge == cxlsd->target[i]->dport_dev)
if (cxlsd->target[i] && host_bridge == cxlsd->target[i]->dport_dev)
return 1;
}
@ -440,8 +440,8 @@ static int cdat_sslbis_handler(union acpi_subtable_headers *header, void *arg,
} *tbl = (struct acpi_cdat_sslbis_table *)header;
int size = sizeof(header->cdat) + sizeof(tbl->sslbis_header);
struct acpi_cdat_sslbis *sslbis;
struct cxl_port *port = arg;
struct cxl_dport *dport = arg;
struct device *dev = &port->dev;
struct device *dev = &dport->port->dev;
int remain, entries, i;
u16 len;
@ -467,8 +467,6 @@ static int cdat_sslbis_handler(union acpi_subtable_headers *header, void *arg,
u16 y = le16_to_cpu((__force __le16)tbl->entries[i].porty_id);
__le64 le_base;
__le16 le_val;
struct cxl_dport *dport;
unsigned long index;
u16 dsp_id;
u64 val;
@ -499,28 +497,27 @@ static int cdat_sslbis_handler(union acpi_subtable_headers *header, void *arg,
val = cdat_normalize(le16_to_cpu(le_val), le64_to_cpu(le_base),
sslbis->data_type);
xa_for_each(&port->dports, index, dport) {
if (dsp_id == ACPI_CDAT_SSLBIS_ANY_PORT ||
dsp_id == dport->port_id) {
cxl_access_coordinate_set(dport->coord,
sslbis->data_type,
val);
}
}
if (dsp_id == ACPI_CDAT_SSLBIS_ANY_PORT ||
dsp_id == dport->port_id) {
cxl_access_coordinate_set(dport->coord,
sslbis->data_type, val);
return 0;
}
}
}
return 0;
}
void cxl_switch_parse_cdat(struct cxl_port *port)
void cxl_switch_parse_cdat(struct cxl_dport *dport)
{
struct cxl_port *port = dport->port;
int rc;
if (!port->cdat.table)
return;
rc = cdat_table_parse(ACPI_CDAT_TYPE_SSLBIS, cdat_sslbis_handler,
port, port->cdat.table, port->cdat.length);
dport, port->cdat.table, port->cdat.length);
rc = cdat_table_parse_output(rc);
if (rc)
dev_dbg(&port->dev, "Failed to parse SSLBIS: %d\n", rc);
@ -1075,14 +1072,3 @@ void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
cxlr->coord[i].write_bandwidth += perf->coord[i].write_bandwidth;
}
}
int cxl_update_hmat_access_coordinates(int nid, struct cxl_region *cxlr,
enum access_coordinate_class access)
{
return hmat_update_target_coordinates(nid, &cxlr->coord[access], access);
}
bool cxl_need_node_perf_attrs_update(int nid)
{
return !acpi_node_backed_by_real_pxm(nid);
}


@ -135,11 +135,12 @@ enum cxl_poison_trace_type {
CXL_POISON_TRACE_CLEAR,
};
enum poison_cmd_enabled_bits;
bool cxl_memdev_has_poison_cmd(struct cxl_memdev *cxlmd,
enum poison_cmd_enabled_bits cmd);
long cxl_pci_get_latency(struct pci_dev *pdev);
int cxl_pci_get_bandwidth(struct pci_dev *pdev, struct access_coordinate *c);
int cxl_update_hmat_access_coordinates(int nid, struct cxl_region *cxlr,
enum access_coordinate_class access);
bool cxl_need_node_perf_attrs_update(int nid);
int cxl_port_get_switch_dport_bandwidth(struct cxl_port *port,
struct access_coordinate *c);
@ -147,6 +148,11 @@ int cxl_ras_init(void);
void cxl_ras_exit(void);
int cxl_gpf_port_setup(struct cxl_dport *dport);
struct cxl_hdm;
int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
struct cxl_endpoint_dvsec_info *info);
int cxl_port_get_possible_dports(struct cxl_port *port);
#ifdef CONFIG_CXL_FEATURES
struct cxl_feat_entry *
cxl_feature_info(struct cxl_features_state *cxlfs, const uuid_t *uuid);


@ -21,12 +21,11 @@ struct cxl_rwsem cxl_rwsem = {
.dpa = __RWSEM_INITIALIZER(cxl_rwsem.dpa),
};
static int add_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
int *target_map)
static int add_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld)
{
int rc;
rc = cxl_decoder_add_locked(cxld, target_map);
rc = cxl_decoder_add_locked(cxld);
if (rc) {
put_device(&cxld->dev);
dev_err(&port->dev, "Failed to add decoder\n");
@ -50,12 +49,9 @@ static int add_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
* are claimed and passed to the single dport. Disable the range until the first
* CXL region is enumerated / activated.
*/
int devm_cxl_add_passthrough_decoder(struct cxl_port *port)
static int devm_cxl_add_passthrough_decoder(struct cxl_port *port)
{
struct cxl_switch_decoder *cxlsd;
struct cxl_dport *dport = NULL;
int single_port_map[1];
unsigned long index;
struct cxl_hdm *cxlhdm = dev_get_drvdata(&port->dev);
/*
@ -71,13 +67,8 @@ int devm_cxl_add_passthrough_decoder(struct cxl_port *port)
device_lock_assert(&port->dev);
xa_for_each(&port->dports, index, dport)
break;
single_port_map[0] = dport->port_id;
return add_hdm_decoder(port, &cxlsd->cxld, single_port_map);
return add_hdm_decoder(port, &cxlsd->cxld);
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_add_passthrough_decoder, "CXL");
static void parse_hdm_decoder_caps(struct cxl_hdm *cxlhdm)
{
@ -147,8 +138,8 @@ static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
* @port: cxl_port to map
* @info: cached DVSEC range register info
*/
struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
static struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
struct cxl_endpoint_dvsec_info *info)
{
struct cxl_register_map *reg_map = &port->reg_map;
struct device *dev = &port->dev;
@ -197,13 +188,12 @@ struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
*/
if (should_emulate_decoders(info)) {
dev_dbg(dev, "Fallback map %d range register%s\n", info->ranges,
info->ranges > 1 ? "s" : "");
str_plural(info->ranges));
cxlhdm->decoder_count = info->ranges;
}
return cxlhdm;
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_setup_hdm, "CXL");
static void __cxl_dpa_debug(struct seq_file *file, struct resource *r, int depth)
{
@ -984,7 +974,7 @@ static int cxl_setup_hdm_decoder_from_dvsec(
}
static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
int *target_map, void __iomem *hdm, int which,
void __iomem *hdm, int which,
u64 *dpa_base, struct cxl_endpoint_dvsec_info *info)
{
struct cxl_endpoint_decoder *cxled = NULL;
@ -1103,7 +1093,7 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
hi = readl(hdm + CXL_HDM_DECODER0_TL_HIGH(which));
target_list.value = (hi << 32) + lo;
for (i = 0; i < cxld->interleave_ways; i++)
target_map[i] = target_list.target_id[i];
cxld->target_map[i] = target_list.target_id[i];
return 0;
}
@ -1168,8 +1158,8 @@ static void cxl_settle_decoders(struct cxl_hdm *cxlhdm)
* @cxlhdm: Structure to populate with HDM capabilities
* @info: cached DVSEC range register info
*/
int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
static int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
struct cxl_endpoint_dvsec_info *info)
{
void __iomem *hdm = cxlhdm->regs.hdm_decoder;
struct cxl_port *port = cxlhdm->port;
@ -1179,7 +1169,6 @@ int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
cxl_settle_decoders(cxlhdm);
for (i = 0; i < cxlhdm->decoder_count; i++) {
int target_map[CXL_DECODER_MAX_INTERLEAVE] = { 0 };
int rc, target_count = cxlhdm->target_count;
struct cxl_decoder *cxld;
@ -1207,8 +1196,7 @@ int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
cxld = &cxlsd->cxld;
}
rc = init_hdm_decoder(port, cxld, target_map, hdm, i,
&dpa_base, info);
rc = init_hdm_decoder(port, cxld, hdm, i, &dpa_base, info);
if (rc) {
dev_warn(&port->dev,
"Failed to initialize decoder%d.%d\n",
@ -1216,7 +1204,7 @@ int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
put_device(&cxld->dev);
return rc;
}
rc = add_hdm_decoder(port, cxld, target_map);
rc = add_hdm_decoder(port, cxld);
if (rc) {
dev_warn(&port->dev,
"Failed to add decoder%d.%d\n", port->id, i);
@ -1226,4 +1214,71 @@ int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
return 0;
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_enumerate_decoders, "CXL");
/**
* __devm_cxl_switch_port_decoders_setup - allocate and setup switch decoders
* @port: CXL port context
*
* Return 0 or -errno on error
*/
int __devm_cxl_switch_port_decoders_setup(struct cxl_port *port)
{
struct cxl_hdm *cxlhdm;
if (is_cxl_root(port) || is_cxl_endpoint(port))
return -EOPNOTSUPP;
cxlhdm = devm_cxl_setup_hdm(port, NULL);
if (!IS_ERR(cxlhdm))
return devm_cxl_enumerate_decoders(cxlhdm, NULL);
if (PTR_ERR(cxlhdm) != -ENODEV) {
dev_err(&port->dev, "Failed to map HDM decoder capability\n");
return PTR_ERR(cxlhdm);
}
if (cxl_port_get_possible_dports(port) == 1) {
dev_dbg(&port->dev, "Fallback to passthrough decoder\n");
return devm_cxl_add_passthrough_decoder(port);
}
dev_err(&port->dev, "HDM decoder capability not found\n");
return -ENXIO;
}
EXPORT_SYMBOL_NS_GPL(__devm_cxl_switch_port_decoders_setup, "CXL");
/**
* devm_cxl_endpoint_decoders_setup - allocate and setup endpoint decoders
* @port: CXL port context
*
* Return 0 or -errno on error
*/
int devm_cxl_endpoint_decoders_setup(struct cxl_port *port)
{
struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport_dev);
struct cxl_endpoint_dvsec_info info = { .port = port };
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct cxl_hdm *cxlhdm;
int rc;
if (!is_cxl_endpoint(port))
return -EOPNOTSUPP;
rc = cxl_dvsec_rr_decode(cxlds, &info);
if (rc < 0)
return rc;
cxlhdm = devm_cxl_setup_hdm(port, &info);
if (IS_ERR(cxlhdm)) {
if (PTR_ERR(cxlhdm) == -ENODEV)
dev_err(&port->dev, "HDM decoder registers not found\n");
return PTR_ERR(cxlhdm);
}
rc = cxl_hdm_decode_init(cxlds, cxlhdm, &info);
if (rc)
return rc;
return devm_cxl_enumerate_decoders(cxlhdm, &info);
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_endpoint_decoders_setup, "CXL");


@ -200,6 +200,14 @@ static ssize_t security_erase_store(struct device *dev,
static struct device_attribute dev_attr_security_erase =
__ATTR(erase, 0200, NULL, security_erase_store);
bool cxl_memdev_has_poison_cmd(struct cxl_memdev *cxlmd,
enum poison_cmd_enabled_bits cmd)
{
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
return test_bit(cmd, mds->poison.enabled_cmds);
}
static int cxl_get_poison_by_memdev(struct cxl_memdev *cxlmd)
{
struct cxl_dev_state *cxlds = cxlmd->cxlds;
@ -276,7 +284,7 @@ static int cxl_validate_poison_dpa(struct cxl_memdev *cxlmd, u64 dpa)
return 0;
}
int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa)
int cxl_inject_poison_locked(struct cxl_memdev *cxlmd, u64 dpa)
{
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_inject_poison inject;
@ -288,13 +296,8 @@ int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa)
if (!IS_ENABLED(CONFIG_DEBUG_FS))
return 0;
ACQUIRE(rwsem_read_intr, region_rwsem)(&cxl_rwsem.region);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &region_rwsem)))
return rc;
ACQUIRE(rwsem_read_intr, dpa_rwsem)(&cxl_rwsem.dpa);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &dpa_rwsem)))
return rc;
lockdep_assert_held(&cxl_rwsem.dpa);
lockdep_assert_held(&cxl_rwsem.region);
rc = cxl_validate_poison_dpa(cxlmd, dpa);
if (rc)
@ -324,9 +327,24 @@ int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa)
return 0;
}
int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa)
{
int rc;
ACQUIRE(rwsem_read_intr, region_rwsem)(&cxl_rwsem.region);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &region_rwsem)))
return rc;
ACQUIRE(rwsem_read_intr, dpa_rwsem)(&cxl_rwsem.dpa);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &dpa_rwsem)))
return rc;
return cxl_inject_poison_locked(cxlmd, dpa);
}
EXPORT_SYMBOL_NS_GPL(cxl_inject_poison, "CXL");
int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa)
int cxl_clear_poison_locked(struct cxl_memdev *cxlmd, u64 dpa)
{
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_clear_poison clear;
@ -338,13 +356,8 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa)
if (!IS_ENABLED(CONFIG_DEBUG_FS))
return 0;
ACQUIRE(rwsem_read_intr, region_rwsem)(&cxl_rwsem.region);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &region_rwsem)))
return rc;
ACQUIRE(rwsem_read_intr, dpa_rwsem)(&cxl_rwsem.dpa);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &dpa_rwsem)))
return rc;
lockdep_assert_held(&cxl_rwsem.dpa);
lockdep_assert_held(&cxl_rwsem.region);
rc = cxl_validate_poison_dpa(cxlmd, dpa);
if (rc)
@ -383,6 +396,21 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa)
return 0;
}
int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa)
{
int rc;
ACQUIRE(rwsem_read_intr, region_rwsem)(&cxl_rwsem.region);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &region_rwsem)))
return rc;
ACQUIRE(rwsem_read_intr, dpa_rwsem)(&cxl_rwsem.dpa);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &dpa_rwsem)))
return rc;
return cxl_clear_poison_locked(cxlmd, dpa);
}
EXPORT_SYMBOL_NS_GPL(cxl_clear_poison, "CXL");
static struct attribute *cxl_memdev_attributes[] = {


@ -24,6 +24,53 @@ static unsigned short media_ready_timeout = 60;
module_param(media_ready_timeout, ushort, 0644);
MODULE_PARM_DESC(media_ready_timeout, "seconds to wait for media ready");
static int pci_get_port_num(struct pci_dev *pdev)
{
u32 lnkcap;
int type;
type = pci_pcie_type(pdev);
if (type != PCI_EXP_TYPE_DOWNSTREAM && type != PCI_EXP_TYPE_ROOT_PORT)
return -EINVAL;
if (pci_read_config_dword(pdev, pci_pcie_cap(pdev) + PCI_EXP_LNKCAP,
&lnkcap))
return -ENXIO;
return FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
}
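For reference, FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap) above extracts the Port Number field, bits 31:24 of the Link Capabilities register. A minimal standalone sketch of the same extraction (the register value below is invented):

#include <stdint.h>
#include <stdio.h>

#define PCI_EXP_LNKCAP_PN	0xff000000	/* Port Number field mask */

int main(void)
{
	uint32_t lnkcap = 0x12004081;	/* hypothetical Link Capabilities value */
	unsigned int port_num = (lnkcap & PCI_EXP_LNKCAP_PN) >> 24;

	printf("port number %u\n", port_num);	/* 0x12 -> 18 */
	return 0;
}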
/**
* __devm_cxl_add_dport_by_dev - allocate a dport by dport device
* @port: cxl_port that hosts the dport
* @dport_dev: 'struct device' of the dport
*
* Returns the allocated dport on success or ERR_PTR() of -errno on error
*/
struct cxl_dport *__devm_cxl_add_dport_by_dev(struct cxl_port *port,
struct device *dport_dev)
{
struct cxl_register_map map;
struct pci_dev *pdev;
int port_num, rc;
if (!dev_is_pci(dport_dev))
return ERR_PTR(-EINVAL);
pdev = to_pci_dev(dport_dev);
port_num = pci_get_port_num(pdev);
if (port_num < 0)
return ERR_PTR(port_num);
rc = cxl_find_regblock(pdev, CXL_REGLOC_RBI_COMPONENT, &map);
if (rc)
return ERR_PTR(rc);
device_lock_assert(&port->dev);
return devm_cxl_add_dport(port, dport_dev, port_num, map.resource);
}
EXPORT_SYMBOL_NS_GPL(__devm_cxl_add_dport_by_dev, "CXL");
struct cxl_walk_context {
struct pci_bus *bus;
struct cxl_port *port;
@ -1169,3 +1216,45 @@ int cxl_gpf_port_setup(struct cxl_dport *dport)
return 0;
}
static int count_dports(struct pci_dev *pdev, void *data)
{
struct cxl_walk_context *ctx = data;
int type = pci_pcie_type(pdev);
if (pdev->bus != ctx->bus)
return 0;
if (!pci_is_pcie(pdev))
return 0;
if (type != ctx->type)
return 0;
ctx->count++;
return 0;
}
int cxl_port_get_possible_dports(struct cxl_port *port)
{
struct pci_bus *bus = cxl_port_to_pci_bus(port);
struct cxl_walk_context ctx;
int type;
if (!bus) {
dev_err(&port->dev, "No PCI bus found for port %s\n",
dev_name(&port->dev));
return -ENXIO;
}
if (pci_is_root_bus(bus))
type = PCI_EXP_TYPE_ROOT_PORT;
else
type = PCI_EXP_TYPE_DOWNSTREAM;
ctx = (struct cxl_walk_context) {
.bus = bus,
.type = type,
};
pci_walk_bus(bus, count_dports, &ctx);
return ctx.count;
}


@ -33,6 +33,15 @@
static DEFINE_IDA(cxl_port_ida);
static DEFINE_XARRAY(cxl_root_buses);
/*
* The terminal device in PCI is NULL and @platform_bus
* for platform devices (for cxl_test)
*/
static bool is_cxl_host_bridge(struct device *dev)
{
return (!dev || dev == &platform_bus);
}
int cxl_num_decoders_committed(struct cxl_port *port)
{
lockdep_assert_held(&cxl_rwsem.region);
@ -450,6 +459,7 @@ static void cxl_root_decoder_release(struct device *dev)
if (atomic_read(&cxlrd->region_id) >= 0)
memregion_free(atomic_read(&cxlrd->region_id));
__cxl_decoder_release(&cxlrd->cxlsd.cxld);
kfree(cxlrd->ops);
kfree(cxlrd);
}
@ -740,6 +750,7 @@ static struct cxl_port *cxl_port_alloc(struct device *uport_dev,
xa_init(&port->dports);
xa_init(&port->endpoints);
xa_init(&port->regions);
port->component_reg_phys = CXL_RESOURCE_NONE;
device_initialize(dev);
lockdep_set_class_and_subclass(&dev->mutex, &cxl_port_key, port->depth);
@ -858,9 +869,7 @@ static int cxl_port_add(struct cxl_port *port,
if (rc)
return rc;
rc = cxl_port_setup_regs(port, component_reg_phys);
if (rc)
return rc;
port->component_reg_phys = component_reg_phys;
} else {
rc = dev_set_name(dev, "root%d", port->id);
if (rc)
@ -1191,6 +1200,18 @@ __devm_cxl_add_dport(struct cxl_port *port, struct device *dport_dev,
cxl_debugfs_create_dport_dir(dport);
/*
* Setup port register if this is the first dport showed up. Having
* a dport also means that there is at least 1 active link.
*/
if (port->nr_dports == 1 &&
port->component_reg_phys != CXL_RESOURCE_NONE) {
rc = cxl_port_setup_regs(port, port->component_reg_phys);
if (rc)
return ERR_PTR(rc);
port->component_reg_phys = CXL_RESOURCE_NONE;
}
return dport;
}
@ -1348,21 +1369,6 @@ static struct cxl_port *find_cxl_port(struct device *dport_dev,
return port;
}
static struct cxl_port *find_cxl_port_at(struct cxl_port *parent_port,
struct device *dport_dev,
struct cxl_dport **dport)
{
struct cxl_find_port_ctx ctx = {
.dport_dev = dport_dev,
.parent_port = parent_port,
.dport = dport,
};
struct cxl_port *port;
port = __find_cxl_port(&ctx);
return port;
}
/*
* All users of grandparent() are using it to walk PCIe-like switch port
* hierarchy. A PCIe switch is comprised of a bridge device representing the
@ -1423,7 +1429,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_endpoint_autoremove, "CXL");
* through ->remove(). This "bottom-up" removal selectively removes individual
* child ports manually. This depends on devm_cxl_add_port() to not change is
* devm action registration order, and for dports to have already been
* destroyed by reap_dports().
* destroyed by del_dports().
*/
static void delete_switch_port(struct cxl_port *port)
{
@ -1432,18 +1438,24 @@ static void delete_switch_port(struct cxl_port *port)
devm_release_action(port->dev.parent, unregister_port, port);
}
static void reap_dports(struct cxl_port *port)
static void del_dport(struct cxl_dport *dport)
{
struct cxl_port *port = dport->port;
devm_release_action(&port->dev, cxl_dport_unlink, dport);
devm_release_action(&port->dev, cxl_dport_remove, dport);
devm_kfree(&port->dev, dport);
}
static void del_dports(struct cxl_port *port)
{
struct cxl_dport *dport;
unsigned long index;
device_lock_assert(&port->dev);
xa_for_each(&port->dports, index, dport) {
xa_for_each(&port->dports, index, dport)
devm_release_action(&port->dev, cxl_dport_unlink, dport);
del_dport(dport);
devm_release_action(&port->dev, cxl_dport_remove, dport);
devm_kfree(&port->dev, dport);
}
}
struct detach_ctx {
@ -1501,7 +1513,7 @@ static void cxl_detach_ep(void *data)
*/
died = true;
port->dead = true;
reap_dports(port);
del_dports(port);
}
device_unlock(&port->dev);
@ -1532,16 +1544,157 @@ static resource_size_t find_component_registers(struct device *dev)
return map.resource;
}
static int match_port_by_uport(struct device *dev, const void *data)
{
const struct device *uport_dev = data;
struct cxl_port *port;
if (!is_cxl_port(dev))
return 0;
port = to_cxl_port(dev);
return uport_dev == port->uport_dev;
}
/*
* Function takes a device reference on the port device. Caller should do a
* put_device() when done.
*/
static struct cxl_port *find_cxl_port_by_uport(struct device *uport_dev)
{
struct device *dev;
dev = bus_find_device(&cxl_bus_type, NULL, uport_dev, match_port_by_uport);
if (dev)
return to_cxl_port(dev);
return NULL;
}
static int update_decoder_targets(struct device *dev, void *data)
{
struct cxl_dport *dport = data;
struct cxl_switch_decoder *cxlsd;
struct cxl_decoder *cxld;
int i;
if (!is_switch_decoder(dev))
return 0;
cxlsd = to_cxl_switch_decoder(dev);
cxld = &cxlsd->cxld;
guard(rwsem_write)(&cxl_rwsem.region);
for (i = 0; i < cxld->interleave_ways; i++) {
if (cxld->target_map[i] == dport->port_id) {
cxlsd->target[i] = dport;
dev_dbg(dev, "dport%d found in target list, index %d\n",
dport->port_id, i);
return 1;
}
}
return 0;
}
DEFINE_FREE(del_cxl_dport, struct cxl_dport *, if (!IS_ERR_OR_NULL(_T)) del_dport(_T))
static struct cxl_dport *cxl_port_add_dport(struct cxl_port *port,
struct device *dport_dev)
{
struct cxl_dport *dport;
int rc;
device_lock_assert(&port->dev);
if (!port->dev.driver)
return ERR_PTR(-ENXIO);
dport = cxl_find_dport_by_dev(port, dport_dev);
if (dport) {
dev_dbg(&port->dev, "dport%d:%s already exists\n",
dport->port_id, dev_name(dport_dev));
return ERR_PTR(-EBUSY);
}
struct cxl_dport *new_dport __free(del_cxl_dport) =
devm_cxl_add_dport_by_dev(port, dport_dev);
if (IS_ERR(new_dport))
return new_dport;
cxl_switch_parse_cdat(new_dport);
if (ida_is_empty(&port->decoder_ida)) {
rc = devm_cxl_switch_port_decoders_setup(port);
if (rc)
return ERR_PTR(rc);
dev_dbg(&port->dev, "first dport%d:%s added with decoders\n",
new_dport->port_id, dev_name(dport_dev));
return no_free_ptr(new_dport);
}
/* New dport added, update the decoder targets */
device_for_each_child(&port->dev, new_dport, update_decoder_targets);
dev_dbg(&port->dev, "dport%d:%s added\n", new_dport->port_id,
dev_name(dport_dev));
return no_free_ptr(new_dport);
}
static struct cxl_dport *devm_cxl_create_port(struct device *ep_dev,
struct cxl_port *parent_port,
struct cxl_dport *parent_dport,
struct device *uport_dev,
struct device *dport_dev)
{
resource_size_t component_reg_phys;
device_lock_assert(&parent_port->dev);
if (!parent_port->dev.driver) {
dev_warn(ep_dev,
"port %s:%s:%s disabled, failed to enumerate CXL.mem\n",
dev_name(&parent_port->dev), dev_name(uport_dev),
dev_name(dport_dev));
}
struct cxl_port *port __free(put_cxl_port) =
find_cxl_port_by_uport(uport_dev);
if (!port) {
component_reg_phys = find_component_registers(uport_dev);
port = devm_cxl_add_port(&parent_port->dev, uport_dev,
component_reg_phys, parent_dport);
if (IS_ERR(port))
return ERR_CAST(port);
/*
* retry to make sure a port is found. a port device
* reference is taken.
*/
port = find_cxl_port_by_uport(uport_dev);
if (!port)
return ERR_PTR(-ENODEV);
dev_dbg(ep_dev, "created port %s:%s\n",
dev_name(&port->dev), dev_name(port->uport_dev));
} else {
/*
* Port was created before right before this function is
* called. Signal the caller to deal with it.
*/
return ERR_PTR(-EAGAIN);
}
guard(device)(&port->dev);
return cxl_port_add_dport(port, dport_dev);
}
static int add_port_attach_ep(struct cxl_memdev *cxlmd,
struct device *uport_dev,
struct device *dport_dev)
{
struct device *dparent = grandparent(dport_dev);
struct cxl_dport *dport, *parent_dport;
resource_size_t component_reg_phys;
int rc;
if (!dparent) {
if (is_cxl_host_bridge(dparent)) {
/*
* The iteration reached the topology root without finding the
* CXL-root 'cxl_port' on a previous iteration, fail for now to
@ -1553,42 +1706,31 @@ static int add_port_attach_ep(struct cxl_memdev *cxlmd,
}
struct cxl_port *parent_port __free(put_cxl_port) =
find_cxl_port(dparent, &parent_dport);
find_cxl_port_by_uport(dparent->parent);
if (!parent_port) {
/* iterate to create this parent_port */
return -EAGAIN;
}
/*
* Definition with __free() here to keep the sequence of
* dereferencing the device of the port before the parent_port releasing.
*/
struct cxl_port *port __free(put_cxl_port) = NULL;
scoped_guard(device, &parent_port->dev) {
if (!parent_port->dev.driver) {
dev_warn(&cxlmd->dev,
"port %s:%s disabled, failed to enumerate CXL.mem\n",
dev_name(&parent_port->dev), dev_name(uport_dev));
return -ENXIO;
}
port = find_cxl_port_at(parent_port, dport_dev, &dport);
if (!port) {
component_reg_phys = find_component_registers(uport_dev);
port = devm_cxl_add_port(&parent_port->dev, uport_dev,
component_reg_phys, parent_dport);
if (IS_ERR(port))
return PTR_ERR(port);
/* retry find to pick up the new dport information */
port = find_cxl_port_at(parent_port, dport_dev, &dport);
if (!port)
return -ENXIO;
}
parent_dport = cxl_find_dport_by_dev(parent_port, dparent);
if (!parent_dport) {
parent_dport = cxl_port_add_dport(parent_port, dparent);
if (IS_ERR(parent_dport))
return PTR_ERR(parent_dport);
}
dport = devm_cxl_create_port(&cxlmd->dev, parent_port,
parent_dport, uport_dev,
dport_dev);
if (IS_ERR(dport)) {
/* Port already exists, restart iteration */
if (PTR_ERR(dport) == -EAGAIN)
return 0;
return PTR_ERR(dport);
}
}
dev_dbg(&cxlmd->dev, "add to new port %s:%s\n",
dev_name(&port->dev), dev_name(port->uport_dev));
rc = cxl_add_ep(dport, &cxlmd->dev);
if (rc == -EBUSY) {
/*
@ -1601,6 +1743,25 @@ static int add_port_attach_ep(struct cxl_memdev *cxlmd,
return rc;
}
static struct cxl_dport *find_or_add_dport(struct cxl_port *port,
struct device *dport_dev)
{
struct cxl_dport *dport;
device_lock_assert(&port->dev);
dport = cxl_find_dport_by_dev(port, dport_dev);
if (!dport) {
dport = cxl_port_add_dport(port, dport_dev);
if (IS_ERR(dport))
return dport;
/* New dport added, restart iteration */
return ERR_PTR(-EAGAIN);
}
return dport;
}
int devm_cxl_enumerate_ports(struct cxl_memdev *cxlmd)
{
	struct device *dev = &cxlmd->dev;
@@ -1629,11 +1790,7 @@ int devm_cxl_enumerate_ports(struct cxl_memdev *cxlmd)
		struct device *uport_dev;
		struct cxl_dport *dport;

-		/*
-		 * The terminal "grandparent" in PCI is NULL and @platform_bus
-		 * for platform devices
-		 */
-		if (!dport_dev || dport_dev == &platform_bus)
+		if (is_cxl_host_bridge(dport_dev))
			return 0;

		uport_dev = dport_dev->parent;
@@ -1647,12 +1804,26 @@ int devm_cxl_enumerate_ports(struct cxl_memdev *cxlmd)
			dev_name(iter), dev_name(dport_dev),
			dev_name(uport_dev));
		struct cxl_port *port __free(put_cxl_port) =
-			find_cxl_port(dport_dev, &dport);
+			find_cxl_port_by_uport(uport_dev);
		if (port) {
			dev_dbg(&cxlmd->dev,
				"found already registered port %s:%s\n",
				dev_name(&port->dev),
				dev_name(port->uport_dev));
+
+			/*
+			 * RP port enumerated by cxl_acpi without dport will
+			 * have the dport added here.
+			 */
+			scoped_guard(device, &port->dev) {
+				dport = find_or_add_dport(port, dport_dev);
+				if (IS_ERR(dport)) {
+					if (PTR_ERR(dport) == -EAGAIN)
+						goto retry;
+					return PTR_ERR(dport);
+				}
+			}
+
			rc = cxl_add_ep(dport, &cxlmd->dev);

			/*
@@ -1704,24 +1875,24 @@ struct cxl_port *cxl_mem_find_port(struct cxl_memdev *cxlmd,
EXPORT_SYMBOL_NS_GPL(cxl_mem_find_port, "CXL");
static int decoder_populate_targets(struct cxl_switch_decoder *cxlsd,
-				    struct cxl_port *port, int *target_map)
+				    struct cxl_port *port)
{
+	struct cxl_decoder *cxld = &cxlsd->cxld;
	int i;

-	if (!target_map)
-		return 0;
-
	device_lock_assert(&port->dev);

	if (xa_empty(&port->dports))
-		return -EINVAL;
+		return 0;

	guard(rwsem_write)(&cxl_rwsem.region);
	for (i = 0; i < cxlsd->cxld.interleave_ways; i++) {
-		struct cxl_dport *dport = find_dport(port, target_map[i]);
+		struct cxl_dport *dport = find_dport(port, cxld->target_map[i]);

-		if (!dport)
-			return -ENXIO;
+		if (!dport) {
+			/* dport may be activated later */
+			continue;
+		}
		cxlsd->target[i] = dport;
	}
@@ -1910,9 +2081,6 @@ EXPORT_SYMBOL_NS_GPL(cxl_endpoint_decoder_alloc, "CXL");
/**
 * cxl_decoder_add_locked - Add a decoder with targets
 * @cxld: The cxl decoder allocated by cxl_<type>_decoder_alloc()
- * @target_map: A list of downstream ports that this decoder can direct memory
- *		traffic to. These numbers should correspond with the port number
- *		in the PCIe Link Capabilities structure.
 *
 * Certain types of decoders may not have any targets. The main example of this
 * is an endpoint device. A more awkward example is a hostbridge whose root
@@ -1926,7 +2094,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_endpoint_decoder_alloc, "CXL");
 * Return: Negative error code if the decoder wasn't properly configured; else
 * returns 0.
 */
-int cxl_decoder_add_locked(struct cxl_decoder *cxld, int *target_map)
+int cxl_decoder_add_locked(struct cxl_decoder *cxld)
{
	struct cxl_port *port;
	struct device *dev;
@@ -1947,7 +2115,7 @@ int cxl_decoder_add_locked(struct cxl_decoder *cxld, int *target_map)
	if (!is_endpoint_decoder(dev)) {
		struct cxl_switch_decoder *cxlsd = to_cxl_switch_decoder(dev);

-		rc = decoder_populate_targets(cxlsd, port, target_map);
+		rc = decoder_populate_targets(cxlsd, port);
		if (rc && (cxld->flags & CXL_DECODER_F_ENABLE)) {
			dev_err(&port->dev,
				"Failed to populate active decoder targets\n");
@@ -1966,9 +2134,6 @@ EXPORT_SYMBOL_NS_GPL(cxl_decoder_add_locked, "CXL");
/**
 * cxl_decoder_add - Add a decoder with targets
 * @cxld: The cxl decoder allocated by cxl_<type>_decoder_alloc()
- * @target_map: A list of downstream ports that this decoder can direct memory
- *		traffic to. These numbers should correspond with the port number
- *		in the PCIe Link Capabilities structure.
 *
 * This is the unlocked variant of cxl_decoder_add_locked().
 * See cxl_decoder_add_locked().
@@ -1976,7 +2141,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_decoder_add_locked, "CXL");
 * Context: Process context. Takes and releases the device lock of the port that
 * owns the @cxld.
 */
-int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map)
+int cxl_decoder_add(struct cxl_decoder *cxld)
{
	struct cxl_port *port;
@@ -1989,7 +2154,7 @@ int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map)
	port = to_cxl_port(cxld->dev.parent);

	guard(device)(&port->dev);
-	return cxl_decoder_add_locked(cxld, target_map);
+	return cxl_decoder_add_locked(cxld);
}
EXPORT_SYMBOL_NS_GPL(cxl_decoder_add, "CXL");


@@ -2,6 +2,7 @@
/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
#include <linux/memregion.h>
#include <linux/genalloc.h>
+#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/memory.h>
@@ -10,6 +11,7 @@
#include <linux/sort.h>
#include <linux/idr.h>
#include <linux/memory-tiers.h>
+#include <linux/string_choices.h>
#include <cxlmem.h>
#include <cxl.h>
#include "core.h"
@@ -30,6 +32,12 @@
 * 3. Decoder targets
 */
/*
* nodemask that sets per node when the access_coordinates for the node has
* been updated by the CXL memory hotplug notifier.
*/
static nodemask_t nodemask_region_seen = NODE_MASK_NONE;
static struct cxl_region *to_cxl_region(struct device *dev);

#define __ACCESS_ATTR_RO(_level, _name) { \
@@ -1468,9 +1476,7 @@ static int cxl_port_setup_targets(struct cxl_port *port,
			dev_name(port->uport_dev), dev_name(&port->dev),
			__func__, cxld->interleave_ways,
			cxld->interleave_granularity,
-			(cxld->flags & CXL_DECODER_F_ENABLE) ?
-				"enabled" :
-				"disabled",
+			str_enabled_disabled(cxld->flags & CXL_DECODER_F_ENABLE),
			cxld->hpa_range.start, cxld->hpa_range.end);
		return -ENXIO;
	}
@@ -1510,8 +1516,10 @@ static int cxl_port_setup_targets(struct cxl_port *port,
				cxl_rr->nr_targets_set);
			return -ENXIO;
		}
-	} else
+	} else {
		cxlsd->target[cxl_rr->nr_targets_set] = ep->dport;
+		cxlsd->cxld.target_map[cxl_rr->nr_targets_set] = ep->dport->port_id;
+	}
	inc = 1;
out_target_set:
	cxl_rr->nr_targets_set += inc;
@@ -2442,14 +2450,8 @@ static bool cxl_region_update_coordinates(struct cxl_region *cxlr, int nid)
	for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
		if (cxlr->coord[i].read_bandwidth) {
-			rc = 0;
-			if (cxl_need_node_perf_attrs_update(nid))
-				node_set_perf_attrs(nid, &cxlr->coord[i], i);
-			else
-				rc = cxl_update_hmat_access_coordinates(nid, cxlr, i);
-
-			if (rc == 0)
-				cset++;
+			node_update_perf_attrs(nid, &cxlr->coord[i], i);
+			cset++;
		}
	}
@@ -2487,6 +2489,10 @@ static int cxl_region_perf_attrs_callback(struct notifier_block *nb,
	if (nid != region_nid)
		return NOTIFY_DONE;

+	/* No action needed if node bit already set */
+	if (node_test_and_set(nid, nodemask_region_seen))
+		return NOTIFY_DONE;
+
	if (!cxl_region_update_coordinates(cxlr, nid))
		return NOTIFY_DONE;
@@ -2918,6 +2924,16 @@ static bool cxl_is_hpa_in_chunk(u64 hpa, struct cxl_region *cxlr, int pos)
	return false;
}
static bool has_hpa_to_spa(struct cxl_root_decoder *cxlrd)
{
return cxlrd->ops && cxlrd->ops->hpa_to_spa;
}
static bool has_spa_to_hpa(struct cxl_root_decoder *cxlrd)
{
return cxlrd->ops && cxlrd->ops->spa_to_hpa;
}
u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
		   u64 dpa)
{
@@ -2972,8 +2988,8 @@ u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
	hpa = hpa_offset + p->res->start + p->cache_size;

	/* Root decoder translation overrides typical modulo decode */
-	if (cxlrd->hpa_to_spa)
-		hpa = cxlrd->hpa_to_spa(cxlrd, hpa);
+	if (has_hpa_to_spa(cxlrd))
+		hpa = cxlrd->ops->hpa_to_spa(cxlrd, hpa);

	if (!cxl_resource_contains_addr(p->res, hpa)) {
		dev_dbg(&cxlr->dev,
@@ -2982,12 +2998,107 @@ u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
	}

	/* Simple chunk check, by pos & gran, only applies to modulo decodes */
-	if (!cxlrd->hpa_to_spa && (!cxl_is_hpa_in_chunk(hpa, cxlr, pos)))
+	if (!has_hpa_to_spa(cxlrd) && (!cxl_is_hpa_in_chunk(hpa, cxlr, pos)))
		return ULLONG_MAX;

	return hpa;
}
struct dpa_result {
struct cxl_memdev *cxlmd;
u64 dpa;
};
static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset,
struct dpa_result *result)
{
struct cxl_region_params *p = &cxlr->params;
struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
struct cxl_endpoint_decoder *cxled;
u64 hpa, hpa_offset, dpa_offset;
u64 bits_upper, bits_lower;
u64 shifted, rem, temp;
u16 eig = 0;
u8 eiw = 0;
int pos;
lockdep_assert_held(&cxl_rwsem.region);
lockdep_assert_held(&cxl_rwsem.dpa);
/* Input validation ensures valid ways and gran */
granularity_to_eig(p->interleave_granularity, &eig);
ways_to_eiw(p->interleave_ways, &eiw);
/*
* If the root decoder has SPA to CXL HPA callback, use it. Otherwise
* CXL HPA is assumed to equal SPA.
*/
if (has_spa_to_hpa(cxlrd)) {
hpa = cxlrd->ops->spa_to_hpa(cxlrd, p->res->start + offset);
hpa_offset = hpa - p->res->start;
} else {
hpa_offset = offset;
}
/*
* Interleave position: CXL Spec 3.2 Section 8.2.4.20.13
* eiw < 8
* Position is in the IW bits at HPA_OFFSET[IG+8+IW-1:IG+8].
* Per spec "remove IW bits starting with bit position IG+8"
* eiw >= 8
* Position is not explicitly stored in HPA_OFFSET bits. It is
* derived from the modulo operation of the upper bits using
* the total number of interleave ways.
*/
if (eiw < 8) {
pos = (hpa_offset >> (eig + 8)) & GENMASK(eiw - 1, 0);
} else {
shifted = hpa_offset >> (eig + 8);
div64_u64_rem(shifted, p->interleave_ways, &rem);
pos = rem;
}
if (pos < 0 || pos >= p->nr_targets) {
dev_dbg(&cxlr->dev, "Invalid position %d for %d targets\n",
pos, p->nr_targets);
return -ENXIO;
}
/*
* DPA offset: CXL Spec 3.2 Section 8.2.4.20.13
* Lower bits [IG+7:0] pass through unchanged
* (eiw < 8)
* Per spec: DPAOffset[51:IG+8] = (HPAOffset[51:IG+IW+8] >> IW)
* Clear the position bits to isolate upper section, then
* reverse the left shift by eiw that occurred during DPA->HPA
* (eiw >= 8)
* Per spec: DPAOffset[51:IG+8] = HPAOffset[51:IG+IW] / 3
* Extract upper bits from the correct bit range and divide by 3
* to recover the original DPA upper bits
*/
bits_lower = hpa_offset & GENMASK_ULL(eig + 7, 0);
if (eiw < 8) {
temp = hpa_offset &= ~((u64)GENMASK(eig + eiw + 8 - 1, 0));
dpa_offset = temp >> eiw;
} else {
bits_upper = div64_u64(hpa_offset >> (eig + eiw), 3);
dpa_offset = bits_upper << (eig + 8);
}
dpa_offset |= bits_lower;
/* Look-up and return the result: a memdev and a DPA */
for (int i = 0; i < p->nr_targets; i++) {
cxled = p->targets[i];
if (cxled->pos != pos)
continue;
result->cxlmd = cxled_to_memdev(cxled);
result->dpa = cxl_dpa_resource_start(cxled) + dpa_offset;
return 0;
}
dev_err(&cxlr->dev, "No device found for position %d\n", pos);
return -ENXIO;
}
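For readers cross-checking the two branches above, the same CXL 3.2 section 8.2.4.20.13 rules can be restated as formulas; this is only a restatement of the comments and code above, not additional behavior. With IG = eig, IW = eiw, W = interleave ways, and O_hpa the HPA offset into the region:

\[
pos =
\begin{cases}
\left\lfloor O_{hpa}/2^{IG+8} \right\rfloor \bmod 2^{IW}, & IW < 8\\
\left\lfloor O_{hpa}/2^{IG+8} \right\rfloor \bmod W, & IW \ge 8
\end{cases}
\]

\[
O_{dpa} =
\begin{cases}
\left\lfloor O_{hpa}/2^{IG+IW+8} \right\rfloor \cdot 2^{IG+8} + \left(O_{hpa} \bmod 2^{IG+8}\right), & IW < 8\\
\left\lfloor \left\lfloor O_{hpa}/2^{IG+IW} \right\rfloor / 3 \right\rfloor \cdot 2^{IG+8} + \left(O_{hpa} \bmod 2^{IG+8}\right), & IW \ge 8
\end{cases}
\]

For example, with a 2-way region (IW = 1), 256-byte granularity (IG = 0) and O_hpa = 0x1300, pos = (0x1300 >> 8) mod 2 = 1 and O_dpa = ((0x1300 >> 9) << 8) | 0 = 0x900, which matches walking the 256-byte chunks by hand.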
static struct lock_class_key cxl_pmem_region_key;

static int cxl_pmem_region_alloc(struct cxl_region *cxlr)
@@ -3542,6 +3653,105 @@ static void shutdown_notifiers(void *_cxlr)
	unregister_mt_adistance_algorithm(&cxlr->adist_notifier);
}
static void remove_debugfs(void *dentry)
{
debugfs_remove_recursive(dentry);
}
static int validate_region_offset(struct cxl_region *cxlr, u64 offset)
{
struct cxl_region_params *p = &cxlr->params;
resource_size_t region_size;
u64 hpa;
if (offset < p->cache_size) {
dev_err(&cxlr->dev,
"Offset %#llx is within extended linear cache %pr\n",
offset, &p->cache_size);
return -EINVAL;
}
region_size = resource_size(p->res);
if (offset >= region_size) {
dev_err(&cxlr->dev, "Offset %#llx exceeds region size %pr\n",
offset, &region_size);
return -EINVAL;
}
hpa = p->res->start + offset;
if (hpa < p->res->start || hpa > p->res->end) {
dev_err(&cxlr->dev, "HPA %#llx not in region %pr\n", hpa,
p->res);
return -EINVAL;
}
return 0;
}
static int cxl_region_debugfs_poison_inject(void *data, u64 offset)
{
struct dpa_result result = { .dpa = ULLONG_MAX, .cxlmd = NULL };
struct cxl_region *cxlr = data;
int rc;
ACQUIRE(rwsem_read_intr, region_rwsem)(&cxl_rwsem.region);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &region_rwsem)))
return rc;
ACQUIRE(rwsem_read_intr, dpa_rwsem)(&cxl_rwsem.dpa);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &dpa_rwsem)))
return rc;
if (validate_region_offset(cxlr, offset))
return -EINVAL;
rc = region_offset_to_dpa_result(cxlr, offset, &result);
if (rc || !result.cxlmd || result.dpa == ULLONG_MAX) {
dev_dbg(&cxlr->dev,
"Failed to resolve DPA for region offset %#llx rc %d\n",
offset, rc);
return rc ? rc : -EINVAL;
}
return cxl_inject_poison_locked(result.cxlmd, result.dpa);
}
DEFINE_DEBUGFS_ATTRIBUTE(cxl_poison_inject_fops, NULL,
cxl_region_debugfs_poison_inject, "%llx\n");
static int cxl_region_debugfs_poison_clear(void *data, u64 offset)
{
struct dpa_result result = { .dpa = ULLONG_MAX, .cxlmd = NULL };
struct cxl_region *cxlr = data;
int rc;
ACQUIRE(rwsem_read_intr, region_rwsem)(&cxl_rwsem.region);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &region_rwsem)))
return rc;
ACQUIRE(rwsem_read_intr, dpa_rwsem)(&cxl_rwsem.dpa);
if ((rc = ACQUIRE_ERR(rwsem_read_intr, &dpa_rwsem)))
return rc;
if (validate_region_offset(cxlr, offset))
return -EINVAL;
rc = region_offset_to_dpa_result(cxlr, offset, &result);
if (rc || !result.cxlmd || result.dpa == ULLONG_MAX) {
dev_dbg(&cxlr->dev,
"Failed to resolve DPA for region offset %#llx rc %d\n",
offset, rc);
return rc ? rc : -EINVAL;
}
return cxl_clear_poison_locked(result.cxlmd, result.dpa);
}
DEFINE_DEBUGFS_ATTRIBUTE(cxl_poison_clear_fops, NULL,
cxl_region_debugfs_poison_clear, "%llx\n");
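These attributes take a region offset written in hex from userspace. A minimal, hedged sketch of exercising the inject node follows; the path assumes debugfs is mounted at /sys/kernel/debug and that the region device is named region0, so adjust both for the actual system:

/* Hedged userspace sketch: write a region offset (not an SPA or DPA) to the
 * inject_poison debugfs attribute created by cxl_region_probe() below. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *node = "/sys/kernel/debug/cxl/region0/inject_poison"; /* assumed path */
	const char *offset = "0x40000000\n";	/* offset from the region start */
	int fd = open(node, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, offset, strlen(offset)) < 0)
		perror("write");
	close(fd);
	return 0;
}

The kernel side validates the offset against the region and extended linear cache, translates it to a memdev and DPA with region_offset_to_dpa_result(), and then calls the locked poison inject or clear helper.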
static int cxl_region_can_probe(struct cxl_region *cxlr)
{
	struct cxl_region_params *p = &cxlr->params;
@@ -3571,6 +3781,7 @@ static int cxl_region_probe(struct device *dev)
{
	struct cxl_region *cxlr = to_cxl_region(dev);
	struct cxl_region_params *p = &cxlr->params;
+	bool poison_supported = true;
	int rc;

	rc = cxl_region_can_probe(cxlr);
@@ -3594,6 +3805,31 @@ static int cxl_region_probe(struct device *dev)
	if (rc)
		return rc;
/* Create poison attributes if all memdevs support the capabilities */
for (int i = 0; i < p->nr_targets; i++) {
struct cxl_endpoint_decoder *cxled = p->targets[i];
struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
if (!cxl_memdev_has_poison_cmd(cxlmd, CXL_POISON_ENABLED_INJECT) ||
!cxl_memdev_has_poison_cmd(cxlmd, CXL_POISON_ENABLED_CLEAR)) {
poison_supported = false;
break;
}
}
if (poison_supported) {
struct dentry *dentry;
dentry = cxl_debugfs_create_dir(dev_name(dev));
debugfs_create_file("inject_poison", 0200, dentry, cxlr,
&cxl_poison_inject_fops);
debugfs_create_file("clear_poison", 0200, dentry, cxlr,
&cxl_poison_clear_fops);
rc = devm_add_action_or_reset(dev, remove_debugfs, dentry);
if (rc)
return rc;
}
	switch (cxlr->mode) {
	case CXL_PARTMODE_PMEM:
		rc = devm_cxl_region_edac_register(cxlr);


@@ -357,6 +357,9 @@ enum cxl_decoder_type {
 * @target_type: accelerator vs expander (type2 vs type3) selector
 * @region: currently assigned region for this decoder
 * @flags: memory type capabilities and locking
+ * @target_map: cached copy of hardware port-id list, available at init
+ *		before all @dport objects have been instantiated. While
+ *		dport id is 8bit, CFMWS interleave targets are 32bits.
 * @commit: device/decoder-type specific callback to commit settings to hw
 * @reset: device/decoder-type specific callback to reset hw settings
 */
@@ -369,6 +372,7 @@ struct cxl_decoder {
	enum cxl_decoder_type target_type;
	struct cxl_region *region;
	unsigned long flags;
+	u32 target_map[CXL_DECODER_MAX_INTERLEAVE];
	int (*commit)(struct cxl_decoder *cxld);
	void (*reset)(struct cxl_decoder *cxld);
};
@@ -419,27 +423,35 @@ struct cxl_switch_decoder {
};

struct cxl_root_decoder;
-typedef u64 (*cxl_hpa_to_spa_fn)(struct cxl_root_decoder *cxlrd, u64 hpa);
+
+/**
+ * struct cxl_rd_ops - CXL root decoder callback operations
+ * @hpa_to_spa: Convert host physical address to system physical address
+ * @spa_to_hpa: Convert system physical address to host physical address
+ */
+struct cxl_rd_ops {
+	u64 (*hpa_to_spa)(struct cxl_root_decoder *cxlrd, u64 hpa);
+	u64 (*spa_to_hpa)(struct cxl_root_decoder *cxlrd, u64 spa);
+};

/**
 * struct cxl_root_decoder - Static platform CXL address decoder
 * @res: host / parent resource for region allocations
 * @cache_size: extended linear cache size if exists, otherwise zero.
 * @region_id: region id for next region provisioning event
- * @hpa_to_spa: translate CXL host-physical-address to Platform system-physical-address
 * @platform_data: platform specific configuration data
 * @range_lock: sync region autodiscovery by address range
 * @qos_class: QoS performance class cookie
+ * @ops: CXL root decoder operations
 * @cxlsd: base cxl switch decoder
 */
struct cxl_root_decoder {
	struct resource *res;
	resource_size_t cache_size;
	atomic_t region_id;
-	cxl_hpa_to_spa_fn hpa_to_spa;
	void *platform_data;
	struct mutex range_lock;
	int qos_class;
+	struct cxl_rd_ops *ops;
	struct cxl_switch_decoder cxlsd;
};
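A platform component (cxl_acpi in practice) supplies these callbacks when a window needs address translation, for example XOR interleave arithmetic. A minimal sketch of the wiring is below; the my_* names are hypothetical illustrations, not the in-tree implementation, and the XOR math itself is deliberately omitted:

/* Hypothetical translation callbacks; the real XOR math lives in the
 * platform (cxl_acpi) code and is not reproduced here. */
static u64 my_xor_hpa_to_spa(struct cxl_root_decoder *cxlrd, u64 hpa)
{
	/* ...fold the interleave position back into the address... */
	return hpa;
}

static u64 my_xor_spa_to_hpa(struct cxl_root_decoder *cxlrd, u64 spa)
{
	/* ...inverse direction, used by region-offset poison injection... */
	return spa;
}

static struct cxl_rd_ops my_rd_ops = {
	.hpa_to_spa = my_xor_hpa_to_spa,
	.spa_to_hpa = my_xor_spa_to_hpa,
};

/* During root decoder setup, e.g. while parsing a CFMWS entry: */
static void my_setup_root_decoder(struct cxl_root_decoder *cxlrd)
{
	cxlrd->ops = &my_rd_ops;	/* makes has_hpa_to_spa()/has_spa_to_hpa() true */
}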
@@ -595,6 +607,7 @@ struct cxl_dax_region {
 * @cdat: Cached CDAT data
 * @cdat_available: Should a CDAT attribute be available in sysfs
 * @pci_latency: Upstream latency in picoseconds
+ * @component_reg_phys: Physical address of component register
 */
struct cxl_port {
	struct device dev;
@@ -618,6 +631,7 @@ struct cxl_port {
	} cdat;
	bool cdat_available;
	long pci_latency;
+	resource_size_t component_reg_phys;
};
/**
@@ -781,9 +795,9 @@ struct cxl_root_decoder *cxl_root_decoder_alloc(struct cxl_port *port,
						unsigned int nr_targets);
struct cxl_switch_decoder *cxl_switch_decoder_alloc(struct cxl_port *port,
						    unsigned int nr_targets);
-int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map);
+int cxl_decoder_add(struct cxl_decoder *cxld);
struct cxl_endpoint_decoder *cxl_endpoint_decoder_alloc(struct cxl_port *port);
-int cxl_decoder_add_locked(struct cxl_decoder *cxld, int *target_map);
+int cxl_decoder_add_locked(struct cxl_decoder *cxld);
int cxl_decoder_autoremove(struct device *host, struct cxl_decoder *cxld);
static inline int cxl_root_decoder_autoremove(struct device *host,
					      struct cxl_root_decoder *cxlrd)
@@ -806,12 +820,10 @@ struct cxl_endpoint_dvsec_info {
	struct range dvsec_range[2];
};

-struct cxl_hdm;
-struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
-				   struct cxl_endpoint_dvsec_info *info);
-int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
-				struct cxl_endpoint_dvsec_info *info);
-int devm_cxl_add_passthrough_decoder(struct cxl_port *port);
+int devm_cxl_switch_port_decoders_setup(struct cxl_port *port);
+int __devm_cxl_switch_port_decoders_setup(struct cxl_port *port);
+int devm_cxl_endpoint_decoders_setup(struct cxl_port *port);
struct cxl_dev_state;
int cxl_dvsec_rr_decode(struct cxl_dev_state *cxlds,
			struct cxl_endpoint_dvsec_info *info);
@@ -890,7 +902,7 @@ static inline u64 cxl_port_get_spa_cache_alias(struct cxl_port *endpoint,
#endif

void cxl_endpoint_parse_cdat(struct cxl_port *port);
-void cxl_switch_parse_cdat(struct cxl_port *port);
+void cxl_switch_parse_cdat(struct cxl_dport *dport);

int cxl_endpoint_get_perf_coordinates(struct cxl_port *port,
				      struct access_coordinate *coord);
@@ -905,6 +917,10 @@ void cxl_coordinates_combine(struct access_coordinate *out,
			     struct access_coordinate *c2);

bool cxl_endpoint_decoder_reset_detected(struct cxl_port *port);

+struct cxl_dport *devm_cxl_add_dport_by_dev(struct cxl_port *port,
+					    struct device *dport_dev);
+struct cxl_dport *__devm_cxl_add_dport_by_dev(struct cxl_port *port,
+					      struct device *dport_dev);
+
/*
 * Unit test builds overrides this to __weak, find the 'strong' version
@@ -915,4 +931,21 @@ bool cxl_endpoint_decoder_reset_detected(struct cxl_port *port);
#endif

u16 cxl_gpf_get_dvsec(struct device *dev);
/*
 * Declarations for functions that are mocked by cxl_test and called by
 * cxl_core. The respective functions are defined as __foo() and called by
 * cxl_core as foo(). The macros below ensure that those functions exist
 * as foo(). See tools/testing/cxl/cxl_core_exports.c and
 * tools/testing/cxl/exports.h for setting up the mock functions. The dance
 * is done to avoid a circular dependency where cxl_core calls a function
 * that ends up being a mock function and goes to cxl_test, where it calls
 * a cxl_core function.
 */
#ifndef CXL_TEST_ENABLE
#define DECLARE_TESTABLE(x) __##x
#define devm_cxl_add_dport_by_dev DECLARE_TESTABLE(devm_cxl_add_dport_by_dev)
#define devm_cxl_switch_port_decoders_setup DECLARE_TESTABLE(devm_cxl_switch_port_decoders_setup)
#endif
#endif /* __CXL_H__ */
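To make the indirection concrete, here is a hypothetical cxl_core call site under the two build modes; example_setup() is illustrative only and not part of the patch set:

/* Hypothetical cxl_core caller, illustrating the DECLARE_TESTABLE() dance */
static int example_setup(struct cxl_port *port)
{
	/*
	 * Production build: CXL_TEST_ENABLE is not defined, so the macro
	 * rewrites this call to __devm_cxl_switch_port_decoders_setup(port)
	 * and the strong cxl_core implementation runs directly.
	 *
	 * cxl_test build: the macro is compiled out, so this resolves to the
	 * devm_cxl_switch_port_decoders_setup() wrapper exported by
	 * tools/testing/cxl/cxl_core_exports.c, which bounces through the
	 * _devm_cxl_switch_port_decoders_setup function pointer that
	 * register_cxl_mock_ops() can repoint at a mock.
	 */
	return devm_cxl_switch_port_decoders_setup(port);
}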


@@ -869,6 +869,8 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
int cxl_trigger_poison_list(struct cxl_memdev *cxlmd);
int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa);
int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa);
+int cxl_inject_poison_locked(struct cxl_memdev *cxlmd, u64 dpa);
+int cxl_clear_poison_locked(struct cxl_memdev *cxlmd, u64 dpa);

#ifdef CONFIG_CXL_EDAC_MEM_FEATURES
int devm_cxl_memdev_edac_register(struct cxl_memdev *cxlmd);


@@ -129,8 +129,6 @@ static inline bool cxl_pci_flit_256(struct pci_dev *pdev)
int devm_cxl_port_enumerate_dports(struct cxl_port *port);
struct cxl_dev_state;
-int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
-			struct cxl_endpoint_dvsec_info *info);
void read_cdat_data(struct cxl_port *port);
void cxl_cor_error_detected(struct pci_dev *pdev);
pci_ers_result_t cxl_error_detected(struct pci_dev *pdev,


@@ -59,55 +59,20 @@ static int discover_region(struct device *dev, void *unused)

static int cxl_switch_port_probe(struct cxl_port *port)
{
-	struct cxl_hdm *cxlhdm;
-	int rc;
+	/* Reset nr_dports for rebind of driver */
+	port->nr_dports = 0;

	/* Cache the data early to ensure is_visible() works */
	read_cdat_data(port);

-	rc = devm_cxl_port_enumerate_dports(port);
-	if (rc < 0)
-		return rc;
-
-	cxl_switch_parse_cdat(port);
-
-	cxlhdm = devm_cxl_setup_hdm(port, NULL);
-	if (!IS_ERR(cxlhdm))
-		return devm_cxl_enumerate_decoders(cxlhdm, NULL);
-
-	if (PTR_ERR(cxlhdm) != -ENODEV) {
-		dev_err(&port->dev, "Failed to map HDM decoder capability\n");
-		return PTR_ERR(cxlhdm);
-	}
-
-	if (rc == 1) {
-		dev_dbg(&port->dev, "Fallback to passthrough decoder\n");
-		return devm_cxl_add_passthrough_decoder(port);
-	}
-
-	dev_err(&port->dev, "HDM decoder capability not found\n");
-	return -ENXIO;
+	return 0;
}

static int cxl_endpoint_port_probe(struct cxl_port *port)
{
-	struct cxl_endpoint_dvsec_info info = { .port = port };
	struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport_dev);
-	struct cxl_dev_state *cxlds = cxlmd->cxlds;
-	struct cxl_hdm *cxlhdm;
	int rc;

-	rc = cxl_dvsec_rr_decode(cxlds, &info);
-	if (rc < 0)
-		return rc;
-
-	cxlhdm = devm_cxl_setup_hdm(port, &info);
-	if (IS_ERR(cxlhdm)) {
-		if (PTR_ERR(cxlhdm) == -ENODEV)
-			dev_err(&port->dev, "HDM decoder registers not found\n");
-		return PTR_ERR(cxlhdm);
-	}
-
	/* Cache the data early to ensure is_visible() works */
	read_cdat_data(port);
	cxl_endpoint_parse_cdat(port);
@@ -117,11 +82,7 @@ static int cxl_endpoint_port_probe(struct cxl_port *port)
	if (rc)
		return rc;

-	rc = cxl_hdm_decode_init(cxlds, cxlhdm, &info);
-	if (rc)
-		return rc;
-
-	rc = devm_cxl_enumerate_decoders(cxlhdm, &info);
+	rc = devm_cxl_endpoint_decoders_setup(port);
	if (rc)
		return rc;


@@ -560,8 +560,8 @@ struct acpi_cedt_cfmws_target_element {

/* Values for Restrictions field above */

-#define ACPI_CEDT_CFMWS_RESTRICT_TYPE2          (1)
-#define ACPI_CEDT_CFMWS_RESTRICT_TYPE3          (1<<1)
+#define ACPI_CEDT_CFMWS_RESTRICT_DEVMEM         (1)
+#define ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM    (1<<1)
#define ACPI_CEDT_CFMWS_RESTRICT_VOLATILE       (1<<2)
#define ACPI_CEDT_CFMWS_RESTRICT_PMEM           (1<<3)
#define ACPI_CEDT_CFMWS_RESTRICT_FIXED          (1<<4)


@@ -1595,18 +1595,6 @@ static inline void acpi_use_parent_companion(struct device *dev)
	ACPI_COMPANION_SET(dev, ACPI_COMPANION(dev->parent));
}

-#ifdef CONFIG_ACPI_HMAT
-int hmat_update_target_coordinates(int nid, struct access_coordinate *coord,
-				   enum access_coordinate_class access);
-#else
-static inline int hmat_update_target_coordinates(int nid,
-						  struct access_coordinate *coord,
-						  enum access_coordinate_class access)
-{
-	return -EOPNOTSUPP;
-}
-#endif
-
#ifdef CONFIG_ACPI_NUMA
bool acpi_node_backed_by_real_pxm(int nid);
#else


@@ -115,13 +115,13 @@ struct notifier_block;
struct mem_section;

/*
- * Priorities for the hotplug memory callback routines (stored in decreasing
- * order in the callback chain)
+ * Priorities for the hotplug memory callback routines. Invoked from
+ * high to low. Higher priorities correspond to higher numbers.
 */
#define DEFAULT_CALLBACK_PRI	0
#define SLAB_CALLBACK_PRI	1
-#define HMAT_CALLBACK_PRI	2
#define CXL_CALLBACK_PRI	5
+#define HMAT_CALLBACK_PRI	6
#define MM_COMPUTE_BATCH_PRI	10
#define CPUSET_CALLBACK_PRI	10
#define MEMTIER_HOTPLUG_PRI	100


@@ -85,6 +85,8 @@ struct node_cache_attrs {
void node_add_cache(unsigned int nid, struct node_cache_attrs *cache_attrs);
void node_set_perf_attrs(unsigned int nid, struct access_coordinate *coord,
			 enum access_coordinate_class access);
+void node_update_perf_attrs(unsigned int nid, struct access_coordinate *coord,
+			    enum access_coordinate_class access);
#else
static inline void node_add_cache(unsigned int nid,
				  struct node_cache_attrs *cache_attrs)
@@ -96,6 +98,12 @@ static inline void node_set_perf_attrs(unsigned int nid,
					enum access_coordinate_class access)
{
}
+
+static inline void node_update_perf_attrs(unsigned int nid,
+					  struct access_coordinate *coord,
+					  enum access_coordinate_class access)
+{
+}
#endif

struct node {


@@ -5,22 +5,19 @@ ldflags-y += --wrap=acpi_evaluate_integer
ldflags-y += --wrap=acpi_pci_find_root
ldflags-y += --wrap=nvdimm_bus_register
ldflags-y += --wrap=devm_cxl_port_enumerate_dports
-ldflags-y += --wrap=devm_cxl_setup_hdm
-ldflags-y += --wrap=devm_cxl_add_passthrough_decoder
-ldflags-y += --wrap=devm_cxl_enumerate_decoders
ldflags-y += --wrap=cxl_await_media_ready
-ldflags-y += --wrap=cxl_hdm_decode_init
-ldflags-y += --wrap=cxl_dvsec_rr_decode
ldflags-y += --wrap=devm_cxl_add_rch_dport
ldflags-y += --wrap=cxl_rcd_component_reg_phys
ldflags-y += --wrap=cxl_endpoint_parse_cdat
ldflags-y += --wrap=cxl_dport_init_ras_reporting
+ldflags-y += --wrap=devm_cxl_endpoint_decoders_setup

DRIVERS := ../../../drivers
CXL_SRC := $(DRIVERS)/cxl
CXL_CORE_SRC := $(DRIVERS)/cxl/core
ccflags-y := -I$(srctree)/drivers/cxl/
ccflags-y += -D__mock=__weak
+ccflags-y += -DCXL_TEST_ENABLE=1
ccflags-y += -DTRACE_INCLUDE_PATH=$(CXL_CORE_SRC) -I$(srctree)/drivers/cxl/core/

obj-m += cxl_acpi.o


@@ -2,6 +2,28 @@
/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
#include "cxl.h"
+#include "exports.h"

/* Exporting of cxl_core symbols that are only used by cxl_test */
EXPORT_SYMBOL_NS_GPL(cxl_num_decoders_committed, "CXL");
cxl_add_dport_by_dev_fn _devm_cxl_add_dport_by_dev =
__devm_cxl_add_dport_by_dev;
EXPORT_SYMBOL_NS_GPL(_devm_cxl_add_dport_by_dev, "CXL");
struct cxl_dport *devm_cxl_add_dport_by_dev(struct cxl_port *port,
struct device *dport_dev)
{
return _devm_cxl_add_dport_by_dev(port, dport_dev);
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_add_dport_by_dev, "CXL");
cxl_switch_decoders_setup_fn _devm_cxl_switch_port_decoders_setup =
__devm_cxl_switch_port_decoders_setup;
EXPORT_SYMBOL_NS_GPL(_devm_cxl_switch_port_decoders_setup, "CXL");
int devm_cxl_switch_port_decoders_setup(struct cxl_port *port)
{
return _devm_cxl_switch_port_decoders_setup(port);
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_switch_port_decoders_setup, "CXL");


@ -0,0 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2025 Intel Corporation */
#ifndef __MOCK_CXL_EXPORTS_H_
#define __MOCK_CXL_EXPORTS_H_
typedef struct cxl_dport *(*cxl_add_dport_by_dev_fn)(struct cxl_port *port,
struct device *dport_dev);
extern cxl_add_dport_by_dev_fn _devm_cxl_add_dport_by_dev;
typedef int(*cxl_switch_decoders_setup_fn)(struct cxl_port *port);
extern cxl_switch_decoders_setup_fn _devm_cxl_switch_port_decoders_setup;
#endif


@ -210,7 +210,7 @@ static struct {
}, },
.interleave_ways = 0, .interleave_ways = 0,
.granularity = 4, .granularity = 4,
.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 | .restrictions = ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM |
ACPI_CEDT_CFMWS_RESTRICT_VOLATILE, ACPI_CEDT_CFMWS_RESTRICT_VOLATILE,
.qtg_id = FAKE_QTG_ID, .qtg_id = FAKE_QTG_ID,
.window_size = SZ_256M * 4UL, .window_size = SZ_256M * 4UL,
@ -225,7 +225,7 @@ static struct {
}, },
.interleave_ways = 1, .interleave_ways = 1,
.granularity = 4, .granularity = 4,
.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 | .restrictions = ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM |
ACPI_CEDT_CFMWS_RESTRICT_VOLATILE, ACPI_CEDT_CFMWS_RESTRICT_VOLATILE,
.qtg_id = FAKE_QTG_ID, .qtg_id = FAKE_QTG_ID,
.window_size = SZ_256M * 8UL, .window_size = SZ_256M * 8UL,
@ -240,7 +240,7 @@ static struct {
}, },
.interleave_ways = 0, .interleave_ways = 0,
.granularity = 4, .granularity = 4,
.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 | .restrictions = ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM |
ACPI_CEDT_CFMWS_RESTRICT_PMEM, ACPI_CEDT_CFMWS_RESTRICT_PMEM,
.qtg_id = FAKE_QTG_ID, .qtg_id = FAKE_QTG_ID,
.window_size = SZ_256M * 4UL, .window_size = SZ_256M * 4UL,
@ -255,7 +255,7 @@ static struct {
}, },
.interleave_ways = 1, .interleave_ways = 1,
.granularity = 4, .granularity = 4,
.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 | .restrictions = ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM |
ACPI_CEDT_CFMWS_RESTRICT_PMEM, ACPI_CEDT_CFMWS_RESTRICT_PMEM,
.qtg_id = FAKE_QTG_ID, .qtg_id = FAKE_QTG_ID,
.window_size = SZ_256M * 8UL, .window_size = SZ_256M * 8UL,
@ -270,7 +270,7 @@ static struct {
}, },
.interleave_ways = 0, .interleave_ways = 0,
.granularity = 4, .granularity = 4,
.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 | .restrictions = ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM |
ACPI_CEDT_CFMWS_RESTRICT_PMEM, ACPI_CEDT_CFMWS_RESTRICT_PMEM,
.qtg_id = FAKE_QTG_ID, .qtg_id = FAKE_QTG_ID,
.window_size = SZ_256M * 4UL, .window_size = SZ_256M * 4UL,
@ -285,7 +285,7 @@ static struct {
}, },
.interleave_ways = 0, .interleave_ways = 0,
.granularity = 4, .granularity = 4,
.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 | .restrictions = ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM |
ACPI_CEDT_CFMWS_RESTRICT_VOLATILE, ACPI_CEDT_CFMWS_RESTRICT_VOLATILE,
.qtg_id = FAKE_QTG_ID, .qtg_id = FAKE_QTG_ID,
.window_size = SZ_256M, .window_size = SZ_256M,
@ -302,7 +302,7 @@ static struct {
.interleave_arithmetic = ACPI_CEDT_CFMWS_ARITHMETIC_XOR, .interleave_arithmetic = ACPI_CEDT_CFMWS_ARITHMETIC_XOR,
.interleave_ways = 0, .interleave_ways = 0,
.granularity = 4, .granularity = 4,
.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 | .restrictions = ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM |
ACPI_CEDT_CFMWS_RESTRICT_PMEM, ACPI_CEDT_CFMWS_RESTRICT_PMEM,
.qtg_id = FAKE_QTG_ID, .qtg_id = FAKE_QTG_ID,
.window_size = SZ_256M * 8UL, .window_size = SZ_256M * 8UL,
@ -318,7 +318,7 @@ static struct {
.interleave_arithmetic = ACPI_CEDT_CFMWS_ARITHMETIC_XOR, .interleave_arithmetic = ACPI_CEDT_CFMWS_ARITHMETIC_XOR,
.interleave_ways = 1, .interleave_ways = 1,
.granularity = 0, .granularity = 0,
.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 | .restrictions = ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM |
ACPI_CEDT_CFMWS_RESTRICT_PMEM, ACPI_CEDT_CFMWS_RESTRICT_PMEM,
.qtg_id = FAKE_QTG_ID, .qtg_id = FAKE_QTG_ID,
.window_size = SZ_256M * 8UL, .window_size = SZ_256M * 8UL,
@ -334,7 +334,7 @@ static struct {
.interleave_arithmetic = ACPI_CEDT_CFMWS_ARITHMETIC_XOR, .interleave_arithmetic = ACPI_CEDT_CFMWS_ARITHMETIC_XOR,
.interleave_ways = 8, .interleave_ways = 8,
.granularity = 1, .granularity = 1,
.restrictions = ACPI_CEDT_CFMWS_RESTRICT_TYPE3 | .restrictions = ACPI_CEDT_CFMWS_RESTRICT_HOSTONLYMEM |
ACPI_CEDT_CFMWS_RESTRICT_PMEM, ACPI_CEDT_CFMWS_RESTRICT_PMEM,
.qtg_id = FAKE_QTG_ID, .qtg_id = FAKE_QTG_ID,
.window_size = SZ_512M * 6UL, .window_size = SZ_512M * 6UL,
@ -643,15 +643,8 @@ static struct cxl_hdm *mock_cxl_setup_hdm(struct cxl_port *port,
return cxlhdm; return cxlhdm;
} }
static int mock_cxl_add_passthrough_decoder(struct cxl_port *port)
{
dev_err(&port->dev, "unexpected passthrough decoder for cxl_test\n");
return -EOPNOTSUPP;
}
struct target_map_ctx { struct target_map_ctx {
int *target_map; u32 *target_map;
int index; int index;
int target_count; int target_count;
}; };
@ -818,15 +811,21 @@ static void mock_init_hdm_decoder(struct cxl_decoder *cxld)
*/ */
if (WARN_ON(!dev)) if (WARN_ON(!dev))
continue; continue;
cxlsd = to_cxl_switch_decoder(dev); cxlsd = to_cxl_switch_decoder(dev);
if (i == 0) { if (i == 0) {
/* put cxl_mem.4 second in the decode order */ /* put cxl_mem.4 second in the decode order */
if (pdev->id == 4) if (pdev->id == 4) {
cxlsd->target[1] = dport; cxlsd->target[1] = dport;
else cxld->target_map[1] = dport->port_id;
} else {
cxlsd->target[0] = dport; cxlsd->target[0] = dport;
} else cxld->target_map[0] = dport->port_id;
}
} else {
cxlsd->target[0] = dport; cxlsd->target[0] = dport;
cxld->target_map[0] = dport->port_id;
}
cxld = &cxlsd->cxld; cxld = &cxlsd->cxld;
cxld->target_type = CXL_DECODER_HOSTONLYMEM; cxld->target_type = CXL_DECODER_HOSTONLYMEM;
cxld->flags = CXL_DECODER_F_ENABLE; cxld->flags = CXL_DECODER_F_ENABLE;
@ -863,9 +862,7 @@ static int mock_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
target_count = NR_CXL_SWITCH_PORTS; target_count = NR_CXL_SWITCH_PORTS;
for (i = 0; i < NR_CXL_PORT_DECODERS; i++) { for (i = 0; i < NR_CXL_PORT_DECODERS; i++) {
int target_map[CXL_DECODER_MAX_INTERLEAVE] = { 0 };
struct target_map_ctx ctx = { struct target_map_ctx ctx = {
.target_map = target_map,
.target_count = target_count, .target_count = target_count,
}; };
struct cxl_decoder *cxld; struct cxl_decoder *cxld;
@ -894,6 +891,8 @@ static int mock_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
cxld = &cxled->cxld; cxld = &cxled->cxld;
} }
ctx.target_map = cxld->target_map;
mock_init_hdm_decoder(cxld); mock_init_hdm_decoder(cxld);
if (target_count) { if (target_count) {
@ -905,7 +904,7 @@ static int mock_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
} }
} }
rc = cxl_decoder_add_locked(cxld, target_map); rc = cxl_decoder_add_locked(cxld);
if (rc) { if (rc) {
put_device(&cxld->dev); put_device(&cxld->dev);
dev_err(&port->dev, "Failed to add decoder\n"); dev_err(&port->dev, "Failed to add decoder\n");
@ -921,10 +920,42 @@ static int mock_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
return 0; return 0;
} }
static int mock_cxl_port_enumerate_dports(struct cxl_port *port) static int __mock_cxl_decoders_setup(struct cxl_port *port)
{
struct cxl_hdm *cxlhdm;
cxlhdm = mock_cxl_setup_hdm(port, NULL);
if (IS_ERR(cxlhdm)) {
if (PTR_ERR(cxlhdm) != -ENODEV)
dev_err(&port->dev, "Failed to map HDM decoder capability\n");
return PTR_ERR(cxlhdm);
}
return mock_cxl_enumerate_decoders(cxlhdm, NULL);
}
static int mock_cxl_switch_port_decoders_setup(struct cxl_port *port)
{
if (is_cxl_root(port) || is_cxl_endpoint(port))
return -EOPNOTSUPP;
return __mock_cxl_decoders_setup(port);
}
static int mock_cxl_endpoint_decoders_setup(struct cxl_port *port)
{
if (!is_cxl_endpoint(port))
return -EOPNOTSUPP;
return __mock_cxl_decoders_setup(port);
}
static int get_port_array(struct cxl_port *port,
struct platform_device ***port_array,
int *port_array_size)
{ {
struct platform_device **array; struct platform_device **array;
int i, array_size; int array_size;
if (port->depth == 1) { if (port->depth == 1) {
if (is_multi_bridge(port->uport_dev)) { if (is_multi_bridge(port->uport_dev)) {
@ -958,6 +989,22 @@ static int mock_cxl_port_enumerate_dports(struct cxl_port *port)
return -ENXIO; return -ENXIO;
} }
*port_array = array;
*port_array_size = array_size;
return 0;
}
static int mock_cxl_port_enumerate_dports(struct cxl_port *port)
{
struct platform_device **array;
int i, array_size;
int rc;
rc = get_port_array(port, &array, &array_size);
if (rc)
return rc;
for (i = 0; i < array_size; i++) { for (i = 0; i < array_size; i++) {
struct platform_device *pdev = array[i]; struct platform_device *pdev = array[i];
struct cxl_dport *dport; struct cxl_dport *dport;
@ -979,6 +1026,36 @@ static int mock_cxl_port_enumerate_dports(struct cxl_port *port)
return 0; return 0;
} }
static struct cxl_dport *mock_cxl_add_dport_by_dev(struct cxl_port *port,
struct device *dport_dev)
{
struct platform_device **array;
int rc, i, array_size;
rc = get_port_array(port, &array, &array_size);
if (rc)
return ERR_PTR(rc);
for (i = 0; i < array_size; i++) {
struct platform_device *pdev = array[i];
if (pdev->dev.parent != port->uport_dev) {
dev_dbg(&port->dev, "%s: mismatch parent %s\n",
dev_name(port->uport_dev),
dev_name(pdev->dev.parent));
continue;
}
if (&pdev->dev != dport_dev)
continue;
return devm_cxl_add_dport(port, &pdev->dev, pdev->id,
CXL_RESOURCE_NONE);
}
return ERR_PTR(-ENODEV);
}
/* /*
* Faking the cxl_dpa_perf for the memdev when appropriate. * Faking the cxl_dpa_perf for the memdev when appropriate.
*/ */
@@ -1035,11 +1112,11 @@ static struct cxl_mock_ops cxl_mock_ops = {
	.acpi_table_parse_cedt = mock_acpi_table_parse_cedt,
	.acpi_evaluate_integer = mock_acpi_evaluate_integer,
	.acpi_pci_find_root = mock_acpi_pci_find_root,
+	.devm_cxl_switch_port_decoders_setup = mock_cxl_switch_port_decoders_setup,
+	.devm_cxl_endpoint_decoders_setup = mock_cxl_endpoint_decoders_setup,
	.devm_cxl_port_enumerate_dports = mock_cxl_port_enumerate_dports,
-	.devm_cxl_setup_hdm = mock_cxl_setup_hdm,
-	.devm_cxl_add_passthrough_decoder = mock_cxl_add_passthrough_decoder,
-	.devm_cxl_enumerate_decoders = mock_cxl_enumerate_decoders,
	.cxl_endpoint_parse_cdat = mock_cxl_endpoint_parse_cdat,
+	.devm_cxl_add_dport_by_dev = mock_cxl_add_dport_by_dev,
	.list = LIST_HEAD_INIT(cxl_mock_ops.list),
};


@@ -10,12 +10,21 @@
#include <cxlmem.h>
#include <cxlpci.h>
#include "mock.h"
+#include "../exports.h"

static LIST_HEAD(mock);

+static struct cxl_dport *
+redirect_devm_cxl_add_dport_by_dev(struct cxl_port *port,
+				   struct device *dport_dev);
+static int redirect_devm_cxl_switch_port_decoders_setup(struct cxl_port *port);
+
void register_cxl_mock_ops(struct cxl_mock_ops *ops)
{
	list_add_rcu(&ops->list, &mock);
+	_devm_cxl_add_dport_by_dev = redirect_devm_cxl_add_dport_by_dev;
+	_devm_cxl_switch_port_decoders_setup =
+		redirect_devm_cxl_switch_port_decoders_setup;
}
EXPORT_SYMBOL_GPL(register_cxl_mock_ops);
@@ -23,6 +32,9 @@ DEFINE_STATIC_SRCU(cxl_mock_srcu);
void unregister_cxl_mock_ops(struct cxl_mock_ops *ops)
{
+	_devm_cxl_switch_port_decoders_setup =
+		__devm_cxl_switch_port_decoders_setup;
+	_devm_cxl_add_dport_by_dev = __devm_cxl_add_dport_by_dev;
	list_del_rcu(&ops->list);
	synchronize_srcu(&cxl_mock_srcu);
}
@ -131,55 +143,34 @@ __wrap_nvdimm_bus_register(struct device *dev,
} }
EXPORT_SYMBOL_GPL(__wrap_nvdimm_bus_register); EXPORT_SYMBOL_GPL(__wrap_nvdimm_bus_register);
struct cxl_hdm *__wrap_devm_cxl_setup_hdm(struct cxl_port *port, int redirect_devm_cxl_switch_port_decoders_setup(struct cxl_port *port)
struct cxl_endpoint_dvsec_info *info)
{
int index;
struct cxl_hdm *cxlhdm;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (ops && ops->is_mock_port(port->uport_dev))
cxlhdm = ops->devm_cxl_setup_hdm(port, info);
else
cxlhdm = devm_cxl_setup_hdm(port, info);
put_cxl_mock_ops(index);
return cxlhdm;
}
EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_setup_hdm, "CXL");
int __wrap_devm_cxl_add_passthrough_decoder(struct cxl_port *port)
{ {
int rc, index; int rc, index;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index); struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (ops && ops->is_mock_port(port->uport_dev)) if (ops && ops->is_mock_port(port->uport_dev))
rc = ops->devm_cxl_add_passthrough_decoder(port); rc = ops->devm_cxl_switch_port_decoders_setup(port);
else else
rc = devm_cxl_add_passthrough_decoder(port); rc = __devm_cxl_switch_port_decoders_setup(port);
put_cxl_mock_ops(index); put_cxl_mock_ops(index);
return rc; return rc;
} }
EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_add_passthrough_decoder, "CXL");
int __wrap_devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, int __wrap_devm_cxl_endpoint_decoders_setup(struct cxl_port *port)
struct cxl_endpoint_dvsec_info *info)
{ {
int rc, index; int rc, index;
struct cxl_port *port = cxlhdm->port;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index); struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (ops && ops->is_mock_port(port->uport_dev)) if (ops && ops->is_mock_port(port->uport_dev))
rc = ops->devm_cxl_enumerate_decoders(cxlhdm, info); rc = ops->devm_cxl_endpoint_decoders_setup(port);
else else
rc = devm_cxl_enumerate_decoders(cxlhdm, info); rc = devm_cxl_endpoint_decoders_setup(port);
put_cxl_mock_ops(index); put_cxl_mock_ops(index);
return rc; return rc;
} }
EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_enumerate_decoders, "CXL"); EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_endpoint_decoders_setup, "CXL");
int __wrap_devm_cxl_port_enumerate_dports(struct cxl_port *port) int __wrap_devm_cxl_port_enumerate_dports(struct cxl_port *port)
{ {
@ -211,39 +202,6 @@ int __wrap_cxl_await_media_ready(struct cxl_dev_state *cxlds)
} }
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_await_media_ready, "CXL"); EXPORT_SYMBOL_NS_GPL(__wrap_cxl_await_media_ready, "CXL");
int __wrap_cxl_hdm_decode_init(struct cxl_dev_state *cxlds,
struct cxl_hdm *cxlhdm,
struct cxl_endpoint_dvsec_info *info)
{
int rc = 0, index;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (ops && ops->is_mock_dev(cxlds->dev))
rc = 0;
else
rc = cxl_hdm_decode_init(cxlds, cxlhdm, info);
put_cxl_mock_ops(index);
return rc;
}
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_hdm_decode_init, "CXL");
int __wrap_cxl_dvsec_rr_decode(struct cxl_dev_state *cxlds,
struct cxl_endpoint_dvsec_info *info)
{
int rc = 0, index;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (ops && ops->is_mock_dev(cxlds->dev))
rc = 0;
else
rc = cxl_dvsec_rr_decode(cxlds, info);
put_cxl_mock_ops(index);
return rc;
}
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_dvsec_rr_decode, "CXL");
struct cxl_dport *__wrap_devm_cxl_add_rch_dport(struct cxl_port *port, struct cxl_dport *__wrap_devm_cxl_add_rch_dport(struct cxl_port *port,
struct device *dport_dev, struct device *dport_dev,
int port_id, int port_id,
@ -311,6 +269,22 @@ void __wrap_cxl_dport_init_ras_reporting(struct cxl_dport *dport, struct device
} }
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_dport_init_ras_reporting, "CXL"); EXPORT_SYMBOL_NS_GPL(__wrap_cxl_dport_init_ras_reporting, "CXL");
struct cxl_dport *redirect_devm_cxl_add_dport_by_dev(struct cxl_port *port,
struct device *dport_dev)
{
int index;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
struct cxl_dport *dport;
if (ops && ops->is_mock_port(port->uport_dev))
dport = ops->devm_cxl_add_dport_by_dev(port, dport_dev);
else
dport = __devm_cxl_add_dport_by_dev(port, dport_dev);
put_cxl_mock_ops(index);
return dport;
}
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("cxl_test: emulation module");
MODULE_IMPORT_NS("ACPI");


@@ -20,12 +20,11 @@ struct cxl_mock_ops {
	bool (*is_mock_port)(struct device *dev);
	bool (*is_mock_dev)(struct device *dev);
	int (*devm_cxl_port_enumerate_dports)(struct cxl_port *port);
-	struct cxl_hdm *(*devm_cxl_setup_hdm)(
-		struct cxl_port *port, struct cxl_endpoint_dvsec_info *info);
-	int (*devm_cxl_add_passthrough_decoder)(struct cxl_port *port);
-	int (*devm_cxl_enumerate_decoders)(
-		struct cxl_hdm *hdm, struct cxl_endpoint_dvsec_info *info);
+	int (*devm_cxl_switch_port_decoders_setup)(struct cxl_port *port);
+	int (*devm_cxl_endpoint_decoders_setup)(struct cxl_port *port);
	void (*cxl_endpoint_parse_cdat)(struct cxl_port *port);
+	struct cxl_dport *(*devm_cxl_add_dport_by_dev)(struct cxl_port *port,
+						       struct device *dport_dev);
};

void register_cxl_mock_ops(struct cxl_mock_ops *ops);