soc: driver updates for 6.19, part 2

These updates came a little late, or were based on a later 6.18-rc
 tag than the others:
 
  - A new driver for cache management on CXL devices with memory shared
    in a coherent cluster. This is part of the drivers/cache/ tree, but
    unlike the other drivers that back the dma-mapping interfaces, this
    one is needed only during CPU hotplug.
 
  - A shared branch for reset controllers using swnode infrastructure
 
  - Added support for new SoC variants in the Amlogic soc_device
    identification
 
  - Minor updates in Freescale, Microchip, Samsung, and Apple SoC drivers
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEo6/YBQwIrVS28WGKmmx57+YAGNkFAmkzAWoACgkQmmx57+YA
 GNmyOw//ZR4ie0Mcr2NrQFx0eozSKQ2NYpb+bCqy96livGiEjPMrQXqiEv9XMNmw
 V9c3749cXUUMvbd6sNxco75k/8jTMko/K3ZC5DwMlAytgp49b/LK9I3Lj3hUX3+q
 5X4B1TrgH5J42pJ60kDstjiY17anNxP99ht7A1gd2ijp1GIfQxYvUrhGMsELqTxm
 aAznnsYhwI3O2pmJZj2F2Kj4jen6tfTlQh37oIDoLdweXxI9VGjSRY38/TcsgE3E
 o2EwTuyhimmk2iVN5GmSksGQNj1neeJe3QEjMImcn3asR2WtLQOQGOcIa7OSvF7d
 LIE3uQTxtz2W/2CLmM6RHeUwOwBOz9AD0dZ+JGaZ63ePdypU0w8xyrKhMgw9Pq5F
 Mtt4ml3w2zAfyV4VqmkiYdCAML2kkzPfZRYxhASlYcZ/iAylhCqHJXWJ5SSp8BTc
 QY+aZS9RFAylNvx5qVyOtPeDEqKl0UAnYJHF6JGNQR/6vEKvMwxVJ0EEaAo1luXg
 z7RQCC2MRXX9QPq6YQcJXc4u3jDMTNkbElQy2CAXBUdbRVgFJPjTdtEqg860Cml6
 HHXCazeVAPwA88NR4zHBl6QKxfqp8Iezf9WHTz1WS5xq/71kfdzyG9/zKK8jpoLh
 5MXuyGpWhngep+DmOmP0i43HBzP7vD7g7086x+4jozTwg/TjV/s=
 =YvxQ
 -----END PGP SIGNATURE-----

Merge tag 'soc-drivers-6.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull more SoC driver updates from Arnd Bergmann:
 "These updates came a little late, or were based on a later 6.18-rc tag
  than the others:

   - A new driver for cache management on CXL devices with memory shared
     in a coherent cluster. This is part of the drivers/cache/ tree, but
     unlike the other drivers that back the dma-mapping interfaces, this
     one is needed only during CPU hotplug.

   - A shared branch for reset controllers using swnode infrastructure

   - Added support for new SoC variants in the Amlogic soc_device
     identification

   - Minor updates in Freescale, Microchip, Samsung, and Apple SoC
     drivers"

* tag 'soc-drivers-6.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (24 commits)
  soc: samsung: exynos-pmu: fix device leak on regmap lookup
  soc: samsung: exynos-pmu: Fix structure initialization
  soc: fsl: qbman: use kmalloc_array() instead of kmalloc()
  soc: fsl: qbman: add WQ_PERCPU to alloc_workqueue users
  MAINTAINERS: Update email address for Christophe Leroy
  MAINTAINERS: refer to intended file in STANDALONE CACHE CONTROLLER DRIVERS
  cache: Support cache maintenance for HiSilicon SoC Hydra Home Agent
  cache: Make top level Kconfig menu a boolean dependent on RISCV
  MAINTAINERS: Add Jonathan Cameron to drivers/cache and add lib/cache_maint.c + header
  arm64: Select GENERIC_CPU_CACHE_MAINTENANCE
  lib: Support ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
  soc: amlogic: meson-gx-socinfo: add new SoCs id
  dt-bindings: arm: amlogic: meson-gx-ao-secure: support more SoCs
  memregion: Support fine grained invalidate by cpu_cache_invalidate_memregion()
  memregion: Drop unused IORES_DESC_* parameter from cpu_cache_invalidate_memregion()
  dt-bindings: cache: sifive,ccache0: add a pic64gx compatible
  MAINTAINERS: rename Microchip RISC-V entry
  MAINTAINERS: add new soc drivers to Microchip RISC-V entry
  soc: microchip: add mfd drivers for two syscon regions on PolarFire SoC
  dt-bindings: soc: microchip: document the simple-mfd syscon on PolarFire SoC
  ...
This commit is contained in:
Linus Torvalds 2025-12-05 17:47:59 -08:00
commit 11efc1cb70
29 changed files with 648 additions and 47 deletions

@@ -186,6 +186,9 @@ Christian Brauner <brauner@kernel.org> <christian@brauner.io>
Christian Brauner <brauner@kernel.org> <christian.brauner@canonical.com>
Christian Brauner <brauner@kernel.org> <christian.brauner@ubuntu.com>
Christian Marangi <ansuelsmth@gmail.com>
Christophe Leroy <chleroy@kernel.org> <christophe.leroy@c-s.fr>
Christophe Leroy <chleroy@kernel.org> <christophe.leroy@csgroup.eu>
Christophe Leroy <chleroy@kernel.org> <christophe.leroy2@cs-soprasteria.com>
Christophe Ricard <christophe.ricard@gmail.com>
Christopher Obbard <christopher.obbard@linaro.org> <chris.obbard@collabora.com>
Christoph Hellwig <hch@lst.de>

@@ -34,6 +34,9 @@ properties:
- amlogic,a4-ao-secure
- amlogic,c3-ao-secure
- amlogic,s4-ao-secure
- amlogic,s6-ao-secure
- amlogic,s7-ao-secure
- amlogic,s7d-ao-secure
- amlogic,t7-ao-secure
- const: amlogic,meson-gx-ao-secure
- const: syscon

@@ -48,6 +48,11 @@ properties:
- const: microchip,mpfs-ccache
- const: sifive,fu540-c000-ccache
- const: cache
- items:
- const: microchip,pic64gx-ccache
- const: microchip,mpfs-ccache
- const: sifive,fu540-c000-ccache
- const: cache
cache-block-size:
const: 64

@@ -0,0 +1,47 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/soc/microchip/microchip,mpfs-mss-top-sysreg.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Microchip PolarFire SoC Microprocessor Subsystem (MSS) sysreg register region
maintainers:
- Conor Dooley <conor.dooley@microchip.com>
description:
A wide assortment of registers that control elements of the MSS on PolarFire
SoC, including pinmuxing, resets and clocks among others.
properties:
compatible:
items:
- const: microchip,mpfs-mss-top-sysreg
- const: syscon
reg:
maxItems: 1
'#reset-cells':
description:
The AHB/AXI peripherals on the PolarFire SoC have reset support, from
CLK_ENVM to CLK_CFM. The reset consumer should specify the
desired peripheral via the clock ID in its "resets" phandle cell.
See include/dt-bindings/clock/microchip,mpfs-clock.h for the full list
of PolarFire clock/reset IDs.
const: 1
required:
- compatible
- reg
additionalProperties: false
examples:
- |
syscon@20002000 {
compatible = "microchip,mpfs-mss-top-sysreg", "syscon";
reg = <0x20002000 0x1000>;
#reset-cells = <1>;
};
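
The '#reset-cells' description above says consumers pass a clock ID from microchip,mpfs-clock.h in their "resets" cell. A hypothetical consumer node might look like the following sketch; the SPI node, its register address, and the use of CLK_SPI0 are illustrative and not taken from this binding, and the &mss_top_sysreg label is assumed:

```dts
/* Hypothetical consumer of the sysreg reset controller: references the
 * provider by phandle plus a clock/reset ID from microchip,mpfs-clock.h. */
spi@20108000 {
	compatible = "microchip,mpfs-spi";
	reg = <0x20108000 0x1000>;
	resets = <&mss_top_sysreg CLK_SPI0>;
};
```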

@@ -4590,7 +4590,7 @@ F: drivers/net/ethernet/netronome/nfp/bpf/
BPF JIT for POWERPC (32-BIT AND 64-BIT)
M: Hari Bathini <hbathini@linux.ibm.com>
M: Christophe Leroy <christophe.leroy@csgroup.eu>
M: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
R: Naveen N Rao <naveen@kernel.org>
L: bpf@vger.kernel.org
S: Supported
@@ -10082,7 +10082,7 @@ F: drivers/spi/spi-fsl-qspi.c
FREESCALE QUICC ENGINE LIBRARY
M: Qiang Zhao <qiang.zhao@nxp.com>
M: Christophe Leroy <christophe.leroy@csgroup.eu>
M: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
L: linuxppc-dev@lists.ozlabs.org
S: Maintained
F: drivers/soc/fsl/qe/
@@ -10135,7 +10135,7 @@ S: Maintained
F: drivers/tty/serial/ucc_uart.c
FREESCALE SOC DRIVERS
M: Christophe Leroy <christophe.leroy@csgroup.eu>
M: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
L: linuxppc-dev@lists.ozlabs.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
@@ -14400,7 +14400,7 @@ LINUX FOR POWERPC (32-BIT AND 64-BIT)
M: Madhavan Srinivasan <maddy@linux.ibm.com>
M: Michael Ellerman <mpe@ellerman.id.au>
R: Nicholas Piggin <npiggin@gmail.com>
R: Christophe Leroy <christophe.leroy@csgroup.eu>
R: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
L: linuxppc-dev@lists.ozlabs.org
S: Supported
W: https://github.com/linuxppc/wiki/wiki
@@ -14456,7 +14456,7 @@ F: Documentation/devicetree/bindings/powerpc/fsl/
F: arch/powerpc/platforms/85xx/
LINUX FOR POWERPC EMBEDDED PPC8XX AND PPC83XX
M: Christophe Leroy <christophe.leroy@csgroup.eu>
M: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
L: linuxppc-dev@lists.ozlabs.org
S: Maintained
F: arch/powerpc/platforms/8xx/
@@ -22316,7 +22316,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux.git
F: Documentation/devicetree/bindings/iommu/riscv,iommu.yaml
F: drivers/iommu/riscv/
RISC-V MICROCHIP FPGA SUPPORT
RISC-V MICROCHIP SUPPORT
M: Conor Dooley <conor.dooley@microchip.com>
M: Daire McNamara <daire.mcnamara@microchip.com>
L: linux-riscv@lists.infradead.org
@@ -22343,6 +22343,8 @@ F: drivers/pci/controller/plda/pcie-microchip-host.c
F: drivers/pwm/pwm-microchip-core.c
F: drivers/reset/reset-mpfs.c
F: drivers/rtc/rtc-mpfs.c
F: drivers/soc/microchip/mpfs-control-scb.c
F: drivers/soc/microchip/mpfs-mss-top-sysreg.c
F: drivers/soc/microchip/mpfs-sys-controller.c
F: drivers/spi/spi-microchip-core-qspi.c
F: drivers/spi/spi-mpfs.c
@@ -24696,10 +24698,13 @@ F: drivers/staging/
STANDALONE CACHE CONTROLLER DRIVERS
M: Conor Dooley <conor@kernel.org>
M: Jonathan Cameron <jonathan.cameron@huawei.com>
S: Maintained
T: git https://git.kernel.org/pub/scm/linux/kernel/git/conor/linux.git/
F: Documentation/devicetree/bindings/cache/
F: drivers/cache
F: include/linux/cache_coherency.h
F: lib/cache_maint.c
STARFIRE/DURALAN NETWORK DRIVER
M: Ion Badulescu <ionut@badula.org>

@@ -21,6 +21,7 @@ config ARM64
select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
select ARCH_HAS_CACHE_LINE_SIZE
select ARCH_HAS_CC_PLATFORM
select ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
select ARCH_HAS_CURRENT_STACK_POINTER
select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEBUG_VM_PGTABLE
@@ -148,6 +149,7 @@ config ARM64
select GENERIC_ARCH_TOPOLOGY
select GENERIC_CLOCKEVENTS_BROADCAST
select GENERIC_CPU_AUTOPROBE
select GENERIC_CPU_CACHE_MAINTENANCE
select GENERIC_CPU_DEVICES
select GENERIC_CPU_VULNERABILITIES
select GENERIC_EARLY_IOREMAP

@@ -368,7 +368,7 @@ bool cpu_cache_has_invalidate_memregion(void)
}
EXPORT_SYMBOL_NS_GPL(cpu_cache_has_invalidate_memregion, "DEVMEM");
int cpu_cache_invalidate_memregion(int res_desc)
int cpu_cache_invalidate_memregion(phys_addr_t start, size_t len)
{
if (WARN_ON_ONCE(!cpu_cache_has_invalidate_memregion()))
return -ENXIO;

drivers/cache/Kconfig
@@ -1,9 +1,17 @@
# SPDX-License-Identifier: GPL-2.0
menu "Cache Drivers"
menuconfig CACHEMAINT_FOR_DMA
bool "Cache management for noncoherent DMA"
depends on RISCV
default y
help
These drivers implement support for noncoherent DMA master devices
on platforms that lack the standard CPU interfaces for this.
if CACHEMAINT_FOR_DMA
config AX45MP_L2_CACHE
bool "Andes Technology AX45MP L2 Cache controller"
depends on RISCV
select RISCV_NONSTANDARD_CACHE_OPS
help
Support for the L2 cache controller on Andes Technology AX45MP platforms.
@@ -16,7 +24,6 @@ config SIFIVE_CCACHE
config STARFIVE_STARLINK_CACHE
bool "StarFive StarLink Cache controller"
depends on RISCV
depends on ARCH_STARFIVE
depends on 64BIT
select RISCV_DMA_NONCOHERENT
@@ -24,4 +31,26 @@ config STARFIVE_STARLINK_CACHE
help
Support for the StarLink cache controller IP from StarFive.
endmenu
endif #CACHEMAINT_FOR_DMA
menuconfig CACHEMAINT_FOR_HOTPLUG
bool "Cache management for memory hotplug-like operations"
depends on GENERIC_CPU_CACHE_MAINTENANCE
help
These drivers implement cache management for flows where it is necessary
to flush data from all host caches.
if CACHEMAINT_FOR_HOTPLUG
config HISI_SOC_HHA
tristate "HiSilicon Hydra Home Agent (HHA) device driver"
depends on (ARM64 && ACPI) || COMPILE_TEST
help
The Hydra Home Agent (HHA) is responsible for cache coherency
on the SoC. This driver enables the cache maintenance functions of
the HHA.
This driver can be built as a module. If so, the module will be
called hisi_soc_hha.
endif #CACHEMAINT_FOR_HOTPLUG

@@ -3,3 +3,5 @@
obj-$(CONFIG_AX45MP_L2_CACHE) += ax45mp_cache.o
obj-$(CONFIG_SIFIVE_CCACHE) += sifive_ccache.o
obj-$(CONFIG_STARFIVE_STARLINK_CACHE) += starfive_starlink_cache.o
obj-$(CONFIG_HISI_SOC_HHA) += hisi_soc_hha.o

drivers/cache/hisi_soc_hha.c
@@ -0,0 +1,194 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Driver for HiSilicon Hydra Home Agent (HHA).
*
* Copyright (c) 2025 HiSilicon Technologies Co., Ltd.
* Author: Yicong Yang <yangyicong@hisilicon.com>
* Yushan Wang <wangyushan12@huawei.com>
*
* A system typically contains multiple HHAs. Each is responsible for a subset
* of the physical addresses in the system, but interleaving can make it
* complex to map a particular cache line to the HHA responsible for it. As
* such, no filtering is done in the driver; the hardware is responsible for
* responding with success even if it was not responsible for any addresses
* in the range on which the operation was requested.
*/
#include <linux/bitfield.h>
#include <linux/cache_coherency.h>
#include <linux/dev_printk.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
#include <linux/memregion.h>
#include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/mutex.h>
#include <linux/platform_device.h>
#define HISI_HHA_CTRL 0x5004
#define HISI_HHA_CTRL_EN BIT(0)
#define HISI_HHA_CTRL_RANGE BIT(1)
#define HISI_HHA_CTRL_TYPE GENMASK(3, 2)
#define HISI_HHA_START_L 0x5008
#define HISI_HHA_START_H 0x500c
#define HISI_HHA_LEN_L 0x5010
#define HISI_HHA_LEN_H 0x5014
/* Maintenance operations are performed at a 128 byte granularity */
#define HISI_HHA_MAINT_ALIGN 128
#define HISI_HHA_POLL_GAP_US 10
#define HISI_HHA_POLL_TIMEOUT_US 50000
struct hisi_soc_hha {
/* Must be first element */
struct cache_coherency_ops_inst cci;
/* Locks HHA instance to forbid overlapping access. */
struct mutex lock;
void __iomem *base;
};
static bool hisi_hha_cache_maintain_wait_finished(struct hisi_soc_hha *soc_hha)
{
u32 val;
return !readl_poll_timeout_atomic(soc_hha->base + HISI_HHA_CTRL, val,
!(val & HISI_HHA_CTRL_EN),
HISI_HHA_POLL_GAP_US,
HISI_HHA_POLL_TIMEOUT_US);
}
static int hisi_soc_hha_wbinv(struct cache_coherency_ops_inst *cci,
struct cc_inval_params *invp)
{
struct hisi_soc_hha *soc_hha =
container_of(cci, struct hisi_soc_hha, cci);
phys_addr_t top, addr = invp->addr;
size_t size = invp->size;
u32 reg;
if (!size)
return -EINVAL;
addr = ALIGN_DOWN(addr, HISI_HHA_MAINT_ALIGN);
top = ALIGN(addr + size, HISI_HHA_MAINT_ALIGN);
size = top - addr;
guard(mutex)(&soc_hha->lock);
if (!hisi_hha_cache_maintain_wait_finished(soc_hha))
return -EBUSY;
/*
* Hardware will search for addresses ranging [addr, addr + size - 1],
* last byte included, and perform maintenance in 128 byte granules
* on those cachelines which contain the addresses. If a given instance
* is either not responsible for a cacheline or that cacheline is not
* currently present then the search will fail, no operation will be
* necessary and the device will report success.
*/
size -= 1;
writel(lower_32_bits(addr), soc_hha->base + HISI_HHA_START_L);
writel(upper_32_bits(addr), soc_hha->base + HISI_HHA_START_H);
writel(lower_32_bits(size), soc_hha->base + HISI_HHA_LEN_L);
writel(upper_32_bits(size), soc_hha->base + HISI_HHA_LEN_H);
reg = FIELD_PREP(HISI_HHA_CTRL_TYPE, 1); /* Clean Invalid */
reg |= HISI_HHA_CTRL_RANGE | HISI_HHA_CTRL_EN;
writel(reg, soc_hha->base + HISI_HHA_CTRL);
return 0;
}
static int hisi_soc_hha_done(struct cache_coherency_ops_inst *cci)
{
struct hisi_soc_hha *soc_hha =
container_of(cci, struct hisi_soc_hha, cci);
guard(mutex)(&soc_hha->lock);
if (!hisi_hha_cache_maintain_wait_finished(soc_hha))
return -ETIMEDOUT;
return 0;
}
static const struct cache_coherency_ops hha_ops = {
.wbinv = hisi_soc_hha_wbinv,
.done = hisi_soc_hha_done,
};
static int hisi_soc_hha_probe(struct platform_device *pdev)
{
struct hisi_soc_hha *soc_hha;
struct resource *mem;
int ret;
soc_hha = cache_coherency_ops_instance_alloc(&hha_ops,
struct hisi_soc_hha, cci);
if (!soc_hha)
return -ENOMEM;
platform_set_drvdata(pdev, soc_hha);
mutex_init(&soc_hha->lock);
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!mem) {
ret = -ENOMEM;
goto err_free_cci;
}
soc_hha->base = ioremap(mem->start, resource_size(mem));
if (!soc_hha->base) {
ret = dev_err_probe(&pdev->dev, -ENOMEM,
"failed to remap io memory");
goto err_free_cci;
}
ret = cache_coherency_ops_instance_register(&soc_hha->cci);
if (ret)
goto err_iounmap;
return 0;
err_iounmap:
iounmap(soc_hha->base);
err_free_cci:
cache_coherency_ops_instance_put(&soc_hha->cci);
return ret;
}
static void hisi_soc_hha_remove(struct platform_device *pdev)
{
struct hisi_soc_hha *soc_hha = platform_get_drvdata(pdev);
cache_coherency_ops_instance_unregister(&soc_hha->cci);
iounmap(soc_hha->base);
cache_coherency_ops_instance_put(&soc_hha->cci);
}
static const struct acpi_device_id hisi_soc_hha_ids[] = {
{ "HISI0511", },
{ }
};
MODULE_DEVICE_TABLE(acpi, hisi_soc_hha_ids);
static struct platform_driver hisi_soc_hha_driver = {
.driver = {
.name = "hisi_soc_hha",
.acpi_match_table = hisi_soc_hha_ids,
},
.probe = hisi_soc_hha_probe,
.remove = hisi_soc_hha_remove,
};
module_platform_driver(hisi_soc_hha_driver);
MODULE_IMPORT_NS("CACHE_COHERENCY");
MODULE_DESCRIPTION("HiSilicon Hydra Home Agent driver supporting cache maintenance");
MODULE_AUTHOR("Yicong Yang <yangyicong@hisilicon.com>");
MODULE_AUTHOR("Yushan Wang <wangyushan12@huawei.com>");
MODULE_LICENSE("GPL");

@@ -236,7 +236,10 @@ static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
return -ENXIO;
}
cpu_cache_invalidate_memregion(IORES_DESC_CXL);
if (!cxlr->params.res)
return -ENXIO;
cpu_cache_invalidate_memregion(cxlr->params.res->start,
resource_size(cxlr->params.res));
return 0;
}

@@ -110,7 +110,7 @@ static void nd_region_remove(struct device *dev)
* here is ok.
*/
if (cpu_cache_has_invalidate_memregion())
cpu_cache_invalidate_memregion(IORES_DESC_PERSISTENT_MEMORY);
cpu_cache_invalidate_all();
}
static int child_notify(struct device *dev, void *data)

@@ -90,7 +90,7 @@ static int nd_region_invalidate_memregion(struct nd_region *nd_region)
}
}
cpu_cache_invalidate_memregion(IORES_DESC_PERSISTENT_MEMORY);
cpu_cache_invalidate_all();
out:
for (i = 0; i < nd_region->ndr_mappings; i++) {
struct nd_mapping *nd_mapping = &nd_region->mapping[i];

@@ -60,12 +60,9 @@ struct meson_canvas *meson_canvas_get(struct device *dev)
return ERR_PTR(-ENODEV);
canvas_pdev = of_find_device_by_node(canvas_node);
if (!canvas_pdev) {
of_node_put(canvas_node);
if (!canvas_pdev)
return ERR_PTR(-EPROBE_DEFER);
}
of_node_put(canvas_node);
/*
* If priv is NULL, it's probably because the canvas hasn't
@@ -73,10 +70,9 @@ struct meson_canvas *meson_canvas_get(struct device *dev)
* current state, this driver probe cannot return -EPROBE_DEFER
*/
canvas = dev_get_drvdata(&canvas_pdev->dev);
if (!canvas) {
put_device(&canvas_pdev->dev);
if (!canvas)
return ERR_PTR(-EINVAL);
}
return canvas;
}

@@ -46,6 +46,9 @@ static const struct meson_gx_soc_id {
{ "A5", 0x3c },
{ "C3", 0x3d },
{ "A4", 0x40 },
{ "S7", 0x46 },
{ "S7D", 0x47 },
{ "S6", 0x48 },
};
static const struct meson_gx_package_id {
@@ -86,6 +89,9 @@ static const struct meson_gx_package_id {
{ "A311D2", 0x36, 0x1, 0xf },
{ "A113X2", 0x3c, 0x1, 0xf },
{ "A113L2", 0x40, 0x1, 0xf },
{ "S805X3", 0x46, 0x3, 0xf },
{ "S905X5M", 0x47, 0x1, 0xf },
{ "S905X5", 0x48, 0x1, 0xf },
};
static inline unsigned int socinfo_to_major(u32 socinfo)

@@ -302,11 +302,18 @@ struct apple_mbox *apple_mbox_get(struct device *dev, int index)
return ERR_PTR(-EPROBE_DEFER);
mbox = platform_get_drvdata(pdev);
if (!mbox)
return ERR_PTR(-EPROBE_DEFER);
if (!mbox) {
mbox = ERR_PTR(-EPROBE_DEFER);
goto out_put_pdev;
}
if (!device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_CONSUMER))
return ERR_PTR(-ENODEV);
if (!device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_CONSUMER)) {
mbox = ERR_PTR(-ENODEV);
goto out_put_pdev;
}
out_put_pdev:
put_device(&pdev->dev);
return mbox;
}

@@ -214,17 +214,11 @@ static int apple_sart_probe(struct platform_device *pdev)
return 0;
}
static void apple_sart_put_device(void *dev)
{
put_device(dev);
}
struct apple_sart *devm_apple_sart_get(struct device *dev)
{
struct device_node *sart_node;
struct platform_device *sart_pdev;
struct apple_sart *sart;
int ret;
sart_node = of_parse_phandle(dev->of_node, "apple,sart", 0);
if (!sart_node)
@@ -242,14 +236,11 @@ struct apple_sart *devm_apple_sart_get(struct device *dev)
return ERR_PTR(-EPROBE_DEFER);
}
ret = devm_add_action_or_reset(dev, apple_sart_put_device,
&sart_pdev->dev);
if (ret)
return ERR_PTR(ret);
device_link_add(dev, &sart_pdev->dev,
DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_SUPPLIER);
put_device(&sart_pdev->dev);
return sart;
}
EXPORT_SYMBOL_GPL(devm_apple_sart_get);

@@ -1073,7 +1073,7 @@ EXPORT_SYMBOL(qman_portal_set_iperiod);
int qman_wq_alloc(void)
{
qm_portal_wq = alloc_workqueue("qman_portal_wq", 0, 1);
qm_portal_wq = alloc_workqueue("qman_portal_wq", WQ_PERCPU, 1);
if (!qm_portal_wq)
return -ENOMEM;
return 0;

@@ -219,7 +219,7 @@ static int allocate_frame_data(void)
pcfg = qman_get_qm_portal_config(qman_dma_portal);
__frame_ptr = kmalloc(4 * HP_NUM_WORDS, GFP_KERNEL);
__frame_ptr = kmalloc_array(4, HP_NUM_WORDS, GFP_KERNEL);
if (!__frame_ptr)
return -ENOMEM;

@@ -9,3 +9,15 @@ config POLARFIRE_SOC_SYS_CTRL
module will be called mpfs_system_controller.
If unsure, say N.
config POLARFIRE_SOC_SYSCONS
bool "PolarFire SoC (MPFS) syscon drivers"
default y
depends on ARCH_MICROCHIP
select MFD_CORE
help
These drivers add support for the syscons on PolarFire SoC (MPFS).
Without these drivers, core parts of the kernel such as clocks
and resets will not function correctly.
If unsure, and on a PolarFire SoC, say y.

@@ -1 +1,2 @@
obj-$(CONFIG_POLARFIRE_SOC_SYS_CTRL) += mpfs-sys-controller.o
obj-$(CONFIG_POLARFIRE_SOC_SYSCONS) += mpfs-control-scb.o mpfs-mss-top-sysreg.o

@@ -0,0 +1,38 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/array_size.h>
#include <linux/of.h>
#include <linux/mfd/core.h>
#include <linux/mfd/syscon.h>
#include <linux/platform_device.h>
static const struct mfd_cell mpfs_control_scb_devs[] = {
MFD_CELL_NAME("mpfs-tvs"),
};
static int mpfs_control_scb_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
return mfd_add_devices(dev, PLATFORM_DEVID_NONE, mpfs_control_scb_devs,
ARRAY_SIZE(mpfs_control_scb_devs), NULL, 0, NULL);
}
static const struct of_device_id mpfs_control_scb_of_match[] = {
{ .compatible = "microchip,mpfs-control-scb", },
{},
};
MODULE_DEVICE_TABLE(of, mpfs_control_scb_of_match);
static struct platform_driver mpfs_control_scb_driver = {
.driver = {
.name = "mpfs-control-scb",
.of_match_table = mpfs_control_scb_of_match,
},
.probe = mpfs_control_scb_probe,
};
module_platform_driver(mpfs_control_scb_driver);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Conor Dooley <conor.dooley@microchip.com>");
MODULE_DESCRIPTION("PolarFire SoC control scb driver");

@@ -0,0 +1,44 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/array_size.h>
#include <linux/of.h>
#include <linux/mfd/core.h>
#include <linux/mfd/syscon.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
static const struct mfd_cell mpfs_mss_top_sysreg_devs[] = {
MFD_CELL_NAME("mpfs-reset"),
};
static int mpfs_mss_top_sysreg_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
int ret;
ret = mfd_add_devices(dev, PLATFORM_DEVID_NONE, mpfs_mss_top_sysreg_devs,
ARRAY_SIZE(mpfs_mss_top_sysreg_devs), NULL, 0, NULL);
if (ret)
return ret;
return devm_of_platform_populate(dev);
}
static const struct of_device_id mpfs_mss_top_sysreg_of_match[] = {
{ .compatible = "microchip,mpfs-mss-top-sysreg", },
{},
};
MODULE_DEVICE_TABLE(of, mpfs_mss_top_sysreg_of_match);
static struct platform_driver mpfs_mss_top_sysreg_driver = {
.driver = {
.name = "mpfs-mss-top-sysreg",
.of_match_table = mpfs_mss_top_sysreg_of_match,
},
.probe = mpfs_mss_top_sysreg_probe,
};
module_platform_driver(mpfs_mss_top_sysreg_driver);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Conor Dooley <conor.dooley@microchip.com>");
MODULE_DESCRIPTION("PolarFire SoC mss top sysreg driver");

@@ -213,6 +213,8 @@ struct regmap *exynos_get_pmu_regmap_by_phandle(struct device_node *np,
if (!dev)
return ERR_PTR(-EPROBE_DEFER);
put_device(dev);
return syscon_node_to_regmap(pmu_np);
}
EXPORT_SYMBOL_GPL(exynos_get_pmu_regmap_by_phandle);
@@ -454,10 +456,6 @@ static int setup_cpuhp_and_cpuidle(struct device *dev)
if (!pmu_context->in_cpuhp)
return -ENOMEM;
raw_spin_lock_init(&pmu_context->cpupm_lock);
pmu_context->sys_inreboot = false;
pmu_context->sys_insuspend = false;
/* set PMU to power on */
for_each_online_cpu(cpu)
gs101_cpuhp_pmu_online(cpu);
@@ -529,6 +527,9 @@ static int exynos_pmu_probe(struct platform_device *pdev)
pmu_context->pmureg = regmap;
pmu_context->dev = dev;
raw_spin_lock_init(&pmu_context->cpupm_lock);
pmu_context->sys_inreboot = false;
pmu_context->sys_insuspend = false;
if (pmu_context->pmu_data && pmu_context->pmu_data->pmu_cpuhp) {
ret = setup_cpuhp_and_cpuidle(dev);

@@ -0,0 +1,61 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cache coherency maintenance operation device drivers
*
* Copyright Huawei 2025
*/
#ifndef _LINUX_CACHE_COHERENCY_H_
#define _LINUX_CACHE_COHERENCY_H_
#include <linux/list.h>
#include <linux/kref.h>
#include <linux/types.h>
struct cc_inval_params {
phys_addr_t addr;
size_t size;
};
struct cache_coherency_ops_inst;
struct cache_coherency_ops {
int (*wbinv)(struct cache_coherency_ops_inst *cci,
struct cc_inval_params *invp);
int (*done)(struct cache_coherency_ops_inst *cci);
};
struct cache_coherency_ops_inst {
struct kref kref;
struct list_head node;
const struct cache_coherency_ops *ops;
};
int cache_coherency_ops_instance_register(struct cache_coherency_ops_inst *cci);
void cache_coherency_ops_instance_unregister(struct cache_coherency_ops_inst *cci);
struct cache_coherency_ops_inst *
_cache_coherency_ops_instance_alloc(const struct cache_coherency_ops *ops,
size_t size);
/**
* cache_coherency_ops_instance_alloc - Allocate cache coherency ops instance
* @ops: Cache maintenance operations
* @drv_struct: structure that contains the struct cache_coherency_ops_inst
* @member: Name of the struct cache_coherency_ops_inst member in @drv_struct.
*
* This allocates a driver specific structure and initializes the
* cache_coherency_ops_inst embedded in the drv_struct. Upon success the
* pointer must be freed via cache_coherency_ops_instance_put().
*
* Returns a &drv_struct * on success, %NULL on error.
*/
#define cache_coherency_ops_instance_alloc(ops, drv_struct, member) \
({ \
static_assert(__same_type(struct cache_coherency_ops_inst, \
((drv_struct *)NULL)->member)); \
static_assert(offsetof(drv_struct, member) == 0); \
(drv_struct *)_cache_coherency_ops_instance_alloc(ops, \
sizeof(drv_struct)); \
})
void cache_coherency_ops_instance_put(struct cache_coherency_ops_inst *cci);
#endif

@@ -26,8 +26,10 @@ static inline void memregion_free(int id)
/**
* cpu_cache_invalidate_memregion - drop any CPU cached data for
* memregions described by @res_desc
* @res_desc: one of the IORES_DESC_* types
* memregion
* @start: start physical address of the target memory region.
* @len: length of the target memory region. Pass -1 to cover all
* regions.
*
* Perform cache maintenance after a memory event / operation that
* changes the contents of physical memory in a cache-incoherent manner.
@@ -46,7 +48,7 @@ static inline void memregion_free(int id)
* the cache maintenance.
*/
#ifdef CONFIG_ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
int cpu_cache_invalidate_memregion(int res_desc);
int cpu_cache_invalidate_memregion(phys_addr_t start, size_t len);
bool cpu_cache_has_invalidate_memregion(void);
#else
static inline bool cpu_cache_has_invalidate_memregion(void)
@@ -54,10 +56,16 @@ static inline bool cpu_cache_has_invalidate_memregion(void)
return false;
}
static inline int cpu_cache_invalidate_memregion(int res_desc)
static inline int cpu_cache_invalidate_memregion(phys_addr_t start, size_t len)
{
WARN_ON_ONCE("CPU cache invalidation required");
return -ENXIO;
}
#endif
static inline int cpu_cache_invalidate_all(void)
{
return cpu_cache_invalidate_memregion(0, -1);
}
#endif /* _MEMREGION_H_ */

@@ -542,6 +542,9 @@ config MEMREGION
config ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
bool
config GENERIC_CPU_CACHE_MAINTENANCE
bool
config ARCH_HAS_MEMREMAP_COMPAT_ALIGN
bool

@@ -127,6 +127,8 @@ obj-$(CONFIG_HAS_IOMEM) += iomap_copy.o devres.o
obj-$(CONFIG_CHECK_SIGNATURE) += check_signature.o
obj-$(CONFIG_DEBUG_LOCKING_API_SELFTESTS) += locking-selftest.o
obj-$(CONFIG_GENERIC_CPU_CACHE_MAINTENANCE) += cache_maint.o
lib-y += logic_pio.o
lib-$(CONFIG_INDIRECT_IOMEM) += logic_iomem.o

lib/cache_maint.c
@@ -0,0 +1,138 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Generic support for Memory System Cache Maintenance operations.
*
* Coherency maintenance drivers register with this simple framework that will
* iterate over each registered instance to first kick off invalidation and
* then to wait until it is complete.
*
* If no implementations are registered yet, cpu_cache_has_invalidate_memregion()
* will return false. If this runs concurrently with unregistration then a
* race exists but this is no worse than the case where the operations instance
* responsible for a given memory region has not yet registered.
*/
#include <linux/cache_coherency.h>
#include <linux/cleanup.h>
#include <linux/container_of.h>
#include <linux/export.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/memregion.h>
#include <linux/module.h>
#include <linux/rwsem.h>
#include <linux/slab.h>
static LIST_HEAD(cache_ops_instance_list);
static DECLARE_RWSEM(cache_ops_instance_list_lock);
static void __cache_coherency_ops_instance_free(struct kref *kref)
{
struct cache_coherency_ops_inst *cci =
container_of(kref, struct cache_coherency_ops_inst, kref);
kfree(cci);
}
void cache_coherency_ops_instance_put(struct cache_coherency_ops_inst *cci)
{
kref_put(&cci->kref, __cache_coherency_ops_instance_free);
}
EXPORT_SYMBOL_GPL(cache_coherency_ops_instance_put);
static int cache_inval_one(struct cache_coherency_ops_inst *cci, void *data)
{
if (!cci->ops)
return -EINVAL;
return cci->ops->wbinv(cci, data);
}
static int cache_inval_done_one(struct cache_coherency_ops_inst *cci)
{
if (!cci->ops)
return -EINVAL;
if (!cci->ops->done)
return 0;
return cci->ops->done(cci);
}
static int cache_invalidate_memregion(phys_addr_t addr, size_t size)
{
int ret;
struct cache_coherency_ops_inst *cci;
struct cc_inval_params params = {
.addr = addr,
.size = size,
};
guard(rwsem_read)(&cache_ops_instance_list_lock);
list_for_each_entry(cci, &cache_ops_instance_list, node) {
ret = cache_inval_one(cci, &params);
if (ret)
return ret;
}
list_for_each_entry(cci, &cache_ops_instance_list, node) {
ret = cache_inval_done_one(cci);
if (ret)
return ret;
}
return 0;
}
struct cache_coherency_ops_inst *
_cache_coherency_ops_instance_alloc(const struct cache_coherency_ops *ops,
size_t size)
{
struct cache_coherency_ops_inst *cci;
if (!ops || !ops->wbinv)
return NULL;
cci = kzalloc(size, GFP_KERNEL);
if (!cci)
return NULL;
cci->ops = ops;
INIT_LIST_HEAD(&cci->node);
kref_init(&cci->kref);
return cci;
}
EXPORT_SYMBOL_NS_GPL(_cache_coherency_ops_instance_alloc, "CACHE_COHERENCY");
int cache_coherency_ops_instance_register(struct cache_coherency_ops_inst *cci)
{
guard(rwsem_write)(&cache_ops_instance_list_lock);
list_add(&cci->node, &cache_ops_instance_list);
return 0;
}
EXPORT_SYMBOL_NS_GPL(cache_coherency_ops_instance_register, "CACHE_COHERENCY");
void cache_coherency_ops_instance_unregister(struct cache_coherency_ops_inst *cci)
{
guard(rwsem_write)(&cache_ops_instance_list_lock);
list_del(&cci->node);
}
EXPORT_SYMBOL_NS_GPL(cache_coherency_ops_instance_unregister, "CACHE_COHERENCY");
int cpu_cache_invalidate_memregion(phys_addr_t start, size_t len)
{
return cache_invalidate_memregion(start, len);
}
EXPORT_SYMBOL_NS_GPL(cpu_cache_invalidate_memregion, "DEVMEM");
/*
* Used for optimization / debug purposes only as removal can race
*
* Machines that do not support invalidation, e.g. VMs, will not have any
* operations instance to register and so this will always return false.
*/
bool cpu_cache_has_invalidate_memregion(void)
{
guard(rwsem_read)(&cache_ops_instance_list_lock);
return !list_empty(&cache_ops_instance_list);
}
EXPORT_SYMBOL_NS_GPL(cpu_cache_has_invalidate_memregion, "DEVMEM");