Merge tag 'irq-drivers-2025-09-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq chip driver updates from Thomas Gleixner:

 - Use the startup/shutdown callbacks for the PCI/MSI per device
   interrupt domains.

   This allows us to initialize the RISCV PLIC interrupt hierarchy
   correctly and provides a mechanism to decouple the masking and
   unmasking during run-time from the expensive PCI mask and unmask when
   the underlying MSI provider implementation allows the interrupt to be
   masked.

 - Initialize the RISCV PLIC MSI interrupt hierarchy correctly by
   switching it over to the startup/shutdown scheme, so that affinity
   assignment works as expected.

 - Allow MSI providers to opt out from masking a PCI/MSI interrupt at
   the PCI device during operation when the provider can mask the
   interrupt at the underlying interrupt chip. This reduces the overhead
   in scenarios where disable_irq()/enable_irq() is utilized frequently
   by a driver.

   The PCI/MSI device level [un]masking is only required on startup and
   shutdown in this case (a minimal sketch of how a provider opts in
   follows this list).

 - Remove the conditional mask/unmask logic in the PCI/MSI layer as this
   is now handled unconditionally.

 - Replace the hardcoded interrupt routing in the Loongson EIOINTC
   interrupt driver to respect the firmware settings and spread them out
   to different CPU interrupt inputs so that the demultiplexing handler
   only needs to read a single 64-bit status register instead of
   four, which significantly reduces the overhead in VMs as the status
   register access causes a VM exit.

 - Add support for the new AST2700 SCU interrupt controllers

 - Use the legacy interrupt domain setup for the Loongson PCH-LPC
   interrupt controller, which resembles the x86 legacy PIC setup and
   has the same hardcoded legacy requirements.

 - The usual set of cleanups, fixes and improvements all over the place
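
 Hedged sketch of the opt-in described above, modeled on the SG2042/SG2044
 MSI changes further down in this diff: the MSI_FLAG_PCI_MSI_MASK_PARENT /
 MSI_FLAG_PCI_MSI_STARTUP_PARENT flags and the irq_chip_*_parent() helpers
 are the interfaces touched by this merge, while everything named example_*
 and the header path noted in the comment are hypothetical illustration,
 not an actual driver.

#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/irqchip/irq-msi-lib.h>	/* assumed location of msi_lib_init_dev_msi_info() */

/* Hypothetical doorbell write; a real provider fills in its own address/data */
static void example_msi_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
{
	msg->address_hi = 0;
	msg->address_lo = 0;
	msg->data = (u32)irqd_to_hwirq(d);
}

/*
 * The middle/parent chip forwards startup/shutdown and mask/unmask to the
 * underlying interrupt chip, so the PCI/MSI layer only has to touch the
 * device level mask bits on startup and shutdown.
 */
static const struct irq_chip example_msi_middle_irq_chip = {
	.name			= "EXAMPLE MSI",
	.irq_startup		= irq_chip_startup_parent,
	.irq_shutdown		= irq_chip_shutdown_parent,
	.irq_mask		= irq_chip_mask_parent,
	.irq_unmask		= irq_chip_unmask_parent,
	.irq_compose_msi_msg	= example_msi_compose_msi_msg,
};

/* Opting in: both flags are enforced for every child PCI/MSI domain */
#define EXAMPLE_MSI_FLAGS_REQUIRED	(MSI_FLAG_USE_DEF_DOM_OPS |		\
					 MSI_FLAG_USE_DEF_CHIP_OPS |		\
					 MSI_FLAG_PCI_MSI_MASK_PARENT |		\
					 MSI_FLAG_PCI_MSI_STARTUP_PARENT)

static const struct msi_parent_ops example_msi_parent_ops = {
	.required_flags		= EXAMPLE_MSI_FLAGS_REQUIRED,
	.supported_flags	= MSI_GENERIC_FLAGS_MASK,
	.bus_select_mask	= MATCH_PCI_MSI,
	.bus_select_token	= DOMAIN_BUS_NEXUS,
	.prefix			= "EXAMPLE-",
	.init_dev_msi_info	= msi_lib_init_dev_msi_info,
};

 With such an opt-in in place, disable_irq()/enable_irq() stays within the
 parent interrupt chip and only startup/shutdown reach out to the PCI
 device's mask bits.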

* tag 'irq-drivers-2025-09-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (25 commits)
  irqchip/loongson-pch-lpc: Use legacy domain for PCH-LPC IRQ controller
  PCI/MSI: Remove the conditional parent [un]mask logic
  irqchip/msi-lib: Honor the MSI_FLAG_PCI_MSI_MASK_PARENT flag
  irqchip/aspeed-scu-ic: Add support for AST2700 SCU interrupt controllers
  dt-bindings: interrupt-controller: aspeed: Add AST2700 SCU IC compatibles
  dt-bindings: mfd: aspeed: Add AST2700 SCU compatibles
  irqchip/aspeed-scu-ic: Refactor driver to support variant-based initialization
  irqchip/gic-v5: Fix error handling in gicv5_its_irq_domain_alloc()
  irqchip/gic-v5: Fix loop in gicv5_its_create_itt_two_level() cleanup path
  irqchip/gic-v5: Delete a stray tab
  irqchip/sg2042-msi: Set irq type according to DT configuration
  riscv: sophgo: dts: sg2044: Change msi irq type to IRQ_TYPE_EDGE_RISING
  riscv: sophgo: dts: sg2042: Change msi irq type to IRQ_TYPE_EDGE_RISING
  irqchip/gic-v2m: Handle Multiple MSI base IRQ Alignment
  irqchip/renesas-rzg2l: Remove dev_err_probe() if error is -ENOMEM
  irqchip: Use int type to store negative error codes
  irqchip/gic-v5: Remove the redundant ITS cache invalidation
  PCI/MSI: Check MSI_FLAG_PCI_MSI_MASK_PARENT in cond_[startup|shutdown]_parent()
  irqchip/loongson-eiointc: Add multiple interrupt pin routing support
  irqchip/loongson-eiointc: Route interrupt parsed from bios table
  ...
Linus Torvalds 2025-09-30 16:00:29 -07:00
commit 03a53e09cd
20 changed files with 404 additions and 161 deletions


@@ -5,7 +5,7 @@
 $id: http://devicetree.org/schemas/interrupt-controller/aspeed,ast2500-scu-ic.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#

-title: Aspeed AST25XX and AST26XX SCU Interrupt Controller
+title: Aspeed AST25XX, AST26XX, AST27XX SCU Interrupt Controller

 maintainers:
   - Eddie James <eajames@linux.ibm.com>
@@ -16,6 +16,10 @@ properties:
       - aspeed,ast2500-scu-ic
       - aspeed,ast2600-scu-ic0
       - aspeed,ast2600-scu-ic1
+      - aspeed,ast2700-scu-ic0
+      - aspeed,ast2700-scu-ic1
+      - aspeed,ast2700-scu-ic2
+      - aspeed,ast2700-scu-ic3

   reg:
     maxItems: 1


@@ -75,6 +75,10 @@ patternProperties:
               - aspeed,ast2500-scu-ic
               - aspeed,ast2600-scu-ic0
               - aspeed,ast2600-scu-ic1
+              - aspeed,ast2700-scu-ic0
+              - aspeed,ast2700-scu-ic1
+              - aspeed,ast2700-scu-ic2
+              - aspeed,ast2700-scu-ic3

   '^silicon-id@[0-9a-f]+$':
     description: Unique hardware silicon identifiers within the SoC


@@ -190,7 +190,7 @@ msi: msi-controller@7030010304 {
 			reg-names = "clr", "doorbell";
 			msi-controller;
 			#msi-cells = <0>;
-			msi-ranges = <&intc 64 IRQ_TYPE_LEVEL_HIGH 32>;
+			msi-ranges = <&intc 64 IRQ_TYPE_EDGE_RISING 32>;
 		};

 		rpgate: clock-controller@7030010368 {


@@ -214,7 +214,7 @@ msi: msi-controller@6d50000000 {
 			reg-names = "clr", "doorbell";
 			#msi-cells = <0>;
 			msi-controller;
-			msi-ranges = <&intc 352 IRQ_TYPE_LEVEL_HIGH 512>;
+			msi-ranges = <&intc 352 IRQ_TYPE_EDGE_RISING 512>;
 			status = "disabled";
 		};


@@ -1,61 +1,78 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 /*
- * Aspeed AST24XX, AST25XX, and AST26XX SCU Interrupt Controller
+ * Aspeed AST24XX, AST25XX, AST26XX, and AST27XX SCU Interrupt Controller
  * Copyright 2019 IBM Corporation
  *
  * Eddie James <eajames@linux.ibm.com>
  */

 #include <linux/bitops.h>
+#include <linux/io.h>
 #include <linux/irq.h>
 #include <linux/irqchip.h>
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqdomain.h>
-#include <linux/mfd/syscon.h>
+#include <linux/of_address.h>
 #include <linux/of_irq.h>
-#include <linux/regmap.h>

-#define ASPEED_SCU_IC_REG		0x018
-#define ASPEED_SCU_IC_SHIFT		0
-#define ASPEED_SCU_IC_ENABLE		GENMASK(15, ASPEED_SCU_IC_SHIFT)
-#define ASPEED_SCU_IC_NUM_IRQS		7
 #define ASPEED_SCU_IC_STATUS		GENMASK(28, 16)
 #define ASPEED_SCU_IC_STATUS_SHIFT	16
+#define AST2700_SCU_IC_STATUS		GENMASK(15, 0)

-#define ASPEED_AST2600_SCU_IC0_REG	0x560
-#define ASPEED_AST2600_SCU_IC0_SHIFT	0
-#define ASPEED_AST2600_SCU_IC0_ENABLE	\
-	GENMASK(5, ASPEED_AST2600_SCU_IC0_SHIFT)
-#define ASPEED_AST2600_SCU_IC0_NUM_IRQS	6
+struct aspeed_scu_ic_variant {
+	const char	*compatible;
+	unsigned long	irq_enable;
+	unsigned long	irq_shift;
+	unsigned int	num_irqs;
+	unsigned long	ier;
+	unsigned long	isr;
+};

-#define ASPEED_AST2600_SCU_IC1_REG	0x570
-#define ASPEED_AST2600_SCU_IC1_SHIFT	4
-#define ASPEED_AST2600_SCU_IC1_ENABLE	\
-	GENMASK(5, ASPEED_AST2600_SCU_IC1_SHIFT)
-#define ASPEED_AST2600_SCU_IC1_NUM_IRQS	2
+#define SCU_VARIANT(_compat, _shift, _enable, _num, _ier, _isr) {	\
+	.compatible	= _compat,					\
+	.irq_shift	= _shift,					\
+	.irq_enable	= _enable,					\
+	.num_irqs	= _num,						\
+	.ier		= _ier,						\
+	.isr		= _isr,						\
+}
+
+static const struct aspeed_scu_ic_variant scu_ic_variants[] __initconst = {
+	SCU_VARIANT("aspeed,ast2400-scu-ic",  0, GENMASK(15, 0), 7, 0x00, 0x00),
+	SCU_VARIANT("aspeed,ast2500-scu-ic",  0, GENMASK(15, 0), 7, 0x00, 0x00),
+	SCU_VARIANT("aspeed,ast2600-scu-ic0", 0, GENMASK(5, 0),  6, 0x00, 0x00),
+	SCU_VARIANT("aspeed,ast2600-scu-ic1", 4, GENMASK(5, 4),  2, 0x00, 0x00),
+	SCU_VARIANT("aspeed,ast2700-scu-ic0", 0, GENMASK(3, 0),  4, 0x00, 0x04),
+	SCU_VARIANT("aspeed,ast2700-scu-ic1", 0, GENMASK(3, 0),  4, 0x00, 0x04),
+	SCU_VARIANT("aspeed,ast2700-scu-ic2", 0, GENMASK(3, 0),  4, 0x04, 0x00),
+	SCU_VARIANT("aspeed,ast2700-scu-ic3", 0, GENMASK(1, 0),  2, 0x04, 0x00),
+};

 struct aspeed_scu_ic {
 	unsigned long		irq_enable;
 	unsigned long		irq_shift;
 	unsigned int		num_irqs;
-	unsigned int		reg;
-	struct regmap		*scu;
+	void __iomem		*base;
 	struct irq_domain	*irq_domain;
+	unsigned long		ier;
+	unsigned long		isr;
 };

-static void aspeed_scu_ic_irq_handler(struct irq_desc *desc)
+static inline bool scu_has_split_isr(struct aspeed_scu_ic *scu)
+{
+	return scu->ier != scu->isr;
+}
+
+static void aspeed_scu_ic_irq_handler_combined(struct irq_desc *desc)
 {
-	unsigned int sts;
-	unsigned long bit;
-	unsigned long enabled;
-	unsigned long max;
-	unsigned long status;
 	struct aspeed_scu_ic *scu_ic = irq_desc_get_handler_data(desc);
 	struct irq_chip *chip = irq_desc_get_chip(desc);
-	unsigned int mask = scu_ic->irq_enable << ASPEED_SCU_IC_STATUS_SHIFT;
+	unsigned long bit, enabled, max, status;
+	unsigned int sts, mask;

 	chained_irq_enter(chip, desc);

+	mask = scu_ic->irq_enable << ASPEED_SCU_IC_STATUS_SHIFT;
+
 	/*
 	 * The SCU IC has just one register to control its operation and read
 	 * status. The interrupt enable bits occupy the lower 16 bits of the
@@ -66,7 +83,7 @@ static void aspeed_scu_ic_irq_handler(struct irq_desc *desc)
 	 * shifting the status down to get the mapping and then back up to
 	 * clear the bit.
 	 */
-	regmap_read(scu_ic->scu, scu_ic->reg, &sts);
+	sts = readl(scu_ic->base);
 	enabled = sts & scu_ic->irq_enable;
 	status = (sts >> ASPEED_SCU_IC_STATUS_SHIFT) & enabled;

@@ -74,43 +91,83 @@ static void aspeed_scu_ic_irq_handler(struct irq_desc *desc)
 	max = scu_ic->num_irqs + bit;

 	for_each_set_bit_from(bit, &status, max) {
-		generic_handle_domain_irq(scu_ic->irq_domain,
-					  bit - scu_ic->irq_shift);
-
-		regmap_write_bits(scu_ic->scu, scu_ic->reg, mask,
-				  BIT(bit + ASPEED_SCU_IC_STATUS_SHIFT));
+		generic_handle_domain_irq(scu_ic->irq_domain, bit - scu_ic->irq_shift);
+		writel((readl(scu_ic->base) & ~mask) | BIT(bit + ASPEED_SCU_IC_STATUS_SHIFT),
+		       scu_ic->base);
 	}

 	chained_irq_exit(chip, desc);
 }

-static void aspeed_scu_ic_irq_mask(struct irq_data *data)
+static void aspeed_scu_ic_irq_handler_split(struct irq_desc *desc)
+{
+	struct aspeed_scu_ic *scu_ic = irq_desc_get_handler_data(desc);
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	unsigned long bit, enabled, max, status;
+	unsigned int sts, mask;
+
+	chained_irq_enter(chip, desc);
+
+	mask = scu_ic->irq_enable;
+	sts = readl(scu_ic->base + scu_ic->isr);
+	enabled = sts & scu_ic->irq_enable;
+	sts = readl(scu_ic->base + scu_ic->isr);
+	status = sts & enabled;
+
+	bit = scu_ic->irq_shift;
+	max = scu_ic->num_irqs + bit;
+
+	for_each_set_bit_from(bit, &status, max) {
+		generic_handle_domain_irq(scu_ic->irq_domain, bit - scu_ic->irq_shift);
+		/* Clear interrupt */
+		writel(BIT(bit), scu_ic->base + scu_ic->isr);
+	}
+
+	chained_irq_exit(chip, desc);
+}
+
+static void aspeed_scu_ic_irq_mask_combined(struct irq_data *data)
 {
 	struct aspeed_scu_ic *scu_ic = irq_data_get_irq_chip_data(data);
-	unsigned int mask = BIT(data->hwirq + scu_ic->irq_shift) |
-		(scu_ic->irq_enable << ASPEED_SCU_IC_STATUS_SHIFT);
+	unsigned int bit = BIT(data->hwirq + scu_ic->irq_shift);
+	unsigned int mask = bit | (scu_ic->irq_enable << ASPEED_SCU_IC_STATUS_SHIFT);

 	/*
 	 * Status bits are cleared by writing 1. In order to prevent the mask
 	 * operation from clearing the status bits, they should be under the
 	 * mask and written with 0.
 	 */
-	regmap_update_bits(scu_ic->scu, scu_ic->reg, mask, 0);
+	writel(readl(scu_ic->base) & ~mask, scu_ic->base);
 }

-static void aspeed_scu_ic_irq_unmask(struct irq_data *data)
+static void aspeed_scu_ic_irq_unmask_combined(struct irq_data *data)
 {
 	struct aspeed_scu_ic *scu_ic = irq_data_get_irq_chip_data(data);
 	unsigned int bit = BIT(data->hwirq + scu_ic->irq_shift);
-	unsigned int mask = bit |
-		(scu_ic->irq_enable << ASPEED_SCU_IC_STATUS_SHIFT);
+	unsigned int mask = bit | (scu_ic->irq_enable << ASPEED_SCU_IC_STATUS_SHIFT);

 	/*
 	 * Status bits are cleared by writing 1. In order to prevent the unmask
 	 * operation from clearing the status bits, they should be under the
 	 * mask and written with 0.
 	 */
-	regmap_update_bits(scu_ic->scu, scu_ic->reg, mask, bit);
+	writel((readl(scu_ic->base) & ~mask) | bit, scu_ic->base);
+}
+
+static void aspeed_scu_ic_irq_mask_split(struct irq_data *data)
+{
+	struct aspeed_scu_ic *scu_ic = irq_data_get_irq_chip_data(data);
+	unsigned int mask = BIT(data->hwirq + scu_ic->irq_shift);
+
+	writel(readl(scu_ic->base) & ~mask, scu_ic->base + scu_ic->ier);
+}
+
+static void aspeed_scu_ic_irq_unmask_split(struct irq_data *data)
+{
+	struct aspeed_scu_ic *scu_ic = irq_data_get_irq_chip_data(data);
+	unsigned int bit = BIT(data->hwirq + scu_ic->irq_shift);
+
+	writel(readl(scu_ic->base) | bit, scu_ic->base + scu_ic->ier);
 }

 static int aspeed_scu_ic_irq_set_affinity(struct irq_data *data,
@@ -120,17 +177,29 @@ static int aspeed_scu_ic_irq_set_affinity(struct irq_data *data,
 	return -EINVAL;
 }

-static struct irq_chip aspeed_scu_ic_chip = {
+static struct irq_chip aspeed_scu_ic_chip_combined = {
 	.name			= "aspeed-scu-ic",
-	.irq_mask		= aspeed_scu_ic_irq_mask,
-	.irq_unmask		= aspeed_scu_ic_irq_unmask,
+	.irq_mask		= aspeed_scu_ic_irq_mask_combined,
+	.irq_unmask		= aspeed_scu_ic_irq_unmask_combined,
+	.irq_set_affinity	= aspeed_scu_ic_irq_set_affinity,
+};
+
+static struct irq_chip aspeed_scu_ic_chip_split = {
+	.name			= "ast2700-scu-ic",
+	.irq_mask		= aspeed_scu_ic_irq_mask_split,
+	.irq_unmask		= aspeed_scu_ic_irq_unmask_split,
 	.irq_set_affinity	= aspeed_scu_ic_irq_set_affinity,
 };

 static int aspeed_scu_ic_map(struct irq_domain *domain, unsigned int irq,
 			     irq_hw_number_t hwirq)
 {
-	irq_set_chip_and_handler(irq, &aspeed_scu_ic_chip, handle_level_irq);
+	struct aspeed_scu_ic *scu_ic = domain->host_data;
+
+	if (scu_has_split_isr(scu_ic))
+		irq_set_chip_and_handler(irq, &aspeed_scu_ic_chip_split, handle_level_irq);
+	else
+		irq_set_chip_and_handler(irq, &aspeed_scu_ic_chip_combined, handle_level_irq);
 	irq_set_chip_data(irq, domain->host_data);

 	return 0;
@@ -143,21 +212,21 @@ static const struct irq_domain_ops aspeed_scu_ic_domain_ops = {
 static int aspeed_scu_ic_of_init_common(struct aspeed_scu_ic *scu_ic,
 					struct device_node *node)
 {
-	int irq;
-	int rc = 0;
+	int irq, rc = 0;

-	if (!node->parent) {
-		rc = -ENODEV;
+	scu_ic->base = of_iomap(node, 0);
+	if (IS_ERR(scu_ic->base)) {
+		rc = PTR_ERR(scu_ic->base);
 		goto err;
 	}

-	scu_ic->scu = syscon_node_to_regmap(node->parent);
-	if (IS_ERR(scu_ic->scu)) {
-		rc = PTR_ERR(scu_ic->scu);
-		goto err;
+	if (scu_has_split_isr(scu_ic)) {
+		writel(AST2700_SCU_IC_STATUS, scu_ic->base + scu_ic->isr);
+		writel(0, scu_ic->base + scu_ic->ier);
+	} else {
+		writel(ASPEED_SCU_IC_STATUS, scu_ic->base);
+		writel(0, scu_ic->base);
 	}

-	regmap_write_bits(scu_ic->scu, scu_ic->reg, ASPEED_SCU_IC_STATUS, ASPEED_SCU_IC_STATUS);
-	regmap_write_bits(scu_ic->scu, scu_ic->reg, ASPEED_SCU_IC_ENABLE, 0);
-
 	irq = irq_of_parse_and_map(node, 0);
 	if (!irq) {
@@ -166,75 +235,60 @@ static int aspeed_scu_ic_of_init_common(struct aspeed_scu_ic *scu_ic,
 	}

 	scu_ic->irq_domain = irq_domain_create_linear(of_fwnode_handle(node), scu_ic->num_irqs,
-						      &aspeed_scu_ic_domain_ops,
-						      scu_ic);
+						      &aspeed_scu_ic_domain_ops, scu_ic);
 	if (!scu_ic->irq_domain) {
 		rc = -ENOMEM;
 		goto err;
 	}

-	irq_set_chained_handler_and_data(irq, aspeed_scu_ic_irq_handler,
+	irq_set_chained_handler_and_data(irq, scu_has_split_isr(scu_ic) ?
+					 aspeed_scu_ic_irq_handler_split :
+					 aspeed_scu_ic_irq_handler_combined,
 					 scu_ic);

 	return 0;

 err:
 	kfree(scu_ic);

 	return rc;
 }

-static int __init aspeed_scu_ic_of_init(struct device_node *node,
-					struct device_node *parent)
+static const struct aspeed_scu_ic_variant *aspeed_scu_ic_find_variant(struct device_node *np)
 {
-	struct aspeed_scu_ic *scu_ic = kzalloc(sizeof(*scu_ic), GFP_KERNEL);
-
-	if (!scu_ic)
-		return -ENOMEM;
-
-	scu_ic->irq_enable = ASPEED_SCU_IC_ENABLE;
-	scu_ic->irq_shift = ASPEED_SCU_IC_SHIFT;
-	scu_ic->num_irqs = ASPEED_SCU_IC_NUM_IRQS;
-	scu_ic->reg = ASPEED_SCU_IC_REG;
-
-	return aspeed_scu_ic_of_init_common(scu_ic, node);
+	for (int i = 0; i < ARRAY_SIZE(scu_ic_variants); i++) {
+		if (of_device_is_compatible(np, scu_ic_variants[i].compatible))
+			return &scu_ic_variants[i];
+	}
+	return NULL;
 }

-static int __init aspeed_ast2600_scu_ic0_of_init(struct device_node *node,
-						 struct device_node *parent)
+static int __init aspeed_scu_ic_of_init(struct device_node *node, struct device_node *parent)
 {
-	struct aspeed_scu_ic *scu_ic = kzalloc(sizeof(*scu_ic), GFP_KERNEL);
+	const struct aspeed_scu_ic_variant *variant;
+	struct aspeed_scu_ic *scu_ic;

+	variant = aspeed_scu_ic_find_variant(node);
+	if (!variant)
+		return -ENODEV;
+
+	scu_ic = kzalloc(sizeof(*scu_ic), GFP_KERNEL);
 	if (!scu_ic)
 		return -ENOMEM;

-	scu_ic->irq_enable = ASPEED_AST2600_SCU_IC0_ENABLE;
-	scu_ic->irq_shift = ASPEED_AST2600_SCU_IC0_SHIFT;
-	scu_ic->num_irqs = ASPEED_AST2600_SCU_IC0_NUM_IRQS;
-	scu_ic->reg = ASPEED_AST2600_SCU_IC0_REG;
-
-	return aspeed_scu_ic_of_init_common(scu_ic, node);
-}
-
-static int __init aspeed_ast2600_scu_ic1_of_init(struct device_node *node,
-						 struct device_node *parent)
-{
-	struct aspeed_scu_ic *scu_ic = kzalloc(sizeof(*scu_ic), GFP_KERNEL);
-
-	if (!scu_ic)
-		return -ENOMEM;
-
-	scu_ic->irq_enable = ASPEED_AST2600_SCU_IC1_ENABLE;
-	scu_ic->irq_shift = ASPEED_AST2600_SCU_IC1_SHIFT;
-	scu_ic->num_irqs = ASPEED_AST2600_SCU_IC1_NUM_IRQS;
-	scu_ic->reg = ASPEED_AST2600_SCU_IC1_REG;
+	scu_ic->irq_enable = variant->irq_enable;
+	scu_ic->irq_shift = variant->irq_shift;
+	scu_ic->num_irqs = variant->num_irqs;
+	scu_ic->ier = variant->ier;
+	scu_ic->isr = variant->isr;

 	return aspeed_scu_ic_of_init_common(scu_ic, node);
 }

 IRQCHIP_DECLARE(ast2400_scu_ic, "aspeed,ast2400-scu-ic", aspeed_scu_ic_of_init);
 IRQCHIP_DECLARE(ast2500_scu_ic, "aspeed,ast2500-scu-ic", aspeed_scu_ic_of_init);
-IRQCHIP_DECLARE(ast2600_scu_ic0, "aspeed,ast2600-scu-ic0",
-		aspeed_ast2600_scu_ic0_of_init);
-IRQCHIP_DECLARE(ast2600_scu_ic1, "aspeed,ast2600-scu-ic1",
-		aspeed_ast2600_scu_ic1_of_init);
+IRQCHIP_DECLARE(ast2600_scu_ic0, "aspeed,ast2600-scu-ic0", aspeed_scu_ic_of_init);
+IRQCHIP_DECLARE(ast2600_scu_ic1, "aspeed,ast2600-scu-ic1", aspeed_scu_ic_of_init);
+IRQCHIP_DECLARE(ast2700_scu_ic0, "aspeed,ast2700-scu-ic0", aspeed_scu_ic_of_init);
+IRQCHIP_DECLARE(ast2700_scu_ic1, "aspeed,ast2700-scu-ic1", aspeed_scu_ic_of_init);
+IRQCHIP_DECLARE(ast2700_scu_ic2, "aspeed,ast2700-scu-ic2", aspeed_scu_ic_of_init);
+IRQCHIP_DECLARE(ast2700_scu_ic3, "aspeed,ast2700-scu-ic3", aspeed_scu_ic_of_init);


@@ -153,14 +153,19 @@ static int gicv2m_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
 {
 	msi_alloc_info_t *info = args;
 	struct v2m_data *v2m = NULL, *tmp;
-	int hwirq, offset, i, err = 0;
+	int hwirq, i, err = 0;
+	unsigned long offset;
+	unsigned long align_mask = nr_irqs - 1;

 	spin_lock(&v2m_lock);
 	list_for_each_entry(tmp, &v2m_nodes, entry) {
-		offset = bitmap_find_free_region(tmp->bm, tmp->nr_spis,
-						 get_count_order(nr_irqs));
-		if (offset >= 0) {
+		unsigned long align_off = tmp->spi_start - (tmp->spi_start & ~align_mask);
+
+		offset = bitmap_find_next_zero_area_off(tmp->bm, tmp->nr_spis, 0,
+							nr_irqs, align_mask, align_off);
+		if (offset < tmp->nr_spis) {
 			v2m = tmp;
+			bitmap_set(v2m->bm, offset, nr_irqs);
 			break;
 		}
 	}


@@ -1766,8 +1766,9 @@ static int gic_irq_domain_select(struct irq_domain *d,
 				 struct irq_fwspec *fwspec,
 				 enum irq_domain_bus_token bus_token)
 {
-	unsigned int type, ret, ppi_idx;
+	unsigned int type, ppi_idx;
 	irq_hw_number_t hwirq;
+	int ret;

 	/* Not for us */
 	if (fwspec->fwnode != d->fwnode)


@@ -191,9 +191,9 @@ static int gicv5_its_create_itt_two_level(struct gicv5_its_chip_data *its,
 					   unsigned int num_events)
 {
 	unsigned int l1_bits, l2_bits, span, events_per_l2_table;
-	unsigned int i, complete_tables, final_span, num_ents;
+	unsigned int complete_tables, final_span, num_ents;
 	__le64 *itt_l1, *itt_l2, **l2ptrs;
-	int ret;
+	int i, ret;
 	u64 val;

 	ret = gicv5_its_l2sz_to_l2_bits(itt_l2sz);
@@ -768,8 +768,6 @@ static struct gicv5_its_dev *gicv5_its_alloc_device(struct gicv5_its_chip_data *
 		goto out_dev_free;
 	}

-	gicv5_its_device_cache_inv(its, its_dev);
-
 	its_dev->its_node = its;
 	its_dev->event_map = (unsigned long *)bitmap_zalloc(its_dev->num_events, GFP_KERNEL);
@@ -949,15 +947,18 @@ static int gicv5_its_irq_domain_alloc(struct irq_domain *domain, unsigned int vi
 	device_id = its_dev->device_id;

 	for (i = 0; i < nr_irqs; i++) {
-		lpi = gicv5_alloc_lpi();
+		ret = gicv5_alloc_lpi();
 		if (ret < 0) {
 			pr_debug("Failed to find free LPI!\n");
-			goto out_eventid;
+			goto out_free_irqs;
 		}
+		lpi = ret;

 		ret = irq_domain_alloc_irqs_parent(domain, virq + i, 1, &lpi);
-		if (ret)
-			goto out_free_lpi;
+		if (ret) {
+			gicv5_free_lpi(lpi);
+			goto out_free_irqs;
+		}

 		/*
 		 * Store eventid and deviceid into the hwirq for later use.
@@ -977,8 +978,13 @@ static int gicv5_its_irq_domain_alloc(struct irq_domain *domain, unsigned int vi

 	return 0;

-out_free_lpi:
-	gicv5_free_lpi(lpi);
+out_free_irqs:
+	while (--i >= 0) {
+		irqd = irq_domain_get_irq_data(domain, virq + i);
+		gicv5_free_lpi(irqd->parent_data->hwirq);
+		irq_domain_reset_irq_data(irqd);
+		irq_domain_free_irqs_parent(domain, virq + i, 1);
+	}
 out_eventid:
 	gicv5_its_free_eventid(its_dev, event_id_base, nr_irqs);
 	return ret;


@@ -46,6 +46,7 @@
 #define EIOINTC_ALL_ENABLE_VEC_MASK(vector)	(EIOINTC_ALL_ENABLE & ~BIT(vector & 0x1f))
 #define EIOINTC_REG_ENABLE_VEC(vector)		(EIOINTC_REG_ENABLE + ((vector >> 5) << 2))
 #define EIOINTC_USE_CPU_ENCODE			BIT(0)
+#define EIOINTC_ROUTE_MULT_IP			BIT(1)

 #define MAX_EIO_NODES	(NR_CPUS / CORES_PER_EIO_NODE)
@@ -59,6 +60,14 @@
 #define EIOINTC_REG_ROUTE_VEC_MASK(vector)	(0xff << EIOINTC_REG_ROUTE_VEC_SHIFT(vector))

 static int nr_pics;
+
+struct eiointc_priv;
+struct eiointc_ip_route {
+	struct eiointc_priv	*priv;
+	/* Offset Routed destination IP */
+	int			start;
+	int			end;
+};

 struct eiointc_priv {
 	u32			node;
@@ -68,6 +77,8 @@ struct eiointc_priv {
 	struct fwnode_handle	*domain_handle;
 	struct irq_domain	*eiointc_domain;
 	int			flags;
+	irq_hw_number_t		parent_hwirq;
+	struct eiointc_ip_route	route_info[VEC_REG_COUNT];
 };

 static struct eiointc_priv *eiointc_priv[MAX_IO_PICS];
@@ -188,6 +199,7 @@ static int eiointc_router_init(unsigned int cpu)
 {
 	int i, bit, cores, index, node;
 	unsigned int data;
+	int hwirq, mask;

 	node = cpu_to_eio_node(cpu);
 	index = eiointc_index(node);
@@ -197,6 +209,13 @@
 		return -EINVAL;
 	}

+	/* Enable cpu interrupt pin from eiointc */
+	hwirq = eiointc_priv[index]->parent_hwirq;
+	mask = BIT(hwirq);
+	if (eiointc_priv[index]->flags & EIOINTC_ROUTE_MULT_IP)
+		mask |= BIT(hwirq + 1) | BIT(hwirq + 2) | BIT(hwirq + 3);
+	set_csr_ecfg(mask);
+
 	if (!(eiointc_priv[index]->flags & EIOINTC_USE_CPU_ENCODE))
 		cores = CORES_PER_EIO_NODE;
 	else
@@ -211,8 +230,31 @@
 	}

 	for (i = 0; i < eiointc_priv[0]->vec_count / 32 / 4; i++) {
-		bit = BIT(1 + index); /* Route to IP[1 + index] */
-		data = bit | (bit << 8) | (bit << 16) | (bit << 24);
+		/*
+		 * Route to interrupt pin, relative offset used here
+		 * Offset 0 means routing to IP0 and so on
+		 *
+		 * If flags is set with EIOINTC_ROUTE_MULT_IP,
+		 * every 64 vector routes to different consecutive
+		 * IPs, otherwise all vector routes to the same IP
+		 */
+		if (eiointc_priv[index]->flags & EIOINTC_ROUTE_MULT_IP) {
+			/* The first 64 vectors route to hwirq */
+			bit = BIT(hwirq++ - INT_HWI0);
+			data = bit | (bit << 8);

+			/* The second 64 vectors route to hwirq + 1 */
+			bit = BIT(hwirq++ - INT_HWI0);
+			data |= (bit << 16) | (bit << 24);
+
+			/*
+			 * Route to hwirq + 2/hwirq + 3 separately
+			 * in next loop
+			 */
+		} else {
+			bit = BIT(hwirq - INT_HWI0);
+			data = bit | (bit << 8) | (bit << 16) | (bit << 24);
+		}
+
 		iocsr_write32(data, EIOINTC_REG_IPMAP + i * 4);
 	}
@@ -241,15 +283,22 @@
 static void eiointc_irq_dispatch(struct irq_desc *desc)
 {
-	int i;
-	u64 pending;
-	bool handled = false;
+	struct eiointc_ip_route *info = irq_desc_get_handler_data(desc);
 	struct irq_chip *chip = irq_desc_get_chip(desc);
-	struct eiointc_priv *priv = irq_desc_get_handler_data(desc);
+	bool handled = false;
+	u64 pending;
+	int i;

 	chained_irq_enter(chip, desc);

-	for (i = 0; i < eiointc_priv[0]->vec_count / VEC_COUNT_PER_REG; i++) {
+	/*
+	 * If EIOINTC_ROUTE_MULT_IP is set, every 64 interrupt vectors in
+	 * eiointc interrupt controller routes to different cpu interrupt pins
+	 *
+	 * Every cpu interrupt pin has its own irq handler, it is ok to
+	 * read ISR for these 64 interrupt vectors rather than all vectors
+	 */
+	for (i = info->start; i < info->end; i++) {
 		pending = iocsr_read64(EIOINTC_REG_ISR + (i << 3));

 		/* Skip handling if pending bitmap is zero */
@@ -262,7 +311,7 @@ static void eiointc_irq_dispatch(struct irq_desc *desc)
 			int bit = __ffs(pending);
 			int irq = bit + VEC_COUNT_PER_REG * i;

-			generic_handle_domain_irq(priv->eiointc_domain, irq);
+			generic_handle_domain_irq(info->priv->eiointc_domain, irq);
 			pending &= ~BIT(bit);
 			handled = true;
 		}
@@ -462,8 +511,33 @@ static int __init eiointc_init(struct eiointc_priv *priv, int parent_irq,
 	}

 	eiointc_priv[nr_pics++] = priv;
+
+	/*
+	 * Only the first eiointc device on VM supports routing to
+	 * different CPU interrupt pins. The later eiointc devices use
+	 * generic method if there are multiple eiointc devices in future
+	 */
+	if (cpu_has_hypervisor && (nr_pics == 1)) {
+		priv->flags |= EIOINTC_ROUTE_MULT_IP;
+		priv->parent_hwirq = INT_HWI0;
+	}
+
+	if (priv->flags & EIOINTC_ROUTE_MULT_IP) {
+		for (i = 0; i < priv->vec_count / VEC_COUNT_PER_REG; i++) {
+			priv->route_info[i].start = priv->parent_hwirq - INT_HWI0 + i;
+			priv->route_info[i].end = priv->route_info[i].start + 1;
+			priv->route_info[i].priv = priv;
+			parent_irq = get_percpu_irq(priv->parent_hwirq + i);
+			irq_set_chained_handler_and_data(parent_irq, eiointc_irq_dispatch,
+							 &priv->route_info[i]);
+		}
+	} else {
+		priv->route_info[0].start = 0;
+		priv->route_info[0].end = priv->vec_count / VEC_COUNT_PER_REG;
+		priv->route_info[0].priv = priv;
+		irq_set_chained_handler_and_data(parent_irq, eiointc_irq_dispatch,
+						 &priv->route_info[0]);
+	}
+
 	eiointc_router_init(0);
-	irq_set_chained_handler_and_data(parent_irq, eiointc_irq_dispatch, priv);

 	if (nr_pics == 1) {
 		register_syscore_ops(&eiointc_syscore_ops);
@@ -495,7 +569,7 @@ int __init eiointc_acpi_init(struct irq_domain *parent,
 	priv->vec_count = VEC_COUNT;

 	priv->node = acpi_eiointc->node;
+	priv->parent_hwirq = acpi_eiointc->cascade;

 	parent_irq = irq_create_mapping(parent, acpi_eiointc->cascade);
 	ret = eiointc_init(priv, parent_irq, acpi_eiointc->node_map);
@@ -527,8 +601,9 @@
 static int __init eiointc_of_init(struct device_node *of_node,
 				  struct device_node *parent)
 {
-	int parent_irq, ret;
 	struct eiointc_priv *priv;
+	struct irq_data *irq_data;
+	int parent_irq, ret;

 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
@@ -544,6 +619,12 @@ static int __init eiointc_of_init(struct device_node *of_node,
 	if (ret < 0)
 		goto out_free_priv;

+	irq_data = irq_get_irq_data(parent_irq);
+	if (!irq_data) {
+		ret = -ENODEV;
+		goto out_free_priv;
+	}
+
 	/*
 	 * In particular, the number of devices supported by the LS2K0500
 	 * extended I/O interrupt vector is 128.
@@ -552,7 +633,7 @@
 		priv->vec_count = 128;
 	else
 		priv->vec_count = VEC_COUNT;

+	priv->parent_hwirq = irqd_to_hwirq(irq_data);
 	priv->node = 0;
 	priv->domain_handle = of_fwnode_handle(of_node);


@@ -200,7 +200,12 @@ int __init pch_lpc_acpi_init(struct irq_domain *parent,
 		goto iounmap_base;
 	}

-	priv->lpc_domain = irq_domain_create_linear(irq_handle, LPC_COUNT,
-						    &pch_lpc_domain_ops, priv);
+	/*
+	 * The LPC interrupt controller is a legacy i8259-compatible device,
+	 * which requires a static 1:1 mapping for IRQs 0-15.
+	 * Use irq_domain_create_legacy to establish this static mapping early.
+	 */
+	priv->lpc_domain = irq_domain_create_legacy(irq_handle, LPC_COUNT, 0, 0,
+						    &pch_lpc_domain_ops, priv);
 	if (!priv->lpc_domain) {
 		pr_err("Failed to create IRQ domain\n");


@@ -112,6 +112,20 @@ bool msi_lib_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
 	 */
 	if (!chip->irq_set_affinity && !(info->flags & MSI_FLAG_NO_AFFINITY))
 		chip->irq_set_affinity = msi_domain_set_affinity;
+
+	/*
+	 * If the parent domain insists on being in charge of masking, obey
+	 * blindly. The interrupt is un-masked at the PCI level on startup
+	 * and masked on shutdown to prevent rogue interrupts after the
+	 * driver freed the interrupt. Not masking it at the PCI level
+	 * speeds up operation for disable/enable_irq() as it avoids
+	 * getting all the way out to the PCI device.
+	 */
+	if (info->flags & MSI_FLAG_PCI_MSI_MASK_PARENT) {
+		chip->irq_mask = irq_chip_mask_parent;
+		chip->irq_unmask = irq_chip_unmask_parent;
+	}
+
 	return true;
 }
 EXPORT_SYMBOL_GPL(msi_lib_init_dev_msi_info);


@@ -73,8 +73,9 @@ static int __init nvic_of_init(struct device_node *node,
 			       struct device_node *parent)
 {
 	unsigned int clr = IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_NOAUTOEN;
-	unsigned int irqs, i, ret, numbanks;
+	unsigned int irqs, i, numbanks;
 	void __iomem *nvic_base;
+	int ret;

 	numbanks = (readl_relaxed(V7M_SCS_ICTR) &
 		    V7M_SCS_ICTR_INTLINESNUM_MASK) + 1;


@@ -142,11 +142,12 @@ static const struct irq_domain_ops rza1_irqc_domain_ops = {
 static int rza1_irqc_parse_map(struct rza1_irqc_priv *priv,
 			       struct device_node *gic_node)
 {
-	unsigned int imaplen, i, j, ret;
 	struct device *dev = priv->dev;
+	unsigned int imaplen, i, j;
 	struct device_node *ipar;
 	const __be32 *imap;
 	u32 intsize;
+	int ret;

 	imap = of_get_property(dev->of_node, "interrupt-map", &imaplen);
 	if (!imap)


@@ -578,7 +578,7 @@ static int rzg2l_irqc_common_init(struct device_node *node, struct device_node *
 					       &rzg2l_irqc_domain_ops, rzg2l_irqc_data);
 	if (!irq_domain) {
 		pm_runtime_put(dev);
-		return dev_err_probe(dev, -ENOMEM, "failed to add irq domain\n");
+		return -ENOMEM;
 	}

 	register_syscore_ops(&rzg2l_irqc_syscore_ops);


@@ -30,6 +30,7 @@ struct sg204x_msi_chip_info {
  * @doorbell_addr:	see TRM, 10.1.32, GP_INTR0_SET
  * @irq_first:	First vectors number that MSIs starts
  * @num_irqs:	Number of vectors for MSIs
+ * @irq_type:	IRQ type for MSIs
  * @msi_map:	mapping for allocated MSI vectors.
  * @msi_map_lock:	Lock for msi_map
  * @chip_info:	chip specific infomations
@@ -41,6 +42,7 @@ struct sg204x_msi_chipdata {
 	u32			irq_first;
 	u32			num_irqs;
+	unsigned int		irq_type;

 	unsigned long		*msi_map;
 	struct mutex		msi_map_lock;
@@ -85,6 +87,8 @@ static void sg2042_msi_irq_compose_msi_msg(struct irq_data *d, struct msi_msg *m
 static const struct irq_chip sg2042_msi_middle_irq_chip = {
 	.name			= "SG2042 MSI",
+	.irq_startup		= irq_chip_startup_parent,
+	.irq_shutdown		= irq_chip_shutdown_parent,
 	.irq_ack		= sg2042_msi_irq_ack,
 	.irq_mask		= irq_chip_mask_parent,
 	.irq_unmask		= irq_chip_unmask_parent,
@@ -114,6 +118,8 @@ static void sg2044_msi_irq_compose_msi_msg(struct irq_data *d, struct msi_msg *m
 static struct irq_chip sg2044_msi_middle_irq_chip = {
 	.name			= "SG2044 MSI",
+	.irq_startup		= irq_chip_startup_parent,
+	.irq_shutdown		= irq_chip_shutdown_parent,
 	.irq_ack		= sg2044_msi_irq_ack,
 	.irq_mask		= irq_chip_mask_parent,
 	.irq_unmask		= irq_chip_unmask_parent,
@@ -133,14 +139,14 @@ static int sg204x_msi_parent_domain_alloc(struct irq_domain *domain, unsigned in
 	fwspec.fwnode = domain->parent->fwnode;
 	fwspec.param_count = 2;
 	fwspec.param[0] = data->irq_first + hwirq;
-	fwspec.param[1] = IRQ_TYPE_EDGE_RISING;
+	fwspec.param[1] = data->irq_type;

 	ret = irq_domain_alloc_irqs_parent(domain, virq, 1, &fwspec);
 	if (ret)
 		return ret;

 	d = irq_domain_get_irq_data(domain->parent, virq);
-	return d->chip->irq_set_type(d, IRQ_TYPE_EDGE_RISING);
+	return d->chip->irq_set_type(d, data->irq_type);
 }
@@ -186,7 +192,9 @@ static const struct irq_domain_ops sg204x_msi_middle_domain_ops = {
 };

 #define SG2042_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
-				   MSI_FLAG_USE_DEF_CHIP_OPS)
+				   MSI_FLAG_USE_DEF_CHIP_OPS | \
+				   MSI_FLAG_PCI_MSI_MASK_PARENT | \
+				   MSI_FLAG_PCI_MSI_STARTUP_PARENT)

 #define SG2042_MSI_FLAGS_SUPPORTED MSI_GENERIC_FLAGS_MASK
@@ -201,9 +209,12 @@ static const struct msi_parent_ops sg2042_msi_parent_ops = {
 };

 #define SG2044_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
-				   MSI_FLAG_USE_DEF_CHIP_OPS)
+				   MSI_FLAG_USE_DEF_CHIP_OPS | \
+				   MSI_FLAG_PCI_MSI_MASK_PARENT | \
+				   MSI_FLAG_PCI_MSI_STARTUP_PARENT)

 #define SG2044_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
+				    MSI_FLAG_MULTI_PCI_MSI | \
 				    MSI_FLAG_PCI_MSIX)

 static const struct msi_parent_ops sg2044_msi_parent_ops = {
@@ -289,6 +300,7 @@ static int sg2042_msi_probe(struct platform_device *pdev)
 	}

 	data->irq_first = (u32)args.args[0];
+	data->irq_type = (unsigned int)args.args[1];
 	data->num_irqs = (u32)args.args[args.nargs - 1];

 	mutex_init(&data->msi_map_lock);


@@ -179,12 +179,14 @@ static int plic_set_affinity(struct irq_data *d,
 	if (cpu >= nr_cpu_ids)
 		return -EINVAL;

-	plic_irq_disable(d);
+	/* Invalidate the original routing entry */
+	plic_irq_toggle(irq_data_get_effective_affinity_mask(d), d, 0);

 	irq_data_update_effective_affinity(d, cpumask_of(cpu));

+	/* Setting the new routing entry if irq is enabled */
 	if (!irqd_irq_disabled(d))
-		plic_irq_enable(d);
+		plic_irq_toggle(irq_data_get_effective_affinity_mask(d), d, 1);

 	return IRQ_SET_MASK_OK_DONE;
 }
@@ -257,7 +259,7 @@ static int plic_irq_suspend(void)
 			readl(priv->regs + PRIORITY_BASE + i * PRIORITY_PER_ID));
 	}

-	for_each_cpu(cpu, cpu_present_mask) {
+	for_each_present_cpu(cpu) {
 		struct plic_handler *handler = per_cpu_ptr(&plic_handlers, cpu);

 		if (!handler->present)
@@ -289,7 +291,7 @@ static void plic_irq_resume(void)
 			    priv->regs + PRIORITY_BASE + i * PRIORITY_PER_ID);
 	}

-	for_each_cpu(cpu, cpu_present_mask) {
+	for_each_present_cpu(cpu) {
 		struct plic_handler *handler = per_cpu_ptr(&plic_handlers, cpu);

 		if (!handler->present)


@@ -148,20 +148,43 @@ static void pci_device_domain_set_desc(msi_alloc_info_t *arg, struct msi_desc *d
 	arg->hwirq = desc->msi_index;
 }

-static __always_inline void cond_mask_parent(struct irq_data *data)
+static void cond_shutdown_parent(struct irq_data *data)
 {
 	struct msi_domain_info *info = data->domain->host_data;

-	if (unlikely(info->flags & MSI_FLAG_PCI_MSI_MASK_PARENT))
+	if (unlikely(info->flags & MSI_FLAG_PCI_MSI_STARTUP_PARENT))
+		irq_chip_shutdown_parent(data);
+	else if (unlikely(info->flags & MSI_FLAG_PCI_MSI_MASK_PARENT))
 		irq_chip_mask_parent(data);
 }

-static __always_inline void cond_unmask_parent(struct irq_data *data)
+static unsigned int cond_startup_parent(struct irq_data *data)
 {
 	struct msi_domain_info *info = data->domain->host_data;

-	if (unlikely(info->flags & MSI_FLAG_PCI_MSI_MASK_PARENT))
+	if (unlikely(info->flags & MSI_FLAG_PCI_MSI_STARTUP_PARENT))
+		return irq_chip_startup_parent(data);
+	else if (unlikely(info->flags & MSI_FLAG_PCI_MSI_MASK_PARENT))
 		irq_chip_unmask_parent(data);
+
+	return 0;
+}
+
+static void pci_irq_shutdown_msi(struct irq_data *data)
+{
+	struct msi_desc *desc = irq_data_get_msi_desc(data);
+
+	pci_msi_mask(desc, BIT(data->irq - desc->irq));
+	cond_shutdown_parent(data);
+}
+
+static unsigned int pci_irq_startup_msi(struct irq_data *data)
+{
+	struct msi_desc *desc = irq_data_get_msi_desc(data);
+	unsigned int ret = cond_startup_parent(data);
+
+	pci_msi_unmask(desc, BIT(data->irq - desc->irq));
+	return ret;
 }

 static void pci_irq_mask_msi(struct irq_data *data)
@@ -169,14 +192,12 @@ static void pci_irq_mask_msi(struct irq_data *data)
 	struct msi_desc *desc = irq_data_get_msi_desc(data);

 	pci_msi_mask(desc, BIT(data->irq - desc->irq));
-	cond_mask_parent(data);
 }

 static void pci_irq_unmask_msi(struct irq_data *data)
 {
 	struct msi_desc *desc = irq_data_get_msi_desc(data);

-	cond_unmask_parent(data);
 	pci_msi_unmask(desc, BIT(data->irq - desc->irq));
 }

@@ -194,6 +215,8 @@ static void pci_irq_unmask_msi(struct irq_data *data)
 static const struct msi_domain_template pci_msi_template = {
 	.chip = {
 		.name			= "PCI-MSI",
+		.irq_startup		= pci_irq_startup_msi,
+		.irq_shutdown		= pci_irq_shutdown_msi,
 		.irq_mask		= pci_irq_mask_msi,
 		.irq_unmask		= pci_irq_unmask_msi,
 		.irq_write_msi_msg	= pci_msi_domain_write_msg,
@@ -210,15 +233,27 @@ static const struct msi_domain_template pci_msi_template = {
 	},
 };

+static void pci_irq_shutdown_msix(struct irq_data *data)
+{
+	pci_msix_mask(irq_data_get_msi_desc(data));
+	cond_shutdown_parent(data);
+}
+
+static unsigned int pci_irq_startup_msix(struct irq_data *data)
+{
+	unsigned int ret = cond_startup_parent(data);
+
+	pci_msix_unmask(irq_data_get_msi_desc(data));
+	return ret;
+}
+
 static void pci_irq_mask_msix(struct irq_data *data)
 {
 	pci_msix_mask(irq_data_get_msi_desc(data));
-	cond_mask_parent(data);
 }

 static void pci_irq_unmask_msix(struct irq_data *data)
 {
-	cond_unmask_parent(data);
 	pci_msix_unmask(irq_data_get_msi_desc(data));
 }

@@ -234,6 +269,8 @@ EXPORT_SYMBOL_GPL(pci_msix_prepare_desc);
 static const struct msi_domain_template pci_msix_template = {
 	.chip = {
 		.name			= "PCI-MSIX",
+		.irq_startup		= pci_irq_startup_msix,
+		.irq_shutdown		= pci_irq_shutdown_msix,
 		.irq_mask		= pci_irq_mask_msix,
 		.irq_unmask		= pci_irq_unmask_msix,
 		.irq_write_msi_msg	= pci_msi_domain_write_msg,


@@ -20,4 +20,18 @@
 #define ASPEED_AST2600_SCU_IC1_LPC_RESET_LO_TO_HI	0
 #define ASPEED_AST2600_SCU_IC1_LPC_RESET_HI_TO_LO	1

+#define ASPEED_AST2700_SCU_IC0_PCIE_PERST_LO_TO_HI	3
+#define ASPEED_AST2700_SCU_IC0_PCIE_PERST_HI_TO_LO	2
+
+#define ASPEED_AST2700_SCU_IC1_PCIE_RCRST_LO_TO_HI	3
+#define ASPEED_AST2700_SCU_IC1_PCIE_RCRST_HI_TO_LO	2
+
+#define ASPEED_AST2700_SCU_IC2_PCIE_PERST_LO_TO_HI	3
+#define ASPEED_AST2700_SCU_IC2_PCIE_PERST_HI_TO_LO	2
+#define ASPEED_AST2700_SCU_IC2_LPC_RESET_LO_TO_HI	1
+#define ASPEED_AST2700_SCU_IC2_LPC_RESET_HI_TO_LO	0
+
+#define ASPEED_AST2700_SCU_IC3_LPC_RESET_LO_TO_HI	1
+#define ASPEED_AST2700_SCU_IC3_LPC_RESET_HI_TO_LO	0
+
 #endif /* _DT_BINDINGS_INTERRUPT_CONTROLLER_ASPEED_SCU_IC_H_ */


@@ -564,6 +564,8 @@ enum {
 	MSI_FLAG_PARENT_PM_DEV		= (1 << 8),
 	/* Support for parent mask/unmask */
 	MSI_FLAG_PCI_MSI_MASK_PARENT	= (1 << 9),
+	/* Support for parent startup/shutdown */
+	MSI_FLAG_PCI_MSI_STARTUP_PARENT	= (1 << 10),
 	/* Mask for the generic functionality */
 	MSI_GENERIC_FLAGS_MASK		= GENMASK(15, 0),