pci-v6.18-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmjgOAkUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vxzlA//QxoUF4p1cN7+rPwuzCPNi2ZmKNyU
 T7mLfUciV/t8nPLPFdtxdttHB3F+BsA/E9WYFiUUGBzvdYafnoZ/Qnio1WdMIIYz
 0eVrTpnMUMBXrUwGFnnIER3b4GCJb2WR3RPfaBrbqQRHoAlDmv/ijh7rIKhgWIeR
 NsCmPiFnsxPjgVusn2jXWLheUHEbZh2dVTk9lceQXFRdrUELC9wH7zigAA6GviGO
 ssPC1pKfg5DrtuuM6k9JCcEYibQIlynxZ8sbT6YfQ2bs1uSEd2pEcr7AORb4l2yQ
 rcirHwGTpvZ/QvzKpDY8FcuzPFRP7QPd+34zMEQ2OW04y1k61iKE/4EE2Z9w/OoW
 esFQXbevy9P5JHu6DBcaJ2uwvnLiVesry+9CmkKCc6Dxyjbcbgeta1LR5dhn1Rv0
 dMtRnkd/pxzIF5cRnu+WlOFV2aAw2gKL9pGuimH5TO4xL2qCZKak0hh8PAjUN2c/
 12GAlrwAyBK1FeY2ZflTN7Vr8o2O0I6I6NeaF3sCW1VO2e6E9/bAIhrduUO4lhGq
 BHTVRBefFRtbFVaxTlUAj+lSCyqES3Wzm8y/uLQvT6M3opunTziSDff1aWbm1Y2t
 aASl1IByuKsGID8VrT5khHeBKSWtnd/v7LLUjCeq+g6eKdfN2arInPvw5X1NpVMj
 tzzBYqwHgBoA4u8=
 =BUw/
 -----END PGP SIGNATURE-----

Merge tag 'pci-v6.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Add PCI_FIND_NEXT_CAP() and PCI_FIND_NEXT_EXT_CAP() macros that
     take config space accessor functions.

     Implement pci_find_capability(), pci_find_ext_capability(), and
     dwc, dwc endpoint, and cadence capability search interfaces with
     them (Hans Zhang)
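     The classic capability search these macros consolidate is a short
     linked-list walk through config space; a hedged userspace sketch of
     the same algorithm over a mock config space (the find_cap() helper
     and buffer here are illustrative, not the kernel's API):

     ```c
     #include <assert.h>
     #include <stdint.h>
     #include <stdio.h>

     /* Mock 256-byte config space; the capability list pointer lives
      * at offset 0x34 (PCI_CAPABILITY_LIST). */
     static uint8_t cfg[256];

     /* Walk the capability linked list the way pci_find_capability()
      * does: each entry is [cap ID, next pointer]; next == 0 ends it. */
     static uint8_t find_cap(const uint8_t *c, uint8_t cap_id)
     {
         uint8_t pos = c[0x34];
         int ttl = 48;            /* guard against malformed loops */

         while (pos && ttl--) {
             if (c[pos] == cap_id)
                 return pos;
             pos = c[pos + 1];    /* follow the next pointer */
         }
         return 0;
     }

     int main(void)
     {
         /* Chain: 0x40 (PM, ID 0x01) -> 0x50 (MSI, ID 0x05) -> end */
         cfg[0x34] = 0x40;
         cfg[0x40] = 0x01; cfg[0x41] = 0x50;
         cfg[0x50] = 0x05; cfg[0x51] = 0x00;

         assert(find_cap(cfg, 0x05) == 0x50);
         assert(find_cap(cfg, 0x10) == 0);   /* PCIe cap not present */
         printf("MSI capability at 0x%02x\n", find_cap(cfg, 0x05));
         return 0;
     }
     ```

     The new macros let the same walk be reused with different config
     accessors (host, endpoint, controller-specific) instead of
     duplicating this loop per driver.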

   - Leave parent unit address 0 in 'interrupt-map' so that when we
     build devicetree nodes to describe PCI functions that contain
     multiple peripherals, we can build this property even when
     interrupt controllers lack 'reg' properties (Lorenzo Pieralisi)

   - Add a Xeon 6 quirk to disable Extended Tags and limit Max Read
     Request Size to 128B to avoid a performance issue (Ilpo Järvinen)
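     For context on the 128B limit: Max Read Request Size is encoded in
     bits 14:12 of the PCIe Device Control register, with the field
     selecting 128 << field bytes. A small sketch of that decoding (the
     helper name is illustrative):

     ```c
     #include <assert.h>
     #include <stdint.h>
     #include <stdio.h>

     /* Max Read Request Size is bits 14:12 of Device Control
      * (PCI_EXP_DEVCTL_READRQ); the encoded field selects
      * 128 << field bytes: 0 -> 128B, 1 -> 256B, ... 5 -> 4096B. */
     static unsigned int mrrs_bytes(uint16_t devctl)
     {
         unsigned int field = (devctl >> 12) & 0x7;

         return 128U << field;
     }

     int main(void)
     {
         /* The quirk pins the field to 0, i.e. 128-byte requests. */
         assert(mrrs_bytes(0x0000) == 128);
         assert(mrrs_bytes(0x5000) == 4096);
         printf("field 0 -> %u bytes\n", mrrs_bytes(0));
         return 0;
     }
     ```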

   - Add sysfs 'serial_number' file to expose the Device Serial Number
     (Matthew Wood)

   - Fix pci_acpi_preserve_config() memory leak (Nirmoy Das)

  Resource management:

   - Align m68k pcibios_enable_device() with other arches (Ilpo
     Järvinen)

   - Remove sparc pcibios_enable_device() implementations that don't do
     anything beyond what pci_enable_resources() does (Ilpo Järvinen)

   - Remove mips pcibios_enable_resources() and use
     pci_enable_resources() instead (Ilpo Järvinen)

   - Clean up bridge window sizing and assignment (Ilpo Järvinen),
     including:

       - Leave non-claimed bridge windows disabled

       - Enable bridges even if a window wasn't assigned because not all
         windows are required by downstream devices

       - Preserve bridge window type when releasing the resource, since
         the type is needed for reassignment

       - Consolidate selection of bridge windows into two new
         interfaces, pbus_select_window() and
         pbus_select_window_for_type(), so this is done consistently

       - Compute bridge window start and end earlier to avoid logging
         stale information

  MSI:

   - Add quirk to disable MSI on RDC PCI to PCIe bridges (Marcos Del Sol
     Vives)

  Error handling:

   - Align AER with EEH by allowing drivers to request a Bus Reset on
     Non-Fatal Errors (in addition to the reset on Fatal Errors that we
     already do) (Lukas Wunner)

   - If error recovery fails, emit FAILED_RECOVERY uevents for the
     devices, not for the bridge leading to them.

     This makes them correspond to BEGIN_RECOVERY uevents (Lukas Wunner)

   - Align AER with EEH by calling err_handler.error_detected()
     callbacks to notify drivers if error recovery fails (Lukas Wunner)

   - Align AER with EEH by restoring device error_state to
     pci_channel_io_normal before the err_handler.slot_reset() callback.

     This is earlier than before the err_handler.resume() callback
     (Lukas Wunner)

   - Emit a BEGIN_RECOVERY uevent when driver's
     err_handler.error_detected() requests a reset, as well as when it
     says recovery is complete or can be done without a reset (Niklas
     Schnelle)
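     Recovery proceeds based on the per-driver results returned from
     err_handler.error_detected(); a reset requested by one driver takes
     precedence over drivers that report they can recover without one.
     A simplified model of that result merging (sketch only; the
     kernel's merge_result() in drivers/pci/pcie/err.c is the
     authoritative logic and handles additional cases):

     ```c
     #include <assert.h>
     #include <stdio.h>

     /* Toy severity ordering for the merge: DISCONNECT is worst, then
      * NEED_RESET, then CAN_RECOVER, then RECOVERED. The real enum
      * pci_ers_result is ordered differently; this is a model. */
     enum ers_result {
         ERS_RECOVERED = 1,
         ERS_CAN_RECOVER,
         ERS_NEED_RESET,
         ERS_DISCONNECT,
     };

     static enum ers_result merge(enum ers_result a, enum ers_result b)
     {
         return a > b ? a : b;   /* keep the most severe result */
     }

     int main(void)
     {
         enum ers_result r = ERS_RECOVERED;

         r = merge(r, ERS_CAN_RECOVER);
         r = merge(r, ERS_NEED_RESET);   /* one driver wants a reset */
         assert(r == ERS_NEED_RESET);
         puts("one reset request overrides the others");
         return 0;
     }
     ```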

   - Align s390 with AER and EEH by emitting uevents during error
     recovery (Niklas Schnelle)

   - Align EEH with AER and s390 by emitting BEGIN_RECOVERY,
     SUCCESSFUL_RECOVERY, or FAILED_RECOVERY uevents depending on the
     result of err_handler.error_detected() (Niklas Schnelle)

   - Fix a NULL pointer dereference in aer_ratelimit() when ACPI GHES
     error information identifies a device without an AER Capability
     (Breno Leitao)

   - Update error decoding and TLP Log printing for new errors in
     current PCIe base spec (Lukas Wunner)

   - Update error recovery documentation to match the current code
     and use consistent nomenclature (Lukas Wunner)

  ASPM:

   - Enable all ClockPM and ASPM states for devicetree platforms, since
     there's typically no firmware that enables ASPM

     This is a risky change that may uncover hardware or configuration
     defects at boot-time rather than when users enable ASPM via sysfs
     later. Booting with "pcie_aspm=off" prevents this enabling
     (Manivannan Sadhasivam)

   - Remove the qcom code that enabled ASPM (Manivannan Sadhasivam)

  Power management:

   - If a device has already been disconnected, e.g., by a hotplug
     removal, don't bother trying to resume it to D0 when detaching the
     driver.

     This avoids annoying "Unable to change power state from D3cold to
     D0" messages (Mario Limonciello)

   - Ensure devices are powered up before config reads for
     'max_link_width', 'current_link_speed', 'current_link_width',
     'secondary_bus_number', and 'subordinate_bus_number' sysfs files.

     This prevents using invalid data (~0) in drivers or lspci and,
     depending on how the PCIe controller reports errors, may avoid
     error interrupts or crashes (Brian Norris)

  Virtualization:

   - Add rescan/remove locking when enabling/disabling SR-IOV, which
     avoids list corruption on s390, where disabling SR-IOV also
     generates hotplug events (Niklas Schnelle)

  Peer-to-peer DMA:

   - Free struct p2p_pgmap, not a member within it, in the
     pci_p2pdma_add_resource() error path (Sungho Kim)

  Endpoint framework:

   - Document sysfs interface for BAR assignment of vNTB endpoint
     functions (Jerome Brunet)

   - Fix array underflow in endpoint BAR test case (Dan Carpenter)

   - Skip endpoint IRQ test if the IRQ is out of range to avoid false
     errors (Christian Bruel)

   - Fix endpoint test case for controllers with fixed-size BARs smaller
     than requested by the test (Marek Vasut)

   - Restore inbound translation when disabling doorbell so the endpoint
     doorbell test case can be run more than once (Niklas Cassel)

   - Avoid a NULL pointer dereference when releasing DMA channels in
     endpoint DMA test case (Shin'ichiro Kawasaki)

   - Convert tegra194 interrupt number to MSI vector to fix endpoint
     Kselftest MSI_TEST test case (Niklas Cassel)

   - Reset tegra194 BARs when running in endpoint mode so the BAR tests
     don't overwrite the ATU settings in BAR4 (Niklas Cassel)

   - Handle errors in tegra194 BPMP transactions so we don't mistakenly
     skip future PERST# assertion (Vidya Sagar)

  AMD MDB PCIe controller driver:

   - Update DT binding example to separate PERST# to a Root Port stanza
     to make multiple Root Ports possible in the future (Sai Krishna
     Musham)

   - Add driver support for PERST# being described in a Root Port
     stanza, falling back to the host bridge if not found there (Sai
     Krishna Musham)

  Freescale i.MX6 PCIe controller driver:

   - Enable the 3.3V Vaux supply if available so devices can request
     wakeup with either Beacon or WAKE# (Richard Zhu)

  MediaTek PCIe Gen3 controller driver:

   - Add optional sys clock ready time setting to avoid sys_clk_rdy
     signal glitching in MT6991 and MT8196 (AngeloGioacchino Del Regno)

   - Add DT binding and driver support for MT6991 and MT8196
     (AngeloGioacchino Del Regno)

  NVIDIA Tegra PCIe controller driver:

   - When asserting PERST#, disable the controller instead of mistakenly
     disabling the PLL twice (Nagarjuna Kristam)

   - Convert struct tegra_msi mask_lock to raw spinlock to avoid a lock
     nesting error (Marek Vasut)

  Qualcomm PCIe controller driver:

   - Select PCI Power Control Slot driver so slot voltage rails can be
     turned on/off if described in Root Port devicetree node (Qiang Yu)

   - Parse only PCI bridge child nodes in devicetree, skipping unrelated
     nodes such as OPP (Operating Performance Points), which caused
     probe failures (Krishna Chaitanya Chundru)

   - Add 8.0 GT/s and 32.0 GT/s equalization settings (Ziyue Zhang)

   - Consolidate Root Port 'phy' and 'reset' properties in struct
     qcom_pcie_port, regardless of whether we got them from the Root
     Port node or the host bridge node (Manivannan Sadhasivam)

   - Fetch and map the ELBI register space in the DWC core rather than
     in each driver individually (Krishna Chaitanya Chundru)

   - Enable ECAM mechanism in DWC core by setting up iATU with 'CFG
     Shift Feature' and use this in the qcom driver (Krishna Chaitanya
     Chundru)

   - Add SM8750 compatible to qcom,pcie-sm8550.yaml (Krishna Chaitanya
     Chundru)

   - Update qcom,pcie-x1e80100.yaml to allow fifth PCIe host on Qualcomm
     Glymur, which is compatible with X1E80100 but doesn't have the
     cnoc_sf_axi clock (Qiang Yu)

  Renesas R-Car PCIe controller driver:

   - Fix a typo that prevented correct PHY initialization (Marek Vasut)

   - Add a missing 1ms delay after PWR reset assertion as required by
     the V4H manual (Marek Vasut)

   - Assure reset has completed before DBI access to avoid SError (Marek
     Vasut)

   - Fix inverted PHY initialization check, which sometimes led to
     timeouts and failure to start the controller (Marek Vasut)

   - Pass the correct IRQ domain to generic_handle_domain_irq() to fix a
     regression when converting to msi_create_parent_irq_domain()
     (Claudiu Beznea)

   - Drop the spinlock protecting the PMSR register - it's no longer
     required since pci_lock already serializes accesses (Marek Vasut)

   - Convert struct rcar_msi mask_lock to raw spinlock to avoid a lock
     nesting error (Marek Vasut)

  SOPHGO PCIe controller driver:

   - Check for existence of struct cdns_pcie.ops before using it to
     allow Cadence drivers that don't need to supply ops (Chen Wang)

   - Add DT binding and driver for the SOPHGO SG2042 PCIe controller
     (Chen Wang)

  STMicroelectronics STM32MP25 PCIe controller driver:

   - Update pinctrl documentation of initial states and use in runtime
     suspend/resume (Christian Bruel)

   - Add pinctrl_pm_select_init_state() for use by stm32 driver, which
     needs it during resume (Christian Bruel)

   - Add devicetree bindings and drivers for the STMicroelectronics
     STM32MP25 in host and endpoint modes (Christian Bruel)

  Synopsys DesignWare PCIe controller driver:

   - Add support for x16 in devicetree 'num-lanes' property (Konrad
     Dybcio)

   - Verify that if DT specifies a single IRQ for all eDMA channels, it
     is named 'dma' (Niklas Cassel)

  TI J721E PCIe driver:

   - Add MODULE_DEVICE_TABLE() so driver can be autoloaded (Siddharth
     Vadapalli)

   - Power controller off before configuring the glue layer so the
     controller latches the correct values on power-on (Siddharth
     Vadapalli)

  TI Keystone PCIe controller driver:

   - Use devm_request_irq() so 'ks-pcie-error-irq' is freed when driver
     exits with error (Siddharth Vadapalli)

   - Add Peripheral Virtualization Unit (PVU), which restricts DMA from
     PCIe devices to specific regions of host memory, to the ti,am65
     binding (Jan Kiszka)

  Xilinx NWL PCIe controller driver:

   - Clear bootloader E_ECAM_CONTROL before merging in the new driver
     value to avoid writing invalid values (Jani Nurminen)"

* tag 'pci-v6.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (141 commits)
  PCI/AER: Avoid NULL pointer dereference in aer_ratelimit()
  MAINTAINERS: Add entry for ST STM32MP25 PCIe drivers
  PCI: stm32-ep: Add PCIe Endpoint support for STM32MP25
  dt-bindings: PCI: Add STM32MP25 PCIe Endpoint bindings
  PCI: stm32: Add PCIe host support for STM32MP25
  PCI: xilinx-nwl: Fix ECAM programming
  PCI: j721e: Fix incorrect error message in probe()
  PCI: keystone: Use devm_request_irq() to free "ks-pcie-error-irq" on exit
  dt-bindings: PCI: qcom,pcie-x1e80100: Set clocks minItems for the fifth Glymur PCIe Controller
  PCI: dwc: Support 16-lane operation
  PCI: Add lockdep assertion in pci_stop_and_remove_bus_device()
  PCI/IOV: Add PCI rescan-remove locking when enabling/disabling SR-IOV
  PCI: rcar-host: Convert struct rcar_msi mask_lock into raw spinlock
  PCI: tegra194: Rename 'root_bus' to 'root_port_bus' in tegra_pcie_downstream_dev_to_D0()
  PCI: tegra: Convert struct tegra_msi mask_lock into raw spinlock
  PCI: rcar-gen4: Fix inverted break condition in PHY initialization
  PCI: rcar-gen4: Assure reset occurs before DBI access
  PCI: rcar-gen4: Add missing 1ms delay after PWR reset assertion
  PCI: Set up bridge resources earlier
  PCI: rcar-host: Drop PMSR spinlock
  ...
This commit is contained in:
Linus Torvalds 2025-10-06 10:41:03 -07:00
commit 2f2c725493
103 changed files with 3166 additions and 1298 deletions


@@ -612,3 +612,12 @@ Description:
 		# ls doe_features
 		0001:01 0001:02  doe_discovery
+
+What:		/sys/bus/pci/devices/.../serial_number
+Date:		December 2025
+Contact:	Matthew Wood <thepacketgeek@gmail.com>
+Description:
+		This is visible only for PCI devices that support the serial
+		number extended capability. The file is read only and due to
+		the possible sensitivity of accessible serial numbers, admin
+		only.


@@ -90,8 +90,9 @@ of the function device and is populated with the following NTB specific
 attributes that can be configured by the user::

 	# ls functions/pci_epf_vntb/func1/pci_epf_vntb.0/
-	db_count mw1 mw2 mw3 mw4 num_mws
-	spad_count
+	ctrl_bar db_count mw1_bar mw2_bar mw3_bar mw4_bar spad_count
+	db_bar mw1 mw2 mw3 mw4 num_mws vbus_number
+	vntb_vid vntb_pid

 A sample configuration for NTB function is given below::

@@ -100,6 +101,10 @@ A sample configuration for NTB function is given below::
 	# echo 1 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/num_mws
 	# echo 0x100000 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/mw1

+By default, each construct is assigned a BAR, as needed and in order.
+Should a specific BAR setup be required by the platform, a BAR may be assigned
+to each construct using the related ``XYZ_bar`` entry.
+
 A sample configuration for virtual NTB driver for virtual PCI bus::

 	# echo 0x1957 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vntb_vid


@@ -13,7 +13,7 @@ PCI Error Recovery
 Many PCI bus controllers are able to detect a variety of hardware
 PCI errors on the bus, such as parity errors on the data and address
 buses, as well as SERR and PERR errors. Some of the more advanced
-chipsets are able to deal with these errors; these include PCI-E chipsets,
+chipsets are able to deal with these errors; these include PCIe chipsets,
 and the PCI-host bridges found on IBM Power4, Power5 and Power6-based
 pSeries boxes. A typical action taken is to disconnect the affected device,
 halting all I/O to it. The goal of a disconnection is to avoid system
@@ -108,8 +108,8 @@ A driver does not have to implement all of these callbacks; however,
 if it implements any, it must implement error_detected(). If a callback
 is not implemented, the corresponding feature is considered unsupported.
 For example, if mmio_enabled() and resume() aren't there, then it
-is assumed that the driver is not doing any direct recovery and requires
-a slot reset. Typically a driver will want to know about
+is assumed that the driver does not need these callbacks
+for recovery. Typically a driver will want to know about
 a slot_reset().
The actual steps taken by a platform to recover from a PCI error The actual steps taken by a platform to recover from a PCI error
@@ -122,6 +122,10 @@ A PCI bus error is detected by the PCI hardware. On powerpc, the slot
 is isolated, in that all I/O is blocked: all reads return 0xffffffff,
 all writes are ignored.

+Similarly, on platforms supporting Downstream Port Containment
+(PCIe r7.0 sec 6.2.11), the link to the sub-hierarchy with the
+faulting device is disabled. Any device in the sub-hierarchy
+becomes inaccessible.
+
 STEP 1: Notification
 --------------------
@@ -141,6 +145,9 @@ shouldn't do any new IOs. Called in task context. This is sort of a
 All drivers participating in this system must implement this call.
 The driver must return one of the following result codes:

+  - PCI_ERS_RESULT_RECOVERED
+      Driver returns this if it thinks the device is usable despite
+      the error and does not need further intervention.
   - PCI_ERS_RESULT_CAN_RECOVER
       Driver returns this if it thinks it might be able to recover
       the HW by just banging IOs or if it wants to be given
@@ -199,7 +206,25 @@ reset or some such, but not restart operations. This callback is made if
 all drivers on a segment agree that they can try to recover and if no automatic
 link reset was performed by the HW. If the platform can't just re-enable IOs
 without a slot reset or a link reset, it will not call this callback, and
-instead will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset)
+instead will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset).
+
+.. note::
+
+   On platforms supporting Advanced Error Reporting (PCIe r7.0 sec 6.2),
+   the faulting device may already be accessible in STEP 1 (Notification).
+   Drivers should nevertheless defer accesses to STEP 2 (MMIO Enabled)
+   to be compatible with EEH on powerpc and with s390 (where devices are
+   inaccessible until STEP 2).
+
+   On platforms supporting Downstream Port Containment, the link to the
+   sub-hierarchy with the faulting device is re-enabled in STEP 3 (Link
+   Reset). Hence devices in the sub-hierarchy are inaccessible until
+   STEP 4 (Slot Reset).
+
+   For errors such as Surprise Down (PCIe r7.0 sec 6.2.7), the device
+   may not even be accessible in STEP 4 (Slot Reset). Drivers can detect
+   accessibility by checking whether reads from the device return all 1's
+   (PCI_POSSIBLE_ERROR()).

 .. note::
@@ -234,14 +259,14 @@ The driver should return one of the following result codes:
 The next step taken depends on the results returned by the drivers.
 If all drivers returned PCI_ERS_RESULT_RECOVERED, then the platform
-proceeds to either STEP3 (Link Reset) or to STEP 5 (Resume Operations).
+proceeds to either STEP 3 (Link Reset) or to STEP 5 (Resume Operations).
 If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
 proceeds to STEP 4 (Slot Reset)

 STEP 3: Link Reset
 ------------------

-The platform resets the link. This is a PCI-Express specific step
+The platform resets the link. This is a PCIe specific step
 and is done whenever a fatal error has been detected that can be
 "solved" by resetting the link.
@@ -263,13 +288,13 @@ that is equivalent to what it would be after a fresh system
 power-on followed by power-on BIOS/system firmware initialization.
 Soft reset is also known as hot-reset.

-Powerpc fundamental reset is supported by PCI Express cards only
+Powerpc fundamental reset is supported by PCIe cards only
 and results in device's state machines, hardware logic, port states and
 configuration registers to initialize to their default conditions.

 For most PCI devices, a soft reset will be sufficient for recovery.
 Optional fundamental reset is provided to support a limited number
-of PCI Express devices for which a soft reset is not sufficient
+of PCIe devices for which a soft reset is not sufficient
 for recovery.

 If the platform supports PCI hotplug, then the reset might be
@@ -313,7 +338,7 @@ Result codes:
   - PCI_ERS_RESULT_DISCONNECT
       Same as above.

-Drivers for PCI Express cards that require a fundamental reset must
+Drivers for PCIe cards that require a fundamental reset must
 set the needs_freset bit in the pci_dev structure in their probe function.
 For example, the QLogic qla2xxx driver sets the needs_freset bit for certain
 PCI card types::


@@ -70,16 +70,16 @@ AER error output
 ----------------

 When a PCIe AER error is captured, an error message will be output to
-console. If it's a correctable error, it is output as an info message.
+console. If it's a correctable error, it is output as a warning message.
 Otherwise, it is printed as an error. So users could choose different
 log level to filter out correctable error messages.

 Below shows an example::

-  0000:50:00.0: PCIe Bus Error: severity=Uncorrected (Fatal), type=Transaction Layer, id=0500(Requester ID)
+  0000:50:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Transaction Layer, (Requester ID)
   0000:50:00.0: device [8086:0329] error status/mask=00100000/00000000
-  0000:50:00.0: [20] Unsupported Request (First)
-  0000:50:00.0: TLP Header: 04000001 00200a03 05010000 00050100
+  0000:50:00.0: [20] UnsupReq (First)
+  0000:50:00.0: TLP Header: 0x04000001 0x00200a03 0x05010000 0x00050100

 In the example, 'Requester ID' means the ID of the device that sent
 the error message to the Root Port. Please refer to PCIe specs for other
@@ -138,7 +138,7 @@ error message to the Root Port above it when it captures
 an error. The Root Port, upon receiving an error reporting message,
 internally processes and logs the error message in its AER
 Capability structure. Error information being logged includes storing
-the error reporting agent's requestor ID into the Error Source
+the error reporting agent's Requester ID into the Error Source
 Identification Registers and setting the error bits of the Root Error
 Status Register accordingly. If AER error reporting is enabled in the Root
 Error Command Register, the Root Port generates an interrupt when an
@@ -152,18 +152,6 @@ the device driver.
 Provide callbacks
 -----------------

-callback reset_link to reset PCIe link
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This callback is used to reset the PCIe physical link when a
-fatal error happens. The Root Port AER service driver provides a
-default reset_link function, but different Upstream Ports might
-have different specifications to reset the PCIe link, so
-Upstream Port drivers may provide their own reset_link functions.
-
-Section 3.2.2.2 provides more detailed info on when to call
-reset_link.
-
 PCI error-recovery callbacks
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -174,8 +162,8 @@ when performing error recovery actions.
 Data struct pci_driver has a pointer, err_handler, to point to
 pci_error_handlers who consists of a couple of callback function
 pointers. The AER driver follows the rules defined in
-pci-error-recovery.rst except PCIe-specific parts (e.g.
-reset_link). Please refer to pci-error-recovery.rst for detailed
+pci-error-recovery.rst except PCIe-specific parts (see
+below). Please refer to pci-error-recovery.rst for detailed
 definitions of the callbacks.

 The sections below specify when to call the error callback functions.
@@ -189,10 +177,21 @@ software intervention or any loss of data. These errors do not
 require any recovery actions. The AER driver clears the device's
 correctable error status register accordingly and logs these errors.

-Non-correctable (non-fatal and fatal) errors
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Uncorrectable (non-fatal and fatal) errors
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-If an error message indicates a non-fatal error, performing link reset
+The AER driver performs a Secondary Bus Reset to recover from
+uncorrectable errors. The reset is applied at the port above
+the originating device: If the originating device is an Endpoint,
+only the Endpoint is reset. If on the other hand the originating
+device has subordinate devices, those are all affected by the
+reset as well.
+
+If the originating device is a Root Complex Integrated Endpoint,
+there's no port above where a Secondary Bus Reset could be applied.
+In this case, the AER driver instead applies a Function Level Reset.
+
+If an error message indicates a non-fatal error, performing a reset
 at upstream is not required. The AER driver calls error_detected(dev,
 pci_channel_io_normal) to all drivers associated within a hierarchy in
 question. For example::
@@ -204,38 +203,34 @@ Downstream Port B and Endpoint.

 A driver may return PCI_ERS_RESULT_CAN_RECOVER,
 PCI_ERS_RESULT_DISCONNECT, or PCI_ERS_RESULT_NEED_RESET, depending on
-whether it can recover or the AER driver calls mmio_enabled as next.
+whether it can recover without a reset, considers the device unrecoverable
+or needs a reset for recovery. If all affected drivers agree that they can
+recover without a reset, it is skipped. Should one driver request a reset,
+it overrides all other drivers.

 If an error message indicates a fatal error, kernel will broadcast
 error_detected(dev, pci_channel_io_frozen) to all drivers within
-a hierarchy in question. Then, performing link reset at upstream is
-necessary. As different kinds of devices might use different approaches
-to reset link, AER port service driver is required to provide the
-function to reset link via callback parameter of pcie_do_recovery()
-function. If reset_link is not NULL, recovery function will use it
-to reset the link. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER
-and reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
-to mmio_enabled.
+a hierarchy in question. Then, performing a reset at upstream is
+necessary. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER
+to indicate that recovery without a reset is possible, the error
+handling goes to mmio_enabled, but afterwards a reset is still
+performed.

-Frequent Asked Questions
-------------------------
+In other words, for non-fatal errors, drivers may opt in to a reset.
+But for fatal errors, they cannot opt out of a reset, based on the
+assumption that the link is unreliable.
+
+Frequently Asked Questions
+--------------------------

 Q:
 What happens if a PCIe device driver does not provide an
 error recovery handler (pci_driver->err_handler is equal to NULL)?

 A:
-The devices attached with the driver won't be recovered. If the
-error is fatal, kernel will print out warning messages. Please refer
-to section 3 for more information.
+The devices attached with the driver won't be recovered.
+The kernel will print out informational messages to identify
+unrecoverable devices.

-Q:
-What happens if an upstream port service driver does not provide
-callback reset_link?
-
-A:
-Fatal error recovery will fail if the errors are reported by the
-upstream ports who are attached by the service driver.
-
 Software error injection


@@ -71,6 +71,17 @@ properties:
     - "#address-cells"
     - "#interrupt-cells"

+patternProperties:
+  '^pcie@[0-2],0$':
+    type: object
+    $ref: /schemas/pci/pci-pci-bridge.yaml#
+
+    properties:
+      reg:
+        maxItems: 1
+
+    unevaluatedProperties: false
+
 required:
   - reg
   - reg-names
@@ -87,6 +98,7 @@ examples:
   - |
     #include <dt-bindings/interrupt-controller/arm-gic.h>
     #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/gpio/gpio.h>

     soc {
         #address-cells = <2>;
@ -112,10 +124,20 @@ examples:
#size-cells = <2>;
#interrupt-cells = <1>;
device_type = "pci";
pcie@0,0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
reset-gpios = <&tca6416_u37 7 GPIO_ACTIVE_LOW>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
pcie_intc_0: interrupt-controller {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
};
};
};


@ -52,7 +52,12 @@ properties:
- mediatek,mt8188-pcie
- mediatek,mt8195-pcie
- const: mediatek,mt8192-pcie
- items:
- enum:
- mediatek,mt6991-pcie
- const: mediatek,mt8196-pcie
- const: mediatek,mt8192-pcie - const: mediatek,mt8192-pcie
- const: mediatek,mt8196-pcie
- const: airoha,en7581-pcie
reg:
@ -212,6 +217,36 @@ allOf:
mediatek,pbus-csr: false
- if:
properties:
compatible:
contains:
enum:
- mediatek,mt8196-pcie
then:
properties:
clocks:
minItems: 6
clock-names:
items:
- const: pl_250m
- const: tl_26m
- const: bus
- const: low_power
- const: peri_26m
- const: peri_mem
resets:
minItems: 2
reset-names:
items:
- const: phy
- const: mac
mediatek,pbus-csr: false
- if:
properties:
compatible:


@ -77,46 +77,46 @@ examples:
#size-cells = <2>;
pci@1c00000 {
compatible = "qcom,pcie-sa8255p";
reg = <0x4 0x00000000 0 0x10000000>;
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x02000000 0x0 0x40100000 0x0 0x40100000 0x0 0x1ff00000>,
<0x43000000 0x4 0x10100000 0x4 0x10100000 0x0 0x40000000>;
bus-range = <0x00 0xff>;
dma-coherent;
linux,pci-domain = <0>;
power-domains = <&scmi5_pd 0>;
iommu-map = <0x0 &pcie_smmu 0x0000 0x1>,
<0x100 &pcie_smmu 0x0001 0x1>;
interrupt-parent = <&intc>;
interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 308 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 309 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 312 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 313 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 314 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi0", "msi1", "msi2", "msi3",
"msi4", "msi5", "msi6", "msi7";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};
};


@ -22,6 +22,7 @@ properties:
- enum:
- qcom,sar2130p-pcie
- qcom,pcie-sm8650
- qcom,pcie-sm8750
- const: qcom,pcie-sm8550
reg:


@ -32,10 +32,11 @@ properties:
- const: mhi # MHI registers
clocks:
minItems: 6
maxItems: 7
clock-names:
minItems: 6
items:
- const: aux # Auxiliary clock
- const: cfg # Configuration clock


@ -0,0 +1,64 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/sophgo,sg2042-pcie-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Sophgo SG2042 PCIe Host (Cadence PCIe Wrapper)
description:
Sophgo SG2042 PCIe host controller is based on the Cadence PCIe core.
maintainers:
- Chen Wang <unicorn_wang@outlook.com>
properties:
compatible:
const: sophgo,sg2042-pcie-host
reg:
maxItems: 2
reg-names:
items:
- const: reg
- const: cfg
vendor-id:
const: 0x1f1c
device-id:
const: 0x2042
msi-parent: true
allOf:
- $ref: cdns-pcie-host.yaml#
required:
- compatible
- reg
- reg-names
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/irq.h>
pcie@62000000 {
compatible = "sophgo,sg2042-pcie-host";
device_type = "pci";
reg = <0x62000000 0x00800000>,
<0x48000000 0x00001000>;
reg-names = "reg", "cfg";
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
<0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
bus-range = <0x00 0xff>;
vendor-id = <0x1f1c>;
device-id = <0x2042>;
cdns,no-bar-match-nbits = <48>;
msi-parent = <&msi>;
};


@ -0,0 +1,33 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/st,stm32-pcie-common.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: STM32MP25 PCIe RC/EP controller
maintainers:
- Christian Bruel <christian.bruel@foss.st.com>
description:
STM32MP25 PCIe RC/EP common properties
properties:
clocks:
maxItems: 1
description: PCIe system clock
resets:
maxItems: 1
power-domains:
maxItems: 1
access-controllers:
maxItems: 1
required:
- clocks
- resets
additionalProperties: true


@ -0,0 +1,73 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/st,stm32-pcie-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: STMicroelectronics STM32MP25 PCIe Endpoint
maintainers:
- Christian Bruel <christian.bruel@foss.st.com>
description:
PCIe endpoint controller based on the Synopsys DesignWare PCIe core.
allOf:
- $ref: /schemas/pci/snps,dw-pcie-ep.yaml#
- $ref: /schemas/pci/st,stm32-pcie-common.yaml#
properties:
compatible:
const: st,stm32mp25-pcie-ep
reg:
items:
- description: Data Bus Interface (DBI) registers.
- description: Data Bus Interface (DBI) shadow registers.
- description: Internal Address Translation Unit (iATU) registers.
- description: PCIe configuration registers.
reg-names:
items:
- const: dbi
- const: dbi2
- const: atu
- const: addr_space
reset-gpios:
description: GPIO controlled connection to PERST# signal
maxItems: 1
phys:
maxItems: 1
required:
- phys
- reset-gpios
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/st,stm32mp25-rcc.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/phy/phy.h>
#include <dt-bindings/reset/st,stm32mp25-rcc.h>
pcie-ep@48400000 {
compatible = "st,stm32mp25-pcie-ep";
reg = <0x48400000 0x400000>,
<0x48500000 0x100000>,
<0x48700000 0x80000>,
<0x10000000 0x10000000>;
reg-names = "dbi", "dbi2", "atu", "addr_space";
clocks = <&rcc CK_BUS_PCIE>;
phys = <&combophy PHY_TYPE_PCIE>;
resets = <&rcc PCIE_R>;
pinctrl-names = "default", "init";
pinctrl-0 = <&pcie_pins_a>;
pinctrl-1 = <&pcie_init_pins_a>;
reset-gpios = <&gpioj 8 GPIO_ACTIVE_LOW>;
access-controllers = <&rifsc 68>;
power-domains = <&CLUSTER_PD>;
};


@ -0,0 +1,112 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/st,stm32-pcie-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: STMicroelectronics STM32MP25 PCIe Root Complex
maintainers:
- Christian Bruel <christian.bruel@foss.st.com>
description:
PCIe root complex controller based on the Synopsys DesignWare PCIe core.
allOf:
- $ref: /schemas/pci/snps,dw-pcie.yaml#
- $ref: /schemas/pci/st,stm32-pcie-common.yaml#
properties:
compatible:
const: st,stm32mp25-pcie-rc
reg:
items:
- description: Data Bus Interface (DBI) registers.
- description: PCIe configuration registers.
reg-names:
items:
- const: dbi
- const: config
msi-parent:
maxItems: 1
patternProperties:
'^pcie@[0-2],0$':
type: object
$ref: /schemas/pci/pci-pci-bridge.yaml#
properties:
reg:
maxItems: 1
phys:
maxItems: 1
reset-gpios:
description: GPIO controlled connection to PERST# signal
maxItems: 1
wake-gpios:
description: GPIO used as WAKE# input signal
maxItems: 1
required:
- phys
- ranges
unevaluatedProperties: false
required:
- interrupt-map
- interrupt-map-mask
- ranges
- dma-ranges
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/st,stm32mp25-rcc.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/phy/phy.h>
#include <dt-bindings/reset/st,stm32mp25-rcc.h>
pcie@48400000 {
compatible = "st,stm32mp25-pcie-rc";
device_type = "pci";
reg = <0x48400000 0x400000>,
<0x10000000 0x10000>;
reg-names = "dbi", "config";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &intc 0 0 GIC_SPI 264 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc 0 0 GIC_SPI 265 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc 0 0 GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc 0 0 GIC_SPI 267 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x01000000 0x0 0x00000000 0x10010000 0x0 0x10000>,
<0x02000000 0x0 0x10020000 0x10020000 0x0 0x7fe0000>,
<0x42000000 0x0 0x18000000 0x18000000 0x0 0x8000000>;
dma-ranges = <0x42000000 0x0 0x80000000 0x80000000 0x0 0x80000000>;
clocks = <&rcc CK_BUS_PCIE>;
resets = <&rcc PCIE_R>;
msi-parent = <&v2m0>;
access-controllers = <&rifsc 68>;
power-domains = <&CLUSTER_PD>;
pcie@0,0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
phys = <&combophy PHY_TYPE_PCIE>;
wake-gpios = <&gpioh 5 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;
reset-gpios = <&gpioj 8 GPIO_ACTIVE_LOW>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};


@ -20,14 +20,18 @@ properties:
- ti,keystone-pcie
reg:
minItems: 4
maxItems: 6
reg-names:
minItems: 4
items:
- const: app
- const: dbics
- const: config
- const: atu
- const: vmap_lp
- const: vmap_hp
interrupts:
maxItems: 1
@ -69,6 +73,15 @@ properties:
items:
pattern: '^pcie-phy[0-1]$'
memory-region:
maxItems: 1
description: |
phandle to a restricted DMA pool to be used for all devices behind
this controller. The regions should be defined according to
reserved-memory/shared-dma-pool.yaml.
Note that enforcement via the PVU will only be available to
ti,am654-pcie-rc devices.
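For illustration only (node names, addresses, and sizes are invented), a pool for the `memory-region` property described above might be wired up along these lines, following the reserved-memory `restricted-dma-pool` binding:

```dts
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    /* Hypothetical pool; PVU enforcement applies to ti,am654-pcie-rc only */
    pcie_dma_pool: restricted-dma@9c000000 {
        compatible = "restricted-dma-pool";
        reg = <0x0 0x9c000000 0x0 0x4000000>;
    };
};

pcie@5500000 {
    /* ... controller properties ... */
    memory-region = <&pcie_dma_pool>;
};
```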
required:
- compatible
- reg
@ -89,6 +102,13 @@ then:
- power-domains
- msi-map
- num-viewport
else:
properties:
reg:
maxItems: 4
reg-names:
maxItems: 4
unevaluatedProperties: false
@ -104,8 +124,10 @@ examples:
reg = <0x5500000 0x1000>,
<0x5501000 0x1000>,
<0x10000000 0x2000>,
<0x5506000 0x1000>,
<0x2900000 0x1000>,
<0x2908000 0x1000>;
reg-names = "app", "dbics", "config", "atu", "vmap_lp", "vmap_hp";
power-domains = <&k3_pds 120 TI_SCI_PD_EXCLUSIVE>;
#address-cells = <3>;
#size-cells = <2>;


@ -1162,8 +1162,55 @@ pinmux core.
Pin control requests from drivers
=================================

When a device driver is about to probe, the device core attaches the
standard states if they are defined in the device tree by calling
``pinctrl_bind_pins()`` on these devices.
Possible standard state names are: "default", "init", "sleep" and "idle".
- if ``default`` is defined in the device tree, it is selected before
device probe.
- if ``init`` and ``default`` are defined in the device tree, the "init"
state is selected before the driver probe and the "default" state is
selected after the driver probe.
- the ``sleep`` and ``idle`` states are for power management and can only
be selected with the PM API below.
PM interfaces
=============
PM runtime suspend/resume might need to execute the same init sequence as
during probe. Since the predefined states are already attached to the
device, the driver can activate these states explicitly with the
following helper functions:
- ``pinctrl_pm_select_default_state()``
- ``pinctrl_pm_select_init_state()``
- ``pinctrl_pm_select_sleep_state()``
- ``pinctrl_pm_select_idle_state()``
For example, if resuming the device depends on certain pinmux states:
.. code-block:: c

   foo_suspend()
   {
       /* suspend device */
       ...
       pinctrl_pm_select_sleep_state(dev);
   }

   foo_resume()
   {
       pinctrl_pm_select_init_state(dev);

       /* resuming device */
       ...

       pinctrl_pm_select_default_state(dev);
   }
This way driver writers do not need to add any of the boilerplate code
of the type found below. However when doing fine-grained state selection
and not using the "default" state, you may have to do some device driver and not using the "default" state, you may have to do some device driver
@ -1185,6 +1232,12 @@ operation and going to sleep, moving from the ``PINCTRL_STATE_DEFAULT`` to
``PINCTRL_STATE_SLEEP`` at runtime, re-biasing or even re-muxing pins to save
current in sleep mode.
Another case is when the pinctrl needs to switch to a certain mode during
probe and then revert to the default state at the end of probe. For example
a PINMUX may need to be configured as a GPIO during probe. In this case, use
``PINCTRL_STATE_INIT`` to switch state before probe, then move to
``PINCTRL_STATE_DEFAULT`` at the end of probe for normal operation.
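As a device tree sketch (hypothetical node and pin-group names), this init-then-default pattern looks like:

```dts
foo@40010000 {
    compatible = "vendor,foo";
    /* "init" is selected by the core before probe (pin muxed as a GPIO);
     * "default" is selected automatically once probe completes. */
    pinctrl-names = "default", "init";
    pinctrl-0 = <&foo_pins_default>;
    pinctrl-1 = <&foo_pins_gpio>;
};
```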
A driver may request a certain control state to be activated, usually just the
default state like this:


@ -19723,6 +19723,13 @@ L: linux-samsung-soc@vger.kernel.org
S:	Maintained
F:	drivers/pci/controller/dwc/pci-exynos.c
PCI DRIVER FOR STM32MP25
M: Christian Bruel <christian.bruel@foss.st.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/st,stm32-pcie-*.yaml
F: drivers/pci/controller/dwc/*stm32*
PCI DRIVER FOR SYNOPSYS DESIGNWARE
M:	Jingoo Han <jingoohan1@gmail.com>
M:	Manivannan Sadhasivam <mani@kernel.org>


@ -44,41 +44,24 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
*/
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
u16 cmd, newcmd;
int ret;
ret = pci_enable_resources(dev, mask);
if (ret < 0)
return ret;
/*
* Bridges (eg, cardbus bridges) need to be fully enabled
*/
if ((dev->class >> 16) == PCI_BASE_CLASS_BRIDGE) {
pci_read_config_word(dev, PCI_COMMAND, &cmd);
newcmd = cmd | PCI_COMMAND_IO | PCI_COMMAND_MEMORY;
if (newcmd != cmd) {
pr_info("PCI: enabling bridge %s (0x%04x -> 0x%04x)\n",
pci_name(dev), cmd, newcmd);
pci_write_config_word(dev, PCI_COMMAND, newcmd);
}
}
return 0;
}


@ -249,45 +249,11 @@ static int __init pcibios_init(void)
subsys_initcall(pcibios_init);
static int pcibios_enable_resources(struct pci_dev *dev, int mask)
{
u16 cmd, old_cmd;
int idx;
struct resource *r;
pci_read_config_word(dev, PCI_COMMAND, &cmd);
old_cmd = cmd;
pci_dev_for_each_resource(dev, r, idx) {
/* Only set up the requested stuff */
if (!(mask & (1<<idx)))
continue;
if (!(r->flags & (IORESOURCE_IO | IORESOURCE_MEM)))
continue;
if ((idx == PCI_ROM_RESOURCE) &&
(!(r->flags & IORESOURCE_ROM_ENABLE)))
continue;
if (!r->start && r->end) {
pci_err(dev,
"can't enable device: resource collisions\n");
return -EINVAL;
}
if (r->flags & IORESOURCE_IO)
cmd |= PCI_COMMAND_IO;
if (r->flags & IORESOURCE_MEM)
cmd |= PCI_COMMAND_MEMORY;
}
if (cmd != old_cmd) {
pci_info(dev, "enabling device (%04x -> %04x)\n", old_cmd, cmd);
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
return 0;
}
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
int err;
err = pci_enable_resources(dev, mask);
if (err < 0)
return err;


@ -334,7 +334,7 @@ static enum pci_ers_result eeh_report_error(struct eeh_dev *edev,
rc = driver->err_handler->error_detected(pdev, pci_channel_io_frozen);
edev->in_error = true;
pci_uevent_ers(pdev, rc);
return rc;
}


@ -88,6 +88,7 @@ static pci_ers_result_t zpci_event_notify_error_detected(struct pci_dev *pdev,
pci_ers_result_t ers_res = PCI_ERS_RESULT_DISCONNECT;
ers_res = driver->err_handler->error_detected(pdev, pdev->error_state);
pci_uevent_ers(pdev, ers_res);
if (ers_result_indicates_abort(ers_res))
pr_info("%s: Automatic recovery failed after initial reporting\n", pci_name(pdev));
else if (ers_res == PCI_ERS_RESULT_NEED_RESET)
@ -244,6 +245,7 @@ static pci_ers_result_t zpci_event_attempt_error_recovery(struct pci_dev *pdev)
ers_res = PCI_ERS_RESULT_RECOVERED;
if (ers_res != PCI_ERS_RESULT_RECOVERED) {
pci_uevent_ers(pdev, PCI_ERS_RESULT_DISCONNECT);
pr_err("%s: Automatic recovery failed; operator intervention is required\n",
pci_name(pdev));
status_str = "failed (driver can't recover)";
@ -253,6 +255,7 @@ static pci_ers_result_t zpci_event_attempt_error_recovery(struct pci_dev *pdev)
pr_info("%s: The device is ready to resume operations\n", pci_name(pdev));
if (driver->err_handler->resume)
driver->err_handler->resume(pdev);
pci_uevent_ers(pdev, PCI_ERS_RESULT_RECOVERED);
out_unlock:
pci_dev_unlock(pdev);
zpci_report_status(zdev, "recovery", status_str);


@ -60,30 +60,3 @@ void leon_pci_init(struct platform_device *ofdev, struct leon_pci_info *info)
pci_assign_unassigned_resources();
pci_bus_add_devices(root_bus);
}
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
struct resource *res;
u16 cmd, oldcmd;
int i;
pci_read_config_word(dev, PCI_COMMAND, &cmd);
oldcmd = cmd;
pci_dev_for_each_resource(dev, res, i) {
/* Only set up the requested stuff */
if (!(mask & (1<<i)))
continue;
if (res->flags & IORESOURCE_IO)
cmd |= PCI_COMMAND_IO;
if (res->flags & IORESOURCE_MEM)
cmd |= PCI_COMMAND_MEMORY;
}
if (cmd != oldcmd) {
pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd);
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
return 0;
}


@ -722,33 +722,6 @@ struct pci_bus *pci_scan_one_pbm(struct pci_pbm_info *pbm,
return bus;
}
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
struct resource *res;
u16 cmd, oldcmd;
int i;
pci_read_config_word(dev, PCI_COMMAND, &cmd);
oldcmd = cmd;
pci_dev_for_each_resource(dev, res, i) {
/* Only set up the requested stuff */
if (!(mask & (1<<i)))
continue;
if (res->flags & IORESOURCE_IO)
cmd |= PCI_COMMAND_IO;
if (res->flags & IORESOURCE_MEM)
cmd |= PCI_COMMAND_MEMORY;
}
if (cmd != oldcmd) {
pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd);
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
return 0;
}
/* Platform support for /proc/bus/pci/X/Y mmap()s. */
int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
{


@ -642,33 +642,6 @@ void pcibios_fixup_bus(struct pci_bus *bus)
}
}
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
struct resource *res;
u16 cmd, oldcmd;
int i;
pci_read_config_word(dev, PCI_COMMAND, &cmd);
oldcmd = cmd;
pci_dev_for_each_resource(dev, res, i) {
/* Only set up the requested stuff */
if (!(mask & (1<<i)))
continue;
if (res->flags & IORESOURCE_IO)
cmd |= PCI_COMMAND_IO;
if (res->flags & IORESOURCE_MEM)
cmd |= PCI_COMMAND_MEMORY;
}
if (cmd != oldcmd) {
pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd);
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
return 0;
}
/* Makes compiler happy */
static volatile int pcic_timer_dummy;


@ -294,6 +294,46 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PB1, pcie_r
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PC, pcie_rootport_aspm_quirk);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PC1, pcie_rootport_aspm_quirk);
/*
* PCIe devices underneath Xeon 6 PCIe Root Port bifurcated to x2 have lower
* performance with Extended Tags and MRRS > 128B. Work around the performance
* problems by disabling Extended Tags and limiting MRRS to 128B.
*
* https://cdrdv2.intel.com/v1/dl/getContent/837176
*/
static int limit_mrrs_to_128(struct pci_host_bridge *b, struct pci_dev *pdev)
{
int readrq = pcie_get_readrq(pdev);
if (readrq > 128)
pcie_set_readrq(pdev, 128);
return 0;
}
static void pci_xeon_x2_bifurc_quirk(struct pci_dev *pdev)
{
struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus);
u32 linkcap;
pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &linkcap);
if (FIELD_GET(PCI_EXP_LNKCAP_MLW, linkcap) != 0x2)
return;
bridge->no_ext_tags = 1;
bridge->enable_device = limit_mrrs_to_128;
pci_info(pdev, "Disabling Extended Tags and limiting MRRS to 128B (performance reasons due to x2 PCIe link)\n");
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db0, pci_xeon_x2_bifurc_quirk);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db1, pci_xeon_x2_bifurc_quirk);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db2, pci_xeon_x2_bifurc_quirk);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db3, pci_xeon_x2_bifurc_quirk);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db6, pci_xeon_x2_bifurc_quirk);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db7, pci_xeon_x2_bifurc_quirk);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db8, pci_xeon_x2_bifurc_quirk);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db9, pci_xeon_x2_bifurc_quirk);
/*
* Fixup to mark boot BIOS video selected by BIOS before it changes
*


@ -436,7 +436,11 @@ static int pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
{
struct pci_dev *pdev = test->pdev;
u32 val;
int irq;
irq = pci_irq_vector(pdev, msi_num - 1);
if (irq < 0)
return irq;
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE,
msix ? PCITEST_IRQ_TYPE_MSIX :
@ -450,11 +454,7 @@ static int pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
if (!val)
return -ETIMEDOUT;
if (irq != test->last_irq)
return -EIO;
return 0;
@ -937,7 +937,7 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
switch (cmd) {
case PCITEST_BAR:
bar = arg;
if (bar <= NO_BAR || bar > BAR_5)
goto ret;
if (is_am654_pci_dev(pdev) && bar == BAR_0)
goto ret;
@ -1020,8 +1020,6 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
if (!test)
return -ENOMEM;
test->test_reg_bar = 0;
test->alignment = 0;
test->pdev = pdev;
test->irq_type = PCITEST_IRQ_TYPE_UNDEFINED;


@ -4215,7 +4215,6 @@ static pci_ers_result_t qlcnic_83xx_io_slot_reset(struct pci_dev *pdev)
struct qlcnic_adapter *adapter = pci_get_drvdata(pdev);
int err = 0;
pdev->error_state = pci_channel_io_normal;
err = pci_enable_device(pdev);
if (err)
goto disconnect;


@ -3766,8 +3766,6 @@ static int qlcnic_attach_func(struct pci_dev *pdev)
struct qlcnic_adapter *adapter = pci_get_drvdata(pdev);
struct net_device *netdev = adapter->netdev;
pdev->error_state = pci_channel_io_normal;
err = pci_enable_device(pdev);
if (err)
return err;


@ -1258,9 +1258,6 @@ static void efx_io_resume(struct pci_dev *pdev)
/* For simplicity and reliability, we always require a slot reset and try to
* reset the hardware when a pci error affecting the device is detected.
* We leave both the link_reset and mmio_enabled callback unimplemented:
* with our request for slot reset the mmio_enabled callback will never be
* called, and the link_reset callback is not used by AER or EEH mechanisms.
*/
const struct pci_error_handlers efx_err_handlers = {
.error_detected = efx_io_error_detected,


@ -3127,9 +3127,6 @@ static void ef4_io_resume(struct pci_dev *pdev)
/* For simplicity and reliability, we always require a slot reset and try to
* reset the hardware when a pci error affecting the device is detected.
* We leave both the link_reset and mmio_enabled callback unimplemented:
* with our request for slot reset the mmio_enabled callback will never be
* called, and the link_reset callback is not used by AER or EEH mechanisms.
*/
static const struct pci_error_handlers ef4_err_handlers = {
.error_detected = ef4_io_error_detected,


@ -1285,9 +1285,6 @@ static void efx_io_resume(struct pci_dev *pdev)
/* For simplicity and reliability, we always require a slot reset and try to
* reset the hardware when a pci error affecting the device is detected.
* We leave both the link_reset and mmio_enabled callback unimplemented:
* with our request for slot reset the mmio_enabled callback will never be
* called, and the link_reset callback is not used by AER or EEH mechanisms.
*/
const struct pci_error_handlers efx_siena_err_handlers = {
.error_detected = efx_io_error_detected,


@ -204,6 +204,9 @@ static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res,
if (!r)
continue;
if (r->flags & (IORESOURCE_UNSET|IORESOURCE_DISABLED))
continue;
/* type_mask must match */
if ((res->flags ^ r->flags) & type_mask)
continue;
@ -361,11 +364,15 @@ void pci_bus_add_device(struct pci_dev *dev)
* before PCI client drivers.
*/
pdev = of_find_device_by_node(dn);
if (pdev) {
if (of_pci_supply_present(dn)) {
if (!device_link_add(&dev->dev, &pdev->dev,
DL_FLAG_AUTOREMOVE_CONSUMER)) {
pci_err(dev, "failed to add device link to power control device %s\n",
pdev->name);
}
}
put_device(&pdev->dev);
}
if (!dn || of_device_is_available(dn))


@ -42,6 +42,15 @@ config PCIE_CADENCE_PLAT_EP
endpoint mode. This PCIe controller may be embedded into many
different vendors SoCs.
config PCIE_SG2042_HOST
tristate "Sophgo SG2042 PCIe controller (host mode)"
depends on OF && (ARCH_SOPHGO || COMPILE_TEST)
select PCIE_CADENCE_HOST
help
Say Y here if you want to support the Sophgo SG2042 PCIe platform
controller in host mode. Sophgo SG2042 PCIe controller uses Cadence
PCIe core.
config PCI_J721E
tristate
select PCIE_CADENCE_HOST if PCI_J721E_HOST != n
@ -67,4 +76,5 @@ config PCI_J721E_EP
Say Y here if you want to support the TI J721E PCIe platform
controller in endpoint mode. TI J721E PCIe controller uses Cadence PCIe
core.
endmenu endmenu


@@ -4,3 +4,4 @@ obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
 obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
 obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
 obj-$(CONFIG_PCI_J721E) += pci-j721e.o
+obj-$(CONFIG_PCIE_SG2042_HOST) += pcie-sg2042.o


@@ -284,6 +284,25 @@ static int j721e_pcie_ctrl_init(struct j721e_pcie *pcie)
 	if (!ret)
 		offset = args.args[0];
 
+	/*
+	 * The PCIe Controller's registers have different "reset-values"
+	 * depending on the "strap" settings programmed into the PCIEn_CTRL
+	 * register within the CTRL_MMR memory-mapped register space.
+	 * The registers latch onto a "reset-value" based on the "strap"
+	 * settings sampled after the PCIe Controller is powered on.
+	 * To ensure that the "reset-values" are sampled accurately, power
+	 * off the PCIe Controller before programming the "strap" settings
+	 * and power it on after that. The runtime PM APIs namely
+	 * pm_runtime_put_sync() and pm_runtime_get_sync() will decrement and
+	 * increment the usage counter respectively, causing GENPD to power off
+	 * and power on the PCIe Controller.
+	 */
+	ret = pm_runtime_put_sync(dev);
+	if (ret < 0) {
+		dev_err(dev, "Failed to power off PCIe Controller\n");
+		return ret;
+	}
+
 	ret = j721e_pcie_set_mode(pcie, syscon, offset);
 	if (ret < 0) {
 		dev_err(dev, "Failed to set pci mode\n");
@@ -302,6 +321,12 @@ static int j721e_pcie_ctrl_init(struct j721e_pcie *pcie)
 		return ret;
 	}
 
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		dev_err(dev, "Failed to power on PCIe Controller\n");
+		return ret;
+	}
+
 	/* Enable ACSPCIE refclk output if the optional property exists */
 	syscon = syscon_regmap_lookup_by_phandle_optional(node,
 							  "ti,syscon-acspcie-proxy-ctrl");
@@ -440,6 +465,7 @@ static const struct of_device_id of_j721e_pcie_match[] = {
 	},
 	{},
 };
+MODULE_DEVICE_TABLE(of, of_j721e_pcie_match);
 
 static int j721e_pcie_probe(struct platform_device *pdev)
 {
@@ -549,7 +575,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
 	ret = j721e_pcie_ctrl_init(pcie);
 	if (ret < 0) {
-		dev_err_probe(dev, ret, "pm_runtime_get_sync failed\n");
+		dev_err_probe(dev, ret, "j721e_pcie_ctrl_init failed\n");
 		goto err_get_sync;
 	}


@@ -21,12 +21,13 @@
 static u8 cdns_pcie_get_fn_from_vfn(struct cdns_pcie *pcie, u8 fn, u8 vfn)
 {
-	u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET;
 	u32 first_vf_offset, stride;
+	u16 cap;
 
 	if (vfn == 0)
 		return fn;
 
+	cap = cdns_pcie_find_ext_capability(pcie, PCI_EXT_CAP_ID_SRIOV);
 	first_vf_offset = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_OFFSET);
 	stride = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_STRIDE);
 	fn = fn + first_vf_offset + ((vfn - 1) * stride);
@@ -38,10 +39,11 @@ static int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
 				     struct pci_epf_header *hdr)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
-	u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET;
 	struct cdns_pcie *pcie = &ep->pcie;
 	u32 reg;
+	u16 cap;
 
+	cap = cdns_pcie_find_ext_capability(pcie, PCI_EXT_CAP_ID_SRIOV);
 	if (vfn > 1) {
 		dev_err(&epc->dev, "Only Virtual Function #1 has deviceID\n");
 		return -EINVAL;
@@ -227,9 +229,10 @@ static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, u8 nr_irqs)
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
 	u8 mmc = order_base_2(nr_irqs);
-	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
 	u16 flags;
+	u8 cap;
 
+	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI);
 	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
 
 	/*
@@ -249,9 +252,10 @@ static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
-	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
 	u16 flags, mme;
+	u8 cap;
 
+	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI);
 	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
 
 	/* Validate that the MSI feature is actually enabled. */
@@ -272,9 +276,10 @@ static int cdns_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
-	u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
 	u32 val, reg;
+	u8 cap;
 
+	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX);
 	func_no = cdns_pcie_get_fn_from_vfn(pcie, func_no, vfunc_no);
 
 	reg = cap + PCI_MSIX_FLAGS;
@@ -292,9 +297,10 @@ static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u8 vfn,
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
-	u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
 	u32 val, reg;
+	u8 cap;
 
+	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX);
 	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
 
 	reg = cap + PCI_MSIX_FLAGS;
@@ -380,11 +386,11 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
 				     u8 interrupt_num)
 {
 	struct cdns_pcie *pcie = &ep->pcie;
-	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
 	u16 flags, mme, data, data_mask;
-	u8 msi_count;
 	u64 pci_addr, pci_addr_mask = 0xff;
+	u8 msi_count, cap;
 
+	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI);
 	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
 
 	/* Check whether the MSI feature has been enabled by the PCI host. */
@@ -432,14 +438,14 @@ static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn, u8 vfn,
 				    u32 *msi_addr_offset)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
-	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
 	struct cdns_pcie *pcie = &ep->pcie;
 	u64 pci_addr, pci_addr_mask = 0xff;
 	u16 flags, mme, data, data_mask;
-	u8 msi_count;
+	u8 msi_count, cap;
 	int ret;
 	int i;
 
+	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI);
 	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
 
 	/* Check whether the MSI feature has been enabled by the PCI host. */
@@ -482,16 +488,16 @@ static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn, u8 vfn,
 static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
 				      u16 interrupt_num)
 {
-	u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
 	u32 tbl_offset, msg_data, reg;
 	struct cdns_pcie *pcie = &ep->pcie;
 	struct pci_epf_msix_tbl *msix_tbl;
 	struct cdns_pcie_epf *epf;
 	u64 pci_addr_mask = 0xff;
 	u64 msg_addr;
+	u8 bir, cap;
 	u16 flags;
-	u8 bir;
 
+	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX);
 	epf = &ep->epf[fn];
 	if (vfn > 0)
 		epf = &epf->epf[vfn - 1];
@@ -565,7 +571,9 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
 	int max_epfs = sizeof(epc->function_num_map) * 8;
 	int ret, epf, last_fn;
 	u32 reg, value;
+	u8 cap;
 
+	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_EXP);
 	/*
 	 * BIT(0) is hardwired to 1, hence function 0 is always enabled
 	 * and can't be disabled anyway.
@@ -589,12 +597,10 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
 				continue;
 
 			value = cdns_pcie_ep_fn_readl(pcie, epf,
-						      CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
-						      PCI_EXP_DEVCAP);
+						      cap + PCI_EXP_DEVCAP);
 			value &= ~PCI_EXP_DEVCAP_FLR;
 			cdns_pcie_ep_fn_writel(pcie, epf,
-					       CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
-					       PCI_EXP_DEVCAP, value);
+					       cap + PCI_EXP_DEVCAP, value);
 		}
 	}
@@ -608,14 +614,12 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
 }
 
 static const struct pci_epc_features cdns_pcie_epc_vf_features = {
-	.linkup_notifier = false,
 	.msi_capable = true,
 	.msix_capable = true,
 	.align = 65536,
 };
 
 static const struct pci_epc_features cdns_pcie_epc_features = {
-	.linkup_notifier = false,
 	.msi_capable = true,
 	.msix_capable = true,
 	.align = 256,


@@ -531,7 +531,7 @@ static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(0), addr1);
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(0), desc1);
 
-	if (pcie->ops->cpu_addr_fixup)
+	if (pcie->ops && pcie->ops->cpu_addr_fixup)
 		cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr);
 
 	addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(12) |


@@ -8,6 +8,20 @@
 #include <linux/of.h>
 
 #include "pcie-cadence.h"
+#include "../../pci.h"
+
+u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap)
+{
+	return PCI_FIND_NEXT_CAP(cdns_pcie_read_cfg, PCI_CAPABILITY_LIST,
+				 cap, pcie);
+}
+EXPORT_SYMBOL_GPL(cdns_pcie_find_capability);
+
+u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap)
+{
+	return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, pcie);
+}
+EXPORT_SYMBOL_GPL(cdns_pcie_find_ext_capability);
 
 void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
 {
@@ -92,7 +106,7 @@ void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
 	cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1);
 
 	/* Set the CPU address */
-	if (pcie->ops->cpu_addr_fixup)
+	if (pcie->ops && pcie->ops->cpu_addr_fixup)
 		cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr);
 
 	addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) |
@@ -123,7 +137,7 @@ void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie,
 	}
 
 	/* Set the CPU address */
-	if (pcie->ops->cpu_addr_fixup)
+	if (pcie->ops && pcie->ops->cpu_addr_fixup)
 		cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr);
 
 	addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(17) |


@@ -125,11 +125,6 @@
  */
 #define CDNS_PCIE_EP_FUNC_BASE(fn)	(((fn) << 12) & GENMASK(19, 12))
 
-#define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET	0x90
-#define CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET	0xb0
-#define CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET	0xc0
-#define CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET	0x200
-
 /*
  * Endpoint PF Registers
  */
@@ -367,6 +362,37 @@ static inline u32 cdns_pcie_readl(struct cdns_pcie *pcie, u32 reg)
 	return readl(pcie->reg_base + reg);
 }
 
+static inline u16 cdns_pcie_readw(struct cdns_pcie *pcie, u32 reg)
+{
+	return readw(pcie->reg_base + reg);
+}
+
+static inline u8 cdns_pcie_readb(struct cdns_pcie *pcie, u32 reg)
+{
+	return readb(pcie->reg_base + reg);
+}
+
+static inline int cdns_pcie_read_cfg_byte(struct cdns_pcie *pcie, int where,
+					  u8 *val)
+{
+	*val = cdns_pcie_readb(pcie, where);
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static inline int cdns_pcie_read_cfg_word(struct cdns_pcie *pcie, int where,
+					  u16 *val)
+{
+	*val = cdns_pcie_readw(pcie, where);
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static inline int cdns_pcie_read_cfg_dword(struct cdns_pcie *pcie, int where,
+					   u32 *val)
+{
+	*val = cdns_pcie_readl(pcie, where);
+	return PCIBIOS_SUCCESSFUL;
+}
+
 static inline u32 cdns_pcie_read_sz(void __iomem *addr, int size)
 {
 	void __iomem *aligned_addr = PTR_ALIGN_DOWN(addr, 0x4);
@@ -468,7 +494,7 @@ static inline u32 cdns_pcie_ep_fn_readl(struct cdns_pcie *pcie, u8 fn, u32 reg)
 static inline int cdns_pcie_start_link(struct cdns_pcie *pcie)
 {
-	if (pcie->ops->start_link)
+	if (pcie->ops && pcie->ops->start_link)
 		return pcie->ops->start_link(pcie);
 
 	return 0;
@@ -476,13 +502,13 @@ static inline int cdns_pcie_start_link(struct cdns_pcie *pcie)
 
 static inline void cdns_pcie_stop_link(struct cdns_pcie *pcie)
 {
-	if (pcie->ops->stop_link)
+	if (pcie->ops && pcie->ops->stop_link)
 		pcie->ops->stop_link(pcie);
 }
 
 static inline bool cdns_pcie_link_up(struct cdns_pcie *pcie)
 {
-	if (pcie->ops->link_up)
+	if (pcie->ops && pcie->ops->link_up)
 		return pcie->ops->link_up(pcie);
 
 	return true;
@@ -536,6 +562,9 @@ static inline void cdns_pcie_ep_disable(struct cdns_pcie_ep *ep)
 }
 #endif
 
+u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap);
+u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap);
+
 void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
 void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,


@@ -0,0 +1,134 @@
// SPDX-License-Identifier: GPL-2.0
/*
* pcie-sg2042 - PCIe controller driver for Sophgo SG2042 SoC
*
* Copyright (C) 2025 Sophgo Technology Inc.
* Copyright (C) 2025 Chen Wang <unicorn_wang@outlook.com>
*/
#include <linux/mod_devicetable.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include "pcie-cadence.h"
/*
 * SG2042 only supports 4-byte aligned access, so for the rootbus (i.e. to
 * read/write the Root Port itself), read32/write32 is required. For the
 * non-rootbus (i.e. to read/write the PCIe peripheral registers), 1/2/4
 * byte aligned access is supported, so directly using read/write should
 * be fine.
 */
static struct pci_ops sg2042_pcie_root_ops = {
.map_bus = cdns_pci_map_bus,
.read = pci_generic_config_read32,
.write = pci_generic_config_write32,
};
static struct pci_ops sg2042_pcie_child_ops = {
.map_bus = cdns_pci_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
};
static int sg2042_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct pci_host_bridge *bridge;
struct cdns_pcie *pcie;
struct cdns_pcie_rc *rc;
int ret;
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
if (!bridge)
return dev_err_probe(dev, -ENOMEM, "Failed to alloc host bridge!\n");
bridge->ops = &sg2042_pcie_root_ops;
bridge->child_ops = &sg2042_pcie_child_ops;
rc = pci_host_bridge_priv(bridge);
pcie = &rc->pcie;
pcie->dev = dev;
platform_set_drvdata(pdev, pcie);
pm_runtime_set_active(dev);
pm_runtime_no_callbacks(dev);
devm_pm_runtime_enable(dev);
ret = cdns_pcie_init_phy(dev, pcie);
if (ret)
return dev_err_probe(dev, ret, "Failed to init phy!\n");
ret = cdns_pcie_host_setup(rc);
if (ret) {
dev_err_probe(dev, ret, "Failed to setup host!\n");
cdns_pcie_disable_phy(pcie);
return ret;
}
return 0;
}
static void sg2042_pcie_remove(struct platform_device *pdev)
{
struct cdns_pcie *pcie = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
struct cdns_pcie_rc *rc;
rc = container_of(pcie, struct cdns_pcie_rc, pcie);
cdns_pcie_host_disable(rc);
cdns_pcie_disable_phy(pcie);
pm_runtime_disable(dev);
}
static int sg2042_pcie_suspend_noirq(struct device *dev)
{
struct cdns_pcie *pcie = dev_get_drvdata(dev);
cdns_pcie_disable_phy(pcie);
return 0;
}
static int sg2042_pcie_resume_noirq(struct device *dev)
{
struct cdns_pcie *pcie = dev_get_drvdata(dev);
int ret;
ret = cdns_pcie_enable_phy(pcie);
if (ret) {
dev_err(dev, "failed to enable PHY\n");
return ret;
}
return 0;
}
static DEFINE_NOIRQ_DEV_PM_OPS(sg2042_pcie_pm_ops,
sg2042_pcie_suspend_noirq,
sg2042_pcie_resume_noirq);
static const struct of_device_id sg2042_pcie_of_match[] = {
{ .compatible = "sophgo,sg2042-pcie-host" },
{},
};
MODULE_DEVICE_TABLE(of, sg2042_pcie_of_match);
static struct platform_driver sg2042_pcie_driver = {
.driver = {
.name = "sg2042-pcie",
.of_match_table = sg2042_pcie_of_match,
.pm = pm_sleep_ptr(&sg2042_pcie_pm_ops),
},
.probe = sg2042_pcie_probe,
.remove = sg2042_pcie_remove,
};
module_platform_driver(sg2042_pcie_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("PCIe controller driver for SG2042 SoCs");
MODULE_AUTHOR("Chen Wang <unicorn_wang@outlook.com>");


@@ -20,6 +20,7 @@ config PCIE_DW_HOST
 	bool
 	select PCIE_DW
 	select IRQ_MSI_LIB
+	select PCI_HOST_COMMON
 
 config PCIE_DW_EP
 	bool
@@ -298,6 +299,7 @@ config PCIE_QCOM
 	select CRC8
 	select PCIE_QCOM_COMMON
 	select PCI_HOST_COMMON
+	select PCI_PWRCTRL_SLOT
 	help
 	  Say Y here to enable PCIe controller support on Qualcomm SoCs. The
 	  PCIe controller uses the DesignWare core plus Qualcomm-specific
@@ -422,6 +424,30 @@ config PCIE_SPEAR13XX
 	help
 	  Say Y here if you want PCIe support on SPEAr13XX SoCs.
 
+config PCIE_STM32_HOST
+	tristate "STMicroelectronics STM32MP25 PCIe Controller (host mode)"
+	depends on ARCH_STM32 || COMPILE_TEST
+	depends on PCI_MSI
+	select PCIE_DW_HOST
+	help
+	  Enables Root Complex (RC) support for the DesignWare core based PCIe
+	  controller found in STM32MP25 SoC.
+
+	  This driver can also be built as a module. If so, the module
+	  will be called pcie-stm32.
+
+config PCIE_STM32_EP
+	tristate "STMicroelectronics STM32MP25 PCIe Controller (endpoint mode)"
+	depends on ARCH_STM32 || COMPILE_TEST
+	depends on PCI_ENDPOINT
+	select PCIE_DW_EP
+	help
+	  Enables Endpoint (EP) support for the DesignWare core based PCIe
+	  controller found in STM32MP25 SoC.
+
+	  This driver can also be built as a module. If so, the module
+	  will be called pcie-stm32-ep.
+
 config PCI_DRA7XX
 	tristate


@@ -31,6 +31,8 @@ obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
 obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o
 obj-$(CONFIG_PCIE_VISCONTI_HOST) += pcie-visconti.o
 obj-$(CONFIG_PCIE_RCAR_GEN4) += pcie-rcar-gen4.o
+obj-$(CONFIG_PCIE_STM32_HOST) += pcie-stm32.o
+obj-$(CONFIG_PCIE_STM32_EP) += pcie-stm32-ep.o
 
 # The following drivers are for devices that use the generic ACPI
 # pci_root.c driver but don't support standard ECAM config access.


@@ -426,7 +426,6 @@ static int dra7xx_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 
 static const struct pci_epc_features dra7xx_pcie_epc_features = {
 	.linkup_notifier = true,
 	.msi_capable = true,
-	.msix_capable = false,
 };
 
 static const struct pci_epc_features*


@@ -53,7 +53,6 @@
 struct exynos_pcie {
 	struct dw_pcie pci;
-	void __iomem *elbi_base;
 	struct clk_bulk_data *clks;
 	struct phy *phy;
 	struct regulator_bulk_data supplies[2];
@@ -71,73 +70,78 @@ static u32 exynos_pcie_readl(void __iomem *base, u32 reg)
 static void exynos_pcie_sideband_dbi_w_mode(struct exynos_pcie *ep, bool on)
 {
+	struct dw_pcie *pci = &ep->pci;
 	u32 val;
 
-	val = exynos_pcie_readl(ep->elbi_base, PCIE_ELBI_SLV_AWMISC);
+	val = exynos_pcie_readl(pci->elbi_base, PCIE_ELBI_SLV_AWMISC);
 	if (on)
 		val |= PCIE_ELBI_SLV_DBI_ENABLE;
 	else
 		val &= ~PCIE_ELBI_SLV_DBI_ENABLE;
-	exynos_pcie_writel(ep->elbi_base, val, PCIE_ELBI_SLV_AWMISC);
+	exynos_pcie_writel(pci->elbi_base, val, PCIE_ELBI_SLV_AWMISC);
 }
 
 static void exynos_pcie_sideband_dbi_r_mode(struct exynos_pcie *ep, bool on)
 {
+	struct dw_pcie *pci = &ep->pci;
 	u32 val;
 
-	val = exynos_pcie_readl(ep->elbi_base, PCIE_ELBI_SLV_ARMISC);
+	val = exynos_pcie_readl(pci->elbi_base, PCIE_ELBI_SLV_ARMISC);
 	if (on)
 		val |= PCIE_ELBI_SLV_DBI_ENABLE;
 	else
 		val &= ~PCIE_ELBI_SLV_DBI_ENABLE;
-	exynos_pcie_writel(ep->elbi_base, val, PCIE_ELBI_SLV_ARMISC);
+	exynos_pcie_writel(pci->elbi_base, val, PCIE_ELBI_SLV_ARMISC);
 }
 
 static void exynos_pcie_assert_core_reset(struct exynos_pcie *ep)
 {
+	struct dw_pcie *pci = &ep->pci;
 	u32 val;
 
-	val = exynos_pcie_readl(ep->elbi_base, PCIE_CORE_RESET);
+	val = exynos_pcie_readl(pci->elbi_base, PCIE_CORE_RESET);
 	val &= ~PCIE_CORE_RESET_ENABLE;
-	exynos_pcie_writel(ep->elbi_base, val, PCIE_CORE_RESET);
-	exynos_pcie_writel(ep->elbi_base, 0, PCIE_STICKY_RESET);
-	exynos_pcie_writel(ep->elbi_base, 0, PCIE_NONSTICKY_RESET);
+	exynos_pcie_writel(pci->elbi_base, val, PCIE_CORE_RESET);
+	exynos_pcie_writel(pci->elbi_base, 0, PCIE_STICKY_RESET);
+	exynos_pcie_writel(pci->elbi_base, 0, PCIE_NONSTICKY_RESET);
 }
 
 static void exynos_pcie_deassert_core_reset(struct exynos_pcie *ep)
 {
+	struct dw_pcie *pci = &ep->pci;
 	u32 val;
 
-	val = exynos_pcie_readl(ep->elbi_base, PCIE_CORE_RESET);
+	val = exynos_pcie_readl(pci->elbi_base, PCIE_CORE_RESET);
 	val |= PCIE_CORE_RESET_ENABLE;
-	exynos_pcie_writel(ep->elbi_base, val, PCIE_CORE_RESET);
-	exynos_pcie_writel(ep->elbi_base, 1, PCIE_STICKY_RESET);
-	exynos_pcie_writel(ep->elbi_base, 1, PCIE_NONSTICKY_RESET);
-	exynos_pcie_writel(ep->elbi_base, 1, PCIE_APP_INIT_RESET);
-	exynos_pcie_writel(ep->elbi_base, 0, PCIE_APP_INIT_RESET);
+	exynos_pcie_writel(pci->elbi_base, val, PCIE_CORE_RESET);
+	exynos_pcie_writel(pci->elbi_base, 1, PCIE_STICKY_RESET);
+	exynos_pcie_writel(pci->elbi_base, 1, PCIE_NONSTICKY_RESET);
+	exynos_pcie_writel(pci->elbi_base, 1, PCIE_APP_INIT_RESET);
+	exynos_pcie_writel(pci->elbi_base, 0, PCIE_APP_INIT_RESET);
 }
 
 static int exynos_pcie_start_link(struct dw_pcie *pci)
 {
-	struct exynos_pcie *ep = to_exynos_pcie(pci);
 	u32 val;
 
-	val = exynos_pcie_readl(ep->elbi_base, PCIE_SW_WAKE);
+	val = exynos_pcie_readl(pci->elbi_base, PCIE_SW_WAKE);
 	val &= ~PCIE_BUS_EN;
-	exynos_pcie_writel(ep->elbi_base, val, PCIE_SW_WAKE);
+	exynos_pcie_writel(pci->elbi_base, val, PCIE_SW_WAKE);
 
 	/* assert LTSSM enable */
-	exynos_pcie_writel(ep->elbi_base, PCIE_ELBI_LTSSM_ENABLE,
+	exynos_pcie_writel(pci->elbi_base, PCIE_ELBI_LTSSM_ENABLE,
 			   PCIE_APP_LTSSM_ENABLE);
 	return 0;
 }
 
 static void exynos_pcie_clear_irq_pulse(struct exynos_pcie *ep)
 {
-	u32 val = exynos_pcie_readl(ep->elbi_base, PCIE_IRQ_PULSE);
+	struct dw_pcie *pci = &ep->pci;
+	u32 val = exynos_pcie_readl(pci->elbi_base, PCIE_IRQ_PULSE);
 
-	exynos_pcie_writel(ep->elbi_base, val, PCIE_IRQ_PULSE);
+	exynos_pcie_writel(pci->elbi_base, val, PCIE_IRQ_PULSE);
 }
 
 static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg)
@@ -150,12 +154,14 @@ static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg)
 
 static void exynos_pcie_enable_irq_pulse(struct exynos_pcie *ep)
 {
+	struct dw_pcie *pci = &ep->pci;
 	u32 val = IRQ_INTA_ASSERT | IRQ_INTB_ASSERT |
 		  IRQ_INTC_ASSERT | IRQ_INTD_ASSERT;
 
-	exynos_pcie_writel(ep->elbi_base, val, PCIE_IRQ_EN_PULSE);
-	exynos_pcie_writel(ep->elbi_base, 0, PCIE_IRQ_EN_LEVEL);
-	exynos_pcie_writel(ep->elbi_base, 0, PCIE_IRQ_EN_SPECIAL);
+	exynos_pcie_writel(pci->elbi_base, val, PCIE_IRQ_EN_PULSE);
+	exynos_pcie_writel(pci->elbi_base, 0, PCIE_IRQ_EN_LEVEL);
+	exynos_pcie_writel(pci->elbi_base, 0, PCIE_IRQ_EN_SPECIAL);
 }
 
 static u32 exynos_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base,
@@ -211,8 +217,7 @@ static struct pci_ops exynos_pci_ops = {
 static bool exynos_pcie_link_up(struct dw_pcie *pci)
 {
-	struct exynos_pcie *ep = to_exynos_pcie(pci);
-	u32 val = exynos_pcie_readl(ep->elbi_base, PCIE_ELBI_RDLH_LINKUP);
+	u32 val = exynos_pcie_readl(pci->elbi_base, PCIE_ELBI_RDLH_LINKUP);
 
 	return val & PCIE_ELBI_XMLH_LINKUP;
 }
@@ -295,11 +300,6 @@ static int exynos_pcie_probe(struct platform_device *pdev)
 	if (IS_ERR(ep->phy))
 		return PTR_ERR(ep->phy);
 
-	/* External Local Bus interface (ELBI) registers */
-	ep->elbi_base = devm_platform_ioremap_resource_byname(pdev, "elbi");
-	if (IS_ERR(ep->elbi_base))
-		return PTR_ERR(ep->elbi_base);
-
 	ret = devm_clk_bulk_get_all_enabled(dev, &ep->clks);
 	if (ret < 0)
 		return ret;


@@ -1387,9 +1387,7 @@ static int imx_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 }
 
 static const struct pci_epc_features imx8m_pcie_epc_features = {
-	.linkup_notifier = false,
 	.msi_capable = true,
-	.msix_capable = false,
 	.bar[BAR_1] = { .type = BAR_RESERVED, },
 	.bar[BAR_3] = { .type = BAR_RESERVED, },
 	.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_256, },
@@ -1398,9 +1396,7 @@ static const struct pci_epc_features imx8m_pcie_epc_features = {
 };
 
 static const struct pci_epc_features imx8q_pcie_epc_features = {
-	.linkup_notifier = false,
 	.msi_capable = true,
-	.msix_capable = false,
 	.bar[BAR_1] = { .type = BAR_RESERVED, },
 	.bar[BAR_3] = { .type = BAR_RESERVED, },
 	.bar[BAR_5] = { .type = BAR_RESERVED, },
@@ -1745,6 +1741,10 @@ static int imx_pcie_probe(struct platform_device *pdev)
 		pci->max_link_speed = 1;
 	of_property_read_u32(node, "fsl,max-link-speed", &pci->max_link_speed);
 
+	ret = devm_regulator_get_enable_optional(&pdev->dev, "vpcie3v3aux");
+	if (ret < 0 && ret != -ENODEV)
+		return dev_err_probe(dev, ret, "failed to enable Vaux supply\n");
+
 	imx_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie");
 	if (IS_ERR(imx_pcie->vpcie)) {
 		if (PTR_ERR(imx_pcie->vpcie) != -ENODEV)


@@ -960,7 +960,6 @@ static int ks_pcie_am654_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 }
 
 static const struct pci_epc_features ks_pcie_am654_epc_features = {
-	.linkup_notifier = false,
 	.msi_capable = true,
 	.msix_capable = true,
 	.bar[BAR_0] = { .type = BAR_RESERVED, },
@@ -1201,8 +1200,8 @@ static int ks_pcie_probe(struct platform_device *pdev)
 	if (irq < 0)
 		return irq;
 
-	ret = request_irq(irq, ks_pcie_err_irq_handler, IRQF_SHARED,
-			  "ks-pcie-error-irq", ks_pcie);
+	ret = devm_request_irq(dev, irq, ks_pcie_err_irq_handler, IRQF_SHARED,
+			       "ks-pcie-error-irq", ks_pcie);
 	if (ret < 0) {
 		dev_err(dev, "failed to request error IRQ %d\n",
 			irq);
@@ -1213,11 +1212,11 @@ static int ks_pcie_probe(struct platform_device *pdev)
 	if (ret)
 		num_lanes = 1;
 
-	phy = devm_kzalloc(dev, sizeof(*phy) * num_lanes, GFP_KERNEL);
+	phy = devm_kcalloc(dev, num_lanes, sizeof(*phy), GFP_KERNEL);
 	if (!phy)
 		return -ENOMEM;
 
-	link = devm_kzalloc(dev, sizeof(*link) * num_lanes, GFP_KERNEL);
+	link = devm_kcalloc(dev, num_lanes, sizeof(*link), GFP_KERNEL);
 	if (!link)
 		return -ENOMEM;


@@ -352,6 +352,7 @@ static int al_pcie_probe(struct platform_device *pdev)
 		return -ENOENT;
 	}
 	al_pcie->ecam_size = resource_size(ecam_res);
+	pci->pp.native_ecam = true;
 
 	controller_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
 						      "controller");


@@ -18,6 +18,7 @@
 #include <linux/resource.h>
 #include <linux/types.h>
 
+#include "../../pci.h"
 #include "pcie-designware.h"
 
 #define AMD_MDB_TLP_IR_STATUS_MISC		0x4C0
@@ -56,6 +57,7 @@
  * @slcr: MDB System Level Control and Status Register (SLCR) base
  * @intx_domain: INTx IRQ domain pointer
  * @mdb_domain: MDB IRQ domain pointer
+ * @perst_gpio: GPIO descriptor for PERST# signal handling
  * @intx_irq: INTx IRQ interrupt number
  */
 struct amd_mdb_pcie {
@@ -63,6 +65,7 @@ struct amd_mdb_pcie {
 	void __iomem *slcr;
 	struct irq_domain *intx_domain;
 	struct irq_domain *mdb_domain;
+	struct gpio_desc *perst_gpio;
 	int intx_irq;
 };
@@ -284,7 +287,7 @@ static int amd_mdb_pcie_init_irq_domains(struct amd_mdb_pcie *pcie,
 	struct device_node *pcie_intc_node;
 	int err;
 
-	pcie_intc_node = of_get_next_child(node, NULL);
+	pcie_intc_node = of_get_child_by_name(node, "interrupt-controller");
 	if (!pcie_intc_node) {
 		dev_err(dev, "No PCIe Intc node found\n");
 		return -ENODEV;
@@ -402,6 +405,28 @@ static int amd_mdb_setup_irq(struct amd_mdb_pcie *pcie,
 	return 0;
 }
 
+static int amd_mdb_parse_pcie_port(struct amd_mdb_pcie *pcie)
+{
+	struct device *dev = pcie->pci.dev;
+	struct device_node *pcie_port_node __maybe_unused;
+
+	/*
+	 * This platform currently supports only one Root Port, so the loop
+	 * will execute only once.
+	 * TODO: Enhance the driver to handle multiple Root Ports in the future.
+	 */
+	for_each_child_of_node_with_prefix(dev->of_node, pcie_port_node, "pcie") {
+		pcie->perst_gpio = devm_fwnode_gpiod_get(dev, of_fwnode_handle(pcie_port_node),
+							 "reset", GPIOD_OUT_HIGH, NULL);
+		if (IS_ERR(pcie->perst_gpio))
+			return dev_err_probe(dev, PTR_ERR(pcie->perst_gpio),
+					     "Failed to request reset GPIO\n");
+		return 0;
+	}
+
+	return -ENODEV;
+}
+
 static int amd_mdb_add_pcie_port(struct amd_mdb_pcie *pcie,
 				 struct platform_device *pdev)
 {
@@ -426,6 +451,12 @@ static int amd_mdb_add_pcie_port(struct amd_mdb_pcie *pcie,
 	pp->ops = &amd_mdb_pcie_host_ops;
 
+	if (pcie->perst_gpio) {
+		mdelay(PCIE_T_PVPERL_MS);
+		gpiod_set_value_cansleep(pcie->perst_gpio, 0);
+		mdelay(PCIE_RESET_CONFIG_WAIT_MS);
+	}
+
 	err = dw_pcie_host_init(pp);
 	if (err) {
 		dev_err(dev, "Failed to initialize host, err=%d\n", err);
@@ -444,6 +475,7 @@ static int amd_mdb_pcie_probe(struct platform_device *pdev)
 	struct device *dev = &pdev->dev;
 	struct amd_mdb_pcie *pcie;
 	struct dw_pcie *pci;
+	int ret;
 
 	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
 	if (!pcie)
@@ -454,6 +486,24 @@ static int amd_mdb_pcie_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, pcie);
 
+	ret = amd_mdb_parse_pcie_port(pcie);
+	/*
+	 * If amd_mdb_parse_pcie_port returns -ENODEV, it indicates that the
+	 * PCIe Bridge node was not found in the device tree. This is not
+	 * considered a fatal error and will trigger a fallback where the
+	 * reset GPIO is acquired directly from the PCIe Host Bridge node.
+	 */
+	if (ret) {
+		if (ret != -ENODEV)
+			return ret;
+
+		pcie->perst_gpio = devm_gpiod_get_optional(dev, "reset",
+							   GPIOD_OUT_HIGH);
+		if (IS_ERR(pcie->perst_gpio))
+			return dev_err_probe(dev, PTR_ERR(pcie->perst_gpio),
+					     "Failed to request reset GPIO\n");
+	}
+
 	return amd_mdb_add_pcie_port(pcie, pdev);
 }


@@ -370,9 +370,7 @@ static int artpec6_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 }
 
 static const struct pci_epc_features artpec6_pcie_epc_features = {
-	.linkup_notifier = false,
 	.msi_capable = true,
-	.msix_capable = false,
 };
 
 static const struct pci_epc_features *


@@ -69,37 +69,10 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 }
 EXPORT_SYMBOL_GPL(dw_pcie_ep_reset_bar);
 
-static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie_ep *ep, u8 func_no,
-				     u8 cap_ptr, u8 cap)
-{
-	u8 cap_id, next_cap_ptr;
-	u16 reg;
-
-	if (!cap_ptr)
-		return 0;
-
-	reg = dw_pcie_ep_readw_dbi(ep, func_no, cap_ptr);
-	cap_id = (reg & 0x00ff);
-
-	if (cap_id > PCI_CAP_ID_MAX)
-		return 0;
-
-	if (cap_id == cap)
-		return cap_ptr;
-
-	next_cap_ptr = (reg & 0xff00) >> 8;
-	return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap);
-}
-
 static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap)
 {
-	u8 next_cap_ptr;
-	u16 reg;
-
-	reg = dw_pcie_ep_readw_dbi(ep, func_no, PCI_CAPABILITY_LIST);
-	next_cap_ptr = (reg & 0x00ff);
-
-	return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap);
+	return PCI_FIND_NEXT_CAP(dw_pcie_ep_read_cfg, PCI_CAPABILITY_LIST,
+				 cap, ep, func_no);
 }
 
 /**
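The open-coded recursive walk removed above and the PCI_FIND_NEXT_CAP() macro that replaces it implement the same classic traversal: the byte at offset 0x34 (PCI_CAPABILITY_LIST) points at the first capability, and each capability's first word holds its ID and a pointer to the next. A self-contained sketch over a flat buffer modeling the first 256 bytes of config space (`cfg_readw`, `find_capability`, and `demo_cfg` are illustrative names, not kernel APIs):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in config space accessor: 16-bit little-endian read. */
static uint16_t cfg_readw(const uint8_t *cfg, unsigned int where)
{
	return (uint16_t)(cfg[where] | (cfg[where + 1] << 8));
}

/* Walk the capability list until the requested ID or a NULL pointer. */
static uint8_t find_capability(const uint8_t *cfg, uint8_t cap)
{
	uint8_t pos = cfg[0x34];	/* PCI_CAPABILITY_LIST */
	int ttl = 48;			/* guard against malformed loops */

	while (pos && ttl--) {
		uint16_t hdr = cfg_readw(cfg, pos);

		if ((hdr & 0xff) == cap)
			return pos;
		pos = hdr >> 8;
	}
	return 0;
}

/* Demo layout: MSI (ID 0x05) at 0x50 chained to PCIe (ID 0x10) at 0x60. */
static void demo_cfg(uint8_t *cfg)
{
	cfg[0x34] = 0x50;
	cfg[0x50] = 0x05;
	cfg[0x51] = 0x60;
	cfg[0x60] = 0x10;
	cfg[0x61] = 0x00;
}
```

The kernel macro generalizes exactly this loop by taking the controller's own config accessor as an argument, which is what lets dwc, dwc endpoint, and cadence share one implementation.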


@@ -8,6 +8,7 @@
  * Author: Jingoo Han <jg1.han@samsung.com>
  */
 
+#include <linux/align.h>
 #include <linux/iopoll.h>
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqchip/irq-msi-lib.h>
@@ -32,6 +33,8 @@ static struct pci_ops dw_child_pcie_ops;
 			 MSI_FLAG_PCI_MSIX	| \
 			 MSI_GENERIC_FLAGS_MASK)
 
+#define IS_256MB_ALIGNED(x)	IS_ALIGNED(x, SZ_256M)
+
 static const struct msi_parent_ops dw_pcie_msi_parent_ops = {
 	.required_flags = DW_PCIE_MSI_FLAGS_REQUIRED,
 	.supported_flags = DW_PCIE_MSI_FLAGS_SUPPORTED,
@@ -413,6 +416,95 @@ static void dw_pcie_host_request_msg_tlp_res(struct dw_pcie_rp *pp)
 	}
 }
 
+static int dw_pcie_config_ecam_iatu(struct dw_pcie_rp *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct dw_pcie_ob_atu_cfg atu = {0};
+	resource_size_t bus_range_max;
+	struct resource_entry *bus;
+	int ret;
+
+	bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS);
+
+	/*
+	 * Root bus under the host bridge doesn't require any iATU configuration
+	 * as DBI region will be used to access root bus config space.
+	 * Immediate bus under Root Bus, needs type 0 iATU configuration and
+	 * remaining buses need type 1 iATU configuration.
+	 */
+	atu.index = 0;
+	atu.type = PCIE_ATU_TYPE_CFG0;
+	atu.parent_bus_addr = pp->cfg0_base + SZ_1M;
+	/* 1MiB is to cover 1 (bus) * 32 (devices) * 8 (functions) */
+	atu.size = SZ_1M;
+	atu.ctrl2 = PCIE_ATU_CFG_SHIFT_MODE_ENABLE;
+	ret = dw_pcie_prog_outbound_atu(pci, &atu);
+	if (ret)
+		return ret;
+
+	bus_range_max = resource_size(bus->res);
+	if (bus_range_max < 2)
+		return 0;
+
+	/* Configure remaining buses in type 1 iATU configuration */
+	atu.index = 1;
+	atu.type = PCIE_ATU_TYPE_CFG1;
+	atu.parent_bus_addr = pp->cfg0_base + SZ_2M;
+	atu.size = (SZ_1M * bus_range_max) - SZ_2M;
+	atu.ctrl2 = PCIE_ATU_CFG_SHIFT_MODE_ENABLE;
+
+	return dw_pcie_prog_outbound_atu(pci, &atu);
+}
+
+static int dw_pcie_create_ecam_window(struct dw_pcie_rp *pp, struct resource *res)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct device *dev = pci->dev;
+	struct resource_entry *bus;
+
+	bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS);
+	if (!bus)
+		return -ENODEV;
+
+	pp->cfg = pci_ecam_create(dev, res, bus->res, &pci_generic_ecam_ops);
+	if (IS_ERR(pp->cfg))
+		return PTR_ERR(pp->cfg);
+
+	pci->dbi_base = pp->cfg->win;
+	pci->dbi_phys_addr = res->start;
+
+	return 0;
+}
+
+static bool dw_pcie_ecam_enabled(struct dw_pcie_rp *pp, struct resource *config_res)
+{
+	struct resource *bus_range;
+	u64 nr_buses;
+
+	/* Vendor glue drivers may implement their own ECAM mechanism */
+	if (pp->native_ecam)
+		return false;
+
+	/*
+	 * PCIe spec r6.0, sec 7.2.2 mandates the base address used for ECAM to
+	 * be aligned on a 2^(n+20) byte boundary, where n is the number of bits
+	 * used for representing 'bus' in BDF. Since the DWC cores always use 8
+	 * bits for representing 'bus', the base address has to be aligned to
	 * 2^28 byte boundary, which is 256 MiB.
+	 */
+	if (!IS_256MB_ALIGNED(config_res->start))
+		return false;
+
+	bus_range = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS)->res;
+	if (!bus_range)
+		return false;
+
+	nr_buses = resource_size(config_res) >> PCIE_ECAM_BUS_SHIFT;
+
+	return nr_buses >= resource_size(bus_range);
+}
+
 static int dw_pcie_host_get_resources(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
@@ -422,10 +514,6 @@ static int dw_pcie_host_get_resources(struct dw_pcie_rp *pp)
 	struct resource *res;
 	int ret;
 
-	ret = dw_pcie_get_resources(pci);
-	if (ret)
-		return ret;
-
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
 	if (!res) {
 		dev_err(dev, "Missing \"config\" reg space\n");
@@ -435,9 +523,32 @@ static int dw_pcie_host_get_resources(struct dw_pcie_rp *pp)
 	pp->cfg0_size = resource_size(res);
 	pp->cfg0_base = res->start;
 
-	pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res);
-	if (IS_ERR(pp->va_cfg0_base))
-		return PTR_ERR(pp->va_cfg0_base);
+	pp->ecam_enabled = dw_pcie_ecam_enabled(pp, res);
+	if (pp->ecam_enabled) {
+		ret = dw_pcie_create_ecam_window(pp, res);
+		if (ret)
+			return ret;
+
+		pp->bridge->ops = (struct pci_ops *)&pci_generic_ecam_ops.pci_ops;
+		pp->bridge->sysdata = pp->cfg;
+		pp->cfg->priv = pp;
+	} else {
+		pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res);
+		if (IS_ERR(pp->va_cfg0_base))
+			return PTR_ERR(pp->va_cfg0_base);
+
+		/* Set default bus ops */
+		pp->bridge->ops = &dw_pcie_ops;
+		pp->bridge->child_ops = &dw_child_pcie_ops;
+		pp->bridge->sysdata = pp;
+	}
+
+	ret = dw_pcie_get_resources(pci);
+	if (ret) {
+		if (pp->cfg)
+			pci_ecam_free(pp->cfg);
+		return ret;
+	}
 
 	/* Get the I/O range from DT */
 	win = resource_list_first_type(&pp->bridge->windows, IORESOURCE_IO);
@@ -476,14 +587,10 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
 	if (ret)
 		return ret;
 
-	/* Set default bus ops */
-	bridge->ops = &dw_pcie_ops;
-	bridge->child_ops = &dw_child_pcie_ops;
-
 	if (pp->ops->init) {
 		ret = pp->ops->init(pp);
 		if (ret)
-			return ret;
+			goto err_free_ecam;
 	}
 
 	if (pci_msi_enabled()) {
@@ -525,6 +632,14 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
 	if (ret)
 		goto err_free_msi;
 
+	if (pp->ecam_enabled) {
+		ret = dw_pcie_config_ecam_iatu(pp);
+		if (ret) {
+			dev_err(dev, "Failed to configure iATU in ECAM mode\n");
+			goto err_free_msi;
+		}
+	}
+
 	/*
	 * Allocate the resource for MSG TLP before programming the iATU
	 * outbound window in dw_pcie_setup_rc(). Since the allocation depends
@@ -560,8 +675,6 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
 	/* Ignore errors, the link may come up later */
 	dw_pcie_wait_for_link(pci);
 
-	bridge->sysdata = pp;
-
 	ret = pci_host_probe(bridge);
 	if (ret)
 		goto err_stop_link;
@@ -587,6 +700,10 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
 	if (pp->ops->deinit)
 		pp->ops->deinit(pp);
 
+err_free_ecam:
+	if (pp->cfg)
+		pci_ecam_free(pp->cfg);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(dw_pcie_host_init);
@@ -609,6 +726,9 @@ void dw_pcie_host_deinit(struct dw_pcie_rp *pp)
 
 	if (pp->ops->deinit)
 		pp->ops->deinit(pp);
+
+	if (pp->cfg)
+		pci_ecam_free(pp->cfg);
 }
 EXPORT_SYMBOL_GPL(dw_pcie_host_deinit);
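The new dw_pcie_ecam_enabled() above boils down to two arithmetic checks: the "config" window base must sit on a 256 MiB boundary (2^(8+20), since DWC decodes 8 bus bits), and the window must be big enough to give every bus in the range its 1 MiB ECAM slice. A plain-integer sketch of those checks (`ecam_usable()` is an illustrative name, not a kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ECAM_BUS_SHIFT	20			/* 1 MiB of config space per bus */
#define ECAM_ALIGN	(256ULL << 20)		/* 2^(8+20): 8 bus bits on DWC */

/* Mirrors the alignment and bus-count tests on plain integers. */
static bool ecam_usable(uint64_t base, uint64_t size, uint64_t nr_buses)
{
	if (base & (ECAM_ALIGN - 1))
		return false;			/* base not 256 MiB aligned */

	return (size >> ECAM_BUS_SHIFT) >= nr_buses;
}
```

For example, a 256 MiB window at a 1 GiB base covers a full 256-bus range, while the same window shifted by 1 MiB, or a 16 MiB window asked to cover 32 buses, fails the test and the driver falls back to iATU-based config accesses.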


@@ -61,7 +61,6 @@ static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 }
 
 static const struct pci_epc_features dw_plat_pcie_epc_features = {
-	.linkup_notifier = false,
 	.msi_capable = true,
 	.msix_capable = true,
 };


@@ -167,6 +167,14 @@ int dw_pcie_get_resources(struct dw_pcie *pci)
 		}
 	}
 
+	/* ELBI is an optional resource */
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
+	if (res) {
+		pci->elbi_base = devm_ioremap_resource(pci->dev, res);
+		if (IS_ERR(pci->elbi_base))
+			return PTR_ERR(pci->elbi_base);
+	}
+
 	/* LLDD is supposed to manually switch the clocks and resets state */
 	if (dw_pcie_cap_is(pci, REQ_RES)) {
 		ret = dw_pcie_get_clocks(pci);
@@ -213,83 +221,16 @@ void dw_pcie_version_detect(struct dw_pcie *pci)
 	pci->type = ver;
 }
 
-/*
- * These interfaces resemble the pci_find_*capability() interfaces, but these
- * are for configuring host controllers, which are bridges *to* PCI devices but
- * are not PCI devices themselves.
- */
-static u8 __dw_pcie_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
-				  u8 cap)
-{
-	u8 cap_id, next_cap_ptr;
-	u16 reg;
-
-	if (!cap_ptr)
-		return 0;
-
-	reg = dw_pcie_readw_dbi(pci, cap_ptr);
-	cap_id = (reg & 0x00ff);
-
-	if (cap_id > PCI_CAP_ID_MAX)
-		return 0;
-
-	if (cap_id == cap)
-		return cap_ptr;
-
-	next_cap_ptr = (reg & 0xff00) >> 8;
-	return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
-}
-
 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
 {
-	u8 next_cap_ptr;
-	u16 reg;
-
-	reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
-	next_cap_ptr = (reg & 0x00ff);
-
-	return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
+	return PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap,
+				 pci);
 }
 EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
 
-static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start,
-					    u8 cap)
-{
-	u32 header;
-	int ttl;
-	int pos = PCI_CFG_SPACE_SIZE;
-
-	/* minimum 8 bytes per capability */
-	ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
-
-	if (start)
-		pos = start;
-
-	header = dw_pcie_readl_dbi(pci, pos);
-	/*
-	 * If we have no capabilities, this is indicated by cap ID,
-	 * cap version and next pointer all being 0.
-	 */
-	if (header == 0)
-		return 0;
-
-	while (ttl-- > 0) {
-		if (PCI_EXT_CAP_ID(header) == cap && pos != start)
-			return pos;
-
-		pos = PCI_EXT_CAP_NEXT(header);
-		if (pos < PCI_CFG_SPACE_SIZE)
-			break;
-
-		header = dw_pcie_readl_dbi(pci, pos);
-	}
-
-	return 0;
-}
-
 u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
 {
-	return dw_pcie_find_next_ext_capability(pci, 0, cap);
+	return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, pci);
 }
 EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
 
@@ -302,8 +243,8 @@ static u16 __dw_pcie_find_vsec_capability(struct dw_pcie *pci, u16 vendor_id,
 	if (vendor_id != dw_pcie_readw_dbi(pci, PCI_VENDOR_ID))
 		return 0;
 
-	while ((vsec = dw_pcie_find_next_ext_capability(pci, vsec,
-							PCI_EXT_CAP_ID_VNDR))) {
+	while ((vsec = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, vsec,
+					     PCI_EXT_CAP_ID_VNDR, pci))) {
 		header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER);
 		if (PCI_VNDR_HEADER_ID(header) == vsec_id)
 			return vsec;
@@ -567,7 +508,7 @@ int dw_pcie_prog_outbound_atu(struct dw_pcie *pci,
 		val = dw_pcie_enable_ecrc(val);
 	dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL1, val);
 
-	val = PCIE_ATU_ENABLE;
+	val = PCIE_ATU_ENABLE | atu->ctrl2;
 	if (atu->type == PCIE_ATU_TYPE_MSG) {
 		/* The data-less messages only for now */
 		val |= PCIE_ATU_INHIBIT_PAYLOAD | atu->code;
@@ -841,6 +782,9 @@ static void dw_pcie_link_set_max_link_width(struct dw_pcie *pci, u32 num_lanes)
 	case 8:
 		plc |= PORT_LINK_MODE_8_LANES;
 		break;
+	case 16:
+		plc |= PORT_LINK_MODE_16_LANES;
+		break;
 	default:
 		dev_err(pci->dev, "num-lanes %u: invalid value\n", num_lanes);
 		return;
@@ -1045,9 +989,7 @@ static int dw_pcie_edma_irq_verify(struct dw_pcie *pci)
 	char name[15];
 	int ret;
 
-	if (pci->edma.nr_irqs == 1)
-		return 0;
-	else if (pci->edma.nr_irqs > 1)
+	if (pci->edma.nr_irqs > 1)
 		return pci->edma.nr_irqs != ch_cnt ? -EINVAL : 0;
 
 	ret = platform_get_irq_byname_optional(pdev, "dma");


@@ -20,6 +20,7 @@
 #include <linux/irq.h>
 #include <linux/msi.h>
 #include <linux/pci.h>
+#include <linux/pci-ecam.h>
 #include <linux/reset.h>
 
 #include <linux/pci-epc.h>
@@ -90,6 +91,7 @@
 #define PORT_LINK_MODE_2_LANES		PORT_LINK_MODE(0x3)
 #define PORT_LINK_MODE_4_LANES		PORT_LINK_MODE(0x7)
 #define PORT_LINK_MODE_8_LANES		PORT_LINK_MODE(0xf)
+#define PORT_LINK_MODE_16_LANES		PORT_LINK_MODE(0x1f)
 
 #define PCIE_PORT_LANE_SKEW		0x714
 #define PORT_LANE_SKEW_INSERT_MASK	GENMASK(23, 0)
@@ -123,7 +125,6 @@
 #define GEN3_RELATED_OFF_GEN3_EQ_DISABLE	BIT(16)
 #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT	24
 #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK	GENMASK(25, 24)
-#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_16_0GT	0x1
 
 #define GEN3_EQ_CONTROL_OFF			0x8A8
 #define GEN3_EQ_CONTROL_OFF_FB_MODE		GENMASK(3, 0)
@@ -134,8 +135,8 @@
 #define GEN3_EQ_FB_MODE_DIR_CHANGE_OFF		0x8AC
 #define GEN3_EQ_FMDC_T_MIN_PHASE23		GENMASK(4, 0)
 #define GEN3_EQ_FMDC_N_EVALS			GENMASK(9, 5)
-#define GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA	GENMASK(13, 10)
-#define GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA	GENMASK(17, 14)
+#define GEN3_EQ_FMDC_MAX_PRE_CURSOR_DELTA	GENMASK(13, 10)
+#define GEN3_EQ_FMDC_MAX_POST_CURSOR_DELTA	GENMASK(17, 14)
 
 #define PCIE_PORT_MULTI_LANE_CTRL	0x8C0
 #define PORT_MLTI_UPCFG_SUPPORT		BIT(7)
@@ -169,6 +170,7 @@
 #define PCIE_ATU_REGION_CTRL2		0x004
 #define PCIE_ATU_ENABLE			BIT(31)
 #define PCIE_ATU_BAR_MODE_ENABLE	BIT(30)
+#define PCIE_ATU_CFG_SHIFT_MODE_ENABLE	BIT(28)
 #define PCIE_ATU_INHIBIT_PAYLOAD	BIT(22)
 #define PCIE_ATU_FUNC_NUM_MATCH_EN	BIT(19)
 #define PCIE_ATU_LOWER_BASE		0x008
@@ -387,6 +389,7 @@ struct dw_pcie_ob_atu_cfg {
 	u8 func_no;
 	u8 code;
 	u8 routing;
+	u32 ctrl2;
 	u64 parent_bus_addr;
 	u64 pci_addr;
 	u64 size;
@@ -425,6 +428,9 @@ struct dw_pcie_rp {
 	struct resource *msg_res;
 	bool use_linkup_irq;
 	struct pci_eq_presets presets;
+	struct pci_config_window *cfg;
+	bool ecam_enabled;
+	bool native_ecam;
 };
 
 struct dw_pcie_ep_ops {
@@ -492,6 +498,7 @@ struct dw_pcie {
 	resource_size_t dbi_phys_addr;
 	void __iomem *dbi_base2;
 	void __iomem *atu_base;
+	void __iomem *elbi_base;
 	resource_size_t atu_phys_addr;
 	size_t atu_size;
 	resource_size_t parent_bus_offset;
@@ -609,6 +616,27 @@ static inline void dw_pcie_writel_dbi2(struct dw_pcie *pci, u32 reg, u32 val)
 	dw_pcie_write_dbi2(pci, reg, 0x4, val);
 }
 
+static inline int dw_pcie_read_cfg_byte(struct dw_pcie *pci, int where,
+					u8 *val)
+{
+	*val = dw_pcie_readb_dbi(pci, where);
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static inline int dw_pcie_read_cfg_word(struct dw_pcie *pci, int where,
+					u16 *val)
+{
+	*val = dw_pcie_readw_dbi(pci, where);
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static inline int dw_pcie_read_cfg_dword(struct dw_pcie *pci, int where,
+					 u32 *val)
+{
+	*val = dw_pcie_readl_dbi(pci, where);
+	return PCIBIOS_SUCCESSFUL;
+}
+
 static inline unsigned int dw_pcie_ep_get_dbi_offset(struct dw_pcie_ep *ep,
 						     u8 func_no)
 {
@@ -674,6 +702,27 @@ static inline u8 dw_pcie_ep_readb_dbi(struct dw_pcie_ep *ep, u8 func_no,
 	return dw_pcie_ep_read_dbi(ep, func_no, reg, 0x1);
 }
 
+static inline int dw_pcie_ep_read_cfg_byte(struct dw_pcie_ep *ep, u8 func_no,
+					   int where, u8 *val)
+{
+	*val = dw_pcie_ep_readb_dbi(ep, func_no, where);
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static inline int dw_pcie_ep_read_cfg_word(struct dw_pcie_ep *ep, u8 func_no,
+					   int where, u16 *val)
+{
+	*val = dw_pcie_ep_readw_dbi(ep, func_no, where);
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static inline int dw_pcie_ep_read_cfg_dword(struct dw_pcie_ep *ep, u8 func_no,
+					    int where, u32 *val)
+{
+	*val = dw_pcie_ep_readl_dbi(ep, func_no, where);
+	return PCIBIOS_SUCCESSFUL;
+}
+
 static inline unsigned int dw_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep,
 						      u8 func_no)
 {


@@ -331,7 +331,6 @@ static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = {
 	.linkup_notifier = true,
 	.msi_capable = true,
 	.msix_capable = true,
-	.intx_capable = false,
 	.align = SZ_64K,
 	.bar[BAR_0] = { .type = BAR_RESIZABLE, },
 	.bar[BAR_1] = { .type = BAR_RESIZABLE, },
@@ -352,7 +351,6 @@ static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = {
 	.linkup_notifier = true,
 	.msi_capable = true,
 	.msix_capable = true,
-	.intx_capable = false,
 	.align = SZ_64K,
 	.bar[BAR_0] = { .type = BAR_RESIZABLE, },
 	.bar[BAR_1] = { .type = BAR_RESIZABLE, },


@@ -309,7 +309,6 @@ static int keembay_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 }
 
 static const struct pci_epc_features keembay_pcie_epc_features = {
-	.linkup_notifier = false,
 	.msi_capable = true,
 	.msix_capable = true,
 	.bar[BAR_0] = { .only_64bit = true, },


@@ -8,9 +8,11 @@
 #include "pcie-designware.h"
 #include "pcie-qcom-common.h"
 
-void qcom_pcie_common_set_16gt_equalization(struct dw_pcie *pci)
+void qcom_pcie_common_set_equalization(struct dw_pcie *pci)
 {
+	struct device *dev = pci->dev;
 	u32 reg;
+	u16 speed;
 
 	/*
 	 * GEN3_RELATED_OFF register is repurposed to apply equalization
@@ -19,32 +21,40 @@ void qcom_pcie_common_set_16gt_equalization(struct dw_pcie *pci)
 	 * determines the data rate for which these equalization settings are
 	 * applied.
 	 */
-	reg = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
-	reg &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
-	reg &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
-	reg |= FIELD_PREP(GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK,
-			  GEN3_RELATED_OFF_RATE_SHADOW_SEL_16_0GT);
-	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, reg);
-
-	reg = dw_pcie_readl_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF);
-	reg &= ~(GEN3_EQ_FMDC_T_MIN_PHASE23 |
-		 GEN3_EQ_FMDC_N_EVALS |
-		 GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA |
-		 GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA);
-	reg |= FIELD_PREP(GEN3_EQ_FMDC_T_MIN_PHASE23, 0x1) |
-	       FIELD_PREP(GEN3_EQ_FMDC_N_EVALS, 0xd) |
-	       FIELD_PREP(GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA, 0x5) |
-	       FIELD_PREP(GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA, 0x5);
-	dw_pcie_writel_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF, reg);
-
-	reg = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
-	reg &= ~(GEN3_EQ_CONTROL_OFF_FB_MODE |
-		 GEN3_EQ_CONTROL_OFF_PHASE23_EXIT_MODE |
-		 GEN3_EQ_CONTROL_OFF_FOM_INC_INITIAL_EVAL |
-		 GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC);
-	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, reg);
+	for (speed = PCIE_SPEED_8_0GT; speed <= pcie_link_speed[pci->max_link_speed]; speed++) {
+		if (speed > PCIE_SPEED_32_0GT) {
+			dev_warn(dev, "Skipped equalization settings for unsupported data rate\n");
+			break;
+		}
+
+		reg = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
+		reg &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
+		reg &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
+		reg |= FIELD_PREP(GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK,
+				  speed - PCIE_SPEED_8_0GT);
+		dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, reg);
+
+		reg = dw_pcie_readl_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF);
+		reg &= ~(GEN3_EQ_FMDC_T_MIN_PHASE23 |
+			 GEN3_EQ_FMDC_N_EVALS |
+			 GEN3_EQ_FMDC_MAX_PRE_CURSOR_DELTA |
+			 GEN3_EQ_FMDC_MAX_POST_CURSOR_DELTA);
+		reg |= FIELD_PREP(GEN3_EQ_FMDC_T_MIN_PHASE23, 0x1) |
+		       FIELD_PREP(GEN3_EQ_FMDC_N_EVALS, 0xd) |
+		       FIELD_PREP(GEN3_EQ_FMDC_MAX_PRE_CURSOR_DELTA, 0x5) |
+		       FIELD_PREP(GEN3_EQ_FMDC_MAX_POST_CURSOR_DELTA, 0x5);
+		dw_pcie_writel_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF, reg);
+
+		reg = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
+		reg &= ~(GEN3_EQ_CONTROL_OFF_FB_MODE |
+			 GEN3_EQ_CONTROL_OFF_PHASE23_EXIT_MODE |
+			 GEN3_EQ_CONTROL_OFF_FOM_INC_INITIAL_EVAL |
+			 GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC);
+		dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, reg);
+	}
 }
-EXPORT_SYMBOL_GPL(qcom_pcie_common_set_16gt_equalization);
+EXPORT_SYMBOL_GPL(qcom_pcie_common_set_equalization);
 
 void qcom_pcie_common_set_16gt_lane_margining(struct dw_pcie *pci)
 {
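The rewritten loop above programs equalization once per supported data rate, selecting the target rate through the RATE_SHADOW_SEL field with the expression `speed - PCIE_SPEED_8_0GT`. That works because, as I understand the kernel's `enum pci_bus_speed` in include/linux/pci.h, the 8/16/32 GT/s values are consecutive. A small sketch of just that mapping (the enum values below are copied from my reading of the kernel header and should be treated as an assumption, not a definition):

```c
#include <assert.h>

/*
 * Assumed mirror of the relevant enum pci_bus_speed values: Gen3/4/5
 * data rates are consecutive, so subtracting PCIE_SPEED_8_0GT yields
 * the 0/1/2 encoding written into RATE_SHADOW_SEL.
 */
enum pcie_speed {
	PCIE_SPEED_8_0GT  = 0x16,
	PCIE_SPEED_16_0GT = 0x17,
	PCIE_SPEED_32_0GT = 0x18,
};

static int rate_shadow_sel(enum pcie_speed speed)
{
	return speed - PCIE_SPEED_8_0GT;
}
```

This is also why the removed GEN3_RELATED_OFF_RATE_SHADOW_SEL_16_0GT constant (hard-coded 0x1) is no longer needed: the loop derives each rate's field value arithmetically.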


@@ -8,7 +8,7 @@
 
 struct dw_pcie;
 
-void qcom_pcie_common_set_16gt_equalization(struct dw_pcie *pci);
+void qcom_pcie_common_set_equalization(struct dw_pcie *pci);
 void qcom_pcie_common_set_16gt_lane_margining(struct dw_pcie *pci);
 
 #endif


@@ -179,7 +179,6 @@ struct qcom_pcie_ep_cfg {
  * struct qcom_pcie_ep - Qualcomm PCIe Endpoint Controller
  * @pci: Designware PCIe controller struct
  * @parf: Qualcomm PCIe specific PARF register base
- * @elbi: Designware PCIe specific ELBI register base
  * @mmio: MMIO register base
  * @perst_map: PERST regmap
  * @mmio_res: MMIO region resource
@@ -202,7 +201,6 @@ struct qcom_pcie_ep {
 	struct dw_pcie pci;
 	void __iomem *parf;
-	void __iomem *elbi;
 	void __iomem *mmio;
 	struct regmap *perst_map;
 	struct resource *mmio_res;
@@ -267,10 +265,9 @@ static void qcom_pcie_ep_configure_tcsr(struct qcom_pcie_ep *pcie_ep)
 static bool qcom_pcie_dw_link_up(struct dw_pcie *pci)
 {
-	struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
 	u32 reg;
 
-	reg = readl_relaxed(pcie_ep->elbi + ELBI_SYS_STTS);
+	reg = readl_relaxed(pci->elbi_base + ELBI_SYS_STTS);
 
 	return reg & XMLH_LINK_UP;
 }
@@ -294,16 +291,15 @@ static void qcom_pcie_dw_stop_link(struct dw_pcie *pci)
 static void qcom_pcie_dw_write_dbi2(struct dw_pcie *pci, void __iomem *base,
 				    u32 reg, size_t size, u32 val)
 {
-	struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
 	int ret;
 
-	writel(1, pcie_ep->elbi + ELBI_CS2_ENABLE);
+	writel(1, pci->elbi_base + ELBI_CS2_ENABLE);
 
 	ret = dw_pcie_write(pci->dbi_base2 + reg, size, val);
 	if (ret)
 		dev_err(pci->dev, "Failed to write DBI2 register (0x%x): %d\n", reg, ret);
 
-	writel(0, pcie_ep->elbi + ELBI_CS2_ENABLE);
+	writel(0, pci->elbi_base + ELBI_CS2_ENABLE);
 }
 
 static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep)
@@ -511,10 +507,10 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
 		goto err_disable_resources;
 	}
 
-	if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) {
-		qcom_pcie_common_set_16gt_equalization(pci);
+	qcom_pcie_common_set_equalization(pci);
+
+	if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT)
 		qcom_pcie_common_set_16gt_lane_margining(pci);
-	}
 
 	/*
	 * The physical address of the MMIO region which is exposed as the BAR
@@ -583,11 +579,6 @@ static int qcom_pcie_ep_get_io_resources(struct platform_device *pdev,
 		return PTR_ERR(pci->dbi_base);
 	pci->dbi_base2 = pci->dbi_base;
 
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
-	pcie_ep->elbi = devm_pci_remap_cfg_resource(dev, res);
-	if (IS_ERR(pcie_ep->elbi))
-		return PTR_ERR(pcie_ep->elbi);
-
 	pcie_ep->mmio_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
 							 "mmio");
 	if (!pcie_ep->mmio_res) {
@@ -831,7 +822,6 @@ static void qcom_pcie_ep_init_debugfs(struct qcom_pcie_ep *pcie_ep)
 static const struct pci_epc_features qcom_pcie_epc_features = {
 	.linkup_notifier = true,
 	.msi_capable = true,
-	.msix_capable = false,
 	.align = SZ_4K,
 	.bar[BAR_0] = { .only_64bit = true, },
 	.bar[BAR_1] = { .type = BAR_RESERVED, },
@@ -874,7 +864,6 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
 	pcie_ep->pci.dev = dev;
 	pcie_ep->pci.ops = &pci_ops;
 	pcie_ep->pci.ep.ops = &pci_ep_ops;
-	pcie_ep->pci.edma.nr_irqs = 1;
 
 	pcie_ep->cfg = of_device_get_match_data(dev);
 	if (pcie_ep->cfg && pcie_ep->cfg->hdma_support) {


@@ -55,6 +55,7 @@
 #define PARF_AXI_MSTR_WR_ADDR_HALT_V2	0x1a8
 #define PARF_Q2A_FLUSH			0x1ac
 #define PARF_LTSSM			0x1b0
+#define PARF_SLV_DBI_ELBI		0x1b4
 #define PARF_INT_ALL_STATUS		0x224
 #define PARF_INT_ALL_CLEAR		0x228
 #define PARF_INT_ALL_MASK		0x22c
@@ -64,6 +65,16 @@
 #define PARF_DBI_BASE_ADDR_V2_HI	0x354
 #define PARF_SLV_ADDR_SPACE_SIZE_V2	0x358
 #define PARF_SLV_ADDR_SPACE_SIZE_V2_HI	0x35c
+#define PARF_BLOCK_SLV_AXI_WR_BASE	0x360
+#define PARF_BLOCK_SLV_AXI_WR_BASE_HI	0x364
+#define PARF_BLOCK_SLV_AXI_WR_LIMIT	0x368
+#define PARF_BLOCK_SLV_AXI_WR_LIMIT_HI	0x36c
+#define PARF_BLOCK_SLV_AXI_RD_BASE	0x370
+#define PARF_BLOCK_SLV_AXI_RD_BASE_HI	0x374
+#define PARF_BLOCK_SLV_AXI_RD_LIMIT	0x378
+#define PARF_BLOCK_SLV_AXI_RD_LIMIT_HI	0x37c
+#define PARF_ECAM_BASE			0x380
+#define PARF_ECAM_BASE_HI		0x384
 #define PARF_NO_SNOOP_OVERRIDE		0x3d4
 #define PARF_ATU_BASE_ADDR		0x634
 #define PARF_ATU_BASE_ADDR_HI		0x638
@@ -87,6 +98,7 @@
 /* PARF_SYS_CTRL register fields */
 #define MAC_PHY_POWERDOWN_IN_P2_D_MUX_EN	BIT(29)
+#define PCIE_ECAM_BLOCKER_EN			BIT(26)
 #define MST_WAKEUP_EN				BIT(13)
 #define SLV_WAKEUP_EN				BIT(12)
 #define MSTR_ACLK_CGC_DIS			BIT(10)
@@ -134,6 +146,9 @@
 /* PARF_LTSSM register fields */
 #define LTSSM_EN				BIT(8)
 
+/* PARF_SLV_DBI_ELBI */
+#define SLV_DBI_ELBI_ADDR_BASE			GENMASK(11, 0)
+
 /* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */
 #define PARF_INT_ALL_LINK_UP			BIT(13)
 #define PARF_INT_MSI_DEV_0_7			GENMASK(30, 23)
@@ -247,7 +262,6 @@ struct qcom_pcie_ops {
 	int (*get_resources)(struct qcom_pcie *pcie);
 	int (*init)(struct qcom_pcie *pcie);
 	int (*post_init)(struct qcom_pcie *pcie);
-	void (*host_post_init)(struct qcom_pcie *pcie);
 	void (*deinit)(struct qcom_pcie *pcie);
 	void (*ltssm_enable)(struct qcom_pcie *pcie);
 	int (*config_sid)(struct qcom_pcie *pcie);
@@ -276,11 +290,8 @@ struct qcom_pcie_port {
 struct qcom_pcie {
 	struct dw_pcie *pci;
 	void __iomem *parf;			/* DT parf */
-	void __iomem *elbi;			/* DT elbi */
 	void __iomem *mhi;
 	union qcom_pcie_resources res;
-	struct phy *phy;
-	struct gpio_desc *reset;
 	struct icc_path *icc_mem;
 	struct icc_path *icc_cpu;
 	const struct qcom_pcie_cfg *cfg;
@@ -297,11 +308,8 @@ static void qcom_perst_assert(struct qcom_pcie *pcie, bool assert)
 	struct qcom_pcie_port *port;
 	int val = assert ? 1 : 0;
 
-	if (list_empty(&pcie->ports))
-		gpiod_set_value_cansleep(pcie->reset, val);
-	else
-		list_for_each_entry(port, &pcie->ports, list)
-			gpiod_set_value_cansleep(port->reset, val);
+	list_for_each_entry(port, &pcie->ports, list)
+		gpiod_set_value_cansleep(port->reset, val);
 
 	usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
 }
@@ -318,14 +326,55 @@ static void qcom_ep_reset_deassert(struct qcom_pcie *pcie)
 	qcom_perst_assert(pcie, false);
 }
 
+static void qcom_pci_config_ecam(struct dw_pcie_rp *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct qcom_pcie *pcie = to_qcom_pcie(pci);
+	u64 addr, addr_end;
+	u32 val;
+
+	writel_relaxed(lower_32_bits(pci->dbi_phys_addr), pcie->parf + PARF_ECAM_BASE);
+	writel_relaxed(upper_32_bits(pci->dbi_phys_addr), pcie->parf + PARF_ECAM_BASE_HI);
+
+	/*
+	 * The only device on the root bus is a single Root Port. If we try to
+	 * access any devices other than Device/Function 00.0 on Bus 0, the TLP
+	 * will go outside of the controller to the PCI bus. But with CFG Shift
+	 * Feature (ECAM) enabled in iATU, there is no guarantee that the
+	 * response is going to be all F's. Hence, to make sure that the
+	 * requester gets all F's response for accesses other than the Root
+	 * Port, configure iATU to block the transactions starting from
+	 * function 1 of the root bus to the end of the root bus (i.e., from
+	 * dbi_base + 4KB to dbi_base + 1MB).
+	 */
+	addr = pci->dbi_phys_addr + SZ_4K;
+	writel_relaxed(lower_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_WR_BASE);
+	writel_relaxed(upper_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_WR_BASE_HI);
+
+	writel_relaxed(lower_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_RD_BASE);
+	writel_relaxed(upper_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_RD_BASE_HI);
+
+	addr_end = pci->dbi_phys_addr + SZ_1M - 1;
+
+	writel_relaxed(lower_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_WR_LIMIT);
+	writel_relaxed(upper_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_WR_LIMIT_HI);
+
+	writel_relaxed(lower_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_RD_LIMIT);
+	writel_relaxed(upper_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_RD_LIMIT_HI);
+
+	val = readl_relaxed(pcie->parf + PARF_SYS_CTRL);
+	val |= PCIE_ECAM_BLOCKER_EN;
+	writel_relaxed(val, pcie->parf + PARF_SYS_CTRL);
+}
+
 static int qcom_pcie_start_link(struct dw_pcie *pci)
 {
 	struct qcom_pcie *pcie = to_qcom_pcie(pci);
 
-	if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) {
-		qcom_pcie_common_set_16gt_equalization(pci);
+	qcom_pcie_common_set_equalization(pci);
+
+	if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT)
 		qcom_pcie_common_set_16gt_lane_margining(pci);
-	}
 
 	/* Enable Link Training state machine */
 	if (pcie->cfg->ops->ltssm_enable)
@@ -414,12 +463,17 @@ static void qcom_pcie_configure_dbi_atu_base(struct qcom_pcie *pcie)
 
 static void qcom_pcie_2_1_0_ltssm_enable(struct qcom_pcie *pcie)
 {
+	struct dw_pcie *pci = pcie->pci;
 	u32 val;
 
+	if (!pci->elbi_base) {
+		dev_err(pci->dev, "ELBI is not present\n");
+		return;
+	}
+
 	/* enable link training */
-	val = readl(pcie->elbi + ELBI_SYS_CTRL);
+	val = readl(pci->elbi_base + ELBI_SYS_CTRL);
 	val |= ELBI_SYS_CTRL_LT_ENABLE;
-	writel(val, pcie->elbi + ELBI_SYS_CTRL);
+	writel(val, pci->elbi_base + ELBI_SYS_CTRL);
 }
 
 static int qcom_pcie_get_resources_2_1_0(struct qcom_pcie *pcie)
@@ -1040,25 +1094,6 @@ static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
 	return 0;
 }
 
-static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata)
-{
-	/*
-	 * Downstream devices need to be in D0 state before enabling PCI PM
-	 * substates.
-	 */
-	pci_set_power_state_locked(pdev, PCI_D0);
-
-	pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
-
-	return 0;
-}
-
-static void qcom_pcie_host_post_init_2_7_0(struct qcom_pcie *pcie)
-{
-	struct dw_pcie_rp *pp = &pcie->pci->pp;
-
-	pci_walk_bus(pp->bridge->bus, qcom_pcie_enable_aspm, NULL);
-}
-
 static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie)
 {
 	struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
@@ -1253,63 +1288,39 @@ static bool qcom_pcie_link_up(struct dw_pcie *pci)
 	return val & PCI_EXP_LNKSTA_DLLLA;
 }
 
-static void qcom_pcie_phy_exit(struct qcom_pcie *pcie)
-{
-	struct qcom_pcie_port *port;
-
-	if (list_empty(&pcie->ports))
-		phy_exit(pcie->phy);
-	else
-		list_for_each_entry(port, &pcie->ports, list)
-			phy_exit(port->phy);
-}
-
 static void qcom_pcie_phy_power_off(struct qcom_pcie *pcie)
 {
 	struct qcom_pcie_port *port;
 
-	if (list_empty(&pcie->ports)) {
-		phy_power_off(pcie->phy);
-	} else {
-		list_for_each_entry(port, &pcie->ports, list)
-			phy_power_off(port->phy);
-	}
+	list_for_each_entry(port, &pcie->ports, list)
+		phy_power_off(port->phy);
 }
 
 static int qcom_pcie_phy_power_on(struct qcom_pcie *pcie)
 {
 	struct qcom_pcie_port *port;
-	int ret = 0;
+	int ret;
 
-	if (list_empty(&pcie->ports)) {
-		ret = phy_set_mode_ext(pcie->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC);
+	list_for_each_entry(port, &pcie->ports, list) {
+		ret = phy_set_mode_ext(port->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC);
 		if (ret)
 			return ret;
 
-		ret = phy_power_on(pcie->phy);
-		if (ret)
+		ret = phy_power_on(port->phy);
+		if (ret) {
+			qcom_pcie_phy_power_off(pcie);
 			return ret;
-	} else {
-		list_for_each_entry(port, &pcie->ports, list) {
-			ret = phy_set_mode_ext(port->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC);
-			if (ret)
-				return ret;
-
-			ret = phy_power_on(port->phy);
-			if (ret) {
-				qcom_pcie_phy_power_off(pcie);
-				return ret;
-			}
 		}
 	}
 
-	return ret;
+	return 0;
 }
 
 static int qcom_pcie_host_init(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct qcom_pcie *pcie = to_qcom_pcie(pci);
+	u16 offset;
 	int ret;
 
 	qcom_ep_reset_assert(pcie);
@@ -1318,6 +1329,17 @@ static int qcom_pcie_host_init(struct dw_pcie_rp *pp)
 	if (ret)
 		return ret;
 
+	if (pp->ecam_enabled) {
+		/*
+		 * Override ELBI when ECAM is enabled, as when ECAM is enabled,
+		 * ELBI moves under the 'config' space.
+		 */
+		offset = FIELD_GET(SLV_DBI_ELBI_ADDR_BASE, readl(pcie->parf + PARF_SLV_DBI_ELBI));
+		pci->elbi_base = pci->dbi_base + offset;
+
+		qcom_pci_config_ecam(pp);
+	}
+
 	ret = qcom_pcie_phy_power_on(pcie);
 	if (ret)
 		goto err_deinit;
@@ -1358,19 +1380,9 @@ static void qcom_pcie_host_deinit(struct dw_pcie_rp *pp)
 	pcie->cfg->ops->deinit(pcie);
 }
 
-static void qcom_pcie_host_post_init(struct dw_pcie_rp *pp)
-{
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct qcom_pcie *pcie = to_qcom_pcie(pci);
-
-	if (pcie->cfg->ops->host_post_init)
-		pcie->cfg->ops->host_post_init(pcie);
-}
-
 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = {
 	.init = qcom_pcie_host_init,
 	.deinit = qcom_pcie_host_deinit,
-	.post_init = qcom_pcie_host_post_init,
 };
 
 /* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */
@@ -1432,7 +1444,6 @@ static const struct qcom_pcie_ops ops_1_9_0 = {
 	.get_resources = qcom_pcie_get_resources_2_7_0,
 	.init = qcom_pcie_init_2_7_0,
 	.post_init = qcom_pcie_post_init_2_7_0,
-	.host_post_init = qcom_pcie_host_post_init_2_7_0,
 	.deinit = qcom_pcie_deinit_2_7_0,
 	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
 	.config_sid = qcom_pcie_config_sid_1_9_0,
@@ -1443,7 +1454,6 @@ static const struct qcom_pcie_ops ops_1_21_0 = {
 	.get_resources = qcom_pcie_get_resources_2_7_0,
 	.init = qcom_pcie_init_2_7_0,
 	.post_init = qcom_pcie_post_init_2_7_0,
-	.host_post_init = qcom_pcie_host_post_init_2_7_0,
 	.deinit = qcom_pcie_deinit_2_7_0,
 	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
 };
@@ -1740,6 +1750,8 @@ static int qcom_pcie_parse_ports(struct qcom_pcie *pcie)
 	int ret = -ENOENT;
 
 	for_each_available_child_of_node_scoped(dev->of_node, of_port) {
+		if (!of_node_is_type(of_port, "pci"))
+			continue;
 		ret = qcom_pcie_parse_port(pcie, of_port);
 		if (ret)
 			goto err_port_del;
@@ -1748,8 +1760,10 @@ static int qcom_pcie_parse_ports(struct qcom_pcie *pcie)
 	return ret;
 
 err_port_del:
-	list_for_each_entry_safe(port, tmp, &pcie->ports, list)
+	list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
+		phy_exit(port->phy);
 		list_del(&port->list);
+	}
 
 	return ret;
 }
@@ -1757,20 +1771,32 @@ static int qcom_pcie_parse_ports(struct qcom_pcie *pcie)
 static int qcom_pcie_parse_legacy_binding(struct qcom_pcie *pcie)
 {
 	struct device *dev = pcie->pci->dev;
+	struct qcom_pcie_port *port;
+	struct gpio_desc *reset;
+	struct phy *phy;
 	int ret;
 
-	pcie->phy = devm_phy_optional_get(dev, "pciephy");
-	if (IS_ERR(pcie->phy))
-		return PTR_ERR(pcie->phy);
+	phy = devm_phy_optional_get(dev, "pciephy");
+	if (IS_ERR(phy))
+		return PTR_ERR(phy);
 
-	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
-	if (IS_ERR(pcie->reset))
-		return PTR_ERR(pcie->reset);
+	reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
+	if (IS_ERR(reset))
+		return PTR_ERR(reset);
 
-	ret = phy_init(pcie->phy);
+	ret = phy_init(phy);
 	if (ret)
 		return ret;
 
+	port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
+	if (!port)
+		return -ENOMEM;
+
+	port->reset = reset;
+	port->phy = phy;
+	INIT_LIST_HEAD(&port->list);
+	list_add_tail(&port->list, &pcie->ports);
+
 	return 0;
 }
@@ -1861,12 +1887,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 		goto err_pm_runtime_put;
 	}
 
-	pcie->elbi = devm_platform_ioremap_resource_byname(pdev, "elbi");
-	if (IS_ERR(pcie->elbi)) {
-		ret = PTR_ERR(pcie->elbi);
-		goto err_pm_runtime_put;
-	}
-
 	/* MHI region is optional */
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mhi");
 	if (res) {
@@ -1984,9 +2004,10 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 err_host_deinit:
 	dw_pcie_host_deinit(pp);
 err_phy_exit:
-	qcom_pcie_phy_exit(pcie);
-	list_for_each_entry_safe(port, tmp, &pcie->ports, list)
+	list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
+		phy_exit(port->phy);
 		list_del(&port->list);
+	}
 err_pm_runtime_put:
 	pm_runtime_put(dev);
 	pm_runtime_disable(dev);


@@ -182,8 +182,17 @@ static int rcar_gen4_pcie_common_init(struct rcar_gen4_pcie *rcar)
 		return ret;
 	}
 
-	if (!reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc))
+	if (!reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc)) {
 		reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc);
+		/*
+		 * R-Car V4H Reference Manual R19UH0186EJ0130 Rev.1.30 Apr.
+		 * 21, 2025 page 585 Figure 9.3.2 Software Reset flow (B)
+		 * indicates that for peripherals in HSC domain, after
+		 * reset has been asserted by writing a matching reset bit
+		 * into register SRCR, it is mandatory to wait 1ms.
+		 */
+		fsleep(1000);
+	}
 
 	val = readl(rcar->base + PCIEMSR0);
 	if (rcar->drvdata->mode == DW_PCIE_RC_TYPE) {
@@ -204,6 +213,19 @@ static int rcar_gen4_pcie_common_init(struct rcar_gen4_pcie *rcar)
 	if (ret)
 		goto err_unprepare;
 
+	/*
+	 * Assure the reset is latched and the core is ready for DBI access.
+	 * On R-Car V4H, the PCIe reset is asynchronous and does not take
+	 * effect immediately, but needs a short time to complete. In case
+	 * DBI access happens in that short time, that access generates an
+	 * SError. To make sure that condition can never happen, read back the
+	 * state of the reset, which should turn the asynchronous reset into
+	 * synchronous one, and wait a little over 1ms to add additional
+	 * safety margin.
+	 */
+	reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc);
+	fsleep(1000);
+
 	if (rcar->drvdata->additional_common_init)
 		rcar->drvdata->additional_common_init(rcar);
@@ -398,9 +420,7 @@ static int rcar_gen4_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 }
 
 static const struct pci_epc_features rcar_gen4_pcie_epc_features = {
-	.linkup_notifier = false,
 	.msi_capable = true,
-	.msix_capable = false,
 	.bar[BAR_1] = { .type = BAR_RESERVED, },
 	.bar[BAR_3] = { .type = BAR_RESERVED, },
 	.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256 },
@@ -701,7 +721,7 @@ static int rcar_gen4_pcie_ltssm_control(struct rcar_gen4_pcie *rcar, bool enable
 		rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(23, 22), BIT(22));
 		rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(18, 16), GENMASK(17, 16));
 		rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(7, 6), BIT(6));
-		rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(2, 0), GENMASK(11, 0));
+		rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(2, 0), GENMASK(1, 0));
 		rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x1d4, GENMASK(16, 15), GENMASK(16, 15));
 		rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x514, BIT(26), BIT(26));
 		rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x0f8, BIT(16), 0);
@@ -711,7 +731,7 @@ static int rcar_gen4_pcie_ltssm_control(struct rcar_gen4_pcie *rcar, bool enable
 		val &= ~APP_HOLD_PHY_RST;
 		writel(val, rcar->base + PCIERSTCTRL1);
 
-		ret = readl_poll_timeout(rcar->phy_base + 0x0f8, val, !(val & BIT(18)), 100, 10000);
+		ret = readl_poll_timeout(rcar->phy_base + 0x0f8, val, val & BIT(18), 100, 10000);
 		if (ret < 0)
 			return ret;


@@ -0,0 +1,364 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* STMicroelectronics STM32MP25 PCIe endpoint driver.
*
* Copyright (C) 2025 STMicroelectronics
* Author: Christian Bruel <christian.bruel@foss.st.com>
*/
#include <linux/clk.h>
#include <linux/mfd/syscon.h>
#include <linux/of_platform.h>
#include <linux/of_gpio.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include "pcie-designware.h"
#include "pcie-stm32.h"
struct stm32_pcie {
struct dw_pcie pci;
struct regmap *regmap;
struct reset_control *rst;
struct phy *phy;
struct clk *clk;
struct gpio_desc *perst_gpio;
unsigned int perst_irq;
};
static void stm32_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static int stm32_pcie_enable_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR,
STM32MP25_PCIECR_LTSSM_EN,
STM32MP25_PCIECR_LTSSM_EN);
return dw_pcie_wait_for_link(pci);
}
static void stm32_pcie_disable_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, STM32MP25_PCIECR_LTSSM_EN, 0);
}
static int stm32_pcie_start_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
int ret;
dev_dbg(pci->dev, "Enable link\n");
ret = stm32_pcie_enable_link(pci);
if (ret) {
dev_err(pci->dev, "PCIe cannot establish link: %d\n", ret);
return ret;
}
enable_irq(stm32_pcie->perst_irq);
return 0;
}
static void stm32_pcie_stop_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
dev_dbg(pci->dev, "Disable link\n");
disable_irq(stm32_pcie->perst_irq);
stm32_pcie_disable_link(pci);
}
static int stm32_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
unsigned int type, u16 interrupt_num)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
switch (type) {
case PCI_IRQ_INTX:
return dw_pcie_ep_raise_intx_irq(ep, func_no);
case PCI_IRQ_MSI:
return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
default:
dev_err(pci->dev, "UNKNOWN IRQ type\n");
return -EINVAL;
}
}
static const struct pci_epc_features stm32_pcie_epc_features = {
.msi_capable = true,
.align = SZ_64K,
};
static const struct pci_epc_features*
stm32_pcie_get_features(struct dw_pcie_ep *ep)
{
return &stm32_pcie_epc_features;
}
static const struct dw_pcie_ep_ops stm32_pcie_ep_ops = {
.init = stm32_pcie_ep_init,
.raise_irq = stm32_pcie_raise_irq,
.get_features = stm32_pcie_get_features,
};
static const struct dw_pcie_ops dw_pcie_ops = {
.start_link = stm32_pcie_start_link,
.stop_link = stm32_pcie_stop_link,
};
static int stm32_pcie_enable_resources(struct stm32_pcie *stm32_pcie)
{
int ret;
ret = phy_init(stm32_pcie->phy);
if (ret)
return ret;
ret = clk_prepare_enable(stm32_pcie->clk);
if (ret)
phy_exit(stm32_pcie->phy);
return ret;
}
static void stm32_pcie_disable_resources(struct stm32_pcie *stm32_pcie)
{
clk_disable_unprepare(stm32_pcie->clk);
phy_exit(stm32_pcie->phy);
}
static void stm32_pcie_perst_assert(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
struct dw_pcie_ep *ep = &stm32_pcie->pci.ep;
struct device *dev = pci->dev;
dev_dbg(dev, "PERST asserted by host\n");
pci_epc_deinit_notify(ep->epc);
stm32_pcie_disable_resources(stm32_pcie);
pm_runtime_put_sync(dev);
}
static void stm32_pcie_perst_deassert(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
struct device *dev = pci->dev;
struct dw_pcie_ep *ep = &pci->ep;
int ret;
dev_dbg(dev, "PERST de-asserted by host\n");
ret = pm_runtime_resume_and_get(dev);
if (ret < 0) {
dev_err(dev, "Failed to resume runtime PM: %d\n", ret);
return;
}
ret = stm32_pcie_enable_resources(stm32_pcie);
if (ret) {
dev_err(dev, "Failed to enable resources: %d\n", ret);
goto err_pm_put_sync;
}
/*
* Reprogram the configuration space registers here because the DBI
* registers were reset by the PHY RCC during phy_init().
*/
ret = dw_pcie_ep_init_registers(ep);
if (ret) {
dev_err(dev, "Failed to complete initialization: %d\n", ret);
goto err_disable_resources;
}
pci_epc_init_notify(ep->epc);
return;
err_disable_resources:
stm32_pcie_disable_resources(stm32_pcie);
err_pm_put_sync:
pm_runtime_put_sync(dev);
}
static irqreturn_t stm32_pcie_ep_perst_irq_thread(int irq, void *data)
{
struct stm32_pcie *stm32_pcie = data;
struct dw_pcie *pci = &stm32_pcie->pci;
u32 perst;
perst = gpiod_get_value(stm32_pcie->perst_gpio);
if (perst)
stm32_pcie_perst_assert(pci);
else
stm32_pcie_perst_deassert(pci);
irq_set_irq_type(gpiod_to_irq(stm32_pcie->perst_gpio),
(perst ? IRQF_TRIGGER_HIGH : IRQF_TRIGGER_LOW));
return IRQ_HANDLED;
}
static int stm32_add_pcie_ep(struct stm32_pcie *stm32_pcie,
struct platform_device *pdev)
{
struct dw_pcie_ep *ep = &stm32_pcie->pci.ep;
struct device *dev = &pdev->dev;
int ret;
ret = regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR,
STM32MP25_PCIECR_TYPE_MASK,
STM32MP25_PCIECR_EP);
if (ret)
return ret;
reset_control_assert(stm32_pcie->rst);
reset_control_deassert(stm32_pcie->rst);
ep->ops = &stm32_pcie_ep_ops;
ret = dw_pcie_ep_init(ep);
if (ret) {
dev_err(dev, "Failed to initialize ep: %d\n", ret);
return ret;
}
ret = stm32_pcie_enable_resources(stm32_pcie);
if (ret) {
dev_err(dev, "Failed to enable resources: %d\n", ret);
dw_pcie_ep_deinit(ep);
return ret;
}
return 0;
}
static int stm32_pcie_probe(struct platform_device *pdev)
{
struct stm32_pcie *stm32_pcie;
struct device *dev = &pdev->dev;
int ret;
stm32_pcie = devm_kzalloc(dev, sizeof(*stm32_pcie), GFP_KERNEL);
if (!stm32_pcie)
return -ENOMEM;
stm32_pcie->pci.dev = dev;
stm32_pcie->pci.ops = &dw_pcie_ops;
stm32_pcie->regmap = syscon_regmap_lookup_by_compatible("st,stm32mp25-syscfg");
if (IS_ERR(stm32_pcie->regmap))
return dev_err_probe(dev, PTR_ERR(stm32_pcie->regmap),
"No syscfg specified\n");
stm32_pcie->phy = devm_phy_get(dev, NULL);
if (IS_ERR(stm32_pcie->phy))
return dev_err_probe(dev, PTR_ERR(stm32_pcie->phy),
"failed to get pcie-phy\n");
stm32_pcie->clk = devm_clk_get(dev, NULL);
if (IS_ERR(stm32_pcie->clk))
return dev_err_probe(dev, PTR_ERR(stm32_pcie->clk),
"Failed to get PCIe clock source\n");
stm32_pcie->rst = devm_reset_control_get_exclusive(dev, NULL);
if (IS_ERR(stm32_pcie->rst))
return dev_err_probe(dev, PTR_ERR(stm32_pcie->rst),
"Failed to get PCIe reset\n");
stm32_pcie->perst_gpio = devm_gpiod_get(dev, "reset", GPIOD_IN);
if (IS_ERR(stm32_pcie->perst_gpio))
return dev_err_probe(dev, PTR_ERR(stm32_pcie->perst_gpio),
"Failed to get reset GPIO\n");
ret = phy_set_mode(stm32_pcie->phy, PHY_MODE_PCIE);
if (ret)
return ret;
platform_set_drvdata(pdev, stm32_pcie);
pm_runtime_get_noresume(dev);
ret = devm_pm_runtime_enable(dev);
if (ret < 0) {
pm_runtime_put_noidle(&pdev->dev);
return dev_err_probe(dev, ret, "Failed to enable runtime PM\n");
}
stm32_pcie->perst_irq = gpiod_to_irq(stm32_pcie->perst_gpio);
/* Will be enabled in start_link when device is initialized. */
irq_set_status_flags(stm32_pcie->perst_irq, IRQ_NOAUTOEN);
ret = devm_request_threaded_irq(dev, stm32_pcie->perst_irq, NULL,
stm32_pcie_ep_perst_irq_thread,
IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
"perst_irq", stm32_pcie);
if (ret) {
pm_runtime_put_noidle(&pdev->dev);
return dev_err_probe(dev, ret, "Failed to request PERST IRQ\n");
}
ret = stm32_add_pcie_ep(stm32_pcie, pdev);
if (ret)
pm_runtime_put_noidle(&pdev->dev);
return ret;
}
static void stm32_pcie_remove(struct platform_device *pdev)
{
struct stm32_pcie *stm32_pcie = platform_get_drvdata(pdev);
struct dw_pcie *pci = &stm32_pcie->pci;
struct dw_pcie_ep *ep = &pci->ep;
dw_pcie_stop_link(pci);
pci_epc_deinit_notify(ep->epc);
dw_pcie_ep_deinit(ep);
stm32_pcie_disable_resources(stm32_pcie);
pm_runtime_put_sync(&pdev->dev);
}
static const struct of_device_id stm32_pcie_ep_of_match[] = {
{ .compatible = "st,stm32mp25-pcie-ep" },
{},
};
static struct platform_driver stm32_pcie_ep_driver = {
.probe = stm32_pcie_probe,
.remove = stm32_pcie_remove,
.driver = {
.name = "stm32-ep-pcie",
.of_match_table = stm32_pcie_ep_of_match,
},
};
module_platform_driver(stm32_pcie_ep_driver);
MODULE_AUTHOR("Christian Bruel <christian.bruel@foss.st.com>");
MODULE_DESCRIPTION("STM32MP25 PCIe Endpoint Controller driver");
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(of, stm32_pcie_ep_of_match);


@@ -0,0 +1,358 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* STMicroelectronics STM32MP25 PCIe root complex driver.
*
* Copyright (C) 2025 STMicroelectronics
* Author: Christian Bruel <christian.bruel@foss.st.com>
*/
#include <linux/clk.h>
#include <linux/mfd/syscon.h>
#include <linux/of_platform.h>
#include <linux/phy/phy.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/pm_wakeirq.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include "pcie-designware.h"
#include "pcie-stm32.h"
#include "../../pci.h"
struct stm32_pcie {
struct dw_pcie pci;
struct regmap *regmap;
struct reset_control *rst;
struct phy *phy;
struct clk *clk;
struct gpio_desc *perst_gpio;
struct gpio_desc *wake_gpio;
};
static void stm32_pcie_deassert_perst(struct stm32_pcie *stm32_pcie)
{
if (stm32_pcie->perst_gpio) {
msleep(PCIE_T_PVPERL_MS);
gpiod_set_value(stm32_pcie->perst_gpio, 0);
}
msleep(PCIE_RESET_CONFIG_WAIT_MS);
}
static void stm32_pcie_assert_perst(struct stm32_pcie *stm32_pcie)
{
gpiod_set_value(stm32_pcie->perst_gpio, 1);
}
static int stm32_pcie_start_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
return regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR,
STM32MP25_PCIECR_LTSSM_EN,
STM32MP25_PCIECR_LTSSM_EN);
}
static void stm32_pcie_stop_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR,
STM32MP25_PCIECR_LTSSM_EN, 0);
}
static int stm32_pcie_suspend_noirq(struct device *dev)
{
struct stm32_pcie *stm32_pcie = dev_get_drvdata(dev);
int ret;
ret = dw_pcie_suspend_noirq(&stm32_pcie->pci);
if (ret)
return ret;
stm32_pcie_assert_perst(stm32_pcie);
clk_disable_unprepare(stm32_pcie->clk);
if (!device_wakeup_path(dev))
phy_exit(stm32_pcie->phy);
return pinctrl_pm_select_sleep_state(dev);
}
static int stm32_pcie_resume_noirq(struct device *dev)
{
struct stm32_pcie *stm32_pcie = dev_get_drvdata(dev);
int ret;
/*
* The core clock is gated with CLKREQ# from the COMBOPHY REFCLK,
* thus if no device is present, must deassert it with a GPIO from
* pinctrl pinmux before accessing the DBI registers.
*/
ret = pinctrl_pm_select_init_state(dev);
if (ret) {
dev_err(dev, "Failed to activate pinctrl pm state: %d\n", ret);
return ret;
}
if (!device_wakeup_path(dev)) {
ret = phy_init(stm32_pcie->phy);
if (ret) {
pinctrl_pm_select_default_state(dev);
return ret;
}
}
ret = clk_prepare_enable(stm32_pcie->clk);
if (ret)
goto err_phy_exit;
stm32_pcie_deassert_perst(stm32_pcie);
ret = dw_pcie_resume_noirq(&stm32_pcie->pci);
if (ret)
goto err_disable_clk;
pinctrl_pm_select_default_state(dev);
return 0;
err_disable_clk:
stm32_pcie_assert_perst(stm32_pcie);
clk_disable_unprepare(stm32_pcie->clk);
err_phy_exit:
phy_exit(stm32_pcie->phy);
pinctrl_pm_select_default_state(dev);
return ret;
}
static const struct dev_pm_ops stm32_pcie_pm_ops = {
	NOIRQ_SYSTEM_SLEEP_PM_OPS(stm32_pcie_suspend_noirq,
				  stm32_pcie_resume_noirq)
};

static const struct dw_pcie_host_ops stm32_pcie_host_ops = {
};

static const struct dw_pcie_ops dw_pcie_ops = {
	.start_link = stm32_pcie_start_link,
	.stop_link = stm32_pcie_stop_link
};

static int stm32_add_pcie_port(struct stm32_pcie *stm32_pcie)
{
	struct device *dev = stm32_pcie->pci.dev;
	unsigned int wake_irq;
	int ret;

	ret = phy_set_mode(stm32_pcie->phy, PHY_MODE_PCIE);
	if (ret)
		return ret;

	ret = phy_init(stm32_pcie->phy);
	if (ret)
		return ret;

	ret = regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR,
				 STM32MP25_PCIECR_TYPE_MASK,
				 STM32MP25_PCIECR_RC);
	if (ret)
		goto err_phy_exit;

	stm32_pcie_deassert_perst(stm32_pcie);

	if (stm32_pcie->wake_gpio) {
		wake_irq = gpiod_to_irq(stm32_pcie->wake_gpio);
		ret = dev_pm_set_dedicated_wake_irq(dev, wake_irq);
		if (ret) {
			dev_err(dev, "Failed to enable wakeup irq %d\n", ret);
			goto err_assert_perst;
		}
		irq_set_irq_type(wake_irq, IRQ_TYPE_EDGE_FALLING);
	}

	return 0;

err_assert_perst:
	stm32_pcie_assert_perst(stm32_pcie);
err_phy_exit:
	phy_exit(stm32_pcie->phy);

	return ret;
}

static void stm32_remove_pcie_port(struct stm32_pcie *stm32_pcie)
{
	dev_pm_clear_wake_irq(stm32_pcie->pci.dev);

	stm32_pcie_assert_perst(stm32_pcie);

	phy_exit(stm32_pcie->phy);
}
static int stm32_pcie_parse_port(struct stm32_pcie *stm32_pcie)
{
	struct device *dev = stm32_pcie->pci.dev;
	struct device_node *root_port;

	root_port = of_get_next_available_child(dev->of_node, NULL);

	stm32_pcie->phy = devm_of_phy_get(dev, root_port, NULL);
	if (IS_ERR(stm32_pcie->phy)) {
		of_node_put(root_port);
		return dev_err_probe(dev, PTR_ERR(stm32_pcie->phy),
				     "Failed to get pcie-phy\n");
	}

	stm32_pcie->perst_gpio = devm_fwnode_gpiod_get(dev, of_fwnode_handle(root_port),
						       "reset", GPIOD_OUT_HIGH, NULL);
	if (IS_ERR(stm32_pcie->perst_gpio)) {
		if (PTR_ERR(stm32_pcie->perst_gpio) != -ENOENT) {
			of_node_put(root_port);
			return dev_err_probe(dev, PTR_ERR(stm32_pcie->perst_gpio),
					     "Failed to get reset GPIO\n");
		}
		stm32_pcie->perst_gpio = NULL;
	}

	stm32_pcie->wake_gpio = devm_fwnode_gpiod_get(dev, of_fwnode_handle(root_port),
						      "wake", GPIOD_IN, NULL);
	if (IS_ERR(stm32_pcie->wake_gpio)) {
		if (PTR_ERR(stm32_pcie->wake_gpio) != -ENOENT) {
			of_node_put(root_port);
			return dev_err_probe(dev, PTR_ERR(stm32_pcie->wake_gpio),
					     "Failed to get wake GPIO\n");
		}
		stm32_pcie->wake_gpio = NULL;
	}

	of_node_put(root_port);

	return 0;
}
static int stm32_pcie_probe(struct platform_device *pdev)
{
	struct stm32_pcie *stm32_pcie;
	struct device *dev = &pdev->dev;
	int ret;

	stm32_pcie = devm_kzalloc(dev, sizeof(*stm32_pcie), GFP_KERNEL);
	if (!stm32_pcie)
		return -ENOMEM;

	stm32_pcie->pci.dev = dev;
	stm32_pcie->pci.ops = &dw_pcie_ops;
	stm32_pcie->pci.pp.ops = &stm32_pcie_host_ops;

	stm32_pcie->regmap = syscon_regmap_lookup_by_compatible("st,stm32mp25-syscfg");
	if (IS_ERR(stm32_pcie->regmap))
		return dev_err_probe(dev, PTR_ERR(stm32_pcie->regmap),
				     "No syscfg specified\n");

	stm32_pcie->clk = devm_clk_get(dev, NULL);
	if (IS_ERR(stm32_pcie->clk))
		return dev_err_probe(dev, PTR_ERR(stm32_pcie->clk),
				     "Failed to get PCIe clock source\n");

	stm32_pcie->rst = devm_reset_control_get_exclusive(dev, NULL);
	if (IS_ERR(stm32_pcie->rst))
		return dev_err_probe(dev, PTR_ERR(stm32_pcie->rst),
				     "Failed to get PCIe reset\n");

	ret = stm32_pcie_parse_port(stm32_pcie);
	if (ret)
		return ret;

	platform_set_drvdata(pdev, stm32_pcie);

	ret = stm32_add_pcie_port(stm32_pcie);
	if (ret)
		return ret;

	reset_control_assert(stm32_pcie->rst);
	reset_control_deassert(stm32_pcie->rst);

	ret = clk_prepare_enable(stm32_pcie->clk);
	if (ret) {
		dev_err(dev, "Core clock enable failed %d\n", ret);
		goto err_remove_port;
	}

	ret = pm_runtime_set_active(dev);
	if (ret < 0) {
		dev_err_probe(dev, ret, "Failed to activate runtime PM\n");
		goto err_disable_clk;
	}

	pm_runtime_no_callbacks(dev);

	ret = devm_pm_runtime_enable(dev);
	if (ret < 0) {
		dev_err_probe(dev, ret, "Failed to enable runtime PM\n");
		goto err_disable_clk;
	}

	ret = dw_pcie_host_init(&stm32_pcie->pci.pp);
	if (ret)
		goto err_disable_clk;

	if (stm32_pcie->wake_gpio)
		device_init_wakeup(dev, true);

	return 0;

err_disable_clk:
	clk_disable_unprepare(stm32_pcie->clk);
err_remove_port:
	stm32_remove_pcie_port(stm32_pcie);

	return ret;
}
static void stm32_pcie_remove(struct platform_device *pdev)
{
	struct stm32_pcie *stm32_pcie = platform_get_drvdata(pdev);
	struct dw_pcie_rp *pp = &stm32_pcie->pci.pp;

	if (stm32_pcie->wake_gpio)
		device_init_wakeup(&pdev->dev, false);

	dw_pcie_host_deinit(pp);
	clk_disable_unprepare(stm32_pcie->clk);
	stm32_remove_pcie_port(stm32_pcie);

	pm_runtime_put_noidle(&pdev->dev);
}

static const struct of_device_id stm32_pcie_of_match[] = {
	{ .compatible = "st,stm32mp25-pcie-rc" },
	{},
};

static struct platform_driver stm32_pcie_driver = {
	.probe = stm32_pcie_probe,
	.remove = stm32_pcie_remove,
	.driver = {
		.name = "stm32-pcie",
		.of_match_table = stm32_pcie_of_match,
		.pm = &stm32_pcie_pm_ops,
		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
	},
};
module_platform_driver(stm32_pcie_driver);

MODULE_AUTHOR("Christian Bruel <christian.bruel@foss.st.com>");
MODULE_DESCRIPTION("STM32MP25 PCIe Controller driver");
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(of, stm32_pcie_of_match);

View File

@@ -0,0 +1,16 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* ST PCIe driver definitions for STM32-MP25 SoC
*
* Copyright (C) 2025 STMicroelectronics - All Rights Reserved
* Author: Christian Bruel <christian.bruel@foss.st.com>
*/
#define to_stm32_pcie(x) dev_get_drvdata((x)->dev)
#define STM32MP25_PCIECR_TYPE_MASK GENMASK(11, 8)
#define STM32MP25_PCIECR_EP 0
#define STM32MP25_PCIECR_LTSSM_EN BIT(2)
#define STM32MP25_PCIECR_RC BIT(10)
#define SYSCFG_PCIECR 0x6000

View File

@@ -1214,6 +1214,7 @@ static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
 	struct mrq_uphy_response resp;
 	struct tegra_bpmp_message msg;
 	struct mrq_uphy_request req;
+	int err;
 
 	/*
 	 * Controller-5 doesn't need to have its state set by BPMP-FW in
@@ -1236,7 +1237,13 @@ static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
 	msg.rx.data = &resp;
 	msg.rx.size = sizeof(resp);
 
-	return tegra_bpmp_transfer(pcie->bpmp, &msg);
+	err = tegra_bpmp_transfer(pcie->bpmp, &msg);
+	if (err)
+		return err;
+	if (msg.rx.ret)
+		return -EINVAL;
+
+	return 0;
 }
 
 static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie,
@@ -1245,6 +1252,7 @@ static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie,
 	struct mrq_uphy_response resp;
 	struct tegra_bpmp_message msg;
 	struct mrq_uphy_request req;
+	int err;
 
 	memset(&req, 0, sizeof(req));
 	memset(&resp, 0, sizeof(resp));
@@ -1264,13 +1272,19 @@ static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie,
 	msg.rx.data = &resp;
 	msg.rx.size = sizeof(resp);
 
-	return tegra_bpmp_transfer(pcie->bpmp, &msg);
+	err = tegra_bpmp_transfer(pcie->bpmp, &msg);
+	if (err)
+		return err;
+	if (msg.rx.ret)
+		return -EINVAL;
+
+	return 0;
 }
 
 static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
 {
 	struct dw_pcie_rp *pp = &pcie->pci.pp;
-	struct pci_bus *child, *root_bus = NULL;
+	struct pci_bus *child, *root_port_bus = NULL;
 	struct pci_dev *pdev;
 
 	/*
@@ -1283,19 +1297,19 @@ static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
 	 */
 	list_for_each_entry(child, &pp->bridge->bus->children, node) {
-		/* Bring downstream devices to D0 if they are not already in */
 		if (child->parent == pp->bridge->bus) {
-			root_bus = child;
+			root_port_bus = child;
 			break;
 		}
 	}
 
-	if (!root_bus) {
-		dev_err(pcie->dev, "Failed to find downstream devices\n");
+	if (!root_port_bus) {
+		dev_err(pcie->dev, "Failed to find downstream bus of Root Port\n");
 		return;
 	}
 
-	list_for_each_entry(pdev, &root_bus->devices, bus_list) {
+	/* Bring downstream devices to D0 if they are not already in */
+	list_for_each_entry(pdev, &root_port_bus->devices, bus_list) {
 		if (PCI_SLOT(pdev->devfn) == 0) {
 			if (pci_set_power_state(pdev, PCI_D0))
 				dev_err(pcie->dev,
@@ -1722,9 +1736,9 @@ static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie)
 			ret);
 	}
 
-	ret = tegra_pcie_bpmp_set_pll_state(pcie, false);
+	ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false);
 	if (ret)
-		dev_err(pcie->dev, "Failed to turn off UPHY: %d\n", ret);
+		dev_err(pcie->dev, "Failed to disable controller: %d\n", ret);
 
 	pcie->ep_state = EP_STATE_DISABLED;
 	dev_dbg(pcie->dev, "Uninitialization of endpoint is completed\n");
@@ -1941,6 +1955,15 @@ static irqreturn_t tegra_pcie_ep_pex_rst_irq(int irq, void *arg)
 	return IRQ_HANDLED;
 }
 
+static void tegra_pcie_ep_init(struct dw_pcie_ep *ep)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	enum pci_barno bar;
+
+	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
+		dw_pcie_ep_reset_bar(pci, bar);
+};
+
 static int tegra_pcie_ep_raise_intx_irq(struct tegra_pcie_dw *pcie, u16 irq)
 {
 	/* Tegra194 supports only INTA */
@@ -1955,10 +1978,10 @@ static int tegra_pcie_ep_raise_intx_irq(struct tegra_pcie_dw *pcie, u16 irq)
 
 static int tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq)
 {
-	if (unlikely(irq > 31))
+	if (unlikely(irq > 32))
 		return -EINVAL;
 
-	appl_writel(pcie, BIT(irq), APPL_MSI_CTRL_1);
+	appl_writel(pcie, BIT(irq - 1), APPL_MSI_CTRL_1);
 
 	return 0;
 }
@@ -1998,8 +2021,7 @@ static int tegra_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 
 static const struct pci_epc_features tegra_pcie_epc_features = {
 	.linkup_notifier = true,
-	.msi_capable = false,
-	.msix_capable = false,
+	.msi_capable = true,
 	.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M,
 			.only_64bit = true, },
 	.bar[BAR_1] = { .type = BAR_RESERVED, },
@@ -2017,6 +2039,7 @@ tegra_pcie_ep_get_features(struct dw_pcie_ep *ep)
 }
 
 static const struct dw_pcie_ep_ops pcie_ep_ops = {
+	.init = tegra_pcie_ep_init,
 	.raise_irq = tegra_pcie_ep_raise_irq,
 	.get_features = tegra_pcie_ep_get_features,
 };

View File

@@ -1680,7 +1680,6 @@ static void hv_int_desc_free(struct hv_pci_dev *hpdev,
 /**
  * hv_msi_free() - Free the MSI.
  * @domain: The interrupt domain pointer
- * @info: Extra MSI-related context
  * @irq: Identifies the IRQ.
  *
  * The Hyper-V parent partition and hypervisor are tracking the
@@ -1688,8 +1687,7 @@ static void hv_int_desc_free(struct hv_pci_dev *hpdev,
  * table up to date. This callback sends a message that frees
  * the IRT entry and related tracking nonsense.
  */
-static void hv_msi_free(struct irq_domain *domain, struct msi_domain_info *info,
-			unsigned int irq)
+static void hv_msi_free(struct irq_domain *domain, unsigned int irq)
 {
 	struct hv_pcibus_device *hbus;
 	struct hv_pci_dev *hpdev;
@@ -2181,10 +2179,8 @@ static int hv_pcie_domain_alloc(struct irq_domain *d, unsigned int virq, unsigne
 static void hv_pcie_domain_free(struct irq_domain *d, unsigned int virq, unsigned int nr_irqs)
 {
-	struct msi_domain_info *info = d->host_data;
-
 	for (int i = 0; i < nr_irqs; i++)
-		hv_msi_free(d, info, virq + i);
+		hv_msi_free(d, virq + i);
 
 	irq_domain_free_irqs_top(d, virq, nr_irqs);
 }

View File

@@ -14,6 +14,7 @@
  */
 
 #include <linux/clk.h>
+#include <linux/cleanup.h>
 #include <linux/debugfs.h>
 #include <linux/delay.h>
 #include <linux/export.h>
@@ -270,7 +271,7 @@ struct tegra_msi {
 	DECLARE_BITMAP(used, INT_PCI_MSI_NR);
 	struct irq_domain *domain;
 	struct mutex map_lock;
-	spinlock_t mask_lock;
+	raw_spinlock_t mask_lock;
 	void *virt;
 	dma_addr_t phys;
 	int irq;
@@ -1344,7 +1345,7 @@ static int tegra_pcie_port_get_phys(struct tegra_pcie_port *port)
 	unsigned int i;
 	int err;
 
-	port->phys = devm_kcalloc(dev, sizeof(phy), port->lanes, GFP_KERNEL);
+	port->phys = devm_kcalloc(dev, port->lanes, sizeof(phy), GFP_KERNEL);
 	if (!port->phys)
 		return -ENOMEM;
 
@@ -1581,14 +1582,13 @@ static void tegra_msi_irq_mask(struct irq_data *d)
 	struct tegra_msi *msi = irq_data_get_irq_chip_data(d);
 	struct tegra_pcie *pcie = msi_to_pcie(msi);
 	unsigned int index = d->hwirq / 32;
-	unsigned long flags;
 	u32 value;
 
-	spin_lock_irqsave(&msi->mask_lock, flags);
-	value = afi_readl(pcie, AFI_MSI_EN_VEC(index));
-	value &= ~BIT(d->hwirq % 32);
-	afi_writel(pcie, value, AFI_MSI_EN_VEC(index));
-	spin_unlock_irqrestore(&msi->mask_lock, flags);
+	scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) {
+		value = afi_readl(pcie, AFI_MSI_EN_VEC(index));
+		value &= ~BIT(d->hwirq % 32);
+		afi_writel(pcie, value, AFI_MSI_EN_VEC(index));
+	}
 }
 
 static void tegra_msi_irq_unmask(struct irq_data *d)
@@ -1596,14 +1596,13 @@ static void tegra_msi_irq_unmask(struct irq_data *d)
 	struct tegra_msi *msi = irq_data_get_irq_chip_data(d);
 	struct tegra_pcie *pcie = msi_to_pcie(msi);
 	unsigned int index = d->hwirq / 32;
-	unsigned long flags;
 	u32 value;
 
-	spin_lock_irqsave(&msi->mask_lock, flags);
-	value = afi_readl(pcie, AFI_MSI_EN_VEC(index));
-	value |= BIT(d->hwirq % 32);
-	afi_writel(pcie, value, AFI_MSI_EN_VEC(index));
-	spin_unlock_irqrestore(&msi->mask_lock, flags);
+	scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) {
+		value = afi_readl(pcie, AFI_MSI_EN_VEC(index));
+		value |= BIT(d->hwirq % 32);
+		afi_writel(pcie, value, AFI_MSI_EN_VEC(index));
+	}
 }
 
 static void tegra_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
@@ -1711,7 +1710,7 @@ static int tegra_pcie_msi_setup(struct tegra_pcie *pcie)
 	int err;
 
 	mutex_init(&msi->map_lock);
-	spin_lock_init(&msi->mask_lock);
+	raw_spin_lock_init(&msi->mask_lock);
 
 	if (IS_ENABLED(CONFIG_PCI_MSI)) {
 		err = tegra_allocate_domains(msi);

View File

@@ -311,7 +311,7 @@ static int xgene_msi_handler_setup(struct platform_device *pdev)
 		msi_val = xgene_msi_int_read(xgene_msi, i);
 		if (msi_val) {
 			dev_err(&pdev->dev, "Failed to clear spurious IRQ\n");
-			return EINVAL;
+			return -EINVAL;
 		}
 
 		irq = platform_get_irq(pdev, i);

View File

@@ -102,6 +102,9 @@
 #define PCIE_MSI_SET_ADDR_HI_BASE	0xc80
 #define PCIE_MSI_SET_ADDR_HI_OFFSET	0x04
 
+#define PCIE_RESOURCE_CTRL_REG		0xd2c
+#define PCIE_RSRC_SYS_CLK_RDY_TIME_MASK	GENMASK(7, 0)
+
 #define PCIE_ICMD_PM_REG		0x198
 #define PCIE_TURN_OFF_LINK		BIT(4)
@@ -149,6 +152,7 @@ enum mtk_gen3_pcie_flags {
  * struct mtk_gen3_pcie_pdata - differentiate between host generations
  * @power_up: pcie power_up callback
  * @phy_resets: phy reset lines SoC data.
+ * @sys_clk_rdy_time_us: System clock ready time override (microseconds)
  * @flags: pcie device flags.
  */
 struct mtk_gen3_pcie_pdata {
@@ -157,6 +161,7 @@ struct mtk_gen3_pcie_pdata {
 		const char *id[MAX_NUM_PHY_RESETS];
 		int num_resets;
 	} phy_resets;
+	u8 sys_clk_rdy_time_us;
 	u32 flags;
 };
 
@@ -435,6 +440,14 @@ static int mtk_pcie_startup_port(struct mtk_gen3_pcie *pcie)
 		writel_relaxed(val, pcie->base + PCIE_CONF_LINK2_CTL_STS);
 	}
 
+	/* If parameter is present, adjust SYS_CLK_RDY_TIME to avoid glitching */
+	if (pcie->soc->sys_clk_rdy_time_us) {
+		val = readl_relaxed(pcie->base + PCIE_RESOURCE_CTRL_REG);
+		FIELD_MODIFY(PCIE_RSRC_SYS_CLK_RDY_TIME_MASK, &val,
+			     pcie->soc->sys_clk_rdy_time_us);
+		writel_relaxed(val, pcie->base + PCIE_RESOURCE_CTRL_REG);
+	}
+
 	/* Set class code */
 	val = readl_relaxed(pcie->base + PCIE_PCI_IDS_1);
 	val &= ~GENMASK(31, 8);
@@ -1327,6 +1340,15 @@ static const struct mtk_gen3_pcie_pdata mtk_pcie_soc_mt8192 = {
 	},
 };
 
+static const struct mtk_gen3_pcie_pdata mtk_pcie_soc_mt8196 = {
+	.power_up = mtk_pcie_power_up,
+	.phy_resets = {
+		.id[0] = "phy",
+		.num_resets = 1,
+	},
+	.sys_clk_rdy_time_us = 10,
+};
+
 static const struct mtk_gen3_pcie_pdata mtk_pcie_soc_en7581 = {
 	.power_up = mtk_pcie_en7581_power_up,
 	.phy_resets = {
@@ -1341,6 +1363,7 @@ static const struct mtk_gen3_pcie_pdata mtk_pcie_soc_en7581 = {
 static const struct of_device_id mtk_pcie_of_match[] = {
 	{ .compatible = "airoha,en7581-pcie", .data = &mtk_pcie_soc_en7581 },
 	{ .compatible = "mediatek,mt8192-pcie", .data = &mtk_pcie_soc_mt8192 },
+	{ .compatible = "mediatek,mt8196-pcie", .data = &mtk_pcie_soc_mt8196 },
 	{},
 };
 MODULE_DEVICE_TABLE(of, mtk_pcie_of_match);

View File

@@ -436,9 +436,7 @@ static void rcar_pcie_ep_stop(struct pci_epc *epc)
 }
 
 static const struct pci_epc_features rcar_pcie_epc_features = {
-	.linkup_notifier = false,
 	.msi_capable = true,
-	.msix_capable = false,
 	/* use 64-bit BARs so mark BAR[1,3,5] as reserved */
 	.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = 128,
 			.only_64bit = true, },

View File

@@ -12,6 +12,7 @@
  */
 
 #include <linux/bitops.h>
+#include <linux/cleanup.h>
 #include <linux/clk.h>
 #include <linux/clk-provider.h>
 #include <linux/delay.h>
@@ -38,7 +39,7 @@ struct rcar_msi {
 	DECLARE_BITMAP(used, INT_PCI_MSI_NR);
 	struct irq_domain *domain;
 	struct mutex map_lock;
-	spinlock_t mask_lock;
+	raw_spinlock_t mask_lock;
 	int irq1;
 	int irq2;
 };
@@ -52,20 +53,13 @@ struct rcar_pcie_host {
 	int (*phy_init_fn)(struct rcar_pcie_host *host);
 };
 
-static DEFINE_SPINLOCK(pmsr_lock);
-
 static int rcar_pcie_wakeup(struct device *pcie_dev, void __iomem *pcie_base)
 {
-	unsigned long flags;
 	u32 pmsr, val;
 	int ret = 0;
 
-	spin_lock_irqsave(&pmsr_lock, flags);
-
-	if (!pcie_base || pm_runtime_suspended(pcie_dev)) {
-		ret = -EINVAL;
-		goto unlock_exit;
-	}
+	if (!pcie_base || pm_runtime_suspended(pcie_dev))
+		return -EINVAL;
 
 	pmsr = readl(pcie_base + PMSR);
@@ -87,8 +81,6 @@ static int rcar_pcie_wakeup(struct device *pcie_dev, void __iomem *pcie_base)
 		writel(L1FAEG | PMEL1RX, pcie_base + PMSR);
 	}
 
-unlock_exit:
-	spin_unlock_irqrestore(&pmsr_lock, flags);
 	return ret;
 }
 
@@ -584,7 +576,7 @@ static irqreturn_t rcar_pcie_msi_irq(int irq, void *data)
 		unsigned int index = find_first_bit(&reg, 32);
 		int ret;
 
-		ret = generic_handle_domain_irq(msi->domain->parent, index);
+		ret = generic_handle_domain_irq(msi->domain, index);
 		if (ret) {
 			/* Unknown MSI, just clear it */
 			dev_dbg(dev, "unexpected MSI\n");
@@ -611,28 +603,26 @@ static void rcar_msi_irq_mask(struct irq_data *d)
 {
 	struct rcar_msi *msi = irq_data_get_irq_chip_data(d);
 	struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
-	unsigned long flags;
 	u32 value;
 
-	spin_lock_irqsave(&msi->mask_lock, flags);
-	value = rcar_pci_read_reg(pcie, PCIEMSIIER);
-	value &= ~BIT(d->hwirq);
-	rcar_pci_write_reg(pcie, value, PCIEMSIIER);
-	spin_unlock_irqrestore(&msi->mask_lock, flags);
+	scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) {
+		value = rcar_pci_read_reg(pcie, PCIEMSIIER);
+		value &= ~BIT(d->hwirq);
+		rcar_pci_write_reg(pcie, value, PCIEMSIIER);
+	}
 }
 
 static void rcar_msi_irq_unmask(struct irq_data *d)
 {
 	struct rcar_msi *msi = irq_data_get_irq_chip_data(d);
 	struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
-	unsigned long flags;
 	u32 value;
 
-	spin_lock_irqsave(&msi->mask_lock, flags);
-	value = rcar_pci_read_reg(pcie, PCIEMSIIER);
-	value |= BIT(d->hwirq);
-	rcar_pci_write_reg(pcie, value, PCIEMSIIER);
-	spin_unlock_irqrestore(&msi->mask_lock, flags);
+	scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) {
+		value = rcar_pci_read_reg(pcie, PCIEMSIIER);
+		value |= BIT(d->hwirq);
+		rcar_pci_write_reg(pcie, value, PCIEMSIIER);
+	}
 }
 
 static void rcar_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
@@ -745,7 +735,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
 	int err;
 
 	mutex_init(&msi->map_lock);
-	spin_lock_init(&msi->mask_lock);
+	raw_spin_lock_init(&msi->mask_lock);
 
 	err = of_address_to_resource(dev->of_node, 0, &res);
 	if (err)

View File

@@ -694,7 +694,6 @@ static int rockchip_pcie_ep_setup_irq(struct pci_epc *epc)
 static const struct pci_epc_features rockchip_pcie_epc_features = {
 	.linkup_notifier = true,
 	.msi_capable = true,
-	.msix_capable = false,
 	.intx_capable = true,
 	.align = ROCKCHIP_PCIE_AT_SIZE_ALIGN,
 };

View File

@@ -718,9 +718,10 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
 	nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) |
 			  E_ECAM_CR_ENABLE, E_ECAM_CONTROL);
 
-	nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) |
-			  (NWL_ECAM_MAX_SIZE << E_ECAM_SIZE_SHIFT),
-			  E_ECAM_CONTROL);
+	ecam_val = nwl_bridge_readl(pcie, E_ECAM_CONTROL);
+	ecam_val &= ~E_ECAM_SIZE_LOC;
+	ecam_val |= NWL_ECAM_MAX_SIZE << E_ECAM_SIZE_SHIFT;
+	nwl_bridge_writel(pcie, ecam_val, E_ECAM_CONTROL);
 
 	nwl_bridge_writel(pcie, lower_32_bits(pcie->phys_ecam_base),
 			  E_ECAM_BASE_LO);

View File

@@ -599,8 +599,7 @@ int plda_pcie_host_init(struct plda_pcie_rp *port, struct pci_ops *ops,
 
 	bridge = devm_pci_alloc_host_bridge(dev, 0);
 	if (!bridge)
-		return dev_err_probe(dev, -ENOMEM,
-				     "failed to alloc bridge\n");
+		return -ENOMEM;
 
 	if (port->host_ops && port->host_ops->host_init) {
 		ret = port->host_ops->host_init(port);

View File

@@ -301,15 +301,20 @@ static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test)
 	if (!epf_test->dma_supported)
 		return;
 
-	dma_release_channel(epf_test->dma_chan_tx);
-	if (epf_test->dma_chan_tx == epf_test->dma_chan_rx) {
+	if (epf_test->dma_chan_tx) {
+		dma_release_channel(epf_test->dma_chan_tx);
+		if (epf_test->dma_chan_tx == epf_test->dma_chan_rx) {
+			epf_test->dma_chan_tx = NULL;
+			epf_test->dma_chan_rx = NULL;
+			return;
+		}
 		epf_test->dma_chan_tx = NULL;
-		epf_test->dma_chan_rx = NULL;
-		return;
 	}
 
-	dma_release_channel(epf_test->dma_chan_rx);
-	epf_test->dma_chan_rx = NULL;
+	if (epf_test->dma_chan_rx) {
+		dma_release_channel(epf_test->dma_chan_rx);
+		epf_test->dma_chan_rx = NULL;
+	}
 }
 
 static void pci_epf_test_print_rate(struct pci_epf_test *epf_test,
@@ -772,12 +777,24 @@ static void pci_epf_test_disable_doorbell(struct pci_epf_test *epf_test,
 	u32 status = le32_to_cpu(reg->status);
 	struct pci_epf *epf = epf_test->epf;
 	struct pci_epc *epc = epf->epc;
+	int ret;
 
 	if (bar < BAR_0)
 		goto set_status_err;
 
 	pci_epf_test_doorbell_cleanup(epf_test);
-	pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, &epf_test->db_bar);
+
+	/*
+	 * The doorbell feature temporarily overrides the inbound translation
+	 * to point to the address stored in epf_test->db_bar.phys_addr, i.e.,
+	 * it calls set_bar() twice without ever calling clear_bar(), as
+	 * calling clear_bar() would clear the BAR's PCI address assigned by
+	 * the host. Thus, when disabling the doorbell, restore the inbound
+	 * translation to point to the memory allocated for the BAR.
+	 */
+	ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, &epf->bar[bar]);
+	if (ret)
+		goto set_status_err;
 
 	status |= STATUS_DOORBELL_DISABLE_SUCCESS;
 	reg->status = cpu_to_le32(status);
@@ -1050,7 +1067,12 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
 		if (bar == test_reg_bar)
 			continue;
 
-		base = pci_epf_alloc_space(epf, bar_size[bar], bar,
+		if (epc_features->bar[bar].type == BAR_FIXED)
+			test_reg_size = epc_features->bar[bar].fixed_size;
+		else
+			test_reg_size = bar_size[bar];
+
+		base = pci_epf_alloc_space(epf, test_reg_size, bar,
 					   epc_features, PRIMARY_INTERFACE);
 		if (!base)
 			dev_err(dev, "Failed to allocate space for BAR%d\n",

View File

@@ -24,7 +24,7 @@ static void pci_epf_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
 	struct pci_epf *epf;
 
 	epc = pci_epc_get(dev_name(msi_desc_to_dev(desc)));
-	if (!epc)
+	if (IS_ERR(epc))
 		return;
 
 	epf = list_first_entry_or_null(&epc->pci_epf, struct pci_epf, list);

View File

@@ -1302,7 +1302,7 @@ int cpqhp_find_available_resources(struct controller *ctrl, void __iomem *rom_st
 			dbg("found io_node(base, length) = %x, %x\n",
 			    io_node->base, io_node->length);
-			dbg("populated slot =%d \n", populated_slot);
+			dbg("populated slot = %d\n", populated_slot);
 			if (!populated_slot) {
 				io_node->next = ctrl->io_head;
 				ctrl->io_head = io_node;
@@ -1325,7 +1325,7 @@ int cpqhp_find_available_resources(struct controller *ctrl, void __iomem *rom_st
 			dbg("found mem_node(base, length) = %x, %x\n",
 			    mem_node->base, mem_node->length);
-			dbg("populated slot =%d \n", populated_slot);
+			dbg("populated slot = %d\n", populated_slot);
 			if (!populated_slot) {
 				mem_node->next = ctrl->mem_head;
 				ctrl->mem_head = mem_node;
@@ -1349,7 +1349,7 @@ int cpqhp_find_available_resources(struct controller *ctrl, void __iomem *rom_st
 			p_mem_node->length = pre_mem_length << 16;
 			dbg("found p_mem_node(base, length) = %x, %x\n",
 			    p_mem_node->base, p_mem_node->length);
-			dbg("populated slot =%d \n", populated_slot);
+			dbg("populated slot = %d\n", populated_slot);
 
 			if (!populated_slot) {
 				p_mem_node->next = ctrl->p_mem_head;
@@ -1373,7 +1373,7 @@ int cpqhp_find_available_resources(struct controller *ctrl, void __iomem *rom_st
 			bus_node->length = max_bus - secondary_bus + 1;
 			dbg("found bus_node(base, length) = %x, %x\n",
 			    bus_node->base, bus_node->length);
-			dbg("populated slot =%d \n", populated_slot);
+			dbg("populated slot = %d\n", populated_slot);
 			if (!populated_slot) {
 				bus_node->next = ctrl->bus_head;
 				ctrl->bus_head = bus_node;

View File

@@ -124,7 +124,7 @@ static u8 i2c_ctrl_read(struct controller *ctlr_ptr, void __iomem *WPGBbar, u8 i
 	unsigned long ultemp;
 	unsigned long data;	// actual data HILO format
 
-	debug_polling("%s - Entry WPGBbar[%p] index[%x] \n", __func__, WPGBbar, index);
+	debug_polling("%s - Entry WPGBbar[%p] index[%x]\n", __func__, WPGBbar, index);
 
 	//--------------------------------------------------------------------
 	// READ - step 1
@@ -147,7 +147,7 @@ static u8 i2c_ctrl_read(struct controller *ctlr_ptr, void __iomem *WPGBbar, u8 i
 		ultemp = ultemp << 8;
 		data |= ultemp;
 	} else {
-		err("this controller type is not supported \n");
+		err("this controller type is not supported\n");
 		return HPC_ERROR;
 	}
@@ -258,7 +258,7 @@ static u8 i2c_ctrl_write(struct controller *ctlr_ptr, void __iomem *WPGBbar, u8
 		ultemp = ultemp << 8;
 		data |= ultemp;
 	} else {
-		err("this controller type is not supported \n");
+		err("this controller type is not supported\n");
 		return HPC_ERROR;
 	}

View File

@@ -629,15 +629,18 @@ static int sriov_add_vfs(struct pci_dev *dev, u16 num_vfs)
 	if (dev->no_vf_scan)
 		return 0;
 
+	pci_lock_rescan_remove();
 	for (i = 0; i < num_vfs; i++) {
 		rc = pci_iov_add_virtfn(dev, i);
 		if (rc)
			goto failed;
 	}
+	pci_unlock_rescan_remove();
 
 	return 0;
 failed:
 	while (i--)
 		pci_iov_remove_virtfn(dev, i);
+	pci_unlock_rescan_remove();
 
 	return rc;
 }
@@ -762,8 +765,10 @@ static void sriov_del_vfs(struct pci_dev *dev)
 	struct pci_sriov *iov = dev->sriov;
 	int i;
 
+	pci_lock_rescan_remove();
 	for (i = 0; i < iov->num_VFs; i++)
 		pci_iov_remove_virtfn(dev, i);
+	pci_unlock_rescan_remove();
 }
 
 static void sriov_disable(struct pci_dev *dev)


@@ -279,13 +279,21 @@ static int of_pci_prop_intr_map(struct pci_dev *pdev, struct of_changeset *ocs,
 		mapp++;
 		*mapp = out_irq[i].np->phandle;
 		mapp++;
-		if (addr_sz[i]) {
-			ret = of_property_read_u32_array(out_irq[i].np,
-							 "reg", mapp,
-							 addr_sz[i]);
-			if (ret)
-				goto failed;
-		}
+		/*
+		 * A device address does not affect the device <->
+		 * interrupt-controller HW connection for all
+		 * modern interrupt controllers; moreover, the
+		 * kernel (i.e., of_irq_parse_raw()) ignores the
+		 * values in the parent unit address cells while
+		 * parsing the interrupt-map property because they
+		 * are irrelevant for interrupt mapping in modern
+		 * systems.
+		 *
+		 * Leave the parent unit address initialized to 0 --
+		 * just take into account the #address-cells size
+		 * to build the property properly.
+		 */
 		mapp += addr_sz[i];
 		memcpy(mapp, out_irq[i].args,
 		       out_irq[i].args_count * sizeof(u32));


@@ -360,7 +360,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 pages_free:
 	devm_memunmap_pages(&pdev->dev, pgmap);
 pgmap_free:
-	devm_kfree(&pdev->dev, pgmap);
+	devm_kfree(&pdev->dev, p2p_pgmap);
 	return error;
 }
 EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource);
@@ -738,7 +738,7 @@ EXPORT_SYMBOL_GPL(pci_p2pdma_distance_many);
  * pci_has_p2pmem - check if a given PCI device has published any p2pmem
  * @pdev: PCI device to check
  */
-bool pci_has_p2pmem(struct pci_dev *pdev)
+static bool pci_has_p2pmem(struct pci_dev *pdev)
 {
 	struct pci_p2pdma *p2pdma;
 	bool res;
@@ -750,7 +750,6 @@ bool pci_has_p2pmem(struct pci_dev *pdev)
 	return res;
 }
-EXPORT_SYMBOL_GPL(pci_has_p2pmem);
 /**
  * pci_p2pmem_find_many - find a peer-to-peer DMA memory device compatible with


@@ -122,6 +122,8 @@ phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle)
 bool pci_acpi_preserve_config(struct pci_host_bridge *host_bridge)
 {
+	bool ret = false;
 	if (ACPI_HANDLE(&host_bridge->dev)) {
 		union acpi_object *obj;
@@ -135,11 +137,11 @@ bool pci_acpi_preserve_config(struct pci_host_bridge *host_bridge)
 					      1, DSM_PCI_PRESERVE_BOOT_CONFIG,
 					      NULL, ACPI_TYPE_INTEGER);
 		if (obj && obj->integer.value == 0)
-			return true;
+			ret = true;
 		ACPI_FREE(obj);
 	}
-	return false;
+	return ret;
 }
 /* _HPX PCI Setting Record (Type 0); same as _HPP */
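The pci_acpi_preserve_config() change above is a classic leak fix: the early `return true` skipped `ACPI_FREE(obj)`. Routing both outcomes through a single exit with a `ret` flag guarantees the free runs on every path. A userspace sketch of the same single-exit pattern, with a hypothetical allocation counter in place of the ACPI object machinery:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

static int live_objects;	/* counts allocations not yet freed */

struct obj { long value; };

/* Stand-in for the _DSM evaluation that returns an allocated object */
static struct obj *eval_dsm(long v)
{
	struct obj *o = malloc(sizeof(*o));

	if (o) {
		o->value = v;
		live_objects++;
	}
	return o;
}

static void obj_free(struct obj *o)
{
	if (o) {
		live_objects--;
		free(o);
	}
}

/* Single-exit pattern: the free runs regardless of the outcome */
static bool preserve_config(long dsm_value)
{
	bool ret = false;
	struct obj *o = eval_dsm(dsm_value);

	if (o && o->value == 0)
		ret = true;	/* an early return here would leak o */
	obj_free(o);
	return ret;
}
```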


@@ -1582,7 +1582,7 @@ static int pci_uevent(const struct device *dev, struct kobj_uevent_env *env)
 	return 0;
 }
-#if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH)
+#if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH) || defined(CONFIG_S390)
 /**
  * pci_uevent_ers - emit a uevent during recovery path of PCI device
  * @pdev: PCI device undergoing error recovery
@@ -1596,6 +1596,7 @@ void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type)
 	switch (err_type) {
 	case PCI_ERS_RESULT_NONE:
 	case PCI_ERS_RESULT_CAN_RECOVER:
+	case PCI_ERS_RESULT_NEED_RESET:
 		envp[idx++] = "ERROR_EVENT=BEGIN_RECOVERY";
 		envp[idx++] = "DEVICE_ONLINE=0";
 		break;


@@ -30,6 +30,7 @@
 #include <linux/msi.h>
 #include <linux/of.h>
 #include <linux/aperture.h>
+#include <linux/unaligned.h>
 #include "pci.h"
 #ifndef ARCH_PCI_DEV_GROUPS
@@ -177,6 +178,13 @@ static ssize_t resource_show(struct device *dev, struct device_attribute *attr,
 	for (i = 0; i < max; i++) {
 		struct resource *res = &pci_dev->resource[i];
+		struct resource zerores = {};
+
+		/* For backwards compatibility */
+		if (i >= PCI_BRIDGE_RESOURCES && i <= PCI_BRIDGE_RESOURCE_END &&
+		    res->flags & (IORESOURCE_UNSET | IORESOURCE_DISABLED))
+			res = &zerores;
+
 		pci_resource_to_user(pci_dev, i, res, &start, &end);
 		len += sysfs_emit_at(buf, len, "0x%016llx 0x%016llx 0x%016llx\n",
 				     (unsigned long long)start,
@@ -201,8 +209,14 @@ static ssize_t max_link_width_show(struct device *dev,
 				   struct device_attribute *attr, char *buf)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
+	ssize_t ret;
-	return sysfs_emit(buf, "%u\n", pcie_get_width_cap(pdev));
+	/* We read PCI_EXP_LNKCAP, so we need the device to be accessible. */
+	pci_config_pm_runtime_get(pdev);
+	ret = sysfs_emit(buf, "%u\n", pcie_get_width_cap(pdev));
+	pci_config_pm_runtime_put(pdev);
+
+	return ret;
 }
 static DEVICE_ATTR_RO(max_link_width);
@@ -214,7 +228,10 @@ static ssize_t current_link_speed_show(struct device *dev,
 	int err;
 	enum pci_bus_speed speed;
+	pci_config_pm_runtime_get(pci_dev);
 	err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat);
+	pci_config_pm_runtime_put(pci_dev);
 	if (err)
 		return -EINVAL;
@@ -231,7 +248,10 @@ static ssize_t current_link_width_show(struct device *dev,
 	u16 linkstat;
 	int err;
+	pci_config_pm_runtime_get(pci_dev);
 	err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat);
+	pci_config_pm_runtime_put(pci_dev);
 	if (err)
 		return -EINVAL;
@@ -247,7 +267,10 @@ static ssize_t secondary_bus_number_show(struct device *dev,
 	u8 sec_bus;
 	int err;
+	pci_config_pm_runtime_get(pci_dev);
 	err = pci_read_config_byte(pci_dev, PCI_SECONDARY_BUS, &sec_bus);
+	pci_config_pm_runtime_put(pci_dev);
 	if (err)
 		return -EINVAL;
@@ -263,7 +286,10 @@ static ssize_t subordinate_bus_number_show(struct device *dev,
 	u8 sub_bus;
 	int err;
+	pci_config_pm_runtime_get(pci_dev);
 	err = pci_read_config_byte(pci_dev, PCI_SUBORDINATE_BUS, &sub_bus);
+	pci_config_pm_runtime_put(pci_dev);
 	if (err)
 		return -EINVAL;
@@ -694,6 +720,22 @@ static ssize_t boot_vga_show(struct device *dev, struct device_attribute *attr,
 }
 static DEVICE_ATTR_RO(boot_vga);
+static ssize_t serial_number_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	struct pci_dev *pci_dev = to_pci_dev(dev);
+	u64 dsn;
+	u8 bytes[8];
+
+	dsn = pci_get_dsn(pci_dev);
+	if (!dsn)
+		return -EIO;
+
+	put_unaligned_be64(dsn, bytes);
+	return sysfs_emit(buf, "%8phD\n", bytes);
+}
+static DEVICE_ATTR_ADMIN_RO(serial_number);
 static ssize_t pci_read_config(struct file *filp, struct kobject *kobj,
 			       const struct bin_attribute *bin_attr, char *buf,
 			       loff_t off, size_t count)
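serial_number_show() converts the 64-bit Device Serial Number into big-endian bytes so that the `%8phD` format prints them hyphen-separated, most-significant byte first. A userspace sketch of the same conversion and formatting, with the kernel's put_unaligned_be64() and `%phD` emulated by hypothetical helpers:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Userspace stand-in for the kernel's put_unaligned_be64() */
static void put_be64(uint64_t v, uint8_t *p)
{
	for (int i = 0; i < 8; i++)
		p[i] = v >> (56 - 8 * i);
}

/* Emulates "%8phD": 8 bytes as lowercase hex, hyphen-separated */
static void format_dsn(uint64_t dsn, char *buf, size_t len)
{
	uint8_t bytes[8];
	size_t off = 0;

	put_be64(dsn, bytes);
	for (int i = 0; i < 8; i++)
		off += snprintf(buf + off, len - off, "%s%02x",
				i ? "-" : "", bytes[i]);
}
```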
@@ -1555,13 +1597,19 @@ static ssize_t __resource_resize_store(struct device *dev, int n,
 				       const char *buf, size_t count)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
-	unsigned long size, flags;
+	struct pci_bus *bus = pdev->bus;
+	struct resource *b_win, *res;
+	unsigned long size;
 	int ret, i;
 	u16 cmd;
 	if (kstrtoul(buf, 0, &size) < 0)
 		return -EINVAL;
+	b_win = pbus_select_window(bus, pci_resource_n(pdev, n));
+	if (!b_win)
+		return -EINVAL;
 	device_lock(dev);
 	if (dev->driver || pci_num_vf(pdev)) {
 		ret = -EBUSY;
@@ -1581,19 +1629,19 @@ static ssize_t __resource_resize_store(struct device *dev, int n,
 	pci_write_config_word(pdev, PCI_COMMAND,
 			      cmd & ~PCI_COMMAND_MEMORY);
-	flags = pci_resource_flags(pdev, n);
 	pci_remove_resource_files(pdev);
-	for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) {
-		if (pci_resource_len(pdev, i) &&
-		    pci_resource_flags(pdev, i) == flags)
+	pci_dev_for_each_resource(pdev, res, i) {
+		if (i >= PCI_BRIDGE_RESOURCES)
+			break;
+		if (b_win == pbus_select_window(bus, res))
 			pci_release_resource(pdev, i);
 	}
 	ret = pci_resize_resource(pdev, n, size);
-	pci_assign_unassigned_bus_resources(pdev->bus);
+	pci_assign_unassigned_bus_resources(bus);
 	if (pci_create_resource_files(pdev))
 		pci_warn(pdev, "Failed to recreate resource files after BAR resizing\n");
@@ -1698,6 +1746,7 @@ late_initcall(pci_sysfs_init);
 static struct attribute *pci_dev_dev_attrs[] = {
 	&dev_attr_boot_vga.attr,
+	&dev_attr_serial_number.attr,
 	NULL,
 };
@@ -1710,6 +1759,9 @@ static umode_t pci_dev_attrs_are_visible(struct kobject *kobj,
 	if (a == &dev_attr_boot_vga.attr && pci_is_vga(pdev))
 		return a->mode;
+	if (a == &dev_attr_serial_number.attr && pci_get_dsn(pdev))
+		return a->mode;
+
 	return 0;
 }


@@ -423,36 +423,10 @@ static int pci_dev_str_match(struct pci_dev *dev, const char *p,
 	return 1;
 }
-static u8 __pci_find_next_cap_ttl(struct pci_bus *bus, unsigned int devfn,
-				  u8 pos, int cap, int *ttl)
-{
-	u8 id;
-	u16 ent;
-	pci_bus_read_config_byte(bus, devfn, pos, &pos);
-	while ((*ttl)--) {
-		if (pos < 0x40)
-			break;
-		pos &= ~3;
-		pci_bus_read_config_word(bus, devfn, pos, &ent);
-		id = ent & 0xff;
-		if (id == 0xff)
-			break;
-		if (id == cap)
-			return pos;
-		pos = (ent >> 8);
-	}
-	return 0;
-}
 static u8 __pci_find_next_cap(struct pci_bus *bus, unsigned int devfn,
 			      u8 pos, int cap)
 {
-	int ttl = PCI_FIND_CAP_TTL;
-	return __pci_find_next_cap_ttl(bus, devfn, pos, cap, &ttl);
+	return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, bus, devfn);
 }
 u8 pci_find_next_capability(struct pci_dev *dev, u8 pos, int cap)
@@ -553,42 +527,11 @@ EXPORT_SYMBOL(pci_bus_find_capability);
  */
 u16 pci_find_next_ext_capability(struct pci_dev *dev, u16 start, int cap)
 {
-	u32 header;
-	int ttl;
-	u16 pos = PCI_CFG_SPACE_SIZE;
-	/* minimum 8 bytes per capability */
-	ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
 	if (dev->cfg_size <= PCI_CFG_SPACE_SIZE)
 		return 0;
-	if (start)
-		pos = start;
-	if (pci_read_config_dword(dev, pos, &header) != PCIBIOS_SUCCESSFUL)
-		return 0;
-	/*
-	 * If we have no capabilities, this is indicated by cap ID,
-	 * cap version and next pointer all being 0.
-	 */
-	if (header == 0)
-		return 0;
-	while (ttl-- > 0) {
-		if (PCI_EXT_CAP_ID(header) == cap && pos != start)
-			return pos;
-		pos = PCI_EXT_CAP_NEXT(header);
-		if (pos < PCI_CFG_SPACE_SIZE)
-			break;
-		if (pci_read_config_dword(dev, pos, &header) != PCIBIOS_SUCCESSFUL)
-			break;
-	}
-	return 0;
+	return PCI_FIND_NEXT_EXT_CAP(pci_bus_read_config, start, cap,
+				     dev->bus, dev->devfn);
 }
 EXPORT_SYMBOL_GPL(pci_find_next_ext_capability);
@@ -648,7 +591,7 @@ EXPORT_SYMBOL_GPL(pci_get_dsn);
 static u8 __pci_find_next_ht_cap(struct pci_dev *dev, u8 pos, int ht_cap)
 {
-	int rc, ttl = PCI_FIND_CAP_TTL;
+	int rc;
 	u8 cap, mask;
 	if (ht_cap == HT_CAPTYPE_SLAVE || ht_cap == HT_CAPTYPE_HOST)
@@ -656,8 +599,8 @@ static u8 __pci_find_next_ht_cap(struct pci_dev *dev, u8 pos, int ht_cap)
 	else
 		mask = HT_5BIT_CAP_MASK;
-	pos = __pci_find_next_cap_ttl(dev->bus, dev->devfn, pos,
-				      PCI_CAP_ID_HT, &ttl);
+	pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, pos,
+				PCI_CAP_ID_HT, dev->bus, dev->devfn);
 	while (pos) {
 		rc = pci_read_config_byte(dev, pos + 3, &cap);
 		if (rc != PCIBIOS_SUCCESSFUL)
@@ -666,9 +609,10 @@ static u8 __pci_find_next_ht_cap(struct pci_dev *dev, u8 pos, int ht_cap)
 		if ((cap & mask) == ht_cap)
 			return pos;
-		pos = __pci_find_next_cap_ttl(dev->bus, dev->devfn,
-					      pos + PCI_CAP_LIST_NEXT,
-					      PCI_CAP_ID_HT, &ttl);
+		pos = PCI_FIND_NEXT_CAP(pci_bus_read_config,
+					pos + PCI_CAP_LIST_NEXT,
+					PCI_CAP_ID_HT, dev->bus,
+					dev->devfn);
 	}
 	return 0;
@@ -1374,6 +1318,11 @@ int pci_power_up(struct pci_dev *dev)
 		return -EIO;
 	}
+	if (pci_dev_is_disconnected(dev)) {
+		dev->current_state = PCI_D3cold;
+		return -EIO;
+	}
 	pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
 	if (PCI_POSSIBLE_ERROR(pmcsr)) {
 		pci_err(dev, "Unable to change power state from %s to D0, device inaccessible\n",


@@ -2,12 +2,15 @@
 #ifndef DRIVERS_PCI_H
 #define DRIVERS_PCI_H
+#include <linux/align.h>
+#include <linux/bitfield.h>
 #include <linux/pci.h>
 struct pcie_tlp_log;
 /* Number of possible devfns: 0.0 to 1f.7 inclusive */
 #define MAX_NR_DEVFNS 256
+#define PCI_MAX_NR_DEVS 32
 #define MAX_NR_LANES 16
@@ -81,13 +84,102 @@ struct pcie_tlp_log;
 #define PCIE_MSG_CODE_DEASSERT_INTC	0x26
 #define PCIE_MSG_CODE_DEASSERT_INTD	0x27
+#define PCI_BUS_BRIDGE_IO_WINDOW	0
+#define PCI_BUS_BRIDGE_MEM_WINDOW	1
+#define PCI_BUS_BRIDGE_PREF_MEM_WINDOW	2
 extern const unsigned char pcie_link_speed[];
 extern bool pci_early_dump;
+extern struct mutex pci_rescan_remove_lock;
 bool pcie_cap_has_lnkctl(const struct pci_dev *dev);
 bool pcie_cap_has_lnkctl2(const struct pci_dev *dev);
 bool pcie_cap_has_rtctl(const struct pci_dev *dev);
+/* Standard Capability finder */
+/**
+ * PCI_FIND_NEXT_CAP - Find a PCI standard capability
+ * @read_cfg: Function pointer for reading PCI config space
+ * @start: Starting position to begin search
+ * @cap: Capability ID to find
+ * @args: Arguments to pass to read_cfg function
+ *
+ * Search the capability list in PCI config space to find @cap.
+ * Implements TTL (time-to-live) protection against infinite loops.
+ *
+ * Return: Position of the capability if found, 0 otherwise.
+ */
+#define PCI_FIND_NEXT_CAP(read_cfg, start, cap, args...)		\
+({									\
+	int __ttl = PCI_FIND_CAP_TTL;					\
+	u8 __id, __found_pos = 0;					\
+	u8 __pos = (start);						\
+	u16 __ent;							\
+	read_cfg##_byte(args, __pos, &__pos);				\
+	while (__ttl--) {						\
+		if (__pos < PCI_STD_HEADER_SIZEOF)			\
+			break;						\
+		__pos = ALIGN_DOWN(__pos, 4);				\
+		read_cfg##_word(args, __pos, &__ent);			\
+		__id = FIELD_GET(PCI_CAP_ID_MASK, __ent);		\
+		if (__id == 0xff)					\
+			break;						\
+		if (__id == (cap)) {					\
+			__found_pos = __pos;				\
+			break;						\
+		}							\
+		__pos = FIELD_GET(PCI_CAP_LIST_NEXT_MASK, __ent);	\
+	}								\
+	__found_pos;							\
+})
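PCI_FIND_NEXT_CAP encodes the standard capability-list walk: follow 8-bit next pointers from `start`, stop on pointers below the standard header, mask the reserved low two bits of each pointer, and bound the loop with a TTL so a corrupted (circular) list still terminates. A userspace sketch of the same walk over a fake config-space array (`find_next_cap()` and the array are illustrative, not kernel API; PCI_FIND_CAP_TTL is 48 in the kernel):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CAP_TTL 48

static uint8_t cfg[256];	/* fake 256-byte config space */

/* Walk the capability list starting at the pointer stored at 'start' */
static uint8_t find_next_cap(uint8_t start, uint8_t cap)
{
	int ttl = CAP_TTL;
	uint8_t pos = cfg[start];	/* read the next pointer at start */

	while (ttl--) {
		if (pos < 0x40)		/* pointers below the header are invalid */
			break;
		pos &= ~3;		/* low two bits are reserved */
		if (cfg[pos] == 0xff)	/* 0xff: device not responding */
			break;
		if (cfg[pos] == cap)
			return pos;
		pos = cfg[pos + 1];	/* follow the next pointer */
	}
	return 0;
}
```

The TTL is the only defense against a malformed list whose next pointers form a cycle, which is why both the old `__pci_find_next_cap_ttl()` and the new macro carry it.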
+/* Extended Capability finder */
+/**
+ * PCI_FIND_NEXT_EXT_CAP - Find a PCI extended capability
+ * @read_cfg: Function pointer for reading PCI config space
+ * @start: Starting position to begin search (0 for initial search)
+ * @cap: Extended capability ID to find
+ * @args: Arguments to pass to read_cfg function
+ *
+ * Search the extended capability list in PCI config space to find @cap.
+ * Implements TTL protection against infinite loops using a calculated
+ * maximum search count.
+ *
+ * Return: Position of the capability if found, 0 otherwise.
+ */
+#define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, args...)		\
+({									\
+	u16 __pos = (start) ?: PCI_CFG_SPACE_SIZE;			\
+	u16 __found_pos = 0;						\
+	int __ttl, __ret;						\
+	u32 __header;							\
+	__ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;	\
+	while (__ttl-- > 0 && __pos >= PCI_CFG_SPACE_SIZE) {		\
+		__ret = read_cfg##_dword(args, __pos, &__header);	\
+		if (__ret != PCIBIOS_SUCCESSFUL)			\
+			break;						\
+		if (__header == 0)					\
+			break;						\
+		if (PCI_EXT_CAP_ID(__header) == (cap) && __pos != start) {\
+			__found_pos = __pos;				\
+			break;						\
+		}							\
+		__pos = PCI_EXT_CAP_NEXT(__header);			\
+	}								\
+	__found_pos;							\
+})
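PCI_FIND_NEXT_EXT_CAP is the extended-config-space analogue: 32-bit headers at offsets at or above 0x100, 12-bit next pointers, and a TTL of (4096 - 256) / 8 since each capability occupies at least 8 bytes. A userspace sketch over a fake 4 KiB config space (the helpers are illustrative; header layout per PCIe: ID in bits 15:0, next pointer in bits 31:20):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CFG_SPACE_SIZE		0x100
#define CFG_SPACE_EXP_SIZE	0x1000

static uint32_t ecfg[CFG_SPACE_EXP_SIZE / 4];	/* fake extended config space */

static uint16_t ext_cap_id(uint32_t header)   { return header & 0xffff; }
static uint16_t ext_cap_next(uint32_t header) { return (header >> 20) & 0xffc; }

static uint16_t find_next_ext_cap(uint16_t start, uint16_t cap)
{
	uint16_t pos = start ? start : CFG_SPACE_SIZE;
	int ttl = (CFG_SPACE_EXP_SIZE - CFG_SPACE_SIZE) / 8;

	while (ttl-- > 0 && pos >= CFG_SPACE_SIZE) {
		uint32_t header = ecfg[pos / 4];

		if (header == 0)	/* all-zero header: empty list */
			break;
		if (ext_cap_id(header) == cap && pos != start)
			return pos;
		pos = ext_cap_next(header);
	}
	return 0;
}
```

The `pos != start` test is what makes the same routine serve both "find" (start == 0) and "find next after start".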
 /* Functions internal to the PCI core code */
 #ifdef CONFIG_DMI
@@ -330,7 +422,7 @@ struct device *pci_get_host_bridge_device(struct pci_dev *dev);
 void pci_put_host_bridge_device(struct device *dev);
 unsigned int pci_rescan_bus_bridge_resize(struct pci_dev *bridge);
-int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type);
+int pbus_reassign_bridge_resources(struct pci_bus *bus, struct resource *res);
 int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align);
 int pci_configure_extended_tags(struct pci_dev *dev, void *ign);
@@ -381,6 +473,8 @@ static inline int pci_resource_num(const struct pci_dev *dev,
 	return resno;
 }
+struct resource *pbus_select_window(struct pci_bus *bus,
+				    const struct resource *res);
 void pci_reassigndev_resource_alignment(struct pci_dev *dev);
 void pci_disable_bridge_window(struct pci_dev *dev);
 struct pci_bus *pci_bus_get(struct pci_bus *bus);


@@ -43,7 +43,7 @@
 #define AER_ERROR_SOURCES_MAX		128
 #define AER_MAX_TYPEOF_COR_ERRS		16	/* as per PCI_ERR_COR_STATUS */
-#define AER_MAX_TYPEOF_UNCOR_ERRS	27	/* as per PCI_ERR_UNCOR_STATUS*/
+#define AER_MAX_TYPEOF_UNCOR_ERRS	32	/* as per PCI_ERR_UNCOR_STATUS*/
 struct aer_err_source {
 	u32 status;			/* PCI_ERR_ROOT_STATUS */
@@ -96,11 +96,21 @@ struct aer_info {
 };
 #define AER_LOG_TLP_MASKS		(PCI_ERR_UNC_POISON_TLP|	\
+					PCI_ERR_UNC_POISON_BLK |	\
 					PCI_ERR_UNC_ECRC|		\
 					PCI_ERR_UNC_UNSUP|		\
 					PCI_ERR_UNC_COMP_ABORT|		\
 					PCI_ERR_UNC_UNX_COMP|		\
-					PCI_ERR_UNC_MALF_TLP)
+					PCI_ERR_UNC_ACSV |		\
+					PCI_ERR_UNC_MCBTLP |	 	\
+					PCI_ERR_UNC_ATOMEG |		\
+					PCI_ERR_UNC_DMWR_BLK |		\
+					PCI_ERR_UNC_XLAT_BLK |		\
+					PCI_ERR_UNC_TLPPRE |		\
+					PCI_ERR_UNC_MALF_TLP |		\
+					PCI_ERR_UNC_IDE_CHECK |		\
+					PCI_ERR_UNC_MISR_IDE |		\
+					PCI_ERR_UNC_PCRC_CHECK)
 #define SYSTEM_ERROR_INTR_ON_MESG_MASK	(PCI_EXP_RTCTL_SECEE|	\
 					PCI_EXP_RTCTL_SENFEE|	\
@@ -383,6 +393,10 @@ void pci_aer_init(struct pci_dev *dev)
 		return;
 	dev->aer_info = kzalloc(sizeof(*dev->aer_info), GFP_KERNEL);
+	if (!dev->aer_info) {
+		dev->aer_cap = 0;
+		return;
+	}
 	ratelimit_state_init(&dev->aer_info->correctable_ratelimit,
 			     DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST);
@@ -525,11 +539,11 @@ static const char *aer_uncorrectable_error_string[] = {
 	"AtomicOpBlocked",		/* Bit Position 24 */
 	"TLPBlockedErr",		/* Bit Position 25 */
 	"PoisonTLPBlocked",		/* Bit Position 26 */
-	NULL,				/* Bit Position 27 */
-	NULL,				/* Bit Position 28 */
-	NULL,				/* Bit Position 29 */
-	NULL,				/* Bit Position 30 */
-	NULL,				/* Bit Position 31 */
+	"DMWrReqBlocked",		/* Bit Position 27 */
+	"IDECheck",			/* Bit Position 28 */
+	"MisIDETLP",			/* Bit Position 29 */
+	"PCRC_CHECK",			/* Bit Position 30 */
+	"TLPXlatBlocked",		/* Bit Position 31 */
 };
 static const char *aer_agent_string[] = {
@@ -786,6 +800,9 @@ static void pci_rootport_aer_stats_incr(struct pci_dev *pdev,
 static int aer_ratelimit(struct pci_dev *dev, unsigned int severity)
 {
+	if (!dev->aer_info)
+		return 1;
 	switch (severity) {
 	case AER_NONFATAL:
 		return __ratelimit(&dev->aer_info->nonfatal_ratelimit);
@@ -796,6 +813,20 @@ static int aer_ratelimit(struct pci_dev *dev, unsigned int severity)
 	}
 }
+static bool tlp_header_logged(u32 status, u32 capctl)
+{
+	/* Errors for which a header is always logged (PCIe r7.0 sec 6.2.7) */
+	if (status & AER_LOG_TLP_MASKS)
+		return true;
+
+	/* Completion Timeout header is only logged on capable devices */
+	if (status & PCI_ERR_UNC_COMP_TIME &&
+	    capctl & PCI_ERR_CAP_COMP_TIME_LOG)
+		return true;
+
+	return false;
+}
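tlp_header_logged() centralizes the rule from PCIe r7.0 sec 6.2.7: most uncorrectable errors always log a TLP header, but Completion Timeout logs one only when the device advertises (and has enabled) that capability in its AER capabilities register. A userspace sketch of the predicate, with illustrative bit values rather than the real `PCI_ERR_*` layout:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit assignments, not the real PCI_ERR_* values */
#define ERR_UNC_MALF_TLP	(1u << 0)
#define ERR_UNC_COMP_TIME	(1u << 1)
#define ERR_CAP_COMP_TIME_LOG	(1u << 2)

#define LOG_TLP_MASKS		ERR_UNC_MALF_TLP

static bool tlp_header_logged(uint32_t status, uint32_t capctl)
{
	/* Errors that always log a header */
	if (status & LOG_TLP_MASKS)
		return true;

	/* Completion Timeout logs a header only if the device is capable */
	if ((status & ERR_UNC_COMP_TIME) && (capctl & ERR_CAP_COMP_TIME_LOG))
		return true;

	return false;
}
```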
 static void __aer_print_error(struct pci_dev *dev, struct aer_err_info *info)
 {
 	const char **strings;
@@ -910,7 +941,7 @@ void pci_print_aer(struct pci_dev *dev, int aer_severity,
 		status = aer->uncor_status;
 		mask = aer->uncor_mask;
 		info.level = KERN_ERR;
-		tlp_header_valid = status & AER_LOG_TLP_MASKS;
+		tlp_header_valid = tlp_header_logged(status, aer->cap_control);
 	}
 	info.status = status;
@@ -1401,7 +1432,7 @@ int aer_get_device_error_info(struct aer_err_info *info, int i)
 	pci_read_config_dword(dev, aer + PCI_ERR_CAP, &aercc);
 	info->first_error = PCI_ERR_CAP_FEP(aercc);
-	if (info->status & AER_LOG_TLP_MASKS) {
+	if (tlp_header_logged(info->status, aercc)) {
 		info->tlp_header_valid = 1;
 		pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG,
 				  aer + PCI_ERR_PREFIX_LOG,


@@ -15,6 +15,7 @@
 #include <linux/math.h>
 #include <linux/module.h>
 #include <linux/moduleparam.h>
+#include <linux/of.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/errno.h>
@@ -235,13 +236,15 @@ struct pcie_link_state {
 	u32 aspm_support:7;		/* Supported ASPM state */
 	u32 aspm_enabled:7;		/* Enabled ASPM state */
 	u32 aspm_capable:7;		/* Capable ASPM state with latency */
-	u32 aspm_default:7;		/* Default ASPM state by BIOS */
+	u32 aspm_default:7;		/* Default ASPM state by BIOS or
+					   override */
 	u32 aspm_disable:7;		/* Disabled ASPM state */
 	/* Clock PM state */
 	u32 clkpm_capable:1;		/* Clock PM capable? */
 	u32 clkpm_enabled:1;		/* Current Clock PM state */
-	u32 clkpm_default:1;		/* Default Clock PM state by BIOS */
+	u32 clkpm_default:1;		/* Default Clock PM state by BIOS or
+					   override */
 	u32 clkpm_disable:1;		/* Clock PM disabled */
 };
@@ -373,6 +376,18 @@ static void pcie_set_clkpm(struct pcie_link_state *link, int enable)
 	pcie_set_clkpm_nocheck(link, enable);
 }
+static void pcie_clkpm_override_default_link_state(struct pcie_link_state *link,
+						   int enabled)
+{
+	struct pci_dev *pdev = link->downstream;
+
+	/* For devicetree platforms, enable ClockPM by default */
+	if (of_have_populated_dt() && !enabled) {
+		link->clkpm_default = 1;
+		pci_info(pdev, "ASPM: DT platform, enabling ClockPM\n");
+	}
+}
 static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
 {
 	int capable = 1, enabled = 1;
@@ -395,6 +410,7 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
 	}
 	link->clkpm_enabled = enabled;
 	link->clkpm_default = enabled;
+	pcie_clkpm_override_default_link_state(link, enabled);
 	link->clkpm_capable = capable;
 	link->clkpm_disable = blacklist ? 1 : 0;
 }
@@ -788,6 +804,29 @@ static void aspm_l1ss_init(struct pcie_link_state *link)
 	aspm_calc_l12_info(link, parent_l1ss_cap, child_l1ss_cap);
 }
+#define FLAG(x, y, d) (((x) & (PCIE_LINK_STATE_##y)) ? d : "")
+static void pcie_aspm_override_default_link_state(struct pcie_link_state *link)
+{
+	struct pci_dev *pdev = link->downstream;
+	u32 override;
+
+	/* For devicetree platforms, enable all ASPM states by default */
+	if (of_have_populated_dt()) {
+		link->aspm_default = PCIE_LINK_STATE_ASPM_ALL;
+		override = link->aspm_default & ~link->aspm_enabled;
+		if (override)
+			pci_info(pdev, "ASPM: DT platform, enabling%s%s%s%s%s%s%s\n",
+				 FLAG(override, L0S_UP, " L0s-up"),
+				 FLAG(override, L0S_DW, " L0s-dw"),
+				 FLAG(override, L1, " L1"),
+				 FLAG(override, L1_1, " ASPM-L1.1"),
+				 FLAG(override, L1_2, " ASPM-L1.2"),
+				 FLAG(override, L1_1_PCIPM, " PCI-PM-L1.1"),
+				 FLAG(override, L1_2_PCIPM, " PCI-PM-L1.2"));
+	}
+}
 static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
 {
 	struct pci_dev *child = link->downstream, *parent = link->pdev;
@@ -868,6 +907,8 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
 	/* Save default state */
 	link->aspm_default = link->aspm_enabled;
+	pcie_aspm_override_default_link_state(link);
+
 	/* Setup initial capable state. Will be updated later */
 	link->aspm_capable = link->aspm_support;
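The FLAG() helper above builds a variable-length message from a fixed format string: each `%s` expands either to a state name or to the empty string, so only the states actually being overridden appear in the log line. A userspace sketch of the same token-pasting trick (the link-state bit values here are illustrative, not the kernel's `PCIE_LINK_STATE_*` ones):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative bit values */
#define LINK_STATE_L0S	(1u << 0)
#define LINK_STATE_L1	(1u << 1)
#define LINK_STATE_L1_1	(1u << 2)

/* ## pastes the short name onto the LINK_STATE_ prefix at compile time */
#define FLAG(x, y, d) (((x) & (LINK_STATE_##y)) ? d : "")

static void format_override(unsigned int override, char *buf, size_t len)
{
	/* Empty strings make absent states vanish from the output */
	snprintf(buf, len, "enabling%s%s%s",
		 FLAG(override, L0S, " L0s"),
		 FLAG(override, L1, " L1"),
		 FLAG(override, L1_1, " ASPM-L1.1"));
}
```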


@@ -108,6 +108,24 @@ static int report_normal_detected(struct pci_dev *dev, void *data)
 	return report_error_detected(dev, pci_channel_io_normal, data);
 }
+static int report_perm_failure_detected(struct pci_dev *dev, void *data)
+{
+	struct pci_driver *pdrv;
+	const struct pci_error_handlers *err_handler;
+
+	device_lock(&dev->dev);
+	pdrv = dev->driver;
+	if (!pdrv || !pdrv->err_handler || !pdrv->err_handler->error_detected)
+		goto out;
+
+	err_handler = pdrv->err_handler;
+	err_handler->error_detected(dev, pci_channel_io_perm_failure);
+out:
+	pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT);
+	device_unlock(&dev->dev);
+	return 0;
+}
 static int report_mmio_enabled(struct pci_dev *dev, void *data)
 {
 	struct pci_driver *pdrv;
@@ -135,7 +153,8 @@ static int report_slot_reset(struct pci_dev *dev, void *data)
 	device_lock(&dev->dev);
 	pdrv = dev->driver;
-	if (!pdrv || !pdrv->err_handler || !pdrv->err_handler->slot_reset)
+	if (!pci_dev_set_io_state(dev, pci_channel_io_normal) ||
+	    !pdrv || !pdrv->err_handler || !pdrv->err_handler->slot_reset)
 		goto out;
 	err_handler = pdrv->err_handler;
@@ -217,15 +236,10 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	pci_walk_bridge(bridge, pci_pm_runtime_get_sync, NULL);
 	pci_dbg(bridge, "broadcast error_detected message\n");
-	if (state == pci_channel_io_frozen) {
+	if (state == pci_channel_io_frozen)
 		pci_walk_bridge(bridge, report_frozen_detected, &status);
-		if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) {
-			pci_warn(bridge, "subordinate device reset failed\n");
-			goto failed;
-		}
-	} else {
+	else
 		pci_walk_bridge(bridge, report_normal_detected, &status);
-	}
 	if (status == PCI_ERS_RESULT_CAN_RECOVER) {
 		status = PCI_ERS_RESULT_RECOVERED;
@@ -233,6 +247,14 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 		pci_walk_bridge(bridge, report_mmio_enabled, &status);
 	}
+	if (status == PCI_ERS_RESULT_NEED_RESET ||
+	    state == pci_channel_io_frozen) {
+		if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) {
+			pci_warn(bridge, "subordinate device reset failed\n");
+			goto failed;
+		}
+	}
+
 	if (status == PCI_ERS_RESULT_NEED_RESET) {
 		/*
 		 * TODO: Should call platform-specific
@@ -269,7 +291,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 failed:
 	pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);
-	pci_uevent_ers(bridge, PCI_ERS_RESULT_DISCONNECT);
+	pci_walk_bridge(bridge, report_perm_failure_detected, NULL);
 	pci_info(bridge, "device recovery failed\n");
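The pcie_do_recovery() change decouples the subordinate reset from the link state: previously the reset ran only when the link was frozen; now it also runs when any driver returned PCI_ERS_RESULT_NEED_RESET from its error_detected() callback. The decision reduces to a small predicate, sketched here with an illustrative subset of the kernel's enums:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative subset of pci_ers_result_t / pci_channel_state_t */
enum ers_result { ERS_RECOVERED, ERS_NEED_RESET };
enum channel_state { IO_NORMAL, IO_FROZEN };

/*
 * Reset subordinates if the link was frozen (previous behavior) OR if
 * any driver asked for a reset from error_detected() (new behavior).
 */
static bool needs_subordinate_reset(enum ers_result status,
				    enum channel_state state)
{
	return status == ERS_NEED_RESET || state == IO_FROZEN;
}
```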


@ -3,6 +3,7 @@
* PCI detection and setup code * PCI detection and setup code
*/ */
#include <linux/array_size.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/init.h> #include <linux/init.h>
@@ -419,13 +420,17 @@ static void pci_read_bridge_io(struct pci_dev *dev, struct resource *res,
 		limit |= ((unsigned long) io_limit_hi << 16);
 	}
 
+	res->flags = (io_base_lo & PCI_IO_RANGE_TYPE_MASK) | IORESOURCE_IO;
 	if (base <= limit) {
-		res->flags = (io_base_lo & PCI_IO_RANGE_TYPE_MASK) | IORESOURCE_IO;
 		region.start = base;
 		region.end = limit + io_granularity - 1;
 		pcibios_bus_to_resource(dev->bus, res, &region);
 		if (log)
 			pci_info(dev, "  bridge window %pR\n", res);
+	} else {
+		resource_set_range(res, 0, 0);
+		res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
 	}
 }
@@ -440,13 +445,18 @@ static void pci_read_bridge_mmio(struct pci_dev *dev, struct resource *res,
 	pci_read_config_word(dev, PCI_MEMORY_LIMIT, &mem_limit_lo);
 	base = ((unsigned long) mem_base_lo & PCI_MEMORY_RANGE_MASK) << 16;
 	limit = ((unsigned long) mem_limit_lo & PCI_MEMORY_RANGE_MASK) << 16;
+
+	res->flags = (mem_base_lo & PCI_MEMORY_RANGE_TYPE_MASK) | IORESOURCE_MEM;
 	if (base <= limit) {
-		res->flags = (mem_base_lo & PCI_MEMORY_RANGE_TYPE_MASK) | IORESOURCE_MEM;
 		region.start = base;
 		region.end = limit + 0xfffff;
 		pcibios_bus_to_resource(dev->bus, res, &region);
 		if (log)
 			pci_info(dev, "  bridge window %pR\n", res);
+	} else {
+		resource_set_range(res, 0, 0);
+		res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
 	}
 }
@@ -489,16 +499,20 @@ static void pci_read_bridge_mmio_pref(struct pci_dev *dev, struct resource *res,
 		return;
 	}
 
+	res->flags = (mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) | IORESOURCE_MEM |
+		     IORESOURCE_PREFETCH;
+	if (res->flags & PCI_PREF_RANGE_TYPE_64)
+		res->flags |= IORESOURCE_MEM_64;
 	if (base <= limit) {
-		res->flags = (mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) |
-			     IORESOURCE_MEM | IORESOURCE_PREFETCH;
-		if (res->flags & PCI_PREF_RANGE_TYPE_64)
-			res->flags |= IORESOURCE_MEM_64;
 		region.start = base;
 		region.end = limit + 0xfffff;
 		pcibios_bus_to_resource(dev->bus, res, &region);
 		if (log)
 			pci_info(dev, "  bridge window %pR\n", res);
+	} else {
+		resource_set_range(res, 0, 0);
+		res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
 	}
 }
@@ -524,10 +538,14 @@ static void pci_read_bridge_windows(struct pci_dev *bridge)
 	}
 	if (io) {
 		bridge->io_window = 1;
-		pci_read_bridge_io(bridge, &res, true);
+		pci_read_bridge_io(bridge,
+				   pci_resource_n(bridge, PCI_BRIDGE_IO_WINDOW),
+				   true);
 	}
 
-	pci_read_bridge_mmio(bridge, &res, true);
+	pci_read_bridge_mmio(bridge,
+			     pci_resource_n(bridge, PCI_BRIDGE_MEM_WINDOW),
+			     true);
 
 	/*
 	 * DECchip 21050 pass 2 errata: the bridge may miss an address
@@ -565,7 +583,10 @@ static void pci_read_bridge_windows(struct pci_dev *bridge)
 		bridge->pref_64_window = 1;
 	}
 
-	pci_read_bridge_mmio_pref(bridge, &res, true);
+	pci_read_bridge_mmio_pref(bridge,
+				  pci_resource_n(bridge,
+						 PCI_BRIDGE_PREF_MEM_WINDOW),
+				  true);
 }
 
 void pci_read_bridge_bases(struct pci_bus *child)
@@ -585,9 +606,13 @@ void pci_read_bridge_bases(struct pci_bus *child)
 	for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++)
 		child->resource[i] = &dev->resource[PCI_BRIDGE_RESOURCES+i];
 
-	pci_read_bridge_io(child->self, child->resource[0], false);
-	pci_read_bridge_mmio(child->self, child->resource[1], false);
-	pci_read_bridge_mmio_pref(child->self, child->resource[2], false);
+	pci_read_bridge_io(child->self,
+			   child->resource[PCI_BUS_BRIDGE_IO_WINDOW], false);
+	pci_read_bridge_mmio(child->self,
+			     child->resource[PCI_BUS_BRIDGE_MEM_WINDOW], false);
+	pci_read_bridge_mmio_pref(child->self,
+				  child->resource[PCI_BUS_BRIDGE_PREF_MEM_WINDOW],
+				  false);
 
 	if (!dev->transparent)
 		return;
@@ -1912,16 +1937,16 @@ static int pci_intx_mask_broken(struct pci_dev *dev)
 static void early_dump_pci_device(struct pci_dev *pdev)
 {
-	u32 value[256 / 4];
+	u32 value[PCI_CFG_SPACE_SIZE / sizeof(u32)];
 	int i;
 
 	pci_info(pdev, "config space:\n");
 
-	for (i = 0; i < 256; i += 4)
-		pci_read_config_dword(pdev, i, &value[i / 4]);
+	for (i = 0; i < ARRAY_SIZE(value); i++)
+		pci_read_config_dword(pdev, i * sizeof(u32), &value[i]);
 
 	print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1,
-		       value, 256, false);
+		       value, ARRAY_SIZE(value) * sizeof(u32), false);
 }
 
 static const char *pci_type_str(struct pci_dev *dev)
@@ -1985,8 +2010,8 @@ int pci_setup_device(struct pci_dev *dev)
 	dev->sysdata = dev->bus->sysdata;
 	dev->dev.parent = dev->bus->bridge;
 	dev->dev.bus = &pci_bus_type;
-	dev->hdr_type = hdr_type & 0x7f;
-	dev->multifunction = !!(hdr_type & 0x80);
+	dev->hdr_type = FIELD_GET(PCI_HEADER_TYPE_MASK, hdr_type);
+	dev->multifunction = FIELD_GET(PCI_HEADER_TYPE_MFD, hdr_type);
 	dev->error_state = pci_channel_io_normal;
 	set_pcie_port_type(dev);
@@ -2516,9 +2541,15 @@ static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, in
 	struct device_node *np;
 
 	np = of_pci_find_child_device(dev_of_node(&bus->dev), devfn);
-	if (!np || of_find_device_by_node(np))
+	if (!np)
 		return NULL;
 
+	pdev = of_find_device_by_node(np);
+	if (pdev) {
+		put_device(&pdev->dev);
+		goto err_put_of_node;
+	}
+
 	/*
 	 * First check whether the pwrctrl device really needs to be created or
 	 * not. This is decided based on at least one of the power supplies
@@ -2526,17 +2557,24 @@ static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, in
 	 */
 	if (!of_pci_supply_present(np)) {
 		pr_debug("PCI/pwrctrl: Skipping OF node: %s\n", np->name);
-		return NULL;
+		goto err_put_of_node;
 	}
 
 	/* Now create the pwrctrl device */
 	pdev = of_platform_device_create(np, NULL, &host->dev);
 	if (!pdev) {
 		pr_err("PCI/pwrctrl: Failed to create pwrctrl device for node: %s\n", np->name);
-		return NULL;
+		goto err_put_of_node;
 	}
 
+	of_node_put(np);
 	return pdev;
+
+err_put_of_node:
+	of_node_put(np);
+	return NULL;
 }
 #else
 static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, int devfn)
@@ -3045,14 +3083,14 @@ static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus,
 {
 	unsigned int used_buses, normal_bridges = 0, hotplug_bridges = 0;
 	unsigned int start = bus->busn_res.start;
-	unsigned int devfn, cmax, max = start;
+	unsigned int devnr, cmax, max = start;
 	struct pci_dev *dev;
 
 	dev_dbg(&bus->dev, "scanning bus\n");
 
 	/* Go find them, Rover! */
-	for (devfn = 0; devfn < 256; devfn += 8)
-		pci_scan_slot(bus, devfn);
+	for (devnr = 0; devnr < PCI_MAX_NR_DEVS; devnr++)
+		pci_scan_slot(bus, PCI_DEVFN(devnr, 0));
 
 	/* Reserve buses for SR-IOV capability */
 	used_buses = pci_iov_bus_range(bus);
@@ -3469,7 +3507,7 @@ EXPORT_SYMBOL_GPL(pci_rescan_bus);
  * pci_rescan_bus(), pci_rescan_bus_bridge_resize() and PCI device removal
  * routines should always be executed under this mutex.
  */
-static DEFINE_MUTEX(pci_rescan_remove_lock);
+DEFINE_MUTEX(pci_rescan_remove_lock);
 
 void pci_lock_rescan_remove(void)
 {


@@ -49,13 +49,14 @@ static int pci_pwrctrl_slot_probe(struct platform_device *pdev)
 	ret = regulator_bulk_enable(slot->num_supplies, slot->supplies);
 	if (ret < 0) {
 		dev_err_probe(dev, ret, "Failed to enable slot regulators\n");
-		goto err_regulator_free;
+		regulator_bulk_free(slot->num_supplies, slot->supplies);
+		return ret;
 	}
 
 	ret = devm_add_action_or_reset(dev, devm_pci_pwrctrl_slot_power_off,
 				       slot);
 	if (ret)
-		goto err_regulator_disable;
+		return ret;
 
 	clk = devm_clk_get_optional_enabled(dev, NULL);
 	if (IS_ERR(clk)) {
@@ -70,13 +71,6 @@ static int pci_pwrctrl_slot_probe(struct platform_device *pdev)
 		return dev_err_probe(dev, ret, "Failed to register pwrctrl driver\n");
 
 	return 0;
-
-err_regulator_disable:
-	regulator_bulk_disable(slot->num_supplies, slot->supplies);
-err_regulator_free:
-	regulator_bulk_free(slot->num_supplies, slot->supplies);
-	return ret;
 }
 
 static const struct of_device_id pci_pwrctrl_slot_of_match[] = {


@@ -2717,6 +2717,7 @@ static void quirk_disable_msi(struct pci_dev *dev)
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8131_BRIDGE, quirk_disable_msi);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0xa238, quirk_disable_msi);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x5a3f, quirk_disable_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_RDC, 0x1031, quirk_disable_msi);
 
 /*
  * The APC bridge device in AMD 780 family northbridges has some random


@@ -31,6 +31,8 @@ static void pci_pwrctrl_unregister(struct device *dev)
 		return;
 
 	of_device_unregister(pdev);
+	put_device(&pdev->dev);
+
 	of_node_clear_flag(np, OF_POPULATED);
 }
 
@@ -138,6 +140,7 @@ static void pci_remove_bus_device(struct pci_dev *dev)
  */
 void pci_stop_and_remove_bus_device(struct pci_dev *dev)
 {
+	lockdep_assert_held(&pci_rescan_remove_lock);
 	pci_stop_bus_device(dev);
 	pci_remove_bus_device(dev);
 }

(diff for one file suppressed because it is too large)


@@ -359,6 +359,9 @@ int pci_assign_resource(struct pci_dev *dev, int resno)
 	res->flags &= ~IORESOURCE_UNSET;
 	res->flags &= ~IORESOURCE_STARTALIGN;
+	if (resno >= PCI_BRIDGE_RESOURCES && resno <= PCI_BRIDGE_RESOURCE_END)
+		res->flags &= ~IORESOURCE_DISABLED;
+
 	pci_info(dev, "%s %pR: assigned\n", res_name, res);
 	if (resno < PCI_BRIDGE_RESOURCES)
 		pci_update_resource(dev, resno);
@@ -406,20 +409,25 @@ int pci_reassign_resource(struct pci_dev *dev, int resno,
 	return 0;
 }
 
-void pci_release_resource(struct pci_dev *dev, int resno)
+int pci_release_resource(struct pci_dev *dev, int resno)
 {
 	struct resource *res = pci_resource_n(dev, resno);
 	const char *res_name = pci_resource_name(dev, resno);
+	int ret;
 
 	if (!res->parent)
-		return;
+		return 0;
 
 	pci_info(dev, "%s %pR: releasing\n", res_name, res);
 
-	release_resource(res);
+	ret = release_resource(res);
+	if (ret)
+		return ret;
+
 	res->end = resource_size(res) - 1;
 	res->start = 0;
 	res->flags |= IORESOURCE_UNSET;
+
+	return 0;
 }
 EXPORT_SYMBOL(pci_release_resource);
@@ -488,7 +496,7 @@ int pci_resize_resource(struct pci_dev *dev, int resno, int size)
 	/* Check if the new config works by trying to assign everything. */
 	if (dev->bus->self) {
-		ret = pci_reassign_bridge_resources(dev->bus->self, res->flags);
+		ret = pbus_reassign_bridge_resources(dev->bus, res);
 		if (ret)
 			goto error_resize;
 	}
@@ -522,22 +530,26 @@ int pci_enable_resources(struct pci_dev *dev, int mask)
 		if (pci_resource_is_optional(dev, i))
 			continue;
 
-		if (r->flags & IORESOURCE_UNSET) {
-			pci_err(dev, "%s %pR: not assigned; can't enable device\n",
-				r_name, r);
-			return -EINVAL;
+		if (i < PCI_BRIDGE_RESOURCES) {
+			if (r->flags & IORESOURCE_UNSET) {
+				pci_err(dev, "%s %pR: not assigned; can't enable device\n",
+					r_name, r);
+				return -EINVAL;
+			}
+
+			if (!r->parent) {
+				pci_err(dev, "%s %pR: not claimed; can't enable device\n",
+					r_name, r);
+				return -EINVAL;
+			}
 		}
 
-		if (!r->parent) {
-			pci_err(dev, "%s %pR: not claimed; can't enable device\n",
-				r_name, r);
-			return -EINVAL;
+		if (r->parent) {
+			if (r->flags & IORESOURCE_IO)
+				cmd |= PCI_COMMAND_IO;
+			if (r->flags & IORESOURCE_MEM)
+				cmd |= PCI_COMMAND_MEMORY;
 		}
-
-		if (r->flags & IORESOURCE_IO)
-			cmd |= PCI_COMMAND_IO;
-		if (r->flags & IORESOURCE_MEM)
-			cmd |= PCI_COMMAND_MEMORY;
 	}
 
 	if (cmd != old_cmd) {


@@ -269,10 +269,9 @@ static void mrpc_event_work(struct work_struct *work)
 
 	dev_dbg(&stdev->dev, "%s\n", __func__);
 
-	mutex_lock(&stdev->mrpc_mutex);
+	guard(mutex)(&stdev->mrpc_mutex);
 	cancel_delayed_work(&stdev->mrpc_timeout);
 	mrpc_complete_cmd(stdev);
-	mutex_unlock(&stdev->mrpc_mutex);
 }
 
 static void mrpc_error_complete_cmd(struct switchtec_dev *stdev)
@@ -1322,19 +1321,19 @@ static void stdev_kill(struct switchtec_dev *stdev)
 	cancel_delayed_work_sync(&stdev->mrpc_timeout);
 
 	/* Mark the hardware as unavailable and complete all completions */
-	mutex_lock(&stdev->mrpc_mutex);
-	stdev->alive = false;
-
-	/* Wake up and kill any users waiting on an MRPC request */
-	list_for_each_entry_safe(stuser, tmpuser, &stdev->mrpc_queue, list) {
-		stuser->cmd_done = true;
-		wake_up_interruptible(&stuser->cmd_comp);
-		list_del_init(&stuser->list);
-		stuser_put(stuser);
+	scoped_guard (mutex, &stdev->mrpc_mutex) {
+		stdev->alive = false;
+
+		/* Wake up and kill any users waiting on an MRPC request */
+		list_for_each_entry_safe(stuser, tmpuser, &stdev->mrpc_queue, list) {
+			stuser->cmd_done = true;
+			wake_up_interruptible(&stuser->cmd_comp);
+			list_del_init(&stuser->list);
+			stuser_put(stuser);
+		}
 	}
-	mutex_unlock(&stdev->mrpc_mutex);
 
 	/* Wake up any users waiting on event_wq */
 	wake_up_interruptible(&stdev->event_wq);
 }


@@ -1655,6 +1655,19 @@ int pinctrl_pm_select_default_state(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(pinctrl_pm_select_default_state);
 
+/**
+ * pinctrl_pm_select_init_state() - select init pinctrl state for PM
+ * @dev: device to select init state for
+ */
+int pinctrl_pm_select_init_state(struct device *dev)
+{
+	if (!dev->pins)
+		return 0;
+
+	return pinctrl_select_bound_state(dev, dev->pins->init_state);
+}
+EXPORT_SYMBOL_GPL(pinctrl_pm_select_init_state);
+
 /**
  * pinctrl_pm_select_sleep_state() - select sleep pinctrl state for PM
  * @dev: device to select sleep state for


@@ -14367,7 +14367,7 @@ lpfc_sli_prep_dev_for_perm_failure(struct lpfc_hba *phba)
  * as desired.
  *
  * Return codes
- * 	PCI_ERS_RESULT_CAN_RECOVER - can be recovered with reset_link
+ * 	PCI_ERS_RESULT_CAN_RECOVER - can be recovered without reset
  * 	PCI_ERS_RESULT_NEED_RESET - need to reset before recovery
  * 	PCI_ERS_RESULT_DISCONNECT - device could not be recovered
  **/


@@ -7884,11 +7884,6 @@ qla2xxx_pci_slot_reset(struct pci_dev *pdev)
 	    "Slot Reset.\n");
 
 	ha->pci_error_state = QLA_PCI_SLOT_RESET;
-	/* Workaround: qla2xxx driver which access hardware earlier
-	 * needs error state to be pci_channel_io_online.
-	 * Otherwise mailbox command timesout.
-	 */
-	pdev->error_state = pci_channel_io_normal;
 
 	pci_restore_state(pdev);


@@ -21,7 +21,6 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 		u64 offset);
 int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients,
 			     int num_clients, bool verbose);
-bool pci_has_p2pmem(struct pci_dev *pdev);
 struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients);
 void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size);
 void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size);
@@ -45,10 +44,6 @@ static inline int pci_p2pdma_distance_many(struct pci_dev *provider,
 {
 	return -1;
 }
-static inline bool pci_has_p2pmem(struct pci_dev *pdev)
-{
-	return false;
-}
 static inline struct pci_dev *pci_p2pmem_find_many(struct device **clients,
 						   int num_clients)
 {


@@ -119,7 +119,8 @@ enum {
 #define PCI_CB_BRIDGE_MEM_1_WINDOW	(PCI_BRIDGE_RESOURCES + 3)
 
 	/* Total number of bridge resources for P2P and CardBus */
-#define PCI_BRIDGE_RESOURCE_NUM 4
+#define PCI_P2P_BRIDGE_RESOURCE_NUM 3
+#define PCI_BRIDGE_RESOURCE_NUM 4
 
 	/* Resources assigned to buses behind the bridge */
 	PCI_BRIDGE_RESOURCES,
@@ -1417,7 +1418,7 @@ void pci_reset_secondary_bus(struct pci_dev *dev);
 void pcibios_reset_secondary_bus(struct pci_dev *dev);
 void pci_update_resource(struct pci_dev *dev, int resno);
 int __must_check pci_assign_resource(struct pci_dev *dev, int i);
-void pci_release_resource(struct pci_dev *dev, int resno);
+int pci_release_resource(struct pci_dev *dev, int resno);
 static inline int pci_rebar_bytes_to_size(u64 bytes)
 {
 	bytes = roundup_pow_of_two(bytes);
@@ -2764,7 +2765,7 @@ static inline bool pci_is_thunderbolt_attached(struct pci_dev *pdev)
 	return false;
 }
 
-#if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH)
+#if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH) || defined(CONFIG_S390)
 void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type);
 #endif

(some files were not shown because too many files have changed in this diff)