LoongArch KVM changes for v6.18

1. Add PTW feature detection on new hardware.
2. Add sign extension with kernel MMIO/IOCSR emulation.
3. Improve in-kernel IPI emulation.
4. Improve in-kernel PCH-PIC emulation.
5. Move kvm_iocsr tracepoint out of generic code.
 -----BEGIN PGP SIGNATURE-----
 
 iQJKBAABCAA0FiEEzOlt8mkP+tbeiYy5AoYrw/LiJnoFAmjTd/wWHGNoZW5odWFj
 YWlAa2VybmVsLm9yZwAKCRAChivD8uImep7+D/4uA7triOH1eB4TikB6NlF3T+ry
 1qLycMy+O8mm24UxT0217sqOtKiZHrglyJB+cCmxPNjq0vWX/F08VgNoh8+6feWL
 6X28YlJ8qcRGCTUxlaaleEGIiXt7lbbnZKFfiBn8Ibb9rHn3tE8V738Kzm+SV1Dr
 bSZlAGnAqp/pRI1UFBQ0T+GpqQz+UvDw8JCOJj5Vs5UylhDY3atPaNhLjfR2tkUh
 AFrqr87gKPHdJxmk//7u+e6QLGViBB9aO1fNMP6y8gViJRfkCEbbm8XYe0qe+SmO
 QpLKuHBEVo7C8vOzemEVieQX2VujDcGSDDRGCU3wKbIpbIQgmOGbbsKfrKf9FxaR
 8ieNyP3UExr2ZvYV9SOLqeLD2K2yox9EkO7tD2CM9kwev9HUtr+6e/OaIP3Sjth2
 rd7V47x/8bCdgt0grQXIxejebbO5NawnFjXlFS7M2SRQXLMtyfFbiuZVGw0kZ+nn
 rzMeodQVmGi518nmmc9YcyW0/R8qer9DaaiN1ybgVF/4ZSK+LhlZo7xv4Dv5bWuv
 ThR89Iz09xUmmGYDniAR6q3/zH/52lJAuNU4tQGwB6O/+z8qJR0tZFM86KnApyLU
 pGI3q1s0g9ZK9mouaM+jcV5/fzFGoTXGkIQqaaXOqob97klagudAdBWKN74YynT2
 rAU7sWXF6F9WsKX2sA==
 =keSO
 -----END PGP SIGNATURE-----

Merge tag 'loongarch-kvm-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson into HEAD

LoongArch KVM changes for v6.18

1. Add PTW feature detection on new hardware.
2. Add sign extension with kernel MMIO/IOCSR emulation.
3. Improve in-kernel IPI emulation.
4. Improve in-kernel PCH-PIC emulation.
5. Move kvm_iocsr tracepoint out of generic code.
Paolo Bonzini 2025-09-30 13:23:44 -04:00
commit 6a13749717
495 changed files with 5521 additions and 2513 deletions


@ -586,6 +586,7 @@ What: /sys/devices/system/cpu/vulnerabilities
/sys/devices/system/cpu/vulnerabilities/srbds
/sys/devices/system/cpu/vulnerabilities/tsa
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
/sys/devices/system/cpu/vulnerabilities/vmscape
Date: January 2018
Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Information about CPU vulnerabilities


@ -26,3 +26,4 @@ are configurable at compile, boot or run time.
rsb
old_microcode
indirect-target-selection
vmscape


@ -0,0 +1,110 @@
.. SPDX-License-Identifier: GPL-2.0
VMSCAPE
=======
VMSCAPE is a vulnerability that may allow a guest to influence the branch
prediction in host userspace. It particularly affects hypervisors like QEMU.
Even if a hypervisor does not hold sensitive data such as disk encryption keys,
guest userspace may be able to attack the guest kernel using the hypervisor as
a confused deputy.
Affected processors
-------------------
The following CPU families are affected by VMSCAPE:
**Intel processors:**
- Skylake generation (Parts without Enhanced-IBRS)
- Cascade Lake generation - (Parts affected by ITS guest/host separation)
- Alder Lake and newer (Parts affected by BHI)
Note that BHI-affected parts that use the BHB-clearing software mitigation, e.g.
Icelake, are not vulnerable to VMSCAPE.
**AMD processors:**
- Zen series (families 0x17, 0x19, 0x1a)
**Hygon processors:**
- Family 0x18
Mitigation
----------
Conditional IBPB
----------------
The kernel tracks when a CPU has run a potentially malicious guest and issues an
IBPB before the first exit to userspace after VM-exit. If userspace did not run
between VM-exit and the next VM-entry, no IBPB is issued.
Note that the existing userspace mitigations against Spectre-v2 remain effective
in protecting userspace. They are, however, insufficient to protect userspace
VMMs from a malicious guest, because Spectre-v2 mitigations are applied at
context-switch time, while a userspace VMM can run after a VM-exit without a
context switch.
Vulnerability enumeration and mitigation are not applied inside a guest. This is
because nested hypervisors should already be deploying IBPB to isolate
themselves from nested guests.
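For illustration, a minimal C sketch of the conditional-IBPB pattern described
above (the two helper names are hypothetical; the per-CPU flag, feature bit and
barrier call mirror the x86 implementation added elsewhere in this series):

/* Sketch only -- assumes <linux/percpu.h> and <asm/nospec-branch.h>. */
DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user);

/* VM-exit path: remember that a potentially malicious guest ran on this CPU. */
static inline void vmscape_mark_guest_ran(void)		/* hypothetical helper */
{
	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
		this_cpu_write(x86_ibpb_exit_to_user, true);
}

/* Exit-to-userspace path: flush branch predictor state only when needed. */
static inline void vmscape_ibpb_before_user(void)	/* hypothetical helper */
{
	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
	    this_cpu_read(x86_ibpb_exit_to_user)) {
		indirect_branch_prediction_barrier();
		this_cpu_write(x86_ibpb_exit_to_user, false);
	}
}

Because the flag is only consulted on the exit-to-userspace path, a CPU that goes
straight from VM-exit back to VM-entry never pays for the barrier.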
SMT considerations
------------------
When Simultaneous Multi-Threading (SMT) is enabled, hypervisors can be
vulnerable to cross-thread attacks. For complete protection against VMSCAPE
attacks in SMT environments, STIBP should be enabled.
The kernel issues a warning if SMT is enabled without adequate STIBP
protection. The warning is not issued when:
- SMT is disabled
- STIBP is enabled system-wide
- Intel eIBRS is enabled (which implies STIBP protection)
System information and options
------------------------------
The sysfs file showing VMSCAPE mitigation status is:
/sys/devices/system/cpu/vulnerabilities/vmscape
The possible values in this file are:
* 'Not affected':
The processor is not vulnerable to VMSCAPE attacks.
* 'Vulnerable':
The processor is vulnerable and no mitigation has been applied.
* 'Mitigation: IBPB before exit to userspace':
Conditional IBPB mitigation is enabled. The kernel tracks when a CPU has
run a potentially malicious guest and issues an IBPB before the first
exit to userspace after VM-exit.
* 'Mitigation: IBPB on VMEXIT':
IBPB is issued on every VM-exit. This occurs when other mitigations like
RETBLEED or SRSO are already issuing IBPB on VM-exit.
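For scripting or monitoring, the status can simply be read from that file; a
minimal userspace sketch (path taken from above, error handling kept short):

/* Sketch: print the current VMSCAPE mitigation status from sysfs. */
#include <stdio.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/vmscape", "r");

	if (!f) {
		perror("vmscape sysfs attribute not available");
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		printf("VMSCAPE status: %s", line);	/* line already ends in '\n' */
	fclose(f);
	return 0;
}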
Mitigation control on the kernel command line
----------------------------------------------
The mitigation can be controlled via the ``vmscape=`` command line parameter:
* ``vmscape=off``:
Disable the VMSCAPE mitigation.
* ``vmscape=ibpb``:
Enable conditional IBPB mitigation (default when CONFIG_MITIGATION_VMSCAPE=y).
* ``vmscape=force``:
Force vulnerability detection and mitigation even on processors that are
not known to be affected.


@ -3829,6 +3829,7 @@
srbds=off [X86,INTEL]
ssbd=force-off [ARM64]
tsx_async_abort=off [X86]
vmscape=off [X86]
Exceptions:
This does not have any effect on
@ -8041,6 +8042,16 @@
vmpoff= [KNL,S390] Perform z/VM CP command after power off.
Format: <command>
vmscape= [X86] Controls mitigation for VMscape attacks.
VMscape attacks can leak information from a userspace
hypervisor to a guest via speculative side-channels.
off - disable the mitigation
ibpb - use Indirect Branch Prediction Barrier
(IBPB) mitigation (default)
force - force vulnerability detection even on
unaffected processors
vsyscall= [X86-64,EARLY]
Controls the behavior of vsyscalls (i.e. calls to
fixed addresses of 0xffffffffff600x00 from legacy


@ -92,8 +92,12 @@ required:
anyOf:
- required:
- qcom,powered-remotely
- num-channels
- qcom,num-ees
- required:
- qcom,controlled-remotely
- num-channels
- qcom,num-ees
- required:
- clocks
- clock-names


@ -47,21 +47,19 @@ properties:
const: 0
clocks:
minItems: 1
maxItems: 3
description: Reference clocks for CP110; MG clock, MG Core clock, AXI clock
clock-names:
items:
- const: mg_clk
- const: mg_core_clk
- const: axi_clk
minItems: 1
maxItems: 3
marvell,system-controller:
description: Phandle to the Marvell system controller (CP110 only)
$ref: /schemas/types.yaml#/definitions/phandle
patternProperties:
'^phy@[0-2]$':
'^phy@[0-5]$':
description: A COMPHY lane child node
type: object
additionalProperties: false
@ -69,10 +67,14 @@ patternProperties:
properties:
reg:
description: COMPHY lane number
maximum: 5
'#phy-cells':
const: 1
connector:
type: object
required:
- reg
- '#phy-cells'
@ -91,13 +93,24 @@ allOf:
then:
properties:
clocks: false
clock-names: false
clocks:
maxItems: 1
clock-names:
const: xtal
required:
- reg-names
else:
properties:
clocks:
minItems: 3
clock-names:
items:
- const: mg_clk
- const: mg_core_clk
- const: axi_clk
required:
- marvell,system-controller


@ -176,6 +176,8 @@ allOf:
compatible:
contains:
enum:
- qcom,sa8775p-qmp-gen4x2-pcie-phy
- qcom,sa8775p-qmp-gen4x4-pcie-phy
- qcom,sc8280xp-qmp-gen3x1-pcie-phy
- qcom,sc8280xp-qmp-gen3x2-pcie-phy
- qcom,sc8280xp-qmp-gen3x4-pcie-phy
@ -197,8 +199,6 @@ allOf:
contains:
enum:
- qcom,qcs8300-qmp-gen4x2-pcie-phy
- qcom,sa8775p-qmp-gen4x2-pcie-phy
- qcom,sa8775p-qmp-gen4x4-pcie-phy
then:
properties:
clocks:


@ -48,7 +48,6 @@ allOf:
oneOf:
- required: [ clock-frequency ]
- required: [ clocks ]
- if:
properties:
compatible:
@ -60,12 +59,39 @@ allOf:
items:
- const: uartclk
- const: reg
else:
- if:
properties:
compatible:
contains:
const: spacemit,k1-uart
then:
properties:
clock-names:
items:
- const: core
- const: bus
- if:
properties:
compatible:
contains:
enum:
- spacemit,k1-uart
- nxp,lpc1850-uart
then:
required:
- clocks
- clock-names
properties:
clocks:
minItems: 2
clock-names:
minItems: 2
else:
properties:
clocks:
maxItems: 1
clock-names:
maxItems: 1
properties:
compatible:
@ -162,6 +188,9 @@ properties:
minItems: 1
maxItems: 2
oneOf:
- enum:
- main
- uart
- items:
- const: core
- const: bus
@ -264,29 +293,6 @@ required:
- reg
- interrupts
if:
properties:
compatible:
contains:
enum:
- spacemit,k1-uart
- nxp,lpc1850-uart
then:
required:
- clocks
- clock-names
properties:
clocks:
minItems: 2
clock-names:
minItems: 2
else:
properties:
clocks:
maxItems: 1
clock-names:
maxItems: 1
unevaluatedProperties: false
examples:


@ -41,7 +41,7 @@ properties:
- const: dma_intr2
clocks:
minItems: 1
maxItems: 1
clock-names:
const: sw_baud


@ -575,8 +575,8 @@ operations:
- nat-dst
- timeout
- mark
- counter-orig
- counter-reply
- counters-orig
- counters-reply
- use
- id
- nat-dst
@ -591,7 +591,6 @@ operations:
request:
value: 0x101
attributes:
- nfgen-family
- mark
- filter
- status
@ -608,8 +607,8 @@ operations:
- nat-dst
- timeout
- mark
- counter-orig
- counter-reply
- counters-orig
- counters-reply
- use
- id
- nat-dst


@ -28,13 +28,13 @@ definitions:
traffic-patterns it can take a long time until the
MPTCP_EVENT_ESTABLISHED is sent.
Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, sport,
dport, server-side.
dport, server-side, [flags].
-
name: established
doc: >-
A MPTCP connection is established (can start new subflows).
Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, sport,
dport, server-side.
dport, server-side, [flags].
-
name: closed
doc: >-
@ -256,7 +256,7 @@ attribute-sets:
type: u32
-
name: if-idx
type: u32
type: s32
-
name: reset-reason
type: u32


@ -742,7 +742,7 @@ The broadcast manager sends responses to user space in the same form:
struct timeval ival1, ival2; /* count and subsequent interval */
canid_t can_id; /* unique can_id for task */
__u32 nframes; /* number of can_frames following */
struct can_frame frames[0];
struct can_frame frames[];
};
The aligned payload 'frames' uses the same basic CAN frame structure defined


@ -60,10 +60,10 @@ address announcements. Typically, it is the client side that initiates subflows,
and the server side that announces additional addresses via the ``ADD_ADDR`` and
``REMOVE_ADDR`` options.
Path managers are controlled by the ``net.mptcp.pm_type`` sysctl knob -- see
mptcp-sysctl.rst. There are two types: the in-kernel one (type ``0``) where the
same rules are applied for all the connections (see: ``ip mptcp``) ; and the
userspace one (type ``1``), controlled by a userspace daemon (i.e. `mptcpd
Path managers are controlled by the ``net.mptcp.path_manager`` sysctl knob --
see mptcp-sysctl.rst. There are two types: the in-kernel one (``kernel``) where
the same rules are applied for all the connections (see: ``ip mptcp``) ; and the
userspace one (``userspace``), controlled by a userspace daemon (i.e. `mptcpd
<https://mptcpd.mptcp.dev/>`_) where different rules can be applied for each
connection. The path managers can be controlled via a Netlink API; see
netlink_spec/mptcp_pm.rst.
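As a small aside, switching the type does not require a dedicated tool; a
minimal sketch (assuming root privileges and the usual /proc/sys mapping of the
sysctl name shown above) that selects the userspace path manager:

/* Sketch: equivalent of "sysctl -w net.mptcp.path_manager=userspace". */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/net/mptcp/path_manager", "w");

	if (!f) {
		perror("net.mptcp.path_manager not available");
		return 1;
	}
	fputs("userspace", f);	/* "kernel" selects the in-kernel PM */
	fclose(f);
	return 0;
}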


@ -2293,7 +2293,7 @@ delayed_register
notice the need.
skip_validation
Skip unit descriptor validation (default: no).
The option is used to ignores the validation errors with the hexdump
The option is used to ignore the validation errors with the hexdump
of the unit descriptor instead of a driver probe error, so that we
can check its details.
quirk_flags


@ -4683,7 +4683,6 @@ F: security/bpf/
BPF [SELFTESTS] (Test Runners & Infrastructure)
M: Andrii Nakryiko <andrii@kernel.org>
M: Eduard Zingerman <eddyz87@gmail.com>
R: Mykola Lysenko <mykolal@fb.com>
L: bpf@vger.kernel.org
S: Maintained
F: tools/testing/selftests/bpf/
@ -5259,7 +5258,6 @@ F: drivers/gpio/gpio-bt8xx.c
BTRFS FILE SYSTEM
M: Chris Mason <clm@fb.com>
M: Josef Bacik <josef@toxicpanda.com>
M: David Sterba <dsterba@suse.com>
L: linux-btrfs@vger.kernel.org
S: Maintained
@ -7240,15 +7238,15 @@ F: include/linux/swiotlb.h
F: kernel/dma/
DMA MAPPING HELPERS DEVICE DRIVER API [RUST]
M: Abdiel Janulgue <abdiel.janulgue@gmail.com>
M: Danilo Krummrich <dakr@kernel.org>
R: Abdiel Janulgue <abdiel.janulgue@gmail.com>
R: Daniel Almeida <daniel.almeida@collabora.com>
R: Robin Murphy <robin.murphy@arm.com>
R: Andreas Hindborg <a.hindborg@kernel.org>
L: rust-for-linux@vger.kernel.org
S: Supported
W: https://rust-for-linux.com
T: git https://github.com/Rust-for-Linux/linux.git alloc-next
T: git git://git.kernel.org/pub/scm/linux/kernel/git/driver-core/driver-core.git
F: rust/helpers/dma.c
F: rust/kernel/dma.rs
F: samples/rust/rust_dma.rs
@ -7432,7 +7430,7 @@ S: Supported
F: Documentation/devicetree/bindings/dpll/dpll-device.yaml
F: Documentation/devicetree/bindings/dpll/dpll-pin.yaml
F: Documentation/driver-api/dpll.rst
F: drivers/dpll/*
F: drivers/dpll/
F: include/linux/dpll.h
F: include/uapi/linux/dpll.h
@ -8080,7 +8078,6 @@ F: Documentation/devicetree/bindings/gpu/
F: Documentation/gpu/
F: drivers/gpu/drm/
F: drivers/gpu/vga/
F: rust/kernel/drm/
F: include/drm/drm
F: include/linux/vga*
F: include/uapi/drm/
@ -8092,11 +8089,21 @@ X: drivers/gpu/drm/i915/
X: drivers/gpu/drm/kmb/
X: drivers/gpu/drm/mediatek/
X: drivers/gpu/drm/msm/
X: drivers/gpu/drm/nouveau/
X: drivers/gpu/drm/nova/
X: drivers/gpu/drm/radeon/
X: drivers/gpu/drm/tegra/
X: drivers/gpu/drm/xe/
DRM DRIVERS AND COMMON INFRASTRUCTURE [RUST]
M: Danilo Krummrich <dakr@kernel.org>
M: Alice Ryhl <aliceryhl@google.com>
S: Supported
W: https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html
T: git https://gitlab.freedesktop.org/drm/rust/kernel.git
F: drivers/gpu/drm/nova/
F: drivers/gpu/nova-core/
F: rust/kernel/drm/
DRM DRIVERS FOR ALLWINNER A10
M: Maxime Ripard <mripard@kernel.org>
M: Chen-Yu Tsai <wens@csie.org>
@ -15741,13 +15748,6 @@ S: Supported
W: http://www.melexis.com
F: drivers/iio/temperature/mlx90635.c
MELFAS MIP4 TOUCHSCREEN DRIVER
M: Sangwon Jee <jeesw@melfas.com>
S: Supported
W: http://www.melfas.com
F: Documentation/devicetree/bindings/input/touchscreen/melfas_mip4.txt
F: drivers/input/touchscreen/melfas_mip4.c
MELLANOX BLUEFIELD I2C DRIVER
M: Khalil Blaiech <kblaiech@nvidia.com>
M: Asmaa Mnebhi <asmaa@nvidia.com>
@ -16128,6 +16128,7 @@ M: Andrew Morton <akpm@linux-foundation.org>
M: Mike Rapoport <rppt@kernel.org>
L: linux-mm@kvack.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git
F: include/linux/numa_memblks.h
F: mm/numa.c
F: mm/numa_emulation.c
@ -16195,6 +16196,7 @@ R: Rik van Riel <riel@surriel.com>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Harry Yoo <harry.yoo@oracle.com>
R: Jann Horn <jannh@google.com>
L: linux-mm@kvack.org
S: Maintained
F: include/linux/rmap.h
@ -16239,6 +16241,7 @@ R: Nico Pache <npache@redhat.com>
R: Ryan Roberts <ryan.roberts@arm.com>
R: Dev Jain <dev.jain@arm.com>
R: Barry Song <baohua@kernel.org>
R: Lance Yang <lance.yang@linux.dev>
L: linux-mm@kvack.org
S: Maintained
W: http://www.linux-mm.org
@ -17480,6 +17483,7 @@ NETFILTER
M: Pablo Neira Ayuso <pablo@netfilter.org>
M: Jozsef Kadlecsik <kadlec@netfilter.org>
M: Florian Westphal <fw@strlen.de>
R: Phil Sutter <phil@nwl.cc>
L: netfilter-devel@vger.kernel.org
L: coreteam@netfilter.org
S: Maintained
@ -22048,6 +22052,7 @@ F: drivers/infiniband/ulp/rtrs/
RUNTIME VERIFICATION (RV)
M: Steven Rostedt <rostedt@goodmis.org>
M: Gabriele Monaco <gmonaco@redhat.com>
L: linux-trace-kernel@vger.kernel.org
S: Maintained
F: Documentation/trace/rv/
@ -24255,7 +24260,7 @@ F: Documentation/devicetree/bindings/input/allwinner,sun4i-a10-lradc-keys.yaml
F: drivers/input/keyboard/sun4i-lradc-keys.c
SUNDANCE NETWORK DRIVER
M: Denis Kirjanov <dkirjanov@suse.de>
M: Denis Kirjanov <kirjanov@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/dlink/sundance.c


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 17
SUBLEVEL = 0
EXTRAVERSION = -rc5
EXTRAVERSION = -rc7
NAME = Baby Opossum Posse
# *DOCUMENTATION*


@ -94,7 +94,7 @@ int load_other_segments(struct kimage *image,
char *initrd, unsigned long initrd_len,
char *cmdline)
{
struct kexec_buf kbuf;
struct kexec_buf kbuf = {};
void *dtb = NULL;
unsigned long initrd_load_addr = 0, dtb_len,
orig_segments = image->nr_segments;


@ -298,6 +298,10 @@ config AS_HAS_LVZ_EXTENSION
config CC_HAS_ANNOTATE_TABLEJUMP
def_bool $(cc-option,-mannotate-tablejump)
config RUSTC_HAS_ANNOTATE_TABLEJUMP
depends on RUST
def_bool $(rustc-option,-Cllvm-args=--loongarch-annotate-tablejump)
menu "Kernel type and options"
source "kernel/Kconfig.hz"
@ -563,10 +567,14 @@ config ARCH_STRICT_ALIGN
-mstrict-align build parameter to prevent unaligned accesses.
CPUs with h/w unaligned access support:
Loongson-2K2000/2K3000/3A5000/3C5000/3D5000.
Loongson-2K2000/2K3000 and all of Loongson-3 series processors
based on LoongArch.
CPUs without h/w unaligned access support:
Loongson-2K500/2K1000.
Loongson-2K0300/2K0500/2K1000.
If you want to check whether your hardware supports unaligned memory access,
please read bit 20 (UAL) of the CPUCFG1 register.
This option is enabled by default to make the kernel be able to run
on all LoongArch systems. But you can disable it manually if you want


@ -102,16 +102,21 @@ KBUILD_CFLAGS += $(call cc-option,-mthin-add-sub) $(call cc-option,-Wa$(comma)
ifdef CONFIG_OBJTOOL
ifdef CONFIG_CC_HAS_ANNOTATE_TABLEJUMP
KBUILD_CFLAGS += -mannotate-tablejump
else
KBUILD_CFLAGS += -fno-jump-tables # keep compatibility with older compilers
endif
ifdef CONFIG_RUSTC_HAS_ANNOTATE_TABLEJUMP
KBUILD_RUSTFLAGS += -Cllvm-args=--loongarch-annotate-tablejump
else
KBUILD_RUSTFLAGS += -Zno-jump-tables # keep compatibility with older compilers
endif
ifdef CONFIG_LTO_CLANG
# The annotate-tablejump option can not be passed to the LLVM backend when LTO is enabled.
# To make the LTO linker aware of it, '--loongarch-annotate-tablejump' also needs to
# be passed via '-mllvm' to ld.lld.
KBUILD_CFLAGS += -mannotate-tablejump
ifdef CONFIG_LTO_CLANG
KBUILD_LDFLAGS += -mllvm --loongarch-annotate-tablejump
endif
else
KBUILD_CFLAGS += -fno-jump-tables # keep compatibility with older compilers
endif
endif
KBUILD_RUSTFLAGS += --target=loongarch64-unknown-none-softfloat -Ccode-model=small


@ -10,9 +10,8 @@
#ifndef _ASM_LOONGARCH_ACENV_H
#define _ASM_LOONGARCH_ACENV_H
/*
* This header is required by ACPI core, but we have nothing to fill in
* right now. Will be updated later when needed.
*/
#ifdef CONFIG_ARCH_STRICT_ALIGN
#define ACPI_MISALIGNMENT_NOT_SUPPORTED
#endif /* CONFIG_ARCH_STRICT_ALIGN */
#endif /* _ASM_LOONGARCH_ACENV_H */


@ -16,6 +16,13 @@
*/
#define KVM_MMU_CACHE_MIN_PAGES (CONFIG_PGTABLE_LEVELS - 1)
/*
* _PAGE_MODIFIED is a SW pte bit. On the host kernel it records whether the
* page has ever been written; on the secondary MMU it records the page
* writeable attribute, for fast-path handling.
*/
#define KVM_PAGE_WRITEABLE _PAGE_MODIFIED
#define _KVM_FLUSH_PGTABLE 0x1
#define _KVM_HAS_PGMASK 0x2
#define kvm_pfn_pte(pfn, prot) (((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
@ -52,10 +59,10 @@ static inline void kvm_set_pte(kvm_pte_t *ptep, kvm_pte_t val)
WRITE_ONCE(*ptep, val);
}
static inline int kvm_pte_write(kvm_pte_t pte) { return pte & _PAGE_WRITE; }
static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & _PAGE_DIRTY; }
static inline int kvm_pte_young(kvm_pte_t pte) { return pte & _PAGE_ACCESSED; }
static inline int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; }
static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & __WRITEABLE; }
static inline int kvm_pte_writeable(kvm_pte_t pte) { return pte & KVM_PAGE_WRITEABLE; }
static inline kvm_pte_t kvm_pte_mkyoung(kvm_pte_t pte)
{
@ -69,12 +76,12 @@ static inline kvm_pte_t kvm_pte_mkold(kvm_pte_t pte)
static inline kvm_pte_t kvm_pte_mkdirty(kvm_pte_t pte)
{
return pte | _PAGE_DIRTY;
return pte | __WRITEABLE;
}
static inline kvm_pte_t kvm_pte_mkclean(kvm_pte_t pte)
{
return pte & ~_PAGE_DIRTY;
return pte & ~__WRITEABLE;
}
static inline kvm_pte_t kvm_pte_mkhuge(kvm_pte_t pte)
@ -87,6 +94,11 @@ static inline kvm_pte_t kvm_pte_mksmall(kvm_pte_t pte)
return pte & ~_PAGE_HUGE;
}
static inline kvm_pte_t kvm_pte_mkwriteable(kvm_pte_t pte)
{
return pte | KVM_PAGE_WRITEABLE;
}
static inline int kvm_need_flush(kvm_ptw_ctx *ctx)
{
return ctx->flag & _KVM_FLUSH_PGTABLE;


@ -34,13 +34,26 @@
#define PCH_PIC_INT_ISR_END 0x3af
#define PCH_PIC_POLARITY_START 0x3e0
#define PCH_PIC_POLARITY_END 0x3e7
#define PCH_PIC_INT_ID_VAL 0x7000000UL
#define PCH_PIC_INT_ID_VAL 0x7UL
#define PCH_PIC_INT_ID_VER 0x1UL
union pch_pic_id {
struct {
uint8_t reserved_0[3];
uint8_t id;
uint8_t version;
uint8_t reserved_1;
uint8_t irq_num;
uint8_t reserved_2;
} desc;
uint64_t data;
};
struct loongarch_pch_pic {
spinlock_t lock;
struct kvm *kvm;
struct kvm_io_device device;
union pch_pic_id id;
uint64_t mask; /* 1:disable irq, 0:enable irq */
uint64_t htmsi_en; /* 1:msi */
uint64_t edge; /* 1:edge triggered, 0:level triggered */


@ -103,6 +103,7 @@ struct kvm_fpu {
#define KVM_LOONGARCH_VM_FEAT_PMU 5
#define KVM_LOONGARCH_VM_FEAT_PV_IPI 6
#define KVM_LOONGARCH_VM_FEAT_PV_STEALTIME 7
#define KVM_LOONGARCH_VM_FEAT_PTW 8
/* Device Control API on vcpu fd */
#define KVM_LOONGARCH_VCPU_CPUCFG 0


@ -86,7 +86,7 @@ late_initcall(fdt_cpu_clk_init);
static ssize_t boardinfo_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
return sprintf(buf,
return sysfs_emit(buf,
"BIOS Information\n"
"Vendor\t\t\t: %s\n"
"Version\t\t\t: %s\n"
@ -109,6 +109,8 @@ static int __init boardinfo_init(void)
struct kobject *loongson_kobj;
loongson_kobj = kobject_create_and_add("loongson", firmware_kobj);
if (!loongson_kobj)
return -ENOMEM;
return sysfs_create_file(loongson_kobj, &boardinfo_attr.attr);
}


@ -51,12 +51,13 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
if (task == current) {
regs->regs[3] = (unsigned long)__builtin_frame_address(0);
regs->csr_era = (unsigned long)__builtin_return_address(0);
regs->regs[22] = 0;
} else {
regs->regs[3] = thread_saved_fp(task);
regs->csr_era = thread_saved_ra(task);
regs->regs[22] = task->thread.reg22;
}
regs->regs[1] = 0;
regs->regs[22] = 0;
for (unwind_start(&state, task, regs);
!unwind_done(&state) && !unwind_error(&state); unwind_next_frame(&state)) {


@ -54,6 +54,9 @@ static int __init init_vdso(void)
vdso_info.code_mapping.pages =
kcalloc(vdso_info.size / PAGE_SIZE, sizeof(struct page *), GFP_KERNEL);
if (!vdso_info.code_mapping.pages)
return -ENOMEM;
pfn = __phys_to_pfn(__pa_symbol(vdso_info.vdso));
for (i = 0; i < vdso_info.size / PAGE_SIZE; i++)
vdso_info.code_mapping.pages[i] = pfn_to_page(pfn + i);


@ -218,16 +218,16 @@ int kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu)
}
trace_kvm_iocsr(KVM_TRACE_IOCSR_WRITE, run->iocsr_io.len, addr, val);
} else {
vcpu->arch.io_gpr = rd; /* Set register id for iocsr read completion */
idx = srcu_read_lock(&vcpu->kvm->srcu);
ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, run->iocsr_io.len, val);
ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr,
run->iocsr_io.len, run->iocsr_io.data);
srcu_read_unlock(&vcpu->kvm->srcu, idx);
if (ret == 0)
if (ret == 0) {
kvm_complete_iocsr_read(vcpu, run);
ret = EMULATE_DONE;
else {
} else
ret = EMULATE_DO_IOCSR;
/* Save register id for iocsr read completion */
vcpu->arch.io_gpr = rd;
}
trace_kvm_iocsr(KVM_TRACE_IOCSR_READ, run->iocsr_io.len, addr, NULL);
}
@ -468,6 +468,8 @@ int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst)
if (ret == EMULATE_DO_MMIO) {
trace_kvm_mmio(KVM_TRACE_MMIO_READ, run->mmio.len, run->mmio.phys_addr, NULL);
vcpu->arch.io_gpr = rd; /* Set for kvm_complete_mmio_read() use */
/*
* If mmio device such as PCH-PIC is emulated in KVM,
* it need not return to user space to handle the mmio
@ -475,16 +477,15 @@ int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst)
*/
idx = srcu_read_lock(&vcpu->kvm->srcu);
ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, vcpu->arch.badv,
run->mmio.len, &vcpu->arch.gprs[rd]);
run->mmio.len, run->mmio.data);
srcu_read_unlock(&vcpu->kvm->srcu, idx);
if (!ret) {
kvm_complete_mmio_read(vcpu, run);
update_pc(&vcpu->arch);
vcpu->mmio_needed = 0;
return EMULATE_DONE;
}
/* Set for kvm_complete_mmio_read() use */
vcpu->arch.io_gpr = rd;
run->mmio.is_write = 0;
vcpu->mmio_is_write = 0;
return EMULATE_DO_MMIO;
@ -778,10 +779,8 @@ static long kvm_save_notify(struct kvm_vcpu *vcpu)
return 0;
default:
return KVM_HCALL_INVALID_CODE;
};
return KVM_HCALL_INVALID_CODE;
};
}
}
/*
* kvm_handle_lsx_disabled() - Guest used LSX while disabled in root.


@ -426,21 +426,26 @@ static int kvm_eiointc_ctrl_access(struct kvm_device *dev,
struct loongarch_eiointc *s = dev->kvm->arch.eiointc;
data = (void __user *)attr->addr;
switch (type) {
case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_NUM_CPU:
case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_FEATURE:
if (copy_from_user(&val, data, 4))
return -EFAULT;
break;
default:
break;
}
spin_lock_irqsave(&s->lock, flags);
switch (type) {
case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_NUM_CPU:
if (copy_from_user(&val, data, 4))
ret = -EFAULT;
else {
if (val >= EIOINTC_ROUTE_MAX_VCPUS)
ret = -EINVAL;
else
s->num_cpu = val;
}
if (val >= EIOINTC_ROUTE_MAX_VCPUS)
ret = -EINVAL;
else
s->num_cpu = val;
break;
case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_FEATURE:
if (copy_from_user(&s->features, data, 4))
ret = -EFAULT;
s->features = val;
if (!(s->features & BIT(EIOINTC_HAS_VIRT_EXTENSION)))
s->status |= BIT(EIOINTC_ENABLE);
break;
@ -462,19 +467,17 @@ static int kvm_eiointc_ctrl_access(struct kvm_device *dev,
static int kvm_eiointc_regs_access(struct kvm_device *dev,
struct kvm_device_attr *attr,
bool is_write)
bool is_write, int *data)
{
int addr, cpu, offset, ret = 0;
unsigned long flags;
void *p = NULL;
void __user *data;
struct loongarch_eiointc *s;
s = dev->kvm->arch.eiointc;
addr = attr->attr;
cpu = addr >> 16;
addr &= 0xffff;
data = (void __user *)attr->addr;
switch (addr) {
case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END:
offset = (addr - EIOINTC_NODETYPE_START) / 4;
@ -513,13 +516,10 @@ static int kvm_eiointc_regs_access(struct kvm_device *dev,
}
spin_lock_irqsave(&s->lock, flags);
if (is_write) {
if (copy_from_user(p, data, 4))
ret = -EFAULT;
} else {
if (copy_to_user(data, p, 4))
ret = -EFAULT;
}
if (is_write)
memcpy(p, data, 4);
else
memcpy(data, p, 4);
spin_unlock_irqrestore(&s->lock, flags);
return ret;
@ -527,19 +527,17 @@ static int kvm_eiointc_regs_access(struct kvm_device *dev,
static int kvm_eiointc_sw_status_access(struct kvm_device *dev,
struct kvm_device_attr *attr,
bool is_write)
bool is_write, int *data)
{
int addr, ret = 0;
unsigned long flags;
void *p = NULL;
void __user *data;
struct loongarch_eiointc *s;
s = dev->kvm->arch.eiointc;
addr = attr->attr;
addr &= 0xffff;
data = (void __user *)attr->addr;
switch (addr) {
case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_NUM_CPU:
if (is_write)
@ -561,13 +559,10 @@ static int kvm_eiointc_sw_status_access(struct kvm_device *dev,
return -EINVAL;
}
spin_lock_irqsave(&s->lock, flags);
if (is_write) {
if (copy_from_user(p, data, 4))
ret = -EFAULT;
} else {
if (copy_to_user(data, p, 4))
ret = -EFAULT;
}
if (is_write)
memcpy(p, data, 4);
else
memcpy(data, p, 4);
spin_unlock_irqrestore(&s->lock, flags);
return ret;
@ -576,11 +571,27 @@ static int kvm_eiointc_sw_status_access(struct kvm_device *dev,
static int kvm_eiointc_get_attr(struct kvm_device *dev,
struct kvm_device_attr *attr)
{
int ret, data;
switch (attr->group) {
case KVM_DEV_LOONGARCH_EXTIOI_GRP_REGS:
return kvm_eiointc_regs_access(dev, attr, false);
ret = kvm_eiointc_regs_access(dev, attr, false, &data);
if (ret)
return ret;
if (copy_to_user((void __user *)attr->addr, &data, 4))
ret = -EFAULT;
return ret;
case KVM_DEV_LOONGARCH_EXTIOI_GRP_SW_STATUS:
return kvm_eiointc_sw_status_access(dev, attr, false);
ret = kvm_eiointc_sw_status_access(dev, attr, false, &data);
if (ret)
return ret;
if (copy_to_user((void __user *)attr->addr, &data, 4))
ret = -EFAULT;
return ret;
default:
return -EINVAL;
}
@ -589,13 +600,21 @@ static int kvm_eiointc_get_attr(struct kvm_device *dev,
static int kvm_eiointc_set_attr(struct kvm_device *dev,
struct kvm_device_attr *attr)
{
int data;
switch (attr->group) {
case KVM_DEV_LOONGARCH_EXTIOI_GRP_CTRL:
return kvm_eiointc_ctrl_access(dev, attr);
case KVM_DEV_LOONGARCH_EXTIOI_GRP_REGS:
return kvm_eiointc_regs_access(dev, attr, true);
if (copy_from_user(&data, (void __user *)attr->addr, 4))
return -EFAULT;
return kvm_eiointc_regs_access(dev, attr, true, &data);
case KVM_DEV_LOONGARCH_EXTIOI_GRP_SW_STATUS:
return kvm_eiointc_sw_status_access(dev, attr, true);
if (copy_from_user(&data, (void __user *)attr->addr, 4))
return -EFAULT;
return kvm_eiointc_sw_status_access(dev, attr, true, &data);
default:
return -EINVAL;
}


@ -7,12 +7,25 @@
#include <asm/kvm_ipi.h>
#include <asm/kvm_vcpu.h>
static void ipi_set(struct kvm_vcpu *vcpu, uint32_t data)
{
uint32_t status;
struct kvm_interrupt irq;
spin_lock(&vcpu->arch.ipi_state.lock);
status = vcpu->arch.ipi_state.status;
vcpu->arch.ipi_state.status |= data;
spin_unlock(&vcpu->arch.ipi_state.lock);
if ((status == 0) && data) {
irq.irq = LARCH_INT_IPI;
kvm_vcpu_ioctl_interrupt(vcpu, &irq);
}
}
static void ipi_send(struct kvm *kvm, uint64_t data)
{
int cpu, action;
uint32_t status;
int cpu;
struct kvm_vcpu *vcpu;
struct kvm_interrupt irq;
cpu = ((data & 0xffffffff) >> 16) & 0x3ff;
vcpu = kvm_get_vcpu_by_cpuid(kvm, cpu);
@ -21,15 +34,7 @@ static void ipi_send(struct kvm *kvm, uint64_t data)
return;
}
action = BIT(data & 0x1f);
spin_lock(&vcpu->arch.ipi_state.lock);
status = vcpu->arch.ipi_state.status;
vcpu->arch.ipi_state.status |= action;
spin_unlock(&vcpu->arch.ipi_state.lock);
if (status == 0) {
irq.irq = LARCH_INT_IPI;
kvm_vcpu_ioctl_interrupt(vcpu, &irq);
}
ipi_set(vcpu, BIT(data & 0x1f));
}
static void ipi_clear(struct kvm_vcpu *vcpu, uint64_t data)
@ -96,6 +101,34 @@ static void write_mailbox(struct kvm_vcpu *vcpu, int offset, uint64_t data, int
spin_unlock(&vcpu->arch.ipi_state.lock);
}
static int mail_send(struct kvm *kvm, uint64_t data)
{
int i, cpu, mailbox, offset;
uint32_t val = 0, mask = 0;
struct kvm_vcpu *vcpu;
cpu = ((data & 0xffffffff) >> 16) & 0x3ff;
vcpu = kvm_get_vcpu_by_cpuid(kvm, cpu);
if (unlikely(vcpu == NULL)) {
kvm_err("%s: invalid target cpu: %d\n", __func__, cpu);
return -EINVAL;
}
mailbox = ((data & 0xffffffff) >> 2) & 0x7;
offset = IOCSR_IPI_BUF_20 + mailbox * 4;
if ((data >> 27) & 0xf) {
val = read_mailbox(vcpu, offset, 4);
for (i = 0; i < 4; i++)
if (data & (BIT(27 + i)))
mask |= (0xff << (i * 8));
val &= mask;
}
val |= ((uint32_t)(data >> 32) & ~mask);
write_mailbox(vcpu, offset, val, 4);
return 0;
}
static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data)
{
int i, idx, ret;
@ -132,23 +165,6 @@ static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data)
return ret;
}
static int mail_send(struct kvm *kvm, uint64_t data)
{
int cpu, mailbox, offset;
struct kvm_vcpu *vcpu;
cpu = ((data & 0xffffffff) >> 16) & 0x3ff;
vcpu = kvm_get_vcpu_by_cpuid(kvm, cpu);
if (unlikely(vcpu == NULL)) {
kvm_err("%s: invalid target cpu: %d\n", __func__, cpu);
return -EINVAL;
}
mailbox = ((data & 0xffffffff) >> 2) & 0x7;
offset = IOCSR_IPI_BASE + IOCSR_IPI_BUF_20 + mailbox * 4;
return send_ipi_data(vcpu, offset, data);
}
static int any_send(struct kvm *kvm, uint64_t data)
{
int cpu, offset;
@ -231,7 +247,7 @@ static int loongarch_ipi_writel(struct kvm_vcpu *vcpu, gpa_t addr, int len, cons
spin_unlock(&vcpu->arch.ipi_state.lock);
break;
case IOCSR_IPI_SET:
ret = -EINVAL;
ipi_set(vcpu, data);
break;
case IOCSR_IPI_CLEAR:
/* Just clear the status of the current vcpu */
@ -250,10 +266,10 @@ static int loongarch_ipi_writel(struct kvm_vcpu *vcpu, gpa_t addr, int len, cons
ipi_send(vcpu->kvm, data);
break;
case IOCSR_MAIL_SEND:
ret = mail_send(vcpu->kvm, *(uint64_t *)val);
ret = mail_send(vcpu->kvm, data);
break;
case IOCSR_ANY_SEND:
ret = any_send(vcpu->kvm, *(uint64_t *)val);
ret = any_send(vcpu->kvm, data);
break;
default:
kvm_err("%s: unknown addr: %llx\n", __func__, addr);


@ -35,16 +35,11 @@ static void pch_pic_update_irq(struct loongarch_pch_pic *s, int irq, int level)
/* update batch irqs, the irq_mask is a bitmap of irqs */
static void pch_pic_update_batch_irqs(struct loongarch_pch_pic *s, u64 irq_mask, int level)
{
int irq, bits;
unsigned int irq;
DECLARE_BITMAP(irqs, 64) = { BITMAP_FROM_U64(irq_mask) };
/* find each irq by irqs bitmap and update each irq */
bits = sizeof(irq_mask) * 8;
irq = find_first_bit((void *)&irq_mask, bits);
while (irq < bits) {
for_each_set_bit(irq, irqs, 64)
pch_pic_update_irq(s, irq, level);
bitmap_clear((void *)&irq_mask, irq, 1);
irq = find_first_bit((void *)&irq_mask, bits);
}
}
/* called when a irq is triggered in pch pic */
@ -77,109 +72,65 @@ void pch_msi_set_irq(struct kvm *kvm, int irq, int level)
eiointc_set_irq(kvm->arch.eiointc, irq, level);
}
/*
* pch pic register is 64-bit, but it is accessed by 32-bit,
* so we use high to get whether low or high 32 bits we want
* to read.
*/
static u32 pch_pic_read_reg(u64 *s, int high)
{
u64 val = *s;
/* read the high 32 bits when high is 1 */
return high ? (u32)(val >> 32) : (u32)val;
}
/*
* pch pic register is 64-bit, but it is accessed by 32-bit,
* so we use high to get whether low or high 32 bits we want
* to write.
*/
static u32 pch_pic_write_reg(u64 *s, int high, u32 v)
{
u64 val = *s, data = v;
if (high) {
/*
* Clear val high 32 bits
* Write the high 32 bits when the high is 1
*/
*s = (val << 32 >> 32) | (data << 32);
val >>= 32;
} else
/*
* Clear val low 32 bits
* Write the low 32 bits when the high is 0
*/
*s = (val >> 32 << 32) | v;
return (u32)val;
}
static int loongarch_pch_pic_read(struct loongarch_pch_pic *s, gpa_t addr, int len, void *val)
{
int offset, index, ret = 0;
u32 data = 0;
u64 int_id = 0;
int ret = 0, offset;
u64 data = 0;
void *ptemp;
offset = addr - s->pch_pic_base;
offset -= offset & 7;
spin_lock(&s->lock);
switch (offset) {
case PCH_PIC_INT_ID_START ... PCH_PIC_INT_ID_END:
/* int id version */
int_id |= (u64)PCH_PIC_INT_ID_VER << 32;
/* irq number */
int_id |= (u64)31 << (32 + 16);
/* int id value */
int_id |= PCH_PIC_INT_ID_VAL;
*(u64 *)val = int_id;
data = s->id.data;
break;
case PCH_PIC_MASK_START ... PCH_PIC_MASK_END:
offset -= PCH_PIC_MASK_START;
index = offset >> 2;
/* read mask reg */
data = pch_pic_read_reg(&s->mask, index);
*(u32 *)val = data;
data = s->mask;
break;
case PCH_PIC_HTMSI_EN_START ... PCH_PIC_HTMSI_EN_END:
offset -= PCH_PIC_HTMSI_EN_START;
index = offset >> 2;
/* read htmsi enable reg */
data = pch_pic_read_reg(&s->htmsi_en, index);
*(u32 *)val = data;
data = s->htmsi_en;
break;
case PCH_PIC_EDGE_START ... PCH_PIC_EDGE_END:
offset -= PCH_PIC_EDGE_START;
index = offset >> 2;
/* read edge enable reg */
data = pch_pic_read_reg(&s->edge, index);
*(u32 *)val = data;
data = s->edge;
break;
case PCH_PIC_AUTO_CTRL0_START ... PCH_PIC_AUTO_CTRL0_END:
case PCH_PIC_AUTO_CTRL1_START ... PCH_PIC_AUTO_CTRL1_END:
/* we only use default mode: fixed interrupt distribution mode */
*(u32 *)val = 0;
break;
case PCH_PIC_ROUTE_ENTRY_START ... PCH_PIC_ROUTE_ENTRY_END:
/* only route to int0: eiointc */
*(u8 *)val = 1;
ptemp = s->route_entry + (offset - PCH_PIC_ROUTE_ENTRY_START);
data = *(u64 *)ptemp;
break;
case PCH_PIC_HTMSI_VEC_START ... PCH_PIC_HTMSI_VEC_END:
offset -= PCH_PIC_HTMSI_VEC_START;
/* read htmsi vector */
data = s->htmsi_vector[offset];
*(u8 *)val = data;
ptemp = s->htmsi_vector + (offset - PCH_PIC_HTMSI_VEC_START);
data = *(u64 *)ptemp;
break;
case PCH_PIC_POLARITY_START ... PCH_PIC_POLARITY_END:
/* we only use defalut value 0: high level triggered */
*(u32 *)val = 0;
data = s->polarity;
break;
case PCH_PIC_INT_IRR_START:
data = s->irr;
break;
case PCH_PIC_INT_ISR_START:
data = s->isr;
break;
default:
ret = -EINVAL;
}
spin_unlock(&s->lock);
if (ret == 0) {
offset = (addr - s->pch_pic_base) & 7;
data = data >> (offset * 8);
memcpy(val, &data, len);
}
return ret;
}
@ -210,81 +161,69 @@ static int kvm_pch_pic_read(struct kvm_vcpu *vcpu,
static int loongarch_pch_pic_write(struct loongarch_pch_pic *s, gpa_t addr,
int len, const void *val)
{
int ret;
u32 old, data, offset, index;
u64 irq;
int ret = 0, offset;
u64 old, data, mask;
void *ptemp;
ret = 0;
data = *(u32 *)val;
offset = addr - s->pch_pic_base;
switch (len) {
case 1:
data = *(u8 *)val;
mask = 0xFF;
break;
case 2:
data = *(u16 *)val;
mask = USHRT_MAX;
break;
case 4:
data = *(u32 *)val;
mask = UINT_MAX;
break;
case 8:
default:
data = *(u64 *)val;
mask = ULONG_MAX;
break;
}
offset = (addr - s->pch_pic_base) & 7;
mask = mask << (offset * 8);
data = data << (offset * 8);
offset = (addr - s->pch_pic_base) - offset;
spin_lock(&s->lock);
switch (offset) {
case PCH_PIC_MASK_START ... PCH_PIC_MASK_END:
offset -= PCH_PIC_MASK_START;
/* get whether high or low 32 bits we want to write */
index = offset >> 2;
old = pch_pic_write_reg(&s->mask, index, data);
/* enable irq when mask value change to 0 */
irq = (old & ~data) << (32 * index);
pch_pic_update_batch_irqs(s, irq, 1);
/* disable irq when mask value change to 1 */
irq = (~old & data) << (32 * index);
pch_pic_update_batch_irqs(s, irq, 0);
case PCH_PIC_MASK_START:
old = s->mask;
s->mask = (old & ~mask) | data;
if (old & ~data)
pch_pic_update_batch_irqs(s, old & ~data, 1);
if (~old & data)
pch_pic_update_batch_irqs(s, ~old & data, 0);
break;
case PCH_PIC_HTMSI_EN_START ... PCH_PIC_HTMSI_EN_END:
offset -= PCH_PIC_HTMSI_EN_START;
index = offset >> 2;
pch_pic_write_reg(&s->htmsi_en, index, data);
case PCH_PIC_HTMSI_EN_START:
s->htmsi_en = (s->htmsi_en & ~mask) | data;
break;
case PCH_PIC_EDGE_START ... PCH_PIC_EDGE_END:
offset -= PCH_PIC_EDGE_START;
index = offset >> 2;
/* 1: edge triggered, 0: level triggered */
pch_pic_write_reg(&s->edge, index, data);
case PCH_PIC_EDGE_START:
s->edge = (s->edge & ~mask) | data;
break;
case PCH_PIC_CLEAR_START ... PCH_PIC_CLEAR_END:
offset -= PCH_PIC_CLEAR_START;
index = offset >> 2;
/* write 1 to clear edge irq */
old = pch_pic_read_reg(&s->irr, index);
/*
* get the irq bitmap which is edge triggered and
* already set and to be cleared
*/
irq = old & pch_pic_read_reg(&s->edge, index) & data;
/* write irr to the new state where irqs have been cleared */
pch_pic_write_reg(&s->irr, index, old & ~irq);
/* update cleared irqs */
pch_pic_update_batch_irqs(s, irq, 0);
case PCH_PIC_POLARITY_START:
s->polarity = (s->polarity & ~mask) | data;
break;
case PCH_PIC_AUTO_CTRL0_START ... PCH_PIC_AUTO_CTRL0_END:
offset -= PCH_PIC_AUTO_CTRL0_START;
index = offset >> 2;
/* we only use default mode: fixed interrupt distribution mode */
pch_pic_write_reg(&s->auto_ctrl0, index, 0);
break;
case PCH_PIC_AUTO_CTRL1_START ... PCH_PIC_AUTO_CTRL1_END:
offset -= PCH_PIC_AUTO_CTRL1_START;
index = offset >> 2;
/* we only use default mode: fixed interrupt distribution mode */
pch_pic_write_reg(&s->auto_ctrl1, index, 0);
break;
case PCH_PIC_ROUTE_ENTRY_START ... PCH_PIC_ROUTE_ENTRY_END:
offset -= PCH_PIC_ROUTE_ENTRY_START;
/* only route to int0: eiointc */
s->route_entry[offset] = 1;
case PCH_PIC_CLEAR_START:
old = s->irr & s->edge & data;
if (old) {
s->irr &= ~old;
pch_pic_update_batch_irqs(s, old, 0);
}
break;
case PCH_PIC_HTMSI_VEC_START ... PCH_PIC_HTMSI_VEC_END:
/* route table to eiointc */
offset -= PCH_PIC_HTMSI_VEC_START;
s->htmsi_vector[offset] = (u8)data;
ptemp = s->htmsi_vector + (offset - PCH_PIC_HTMSI_VEC_START);
*(u64 *)ptemp = (*(u64 *)ptemp & ~mask) | data;
break;
case PCH_PIC_POLARITY_START ... PCH_PIC_POLARITY_END:
offset -= PCH_PIC_POLARITY_START;
index = offset >> 2;
/* we only use defalut value 0: high level triggered */
pch_pic_write_reg(&s->polarity, index, 0);
/* Not implemented */
case PCH_PIC_AUTO_CTRL0_START:
case PCH_PIC_AUTO_CTRL1_START:
case PCH_PIC_ROUTE_ENTRY_START ... PCH_PIC_ROUTE_ENTRY_END:
break;
default:
ret = -EINVAL;
@ -348,6 +287,7 @@ static int kvm_pch_pic_regs_access(struct kvm_device *dev,
struct kvm_device_attr *attr,
bool is_write)
{
char buf[8];
int addr, offset, len = 8, ret = 0;
void __user *data;
void *p = NULL;
@ -397,17 +337,23 @@ static int kvm_pch_pic_regs_access(struct kvm_device *dev,
return -EINVAL;
}
spin_lock(&s->lock);
/* write or read value according to is_write */
if (is_write) {
if (copy_from_user(p, data, len))
ret = -EFAULT;
} else {
if (copy_to_user(data, p, len))
ret = -EFAULT;
if (copy_from_user(buf, data, len))
return -EFAULT;
}
spin_lock(&s->lock);
if (is_write)
memcpy(p, buf, len);
else
memcpy(buf, p, len);
spin_unlock(&s->lock);
if (!is_write) {
if (copy_to_user(data, buf, len))
return -EFAULT;
}
return ret;
}
@ -477,7 +423,7 @@ static int kvm_setup_default_irq_routing(struct kvm *kvm)
static int kvm_pch_pic_create(struct kvm_device *dev, u32 type)
{
int ret;
int i, ret, irq_num;
struct kvm *kvm = dev->kvm;
struct loongarch_pch_pic *s;
@ -493,6 +439,22 @@ static int kvm_pch_pic_create(struct kvm_device *dev, u32 type)
if (!s)
return -ENOMEM;
/*
* Interrupt controller identification register 1
* Bit 24-31 Interrupt Controller ID
* Interrupt controller identification register 2
* Bit 0-7 Interrupt Controller version number
* Bit 16-23 The number of interrupt sources supported
*/
irq_num = 32;
s->mask = -1UL;
s->id.desc.id = PCH_PIC_INT_ID_VAL;
s->id.desc.version = PCH_PIC_INT_ID_VER;
s->id.desc.irq_num = irq_num - 1;
for (i = 0; i < irq_num; i++) {
s->route_entry[i] = 1;
s->htmsi_vector[i] = i;
}
spin_lock_init(&s->lock);
s->kvm = kvm;
kvm->arch.pch_pic = s;


@ -569,7 +569,7 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
/* Track access to pages marked old */
new = kvm_pte_mkyoung(*ptep);
if (write && !kvm_pte_dirty(new)) {
if (!kvm_pte_write(new)) {
if (!kvm_pte_writeable(new)) {
ret = -EFAULT;
goto out;
}
@ -856,9 +856,9 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
prot_bits |= _CACHE_SUC;
if (writeable) {
prot_bits |= _PAGE_WRITE;
prot_bits = kvm_pte_mkwriteable(prot_bits);
if (write)
prot_bits |= __WRITEABLE;
prot_bits = kvm_pte_mkdirty(prot_bits);
}
/* Disable dirty logging on HugePages */
@ -904,7 +904,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
kvm_release_faultin_page(kvm, page, false, writeable);
spin_unlock(&kvm->mmu_lock);
if (prot_bits & _PAGE_DIRTY)
if (kvm_pte_dirty(prot_bits))
mark_page_dirty_in_slot(kvm, memslot, gfn);
out:


@ -161,6 +161,41 @@ TRACE_EVENT(kvm_aux,
__entry->pc)
);
#define KVM_TRACE_IOCSR_READ_UNSATISFIED 0
#define KVM_TRACE_IOCSR_READ 1
#define KVM_TRACE_IOCSR_WRITE 2
#define kvm_trace_symbol_iocsr \
{ KVM_TRACE_IOCSR_READ_UNSATISFIED, "unsatisfied-read" }, \
{ KVM_TRACE_IOCSR_READ, "read" }, \
{ KVM_TRACE_IOCSR_WRITE, "write" }
TRACE_EVENT(kvm_iocsr,
TP_PROTO(int type, int len, u64 gpa, void *val),
TP_ARGS(type, len, gpa, val),
TP_STRUCT__entry(
__field( u32, type )
__field( u32, len )
__field( u64, gpa )
__field( u64, val )
),
TP_fast_assign(
__entry->type = type;
__entry->len = len;
__entry->gpa = gpa;
__entry->val = 0;
if (val)
memcpy(&__entry->val, val,
min_t(u32, sizeof(__entry->val), len));
),
TP_printk("iocsr %s len %u gpa 0x%llx val 0x%llx",
__print_symbolic(__entry->type, kvm_trace_symbol_iocsr),
__entry->len, __entry->gpa, __entry->val)
);
TRACE_EVENT(kvm_vpid_change,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
TP_ARGS(vcpu, vpid),


@ -680,6 +680,8 @@ static int _kvm_get_cpucfg_mask(int id, u64 *v)
*v |= CPUCFG2_ARMBT;
if (cpu_has_lbt_mips)
*v |= CPUCFG2_MIPSBT;
if (cpu_has_ptw)
*v |= CPUCFG2_PTW;
return 0;
case LOONGARCH_CPUCFG3:


@ -146,6 +146,10 @@ static int kvm_vm_feature_has_attr(struct kvm *kvm, struct kvm_device_attr *attr
if (kvm_pvtime_supported())
return 0;
return -ENXIO;
case KVM_LOONGARCH_VM_FEAT_PTW:
if (cpu_has_ptw)
return 0;
return -ENXIO;
default:
return -ENXIO;
}


@ -16,11 +16,11 @@
#define ZPCI_PCI_ST_FUNC_NOT_AVAIL 40
#define ZPCI_PCI_ST_ALREADY_IN_RQ_STATE 44
/* Load/Store return codes */
#define ZPCI_PCI_LS_OK 0
#define ZPCI_PCI_LS_ERR 1
#define ZPCI_PCI_LS_BUSY 2
#define ZPCI_PCI_LS_INVAL_HANDLE 3
/* PCI instruction condition codes */
#define ZPCI_CC_OK 0
#define ZPCI_CC_ERR 1
#define ZPCI_CC_BUSY 2
#define ZPCI_CC_INVAL_HANDLE 3
/* Load/Store address space identifiers */
#define ZPCI_PCIAS_MEMIO_0 0


@ -16,7 +16,7 @@
static int kexec_file_add_kernel_elf(struct kimage *image,
struct s390_load_data *data)
{
struct kexec_buf buf;
struct kexec_buf buf = {};
const Elf_Ehdr *ehdr;
const Elf_Phdr *phdr;
Elf_Addr entry;


@ -16,7 +16,7 @@
static int kexec_file_add_kernel_image(struct kimage *image,
struct s390_load_data *data)
{
struct kexec_buf buf;
struct kexec_buf buf = {};
buf.image = image;


@ -129,7 +129,7 @@ static int kexec_file_update_purgatory(struct kimage *image,
static int kexec_file_add_purgatory(struct kimage *image,
struct s390_load_data *data)
{
struct kexec_buf buf;
struct kexec_buf buf = {};
int ret;
buf.image = image;
@ -152,7 +152,7 @@ static int kexec_file_add_purgatory(struct kimage *image,
static int kexec_file_add_initrd(struct kimage *image,
struct s390_load_data *data)
{
struct kexec_buf buf;
struct kexec_buf buf = {};
int ret;
buf.image = image;
@ -184,7 +184,7 @@ static int kexec_file_add_ipl_report(struct kimage *image,
{
__u32 *lc_ipl_parmblock_ptr;
unsigned int len, ncerts;
struct kexec_buf buf;
struct kexec_buf buf = {};
unsigned long addr;
void *ptr, *end;
int ret;


@ -760,8 +760,6 @@ static int __hw_perf_event_init(struct perf_event *event, unsigned int type)
break;
case PERF_TYPE_HARDWARE:
if (is_sampling_event(event)) /* No sampling support */
return -ENOENT;
ev = attr->config;
if (!attr->exclude_user && attr->exclude_kernel) {
/*
@ -859,6 +857,8 @@ static int cpumf_pmu_event_init(struct perf_event *event)
unsigned int type = event->attr.type;
int err = -ENOENT;
if (is_sampling_event(event)) /* No sampling support */
return err;
if (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_RAW)
err = __hw_perf_event_init(event, type);
else if (event->pmu->type == type)


@ -285,10 +285,10 @@ static int paicrypt_event_init(struct perf_event *event)
/* PAI crypto PMU registered as PERF_TYPE_RAW, check event type */
if (a->type != PERF_TYPE_RAW && event->pmu->type != a->type)
return -ENOENT;
/* PAI crypto event must be in valid range */
/* PAI crypto event must be in valid range, try others if not */
if (a->config < PAI_CRYPTO_BASE ||
a->config > PAI_CRYPTO_BASE + paicrypt_cnt)
return -EINVAL;
return -ENOENT;
/* Allow only CRYPTO_ALL for sampling */
if (a->sample_period && a->config != PAI_CRYPTO_BASE)
return -EINVAL;


@ -265,7 +265,7 @@ static int paiext_event_valid(struct perf_event *event)
event->hw.config_base = offsetof(struct paiext_cb, acc);
return 0;
}
return -EINVAL;
return -ENOENT;
}
/* Might be called on different CPU than the one the event is intended for. */


@ -2776,12 +2776,19 @@ static unsigned long get_ind_bit(__u64 addr, unsigned long bit_nr, bool swap)
static struct page *get_map_page(struct kvm *kvm, u64 uaddr)
{
struct mm_struct *mm = kvm->mm;
struct page *page = NULL;
int locked = 1;
if (mmget_not_zero(mm)) {
mmap_read_lock(mm);
get_user_pages_remote(mm, uaddr, 1, FOLL_WRITE,
&page, &locked);
if (locked)
mmap_read_unlock(mm);
mmput(mm);
}
mmap_read_lock(kvm->mm);
get_user_pages_remote(kvm->mm, uaddr, 1, FOLL_WRITE,
&page, NULL);
mmap_read_unlock(kvm->mm);
return page;
}


@ -4864,12 +4864,12 @@ static void kvm_s390_assert_primary_as(struct kvm_vcpu *vcpu)
* @vcpu: the vCPU whose gmap is to be fixed up
* @gfn: the guest frame number used for memslots (including fake memslots)
* @gaddr: the gmap address, does not have to match @gfn for ucontrol gmaps
* @flags: FOLL_* flags
* @foll: FOLL_* flags
*
* Return: 0 on success, < 0 in case of error.
* Context: The mm lock must not be held before calling. May sleep.
*/
int __kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr, unsigned int flags)
int __kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr, unsigned int foll)
{
struct kvm_memory_slot *slot;
unsigned int fault_flags;
@ -4883,13 +4883,13 @@ int __kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr, u
if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
return vcpu_post_run_addressing_exception(vcpu);
fault_flags = flags & FOLL_WRITE ? FAULT_FLAG_WRITE : 0;
fault_flags = foll & FOLL_WRITE ? FAULT_FLAG_WRITE : 0;
if (vcpu->arch.gmap->pfault_enabled)
flags |= FOLL_NOWAIT;
foll |= FOLL_NOWAIT;
vmaddr = __gfn_to_hva_memslot(slot, gfn);
try_again:
pfn = __kvm_faultin_pfn(slot, gfn, flags, &writable, &page);
pfn = __kvm_faultin_pfn(slot, gfn, foll, &writable, &page);
/* Access outside memory, inject addressing exception */
if (is_noslot_pfn(pfn))
@ -4905,7 +4905,7 @@ int __kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr, u
return 0;
vcpu->stat.pfault_sync++;
/* Could not setup async pfault, try again synchronously */
flags &= ~FOLL_NOWAIT;
foll &= ~FOLL_NOWAIT;
goto try_again;
}
/* Any other error */
@ -4925,7 +4925,7 @@ int __kvm_s390_handle_dat_fault(struct kvm_vcpu *vcpu, gfn_t gfn, gpa_t gaddr, u
return rc;
}
static int vcpu_dat_fault_handler(struct kvm_vcpu *vcpu, unsigned long gaddr, unsigned int flags)
static int vcpu_dat_fault_handler(struct kvm_vcpu *vcpu, unsigned long gaddr, unsigned int foll)
{
unsigned long gaddr_tmp;
gfn_t gfn;
@ -4950,18 +4950,18 @@ static int vcpu_dat_fault_handler(struct kvm_vcpu *vcpu, unsigned long gaddr, un
}
gfn = gpa_to_gfn(gaddr_tmp);
}
return __kvm_s390_handle_dat_fault(vcpu, gfn, gaddr, flags);
return __kvm_s390_handle_dat_fault(vcpu, gfn, gaddr, foll);
}
static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
{
unsigned int flags = 0;
unsigned int foll = 0;
unsigned long gaddr;
int rc;
gaddr = current->thread.gmap_teid.addr * PAGE_SIZE;
if (kvm_s390_cur_gmap_fault_is_write())
flags = FAULT_FLAG_WRITE;
foll = FOLL_WRITE;
switch (current->thread.gmap_int_code & PGM_INT_CODE_MASK) {
case 0:
@ -5003,7 +5003,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
send_sig(SIGSEGV, current, 0);
if (rc != -ENXIO)
break;
flags = FAULT_FLAG_WRITE;
foll = FOLL_WRITE;
fallthrough;
case PGM_PROTECTION:
case PGM_SEGMENT_TRANSLATION:
@ -5013,7 +5013,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
case PGM_REGION_SECOND_TRANS:
case PGM_REGION_THIRD_TRANS:
kvm_s390_assert_primary_as(vcpu);
return vcpu_dat_fault_handler(vcpu, gaddr, flags);
return vcpu_dat_fault_handler(vcpu, gaddr, foll);
default:
KVM_BUG(1, vcpu->kvm, "Unexpected program interrupt 0x%x, TEID 0x%016lx",
current->thread.gmap_int_code, current->thread.gmap_teid.val);


@ -624,6 +624,17 @@ int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
int cc, ret;
u16 dummy;
/* Add the notifier only once. No races because we hold kvm->lock */
if (kvm->arch.pv.mmu_notifier.ops != &kvm_s390_pv_mmu_notifier_ops) {
/* The notifier will be unregistered when the VM is destroyed */
kvm->arch.pv.mmu_notifier.ops = &kvm_s390_pv_mmu_notifier_ops;
ret = mmu_notifier_register(&kvm->arch.pv.mmu_notifier, kvm->mm);
if (ret) {
kvm->arch.pv.mmu_notifier.ops = NULL;
return ret;
}
}
ret = kvm_s390_pv_alloc_vm(kvm);
if (ret)
return ret;
@ -659,11 +670,6 @@ int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
return -EIO;
}
kvm->arch.gmap->guest_handle = uvcb.guest_handle;
/* Add the notifier only once. No races because we hold kvm->lock */
if (kvm->arch.pv.mmu_notifier.ops != &kvm_s390_pv_mmu_notifier_ops) {
kvm->arch.pv.mmu_notifier.ops = &kvm_s390_pv_mmu_notifier_ops;
mmu_notifier_register(&kvm->arch.pv.mmu_notifier, kvm->mm);
}
return 0;
}


@ -314,7 +314,6 @@ pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
int nodat;
struct mm_struct *mm = vma->vm_mm;
preempt_disable();
pgste = ptep_xchg_start(mm, addr, ptep);
nodat = !!(pgste_val(pgste) & _PGSTE_GPS_NODAT);
old = ptep_flush_lazy(mm, addr, ptep, nodat);
@ -339,7 +338,6 @@ void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
} else {
set_pte(ptep, pte);
}
preempt_enable();
}
static inline void pmdp_idte_local(struct mm_struct *mm,


@ -1250,10 +1250,12 @@ static int virtio_uml_probe(struct platform_device *pdev)
device_set_wakeup_capable(&vu_dev->vdev.dev, true);
rc = register_virtio_device(&vu_dev->vdev);
if (rc)
if (rc) {
put_device(&vu_dev->vdev.dev);
return rc;
}
vu_dev->registered = 1;
return rc;
return 0;
error_init:
os_close_file(vu_dev->sock);


@ -535,7 +535,7 @@ ssize_t os_rcv_fd_msg(int fd, int *fds, unsigned int n_fds,
cmsg->cmsg_type != SCM_RIGHTS)
return n;
memcpy(fds, CMSG_DATA(cmsg), cmsg->cmsg_len);
memcpy(fds, CMSG_DATA(cmsg), cmsg->cmsg_len - CMSG_LEN(0));
return n;
}


@ -20,8 +20,7 @@
void stack_protections(unsigned long address)
{
if (mprotect((void *) address, UM_THREAD_SIZE,
PROT_READ | PROT_WRITE | PROT_EXEC) < 0)
if (mprotect((void *) address, UM_THREAD_SIZE, PROT_READ | PROT_WRITE) < 0)
panic("protecting stack failed, errno = %d", errno);
}


@ -2701,6 +2701,15 @@ config MITIGATION_TSA
security vulnerability on AMD CPUs which can lead to forwarding of
invalid info to subsequent instructions and thus can affect their
timing and thereby cause a leakage.
config MITIGATION_VMSCAPE
bool "Mitigate VMSCAPE"
depends on KVM
default y
help
Enable mitigation for VMSCAPE attacks. VMSCAPE is a hardware security
vulnerability on Intel and AMD CPUs that may allow a guest to do
Spectre v2 style attacks on the userspace hypervisor.
endif
config ARCH_HAS_ADD_PAGES


@ -495,6 +495,7 @@
#define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
#define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
#define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */
#define X86_FEATURE_IBPB_EXIT_TO_USER (21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */
/*
* BUG word(s)
@ -551,4 +552,5 @@
#define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */
#define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
#define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */
#define X86_BUG_VMSCAPE X86_BUG( 1*32+10) /* "vmscape" CPU is affected by VMSCAPE attacks from guests */
#endif /* _ASM_X86_CPUFEATURES_H */


@ -93,6 +93,13 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
* 8 (ia32) bits.
*/
choose_random_kstack_offset(rdtsc());
/* Avoid unnecessary reads of 'x86_ibpb_exit_to_user' */
if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
this_cpu_read(x86_ibpb_exit_to_user)) {
indirect_branch_prediction_barrier();
this_cpu_write(x86_ibpb_exit_to_user, false);
}
}
#define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare


@ -530,6 +530,8 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
: "memory");
}
DECLARE_PER_CPU(bool, x86_ibpb_exit_to_user);
static inline void indirect_branch_prediction_barrier(void)
{
asm_inline volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)


@ -562,6 +562,24 @@ enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
extern struct ghcb *boot_ghcb;
static inline void sev_evict_cache(void *va, int npages)
{
volatile u8 val __always_unused;
u8 *bytes = va;
int page_idx;
/*
* For SEV guests, a read from the first/last cache-lines of a 4K page
* using the guest key is sufficient to cause a flush of all cache-lines
* associated with that 4K page without incurring all the overhead of a
* full CLFLUSH sequence.
*/
for (page_idx = 0; page_idx < npages; page_idx++) {
val = bytes[page_idx * PAGE_SIZE];
val = bytes[page_idx * PAGE_SIZE + PAGE_SIZE - 1];
}
}
#else /* !CONFIG_AMD_MEM_ENCRYPT */
#define snp_vmpl 0
@ -605,6 +623,7 @@ static inline int snp_send_guest_request(struct snp_msg_desc *mdesc,
static inline int snp_svsm_vtpm_send_command(u8 *buffer) { return -ENODEV; }
static inline void __init snp_secure_tsc_prepare(void) { }
static inline void __init snp_secure_tsc_init(void) { }
static inline void sev_evict_cache(void *va, int npages) {}
#endif /* CONFIG_AMD_MEM_ENCRYPT */
@ -619,24 +638,6 @@ int rmp_make_shared(u64 pfn, enum pg_level level);
void snp_leak_pages(u64 pfn, unsigned int npages);
void kdump_sev_callback(void);
void snp_fixup_e820_tables(void);
static inline void sev_evict_cache(void *va, int npages)
{
volatile u8 val __always_unused;
u8 *bytes = va;
int page_idx;
/*
* For SEV guests, a read from the first/last cache-lines of a 4K page
* using the guest key is sufficient to cause a flush of all cache-lines
* associated with that 4K page without incurring all the overhead of a
* full CLFLUSH sequence.
*/
for (page_idx = 0; page_idx < npages; page_idx++) {
val = bytes[page_idx * PAGE_SIZE];
val = bytes[page_idx * PAGE_SIZE + PAGE_SIZE - 1];
}
}
#else
static inline bool snp_probe_rmptable_info(void) { return false; }
static inline int snp_rmptable_init(void) { return -ENOSYS; }
@ -652,7 +653,6 @@ static inline int rmp_make_shared(u64 pfn, enum pg_level level) { return -ENODEV
static inline void snp_leak_pages(u64 pfn, unsigned int npages) {}
static inline void kdump_sev_callback(void) { }
static inline void snp_fixup_e820_tables(void) {}
static inline void sev_evict_cache(void *va, int npages) {}
#endif
#endif


@ -96,6 +96,9 @@ static void __init its_update_mitigation(void);
static void __init its_apply_mitigation(void);
static void __init tsa_select_mitigation(void);
static void __init tsa_apply_mitigation(void);
static void __init vmscape_select_mitigation(void);
static void __init vmscape_update_mitigation(void);
static void __init vmscape_apply_mitigation(void);
/* The base value of the SPEC_CTRL MSR without task-specific bits set */
u64 x86_spec_ctrl_base;
@ -105,6 +108,14 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
/*
* Set when the CPU has run a potentially malicious guest. An IBPB will
* be needed before running userspace. That IBPB will flush the branch
* predictor content.
*/
DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user);
EXPORT_PER_CPU_SYMBOL_GPL(x86_ibpb_exit_to_user);
u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
static u64 __ro_after_init x86_arch_cap_msr;
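
The per-CPU flag declared above and the KVM hunk further below form a simple producer/consumer pair: KVM marks the CPU after it has run a guest, and the exit-to-userspace path issues a single IBPB and clears the mark. The following is a standalone userspace sketch of that pattern only; the stub names (ibpb(), after_vm_exit(), exit_to_user_mode()) are illustrative stand-ins, not kernel APIs.

#include <stdbool.h>
#include <stdio.h>

static bool ibpb_needed_on_exit;	/* models the per-CPU x86_ibpb_exit_to_user */

static void ibpb(void)			/* stand-in for indirect_branch_prediction_barrier() */
{
	puts("IBPB issued");
}

static void after_vm_exit(void)		/* models the hunk added to vcpu_enter_guest() */
{
	ibpb_needed_on_exit = true;
}

static void exit_to_user_mode(void)	/* models arch_exit_to_user_mode_prepare() */
{
	if (ibpb_needed_on_exit) {
		ibpb();
		ibpb_needed_on_exit = false;
	}
}

int main(void)
{
	after_vm_exit();
	exit_to_user_mode();	/* barrier fires exactly once */
	exit_to_user_mode();	/* no guest ran in between, so no barrier */
	return 0;
}

If no guest ran since the last return to userspace, the flag stays false and no IBPB is issued, which is the point of keeping the state per CPU.
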
@ -262,6 +273,7 @@ void __init cpu_select_mitigations(void)
its_select_mitigation();
bhi_select_mitigation();
tsa_select_mitigation();
vmscape_select_mitigation();
/*
* After mitigations are selected, some may need to update their
@ -293,6 +305,7 @@ void __init cpu_select_mitigations(void)
bhi_update_mitigation();
/* srso_update_mitigation() depends on retbleed_update_mitigation(). */
srso_update_mitigation();
vmscape_update_mitigation();
spectre_v1_apply_mitigation();
spectre_v2_apply_mitigation();
@ -310,6 +323,7 @@ void __init cpu_select_mitigations(void)
its_apply_mitigation();
bhi_apply_mitigation();
tsa_apply_mitigation();
vmscape_apply_mitigation();
}
/*
@ -2538,88 +2552,6 @@ static void update_mds_branch_idle(void)
}
}
#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
void cpu_bugs_smt_update(void)
{
mutex_lock(&spec_ctrl_mutex);
if (sched_smt_active() && unprivileged_ebpf_enabled() &&
spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
switch (spectre_v2_user_stibp) {
case SPECTRE_V2_USER_NONE:
break;
case SPECTRE_V2_USER_STRICT:
case SPECTRE_V2_USER_STRICT_PREFERRED:
update_stibp_strict();
break;
case SPECTRE_V2_USER_PRCTL:
case SPECTRE_V2_USER_SECCOMP:
update_indir_branch_cond();
break;
}
switch (mds_mitigation) {
case MDS_MITIGATION_FULL:
case MDS_MITIGATION_AUTO:
case MDS_MITIGATION_VMWERV:
if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
pr_warn_once(MDS_MSG_SMT);
update_mds_branch_idle();
break;
case MDS_MITIGATION_OFF:
break;
}
switch (taa_mitigation) {
case TAA_MITIGATION_VERW:
case TAA_MITIGATION_AUTO:
case TAA_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(TAA_MSG_SMT);
break;
case TAA_MITIGATION_TSX_DISABLED:
case TAA_MITIGATION_OFF:
break;
}
switch (mmio_mitigation) {
case MMIO_MITIGATION_VERW:
case MMIO_MITIGATION_AUTO:
case MMIO_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(MMIO_MSG_SMT);
break;
case MMIO_MITIGATION_OFF:
break;
}
switch (tsa_mitigation) {
case TSA_MITIGATION_USER_KERNEL:
case TSA_MITIGATION_VM:
case TSA_MITIGATION_AUTO:
case TSA_MITIGATION_FULL:
/*
* TSA-SQ can potentially lead to info leakage between
* SMT threads.
*/
if (sched_smt_active())
static_branch_enable(&cpu_buf_idle_clear);
else
static_branch_disable(&cpu_buf_idle_clear);
break;
case TSA_MITIGATION_NONE:
case TSA_MITIGATION_UCODE_NEEDED:
break;
}
mutex_unlock(&spec_ctrl_mutex);
}
#undef pr_fmt
#define pr_fmt(fmt) "Speculative Store Bypass: " fmt
@ -3330,9 +3262,185 @@ static void __init srso_apply_mitigation(void)
}
}
#undef pr_fmt
#define pr_fmt(fmt) "VMSCAPE: " fmt
enum vmscape_mitigations {
VMSCAPE_MITIGATION_NONE,
VMSCAPE_MITIGATION_AUTO,
VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
VMSCAPE_MITIGATION_IBPB_ON_VMEXIT,
};
static const char * const vmscape_strings[] = {
[VMSCAPE_MITIGATION_NONE] = "Vulnerable",
/* [VMSCAPE_MITIGATION_AUTO] */
[VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER] = "Mitigation: IBPB before exit to userspace",
[VMSCAPE_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT",
};
static enum vmscape_mitigations vmscape_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_VMSCAPE) ? VMSCAPE_MITIGATION_AUTO : VMSCAPE_MITIGATION_NONE;
static int __init vmscape_parse_cmdline(char *str)
{
if (!str)
return -EINVAL;
if (!strcmp(str, "off")) {
vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
} else if (!strcmp(str, "ibpb")) {
vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
} else if (!strcmp(str, "force")) {
setup_force_cpu_bug(X86_BUG_VMSCAPE);
vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
} else {
pr_err("Ignoring unknown vmscape=%s option.\n", str);
}
return 0;
}
early_param("vmscape", vmscape_parse_cmdline);
static void __init vmscape_select_mitigation(void)
{
if (cpu_mitigations_off() ||
!boot_cpu_has_bug(X86_BUG_VMSCAPE) ||
!boot_cpu_has(X86_FEATURE_IBPB)) {
vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
return;
}
if (vmscape_mitigation == VMSCAPE_MITIGATION_AUTO)
vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
}
static void __init vmscape_update_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_VMSCAPE))
return;
if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB ||
srso_mitigation == SRSO_MITIGATION_IBPB_ON_VMEXIT)
vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_ON_VMEXIT;
pr_info("%s\n", vmscape_strings[vmscape_mitigation]);
}
static void __init vmscape_apply_mitigation(void)
{
if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_TO_USER);
}
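
For readability, here is a condensed standalone restatement of the selection logic just shown. The enum and the pick() helper are stand-ins rather than kernel code, but the decisions mirror vmscape_select_mitigation() and vmscape_update_mitigation() above.

#include <stdbool.h>
#include <stdio.h>

enum vmscape_mit { NONE, AUTO, IBPB_EXIT_TO_USER, IBPB_ON_VMEXIT };

static enum vmscape_mit pick(bool bug, bool have_ibpb, bool mitigations_off,
			     bool ibpb_already_on_vmexit)
{
	enum vmscape_mit m = AUTO;

	/* vmscape_select_mitigation() */
	if (mitigations_off || !bug || !have_ibpb)
		m = NONE;
	else if (m == AUTO)
		m = IBPB_EXIT_TO_USER;

	/*
	 * vmscape_update_mitigation(): an IBPB already issued on every
	 * VM-exit (retbleed=ibpb or SRSO's IBPB-on-VMEXIT) covers this bug.
	 */
	if (bug && ibpb_already_on_vmexit)
		m = IBPB_ON_VMEXIT;

	return m;
}

int main(void)
{
	printf("%d\n", pick(true, true, false, false));	/* 2: IBPB before exit to userspace */
	printf("%d\n", pick(true, true, false, true));	/* 3: IBPB on VMEXIT */
	return 0;
}
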
#undef pr_fmt
#define pr_fmt(fmt) fmt
#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
#define VMSCAPE_MSG_SMT "VMSCAPE: SMT on, STIBP is required for full protection. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/vmscape.html for more details.\n"
void cpu_bugs_smt_update(void)
{
mutex_lock(&spec_ctrl_mutex);
if (sched_smt_active() && unprivileged_ebpf_enabled() &&
spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
switch (spectre_v2_user_stibp) {
case SPECTRE_V2_USER_NONE:
break;
case SPECTRE_V2_USER_STRICT:
case SPECTRE_V2_USER_STRICT_PREFERRED:
update_stibp_strict();
break;
case SPECTRE_V2_USER_PRCTL:
case SPECTRE_V2_USER_SECCOMP:
update_indir_branch_cond();
break;
}
switch (mds_mitigation) {
case MDS_MITIGATION_FULL:
case MDS_MITIGATION_AUTO:
case MDS_MITIGATION_VMWERV:
if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
pr_warn_once(MDS_MSG_SMT);
update_mds_branch_idle();
break;
case MDS_MITIGATION_OFF:
break;
}
switch (taa_mitigation) {
case TAA_MITIGATION_VERW:
case TAA_MITIGATION_AUTO:
case TAA_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(TAA_MSG_SMT);
break;
case TAA_MITIGATION_TSX_DISABLED:
case TAA_MITIGATION_OFF:
break;
}
switch (mmio_mitigation) {
case MMIO_MITIGATION_VERW:
case MMIO_MITIGATION_AUTO:
case MMIO_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(MMIO_MSG_SMT);
break;
case MMIO_MITIGATION_OFF:
break;
}
switch (tsa_mitigation) {
case TSA_MITIGATION_USER_KERNEL:
case TSA_MITIGATION_VM:
case TSA_MITIGATION_AUTO:
case TSA_MITIGATION_FULL:
/*
* TSA-SQ can potentially lead to info leakage between
* SMT threads.
*/
if (sched_smt_active())
static_branch_enable(&cpu_buf_idle_clear);
else
static_branch_disable(&cpu_buf_idle_clear);
break;
case TSA_MITIGATION_NONE:
case TSA_MITIGATION_UCODE_NEEDED:
break;
}
switch (vmscape_mitigation) {
case VMSCAPE_MITIGATION_NONE:
case VMSCAPE_MITIGATION_AUTO:
break;
case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT:
case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
/*
* Hypervisors can be attacked across threads; warn for SMT when
* STIBP is not already enabled system-wide.
*
* Intel eIBRS (!AUTOIBRS) implies STIBP on.
*/
if (!sched_smt_active() ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ||
(spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
!boot_cpu_has(X86_FEATURE_AUTOIBRS)))
break;
pr_warn_once(VMSCAPE_MSG_SMT);
break;
}
mutex_unlock(&spec_ctrl_mutex);
}
#ifdef CONFIG_SYSFS
#define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"
@ -3578,6 +3686,11 @@ static ssize_t tsa_show_state(char *buf)
return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]);
}
static ssize_t vmscape_show_state(char *buf)
{
return sysfs_emit(buf, "%s\n", vmscape_strings[vmscape_mitigation]);
}
static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
char *buf, unsigned int bug)
{
@ -3644,6 +3757,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
case X86_BUG_TSA:
return tsa_show_state(buf);
case X86_BUG_VMSCAPE:
return vmscape_show_state(buf);
default:
break;
}
@ -3735,6 +3851,11 @@ ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *bu
{
return cpu_show_common(dev, attr, buf, X86_BUG_TSA);
}
ssize_t cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf)
{
return cpu_show_common(dev, attr, buf, X86_BUG_VMSCAPE);
}
#endif
void __warn_thunk(void)


@ -1236,55 +1236,71 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
#define ITS_NATIVE_ONLY BIT(9)
/* CPU is affected by Transient Scheduler Attacks */
#define TSA BIT(10)
/* CPU is affected by VMSCAPE */
#define VMSCAPE BIT(11)
static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_HASWELL, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_HASWELL_L, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_HASWELL_G, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_HASWELL_X, X86_STEP_MAX, MMIO),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_D, X86_STEP_MAX, MMIO),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_G, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_X, X86_STEP_MAX, MMIO),
VULNBL_INTEL_STEPS(INTEL_BROADWELL, X86_STEP_MAX, SRBDS),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, 0x5, MMIO | RETBLEED | GDS),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, 0xb, MMIO | RETBLEED | GDS | SRBDS),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE, 0xc, MMIO | RETBLEED | GDS | SRBDS),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS),
VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L, X86_STEP_MAX, RETBLEED),
VULNBL_INTEL_STEPS(INTEL_SANDYBRIDGE_X, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SANDYBRIDGE, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE_X, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_HASWELL, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_HASWELL_L, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_HASWELL_G, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_HASWELL_X, X86_STEP_MAX, MMIO | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_D, X86_STEP_MAX, MMIO | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_X, X86_STEP_MAX, MMIO | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_BROADWELL_G, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_BROADWELL, X86_STEP_MAX, SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, 0x5, MMIO | RETBLEED | GDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SKYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, 0xb, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE, 0xc, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_KABYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L, X86_STEP_MAX, RETBLEED | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ICELAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_STEPS(INTEL_ICELAKE_D, X86_STEP_MAX, MMIO | GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_STEPS(INTEL_ICELAKE_X, X86_STEP_MAX, MMIO | GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, 0x0, MMIO | RETBLEED | ITS),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, 0x0, MMIO | RETBLEED | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_TIGERLAKE_L, X86_STEP_MAX, GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_STEPS(INTEL_TIGERLAKE, X86_STEP_MAX, GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_STEPS(INTEL_LAKEFIELD, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED),
VULNBL_INTEL_STEPS(INTEL_ROCKETLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
VULNBL_INTEL_TYPE(INTEL_ALDERLAKE, ATOM, RFDS),
VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L, X86_STEP_MAX, RFDS),
VULNBL_INTEL_TYPE(INTEL_RAPTORLAKE, ATOM, RFDS),
VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_P, X86_STEP_MAX, RFDS),
VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_S, X86_STEP_MAX, RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_GRACEMONT, X86_STEP_MAX, RFDS),
VULNBL_INTEL_TYPE(INTEL_ALDERLAKE, ATOM, RFDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ALDERLAKE, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L, X86_STEP_MAX, RFDS | VMSCAPE),
VULNBL_INTEL_TYPE(INTEL_RAPTORLAKE, ATOM, RFDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_P, X86_STEP_MAX, RFDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_S, X86_STEP_MAX, RFDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_METEORLAKE_L, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ARROWLAKE_H, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ARROWLAKE, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ARROWLAKE_U, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_LUNARLAKE_M, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_SAPPHIRERAPIDS_X, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_GRANITERAPIDS_X, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_EMERALDRAPIDS_X, X86_STEP_MAX, VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ATOM_GRACEMONT, X86_STEP_MAX, RFDS | VMSCAPE),
VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT, X86_STEP_MAX, MMIO | MMIO_SBDS | RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT_D, X86_STEP_MAX, MMIO | RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT, X86_STEP_MAX, RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT_D, X86_STEP_MAX, RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT_PLUS, X86_STEP_MAX, RFDS),
VULNBL_INTEL_STEPS(INTEL_ATOM_CRESTMONT_X, X86_STEP_MAX, VMSCAPE),
VULNBL_AMD(0x15, RETBLEED),
VULNBL_AMD(0x16, RETBLEED),
VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO),
VULNBL_AMD(0x19, SRSO | TSA),
VULNBL_AMD(0x1a, SRSO),
VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
VULNBL_AMD(0x19, SRSO | TSA | VMSCAPE),
VULNBL_AMD(0x1a, SRSO | VMSCAPE),
{}
};
@ -1543,6 +1559,14 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
}
}
/*
* Set the bug only on bare-metal. A nested hypervisor should already be
* deploying IBPB to isolate itself from nested guests.
*/
if (cpu_matches(cpu_vuln_blacklist, VMSCAPE) &&
!boot_cpu_has(X86_FEATURE_HYPERVISOR))
setup_force_cpu_bug(X86_BUG_VMSCAPE);
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
return;

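
To make the table changes and the check above easier to follow, here is a tiny standalone illustration of how an entry's issue bits (such as the new VMSCAPE bit) gate forcing a bug flag. The struct, the sample entry, and entry_has() are made-up stand-ins for the real VULNBL_*()/cpu_matches() machinery.

#include <stdio.h>

#define BIT(n)		(1u << (n))
#define TSA		BIT(10)		/* mirrors the bit layout above */
#define VMSCAPE		BIT(11)

struct vuln_entry {
	const char *name;		/* stand-in for the CPU-model match data */
	unsigned int issues;
};

/* stand-in for cpu_matches(cpu_vuln_blacklist, VMSCAPE) on one entry */
static int entry_has(const struct vuln_entry *e, unsigned int which)
{
	return (e->issues & which) != 0;
}

int main(void)
{
	struct vuln_entry sample = { "made-up model", TSA | VMSCAPE };

	if (entry_has(&sample, VMSCAPE))
		puts("bare metal: would setup_force_cpu_bug(X86_BUG_VMSCAPE)");
	return 0;
}
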

@ -175,27 +175,30 @@ static void topoext_fixup(struct topo_scan *tscan)
static void parse_topology_amd(struct topo_scan *tscan)
{
bool has_topoext = false;
/*
* If the extended topology leaf 0x8000_001e is available
* try to get SMT, CORE, TILE, and DIE shifts from extended
* Try to get SMT, CORE, TILE, and DIE shifts from extended
* CPUID leaf 0x8000_0026 on supported processors first. If
* extended CPUID leaf 0x8000_0026 is not supported, try to
* get SMT and CORE shift from leaf 0xb first, then try to
* get the CORE shift from leaf 0x8000_0008.
* get SMT and CORE shift from leaf 0xb. If either leaf is
* available, cpu_parse_topology_ext() will return true.
*/
if (cpu_feature_enabled(X86_FEATURE_TOPOEXT))
has_topoext = cpu_parse_topology_ext(tscan);
bool has_xtopology = cpu_parse_topology_ext(tscan);
if (cpu_feature_enabled(X86_FEATURE_AMD_HTR_CORES))
tscan->c->topo.cpu_type = cpuid_ebx(0x80000026);
if (!has_topoext && !parse_8000_0008(tscan))
/*
* If XTOPOLOGY leaves (0x26/0xb) are not available, try to
* get the CORE shift from leaf 0x8000_0008 first.
*/
if (!has_xtopology && !parse_8000_0008(tscan))
return;
/* Prefer leaf 0x8000001e if available */
if (parse_8000_001e(tscan, has_topoext))
/*
* Prefer leaf 0x8000001e if available to get the SMT shift and
* the initial APIC ID if XTOPOLOGY leaves are not available.
*/
if (parse_8000_001e(tscan, has_xtopology))
return;
/* Try the NODEID MSR */


@ -4046,8 +4046,7 @@ static inline void sync_lapic_to_cr8(struct kvm_vcpu *vcpu)
struct vcpu_svm *svm = to_svm(vcpu);
u64 cr8;
if (nested_svm_virtualize_tpr(vcpu) ||
kvm_vcpu_apicv_active(vcpu))
if (nested_svm_virtualize_tpr(vcpu))
return;
cr8 = kvm_get_cr8(vcpu);


@ -11010,6 +11010,15 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
if (vcpu->arch.guest_fpu.xfd_err)
wrmsrq(MSR_IA32_XFD_ERR, 0);
/*
* Mark this CPU as needing a branch predictor flush before running
* userspace. Must be done before enabling preemption to ensure it gets
* set for the CPU that actually ran the guest, and not the CPU that it
* may migrate to.
*/
if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
this_cpu_write(x86_ibpb_exit_to_user, true);
/*
* Consume any pending interrupts, including the possible source of
* VM-Exit on SVM and any ticks that occur between VM-Exit and now.


@ -7,6 +7,7 @@
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/blkdev.h>
#include <linux/blk-integrity.h>
#include <linux/buffer_head.h>
#include <linux/mpage.h>
#include <linux/uio.h>
@ -54,7 +55,6 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
struct bio bio;
ssize_t ret;
WARN_ON_ONCE(iocb->ki_flags & IOCB_HAS_METADATA);
if (nr_pages <= DIO_INLINE_BIO_VECS)
vecs = inline_vecs;
else {
@ -131,7 +131,7 @@ static void blkdev_bio_end_io(struct bio *bio)
if (bio->bi_status && !dio->bio.bi_status)
dio->bio.bi_status = bio->bi_status;
if (!is_sync && (dio->iocb->ki_flags & IOCB_HAS_METADATA))
if (bio_integrity(bio))
bio_integrity_unmap_user(bio);
if (atomic_dec_and_test(&dio->ref)) {
@ -233,7 +233,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
}
bio->bi_opf |= REQ_NOWAIT;
}
if (!is_sync && (iocb->ki_flags & IOCB_HAS_METADATA)) {
if (iocb->ki_flags & IOCB_HAS_METADATA) {
ret = bio_integrity_map_iter(bio, iocb->private);
if (unlikely(ret))
goto fail;
@ -301,7 +301,7 @@ static void blkdev_bio_end_io_async(struct bio *bio)
ret = blk_status_to_errno(bio->bi_status);
}
if (iocb->ki_flags & IOCB_HAS_METADATA)
if (bio_integrity(bio))
bio_integrity_unmap_user(bio);
iocb->ki_complete(iocb, ret);
@ -422,7 +422,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
}
nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
if (likely(nr_pages <= BIO_MAX_VECS)) {
if (likely(nr_pages <= BIO_MAX_VECS &&
!(iocb->ki_flags & IOCB_HAS_METADATA))) {
if (is_sync_kiocb(iocb))
return __blkdev_direct_IO_simple(iocb, iter, bdev,
nr_pages);
@ -687,6 +688,8 @@ static int blkdev_open(struct inode *inode, struct file *filp)
if (bdev_can_atomic_write(bdev))
filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
if (blk_get_integrity(bdev->bd_disk))
filp->f_mode |= FMODE_HAS_METADATA;
ret = bdev_open(bdev, mode, filp->private_data, NULL, filp);
if (ret)


@ -970,6 +970,12 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
}
lock_sock(sk);
if (ctx->write) {
release_sock(sk);
return -EBUSY;
}
ctx->write = true;
if (ctx->init && !ctx->more) {
if (ctx->used) {
err = -EINVAL;
@ -1019,6 +1025,8 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
continue;
}
ctx->merge = 0;
if (!af_alg_writable(sk)) {
err = af_alg_wait_for_wmem(sk, msg->msg_flags);
if (err)
@ -1058,7 +1066,6 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
ctx->used += plen;
copied += plen;
size -= plen;
ctx->merge = 0;
} else {
do {
struct page *pg;
@ -1104,6 +1111,7 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
unlock:
af_alg_data_wakeup(sk);
ctx->write = false;
release_sock(sk);
return copied ?: err;


@ -603,6 +603,7 @@ CPU_SHOW_VULN_FALLBACK(ghostwrite);
CPU_SHOW_VULN_FALLBACK(old_microcode);
CPU_SHOW_VULN_FALLBACK(indirect_target_selection);
CPU_SHOW_VULN_FALLBACK(tsa);
CPU_SHOW_VULN_FALLBACK(vmscape);
static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
@ -622,6 +623,7 @@ static DEVICE_ATTR(ghostwrite, 0444, cpu_show_ghostwrite, NULL);
static DEVICE_ATTR(old_microcode, 0444, cpu_show_old_microcode, NULL);
static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL);
static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL);
static DEVICE_ATTR(vmscape, 0444, cpu_show_vmscape, NULL);
static struct attribute *cpu_root_vulnerabilities_attrs[] = {
&dev_attr_meltdown.attr,
@ -642,6 +644,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
&dev_attr_old_microcode.attr,
&dev_attr_indirect_target_selection.attr,
&dev_attr_tsa.attr,
&dev_attr_vmscape.attr,
NULL
};


@ -1330,6 +1330,7 @@ void drbd_reconsider_queue_parameters(struct drbd_device *device,
lim.max_write_zeroes_sectors = DRBD_MAX_BBIO_SECTORS;
else
lim.max_write_zeroes_sectors = 0;
lim.max_hw_wzeroes_unmap_sectors = 0;
if ((lim.discard_granularity >> SECTOR_SHIFT) >
lim.max_hw_discard_sectors) {


@ -1795,6 +1795,7 @@ static int write_same_filled_page(struct zram *zram, unsigned long fill,
u32 index)
{
zram_slot_lock(zram, index);
zram_free_page(zram, index);
zram_set_flag(zram, index, ZRAM_SAME);
zram_set_handle(zram, index, fill);
zram_slot_unlock(zram, index);
@ -1832,6 +1833,7 @@ static int write_incompressible_page(struct zram *zram, struct page *page,
kunmap_local(src);
zram_slot_lock(zram, index);
zram_free_page(zram, index);
zram_set_flag(zram, index, ZRAM_HUGE);
zram_set_handle(zram, index, handle);
zram_set_obj_size(zram, index, PAGE_SIZE);
@ -1855,11 +1857,6 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
unsigned long element;
bool same_filled;
/* First, free memory allocated to this slot (if any) */
zram_slot_lock(zram, index);
zram_free_page(zram, index);
zram_slot_unlock(zram, index);
mem = kmap_local_page(page);
same_filled = page_same_filled(mem, &element);
kunmap_local(mem);
@ -1901,6 +1898,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
zcomp_stream_put(zstrm);
zram_slot_lock(zram, index);
zram_free_page(zram, index);
zram_set_handle(zram, index, handle);
zram_set_obj_size(zram, index, comp_len);
zram_slot_unlock(zram, index);


@ -303,6 +303,9 @@ void cpg_mstp_detach_dev(struct generic_pm_domain *unused, struct device *dev)
pm_clk_destroy(dev);
}
static struct device_node *cpg_mstp_pd_np __initdata = NULL;
static struct generic_pm_domain *cpg_mstp_pd_genpd __initdata = NULL;
void __init cpg_mstp_add_clk_domain(struct device_node *np)
{
struct generic_pm_domain *pd;
@ -324,5 +327,20 @@ void __init cpg_mstp_add_clk_domain(struct device_node *np)
pd->detach_dev = cpg_mstp_detach_dev;
pm_genpd_init(pd, &pm_domain_always_on_gov, false);
of_genpd_add_provider_simple(np, pd);
cpg_mstp_pd_np = of_node_get(np);
cpg_mstp_pd_genpd = pd;
}
static int __init cpg_mstp_pd_init_provider(void)
{
int error;
if (!cpg_mstp_pd_np)
return -ENODEV;
error = of_genpd_add_provider_simple(cpg_mstp_pd_np, cpg_mstp_pd_genpd);
of_node_put(cpg_mstp_pd_np);
return error;
}
postcore_initcall(cpg_mstp_pd_init_provider);


@ -185,7 +185,7 @@ static unsigned long ccu_mp_recalc_rate(struct clk_hw *hw,
p &= (1 << cmp->p.width) - 1;
if (cmp->common.features & CCU_FEATURE_DUAL_DIV)
rate = (parent_rate / p) / m;
rate = (parent_rate / (p + cmp->p.offset)) / m;
else
rate = (parent_rate >> p) / m;

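
A quick numeric sanity check of the DUAL_DIV fix above, using made-up register values (24 MHz parent, p = 1, offset = 1, m = 2); the point is only that omitting the offset doubles the reported rate.

#include <stdio.h>

int main(void)
{
	unsigned long parent_rate = 24000000, p = 1, offset = 1, m = 2;

	unsigned long without_offset = (parent_rate / p) / m;		/* 12000000 */
	unsigned long with_offset = (parent_rate / (p + offset)) / m;	/* 6000000 */

	printf("without offset: %lu Hz, with offset: %lu Hz\n",
	       without_offset, with_offset);
	return 0;
}
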

@ -1554,13 +1554,15 @@ static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
pr_debug("CPU %d exiting\n", policy->cpu);
}
static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy, bool policy_change)
{
struct amd_cpudata *cpudata = policy->driver_data;
union perf_cached perf;
u8 epp;
if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
if (policy_change ||
policy->min != cpudata->min_limit_freq ||
policy->max != cpudata->max_limit_freq)
amd_pstate_update_min_max_limit(policy);
if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
@ -1584,7 +1586,7 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
cpudata->policy = policy->policy;
ret = amd_pstate_epp_update_limit(policy);
ret = amd_pstate_epp_update_limit(policy, true);
if (ret)
return ret;
@ -1626,13 +1628,14 @@ static int amd_pstate_suspend(struct cpufreq_policy *policy)
* min_perf value across kexec reboots. If this CPU is just resumed back without kexec,
* the limits, epp and desired perf will get reset to the cached values in cpudata struct
*/
ret = amd_pstate_update_perf(policy, perf.bios_min_perf, 0U, 0U, 0U, false);
ret = amd_pstate_update_perf(policy, perf.bios_min_perf,
FIELD_GET(AMD_CPPC_DES_PERF_MASK, cpudata->cppc_req_cached),
FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),
FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached),
false);
if (ret)
return ret;
/* invalidate to ensure it's rewritten during resume */
cpudata->cppc_req_cached = 0;
/* set this flag to avoid setting core offline*/
cpudata->suspended = true;
@ -1658,7 +1661,7 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
int ret;
/* enable amd pstate from suspend state*/
ret = amd_pstate_epp_update_limit(policy);
ret = amd_pstate_epp_update_limit(policy, false);
if (ret)
return ret;


@ -1034,8 +1034,8 @@ static bool hybrid_register_perf_domain(unsigned int cpu)
if (!cpu_dev)
return false;
if (em_dev_register_perf_domain(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
cpumask_of(cpu), false))
if (em_dev_register_pd_no_update(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
cpumask_of(cpu), false))
return false;
cpudata->pd_registered = true;


@ -2430,7 +2430,7 @@ static void __sev_firmware_shutdown(struct sev_device *sev, bool panic)
{
int error;
__sev_platform_shutdown_locked(NULL);
__sev_platform_shutdown_locked(&error);
if (sev_es_tmr) {
/*


@ -48,12 +48,16 @@ static void *rzn1_dmamux_route_allocate(struct of_phandle_args *dma_spec,
u32 mask;
int ret;
if (dma_spec->args_count != RNZ1_DMAMUX_NCELLS)
return ERR_PTR(-EINVAL);
if (dma_spec->args_count != RNZ1_DMAMUX_NCELLS) {
ret = -EINVAL;
goto put_device;
}
map = kzalloc(sizeof(*map), GFP_KERNEL);
if (!map)
return ERR_PTR(-ENOMEM);
if (!map) {
ret = -ENOMEM;
goto put_device;
}
chan = dma_spec->args[0];
map->req_idx = dma_spec->args[4];
@ -94,12 +98,15 @@ static void *rzn1_dmamux_route_allocate(struct of_phandle_args *dma_spec,
if (ret)
goto clear_bitmap;
put_device(&pdev->dev);
return map;
clear_bitmap:
clear_bit(map->req_idx, dmamux->used_chans);
free_map:
kfree(map);
put_device:
put_device(&pdev->dev);
return ERR_PTR(ret);
}


@ -189,27 +189,30 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
idxd->wq_enable_map = bitmap_zalloc_node(idxd->max_wqs, GFP_KERNEL, dev_to_node(dev));
if (!idxd->wq_enable_map) {
rc = -ENOMEM;
goto err_bitmap;
goto err_free_wqs;
}
for (i = 0; i < idxd->max_wqs; i++) {
wq = kzalloc_node(sizeof(*wq), GFP_KERNEL, dev_to_node(dev));
if (!wq) {
rc = -ENOMEM;
goto err;
goto err_unwind;
}
idxd_dev_set_type(&wq->idxd_dev, IDXD_DEV_WQ);
conf_dev = wq_confdev(wq);
wq->id = i;
wq->idxd = idxd;
device_initialize(wq_confdev(wq));
device_initialize(conf_dev);
conf_dev->parent = idxd_confdev(idxd);
conf_dev->bus = &dsa_bus_type;
conf_dev->type = &idxd_wq_device_type;
rc = dev_set_name(conf_dev, "wq%d.%d", idxd->id, wq->id);
if (rc < 0)
goto err;
if (rc < 0) {
put_device(conf_dev);
kfree(wq);
goto err_unwind;
}
mutex_init(&wq->wq_lock);
init_waitqueue_head(&wq->err_queue);
@ -220,15 +223,20 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
wq->enqcmds_retries = IDXD_ENQCMDS_RETRIES;
wq->wqcfg = kzalloc_node(idxd->wqcfg_size, GFP_KERNEL, dev_to_node(dev));
if (!wq->wqcfg) {
put_device(conf_dev);
kfree(wq);
rc = -ENOMEM;
goto err;
goto err_unwind;
}
if (idxd->hw.wq_cap.op_config) {
wq->opcap_bmap = bitmap_zalloc(IDXD_MAX_OPCAP_BITS, GFP_KERNEL);
if (!wq->opcap_bmap) {
kfree(wq->wqcfg);
put_device(conf_dev);
kfree(wq);
rc = -ENOMEM;
goto err_opcap_bmap;
goto err_unwind;
}
bitmap_copy(wq->opcap_bmap, idxd->opcap_bmap, IDXD_MAX_OPCAP_BITS);
}
@ -239,13 +247,7 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
return 0;
err_opcap_bmap:
kfree(wq->wqcfg);
err:
put_device(conf_dev);
kfree(wq);
err_unwind:
while (--i >= 0) {
wq = idxd->wqs[i];
if (idxd->hw.wq_cap.op_config)
@ -254,11 +256,10 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
conf_dev = wq_confdev(wq);
put_device(conf_dev);
kfree(wq);
}
bitmap_free(idxd->wq_enable_map);
err_bitmap:
err_free_wqs:
kfree(idxd->wqs);
return rc;
@ -1291,10 +1292,12 @@ static void idxd_remove(struct pci_dev *pdev)
device_unregister(idxd_confdev(idxd));
idxd_shutdown(pdev);
idxd_device_remove_debugfs(idxd);
idxd_cleanup(idxd);
perfmon_pmu_remove(idxd);
idxd_cleanup_interrupts(idxd);
if (device_pasid_enabled(idxd))
idxd_disable_system_pasid(idxd);
pci_iounmap(pdev, idxd->reg_base);
put_device(idxd_confdev(idxd));
idxd_free(idxd);
pci_disable_device(pdev);
}


@ -1283,13 +1283,17 @@ static int bam_dma_probe(struct platform_device *pdev)
if (!bdev->bamclk) {
ret = of_property_read_u32(pdev->dev.of_node, "num-channels",
&bdev->num_channels);
if (ret)
if (ret) {
dev_err(bdev->dev, "num-channels unspecified in dt\n");
return ret;
}
ret = of_property_read_u32(pdev->dev.of_node, "qcom,num-ees",
&bdev->num_ees);
if (ret)
if (ret) {
dev_err(bdev->dev, "num-ees unspecified in dt\n");
return ret;
}
}
ret = clk_prepare_enable(bdev->bamclk);


@ -2064,8 +2064,8 @@ static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
* priority. So Q0 is the highest priority queue and the last queue has
* the lowest priority.
*/
queue_priority_map = devm_kcalloc(dev, ecc->num_tc + 1, sizeof(s8),
GFP_KERNEL);
queue_priority_map = devm_kcalloc(dev, ecc->num_tc + 1,
sizeof(*queue_priority_map), GFP_KERNEL);
if (!queue_priority_map)
return -ENOMEM;


@ -211,8 +211,8 @@ static int
dpll_msg_add_clock_quality_level(struct sk_buff *msg, struct dpll_device *dpll,
struct netlink_ext_ack *extack)
{
DECLARE_BITMAP(qls, DPLL_CLOCK_QUALITY_LEVEL_MAX + 1) = { 0 };
const struct dpll_device_ops *ops = dpll_device_ops(dpll);
DECLARE_BITMAP(qls, DPLL_CLOCK_QUALITY_LEVEL_MAX) = { 0 };
enum dpll_clock_quality_level ql;
int ret;
@ -221,7 +221,7 @@ dpll_msg_add_clock_quality_level(struct sk_buff *msg, struct dpll_device *dpll,
ret = ops->clock_quality_level_get(dpll, dpll_priv(dpll), qls, extack);
if (ret)
return ret;
for_each_set_bit(ql, qls, DPLL_CLOCK_QUALITY_LEVEL_MAX)
for_each_set_bit(ql, qls, DPLL_CLOCK_QUALITY_LEVEL_MAX + 1)
if (nla_put_u32(msg, DPLL_A_CLOCK_QUALITY_LEVEL, ql))
return -EMSGSIZE;

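
The off-by-one being fixed above is easy to reproduce outside the kernel: a bitmap that must hold bit index MAX needs MAX + 1 bits, and the scan bound is the bit count, not the highest index. A minimal standalone illustration follows; QL_MAX is a stand-in value, not the real enum.

#include <stdio.h>

#define QL_MAX 7			/* stand-in for DPLL_CLOCK_QUALITY_LEVEL_MAX */

int main(void)
{
	/* storing bit QL_MAX itself requires QL_MAX + 1 bits */
	unsigned char qls[QL_MAX + 1] = {0};

	qls[QL_MAX] = 1;		/* driver reports the highest defined level */

	/* scanning only QL_MAX bits would silently drop that level */
	for (int ql = 0; ql < QL_MAX + 1; ql++)
		if (qls[ql])
			printf("quality level %d reported\n", ql);
	return 0;
}
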

@ -41,7 +41,7 @@
/*
* ABI version history is documented in linux/firewire-cdev.h.
*/
#define FW_CDEV_KERNEL_VERSION 5
#define FW_CDEV_KERNEL_VERSION 6
#define FW_CDEV_VERSION_EVENT_REQUEST2 4
#define FW_CDEV_VERSION_ALLOCATE_REGION_END 4
#define FW_CDEV_VERSION_AUTO_FLUSH_ISO_OVERFLOW 5


@ -942,8 +942,9 @@ struct gpio_desc *acpi_find_gpio(struct fwnode_handle *fwnode,
{
struct acpi_device *adev = to_acpi_device_node(fwnode);
bool can_fallback = acpi_can_fallback_to_crs(adev, con_id);
struct acpi_gpio_info info;
struct acpi_gpio_info info = {};
struct gpio_desc *desc;
int ret;
desc = __acpi_find_gpio(fwnode, con_id, idx, can_fallback, &info);
if (IS_ERR(desc))
@ -957,6 +958,12 @@ struct gpio_desc *acpi_find_gpio(struct fwnode_handle *fwnode,
acpi_gpio_update_gpiod_flags(dflags, &info);
acpi_gpio_update_gpiod_lookup_flags(lookupflags, &info);
/* ACPI uses hundredths of milliseconds units */
ret = gpio_set_debounce_timeout(desc, info.debounce * 10);
if (ret)
return ERR_PTR(ret);
return desc;
}
@ -992,7 +999,7 @@ int acpi_dev_gpio_irq_wake_get_by(struct acpi_device *adev, const char *con_id,
int ret;
for (i = 0, idx = 0; idx <= index; i++) {
struct acpi_gpio_info info;
struct acpi_gpio_info info = {};
struct gpio_desc *desc;
/* Ignore -EPROBE_DEFER, it only matters if idx matches */


@ -317,6 +317,18 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
.ignore_wake = "PNP0C50:00@8",
},
},
{
/*
* Same as G1619-04. New model.
*/
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "GPD"),
DMI_MATCH(DMI_PRODUCT_NAME, "G1619-05"),
},
.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
.ignore_wake = "PNP0C50:00@8",
},
},
{
/*
* Spurious wakeups from GPIO 11


@ -250,16 +250,24 @@ void amdgpu_amdkfd_interrupt(struct amdgpu_device *adev,
void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool suspend_proc)
{
if (adev->kfd.dev)
kgd2kfd_suspend(adev->kfd.dev, suspend_proc);
if (adev->kfd.dev) {
if (adev->in_s0ix)
kgd2kfd_stop_sched_all_nodes(adev->kfd.dev);
else
kgd2kfd_suspend(adev->kfd.dev, suspend_proc);
}
}
int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool resume_proc)
{
int r = 0;
if (adev->kfd.dev)
r = kgd2kfd_resume(adev->kfd.dev, resume_proc);
if (adev->kfd.dev) {
if (adev->in_s0ix)
r = kgd2kfd_start_sched_all_nodes(adev->kfd.dev);
else
r = kgd2kfd_resume(adev->kfd.dev, resume_proc);
}
return r;
}


@ -426,7 +426,9 @@ void kgd2kfd_smi_event_throttle(struct kfd_dev *kfd, uint64_t throttle_bitmask);
int kgd2kfd_check_and_lock_kfd(struct kfd_dev *kfd);
void kgd2kfd_unlock_kfd(struct kfd_dev *kfd);
int kgd2kfd_start_sched(struct kfd_dev *kfd, uint32_t node_id);
int kgd2kfd_start_sched_all_nodes(struct kfd_dev *kfd);
int kgd2kfd_stop_sched(struct kfd_dev *kfd, uint32_t node_id);
int kgd2kfd_stop_sched_all_nodes(struct kfd_dev *kfd);
bool kgd2kfd_compute_active(struct kfd_dev *kfd, uint32_t node_id);
bool kgd2kfd_vmfault_fast_path(struct amdgpu_device *adev, struct amdgpu_iv_entry *entry,
bool retry_fault);
@ -516,11 +518,21 @@ static inline int kgd2kfd_start_sched(struct kfd_dev *kfd, uint32_t node_id)
return 0;
}
static inline int kgd2kfd_start_sched_all_nodes(struct kfd_dev *kfd)
{
return 0;
}
static inline int kgd2kfd_stop_sched(struct kfd_dev *kfd, uint32_t node_id)
{
return 0;
}
static inline int kgd2kfd_stop_sched_all_nodes(struct kfd_dev *kfd)
{
return 0;
}
static inline bool kgd2kfd_compute_active(struct kfd_dev *kfd, uint32_t node_id)
{
return false;


@ -213,19 +213,35 @@ int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev,
spin_lock(&kfd_mem_limit.mem_limit_lock);
if (kfd_mem_limit.system_mem_used + system_mem_needed >
kfd_mem_limit.max_system_mem_limit)
kfd_mem_limit.max_system_mem_limit) {
pr_debug("Set no_system_mem_limit=1 if using shared memory\n");
if (!no_system_mem_limit) {
ret = -ENOMEM;
goto release;
}
}
if ((kfd_mem_limit.system_mem_used + system_mem_needed >
kfd_mem_limit.max_system_mem_limit && !no_system_mem_limit) ||
(kfd_mem_limit.ttm_mem_used + ttm_mem_needed >
kfd_mem_limit.max_ttm_mem_limit) ||
(adev && xcp_id >= 0 && adev->kfd.vram_used[xcp_id] + vram_needed >
vram_size - reserved_for_pt - reserved_for_ras - atomic64_read(&adev->vram_pin_size))) {
if (kfd_mem_limit.ttm_mem_used + ttm_mem_needed >
kfd_mem_limit.max_ttm_mem_limit) {
ret = -ENOMEM;
goto release;
}
/*
* If is_app_apu is false and apu_prefer_gtt is true, this is an APU with
* carveout < GTT. In that case VRAM allocations go to the GTT domain, so skip
* the VRAM check since the ttm_mem_limit check already covers this allocation.
*/
if (adev && xcp_id >= 0 && (!adev->apu_prefer_gtt || adev->gmc.is_app_apu)) {
uint64_t vram_available =
vram_size - reserved_for_pt - reserved_for_ras -
atomic64_read(&adev->vram_pin_size);
if (adev->kfd.vram_used[xcp_id] + vram_needed > vram_available) {
ret = -ENOMEM;
goto release;
}
}
/* Update memory accounting by decreasing available system
* memory, TTM memory and GPU memory as computed above
*/
@ -1626,11 +1642,15 @@ size_t amdgpu_amdkfd_get_available_memory(struct amdgpu_device *adev,
uint64_t vram_available, system_mem_available, ttm_mem_available;
spin_lock(&kfd_mem_limit.mem_limit_lock);
vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
- adev->kfd.vram_used_aligned[xcp_id]
- atomic64_read(&adev->vram_pin_size)
- reserved_for_pt
- reserved_for_ras;
if (adev->apu_prefer_gtt && !adev->gmc.is_app_apu)
vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
- adev->kfd.vram_used_aligned[xcp_id];
else
vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
- adev->kfd.vram_used_aligned[xcp_id]
- atomic64_read(&adev->vram_pin_size)
- reserved_for_pt
- reserved_for_ras;
if (adev->apu_prefer_gtt) {
system_mem_available = no_system_mem_limit ?


@ -5136,7 +5136,7 @@ int amdgpu_device_suspend(struct drm_device *dev, bool notify_clients)
adev->in_suspend = true;
if (amdgpu_sriov_vf(adev)) {
if (!adev->in_s0ix && !adev->in_runpm)
if (!adev->in_runpm)
amdgpu_amdkfd_suspend_process(adev);
amdgpu_virt_fini_data_exchange(adev);
r = amdgpu_virt_request_full_gpu(adev, false);
@ -5156,10 +5156,8 @@ int amdgpu_device_suspend(struct drm_device *dev, bool notify_clients)
amdgpu_device_ip_suspend_phase1(adev);
if (!adev->in_s0ix) {
amdgpu_amdkfd_suspend(adev, !amdgpu_sriov_vf(adev) && !adev->in_runpm);
amdgpu_userq_suspend(adev);
}
amdgpu_amdkfd_suspend(adev, !amdgpu_sriov_vf(adev) && !adev->in_runpm);
amdgpu_userq_suspend(adev);
r = amdgpu_device_evict_resources(adev);
if (r)
@ -5254,15 +5252,13 @@ int amdgpu_device_resume(struct drm_device *dev, bool notify_clients)
goto exit;
}
if (!adev->in_s0ix) {
r = amdgpu_amdkfd_resume(adev, !amdgpu_sriov_vf(adev) && !adev->in_runpm);
if (r)
goto exit;
r = amdgpu_amdkfd_resume(adev, !amdgpu_sriov_vf(adev) && !adev->in_runpm);
if (r)
goto exit;
r = amdgpu_userq_resume(adev);
if (r)
goto exit;
}
r = amdgpu_userq_resume(adev);
if (r)
goto exit;
r = amdgpu_device_ip_late_init(adev);
if (r)
@ -5275,7 +5271,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool notify_clients)
amdgpu_virt_init_data_exchange(adev);
amdgpu_virt_release_full_gpu(adev, true);
if (!adev->in_s0ix && !r && !adev->in_runpm)
if (!r && !adev->in_runpm)
r = amdgpu_amdkfd_resume_process(adev);
}


@ -421,8 +421,6 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring)
dma_fence_put(ring->vmid_wait);
ring->vmid_wait = NULL;
ring->me = 0;
ring->adev->rings[ring->idx] = NULL;
}
/**


@ -1654,6 +1654,21 @@ static int gfx_v11_0_sw_init(struct amdgpu_ip_block *ip_block)
}
}
break;
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 4):
adev->gfx.cleaner_shader_ptr = gfx_11_0_3_cleaner_shader_hex;
adev->gfx.cleaner_shader_size = sizeof(gfx_11_0_3_cleaner_shader_hex);
if (adev->gfx.pfp_fw_version >= 102 &&
adev->gfx.mec_fw_version >= 66 &&
adev->mes.fw_version[0] >= 128) {
adev->gfx.enable_cleaner_shader = true;
r = amdgpu_gfx_cleaner_shader_sw_init(adev, adev->gfx.cleaner_shader_size);
if (r) {
adev->gfx.enable_cleaner_shader = false;
dev_err(adev->dev, "Failed to initialize cleaner shader\n");
}
}
break;
case IP_VERSION(11, 5, 0):
case IP_VERSION(11, 5, 1):
adev->gfx.cleaner_shader_ptr = gfx_11_0_3_cleaner_shader_hex;


@ -29,6 +29,8 @@
#include "amdgpu.h"
#include "isp_v4_1_1.h"
MODULE_FIRMWARE("amdgpu/isp_4_1_1.bin");
#define ISP_PERFORMANCE_STATE_LOW 0
#define ISP_PERFORMANCE_STATE_HIGH 1


@ -149,12 +149,12 @@ static int psp_v11_0_wait_for_bootloader(struct psp_context *psp)
int ret;
int retry_loop;
for (retry_loop = 0; retry_loop < 10; retry_loop++) {
for (retry_loop = 0; retry_loop < 20; retry_loop++) {
/* Wait for the bootloader to signify that it is
ready by having bit 31 of C2PMSG_35 set to 1 */
ret = psp_wait_for(
psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
0x80000000, 0x80000000, PSP_WAITREG_NOVERBOSE);
0x80000000, 0x8000FFFF, PSP_WAITREG_NOVERBOSE);
if (ret == 0)
return 0;
@ -397,18 +397,6 @@ static int psp_v11_0_mode1_reset(struct psp_context *psp)
msleep(500);
offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_33);
ret = psp_wait_for(psp, offset, MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK,
0);
if (ret) {
DRM_INFO("psp mode 1 reset failed!\n");
return -EINVAL;
}
DRM_INFO("psp mode1 reset succeed \n");
return 0;
}
@ -665,7 +653,8 @@ static const struct psp_funcs psp_v11_0_funcs = {
.ring_get_wptr = psp_v11_0_ring_get_wptr,
.ring_set_wptr = psp_v11_0_ring_set_wptr,
.load_usbc_pd_fw = psp_v11_0_load_usbc_pd_fw,
.read_usbc_pd_fw = psp_v11_0_read_usbc_pd_fw
.read_usbc_pd_fw = psp_v11_0_read_usbc_pd_fw,
.wait_for_bootloader = psp_v11_0_wait_for_bootloader
};
void psp_v11_0_set_psp_funcs(struct psp_context *psp)

View File

@ -1888,15 +1888,19 @@ static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
struct amdgpu_job *job)
{
struct drm_gpu_scheduler **scheds;
/* The create msg must be in the first IB submitted */
if (atomic_read(&job->base.entity->fence_seq))
return -EINVAL;
struct dma_fence *fence;
/* if VCN0 is harvested, we can't support AV1 */
if (p->adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0)
return -EINVAL;
/* wait for all jobs to finish before switching to instance 0 */
fence = amdgpu_ctx_get_fence(p->ctx, job->base.entity, ~0ull);
if (fence) {
dma_fence_wait(fence, false);
dma_fence_put(fence);
}
scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_DEC]
[AMDGPU_RING_PRIO_DEFAULT].sched;
drm_sched_entity_modify_sched(job->base.entity, scheds, 1);


@ -1808,15 +1808,19 @@ static int vcn_v4_0_limit_sched(struct amdgpu_cs_parser *p,
struct amdgpu_job *job)
{
struct drm_gpu_scheduler **scheds;
/* The create msg must be in the first IB submitted */
if (atomic_read(&job->base.entity->fence_seq))
return -EINVAL;
struct dma_fence *fence;
/* if VCN0 is harvested, we can't support AV1 */
if (p->adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0)
return -EINVAL;
/* wait for all jobs to finish before switching to instance 0 */
fence = amdgpu_ctx_get_fence(p->ctx, job->base.entity, ~0ull);
if (fence) {
dma_fence_wait(fence, false);
dma_fence_put(fence);
}
scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_ENC]
[AMDGPU_RING_PRIO_0].sched;
drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
@ -1907,22 +1911,16 @@ static int vcn_v4_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
#define RADEON_VCN_ENGINE_TYPE_ENCODE (0x00000002)
#define RADEON_VCN_ENGINE_TYPE_DECODE (0x00000003)
#define RADEON_VCN_ENGINE_INFO (0x30000001)
#define RADEON_VCN_ENGINE_INFO_MAX_OFFSET 16
#define RENCODE_ENCODE_STANDARD_AV1 2
#define RENCODE_IB_PARAM_SESSION_INIT 0x00000003
#define RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET 64
/* return the offset in ib if id is found, -1 otherwise
* to speed up the searching we only search upto max_offset
*/
static int vcn_v4_0_enc_find_ib_param(struct amdgpu_ib *ib, uint32_t id, int max_offset)
/* return the offset in ib if id is found, -1 otherwise */
static int vcn_v4_0_enc_find_ib_param(struct amdgpu_ib *ib, uint32_t id, int start)
{
int i;
for (i = 0; i < ib->length_dw && i < max_offset && ib->ptr[i] >= 8; i += ib->ptr[i]/4) {
for (i = start; i < ib->length_dw && ib->ptr[i] >= 8; i += ib->ptr[i] / 4) {
if (ib->ptr[i + 1] == id)
return i;
}
@ -1937,33 +1935,29 @@ static int vcn_v4_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
struct amdgpu_vcn_decode_buffer *decode_buffer;
uint64_t addr;
uint32_t val;
int idx;
int idx = 0, sidx;
/* The first instance can decode anything */
if (!ring->me)
return 0;
/* RADEON_VCN_ENGINE_INFO is at the top of ib block */
idx = vcn_v4_0_enc_find_ib_param(ib, RADEON_VCN_ENGINE_INFO,
RADEON_VCN_ENGINE_INFO_MAX_OFFSET);
if (idx < 0) /* engine info is missing */
return 0;
while ((idx = vcn_v4_0_enc_find_ib_param(ib, RADEON_VCN_ENGINE_INFO, idx)) >= 0) {
val = amdgpu_ib_get_value(ib, idx + 2); /* RADEON_VCN_ENGINE_TYPE */
if (val == RADEON_VCN_ENGINE_TYPE_DECODE) {
decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[idx + 6];
val = amdgpu_ib_get_value(ib, idx + 2); /* RADEON_VCN_ENGINE_TYPE */
if (val == RADEON_VCN_ENGINE_TYPE_DECODE) {
decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[idx + 6];
if (!(decode_buffer->valid_buf_flag & 0x1))
return 0;
if (!(decode_buffer->valid_buf_flag & 0x1))
return 0;
addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 |
decode_buffer->msg_buffer_address_lo;
return vcn_v4_0_dec_msg(p, job, addr);
} else if (val == RADEON_VCN_ENGINE_TYPE_ENCODE) {
idx = vcn_v4_0_enc_find_ib_param(ib, RENCODE_IB_PARAM_SESSION_INIT,
RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET);
if (idx >= 0 && ib->ptr[idx + 2] == RENCODE_ENCODE_STANDARD_AV1)
return vcn_v4_0_limit_sched(p, job);
addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 |
decode_buffer->msg_buffer_address_lo;
return vcn_v4_0_dec_msg(p, job, addr);
} else if (val == RADEON_VCN_ENGINE_TYPE_ENCODE) {
sidx = vcn_v4_0_enc_find_ib_param(ib, RENCODE_IB_PARAM_SESSION_INIT, idx);
if (sidx >= 0 && ib->ptr[sidx + 2] == RENCODE_ENCODE_STANDARD_AV1)
return vcn_v4_0_limit_sched(p, job);
}
idx += ib->ptr[idx] / 4;
}
return 0;
}

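
The rewritten loop above walks the IB as a sequence of length-prefixed blocks: the dword at the cursor is the block size in bytes (minimum 8), the next dword is the block id, and the cursor advances by size/4 dwords. Below is a standalone sketch of that walk; the packet contents are made up and find_param() is a simplified stand-in for vcn_v4_0_enc_find_ib_param().

#include <stdint.h>
#include <stdio.h>

static int find_param(const uint32_t *ptr, int len_dw, uint32_t id, int start)
{
	for (int i = start; i < len_dw && ptr[i] >= 8; i += ptr[i] / 4)
		if (ptr[i + 1] == id)
			return i;
	return -1;
}

int main(void)
{
	/* two 16-byte blocks: ids 0x30000001 (engine info) and 0x00000003 */
	uint32_t ib[] = { 16, 0x30000001, 0, 0, 16, 0x00000003, 0, 0 };

	printf("%d\n", find_param(ib, 8, 0x00000003, 0));	/* prints 4 */
	return 0;
}
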

@ -1550,6 +1550,25 @@ int kgd2kfd_start_sched(struct kfd_dev *kfd, uint32_t node_id)
return ret;
}
int kgd2kfd_start_sched_all_nodes(struct kfd_dev *kfd)
{
struct kfd_node *node;
int i, r;
if (!kfd->init_complete)
return 0;
for (i = 0; i < kfd->num_nodes; i++) {
node = kfd->nodes[i];
r = node->dqm->ops.unhalt(node->dqm);
if (r) {
dev_err(kfd_device, "Error in starting scheduler\n");
return r;
}
}
return 0;
}
int kgd2kfd_stop_sched(struct kfd_dev *kfd, uint32_t node_id)
{
struct kfd_node *node;
@ -1567,6 +1586,23 @@ int kgd2kfd_stop_sched(struct kfd_dev *kfd, uint32_t node_id)
return node->dqm->ops.halt(node->dqm);
}
int kgd2kfd_stop_sched_all_nodes(struct kfd_dev *kfd)
{
struct kfd_node *node;
int i, r;
if (!kfd->init_complete)
return 0;
for (i = 0; i < kfd->num_nodes; i++) {
node = kfd->nodes[i];
r = node->dqm->ops.halt(node->dqm);
if (r)
return r;
}
return 0;
}
bool kgd2kfd_compute_active(struct kfd_dev *kfd, uint32_t node_id)
{
struct kfd_node *node;


@ -1587,7 +1587,8 @@ static int kfd_dev_create_p2p_links(void)
break;
if (!dev->gpu || !dev->gpu->adev ||
(dev->gpu->kfd->hive_id &&
dev->gpu->kfd->hive_id == new_dev->gpu->kfd->hive_id))
dev->gpu->kfd->hive_id == new_dev->gpu->kfd->hive_id &&
amdgpu_xgmi_get_is_sharing_enabled(dev->gpu->adev, new_dev->gpu->adev)))
goto next;
/* check if node(s) is/are peer accessible in one direction or bi-direction */


@ -2913,6 +2913,17 @@ static int dm_oem_i2c_hw_init(struct amdgpu_device *adev)
return 0;
}
static void dm_oem_i2c_hw_fini(struct amdgpu_device *adev)
{
struct amdgpu_display_manager *dm = &adev->dm;
if (dm->oem_i2c) {
i2c_del_adapter(&dm->oem_i2c->base);
kfree(dm->oem_i2c);
dm->oem_i2c = NULL;
}
}
/**
* dm_hw_init() - Initialize DC device
* @ip_block: Pointer to the amdgpu_ip_block for this hw instance.
@ -2963,7 +2974,7 @@ static int dm_hw_fini(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
kfree(adev->dm.oem_i2c);
dm_oem_i2c_hw_fini(adev);
amdgpu_dm_hpd_fini(adev);
@ -3127,25 +3138,6 @@ static void dm_destroy_cached_state(struct amdgpu_device *adev)
dm->cached_state = NULL;
}
static void dm_complete(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
dm_destroy_cached_state(adev);
}
static int dm_prepare_suspend(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
if (amdgpu_in_reset(adev))
return 0;
WARN_ON(adev->dm.cached_state);
return dm_cache_state(adev);
}
static int dm_suspend(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@ -3571,10 +3563,8 @@ static const struct amd_ip_funcs amdgpu_dm_funcs = {
.early_fini = amdgpu_dm_early_fini,
.hw_init = dm_hw_init,
.hw_fini = dm_hw_fini,
.prepare_suspend = dm_prepare_suspend,
.suspend = dm_suspend,
.resume = dm_resume,
.complete = dm_complete,
.is_idle = dm_is_idle,
.wait_for_idle = dm_wait_for_idle,
.check_soft_reset = dm_check_soft_reset,
@ -8727,7 +8717,16 @@ static int amdgpu_dm_encoder_init(struct drm_device *dev,
static void manage_dm_interrupts(struct amdgpu_device *adev,
struct amdgpu_crtc *acrtc,
struct dm_crtc_state *acrtc_state)
{
/*
* We cannot be sure that the frontend index maps to the same
* backend index - some even map to more than one.
* So we have to go through the CRTC to find the right IRQ.
*/
int irq_type = amdgpu_display_crtc_idx_to_irq_type(
adev,
acrtc->crtc_id);
struct drm_device *dev = adev_to_drm(adev);
struct drm_vblank_crtc_config config = {0};
struct dc_crtc_timing *timing;
int offdelay;
@ -8780,7 +8779,35 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
drm_crtc_vblank_on_config(&acrtc->base,
&config);
/* Allow RX6xxx, RX7700, RX7800 GPUs to call amdgpu_irq_get.*/
switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
case IP_VERSION(3, 0, 0):
case IP_VERSION(3, 0, 2):
case IP_VERSION(3, 0, 3):
case IP_VERSION(3, 2, 0):
if (amdgpu_irq_get(adev, &adev->pageflip_irq, irq_type))
drm_err(dev, "DM_IRQ: Cannot get pageflip irq!\n");
#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
if (amdgpu_irq_get(adev, &adev->vline0_irq, irq_type))
drm_err(dev, "DM_IRQ: Cannot get vline0 irq!\n");
#endif
}
} else {
/* Allow RX6xxx, RX7700, RX7800 GPUs to call amdgpu_irq_put.*/
switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
case IP_VERSION(3, 0, 0):
case IP_VERSION(3, 0, 2):
case IP_VERSION(3, 0, 3):
case IP_VERSION(3, 2, 0):
#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
if (amdgpu_irq_put(adev, &adev->vline0_irq, irq_type))
drm_err(dev, "DM_IRQ: Cannot put vline0 irq!\n");
#endif
if (amdgpu_irq_put(adev, &adev->pageflip_irq, irq_type))
drm_err(dev, "DM_IRQ: Cannot put pageflip irq!\n");
}
drm_crtc_vblank_off(&acrtc->base);
}
}


@ -809,6 +809,7 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
drm_dp_aux_init(&aconnector->dm_dp_aux.aux);
drm_dp_cec_register_connector(&aconnector->dm_dp_aux.aux,
&aconnector->base);
drm_dp_dpcd_set_probe(&aconnector->dm_dp_aux.aux, false);
if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_eDP)
return;


@ -1145,6 +1145,7 @@ struct dc_debug_options {
bool enable_hblank_borrow;
bool force_subvp_df_throttle;
uint32_t acpi_transition_bitmasks[MAX_PIPES];
bool enable_pg_cntl_debug_logs;
};


@ -133,30 +133,34 @@ enum dsc_clk_source {
};
static void dccg35_set_dsc_clk_rcg(struct dccg *dccg, int inst, bool enable)
static void dccg35_set_dsc_clk_rcg(struct dccg *dccg, int inst, bool allow_rcg)
{
struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dsc && enable)
if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dsc && allow_rcg)
return;
switch (inst) {
case 0:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, enable ? 0 : 1);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
break;
case 1:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, enable ? 0 : 1);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
break;
case 2:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, enable ? 0 : 1);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
break;
case 3:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, enable ? 0 : 1);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
break;
default:
BREAK_TO_DEBUGGER();
return;
}
/* Wait for clock to ramp */
if (!allow_rcg)
udelay(10);
}
static void dccg35_set_symclk32_se_rcg(
@ -385,35 +389,34 @@ static void dccg35_set_dtbclk_p_rcg(struct dccg *dccg, int inst, bool enable)
}
}
static void dccg35_set_dppclk_rcg(struct dccg *dccg,
int inst, bool enable)
static void dccg35_set_dppclk_rcg(struct dccg *dccg, int inst, bool allow_rcg)
{
struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && enable)
if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && allow_rcg)
return;
switch (inst) {
case 0:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, enable ? 0 : 1);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
break;
case 1:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, enable ? 0 : 1);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
break;
case 2:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, enable ? 0 : 1);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
break;
case 3:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, enable ? 0 : 1);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
break;
default:
BREAK_TO_DEBUGGER();
break;
}
//DC_LOG_DEBUG("%s: inst(%d) DPPCLK rcg_disable: %d\n", __func__, inst, enable ? 0 : 1);
/* Wait for clock to ramp */
if (!allow_rcg)
udelay(10);
}
static void dccg35_set_dpstreamclk_rcg(
@ -1177,32 +1180,34 @@ static void dccg35_update_dpp_dto(struct dccg *dccg, int dpp_inst,
}
static void dccg35_set_dppclk_root_clock_gating(struct dccg *dccg,
uint32_t dpp_inst, uint32_t enable)
uint32_t dpp_inst, uint32_t disallow_rcg)
{
struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp)
if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && !disallow_rcg)
return;
switch (dpp_inst) {
case 0:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, enable);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, disallow_rcg);
break;
case 1:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, enable);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, disallow_rcg);
break;
case 2:
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, enable);
REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, disallow_rcg);
break;
case 3:
-REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, enable);
+REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, disallow_rcg);
break;
default:
break;
}
//DC_LOG_DEBUG("%s: dpp_inst(%d) rcg: %d\n", __func__, dpp_inst, enable);
/* Wait for clock to ramp */
if (disallow_rcg)
udelay(10);
}
static void dccg35_get_pixel_rate_div(
@@ -1782,8 +1787,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
//Disable DTO
switch (inst) {
case 0:
-if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
-REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, 1);
+REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, 1);
REG_UPDATE_2(DSCCLK0_DTO_PARAM,
DSCCLK0_DTO_PHASE, 0,
@@ -1791,8 +1795,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
REG_UPDATE(DSCCLK_DTO_CTRL, DSCCLK0_EN, 1);
break;
case 1:
-if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
-REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, 1);
+REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, 1);
REG_UPDATE_2(DSCCLK1_DTO_PARAM,
DSCCLK1_DTO_PHASE, 0,
@@ -1800,8 +1803,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
REG_UPDATE(DSCCLK_DTO_CTRL, DSCCLK1_EN, 1);
break;
case 2:
-if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
-REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, 1);
+REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, 1);
REG_UPDATE_2(DSCCLK2_DTO_PARAM,
DSCCLK2_DTO_PHASE, 0,
@@ -1809,8 +1811,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
REG_UPDATE(DSCCLK_DTO_CTRL, DSCCLK2_EN, 1);
break;
case 3:
-if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
-REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, 1);
+REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, 1);
REG_UPDATE_2(DSCCLK3_DTO_PARAM,
DSCCLK3_DTO_PHASE, 0,
@@ -1821,6 +1822,9 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
BREAK_TO_DEBUGGER();
return;
}
/* Wait for clock to ramp */
udelay(10);
}
static void dccg35_disable_dscclk(struct dccg *dccg,
@@ -1864,6 +1868,9 @@ static void dccg35_disable_dscclk(struct dccg *dccg,
default:
return;
}
/* Wait for clock ramp */
udelay(10);
}
static void dccg35_enable_symclk_se(struct dccg *dccg, uint32_t stream_enc_inst, uint32_t link_enc_inst)
@@ -2349,10 +2356,7 @@ static void dccg35_disable_symclk_se_cb(
void dccg35_root_gate_disable_control(struct dccg *dccg, uint32_t pipe_idx, uint32_t disable_clock_gating)
{
-if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp) {
-dccg35_set_dppclk_root_clock_gating(dccg, pipe_idx, disable_clock_gating);
-}
+dccg35_set_dppclk_root_clock_gating(dccg, pipe_idx, disable_clock_gating);
}
static const struct dccg_funcs dccg35_funcs_new = {


@@ -955,7 +955,7 @@ enum dc_status dcn20_enable_stream_timing(
return DC_ERROR_UNEXPECTED;
}
-fsleep(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
+udelay(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
params.vertical_total_min = stream->adjust.v_total_min;
params.vertical_total_max = stream->adjust.v_total_max;


@@ -113,6 +113,14 @@ static void enable_memory_low_power(struct dc *dc)
}
#endif
static void print_pg_status(struct dc *dc, const char *debug_func, const char *debug_log)
{
if (dc->debug.enable_pg_cntl_debug_logs && dc->res_pool->pg_cntl) {
if (dc->res_pool->pg_cntl->funcs->print_pg_status)
dc->res_pool->pg_cntl->funcs->print_pg_status(dc->res_pool->pg_cntl, debug_func, debug_log);
}
}
void dcn35_set_dmu_fgcg(struct dce_hwseq *hws, bool enable)
{
REG_UPDATE_3(DMU_CLK_CNTL,
@@ -137,6 +145,8 @@ void dcn35_init_hw(struct dc *dc)
uint32_t user_level = MAX_BACKLIGHT_LEVEL;
int i;
print_pg_status(dc, __func__, ": start");
if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
@@ -200,10 +210,7 @@ void dcn35_init_hw(struct dc *dc)
/* we want to turn off all dp displays before doing detection */
dc->link_srv->blank_all_dp_displays(dc);
/*
if (hws->funcs.enable_power_gating_plane)
hws->funcs.enable_power_gating_plane(dc->hwseq, true);
*/
if (res_pool->hubbub && res_pool->hubbub->funcs->dchubbub_init)
res_pool->hubbub->funcs->dchubbub_init(dc->res_pool->hubbub);
/* If taking control over from VBIOS, we may want to optimize our first
@@ -236,6 +243,8 @@ void dcn35_init_hw(struct dc *dc)
}
hws->funcs.init_pipes(dc, dc->current_state);
print_pg_status(dc, __func__, ": after init_pipes");
if (dc->res_pool->hubbub->funcs->allow_self_refresh_control &&
!dc->res_pool->hubbub->ctx->dc->debug.disable_stutter)
dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
@@ -312,6 +321,7 @@ void dcn35_init_hw(struct dc *dc)
if (dc->res_pool->pg_cntl->funcs->init_pg_status)
dc->res_pool->pg_cntl->funcs->init_pg_status(dc->res_pool->pg_cntl);
}
print_pg_status(dc, __func__, ": after init_pg_status");
}
static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
@@ -500,97 +510,6 @@ void dcn35_physymclk_root_clock_control(struct dce_hwseq *hws, unsigned int phy_
}
}
void dcn35_dsc_pg_control(
struct dce_hwseq *hws,
unsigned int dsc_inst,
bool power_on)
{
uint32_t power_gate = power_on ? 0 : 1;
uint32_t pwr_status = power_on ? 0 : 2;
uint32_t org_ip_request_cntl = 0;
if (hws->ctx->dc->debug.disable_dsc_power_gate)
return;
if (hws->ctx->dc->debug.ignore_pg)
return;
REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
if (org_ip_request_cntl == 0)
REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1);
switch (dsc_inst) {
case 0: /* DSC0 */
REG_UPDATE(DOMAIN16_PG_CONFIG,
DOMAIN_POWER_GATE, power_gate);
REG_WAIT(DOMAIN16_PG_STATUS,
DOMAIN_PGFSM_PWR_STATUS, pwr_status,
1, 1000);
break;
case 1: /* DSC1 */
REG_UPDATE(DOMAIN17_PG_CONFIG,
DOMAIN_POWER_GATE, power_gate);
REG_WAIT(DOMAIN17_PG_STATUS,
DOMAIN_PGFSM_PWR_STATUS, pwr_status,
1, 1000);
break;
case 2: /* DSC2 */
REG_UPDATE(DOMAIN18_PG_CONFIG,
DOMAIN_POWER_GATE, power_gate);
REG_WAIT(DOMAIN18_PG_STATUS,
DOMAIN_PGFSM_PWR_STATUS, pwr_status,
1, 1000);
break;
case 3: /* DSC3 */
REG_UPDATE(DOMAIN19_PG_CONFIG,
DOMAIN_POWER_GATE, power_gate);
REG_WAIT(DOMAIN19_PG_STATUS,
DOMAIN_PGFSM_PWR_STATUS, pwr_status,
1, 1000);
break;
default:
BREAK_TO_DEBUGGER();
break;
}
if (org_ip_request_cntl == 0)
REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 0);
}
void dcn35_enable_power_gating_plane(struct dce_hwseq *hws, bool enable)
{
bool force_on = true; /* disable power gating */
uint32_t org_ip_request_cntl = 0;
if (hws->ctx->dc->debug.disable_hubp_power_gate)
return;
if (hws->ctx->dc->debug.ignore_pg)
return;
REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
if (org_ip_request_cntl == 0)
REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1);
/* DCHUBP0/1/2/3/4/5 */
REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
REG_UPDATE(DOMAIN2_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
/* DPP0/1/2/3/4/5 */
REG_UPDATE(DOMAIN1_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
REG_UPDATE(DOMAIN3_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
force_on = true; /* disable power gating */
if (enable && !hws->ctx->dc->debug.disable_dsc_power_gate)
force_on = false;
/* DCS0/1/2/3/4 */
REG_UPDATE(DOMAIN16_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
REG_UPDATE(DOMAIN17_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
REG_UPDATE(DOMAIN18_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
REG_UPDATE(DOMAIN19_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
}
/* In headless boot cases, DIG may be turned
* on which causes HW/SW discrepancies.
* To avoid this, power down hardware on boot
@@ -1453,6 +1372,8 @@ void dcn35_prepare_bandwidth(
}
dcn20_prepare_bandwidth(dc, context);
print_pg_status(dc, __func__, ": after rcg and power up");
}
void dcn35_optimize_bandwidth(
@@ -1461,6 +1382,8 @@ void dcn35_optimize_bandwidth(
{
struct pg_block_update pg_update_state;
print_pg_status(dc, __func__, ": before rcg and power up");
dcn20_optimize_bandwidth(dc, context);
if (dc->hwss.calc_blocks_to_gate) {
@@ -1472,6 +1395,8 @@ void dcn35_optimize_bandwidth(
if (dc->hwss.root_clock_control)
dc->hwss.root_clock_control(dc, &pg_update_state, false);
}
print_pg_status(dc, __func__, ": after rcg and power up");
}
void dcn35_set_drr(struct pipe_ctx **pipe_ctx,


@@ -115,7 +115,6 @@ static const struct hw_sequencer_funcs dcn35_funcs = {
.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
.update_visual_confirm_color = dcn10_update_visual_confirm_color,
.apply_idle_power_optimizations = dcn35_apply_idle_power_optimizations,
.update_dsc_pg = dcn32_update_dsc_pg,
.calc_blocks_to_gate = dcn35_calc_blocks_to_gate,
.calc_blocks_to_ungate = dcn35_calc_blocks_to_ungate,
.hw_block_power_up = dcn35_hw_block_power_up,
@@ -150,7 +149,6 @@ static const struct hwseq_private_funcs dcn35_private_funcs = {
.plane_atomic_disable = dcn35_plane_atomic_disable,
//.plane_atomic_disable = dcn20_plane_atomic_disable,/*todo*/
//.hubp_pg_control = dcn35_hubp_pg_control,
.enable_power_gating_plane = dcn35_enable_power_gating_plane,
.dpp_root_clock_control = dcn35_dpp_root_clock_control,
.dpstream_root_clock_control = dcn35_dpstream_root_clock_control,
.physymclk_root_clock_control = dcn35_physymclk_root_clock_control,
@@ -165,7 +163,6 @@ static const struct hwseq_private_funcs dcn35_private_funcs = {
.calculate_dccg_k1_k2_values = dcn32_calculate_dccg_k1_k2_values,
.resync_fifo_dccg_dio = dcn314_resync_fifo_dccg_dio,
.is_dp_dig_pixel_rate_div_policy = dcn35_is_dp_dig_pixel_rate_div_policy,
.dsc_pg_control = dcn35_dsc_pg_control,
.dsc_pg_status = dcn32_dsc_pg_status,
.enable_plane = dcn35_enable_plane,
.wait_for_pipe_update_if_needed = dcn10_wait_for_pipe_update_if_needed,


@@ -114,7 +114,6 @@ static const struct hw_sequencer_funcs dcn351_funcs = {
.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
.update_visual_confirm_color = dcn10_update_visual_confirm_color,
.apply_idle_power_optimizations = dcn35_apply_idle_power_optimizations,
.update_dsc_pg = dcn32_update_dsc_pg,
.calc_blocks_to_gate = dcn351_calc_blocks_to_gate,
.calc_blocks_to_ungate = dcn351_calc_blocks_to_ungate,
.hw_block_power_up = dcn351_hw_block_power_up,
@@ -145,7 +144,6 @@ static const struct hwseq_private_funcs dcn351_private_funcs = {
.plane_atomic_disable = dcn35_plane_atomic_disable,
//.plane_atomic_disable = dcn20_plane_atomic_disable,/*todo*/
//.hubp_pg_control = dcn35_hubp_pg_control,
.enable_power_gating_plane = dcn35_enable_power_gating_plane,
.dpp_root_clock_control = dcn35_dpp_root_clock_control,
.dpstream_root_clock_control = dcn35_dpstream_root_clock_control,
.physymclk_root_clock_control = dcn35_physymclk_root_clock_control,
@@ -159,7 +157,6 @@ static const struct hwseq_private_funcs dcn351_private_funcs = {
.setup_hpo_hw_control = dcn35_setup_hpo_hw_control,
.calculate_dccg_k1_k2_values = dcn32_calculate_dccg_k1_k2_values,
.is_dp_dig_pixel_rate_div_policy = dcn35_is_dp_dig_pixel_rate_div_policy,
.dsc_pg_control = dcn35_dsc_pg_control,
.dsc_pg_status = dcn32_dsc_pg_status,
.enable_plane = dcn35_enable_plane,
.wait_for_pipe_update_if_needed = dcn10_wait_for_pipe_update_if_needed,


@@ -49,6 +49,7 @@ struct pg_cntl_funcs {
void (*mem_pg_control)(struct pg_cntl *pg_cntl, bool power_on);
void (*dio_pg_control)(struct pg_cntl *pg_cntl, bool power_on);
void (*init_pg_status)(struct pg_cntl *pg_cntl);
void (*print_pg_status)(struct pg_cntl *pg_cntl, const char *debug_func, const char *debug_log);
};
#endif //__DC_PG_CNTL_H__


@@ -79,16 +79,12 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
uint32_t power_gate = power_on ? 0 : 1;
uint32_t pwr_status = power_on ? 0 : 2;
uint32_t org_ip_request_cntl = 0;
-bool block_enabled;
+bool block_enabled = false;
+bool skip_pg = pg_cntl->ctx->dc->debug.ignore_pg ||
+pg_cntl->ctx->dc->debug.disable_dsc_power_gate ||
+pg_cntl->ctx->dc->idle_optimizations_allowed;
-/*need to enable dscclk regardless DSC_PG*/
-if (pg_cntl->ctx->dc->res_pool->dccg->funcs->enable_dsc && power_on)
-pg_cntl->ctx->dc->res_pool->dccg->funcs->enable_dsc(
-pg_cntl->ctx->dc->res_pool->dccg, dsc_inst);
-if (pg_cntl->ctx->dc->debug.ignore_pg ||
-pg_cntl->ctx->dc->debug.disable_dsc_power_gate ||
-pg_cntl->ctx->dc->idle_optimizations_allowed)
+if (skip_pg && !power_on)
return;
block_enabled = pg_cntl35_dsc_pg_status(pg_cntl, dsc_inst);
@@ -111,7 +107,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
REG_WAIT(DOMAIN16_PG_STATUS,
DOMAIN_PGFSM_PWR_STATUS, pwr_status,
-1, 1000);
+1, 10000);
break;
case 1: /* DSC1 */
REG_UPDATE(DOMAIN17_PG_CONFIG,
@@ -119,7 +115,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
REG_WAIT(DOMAIN17_PG_STATUS,
DOMAIN_PGFSM_PWR_STATUS, pwr_status,
-1, 1000);
+1, 10000);
break;
case 2: /* DSC2 */
REG_UPDATE(DOMAIN18_PG_CONFIG,
@@ -127,7 +123,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
REG_WAIT(DOMAIN18_PG_STATUS,
DOMAIN_PGFSM_PWR_STATUS, pwr_status,
-1, 1000);
+1, 10000);
break;
case 3: /* DSC3 */
REG_UPDATE(DOMAIN19_PG_CONFIG,
@@ -135,7 +131,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
REG_WAIT(DOMAIN19_PG_STATUS,
DOMAIN_PGFSM_PWR_STATUS, pwr_status,
-1, 1000);
+1, 10000);
break;
default:
BREAK_TO_DEBUGGER();
@@ -144,12 +140,6 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
if (dsc_inst < MAX_PIPES)
pg_cntl->pg_pipe_res_enable[PG_DSC][dsc_inst] = power_on;
if (pg_cntl->ctx->dc->res_pool->dccg->funcs->disable_dsc && !power_on) {
/*this is to disable dscclk*/
pg_cntl->ctx->dc->res_pool->dccg->funcs->disable_dsc(
pg_cntl->ctx->dc->res_pool->dccg, dsc_inst);
}
}
static bool pg_cntl35_hubp_dpp_pg_status(struct pg_cntl *pg_cntl, unsigned int hubp_dpp_inst)
@@ -189,11 +179,12 @@ void pg_cntl35_hubp_dpp_pg_control(struct pg_cntl *pg_cntl, unsigned int hubp_dp
uint32_t pwr_status = power_on ? 0 : 2;
uint32_t org_ip_request_cntl;
bool block_enabled;
+bool skip_pg = pg_cntl->ctx->dc->debug.ignore_pg ||
+pg_cntl->ctx->dc->debug.disable_hubp_power_gate ||
+pg_cntl->ctx->dc->debug.disable_dpp_power_gate ||
+pg_cntl->ctx->dc->idle_optimizations_allowed;
-if (pg_cntl->ctx->dc->debug.ignore_pg ||
-pg_cntl->ctx->dc->debug.disable_hubp_power_gate ||
-pg_cntl->ctx->dc->debug.disable_dpp_power_gate ||
-pg_cntl->ctx->dc->idle_optimizations_allowed)
+if (skip_pg && !power_on)
return;
block_enabled = pg_cntl35_hubp_dpp_pg_status(pg_cntl, hubp_dpp_inst);
@@ -213,22 +204,22 @@ void pg_cntl35_hubp_dpp_pg_control(struct pg_cntl *pg_cntl, unsigned int hubp_dp
case 0:
/* DPP0 & HUBP0 */
REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
-REG_WAIT(DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
+REG_WAIT(DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
break;
case 1:
/* DPP1 & HUBP1 */
REG_UPDATE(DOMAIN1_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
-REG_WAIT(DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
+REG_WAIT(DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
break;
case 2:
/* DPP2 & HUBP2 */
REG_UPDATE(DOMAIN2_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
-REG_WAIT(DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
+REG_WAIT(DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
break;
case 3:
/* DPP3 & HUBP3 */
REG_UPDATE(DOMAIN3_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
-REG_WAIT(DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
+REG_WAIT(DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
break;
default:
BREAK_TO_DEBUGGER();
@@ -501,6 +492,36 @@ void pg_cntl35_init_pg_status(struct pg_cntl *pg_cntl)
pg_cntl->pg_res_enable[PG_DWB] = block_enabled;
}
static void pg_cntl35_print_pg_status(struct pg_cntl *pg_cntl, const char *debug_func, const char *debug_log)
{
int i = 0;
bool block_enabled = false;
DC_LOG_DEBUG("%s: %s", debug_func, debug_log);
DC_LOG_DEBUG("PG_CNTL status:\n");
block_enabled = pg_cntl35_io_clk_status(pg_cntl);
DC_LOG_DEBUG("ONO0=%d (DCCG, DIO, DCIO)\n", block_enabled ? 1 : 0);
block_enabled = pg_cntl35_mem_status(pg_cntl);
DC_LOG_DEBUG("ONO1=%d (DCHUBBUB, DCHVM, DCHUBBUBMEM)\n", block_enabled ? 1 : 0);
block_enabled = pg_cntl35_plane_otg_status(pg_cntl);
DC_LOG_DEBUG("ONO2=%d (MPC, OPP, OPTC, DWB)\n", block_enabled ? 1 : 0);
block_enabled = pg_cntl35_hpo_pg_status(pg_cntl);
DC_LOG_DEBUG("ONO3=%d (HPO)\n", block_enabled ? 1 : 0);
for (i = 0; i < pg_cntl->ctx->dc->res_pool->pipe_count; i++) {
block_enabled = pg_cntl35_hubp_dpp_pg_status(pg_cntl, i);
DC_LOG_DEBUG("ONO%d=%d (DCHUBP%d, DPP%d)\n", 4 + i * 2, block_enabled ? 1 : 0, i, i);
block_enabled = pg_cntl35_dsc_pg_status(pg_cntl, i);
DC_LOG_DEBUG("ONO%d=%d (DSC%d)\n", 5 + i * 2, block_enabled ? 1 : 0, i);
}
}
static const struct pg_cntl_funcs pg_cntl35_funcs = {
.init_pg_status = pg_cntl35_init_pg_status,
.dsc_pg_control = pg_cntl35_dsc_pg_control,
@@ -511,7 +532,8 @@ static const struct pg_cntl_funcs pg_cntl35_funcs = {
.mpcc_pg_control = pg_cntl35_mpcc_pg_control,
.opp_pg_control = pg_cntl35_opp_pg_control,
.optc_pg_control = pg_cntl35_optc_pg_control,
-.dwb_pg_control = pg_cntl35_dwb_pg_control
+.dwb_pg_control = pg_cntl35_dwb_pg_control,
+.print_pg_status = pg_cntl35_print_pg_status
};
struct pg_cntl *pg_cntl35_create(


@@ -2236,7 +2236,7 @@ static int smu_resume(struct amdgpu_ip_block *ip_block)
return ret;
}
-if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
+if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL && smu->od_enabled) {
ret = smu_od_edit_dpm_table(smu, PP_OD_COMMIT_DPM_TABLE, NULL, 0);
if (ret)
return ret;


@@ -2677,7 +2677,7 @@ static int anx7625_i2c_probe(struct i2c_client *client)
ret = devm_request_threaded_irq(dev, platform->pdata.intp_irq,
NULL, anx7625_intr_hpd_isr,
IRQF_TRIGGER_FALLING |
-IRQF_ONESHOT,
+IRQF_ONESHOT | IRQF_NO_AUTOEN,
"anx7625-intp", platform);
if (ret) {
DRM_DEV_ERROR(dev, "fail to request irq\n");
@@ -2746,8 +2746,10 @@ static int anx7625_i2c_probe(struct i2c_client *client)
}
/* Add work function */
-if (platform->pdata.intp_irq)
+if (platform->pdata.intp_irq) {
+enable_irq(platform->pdata.intp_irq);
queue_work(platform->workqueue, &platform->work);
+}
if (platform->pdata.audio_en)
anx7625_register_audio(dev, platform);

Some files were not shown because too many files have changed in this diff.