Merge tag 'docs-6.19' of git://git.lwn.net/linux
Pull documentation updates from Jonathan Corbet:
"This has been another busy cycle for documentation, with a lot of
build-system thrashing. That work should slow down from here on out.
- The various scripts and tools for documentation were spread out in
several directories; now they are (almost) all coalesced under
tools/docs/. The holdout is the kernel-doc script, which cannot be
easily moved without some further thought.
- As the amount of Python code increases, we are accumulating modules
that are imported by multiple programs. These modules have been
pulled together under tools/lib/python/ -- at least, for
documentation-related programs. There is other Python code in the
tree that might eventually want to move toward this organization.
- The Perl kernel-doc.pl script has been removed. It is no longer
used by default, and nobody has missed it, least of all anybody who
actually had to look at it.
 - The docs build was controlled by a complex mess of makefiles that
few dared to touch. Mauro has moved that logic into a new program
(tools/docs/sphinx-build-wrapper) that, with any luck at all, will
be far easier to understand and maintain.
- The get_feat.pl program, used to access information under
Documentation/features/, has been rewritten in Python, bringing an
end to the use of Perl in the docs subsystem.
- The top-level README file has been reorganized into a more
reader-friendly presentation.
- A lot of Chinese translation additions
- Typo fixes and documentation updates as usual"
* tag 'docs-6.19' of git://git.lwn.net/linux: (164 commits)
docs: makefile: move rustdoc check to the build wrapper
README: restructure with role-based documentation and guidelines
docs: kdoc: various fixes for grammar, spelling, punctuation
docs: kdoc_parser: use '@' for Excess enum value
docs: submitting-patches: Clarify that removal of Acks needs explanation too
docs: kdoc_parser: add data/function attributes to ignore
docs: MAINTAINERS: update Mauro's files/paths
docs/zh_CN: Add wd719x.rst translation
docs/zh_CN: Add libsas.rst translation
get_feat.pl: remove it, as it got replaced by get_feat.py
Documentation/sphinx/kernel_feat.py: use class directly
tools/docs/get_feat.py: convert get_feat.pl to Python
Documentation/admin-guide: fix typo and comment in cscope example
docs/zh_CN: Add data-integrity.rst translation
docs/zh_CN: Add blk-mq.rst translation
docs/zh_CN: Add block/index.rst translation
docs/zh_CN: Update the Chinese translation of kbuild.rst
docs: bring some order to our Python module hierarchy
docs: Move the python libraries to tools/lib/python
Documentation/kernel-parameters: Move the kernel build options
...
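The tools/lib/python consolidation noted in the log above amounts to putting one shared directory on Python's import path instead of several per-script ones. A minimal sketch of the idea (the path literal is illustrative, taken from the tree layout named in the log, and this is not code from the tree itself):

```python
import sys

# Before the move, each docs program extended sys.path with several
# tool directories; after it, one shared location holds the common
# modules, so a single path entry suffices.
shared = "tools/lib/python"
if shared not in sys.path:
    sys.path.insert(0, shared)

print(sys.path[0])  # tools/lib/python
```

The .pylintrc change in the diff below this commit applies the same idea to pylint's init-hook.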
commit f96163865a
@@ -1,2 +1,2 @@
 [MASTER]
-init-hook='import sys; sys.path += ["scripts/lib/kdoc", "scripts/lib/abi", "tools/docs/lib"]'
+init-hook='import sys; sys.path += ["tools/lib/python"]'
@@ -59,6 +59,8 @@ Description:	Module taint flags:
 			F  force-loaded module
 			C  staging driver module
 			E  unsigned module
+			K  livepatch module
+			N  in-kernel test module
 			== =====================

What:		/sys/module/grant_table/parameters/free_per_iteration
@@ -19,7 +19,7 @@ config WARN_ABI_ERRORS
	  described at Documentation/ABI/README. Yet, as they're manually
	  written, it would be possible that some of those files would
	  have errors that would break them for being parsed by
-	  scripts/get_abi.pl. Add a check to verify them.
+	  tools/docs/get_abi.py. Add a check to verify them.

	  If unsure, select 'N'.
@@ -8,12 +8,12 @@ subdir- := devicetree/bindings
 ifneq ($(MAKECMDGOALS),cleandocs)
 # Check for broken documentation file references
 ifeq ($(CONFIG_WARN_MISSING_DOCUMENTS),y)
-$(shell $(srctree)/scripts/documentation-file-ref-check --warn)
+$(shell $(srctree)/tools/docs/documentation-file-ref-check --warn)
 endif

 # Check for broken ABI files
 ifeq ($(CONFIG_WARN_ABI_ERRORS),y)
-$(shell $(srctree)/scripts/get_abi.py --dir $(srctree)/Documentation/ABI validate)
+$(shell $(srctree)/tools/docs/get_abi.py --dir $(srctree)/Documentation/ABI validate)
 endif
 endif
@@ -23,21 +23,22 @@ SPHINXOPTS =
 SPHINXDIRS = .
 DOCS_THEME =
 DOCS_CSS =
-_SPHINXDIRS = $(sort $(patsubst $(srctree)/Documentation/%/index.rst,%,$(wildcard $(srctree)/Documentation/*/index.rst)))
-SPHINX_CONF = conf.py
+RUSTDOC =
 PAPER =
 BUILDDIR = $(obj)/output
 PDFLATEX = xelatex
 LATEXOPTS = -interaction=batchmode -no-shell-escape

+PYTHONPYCACHEPREFIX ?= $(abspath $(BUILDDIR)/__pycache__)
+
+# Wrapper for sphinx-build
+
+BUILD_WRAPPER = $(srctree)/tools/docs/sphinx-build-wrapper
+
+# For denylisting "variable font" files
+# Can be overridden by setting as an env variable
+FONTS_CONF_DENY_VF ?= $(HOME)/deny-vf
+
 ifeq ($(findstring 1, $(KBUILD_VERBOSE)),)
 SPHINXOPTS += "-q"
 endif

 # User-friendly check for sphinx-build
 HAVE_SPHINX := $(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi)
@@ -46,141 +47,46 @@ ifeq ($(HAVE_SPHINX),0)
 .DEFAULT:
	$(warning The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed and in PATH, or set the SPHINXBUILD make variable to point to the full path of the '$(SPHINXBUILD)' executable.)
	@echo
-	@$(srctree)/scripts/sphinx-pre-install
+	@$(srctree)/tools/docs/sphinx-pre-install
	@echo "  SKIP    Sphinx $@ target."

 else # HAVE_SPHINX

-# User-friendly check for pdflatex and latexmk
-HAVE_PDFLATEX := $(shell if which $(PDFLATEX) >/dev/null 2>&1; then echo 1; else echo 0; fi)
-HAVE_LATEXMK := $(shell if which latexmk >/dev/null 2>&1; then echo 1; else echo 0; fi)
+# Common documentation targets
+htmldocs mandocs infodocs texinfodocs latexdocs epubdocs xmldocs pdfdocs linkcheckdocs:
+	$(Q)PYTHONPYCACHEPREFIX="$(PYTHONPYCACHEPREFIX)" \
+		$(srctree)/tools/docs/sphinx-pre-install --version-check
+	+$(Q)PYTHONPYCACHEPREFIX="$(PYTHONPYCACHEPREFIX)" \
+		$(PYTHON3) $(BUILD_WRAPPER) $@ \
+		--sphinxdirs="$(SPHINXDIRS)" $(RUSTDOC) \
+		--builddir="$(BUILDDIR)" --deny-vf=$(FONTS_CONF_DENY_VF) \
+		--theme=$(DOCS_THEME) --css=$(DOCS_CSS) --paper=$(PAPER)

-ifeq ($(HAVE_LATEXMK),1)
-	PDFLATEX := latexmk -$(PDFLATEX)
-endif #HAVE_LATEXMK
-
-# Internal variables.
-PAPEROPT_a4 = -D latex_elements.papersize=a4paper
-PAPEROPT_letter = -D latex_elements.papersize=letterpaper
-ALLSPHINXOPTS = -D kerneldoc_srctree=$(srctree) -D kerneldoc_bin=$(KERNELDOC)
-ALLSPHINXOPTS += $(PAPEROPT_$(PAPER)) $(SPHINXOPTS)
-ifneq ($(wildcard $(srctree)/.config),)
-ifeq ($(CONFIG_RUST),y)
-	# Let Sphinx know we will include rustdoc
-	ALLSPHINXOPTS += -t rustdoc
-endif
-endif
-# the i18n builder cannot share the environment and doctrees with the others
-I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-
-# commands; the 'cmd' from scripts/Kbuild.include is not *loopable*
-loop_cmd = $(echo-cmd) $(cmd_$(1)) || exit;
-
-# $2 sphinx builder e.g. "html"
-# $3 name of the build subfolder / e.g. "userspace-api/media", used as:
-#    * dest folder relative to $(BUILDDIR) and
-#    * cache folder relative to $(BUILDDIR)/.doctrees
-# $4 dest subfolder e.g. "man" for man pages at userspace-api/media/man
-# $5 reST source folder relative to $(src),
-#    e.g. "userspace-api/media" for the linux-tv book-set at ./Documentation/userspace-api/media
-
-PYTHONPYCACHEPREFIX ?= $(abspath $(BUILDDIR)/__pycache__)
-
-quiet_cmd_sphinx = SPHINX  $@ --> file://$(abspath $(BUILDDIR)/$3/$4)
-      cmd_sphinx = \
-	PYTHONPYCACHEPREFIX="$(PYTHONPYCACHEPREFIX)" \
-	BUILDDIR=$(abspath $(BUILDDIR)) SPHINX_CONF=$(abspath $(src)/$5/$(SPHINX_CONF)) \
-	$(PYTHON3) $(srctree)/scripts/jobserver-exec \
-	$(CONFIG_SHELL) $(srctree)/Documentation/sphinx/parallel-wrapper.sh \
-	$(SPHINXBUILD) \
-	-b $2 \
-	-c $(abspath $(src)) \
-	-d $(abspath $(BUILDDIR)/.doctrees/$3) \
-	-D version=$(KERNELVERSION) -D release=$(KERNELRELEASE) \
-	$(ALLSPHINXOPTS) \
-	$(abspath $(src)/$5) \
-	$(abspath $(BUILDDIR)/$3/$4) && \
-	if [ "x$(DOCS_CSS)" != "x" ]; then \
-		cp $(if $(patsubst /%,,$(DOCS_CSS)),$(abspath $(srctree)/$(DOCS_CSS)),$(DOCS_CSS)) $(BUILDDIR)/$3/_static/; \
-	fi
-
-htmldocs:
-	@$(srctree)/scripts/sphinx-pre-install --version-check
-	@+$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,html,$(var),,$(var)))
-
-htmldocs-redirects: $(srctree)/Documentation/.renames.txt
-	@tools/docs/gen-redirects.py --output $(BUILDDIR) < $<
-
-# If Rust support is available and .config exists, add rustdoc generated contents.
-# If there are any, the errors from this make rustdoc will be displayed but
-# won't stop the execution of htmldocs
-
-ifneq ($(wildcard $(srctree)/.config),)
-ifeq ($(CONFIG_RUST),y)
-	$(Q)$(MAKE) rustdoc || true
-endif
-endif
-
-texinfodocs:
-	@$(srctree)/scripts/sphinx-pre-install --version-check
-	@+$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,texinfo,$(var),texinfo,$(var)))
-
-# Note: the 'info' Make target is generated by sphinx itself when
-# running the texinfodocs target define above.
-infodocs: texinfodocs
-	$(MAKE) -C $(BUILDDIR)/texinfo info
-
-linkcheckdocs:
-	@$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,linkcheck,$(var),,$(var)))
-
-latexdocs:
-	@$(srctree)/scripts/sphinx-pre-install --version-check
-	@+$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,latex,$(var),latex,$(var)))
-
-ifeq ($(HAVE_PDFLATEX),0)
-
-pdfdocs:
-	$(warning The '$(PDFLATEX)' command was not found. Make sure you have it installed and in PATH to produce PDF output.)
-	@echo "  SKIP    Sphinx $@ target."
-
-else # HAVE_PDFLATEX
-
-pdfdocs: DENY_VF = XDG_CONFIG_HOME=$(FONTS_CONF_DENY_VF)
-pdfdocs: latexdocs
-	@$(srctree)/scripts/sphinx-pre-install --version-check
-	$(foreach var,$(SPHINXDIRS), \
-		$(MAKE) PDFLATEX="$(PDFLATEX)" LATEXOPTS="$(LATEXOPTS)" $(DENY_VF) -C $(BUILDDIR)/$(var)/latex || sh $(srctree)/scripts/check-variable-fonts.sh || exit; \
-		mkdir -p $(BUILDDIR)/$(var)/pdf; \
-		mv $(subst .tex,.pdf,$(wildcard $(BUILDDIR)/$(var)/latex/*.tex)) $(BUILDDIR)/$(var)/pdf/; \
-	)
-
-endif # HAVE_PDFLATEX
-
-epubdocs:
-	@$(srctree)/scripts/sphinx-pre-install --version-check
-	@+$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,epub,$(var),epub,$(var)))
-
-xmldocs:
-	@$(srctree)/scripts/sphinx-pre-install --version-check
-	@+$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,xml,$(var),xml,$(var)))
-
 endif # HAVE_SPHINX

 # The following targets are independent of HAVE_SPHINX, and the rules should
 # work or silently pass without Sphinx.

+htmldocs-redirects: $(srctree)/Documentation/.renames.txt
+	@tools/docs/gen-redirects.py --output $(BUILDDIR) < $<
+
 refcheckdocs:
-	$(Q)cd $(srctree);scripts/documentation-file-ref-check
+	$(Q)cd $(srctree); tools/docs/documentation-file-ref-check

 cleandocs:
	$(Q)rm -rf $(BUILDDIR)

+# Used only on help
+_SPHINXDIRS = $(shell printf "%s\n" $(patsubst $(srctree)/Documentation/%/index.rst,%,$(wildcard $(srctree)/Documentation/*/index.rst)) | sort -f)
+
 dochelp:
	@echo ' Linux kernel internal documentation in different formats from ReST:'
	@echo '  htmldocs            - HTML'
+	@echo '  htmldocs-redirects  - generate HTML redirects for moved pages'
	@echo '  texinfodocs         - Texinfo'
	@echo '  infodocs            - Info'
+	@echo '  mandocs             - Man pages'
	@echo '  latexdocs           - LaTeX'
	@echo '  pdfdocs             - PDF'
	@echo '  epubdocs            - EPUB'
@@ -192,13 +98,17 @@ dochelp:
	@echo '  cleandocs           - clean all generated files'
	@echo
	@echo ' make SPHINXDIRS="s1 s2" [target] Generate only docs of folder s1, s2'
-	@echo ' valid values for SPHINXDIRS are: $(_SPHINXDIRS)'
-	@echo
-	@echo ' make SPHINX_CONF={conf-file} [target] use *additional* sphinx-build'
-	@echo ' configuration. This is e.g. useful to build with nit-picking config.'
+	@echo ' top level values for SPHINXDIRS are: $(_SPHINXDIRS)'
+	@echo ' you may also use a subdirectory like SPHINXDIRS=userspace-api/media,'
+	@echo ' provided that there is an index.rst file at the subdirectory.'
	@echo
	@echo ' make DOCS_THEME={sphinx-theme} selects a different Sphinx theme.'
	@echo
	@echo ' make DOCS_CSS={a .css file} adds a DOCS_CSS override file for html/epub output.'
	@echo
+	@echo ' make PAPER={a4|letter} Specifies the paper size used for LaTeX/PDF output.'
+	@echo
	@echo ' make FONTS_CONF_DENY_VF={path} sets a deny list to block variable Noto CJK fonts'
	@echo ' for PDF build. See tools/lib/python/kdoc/latex_fonts.py for more details'
	@echo
	@echo ' Default location for the generated documents is Documentation/output'
@@ -76,41 +76,43 @@ The messages are in the format::
 The taskstats payload is one of the following three kinds:

 1. Commands: Sent from user to kernel. Commands to get data on
-a pid/tgid consist of one attribute, of type TASKSTATS_CMD_ATTR_PID/TGID,
-containing a u32 pid or tgid in the attribute payload. The pid/tgid denotes
-the task/process for which userspace wants statistics.
+   a pid/tgid consist of one attribute, of type TASKSTATS_CMD_ATTR_PID/TGID,
+   containing a u32 pid or tgid in the attribute payload. The pid/tgid denotes
+   the task/process for which userspace wants statistics.

-Commands to register/deregister interest in exit data from a set of cpus
-consist of one attribute, of type
-TASKSTATS_CMD_ATTR_REGISTER/DEREGISTER_CPUMASK and contain a cpumask in the
-attribute payload. The cpumask is specified as an ascii string of
-comma-separated cpu ranges e.g. to listen to exit data from cpus 1,2,3,5,7,8
-the cpumask would be "1-3,5,7-8". If userspace forgets to deregister interest
-in cpus before closing the listening socket, the kernel cleans up its interest
-set over time. However, for the sake of efficiency, an explicit deregistration
-is advisable.
+   Commands to register/deregister interest in exit data from a set of cpus
+   consist of one attribute, of type
+   TASKSTATS_CMD_ATTR_REGISTER/DEREGISTER_CPUMASK and contain a cpumask in the
+   attribute payload. The cpumask is specified as an ascii string of
+   comma-separated cpu ranges e.g. to listen to exit data from cpus 1,2,3,5,7,8
+   the cpumask would be "1-3,5,7-8". If userspace forgets to deregister
+   interest in cpus before closing the listening socket, the kernel cleans up
+   its interest set over time. However, for the sake of efficiency, an explicit
+   deregistration is advisable.

 2. Response for a command: sent from the kernel in response to a userspace
-command. The payload is a series of three attributes of type:
+   command. The payload is a series of three attributes of type:

-a) TASKSTATS_TYPE_AGGR_PID/TGID : attribute containing no payload but indicates
-a pid/tgid will be followed by some stats.
+   a) TASKSTATS_TYPE_AGGR_PID/TGID: attribute containing no payload but
+      indicates a pid/tgid will be followed by some stats.

-b) TASKSTATS_TYPE_PID/TGID: attribute whose payload is the pid/tgid whose stats
-are being returned.
+   b) TASKSTATS_TYPE_PID/TGID: attribute whose payload is the pid/tgid whose
+      stats are being returned.

-c) TASKSTATS_TYPE_STATS: attribute with a struct taskstats as payload. The
-same structure is used for both per-pid and per-tgid stats.
+   c) TASKSTATS_TYPE_STATS: attribute with a struct taskstats as payload. The
+      same structure is used for both per-pid and per-tgid stats.

 3. New message sent by kernel whenever a task exits. The payload consists of a
    series of attributes of the following type:

-a) TASKSTATS_TYPE_AGGR_PID: indicates next two attributes will be pid+stats
-b) TASKSTATS_TYPE_PID: contains exiting task's pid
-c) TASKSTATS_TYPE_STATS: contains the exiting task's per-pid stats
-d) TASKSTATS_TYPE_AGGR_TGID: indicates next two attributes will be tgid+stats
-e) TASKSTATS_TYPE_TGID: contains tgid of process to which task belongs
-f) TASKSTATS_TYPE_STATS: contains the per-tgid stats for exiting task's process
+   a) TASKSTATS_TYPE_AGGR_PID: indicates next two attributes will be pid+stats
+   b) TASKSTATS_TYPE_PID: contains exiting task's pid
+   c) TASKSTATS_TYPE_STATS: contains the exiting task's per-pid stats
+   d) TASKSTATS_TYPE_AGGR_TGID: indicates next two attributes will be
+      tgid+stats
+   e) TASKSTATS_TYPE_TGID: contains tgid of process to which task belongs
+   f) TASKSTATS_TYPE_STATS: contains the per-tgid stats for exiting task's
+      process
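The ascii cpumask format described in the hunk above ("1-3,5,7-8" for cpus 1,2,3,5,7,8) is straightforward to generate. A sketch under that format description; `format_cpumask` is an illustrative helper, not a kernel or libc API:

```python
def format_cpumask(cpus):
    """Render a set of CPU numbers as the comma-separated range string
    the taskstats cpumask attribute expects, e.g. {1,2,3,5,7,8} -> "1-3,5,7-8"."""
    cpus = sorted(set(cpus))
    parts = []
    i = 0
    while i < len(cpus):
        # Extend j to the end of the current consecutive run.
        j = i
        while j + 1 < len(cpus) and cpus[j + 1] == cpus[j] + 1:
            j += 1
        parts.append(str(cpus[i]) if i == j else f"{cpus[i]}-{cpus[j]}")
        i = j + 1
    return ",".join(parts)

print(format_cpumask({1, 2, 3, 5, 7, 8}))  # 1-3,5,7-8
```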
@@ -79,6 +79,9 @@ because the image we're executing is interpreted by the EFI shell,
 which understands relative paths, whereas the rest of the command line
 is passed to bzImage.efi.

+.. hint::
+   It is also possible to provide an initrd using a Linux-specific UEFI
+   protocol at boot time. See :ref:`pe-coff-entry-point` for details.
+
 The "dtb=" option
 -----------------
@@ -31,7 +31,7 @@ specifically opt into the feature to enable it.
 Mitigation
 ----------

-When PR_SET_L1D_FLUSH is enabled for a task a flush of the L1D cache is
+When PR_SPEC_L1D_FLUSH is enabled for a task a flush of the L1D cache is
 performed when the task is scheduled out and the incoming task belongs to a
 different process and therefore to a different address space.
@@ -406,7 +406,7 @@ The possible values in this file are:

 - Single threaded indirect branch prediction (STIBP) status for protection
   between different hyper threads. This feature can be controlled through
-  prctl per process, or through kernel command line options. This is x86
+  prctl per process, or through kernel command line options. This is an x86
   only feature. For more details see below.

 ==================== ========================================================
@@ -110,102 +110,7 @@ The parameters listed below are only valid if certain kernel build options
 were enabled and if respective hardware is present. This list should be kept
 in alphabetical order. The text in square brackets at the beginning
 of each description states the restrictions within which a parameter
 is applicable::

-	ACPI		ACPI support is enabled.
-	AGP		AGP (Accelerated Graphics Port) is enabled.
-	ALSA		ALSA sound support is enabled.
-	APIC		APIC support is enabled.
-	APM		Advanced Power Management support is enabled.
-	APPARMOR	AppArmor support is enabled.
-	ARM		ARM architecture is enabled.
-	ARM64		ARM64 architecture is enabled.
-	AX25		Appropriate AX.25 support is enabled.
-	CLK		Common clock infrastructure is enabled.
-	CMA		Contiguous Memory Area support is enabled.
-	DRM		Direct Rendering Management support is enabled.
-	DYNAMIC_DEBUG	Build in debug messages and enable them at runtime
-	EARLY		Parameter processed too early to be embedded in initrd.
-	EDD		BIOS Enhanced Disk Drive Services (EDD) is enabled
-	EFI		EFI Partitioning (GPT) is enabled
-	EVM		Extended Verification Module
-	FB		The frame buffer device is enabled.
-	FTRACE		Function tracing enabled.
-	GCOV		GCOV profiling is enabled.
-	HIBERNATION	HIBERNATION is enabled.
-	HW		Appropriate hardware is enabled.
-	HYPER_V		HYPERV support is enabled.
-	IMA		Integrity measurement architecture is enabled.
-	IP_PNP		IP DHCP, BOOTP, or RARP is enabled.
-	IPV6		IPv6 support is enabled.
-	ISAPNP		ISA PnP code is enabled.
-	ISDN		Appropriate ISDN support is enabled.
-	ISOL		CPU Isolation is enabled.
-	JOY		Appropriate joystick support is enabled.
-	KGDB		Kernel debugger support is enabled.
-	KVM		Kernel Virtual Machine support is enabled.
-	LIBATA		Libata driver is enabled
-	LOONGARCH	LoongArch architecture is enabled.
-	LOOP		Loopback device support is enabled.
-	LP		Printer support is enabled.
-	M68k		M68k architecture is enabled.
-			These options have more detailed description inside of
-			Documentation/arch/m68k/kernel-options.rst.
-	MDA		MDA console support is enabled.
-	MIPS		MIPS architecture is enabled.
-	MOUSE		Appropriate mouse support is enabled.
-	MSI		Message Signaled Interrupts (PCI).
-	MTD		MTD (Memory Technology Device) support is enabled.
-	NET		Appropriate network support is enabled.
-	NFS		Appropriate NFS support is enabled.
-	NUMA		NUMA support is enabled.
-	OF		Devicetree is enabled.
-	PARISC		The PA-RISC architecture is enabled.
-	PCI		PCI bus support is enabled.
-	PCIE		PCI Express support is enabled.
-	PCMCIA		The PCMCIA subsystem is enabled.
-	PNP		Plug & Play support is enabled.
-	PPC		PowerPC architecture is enabled.
-	PPT		Parallel port support is enabled.
-	PS2		Appropriate PS/2 support is enabled.
-	PV_OPS		A paravirtualized kernel is enabled.
-	RAM		RAM disk support is enabled.
-	RDT		Intel Resource Director Technology.
-	RISCV		RISCV architecture is enabled.
-	S390		S390 architecture is enabled.
-	SCSI		Appropriate SCSI support is enabled.
-			A lot of drivers have their options described inside
-			the Documentation/scsi/ sub-directory.
-	SDW		SoundWire support is enabled.
-	SECURITY	Different security models are enabled.
-	SELINUX		SELinux support is enabled.
-	SERIAL		Serial support is enabled.
-	SH		SuperH architecture is enabled.
-	SMP		The kernel is an SMP kernel.
-	SPARC		Sparc architecture is enabled.
-	SUSPEND		System suspend states are enabled.
-	SWSUSP		Software suspend (hibernation) is enabled.
-	TPM		TPM drivers are enabled.
-	UMS		USB Mass Storage support is enabled.
-	USB		USB support is enabled.
-	USBHID		USB Human Interface Device support is enabled.
-	V4L		Video For Linux support is enabled.
-	VGA		The VGA console has been enabled.
-	VMMIO		Driver for memory mapped virtio devices is enabled.
-	VT		Virtual terminal support is enabled.
-	WDT		Watchdog support is enabled.
-	X86-32		X86-32, aka i386 architecture is enabled.
-	X86-64		X86-64 architecture is enabled.
-	X86		Either 32-bit or 64-bit x86 (same as X86-32+X86-64)
-	X86_UV		SGI UV support is enabled.
-	XEN		Xen support is enabled
-	XTENSA		xtensa architecture is enabled.
-
-In addition, the following text indicates that the option::
-
-	BOOT	Is a boot loader parameter.
-	BUGS=	Relates to possible processor bugs on the said processor.
-	KNL	Is a kernel start-up parameter.
-
-is applicable.

 Parameters denoted with BOOT are actually interpreted by the boot
 loader, and have no meaning to the kernel directly.
@@ -1,3 +1,101 @@
+	ACPI		ACPI support is enabled.
+	AGP		AGP (Accelerated Graphics Port) is enabled.
+	ALSA		ALSA sound support is enabled.
+	APIC		APIC support is enabled.
+	APM		Advanced Power Management support is enabled.
+	APPARMOR	AppArmor support is enabled.
+	ARM		ARM architecture is enabled.
+	ARM64		ARM64 architecture is enabled.
+	AX25		Appropriate AX.25 support is enabled.
+	CLK		Common clock infrastructure is enabled.
+	CMA		Contiguous Memory Area support is enabled.
+	DRM		Direct Rendering Management support is enabled.
+	DYNAMIC_DEBUG	Build in debug messages and enable them at runtime
+	EARLY		Parameter processed too early to be embedded in initrd.
+	EDD		BIOS Enhanced Disk Drive Services (EDD) is enabled
+	EFI		EFI Partitioning (GPT) is enabled
+	EVM		Extended Verification Module
+	FB		The frame buffer device is enabled.
+	FTRACE		Function tracing enabled.
+	GCOV		GCOV profiling is enabled.
+	HIBERNATION	HIBERNATION is enabled.
+	HW		Appropriate hardware is enabled.
+	HYPER_V		HYPERV support is enabled.
+	IMA		Integrity measurement architecture is enabled.
+	IP_PNP		IP DHCP, BOOTP, or RARP is enabled.
+	IPV6		IPv6 support is enabled.
+	ISAPNP		ISA PnP code is enabled.
+	ISDN		Appropriate ISDN support is enabled.
+	ISOL		CPU Isolation is enabled.
+	JOY		Appropriate joystick support is enabled.
+	KGDB		Kernel debugger support is enabled.
+	KVM		Kernel Virtual Machine support is enabled.
+	LIBATA		Libata driver is enabled
+	LOONGARCH	LoongArch architecture is enabled.
+	LOOP		Loopback device support is enabled.
+	LP		Printer support is enabled.
+	M68k		M68k architecture is enabled.
+			These options have more detailed description inside of
+			Documentation/arch/m68k/kernel-options.rst.
+	MDA		MDA console support is enabled.
+	MIPS		MIPS architecture is enabled.
+	MOUSE		Appropriate mouse support is enabled.
+	MSI		Message Signaled Interrupts (PCI).
+	MTD		MTD (Memory Technology Device) support is enabled.
+	NET		Appropriate network support is enabled.
+	NFS		Appropriate NFS support is enabled.
+	NUMA		NUMA support is enabled.
+	OF		Devicetree is enabled.
+	PARISC		The PA-RISC architecture is enabled.
+	PCI		PCI bus support is enabled.
+	PCIE		PCI Express support is enabled.
+	PCMCIA		The PCMCIA subsystem is enabled.
+	PNP		Plug & Play support is enabled.
+	PPC		PowerPC architecture is enabled.
+	PPT		Parallel port support is enabled.
+	PS2		Appropriate PS/2 support is enabled.
+	PV_OPS		A paravirtualized kernel is enabled.
+	RAM		RAM disk support is enabled.
+	RDT		Intel Resource Director Technology.
+	RISCV		RISCV architecture is enabled.
+	S390		S390 architecture is enabled.
+	SCSI		Appropriate SCSI support is enabled.
+			A lot of drivers have their options described inside
+			the Documentation/scsi/ sub-directory.
+	SDW		SoundWire support is enabled.
+	SECURITY	Different security models are enabled.
+	SELINUX		SELinux support is enabled.
+	SERIAL		Serial support is enabled.
+	SH		SuperH architecture is enabled.
+	SMP		The kernel is an SMP kernel.
+	SPARC		Sparc architecture is enabled.
+	SUSPEND		System suspend states are enabled.
+	SWSUSP		Software suspend (hibernation) is enabled.
+	TPM		TPM drivers are enabled.
+	UMS		USB Mass Storage support is enabled.
+	USB		USB support is enabled.
+	USBHID		USB Human Interface Device support is enabled.
+	V4L		Video For Linux support is enabled.
+	VGA		The VGA console has been enabled.
+	VMMIO		Driver for memory mapped virtio devices is enabled.
+	VT		Virtual terminal support is enabled.
+	WDT		Watchdog support is enabled.
+	X86-32		X86-32, aka i386 architecture is enabled.
+	X86-64		X86-64 architecture is enabled.
+	X86		Either 32-bit or 64-bit x86 (same as X86-32+X86-64)
+	X86_UV		SGI UV support is enabled.
+	XEN		Xen support is enabled
+	XTENSA		xtensa architecture is enabled.
+
+In addition, the following text indicates that the option
+
+	BOOT	Is a boot loader parameter.
+	BUGS=	Relates to possible processor bugs on the said processor.
+	KNL	Is a kernel start-up parameter.
+

 Kernel parameters

	accept_memory=	[MM]
			Format: { eager | lazy }
			default: lazy
@@ -6414,7 +6512,7 @@
			that don't.

			off         - no mitigation
-			auto        - automatically select a migitation
+			auto        - automatically select a mitigation
			auto,nosmt  - automatically select a mitigation,
			              disabling SMT if necessary for
			              the full mitigation (only on Zen1
@@ -7164,7 +7262,7 @@
			limit. Default value is 8191 pools.

	stacktrace	[FTRACE]
-			Enabled the stack tracer on boot up.
+			Enable the stack tracer on boot up.

	stacktrace_filter=[function-list]
			[FTRACE] Limit the functions that the stack tracer
@@ -186,6 +186,6 @@ More detailed explanation for tainting

 18) ``N`` if an in-kernel test, such as a KUnit test, has been run.

-19) ``J`` if userpace opened /dev/fwctl/* and performed a FWTCL_RPC_DEBUG_WRITE
+19) ``J`` if userspace opened /dev/fwctl/* and performed a FWTCL_RPC_DEBUG_WRITE
     to use the devices debugging features. Device debugging features could
     cause the device to malfunction in undefined ways.
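The hunk above documents taint bit 18 (``N``, in-kernel test) and bit 19 (``J``, fwctl debug write); the kernel exposes the full taint bitmask as an integer in /proc/sys/kernel/tainted. A sketch of decoding such a bitmask into flag letters; `decode_taint` is an illustrative helper, and only the two bits named in the hunk are included:

```python
def decode_taint(value, flags):
    """Map the set bits in a taint value to their documented letters."""
    return "".join(letter for bit, letter in sorted(flags.items())
                   if value & (1 << bit))

# Bits per the documentation above: 18 -> N (in-kernel test), 19 -> J (fwctl).
FLAGS = {18: "N", 19: "J"}
print(decode_taint((1 << 18) | (1 << 19), FLAGS))  # NJ
```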
@@ -196,11 +196,11 @@ Let’s checkout the latest Linux repository and build cscope database::
	cscope -R -p10  # builds cscope.out database before starting browse session
	cscope -d -p10  # starts browse session on cscope.out database

-Note: Run "cscope -R -p10" to build the database and c"scope -d -p10" to
-enter into the browsing session. cscope by default cscope.out database.
-To get out of this mode press ctrl+d. -p option is used to specify the
-number of file path components to display. -p10 is optimal for browsing
-kernel sources.
+Note: Run "cscope -R -p10" to build the database and "cscope -d -p10" to
+enter into the browsing session. cscope by default uses the cscope.out
+database. To get out of this mode press ctrl+d. -p option is used to
+specify the number of file path components to display. -p10 is optimal
+for browsing kernel sources.

 What is perf and how do we use it?
 ==================================
@ -1431,12 +1431,34 @@ The boot loader *must* fill out the following fields in bp::

All other fields should be zero.

.. note::
   The EFI Handover Protocol is deprecated in favour of the ordinary PE/COFF
   entry point described below.

.. _pe-coff-entry-point:

PE/COFF entry point
===================

When compiled with ``CONFIG_EFI_STUB=y``, the kernel can be executed as a
regular PE/COFF binary. See Documentation/admin-guide/efi-stub.rst for
implementation details.

The stub loader can request the initrd via a UEFI protocol. For this to work,
the firmware or bootloader needs to register a handle which carries
implementations of the ``EFI_LOAD_FILE2`` protocol and the device path
protocol exposing the ``LINUX_EFI_INITRD_MEDIA_GUID`` vendor media device path.
In this case, a kernel booting via the EFI stub will invoke the
``LoadFile2::LoadFile()`` method on the registered protocol to instruct the
firmware to load the initrd into a memory location chosen by the kernel/EFI
stub.

This approach removes the need for any knowledge on the part of the EFI
bootloader regarding the internal representation of boot_params or any
requirements/limitations regarding the placement of the command line and
ramdisk in memory, or the placement of the kernel image itself.

For sample implementations, refer to `the original u-boot implementation`_ or
`the OVMF implementation`_.

.. _the original u-boot implementation: https://github.com/u-boot/u-boot/commit/ec80b4735a593961fe701cc3a5d717d4739b0fd0
.. _the OVMF implementation: https://github.com/tianocore/edk2/blob/1780373897f12c25075f8883e073144506441168/OvmfPkg/LinuxInitrdDynamicShellCommand/LinuxInitrdDynamicShellCommand.c
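The vendor media GUID named above is defined in the kernel as an ``EFI_GUID`` whose arguments are written in natural (printable) field order. A short sketch of how those fields map onto the canonical textual GUID form; the field values are taken from ``include/linux/efi.h`` in recent kernels and should be double-checked against your tree:

```python
# Sketch: format the LINUX_EFI_INITRD_MEDIA_GUID fields (as passed to the
# kernel's EFI_GUID() macro) into the canonical textual representation.
# The macro arguments are already in printable order, so formatting is
# a direct concatenation of the fields.
def efi_guid_str(a: int, b: int, c: int, *d: int) -> str:
    return "%08x-%04x-%04x-%02x%02x-%s" % (
        a, b, c, d[0], d[1], "".join("%02x" % x for x in d[2:]))

# Field values as found in include/linux/efi.h (verify against your tree).
LINUX_EFI_INITRD_MEDIA_GUID = efi_guid_str(
    0x5568e427, 0x68fc, 0x4f3d,
    0xac, 0x74, 0xca, 0x55, 0x52, 0x31, 0xcc, 0x68)
```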
@ -18,8 +18,6 @@ import sphinx

# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath("sphinx"))

from load_config import loadConfig  # pylint: disable=C0413,E0401

# Minimal supported version
needs_sphinx = "3.4.3"
@ -93,8 +91,12 @@ def config_init(app, config):

    # LaTeX and PDF output require a list of documents which are dependent
    # on the app.srcdir. Add them here

    # Handle the case where SPHINXDIRS is used
    if not os.path.samefile(doctree, app.srcdir):
        # Add a tag to mark that the build is actually a subproject
        tags.add("subproject")

        # get index.rst, if it exists
        doc = os.path.basename(app.srcdir)
        fname = "index"
        if os.path.exists(os.path.join(app.srcdir, fname + ".rst")):
@ -583,13 +585,6 @@ pdf_documents = [

kerneldoc_bin = "../scripts/kernel-doc.py"
kerneldoc_srctree = ".."

# ------------------------------------------------------------------------------
# Since loadConfig overwrites settings from the global namespace, it has to be
# the last statement in the conf.py file
# ------------------------------------------------------------------------------
loadConfig(globals())


def setup(app):
    """Patterns need to be updated at init time on older Sphinx versions"""
@ -92,18 +92,18 @@ There are two functions for dealing with the script:

     void assoc_array_apply_edit(struct assoc_array_edit *edit);

This will perform the edit functions, interpolating various write barriers
to permit accesses under the RCU read lock to continue. The edit script
will then be passed to ``call_rcu()`` to free it and any dead stuff it
points to.

2. Cancel an edit script::

     void assoc_array_cancel_edit(struct assoc_array_edit *edit);

This frees the edit script and all preallocated memory immediately. If
this was for insertion, the new object is *not* released by this function,
but must rather be released by the caller.

These functions are guaranteed not to fail.
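The prepare/apply split described above (an edit script is built first with all allocations done up front, then either applied or cancelled) is a general two-phase pattern. A very loose userspace sketch of the idea, not kernel code:

```python
# Loose sketch of the edit-script pattern: all allocation happens while
# preparing the edit; applying it cannot fail, and cancelling releases
# the preallocated state without touching the array.
class EditScript:
    def __init__(self, array, key, obj):
        # "Preparation" step: everything needed is captured here.
        self.array, self.key, self.obj = array, key, obj

    def apply(self):
        # Guaranteed not to fail: no allocation happens at apply time.
        self.array[self.key] = self.obj

    def cancel(self):
        # Drop the preallocated state; the array is left untouched.
        self.array = self.key = self.obj = None

array = {}
edit = EditScript(array, "k", 42)
edit.apply()
```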
@ -123,43 +123,43 @@ This points to a number of methods, all of which need to be provided:

     unsigned long (*get_key_chunk)(const void *index_key, int level);

This should return a chunk of caller-supplied index key starting at the
*bit* position given by the level argument. The level argument will be a
multiple of ``ASSOC_ARRAY_KEY_CHUNK_SIZE`` and the function should return
``ASSOC_ARRAY_KEY_CHUNK_SIZE`` bits. No error is possible.


2. Get a chunk of an object's index key::

     unsigned long (*get_object_key_chunk)(const void *object, int level);

As the previous function, but gets its data from an object in the array
rather than from a caller-supplied index key.


3. See if this is the object we're looking for::

     bool (*compare_object)(const void *object, const void *index_key);

Compare the object against an index key and return ``true`` if it matches
and ``false`` if it doesn't.


4. Diff the index keys of two objects::

     int (*diff_objects)(const void *object, const void *index_key);

Return the bit position at which the index key of the specified object
differs from the given index key or -1 if they are the same.


5. Free an object::

     void (*free_object)(void *object);

Free the specified object. Note that this may be called an RCU grace period
after ``assoc_array_apply_edit()`` was called, so ``synchronize_rcu()`` may
be necessary on module unloading.
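The ``get_key_chunk()`` contract above (return ``ASSOC_ARRAY_KEY_CHUNK_SIZE`` bits of the key starting at the bit position given by ``level``) can be mimicked in a few lines. The 32-bit chunk size below is purely illustrative; in the kernel the chunk size is tied to the machine word:

```python
KEY_CHUNK_SIZE = 32  # illustrative only; the kernel ties this to the word size

def get_key_chunk(index_key: int, level: int) -> int:
    # level is always a multiple of the chunk size; return that many bits
    # of the key starting at bit position `level`.
    assert level % KEY_CHUNK_SIZE == 0
    return (index_key >> level) & ((1 << KEY_CHUNK_SIZE) - 1)
```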
@ -171,7 +171,7 @@ There are a number of functions for manipulating an associative array:

     void assoc_array_init(struct assoc_array *array);

This initialises the base structure for an associative array. It can't fail.


2. Insert/replace an object in an associative array::
@ -182,21 +182,21 @@ This initialises the base structure for an associative array. It can't fail.

                       const void *index_key,
                       void *object);

This inserts the given object into the array. Note that the least
significant bit of the pointer must be zero as it's used to type-mark
pointers internally.

If an object already exists for that key then it will be replaced with the
new object and the old one will be freed automatically.

The ``index_key`` argument should hold index key information and is
passed to the methods in the ops table when they are called.

This function makes no alteration to the array itself, but rather returns
an edit script that must be applied. ``-ENOMEM`` is returned in the case of
an out-of-memory error.

The caller should lock exclusively against other modifiers of the array.


3. Delete an object from an associative array::
@ -206,15 +206,15 @@ The caller should lock exclusively against other modifiers of the array.

                       const struct assoc_array_ops *ops,
                       const void *index_key);

This deletes an object that matches the specified data from the array.

The ``index_key`` argument should hold index key information and is
passed to the methods in the ops table when they are called.

This function makes no alteration to the array itself, but rather returns
an edit script that must be applied. ``-ENOMEM`` is returned in the case of
an out-of-memory error. ``NULL`` will be returned if the specified object
is not found within the array.

The caller should lock exclusively against other modifiers of the array.
@ -225,14 +225,14 @@ The caller should lock exclusively against other modifiers of the array.

     assoc_array_clear(struct assoc_array *array,
                       const struct assoc_array_ops *ops);

This deletes all the objects from an associative array and leaves it
completely empty.

This function makes no alteration to the array itself, but rather returns
an edit script that must be applied. ``-ENOMEM`` is returned in the case of
an out-of-memory error.

The caller should lock exclusively against other modifiers of the array.


5. Destroy an associative array, deleting all objects::
@ -240,14 +240,14 @@ The caller should lock exclusively against other modifiers of the array.

     void assoc_array_destroy(struct assoc_array *array,
                              const struct assoc_array_ops *ops);

This destroys the contents of the associative array and leaves it
completely empty. It is not permitted for another thread to be traversing
the array under the RCU read lock at the same time as this function is
destroying it as no RCU deferral is performed on memory release -
something that would require memory to be allocated.

The caller should lock exclusively against other modifiers and accessors
of the array.


6. Garbage collect an associative array::
@ -257,24 +257,24 @@ of the array.

                          bool (*iterator)(void *object, void *iterator_data),
                          void *iterator_data);

This iterates over the objects in an associative array and passes each one
to ``iterator()``. If ``iterator()`` returns ``true``, the object is kept.
If it returns ``false``, the object will be freed. If the ``iterator()``
function returns ``true``, it must perform any appropriate refcount
incrementing on the object before returning.

The internal tree will be packed down if possible as part of the iteration
to reduce the number of nodes in it.

The ``iterator_data`` is passed directly to ``iterator()`` and is otherwise
ignored by the function.

The function will return ``0`` if successful and ``-ENOMEM`` if there wasn't
enough memory.

It is possible for other threads to iterate over or search the array under
the RCU read lock while this function is in progress. The caller should
lock exclusively against other modifiers of the array.


Access Functions
@ -289,19 +289,19 @@ There are two functions for accessing an associative array:

                                        void *iterator_data),
                        void *iterator_data);

This passes each object in the array to the iterator callback function.
``iterator_data`` is private data for that function.

This may be used on an array at the same time as the array is being
modified, provided the RCU read lock is held. Under such circumstances,
it is possible for the iteration function to see some objects twice. If
this is a problem, then modification should be locked against. The
iteration algorithm should not, however, miss any objects.

The function will return ``0`` if no objects were in the array or else it
will return the result of the last iterator function called. Iteration
stops immediately if any call to the iteration function results in a
non-zero return.


2. Find an object in an associative array::
@ -310,14 +310,14 @@ return.

                          const struct assoc_array_ops *ops,
                          const void *index_key);

This walks through the array's internal tree directly to the object
specified by the index key.

This may be used on an array at the same time as the array is being
modified, provided the RCU read lock is held.

The function will return the object if found (and set ``*_type`` to the
object type) or will return ``NULL`` if the object was not found.


Index Key Form
@ -399,10 +399,11 @@ fixed levels. For example::

In the above example, there are 7 nodes (A-G), each with 16 slots (0-f).
Assuming no other meta data nodes in the tree, the key space is divided
thusly:

=========== ====
KEY PREFIX  NODE
=========== ====
137*        D
138*        E
13[0-69-f]* C
@ -410,10 +411,12 @@ thusly::

e6*         G
e[0-57-f]*  F
[02-df]*    A
=========== ====

So, for instance, keys with the following example index keys will be found in
the appropriate nodes:

=============== ======= ====
INDEX KEY       PREFIX  NODE
=============== ======= ====
13694892892489  13      C
@ -422,12 +425,13 @@ the appropriate nodes::

138bbb89003093  138     E
1394879524789   13      C
1458952489      1       B
9431809de993ba  \-      A
b4542910809cd   \-      A
e5284310def98   e       F
e68428974237    e6      G
e7fffcbd443     e       F
f3842239082     \-      A
=============== ======= ====

To save memory, if a node can hold all the leaves in its portion of keyspace,
then the node will have all those leaves in it and will not have any metadata
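The routing shown in the table above amounts to longest-prefix matching on the leading hex digits of the index key. A small sketch of that lookup, using the example tree's prefixes (the bracketed ranges in the table are simply "everything else at that level", which the empty/shorter prefixes model here):

```python
# Sketch: route an index key to a node of the example tree by picking the
# longest matching hex-digit prefix. Prefix "" stands for node A's
# catch-all [02-df]* slots, "e" for e[0-57-f]*, "13" for 13[0-69-f]*.
NODE_PREFIXES = {"137": "D", "138": "E", "13": "C", "1": "B",
                 "e6": "G", "e": "F", "": "A"}

def node_for_key(key: str) -> str:
    best = max((p for p in NODE_PREFIXES if key.startswith(p)), key=len)
    return NODE_PREFIXES[best]
```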
@ -441,8 +445,9 @@ metadata pointer. If the metadata pointer is there, any leaf whose key matches

the metadata key prefix must be in the subtree that the metadata pointer points
to.

In the above example list of index keys, node A will contain:

==== =============== ==================
SLOT CONTENT         INDEX KEY (PREFIX)
==== =============== ==================
1    PTR TO NODE B   1*
@ -450,11 +455,16 @@ In the above example list of index keys, node A will contain::

any  LEAF            b4542910809cd
e    PTR TO NODE F   e*
any  LEAF            f3842239082
==== =============== ==================

and node B:

==== =============== ==================
SLOT CONTENT         INDEX KEY (PREFIX)
==== =============== ==================
3    PTR TO NODE C   13*
any  LEAF            1458952489
==== =============== ==================


Shortcuts
@ -461,16 +461,9 @@ Comments

line comments is::

	/*
	 * This is the preferred style
	 * for multi line comments.
	 */

See: https://www.kernel.org/doc/html/latest/process/coding-style.html#commenting
@ -27,15 +27,15 @@ Usage

::

	tools/docs/checktransupdate.py --help

Please refer to the output of argument parser for usage details.

Samples

- ``tools/docs/checktransupdate.py -l zh_CN``
  This will print all the files that need to be updated in the zh_CN locale.
- ``tools/docs/checktransupdate.py Documentation/translations/zh_CN/dev-tools/testing-overview.rst``
  This will only print the status of the specified file.

Then the output is something like:
@ -152,7 +152,7 @@ generate links to that documentation. Adding ``kernel-doc`` directives to

the documentation to bring those comments in can help the community derive
the full value of the work that has gone into creating them.

The ``tools/docs/find-unused-docs.sh`` tool can be used to find these
overlooked comments.

Note that the most value comes from pulling in the documentation for
@ -405,6 +405,10 @@ Domain`_ references.

``%CONST``
  Name of a constant. (No cross-referencing, just formatting.)

  Examples::

    %0 %NULL %-1 %-EFAULT %-EINVAL %-ENOMEM

````literal````
  A literal block that should be handled as-is. The output will use a
  ``monospaced font``.
@ -579,20 +583,23 @@ source.

How to use kernel-doc to generate man pages
-------------------------------------------

To generate man pages for all files that contain kernel-doc markups, run::

	$ make mandocs

Or calling ``sphinx-build-wrapper`` directly::

	$ ./tools/docs/sphinx-build-wrapper mandocs

The output will be in the ``man`` directory inside the output directory
(by default: ``Documentation/output``).

Optionally, it is possible to generate a partial set of man pages by
using SPHINXDIRS::

	$ make SPHINXDIRS=driver-api/media mandocs

.. note::

   When SPHINXDIRS={subdir} is used, it will only generate man pages for
   the files explicitly inside a ``Documentation/{subdir}/.../*.rst`` file.
@ -5,173 +5,168 @@ Including uAPI header files

Sometimes, it is useful to include header files and C example codes in
order to describe the userspace API and to generate cross-references
between the code and the documentation. Adding cross-references for
userspace API files has an additional advantage: Sphinx will generate warnings
if a symbol is not found at the documentation. That helps to keep the
uAPI documentation in sync with the Kernel changes.
The :ref:`parse_headers.py <parse_headers>` provides a way to generate such
cross-references. It has to be called via Makefile, while building the
documentation. Please see ``Documentation/userspace-api/media/Makefile`` for an example
about how to use it inside the Kernel tree.

.. _parse_headers:

tools/docs/parse_headers.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^

NAME
****

parse_headers.py - parse a C file, in order to identify functions, structs,
enums and defines and create cross-references to a Sphinx book.

USAGE
*****

parse-headers.py [-h] [-d] [-t] ``FILE_IN`` ``FILE_OUT`` ``FILE_RULES``

SYNOPSIS
********

Converts a C header or source file ``FILE_IN`` into a ReStructured Text
included via ..parsed-literal block with cross-references for the
documentation files that describe the API. It accepts an optional
``FILE_RULES`` file to describe what elements will be either ignored or
be pointed to a non-default reference type/name.

The output is written at ``FILE_OUT``.

It is capable of identifying ``define``, ``struct``, ``typedef``, ``enum``
and enum ``symbol``, creating cross-references for all of them.

It is also capable of distinguishing ``#define`` used for specifying
Linux-specific macros used to define ``ioctl``.

The optional ``FILE_RULES`` contains a set of rules like::

    ignore ioctl VIDIOC_ENUM_FMT
    replace ioctl VIDIOC_DQBUF vidioc_qbuf
    replace define V4L2_EVENT_MD_FL_HAVE_FRAME_SEQ :c:type:`v4l2_event_motion_det`

POSITIONAL ARGUMENTS
********************

``FILE_IN``
  Input C file

``FILE_OUT``
  Output RST file

``FILE_RULES``
  Exceptions file (optional)

OPTIONS
*******

``-h``, ``--help``
  show a help message and exit

``-d``, ``--debug``
  Increase debug level. Can be used multiple times

``-t``, ``--toc``
  instead of a literal block, outputs a TOC table at the RST file

DESCRIPTION
***********

Creates an enriched version of a Kernel header file with cross-links
to each C data structure type, from ``FILE_IN``, formatting it with
reStructuredText notation, either as-is or as a table of contents.

It accepts an optional ``FILE_RULES`` which describes what elements will be
either ignored or be pointed to a non-default reference, and optionally
defines the C namespace to be used.

It is meant to allow having more comprehensive documentation, where
uAPI headers will create cross-reference links to the code.

The output is written at the ``FILE_OUT``.

The ``FILE_RULES`` may contain three types of statements:
**ignore**, **replace** and **namespace**.

By default, it creates rules for all symbols and defines, but it also
allows parsing an exception file. Such file contains a set of rules
using the syntax below:

1. Ignore rules:

     ignore *type* *symbol*

   Removes the symbol from reference generation.

2. Replace rules:

     replace *type* *old_symbol* *new_reference*

   Replaces *old_symbol* with a *new_reference*.
   The *new_reference* can be:

   - A simple symbol name;
   - A full Sphinx reference.

3. Namespace rules:

     namespace *namespace*

   Sets C *namespace* to be used during cross-reference generation. Can
   be overridden by replace rules.

On ignore and replace rules, *type* can be:

- ioctl:
  for defines of the form ``_IO*``, e.g., ioctl definitions

- define:
  for other defines

- symbol:
  for symbols defined within enums;

- typedef:
  for typedefs;

- enum:
  for the name of a non-anonymous enum;

- struct:
  for structs.
EXAMPLES
|
||||
********
|
||||
|
||||
- Ignore a define ``_VIDEODEV2_H`` at ``FILE_IN``::
|
||||
|
||||
ignore define _VIDEODEV2_H
|
||||
ignore define _VIDEODEV2_H
|
||||
|
||||
- On an data structure like this enum::
|
||||
|
||||
enum foo { BAR1, BAR2, PRIVATE };
|
||||
|
||||
It won't generate cross-references for ``PRIVATE``::
|
||||
|
||||
ignore symbol PRIVATE
|
||||
|
||||
At the same struct, instead of creating one cross reference per symbol,
|
||||
make them all point to the ``enum foo`` C type::
|
||||
|
||||
replace symbol BAR1 :c:type:\`foo\`
|
||||
replace symbol BAR2 :c:type:\`foo\`
|
||||
|
||||
|
||||
Ignore a #define _VIDEODEV2_H at the C_FILE.
|
||||
|
||||
ignore symbol PRIVATE
|
||||
|
||||
|
||||
On a struct like:
|
||||
|
||||
enum foo { BAR1, BAR2, PRIVATE };
|
||||
|
||||
It won't generate cross-references for \ **PRIVATE**\ .
|
||||
|
||||
replace symbol BAR1 :c:type:\`foo\`
|
||||
replace symbol BAR2 :c:type:\`foo\`
|
||||
|
||||
|
||||
On a struct like:
|
||||
|
||||
enum foo { BAR1, BAR2, PRIVATE };
|
||||
|
||||
It will make the BAR1 and BAR2 enum symbols to cross reference the foo
|
||||
symbol at the C domain.
|
||||
- Use C namespace ``MC`` for all symbols at ``FILE_IN``::
|
||||
|
||||
namespace MC
|
||||
|
||||
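Taken together, the ignore, replace and namespace statements compose into a single rules file. As a minimal sketch (the guard and symbol names here are illustrative, not taken from a real header), a hypothetical ``FILE_RULES`` could read::

    ignore define _FOO_H
    replace symbol BAR1 :c:type:`foo`
    replace symbol BAR2 :c:type:`foo`
    namespace MC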
 BUGS
 ****
 
@@ -184,7 +179,7 @@ COPYRIGHT
 *********
 
-Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab+samsung@kernel.org>.
+Copyright (c) 2016, 2025 by Mauro Carvalho Chehab <mchehab+huawei@kernel.org>.
 
 License GPLv2: GNU GPL version 2 <https://gnu.org/licenses/gpl.html>.
 
@@ -106,7 +106,7 @@ There's a script that automatically checks for Sphinx dependencies. If it can
 recognize your distribution, it will also give a hint about the install
 command line options for your distro::
 
-	$ ./scripts/sphinx-pre-install
+	$ ./tools/docs/sphinx-pre-install
 	Checking if the needed tools for Fedora release 26 (Twenty Six) are available
 	Warning: better to also install "texlive-luatex85".
 	You should run:
@@ -116,7 +116,7 @@ command line options for your distro::
 	. sphinx_2.4.4/bin/activate
 	pip install -r Documentation/sphinx/requirements.txt
 
-	Can't build as 1 mandatory dependency is missing at ./scripts/sphinx-pre-install line 468.
+	Can't build as 1 mandatory dependency is missing at ./tools/docs/sphinx-pre-install line 468.
 
 By default, it checks all the requirements for both html and PDF, including
 the requirements for images, math expressions and LaTeX build, and assumes
 
@@ -149,7 +149,7 @@ a venv with it with, and install minimal requirements with::
 
 A more comprehensive test can be done by using:
 
-	scripts/test_doc_build.py
+	tools/docs/test_doc_build.py
 
 Such script create one Python venv per supported version,
 optionally building documentation for a range of Sphinx versions.
 
@@ -7,6 +7,7 @@ PARPORT interface documentation
 Described here are the following functions:
 
 Global functions::
+
   parport_register_driver
   parport_unregister_driver
   parport_enumerate
 
@@ -34,6 +35,7 @@ Global functions::
 Port functions (can be overridden by low-level drivers):
 
 SPP::
+
   port->ops->read_data
   port->ops->write_data
   port->ops->read_status
 
@@ -46,17 +48,20 @@ Port functions (can be overridden by low-level drivers):
   port->ops->data_reverse
 
 EPP::
+
   port->ops->epp_write_data
   port->ops->epp_read_data
   port->ops->epp_write_addr
   port->ops->epp_read_addr
 
 ECP::
+
   port->ops->ecp_write_data
   port->ops->ecp_read_data
   port->ops->ecp_write_addr
 
 Other::
+
   port->ops->nibble_read_data
   port->ops->byte_read_data
   port->ops->compat_write_data
 
@@ -14,7 +14,6 @@ the PLDM for Firmware Update standard
    file-format
-   driver-ops
 
 ==================================
 Overview of the ``pldmfw`` library
 ==================================
 
@@ -709,7 +709,7 @@ Resources
 
 USB Home Page: https://www.usb.org
 
-linux-usb Mailing List Archives: https://marc.info/?l=linux-usb
+linux-usb Mailing List Archives: https://lore.kernel.org/linux-usb
 
 USB On-the-Go Basics:
 https://www.maximintegrated.com/app-notes/index.mvp/id/1822
 
@@ -290,11 +290,11 @@ Why cpio rather than tar?
 
 This decision was made back in December, 2001. The discussion started here:
 
-  http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1538.html
+- https://lore.kernel.org/lkml/a03cke$640$1@cesium.transmeta.com/
 
 And spawned a second thread (specifically on tar vs cpio), starting here:
 
-  http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1587.html
+- https://lore.kernel.org/lkml/3C25A06D.7030408@zytor.com/
 
 The quick and dirty summary version (which is no substitute for reading
 the above threads) is:
 
@@ -310,7 +310,7 @@ the above threads) is:
    either way about the archive format, and there are alternative tools,
    such as:
 
-     http://freecode.com/projects/afio
+     https://linux.die.net/man/1/afio
 
 2) The cpio archive format chosen by the kernel is simpler and cleaner (and
    thus easier to create and parse) than any of the (literally dozens of)
 
@@ -331,12 +331,12 @@ the above threads) is:
 5) Al Viro made the decision (quote: "tar is ugly as hell and not going to be
    supported on the kernel side"):
 
-   http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1540.html
+   - https://lore.kernel.org/lkml/Pine.GSO.4.21.0112222109050.21702-100000@weyl.math.psu.edu/
 
    explained his reasoning:
 
-   - http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1550.html
-   - http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1638.html
+   - https://lore.kernel.org/lkml/Pine.GSO.4.21.0112222240530.21702-100000@weyl.math.psu.edu/
+   - https://lore.kernel.org/lkml/Pine.GSO.4.21.0112230849550.23300-100000@weyl.math.psu.edu/
 
    and, most importantly, designed and implemented the initramfs code.
 
@@ -249,7 +249,7 @@ sharing and lock acquisition rules as the regular filesystem.
 This means that scrub cannot take *any* shortcuts to save time, because doing
 so could lead to concurrency problems.
 In other words, online fsck is not a complete replacement for offline fsck, and
-a complete run of online fsck may take longer than online fsck.
+a complete run of online fsck may take longer than offline fsck.
 However, both of these limitations are acceptable tradeoffs to satisfy the
 different motivations of online fsck, which are to **minimize system downtime**
 and to **increase predictability of operation**.
 
@@ -28,8 +28,10 @@ MCAMSR and register xfer commands.
 Register sets is common across APML protocols. IOCTL is providing synchronization
 among protocols as transactions may create race condition.
 
-$ ls -al /dev/sbrmi-3c
-crw------- 1 root root 10, 53 Jul 10 11:13 /dev/sbrmi-3c
+.. code-block:: bash
+
+  $ ls -al /dev/sbrmi-3c
+  crw------- 1 root root 10, 53 Jul 10 11:13 /dev/sbrmi-3c
 
 apml_sbrmi driver registers hwmon sensors for monitoring power_cap_max,
 current power consumption and managing power_cap.
 
@@ -33,12 +33,12 @@ drivers/misc/mrvl_cn10k_dpi.c
 Driver IOCTLs
 =============
 
-:c:macro::`DPI_MPS_MRRS_CFG`
+:c:macro:`DPI_MPS_MRRS_CFG`
 ioctl that sets max payload size & max read request size parameters of
 a pem port to which DMA engines are wired.
 
 
-:c:macro::`DPI_ENGINE_CFG`
+:c:macro:`DPI_ENGINE_CFG`
 ioctl that sets DMA engine's fifo sizes & max outstanding load request
 thresholds.
 
@@ -39,28 +39,28 @@ include/uapi/linux/tps6594_pfsm.h
 Driver IOCTLs
 =============
 
-:c:macro::`PMIC_GOTO_STANDBY`
+:c:macro:`PMIC_GOTO_STANDBY`
 All device resources are powered down. The processor is off, and
 no voltage domains are energized.
 
-:c:macro::`PMIC_GOTO_LP_STANDBY`
+:c:macro:`PMIC_GOTO_LP_STANDBY`
 The digital and analog functions of the PMIC, which are not
 required to be always-on, are turned off (low-power).
 
-:c:macro::`PMIC_UPDATE_PGM`
+:c:macro:`PMIC_UPDATE_PGM`
 Triggers a firmware update.
 
-:c:macro::`PMIC_SET_ACTIVE_STATE`
+:c:macro:`PMIC_SET_ACTIVE_STATE`
 One of the operational modes.
 The PMICs are fully functional and supply power to all PDN loads.
 All voltage domains are energized in both MCU and Main processor
 sections.
 
-:c:macro::`PMIC_SET_MCU_ONLY_STATE`
+:c:macro:`PMIC_SET_MCU_ONLY_STATE`
 One of the operational modes.
 Only the power resources assigned to the MCU Safety Island are on.
 
-:c:macro::`PMIC_SET_RETENTION_STATE`
+:c:macro:`PMIC_SET_RETENTION_STATE`
 One of the operational modes.
 Depending on the triggers set, some DDR/GPIO voltage domains can
 remain energized, while all other domains are off to minimize
 
@@ -1,7 +1,10 @@
 .. SPDX-License-Identifier: GPL-2.0
 
-Introduction of Uacce
----------------------
+Uacce (Unified/User-space-access-intended Accelerator Framework)
+================================================================
+
+Introduction
+------------
 
 Uacce (Unified/User-space-access-intended Accelerator Framework) targets to
 provide Shared Virtual Addressing (SVA) between accelerators and processes.
 
@@ -92,4 +92,4 @@ helpers, which abstract this config option.
 and register state is separate, the alpha PALcode joins the two, and you
 need to switch both together).
 
-(From http://marc.info/?l=linux-kernel&m=93337278602211&w=2)
+(From https://lore.kernel.org/lkml/Pine.LNX.4.10.9907301410280.752-100000@penguin.transmeta.com/)
 
@@ -13,24 +13,19 @@ how the process works is required in order to be an effective part of it.
 The big picture
 ---------------
 
-The kernel developers use a loosely time-based release process, with a new
-major kernel release happening every two or three months. The recent
-release history looks like this:
+The Linux kernel uses a loosely time-based, rolling release development
+model. A new major kernel release (which we will call, as an example, 9.x)
+[1]_ happens every two or three months, which comes with new features,
+internal API changes, and more. A typical release can contain about 13,000
+changesets with changes to several hundred thousand lines of code. Recent
+releases, along with their dates, can be found at `Wikipedia
+<https://en.wikipedia.org/wiki/Linux_kernel_version_history>`_.
 
-	======  =================
-	5.0	March 3, 2019
-	5.1	May 5, 2019
-	5.2	July 7, 2019
-	5.3	September 15, 2019
-	5.4	November 24, 2019
-	5.5	January 6, 2020
-	======  =================
-
-Every 5.x release is a major kernel release with new features, internal
-API changes, and more. A typical release can contain about 13,000
-changesets with changes to several hundred thousand lines of code. 5.x is
-the leading edge of Linux kernel development; the kernel uses a
-rolling development model which is continually integrating major changes.
+.. [1] Strictly speaking, the Linux kernel does not use semantic versioning
+       number scheme, but rather the 9.x pair identifies major release
+       version as a whole number. For each release, x is incremented,
+       but 9 is incremented only if x is deemed large enough (e.g.
+       Linux 5.0 is released following Linux 4.20).
 
 A relatively straightforward discipline is followed with regard to the
 merging of patches for each release. At the beginning of each development
 
@@ -48,9 +43,9 @@ detail later on).
 
 The merge window lasts for approximately two weeks. At the end of this
 time, Linus Torvalds will declare that the window is closed and release the
-first of the "rc" kernels. For the kernel which is destined to be 5.6,
+first of the "rc" kernels. For the kernel which is destined to be 9.x,
 for example, the release which happens at the end of the merge window will
-be called 5.6-rc1. The -rc1 release is the signal that the time to
+be called 9.x-rc1. The -rc1 release is the signal that the time to
 merge new features has passed, and that the time to stabilize the next
 kernel has begun.
 
@@ -99,13 +94,15 @@ release is made. In the real world, this kind of perfection is hard to
 achieve; there are just too many variables in a project of this size.
 There comes a point where delaying the final release just makes the problem
 worse; the pile of changes waiting for the next merge window will grow
-larger, creating even more regressions the next time around. So most 5.x
-kernels go out with a handful of known regressions though, hopefully, none
-of them are serious.
+larger, creating even more regressions the next time around. So most kernels
+go out with a handful of known regressions, though, hopefully, none of them
+are serious.
 
 Once a stable release is made, its ongoing maintenance is passed off to the
-"stable team," currently Greg Kroah-Hartman. The stable team will release
-occasional updates to the stable release using the 5.x.y numbering scheme.
+"stable team," currently consists of Greg Kroah-Hartman and Sasha Levin. The
+stable team will release occasional updates to the stable release using the
+9.x.y numbering scheme.
 
 To be considered for an update release, a patch must (1) fix a significant
 bug, and (2) already be merged into the mainline for the next development
 kernel. Kernels will typically receive stable updates for a little more
 
@@ -76,7 +76,7 @@ Don't use commas to avoid using braces:
 	if (condition)
 		do_this(), do_that();
 
-Always uses braces for multiple statements:
+Always use braces for multiple statements:
 
 .. code-block:: c
 
@@ -592,8 +592,9 @@ Both Tested-by and Reviewed-by tags, once received on mailing list from tester
 or reviewer, should be added by author to the applicable patches when sending
 next versions. However if the patch has changed substantially in following
 version, these tags might not be applicable anymore and thus should be removed.
-Usually removal of someone's Tested-by or Reviewed-by tags should be mentioned
-in the patch changelog (after the '---' separator).
+Usually removal of someone's Acked-by, Tested-by or Reviewed-by tags should be
+mentioned in the patch changelog with an explanation (after the '---'
+separator).
 
 A Suggested-by: tag indicates that the patch idea is suggested by the person
 named and ensures credit to the person for the idea: if we diligently credit
 
@@ -14,7 +14,7 @@
     :license:    GPL Version 2, June 1991 see Linux/COPYING for details.
 
     The ``kernel-abi`` (:py:class:`KernelCmd`) directive calls the
-    scripts/get_abi.py script to parse the Kernel ABI files.
+    AbiParser class to parse the Kernel ABI files.
 
     Overview of directive's argument and options.
 
@@ -43,9 +43,9 @@ from sphinx.util.docutils import switch_source_input
 from sphinx.util import logging
 
 srctree = os.path.abspath(os.environ["srctree"])
-sys.path.insert(0, os.path.join(srctree, "scripts/lib/abi"))
+sys.path.insert(0, os.path.join(srctree, "tools/lib/python"))
 
-from abi_parser import AbiParser
+from abi.abi_parser import AbiParser
 
 __version__ = "1.0"
 
@@ -13,7 +13,7 @@
     :license:    GPL Version 2, June 1991 see Linux/COPYING for details.
 
     The ``kernel-feat`` (:py:class:`KernelFeat`) directive calls the
-    scripts/get_feat.pl script to parse the Kernel ABI files.
+    tools/docs/get_feat.pl script to parse the Kernel ABI files.
 
     Overview of directive's argument and options.
 
@@ -34,7 +34,6 @@
 import codecs
 import os
 import re
-import subprocess
 import sys
 
 from docutils import nodes, statemachine
 
@@ -42,6 +41,11 @@ from docutils.statemachine import ViewList
 from docutils.parsers.rst import directives, Directive
 from sphinx.util.docutils import switch_source_input
 
+srctree = os.path.abspath(os.environ["srctree"])
+sys.path.insert(0, os.path.join(srctree, "tools/lib/python"))
+
+from feat.parse_features import ParseFeature  # pylint: disable=C0413
+
 def ErrorString(exc):  # Shamelessly stolen from docutils
     return f'{exc.__class__.__name__}: {exc}'
 
@@ -84,18 +88,16 @@ class KernelFeat(Directive):
 
         srctree = os.path.abspath(os.environ["srctree"])
 
-        args = [
-            os.path.join(srctree, 'scripts/get_feat.pl'),
-            'rest',
-            '--enable-fname',
-            '--dir',
-            os.path.join(srctree, 'Documentation', self.arguments[0]),
-        ]
+        feature_dir = os.path.join(srctree, 'Documentation', self.arguments[0])
+
+        feat = ParseFeature(feature_dir, False, True)
+        feat.parse()
 
         if len(self.arguments) > 1:
-            args.extend(['--arch', self.arguments[1]])
-
-        lines = subprocess.check_output(args, cwd=os.path.dirname(doc.current_source)).decode('utf-8')
+            arch = self.arguments[1]
+            lines = feat.output_arch_table(arch)
+        else:
+            lines = feat.output_matrix()
 
         line_regex = re.compile(r"^\.\. FILE (\S+)$")
 
@@ -87,6 +87,8 @@ import os.path
 import re
 import sys
 
+from difflib import get_close_matches
+
 from docutils import io, nodes, statemachine
 from docutils.statemachine import ViewList
 from docutils.parsers.rst import Directive, directives
 
@@ -95,15 +97,17 @@ from docutils.parsers.rst.directives.body import CodeBlock, NumberLines
 from sphinx.util import logging
 
 srctree = os.path.abspath(os.environ["srctree"])
-sys.path.insert(0, os.path.join(srctree, "tools/docs/lib"))
+sys.path.insert(0, os.path.join(srctree, "tools/lib/python"))
 
-from parse_data_structs import ParseDataStructs
+from kdoc.parse_data_structs import ParseDataStructs
 
 __version__ = "1.0"
 logger = logging.getLogger(__name__)
 
 RE_DOMAIN_REF = re.compile(r'\\ :(ref|c:type|c:func):`([^<`]+)(?:<([^>]+)>)?`\\')
 RE_SIMPLE_REF = re.compile(r'`([^`]+)`')
+RE_LINENO_REF = re.compile(r'^\s*-\s+LINENO_(\d+):\s+(.*)')
 RE_SPLIT_DOMAIN = re.compile(r"(.*)\.(.*)")
 
 def ErrorString(exc):  # Shamelessly stolen from docutils
     return f'{exc.__class__.__name__}: {exc}'
 
@@ -212,14 +216,16 @@ class KernelInclude(Directive):
         - a TOC table containing cross references.
         """
         parser = ParseDataStructs()
-        parser.parse_file(path)
 
         if 'exception-file' in self.options:
             source_dir = os.path.dirname(os.path.abspath(
                 self.state_machine.input_lines.source(
                     self.lineno - self.state_machine.input_offset - 1)))
             exceptions_file = os.path.join(source_dir, self.options['exception-file'])
-            parser.process_exceptions(exceptions_file)
+        else:
+            exceptions_file = None
+
+        parser.parse_file(path, exceptions_file)
 
         # Store references on a symbol dict to be used at check time
         if 'warn-broken' in self.options:
 
@@ -242,23 +248,32 @@ class KernelInclude(Directive):
         # TOC output is a ReST file, not a literal. So, we can add line
         # numbers
 
-        rawtext = parser.gen_toc()
-
-        include_lines = statemachine.string2lines(rawtext, tab_width,
-                                                  convert_whitespace=True)
-
-        # Append line numbers data
-
         startline = self.options.get('start-line', None)
         endline = self.options.get('end-line', None)
 
+        relpath = os.path.relpath(path, srctree)
+
         result = ViewList()
-        if startline and startline > 0:
-            offset = startline - 1
-        else:
-            offset = 0
+        for line in parser.gen_toc().split("\n"):
+            match = RE_LINENO_REF.match(line)
+            if not match:
+                result.append(line, path)
+                continue
 
-        for ln, line in enumerate(include_lines, start=offset):
-            result.append(line, path, ln)
+            ln, ref = match.groups()
+            ln = int(ln)
+
+            # Filter line range if needed
+            if startline and (ln < startline):
+                continue
+
+            if endline and (ln > endline):
+                continue
+
+            # Sphinx numerates starting with zero, but text editors
+            # and other tools start from one
+            realln = ln + 1
+            result.append(f"- {ref}: {relpath}#{realln}", path, ln)
 
         self.state_machine.insert_input(result, path)
 
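The rewritten loop above consumes TOC lines carrying a ``LINENO_<n>:`` marker. A minimal standalone sketch of that parsing (the sample line and file name are invented for illustration):

```python
import re

# Same pattern the diff adds to kernel_include.py: a TOC entry that the
# data-structure parser tagged with a "LINENO_<n>:" marker.
RE_LINENO_REF = re.compile(r'^\s*-\s+LINENO_(\d+):\s+(.*)')

line = "- LINENO_41: :ref:`VIDIOC_DBG_S_REGISTER`"   # hypothetical sample
match = RE_LINENO_REF.match(line)
if match:
    ln, ref = match.groups()
    # Editors count lines from one while the parser counts from zero
    print(f"- {ref}: some/file.h#{int(ln) + 1}")
```

Lines without the marker fall through unchanged, which is what the `if not match` branch in the directive does.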
@@ -388,6 +403,63 @@ class KernelInclude(Directive):
 # ==============================================================================
 
 reported = set()
+DOMAIN_INFO = {}
+all_refs = {}
+
+def fill_domain_info(env):
+    """
+    Get supported reference types for each Sphinx domain and C namespaces
+    """
+    if DOMAIN_INFO:
+        return
+
+    for domain_name, domain_instance in env.domains.items():
+        try:
+            object_types = list(domain_instance.object_types.keys())
+            DOMAIN_INFO[domain_name] = object_types
+        except AttributeError:
+            # Ignore domains that we can't retrieve object types, if any
+            pass
+
+    for domain in DOMAIN_INFO.keys():
+        domain_obj = env.get_domain(domain)
+        for name, dispname, objtype, docname, anchor, priority in domain_obj.get_objects():
+            ref_name = name.lower()
+
+            if domain == "c":
+                if '.' in ref_name:
+                    ref_name = ref_name.split(".")[-1]
+
+            if not ref_name in all_refs:
+                all_refs[ref_name] = []
+
+            all_refs[ref_name].append(f"\t{domain}:{objtype}:`{name}` (from {docname})")
+
+def get_suggestions(app, env, node,
+                    original_target, original_domain, original_reftype):
+    """Check if target exists in the other domain or with different reftypes."""
+    original_target = original_target.lower()
+
+    # Remove namespace if present
+    if original_domain == "c":
+        if '.' in original_target:
+            original_target = original_target.split(".")[-1]
+
+    suggestions = []
+
+    # If name exists, propose exact name match on different domains
+    if original_target in all_refs:
+        return all_refs[original_target]
+
+    # If not found, get a close match, using difflib.
+    # Such method is based on Ratcliff-Obershelp Algorithm, which seeks
+    # for a close match within a certain distance. We're using the defaults
+    # here, e.g. cutoff=0.6, proposing 3 alternatives
+    matches = get_close_matches(original_target, all_refs.keys())
+    for match in matches:
+        suggestions += all_refs[match]
+
+    return suggestions
+
 def check_missing_refs(app, env, node, contnode):
     """Check broken refs for the files it creates xrefs"""
 
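The fallback in ``get_suggestions()`` leans on ``difflib.get_close_matches``, which uses the Ratcliff-Obershelp similarity with defaults of three proposals at a 0.6 cutoff. A standalone sketch with an invented reference list:

```python
from difflib import get_close_matches

# Stand-in for all_refs.keys(); these names are invented for illustration.
ref_names = ["v4l2_buffer", "v4l2_format", "media_entity", "vidioc_querycap"]

# A misspelled cross-reference target still finds its closest known name;
# defaults are n=3 proposals with a 0.6 similarity cutoff.
matches = get_close_matches("v4l2_bufer", ref_names)
print(matches[0])  # the closest match is "v4l2_buffer"
```

Targets with no sufficiently similar name simply yield an empty list, so the warning is emitted without alternatives.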
@@ -404,11 +476,13 @@ def check_missing_refs(app, env, node, contnode):
     if node.source not in xref_files:
         return None
 
+    fill_domain_info(env)
+
     target = node.get('reftarget', '')
     domain = node.get('refdomain', 'std')
     reftype = node.get('reftype', '')
 
-    msg = f"can't link to: {domain}:{reftype}:: {target}"
+    msg = f"Invalid xref: {domain}:{reftype}:`{target}`"
 
     # Don't duplicate warnings
     data = (node.source, msg)
 
@@ -416,6 +490,10 @@ def check_missing_refs(app, env, node, contnode):
         return None
     reported.add(data)
 
+    suggestions = get_suggestions(app, env, node, target, domain, reftype)
+    if suggestions:
+        msg += ". Possible alternatives:\n" + '\n'.join(suggestions)
+
     logger.warning(msg, location=node, type='ref', subtype='missing')
 
     return None
 
@@ -220,7 +220,7 @@
 	    If you want them, please install non-variable ``Noto Sans CJK''
 	    font families along with the texlive-xecjk package by following
 	    instructions from
-	    \sphinxcode{./scripts/sphinx-pre-install}.
+	    \sphinxcode{./tools/docs/sphinx-pre-install}.
 	    Having optional non-variable ``Noto Serif CJK'' font families will
 	    improve the looks of those translations.
 	\end{sphinxadmonition}}
 
@@ -42,10 +42,10 @@ from sphinx.util import logging
 from pprint import pformat
 
 srctree = os.path.abspath(os.environ["srctree"])
-sys.path.insert(0, os.path.join(srctree, "scripts/lib/kdoc"))
+sys.path.insert(0, os.path.join(srctree, "tools/lib/python"))
 
-from kdoc_files import KernelFiles
-from kdoc_output import RestFormat
+from kdoc.kdoc_files import KernelFiles
+from kdoc.kdoc_output import RestFormat
 
 __version__ = '1.0'
 kfiles = None
 
@@ -1,60 +0,0 @@
-# -*- coding: utf-8; mode: python -*-
-# SPDX-License-Identifier: GPL-2.0
-# pylint: disable=R0903, C0330, R0914, R0912, E0401
-
-import os
-import sys
-from sphinx.util.osutil import fs_encoding
-
-# ------------------------------------------------------------------------------
-def loadConfig(namespace):
-# ------------------------------------------------------------------------------
-
-    """Load an additional configuration file into *namespace*.
-
-    The name of the configuration file is taken from the environment
-    ``SPHINX_CONF``. The external configuration file extends (or overwrites) the
-    configuration values from the origin ``conf.py``. With this you are able to
-    maintain *build themes*. """
-
-    config_file = os.environ.get("SPHINX_CONF", None)
-    if (config_file is not None
-        and os.path.normpath(namespace["__file__"]) != os.path.normpath(config_file) ):
-        config_file = os.path.abspath(config_file)
-
-        # Let's avoid one conf.py file just due to latex_documents
-        start = config_file.find('Documentation/')
-        if start >= 0:
-            start = config_file.find('/', start + 1)
-
-        end = config_file.rfind('/')
-        if start >= 0 and end > 0:
-            dir = config_file[start + 1:end]
-
-            print("source directory: %s" % dir)
-            new_latex_docs = []
-            latex_documents = namespace['latex_documents']
-
-            for l in latex_documents:
-                if l[0].find(dir + '/') == 0:
-                    has = True
-                    fn = l[0][len(dir) + 1:]
-                    new_latex_docs.append((fn, l[1], l[2], l[3], l[4]))
-                    break
-
-            namespace['latex_documents'] = new_latex_docs
-
-        # If there is an extra conf.py file, load it
-        if os.path.isfile(config_file):
-            sys.stdout.write("load additional sphinx-config: %s\n" % config_file)
-            config = namespace.copy()
-            config['__file__'] = config_file
-            with open(config_file, 'rb') as f:
-                code = compile(f.read(), fs_encoding, 'exec')
-                exec(code, config)
-            del config['__file__']
-            namespace.update(config)
-        else:
-            config = namespace.copy()
-            config['tags'].add("subproject")
-            namespace.update(config)
 
@@ -1,33 +0,0 @@
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0+
-#
-# Figure out if we should follow a specific parallelism from the make
-# environment (as exported by scripts/jobserver-exec), or fall back to
-# the "auto" parallelism when "-jN" is not specified at the top-level
-# "make" invocation.
-
-sphinx="$1"
-shift || true
-
-parallel="$PARALLELISM"
-if [ -z "$parallel" ] ; then
-	# If no parallelism is specified at the top-level make, then
-	# fall back to the expected "-jauto" mode that the "htmldocs"
-	# target has had.
-	auto=$(perl -e 'open IN,"'"$sphinx"' --version 2>&1 |";
-			while (<IN>) {
-				if (m/([\d\.]+)/) {
-					print "auto" if ($1 >= "1.7")
-				}
-			}
-			close IN')
-	if [ -n "$auto" ] ; then
-		parallel="$auto"
-	fi
-fi
-# Only if some parallelism has been determined do we add the -jN option.
-if [ -n "$parallel" ] ; then
-	parallel="-j$parallel"
-fi
-
-exec "$sphinx" $parallel "$@"
 
@@ -1,11 +1,15 @@
 **-c**, **--cpus** *cpu-list*
 
-        Set the osnoise tracer to run the sample threads in the cpu-list.
+        Set the |tool| tracer to run the sample threads in the cpu-list.
+
+        By default, the |tool| tracer runs the sample threads on all CPUs.
 
 **-H**, **--house-keeping** *cpu-list*
 
         Run rtla control threads only on the given cpu-list.
 
+        If omitted, rtla will attempt to auto-migrate its main thread to any CPU that is not running any workload threads.
+
 **-d**, **--duration** *time[s|m|h|d]*
 
         Set the duration of the session.
 
@@ -35,17 +39,21 @@

**-P**, **--priority** *o:prio|r:prio|f:prio|d:runtime:period*

-Set scheduling parameters to the osnoise tracer threads, the format to set the priority are:
+Set scheduling parameters to the |tool| tracer threads, the format to set the priority are:

- *o:prio* - use SCHED_OTHER with *prio*;
- *r:prio* - use SCHED_RR with *prio*;
- *f:prio* - use SCHED_FIFO with *prio*;
- *d:runtime[us|ms|s]:period[us|ms|s]* - use SCHED_DEADLINE with *runtime* and *period* in nanoseconds.

+If not set, tracer threads keep their default priority. For rtla user threads, it is set to SCHED_FIFO with priority 95. For kernel threads, see *osnoise* and *timerlat* tracer documentation for the running kernel version.
+
**-C**, **--cgroup**\[*=cgroup*]

Set a *cgroup* to the tracer's threads. If the **-C** option is passed without arguments, the tracer's thread will inherit **rtla**'s *cgroup*. Otherwise, the threads will be placed on the *cgroup* passed to the option.

+If not set, the behavior differs between workload types. User workloads created by rtla will inherit rtla's cgroup. Kernel workloads are assigned the root cgroup.
+
**--warm-up** *s*

After starting the workload, let it run for *s* seconds before starting collecting the data, allowing the system to warm-up. Statistical data generated during warm-up is discarded.
@@ -53,6 +61,8 @@
**--trace-buffer-size** *kB*

Set the per-cpu trace buffer size in kB for the tracing output.

+If not set, the default tracefs buffer size is used.
+
**--on-threshold** *action*

Defines an action to be executed when tracing is stopped on a latency threshold
@@ -67,7 +77,7 @@
- *trace[,file=<filename>]*

  Saves trace output, optionally taking a filename. Alternative to -t/--trace.
-  Note that nlike -t/--trace, specifying this multiple times will result in
+  Note that unlike -t/--trace, specifying this multiple times will result in
  the trace being saved multiple times.

- *signal,num=<sig>,pid=<pid>*
@@ -13,7 +13,7 @@
Set the automatic trace mode. This mode sets some commonly used options
while debugging the system. It is equivalent to use **-T** *us* **-s** *us*
**-t**. By default, *timerlat* tracer uses FIFO:95 for *timerlat* threads,
-thus equilavent to **-P** *f:95*.
+thus equivalent to **-P** *f:95*.

**-p**, **--period** *us*
@@ -56,7 +56,7 @@
**-u**, **--user-threads**

Set timerlat to run without a workload, and then dispatches user-space workloads
-to wait on the timerlat_fd. Once the workload is awakes, it goes to sleep again
+to wait on the timerlat_fd. Once the workload is awakened, it goes to sleep again
adding so the measurement for the kernel-to-user and user-to-kernel to the tracer
output. **--user-threads** will be used unless the user specify **-k**.
@@ -29,11 +29,11 @@ collection of the tracer output.

OPTIONS
=======
-.. include:: common_osnoise_options.rst
+.. include:: common_osnoise_options.txt

-.. include:: common_top_options.rst
+.. include:: common_top_options.txt

-.. include:: common_options.rst
+.. include:: common_options.txt

EXAMPLE
=======

@@ -106,4 +106,4 @@ AUTHOR
======
Written by Daniel Bristot de Oliveira <bristot@kernel.org>

-.. include:: common_appendix.rst
+.. include:: common_appendix.txt
@@ -15,7 +15,7 @@ SYNOPSIS

DESCRIPTION
===========
-.. include:: common_osnoise_description.rst
+.. include:: common_osnoise_description.txt

The **rtla osnoise hist** tool collects all **osnoise:sample_threshold**
occurrence in a histogram, displaying the results in a user-friendly way.

@@ -24,11 +24,11 @@ collection of the tracer output.

OPTIONS
=======
-.. include:: common_osnoise_options.rst
+.. include:: common_osnoise_options.txt

-.. include:: common_hist_options.rst
+.. include:: common_hist_options.txt

-.. include:: common_options.rst
+.. include:: common_options.txt

EXAMPLE
=======

@@ -65,4 +65,4 @@ AUTHOR
======
Written by Daniel Bristot de Oliveira <bristot@kernel.org>

-.. include:: common_appendix.rst
+.. include:: common_appendix.txt
@@ -15,7 +15,7 @@ SYNOPSIS

DESCRIPTION
===========
-.. include:: common_osnoise_description.rst
+.. include:: common_osnoise_description.txt

**rtla osnoise top** collects the periodic summary from the *osnoise* tracer,
including the counters of the occurrence of the interference source,

@@ -26,11 +26,11 @@ collection of the tracer output.

OPTIONS
=======
-.. include:: common_osnoise_options.rst
+.. include:: common_osnoise_options.txt

-.. include:: common_top_options.rst
+.. include:: common_top_options.txt

-.. include:: common_options.rst
+.. include:: common_options.txt

EXAMPLE
=======

@@ -60,4 +60,4 @@ AUTHOR
======
Written by Daniel Bristot de Oliveira <bristot@kernel.org>

-.. include:: common_appendix.rst
+.. include:: common_appendix.txt
@@ -14,7 +14,7 @@ SYNOPSIS
DESCRIPTION
===========

-.. include:: common_osnoise_description.rst
+.. include:: common_osnoise_description.txt

The *osnoise* tracer outputs information in two ways. It periodically prints
a summary of the noise of the operating system, including the counters of

@@ -56,4 +56,4 @@ AUTHOR
======
Written by Daniel Bristot de Oliveira <bristot@kernel.org>

-.. include:: common_appendix.rst
+.. include:: common_appendix.txt
@@ -16,7 +16,7 @@ SYNOPSIS
DESCRIPTION
===========

-.. include:: common_timerlat_description.rst
+.. include:: common_timerlat_description.txt

The **rtla timerlat hist** displays a histogram of each tracer event
occurrence. This tool uses the periodic information, and the

@@ -25,13 +25,13 @@ occurrence. This tool uses the periodic information, and the
OPTIONS
=======

-.. include:: common_timerlat_options.rst
+.. include:: common_timerlat_options.txt

-.. include:: common_hist_options.rst
+.. include:: common_hist_options.txt

-.. include:: common_options.rst
+.. include:: common_options.txt

-.. include:: common_timerlat_aa.rst
+.. include:: common_timerlat_aa.txt

EXAMPLE
=======

@@ -110,4 +110,4 @@ AUTHOR
======
Written by Daniel Bristot de Oliveira <bristot@kernel.org>

-.. include:: common_appendix.rst
+.. include:: common_appendix.txt
@@ -16,23 +16,23 @@ SYNOPSIS
DESCRIPTION
===========

-.. include:: common_timerlat_description.rst
+.. include:: common_timerlat_description.txt

The **rtla timerlat top** displays a summary of the periodic output
from the *timerlat* tracer. It also provides information for each
operating system noise via the **osnoise:** tracepoints that can be
-seem with the option **-T**.
+seen with the option **-T**.

OPTIONS
=======

-.. include:: common_timerlat_options.rst
+.. include:: common_timerlat_options.txt

-.. include:: common_top_options.rst
+.. include:: common_top_options.txt

-.. include:: common_options.rst
+.. include:: common_options.txt

-.. include:: common_timerlat_aa.rst
+.. include:: common_timerlat_aa.txt

**--aa-only** *us*

@@ -133,4 +133,4 @@ AUTHOR
------
Written by Daniel Bristot de Oliveira <bristot@kernel.org>

-.. include:: common_appendix.rst
+.. include:: common_appendix.txt
@@ -14,7 +14,7 @@ SYNOPSIS
DESCRIPTION
===========

-.. include:: common_timerlat_description.rst
+.. include:: common_timerlat_description.txt

The **rtla timerlat top** mode displays a summary of the periodic output
from the *timerlat* tracer. The **rtla timerlat hist** mode displays

@@ -51,4 +51,4 @@ AUTHOR
======
Written by Daniel Bristot de Oliveira <bristot@kernel.org>

-.. include:: common_appendix.rst
+.. include:: common_appendix.txt
@@ -45,4 +45,4 @@ AUTHOR
======
Daniel Bristot de Oliveira <bristot@kernel.org>

-.. include:: common_appendix.rst
+.. include:: common_appendix.txt
@@ -43,12 +43,12 @@ It is possible to follow the trace by reading the trace file::
  <...>-868 [001] .... 54.030347: #2 context thread timer_latency 4351 ns


-The tracer creates a per-cpu kernel thread with real-time priority that
-prints two lines at every activation. The first is the *timer latency*
-observed at the *hardirq* context before the activation of the thread.
-The second is the *timer latency* observed by the thread. The ACTIVATION
-ID field serves to relate the *irq* execution to its respective *thread*
-execution.
+The tracer creates a per-cpu kernel thread with real-time priority
+SCHED_FIFO:95 that prints two lines at every activation. The first is
+the *timer latency* observed at the *hardirq* context before the activation
+of the thread. The second is the *timer latency* observed by the thread.
+The ACTIVATION ID field serves to relate the *irq* execution to its
+respective *thread* execution.

The *irq*/*thread* splitting is important to clarify in which context
the unexpected high value is coming from. The *irq* context can be
@@ -13,28 +13,28 @@ dello spazio utente ha ulteriori vantaggi: Sphinx genererà dei messaggi
d'avviso se un simbolo non viene trovato nella documentazione. Questo permette
di mantenere allineate la documentazione della uAPI (API spazio utente)
con le modifiche del kernel.
-Il programma :ref:`parse_headers.pl <it_parse_headers>` genera questi riferimenti.
+Il programma :ref:`parse_headers.py <it_parse_headers>` genera questi riferimenti.
Esso dev'essere invocato attraverso un Makefile, mentre si genera la
documentazione. Per avere un esempio su come utilizzarlo all'interno del kernel
consultate ``Documentation/userspace-api/media/Makefile``.

.. _it_parse_headers:

-parse_headers.pl
+parse_headers.py
^^^^^^^^^^^^^^^^

NOME
****

-parse_headers.pl - analizza i file C al fine di identificare funzioni,
+parse_headers.py - analizza i file C al fine di identificare funzioni,
strutture, enumerati e definizioni, e creare riferimenti per Sphinx

SINTASSI
********

-\ **parse_headers.pl**\ [<options>] <C_FILE> <OUT_FILE> [<EXCEPTIONS_FILE>]
+\ **parse_headers.py**\ [<options>] <C_FILE> <OUT_FILE> [<EXCEPTIONS_FILE>]

Dove <options> può essere: --debug, --usage o --help.
@@ -109,7 +109,7 @@ Sphinx. Se lo script riesce a riconoscere la vostra distribuzione, allora
sarà in grado di darvi dei suggerimenti su come procedere per completare
l'installazione::

-  $ ./scripts/sphinx-pre-install
+  $ ./tools/docs/sphinx-pre-install
  Checking if the needed tools for Fedora release 26 (Twenty Six) are available
  Warning: better to also install "texlive-luatex85".
  You should run:

@@ -119,7 +119,7 @@ l'installazione::
  . sphinx_2.4.4/bin/activate
  pip install -r Documentation/sphinx/requirements.txt

-  Can't build as 1 mandatory dependency is missing at ./scripts/sphinx-pre-install line 468.
+  Can't build as 1 mandatory dependency is missing at ./tools/docs/sphinx-pre-install line 468.

L'impostazione predefinita prevede il controllo dei requisiti per la generazione
di documenti html e PDF, includendo anche il supporto per le immagini, le
@@ -132,6 +132,25 @@ http://savannah.nongnu.org/projects/quilt
  platform_set_drvdata(), but left the variable "dev" unused,
  delete it.

+特定のコミットで導入された不具合を修正する場合(例えば ``git bisect`` で原因となった
+コミットを特定したときなど)は、コミットの SHA-1 の先頭12文字と1行の要約を添えた
+「Fixes:」タグを付けてください。この行は75文字を超えても構いませんが、途中で
+改行せず、必ず1行で記述してください。
+例:
+  Fixes: 54a4f0239f2e ("KVM: MMU: make kvm_mmu_zap_page() return the number of pages it actually freed")
+
+以下の git の設定を使うと、git log や git show で上記形式を出力するための
+専用の出力形式を追加できます::
+
+  [core]
+          abbrev = 12
+  [pretty]
+          fixes = Fixes: %h (\"%s\")
+
+使用例::
+
+  $ git log -1 --pretty=fixes 54a4f0239f2e
+  Fixes: 54a4f0239f2e ("KVM: MMU: make kvm_mmu_zap_page() return the number of pages it actually freed")
+
3) パッチの分割

@@ -409,7 +428,7 @@ Acked-by: が必ずしもパッチ全体の承認を示しているわけでは
このタグはパッチに関心があると思われる人達がそのパッチの議論に含まれていたこと
を明文化します。

-14) Reported-by:, Tested-by:, Reviewed-by: および Suggested-by: の利用
+14) Reported-by:, Tested-by:, Reviewed-by:, Suggested-by: および Fixes: の利用

他の誰かによって報告された問題を修正するパッチであれば、問題報告者という寄与を
クレジットするために、Reported-by: タグを追加することを検討してください。

@@ -465,6 +484,13 @@ Suggested-by: タグは、パッチのアイデアがその人からの提案に
クレジットしていけば、望むらくはその人たちが将来別の機会に再度力を貸す気に
なってくれるかもしれません。

+Fixes: タグは、そのパッチが以前のコミットにあった問題を修正することを示します。
+これは、バグがどこで発生したかを特定しやすくし、バグ修正のレビューに役立ちます。
+また、このタグはstableカーネルチームが、あなたの修正をどのstableカーネル
+バージョンに適用すべきか判断する手助けにもなります。パッチによって修正された
+バグを示すには、この方法が推奨されます。前述の、「2) パッチに対する説明」の
+セクションを参照してください。
+
15) 標準的なパッチのフォーマット

標準的なパッチのサブジェクトは以下のとおりです。
@@ -288,4 +288,4 @@ Documentation/translations/zh_CN/admin-guide/bug-hunting.rst 。

更多用GDB调试内核的信息,请参阅:
Documentation/translations/zh_CN/dev-tools/gdb-kernel-debugging.rst
-和 Documentation/dev-tools/kgdb.rst 。
+和 Documentation/process/debugging/kgdb.rst 。
@@ -0,0 +1,130 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst

:Original: Documentation/block/blk-mq.rst

:翻译:

 柯子杰 kezijie <kezijie@leap-io-kernel.com>

:校译:


================================================
多队列块设备 I/O 排队机制 (blk-mq)
================================================

多队列块设备 I/O 排队机制提供了一组 API,使高速存储设备能够同时在多个队列中
处理并发的 I/O 请求并将其提交到块设备,从而实现极高的每秒输入/输出操作次数
(IOPS),充分发挥现代存储设备的并行能力。

介绍
====

背景
----

磁盘从 Linux 内核开发初期就已成为事实上的标准。块 I/O 子系统的目标是尽可能
为此类设备提供最佳性能,因为它们在进行随机访问时代价极高,性能瓶颈主要在机械
运动部件上,其速度远低于存储栈中其他任何层。其中一个软件优化例子是根据硬盘磁
头当前的位置重新排序读/写请求。

然而,随着固态硬盘和非易失性存储的发展,它们没有机械部件,也不存在随机访问代
码,并能够进行高速并行访问,存储栈的瓶颈从存储设备转移到了操作系统。为了充分
利用这些设备设计中的并行性,引入了多队列机制。

原来的设计只有一个队列来存储块设备 I/O 请求,并且只使用一个锁。由于缓存中的
脏数据和多处理器共享单锁的瓶颈,这种设计在 SMP 系统中扩展性不佳。当不同进程
(或同一进程在不同 CPU 上)同时执行块设备 I/O 时,该单队列模型还会出现严重
的拥塞问题。为了解决这些问题,blk-mq API 引入了多个队列,每个队列在本地 CPU
上拥有独立的入口点,从而消除了对全局锁的需求。关于其具体工作机制的更深入说明,
请参见下一节( `工作原理`_ )。

工作原理
--------

当用户空间执行对块设备的 I/O(例如读写文件)时,blk-mq 便会介入:它将存储和
管理发送到块设备的 I/O 请求,充当用户空间(文件系统,如果存在的话)与块设备驱
动之间的中间层。

blk-mq 由两组队列组成:软件暂存队列和硬件派发队列。当请求到达块层时,它会尝
试最短路径:直接发送到硬件队列。然而,有两种情况下可能不会这样做:如果该层有
IO 调度器或者是希望合并请求。在这两种情况下,请求将被发送到软件队列。

随后,在软件队列中的请求被处理后,请求会被放置到硬件队列。硬件队列是第二阶段
的队列,硬件可以直接访问并处理这些请求。然而,如果硬件没有足够的资源来接受更
多请求,blk-mq 会将请求放置在临时队列中,待硬件资源充足时再发送。

软件暂存队列
~~~~~~~~~~~~

在这些请求未直接发送到驱动时,块设备 I/O 子系统会将请求添加到软件暂存队列中
(由 struct blk_mq_ctx 表示)。一个请求可能包含一个或多个 BIO。它们通过 struct bio
数据结构到达块层。块层随后会基于这些 BIO 构建新的结构体 struct request,用于
与设备驱动通信。每个队列都有自己的锁,队列数量由每个 CPU 和每个 node 为基础
来决定。

暂存队列可用于合并相邻扇区的请求。例如,对扇区3-6、6-7、7-9的请求可以合并
为对扇区3-9的一个请求。即便 SSD 或 NVM 的随机访问和顺序访问响应时间相同,
合并顺序访问的请求仍可减少单独请求的数量。这种合并请求的技术称为 plugging。

此外,I/O 调度器还可以对请求进行重新排序以确保系统资源的公平性(例如防止某
个应用出现“饥饿”现象)或是提高 I/O 性能。

I/O 调度器
^^^^^^^^^^

块层实现了多种调度器,每种调度器都遵循一定启发式规则以提高 I/O 性能。它们是
“可插拔”的(plug and play),可在运行时通过 sysfs 选择。你可以在这里阅读更
多关于 Linux IO 调度器知识 `here
<https://www.kernel.org/doc/html/latest/block/index.html>`_。调度只发
生在同一队列内的请求之间,因此无法合并不同队列的请求,否则会造成缓存冲突并需
要为每个队列加锁。调度后,请求即可发送到硬件。可能选择的调度器之一是 NONE 调
度器,这是最直接的调度器:它只将请求放到进程所在的软件队列,不进行重新排序。
当设备开始处理硬件队列中的请求时(运行硬件队列),映射到该硬件队列的软件队列
会按映射顺序依次清空。

硬件派发队列
~~~~~~~~~~~~~

硬件队列(由 struct blk_mq_hw_ctx 表示)是设备驱动用来映射设备提交队列
(或设备 DMA 环缓存)的结构体,它是块层提交路径在底层设备驱动接管请求之前的
最后一个阶段。运行此队列时,块层会从相关软件队列中取出请求,并尝试派发到硬件。

如果请求无法直接发送到硬件,它们会被加入到请求的链表(``hctx->dispatch``) 中。
随后,当块层下次运行该队列时,会优先发送位于 ``dispatch`` 链表中的请求,
以确保那些最早准备好发送的请求能够得到公平调度。硬件队列的数量取决于硬件及
其设备驱动所支持的硬件上下文数,但不会超过系统的CPU核心数。在这个阶段不
会发生重新排序,每个软件队列都有一组硬件队列来用于提交请求。

.. note::

   块层和设备协议都不保证请求完成顺序。此问题需由更高层处理,例如文件系统。

基于标识的完成机制
~~~~~~~~~~~~~~~~~~~

为了指示哪一个请求已经完成,每个请求都会被分配一个整数标识,该标识的取值范围
是从0到分发队列的大小。这个标识由块层生成,并在之后由设备驱动使用,从而避
免了为每个请求再单独创建冗余的标识符。当请求在驱动中完成时,驱动会将该标识返
回给块层,以通知该请求已完成。这样,块层就无需再进行线性搜索来确定是哪一个
I/O 请求完成了。

更多阅读
--------

- `Linux 块 I/O:多队列 SSD 并发访问简介 <http://kernel.dk/blk-mq.pdf>`_

- `NOOP 调度器 <https://en.wikipedia.org/wiki/Noop_scheduler>`_

- `Null 块设备驱动程序 <https://www.kernel.org/doc/html/latest/block/null_blk.html>`_

源代码
======

该API在以下内核代码中:

include/linux/blk-mq.h

block/blk-mq.c
@@ -0,0 +1,192 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst

:Original: Documentation/block/data-integrity.rst

:翻译:

 柯子杰 kezijie <kezijie@leap-io-kernel.com>

:校译:

==========
数据完整性
==========

1. 引言
=======

现代文件系统对数据和元数据都进行了校验和保护以防止数据损坏。然而,这种损坏的
检测是在读取时才进行,这可能发生在数据写入数月之后。到那时,应用程序尝试写入
的原始数据很可能已经丢失。

解决方案是确保磁盘实际存储的内容就是应用程序想存储的。SCSI 协议族(如 SBC
数据完整性字段、SCC 保护提案)以及 SATA/T13(外部路径保护)最近新增的功能,
通过在 I/O 中附加完整性元数据的方式,试图解决这一问题。完整性元数据(在
SCSI 术语中称为保护信息)包括每个扇区的校验和,以及一个递增计数器,用于确保
各扇区按正确顺序被写入盘。在某些保护方案中,还能保证 I/O 写入磁盘的正确位置。

当前的存储控制器和设备实现了多种保护措施,例如校验和和数据清理。但这些技术通
常只在各自的独立域内工作,或最多仅在 I/O 路径的相邻节点之间发挥作用。DIF 及
其它数据完整性拓展有意思的点在于保护格式定义明确,I/O 路径上的每个节点都可以
验证 I/O 的完整性,如检测到损坏可直接拒绝。这不仅可以防止数据损坏,还能够隔
离故障点。

2. 数据完整性拓展
=================

如上所述,这些协议扩展只保护控制器与存储设备之间的路径。然而,许多控制器实际
上允许操作系统与完整性元数据(IMD)交互。我们一直与多家 FC/SAS HBA 厂商合作,
使保护信息能够在其控制器与操作系统之间传输。

SCSI 数据完整性字段通过在每个扇区后附加8字节的保护信息来实现。数据 + 完整
性元数据存储在磁盘的520字节扇区中。数据 + IMD 在控制器与目标设备之间传输
时是交错组合在一起的。T13 提案的方式类似。

由于操作系统处理520字节(甚至 4104 字节)扇区非常不便,我们联系了多家 HBA
厂商,并鼓励它们分离数据与完整性元数据的 scatter-gather lists。

控制器在写入时会将数据缓冲区和完整性元数据缓冲区的数据交错在一起,并在读取时
会拆分它们。这样,Linux 就能直接通过 DMA 将数据缓冲区传输到主机内存或从主机
内存读取,而无需修改页缓存。

此外,SCSI 与 SATA 规范要求的16位 CRC 校验在软件中计算代价较高。基准测试发
现,计算此校验在高负载情形下显著影响系统性能。一些控制器允许在操作系统接口处
使用轻量级校验。例如 Emulex 支持 TCP/IP 校验。操作系统提供的 IP 校验在写入
时会转换为16位 CRC,读取时则相反。这允许 Linux 或应用程序以极低的开销生成
完整性元数据(与软件 RAID5 相当)。

IP 校验在检测位错误方面比 CRC 弱,但关键在于数据缓冲区与完整性元数据缓冲区
的分离。只有这两个不同的缓冲区匹配,I/O 才能完成。

数据与完整性元数据缓冲区的分离以及校验选择被称为数据完整性扩展。由于这些扩展
超出了协议主体(T10、T13)的范围,Oracle 及其合作伙伴正尝试在存储网络行业协
会内对其进行标准化。

3. 内核变更
===========

Linux 中的数据完整性框架允许将保护信息固定到 I/O 上,并在支持该功能的控制器
之间发送和接收。

SCSI 和 SATA 中完整性扩展的优势在于,它们能够保护从应用程序到存储设备的整个
路径。然而,这同时也是最大的劣势。这意味着保护信息必须采用磁盘可以理解的格式。

通常,Linux/POSIX 应用程序并不关心所访问存储设备的具体细节。虚拟文件系统层
和块层会让硬件扇区大小和传输协议对应用程序完全透明。

然而,在准备发送到磁盘的保护信息时,就需要这种细节。因此,端到端保护方案的概
念实际上违反了层次结构。应用程序完全不应该知道它访问的是 SCSI 还是 SATA 磁盘。

Linux 中实现的数据完整性支持尝试将这些细节对应用程序隐藏。就应用程序(以及在
某种程度上内核)而言,完整性元数据是附加在 I/O 上的不透明信息。

当前实现允许块层自动为任何 I/O 生成保护信息。最终目标是将用户数据的完整性元
数据计算移至用户空间。内核中产生的元数据和其他 I/O 仍将使用自动生成接口。

一些存储设备允许为每个硬件扇区附加一个16位的标识值。这个标识空间的所有者是
块设备的所有者,也就是在多数情况下由文件系统掌控。文件系统可以利用这额外空间
按需为扇区附加标识。由于标识空间有限,块接口允许通过交错方式对更大的数据块标
识。这样,8*16位的信息可以附加到典型的 4KB 文件系统块上。

这也意味着诸如 fsck 和 mkfs 等应用程序需要能够从用户空间访问并操作这些标记。
为此,正在开发一个透传接口。

4. 块层实现细节
===============

4.1 Bio
--------

当启用 CONFIG_BLK_DEV_INTEGRITY 时,数据完整性补丁会在 struct bio 中添加
一个新字段。调用 bio_integrity(bio) 会返回一个指向 struct bip 的指针,该
结构体包含了该 bio 的完整性负载。本质上,bip 是一个精简版的 struct bio,其
中包含一个 bio_vec,用于保存完整性元数据以及所需的维护信息(bvec 池、向量计
数等)。

内核子系统可以通过调用 bio_integrity_alloc(bio) 来为某个 bio 启用数据完整
性保护。该函数会分配并附加一个 bip 到该 bio 上。

随后使用 bio_integrity_add_page() 将包含完整性元数据的单独页面附加到该 bio。

调用 bio_free() 会自动释放bip。

4.2 块设备
-----------

块设备可以在 queue_limits 结构中的 integrity 子结构中设置完整性信息。

对于分层块设备,需要选择一个适用于所有子设备的完整性配置文件。可以使用
queue_limits_stack_integrity() 来协助完成该操作。目前,DM 和 MD linear、
RAID0 和 RAID1 已受支持。而RAID4/5/6因涉及应用标签仍需额外的开发工作。

5.0 块层完整性API
==================

5.1 普通文件系统
-----------------

普通文件系统并不知道其下层块设备具备发送或接收完整性元数据的能力。
在执行写操作时,块层会在调用 submit_bio() 时自动生成完整性元数据。
在执行读操作时,I/O 完成后会触发完整性验证。

IMD 的生成与验证行为可以通过以下开关控制::

  /sys/block/<bdev>/integrity/write_generate

and::

  /sys/block/<bdev>/integrity/read_verify

flags.

5.2 具备完整性感知的文件系统
----------------------------

具备完整性感知能力的文件系统可以在准备 I/O 时附加完整性元数据,
并且如果底层块设备支持应用标签空间,也可以加以利用。


`bool bio_integrity_prep(bio);`

要为写操作生成完整性元数据或为读操作设置缓冲区,文件系统必须调用
bio_integrity_prep(bio)。

在调用此函数之前,必须先设置好 bio 的数据方向和起始扇区,并确
保该 bio 已经添加完所有的数据页。调用者需要自行保证,在 I/O 进行
期间 bio 不会被修改。如果由于某种原因准备失败,则应当以错误状态
完成该 bio。

5.3 传递已有的完整性元数据
--------------------------

能够自行生成完整性元数据或可以从用户空间传输完整性元数据的文件系统,
可以使用如下接口:


`struct bip * bio_integrity_alloc(bio, gfp_mask, nr_pages);`

为 bio 分配完整性负载并挂载到 bio 上。nr_pages 表示需要在
integrity bio_vec list 中存储多少页保护数据(类似 bio_alloc)。

完整性负载将在 bio_free() 被调用时释放。


`int bio_integrity_add_page(bio, page, len, offset);`

将包含完整性元数据的一页附加到已有的 bio 上。该 bio 必须已有 bip,
即必须先调用 bio_integrity_alloc()。对于写操作,页中的完整
性元数据必须采用目标设备可识别的格式,但有一个例外,当请求在 I/O 栈
中传递时,扇区号会被重新映射。这意味着通过此接口添加的页在 I/O 过程
中可能会被修改!完整性元数据中的第一个引用标签必须等于 bip->bip_sector。

只要 bip bio_vec array(nr_pages)有空间,就可以继续通过
bio_integrity_add_page()添加页。

当读操作完成后,附加的页将包含从存储设备接收到的完整性元数据。
接收方需要处理这些元数据,并在操作完成时验证数据完整性


----------------------------------------------------------------------

2007-12-24 Martin K. Petersen <martin.petersen@oracle.com>
@@ -0,0 +1,35 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst

:Original: Documentation/block/index.rst

:翻译:

 柯子杰 ke zijie <kezijie@leap-io-kernel.com>

:校译:

=====
Block
=====

.. toctree::
   :maxdepth: 1

   blk-mq
   data-integrity

TODOList:
* bfq-iosched
* biovecs
* cmdline-partition
* deadline-iosched
* inline-encryption
* ioprio
* kyber-iosched
* null_blk
* pr
* stat
* switching-sched
* writeback_cache_control
* ublk
@@ -2,7 +2,7 @@

.. include:: ../disclaimer-zh_CN.rst

-:Original: Documentation/dev-tools/gdb-kernel-debugging.rst
+:Original: Documentation/process/debugging/gdb-kernel-debugging.rst
:Translator: 高超 gao chao <gaochao49@huawei.com>

通过gdb调试内核和模块
@@ -28,15 +28,15 @@

::

-  ./scripts/checktransupdate.py --help
+  tools/docs/checktransupdate.py --help

具体用法请参考参数解析器的输出

示例

-- ``./scripts/checktransupdate.py -l zh_CN``
+- ``tools/docs/checktransupdate.py -l zh_CN``
  这将打印 zh_CN 语言中需要更新的所有文件。
-- ``./scripts/checktransupdate.py Documentation/translations/zh_CN/dev-tools/testing-overview.rst``
+- ``tools/docs/checktransupdate.py Documentation/translations/zh_CN/dev-tools/testing-overview.rst``
  这将只打印指定文件的状态。

然后输出类似如下的内容:
@@ -124,7 +124,7 @@ C代码编译器发出的警告常常会被视为误报,从而导致出现了
这使得这些信息更难找到,例如使Sphinx无法生成指向该文档的链接。将 ``kernel-doc``
指令添加到文档中以引入这些注释可以帮助社区获得为编写注释所做工作的全部价值。

-``scripts/find-unused-docs.sh`` 工具可以用来找到这些被忽略的评论。
+``tools/docs/find-unused-docs.sh`` 工具可以用来找到这些被忽略的评论。

请注意,将导出的函数和数据结构引入文档是最有价值的。许多子系统还具有供内部
使用的kernel-doc注释;除非这些注释放在专门针对相关子系统开发人员的文档中,
@@ -13,20 +13,20 @@
有时,为了描述用户空间API并在代码和文档之间生成交叉引用,需要包含头文件和示例
C代码。为用户空间API文件添加交叉引用还有一个好处:如果在文档中找不到相应符号,
Sphinx将生成警告。这有助于保持用户空间API文档与内核更改同步。
-:ref:`parse_headers.pl <parse_headers_zh>` 提供了生成此类交叉引用的一种方法。
+:ref:`parse_headers.py <parse_headers_zh>` 提供了生成此类交叉引用的一种方法。
在构建文档时,必须通过Makefile调用它。有关如何在内核树中使用它的示例,请参阅
``Documentation/userspace-api/media/Makefile`` 。

.. _parse_headers_zh:

-parse_headers.pl
+parse_headers.py
----------------

脚本名称
~~~~~~~~

-parse_headers.pl——解析一个C文件,识别函数、结构体、枚举、定义并对Sphinx文档
+parse_headers.py——解析一个C文件,识别函数、结构体、枚举、定义并对Sphinx文档
创建交叉引用。


@@ -34,7 +34,7 @@ parse_headers.pl——解析一个C文件,识别函数、结构体、枚举、
~~~~~~~~

-\ **parse_headers.pl**\ [<选项>] <C文件> <输出文件> [<例外文件>]
+\ **parse_headers.py**\ [<选项>] <C文件> <输出文件> [<例外文件>]

<选项> 可以是: --debug, --help 或 --usage 。
@@ -84,7 +84,7 @@ PDF和LaTeX构建
这有一个脚本可以自动检查Sphinx依赖项。如果它认得您的发行版,还会提示您所用发行
版的安装命令::

-  $ ./scripts/sphinx-pre-install
+  $ ./tools/docs/sphinx-pre-install
  Checking if the needed tools for Fedora release 26 (Twenty Six) are available
  Warning: better to also install "texlive-luatex85".
  You should run:

@@ -94,7 +94,7 @@ PDF和LaTeX构建
  . sphinx_2.4.4/bin/activate
  pip install -r Documentation/sphinx/requirements.txt

-  Can't build as 1 mandatory dependency is missing at ./scripts/sphinx-pre-install line 468.
+  Can't build as 1 mandatory dependency is missing at ./tools/docs/sphinx-pre-install line 468.

默认情况下,它会检查html和PDF的所有依赖项,包括图像、数学表达式和LaTeX构建的
需求,并假设将使用虚拟Python环境。html构建所需的依赖项被认为是必需的,其他依
@@ -0,0 +1,67 @@
.. SPDX-License-Identifier: GPL-2.0

.. include:: ../disclaimer-zh_CN.rst

:Original: Documentation/filesystems/dnotify.rst

:翻译:

 王龙杰 Wang Longjie <wang.longjie1@zte.com.cn>

==============
Linux 目录通知
==============

Stephen Rothwell <sfr@canb.auug.org.au>

目录通知的目的是使用户应用程序能够在目录或目录中的任何文件发生变更时收到通知。基本机制包括应用程序
通过 fcntl(2) 调用在目录上注册通知,通知本身则通过信号传递。

应用程序可以决定希望收到哪些 “事件” 的通知。当前已定义的事件如下:

========= =====================================
DN_ACCESS 目录中的文件被访问(read)
DN_MODIFY 目录中的文件被修改(write,truncate)
DN_CREATE 目录中创建了文件
DN_DELETE 目录中的文件被取消链接
DN_RENAME 目录中的文件被重命名
DN_ATTRIB 目录中的文件属性被更改(chmod,chown)
========= =====================================

通常,应用程序必须在每次通知后重新注册,但如果将 DN_MULTISHOT 与事件掩码进行或运算,则注册
将一直保持有效,直到被显式移除(通过注册为不接收任何事件)。

默认情况下,SIGIO 信号将被传递给进程,且不附带其他有用的信息。但是,如果使用 F_SETSIG fcntl(2)
调用让内核知道要传递哪个信号,一个 siginfo 结构体将被传递给信号处理程序,该结构体的 si_fd 成员将
包含与发生事件的目录相关联的文件描述符。

应用程序最好选择一个实时信号(SIGRTMIN + <n>),以便通知可以被排队。如果指定了 DN_MULTISHOT,
这一点尤为重要。注意,SIGRTMIN 通常是被阻塞的,因此最好使用(至少)SIGRTMIN + 1。

实现预期(特性与缺陷 :-))
--------------------------

对于文件的任何本地访问,通知都应能正常工作,即使实际文件系统位于远程服务器上。这意味着,对本地用户
模式服务器提供的文件的远程访问应能触发通知。同样的,对本地内核 NFS 服务器提供的文件的远程访问
也应能触发通知。

为了尽可能减小对文件系统代码的影响,文件硬链接的问题已被忽略。因此,如果一个文件(x)存在于两个
目录(a 和 b)中,通过名称”a/x”对该文件进行的更改应通知给期望接收目录“a”通知的程序,但不会
通知给期望接收目录“b”通知的程序。

此外,取消链接的文件仍会在它们链接到的最后一个目录中触发通知。

配置
----

Dnotify 由 CONFIG_DNOTIFY 配置选项控制。禁用该选项时,fcntl(fd, F_NOTIFY, ...) 将返
回 -EINVAL。

示例
----
具体示例可参见 tools/testing/selftests/filesystems/dnotify_test.c。

注意
----
从 Linux 2.6.13 开始,dnotify 已被 inotify 取代。有关 inotify 的更多信息,请参见
Documentation/filesystems/inotify.rst。
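The registration dance described in the dnotify document above (fcntl(2) with F_NOTIFY, F_SETSIG for a queued real-time signal, DN_MULTISHOT for persistence) can also be driven from Python's ``fcntl`` module on Linux. A minimal sketch, assuming a kernel built with CONFIG_DNOTIFY; error handling is omitted:

```python
import fcntl
import os
import signal
import tempfile
import time

events = []

def on_notify(signum, frame):
    # With F_SETSIG the kernel fills in siginfo (si_fd names the watched
    # directory); Python's handler only exposes the signal number.
    events.append(signum)

# Use a queued real-time signal rather than the default SIGIO, as the
# document recommends (SIGRTMIN itself is often blocked, so use +1).
sig = signal.SIGRTMIN + 1
signal.signal(sig, on_notify)

dirpath = tempfile.mkdtemp()
fd = os.open(dirpath, os.O_RDONLY)
fcntl.fcntl(fd, fcntl.F_SETSIG, sig)
fcntl.fcntl(fd, fcntl.F_NOTIFY,
            fcntl.DN_CREATE | fcntl.DN_MODIFY | fcntl.DN_MULTISHOT)

# Creating a file in the watched directory triggers DN_CREATE.
with open(os.path.join(dirpath, "newfile"), "w") as f:
    f.write("hello")
time.sleep(0.1)
```

After the write, `events` should contain at least one delivered signal; with DN_MULTISHOT the registration stays armed for further changes without re-registering.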
@ -0,0 +1,211 @@
|
|||
.. SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/filesystems/gfs2-glocks.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
邵明寅 Shao Mingyin <shao.mingyin@zte.com.cn>
|
||||
|
||||
:校译:
|
||||
|
||||
杨涛 yang tao <yang.tao172@zte.com.cn>
|
||||
|
||||
==================
|
||||
Glock 内部加锁规则
|
||||
==================
|
||||
|
||||
本文档阐述 glock 状态机内部运作的基本原理。每个 glock(即
|
||||
fs/gfs2/incore.h 中的 struct gfs2_glock)包含两把主要的内部锁:
|
||||
|
||||
1. 自旋锁(gl_lockref.lock):用于保护内部状态(如
|
||||
gl_state、gl_target)和持有者列表(gl_holders)
|
||||
2. 非阻塞的位锁(GLF_LOCK):用于防止其他线程同时调用
|
||||
DLM 等操作。若某线程获取此锁,则在释放时必须调用
|
||||
run_queue(通常通过工作队列),以确保所有待处理任务
|
||||
得以完成。
|
||||
|
||||
gl_holders 列表包含与该 glock 关联的所有排队锁请求(不
|
||||
仅是持有者)。若存在已持有的锁,它们将位于列表开头的连
|
||||
续条目中。锁的授予严格遵循排队顺序。
|
||||
|
||||
glock 层用户可请求三种锁状态:共享(SH)、延迟(DF)和
|
||||
排他(EX)。它们对应以下 DLM 锁模式:
|
||||
|
||||
========== ====== =====================================================
|
||||
Glock 模式 DLM 锁模式
|
||||
========== ====== =====================================================
|
||||
UN IV/NL 未加锁(无关联的 DLM 锁)或 NL
|
||||
SH PR 受保护读(Protected read)
|
||||
DF CW 并发写(Concurrent write)
|
||||
EX EX 排他(Exclusive)
|
||||
========== ====== =====================================================
|
||||
|
||||
因此,DF 本质上是一种与“常规”共享锁模式(SH)互斥的共
|
||||
享模式。在 GFS2 中,DF 模式专用于直接 I/O 操作。Glock
|
||||
本质上是锁加缓存管理例程的组合,其缓存规则如下:
|
||||
|
||||
========== ============== ========== ========== ==============
|
||||
Glock 模式 缓存元数据 缓存数据 脏数据 脏元数据
|
||||
========== ============== ========== ========== ==============
|
||||
UN 否 否 否 否
|
||||
DF 是 否 否 否
|
||||
SH 是 是 否 否
|
||||
EX 是 是 是 是
|
||||
========== ============== ========== ========== ==============
|
||||
|
||||
这些规则通过为每种 glock 定义的操作函数实现。并非所有
|
||||
glock 类型都使用全部的模式,例如仅 inode glock 使用 DF 模
|
||||
式。
|
||||
|
||||
glock 操作函数及类型常量说明表:
|
||||
|
||||
============== ========================================================
|
||||
字段 用途
|
||||
============== ========================================================
|
||||
go_sync 远程状态变更前调用(如同步脏数据)
|
||||
go_xmote_bh 远程状态变更后调用(如刷新缓存)
|
||||
go_inval 远程状态变更需使缓存失效时调用
|
||||
go_instantiate 获取 glock 时调用
|
||||
go_held 每次获取 glock 持有者时调用
|
||||
go_dump 为 debugfs 文件打印对象内容,或出错时将 glock 转储至日志
|
||||
go_callback 若 DLM 发送回调以释放此锁时调用
|
||||
go_unlocked 当 glock 解锁时调用(dlm_unlock())
|
||||
go_type glock 类型,``LM_TYPE_*``
|
||||
go_flags       若 glock 关联地址空间,则设置 GLOF_ASPACE 标志
|
||||
============== ========================================================
|
||||
|
||||
每种锁的最短持有时间是指在远程锁授予后忽略远程降级请求
|
||||
的时间段。此举旨在防止锁在集群节点间持续弹跳而无实质进
|
||||
展的情况,此现象常见于多节点写入的共享内存映射文件。通
|
||||
过延迟响应远程回调的降级操作,为用户空间程序争取页面取
|
||||
消映射前的处理时间。
|
||||
|
||||
未来计划将 glock 的 "EX" 模式设为本地共享,使本地锁通
|
||||
过 i_mutex 实现而非 glock。
|
||||
|
||||
glock 操作函数的加锁规则:
|
||||
|
||||
============== ====================== =============================
|
||||
操作 GLF_LOCK 位锁持有 gl_lockref.lock 自旋锁持有
|
||||
============== ====================== =============================
|
||||
go_sync 是 否
|
||||
go_xmote_bh 是 否
|
||||
go_inval 是 否
|
||||
go_instantiate 否 否
|
||||
go_held 否 否
|
||||
go_dump 有时 是
|
||||
go_callback 有时(N/A) 是
|
||||
go_unlocked 是 否
|
||||
============== ====================== =============================
|
||||
|
||||
.. Note::
|
||||
|
||||
若入口处持有锁则操作期间不得释放位锁或自旋锁。
|
||||
go_dump 和 do_demote_ok 严禁阻塞。
|
||||
仅当 glock 状态指示其缓存最新数据时才会调用 go_dump。
|
||||
|
||||
GFS2 内部的 glock 加锁顺序:
|
||||
|
||||
1. i_rwsem(如需要)
|
||||
2. 重命名 glock(仅用于重命名)
|
||||
3. Inode glock
|
||||
(父级优先于子级,同级 inode 按锁编号排序)
|
||||
4. Rgrp glock(用于(反)分配操作)
|
||||
5. 事务 glock(通过 gfs2_trans_begin,非读操作)
|
||||
6. i_rw_mutex(如需要)
|
||||
7. 页锁(始终最后,至关重要!)
|
||||
|
||||
每个 inode 对应两把 glock:一把管理 inode 本身(加锁顺
|
||||
序如上),另一把(称为 iopen glock)结合 inode 的
|
||||
i_nlink 字段决定 inode 生命周期。inode 加锁基于单个
|
||||
inode,rgrp 加锁基于单个 rgrp。通常优先获取本地锁再获
|
||||
取集群锁。
|
||||
|
||||
Glock 统计
|
||||
----------
|
||||
|
||||
统计分为两类:超级块相关统计和单个 glock 相关统计。超级
|
||||
块统计按每 CPU 执行以减少收集开销,并进一步按 glock 类
|
||||
型细分。所有时间单位为纳秒。
|
||||
|
||||
超级块和 glock 统计收集相同信息。超级块时序统计为 glock
|
||||
时序统计提供默认值,使新建 glock 具有合理的初始值。每个
|
||||
glock 的计数器在创建时初始化为零,当 glock 从内存移除时
|
||||
统计丢失。
|
||||
|
||||
统计包含三组均值/方差对及两个计数器。均值/方差对为平滑
|
||||
指数估计,算法与网络代码中的往返时间计算类似(参见《
|
||||
TCP/IP详解 卷1》第21.3节及《卷2》第25.10节)。与 TCP/IP
|
||||
案例不同,此处均值/方差未缩放且单位为整数纳秒。
|
||||
|
||||
三组均值/方差对测量以下内容:
|
||||
|
||||
1. DLM 锁时间(非阻塞请求)
|
||||
2. DLM 锁时间(阻塞请求)
|
||||
3. 请求间隔时间(指向 DLM)
|
||||
|
||||
非阻塞请求指无论目标 DLM 锁处于何种状态均能立即完成的请求。
|
||||
当前满足条件的请求包括:(a)锁当前状态为互斥(如锁降级)、
|
||||
(b)请求状态为空置或解锁(同样如锁降级)、或(c)设置"try lock"
|
||||
标志的请求。其余锁请求均属阻塞请求。
|
||||
|
||||
两个计数器分别统计:
|
||||
1. 锁请求总数(决定均值/方差计算的数据量)
|
||||
2. glock 代码顶层的持有者排队数(通常远大于 DLM 锁请求数)
|
||||
|
||||
为什么收集这些统计数据?我们需深入分析时序参数的动因如下:
|
||||
|
||||
1. 更精准设置 glock "最短持有时间"
|
||||
2. 快速识别性能问题
|
||||
3. 改进资源组分配算法(基于锁等待时间而非盲目 "try lock")
|
||||
|
||||
因平滑更新的特性,采样量的阶跃变化需经 8 次采样(方差需
|
||||
4 次)才能完全体现,解析结果时需审慎考虑。
|
||||
|
||||
通过锁请求完成时间和 glock 平均锁请求间隔时间,可计算节
|
||||
点使用 glock 时长与集群共享时长的占比,对设置锁最短持有
|
||||
时间至关重要。
|
||||
|
||||
我们已采取严谨措施,力求精准测量目标量值。任何测量系统均
|
||||
存在误差,但我期望当前方案已达到合理精度极限。
|
||||
|
||||
超级块状态统计路径::
|
||||
|
||||
/sys/kernel/debug/gfs2/<fsname>/sbstats
|
||||
|
||||
Glock 状态统计路径::
|
||||
|
||||
/sys/kernel/debug/gfs2/<fsname>/glstats
|
||||
|
||||
(假设 debugfs 挂载于 /sys/kernel/debug,且 <fsname> 替
|
||||
换为对应 GFS2 文件系统名)
|
||||
|
||||
输出缩写说明:
|
||||
|
||||
========= ============================================
|
||||
srtt 非阻塞 DLM 请求的平滑往返时间
|
||||
srttvar srtt 的方差估计
|
||||
srttb (潜在)阻塞 DLM 请求的平滑往返时间
|
||||
srttvarb srttb 的方差估计
|
||||
sirt DLM 请求的平滑请求间隔时间
|
||||
sirtvar sirt 的方差估计
|
||||
dlm DLM 请求数(glstats 文件中的 dcnt)
|
||||
queue 排队的 glock 请求数(glstats 文件中的 qcnt)
|
||||
========= ============================================
|
||||
|
||||
sbstats 文件按 glock 类型(每种类型 8 行)和 CPU 核心(每 CPU 一列)
|
||||
记录统计数据集。glstats 文件则为每个 glock 提供统计集,其格式
|
||||
与 glocks 文件类似,但所有时序统计量均采用均值/方差格式存储。
|
||||
|
||||
gfs2_glock_lock_time 跟踪点实时输出目标 glock 的当前统计
|
||||
值,并附带每次接收到的 DLM 响应的附加信息:
|
||||
|
||||
====== ============
|
||||
status DLM 请求状态
|
||||
flags DLM 请求标志
|
||||
tdiff 该请求的耗时
|
||||
====== ============
|
||||
|
||||
(其余字段同上表)
|
||||
|
|
@ -0,0 +1,97 @@
|
|||
.. SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/filesystems/gfs2-uevents.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
邵明寅 Shao Mingyin <shao.mingyin@zte.com.cn>
|
||||
|
||||
:校译:
|
||||
|
||||
杨涛 yang tao <yang.tao172@zte.com.cn>
|
||||
|
||||
===============
|
||||
uevents 与 GFS2
|
||||
===============
|
||||
|
||||
在 GFS2 文件系统的挂载生命周期内,会生成多个 uevent。
|
||||
本文档解释了这些事件的含义及其用途(被 gfs2-utils 中的 gfs_controld 使用)。
|
||||
|
||||
GFS2 uevents 列表
|
||||
=================
|
||||
|
||||
1. ADD
|
||||
------
|
||||
|
||||
ADD 事件发生在挂载时。它始终是新建文件系统生成的第一个 uevent。如果挂载成
|
||||
功,随后会生成 ONLINE uevent。如果挂载失败,则随后会生成 REMOVE uevent。
|
||||
|
||||
ADD uevent 包含两个环境变量:SPECTATOR=[0|1] 和 RDONLY=[0|1],分别用
|
||||
于指定文件系统的观察者状态(一种未分配日志的只读挂载)和只读状态(已分配日志)。
|
||||
|
||||
2. ONLINE
|
||||
---------
|
||||
|
||||
ONLINE uevent 在成功挂载或重新挂载后生成。它具有与 ADD uevent 相同的环
|
||||
境变量。ONLINE uevent 及其用于标识观察者和 RDONLY 状态的两个环境变量是较
|
||||
新版本内核引入的功能(2.6.32-rc+ 及以上),旧版本内核不会生成此事件。
|
||||
|
||||
3. CHANGE
|
||||
---------
|
||||
|
||||
CHANGE uevent 在两种场景下使用。一是报告第一个节点成功挂载文件系统时
|
||||
(FIRSTMOUNT=Done)。这作为信号告知 gfs_controld,此时集群中其他节点可以
|
||||
安全挂载该文件系统。
|
||||
|
||||
另一个 CHANGE uevent 用于通知文件系统某个日志的日志恢复已完成。它包含两个
|
||||
环境变量:JID= 指定刚恢复的日志 ID,RECOVERY=[Done|Failed] 表示操作成
|
||||
功与否。这些 uevent 会在每次日志恢复时生成,无论是在初始挂载过程中,还是
|
||||
gfs_controld 通过 /sys/fs/gfs2/<fsname>/lock_module/recovery 文件
|
||||
请求特定日志恢复的结果。
|
||||
|
||||
由于早期版本的 gfs_controld 使用 CHANGE uevent 时未检查环境变量以确定状
|
||||
态,若为其添加新功能,存在用户工具版本过旧导致集群故障的风险。因此,在新增用
|
||||
于标识成功挂载或重新挂载的 uevent 时,选择了使用 ONLINE uevent。
|
||||
|
||||
4. OFFLINE
|
||||
----------
|
||||
|
||||
OFFLINE uevent 仅在文件系统发生错误时生成,是 "withdraw" 机制的一部分。
|
||||
当前该事件未提供具体错误信息,此问题有待修复。
|
||||
|
||||
5. REMOVE
|
||||
---------
|
||||
|
||||
REMOVE uevent 在挂载失败结束或卸载文件系统时生成。所有 REMOVE uevent
|
||||
之前都至少存在同一文件系统的 ADD uevent。与其他 uevent 不同,它由内核的
|
||||
kobject 子系统自动生成。
|
||||
|
||||
|
||||
所有 GFS2 uevents 的通用信息(uevent 环境变量)
|
||||
===============================================
|
||||
|
||||
1. LOCKTABLE=
|
||||
--------------
|
||||
|
||||
LOCKTABLE 是一个字符串,其值来源于挂载命令行(locktable=)或 fstab 文件。
|
||||
它用作文件系统标签,并为 lock_dlm 类型的挂载提供加入集群所需的信息。
|
||||
|
||||
2. LOCKPROTO=
|
||||
-------------
|
||||
|
||||
LOCKPROTO 是一个字符串,其值取决于挂载命令行或 fstab 中的设置。其值将是
|
||||
lock_nolock 或 lock_dlm。未来可能支持其他锁管理器。
|
||||
|
||||
3. JOURNALID=
|
||||
-------------
|
||||
|
||||
如果文件系统正在使用日志(观察者挂载不分配日志),则所有 GFS2 uevent 中都
|
||||
会包含此变量,其值为数字形式的日志 ID。
|
||||
|
||||
4. UUID=
|
||||
--------
|
||||
|
||||
在较新版本的 gfs2-utils 中,mkfs.gfs2 会向文件系统超级块写入 UUID。若存
|
||||
在 UUID,所有与该文件系统相关的 uevent 中均会包含此信息。
|
||||
|
|
@ -0,0 +1,57 @@
|
|||
.. SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/filesystems/gfs2.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
邵明寅 Shao Mingyin <shao.mingyin@zte.com.cn>
|
||||
|
||||
:校译:
|
||||
|
||||
杨涛 yang tao <yang.tao172@zte.com.cn>
|
||||
|
||||
=====================================
|
||||
全局文件系统 2 (Global File System 2)
|
||||
=====================================
|
||||
|
||||
GFS2 是一个集群文件系统。它允许一组计算机同时使用在它们之间共享的块设备(通
|
||||
过 FC、iSCSI、NBD 等)。GFS2 像本地文件系统一样读写块设备,但也使用一个锁
|
||||
模块来让计算机协调它们的 I/O 操作,从而维护文件系统的一致性。GFS2 的出色特
|
||||
性之一是完美一致性——在一台机器上对文件系统所做的更改会立即显示在集群中的所
|
||||
有其他机器上。
|
||||
|
||||
GFS2 使用可互换的节点间锁定机制,当前支持的机制有:
|
||||
|
||||
lock_nolock
|
||||
- 允许将 GFS2 用作本地文件系统
|
||||
|
||||
lock_dlm
|
||||
- 使用分布式锁管理器 (dlm) 进行节点间锁定。
|
||||
该 dlm 位于 linux/fs/dlm/
|
||||
|
||||
lock_dlm 依赖于在上述 URL 中找到的用户空间集群管理系统。
|
||||
|
||||
若要将 GFS2 用作本地文件系统,则不需要外部集群系统,只需::
|
||||
|
||||
$ mkfs -t gfs2 -p lock_nolock -j 1 /dev/block_device
|
||||
$ mount -t gfs2 /dev/block_device /dir
|
||||
|
||||
在所有集群节点上都需要安装 gfs2-utils 软件包;对于 lock_dlm,您还需要按
|
||||
照文档配置 dlm 和 corosync 用户空间工具。
|
||||
|
||||
gfs2-utils 可在 https://pagure.io/gfs2-utils 找到。
|
||||
|
||||
GFS2 在磁盘格式上与早期版本的 GFS 不兼容,但它已相当接近。
|
||||
|
||||
以下手册页 (man pages) 可在 gfs2-utils 中找到:
|
||||
|
||||
============ =============================================
|
||||
fsck.gfs2 用于修复文件系统
|
||||
gfs2_grow 用于在线扩展文件系统
|
||||
gfs2_jadd 用于在线向文件系统添加日志
|
||||
tunegfs2 用于操作、检查和调优文件系统
|
||||
gfs2_convert 用于将 gfs 文件系统原地转换为 GFS2
|
||||
mkfs.gfs2 用于创建文件系统
|
||||
============ =============================================
|
||||
|
|
@ -15,6 +15,16 @@ Linux Kernel中的文件系统
|
|||
文件系统(VFS)层以及基于其上的各种文件系统如何工作呈现给大家。当前\
|
||||
可以看到下面的内容。
|
||||
|
||||
核心 VFS 文档
|
||||
=============
|
||||
|
||||
有关 VFS 层本身以及其算法工作方式的文档,请参阅这些手册。
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
dnotify
|
||||
|
||||
文件系统
|
||||
========
|
||||
|
||||
|
|
@ -26,4 +36,9 @@ Linux Kernel中的文件系统
|
|||
virtiofs
|
||||
debugfs
|
||||
tmpfs
|
||||
|
||||
ubifs
|
||||
ubifs-authentication
|
||||
gfs2
|
||||
gfs2-uevents
|
||||
gfs2-glocks
|
||||
inotify
|
||||
|
|
|
|||
|
|
@ -0,0 +1,80 @@
|
|||
.. SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/filesystems/inotify.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
王龙杰 Wang Longjie <wang.longjie1@zte.com.cn>
|
||||
|
||||
==========================================
|
||||
Inotify - 一个强大且简单的文件变更通知系统
|
||||
==========================================
|
||||
|
||||
|
||||
|
||||
文档由 Robert Love <rml@novell.com> 于 2005 年 3 月 15 日开始撰写
|
||||
|
||||
文档由 Zhang Zhen <zhenzhang.zhang@huawei.com> 于 2015 年 1 月 4 日更新
|
||||
|
||||
- 删除了已废弃的接口,关于用户接口请参考手册页。
|
||||
|
||||
(i) 基本原理
|
||||
|
||||
问:
|
||||
不将监控项与被监控对象打开的文件描述符(fd)绑定,这背后的设计决策是什么?
|
||||
|
||||
答:
|
||||
监控项会与打开的 inotify 设备相关联,而非与打开的文件相关联。这解决了 dnotify 的主要问题:
|
||||
保持文件打开会锁定文件,更糟的是,还会锁定挂载点。因此,dnotify 在带有可移动介质的桌面系统
|
||||
上难以使用,因为介质将无法被卸载。监控文件不应要求文件处于打开状态。
|
||||
|
||||
问:
|
||||
与每个监控项一个文件描述符的方式相比,采用每个实例一个文件描述符的设计决策是出于什么
|
||||
考虑?
|
||||
|
||||
答:
|
||||
每个监控项一个文件描述符会很快耗尽允许的文件描述符数量,其数量会超出实际可管理的范
|
||||
围,也会超出 select() 能高效处理的范围。诚然,root 用户可以提高每个进程的文件描述符限制,
|
||||
用户也可以使用 epoll,但同时要求这两者是不合理且多余的。一个监控项所消耗的内存比一个打开的文
|
||||
件要少,因此将这两个数量空间分开是合理的。当前的设计正是用户空间开发者所期望的:用户只需初始
|
||||
化一次 inotify,然后添加 n 个监控项,而这只需要一个文件描述符,无需调整文件描述符限制。初
|
||||
始化 inotify 实例初始化两千次是很荒谬的。如果我们能够简洁地实现用户空间的偏好——而且我们
|
||||
确实可以,idr 层让这类事情变得轻而易举——那么我们就应该这么做。
|
||||
|
||||
还有其他合理的理由。如果只有一个文件描述符,那就只需要在该描述符上阻塞,它对应着一个事件队列。
|
||||
这个单一文件描述符会返回所有的监控事件以及任何可能的带外数据。而如果每个文件描述符都是一个独
|
||||
立的监控项,
|
||||
|
||||
- 将无法知晓事件的顺序。文件 foo 和文件 bar 上的事件会触发两个文件描述符上的 poll(),
|
||||
但无法判断哪个事件先发生。而用单个队列就可以很容易地提供事件的顺序。这种顺序对现有的应用程
|
||||
序(如 Beagle)至关重要。想象一下,如果“mv a b ; mv b a”这样的事件没有顺序会是什么
|
||||
情况。
|
||||
|
||||
- 我们将不得不维护 n 个文件描述符和 n 个带有状态的内部队列,而不是仅仅一个。这在 kernel 中
|
||||
会混乱得多。单个线性队列是合理的数据结构。
|
||||
|
||||
- 用户空间开发者更青睐当前的 API。例如,Beagle 的开发者们就很喜欢它。相信我,我问过他们。
|
||||
这并不奇怪:谁会想通过 select 来管理以及阻塞在 1000 个文件描述符上呢?
|
||||
|
||||
- 无法获取带外数据。
|
||||
|
||||
- 1024 这个数量仍然太少。 ;-)
|
||||
|
||||
当要设计一个可扩展到数千个目录的文件变更通知系统时,处理数千个文件描述符似乎并不是合适的接口。
|
||||
这太繁琐了。
|
||||
|
||||
此外,创建多个实例、处理多个队列以及相应的多个文件描述符是可行的。不必是每个进程对应一个文件描
|
||||
述符;而是每个队列对应一个文件描述符,一个进程完全可能需要多个队列。
|
||||
|
||||
问:
|
||||
为什么采用系统调用的方式?
|
||||
|
||||
答:
|
||||
糟糕的用户空间接口是 dnotify 的第二大问题。信号对于文件通知来说是一种非常糟糕的接口。其实对
|
||||
于其他任何事情,信号也都不是好的接口。从各个角度来看,理想的解决方案是基于文件描述符的,它允许
|
||||
基本的文件 I/O 操作以及 poll/select 操作。获取文件描述符和管理监控项既可以通过设备文件来
|
||||
实现,也可以通过一系列新的系统调用来实现。我们决定采用一系列系统调用,因为这是提供新的内核接口
|
||||
的首选方法。两者之间唯一真正的区别在于,我们是想使用 open(2) 和 ioctl(2),还是想使用几
|
||||
个新的系统调用。系统调用比 ioctl 更有优势。
|
||||
|
|
@ -0,0 +1,354 @@
|
|||
.. SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/filesystems/ubifs-authentication.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
邵明寅 Shao Mingyin <shao.mingyin@zte.com.cn>
|
||||
|
||||
:校译:
|
||||
|
||||
杨涛 yang tao <yang.tao172@zte.com.cn>
|
||||
|
||||
=============
|
||||
UBIFS认证支持
|
||||
=============
|
||||
|
||||
引言
|
||||
====
|
||||
UBIFS 利用 fscrypt 框架为文件内容及文件名提供保密性。这能防止攻击者在单一
|
||||
时间点读取文件系统内容的攻击行为。典型案例是智能手机丢失时,攻击者若没有文件
|
||||
系统解密密钥则无法读取设备上的个人数据。
|
||||
|
||||
在现阶段,UBIFS 加密尚不能防止攻击者篡改文件系统内容后用户继续使用设备的攻
|
||||
击场景。这种情况下,攻击者可任意修改文件系统内容而不被用户察觉。例如修改二
|
||||
进制文件使其执行时触发恶意行为 [DMC-CBC-ATTACK]。由于 UBIFS 大部分文件
|
||||
系统元数据以明文存储,使得文件替换和内容篡改变得相当容易。
|
||||
|
||||
其他全盘加密系统(如 dm-crypt)可以覆盖所有文件系统元数据,这类系统虽然能
|
||||
增加这种攻击的难度,但特别是当攻击者能多次访问设备时,也有可能实现攻击。对于
|
||||
基于 Linux 块 IO 层的 dm-crypt 等文件系统,可通过 dm-integrity 或
|
||||
dm-verity 子系统[DM-INTEGRITY, DM-VERITY]在块层实现完整数据认证,这些
|
||||
功能也可与 dm-crypt 结合使用[CRYPTSETUP2]。
|
||||
|
||||
本文描述一种为 UBIFS 实现文件内容认证和完整元数据认证的方法。由于 UBIFS
|
||||
使用 fscrypt 进行文件内容和文件名加密,认证系统可与 fscrypt 集成以利用密
|
||||
钥派生等现有功能。但系统同时也应支持在不启用加密的情况下使用 UBIFS 认证。
|
||||
|
||||
|
||||
MTD, UBI & UBIFS
|
||||
----------------
|
||||
在 Linux 中,MTD(内存技术设备)子系统提供访问裸闪存设备的统一接口。运行于
|
||||
MTD 之上的重要子系统是 UBI(无序块映像),它为闪存设备提供卷管理功能,类似
|
||||
于块设备的 LVM。此外,UBI 还处理闪存特有的磨损均衡和透明 I/O 错误处理。
|
||||
UBI 向上层提供逻辑擦除块(LEB),并透明地映射到闪存的物理擦除块(PEB)。
|
||||
|
||||
UBIFS 是运行于 UBI 之上的裸闪存文件系统。因此 UBI 处理磨损均衡和部分闪存
|
||||
特性,而 UBIFS 专注于可扩展性、性能和可恢复性。
|
||||
|
||||
::
|
||||
|
||||
+------------+ +*******+ +-----------+ +-----+
|
||||
| | * UBIFS * | UBI-BLOCK | | ... |
|
||||
| JFFS/JFFS2 | +*******+ +-----------+ +-----+
|
||||
| | +-----------------------------+ +-----------+ +-----+
|
||||
| | | UBI | | MTD-BLOCK | | ... |
|
||||
+------------+ +-----------------------------+ +-----------+ +-----+
|
||||
+------------------------------------------------------------------+
|
||||
| MEMORY TECHNOLOGY DEVICES (MTD) |
|
||||
+------------------------------------------------------------------+
|
||||
+-----------------------------+ +--------------------------+ +-----+
|
||||
| NAND DRIVERS | | NOR DRIVERS | | ... |
|
||||
+-----------------------------+ +--------------------------+ +-----+
|
||||
|
||||
图1:处理裸闪存的 Linux 内核子系统
|
||||
|
||||
|
||||
|
||||
UBIFS 内部维护多个持久化在闪存上的数据结构:
|
||||
|
||||
- *索引*:存储在闪存上的 B+ 树,叶节点包含文件系统数据
|
||||
- *日志*:在更新闪存索引前收集文件系统变更的辅助数据结构,可减少闪存磨损
|
||||
- *树节点缓存(TNC)*:反映当前文件系统状态的内存 B+ 树,避免频繁读取闪存。
|
||||
本质上是索引的内存表示,但包含额外属性
|
||||
- *LEB属性树(LPT)*:用于统计每个 UBI LEB 空闲空间的闪存B+树
|
||||
|
||||
本节后续将详细讨论UBIFS的闪存数据结构。因为 TNC 不直接持久化到闪存,其在此
|
||||
处的重要性较低。更多 UBIFS 细节详见[UBIFS-WP]。
|
||||
|
||||
|
||||
UBIFS 索引与树节点缓存
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
UBIFS 在闪存上的基础实体称为 *节点* ,包含多种类型。如存储文件内容块的数据
|
||||
节点
|
||||
( ``struct ubifs_data_node`` ),或表示 VFS 索引节点的 inode 节点
|
||||
( ``struct ubifs_ino_node`` )。几乎所有节点共享包含节点类型、长度、序列
|
||||
号等基础信息的通用头
|
||||
( ``ubifs_ch`` )(见内核源码 ``fs/ubifs/ubifs-media.h`` )。LPT条目
|
||||
和填充节点(用于填充 LEB
|
||||
尾部不可用空间)等次要节点类型除外。
|
||||
|
||||
为避免每次变更重写整个 B+ 树,UBIFS 采用 *wandering tree* 实现:仅重写
|
||||
变更节点,旧版本被标记废弃而非立即擦除。因此索引不固定存储于闪存某处,而是在
|
||||
闪存上 *wanders* ,在 LEB 被 UBIFS 重用前,闪存上会存在废弃部分。为定位
|
||||
最新索引,UBIFS 在 UBI LEB 1 存储称为 *主节点* 的特殊节点,始终指向最新
|
||||
UBIFS 索引根节点。为增强可恢复性,主节点还备份到 LEB 2。因此挂载 UBIFS 只
|
||||
需读取 LEB 1 和 2 获取当前主节点,进而定位最新闪存索引。
|
||||
|
||||
TNC 是闪存索引的内存表示,包含未持久化的运行时属性(如脏标记)。TNC 作为回
|
||||
写式缓存,所有闪存索引修改都通过 TNC 完成。与其他缓存类似,TNC 无需将完整
|
||||
索引全部加载到内存中,需要时从闪存读取部分内容。 *提交* 是更新闪存文件系统
|
||||
结构(如索引)的 UBIFS 操作。每次提交时,标记为脏的 TNC 节点被写入闪存以更
|
||||
新持久化索引。
|
||||
|
||||
|
||||
日志
|
||||
~~~~
|
||||
|
||||
为避免闪存磨损,索引仅在满足特定条件(如 ``fsync(2)`` )时才持久化(提交)。
|
||||
日志用于记录索引提交之间的所有变更(以 inode 节点、数据节点等形式)。挂载时
|
||||
从闪存读取日志并重放到 TNC(此时 TNC 按需从闪存索引创建)。
|
||||
|
||||
UBIFS 保留一组专用于日志的 LEB(称为 *日志区* )。日志区 LEB 数量在文件系
|
||||
统创建时配置(使用 ``mkfs.ubifs`` )并存储于超级块节点。日志区仅含两类节
|
||||
点: *引用节点* 和 *提交起始节点* 。执行索引提交时写入提交起始节点,每次日
|
||||
志更新时写入引用节点。每个引用节点指向构成日志条目的其他节点( inode 节点、
|
||||
数据节点等)在闪存上的位置,这些节点称为 *bud* ,描述包含数据的实际文件系
|
||||
统变更。
|
||||
|
||||
日志区以环形缓冲区维护。当日志将满时触发提交操作,同时写入提交起始节点。因此
|
||||
挂载时 UBIFS 查找最新提交起始节点,仅重放其后的引用节点。提交起始节点前的引
|
||||
用节点将被忽略(因其已属于闪存索引)。
|
||||
|
||||
写入日志条目时,UBIFS 首先确保有足够空间写入引用节点和该条目的 bud。然后先
|
||||
写引用节点,再写描述文件变更的 bud。在日志重放阶段,UBIFS 会记录每个参考节
|
||||
点,并检查其引用的 LEB位置以定位 buds。若这些数据损坏或丢失,UBIFS 会尝试
|
||||
通过重新读取 LEB 来恢复,但仅针对日志中最后引用的 LEB,因为只有它可能因断
|
||||
电而损坏。若恢复失败,UBIFS 将拒绝挂载。对于其他 LEB 的错误,UBIFS 会直接
|
||||
终止挂载操作。
|
||||
|
||||
::
|
||||
|
||||
| ---- LOG AREA ---- | ---------- MAIN AREA ------------ |
|
||||
|
||||
-----+------+-----+--------+---- ------+-----+-----+---------------
|
||||
\ | | | | / / | | | \
|
||||
/ CS | REF | REF | | \ \ DENT | INO | INO | /
|
||||
\ | | | | / / | | | \
|
||||
----+------+-----+--------+--- -------+-----+-----+----------------
|
||||
| | ^ ^
|
||||
| | | |
|
||||
+------------------------+ |
|
||||
| |
|
||||
+-------------------------------+
|
||||
|
||||
|
||||
图2:包含提交起始节点(CS)和引用节点(REF)的日志区闪存布局,引用节点指向含
|
||||
bud 的主区
|
||||
|
||||
|
||||
LEB属性树/表
|
||||
~~~~~~~~~~~~
|
||||
|
||||
LEB 属性树用于存储每个 LEB 的信息,包括 LEB 类型、LEB 上的空闲空间和
|
||||
*脏空间* (旧空间,废弃内容) [1]_ 的数量。因为 UBIFS 从不在单个 LEB 混
|
||||
合存储索引节点和数据节点,所以 LEB 的类型至关重要,每个 LEB 都有特定用途,
|
||||
这对空闲空间计算非常有帮助。详见[UBIFS-WP]。
|
||||
|
||||
LEB 属性树也是 B+ 树,但远小于索引。因为其体积小,所以每次提交时都整块写入,
|
||||
保存 LPT 是原子操作。
|
||||
|
||||
|
||||
.. [1] 由于LEB只能追加写入不能覆盖,空闲空间(即 LEB 剩余可写空间)与废弃
|
||||
内容(先前写入但未擦除前不能覆盖)存在区别。
|
||||
|
||||
|
||||
UBIFS认证
|
||||
=========
|
||||
|
||||
本章介绍UBIFS认证,使UBIFS能验证闪存上元数据和文件内容的真实性与完整性。
|
||||
|
||||
|
||||
威胁模型
|
||||
--------
|
||||
|
||||
UBIFS 认证可检测离线数据篡改。虽然不能防止篡改,但是能让(可信)代码检查闪
|
||||
存文件内容和文件系统元数据的完整性与真实性,也能检查文件内容被替换的攻击。
|
||||
|
||||
UBIFS 认证不防护全闪存内容回滚(攻击者可转储闪存内容并在后期还原)。也不防护
|
||||
单个索引提交的部分回滚(攻击者能部分撤销变更)。这是因为 UBIFS 不立即覆盖索
|
||||
引树或日志的旧版本,而是标记为废弃,稍后由垃圾回收擦除。攻击者可擦除当前树部
|
||||
分内容并还原闪存上尚未擦除的旧版本。因每次提交总会写入索引根节点和主节点的新
|
||||
版本而不覆盖旧版本,UBI 的磨损均衡操作(将内容从物理擦除块复制到另一擦除块
|
||||
且非原子擦除原块)进一步助长此问题。
|
||||
|
||||
UBIFS 认证不覆盖认证密钥提供后攻击者在设备执行代码的攻击,需结合安全启动和
|
||||
可信启动等措施确保设备仅执行可信代码。
|
||||
|
||||
|
||||
认证
|
||||
----
|
||||
|
||||
为完全信任从闪存读取的数据,所有存储在闪存的 UBIFS 数据结构均需认证:
|
||||
- 包含文件内容、扩展属性、文件长度等元数据的索引
|
||||
- 通过记录文件系统变更来包含文件内容和元数据的日志
|
||||
- 存储 UBIFS 用于空闲空间统计的 UBI LEB 元数据的 LPT
|
||||
|
||||
|
||||
索引认证
|
||||
~~~~~~~~
|
||||
|
||||
借助 *wandering tree* 概念,UBIFS 仅更新和持久化从叶节点到根节点的变更
|
||||
部分。这允许用子节点哈希增强索引树节点。最终索引基本成为 Merkle 树:因索引
|
||||
叶节点含实际文件系统数据,其父索引节点的哈希覆盖所有文件内容和元数据。文件
|
||||
变更时,UBIFS 索引从叶节点到根节点(含主节点)相应更新,此过程可挂钩以同步
|
||||
重新计算各变更节点的哈希。读取文件时,UBIFS 可从叶节点到根节点逐级验证哈希
|
||||
确保节点完整性。
|
||||
|
||||
为确保整个索引真实性,UBIFS 主节点存储基于密钥的哈希(HMAC),覆盖自身内容及
|
||||
索引树根节点哈希。如前所述,主节点在索引持久化时(即索引提交时)总会写入闪存。
|
||||
|
||||
此方法仅修改 UBIFS 索引节点和主节点以包含哈希,其他类型节点保持不变,减少了
|
||||
对 UBIFS 用户(如嵌入式设备)宝贵的存储开销。
|
||||
|
||||
::
|
||||
|
||||
+---------------+
|
||||
| Master Node |
|
||||
| (hash) |
|
||||
+---------------+
|
||||
|
|
||||
v
|
||||
+-------------------+
|
||||
| Index Node #1 |
|
||||
| |
|
||||
| branch0 branchn |
|
||||
| (hash) (hash) |
|
||||
+-------------------+
|
||||
| ... | (fanout: 8)
|
||||
| |
|
||||
+-------+ +------+
|
||||
| |
|
||||
v v
|
||||
+-------------------+ +-------------------+
|
||||
| Index Node #2 | | Index Node #3 |
|
||||
| | | |
|
||||
| branch0 branchn | | branch0 branchn |
|
||||
| (hash) (hash) | | (hash) (hash) |
|
||||
+-------------------+ +-------------------+
|
||||
| ... | ... |
|
||||
v v v
|
||||
+-----------+ +----------+ +-----------+
|
||||
| Data Node | | INO Node | | DENT Node |
|
||||
+-----------+ +----------+ +-----------+
|
||||
|
||||
|
||||
图3:索引节点哈希与主节点 HMAC 的覆盖范围
|
||||
|
||||
|
||||
|
||||
健壮性和断电安全性的关键在于以原子操作持久化哈希值与文件内容。UBIFS 现有
|
||||
的变更节点持久化机制专为此设计,能够确保断电时安全恢复。为索引节点添加哈希值
|
||||
不会改变该机制,因为每个哈希值都与其对应节点以原子操作同步持久化。
|
||||
|
||||
|
||||
日志认证
|
||||
~~~~~~~~
|
||||
|
||||
日志也需要认证。因为日志持续写入,必须频繁地添加认证信息以确保断电时未认证数
|
||||
据量可控。方法是从提交起始节点开始,对先前引用节点、当前引用节点和 bud 节点
|
||||
创建连续哈希链。适时地在bud节点间插入认证节点,这种新节点类型包含哈希链当前
|
||||
状态的 HMAC。因此日志可认证至最后一个认证节点。日志尾部无认证节点的部分无法
|
||||
认证,在日志重放时跳过。
|
||||
|
||||
日志认证示意图如下::
|
||||
|
||||
,,,,,,,,
|
||||
,......,...........................................
|
||||
,. CS , hash1.----. hash2.----.
|
||||
,. | , . |hmac . |hmac
|
||||
,. v , . v . v
|
||||
,.REF#0,-> bud -> bud -> bud.-> auth -> bud -> bud.-> auth ...
|
||||
,..|...,...........................................
|
||||
, | ,
|
||||
, | ,,,,,,,,,,,,,,,
|
||||
. | hash3,----.
|
||||
, | , |hmac
|
||||
, v , v
|
||||
, REF#1 -> bud -> bud,-> auth ...
|
||||
,,,|,,,,,,,,,,,,,,,,,,
|
||||
v
|
||||
REF#2 -> ...
|
||||
|
|
||||
V
|
||||
...
|
||||
|
||||
因为哈希值包含引用节点,攻击者无法重排或跳过日志头重放,仅能移除日志尾部的
|
||||
bud 节点或引用节点,最大限度将文件系统回退至上次提交。
|
||||
|
||||
日志区位置存储于主节点。因为主节点通过 HMAC 认证,所以未经检测无法篡改。日
|
||||
志区大小在文件系统创建时由 `mkfs.ubifs` 指定并存储于超级块节点。为避免篡
|
||||
改此值及其他参数,超级块结构添加 HMAC。超级块节点存储在 LEB 0,仅在功能标
|
||||
志等变更时修改,文件变更时不修改。
|
||||
|
||||
|
||||
LPT认证
|
||||
~~~~~~~
|
||||
|
||||
LPT 根节点在闪存上的位置存储于 UBIFS 主节点。因为 LPT 每次提交时都以原子
|
||||
操作写入和读取,无需单独认证树节点。通过主节点存储的简单哈希保护完整 LPT
|
||||
即可。因为主节点自身已认证,通过验证主节点真实性并比对存储的 LTP 哈希与读
|
||||
取的闪存 LPT 计算哈希值,即可验证 LPT 真实性。
|
||||
|
||||
|
||||
密钥管理
|
||||
--------
|
||||
|
||||
为了简化实现,UBIFS 认证使用单一密钥计算超级块、主节点、提交起始节点和引用
|
||||
节点的 HMAC。创建文件系统(`mkfs.ubifs`) 时需提供此密钥以认证超级块节点。
|
||||
挂载文件系统时也需此密钥验证认证节点并为变更生成新 HMAC。
|
||||
|
||||
UBIFS 认证旨在与 UBIFS 加密(fscrypt)协同工作以提供保密性和真实性。因为
|
||||
UBIFS 加密采用基于目录的差异化加密策略,可能存在多个 fscrypt 主密钥甚至未
|
||||
加密目录。而 UBIFS 认证采用全有或全无方式,要么认证整个文件系统要么完全不
|
||||
认证。基于此特性,且为确保认证机制可独立于加密功能使用,UBIFS 认证不与
|
||||
fscrypt 共享主密钥,而是维护独立的认证专用密钥。
|
||||
|
||||
提供认证密钥的API尚未定义,但可通过类似 fscrypt 的用户空间密钥环提供。需注
|
||||
意当前 fscrypt 方案存在缺陷,用户空间 API 终将变更[FSCRYPT-POLICY2]。
|
||||
|
||||
用户仍可通过用户空间提供单一口令或密钥覆盖 UBIFS 认证与加密。相应用户空间工
|
||||
具可解决此问题:除派生的 fscrypt 加密主密钥外,额外派生认证密钥。
|
||||
|
||||
为检查挂载时密钥可用性,UBIFS 超级块节点将额外存储认证密钥的哈希。此方法类
|
||||
似 fscrypt 加密策略 v2 提出的方法[FSCRYPT-POLICY2]。
|
||||
|
||||
|
||||
未来扩展
|
||||
========
|
||||
|
||||
特定场景下,若供应商需要向客户提供认证文件系统镜像,应该能在不共享 UBIFS 认
|
||||
证密钥的前提下实现。方法是在每个 HMAC 外额外存储数字签名,供应商随文件系统
|
||||
镜像分发公钥。若后续需修改该文件系统,UBIFS 可在
|
||||
首次挂载时将全部数字签名替换为 HMAC,其处理逻辑与 IMA/EVM 子系统应对此类情
|
||||
况的方式类似。此时,HMAC 密钥需按常规方式预先提供。
|
||||
|
||||
|
||||
参考
|
||||
====
|
||||
|
||||
[CRYPTSETUP2] https://www.saout.de/pipermail/dm-crypt/2017-November/005745.html
|
||||
|
||||
[DMC-CBC-ATTACK] https://www.jakoblell.com/blog/2013/12/22/practical-malleability-attack-against-cbc-en
|
||||
crypted-luks-partitions/
|
||||
|
||||
[DM-INTEGRITY] https://www.kernel.org/doc/Documentation/device-mapper/dm-integrity.rst
|
||||
|
||||
[DM-VERITY] https://www.kernel.org/doc/Documentation/device-mapper/verity.rst
|
||||
|
||||
[FSCRYPT-POLICY2] https://www.spinics.net/lists/linux-ext4/msg58710.html
|
||||
|
||||
[UBIFS-WP] http://www.linux-mtd.infradead.org/doc/ubifs_whitepaper.pdf
|
||||
|
|
@ -0,0 +1,114 @@
|
|||
.. SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/filesystems/ubifs.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
邵明寅 Shao Mingyin <shao.mingyin@zte.com.cn>
|
||||
|
||||
:校译:
|
||||
|
||||
杨涛 yang tao <yang.tao172@zte.com.cn>
|
||||
|
||||
============
|
||||
UBI 文件系统
|
||||
============
|
||||
|
||||
简介
|
||||
====
|
||||
|
||||
UBIFS 文件系统全称为 UBI 文件系统(UBI File System)。UBI 代表无序块镜
|
||||
像(Unsorted Block Images)。UBIFS 是一种闪存文件系统,这意味着它专为闪
|
||||
存设备设计。需要理解的是,UBIFS 与 Linux 中任何传统文件系统(如 Ext2、
|
||||
XFS、JFS 等)完全不同。UBIFS 代表一类特殊的文件系统,它们工作在 MTD 设备
|
||||
而非块设备上。该类别的另一个 Linux 文件系统是 JFFS2。
|
||||
|
||||
为更清晰说明,以下是 MTD 设备与块设备的简要比较:
|
||||
|
||||
1. MTD 设备代表闪存设备,由较大尺寸的擦除块组成,通常约 128KiB。块设备由
|
||||
小块组成,通常 512 字节。
|
||||
2. MTD 设备支持 3 种主要操作:在擦除块内偏移位置读取、在擦除块内偏移位置写
|
||||
入、以及擦除整个擦除块。块设备支持 2 种主要操作:读取整个块和写入整个块。
|
||||
3. 整个擦除块必须先擦除才能重写内容。块可直接重写。
|
||||
4. 擦除块在经历一定次数的擦写周期后会磨损,通常 SLC NAND 和 NOR 闪存为
|
||||
100K-1G 次,MLC NAND 闪存为 1K-10K 次。块设备不具备磨损特性。
|
||||
5. 擦除块可能损坏(仅限 NAND 闪存),软件需处理此问题。硬盘上的块通常不会损
|
||||
坏,因为硬件有坏块替换机制(至少现代 LBA 硬盘如此)。
|
||||
|
||||
这充分说明了 UBIFS 与传统文件系统的本质差异。
|
||||
|
||||
UBIFS 工作在 UBI 层之上。UBI 是一个独立的软件层(位于 drivers/mtd/ubi),
|
||||
本质上是卷管理和磨损均衡层。它提供称为 UBI 卷的高级抽象,比 MTD 设备更上层。
|
||||
UBI 设备的编程模型与 MTD 设备非常相似,仍由大容量擦除块组成,支持读/写/擦
|
||||
除操作,但 UBI 设备消除了磨损和坏块限制(上述列表的第 4 和第 5 项)。
|
||||
|
||||
某种意义上,UBIFS 是 JFFS2 文件系统的下一代产品,但它与 JFFS2 差异巨大且
|
||||
不兼容。主要区别如下:
|
||||
|
||||
* JFFS2 工作在 MTD 设备之上,UBIFS 依赖于 UBI 并工作在 UBI 卷之上。
|
||||
* JFFS2 没有介质索引,需在挂载时构建索引,这要求全介质扫描。UBIFS 在闪存
|
||||
介质上维护文件系统索引信息,无需全介质扫描,因此挂载速度远快于 JFFS2。
|
||||
* JFFS2 是直写(write-through)文件系统,而 UBIFS 支持回写
|
||||
(write-back),这使得 UBIFS 写入速度快得多。
|
||||
|
||||
与 JFFS2 类似,UBIFS 支持实时压缩,可将大量数据存入闪存。
|
||||
|
||||
与 JFFS2 类似,UBIFS 能容忍异常重启和断电。它不需要类似 fsck.ext2 的工
|
||||
具。UBIFS 会自动重放日志并从崩溃中恢复,确保闪存数据结构的一致性。
|
||||
|
||||
UBIFS 具有对数级扩展性(其使用的数据结构多为树形),因此挂载时间和内存消耗不
|
||||
像 JFFS2 那样线性依赖于闪存容量。这是因为 UBIFS 在闪存介质上维护文件系统
|
||||
索引。但 UBIFS 依赖于线性扩展的 UBI 层,因此整体 UBI/UBIFS 栈仍是线性扩
|
||||
展。尽管如此,UBIFS/UBI 的扩展性仍显著优于 JFFS2。
|
||||
|
||||
UBIFS 开发者认为,未来可开发同样具备对数级扩展性的 UBI2。UBI2 将支持与
|
||||
UBI 相同的 API,但二进制不兼容。因此 UBIFS 无需修改即可使用 UBI2。
|
||||
|
||||
挂载选项
|
||||
========
|
||||
|
||||
(*) 表示默认选项。
|
||||
|
||||
==================== =======================================================
|
||||
bulk_read 批量读取以利用闪存介质的顺序读取加速特性
|
||||
no_bulk_read (*) 禁用批量读取
|
||||
no_chk_data_crc (*) 跳过数据节点的 CRC 校验以提高读取性能。 仅在闪存
|
||||
介质高度可靠时使用此选项。 此选项可能导致文件内容损坏无法被
|
||||
察觉。
|
||||
chk_data_crc 强制校验数据节点的 CRC
|
||||
compr=none 覆盖默认压缩器,设置为"none"
|
||||
compr=lzo 覆盖默认压缩器,设置为"LZO"
|
||||
compr=zlib 覆盖默认压缩器,设置为"zlib"
|
||||
auth_key= 指定用于文件系统身份验证的密钥。
|
||||
使用此选项将强制启用身份验证。
|
||||
传入的密钥必须存在于内核密钥环中, 且类型必须是'logon'
|
||||
auth_hash_name= 用于身份验证的哈希算法。同时用于哈希计算和 HMAC
|
||||
生成。典型值包括"sha256"或"sha512"
|
||||
==================== =======================================================
|
||||
|
||||
快速使用指南
|
||||
============
|
||||
|
||||
挂载的 UBI 卷通过 "ubiX_Y" 或 "ubiX:NAME" 语法指定,其中 "X" 是 UBI
|
||||
设备编号,"Y" 是 UBI 卷编号,"NAME" 是 UBI 卷名称。
|
||||
|
||||
将 UBI 设备 0 的卷 0 挂载到 /mnt/ubifs::
|
||||
|
||||
$ mount -t ubifs ubi0_0 /mnt/ubifs
|
||||
|
||||
将 UBI 设备 0 的 "rootfs" 卷挂载到 /mnt/ubifs("rootfs" 是卷名)::
|
||||
|
||||
$ mount -t ubifs ubi0:rootfs /mnt/ubifs
|
||||
|
||||
以下是内核启动参数的示例,用于将 mtd0 附加到 UBI 并挂载 "rootfs" 卷::
|
||||
    ubi.mtd=0 root=ubi0:rootfs rootfstype=ubifs
|
||||
|
||||
参考资料
|
||||
========
|
||||
|
||||
UBIFS 文档及常见问题解答/操作指南请访问 MTD 官网:
|
||||
|
||||
- http://www.linux-mtd.infradead.org/doc/ubifs.html
|
||||
- http://www.linux-mtd.infradead.org/faq/ubifs.html
|
||||
|
|
@ -64,7 +64,7 @@ Linux 发行版和简单地使用 Linux 命令行,那么可以迅速开始了
|
|||
::
|
||||
|
||||
cd linux
|
||||
./scripts/sphinx-pre-install
|
||||
./tools/docs/sphinx-pre-install
|
||||
|
||||
以 Fedora 为例,它的输出是这样的::
|
||||
|
||||
|
|
@ -437,7 +437,7 @@ git email 默认会抄送给您一份,所以您可以切换为审阅者的角
|
|||
对于首次参与 Linux 内核中文文档翻译的新手,建议您在 linux 目录中运行以下命令:
|
||||
::
|
||||
|
||||
./script/checktransupdate.py -l zh_CN``
|
||||
tools/docs/checktransupdate.py -l zh_CN``
|
||||
|
||||
该命令会列出需要翻译或更新的英文文档,结果同时保存在 checktransupdate.log 中。
|
||||
|
||||
|
|
|
|||
|
|
@ -93,6 +93,16 @@ HOSTRUSTFLAGS
|
|||
-------------
|
||||
在构建主机程序时传递给 $(HOSTRUSTC) 的额外标志。
|
||||
|
||||
PROCMACROLDFLAGS
|
||||
----------------
|
||||
用于链接 Rust 过程宏的标志。由于过程宏是由 rustc 在构建时加载的,
|
||||
因此必须以与当前使用的 rustc 工具链兼容的方式进行链接。
|
||||
|
||||
例如,当 rustc 使用的 C 库与用户希望用于主机程序的 C 库不同时,
|
||||
此设置会非常有用。
|
||||
|
||||
如果未设置,则默认使用链接主机程序时传递的标志。
|
||||
|
||||
HOSTLDFLAGS
|
||||
-----------
|
||||
链接主机程序时传递的额外选项。
|
||||
|
|
@ -135,12 +145,18 @@ KBUILD_OUTPUT
|
|||
指定内核构建的输出目录。
|
||||
|
||||
在单独的构建目录中为预构建内核构建外部模块时,这个变量也可以指向内核输出目录。请注意,
|
||||
这并不指定外部模块本身的输出目录。
|
||||
这并不指定外部模块本身的输出目录(使用 KBUILD_EXTMOD_OUTPUT 来达到这个目的)。
|
||||
|
||||
输出目录也可以使用 "O=..." 指定。
|
||||
|
||||
设置 "O=..." 优先于 KBUILD_OUTPUT。
|
||||
|
||||
KBUILD_EXTMOD_OUTPUT
|
||||
--------------------
|
||||
指定外部模块的输出目录
|
||||
|
||||
设置 "MO=..." 优先于 KBUILD_EXTMOD_OUTPUT.
|
||||
|
||||
KBUILD_EXTRA_WARN
|
||||
-----------------
|
||||
指定额外的构建检查。也可以通过在命令行传递 "W=..." 来设置相同的值。
|
||||
|
|
@ -290,8 +306,13 @@ IGNORE_DIRS
|
|||
KBUILD_BUILD_TIMESTAMP
|
||||
----------------------
|
||||
将该环境变量设置为日期字符串,可以覆盖在 UTS_VERSION 定义中使用的时间戳
|
||||
(运行内核时的 uname -v)。该值必须是一个可以传递给 date -d 的字符串。默认值是
|
||||
内核构建某个时刻的 date 命令输出。
|
||||
(运行内核时的 uname -v)。该值必须是一个可以传递给 date -d 的字符串。例如::
|
||||
|
||||
$ KBUILD_BUILD_TIMESTAMP="Mon Oct 13 00:00:00 UTC 2025" make
|
||||
|
||||
默认值是内核构建某个时刻的 date 命令输出。如果提供该时戳,它还用于任何 initramfs 归
|
||||
档文件中的 mtime 字段。 Initramfs mtimes 是 32 位的,因此早于 Unix 纪元 1970 年,或
|
||||
晚于协调世界时 (UTC) 2106 年 2 月 7 日 6 时 28 分 15 秒的日期是无效的。
|
||||
|
||||
KBUILD_BUILD_USER, KBUILD_BUILD_HOST
|
||||
------------------------------------
|
||||
|
|
|
|||
|
|
@ -87,4 +87,4 @@ Active MM
|
|||
最丑陋的之一--不像其他架构的MM和寄存器状态是分开的,alpha的PALcode将两者
|
||||
连接起来,你需要同时切换两者)。
|
||||
|
||||
(文档来源 http://marc.info/?l=linux-kernel&m=93337278602211&w=2)
|
||||
(文档来源 https://lore.kernel.org/lkml/Pine.LNX.4.10.9907301410280.752-100000@penguin.transmeta.com/)
|
||||
|
|
|
|||
|
|
@ -0,0 +1,176 @@
|
|||
.. SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/networking/generic-hdlc.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
孙渔喜 Sun yuxi <sun.yuxi@zte.com.cn>
|
||||
|
||||
==========
|
||||
通用HDLC层
|
||||
==========
|
||||
|
||||
Krzysztof Halasa <khc@pm.waw.pl>
|
||||
|
||||
|
||||
通用HDLC层当前支持以下协议:
|
||||
|
||||
1. 帧中继(支持ANSI、CCITT、Cisco及无LMI模式)
|
||||
|
||||
- 常规(路由)接口和以太网桥接(以太网设备仿真)接口
|
||||
可共享同一条PVC。
|
||||
- 支持ARP(内核暂不支持InARP,但可通过实验性用户空间守护程序实现,
|
||||
下载地址:http://www.kernel.org/pub/linux/utils/net/hdlc/)。
|
||||
|
||||
2. 原始HDLC —— 支持IP(IPv4)接口或以太网设备仿真
|
||||
3. Cisco HDLC
|
||||
4. PPP
|
||||
5. X.25(使用X.25协议栈)
|
||||
|
||||
通用HDLC仅作为协议驱动 - 必须配合具体硬件的底层驱动
|
||||
才能运行。
|
||||
|
||||
以太网设备仿真(使用HDLC或帧中继PVC)兼容IEEE 802.1Q(VLAN)和
|
||||
802.1D(以太网桥接)。
|
||||
|
||||
|
||||
请确保已加载 hdlc.o 和硬件驱动程序。系统将为每个WAN端口创建一个
|
||||
"hdlc"网络设备(如hdlc0等)。您需要使用"sethdlc"工具,可从以下
|
||||
地址获取:
|
||||
|
||||
http://www.kernel.org/pub/linux/utils/net/hdlc/
|
||||
|
||||
编译 sethdlc.c 工具::
|
||||
|
||||
gcc -O2 -Wall -o sethdlc sethdlc.c
|
||||
|
||||
请确保使用与您内核版本匹配的 sethdlc 工具。
|
||||
|
||||
使用 sethdlc 工具设置物理接口、时钟频率、HDLC 模式,
|
||||
若使用帧中继还需添加所需的 PVC。
|
||||
通常您需要执行类似以下命令::
|
||||
|
||||
sethdlc hdlc0 clock int rate 128000
|
||||
sethdlc hdlc0 cisco interval 10 timeout 25
|
||||
|
||||
或::
|
||||
|
||||
sethdlc hdlc0 rs232 clock ext
|
||||
sethdlc hdlc0 fr lmi ansi
|
||||
sethdlc hdlc0 create 99
|
||||
ifconfig hdlc0 up
|
||||
ifconfig pvc0 localIP pointopoint remoteIP
|
||||
|
||||
在帧中继模式下,请先启用主hdlc设备(不分配IP地址),再
|
||||
使用pvc设备。
|
||||
|
||||
|
||||
接口设置选项:
|
||||
|
||||
* v35 | rs232 | x21 | t1 | e1
|
||||
- 当网卡支持软件可选接口时,可为指定端口设置物理接口
|
||||
loopback
|
||||
- 启用硬件环回(仅用于测试)
|
||||
* clock ext
|
||||
- RX与TX时钟均使用外部时钟源
|
||||
* clock int
|
||||
- RX与TX时钟均使用内部时钟源
|
||||
* clock txint
|
||||
- RX时钟使用外部时钟源,TX时钟使用内部时钟源
|
||||
* clock txfromrx
|
||||
- RX时钟使用外部时钟源,TX时钟从RX时钟派生
|
||||
* rate
|
||||
- 设置时钟速率(仅适用于"int"或"txint"时钟模式)
|
||||
|
||||
|
||||
设置协议选项:
|
||||
|
||||
* hdlc - 设置原始HDLC模式(仅支持IP协议)
|
||||
|
||||
nrz / nrzi / fm-mark / fm-space / manchester - 传输编码选项
|
||||
|
||||
no-parity / crc16 / crc16-pr0 (预设零值的CRC16) / crc32-itu
|
||||
|
||||
crc16-itu (使用ITU-T多项式的CRC16) / crc16-itu-pr0 - 校验方式选项
|
||||
|
||||
* hdlc-eth - 使用HDLC进行以太网设备仿真。校验和编码方式同上
|
||||
|
||||
|
||||
* cisco - 设置Cisco HDLC模式(支持IP、IPv6和IPX协议)
|
||||
|
||||
interval - 保活数据包发送间隔(秒)
|
||||
|
||||
timeout - 未收到保活数据包的超时时间(秒),超过此时长将判定
|
||||
链路断开
|
||||
|
||||
* ppp - 设置同步PPP模式
|
||||
|
||||
* x25 - 设置X.25模式
|
||||
|
||||
* fr - 帧中继模式
|
||||
|
||||
lmi ansi / ccitt / cisco / none - LMI(链路管理)类型
|
||||
|
||||
dce - 将帧中继设置为DCE(网络侧)LMI模式(默认为DTE用户侧)。
|
||||
|
||||
此设置与时钟无关!
|
||||
|
||||
- t391 - 链路完整性验证轮询定时器(秒)- 用户侧
|
||||
- t392 - 轮询验证定时器(秒)- 网络侧
|
||||
- n391 - 全状态轮询计数器 - 用户侧
|
||||
- n392 - 错误阈值 - 用户侧和网络侧共用
|
||||
- n393 - 监控事件计数 - 用户侧和网络侧共用
|
||||
|
||||
帧中继专用命令:
|
||||
|
||||
* create n | delete n - 添加/删除DLCI编号为n的PVC接口。
|
||||
新创建的接口将命名为pvc0、pvc1等。
|
||||
|
||||
* create ether n | delete ether n - 添加/删除用于以太网
|
||||
桥接帧的设备。设备将命名为pvceth0、pvceth1等。
|
||||
|
||||
|
||||
|
||||
|
||||
板卡特定问题
|
||||
------------
|
||||
|
||||
n2.o 和 c101.o 驱动模块需要参数才能工作::
|
||||
|
||||
insmod n2 hw=io,irq,ram,ports[:io,irq,...]
|
||||
|
||||
示例::
|
||||
|
||||
insmod n2 hw=0x300,10,0xD0000,01
|
||||
|
||||
或::
|
||||
|
||||
insmod c101 hw=irq,ram[:irq,...]
|
||||
|
||||
示例::
|
||||
|
||||
insmod c101 hw=9,0xdc000
|
||||
|
||||
若直接编译进内核,这些驱动需要通过内核(命令行)参数配置::
|
||||
|
||||
n2.hw=io,irq,ram,ports:...
|
||||
|
||||
或::
|
||||
|
||||
c101.hw=irq,ram:...
|
||||
|
||||
|
||||
|
||||
若您的N2、C101或PLX200SYN板卡出现问题,可通过"private"
|
||||
命令查看端口数据包描述符环(显示在内核日志中)
|
||||
|
||||
sethdlc hdlc0 private
|
||||
|
||||
硬件驱动需使用#define DEBUG_RINGS编译选项构建。
|
||||
在提交错误报告时附上这些信息将很有帮助。如在使用过程中遇
|
||||
到任何问题,请随时告知。
|
||||
|
||||
获取补丁和其他信息,请访问:
|
||||
<http://www.kernel.org/pub/linux/utils/net/hdlc/>.
|
||||
|
|
@@ -27,6 +27,9 @@
   xfrm_proc
   netmem
   alias
   mptcp-sysctl
   generic-hdlc
   timestamping

Todolist:

@@ -76,7 +79,6 @@ Todolist:
 * eql
 * fib_trie
 * filter
 * generic-hdlc
 * generic_netlink
 * netlink_spec/index
 * gen_stats

@@ -96,7 +98,6 @@ Todolist:
 * mctp
 * mpls-sysctl
 * mptcp
 * mptcp-sysctl
 * multiqueue
 * multi-pf-netdev
 * net_cachelines/index

@@ -126,7 +127,6 @@ Todolist:
 * sctp
 * secid
 * seg6-sysctl
 * skbuff
 * smc-sysctl
 * sriov
 * statistics

@@ -138,7 +138,6 @@ Todolist:
 * tcp_ao
 * tcp-thin
 * team
 * timestamping
 * tipc
 * tproxy
 * tuntap

@@ -0,0 +1,139 @@
.. SPDX-License-Identifier: GPL-2.0

.. include:: ../disclaimer-zh_CN.rst

:Original: Documentation/networking/mptcp-sysctl.rst

:翻译:

 孙渔喜 Sun yuxi <sun.yuxi@zte.com.cn>

================
MPTCP Sysfs 变量
================

/proc/sys/net/mptcp/* Variables
===============================

add_addr_timeout - INTEGER (秒)
	设置ADD_ADDR控制消息的重传超时时间。当MPTCP对端未确认
	先前的ADD_ADDR消息时,将在该超时时间后重新发送。

	默认值与TCP_RTO_MAX相同。此为每个命名空间的sysctl参数。

	默认值:120

allow_join_initial_addr_port - BOOLEAN
	控制是否允许对端向初始子流使用的IP地址和端口号发送加入
	请求(1表示允许)。此参数会设置连接时发送给对端的标志位,
	并决定是否接受此类加入请求。

	通过ADD_ADDR通告的地址不受此参数影响。

	此为每个命名空间的sysctl参数。

	默认值:1

available_path_managers - STRING
	显示已注册的可用路径管理器选项。可能有更多路径管理器可用
	但尚未加载。

available_schedulers - STRING
	显示已注册的可用调度器选项。可能有更多数据包调度器可用
	但尚未加载。

blackhole_timeout - INTEGER (秒)
	当发生MPTCP防火墙黑洞问题时,初始禁用活跃MPTCP套接字上MPTCP
	功能的时间(秒)。如果在重新启用MPTCP后立即检测到更多黑洞问题,
	此时间段将呈指数增长;当黑洞问题消失时,将重置为初始值。

	设置为0可禁用黑洞检测功能。此为每个命名空间的sysctl参数。

	默认值:3600

checksum_enabled - BOOLEAN
	控制是否启用DSS校验和功能。

	当值为非零时可启用DSS校验和。此为每个命名空间的sysctl参数。

	默认值:0

close_timeout - INTEGER (秒)
	设置"先断后连"超时时间:在未调用close或shutdown系统调用时,
	MPTCP套接字将在最后一个子流移除后保持当前状态达到该时长,才
	会转为TCP_CLOSE状态。

	默认值与TCP_TIMEWAIT_LEN相同。此为每个命名空间的sysctl参数。

	默认值:60

enabled - BOOLEAN
	控制是否允许创建MPTCP套接字。

	当值为1时允许创建MPTCP套接字。此为每个命名空间的sysctl参数。

	默认值:1(启用)
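
作为补充,下面给出一个最小的用户空间草图(非原文内容,仅作演示):当 enabled
为 1 时,应用可以通过 IPPROTO_MPTCP 创建 MPTCP 套接字;当其为 0 或内核未
编译 MPTCP 支持时,socket() 会以 ENOPROTOOPT 或 EPROTONOSUPPORT 之类的错
误码失败。

.. code-block:: c

	#include <assert.h>
	#include <errno.h>
	#include <stdio.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <unistd.h>

	#ifndef IPPROTO_MPTCP
	#define IPPROTO_MPTCP 262   /* 较老的 libc 头文件可能缺少此定义 */
	#endif

	int main(void)
	{
		/* 尝试创建 MPTCP 套接字;能否成功取决于
		 * net.mptcp.enabled 以及内核是否启用了 CONFIG_MPTCP */
		int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);

		if (fd >= 0) {
			puts("MPTCP socket created");
			close(fd);
		} else {
			/* enabled=0 或内核不支持时的典型错误码 */
			assert(errno == ENOPROTOOPT ||
			       errno == EPROTONOSUPPORT || errno == EINVAL);
			perror("socket(IPPROTO_MPTCP)");
		}
		return 0;
	}

若 socket() 成功,则后续的 connect()/listen() 行为与普通 TCP 套接字一致,
内核会按本文档中各 sysctl 的设置管理子流与地址通告。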
path_manager - STRING
	设置用于每个新MPTCP套接字的默认路径管理器名称。内核路径管理将
	根据通过MPTCP netlink API配置的每个命名空间值来控制子流连接
	和地址通告。用户空间路径管理将每个MPTCP连接的子流连接决策和地
	址通告交由特权用户空间程序控制,代价是需要更多netlink流量来
	传播所有相关事件和命令。

	此为每个命名空间的sysctl参数。

	* "kernel" - 内核路径管理器
	* "userspace" - 用户空间路径管理器

	默认值:"kernel"

pm_type - INTEGER
	设置用于每个新MPTCP套接字的默认路径管理器类型。内核路径管理将
	根据通过MPTCP netlink API配置的每个命名空间值来控制子流连接
	和地址通告。用户空间路径管理将每个MPTCP连接的子流连接决策和地
	址通告交由特权用户空间程序控制,代价是需要更多netlink流量来
	传播所有相关事件和命令。

	此为每个命名空间的sysctl参数。

	自v6.15起已弃用,请改用path_manager参数。

	* 0 - 内核路径管理器
	* 1 - 用户空间路径管理器

	默认值:0

scheduler - STRING
	选择所需的调度器类型。

	支持选择不同的数据包调度器。此为每个命名空间的sysctl参数。

	默认值:"default"

stale_loss_cnt - INTEGER
	用于判定子流失效(stale)的MPTCP层重传间隔次数阈值。当指定
	子流在连续多个重传间隔内既无数据传输又有待处理数据时,将被标
	记为失效状态。失效子流将被数据包调度器忽略。

	设置较低的stale_loss_cnt值可实现快速主备切换,较高的值则能
	最大化边缘场景(如高误码率链路或对端暂停数据处理等异常情况)
	的链路利用率。

	此为每个命名空间的sysctl参数。

	默认值:4

syn_retrans_before_tcp_fallback - INTEGER
	在回退到 TCP(即丢弃 MPTCP 选项)之前,SYN + MP_CAPABLE
	报文的重传次数。换句话说,如果所有报文在传输过程中都被丢弃,
	那么将会:

	* 首次SYN携带MPTCP支持选项
	* 按本参数值重传携带MPTCP选项的SYN包
	* 后续重传将不再携带MPTCP支持选项

	0 表示首次重传即丢弃MPTCP选项。
	>=128 表示所有SYN重传均保留MPTCP选项。设置过低的值可能增加
	MPTCP黑洞误判几率。此为每个命名空间的sysctl参数。

	默认值:2

@@ -0,0 +1,674 @@
.. SPDX-License-Identifier: GPL-2.0

.. include:: ../disclaimer-zh_CN.rst

:Original: Documentation/networking/timestamping.rst

:翻译:

 王亚鑫 Wang Yaxin <wang.yaxin@zte.com.cn>

======
时间戳
======


1. 控制接口
===========

接收网络数据包时间戳的接口包括:

SO_TIMESTAMP
  为每个传入数据包生成(不一定是单调的)系统时间时间戳。通过 recvmsg()
  在控制消息中以微秒分辨率报告时间戳。
  SO_TIMESTAMP 根据架构类型和 libc 中 time_t 的表示方式定义为
  SO_TIMESTAMP_NEW 或 SO_TIMESTAMP_OLD。
  SO_TIMESTAMP_OLD 和 SO_TIMESTAMP_NEW 的控制消息格式分别为
  struct __kernel_old_timeval 和 struct __kernel_sock_timeval。

SO_TIMESTAMPNS
  与 SO_TIMESTAMP 相同的时间戳机制,但以 struct timespec 格式、
  纳秒分辨率报告时间戳。
  SO_TIMESTAMPNS 根据架构类型和 libc 中 time_t 的表示方式定义为
  SO_TIMESTAMPNS_NEW 或 SO_TIMESTAMPNS_OLD。
  控制消息格式对于 SO_TIMESTAMPNS_OLD 为 struct timespec,
  对于 SO_TIMESTAMPNS_NEW 为 struct __kernel_timespec。

IP_MULTICAST_LOOP + SO_TIMESTAMP[NS]
  仅用于多播:通过读取回环数据包接收时间戳,获得近似的传输时间戳。

SO_TIMESTAMPING
  在接收、传输或两者时生成时间戳。支持多个时间戳源,包括硬件。
  支持为流套接字生成时间戳。


1.1 SO_TIMESTAMP(也包括 SO_TIMESTAMP_OLD 和 SO_TIMESTAMP_NEW)
---------------------------------------------------------------

此套接字选项在接收路径上启用数据报的时间戳。由于目标套接字(如果有)
在网络栈早期未知,因此必须为所有数据包启用此功能。所有早期接收的时间
戳选项也是如此。

有关接口详细信息,请参阅 `man 7 socket`。

始终使用 SO_TIMESTAMP_NEW 时间戳以获得 struct __kernel_sock_timeval
格式的时间戳。

如果时间在 2038 年后,SO_TIMESTAMP_OLD 在 32 位机器上将返回错误的时间戳。
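
下面是一个最小的用户空间草图(非原文内容,仅作演示;假设 Linux 回环环境):
在 UDP 套接字上启用 SO_TIMESTAMP,然后从 recvmsg() 的控制消息中读取
SCM_TIMESTAMP 接收时间戳。

.. code-block:: c

	#include <assert.h>
	#include <string.h>
	#include <stdio.h>
	#include <sys/socket.h>
	#include <sys/time.h>
	#include <netinet/in.h>
	#include <unistd.h>

	/* 返回 0 表示成功取得接收时间戳,并填入 *tv */
	static int recv_with_timestamp(struct timeval *tv)
	{
		int rx = socket(AF_INET, SOCK_DGRAM, 0);
		int tx = socket(AF_INET, SOCK_DGRAM, 0);
		int one = 1;
		struct sockaddr_in addr = { .sin_family = AF_INET };
		socklen_t alen = sizeof(addr);
		char data[16], ctrl[256];
		struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
		struct msghdr msg = {
			.msg_iov = &iov, .msg_iovlen = 1,
			.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
		};
		struct cmsghdr *cm;

		addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
		if (rx < 0 || tx < 0)
			return -1;
		/* 为所有传入数据包启用微秒级软件接收时间戳 */
		if (setsockopt(rx, SOL_SOCKET, SO_TIMESTAMP, &one, sizeof(one)) < 0)
			return -1;
		if (bind(rx, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
		    getsockname(rx, (struct sockaddr *)&addr, &alen) < 0 ||
		    sendto(tx, "ping", 4, 0,
			   (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
		    recvmsg(rx, &msg, 0) < 0)
			return -1;
		for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
			if (cm->cmsg_level == SOL_SOCKET &&
			    cm->cmsg_type == SCM_TIMESTAMP) {
				memcpy(tv, CMSG_DATA(cm), sizeof(*tv));
				close(rx); close(tx);
				return 0;
			}
		}
		close(rx); close(tx);
		return -1;
	}

	int main(void)
	{
		struct timeval tv;

		assert(recv_with_timestamp(&tv) == 0);
		assert(tv.tv_sec > 0);	/* 系统时间时间戳应为非零 */
		printf("rx timestamp: %ld.%06ld\n",
		       (long)tv.tv_sec, (long)tv.tv_usec);
		return 0;
	}

注意此处使用的是遗留的 SO_TIMESTAMP 名称;根据 libc 的 time_t 表示,它会
解析为 SO_TIMESTAMP_OLD 或 SO_TIMESTAMP_NEW。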

1.2 SO_TIMESTAMPNS(也包括 SO_TIMESTAMPNS_OLD 和 SO_TIMESTAMPNS_NEW)
---------------------------------------------------------------------

此选项与 SO_TIMESTAMP 相同,但返回数据类型有所不同。其 struct timespec
能达到比 SO_TIMESTAMP 的 timeval(微秒)更高的分辨率(纳秒)时间戳。

始终使用 SO_TIMESTAMPNS_NEW 时间戳以获得 struct __kernel_timespec 格式
的时间戳。

如果时间在 2038 年后,SO_TIMESTAMPNS_OLD 在 32 位机器上将返回错误的时间戳。

1.3 SO_TIMESTAMPING(也包括 SO_TIMESTAMPING_OLD 和 SO_TIMESTAMPING_NEW)
------------------------------------------------------------------------

支持多种类型的时间戳请求。因此,此套接字选项接受标志位图,而不是布尔值。在::

  err = setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &val, sizeof(val));

调用中,val 是设置了以下任意标志位的整数。设置其他位将返回 EINVAL 且不更改
当前状态。

这个套接字选项配置以下几个方面的时间戳生成:
为单个 sk_buff 结构体生成时间戳(1.3.1);
将时间戳报告到套接字的错误队列(1.3.2);
配置相关选项(1.3.3);
也可以通过 cmsg 为单个 sendmsg 调用启用时间戳生成(1.3.4)。

1.3.1 时间戳生成
^^^^^^^^^^^^^^^^

某些位是向协议栈请求尝试生成时间戳。它们的任何组合都是有效的。对这些位的更改适
用于新创建的数据包,而不是已经在协议栈中的数据包。因此,可以通过在两个 setsockopt
调用之间嵌入 send() 调用来选择性地为数据包子集请求时间戳(例如,用于采样):
一个调用启用时间戳生成,另一个调用禁用它。时间戳也可能由于特定套接字请求之外的
原因而生成,例如前述系统范围内启用接收时间戳的情况。

SOF_TIMESTAMPING_RX_HARDWARE:
  请求由网络适配器生成的接收时间戳。

SOF_TIMESTAMPING_RX_SOFTWARE:
  当数据进入内核时请求接收时间戳。这些时间戳在设备驱动程序将数据包交给内核接收
  协议栈后生成。

SOF_TIMESTAMPING_TX_HARDWARE:
  请求由网络适配器生成的传输时间戳。此标志可以通过套接字选项和控制消息启用。

SOF_TIMESTAMPING_TX_SOFTWARE:
  当数据离开内核时请求传输(TX)时间戳。这些时间戳由设备驱动程序生成,并且尽
  可能贴近网络接口发送点,但始终在内核将数据包传递给网络接口之前生成。因此,
  它们需要驱动程序支持,且可能并非所有设备都可用。此标志可通过套接字选项和
  控制消息两种方式启用。

SOF_TIMESTAMPING_TX_SCHED:
  在进入数据包调度器之前请求传输时间戳。内核传输延迟(如果很长)通常由排队
  延迟主导。此时间戳与在 SOF_TIMESTAMPING_TX_SOFTWARE 处获取的时间戳之
  间的差异将暴露此延迟,并且与协议处理无关。协议处理中产生的延迟(如果有)
  可以通过从 send() 之前立即获取的用户空间时间戳中减去此时间戳来计算。在
  具有虚拟设备的机器上,传输的数据包通过多个设备和多个数据包调度器,在每层
  生成时间戳。这允许对排队延迟进行细粒度测量。此标志可以通过套接字选项和控
  制消息启用。

SOF_TIMESTAMPING_TX_ACK:
  请求在发送缓冲区中的所有数据都已得到确认时生成传输(TX)时间戳。此选项
  仅适用于可靠协议,目前仅在TCP协议中实现。对于该协议,它可能会过度报告
  测量结果,因为时间戳是在 send() 调用时缓冲区中的所有数据(包括该缓冲区)
  都被确认时生成的,即累积确认。该机制会忽略选择确认(SACK)和前向确认
  (FACK)。此标志可通过套接字选项和控制消息两种方式启用。

SOF_TIMESTAMPING_TX_COMPLETION:
  在数据包传输完成时请求传输时间戳。完成时间戳由内核在从硬件接收数据包完成
  报告时生成。硬件可能一次报告多个数据包,完成时间戳反映报告的时序而不是实
  际传输时间。此标志可以通过套接字选项和控制消息启用。


1.3.2 时间戳报告
^^^^^^^^^^^^^^^^

其他三个位控制将在生成的控制消息中报告哪些时间戳。对这些位的更改在协议栈中
的时间戳报告位置立即生效。仅当数据包设置了相关的时间戳生成请求时,才会报告
其时间戳。

SOF_TIMESTAMPING_SOFTWARE:
  在可用时报告任何软件时间戳。

SOF_TIMESTAMPING_SYS_HARDWARE:
  此选项已被弃用和忽略。

SOF_TIMESTAMPING_RAW_HARDWARE:
  在可用时报告由 SOF_TIMESTAMPING_TX_HARDWARE 或 SOF_TIMESTAMPING_RX_HARDWARE
  生成的硬件时间戳。


1.3.3 时间戳选项
^^^^^^^^^^^^^^^^

接口支持以下选项:

SOF_TIMESTAMPING_OPT_ID:
  为每个数据包生成一个唯一标识符。一个进程可以同时存在多个未完成的时间戳请求。
  数据包在传输路径中可能会发生重排序(例如在数据包调度器中)。在这种情况下,
  时间戳会以与原始 send() 调用不同的顺序排队到错误队列中。如此一来,仅根据
  时间戳顺序或 payload(有效载荷)检查,并不总能将时间戳与原始 send() 调用
  唯一匹配。

  此选项在 send() 时将每个数据包与唯一标识符关联,并与时间戳一起返回。
  标识符源自每个套接字的 u32 计数器(会回绕)。对于数据报套接字,计数器
  随每个发送的数据包递增。对于流套接字,它随每个字节递增。对于流套接字,
  还要设置 SOF_TIMESTAMPING_OPT_ID_TCP,请参阅下面的部分。

  计数器从零开始,在首次启用该套接字选项时初始化,并在禁用后重新启用该选
  项时重置。重置计数器不会更改系统中现有数据包的标识符。

  此选项仅针对传输时间戳实现。在这种情况下,时间戳总是与 sock_extended_err
  结构体一起回环。该选项会修改 ee_data 字段,以传递一个在该套接字所有同时
  存在的未完成时间戳请求中唯一的 ID。

  进程可以通过控制消息 SCM_TS_OPT_ID(TCP 套接字不支持)传递特定 ID,
  从而选择性地覆盖默认生成的 ID,示例如下::

    struct msghdr *msg;
    ...
    cmsg = CMSG_FIRSTHDR(msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_TS_OPT_ID;
    cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
    *((__u32 *) CMSG_DATA(cmsg)) = opt_id;
    err = sendmsg(fd, msg, 0);


SOF_TIMESTAMPING_OPT_ID_TCP:
  供新的 TCP 时间戳应用程序与 SOF_TIMESTAMPING_OPT_ID 一起传递。
  SOF_TIMESTAMPING_OPT_ID 定义了流套接字计数器的增量,但其起始点
  并不完全显而易见。此选项修正了这一点。

  对于流套接字,如果设置了 SOF_TIMESTAMPING_OPT_ID,则此选项应始终
  一并设置。在数据报套接字上,此选项没有效果。

  一个合理的期望是系统调用后计数器重置为零,因此后续写入 N 字节将生成
  计数器为 N-1 的时间戳。SOF_TIMESTAMPING_OPT_ID_TCP 在所有条件下
  都实现了此行为。

  不带此修饰符的 SOF_TIMESTAMPING_OPT_ID 通常报告相同的计数值,尤其
  是在没有数据在传输时设置该套接字选项的情况下。如果设置时正有数据在传
  输,计数值可能与输出队列长度(SIOCOUTQ)存在偏差。

  差异源于计数基于 snd_una 还是 write_seq:snd_una 是对端已确认的流
  偏移量,它取决于网络 RTT 等外部因素;write_seq 是进程写入的最后一个
  字节,此偏移量不受外部输入影响。

  差异很细微,在设置套接字选项时无数据在传输的情况下不易察觉,但
  SOF_TIMESTAMPING_OPT_ID_TCP 的行为在任何时候都更稳健。

SOF_TIMESTAMPING_OPT_CMSG:
  支持所有带时间戳数据包的 recv() cmsg。对于所有带接收时间戳的数据包
  以及带发送时间戳的 IPv6 数据包,控制消息已被无条件支持。此选项将其扩
  展到带发送时间戳的 IPv4 数据包。一个用例是同时启用套接字选项
  IP_PKTINFO,以便将数据包与其出口设备关联。

SOF_TIMESTAMPING_OPT_TSONLY:
  仅适用于传输时间戳。使内核随一个空数据包(而非原始数据包)返回 cmsg。
  这减少了计入套接字接收预算(SO_RCVBUF)的内存量,并且即使在 sysctl
  net.core.tstamp_allow_data 为 0 时也能提供时间戳。此选项禁用
  SOF_TIMESTAMPING_OPT_CMSG。

SOF_TIMESTAMPING_OPT_STATS:
  随传输时间戳一起获取的可选统计信息。它必须与 SOF_TIMESTAMPING_OPT_TSONLY
  一起使用。当传输时间戳可用时,统计信息可在类型为 SCM_TIMESTAMPING_OPT_STATS
  的单独控制消息中获取,作为 TLV(struct nlattr)类型的列表。这些统计信息允许应
  用程序将各种传输层统计信息与传输时间戳关联,例如某个数据块被对端的接收窗口限
  制了多长时间。

SOF_TIMESTAMPING_OPT_PKTINFO:
  对带有硬件时间戳的传入数据包启用 SCM_TIMESTAMPING_PKTINFO 控制消息。
  消息包含 struct scm_ts_pktinfo,它提供接收数据包的实际接口索引和层 2 长度。
  只有在 CONFIG_NET_RX_BUSY_POLL 启用且驱动程序使用 NAPI 时,才会返回非零的
  有效接口索引。该结构还包含另外两个字段,但它们是保留字段且未定义。

SOF_TIMESTAMPING_OPT_TX_SWHW:
  请求在 SOF_TIMESTAMPING_TX_HARDWARE 和 SOF_TIMESTAMPING_TX_SOFTWARE
  同时启用时,为传出数据包生成硬件和软件时间戳。如果同时生成两个时间戳,两条单
  独的消息将回环到套接字的错误队列,每条消息仅包含一个时间戳。

SOF_TIMESTAMPING_OPT_RX_FILTER:
  过滤掉虚假接收时间戳:仅当匹配的时间戳生成标志已启用时才报告接收时间戳。

  接收时间戳在入口路径中生成得较早,早于确定数据包的目的套接字。如果任一套接
  字启用了接收时间戳,则所有套接字都会收到带时间戳的数据包,包括那些通过
  SOF_TIMESTAMPING_SOFTWARE 和/或 SOF_TIMESTAMPING_RAW_HARDWARE 请求
  了时间戳报告、但未请求接收时间戳生成的套接字。这可能发生在仅请求发送时间戳时。

  收到虚假时间戳通常是无害的,进程可以忽略意外的非零值。但这会使套接字的行为
  微妙地依赖于其他套接字。此标志将套接字隔离开来,以获得更确定的行为。

鼓励新应用程序传递 SOF_TIMESTAMPING_OPT_ID 以区分时间戳,并传递
SOF_TIMESTAMPING_OPT_TSONLY,以便不受 sysctl net.core.tstamp_allow_data
设置的影响。

例外情况是当进程需要额外的 cmsg 数据时,例如 SOL_IP/IP_PKTINFO 以检测出
口网络接口。这时应传递选项 SOF_TIMESTAMPING_OPT_CMSG。此选项依赖于访问原
始数据包的内容,因此不能与 SOF_TIMESTAMPING_OPT_TSONLY 组合。


1.3.4 通过控制消息启用时间戳
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

除了套接字选项外,时间戳生成还可以按写入请求通过 cmsg 启用,仅适用于
SOF_TIMESTAMPING_TX_*(见第 1.3.1 节)。借助此功能,应用程序无需反复启用
和禁用时间戳,即可按 sendmsg() 采样时间戳::

  struct msghdr *msg;
  ...
  cmsg = CMSG_FIRSTHDR(msg);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SO_TIMESTAMPING;
  cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
  *((__u32 *) CMSG_DATA(cmsg)) = SOF_TIMESTAMPING_TX_SCHED |
                                 SOF_TIMESTAMPING_TX_SOFTWARE |
                                 SOF_TIMESTAMPING_TX_ACK;
  err = sendmsg(fd, msg, 0);

通过 cmsg 设置的 SOF_TIMESTAMPING_TX_* 标志将覆盖通过 setsockopt 设置的
SOF_TIMESTAMPING_TX_* 标志。

此外,应用程序仍然需要通过 setsockopt 启用时间戳报告以接收时间戳::

  __u32 val = SOF_TIMESTAMPING_SOFTWARE |
              SOF_TIMESTAMPING_OPT_ID /* 或任何其他标志 */;
  err = setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &val, sizeof(val));


1.4 字节流时间戳
----------------

SO_TIMESTAMPING 接口支持为字节流打时间戳。每个请求被解释为对整个缓冲区内容
通过时间戳点时刻的请求。也就是说,对于流套接字,SOF_TIMESTAMPING_TX_SOFTWARE
将在缓冲区的所有字节都到达设备驱动程序时记录时间戳,无论数据被拆分成多少个数据包。

一般来说,字节流没有自然分隔符,因此将时间戳与数据相关联并非易事。一个字节范围
可能跨越多个段,任何段都可能被合并(可能与先前分段缓冲区关联的独立 send() 调用
合并)。段也可能被重新排序;对于实现重传的协议,同一字节范围还可能同时存在于多
个段中。

所有时间戳必须实现相同的语义,否则它们是不可比较的。仅对"罕见"的角落情况采取
与简单情况(缓冲区到 skb 的 1:1 映射)不同的处理方式是不够的,因为性能调试通
常恰恰需要关注这些异常。

在实践中,只要时间戳语义和测量时序选择得当,就可以将时间戳与字节流的段一致地关
联。这一挑战与确定 IP 分片策略并无不同:在那里的定义是只对第一个分片打时间戳。
对于字节流,我们选择仅在所有字节通过某个点时生成时间戳。SOF_TIMESTAMPING_TX_ACK
的这一定义易于实现和解释;而需要考虑 SACK 的实现会复杂得多,因为可能存在传输
空洞和乱序到达。

在主机侧,TCP 还可能通过 Nagle、cork、autocork、分段和 GSO 打破简单的缓冲区
到 skbuff 的 1:1 映射。实现通过跟踪每次 send() 传入的最后一个字节来确保所有
情况下的正确性,即使在 skbuff 扩展或合并操作之后它不再是最后一个字节。相关的
序列号存储在 skb_shinfo(skb)->tskey 中。由于一个 skbuff 只有一个这样的字段,
所以只能生成一个时间戳。

在罕见情况下,如果两个请求折叠到同一个 skb,则时间戳请求可能会被错过。进程可
以通过始终在请求之间刷新 TCP 栈来检测此情况,例如启用 TCP_NODELAY 并禁用
TCP_CORK 和 autocork。在 linux-4.7 之后,更好的预防合并方法是在 sendmsg()
时使用 MSG_EOR 标志。

这些预防措施确保时间戳仅在所有字节通过时间戳点时生成,前提是网络栈本身不会重
新排序段。协议栈确实会尽力避免重新排序。唯一的例外在管理员的控制范围内:可以
构造一个数据包调度器配置,使来自同一流的不同段经历不同的延迟。这种设置并不常见。


2 数据接口
==========

时间戳通过 recvmsg() 的辅助数据功能读取。请参阅 `man 3 cmsg` 了解此接口的
详细信息。套接字手册页面(`man 7 socket`)描述了如何检索 SO_TIMESTAMP 和
SO_TIMESTAMPNS 生成的数据包时间戳。


2.1 SCM_TIMESTAMPING 记录
-------------------------

这些时间戳通过 cmsg_level 为 SOL_SOCKET、cmsg_type 为 SCM_TIMESTAMPING
的控制消息返回,其负载类型为

对于 SO_TIMESTAMPING_OLD::

  struct scm_timestamping {
          struct timespec ts[3];
  };

对于 SO_TIMESTAMPING_NEW::

  struct scm_timestamping64 {
          struct __kernel_timespec ts[3];
  };

始终使用 SO_TIMESTAMPING_NEW 时间戳,以始终获得 struct scm_timestamping64
格式的时间戳。

如果时间在 2038 年后,SO_TIMESTAMPING_OLD 在 32 位机器上将返回错误的时间戳。

该结构最多可以返回三个时间戳。这是一个遗留特性。任何时候至少有一个字段不为零。
大多数时间戳通过 ts[0] 传递。硬件时间戳通过 ts[2] 传递。

ts[1] 以前用于存放转换为系统时间的硬件时间戳。现在的做法是将硬件时钟设备直接
暴露为 HW PTP 时钟源,由用户空间进行时间转换,并可选地借助用户空间 PTP 协议
栈(如 linuxptp)同步系统时间。关于 PTP 时钟 API,请参阅
Documentation/driver-api/ptp.rst。

注意,如果同时启用了 SO_TIMESTAMP 或 SO_TIMESTAMPNS,以及带
SOF_TIMESTAMPING_SOFTWARE 的 SO_TIMESTAMPING,那么当真实的软件时间戳缺失
时,会在 recvmsg() 调用时生成一个虚假的软件时间戳并通过 ts[0] 传递。对于硬
件传输时间戳也是如此。

2.1.1 传输时间戳与 MSG_ERRQUEUE
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

对于传输时间戳,传出数据包连同其发送时间戳一起回环到套接字的错误队列。进程通
过调用带 MSG_ERRQUEUE 标志的 recvmsg() 接收时间戳,并传递一个足够大的
msg_control 缓冲区以接收相关的元数据结构。recvmsg 调用返回原始传出数据包,
并附加两个辅助消息。

其中一个辅助消息的 cmsg_level 为 SOL_IP(V6)、cmsg_type 为 IP(V6)_RECVERR,
内嵌 struct sock_extended_err,用于定义错误类型。对于时间戳,ee_errno 字段
为 ENOMSG。另一个辅助消息的 cmsg_level 为 SOL_SOCKET、cmsg_type 为
SCM_TIMESTAMPING,内嵌 struct scm_timestamping。
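
上述读取流程可以归纳为如下用户空间草图(非原文内容,仅作演示;假设 Linux 回环
环境,且回环设备支持软件 TX 时间戳):为一次 UDP 发送请求软件 TX 时间戳,然后
用 poll() 等待并从错误队列读回 SCM_TIMESTAMPING。

.. code-block:: c

	#include <assert.h>
	#include <poll.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <linux/net_tstamp.h>
	#include <linux/errqueue.h>
	#include <unistd.h>

	/* 返回 0 表示从错误队列成功取回一条 SCM_TIMESTAMPING 记录 */
	static int fetch_tx_timestamp(struct scm_timestamping *tss)
	{
		int tx = socket(AF_INET, SOCK_DGRAM, 0);
		int rx = socket(AF_INET, SOCK_DGRAM, 0);
		int flags = SOF_TIMESTAMPING_TX_SOFTWARE | /* 生成:数据离开内核时 */
			    SOF_TIMESTAMPING_SOFTWARE;     /* 报告:软件时间戳 */
		struct sockaddr_in addr = { .sin_family = AF_INET };
		socklen_t alen = sizeof(addr);
		char ctrl[512], data[64];
		struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
		struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
				      .msg_control = ctrl,
				      .msg_controllen = sizeof(ctrl) };
		struct pollfd pfd = { .fd = tx }; /* POLLERR 无需显式请求 */
		struct cmsghdr *cm;

		addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
		if (tx < 0 || rx < 0 ||
		    setsockopt(tx, SOL_SOCKET, SO_TIMESTAMPING,
			       &flags, sizeof(flags)) < 0 ||
		    bind(rx, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
		    getsockname(rx, (struct sockaddr *)&addr, &alen) < 0 ||
		    sendto(tx, "ping", 4, 0,
			   (struct sockaddr *)&addr, sizeof(addr)) < 0)
			return -1;
		if (poll(&pfd, 1, 2000) <= 0)	/* 等待时间戳进入错误队列 */
			return -1;
		if (recvmsg(tx, &msg, MSG_ERRQUEUE) < 0)
			return -1;
		for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
			if (cm->cmsg_level == SOL_SOCKET &&
			    cm->cmsg_type == SCM_TIMESTAMPING) {
				memcpy(tss, CMSG_DATA(cm), sizeof(*tss));
				return 0;
			}
		}
		return -1;
	}

	int main(void)
	{
		struct scm_timestamping tss;

		assert(fetch_tx_timestamp(&tss) == 0);
		assert(tss.ts[0].tv_sec > 0);	/* 软件时间戳位于 ts[0] */
		printf("tx timestamp: %ld.%09ld\n",
		       (long)tss.ts[0].tv_sec, (long)tss.ts[0].tv_nsec);
		return 0;
	}

完整的应用还会同时解析 IP_RECVERR 辅助消息(包含 struct sock_extended_err)
以确认 ee_errno 为 ENOMSG;为简洁起见此处省略。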


2.1.1.2 时间戳类型
~~~~~~~~~~~~~~~~~~

三个 struct timespec 的语义由 struct sock_extended_err 中的 ee_info 字段
定义。它包含一个 SCM_TSTAMP_* 类型,用于定义实际传递到 scm_timestamping
的时间戳。

SCM_TSTAMP_* 类型与之前讨论的 SOF_TIMESTAMPING_* 控制字段一一对应,只有一
个例外:由于历史原因,SCM_TSTAMP_SND 等于零,它既可以对应
SOF_TIMESTAMPING_TX_HARDWARE,也可以对应 SOF_TIMESTAMPING_TX_SOFTWARE。
如果 ts[2] 不为零则是前者,否则是后者,此时时间戳存储在 ts[0] 中。


2.1.1.3 分片
~~~~~~~~~~~~

传出数据报的分片很少见,但可能发生,例如通过显式禁用 PMTU 发现。如果传出数据
包被分片,则仅对第一个分片打时间戳,并将其返回给发送套接字。


2.1.1.4 数据包负载
~~~~~~~~~~~~~~~~~~

调用应用程序通常不关心接收它传递给协议栈的整个数据包负载:套接字错误队列机制
仅是一种将时间戳附加到数据上的方法。在这种情况下,应用程序可以选择读取较小的
数据报,甚至长度为 0 的数据报,负载会相应地被截断。但在进程对错误队列调用
recvmsg() 之前,整个数据包仍在队列中,占用 SO_RCVBUF 预算。


2.1.1.5 阻塞读取
~~~~~~~~~~~~~~~~

从错误队列读取始终是非阻塞操作。要阻塞等待时间戳,请使用 poll 或 select。
当错误队列中有数据时,poll() 将在 pollfd.revents 中返回 POLLERR。无需在
pollfd.events 中传递此标志;请求时该标志会被忽略。另请参阅 `man 2 poll`。


2.1.2 接收时间戳
^^^^^^^^^^^^^^^^

在接收时,没有理由从套接字错误队列读取。SCM_TIMESTAMPING 辅助数据与数据包
数据一起通过正常的 recvmsg() 送达。由于这不是套接字错误,它不会伴随
SOL_IP(V6)/IP(V6)_RECVERR 消息。在这种情况下,struct scm_timestamping
中三个字段的含义是隐式定义的:ts[0] 在设置时包含软件时间戳,ts[1] 同样已弃
用,ts[2] 在设置时包含硬件时间戳。


3. 硬件时间戳配置:ETHTOOL_MSG_TSCONFIG_SET/GET
===============================================

对于期望执行硬件时间戳的每个设备驱动程序,还必须对硬件时间戳进行初始化。
参数在 include/uapi/linux/net_tstamp.h 中定义为::

  struct hwtstamp_config {
          int flags;     /* 目前没有定义的标志,必须为零 */
          int tx_type;   /* HWTSTAMP_TX_* */
          int rx_filter; /* HWTSTAMP_FILTER_* */
  };

期望的行为通过 tsconfig netlink 套接字 ``ETHTOOL_MSG_TSCONFIG_SET``
传递到内核,并通过 ``ETHTOOL_A_TSCONFIG_TX_TYPES``、
``ETHTOOL_A_TSCONFIG_RX_FILTERS`` 和 ``ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS``
netlink 属性相应地设置 struct hwtstamp_config。

``ETHTOOL_A_TSCONFIG_HWTSTAMP_PROVIDER`` netlink 嵌套属性用于选择
硬件时间戳的来源。它由设备源的索引和时间戳类型限定符组成。

驱动程序可以自由使用比请求更宽松的配置。预期驱动程序仅实现其能够直接支持的最
通用模式。例如,如果硬件可以支持 HWTSTAMP_FILTER_PTP_V2_EVENT,那么它通常
应将 HWTSTAMP_FILTER_PTP_V2_L2_SYNC 等请求升级为该模式,因为
HWTSTAMP_FILTER_PTP_V2_EVENT 更通用(也更实用)。

支持硬件时间戳的驱动程序应更新该结构体,并可能返回实际使用的、更宽松的配置。
如果无法对请求的数据包打时间戳,则不应更改任何内容,并返回 ERANGE(而非
EINVAL,后者表示根本不支持 SIOCSHWTSTAMP)。

只有具有管理权限的进程才能更改配置。用户空间负责确保多个进程不会相互干扰,
并确保设置被重置。

任何进程都可以通过向 tsconfig netlink 套接字请求 ``ETHTOOL_MSG_TSCONFIG_GET``
读取实际配置。

遗留的配置方式是使用 ioctl(SIOCSHWTSTAMP),传入指向 struct ifreq 的指针,
其 ifr_data 指向 struct hwtstamp_config。tx_type 和 rx_filter 是对驱动
程序预期行为的提示。如果不支持对传入数据包进行所请求的细粒度过滤,驱动程序可
以对超出请求范围的数据包打时间戳。ioctl(SIOCGHWTSTAMP) 的使用方式与
ioctl(SIOCSHWTSTAMP) 相同,但并非所有驱动程序都实现了它。
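
下面是遗留 SIOCSHWTSTAMP 路径的一个用户空间草图(非原文内容,仅作演示):
接口名 "eth0" 只是示例假设;实际调用需要 CAP_NET_ADMIN 权限以及支持硬件时
间戳的驱动程序,因此示例容忍 ioctl 失败。

.. code-block:: c

	#include <assert.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <net/if.h>
	#include <linux/net_tstamp.h>
	#include <linux/sockios.h>
	#include <unistd.h>

	/* 组装请求:对所有传出数据包打硬件时间戳,接收侧对所有数据包打时间戳 */
	static void fill_hwtstamp_req(struct ifreq *ifr,
				      struct hwtstamp_config *cfg,
				      const char *ifname)
	{
		memset(cfg, 0, sizeof(*cfg));	/* flags 必须为零 */
		cfg->tx_type = HWTSTAMP_TX_ON;
		cfg->rx_filter = HWTSTAMP_FILTER_ALL;

		memset(ifr, 0, sizeof(*ifr));
		strncpy(ifr->ifr_name, ifname, IFNAMSIZ - 1);
		ifr->ifr_data = (char *)cfg;
	}

	int main(void)
	{
		struct hwtstamp_config cfg;
		struct ifreq ifr;
		int fd;

		fill_hwtstamp_req(&ifr, &cfg, "eth0");	/* "eth0" 仅为示例 */
		assert(cfg.flags == 0 && cfg.tx_type == HWTSTAMP_TX_ON);

		fd = socket(AF_INET, SOCK_DGRAM, 0);
		if (ioctl(fd, SIOCSHWTSTAMP, &ifr) < 0)
			perror("SIOCSHWTSTAMP"); /* 无特权或驱动不支持时失败 */
		else
			/* 驱动可能已将 cfg 升级为实际采用的更宽松配置 */
			printf("tx_type=%d rx_filter=%d\n",
			       cfg.tx_type, cfg.rx_filter);
		close(fd);
		return 0;
	}

新代码应优先使用上文的 ``ETHTOOL_MSG_TSCONFIG_SET`` netlink 接口;此
ioctl 草图仅用于说明遗留路径的调用约定。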

::

  /* 可能的 hwtstamp_config->tx_type 值 */
  enum {
          /*
           * 不对传出数据包打硬件时间戳;
           * 如果到达的数据包请求了时间戳,则不进行硬件时间戳
           */
          HWTSTAMP_TX_OFF,

          /*
           * 启用传出数据包的硬件时间戳;
           * 数据包的发送者决定哪些数据包需要时间戳,
           * 方法是在发送数据包之前设置 SOF_TIMESTAMPING_TX_SOFTWARE
           */
          HWTSTAMP_TX_ON,
  };

  /* 可能的 hwtstamp_config->rx_filter 值 */
  enum {
          /* 不对任何传入数据包打时间戳 */
          HWTSTAMP_FILTER_NONE,

          /* 对所有传入数据包打时间戳 */
          HWTSTAMP_FILTER_ALL,

          /* 返回值:对所有请求的数据包以及一些其他数据包打时间戳 */
          HWTSTAMP_FILTER_SOME,

          /* PTP v1,UDP,任何事件数据包 */
          HWTSTAMP_FILTER_PTP_V1_L4_EVENT,

          /* 完整取值列表请查看文件
           * include/uapi/linux/net_tstamp.h
           */
  };

3.1 硬件时间戳实现:设备驱动程序
--------------------------------

支持硬件时间戳的驱动程序必须支持 ndo_hwtstamp_set NDO 或遗留的 SIOCSHWTSTAMP
ioctl,并按 SIOCSHWTSTAMP 一节所述,用实际采用的值更新传入的 struct
hwtstamp_config。它还应支持 ndo_hwtstamp_get 或遗留的 SIOCGHWTSTAMP。

接收数据包的时间戳必须存储在 skb 中。要获取 skb 的共享时间戳结构,请调用
skb_hwtstamps(),然后在该结构中设置时间戳::

  struct skb_shared_hwtstamps {
          /* 硬件时间戳,已转换为自任意起始点以来的持续时间 */
          ktime_t hwtstamp;
  };

传出数据包的时间戳应按如下方式生成:

- 在 hard_start_xmit() 中,检查 (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)
  是否不为零。如果是,则期望驱动程序执行硬件时间戳。
- 如果对该 skb 可行且请求了时间戳,则通过在 skb_shinfo(skb)->tx_flags 中
  设置 SKBTX_IN_PROGRESS 标志来声明驱动程序正在执行时间戳,例如::

      skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;

  您可能希望保留与该 skb 关联的指针,并且不要释放该 skb。不支持硬件时间戳
  的驱动程序则无需这样做。驱动程序绝不能触碰 sk_buff::tstamp!它用于存储
  网络子系统生成的软件时间戳。
- 驱动程序应在尽可能接近将 sk_buff 交给硬件的位置调用 skb_tx_timestamp()。
  如果请求了软件时间戳且硬件时间戳不可用(未设置 SKBTX_IN_PROGRESS),
  skb_tx_timestamp() 会提供软件时间戳。
- 一旦驱动程序发送了数据包并且/或者获取到了硬件时间戳,它就会调用
  skb_tstamp_tx(),传入原始 skb 和原始硬件时间戳。skb_tstamp_tx() 会克隆
  原始 skb 并附加时间戳,因此此后可以释放原始 skb。如果获取硬件时间戳失败,
  驱动程序不应回退到软件时间戳。理由是:这发生在处理管线中比其他软件时间戳
  更晚的时刻,可能导致时间戳之间出现差异。

3.2 堆叠 PTP 硬件时钟的特殊考虑
-------------------------------

在数据包的路径中可能存在多个 PHC(PTP 硬件时钟)。内核没有明确的机制允许用
户选择用于时间戳以太网帧的 PHC。相反,假设最外层的 PHC 始终是最优的,并且
内核驱动程序协作以实现这一目标。目前有 3 种堆叠 PHC 的情况,如下所示:

3.2.1 DSA(分布式交换架构)交换机
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

这些是以太网交换机,其一个端口连接到(对此完全不知情的)主机以太网接口,执行
端口多路复用,以及可选的转发加速功能。每个 DSA 交换机端口在用户看来都是独立的
(虚拟)网络接口,其网络 I/O 在底层通过主机接口(在 TX 上重定向到主机端口,
在 RX 上拦截帧)执行。

当 DSA 交换机连接到主机端口时,PTP 同步的精度会受到限制,因为交换机的可变排队
延迟在主机端口与其 PTP 伙伴之间引入了路径延迟抖动。因此,一些 DSA 交换机
包含自己的时间戳时钟,并具有在自身 MAC 上执行网络时间戳的能力,使路径延迟
仅测量线缆和 PHY 传播延迟。支持 Linux 的 DSA 交换机暴露了与任何其他网络
接口相同的 ABI(区别在于 DSA 接口在网络 I/O 方面实际上是虚拟的,但它们确
实有自己的 PHC)。典型情况下(但非强制),所有 DSA 交换机接口共享同一个 PHC。

按照设计,对于连接到其主机端口的 DSA 交换机的 PTP 时间戳,不需要任何特殊的
驱动程序处理。然而,当主机端口也支持 PTP 时间戳时,DSA 将负责拦截
``.ndo_eth_ioctl`` 调用,并阻止在主机端口上启用硬件时间戳的尝试。这是因为
SO_TIMESTAMPING API 不允许为同一数据包传递多个硬件时间戳,因此除了 DSA
交换机端口之外,任何其他设备都必须被阻止这样做。

在通用层,DSA 提供了以下基础设施用于 PTP 时间戳:

- ``.port_txtstamp()``:在传输带有用户空间硬件 TX 时间戳请求的数据包之前
  调用的钩子。这是必需的,因为硬件时间戳要在 MAC 实际传输之后才可用,因此
  驱动程序必须准备好将时间戳与原始数据包相关联,以便将数据包重新入队到套接
  字的错误队列。为了保存在时间戳可用时需要用到的数据包,驱动程序可以调用
  ``skb_clone_sk``,在 skb->cb 中保存克隆指针,并将其加入一个 tx skb 队列。
  通常,交换机会有一个 PTP TX 时间戳寄存器(有时是一个 FIFO)用于提供时间戳。
  在 FIFO 的情况下,硬件可能会存储 PTP 序列 ID/消息类型/域号与实际时间戳的
  键值对。为了在等待时间戳的数据包队列和实际时间戳之间正确关联,驱动程序可以
  使用 BPF 分类器(``ptp_classify_raw``)识别 PTP 传输类型,并使用
  ``ptp_parse_header`` 解释 PTP 头字段。时间戳可用时可能会触发一个 IRQ,
  或者驱动程序可能需要在向主机接口调用 ``dev_queue_xmit()`` 之后进行轮询。
  单步 TX 时间戳不需要克隆数据包,因为 PTP 协议不需要后续消息(TX 时间戳已
  嵌入到数据包中),因此用户空间不期望带 TX 时间戳的数据包被重新入队到其套
  接字的错误队列。

- ``.port_rxtstamp()``:在 RX 上,DSA 运行 BPF 分类器以识别 PTP 事件消息
  (任何其他数据包,包括 PTP 通用消息,都不打时间戳)。驱动程序收到原始的
  (也是唯一的)带时间戳 skb,以便在时间戳立即可用时为其打上时间戳,或者推
  迟处理。在接收时,时间戳可能在带内(通过 DSA 头中的元数据,或以其他方式附
  加到数据包),也可能在带外(通过另一个 RX 时间戳 FIFO)。当检索时间戳需要
  可睡眠上下文时,通常必须在 RX 上推迟处理。在这种情况下,DSA 驱动程序负责
  在打好时间戳的 skb 上调用 ``netif_rx()``。

3.2.2 以太网 PHY
^^^^^^^^^^^^^^^^

这些设备通常在网络栈中扮演第 1 层的角色,因此它们没有像 DSA 交换机那样的网络
接口表示。然而,出于性能考虑,PHY 可能能够检测并为 PTP 数据包打时间戳:在尽
可能靠近线缆处获取的时间戳,其同步更稳定、精度更高。

支持 PTP 时间戳的 PHY 驱动程序必须创建 ``struct mii_timestamper``,并在
``phydev->mii_ts`` 中保存指向它的指针。网络栈会检查 ``phydev->mii_ts``
是否存在。

由于 PHY 没有网络接口表示,PHY 的时间戳和 ethtool ioctl 操作需要通过其各自
的 MAC 驱动程序进行中介。因此,与 DSA 交换机不同,PHY 时间戳支持需要对每个
单独的 MAC 驱动程序进行修改。这包括:

- 在 ``.ndo_eth_ioctl`` 中检查 ``phy_has_hwtstamp(netdev->phydev)`` 是否
  为真。如果为真,则 MAC 驱动程序不应处理此请求,而应使用 ``phy_mii_ioctl()``
  将其传递给 PHY。

- 在 RX 上,是否需要特殊处理取决于将 skb 传递到网络栈所用的函数。对于普通的
  ``netif_rx()`` 及类似函数,MAC 驱动程序必须检查 ``skb_defer_rx_timestamp(skb)``
  是否必要,若必要则不调用 ``netif_rx()``。如果启用了
  ``CONFIG_NETWORK_PHY_TIMESTAMPING`` 且 ``skb->dev->phydev->mii_ts``
  存在,就会调用其 ``.rxtstamp()`` 钩子,以与 DSA 类似的逻辑判断是否需要推
  迟 RX 时间戳。同样与 DSA 类似,在时间戳可用时将数据包送入协议栈成为 PHY
  驱动程序的责任。

  对于其他 skb 接收函数,例如 ``napi_gro_receive`` 和 ``netif_receive_skb``,
  协议栈会自动检查 ``skb_defer_rx_timestamp()`` 是否必要,因此无需在驱动程
  序内部进行此检查。

- 在 TX 上,同样,是否需要特殊处理视情况而定。调用 ``mii_ts->txtstamp()``
  钩子的函数是 ``skb_clone_tx_timestamp()``。此函数可以直接调用(此时确实
  需要显式的 MAC 驱动程序支持),但它也会搭载在许多 MAC 驱动程序已为软件时
  间戳目的调用的 ``skb_tx_timestamp()`` 上。因此,如果 MAC 支持软件时间戳,
  则在此阶段无需再做任何事情。

3.2.3 MII 总线嗅探设备
^^^^^^^^^^^^^^^^^^^^^^

这些设备扮演与打时间戳的以太网 PHY 相同的角色,区别在于它们是分立器件,因此
可以与任何 PHY 组合,即使该 PHY 不支持时间戳。在 Linux 中,它们可以被发现并
通过设备树附加到 ``struct phy_device``,其余方面使用与 PHY 相同的 mii_ts
基础设施。参见 Documentation/devicetree/bindings/ptp/timestamper.txt 了
解更多详细信息。

3.2.4 MAC 驱动程序的其他注意事项
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

堆叠 PHC 可能会暴露 MAC 驱动程序中在非堆叠场景下无法触发的错误。一个例子涉
及下面这行代码,前面的章节已经介绍过::

  skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;

任何 TX 时间戳逻辑,无论是普通的 MAC 驱动程序、DSA 交换机驱动程序、PHY 驱动
程序还是 MII 总线嗅探设备驱动程序,都应设置此标志。但未意识到 PHC 堆叠的
MAC 驱动程序可能会因为此标志是由其他实体(而非它自己)设置的,而向用户空间传
递重复的时间戳。例如,典型的 TX 时间戳逻辑可能分为两个部分:

1. "TX":检查是否已通过 ``.ndo_eth_ioctl`` 启用了硬件时间戳
   ("``priv->hwtstamp_tx_enabled == true``"),以及当前 skb 是否请求了
   TX 时间戳("``skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP``")。如果
   均为真,则设置 "``skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS``"
   标志。注意:如上所述,在堆叠 PHC 的系统中,此条件不应触发,因为此 MAC
   肯定不是最外层的 PHC。但这并不是典型错误所在之处。此数据包随后继续传输。

2. "TX 确认":传输完成。驱动程序检查是否需要收集任何 TX 时间戳。这里才是典
   型错误所在之处:驱动程序图省事,只检查是否设置了
   "``skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS``"。在堆叠 PHC 的系
   统中这是错误的,因为此 MAC 驱动程序不是 TX 数据路径中唯一可能设置
   SKBTX_IN_PROGRESS 的实体。

此问题的正确解决方案是:MAC 驱动程序在其 "TX 确认" 部分进行复合检查,不仅检
查 "``skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS``",还检查
"``priv->hwtstamp_tx_enabled == true``"。由于系统确保 PTP 时间戳仅对最外
层 PHC 启用,这一增强检查可以避免向用户空间传递重复的 TX 时间戳。

@@ -13,6 +13,7 @@

本文档包含了在内核中使用Rust支持时需要了解的有用信息。

.. _rust_code_documentation_zh_cn:

代码文档
--------


@@ -10,7 +10,35 @@
Rust
====

与内核中的Rust有关的文档。若要开始在内核中使用Rust,请阅读 quick-start.rst 指南。

Rust 实验
---------

Rust 支持在 v6.1 版本中合并到主线,以帮助确定 Rust 作为一种语言是否适合内核,
即是否值得进行权衡。

目前,Rust 支持主要面向对 Rust 支持感兴趣的内核开发人员和维护者,
以便他们可以开始处理抽象和驱动程序,并帮助开发基础设施和工具。

如果您是终端用户,请注意,目前没有适合或旨在生产使用的内置驱动程序或模块,
并且 Rust 支持仍处于开发/实验阶段,尤其是对于特定内核配置。

代码文档
--------

给定一个内核配置,内核可能会生成 Rust 代码文档,即由 ``rustdoc`` 工具呈现的 HTML。

.. only:: rustdoc and html

   该内核文档使用 `Rust 代码文档 <rustdoc/kernel/index.html>`_ 构建。

.. only:: not rustdoc and html

   该内核文档不使用 Rust 代码文档构建。

   预生成版本提供在:https://rust.docs.kernel.org。

请参阅 :ref:`代码文档 <rust_code_documentation_zh_cn>` 部分以获取更多详细信息。

.. toctree::
   :maxdepth: 1

@@ -19,6 +47,9 @@ Rust
   general-information
   coding-guidelines
   arch-support
   testing

你还可以在 :doc:`../../../process/kernel-docs` 中找到 Rust 的学习材料。

.. only:: subproject and html

@@ -0,0 +1,215 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst

:Original: Documentation/rust/testing.rst

:翻译:

 郭杰 Ben Guo <benx.guo@gmail.com>

测试
====

本文介绍了如何在内核中测试 Rust 代码。

有三种测试类型:

- KUnit 测试
- ``#[test]`` 测试
- Kselftests

KUnit 测试
----------

这些测试来自 Rust 文档中的示例。它们会被转换为 KUnit 测试。

使用
****

这些测试可以通过 KUnit 运行。例如,在命令行中使用 ``kunit_tool``( ``kunit.py`` )::

	./tools/testing/kunit/kunit.py run --make_options LLVM=1 --arch x86_64 --kconfig_add CONFIG_RUST=y

或者,KUnit 也可以在内核启动时以内置方式运行。获取更多 KUnit 信息,请参阅
Documentation/dev-tools/kunit/index.rst。
关于内核内置与命令行测试的详细信息,请参阅 Documentation/dev-tools/kunit/architecture.rst。

要使用这些 KUnit 文档测试,需要在内核配置中启用以下选项::

	CONFIG_KUNIT
	   Kernel hacking -> Kernel Testing and Coverage -> KUnit - Enable support for unit tests
	CONFIG_RUST_KERNEL_DOCTESTS
	   Kernel hacking -> Rust hacking -> Doctests for the `kernel` crate

KUnit 测试即文档测试
********************

文档测试( *doctests* )一般用于展示函数、结构体或模块等的使用方法。

它们非常方便,因为它们就写在文档旁边。例如:

.. code-block:: rust

	/// 求和两个数字。
	///
	/// ```
	/// assert_eq!(mymod::f(10, 20), 30);
	/// ```
	pub fn f(a: i32, b: i32) -> i32 {
	    a + b
	}

在用户空间中,这些测试由 ``rustdoc`` 负责收集并运行。单独使用这个工具已经很有价值,
因为它可以验证示例能否成功编译(确保和代码保持同步),
同时还可以运行那些不依赖内核 API 的示例。

然而,在内核中,这些测试会转换成 KUnit 测试套件。
这意味着文档测试会被编译成 Rust 内核对象,从而可以在构建的内核环境中运行。

通过与 KUnit 集成,Rust 的文档测试可以复用内核现有的测试设施。
例如,内核日志会显示::

	KTAP version 1
	1..1
	KTAP version 1
	# Subtest: rust_doctests_kernel
	1..59
	# rust_doctest_kernel_build_assert_rs_0.location: rust/kernel/build_assert.rs:13
	ok 1 rust_doctest_kernel_build_assert_rs_0
	# rust_doctest_kernel_build_assert_rs_1.location: rust/kernel/build_assert.rs:56
	ok 2 rust_doctest_kernel_build_assert_rs_1
	# rust_doctest_kernel_init_rs_0.location: rust/kernel/init.rs:122
	ok 3 rust_doctest_kernel_init_rs_0
	...
	# rust_doctest_kernel_types_rs_2.location: rust/kernel/types.rs:150
	ok 59 rust_doctest_kernel_types_rs_2
	# rust_doctests_kernel: pass:59 fail:0 skip:0 total:59
	# Totals: pass:59 fail:0 skip:0 total:59
	ok 1 rust_doctests_kernel

文档测试中,也可以正常使用 `? <https://doc.rust-lang.org/reference/expressions/operator-expr.html#the-question-mark-operator>`_ 运算符,例如:

.. code-block:: rust

	/// ```
	/// # use kernel::{spawn_work_item, workqueue};
	/// spawn_work_item!(workqueue::system(), || pr_info!("x\n"))?;
	/// # Ok::<(), Error>(())
	/// ```

这些测试和普通代码一样,也可以在 ``CLIPPY=1`` 条件下通过 Clippy 进行编译,
因此可以从额外的 lint 检查中获益。

为了便于开发者定位文档测试出错的具体行号,日志会输出一条 KTAP 诊断信息。
其中标明了原始测试的文件和行号(不是 ``rustdoc`` 生成的临时 Rust 文件位置)::

	# rust_doctest_kernel_types_rs_2.location: rust/kernel/types.rs:150

Rust 测试中常用的断言宏是来自 Rust 标准库( ``core`` )中的 ``assert!`` 和 ``assert_eq!`` 宏。
内核提供了一个定制版本,这些宏的调用会被转发到 KUnit。
和 KUnit 测试不同的是,这些宏不需要传递上下文参数( ``struct kunit *`` )。
这使得它们更易于使用,同时文档的读者无需关心底层用的是什么测试框架。
此外,这种方式未来也许可以让我们更容易测试第三方代码。

当前有一个限制:KUnit 不支持在其他任务中执行断言。
因此,如果断言真的失败了,我们只是简单地把错误打印到内核日志里。
另外,文档测试不适用于非公开的函数。

作为文档中的测试示例,应当像"实际代码"一样编写。
例如:不要使用 ``unwrap()`` 或 ``expect()``,请使用 `? <https://doc.rust-lang.org/reference/expressions/operator-expr.html#the-question-mark-operator>`_ 运算符。
更多背景信息,请参阅:

https://rust.docs.kernel.org/kernel/error/type.Result.html#error-codes-in-c-and-rust

``#[test]`` 测试
----------------

此外,还有 ``#[test]`` 测试。与文档测试类似,这些测试与用户空间中的测试方式也非常相近,并且同样会映射到 KUnit。

这些测试通过 ``kunit_tests`` 过程宏引入,该宏将测试套件的名称作为参数。

例如,假设想要测试前面文档测试示例中的函数 ``f``,我们可以在定义该函数的同一文件中编写:

.. code-block:: rust

	#[kunit_tests(rust_kernel_mymod)]
	mod tests {
	    use super::*;

	    #[test]
	    fn test_f() {
	        assert_eq!(f(10, 20), 30);
	    }
	}

如果我们执行这段代码,内核日志会显示::

	KTAP version 1
	# Subtest: rust_kernel_mymod
	# speed: normal
	1..1
	# test_f.speed: normal
	ok 1 test_f
	ok 1 rust_kernel_mymod

与文档测试类似, ``assert!`` 和 ``assert_eq!`` 宏被映射回 KUnit 并且不会发生 panic。
同样,支持 `? <https://doc.rust-lang.org/reference/expressions/operator-expr.html#the-question-mark-operator>`_ 运算符,
测试函数可以什么都不返回(单元类型 ``()``)或 ``Result`` (任何 ``Result<T, E>``)。例如:

.. code-block:: rust

	#[kunit_tests(rust_kernel_mymod)]
	mod tests {
	    use super::*;

	    #[test]
	    fn test_g() -> Result {
	        let x = g()?;
	        assert_eq!(x, 30);
	        Ok(())
	    }
	}

如果我们运行测试并且调用 ``g`` 失败,那么内核日志会显示::

	KTAP version 1
	# Subtest: rust_kernel_mymod
	# speed: normal
	1..1
	# test_g: ASSERTION FAILED at rust/kernel/lib.rs:335
	Expected is_test_result_ok(test_g()) to be true, but is false
	# test_g.speed: normal
	not ok 1 test_g
	not ok 1 rust_kernel_mymod

如果 ``#[test]`` 测试可以对用户起到示例作用,那就应该改用文档测试。
即使是 API 的边界情况,例如错误或边界问题,放在示例中展示也同样有价值。
|
||||
|
||||
``rusttest`` 宿主机测试
|
||||
-----------------------
|
||||
|
||||
这类测试运行在用户空间,可以通过 ``rusttest`` 目标在构建内核的宿主机中编译并运行::
|
||||
|
||||
make LLVM=1 rusttest
|
||||
|
||||
当前操作需要内核 ``.config``。
|
||||
|
||||
目前,它们主要用于测试 ``macros`` crate 的示例。
|
||||
|
||||
Kselftests
|
||||
----------
|
||||
|
||||
Kselftests 可以在 ``tools/testing/selftests/rust`` 文件夹中找到。
|
||||
|
||||
测试所需的内核配置选项列在 ``tools/testing/selftests/rust/config`` 文件中,
|
||||
可以借助 ``merge_config.sh`` 脚本合并到现有配置中::
|
||||
|
||||
./scripts/kconfig/merge_config.sh .config tools/testing/selftests/rust/config
|
||||
|
||||
Kselftests 会在内核源码树中构建,以便在运行相同版本内核的系统上执行测试。
|
||||
|
||||
一旦安装并启动了与源码树匹配的内核,测试即可通过以下命令编译并执行::
|
||||
|
||||
make TARGETS="rust" kselftest
|
||||
|
||||
请参阅 Documentation/dev-tools/kselftest.rst 文档以获取更多信息。
|
||||
|
|
.. SPDX-License-Identifier: GPL-2.0
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/scsi/index.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
郝栋栋 doubled <doubled@leap-io-kernel.com>
|
||||
|
||||
:校译:
|
||||
|
||||
|
||||
|
||||
==========
|
||||
SCSI子系统
|
||||
==========
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
简介
|
||||
====
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
scsi
|
||||
|
||||
SCSI驱动接口
|
||||
============
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
scsi_mid_low_api
|
||||
scsi_eh
|
||||
|
||||
SCSI驱动参数
|
||||
============
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
scsi-parameters
|
||||
link_power_management_policy
|
||||
|
||||
SCSI主机适配器驱动
|
||||
==================
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
libsas
|
||||
sd-parameters
|
||||
wd719x
|
||||
|
||||
Todolist:
|
||||
|
||||
* 53c700
|
||||
* aacraid
|
||||
* advansys
|
||||
* aha152x
|
||||
* aic79xx
|
||||
* aic7xxx
|
||||
* arcmsr_spec
|
||||
* bfa
|
||||
* bnx2fc
|
||||
* BusLogic
|
||||
* cxgb3i
|
||||
* dc395x
|
||||
* dpti
|
||||
* FlashPoint
|
||||
* g_NCR5380
|
||||
* hpsa
|
||||
* hptiop
|
||||
* lpfc
|
||||
* megaraid
|
||||
* ncr53c8xx
|
||||
* NinjaSCSI
|
||||
* ppa
|
||||
* qlogicfas
|
||||
* scsi-changer
|
||||
* scsi_fc_transport
|
||||
* scsi-generic
|
||||
* smartpqi
|
||||
* st
|
||||
* sym53c500_cs
|
||||
* sym53c8xx_2
|
||||
* tcm_qla2xxx
|
||||
* ufs
|
||||
|
||||
* scsi_transport_srp/figures
|
||||
|
|
.. SPDX-License-Identifier: GPL-2.0
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/scsi/libsas.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
张钰杰 Yujie Zhang <yjzhang@leap-io-kernel.com>
|
||||
|
||||
:校译:
|
||||
|
||||
======
|
||||
SAS 层
|
||||
======
|
||||
|
||||
SAS 层是一个管理基础架构,用于管理 SAS LLDD。它位于 SCSI Core
|
||||
与 SAS LLDD 之间。 体系结构如下: SCSI Core 关注的是 SAM/SPC 相
|
||||
关的问题;SAS LLDD 及其序列控制器负责 PHY 层、OOB 信号以及链路
|
||||
管理;而 SAS 层则负责以下任务::
|
||||
|
||||
* SAS Phy、Port 和主机适配器(HA)事件管理(事件由 LLDD
|
||||
生成,由 SAS 层处理);
|
||||
* SAS 端口的管理(创建与销毁);
|
||||
* SAS 域的发现与重新验证;
|
||||
* SAS 域内设备的管理;
|
||||
* SCSI 主机的注册与注销;
|
||||
* 将设备注册到 SCSI Core(SAS 设备)或 libata(SATA 设备);
|
||||
* 扩展器的管理,并向用户空间导出扩展器控制接口。
|
||||
|
||||
SAS LLDD 是一种 PCI 设备驱动程序。它负责 PHY 层和 OOB(带外)
|
||||
信号的管理、厂商特定的任务,并向 SAS 层上报事件。
|
||||
|
||||
SAS 层实现了 SAS 1.1 规范中定义的大部分 SAS 功能。
|
||||
|
||||
sas_ha_struct 结构体用于向 SAS 层描述一个 SAS LLDD。该结构的
|
||||
大部分字段由 SAS 层使用,但其中少数字段需要由 LLDD 进行初始化。
|
||||
|
||||
在完成硬件初始化之后,应当在驱动的 probe() 函数中调用
|
||||
sas_register_ha()。该函数会将 LLDD 注册到 SCSI 子系统中,创
|
||||
建一个对应的 SCSI 主机,并将你的 SAS 驱动程序注册到其在 sysfs
|
||||
下创建的 SAS 设备树中。随后该函数将返回。接着,你需要使能 PHY,
|
||||
以启动实际的 OOB(带外)过程;此时驱动将开始调用 notify_* 系
|
||||
列事件回调函数。
|
||||
|
||||
结构体说明
|
||||
==========
|
||||
|
||||
``struct sas_phy``
|
||||
------------------
|
||||
|
||||
通常情况下,该结构体会被静态地嵌入到驱动自身定义的 PHY 结构体中,
|
||||
例如::
|
||||
|
||||
struct my_phy {
|
||||
blah;
|
||||
struct sas_phy sas_phy;
|
||||
bleh;
|
||||
}
|
||||
|
||||
随后,在主机适配器(HA)的结构体中,所有的 PHY 通常以 my_phy
|
||||
数组的形式存在(如下文所示)。
|
||||
|
||||
在初始化各个 PHY 时,除了初始化驱动自定义的 PHY 结构体外,还
|
||||
需要同时初始化其中的 sas_phy 结构体。
|
||||
|
||||
一般来说,PHY 的管理由 LLDD 负责,而端口(port)的管理由 SAS
|
||||
层负责。因此,PHY 的初始化与更新由 LLDD 完成,而端口的初始化与
|
||||
更新则由 SAS 层完成。系统设计中规定,某些字段可由 LLDD 进行读
|
||||
写,而 SAS 层只能读取这些字段;反之亦然。其设计目的是为了避免不
|
||||
必要的锁操作。
|
||||
|
||||
|
||||
|
||||
enabled
|
||||
- 必须设置(0/1)
|
||||
|
||||
id
|
||||
- 必须设置,取值范围为 [0, MAX_PHYS)
|
||||
|
||||
class, proto, type, role, oob_mode, linkrate
|
||||
- 必须设置。
|
||||
|
||||
oob_mode
|
||||
- 当 OOB(带外信号)完成后,设置此字段,然后通知 SAS 层。
|
||||
|
||||
sas_addr
|
||||
- 通常指向一个保存该 PHY 的 SAS 地址的数组,该数组可能位于
|
||||
驱动自定义的 my_phy 结构体中。
|
||||
|
||||
attached_sas_addr
|
||||
- 当 LLDD 接收到 IDENTIFY 帧或 FIS 帧时,应在通知 SAS 层
|
||||
之前设置该字段。其设计意图在于:有时 LLDD 可能需要伪造或
|
||||
提供一个与实际不同的 SAS 地址用于该 PHY/端口,而该机制允许
|
||||
LLDD 这样做。理想情况下,应将 SAS 地址从 IDENTIFY 帧中
|
||||
复制过来;对于直接连接的 SATA 设备,也可以由 LLDD 生成一
|
||||
个 SAS 地址。后续的发现过程可能会修改此字段。
|
||||
|
||||
frame_rcvd
|
||||
- 当接收到 IDENTIFY 或 FIS 帧时,将该帧复制到此处。正确的
|
||||
操作流程是获取锁 → 复制数据 → 设置 frame_rcvd_size → 释
|
||||
放锁 → 调用事件通知。该字段是一个指针,因为驱动无法精确确
|
||||
定硬件帧的大小;因此,实际的帧数据数组应定义在驱动自定义的
|
||||
PHY 结构体中,然后让此指针指向该数组。在持锁状态下,将帧从
|
||||
DMA 可访问内存区域复制到该数组中。
|
||||
|
||||
sas_prim
|
||||
- 用于存放接收到的原语(primitive)。参见 sas.h。操作流程同
|
||||
样是:获取锁 → 设置 primitive → 释放锁 → 通知事件。
|
||||
|
||||
port
|
||||
- 如果该 PHY 属于某个端口(port),此字段指向对应的 sas_port
|
||||
结构体。LLDD 仅可读取此字段。它由 SAS 层设置,用于指向当前
|
||||
PHY 所属的 sas_port。
|
||||
|
||||
ha
|
||||
- 可以由 LLDD 设置;但无论是否设置,SAS 层都会再次对其进行赋值。
|
||||
|
||||
lldd_phy
|
||||
- LLDD 应将此字段设置为指向自身定义的 PHY 结构体,这样当 SAS
|
||||
层调用某个回调并传入 sas_phy 时,驱动可以快速定位自身的 PHY
|
||||
结构体。如果 sas_phy 是嵌入式成员,也可以使用 container_of()
|
||||
宏进行访问——两种方式均可。
|
||||
|
||||
``struct sas_port``
|
||||
-------------------
|
||||
|
||||
LLDD 不应修改该结构体中的任何字段——它只能读取这些字段。这些字段的
|
||||
含义应当是不言自明的。
|
||||
|
||||
phy_mask 为 32 位,目前这一长度已足够使用,因为尚未听说有主机适配
|
||||
器拥有超过 8 个 PHY。
|
||||
|
||||
lldd_port
|
||||
- 目前尚无明确用途。不过,对于那些希望在 LLDD 内部维护自身端
|
||||
口表示的驱动,实现时可以利用该字段。
|
||||
|
||||
``struct sas_ha_struct``
|
||||
------------------------
|
||||
|
||||
它通常静态声明在你自己的 LLDD 结构中,用于描述您的适配器::
|
||||
|
||||
struct my_sas_ha {
|
||||
blah;
|
||||
struct sas_ha_struct sas_ha;
|
||||
struct my_phy phys[MAX_PHYS];
|
||||
struct sas_port sas_ports[MAX_PHYS]; /* (1) */
|
||||
bleh;
|
||||
};
|
||||
|
||||
(1) 如果你的 LLDD 没有自己的端口表示
|
||||
|
||||
以下字段需要初始化(示例函数见下文)。
|
||||
|
||||
pcidev
|
||||
^^^^^^
|
||||
|
||||
sas_addr
|
||||
- 由于 SAS 层不想介入内存分配等事务,因此该字段指向静态分配的数
|
||||
组中的某个位置(例如,在您的主机适配器结构中),并保存您或
|
||||
制造商等给出的主机适配器的 SAS 地址。
|
||||
|
||||
sas_port
|
||||
^^^^^^^^
|
||||
|
||||
sas_phy
|
||||
- 指向结构体的指针数组(参见上文关于 sas_addr 的说明)。
|
||||
这些指针必须设置。更多细节见下文说明。
|
||||
|
||||
num_phys
|
||||
- 表示 sas_phy 数组中 PHY 的数量,同时也表示 sas_port
|
||||
数组中的端口数量。端口数最多为 num_phys(对应每个端口只含一个
PHY 的情形),因此结构中不再单独使用 num_ports 字段,
|
||||
而仅使用 num_phys。
|
||||
|
||||
事件接口::
|
||||
|
||||
/* LLDD 调用以下函数来通知 SAS 类层发生事件 */
|
||||
void sas_notify_port_event(struct sas_phy *, enum port_event, gfp_t);
|
||||
void sas_notify_phy_event(struct sas_phy *, enum phy_event, gfp_t);
|
||||
|
||||
端口事件通知::
|
||||
|
||||
/* SAS 类层调用以下回调来通知 LLDD 端口事件 */
|
||||
void (*lldd_port_formed)(struct sas_phy *);
|
||||
void (*lldd_port_deformed)(struct sas_phy *);
|
||||
|
||||
如果 LLDD 希望在端口形成或解散时接收通知,则应将上述回调指针设
|
||||
置为符合函数类型定义的处理函数。
|
||||
|
||||
SAS LLDD 还应至少实现 SCSI 协议中定义的一种任务管理函数(TMFs)::
|
||||
|
||||
/* 任务管理函数. 必须在进程上下文中调用 */
|
||||
int (*lldd_abort_task)(struct sas_task *);
|
||||
int (*lldd_abort_task_set)(struct domain_device *, u8 *lun);
|
||||
int (*lldd_clear_task_set)(struct domain_device *, u8 *lun);
|
||||
int (*lldd_I_T_nexus_reset)(struct domain_device *);
|
||||
int (*lldd_lu_reset)(struct domain_device *, u8 *lun);
|
||||
int (*lldd_query_task)(struct sas_task *);
|
||||
|
||||
如需更多信息,请参考 T10.org。
|
||||
|
||||
端口与适配器管理::
|
||||
|
||||
/* 端口与适配器管理 */
|
||||
int (*lldd_clear_nexus_port)(struct sas_port *);
|
||||
int (*lldd_clear_nexus_ha)(struct sas_ha_struct *);
|
||||
|
||||
SAS LLDD 至少应实现上述函数中的一个。
|
||||
|
||||
PHY 管理::
|
||||
|
||||
/* PHY 管理 */
|
||||
int (*lldd_control_phy)(struct sas_phy *, enum phy_func);
|
||||
|
||||
lldd_ha
|
||||
- 应设置为指向驱动的主机适配器(HA)结构体的指针。如果 sas_ha_struct
|
||||
被嵌入到更大的结构体中,也可以通过 container_of() 宏来获取。
|
||||
|
||||
一个示例的初始化与注册函数如下所示(该函数应在 probe() 函数的
最后、使能 PHY 执行 OOB 之前调用)::
|
||||
|
||||
static int register_sas_ha(struct my_sas_ha *my_ha)
|
||||
{
|
||||
int i;
|
||||
static struct sas_phy *sas_phys[MAX_PHYS];
|
||||
static struct sas_port *sas_ports[MAX_PHYS];
|
||||
|
||||
my_ha->sas_ha.sas_addr = &my_ha->sas_addr[0];
|
||||
|
||||
for (i = 0; i < MAX_PHYS; i++) {
|
||||
sas_phys[i] = &my_ha->phys[i].sas_phy;
|
||||
sas_ports[i] = &my_ha->sas_ports[i];
|
||||
}
|
||||
|
||||
my_ha->sas_ha.sas_phy = sas_phys;
|
||||
my_ha->sas_ha.sas_port = sas_ports;
|
||||
my_ha->sas_ha.num_phys = MAX_PHYS;
|
||||
|
||||
my_ha->sas_ha.lldd_port_formed = my_port_formed;
|
||||
|
||||
my_ha->sas_ha.lldd_dev_found = my_dev_found;
|
||||
my_ha->sas_ha.lldd_dev_gone = my_dev_gone;
|
||||
|
||||
my_ha->sas_ha.lldd_execute_task = my_execute_task;
|
||||
|
||||
my_ha->sas_ha.lldd_abort_task = my_abort_task;
|
||||
my_ha->sas_ha.lldd_abort_task_set = my_abort_task_set;
|
||||
my_ha->sas_ha.lldd_clear_task_set = my_clear_task_set;
|
||||
my_ha->sas_ha.lldd_I_T_nexus_reset= NULL; (2)
|
||||
my_ha->sas_ha.lldd_lu_reset = my_lu_reset;
|
||||
my_ha->sas_ha.lldd_query_task = my_query_task;
|
||||
|
||||
my_ha->sas_ha.lldd_clear_nexus_port = my_clear_nexus_port;
|
||||
my_ha->sas_ha.lldd_clear_nexus_ha = my_clear_nexus_ha;
|
||||
|
||||
my_ha->sas_ha.lldd_control_phy = my_control_phy;
|
||||
|
||||
return sas_register_ha(&my_ha->sas_ha);
|
||||
}
|
||||
|
||||
(2) SAS 1.1 未定义 I_T Nexus Reset TMF(任务管理功能)。
|
||||
|
||||
事件
|
||||
====
|
||||
|
||||
事件是 SAS LLDD 通知 SAS 层任何情况的唯一方式。
|
||||
LLDD 没有其他方法可以告知 SAS 层其内部或 SAS 域中发生的事件。
|
||||
|
||||
Phy 事件::
|
||||
|
||||
PHYE_LOSS_OF_SIGNAL, (C)
|
||||
PHYE_OOB_DONE,
|
||||
PHYE_OOB_ERROR, (C)
|
||||
PHYE_SPINUP_HOLD.
|
||||
|
||||
端口事件,通过 _phy_ 传递::
|
||||
|
||||
PORTE_BYTES_DMAED, (M)
|
||||
PORTE_BROADCAST_RCVD, (E)
|
||||
PORTE_LINK_RESET_ERR, (C)
|
||||
PORTE_TIMER_EVENT, (C)
|
||||
PORTE_HARD_RESET.
|
||||
|
||||
主机适配器事件:
|
||||
HAE_RESET
|
||||
|
||||
SAS LLDD 应能够生成以下事件::
|
||||
|
||||
- 来自 C 组的至少一个事件(可选),
|
||||
- 标记为 M(必需)的事件为必需事件(至少一种);
|
||||
- 若希望 SAS 层处理域重新验证(domain revalidation),则
|
||||
应生成标记为 E(扩展器)的事件(仅需一种);
|
||||
- 未标记的事件为可选事件。
|
||||
|
||||
含义
|
||||
|
||||
HAE_RESET
|
||||
- 当 HA 发生内部错误并被复位时。
|
||||
|
||||
PORTE_BYTES_DMAED
|
||||
- 在接收到 IDENTIFY/FIS 帧时。
|
||||
|
||||
PORTE_BROADCAST_RCVD
|
||||
- 在接收到一个原语时。
|
||||
|
||||
PORTE_LINK_RESET_ERR
|
||||
- 定时器超时、信号丢失、丢失 DWS 等情况。 [1]_
|
||||
|
||||
PORTE_TIMER_EVENT
|
||||
- DWS 复位超时定时器到期时。[1]_
|
||||
|
||||
PORTE_HARD_RESET
|
||||
- 收到 Hard Reset 原语。
|
||||
|
||||
PHYE_LOSS_OF_SIGNAL
|
||||
- 设备已断开连接。 [1]_
|
||||
|
||||
PHYE_OOB_DONE
|
||||
- OOB 过程成功完成,oob_mode 有效。
|
||||
|
||||
PHYE_OOB_ERROR
|
||||
- 执行 OOB 过程中出现错误,设备可能已断开。 [1]_
|
||||
|
||||
PHYE_SPINUP_HOLD
|
||||
- 检测到 SATA 设备,但未发送 COMWAKE 信号。
|
||||
|
||||
.. [1] 应设置或清除 phy 中相应的字段,或者从 tasklet 中调用
|
||||
内联函数 sas_phy_disconnected(),该函数只是一个辅助函数。
|
||||
|
||||
执行命令 SCSI RPC::
|
||||
|
||||
int (*lldd_execute_task)(struct sas_task *, gfp_t gfp_flags);
|
||||
|
||||
用于将任务排队提交给 SAS LLDD,@task 为要执行的任务,@gfp_mask
|
||||
为定义调用者上下文的 gfp 掩码。
|
||||
|
||||
此函数应实现上文所述的“执行命令 SCSI RPC”。
|
||||
|
||||
也就是说,当调用 lldd_execute_task() 时,命令应当立即在传输
|
||||
层发出。SAS LLDD 中在任何层级上都不应再进行队列排放。
|
||||
|
||||
返回值::
|
||||
|
||||
* 返回 -SAS_QUEUE_FULL 或 -ENOMEM 表示未排入队列;
|
||||
* 返回 0 表示任务已成功排入队列。
|
||||
|
||||
::
|
||||
|
||||
struct sas_task {
|
||||
dev —— 此任务目标设备;
|
||||
task_proto —— 协议类型,为 enum sas_proto 中的一种;
|
||||
scatter —— 指向散布/聚集(SG)列表数组的指针;
|
||||
num_scatter —— SG 列表元素数量;
|
||||
total_xfer_len —— 预计传输的总字节数;
|
||||
data_dir —— 数据传输方向(PCI_DMA_*);
|
||||
task_done —— 任务执行完成时的回调函数。
|
||||
};
|
||||
|
||||
发现
|
||||
====
|
||||
|
||||
sysfs 树有以下用途::
|
||||
|
||||
a) 它显示当前时刻 SAS 域的物理布局,即展示当前物理世界中
|
||||
域的实际结构。
|
||||
b) 显示某些设备在发现时刻(at discovery time)的参数。
|
||||
|
||||
下面是一个指向 tree(1) 程序的链接,该工具在查看 SAS 域时非常
|
||||
有用:
|
||||
ftp://mama.indstate.edu/linux/tree/
|
||||
|
||||
我期望用户空间的应用程序最终能够为此创建一个图形界面。
|
||||
|
||||
也就是说,sysfs 域树不会显示或保存某些状态变化,例如,如果你更
|
||||
改了 READY LED 含义的设置,sysfs 树不会反映这种状态变化;但它
|
||||
确实会显示域设备的当前连接状态。
|
||||
|
||||
维护内部设备状态变化的职责由上层(命令集驱动)和用户空间负责。
|
||||
|
||||
当某个设备或多个设备从域中拔出时,这一变化会立即反映在 sysfs
|
||||
树中,并且这些设备会从系统中移除。
|
||||
|
||||
结构体 domain_device 描述了 SAS 域中的任意设备。它完全由 SAS
|
||||
层管理。一个任务会指向某个域设备,SAS LLDD 就是通过这种方式知
|
||||
道任务应发送到何处。SAS LLDD 只读取 domain_device 结构的内容,
|
||||
但不会创建或销毁它。
|
||||
|
||||
用户空间中的扩展器管理
|
||||
======================
|
||||
|
||||
在 sysfs 中的每个扩展器目录下,都有一个名为 "smp_portal" 的
|
||||
文件。这是一个二进制的 sysfs 属性文件,它实现了一个 SMP 入口
|
||||
(注意:这并不是一个 SMP 端口),用户空间程序可以通过它发送
|
||||
SMP 请求并接收 SMP 响应。
|
||||
|
||||
该功能的实现方式看起来非常简单:
|
||||
|
||||
1. 构建要发送的 SMP 帧。其格式和布局在 SAS 规范中有说明。保持
|
||||
CRC 字段为 0。
|
||||
|
||||
open(2)
|
||||
|
||||
2. 以读写模式打开该扩展器的 SMP portal sysfs 文件。
|
||||
|
||||
write(2)
|
||||
|
||||
3. 将第 1 步中构建的帧写入文件。
|
||||
|
||||
read(2)
|
||||
|
||||
4. 读取与所构建帧预期返回长度相同的数据量。如果读取的数据量与
|
||||
预期不符,则表示发生了某种错误。
|
||||
|
||||
close(2)
|
||||
|
||||
整个过程在 "expander_conf.c" 文件中的函数 do_smp_func()
|
||||
及其调用者中有详细展示。
|
||||
|
||||
对应的内核实现位于 "sas_expander.c" 文件中。
|
||||
|
||||
程序 "expander_conf.c" 实现了上述逻辑。它接收一个参数——扩展器
|
||||
SMP portal 的 sysfs 文件名,并输出扩展器的信息,包括路由表内容。
|
||||
|
||||
SMP portal 赋予了你对扩展器的完全控制权,因此请谨慎操作。
|
||||
|
|
.. SPDX-License-Identifier: GPL-2.0
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/scsi/link_power_management_policy.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
郝栋栋 doubled <doubled@leap-io-kernel.com>
|
||||
|
||||
:校译:
|
||||
|
||||
|
||||
|
||||
================
|
||||
链路电源管理策略
|
||||
================
|
||||
|
||||
该参数允许用户设置链路(接口)的电源管理模式。
|
||||
共计三类可选项:
|
||||
|
||||
===================== =====================================================
|
||||
选项 作用
|
||||
===================== =====================================================
|
||||
min_power 指示控制器在可能的情况下尽量使链路处于最低功耗。
|
||||
这可能会牺牲一定的性能,因为从低功耗状态恢复时会增加延迟。
|
||||
|
||||
max_performance 通常,这意味着不进行电源管理。指示
|
||||
控制器优先考虑性能而非电源管理。
|
||||
|
||||
medium_power 指示控制器在可能的情况下进入较低功耗状态,
|
||||
而非最低功耗状态,从而改善min_power模式下的延迟。
|
||||
===================== =====================================================
|
||||
|
|
.. SPDX-License-Identifier: GPL-2.0
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/scsi/scsi-parameters.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
郝栋栋 doubled <doubled@leap-io-kernel.com>
|
||||
|
||||
:校译:
|
||||
|
||||
|
||||
|
||||
============
|
||||
SCSI内核参数
|
||||
============
|
||||
|
||||
请查阅Documentation/admin-guide/kernel-parameters.rst以获取
|
||||
指定模块参数相关的通用信息。
|
||||
|
||||
当前文档可能不完全是最新和全面的。命令 ``modinfo -p ${modulename}``
|
||||
显示了可加载模块的参数列表。可加载模块被加载到内核中后,也会在
|
||||
/sys/module/${modulename}/parameters/ 目录下显示其参数。其
|
||||
中某些参数可以通过命令
|
||||
``echo -n ${value} > /sys/module/${modulename}/parameters/${parm}``
|
||||
在运行时修改。
|
||||
|
||||
::
|
||||
|
||||
advansys= [HW,SCSI]
|
||||
请查阅 drivers/scsi/advansys.c 文件头部。
|
||||
|
||||
aha152x= [HW,SCSI]
|
||||
请查阅 Documentation/scsi/aha152x.rst。
|
||||
|
||||
aha1542= [HW,SCSI]
|
||||
格式:<portbase>[,<buson>,<busoff>[,<dmaspeed>]]
|
||||
|
||||
aic7xxx= [HW,SCSI]
|
||||
请查阅 Documentation/scsi/aic7xxx.rst。
|
||||
|
||||
aic79xx= [HW,SCSI]
|
||||
请查阅 Documentation/scsi/aic79xx.rst。
|
||||
|
||||
atascsi= [HW,SCSI]
|
||||
请查阅 drivers/scsi/atari_scsi.c。
|
||||
|
||||
BusLogic= [HW,SCSI]
|
||||
请查阅 drivers/scsi/BusLogic.c 文件中
|
||||
BusLogic_ParseDriverOptions()函数前的注释。
|
||||
|
||||
gvp11= [HW,SCSI]
|
||||
|
||||
ips= [HW,SCSI] Adaptec / IBM ServeRAID 控制器
|
||||
请查阅 drivers/scsi/ips.c 文件头部。
|
||||
|
||||
mac5380= [HW,SCSI]
|
||||
请查阅 drivers/scsi/mac_scsi.c。
|
||||
|
||||
scsi_mod.max_luns=
|
||||
[SCSI] 最大可探测LUN数。
|
||||
取值范围为 1 到 2^32-1。
|
||||
|
||||
scsi_mod.max_report_luns=
|
||||
[SCSI] 接收到的最大LUN数。
|
||||
取值范围为 1 到 16384。
|
||||
|
||||
NCR_D700= [HW,SCSI]
|
||||
请查阅 drivers/scsi/NCR_D700.c 文件头部。
|
||||
|
||||
ncr5380= [HW,SCSI]
|
||||
请查阅 Documentation/scsi/g_NCR5380.rst。
|
||||
|
||||
ncr53c400= [HW,SCSI]
|
||||
请查阅 Documentation/scsi/g_NCR5380.rst。
|
||||
|
||||
ncr53c400a= [HW,SCSI]
|
||||
请查阅 Documentation/scsi/g_NCR5380.rst。
|
||||
|
||||
ncr53c8xx= [HW,SCSI]
|
||||
|
||||
osst= [HW,SCSI] SCSI磁带驱动
|
||||
格式:<buffer_size>,<write_threshold>
|
||||
另请查阅 Documentation/scsi/st.rst。
|
||||
|
||||
scsi_debug_*= [SCSI]
|
||||
请查阅 drivers/scsi/scsi_debug.c。
|
||||
|
||||
scsi_mod.default_dev_flags=
|
||||
[SCSI] SCSI默认设备标志
|
||||
格式:<integer>
|
||||
|
||||
scsi_mod.dev_flags=
|
||||
[SCSI] 厂商和型号的黑/白名单条目
|
||||
格式:<vendor>:<model>:<flags>
|
||||
(flags 为整数值)
|
||||
|
||||
scsi_mod.scsi_logging_level=
|
||||
[SCSI] 日志级别的位掩码
|
||||
位的定义请查阅 drivers/scsi/scsi_logging.h。
|
||||
此参数也可以通过sysctl对dev.scsi.logging_level
|
||||
进行设置(/proc/sys/dev/scsi/logging_level)。
|
||||
此外,S390-tools软件包提供了一个便捷的
|
||||
‘scsi_logging_level’ 脚本,可以从以下地址下载:
|
||||
https://github.com/ibm-s390-linux/s390-tools/blob/master/scripts/scsi_logging_level
|
||||
|
||||
scsi_mod.scan= [SCSI] sync(默认)在发现SCSI总线过程中
|
||||
同步扫描。async在内核线程中异步扫描,允许系统继续
|
||||
启动流程。none忽略扫描,预期由用户空间完成扫描。
|
||||
|
||||
sim710= [SCSI,HW]
|
||||
请查阅 drivers/scsi/sim710.c 文件头部。
|
||||
|
||||
st= [HW,SCSI] SCSI磁带参数(缓冲区大小等)
|
||||
请查阅 Documentation/scsi/st.rst。
|
||||
|
||||
wd33c93= [HW,SCSI]
|
||||
请查阅 drivers/scsi/wd33c93.c 文件头部。
|
||||
|
|
.. SPDX-License-Identifier: GPL-2.0
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/scsi/scsi.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
郝栋栋 doubled <doubled@leap-io-kernel.com>
|
||||
|
||||
:校译:
|
||||
|
||||
|
||||
|
||||
==============
|
||||
SCSI子系统文档
|
||||
==============
|
||||
|
||||
Linux文档项目(LDP)维护了一份描述Linux内核(lk) 2.4中SCSI
|
||||
子系统的文档。请参考:
|
||||
https://www.tldp.org/HOWTO/SCSI-2.4-HOWTO 。LDP提供单页和
|
||||
多页的HTML版本,以及PostScript与PDF格式的文档。
|
||||
|
||||
在SCSI子系统中使用模块的注意事项
|
||||
================================
|
||||
Linux内核中的SCSI支持可以根据终端用户的需求以不同的方式模块
|
||||
化。为了理解你的选择,我们首先需要定义一些术语。
|
||||
|
||||
scsi-core(也被称为“中间层”)包含SCSI支持的核心。没有它你将
无法使用任何其他SCSI驱动程序。SCSI核心支持可以是一个模块(
scsi_mod.o),也可以编译进内核。如果SCSI核心是一个模块,那么
它必须是第一个被加载的SCSI模块;如果你要卸载该模块,那么它必
须是最后一个被卸载的模块。实际上,modprobe和rmmod命令将确保
|
||||
SCSI子系统中模块加载与卸载的正确顺序。
|
||||
|
||||
一旦SCSI核心存在于内核中(无论是编译进内核还是作为模块加载),
|
||||
独立的上层驱动和底层驱动可以按照任意顺序加载。磁盘驱动程序
|
||||
(sd_mod.o)、光盘驱动程序(sr_mod.o)、磁带驱动程序 [1]_
|
||||
(st.o)以及SCSI通用驱动程序(sg.o)代表了上层驱动,用于控制
|
||||
相应的各种设备。例如,你可以加载磁带驱动程序来使用磁带驱动器,
|
||||
然后在不需要该驱动程序时卸载它(并释放相关内存)。
|
||||
|
||||
底层驱动程序用于支持您所运行硬件平台支持的不同主机卡。这些不同
|
||||
的主机卡通常被称为主机总线适配器(HBAs)。例如,aic7xxx.o驱动
|
||||
程序被用于控制Adaptec所属的所有最新的SCSI控制器。几乎所有的底
|
||||
层驱动都可以被编译为模块或直接编译进内核。
|
||||
|
||||
.. [1] 磁带驱动程序有一个变种用于控制OnStream磁带设备。其模块
|
||||
名称为osst.o 。
|
||||
|
|
.. SPDX-License-Identifier: GPL-2.0
|
||||
.. include:: ../disclaimer-zh_CN.rst
|
||||
|
||||
:Original: Documentation/scsi/scsi_eh.rst
|
||||
|
||||
:翻译:
|
||||
|
||||
郝栋栋 doubled <doubled@leap-io-kernel.com>
|
||||
|
||||
:校译:
|
||||
|
||||
|
||||
===================
|
||||
SCSI 中间层错误处理
|
||||
===================
|
||||
|
||||
本文档描述了SCSI中间层(mid layer)的错误处理基础架构。
|
||||
关于SCSI中间层的更多信息,请参阅:
|
||||
Documentation/scsi/scsi_mid_low_api.rst。
|
||||
|
||||
.. 目录
|
||||
|
||||
[1] SCSI 命令如何通过中间层传递并进入错误处理(EH)
|
||||
[1-1] scsi_cmnd(SCSI命令)结构体
|
||||
[1-2] scmd(SCSI 命令)是如何完成的?
|
||||
[1-2-1] 通过scsi_done完成scmd
|
||||
[1-2-2] 通过超时机制完成scmd
|
||||
[1-3] 异步命令中止机制
    [1-4] 错误处理模块如何接管流程
|
||||
[2] SCSI错误处理机制工作原理
|
||||
[2-1] 基于细粒度回调的错误处理
|
||||
[2-1-1] 概览
|
||||
[2-1-2] scmd在错误处理流程中的传递路径
|
||||
[2-1-3] 控制流分析
|
||||
[2-2] 通过transportt->eh_strategy_handler()实现的错误处理
|
||||
[2-2-1] transportt->eh_strategy_handler()调用前的中间层状态
|
||||
[2-2-2] transportt->eh_strategy_handler()调用后的中间层状态
|
||||
[2-2-3] 注意事项
|
||||
|
||||
|
||||
1. SCSI命令在中间层及错误处理中的传递流程
|
||||
=========================================
|
||||
|
||||
1.1 scsi_cmnd结构体
|
||||
-------------------
|
||||
|
||||
每个SCSI命令都由struct scsi_cmnd(简称scmd)结构体
|
||||
表示。scmd包含两个list_head类型的链表节点:scmd->list
|
||||
与scmd->eh_entry。其中scmd->list是用于空闲链表或设备
|
||||
专属的scmd分配链表,与错误处理讨论关联不大。而
|
||||
scmd->eh_entry则是专用于命令完成和错误处理链表,除非
|
||||
特别说明,本文讨论中所有scmd的链表操作均通过
|
||||
scmd->eh_entry实现。
|
||||
|
||||
|
||||
1.2 scmd是如何完成的?
|
||||
----------------------
|
||||
|
||||
底层设备驱动(LLDD)在获取SCSI命令(scmd)后,存在两种
|
||||
完成路径:底层驱动可通过调用hostt->queuecommand()时从
|
||||
中间层传递的scsi_done回调函数主动完成命令,或者当命令未
|
||||
及时完成时由块层(block layer)触发超时处理机制。
|
||||
|
||||
|
||||
1.2.1 通过scsi_done回调完成SCSI命令
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
对于所有非错误处理(EH)命令,scsi_done()是其完成回调
|
||||
函数。它只调用blk_mq_complete_request()来删除块层的
|
||||
定时器并触发块设备软中断(BLOCK_SOFTIRQ)。
|
||||
|
||||
BLOCK_SOFTIRQ会间接调用scsi_complete(),进而调用
|
||||
scsi_decide_disposition()来决定如何处理该命令。
|
||||
scsi_decide_disposition()会查看scmd->result值和感
|
||||
应码数据来决定如何处理命令。
|
||||
|
||||
- SUCCESS
|
||||
|
||||
调用scsi_finish_command()来处理该命令。该函数会
|
||||
执行一些维护操作,然后调用scsi_io_completion()来
|
||||
完成I/O操作。scsi_io_completion()会通过调用
|
||||
blk_end_request及其相关函数来通知块层该请求已完成,
|
||||
如果发生错误,还会判断如何处理剩余的数据。
|
||||
|
||||
- NEEDS_RETRY
|
||||
|
||||
- ADD_TO_MLQUEUE
|
||||
|
||||
scmd被重新加入到块设备队列中。
|
||||
|
||||
- otherwise
|
||||
|
||||
调用scsi_eh_scmd_add(scmd)来处理该命令。
|
||||
关于此函数的详细信息,请参见 [1-4]。
|
||||
|
||||
|
||||
1.2.2 scmd超时完成机制
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
SCSI命令超时处理机制由scsi_timeout()函数实现。
|
||||
当发生超时事件时,该函数
|
||||
|
||||
1. 首先调用可选的hostt->eh_timed_out()回调函数。
|
||||
返回值可能是以下3种情况之一:
|
||||
|
||||
- ``SCSI_EH_RESET_TIMER``
|
||||
表示需要延长命令执行时间并重启计时器。
|
||||
|
||||
- ``SCSI_EH_NOT_HANDLED``
|
||||
表示eh_timed_out()未处理该命令。
|
||||
此时将执行第2步的处理流程。
|
||||
|
||||
- ``SCSI_EH_DONE``
|
||||
表示eh_timed_out()已完成该命令。
|
||||
|
||||
2. 若未通过回调函数解决,系统将调用
|
||||
scsi_abort_command()发起异步中止操作,该操作最多
|
||||
可执行scmd->allowed + 1次。但存在三种例外情况会跳
|
||||
过异步中止而直接进入第3步处理:当检测到
|
||||
SCSI_EH_ABORT_SCHEDULED标志位已置位(表明该命令先
|
||||
前已被中止过一次且当前重试仍失败)、当重试次数已达上
|
||||
限、或当错误处理时限已到期时。
|
||||
|
||||
3. 最终未解决的命令会通过scsi_eh_scmd_add(scmd)移交给
|
||||
错误处理子系统,具体流程详见[1-4]章节说明。
|
||||
|
||||
1.3 异步命令中止机制
|
||||
--------------------
|
||||
|
||||
当命令超时触发后,系统会通过scsi_abort_command()调度异
|
||||
步中止操作。若中止操作执行成功,则根据重试次数决定后续处
|
||||
理:若未达最大重试限制,命令将重新下发执行;若重试次数已
|
||||
耗尽,则命令最终以DID_TIME_OUT状态终止。当中止操作失败
|
||||
时,系统会调用scsi_eh_scmd_add()将该命令移交错误处理子
|
||||
系统,具体处理流程详见[1-4]。
|
||||
|
||||
1.4 错误处理(EH)接管机制
|
||||
------------------------
|
||||
|
||||
SCSI命令通过scsi_eh_scmd_add()函数进入错误处理流程,该函
|
||||
数执行以下操作:
|
||||
|
||||
1. 将scmd->eh_entry链接到shost->eh_cmd_q
|
||||
|
||||
2. 在shost->shost_state中设置SHOST_RECOVERY状态位
|
||||
|
||||
3. 递增shost->host_failed失败计数器
|
||||
|
||||
4. 当检测到shost->host_busy == shost->host_failed
|
||||
时(即所有进行中命令均已失败)立即唤醒SCSI错误处理
|
||||
线程。
|
||||
|
||||
如上所述,当任一scmd被加入到shost->eh_cmd_q队列时,系统
|
||||
会立即置位shost_state中的SHOST_RECOVERY状态标志位,该操
|
||||
作将阻止块层向对应主机控制器下发任何新的SCSI命令。在此状
|
||||
态下,主机控制器上所有正在处理的scmd最终会进入以下三种状
|
||||
态之一:正常完成、失败后被移入到eh_cmd_q队列、或因超时被
|
||||
添加到shost->eh_cmd_q队列。
|
||||
|
||||
如果所有的SCSI命令都已经完成或失败,系统中正在执行的命令
|
||||
数量与失败命令数量相等(
|
||||
即shost->host_busy == shost->host_failed),此时将唤
|
||||
醒SCSI错误处理线程。SCSI错误处理线程一旦被唤醒,就可以确
|
||||
保所有未完成命令均已标记为失败状态,并且已经被链接到
|
||||
shost->eh_cmd_q队列中。
|
||||
|
||||
需要特别说明的是,这并不意味着底层处理流程完全静止。当底层
|
||||
驱动以错误状态完成某个scmd时,底层驱动及其下层组件会立刻遗
|
||||
忘该命令的所有关联状态。但对于超时命令,除非
|
||||
hostt->eh_timed_out()回调函数已经明确通知底层驱动丢弃该
|
||||
命令(当前所有底层驱动均未实现此功能),否则从底层驱动视角
|
||||
看该命令仍处于活跃状态,理论上仍可能在某时刻完成。当然,由
|
||||
于超时计时器早已触发,所有此类延迟完成都将被系统直接忽略。
|
||||
|
||||
我们将在后续章节详细讨论关于SCSI错误处理如何执行中止操作(
|
||||
即强制底层驱动丢弃已超时SCSI命令)。
|
||||
|
||||
|
||||
2. SCSI错误处理机制详解
|
||||
=======================
|
||||
|
||||
SCSI底层驱动可以通过以下两种方式之一来实现SCSI错误处理。
|
||||
|
||||
- 细粒度的错误处理回调机制
|
||||
底层驱动可选择实现细粒度的错误处理回调函数,由SCSI中间层
|
||||
主导错误恢复流程并自动调用对应的回调函数。此实现模式的详
|
||||
细设计规范在[2-1]节中展开讨论。
|
||||
|
||||
- eh_strategy_handler()回调函数
|
||||
该回调函数作为统一的错误处理入口,需要完整实现所有的恢复
|
||||
操作。具体而言,它必须涵盖SCSI中间层在常规恢复过程中执行
|
||||
的全部处理流程,相关实现将在[2-2]节中详细描述。
|
||||
|
||||
当错误恢复流程完成后,SCSI错误处理系统通过调用
|
||||
scsi_restart_operations()函数恢复正常运行,该函数按顺序执行
|
||||
以下操作:
|
||||
|
||||
1. 验证是否需要执行驱动器安全门锁定机制
|
||||
|
||||
2. 清除shost_state中的SHOST_RECOVERY状态标志位
|
||||
|
||||
3. 唤醒所有在shost->host_wait上等待的任务。如果有人调用了
|
||||
scsi_block_when_processing_errors()则会发生这种情况。
|
||||
(疑问:由于错误处理期间块层队列已被阻塞,为何仍需显式
|
||||
唤醒?)
|
||||
|
||||
4. 强制激活该主机控制器下所有设备的I/O队列
|
||||
|
||||
|
||||
2.1 基于细粒度回调的错误处理机制
|
||||
--------------------------------
|
||||
|
||||
2.1.1 概述
|
||||
^^^^^^^^^^^
|
||||
|
||||
如果不存在eh_strategy_handler(),SCSI中间层将负责驱动的
|
||||
错误处理。错误处理(EH)的目标有两个:一是让底层驱动程序、
|
||||
主机和设备不再维护已超时的SCSI命令(scmd);二是使他们准备
|
||||
好接收新命令。当一个SCSI命令(scmd)被底层遗忘且底层已准备
|
||||
好再次处理或拒绝该命令时,即可认为该scmd已恢复。
|
||||
|
||||
为实现这些目标,错误处理(EH)会逐步执行严重性递增的恢复
|
||||
操作。部分操作通过下发SCSI命令完成,而其他操作则通过调用
|
||||
以下细粒度的错误处理回调函数实现。这些回调函数可以省略,
|
||||
若被省略则默认始终视为执行失败。
|
||||
|
||||
::
|
||||
|
||||
int (* eh_abort_handler)(struct scsi_cmnd *);
|
||||
int (* eh_device_reset_handler)(struct scsi_cmnd *);
|
||||
int (* eh_bus_reset_handler)(struct scsi_cmnd *);
|
||||
int (* eh_host_reset_handler)(struct scsi_cmnd *);
|
||||
|
||||
只有在低级别的错误恢复操作无法恢复部分失败的SCSI命令
|
||||
(scmd)时,才会采取更高级别的恢复操作。如果最高级别的错误
|
||||
处理失败,就意味着整个错误恢复(EH)过程失败,所有未能恢复
|
||||
的设备被强制下线。
|
||||
|
||||
在恢复过程中,需遵循以下规则:
|
||||
|
||||
- 错误恢复操作针对待处理列表eh_work_q中的失败的scmds执
|
||||
行。如果某个恢复操作成功恢复了一个scmd,那么该scmd会
|
||||
从eh_work_q链表中移除。
|
||||
|
||||
需要注意的是,对某个scmd执行的单个恢复操作可能会恢复
|
||||
多个scmd。例如,对某个设备执行复位操作可能会恢复该设
|
||||
备上所有失败的scmd。
|
||||
|
||||
- 仅当低级别的恢复操作完成且eh_work_q仍然非空时,才会
|
||||
触发更高级别的操作
|
||||
|
||||
- SCSI错误恢复机制会重用失败的scmd来发送恢复命令。对于
|
||||
超时的scmd,SCSI错误处理机制会确保底层驱动在重用scmd
|
||||
前已不再维护该命令。
|
||||
|
||||
当一个SCSI命令(scmd)被成功恢复后,错误处理逻辑会通过
|
||||
scsi_eh_finish_cmd()将其从待处理队列(eh_work_q)移
|
||||
至错误处理的本地完成队列(eh_done_q)。当所有scmd均恢
|
||||
复完成(即eh_work_q为空时),错误处理逻辑会调用
|
||||
scsi_eh_flush_done_q()对这些已恢复的scmd进行处理,即
|
||||
重新尝试或最终终止(向上层通知失败)。
|
||||
|
||||
SCSI命令仅在满足以下全部条件时才会被重试:对应的SCSI设
|
||||
备仍处于在线状态、未设置REQ_FAILFAST标志,且递增后的
|
||||
scmd->retries值仍小于scmd->allowed。
|
||||
|
||||
2.1.2 SCSI命令在错误处理过程中的流转路径
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
1. 错误完成/超时
|
||||
|
||||
:处理: 调用scsi_eh_scmd_add()处理scmd
|
||||
|
||||
- 将scmd添加到shost->eh_cmd_q
|
||||
- 设置SHOST_RECOVERY标记位
|
||||
- shost->host_failed++
|
||||
|
||||
:锁要求: shost->host_lock
|
||||
|
||||
2. 启动错误处理(EH)
|
||||
|
||||
:操作: 将所有scmd移动到EH本地eh_work_q队列,并
|
||||
清空 shost->eh_cmd_q。
|
||||
|
||||
:锁要求: shost->host_lock(非严格必需,仅为保持一致性)
|
||||
|
||||
3. scmd恢复
|
||||
|
||||
:操作: 调用scsi_eh_finish_cmd()完成scmd的EH
|
||||
|
||||
- 将scmd从本地eh_work_q队列移至本地eh_done_q队列
|
||||
|
||||
:锁要求: 无
|
||||
|
||||
:并发控制: 每个独立的eh_work_q至多一个线程,确保无锁
|
||||
队列的访问
|
||||
|
||||
4. EH完成
|
||||
|
||||
:操作: 调用scsi_eh_flush_done_q()重试scmd或通知上层处理
|
||||
失败。此函数可以被并发调用,但每个独立的eh_work_q队
|
||||
列至多一个线程,以确保无锁队列的访问。
|
||||
|
||||
- 从eh_done_q队列中移除scmd,清除scmd->eh_entry
|
||||
- 如果需要重试,调用scsi_queue_insert()重新入队scmd
|
||||
- 否则,调用scsi_finish_command()完成scmd
|
||||
- 将shost->host_failed置为零
|
||||
|
||||
:锁要求: 队列或完成函数会执行适当的加锁操作
|
||||
|
||||
|
||||
2.1.3 控制流
|
||||
^^^^^^^^^^^^
|
||||
|
||||
通过细粒度回调机制执行的SCSI错误处理(EH)是从
|
||||
scsi_unjam_host()函数开始的
|
||||
|
||||
``scsi_unjam_host``
|
||||
|
||||
1. 持有shost->host_lock锁,将shost->eh_cmd_q中的命令移动
|
||||
到本地的eh_work_q队里中,并释放host_lock锁。注意,这一步
|
||||
会清空shost->eh_cmd_q。
|
||||
|
||||
2. 调用scsi_eh_get_sense函数。
|
||||
|
||||
``scsi_eh_get_sense``
|
||||
|
||||
该操作针对没有有效感知数据的错误完成命令。大部分SCSI传输协议
|
||||
或底层驱动在命令失败时会自动获取感知数据(自动感知)。推荐使
用自动感知机制:它不仅有助于提升性能,还能避免从发生
CHECK CONDITION到执行本操作之间,
|
||||
感知信息出现不同步的问题。
|
||||
|
||||
注意,如果不支持自动感知,那么在使用scsi_done()以错误状态完成
|
||||
scmd 时,scmd->sense_buffer将包含无效感知数据。在这种情况下,
|
||||
scsi_decide_disposition()总是返回FAILED从而触发SCSI错误处理
|
||||
(EH)。当该scmd执行到这里时,会重新获取感知数据,并再次调用
|
||||
scsi_decide_disposition()进行处理。
|
||||
|
||||
1. 调用scsi_request_sense()发送REQUEST_SENSE命令。如果失败,
|
||||
则不采取任何操作。请注意,不采取任何操作会导致对该scmd执行
|
||||
更高级别的恢复操作。
|
||||
|
||||
2. 调用scsi_decide_disposition()处理scmd
|
||||
|
||||
- SUCCESS
|
||||
scmd->retries被设置为scmd->allowed以防止
|
||||
scsi_eh_flush_done_q()重试该scmd,并调用
|
||||
scsi_eh_finish_cmd()。
|
||||
|
||||
- NEEDS_RETRY
|
||||
调用scsi_eh_finish_cmd()
|
||||
|
||||
- 其他情况
|
||||
无操作。
|
||||
|
||||
3. 如果!list_empty(&eh_work_q),则调用scsi_eh_ready_devs()。
|
||||
|
||||
``scsi_eh_ready_devs``
|
||||
|
||||
该函数采取四种逐步增强的措施,使失败的设备准备好处理新的命令。
|
||||
|
||||
1. 调用scsi_eh_stu()
|
||||
|
||||
``scsi_eh_stu``
|
||||
|
||||
对于每个具有有效感知数据且scsi_check_sense()判断为失败的
|
||||
scmd发送START STOP UNIT(STU)命令且将start置1。注意,由
|
||||
于我们明确选择错误完成的scmd,可以确定底层驱动已不再维护该
|
||||
scmd,我们可以重用它进行STU。
|
||||
|
||||
如果STU操作成功且sdev处于离线或就绪状态,所有在sdev上失败的
|
||||
scmd都会通过scsi_eh_finish_cmd()完成。
|
||||
|
||||
*注意* 如果hostt->eh_abort_handler()未实现或返回失败,可能
|
||||
此时仍有超时的scmd,此时STU不会导致底层驱动不再维护scmd。但
|
||||
是,如果STU执行成功,该函数会通过scsi_eh_finish_cmd()来完成
|
||||
sdev上的所有scmd,这会导致底层驱动处于不一致的状态。看来STU
|
||||
操作应仅在sdev不包含超时scmd时进行。
|
||||
|
||||
2. 如果!list_empty(&eh_work_q),调用scsi_eh_bus_device_reset()。
|
||||
|
||||
``scsi_eh_bus_device_reset``
|
||||
|
||||
此操作与scsi_eh_stu()非常相似,区别在于使用
|
||||
hostt->eh_device_reset_handler()替代STU命令。此外,由于我们
|
||||
没有发送SCSI命令且重置会清空该sdev上所有的scmd,所以无需筛选错
|
||||
误完成的scmd。
|
||||
|
||||
3. 如果!list_empty(&eh_work_q),调用scsi_eh_bus_reset()。
|
||||
|
||||
``scsi_eh_bus_reset``
|
||||
|
||||
对于每个包含失败scmd的SCSI通道调用
|
||||
hostt->eh_bus_reset_handler()。如果总线重置成功,那么该通道上
|
||||
所有准备就绪或离线状态sdev上的失败scmd都会被处理完成。
|
||||
|
||||
4. 如果!list_empty(&eh_work_q),调用scsi_eh_host_reset()。
|
||||
|
||||
``scsi_eh_host_reset``
|
||||
|
||||
调用hostt->eh_host_reset_handler()是最终的手段。如果SCSI主机
|
||||
重置成功,主机上所有就绪或离线sdev上的失败scmd都会通过错误处理
|
||||
完成。
|
||||
|
||||
5. 如果!list_empty(&eh_work_q),调用scsi_eh_offline_sdevs()。
|
||||
|
||||
``scsi_eh_offline_sdevs``
|
||||
|
||||
离线所有包含未恢复scmd的所有sdev,并通过
|
||||
scsi_eh_finish_cmd()完成这些scmd。
|
||||
|
||||
4. 调用scsi_eh_flush_done_q()。
|
||||
|
||||
``scsi_eh_flush_done_q``
|
||||
|
||||
此时所有的scmd都已经恢复(或放弃),并通过
|
||||
scsi_eh_finish_cmd()函数加入eh_done_q队列。该函数通过
|
||||
重试或显示通知上层scmd的失败来刷新eh_done_q。
|
||||
|
||||
|
||||
2.2 基于transportt->eh_strategy_handler()的错误处理机制
|
||||
-------------------------------------------------------------
|
||||
|
||||
在该机制中,transportt->eh_strategy_handler()替代
|
||||
scsi_unjam_host()的被调用,并负责整个错误恢复过程。该处理
|
||||
函数完成后应该确保底层驱动不再维护任何失败的scmd并且将设备
|
||||
设置为就绪(准备接收新命令)或离线状态。此外,该函数还应该
|
||||
执行SCSI错误处理的维护任务,以维护SCSI中间层的数据完整性。
|
||||
换句话说,eh_strategy_handler()必须实现[2-1-2]中除第1步
|
||||
外的所有步骤。
|
||||
|
||||
|
||||
2.2.1 transportt->eh_strategy_handler()调用前的SCSI中间层状态
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
进入该处理函数时,以下条件成立。
|
||||
|
||||
- 每个失败的scmd的eh_flags字段已正确设置。
|
||||
|
||||
- 每个失败的scmd通过scmd->eh_entry链接到scmd->eh_cmd_q队列。
|
||||
|
||||
- 已设置SHOST_RECOVERY标志。
|
||||
|
||||
- `shost->host_failed == shost->host_busy`。
|
||||
|
||||
2.2.2 transportt->eh_strategy_handler()调用后的SCSI中间层状态
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
从该处理函数退出时,以下条件成立。
|
||||
|
||||
- shost->host_failed为零。
|
||||
|
||||
- shost->eh_cmd_q被清空。
|
||||
|
||||
- 每个scmd->eh_entry被清空。
|
||||
|
||||
- 对每个scmd必须调用scsi_queue_insert()或scsi_finish_command()。
|
||||
注意,该处理程序可以使用scmd->retries(剩余重试次数)和
|
||||
scmd->allowed(允许重试次数)限制重试次数。
|
||||
|
||||
|
||||
2.2.3 注意事项
|
||||
^^^^^^^^^^^^^^
|
||||
|
||||
- 需明确已超时的scmd在底层仍处于活跃状态,因此在操作这些
|
||||
scmd前,必须确保底层已彻底不再维护。
|
||||
|
||||
- 访问或修改shost数据结构时,必须持有shost->host_lock锁
|
||||
以维持数据一致性。
|
||||
|
||||
- 错误处理完成后,每个故障设备必须彻底清除所有活跃SCSI命
|
||||
令(scmd)的关联状态。
|
||||
|
||||
- 错误处理完成后,每个故障设备必须被设置为就绪(准备接收
|
||||
新命令)或离线状态。
|
||||
|
||||
|
||||
Tejun Heo
|
||||
htejun@gmail.com
|
||||
|
||||
11th September 2005
|
||||
@ -0,0 +1,38 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst

:Original: Documentation/scsi/sd-parameters.rst

:翻译:

 郝栋栋 doubled <doubled@leap-io-kernel.com>

:校译:


============================
Linux SCSI磁盘驱动(sd)参数
============================

缓存类型(读/写)
------------------
启用/禁用驱动器读写缓存。

=========================== ===== ===== ======= =======
缓存类型字符串               WCE   RCD  写缓存  读缓存
=========================== ===== ===== ======= =======
write through                 0     0   关闭    开启
none                          0     1   关闭    关闭
write back                    1     0   开启    开启
write back, no read (daft)    1     1   开启    关闭
=========================== ===== ===== ======= =======

将缓存类型设置为“write back”并将该设置保存到驱动器::

  # echo "write back" > cache_type

如果要修改缓存模式但不使更改持久化,可在缓存类型字符串前
添加“temporary ”。例如::

  # echo "temporary write back" > cache_type
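上表中缓存类型字符串与 WCE/RCD 位的对应关系,可以用一个小函数来速查(仅为示意,`cache_bits` 是本文虚构的名字,并非 sd 驱动或 sysfs 的接口):

```shell
#!/bin/sh
# 速查函数(示意):把缓存类型字符串映射为上表中的 WCE/RCD 位。
cache_bits() {
    case "$1" in
        "write through")              echo "WCE=0 RCD=0" ;;
        "none")                       echo "WCE=0 RCD=1" ;;
        "write back")                 echo "WCE=1 RCD=0" ;;
        "write back, no read (daft)") echo "WCE=1 RCD=1" ;;
        *)                            echo "unknown"; return 1 ;;
    esac
}

cache_bits "write back"
```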
@ -0,0 +1,35 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst

:Original: Documentation/scsi/wd719x.rst

:翻译:

 张钰杰 Yujie Zhang <yjzhang@leap-io-kernel.com>

:校译:

====================================================
Western Digital WD7193, WD7197 和 WD7296 SCSI 卡驱动
====================================================

这些卡需要加载固件。固件可从 WD 提供下载的 Windows NT 驱动程
序中提取。地址如下:

http://support.wdc.com/product/download.asp?groupid=801&sid=27&lang=en

该文件或网页上都未包含任何许可声明,因此该固件可能无法被收录到
linux-firmware 项目中。

提供的脚本可用于下载并提取固件,生成 wd719x-risc.bin 和
wd719x-wcs.bin 文件。请将它们放置在 /lib/firmware/ 目录下。
脚本内容如下::

  #!/bin/sh
  wget http://support.wdc.com/download/archive/pciscsi.exe
  lha xi pciscsi.exe pci-scsi.exe
  lha xi pci-scsi.exe nt/wd7296a.sys
  rm pci-scsi.exe
  dd if=wd7296a.sys of=wd719x-risc.bin bs=1 skip=5760 count=14336
  dd if=wd7296a.sys of=wd719x-wcs.bin bs=1 skip=20096 count=514
  rm wd7296a.sys
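提取完成后可以核对一下文件大小(示意脚本;期望字节数直接取自上面 dd 命令的 count 参数,`check_fw_size` 是本文虚构的函数名):

```shell
#!/bin/sh
# 核对提取出的固件文件大小(示意)。期望大小来自上文 dd 的 count 参数:
# wd719x-risc.bin 应为 14336 字节,wd719x-wcs.bin 应为 514 字节。
check_fw_size() {
    # $1 = 文件路径, $2 = 期望字节数
    actual=$(wc -c < "$1") || return 1
    [ "$actual" -eq "$2" ]
}

# 用法示例:
#   check_fw_size wd719x-risc.bin 14336 && echo OK
#   check_fw_size wd719x-wcs.bin  514   && echo OK
```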
@ -0,0 +1,317 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: ../disclaimer-zh_CN.rst

:Original: Documentation/security/SCTP.rst

:翻译:

 赵硕 Shuo Zhao <zhaoshuo@cqsoftware.com.cn>

====
SCTP
====

SCTP的LSM支持
=============

安全钩子
--------

对于安全模块支持,已经实现了四个特定于SCTP的钩子::

    security_sctp_assoc_request()
    security_sctp_bind_connect()
    security_sctp_sk_clone()
    security_sctp_assoc_established()

这些钩子的用法将在下面描述SELinux实现的 `SCTP的SELinux支持`_
一章中说明。


security_sctp_assoc_request()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
将关联INIT数据包的 ``@asoc`` 和 ``@chunk->skb`` 传递给安全模块。
成功时返回 0,失败时返回错误。
::

    @asoc - 指向sctp关联结构的指针。
    @skb - 指向包含关联数据包skbuff的指针。


security_sctp_bind_connect()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
将一个或多个IPv4/IPv6地址传递给安全模块,根据 ``@optname`` 进行
验证;如下面的权限检查表所示,该选项将决定是绑定还是连接服务。
成功时返回 0,失败时返回错误。
::

    @sk - 指向sock结构的指针。
    @optname - 需要验证的选项名称。
    @address - 一个或多个IPv4 / IPv6地址。
    @addrlen - 地址的总长度。每个ipv4或ipv6地址分别使用
               sizeof(struct sockaddr_in)或
               sizeof(struct sockaddr_in6)来计算。

    ------------------------------------------------------------------
    |                        BIND 类型检查                           |
    | @optname                   | @address contains                 |
    |----------------------------|-----------------------------------|
    | SCTP_SOCKOPT_BINDX_ADD     | 一个或多个 ipv4 / ipv6 地址       |
    | SCTP_PRIMARY_ADDR          | 单个 ipv4 或 ipv6 地址            |
    | SCTP_SET_PEER_PRIMARY_ADDR | 单个 ipv4 或 ipv6 地址            |
    ------------------------------------------------------------------

    ------------------------------------------------------------------
    |                       CONNECT 类型检查                         |
    | @optname                   | @address contains                 |
    |----------------------------|-----------------------------------|
    | SCTP_SOCKOPT_CONNECTX      | 一个或多个 ipv4 / ipv6 地址       |
    | SCTP_PARAM_ADD_IP          | 一个或多个 ipv4 / ipv6 地址       |
    | SCTP_SENDMSG_CONNECT       | 单个 ipv4 或 ipv6 地址            |
    | SCTP_PARAM_SET_PRIMARY     | 单个 ipv4 或 ipv6 地址            |
    ------------------------------------------------------------------

条目 ``@optname`` 的摘要如下::

    SCTP_SOCKOPT_BINDX_ADD - 允许在(可选地)调用 bind(3) 后,关联额外
                             的绑定地址。
                             sctp_bindx(3) 用于在套接字上添加一组绑定地址。

    SCTP_SOCKOPT_CONNECTX - 允许分配多个地址以连接到对端(多宿主)。
                            sctp_connectx(3) 使用多个目标地址在SCTP
                            套接字上发起连接。

    SCTP_SENDMSG_CONNECT - 通过sendmsg(2)或sctp_sendmsg(3)在新关联上
                           发起连接。

    SCTP_PRIMARY_ADDR - 设置本地主地址。

    SCTP_SET_PEER_PRIMARY_ADDR - 请求远程对端将某个地址设置为其主地址。

    SCTP_PARAM_ADD_IP - 在启用动态地址重配置时使用(如下所述)。
    SCTP_PARAM_SET_PRIMARY - 在启用动态地址重配置时使用(如下所述)。


为了支持动态地址重新配置,必须在两个端点上启用以下
参数(或使用适当的 **setsockopt**\(2))::

    /proc/sys/net/sctp/addip_enable
    /proc/sys/net/sctp/addip_noauth_enable
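启用这两个参数的一种方式如下(仅为示意性的配置片段:需要root权限,且内核已加载sctp模块;并非可直接移植的脚本):

```shell
#!/bin/sh
# 示意:在两个端点上启用SCTP动态地址重配置所需的参数。
echo 1 > /proc/sys/net/sctp/addip_enable
echo 1 > /proc/sys/net/sctp/addip_noauth_enable

# 等价地,也可以使用 sysctl(8):
#   sysctl -w net.sctp.addip_enable=1
#   sysctl -w net.sctp.addip_noauth_enable=1
```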
当相应的 ``@optname`` 存在时,以下的 *_PARAM_* 参数会
通过ASCONF块发送到对端::

    @optname                        ASCONF Parameter
    ----------                      ------------------
    SCTP_SOCKOPT_BINDX_ADD     ->   SCTP_PARAM_ADD_IP
    SCTP_SET_PEER_PRIMARY_ADDR ->   SCTP_PARAM_SET_PRIMARY


security_sctp_sk_clone()
~~~~~~~~~~~~~~~~~~~~~~~~
每当通过 **accept**\(2) 创建一个新的套接字(即TCP类型的套接字),或者当
用户空间调用 **sctp_peeloff**\(3) 将一个套接字“剥离”时,会调用此函数。
::

    @asoc - 指向当前sctp关联结构的指针。
    @sk - 指向当前套接字结构的指针。
    @newsk - 指向新的套接字结构的指针。


security_sctp_assoc_established()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
当收到COOKIE ACK时调用,对于客户端,对端的secid将被保存
到 ``@asoc->peer_secid`` 中::

    @asoc - 指向sctp关联结构的指针。
    @skb - 指向COOKIE ACK数据包的skbuff指针。


用于关联建立的安全钩子
----------------------

下图展示了在建立关联时 ``security_sctp_bind_connect()``、 ``security_sctp_assoc_request()``
和 ``security_sctp_assoc_established()`` 的使用。
::

      SCTP 端点 "A"                              SCTP 端点 "Z"
      =============                              =============
    sctp_sf_do_prm_asoc()
  关联的设置可以通过connect(2),
  sctp_connectx(3),sendmsg(2)
  或 sctp_sendmsg(3)来发起。
  这将导致调用security_sctp_bind_connect()
  发起与SCTP对端端点"Z"的关联。

      INIT --------------------------------------------->
                                               sctp_sf_do_5_1B_init()
                                             响应一个INIT数据块。
                                             SCTP对端端点"A"正在请求
                                             一个临时关联。
                                             如果是首次关联,调用
                                             security_sctp_assoc_request()
                                             来设置对端标签。
                                             如果不是首次关联,检查是否被允许。
                                             如果允许,则发送:
      <----------------------------------------------- INIT ACK
      |                                      否则,生成审计事件并默默
      |                                      丢弃该数据包。
      |
  COOKIE ECHO ------------------------------------------>
                                               sctp_sf_do_5_1D_ce()
                                             响应一个COOKIE ECHO数据块。
                                             确认该cookie并创建一个永久关联。
                                             调用security_sctp_assoc_request()
                                             执行与INIT数据块响应相同的操作。
      <------------------------------------------- COOKIE ACK
      |                                               |
  sctp_sf_do_5_1E_ca                                  |
  调用security_sctp_assoc_established()               |
  来设置对端标签                                      |
      |                                               |
      |                           如果是SCTP_SOCKET_TCP或是剥离的套接
      |                           字,会调用 security_sctp_sk_clone()
      |                           来克隆新的套接字。
      |                                               |
    建立                                            建立
      |                                               |
    ------------------------------------------------------------------
    |                           关联建立                             |
    ------------------------------------------------------------------


SCTP的SELinux支持
=================

安全钩子
--------

上面的 `SCTP的LSM支持`_ 章节描述了以下SCTP安全钩子,SELinux的细节
说明如下::

    security_sctp_assoc_request()
    security_sctp_bind_connect()
    security_sctp_sk_clone()
    security_sctp_assoc_established()


security_sctp_assoc_request()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
将关联INIT数据包的 ``@asoc`` 和 ``@chunk->skb`` 传递给安全模块。
成功时返回 0,失败时返回错误。
::

    @asoc - 指向sctp关联结构的指针。
    @skb - 指向关联数据包skbuff的指针。

安全模块执行以下操作:
    如果这是 ``@asoc->base.sk`` 上的首次关联,则将对端的sid设置
    为 ``@skb`` 中的值。这将确保只有一个对端sid分配给可能支持多个
    关联的 ``@asoc->base.sk``。

    否则验证 ``@asoc->base.sk peer sid`` 是否与 ``@skb peer sid``
    匹配,以确定该关联是否应被允许或拒绝。

    将sctp的 ``@asoc sid`` 设置为套接字的sid(来自 ``asoc->base.sk``)
    并从 ``@skb peer sid`` 中提取MLS部分。这将在SCTP的TCP类型套接字及
    剥离连接中使用,因为它们会导致生成一个新的套接字。

    如果配置了IP安全选项(CIPSO/CALIPSO),则会在套接字上设置IP选项。


security_sctp_bind_connect()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
根据 ``@optname`` 检查ipv4/ipv6地址所需的权限,具体如下::

    ------------------------------------------------------------------
    |                        BIND 权限检查                           |
    | @optname                   | @address contains                 |
    |----------------------------|-----------------------------------|
    | SCTP_SOCKOPT_BINDX_ADD     | 一个或多个 ipv4 / ipv6 地址       |
    | SCTP_PRIMARY_ADDR          | 单个 ipv4 或 ipv6 地址            |
    | SCTP_SET_PEER_PRIMARY_ADDR | 单个 ipv4 或 ipv6 地址            |
    ------------------------------------------------------------------

    ------------------------------------------------------------------
    |                       CONNECT 权限检查                         |
    | @optname                   | @address contains                 |
    |----------------------------|-----------------------------------|
    | SCTP_SOCKOPT_CONNECTX      | 一个或多个 ipv4 / ipv6 地址       |
    | SCTP_PARAM_ADD_IP          | 一个或多个 ipv4 / ipv6 地址       |
    | SCTP_SENDMSG_CONNECT       | 单个 ipv4 或 ipv6 地址            |
    | SCTP_PARAM_SET_PRIMARY     | 单个 ipv4 或 ipv6 地址            |
    ------------------------------------------------------------------


`SCTP的LSM支持`_ 提供了 ``@optname`` 摘要,并且还描述了当启用动态地址
重新配置时ASCONF块的处理过程。


security_sctp_sk_clone()
~~~~~~~~~~~~~~~~~~~~~~~~
每当通过 **accept**\(2) 创建一个新的套接字(即TCP类型的套接字),或者
当用户空间调用 **sctp_peeloff**\(3) 将一个套接字“剥离”时,
``security_sctp_sk_clone()`` 会分别将新套接字的sid和对端sid设置为
``@asoc sid`` 和 ``@asoc peer sid`` 中包含的值。
::

    @asoc - 指向当前sctp关联结构的指针。
    @sk - 指向当前sock结构的指针。
    @newsk - 指向新sock结构的指针。


security_sctp_assoc_established()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
当接收到COOKIE ACK时调用,它将连接的对端sid设置为 ``@skb`` 中的值::

    @asoc - 指向sctp关联结构的指针。
    @skb - 指向COOKIE ACK包skbuff的指针。


策略声明
--------
以下支持SCTP的类和权限在内核中是可用的::

    class sctp_socket inherits socket { node_bind }

当启用以下策略功能时::

    policycap extended_socket_class;

SELinux对SCTP的支持添加了用于连接特定端口类型的 ``name_connect`` 权限,
以及在下面的章节中解释的 ``association`` 权限。

如果用户空间工具已更新,SCTP将支持如下所示的 ``portcon`` 声明::

    portcon sctp 1024-1036 system_u:object_r:sctp_ports_t:s0


SCTP对端标签
------------
每个SCTP套接字仅分配一个对端标签。这个标签将在建立第一个关联时分配。
任何后续在该套接字上的关联都会将它们的数据包对端标签与套接字的对端标
签进行比较,只有在它们不同的情况下 ``association`` 权限才会被验证。
这是通过检查套接字的对端sid与接收到的数据包中的对端sid来验证的,以决
定是否允许或拒绝该关联。

注:
    1) 如果对端标签未启用,则对端上下文将始终是 ``SECINITSID_UNLABELED``
       (在策略声明中为 ``unlabeled_t`` )。

    2) 由于SCTP可以在单个套接字上支持每个端点(多宿主)的多个传输地址,因此
       可以配置策略和NetLabel为每个端点提供不同的对端标签。由于套接字的对端
       标签是由第一个关联的传输地址决定的,因此建议所有的对端标签保持一致。

    3) 用户空间可以使用 **getpeercon**\(3) 来检索套接字的对端上下文。

    4) 虽然这不是SCTP特有的,但在使用NetLabel时要注意,如果标签分配给特定的接
       口,而该接口停用(goes down),则NetLabel服务会移除该条目。因此,请确保
       网络启动脚本调用 **netlabelctl**\(8) 来设置所需的标签(详细信息,
       请参阅 **netlabel-config**\(8) 辅助脚本)。

    5) NetLabel的SCTP对端标签规则的应用方式,参见以下网址中标签
       为“netlabel”的一组帖子:
       https://www.paul-moore.com/blog/t.

    6) CIPSO仅支持IPv4地址: ``socket(AF_INET, ...)``
       CALIPSO仅支持IPv6地址: ``socket(AF_INET6, ...)``

       测试CIPSO/CALIPSO时请注意以下事项:
           a) 如果SCTP数据包由于无效标签无法送达,CIPSO会发送一个ICMP包。
           b) CALIPSO不会发送ICMP包,只会默默丢弃数据包。

    7) 不支持IPSEC(RFC 3554):虽然内核支持SCTP/IPSEC,但用户空间
       (**racoon**\(8) 或 **ipsec_pluto**\(8))尚未实现相应支持。
@ -18,7 +18,9 @@
   credentials
   snp-tdx-threat-model
   lsm
   lsm-development
   sak
   SCTP
   self-protection
   siphash
   tpm/index
@ -28,7 +30,5 @@
TODOLIST:
* IMA-templates
* keys/index
* lsm-development
* SCTP
* secrets/index
* ipe