Compare revisions

Showing 557 additions and 105 deletions
@@ -303,7 +303,7 @@ control by passing the parameter ``transparent_hugepage=always`` or
kernel command line.
Alternatively, each supported anonymous THP size can be controlled by
-passing ``thp_anon=<size>,<size>[KMG]:<state>;<size>-<size>[KMG]:<state>``,
+passing ``thp_anon=<size>[KMG],<size>[KMG]:<state>;<size>[KMG]-<size>[KMG]:<state>``,
where ``<size>`` is the THP size (must be a power-of-2 multiple of PAGE_SIZE
and a supported anonymous THP size) and ``<state>`` is one of ``always``, ``madvise``,
``never`` or ``inherit``.
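For instance, a hypothetical command line following this format (sizes and
states chosen purely for illustration) could be::

    thp_anon=16K-64K:always;128K,512K:inherit;256K:madvise;1M-2M:never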
@@ -326,11 +326,53 @@ PMD_ORDER THP policy will be overridden. If the policy for PMD_ORDER
is not defined within a valid ``thp_anon``, its policy will default to
``never``.
+Similarly to ``transparent_hugepage``, you can control the hugepage
+allocation policy for the internal shmem mount by using the kernel parameter
+``transparent_hugepage_shmem=<policy>``, where ``<policy>`` is one of the
+six valid policies for shmem (``always``, ``within_size``, ``advise``,
+``never``, ``deny``, and ``force``).
+
+Similarly to ``transparent_hugepage_shmem``, you can control the default
+hugepage allocation policy for the tmpfs mount by using the kernel parameter
+``transparent_hugepage_tmpfs=<policy>``, where ``<policy>`` is one of the
+four valid policies for tmpfs (``always``, ``within_size``, ``advise``,
+``never``). The tmpfs mount default policy is ``never``.
+
+In the same manner as ``thp_anon`` controls each supported anonymous THP
+size, ``thp_shmem`` controls each supported shmem THP size. ``thp_shmem``
+has the same format as ``thp_anon``, but also supports the policy
+``within_size``.
+
+``thp_shmem=`` may be specified multiple times to configure all THP sizes
+as required. If ``thp_shmem=`` is specified at least once, any shmem THP
+sizes not explicitly configured on the command line are implicitly set to
+``never``.
+
+The ``transparent_hugepage_shmem`` setting only affects the global toggle. If
+``thp_shmem`` is not specified, the PMD_ORDER hugepage policy will default to
+``inherit``. However, if a valid ``thp_shmem`` setting is provided by the
+user, the PMD_ORDER hugepage policy will be overridden. If the policy for
+PMD_ORDER is not defined within a valid ``thp_shmem``, its policy will
+default to ``never``.
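As a hedged illustration, the shmem/tmpfs parameters can be combined on the
kernel command line (policies chosen purely for the example)::

    transparent_hugepage_shmem=advise transparent_hugepage_tmpfs=within_size thp_shmem=64K:within_size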
Hugepages in tmpfs/shmem
========================
-You can control hugepage allocation policy in tmpfs with mount option
-``huge=``. It can have following values:
+Traditionally, tmpfs only supported a single huge page size ("PMD"). Today,
+it also supports smaller sizes just like anonymous memory, often referred
+to as "multi-size THP" (mTHP). Huge pages of any size are commonly
+represented in the kernel as "large folios".
+
+While there is fine control over the huge page sizes to use for the internal
+shmem mount (see below), ordinary tmpfs mounts will make use of all available
+huge page sizes without any control over the exact sizes, behaving more like
+other file systems.
+
+tmpfs mounts
+------------
+
+The THP allocation policy for tmpfs mounts can be adjusted using the mount
+option ``huge=``. It can have the following values:
always
Attempt to allocate huge pages every time we need a new page;
@@ -340,24 +382,24 @@ never
within_size
Only allocate huge page if it will be fully within i_size.
-Also respect fadvise()/madvise() hints;
+Also respect madvise() hints;
advise
-Only allocate huge pages if requested with fadvise()/madvise();
+Only allocate huge pages if requested with madvise();
+Remember that the kernel may use huge pages of all available sizes, and
+that there is no fine control over sizes as for the internal tmpfs mount.
-The default policy is ``never``.
+The default policy in the past was ``never``, but it can now be adjusted
+using the kernel parameter ``transparent_hugepage_tmpfs=<policy>``.
``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
``huge=never`` will not attempt to break up huge pages at all, just stop more
from being allocated.
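For example, a tmpfs instance can be mounted with huge pages enabled as
follows (the mount point is arbitrary)::

    mount -t tmpfs -o huge=always tmpfs /mnt/mytmpfs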
-There's also sysfs knob to control hugepage allocation policy for internal
-shmem mount: /sys/kernel/mm/transparent_hugepage/shmem_enabled. The mount
-is used for SysV SHM, memfds, shared anonymous mmaps (of /dev/zero or
-MAP_ANONYMOUS), GPU drivers' DRM objects, Ashmem.
-
-In addition to policies listed above, shmem_enabled allows two further
-values:
+In addition to the policies listed above, the sysfs knob
+/sys/kernel/mm/transparent_hugepage/shmem_enabled will affect the
+allocation policy of tmpfs mounts, when set to the following values:
deny
For use in emergencies, to force the huge option off from
@@ -365,13 +407,24 @@ deny
force
Force the huge option on for all - very useful for testing;
-Shmem can also use "multi-size THP" (mTHP) by adding a new sysfs knob to
-control mTHP allocation:
-'/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled',
-and its value for each mTHP is essentially consistent with the global
-setting. An 'inherit' option is added to ensure compatibility with these
-global settings. Conversely, the options 'force' and 'deny' are dropped,
-which are rather testing artifacts from the old ages.
+shmem / internal tmpfs
+----------------------
+
+The internal tmpfs mount is used for SysV SHM, memfds, shared anonymous
+mmaps (of /dev/zero or MAP_ANONYMOUS), GPU drivers' DRM objects, and Ashmem.
+
+To control the THP allocation policy for this internal tmpfs mount, the
+sysfs knob /sys/kernel/mm/transparent_hugepage/shmem_enabled and the knobs
+per THP size in
+'/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled'
+can be used.
+
+The global knob has the same semantics as the ``huge=`` mount option for
+tmpfs mounts, except that the different huge page sizes can be controlled
+individually; the setting of the global knob is used only when the
+per-size knob is set to 'inherit'.
+
+The options 'force' and 'deny' are dropped for the individual sizes; they
+are rather testing artifacts from the old days.
always
Attempt to allocate <size> huge pages every time we need a new page;
@@ -385,10 +438,10 @@ never
within_size
Only allocate <size> huge page if it will be fully within i_size.
-Also respect fadvise()/madvise() hints;
+Also respect madvise() hints;
advise
-Only allocate <size> huge pages if requested with fadvise()/madvise();
+Only allocate <size> huge pages if requested with madvise();
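As a sketch, a single size can be given its own policy while all other sizes
keep following the global knob (the 64kB directory is illustrative; available
sizes depend on the system)::

    echo within_size > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/shmem_enabled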
Need of application restart
===========================
@@ -413,7 +466,7 @@ AnonHugePmdMapped).
The number of file transparent huge pages mapped to userspace is available
by reading ShmemPmdMapped and ShmemHugePages fields in ``/proc/meminfo``.
To identify what applications are mapping file transparent huge pages, it
-is necessary to read ``/proc/PID/smaps`` and count the FileHugeMapped fields
+is necessary to read ``/proc/PID/smaps`` and count the FilePmdMapped fields
for each mapping.
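One way to total these fields for a process (PID to be filled in by the
reader)::

    awk '/FilePmdMapped/ { sum += $2 } END { print sum " kB" }' /proc/$PID/smaps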
Note that reading the smaps file is expensive and reading it
@@ -530,10 +583,28 @@ anon_fault_fallback_charge
instead falls back to using huge pages with lower orders or
small pages even though the allocation was successful.
-swpout
-is incremented every time a huge page is swapped out in one
+zswpout
+is incremented every time a huge page is swapped out to zswap in one
piece without splitting.
+swpin
+is incremented every time a huge page is swapped in from a non-zswap
+swap device in one piece.
+swpin_fallback
+is incremented if swapin fails to allocate or charge a huge page
+and instead falls back to using huge pages with lower orders or
+small pages.
+swpin_fallback_charge
+is incremented if swapin fails to charge a huge page and instead
+falls back to using huge pages with lower orders or small pages
+even though the allocation was successful.
+swpout
+is incremented every time a huge page is swapped out to a non-zswap
+swap device in one piece without splitting.
swpout_fallback
is incremented if a huge page has to be split before swapout.
Usually because it failed to allocate some contiguous swap space
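These counters are also exposed per supported THP size under sysfs, e.g.
(the 64kB directory is illustrative)::

    cat /sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/swpout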
......
.. SPDX-License-Identifier: GPL-2.0
====================
Linux NVMe multipath
====================
This document describes NVMe multipath and the path selection policies
supported by the Linux NVMe host driver.
Introduction
============
The NVMe multipath feature in Linux integrates namespaces with the same
identifier into a single block device. Using multipath enhances the reliability
and stability of I/O access while improving bandwidth performance. When a user
sends I/O to this merged block device, the multipath mechanism selects one of
the underlying block devices (paths) according to the configured policy.
Different policies result in different path selections.
Policies
========
All policies follow the ANA (Asymmetric Namespace Access) mechanism, meaning
that when an optimized path is available, it will be chosen over a non-optimized
one. Currently, the NVMe multipath policies include numa (default),
round-robin, and queue-depth.
To set the desired policy (e.g., round-robin), use one of the following methods:
1. echo -n "round-robin" > /sys/module/nvme_core/parameters/iopolicy
2. add "nvme_core.iopolicy=round-robin" to the kernel command line
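The active policy can be read back from the same module parameter (a minimal
sanity check, assuming the parameter is readable on the system)::

    $ cat /sys/module/nvme_core/parameters/iopolicy
    round-robin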
NUMA
----
The NUMA policy selects the path closest to the NUMA node of the current CPU for
I/O distribution. This policy maintains the nearest paths to each NUMA node
based on network interface connections.
When to use the NUMA policy:
1. Multi-core Systems: Optimizes memory access in multi-core and
multi-processor systems, especially under NUMA architecture.
2. High Affinity Workloads: Binds I/O processing to the CPU to reduce
communication and data transfer delays across nodes.
Round-Robin
-----------
The round-robin policy distributes I/O requests evenly across all paths to
enhance throughput and resource utilization. Each I/O operation is sent to the
next path in sequence.
When to use the round-robin policy:
1. Balanced Workloads: Effective for balanced and predictable workloads with
similar I/O size and type.
2. Homogeneous Path Performance: Utilizes all paths efficiently when
performance characteristics (e.g., latency, bandwidth) are similar.
Queue-Depth
-----------
The queue-depth policy manages I/O requests based on the current queue depth
of each path, selecting the path with the least number of in-flight I/Os.
When to use the queue-depth policy:
1. High load with small I/Os: Effectively balances load across paths when
the load is high, and I/O operations consist of small, relatively
fixed-sized requests.
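The policy can also be changed per subsystem at runtime through sysfs (a
sketch; the subsystem instance name varies by system)::

    $ echo -n "queue-depth" > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy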
@@ -60,7 +60,7 @@ description of available events and configuration options in sysfs, see
The "format" directory describes format of the config fields of the
perf_event_attr structure. The "events" directory provides configuration
templates for all documented events. For example,
"Rx_PCIe_TLP_Data_Payload" is an equivalent of "eventid=0x22,type=0x1".
"rx_pcie_tlp_data_payload" is an equivalent of "eventid=0x21,type=0x0".
The "perf list" command shall list the available events from sysfs, e.g.::
@@ -79,8 +79,8 @@ Example usage of counting PCIe RX TLP data payload (Units of bytes)::
The average RX/TX bandwidth can be calculated using the following formula:
-PCIe RX Bandwidth = Rx_PCIe_TLP_Data_Payload / Measure_Time_Window
-PCIe TX Bandwidth = Tx_PCIe_TLP_Data_Payload / Measure_Time_Window
+PCIe RX Bandwidth = rx_pcie_tlp_data_payload / Measure_Time_Window
+PCIe TX Bandwidth = tx_pcie_tlp_data_payload / Measure_Time_Window
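As a worked example with invented numbers: if rx_pcie_tlp_data_payload
accumulates 2,000,000,000 bytes over a 10-second measurement window, the
average PCIe RX bandwidth is 2,000,000,000 / 10 = 200 MB/s.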
Lane Event Usage
-------------------------------
......
@@ -35,7 +35,10 @@ e.g. hisi_sccl1_hha0/rx_operations is RX_OPERATIONS event of HHA index #0 in
SCCL ID #1.
The driver also provides a "cpumask" sysfs attribute, which shows the CPU core
-ID used to count the uncore PMU event.
+ID used to count the uncore PMU event. An "associated_cpus" sysfs attribute is
+also provided to show the CPUs associated with this PMU. The "cpumask" indicates
+the CPUs on which to open the events, usually as a hint for userspace tools like
+perf. It only contains one associated CPU from the "associated_cpus".
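A minimal sketch for inspecting both attributes (the PMU instance name is
taken from the example above)::

    $ cat /sys/bus/event_source/devices/hisi_sccl1_hha0/cpumask
    $ cat /sys/bus/event_source/devices/hisi_sccl1_hha0/associated_cpus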
Example usage of perf::
......
@@ -14,6 +14,8 @@ Performance monitor support
qcom_l2_pmu
qcom_l3_pmu
starfive_starlink_pmu
+mrvl-odyssey-ddr-pmu
+mrvl-odyssey-tad-pmu
arm-ccn
arm-cmn
arm-ni
@@ -26,3 +28,4 @@ Performance monitor support
meson-ddr-pmu
cxl
ampere_cspmu
+mrvl-pem-pmu
===================================================================
Marvell Odyssey DDR PMU Performance Monitoring Unit (PMU UNCORE)
===================================================================
The Odyssey DRAM Subsystem (DSS) supports eight counters for monitoring
performance, and software can program those counters to monitor any of the
defined performance events. Supported performance events include those
counted at the interface between the DDR controller and the PHY, at the
interface between the DDR controller and the CHI interconnect, or within
the DDR controller.

Additionally, the DSS supports two fixed performance event counters, one
for DDR reads and the other for DDR writes.

The counters operate in either manual or auto mode.
The PMU driver exposes the available events and format options under sysfs::
/sys/bus/event_source/devices/mrvl_ddr_pmu_<>/events/
/sys/bus/event_source/devices/mrvl_ddr_pmu_<>/format/
Examples::
$ perf list | grep ddr
mrvl_ddr_pmu_<>/ddr_act_bypass_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_bsm_alloc/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_bsm_starvation/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_cam_active_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_cam_mwr/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_cam_rd_active_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_cam_rd_or_wr_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_cam_read/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_cam_wr_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_cam_write/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_capar_error/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_crit_ref/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_ddr_reads/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_ddr_writes/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_dfi_cmd_is_retry/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_dfi_cycles/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_dfi_parity_poison/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_dfi_rd_data_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_dfi_wr_data_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_dqsosc_mpc/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_dqsosc_mrr/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_enter_mpsm/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_enter_powerdown/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_enter_selfref/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_hif_pri_rdaccess/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_hif_rd_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_hif_rd_or_wr_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_hif_rmw_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_hif_wr_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_hpri_sched_rd_crit_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_load_mode/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_lpri_sched_rd_crit_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_precharge/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_precharge_for_other/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_precharge_for_rdwr/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_raw_hazard/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_rd_bypass_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_rd_crc_error/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_rd_uc_ecc_error/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_rdwr_transitions/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_refresh/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_retry_fifo_full/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_spec_ref/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_tcr_mrr/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_war_hazard/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_waw_hazard/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_win_limit_reached_rd/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_win_limit_reached_wr/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_wr_crc_error/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_wr_trxn_crit_access/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_write_combine/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_zqcl/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_zqlatch/ [Kernel PMU event]
mrvl_ddr_pmu_<>/ddr_zqstart/ [Kernel PMU event]
$ perf stat -e ddr_cam_read,ddr_cam_write,ddr_cam_active_access,\
  ddr_cam_rd_or_wr_access,ddr_cam_rd_active_access,ddr_cam_mwr <workload>
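The two fixed counters can be sampled in the same way (a sketch following the
events listed above)::

    $ perf stat -e ddr_ddr_reads,ddr_ddr_writes <workload>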
====================================================================
Marvell Odyssey LLC-TAD Performance Monitoring Unit (PMU UNCORE)
====================================================================
Each TAD provides eight 64-bit counters for monitoring
cache behavior. The driver always configures the same counter for
all the TADs. The user thus effectively reserves one of the
eight counters in every TAD in order to count an event across all TADs.
The occurrences of events are aggregated and presented to the user
at the end of running the workload. The driver does not provide a
way for the user to partition TADs so that different TADs are used for
different applications.
The performance events reflect various internal or interface activities.
By combining the values from multiple performance counters, cache
performance can be measured in terms of metrics such as cache miss rate,
cache allocations, interface retry rate, and internal resource occupancy.
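As a hedged illustration of such a combination: assuming allocations occur on
misses, an approximate cache miss rate is
tad_alloc_any / (tad_hit_any + tad_alloc_any), using the events listed below.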
The PMU driver exposes the available events and format options under sysfs::
/sys/bus/event_source/devices/tad/events/
/sys/bus/event_source/devices/tad/format/
Examples::
$ perf list | grep tad
tad/tad_alloc_any/ [Kernel PMU event]
tad/tad_alloc_dtg/ [Kernel PMU event]
tad/tad_alloc_ltg/ [Kernel PMU event]
tad/tad_hit_any/ [Kernel PMU event]
tad/tad_hit_dtg/ [Kernel PMU event]
tad/tad_hit_ltg/ [Kernel PMU event]
tad/tad_req_msh_in_exlmn/ [Kernel PMU event]
tad/tad_tag_rd/ [Kernel PMU event]
tad/tad_tot_cycle/ [Kernel PMU event]
$ perf stat -e tad_alloc_dtg,tad_alloc_ltg,tad_alloc_any,tad_hit_dtg,tad_hit_ltg,tad_hit_any,tad_tag_rd <workload>
=================================================================
Marvell Odyssey PEM Performance Monitoring Unit (PMU UNCORE)
=================================================================
The PCI Express Interface Units (PEM) are associated with a corresponding
monitoring unit. This includes performance counters to track various
characteristics of the data that is transmitted over the PCIe link.

The counters track inbound and outbound transactions, with separate
counters for posted/non-posted/completion TLPs. Inbound and outbound
memory read requests, along with their latencies, can also be monitored.
Address Translation Services (ATS) events such as ATS Translation, ATS
Page Request, and ATS Invalidation, along with their corresponding
latencies, are also tracked.

There are separate 64-bit counters to measure posted/non-posted/completion
TLPs in inbound and outbound transactions. ATS events are measured by
different counters.
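As a hedged example of combining these counters (assuming each latency counter
accumulates the total latency across the counted events), an average ATS
invalidation latency can be estimated as ats_inv_latency / ats_inv.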
The PMU driver exposes the available events and format options under sysfs::
/sys/bus/event_source/devices/mrvl_pcie_rc_pmu_<>/events/
/sys/bus/event_source/devices/mrvl_pcie_rc_pmu_<>/format/
Examples::
# perf list | grep mrvl_pcie_rc_pmu
mrvl_pcie_rc_pmu_<>/ats_inv/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ats_inv_latency/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ats_pri/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ats_pri_latency/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ats_trans/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ats_trans_latency/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ib_inflight/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ib_reads/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ib_req_no_ro_ebus/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ib_req_no_ro_ncb/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ib_tlp_cpl_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ib_tlp_dwords_cpl_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ib_tlp_dwords_npr/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ib_tlp_dwords_pr/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ib_tlp_npr/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ib_tlp_pr/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_inflight_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_merges_cpl_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_merges_npr_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_merges_pr_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_reads_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_tlp_cpl_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_tlp_dwords_cpl_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_tlp_dwords_npr_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_tlp_dwords_pr_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_tlp_npr_partid/ [Kernel PMU event]
mrvl_pcie_rc_pmu_<>/ob_tlp_pr_partid/ [Kernel PMU event]
# perf stat -e ib_inflight,ib_reads,ib_req_no_ro_ebus,ib_req_no_ro_ncb <workload>
@@ -251,9 +251,7 @@ performance supported in `AMD CPPC Performance Capability <perf_cap_>`_).
In some ASICs, the highest CPPC performance is not the one in the ``_CPC``
table, so we need to expose it to sysfs. If boost is not active, but
still supported, this maximum frequency will be larger than the one in
-``cpuinfo``. On systems that support preferred core, the driver will have
-different values for some cores than others and this will reflect the values
-advertised by the platform at bootup.
+``cpuinfo``.
This attribute is read-only.
``amd_pstate_lowest_nonlinear_freq``
......
@@ -733,7 +733,7 @@ can easily happen that your self-built kernel will lack modules for tasks you
did not perform before utilizing this make target. That's because those tasks
require kernel modules that are normally autoloaded when you perform that task
for the first time; if you didn't perform that task at least once before using
-localmodonfig, the latter will thus assume these modules are superfluous and
+localmodconfig, the latter will thus assume these modules are superfluous and
disable them.
You can try to avoid this by performing typical tasks that often will autoload
......
@@ -1431,7 +1431,7 @@ can easily happen that your self-built kernels will lack modules for tasks you
did not perform at least once before utilizing this make target. That happens
when a task requires kernel modules which are only autoloaded when you execute
it for the first time. So when you never performed that task since starting your
-kernel the modules will not have been loaded -- and from localmodonfig's point
+kernel the modules will not have been loaded -- and from localmodconfig's point
of view look superfluous, which thus disables them to reduce the amount of code
to be compiled.
......