The code to look up a scatter gather table entry assumed that it was
possible to use sg_virt() in order to look up the DMA address in a mapped
scatter gather table. However, this assumption is incorrect, as the DMA
mapping code may merge multiple entries into one. In that case, the DMA
address space may have e.g. two consecutive pages, which is correctly
represented by the scatter gather list entry, but the virtual
addresses of these two pages may differ and the relationship cannot be
resolved anymore.
Avoid this problem entirely by working with the offset into the mapped
area instead of using virtual addresses. With that, we only use the DMA
length and DMA address from the scatter gather list entries. The
underlying DMA/IOMMU code is therefore free to merge two entries into
one even if the virtual address space for the area is not contiguous.
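A minimal sketch of the resulting lookup, with made-up function and
parameter names (not the driver's actual code), assuming nents is the
count returned by dma_map_sg()/dma_map_sgtable():

#include <linux/scatterlist.h>

/* Resolve an offset into the mapped area to a DMA address using only
 * sg_dma_address()/sg_dma_len(), never sg_virt().
 */
static dma_addr_t sg_offset_to_dma(struct scatterlist *sgl, int nents,
				   unsigned int offset)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		if (offset < sg_dma_len(sg))
			return sg_dma_address(sg) + offset;
		offset -= sg_dma_len(sg);
	}

	return 0; /* offset lies beyond the mapped area */
}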
Fixes: 90db507552 ("wifi: iwlwifi: use already mapped data when TXing an AMSDU")
Reported-by: Chris Bainbridge <chris.bainbridge@gmail.com>
Closes: https://lore.kernel.org/r/ZrNRoEbdkxkKFMBi@debian.local
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
Tested-by: Chris Bainbridge <chris.bainbridge@gmail.com>
Signed-off-by: Kalle Valo <kvalo@kernel.org>
Link: https://patch.msgid.link/20240812110640.460514-1-benjamin@sipsolutions.net
Map the pages when allocating them so that we will not need to map each
of the used fragments at a later point.
For now the mapping is not used; this will be changed in a later commit.
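As a rough illustration of the idea (assumed names and context, not the
actual driver code), mapping at allocation time looks like this:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Allocate a page and map it right away; users later only compute
 * offsets into the already-mapped page instead of mapping fragments.
 */
static struct page *alloc_mapped_page(struct device *dev, dma_addr_t *dma)
{
	struct page *page = alloc_page(GFP_ATOMIC);

	if (!page)
		return NULL;

	*dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, *dma)) {
		__free_page(page);
		return NULL;
	}

	return page;
}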
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Link: https://patch.msgid.link/20240703125541.7ced468fe431.Ibb109867dc680c37fe8d891e9ab9ef64ed5c5d2d@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Add logic to map the entire SKB for AMSDUs. The required scatter
gather list is allocated together with the space for TSO headers, and
unmapping happens when the TSO header page is freed.
For now the mapping is unused; this will be changed in a later commit.
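The mapping roughly follows the usual skb-to-sg-table pattern; a
simplified sketch with assumed names (error handling trimmed):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/skbuff.h>

/* Build an sg list covering the whole skb and DMA-map it once.
 * sgt->sgl must already provide nr_frags + 1 entries.
 */
static int map_whole_skb(struct device *dev, struct sk_buff *skb,
			 struct sg_table *sgt)
{
	int n;

	sg_init_table(sgt->sgl, skb_shinfo(skb)->nr_frags + 1);

	n = skb_to_sgvec(skb, sgt->sgl, 0, skb->len);
	if (n <= 0)
		return -EINVAL;
	sgt->orig_nents = n;

	return dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
}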
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20240703125541.96c6006f40ff.I55b74bc97c4026761397a7513a559c88a10b6489@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Instead of returning the pointer to the structure describing the header
page, return the pointer to the newly allocated area. This disentangles
the caller from the allocation within the page, as it no longer needs to
advance the position itself.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Link: https://patch.msgid.link/20240703125541.044f2cb373f1.I52a807ac6f311b89530e18deacc7452638a6f5d8@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
iwl_trans_pcie_gen2_fw_alive() doesn't use the scd_addr parameter;
it was there only because we needed the function to have the same
prototype as the iwl_trans_ops::fw_alive callback.
Now that the ops struct is removed, there is no reason to keep the parameter.
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Link: https://patch.msgid.link/20240618194245.1aa8bf13aea9.I9662c10c1db545dd8849af4bb4ab47708d4548d8@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This was needed when we had multiple types of transports. Now we only
have PCIe, so there is no need for these ops.
Clean up the code so that the different trans APIs call the PCIe
functions directly instead of going through the callbacks,
and remove struct iwl_trans_ops.
Signed-off-by: Yedidya Benshimol <yedidya.ben.shimol@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://msgid.link/20240605140327.8315ff64f9f3.Ifdbc1f26d49766f7de553dcb5f613885f4ee65cc@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
The TX queue code was mostly moved out to support an internal
transport that we were never going to publish, but we're no
longer using that. Since we're also going to be dissolving
the virtual transport layer entirely, integrate the TX queue
code into the PCIe layer.
This also makes a small start on removing the virtual transport
function layer: iwl_trans_send_cmd() now calls
iwl_trans_pcie_send_hcmd() directly. Even though that still calls
the transport send_cmd method for now, we'll clean it up later.
Also, not everything is renamed yet.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://msgid.link/20240605140327.936b13f45071.Ib219ce01a1e67bcad79d5131626db950252aaa46@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
struct net_device shouldn't be embedded into any structure; instead,
the owner should use the priv space to embed its state into net_device.
Embedding net_device into structures prohibits the usage of flexible
arrays in the net_device structure. For more details, see the discussion
at [1].
Un-embed the net_device from struct iwl_trans_pcie by converting it
into a pointer. Then leverage alloc_netdev() to allocate the
net_device object in iwl_trans_pcie_alloc().
The private data of the net_device becomes a pointer to the struct
iwl_trans_pcie, so it is easy to get back to the iwl_trans_pcie parent
given the net_device object.
[1] https://lore.kernel.org/all/20240229225910.79e224cf@kernel.org/
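A minimal sketch of the conversion (field and function names here are
assumptions for illustration, not necessarily the exact driver code):

#include <linux/netdevice.h>

/* The dummy net_device's priv area only stores a back-pointer. */
static int iwl_pcie_alloc_napi_dev(struct iwl_trans_pcie *trans_pcie)
{
	struct iwl_trans_pcie **priv;
	struct net_device *dev;

	dev = alloc_netdev(sizeof(*priv), "dummy", NET_NAME_UNKNOWN,
			   init_dummy_netdev);
	if (!dev)
		return -ENOMEM;

	priv = netdev_priv(dev);
	*priv = trans_pcie;	/* lets the NAPI poll find its parent */
	trans_pcie->napi_dev = dev;

	return 0;
}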
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Breno Leitao <leitao@debian.org>
Link: https://msgid.link/20240501165417.3406039-1-leitao@debian.org
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
On older devices (before unified image!) we can end up calling
stop_device from an rfkill interrupt. However, in stop_device
we attempt to synchronize IRQs, which then of course deadlocks.
Avoid this by checking the context: if running from the IRQ
thread, then don't synchronize. This wouldn't be correct on a
newer device since RSS is supported there, but older devices only
have a single interrupt/queue.
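Conceptually the check amounts to something like the following sketch
(helper name and parameter are assumptions):

/* Skip synchronize_irq() when already running in the IRQ thread:
 * waiting for ourselves to finish would deadlock.
 */
static void iwl_pcie_maybe_sync_irqs(struct iwl_trans *trans, bool from_irq)
{
	if (from_irq)
		return;

	iwl_pcie_synchronize_irqs(trans);
}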
Fixes: 37fb29bd1f ("wifi: iwlwifi: pcie: synchronize IRQs before NAPI")
Reviewed-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Kalle Valo <kvalo@kernel.org>
Link: https://msgid.link/20231215111335.59aab00baed7.Iadfe154d6248e7f9dfd69522e5429dbbd72925d7@changeid
On newer hardware, a queue's RB status / write pointer
can be bigger than 4095 (0xFFF), so we cannot mask the
value by 0xFFF unconditionally. Since the masking is only necessary
on older hardware anyway, move it into the helper function and apply
it only for older HW. This also moves the endian conversion into the
helper, where it is easier to handle.
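The helper then looks roughly like this (a sketch based on the
description above; exact struct and field names may differ):

static u16 iwl_pcie_rx_get_closed_rb_stts(struct iwl_trans *trans,
					  struct iwl_rxq *rxq)
{
	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) {
		__le16 *rb_stts = rxq->rb_stts;

		return le16_to_cpu(READ_ONCE(*rb_stts));
	} else {
		struct iwl_rb_status *rb_stts = rxq->rb_stts;

		/* only 12 bits are valid on older hardware */
		return le16_to_cpu(READ_ONCE(rb_stts->closed_rb_num)) & 0xFFF;
	}
}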
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230830112059.7be2a3fff6f4.I94f11dee314a4f7c1941d2d223936b1fa8aa9ee4@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Replace the field reduce_power_dram with a struct that holds data about
the reduced-power table's DRAM regions. Generalize load_payloads_segments()
to work for both pnvm tables and reduced-power tables.
Make the required adjustments in the data structures.
Signed-off-by: Alon Giladi <alon.giladi@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230606103519.6fe66958f049.I85d80682229fc02fe354462cc9da40937558f30c@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Save the pnvm payloads in several DRAM segments (not only in one as
before). In addition, allocate a FW structure in DRAM that holds the
segments' addresses and forward its address to the FW. This is done when
the FW has the capability to handle pnvm images this way, which helps
to process large pnvm images.
Signed-off-by: Alon Giladi <alon.giladi@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230606103519.dbdad8995ce1.I986213527982637042532de3851a1bd8a11be87a@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Change the field pnvm_dram to an array that describes multiple regions,
and add a counter for the number of pnvm regions that were allocated
in DRAM.
Signed-off-by: Alon Giladi <alon.giladi@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230606103519.bb206d71bf45.I627640701757bb2f234f8e18a3afbd6af1206658@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
When the RX/TX queues are being freed, an RX flow could still be running
on a different CPU. Call napi_synchronize() to prevent such a race.
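For illustration, the synchronization point looks roughly like this
(field names assumed):

#include <linux/netdevice.h>

/* Make sure no NAPI poll is still running on another CPU before the
 * RX queues are torn down.
 */
static void iwl_pcie_rx_napi_sync(struct iwl_trans_pcie *trans_pcie)
{
	int i;

	for (i = 0; i < trans_pcie->trans->num_rx_queues; i++) {
		struct napi_struct *napi = &trans_pcie->rxq[i].napi;

		if (napi->poll)
			napi_synchronize(napi);
	}
}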
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Co-developed-by: Benjamin Berg <benjamin.berg@intel.com>
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
Link: https://lore.kernel.org/r/20230416154301.5171ee44dcc1.Iff18718540da412e084e7d8266447d40730600ed@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
The Bz devices got a new completion descriptor again since
we only ever really used 4 out of 32 bytes anyway. Adjust
the code to deal with that. Note that the intention was to
reduce the size, but the hardware was implemented wrongly.
While at it, do some cleanups: remove the union to simplify
the code, change iwl_pcie_free_bd_size() to no longer need
an argument, and add iwl_pcie_used_bd_size() with the logic to
select the completion descriptor size.
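The new helper selects the descriptor size by device family, along
these lines (sketch; struct names follow the description above):

static int iwl_pcie_used_bd_size(struct iwl_trans *trans)
{
	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ)
		return sizeof(struct iwl_rx_completion_desc_bz);

	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
		return sizeof(struct iwl_rx_completion_desc);

	return sizeof(__le32);
}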
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20220204122220.bef461a04110.I90c8885550fa54eb0aaa4363d322f50e301175a6@changeid
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
If the firmware crashes while we're waiting for the reset
handshake then it cannot possibly make progress anymore,
and we will just time out the wait. That's pointless, so
just stop waiting at that point.
Additionally, if it never acknowledges the reset handshake,
something went wrong.
Dump an error in both of these cases; we need to do it
synchronously here since the device will be turned off.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20210802170640.8b6a33544b4b.I55f97f70f8efa64db064a9207177a094c60ac8f1@changeid
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
On 64-bit machines, struct iwl_rx_mem_buffer has a lot of
padding due to the use of pointers after the small items.
Move the list entry before them, and while at it also add
documentation for it.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20210802170640.6a62255b3df0.I47bb36530a3c2cdbd73454c796ce608ee2a32a6c@changeid
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This new feature allows OEMs to set a special reduced power table in a
UEFI variable, which we use to tell the firmware to change the TX
power tables.
Read the variable and store it in a DRAM block to pass it to the
firmware. We do this as part of the PNVM loading flow.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20210621103449.259a33ba5074.I2e0bb142d2a9c412547cba89b62dd077b328fdc4@changeid
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The TR/CR tail data are meant to be per-queue arrays; however, we
allocate them completely wrong (we have a separate allocation
per queue).
Looking at this more closely, it turns out that the hardware
never uses these - we have a separate free list per RX queue
and maintain a write pointer for that in a register, and the
RX itself is indicated in the RB status (rb_stts) DMA region.
Despite nothing using the tail pointers, the hardware will
unconditionally access them to write updates, even when we aren't
using CRs/TRs.
Give it dummy values that we never use/update so it can do that
without causing trouble.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20210617110647.5f5764e04c46.I4d5de1929be048085767f1234a1e07b517ab6a2d@changeid
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
After the fix from Jiri that disabled local IRQs instead of
just BHs (necessary to fix an issue with submitting a command
with IRQs already disabled), there was still a situation in
which we could still end up enabling BHs deep in that code, if the
device config sets the apmg_wake_up_wa configuration, which is true
on all 7000 series devices.
To fix that without requiring a revert of commit 1ed08f6fb5
("iwlwifi: remove flags argument for nic_access"), split up
nic access into a version with BH manipulation, to be used most
of the time, and one without it for this specific case where the
local IRQs are already disabled.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/iwlwifi.20210415164821.d0f2edda1651.I75f762e0bed38914d1300ea198b86dd449b4b206@changeid
Handling host commands synchronously is not directly related to the PCIe
transport and can serve as common logic for any transport, so move it
to the trans layer.
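The common synchronous flow is the usual send-then-wait pattern; a
generic sketch with made-up names (not the driver's exact code), where
example_enqueue_hcmd() stands in for the bus-specific enqueue:

#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/wait.h>

#define EXAMPLE_HCMD_TIMEOUT	(2 * HZ)
#define EXAMPLE_SYNC_ACTIVE	0

struct example_trans {
	wait_queue_head_t wait_command_queue;	/* init_waitqueue_head() elsewhere */
	unsigned long status;
};

int example_enqueue_hcmd(struct example_trans *trans);	/* bus specific */

/* Enqueue the command, then wait until the response handler clears the
 * "active" bit and wakes up wait_command_queue.
 */
static int example_send_hcmd_sync(struct example_trans *trans)
{
	int ret;

	set_bit(EXAMPLE_SYNC_ACTIVE, &trans->status);

	ret = example_enqueue_hcmd(trans);
	if (ret < 0) {
		clear_bit(EXAMPLE_SYNC_ACTIVE, &trans->status);
		return ret;
	}

	ret = wait_event_timeout(trans->wait_command_queue,
				 !test_bit(EXAMPLE_SYNC_ACTIVE, &trans->status),
				 EXAMPLE_HCMD_TIMEOUT);
	return ret ? 0 : -ETIMEDOUT;
}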
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20210117164916.fde99af4e0f7.I4cab95919eb35cc5bfb26d32dcf5e15419d0e0ef@changeid
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Instead of pretending to have NAPI and then relying entirely on
interrupts anyway, properly implement NAPI and schedule the poll
when we get an interrupt, re-enabling the interrupt only after
the poll has completed.
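The pattern this moves to is the standard NAPI one, roughly as in the
following generic example (assumed helpers, not the driver's code):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct example_queue {
	struct napi_struct napi;
};

/* assumed helpers, provided elsewhere */
void example_disable_irq(struct example_queue *q);
void example_enable_irq(struct example_queue *q);
int example_rx(struct example_queue *q, int budget);

static irqreturn_t example_irq_handler(int irq, void *data)
{
	struct example_queue *q = data;

	/* mask the interrupt and let the poll do the work */
	example_disable_irq(q);
	napi_schedule(&q->napi);

	return IRQ_HANDLED;
}

static int example_poll(struct napi_struct *napi, int budget)
{
	struct example_queue *q = container_of(napi, struct example_queue, napi);
	int done = example_rx(q, budget);

	/* re-enable the interrupt only once polling is finished */
	if (done < budget && napi_complete_done(napi, done))
		example_enable_irq(q);

	return done;
}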
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20210117130510.a5951ac4fc06.I9c84a147288fcfb1b019572c6758f2d92949f5d7@changeid
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
There are some races in the hardware that can possibly lead to
a bus lockup later during a restart when we manage to kill the
firmware at a bad time (while it's accessing the bus).
To work around this, add support for a new handshake between
firmware and driver to ensure that the firmware is in a well-
known state before we kill it.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20201209231352.7756fcc9865c.I13de65e0ffcb4186dd4c1a465f66df2e98c9a947@changeid
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
To avoid duplicating code, we need to call the
iwl_pcie_txq_update_byte_cnt_tbl() function from bus-independent code,
so make it bus independent.
The following spatch rule was used:
@r1@
struct iwl_trans_pcie *trans_pcie;
@@
(
-trans_pcie->scd_bc_tbls
+trans->txqs.scd_bc_tbls
|
-iwl_pcie_txq_update_byte_cnt_tbl
+iwl_txq_gen1_update_byte_cnt_tbl
|
-iwl_pcie_txq_inval_byte_cnt_tbl
+iwl_txq_gen1_inval_byte_cnt_tbl
|
-iwl_pcie_tfd_unmap
+iwl_txq_gen1_tfd_unmap
|
-iwl_pcie_tfd_tb_get_addr
+iwl_txq_gen1_tfd_tb_get_addr
|
-iwl_pcie_tfd_tb_get_len
+iwl_txq_gen1_tfd_tb_get_len
|
-iwl_pcie_tfd_get_num_tbs
+iwl_txq_gen1_tfd_get_num_tbs
)
/* clean all new unused variables */
@ depends on r1@
type T;
identifier i;
expression E;
@@
- T i = E;
... when != i
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20200930191738.8d33e791ec8c.Ica35125ed640aa3aa1ecc38fb5e8f1600caa8df6@changeid
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
We don't want to have txq code in the PCIe transport code, so move all
the relevant elements to a new iwl_txq structure and store it in
iwl_trans.
The following spatch was used:
@ replace_pcie @
struct iwl_trans_pcie *trans_pcie;
@@
(
-trans_pcie->queue_stopped
+trans->txqs.queue_stopped
|
-trans_pcie->queue_used
+trans->txqs.queue_used
|
-trans_pcie->txq
+trans->txqs.txq
|
-trans_pcie->txq
+trans->txqs.txq
|
-trans_pcie->cmd_queue
+trans->txqs.cmd.q_id
|
-trans_pcie->cmd_fifo
+trans->txqs.cmd.fifo
|
-trans_pcie->cmd_q_wdg_timeout
+trans->txqs.cmd.wdg_timeout
)
// clean all new unused variables
@ depends on replace_pcie @
type T;
identifier i;
expression E;
@@
- T i = E;
... when != i
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20200529092401.a428d3c9d66f.Ie04ae55f33954636a39c98e7ae1e739c0507435b@changeid
We don't really expect fragmented RBs, and don't seem to be seeing
them in practice since that would've caused a crash. Nevertheless,
we should expect the hardware to send them.
Parse the flag indicating a fragmented buffer, but then discard it
and any fragments thereof, at least for now. We need to do more
work in the higher layers to properly deal with this: we may
not get "normal" firmware notifications that are fragmented, only
RX, and then we need to put the data back together and add the
necessary API to report a chain of buffers to the higher layers.
This doesn't fit into struct iwl_rx_cmd_buffer today.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20200425130140.e78a59f70b1d.Ica656a98a4e4220d73edc97600edd680cbc97241@changeid
Since the recent patch in this area, we no longer allocate 64k
for a single queue, but only 1k, which still means a full page.
Use a DMA pool to reduce this further, since a typical system will
have a lot of queues that can share pages.
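Roughly, the allocation then becomes (pool name, sizes and function
names here are illustrative assumptions):

#include <linux/dmapool.h>
#include <linux/gfp.h>

/* Several ~1k byte-count tables can now share one page via the pool. */
static void *example_alloc_bc_tbl(struct device *dev,
				  struct dma_pool **pool_out,
				  dma_addr_t *phys)
{
	struct dma_pool *pool;
	void *virt;

	pool = dma_pool_create("iwlwifi:bc", dev, 1024, 256, 0);
	if (!pool)
		return NULL;

	virt = dma_pool_zalloc(pool, GFP_KERNEL, phys);
	if (!virt) {
		dma_pool_destroy(pool);
		return NULL;
	}

	*pool_out = pool;
	return virt;
}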
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Link: https://lore.kernel.org/r/iwlwifi.20200425130140.6e84c79aea30.Ie9a417132812d110ec1cc87852f101477c01cfcb@changeid
There's no need for this to be exposed outside of the tx.c
file; make it static.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>