In TVQM mode the queue ID is assigned after enablement.
Stop assuming a pre-defined TX queue ID in functions
that will be used by the TVQM allocation path.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Change queue allocation to be dynamic. On transport init only
the command queue is allocated; other queues are allocated
on demand.
This is due to the huge number of queues we will soon enable (512)
and is a preparation for the TX Virtual Queue Manager feature (TVQM),
where the firmware will assign the actual queue number on demand.
This also includes allocating the byte count table per queue,
rather than as one contiguous chunk of memory.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This function is basically the same as the gen1 version, except
for cleanups of old device configuration that is never used in
the a000 configuration.
It will also help with refactoring rf_kill later on.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In the a000 transport we will allocate queues dynamically.
Right now queues are allocated as one big chunk of memory
and accessed as such.
The dynamic allocation of the queues will require accessing
the queues as pointers.
In order to keep the pre-a000 TX queue handling simple,
keep allocating and freeing the memory in the same style,
but move the various functions to access the queues as
individual pointers.
Dynamic allocation for the a000 devices will be in a separate
patch.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The code is basically the same, with cleanups of the old narrow
host command, APMG workarounds, some cosmetic changes, and use
of the TFH functions when accessing TFD queues.
This also enables the cleanup of iwl_pcie_tfd_set_tb(), since
it is no longer called anywhere in the a000 data path.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Move to use the correct structure.
Remove code referring to the old command.
Update DMA locations.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Cleanup code that is irrelevant for a000 devices.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This is just a copy-paste, in order to make tracking the
changes easier.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In a000 devices the TX handling is different in a few ways:
* Queues are allocated dynamically
* DQA is enabled by default
* The driver shouldn't access the TFH registers - the ucode
configures it all via the SCD_QUEUE_CFG command
Support all of this in a new transport API: the op mode sends
the command, and the transport allocates the queue dynamically,
fills in the DMA properties, sends the command to the FW, and
gets the queue ID back.
The current implementation only adds the new transport API and
fills in the DMA properties.
Future patches will complete the other parts.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The context information structure is going to be used in a000
devices for firmware self-init.
The self-init includes the firmware loading itself from DRAM by
the ROM.
This means the TFH-related firmware loading code can be cleaned up.
The firmware loading covers the paging memory as well, so the op
mode can stop initializing the paging and sending the DRAM_BLOCK_CMD.
The firmware does the RFH, TFH and SCD configuration, while the
driver only fills the required configuration and addresses into
the context information structure.
The only remaining access to the RFH is the write pointer, which
is updated upon the alive interrupt, after the FW has configured
the RFH.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
We already have queue_used in the transport - we can
use it instead.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This reverts commit 8aacf4b73fe8 ("iwlwifi: introduce trans API
to get byte count table").
The commit is not needed as a better approach will be taken.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
David reported that the code I added uses the decrement
and increment operators on a boolean variable.
Fix that.
Fixes: 0cd58eaab148 ("iwlwifi: pcie: allow the op_mode to block the tx queues")
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
When resuming, it's possible for the following scenario to occur:
* iwl_pci_resume() enables the RF-kill interrupt
* iwl_pci_resume() reads the RF-kill state (e.g. to 'radio enabled')
* RF_KILL interrupt triggers, and iwl_pcie_irq_handler() reads the
state, now 'radio disabled', and acquires the &trans_pcie->mutex.
* iwl_pcie_irq_handler() further calls iwl_trans_pcie_rf_kill() to
indicate to the higher layers that the radio is now disabled (and
stops the device while at it)
* iwl_pcie_irq_handler() drops the mutex
* iwl_pci_resume() continues, acquires the mutex and calls the higher
layers to indicate that the radio is enabled.
At this point, the device is stopped but the higher layers think it's
available, and can call deeply into the driver to try to enable it.
However, this will fail since the device is actually disabled.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The various TFD/TB helpers have two code paths depending on the
type of TFD supported, with variable shadowing due to the new if
branches. Move the fall-through code into else branches to avoid
variable shadowing. While doing so, rename some of the variables
and do some other cleanups (like removing void * casts of void *
pointers.)
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Due to firmware design considerations, move to wide ID for
all commands.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In order to utilize the host's CPUs in the most efficient way
we bind each rx interrupt vector to each CPU on the host.
Each rx interrupt is prioritized to execute only on the designated CPU
rather than any CPU.
Processor affinity takes advantage of the fact that remnants of
a process that ran on a given processor may remain in that
processor's memory state (for example, data in the CPU cache)
after another process runs on that CPU. Scheduling that process
to execute on the same processor can improve performance by
reducing performance-degrading events such as cache misses.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In case the OS provides fewer interrupts than requested, different
causes will share the same interrupt vector, as follows:
1. One interrupt fewer: non-RX causes are shared with the FBQ.
2. Two interrupts fewer: non-RX causes are shared with the FBQ and RSS.
3. More than two interrupts fewer: we will use fewer RSS queues.
Also make the request depend on the number of online CPUs
instead of possible CPUs.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The original intent was to have the general iwl_queue shared
between RX and TX queues, but that never actually happened.
Since it is not shared with any struct but iwl_txq, it adds
unnecessary complexity. Merge the two structs.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Previous patch introduced the new formats. This patch
allocates the new structures and adjusts code accordingly.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In future HW the byte count table address will be configured
by the ucode per queue. Add an API to expose the byte count
table to the opmode.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
New hardware supports bigger TFDs and TBs.
Introduce the new formats and adjust defines and code
relying on old format.
Changing the actual TFD allocation is trickier and
deferred to the next patch.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
All transports have this structure. By moving it to shared
code, we can get rid of casting to the specific transport
in probe and remove.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Centralize the logging of the SCD status. The motivation is
that for a000 devices we will have new SCD HW, but this
code was duplicated anyway, so it is a proper cleanup.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Currently the scratch buffer is set to 16 bytes and indicates
the size of the bi-directional DMA.
However, the next HW generation will perform additional
offloading and will write the result into the key location of
the TX command, so the size of the bi-directional consistent
memory should grow accordingly - increase it to 40 bytes.
Generalize the code to get rid of the now-irrelevant scratch
references.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In a multi-queue environment with a new architecture in its
early stages, we may encounter DMA issues. Track RXB status and
bail out in case we receive an index to an RXB that was not
mapped and handed over to the HW.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Upon the firmware load interrupt (FH_TX), the ISR re-enables
only the firmware load interrupt, to avoid races with other
flows, as described in the commit below. When the firmware
is completely loaded, the thread that is loading the
firmware will enable all the interrupts to make sure that
the driver gets the ALIVE interrupt.
The problem with that is that the thread that is loading
the firmware is actually racing against the ISR and we can
get to the following situation:
CPU0                                            CPU1
iwl_pcie_load_given_ucode
    ...
    iwl_pcie_load_firmware_chunk
        wait_for_interrupt
                                                <interrupt>
                                                ISR handles CSR_INT_BIT_FH_TX
                                                ISR wakes up the thread on CPU0
    /* enable all the interrupts
     * to get the ALIVE interrupt
     */
    iwl_enable_interrupts
                                                ISR re-enables CSR_INT_BIT_FH_TX only
    /* start the firmware */
    iwl_write32(trans, CSR_RESET, 0);
BUG! ALIVE interrupt will never arrive since it has been
masked by CPU1.
In order to fix that, change the ISR to first check if
STATUS_INT_ENABLED is set. If so, re-enable all the
interrupts. If STATUS_INT_ENABLED is clear, then we can
check what specific interrupt happened and re-enable only
that specific interrupt (RFKILL or FH_TX).
All the credit for the analysis goes to Kirtika who did the
actual debugging work.
Cc: <stable@vger.kernel.org> [4.5+]
Fixes: a6bd005fe92 ("iwlwifi: pcie: fix RF-Kill vs. firmware load race")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The PCIe transport needs to store two pointers in each TX SKB, and
currently assumes mac80211's ieee80211_tx_info is present in the CB
to do that.
In order to remove that assumption, have the opmodes pass in the
offset to where the pointers can be stored in the CB and use the
offset in the PCIe code.
To make the disentanglement complete, remove mac80211.h includes
from everywhere in the generic iwlwifi code. This required adding
an include of cfg80211.h in one place.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Support DQA queue sharing when no free queue exists for
allocation to a STA that already exists. This means that
a single queue will serve more than a single TID (although
the RA will be the same for all TIDs served).
We try to choose the lowest AC possible, to ensure the
shared queues have the lowest possible combined AC
requirements. The queue to share is chosen only from the
same RA's DATA queues as follows (in descending priority):
1. An AC_BE queue
2. Same AC queue
3. Highest AC queue that is lower than new AC
4. Any existing AC (there always is at least 1 DATA queue)
If any aggregations existed for any of the TIDs of the
shared queue - they are stopped (the FW is notified), but
no delBA is sent.
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Integrated 9000 devices have a bug with shadow register
value retention.
If the driver writes RBD registers while the MAC is asleep, the
values are stored in shadow registers to be copied whenever the
MAC wakes up.
However, in 9000 devices a MAC wakeup is not triggered, and when
the bus powers down due to inactivity, the shadow values and
dirty bits are lost.
Turn on the chicken bits that cause a MAC wakeup for RX-related
values as well when the device is in D0.
When the device is in low power mode, turn the RX wakeup chicken
bits off, since the driver is idle and this W/A is not needed.
Remove the previous W/A, which was ineffective.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
It's cleaner to always call the iwl_trans_ref/unref() functions
instead of sometimes calling the trans-specific ops directly. This
also prepares for moving some of the code from the trans-specific ops
to the common trans code.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
We don't use the refcount value anymore, all the refcounting is done
in the runtime PM usage_count value. Remove it.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
My patch resized the pool, but neglected to resize
the global table, which is obviously wrong since the global
table maps the pool's RXBs to VIDs one to one. This results
in a panic on 9000 devices.
Add a build bug to avoid such a case in the future.
Fixes: 7b5424361ec9 ("iwlwifi: pcie: fine tune number of rxbs")
Reported-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
We kick the allocator when we have 2 RBDs that don't have
attached RBs, and the allocator allocates 8 RBs, meaning
that it needs another 6 RBDs to attach the RBs to.
The design is that the allocator should always have enough RBDs
to fulfill requests, so we give the allocator 6 RBDs in advance;
when it is kicked, it gets the additional 2 RBDs and has enough.
These RBDs were taken from the RX queue itself, meaning
that each RX queue didn't have the maximal number of
RBDs, but MAX - 6.
Change the initial number of RBDs in the system to include both
the queue size and the allocator reserve.
Note that multi-queue is always 511 instead of 512, to avoid a
full queue, since we cannot detect this state easily enough in
the 9000 arch.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Working with MSI-X requires prior configuration.
This includes requesting interrupt vectors from the OS,
registering the vectors, and mapping the optional causes to the
relevant interrupts. In addition, add a new interrupt handler
to handle MSI-X interrupts.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
These are a few fixes for the current cycle.
3 out of the 5 patches fix bugzilla reports.
* fix a race that users reported when we try to load the firmware
and the hardware rfkill interrupt triggers at the same time.
* Luca fixes a very visible bug in scheduled scan: our firmware
doesn't support scheduled scan with no profile configured and
the supplicant sometimes requests such scheduled scans.
* build system fix
* firmware name update for 8265
* typo fix in return value
When we load the firmware, we hold trans_pcie->mutex to
avoid nested flows. We also rely on the ISR to wake up the
thread when the DMA has finished copying a chunk. During
this flow, we enable the RF-Kill interrupt.
The problem is that the RF-Kill interrupt handler can take
the mutex and bring the device down. This means that if
we load the firmware while the RF-Kill switch is enabled
(which will happen when we load the INIT firmware to read
the device's capabilities and register to mac80211), we
may get an RF-Kill interrupt immediately and the ISR will
be waiting for the mutex held by the thread that is
currently loading the firmware. At this stage, the ISR
won't be able to service the DMA interrupt needed to
wake up the thread that loads the firmware. We are in a
deadlock situation which ends when the thread that loads
the firmware fails on timeout and releases the mutex.
To fix this, take the mutex later in the flow, disable
the interrupts and synchronize_irq() to give a chance to
the RF-Kill interrupt to run and complete.
After that, mask all the interrupts besides the DMA
interrupt and proceed with firmware load. Make sure to
check that there was no RF-Kill interrupt when the
interrupts were disabled.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=111361
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Previous patches enabled new 9000 hardware DMA for one queue
only.
Enable the actual multi-queue path and configuration now.
This also requires a per-queue NAPI struct.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Enable runtime power management (RTPM) for PCIe devices and implement
the corresponding functions to enable D0i3 mode when the device is
idle.
Additionally, remove some unnecessary #ifdef's because the RTPM code
will not be called if runtime PM is not configured.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The 9000 series introduces several changes in the device
DMA operation.
As the device now supports multi-queue rx, several DMA channels
should be configured.
The flow of providing the device with the allocated RBDs
changes as well - the device now maintains separate tables of
used and free RBDs.
The hardware may use the free table to feed RBDs to any queue.
This requires maintaining a shared table to map returned RBDs
back to the original RXB - for that purpose the VID is
introduced: an internal identifier of the RB, placed in the
lower 12 bits and returned by the HW in the used data.
Another change is support for 64-bit DMA addresses.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
The 9000 series devices will support multiple RX queues.
The current code has one static RX queue - change it to allocate
a number of queues per the device capability (pre-9000 devices
have the number of RX queues set to one).
Subsequent generalizations are:
* Change the code to access an explicitly numbered RX queue only
when the queue number is known - when handling an interrupt, when
accessing the default queue, and when iterating the queues. The
rest of the functions receive the RX queue as a pointer.
* Generalize the warning on allocation failure to consider the
allocator status instead of a single RX queue's status.
* Move the initial RX pool of memory buffers to be shared among
all the queues and allocated to the default queue on init.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
When the Tx queues are full above a threshold, we
immediately stop the mac80211's queue to stop getting new
packets. This worked until TSO was enabled.
With TSO, one single packet from mac80211 can use many
descriptors since a large send needs to be split into
several segments.
This means that stopping mac80211's queues is not enough
and we also need to ensure that we don't overflow the Tx
queues with one single packet from mac80211.
Add code to the transport layer to do just that. Stop
mac80211's queue as soon as the queue is full above the
same threshold as before, and keep pushing the current
packet along with its segments onto the queue, but check
that we don't overflow. If that would happen, buffer the
segments and send them when there is room in the Tx queue
again. Of course, we first need to send the buffered
segments, and only then wake up mac80211's queues.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
When the op_mode sends an skb whose payload is bigger than
MSS, PCIe will create an A-MSDU out of it. PCIe assumes
that the skb that is coming from the op_mode can fit in one
A-MSDU. It is the op_mode's responsibility to make sure
that this guarantee holds.
Additional headers need to be built for the subframes.
The TSO core code takes care of the IP / TCP headers and
the driver takes care of the 802.11 subframe headers.
These headers are stored on a per-cpu page that is re-used
for all the packets handled on that same CPU. Each skb
holds a reference to that page and releases the page when
it is reclaimed. When the page gets full, it is released
and a new one is allocated.
Since any SKB that doesn't go through the fast-xmit path
of mac80211 will be segmented, we can assume here that the
packet is not WEP / TKIP and has a proper SNAP header.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Allow configuring the driver to pretend to have TX CSUM
offload support. This will be useful to test the TSO flows
that will come in subsequent patches.
This configuration is disabled by default.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
ilw@linux.intel.com is not available anymore.
linuxwifi@intel.com should be used instead.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Host commands now have a group ID; express this in printed messages.
Signed-off-by: Sharon Dvir <sharon.dvir@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
In certain flows (see next patches), the op_mode may need to
block the Tx queues for a short period. Provide an API for
that. The transport is in charge of counting the number of
times the queues are blocked since the op_mode may block the
queues several times in a row before unblocking them.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Transport code currently calls itself through the transport ops,
which is quite pointless. Clean up all of this. While at it,
remove the unnecessary dir argument and the redundant IDI code.
In slave transports, call both the common slave debugfs and the
transport's own. SDIO has no files, so remove it all there.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Make the various conversion functions typesafe, so we don't
accidentally try to call them with the wrong pointers and
cast them to something that will crash.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>