The interrupt handler (pt_core_irq_handler()) of the ptdma
driver can be called from interrupt context. The code flow
in this function can lead down to pt_core_execute_cmd(), which
attempts to grab a mutex; a mutex may sleep and is therefore not
appropriate in interrupt context, ultimately leading to a kernel panic.
Fix this by changing the mutex to a spinlock, which has
been verified to resolve the issue.
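A minimal sketch of the pattern, with hypothetical names rather than the
actual ptdma structures: a lock that may be taken from the interrupt
handler must be a spinlock taken with interrupts disabled, since a mutex
may sleep.

  #include <linux/spinlock.h>

  struct example_cmd_queue {
          spinlock_t lock;        /* previously a struct mutex */
          unsigned int head;
  };

  static void example_execute_cmd(struct example_cmd_queue *q)
  {
          unsigned long flags;

          /* Safe from both process and interrupt context. */
          spin_lock_irqsave(&q->lock, flags);
          q->head++;              /* stand-in for queueing the command */
          spin_unlock_irqrestore(&q->lock, flags);
  }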
Fixes: fa5d823b16 ("dmaengine: ptdma: Initial driver for the AMD PTDMA")
Signed-off-by: Eric Pilmore <epilmore@gigaio.com>
Link: https://lore.kernel.org/r/20230119033907.35071-1-epilmore@gigaio.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Terminate the vdesc when terminating an ongoing transfer.
This ensures that the vdesc is placed on the desc_terminated list;
the descriptor will then be freed later in desc_free_list().
This fixes the memory leak that can happen when terminating an
ongoing transfer.
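A hedged sketch of the pattern using the virt-dma helpers (the channel
and descriptor types below are hypothetical stand-ins, not the gpcdma
structures): hand the in-flight vdesc to vchan_terminate_vdesc() so it
lands on the desc_terminated list and is freed later, instead of being
dropped and leaked.

  #include <linux/dmaengine.h>
  #include "virt-dma.h"           /* in-tree drivers/dma helper header */

  struct example_chan {
          struct virt_dma_chan vc;
          struct virt_dma_desc *active;   /* currently running transfer */
  };

  static int example_terminate_all(struct dma_chan *c)
  {
          struct example_chan *ec = container_of(to_virt_chan(c),
                                                 struct example_chan, vc);
          unsigned long flags;
          LIST_HEAD(head);

          spin_lock_irqsave(&ec->vc.lock, flags);
          if (ec->active) {
                  /* Moves the vdesc to vc->desc_terminated for later freeing. */
                  vchan_terminate_vdesc(ec->active);
                  ec->active = NULL;
          }
          vchan_get_all_descriptors(&ec->vc, &head);
          spin_unlock_irqrestore(&ec->vc.lock, flags);

          vchan_dma_desc_free_list(&ec->vc, &head);
          return 0;
  }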
Fixes: ee17028009 ("dmaengine: tegra: Add tegra gpcdma driver")
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Link: https://lore.kernel.org/r/20230118115801.15210-1-akhilrajeev@nvidia.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If sdma_load_context() fails, the sdma_desc will be
freed, but the allocated desc->bd is never freed. We have
already hit the sdma_load_context() failure case, with a log like
the one below:
[ 450.699064] imx-sdma 30bd0000.dma-controller: Timeout waiting for CH0 ready
...
In this case, the desc->bd will not be freed without this change.
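A simplified, hedged sketch of the error-path cleanup (the structure and
helper names are hypothetical, not the sdma ones): when loading the
context fails, the already-allocated desc->bd must be freed before
bailing out, otherwise it leaks.

  #include <linux/errno.h>
  #include <linux/slab.h>

  struct example_desc {
          void *bd;                       /* buffer descriptors */
  };

  static int example_alloc_bd(struct example_desc *d)
  {
          d->bd = kzalloc(64, GFP_NOWAIT);
          return d->bd ? 0 : -ENOMEM;
  }

  static void example_free_bd(struct example_desc *d)
  {
          kfree(d->bd);
  }

  static int example_load_context(struct example_desc *d)
  {
          return 0;                       /* may fail (e.g. -ETIMEDOUT) */
  }

  static int example_transfer_init(struct example_desc *desc)
  {
          int ret;

          ret = example_alloc_bd(desc);
          if (ret)
                  return ret;

          ret = example_load_context(desc);
          if (ret)
                  goto err_bd_out;        /* without this, desc->bd leaks */

          return 0;

  err_bd_out:
          example_free_bd(desc);
          return ret;
  }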
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Reviewed-by: Sascha Hauer <s.hauer@pengutronix.de>
Link: https://lore.kernel.org/r/20221130090800.102035-1-hui.wang@canonical.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
PSIL_EP_NATIVE endpoints may not have PEER registers for BCNT, so
udma_decrement_byte_counters() should not try to decrement these counters.
This fixes an issue in crypto iperf testing, seen since this function was
added, where the client side (EVM) hangs without transferring any packets
to the server side.
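A hedged sketch of the check (the enum and register helpers are
hypothetical stand-ins for the udma ones): only write the PEER
byte-count register when the endpoint is not a native PSIL endpoint,
since native endpoints have no PEER registers.

  #include <linux/types.h>

  enum example_ep_type {
          EXAMPLE_EP_NATIVE,
          EXAMPLE_EP_PDMA_XY,
  };

  struct example_chan {
          enum example_ep_type ep_type;
  };

  static void example_write_bcnt(struct example_chan *uc, u32 val) { }
  static void example_write_peer_bcnt(struct example_chan *uc, u32 val) { }

  static void example_decrement_byte_counters(struct example_chan *uc, u32 val)
  {
          example_write_bcnt(uc, val);

          /* Native endpoints have no PEER registers; do not touch them. */
          if (uc->ep_type != EXAMPLE_EP_NATIVE)
                  example_write_peer_bcnt(uc, val);
  }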
Fixes: 7c94dcfa8f ("dmaengine: ti: k3-udma: Reset UDMA_CHAN_RT byte counters to prevent overflow")
Signed-off-by: Jayesh Choudhary <j-choudhary@ti.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Link: https://lore.kernel.org/r/20221128085005.489964-1-j-choudhary@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On driver unload any pending descriptors are flushed and pending
DMA descriptors are explicitly completed:
idxd_dmaengine_drv_remove() ->
drv_disable_wq() ->
idxd_wq_free_irq() ->
idxd_flush_pending_descs() ->
idxd_dma_complete_txd()
With this done during driver unload, any remaining descriptor is
likely stuck and can be dropped. Even so, the descriptor may still
have a callback set that may no longer be accessible. An
example of such a problem is when dmatest fails and the dmatest
module is unloaded. The failure of dmatest leaves descriptors with
dma_async_tx_descriptor::callback pointing to code that no longer
exists. This causes a page fault as below when the IDXD driver
is unloaded and attempts to run the callback:
BUG: unable to handle page fault for address: ffffffffc0665190
#PF: supervisor instruction fetch in kernel mode
#PF: error_code(0x0010) - not-present page
Fix this by clearing the callback pointers on the transmit
descriptors, but only when the workqueue is disabled.
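A hedged sketch of the idea (the drop helper below is hypothetical):
before completing a stuck descriptor during teardown, clear the
dma_async_tx_descriptor callback pointers so completion cannot jump into
code that may belong to an already-unloaded module.

  #include <linux/dmaengine.h>

  static void example_drop_stuck_desc(struct dma_async_tx_descriptor *txd)
  {
          /* The callback may point into an unloaded module (e.g. dmatest). */
          txd->callback = NULL;
          txd->callback_result = NULL;
          txd->callback_param = NULL;

          /* ... complete/fail the descriptor without invoking a callback ... */
  }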
Fixes: 403a2e2365 ("dmaengine: idxd: change MSIX allocation based on per wq activation")
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/37d06b772aa7f8863ca50f90930ea2fd80b38fc3.1670452419.git.reinette.chatre@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On driver unload any pending descriptors are flushed at the
time the interrupt is freed:
idxd_dmaengine_drv_remove() ->
drv_disable_wq() ->
idxd_wq_free_irq() ->
idxd_flush_pending_descs().
If there are any descriptors present that need to be flushed, this
flow triggers a "not present" page fault as below:
BUG: unable to handle page fault for address: ff391c97c70c9040
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
The address that triggers the fault is the address of the
descriptor that was freed moments earlier via:
drv_disable_wq()->idxd_wq_free_resources()
Fix the use after free by freeing the descriptors after any possible
usage. This is done after idxd_wq_reset() to ensure that the memory
remains accessible during possible completion writes by the device.
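A hedged sketch of the resulting teardown order (the helper names are
simplified stand-ins for the idxd ones): descriptor memory is freed only
after the interrupt has been freed and the wq has been reset, so neither
the software flush nor device completion writes can touch freed memory.

  struct example_wq { int id; };

  static void example_wq_free_irq(struct example_wq *wq) { }       /* flushes pending descriptors */
  static void example_wq_unmap_portal(struct example_wq *wq) { }
  static void example_wq_reset(struct example_wq *wq) { }          /* stops device completion writes */
  static void example_wq_free_resources(struct example_wq *wq) { } /* frees descriptor memory */

  static void example_disable_wq(struct example_wq *wq)
  {
          example_wq_free_irq(wq);
          example_wq_unmap_portal(wq);
          example_wq_reset(wq);
          /* Only now is it safe to free the descriptors. */
          example_wq_free_resources(wq);
  }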
Fixes: 63c14ae6c1 ("dmaengine: idxd: refactor wq driver enable/disable operations")
Suggested-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/6c4657d9cff0a0a00501a7b928297ac966e9ec9d.1670452419.git.reinette.chatre@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The workqueue is enabled when the appropriate driver is loaded and
disabled when the driver is removed. When the driver is removed it
assumes that the workqueue was enabled successfully and proceeds to
free allocations made during workqueue enabling.
A failure during workqueue enabling does not prevent the driver from
being loaded. This is because the error path within drv_enable_wq()
returns success unless a second failure is encountered
while cleaning up. By returning success the driver can be
loaded even though the workqueue could not be enabled, and
driver remove then attempts to free allocations that were
never made.
Some examples of problematic flows:
(a)
idxd_dmaengine_drv_probe() -> drv_enable_wq() -> idxd_wq_request_irq():
In the above flow, if idxd_wq_request_irq() fails then
idxd_wq_unmap_portal() is called on the error exit path, but
drv_enable_wq() returns 0 because idxd_wq_disable() succeeds. The
driver is thus loaded successfully.
idxd_dmaengine_drv_remove()->drv_disable_wq()->idxd_wq_unmap_portal()
On driver unload, the above flow triggers the WARN in devm_iounmap()
because the device resource has already been removed during the error
path of drv_enable_wq().
(b)
idxd_dmaengine_drv_probe() -> drv_enable_wq() -> idxd_wq_request_irq():
In the above flow, if idxd_wq_request_irq() fails then
idxd_wq_init_percpu_ref() is never called to initialize the percpu
counter, yet the driver loads successfully because drv_enable_wq()
returns 0.
idxd_dmaengine_drv_remove()->__idxd_wq_quiesce()->percpu_ref_kill():
On driver unload, the above flow triggers a BUG when attempting to
drop the initial ref of the uninitialized percpu ref:
BUG: kernel NULL pointer dereference, address: 0000000000000010
Fix the drv_enable_wq() error path by returning the original error that
indicates failure of workqueue enabling. This ensures that the probe
fails when an error is encountered and the driver remove paths are only
attempted when the workqueue was enabled successfully.
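A hedged sketch of the error-path shape (hypothetical helpers, not the
idxd code): keep the original error code and return it after cleanup,
rather than returning the result of the cleanup calls, so that probe
fails whenever enabling the workqueue fails.

  #include <linux/errno.h>

  struct example_wq { int id; };

  static int example_wq_request_irq(struct example_wq *wq) { return -ENXIO; }
  static void example_wq_unmap_portal(struct example_wq *wq) { }
  static int example_wq_disable(struct example_wq *wq) { return 0; }

  static int example_enable_wq(struct example_wq *wq)
  {
          int rc;

          rc = example_wq_request_irq(wq);
          if (rc < 0)
                  goto err_irq;

          return 0;

  err_irq:
          example_wq_unmap_portal(wq);
          /* Cleanup may succeed; its return value must not mask rc. */
          example_wq_disable(wq);
          return rc;
  }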
Fixes: 1f2bb40337 ("dmaengine: idxd: move wq_enable() to device.c")
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/e8d8116e5efa0fd14fadc5adae6ffd319f0e5ff1.1670452419.git.reinette.chatre@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Rx operation on SPI GSI DMA is currently not working.
As per the GSI spec, the link_rx bit is to be set on the GO TRE of the
tx channel whenever there is going to be a DMA TRE on the rx
channel. It is currently set for duplex operation only.
Set the bit for rx operation as well.
This is part of the changes required to bring up Rx.
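A hedged sketch of the intent (the TRE layout and flag macro below are
hypothetical, not the gpi driver's definitions): the link_rx bit on the
tx channel's GO TRE is set whenever an rx DMA TRE will follow, i.e. for
rx transfers as well as duplex ones.

  #include <linux/bits.h>
  #include <linux/types.h>

  #define EXAMPLE_GO_TRE_LINK_RX  BIT(11)         /* hypothetical bit position */

  struct example_go_tre {
          u32 dword[4];
  };

  static void example_setup_go_tre(struct example_go_tre *tre, bool rx, bool duplex)
  {
          if (rx || duplex)
                  tre->dword[2] |= EXAMPLE_GO_TRE_LINK_RX;
  }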
Fixes: 94b8f0e58f ("dmaengine: qcom: gpi: set chain and link flag for duplex")
Signed-off-by: Vijaya Krishna Nivarthi <quic_vnivarth@quicinc.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/1671212293-14767-1-git-send-email-quic_vnivarth@quicinc.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Pull dmaengine updates from Vinod Koul:
"New support:
- Qualcomm SDM670, SM6115 and SM6375 GPI controller support
- Ingenic JZ4755 dmaengine support
- Removal of iop-adma driver
Updates:
- Tegra support for dma-channel-mask
- at_hdmac cleanup and virt-chan support for this driver"
* tag 'dmaengine-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (46 commits)
dmaengine: Revert "dmaengine: remove s3c24xx driver"
dmaengine: tegra: Add support for dma-channel-mask
dt-bindings: dmaengine: Add dma-channel-mask to Tegra GPCDMA
dmaengine: idxd: Remove linux/msi.h include
dt-bindings: dmaengine: qcom: gpi: add compatible for SM6375
dmaengine: idxd: Fix crc_val field for completion record
dmaengine: at_hdmac: Convert driver to use virt-dma
dmaengine: at_hdmac: Remove unused member of at_dma_chan
dmaengine: at_hdmac: Rename "chan_common" to "dma_chan"
dmaengine: at_hdmac: Rename "dma_common" to "dma_device"
dmaengine: at_hdmac: Use bitfield access macros
dmaengine: at_hdmac: Keep register definitions and structures private to at_hdmac.c
dmaengine: at_hdmac: Set include entries in alphabetic order
dmaengine: at_hdmac: Use pm_ptr()
dmaengine: at_hdmac: Use devm_clk_get()
dmaengine: at_hdmac: Use devm_platform_ioremap_resource
dmaengine: at_hdmac: Use devm_kzalloc() and struct_size()
dmaengine: at_hdmac: Introduce atc_get_llis_residue()
dmaengine: at_hdmac: s/atc_get_bytes_left/atc_get_residue
dmaengine: at_hdmac: Pass residue by address to avoid unnecessary implicit casts
...
Pull iommufd implementation from Jason Gunthorpe:
"iommufd is the user API to control the IOMMU subsystem as it relates
to managing IO page tables that point at user space memory.
It takes over from drivers/vfio/vfio_iommu_type1.c (aka the VFIO
container) which is the VFIO specific interface for a similar idea.
We see a broad need for extended features, some being highly IOMMU
device specific:
- Binding iommu_domain's to PASID/SSID
- Userspace IO page tables, for ARM, x86 and S390
- Kernel bypassed invalidation of user page tables
- Re-use of the KVM page table in the IOMMU
- Dirty page tracking in the IOMMU
- Runtime Increase/Decrease of IOPTE size
- PRI support with faults resolved in userspace
Many of these HW features exist to support VM use cases - for instance
the combination of PASID, PRI and Userspace IO Page Tables allows an
implementation of DMA Shared Virtual Addressing (vSVA) within a guest.
Dirty tracking enables VM live migration with SRIOV devices and PASID
support allows creating "scalable IOV" devices, among other things.
As these features are fundamental to a VM platform they need to be
uniformly exposed to all the driver families that do DMA into VMs,
which is currently VFIO and VDPA"
For more background, see the extended explanations in Jason's pull request:
https://lore.kernel.org/lkml/Y5dzTU8dlmXTbzoJ@nvidia.com/
* tag 'for-linus-iommufd' of git://git.kernel.org/pub/scm/linux/kernel/git/jgg/iommufd: (62 commits)
iommufd: Change the order of MSI setup
iommufd: Improve a few unclear bits of code
iommufd: Fix comment typos
vfio: Move vfio group specific code into group.c
vfio: Refactor dma APIs for emulated devices
vfio: Wrap vfio group module init/clean code into helpers
vfio: Refactor vfio_device open and close
vfio: Make vfio_device_open() truly device specific
vfio: Swap order of vfio_device_container_register() and open_device()
vfio: Set device->group in helper function
vfio: Create wrappers for group register/unregister
vfio: Move the sanity check of the group to vfio_create_group()
vfio: Simplify vfio_create_group()
iommufd: Allow iommufd to supply /dev/vfio/vfio
vfio: Make vfio_container optionally compiled
vfio: Move container related MODULE_ALIAS statements into container.c
vfio-iommufd: Support iommufd for emulated VFIO devices
vfio-iommufd: Support iommufd for physical VFIO devices
vfio-iommufd: Allow iommufd to be used in place of a container fd
vfio: Use IOMMU_CAP_ENFORCE_CACHE_COHERENCY for vfio_file_enforced_coherent()
...
Resolve conflicts in drivers/vfio/vfio_main.c by using the iommufd version.
The rc fix was done a different way when iommufd patches reworked this
code.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
This reverts cccc46ae36 ("dmaengine: remove s3c24xx driver") as it
causes a regression due to a missing header.
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add support for dma-channel-mask so that only the specified channels
are used. This helps to reserve some channels for the firmware.
This was initially achieved by limiting the channel number to 31 in
the driver and adjusting the register address to skip channel0, which
was reserved for firmware. This is wrong and does not align with
the hardware.
Now, with this change, the driver aligns more closely with the actual
hardware, which has 32 channels. But this implies a break in the
ABI, and the device tree needs to be updated along with this change for
the driver to pick up the right interrupt corresponding to the channel.
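A hedged sketch of consuming the property (the helper is hypothetical;
of_property_read_u32() and the generic dma-channel-mask binding are
real): read the mask, default to all channels usable, and set up only
the channels whose bit is set.

  #include <linux/bits.h>
  #include <linux/of.h>
  #include <linux/types.h>

  static u32 example_read_chan_mask(struct device_node *np, unsigned int nr_channels)
  {
          u32 mask = GENMASK(nr_channels - 1, 0); /* default: all channels usable */

          /* Leaves the default in place if the property is absent. */
          of_property_read_u32(np, "dma-channel-mask", &mask);
          return mask;
  }

  /* A channel i is then set up only if (mask & BIT(i)) is non-zero. */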
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/all/Y2EFoG1H9YpfxRjs@orome/
Signed-off-by: Akhil R <akhilrajeev@nvidia.com>
Link: https://lore.kernel.org/r/20221110171748.40304-4-akhilrajeev@nvidia.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Convert the driver to use the core virt-dma. The driver will be easier to
maintain as it uses the list handling and the tasklet from virt-dma.
With the conversion, replace the election of a new transfer in the tasklet
with the election of the new transfer in the interrupt handler. With this
we get a shorter idle window, as we remove the scheduling latency of the
tasklet. I chose to do this while doing the conversion to virt-dma,
because if I made a prerequisite patch with the new transfer election in
the irq handler, I would have to duplicate some virt-dma code in the
at_hdmac driver that would end up being removed by the virt-dma conversion
anyway. So do it in a single step.
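A hedged sketch of the virt-dma pattern described above (the channel
type and the hardware-programming step are hypothetical stand-ins for
the at_hdmac code): the interrupt path completes the finished vdesc and
immediately elects the next issued descriptor, instead of deferring that
work to a tasklet.

  #include <linux/dmaengine.h>
  #include <linux/list.h>
  #include "virt-dma.h"           /* in-tree drivers/dma helper header */

  struct example_chan {
          struct virt_dma_chan vc;
          struct virt_dma_desc *active;
  };

  /* Called with ec->vc.lock held. */
  static void example_start_next(struct example_chan *ec)
  {
          struct virt_dma_desc *vd = vchan_next_desc(&ec->vc);

          if (vd) {
                  list_del(&vd->node);
                  ec->active = vd;
                  /* ... program the hardware from vd ... */
          } else {
                  ec->active = NULL;
          }
  }

  static void example_handle_xfer_done(struct example_chan *ec)
  {
          unsigned long flags;

          spin_lock_irqsave(&ec->vc.lock, flags);
          if (ec->active) {
                  /* Queues the client callback via the virt-dma tasklet. */
                  vchan_cookie_complete(ec->active);
                  /* Elect the next transfer right here, in the irq path. */
                  example_start_next(ec);
          }
          spin_unlock_irqrestore(&ec->vc.lock, flags);
  }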
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Link: https://lore.kernel.org/r/20221025090306.297886-1-tudor.ambarus@microchip.com
Link: https://lore.kernel.org/r/20221025090306.297886-33-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>