commit 6744a030d81e456883bfbb627ac1f30465c1a989 upstream.
On driver unload, any pending descriptors are flushed and pending
DMA descriptors are explicitly completed:
idxd_dmaengine_drv_remove() ->
drv_disable_wq() ->
idxd_wq_free_irq() ->
idxd_flush_pending_descs() ->
idxd_dma_complete_txd()
With this done during driver unload, any remaining descriptor is likely
stuck and can be dropped. Even so, the descriptor may still have a
callback set that is no longer accessible. An example of such a problem
is when dmatest fails and the dmatest module is unloaded. The failure
leaves descriptors with dma_async_tx_descriptor::callback pointing to
code that no longer exists. This causes a page fault as below when the
IDXD driver is unloaded and attempts to run the callback:
BUG: unable to handle page fault for address: ffffffffc0665190
#PF: supervisor instruction fetch in kernel mode
#PF: error_code(0x0010) - not-present page
Fix this by clearing the callback pointers on the transmit
descriptors only when the workqueue is disabled.
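As a rough illustration of the idea (not the actual patch; the
descriptor layout and list name below are assumptions loosely modelled
on the idxd driver), clearing the callbacks before the flush could look
like this:

  #include <linux/list.h>
  #include <linux/dmaengine.h>

  /* Hypothetical descriptor, loosely modelled on struct idxd_desc. */
  struct example_desc {
          struct dma_async_tx_descriptor txd;
          struct list_head list;
  };

  /*
   * Drop stale callbacks on descriptors that are about to be flushed so
   * a late completion cannot jump into code from an unloaded module.
   */
  static void example_clear_stale_callbacks(struct list_head *pending)
  {
          struct example_desc *desc;

          list_for_each_entry(desc, pending, list) {
                  desc->txd.callback = NULL;
                  desc->txd.callback_result = NULL;
          }
  }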
Fixes: 403a2e236538 ("dmaengine: idxd: change MSIX allocation based on per wq activation")
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/37d06b772aa7f8863ca50f90930ea2fd80b38fc3.1670452419.git.reinette.chatre@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1beeec45f9ac31eba52478379f70a5fa9c2ad005 upstream.
On driver unload, any pending descriptors are flushed at the
time the interrupt is freed:
idxd_dmaengine_drv_remove() ->
drv_disable_wq() ->
idxd_wq_free_irq() ->
idxd_flush_pending_descs().
If there are any descriptors present that need to be flushed this
flow triggers a "not present" page fault as below:
BUG: unable to handle page fault for address: ff391c97c70c9040
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
The address that triggers the fault is the address of the
descriptor that was freed moments earlier via:
drv_disable_wq()->idxd_wq_free_resources()
Fix the use after free by freeing the descriptors after any possible
usage. This is done after idxd_wq_reset() to ensure that the memory
remains accessible during possible completion writes by the device.
Fixes: 63c14ae6c161 ("dmaengine: idxd: refactor wq driver enable/disable operations")
Suggested-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/6c4657d9cff0a0a00501a7b928297ac966e9ec9d.1670452419.git.reinette.chatre@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b51b75f0604f17c0f6f3b6f68f1a521a5cc6b04f upstream.
The workqueue is enabled when the appropriate driver is loaded and
disabled when the driver is removed. When the driver is removed, it
assumes that the workqueue was enabled successfully and proceeds to
free allocations made during workqueue enabling.
Failure during workqueue enabling does not prevent the driver from
being loaded. This is because the error path within drv_enable_wq()
returns success unless a second failure is encountered during the
error path. By returning success, the driver can be loaded even though
the workqueue could not be enabled, and driver remove then attempts to
free allocations that were never made.
Some examples of problematic flows:
(a)
idxd_dmaengine_drv_probe() -> drv_enable_wq() -> idxd_wq_request_irq():
In the above flow, if idxd_wq_request_irq() fails, then
idxd_wq_unmap_portal() is called on the error exit path, but
drv_enable_wq() returns 0 because idxd_wq_disable() succeeds. The
driver is thus loaded successfully.
idxd_dmaengine_drv_remove()->drv_disable_wq()->idxd_wq_unmap_portal()
The above flow on driver unload triggers the WARN in devm_iounmap()
because the device resource has already been removed during the error
path of drv_enable_wq().
(b)
idxd_dmaengine_drv_probe() -> drv_enable_wq() -> idxd_wq_request_irq():
In the above flow, if idxd_wq_request_irq() fails, then
idxd_wq_init_percpu_ref() is never called to initialize the percpu
counter, yet the driver loads successfully because drv_enable_wq()
returns 0.
idxd_dmaengine_drv_remove()->__idxd_wq_quiesce()->percpu_ref_kill():
The above flow on driver unload triggers a BUG when attempting to drop
the initial ref of the uninitialized percpu ref:
BUG: kernel NULL pointer dereference, address: 0000000000000010
Fix the drv_enable_wq() error path by returning the original error that
indicates failure of workqueue enabling. This ensures that the probe
fails when an error is encountered and the driver remove paths are only
attempted when the workqueue was enabled successfully.
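A minimal sketch of that error-path rule (names are illustrative, not
the actual drv_enable_wq() code): the first failure's return code is
kept, and cleanup calls on the error path must not overwrite it with
success.

  struct example_wq;

  int example_request_irq(struct example_wq *wq);
  void example_unmap_portal(struct example_wq *wq);

  static int example_enable(struct example_wq *wq)
  {
          int rc;

          rc = example_request_irq(wq);
          if (rc < 0)
                  goto err_irq;

          return 0;

  err_irq:
          example_unmap_portal(wq);  /* clean up, but keep the original rc */
          return rc;
  }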
Fixes: 1f2bb40337f0 ("dmaengine: idxd: move wq_enable() to device.c")
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/e8d8116e5efa0fd14fadc5adae6ffd319f0e5ff1.1670452419.git.reinette.chatre@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 96b3bb18f6cbe259ef4e0bed3135911b7e8d2af5 upstream.
ldma_cfg_init() parses the DT to retrieve certain configs. However,
it is called before ldma_dma_init_vXX(), which performs some
initialization of the channel configs and thus incorrectly overwrites
configs that are declared in the DT.
To fix this, move the DT parsing after that initialization. The
function is also renamed to better represent what it does.
Fixes: 32d31c79a1a4 ("dmaengine: Add Intel LGM SoC DMA support.")
Signed-off-by: Peter Harliman Liem <pliem@maxlinear.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/afef6fc1ed20098b684e0d53737d69faf63c125f.1672887183.git.pliem@maxlinear.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 9a8ddb35a9d5d3ad76784a012459b256a9d7de7e ]
In the current code, the following sysfs attributes are exposed to the
user to show or update the values:
max_read_buffers (max_tokens)
read_buffer_limit (token_limit)
group/read_buffers_allowed (group/tokens_allowed)
group/read_buffers_reserved (group/tokens_reserved)
group/use_read_buffer_limit (group/use_token_limit)
From the Intel IAA spec [1], Intel IAA does not support Read Buffer
allocation control, so these sysfs attributes should not be supported
on the IAA device.
Fix this issue by making these sysfs attributes invisible through the
is_visible() filter when the device is IAA.
Add a description in the ABI documentation mentioning that these
attributes are not visible when the device does not support Read Buffer
allocation control.
[1]: https://cdrdv2.intel.com/v1/dl/getContent/721858
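A minimal sketch of the is_visible() mechanism (generic names, not the
actual idxd sysfs code): returning 0 from the callback hides an
attribute the hardware does not implement.

  #include <linux/sysfs.h>
  #include <linux/kobject.h>

  /* Hypothetical flag standing in for "this device is an IAA device". */
  static bool example_is_iaa;

  static struct attribute example_max_read_buffers_attr = {
          .name = "max_read_buffers",
          .mode = 0644,
  };

  static struct attribute *example_attrs[] = {
          &example_max_read_buffers_attr,
          NULL,
  };

  /* Return 0 to hide the attribute; otherwise keep its normal mode. */
  static umode_t example_attr_visible(struct kobject *kobj,
                                      struct attribute *attr, int n)
  {
          if (example_is_iaa && attr == &example_max_read_buffers_attr)
                  return 0;
          return attr->mode;
  }

  static const struct attribute_group example_group = {
          .attrs = example_attrs,
          .is_visible = example_attr_visible,
  };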
Fixes: fde212e44f45 ("dmaengine: idxd: deprecate token sysfs attributes for read buffers")
Fixes: c52ca478233c ("dmaengine: idxd: add configuration component of driver")
Signed-off-by: Xiaochen Shen <xiaochen.shen@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20221022074949.11719-1-xiaochen.shen@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
[ Upstream commit 568aa6dd641f63166bb60d769e256789b3ac42d4 ]
There's a previously unknown part of the controller interface: We have
to assign SRAM carveouts to channels to store their in-flight samples
in. So, obtain the size of the SRAM from a read-only register and divide
it into 2K blocks for allocation to channels. The FIFO depths we
configure will always fit into 2K.
(This fixes audio artifacts during simultaneous playback/capture on
multiple channels -- which, looking back, is fully accounted for by the
caches in the DMA controller having overlapped in memory.)
Fixes: b127315d9a78 ("dmaengine: apple-admac: Add Apple ADMAC driver")
Signed-off-by: Martin Povišer <povik+lin@cutebit.org>
Link: https://lore.kernel.org/r/20221019132324.8585-2-povik+lin@cutebit.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
[ Upstream commit 91123b37e8a99cc489d5bdcfebd1c25f29382504 ]
In the current code, the dev.max_batch_size and wq.max_batch_size
attributes in sysfs are exposed to the user to show or update the
values.
From the Intel IAA spec [1], Intel IAA does not support batch
processing, so these sysfs attributes should not be supported on the
IAA device.
Fix this issue by making the max_batch_size attributes invisible in
sysfs through the is_visible() filter when the device is IAA.
Add a description in the ABI documentation mentioning that the
attributes are not visible when the device does not support batch
processing.
[1]: https://cdrdv2.intel.com/v1/dl/getContent/721858
Fixes: e7184b159dd3 ("dmaengine: idxd: add support for configurable max wq batch size")
Fixes: c52ca478233c ("dmaengine: idxd: add configuration component of driver")
Signed-off-by: Xiaochen Shen <xiaochen.shen@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Link: https://lore.kernel.org/r/20220930201528.18621-3-xiaochen.shen@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
atc_complete_all() had concurrency bugs, thus remove it:
1/ atc_complete_all() in its entirety was buggy: when the atchan->queue
list (the one that contains descriptors that are not yet issued to the
hardware) contained descriptors, it fired just the first one from
atchan->queue, but moved all the descriptors from atchan->queue to
atchan->active_list and considered them all as fired. This could result
in calling the completion of a descriptor that was not yet issued to
the hardware.
2/ when in the tasklet, at atc_advance_work() time, atchan->active_list
was queried without holding the channel lock. This can result in
atchan->active_list concurrency problems between the tasklet and
issue_pending().
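As a generic sketch of the locking rule in point 2/ (illustrative
names, not the at_hdmac code), any query of the channel lists from the
tasklet has to take the same channel lock as issue_pending():

  #include <linux/spinlock.h>
  #include <linux/list.h>

  struct example_chan {
          spinlock_t lock;
          struct list_head active_list;
  };

  /* Check the active list only while holding the channel lock, so the
   * tasklet cannot race with issue_pending() moving descriptors around. */
  static bool example_active_list_empty(struct example_chan *chan)
  {
          unsigned long flags;
          bool empty;

          spin_lock_irqsave(&chan->lock, flags);
          empty = list_empty(&chan->active_list);
          spin_unlock_irqrestore(&chan->lock, flags);

          return empty;
  }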
Fixes: dc78baa2b90b ("dmaengine: at_hdmac: new driver for the Atmel AHB DMA Controller")
Reported-by: Peter Rosin <peda@axentia.se>
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/lkml/13c6c9a2-6db5-c3bf-349b-4c127ad3496a@axentia.se/
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Link: https://lore.kernel.org/r/20221025090306.297886-1-tudor.ambarus@microchip.com
Link: https://lore.kernel.org/r/20221025090306.297886-8-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
tx_submit is supposed to push the current transaction descriptor to a
pending queue, waiting for issue_pending() to be called. issue_pending()
must start the transfer, not tx_submit(), thus remove atc_dostart() from
atc_tx_submit(). Clients of at_xdmac that assume that tx_submit()
starts the transfer must be updated to call dma_async_issue_pending()
if they fail to call it.
The vdbg print was moved to after the lock is released. It is desirable
to do prints without the lock held if possible, and because the if
statement disappears there is no reason to do the print while holding
the lock.
Fixes: dc78baa2b90b ("dmaengine: at_hdmac: new driver for the Atmel AHB DMA Controller")
Reported-by: Peter Rosin <peda@axentia.se>
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/lkml/13c6c9a2-6db5-c3bf-349b-4c127ad3496a@axentia.se/
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Link: https://lore.kernel.org/r/20221025090306.297886-1-tudor.ambarus@microchip.com
Link: https://lore.kernel.org/r/20221025090306.297886-3-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When disabling a dma channel, a TCF flag is set and, as TCIE is
enabled, an interrupt is raised.
On a busy system, the interrupt may be delayed, and the user can ask
for dmaengine_resume while the stm32-dma driver has not yet completed
the pause (backing up registers so the state can be restored in
resume).
To avoid such a case, instead of waiting for the interrupt to back up
the registers, do it just after disabling the channel, and discard the
Transfer Complete interrupt in case the channel is paused.
Fixes: 099a9a94be0e ("dmaengine: stm32-dma: add device_pause/device_resume support")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20221024083611.132588-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If device_register() fails, put_device() should be called to give up
the reference, so that the name allocated in dev_set_name() can be
freed in the callback function kobject_cleanup().
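The pattern described above, sketched generically (this is the standard
driver-core rule, not the k3-udma-glue code itself):

  #include <linux/device.h>

  static int example_add_device(struct device *dev)
  {
          int ret;

          ret = device_register(dev);
          if (ret) {
                  /* Give up the reference; kobject_cleanup() frees the
                   * name that dev_set_name() allocated. */
                  put_device(dev);
                  return ret;
          }
          return 0;
  }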
Fixes: 5b65781d06ea ("dmaengine: ti: k3-udma-glue: Add support for K3 PKTDMA")
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Link: https://lore.kernel.org/r/20221020062827.2914148-1-yangyingliang@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
A clk_prepare_enable() call in the probe is not balanced by a corresponding
clk_disable_unprepare() in the remove function.
Add the missing call.
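A minimal sketch of the restored balance (generic names, not the actual
mv_xor_v2 code): every clk_prepare_enable() from probe gets a matching
clk_disable_unprepare() in remove.

  #include <linux/clk.h>
  #include <linux/platform_device.h>

  struct example_priv {
          struct clk *reg_clk;    /* enabled with clk_prepare_enable() in probe */
  };

  static int example_remove(struct platform_device *pdev)
  {
          struct example_priv *priv = platform_get_drvdata(pdev);

          clk_disable_unprepare(priv->reg_clk);   /* the missing call */
          return 0;
  }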
Fixes: 3cd2c313f1d6 ("dmaengine: mv_xor_v2: Fix clock resource by adding a register clock")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/r/e9e3837a680c9bd2438e4db2b83270c6c052d005.1666640987.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The of_xlate callback is supposed to return the channel after already
having 'grabbed' it for private use, so fill that in.
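A rough sketch of such an of_xlate callback (generic lookup, not the
actual admac code), using dma_get_slave_channel() to claim the channel
before handing it back:

  #include <linux/dmaengine.h>
  #include <linux/of_dma.h>

  static struct dma_chan *example_of_xlate(struct of_phandle_args *dma_spec,
                                           struct of_dma *ofdma)
  {
          struct dma_device *dma = ofdma->of_dma_data;
          struct dma_chan *chan;

          /* Find the requested channel and grab it for private use. */
          list_for_each_entry(chan, &dma->channels, device_node)
                  if (chan->chan_id == dma_spec->args[0])
                          return dma_get_slave_channel(chan);

          return NULL;
  }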
Fixes: b127315d9a78 ("dmaengine: apple-admac: Add Apple ADMAC driver")
Signed-off-by: Martin Povišer <povik+lin@cutebit.org>
Link: https://lore.kernel.org/r/20221019132324.8585-1-povik+lin@cutebit.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When IDXD is not configurable, its WQ, engine, and group
configurations cannot be changed. But it can still be disabled, and its
state should be set to disabled regardless of whether it is
configurable or not.
Fix this by setting the device state to IDXD_DEV_DISABLED for a
read-only device as well in idxd_device_clear_state().
Fixes: cf4ac3fef338 ("dmaengine: idxd: fix lockdep warning on device driver removal")
Signed-off-by: Fengqian Gao <fengqian.gao@intel.com>
Reviewed-by: Xiaochen Shen <xiaochen.shen@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Link: https://lore.kernel.org/r/20220930032835.2290-1-fengqian.gao@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
From the Intel IAA spec [1], Intel IAA does not support batch
processing.
Two batch-related default values for IAA are incorrect in the current
code:
(1) The max batch size of the device is set during device
initialization, which indicates that batch is supported. It should
always be 0 on IAA.
(2) The max batch size of the work queue is set to WQ_DEFAULT_MAX_BATCH
(32) as the default value during work queue setup and cleanup,
regardless of whether the device is Intel DSA or IAA. It should always
be 0 on IAA.
Fix the issues by setting the max batch size of the device and the max
batch size of the work queue to 0 on an IAA device, which means batch
is not supported.
[1]: https://cdrdv2.intel.com/v1/dl/getContent/721858
Fixes: 23084545dbb0 ("dmaengine: idxd: set max_xfer and max_batch for RO device")
Fixes: 92452a72ebdf ("dmaengine: idxd: set defaults for wq configs")
Fixes: bfe1d56091c1 ("dmaengine: idxd: Init and probe for Intel data accelerators")
Signed-off-by: Xiaochen Shen <xiaochen.shen@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Link: https://lore.kernel.org/r/20220930201528.18621-2-xiaochen.shen@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The first IRQ is required, but IRQs 1 through (nb_phy_chans - 1) are
optional, because on some platforms (e.g. PXA168) there is a single IRQ
shared between all channels.
This change inhibits a flood of "IRQ index # not found" messages at
startup. Tested on a PXA168-based device.
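A minimal sketch of that split (generic names): the first IRQ uses
platform_get_irq(), which logs and fails if missing, while the others
use platform_get_irq_optional(), which stays silent.

  #include <linux/platform_device.h>

  static int example_get_irqs(struct platform_device *pdev, int *irq, int nr)
  {
          int i;

          irq[0] = platform_get_irq(pdev, 0);     /* mandatory, logs on error */
          if (irq[0] < 0)
                  return irq[0];

          for (i = 1; i < nr; i++)
                  irq[i] = platform_get_irq_optional(pdev, i);  /* may be absent */

          return 0;
  }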
Fixes: 7723f4c5ecdb ("driver core: platform: Add an error message to platform_get_irq*()")
Signed-off-by: Doug Brown <doug@schmorgal.com>
Link: https://lore.kernel.org/r/20220906000709.52705-1-doug@schmorgal.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When the idxd_user_drv driver is bound to a Work Queue (WQ) device
without an IOMMU, or with IOMMU Passthrough without Shared Virtual
Addressing (SVA), the application gains direct access to physical
memory via the device by programming a physical address into a
submitted descriptor. This allows direct userspace read and write
access to arbitrary physical memory. This is inconsistent with the
security goals of a good kernel API.
Unlike the vfio_pci driver, the IDXD char device driver does not
provide any way to pin user pages and translate the address from user
VA to IOVA or PA without IOMMU SVA. Therefore the application has no
way to instruct the device to perform DMA functions. This makes the
char device unusable for normal applications.
Since a user type WQ without SVA cannot be used by normal applications
and presents a security issue, bind the idxd_user_drv driver and enable
a user type WQ only when SVA is enabled (i.e. user PASID is enabled).
Fixes: 448c3de8ac83 ("dmaengine: idxd: create user driver for wq 'device'")
Cc: stable@vger.kernel.org
Suggested-by: Arjan Van De Ven <arjan.van.de.ven@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20221014222541.3912195-1-fenghua.yu@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
A new bool, m2m_hw, has been added at the end of the
stm32_mdma_chan_config struct to support STM32 DMA MDMA chaining.
m2m_hw is set to true in stm32_mdma_slave_config() if peripheral_size
is set, but it is never initialized to false.
To cover this case, and any further additions to the structure, memset
it to 0 before using it.
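A small sketch of the pattern (hypothetical struct, not the real
stm32_mdma_chan_config): zeroing the whole config before filling it
guarantees that m2m_hw, and anything added later, starts out false.

  #include <linux/string.h>
  #include <linux/types.h>

  struct example_chan_config {
          u32 request;
          bool m2m_hw;    /* newest member; may be left untouched by callers */
  };

  static void example_fill_config(struct example_chan_config *cfg, u32 request)
  {
          memset(cfg, 0, sizeof(*cfg));   /* every member starts at 0/false */
          cfg->request = request;
  }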
Fixes: 696874322771 ("dmaengine: stm32-mdma: add support to be triggered by STM32 DMA")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20221017131413.202567-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The prandom_bytes() function has been a deprecated inline wrapper around
get_random_bytes() for several releases now, and compiles down to the
exact same code. Replace the deprecated wrapper with a direct call to
the real function. This was done as a basic find and replace.
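The replacement is mechanical; a sketch of the call that remains after
the conversion:

  #include <linux/random.h>
  #include <linux/types.h>

  static void example_fill_buffer(u8 *buf, size_t len)
  {
          /* Formerly prandom_bytes(buf, len); both compile to the same code. */
          get_random_bytes(buf, len);
  }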
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> # powerpc
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
New Support:
- AngeloGioacchino Del Regno added support for MT6795 SoC dma controller
- Christian Marangi updated qcom-adm controller binding to yaml
- Geert Uytterhoeven added yaml binding for Renesas r8a779g0 dma controller
- Luca Weiss added support for Qualcomm SM6350 GPI dma controller
Updates:
- Amelie Delaunay provided STM32 DMA-MDMA chaining support
- Andy Shevchenko updated hsu driver to use managed resources
- Dave Jiang & Jerry Snitselaar provided usual round of idxd driver updates
- Janne Grunau & Martin Povišer updated apple dma driver for iommu and pd
properties and removed use of devres for irqs
- Swati Agarwal added device_synchronize support for Xilinx zynqmp driver
Merge tag 'dmaengine-6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine
Pull dmaengine updates from Vinod Koul:
"New Support:
- MT6795 SoC dma controller (AngeloGioacchino Del Regno)
- qcom-adm controller yaml binding (Christian Marangi)
- Renesas r8a779g0 dma controller yaml binding (Geert Uytterhoeven)
- Qualcomm SM6350 GPI dma controller (Luca Weiss)
Updates:
- STM32 DMA-MDMA chaining support (Amelie Delaunay)
- make hsu driver use managed resources (Andy Shevchenko)
- the usual round of idxd driver updates (Dave Jiang & Jerry
Snitselaar)
- apple dma driver iommu and pd properties and remove use
of devres for irqs (Janne Grunau & Martin Povišer)
- device_synchronize support for Xilinx zynqmp driver (Swati
Agarwal)"
* tag 'dmaengine-6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (60 commits)
dmaengine: ioat: remove unused declarations in dma.h
dmaengine: ti: k3-udma: Respond TX done if DMA_PREP_INTERRUPT is not requested
dmaengine: zynqmp_dma: Add device_synchronize support
dt-bindings: dma: add additional pbus reset to qcom,adm
dt-bindings: dma: rework qcom,adm Documentation to yaml schema
dt-bindings: dma: apple,admac: Add iommus and power-domains properties
dmaengine: dw-edma: Remove runtime PM support
dmaengine: idxd: add configuration for concurrent batch descriptor processing
dmaengine: idxd: add configuration for concurrent work descriptor processing
dmaengine: idxd: add WQ operation cap restriction support
dmanegine: idxd: reformat opcap output to match bitmap_parse() input
dmaengine: idxd: convert ats_dis to a wq flag
dmaengine: ioat: stop mod_timer from resurrecting deleted timer in __cleanup()
dmaengine: qcom-adm: fix wrong calling convention for prep_slave_sg
dmaengine: qcom-adm: fix wrong sizeof config in slave_config
dmaengine: ti: k3-psil: add additional TX threads for j721e
dmaengine: ti: k3-psil: add additional TX threads for j7200
dmaengine: apple-admac: Trigger shared reset
dmaengine: apple-admac: Do not use devres for IRQs
dmaengine: ti: edma: Remove some unused functions
...
ioat_ring_alloc_order and ioat_ring_max_alloc_order have
been removed since commit cd60cd96137f ("dmaengine: IOATDMA:
Removing descriptor ring reshape"), so remove them.
Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20220911091817.3214271-1-cuigaosheng1@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If the DMA consumer driver does not expect the TX-done callback, then
we need not perform the channel RT byte counter calculations to
estimate completion, and can instead report completion on the first
attempt. This assumes that a consumer that did not request
DMA_PREP_INTERRUPT has its own mechanism for detecting TX completion;
for example, the MCSPI EOW interrupt can be used as the TX completion
signal for a SPI transaction.
Signed-off-by: Vaishnav Achath <vaishnav.a@ti.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Link: https://lore.kernel.org/r/20220914110049.5842-1-vaishnav.a@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
A device_synchronize implementation is required to synchronize proper
termination of current transfers, so that memory resources are not
freed while still in use.
Implement this callback in the driver so that the framework can use it
(in dmaengine_terminate_sync()/dmaengine_synchronize()).
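A typical shape for such a callback, sketched with generic names (the
zynqmp driver's version may differ): it waits for the completion
tasklet so no callback can run after dmaengine_terminate_sync()
returns.

  #include <linux/dmaengine.h>
  #include <linux/interrupt.h>

  struct example_chan {
          struct dma_chan common;
          struct tasklet_struct tasklet;
  };

  static void example_device_synchronize(struct dma_chan *dchan)
  {
          struct example_chan *chan =
                  container_of(dchan, struct example_chan, common);

          /* Wait until any scheduled completion work has finished. */
          tasklet_kill(&chan->tasklet);
  }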
Signed-off-by: Swati Agarwal <swati.agarwal@amd.com>
Link: https://lore.kernel.org/r/20220915090516.5812-1-swati.agarwal@amd.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently, the dw-edma driver enables the runtime_pm for parent device
(chip->dev) and increments/decrements the refcount during alloc/free
chan resources callbacks.
This leads to a problem when the eDMA driver has been probed, but the
channels were not used. This scenario can happen when the DW PCIe driver
probes eDMA driver successfully, but the PCI EPF driver decides not to
use eDMA channels and use iATU instead for PCI transfers.
In this case, the underlying device would be runtime suspended due to
pm_runtime_enable() in dw_edma_probe() and the PCI EPF driver would have
no knowledge of it.
Ideally, the eDMA driver should not be the one doing the runtime PM of
the parent device. The responsibility should instead belong to the client
drivers like PCI EPF.
So let's remove the runtime PM support from eDMA driver.
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Frank Li <Frank.Li@nxp.com>
Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Link: https://lore.kernel.org/r/20220910054700.12205-1-manivannan.sadhasivam@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add a sysfs knob to allow control of the number of batch descriptors
that can be concurrently processed by an engine in the group, as a
fraction of the Maximum Work Descriptors in Progress value specified in
the ENGCAP register.
This control knob is part of the toggles for QoS control.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Co-developed-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: https://lore.kernel.org/r/20220917161222.2835172-6-fenghua.yu@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add a sysfs knob to allow control of the number of work descriptors
that can be concurrently processed by an engine in the group, as a
fraction of the Maximum Work Descriptors in Progress value specified in
the ENGCAP register.
This control knob is part of the toggles for QoS control.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Co-developed-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: https://lore.kernel.org/r/20220917161222.2835172-5-fenghua.yu@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DSA 2.0 adds the capability of configuring DMA ops on a per-workqueue
basis. This means that certain ops can be disabled by the system
administrator for certain WQs. By default, all ops are available. A
bitmap is used to store the ops because the total op size is 256 bits,
and it is more convenient to use a range list to specify which bits are
enabled.
One use case for this is VM migration between different iterations of
the device: the newer ops are disabled in order to allow a guest to
migrate to a host that only supports the older ops. Another use case is
restricting the WQ to certain operations for QoS of performance.
An ops_config sysfs attribute is added per WQ. It is only usable when
the ops_config bit is set in the WQ_CAP register, so this attribute
returns -EOPNOTSUPP on DSA 1.x devices. The expected input is a range
list of the bits for the operations the WQ supports.
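A minimal sketch of how such a range-list input maps onto the 256-bit
bitmap (generic names, not the idxd store handler), using the kernel's
bitmap_parselist() helper:

  #include <linux/bitmap.h>

  #define EXAMPLE_OPS_BITS 256

  /* Parse a range list such as "0-31,64" into the per-WQ op-enable bitmap. */
  static int example_parse_ops_config(const char *buf, unsigned long *ops_bmap)
  {
          return bitmap_parselist(buf, ops_bmap, EXAMPLE_OPS_BITS);
  }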
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Co-developed-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: https://lore.kernel.org/r/20220917161222.2835172-4-fenghua.yu@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
To make input and output consistent, and in preparation for the per-WQ
operation configuration support, change the opcap display output to
match the input expected by the bitmap_parse() helper function. The
output is a bitmap whose field width is the number of bits, using the
%*pb format specifier of the printk() family.
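A minimal sketch of the output side (generic names): with %*pb the
field width argument is the number of bits, and sysfs_emit() prints the
bitmap in exactly the form bitmap_parse() accepts.

  #include <linux/sysfs.h>
  #include <linux/bitmap.h>

  #define EXAMPLE_OPCAP_BITS 256

  static ssize_t example_show_opcap(const unsigned long *opcap_bmap, char *buf)
  {
          return sysfs_emit(buf, "%*pb\n", EXAMPLE_OPCAP_BITS, opcap_bmap);
  }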
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Co-developed-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: https://lore.kernel.org/r/20220917161222.2835172-3-fenghua.yu@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
A user reports that the timer event reports the channel as halted, but
no error is observed in the CHANERR register. The driver had finished
its self-test and released the channel resources. Debugging shows that
__cleanup() can call mod_timer() after the timer has been deleted and
thus resurrect the timer. While harmless, it causes a spurious error
message to be emitted.
Use mod_timer_pending() to prevent the deleted timer from being
resurrected.
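The key property, sketched generically: mod_timer_pending() only rearms
a timer that is still pending, so a timer that was already deleted
stays deleted.

  #include <linux/timer.h>
  #include <linux/jiffies.h>

  static void example_rearm_watchdog(struct timer_list *timer, unsigned long delay)
  {
          /* Unlike mod_timer(), this is a no-op if the timer is not pending,
           * so it cannot resurrect a timer that del_timer() already removed. */
          mod_timer_pending(timer, jiffies + delay);
  }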
Fixes: 3372de5813e4 ("dmaengine: ioatdma: removal of dma_v3.c and relevant ioat3 references")
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/166360672197.3851724.17040290563764838369.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The calling convention for prep_slave_sg is to return NULL on error
and log an error to the system. qcom-adm instead provides an error
pointer when an error occurs. This indirectly causes a kernel panic,
for example for the nandc driver, which only checks whether the pointer
returned by device_prep_slave_sg is non-NULL. Returning an error
pointer makes nandc think that device_prep_slave_sg completed correctly
and makes the kernel panic later in the code.
While nandc is the one that crashes the kernel, it was pointed out that
the real problem is qcom-adm not following the calling convention for
that function.
To fix this, stop returning an error pointer and return NULL with an
error log.
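A sketch of the expected shape of the callback after the fix (generic,
not the actual qcom-adm code): on a bad request, log an error and
return NULL rather than ERR_PTR().

  #include <linux/dmaengine.h>

  static struct dma_async_tx_descriptor *
  example_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
                        unsigned int sg_len, enum dma_transfer_direction dir,
                        unsigned long flags, void *context)
  {
          if (!is_slave_direction(dir)) {
                  dev_err(chan->device->dev, "invalid dma direction\n");
                  return NULL;            /* never ERR_PTR() here */
          }

          /* ... build and return the real descriptor here ... */
          return NULL;
  }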
Fixes: 03de6b273805 ("dmaengine: qcom-adm: stop abusing slave_id config")
Fixes: 5c9f8c2dbdbe ("dmaengine: qcom: Add ADM driver")
Signed-off-by: Christian Marangi <ansuelsmth@gmail.com>
Cc: stable@vger.kernel.org # v5.11+
Link: https://lore.kernel.org/r/20220916041256.7104-1-ansuelsmth@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix the broken slave_config function, which incorrectly compares
peripheral_size with the size of the config pointer instead of the size
of the config struct. This causes the crci value to be ignored and
causes a kernel panic on any slave that uses the adm driver.
To fix this, compare against the size of the struct and NOT the size of
the pointer.
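The bug class, sketched with a hypothetical struct: sizeof(cfg) is the
size of the pointer (8 bytes on 64-bit), so the check never matches;
sizeof(*cfg) is the size of the struct.

  #include <linux/types.h>

  struct example_peripheral_config {
          u32 crci;
  };

  static bool example_config_valid(const struct example_peripheral_config *cfg,
                                   size_t peripheral_size)
  {
          /* Wrong: peripheral_size == sizeof(cfg) compares the pointer size. */
          return peripheral_size == sizeof(*cfg);
  }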
Fixes: 03de6b273805 ("dmaengine: qcom-adm: stop abusing slave_id config")
Signed-off-by: Christian Marangi <ansuelsmth@gmail.com>
Cc: stable@vger.kernel.org # v5.17+
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220915204844.3838-1-ansuelsmth@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If a reset domain is attached to the device, obtain a shared reference
to it and trigger it. Typically on a chip the ADMAC controller will
share a reset domain with the MCA peripheral.
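Roughly, using the generic reset framework (a sketch, not necessarily
the exact admac code): the optional-shared getter returns NULL when no
reset is described, and reset_control_reset() then does nothing.

  #include <linux/reset.h>
  #include <linux/err.h>
  #include <linux/device.h>

  static int example_trigger_shared_reset(struct device *dev)
  {
          struct reset_control *rst;

          /* Shared reference: other peripherals (e.g. MCA) may hold it too. */
          rst = devm_reset_control_get_optional_shared(dev, NULL);
          if (IS_ERR(rst))
                  return PTR_ERR(rst);

          return reset_control_reset(rst);        /* no-op if rst is NULL */
  }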
Signed-off-by: Martin Povišer <povik+lin@cutebit.org>
Link: https://lore.kernel.org/r/20220918095845.68860-5-povik+lin@cutebit.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>