Add a load_device_defaults() function pointer to struct
idxd_driver_data, which if defined, will be called when an idxd device
is probed and will allow the idxd device to be configured with default
values.
The load_device_defaults() function is passed an idxd device to work
with to set specific device attributes.
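For illustration, the hook might look roughly like this (a minimal sketch;
the callback name and argument come from the description above, the
surrounding code is an assumption):
	struct idxd_driver_data {
		/* ... existing fields ... */
		int (*load_device_defaults)(struct idxd_device *idxd);
	};

	/* in device probe, if the driver data defines the hook */
	if (idxd->data->load_device_defaults) {
		rc = idxd->data->load_device_defaults(idxd);
		if (rc)
			dev_warn(dev, "loading device defaults failed\n");
	}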
Also add a load_device_defaults() implementation for IAA devices; future
patches will add default functions for other device types such as DSA.
The way idxd device probing works, if the device configuration is valid at
that point, e.g. at least one workqueue and engine is properly configured,
then the device will be enabled and ready to go.
The IAA implementation, idxd_load_iaa_device_defaults(), configures a
single workqueue (wq0) for each device with the following default
values:
mode "dedicated"
threshold 0
size Total WQ Size from WQCAP
priority 10
type IDXD_WQT_KERNEL
group 0
name "iaa_crypto"
driver_name "crypto"
Note that this now adds another configuration step for any users that
want to configure their own devices/workqueues with something different,
in that they'll first need to disable (in the case of IAA) wq0 and the
device itself before they can set their own attributes and re-enable,
since they've already been auto-enabled. Note also that in order
for the new configuration to be applied to the deflate-iaa crypto
algorithm the iaa_crypto module needs to unregister the old version,
which is accomplished by removing the iaa_crypto module, and
re-registering it with the new configuration by reinserting the
iaa_crypto module.
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Create a lightweight callback interface to allow idxd sub-drivers to
be notified when work sent to idxd wqs has completed.
For a sub-driver to be notified of work completion, it needs to:
- Set the descriptor's 'Request Completion Interrupt'
(IDXD_OP_FLAG_RCI)
- Set the sub-driver desc_complete() callback when registering the
sub-driver e.g.:
	struct idxd_device_driver my_drv = {
		.probe = my_probe,
		.desc_complete = my_complete,
	};
- Set the sub-driver-specific context in the sub-driver's descriptor
e.g.:
idxd_desc->crypto.req = req;
idxd_desc->crypto.tfm = tfm;
idxd_desc->crypto.src_addr = src_addr;
idxd_desc->crypto.dst_addr = dst_addr;
When the work completes and the completion irq fires, idxd will invoke
the desc_complete() callback with pointers to the descriptor, context,
and completion_type.
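For illustration, a sub-driver callback could be shaped like this (the
exact prototype is an assumption based on the description above):
	static void my_complete(struct idxd_desc *desc, void *ctx,
				enum idxd_complete_type comp_type)
	{
		struct my_request *req = ctx;

		/* check the completion record in desc, then finish req */
	}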
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the accessors idxd_wq_set_private() and idxd_wq_get_private()
allowing users to set and retrieve a private void * associated with an
idxd_wq.
The private data is stored in the idxd_dev.conf_dev associated with
each idxd_wq.
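Since the private pointer lives in the conf_dev, the accessors can be thin
wrappers around the driver-data helpers; a minimal sketch, assuming a
wq_confdev() helper that returns the struct device embedded in
idxd_dev.conf_dev:
	static inline void idxd_wq_set_private(struct idxd_wq *wq, void *private)
	{
		dev_set_drvdata(wq_confdev(wq), private);
	}

	static inline void *idxd_wq_get_private(struct idxd_wq *wq)
	{
		return dev_get_drvdata(wq_confdev(wq));
	}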
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rename drv_enable_wq and drv_disable_wq to idxd_drv_enable_wq and
idxd_drv_disable_wq respectively, so that they're no longer too
generic to be exported. This also matches existing naming within the
idxd driver.
And to allow idxd sub-drivers to enable and disable wqs, export them.
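In practice this amounts to renaming the symbols and exporting them for
sub-drivers; a sketch (whether a symbol namespace is used is an assumption
here):
	int idxd_drv_enable_wq(struct idxd_wq *wq);	/* was drv_enable_wq() */
	void idxd_drv_disable_wq(struct idxd_wq *wq);	/* was drv_disable_wq() */

	EXPORT_SYMBOL_NS_GPL(idxd_drv_enable_wq, IDXD);
	EXPORT_SYMBOL_NS_GPL(idxd_drv_disable_wq, IDXD);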
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support to allow an external driver to be registered to the
dsa_bus_type and also auto-loaded.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This API was defined to formalize the access to internal iommu details on
some Tegra SOCs, but a few callers got missed. Add them.
The helper already masks by 0xFFFF so remove this code from the callers.
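A sketch of what a converted caller looks like, assuming the helper in
question is tegra_dev_iommu_get_stream_id() (the register write is
illustrative):
	u32 sid;

	/* replaces the open-coded fwspec->ids[0] & 0xFFFF in the caller */
	if (tegra_dev_iommu_get_stream_id(dev, &sid))
		writel(sid, regs + STREAM_ID_REG);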
Suggested-by: Thierry Reding <thierry.reding@gmail.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/7-v2-16e4def25ebb+820-iommu_fwspec_p1_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Source and destination data buffers are allocated with the GFP_KERNEL flag.
This means that, if the DDR is larger than 2GB, buffers can be allocated
above the 32-bit addressable space. In this case, and if the DMA controller
is only 32-bit capable, a swiotlb bounce buffer, located in the 32-bit
addressable space, is used and introduces an extra memcpy.
To prevent this extra memcpy, caused by the swiotlb bounce buffer being used
when a source or destination data buffer is allocated above the 32-bit
addressable space, force source and destination data buffer allocation with
GFP_DMA instead when the nobounce parameter is true.
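A minimal sketch of the allocation choice (the nobounce flag name comes
from the text; the buffer handling around it is illustrative):
	gfp_t gfp = nobounce ? GFP_DMA : GFP_KERNEL;

	/* GFP_DMA keeps the buffer inside the 32-bit addressable space */
	buf = kmalloc(len, gfp);
	if (!buf)
		return -ENOMEM;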
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20231124160235.2459326-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The allocated channel count consistently increases due to a missing source
ID (srcid) cleanup in the fsl_edma_free_chan_resources() function on imx93
eDMAv4.
Reset 'srcid' in fsl_edma_free_chan_resources().
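A sketch of the fix (structure and helper names assumed from the driver's
usual style):
	static void fsl_edma_free_chan_resources(struct dma_chan *chan)
	{
		struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);

		/* ... existing teardown ... */
		fsl_chan->srcid = 0;	/* allow the channel to be matched again */
	}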
Cc: stable@vger.kernel.org
Fixes: 72f5801a4e2b ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Frank Li <Frank.Li@nxp.com>
Link: https://lore.kernel.org/r/20231127214325.2477247-1-Frank.Li@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
To support the flexibility to reserve specific DMA channels, add support
for the dma-channel-mask property in the tegra210-adma driver.
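For illustration, the property can be read in probe and used to skip
reserved channels; a sketch (everything except the dma-channel-mask
property name is an assumption):
	u32 chan_mask = 0xffffffff;

	of_property_read_u32(pdev->dev.of_node, "dma-channel-mask", &chan_mask);

	for (i = 0; i < tdma->nr_channels; i++) {
		if (!(chan_mask & BIT(i)))
			continue;	/* channel reserved, e.g. for firmware use */
		/* register channel i with the dmaengine core */
	}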
Signed-off-by: Mohan Kumar <mkumard@nvidia.com>
Link: https://lore.kernel.org/r/20231128071615.31447-3-mkumard@nvidia.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
device_link_add() returns a NULL pointer, not PTR_ERR(), when it fails,
so replace the IS_ERR() check with a NULL pointer check.
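device_link_add() signals failure by returning NULL, so the check becomes
(a sketch; the flags and error handling are illustrative):
	link = device_link_add(dev, pd_chan,
			       DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
	if (!link) {
		dev_err(dev, "Failed to add device link\n");
		return -EINVAL;
	}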
Fixes: 72f5801a4e2b ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Link: https://lore.kernel.org/r/20231129090000.841440-1-yangyingliang@huaweicloud.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
SiFive platform DMA (sf-pdma) has both in-order and out-of-order
configurations, but the sf-pdma driver is configured to do in-order DMA
transfers; the out-of-order configuration gives better throughput on the
PolarFire SoC platform.
Add a PolarFire SoC specific compatible and code to support out-of-order
DMA transfers.
Reviewed-by: Emil Renner Berthing <emil.renner.berthing@canonical.com>
Signed-off-by: Shravan Chippa <shravan.chippa@microchip.com>
Link: https://lore.kernel.org/r/20231208103856.3732998-4-shravan.chippa@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Update the sf-pdma driver to adopt the generic DMA device tree bindings.
It calls of_dma_controller_register() with of_dma_xlate_by_chan_id to get
the generic DMA device tree helper support, so that DMA clients can look
up the sf-pdma controller using standard APIs.
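A sketch of the registration call in probe (variable names are
illustrative):
	ret = of_dma_controller_register(pdev->dev.of_node,
					 of_dma_xlate_by_chan_id, pdma);
	if (ret)
		dev_err(&pdev->dev, "failed to register DMA controller\n");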
Signed-off-by: Shravan Chippa <shravan.chippa@microchip.com>
Link: https://lore.kernel.org/r/20231208103856.3732998-2-shravan.chippa@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix incorrect descriptions for the GRPCFG register which has three
sub-registers (GRPWQCFG, GRPENGCFG and GRPFLGCFG).
No functional changes
Signed-off-by: Guanjun <guanjun@linux.alibaba.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Lijun Pan <lijun.pan@intel.com>
Link: https://lore.kernel.org/r/20231211053704.2725417-3-guanjun@linux.alibaba.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The int_handle field in the hw descriptor should also be protected by
wmb() before possibly triggering a DMA read.
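A sketch of the ordering requirement (the submission details are
illustrative):
	desc->hw->int_handle = ie->int_handle;

	/*
	 * Make sure int_handle and the rest of the descriptor are visible
	 * in memory before the device is allowed to fetch the descriptor.
	 */
	wmb();
	/* ... portal write that lets the device read the descriptor ... */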
Fixes: eb0cf33a91b4 ("dmaengine: idxd: move interrupt handle assignment")
Signed-off-by: Guanjun <guanjun@linux.alibaba.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Reviewed-by: Lijun Pan <lijun.pan@intel.com>
Link: https://lore.kernel.org/r/20231211053704.2725417-2-guanjun@linux.alibaba.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
For RX channels, REG_BUS_WIDTH seems to default to a value of 0xf00, and
macOS preserves the upper bits when setting the configuration in the
lower ones. If we reset the upper bits to 0, this causes framing errors
on suspend/resume (the data stream "tears" and channels get swapped
around). Keeping the upper bits untouched, like the macOS driver does,
fixes this issue.
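A sketch of the resulting read-modify-write (REG_BUS_WIDTH comes from the
text; the base pointer and field mask are assumptions):
	u32 val = readl_relaxed(ad->base + REG_BUS_WIDTH(channo));

	val &= ~0xffU;		/* clear only the bits being configured */
	val |= cfg;		/* upper bits, e.g. the 0xf00 default, stay untouched */
	writel_relaxed(val, ad->base + REG_BUS_WIDTH(channo));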
Signed-off-by: Hector Martin <marcan@marcan.st>
Reviewed-by: Martin Povišer <povik+lin@cutebit.org>
Signed-off-by: Martin Povišer <povik+lin@cutebit.org>
Link: https://lore.kernel.org/r/20231029170704.82238-1-povik+lin@cutebit.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The .remove() callback for a platform driver returns an int which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve this, there is an ongoing effort to make the remove callback
return void. In the first step of this effort all drivers are converted to
.remove_new(), which already returns void. Eventually after all drivers
are converted, .remove_new() will be renamed to .remove().
There is an error path that has the above mentioned problem. This patch
only adds a more drastic error message. To properly fix it,
dmaengine_terminate_sync() must be known to have succeeded (or that it's
safe to not call it as other drivers seem to assume).
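For reference, the conversion pattern looks like this (generic names, not
this driver's code):
	static void foo_remove(struct platform_device *pdev)
	{
		/* release resources; there is no return value, so errors
		 * must be handled (or reported) right here */
	}

	static struct platform_driver foo_driver = {
		.remove_new = foo_remove,
		.driver = {
			.name = "foo",
		},
	};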
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20231105093415.3704633-10-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The .remove() callback for a platform driver returns an int which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve this, there is an ongoing effort to make the remove callback
return void. In the first step of this effort all drivers are converted to
.remove_new(), which already returns void. Eventually after all drivers
are converted, .remove_new() will be renamed to .remove().
There is an error path that has the above mentioned problem. This patch
only adds a more drastic error message. To properly fix it,
dmaengine_terminate_sync() must be known to have succeeded (or that it's
safe to not call it as other drivers seem to assume).
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20231105093415.3704633-9-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The .remove() callback for a platform driver returns an int which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve this, there is an ongoing effort to make the remove callback
return void. In the first step of this effort all drivers are converted to
.remove_new(), which already returns void. Eventually after all drivers
are converted, .remove_new() will be renamed to .remove().
There is an error path that has the above mentioned problem. This patch
only adds a more drastic error message. To properly fix it,
dmaengine_terminate_sync() must be known to have succeeded (or that it's
safe to not call it as other drivers seem to assume).
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20231105093415.3704633-8-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The .remove() callback for a platform driver returns an int which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve this, there is an ongoing effort to make the remove callback
return void. In the first step of this effort all drivers are converted to
.remove_new(), which already returns void. Eventually after all drivers
are converted, .remove_new() will be renamed to .remove().
There is an error path that has the above mentioned problem. This patch
only adds a more drastic error message. To properly fix it,
dmaengine_terminate_sync() must be known to have succeeded (or that it's
safe to not call it as other drivers seem to assume).
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20231105093415.3704633-7-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
stm32_dma_get_burst() returns a negative error for invalid input, which
gets turned into a large u32 value in stm32_dma_prep_dma_memcpy() that
in turn triggers an assertion because it does not fit into a two-bit field:
drivers/dma/stm32-dma.c: In function 'stm32_dma_prep_dma_memcpy':
include/linux/compiler_types.h:354:38: error: call to '__compiletime_assert_282' declared with attribute error: FIELD_PREP: value too large for the field
_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
^
include/linux/compiler_types.h:335:4: note: in definition of macro '__compiletime_assert'
prefix ## suffix(); \
^~~~~~
include/linux/compiler_types.h:354:2: note: in expansion of macro '_compiletime_assert'
_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
^~~~~~~~~~~~~~~~~~~
include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
#define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
^~~~~~~~~~~~~~~~~~
include/linux/bitfield.h:68:3: note: in expansion of macro 'BUILD_BUG_ON_MSG'
BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ? \
^~~~~~~~~~~~~~~~
include/linux/bitfield.h:114:3: note: in expansion of macro '__BF_FIELD_CHECK'
__BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_PREP: "); \
^~~~~~~~~~~~~~~~
drivers/dma/stm32-dma.c:1237:4: note: in expansion of macro 'FIELD_PREP'
FIELD_PREP(STM32_DMA_SCR_PBURST_MASK, dma_burst) |
^~~~~~~~~~
As an easy workaround, assume the error can happen, so try to handle this
by failing stm32_dma_prep_dma_memcpy() before the assertion. It replicates
what is done in stm32_dma_set_xfer_param() where stm32_dma_get_burst() is
also used.
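A sketch of the workaround, mirroring the check already done in
stm32_dma_set_xfer_param() (the error message is illustrative):
	dma_burst = stm32_dma_get_burst(chan, STM32_DMA_MAX_BURST);
	if (dma_burst < 0) {
		dev_err(chan2dev(chan), "Invalid burst size\n");
		return NULL;
	}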
Fixes: 1c32d6c37cc2 ("dmaengine: stm32-dma: use bitfield helpers")
Fixes: a2b6103b7a8a ("dmaengine: stm32-dma: Improve memory burst management")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202311060135.Q9eMnpCL-lkp@intel.com/
Link: https://lore.kernel.org/r/20231106134832.1470305-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
AM62x has 3 SPI channels where each channel has 4 TX and 4 RX threads.
This also fixes the thread numbers.
Signed-off-by: Ronald Wahl <ronald.wahl@raritan.com>
Fixes: 5ac6bfb58777 ("dmaengine: ti: k3-psil: Add AM62x PSIL and PDMA data")
Reviewed-by: Jai Luthra <j-luthra@ti.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Link: https://lore.kernel.org/r/20231030190113.16782-1-rwahl@gmx.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In case a descriptor is prepared while the channel is already running, the
CCR register value stored into the channel could already have its EN bit
set. This would lead to a bad transfer since, at transfer start time, the
channel would be enabled while other registers aren't yet properly set.
To avoid this, ensure to mask the CCR_EN bit when storing the ccr value
into the mdma channel structure.
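A sketch of the fix in the prep path (register helpers and field names are
assumptions based on the text):
	ccr = stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id));

	/* mask EN so the stored value can never enable the channel early */
	chan->ccr = ccr & ~STM32_MDMA_CCR_EN;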
Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver")
Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Tested-by: Alain Volmat <alain.volmat@foss.st.com>
Link: https://lore.kernel.org/r/20231009082450.452877-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since commit cc99582d46b4 ("dmaengine: xilinx: xilinx_dma: Convert to
platform remove callback returning void") xilinx_dma_remove() doesn't
return zero any more. As the function has no return value any more, just
drop the statement about the return value.
Reported-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20231014211656.1512016-2-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Merge tag 'dmaengine-fix-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine
Pull dmaengine fixes from Vinod Koul:
"Driver fixes for:
- stm32 dma residue calculation and chaining
- stm32 mdma for setting inflight bytes, residue calculation and
resume abort
- channel request, channel enable and dma error in fsl_edma
- runtime pm imbalance in ste_dma40 driver
- deadlock fix in mediatek driver"
* tag 'dmaengine-fix-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine:
dmaengine: fsl-edma: fix all channels requested when call fsl_edma3_xlate()
dmaengine: stm32-dma: fix residue in case of MDMA chaining
dmaengine: stm32-dma: fix stm32_dma_prep_slave_sg in case of MDMA chaining
dmaengine: stm32-mdma: set in_flight_bytes in case CRQA flag is set
dmaengine: stm32-mdma: use Link Address Register to compute residue
dmaengine: stm32-mdma: abort resume if no ongoing transfer
dmaengine: ste_dma40: Fix PM disable depth imbalance in d40_probe
dmaengine: mediatek: Fix deadlock caused by synchronize_irq()
dmaengine: idxd: use spin_lock_irqsave before wait_event_lock_irq
dmaengine: fsl-edma: fix edma4 channel enable failure on second attempt
dt-bindings: dmaengine: zynqmp_dma: add xlnx,bus-width required property
dmaengine: fsl-dma: fix DMA error when enabling sg if 'DONE' bit is set
Recent change a67ba97dfb30 ("dmaengine: Use device_get_match_data()")
cleaned up device tree data calls but left an unused variable, so drop
that.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: a67ba97dfb30 ("dmaengine: Use device_get_match_data()")
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20231010065729.29385-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
dma_get_slave_channel() increases client_count for all channels. It should
only be called when a matched channel is found in fsl_edma3_xlate().
Move dma_get_slave_channel() after checking for a matched channel.
Cc: stable@vger.kernel.org
Fixes: 72f5801a4e2b ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Frank Li <Frank.Li@nxp.com>
Link: https://lore.kernel.org/r/20231004142911.838916-1-Frank.Li@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In case of MDMA chaining, DMA is configured in Double-Buffer Mode (DBM)
with two periods, but if the transfer has been prepared with
_prep_slave_sg(), the transfer is not marked cyclic (=!chan->desc->cyclic).
However, as DBM is activated for MDMA chaining, residue computation must
take cyclic constraints into account.
With only two periods in MDMA chaining, and no update because the Transfer
Complete interrupt is masked, n_sg is always 0. If the DMA current memory
address (depending on SxCR.CT and SxM0AR/SxM1AR) does not match, it means
n_sg should be increased.
Then, the residue of the current period is the one read from SxNDTR and
should not be overwritten with the full period length.
Fixes: 723795173ce1 ("dmaengine: stm32-dma: add support to trigger STM32 MDMA")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20231004155024.2609531-2-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The Current Target (CT) has to be reset when starting an MDMA chaining use
case, as Double Buffer mode is activated. It ensures the DMA will start
processing the first memory target (pointed to by SxM0AR).
Fixes: 723795173ce1 ("dmaengine: stm32-dma: add support to trigger STM32 MDMA")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20231004155024.2609531-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The CRQA flag is set by hardware when the channel request becomes active
and the channel is enabled. It is cleared by hardware when the channel
request is completed.
So when it is set, it means MDMA is transferring bytes.
This information is useful in case of STM32 DMA and MDMA chaining,
especially when the user pauses DMA before stopping it, to trigger one last
MDMA transfer to get the latest bytes of the SRAM buffer to the
destination buffer.
The STM32 DCMI driver can then use this to know whether the last MDMA
transfer in case of chaining is done.
Fixes: 696874322771 ("dmaengine: stm32-mdma: add support to be triggered by STM32 DMA")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20231004163531.2864160-3-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current implementation relies on the curr_hwdesc index. But to keep
this index up to date, the Block Transfer interrupt (BTIE) has to be
enabled. If it is not, curr_hwdesc is not updated, and then the residue is
not reliable.
Rely on the Link Address Register instead, and disable the BTIE interrupt
in stm32_mdma_setup_xfer() because it is no longer needed to keep
curr_hwdesc up to date in the _prep_slave_sg() case.
This avoids extra interrupts and also ensures a reliable residue. These
improvements are required for the STM32 DCMI camera capture use case, which
needs STM32 DMA and MDMA chaining for good performance.
Fixes: 696874322771 ("dmaengine: stm32-mdma: add support to be triggered by STM32 DMA")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20231004163531.2864160-2-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
chan->desc can be NULL if the transfer has been terminated when resume is
called, leading to a NULL pointer dereference when retrieving the hwdesc.
To avoid this case, check that chan->desc is not NULL and that the channel
is disabled (transfer previously paused or terminated).
Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20231004163531.2864160-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In order to use this dmaengine with sound devices, let's add cyclic
transfers support. Most of the code is reused from the existing
scatter-gather implementation, only the final linking between
descriptors, the control fields (to trigger interrupts more often) and
the interrupt handling are really different.
This controller supports up to 32 adjacent descriptors; we assume this is
way more than enough for the purpose of cyclic transfers and limit the
number of cycled descriptors to 32. This greatly simplifies the overall
handling of the descriptors.
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/r/20231005160237.2804238-4-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In order to reduce and clarify the diff when introducing cyclic
transfers support, let's first prepare the driver a bit. There is no
functional change.
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/r/20231005160237.2804238-3-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
pm_runtime_enable() increases the power disable depth. Thus a pairing
decrement is needed on the error handling path to keep it balanced
according to context.
Fix this by calling pm_runtime_disable() when an error is returned.
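A sketch of the balanced error path (simplified, generic names):
	pm_runtime_enable(dev);

	ret = later_init_step(base);
	if (ret) {
		pm_runtime_disable(dev);	/* pair the earlier enable */
		return ret;
	}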
Signed-off-by: Zhang Shurong <zhang_shurong@foxmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://lore.kernel.org/r/tencent_DD2D371DB5925B4B602B1E1D0A5FA88F1208@qq.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If probe is reached, we've already matched the device and in the case of
DT matching, the struct device_node pointer will be set. Therefore, there
is no need to call of_match_device() in probe.
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20231006213835.332848-1-robh@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Use preferred device_get_match_data() instead of of_match_device() to
get the driver match data. With this, adjust the includes to explicitly
include the correct headers.
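A sketch of the preferred pattern (the driver data type is illustrative):
	/* device_get_match_data() is declared in <linux/property.h> */
	const struct my_driver_data *data;

	data = device_get_match_data(&pdev->dev);
	if (!data)
		return -ENODEV;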
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20231006213844.333027-1-robh@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
To do so, the code needs a little shuffling related to how hw_desc is used
and nb_desc incremented.
The one by one increment is needed for the error handling path, calling
pxad_free_desc(), to work correctly.
So, add a new intermediate variable, desc, to store the result of the
dma_pool_alloc() call.
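The annotation pattern being prepared for looks like this (struct and field
names are illustrative, not the driver's exact ones):
	struct sw_desc {
		int nb_desc;	/* must cover every hw_desc slot that is accessed */
		struct hw_desc *hw_desc[] __counted_by(nb_desc);
	};

	/* allocate one descriptor at a time; bump the counter before the
	 * slot is written so bounds checking sees a valid index */
	desc = dma_pool_alloc(pool, GFP_NOWAIT, &dma);
	if (!desc)
		goto err;	/* nb_desc already matches what was allocated */
	sw_desc->nb_desc++;
	sw_desc->hw_desc[sw_desc->nb_desc - 1] = desc;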
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/1c9ef22826f449a3756bb13a83494e9fe3e0be8b.1696676782.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If pxad_alloc_desc() fails on the first dma_pool_alloc() call, then
sw_desc->nb_desc is zero.
In such a case pxad_free_desc() is called and it will BUG_ON().
Remove this erroneous BUG_ON().
It is also useless, because if "sw_desc->nb_desc == 0", then, on the first
iteration of the for loop, i is -1 and the loop will not be executed.
(both i and sw_desc->nb_desc are 'int')
Fixes: a57e16cf0333 ("dmaengine: pxa: add pxa dmaengine driver")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/r/c8fc5563c9593c914fde41f0f7d1489a21b45a9a.1696676782.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is a warning reported by coccinelle:
./drivers/dma/xilinx/xdma.c:888:22-25: ERROR:
Missing resource_size with res
Use resource_size() on the resource object instead of explicit computation.
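The coccinelle suggestion amounts to (a sketch):
	/* before: length computed by hand */
	len = res->end - res->start + 1;

	/* after */
	len = resource_size(res);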
Signed-off-by: Li Zetao <lizetao1@huawei.com>
Link: https://lore.kernel.org/r/20230803033235.3049137-1-lizetao1@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The fsl_mc_driver_register() will set "THIS_MODULE" to driver.owner when
register a fsl_mc_driver driver, so it is redundant initialization to set
driver.owner in dpaa2_qdma_driver statement. Remove it for clean code.
Signed-off-by: Li Zetao <lizetao1@huawei.com>
Link: https://lore.kernel.org/r/20230804100245.100068-1-lizetao1@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
synchronize_irq(c->irq) does not return until the IRQ handler
mtk_uart_apdma_irq_handler() has completed. If synchronize_irq() is called
while holding a spin_lock that the IRQ handler also needs, a deadlock will
happen. The process is shown below:
           cpu0                      |          cpu1
 mtk_uart_apdma_device_pause()       | mtk_uart_apdma_irq_handler()
   spin_lock_irqsave()               |
                                     |   spin_lock_irqsave()
   //hold the lock to wait           |
   synchronize_irq()                 |
This patch moves the synchronize_irq(c->irq) call outside the spin_lock in
order to mitigate the bug.
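A sketch of the reordering (the lock and field names follow the driver's
virt-dma usage but are assumptions here):
	spin_lock_irqsave(&c->vc.lock, flags);
	/* stop/pause the channel under the lock */
	spin_unlock_irqrestore(&c->vc.lock, flags);

	/* wait for a possibly running handler only after dropping the lock */
	synchronize_irq(c->irq);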
Fixes: 9135408c3ace ("dmaengine: mediatek: Add MediaTek UART APDMA support")
Signed-off-by: Duoming Zhou <duoming@zju.edu.cn>
Reviewed-by: Eugen Hristev <eugen.hristev@collabora.com>
Link: https://lore.kernel.org/r/20230806032511.45263-1-duoming@zju.edu.cn
Signed-off-by: Vinod Koul <vkoul@kernel.org>
'type' is an enum, thus casting the pointer in a 64-bit compile test with
W=1 causes:
mmp_tdma.c:649:10: error: cast to smaller integer type 'enum mmp_tdma_type' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast]
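One common way to avoid the warning is an intermediate cast through a
pointer-sized integer; a sketch (whether this is the exact fix used here is
not stated above):
	type = (enum mmp_tdma_type)(uintptr_t)of_device_get_match_data(&pdev->dev);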
Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://lore.kernel.org/r/20230810100000.123515-2-krzysztof.kozlowski@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
'cap' is an enum, thus casting the pointer in a 64-bit compile test with
W=1 causes:
hidma.c:748:8: error: cast to smaller integer type 'enum hidma_cap' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast]
Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://lore.kernel.org/r/20230810100000.123515-1-krzysztof.kozlowski@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>