The hardware channel's next descriptor view structure contains only 32-bit
fields, while dma_addr_t can be u64 or u32 depending on
CONFIG_ARCH_DMA_ADDR_T_64BIT. Force u32 to comply with what the hardware
expects.
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20211215110115.191749-11-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
AT_XDMAC_CNDC_NDVIEW_NDV3 was set even for AT_XDMAC_MBR_UBC_NDV2
because of incorrect bit handling. Fix it.
Fixes: ee0fe35c8d ("dmaengine: xdmac: Handle descriptor's view 3 registers")
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20211215110115.191749-10-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It's easier to read code with fewer levels of indentation, so remove a level
of indentation in at_xdmac_advance_work(). The construct:
if (!foo() && !bar()) {
        ...
}
was replaced by:
if (foo() || bar())
        return;
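As a standalone illustration of the same transformation (foo() and bar() here
are hypothetical stand-ins, not the driver's real helpers):

  #include <stdbool.h>

  static bool foo(void) { return false; }
  static bool bar(void) { return false; }

  /* before: the useful work sits one level deep */
  static void work_before(void)
  {
          if (!foo() && !bar()) {
                  /* ... actual work ... */
          }
  }

  /* after: invert the condition (De Morgan) and return early */
  static void work_after(void)
  {
          if (foo() || bar())
                  return;
          /* ... actual work, one level shallower ... */
  }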
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20211215110115.191749-9-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since tx_submit can be called from a hard IRQ, xfers_list must be
protected with a lock to avoid concurrent access to the list's elements.
Since at_xdmac_handle_cyclic() is called from a tasklet, spin_lock_irq
is enough to protect from a hard IRQ.
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20211215110115.191749-8-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Move the free descriptors to the tail of the list, so that the sequence of
descriptors is easier to track when debugging. One would know which
descriptor should come next and could more easily catch, for example,
concurrent use of descriptors. virt-dma uses list_splice_tail_init() as well;
follow the core driver.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20211215110115.191749-7-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The transfer descriptors were wrongly moved to the free descriptors list
before calling the tx desc callback. As DMA engine drivers drop any
locks before calling the callback function, the txd could be taken again,
resulting in its callback being called prematurely. Fix the race on the tx
desc callback by moving the transfer descriptor into the free descriptor
list after the callback is invoked.
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20211215110115.191749-6-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The caller of dma_cookie_complete() is expected to hold a lock to prevent
concurrent updates of the channel's completed cookie marker. Call
dma_cookie_complete() with the lock held.
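A minimal sketch of the locking pattern, assuming at_xdmac's internal names
(at_xdmac_chan, at_xdmac_desc, tx_dma_desc); it is an illustration, not the
exact patch:

  static void at_xdmac_complete_locked(struct at_xdmac_chan *atchan,
                                       struct at_xdmac_desc *desc)
  {
          unsigned long flags;

          spin_lock_irqsave(&atchan->lock, flags);
          /* the completed cookie marker is only updated under the channel lock */
          dma_cookie_complete(&desc->tx_dma_desc);
          spin_unlock_irqrestore(&atchan->lock, flags);
  }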
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20211215110115.191749-5-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It is desirable to do the prints without the lock held if possible, so
move the print after the lock is released.
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20211215110115.191749-4-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Cyclic channels, too, must call issue_pending() in order to start a transfer.
Start the transfer in issue_pending() regardless of the channel type.
This wrongly worked before because, in the past, the transfer was started
at the tx_submit level when there was only one descriptor in the transfer
list.
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20211215110115.191749-3-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
tx_submit is supposed to push the current transaction descriptor to a
pending queue, waiting for issue_pending() to be called. issue_pending()
must start the transfer, not tx_submit(), thus remove
at_xdmac_start_xfer() from at_xdmac_tx_submit(). Clients of at_xdmac that
assume tx_submit() starts the transfer must be updated to call
dma_async_issue_pending() if they fail to do so (one example is
atmel_serial).
As the at_xdmac_start_xfer() is now called only from
at_xdmac_advance_work() when !at_xdmac_chan_is_enabled(), the
at_xdmac_chan_is_enabled() check is no longer needed in
at_xdmac_start_xfer(), thus remove it.
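A hedged sketch of what tx_submit() boils down to after the change (names
follow the at_xdmac conventions; the exact body is an assumption, not a quote
of the patch):

  static dma_cookie_t at_xdmac_tx_submit(struct dma_async_tx_descriptor *tx)
  {
          struct at_xdmac_desc *desc = txd_to_at_desc(tx);
          struct at_xdmac_chan *atchan = to_at_xdmac_chan(tx->chan);
          dma_cookie_t cookie;
          unsigned long irqflags;

          spin_lock_irqsave(&atchan->lock, irqflags);
          cookie = dma_cookie_assign(tx);
          /* only queue the descriptor; issue_pending() starts the transfer */
          list_add_tail(&desc->xfer_node, &atchan->xfers_list);
          spin_unlock_irqrestore(&atchan->lock, irqflags);

          return cookie;
  }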
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Link: https://lore.kernel.org/r/20211215110115.191749-2-tudor.ambarus@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The following sysfs attributes will be obsolete due to the name change of
tokens to read buffers:
max_tokens
token_limit
group/tokens_allowed
group/tokens_reserved
group/use_token_limit
Create new entries and have the old entries print a deprecation warning (a
minimal sketch follows the list below). New attributes to replace the token
ones:
max_read_buffers
read_buffer_limit
group/read_buffers_allowed
group/read_buffers_reserved
group/use_read_buffer_limit
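A minimal sketch of how a deprecated entry can keep working while warning;
the attribute and field names below are illustrative assumptions, not the
exact idxd code:

  static ssize_t max_tokens_show(struct device *dev,
                                 struct device_attribute *attr, char *buf)
  {
          struct idxd_device *idxd = confdev_to_idxd(dev);

          dev_warn_once(dev, "attribute deprecated, see max_read_buffers\n");
          return sysfs_emit(buf, "%u\n", idxd->max_rdbufs);
  }
  static DEVICE_ATTR_RO(max_tokens);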
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163951339488.2988321.2424012059911316373.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DSA spec v1.2 has changed the term "bandwidth tokens" to "read buffers"
in order to make the concept clearer. Deprecate the bandwidth token
naming in the driver and convert to read buffers in order to match the
spec and reduce confusion when reading it.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163951338932.2988321.6162640806935567317.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Per the spec, the wq size and group association are not changeable unless the
device is disabled. Exclude clearing the shadow copy on wq disable/reset. This
allows the wq type to be changed after disabling the wq so it can be
re-enabled.
Move the size and group association to their own cleanup and only call it
during device disable.
Fixes: 0dcfe41e9a ("dmanegine: idxd: cleanup all device related bits after disabling device")
Reported-by: Lucas Van <lucas.van@intel.com>
Tested-by: Lucas Van <lucas.van@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163951291732.2987775.13576571320501115257.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Change the driver so that the WQ interrupt is requested only when the wq is
being enabled. This new scheme sets things up so that request_threaded_irq()
is only called when a kernel wq type is being enabled. This also prepares for
future interrupt requests where a different interrupt handler, such as a wq
occupancy interrupt, can be set up instead of the wq completion interrupt.
Not calling request_irq() until the WQ actually needs an irq also prevents
wasting CPU irq vectors on x86 systems, where they are a limited resource.
idxd_flush_pending_descs() is moved to device.c since descriptor flushing
is now part of wq disable rather than shutdown().
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163942149487.2412839.6691222855803875848.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The descriptor flushing for shutdown is not holding the irq_entry list
lock. If there's ongoing interrupt completion handling, this can corrupt
the list. Add locking to protect list walking. Also refactor the code so
it's more compact.
Fixes: 8f47d1a5e5 ("dmaengine: idxd: connect idxd to dmaengine subsystem")
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163942148935.2412839.18282664745572777280.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
With the irq_entry already associated with the wq in a 1:1 relationship,
embed the irq_entry in the idxd_wq struct and remove the back pointers for
idxd_wq and idxd_device. In the process of this work, clean up the interrupt
handle assignment so that there is no decision to be made during the submit
call about where the interrupt handle value comes from. Set the interrupt
handle at irq request initialization time.
irq_entry 0 is designated as special and is tied to the device itself.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163942148362.2412839.12055447853311267866.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There are currently two ways to create a set of sysfs files for a
kobj_type: through the default_attrs field and through the default_groups
field. Move the ioatdma sysfs code to use the default_groups field, which has
been the preferred way since aa30f47cf6 ("kobject: Add support for
default attribute groups to kobj_type"), so that we can soon get rid of
the obsolete default_attrs field.
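A minimal sketch of the conversion; the attribute names below are
illustrative, not ioatdma's real ones:

  static struct attribute *ioat_attrs[] = {
          &ring_size_attr.attr,
          &ring_active_attr.attr,
          NULL,
  };
  ATTRIBUTE_GROUPS(ioat);                 /* generates ioat_groups[] */

  static struct kobj_type ioat_ktype = {
          .sysfs_ops      = &ioat_sysfs_ops,
          .default_groups = ioat_groups,  /* was: .default_attrs = ioat_attrs */
  };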
Cc: Vinod Koul <vkoul@kernel.org>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20220104163330.1338824-1-gregkh@linuxfoundation.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The variables src_addr and dst_addr handle DMA addresses, so these should
be declared as dma_addr_t.
Fixes: 667b925144 ("dmaengine: uniphier-xdmac: Add UniPhier external DMA controller driver")
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Link: https://lore.kernel.org/r/1639456963-10232-1-git-send-email-hayashi.kunihiko@socionext.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add support for R-Car S4-8. We can reuse the R-Car V3U code, so rename
the related variables to "gen4".
Note that some registers of R-Car V3U do not exist on R-Car S4-8,
but none of them are used by the driver for now.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Ulrich Hecht <uli+renesas@fpond.eu>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20211222114507.1252947-3-yoshihiro.shimoda.uh@renesas.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Tag for dmaengine slave_id removal topic branch which should be merged
into v5.17
Merge tag 'dmaengine_topic_slave_id_removal_5.17' into next
Merge the tag dmaengine_topic_slave_id_removal_5.17 into next. This
brings in the slave_id removal topic changes.
'shdma_slave_used' is a bitmap, so use 'bitmap_zalloc()' to simplify the
code, improve the semantics and avoid some open-coded arithmetic in allocator
arguments.
Also change the corresponding 'kfree()' into 'bitmap_free()' to keep
things consistent.
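A minimal sketch of the conversion (the helper functions are placeholders;
only the allocation calls matter):

  static unsigned long *shdma_slave_used;          /* the bitmap in question */

  static int shdma_slave_bitmap_init(unsigned int slave_num)
  {
          /* was: kzalloc(BITS_TO_LONGS(slave_num) * sizeof(long), GFP_KERNEL) */
          shdma_slave_used = bitmap_zalloc(slave_num, GFP_KERNEL);
          return shdma_slave_used ? 0 : -ENOMEM;
  }

  static void shdma_slave_bitmap_exit(void)
  {
          bitmap_free(shdma_slave_used);           /* was: kfree() */
  }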
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/r/3efaf2784424ae3d7411dc47f8b6b03e7bb8c059.1637702701.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The pointer hwdesc is being initialized with a value that is never
read; it is updated later in a for-loop. The assignment is
redundant and can be removed.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Reviewed-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20211204140032.548066-1-colin.i.king@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add a sysfs knob to allow tuning of retries for the kernel ENQCMDS
descriptor submission. While on the host, it is not as likely that ENQCMDS
returns busy during normal operations, because the driver controls the
number of descriptors allocated for submission. However, when the driver is
operating as a guest driver, the chance of a retry goes up significantly due
to sharing a wq with multiple VMs. A default value is provided, and the
system admin can tune the value on a per-WQ basis.
Suggested-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163820629464.2702134.7577370098568297574.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
For some devices with only half-duplex capabilities, it doesn't make
much sense to use one DMA channel per direction, as both channels will
never be active at the same time.
Add support for bidirectional I/O on DMA channels. The client drivers
can then request a "tx-rx" DMA channel which will be used for both
directions.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20211206174259.68133-7-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The JZ4760 and JZ4760B SoCs have two regular DMA controllers with 6
channels each. They also have an extra DMA controller named MDMA
with only 2 channels, that only supports memcpy operations, and one
named BDMA with only 3 channels, that is mostly used for transfers
between memories and the BCH controller.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20211206174259.68133-5-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The JZ4760 SoC has a hardware problem with chan0 not enabling properly
if it's enabled before chan1, after a reset (works fine afterwards).
This is worked around in the probe function by just enabling then
disabling chan1.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20211206174259.68133-4-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add SYSFW defined rchan_oes_offset number for J721S2 SoC in soc data.
Signed-off-by: Aswath Govindraju <a-govindraju@ti.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Link: https://lore.kernel.org/r/20211119132315.15901-2-a-govindraju@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Make use of the struct_size() helper instead of an open-coded version, in
order to avoid any potential type mistakes or integer overflows that, in
the worst scenario, could lead to heap overflows.
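A minimal sketch of the struct_size() pattern; the structure below is a
hypothetical stand-in for the driver's descriptor with a trailing flexible
array:

  #include <linux/overflow.h>
  #include <linux/slab.h>

  struct foo_desc {
          unsigned int nents;
          struct foo_seg {
                  u32 addr;
                  u32 len;
          } seg[];
  };

  static struct foo_desc *foo_alloc_desc(unsigned int nents)
  {
          /* was: kzalloc(sizeof(*d) + nents * sizeof(d->seg[0]), GFP_KERNEL) */
          struct foo_desc *d = kzalloc(struct_size(d, seg, nents), GFP_KERNEL);

          if (d)
                  d->nents = nents;
          return d;
  }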
Link: https://github.com/KSPP/linux/issues/160
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20211208001013.GA62330@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Handle errors when trying to map the IRQ for the DMA channels.
The main motivation here is to be able to handle probe deferral. E.g. when
using DT overlays it is possible that the DMA controller is probed before
the interrupt controller, depending on the order in the DT.
In order to support this, switch from irq_of_parse_and_map() to
of_irq_get(), which internally does the same but returns -EPROBE_DEFER
when the interrupt controller is not yet available.
As a result, other errors, such as an invalid IRQ specification or a missing
IRQ, are also properly handled.
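A minimal sketch of the switch, with an illustrative helper shape (the real
xilinx_dma code differs in detail):

  #include <linux/of.h>
  #include <linux/of_irq.h>

  static int chan_parse_irq(struct device_node *node, int index)
  {
          /* was: irq_of_parse_and_map(node, index), which returns 0 on any failure */
          int irq = of_irq_get(node, index);

          if (irq < 0)
                  return irq;     /* propagates -EPROBE_DEFER and real errors */
          if (!irq)
                  return -EINVAL; /* no mapping found */
          return irq;
  }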
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/20211208114212.234130-1-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The display driver wants to pass a custom flag to the DMA engine driver,
which it started doing by using the slave_id field that was traditionally
used for a different purpose.
As there is no longer a correct use for the slave_id field, it should
really be removed, and the remaining users changed over to something
different.
The new mechanism for passing nonstandard settings is using the
.peripheral_config field, so use that to pass a newly defined structure
here, making it clear that this will not work in portable drivers.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-10-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The slave_id was previously used to pick one DMA slave instead of another,
but this is now done through the DMA descriptors in device tree.
For the qcom_adm driver, the configuration is documented in the DT
binding to contain a tuple of device identifier and a "crci" field,
but the implementation ends up using only a single cell for identifying
the slave, with the crci getting passed in nonstandard properties of
the device, and passed through the dma driver using the old slave_id
field. Part of the problem apparently is that the nand driver ends up
using only a single DMA request ID, but requires distinct values for
"crci" depending on the type of transfer.
Change both the dmaengine driver and the two slave drivers to allow
the documented binding to work in addition to the ad-hoc passing
of crci values. In order to no longer abuse the slave_id field, pass
the data using the "peripheral_config" mechanism instead.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-9-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It appears that the code that reads the slave_id from the channel config
was copied incorrectly from other drivers. Nothing ever sets this field
on platforms that use this driver, so remove the reference.
Reviewed-by: Baolin Wang <baolin.wang7@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-8-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The last driver referencing the slave_id on Marvell PXA and MMP platforms
was the SPI driver, but it stopped doing so a long time ago, so the
TODO from the earlier patch can now be removed.
Fixes: b729bf3453 ("spi/pxa2xx: Don't use slave_id of dma_slave_config")
Fixes: 13b3006b8e ("dma: mmp_pdma: add filter function")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-7-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The slave device is picked through either devicetree or a filter
function, and any remaining out-of-tree drivers would have warned
about this usage since 2015.
Stop interpreting the field finally so it can be removed from
the interface.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-6-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Nothing sets the slave_id field any more, so stop accessing
it to allow the removal of this field.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-11-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no reason to walk the MSI descriptors to retrieve the interrupt
number for a device. Use msi_get_virq() instead.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Sinan Kaya <okaya@kernel.org>
Acked-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20211210221815.329792721@linutronix.de
Storing a pointer to the MSI descriptor just to keep track of the Linux
interrupt number is daft. Use msi_get_virq() instead.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20211210221814.970099984@linutronix.de
Use the common msi_index member and get rid of the pointless wrapper struct.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Nishanth Menon <nm@ti.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20211210221814.413638645@linutronix.de
The only unconditional part of MSI data in struct device is the irqdomain
pointer. Everything else can be allocated on demand. Create a data
structure and move the irqdomain pointer into it. The other MSI specific
parts are going to be removed from struct device in later steps.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Tested-by: Nishanth Menon <nm@ti.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20211210221813.617178827@linutronix.de
Ming reported that with the abort path of the descriptor submission, there
can be a window where a completed descriptor is missed by the irq
completion thread:
CPU A                                   CPU B
Submit (successful)
Submit (fail)
                                        irq_process_work_list() // empty
llist_abort_desc()
// remove all descs from pending list
                                        irq_process_pending_llist() // empty
                                        exit idxd_wq_thread() with no processing
Add opportunistic descriptor completion in the abort path in order to
remove the missed completion.
Fixes: 6b4b87f2c3 ("dmaengine: idxd: fix submission race window")
Reported-by: Ming Li <ming4.li@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163898288714.443911.16084982766671976640.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Smatch reports the warnings below [1] about dereferencing rm_res when it can
potentially be ERR_PTR(). This is possible when the entire range is
allocated to Linux.
Fix this case by making sure there is no dereference of rm_res when it is
ERR_PTR() (a sketch follows the warning list below).
[1]:
drivers/dma/ti/k3-udma.c:4524 udma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4537 udma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4681 bcdma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4696 bcdma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4711 bcdma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4848 pktdma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4861 pktdma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
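A hedged sketch of the guard, following the k3-udma pattern but not quoting
the patch verbatim:

  rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
  if (IS_ERR(rm_res)) {
          /* the entire range is allocated to Linux: nothing is reserved */
          bitmap_zero(ud->tchan_map, ud->tchan_cnt);
  } else {
          bitmap_fill(ud->tchan_map, ud->tchan_cnt);
          for (i = 0; i < rm_res->sets; i++)
                  udma_mark_resource_ranges(ud, ud->tchan_map,
                                            &rm_res->desc[i], "tchan");
  }
  /* any later use of rm_res->sets must sit behind the same IS_ERR() check */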
Reported-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Link: https://lore.kernel.org/r/20211209180957.29036-1-vigneshr@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The variable used for returning status in the
`ppc440spe_adma_dma2rxor_prep_src' function is never changed, and the
function just needs to return 0. Thus, `rval' can be removed and 0
returned directly from `ppc440spe_adma_dma2rxor_prep_src'.
Signed-off-by: Jason Wang <wangborong@cdjrlc.com>
Link: https://lore.kernel.org/r/20211114060856.239314-1-wangborong@cdjrlc.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Dan reports that smatch has found that idxd_wq_quiesce() is being called
while holding idxd->dev_lock. idxd_wq_quiesce() calls wait_for_completion()
and can therefore sleep. Move the call outside of the spinlock, as it does
not need the device lock.
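A minimal sketch of the reordering; the surrounding halt handling is elided
and only the ordering matters:

  /* may sleep (wait_for_completion()), so call it before taking the lock */
  idxd_wq_quiesce(wq);

  spin_lock(&idxd->dev_lock);
  /* ... non-sleeping halt-state handling ... */
  spin_unlock(&idxd->dev_lock);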
Fixes: 5b0c68c473 ("dmaengine: idxd: support reporting of halt interrupt")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163716858508.1721911.15051495873516709923.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The commit in the Fixes: tag changed the logic of the code, and now it
is likely that the probe will return an early success (0) even if it has not
completely executed.
This would lead to a crash or similar issue later on, when the code
accesses some never-allocated resources.
Change the '!err' into 'err' when checking whether
'dma_set_mask_and_coherent()' has failed.
While at it, simplify the code and remove the "can't succeed" code related
to the 32-bit DMA mask.
As stated in [1], 'dma_set_mask_and_coherent(DMA_BIT_MASK(64))' can't fail
if 'dev->dma_mask' is non-NULL. And if it is NULL, it would fail for the
same reason when tried with DMA_BIT_MASK(32).
[1]: https://lkml.org/lkml/2021/6/7/398
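A minimal sketch of the corrected and simplified check (the helper name is
illustrative):

  #include <linux/dma-mapping.h>
  #include <linux/pci.h>

  static int dw_edma_pcie_set_masks(struct pci_dev *pdev)
  {
          int err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));

          if (err)        /* the broken conversion tested '!err' here */
                  return err;

          /* no DMA_BIT_MASK(32) fallback: per [1] it would fail for the same reason */
          return 0;
  }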
Fixes: ecb8c88bd3 ("dmaengine: dw-edma-pcie: switch from 'pci_' to 'dma_' API")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/r/935fbb40ae930c5fe87482a41dcb73abf2257973.1636492127.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This new CDMA binding for device_prep_dma_memcpy_sg was partially borrowed
from the xlnx kernel tree, and expanded with extended address space support
when linking descriptor segments and a check for an incorrect zero transfer
size.
Signed-off-by: Adrian Larumbe <adrianml@alumnos.upm.es>
Link: https://lore.kernel.org/r/20211101180825.241048-4-adrianml@alumnos.upm.es
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This is the old DMA_SG interface that was removed in commit
c678fa6634 ("dmaengine: remove DMA_SG as it is dead code in kernel"). It
has been renamed to DMA_MEMCPY_SG to better match the MEMSET and MEMSET_SG
naming convention.
It should only be used for mem2mem copies, either main system memory or
CPU-addressable device memory (like video memory on a PCI graphics card).
Bringing back this interface was prompted by the need to use the Xilinx
CDMA device for mem2mem SG transfers.
Signed-off-by: Adrian Larumbe <adrianml@alumnos.upm.es>
Link: https://lore.kernel.org/r/20211101180825.241048-3-adrianml@alumnos.upm.es
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Coverity complains of an uninitialized variable:
5. uninit_use_in_call: Using uninitialized value config.dst_per when calling axi_chan_config_write. [show details]
6. uninit_use_in_call: Using uninitialized value config.hs_sel_src when calling axi_chan_config_write. [show details]
CID 121164 (#1-3 of 3): Uninitialized scalar variable (UNINIT)
7. uninit_use_in_call: Using uninitialized value config.src_per when calling axi_chan_config_write. [show details]
418 axi_chan_config_write(chan, &config);
Fix this by initializing the structure to 0, which should at least be benign
in axi_chan_config_write(). Also fix what looks like a cut-and-paste error
when initializing config.hs_sel_dst.
Fixes: 824351668a ("dmaengine: dw-axi-dmac: support DMAX_NUM_CHANNELS > 8")
Cc: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
Link: https://lore.kernel.org/r/20211025181656.31658-1-tim.gardner@canonical.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
"Interrupt handle revoked" is an event that happens when the driver is
running on a guest kernel and the VM is migrated to a new machine.
The device will trigger an interrupt that signals to the guest driver
that the interrupt handles need to be replaced.
The misc irq thread function calls a helper function to handle the
event. The function uses the WQ percpu_ref to quiesce the kernel
submissions. It then replaces the interrupt handles by issuing the request
interrupt handle command for each I/O MSIX vector. Once the handle is
updated, the driver will unblock the submission path to allow new
submissions.
The submitter will attempt to acquire a percpu_ref before submission. When
the request fails, it will wait on the wq_resurrect 'completion'.
The driver does anticipate the possibility of descriptors being submitted
before the WQ percpu_ref is killed. If a descriptor has already been
submitted, it will complete with an incorrect interrupt handle status. Such a
descriptor will be re-submitted with the new interrupt handle on the
completion path. For descriptors with incorrect interrupt handles, no
completion interrupt is triggered.
At the completion of the interrupt handle refresh, the handling function
calls idxd_int_handle_refresh_drain() to issue drain descriptors to
each of the wqs with an associated interrupt handle. The drain descriptors
have the interrupt request set but no completion record. This ensures that
all descriptors with an incorrect interrupt completion handle get drained and
a completion interrupt is triggered for the guest driver to process them.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Co-Developed-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528420189.3925689.18212568593220415551.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Handle a descriptor that has been marked with invalid interrupt handle
error in status. Create a work item that will resubmit the descriptor. This
typically happens when the driver has handled the revoke interrupt handle
event and has a new interrupt handle.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528419601.3925689.4166517602890523193.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add a locked version of the idxd_quiesce() call so that quiesce can take
the lock in situations where it is not already held by the caller.
In the driver probe/remove path, the lock is already held, so the raw
version can be called without additional locking.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528418980.3925689.5841907054957931211.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The helper is called at the completion of the interrupt handle refresh
event. It issues drain descriptors to each of the wqs with an associated
interrupt handle. The drain descriptors have the interrupt request set but
no completion record. This ensures that all descriptors with an incorrect
interrupt completion handle get drained and a completion interrupt is
triggered for the guest driver to process them.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528418315.3925689.7944718440052849626.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In preparation for supporting the interrupt handle revoke event, move the
interrupt handle assignment to right before the descriptor is submitted.
This allows the interrupt handle revoke logic to assign the latest
interrupt handle at submission time.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528417767.3925689.7730411152122952808.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Attach int_handle to irq_entry. This removes the separate management of int
handles and the confusion of iterating through int handles that are off
by one.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528417065.3925689.11505755433684476288.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Refactor the completion function to allow skipping of descriptor freeing on
the submission failure path. This completely removes descriptor freeing
from the submit failure path and leaves the responsibility with the caller.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528416222.3925689.12859769271667814762.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Updates:
- Another pile of idxd updates
- pm routines cleanup for at_xdmac driver
- Correct handling of callback_result for few drivers
- zynqmp_dma driver updates and descriptor management refinement
- Hardware handshaking support for dw-axi-dmac
- Support for remotely powered controllers in Qcom bam dma
- tegra driver updates
Merge tag 'dmaengine-5.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine
Pull dmaengine updates from Vinod Koul:
"A bunch of driver updates, no new driver or controller support this
time though:
- Another pile of idxd updates
- pm routines cleanup for at_xdmac driver
- Correct handling of callback_result for few drivers
- zynqmp_dma driver updates and descriptor management refinement
- Hardware handshaking support for dw-axi-dmac
- Support for remotely powered controllers in Qcom bam dma
- tegra driver updates"
* tag 'dmaengine-5.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (69 commits)
dmaengine: ti: k3-udma: Set r/tchan or rflow to NULL if request fail
dmaengine: ti: k3-udma: Set bchan to NULL if a channel request fail
dmaengine: stm32-dma: avoid 64-bit division in stm32_dma_get_max_width
dmaengine: fsl-edma: support edma memcpy
dmaengine: idxd: fix resource leak on dmaengine driver disable
dmaengine: idxd: cleanup completion record allocation
dmaengine: zynqmp_dma: Correctly handle descriptor callbacks
dmaengine: xilinx_dma: Correctly handle cyclic descriptor callbacks
dmaengine: altera-msgdma: Correctly handle descriptor callbacks
dmaengine: at_xdmac: fix compilation warning
dmaengine: dw-axi-dmac: Simplify assignment in dma_chan_pause()
dmaengine: qcom: bam_dma: Add "powered remotely" mode
dt-bindings: dmaengine: bam_dma: Add "powered remotely" mode
dmaengine: sa11x0: Mark PM functions as __maybe_unused
dmaengine: switch from 'pci_' to 'dma_' API
dmaengine: ioat: switch from 'pci_' to 'dma_' API
dmaengine: hsu: switch from 'pci_' to 'dma_' API
dmaengine: hisi_dma: switch from 'pci_' to 'dma_' API
dmaengine: dw: switch from 'pci_' to 'dma_' API
dmaengine: dw-edma-pcie: switch from 'pci_' to 'dma_' API
...
udma_get_*() checks whether rchan/tchan/rflow is already allocated by
checking whether it has a non-NULL value. For the error cases,
rchan/tchan/rflow holds an error value, and udma_get_*() considers this as
already allocated (PASS) since the error value is non-NULL. This results in a
NULL pointer dereference error while dereferencing rchan/tchan/rflow.
Reset the value of rchan/tchan/rflow to NULL if a channel request fails.
CC: stable@vger.kernel.org
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Link: https://lore.kernel.org/r/20211031032411.27235-3-kishon@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
bcdma_get_*() checks whether bchan is already allocated by checking whether
it has a non-NULL value. For the error cases, bchan holds an error value,
and bcdma_get_*() considers this as already allocated (PASS) since the
error value is non-NULL. This results in a NULL pointer dereference
error while dereferencing bchan.
Reset the value of bchan to NULL if a channel request fails.
CC: stable@vger.kernel.org
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Link: https://lore.kernel.org/r/20211031032411.27235-2-kishon@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Using the % operator on a 64-bit variable is expensive and can
cause a link failure:
arm-linux-gnueabi-ld: drivers/dma/stm32-dma.o: in function `stm32_dma_get_max_width':
stm32-dma.c:(.text+0x170): undefined reference to `__aeabi_uldivmod'
arm-linux-gnueabi-ld: drivers/dma/stm32-dma.o: in function `stm32_dma_set_xfer_param':
stm32-dma.c:(.text+0x1cd4): undefined reference to `__aeabi_uldivmod'
As we just want to check the alignment in stm32_dma_get_max_width(), there
is no need for a full division; a simple mask is a faster replacement.
Similarly, in stm32_dma_set_xfer_param(), change this to only allow burst
transfers if the address is a multiple of the length;
stm32_dma_get_best_burst() just after will take buf_len into account to fix
the burst in case of misalignment.
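A minimal sketch of the mask-based alignment test (the helper name is
illustrative; max_width is a power of two):

  #include <linux/types.h>

  static bool stm32_dma_is_aligned(dma_addr_t addr, u32 max_width)
  {
          /* replaces the costly "addr % max_width" on a 64-bit dma_addr_t */
          return (addr & (max_width - 1)) == 0;
  }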
Fixes: b20fd5fa31 ("dmaengine: stm32-dma: fix stm32_dma_get_max_width")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20211103153312.41483-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add memcpy support to edma. The edma can transfer data by software
trigger, so it can be used for memory copy. Enable MEMCPY for the edma
driver; it can be tested directly with dmatest.
Signed-off-by: Joy Zou <joy.zou@nxp.com>
Link: https://lore.kernel.org/r/20211026090025.2777292-1-joy.zou@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wq resources need to be released before the kernel type is reset by
__drv_disable_wq(). With the dma channels unregistered and the wq quiesced,
all the wq resources for dmaengine can be freed. There is no need to wait
until the wq is disabled. With wq->type being reset to "unknown", the driver
was skipping the freeing of the resources.
Fixes: 0cda4f6986 ("dmaengine: idxd: create dmaengine driver for wq 'device'")
Reported-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Tested-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163517405099.3484556.12521975053711345244.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
According to core-api/dma-api-howto.rst, the address from
dma_alloc_coherent is guaranteed to align to the smallest PAGE_SIZE order.
That supersedes the 64B/32B alignment requirement of the completion record.
Remove the alignment adjustment code.
Tested-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163517396063.3484297.7494385225280705372.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DMA clients can provide one of two types of callbacks. For this reason
dmaengine drivers should not directly invoke `callback`, but always use
`dmaengine_desc_callback_invoke()`. This makes sure that both types of
callbacks are handled correctly.
The zynqmp_dma driver currently doesn't do this and only handles the
`callback` type callback. If the client used the `callback_result` type
callback it will not be called.
Fix this by switching to `dmaengine_desc_callback_valid()` and
`dmaengine_desc_callback_invoke()`.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Link: https://lore.kernel.org/r/20211025075428.2094-3-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DMA clients can provide one of two types of callbacks. For this reason
dmaengine drivers should not directly invoke `callback`, but always use
`dmaengine_desc_callback_invoke()`. This makes sure that both types of
callbacks are handled correctly.
The xilinx_dma driver currently doesn't do this for cyclic descriptors and
only handles the `callback` type callback. If the client used the
`callback_result` type callback it will not be called.
Fix this by switching to `dmaengine_desc_callback_valid()` and
`dmaengine_desc_callback_invoke()`.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Link: https://lore.kernel.org/r/20211025075428.2094-2-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DMA clients can provide one of two types of callbacks. For this reason
dmaengine drivers should not directly invoke `callback`, but always use
dmaengine_desc_callback_invoke(). This makes sure that both types of
callbacks are handled correctly.
The altera-msgdma driver currently doesn't do this and only handles the
`callback` type callback. If the client used the `callback_result` type
callback it will not be called.
Fix this by switching to `dmaengine_desc_callback_valid()` and
`dmaengine_desc_callback_invoke()`.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Link: https://lore.kernel.org/r/20211025075428.2094-1-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In some configurations, the BAM DMA controller is set up by a remote
processor and the local processor can simply start making use of it
without setting up the BAM. This is already supported using the
"qcom,controlled-remotely" property.
However, for some reason another possible configuration is that the
remote processor is responsible for powering up the BAM, but we are
still responsible for initializing it (e.g. resetting it etc).
This configuration is quite challenging to handle properly because
the power control is handled through separate channels
(e.g. device-specific SMSM interrupts / smem-states). Great care
must be taken to ensure the BAM registers are not accessed while
the BAM is powered off since this results in a bus stall.
Attempt to support this configuration with minimal device-specific
code in the bam_dma driver by tracking the number of requested
channels. Consumers of DMA channels are responsible to only request
DMA channels when the BAM was powered on by the remote processor,
and to release them before the BAM is powered off.
When the first channel is requested the BAM is initialized (reset)
and it is also put into reset when the last channel was released.
Signed-off-by: Stephan Gerhold <stephan@gerhold.net>
Reviewed-by: Bhupesh Sharma <bhupesh.sharma@linaro.org>
Link: https://lore.kernel.org/r/20211018102421.19848-3-stephan@gerhold.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Without CONFIG_PM_SLEEP, the runtime suspend/resume functions
are unused, producing a warning:
../drivers/dma/sa11x0-dma.c:1042:12: error: 'sa11x0_dma_resume' defined but not used
../drivers/dma/sa11x0-dma.c:1004:12: error: 'sa11x0_dma_suspend' defined but not used
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Link: https://lore.kernel.org/r/20211026020508.550-1-caihuoqing@baidu.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(),
and use dma_set_mask_and_coherent() for both.
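A minimal sketch of the conversion these patches apply (pdev stands for the
driver's struct pci_dev; the mask width depends on the device):

  #include <linux/dma-mapping.h>
  #include <linux/pci.h>

  static int set_dma_masks(struct pci_dev *pdev)
  {
          /* was: pci_set_dma_mask() followed by pci_set_consistent_dma_mask() */
          return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
  }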
Signed-off-by: Qing Wang <wangqing@vivo.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/1633663733-47199-7-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(),
and use dma_set_mask_and_coherent() for both.
Signed-off-by: Qing Wang <wangqing@vivo.com>
Link: https://lore.kernel.org/r/1633663733-47199-3-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(),
and use dma_set_mask_and_coherent() for both.
Signed-off-by: Qing Wang <wangqing@vivo.com>
Link: https://lore.kernel.org/r/1633663733-47199-4-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(),
and use dma_set_mask_and_coherent() for both.
Signed-off-by: Qing Wang <wangqing@vivo.com>
Link: https://lore.kernel.org/r/1633663733-47199-5-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(),
and use dma_set_mask_and_coherent() for both.
Signed-off-by: Qing Wang <wangqing@vivo.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Link: https://lore.kernel.org/r/1633663733-47199-6-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(),
and use dma_set_mask_and_coherent() for both.
Signed-off-by: Wang Qing <wangqing@vivo.com>
Link: https://lore.kernel.org/r/1633663733-47199-2-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Don't populate the read-only array ds_lut on the stack but instead make it
static. This also makes the object code smaller by 163 bytes:
Before:
text data bss dec hex filename
23508 4796 0 28304 6e90 ./drivers/dma/sh/rz-dmac.o
After:
text data bss dec hex filename
23281 4860 0 28141 6ded ./drivers/dma/sh/rz-dmac.o
(gcc version 11.2.0)
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Link: https://lore.kernel.org/r/20210915112038.12407-1-colin.king@canonical.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The issue happens in an error handling path. If
of_dma_controller_register() fails, the function simply prints error
messages and returns an error code, without decrementing the reference
count of pdev->device incremented earlier by
dma_async_device_register(), which may result in refcount leaks.
Fix it by invoking dma_async_device_unregister() before returning the
error code.
Signed-off-by: Xin Xiong <xiongx18@fudan.edu.cn>
Signed-off-by: Xiyu Yang <xiyuyang19@fudan.edu.cn>
Signed-off-by: Xin Tan <tanxin.ctf@gmail.com>
Link: https://lore.kernel.org/r/20210911070533.3114-1-xiongx18@fudan.edu.cn
Signed-off-by: Vinod Koul <vkoul@kernel.org>
As noted in the "Deprecated Interfaces, Language Features, Attributes,
and Conventions" documentation [1], size calculations (especially
multiplication) should not be performed in memory allocator (or similar)
function arguments due to the risk of them overflowing. This could lead
to values wrapping around and a smaller allocation being made than the
caller was expecting. Using those allocations could lead to linear
overflows of heap memory and other misbehaviors.
So, use the purpose specific kcalloc() function instead of the argument
size * count in the kzalloc() function.
[1] https://www.kernel.org/doc/html/v5.14/process/deprecated.html#open-coded-arithmetic-in-allocator-arguments
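A minimal sketch of the pattern (element type and count are placeholders):

  #include <linux/slab.h>

  static u32 *alloc_table(unsigned int count)
  {
          /* was: kzalloc(count * sizeof(u32), GFP_KERNEL) */
          return kcalloc(count, sizeof(u32), GFP_KERNEL);
  }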
Signed-off-by: Len Baker <len.baker@gmx.com>
Link: https://lore.kernel.org/r/20210904145813.5161-1-len.baker@gmx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Modify the prototype from xilinx_dma_tx_descriptor to
xilinx_dma_alloc_tx_descriptor, and xilinx_dma_channel_set_config
to xilinx_vdma_channel_set_config, in the API descriptions to
fix the kernel-doc warnings below.
drivers/dma/xilinx/xilinx_dma.c:800: warning: expecting
prototype for xilinx_dma_tx_descriptor(). Prototype was
for xilinx_dma_alloc_tx_descriptor() instead.
drivers/dma/xilinx/xilinx_dma.c:2471: warning: expecting
prototype for xilinx_dma_channel_set_config(). Prototype
was for xilinx_vdma_channel_set_config() instead.
Signed-off-by: Shravya Kumbham <shravya.kumbham@xilinx.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1631525316-2323-1-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Use the helper macro SET_NOIRQ_SYSTEM_SLEEP_PM_OPS() instead of the
verbose ".suspend_noirq/.resume_noirq/.freeze_noirq/.thaw_noirq/
.poweroff_noirq/.restore_noirq" assignments; the macro makes the code a
little clearer and more concise.
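A minimal sketch of the macro in use; the callback and ops names are
illustrative assumptions:

  #include <linux/pm.h>

  static int foo_dma_suspend_noirq(struct device *dev);
  static int foo_dma_resume_noirq(struct device *dev);

  static const struct dev_pm_ops foo_dma_pm_ops = {
          /* fills in all six *_noirq callbacks when CONFIG_PM_SLEEP is set */
          SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(foo_dma_suspend_noirq,
                                        foo_dma_resume_noirq)
  };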
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Link: https://lore.kernel.org/r/20210828090117.1814-1-caihuoqing@baidu.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Device reset clears the MSIXPERM table and the device registers. Re-program
the MSIXPERM table and re-enable the error interrupts post reset.
Fixes: 745e92a6d8 ("dmaengine: idxd: idxd: move remove() bits for idxd 'struct device' to device.c")
Reported-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163054188513.2853562.12077053294595278181.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix following panic on system halt:
Requesting system halt
[ 10.600000] spi spi0.1: spi_device 0.1 cleanup
[ 10.630000] fsl_edma_chan_mux() fsl_chan->edma->n_chans 64 dmamux_nr 0
[ 10.630000] *** ZERO DIVIDE *** FORMAT=4
[ 10.630000] Current process id is 38
[ 10.630000] BAD KERNEL TRAP: 00000000
[ 10.630000] PC: [<402f09ba>] fsl_edma_chan_mux+0x7c/0x12e
...
Some architectures, such as mcf5441x (ColdFire), may not have a dmamux, so
dmamux_nr is set to 0. Handle this case.
Signed-off-by: Angelo Dureghello <angelo.dureghello@timesys.com>
Link: https://lore.kernel.org/r/20210901211610.662077-1-angelo.dureghello@timesys.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Use list_move_tail() instead of list_del() + list_add_tail().
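The transformation as a one-line sketch (desc and flist are placeholders):

  /* was: list_del(&desc->list); list_add_tail(&desc->list, &flist); */
  list_move_tail(&desc->list, &flist);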
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Bixuan Cui <cuibixuan@huawei.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20210908092826.67765-1-cuibixuan@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The ptdma driver has added debugfs support, but this fails to build
when debugfs is disabled:
drivers/dma/ptdma/ptdma-debugfs.c: In function 'ptdma_debugfs_setup':
drivers/dma/ptdma/ptdma-debugfs.c:93:54: error: 'struct dma_device' has no member named 'dbg_dev_root'
93 | debugfs_create_file("info", 0400, pt->dma_dev.dbg_dev_root, pt,
| ^
drivers/dma/ptdma/ptdma-debugfs.c:96:55: error: 'struct dma_device' has no member named 'dbg_dev_root'
96 | debugfs_create_file("stats", 0400, pt->dma_dev.dbg_dev_root, pt,
| ^
drivers/dma/ptdma/ptdma-debugfs.c:102:52: error: 'struct dma_device' has no member named 'dbg_dev_root'
102 | debugfs_create_dir("q", pt->dma_dev.dbg_dev_root);
| ^
Remove the #ifdef in the header, as this only saves a few bytes,
but would require ugly #ifdefs in each driver using it.
Simplify the other user while we're at it.
Fixes: e2fb2e2a33 ("dmaengine: ptdma: Add debugfs entries for PTDMA")
Fixes: 26cf132de6 ("dmaengine: Create debug directories for DMA devices")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Link: https://lore.kernel.org/r/20210920122017.205975-1-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since pm_runtime_put is done when tegra_adma_probe is successful, we
cannot do pm_runtime_put_sync again in tegra_adma_remove.
Fix this by removing the pm_runtime_put_sync in tegra_adma_remove.
Signed-off-by: Dongliang Mu <mudongliangabcd@gmail.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20211021031432.3466261-1-mudongliangabcd@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The previous commit 059e969c2a ("dmaengine: tegra210-adma: Using
pm_runtime_resume_and_get to replace open coding") forgot to replace
pm_runtime_get_sync in tegra_adma_probe, but did remove the
pm_runtime_put_noidle.
Fix this by continuing to replace pm_runtime_get_sync with
pm_runtime_resume_and_get in tegra_adma_probe.
Fixes: 059e969c2a ("dmaengine: tegra210-adma: Using pm_runtime_resume_and_get to replace open coding")
Signed-off-by: Dongliang Mu <mudongliangabcd@gmail.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20211021030538.3465287-1-mudongliangabcd@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In rcar_dmac_probe, if pm_runtime_resume_and_get fails, the driver forgets
to disable runtime PM. Also, of_dma_controller_free should only be invoked
after of_dma_controller_register has succeeded.
Fix this by refactoring the error handling code.
Signed-off-by: Dongliang Mu <mudongliangabcd@gmail.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20211020143546.3436205-1-mudongliangabcd@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Before the `callback_result` callback was introduced, drivers coded their
invocation of the callback in a way similar to:
if (cb->callback) {
spin_unlock(&dma->lock);
cb->callback(cb->callback_param);
spin_lock(&dma->lock);
}
With the introduction of `callback_result`, two helpers were introduced to
transparently handle both types of callbacks, and drivers were updated to
look like this:
if (dmaengine_desc_callback_valid(cb)) {
spin_unlock(&dma->lock);
dmaengine_desc_callback_invoke(cb, ...);
spin_lock(&dma->lock);
}
dmaengine_desc_callback_invoke() correctly handles both `callback_result`
and `callback`. But we forgot to update the dmaengine_desc_callback_valid()
function to check for `callback_result`. As a result DMA descriptors that
use the `callback_result` rather than `callback` don't have their callback
invoked by drivers that follow the pattern above.
Fix this by checking for both `callback` and `callback_result` in
dmaengine_desc_callback_valid().
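Sketched from the description above, the fixed helper in
drivers/dma/dmaengine.h ends up checking both fields:

  static inline bool
  dmaengine_desc_callback_valid(struct dmaengine_desc_callback *cb)
  {
          /* either style of callback makes the descriptor callback valid */
          return cb->callback || cb->callback_result;
  }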
Fixes: f067025bc6 ("dmaengine: add support to provide error result from a DMA transation")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20211023134101.28042-1-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>