platform_get_resource() may fail and return NULL, so we had better
check its return value to avoid a NULL pointer dereference
a bit later in the code.
This is detected by the following Coccinelle semantic patch:
@@
expression pdev, res, n, t, e, e1, e2;
@@
res = platform_get_resource(pdev, t, n);
+ if (!res)
+ return -EINVAL;
... when != res == NULL
e = devm_ioremap_nocache(e1, res->start, e2);
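For illustration, a minimal sketch of the resulting probe-time pattern (the
identifiers and the -EINVAL error code are illustrative, not taken from the
patched driver):

  #include <linux/io.h>
  #include <linux/platform_device.h>

  static int example_probe(struct platform_device *pdev)
  {
          struct resource *res;
          void __iomem *base;

          res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
          if (!res)
                  return -EINVAL;         /* bail out before dereferencing res */

          base = devm_ioremap_nocache(&pdev->dev, res->start,
                                      resource_size(res));
          if (!base)
                  return -ENOMEM;

          return 0;
  }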
Fixes: 9b3b8171f7f4 ("dmaengine: sprd: Add Spreadtrum DMA driver")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
With each bus implementing its own DMA configuration callback, there is no
need for the bus to explicitly set the force_dma flag. Modify the
of_dma_configure() function to accept an input parameter which specifies
whether implicit DMA configuration is required when it is not described by
the firmware.
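For illustration, assuming the updated of_dma_configure() takes the flag as a
third argument, a bus's DMA-configure callback would look roughly like this
(a sketch; the callback name is illustrative):

  #include <linux/of_device.h>

  static int example_bus_dma_configure(struct device *dev)
  {
          /* Request implicit DMA configuration even when the firmware
           * does not describe it. */
          return of_dma_configure(dev, dev->of_node, true);
  }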
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com> # PCI parts
Reviewed-by: Rob Herring <robh@kernel.org>
[hch: tweaked the changelog a bit]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Do DMAFLUSHP _before_ the first DMAWFP to ensure the controller
and peripheral agree about the DMA request state before the first
transfer. Add support for burst transfers to/from peripherals. In the new
scheme, the controller does as many burst transfers as it can, then
transfers the remainder with either single transfers (for peripherals)
or a reduced-size burst (for memory-to-memory transfers).
Signed-off-by: Frank Mori Hess <fmh6jj@gmail.com>
Tested-by: Frank Mori Hess <fmh6jj@gmail.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Request the IRQ with the IRQF_SHARED flag to enable setups with multiple
instances of the core sharing a single IRQ line.
This works out since the IRQ handler already checks if there is
an actual IRQ pending and returns IRQ_NONE otherwise.
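For illustration, a minimal sketch of the shared-IRQ pattern (struct
example_dev and example_irq_pending() are illustrative placeholders):

  #include <linux/interrupt.h>

  static irqreturn_t example_irq_handler(int irq, void *data)
  {
          struct example_dev *edev = data;

          if (!example_irq_pending(edev))
                  return IRQ_NONE;        /* not ours; let other instances check */

          /* ... handle the interrupt ... */
          return IRQ_HANDLED;
  }

  /* In probe: */
  ret = devm_request_irq(dev, irq, example_irq_handler, IRQF_SHARED,
                         dev_name(dev), edev);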
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Moritz Fischer <mdf@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Trivial fix to a spelling mistake in dev_err error message text; also make
'channel' plural.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch moves the Spreadtrum DMA request mode and interrupt type
definitions into one header file for users to configure.
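For illustration, such a shared header would expose something along these
lines (a sketch; the actual enumerator names in the Spreadtrum header may
differ):

  enum sprd_dma_req_mode {
          SPRD_DMA_FRAG_REQ,
          SPRD_DMA_BLK_REQ,
          SPRD_DMA_TRANS_REQ,
          SPRD_DMA_LIST_REQ,
  };

  enum sprd_dma_int_type {
          SPRD_DMA_NO_INT,
          SPRD_DMA_FRAG_INT,
          SPRD_DMA_BLK_INT,
          SPRD_DMA_TRANS_INT,
          SPRD_DMA_LIST_INT,
  };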
Signed-off-by: Eric Long <eric.long@spreadtrum.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Define the DMA data width type to make code more readable.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Define the DMA transfer step type to make code more readable.
Signed-off-by: Eric Long <eric.long@spreadtrum.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since commit 9b5ba0df4ea4f940 ("ARM: shmobile: Introduce ARCH_RENESAS"),
CONFIG_ARCH_RENESAS is a more appropriate platform check than the legacy
CONFIG_ARCH_SHMOBILE, hence use the former.
Renesas SuperH SH-Mobile SoCs are still covered by the CONFIG_CPU_SH4
check, just like before support for Renesas ARM SoCs was added.
Instead of blindly changing all the #ifdefs, switch the main code block
in sh_dmae_probe() to IS_ENABLED(), as this allows removing all the
remaining #ifdefs.
This will allow dropping ARCH_SHMOBILE on ARM in the near future.
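A minimal sketch of the IS_ENABLED() pattern (the condition and helper shown
are illustrative, not the exact code in sh_dmae_probe()):

  /* The block is still compiled and type-checked, but the compiler drops
   * it entirely when neither option is enabled, so no #ifdef is needed. */
  if (IS_ENABLED(CONFIG_CPU_SH4) || IS_ENABLED(CONFIG_ARCH_RENESAS)) {
          setup_platform_channels();      /* illustrative helper */
  }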
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Both the buffer Transfer Length (TLEN, if any) and the transfer size have to
be aligned on the burst size (burst beats * bus width).
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
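For illustration, the pattern being replaced versus the direct access
('struct example_priv' is an illustrative name):

  /* Before: bounce through platform_device just to get back to dev. */
  struct example_priv *priv = platform_get_drvdata(to_platform_device(dev));

  /* After: read the driver data from struct device directly. */
  struct example_priv *priv = dev_get_drvdata(dev);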
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There's an ongoing effort to remove VLAs from the kernel
(https://lkml.org/lkml/2018/3/7/621) to eventually turn on -Wvla.
The test already pre-allocates some buffers with kmalloc, so turn
the two VLAs into pre-allocated kmalloc buffers.
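For illustration, the general shape of such a conversion (a sketch; the
buffer name and size are illustrative, not the ones in dmatest):

  #include <linux/slab.h>

  /* Before (VLA on the stack):  u8 scratch[len]; */

  /* After: pre-allocated heap buffer. */
  u8 *scratch = kmalloc(len, GFP_KERNEL);
  if (!scratch)
          return -ENOMEM;
  /* ... use scratch ... */
  kfree(scratch);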
Signed-off-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Merge tag 'dmaengine-4.17-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have couple of new drivers along with updates to drivers:
- new drivers for the DesignWare AXI DMAC and MediaTek High-Speed DMA
controllers
- stm32 dma and qcom bam dma driver updates
- norandom test option for dmatest"
* tag 'dmaengine-4.17-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (30 commits)
dmaengine: stm32-dma: properly mask irq bits
dmaengine: stm32-dma: fix max items per transfer
dmaengine: stm32-dma: fix DMA IRQ status handling
dmaengine: stm32-dma: Improve memory burst management
dmaengine: stm32-dma: fix typo and reported checkpatch warnings
dmaengine: stm32-dma: fix incomplete configuration in cyclic mode
dmaengine: stm32-dma: threshold manages with bitfield feature
dt-bindings: stm32-dma: introduce DMA features bitfield
dt-bindings: rcar-dmac: Document r8a77470 support
dmaengine: rcar-dmac: Fix too early/late system suspend/resume callbacks
dmaengine: dw-axi-dmac: fix spelling mistake: "catched" -> "caught"
dmaengine: edma: Check the memory allocation for the memcpy dma device
dmaengine: at_xdmac: fix rare residue corruption
dmaengine: mediatek: update MAINTAINERS entry with MediaTek DMA driver
dmaengine: mediatek: Add MediaTek High-Speed DMA controller for MT7622 and MT7623 SoC
dt-bindings: dmaengine: Add MediaTek High-Speed DMA controller bindings
dt-bindings: Document the Synopsys DW AXI DMA bindings
dmaengine: Introduce DW AXI DMAC driver
dmaengine: pl330: fix a race condition in case of threaded irqs
dmaengine: imx-sdma: fix pagefault when channel is disabled during interrupt
...
A single register of the controller holds the information for four DMA
channels.
The function stm32_dma_irq_status() doesn't mask the relevant bits after
the shift, so the adjacent channels' status is also reported in the returned
value. Fix this by masking the value before returning it.
Similarly, the function stm32_dma_irq_clear() doesn't mask the input value
before shifting it, so an incorrect input value could disable the
interrupts of adjacent channels. Fix this by masking the input value before
using it.
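For illustration, the general shape of both fixes (a sketch; the register,
mask, and offset names are illustrative, not the driver's actual identifiers):

  /* Status: mask after the shift so only this channel's bits remain. */
  status = (dma_isr >> chan_bit_offset) & CHAN_STATUS_MASK;

  /* Clear: mask the input value before shifting it into the register. */
  writel((flags & CHAN_STATUS_MASK) << chan_bit_offset, base + DMA_IFCR);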
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Having 0 in the item counter register is valid and stands for "no transfer or
transfer ended". Valid transfers therefore span @+0 to @+0xFFFE, which can
lead to unaligned scatter-gather at the boundary. Thus it is safer to round
this value down to the FIFO size (16 bytes).
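For illustration, the cap can be expressed roughly as follows (a sketch; the
constant name is illustrative):

  /* Round the per-transfer item limit down to the 16-byte FIFO size so a
   * transfer never ends on an unaligned boundary. */
  max_items = rounddown(STM32_DMA_MAX_DATA_ITEMS, 16);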
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Update the way the Transfer Complete and Half Transfer Complete status are
acknowledged. Even if HTI is not enabled, its status is set when reading the
registers; the driver has to clear it silently instead of raising an error.
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch improves the memory burst capability by using the best burst size
according to the buffer size transferred from/to memory.
From now on, the memory burst size is not necessarily the same as the
peripheral burst size, and the FIFO threshold is directly managed by this
driver in order to fit the computed memory burst.
Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When in cyclic mode, the configuration is updated after the DMA hardware has
been started (STM32_DMA_SCR_EN), leading to an incomplete configuration of
the SMxAR registers.
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Hugues Fruchet <hugues.fruchet@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
From now on, the DMA features bitfield manages the DMA FIFO threshold.
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If serial console wake-up is enabled ("echo enabled >
/sys/.../ttySC0/power/wakeup"), and any serial input is received while
the system is suspended, serial port input no longer works after system
resume.
Note that:
1) The system can still be woken up using the serial console,
2) Serial port input keeps working if the system is woken up in some
other way (e.g. Wake-on-LAN or gpio-keys), and no serial input was
received while suspended.
To fix this, replace SET_LATE_SYSTEM_SLEEP_PM_OPS() with
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(), as the callbacks installed by the former
run too early in the suspend process and too late in the resume process.
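For illustration, the kind of change involved (a sketch; the suspend/resume
callback names are illustrative):

  static const struct dev_pm_ops example_pm_ops = {
          /* Was: SET_LATE_SYSTEM_SLEEP_PM_OPS(example_suspend, example_resume) */
          SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(example_suspend, example_resume)
  };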
Reported-by: RVC test team via Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Fixes: 1131b0a4af911de5 ("dmaengine: rcar-dmac: Make DMAC reinit during system resume explicit")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Trivial fix to a spelling mistake in dev_err error message text.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the allocation fails then disable the memcpy support.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Despite the efforts made to correctly read the NDA and CUBC registers,
the order in which the registers are read could sometimes lead to an
inconsistent state.
Re-using the timeline from the comments, the following timing of
register reads could lead to reading NDA with value "@desc2" and
CUBC with value "MAX desc1":
 INITD --------                    ------------
               |__________________|
        _______________________  _______________
 NDA         @desc2           \/     @desc3
        _______________________/\_______________
        __________  ___________  _______________
 CUBC        0    \/ MAX desc1 \/   MAX desc2
        __________/\___________/\_______________
            |      |           |        |
 Events:   (1)    (2)         (3)      (4)
(1) check_nda = @desc2
(2) initd = 1
(3) cur_ubc = MAX desc1
(4) cur_nda = @desc2
This is allowed by the condition ((check_nda == cur_nda) && initd),
despite cur_ubc and cur_nda being in the precise state we don't want.
This error leads to incorrect residue computation.
Fix it by inverting the order in which CUBC and INITD are read. This
makes sure that NDA and CUBC are always read together, either _before_
INITD goes to 0 or _after_ it is back at 1.
The case where NDA is read before INITD is at 0 and CUBC is read after
INITD is back at 1 will be rejected by check_nda and cur_nda being
different.
Fixes: 53398f488821 ("dmaengine: at_xdmac: fix residue corruption")
Cc: stable@vger.kernel.org
Signed-off-by: Maxime Jayat <maxime.jayat@mobile-devices.fr>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The MediaTek High-Speed DMA controller (HSDMA) on the MT7622 and MT7623 SoCs
has a single ring dedicated to memory-to-memory transfers, managed through
ring-based descriptors.
Even though there is only one physical ring available inside HSDMA, the
driver can easily be extended to support multiple virtual channels processing
simultaneously by means of the DMA_VIRTUAL_CHANNELS framework.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The bitfield dma_inuse is allocated with a size of dma_requests bits, so a
valid bit address ranges from 0 to (dma_requests - 1).
When find_first_zero_bit() fails, it returns dma_requests as an invalid
address.
Using such an address for the following set_bit() is incorrect and, if
dma_requests is a multiple of BITS_PER_LONG, it will cause a buffer
overflow.
Currently this driver is only used in the DT stm32h743.dtsi, where the safe
value dma_requests=16 does not trigger the buffer overflow.
Fix this by checking the return value of find_first_zero_bit() _before_
using it.
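For illustration, the resulting pattern (a sketch; the structure and field
names are illustrative):

  id = find_first_zero_bit(dmamux->dma_inuse, dmamux->dma_requests);
  if (id == dmamux->dma_requests) {
          /* No free request line; bail out before calling set_bit(). */
          return ERR_PTR(-ENOMEM);
  }
  set_bit(id, dmamux->dma_inuse);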
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the DW AXI DMAC controller.
The DW AXI DMAC is part of the HSDK development board from Synopsys.
In this driver implementation, only DMA_MEMCPY transfers are
supported.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
On the CP110 components present on the Armada 7K/8K SoCs, we need to
explicitly enable the clock for the registers. However, it is not needed for
the AP8xx component, which is why this clock is optional.
With this patch both clocks now have a name, but in order to stay backward
compatible, the name of the first clock is not used. This still allows a
device tree using the old binding to use this clock.
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add a spinlock and an 'enabled' boolean to the channel descriptor, to avoid
using buffer descriptors in interrupt context when sdma_disable_channel()
is called in the meantime.
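For illustration, the kind of guard this adds in the interrupt path (a
sketch; the lock and flag names are illustrative):

  spin_lock_irqsave(&sdmac->lock, flags);
  if (!sdmac->enabled) {
          /* Channel was disabled meanwhile; don't touch its descriptors. */
          spin_unlock_irqrestore(&sdmac->lock, flags);
          return;
  }
  /* ... safely walk the buffer descriptors here ... */
  spin_unlock_irqrestore(&sdmac->lock, flags);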
Signed-off-by: Thierry Bultel <tbultel@pixelsurmer.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes a race condition between a client driver and the rcar-dmac
driver:
- The rcar_dmac_isr_transfer_end() is called.
- The done list appears, and desc.running is the next active list.
- rcar_dmac_chan_get_residue() is called by a client driver before
rcar_dmac_isr_channel_thread() is called.
- The rcar_dmac_chan_get_residue() will not find any descriptors.
- And, the following WARNING happens:
WARN(1, "No descriptor for cookie!");
The sh-sci driver with HSCIF (921,600bps) on R-Car H3 can cause this
situation.
So, this patch checks the done list in rcar_dmac_chan_get_residue()
and returns zero if the done list contains the argument cookie.
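For illustration, the shape of the added check (a sketch; the list and field
names are illustrative):

  list_for_each_entry(desc, &chan->desc.done, node) {
          if (desc->async_tx.cookie == cookie)
                  return 0;       /* transfer already completed, no residue */
  }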
Tested-by: Nguyen Viet Dung <dung.nguyen.aj@renesas.com>
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A remotely controlled BAM instance should not do any power management from
the CPU side, as the CPU cannot reliably tell whether the BAM is busy or not.
Disable power management for such instances.
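For illustration, the shape of such a guard (a sketch; the field name mirrors
the driver's "qcom,controlled-remotely" DT property but is illustrative here):

  if (!bdev->controlled_remotely)
          pm_runtime_enable(dev);         /* manage power only for locally owned BAMs */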
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>