The PXA DMA controller has a DALGN register which allows for
byte-aligned DMA transfers. Use it in case any of the transfer
descriptors is not aligned to a mask of ~0x7.
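Roughly, the check boils down to something like this (names are illustrative, not the driver's actual symbols):

  /* Hypothetical sketch: enable byte-aligned (DALGN) transfers whenever a
   * source/destination address or length is not 8-byte aligned. */
  #define PDMA_ALIGNMENT 0x7

  static bool needs_byte_align(dma_addr_t src, dma_addr_t dst, size_t len)
  {
      return (src | dst | len) & PDMA_ALIGNMENT;
  }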
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DMA_SLAVE capability is currently set twice.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch makes the mmp_pdma controller able to provide DMA resources
in DT environments by providing a DMA xlate function.
of_dma_simple_xlate() isn't used here, because it fails to handle
multiple different DMA engines or several instances of the same
controller. Instead, a private implementation is provided that makes use
of the newly introduced dma_get_slave_channel() call.
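A rough sketch of such a private xlate (the mmp_pdma_device layout is an assumption, error handling is trimmed, and the real binding may encode the DRCMR request line rather than a plain channel index):

  /* Hedged sketch: map a DT dma-spec onto one of this controller's channels
   * using dma_get_slave_channel() instead of of_dma_simple_xlate(). */
  static struct dma_chan *mmp_pdma_dma_xlate(struct of_phandle_args *dma_spec,
                                             struct of_dma *ofdma)
  {
      struct mmp_pdma_device *d = ofdma->of_dma_data; /* assumed layout */
      struct dma_chan *chan;

      list_for_each_entry(chan, &d->device.channels, device_node) {
          /* args[0] is assumed to select the channel here */
          if (chan->chan_id == dma_spec->args[0])
              /* takes a reference, or returns NULL if already in use */
              return dma_get_slave_channel(chan);
      }

      return NULL;
  }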
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
PXA peripherals need to obtain specific DMA request ids which will
eventually be stored in the DRCMR register.
Currently, clients are expected to store that number inside the slave
config block as slave_id, which is unfortunately incompatible with the
way DMA resources are handled in DT environments.
This patch adds a filter function which stores the filter parameter
passed in by of-dma.c into the channel's drcmr register.
For backward compatibility, cfg->slave_id is still used if set to
a non-zero value.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There's no reason for limiting the maximum transfer length to 0x1000.
Take the actual bit mask instead; the PDMA is able to transfer chunks of
up to SZ_8K - 1.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As suggested by Ezequiel García, release the spinlock at the end of the
function only, and use a goto for the control flow.
Just a minor cleanup.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The exact same calculation is done twice, so let's factor it out to a
macro.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
All patches here have been pending on linux-usb
and sitting in linux-next for a while now.
The biggest things in this tag are:
DWC3 learned proper usage of threaded IRQ
handlers and now we spend very little time
in hardirq context.
MUSB now has proper support for BeagleBone and
BeagleBone Black.
Tegra's USB support also got quite a bit of love
and is learning to use PHY layer and generic DT
attributes.
Other than that, the usual pack of cleanups and
non-critical fixes follow.
Signed-off-by: Felipe Balbi <balbi@ti.com>
Merge tag 'usb-for-v3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-next
Felipe writes:
usb: patches for v3.12 merge window
All patches here have been pending on linux-usb
and sitting in linux-next for a while now.
The biggest things in this tag are:
DWC3 learned proper usage of threaded IRQ
handlers and now we spend very little time
in hardirq context.
MUSB now has proper support for BeagleBone and
BeagleBone Black.
Tegra's USB support also got quite a bit of love
and is learning to use PHY layer and generic DT
attributes.
Other than that, the usual pack of cleanups and
non-critical fixes follow.
Signed-off-by: Felipe Balbi <balbi@ti.com>
Conflicts:
drivers/usb/gadget/udc-core.c
drivers/usb/host/ehci-tegra.c
drivers/usb/musb/omap2430.c
drivers/usb/musb/tusb6010.c
This patch adds __pl330_giveback_descs(), which gives back descriptors when
descriptor allocation fails. It is needed to avoid duplicating this logic in
pl330_prep_dma_sg(), which will be added later.
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds PM ops entries to the sirf-dma driver, so that the
driver can support suspend/resume, hibernation and runtime PM.
While suspended, sirf-dma loses all of its registers, so we save them
at suspend time and restore them at resume time for active channels.
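A rough sketch of the wiring (the callback names are assumptions, and the register save/restore bodies are omitted; the real driver may use separate system-sleep and runtime callbacks):

  /* Hedged sketch: hook system sleep and runtime PM into the driver. */
  static const struct dev_pm_ops sirfsoc_dma_pm_ops = {
      SET_RUNTIME_PM_OPS(sirfsoc_dma_runtime_suspend,
                         sirfsoc_dma_runtime_resume, NULL)
      SET_SYSTEM_SLEEP_PM_OPS(sirfsoc_dma_pm_suspend, sirfsoc_dma_pm_resume)
  };

  static struct platform_driver sirfsoc_dma_driver = {
      .probe  = sirfsoc_dma_probe,
      .remove = sirfsoc_dma_remove,
      .driver = {
          .name           = "sirfsoc-dma",          /* assumed name */
          .pm             = &sirfsoc_dma_pm_ops,
          .of_match_table = sirfsoc_dma_match,      /* assumed table */
      },
  };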
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Rongjun Ying <Rongjun.Ying@csr.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use the wrapper function for retrieving the platform data instead of
accessing dev->platform_data directly.
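In other words, a trivial before/after sketch:

  /* before: reaching into struct device directly */
  pdata = dev->platform_data;

  /* after: use the wrapper */
  pdata = dev_get_platdata(dev);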
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
sirfsoc_dma_prep_cyclic() returns a pointer, thus NULL should be
used instead of 0 in order to fix the following sparse warning:
drivers/dma/sirf-dma.c:598:24: warning: Using plain integer as NULL pointer
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
%p is used, thus NULL should be used instead of 0
in order to fix the following sparse warning:
drivers/dma/mv_xor.c:648:9: warning: Using plain integer as NULL pointer
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
mmp_pdma_alloc_descriptor() is used only in this file.
Fix the following sparse warning:
drivers/dma/mmp_pdma.c:359:25: warning: symbol 'mmp_pdma_alloc_descriptor' was not declared. Should it be static?
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This driver is currently used by musb's cppi41 counterpart. I may merge
both dmaengine users of musb at some point, but not just yet.
The driver seems to work in RX/TX mode in host mode, tested on mass
storage. I increased the size of the TX / RX transfers and waited for the
core code to cancel a transfer, and it seems to recover.
v2..v3:
- use small transfers on the RX side and check the data toggle.
- use RNDIS mode on the TX side so we have one interrupt for 4096 transfers.
- remove the custom "transferred" hack and use dmaengine_tx_status() to
compute the total amount of data that has been transferred.
- cancel transfers and reclaim descriptors
v1..v2:
- RX path added
- DMA modes 0 & 1 are working
- device tree nodes re-created.
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <djbw@fb.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Felipe Balbi <balbi@ti.com>
In mmp pdma, phy channels are allocated/freed dynamically.
The mapping from DMA request to DMA channel number in DRCMR
should be cleared when a phy channel is freed. Otherwise
conflicts will happen when:
1. A is using channel 2 and frees it when finished, but A's
DRCMR still maps to channel 2.
2. Now another user, B, gets channel 2, so B's DRCMR maps to
channel 2 as well.
The datasheet states: "Do not map two active requests to the
same channel since it produces unpredictable results", and we
can observe exactly that during testing.
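A rough sketch of the fix (the field and register names follow the driver's existing conventions, but treat them as assumptions here):

  /* Hedged sketch: when releasing a phy channel, clear the DRCMR mapping
   * that pointed this request line at it, so a later user of the same phy
   * channel cannot end up with two active mappings. */
  static void mmp_pdma_free_phy(struct mmp_pdma_chan *pchan)
  {
      u32 reg;

      if (!pchan->phy)
          return;

      /* clear the channel mapping in DRCMR */
      reg = DRCMR(pchan->drcmr);
      writel(0, pchan->phy->base + reg);

      pchan->phy->vchan = NULL;
      pchan->phy = NULL;
  }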
Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In mmp pdma, phy channels are allocated/freed dynamically
and frequently, but no proper protection is in place.
Conflicts will happen when multiple users request phy
channels at the same time. Use a spinlock to protect the allocation.
Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To follow the usual practice, let's return DMA_PAUSED status only if
dma_cookie_status() returned DMA_IN_PROGRESS.
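The resulting pattern in the tx_status callback looks roughly like this (the driver-specific names and the paused flag are placeholders, not the actual driver's symbols):

  /* Hedged sketch: only report DMA_PAUSED while the cookie is in flight. */
  static enum dma_status foo_tx_status(struct dma_chan *chan,
                                       dma_cookie_t cookie,
                                       struct dma_tx_state *txstate)
  {
      struct foo_chan *fchan = to_foo_chan(chan);   /* hypothetical */
      enum dma_status ret;

      ret = dma_cookie_status(chan, cookie, txstate);
      if (ret == DMA_IN_PROGRESS && fchan->paused)  /* hypothetical flag */
          return DMA_PAUSED;

      return ret;
  }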
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point in going through the rest of the function if the first
call to dma_cookie_status() returned DMA_SUCCESS.
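The early-exit pattern, roughly:

  /* Hedged sketch: if the cookie is already complete there is nothing
   * left to compute (residue is 0, the state cannot be "paused"). */
  ret = dma_cookie_status(chan, cookie, txstate);
  if (ret == DMA_SUCCESS)
      return ret;

  /* ...otherwise fall through to residue calculation etc... */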
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the PC world it is quite possible that devices share the same interrupt
line. This patch prepares the dw_dmac driver for such cases.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In general, ~0 does not fit into some integer types. Let's add a helper to
make comparisons against that constant properly.
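Something along these lines (the macro name is an assumption):

  /* Hedged sketch: compare a value of any unsigned integer type against
   * "all ones" without worrying about the width of ~0. */
  #define is_all_ones(x)	((x) == (typeof(x))~0)

For example, a u8 field set to 0xff compares equal with the cast in place, whereas a bare comparison against ~0 would promote both sides to int and fail.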
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In rare cases (mostly for testing purposes) the dw_dmac driver might be
compiled as a module, as well as the other LPSS device drivers (I2C, SPI,
HSUART). When udev handles the event of the devices appearing, the dw_dmac
module is missing. This patch fixes that.
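One common way to let udev autoload the module when the platform device appears is a module alias; the exact alias string below is an assumption for illustration, not necessarily what the patch adds:

  MODULE_ALIAS("platform:dw_dmac");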
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes sparse warning:
drivers/dma/acpi-dma.c:76:21: sparse: cast to restricted __le32
Since everything in ACPI tables is little-endian by definition, the types
used in practice are plain uXX. Thus, we have to force __leXX if we want to
convert them to CPU byte order.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point in going through the rest of the function if the first
call to dma_cookie_status() returned DMA_SUCCESS.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dma_set_residue() sets only the residue value, so the user can't rely on the
returned cookie values. This patch standardizes the behaviour.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It's better to use the generic dma_cookie_status(), which allows the user to get
standard return codes independently of the DMAC driver in charge.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: linux-tegra@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Li Yang <leoli@freescale.com>
Cc: Zhang Wei <zw@zh-kernel.org>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the dma_cookie_status() description, locking is not required.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The residue value is set to 0 by dma_cookie_status().
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The last_used variable is used only once, so let's substitute its value directly.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The last_used variable is used only once, so let's substitute its value directly.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
sh_desc->hw.tcr holds the real data size, while the TCR register holds the
transfer count, which is hw.tcr shifted by xmit_shift.
The current sh_dmae_get_partial() calculates in a different unit.
This patch fixes it.
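The corrected expression, roughly (a sketch; the accessors follow the driver's existing naming, but treat the exact form as an assumption):

  /* Hedged sketch: convert TCR (transfer count) back to bytes by shifting
   * it with xmit_shift before subtracting it from the byte-sized hw.tcr. */
  return sh_desc->hw.tcr -
         (sh_dmae_readl(sh_chan, TCR) << sh_chan->xmit_shift);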
This bug has been present since c014906a870ce70e009def0c9d170ccabeb0be63
("dmaengine: shdma: extend .device_terminate_all() to record partial
transfer"), which was added in 2.6.34-rc1.
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Acked-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Allocate a descriptor for each period of a cyclic transfer, not just the first.
Also, since the callback needs to be called for each finished period, make sure to
initialize the callback and callback_param fields of each descriptor in a cyclic
transfer.
Cc: stable@vger.kernel.org
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The dev_attrs field of struct class is going away soon, dev_groups
should be used instead. This converts the dma dma_devclass code to use
the correct field.
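Roughly, the conversion looks like this (the attribute list mirrors dmaengine's existing sysfs files, but treat the exact names as assumptions):

  /* Hedged sketch: expose the existing attributes via a group and point the
   * class at .dev_groups instead of the soon-to-be-removed .dev_attrs. */
  static struct attribute *dma_dev_attrs[] = {
      &dev_attr_memcpy_count.attr,
      &dev_attr_bytes_transferred.attr,
      &dev_attr_in_use.attr,
      NULL,
  };
  ATTRIBUTE_GROUPS(dma_dev);

  static struct class dma_devclass = {
      .name        = "dma",
      .dev_groups  = dma_dev_groups,
      .dev_release = chan_dev_release,
  };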
Cc: Dan Williams <djbw@fb.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fix the error-handling path to return -ENODEV instead of 0 when no proper
base address is found, as done elsewhere in this function.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Implement the device_slave_caps() callback for the pl330 driver. This allows
dmaengine users like the generic ALSA dmaengine PCM driver to query the
capabilities of the driver. The PL330 supports all buswidths and both
mem-to-dev as well as dev-to-mem transfers. In theory there is no limit on the
number of segments that can be transferred (in practice you'll run out of memory
eventually) and the number of bytes per segment is limited by the size of the
PL330 program buffer. Due to the nature of the PL330, the maximum number of bytes
per segment depends on the burst size; the driver sets it to the value for a
1-byte burst size, since that is the smallest.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The recent "drivers/dma: remove unused support for MEMSET operations"
change has fallout from lack of build testing by the author. This
fixes:
drivers/dma/iop-adma.c:1020:13: warning: unused variable 'dma_addr' [-Wunused-variable]
drivers/dma/iop-adma.c:1519:2: warning: format '%s' expects a matching 'char *' argument [-Wformat=]
Signed-off-by: Olof Johansson <olof@lixom.net>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull slave-dmaengine updates from Vinod Koul:
"Once you have some time from extended weekend celebrations please
consider pulling the following to get:
- Various fixes and PCI driver for dw_dmac by Andy
- DT binding for imx-dma by Markus & imx-sdma by Shawn
- DT fixes for dmaengine by Lars
- jz4740 dmac driver by Lars
- and various fixes across the drivers"
What "extended weekend celebrations"? I'm in the merge window, who has
time for extended celebrations..
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (40 commits)
DMA: shdma: add DT support
DMA: shdma: shdma_chan_filter() has to be in shdma-base.h
DMA: shdma: (cosmetic) don't re-calculate a pointer
dmaengine: at_hdmac: prepare clk before calling enable
dmaengine/trivial: at_hdmac: add curly brackets to if/else expressions
dmaengine: at_hdmac: remove unused atc_cleanup_descriptors()
dmaengine: at_hdmac: add FIFO configuration parameter to DMA DT binding
ARM: at91: dt: add header to define at_hdmac configuration
MIPS: jz4740: Correct clock gate bit for DMA controller
MIPS: jz4740: Remove custom DMA API
MIPS: jz4740: Register jz4740 DMA device
dma: Add a jz4740 dmaengine driver
MIPS: jz4740: Acquire and enable DMA controller clock
dma: mmp_tdma: disable irq when disabling dma channel
dmaengine: PL08x: Avoid collisions with get_signal() macro
dmaengine: dw: select DW_DMAC_BIG_ENDIAN_IO automagically
dma: dw: add PCI part of the driver
dma: dw: split driver to library part and platform code
dma: move dw_dmac driver to an own directory
dw_dmac: don't check resource with devm_ioremap_resource
...
This patch adds Device Tree support to the shdma driver. No special DT
properties are used, only standard DMA DT bindings are implemented. Since
shdma controllers reside on SoCs, their configuration is SoC-specific and
shall be passed to the driver from the SoC platform data, using the
auxdata procedure.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use an existing pointer instead of retrieving it again.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Replace clk_enable/disable with clk_prepare_enable/disable_unprepare to
avoid common clk framework warnings.
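A before/after sketch (atdma->clk stands for the driver's clock handle, an assumption here):

  /* before */
  clk_enable(atdma->clk);
  /* ... */
  clk_disable(atdma->clk);

  /* after: prepare/unprepare combined with enable/disable */
  clk_prepare_enable(atdma->clk);
  /* ... */
  clk_disable_unprepare(atdma->clk);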
Signed-off-by: Boris BREZILLON <b.brezillon@overkiz.com>
[nicolas.ferre@atmel.com: remove return code checking in at_dma_resume_noirq()]
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Correct coding style following patch
7c407d3e54dcc0c79119553c8d5ef176c1d5bc3a ("DMA: AT91: Get residual
bytes in dma buffer").
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>