dmaengine updates for 4.9-rc1

This is a fairly large pile of code which brings in some nice additions:
  - Error reporting: we have added a new mechanism for users of dmaengine to
    register a callback_result which tells them the result of the dma
    transaction. Right now only one user (ntb) is using it.
  - As we discussed on the KS mailing list and pointed out, NO_IRQ has no place
    in the kernel, so this also removes NO_IRQ from the dmaengine subsystem
    (both arm and ppc users).
  - Support for IOMMU slave transfers and its implementation for arm.
  - To get better build coverage, enable COMPILE_TEST for a bunch of drivers,
    and fix the warnings and sparse complaints on these.
  - Apart from the above, the usual updates spread across drivers.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJX9cGYAAoJEHwUBw8lI4NHXh0P/3OsctPYnwcangOz268hHDap
 7ZHwau96K7DRi8cFCc0XmG083Ivqih/fWMFJBUOEsuwS3zPHkfgfhsvm7MqrK3vv
 psJIwnubwTVVQ3lePYJlnna6mijcRNXVAooRLiqylA3QPIYRxECDFVDRNwf39D+I
 bYp5tmlFcobugOUUoMqq1D/gH8EHUWxrnrsS6UBBpYm+cusc6u9/JXlOb4pcJGSL
 V340zQ0S9FNuEM3b+1kMAeq3DG2wLXv9oJzz/6EN59sx5AdjlYUPHd/PvTYOeG0T
 crdtDfL+7xcqP0Ms4SGTOD4kXSe6nErr3bIBHQXI6ZmJn0j//+3yU21kTMl95kM+
 RM7nE4vItuQR0jPxVlhuLCcf3q7zMi+noOPZ1DVRTE1Yf9AizAgbPXyOE+jzGUUi
 6E+0Mj6CLpFH/Mffxphs7L6GKwfWqaLjAupbjR6EWZud37KAwvpcB1CkJEgT9C4s
 OiZ4INTPxXmw9dX/T9CPOyh8oZ8mB9LTUzHoJDvDGuwYm7HE0U9pzHG4bP0mjIIt
 y3RboP78t1HC9oZUrxCoGhvekJtok0k3RLGJTSx9ujklY9MJGG/F1KEC6APp5tXu
 0UToMXpgXSUkKEZesmsJFj/lbh1+h/yo5zTG5Hek8lh1K0sczaoWu3xTTSY9SSZQ
 ihlqyvdzSBweKo8ktU8A
 =9iA3
 -----END PGP SIGNATURE-----

Merge tag 'dmaengine-4.9-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This is bit large pile of code which bring in some nice additions:

   - Error reporting: we have added a new mechanism for users of
     dmaenegine to register a callback_result which tells them the
     result of the dma transaction. Right now only one user (ntb) is
     using it.

   - As we discussed on KS mailing list and pointed out NO_IRQ has no
     place in kernel, this also remove NO_IRQ from dmaengine subsystem
     (both arm and ppc users)

   - Support for IOMMU slave transfers and its implementation for arm.

   - To get better build coverage, enable COMPILE_TEST for bunch of
     driver, and fix the warning and sparse complaints on these.

   - Apart from above, usual updates spread across drivers"
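
For readers unfamiliar with the new callback_result hook mentioned above, a
minimal client-side sketch (not taken from this series; the function and
variable names here are made up) might look roughly like this:

    static void my_dma_done(void *param, const struct dmaengine_result *result)
    {
            /* called on completion with the transfer status and residue */
            if (result->result != DMA_TRANS_NOERROR)
                    pr_err("dma failed, residue %u\n", result->residue);
    }

    /* after preparing a descriptor "txd" on some channel: */
    txd->callback_result = my_dma_done;
    txd->callback_param = my_ctx;
    dmaengine_submit(txd);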

* tag 'dmaengine-4.9-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (169 commits)
  async_pq_val: fix DMA memory leak
  dmaengine: virt-dma: move function declarations
  dmaengine: omap-dma: Enable burst and data pack for SG
  DT: dmaengine: rcar-dmac: document R8A7743/5 support
  dmaengine: fsldma: Unmap region obtained by of_iomap
  dmaengine: jz4780: fix resource leaks on error exit return
  dma-debug: fix ia64 build, use PHYS_PFN
  dmaengine: coh901318: fix integer overflow when shifting more than 32 places
  dmaengine: edma: avoid uninitialized variable use
  dma-mapping: fix m32r build warning
  dma-mapping: fix ia64 build, use PHYS_PFN
  dmaengine: ti-dma-crossbar: enable COMPILE_TEST
  dmaengine: omap-dma: enable COMPILE_TEST
  dmaengine: edma: enable COMPILE_TEST
  dmaengine: ti-dma-crossbar: Fix of_device_id data parameter usage
  dmaengine: ti-dma-crossbar: Correct type for of_find_property() third parameter
  dmaengine/ARM: omap-dma: Fix the DMAengine compile test on non OMAP configs
  dmaengine: edma: Rename set_bits and remove unused clear_bits helper
  dmaengine: edma: Use correct type for of_find_property() third parameter
  dmaengine: edma: Fix of_device_id data parameter usage (legacy vs TPCC)
  ...
Linus Torvalds, 2016-10-06 17:13:54 -07:00
commit 553911c67e
76 changed files with 1919 additions and 744 deletions


@@ -277,14 +277,26 @@ and <size> parameters are provided to do partial page mapping, it is
 recommended that you never use these unless you really know what the
 cache width is.

+dma_addr_t
+dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
+		 enum dma_data_direction dir, unsigned long attrs)
+
+void
+dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
+		   enum dma_data_direction dir, unsigned long attrs)
+
+API for mapping and unmapping for MMIO resources. All the notes and
+warnings for the other mapping APIs apply here. The API should only be
+used to map device MMIO resources, mapping of RAM is not permitted.
+
 int
 dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

-In some circumstances dma_map_single() and dma_map_page() will fail to create
-a mapping. A driver can check for these errors by testing the returned
-DMA address with dma_mapping_error(). A non-zero return value means the mapping
-could not be created and the driver should take appropriate action (e.g.
-reduce current DMA mapping usage or delay and try again later).
+In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
+will fail to create a mapping. A driver can check for these errors by testing
+the returned DMA address with dma_mapping_error(). A non-zero return value
+means the mapping could not be created and the driver should take appropriate
+action (e.g. reduce current DMA mapping usage or delay and try again later).

 int
 dma_map_sg(struct device *dev, struct scatterlist *sg,
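
As a hedged aside (not part of the patch above), driver usage of the new
resource-mapping calls follows the same pattern as the other mapping APIs;
"dev" and "fifo_phys" below are placeholders:

    dma_addr_t dma;

    /* map a device MMIO FIFO register for slave DMA */
    dma = dma_map_resource(dev, fifo_phys, 4, DMA_TO_DEVICE, 0);
    if (dma_mapping_error(dev, dma))
            return -ENOMEM;

    /* ... run the DMA transfer using "dma" as the bus address ... */

    dma_unmap_resource(dev, dma, 4, DMA_TO_DEVICE, 0);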


@@ -8,6 +8,7 @@ Required properties:
       "fsl,imx51-sdma"
       "fsl,imx53-sdma"
       "fsl,imx6q-sdma"
+      "fsl,imx7d-sdma"
   The -to variants should be preferred since they allow to determine the
   correct ROM script addresses needed for the driver to work without additional
   firmware.


@@ -1,4 +1,4 @@
-* Renesas R-Car DMA Controller Device Tree bindings
+* Renesas R-Car (RZ/G) DMA Controller Device Tree bindings

 Renesas R-Car Generation 2 SoCs have multiple multi-channel DMA
 controller instances named DMAC capable of serving multiple clients. Channels
@@ -16,6 +16,8 @@ Required Properties:

 - compatible: "renesas,dmac-<soctype>", "renesas,rcar-dmac" as fallback.
	      Examples with soctypes are:
+		- "renesas,dmac-r8a7743" (RZ/G1M)
+		- "renesas,dmac-r8a7745" (RZ/G1E)
		- "renesas,dmac-r8a7790" (R-Car H2)
		- "renesas,dmac-r8a7791" (R-Car M2-W)
		- "renesas,dmac-r8a7792" (R-Car V2H)


@@ -7,6 +7,7 @@ Required properties:
 - compatible: Must be one of
	  "allwinner,sun6i-a31-dma"
	  "allwinner,sun8i-a23-dma"
+	  "allwinner,sun8i-a83t-dma"
	  "allwinner,sun8i-h3-dma"
 - reg: Should contain the registers base address and length
 - interrupts: Should contain a reference to the interrupt used by this device


@@ -282,6 +282,17 @@ supported.
        that is supposed to push the current
        transaction descriptor to a pending queue, waiting
        for issue_pending to be called.
+   - In this structure the function pointer callback_result can be
+     initialized in order for the submitter to be notified that a
+     transaction has completed. In the earlier code the function pointer
+     callback has been used. However it does not provide any status to the
+     transaction and will be deprecated. The result structure defined as
+     dmaengine_result that is passed in to callback_result has two fields:
+     + result: This provides the transfer result defined by
+               dmaengine_tx_result. Either success or some error
+               condition.
+     + residue: Provides the residue bytes of the transfer for those that
+                support residue.

 * device_issue_pending
    - Takes the first transaction descriptor in the pending queue,
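
To make the new callback_result path concrete, a hypothetical provider-side
completion handler (the descriptor "txd" and the residue value are
placeholders; the helpers come from the drivers/dma/dmaengine.h changes later
in this series) could look roughly like:

    struct dmaengine_result res;

    res.result = DMA_TRANS_NOERROR;   /* or an error value on failure */
    res.residue = 0;                  /* bytes left untransferred */

    dma_cookie_complete(txd);
    dmaengine_desc_get_callback_invoke(txd, &res);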


@@ -33,6 +33,7 @@
 #include <linux/delay.h>
 #include <linux/io.h>
 #include <linux/platform_data/dma-s3c24xx.h>
+#include <linux/dmaengine.h>

 #include <mach/hardware.h>
 #include <mach/regs-clock.h>
@@ -439,10 +440,44 @@ static struct s3c24xx_dma_channel s3c2440_dma_channels[DMACH_MAX] = {
	[DMACH_USB_EP4] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(4, 3), },
 };

+static const struct dma_slave_map s3c2440_dma_slave_map[] = {
+	/* TODO: DMACH_XD0 */
+	/* TODO: DMACH_XD1 */
+	{ "s3c2440-sdi", "rx-tx", (void *)DMACH_SDI },
+	{ "s3c2410-spi.0", "rx", (void *)DMACH_SPI0 },
+	{ "s3c2410-spi.0", "tx", (void *)DMACH_SPI0 },
+	{ "s3c2410-spi.1", "rx", (void *)DMACH_SPI1 },
+	{ "s3c2410-spi.1", "tx", (void *)DMACH_SPI1 },
+	{ "s3c2440-uart.0", "rx", (void *)DMACH_UART0 },
+	{ "s3c2440-uart.0", "tx", (void *)DMACH_UART0 },
+	{ "s3c2440-uart.1", "rx", (void *)DMACH_UART1 },
+	{ "s3c2440-uart.1", "tx", (void *)DMACH_UART1 },
+	{ "s3c2440-uart.2", "rx", (void *)DMACH_UART2 },
+	{ "s3c2440-uart.2", "tx", (void *)DMACH_UART2 },
+	{ "s3c2440-uart.3", "rx", (void *)DMACH_UART3 },
+	{ "s3c2440-uart.3", "tx", (void *)DMACH_UART3 },
+	/* TODO: DMACH_TIMER */
+	{ "s3c24xx-iis", "rx", (void *)DMACH_I2S_IN },
+	{ "s3c24xx-iis", "tx", (void *)DMACH_I2S_OUT },
+	{ "samsung-ac97", "rx", (void *)DMACH_PCM_IN },
+	{ "samsung-ac97", "tx", (void *)DMACH_PCM_OUT },
+	{ "samsung-ac97", "rx", (void *)DMACH_MIC_IN },
+	{ "s3c-hsudc", "rx0", (void *)DMACH_USB_EP1 },
+	{ "s3c-hsudc", "rx1", (void *)DMACH_USB_EP2 },
+	{ "s3c-hsudc", "rx2", (void *)DMACH_USB_EP3 },
+	{ "s3c-hsudc", "rx3", (void *)DMACH_USB_EP4 },
+	{ "s3c-hsudc", "tx0", (void *)DMACH_USB_EP1 },
+	{ "s3c-hsudc", "tx1", (void *)DMACH_USB_EP2 },
+	{ "s3c-hsudc", "tx2", (void *)DMACH_USB_EP3 },
+	{ "s3c-hsudc", "tx3", (void *)DMACH_USB_EP4 }
+};
+
 static struct s3c24xx_dma_platdata s3c2440_dma_platdata = {
	.num_phy_channels = 4,
	.channels = s3c2440_dma_channels,
	.num_channels = DMACH_MAX,
+	.slave_map = s3c2440_dma_slave_map,
+	.slavecnt = ARRAY_SIZE(s3c2440_dma_slave_map),
 };

 struct platform_device s3c2440_device_dma = {


@@ -2014,6 +2014,63 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
	__free_iova(mapping, iova, len);
 }

+/**
+ * arm_iommu_map_resource - map a device resource for DMA
+ * @dev: valid struct device pointer
+ * @phys_addr: physical address of resource
+ * @size: size of resource to map
+ * @dir: DMA transfer direction
+ */
+static dma_addr_t arm_iommu_map_resource(struct device *dev,
+		phys_addr_t phys_addr, size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
+{
+	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
+	dma_addr_t dma_addr;
+	int ret, prot;
+	phys_addr_t addr = phys_addr & PAGE_MASK;
+	unsigned int offset = phys_addr & ~PAGE_MASK;
+	size_t len = PAGE_ALIGN(size + offset);
+
+	dma_addr = __alloc_iova(mapping, len);
+	if (dma_addr == DMA_ERROR_CODE)
+		return dma_addr;
+
+	prot = __dma_direction_to_prot(dir) | IOMMU_MMIO;
+
+	ret = iommu_map(mapping->domain, dma_addr, addr, len, prot);
+	if (ret < 0)
+		goto fail;
+
+	return dma_addr + offset;
+fail:
+	__free_iova(mapping, dma_addr, len);
+	return DMA_ERROR_CODE;
+}
+
+/**
+ * arm_iommu_unmap_resource - unmap a device DMA resource
+ * @dev: valid struct device pointer
+ * @dma_handle: DMA address to resource
+ * @size: size of resource to map
+ * @dir: DMA transfer direction
+ */
+static void arm_iommu_unmap_resource(struct device *dev, dma_addr_t dma_handle,
+		size_t size, enum dma_data_direction dir,
+		unsigned long attrs)
+{
+	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
+	dma_addr_t iova = dma_handle & PAGE_MASK;
+	unsigned int offset = dma_handle & ~PAGE_MASK;
+	size_t len = PAGE_ALIGN(size + offset);
+
+	if (!iova)
+		return;
+
+	iommu_unmap(mapping->domain, iova, len);
+	__free_iova(mapping, iova, len);
+}
+
 static void arm_iommu_sync_single_for_cpu(struct device *dev,
		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
@@ -2057,6 +2114,9 @@ struct dma_map_ops iommu_ops = {
	.unmap_sg		= arm_iommu_unmap_sg,
	.sync_sg_for_cpu	= arm_iommu_sync_sg_for_cpu,
	.sync_sg_for_device	= arm_iommu_sync_sg_for_device,
+
+	.map_resource		= arm_iommu_map_resource,
+	.unmap_resource		= arm_iommu_unmap_resource,
 };

 struct dma_map_ops iommu_coherent_ops = {
@@ -2070,6 +2130,9 @@ struct dma_map_ops iommu_coherent_ops = {
	.map_sg		= arm_coherent_iommu_map_sg,
	.unmap_sg	= arm_coherent_iommu_unmap_sg,
+
+	.map_resource	= arm_iommu_map_resource,
+	.unmap_resource	= arm_iommu_unmap_resource,
 };

 /**


@@ -368,8 +368,6 @@ async_syndrome_val(struct page **blocks, unsigned int offset, int disks,

		dma_set_unmap(tx, unmap);
		async_tx_submit(chan, tx, submit);
-
-		return tx;
	} else {
		struct page *p_src = P(blocks, disks);
		struct page *q_src = Q(blocks, disks);
@@ -424,9 +422,11 @@ async_syndrome_val(struct page **blocks, unsigned int offset, int disks,
		submit->cb_param = cb_param_orig;
		submit->flags = flags_orig;
		async_tx_sync_epilog(submit);
-
-		return NULL;
+		tx = NULL;
	}
+
+	dmaengine_unmap_put(unmap);
+
+	return tx;
 }
 EXPORT_SYMBOL_GPL(async_syndrome_val);


@@ -102,7 +102,7 @@ config AXI_DMAC

 config COH901318
	bool "ST-Ericsson COH901318 DMA support"
	select DMA_ENGINE
-	depends on ARCH_U300
+	depends on ARCH_U300 || COMPILE_TEST
	help
	  Enable support for ST-Ericsson COH 901 318 DMA.

@@ -114,13 +114,13 @@ config DMA_BCM2835

 config DMA_JZ4740
	tristate "JZ4740 DMA support"
-	depends on MACH_JZ4740
+	depends on MACH_JZ4740 || COMPILE_TEST
	select DMA_ENGINE
	select DMA_VIRTUAL_CHANNELS

 config DMA_JZ4780
	tristate "JZ4780 DMA support"
-	depends on MACH_JZ4780
+	depends on MACH_JZ4780 || COMPILE_TEST
	select DMA_ENGINE
	select DMA_VIRTUAL_CHANNELS
	help

@@ -130,14 +130,14 @@ config DMA_JZ4780

 config DMA_OMAP
	tristate "OMAP DMA support"
-	depends on ARCH_OMAP
+	depends on ARCH_OMAP || COMPILE_TEST
	select DMA_ENGINE
	select DMA_VIRTUAL_CHANNELS
-	select TI_DMA_CROSSBAR if SOC_DRA7XX
+	select TI_DMA_CROSSBAR if (SOC_DRA7XX || COMPILE_TEST)

 config DMA_SA11X0
	tristate "SA-11x0 DMA support"
-	depends on ARCH_SA1100
+	depends on ARCH_SA1100 || COMPILE_TEST
	select DMA_ENGINE
	select DMA_VIRTUAL_CHANNELS
	help

@@ -150,7 +150,6 @@ config DMA_SUN4I
	depends on MACH_SUN4I || MACH_SUN5I || MACH_SUN7I
	default (MACH_SUN4I || MACH_SUN5I || MACH_SUN7I)
	select DMA_ENGINE
-	select DMA_OF
	select DMA_VIRTUAL_CHANNELS
	help
	  Enable support for the DMA controller present in the sun4i,

@@ -167,7 +166,7 @@ config DMA_SUN6I

 config EP93XX_DMA
	bool "Cirrus Logic EP93xx DMA support"
-	depends on ARCH_EP93XX
+	depends on ARCH_EP93XX || COMPILE_TEST
	select DMA_ENGINE
	help
	  Enable support for the Cirrus Logic EP93xx M2P/M2M DMA controller.

@@ -279,7 +278,7 @@ config INTEL_MIC_X100_DMA

 config K3_DMA
	tristate "Hisilicon K3 DMA support"
-	depends on ARCH_HI3xxx
+	depends on ARCH_HI3xxx || ARCH_HISI || COMPILE_TEST
	select DMA_ENGINE
	select DMA_VIRTUAL_CHANNELS
	help

@@ -297,16 +296,16 @@ config LPC18XX_DMAMUX

 config MMP_PDMA
	bool "MMP PDMA support"
-	depends on (ARCH_MMP || ARCH_PXA)
+	depends on ARCH_MMP || ARCH_PXA || COMPILE_TEST
	select DMA_ENGINE
	help
	  Support the MMP PDMA engine for PXA and MMP platform.

 config MMP_TDMA
	bool "MMP Two-Channel DMA support"
-	depends on ARCH_MMP
+	depends on ARCH_MMP || COMPILE_TEST
	select DMA_ENGINE
-	select MMP_SRAM
+	select MMP_SRAM if ARCH_MMP
	help
	  Support the MMP Two-Channel DMA engine.
	  This engine used for MMP Audio DMA and pxa910 SQU.

@@ -316,7 +315,6 @@ config MOXART_DMA
	tristate "MOXART DMA support"
	depends on ARCH_MOXART
	select DMA_ENGINE
-	select DMA_OF
	select DMA_VIRTUAL_CHANNELS
	help
	  Enable support for the MOXA ART SoC DMA controller.

@@ -439,9 +437,8 @@ config STE_DMA40

 config STM32_DMA
	bool "STMicroelectronics STM32 DMA support"
-	depends on ARCH_STM32
+	depends on ARCH_STM32 || COMPILE_TEST
	select DMA_ENGINE
-	select DMA_OF
	select DMA_VIRTUAL_CHANNELS
	help
	  Enable support for the on-chip DMA controller on STMicroelectronics

@@ -451,7 +448,7 @@ config STM32_DMA

 config S3C24XX_DMAC
	bool "Samsung S3C24XX DMA support"
-	depends on ARCH_S3C24XX
+	depends on ARCH_S3C24XX || COMPILE_TEST
	select DMA_ENGINE
	select DMA_VIRTUAL_CHANNELS
	help

@@ -483,10 +480,9 @@ config TEGRA20_APB_DMA

 config TEGRA210_ADMA
	bool "NVIDIA Tegra210 ADMA support"
-	depends on ARCH_TEGRA_210_SOC
+	depends on (ARCH_TEGRA_210_SOC || COMPILE_TEST) && PM_CLK
	select DMA_ENGINE
	select DMA_VIRTUAL_CHANNELS
-	select PM_CLK
	help
	  Support for the NVIDIA Tegra210 ADMA controller driver. The
	  DMA controller has multiple DMA channels and is used to service

@@ -497,7 +493,7 @@ config TEGRA210_ADMA

 config TIMB_DMA
	tristate "Timberdale FPGA DMA support"
-	depends on MFD_TIMBERDALE
+	depends on MFD_TIMBERDALE || COMPILE_TEST
	select DMA_ENGINE
	help
	  Enable support for the Timberdale FPGA DMA engine.

@@ -515,10 +511,10 @@ config TI_DMA_CROSSBAR

 config TI_EDMA
	bool "TI EDMA support"
-	depends on ARCH_DAVINCI || ARCH_OMAP || ARCH_KEYSTONE
+	depends on ARCH_DAVINCI || ARCH_OMAP || ARCH_KEYSTONE || COMPILE_TEST
	select DMA_ENGINE
	select DMA_VIRTUAL_CHANNELS
-	select TI_DMA_CROSSBAR if ARCH_OMAP
+	select TI_DMA_CROSSBAR if (ARCH_OMAP || COMPILE_TEST)
	default n
	help
	  Enable support for the TI EDMA controller. This DMA

@@ -561,7 +557,7 @@ config XILINX_ZYNQMP_DMA

 config ZX_DMA
	tristate "ZTE ZX296702 DMA support"
-	depends on ARCH_ZX
+	depends on ARCH_ZX || COMPILE_TEST
	select DMA_ENGINE
	select DMA_VIRTUAL_CHANNELS
	help


@@ -473,15 +473,11 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
	/* for cyclic transfers,
	 * no need to replay callback function while stopping */
	if (!atc_chan_is_cyclic(atchan)) {
-		dma_async_tx_callback callback = txd->callback;
-		void *param = txd->callback_param;
-
		/*
		 * The API requires that no submissions are done from a
		 * callback, so we don't need to drop the lock here
		 */
-		if (callback)
-			callback(param);
+		dmaengine_desc_get_callback_invoke(txd, NULL);
	}

	dma_run_dependencies(txd);
@@ -598,15 +594,12 @@ static void atc_handle_cyclic(struct at_dma_chan *atchan)
 {
	struct at_desc *first = atc_first_active(atchan);
	struct dma_async_tx_descriptor *txd = &first->txd;
-	dma_async_tx_callback callback = txd->callback;
-	void *param = txd->callback_param;

	dev_vdbg(chan2dev(&atchan->chan_common),
			"new cyclic period llp 0x%08x\n",
			channel_readl(atchan, DSCR));

-	if (callback)
-		callback(param);
+	dmaengine_desc_get_callback_invoke(txd, NULL);
 }

 /*-- IRQ & Tasklet ---------------------------------------------------*/


@@ -1572,8 +1572,8 @@ static void at_xdmac_handle_cyclic(struct at_xdmac_chan *atchan)
	desc = list_first_entry(&atchan->xfers_list, struct at_xdmac_desc, xfer_node);
	txd = &desc->tx_dma_desc;

-	if (txd->callback && (txd->flags & DMA_PREP_INTERRUPT))
-		txd->callback(txd->callback_param);
+	if (txd->flags & DMA_PREP_INTERRUPT)
+		dmaengine_desc_get_callback_invoke(txd, NULL);
 }

 static void at_xdmac_tasklet(unsigned long data)
@@ -1616,8 +1616,8 @@ static void at_xdmac_tasklet(unsigned long data)
		if (!at_xdmac_chan_is_cyclic(atchan)) {
			dma_cookie_complete(txd);
-			if (txd->callback && (txd->flags & DMA_PREP_INTERRUPT))
-				txd->callback(txd->callback_param);
+			if (txd->flags & DMA_PREP_INTERRUPT)
+				dmaengine_desc_get_callback_invoke(txd, NULL);
		}

		dma_run_dependencies(txd);


@@ -82,7 +82,7 @@ bcom_task_alloc(int bd_count, int bd_size, int priv_size)
	/* Get IRQ of that task */
	tsk->irq = irq_of_parse_and_map(bcom_eng->ofnode, tsk->tasknum);
-	if (tsk->irq == NO_IRQ)
+	if (!tsk->irq)
		goto error;

	/* Init the BDs, if needed */
@@ -104,7 +104,7 @@ bcom_task_alloc(int bd_count, int bd_size, int priv_size)
 error:
	if (tsk) {
-		if (tsk->irq != NO_IRQ)
+		if (tsk->irq)
			irq_dispose_mapping(tsk->irq);
		bcom_sram_free(tsk->bd);
		kfree(tsk->cookie);


@@ -1319,10 +1319,10 @@ static void coh901318_list_print(struct coh901318_chan *cohc,
	int i = 0;

	while (l) {
-		dev_vdbg(COHC_2_DEV(cohc), "i %d, lli %p, ctrl 0x%x, src 0x%x"
-			 ", dst 0x%x, link 0x%x virt_link_addr 0x%p\n",
-			 i, l, l->control, l->src_addr, l->dst_addr,
-			 l->link_addr, l->virt_link_addr);
+		dev_vdbg(COHC_2_DEV(cohc), "i %d, lli %p, ctrl 0x%x, src 0x%pad"
+			 ", dst 0x%pad, link 0x%pad virt_link_addr 0x%p\n",
+			 i, l, l->control, &l->src_addr, &l->dst_addr,
+			 &l->link_addr, l->virt_link_addr);
		i++;
		l = l->virt_link_addr;
	}
@@ -1335,7 +1335,7 @@ static void coh901318_list_print(struct coh901318_chan *cohc,
 static struct coh901318_base *debugfs_dma_base;
 static struct dentry *dma_dentry;

-static int coh901318_debugfs_read(struct file *file, char __user *buf,
+static ssize_t coh901318_debugfs_read(struct file *file, char __user *buf,
			   size_t count, loff_t *f_pos)
 {
	u64 started_channels = debugfs_dma_base->pm.started_channels;
@@ -1352,9 +1352,10 @@ static int coh901318_debugfs_read(struct file *file, char __user *buf,
	tmp += sprintf(tmp, "DMA -- enabled dma channels\n");

-	for (i = 0; i < U300_DMA_CHANNELS; i++)
-		if (started_channels & (1 << i))
+	for (i = 0; i < U300_DMA_CHANNELS; i++) {
+		if (started_channels & (1ULL << i))
			tmp += sprintf(tmp, "channel %d\n", i);
+	}

	tmp += sprintf(tmp, "Pool alloc nbr %d\n", pool_count);

@@ -1553,15 +1554,8 @@ coh901318_desc_submit(struct coh901318_chan *cohc, struct coh901318_desc *desc)
 static struct coh901318_desc *
 coh901318_first_active_get(struct coh901318_chan *cohc)
 {
-	struct coh901318_desc *d;
-
-	if (list_empty(&cohc->active))
-		return NULL;
-
-	d = list_first_entry(&cohc->active,
-			     struct coh901318_desc,
-			     node);
-	return d;
+	return list_first_entry_or_null(&cohc->active, struct coh901318_desc,
+					node);
 }

 static void
@@ -1579,15 +1573,8 @@ coh901318_desc_queue(struct coh901318_chan *cohc, struct coh901318_desc *desc)
 static struct coh901318_desc *
 coh901318_first_queued(struct coh901318_chan *cohc)
 {
-	struct coh901318_desc *d;
-
-	if (list_empty(&cohc->queue))
-		return NULL;
-
-	d = list_first_entry(&cohc->queue,
-			     struct coh901318_desc,
-			     node);
-	return d;
+	return list_first_entry_or_null(&cohc->queue, struct coh901318_desc,
+					node);
 }

 static inline u32 coh901318_get_bytes_in_lli(struct coh901318_lli *in_lli)
@@ -1766,7 +1753,7 @@ static int coh901318_resume(struct dma_chan *chan)

 bool coh901318_filter_id(struct dma_chan *chan, void *chan_id)
 {
-	unsigned int ch_nr = (unsigned int) chan_id;
+	unsigned long ch_nr = (unsigned long) chan_id;

	if (ch_nr == to_coh901318_chan(chan)->id)
		return true;
@@ -1888,8 +1875,7 @@ static void dma_tasklet(unsigned long data)
	struct coh901318_chan *cohc = (struct coh901318_chan *) data;
	struct coh901318_desc *cohd_fin;
	unsigned long flags;
-	dma_async_tx_callback callback;
-	void *callback_param;
+	struct dmaengine_desc_callback cb;

	dev_vdbg(COHC_2_DEV(cohc), "[%s] chan_id %d"
		 " nbr_active_done %ld\n", __func__,
@@ -1904,8 +1890,7 @@ static void dma_tasklet(unsigned long data)
		goto err;

	/* locate callback to client */
-	callback = cohd_fin->desc.callback;
-	callback_param = cohd_fin->desc.callback_param;
+	dmaengine_desc_get_callback(&cohd_fin->desc, &cb);

	/* sign this job as completed on the channel */
	dma_cookie_complete(&cohd_fin->desc);
@@ -1920,8 +1905,7 @@ static void dma_tasklet(unsigned long data)
	spin_unlock_irqrestore(&cohc->lock, flags);

	/* Call the callback when we're done */
-	if (callback)
-		callback(callback_param);
+	dmaengine_desc_callback_invoke(&cb, NULL);

	spin_lock_irqsave(&cohc->lock, flags);

@@ -2247,8 +2231,8 @@ coh901318_prep_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
	spin_lock_irqsave(&cohc->lock, flg);

	dev_vdbg(COHC_2_DEV(cohc),
-		 "[%s] channel %d src 0x%x dest 0x%x size %d\n",
-		 __func__, cohc->id, src, dest, size);
+		 "[%s] channel %d src 0x%pad dest 0x%pad size %zu\n",
+		 __func__, cohc->id, &src, &dest, size);

	if (flags & DMA_PREP_INTERRUPT)
		/* Trigger interrupt after last lli */
@@ -2744,8 +2728,8 @@ static int __init coh901318_probe(struct platform_device *pdev)
		goto err_register_of_dma;

	platform_set_drvdata(pdev, base);
-	dev_info(&pdev->dev, "Initialized COH901318 DMA on virtual base 0x%08x\n",
-		(u32) base->virtbase);
+	dev_info(&pdev->dev, "Initialized COH901318 DMA on virtual base 0x%p\n",
+		base->virtbase);

	return err;


@@ -75,7 +75,7 @@ coh901318_lli_alloc(struct coh901318_pool *pool, unsigned int len)
	lli = head;
	lli->phy_this = phy;
	lli->link_addr = 0x00000000;
-	lli->virt_link_addr = 0x00000000U;
+	lli->virt_link_addr = NULL;

	for (i = 1; i < len; i++) {
		lli_prev = lli;
@@ -88,7 +88,7 @@ coh901318_lli_alloc(struct coh901318_pool *pool, unsigned int len)
		DEBUGFS_POOL_COUNTER_ADD(pool, 1);
		lli->phy_this = phy;
		lli->link_addr = 0x00000000;
-		lli->virt_link_addr = 0x00000000U;
+		lli->virt_link_addr = NULL;

		lli_prev->link_addr = phy;
		lli_prev->virt_link_addr = lli;


@@ -108,6 +108,8 @@ struct cppi41_channel {
	unsigned td_queued:1;
	unsigned td_seen:1;
	unsigned td_desc_seen:1;
+
+	struct list_head node;		/* Node for pending list */
 };

 struct cppi41_desc {
@@ -146,6 +148,9 @@ struct cppi41_dd {
	const struct chan_queues *queues_tx;
	struct chan_queues td_queue;

+	struct list_head pending;	/* Pending queued transfers */
+	spinlock_t lock;		/* Lock for pending list */
+
	/* context for suspend/resume */
	unsigned int dma_tdfdq;
 };
@@ -331,7 +336,11 @@ static irqreturn_t cppi41_irq(int irq, void *data)
			c->residue = pd_trans_len(c->desc->pd6) - len;

			dma_cookie_complete(&c->txd);
-			c->txd.callback(c->txd.callback_param);
+			dmaengine_desc_get_callback_invoke(&c->txd, NULL);
+
+			/* Paired with cppi41_dma_issue_pending */
+			pm_runtime_mark_last_busy(cdd->ddev.dev);
+			pm_runtime_put_autosuspend(cdd->ddev.dev);
		}
	}
	return IRQ_HANDLED;
@@ -349,6 +358,12 @@ static dma_cookie_t cppi41_tx_submit(struct dma_async_tx_descriptor *tx)
 static int cppi41_dma_alloc_chan_resources(struct dma_chan *chan)
 {
	struct cppi41_channel *c = to_cpp41_chan(chan);
+	struct cppi41_dd *cdd = c->cdd;
+	int error;
+
+	error = pm_runtime_get_sync(cdd->ddev.dev);
+	if (error < 0)
+		return error;

	dma_cookie_init(chan);
	dma_async_tx_descriptor_init(&c->txd, chan);
@@ -357,11 +372,26 @@ static int cppi41_dma_alloc_chan_resources(struct dma_chan *chan)
	if (!c->is_tx)
		cppi_writel(c->q_num, c->gcr_reg + RXHPCRA0);

+	pm_runtime_mark_last_busy(cdd->ddev.dev);
+	pm_runtime_put_autosuspend(cdd->ddev.dev);
+
	return 0;
 }

 static void cppi41_dma_free_chan_resources(struct dma_chan *chan)
 {
+	struct cppi41_channel *c = to_cpp41_chan(chan);
+	struct cppi41_dd *cdd = c->cdd;
+	int error;
+
+	error = pm_runtime_get_sync(cdd->ddev.dev);
+	if (error < 0)
+		return;
+
+	WARN_ON(!list_empty(&cdd->pending));
+
+	pm_runtime_mark_last_busy(cdd->ddev.dev);
+	pm_runtime_put_autosuspend(cdd->ddev.dev);
 }

 static enum dma_status cppi41_dma_tx_status(struct dma_chan *chan,
@@ -386,21 +416,6 @@ static void push_desc_queue(struct cppi41_channel *c)
	u32 desc_phys;
	u32 reg;

-	desc_phys = lower_32_bits(c->desc_phys);
-	desc_num = (desc_phys - cdd->descs_phys) / sizeof(struct cppi41_desc);
-	WARN_ON(cdd->chan_busy[desc_num]);
-	cdd->chan_busy[desc_num] = c;
-
-	reg = (sizeof(struct cppi41_desc) - 24) / 4;
-	reg |= desc_phys;
-	cppi_writel(reg, cdd->qmgr_mem + QMGR_QUEUE_D(c->q_num));
-}
-
-static void cppi41_dma_issue_pending(struct dma_chan *chan)
-{
-	struct cppi41_channel *c = to_cpp41_chan(chan);
-	u32 reg;
-
	c->residue = 0;

	reg = GCR_CHAN_ENABLE;
@@ -418,7 +433,46 @@ static void cppi41_dma_issue_pending(struct dma_chan *chan)
	 * before starting the dma engine.
	 */
	__iowmb();
-	push_desc_queue(c);
+
+	desc_phys = lower_32_bits(c->desc_phys);
+	desc_num = (desc_phys - cdd->descs_phys) / sizeof(struct cppi41_desc);
+	WARN_ON(cdd->chan_busy[desc_num]);
+	cdd->chan_busy[desc_num] = c;
+
+	reg = (sizeof(struct cppi41_desc) - 24) / 4;
+	reg |= desc_phys;
+	cppi_writel(reg, cdd->qmgr_mem + QMGR_QUEUE_D(c->q_num));
+}
+
+static void pending_desc(struct cppi41_channel *c)
+{
+	struct cppi41_dd *cdd = c->cdd;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cdd->lock, flags);
+	list_add_tail(&c->node, &cdd->pending);
+	spin_unlock_irqrestore(&cdd->lock, flags);
+}
+
+static void cppi41_dma_issue_pending(struct dma_chan *chan)
+{
+	struct cppi41_channel *c = to_cpp41_chan(chan);
+	struct cppi41_dd *cdd = c->cdd;
+	int error;
+
+	/* PM runtime paired with dmaengine_desc_get_callback_invoke */
+	error = pm_runtime_get(cdd->ddev.dev);
+	if ((error != -EINPROGRESS) && error < 0) {
+		dev_err(cdd->ddev.dev, "Failed to pm_runtime_get: %i\n",
+			error);
+
+		return;
+	}
+
+	if (likely(pm_runtime_active(cdd->ddev.dev)))
+		push_desc_queue(c);
+	else
+		pending_desc(c);
 }

 static u32 get_host_pd0(u32 length)
@@ -940,12 +994,18 @@ static int cppi41_dma_probe(struct platform_device *pdev)
	cdd->ctrl_mem = of_iomap(dev->of_node, 1);
	cdd->sched_mem = of_iomap(dev->of_node, 2);
	cdd->qmgr_mem = of_iomap(dev->of_node, 3);
+	spin_lock_init(&cdd->lock);
+	INIT_LIST_HEAD(&cdd->pending);
+
+	platform_set_drvdata(pdev, cdd);

	if (!cdd->usbss_mem || !cdd->ctrl_mem || !cdd->sched_mem ||
			!cdd->qmgr_mem)
		return -ENXIO;

	pm_runtime_enable(dev);
+	pm_runtime_set_autosuspend_delay(dev, 100);
+	pm_runtime_use_autosuspend(dev);
	ret = pm_runtime_get_sync(dev);
	if (ret < 0)
		goto err_get_sync;
@@ -985,7 +1045,9 @@ static int cppi41_dma_probe(struct platform_device *pdev)
	if (ret)
		goto err_of;

-	platform_set_drvdata(pdev, cdd);
+	pm_runtime_mark_last_busy(dev);
+	pm_runtime_put_autosuspend(dev);
+
	return 0;
 err_of:
	dma_async_device_unregister(&cdd->ddev);
@@ -996,7 +1058,8 @@ err_irq:
 err_chans:
	deinit_cppi41(dev, cdd);
 err_init_cppi:
-	pm_runtime_put(dev);
+	pm_runtime_dont_use_autosuspend(dev);
+	pm_runtime_put_sync(dev);
 err_get_sync:
	pm_runtime_disable(dev);
	iounmap(cdd->usbss_mem);
@@ -1021,13 +1084,13 @@ static int cppi41_dma_remove(struct platform_device *pdev)
	iounmap(cdd->ctrl_mem);
	iounmap(cdd->sched_mem);
	iounmap(cdd->qmgr_mem);
-	pm_runtime_put(&pdev->dev);
+	pm_runtime_dont_use_autosuspend(&pdev->dev);
+	pm_runtime_put_sync(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
	return 0;
 }

-#ifdef CONFIG_PM_SLEEP
-static int cppi41_suspend(struct device *dev)
+static int __maybe_unused cppi41_suspend(struct device *dev)
 {
	struct cppi41_dd *cdd = dev_get_drvdata(dev);
@@ -1038,7 +1101,7 @@ static int cppi41_suspend(struct device *dev)
	return 0;
 }

-static int cppi41_resume(struct device *dev)
+static int __maybe_unused cppi41_resume(struct device *dev)
 {
	struct cppi41_dd *cdd = dev_get_drvdata(dev);
	struct cppi41_channel *c;
@@ -1062,9 +1125,38 @@ static int cppi41_resume(struct device *dev)

	return 0;
 }
-#endif

-static SIMPLE_DEV_PM_OPS(cppi41_pm_ops, cppi41_suspend, cppi41_resume);
+static int __maybe_unused cppi41_runtime_suspend(struct device *dev)
+{
+	struct cppi41_dd *cdd = dev_get_drvdata(dev);
+
+	WARN_ON(!list_empty(&cdd->pending));
+
+	return 0;
+}
+
+static int __maybe_unused cppi41_runtime_resume(struct device *dev)
+{
+	struct cppi41_dd *cdd = dev_get_drvdata(dev);
+	struct cppi41_channel *c, *_c;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cdd->lock, flags);
+	list_for_each_entry_safe(c, _c, &cdd->pending, node) {
+		push_desc_queue(c);
+		list_del(&c->node);
+	}
+	spin_unlock_irqrestore(&cdd->lock, flags);
+
+	return 0;
+}
+
+static const struct dev_pm_ops cppi41_pm_ops = {
+	SET_LATE_SYSTEM_SLEEP_PM_OPS(cppi41_suspend, cppi41_resume)
+	SET_RUNTIME_PM_OPS(cppi41_runtime_suspend,
+			   cppi41_runtime_resume,
+			   NULL)
+};

 static struct platform_driver cpp41_dma_driver = {
	.probe = cppi41_dma_probe,


@@ -21,8 +21,6 @@
 #include <linux/irq.h>
 #include <linux/clk.h>

-#include <asm/mach-jz4740/dma.h>
-
 #include "virt-dma.h"

 #define JZ_DMA_NR_CHANS 6


@@ -324,8 +324,10 @@ static struct dma_async_tx_descriptor *jz4780_dma_prep_slave_sg(
					      sg_dma_address(&sgl[i]),
					      sg_dma_len(&sgl[i]),
					      direction);
-		if (err < 0)
+		if (err < 0) {
+			jz4780_dma_desc_free(&jzchan->desc->vdesc);
			return NULL;
+		}

		desc->desc[i].dcm |= JZ_DMA_DCM_TIE;

@@ -368,8 +370,10 @@ static struct dma_async_tx_descriptor *jz4780_dma_prep_dma_cyclic(
	for (i = 0; i < periods; i++) {
		err = jz4780_dma_setup_hwdesc(jzchan, &desc->desc[i], buf_addr,
					      period_len, direction);
-		if (err < 0)
+		if (err < 0) {
+			jz4780_dma_desc_free(&jzchan->desc->vdesc);
			return NULL;
+		}

		buf_addr += period_len;

@@ -396,7 +400,7 @@ static struct dma_async_tx_descriptor *jz4780_dma_prep_dma_cyclic(
	return vchan_tx_prep(&jzchan->vchan, &desc->vdesc, flags);
 }

-struct dma_async_tx_descriptor *jz4780_dma_prep_dma_memcpy(
+static struct dma_async_tx_descriptor *jz4780_dma_prep_dma_memcpy(
	struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
	size_t len, unsigned long flags)
 {


@@ -997,6 +997,13 @@ int dma_async_device_register(struct dma_device *device)
		}
		chan->client_count = 0;
	}
+
+	if (!chancnt) {
+		dev_err(device->dev, "%s: device has no channels!\n", __func__);
+		rc = -ENODEV;
+		goto err_out;
+	}
+
	device->chancnt = chancnt;

	mutex_lock(&dma_list_mutex);


@@ -86,4 +86,88 @@ static inline void dma_set_residue(struct dma_tx_state *state, u32 residue)
	state->residue = residue;
 }

+struct dmaengine_desc_callback {
+	dma_async_tx_callback callback;
+	dma_async_tx_callback_result callback_result;
+	void *callback_param;
+};
+
+/**
+ * dmaengine_desc_get_callback - get the passed in callback function
+ * @tx: tx descriptor
+ * @cb: temp struct to hold the callback info
+ *
+ * Fill the passed in cb struct with what's available in the passed in
+ * tx descriptor struct
+ * No locking is required.
+ */
+static inline void
+dmaengine_desc_get_callback(struct dma_async_tx_descriptor *tx,
+			    struct dmaengine_desc_callback *cb)
+{
+	cb->callback = tx->callback;
+	cb->callback_result = tx->callback_result;
+	cb->callback_param = tx->callback_param;
+}
+
+/**
+ * dmaengine_desc_callback_invoke - call the callback function in cb struct
+ * @cb: temp struct that is holding the callback info
+ * @result: transaction result
+ *
+ * Call the callback function provided in the cb struct with the parameter
+ * in the cb struct.
+ * Locking is dependent on the driver.
+ */
+static inline void
+dmaengine_desc_callback_invoke(struct dmaengine_desc_callback *cb,
+			       const struct dmaengine_result *result)
+{
+	struct dmaengine_result dummy_result = {
+		.result = DMA_TRANS_NOERROR,
+		.residue = 0
+	};
+
+	if (cb->callback_result) {
+		if (!result)
+			result = &dummy_result;
+		cb->callback_result(cb->callback_param, result);
+	} else if (cb->callback) {
+		cb->callback(cb->callback_param);
+	}
+}
+
+/**
+ * dmaengine_desc_get_callback_invoke - get the callback in tx descriptor and
+ *					then immediately call the callback.
+ * @tx: dma async tx descriptor
+ * @result: transaction result
+ *
+ * Call dmaengine_desc_get_callback() and dmaengine_desc_callback_invoke()
+ * in a single function since no work is necessary in between for the driver.
+ * Locking is dependent on the driver.
+ */
+static inline void
+dmaengine_desc_get_callback_invoke(struct dma_async_tx_descriptor *tx,
+				   const struct dmaengine_result *result)
+{
+	struct dmaengine_desc_callback cb;
+
+	dmaengine_desc_get_callback(tx, &cb);
+	dmaengine_desc_callback_invoke(&cb, result);
+}
+
+/**
+ * dmaengine_desc_callback_valid - verify the callback is valid in cb
+ * @cb: callback info struct
+ *
+ * Return a bool that verifies whether callback in cb is valid or not.
+ * No locking is required.
+ */
+static inline bool
+dmaengine_desc_callback_valid(struct dmaengine_desc_callback *cb)
+{
+	return (cb->callback) ? true : false;
+}
+
 #endif


@@ -56,10 +56,10 @@ module_param(sg_buffers, uint, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(sg_buffers,
		"Number of scatter gather buffers (default: 1)");

-static unsigned int dmatest = 1;
+static unsigned int dmatest;
 module_param(dmatest, uint, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(dmatest,
-		"dmatest 0-memcpy 1-slave_sg (default: 1)");
+		"dmatest 0-memcpy 1-slave_sg (default: 0)");

 static unsigned int xor_sources = 3;
 module_param(xor_sources, uint, S_IRUGO | S_IWUSR);
@@ -426,7 +426,9 @@ static int dmatest_func(void *data)
	int src_cnt;
	int dst_cnt;
	int i;
-	ktime_t ktime;
+	ktime_t ktime, start, diff;
+	ktime_t filltime = ktime_set(0, 0);
+	ktime_t comparetime = ktime_set(0, 0);
	s64 runtime = 0;
	unsigned long long total_len = 0;
@@ -503,7 +505,7 @@ static int dmatest_func(void *data)
		total_tests++;

		/* honor alignment restrictions */
-		if (thread->type == DMA_MEMCPY)
+		if (thread->type == DMA_MEMCPY || thread->type == DMA_SG)
			align = dev->copy_align;
		else if (thread->type == DMA_XOR)
			align = dev->xor_align;
@@ -531,6 +533,7 @@ static int dmatest_func(void *data)
			src_off = 0;
			dst_off = 0;
		} else {
+			start = ktime_get();
			src_off = dmatest_random() % (params->buf_size - len + 1);
			dst_off = dmatest_random() % (params->buf_size - len + 1);

@@ -541,6 +544,9 @@ static int dmatest_func(void *data)
					  params->buf_size);
			dmatest_init_dsts(thread->dsts, dst_off, len,
					  params->buf_size);
+
+			diff = ktime_sub(ktime_get(), start);
+			filltime = ktime_add(filltime, diff);
		}

		um = dmaengine_get_unmap_data(dev->dev, src_cnt+dst_cnt,
@@ -683,6 +689,7 @@ static int dmatest_func(void *data)
			continue;
		}

+		start = ktime_get();
		pr_debug("%s: verifying source buffer...\n", current->comm);
		error_count = dmatest_verify(thread->srcs, 0, src_off,
				0, PATTERN_SRC, true);
@@ -703,6 +710,9 @@ static int dmatest_func(void *data)
				params->buf_size, dst_off + len,
				PATTERN_DST, false);

+		diff = ktime_sub(ktime_get(), start);
+		comparetime = ktime_add(comparetime, diff);
+
		if (error_count) {
			result("data error", total_tests, src_off, dst_off,
			       len, error_count);
@@ -712,7 +722,10 @@ static int dmatest_func(void *data)
			       dst_off, len, 0);
		}
	}
-	runtime = ktime_us_delta(ktime_get(), ktime);
+	ktime = ktime_sub(ktime_get(), ktime);
+	ktime = ktime_sub(ktime, comparetime);
+	ktime = ktime_sub(ktime, filltime);
+	runtime = ktime_to_us(ktime);

	ret = 0;
 err_dstbuf:


@@ -274,20 +274,19 @@ static void
 dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc,
		bool callback_required)
 {
-	dma_async_tx_callback callback = NULL;
-	void *param = NULL;
	struct dma_async_tx_descriptor *txd = &desc->txd;
	struct dw_desc *child;
	unsigned long flags;
+	struct dmaengine_desc_callback cb;

	dev_vdbg(chan2dev(&dwc->chan), "descriptor %u complete\n", txd->cookie);

	spin_lock_irqsave(&dwc->lock, flags);
	dma_cookie_complete(txd);
-	if (callback_required) {
-		callback = txd->callback;
-		param = txd->callback_param;
-	}
+	if (callback_required)
+		dmaengine_desc_get_callback(txd, &cb);
+	else
+		memset(&cb, 0, sizeof(cb));

	/* async_tx_ack */
	list_for_each_entry(child, &desc->tx_list, desc_node)
@@ -296,8 +295,7 @@ dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc,
	dwc_desc_put(dwc, desc);
	spin_unlock_irqrestore(&dwc->lock, flags);

-	if (callback)
-		callback(param);
+	dmaengine_desc_callback_invoke(&cb, NULL);
 }

 static void dwc_complete_all(struct dw_dma *dw, struct dw_dma_chan *dwc)


@@ -263,22 +263,29 @@ static const struct edmacc_param dummy_paramset = {
 #define EDMA_BINDING_LEGACY	0
 #define EDMA_BINDING_TPCC	1

+static const u32 edma_binding_type[] = {
+	[EDMA_BINDING_LEGACY] = EDMA_BINDING_LEGACY,
+	[EDMA_BINDING_TPCC] = EDMA_BINDING_TPCC,
+};
+
 static const struct of_device_id edma_of_ids[] = {
	{
		.compatible = "ti,edma3",
-		.data = (void *)EDMA_BINDING_LEGACY,
+		.data = &edma_binding_type[EDMA_BINDING_LEGACY],
	},
	{
		.compatible = "ti,edma3-tpcc",
-		.data = (void *)EDMA_BINDING_TPCC,
+		.data = &edma_binding_type[EDMA_BINDING_TPCC],
	},
	{}
 };
+MODULE_DEVICE_TABLE(of, edma_of_ids);

 static const struct of_device_id edma_tptc_of_ids[] = {
	{ .compatible = "ti,edma3-tptc", },
	{}
 };
+MODULE_DEVICE_TABLE(of, edma_tptc_of_ids);

 static inline unsigned int edma_read(struct edma_cc *ecc, int offset)
 {
@@ -405,18 +412,12 @@ static inline void edma_param_or(struct edma_cc *ecc, int offset, int param_no,
	edma_or(ecc, EDMA_PARM + offset + (param_no << 5), or);
 }

-static inline void set_bits(int offset, int len, unsigned long *p)
+static inline void edma_set_bits(int offset, int len, unsigned long *p)
 {
	for (; len > 0; len--)
		set_bit(offset + (len - 1), p);
 }

-static inline void clear_bits(int offset, int len, unsigned long *p)
-{
-	for (; len > 0; len--)
-		clear_bit(offset + (len - 1), p);
-}
-
 static void edma_assign_priority_to_queue(struct edma_cc *ecc, int queue_no,
					  int priority)
 {
@@ -464,13 +465,15 @@ static void edma_write_slot(struct edma_cc *ecc, unsigned slot,
	memcpy_toio(ecc->base + PARM_OFFSET(slot), param, PARM_SIZE);
 }

-static void edma_read_slot(struct edma_cc *ecc, unsigned slot,
+static int edma_read_slot(struct edma_cc *ecc, unsigned slot,
			   struct edmacc_param *param)
 {
	slot = EDMA_CHAN_SLOT(slot);
	if (slot >= ecc->num_slots)
-		return;
+		return -EINVAL;
	memcpy_fromio(param, ecc->base + PARM_OFFSET(slot), PARM_SIZE);
+
+	return 0;
 }

 /**
@@ -1476,13 +1479,15 @@ static void edma_error_handler(struct edma_chan *echan)
	struct edma_cc *ecc = echan->ecc;
	struct device *dev = echan->vchan.chan.device->dev;
	struct edmacc_param p;
+	int err;

	if (!echan->edesc)
		return;

	spin_lock(&echan->vchan.lock);

-	edma_read_slot(ecc, echan->slot[0], &p);
+	err = edma_read_slot(ecc, echan->slot[0], &p);
+
	/*
	 * Issue later based on missed flag which will be sure
	 * to happen as:
@@ -1495,7 +1500,7 @@ static void edma_error_handler(struct edma_chan *echan)
	 * lead to some nasty recursion when we are in a NULL
	 * slot. So we avoid doing so and set the missed flag.
	 */
-	if (p.a_b_cnt == 0 && p.ccnt == 0) {
+	if (err || (p.a_b_cnt == 0 && p.ccnt == 0)) {
		dev_dbg(dev, "Error on null slot, setting miss\n");
		echan->missed = 1;
	} else {
@@ -2019,8 +2024,7 @@ static struct edma_soc_info *edma_setup_info_from_dt(struct device *dev,
 {
	struct edma_soc_info *info;
	struct property *prop;
-	size_t sz;
-	int ret;
+	int sz, ret;

	info = devm_kzalloc(dev, sizeof(struct edma_soc_info), GFP_KERNEL);
	if (!info)
@@ -2182,7 +2186,7 @@ static int edma_probe(struct platform_device *pdev)
		const struct of_device_id *match;

		match = of_match_node(edma_of_ids, node);
-		if (match && (u32)match->data == EDMA_BINDING_TPCC)
+		if (match && (*(u32 *)match->data) == EDMA_BINDING_TPCC)
			legacy_mode = false;

		info = edma_setup_info_from_dt(dev, legacy_mode);
@@ -2260,7 +2264,7 @@ static int edma_probe(struct platform_device *pdev)
			for (i = 0; rsv_slots[i][0] != -1; i++) {
				off = rsv_slots[i][0];
				ln = rsv_slots[i][1];
-				set_bits(off, ln, ecc->slot_inuse);
+				edma_set_bits(off, ln, ecc->slot_inuse);
			}
		}
	}


@@ -262,10 +262,8 @@ static void ep93xx_dma_set_active(struct ep93xx_dma_chan *edmac,
 static struct ep93xx_dma_desc *
 ep93xx_dma_get_active(struct ep93xx_dma_chan *edmac)
 {
-	if (list_empty(&edmac->active))
-		return NULL;
-
-	return list_first_entry(&edmac->active, struct ep93xx_dma_desc, node);
+	return list_first_entry_or_null(&edmac->active,
+					struct ep93xx_dma_desc, node);
 }

 /**
@@ -739,10 +737,10 @@ static void ep93xx_dma_tasklet(unsigned long data)
 {
	struct ep93xx_dma_chan *edmac = (struct ep93xx_dma_chan *)data;
	struct ep93xx_dma_desc *desc, *d;
-	dma_async_tx_callback callback = NULL;
-	void *callback_param = NULL;
+	struct dmaengine_desc_callback cb;
	LIST_HEAD(list);

+	memset(&cb, 0, sizeof(cb));
	spin_lock_irq(&edmac->lock);
	/*
	 * If dma_terminate_all() was called before we get to run, the active
@@ -757,8 +755,7 @@ static void ep93xx_dma_tasklet(unsigned long data)
			dma_cookie_complete(&desc->txd);
			list_splice_init(&edmac->active, &list);
		}
-		callback = desc->txd.callback;
-		callback_param = desc->txd.callback_param;
+		dmaengine_desc_get_callback(&desc->txd, &cb);
	}
	spin_unlock_irq(&edmac->lock);
@@ -771,8 +768,7 @@ static void ep93xx_dma_tasklet(unsigned long data)
		ep93xx_dma_desc_put(edmac, desc);
	}

-	if (callback)
-		callback(callback_param);
+	dmaengine_desc_callback_invoke(&cb, NULL);
 }

 static irqreturn_t ep93xx_dma_interrupt(int irq, void *dev_id)
@@ -1047,11 +1043,11 @@ ep93xx_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
	first = NULL;
	for_each_sg(sgl, sg, sg_len, i) {
-		size_t sg_len = sg_dma_len(sg);
+		size_t len = sg_dma_len(sg);

-		if (sg_len > DMA_MAX_CHAN_BYTES) {
-			dev_warn(chan2dev(edmac), "too big transfer size %d\n",
-				 sg_len);
+		if (len > DMA_MAX_CHAN_BYTES) {
+			dev_warn(chan2dev(edmac), "too big transfer size %zu\n",
+				 len);
			goto fail;
		}

@@ -1068,7 +1064,7 @@ ep93xx_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
			desc->src_addr = edmac->runtime_addr;
			desc->dst_addr = sg_dma_address(sg);
		}
-		desc->size = sg_len;
+		desc->size = len;

		if (!first)
			first = desc;
@@ -1125,7 +1121,7 @@ ep93xx_dma_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t dma_addr,
	}

	if (period_len > DMA_MAX_CHAN_BYTES) {
-		dev_warn(chan2dev(edmac), "too big period length %d\n",
+		dev_warn(chan2dev(edmac), "too big period length %zu\n",
			 period_len);
		return NULL;
	}


@@ -134,17 +134,9 @@ static void fsl_re_issue_pending(struct dma_chan *chan)

 static void fsl_re_desc_done(struct fsl_re_desc *desc)
 {
-	dma_async_tx_callback callback;
-	void *callback_param;
-
	dma_cookie_complete(&desc->async_tx);
-
-	callback = desc->async_tx.callback;
-	callback_param = desc->async_tx.callback_param;
-	if (callback)
-		callback(callback_param);
-
	dma_descriptor_unmap(&desc->async_tx);
+	dmaengine_desc_get_callback_invoke(&desc->async_tx, NULL);
 }

 static void fsl_re_cleanup_descs(struct fsl_re_chan *re_chan)
@@ -670,7 +662,7 @@ static int fsl_re_chan_probe(struct platform_device *ofdev,

	/* read irq property from dts */
	chan->irq = irq_of_parse_and_map(np, 0);
-	if (chan->irq == NO_IRQ) {
+	if (!chan->irq) {
		dev_err(dev, "No IRQ defined for JR %d\n", q);
		ret = -ENODEV;
		goto err_free;


@ -516,13 +516,9 @@ static dma_cookie_t fsldma_run_tx_complete_actions(struct fsldma_chan *chan,
if (txd->cookie > 0) { if (txd->cookie > 0) {
ret = txd->cookie; ret = txd->cookie;
/* Run the link descriptor callback function */
if (txd->callback) {
chan_dbg(chan, "LD %p callback\n", desc);
txd->callback(txd->callback_param);
}
dma_descriptor_unmap(txd); dma_descriptor_unmap(txd);
/* Run the link descriptor callback function */
dmaengine_desc_get_callback_invoke(txd, NULL);
} }
/* Run any dependencies */ /* Run any dependencies */
@ -1153,7 +1149,7 @@ static void fsldma_free_irqs(struct fsldma_device *fdev)
struct fsldma_chan *chan; struct fsldma_chan *chan;
int i; int i;
if (fdev->irq != NO_IRQ) { if (fdev->irq) {
dev_dbg(fdev->dev, "free per-controller IRQ\n"); dev_dbg(fdev->dev, "free per-controller IRQ\n");
free_irq(fdev->irq, fdev); free_irq(fdev->irq, fdev);
return; return;
@ -1161,7 +1157,7 @@ static void fsldma_free_irqs(struct fsldma_device *fdev)
for (i = 0; i < FSL_DMA_MAX_CHANS_PER_DEVICE; i++) { for (i = 0; i < FSL_DMA_MAX_CHANS_PER_DEVICE; i++) {
chan = fdev->chan[i]; chan = fdev->chan[i];
if (chan && chan->irq != NO_IRQ) { if (chan && chan->irq) {
chan_dbg(chan, "free per-channel IRQ\n"); chan_dbg(chan, "free per-channel IRQ\n");
free_irq(chan->irq, chan); free_irq(chan->irq, chan);
} }
@ -1175,7 +1171,7 @@ static int fsldma_request_irqs(struct fsldma_device *fdev)
int i; int i;
/* if we have a per-controller IRQ, use that */ /* if we have a per-controller IRQ, use that */
if (fdev->irq != NO_IRQ) { if (fdev->irq) {
dev_dbg(fdev->dev, "request per-controller IRQ\n"); dev_dbg(fdev->dev, "request per-controller IRQ\n");
ret = request_irq(fdev->irq, fsldma_ctrl_irq, IRQF_SHARED, ret = request_irq(fdev->irq, fsldma_ctrl_irq, IRQF_SHARED,
"fsldma-controller", fdev); "fsldma-controller", fdev);
@ -1188,7 +1184,7 @@ static int fsldma_request_irqs(struct fsldma_device *fdev)
if (!chan) if (!chan)
continue; continue;
if (chan->irq == NO_IRQ) { if (!chan->irq) {
chan_err(chan, "interrupts property missing in device tree\n"); chan_err(chan, "interrupts property missing in device tree\n");
ret = -ENODEV; ret = -ENODEV;
goto out_unwind; goto out_unwind;
@ -1211,7 +1207,7 @@ out_unwind:
if (!chan) if (!chan)
continue; continue;
if (chan->irq == NO_IRQ) if (!chan->irq)
continue; continue;
free_irq(chan->irq, chan); free_irq(chan->irq, chan);
@ -1311,7 +1307,7 @@ static int fsl_dma_chan_probe(struct fsldma_device *fdev,
list_add_tail(&chan->common.device_node, &fdev->common.channels); list_add_tail(&chan->common.device_node, &fdev->common.channels);
dev_info(fdev->dev, "#%d (%s), irq %d\n", chan->id, compatible, dev_info(fdev->dev, "#%d (%s), irq %d\n", chan->id, compatible,
chan->irq != NO_IRQ ? chan->irq : fdev->irq); chan->irq ? chan->irq : fdev->irq);
return 0; return 0;
@ -1351,7 +1347,7 @@ static int fsldma_of_probe(struct platform_device *op)
if (!fdev->regs) { if (!fdev->regs) {
dev_err(&op->dev, "unable to ioremap registers\n"); dev_err(&op->dev, "unable to ioremap registers\n");
err = -ENOMEM; err = -ENOMEM;
goto out_free_fdev; goto out_free;
} }
/* map the channel IRQ if it exists, but don't hookup the handler yet */ /* map the channel IRQ if it exists, but don't hookup the handler yet */
@ -1416,6 +1412,8 @@ static int fsldma_of_probe(struct platform_device *op)
out_free_fdev: out_free_fdev:
irq_dispose_mapping(fdev->irq); irq_dispose_mapping(fdev->irq);
iounmap(fdev->regs);
out_free:
kfree(fdev); kfree(fdev);
out_return: out_return:
return err; return err;


@@ -663,9 +663,7 @@ static void imxdma_tasklet(unsigned long data)
 out:
     spin_unlock_irqrestore(&imxdma->lock, flags);
 
-    if (desc->desc.callback)
-        desc->desc.callback(desc->desc.callback_param);
+    dmaengine_desc_get_callback_invoke(&desc->desc, NULL);
 }
 
 static int imxdma_terminate_all(struct dma_chan *chan)


@ -184,7 +184,7 @@
struct sdma_mode_count { struct sdma_mode_count {
u32 count : 16; /* size of the buffer pointed by this BD */ u32 count : 16; /* size of the buffer pointed by this BD */
u32 status : 8; /* E,R,I,C,W,D status bits stored here */ u32 status : 8; /* E,R,I,C,W,D status bits stored here */
u32 command : 8; /* command mostlky used for channel 0 */ u32 command : 8; /* command mostly used for channel 0 */
}; };
/* /*
@ -479,6 +479,24 @@ static struct sdma_driver_data sdma_imx6q = {
.script_addrs = &sdma_script_imx6q, .script_addrs = &sdma_script_imx6q,
}; };
static struct sdma_script_start_addrs sdma_script_imx7d = {
.ap_2_ap_addr = 644,
.uart_2_mcu_addr = 819,
.mcu_2_app_addr = 749,
.uartsh_2_mcu_addr = 1034,
.mcu_2_shp_addr = 962,
.app_2_mcu_addr = 685,
.shp_2_mcu_addr = 893,
.spdif_2_mcu_addr = 1102,
.mcu_2_spdif_addr = 1136,
};
static struct sdma_driver_data sdma_imx7d = {
.chnenbl0 = SDMA_CHNENBL0_IMX35,
.num_events = 48,
.script_addrs = &sdma_script_imx7d,
};
static const struct platform_device_id sdma_devtypes[] = { static const struct platform_device_id sdma_devtypes[] = {
{ {
.name = "imx25-sdma", .name = "imx25-sdma",
@ -498,6 +516,9 @@ static const struct platform_device_id sdma_devtypes[] = {
}, { }, {
.name = "imx6q-sdma", .name = "imx6q-sdma",
.driver_data = (unsigned long)&sdma_imx6q, .driver_data = (unsigned long)&sdma_imx6q,
}, {
.name = "imx7d-sdma",
.driver_data = (unsigned long)&sdma_imx7d,
}, { }, {
/* sentinel */ /* sentinel */
} }
@ -511,6 +532,7 @@ static const struct of_device_id sdma_dt_ids[] = {
{ .compatible = "fsl,imx35-sdma", .data = &sdma_imx35, }, { .compatible = "fsl,imx35-sdma", .data = &sdma_imx35, },
{ .compatible = "fsl,imx31-sdma", .data = &sdma_imx31, }, { .compatible = "fsl,imx31-sdma", .data = &sdma_imx31, },
{ .compatible = "fsl,imx25-sdma", .data = &sdma_imx25, }, { .compatible = "fsl,imx25-sdma", .data = &sdma_imx25, },
{ .compatible = "fsl,imx7d-sdma", .data = &sdma_imx7d, },
{ /* sentinel */ } { /* sentinel */ }
}; };
MODULE_DEVICE_TABLE(of, sdma_dt_ids); MODULE_DEVICE_TABLE(of, sdma_dt_ids);
@ -686,8 +708,7 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
* executed. * executed.
*/ */
if (sdmac->desc.callback) dmaengine_desc_get_callback_invoke(&sdmac->desc, NULL);
sdmac->desc.callback(sdmac->desc.callback_param);
sdmac->buf_tail++; sdmac->buf_tail++;
sdmac->buf_tail %= sdmac->num_bd; sdmac->buf_tail %= sdmac->num_bd;
@ -722,8 +743,8 @@ static void mxc_sdma_handle_channel_normal(unsigned long data)
sdmac->status = DMA_COMPLETE; sdmac->status = DMA_COMPLETE;
dma_cookie_complete(&sdmac->desc); dma_cookie_complete(&sdmac->desc);
if (sdmac->desc.callback)
sdmac->desc.callback(sdmac->desc.callback_param); dmaengine_desc_get_callback_invoke(&sdmac->desc, NULL);
} }
static irqreturn_t sdma_int_handler(int irq, void *dev_id) static irqreturn_t sdma_int_handler(int irq, void *dev_id)
@ -1387,6 +1408,7 @@ static void sdma_issue_pending(struct dma_chan *chan)
#define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V1 34 #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V1 34
#define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V2 38 #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V2 38
#define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3 41 #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3 41
#define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V4 42
static void sdma_add_scripts(struct sdma_engine *sdma, static void sdma_add_scripts(struct sdma_engine *sdma,
const struct sdma_script_start_addrs *addr) const struct sdma_script_start_addrs *addr)
@ -1436,6 +1458,9 @@ static void sdma_load_firmware(const struct firmware *fw, void *context)
case 3: case 3:
sdma->script_number = SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3; sdma->script_number = SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3;
break; break;
case 4:
sdma->script_number = SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V4;
break;
default: default:
dev_err(sdma->dev, "unknown firmware version\n"); dev_err(sdma->dev, "unknown firmware version\n");
goto err_firmware; goto err_firmware;


@ -38,8 +38,54 @@
#include "../dmaengine.h" #include "../dmaengine.h"
static char *chanerr_str[] = {
"DMA Transfer Destination Address Error",
"Next Descriptor Address Error",
"Descriptor Error",
"Chan Address Value Error",
"CHANCMD Error",
"Chipset Uncorrectable Data Integrity Error",
"DMA Uncorrectable Data Integrity Error",
"Read Data Error",
"Write Data Error",
"Descriptor Control Error",
"Descriptor Transfer Size Error",
"Completion Address Error",
"Interrupt Configuration Error",
"Super extended descriptor Address Error",
"Unaffiliated Error",
"CRC or XOR P Error",
"XOR Q Error",
"Descriptor Count Error",
"DIF All F detect Error",
"Guard Tag verification Error",
"Application Tag verification Error",
"Reference Tag verification Error",
"Bundle Bit Error",
"Result DIF All F detect Error",
"Result Guard Tag verification Error",
"Result Application Tag verification Error",
"Result Reference Tag verification Error",
NULL
};
static void ioat_eh(struct ioatdma_chan *ioat_chan); static void ioat_eh(struct ioatdma_chan *ioat_chan);
static void ioat_print_chanerrs(struct ioatdma_chan *ioat_chan, u32 chanerr)
{
int i;
for (i = 0; i < 32; i++) {
if ((chanerr >> i) & 1) {
if (chanerr_str[i]) {
dev_err(to_dev(ioat_chan), "Err(%d): %s\n",
i, chanerr_str[i]);
} else
break;
}
}
}
/** /**
* ioat_dma_do_interrupt - handler used for single vector interrupt mode * ioat_dma_do_interrupt - handler used for single vector interrupt mode
* @irq: interrupt id * @irq: interrupt id
@ -568,12 +614,14 @@ static void __cleanup(struct ioatdma_chan *ioat_chan, dma_addr_t phys_complete)
tx = &desc->txd; tx = &desc->txd;
if (tx->cookie) { if (tx->cookie) {
struct dmaengine_result res;
dma_cookie_complete(tx); dma_cookie_complete(tx);
dma_descriptor_unmap(tx); dma_descriptor_unmap(tx);
if (tx->callback) { res.result = DMA_TRANS_NOERROR;
tx->callback(tx->callback_param); dmaengine_desc_get_callback_invoke(tx, NULL);
tx->callback = NULL; tx->callback = NULL;
} tx->callback_result = NULL;
} }
if (tx->phys == phys_complete) if (tx->phys == phys_complete)
@ -622,7 +670,8 @@ static void ioat_cleanup(struct ioatdma_chan *ioat_chan)
if (is_ioat_halted(*ioat_chan->completion)) { if (is_ioat_halted(*ioat_chan->completion)) {
u32 chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET); u32 chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
if (chanerr & IOAT_CHANERR_HANDLE_MASK) { if (chanerr &
(IOAT_CHANERR_HANDLE_MASK | IOAT_CHANERR_RECOVER_MASK)) {
mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT); mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
ioat_eh(ioat_chan); ioat_eh(ioat_chan);
} }
@ -652,6 +701,61 @@ static void ioat_restart_channel(struct ioatdma_chan *ioat_chan)
__ioat_restart_chan(ioat_chan); __ioat_restart_chan(ioat_chan);
} }
static void ioat_abort_descs(struct ioatdma_chan *ioat_chan)
{
struct ioatdma_device *ioat_dma = ioat_chan->ioat_dma;
struct ioat_ring_ent *desc;
u16 active;
int idx = ioat_chan->tail, i;
/*
* We assume that the failed descriptor has been processed.
* Now we are just returning all the remaining submitted
* descriptors to abort.
*/
active = ioat_ring_active(ioat_chan);
/* we skip the failed descriptor that tail points to */
for (i = 1; i < active; i++) {
struct dma_async_tx_descriptor *tx;
smp_read_barrier_depends();
prefetch(ioat_get_ring_ent(ioat_chan, idx + i + 1));
desc = ioat_get_ring_ent(ioat_chan, idx + i);
tx = &desc->txd;
if (tx->cookie) {
struct dmaengine_result res;
dma_cookie_complete(tx);
dma_descriptor_unmap(tx);
res.result = DMA_TRANS_ABORTED;
dmaengine_desc_get_callback_invoke(tx, &res);
tx->callback = NULL;
tx->callback_result = NULL;
}
/* skip extended descriptors */
if (desc_has_ext(desc)) {
WARN_ON(i + 1 >= active);
i++;
}
/* cleanup super extended descriptors */
if (desc->sed) {
ioat_free_sed(ioat_dma, desc->sed);
desc->sed = NULL;
}
}
smp_mb(); /* finish all descriptor reads before incrementing tail */
ioat_chan->tail = idx + active;
desc = ioat_get_ring_ent(ioat_chan, ioat_chan->tail);
ioat_chan->last_completion = *ioat_chan->completion = desc->txd.phys;
}
static void ioat_eh(struct ioatdma_chan *ioat_chan) static void ioat_eh(struct ioatdma_chan *ioat_chan)
{ {
struct pci_dev *pdev = to_pdev(ioat_chan); struct pci_dev *pdev = to_pdev(ioat_chan);
@ -662,6 +766,8 @@ static void ioat_eh(struct ioatdma_chan *ioat_chan)
u32 err_handled = 0; u32 err_handled = 0;
u32 chanerr_int; u32 chanerr_int;
u32 chanerr; u32 chanerr;
bool abort = false;
struct dmaengine_result res;
/* cleanup so tail points to descriptor that caused the error */ /* cleanup so tail points to descriptor that caused the error */
if (ioat_cleanup_preamble(ioat_chan, &phys_complete)) if (ioat_cleanup_preamble(ioat_chan, &phys_complete))
@ -697,30 +803,55 @@ static void ioat_eh(struct ioatdma_chan *ioat_chan)
break; break;
} }
if (chanerr & IOAT_CHANERR_RECOVER_MASK) {
if (chanerr & IOAT_CHANERR_READ_DATA_ERR) {
res.result = DMA_TRANS_READ_FAILED;
err_handled |= IOAT_CHANERR_READ_DATA_ERR;
} else if (chanerr & IOAT_CHANERR_WRITE_DATA_ERR) {
res.result = DMA_TRANS_WRITE_FAILED;
err_handled |= IOAT_CHANERR_WRITE_DATA_ERR;
}
abort = true;
} else
res.result = DMA_TRANS_NOERROR;
/* fault on unhandled error or spurious halt */ /* fault on unhandled error or spurious halt */
if (chanerr ^ err_handled || chanerr == 0) { if (chanerr ^ err_handled || chanerr == 0) {
dev_err(to_dev(ioat_chan), "%s: fatal error (%x:%x)\n", dev_err(to_dev(ioat_chan), "%s: fatal error (%x:%x)\n",
__func__, chanerr, err_handled); __func__, chanerr, err_handled);
dev_err(to_dev(ioat_chan), "Errors handled:\n");
ioat_print_chanerrs(ioat_chan, err_handled);
dev_err(to_dev(ioat_chan), "Errors not handled:\n");
ioat_print_chanerrs(ioat_chan, (chanerr & ~err_handled));
BUG(); BUG();
} else { /* cleanup the faulty descriptor */
tx = &desc->txd;
if (tx->cookie) {
dma_cookie_complete(tx);
dma_descriptor_unmap(tx);
if (tx->callback) {
tx->callback(tx->callback_param);
tx->callback = NULL;
}
}
} }
writel(chanerr, ioat_chan->reg_base + IOAT_CHANERR_OFFSET); /* cleanup the faulty descriptor since we are continuing */
pci_write_config_dword(pdev, IOAT_PCI_CHANERR_INT_OFFSET, chanerr_int); tx = &desc->txd;
if (tx->cookie) {
dma_cookie_complete(tx);
dma_descriptor_unmap(tx);
dmaengine_desc_get_callback_invoke(tx, &res);
tx->callback = NULL;
tx->callback_result = NULL;
}
/* mark faulting descriptor as complete */ /* mark faulting descriptor as complete */
*ioat_chan->completion = desc->txd.phys; *ioat_chan->completion = desc->txd.phys;
spin_lock_bh(&ioat_chan->prep_lock); spin_lock_bh(&ioat_chan->prep_lock);
/* we need abort all descriptors */
if (abort) {
ioat_abort_descs(ioat_chan);
/* clean up the channel, we could be in weird state */
ioat_reset_hw(ioat_chan);
}
writel(chanerr, ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
pci_write_config_dword(pdev, IOAT_PCI_CHANERR_INT_OFFSET, chanerr_int);
ioat_restart_channel(ioat_chan); ioat_restart_channel(ioat_chan);
spin_unlock_bh(&ioat_chan->prep_lock); spin_unlock_bh(&ioat_chan->prep_lock);
} }
@ -753,10 +884,28 @@ void ioat_timer_event(unsigned long data)
chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET); chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
dev_err(to_dev(ioat_chan), "%s: Channel halted (%x)\n", dev_err(to_dev(ioat_chan), "%s: Channel halted (%x)\n",
__func__, chanerr); __func__, chanerr);
if (test_bit(IOAT_RUN, &ioat_chan->state)) dev_err(to_dev(ioat_chan), "Errors:\n");
BUG_ON(is_ioat_bug(chanerr)); ioat_print_chanerrs(ioat_chan, chanerr);
else /* we never got off the ground */
return; if (test_bit(IOAT_RUN, &ioat_chan->state)) {
spin_lock_bh(&ioat_chan->cleanup_lock);
spin_lock_bh(&ioat_chan->prep_lock);
set_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
spin_unlock_bh(&ioat_chan->prep_lock);
ioat_abort_descs(ioat_chan);
dev_warn(to_dev(ioat_chan), "Reset channel...\n");
ioat_reset_hw(ioat_chan);
dev_warn(to_dev(ioat_chan), "Restart channel...\n");
ioat_restart_channel(ioat_chan);
spin_lock_bh(&ioat_chan->prep_lock);
clear_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
spin_unlock_bh(&ioat_chan->prep_lock);
spin_unlock_bh(&ioat_chan->cleanup_lock);
}
return;
} }
spin_lock_bh(&ioat_chan->cleanup_lock); spin_lock_bh(&ioat_chan->cleanup_lock);
@ -780,14 +929,26 @@ void ioat_timer_event(unsigned long data)
u32 chanerr; u32 chanerr;
chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET); chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
dev_warn(to_dev(ioat_chan), "Restarting channel...\n"); dev_err(to_dev(ioat_chan), "CHANSTS: %#Lx CHANERR: %#x\n",
dev_warn(to_dev(ioat_chan), "CHANSTS: %#Lx CHANERR: %#x\n", status, chanerr);
status, chanerr); dev_err(to_dev(ioat_chan), "Errors:\n");
dev_warn(to_dev(ioat_chan), "Active descriptors: %d\n", ioat_print_chanerrs(ioat_chan, chanerr);
ioat_ring_active(ioat_chan));
dev_dbg(to_dev(ioat_chan), "Active descriptors: %d\n",
ioat_ring_active(ioat_chan));
spin_lock_bh(&ioat_chan->prep_lock); spin_lock_bh(&ioat_chan->prep_lock);
set_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
spin_unlock_bh(&ioat_chan->prep_lock);
ioat_abort_descs(ioat_chan);
dev_warn(to_dev(ioat_chan), "Resetting channel...\n");
ioat_reset_hw(ioat_chan);
dev_warn(to_dev(ioat_chan), "Restarting channel...\n");
ioat_restart_channel(ioat_chan); ioat_restart_channel(ioat_chan);
spin_lock_bh(&ioat_chan->prep_lock);
clear_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
spin_unlock_bh(&ioat_chan->prep_lock); spin_unlock_bh(&ioat_chan->prep_lock);
spin_unlock_bh(&ioat_chan->cleanup_lock); spin_unlock_bh(&ioat_chan->cleanup_lock);
return; return;
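
The ioat changes above are the producer side of the new error reporting: read/write faults complete the offending descriptor with DMA_TRANS_READ_FAILED or DMA_TRANS_WRITE_FAILED, and the descriptors unwound by ioat_abort_descs() are completed with DMA_TRANS_ABORTED. A client that wants this information registers callback_result instead of callback on the descriptor; a hedged sketch of the consumer side (my_* names are illustrative, and the callback signature is assumed to match the struct dmaengine_result argument that dmaengine_desc_get_callback_invoke() passes in the hunks above):

    static void my_dma_done(void *param, const struct dmaengine_result *res)
    {
        struct completion *done = param;

        if (res && res->result != DMA_TRANS_NOERROR)
            pr_warn("dma transfer failed: %d\n", res->result);
        complete(done);
    }

    static int my_submit(struct dma_chan *chan,
                 struct dma_async_tx_descriptor *tx,
                 struct completion *done)
    {
        tx->callback_result = my_dma_done;  /* instead of tx->callback */
        tx->callback_param = done;

        if (dma_submit_error(dmaengine_submit(tx)))
            return -EIO;
        dma_async_issue_pending(chan);
        return 0;
    }

Clients that only set tx->callback keep working as before; they simply never see the result code.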


@@ -828,7 +828,7 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
 
     dest_dma = dma_map_page(dev, dest, 0, PAGE_SIZE, DMA_FROM_DEVICE);
     if (dma_mapping_error(dev, dest_dma))
-        goto dma_unmap;
+        goto free_resources;
 
     for (i = 0; i < IOAT_NUM_SRC_TEST; i++)
         dma_srcs[i] = DMA_ERROR_CODE;


@@ -240,6 +240,8 @@
 #define IOAT_CHANERR_DESCRIPTOR_COUNT_ERR	0x40000
 
 #define IOAT_CHANERR_HANDLE_MASK (IOAT_CHANERR_XOR_P_OR_CRC_ERR | IOAT_CHANERR_XOR_Q_ERR)
+#define IOAT_CHANERR_RECOVER_MASK (IOAT_CHANERR_READ_DATA_ERR | \
+				   IOAT_CHANERR_WRITE_DATA_ERR)
 
 #define IOAT_CHANERR_MASK_OFFSET		0x2C	/* 32-bit Channel Error Register */


@@ -71,8 +71,7 @@ iop_adma_run_tx_complete_actions(struct iop_adma_desc_slot *desc,
         /* call the callback (must not sleep or submit new
          * operations to this channel)
          */
-        if (tx->callback)
-            tx->callback(tx->callback_param);
+        dmaengine_desc_get_callback_invoke(tx, NULL);
 
         dma_descriptor_unmap(tx);
         if (desc->group_head)


@ -1160,11 +1160,10 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id)
struct scatterlist **sg, *sgnext, *sgnew = NULL; struct scatterlist **sg, *sgnext, *sgnew = NULL;
/* Next transfer descriptor */ /* Next transfer descriptor */
struct idmac_tx_desc *desc, *descnew; struct idmac_tx_desc *desc, *descnew;
dma_async_tx_callback callback;
void *callback_param;
bool done = false; bool done = false;
u32 ready0, ready1, curbuf, err; u32 ready0, ready1, curbuf, err;
unsigned long flags; unsigned long flags;
struct dmaengine_desc_callback cb;
/* IDMAC has cleared the respective BUFx_RDY bit, we manage the buffer */ /* IDMAC has cleared the respective BUFx_RDY bit, we manage the buffer */
@ -1278,12 +1277,12 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id)
if (likely(sgnew) && if (likely(sgnew) &&
ipu_submit_buffer(ichan, descnew, sgnew, ichan->active_buffer) < 0) { ipu_submit_buffer(ichan, descnew, sgnew, ichan->active_buffer) < 0) {
callback = descnew->txd.callback; dmaengine_desc_get_callback(&descnew->txd, &cb);
callback_param = descnew->txd.callback_param;
list_del_init(&descnew->list); list_del_init(&descnew->list);
spin_unlock(&ichan->lock); spin_unlock(&ichan->lock);
if (callback)
callback(callback_param); dmaengine_desc_callback_invoke(&cb, NULL);
spin_lock(&ichan->lock); spin_lock(&ichan->lock);
} }
@ -1292,13 +1291,12 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id)
if (done) if (done)
dma_cookie_complete(&desc->txd); dma_cookie_complete(&desc->txd);
callback = desc->txd.callback; dmaengine_desc_get_callback(&desc->txd, &cb);
callback_param = desc->txd.callback_param;
spin_unlock(&ichan->lock); spin_unlock(&ichan->lock);
if (done && (desc->txd.flags & DMA_PREP_INTERRUPT) && callback) if (done && (desc->txd.flags & DMA_PREP_INTERRUPT))
callback(callback_param); dmaengine_desc_callback_invoke(&cb, NULL);
return IRQ_HANDLED; return IRQ_HANDLED;
} }


@@ -286,22 +286,21 @@ static void ipu_irq_handler(struct irq_desc *desc)
         raw_spin_unlock(&bank_lock);
         while ((line = ffs(status))) {
             struct ipu_irq_map *map;
-            unsigned int irq = NO_IRQ;
+            unsigned int irq;
 
             line--;
             status &= ~(1UL << line);
 
             raw_spin_lock(&bank_lock);
             map = src2map(32 * i + line);
-            if (map)
-                irq = map->irq;
-            raw_spin_unlock(&bank_lock);
-
             if (!map) {
+                raw_spin_unlock(&bank_lock);
                 pr_err("IPU: Interrupt on unmapped source %u bank %d\n",
                        line, i);
                 continue;
             }
+            irq = map->irq;
+            raw_spin_unlock(&bank_lock);
             generic_handle_irq(irq);
         }
     }


@ -1,5 +1,5 @@
/* /*
* Copyright (c) 2013 Linaro Ltd. * Copyright (c) 2013 - 2015 Linaro Ltd.
* Copyright (c) 2013 Hisilicon Limited. * Copyright (c) 2013 Hisilicon Limited.
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
@ -8,6 +8,8 @@
*/ */
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/device.h> #include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
#include <linux/dmaengine.h> #include <linux/dmaengine.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
@ -25,22 +27,28 @@
#define DRIVER_NAME "k3-dma" #define DRIVER_NAME "k3-dma"
#define DMA_MAX_SIZE 0x1ffc #define DMA_MAX_SIZE 0x1ffc
#define DMA_CYCLIC_MAX_PERIOD 0x1000
#define LLI_BLOCK_SIZE (4 * PAGE_SIZE)
#define INT_STAT 0x00 #define INT_STAT 0x00
#define INT_TC1 0x04 #define INT_TC1 0x04
#define INT_TC2 0x08
#define INT_ERR1 0x0c #define INT_ERR1 0x0c
#define INT_ERR2 0x10 #define INT_ERR2 0x10
#define INT_TC1_MASK 0x18 #define INT_TC1_MASK 0x18
#define INT_TC2_MASK 0x1c
#define INT_ERR1_MASK 0x20 #define INT_ERR1_MASK 0x20
#define INT_ERR2_MASK 0x24 #define INT_ERR2_MASK 0x24
#define INT_TC1_RAW 0x600 #define INT_TC1_RAW 0x600
#define INT_ERR1_RAW 0x608 #define INT_TC2_RAW 0x608
#define INT_ERR2_RAW 0x610 #define INT_ERR1_RAW 0x610
#define INT_ERR2_RAW 0x618
#define CH_PRI 0x688 #define CH_PRI 0x688
#define CH_STAT 0x690 #define CH_STAT 0x690
#define CX_CUR_CNT 0x704 #define CX_CUR_CNT 0x704
#define CX_LLI 0x800 #define CX_LLI 0x800
#define CX_CNT 0x810 #define CX_CNT1 0x80c
#define CX_CNT0 0x810
#define CX_SRC 0x814 #define CX_SRC 0x814
#define CX_DST 0x818 #define CX_DST 0x818
#define CX_CFG 0x81c #define CX_CFG 0x81c
@ -49,6 +57,7 @@
#define CX_LLI_CHAIN_EN 0x2 #define CX_LLI_CHAIN_EN 0x2
#define CX_CFG_EN 0x1 #define CX_CFG_EN 0x1
#define CX_CFG_NODEIRQ BIT(1)
#define CX_CFG_MEM2PER (0x1 << 2) #define CX_CFG_MEM2PER (0x1 << 2)
#define CX_CFG_PER2MEM (0x2 << 2) #define CX_CFG_PER2MEM (0x2 << 2)
#define CX_CFG_SRCINCR (0x1 << 31) #define CX_CFG_SRCINCR (0x1 << 31)
@ -68,7 +77,7 @@ struct k3_dma_desc_sw {
dma_addr_t desc_hw_lli; dma_addr_t desc_hw_lli;
size_t desc_num; size_t desc_num;
size_t size; size_t size;
struct k3_desc_hw desc_hw[0]; struct k3_desc_hw *desc_hw;
}; };
struct k3_dma_phy; struct k3_dma_phy;
@ -81,6 +90,7 @@ struct k3_dma_chan {
enum dma_transfer_direction dir; enum dma_transfer_direction dir;
dma_addr_t dev_addr; dma_addr_t dev_addr;
enum dma_status status; enum dma_status status;
bool cyclic;
}; };
struct k3_dma_phy { struct k3_dma_phy {
@ -100,6 +110,7 @@ struct k3_dma_dev {
struct k3_dma_phy *phy; struct k3_dma_phy *phy;
struct k3_dma_chan *chans; struct k3_dma_chan *chans;
struct clk *clk; struct clk *clk;
struct dma_pool *pool;
u32 dma_channels; u32 dma_channels;
u32 dma_requests; u32 dma_requests;
unsigned int irq; unsigned int irq;
@ -135,6 +146,7 @@ static void k3_dma_terminate_chan(struct k3_dma_phy *phy, struct k3_dma_dev *d)
val = 0x1 << phy->idx; val = 0x1 << phy->idx;
writel_relaxed(val, d->base + INT_TC1_RAW); writel_relaxed(val, d->base + INT_TC1_RAW);
writel_relaxed(val, d->base + INT_TC2_RAW);
writel_relaxed(val, d->base + INT_ERR1_RAW); writel_relaxed(val, d->base + INT_ERR1_RAW);
writel_relaxed(val, d->base + INT_ERR2_RAW); writel_relaxed(val, d->base + INT_ERR2_RAW);
} }
@ -142,7 +154,7 @@ static void k3_dma_terminate_chan(struct k3_dma_phy *phy, struct k3_dma_dev *d)
static void k3_dma_set_desc(struct k3_dma_phy *phy, struct k3_desc_hw *hw) static void k3_dma_set_desc(struct k3_dma_phy *phy, struct k3_desc_hw *hw)
{ {
writel_relaxed(hw->lli, phy->base + CX_LLI); writel_relaxed(hw->lli, phy->base + CX_LLI);
writel_relaxed(hw->count, phy->base + CX_CNT); writel_relaxed(hw->count, phy->base + CX_CNT0);
writel_relaxed(hw->saddr, phy->base + CX_SRC); writel_relaxed(hw->saddr, phy->base + CX_SRC);
writel_relaxed(hw->daddr, phy->base + CX_DST); writel_relaxed(hw->daddr, phy->base + CX_DST);
writel_relaxed(AXI_CFG_DEFAULT, phy->base + AXI_CFG); writel_relaxed(AXI_CFG_DEFAULT, phy->base + AXI_CFG);
@ -176,11 +188,13 @@ static void k3_dma_enable_dma(struct k3_dma_dev *d, bool on)
/* unmask irq */ /* unmask irq */
writel_relaxed(0xffff, d->base + INT_TC1_MASK); writel_relaxed(0xffff, d->base + INT_TC1_MASK);
writel_relaxed(0xffff, d->base + INT_TC2_MASK);
writel_relaxed(0xffff, d->base + INT_ERR1_MASK); writel_relaxed(0xffff, d->base + INT_ERR1_MASK);
writel_relaxed(0xffff, d->base + INT_ERR2_MASK); writel_relaxed(0xffff, d->base + INT_ERR2_MASK);
} else { } else {
/* mask irq */ /* mask irq */
writel_relaxed(0x0, d->base + INT_TC1_MASK); writel_relaxed(0x0, d->base + INT_TC1_MASK);
writel_relaxed(0x0, d->base + INT_TC2_MASK);
writel_relaxed(0x0, d->base + INT_ERR1_MASK); writel_relaxed(0x0, d->base + INT_ERR1_MASK);
writel_relaxed(0x0, d->base + INT_ERR2_MASK); writel_relaxed(0x0, d->base + INT_ERR2_MASK);
} }
@ -193,22 +207,31 @@ static irqreturn_t k3_dma_int_handler(int irq, void *dev_id)
struct k3_dma_chan *c; struct k3_dma_chan *c;
u32 stat = readl_relaxed(d->base + INT_STAT); u32 stat = readl_relaxed(d->base + INT_STAT);
u32 tc1 = readl_relaxed(d->base + INT_TC1); u32 tc1 = readl_relaxed(d->base + INT_TC1);
u32 tc2 = readl_relaxed(d->base + INT_TC2);
u32 err1 = readl_relaxed(d->base + INT_ERR1); u32 err1 = readl_relaxed(d->base + INT_ERR1);
u32 err2 = readl_relaxed(d->base + INT_ERR2); u32 err2 = readl_relaxed(d->base + INT_ERR2);
u32 i, irq_chan = 0; u32 i, irq_chan = 0;
while (stat) { while (stat) {
i = __ffs(stat); i = __ffs(stat);
stat &= (stat - 1); stat &= ~BIT(i);
if (likely(tc1 & BIT(i))) { if (likely(tc1 & BIT(i)) || (tc2 & BIT(i))) {
unsigned long flags;
p = &d->phy[i]; p = &d->phy[i];
c = p->vchan; c = p->vchan;
if (c) { if (c && (tc1 & BIT(i))) {
unsigned long flags;
spin_lock_irqsave(&c->vc.lock, flags); spin_lock_irqsave(&c->vc.lock, flags);
vchan_cookie_complete(&p->ds_run->vd); vchan_cookie_complete(&p->ds_run->vd);
WARN_ON_ONCE(p->ds_done);
p->ds_done = p->ds_run; p->ds_done = p->ds_run;
p->ds_run = NULL;
spin_unlock_irqrestore(&c->vc.lock, flags);
}
if (c && (tc2 & BIT(i))) {
spin_lock_irqsave(&c->vc.lock, flags);
if (p->ds_run != NULL)
vchan_cyclic_callback(&p->ds_run->vd);
spin_unlock_irqrestore(&c->vc.lock, flags); spin_unlock_irqrestore(&c->vc.lock, flags);
} }
irq_chan |= BIT(i); irq_chan |= BIT(i);
@ -218,14 +241,17 @@ static irqreturn_t k3_dma_int_handler(int irq, void *dev_id)
} }
writel_relaxed(irq_chan, d->base + INT_TC1_RAW); writel_relaxed(irq_chan, d->base + INT_TC1_RAW);
writel_relaxed(irq_chan, d->base + INT_TC2_RAW);
writel_relaxed(err1, d->base + INT_ERR1_RAW); writel_relaxed(err1, d->base + INT_ERR1_RAW);
writel_relaxed(err2, d->base + INT_ERR2_RAW); writel_relaxed(err2, d->base + INT_ERR2_RAW);
if (irq_chan) { if (irq_chan)
tasklet_schedule(&d->task); tasklet_schedule(&d->task);
if (irq_chan || err1 || err2)
return IRQ_HANDLED; return IRQ_HANDLED;
} else
return IRQ_NONE; return IRQ_NONE;
} }
static int k3_dma_start_txd(struct k3_dma_chan *c) static int k3_dma_start_txd(struct k3_dma_chan *c)
@ -247,14 +273,14 @@ static int k3_dma_start_txd(struct k3_dma_chan *c)
* so vc->desc_issued only contains desc pending * so vc->desc_issued only contains desc pending
*/ */
list_del(&ds->vd.node); list_del(&ds->vd.node);
WARN_ON_ONCE(c->phy->ds_run);
WARN_ON_ONCE(c->phy->ds_done);
c->phy->ds_run = ds; c->phy->ds_run = ds;
c->phy->ds_done = NULL;
/* start dma */ /* start dma */
k3_dma_set_desc(c->phy, &ds->desc_hw[0]); k3_dma_set_desc(c->phy, &ds->desc_hw[0]);
return 0; return 0;
} }
c->phy->ds_done = NULL;
c->phy->ds_run = NULL;
return -EAGAIN; return -EAGAIN;
} }
@ -351,7 +377,7 @@ static enum dma_status k3_dma_tx_status(struct dma_chan *chan,
* its total size. * its total size.
*/ */
vd = vchan_find_desc(&c->vc, cookie); vd = vchan_find_desc(&c->vc, cookie);
if (vd) { if (vd && !c->cyclic) {
bytes = container_of(vd, struct k3_dma_desc_sw, vd)->size; bytes = container_of(vd, struct k3_dma_desc_sw, vd)->size;
} else if ((!p) || (!p->ds_run)) { } else if ((!p) || (!p->ds_run)) {
bytes = 0; bytes = 0;
@ -361,7 +387,8 @@ static enum dma_status k3_dma_tx_status(struct dma_chan *chan,
bytes = k3_dma_get_curr_cnt(d, p); bytes = k3_dma_get_curr_cnt(d, p);
clli = k3_dma_get_curr_lli(p); clli = k3_dma_get_curr_lli(p);
index = (clli - ds->desc_hw_lli) / sizeof(struct k3_desc_hw); index = ((clli - ds->desc_hw_lli) /
sizeof(struct k3_desc_hw)) + 1;
for (; index < ds->desc_num; index++) { for (; index < ds->desc_num; index++) {
bytes += ds->desc_hw[index].count; bytes += ds->desc_hw[index].count;
/* end of lli */ /* end of lli */
@ -402,9 +429,10 @@ static void k3_dma_issue_pending(struct dma_chan *chan)
static void k3_dma_fill_desc(struct k3_dma_desc_sw *ds, dma_addr_t dst, static void k3_dma_fill_desc(struct k3_dma_desc_sw *ds, dma_addr_t dst,
dma_addr_t src, size_t len, u32 num, u32 ccfg) dma_addr_t src, size_t len, u32 num, u32 ccfg)
{ {
if ((num + 1) < ds->desc_num) if (num != ds->desc_num - 1)
ds->desc_hw[num].lli = ds->desc_hw_lli + (num + 1) * ds->desc_hw[num].lli = ds->desc_hw_lli + (num + 1) *
sizeof(struct k3_desc_hw); sizeof(struct k3_desc_hw);
ds->desc_hw[num].lli |= CX_LLI_CHAIN_EN; ds->desc_hw[num].lli |= CX_LLI_CHAIN_EN;
ds->desc_hw[num].count = len; ds->desc_hw[num].count = len;
ds->desc_hw[num].saddr = src; ds->desc_hw[num].saddr = src;
@ -412,6 +440,35 @@ static void k3_dma_fill_desc(struct k3_dma_desc_sw *ds, dma_addr_t dst,
ds->desc_hw[num].config = ccfg; ds->desc_hw[num].config = ccfg;
} }
static struct k3_dma_desc_sw *k3_dma_alloc_desc_resource(int num,
struct dma_chan *chan)
{
struct k3_dma_chan *c = to_k3_chan(chan);
struct k3_dma_desc_sw *ds;
struct k3_dma_dev *d = to_k3_dma(chan->device);
int lli_limit = LLI_BLOCK_SIZE / sizeof(struct k3_desc_hw);
if (num > lli_limit) {
dev_dbg(chan->device->dev, "vch %p: sg num %d exceed max %d\n",
&c->vc, num, lli_limit);
return NULL;
}
ds = kzalloc(sizeof(*ds), GFP_NOWAIT);
if (!ds)
return NULL;
ds->desc_hw = dma_pool_alloc(d->pool, GFP_NOWAIT, &ds->desc_hw_lli);
if (!ds->desc_hw) {
dev_dbg(chan->device->dev, "vch %p: dma alloc fail\n", &c->vc);
kfree(ds);
return NULL;
}
memset(ds->desc_hw, 0, sizeof(struct k3_desc_hw) * num);
ds->desc_num = num;
return ds;
}
static struct dma_async_tx_descriptor *k3_dma_prep_memcpy( static struct dma_async_tx_descriptor *k3_dma_prep_memcpy(
struct dma_chan *chan, dma_addr_t dst, dma_addr_t src, struct dma_chan *chan, dma_addr_t dst, dma_addr_t src,
size_t len, unsigned long flags) size_t len, unsigned long flags)
@ -425,13 +482,13 @@ static struct dma_async_tx_descriptor *k3_dma_prep_memcpy(
return NULL; return NULL;
num = DIV_ROUND_UP(len, DMA_MAX_SIZE); num = DIV_ROUND_UP(len, DMA_MAX_SIZE);
ds = kzalloc(sizeof(*ds) + num * sizeof(ds->desc_hw[0]), GFP_ATOMIC);
ds = k3_dma_alloc_desc_resource(num, chan);
if (!ds) if (!ds)
return NULL; return NULL;
ds->desc_hw_lli = __virt_to_phys((unsigned long)&ds->desc_hw[0]); c->cyclic = 0;
ds->size = len; ds->size = len;
ds->desc_num = num;
num = 0; num = 0;
if (!c->ccfg) { if (!c->ccfg) {
@ -474,18 +531,17 @@ static struct dma_async_tx_descriptor *k3_dma_prep_slave_sg(
if (sgl == NULL) if (sgl == NULL)
return NULL; return NULL;
c->cyclic = 0;
for_each_sg(sgl, sg, sglen, i) { for_each_sg(sgl, sg, sglen, i) {
avail = sg_dma_len(sg); avail = sg_dma_len(sg);
if (avail > DMA_MAX_SIZE) if (avail > DMA_MAX_SIZE)
num += DIV_ROUND_UP(avail, DMA_MAX_SIZE) - 1; num += DIV_ROUND_UP(avail, DMA_MAX_SIZE) - 1;
} }
ds = kzalloc(sizeof(*ds) + num * sizeof(ds->desc_hw[0]), GFP_ATOMIC); ds = k3_dma_alloc_desc_resource(num, chan);
if (!ds) if (!ds)
return NULL; return NULL;
ds->desc_hw_lli = __virt_to_phys((unsigned long)&ds->desc_hw[0]);
ds->desc_num = num;
num = 0; num = 0;
for_each_sg(sgl, sg, sglen, i) { for_each_sg(sgl, sg, sglen, i) {
@ -516,6 +572,73 @@ static struct dma_async_tx_descriptor *k3_dma_prep_slave_sg(
return vchan_tx_prep(&c->vc, &ds->vd, flags); return vchan_tx_prep(&c->vc, &ds->vd, flags);
} }
static struct dma_async_tx_descriptor *
k3_dma_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
size_t buf_len, size_t period_len,
enum dma_transfer_direction dir,
unsigned long flags)
{
struct k3_dma_chan *c = to_k3_chan(chan);
struct k3_dma_desc_sw *ds;
size_t len, avail, total = 0;
dma_addr_t addr, src = 0, dst = 0;
int num = 1, since = 0;
size_t modulo = DMA_CYCLIC_MAX_PERIOD;
u32 en_tc2 = 0;
dev_dbg(chan->device->dev, "%s: buf %pad, dst %pad, buf len %zu, period_len = %zu, dir %d\n",
__func__, &buf_addr, &to_k3_chan(chan)->dev_addr,
buf_len, period_len, (int)dir);
avail = buf_len;
if (avail > modulo)
num += DIV_ROUND_UP(avail, modulo) - 1;
ds = k3_dma_alloc_desc_resource(num, chan);
if (!ds)
return NULL;
c->cyclic = 1;
addr = buf_addr;
avail = buf_len;
total = avail;
num = 0;
if (period_len < modulo)
modulo = period_len;
do {
len = min_t(size_t, avail, modulo);
if (dir == DMA_MEM_TO_DEV) {
src = addr;
dst = c->dev_addr;
} else if (dir == DMA_DEV_TO_MEM) {
src = c->dev_addr;
dst = addr;
}
since += len;
if (since >= period_len) {
/* descriptor asks for TC2 interrupt on completion */
en_tc2 = CX_CFG_NODEIRQ;
since -= period_len;
} else
en_tc2 = 0;
k3_dma_fill_desc(ds, dst, src, len, num++, c->ccfg | en_tc2);
addr += len;
avail -= len;
} while (avail);
/* "Cyclic" == end of link points back to start of link */
ds->desc_hw[num - 1].lli |= ds->desc_hw_lli;
ds->size = total;
return vchan_tx_prep(&c->vc, &ds->vd, flags);
}
static int k3_dma_config(struct dma_chan *chan, static int k3_dma_config(struct dma_chan *chan,
struct dma_slave_config *cfg) struct dma_slave_config *cfg)
{ {
@ -551,7 +674,7 @@ static int k3_dma_config(struct dma_chan *chan,
c->ccfg |= (val << 12) | (val << 16); c->ccfg |= (val << 12) | (val << 16);
if ((maxburst == 0) || (maxburst > 16)) if ((maxburst == 0) || (maxburst > 16))
val = 16; val = 15;
else else
val = maxburst - 1; val = maxburst - 1;
c->ccfg |= (val << 20) | (val << 24); c->ccfg |= (val << 20) | (val << 24);
@ -563,6 +686,16 @@ static int k3_dma_config(struct dma_chan *chan,
return 0; return 0;
} }
static void k3_dma_free_desc(struct virt_dma_desc *vd)
{
struct k3_dma_desc_sw *ds =
container_of(vd, struct k3_dma_desc_sw, vd);
struct k3_dma_dev *d = to_k3_dma(vd->tx.chan->device);
dma_pool_free(d->pool, ds->desc_hw, ds->desc_hw_lli);
kfree(ds);
}
static int k3_dma_terminate_all(struct dma_chan *chan) static int k3_dma_terminate_all(struct dma_chan *chan)
{ {
struct k3_dma_chan *c = to_k3_chan(chan); struct k3_dma_chan *c = to_k3_chan(chan);
@ -586,7 +719,15 @@ static int k3_dma_terminate_all(struct dma_chan *chan)
k3_dma_terminate_chan(p, d); k3_dma_terminate_chan(p, d);
c->phy = NULL; c->phy = NULL;
p->vchan = NULL; p->vchan = NULL;
p->ds_run = p->ds_done = NULL; if (p->ds_run) {
k3_dma_free_desc(&p->ds_run->vd);
p->ds_run = NULL;
}
if (p->ds_done) {
k3_dma_free_desc(&p->ds_done->vd);
p->ds_done = NULL;
}
} }
spin_unlock_irqrestore(&c->vc.lock, flags); spin_unlock_irqrestore(&c->vc.lock, flags);
vchan_dma_desc_free_list(&c->vc, &head); vchan_dma_desc_free_list(&c->vc, &head);
@ -639,14 +780,6 @@ static int k3_dma_transfer_resume(struct dma_chan *chan)
return 0; return 0;
} }
static void k3_dma_free_desc(struct virt_dma_desc *vd)
{
struct k3_dma_desc_sw *ds =
container_of(vd, struct k3_dma_desc_sw, vd);
kfree(ds);
}
static const struct of_device_id k3_pdma_dt_ids[] = { static const struct of_device_id k3_pdma_dt_ids[] = {
{ .compatible = "hisilicon,k3-dma-1.0", }, { .compatible = "hisilicon,k3-dma-1.0", },
{} {}
@ -706,6 +839,12 @@ static int k3_dma_probe(struct platform_device *op)
d->irq = irq; d->irq = irq;
/* A DMA memory pool for LLIs, align on 32-byte boundary */
d->pool = dmam_pool_create(DRIVER_NAME, &op->dev,
LLI_BLOCK_SIZE, 32, 0);
if (!d->pool)
return -ENOMEM;
/* init phy channel */ /* init phy channel */
d->phy = devm_kzalloc(&op->dev, d->phy = devm_kzalloc(&op->dev,
d->dma_channels * sizeof(struct k3_dma_phy), GFP_KERNEL); d->dma_channels * sizeof(struct k3_dma_phy), GFP_KERNEL);
@ -722,11 +861,13 @@ static int k3_dma_probe(struct platform_device *op)
INIT_LIST_HEAD(&d->slave.channels); INIT_LIST_HEAD(&d->slave.channels);
dma_cap_set(DMA_SLAVE, d->slave.cap_mask); dma_cap_set(DMA_SLAVE, d->slave.cap_mask);
dma_cap_set(DMA_MEMCPY, d->slave.cap_mask); dma_cap_set(DMA_MEMCPY, d->slave.cap_mask);
dma_cap_set(DMA_CYCLIC, d->slave.cap_mask);
d->slave.dev = &op->dev; d->slave.dev = &op->dev;
d->slave.device_free_chan_resources = k3_dma_free_chan_resources; d->slave.device_free_chan_resources = k3_dma_free_chan_resources;
d->slave.device_tx_status = k3_dma_tx_status; d->slave.device_tx_status = k3_dma_tx_status;
d->slave.device_prep_dma_memcpy = k3_dma_prep_memcpy; d->slave.device_prep_dma_memcpy = k3_dma_prep_memcpy;
d->slave.device_prep_slave_sg = k3_dma_prep_slave_sg; d->slave.device_prep_slave_sg = k3_dma_prep_slave_sg;
d->slave.device_prep_dma_cyclic = k3_dma_prep_dma_cyclic;
d->slave.device_issue_pending = k3_dma_issue_pending; d->slave.device_issue_pending = k3_dma_issue_pending;
d->slave.device_config = k3_dma_config; d->slave.device_config = k3_dma_config;
d->slave.device_pause = k3_dma_transfer_pause; d->slave.device_pause = k3_dma_transfer_pause;


@@ -104,10 +104,8 @@ static void mic_dma_cleanup(struct mic_dma_chan *ch)
         tx = &ch->tx_array[last_tail];
         if (tx->cookie) {
             dma_cookie_complete(tx);
-            if (tx->callback) {
-                tx->callback(tx->callback_param);
-                tx->callback = NULL;
-            }
+            dmaengine_desc_get_callback_invoke(tx, NULL);
+            tx->callback = NULL;
         }
         last_tail = mic_dma_hw_ring_inc(last_tail);
     }


@ -864,19 +864,15 @@ static void dma_do_tasklet(unsigned long data)
struct mmp_pdma_desc_sw *desc, *_desc; struct mmp_pdma_desc_sw *desc, *_desc;
LIST_HEAD(chain_cleanup); LIST_HEAD(chain_cleanup);
unsigned long flags; unsigned long flags;
struct dmaengine_desc_callback cb;
if (chan->cyclic_first) { if (chan->cyclic_first) {
dma_async_tx_callback cb = NULL;
void *cb_data = NULL;
spin_lock_irqsave(&chan->desc_lock, flags); spin_lock_irqsave(&chan->desc_lock, flags);
desc = chan->cyclic_first; desc = chan->cyclic_first;
cb = desc->async_tx.callback; dmaengine_desc_get_callback(&desc->async_tx, &cb);
cb_data = desc->async_tx.callback_param;
spin_unlock_irqrestore(&chan->desc_lock, flags); spin_unlock_irqrestore(&chan->desc_lock, flags);
if (cb) dmaengine_desc_callback_invoke(&cb, NULL);
cb(cb_data);
return; return;
} }
@ -921,8 +917,8 @@ static void dma_do_tasklet(unsigned long data)
/* Remove from the list of transactions */ /* Remove from the list of transactions */
list_del(&desc->node); list_del(&desc->node);
/* Run the link descriptor callback function */ /* Run the link descriptor callback function */
if (txd->callback) dmaengine_desc_get_callback(txd, &cb);
txd->callback(txd->callback_param); dmaengine_desc_callback_invoke(&cb, NULL);
dma_pool_free(chan->desc_pool, desc, txd->phys); dma_pool_free(chan->desc_pool, desc, txd->phys);
} }


@@ -349,9 +349,7 @@ static void dma_do_tasklet(unsigned long data)
 {
     struct mmp_tdma_chan *tdmac = (struct mmp_tdma_chan *)data;
 
-    if (tdmac->desc.callback)
-        tdmac->desc.callback(tdmac->desc.callback_param);
-
+    dmaengine_desc_get_callback_invoke(&tdmac->desc, NULL);
 }
 
 static void mmp_tdma_free_descriptor(struct mmp_tdma_chan *tdmac)
@@ -433,7 +431,7 @@ static struct dma_async_tx_descriptor *mmp_tdma_prep_dma_cyclic(
 
     if (period_len > TDMA_MAX_XFER_BYTES) {
         dev_err(tdmac->dev,
-            "maximum period size exceeded: %d > %d\n",
+            "maximum period size exceeded: %zu > %d\n",
             period_len, TDMA_MAX_XFER_BYTES);
         goto err_out;
     }


@@ -579,7 +579,7 @@ static int moxart_probe(struct platform_device *pdev)
         return -ENOMEM;
 
     irq = irq_of_parse_and_map(node, 0);
-    if (irq == NO_IRQ) {
+    if (!irq) {
         dev_err(dev, "no IRQ resource\n");
         return -EINVAL;
     }


@@ -411,8 +411,7 @@ static void mpc_dma_process_completed(struct mpc_dma *mdma)
     list_for_each_entry(mdesc, &list, node) {
         desc = &mdesc->desc;
 
-        if (desc->callback)
-            desc->callback(desc->callback_param);
+        dmaengine_desc_get_callback_invoke(desc, NULL);
 
         last_cookie = desc->cookie;
         dma_run_dependencies(desc);
@@ -926,7 +925,7 @@ static int mpc_dma_probe(struct platform_device *op)
     }
 
     mdma->irq = irq_of_parse_and_map(dn, 0);
-    if (mdma->irq == NO_IRQ) {
+    if (!mdma->irq) {
         dev_err(dev, "Error mapping IRQ!\n");
         retval = -EINVAL;
         goto err;
@@ -935,7 +934,7 @@ static int mpc_dma_probe(struct platform_device *op)
     if (of_device_is_compatible(dn, "fsl,mpc8308-dma")) {
         mdma->is_mpc8308 = 1;
         mdma->irq2 = irq_of_parse_and_map(dn, 1);
-        if (mdma->irq2 == NO_IRQ) {
+        if (!mdma->irq2) {
             dev_err(dev, "Error mapping IRQ!\n");
             retval = -EINVAL;
             goto err_dispose1;
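
The NO_IRQ removals in this pile (fsl_re, fsldma, moxart, mpc512x, mxs-dma, ipu) all reduce to the same idiom: irq_of_parse_and_map() returns 0 when there is no valid mapping, so the value is tested directly instead of being compared against the NO_IRQ constant. A small sketch of the resulting probe-time pattern (the my_* names are illustrative, not from any of these drivers):

    static int my_chan_request_irq(struct device *dev, struct device_node *np,
                       irq_handler_t handler, void *data)
    {
        unsigned int irq = irq_of_parse_and_map(np, 0);

        if (!irq) {     /* 0 means no/invalid mapping */
            dev_err(dev, "no IRQ resource\n");
            return -ENODEV;
        }

        return devm_request_irq(dev, irq, handler, 0, dev_name(dev), data);
    }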


@ -206,14 +206,11 @@ mv_desc_run_tx_complete_actions(struct mv_xor_desc_slot *desc,
if (desc->async_tx.cookie > 0) { if (desc->async_tx.cookie > 0) {
cookie = desc->async_tx.cookie; cookie = desc->async_tx.cookie;
dma_descriptor_unmap(&desc->async_tx);
/* call the callback (must not sleep or submit new /* call the callback (must not sleep or submit new
* operations to this channel) * operations to this channel)
*/ */
if (desc->async_tx.callback) dmaengine_desc_get_callback_invoke(&desc->async_tx, NULL);
desc->async_tx.callback(
desc->async_tx.callback_param);
dma_descriptor_unmap(&desc->async_tx);
} }
/* run dependent operations */ /* run dependent operations */
@ -470,12 +467,90 @@ static int mv_xor_alloc_chan_resources(struct dma_chan *chan)
return mv_chan->slots_allocated ? : -ENOMEM; return mv_chan->slots_allocated ? : -ENOMEM;
} }
/*
* Check if source or destination is an PCIe/IO address (non-SDRAM) and add
* a new MBus window if necessary. Use a cache for these check so that
* the MMIO mapped registers don't have to be accessed for this check
* to speed up this process.
*/
static int mv_xor_add_io_win(struct mv_xor_chan *mv_chan, u32 addr)
{
struct mv_xor_device *xordev = mv_chan->xordev;
void __iomem *base = mv_chan->mmr_high_base;
u32 win_enable;
u32 size;
u8 target, attr;
int ret;
int i;
/* Nothing needs to get done for the Armada 3700 */
if (xordev->xor_type == XOR_ARMADA_37XX)
return 0;
/*
* Loop over the cached windows to check, if the requested area
* is already mapped. If this the case, nothing needs to be done
* and we can return.
*/
for (i = 0; i < WINDOW_COUNT; i++) {
if (addr >= xordev->win_start[i] &&
addr <= xordev->win_end[i]) {
/* Window is already mapped */
return 0;
}
}
/*
* The window is not mapped, so we need to create the new mapping
*/
/* If no IO window is found that addr has to be located in SDRAM */
ret = mvebu_mbus_get_io_win_info(addr, &size, &target, &attr);
if (ret < 0)
return 0;
/*
* Mask the base addr 'addr' according to 'size' read back from the
* MBus window. Otherwise we might end up with an address located
* somewhere in the middle of this area here.
*/
size -= 1;
addr &= ~size;
/*
* Reading one of both enabled register is enough, as they are always
* programmed to the identical values
*/
win_enable = readl(base + WINDOW_BAR_ENABLE(0));
/* Set 'i' to the first free window to write the new values to */
i = ffs(~win_enable) - 1;
if (i >= WINDOW_COUNT)
return -ENOMEM;
writel((addr & 0xffff0000) | (attr << 8) | target,
base + WINDOW_BASE(i));
writel(size & 0xffff0000, base + WINDOW_SIZE(i));
/* Fill the caching variables for later use */
xordev->win_start[i] = addr;
xordev->win_end[i] = addr + size;
win_enable |= (1 << i);
win_enable |= 3 << (16 + (2 * i));
writel(win_enable, base + WINDOW_BAR_ENABLE(0));
writel(win_enable, base + WINDOW_BAR_ENABLE(1));
return 0;
}
static struct dma_async_tx_descriptor * static struct dma_async_tx_descriptor *
mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src, mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
unsigned int src_cnt, size_t len, unsigned long flags) unsigned int src_cnt, size_t len, unsigned long flags)
{ {
struct mv_xor_chan *mv_chan = to_mv_xor_chan(chan); struct mv_xor_chan *mv_chan = to_mv_xor_chan(chan);
struct mv_xor_desc_slot *sw_desc; struct mv_xor_desc_slot *sw_desc;
int ret;
if (unlikely(len < MV_XOR_MIN_BYTE_COUNT)) if (unlikely(len < MV_XOR_MIN_BYTE_COUNT))
return NULL; return NULL;
@ -486,6 +561,11 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
"%s src_cnt: %d len: %zu dest %pad flags: %ld\n", "%s src_cnt: %d len: %zu dest %pad flags: %ld\n",
__func__, src_cnt, len, &dest, flags); __func__, src_cnt, len, &dest, flags);
/* Check if a new window needs to get added for 'dest' */
ret = mv_xor_add_io_win(mv_chan, dest);
if (ret)
return NULL;
sw_desc = mv_chan_alloc_slot(mv_chan); sw_desc = mv_chan_alloc_slot(mv_chan);
if (sw_desc) { if (sw_desc) {
sw_desc->type = DMA_XOR; sw_desc->type = DMA_XOR;
@ -493,8 +573,13 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
mv_desc_init(sw_desc, dest, len, flags); mv_desc_init(sw_desc, dest, len, flags);
if (mv_chan->op_in_desc == XOR_MODE_IN_DESC) if (mv_chan->op_in_desc == XOR_MODE_IN_DESC)
mv_desc_set_mode(sw_desc); mv_desc_set_mode(sw_desc);
while (src_cnt--) while (src_cnt--) {
/* Check if a new window needs to get added for 'src' */
ret = mv_xor_add_io_win(mv_chan, src[src_cnt]);
if (ret)
return NULL;
mv_desc_set_src_addr(sw_desc, src_cnt, src[src_cnt]); mv_desc_set_src_addr(sw_desc, src_cnt, src[src_cnt]);
}
} }
dev_dbg(mv_chan_to_devp(mv_chan), dev_dbg(mv_chan_to_devp(mv_chan),
@ -959,6 +1044,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
mv_chan->op_in_desc = XOR_MODE_IN_DESC; mv_chan->op_in_desc = XOR_MODE_IN_DESC;
dma_dev = &mv_chan->dmadev; dma_dev = &mv_chan->dmadev;
mv_chan->xordev = xordev;
/* /*
* These source and destination dummy buffers are used to implement * These source and destination dummy buffers are used to implement
@ -1086,6 +1172,10 @@ mv_xor_conf_mbus_windows(struct mv_xor_device *xordev,
dram->mbus_dram_target_id, base + WINDOW_BASE(i)); dram->mbus_dram_target_id, base + WINDOW_BASE(i));
writel((cs->size - 1) & 0xffff0000, base + WINDOW_SIZE(i)); writel((cs->size - 1) & 0xffff0000, base + WINDOW_SIZE(i));
/* Fill the caching variables for later use */
xordev->win_start[i] = cs->base;
xordev->win_end[i] = cs->base + cs->size - 1;
win_enable |= (1 << i); win_enable |= (1 << i);
win_enable |= 3 << (16 + (2 * i)); win_enable |= 3 << (16 + (2 * i));
} }


@@ -80,12 +80,17 @@
 #define WINDOW_BAR_ENABLE(chan)	(0x40 + ((chan) << 2))
 #define WINDOW_OVERRIDE_CTRL(chan)	(0xA0 + ((chan) << 2))
 
+#define WINDOW_COUNT	8
+
 struct mv_xor_device {
     void __iomem	*xor_base;
     void __iomem	*xor_high_base;
     struct clk	*clk;
     struct mv_xor_chan	*channels[MV_XOR_MAX_CHANNELS];
     int	xor_type;
+
+    u32	win_start[WINDOW_COUNT];
+    u32	win_end[WINDOW_COUNT];
 };
 
 /**
@@ -127,6 +132,8 @@ struct mv_xor_chan {
     char	dummy_dst[MV_XOR_MIN_BYTE_COUNT];
     dma_addr_t	dummy_src_addr, dummy_dst_addr;
     u32	saved_config_reg, saved_int_mask_reg;
+
+    struct mv_xor_device	*xordev;
 };
 
 /**


@@ -326,8 +326,7 @@ static void mxs_dma_tasklet(unsigned long data)
 {
     struct mxs_dma_chan *mxs_chan = (struct mxs_dma_chan *) data;
 
-    if (mxs_chan->desc.callback)
-        mxs_chan->desc.callback(mxs_chan->desc.callback_param);
+    dmaengine_desc_get_callback_invoke(&mxs_chan->desc, NULL);
 }
 
 static int mxs_dma_irq_to_chan(struct mxs_dma_engine *mxs_dma, int irq)
@@ -429,12 +428,10 @@ static int mxs_dma_alloc_chan_resources(struct dma_chan *chan)
         goto err_alloc;
     }
 
-    if (mxs_chan->chan_irq != NO_IRQ) {
-        ret = request_irq(mxs_chan->chan_irq, mxs_dma_int_handler,
-                  0, "mxs-dma", mxs_dma);
-        if (ret)
-            goto err_irq;
-    }
+    ret = request_irq(mxs_chan->chan_irq, mxs_dma_int_handler,
+              0, "mxs-dma", mxs_dma);
+    if (ret)
+        goto err_irq;
 
     ret = clk_prepare_enable(mxs_dma->clk);
     if (ret)

@@ -1102,8 +1102,7 @@ static void nbpf_chan_tasklet(unsigned long data)
 {
     struct nbpf_channel *chan = (struct nbpf_channel *)data;
     struct nbpf_desc *desc, *tmp;
-    dma_async_tx_callback callback;
-    void *param;
+    struct dmaengine_desc_callback cb;
 
     while (!list_empty(&chan->done)) {
         bool found = false, must_put, recycling = false;
@@ -1151,14 +1150,12 @@ static void nbpf_chan_tasklet(unsigned long data)
             must_put = false;
         }
 
-        callback = desc->async_tx.callback;
-        param = desc->async_tx.callback_param;
+        dmaengine_desc_get_callback(&desc->async_tx, &cb);
 
         /* ack and callback completed descriptor */
         spin_unlock_irq(&chan->lock);
 
-        if (callback)
-            callback(param);
+        dmaengine_desc_callback_invoke(&cb, NULL);
 
         if (must_put)
             nbpf_desc_put(desc);


@ -8,6 +8,7 @@
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/dmaengine.h> #include <linux/dmaengine.h>
#include <linux/dma-mapping.h> #include <linux/dma-mapping.h>
#include <linux/dmapool.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
@ -32,10 +33,12 @@ struct omap_dmadev {
const struct omap_dma_reg *reg_map; const struct omap_dma_reg *reg_map;
struct omap_system_dma_plat_info *plat; struct omap_system_dma_plat_info *plat;
bool legacy; bool legacy;
bool ll123_supported;
+struct dma_pool *desc_pool;
 unsigned dma_requests;
 spinlock_t irq_lock;
 uint32_t irq_enable_mask;
-struct omap_chan *lch_map[OMAP_SDMA_CHANNELS];
+struct omap_chan **lch_map;
 };
 struct omap_chan {
@@ -55,16 +58,40 @@ struct omap_chan {
 unsigned sgidx;
 };
+#define DESC_NXT_SV_REFRESH (0x1 << 24)
+#define DESC_NXT_SV_REUSE (0x2 << 24)
+#define DESC_NXT_DV_REFRESH (0x1 << 26)
+#define DESC_NXT_DV_REUSE (0x2 << 26)
+#define DESC_NTYPE_TYPE2 (0x2 << 29)
+/* Type 2 descriptor with Source or Destination address update */
+struct omap_type2_desc {
+uint32_t next_desc;
+uint32_t en;
+uint32_t addr; /* src or dst */
+uint16_t fn;
+uint16_t cicr;
+int16_t cdei;
+int16_t csei;
+int32_t cdfi;
+int32_t csfi;
+} __packed;
 struct omap_sg {
 dma_addr_t addr;
 uint32_t en; /* number of elements (24-bit) */
 uint32_t fn; /* number of frames (16-bit) */
 int32_t fi; /* for double indexing */
 int16_t ei; /* for double indexing */
+/* Linked list */
+struct omap_type2_desc *t2_desc;
+dma_addr_t t2_desc_paddr;
 };
 struct omap_desc {
 struct virt_dma_desc vd;
+bool using_ll;
 enum dma_transfer_direction dir;
 dma_addr_t dev_addr;
@@ -81,6 +108,9 @@ struct omap_desc {
 };
 enum {
+CAPS_0_SUPPORT_LL123 = BIT(20), /* Linked List type1/2/3 */
+CAPS_0_SUPPORT_LL4 = BIT(21), /* Linked List type4 */
 CCR_FS = BIT(5),
 CCR_READ_PRIORITY = BIT(6),
 CCR_ENABLE = BIT(7),
@@ -151,6 +181,19 @@ enum {
 CICR_SUPER_BLOCK_IE = BIT(14), /* OMAP2+ only */
 CLNK_CTRL_ENABLE_LNK = BIT(15),
+CDP_DST_VALID_INC = 0 << 0,
+CDP_DST_VALID_RELOAD = 1 << 0,
+CDP_DST_VALID_REUSE = 2 << 0,
+CDP_SRC_VALID_INC = 0 << 2,
+CDP_SRC_VALID_RELOAD = 1 << 2,
+CDP_SRC_VALID_REUSE = 2 << 2,
+CDP_NTYPE_TYPE1 = 1 << 4,
+CDP_NTYPE_TYPE2 = 2 << 4,
+CDP_NTYPE_TYPE3 = 3 << 4,
+CDP_TMODE_NORMAL = 0 << 8,
+CDP_TMODE_LLIST = 1 << 8,
+CDP_FAST = BIT(10),
 };
 static const unsigned es_bytes[] = {
@@ -180,7 +223,64 @@ static inline struct omap_desc *to_omap_dma_desc(struct dma_async_tx_descriptor
 static void omap_dma_desc_free(struct virt_dma_desc *vd)
 {
-kfree(container_of(vd, struct omap_desc, vd));
+struct omap_desc *d = to_omap_dma_desc(&vd->tx);
+if (d->using_ll) {
+struct omap_dmadev *od = to_omap_dma_dev(vd->tx.chan->device);
+int i;
+for (i = 0; i < d->sglen; i++) {
+if (d->sg[i].t2_desc)
+dma_pool_free(od->desc_pool, d->sg[i].t2_desc,
+d->sg[i].t2_desc_paddr);
+}
+}
+kfree(d);
+}
+static void omap_dma_fill_type2_desc(struct omap_desc *d, int idx,
+enum dma_transfer_direction dir, bool last)
+{
+struct omap_sg *sg = &d->sg[idx];
+struct omap_type2_desc *t2_desc = sg->t2_desc;
+if (idx)
+d->sg[idx - 1].t2_desc->next_desc = sg->t2_desc_paddr;
+if (last)
+t2_desc->next_desc = 0xfffffffc;
+t2_desc->en = sg->en;
+t2_desc->addr = sg->addr;
+t2_desc->fn = sg->fn & 0xffff;
+t2_desc->cicr = d->cicr;
+if (!last)
+t2_desc->cicr &= ~CICR_BLOCK_IE;
+switch (dir) {
+case DMA_DEV_TO_MEM:
+t2_desc->cdei = sg->ei;
+t2_desc->csei = d->ei;
+t2_desc->cdfi = sg->fi;
+t2_desc->csfi = d->fi;
+t2_desc->en |= DESC_NXT_DV_REFRESH;
+t2_desc->en |= DESC_NXT_SV_REUSE;
+break;
+case DMA_MEM_TO_DEV:
+t2_desc->cdei = d->ei;
+t2_desc->csei = sg->ei;
+t2_desc->cdfi = d->fi;
+t2_desc->csfi = sg->fi;
+t2_desc->en |= DESC_NXT_SV_REFRESH;
+t2_desc->en |= DESC_NXT_DV_REUSE;
+break;
+default:
+return;
+}
+t2_desc->en |= DESC_NTYPE_TYPE2;
 }
 static void omap_dma_write(uint32_t val, unsigned type, void __iomem *addr)
@@ -285,6 +385,7 @@ static void omap_dma_assign(struct omap_dmadev *od, struct omap_chan *c,
 static void omap_dma_start(struct omap_chan *c, struct omap_desc *d)
 {
 struct omap_dmadev *od = to_omap_dma_dev(c->vc.chan.device);
+uint16_t cicr = d->cicr;
 if (__dma_omap15xx(od->plat->dma_attr))
 omap_dma_chan_write(c, CPC, 0);
@@ -293,8 +394,27 @@ static void omap_dma_start(struct omap_chan *c, struct omap_desc *d)
 omap_dma_clear_csr(c);
+if (d->using_ll) {
+uint32_t cdp = CDP_TMODE_LLIST | CDP_NTYPE_TYPE2 | CDP_FAST;
+if (d->dir == DMA_DEV_TO_MEM)
+cdp |= (CDP_DST_VALID_RELOAD | CDP_SRC_VALID_REUSE);
+else
+cdp |= (CDP_DST_VALID_REUSE | CDP_SRC_VALID_RELOAD);
+omap_dma_chan_write(c, CDP, cdp);
+omap_dma_chan_write(c, CNDP, d->sg[0].t2_desc_paddr);
+omap_dma_chan_write(c, CCDN, 0);
+omap_dma_chan_write(c, CCFN, 0xffff);
+omap_dma_chan_write(c, CCEN, 0xffffff);
+cicr &= ~CICR_BLOCK_IE;
+} else if (od->ll123_supported) {
+omap_dma_chan_write(c, CDP, 0);
+}
 /* Enable interrupts */
-omap_dma_chan_write(c, CICR, d->cicr);
+omap_dma_chan_write(c, CICR, cicr);
 /* Enable channel */
 omap_dma_chan_write(c, CCR, d->ccr | CCR_ENABLE);
@@ -365,10 +485,9 @@ static void omap_dma_stop(struct omap_chan *c)
 c->running = false;
 }
-static void omap_dma_start_sg(struct omap_chan *c, struct omap_desc *d,
-unsigned idx)
+static void omap_dma_start_sg(struct omap_chan *c, struct omap_desc *d)
 {
-struct omap_sg *sg = d->sg + idx;
+struct omap_sg *sg = d->sg + c->sgidx;
 unsigned cxsa, cxei, cxfi;
 if (d->dir == DMA_DEV_TO_MEM || d->dir == DMA_MEM_TO_MEM) {
@@ -388,6 +507,7 @@ static void omap_dma_start_sg(struct omap_chan *c, struct omap_desc *d,
 omap_dma_chan_write(c, CFN, sg->fn);
 omap_dma_start(c, d);
+c->sgidx++;
 }
 static void omap_dma_start_desc(struct omap_chan *c)
@@ -433,7 +553,7 @@ static void omap_dma_start_desc(struct omap_chan *c)
 omap_dma_chan_write(c, CSDP, d->csdp);
 omap_dma_chan_write(c, CLNK_CTRL, d->clnk_ctrl);
-omap_dma_start_sg(c, d, 0);
+omap_dma_start_sg(c, d);
 }
 static void omap_dma_callback(int ch, u16 status, void *data)
@@ -445,15 +565,13 @@ static void omap_dma_callback(int ch, u16 status, void *data)
 spin_lock_irqsave(&c->vc.lock, flags);
 d = c->desc;
 if (d) {
-if (!c->cyclic) {
-if (++c->sgidx < d->sglen) {
-omap_dma_start_sg(c, d, c->sgidx);
-} else {
-omap_dma_start_desc(c);
-vchan_cookie_complete(&d->vd);
-}
-} else {
+if (c->cyclic) {
 vchan_cyclic_callback(&d->vd);
+} else if (d->using_ll || c->sgidx == d->sglen) {
+omap_dma_start_desc(c);
+vchan_cookie_complete(&d->vd);
+} else {
+omap_dma_start_sg(c, d);
 }
 }
 spin_unlock_irqrestore(&c->vc.lock, flags);
@@ -503,6 +621,7 @@ static int omap_dma_alloc_chan_resources(struct dma_chan *chan)
 {
 struct omap_dmadev *od = to_omap_dma_dev(chan->device);
 struct omap_chan *c = to_omap_dma_chan(chan);
+struct device *dev = od->ddev.dev;
 int ret;
 if (od->legacy) {
@@ -513,8 +632,7 @@ static int omap_dma_alloc_chan_resources(struct dma_chan *chan)
 &c->dma_ch);
 }
-dev_dbg(od->ddev.dev, "allocating channel %u for %u\n",
-c->dma_ch, c->dma_sig);
+dev_dbg(dev, "allocating channel %u for %u\n", c->dma_ch, c->dma_sig);
 if (ret >= 0) {
 omap_dma_assign(od, c, c->dma_ch);
@@ -570,7 +688,8 @@ static void omap_dma_free_chan_resources(struct dma_chan *chan)
 vchan_free_chan_resources(&c->vc);
 omap_free_dma(c->dma_ch);
-dev_dbg(od->ddev.dev, "freeing channel for %u\n", c->dma_sig);
+dev_dbg(od->ddev.dev, "freeing channel %u used for %u\n", c->dma_ch,
+c->dma_sig);
 c->dma_sig = 0;
 }
@@ -744,6 +863,7 @@ static struct dma_async_tx_descriptor *omap_dma_prep_slave_sg(
 struct omap_desc *d;
 dma_addr_t dev_addr;
 unsigned i, es, en, frame_bytes;
+bool ll_failed = false;
 u32 burst;
 if (dir == DMA_DEV_TO_MEM) {
@@ -784,13 +904,16 @@ static struct dma_async_tx_descriptor *omap_dma_prep_slave_sg(
 d->es = es;
 d->ccr = c->ccr | CCR_SYNC_FRAME;
-if (dir == DMA_DEV_TO_MEM)
+if (dir == DMA_DEV_TO_MEM) {
 d->ccr |= CCR_DST_AMODE_POSTINC | CCR_SRC_AMODE_CONSTANT;
-else
+d->csdp = CSDP_DST_BURST_64 | CSDP_DST_PACKED;
+} else {
 d->ccr |= CCR_DST_AMODE_CONSTANT | CCR_SRC_AMODE_POSTINC;
+d->csdp = CSDP_SRC_BURST_64 | CSDP_SRC_PACKED;
+}
 d->cicr = CICR_DROP_IE | CICR_BLOCK_IE;
-d->csdp = es;
+d->csdp |= es;
 if (dma_omap1()) {
 d->cicr |= CICR_TOUT_IE;
@@ -819,14 +942,47 @@ static struct dma_async_tx_descriptor *omap_dma_prep_slave_sg(
 */
 en = burst;
 frame_bytes = es_bytes[es] * en;
+if (sglen >= 2)
+d->using_ll = od->ll123_supported;
 for_each_sg(sgl, sgent, sglen, i) {
-d->sg[i].addr = sg_dma_address(sgent);
-d->sg[i].en = en;
-d->sg[i].fn = sg_dma_len(sgent) / frame_bytes;
+struct omap_sg *osg = &d->sg[i];
+osg->addr = sg_dma_address(sgent);
+osg->en = en;
+osg->fn = sg_dma_len(sgent) / frame_bytes;
+if (d->using_ll) {
+osg->t2_desc = dma_pool_alloc(od->desc_pool, GFP_ATOMIC,
+&osg->t2_desc_paddr);
+if (!osg->t2_desc) {
+dev_err(chan->device->dev,
+"t2_desc[%d] allocation failed\n", i);
+ll_failed = true;
+d->using_ll = false;
+continue;
+}
+omap_dma_fill_type2_desc(d, i, dir, (i == sglen - 1));
+}
 }
 d->sglen = sglen;
+/* Release the dma_pool entries if one allocation failed */
+if (ll_failed) {
+for (i = 0; i < d->sglen; i++) {
+struct omap_sg *osg = &d->sg[i];
+if (osg->t2_desc) {
+dma_pool_free(od->desc_pool, osg->t2_desc,
+osg->t2_desc_paddr);
+osg->t2_desc = NULL;
+}
+}
+}
 return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
 }
@@ -1225,16 +1381,24 @@ static int omap_dma_probe(struct platform_device *pdev)
 spin_lock_init(&od->lock);
 spin_lock_init(&od->irq_lock);
-od->dma_requests = OMAP_SDMA_REQUESTS;
-if (pdev->dev.of_node && of_property_read_u32(pdev->dev.of_node,
-"dma-requests",
-&od->dma_requests)) {
+if (!pdev->dev.of_node) {
+od->dma_requests = od->plat->dma_attr->lch_count;
+if (unlikely(!od->dma_requests))
+od->dma_requests = OMAP_SDMA_REQUESTS;
+} else if (of_property_read_u32(pdev->dev.of_node, "dma-requests",
+&od->dma_requests)) {
 dev_info(&pdev->dev,
 "Missing dma-requests property, using %u.\n",
 OMAP_SDMA_REQUESTS);
+od->dma_requests = OMAP_SDMA_REQUESTS;
 }
-for (i = 0; i < OMAP_SDMA_CHANNELS; i++) {
+od->lch_map = devm_kcalloc(&pdev->dev, od->dma_requests,
+sizeof(*od->lch_map), GFP_KERNEL);
+if (!od->lch_map)
+return -ENOMEM;
+for (i = 0; i < od->dma_requests; i++) {
 rc = omap_dma_chan_init(od);
 if (rc) {
 omap_dma_free(od);
@@ -1257,10 +1421,25 @@ static int omap_dma_probe(struct platform_device *pdev)
 return rc;
 }
+if (omap_dma_glbl_read(od, CAPS_0) & CAPS_0_SUPPORT_LL123)
+od->ll123_supported = true;
 od->ddev.filter.map = od->plat->slave_map;
 od->ddev.filter.mapcnt = od->plat->slavecnt;
 od->ddev.filter.fn = omap_dma_filter_fn;
+if (od->ll123_supported) {
+od->desc_pool = dma_pool_create(dev_name(&pdev->dev),
+&pdev->dev,
+sizeof(struct omap_type2_desc),
+4, 0);
+if (!od->desc_pool) {
+dev_err(&pdev->dev,
+"unable to allocate descriptor pool\n");
+od->ll123_supported = false;
+}
+}
 rc = dma_async_device_register(&od->ddev);
 if (rc) {
 pr_warn("OMAP-DMA: failed to register slave DMA engine device: %d\n",
@@ -1284,7 +1463,8 @@ static int omap_dma_probe(struct platform_device *pdev)
 }
 }
-dev_info(&pdev->dev, "OMAP DMA engine driver\n");
+dev_info(&pdev->dev, "OMAP DMA engine driver%s\n",
+od->ll123_supported ? " (LinkedList1/2/3 supported)" : "");
 return rc;
 }
@@ -1307,6 +1487,9 @@ static int omap_dma_remove(struct platform_device *pdev)
 omap_dma_glbl_write(od, IRQENABLE_L0, 0);
 }
+if (od->ll123_supported)
+dma_pool_destroy(od->desc_pool);
 omap_dma_free(od);
 return 0;
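The omap-dma hunks above chain one type 2 descriptor per scatterlist entry through a dma_pool, so the controller can follow next_desc/CNDP on its own instead of taking an interrupt per segment. A minimal, illustrative sketch of that dma_pool lifecycle; the pool name, device pointer and error handling here are assumptions, not the driver's code:

    #include <linux/dmapool.h>

    /*
     * Illustrative only: allocate two small coherent descriptors from a pool
     * and chain them by bus address, the way the patch links type 2 descriptors.
     */
    static int example_chain_two_descs(struct device *dev)
    {
        struct dma_pool *pool;
        struct omap_type2_desc *d0, *d1;
        dma_addr_t p0, p1;
        int ret = -ENOMEM;

        pool = dma_pool_create("example-t2", dev,
                               sizeof(struct omap_type2_desc), 4, 0);
        if (!pool)
            return -ENOMEM;

        d0 = dma_pool_alloc(pool, GFP_KERNEL, &p0);
        d1 = dma_pool_alloc(pool, GFP_KERNEL, &p1);
        if (d0 && d1) {
            d0->next_desc = p1;            /* hardware follows the bus address */
            d1->next_desc = 0xfffffffc;    /* same end-of-list marker as above */
            ret = 0;
        }

        if (d1)
            dma_pool_free(pool, d1, p1);
        if (d0)
            dma_pool_free(pool, d0, p0);
        dma_pool_destroy(pool);
        return ret;
    }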


@@ -357,14 +357,13 @@ static void pdc_chain_complete(struct pch_dma_chan *pd_chan,
 struct pch_dma_desc *desc)
 {
 struct dma_async_tx_descriptor *txd = &desc->txd;
-dma_async_tx_callback callback = txd->callback;
-void *param = txd->callback_param;
+struct dmaengine_desc_callback cb;
+dmaengine_desc_get_callback(txd, &cb);
 list_splice_init(&desc->tx_list, &pd_chan->free_list);
 list_move(&desc->desc_node, &pd_chan->free_list);
-if (callback)
-callback(param);
+dmaengine_desc_callback_invoke(&cb, NULL);
 }
 static void pdc_complete_all(struct pch_dma_chan *pd_chan)
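This pch_dma hunk is one instance of a conversion repeated across several drivers below: instead of copying txd->callback and callback_param by hand, the completion path snapshots them into a struct dmaengine_desc_callback and invokes it through the helpers from the subsystem-internal drivers/dma/dmaengine.h. A hedged sketch of the resulting pattern, with locking elided and the function name invented:

    #include "dmaengine.h"    /* subsystem-internal helpers, drivers/dma/ only */

    /* Illustrative completion path using the helper pattern adopted above. */
    static void example_complete(struct dma_async_tx_descriptor *txd)
    {
        struct dmaengine_desc_callback cb;

        dmaengine_desc_get_callback(txd, &cb);      /* snapshot callback + param */
        /* ... recycle the descriptor, drop channel locks ... */
        dmaengine_desc_callback_invoke(&cb, NULL);  /* NULL: no dmaengine_result */
    }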


@@ -2039,14 +2039,12 @@ static void pl330_tasklet(unsigned long data)
 }
 while (!list_empty(&pch->completed_list)) {
-dma_async_tx_callback callback;
-void *callback_param;
+struct dmaengine_desc_callback cb;
 desc = list_first_entry(&pch->completed_list,
 struct dma_pl330_desc, node);
-callback = desc->txd.callback;
-callback_param = desc->txd.callback_param;
+dmaengine_desc_get_callback(&desc->txd, &cb);
 if (pch->cyclic) {
 desc->status = PREP;
@@ -2064,9 +2062,9 @@
 dma_descriptor_unmap(&desc->txd);
-if (callback) {
+if (dmaengine_desc_callback_valid(&cb)) {
 spin_unlock_irqrestore(&pch->lock, flags);
-callback(callback_param);
+dmaengine_desc_callback_invoke(&cb, NULL);
 spin_lock_irqsave(&pch->lock, flags);
 }
 }
@@ -2274,7 +2272,7 @@ pl330_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 {
 enum dma_status ret;
 unsigned long flags;
-struct dma_pl330_desc *desc, *running = NULL;
+struct dma_pl330_desc *desc, *running = NULL, *last_enq = NULL;
 struct dma_pl330_chan *pch = to_pchan(chan);
 unsigned int transferred, residual = 0;
@@ -2287,10 +2285,13 @@ pl330_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 goto out;
 spin_lock_irqsave(&pch->lock, flags);
+spin_lock(&pch->thread->dmac->lock);
 if (pch->thread->req_running != -1)
 running = pch->thread->req[pch->thread->req_running].desc;
+last_enq = pch->thread->req[pch->thread->lstenq].desc;
 /* Check in pending list */
 list_for_each_entry(desc, &pch->work_list, node) {
 if (desc->status == DONE)
@@ -2298,6 +2299,15 @@ pl330_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 else if (running && desc == running)
 transferred =
 pl330_get_current_xferred_count(pch, desc);
+else if (desc->status == BUSY)
+/*
+* Busy but not running means either just enqueued,
+* or finished and not yet marked done
+*/
+if (desc == last_enq)
+transferred = 0;
+else
+transferred = desc->bytes_requested;
 else
 transferred = 0;
 residual += desc->bytes_requested - transferred;
@@ -2318,6 +2328,7 @@ pl330_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 if (desc->last)
 residual = 0;
 }
+spin_unlock(&pch->thread->dmac->lock);
 spin_unlock_irqrestore(&pch->lock, flags);
 out:
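The residue hunk above has to distinguish descriptors the hardware already owns (BUSY) from ones still queued in software; the residue is then bytes_requested minus the transferred count. A standalone sketch of that decision rule, with invented names, purely to spell it out:

    #include <linux/types.h>

    /* Illustrative only: the per-descriptor "transferred" rule used above. */
    static unsigned int example_desc_transferred(bool done, bool is_running,
                                                 bool is_busy, bool is_last_enqueued,
                                                 unsigned int hw_count,
                                                 unsigned int bytes_requested)
    {
        if (done)
            return bytes_requested;    /* fully transferred */
        if (is_running)
            return hw_count;           /* read back from the hardware thread */
        if (is_busy)                   /* enqueued, or finished but not marked DONE */
            return is_last_enqueued ? 0 : bytes_requested;
        return 0;                      /* still waiting in the software queue */
    }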


@@ -1482,14 +1482,11 @@ static dma_cookie_t ppc440spe_adma_run_tx_complete_actions(
 cookie = desc->async_tx.cookie;
 desc->async_tx.cookie = 0;
+dma_descriptor_unmap(&desc->async_tx);
 /* call the callback (must not sleep or submit new
 * operations to this channel)
 */
-if (desc->async_tx.callback)
-desc->async_tx.callback(
-desc->async_tx.callback_param);
-dma_descriptor_unmap(&desc->async_tx);
+dmaengine_desc_get_callback_invoke(&desc->async_tx, NULL);
 }
 /* run dependent operations */
@@ -3891,7 +3888,7 @@ static int ppc440spe_adma_setup_irqs(struct ppc440spe_adma_device *adev,
 np = ofdev->dev.of_node;
 if (adev->id != PPC440SPE_XOR_ID) {
 adev->err_irq = irq_of_parse_and_map(np, 1);
-if (adev->err_irq == NO_IRQ) {
+if (!adev->err_irq) {
 dev_warn(adev->dev, "no err irq resource?\n");
 *initcode = PPC_ADMA_INIT_IRQ2;
 adev->err_irq = -ENXIO;
@@ -3902,7 +3899,7 @@
 }
 adev->irq = irq_of_parse_and_map(np, 0);
-if (adev->irq == NO_IRQ) {
+if (!adev->irq) {
 dev_err(adev->dev, "no irq resource\n");
 *initcode = PPC_ADMA_INIT_IRQ1;
 ret = -ENXIO;
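irq_of_parse_and_map() returns 0 when no mapping exists, which is why the checks above compare against zero instead of the removed NO_IRQ macro. A minimal sketch of that idiom; the helper name and error code are illustrative:

    #include <linux/of_irq.h>
    #include <linux/errno.h>

    /* Illustrative helper: map the first interrupt of a node or fail cleanly. */
    static int example_get_irq(struct device_node *np)
    {
        unsigned int irq = irq_of_parse_and_map(np, 0);

        return irq ? (int)irq : -ENXIO;    /* 0 means "no IRQ"; never compare with NO_IRQ */
    }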


@@ -111,6 +111,7 @@ static void hidma_process_completed(struct hidma_chan *mchan)
 struct dma_async_tx_descriptor *desc;
 dma_cookie_t last_cookie;
 struct hidma_desc *mdesc;
+struct hidma_desc *next;
 unsigned long irqflags;
 struct list_head list;
@@ -122,28 +123,36 @@
 spin_unlock_irqrestore(&mchan->lock, irqflags);
 /* Execute callbacks and run dependencies */
-list_for_each_entry(mdesc, &list, node) {
+list_for_each_entry_safe(mdesc, next, &list, node) {
 enum dma_status llstat;
+struct dmaengine_desc_callback cb;
+struct dmaengine_result result;
 desc = &mdesc->desc;
+last_cookie = desc->cookie;
 spin_lock_irqsave(&mchan->lock, irqflags);
 dma_cookie_complete(desc);
 spin_unlock_irqrestore(&mchan->lock, irqflags);
 llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
-if (desc->callback && (llstat == DMA_COMPLETE))
-desc->callback(desc->callback_param);
-last_cookie = desc->cookie;
+dmaengine_desc_get_callback(desc, &cb);
 dma_run_dependencies(desc);
+spin_lock_irqsave(&mchan->lock, irqflags);
+list_move(&mdesc->node, &mchan->free);
+if (llstat == DMA_COMPLETE) {
+mchan->last_success = last_cookie;
+result.result = DMA_TRANS_NOERROR;
+} else
+result.result = DMA_TRANS_ABORTED;
+spin_unlock_irqrestore(&mchan->lock, irqflags);
+dmaengine_desc_callback_invoke(&cb, &result);
 }
-/* Free descriptors */
-spin_lock_irqsave(&mchan->lock, irqflags);
-list_splice_tail_init(&list, &mchan->free);
-spin_unlock_irqrestore(&mchan->lock, irqflags);
 }
 /*
@@ -238,6 +247,19 @@ static void hidma_issue_pending(struct dma_chan *dmach)
 hidma_ll_start(dmadev->lldev);
 }
+static inline bool hidma_txn_is_success(dma_cookie_t cookie,
+dma_cookie_t last_success, dma_cookie_t last_used)
+{
+if (last_success <= last_used) {
+if ((cookie <= last_success) || (cookie > last_used))
+return true;
+} else {
+if ((cookie <= last_success) && (cookie > last_used))
+return true;
+}
+return false;
+}
 static enum dma_status hidma_tx_status(struct dma_chan *dmach,
 dma_cookie_t cookie,
 struct dma_tx_state *txstate)
@@ -246,8 +268,13 @@ static enum dma_status hidma_tx_status(struct dma_chan *dmach,
 enum dma_status ret;
 ret = dma_cookie_status(dmach, cookie, txstate);
-if (ret == DMA_COMPLETE)
-return ret;
+if (ret == DMA_COMPLETE) {
+bool is_success;
+is_success = hidma_txn_is_success(cookie, mchan->last_success,
+dmach->cookie);
+return is_success ? ret : DMA_ERROR;
+}
 if (mchan->paused && (ret == DMA_IN_PROGRESS)) {
 unsigned long flags;
@@ -398,6 +425,7 @@ static int hidma_terminate_channel(struct dma_chan *chan)
 hidma_process_completed(mchan);
 spin_lock_irqsave(&mchan->lock, irqflags);
+mchan->last_success = 0;
 list_splice_init(&mchan->active, &list);
 list_splice_init(&mchan->prepared, &list);
 list_splice_init(&mchan->completed, &list);
@@ -413,14 +441,9 @@ static int hidma_terminate_channel(struct dma_chan *chan)
 /* return all user requests */
 list_for_each_entry_safe(mdesc, tmp, &list, node) {
 struct dma_async_tx_descriptor *txd = &mdesc->desc;
-dma_async_tx_callback callback = mdesc->desc.callback;
-void *param = mdesc->desc.callback_param;
 dma_descriptor_unmap(txd);
-if (callback)
-callback(param);
+dmaengine_desc_get_callback_invoke(txd, NULL);
 dma_run_dependencies(txd);
 /* move myself to free_list */
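hidma_txn_is_success() above treats cookies as a circular sequence: only values in the window (last_success, last_used] are still owned by the hardware, and everything else can be reported as successfully completed. A small, self-contained worked example with hypothetical cookie values (plain userspace C mirroring the same comparison):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Same window test as hidma_txn_is_success(), on plain 32-bit cookies. */
    static bool txn_is_success(int32_t cookie, int32_t last_success, int32_t last_used)
    {
        if (last_success <= last_used)
            return cookie <= last_success || cookie > last_used;
        return cookie <= last_success && cookie > last_used;
    }

    int main(void)
    {
        assert(txn_is_success(9, 10, 14));                   /* retired before the window */
        assert(!txn_is_success(12, 10, 14));                 /* still owned by the hardware */
        assert(!txn_is_success(0x7fffffff, 0x7ffffffe, 2));  /* in flight across the wrap */
        assert(txn_is_success(0x7ffffffd, 0x7ffffffe, 2));   /* long since retired */
        return 0;
    }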


@@ -72,7 +72,6 @@ struct hidma_lldev {
 u32 tre_write_offset; /* TRE write location */
 struct tasklet_struct task; /* task delivering notifications */
-struct tasklet_struct rst_task; /* task to reset HW */
 DECLARE_KFIFO_PTR(handoff_fifo,
 struct hidma_tre *); /* pending TREs FIFO */
 };
@@ -89,6 +88,7 @@ struct hidma_chan {
 bool allocated;
 char dbg_name[16];
 u32 dma_sig;
+dma_cookie_t last_success;
 /*
 * active descriptor on this channel


@@ -380,27 +380,6 @@ static int hidma_ll_reset(struct hidma_lldev *lldev)
 return 0;
 }
-/*
-* Abort all transactions and perform a reset.
-*/
-static void hidma_ll_abort(unsigned long arg)
-{
-struct hidma_lldev *lldev = (struct hidma_lldev *)arg;
-u8 err_code = HIDMA_EVRE_STATUS_ERROR;
-u8 err_info = 0xFF;
-int rc;
-hidma_cleanup_pending_tre(lldev, err_info, err_code);
-/* reset the channel for recovery */
-rc = hidma_ll_setup(lldev);
-if (rc) {
-dev_err(lldev->dev, "channel reinitialize failed after error\n");
-return;
-}
-writel(ENABLE_IRQS, lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
-}
 /*
 * The interrupt handler for HIDMA will try to consume as many pending
 * EVRE from the event queue as possible. Each EVRE has an associated
@@ -454,13 +433,18 @@ irqreturn_t hidma_ll_inthandler(int chirq, void *arg)
 while (cause) {
 if (cause & HIDMA_ERR_INT_MASK) {
-dev_err(lldev->dev, "error 0x%x, resetting...\n",
+dev_err(lldev->dev, "error 0x%x, disabling...\n",
 cause);
 /* Clear out pending interrupts */
 writel(cause, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
-tasklet_schedule(&lldev->rst_task);
+/* No further submissions. */
+hidma_ll_disable(lldev);
+/* Driver completes the txn and intimates the client.*/
+hidma_cleanup_pending_tre(lldev, 0xFF,
+HIDMA_EVRE_STATUS_ERROR);
 goto out;
 }
@@ -808,7 +792,6 @@ struct hidma_lldev *hidma_ll_init(struct device *dev, u32 nr_tres,
 return NULL;
 spin_lock_init(&lldev->lock);
-tasklet_init(&lldev->rst_task, hidma_ll_abort, (unsigned long)lldev);
 tasklet_init(&lldev->task, hidma_ll_tre_complete, (unsigned long)lldev);
 lldev->initialized = 1;
 writel(ENABLE_IRQS, lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
@@ -831,7 +814,6 @@ int hidma_ll_uninit(struct hidma_lldev *lldev)
 required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres;
 tasklet_kill(&lldev->task);
-tasklet_kill(&lldev->rst_task);
 memset(lldev->trepool, 0, required_bytes);
 lldev->trepool = NULL;
 lldev->pending_tre_count = 0;


@@ -823,11 +823,11 @@ static struct dma_async_tx_descriptor *s3c24xx_dma_prep_memcpy(
 struct s3c24xx_sg *dsg;
 int src_mod, dest_mod;
-dev_dbg(&s3cdma->pdev->dev, "prepare memcpy of %d bytes from %s\n",
+dev_dbg(&s3cdma->pdev->dev, "prepare memcpy of %zu bytes from %s\n",
 len, s3cchan->name);
 if ((len & S3C24XX_DCON_TC_MASK) != len) {
-dev_err(&s3cdma->pdev->dev, "memcpy size %d to large\n", len);
+dev_err(&s3cdma->pdev->dev, "memcpy size %zu to large\n", len);
 return NULL;
 }
@@ -1301,6 +1301,9 @@ static int s3c24xx_dma_probe(struct platform_device *pdev)
 s3cdma->slave.device_prep_dma_cyclic = s3c24xx_dma_prep_dma_cyclic;
 s3cdma->slave.device_config = s3c24xx_dma_set_runtime_config;
 s3cdma->slave.device_terminate_all = s3c24xx_dma_terminate_all;
+s3cdma->slave.filter.map = pdata->slave_map;
+s3cdma->slave.filter.mapcnt = pdata->slavecnt;
+s3cdma->slave.filter.fn = s3c24xx_dma_filter;
 /* Register as many memcpy channels as there are physical channels */
 ret = s3c24xx_dma_init_virtual_channels(s3cdma, &s3cdma->memcpy,
@@ -1418,7 +1421,7 @@ bool s3c24xx_dma_filter(struct dma_chan *chan, void *param)
 s3cchan = to_s3c24xx_dma_chan(chan);
-return s3cchan->id == (int)param;
+return s3cchan->id == (uintptr_t)param;
 }
 EXPORT_SYMBOL(s3c24xx_dma_filter);
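Wiring up slave.filter.map/mapcnt/fn above lets clients obtain channels through dma_request_chan() instead of carrying the filter function and channel id around. A hedged sketch of what such a map and its consumer might look like; every name and value here is made up for illustration:

    #include <linux/dmaengine.h>

    /* Hypothetical request-signal numbers, for illustration only. */
    enum { EXAMPLE_CH_UART0_TX = 17, EXAMPLE_CH_UART0_RX = 18 };

    /* provider side: one table in the platform data, as wired up above */
    static const struct dma_slave_map example_slave_map[] = {
        { "example-uart.0", "tx", (void *)(uintptr_t)EXAMPLE_CH_UART0_TX },
        { "example-uart.0", "rx", (void *)(uintptr_t)EXAMPLE_CH_UART0_RX },
    };

    /* consumer side: no filter function or channel number needed any more */
    static struct dma_chan *example_get_tx_chan(struct device *uart_dev)
    {
        return dma_request_chan(uart_dev, "tx");
    }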


@@ -463,7 +463,7 @@ static enum dma_status sa11x0_dma_tx_status(struct dma_chan *chan,
 dma_addr_t addr = sa11x0_dma_pos(p);
 unsigned i;
-dev_vdbg(d->slave.dev, "tx_status: addr:%x\n", addr);
+dev_vdbg(d->slave.dev, "tx_status: addr:%pad\n", &addr);
 for (i = 0; i < txd->sglen; i++) {
 dev_vdbg(d->slave.dev, "tx_status: [%u] %x+%x\n",
@@ -491,7 +491,7 @@
 }
 spin_unlock_irqrestore(&c->vc.lock, flags);
-dev_vdbg(d->slave.dev, "tx_status: bytes 0x%zx\n", state->residue);
+dev_vdbg(d->slave.dev, "tx_status: bytes 0x%x\n", state->residue);
 return ret;
 }
@@ -551,8 +551,8 @@ static struct dma_async_tx_descriptor *sa11x0_dma_prep_slave_sg(
 if (len > DMA_MAX_SIZE)
 j += DIV_ROUND_UP(len, DMA_MAX_SIZE & ~DMA_ALIGN) - 1;
 if (addr & DMA_ALIGN) {
-dev_dbg(chan->device->dev, "vchan %p: bad buffer alignment: %08x\n",
-&c->vc, addr);
+dev_dbg(chan->device->dev, "vchan %p: bad buffer alignment: %pad\n",
+&c->vc, &addr);
 return NULL;
 }
 }
@@ -599,7 +599,7 @@
 txd->size = size;
 txd->sglen = j;
-dev_dbg(chan->device->dev, "vchan %p: txd %p: size %u nr %u\n",
+dev_dbg(chan->device->dev, "vchan %p: txd %p: size %zu nr %u\n",
 &c->vc, &txd->vd, txd->size, txd->sglen);
 return vchan_tx_prep(&c->vc, &txd->vd, flags);
@@ -693,8 +693,8 @@ static int sa11x0_dma_device_config(struct dma_chan *chan,
 if (maxburst == 8)
 ddar |= DDAR_BS;
-dev_dbg(c->vc.chan.device->dev, "vchan %p: dma_slave_config addr %x width %u burst %u\n",
-&c->vc, addr, width, maxburst);
+dev_dbg(c->vc.chan.device->dev, "vchan %p: dma_slave_config addr %pad width %u burst %u\n",
+&c->vc, &addr, width, maxburst);
 c->ddar = ddar | (addr & 0xf0000000) | (addr & 0x003ffffc) << 6;
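The sa11x0 changes above switch dma_addr_t values to the %pad printk specifier, which takes a pointer to the address and prints correctly regardless of how wide dma_addr_t is on the platform. A tiny illustrative helper, not taken from the driver:

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    /* Illustrative only: %pad wants a pointer to the dma_addr_t, %zu matches size_t. */
    static void example_log_mapping(struct device *dev, dma_addr_t addr, size_t size)
    {
        dev_dbg(dev, "mapped %pad, %zu bytes\n", &addr, size);
    }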


@@ -117,15 +117,35 @@ struct rcar_dmac_desc_page {
 ((PAGE_SIZE - offsetof(struct rcar_dmac_desc_page, chunks)) / \
 sizeof(struct rcar_dmac_xfer_chunk))
+/*
+* struct rcar_dmac_chan_slave - Slave configuration
+* @slave_addr: slave memory address
+* @xfer_size: size (in bytes) of hardware transfers
+*/
+struct rcar_dmac_chan_slave {
+phys_addr_t slave_addr;
+unsigned int xfer_size;
+};
+/*
+* struct rcar_dmac_chan_map - Map of slave device phys to dma address
+* @addr: slave dma address
+* @dir: direction of mapping
+* @slave: slave configuration that is mapped
+*/
+struct rcar_dmac_chan_map {
+dma_addr_t addr;
+enum dma_data_direction dir;
+struct rcar_dmac_chan_slave slave;
+};
 /*
 * struct rcar_dmac_chan - R-Car Gen2 DMA Controller Channel
 * @chan: base DMA channel object
 * @iomem: channel I/O memory base
 * @index: index of this channel in the controller
-* @src_xfer_size: size (in bytes) of hardware transfers on the source side
-* @dst_xfer_size: size (in bytes) of hardware transfers on the destination side
-* @src_slave_addr: slave source memory address
-* @dst_slave_addr: slave destination memory address
+* @src: slave memory address and size on the source side
+* @dst: slave memory address and size on the destination side
 * @mid_rid: hardware MID/RID for the DMA client using this channel
 * @lock: protects the channel CHCR register and the desc members
 * @desc.free: list of free descriptors
@@ -142,10 +162,9 @@ struct rcar_dmac_chan {
 void __iomem *iomem;
 unsigned int index;
-unsigned int src_xfer_size;
-unsigned int dst_xfer_size;
-dma_addr_t src_slave_addr;
-dma_addr_t dst_slave_addr;
+struct rcar_dmac_chan_slave src;
+struct rcar_dmac_chan_slave dst;
+struct rcar_dmac_chan_map map;
 int mid_rid;
 spinlock_t lock;
@@ -793,13 +812,13 @@ static void rcar_dmac_chan_configure_desc(struct rcar_dmac_chan *chan,
 case DMA_DEV_TO_MEM:
 chcr = RCAR_DMACHCR_DM_INC | RCAR_DMACHCR_SM_FIXED
 | RCAR_DMACHCR_RS_DMARS;
-xfer_size = chan->src_xfer_size;
+xfer_size = chan->src.xfer_size;
 break;
 case DMA_MEM_TO_DEV:
 chcr = RCAR_DMACHCR_DM_FIXED | RCAR_DMACHCR_SM_INC
 | RCAR_DMACHCR_RS_DMARS;
-xfer_size = chan->dst_xfer_size;
+xfer_size = chan->dst.xfer_size;
 break;
 case DMA_MEM_TO_MEM:
@@ -1023,13 +1042,65 @@ rcar_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dma_dest,
 DMA_MEM_TO_MEM, flags, false);
 }
+static int rcar_dmac_map_slave_addr(struct dma_chan *chan,
+enum dma_transfer_direction dir)
+{
+struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan);
+struct rcar_dmac_chan_map *map = &rchan->map;
+phys_addr_t dev_addr;
+size_t dev_size;
+enum dma_data_direction dev_dir;
+if (dir == DMA_DEV_TO_MEM) {
+dev_addr = rchan->src.slave_addr;
+dev_size = rchan->src.xfer_size;
+dev_dir = DMA_TO_DEVICE;
+} else {
+dev_addr = rchan->dst.slave_addr;
+dev_size = rchan->dst.xfer_size;
+dev_dir = DMA_FROM_DEVICE;
+}
+/* Reuse current map if possible. */
+if (dev_addr == map->slave.slave_addr &&
+dev_size == map->slave.xfer_size &&
+dev_dir == map->dir)
+return 0;
+/* Remove old mapping if present. */
+if (map->slave.xfer_size)
+dma_unmap_resource(chan->device->dev, map->addr,
+map->slave.xfer_size, map->dir, 0);
+map->slave.xfer_size = 0;
+/* Create new slave address map. */
+map->addr = dma_map_resource(chan->device->dev, dev_addr, dev_size,
+dev_dir, 0);
+if (dma_mapping_error(chan->device->dev, map->addr)) {
+dev_err(chan->device->dev,
+"chan%u: failed to map %zx@%pap", rchan->index,
+dev_size, &dev_addr);
+return -EIO;
+}
+dev_dbg(chan->device->dev, "chan%u: map %zx@%pap to %pad dir: %s\n",
+rchan->index, dev_size, &dev_addr, &map->addr,
+dev_dir == DMA_TO_DEVICE ? "DMA_TO_DEVICE" : "DMA_FROM_DEVICE");
+map->slave.slave_addr = dev_addr;
+map->slave.xfer_size = dev_size;
+map->dir = dev_dir;
+return 0;
+}
 static struct dma_async_tx_descriptor *
 rcar_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 unsigned int sg_len, enum dma_transfer_direction dir,
 unsigned long flags, void *context)
 {
 struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan);
-dma_addr_t dev_addr;
 /* Someone calling slave DMA on a generic channel? */
 if (rchan->mid_rid < 0 || !sg_len) {
@@ -1039,9 +1110,10 @@
 return NULL;
 }
-dev_addr = dir == DMA_DEV_TO_MEM
-? rchan->src_slave_addr : rchan->dst_slave_addr;
-return rcar_dmac_chan_prep_sg(rchan, sgl, sg_len, dev_addr,
+if (rcar_dmac_map_slave_addr(chan, dir))
+return NULL;
+return rcar_dmac_chan_prep_sg(rchan, sgl, sg_len, rchan->map.addr,
 dir, flags, false);
 }
@@ -1055,7 +1127,6 @@ rcar_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
 struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan);
 struct dma_async_tx_descriptor *desc;
 struct scatterlist *sgl;
-dma_addr_t dev_addr;
 unsigned int sg_len;
 unsigned int i;
@@ -1067,6 +1138,9 @@
 return NULL;
 }
+if (rcar_dmac_map_slave_addr(chan, dir))
+return NULL;
 sg_len = buf_len / period_len;
 if (sg_len > RCAR_DMAC_MAX_SG_LEN) {
 dev_err(chan->device->dev,
@@ -1094,9 +1168,7 @@
 sg_dma_len(&sgl[i]) = period_len;
 }
-dev_addr = dir == DMA_DEV_TO_MEM
-? rchan->src_slave_addr : rchan->dst_slave_addr;
-desc = rcar_dmac_chan_prep_sg(rchan, sgl, sg_len, dev_addr,
+desc = rcar_dmac_chan_prep_sg(rchan, sgl, sg_len, rchan->map.addr,
 dir, flags, true);
 kfree(sgl);
@@ -1112,10 +1184,10 @@ static int rcar_dmac_device_config(struct dma_chan *chan,
 * We could lock this, but you shouldn't be configuring the
 * channel, while using it...
 */
-rchan->src_slave_addr = cfg->src_addr;
-rchan->dst_slave_addr = cfg->dst_addr;
-rchan->src_xfer_size = cfg->src_addr_width;
-rchan->dst_xfer_size = cfg->dst_addr_width;
+rchan->src.slave_addr = cfg->src_addr;
+rchan->dst.slave_addr = cfg->dst_addr;
+rchan->src.xfer_size = cfg->src_addr_width;
+rchan->dst.xfer_size = cfg->dst_addr_width;
 return 0;
 }
@@ -1389,21 +1461,18 @@ static irqreturn_t rcar_dmac_isr_channel_thread(int irq, void *dev)
 {
 struct rcar_dmac_chan *chan = dev;
 struct rcar_dmac_desc *desc;
+struct dmaengine_desc_callback cb;
 spin_lock_irq(&chan->lock);
 /* For cyclic transfers notify the user after every chunk. */
 if (chan->desc.running && chan->desc.running->cyclic) {
-dma_async_tx_callback callback;
-void *callback_param;
 desc = chan->desc.running;
-callback = desc->async_tx.callback;
-callback_param = desc->async_tx.callback_param;
+dmaengine_desc_get_callback(&desc->async_tx, &cb);
-if (callback) {
+if (dmaengine_desc_callback_valid(&cb)) {
 spin_unlock_irq(&chan->lock);
-callback(callback_param);
+dmaengine_desc_callback_invoke(&cb, NULL);
 spin_lock_irq(&chan->lock);
 }
 }
@@ -1418,14 +1487,15 @@
 dma_cookie_complete(&desc->async_tx);
 list_del(&desc->node);
-if (desc->async_tx.callback) {
+dmaengine_desc_get_callback(&desc->async_tx, &cb);
+if (dmaengine_desc_callback_valid(&cb)) {
 spin_unlock_irq(&chan->lock);
 /*
 * We own the only reference to this descriptor, we can
 * safely dereference it without holding the channel
 * lock.
 */
-desc->async_tx.callback(desc->async_tx.callback_param);
+dmaengine_desc_callback_invoke(&cb, NULL);
 spin_lock_irq(&chan->lock);
 }
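rcar_dmac_map_slave_addr() above maps the slave's FIFO (a physical MMIO address) into the device's DMA address space with dma_map_resource(), which is what makes slave transfers work behind an IOMMU, and caches the result per channel. A reduced sketch of the map/unmap pair under the same assumptions; the caller supplies the device, FIFO address and transfer size, and caching/error reporting are omitted:

    #include <linux/dma-mapping.h>

    /* Illustrative only: map a slave FIFO for DMA and hand back its bus address. */
    static dma_addr_t example_map_fifo(struct device *dev, phys_addr_t fifo_phys,
                                       size_t xfer_size,
                                       enum dma_data_direction dir)
    {
        dma_addr_t addr = dma_map_resource(dev, fifo_phys, xfer_size, dir, 0);

        if (dma_mapping_error(dev, addr))
            return 0;    /* caller treats 0 as "no mapping" in this sketch */

        return addr;     /* program this into the DMAC's device-address register */
    }

    /* and symmetrically when the slave config changes or the channel is freed: */
    static void example_unmap_fifo(struct device *dev, dma_addr_t addr,
                                   size_t xfer_size, enum dma_data_direction dir)
    {
        dma_unmap_resource(dev, addr, xfer_size, dir, 0);
    }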


@@ -330,10 +330,11 @@ static dma_async_tx_callback __ld_cleanup(struct shdma_chan *schan, bool all)
 bool head_acked = false;
 dma_cookie_t cookie = 0;
 dma_async_tx_callback callback = NULL;
-void *param = NULL;
+struct dmaengine_desc_callback cb;
 unsigned long flags;
 LIST_HEAD(cyclic_list);
+memset(&cb, 0, sizeof(cb));
 spin_lock_irqsave(&schan->chan_lock, flags);
 list_for_each_entry_safe(desc, _desc, &schan->ld_queue, node) {
 struct dma_async_tx_descriptor *tx = &desc->async_tx;
@@ -367,8 +368,8 @@
 /* Call callback on the last chunk */
 if (desc->mark == DESC_COMPLETED && tx->callback) {
 desc->mark = DESC_WAITING;
+dmaengine_desc_get_callback(tx, &cb);
 callback = tx->callback;
-param = tx->callback_param;
 dev_dbg(schan->dev, "descriptor #%d@%p on %d callback\n",
 tx->cookie, tx, schan->id);
 BUG_ON(desc->chunks != 1);
@@ -430,8 +431,7 @@
 spin_unlock_irqrestore(&schan->chan_lock, flags);
-if (callback)
-callback(param);
+dmaengine_desc_callback_invoke(&cb, NULL);
 return callback;
 }
@@ -885,9 +885,9 @@ bool shdma_reset(struct shdma_dev *sdev)
 /* Complete all */
 list_for_each_entry(sdesc, &dl, node) {
 struct dma_async_tx_descriptor *tx = &sdesc->async_tx;
 sdesc->mark = DESC_IDLE;
-if (tx->callback)
-tx->callback(tx->callback_param);
+dmaengine_desc_get_callback_invoke(tx, NULL);
 }
 spin_lock(&schan->chan_lock);


@@ -360,9 +360,7 @@ static void sirfsoc_dma_process_completed(struct sirfsoc_dma *sdma)
 list_for_each_entry(sdesc, &list, node) {
 desc = &sdesc->desc;
-if (desc->callback)
-desc->callback(desc->callback_param);
+dmaengine_desc_get_callback_invoke(desc, NULL);
 last_cookie = desc->cookie;
 dma_run_dependencies(desc);
 }
@@ -388,8 +386,7 @@
 desc = &sdesc->desc;
 while (happened_cyclic != schan->completed_cyclic) {
-if (desc->callback)
-desc->callback(desc->callback_param);
+dmaengine_desc_get_callback_invoke(desc, NULL);
 schan->completed_cyclic++;
 }
 }
@@ -869,7 +866,7 @@ static int sirfsoc_dma_probe(struct platform_device *op)
 }
 sdma->irq = irq_of_parse_and_map(dn, 0);
-if (sdma->irq == NO_IRQ) {
+if (!sdma->irq) {
 dev_err(dev, "Error mapping IRQ!\n");
 return -EINVAL;
 }


@ -874,7 +874,7 @@ static void d40_log_lli_to_lcxa(struct d40_chan *chan, struct d40_desc *desc)
} }
if (curr_lcla < 0) if (curr_lcla < 0)
goto out; goto set_current;
for (; lli_current < lli_len; lli_current++) { for (; lli_current < lli_len; lli_current++) {
unsigned int lcla_offset = chan->phy_chan->num * 1024 + unsigned int lcla_offset = chan->phy_chan->num * 1024 +
@ -925,8 +925,7 @@ static void d40_log_lli_to_lcxa(struct d40_chan *chan, struct d40_desc *desc)
break; break;
} }
} }
set_current:
out:
desc->lli_current = lli_current; desc->lli_current = lli_current;
} }
@ -941,15 +940,7 @@ static void d40_desc_load(struct d40_chan *d40c, struct d40_desc *d40d)
static struct d40_desc *d40_first_active_get(struct d40_chan *d40c) static struct d40_desc *d40_first_active_get(struct d40_chan *d40c)
{ {
struct d40_desc *d; return list_first_entry_or_null(&d40c->active, struct d40_desc, node);
if (list_empty(&d40c->active))
return NULL;
d = list_first_entry(&d40c->active,
struct d40_desc,
node);
return d;
} }
/* remove desc from current queue and add it to the pending_queue */ /* remove desc from current queue and add it to the pending_queue */
@ -962,36 +953,18 @@ static void d40_desc_queue(struct d40_chan *d40c, struct d40_desc *desc)
static struct d40_desc *d40_first_pending(struct d40_chan *d40c) static struct d40_desc *d40_first_pending(struct d40_chan *d40c)
{ {
struct d40_desc *d; return list_first_entry_or_null(&d40c->pending_queue, struct d40_desc,
node);
if (list_empty(&d40c->pending_queue))
return NULL;
d = list_first_entry(&d40c->pending_queue,
struct d40_desc,
node);
return d;
} }
static struct d40_desc *d40_first_queued(struct d40_chan *d40c) static struct d40_desc *d40_first_queued(struct d40_chan *d40c)
{ {
struct d40_desc *d; return list_first_entry_or_null(&d40c->queue, struct d40_desc, node);
if (list_empty(&d40c->queue))
return NULL;
d = list_first_entry(&d40c->queue,
struct d40_desc,
node);
return d;
} }
static struct d40_desc *d40_first_done(struct d40_chan *d40c) static struct d40_desc *d40_first_done(struct d40_chan *d40c)
{ {
if (list_empty(&d40c->done)) return list_first_entry_or_null(&d40c->done, struct d40_desc, node);
return NULL;
return list_first_entry(&d40c->done, struct d40_desc, node);
} }
static int d40_psize_2_burst_size(bool is_log, int psize) static int d40_psize_2_burst_size(bool is_log, int psize)
@ -1083,7 +1056,7 @@ static int __d40_execute_command_phy(struct d40_chan *d40c,
D40_CHAN_POS(d40c->phy_chan->num); D40_CHAN_POS(d40c->phy_chan->num);
if (status == D40_DMA_SUSPENDED || status == D40_DMA_STOP) if (status == D40_DMA_SUSPENDED || status == D40_DMA_STOP)
goto done; goto unlock;
} }
wmask = 0xffffffff & ~(D40_CHAN_POS_MASK(d40c->phy_chan->num)); wmask = 0xffffffff & ~(D40_CHAN_POS_MASK(d40c->phy_chan->num));
@ -1119,7 +1092,7 @@ static int __d40_execute_command_phy(struct d40_chan *d40c,
} }
} }
done: unlock:
spin_unlock_irqrestore(&d40c->base->execmd_lock, flags); spin_unlock_irqrestore(&d40c->base->execmd_lock, flags);
return ret; return ret;
} }
@ -1596,8 +1569,7 @@ static void dma_tasklet(unsigned long data)
struct d40_desc *d40d; struct d40_desc *d40d;
unsigned long flags; unsigned long flags;
bool callback_active; bool callback_active;
dma_async_tx_callback callback; struct dmaengine_desc_callback cb;
void *callback_param;
spin_lock_irqsave(&d40c->lock, flags); spin_lock_irqsave(&d40c->lock, flags);
@ -1607,7 +1579,7 @@ static void dma_tasklet(unsigned long data)
/* Check if we have reached here for cyclic job */ /* Check if we have reached here for cyclic job */
d40d = d40_first_active_get(d40c); d40d = d40_first_active_get(d40c);
if (d40d == NULL || !d40d->cyclic) if (d40d == NULL || !d40d->cyclic)
goto err; goto check_pending_tx;
} }
if (!d40d->cyclic) if (!d40d->cyclic)
@ -1624,8 +1596,7 @@ static void dma_tasklet(unsigned long data)
/* Callback to client */ /* Callback to client */
callback_active = !!(d40d->txd.flags & DMA_PREP_INTERRUPT); callback_active = !!(d40d->txd.flags & DMA_PREP_INTERRUPT);
callback = d40d->txd.callback; dmaengine_desc_get_callback(&d40d->txd, &cb);
callback_param = d40d->txd.callback_param;
if (!d40d->cyclic) { if (!d40d->cyclic) {
if (async_tx_test_ack(&d40d->txd)) { if (async_tx_test_ack(&d40d->txd)) {
@ -1646,12 +1617,11 @@ static void dma_tasklet(unsigned long data)
spin_unlock_irqrestore(&d40c->lock, flags); spin_unlock_irqrestore(&d40c->lock, flags);
if (callback_active && callback) if (callback_active)
callback(callback_param); dmaengine_desc_callback_invoke(&cb, NULL);
return; return;
check_pending_tx:
err:
/* Rescue manouver if receiving double interrupts */ /* Rescue manouver if receiving double interrupts */
if (d40c->pending_tx > 0) if (d40c->pending_tx > 0)
d40c->pending_tx--; d40c->pending_tx--;
@ -1780,42 +1750,40 @@ static bool d40_alloc_mask_set(struct d40_phy_res *phy,
phy->allocated_dst == D40_ALLOC_FREE) { phy->allocated_dst == D40_ALLOC_FREE) {
phy->allocated_dst = D40_ALLOC_PHY; phy->allocated_dst = D40_ALLOC_PHY;
phy->allocated_src = D40_ALLOC_PHY; phy->allocated_src = D40_ALLOC_PHY;
goto found; goto found_unlock;
} else } else
goto not_found; goto not_found_unlock;
} }
/* Logical channel */ /* Logical channel */
if (is_src) { if (is_src) {
if (phy->allocated_src == D40_ALLOC_PHY) if (phy->allocated_src == D40_ALLOC_PHY)
goto not_found; goto not_found_unlock;
if (phy->allocated_src == D40_ALLOC_FREE) if (phy->allocated_src == D40_ALLOC_FREE)
phy->allocated_src = D40_ALLOC_LOG_FREE; phy->allocated_src = D40_ALLOC_LOG_FREE;
if (!(phy->allocated_src & BIT(log_event_line))) { if (!(phy->allocated_src & BIT(log_event_line))) {
phy->allocated_src |= BIT(log_event_line); phy->allocated_src |= BIT(log_event_line);
goto found; goto found_unlock;
} else } else
goto not_found; goto not_found_unlock;
} else { } else {
if (phy->allocated_dst == D40_ALLOC_PHY) if (phy->allocated_dst == D40_ALLOC_PHY)
goto not_found; goto not_found_unlock;
if (phy->allocated_dst == D40_ALLOC_FREE) if (phy->allocated_dst == D40_ALLOC_FREE)
phy->allocated_dst = D40_ALLOC_LOG_FREE; phy->allocated_dst = D40_ALLOC_LOG_FREE;
if (!(phy->allocated_dst & BIT(log_event_line))) { if (!(phy->allocated_dst & BIT(log_event_line))) {
phy->allocated_dst |= BIT(log_event_line); phy->allocated_dst |= BIT(log_event_line);
goto found; goto found_unlock;
} else }
goto not_found;
} }
not_found_unlock:
not_found:
spin_unlock_irqrestore(&phy->lock, flags); spin_unlock_irqrestore(&phy->lock, flags);
return false; return false;
found: found_unlock:
spin_unlock_irqrestore(&phy->lock, flags); spin_unlock_irqrestore(&phy->lock, flags);
return true; return true;
} }
@ -1831,7 +1799,7 @@ static bool d40_alloc_mask_free(struct d40_phy_res *phy, bool is_src,
phy->allocated_dst = D40_ALLOC_FREE; phy->allocated_dst = D40_ALLOC_FREE;
phy->allocated_src = D40_ALLOC_FREE; phy->allocated_src = D40_ALLOC_FREE;
is_free = true; is_free = true;
goto out; goto unlock;
} }
/* Logical channel */ /* Logical channel */
@ -1847,8 +1815,7 @@ static bool d40_alloc_mask_free(struct d40_phy_res *phy, bool is_src,
is_free = ((phy->allocated_src | phy->allocated_dst) == is_free = ((phy->allocated_src | phy->allocated_dst) ==
D40_ALLOC_FREE); D40_ALLOC_FREE);
unlock:
out:
spin_unlock_irqrestore(&phy->lock, flags); spin_unlock_irqrestore(&phy->lock, flags);
return is_free; return is_free;
@ -2047,7 +2014,7 @@ static int d40_free_dma(struct d40_chan *d40c)
res = d40_channel_execute_command(d40c, D40_DMA_STOP); res = d40_channel_execute_command(d40c, D40_DMA_STOP);
if (res) { if (res) {
chan_err(d40c, "stop failed\n"); chan_err(d40c, "stop failed\n");
goto out; goto mark_last_busy;
} }
d40_alloc_mask_free(phy, is_src, chan_is_logical(d40c) ? event : 0); d40_alloc_mask_free(phy, is_src, chan_is_logical(d40c) ? event : 0);
@ -2065,8 +2032,7 @@ static int d40_free_dma(struct d40_chan *d40c)
d40c->busy = false; d40c->busy = false;
d40c->phy_chan = NULL; d40c->phy_chan = NULL;
d40c->configured = false; d40c->configured = false;
out: mark_last_busy:
pm_runtime_mark_last_busy(d40c->base->dev); pm_runtime_mark_last_busy(d40c->base->dev);
pm_runtime_put_autosuspend(d40c->base->dev); pm_runtime_put_autosuspend(d40c->base->dev);
return res; return res;
@ -2094,8 +2060,7 @@ static bool d40_is_paused(struct d40_chan *d40c)
D40_CHAN_POS(d40c->phy_chan->num); D40_CHAN_POS(d40c->phy_chan->num);
if (status == D40_DMA_SUSPENDED || status == D40_DMA_STOP) if (status == D40_DMA_SUSPENDED || status == D40_DMA_STOP)
is_paused = true; is_paused = true;
goto unlock;
goto _exit;
} }
if (d40c->dma_cfg.dir == DMA_MEM_TO_DEV || if (d40c->dma_cfg.dir == DMA_MEM_TO_DEV ||
@ -2105,7 +2070,7 @@ static bool d40_is_paused(struct d40_chan *d40c)
status = readl(chanbase + D40_CHAN_REG_SSLNK); status = readl(chanbase + D40_CHAN_REG_SSLNK);
} else { } else {
chan_err(d40c, "Unknown direction\n"); chan_err(d40c, "Unknown direction\n");
goto _exit; goto unlock;
} }
status = (status & D40_EVENTLINE_MASK(event)) >> status = (status & D40_EVENTLINE_MASK(event)) >>
@ -2113,7 +2078,7 @@ static bool d40_is_paused(struct d40_chan *d40c)
if (status != D40_DMA_RUN) if (status != D40_DMA_RUN)
is_paused = true; is_paused = true;
_exit: unlock:
spin_unlock_irqrestore(&d40c->lock, flags); spin_unlock_irqrestore(&d40c->lock, flags);
return is_paused; return is_paused;
@ -2198,7 +2163,7 @@ static struct d40_desc *
d40_prep_desc(struct d40_chan *chan, struct scatterlist *sg, d40_prep_desc(struct d40_chan *chan, struct scatterlist *sg,
unsigned int sg_len, unsigned long dma_flags) unsigned int sg_len, unsigned long dma_flags)
{ {
struct stedma40_chan_cfg *cfg = &chan->dma_cfg; struct stedma40_chan_cfg *cfg;
struct d40_desc *desc; struct d40_desc *desc;
int ret; int ret;
@ -2206,17 +2171,18 @@ d40_prep_desc(struct d40_chan *chan, struct scatterlist *sg,
if (!desc) if (!desc)
return NULL; return NULL;
cfg = &chan->dma_cfg;
desc->lli_len = d40_sg_2_dmalen(sg, sg_len, cfg->src_info.data_width, desc->lli_len = d40_sg_2_dmalen(sg, sg_len, cfg->src_info.data_width,
cfg->dst_info.data_width); cfg->dst_info.data_width);
if (desc->lli_len < 0) { if (desc->lli_len < 0) {
chan_err(chan, "Unaligned size\n"); chan_err(chan, "Unaligned size\n");
goto err; goto free_desc;
} }
ret = d40_pool_lli_alloc(chan, desc, desc->lli_len); ret = d40_pool_lli_alloc(chan, desc, desc->lli_len);
if (ret < 0) { if (ret < 0) {
chan_err(chan, "Could not allocate lli\n"); chan_err(chan, "Could not allocate lli\n");
goto err; goto free_desc;
} }
desc->lli_current = 0; desc->lli_current = 0;
@ -2226,8 +2192,7 @@ d40_prep_desc(struct d40_chan *chan, struct scatterlist *sg,
dma_async_tx_descriptor_init(&desc->txd, &chan->chan); dma_async_tx_descriptor_init(&desc->txd, &chan->chan);
return desc; return desc;
free_desc:
err:
d40_desc_free(chan, desc); d40_desc_free(chan, desc);
return NULL; return NULL;
} }
@ -2238,8 +2203,8 @@ d40_prep_sg(struct dma_chan *dchan, struct scatterlist *sg_src,
enum dma_transfer_direction direction, unsigned long dma_flags) enum dma_transfer_direction direction, unsigned long dma_flags)
{ {
struct d40_chan *chan = container_of(dchan, struct d40_chan, chan); struct d40_chan *chan = container_of(dchan, struct d40_chan, chan);
dma_addr_t src_dev_addr = 0; dma_addr_t src_dev_addr;
dma_addr_t dst_dev_addr = 0; dma_addr_t dst_dev_addr;
struct d40_desc *desc; struct d40_desc *desc;
unsigned long flags; unsigned long flags;
int ret; int ret;
@ -2253,11 +2218,13 @@ d40_prep_sg(struct dma_chan *dchan, struct scatterlist *sg_src,
desc = d40_prep_desc(chan, sg_src, sg_len, dma_flags); desc = d40_prep_desc(chan, sg_src, sg_len, dma_flags);
if (desc == NULL) if (desc == NULL)
goto err; goto unlock;
if (sg_next(&sg_src[sg_len - 1]) == sg_src) if (sg_next(&sg_src[sg_len - 1]) == sg_src)
desc->cyclic = true; desc->cyclic = true;
src_dev_addr = 0;
dst_dev_addr = 0;
if (direction == DMA_DEV_TO_MEM) if (direction == DMA_DEV_TO_MEM)
src_dev_addr = chan->runtime_addr; src_dev_addr = chan->runtime_addr;
else if (direction == DMA_MEM_TO_DEV) else if (direction == DMA_MEM_TO_DEV)
@ -2273,7 +2240,7 @@ d40_prep_sg(struct dma_chan *dchan, struct scatterlist *sg_src,
if (ret) { if (ret) {
chan_err(chan, "Failed to prepare %s sg job: %d\n", chan_err(chan, "Failed to prepare %s sg job: %d\n",
chan_is_logical(chan) ? "log" : "phy", ret); chan_is_logical(chan) ? "log" : "phy", ret);
goto err; goto free_desc;
} }
/* /*
@ -2285,10 +2252,9 @@ d40_prep_sg(struct dma_chan *dchan, struct scatterlist *sg_src,
spin_unlock_irqrestore(&chan->lock, flags); spin_unlock_irqrestore(&chan->lock, flags);
return &desc->txd; return &desc->txd;
free_desc:
err: d40_desc_free(chan, desc);
if (desc) unlock:
d40_desc_free(chan, desc);
spin_unlock_irqrestore(&chan->lock, flags); spin_unlock_irqrestore(&chan->lock, flags);
return NULL; return NULL;
} }
@ -2426,7 +2392,7 @@ static int d40_alloc_chan_resources(struct dma_chan *chan)
err = d40_config_memcpy(d40c); err = d40_config_memcpy(d40c);
if (err) { if (err) {
chan_err(d40c, "Failed to configure memcpy channel\n"); chan_err(d40c, "Failed to configure memcpy channel\n");
goto fail; goto mark_last_busy;
} }
} }
@ -2434,7 +2400,7 @@ static int d40_alloc_chan_resources(struct dma_chan *chan)
if (err) { if (err) {
chan_err(d40c, "Failed to allocate channel\n"); chan_err(d40c, "Failed to allocate channel\n");
 		d40c->configured = false;
-		goto fail;
+		goto mark_last_busy;
 	}
 	pm_runtime_get_sync(d40c->base->dev);
@@ -2468,7 +2434,7 @@ static int d40_alloc_chan_resources(struct dma_chan *chan)
 	 */
 	if (is_free_phy)
 		d40_config_write(d40c);
-fail:
+ mark_last_busy:
 	pm_runtime_mark_last_busy(d40c->base->dev);
 	pm_runtime_put_autosuspend(d40c->base->dev);
 	spin_unlock_irqrestore(&d40c->lock, flags);
@@ -2891,7 +2857,7 @@ static int __init d40_dmaengine_init(struct d40_base *base,
 	if (err) {
 		d40_err(base->dev, "Failed to register slave channels\n");
-		goto failure1;
+		goto exit;
 	}
 	d40_chan_init(base, &base->dma_memcpy, base->log_chans,
@@ -2908,7 +2874,7 @@ static int __init d40_dmaengine_init(struct d40_base *base,
 	if (err) {
 		d40_err(base->dev,
 			"Failed to register memcpy only channels\n");
-		goto failure2;
+		goto unregister_slave;
 	}
 	d40_chan_init(base, &base->dma_both, base->phy_chans,
@@ -2926,14 +2892,14 @@ static int __init d40_dmaengine_init(struct d40_base *base,
 	if (err) {
 		d40_err(base->dev,
 			"Failed to register logical and physical capable channels\n");
-		goto failure3;
+		goto unregister_memcpy;
 	}
 	return 0;
-failure3:
+ unregister_memcpy:
 	dma_async_device_unregister(&base->dma_memcpy);
-failure2:
+ unregister_slave:
 	dma_async_device_unregister(&base->dma_slave);
-failure1:
+ exit:
 	return err;
 }
@@ -3144,11 +3110,11 @@ static int __init d40_phy_res_init(struct d40_base *base)
 static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
 {
 	struct stedma40_platform_data *plat_data = dev_get_platdata(&pdev->dev);
-	struct clk *clk = NULL;
-	void __iomem *virtbase = NULL;
-	struct resource *res = NULL;
-	struct d40_base *base = NULL;
-	int num_log_chans = 0;
+	struct clk *clk;
+	void __iomem *virtbase;
+	struct resource *res;
+	struct d40_base *base;
+	int num_log_chans;
 	int num_phy_chans;
 	int num_memcpy_chans;
 	int clk_ret = -EINVAL;
@@ -3160,27 +3126,27 @@ static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
 	clk = clk_get(&pdev->dev, NULL);
 	if (IS_ERR(clk)) {
 		d40_err(&pdev->dev, "No matching clock found\n");
-		goto failure;
+		goto check_prepare_enabled;
 	}
 	clk_ret = clk_prepare_enable(clk);
 	if (clk_ret) {
 		d40_err(&pdev->dev, "Failed to prepare/enable clock\n");
-		goto failure;
+		goto disable_unprepare;
 	}
 	/* Get IO for DMAC base address */
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "base");
 	if (!res)
-		goto failure;
+		goto disable_unprepare;
 	if (request_mem_region(res->start, resource_size(res),
 			       D40_NAME " I/O base") == NULL)
-		goto failure;
+		goto release_region;
 	virtbase = ioremap(res->start, resource_size(res));
 	if (!virtbase)
-		goto failure;
+		goto release_region;
 	/* This is just a regular AMBA PrimeCell ID actually */
 	for (pid = 0, i = 0; i < 4; i++)
@@ -3192,13 +3158,13 @@ static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
 	if (cid != AMBA_CID) {
 		d40_err(&pdev->dev, "Unknown hardware! No PrimeCell ID\n");
-		goto failure;
+		goto unmap_io;
 	}
 	if (AMBA_MANF_BITS(pid) != AMBA_VENDOR_ST) {
 		d40_err(&pdev->dev, "Unknown designer! Got %x wanted %x\n",
 			AMBA_MANF_BITS(pid),
 			AMBA_VENDOR_ST);
-		goto failure;
+		goto unmap_io;
 	}
 	/*
 	 * HW revision:
@@ -3212,7 +3178,7 @@ static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
 	rev = AMBA_REV_BITS(pid);
 	if (rev < 2) {
 		d40_err(&pdev->dev, "hardware revision: %d is not supported", rev);
-		goto failure;
+		goto unmap_io;
 	}
 	/* The number of physical channels on this HW */
@@ -3238,7 +3204,7 @@ static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
 		       sizeof(struct d40_chan), GFP_KERNEL);
 	if (base == NULL)
-		goto failure;
+		goto unmap_io;
 	base->rev = rev;
 	base->clk = clk;
@@ -3283,65 +3249,66 @@ static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
 		base->gen_dmac.init_reg_size = ARRAY_SIZE(dma_init_reg_v4a);
 	}
-	base->phy_res = kzalloc(num_phy_chans * sizeof(struct d40_phy_res),
+	base->phy_res = kcalloc(num_phy_chans,
+				sizeof(*base->phy_res),
 				GFP_KERNEL);
 	if (!base->phy_res)
-		goto failure;
+		goto free_base;
-	base->lookup_phy_chans = kzalloc(num_phy_chans *
-					 sizeof(struct d40_chan *),
+	base->lookup_phy_chans = kcalloc(num_phy_chans,
+					 sizeof(*base->lookup_phy_chans),
 					 GFP_KERNEL);
 	if (!base->lookup_phy_chans)
-		goto failure;
+		goto free_phy_res;
-	base->lookup_log_chans = kzalloc(num_log_chans *
-					 sizeof(struct d40_chan *),
+	base->lookup_log_chans = kcalloc(num_log_chans,
+					 sizeof(*base->lookup_log_chans),
 					 GFP_KERNEL);
 	if (!base->lookup_log_chans)
-		goto failure;
+		goto free_phy_chans;
-	base->reg_val_backup_chan = kmalloc(base->num_phy_chans *
+	base->reg_val_backup_chan = kmalloc_array(base->num_phy_chans,
 					    sizeof(d40_backup_regs_chan),
 					    GFP_KERNEL);
 	if (!base->reg_val_backup_chan)
-		goto failure;
+		goto free_log_chans;
-	base->lcla_pool.alloc_map =
-		kzalloc(num_phy_chans * sizeof(struct d40_desc *)
-			* D40_LCLA_LINK_PER_EVENT_GRP, GFP_KERNEL);
+	base->lcla_pool.alloc_map = kcalloc(num_phy_chans
+					    * D40_LCLA_LINK_PER_EVENT_GRP,
+					    sizeof(*base->lcla_pool.alloc_map),
+					    GFP_KERNEL);
 	if (!base->lcla_pool.alloc_map)
-		goto failure;
+		goto free_backup_chan;
 	base->desc_slab = kmem_cache_create(D40_NAME, sizeof(struct d40_desc),
 					    0, SLAB_HWCACHE_ALIGN,
 					    NULL);
 	if (base->desc_slab == NULL)
-		goto failure;
+		goto free_map;
 	return base;
-failure:
+ free_map:
+	kfree(base->lcla_pool.alloc_map);
+ free_backup_chan:
+	kfree(base->reg_val_backup_chan);
+ free_log_chans:
+	kfree(base->lookup_log_chans);
+ free_phy_chans:
+	kfree(base->lookup_phy_chans);
+ free_phy_res:
+	kfree(base->phy_res);
+ free_base:
+	kfree(base);
+ unmap_io:
+	iounmap(virtbase);
+ release_region:
+	release_mem_region(res->start, resource_size(res));
+ check_prepare_enabled:
 	if (!clk_ret)
+ disable_unprepare:
 		clk_disable_unprepare(clk);
 	if (!IS_ERR(clk))
 		clk_put(clk);
-	if (virtbase)
-		iounmap(virtbase);
-	if (res)
-		release_mem_region(res->start,
-				   resource_size(res));
-	if (virtbase)
-		iounmap(virtbase);
-	if (base) {
-		kfree(base->lcla_pool.alloc_map);
-		kfree(base->reg_val_backup_chan);
-		kfree(base->lookup_log_chans);
-		kfree(base->lookup_phy_chans);
-		kfree(base->phy_res);
-		kfree(base);
-	}
 	return NULL;
 }
@@ -3404,20 +3371,18 @@ static int __init d40_lcla_allocate(struct d40_base *base)
 	struct d40_lcla_pool *pool = &base->lcla_pool;
 	unsigned long *page_list;
 	int i, j;
-	int ret = 0;
+	int ret;
 	/*
 	 * This is somewhat ugly. We need 8192 bytes that are 18 bit aligned,
 	 * To full fill this hardware requirement without wasting 256 kb
 	 * we allocate pages until we get an aligned one.
 	 */
-	page_list = kmalloc(sizeof(unsigned long) * MAX_LCLA_ALLOC_ATTEMPTS,
-			    GFP_KERNEL);
-	if (!page_list) {
-		ret = -ENOMEM;
-		goto failure;
-	}
+	page_list = kmalloc_array(MAX_LCLA_ALLOC_ATTEMPTS,
+				  sizeof(*page_list),
+				  GFP_KERNEL);
+	if (!page_list)
+		return -ENOMEM;
 	/* Calculating how many pages that are required */
 	base->lcla_pool.pages = SZ_1K * base->num_phy_chans / PAGE_SIZE;
@@ -3433,7 +3398,7 @@ static int __init d40_lcla_allocate(struct d40_base *base)
 			for (j = 0; j < i; j++)
 				free_pages(page_list[j], base->lcla_pool.pages);
-			goto failure;
+			goto free_page_list;
 		}
 		if ((virt_to_phys((void *)page_list[i]) &
@@ -3460,7 +3425,7 @@ static int __init d40_lcla_allocate(struct d40_base *base)
 						GFP_KERNEL);
 		if (!base->lcla_pool.base_unaligned) {
 			ret = -ENOMEM;
-			goto failure;
+			goto free_page_list;
 		}
 		base->lcla_pool.base = PTR_ALIGN(base->lcla_pool.base_unaligned,
@@ -3473,12 +3438,13 @@ static int __init d40_lcla_allocate(struct d40_base *base)
 	if (dma_mapping_error(base->dev, pool->dma_addr)) {
 		pool->dma_addr = 0;
 		ret = -ENOMEM;
-		goto failure;
+		goto free_page_list;
 	}
 	writel(virt_to_phys(base->lcla_pool.base),
 	       base->virtbase + D40_DREG_LCLA);
-failure:
+	ret = 0;
+ free_page_list:
 	kfree(page_list);
 	return ret;
 }
@@ -3490,9 +3456,7 @@ static int __init d40_of_probe(struct platform_device *pdev,
 	int num_phy = 0, num_memcpy = 0, num_disabled = 0;
 	const __be32 *list;
-	pdata = devm_kzalloc(&pdev->dev,
-			     sizeof(struct stedma40_platform_data),
-			     GFP_KERNEL);
+	pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
 	if (!pdata)
 		return -ENOMEM;
@@ -3574,7 +3538,7 @@ static int __init d40_probe(struct platform_device *pdev)
 	if (!res) {
 		ret = -ENOENT;
 		d40_err(&pdev->dev, "No \"lcpa\" memory resource\n");
-		goto failure;
+		goto destroy_cache;
 	}
 	base->lcpa_size = resource_size(res);
 	base->phy_lcpa = res->start;
@@ -3583,7 +3547,7 @@ static int __init d40_probe(struct platform_device *pdev)
 			       D40_NAME " I/O lcpa") == NULL) {
 		ret = -EBUSY;
 		d40_err(&pdev->dev, "Failed to request LCPA region %pR\n", res);
-		goto failure;
+		goto destroy_cache;
 	}
 	/* We make use of ESRAM memory for this. */
@@ -3599,7 +3563,7 @@ static int __init d40_probe(struct platform_device *pdev)
 	if (!base->lcpa_base) {
 		ret = -ENOMEM;
 		d40_err(&pdev->dev, "Failed to ioremap LCPA region\n");
-		goto failure;
+		goto destroy_cache;
 	}
 	/* If lcla has to be located in ESRAM we don't need to allocate */
 	if (base->plat_data->use_esram_lcla) {
@@ -3609,14 +3573,14 @@ static int __init d40_probe(struct platform_device *pdev)
 			ret = -ENOENT;
 			d40_err(&pdev->dev,
 				"No \"lcla_esram\" memory resource\n");
-			goto failure;
+			goto destroy_cache;
 		}
 		base->lcla_pool.base = ioremap(res->start,
 					       resource_size(res));
 		if (!base->lcla_pool.base) {
 			ret = -ENOMEM;
 			d40_err(&pdev->dev, "Failed to ioremap LCLA region\n");
-			goto failure;
+			goto destroy_cache;
 		}
 		writel(res->start, base->virtbase + D40_DREG_LCLA);
@@ -3624,7 +3588,7 @@ static int __init d40_probe(struct platform_device *pdev)
 		ret = d40_lcla_allocate(base);
 		if (ret) {
 			d40_err(&pdev->dev, "Failed to allocate LCLA area\n");
-			goto failure;
+			goto destroy_cache;
 		}
 	}
@@ -3635,7 +3599,7 @@ static int __init d40_probe(struct platform_device *pdev)
 	ret = request_irq(base->irq, d40_handle_interrupt, 0, D40_NAME, base);
 	if (ret) {
 		d40_err(&pdev->dev, "No IRQ defined\n");
-		goto failure;
+		goto destroy_cache;
 	}
 	if (base->plat_data->use_esram_lcla) {
@@ -3645,7 +3609,7 @@ static int __init d40_probe(struct platform_device *pdev)
 			d40_err(&pdev->dev, "Failed to get lcpa_regulator\n");
 			ret = PTR_ERR(base->lcpa_regulator);
 			base->lcpa_regulator = NULL;
-			goto failure;
+			goto destroy_cache;
 		}
 		ret = regulator_enable(base->lcpa_regulator);
@@ -3654,7 +3618,7 @@ static int __init d40_probe(struct platform_device *pdev)
 				"Failed to enable lcpa_regulator\n");
 			regulator_put(base->lcpa_regulator);
 			base->lcpa_regulator = NULL;
-			goto failure;
+			goto destroy_cache;
 		}
 	}
@@ -3669,13 +3633,13 @@ static int __init d40_probe(struct platform_device *pdev)
 	ret = d40_dmaengine_init(base, num_reserved_chans);
 	if (ret)
-		goto failure;
+		goto destroy_cache;
 	base->dev->dma_parms = &base->dma_parms;
 	ret = dma_set_max_seg_size(base->dev, STEDMA40_MAX_SEG_SIZE);
 	if (ret) {
 		d40_err(&pdev->dev, "Failed to set dma max seg size\n");
-		goto failure;
+		goto destroy_cache;
 	}
 	d40_hw_init(base);
@@ -3689,8 +3653,7 @@ static int __init d40_probe(struct platform_device *pdev)
 	dev_info(base->dev, "initialized\n");
 	return 0;
-
-failure:
+ destroy_cache:
 	kmem_cache_destroy(base->desc_slab);
 	if (base->virtbase)
 		iounmap(base->virtbase);
@@ -3732,7 +3695,7 @@ failure:
 	kfree(base->lookup_phy_chans);
 	kfree(base->phy_res);
 	kfree(base);
-report_failure:
+ report_failure:
 	d40_err(&pdev->dev, "probe failed\n");
 	return ret;
 }


@@ -954,7 +954,7 @@ static void stm32_dma_desc_free(struct virt_dma_desc *vdesc)
 	kfree(container_of(vdesc, struct stm32_dma_desc, vdesc));
 }
-void stm32_dma_set_config(struct stm32_dma_chan *chan,
+static void stm32_dma_set_config(struct stm32_dma_chan *chan,
 			  struct stm32_dma_cfg *cfg)
 {
 	stm32_dma_clear_reg(&chan->chan_reg);


@@ -1011,6 +1011,12 @@ static struct sun6i_dma_config sun8i_a23_dma_cfg = {
 	.nr_max_vchans = 37,
 };
+static struct sun6i_dma_config sun8i_a83t_dma_cfg = {
+	.nr_max_channels = 8,
+	.nr_max_requests = 28,
+	.nr_max_vchans = 39,
+};
+
 /*
  * The H3 has 12 physical channels, a maximum DRQ port id of 27,
  * and a total of 34 usable source and destination endpoints.
@@ -1025,6 +1031,7 @@ static struct sun6i_dma_config sun8i_h3_dma_cfg = {
 static const struct of_device_id sun6i_dma_match[] = {
 	{ .compatible = "allwinner,sun6i-a31-dma", .data = &sun6i_a31_dma_cfg },
 	{ .compatible = "allwinner,sun8i-a23-dma", .data = &sun8i_a23_dma_cfg },
+	{ .compatible = "allwinner,sun8i-a83t-dma", .data = &sun8i_a83t_dma_cfg },
 	{ .compatible = "allwinner,sun8i-h3-dma", .data = &sun8i_h3_dma_cfg },
 	{ /* sentinel */ }
 };


@@ -655,8 +655,7 @@ static void handle_cont_sngl_cycle_dma_done(struct tegra_dma_channel *tdc,
 static void tegra_dma_tasklet(unsigned long data)
 {
 	struct tegra_dma_channel *tdc = (struct tegra_dma_channel *)data;
-	dma_async_tx_callback callback = NULL;
-	void *callback_param = NULL;
+	struct dmaengine_desc_callback cb;
 	struct tegra_dma_desc *dma_desc;
 	unsigned long flags;
 	int cb_count;
@@ -666,13 +665,12 @@ static void tegra_dma_tasklet(unsigned long data)
 		dma_desc = list_first_entry(&tdc->cb_desc,
 					typeof(*dma_desc), cb_node);
 		list_del(&dma_desc->cb_node);
-		callback = dma_desc->txd.callback;
-		callback_param = dma_desc->txd.callback_param;
+		dmaengine_desc_get_callback(&dma_desc->txd, &cb);
 		cb_count = dma_desc->cb_count;
 		dma_desc->cb_count = 0;
 		spin_unlock_irqrestore(&tdc->lock, flags);
-		while (cb_count-- && callback)
-			callback(callback_param);
+		while (cb_count--)
+			dmaengine_desc_callback_invoke(&cb, NULL);
 		spin_lock_irqsave(&tdc->lock, flags);
 	}
 	spin_unlock_irqrestore(&tdc->lock, flags);
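
Several of the driver conversions in this pull (tegra20-apb above, and timb_dma, txx9dmac, virt-dma, xilinx below) follow the same pattern: snapshot the descriptor callback with dmaengine_desc_get_callback() and fire it via dmaengine_desc_callback_invoke(). A minimal sketch of that completion path, assuming the drivers/dma private helpers; my_chan_complete() is an illustrative name, not a function from this series:

/* Sketch only: my_chan_complete() is made up; the helpers live in the
 * drivers/dma private "dmaengine.h" header.
 */
#include "dmaengine.h"

static void my_chan_complete(struct dma_async_tx_descriptor *txd)
{
	struct dmaengine_desc_callback cb;

	dma_cookie_complete(txd);

	/* Snapshot callback + param while the channel lock is still held. */
	dmaengine_desc_get_callback(txd, &cb);

	/* Invoke outside the lock; NULL means "no result to report". */
	dmaengine_desc_callback_invoke(&cb, NULL);
}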


@@ -670,7 +670,6 @@ static int tegra_adma_probe(struct platform_device *pdev)
 	const struct tegra_adma_chip_data *cdata;
 	struct tegra_adma *tdma;
 	struct resource *res;
-	struct clk *clk;
 	int ret, i;
 	cdata = of_device_get_match_data(&pdev->dev);
@@ -697,18 +696,9 @@ static int tegra_adma_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
-	clk = clk_get(&pdev->dev, "d_audio");
-	if (IS_ERR(clk)) {
-		dev_err(&pdev->dev, "ADMA clock not found\n");
-		ret = PTR_ERR(clk);
+	ret = of_pm_clk_add_clk(&pdev->dev, "d_audio");
+	if (ret)
 		goto clk_destroy;
-	}
-
-	ret = pm_clk_add_clk(&pdev->dev, clk);
-	if (ret) {
-		clk_put(clk);
-		goto clk_destroy;
-	}
 	pm_runtime_enable(&pdev->dev);


@@ -18,15 +18,19 @@
 #define TI_XBAR_DRA7		0
 #define TI_XBAR_AM335X		1
+static const u32 ti_xbar_type[] = {
+	[TI_XBAR_DRA7] = TI_XBAR_DRA7,
+	[TI_XBAR_AM335X] = TI_XBAR_AM335X,
+};
+
 static const struct of_device_id ti_dma_xbar_match[] = {
 	{
 		.compatible = "ti,dra7-dma-crossbar",
-		.data = (void *)TI_XBAR_DRA7,
+		.data = &ti_xbar_type[TI_XBAR_DRA7],
 	},
 	{
 		.compatible = "ti,am335x-edma-crossbar",
-		.data = (void *)TI_XBAR_AM335X,
+		.data = &ti_xbar_type[TI_XBAR_AM335X],
 	},
 	{},
 };
@@ -190,9 +194,6 @@ static int ti_am335x_xbar_probe(struct platform_device *pdev)
 #define TI_DRA7_XBAR_OUTPUTS	127
 #define TI_DRA7_XBAR_INPUTS	256
-#define TI_XBAR_EDMA_OFFSET	0
-#define TI_XBAR_SDMA_OFFSET	1
-
 struct ti_dra7_xbar_data {
 	void __iomem *iomem;
@@ -280,18 +281,25 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
 	return map;
 }
+#define TI_XBAR_EDMA_OFFSET	0
+#define TI_XBAR_SDMA_OFFSET	1
+
+static const u32 ti_dma_offset[] = {
+	[TI_XBAR_EDMA_OFFSET] = 0,
+	[TI_XBAR_SDMA_OFFSET] = 1,
+};
+
 static const struct of_device_id ti_dra7_master_match[] = {
 	{
 		.compatible = "ti,omap4430-sdma",
-		.data = (void *)TI_XBAR_SDMA_OFFSET,
+		.data = &ti_dma_offset[TI_XBAR_SDMA_OFFSET],
 	},
 	{
 		.compatible = "ti,edma3",
-		.data = (void *)TI_XBAR_EDMA_OFFSET,
+		.data = &ti_dma_offset[TI_XBAR_EDMA_OFFSET],
 	},
 	{
 		.compatible = "ti,edma3-tpcc",
-		.data = (void *)TI_XBAR_EDMA_OFFSET,
+		.data = &ti_dma_offset[TI_XBAR_EDMA_OFFSET],
 	},
 	{},
 };
@@ -311,7 +319,7 @@ static int ti_dra7_xbar_probe(struct platform_device *pdev)
 	struct property *prop;
 	struct resource *res;
 	u32 safe_val;
-	size_t sz;
+	int sz;
 	void __iomem *iomem;
 	int i, ret;
@@ -395,7 +403,7 @@ static int ti_dra7_xbar_probe(struct platform_device *pdev)
 	xbar->dmarouter.dev = &pdev->dev;
 	xbar->dmarouter.route_free = ti_dra7_xbar_free;
-	xbar->dma_offset = (u32)match->data;
+	xbar->dma_offset = *(u32 *)match->data;
 	mutex_init(&xbar->mutex);
 	platform_set_drvdata(pdev, xbar);
@@ -428,7 +436,7 @@ static int ti_dma_xbar_probe(struct platform_device *pdev)
 	if (unlikely(!match))
 		return -EINVAL;
-	switch ((u32)match->data) {
+	switch (*(u32 *)match->data) {
 	case TI_XBAR_DRA7:
 		ret = ti_dra7_xbar_probe(pdev);
 		break;
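
The crossbar change above stops stuffing integers into the of_device_id .data pointer and instead points .data at const u32 storage, which the probe path then dereferences with *(u32 *)match->data. A hedged sketch of that pattern with made-up "foo" names (not a real binding or driver):

/* Illustrative only: "vendor,foo" and the foo_* symbols are not real. */
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

static const u32 foo_variant[] = { 0, 1 };

static const struct of_device_id foo_match[] = {
	{ .compatible = "vendor,foo-a", .data = &foo_variant[0] },
	{ .compatible = "vendor,foo-b", .data = &foo_variant[1] },
	{},
};

static int foo_probe(struct platform_device *pdev)
{
	const struct of_device_id *match;

	match = of_match_node(foo_match, pdev->dev.of_node);
	if (!match)
		return -EINVAL;

	/* Dereference the pointer rather than casting it to an integer. */
	dev_info(&pdev->dev, "variant %u\n", *(const u32 *)match->data);
	return 0;
}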


@@ -226,8 +226,7 @@ static void __td_start_dma(struct timb_dma_chan *td_chan)
 static void __td_finish(struct timb_dma_chan *td_chan)
 {
-	dma_async_tx_callback callback;
-	void *param;
+	struct dmaengine_desc_callback cb;
 	struct dma_async_tx_descriptor *txd;
 	struct timb_dma_desc *td_desc;
@@ -252,8 +251,7 @@ static void __td_finish(struct timb_dma_chan *td_chan)
 	dma_cookie_complete(txd);
 	td_chan->ongoing = false;
-	callback = txd->callback;
-	param = txd->callback_param;
+	dmaengine_desc_get_callback(txd, &cb);
 	list_move(&td_desc->desc_node, &td_chan->free_list);
@@ -262,8 +260,7 @@ static void __td_finish(struct timb_dma_chan *td_chan)
 	 * The API requires that no submissions are done from a
 	 * callback, so we don't need to drop the lock here
 	 */
-	if (callback)
-		callback(param);
+	dmaengine_desc_callback_invoke(&cb, NULL);
 }
 static u32 __td_ier_mask(struct timb_dma *td)


@@ -403,16 +403,14 @@ static void
 txx9dmac_descriptor_complete(struct txx9dmac_chan *dc,
 			     struct txx9dmac_desc *desc)
 {
-	dma_async_tx_callback callback;
-	void *param;
+	struct dmaengine_desc_callback cb;
 	struct dma_async_tx_descriptor *txd = &desc->txd;
 	dev_vdbg(chan2dev(&dc->chan), "descriptor %u %p complete\n",
 		 txd->cookie, desc);
 	dma_cookie_complete(txd);
-	callback = txd->callback;
-	param = txd->callback_param;
+	dmaengine_desc_get_callback(txd, &cb);
 	txx9dmac_sync_desc_for_cpu(dc, desc);
 	list_splice_init(&desc->tx_list, &dc->free_list);
@@ -423,8 +421,7 @@ txx9dmac_descriptor_complete(struct txx9dmac_chan *dc,
 	 * The API requires that no submissions are done from a
 	 * callback, so we don't need to drop the lock here
 	 */
-	if (callback)
-		callback(param);
+	dmaengine_desc_callback_invoke(&cb, NULL);
 	dma_run_dependencies(txd);
 }


@@ -87,8 +87,7 @@ static void vchan_complete(unsigned long arg)
 {
 	struct virt_dma_chan *vc = (struct virt_dma_chan *)arg;
 	struct virt_dma_desc *vd;
-	dma_async_tx_callback cb = NULL;
-	void *cb_data = NULL;
+	struct dmaengine_desc_callback cb;
 	LIST_HEAD(head);
 	spin_lock_irq(&vc->lock);
@@ -96,18 +95,17 @@ static void vchan_complete(unsigned long arg)
 	vd = vc->cyclic;
 	if (vd) {
 		vc->cyclic = NULL;
-		cb = vd->tx.callback;
-		cb_data = vd->tx.callback_param;
+		dmaengine_desc_get_callback(&vd->tx, &cb);
+	} else {
+		memset(&cb, 0, sizeof(cb));
 	}
 	spin_unlock_irq(&vc->lock);
-	if (cb)
-		cb(cb_data);
+	dmaengine_desc_callback_invoke(&cb, NULL);
 	while (!list_empty(&head)) {
 		vd = list_first_entry(&head, struct virt_dma_desc, node);
-		cb = vd->tx.callback;
-		cb_data = vd->tx.callback_param;
+		dmaengine_desc_get_callback(&vd->tx, &cb);
 		list_del(&vd->node);
 		if (dmaengine_desc_test_reuse(&vd->tx))
@@ -115,8 +113,7 @@ static void vchan_complete(unsigned long arg)
 		else
 			vc->desc_free(vd);
-		if (cb)
-			cb(cb_data);
+		dmaengine_desc_callback_invoke(&cb, NULL);
 	}
 }


@@ -45,6 +45,8 @@ static inline struct virt_dma_chan *to_virt_chan(struct dma_chan *chan)
 void vchan_dma_desc_free_list(struct virt_dma_chan *vc, struct list_head *head);
 void vchan_init(struct virt_dma_chan *vc, struct dma_device *dmadev);
 struct virt_dma_desc *vchan_find_desc(struct virt_dma_chan *, dma_cookie_t);
+extern dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *);
+extern int vchan_tx_desc_free(struct dma_async_tx_descriptor *);
 /**
  * vchan_tx_prep - prepare a descriptor
@@ -55,8 +57,6 @@ struct virt_dma_desc *vchan_find_desc(struct virt_dma_chan *, dma_cookie_t);
 static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan *vc,
 	struct virt_dma_desc *vd, unsigned long tx_flags)
 {
-	extern dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *);
-	extern int vchan_tx_desc_free(struct dma_async_tx_descriptor *);
 	unsigned long flags;
 	dma_async_tx_descriptor_init(&vd->tx, &vc->chan);
@@ -123,10 +123,8 @@ static inline void vchan_cyclic_callback(struct virt_dma_desc *vd)
  */
 static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc)
 {
-	if (list_empty(&vc->desc_issued))
-		return NULL;
-
-	return list_first_entry(&vc->desc_issued, struct virt_dma_desc, node);
+	return list_first_entry_or_null(&vc->desc_issued,
+					struct virt_dma_desc, node);
 }
 /**


@@ -606,12 +606,10 @@ static void xgene_dma_run_tx_complete_actions(struct xgene_dma_chan *chan,
 		return;
 	dma_cookie_complete(tx);
+	dma_descriptor_unmap(tx);
 	/* Run the link descriptor callback function */
-	if (tx->callback)
-		tx->callback(tx->callback_param);
-
-	dma_descriptor_unmap(tx);
+	dmaengine_desc_get_callback_invoke(tx, NULL);
 	/* Run any dependencies */
 	dma_run_dependencies(tx);


@@ -755,8 +755,7 @@ static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
 	spin_lock_irqsave(&chan->lock, flags);
 	list_for_each_entry_safe(desc, next, &chan->done_list, node) {
-		dma_async_tx_callback callback;
-		void *callback_param;
+		struct dmaengine_desc_callback cb;
 		if (desc->cyclic) {
 			xilinx_dma_chan_handle_cyclic(chan, desc, &flags);
@@ -767,11 +766,10 @@ static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
 		list_del(&desc->node);
 		/* Run the link descriptor callback function */
-		callback = desc->async_tx.callback;
-		callback_param = desc->async_tx.callback_param;
-		if (callback) {
+		dmaengine_desc_get_callback(&desc->async_tx, &cb);
+		if (dmaengine_desc_callback_valid(&cb)) {
 			spin_unlock_irqrestore(&chan->lock, flags);
-			callback(callback_param);
+			dmaengine_desc_callback_invoke(&cb, NULL);
 			spin_lock_irqsave(&chan->lock, flags);
 		}


@@ -102,13 +102,16 @@ struct ntb_queue_entry {
 	void *buf;
 	unsigned int len;
 	unsigned int flags;
+	int retries;
+	int errors;
+	unsigned int tx_index;
+	unsigned int rx_index;
 	struct ntb_transport_qp *qp;
 	union {
 		struct ntb_payload_header __iomem *tx_hdr;
 		struct ntb_payload_header *rx_hdr;
 	};
-	unsigned int index;
 };
 struct ntb_rx_info {
@@ -259,6 +262,12 @@ enum {
 static void ntb_transport_rxc_db(unsigned long data);
 static const struct ntb_ctx_ops ntb_transport_ops;
 static struct ntb_client ntb_transport_client;
+static int ntb_async_tx_submit(struct ntb_transport_qp *qp,
+			       struct ntb_queue_entry *entry);
+static void ntb_memcpy_tx(struct ntb_queue_entry *entry, void __iomem *offset);
+static int ntb_async_rx_submit(struct ntb_queue_entry *entry, void *offset);
+static void ntb_memcpy_rx(struct ntb_queue_entry *entry, void *offset);
+
 static int ntb_transport_bus_match(struct device *dev,
 				   struct device_driver *drv)
@@ -1229,7 +1238,7 @@ static void ntb_complete_rxc(struct ntb_transport_qp *qp)
 			break;
 		entry->rx_hdr->flags = 0;
-		iowrite32(entry->index, &qp->rx_info->entry);
+		iowrite32(entry->rx_index, &qp->rx_info->entry);
 		cb_data = entry->cb_data;
 		len = entry->len;
@@ -1247,10 +1256,36 @@
 	spin_unlock_irqrestore(&qp->ntb_rx_q_lock, irqflags);
 }
-static void ntb_rx_copy_callback(void *data)
+static void ntb_rx_copy_callback(void *data,
+				 const struct dmaengine_result *res)
 {
 	struct ntb_queue_entry *entry = data;
+	/* we need to check DMA results if we are using DMA */
+	if (res) {
+		enum dmaengine_tx_result dma_err = res->result;
+
+		switch (dma_err) {
+		case DMA_TRANS_READ_FAILED:
+		case DMA_TRANS_WRITE_FAILED:
+			entry->errors++;
+		case DMA_TRANS_ABORTED:
+		{
+			struct ntb_transport_qp *qp = entry->qp;
+			void *offset = qp->rx_buff + qp->rx_max_frame *
+					qp->rx_index;
+
+			ntb_memcpy_rx(entry, offset);
+			qp->rx_memcpy++;
+			return;
+		}
+
+		case DMA_TRANS_NOERROR:
+		default:
+			break;
+		}
+	}
+
 	entry->flags |= DESC_DONE_FLAG;
 	ntb_complete_rxc(entry->qp);
@@ -1266,10 +1301,10 @@ static void ntb_memcpy_rx(struct ntb_queue_entry *entry, void *offset)
 	/* Ensure that the data is fully copied out before clearing the flag */
 	wmb();
-	ntb_rx_copy_callback(entry);
+	ntb_rx_copy_callback(entry, NULL);
 }
-static void ntb_async_rx(struct ntb_queue_entry *entry, void *offset)
+static int ntb_async_rx_submit(struct ntb_queue_entry *entry, void *offset)
 {
 	struct dma_async_tx_descriptor *txd;
 	struct ntb_transport_qp *qp = entry->qp;
@@ -1282,13 +1317,6 @@ static void ntb_async_rx(struct ntb_queue_entry *entry, void *offset)
 	int retries = 0;
 	len = entry->len;
-
-	if (!chan)
-		goto err;
-
-	if (len < copy_bytes)
-		goto err;
 	device = chan->device;
 	pay_off = (size_t)offset & ~PAGE_MASK;
 	buff_off = (size_t)buf & ~PAGE_MASK;
@@ -1316,7 +1344,8 @@ static void ntb_async_rx(struct ntb_queue_entry *entry, void *offset)
 	unmap->from_cnt = 1;
 	for (retries = 0; retries < DMA_RETRIES; retries++) {
-		txd = device->device_prep_dma_memcpy(chan, unmap->addr[1],
+		txd = device->device_prep_dma_memcpy(chan,
+						     unmap->addr[1],
 						     unmap->addr[0], len,
 						     DMA_PREP_INTERRUPT);
 		if (txd)
@@ -1331,7 +1360,7 @@ static void ntb_async_rx(struct ntb_queue_entry *entry, void *offset)
 		goto err_get_unmap;
 	}
-	txd->callback = ntb_rx_copy_callback;
+	txd->callback_result = ntb_rx_copy_callback;
 	txd->callback_param = entry;
 	dma_set_unmap(txd, unmap);
@@ -1345,12 +1374,37 @@ static void ntb_async_rx(struct ntb_queue_entry *entry, void *offset)
 	qp->rx_async++;
-	return;
+	return 0;
 err_set_unmap:
 	dmaengine_unmap_put(unmap);
 err_get_unmap:
 	dmaengine_unmap_put(unmap);
+err:
+	return -ENXIO;
+}
+
+static void ntb_async_rx(struct ntb_queue_entry *entry, void *offset)
+{
+	struct ntb_transport_qp *qp = entry->qp;
+	struct dma_chan *chan = qp->rx_dma_chan;
+	int res;
+
+	if (!chan)
+		goto err;
+
+	if (entry->len < copy_bytes)
+		goto err;
+
+	res = ntb_async_rx_submit(entry, offset);
+	if (res < 0)
+		goto err;
+
+	if (!entry->retries)
+		qp->rx_async++;
+
+	return;
+
 err:
 	ntb_memcpy_rx(entry, offset);
 	qp->rx_memcpy++;
@@ -1397,7 +1451,7 @@ static int ntb_process_rxc(struct ntb_transport_qp *qp)
 	}
 	entry->rx_hdr = hdr;
-	entry->index = qp->rx_index;
+	entry->rx_index = qp->rx_index;
 	if (hdr->len > entry->len) {
 		dev_dbg(&qp->ndev->pdev->dev,
@@ -1467,12 +1521,39 @@ static void ntb_transport_rxc_db(unsigned long data)
 	}
 }
-static void ntb_tx_copy_callback(void *data)
+static void ntb_tx_copy_callback(void *data,
+				 const struct dmaengine_result *res)
 {
 	struct ntb_queue_entry *entry = data;
 	struct ntb_transport_qp *qp = entry->qp;
 	struct ntb_payload_header __iomem *hdr = entry->tx_hdr;
+	/* we need to check DMA results if we are using DMA */
+	if (res) {
+		enum dmaengine_tx_result dma_err = res->result;
+
+		switch (dma_err) {
+		case DMA_TRANS_READ_FAILED:
+		case DMA_TRANS_WRITE_FAILED:
+			entry->errors++;
+		case DMA_TRANS_ABORTED:
+		{
+			void __iomem *offset =
+				qp->tx_mw + qp->tx_max_frame *
+				entry->tx_index;
+
+			/* resubmit via CPU */
+			ntb_memcpy_tx(entry, offset);
+			qp->tx_memcpy++;
+			return;
+		}
+
+		case DMA_TRANS_NOERROR:
+		default:
+			break;
+		}
+	}
+
 	iowrite32(entry->flags | DESC_DONE_FLAG, &hdr->flags);
 	ntb_peer_db_set(qp->ndev, BIT_ULL(qp->qp_num));
@@ -1507,40 +1588,25 @@ static void ntb_memcpy_tx(struct ntb_queue_entry *entry, void __iomem *offset)
 	/* Ensure that the data is fully copied out before setting the flags */
 	wmb();
-	ntb_tx_copy_callback(entry);
+	ntb_tx_copy_callback(entry, NULL);
 }
-static void ntb_async_tx(struct ntb_transport_qp *qp,
+static int ntb_async_tx_submit(struct ntb_transport_qp *qp,
 			 struct ntb_queue_entry *entry)
 {
-	struct ntb_payload_header __iomem *hdr;
 	struct dma_async_tx_descriptor *txd;
 	struct dma_chan *chan = qp->tx_dma_chan;
 	struct dma_device *device;
+	size_t len = entry->len;
+	void *buf = entry->buf;
 	size_t dest_off, buff_off;
 	struct dmaengine_unmap_data *unmap;
 	dma_addr_t dest;
 	dma_cookie_t cookie;
-	void __iomem *offset;
-	size_t len = entry->len;
-	void *buf = entry->buf;
 	int retries = 0;
-	offset = qp->tx_mw + qp->tx_max_frame * qp->tx_index;
-	hdr = offset + qp->tx_max_frame - sizeof(struct ntb_payload_header);
-	entry->tx_hdr = hdr;
-
-	iowrite32(entry->len, &hdr->len);
-	iowrite32((u32)qp->tx_pkts, &hdr->ver);
-
-	if (!chan)
-		goto err;
-
-	if (len < copy_bytes)
-		goto err;
 	device = chan->device;
-	dest = qp->tx_mw_phys + qp->tx_max_frame * qp->tx_index;
+	dest = qp->tx_mw_phys + qp->tx_max_frame * entry->tx_index;
 	buff_off = (size_t)buf & ~PAGE_MASK;
 	dest_off = (size_t)dest & ~PAGE_MASK;
@@ -1560,8 +1626,9 @@ static void ntb_async_tx(struct ntb_transport_qp *qp,
 	unmap->to_cnt = 1;
 	for (retries = 0; retries < DMA_RETRIES; retries++) {
-		txd = device->device_prep_dma_memcpy(chan, dest, unmap->addr[0],
-						     len, DMA_PREP_INTERRUPT);
+		txd = device->device_prep_dma_memcpy(chan, dest,
+						     unmap->addr[0], len,
+						     DMA_PREP_INTERRUPT);
 		if (txd)
 			break;
@@ -1574,7 +1641,7 @@ static void ntb_async_tx(struct ntb_transport_qp *qp,
 		goto err_get_unmap;
 	}
-	txd->callback = ntb_tx_copy_callback;
+	txd->callback_result = ntb_tx_copy_callback;
 	txd->callback_param = entry;
 	dma_set_unmap(txd, unmap);
@@ -1585,13 +1652,47 @@ static void ntb_async_tx(struct ntb_transport_qp *qp,
 	dmaengine_unmap_put(unmap);
 	dma_async_issue_pending(chan);
-	qp->tx_async++;
-	return;
+	return 0;
 err_set_unmap:
 	dmaengine_unmap_put(unmap);
 err_get_unmap:
 	dmaengine_unmap_put(unmap);
+err:
+	return -ENXIO;
+}
+
+static void ntb_async_tx(struct ntb_transport_qp *qp,
+			 struct ntb_queue_entry *entry)
+{
+	struct ntb_payload_header __iomem *hdr;
+	struct dma_chan *chan = qp->tx_dma_chan;
+	void __iomem *offset;
+	int res;
+
+	entry->tx_index = qp->tx_index;
+	offset = qp->tx_mw + qp->tx_max_frame * entry->tx_index;
+	hdr = offset + qp->tx_max_frame - sizeof(struct ntb_payload_header);
+	entry->tx_hdr = hdr;
+
+	iowrite32(entry->len, &hdr->len);
+	iowrite32((u32)qp->tx_pkts, &hdr->ver);
+
+	if (!chan)
+		goto err;
+
+	if (entry->len < copy_bytes)
+		goto err;
+
+	res = ntb_async_tx_submit(qp, entry);
+	if (res < 0)
+		goto err;
+
+	if (!entry->retries)
+		qp->tx_async++;
+
+	return;
+
 err:
 	ntb_memcpy_tx(entry, offset);
 	qp->tx_memcpy++;
@@ -1928,6 +2029,9 @@ int ntb_transport_rx_enqueue(struct ntb_transport_qp *qp, void *cb, void *data,
 	entry->buf = data;
 	entry->len = len;
 	entry->flags = 0;
+	entry->retries = 0;
+	entry->errors = 0;
+	entry->rx_index = 0;
 	ntb_list_add(&qp->ntb_rx_q_lock, &entry->entry, &qp->rx_pend_q);
@@ -1970,6 +2074,9 @@ int ntb_transport_tx_enqueue(struct ntb_transport_qp *qp, void *cb, void *data,
 	entry->buf = data;
 	entry->len = len;
 	entry->flags = 0;
+	entry->errors = 0;
+	entry->retries = 0;
+	entry->tx_index = 0;
 	rc = ntb_process_tx(qp, entry);
 	if (rc)


@@ -56,6 +56,13 @@ extern void debug_dma_alloc_coherent(struct device *dev, size_t size,
 extern void debug_dma_free_coherent(struct device *dev, size_t size,
 				    void *virt, dma_addr_t addr);
+extern void debug_dma_map_resource(struct device *dev, phys_addr_t addr,
+				   size_t size, int direction,
+				   dma_addr_t dma_addr);
+
+extern void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr,
+				     size_t size, int direction);
+
 extern void debug_dma_sync_single_for_cpu(struct device *dev,
 					  dma_addr_t dma_handle, size_t size,
 					  int direction);
@@ -141,6 +148,18 @@ static inline void debug_dma_free_coherent(struct device *dev, size_t size,
 {
 }
+static inline void debug_dma_map_resource(struct device *dev, phys_addr_t addr,
+					  size_t size, int direction,
+					  dma_addr_t dma_addr)
+{
+}
+
+static inline void debug_dma_unmap_resource(struct device *dev,
+					    dma_addr_t dma_addr, size_t size,
+					    int direction)
+{
+}
+
 static inline void debug_dma_sync_single_for_cpu(struct device *dev,
 						 dma_addr_t dma_handle,
 						 size_t size, int direction)


@@ -95,6 +95,12 @@ struct dma_map_ops {
 			 struct scatterlist *sg, int nents,
 			 enum dma_data_direction dir,
 			 unsigned long attrs);
+	dma_addr_t (*map_resource)(struct device *dev, phys_addr_t phys_addr,
+				   size_t size, enum dma_data_direction dir,
+				   unsigned long attrs);
+	void (*unmap_resource)(struct device *dev, dma_addr_t dma_handle,
+			       size_t size, enum dma_data_direction dir,
+			       unsigned long attrs);
 	void (*sync_single_for_cpu)(struct device *dev,
 				    dma_addr_t dma_handle, size_t size,
 				    enum dma_data_direction dir);
@@ -258,6 +264,41 @@ static inline void dma_unmap_page(struct device *dev, dma_addr_t addr,
 	debug_dma_unmap_page(dev, addr, size, dir, false);
 }
+static inline dma_addr_t dma_map_resource(struct device *dev,
+					  phys_addr_t phys_addr,
+					  size_t size,
+					  enum dma_data_direction dir,
+					  unsigned long attrs)
+{
+	struct dma_map_ops *ops = get_dma_ops(dev);
+	dma_addr_t addr;
+
+	BUG_ON(!valid_dma_direction(dir));
+
+	/* Don't allow RAM to be mapped */
+	BUG_ON(pfn_valid(PHYS_PFN(phys_addr)));
+
+	addr = phys_addr;
+	if (ops->map_resource)
+		addr = ops->map_resource(dev, phys_addr, size, dir, attrs);
+
+	debug_dma_map_resource(dev, phys_addr, size, dir, addr);
+
+	return addr;
+}
+
+static inline void dma_unmap_resource(struct device *dev, dma_addr_t addr,
+				      size_t size, enum dma_data_direction dir,
+				      unsigned long attrs)
+{
+	struct dma_map_ops *ops = get_dma_ops(dev);
+
+	BUG_ON(!valid_dma_direction(dir));
+	if (ops->unmap_resource)
+		ops->unmap_resource(dev, addr, size, dir, attrs);
+	debug_dma_unmap_resource(dev, addr, size, dir);
+}
+
 static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
 					   size_t size,
 					   enum dma_data_direction dir)
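
dma_map_resource()/dma_unmap_resource() are what back the "IOMMU slave transfers" item in this pull: they map a slave device's MMIO resource (never RAM) through the IOMMU. A minimal sketch of how a dmaengine driver might use it, assuming an illustrative foo_ prefix and a 4-byte FIFO register; only the two new helpers come from the patch above:

/* Sketch under assumptions: foo_map_fifo()/foo_unmap_fifo() are made up. */
#include <linux/dma-mapping.h>

static int foo_map_fifo(struct device *dma_dev, phys_addr_t fifo_phys,
			dma_addr_t *out)
{
	/* Falls back to the physical address when no map_resource op exists. */
	*out = dma_map_resource(dma_dev, fifo_phys, sizeof(u32),
				DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dma_dev, *out))
		return -ENOMEM;

	/* Program *out into the DMA controller's slave address register. */
	return 0;
}

static void foo_unmap_fifo(struct device *dma_dev, dma_addr_t dma)
{
	dma_unmap_resource(dma_dev, dma, sizeof(u32), DMA_BIDIRECTIONAL, 0);
}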


@@ -441,6 +441,21 @@ typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
 typedef void (*dma_async_tx_callback)(void *dma_async_param);
+enum dmaengine_tx_result {
+	DMA_TRANS_NOERROR = 0,		/* SUCCESS */
+	DMA_TRANS_READ_FAILED,		/* Source DMA read failed */
+	DMA_TRANS_WRITE_FAILED,		/* Destination DMA write failed */
+	DMA_TRANS_ABORTED,		/* Op never submitted / aborted */
+};
+
+struct dmaengine_result {
+	enum dmaengine_tx_result result;
+	u32 residue;
+};
+
+typedef void (*dma_async_tx_callback_result)(void *dma_async_param,
+				const struct dmaengine_result *result);
+
 struct dmaengine_unmap_data {
 	u8 map_cnt;
 	u8 to_cnt;
@@ -478,6 +493,7 @@ struct dma_async_tx_descriptor {
 	dma_cookie_t (*tx_submit)(struct dma_async_tx_descriptor *tx);
 	int (*desc_free)(struct dma_async_tx_descriptor *tx);
 	dma_async_tx_callback callback;
+	dma_async_tx_callback_result callback_result;
 	void *callback_param;
 	struct dmaengine_unmap_data *unmap;
 #ifdef CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH
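
This is the error-reporting mechanism called out at the top of the pull: a client that sets callback_result instead of callback is told whether the transaction actually completed. A minimal sketch modelled on the ntb_transport usage earlier in this diff; my_done() and my_issue_copy() are illustrative names, not part of this series:

/* Sketch only: my_done()/my_issue_copy() are made up; the callback_result
 * plumbing is what the hunk above adds.
 */
#include <linux/completion.h>
#include <linux/dmaengine.h>

static void my_done(void *param, const struct dmaengine_result *res)
{
	struct completion *done = param;

	/* res can be NULL if the callback is also reused on a CPU-copy path. */
	if (res && res->result != DMA_TRANS_NOERROR)
		pr_warn("dma transfer failed, residue %u\n", res->residue);

	complete(done);
}

static int my_issue_copy(struct dma_chan *chan, dma_addr_t dst, dma_addr_t src,
			 size_t len, struct completion *done)
{
	struct dma_async_tx_descriptor *txd;
	dma_cookie_t cookie;

	txd = chan->device->device_prep_dma_memcpy(chan, dst, src, len,
						   DMA_PREP_INTERRUPT);
	if (!txd)
		return -ENOMEM;

	txd->callback_result = my_done;
	txd->callback_param = done;

	cookie = dmaengine_submit(txd);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(chan);
	return 0;
}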


@@ -11,6 +11,8 @@
 #ifndef __LINUX_MBUS_H
 #define __LINUX_MBUS_H
+#include <linux/errno.h>
+
 struct resource;
 struct mbus_dram_target_info
@@ -55,6 +57,8 @@ struct mbus_dram_target_info
 #ifdef CONFIG_PLAT_ORION
 extern const struct mbus_dram_target_info *mv_mbus_dram_info(void);
 extern const struct mbus_dram_target_info *mv_mbus_dram_info_nooverlap(void);
+int mvebu_mbus_get_io_win_info(phys_addr_t phyaddr, u32 *size, u8 *target,
+			       u8 *attr);
 #else
 static inline const struct mbus_dram_target_info *mv_mbus_dram_info(void)
 {
@@ -64,14 +68,24 @@ static inline const struct mbus_dram_target_info *mv_mbus_dram_info_nooverlap(vo
 {
 	return NULL;
 }
+
+static inline int mvebu_mbus_get_io_win_info(phys_addr_t phyaddr, u32 *size,
+					     u8 *target, u8 *attr)
+{
+	/*
+	 * On all ARM32 MVEBU platforms with MBus support, this stub
+	 * function will not get called. The real function from the
+	 * MBus driver is called instead. ARM64 MVEBU platforms like
+	 * the Armada 3700 could use the mv_xor device driver which calls
+	 * into this function
+	 */
+	return -EINVAL;
+}
 #endif
 int mvebu_mbus_save_cpu_target(u32 __iomem *store_addr);
 void mvebu_mbus_get_pcie_mem_aperture(struct resource *res);
 void mvebu_mbus_get_pcie_io_aperture(struct resource *res);
 int mvebu_mbus_get_dram_win_info(phys_addr_t phyaddr, u8 *target, u8 *attr);
-int mvebu_mbus_get_io_win_info(phys_addr_t phyaddr, u32 *size, u8 *target,
-			       u8 *attr);
 int mvebu_mbus_add_window_remap_by_id(unsigned int target,
 				      unsigned int attribute,
 				      phys_addr_t base, size_t size,


@@ -297,6 +297,7 @@ struct omap_system_dma_plat_info {
 #define dma_omap15xx()	__dma_omap15xx(d)
 #define dma_omap16xx()	__dma_omap16xx(d)
+#if defined(CONFIG_ARCH_OMAP)
 extern struct omap_system_dma_plat_info *omap_get_plat_info(void);
 extern void omap_set_dma_priority(int lch, int dst_port, int priority);
@@ -355,4 +356,22 @@ static inline int omap_lcd_dma_running(void)
 }
 #endif
+#else /* CONFIG_ARCH_OMAP */
+
+static inline struct omap_system_dma_plat_info *omap_get_plat_info(void)
+{
+	return NULL;
+}
+
+static inline int omap_request_dma(int dev_id, const char *dev_name,
+			void (*callback)(int lch, u16 ch_status, void *data),
+			void *data, int *dma_ch)
+{
+	return -ENODEV;
+}
+
+static inline void omap_free_dma(int ch) { }
+
+#endif /* CONFIG_ARCH_OMAP */
+
 #endif /* __LINUX_OMAP_DMA_H */


@@ -28,7 +28,7 @@ struct sram_platdata {
 	int granularity;
 };
-#ifdef CONFIG_ARM
+#ifdef CONFIG_MMP_SRAM
 extern struct gen_pool *sram_get_gpool(char *pool_name);
 #else
 static inline struct gen_pool *sram_get_gpool(char *pool_name)


@@ -30,16 +30,22 @@ struct s3c24xx_dma_channel {
 	u16 chansel;
 };
+struct dma_slave_map;
+
 /**
  * struct s3c24xx_dma_platdata - platform specific settings
  * @num_phy_channels: number of physical channels
  * @channels: array of virtual channel descriptions
  * @num_channels: number of virtual channels
+ * @slave_map: dma slave map matching table
+ * @slavecnt: number of elements in slave_map
  */
 struct s3c24xx_dma_platdata {
 	int num_phy_channels;
 	struct s3c24xx_dma_channel *channels;
 	int num_channels;
+	const struct dma_slave_map *slave_map;
+	int slavecnt;
 };
 struct dma_chan;


@@ -44,6 +44,7 @@ enum {
 	dma_debug_page,
 	dma_debug_sg,
 	dma_debug_coherent,
+	dma_debug_resource,
 };
 enum map_err_types {
@@ -151,8 +152,9 @@ static const char *const maperr2str[] = {
 	[MAP_ERR_CHECKED] = "dma map error checked",
 };
-static const char *type2name[4] = { "single", "page",
-				    "scather-gather", "coherent" };
+static const char *type2name[5] = { "single", "page",
+				    "scather-gather", "coherent",
+				    "resource" };
 static const char *dir2name[4] = { "DMA_BIDIRECTIONAL", "DMA_TO_DEVICE",
 				   "DMA_FROM_DEVICE", "DMA_NONE" };
@@ -400,6 +402,9 @@ static void hash_bucket_del(struct dma_debug_entry *entry)
 static unsigned long long phys_addr(struct dma_debug_entry *entry)
 {
+	if (entry->type == dma_debug_resource)
+		return __pfn_to_phys(entry->pfn) + entry->offset;
+
 	return page_to_phys(pfn_to_page(entry->pfn)) + entry->offset;
 }
@@ -1519,6 +1524,49 @@ void debug_dma_free_coherent(struct device *dev, size_t size,
 }
 EXPORT_SYMBOL(debug_dma_free_coherent);
+void debug_dma_map_resource(struct device *dev, phys_addr_t addr, size_t size,
+			    int direction, dma_addr_t dma_addr)
+{
+	struct dma_debug_entry *entry;
+
+	if (unlikely(dma_debug_disabled()))
+		return;
+
+	entry = dma_entry_alloc();
+	if (!entry)
+		return;
+
+	entry->type = dma_debug_resource;
+	entry->dev = dev;
+	entry->pfn = PHYS_PFN(addr);
+	entry->offset = offset_in_page(addr);
+	entry->size = size;
+	entry->dev_addr = dma_addr;
+	entry->direction = direction;
+	entry->map_err_type = MAP_ERR_NOT_CHECKED;
+
+	add_dma_entry(entry);
+}
+EXPORT_SYMBOL(debug_dma_map_resource);
+
+void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr,
+			      size_t size, int direction)
+{
+	struct dma_debug_entry ref = {
+		.type = dma_debug_resource,
+		.dev = dev,
+		.dev_addr = dma_addr,
+		.size = size,
+		.direction = direction,
+	};
+
+	if (unlikely(dma_debug_disabled()))
+		return;
+
+	check_unmap(&ref);
+}
+EXPORT_SYMBOL(debug_dma_unmap_resource);
+
 void debug_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
 				   size_t size, int direction)
 {