linux/drivers/media/platform/Makefile


#
# Makefile for the video capture/playback device drivers.
#
obj-$(CONFIG_VIDEO_M32R_AR_M64278) += arv.o
obj-$(CONFIG_VIDEO_VIA_CAMERA) += via-camera.o
obj-$(CONFIG_VIDEO_CAFE_CCIC) += marvell-ccic/
obj-$(CONFIG_VIDEO_MMP_CAMERA) += marvell-ccic/
obj-$(CONFIG_VIDEO_OMAP3) += omap3isp/
obj-$(CONFIG_VIDEO_PXA27x) += pxa_camera.o
obj-$(CONFIG_VIDEO_VIU) += fsl-viu.o
obj-$(CONFIG_VIDEO_VIMC) += vimc/
obj-$(CONFIG_VIDEO_VIVID) += vivid/
obj-$(CONFIG_VIDEO_VIM2M) += vim2m.o
obj-$(CONFIG_VIDEO_TI_VPE) += ti-vpe/
obj-$(CONFIG_VIDEO_TI_CAL) += ti-vpe/
obj-$(CONFIG_VIDEO_MX2_EMMAPRP) += mx2_emmaprp.o
obj-$(CONFIG_VIDEO_CODA) += coda/
obj-$(CONFIG_VIDEO_SH_VEU) += sh_veu.o
obj-$(CONFIG_CEC_GPIO) += cec-gpio/
obj-$(CONFIG_VIDEO_MEM2MEM_DEINTERLACE) += m2m-deinterlace.o
obj-$(CONFIG_VIDEO_MUX) += video-mux.o
obj-$(CONFIG_VIDEO_S3C_CAMIF) += s3c-camif/
obj-$(CONFIG_VIDEO_SAMSUNG_EXYNOS4_IS) += exynos4-is/
obj-$(CONFIG_VIDEO_SAMSUNG_S5P_JPEG) += s5p-jpeg/
obj-$(CONFIG_VIDEO_SAMSUNG_S5P_MFC) += s5p-mfc/
obj-$(CONFIG_VIDEO_SAMSUNG_S5P_G2D) += s5p-g2d/
obj-$(CONFIG_VIDEO_SAMSUNG_S5P_CEC) += s5p-cec/
obj-$(CONFIG_VIDEO_SAMSUNG_EXYNOS_GSC) += exynos-gsc/
obj-$(CONFIG_VIDEO_STI_BDISP) += sti/bdisp/
obj-$(CONFIG_VIDEO_STI_HVA) += sti/hva/
obj-$(CONFIG_DVB_C8SECTPFE) += sti/c8sectpfe/
obj-$(CONFIG_VIDEO_STI_HDMI_CEC) += sti/cec/
obj-$(CONFIG_VIDEO_STI_DELTA) += sti/delta/
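# Always descend into these directories; their own Makefiles select
# objects based on the relevant Kconfig symbols.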
obj-y += stm32/
obj-y += blackfin/
obj-y += davinci/
obj-$(CONFIG_VIDEO_SH_VOU) += sh_vou.o
obj-$(CONFIG_SOC_CAMERA) += soc_camera/
obj-$(CONFIG_VIDEO_RCAR_DRIF) += rcar_drif.o
obj-$(CONFIG_VIDEO_RENESAS_FCP) += rcar-fcp.o
obj-$(CONFIG_VIDEO_RENESAS_FDP1) += rcar_fdp1.o
obj-$(CONFIG_VIDEO_RENESAS_JPU) += rcar_jpu.o
obj-$(CONFIG_VIDEO_RENESAS_VSP1) += vsp1/
obj-y += omap/
obj-$(CONFIG_VIDEO_AM437X_VPFE) += am437x/
obj-$(CONFIG_VIDEO_XILINX) += xilinx/
obj-$(CONFIG_VIDEO_RCAR_VIN) += rcar-vin/
obj-$(CONFIG_VIDEO_ATMEL_ISC) += atmel/
obj-$(CONFIG_VIDEO_ATMEL_ISI) += atmel/
obj-$(CONFIG_VIDEO_STM32_DCMI) += stm32/
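# Some platform drivers include headers shared with the I2C sub-device
# drivers, so add that directory to the include search path.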
ccflags-y += -I$(srctree)/drivers/media/i2c
obj-$(CONFIG_VIDEO_MEDIATEK_VPU) += mtk-vpu/
obj-$(CONFIG_VIDEO_MEDIATEK_VCODEC) += mtk-vcodec/
obj-$(CONFIG_VIDEO_MEDIATEK_MDP) += mtk-mdp/
obj-$(CONFIG_VIDEO_MEDIATEK_JPEG) += mtk-jpeg/
obj-$(CONFIG_VIDEO_QCOM_CAMSS) += qcom/camss-8x16/
obj-$(CONFIG_VIDEO_QCOM_VENUS) += qcom/venus/
obj-y += meson/