Merge tag 'drm-misc-next-2023-10-12' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v6.7-rc1:

Contains the previous pull request drm-misc-next-2023-10-06 + following:

Cross-subsystem Changes:
- Rename fb_pgprotect() to pgprot_framebuffer() and remove the file argument.
- Update iosys-map documentation typos.

Core Changes:
- Assorted fixes to drm/panel.
- Add HPD state to drm_connector_oob_hotplug_event(), and implement
  oob hotplug events in bridge connector.
- Replace drm_framebuffer_plane_width/height with calls to
  drm_format_info_plane_width/height.

Driver Changes:
- Clock and debug fixes for bridge/samsung-dsim.
- More btree -> maple tree conversions.
- Assorted bugfixes in rockchip and panel-tpo-tpg110.
- Add LTK050H3148W-CTA6 panel support.
- Assorted small fixes in host1x, tegra, simpledrm.
- Suspend fixes for host1x.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/3812345e-b086-4d72-8504-f58d84e8feab@linux.intel.com
This commit is contained in:
Dave Airlie 2023-10-16 10:40:31 +10:00
commit d32ce5ab7b
105 changed files with 1900 additions and 418 deletions

View File

@ -0,0 +1,84 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/lvds-data-mapping.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: LVDS Data Mapping
maintainers:
- Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
- Thierry Reding <thierry.reding@gmail.com>
description: |
LVDS is a physical layer specification defined in ANSI/TIA/EIA-644-A. Multiple
incompatible data link layers have been used over time to transmit image data
to LVDS devices. This binding supports devices compatible with the following
specifications.
[JEIDA] "Digital Interface Standards for Monitor", JEIDA-59-1999, February
1999 (Version 1.0), Japan Electronic Industry Development Association (JEIDA)
[LDI] "Open LVDS Display Interface", May 1999 (Version 0.95), National
Semiconductor
[VESA] "VESA Notebook Panel Standard", October 2007 (Version 1.0), Video
Electronics Standards Association (VESA)
Devices compatible with those specifications have been marketed under the
FPD-Link and FlatLink brands.
properties:
data-mapping:
enum:
- jeida-18
- jeida-24
- vesa-24
description: |
The color signals mapping order.
LVDS data mappings are defined as follows.
- "jeida-18" - 18-bit data mapping compatible with the [JEIDA], [LDI] and
[VESA] specifications. Data are transferred as follows on 3 LVDS lanes.
Slot 0 1 2 3 4 5 6
________________ _________________
Clock \_______________________/
______ ______ ______ ______ ______ ______ ______
DATA0 ><__G0__><__R5__><__R4__><__R3__><__R2__><__R1__><__R0__><
DATA1 ><__B1__><__B0__><__G5__><__G4__><__G3__><__G2__><__G1__><
DATA2 ><_CTL2_><_CTL1_><_CTL0_><__B5__><__B4__><__B3__><__B2__><
- "jeida-24" - 24-bit data mapping compatible with the [DSIM] and [LDI]
specifications. Data are transferred as follows on 4 LVDS lanes.
Slot 0 1 2 3 4 5 6
________________ _________________
Clock \_______________________/
______ ______ ______ ______ ______ ______ ______
DATA0 ><__G2__><__R7__><__R6__><__R5__><__R4__><__R3__><__R2__><
DATA1 ><__B3__><__B2__><__G7__><__G6__><__G5__><__G4__><__G3__><
DATA2 ><_CTL2_><_CTL1_><_CTL0_><__B7__><__B6__><__B5__><__B4__><
DATA3 ><_CTL3_><__B1__><__B0__><__G1__><__G0__><__R1__><__R0__><
- "vesa-24" - 24-bit data mapping compatible with the [VESA] specification.
Data are transferred as follows on 4 LVDS lanes.
Slot 0 1 2 3 4 5 6
________________ _________________
Clock \_______________________/
______ ______ ______ ______ ______ ______ ______
DATA0 ><__G0__><__R5__><__R4__><__R3__><__R2__><__R1__><__R0__><
DATA1 ><__B1__><__B0__><__G5__><__G4__><__G3__><__G2__><__G1__><
DATA2 ><_CTL2_><_CTL1_><_CTL0_><__B5__><__B4__><__B3__><__B2__><
DATA3 ><_CTL3_><__B7__><__B6__><__G7__><__G6__><__R7__><__R6__><
Control signals are mapped as follows.
CTL0: HSync
CTL1: VSync
CTL2: Data Enable
CTL3: 0
additionalProperties: true
...

View File

@ -6,83 +6,24 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: LVDS Display Common Properties
allOf:
- $ref: lvds-data-mapping.yaml#
maintainers:
- Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
- Thierry Reding <thierry.reding@gmail.com>
description: |+
LVDS is a physical layer specification defined in ANSI/TIA/EIA-644-A. Multiple
incompatible data link layers have been used over time to transmit image data
to LVDS devices. This bindings supports devices compatible with the following
specifications.
[JEIDA] "Digital Interface Standards for Monitor", JEIDA-59-1999, February
1999 (Version 1.0), Japan Electronic Industry Development Association (JEIDA)
[LDI] "Open LVDS Display Interface", May 1999 (Version 0.95), National
Semiconductor
[VESA] "VESA Notebook Panel Standard", October 2007 (Version 1.0), Video
Electronics Standards Association (VESA)
Device compatible with those specifications have been marketed under the
FPD-Link and FlatLink brands.
description:
This binding extends the data mapping defined in lvds-data-mapping.yaml.
It supports reversing the bit order on the formats defined there in order
to accommodate even more specialized data formats, since a variety of
data formats and layouts is used to drive LVDS displays.
properties:
data-mapping:
enum:
- jeida-18
- jeida-24
- vesa-24
description: |
The color signals mapping order.
LVDS data mappings are defined as follows.
- "jeida-18" - 18-bit data mapping compatible with the [JEIDA], [LDI] and
[VESA] specifications. Data are transferred as follows on 3 LVDS lanes.
Slot 0 1 2 3 4 5 6
________________ _________________
Clock \_______________________/
______ ______ ______ ______ ______ ______ ______
DATA0 ><__G0__><__R5__><__R4__><__R3__><__R2__><__R1__><__R0__><
DATA1 ><__B1__><__B0__><__G5__><__G4__><__G3__><__G2__><__G1__><
DATA2 ><_CTL2_><_CTL1_><_CTL0_><__B5__><__B4__><__B3__><__B2__><
- "jeida-24" - 24-bit data mapping compatible with the [DSIM] and [LDI]
specifications. Data are transferred as follows on 4 LVDS lanes.
Slot 0 1 2 3 4 5 6
________________ _________________
Clock \_______________________/
______ ______ ______ ______ ______ ______ ______
DATA0 ><__G2__><__R7__><__R6__><__R5__><__R4__><__R3__><__R2__><
DATA1 ><__B3__><__B2__><__G7__><__G6__><__G5__><__G4__><__G3__><
DATA2 ><_CTL2_><_CTL1_><_CTL0_><__B7__><__B6__><__B5__><__B4__><
DATA3 ><_CTL3_><__B1__><__B0__><__G1__><__G0__><__R1__><__R0__><
- "vesa-24" - 24-bit data mapping compatible with the [VESA] specification.
Data are transferred as follows on 4 LVDS lanes.
Slot 0 1 2 3 4 5 6
________________ _________________
Clock \_______________________/
______ ______ ______ ______ ______ ______ ______
DATA0 ><__G0__><__R5__><__R4__><__R3__><__R2__><__R1__><__R0__><
DATA1 ><__B1__><__B0__><__G5__><__G4__><__G3__><__G2__><__G1__><
DATA2 ><_CTL2_><_CTL1_><_CTL0_><__B5__><__B4__><__B3__><__B2__><
DATA3 ><_CTL3_><__B7__><__B6__><__G7__><__G6__><__R7__><__R6__><
Control signals are mapped as follows.
CTL0: HSync
CTL1: VSync
CTL2: Data Enable
CTL3: 0
data-mirror:
type: boolean
description:
If set, reverse the bit order described in the data mappings below on all
If set, reverse the bit order described in the data mappings on all
data lanes, transmitting bits for slots 6 to 0 instead of 0 to 6.
additionalProperties: true

View File

@ -17,6 +17,7 @@ properties:
enum:
- leadtek,ltk050h3146w
- leadtek,ltk050h3146w-a2
- leadtek,ltk050h3148w
reg: true
backlight: true
reset-gpios: true

View File

@ -7,9 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: NewVision NV3051D based LCD panel
description: |
The NewVision NV3051D is a driver chip used to drive DSI panels. For now,
this driver only supports the 640x480 panels found in the Anbernic RG353
based devices.
The NewVision NV3051D is a driver chip used to drive DSI panels.
maintainers:
- Chris Morgan <macromorgan@hotmail.com>
@ -21,6 +19,7 @@ properties:
compatible:
items:
- enum:
- anbernic,rg351v-panel
- anbernic,rg353p-panel
- anbernic,rg353v-panel
- const: newvision,nv3051d

View File

@ -21,9 +21,9 @@ description: |
allOf:
- $ref: panel-common.yaml#
- $ref: ../lvds-data-mapping.yaml#
properties:
compatible:
enum:
# compatible must be listed in alphabetical order, ordered by compatible.
@ -359,6 +359,17 @@ properties:
power-supply: true
no-hpd: true
hpd-gpios: true
data-mapping: true
if:
not:
properties:
compatible:
contains:
const: innolux,g101ice-l01
then:
properties:
data-mapping: false
additionalProperties: false
@ -378,3 +389,16 @@ examples:
};
};
};
- |
panel_lvds: panel-lvds {
compatible = "innolux,g101ice-l01";
power-supply = <&vcc_lcd_reg>;
data-mapping = "jeida-24";
port {
panel_in_lvds: endpoint {
remote-endpoint = <&ltdc_out_lvds>;
};
};
};

View File

@ -0,0 +1,73 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/raydium,rm692e5.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Raydium RM692E5 based DSI display panels
maintainers:
- Konrad Dybcio <konradybcio@kernel.org>
description:
The Raydium RM692E5 is a generic DSI Panel IC used to control
AMOLED panels.
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
items:
- const: fairphone,fp5-rm692e5-boe
- const: raydium,rm692e5
dvdd-supply:
description: Digital voltage rail
vci-supply:
description: Analog voltage rail
vddio-supply:
description: I/O voltage rail
reg: true
port: true
required:
- compatible
- reg
- reset-gpios
- dvdd-supply
- vci-supply
- vddio-supply
- port
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "fairphone,fp5-rm692e5-boe", "raydium,rm692e5";
reg = <0>;
reset-gpios = <&tlmm 44 GPIO_ACTIVE_LOW>;
dvdd-supply = <&vreg_oled_vci>;
vci-supply = <&vreg_l12c>;
vddio-supply = <&vreg_oled_dvdd>;
port {
panel_in_0: endpoint {
remote-endpoint = <&dsi0_out>;
};
};
};
};
...

View File

@ -18,6 +18,7 @@ GPU Driver Documentation
xen-front
afbc
komeda-kms
panfrost
.. only:: subproject and html

View File

@ -466,40 +466,40 @@ DRM MM Range Allocator Function References
.. kernel-doc:: drivers/gpu/drm/drm_mm.c
:export:
DRM GPU VA Manager
==================
DRM GPUVM
=========
Overview
--------
.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Overview
Split and Merge
---------------
.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Split and Merge
Locking
-------
.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Locking
Examples
--------
.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Examples
DRM GPU VA Manager Function References
--------------------------------------
DRM GPUVM Function References
-----------------------------
.. kernel-doc:: include/drm/drm_gpuva_mgr.h
.. kernel-doc:: include/drm/drm_gpuvm.h
:internal:
.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:export:
DRM Buddy Allocator

View File

@ -285,6 +285,83 @@ for GPU1 and GPU2 from different vendors, and a third handler for
mmapped regular files. Threads cause additional pain with signal
handling as well.
Device reset
============
The GPU stack is really complex and prone to errors, from hardware bugs and
faulty applications to everything in between across its many layers. Some errors
require resetting the device in order to make the device usable again. This
section describes the expectations for DRM and usermode drivers when a
device resets and how to propagate the reset status.
Device resets can not be disabled without tainting the kernel, which can lead to
hanging the entire kernel through shrinkers/mmu_notifiers. Userspace's role in
device resets is to propagate the message to the application and apply any
special policy for blocking guilty applications, if any. A corollary is that
debugging a hung GPU context requires hardware support to be able to preempt such
a GPU context while it's stopped.
Kernel Mode Driver
------------------
The KMD is responsible for checking if the device needs a reset, and for
performing it as needed. Usually a hang is detected when a job gets stuck
executing. The KMD
should keep track of resets, because userspace can query any time about the
reset status for a specific context. This is needed to propagate to the rest of
the stack that a reset has happened. Currently, this is implemented by each
driver separately, with no common DRM interface. Ideally this should be properly
integrated into the DRM scheduler to provide common ground for all drivers. After a
reset, KMD should reject new command submissions for affected contexts.
User Mode Driver
----------------
After command submission, the UMD should check if the submission was accepted or
rejected. After a reset, the KMD should reject submissions, and the UMD can issue
an ioctl to the KMD to check the reset status; this can be checked more often if
the UMD requires it. After detecting a reset, the UMD will then proceed to report
it to the application using the appropriate API error code, as explained in the
section below about robustness.
Robustness
----------
The only way to try to keep a graphical API context working after a reset is if
it complies with the robustness aspects of the graphical API that it is using.
Graphical APIs provide ways for applications to deal with device resets. However,
there is no guarantee that the app will use such features correctly, and a
userspace that doesn't support robust interfaces (like a non-robust
OpenGL context or an API without any robustness support, like libva) leaves the
robustness handling entirely to the userspace driver. There is no strong
community consensus on what the userspace driver should do in that case,
since all reasonable approaches have some clear downsides.
OpenGL
~~~~~~
Apps using OpenGL should use the available robust interfaces, like the
extension ``GL_ARB_robustness`` (or ``GL_EXT_robustness`` for OpenGL ES). This
interface tells if a reset has happened, and if so, all the context state is
considered lost and the app proceeds by creating new ones. There's no consensus
on what to do if robustness is not in use.
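A minimal sketch of the robust path, assuming a context created with the
``LOSE_CONTEXT_ON_RESET`` reset notification strategy; ``destroy_context()``
and ``recreate_context()`` are hypothetical application helpers, and extension
function loading is elided::

    GLenum status = glGetGraphicsResetStatusARB();

    if (status != GL_NO_ERROR) {
            /* GL_GUILTY_CONTEXT_RESET_ARB, GL_INNOCENT_CONTEXT_RESET_ARB or
             * GL_UNKNOWN_CONTEXT_RESET_ARB: all context state is lost, so
             * tear the context down and build a fresh one. */
            destroy_context(ctx);
            ctx = recreate_context();
    }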
Vulkan
~~~~~~
Apps using Vulkan should check for ``VK_ERROR_DEVICE_LOST`` for submissions.
This error code means, among other things, that a device reset has happened and
the application needs to recreate its contexts to keep going.
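A minimal sketch of that check; ``recreate_vulkan_objects()`` is a hypothetical
application helper::

    static void submit_and_check(VkQueue queue, const VkSubmitInfo *submit_info,
                                 VkFence fence)
    {
            VkResult res = vkQueueSubmit(queue, 1, submit_info, fence);

            if (res == VK_ERROR_DEVICE_LOST) {
                    /* The logical device is lost: destroy the VkDevice,
                     * create a new one and rebuild pipelines, buffers, ... */
                    recreate_vulkan_objects();
            }
    }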
Reporting causes of resets
--------------------------
Apart from propagating the reset through the stack so apps can recover, it's
really useful for driver developers to learn more about what caused the reset in
the first place. DRM devices should make use of devcoredump to store relevant
information about the reset, so this information can be added to user bug
reports.
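A minimal sketch of such a report, assuming the driver has already built a
vmalloc'ed snapshot buffer; ``dev_coredumpv()`` takes ownership of the buffer
and exposes it under ``/sys/class/devcoredump/``::

    #include <linux/devcoredump.h>

    static void gpu_report_reset(struct device *dev, void *snapshot, size_t len)
    {
            /* snapshot must be vmalloc'ed; devcoredump frees it with vfree()
             * once user space has read the dump or the timeout expires. */
            dev_coredumpv(dev, snapshot, len, GFP_KERNEL);
    }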
.. _drm_driver_ioctl:
IOCTL Support on Device Nodes

View File

@ -169,3 +169,4 @@ Driver specific implementations
-------------------------------
:ref:`i915-usage-stats`
:ref:`panfrost-usage-stats`

View File

@ -0,0 +1,40 @@
.. SPDX-License-Identifier: GPL-2.0+
=========================
drm/Panfrost Mali Driver
=========================
.. _panfrost-usage-stats:
Panfrost DRM client usage stats implementation
==============================================
The drm/Panfrost driver implements the DRM client usage stats specification as
documented in :ref:`drm-client-usage-stats`.
Example of the output showing the implemented key value pairs and entirety of
the currently possible format options:
::
pos: 0
flags: 02400002
mnt_id: 27
ino: 531
drm-driver: panfrost
drm-client-id: 14
drm-engine-fragment: 1846584880 ns
drm-cycles-fragment: 1424359409
drm-maxfreq-fragment: 799999987 Hz
drm-curfreq-fragment: 799999987 Hz
drm-engine-vertex-tiler: 71932239 ns
drm-cycles-vertex-tiler: 52617357
drm-maxfreq-vertex-tiler: 799999987 Hz
drm-curfreq-vertex-tiler: 799999987 Hz
drm-total-memory: 290 MiB
drm-shared-memory: 0 MiB
drm-active-memory: 226 MiB
drm-resident-memory: 36496 KiB
drm-purgeable-memory: 128 KiB
Possible `drm-engine-` key names are: `fragment`, and `vertex-tiler`.
`drm-curfreq-` values convey the current operating frequency for that engine.

View File

@ -1632,6 +1632,7 @@ R: Steven Price <steven.price@arm.com>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/gpu/panfrost.rst
F: drivers/gpu/drm/panfrost/
F: include/uapi/drm/panfrost_drm.h
@ -6859,12 +6860,26 @@ M: Thomas Zimmermann <tzimmermann@suse.de>
S: Maintained
W: https://01.org/linuxgraphics/gfx-docs/maintainer-tools/drm-misc.html
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/
F: Documentation/devicetree/bindings/gpu/
F: Documentation/gpu/
F: drivers/gpu/drm/*
F: drivers/gpu/drm/
F: drivers/gpu/vga/
F: include/drm/drm*
F: include/drm/drm
F: include/linux/vga*
F: include/uapi/drm/drm*
F: include/uapi/drm/
X: drivers/gpu/drm/amd/
X: drivers/gpu/drm/armada/
X: drivers/gpu/drm/etnaviv/
X: drivers/gpu/drm/exynos/
X: drivers/gpu/drm/i915/
X: drivers/gpu/drm/kmb/
X: drivers/gpu/drm/mediatek/
X: drivers/gpu/drm/msm/
X: drivers/gpu/drm/nouveau/
X: drivers/gpu/drm/radeon/
X: drivers/gpu/drm/renesas/
X: drivers/gpu/drm/tegra/
DRM DRIVERS FOR ALLWINNER A10
M: Maxime Ripard <mripard@kernel.org>
@ -15355,6 +15370,7 @@ M: Laurentiu Palcu <laurentiu.palcu@oss.nxp.com>
R: Lucas Stach <l.stach@pengutronix.de>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/imx/nxp,imx8mq-dcss.yaml
F: drivers/gpu/drm/imx/dcss/

View File

@ -8,17 +8,16 @@
#include <asm/page.h>
struct file;
static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
unsigned long off)
static inline pgprot_t pgprot_framebuffer(pgprot_t prot,
unsigned long vm_start, unsigned long vm_end,
unsigned long offset)
{
if (efi_range_is_wc(vma->vm_start, vma->vm_end - vma->vm_start))
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
if (efi_range_is_wc(vm_start, vm_end - vm_start))
return pgprot_writecombine(prot);
else
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
return pgprot_noncached(prot);
}
#define fb_pgprotect fb_pgprotect
#define pgprot_framebuffer pgprot_framebuffer
static inline void fb_memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
{

View File

@ -5,26 +5,27 @@
#include <asm/page.h>
#include <asm/setup.h>
struct file;
static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
unsigned long off)
static inline pgprot_t pgprot_framebuffer(pgprot_t prot,
unsigned long vm_start, unsigned long vm_end,
unsigned long offset)
{
#ifdef CONFIG_MMU
#ifdef CONFIG_SUN3
pgprot_val(vma->vm_page_prot) |= SUN3_PAGE_NOCACHE;
pgprot_val(prot) |= SUN3_PAGE_NOCACHE;
#else
if (CPU_IS_020_OR_030)
pgprot_val(vma->vm_page_prot) |= _PAGE_NOCACHE030;
pgprot_val(prot) |= _PAGE_NOCACHE030;
if (CPU_IS_040_OR_060) {
pgprot_val(vma->vm_page_prot) &= _CACHEMASK040;
pgprot_val(prot) &= _CACHEMASK040;
/* Use no-cache mode, serialized */
pgprot_val(vma->vm_page_prot) |= _PAGE_NOCACHE_S;
pgprot_val(prot) |= _PAGE_NOCACHE_S;
}
#endif /* CONFIG_SUN3 */
#endif /* CONFIG_MMU */
return prot;
}
#define fb_pgprotect fb_pgprotect
#define pgprot_framebuffer pgprot_framebuffer
#include <asm-generic/fb.h>

View File

@ -3,14 +3,13 @@
#include <asm/page.h>
struct file;
static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
unsigned long off)
static inline pgprot_t pgprot_framebuffer(pgprot_t prot,
unsigned long vm_start, unsigned long vm_end,
unsigned long offset)
{
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
return pgprot_noncached(prot);
}
#define fb_pgprotect fb_pgprotect
#define pgprot_framebuffer pgprot_framebuffer
/*
* MIPS doesn't define __raw_ I/O macros, so the helpers

View File

@ -2,18 +2,20 @@
#ifndef _ASM_FB_H_
#define _ASM_FB_H_
#include <linux/fs.h>
#include <asm/page.h>
static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
unsigned long off)
static inline pgprot_t pgprot_framebuffer(pgprot_t prot,
unsigned long vm_start, unsigned long vm_end,
unsigned long offset)
{
vma->vm_page_prot = phys_mem_access_prot(file, off >> PAGE_SHIFT,
vma->vm_end - vma->vm_start,
vma->vm_page_prot);
/*
* PowerPC's implementation of phys_mem_access_prot() does
* not use the file argument. Set it to NULL in preparation
for later updates to the interface.
*/
return phys_mem_access_prot(NULL, PHYS_PFN(offset), vm_end - vm_start, prot);
}
#define fb_pgprotect fb_pgprotect
#define pgprot_framebuffer pgprot_framebuffer
#include <asm-generic/fb.h>

View File

@ -4,15 +4,18 @@
#include <linux/io.h>
#include <asm/page.h>
struct fb_info;
struct file;
struct vm_area_struct;
#ifdef CONFIG_SPARC32
static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
unsigned long off)
{ }
#define fb_pgprotect fb_pgprotect
static inline pgprot_t pgprot_framebuffer(pgprot_t prot,
unsigned long vm_start, unsigned long vm_end,
unsigned long offset)
{
return prot;
}
#define pgprot_framebuffer pgprot_framebuffer
#endif
int fb_is_primary_device(struct fb_info *info);

View File

@ -2,12 +2,14 @@
#ifndef _ASM_X86_FB_H
#define _ASM_X86_FB_H
struct fb_info;
struct file;
struct vm_area_struct;
#include <asm/page.h>
void fb_pgprotect(struct file *file, struct vm_area_struct *vma, unsigned long off);
#define fb_pgprotect fb_pgprotect
struct fb_info;
pgprot_t pgprot_framebuffer(pgprot_t prot,
unsigned long vm_start, unsigned long vm_end,
unsigned long offset);
#define pgprot_framebuffer pgprot_framebuffer
int fb_is_primary_device(struct fb_info *info);
#define fb_is_primary_device fb_is_primary_device

View File

@ -13,16 +13,17 @@
#include <linux/vgaarb.h>
#include <asm/fb.h>
void fb_pgprotect(struct file *file, struct vm_area_struct *vma, unsigned long off)
pgprot_t pgprot_framebuffer(pgprot_t prot,
unsigned long vm_start, unsigned long vm_end,
unsigned long offset)
{
unsigned long prot;
prot = pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK;
pgprot_val(prot) &= ~_PAGE_CACHE_MASK;
if (boot_cpu_data.x86 > 3)
pgprot_val(vma->vm_page_prot) =
prot | cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS);
pgprot_val(prot) |= cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS);
return prot;
}
EXPORT_SYMBOL(fb_pgprotect);
EXPORT_SYMBOL(pgprot_framebuffer);
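/*
 * Hedged usage sketch (not part of this hunk): with the file argument gone,
 * a caller such as fb_mmap() is expected to apply the helper to the vma
 * directly, along the lines of
 *
 *	vma->vm_page_prot = pgprot_framebuffer(vma->vm_page_prot,
 *					       vma->vm_start, vma->vm_end, off);
 */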
int fb_is_primary_device(struct fb_info *info)
{

View File

@ -2,7 +2,6 @@
# Copyright (C) 2023 Intel Corporation
intel_vpu-y := \
ivpu_debugfs.o \
ivpu_drv.o \
ivpu_fw.o \
ivpu_fw_log.o \
@ -16,4 +15,6 @@ intel_vpu-y := \
ivpu_mmu_context.o \
ivpu_pm.o
intel_vpu-$(CONFIG_DEBUG_FS) += ivpu_debugfs.o
obj-$(CONFIG_DRM_ACCEL_IVPU) += intel_vpu.o

View File

@ -17,20 +17,26 @@
#include "ivpu_jsm_msg.h"
#include "ivpu_pm.h"
static inline struct ivpu_device *seq_to_ivpu(struct seq_file *s)
{
struct drm_debugfs_entry *entry = s->private;
return to_ivpu_device(entry->dev);
}
static int bo_list_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct drm_printer p = drm_seq_file_printer(s);
struct ivpu_device *vdev = seq_to_ivpu(s);
ivpu_bo_list(node->minor->dev, &p);
ivpu_bo_list(&vdev->drm, &p);
return 0;
}
static int fw_name_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
struct ivpu_device *vdev = seq_to_ivpu(s);
seq_printf(s, "%s\n", vdev->fw->name);
return 0;
@ -38,8 +44,7 @@ static int fw_name_show(struct seq_file *s, void *v)
static int fw_trace_capability_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
struct ivpu_device *vdev = seq_to_ivpu(s);
u64 trace_hw_component_mask;
u32 trace_destination_mask;
int ret;
@ -57,8 +62,7 @@ static int fw_trace_capability_show(struct seq_file *s, void *v)
static int fw_trace_config_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
struct ivpu_device *vdev = seq_to_ivpu(s);
/**
* WA: VPU_JSM_MSG_TRACE_GET_CONFIG command is not working yet,
* so we use values from vdev->fw instead of calling ivpu_jsm_trace_get_config()
@ -78,8 +82,7 @@ static int fw_trace_config_show(struct seq_file *s, void *v)
static int last_bootmode_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
struct ivpu_device *vdev = seq_to_ivpu(s);
seq_printf(s, "%s\n", (vdev->pm->is_warmboot) ? "warmboot" : "coldboot");
@ -88,8 +91,7 @@ static int last_bootmode_show(struct seq_file *s, void *v)
static int reset_counter_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
struct ivpu_device *vdev = seq_to_ivpu(s);
seq_printf(s, "%d\n", atomic_read(&vdev->pm->reset_counter));
return 0;
@ -97,14 +99,13 @@ static int reset_counter_show(struct seq_file *s, void *v)
static int reset_pending_show(struct seq_file *s, void *v)
{
struct drm_info_node *node = (struct drm_info_node *)s->private;
struct ivpu_device *vdev = to_ivpu_device(node->minor->dev);
struct ivpu_device *vdev = seq_to_ivpu(s);
seq_printf(s, "%d\n", atomic_read(&vdev->pm->in_reset));
return 0;
}
static const struct drm_info_list vdev_debugfs_list[] = {
static const struct drm_debugfs_info vdev_debugfs_list[] = {
{"bo_list", bo_list_show, 0},
{"fw_name", fw_name_show, 0},
{"fw_trace_capability", fw_trace_capability_show, 0},
@ -270,25 +271,24 @@ static const struct file_operations ivpu_reset_engine_fops = {
.write = ivpu_reset_engine_fn,
};
void ivpu_debugfs_init(struct drm_minor *minor)
void ivpu_debugfs_init(struct ivpu_device *vdev)
{
struct ivpu_device *vdev = to_ivpu_device(minor->dev);
struct dentry *debugfs_root = vdev->drm.debugfs_root;
drm_debugfs_create_files(vdev_debugfs_list, ARRAY_SIZE(vdev_debugfs_list),
minor->debugfs_root, minor);
drm_debugfs_add_files(&vdev->drm, vdev_debugfs_list, ARRAY_SIZE(vdev_debugfs_list));
debugfs_create_file("force_recovery", 0200, minor->debugfs_root, vdev,
debugfs_create_file("force_recovery", 0200, debugfs_root, vdev,
&ivpu_force_recovery_fops);
debugfs_create_file("fw_log", 0644, minor->debugfs_root, vdev,
debugfs_create_file("fw_log", 0644, debugfs_root, vdev,
&fw_log_fops);
debugfs_create_file("fw_trace_destination_mask", 0200, minor->debugfs_root, vdev,
debugfs_create_file("fw_trace_destination_mask", 0200, debugfs_root, vdev,
&fw_trace_destination_mask_fops);
debugfs_create_file("fw_trace_hw_comp_mask", 0200, minor->debugfs_root, vdev,
debugfs_create_file("fw_trace_hw_comp_mask", 0200, debugfs_root, vdev,
&fw_trace_hw_comp_mask_fops);
debugfs_create_file("fw_trace_level", 0200, minor->debugfs_root, vdev,
debugfs_create_file("fw_trace_level", 0200, debugfs_root, vdev,
&fw_trace_level_fops);
debugfs_create_file("reset_engine", 0200, minor->debugfs_root, vdev,
debugfs_create_file("reset_engine", 0200, debugfs_root, vdev,
&ivpu_reset_engine_fops);
}

View File

@ -6,8 +6,12 @@
#ifndef __IVPU_DEBUGFS_H__
#define __IVPU_DEBUGFS_H__
struct drm_minor;
struct ivpu_device;
void ivpu_debugfs_init(struct drm_minor *minor);
#if defined(CONFIG_DEBUG_FS)
void ivpu_debugfs_init(struct ivpu_device *vdev);
#else
static inline void ivpu_debugfs_init(struct ivpu_device *vdev) { }
#endif
#endif /* __IVPU_DEBUGFS_H__ */

View File

@ -395,10 +395,6 @@ static const struct drm_driver driver = {
.postclose = ivpu_postclose,
.gem_prime_import = ivpu_gem_prime_import,
#if defined(CONFIG_DEBUG_FS)
.debugfs_init = ivpu_debugfs_init,
#endif
.ioctls = ivpu_drm_ioctls,
.num_ioctls = ARRAY_SIZE(ivpu_drm_ioctls),
.fops = &ivpu_fops,
@ -631,6 +627,8 @@ static int ivpu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (ret)
return ret;
ivpu_debugfs_init(vdev);
ret = drm_dev_register(&vdev->drm, 0);
if (ret) {
dev_err(&pdev->dev, "Failed to register DRM device: %d\n", ret);

View File

@ -655,7 +655,7 @@ struct ip_hw_instance {
u8 harvest;
int num_base_addresses;
u32 base_addr[];
u32 base_addr[] __counted_by(num_base_addresses);
};
struct ip_hw_id {

View File

@ -204,15 +204,16 @@ void dm_helpers_dp_update_branch_info(
{}
static void dm_helpers_construct_old_payload(
struct dc_link *link,
int pbn_per_slot,
struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_topology_state *mst_state,
struct drm_dp_mst_atomic_payload *new_payload,
struct drm_dp_mst_atomic_payload *old_payload)
{
struct link_mst_stream_allocation_table current_link_table =
link->mst_stream_alloc_table;
struct link_mst_stream_allocation *dc_alloc;
int i;
struct drm_dp_mst_atomic_payload *pos;
int pbn_per_slot = mst_state->pbn_div;
u8 next_payload_vc_start = mgr->next_start_slot;
u8 payload_vc_start = new_payload->vc_start_slot;
u8 allocated_time_slots;
*old_payload = *new_payload;
@ -221,20 +222,17 @@ static void dm_helpers_construct_old_payload(
* struct drm_dp_mst_atomic_payload are don't care fields
* while calling drm_dp_remove_payload_part2()
*/
for (i = 0; i < current_link_table.stream_count; i++) {
dc_alloc =
&current_link_table.stream_allocations[i];
if (dc_alloc->vcp_id == new_payload->vcpi) {
old_payload->time_slots = dc_alloc->slot_count;
old_payload->pbn = dc_alloc->slot_count * pbn_per_slot;
break;
}
list_for_each_entry(pos, &mst_state->payloads, next) {
if (pos != new_payload &&
pos->vc_start_slot > payload_vc_start &&
pos->vc_start_slot < next_payload_vc_start)
next_payload_vc_start = pos->vc_start_slot;
}
/* make sure there is an old payload*/
ASSERT(i != current_link_table.stream_count);
allocated_time_slots = next_payload_vc_start - payload_vc_start;
old_payload->time_slots = allocated_time_slots;
old_payload->pbn = allocated_time_slots * pbn_per_slot;
}
/*
@ -272,8 +270,8 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
drm_dp_add_payload_part1(mst_mgr, mst_state, new_payload);
} else {
/* construct old payload by VCPI*/
dm_helpers_construct_old_payload(stream->link, mst_state->pbn_div,
new_payload, &old_payload);
dm_helpers_construct_old_payload(mst_mgr, mst_state,
new_payload, &old_payload);
target_payload = &old_payload;
drm_dp_remove_payload_part1(mst_mgr, mst_state, new_payload);
@ -366,7 +364,7 @@ bool dm_helpers_dp_mst_send_payload_allocation(
if (enable) {
ret = drm_dp_add_payload_part2(mst_mgr, mst_state->base.state, new_payload);
} else {
dm_helpers_construct_old_payload(stream->link, mst_state->pbn_div,
dm_helpers_construct_old_payload(mst_mgr, mst_state,
new_payload, &old_payload);
drm_dp_remove_payload_part2(mst_mgr, mst_state, &old_payload, new_payload);
}

View File

@ -192,7 +192,7 @@ struct smu10_clock_voltage_dependency_record {
struct smu10_voltage_dependency_table {
uint32_t count;
struct smu10_clock_voltage_dependency_record entries[];
struct smu10_clock_voltage_dependency_record entries[] __counted_by(count);
};
struct smu10_clock_voltage_information {

View File

@ -121,7 +121,7 @@ static const struct regmap_config adv7511_regmap_config = {
.val_bits = 8,
.max_register = 0xff,
.cache_type = REGCACHE_RBTREE,
.cache_type = REGCACHE_MAPLE,
.reg_defaults_raw = adv7511_register_defaults,
.num_reg_defaults_raw = ARRAY_SIZE(adv7511_register_defaults),
@ -1068,7 +1068,7 @@ static const struct regmap_config adv7511_cec_regmap_config = {
.val_bits = 8,
.max_register = 0xff,
.cache_type = REGCACHE_RBTREE,
.cache_type = REGCACHE_MAPLE,
.volatile_reg = adv7511_cec_register_volatile,
};

View File

@ -197,7 +197,7 @@ static const struct regmap_config chipone_regmap_config = {
.val_bits = 8,
.rd_table = &chipone_dsi_readable_table,
.wr_table = &chipone_dsi_writeable_table,
.cache_type = REGCACHE_RBTREE,
.cache_type = REGCACHE_MAPLE,
.max_register = MIPI_ATE_STATUS(1),
};

View File

@ -89,7 +89,7 @@ static const struct regmap_config lt9211_regmap_config = {
.volatile_table = &lt9211_rw_table,
.ranges = &lt9211_range,
.num_ranges = 1,
.cache_type = REGCACHE_RBTREE,
.cache_type = REGCACHE_MAPLE,
.max_register = 0xda00,
};

View File

@ -296,7 +296,7 @@ static int lt9611uxc_connector_get_modes(struct drm_connector *connector)
unsigned int count;
struct edid *edid;
edid = lt9611uxc->bridge.funcs->get_edid(&lt9611uxc->bridge, connector);
edid = drm_bridge_get_edid(&lt9611uxc->bridge, connector);
drm_connector_update_edid_property(connector, edid);
count = drm_add_edid_modes(connector, edid);
kfree(edid);

View File

@ -410,6 +410,8 @@ static const struct samsung_dsim_driver_data exynos3_dsi_driver_data = {
.num_bits_resol = 11,
.pll_p_offset = 13,
.reg_values = reg_values,
.pll_fin_min = 6,
.pll_fin_max = 12,
.m_min = 41,
.m_max = 125,
.min_freq = 500,
@ -427,6 +429,8 @@ static const struct samsung_dsim_driver_data exynos4_dsi_driver_data = {
.num_bits_resol = 11,
.pll_p_offset = 13,
.reg_values = reg_values,
.pll_fin_min = 6,
.pll_fin_max = 12,
.m_min = 41,
.m_max = 125,
.min_freq = 500,
@ -442,6 +446,8 @@ static const struct samsung_dsim_driver_data exynos5_dsi_driver_data = {
.num_bits_resol = 11,
.pll_p_offset = 13,
.reg_values = reg_values,
.pll_fin_min = 6,
.pll_fin_max = 12,
.m_min = 41,
.m_max = 125,
.min_freq = 500,
@ -457,6 +463,8 @@ static const struct samsung_dsim_driver_data exynos5433_dsi_driver_data = {
.num_bits_resol = 12,
.pll_p_offset = 13,
.reg_values = exynos5433_reg_values,
.pll_fin_min = 6,
.pll_fin_max = 12,
.m_min = 41,
.m_max = 125,
.min_freq = 500,
@ -472,6 +480,8 @@ static const struct samsung_dsim_driver_data exynos5422_dsi_driver_data = {
.num_bits_resol = 12,
.pll_p_offset = 13,
.reg_values = exynos5422_reg_values,
.pll_fin_min = 6,
.pll_fin_max = 12,
.m_min = 41,
.m_max = 125,
.min_freq = 500,
@ -491,6 +501,8 @@ static const struct samsung_dsim_driver_data imx8mm_dsi_driver_data = {
*/
.pll_p_offset = 14,
.reg_values = imx8mm_dsim_reg_values,
.pll_fin_min = 2,
.pll_fin_max = 30,
.m_min = 64,
.m_max = 1023,
.min_freq = 1050,
@ -614,7 +626,23 @@ static unsigned long samsung_dsim_set_pll(struct samsung_dsim *dsi,
u16 m;
u32 reg;
fin = dsi->pll_clk_rate;
if (dsi->pll_clk) {
/*
* Ensure that the reference clock is generated with a power of
* two divider from its parent, but close to the PLL's upper
* limit.
*/
fin = clk_get_rate(clk_get_parent(dsi->pll_clk));
while (fin > driver_data->pll_fin_max * MHZ)
fin /= 2;
clk_set_rate(dsi->pll_clk, fin);
fin = clk_get_rate(dsi->pll_clk);
} else {
fin = dsi->pll_clk_rate;
}
dev_dbg(dsi->dev, "PLL ref clock freq %lu\n", fin);
fout = samsung_dsim_pll_find_pms(dsi, fin, freq, &p, &m, &s);
if (!fout) {
dev_err(dsi->dev,
@ -960,10 +988,12 @@ static void samsung_dsim_set_display_mode(struct samsung_dsim *dsi)
u32 reg;
if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO) {
int byte_clk_khz = dsi->hs_clock / 1000 / 8;
int hfp = (m->hsync_start - m->hdisplay) * byte_clk_khz / m->clock;
int hbp = (m->htotal - m->hsync_end) * byte_clk_khz / m->clock;
int hsa = (m->hsync_end - m->hsync_start) * byte_clk_khz / m->clock;
u64 byte_clk = dsi->hs_clock / 8;
u64 pix_clk = m->clock * 1000;
int hfp = DIV64_U64_ROUND_UP((m->hsync_start - m->hdisplay) * byte_clk, pix_clk);
int hbp = DIV64_U64_ROUND_UP((m->htotal - m->hsync_end) * byte_clk, pix_clk);
int hsa = DIV64_U64_ROUND_UP((m->hsync_end - m->hsync_start) * byte_clk, pix_clk);
/* remove packet overhead when possible */
hfp = max(hfp - 6, 0);
@ -1726,7 +1756,10 @@ of_find_panel_or_bridge:
return ret;
}
DRM_DEV_INFO(dev, "Attached %s device\n", device->name);
DRM_DEV_INFO(dev, "Attached %s device (lanes:%d bpp:%d mode-flags:0x%lx)\n",
device->name, device->lanes,
mipi_dsi_pixel_format_to_bpp(device->format),
device->mode_flags);
drm_bridge_add(&dsi->bridge);
@ -1833,18 +1866,15 @@ static int samsung_dsim_parse_dt(struct samsung_dsim *dsi)
u32 lane_polarities[5] = { 0 };
struct device_node *endpoint;
int i, nr_lanes, ret;
struct clk *pll_clk;
ret = samsung_dsim_of_read_u32(node, "samsung,pll-clock-frequency",
&dsi->pll_clk_rate, 1);
/* If it doesn't exist, read it from the clock instead of failing */
if (ret < 0) {
dev_dbg(dev, "Using sclk_mipi for pll clock frequency\n");
pll_clk = devm_clk_get(dev, "sclk_mipi");
if (!IS_ERR(pll_clk))
dsi->pll_clk_rate = clk_get_rate(pll_clk);
else
return PTR_ERR(pll_clk);
dsi->pll_clk = devm_clk_get(dev, "sclk_mipi");
if (IS_ERR(dsi->pll_clk))
return PTR_ERR(dsi->pll_clk);
}
/* If it doesn't exist, use pixel clock instead of failing */
@ -2005,7 +2035,7 @@ err_disable_runtime:
}
EXPORT_SYMBOL_GPL(samsung_dsim_probe);
int samsung_dsim_remove(struct platform_device *pdev)
void samsung_dsim_remove(struct platform_device *pdev)
{
struct samsung_dsim *dsi = platform_get_drvdata(pdev);
@ -2013,8 +2043,6 @@ int samsung_dsim_remove(struct platform_device *pdev)
if (dsi->plat_data->host_ops && dsi->plat_data->host_ops->unregister_host)
dsi->plat_data->host_ops->unregister_host(dsi);
return 0;
}
EXPORT_SYMBOL_GPL(samsung_dsim_remove);
@ -2114,7 +2142,7 @@ MODULE_DEVICE_TABLE(of, samsung_dsim_of_match);
static struct platform_driver samsung_dsim_driver = {
.probe = samsung_dsim_probe,
.remove = samsung_dsim_remove,
.remove_new = samsung_dsim_remove,
.driver = {
.name = "samsung-dsim",
.pm = &samsung_dsim_pm_ops,

View File

@ -2005,7 +2005,7 @@ static const struct regmap_config tc_regmap_config = {
.val_bits = 32,
.reg_stride = 4,
.max_register = PLL_DBG,
.cache_type = REGCACHE_RBTREE,
.cache_type = REGCACHE_MAPLE,
.readable_reg = tc_readable_reg,
.volatile_table = &tc_volatile_table,
.writeable_reg = tc_writeable_reg,

View File

@ -100,7 +100,7 @@ static struct regmap_config dlpc_regmap_config = {
.max_register = WR_DSI_PORT_EN,
.writeable_noinc_reg = dlpc_writeable_noinc_reg,
.volatile_table = &dlpc_volatile_table,
.cache_type = REGCACHE_RBTREE,
.cache_type = REGCACHE_MAPLE,
.name = "dlpc3433",
};

View File

@ -233,7 +233,7 @@ static const struct regmap_config sn65dsi83_regmap_config = {
.rd_table = &sn65dsi83_readable_table,
.wr_table = &sn65dsi83_writeable_table,
.volatile_table = &sn65dsi83_volatile_table,
.cache_type = REGCACHE_RBTREE,
.cache_type = REGCACHE_MAPLE,
.max_register = REG_IRQ_STAT,
};

View File

@ -746,8 +746,11 @@ int drm_dp_dpcd_read_phy_link_status(struct drm_dp_aux *aux,
}
EXPORT_SYMBOL(drm_dp_dpcd_read_phy_link_status);
static bool is_edid_digital_input_dp(const struct edid *edid)
static bool is_edid_digital_input_dp(const struct drm_edid *drm_edid)
{
/* FIXME: get rid of drm_edid_raw() */
const struct edid *edid = drm_edid_raw(drm_edid);
return edid && edid->revision >= 4 &&
edid->input & DRM_EDID_INPUT_DIGITAL &&
(edid->input & DRM_EDID_DIGITAL_TYPE_MASK) == DRM_EDID_DIGITAL_TYPE_DP;
@ -779,13 +782,13 @@ EXPORT_SYMBOL(drm_dp_downstream_is_type);
* drm_dp_downstream_is_tmds() - is the downstream facing port TMDS?
* @dpcd: DisplayPort configuration data
* @port_cap: port capabilities
* @edid: EDID
* @drm_edid: EDID
*
* Returns: whether the downstream facing port is TMDS (HDMI/DVI).
*/
bool drm_dp_downstream_is_tmds(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4],
const struct edid *edid)
const struct drm_edid *drm_edid)
{
if (dpcd[DP_DPCD_REV] < 0x11) {
switch (dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_TYPE_MASK) {
@ -798,7 +801,7 @@ bool drm_dp_downstream_is_tmds(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
switch (port_cap[0] & DP_DS_PORT_TYPE_MASK) {
case DP_DS_PORT_TYPE_DP_DUALMODE:
if (is_edid_digital_input_dp(edid))
if (is_edid_digital_input_dp(drm_edid))
return false;
fallthrough;
case DP_DS_PORT_TYPE_DVI:
@ -1036,14 +1039,14 @@ EXPORT_SYMBOL(drm_dp_downstream_max_dotclock);
* drm_dp_downstream_max_tmds_clock() - extract downstream facing port max TMDS clock
* @dpcd: DisplayPort configuration data
* @port_cap: port capabilities
* @edid: EDID
* @drm_edid: EDID
*
* Returns: HDMI/DVI downstream facing port max TMDS clock in kHz on success,
* or 0 if max TMDS clock not defined
*/
int drm_dp_downstream_max_tmds_clock(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4],
const struct edid *edid)
const struct drm_edid *drm_edid)
{
if (!drm_dp_is_branch(dpcd))
return 0;
@ -1059,7 +1062,7 @@ int drm_dp_downstream_max_tmds_clock(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
switch (port_cap[0] & DP_DS_PORT_TYPE_MASK) {
case DP_DS_PORT_TYPE_DP_DUALMODE:
if (is_edid_digital_input_dp(edid))
if (is_edid_digital_input_dp(drm_edid))
return 0;
/*
* It's left up to the driver to check the
@ -1101,14 +1104,14 @@ EXPORT_SYMBOL(drm_dp_downstream_max_tmds_clock);
* drm_dp_downstream_min_tmds_clock() - extract downstream facing port min TMDS clock
* @dpcd: DisplayPort configuration data
* @port_cap: port capabilities
* @edid: EDID
* @drm_edid: EDID
*
* Returns: HDMI/DVI downstream facing port min TMDS clock in kHz on success,
* or 0 if max TMDS clock not defined
*/
int drm_dp_downstream_min_tmds_clock(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4],
const struct edid *edid)
const struct drm_edid *drm_edid)
{
if (!drm_dp_is_branch(dpcd))
return 0;
@ -1124,7 +1127,7 @@ int drm_dp_downstream_min_tmds_clock(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
switch (port_cap[0] & DP_DS_PORT_TYPE_MASK) {
case DP_DS_PORT_TYPE_DP_DUALMODE:
if (is_edid_digital_input_dp(edid))
if (is_edid_digital_input_dp(drm_edid))
return 0;
fallthrough;
case DP_DS_PORT_TYPE_DVI:
@ -1145,13 +1148,13 @@ EXPORT_SYMBOL(drm_dp_downstream_min_tmds_clock);
* bits per component
* @dpcd: DisplayPort configuration data
* @port_cap: downstream facing port capabilities
* @edid: EDID
* @drm_edid: EDID
*
* Returns: Max bpc on success or 0 if max bpc not defined
*/
int drm_dp_downstream_max_bpc(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4],
const struct edid *edid)
const struct drm_edid *drm_edid)
{
if (!drm_dp_is_branch(dpcd))
return 0;
@ -1169,7 +1172,7 @@ int drm_dp_downstream_max_bpc(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
case DP_DS_PORT_TYPE_DP:
return 0;
case DP_DS_PORT_TYPE_DP_DUALMODE:
if (is_edid_digital_input_dp(edid))
if (is_edid_digital_input_dp(drm_edid))
return 0;
fallthrough;
case DP_DS_PORT_TYPE_HDMI:
@ -1362,14 +1365,14 @@ EXPORT_SYMBOL(drm_dp_downstream_id);
* @m: pointer for debugfs file
* @dpcd: DisplayPort configuration data
* @port_cap: port capabilities
* @edid: EDID
* @drm_edid: EDID
* @aux: DisplayPort AUX channel
*
*/
void drm_dp_downstream_debug(struct seq_file *m,
const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4],
const struct edid *edid,
const struct drm_edid *drm_edid,
struct drm_dp_aux *aux)
{
bool detailed_cap_info = dpcd[DP_DOWNSTREAMPORT_PRESENT] &
@ -1432,15 +1435,15 @@ void drm_dp_downstream_debug(struct seq_file *m,
if (clk > 0)
seq_printf(m, "\t\tMax dot clock: %d kHz\n", clk);
clk = drm_dp_downstream_max_tmds_clock(dpcd, port_cap, edid);
clk = drm_dp_downstream_max_tmds_clock(dpcd, port_cap, drm_edid);
if (clk > 0)
seq_printf(m, "\t\tMax TMDS clock: %d kHz\n", clk);
clk = drm_dp_downstream_min_tmds_clock(dpcd, port_cap, edid);
clk = drm_dp_downstream_min_tmds_clock(dpcd, port_cap, drm_edid);
if (clk > 0)
seq_printf(m, "\t\tMin TMDS clock: %d kHz\n", clk);
bpc = drm_dp_downstream_max_bpc(dpcd, port_cap, edid);
bpc = drm_dp_downstream_max_bpc(dpcd, port_cap, drm_edid);
if (bpc > 0)
seq_printf(m, "\t\tMax bpc: %d\n", bpc);

View File

@ -5,6 +5,8 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/property.h>
#include <linux/slab.h>
#include <drm/drm_atomic_state_helper.h>
@ -107,27 +109,36 @@ static void drm_bridge_connector_hpd_notify(struct drm_connector *connector,
}
}
static void drm_bridge_connector_hpd_cb(void *cb_data,
enum drm_connector_status status)
static void drm_bridge_connector_handle_hpd(struct drm_bridge_connector *drm_bridge_connector,
enum drm_connector_status status)
{
struct drm_bridge_connector *drm_bridge_connector = cb_data;
struct drm_connector *connector = &drm_bridge_connector->base;
struct drm_device *dev = connector->dev;
enum drm_connector_status old_status;
mutex_lock(&dev->mode_config.mutex);
old_status = connector->status;
connector->status = status;
mutex_unlock(&dev->mode_config.mutex);
if (old_status == status)
return;
drm_bridge_connector_hpd_notify(connector, status);
drm_kms_helper_connector_hotplug_event(connector);
}
static void drm_bridge_connector_hpd_cb(void *cb_data,
enum drm_connector_status status)
{
drm_bridge_connector_handle_hpd(cb_data, status);
}
static void drm_bridge_connector_oob_hotplug_event(struct drm_connector *connector,
enum drm_connector_status status)
{
struct drm_bridge_connector *bridge_connector =
to_drm_bridge_connector(connector);
drm_bridge_connector_handle_hpd(bridge_connector, status);
}
static void drm_bridge_connector_enable_hpd(struct drm_connector *connector)
{
struct drm_bridge_connector *bridge_connector =
@ -196,6 +207,8 @@ static void drm_bridge_connector_destroy(struct drm_connector *connector)
drm_connector_unregister(connector);
drm_connector_cleanup(connector);
fwnode_handle_put(connector->fwnode);
kfree(bridge_connector);
}
@ -221,6 +234,7 @@ static const struct drm_connector_funcs drm_bridge_connector_funcs = {
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
.debugfs_init = drm_bridge_connector_debugfs_init,
.oob_hotplug_event = drm_bridge_connector_oob_hotplug_event,
};
/* -----------------------------------------------------------------------------
@ -238,7 +252,7 @@ static int drm_bridge_connector_get_modes_edid(struct drm_connector *connector,
if (status != connector_status_connected)
goto no_edid;
edid = bridge->funcs->get_edid(bridge, connector);
edid = drm_bridge_get_edid(bridge, connector);
if (!drm_edid_is_valid(edid)) {
kfree(edid);
goto no_edid;
@ -357,6 +371,12 @@ struct drm_connector *drm_bridge_connector_init(struct drm_device *drm,
if (!drm_bridge_get_next_bridge(bridge))
connector_type = bridge->type;
#ifdef CONFIG_OF
if (!drm_bridge_get_next_bridge(bridge) &&
bridge->of_node)
connector->fwnode = fwnode_handle_get(of_fwnode_handle(bridge->of_node));
#endif
if (bridge->ddc)
ddc = bridge->ddc;

View File

@ -3060,6 +3060,7 @@ struct drm_connector *drm_connector_find_by_fwnode(struct fwnode_handle *fwnode)
/**
* drm_connector_oob_hotplug_event - Report out-of-band hotplug event to connector
* @connector_fwnode: fwnode_handle to report the event on
* @status: hot plug detect logical state
*
* On some hardware a hotplug event notification may come from outside the display
* driver / device. An example of this is some USB Type-C setups where the hardware
@ -3069,7 +3070,8 @@ struct drm_connector *drm_connector_find_by_fwnode(struct fwnode_handle *fwnode)
* This function can be used to report these out-of-band events after obtaining
* a drm_connector reference through calling drm_connector_find_by_fwnode().
*/
void drm_connector_oob_hotplug_event(struct fwnode_handle *connector_fwnode)
void drm_connector_oob_hotplug_event(struct fwnode_handle *connector_fwnode,
enum drm_connector_status status)
{
struct drm_connector *connector;
@ -3078,7 +3080,7 @@ void drm_connector_oob_hotplug_event(struct fwnode_handle *connector_fwnode)
return;
if (connector->funcs->oob_hotplug_event)
connector->funcs->oob_hotplug_event(connector);
connector->funcs->oob_hotplug_event(connector, status);
drm_connector_put(connector);
}
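/*
 * Hedged caller sketch (illustrative, not taken from this patch): a USB
 * Type-C altmode driver that detects DisplayPort HPD out of band could
 * forward the logical HPD state through the new parameter like this.
 */
static void typec_dp_report_hpd(struct fwnode_handle *dp_connector_fwnode,
				bool hpd_high)
{
	drm_connector_oob_hotplug_event(dp_connector_fwnode,
					hpd_high ? connector_status_connected
						 : connector_status_disconnected);
}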

View File

@ -964,6 +964,8 @@ void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file)
spin_lock(&file->table_lock);
idr_for_each_entry (&file->object_idr, obj, id) {
enum drm_gem_object_status s = 0;
size_t add_size = (obj->funcs && obj->funcs->rss) ?
obj->funcs->rss(obj) : obj->size;
if (obj->funcs && obj->funcs->status) {
s = obj->funcs->status(obj);
@ -978,7 +980,7 @@ void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file)
}
if (s & DRM_GEM_OBJECT_RESIDENT) {
status.resident += obj->size;
status.resident += add_size;
} else {
/* If already purged or not yet backed by pages, don't
* count it as purgeable:
@ -987,14 +989,14 @@ void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file)
}
if (!dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true))) {
status.active += obj->size;
status.active += add_size;
/* If still active, don't count as purgeable: */
s &= ~DRM_GEM_OBJECT_PURGEABLE;
}
if (s & DRM_GEM_OBJECT_PURGEABLE)
status.purgeable += obj->size;
status.purgeable += add_size;
}
spin_unlock(&file->table_lock);

View File

@ -151,24 +151,6 @@ int drm_mode_addfb_ioctl(struct drm_device *dev,
return drm_mode_addfb(dev, data, file_priv);
}
static int fb_plane_width(int width,
const struct drm_format_info *format, int plane)
{
if (plane == 0)
return width;
return DIV_ROUND_UP(width, format->hsub);
}
static int fb_plane_height(int height,
const struct drm_format_info *format, int plane)
{
if (plane == 0)
return height;
return DIV_ROUND_UP(height, format->vsub);
}
static int framebuffer_check(struct drm_device *dev,
const struct drm_mode_fb_cmd2 *r)
{
@ -196,8 +178,8 @@ static int framebuffer_check(struct drm_device *dev,
info = drm_get_format_info(dev, r);
for (i = 0; i < info->num_planes; i++) {
unsigned int width = fb_plane_width(r->width, info, i);
unsigned int height = fb_plane_height(r->height, info, i);
unsigned int width = drm_format_info_plane_width(info, r->width, i);
unsigned int height = drm_format_info_plane_height(info, r->height, i);
unsigned int block_size = info->char_per_block[i];
u64 min_pitch = drm_format_info_min_pitch(info, i, width);
@ -1136,44 +1118,6 @@ void drm_framebuffer_remove(struct drm_framebuffer *fb)
}
EXPORT_SYMBOL(drm_framebuffer_remove);
/**
* drm_framebuffer_plane_width - width of the plane given the first plane
* @width: width of the first plane
* @fb: the framebuffer
* @plane: plane index
*
* Returns:
* The width of @plane, given that the width of the first plane is @width.
*/
int drm_framebuffer_plane_width(int width,
const struct drm_framebuffer *fb, int plane)
{
if (plane >= fb->format->num_planes)
return 0;
return fb_plane_width(width, fb->format, plane);
}
EXPORT_SYMBOL(drm_framebuffer_plane_width);
/**
* drm_framebuffer_plane_height - height of the plane given the first plane
* @height: height of the first plane
* @fb: the framebuffer
* @plane: plane index
*
* Returns:
* The height of @plane, given that the height of the first plane is @height.
*/
int drm_framebuffer_plane_height(int height,
const struct drm_framebuffer *fb, int plane)
{
if (plane >= fb->format->num_planes)
return 0;
return fb_plane_height(height, fb->format, plane);
}
EXPORT_SYMBOL(drm_framebuffer_plane_height);
void drm_framebuffer_print_info(struct drm_printer *p, unsigned int indent,
const struct drm_framebuffer *fb)
{
@ -1189,8 +1133,8 @@ void drm_framebuffer_print_info(struct drm_printer *p, unsigned int indent,
for (i = 0; i < fb->format->num_planes; i++) {
drm_printf_indent(p, indent + 1, "size[%u]=%dx%d\n", i,
drm_framebuffer_plane_width(fb->width, fb, i),
drm_framebuffer_plane_height(fb->height, fb, i));
drm_format_info_plane_width(fb->format, fb->width, i),
drm_format_info_plane_height(fb->format, fb->height, i));
drm_printf_indent(p, indent + 1, "pitch[%u]=%u\n", i, fb->pitches[i]);
drm_printf_indent(p, indent + 1, "offset[%u]=%u\n", i, fb->offsets[i]);
drm_printf_indent(p, indent + 1, "obj[%u]:%s\n", i,

View File

@ -73,6 +73,9 @@ void drm_vblank_cancel_pending_works(struct drm_vblank_crtc *vblank)
assert_spin_locked(&vblank->dev->event_lock);
drm_WARN_ONCE(vblank->dev, !list_empty(&vblank->pending_work),
"Cancelling pending vblank works!\n");
list_for_each_entry_safe(work, next, &vblank->pending_work, node) {
list_del_init(&work->node);
drm_vblank_put(vblank->dev, vblank->pipe);

View File

@ -181,7 +181,7 @@ MODULE_DEVICE_TABLE(of, exynos_dsi_of_match);
struct platform_driver dsi_driver = {
.probe = samsung_dsim_probe,
.remove = samsung_dsim_remove,
.remove_new = samsung_dsim_remove,
.driver = {
.name = "exynos-dsi",
.owner = THIS_MODULE,

View File

@ -141,7 +141,7 @@ struct gma_i2c_chan *oaktrail_lvds_i2c_init(struct drm_device *dev)
chan->drm_dev = dev;
chan->reg = dev_priv->lpc_gpio_base;
strncpy(chan->base.name, "gma500 LPC", I2C_NAME_SIZE - 1);
strscpy(chan->base.name, "gma500 LPC", sizeof(chan->base.name));
chan->base.owner = THIS_MODULE;
chan->base.algo_data = &chan->algo;
chan->base.dev.parent = dev->dev;

View File

@ -175,6 +175,9 @@ struct intel_hotplug {
/* Whether or not to count short HPD IRQs in HPD storms */
u8 hpd_short_storm_enabled;
/* Last state reported by oob_hotplug_event for each encoder */
unsigned long oob_hotplug_last_state;
/*
* if we get a HPD irq from DP and a HPD irq from non-DP
* the non-DP HPD could block the workqueue on a mode config

View File

@ -237,14 +237,13 @@ static void intel_dp_info(struct seq_file *m, struct intel_connector *connector)
{
struct intel_encoder *intel_encoder = intel_attached_encoder(connector);
struct intel_dp *intel_dp = enc_to_intel_dp(intel_encoder);
const struct edid *edid = drm_edid_raw(connector->detect_edid);
seq_printf(m, "\tDPCD rev: %x\n", intel_dp->dpcd[DP_DPCD_REV]);
seq_printf(m, "\taudio support: %s\n",
str_yes_no(connector->base.display_info.has_audio));
drm_dp_downstream_debug(m, intel_dp->dpcd, intel_dp->downstream_ports,
edid, &intel_dp->aux);
connector->detect_edid, &intel_dp->aux);
}
static void intel_dp_mst_info(struct seq_file *m,

View File

@ -5207,14 +5207,10 @@ intel_dp_update_dfp(struct intel_dp *intel_dp,
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct intel_connector *connector = intel_dp->attached_connector;
const struct edid *edid;
/* FIXME: Get rid of drm_edid_raw() */
edid = drm_edid_raw(drm_edid);
intel_dp->dfp.max_bpc =
drm_dp_downstream_max_bpc(intel_dp->dpcd,
intel_dp->downstream_ports, edid);
intel_dp->downstream_ports, drm_edid);
intel_dp->dfp.max_dotclock =
drm_dp_downstream_max_dotclock(intel_dp->dpcd,
@ -5223,11 +5219,11 @@ intel_dp_update_dfp(struct intel_dp *intel_dp,
intel_dp->dfp.min_tmds_clock =
drm_dp_downstream_min_tmds_clock(intel_dp->dpcd,
intel_dp->downstream_ports,
edid);
drm_edid);
intel_dp->dfp.max_tmds_clock =
drm_dp_downstream_max_tmds_clock(intel_dp->dpcd,
intel_dp->downstream_ports,
edid);
drm_edid);
intel_dp->dfp.pcon_max_frl_bw =
drm_dp_get_pcon_max_frl_bw(intel_dp->dpcd,
@ -5727,15 +5723,26 @@ static int intel_dp_connector_atomic_check(struct drm_connector *conn,
return intel_modeset_synced_crtcs(state, conn);
}
static void intel_dp_oob_hotplug_event(struct drm_connector *connector)
static void intel_dp_oob_hotplug_event(struct drm_connector *connector,
enum drm_connector_status hpd_state)
{
struct intel_encoder *encoder = intel_attached_encoder(to_intel_connector(connector));
struct drm_i915_private *i915 = to_i915(connector->dev);
bool hpd_high = hpd_state == connector_status_connected;
unsigned int hpd_pin = encoder->hpd_pin;
bool need_work = false;
spin_lock_irq(&i915->irq_lock);
i915->display.hotplug.event_bits |= BIT(encoder->hpd_pin);
if (hpd_high != test_bit(hpd_pin, &i915->display.hotplug.oob_hotplug_last_state)) {
i915->display.hotplug.event_bits |= BIT(hpd_pin);
__assign_bit(hpd_pin, &i915->display.hotplug.oob_hotplug_last_state, hpd_high);
need_work = true;
}
spin_unlock_irq(&i915->irq_lock);
queue_delayed_work(i915->unordered_wq, &i915->display.hotplug.hotplug_work, 0);
if (need_work)
queue_delayed_work(i915->unordered_wq, &i915->display.hotplug.hotplug_work, 0);
}
static const struct drm_connector_funcs intel_dp_connector_funcs = {

View File

@ -1117,7 +1117,7 @@ static int intel_fb_offset_to_xy(int *x, int *y,
return -EINVAL;
}
height = drm_framebuffer_plane_height(fb->height, fb, color_plane);
height = drm_format_info_plane_height(fb->format, fb->height, color_plane);
height = ALIGN(height, intel_tile_height(fb, color_plane));
/* Catch potential overflows early */

View File

@ -1924,7 +1924,7 @@ struct perf_stats {
struct perf_series {
struct drm_i915_private *i915;
unsigned int nengines;
struct intel_context *ce[];
struct intel_context *ce[] __counted_by(nengines);
};
static int cmp_u32(const void *A, const void *B)

View File

@ -61,7 +61,7 @@ struct dpu_hw_intr {
void (*cb)(void *arg, int irq_idx);
void *arg;
atomic_t count;
} irq_tbl[];
} irq_tbl[] __counted_by(total_irqs);
};
/**

View File

@ -1560,15 +1560,13 @@ nv50_sor_atomic_disable(struct drm_encoder *encoder, struct drm_atomic_state *st
{
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
struct nv50_head *head = nv50_head(nv_encoder->crtc);
struct nouveau_connector *nv_connector = nv50_outp_get_old_connector(state, nv_encoder);
#ifdef CONFIG_DRM_NOUVEAU_BACKLIGHT
struct nouveau_connector *nv_connector = nv50_outp_get_old_connector(state, nv_encoder);
struct nouveau_drm *drm = nouveau_drm(nv_encoder->base.base.dev);
struct nouveau_backlight *backlight = nv_connector->backlight;
#endif
struct drm_dp_aux *aux = &nv_connector->aux;
int ret;
#ifdef CONFIG_DRM_NOUVEAU_BACKLIGHT
if (backlight && backlight->uses_dpcd) {
ret = drm_edp_backlight_disable(aux, &backlight->edp_info);
if (ret < 0)

View File

@ -82,7 +82,7 @@ struct nvkm_perfdom {
u8 mode;
u32 clk;
u16 signal_nr;
struct nvkm_perfsig signal[];
struct nvkm_perfsig signal[] __counted_by(signal_nr);
};
struct nvkm_funcdom {

View File

@ -516,6 +516,15 @@ config DRM_PANEL_RAYDIUM_RM68200
Say Y here if you want to enable support for Raydium RM68200
720x1280 DSI video mode panel.
config DRM_PANEL_RAYDIUM_RM692E5
tristate "Raydium RM692E5-based DSI panel"
depends on OF
depends on DRM_MIPI_DSI
depends on BACKLIGHT_CLASS_DEVICE
help
Say Y here if you want to enable support for Raydium RM692E5-based
display panels, such as the one found in the Fairphone 5 smartphone.
config DRM_PANEL_RONBO_RB070D30
tristate "Ronbo Electronics RB070D30 panel"
depends on OF

View File

@ -49,6 +49,7 @@ obj-$(CONFIG_DRM_PANEL_PANASONIC_VVX10F034N00) += panel-panasonic-vvx10f034n00.o
obj-$(CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN) += panel-raspberrypi-touchscreen.o
obj-$(CONFIG_DRM_PANEL_RAYDIUM_RM67191) += panel-raydium-rm67191.o
obj-$(CONFIG_DRM_PANEL_RAYDIUM_RM68200) += panel-raydium-rm68200.o
obj-$(CONFIG_DRM_PANEL_RAYDIUM_RM692E5) += panel-raydium-rm692e5.o
obj-$(CONFIG_DRM_PANEL_RONBO_RB070D30) += panel-ronbo-rb070d30.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_ATNA33XC20) += panel-samsung-atna33xc20.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_DB7430) += panel-samsung-db7430.o

View File

@ -267,6 +267,8 @@ static int versatile_panel_get_modes(struct drm_panel *panel,
connector->display_info.bus_flags = vpanel->panel_type->bus_flags;
mode = drm_mode_duplicate(connector->dev, &vpanel->panel_type->mode);
if (!mode)
return -ENOMEM;
drm_mode_set_name(mode);
mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;

View File

@ -325,11 +325,6 @@ static struct regmap_bus ili9322_regmap_bus = {
.val_format_endian_default = REGMAP_ENDIAN_BIG,
};
static bool ili9322_volatile_reg(struct device *dev, unsigned int reg)
{
return false;
}
static bool ili9322_writeable_reg(struct device *dev, unsigned int reg)
{
/* Just register 0 is read-only */
@ -342,8 +337,7 @@ static const struct regmap_config ili9322_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
.max_register = 0x44,
.cache_type = REGCACHE_RBTREE,
.volatile_reg = ili9322_volatile_reg,
.cache_type = REGCACHE_MAPLE,
.writeable_reg = ili9322_writeable_reg,
};

View File

@ -24,6 +24,7 @@ struct ltk050h3146w_cmd {
struct ltk050h3146w;
struct ltk050h3146w_desc {
const unsigned long mode_flags;
const struct drm_display_mode *mode;
int (*init)(struct ltk050h3146w *ctx);
};
@ -243,6 +244,91 @@ struct ltk050h3146w *panel_to_ltk050h3146w(struct drm_panel *panel)
return container_of(panel, struct ltk050h3146w, panel);
}
static int ltk050h3148w_init_sequence(struct ltk050h3146w *ctx)
{
struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev);
int ret;
/*
* Init sequence was supplied by the panel vendor without much
* documentation.
*/
mipi_dsi_dcs_write_seq(dsi, 0xb9, 0xff, 0x83, 0x94);
mipi_dsi_dcs_write_seq(dsi, 0xb1, 0x50, 0x15, 0x75, 0x09, 0x32, 0x44,
0x71, 0x31, 0x55, 0x2f);
mipi_dsi_dcs_write_seq(dsi, 0xba, 0x63, 0x03, 0x68, 0x6b, 0xb2, 0xc0);
mipi_dsi_dcs_write_seq(dsi, 0xd2, 0x88);
mipi_dsi_dcs_write_seq(dsi, 0xb2, 0x00, 0x80, 0x64, 0x10, 0x07);
mipi_dsi_dcs_write_seq(dsi, 0xb4, 0x05, 0x70, 0x05, 0x70, 0x01, 0x70,
0x01, 0x0c, 0x86, 0x75, 0x00, 0x3f, 0x01, 0x74,
0x01, 0x74, 0x01, 0x74, 0x01, 0x0c, 0x86);
mipi_dsi_dcs_write_seq(dsi, 0xd3, 0x00, 0x00, 0x07, 0x07, 0x40, 0x1e,
0x08, 0x00, 0x32, 0x10, 0x08, 0x00, 0x08, 0x54,
0x15, 0x10, 0x05, 0x04, 0x02, 0x12, 0x10, 0x05,
0x07, 0x33, 0x34, 0x0c, 0x0c, 0x37, 0x10, 0x07,
0x17, 0x11, 0x40);
mipi_dsi_dcs_write_seq(dsi, 0xd5, 0x19, 0x19, 0x18, 0x18, 0x1b, 0x1b,
0x1a, 0x1a, 0x04, 0x05, 0x06, 0x07, 0x00, 0x01,
0x02, 0x03, 0x20, 0x21, 0x18, 0x18, 0x22, 0x23,
0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18,
0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18,
0x18, 0x18, 0x18, 0x18, 0x18, 0x18);
mipi_dsi_dcs_write_seq(dsi, 0xd6, 0x18, 0x18, 0x19, 0x19, 0x1b, 0x1b,
0x1a, 0x1a, 0x03, 0x02, 0x01, 0x00, 0x07, 0x06,
0x05, 0x04, 0x23, 0x22, 0x18, 0x18, 0x21, 0x20,
0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18,
0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18,
0x18, 0x18, 0x18, 0x18, 0x18, 0x18);
mipi_dsi_dcs_write_seq(dsi, 0xe0, 0x00, 0x03, 0x09, 0x11, 0x11, 0x14,
0x18, 0x16, 0x2e, 0x3d, 0x4d, 0x4d, 0x58, 0x6c,
0x72, 0x78, 0x88, 0x8b, 0x86, 0xa4, 0xb2, 0x58,
0x55, 0x59, 0x5b, 0x5d, 0x60, 0x64, 0x7f, 0x00,
0x03, 0x09, 0x0f, 0x11, 0x14, 0x18, 0x16, 0x2e,
0x3d, 0x4d, 0x4d, 0x58, 0x6d, 0x73, 0x78, 0x88,
0x8b, 0x87, 0xa5, 0xb2, 0x58, 0x55, 0x58, 0x5b,
0x5d, 0x61, 0x65, 0x7f);
mipi_dsi_dcs_write_seq(dsi, 0xcc, 0x0b);
mipi_dsi_dcs_write_seq(dsi, 0xc0, 0x1f, 0x31);
mipi_dsi_dcs_write_seq(dsi, 0xb6, 0xc4, 0xc4);
mipi_dsi_dcs_write_seq(dsi, 0xbd, 0x01);
mipi_dsi_dcs_write_seq(dsi, 0xb1, 0x00);
mipi_dsi_dcs_write_seq(dsi, 0xbd, 0x00);
mipi_dsi_dcs_write_seq(dsi, 0xc6, 0xef);
mipi_dsi_dcs_write_seq(dsi, 0xd4, 0x02);
mipi_dsi_dcs_write_seq(dsi, 0x11);
mipi_dsi_dcs_write_seq(dsi, 0x29);
ret = mipi_dsi_dcs_set_tear_on(dsi, 1);
if (ret < 0) {
dev_err(ctx->dev, "failed to set tear on: %d\n", ret);
return ret;
}
msleep(60);
return 0;
}
static const struct drm_display_mode ltk050h3148w_mode = {
.hdisplay = 720,
.hsync_start = 720 + 12,
.hsync_end = 720 + 12 + 6,
.htotal = 720 + 12 + 6 + 24,
.vdisplay = 1280,
.vsync_start = 1280 + 9,
.vsync_end = 1280 + 9 + 2,
.vtotal = 1280 + 9 + 2 + 16,
.clock = 59756,
.width_mm = 62,
.height_mm = 110,
};
static const struct ltk050h3146w_desc ltk050h3148w_data = {
.mode = &ltk050h3148w_mode,
.init = ltk050h3148w_init_sequence,
.mode_flags = MIPI_DSI_MODE_VIDEO_SYNC_PULSE,
};
static int ltk050h3146w_init_sequence(struct ltk050h3146w *ctx)
{
struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev);
@ -330,6 +416,8 @@ static const struct drm_display_mode ltk050h3146w_mode = {
static const struct ltk050h3146w_desc ltk050h3146w_data = {
.mode = &ltk050h3146w_mode,
.init = ltk050h3146w_init_sequence,
.mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET,
};
static int ltk050h3146w_a2_select_page(struct ltk050h3146w *ctx, int page)
@ -424,6 +512,8 @@ static const struct drm_display_mode ltk050h3146w_a2_mode = {
static const struct ltk050h3146w_desc ltk050h3146w_a2_data = {
.mode = &ltk050h3146w_a2_mode,
.init = ltk050h3146w_a2_init_sequence,
.mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET,
};
static int ltk050h3146w_unprepare(struct drm_panel *panel)
@ -583,8 +673,7 @@ static int ltk050h3146w_probe(struct mipi_dsi_device *dsi)
dsi->lanes = 4;
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET;
dsi->mode_flags = ctx->panel_desc->mode_flags;
drm_panel_init(&ctx->panel, &dsi->dev, &ltk050h3146w_funcs,
DRM_MODE_CONNECTOR_DSI);
@ -642,6 +731,10 @@ static const struct of_device_id ltk050h3146w_of_match[] = {
.compatible = "leadtek,ltk050h3146w-a2",
.data = &ltk050h3146w_a2_data,
},
{
.compatible = "leadtek,ltk050h3148w",
.data = &ltk050h3148w_data,
},
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, ltk050h3146w_of_match);

View File

@ -388,6 +388,13 @@ static int panel_nv3051d_probe(struct mipi_dsi_device *dsi)
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET;
/*
* The panel in the RG351V is identical to the 353P, except it
* requires MIPI_DSI_CLOCK_NON_CONTINUOUS to operate correctly.
*/
if (of_device_is_compatible(dev->of_node, "anbernic,rg351v-panel"))
dsi->mode_flags |= MIPI_DSI_CLOCK_NON_CONTINUOUS;
drm_panel_init(&ctx->panel, &dsi->dev, &panel_nv3051d_funcs,
DRM_MODE_CONNECTOR_DSI);

View File

@ -0,0 +1,423 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Generated with linux-mdss-dsi-panel-driver-generator from vendor device tree.
* Copyright (c) 2023 Linaro Limited
*/
#include <linux/backlight.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/regulator/consumer.h>
#include <drm/display/drm_dsc.h>
#include <drm/display/drm_dsc_helper.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_modes.h>
#include <drm/drm_panel.h>
struct rm692e5_panel {
struct drm_panel panel;
struct mipi_dsi_device *dsi;
struct drm_dsc_config dsc;
struct regulator_bulk_data supplies[3];
struct gpio_desc *reset_gpio;
bool prepared;
};
static inline struct rm692e5_panel *to_rm692e5_panel(struct drm_panel *panel)
{
return container_of(panel, struct rm692e5_panel, panel);
}
static void rm692e5_reset(struct rm692e5_panel *ctx)
{
gpiod_set_value_cansleep(ctx->reset_gpio, 0);
usleep_range(10000, 11000);
gpiod_set_value_cansleep(ctx->reset_gpio, 1);
usleep_range(5000, 6000);
gpiod_set_value_cansleep(ctx->reset_gpio, 0);
usleep_range(10000, 11000);
}
static int rm692e5_on(struct rm692e5_panel *ctx)
{
struct mipi_dsi_device *dsi = ctx->dsi;
struct device *dev = &dsi->dev;
int ret;
dsi->mode_flags |= MIPI_DSI_MODE_LPM;
mipi_dsi_generic_write_seq(dsi, 0xfe, 0x41);
mipi_dsi_generic_write_seq(dsi, 0xd6, 0x00);
mipi_dsi_generic_write_seq(dsi, 0xfe, 0x16);
mipi_dsi_generic_write_seq(dsi, 0x8a, 0x87);
mipi_dsi_generic_write_seq(dsi, 0xfe, 0x71);
mipi_dsi_generic_write_seq(dsi, 0x82, 0x01);
mipi_dsi_generic_write_seq(dsi, 0xc6, 0x00);
mipi_dsi_generic_write_seq(dsi, 0xc7, 0x2c);
mipi_dsi_generic_write_seq(dsi, 0xc8, 0x64);
mipi_dsi_generic_write_seq(dsi, 0xc9, 0x3c);
mipi_dsi_generic_write_seq(dsi, 0xca, 0x80);
mipi_dsi_generic_write_seq(dsi, 0xcb, 0x02);
mipi_dsi_generic_write_seq(dsi, 0xcc, 0x02);
mipi_dsi_generic_write_seq(dsi, 0xfe, 0x38);
mipi_dsi_generic_write_seq(dsi, 0x18, 0x13);
mipi_dsi_generic_write_seq(dsi, 0xfe, 0xf4);
mipi_dsi_generic_write_seq(dsi, 0x00, 0xff);
mipi_dsi_generic_write_seq(dsi, 0x01, 0xff);
mipi_dsi_generic_write_seq(dsi, 0x02, 0xcf);
mipi_dsi_generic_write_seq(dsi, 0x03, 0xbc);
mipi_dsi_generic_write_seq(dsi, 0x04, 0xb9);
mipi_dsi_generic_write_seq(dsi, 0x05, 0x99);
mipi_dsi_generic_write_seq(dsi, 0x06, 0x02);
mipi_dsi_generic_write_seq(dsi, 0x07, 0x0a);
mipi_dsi_generic_write_seq(dsi, 0x08, 0xe0);
mipi_dsi_generic_write_seq(dsi, 0x09, 0x4c);
mipi_dsi_generic_write_seq(dsi, 0x0a, 0xeb);
mipi_dsi_generic_write_seq(dsi, 0x0b, 0xe8);
mipi_dsi_generic_write_seq(dsi, 0x0c, 0x32);
mipi_dsi_generic_write_seq(dsi, 0x0d, 0x07);
mipi_dsi_generic_write_seq(dsi, 0xfe, 0xf4);
mipi_dsi_generic_write_seq(dsi, 0x0d, 0xc0);
mipi_dsi_generic_write_seq(dsi, 0x0e, 0xff);
mipi_dsi_generic_write_seq(dsi, 0x0f, 0xff);
mipi_dsi_generic_write_seq(dsi, 0x10, 0x33);
mipi_dsi_generic_write_seq(dsi, 0x11, 0x6f);
mipi_dsi_generic_write_seq(dsi, 0x12, 0x6e);
mipi_dsi_generic_write_seq(dsi, 0x13, 0xa6);
mipi_dsi_generic_write_seq(dsi, 0x14, 0x80);
mipi_dsi_generic_write_seq(dsi, 0x15, 0x02);
mipi_dsi_generic_write_seq(dsi, 0x16, 0x38);
mipi_dsi_generic_write_seq(dsi, 0x17, 0xd3);
mipi_dsi_generic_write_seq(dsi, 0x18, 0x3a);
mipi_dsi_generic_write_seq(dsi, 0x19, 0xba);
mipi_dsi_generic_write_seq(dsi, 0x1a, 0xcc);
mipi_dsi_generic_write_seq(dsi, 0x1b, 0x01);
ret = mipi_dsi_dcs_nop(dsi);
if (ret < 0) {
dev_err(dev, "Failed to nop: %d\n", ret);
return ret;
}
msleep(32);
mipi_dsi_generic_write_seq(dsi, 0xfe, 0x38);
mipi_dsi_generic_write_seq(dsi, 0x18, 0x13);
mipi_dsi_generic_write_seq(dsi, 0xfe, 0xd1);
mipi_dsi_generic_write_seq(dsi, 0xd3, 0x00);
mipi_dsi_generic_write_seq(dsi, 0xd0, 0x00);
mipi_dsi_generic_write_seq(dsi, 0xd2, 0x00);
mipi_dsi_generic_write_seq(dsi, 0xd4, 0x00);
mipi_dsi_generic_write_seq(dsi, 0xb4, 0x01);
mipi_dsi_generic_write_seq(dsi, 0xfe, 0xf9);
mipi_dsi_generic_write_seq(dsi, 0x00, 0xaf);
mipi_dsi_generic_write_seq(dsi, 0x1d, 0x37);
mipi_dsi_generic_write_seq(dsi, 0x44, 0x0a, 0x7b);
mipi_dsi_generic_write_seq(dsi, 0xfe, 0x00);
mipi_dsi_generic_write_seq(dsi, 0xfa, 0x01);
mipi_dsi_generic_write_seq(dsi, 0xc2, 0x08);
mipi_dsi_generic_write_seq(dsi, 0x35, 0x00);
mipi_dsi_generic_write_seq(dsi, 0x51, 0x05, 0x42);
ret = mipi_dsi_dcs_exit_sleep_mode(dsi);
if (ret < 0) {
dev_err(dev, "Failed to exit sleep mode: %d\n", ret);
return ret;
}
msleep(100);
ret = mipi_dsi_dcs_set_display_on(dsi);
if (ret < 0) {
dev_err(dev, "Failed to set display on: %d\n", ret);
return ret;
}
return 0;
}
static int rm692e5_disable(struct drm_panel *panel)
{
struct rm692e5_panel *ctx = to_rm692e5_panel(panel);
struct mipi_dsi_device *dsi = ctx->dsi;
struct device *dev = &dsi->dev;
int ret;
dsi->mode_flags &= ~MIPI_DSI_MODE_LPM;
mipi_dsi_generic_write_seq(dsi, 0xfe, 0x00);
ret = mipi_dsi_dcs_set_display_off(dsi);
if (ret < 0) {
dev_err(dev, "Failed to set display off: %d\n", ret);
return ret;
}
ret = mipi_dsi_dcs_enter_sleep_mode(dsi);
if (ret < 0) {
dev_err(dev, "Failed to enter sleep mode: %d\n", ret);
return ret;
}
msleep(100);
return 0;
}
static int rm692e5_prepare(struct drm_panel *panel)
{
struct rm692e5_panel *ctx = to_rm692e5_panel(panel);
struct drm_dsc_picture_parameter_set pps;
struct device *dev = &ctx->dsi->dev;
int ret;
if (ctx->prepared)
return 0;
ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
if (ret < 0) {
dev_err(dev, "Failed to enable regulators: %d\n", ret);
return ret;
}
rm692e5_reset(ctx);
ret = rm692e5_on(ctx);
if (ret < 0) {
dev_err(dev, "Failed to initialize panel: %d\n", ret);
gpiod_set_value_cansleep(ctx->reset_gpio, 1);
regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
return ret;
}
drm_dsc_pps_payload_pack(&pps, &ctx->dsc);
ret = mipi_dsi_picture_parameter_set(ctx->dsi, &pps);
if (ret < 0) {
dev_err(panel->dev, "failed to transmit PPS: %d\n", ret);
return ret;
}
ret = mipi_dsi_compression_mode(ctx->dsi, true);
if (ret < 0) {
dev_err(dev, "failed to enable compression mode: %d\n", ret);
return ret;
}
msleep(28);
mipi_dsi_generic_write_seq(ctx->dsi, 0xfe, 0x40);
/* 0x05 -> 90Hz, 0x00 -> 60Hz */
mipi_dsi_generic_write_seq(ctx->dsi, 0xbd, 0x05);
mipi_dsi_generic_write_seq(ctx->dsi, 0xfe, 0x00);
ctx->prepared = true;
return 0;
}
static int rm692e5_unprepare(struct drm_panel *panel)
{
struct rm692e5_panel *ctx = to_rm692e5_panel(panel);
if (!ctx->prepared)
return 0;
gpiod_set_value_cansleep(ctx->reset_gpio, 1);
regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
ctx->prepared = false;
return 0;
}
static const struct drm_display_mode rm692e5_mode = {
.clock = (1224 + 32 + 8 + 8) * (2700 + 8 + 2 + 8) * 90 / 1000,
.hdisplay = 1224,
.hsync_start = 1224 + 32,
.hsync_end = 1224 + 32 + 8,
.htotal = 1224 + 32 + 8 + 8,
.vdisplay = 2700,
.vsync_start = 2700 + 8,
.vsync_end = 2700 + 8 + 2,
.vtotal = 2700 + 8 + 2 + 8,
.width_mm = 68,
.height_mm = 150,
};
static int rm692e5_get_modes(struct drm_panel *panel,
struct drm_connector *connector)
{
struct drm_display_mode *mode;
mode = drm_mode_duplicate(connector->dev, &rm692e5_mode);
if (!mode)
return -ENOMEM;
drm_mode_set_name(mode);
mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
connector->display_info.width_mm = mode->width_mm;
connector->display_info.height_mm = mode->height_mm;
drm_mode_probed_add(connector, mode);
return 1;
}
static const struct drm_panel_funcs rm692e5_panel_funcs = {
.prepare = rm692e5_prepare,
.unprepare = rm692e5_unprepare,
.disable = rm692e5_disable,
.get_modes = rm692e5_get_modes,
};
static int rm692e5_bl_update_status(struct backlight_device *bl)
{
struct mipi_dsi_device *dsi = bl_get_data(bl);
u16 brightness = backlight_get_brightness(bl);
int ret;
dsi->mode_flags &= ~MIPI_DSI_MODE_LPM;
ret = mipi_dsi_dcs_set_display_brightness_large(dsi, brightness);
if (ret < 0)
return ret;
dsi->mode_flags |= MIPI_DSI_MODE_LPM;
return 0;
}
static int rm692e5_bl_get_brightness(struct backlight_device *bl)
{
struct mipi_dsi_device *dsi = bl_get_data(bl);
u16 brightness;
int ret;
dsi->mode_flags &= ~MIPI_DSI_MODE_LPM;
ret = mipi_dsi_dcs_get_display_brightness_large(dsi, &brightness);
if (ret < 0)
return ret;
dsi->mode_flags |= MIPI_DSI_MODE_LPM;
return brightness;
}
static const struct backlight_ops rm692e5_bl_ops = {
.update_status = rm692e5_bl_update_status,
.get_brightness = rm692e5_bl_get_brightness,
};
static struct backlight_device *
rm692e5_create_backlight(struct mipi_dsi_device *dsi)
{
struct device *dev = &dsi->dev;
const struct backlight_properties props = {
.type = BACKLIGHT_RAW,
.brightness = 4095,
.max_brightness = 4095,
};
return devm_backlight_device_register(dev, dev_name(dev), dev, dsi,
&rm692e5_bl_ops, &props);
}
static int rm692e5_probe(struct mipi_dsi_device *dsi)
{
struct device *dev = &dsi->dev;
struct rm692e5_panel *ctx;
int ret;
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
ctx->supplies[0].supply = "vddio";
ctx->supplies[1].supply = "dvdd";
ctx->supplies[2].supply = "vci";
ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(ctx->supplies),
ctx->supplies);
if (ret < 0)
return dev_err_probe(dev, ret, "Failed to get regulators\n");
ctx->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
if (IS_ERR(ctx->reset_gpio))
return dev_err_probe(dev, PTR_ERR(ctx->reset_gpio),
"Failed to get reset-gpios\n");
ctx->dsi = dsi;
mipi_dsi_set_drvdata(dsi, ctx);
dsi->lanes = 4;
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_NO_EOT_PACKET |
MIPI_DSI_CLOCK_NON_CONTINUOUS;
drm_panel_init(&ctx->panel, dev, &rm692e5_panel_funcs,
DRM_MODE_CONNECTOR_DSI);
ctx->panel.prepare_prev_first = true;
ctx->panel.backlight = rm692e5_create_backlight(dsi);
if (IS_ERR(ctx->panel.backlight))
return dev_err_probe(dev, PTR_ERR(ctx->panel.backlight),
"Failed to create backlight\n");
drm_panel_add(&ctx->panel);
/* This panel only supports DSC; unconditionally enable it */
dsi->dsc = &ctx->dsc;
/* TODO: Pass slice_per_pkt = 2 */
ctx->dsc.dsc_version_major = 1;
ctx->dsc.dsc_version_minor = 1;
ctx->dsc.slice_height = 60;
ctx->dsc.slice_width = 1224;
ctx->dsc.slice_count = 1224 / ctx->dsc.slice_width;
ctx->dsc.bits_per_component = 8;
ctx->dsc.bits_per_pixel = 8 << 4; /* 4 fractional bits */
ctx->dsc.block_pred_enable = true;
ret = mipi_dsi_attach(dsi);
if (ret < 0) {
dev_err(dev, "Failed to attach to DSI host: %d\n", ret);
drm_panel_remove(&ctx->panel);
return ret;
}
return 0;
}
static void rm692e5_remove(struct mipi_dsi_device *dsi)
{
struct rm692e5_panel *ctx = mipi_dsi_get_drvdata(dsi);
int ret;
ret = mipi_dsi_detach(dsi);
if (ret < 0)
dev_err(&dsi->dev, "Failed to detach from DSI host: %d\n", ret);
drm_panel_remove(&ctx->panel);
}
static const struct of_device_id rm692e5_of_match[] = {
{ .compatible = "fairphone,fp5-rm692e5-boe" },
{ }
};
MODULE_DEVICE_TABLE(of, rm692e5_of_match);
static struct mipi_dsi_driver rm692e5_driver = {
.probe = rm692e5_probe,
.remove = rm692e5_remove,
.driver = {
.name = "panel-rm692e5-boe-amoled",
.of_match_table = rm692e5_of_match,
},
};
module_mipi_dsi_driver(rm692e5_driver);
MODULE_DESCRIPTION("DRM driver for rm692e5-equipped DSI panels");
MODULE_LICENSE("GPL");
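One detail worth noting in the probe path above: drm_dsc_config::bits_per_pixel is a fixed-point value with 4 fractional bits (1/16 bpp steps), which is why the driver writes 8 << 4 for an 8 bpp target. A small sketch of the encoding, using a hypothetical helper name:

#include <linux/types.h>

/* Illustrative only: DSC target bpp in x.4 fixed point (1/16 bpp granularity). */
static u16 example_dsc_bits_per_pixel(unsigned int integer_bpp, unsigned int sixteenths)
{
	return (integer_bpp << 4) | (sixteenths & 0xf);
}

/*
 * example_dsc_bits_per_pixel(8, 0)  == 0x80 - the 8 bpp value used by this panel
 * example_dsc_bits_per_pixel(10, 8) == 0xa8 - 10.5 bpp
 */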

View File

@ -40,6 +40,7 @@
#include <drm/drm_edid.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_panel.h>
#include <drm/drm_of.h>
/**
* struct panel_desc - Describes a simple panel.
@ -549,6 +550,51 @@ static void panel_simple_parse_panel_timing_node(struct device *dev,
dev_err(dev, "Reject override mode: No display_timing found\n");
}
static int panel_simple_override_nondefault_lvds_datamapping(struct device *dev,
struct panel_simple *panel)
{
int ret, bpc;
ret = drm_of_lvds_get_data_mapping(dev->of_node);
if (ret < 0) {
if (ret == -EINVAL)
dev_warn(dev, "Ignore invalid data-mapping property\n");
/*
* Ignore non-existing or malformatted property, fallback to
* default data-mapping, and return 0.
*/
return 0;
}
switch (ret) {
default:
WARN_ON(1);
fallthrough;
case MEDIA_BUS_FMT_RGB888_1X7X4_SPWG:
fallthrough;
case MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA:
bpc = 8;
break;
case MEDIA_BUS_FMT_RGB666_1X7X3_SPWG:
bpc = 6;
}
if (panel->desc->bpc != bpc || panel->desc->bus_format != ret) {
struct panel_desc *override_desc;
override_desc = devm_kmemdup(dev, panel->desc, sizeof(*panel->desc), GFP_KERNEL);
if (!override_desc)
return -ENOMEM;
override_desc->bus_format = ret;
override_desc->bpc = bpc;
panel->desc = override_desc;
}
return 0;
}
static int panel_simple_probe(struct device *dev, const struct panel_desc *desc)
{
struct panel_simple *panel;
@ -601,6 +647,13 @@ static int panel_simple_probe(struct device *dev, const struct panel_desc *desc)
panel_simple_parse_panel_timing_node(dev, panel, &dt);
}
if (desc->connector_type == DRM_MODE_CONNECTOR_LVDS) {
/* Optional data-mapping property for overriding bus format */
err = panel_simple_override_nondefault_lvds_datamapping(dev, panel);
if (err)
goto free_ddc;
}
connector_type = desc->connector_type;
/* Catch common mistakes for panels. */
switch (connector_type) {

View File

@ -379,6 +379,8 @@ static int tpg110_get_modes(struct drm_panel *panel,
connector->display_info.bus_flags = tpg->panel_mode->bus_flags;
mode = drm_mode_duplicate(connector->dev, &tpg->panel_mode->mode);
if (!mode)
return -ENOMEM;
drm_mode_set_name(mode);
mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;

View File

@ -12,4 +12,6 @@ panfrost-y := \
panfrost_perfcnt.o \
panfrost_dump.o
panfrost-$(CONFIG_DEBUG_FS) += panfrost_debugfs.o
obj-$(CONFIG_DRM_PANFROST) += panfrost.o

View File

@ -0,0 +1,21 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright 2023 Collabora ltd. */
/* Copyright 2023 Amazon.com, Inc. or its affiliates. */
#include <linux/debugfs.h>
#include <linux/platform_device.h>
#include <drm/drm_debugfs.h>
#include <drm/drm_file.h>
#include <drm/panfrost_drm.h>
#include "panfrost_device.h"
#include "panfrost_gpu.h"
#include "panfrost_debugfs.h"
void panfrost_debugfs_init(struct drm_minor *minor)
{
struct drm_device *dev = minor->dev;
struct panfrost_device *pfdev = platform_get_drvdata(to_platform_device(dev->dev));
debugfs_create_atomic_t("profile", 0600, minor->debugfs_root, &pfdev->profile_mode);
}

View File

@ -0,0 +1,14 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright 2023 Collabora ltd.
* Copyright 2023 Amazon.com, Inc. or its affiliates.
*/
#ifndef PANFROST_DEBUGFS_H
#define PANFROST_DEBUGFS_H
#ifdef CONFIG_DEBUG_FS
void panfrost_debugfs_init(struct drm_minor *minor);
#endif
#endif /* PANFROST_DEBUGFS_H */

View File

@ -58,6 +58,7 @@ static int panfrost_devfreq_get_dev_status(struct device *dev,
spin_lock_irqsave(&pfdevfreq->lock, irqflags);
panfrost_devfreq_update_utilization(pfdevfreq);
pfdevfreq->current_frequency = status->current_frequency;
status->total_time = ktime_to_ns(ktime_add(pfdevfreq->busy_time,
pfdevfreq->idle_time));
@ -117,6 +118,7 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
struct devfreq *devfreq;
struct thermal_cooling_device *cooling;
struct panfrost_devfreq *pfdevfreq = &pfdev->pfdevfreq;
unsigned long freq = ULONG_MAX;
if (pfdev->comp->num_supplies > 1) {
/*
@ -172,6 +174,12 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
return ret;
}
/* Find the fastest defined rate */
opp = dev_pm_opp_find_freq_floor(dev, &freq);
if (IS_ERR(opp))
return PTR_ERR(opp);
pfdevfreq->fast_rate = freq;
dev_pm_opp_put(opp);
/*

View File

@ -19,6 +19,9 @@ struct panfrost_devfreq {
struct devfreq_simple_ondemand_data gov_data;
bool opp_of_table_added;
unsigned long current_frequency;
unsigned long fast_rate;
ktime_t busy_time;
ktime_t idle_time;
ktime_t time_last_update;

View File

@ -207,6 +207,8 @@ int panfrost_device_init(struct panfrost_device *pfdev)
spin_lock_init(&pfdev->as_lock);
spin_lock_init(&pfdev->cycle_counter.lock);
err = panfrost_clk_init(pfdev);
if (err) {
dev_err(pfdev->dev, "clk init failed %d\n", err);

View File

@ -107,6 +107,7 @@ struct panfrost_device {
struct list_head scheduled_jobs;
struct panfrost_perfcnt *perfcnt;
atomic_t profile_mode;
struct mutex sched_lock;
@ -121,6 +122,11 @@ struct panfrost_device {
struct shrinker shrinker;
struct panfrost_devfreq pfdevfreq;
struct {
atomic_t use_count;
spinlock_t lock;
} cycle_counter;
};
struct panfrost_mmu {
@ -135,12 +141,19 @@ struct panfrost_mmu {
struct list_head list;
};
struct panfrost_engine_usage {
unsigned long long elapsed_ns[NUM_JOB_SLOTS];
unsigned long long cycles[NUM_JOB_SLOTS];
};
struct panfrost_file_priv {
struct panfrost_device *pfdev;
struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
struct panfrost_mmu *mmu;
struct panfrost_engine_usage engine_usage;
};
static inline struct panfrost_device *to_panfrost_device(struct drm_device *ddev)

View File

@ -20,6 +20,7 @@
#include "panfrost_job.h"
#include "panfrost_gpu.h"
#include "panfrost_perfcnt.h"
#include "panfrost_debugfs.h"
static bool unstable_ioctls;
module_param_unsafe(unstable_ioctls, bool, 0600);
@ -267,6 +268,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
job->requirements = args->requirements;
job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
job->mmu = file_priv->mmu;
job->engine_usage = &file_priv->engine_usage;
slot = panfrost_job_get_slot(job);
@ -523,7 +525,58 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {
PANFROST_IOCTL(MADVISE, madvise, DRM_RENDER_ALLOW),
};
DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
static void panfrost_gpu_show_fdinfo(struct panfrost_device *pfdev,
struct panfrost_file_priv *panfrost_priv,
struct drm_printer *p)
{
int i;
/*
* IMPORTANT NOTE: drm-cycles and drm-engine measurements are not
* accurate, as they only provide a rough estimation of the number of
* GPU cycles and CPU time spent in a given context. This is due to two
* different factors:
* - Firstly, we must consider the time the CPU and then the kernel
* takes to process the GPU interrupt, which means additional time and
* GPU cycles will be added in excess to the real figure.
* - Secondly, the pipelining done by the Job Manager (2 job slots per
* engine) implies there is no way to know exactly how much time each
* job spent on the GPU.
*/
static const char * const engine_names[] = {
"fragment", "vertex-tiler", "compute-only"
};
BUILD_BUG_ON(ARRAY_SIZE(engine_names) != NUM_JOB_SLOTS);
for (i = 0; i < NUM_JOB_SLOTS - 1; i++) {
drm_printf(p, "drm-engine-%s:\t%llu ns\n",
engine_names[i], panfrost_priv->engine_usage.elapsed_ns[i]);
drm_printf(p, "drm-cycles-%s:\t%llu\n",
engine_names[i], panfrost_priv->engine_usage.cycles[i]);
drm_printf(p, "drm-maxfreq-%s:\t%lu Hz\n",
engine_names[i], pfdev->pfdevfreq.fast_rate);
drm_printf(p, "drm-curfreq-%s:\t%lu Hz\n",
engine_names[i], pfdev->pfdevfreq.current_frequency);
}
}
static void panfrost_show_fdinfo(struct drm_printer *p, struct drm_file *file)
{
struct drm_device *dev = file->minor->dev;
struct panfrost_device *pfdev = dev->dev_private;
panfrost_gpu_show_fdinfo(pfdev, file->driver_priv, p);
drm_show_memory_stats(p, file);
}
static const struct file_operations panfrost_drm_driver_fops = {
.owner = THIS_MODULE,
DRM_GEM_FOPS,
.show_fdinfo = drm_show_fdinfo,
};
/*
* Panfrost driver version:
@ -535,6 +588,7 @@ static const struct drm_driver panfrost_drm_driver = {
.driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ,
.open = panfrost_open,
.postclose = panfrost_postclose,
.show_fdinfo = panfrost_show_fdinfo,
.ioctls = panfrost_drm_driver_ioctls,
.num_ioctls = ARRAY_SIZE(panfrost_drm_driver_ioctls),
.fops = &panfrost_drm_driver_fops,
@ -546,6 +600,10 @@ static const struct drm_driver panfrost_drm_driver = {
.gem_create_object = panfrost_gem_create_object,
.gem_prime_import_sg_table = panfrost_gem_prime_import_sg_table,
#ifdef CONFIG_DEBUG_FS
.debugfs_init = panfrost_debugfs_init,
#endif
};
static int panfrost_probe(struct platform_device *pdev)

View File

@ -195,6 +195,34 @@ static int panfrost_gem_pin(struct drm_gem_object *obj)
return drm_gem_shmem_pin(&bo->base);
}
static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj)
{
struct panfrost_gem_object *bo = to_panfrost_bo(obj);
enum drm_gem_object_status res = 0;
if (bo->base.pages)
res |= DRM_GEM_OBJECT_RESIDENT;
if (bo->base.madv == PANFROST_MADV_DONTNEED)
res |= DRM_GEM_OBJECT_PURGEABLE;
return res;
}
static size_t panfrost_gem_rss(struct drm_gem_object *obj)
{
struct panfrost_gem_object *bo = to_panfrost_bo(obj);
if (bo->is_heap) {
return bo->heap_rss_size;
} else if (bo->base.pages) {
WARN_ON(bo->heap_rss_size);
return bo->base.base.size;
}
return 0;
}
static const struct drm_gem_object_funcs panfrost_gem_funcs = {
.free = panfrost_gem_free_object,
.open = panfrost_gem_open,
@ -206,6 +234,8 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
.vmap = drm_gem_shmem_object_vmap,
.vunmap = drm_gem_shmem_object_vunmap,
.mmap = drm_gem_shmem_object_mmap,
.status = panfrost_gem_status,
.rss = panfrost_gem_rss,
.vm_ops = &drm_gem_shmem_vm_ops,
};

View File

@ -36,6 +36,11 @@ struct panfrost_gem_object {
*/
atomic_t gpu_usecount;
/*
* Object chunk size currently mapped onto physical memory
*/
size_t heap_rss_size;
bool noexec :1;
bool is_heap :1;
};

View File

@ -73,6 +73,13 @@ int panfrost_gpu_soft_reset(struct panfrost_device *pfdev)
gpu_write(pfdev, GPU_INT_CLEAR, GPU_IRQ_MASK_ALL);
gpu_write(pfdev, GPU_INT_MASK, GPU_IRQ_MASK_ALL);
/*
* All in-flight jobs should have released their cycle
* counter references upon reset, but let us make sure
*/
if (drm_WARN_ON(pfdev->ddev, atomic_read(&pfdev->cycle_counter.use_count) != 0))
atomic_set(&pfdev->cycle_counter.use_count, 0);
return 0;
}
@ -321,6 +328,40 @@ static void panfrost_gpu_init_features(struct panfrost_device *pfdev)
pfdev->features.shader_present, pfdev->features.l2_present);
}
void panfrost_cycle_counter_get(struct panfrost_device *pfdev)
{
if (atomic_inc_not_zero(&pfdev->cycle_counter.use_count))
return;
spin_lock(&pfdev->cycle_counter.lock);
if (atomic_inc_return(&pfdev->cycle_counter.use_count) == 1)
gpu_write(pfdev, GPU_CMD, GPU_CMD_CYCLE_COUNT_START);
spin_unlock(&pfdev->cycle_counter.lock);
}
void panfrost_cycle_counter_put(struct panfrost_device *pfdev)
{
if (atomic_add_unless(&pfdev->cycle_counter.use_count, -1, 1))
return;
spin_lock(&pfdev->cycle_counter.lock);
if (atomic_dec_return(&pfdev->cycle_counter.use_count) == 0)
gpu_write(pfdev, GPU_CMD, GPU_CMD_CYCLE_COUNT_STOP);
spin_unlock(&pfdev->cycle_counter.lock);
}
unsigned long long panfrost_cycle_counter_read(struct panfrost_device *pfdev)
{
u32 hi, lo;
do {
hi = gpu_read(pfdev, GPU_CYCLE_COUNT_HI);
lo = gpu_read(pfdev, GPU_CYCLE_COUNT_LO);
} while (hi != gpu_read(pfdev, GPU_CYCLE_COUNT_HI));
return ((u64)hi << 32) | lo;
}
void panfrost_gpu_power_on(struct panfrost_device *pfdev)
{
int ret;

View File

@ -16,6 +16,10 @@ int panfrost_gpu_soft_reset(struct panfrost_device *pfdev);
void panfrost_gpu_power_on(struct panfrost_device *pfdev);
void panfrost_gpu_power_off(struct panfrost_device *pfdev);
void panfrost_cycle_counter_get(struct panfrost_device *pfdev);
void panfrost_cycle_counter_put(struct panfrost_device *pfdev);
unsigned long long panfrost_cycle_counter_read(struct panfrost_device *pfdev);
void panfrost_gpu_amlogic_quirk(struct panfrost_device *pfdev);
#endif

View File

@ -159,6 +159,16 @@ panfrost_dequeue_job(struct panfrost_device *pfdev, int slot)
struct panfrost_job *job = pfdev->jobs[slot][0];
WARN_ON(!job);
if (job->is_profiled) {
if (job->engine_usage) {
job->engine_usage->elapsed_ns[slot] +=
ktime_to_ns(ktime_sub(ktime_get(), job->start_time));
job->engine_usage->cycles[slot] +=
panfrost_cycle_counter_read(pfdev) - job->start_cycles;
}
panfrost_cycle_counter_put(job->pfdev);
}
pfdev->jobs[slot][0] = pfdev->jobs[slot][1];
pfdev->jobs[slot][1] = NULL;
@ -233,6 +243,13 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
subslot = panfrost_enqueue_job(pfdev, js, job);
/* Don't queue the job if a reset is in progress */
if (!atomic_read(&pfdev->reset.pending)) {
if (atomic_read(&pfdev->profile_mode)) {
panfrost_cycle_counter_get(pfdev);
job->is_profiled = true;
job->start_time = ktime_get();
job->start_cycles = panfrost_cycle_counter_read(pfdev);
}
job_write(pfdev, JS_COMMAND_NEXT(js), JS_COMMAND_START);
dev_dbg(pfdev->dev,
"JS: Submitting atom %p to js[%d][%d] with head=0x%llx AS %d",
@ -660,10 +677,14 @@ panfrost_reset(struct panfrost_device *pfdev,
* stuck jobs. Let's make sure the PM counters stay balanced by
* manually calling pm_runtime_put_noidle() and
* panfrost_devfreq_record_idle() for each stuck job.
* Let's also make sure the cycle counting register's refcnt is
* kept balanced to prevent it from running forever
*/
spin_lock(&pfdev->js->job_lock);
for (i = 0; i < NUM_JOB_SLOTS; i++) {
for (j = 0; j < ARRAY_SIZE(pfdev->jobs[0]) && pfdev->jobs[i][j]; j++) {
if (pfdev->jobs[i][j]->is_profiled)
panfrost_cycle_counter_put(pfdev->jobs[i][j]->pfdev);
pm_runtime_put_noidle(pfdev->dev);
panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
}
@ -926,6 +947,9 @@ void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
}
job_write(pfdev, JS_COMMAND(i), cmd);
/* Jobs can outlive their file context */
job->engine_usage = NULL;
}
}
spin_unlock(&pfdev->js->job_lock);

View File

@ -32,6 +32,11 @@ struct panfrost_job {
/* Fence to be signaled by drm-sched once its done with the job */
struct dma_fence *render_done_fence;
struct panfrost_engine_usage *engine_usage;
bool is_profiled;
ktime_t start_time;
u64 start_cycles;
};
int panfrost_job_init(struct panfrost_device *pfdev);

View File

@ -522,6 +522,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
bomapping->active = true;
bo->heap_rss_size += SZ_2M;
dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);

View File

@ -46,6 +46,8 @@
#define GPU_CMD_SOFT_RESET 0x01
#define GPU_CMD_PERFCNT_CLEAR 0x03
#define GPU_CMD_PERFCNT_SAMPLE 0x04
#define GPU_CMD_CYCLE_COUNT_START 0x05
#define GPU_CMD_CYCLE_COUNT_STOP 0x06
#define GPU_CMD_CLEAN_CACHES 0x07
#define GPU_CMD_CLEAN_INV_CACHES 0x08
#define GPU_STATUS 0x34
@ -73,6 +75,9 @@
#define GPU_PRFCNT_TILER_EN 0x74
#define GPU_PRFCNT_MMU_L2_EN 0x7c
#define GPU_CYCLE_COUNT_LO 0x90
#define GPU_CYCLE_COUNT_HI 0x94
#define GPU_THREAD_MAX_THREADS 0x0A0 /* (RO) Maximum number of threads per core */
#define GPU_THREAD_MAX_WORKGROUP_SIZE 0x0A4 /* (RO) Maximum workgroup size */
#define GPU_THREAD_MAX_BARRIER_SIZE 0x0A8 /* (RO) Maximum threads waiting at a barrier */

View File

@ -1177,6 +1177,7 @@ static int cdn_dp_probe(struct platform_device *pdev)
struct cdn_dp_device *dp;
struct extcon_dev *extcon;
struct phy *phy;
int ret;
int i;
dp = devm_kzalloc(dev, sizeof(*dp), GFP_KERNEL);
@ -1217,9 +1218,19 @@ static int cdn_dp_probe(struct platform_device *pdev)
mutex_init(&dp->lock);
dev_set_drvdata(dev, dp);
cdn_dp_audio_codec_init(dp, dev);
ret = cdn_dp_audio_codec_init(dp, dev);
if (ret)
return ret;
return component_add(dev, &cdn_dp_component_ops);
ret = component_add(dev, &cdn_dp_component_ops);
if (ret)
goto err_audio_deinit;
return 0;
err_audio_deinit:
platform_device_unregister(dp->audio_pdev);
return ret;
}
static void cdn_dp_remove(struct platform_device *pdev)
@ -1250,7 +1261,7 @@ struct platform_driver cdn_dp_driver = {
.driver = {
.name = "cdn-dp",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(cdn_dp_dt_ids),
.of_match_table = cdn_dp_dt_ids,
.pm = &cdn_dp_pm_ops,
},
};

View File

@ -1358,8 +1358,7 @@ static int dw_mipi_dsi_rockchip_probe(struct platform_device *pdev)
if (!dsi)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
dsi->base = devm_ioremap_resource(dev, res);
dsi->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
if (IS_ERR(dsi->base)) {
DRM_DEV_ERROR(dev, "Unable to get dsi registers\n");
return PTR_ERR(dsi->base);

View File

@ -469,8 +469,8 @@ static bool rockchip_vop2_mod_supported(struct drm_plane *plane, u32 format,
return true;
if (!rockchip_afbc(plane, modifier)) {
drm_err(vop2->drm, "Unsupported format modifier 0x%llx\n",
modifier);
drm_dbg_kms(vop2->drm, "Unsupported format modifier 0x%llx\n",
modifier);
return false;
}
@ -2640,7 +2640,7 @@ static const struct regmap_config vop2_regmap_config = {
.max_register = 0x3000,
.name = "vop2",
.volatile_table = &vop2_volatile_table,
.cache_type = REGCACHE_RBTREE,
.cache_type = REGCACHE_MAPLE,
};
static int vop2_bind(struct device *dev, struct device *master, void *data)

View File

@ -752,6 +752,6 @@ struct platform_driver rockchip_lvds_driver = {
.remove_new = rockchip_lvds_remove,
.driver = {
.name = "rockchip-lvds",
.of_match_table = of_match_ptr(rockchip_lvds_dt_ids),
.of_match_table = rockchip_lvds_dt_ids,
},
};

View File

@ -274,6 +274,6 @@ struct platform_driver vop2_platform_driver = {
.remove_new = vop2_remove,
.driver = {
.name = "rockchip-vop2",
.of_match_table = of_match_ptr(vop2_dt_match),
.of_match_table = vop2_dt_match,
},
};

View File

@ -120,9 +120,6 @@ int tegra_drm_unregister_client(struct tegra_drm *tegra,
int host1x_client_iommu_attach(struct host1x_client *client);
void host1x_client_iommu_detach(struct host1x_client *client);
int tegra_drm_init(struct tegra_drm *tegra, struct drm_device *drm);
int tegra_drm_exit(struct tegra_drm *tegra);
void *tegra_drm_alloc(struct tegra_drm *tegra, size_t size, dma_addr_t *iova);
void tegra_drm_free(struct tegra_drm *tegra, size_t size, void *virt,
dma_addr_t iova);

View File

@ -177,18 +177,27 @@ static void tegra_bo_unpin(struct host1x_bo_mapping *map)
static void *tegra_bo_mmap(struct host1x_bo *bo)
{
struct tegra_bo *obj = host1x_to_tegra_bo(bo);
struct iosys_map map;
struct iosys_map map = { 0 };
void *vaddr;
int ret;
if (obj->vaddr) {
if (obj->vaddr)
return obj->vaddr;
} else if (obj->gem.import_attach) {
if (obj->gem.import_attach) {
ret = dma_buf_vmap_unlocked(obj->gem.import_attach->dmabuf, &map);
return ret ? NULL : map.vaddr;
} else {
return vmap(obj->pages, obj->num_pages, VM_MAP,
pgprot_writecombine(PAGE_KERNEL));
if (ret < 0)
return ERR_PTR(ret);
return map.vaddr;
}
vaddr = vmap(obj->pages, obj->num_pages, VM_MAP,
pgprot_writecombine(PAGE_KERNEL));
if (!vaddr)
return ERR_PTR(-ENOMEM);
return vaddr;
}
static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
@ -198,10 +207,11 @@ static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
if (obj->vaddr)
return;
else if (obj->gem.import_attach)
dma_buf_vunmap_unlocked(obj->gem.import_attach->dmabuf, &map);
else
vunmap(addr);
if (obj->gem.import_attach)
return dma_buf_vunmap_unlocked(obj->gem.import_attach->dmabuf, &map);
vunmap(addr);
}
static struct host1x_bo *tegra_bo_get(struct host1x_bo *bo)

View File

@ -1101,7 +1101,7 @@ static int tegra_display_hub_probe(struct platform_device *pdev)
for (i = 0; i < hub->soc->num_wgrps; i++) {
struct tegra_windowgroup *wgrp = &hub->wgrps[i];
char id[8];
char id[16];
snprintf(id, sizeof(id), "wgrp%u", i);
mutex_init(&wgrp->lock);

View File

@ -81,6 +81,16 @@ struct fb_swab_result {
const u32 expected[TEST_BUF_SIZE];
};
struct convert_to_xbgr8888_result {
unsigned int dst_pitch;
const u32 expected[TEST_BUF_SIZE];
};
struct convert_to_abgr8888_result {
unsigned int dst_pitch;
const u32 expected[TEST_BUF_SIZE];
};
struct convert_xrgb8888_case {
const char *name;
unsigned int pitch;
@ -98,6 +108,8 @@ struct convert_xrgb8888_case {
struct convert_to_argb2101010_result argb2101010_result;
struct convert_to_mono_result mono_result;
struct fb_swab_result swab_result;
struct convert_to_xbgr8888_result xbgr8888_result;
struct convert_to_abgr8888_result abgr8888_result;
};
static struct convert_xrgb8888_case convert_xrgb8888_cases[] = {
@ -155,6 +167,14 @@ static struct convert_xrgb8888_case convert_xrgb8888_cases[] = {
.dst_pitch = TEST_USE_DEFAULT_PITCH,
.expected = { 0x0000FF01 },
},
.xbgr8888_result = {
.dst_pitch = TEST_USE_DEFAULT_PITCH,
.expected = { 0x010000FF },
},
.abgr8888_result = {
.dst_pitch = TEST_USE_DEFAULT_PITCH,
.expected = { 0xFF0000FF },
},
},
{
.name = "single_pixel_clip_rectangle",
@ -213,6 +233,14 @@ static struct convert_xrgb8888_case convert_xrgb8888_cases[] = {
.dst_pitch = TEST_USE_DEFAULT_PITCH,
.expected = { 0x0000FF10 },
},
.xbgr8888_result = {
.dst_pitch = TEST_USE_DEFAULT_PITCH,
.expected = { 0x100000FF },
},
.abgr8888_result = {
.dst_pitch = TEST_USE_DEFAULT_PITCH,
.expected = { 0xFF0000FF },
},
},
{
/* Well known colors: White, black, red, green, blue, magenta,
@ -343,6 +371,24 @@ static struct convert_xrgb8888_case convert_xrgb8888_cases[] = {
0x00FFFF77, 0xFFFF0088,
},
},
.xbgr8888_result = {
.dst_pitch = TEST_USE_DEFAULT_PITCH,
.expected = {
0x11FFFFFF, 0x22000000,
0x330000FF, 0x4400FF00,
0x55FF0000, 0x66FF00FF,
0x7700FFFF, 0x88FFFF00,
},
},
.abgr8888_result = {
.dst_pitch = TEST_USE_DEFAULT_PITCH,
.expected = {
0xFFFFFFFF, 0xFF000000,
0xFF0000FF, 0xFF00FF00,
0xFFFF0000, 0xFFFF00FF,
0xFF00FFFF, 0xFFFFFF00,
},
},
},
{
/* Randomly picked colors. Full buffer within the clip area. */
@ -458,6 +504,22 @@ static struct convert_xrgb8888_case convert_xrgb8888_cases[] = {
0x0303A8C2, 0x73F06CD2, 0x9C440EA3, 0x00000000, 0x00000000,
},
},
.xbgr8888_result = {
.dst_pitch = 20,
.expected = {
0xA19C440E, 0xB1054D11, 0xC103F3A8, 0x00000000, 0x00000000,
0xD173F06C, 0xA29C440E, 0xB2054D11, 0x00000000, 0x00000000,
0xC20303A8, 0xD273F06C, 0xA39C440E, 0x00000000, 0x00000000,
},
},
.abgr8888_result = {
.dst_pitch = 20,
.expected = {
0xFF9C440E, 0xFF054D11, 0xFF03F3A8, 0x00000000, 0x00000000,
0xFF73F06C, 0xFF9C440E, 0xFF054D11, 0x00000000, 0x00000000,
0xFF0303A8, 0xFF73F06C, 0xFF9C440E, 0x00000000, 0x00000000,
},
},
},
};
@ -643,6 +705,18 @@ static void drm_test_fb_xrgb8888_to_rgb565(struct kunit *test)
drm_fb_xrgb8888_to_rgb565(&dst, &result->dst_pitch, &src, &fb, &params->clip, true);
buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
KUNIT_EXPECT_MEMEQ(test, buf, result->expected_swab, dst_size);
buf = dst.vaddr;
memset(buf, 0, dst_size);
int blit_result = 0;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_RGB565, &src, &fb, &params->clip);
buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
static void drm_test_fb_xrgb8888_to_xrgb1555(struct kunit *test)
@ -677,6 +751,18 @@ static void drm_test_fb_xrgb8888_to_xrgb1555(struct kunit *test)
drm_fb_xrgb8888_to_xrgb1555(&dst, dst_pitch, &src, &fb, &params->clip);
buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr; /* restore original value of buf */
memset(buf, 0, dst_size);
int blit_result = 0;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB1555, &src, &fb, &params->clip);
buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
static void drm_test_fb_xrgb8888_to_argb1555(struct kunit *test)
@ -711,6 +797,18 @@ static void drm_test_fb_xrgb8888_to_argb1555(struct kunit *test)
drm_fb_xrgb8888_to_argb1555(&dst, dst_pitch, &src, &fb, &params->clip);
buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr; /* restore original value of buf */
memset(buf, 0, dst_size);
int blit_result = 0;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ARGB1555, &src, &fb, &params->clip);
buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
static void drm_test_fb_xrgb8888_to_rgba5551(struct kunit *test)
@ -745,6 +843,18 @@ static void drm_test_fb_xrgb8888_to_rgba5551(struct kunit *test)
drm_fb_xrgb8888_to_rgba5551(&dst, dst_pitch, &src, &fb, &params->clip);
buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr; /* restore original value of buf */
memset(buf, 0, dst_size);
int blit_result = 0;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_RGBA5551, &src, &fb, &params->clip);
buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
static void drm_test_fb_xrgb8888_to_rgb888(struct kunit *test)
@ -782,6 +892,16 @@ static void drm_test_fb_xrgb8888_to_rgb888(struct kunit *test)
drm_fb_xrgb8888_to_rgb888(&dst, dst_pitch, &src, &fb, &params->clip);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr; /* restore original value of buf */
memset(buf, 0, dst_size);
int blit_result = 0;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_RGB888, &src, &fb, &params->clip);
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
static void drm_test_fb_xrgb8888_to_argb8888(struct kunit *test)
@ -816,6 +936,18 @@ static void drm_test_fb_xrgb8888_to_argb8888(struct kunit *test)
drm_fb_xrgb8888_to_argb8888(&dst, dst_pitch, &src, &fb, &params->clip);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr; /* restore original value of buf */
memset(buf, 0, dst_size);
int blit_result = 0;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ARGB8888, &src, &fb, &params->clip);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
static void drm_test_fb_xrgb8888_to_xrgb2101010(struct kunit *test)
@ -850,6 +982,17 @@ static void drm_test_fb_xrgb8888_to_xrgb2101010(struct kunit *test)
drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip);
buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32));
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr; /* restore original value of buf */
memset(buf, 0, dst_size);
int blit_result = 0;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB2101010, &src, &fb,
&params->clip);
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
static void drm_test_fb_xrgb8888_to_argb2101010(struct kunit *test)
@ -884,6 +1027,19 @@ static void drm_test_fb_xrgb8888_to_argb2101010(struct kunit *test)
drm_fb_xrgb8888_to_argb2101010(&dst, dst_pitch, &src, &fb, &params->clip);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr; /* restore original value of buf */
memset(buf, 0, dst_size);
int blit_result = 0;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ARGB2101010, &src, &fb,
&params->clip);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
static void drm_test_fb_xrgb8888_to_mono(struct kunit *test)
@ -951,6 +1107,119 @@ static void drm_test_fb_swab(struct kunit *test)
drm_fb_swab(&dst, dst_pitch, &src, &fb, &params->clip, false);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr; /* restore original value of buf */
memset(buf, 0, dst_size);
int blit_result;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB8888 | DRM_FORMAT_BIG_ENDIAN,
&src, &fb, &params->clip);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr;
memset(buf, 0, dst_size);
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_BGRX8888, &src, &fb, &params->clip);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
buf = dst.vaddr;
memset(buf, 0, dst_size);
struct drm_format_info mock_format = *fb.format;
mock_format.format |= DRM_FORMAT_BIG_ENDIAN;
fb.format = &mock_format;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB8888, &src, &fb, &params->clip);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
static void drm_test_fb_xrgb8888_to_abgr8888(struct kunit *test)
{
const struct convert_xrgb8888_case *params = test->param_value;
const struct convert_to_abgr8888_result *result = &params->abgr8888_result;
size_t dst_size;
u32 *buf = NULL;
__le32 *xrgb8888 = NULL;
struct iosys_map dst, src;
struct drm_framebuffer fb = {
.format = drm_format_info(DRM_FORMAT_XRGB8888),
.pitches = { params->pitch, 0, 0 },
};
dst_size = conversion_buf_size(DRM_FORMAT_XBGR8888, result->dst_pitch, &params->clip, 0);
KUNIT_ASSERT_GT(test, dst_size, 0);
buf = kunit_kzalloc(test, dst_size, GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);
iosys_map_set_vaddr(&dst, buf);
xrgb8888 = cpubuf_to_le32(test, params->xrgb8888, TEST_BUF_SIZE);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888);
iosys_map_set_vaddr(&src, xrgb8888);
const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ?
NULL : &result->dst_pitch;
int blit_result = 0;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ABGR8888, &src, &fb, &params->clip);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
static void drm_test_fb_xrgb8888_to_xbgr8888(struct kunit *test)
{
const struct convert_xrgb8888_case *params = test->param_value;
const struct convert_to_xbgr8888_result *result = &params->xbgr8888_result;
size_t dst_size;
u32 *buf = NULL;
__le32 *xrgb8888 = NULL;
struct iosys_map dst, src;
struct drm_framebuffer fb = {
.format = drm_format_info(DRM_FORMAT_XRGB8888),
.pitches = { params->pitch, 0, 0 },
};
dst_size = conversion_buf_size(DRM_FORMAT_XBGR8888, result->dst_pitch, &params->clip, 0);
KUNIT_ASSERT_GT(test, dst_size, 0);
buf = kunit_kzalloc(test, dst_size, GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);
iosys_map_set_vaddr(&dst, buf);
xrgb8888 = cpubuf_to_le32(test, params->xrgb8888, TEST_BUF_SIZE);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, xrgb8888);
iosys_map_set_vaddr(&src, xrgb8888);
const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ?
NULL : &result->dst_pitch;
int blit_result = 0;
blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XBGR8888, &src, &fb, &params->clip);
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
KUNIT_EXPECT_FALSE(test, blit_result);
KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
}
struct clip_offset_case {
@ -1538,6 +1807,19 @@ static void drm_test_fb_memcpy(struct kunit *test)
drm_fb_memcpy(dst, dst_pitches, src, &fb, &params->clip);
for (size_t i = 0; i < fb.format->num_planes; i++) {
expected[i] = cpubuf_to_le32(test, params->expected[i], TEST_BUF_SIZE);
KUNIT_EXPECT_MEMEQ_MSG(test, buf[i], expected[i], dst_size[i],
"Failed expectation on plane %zu", i);
memset(buf[i], 0, dst_size[i]);
}
int blit_result;
blit_result = drm_fb_blit(dst, dst_pitches, params->format, src, &fb, &params->clip);
KUNIT_EXPECT_FALSE(test, blit_result);
for (size_t i = 0; i < fb.format->num_planes; i++) {
expected[i] = cpubuf_to_le32(test, params->expected[i], TEST_BUF_SIZE);
KUNIT_EXPECT_MEMEQ_MSG(test, buf[i], expected[i], dst_size[i],
@ -1558,6 +1840,8 @@ static struct kunit_case drm_format_helper_test_cases[] = {
KUNIT_CASE_PARAM(drm_test_fb_xrgb8888_to_argb2101010, convert_xrgb8888_gen_params),
KUNIT_CASE_PARAM(drm_test_fb_xrgb8888_to_mono, convert_xrgb8888_gen_params),
KUNIT_CASE_PARAM(drm_test_fb_swab, convert_xrgb8888_gen_params),
KUNIT_CASE_PARAM(drm_test_fb_xrgb8888_to_xbgr8888, convert_xrgb8888_gen_params),
KUNIT_CASE_PARAM(drm_test_fb_xrgb8888_to_abgr8888, convert_xrgb8888_gen_params),
KUNIT_CASE_PARAM(drm_test_fb_clip_offset, clip_offset_gen_params),
KUNIT_CASE_PARAM(drm_test_fb_build_fourcc_list, fb_build_fourcc_list_gen_params),
KUNIT_CASE_PARAM(drm_test_fb_memcpy, fb_memcpy_gen_params),

View File

@ -506,7 +506,7 @@ static void simpledrm_device_detach_genpd(void *res)
return;
for (i = sdev->pwr_dom_count - 1; i >= 0; i--) {
if (!sdev->pwr_dom_links[i])
if (sdev->pwr_dom_links[i])
device_link_del(sdev->pwr_dom_links[i]);
if (!IS_ERR_OR_NULL(sdev->pwr_dom_devs[i]))
dev_pm_domain_detach(sdev->pwr_dom_devs[i], true);

View File

@ -59,7 +59,7 @@ struct v3d_perfmon {
* values can't be reset, but you can fake a reset by
* destroying the perfmon and creating a new one.
*/
u64 values[];
u64 values[] __counted_by(ncounters);
};
struct v3d_dev {

View File

@ -76,7 +76,7 @@ struct vc4_perfmon {
* Note that counter values can't be reset, but you can fake a reset by
* destroying the perfmon and creating a new one.
*/
u64 counters[];
u64 counters[] __counted_by(ncounters);
};
struct vc4_dev {

View File

@ -119,7 +119,7 @@ struct virtio_gpu_object_array {
struct ww_acquire_ctx ticket;
struct list_head next;
u32 nents, total;
struct drm_gem_object *objs[];
struct drm_gem_object *objs[] __counted_by(total);
};
struct virtio_gpu_vbuffer;

View File

@ -77,7 +77,7 @@ struct vmw_surface_offset {
struct vmw_surface_dirty {
struct vmw_surface_cache cache;
u32 num_subres;
SVGA3dBox boxes[];
SVGA3dBox boxes[] __counted_by(num_subres);
};
static void vmw_user_surface_free(struct vmw_resource *res);

View File

@ -27,6 +27,8 @@ int host1x_channel_list_init(struct host1x_channel_list *chlist,
return -ENOMEM;
}
mutex_init(&chlist->lock);
return 0;
}
@ -79,6 +81,25 @@ void host1x_channel_stop(struct host1x_channel *channel)
}
EXPORT_SYMBOL(host1x_channel_stop);
/**
* host1x_channel_stop_all() - disable CDMA on allocated channels
* @host: host1x instance
*
* Stop CDMA on allocated channels
*/
void host1x_channel_stop_all(struct host1x *host)
{
struct host1x_channel_list *chlist = &host->channel_list;
int bit;
mutex_lock(&chlist->lock);
for_each_set_bit(bit, chlist->allocated_channels, host->info->nb_channels)
host1x_channel_stop(&chlist->channels[bit]);
mutex_unlock(&chlist->lock);
}
static void release_channel(struct kref *kref)
{
struct host1x_channel *channel =
@ -104,8 +125,11 @@ static struct host1x_channel *acquire_unused_channel(struct host1x *host)
unsigned int max_channels = host->info->nb_channels;
unsigned int index;
mutex_lock(&chlist->lock);
index = find_first_zero_bit(chlist->allocated_channels, max_channels);
if (index >= max_channels) {
mutex_unlock(&chlist->lock);
dev_err(host->dev, "failed to find free channel\n");
return NULL;
}
@ -114,6 +138,8 @@ static struct host1x_channel *acquire_unused_channel(struct host1x *host)
set_bit(index, chlist->allocated_channels);
mutex_unlock(&chlist->lock);
return &chlist->channels[index];
}

View File

@ -10,6 +10,7 @@
#include <linux/io.h>
#include <linux/kref.h>
#include <linux/mutex.h>
#include "cdma.h"
@ -18,6 +19,8 @@ struct host1x_channel;
struct host1x_channel_list {
struct host1x_channel *channels;
struct mutex lock;
unsigned long *allocated_channels;
};
@ -37,5 +40,6 @@ int host1x_channel_list_init(struct host1x_channel_list *chlist,
void host1x_channel_list_free(struct host1x_channel_list *chlist);
struct host1x_channel *host1x_channel_get_index(struct host1x *host,
unsigned int index);
void host1x_channel_stop_all(struct host1x *host);
#endif

View File

@ -34,10 +34,10 @@ int host1x_memory_context_list_init(struct host1x *host1x)
if (err < 0)
return 0;
cdl->devs = kcalloc(err, sizeof(*cdl->devs), GFP_KERNEL);
cdl->len = err / 4;
cdl->devs = kcalloc(cdl->len, sizeof(*cdl->devs), GFP_KERNEL);
if (!cdl->devs)
return -ENOMEM;
cdl->len = err / 4;
for (i = 0; i < cdl->len; i++) {
ctx = &cdl->devs[i];

View File

@ -488,7 +488,7 @@ static int host1x_get_resets(struct host1x *host)
static int host1x_probe(struct platform_device *pdev)
{
struct host1x *host;
int err;
int err, i;
host = devm_kzalloc(&pdev->dev, sizeof(*host), GFP_KERNEL);
if (!host)
@ -516,9 +516,30 @@ static int host1x_probe(struct platform_device *pdev)
return PTR_ERR(host->regs);
}
host->syncpt_irq = platform_get_irq(pdev, 0);
if (host->syncpt_irq < 0)
return host->syncpt_irq;
for (i = 0; i < ARRAY_SIZE(host->syncpt_irqs); i++) {
char irq_name[] = "syncptX";
sprintf(irq_name, "syncpt%d", i);
err = platform_get_irq_byname_optional(pdev, irq_name);
if (err == -ENXIO)
break;
if (err < 0)
return err;
host->syncpt_irqs[i] = err;
}
host->num_syncpt_irqs = i;
/* Device tree without irq names */
if (i == 0) {
host->syncpt_irqs[0] = platform_get_irq(pdev, 0);
if (host->syncpt_irqs[0] < 0)
return host->syncpt_irqs[0];
host->num_syncpt_irqs = 1;
}
mutex_init(&host->devices_lock);
INIT_LIST_HEAD(&host->devices);
@ -655,6 +676,7 @@ static int __maybe_unused host1x_runtime_suspend(struct device *dev)
struct host1x *host = dev_get_drvdata(dev);
int err;
host1x_channel_stop_all(host);
host1x_intr_stop(host);
host1x_syncpt_save(host);
@ -719,7 +741,7 @@ release_reset:
static const struct dev_pm_ops host1x_pm_ops = {
SET_RUNTIME_PM_OPS(host1x_runtime_suspend, host1x_runtime_resume,
NULL)
/* TODO: add system suspend-resume once driver will be ready for that */
SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
};
static struct platform_driver tegra_host1x_driver = {


@ -124,7 +124,8 @@ struct host1x {
void __iomem *regs;
void __iomem *hv_regs; /* hypervisor region */
void __iomem *common_regs;
int syncpt_irq;
int syncpt_irqs[8];
int num_syncpt_irqs;
struct host1x_syncpt *syncpt;
struct host1x_syncpt_base *bases;
struct device *dev;


@ -13,13 +13,20 @@
#include "../intr.h"
#include "../dev.h"
struct host1x_intr_irq_data {
struct host1x *host;
u32 offset;
};
static irqreturn_t syncpt_thresh_isr(int irq, void *dev_id)
{
struct host1x *host = dev_id;
struct host1x_intr_irq_data *irq_data = dev_id;
struct host1x *host = irq_data->host;
unsigned long reg;
unsigned int i, id;
for (i = 0; i < DIV_ROUND_UP(host->info->nb_pts, 32); i++) {
for (i = irq_data->offset; i < DIV_ROUND_UP(host->info->nb_pts, 32);
i += host->num_syncpt_irqs) {
reg = host1x_sync_readl(host,
HOST1X_SYNC_SYNCPT_THRESH_CPU0_INT_STATUS(i));
@ -67,26 +74,41 @@ static void intr_hw_init(struct host1x *host, u32 cpm)
/*
* Program threshold interrupt destination among 8 lines per VM,
* per syncpoint. For now, just direct all to the first interrupt
* line.
* per syncpoint. For each group of 32 syncpoints (corresponding to one
* interrupt status register), direct to one interrupt line, going
* around in a round robin fashion.
*/
for (id = 0; id < host->info->nb_pts; id++)
host1x_sync_writel(host, 0, HOST1X_SYNC_SYNCPT_INTR_DEST(id));
for (id = 0; id < host->info->nb_pts; id++) {
u32 reg_offset = id / 32;
u32 irq_index = reg_offset % host->num_syncpt_irqs;
host1x_sync_writel(host, irq_index, HOST1X_SYNC_SYNCPT_INTR_DEST(id));
}
#endif
}
static int
host1x_intr_init_host_sync(struct host1x *host, u32 cpm)
{
int err;
int err, i;
struct host1x_intr_irq_data *irq_data;
irq_data = devm_kcalloc(host->dev, host->num_syncpt_irqs, sizeof(irq_data[0]), GFP_KERNEL);
if (!irq_data)
return -ENOMEM;
host1x_hw_intr_disable_all_syncpt_intrs(host);
err = devm_request_irq(host->dev, host->syncpt_irq,
syncpt_thresh_isr, IRQF_SHARED,
"host1x_syncpt", host);
if (err < 0)
return err;
for (i = 0; i < host->num_syncpt_irqs; i++) {
irq_data[i].host = host;
irq_data[i].offset = i;
err = devm_request_irq(host->dev, host->syncpt_irqs[i],
syncpt_thresh_isr, IRQF_SHARED,
"host1x_syncpt", &irq_data[i]);
if (err < 0)
return err;
}
intr_hw_init(host, cpm);
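
The comment above describes the new round-robin distribution of syncpoint threshold interrupts. A standalone sketch of that arithmetic, not from the patch (the counts are hypothetical, and plain printf stands in for the HOST1X_SYNC_SYNCPT_INTR_DEST register writes):

#include <stdio.h>

int main(void)
{
	unsigned int nb_pts = 192;		/* hypothetical syncpoint count */
	unsigned int num_syncpt_irqs = 8;	/* hypothetical number of IRQ lines */
	unsigned int id;

	/* each group of 32 syncpoints shares one status register; the
	 * registers are spread over the available lines round-robin */
	for (id = 0; id < nb_pts; id += 32) {
		unsigned int reg_offset = id / 32;
		unsigned int irq_index = reg_offset % num_syncpt_irqs;

		printf("syncpts %3u-%3u: status reg %u -> irq line %u\n",
		       id, id + 31, reg_offset, irq_index);
	}
	return 0;
}

The ISR side mirrors this: the handler registered for line i starts at status register i (irq_data->offset) and strides by num_syncpt_irqs, so each line only ever scans the registers routed to it.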


@ -153,11 +153,11 @@ static int dp_altmode_status_update(struct dp_altmode *dp)
}
}
} else {
if (dp->hpd != hpd) {
drm_connector_oob_hotplug_event(dp->connector_fwnode);
dp->hpd = hpd;
sysfs_notify(&dp->alt->dev.kobj, "displayport", "hpd");
}
drm_connector_oob_hotplug_event(dp->connector_fwnode,
hpd ? connector_status_connected :
connector_status_disconnected);
dp->hpd = hpd;
sysfs_notify(&dp->alt->dev.kobj, "displayport", "hpd");
}
return ret;
@ -173,7 +173,8 @@ static int dp_altmode_configured(struct dp_altmode *dp)
* configuration is complete to signal HPD.
*/
if (dp->pending_hpd) {
drm_connector_oob_hotplug_event(dp->connector_fwnode);
drm_connector_oob_hotplug_event(dp->connector_fwnode,
connector_status_connected);
sysfs_notify(&dp->alt->dev.kobj, "displayport", "hpd");
dp->pending_hpd = false;
}
@ -618,8 +619,8 @@ void dp_altmode_remove(struct typec_altmode *alt)
cancel_work_sync(&dp->work);
if (dp->connector_fwnode) {
if (dp->hpd)
drm_connector_oob_hotplug_event(dp->connector_fwnode);
drm_connector_oob_hotplug_event(dp->connector_fwnode,
connector_status_disconnected);
fwnode_handle_put(dp->connector_fwnode);
}
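
All three call sites above follow the same pattern with the new connector-status argument. A hedged sketch of a caller, not part of this patch (the helper name is hypothetical; only the two-argument drm_connector_oob_hotplug_event() shown in the hunks is assumed):

#include <drm/drm_connector.h>

/* illustrative wrapper: report the current HPD state instead of a toggle */
static void report_oob_hpd(struct fwnode_handle *connector_fwnode, bool hpd)
{
	drm_connector_oob_hotplug_event(connector_fwnode,
					hpd ? connector_status_connected
					    : connector_status_disconnected);
}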


@ -365,7 +365,8 @@ static int fb_mmap(struct file *file, struct vm_area_struct *vma)
mutex_unlock(&info->mm_lock);
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
fb_pgprotect(file, vma, start);
vma->vm_page_prot = pgprot_framebuffer(vma->vm_page_prot, vma->vm_start,
vma->vm_end, start);
return vm_iomap_memory(vma, start, len);
}


@ -12,14 +12,14 @@
#include <linux/pgtable.h>
struct fb_info;
struct file;
#ifndef fb_pgprotect
#define fb_pgprotect fb_pgprotect
static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
unsigned long off)
#ifndef pgprot_framebuffer
#define pgprot_framebuffer pgprot_framebuffer
static inline pgprot_t pgprot_framebuffer(pgprot_t prot,
unsigned long vm_start, unsigned long vm_end,
unsigned long offset)
{
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
return pgprot_writecombine(prot);
}
#endif
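
The generic fallback above simply applies write-combining. An architecture can provide its own helper instead; a hedged sketch of such an <asm/fb.h> override, not taken from this patch (the non-cached policy is purely illustrative):

#include <linux/pgtable.h>

/* defining the macro suppresses the generic write-combine fallback */
#define pgprot_framebuffer pgprot_framebuffer
static inline pgprot_t pgprot_framebuffer(pgprot_t prot,
					  unsigned long vm_start,
					  unsigned long vm_end,
					  unsigned long offset)
{
	/* illustrative policy only: map the framebuffer uncached */
	return pgprot_noncached(prot);
}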


@ -61,6 +61,8 @@ struct samsung_dsim_driver_data {
unsigned int num_bits_resol;
unsigned int pll_p_offset;
const unsigned int *reg_values;
unsigned int pll_fin_min;
unsigned int pll_fin_max;
u16 m_min;
u16 m_max;
};
@ -88,6 +90,7 @@ struct samsung_dsim {
void __iomem *reg_base;
struct phy *phy;
struct clk **clks;
struct clk *pll_clk;
struct regulator_bulk_data supplies[2];
int irq;
struct gpio_desc *te_gpio;
@ -116,7 +119,7 @@ struct samsung_dsim {
};
extern int samsung_dsim_probe(struct platform_device *pdev);
extern int samsung_dsim_remove(struct platform_device *pdev);
extern void samsung_dsim_remove(struct platform_device *pdev);
extern const struct dev_pm_ops samsung_dsim_pm_ops;
#endif /* __SAMSUNG_DSIM__ */


@ -272,8 +272,8 @@ struct drm_dp_aux_msg {
};
struct cec_adapter;
struct edid;
struct drm_connector;
struct drm_edid;
/**
* struct drm_dp_aux_cec - DisplayPort CEC-Tunneling-over-AUX
@ -507,18 +507,18 @@ bool drm_dp_downstream_is_type(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4], u8 type);
bool drm_dp_downstream_is_tmds(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4],
const struct edid *edid);
const struct drm_edid *drm_edid);
int drm_dp_downstream_max_dotclock(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4]);
int drm_dp_downstream_max_tmds_clock(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4],
const struct edid *edid);
const struct drm_edid *drm_edid);
int drm_dp_downstream_min_tmds_clock(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4],
const struct edid *edid);
const struct drm_edid *drm_edid);
int drm_dp_downstream_max_bpc(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4],
const struct edid *edid);
const struct drm_edid *drm_edid);
bool drm_dp_downstream_420_passthrough(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4]);
bool drm_dp_downstream_444_to_420_conversion(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
@ -530,7 +530,7 @@ int drm_dp_downstream_id(struct drm_dp_aux *aux, char id[6]);
void drm_dp_downstream_debug(struct seq_file *m,
const u8 dpcd[DP_RECEIVER_CAP_SIZE],
const u8 port_cap[4],
const struct edid *edid,
const struct drm_edid *drm_edid,
struct drm_dp_aux *aux);
enum drm_mode_subconnector
drm_dp_subconnector_type(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
