Merge 5.15-rc6 into usb-next

We need the usb fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman 2021-10-18 09:40:52 +02:00
commit c03fb16baf
310 changed files with 3703 additions and 2521 deletions


@ -1226,7 +1226,7 @@ PAGE_SIZE multiple when read back.
Note that all fields in this file are hierarchical and the
file modified event can be generated due to an event down the
hierarchy. For for the local events at the cgroup level see
hierarchy. For the local events at the cgroup level see
memory.events.local.
low
@ -2170,19 +2170,19 @@ existing device files.
Cgroup v2 device controller has no interface files and is implemented
on top of cgroup BPF. To control access to device files, a user may
create bpf programs of the BPF_CGROUP_DEVICE type and attach them
to cgroups. On an attempt to access a device file, corresponding
BPF programs will be executed, and depending on the return value
the attempt will succeed or fail with -EPERM.
create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
device file, corresponding BPF programs will be executed, and depending
on the return value the attempt will succeed or fail with -EPERM.
A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
structure, which describes the device access attempt: access type
(mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM, otherwise
it succeeds.
A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access attempt:
access type (mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM, otherwise it
succeeds.
An example of BPF_CGROUP_DEVICE program may be found in the kernel
source tree in the tools/testing/selftests/bpf/progs/dev_cgroup.c file.
An example BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
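As an illustration, a minimal program of this type might look as follows (a
hedged sketch in the spirit of the dev_cgroup.c selftest, assuming the usual
libbpf conventions; it is not a verbatim copy of the selftest). It allows
access to /dev/null (char device 1:3) and denies everything else:

	// SPDX-License-Identifier: GPL-2.0
	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("cgroup/dev")
	int allow_dev_null(struct bpf_cgroup_dev_ctx *ctx)
	{
		/* The low 16 bits of access_type hold the device type;
		 * the high 16 bits hold the access mask (mknod/read/write).
		 */
		short type = ctx->access_type & 0xFFFF;

		if (type == BPF_DEVCG_DEV_CHAR &&
		    ctx->major == 1 && ctx->minor == 3)
			return 1;	/* allow */
		return 0;		/* deny: the access fails with -EPERM */
	}

	char _license[] SEC("license") = "GPL";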
RDMA


@ -21,6 +21,7 @@ select:
contains:
enum:
- snps,dwmac
- snps,dwmac-3.40a
- snps,dwmac-3.50a
- snps,dwmac-3.610
- snps,dwmac-3.70a
@ -76,6 +77,7 @@ properties:
- rockchip,rk3399-gmac
- rockchip,rv1108-gmac
- snps,dwmac
- snps,dwmac-3.40a
- snps,dwmac-3.50a
- snps,dwmac-3.610
- snps,dwmac-3.70a


@ -171,7 +171,7 @@ examples:
cs-gpios = <&gpio0 13 0>,
<&gpio0 14 0>;
rx-sample-delay-ns = <3>;
spi-flash@1 {
flash@1 {
compatible = "spi-nand";
reg = <1>;
rx-sample-delay-ns = <7>;


@ -4,103 +4,112 @@
NTFS3
=====
Summary and Features
====================
NTFS3 is fully functional NTFS Read-Write driver. The driver works with
NTFS versions up to 3.1, normal/compressed/sparse files
and journal replaying. File system type to use on mount is 'ntfs3'.
NTFS3 is a fully functional NTFS Read-Write driver. The driver works with NTFS
versions up to 3.1. The file system type to use on mount is *ntfs3*.
- This driver implements NTFS read/write support for normal, sparse and
compressed files.
- Supports native journal replaying;
- Supports extended attributes
Predefined extended attributes:
- 'system.ntfs_security' gets/sets security
descriptor (SECURITY_DESCRIPTOR_RELATIVE)
- 'system.ntfs_attrib' gets/sets ntfs file/dir attributes.
Note: applied to empty files, this allows to switch type between
sparse(0x200), compressed(0x800) and normal;
- Supports native journal replaying.
- Supports NFS export of mounted NTFS volumes.
- Supports extended attributes. Predefined extended attributes:
- *system.ntfs_security* gets/sets security
Descriptor: SECURITY_DESCRIPTOR_RELATIVE
- *system.ntfs_attrib* gets/sets ntfs file/dir attributes.
Note: Applied to empty files, this allows switching the type between
sparse(0x200), compressed(0x800) and normal.
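For example (an illustrative invocation; the path is a placeholder), the raw
attribute bits of a file can be inspected with:

	getfattr -e hex -n system.ntfs_attrib /mnt/ntfs/file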
Mount Options
=============
The list below describes mount options supported by NTFS3 driver in addition to
generic ones.
generic ones. Every mount option can also be used with a **no** prefix to
negate it; unless an entry in this table says otherwise, the default is the
variant without **no**.
===============================================================================
.. flat-table::
:widths: 1 5
:fill-cells:
nls=name This option informs the driver how to interpret path
strings and translate them to Unicode and back. If
this option is not set, the default codepage will be
used (CONFIG_NLS_DEFAULT).
Examples:
'nls=utf8'
* - iocharset=name
- This option informs the driver how to interpret path strings and
translate them to Unicode and back. If this option is not set, the
default codepage will be used (CONFIG_NLS_DEFAULT).
uid=
gid=
umask= Controls the default permissions for files/directories created
after the NTFS volume is mounted.
Example: iocharset=utf8
fmask=
dmask= Instead of specifying umask which applies both to
files and directories, fmask applies only to files and
dmask only to directories.
* - uid=
- :rspan:`1`
* - gid=
nohidden Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN)
attribute will not be shown under Linux.
* - umask=
- Controls the default permissions for files/directories created after
the NTFS volume is mounted.
sys_immutable Files with the Windows-specific SYSTEM
(FILE_ATTRIBUTE_SYSTEM) attribute will be marked as system
immutable files.
* - dmask=
- :rspan:`1` Instead of specifying umask which applies both to files and
directories, fmask applies only to files and dmask only to directories.
* - fmask=
discard Enable support of the TRIM command for improved performance
on delete operations, which is recommended for use with the
solid-state drives (SSD).
* - noacsrules
- "No access rules" mount option sets access rights for files/folders to
777 and owner/group to root. This mount option absorbs all other
permissions.
force Forces the driver to mount partitions even if 'dirty' flag
(volume dirty) is set. Not recommended for use.
- Permissions change for files/folders will be reported as successful,
but they will remain 777.
sparse Create new files as "sparse".
- Owner/group change will be reported as successful, but they will stay
as root.
showmeta Use this parameter to show all meta-files (System Files) on
a mounted NTFS partition.
By default, all meta-files are hidden.
* - nohidden
- Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN) attribute
will not be shown under Linux.
prealloc Preallocate space for files excessively when file size is
increasing on writes. Decreases fragmentation in case of
parallel write operations to different files.
* - sys_immutable
- Files with the Windows-specific SYSTEM (FILE_ATTRIBUTE_SYSTEM) attribute
will be marked as system immutable files.
no_acs_rules "No access rules" mount option sets access rights for
files/folders to 777 and owner/group to root. This mount
option absorbs all other permissions:
- permissions change for files/folders will be reported
as successful, but they will remain 777;
- owner/group change will be reported as successful, but
they will stay as root
* - discard
- Enable support of the TRIM command for improved performance on delete
operations, which is recommended for use with the solid-state drives
(SSD).
acl Support POSIX ACLs (Access Control Lists). Effective if
supported by Kernel. Not to be confused with NTFS ACLs.
The option specified as acl enables support for POSIX ACLs.
* - force
- Forces the driver to mount partitions even if volume is marked dirty.
Not recommended for use.
noatime All files and directories will not update their last access
time attribute if a partition is mounted with this parameter.
This option can speed up file system operation.
* - sparse
- Create new files as sparse.
===============================================================================
* - showmeta
- Use this parameter to show all meta-files (System Files) on a mounted
NTFS partition. By default, all meta-files are hidden.
ToDo list
* - prealloc
- Preallocate space for files excessively when file size is increasing on
writes. Decreases fragmentation in case of parallel write operations to
different files.
* - acl
- Support POSIX ACLs (Access Control Lists). Effective if supported by
Kernel. Not to be confused with NTFS ACLs. The option specified as acl
enables support for POSIX ACLs.
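For example, a volume could be mounted with several of the options above
(illustrative only; the device name and IDs are placeholders):

	mount -t ntfs3 -o iocharset=utf8,uid=1000,gid=1000,prealloc /dev/sdb1 /mnt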
Todo list
=========
- Full journaling support (currently journal replaying is supported) over JBD.
- Full journaling support over JBD. Currently journal replaying is supported,
which is not necessarily as effective as JBD would be.
References
==========
https://www.paragon-software.com/home/ntfs-linux-professional/
- Commercial version of the NTFS driver for Linux.
- Commercial version of the NTFS driver for Linux.
https://www.paragon-software.com/home/ntfs-linux-professional/
almaz.alexandrovich@paragon-software.com
- Direct e-mail address for feedback and requests on the NTFS3 implementation.
- Direct e-mail address for feedback and requests on the NTFS3 implementation.
almaz.alexandrovich@paragon-software.com


@ -18,7 +18,7 @@ types can be added after the security issue of corresponding device driver
is clarified or fixed in the future.
Create/Destroy VDUSE devices
------------------------
----------------------------
VDUSE devices are created as follows:


@ -7343,10 +7343,11 @@ F: include/uapi/linux/fpga-dfl.h
FPGA MANAGER FRAMEWORK
M: Moritz Fischer <mdf@kernel.org>
M: Wu Hao <hao.wu@intel.com>
M: Xu Yilun <yilun.xu@intel.com>
R: Tom Rix <trix@redhat.com>
L: linux-fpga@vger.kernel.org
S: Maintained
W: http://www.rocketboards.org
Q: http://patchwork.kernel.org/project/linux-fpga/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga.git
F: Documentation/devicetree/bindings/fpga/
@ -7440,7 +7441,7 @@ FREESCALE IMX / MXC FEC DRIVER
M: Joakim Zhang <qiangqing.zhang@nxp.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/net/fsl-fec.txt
F: Documentation/devicetree/bindings/net/fsl,fec.yaml
F: drivers/net/ethernet/freescale/fec.h
F: drivers/net/ethernet/freescale/fec_main.c
F: drivers/net/ethernet/freescale/fec_ptp.c
@ -9307,7 +9308,7 @@ S: Maintained
F: drivers/platform/x86/intel/atomisp2/led.c
INTEL BIOS SAR INT1092 DRIVER
M: Shravan S <s.shravan@intel.com>
M: Shravan Sudhakar <s.shravan@intel.com>
M: Intel Corporation <linuxwwan@intel.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
@ -9629,7 +9630,7 @@ F: include/uapi/linux/isst_if.h
F: tools/power/x86/intel-speed-select/
INTEL STRATIX10 FIRMWARE DRIVERS
M: Richard Gong <richard.gong@linux.intel.com>
M: Dinh Nguyen <dinguyen@kernel.org>
L: linux-kernel@vger.kernel.org
S: Maintained
F: Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu
@ -10279,7 +10280,6 @@ KERNEL VIRTUAL MACHINE for s390 (KVM/s390)
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Janosch Frank <frankja@linux.ibm.com>
R: David Hildenbrand <david@redhat.com>
R: Cornelia Huck <cohuck@redhat.com>
R: Claudio Imbrenda <imbrenda@linux.ibm.com>
L: kvm@vger.kernel.org
S: Supported
@ -11153,6 +11153,7 @@ S: Maintained
F: Documentation/devicetree/bindings/net/dsa/marvell.txt
F: Documentation/networking/devlink/mv88e6xxx.rst
F: drivers/net/dsa/mv88e6xxx/
F: include/linux/dsa/mv88e6xxx.h
F: include/linux/platform_data/mv88e6xxx.h
MARVELL ARMADA 3700 PHY DRIVERS
@ -16301,6 +16302,7 @@ S390
M: Heiko Carstens <hca@linux.ibm.com>
M: Vasily Gorbik <gor@linux.ibm.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
R: Alexander Gordeev <agordeev@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
@ -16379,7 +16381,6 @@ F: drivers/s390/crypto/vfio_ap_ops.c
F: drivers/s390/crypto/vfio_ap_private.h
S390 VFIO-CCW DRIVER
M: Cornelia Huck <cohuck@redhat.com>
M: Eric Farman <farman@linux.ibm.com>
M: Matthew Rosato <mjrosato@linux.ibm.com>
R: Halil Pasic <pasic@linux.ibm.com>
@ -17986,7 +17987,7 @@ F: net/switchdev/
SY8106A REGULATOR DRIVER
M: Icenowy Zheng <icenowy@aosc.io>
S: Maintained
F: Documentation/devicetree/bindings/regulator/sy8106a-regulator.txt
F: Documentation/devicetree/bindings/regulator/silergy,sy8106a.yaml
F: drivers/regulator/sy8106a-regulator.c
SYNC FILE FRAMEWORK


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 15
SUBLEVEL = 0
EXTRAVERSION = -rc5
EXTRAVERSION = -rc6
NAME = Opossums on Parade
# *DOCUMENTATION*


@ -26,11 +26,6 @@ extern char empty_zero_page[PAGE_SIZE];
extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
/* Macro to mark a page protection as uncacheable */
#define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE))
extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
/* to cope with aliasing VIPT cache */
#define HAVE_ARCH_UNMAPPED_AREA


@ -40,8 +40,8 @@
regulator-always-on;
regulator-settling-time-us = <5000>;
gpios = <&expgpio 4 GPIO_ACTIVE_HIGH>;
states = <1800000 0x1
3300000 0x0>;
states = <1800000 0x1>,
<3300000 0x0>;
status = "okay";
};
@ -217,15 +217,16 @@
};
&pcie0 {
pci@1,0 {
pci@0,0 {
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
ranges;
reg = <0 0 0 0 0>;
usb@1,0 {
reg = <0x10000 0 0 0 0>;
usb@0,0 {
reg = <0 0 0 0 0>;
resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
};
};


@ -300,6 +300,14 @@
status = "disabled";
};
vec: vec@7ec13000 {
compatible = "brcm,bcm2711-vec";
reg = <0x7ec13000 0x1000>;
clocks = <&clocks BCM2835_CLOCK_VEC>;
interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
dvp: clock@7ef00000 {
compatible = "brcm,brcm2711-dvp";
reg = <0x7ef00000 0x10>;
@ -532,8 +540,8 @@
compatible = "brcm,genet-mdio-v5";
reg = <0xe14 0x8>;
reg-names = "mdio";
#address-cells = <0x0>;
#size-cells = <0x1>;
#address-cells = <0x1>;
#size-cells = <0x0>;
};
};
};


@ -106,6 +106,14 @@
status = "okay";
};
vec: vec@7e806000 {
compatible = "brcm,bcm2835-vec";
reg = <0x7e806000 0x1000>;
clocks = <&clocks BCM2835_CLOCK_VEC>;
interrupts = <2 27>;
status = "disabled";
};
pixelvalve@7e807000 {
compatible = "brcm,bcm2835-pixelvalve2";
reg = <0x7e807000 0x100>;


@ -464,14 +464,6 @@
status = "disabled";
};
vec: vec@7e806000 {
compatible = "brcm,bcm2835-vec";
reg = <0x7e806000 0x1000>;
clocks = <&clocks BCM2835_CLOCK_VEC>;
interrupts = <2 27>;
status = "disabled";
};
usb: usb@7e980000 {
compatible = "brcm,bcm2835-usb";
reg = <0x7e980000 0x10000>;


@ -47,7 +47,7 @@
};
gmac: eth@e0800000 {
compatible = "st,spear600-gmac";
compatible = "snps,dwmac-3.40a";
reg = <0xe0800000 0x8000>;
interrupts = <23 22>;
interrupt-names = "macirq", "eth_wake_irq";


@ -197,7 +197,6 @@ CONFIG_PCI_EPF_TEST=m
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_OMAP_OCP2SCP=y
CONFIG_SIMPLE_PM_BUS=y
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_BLOCK=y


@ -46,7 +46,6 @@ CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_DMA_CMA=y
CONFIG_CMA_SIZE_MBYTES=64
CONFIG_SIMPLE_PM_BUS=y
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_BLOCK=y


@ -40,7 +40,6 @@ CONFIG_PCI_RCAR_GEN2=y
CONFIG_PCIE_RCAR_HOST=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_SIMPLE_PM_BUS=y
CONFIG_MTD=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_CFI=y


@ -9,6 +9,7 @@
#include <linux/iopoll.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/smp.h>
#include <asm/smp_plat.h>
@ -81,11 +82,6 @@ static const struct reset_control_ops imx_src_ops = {
.reset = imx_src_reset_module,
};
static struct reset_controller_dev imx_reset_controller = {
.ops = &imx_src_ops,
.nr_resets = ARRAY_SIZE(sw_reset_bits),
};
static void imx_gpcv2_set_m_core_pgc(bool enable, u32 offset)
{
writel_relaxed(enable, gpc_base + offset);
@ -177,10 +173,6 @@ void __init imx_src_init(void)
src_base = of_iomap(np, 0);
WARN_ON(!src_base);
imx_reset_controller.of_node = np;
if (IS_ENABLED(CONFIG_RESET_CONTROLLER))
reset_controller_register(&imx_reset_controller);
/*
* force warm reset sources to generate cold reset
* for a more reliable restart
@ -214,3 +206,33 @@ void __init imx7_src_init(void)
if (!gpc_base)
return;
}
static const struct of_device_id imx_src_dt_ids[] = {
{ .compatible = "fsl,imx51-src" },
{ /* sentinel */ }
};
static int imx_src_probe(struct platform_device *pdev)
{
struct reset_controller_dev *rcdev;
rcdev = devm_kzalloc(&pdev->dev, sizeof(*rcdev), GFP_KERNEL);
if (!rcdev)
return -ENOMEM;
rcdev->ops = &imx_src_ops;
rcdev->dev = &pdev->dev;
rcdev->of_node = pdev->dev.of_node;
rcdev->nr_resets = ARRAY_SIZE(sw_reset_bits);
return devm_reset_controller_register(&pdev->dev, rcdev);
}
static struct platform_driver imx_src_driver = {
.driver = {
.name = "imx-src",
.of_match_table = imx_src_dt_ids,
},
.probe = imx_src_probe,
};
builtin_platform_driver(imx_src_driver);


@ -112,7 +112,6 @@ config ARCH_OMAP2PLUS
select PM_GENERIC_DOMAINS
select PM_GENERIC_DOMAINS_OF
select RESET_CONTROLLER
select SIMPLE_PM_BUS
select SOC_BUS
select TI_SYSC
select OMAP_IRQCHIP


@ -245,7 +245,6 @@ CONFIG_DEVTMPFS_MOUNT=y
CONFIG_FW_LOADER_USER_HELPER=y
CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
CONFIG_HISILICON_LPC=y
CONFIG_SIMPLE_PM_BUS=y
CONFIG_FSL_MC_BUS=y
CONFIG_TEGRA_ACONNECT=m
CONFIG_GNSS=m


@ -43,7 +43,7 @@ void __init arm64_hugetlb_cma_reserve(void)
#ifdef CONFIG_ARM64_4K_PAGES
order = PUD_SHIFT - PAGE_SHIFT;
#else
order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
order = CONT_PMD_SHIFT - PAGE_SHIFT;
#endif
/*
* HugeTLB CMA reservation is required for gigantic


@ -8,7 +8,7 @@ config CSKY
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
select COMMON_CLK
select CLKSRC_MMIO
@ -241,6 +241,7 @@ endchoice
menuconfig HAVE_TCM
bool "Tightly-Coupled/Sram Memory"
depends on !COMPILE_TEST
help
The implementation is not only used by TCM (Tightly-Coupled Memory)
but also by SRAM on the SOC bus. It follows the existing Linux tcm


@ -74,7 +74,6 @@ static __always_inline unsigned long __fls(unsigned long x)
* bug fix, why only could use atomic!!!!
*/
#include <asm-generic/bitops/non-atomic.h>
#define __clear_bit(nr, vaddr) clear_bit(nr, vaddr)
#include <asm-generic/bitops/le.h>
#include <asm-generic/bitops/ext2-atomic.h>


@ -99,7 +99,8 @@ static int gpr_set(struct task_struct *target,
if (ret)
return ret;
regs.sr = task_pt_regs(target)->sr;
/* BIT(0) of regs.sr is Condition Code/Carry bit */
regs.sr = (regs.sr & BIT(0)) | (task_pt_regs(target)->sr & ~BIT(0));
#ifdef CONFIG_CPU_HAS_HILO
regs.dcsr = task_pt_regs(target)->dcsr;
#endif


@ -52,10 +52,14 @@ static long restore_sigcontext(struct pt_regs *regs,
struct sigcontext __user *sc)
{
int err = 0;
unsigned long sr = regs->sr;
/* sc_pt_regs is structured the same as the start of pt_regs */
err |= __copy_from_user(regs, &sc->sc_pt_regs, sizeof(struct pt_regs));
/* BIT(0) of regs->sr is Condition Code/Carry bit */
regs->sr = (sr & ~1) | (regs->sr & 1);
/* Restore the floating-point state. */
err |= restore_fpu_state(sc);


@ -255,13 +255,16 @@ kvm_novcpu_exit:
* r3 contains the SRR1 wakeup value, SRR1 is trashed.
*/
_GLOBAL(idle_kvm_start_guest)
ld r4,PACAEMERGSP(r13)
mfcr r5
mflr r0
std r1,0(r4)
std r5,8(r4)
std r0,16(r4)
subi r1,r4,STACK_FRAME_OVERHEAD
std r5, 8(r1) // Save CR in caller's frame
std r0, 16(r1) // Save LR in caller's frame
// Create frame on emergency stack
ld r4, PACAEMERGSP(r13)
stdu r1, -SWITCH_FRAME_SIZE(r4)
// Switch to new frame on emergency stack
mr r1, r4
std r3, 32(r1) // Save SRR1 wakeup value
SAVE_NVGPRS(r1)
/*
@ -313,6 +316,10 @@ kvm_unsplit_wakeup:
kvm_secondary_got_guest:
// About to go to guest, clear saved SRR1
li r0, 0
std r0, 32(r1)
/* Set HSTATE_DSCR(r13) to something sensible */
ld r6, PACA_DSCR_DEFAULT(r13)
std r6, HSTATE_DSCR(r13)
@ -392,13 +399,12 @@ kvm_no_guest:
mfspr r4, SPRN_LPCR
rlwimi r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
mtspr SPRN_LPCR, r4
/* set up r3 for return */
mfspr r3,SPRN_SRR1
// Return SRR1 wakeup value, or 0 if we went into the guest
ld r3, 32(r1)
REST_NVGPRS(r1)
addi r1, r1, STACK_FRAME_OVERHEAD
ld r0, 16(r1)
ld r5, 8(r1)
ld r1, 0(r1)
ld r1, 0(r1) // Switch back to caller stack
ld r0, 16(r1) // Reload LR
ld r5, 8(r1) // Reload CR
mtlr r0
mtcr r5
blr


@ -945,7 +945,8 @@ static int xive_get_irqchip_state(struct irq_data *data,
* interrupt to be inactive in that case.
*/
*state = (pq != XIVE_ESB_INVALID) && !xd->stale_p &&
(xd->saved_p || !!(pq & XIVE_ESB_VAL_P));
(xd->saved_p || (!!(pq & XIVE_ESB_VAL_P) &&
!irqd_irq_disabled(data)));
return 0;
default:
return -EINVAL;


@ -259,14 +259,13 @@ EXPORT_SYMBOL(strcmp);
#ifdef __HAVE_ARCH_STRRCHR
char *strrchr(const char *s, int c)
{
size_t len = __strend(s) - s;
ssize_t len = __strend(s) - s;
if (len)
do {
if (s[len] == (char) c)
return (char *) s + len;
} while (--len > 0);
return NULL;
do {
if (s[len] == (char)c)
return (char *)s + len;
} while (--len >= 0);
return NULL;
}
EXPORT_SYMBOL(strrchr);
#endif
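The subtlety here: with an unsigned size_t counter, --len > 0 stops before
index 0 is examined, so a match in the first character (and strrchr(s, '\0')
on the empty string) was missed. A hedged userspace sketch of the before and
after logic (illustrative only, not the kernel build):

	#include <stdio.h>
	#include <string.h>
	#include <sys/types.h>

	/* old logic: the unsigned counter never tests index 0 */
	static char *strrchr_old(const char *s, int c)
	{
		size_t len = strlen(s);

		if (len)
			do {
				if (s[len] == (char)c)
					return (char *)s + len;
			} while (--len > 0);
		return NULL;
	}

	/* fixed logic: the signed counter walks down to index 0 inclusive */
	static char *strrchr_new(const char *s, int c)
	{
		ssize_t len = strlen(s);

		do {
			if (s[len] == (char)c)
				return (char *)s + len;
		} while (--len >= 0);
		return NULL;
	}

	int main(void)
	{
		/* old: NULL (match at index 0 missed); new: &"xab"[0] */
		printf("%p vs %p\n", (void *)strrchr_old("xab", 'x'),
		       (void *)strrchr_new("xab", 'x'));
		return 0;
	}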


@ -1525,7 +1525,6 @@ config AMD_MEM_ENCRYPT
config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
bool "Activate AMD Secure Memory Encryption (SME) by default"
default y
depends on AMD_MEM_ENCRYPT
help
Say yes to have system memory encrypted by default if running on


@ -68,6 +68,7 @@ static bool test_intel(int idx, void *data)
case INTEL_FAM6_BROADWELL_D:
case INTEL_FAM6_BROADWELL_G:
case INTEL_FAM6_BROADWELL_X:
case INTEL_FAM6_SAPPHIRERAPIDS_X:
case INTEL_FAM6_ATOM_SILVERMONT:
case INTEL_FAM6_ATOM_SILVERMONT_D:


@ -385,7 +385,7 @@ static int __fpu_restore_sig(void __user *buf, void __user *buf_fx,
return -EINVAL;
} else {
/* Mask invalid bits out for historical reasons (broken hardware). */
fpu->state.fxsave.mxcsr &= ~mxcsr_feature_mask;
fpu->state.fxsave.mxcsr &= mxcsr_feature_mask;
}
/* Enforce XFEATURE_MASK_FPSSE when XSAVE is enabled */


@ -666,6 +666,12 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
bfqg_and_blkg_put(bfqq_group(bfqq));
if (entity->parent &&
entity->parent->last_bfqq_created == bfqq)
entity->parent->last_bfqq_created = NULL;
else if (bfqd->last_bfqq_created == bfqq)
bfqd->last_bfqq_created = NULL;
entity->parent = bfqg->my_entity;
entity->sched_data = &bfqg->sched_data;
/* pin down bfqg and its associated blkg */


@ -49,7 +49,6 @@
#include "blk-mq.h"
#include "blk-mq-sched.h"
#include "blk-pm.h"
#include "blk-rq-qos.h"
struct dentry *blk_debugfs_root;
@ -337,23 +336,25 @@ void blk_put_queue(struct request_queue *q)
}
EXPORT_SYMBOL(blk_put_queue);
void blk_set_queue_dying(struct request_queue *q)
void blk_queue_start_drain(struct request_queue *q)
{
blk_queue_flag_set(QUEUE_FLAG_DYING, q);
/*
* When queue DYING flag is set, we need to block new req
* entering queue, so we call blk_freeze_queue_start() to
* prevent I/O from crossing blk_queue_enter().
*/
blk_freeze_queue_start(q);
if (queue_is_mq(q))
blk_mq_wake_waiters(q);
/* Make blk_queue_enter() reexamine the DYING flag. */
wake_up_all(&q->mq_freeze_wq);
}
void blk_set_queue_dying(struct request_queue *q)
{
blk_queue_flag_set(QUEUE_FLAG_DYING, q);
blk_queue_start_drain(q);
}
EXPORT_SYMBOL_GPL(blk_set_queue_dying);
/**
@ -385,13 +386,8 @@ void blk_cleanup_queue(struct request_queue *q)
*/
blk_freeze_queue(q);
rq_qos_exit(q);
blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
/* for synchronous bio-based driver finish in-flight integrity i/o */
blk_flush_integrity();
blk_sync_queue(q);
if (queue_is_mq(q))
blk_mq_exit_queue(q);
@ -416,6 +412,30 @@ void blk_cleanup_queue(struct request_queue *q)
}
EXPORT_SYMBOL(blk_cleanup_queue);
static bool blk_try_enter_queue(struct request_queue *q, bool pm)
{
rcu_read_lock();
if (!percpu_ref_tryget_live(&q->q_usage_counter))
goto fail;
/*
* The code that increments the pm_only counter must ensure that the
* counter is globally visible before the queue is unfrozen.
*/
if (blk_queue_pm_only(q) &&
(!pm || queue_rpm_status(q) == RPM_SUSPENDED))
goto fail_put;
rcu_read_unlock();
return true;
fail_put:
percpu_ref_put(&q->q_usage_counter);
fail:
rcu_read_unlock();
return false;
}
/**
* blk_queue_enter() - try to increase q->q_usage_counter
* @q: request queue pointer
@ -425,40 +445,18 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
{
const bool pm = flags & BLK_MQ_REQ_PM;
while (true) {
bool success = false;
rcu_read_lock();
if (percpu_ref_tryget_live(&q->q_usage_counter)) {
/*
* The code that increments the pm_only counter is
* responsible for ensuring that that counter is
* globally visible before the queue is unfrozen.
*/
if ((pm && queue_rpm_status(q) != RPM_SUSPENDED) ||
!blk_queue_pm_only(q)) {
success = true;
} else {
percpu_ref_put(&q->q_usage_counter);
}
}
rcu_read_unlock();
if (success)
return 0;
while (!blk_try_enter_queue(q, pm)) {
if (flags & BLK_MQ_REQ_NOWAIT)
return -EBUSY;
/*
* read pair of barrier in blk_freeze_queue_start(),
* we need to order reading __PERCPU_REF_DEAD flag of
* .q_usage_counter and reading .mq_freeze_depth or
* queue dying flag, otherwise the following wait may
* never return if the two reads are reordered.
* read pair of barrier in blk_freeze_queue_start(), we need to
* order reading __PERCPU_REF_DEAD flag of .q_usage_counter and
* reading .mq_freeze_depth or queue dying flag, otherwise the
* following wait may never return if the two reads are
* reordered.
*/
smp_rmb();
wait_event(q->mq_freeze_wq,
(!q->mq_freeze_depth &&
blk_pm_resume_queue(pm, q)) ||
@ -466,23 +464,43 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
if (blk_queue_dying(q))
return -ENODEV;
}
return 0;
}
static inline int bio_queue_enter(struct bio *bio)
{
struct request_queue *q = bio->bi_bdev->bd_disk->queue;
bool nowait = bio->bi_opf & REQ_NOWAIT;
int ret;
struct gendisk *disk = bio->bi_bdev->bd_disk;
struct request_queue *q = disk->queue;
ret = blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0);
if (unlikely(ret)) {
if (nowait && !blk_queue_dying(q))
while (!blk_try_enter_queue(q, false)) {
if (bio->bi_opf & REQ_NOWAIT) {
if (test_bit(GD_DEAD, &disk->state))
goto dead;
bio_wouldblock_error(bio);
else
bio_io_error(bio);
return -EBUSY;
}
/*
* read pair of barrier in blk_freeze_queue_start(), we need to
* order reading __PERCPU_REF_DEAD flag of .q_usage_counter and
* reading .mq_freeze_depth or queue dying flag, otherwise the
* following wait may never return if the two reads are
* reordered.
*/
smp_rmb();
wait_event(q->mq_freeze_wq,
(!q->mq_freeze_depth &&
blk_pm_resume_queue(false, q)) ||
test_bit(GD_DEAD, &disk->state));
if (test_bit(GD_DEAD, &disk->state))
goto dead;
}
return ret;
return 0;
dead:
bio_io_error(bio);
return -ENODEV;
}
void blk_queue_exit(struct request_queue *q)
@ -899,11 +917,18 @@ static blk_qc_t __submit_bio(struct bio *bio)
struct gendisk *disk = bio->bi_bdev->bd_disk;
blk_qc_t ret = BLK_QC_T_NONE;
if (blk_crypto_bio_prep(&bio)) {
if (!disk->fops->submit_bio)
return blk_mq_submit_bio(bio);
if (unlikely(bio_queue_enter(bio) != 0))
return BLK_QC_T_NONE;
if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
goto queue_exit;
if (disk->fops->submit_bio) {
ret = disk->fops->submit_bio(bio);
goto queue_exit;
}
return blk_mq_submit_bio(bio);
queue_exit:
blk_queue_exit(disk->queue);
return ret;
}
@ -941,9 +966,6 @@ static blk_qc_t __submit_bio_noacct(struct bio *bio)
struct request_queue *q = bio->bi_bdev->bd_disk->queue;
struct bio_list lower, same;
if (unlikely(bio_queue_enter(bio) != 0))
continue;
/*
* Create a fresh bio_list for all subordinate requests.
*/
@ -979,23 +1001,12 @@ static blk_qc_t __submit_bio_noacct(struct bio *bio)
static blk_qc_t __submit_bio_noacct_mq(struct bio *bio)
{
struct bio_list bio_list[2] = { };
blk_qc_t ret = BLK_QC_T_NONE;
blk_qc_t ret;
current->bio_list = bio_list;
do {
struct gendisk *disk = bio->bi_bdev->bd_disk;
if (unlikely(bio_queue_enter(bio) != 0))
continue;
if (!blk_crypto_bio_prep(&bio)) {
blk_queue_exit(disk->queue);
ret = BLK_QC_T_NONE;
continue;
}
ret = blk_mq_submit_bio(bio);
ret = __submit_bio(bio);
} while ((bio = bio_list_pop(&bio_list[0])));
current->bio_list = NULL;
@ -1013,9 +1024,6 @@ static blk_qc_t __submit_bio_noacct_mq(struct bio *bio)
*/
blk_qc_t submit_bio_noacct(struct bio *bio)
{
if (!submit_bio_checks(bio))
return BLK_QC_T_NONE;
/*
* We only want one ->submit_bio to be active at a time, else stack
* usage with stacked devices could be a problem. Use current->bio_list


@ -188,9 +188,11 @@ void blk_mq_freeze_queue(struct request_queue *q)
}
EXPORT_SYMBOL_GPL(blk_mq_freeze_queue);
void blk_mq_unfreeze_queue(struct request_queue *q)
void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
{
mutex_lock(&q->mq_freeze_lock);
if (force_atomic)
q->q_usage_counter.data->force_atomic = true;
q->mq_freeze_depth--;
WARN_ON_ONCE(q->mq_freeze_depth < 0);
if (!q->mq_freeze_depth) {
@ -199,6 +201,11 @@ void blk_mq_unfreeze_queue(struct request_queue *q)
}
mutex_unlock(&q->mq_freeze_lock);
}
void blk_mq_unfreeze_queue(struct request_queue *q)
{
__blk_mq_unfreeze_queue(q, false);
}
EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
/*


@ -51,6 +51,8 @@ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
void blk_free_flush_queue(struct blk_flush_queue *q);
void blk_freeze_queue(struct request_queue *q);
void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
void blk_queue_start_drain(struct request_queue *q);
#define BIO_INLINE_VECS 4
struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,


@ -26,6 +26,7 @@
#include <linux/badblocks.h>
#include "blk.h"
#include "blk-rq-qos.h"
static struct kobject *block_depr;
@ -559,6 +560,8 @@ EXPORT_SYMBOL(device_add_disk);
*/
void del_gendisk(struct gendisk *disk)
{
struct request_queue *q = disk->queue;
might_sleep();
if (WARN_ON_ONCE(!disk_live(disk) && !(disk->flags & GENHD_FL_HIDDEN)))
@ -575,8 +578,27 @@ void del_gendisk(struct gendisk *disk)
fsync_bdev(disk->part0);
__invalidate_device(disk->part0, true);
/*
* Fail any new I/O.
*/
set_bit(GD_DEAD, &disk->state);
set_capacity(disk, 0);
/*
* Prevent new I/O from crossing bio_queue_enter().
*/
blk_queue_start_drain(q);
blk_mq_freeze_queue_wait(q);
rq_qos_exit(q);
blk_sync_queue(q);
blk_flush_integrity();
/*
* Allow using passthrough request again after the queue is torn down.
*/
blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q);
__blk_mq_unfreeze_queue(q, true);
if (!(disk->flags & GENHD_FL_HIDDEN)) {
sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
@ -1056,6 +1078,7 @@ static void disk_release(struct device *dev)
struct gendisk *disk = dev_to_disk(dev);
might_sleep();
WARN_ON_ONCE(disk_live(disk));
disk_release_events(disk);
kfree(disk->random);


@ -151,6 +151,7 @@ struct kyber_ctx_queue {
struct kyber_queue_data {
struct request_queue *q;
dev_t dev;
/*
* Each scheduling domain has a limited number of in-flight requests
@ -257,7 +258,7 @@ static int calculate_percentile(struct kyber_queue_data *kqd,
}
memset(buckets, 0, sizeof(kqd->latency_buckets[sched_domain][type]));
trace_kyber_latency(kqd->q, kyber_domain_names[sched_domain],
trace_kyber_latency(kqd->dev, kyber_domain_names[sched_domain],
kyber_latency_type_names[type], percentile,
bucket + 1, 1 << KYBER_LATENCY_SHIFT, samples);
@ -270,7 +271,7 @@ static void kyber_resize_domain(struct kyber_queue_data *kqd,
depth = clamp(depth, 1U, kyber_depth[sched_domain]);
if (depth != kqd->domain_tokens[sched_domain].sb.depth) {
sbitmap_queue_resize(&kqd->domain_tokens[sched_domain], depth);
trace_kyber_adjust(kqd->q, kyber_domain_names[sched_domain],
trace_kyber_adjust(kqd->dev, kyber_domain_names[sched_domain],
depth);
}
}
@ -366,6 +367,7 @@ static struct kyber_queue_data *kyber_queue_data_alloc(struct request_queue *q)
goto err;
kqd->q = q;
kqd->dev = disk_devt(q->disk);
kqd->cpu_latency = alloc_percpu_gfp(struct kyber_cpu_latency,
GFP_KERNEL | __GFP_ZERO);
@ -774,7 +776,7 @@ kyber_dispatch_cur_domain(struct kyber_queue_data *kqd,
list_del_init(&rq->queuelist);
return rq;
} else {
trace_kyber_throttled(kqd->q,
trace_kyber_throttled(kqd->dev,
kyber_domain_names[khd->cur_domain]);
}
} else if (sbitmap_any_bit_set(&khd->kcq_map[khd->cur_domain])) {
@ -787,7 +789,7 @@ kyber_dispatch_cur_domain(struct kyber_queue_data *kqd,
list_del_init(&rq->queuelist);
return rq;
} else {
trace_kyber_throttled(kqd->q,
trace_kyber_throttled(kqd->dev,
kyber_domain_names[khd->cur_domain]);
}
}


@ -36,7 +36,7 @@ struct acpi_gtdt_descriptor {
static struct acpi_gtdt_descriptor acpi_gtdt_desc __initdata;
static inline void *next_platform_timer(void *platform_timer)
static inline __init void *next_platform_timer(void *platform_timer)
{
struct acpi_gtdt_header *gh = platform_timer;


@ -371,7 +371,7 @@ static int lps0_device_attach(struct acpi_device *adev,
return 0;
if (acpi_s2idle_vendor_amd()) {
/* AMD0004, AMDI0005:
/* AMD0004, AMD0005, AMDI0005:
* - Should use rev_id 0x0
* - function mask > 0x3: Should use AMD method, but has off by one bug
* - function mask = 0x3: Should use Microsoft method
@ -390,6 +390,7 @@ static int lps0_device_attach(struct acpi_device *adev,
ACPI_LPS0_DSM_UUID_MICROSOFT, 0,
&lps0_dsm_guid_microsoft);
if (lps0_dsm_func_mask > 0x3 && (!strcmp(hid, "AMD0004") ||
!strcmp(hid, "AMD0005") ||
!strcmp(hid, "AMDI0005"))) {
lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1;
acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n",


@ -440,10 +440,7 @@ struct ahci_host_priv *ahci_platform_get_resources(struct platform_device *pdev,
hpriv->phy_regulator = devm_regulator_get(dev, "phy");
if (IS_ERR(hpriv->phy_regulator)) {
rc = PTR_ERR(hpriv->phy_regulator);
if (rc == -EPROBE_DEFER)
goto err_out;
rc = 0;
hpriv->phy_regulator = NULL;
goto err_out;
}
if (flags & AHCI_PLATFORM_GET_RESETS) {


@ -352,7 +352,8 @@ static unsigned int pdc_data_xfer_vlb(struct ata_queued_cmd *qc,
iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
__le32 pad;
__le32 pad = 0;
if (rw == READ) {
pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
memcpy(buf + buflen - slop, &pad, slop);
@ -742,7 +743,8 @@ static unsigned int vlb32_data_xfer(struct ata_queued_cmd *qc,
ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
__le32 pad;
__le32 pad = 0;
if (rw == WRITE) {
memcpy(&pad, buf + buflen - slop, slop);
iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);


@ -687,7 +687,8 @@ struct device_link *device_link_add(struct device *consumer,
{
struct device_link *link;
if (!consumer || !supplier || flags & ~DL_ADD_VALID_FLAGS ||
if (!consumer || !supplier || consumer == supplier ||
flags & ~DL_ADD_VALID_FLAGS ||
(flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) ||
(flags & DL_FLAG_SYNC_STATE_ONLY &&
(flags & ~DL_FLAG_INFERRED) != DL_FLAG_SYNC_STATE_ONLY) ||


@ -2,4 +2,4 @@
obj-$(CONFIG_TEST_ASYNC_DRIVER_PROBE) += test_async_driver_probe.o
obj-$(CONFIG_DRIVER_PE_KUNIT_TEST) += property-entry-test.o
CFLAGS_REMOVE_property-entry-test.o += -fplugin-arg-structleak_plugin-byref -fplugin-arg-structleak_plugin-byref-all
CFLAGS_property-entry-test.o += $(DISABLE_STRUCTLEAK_PLUGIN)


@ -373,10 +373,22 @@ static int brd_alloc(int i)
struct gendisk *disk;
char buf[DISK_NAME_LEN];
mutex_lock(&brd_devices_mutex);
list_for_each_entry(brd, &brd_devices, brd_list) {
if (brd->brd_number == i) {
mutex_unlock(&brd_devices_mutex);
return -EEXIST;
}
}
brd = kzalloc(sizeof(*brd), GFP_KERNEL);
if (!brd)
if (!brd) {
mutex_unlock(&brd_devices_mutex);
return -ENOMEM;
}
brd->brd_number = i;
list_add_tail(&brd->brd_list, &brd_devices);
mutex_unlock(&brd_devices_mutex);
spin_lock_init(&brd->brd_lock);
INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC);
@ -411,37 +423,30 @@ static int brd_alloc(int i)
blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, disk->queue);
add_disk(disk);
list_add_tail(&brd->brd_list, &brd_devices);
return 0;
out_free_dev:
mutex_lock(&brd_devices_mutex);
list_del(&brd->brd_list);
mutex_unlock(&brd_devices_mutex);
kfree(brd);
return -ENOMEM;
}
static void brd_probe(dev_t dev)
{
int i = MINOR(dev) / max_part;
struct brd_device *brd;
mutex_lock(&brd_devices_mutex);
list_for_each_entry(brd, &brd_devices, brd_list) {
if (brd->brd_number == i)
goto out_unlock;
}
brd_alloc(i);
out_unlock:
mutex_unlock(&brd_devices_mutex);
brd_alloc(MINOR(dev) / max_part);
}
static void brd_del_one(struct brd_device *brd)
{
list_del(&brd->brd_list);
del_gendisk(brd->brd_disk);
blk_cleanup_disk(brd->brd_disk);
brd_free_pages(brd);
mutex_lock(&brd_devices_mutex);
list_del(&brd->brd_list);
mutex_unlock(&brd_devices_mutex);
kfree(brd);
}
@ -491,25 +496,21 @@ static int __init brd_init(void)
brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
mutex_lock(&brd_devices_mutex);
for (i = 0; i < rd_nr; i++) {
err = brd_alloc(i);
if (err)
goto out_free;
}
mutex_unlock(&brd_devices_mutex);
pr_info("brd: module loaded\n");
return 0;
out_free:
unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
debugfs_remove_recursive(brd_debugfs_dir);
list_for_each_entry_safe(brd, next, &brd_devices, brd_list)
brd_del_one(brd);
mutex_unlock(&brd_devices_mutex);
unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
pr_info("brd: module NOT loaded !!!\n");
return err;
@ -519,13 +520,12 @@ static void __exit brd_exit(void)
{
struct brd_device *brd, *next;
unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
debugfs_remove_recursive(brd_debugfs_dir);
list_for_each_entry_safe(brd, next, &brd_devices, brd_list)
brd_del_one(brd);
unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
pr_info("brd: module unloaded\n");
}


@ -71,8 +71,10 @@ static int rnbd_clt_parse_map_options(const char *buf, size_t max_path_cnt,
int opt_mask = 0;
int token;
int ret = -EINVAL;
int i, dest_port, nr_poll_queues;
int nr_poll_queues = 0;
int dest_port = 0;
int p_cnt = 0;
int i;
options = kstrdup(buf, GFP_KERNEL);
if (!options)


@ -689,28 +689,6 @@ static const struct blk_mq_ops virtio_mq_ops = {
static unsigned int virtblk_queue_depth;
module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);
static int virtblk_validate(struct virtio_device *vdev)
{
u32 blk_size;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
if (!virtio_has_feature(vdev, VIRTIO_BLK_F_BLK_SIZE))
return 0;
blk_size = virtio_cread32(vdev,
offsetof(struct virtio_blk_config, blk_size));
if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE)
__virtio_clear_bit(vdev, VIRTIO_BLK_F_BLK_SIZE);
return 0;
}
static int virtblk_probe(struct virtio_device *vdev)
{
struct virtio_blk *vblk;
@ -722,6 +700,12 @@ static int virtblk_probe(struct virtio_device *vdev)
u8 physical_block_exp, alignment_offset;
unsigned int queue_depth;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
err = ida_simple_get(&vd_index_ida, 0, minor_to_index(1 << MINORBITS),
GFP_KERNEL);
if (err < 0)
@ -836,14 +820,6 @@ static int virtblk_probe(struct virtio_device *vdev)
else
blk_size = queue_logical_block_size(q);
if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE) {
dev_err(&vdev->dev,
"block size is changed unexpectedly, now is %u\n",
blk_size);
err = -EINVAL;
goto out_cleanup_disk;
}
/* Use topology information if available */
err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY,
struct virtio_blk_config, physical_block_exp,
@ -1009,7 +985,6 @@ static struct virtio_driver virtio_blk = {
.driver.name = KBUILD_MODNAME,
.driver.owner = THIS_MODULE,
.id_table = id_table,
.validate = virtblk_validate,
.probe = virtblk_probe,
.remove = virtblk_remove,
.config_changed = virtblk_config_changed,


@ -152,18 +152,6 @@ config QCOM_EBI2
Interface 2, which can be used to connect things like NAND Flash,
SRAM, ethernet adapters, FPGAs and LCD displays.
config SIMPLE_PM_BUS
tristate "Simple Power-Managed Bus Driver"
depends on OF && PM
help
Driver for transparent busses that don't need a real driver, but
where the bus controller is part of a PM domain, or under the control
of a functional clock, and thus relies on runtime PM for managing
this PM domain and/or clock.
An example of such a bus controller is the Renesas Bus State
Controller (BSC, sometimes called "LBSC within Bus Bridge", or
"External Bus Interface") as found on several Renesas ARM SoCs.
config SUN50I_DE2_BUS
bool "Allwinner A64 DE2 Bus Driver"
default ARM64


@ -27,7 +27,7 @@ obj-$(CONFIG_OMAP_OCP2SCP) += omap-ocp2scp.o
obj-$(CONFIG_QCOM_EBI2) += qcom-ebi2.o
obj-$(CONFIG_SUN50I_DE2_BUS) += sun50i-de2.o
obj-$(CONFIG_SUNXI_RSB) += sunxi-rsb.o
obj-$(CONFIG_SIMPLE_PM_BUS) += simple-pm-bus.o
obj-$(CONFIG_OF) += simple-pm-bus.o
obj-$(CONFIG_TEGRA_ACONNECT) += tegra-aconnect.o
obj-$(CONFIG_TEGRA_GMI) += tegra-gmi.o
obj-$(CONFIG_TI_PWMSS) += ti-pwmss.o


@ -13,11 +13,36 @@
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
static int simple_pm_bus_probe(struct platform_device *pdev)
{
const struct of_dev_auxdata *lookup = dev_get_platdata(&pdev->dev);
struct device_node *np = pdev->dev.of_node;
const struct device *dev = &pdev->dev;
const struct of_dev_auxdata *lookup = dev_get_platdata(dev);
struct device_node *np = dev->of_node;
const struct of_device_id *match;
/*
* Allow user to use driver_override to bind this driver to a
* transparent bus device which has a different compatible string
* that's not listed in simple_pm_bus_of_match. We don't want to do any
* of the simple-pm-bus tasks for these devices, so return early.
*/
if (pdev->driver_override)
return 0;
match = of_match_device(dev->driver->of_match_table, dev);
/*
* These are transparent bus devices (not simple-pm-bus matches) that
* have their child nodes populated automatically. So, don't need to
* do anything more. We only match with the device if this driver is
* the most specific match because we don't want to incorrectly bind to
* a device that has a more specific driver.
*/
if (match && match->data) {
if (of_property_match_string(np, "compatible", match->compatible) == 0)
return 0;
else
return -ENODEV;
}
dev_dbg(&pdev->dev, "%s\n", __func__);
@ -31,14 +56,25 @@ static int simple_pm_bus_probe(struct platform_device *pdev)
static int simple_pm_bus_remove(struct platform_device *pdev)
{
const void *data = of_device_get_match_data(&pdev->dev);
if (pdev->driver_override || data)
return 0;
dev_dbg(&pdev->dev, "%s\n", __func__);
pm_runtime_disable(&pdev->dev);
return 0;
}
#define ONLY_BUS ((void *) 1) /* Match if the device is only a bus. */
static const struct of_device_id simple_pm_bus_of_match[] = {
{ .compatible = "simple-pm-bus", },
{ .compatible = "simple-bus", .data = ONLY_BUS },
{ .compatible = "simple-mfd", .data = ONLY_BUS },
{ .compatible = "isa", .data = ONLY_BUS },
{ .compatible = "arm,amba-bus", .data = ONLY_BUS },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);


@ -564,6 +564,7 @@ config SM_GCC_6125
config SM_GCC_6350
tristate "SM6350 Global Clock Controller"
select QCOM_GDSC
help
Support for the global clock controller on SM6350 devices.
Say Y if you want to use peripheral devices such as UART,


@ -3242,7 +3242,7 @@ static struct gdsc hlos1_vote_turing_mmu_tbu1_gdsc = {
};
static struct gdsc hlos1_vote_turing_mmu_tbu0_gdsc = {
.gdscr = 0x7d060,
.gdscr = 0x7d07c,
.pd = {
.name = "hlos1_vote_turing_mmu_tbu0",
},


@ -186,6 +186,8 @@ static struct rzg2l_reset r9a07g044_resets[] = {
static const unsigned int r9a07g044_crit_mod_clks[] __initconst = {
MOD_CLK_BASE + R9A07G044_GIC600_GICCLK,
MOD_CLK_BASE + R9A07G044_IA55_CLK,
MOD_CLK_BASE + R9A07G044_DMAC_ACLK,
};
const struct rzg2l_cpg_info r9a07g044_cpg_info = {


@ -391,7 +391,7 @@ static int rzg2l_mod_clock_is_enabled(struct clk_hw *hw)
value = readl(priv->base + CLK_MON_R(clock->off));
return !(value & bitmask);
return value & bitmask;
}
static const struct clk_ops rzg2l_mod_clock_ops = {


@ -165,13 +165,6 @@ static const struct clk_parent_data mpu_mux[] = {
.name = "boot_clk", },
};
static const struct clk_parent_data s2f_usr0_mux[] = {
{ .fw_name = "f2s-free-clk",
.name = "f2s-free-clk", },
{ .fw_name = "boot_clk",
.name = "boot_clk", },
};
static const struct clk_parent_data emac_mux[] = {
{ .fw_name = "emaca_free_clk",
.name = "emaca_free_clk", },
@ -312,8 +305,6 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
4, 0x44, 28, 1, 0, 0, 0},
{ AGILEX_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
5, 0, 0, 0, 0x30, 1, 0},
{ AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x24,
6, 0, 0, 0, 0, 0, 0},
{ AGILEX_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
0, 0, 0, 0, 0x94, 26, 0},
{ AGILEX_EMAC1_CLK, "emac1_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,


@ -178,7 +178,7 @@ static void axp_mc_check(struct mem_ctl_info *mci)
"details unavailable (multiple errors)");
if (cnt_dbe)
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
cnt_sbe, /* error count */
cnt_dbe, /* error count */
0, 0, 0, /* pfn, offset, syndrome */
-1, -1, -1, /* top, mid, low layer */
mci->ctl_name,


@ -49,6 +49,13 @@ static int ffa_device_probe(struct device *dev)
return ffa_drv->probe(ffa_dev);
}
static void ffa_device_remove(struct device *dev)
{
struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver);
ffa_drv->remove(to_ffa_dev(dev));
}
static int ffa_device_uevent(struct device *dev, struct kobj_uevent_env *env)
{
struct ffa_device *ffa_dev = to_ffa_dev(dev);
@ -86,6 +93,7 @@ struct bus_type ffa_bus_type = {
.name = "arm_ffa",
.match = ffa_device_match,
.probe = ffa_device_probe,
.remove = ffa_device_remove,
.uevent = ffa_device_uevent,
.dev_groups = ffa_device_attributes_groups,
};
@ -127,7 +135,7 @@ static void ffa_release_device(struct device *dev)
static int __ffa_devices_unregister(struct device *dev, void *data)
{
ffa_release_device(dev);
device_unregister(dev);
return 0;
}


@ -25,8 +25,6 @@
#include <acpi/ghes.h>
#include <ras/ras_event.h>
static char rcd_decode_str[CPER_REC_LEN];
/*
* CPER record ID need to be unique even after reboot, because record
* ID is used as index for ERST storage, while CPER records from
@ -312,6 +310,7 @@ const char *cper_mem_err_unpack(struct trace_seq *p,
struct cper_mem_err_compact *cmem)
{
const char *ret = trace_seq_buffer_ptr(p);
char rcd_decode_str[CPER_REC_LEN];
if (cper_mem_err_location(cmem, rcd_decode_str))
trace_seq_printf(p, "%s", rcd_decode_str);
@ -326,6 +325,7 @@ static void cper_print_mem(const char *pfx, const struct cper_sec_mem_err *mem,
int len)
{
struct cper_mem_err_compact cmem;
char rcd_decode_str[CPER_REC_LEN];
/* Don't trust UEFI 2.1/2.2 structure with bad validation bits */
if (len == sizeof(struct cper_sec_mem_err_old) &&


@ -271,7 +271,7 @@ efi_status_t allocate_new_fdt_and_exit_boot(void *handle,
return status;
}
efi_info("Exiting boot services and installing virtual address map...\n");
efi_info("Exiting boot services...\n");
map.map = &memory_map;
status = efi_allocate_pages(MAX_FDT_SIZE, new_fdt_addr, ULONG_MAX);


@ -414,7 +414,7 @@ static void virt_efi_reset_system(int reset_type,
unsigned long data_size,
efi_char16_t *data)
{
if (down_interruptible(&efi_runtime_lock)) {
if (down_trylock(&efi_runtime_lock)) {
pr_warn("failed to invoke the reset_system() runtime service:\n"
"could not get exclusive access to the firmware\n");
return;

View File

@ -192,12 +192,19 @@ static const struct of_device_id ice40_fpga_of_match[] = {
};
MODULE_DEVICE_TABLE(of, ice40_fpga_of_match);
static const struct spi_device_id ice40_fpga_spi_ids[] = {
{ .name = "ice40-fpga-mgr", },
{},
};
MODULE_DEVICE_TABLE(spi, ice40_fpga_spi_ids);
static struct spi_driver ice40_fpga_driver = {
.probe = ice40_fpga_probe,
.driver = {
.name = "ice40spi",
.of_match_table = of_match_ptr(ice40_fpga_of_match),
},
.id_table = ice40_fpga_spi_ids,
};
module_spi_driver(ice40_fpga_driver);


@ -174,6 +174,13 @@ static int gen_74x164_remove(struct spi_device *spi)
return 0;
}
static const struct spi_device_id gen_74x164_spi_ids[] = {
{ .name = "74hc595" },
{ .name = "74lvc594" },
{},
};
MODULE_DEVICE_TABLE(spi, gen_74x164_spi_ids);
static const struct of_device_id gen_74x164_dt_ids[] = {
{ .compatible = "fairchild,74hc595" },
{ .compatible = "nxp,74lvc594" },
@ -188,6 +195,7 @@ static struct spi_driver gen_74x164_driver = {
},
.probe = gen_74x164_probe,
.remove = gen_74x164_remove,
.id_table = gen_74x164_spi_ids,
};
module_spi_driver(gen_74x164_driver);


@ -476,10 +476,19 @@ static struct platform_device *gpio_mockup_pdevs[GPIO_MOCKUP_MAX_GC];
static void gpio_mockup_unregister_pdevs(void)
{
struct platform_device *pdev;
struct fwnode_handle *fwnode;
int i;
for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++)
platform_device_unregister(gpio_mockup_pdevs[i]);
for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++) {
pdev = gpio_mockup_pdevs[i];
if (!pdev)
continue;
fwnode = dev_fwnode(&pdev->dev);
platform_device_unregister(pdev);
fwnode_remove_software_node(fwnode);
}
}
static __init char **gpio_mockup_make_line_names(const char *label,
@ -508,6 +517,7 @@ static int __init gpio_mockup_register_chip(int idx)
struct property_entry properties[GPIO_MOCKUP_MAX_PROP];
struct platform_device_info pdevinfo;
struct platform_device *pdev;
struct fwnode_handle *fwnode;
char **line_names = NULL;
char chip_label[32];
int prop = 0, base;
@ -536,13 +546,18 @@ static int __init gpio_mockup_register_chip(int idx)
"gpio-line-names", line_names, ngpio);
}
fwnode = fwnode_create_software_node(properties, NULL);
if (IS_ERR(fwnode))
return PTR_ERR(fwnode);
pdevinfo.name = "gpio-mockup";
pdevinfo.id = idx;
pdevinfo.properties = properties;
pdevinfo.fwnode = fwnode;
pdev = platform_device_register_full(&pdevinfo);
kfree_strarray(line_names, ngpio);
if (IS_ERR(pdev)) {
fwnode_remove_software_node(fwnode);
pr_err("error registering device");
return PTR_ERR(pdev);
}


@ -559,21 +559,21 @@ static int pca953x_gpio_set_pull_up_down(struct pca953x_chip *chip,
mutex_lock(&chip->i2c_lock);
/* Disable pull-up/pull-down */
ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
if (ret)
goto exit;
/* Configure pull-up/pull-down */
if (config == PIN_CONFIG_BIAS_PULL_UP)
ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, bit);
else if (config == PIN_CONFIG_BIAS_PULL_DOWN)
ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, 0);
else
ret = 0;
if (ret)
goto exit;
/* Enable pull-up/pull-down */
ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
/* Disable/Enable pull-up/pull-down */
if (config == PIN_CONFIG_BIAS_DISABLE)
ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
else
ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
exit:
mutex_unlock(&chip->i2c_lock);
@ -587,7 +587,9 @@ static int pca953x_gpio_set_config(struct gpio_chip *gc, unsigned int offset,
switch (pinconf_to_config_param(config)) {
case PIN_CONFIG_BIAS_PULL_UP:
case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT:
case PIN_CONFIG_BIAS_PULL_DOWN:
case PIN_CONFIG_BIAS_DISABLE:
return pca953x_gpio_set_pull_up_down(chip, offset, config);
default:
return -ENOTSUPP;


@ -1834,11 +1834,20 @@ static void connector_bad_edid(struct drm_connector *connector,
u8 *edid, int num_blocks)
{
int i;
u8 num_of_ext = edid[0x7e];
u8 last_block;
/*
* 0x7e in the EDID is the number of extension blocks. The EDID
* is 1 (base block) + num_ext_blocks big. That means we can think
of 0x7e in the EDID as the _index_ of the last block in the
* combined chunk of memory.
*/
last_block = edid[0x7e];
/* Calculate real checksum for the last edid extension block data */
connector->real_edid_checksum =
drm_edid_block_checksum(edid + num_of_ext * EDID_LENGTH);
if (last_block < num_blocks)
connector->real_edid_checksum =
drm_edid_block_checksum(edid + last_block * EDID_LENGTH);
if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS))
return;


@ -1506,6 +1506,7 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
{
struct drm_client_dev *client = &fb_helper->client;
struct drm_device *dev = fb_helper->dev;
struct drm_mode_config *config = &dev->mode_config;
int ret = 0;
int crtc_count = 0;
struct drm_connector_list_iter conn_iter;
@ -1663,6 +1664,11 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
/* Handle our overallocation */
sizes.surface_height *= drm_fbdev_overalloc;
sizes.surface_height /= 100;
if (sizes.surface_height > config->max_height) {
drm_dbg_kms(dev, "Fbdev over-allocation too large; clamping height to %d\n",
config->max_height);
sizes.surface_height = config->max_height;
}
/* push down into drivers */
ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);


@ -46,6 +46,7 @@ int hyperv_mode_config_init(struct hyperv_drm_device *hv);
int hyperv_update_vram_location(struct hv_device *hdev, phys_addr_t vram_pp);
int hyperv_update_situation(struct hv_device *hdev, u8 active, u32 bpp,
u32 w, u32 h, u32 pitch);
int hyperv_hide_hw_ptr(struct hv_device *hdev);
int hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect);
int hyperv_connect_vsp(struct hv_device *hdev);


@ -101,6 +101,7 @@ static void hyperv_pipe_enable(struct drm_simple_display_pipe *pipe,
struct hyperv_drm_device *hv = to_hv(pipe->crtc.dev);
struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state);
hyperv_hide_hw_ptr(hv->hdev);
hyperv_update_situation(hv->hdev, 1, hv->screen_depth,
crtc_state->mode.hdisplay,
crtc_state->mode.vdisplay,


@ -299,6 +299,55 @@ int hyperv_update_situation(struct hv_device *hdev, u8 active, u32 bpp,
return 0;
}
/*
* Hyper-V supports a hardware cursor feature. It's not used by Linux VM,
* but the Hyper-V host still draws a point as an extra mouse pointer,
* which is unwanted, especially when Xorg is running.
*
* The hyperv_fb driver uses synthvid_send_ptr() to hide the unwanted
* pointer, by setting msg.ptr_pos.is_visible = 1 and setting the
* msg.ptr_shape.data. Note: setting msg.ptr_pos.is_visible to 0 doesn't
* work in tests.
*
* Copy synthvid_send_ptr() to hyperv_drm and rename it to
* hyperv_hide_hw_ptr(). Note: hyperv_hide_hw_ptr() is also called in the
* handler of the SYNTHVID_FEATURE_CHANGE event, otherwise the host still
* draws an extra unwanted mouse pointer after the VM Connection window is
* closed and reopened.
*/
int hyperv_hide_hw_ptr(struct hv_device *hdev)
{
struct synthvid_msg msg;
memset(&msg, 0, sizeof(struct synthvid_msg));
msg.vid_hdr.type = SYNTHVID_POINTER_POSITION;
msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
sizeof(struct synthvid_pointer_position);
msg.ptr_pos.is_visible = 1;
msg.ptr_pos.video_output = 0;
msg.ptr_pos.image_x = 0;
msg.ptr_pos.image_y = 0;
hyperv_sendpacket(hdev, &msg);
memset(&msg, 0, sizeof(struct synthvid_msg));
msg.vid_hdr.type = SYNTHVID_POINTER_SHAPE;
msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
sizeof(struct synthvid_pointer_shape);
msg.ptr_shape.part_idx = SYNTHVID_CURSOR_COMPLETE;
msg.ptr_shape.is_argb = 1;
msg.ptr_shape.width = 1;
msg.ptr_shape.height = 1;
msg.ptr_shape.hot_x = 0;
msg.ptr_shape.hot_y = 0;
msg.ptr_shape.data[0] = 0;
msg.ptr_shape.data[1] = 1;
msg.ptr_shape.data[2] = 1;
msg.ptr_shape.data[3] = 1;
hyperv_sendpacket(hdev, &msg);
return 0;
}
int hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect)
{
struct hyperv_drm_device *hv = hv_get_drvdata(hdev);
@ -392,8 +441,11 @@ static void hyperv_receive_sub(struct hv_device *hdev)
return;
}
if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE)
if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE) {
hv->dirt_needed = msg->feature_chg.is_dirt_needed;
if (hv->dirt_needed)
hyperv_hide_hw_ptr(hv->hdev);
}
}
static void hyperv_receive(void *ctx)


@ -186,13 +186,16 @@ void intel_dsm_get_bios_data_funcs_supported(struct drm_i915_private *i915)
{
struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
acpi_handle dhandle;
union acpi_object *obj;
dhandle = ACPI_HANDLE(&pdev->dev);
if (!dhandle)
return;
acpi_evaluate_dsm(dhandle, &intel_dsm_guid2, INTEL_DSM_REVISION_ID,
INTEL_DSM_FN_GET_BIOS_DATA_FUNCS_SUPPORTED, NULL);
obj = acpi_evaluate_dsm(dhandle, &intel_dsm_guid2, INTEL_DSM_REVISION_ID,
INTEL_DSM_FN_GET_BIOS_DATA_FUNCS_SUPPORTED, NULL);
if (obj)
ACPI_FREE(obj);
}
/*


@ -937,6 +937,10 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx,
unsigned int n;
e = alloc_engines(num_engines);
if (!e)
return ERR_PTR(-ENOMEM);
e->num_engines = num_engines;
for (n = 0; n < num_engines; n++) {
struct intel_context *ce;
int ret;
@ -970,7 +974,6 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx,
goto free_engines;
}
}
e->num_engines = num_engines;
return e;


@ -421,6 +421,7 @@ void intel_context_fini(struct intel_context *ce)
mutex_destroy(&ce->pin_mutex);
i915_active_fini(&ce->active);
i915_sw_fence_fini(&ce->guc_blocked);
}
void i915_context_module_exit(void)


@ -4,8 +4,6 @@
*/
#include <linux/clk.h>
#include <linux/dma-mapping.h>
#include <linux/mailbox_controller.h>
#include <linux/pm_runtime.h>
#include <linux/soc/mediatek/mtk-cmdq.h>
#include <linux/soc/mediatek/mtk-mmsys.h>
@ -52,11 +50,8 @@ struct mtk_drm_crtc {
bool pending_async_planes;
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
struct mbox_client cmdq_cl;
struct mbox_chan *cmdq_chan;
struct cmdq_pkt cmdq_handle;
struct cmdq_client *cmdq_client;
u32 cmdq_event;
u32 cmdq_vblank_cnt;
#endif
struct device *mmsys_dev;
@ -227,79 +222,9 @@ struct mtk_ddp_comp *mtk_drm_ddp_comp_for_plane(struct drm_crtc *crtc,
}
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
static int mtk_drm_cmdq_pkt_create(struct mbox_chan *chan, struct cmdq_pkt *pkt,
size_t size)
static void ddp_cmdq_cb(struct cmdq_cb_data data)
{
struct device *dev;
dma_addr_t dma_addr;
pkt->va_base = kzalloc(size, GFP_KERNEL);
if (!pkt->va_base) {
kfree(pkt);
return -ENOMEM;
}
pkt->buf_size = size;
dev = chan->mbox->dev;
dma_addr = dma_map_single(dev, pkt->va_base, pkt->buf_size,
DMA_TO_DEVICE);
if (dma_mapping_error(dev, dma_addr)) {
dev_err(dev, "dma map failed, size=%u\n", (u32)(u64)size);
kfree(pkt->va_base);
kfree(pkt);
return -ENOMEM;
}
pkt->pa_base = dma_addr;
return 0;
}
static void mtk_drm_cmdq_pkt_destroy(struct mbox_chan *chan, struct cmdq_pkt *pkt)
{
dma_unmap_single(chan->mbox->dev, pkt->pa_base, pkt->buf_size,
DMA_TO_DEVICE);
kfree(pkt->va_base);
kfree(pkt);
}
static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
{
struct mtk_drm_crtc *mtk_crtc = container_of(cl, struct mtk_drm_crtc, cmdq_cl);
struct cmdq_cb_data *data = mssg;
struct mtk_crtc_state *state;
unsigned int i;
state = to_mtk_crtc_state(mtk_crtc->base.state);
state->pending_config = false;
if (mtk_crtc->pending_planes) {
for (i = 0; i < mtk_crtc->layer_nr; i++) {
struct drm_plane *plane = &mtk_crtc->planes[i];
struct mtk_plane_state *plane_state;
plane_state = to_mtk_plane_state(plane->state);
plane_state->pending.config = false;
}
mtk_crtc->pending_planes = false;
}
if (mtk_crtc->pending_async_planes) {
for (i = 0; i < mtk_crtc->layer_nr; i++) {
struct drm_plane *plane = &mtk_crtc->planes[i];
struct mtk_plane_state *plane_state;
plane_state = to_mtk_plane_state(plane->state);
plane_state->pending.async_config = false;
}
mtk_crtc->pending_async_planes = false;
}
mtk_crtc->cmdq_vblank_cnt = 0;
mtk_drm_cmdq_pkt_destroy(mtk_crtc->cmdq_chan, data->pkt);
cmdq_pkt_destroy(data.data);
}
#endif
@ -453,8 +378,7 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc,
state->pending_vrefresh, 0,
cmdq_handle);
if (!cmdq_handle)
state->pending_config = false;
state->pending_config = false;
}
if (mtk_crtc->pending_planes) {
@ -474,12 +398,9 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc,
mtk_ddp_comp_layer_config(comp, local_layer,
plane_state,
cmdq_handle);
if (!cmdq_handle)
plane_state->pending.config = false;
plane_state->pending.config = false;
}
if (!cmdq_handle)
mtk_crtc->pending_planes = false;
mtk_crtc->pending_planes = false;
}
if (mtk_crtc->pending_async_planes) {
@ -499,12 +420,9 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc,
mtk_ddp_comp_layer_config(comp, local_layer,
plane_state,
cmdq_handle);
if (!cmdq_handle)
plane_state->pending.async_config = false;
plane_state->pending.async_config = false;
}
if (!cmdq_handle)
mtk_crtc->pending_async_planes = false;
mtk_crtc->pending_async_planes = false;
}
}
@ -512,7 +430,7 @@ static void mtk_drm_crtc_update_config(struct mtk_drm_crtc *mtk_crtc,
bool needs_vblank)
{
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
struct cmdq_pkt *cmdq_handle = &mtk_crtc->cmdq_handle;
struct cmdq_pkt *cmdq_handle;
#endif
struct drm_crtc *crtc = &mtk_crtc->base;
struct mtk_drm_private *priv = crtc->dev->dev_private;
@ -550,24 +468,14 @@ static void mtk_drm_crtc_update_config(struct mtk_drm_crtc *mtk_crtc,
mtk_mutex_release(mtk_crtc->mutex);
}
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
if (mtk_crtc->cmdq_chan) {
mbox_flush(mtk_crtc->cmdq_chan, 2000);
cmdq_handle->cmd_buf_size = 0;
if (mtk_crtc->cmdq_client) {
mbox_flush(mtk_crtc->cmdq_client->chan, 2000);
cmdq_handle = cmdq_pkt_create(mtk_crtc->cmdq_client, PAGE_SIZE);
cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);
mtk_crtc_ddp_config(crtc, cmdq_handle);
cmdq_pkt_finalize(cmdq_handle);
dma_sync_single_for_device(mtk_crtc->cmdq_chan->mbox->dev,
cmdq_handle->pa_base,
cmdq_handle->cmd_buf_size,
DMA_TO_DEVICE);
/*
 * The CMDQ command should execute in the next vblank; if it fails to
 * execute within the next two vblanks, a timeout has happened.
 */
mtk_crtc->cmdq_vblank_cnt = 2;
mbox_send_message(mtk_crtc->cmdq_chan, cmdq_handle);
mbox_client_txdone(mtk_crtc->cmdq_chan, 0);
cmdq_pkt_flush_async(cmdq_handle, ddp_cmdq_cb, cmdq_handle);
}
#endif
mtk_crtc->config_updating = false;
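The refactor moves this driver back onto the mtk-cmdq helper API instead of hand-rolled mailbox packets: each flush follows a create / build / finalize / flush-async lifecycle, and the packet is freed by cmdq_pkt_destroy() in the ddp_cmdq_cb() completion callback shown earlier. A condensed sketch of that lifecycle, using the helper calls as they appear in these hunks (not the full function body):

cmdq_handle = cmdq_pkt_create(mtk_crtc->cmdq_client, PAGE_SIZE);
cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);      /* wait for the vblank event */
mtk_crtc_ddp_config(crtc, cmdq_handle);                      /* queue register writes */
cmdq_pkt_finalize(cmdq_handle);                              /* append end-of-command marker */
cmdq_pkt_flush_async(cmdq_handle, ddp_cmdq_cb, cmdq_handle); /* packet freed in the callback */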
@ -581,15 +489,12 @@ static void mtk_crtc_ddp_irq(void *data)
struct mtk_drm_private *priv = crtc->dev->dev_private;
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
if (!priv->data->shadow_register && !mtk_crtc->cmdq_chan)
mtk_crtc_ddp_config(crtc, NULL);
else if (mtk_crtc->cmdq_vblank_cnt > 0 && --mtk_crtc->cmdq_vblank_cnt == 0)
DRM_ERROR("mtk_crtc %d CMDQ execute command timeout!\n",
drm_crtc_index(&mtk_crtc->base));
if (!priv->data->shadow_register && !mtk_crtc->cmdq_client)
#else
if (!priv->data->shadow_register)
mtk_crtc_ddp_config(crtc, NULL);
#endif
mtk_crtc_ddp_config(crtc, NULL);
mtk_drm_finish_page_flip(mtk_crtc);
}
@ -924,20 +829,16 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
mutex_init(&mtk_crtc->hw_lock);
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
mtk_crtc->cmdq_cl.dev = mtk_crtc->mmsys_dev;
mtk_crtc->cmdq_cl.tx_block = false;
mtk_crtc->cmdq_cl.knows_txdone = true;
mtk_crtc->cmdq_cl.rx_callback = ddp_cmdq_cb;
mtk_crtc->cmdq_chan =
mbox_request_channel(&mtk_crtc->cmdq_cl,
drm_crtc_index(&mtk_crtc->base));
if (IS_ERR(mtk_crtc->cmdq_chan)) {
mtk_crtc->cmdq_client =
cmdq_mbox_create(mtk_crtc->mmsys_dev,
drm_crtc_index(&mtk_crtc->base));
if (IS_ERR(mtk_crtc->cmdq_client)) {
dev_dbg(dev, "mtk_crtc %d failed to create mailbox client, writing register by CPU now\n",
drm_crtc_index(&mtk_crtc->base));
mtk_crtc->cmdq_chan = NULL;
mtk_crtc->cmdq_client = NULL;
}
if (mtk_crtc->cmdq_chan) {
if (mtk_crtc->cmdq_client) {
ret = of_property_read_u32_index(priv->mutex_node,
"mediatek,gce-events",
drm_crtc_index(&mtk_crtc->base),
@ -945,18 +846,8 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
if (ret) {
dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
drm_crtc_index(&mtk_crtc->base));
mbox_free_channel(mtk_crtc->cmdq_chan);
mtk_crtc->cmdq_chan = NULL;
} else {
ret = mtk_drm_cmdq_pkt_create(mtk_crtc->cmdq_chan,
&mtk_crtc->cmdq_handle,
PAGE_SIZE);
if (ret) {
dev_dbg(dev, "mtk_crtc %d failed to create cmdq packet\n",
drm_crtc_index(&mtk_crtc->base));
mbox_free_channel(mtk_crtc->cmdq_chan);
mtk_crtc->cmdq_chan = NULL;
}
cmdq_mbox_destroy(mtk_crtc->cmdq_client);
mtk_crtc->cmdq_client = NULL;
}
}
#endif


@ -571,13 +571,14 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
}
icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
ret = IS_ERR(icc_path);
if (ret)
if (IS_ERR(icc_path)) {
ret = PTR_ERR(icc_path);
goto fail;
}
ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
ret = IS_ERR(ocmem_icc_path);
if (ret) {
if (IS_ERR(ocmem_icc_path)) {
ret = PTR_ERR(ocmem_icc_path);
/* allow -ENODATA, ocmem icc is optional */
if (ret != -ENODATA)
goto fail;


@ -699,13 +699,14 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
}
icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
ret = IS_ERR(icc_path);
if (ret)
if (IS_ERR(icc_path)) {
ret = PTR_ERR(icc_path);
goto fail;
}
ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
ret = IS_ERR(ocmem_icc_path);
if (ret) {
if (IS_ERR(ocmem_icc_path)) {
ret = PTR_ERR(ocmem_icc_path);
/* allow -ENODATA, ocmem icc is optional */
if (ret != -ENODATA)
goto fail;
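Both the a3xx and a4xx hunks fix the same broken idiom: IS_ERR() yields a boolean, so assigning it to ret made the probe path fail with the meaningless value 1 instead of a real negative errno. A minimal sketch of the corrected propagation (surrounding probe code elided):

icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
if (IS_ERR(icc_path)) {
	ret = PTR_ERR(icc_path); /* propagate the actual -errno, not the boolean 1 */
	goto fail;
}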


@ -296,6 +296,8 @@ int a6xx_gmu_set_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
u32 val;
int request, ack;
WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
return -EINVAL;
@ -337,6 +339,8 @@ void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
{
int bit;
WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
return;
@ -1482,6 +1486,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
if (!pdev)
return -ENODEV;
mutex_init(&gmu->lock);
gmu->dev = &pdev->dev;
of_dma_configure(gmu->dev, node, true);


@ -44,6 +44,9 @@ struct a6xx_gmu_bo {
struct a6xx_gmu {
struct device *dev;
/* For serializing communication with the GMU: */
struct mutex lock;
struct msm_gem_address_space *aspace;
void * __iomem mmio;


@ -106,7 +106,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
u32 asid;
u64 memptr = rbmemptr(ring, ttbr0);
if (ctx == a6xx_gpu->cur_ctx)
if (ctx->seqno == a6xx_gpu->cur_ctx_seqno)
return;
if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
@ -139,7 +139,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
OUT_PKT7(ring, CP_EVENT_WRITE, 1);
OUT_RING(ring, 0x31);
a6xx_gpu->cur_ctx = ctx;
a6xx_gpu->cur_ctx_seqno = ctx->seqno;
}
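Comparing seqnos instead of pointers sidesteps an ABA hazard: a context can be freed and a new one allocated at the same address, so the pointer check could falsely match a different context (the a6xx_gpu.h hunk further down documents this). The same check, annotated:

if (ctx == a6xx_gpu->cur_ctx)              /* old: pointer identity -- may match a recycled allocation */
	return;
if (ctx->seqno == a6xx_gpu->cur_ctx_seqno) /* new: a monotonically increasing, never-reused seqno */
	return;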
static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
@ -881,7 +881,7 @@ static int a6xx_zap_shader_init(struct msm_gpu *gpu)
A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \
A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR)
static int a6xx_hw_init(struct msm_gpu *gpu)
static int hw_init(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
@ -1081,7 +1081,7 @@ static int a6xx_hw_init(struct msm_gpu *gpu)
/* Always come up on rb 0 */
a6xx_gpu->cur_ring = gpu->rb[0];
a6xx_gpu->cur_ctx = NULL;
a6xx_gpu->cur_ctx_seqno = 0;
/* Enable the SQE_to start the CP engine */
gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1);
@ -1135,6 +1135,19 @@ out:
return ret;
}
static int a6xx_hw_init(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
int ret;
mutex_lock(&a6xx_gpu->gmu.lock);
ret = hw_init(gpu);
mutex_unlock(&a6xx_gpu->gmu.lock);
return ret;
}
static void a6xx_dump(struct msm_gpu *gpu)
{
DRM_DEV_INFO(&gpu->pdev->dev, "status: %08x\n",
@ -1509,7 +1522,9 @@ static int a6xx_pm_resume(struct msm_gpu *gpu)
trace_msm_gpu_resume(0);
mutex_lock(&a6xx_gpu->gmu.lock);
ret = a6xx_gmu_resume(a6xx_gpu);
mutex_unlock(&a6xx_gpu->gmu.lock);
if (ret)
return ret;
@ -1532,7 +1547,9 @@ static int a6xx_pm_suspend(struct msm_gpu *gpu)
msm_devfreq_suspend(gpu);
mutex_lock(&a6xx_gpu->gmu.lock);
ret = a6xx_gmu_stop(a6xx_gpu);
mutex_unlock(&a6xx_gpu->gmu.lock);
if (ret)
return ret;
@ -1547,18 +1564,19 @@ static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
static DEFINE_MUTEX(perfcounter_oob);
mutex_lock(&perfcounter_oob);
mutex_lock(&a6xx_gpu->gmu.lock);
/* Force the GPU power on so we can read this register */
a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
*value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
mutex_unlock(&perfcounter_oob);
mutex_unlock(&a6xx_gpu->gmu.lock);
return 0;
}
@ -1622,6 +1640,16 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
return (unsigned long)busy_time;
}
void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
mutex_lock(&a6xx_gpu->gmu.lock);
a6xx_gmu_set_freq(gpu, opp);
mutex_unlock(&a6xx_gpu->gmu.lock);
}
static struct msm_gem_address_space *
a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
{
@ -1766,7 +1794,7 @@ static const struct adreno_gpu_funcs funcs = {
#endif
.gpu_busy = a6xx_gpu_busy,
.gpu_get_freq = a6xx_gmu_get_freq,
.gpu_set_freq = a6xx_gmu_set_freq,
.gpu_set_freq = a6xx_gpu_set_freq,
#if defined(CONFIG_DRM_MSM_GPU_STATE)
.gpu_state_get = a6xx_gpu_state_get,
.gpu_state_put = a6xx_gpu_state_put,


@ -19,7 +19,16 @@ struct a6xx_gpu {
uint64_t sqe_iova;
struct msm_ringbuffer *cur_ring;
struct msm_file_private *cur_ctx;
/**
* cur_ctx_seqno:
*
* The ctx->seqno value of the context with current pgtables
* installed. Tracked by seqno rather than pointer value to
* avoid dangling pointers, and cases where a ctx can be freed
* and a new one created with the same address.
*/
int cur_ctx_seqno;
struct a6xx_gmu gmu;


@ -794,7 +794,7 @@ static const struct dpu_pingpong_cfg sm8150_pp[] = {
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
-1),
PP_BLK("pingpong_5", PINGPONG_5, 0x72800, MERGE_3D_2, sdm845_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
-1),
};


@ -1125,6 +1125,20 @@ static void mdp5_crtc_reset(struct drm_crtc *crtc)
__drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
}
static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = {
.set_config = drm_atomic_helper_set_config,
.destroy = mdp5_crtc_destroy,
.page_flip = drm_atomic_helper_page_flip,
.reset = mdp5_crtc_reset,
.atomic_duplicate_state = mdp5_crtc_duplicate_state,
.atomic_destroy_state = mdp5_crtc_destroy_state,
.atomic_print_state = mdp5_crtc_atomic_print_state,
.get_vblank_counter = mdp5_crtc_get_vblank_counter,
.enable_vblank = msm_crtc_enable_vblank,
.disable_vblank = msm_crtc_disable_vblank,
.get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp,
};
static const struct drm_crtc_funcs mdp5_crtc_funcs = {
.set_config = drm_atomic_helper_set_config,
.destroy = mdp5_crtc_destroy,
@ -1313,6 +1327,8 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev,
mdp5_crtc->lm_cursor_enabled = cursor_plane ? false : true;
drm_crtc_init_with_planes(dev, crtc, plane, cursor_plane,
cursor_plane ?
&mdp5_crtc_no_lm_cursor_funcs :
&mdp5_crtc_funcs, NULL);
drm_flip_work_init(&mdp5_crtc->unref_cursor_work,


@ -1309,14 +1309,14 @@ static int dp_pm_resume(struct device *dev)
* cannot declare the display connected unless the
* HDMI cable is plugged in and the dongle's
* sink_count becomes 1;
* also, only signal audio when disconnected
*/
if (dp->link->sink_count)
if (dp->link->sink_count) {
dp->dp_display.is_connected = true;
else
} else {
dp->dp_display.is_connected = false;
dp_display_handle_plugged_change(g_dp_display,
dp->dp_display.is_connected);
dp_display_handle_plugged_change(g_dp_display, false);
}
DRM_DEBUG_DP("After, sink_count=%d is_connected=%d core_inited=%d power_on=%d\n",
dp->link->sink_count, dp->dp_display.is_connected,


@ -215,8 +215,10 @@ int msm_dsi_modeset_init(struct msm_dsi *msm_dsi, struct drm_device *dev,
goto fail;
}
if (!msm_dsi_manager_validate_current_config(msm_dsi->id))
if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) {
ret = -EINVAL;
goto fail;
}
msm_dsi->encoder = encoder;


@ -451,7 +451,7 @@ static int dsi_bus_clk_enable(struct msm_dsi_host *msm_host)
return 0;
err:
for (; i > 0; i--)
while (--i >= 0)
clk_disable_unprepare(msm_host->bus_clks[i]);
return ret;
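The old unwind loop had two off-by-one bugs: it never disabled bus_clks[0], and its first iteration called clk_disable_unprepare() on index i, the clock whose clk_prepare_enable() had just failed. `while (--i >= 0)` walks back over exactly the clocks that were enabled. A self-contained sketch of the idiom (clks[] and n are hypothetical):

for (i = 0; i < n; i++) {
	ret = clk_prepare_enable(clks[i]);
	if (ret)
		goto err;
}
return 0;

err:
while (--i >= 0)                /* undo only what actually succeeded */
	clk_disable_unprepare(clks[i]);
return ret;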


@ -110,14 +110,13 @@ static struct dsi_pll_14nm *pll_14nm_list[DSI_MAX];
static bool pll_14nm_poll_for_ready(struct dsi_pll_14nm *pll_14nm,
u32 nb_tries, u32 timeout_us)
{
bool pll_locked = false;
bool pll_locked = false, pll_ready = false;
void __iomem *base = pll_14nm->phy->pll_base;
u32 tries, val;
tries = nb_tries;
while (tries--) {
val = dsi_phy_read(base +
REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
val = dsi_phy_read(base + REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
pll_locked = !!(val & BIT(5));
if (pll_locked)
@ -126,23 +125,24 @@ static bool pll_14nm_poll_for_ready(struct dsi_pll_14nm *pll_14nm,
udelay(timeout_us);
}
if (!pll_locked) {
tries = nb_tries;
while (tries--) {
val = dsi_phy_read(base +
REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
pll_locked = !!(val & BIT(0));
if (!pll_locked)
goto out;
if (pll_locked)
break;
tries = nb_tries;
while (tries--) {
val = dsi_phy_read(base + REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
pll_ready = !!(val & BIT(0));
udelay(timeout_us);
}
if (pll_ready)
break;
udelay(timeout_us);
}
DBG("DSI PLL is %slocked", pll_locked ? "" : "*not* ");
out:
DBG("DSI PLL is %slocked, %sready", pll_locked ? "" : "*not* ", pll_ready ? "" : "*not* ");
return pll_locked;
return pll_locked && pll_ready;
}
static void dsi_pll_14nm_config_init(struct dsi_pll_config *pconf)


@ -428,7 +428,7 @@ static int pll_28nm_register(struct dsi_pll_28nm *pll_28nm, struct clk_hw **prov
bytediv->reg = pll_28nm->phy->pll_base + REG_DSI_28nm_8960_PHY_PLL_CTRL_9;
snprintf(parent_name, 32, "dsi%dvco_clk", pll_28nm->phy->id);
snprintf(clk_name, 32, "dsi%dpllbyte", pll_28nm->phy->id);
snprintf(clk_name, 32, "dsi%dpllbyte", pll_28nm->phy->id + 1);
bytediv_init.name = clk_name;
bytediv_init.ops = &clk_bytediv_ops;
@ -442,7 +442,7 @@ static int pll_28nm_register(struct dsi_pll_28nm *pll_28nm, struct clk_hw **prov
return ret;
provided_clocks[DSI_BYTE_PLL_CLK] = &bytediv->hw;
snprintf(clk_name, 32, "dsi%dpll", pll_28nm->phy->id);
snprintf(clk_name, 32, "dsi%dpll", pll_28nm->phy->id + 1);
/* DIV3 */
hw = devm_clk_hw_register_divider(dev, clk_name,
parent_name, 0, pll_28nm->phy->pll_base +


@ -1116,7 +1116,7 @@ void msm_edp_ctrl_power(struct edp_ctrl *ctrl, bool on)
int msm_edp_ctrl_init(struct msm_edp *edp)
{
struct edp_ctrl *ctrl = NULL;
struct device *dev = &edp->pdev->dev;
struct device *dev;
int ret;
if (!edp) {
@ -1124,6 +1124,7 @@ int msm_edp_ctrl_init(struct msm_edp *edp)
return -EINVAL;
}
dev = &edp->pdev->dev;
ctrl = devm_kzalloc(dev, sizeof(*ctrl), GFP_KERNEL);
if (!ctrl)
return -ENOMEM;


@ -630,10 +630,11 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
if (ret)
goto err_msm_uninit;
ret = msm_disp_snapshot_init(ddev);
if (ret)
DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret);
if (kms) {
ret = msm_disp_snapshot_init(ddev);
if (ret)
DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret);
}
drm_mode_config_reset(ddev);
#ifdef CONFIG_DRM_FBDEV_EMULATION
@ -682,6 +683,7 @@ static void load_gpu(struct drm_device *dev)
static int context_init(struct drm_device *dev, struct drm_file *file)
{
static atomic_t ident = ATOMIC_INIT(0);
struct msm_drm_private *priv = dev->dev_private;
struct msm_file_private *ctx;
@ -689,12 +691,17 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
if (!ctx)
return -ENOMEM;
INIT_LIST_HEAD(&ctx->submitqueues);
rwlock_init(&ctx->queuelock);
kref_init(&ctx->ref);
msm_submitqueue_init(dev, ctx);
ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current);
file->driver_priv = ctx;
ctx->seqno = atomic_inc_return(&ident);
return 0;
}


@ -53,14 +53,6 @@ struct msm_disp_state;
#define FRAC_16_16(mult, div) (((mult) << 16) / (div))
struct msm_file_private {
rwlock_t queuelock;
struct list_head submitqueues;
int queueid;
struct msm_gem_address_space *aspace;
struct kref ref;
};
enum msm_mdp_plane_property {
PLANE_PROP_ZPOS,
PLANE_PROP_ALPHA,
@ -488,41 +480,6 @@ void msm_writel(u32 data, void __iomem *addr);
u32 msm_readl(const void __iomem *addr);
void msm_rmw(void __iomem *addr, u32 mask, u32 or);
struct msm_gpu_submitqueue;
int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
u32 id);
int msm_submitqueue_create(struct drm_device *drm,
struct msm_file_private *ctx,
u32 prio, u32 flags, u32 *id);
int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
struct drm_msm_submitqueue_query *args);
int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
void msm_submitqueue_close(struct msm_file_private *ctx);
void msm_submitqueue_destroy(struct kref *kref);
static inline void __msm_file_private_destroy(struct kref *kref)
{
struct msm_file_private *ctx = container_of(kref,
struct msm_file_private, ref);
msm_gem_address_space_put(ctx->aspace);
kfree(ctx);
}
static inline void msm_file_private_put(struct msm_file_private *ctx)
{
kref_put(&ctx->ref, __msm_file_private_destroy);
}
static inline struct msm_file_private *msm_file_private_get(
struct msm_file_private *ctx)
{
kref_get(&ctx->ref);
return ctx;
}
#define DBG(fmt, ...) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__)
#define VERB(fmt, ...) if (0) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__)
@ -547,7 +504,7 @@ static inline int align_pitch(int width, int bpp)
static inline unsigned long timeout_to_jiffies(const ktime_t *timeout)
{
ktime_t now = ktime_get();
unsigned long remaining_jiffies;
s64 remaining_jiffies;
if (ktime_compare(*timeout, now) < 0) {
remaining_jiffies = 0;
@ -556,7 +513,7 @@ static inline unsigned long timeout_to_jiffies(const ktime_t *timeout)
remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ);
}
return remaining_jiffies;
return clamp(remaining_jiffies, 0LL, (s64)INT_MAX);
}
#endif /* __MSM_DRV_H__ */
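Typing remaining_jiffies as s64 and clamping it matters because a caller-supplied timeout far in the future divides down to a jiffy count that overflows 32-bit unsigned long, and the result is eventually handed to wait APIs taking int/long. A quick numeric sanity sketch, assuming HZ=1000 (illustrative only):

ktime_t far = ktime_add_ns(ktime_get(), 100LL * 365 * 24 * 3600 * NSEC_PER_SEC); /* ~100 years */
s64 rem = ktime_divns(ktime_sub(far, ktime_get()), NSEC_PER_SEC / HZ);
/* rem is ~3.15e12 jiffies, which would wrap a 32-bit unsigned long;
 * the clamp caps it at an "effectively forever" but well-defined wait.
 */
unsigned long safe = clamp(rem, 0LL, (s64)INT_MAX);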


@ -46,7 +46,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
if (!submit)
return ERR_PTR(-ENOMEM);
ret = drm_sched_job_init(&submit->base, &queue->entity, queue);
ret = drm_sched_job_init(&submit->base, queue->entity, queue);
if (ret) {
kfree(submit);
return ERR_PTR(ret);
@ -171,7 +171,8 @@ out:
static int submit_lookup_cmds(struct msm_gem_submit *submit,
struct drm_msm_gem_submit *args, struct drm_file *file)
{
unsigned i, sz;
unsigned i;
size_t sz;
int ret = 0;
for (i = 0; i < args->nr_cmds; i++) {
@ -907,7 +908,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
/* The scheduler owns a ref now: */
msm_gem_submit_get(submit);
drm_sched_entity_push_job(&submit->base, &queue->entity);
drm_sched_entity_push_job(&submit->base, queue->entity);
args->fence = submit->fence_id;


@ -257,6 +257,39 @@ struct msm_gpu_perfcntr {
*/
#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_MIN)
/**
* struct msm_file_private - per-drm_file context
*
* @queuelock: synchronizes access to submitqueues list
* @submitqueues: list of &msm_gpu_submitqueue created by userspace
* @queueid: counter incremented each time a submitqueue is created,
* used to assign &msm_gpu_submitqueue.id
* @aspace: the per-process GPU address-space
* @ref: reference count
* @seqno: unique per process seqno
*/
struct msm_file_private {
rwlock_t queuelock;
struct list_head submitqueues;
int queueid;
struct msm_gem_address_space *aspace;
struct kref ref;
int seqno;
/**
* entities:
*
* Table of per-priority-level sched entities used by submitqueues
* associated with this &drm_file. Because some userspace apps
* make assumptions about rendering from multiple gl contexts
* (of the same priority) within the process happening in FIFO
* order without requiring any fencing beyond MakeCurrent(), we
* create at most one &drm_sched_entity per-process per-priority-
* level.
*/
struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS];
};
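A sketch of the lazy lookup this table supports (mirroring the get_sched_entity() hunk near the end of this section; create_entity() is a hypothetical stand-in):

unsigned idx = (ring_nr * NR_SCHED_PRIORITIES) + sched_prio;

if (!ctx->entities[idx]) {
	/* created once, then shared by every submitqueue at this (ring, prio) slot */
	ctx->entities[idx] = create_entity(ring, sched_prio);
}
return ctx->entities[idx];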
/**
* msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
*
@ -304,6 +337,8 @@ static inline int msm_gpu_convert_priority(struct msm_gpu *gpu, int prio,
}
/**
* struct msm_gpu_submitqueue - Userspace created context.
*
* A submitqueue is associated with a gl context or vk queue (or equiv)
* in userspace.
*
@ -321,7 +356,7 @@ static inline int msm_gpu_convert_priority(struct msm_gpu *gpu, int prio,
* seqno, protected by submitqueue lock
* @lock: submitqueue lock
* @ref: reference count
* @entity: the submit job-queue
*/
struct msm_gpu_submitqueue {
int id;
@ -333,7 +368,7 @@ struct msm_gpu_submitqueue {
struct idr fence_idr;
struct mutex lock;
struct kref ref;
struct drm_sched_entity entity;
struct drm_sched_entity *entity;
};
struct msm_gpu_state_bo {
@ -421,6 +456,33 @@ static inline void gpu_write64(struct msm_gpu *gpu, u32 lo, u32 hi, u64 val)
int msm_gpu_pm_suspend(struct msm_gpu *gpu);
int msm_gpu_pm_resume(struct msm_gpu *gpu);
int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
u32 id);
int msm_submitqueue_create(struct drm_device *drm,
struct msm_file_private *ctx,
u32 prio, u32 flags, u32 *id);
int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
struct drm_msm_submitqueue_query *args);
int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
void msm_submitqueue_close(struct msm_file_private *ctx);
void msm_submitqueue_destroy(struct kref *kref);
void __msm_file_private_destroy(struct kref *kref);
static inline void msm_file_private_put(struct msm_file_private *ctx)
{
kref_put(&ctx->ref, __msm_file_private_destroy);
}
static inline struct msm_file_private *msm_file_private_get(
struct msm_file_private *ctx)
{
kref_get(&ctx->ref);
return ctx;
}
void msm_devfreq_init(struct msm_gpu *gpu);
void msm_devfreq_cleanup(struct msm_gpu *gpu);
void msm_devfreq_resume(struct msm_gpu *gpu);


@ -151,6 +151,9 @@ void msm_devfreq_active(struct msm_gpu *gpu)
unsigned int idle_time;
unsigned long target_freq = df->idle_freq;
if (!df->devfreq)
return;
/*
* Hold devfreq lock to synchronize with get_dev_status()/
* target() callbacks
@ -186,6 +189,9 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
struct msm_gpu_devfreq *df = &gpu->devfreq;
unsigned long idle_freq, target_freq = 0;
if (!df->devfreq)
return;
/*
* Hold devfreq lock to synchronize with get_dev_status()/
* target() callbacks


@ -7,6 +7,24 @@
#include "msm_gpu.h"
void __msm_file_private_destroy(struct kref *kref)
{
struct msm_file_private *ctx = container_of(kref,
struct msm_file_private, ref);
int i;
for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) {
if (!ctx->entities[i])
continue;
drm_sched_entity_destroy(ctx->entities[i]);
kfree(ctx->entities[i]);
}
msm_gem_address_space_put(ctx->aspace);
kfree(ctx);
}
void msm_submitqueue_destroy(struct kref *kref)
{
struct msm_gpu_submitqueue *queue = container_of(kref,
@ -14,8 +32,6 @@ void msm_submitqueue_destroy(struct kref *kref)
idr_destroy(&queue->fence_idr);
drm_sched_entity_destroy(&queue->entity);
msm_file_private_put(queue->ctx);
kfree(queue);
@ -61,13 +77,47 @@ void msm_submitqueue_close(struct msm_file_private *ctx)
}
}
static struct drm_sched_entity *
get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
unsigned ring_nr, enum drm_sched_priority sched_prio)
{
static DEFINE_MUTEX(entity_lock);
unsigned idx = (ring_nr * NR_SCHED_PRIORITIES) + sched_prio;
/* We should have already validated that the requested priority is
* valid by the time we get here.
*/
if (WARN_ON(idx >= ARRAY_SIZE(ctx->entities)))
return ERR_PTR(-EINVAL);
mutex_lock(&entity_lock);
if (!ctx->entities[idx]) {
struct drm_sched_entity *entity;
struct drm_gpu_scheduler *sched = &ring->sched;
int ret;
entity = kzalloc(sizeof(*ctx->entities[idx]), GFP_KERNEL);
if (!entity) {
	mutex_unlock(&entity_lock);
	return ERR_PTR(-ENOMEM);
}

ret = drm_sched_entity_init(entity, sched_prio, &sched, 1, NULL);
if (ret) {
	mutex_unlock(&entity_lock);
	kfree(entity);
	return ERR_PTR(ret);
}
ctx->entities[idx] = entity;
}
mutex_unlock(&entity_lock);
return ctx->entities[idx];
}
int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
u32 prio, u32 flags, u32 *id)
{
struct msm_drm_private *priv = drm->dev_private;
struct msm_gpu_submitqueue *queue;
struct msm_ringbuffer *ring;
struct drm_gpu_scheduler *sched;
enum drm_sched_priority sched_prio;
unsigned ring_nr;
int ret;
@ -91,12 +141,10 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
queue->flags = flags;
queue->ring_nr = ring_nr;
ring = priv->gpu->rb[ring_nr];
sched = &ring->sched;
ret = drm_sched_entity_init(&queue->entity,
sched_prio, &sched, 1, NULL);
if (ret) {
queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr],
ring_nr, sched_prio);
if (IS_ERR(queue->entity)) {
ret = PTR_ERR(queue->entity);
kfree(queue);
return ret;
}
@ -140,10 +188,6 @@ int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx)
*/
default_prio = DIV_ROUND_UP(max_priority, 2);
INIT_LIST_HEAD(&ctx->submitqueues);
rwlock_init(&ctx->queuelock);
return msm_submitqueue_create(drm, ctx, default_prio, 0, NULL);
}


@ -82,7 +82,7 @@ g84_fifo_chan_engine_fini(struct nvkm_fifo_chan *base,
if (offset < 0)
return 0;
engn = fifo->base.func->engine_id(&fifo->base, engine);
engn = fifo->base.func->engine_id(&fifo->base, engine) - 1;
save = nvkm_mask(device, 0x002520, 0x0000003f, 1 << engn);
nvkm_wr32(device, 0x0032fc, chan->base.inst->addr >> 12);
done = nvkm_msec(device, 2000,


@ -295,6 +295,7 @@ config DRM_PANEL_OLIMEX_LCD_OLINUXINO
depends on OF
depends on I2C
depends on BACKLIGHT_CLASS_DEVICE
select CRC32
help
The panel is used with different sizes LCDs, from 480x272 to
1280x800, and 24 bit per pixel.


@ -214,7 +214,7 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga
}
ret = 0;
#if defined(__i386__) || defined(__x86_64__)
#ifdef CONFIG_X86
wbinvd();
#else
mb();


@ -86,12 +86,20 @@ int rcar_du_encoder_init(struct rcar_du_device *rcdu,
}
/*
* Create and initialize the encoder. On Gen3 skip the LVDS1 output if
* Create and initialize the encoder. On Gen3, skip the LVDS1 output if
* the LVDS1 encoder is used as a companion for LVDS0 in dual-link
* mode.
* mode, or any LVDS output if it isn't connected. The latter may happen
* on D3 or E3 as the LVDS encoders are needed to provide the pixel
* clock to the DU, even when the LVDS outputs are not used.
*/
if (rcdu->info->gen >= 3 && output == RCAR_DU_OUTPUT_LVDS1) {
if (rcar_lvds_dual_link(bridge))
if (rcdu->info->gen >= 3) {
if (output == RCAR_DU_OUTPUT_LVDS1 &&
rcar_lvds_dual_link(bridge))
return -ENOLINK;
if ((output == RCAR_DU_OUTPUT_LVDS0 ||
output == RCAR_DU_OUTPUT_LVDS1) &&
!rcar_lvds_is_connected(bridge))
return -ENOLINK;
}


@ -576,6 +576,9 @@ static int rcar_lvds_attach(struct drm_bridge *bridge,
{
struct rcar_lvds *lvds = bridge_to_rcar_lvds(bridge);
if (!lvds->next_bridge)
return 0;
return drm_bridge_attach(bridge->encoder, lvds->next_bridge, bridge,
flags);
}
@ -598,6 +601,14 @@ bool rcar_lvds_dual_link(struct drm_bridge *bridge)
}
EXPORT_SYMBOL_GPL(rcar_lvds_dual_link);
bool rcar_lvds_is_connected(struct drm_bridge *bridge)
{
struct rcar_lvds *lvds = bridge_to_rcar_lvds(bridge);
return lvds->next_bridge != NULL;
}
EXPORT_SYMBOL_GPL(rcar_lvds_is_connected);
/* -----------------------------------------------------------------------------
* Probe & Remove
*/


@ -16,6 +16,7 @@ struct drm_bridge;
int rcar_lvds_clk_enable(struct drm_bridge *bridge, unsigned long freq);
void rcar_lvds_clk_disable(struct drm_bridge *bridge);
bool rcar_lvds_dual_link(struct drm_bridge *bridge);
bool rcar_lvds_is_connected(struct drm_bridge *bridge);
#else
static inline int rcar_lvds_clk_enable(struct drm_bridge *bridge,
unsigned long freq)
@ -27,6 +28,10 @@ static inline bool rcar_lvds_dual_link(struct drm_bridge *bridge)
{
return false;
}
static inline bool rcar_lvds_is_connected(struct drm_bridge *bridge)
{
return false;
}
#endif /* CONFIG_DRM_RCAR_LVDS */
#endif /* __RCAR_LVDS_H__ */


@ -738,7 +738,7 @@ static irqreturn_t fxls8962af_interrupt(int irq, void *p)
if (reg & FXLS8962AF_INT_STATUS_SRC_BUF) {
ret = fxls8962af_fifo_flush(indio_dev);
if (ret)
if (ret < 0)
return IRQ_NONE;
return IRQ_HANDLED;
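fxls8962af_fifo_flush() apparently returns the number of samples flushed on success, so the old `if (ret)` treated any non-empty flush as a failure and reported IRQ_NONE. A sketch of the assumed return convention:

ret = fxls8962af_fifo_flush(indio_dev); /* assumed: >= 0 samples flushed, < 0 is -errno */
if (ret < 0)
	return IRQ_NONE;   /* genuine failure */
return IRQ_HANDLED;        /* zero or more flushed samples is success */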


@ -293,6 +293,7 @@ static const struct ad_sigma_delta_info ad7192_sigma_delta_info = {
.has_registers = true,
.addr_shift = 3,
.read_mask = BIT(6),
.irq_flags = IRQF_TRIGGER_FALLING,
};
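Both sigma-delta changes in this diff (ad7192 gains an explicit flag above; the ad7780 hunk just below switches away from level-low) fit the same hardware behavior, which is inferred here from the flags rather than stated in the diff: the shared DOUT/!RDY line falls when a conversion completes and idles low until it is read back.

/* Assumed !RDY timing (sketch):
 *
 *   DOUT/!RDY  ------\__________/------
 *                    ^ falling edge = data ready; the line stays low until read
 *
 * IRQF_TRIGGER_FALLING fires once per conversion at the transition, while
 * IRQF_TRIGGER_LOW would keep re-asserting for as long as the line is low.
 */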
static const struct ad_sd_calib_data ad7192_calib_arr[8] = {


@ -203,7 +203,7 @@ static const struct ad_sigma_delta_info ad7780_sigma_delta_info = {
.set_mode = ad7780_set_mode,
.postprocess_sample = ad7780_postprocess_sample,
.has_registers = false,
.irq_flags = IRQF_TRIGGER_LOW,
.irq_flags = IRQF_TRIGGER_FALLING,
};
#define _AD7780_CHANNEL(_bits, _wordsize, _mask_all) \
