Merge branch 'sched/urgent' into x86/mm, to pick up dependent fix

Andy will need the following scheduler fix for the PCID series:

  252d2a4117: sched/core: Idle_task_exit() shouldn't use switch_mm_irqs_off()

So do a cross-merge.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Ingo Molnar 2017-06-13 08:47:22 +02:00
commit 3f365cf304
318 changed files with 2722 additions and 1568 deletions

View File

@ -866,6 +866,15 @@
dscc4.setup= [NET]
dt_cpu_ftrs= [PPC]
Format: {"off" | "known"}
Control how the dt_cpu_ftrs device-tree binding is
used for CPU feature discovery and setup (if it
exists).
off: Do not use it, fall back to legacy cpu table.
known: Do not pass through unknown features to guests
or userspace, only those that the kernel is aware of.
dump_apple_properties [X86]
Dump name and content of EFI device properties on
x86 Macs. Useful for driver authors to determine

View File

@ -26,6 +26,10 @@ Optional properties:
- interrupt-controller : Indicates the switch is itself an interrupt
controller. This is used for the PHY interrupts.
#interrupt-cells = <2> : Controller uses two cells, number and flag
- eeprom-length : Set to the length of an EEPROM connected to the
switch. Must be set if the switch can not detect
the presence and/or size of a connected EEPROM,
otherwise optional.
- mdio : Container of PHY and devices on the switches MDIO
bus.
- mdio? : Container of PHYs and devices on the external MDIO

View File

@ -0,0 +1,194 @@
The QorIQ DPAA Ethernet Driver
==============================
Authors:
Madalin Bucur <madalin.bucur@nxp.com>
Camelia Groza <camelia.groza@nxp.com>
Contents
========
- DPAA Ethernet Overview
- DPAA Ethernet Supported SoCs
- Configuring DPAA Ethernet in your kernel
- DPAA Ethernet Frame Processing
- DPAA Ethernet Features
- Debugging
DPAA Ethernet Overview
======================
DPAA stands for Data Path Acceleration Architecture and it is a
set of networking acceleration IPs that are available on several
generations of SoCs, both on PowerPC and ARM64.
The Freescale DPAA architecture consists of a series of hardware blocks
that support Ethernet connectivity. The Ethernet driver depends upon the
following drivers in the Linux kernel:
- Peripheral Access Memory Unit (PAMU) (* needed only for PPC platforms)
drivers/iommu/fsl_*
- Frame Manager (FMan)
drivers/net/ethernet/freescale/fman
- Queue Manager (QMan), Buffer Manager (BMan)
drivers/soc/fsl/qbman
A simplified view of the dpaa_eth interfaces mapped to FMan MACs:

  dpaa_eth       /eth0\     ...       /ethN\
  driver        |      |             |      |
  -------------   ----   -----------   ----   -------------
       -Ports  / Tx  Rx \    ...    / Tx  Rx \
  FMan        |          |         |          |
       -MACs  |   MAC0   |         |   MACN   |
             /   dtsec0   \  ...  /   dtsecN   \  (or tgec)
            /               \    /               \ (or memac)
  ---------  --------------  ---  --------------  ---------
       FMan, FMan Port, FMan SP, FMan MURAM drivers
  ---------------------------------------------------------
       FMan HW blocks: MURAM, MACs, Ports, SP
  ---------------------------------------------------------
The dpaa_eth relation to the QMan, BMan and FMan:

               ________________________________
  dpaa_eth    /            eth0                \
  driver     /                                  \
  ---------    -^-    -^-    -^-    ---   ---------
  QMan driver /   \  /   \  /   \   \ /   | BMan   |
              |Rx |  |Rx |  |Tx |  |Tx |  | driver |
  ---------   |Dfl|  |Err|  |Cnf|  |FQs|  |        |
  QMan HW     |FQ |  |FQ |  |FQs|  |   |  |        |
              /   \  /   \  /   \   \ /   |        |
  ---------    ---    ---    ---    -v-   ---------
             |           FMan QMI         |        |
             |  FMan HW         FMan BMI  | BMan HW|
              -----------------------      --------
where the acronyms used above (and in the code) are:
DPAA = Data Path Acceleration Architecture
FMan = DPAA Frame Manager
QMan = DPAA Queue Manager
BMan = DPAA Buffer Manager
QMI = QMan interface in FMan
BMI = BMan interface in FMan
FMan SP = FMan Storage Profiles
MURAM = Multi-user RAM in FMan
FQ = QMan Frame Queue
Rx Dfl FQ = default reception FQ
Rx Err FQ = Rx error frames FQ
Tx Cnf FQ = Tx confirmation FQs
Tx FQs = transmission frame queues
dtsec = datapath three speed Ethernet controller (10/100/1000 Mbps)
tgec = ten gigabit Ethernet controller (10 Gbps)
memac = multirate Ethernet MAC (10/100/1000/10000)
DPAA Ethernet Supported SoCs
============================
The DPAA drivers enable the Ethernet controllers present on the following SoCs:
# PPC
P1023
P2041
P3041
P4080
P5020
P5040
T1023
T1024
T1040
T1042
T2080
T4240
B4860
# ARM
LS1043A
LS1046A
Configuring DPAA Ethernet in your kernel
========================================
To enable the DPAA Ethernet driver, the following Kconfig options are required:
# common for arch/arm64 and arch/powerpc platforms
CONFIG_FSL_DPAA=y
CONFIG_FSL_FMAN=y
CONFIG_FSL_DPAA_ETH=y
CONFIG_FSL_XGMAC_MDIO=y
# for arch/powerpc only
CONFIG_FSL_PAMU=y
# common options needed for the PHYs used on the RDBs
CONFIG_VITESSE_PHY=y
CONFIG_REALTEK_PHY=y
CONFIG_AQUANTIA_PHY=y
DPAA Ethernet Frame Processing
==============================
On Rx, buffers for the incoming frames are retrieved from one of the three
existing buffer pools. The driver initializes and seeds these, each with
buffers of different sizes: 1KB, 2KB and 4KB.
On Tx, all transmitted frames are returned to the driver through Tx
confirmation frame queues. The driver is then responsible for freeing the
buffers. In order to do this properly, a backpointer is added to the buffer
before transmission that points to the skb. When the buffer returns to the
driver on a confirmation FQ, the skb can be correctly consumed.
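As a rough sketch of the backpointer idea (illustrative only, not the actual
dpaa_eth code; the names and buffer layout below are made up for the example),
the driver reserves a little headroom in the data buffer, stores the skb
pointer there before the frame is enqueued for transmission, and reads it back
when the buffer comes back on a Tx confirmation FQ:

  #include <linux/skbuff.h>

  /* Illustrative sketch only -- not the dpaa_eth implementation. */
  struct tx_backptr {
          struct sk_buff *skb;
  };

  /* Before transmission: remember the owning skb in the buffer headroom. */
  static void tx_store_backptr(void *buf_vaddr, struct sk_buff *skb)
  {
          struct tx_backptr *bp = buf_vaddr;

          bp->skb = skb;
  }

  /* On the Tx confirmation FQ: recover the skb and free it. */
  static void tx_confirm(void *buf_vaddr)
  {
          struct tx_backptr *bp = buf_vaddr;

          dev_kfree_skb_any(bp->skb);
  }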
DPAA Ethernet Features
======================
Currently the DPAA Ethernet driver enables the basic features required for
a Linux Ethernet driver. The support for advanced features will be added
gradually.
The driver has Rx and Tx checksum offloading for UDP and TCP. Currently the Rx
checksum offload feature is enabled by default and cannot be controlled through
ethtool.
The driver has support for multiple prioritized Tx traffic classes. Priorities
range from 0 (lowest) to 3 (highest). These are mapped to HW workqueues with
strict priority levels. Each traffic class contains NR_CPU TX queues. By
default, only one traffic class is enabled and the lowest priority Tx queues
are used. Higher priority traffic classes can be enabled with the mqprio
qdisc. For example, all four traffic classes are enabled on an interface with
the following command. Furthermore, skb priority levels are mapped to traffic
classes as follows:
* priorities 0 to 3 - traffic class 0 (low priority)
* priorities 4 to 7 - traffic class 1 (medium-low priority)
* priorities 8 to 11 - traffic class 2 (medium-high priority)
* priorities 12 to 15 - traffic class 3 (high priority)
tc qdisc add dev <int> root handle 1: \
mqprio num_tc 4 map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1
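As a user-space illustration (not part of the driver; error handling trimmed),
an application can steer its own traffic into one of the classes above by
setting the skb priority on its socket with SO_PRIORITY; with the mqprio map
shown above, priority 12 lands in traffic class 3. Priorities above 6 require
CAP_NET_ADMIN.

  #include <stdio.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  int main(void)
  {
          int prio = 12;  /* traffic class 3 with the mqprio map above */
          int fd = socket(AF_INET, SOCK_DGRAM, 0);

          if (fd < 0)
                  return 1;
          /* Values above 6 need CAP_NET_ADMIN. */
          if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
                  perror("setsockopt(SO_PRIORITY)");
          return 0;
  }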
Debugging
=========
The following statistics are exported for each interface through ethtool:
- interrupt count per CPU
- Rx packets count per CPU
- Tx packets count per CPU
- Tx confirmed packets count per CPU
- Tx S/G frames count per CPU
- Tx error count per CPU
- Rx error count per CPU
- Rx error count per type
- congestion related statistics:
  - congestion status
  - time spent in congestion
  - number of times the device entered congestion
  - dropped packets count per cause
The driver also exports the following information in sysfs:
- the FQ IDs for each FQ type
/sys/devices/platform/dpaa-ethernet.0/net/<int>/fqids
- the IDs of the buffer pools in use
/sys/devices/platform/dpaa-ethernet.0/net/<int>/bpids

View File

@ -1,7 +1,7 @@
TCP protocol
============
Last updated: 9 February 2008
Last updated: 3 June 2017
Contents
========
@ -29,18 +29,19 @@ As of 2.6.13, Linux supports pluggable congestion control algorithms.
A congestion control mechanism can be registered through functions in
tcp_cong.c. The functions used by the congestion control mechanism are
registered via passing a tcp_congestion_ops struct to
tcp_register_congestion_control. As a minimum name, ssthresh,
cong_avoid must be valid.
tcp_register_congestion_control. As a minimum, the congestion control
mechanism must provide a valid name and must implement either ssthresh,
cong_avoid and undo_cwnd hooks or the "omnipotent" cong_control hook.
Private data for a congestion control mechanism is stored in tp->ca_priv.
tcp_ca(tp) returns a pointer to this space. This is preallocated space - it
is important to check the size of your private data will fit this space, or
alternatively space could be allocated elsewhere and a pointer to it could
alternatively, space could be allocated elsewhere and a pointer to it could
be stored here.
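As a minimal sketch of what such a registration can look like (illustrative
only: the names are made up and the hooks simply fall back to the exported
Reno helpers):

  #include <linux/module.h>
  #include <net/tcp.h>

  static u32 example_ssthresh(struct sock *sk)
  {
          return max(tcp_sk(sk)->snd_cwnd >> 1U, 2U);  /* halve cwnd, Reno-style */
  }

  static void example_cong_avoid(struct sock *sk, u32 ack, u32 acked)
  {
          tcp_reno_cong_avoid(sk, ack, acked);          /* standard Reno growth */
  }

  static u32 example_undo_cwnd(struct sock *sk)
  {
          return tcp_reno_undo_cwnd(sk);                /* undo after spurious loss */
  }

  static struct tcp_congestion_ops example_cong_ops __read_mostly = {
          .name        = "example",
          .owner       = THIS_MODULE,
          .ssthresh    = example_ssthresh,
          .cong_avoid  = example_cong_avoid,
          .undo_cwnd   = example_undo_cwnd,
  };

  static int __init example_init(void)
  {
          /* private state (tp->ca_priv, see above) must fit the preallocated space */
          BUILD_BUG_ON(sizeof(u64) > ICSK_CA_PRIV_SIZE);
          return tcp_register_congestion_control(&example_cong_ops);
  }

  static void __exit example_exit(void)
  {
          tcp_unregister_congestion_control(&example_cong_ops);
  }

  module_init(example_init);
  module_exit(example_exit);
  MODULE_LICENSE("GPL");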
There are three kinds of congestion control algorithms currently: The
simplest ones are derived from TCP reno (highspeed, scalable) and just
provide an alternative the congestion window calculation. More complex
provide an alternative congestion window calculation. More complex
ones like BIC try to look at other events to provide better
heuristics. There are also round trip time based algorithms like
Vegas and Westwood+.
@ -49,21 +50,15 @@ Good TCP congestion control is a complex problem because the algorithm
needs to maintain fairness and performance. Please review current
research and RFC's before developing new modules.
The method that is used to determine which congestion control mechanism is
determined by the setting of the sysctl net.ipv4.tcp_congestion_control.
The default congestion control will be the last one registered (LIFO);
so if you built everything as modules, the default will be reno. If you
build with the defaults from Kconfig, then CUBIC will be builtin (not a
module) and it will end up the default.
The default congestion control mechanism is chosen based on the
DEFAULT_TCP_CONG Kconfig parameter. If you really want a particular default
value then you can set it using sysctl net.ipv4.tcp_congestion_control. The
module will be autoloaded if needed and you will get the expected protocol. If
you ask for an unknown congestion method, then the sysctl attempt will fail.
If you really want a particular default value then you will need
to set it with the sysctl. If you use a sysctl, the module will be autoloaded
if needed and you will get the expected protocol. If you ask for an
unknown congestion method, then the sysctl attempt will fail.
If you remove a tcp congestion control module, then you will get the next
If you remove a TCP congestion control module, then you will get the next
available one. Since reno cannot be built as a module, and cannot be
deleted, it will always be available.
removed, it will always be available.
How the new TCP output machine [nyi] works.
===========================================

View File

@ -1172,7 +1172,7 @@ N: clps711x
ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE
M: Hartley Sweeten <hsweeten@visionengravers.com>
M: Ryan Mallon <rmallon@gmail.com>
M: Alexander Sverdlin <alexander.sverdlin@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-ep93xx/
@ -1489,13 +1489,15 @@ M: Gregory Clement <gregory.clement@free-electrons.com>
M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-mvebu/
F: drivers/rtc/rtc-armada38x.c
F: arch/arm/boot/dts/armada*
F: arch/arm/boot/dts/kirkwood*
F: arch/arm/configs/mvebu_*_defconfig
F: arch/arm/mach-mvebu/
F: arch/arm64/boot/dts/marvell/armada*
F: drivers/cpufreq/mvebu-cpufreq.c
F: arch/arm/configs/mvebu_*_defconfig
F: drivers/irqchip/irq-armada-370-xp.c
F: drivers/irqchip/irq-mvebu-*
F: drivers/rtc/rtc-armada38x.c
ARM/Marvell Berlin SoC support
M: Jisheng Zhang <jszhang@marvell.com>
@ -1721,7 +1723,6 @@ N: rockchip
ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
M: Kukjin Kim <kgene@kernel.org>
M: Krzysztof Kozlowski <krzk@kernel.org>
R: Javier Martinez Canillas <javier@osg.samsung.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
Q: https://patchwork.kernel.org/project/linux-samsung-soc/list/
@ -1829,7 +1830,6 @@ F: drivers/edac/altera_edac.
ARM/STI ARCHITECTURE
M: Patrice Chotard <patrice.chotard@st.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: kernel@stlinux.com
W: http://www.stlinux.com
S: Maintained
F: arch/arm/mach-sti/
@ -7707,7 +7707,7 @@ F: drivers/platform/x86/hp_accel.c
LIVE PATCHING
M: Josh Poimboeuf <jpoimboe@redhat.com>
M: Jessica Yu <jeyu@redhat.com>
M: Jessica Yu <jeyu@kernel.org>
M: Jiri Kosina <jikos@kernel.org>
M: Miroslav Benes <mbenes@suse.cz>
R: Petr Mladek <pmladek@suse.com>
@ -8508,7 +8508,7 @@ S: Odd Fixes
F: drivers/media/radio/radio-miropcm20*
MELLANOX MLX4 core VPI driver
M: Yishai Hadas <yishaih@mellanox.com>
M: Tariq Toukan <tariqt@mellanox.com>
L: netdev@vger.kernel.org
L: linux-rdma@vger.kernel.org
W: http://www.mellanox.com
@ -8516,7 +8516,6 @@ Q: http://patchwork.ozlabs.org/project/netdev/list/
S: Supported
F: drivers/net/ethernet/mellanox/mlx4/
F: include/linux/mlx4/
F: include/uapi/rdma/mlx4-abi.h
MELLANOX MLX4 IB driver
M: Yishai Hadas <yishaih@mellanox.com>
@ -8526,6 +8525,7 @@ Q: http://patchwork.kernel.org/project/linux-rdma/list/
S: Supported
F: drivers/infiniband/hw/mlx4/
F: include/linux/mlx4/
F: include/uapi/rdma/mlx4-abi.h
MELLANOX MLX5 core VPI driver
M: Saeed Mahameed <saeedm@mellanox.com>
@ -8538,7 +8538,6 @@ Q: http://patchwork.ozlabs.org/project/netdev/list/
S: Supported
F: drivers/net/ethernet/mellanox/mlx5/core/
F: include/linux/mlx5/
F: include/uapi/rdma/mlx5-abi.h
MELLANOX MLX5 IB driver
M: Matan Barak <matanb@mellanox.com>
@ -8549,6 +8548,7 @@ Q: http://patchwork.kernel.org/project/linux-rdma/list/
S: Supported
F: drivers/infiniband/hw/mlx5/
F: include/linux/mlx5/
F: include/uapi/rdma/mlx5-abi.h
MELEXIS MLX90614 DRIVER
M: Crt Mori <cmo@melexis.com>
@ -8588,7 +8588,7 @@ S: Maintained
F: drivers/media/dvb-frontends/mn88473*
MODULE SUPPORT
M: Jessica Yu <jeyu@redhat.com>
M: Jessica Yu <jeyu@kernel.org>
M: Rusty Russell <rusty@rustcorp.com.au>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux.git modules-next
S: Maintained
@ -11268,7 +11268,6 @@ F: drivers/media/rc/serial_ir.c
STI CEC DRIVER
M: Benjamin Gaignard <benjamin.gaignard@linaro.org>
L: kernel@stlinux.com
S: Maintained
F: drivers/staging/media/st-cec/
F: Documentation/devicetree/bindings/media/stih-cec.txt
@ -11778,6 +11777,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci.git
S: Supported
F: arch/arm/mach-davinci/
F: drivers/i2c/busses/i2c-davinci.c
F: arch/arm/boot/dts/da850*
TI DAVINCI SERIES MEDIA DRIVER
M: "Lad, Prabhakar" <prabhakar.csengg@gmail.com>
@ -13861,7 +13861,7 @@ S: Odd fixes
F: drivers/net/wireless/wl3501*
WOLFSON MICROELECTRONICS DRIVERS
L: patches@opensource.wolfsonmicro.com
L: patches@opensource.cirrus.com
T: git https://github.com/CirrusLogic/linux-drivers.git
W: https://github.com/CirrusLogic/linux-drivers/wiki
S: Supported

View File

@ -17,14 +17,12 @@
@ there.
.inst 'M' | ('Z' << 8) | (0x1310 << 16) @ tstne r0, #0x4d000
#else
mov r0, r0
W(mov) r0, r0
#endif
.endm
.macro __EFI_HEADER
#ifdef CONFIG_EFI_STUB
b __efi_start
.set start_offset, __efi_start - start
.org start + 0x3c
@

View File

@ -130,19 +130,22 @@ start:
.rept 7
__nop
.endr
ARM( mov r0, r0 )
ARM( b 1f )
THUMB( badr r12, 1f )
THUMB( bx r12 )
#ifndef CONFIG_THUMB2_KERNEL
mov r0, r0
#else
AR_CLASS( sub pc, pc, #3 ) @ A/R: switch to Thumb2 mode
M_CLASS( nop.w ) @ M: already in Thumb2 mode
.thumb
#endif
W(b) 1f
.word _magic_sig @ Magic numbers to help the loader
.word _magic_start @ absolute load/run zImage address
.word _magic_end @ zImage end address
.word 0x04030201 @ endianness flag
THUMB( .thumb )
1: __EFI_HEADER
__EFI_HEADER
1:
ARM_BE8( setend be ) @ go BE8 if compiled for BE8
AR_CLASS( mrs r9, cpsr )
#ifdef CONFIG_ARM_VIRT_EXT

View File

@ -3,6 +3,11 @@
#include <dt-bindings/clock/bcm2835-aux.h>
#include <dt-bindings/gpio/gpio.h>
/* firmware-provided startup stubs live here, where the secondary CPUs are
* spinning.
*/
/memreserve/ 0x00000000 0x00001000;
/* This include file covers the common peripherals and configuration between
* bcm2835 and bcm2836 implementations, leaving the CPU configuration to
* bcm2835.dtsi and bcm2836.dtsi.

View File

@ -120,10 +120,16 @@
ethphy0: ethernet-phy@2 {
reg = <2>;
micrel,led-mode = <1>;
clocks = <&clks IMX6UL_CLK_ENET_REF>;
clock-names = "rmii-ref";
};
ethphy1: ethernet-phy@1 {
reg = <1>;
micrel,led-mode = <1>;
clocks = <&clks IMX6UL_CLK_ENET2_REF>;
clock-names = "rmii-ref";
};
};
};

View File

@ -137,8 +137,8 @@ netcp: netcp@26000000 {
/* NetCP address range */
ranges = <0 0x26000000 0x1000000>;
clocks = <&clkpa>, <&clkcpgmac>, <&chipclk12>, <&clkosr>;
clock-names = "pa_clk", "ethss_clk", "cpts", "osr_clk";
clocks = <&clkpa>, <&clkcpgmac>, <&chipclk12>;
clock-names = "pa_clk", "ethss_clk", "cpts";
dma-coherent;
ti,navigator-dmas = <&dma_gbe 0>,

View File

@ -232,6 +232,14 @@
};
};
osr: sram@70000000 {
compatible = "mmio-sram";
reg = <0x70000000 0x10000>;
#address-cells = <1>;
#size-cells = <1>;
clocks = <&clkosr>;
};
dspgpio0: keystone_dsp_gpio@02620240 {
compatible = "ti,keystone-dsp-gpio";
gpio-controller;

View File

@ -1,4 +1,4 @@
#include <versatile-ab.dts>
#include "versatile-ab.dts"
/ {
model = "ARM Versatile PB";

View File

@ -235,7 +235,7 @@ int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster)
return ret;
}
typedef void (*phys_reset_t)(unsigned long);
typedef typeof(cpu_reset) phys_reset_t;
void mcpm_cpu_power_down(void)
{
@ -300,7 +300,7 @@ void mcpm_cpu_power_down(void)
* on the CPU.
*/
phys_reset = (phys_reset_t)(unsigned long)__pa_symbol(cpu_reset);
phys_reset(__pa_symbol(mcpm_entry_point));
phys_reset(__pa_symbol(mcpm_entry_point), false);
/* should never get here */
BUG();
@ -389,7 +389,7 @@ static int __init nocache_trampoline(unsigned long _arg)
__mcpm_cpu_down(cpu, cluster);
phys_reset = (phys_reset_t)(unsigned long)__pa_symbol(cpu_reset);
phys_reset(__pa_symbol(mcpm_entry_point));
phys_reset(__pa_symbol(mcpm_entry_point), false);
BUG();
}

View File

@ -19,7 +19,8 @@ struct dev_archdata {
#ifdef CONFIG_XEN
const struct dma_map_ops *dev_dma_ops;
#endif
bool dma_coherent;
unsigned int dma_coherent:1;
unsigned int dma_ops_setup:1;
};
struct omap_device;

View File

@ -66,6 +66,7 @@ typedef pte_t *pte_addr_t;
#define pgprot_noncached(prot) (prot)
#define pgprot_writecombine(prot) (prot)
#define pgprot_dmacoherent(prot) (prot)
#define pgprot_device(prot) (prot)
/*

View File

@ -1,6 +1,7 @@
menuconfig ARCH_AT91
bool "Atmel SoCs"
depends on ARCH_MULTI_V4T || ARCH_MULTI_V5 || ARCH_MULTI_V7
select ARM_CPU_SUSPEND if PM
select COMMON_CLK_AT91
select GPIOLIB
select PINCTRL

View File

@ -153,7 +153,8 @@ int __init davinci_pm_init(void)
davinci_sram_suspend = sram_alloc(davinci_cpu_suspend_sz, NULL);
if (!davinci_sram_suspend) {
pr_err("PM: cannot allocate SRAM memory\n");
return -ENOMEM;
ret = -ENOMEM;
goto no_sram_mem;
}
davinci_sram_push(davinci_sram_suspend, davinci_cpu_suspend,
@ -161,6 +162,10 @@ int __init davinci_pm_init(void)
suspend_set_ops(&davinci_pm_ops);
return 0;
no_sram_mem:
iounmap(pm_config.ddrpsc_reg_base);
no_ddrpsc_mem:
iounmap(pm_config.ddrpll_reg_base);
no_ddrpll_mem:

View File

@ -2311,7 +2311,14 @@ int arm_iommu_attach_device(struct device *dev,
}
EXPORT_SYMBOL_GPL(arm_iommu_attach_device);
static void __arm_iommu_detach_device(struct device *dev)
/**
* arm_iommu_detach_device
* @dev: valid struct device pointer
*
* Detaches the provided device from a previously attached map.
* This voids the dma operations (dma_map_ops pointer)
*/
void arm_iommu_detach_device(struct device *dev)
{
struct dma_iommu_mapping *mapping;
@ -2324,22 +2331,10 @@ static void __arm_iommu_detach_device(struct device *dev)
iommu_detach_device(mapping->domain, dev);
kref_put(&mapping->kref, release_iommu_mapping);
to_dma_iommu_mapping(dev) = NULL;
set_dma_ops(dev, NULL);
pr_debug("Detached IOMMU controller from %s device.\n", dev_name(dev));
}
/**
* arm_iommu_detach_device
* @dev: valid struct device pointer
*
* Detaches the provided device from a previously attached map.
* This voids the dma operations (dma_map_ops pointer)
*/
void arm_iommu_detach_device(struct device *dev)
{
__arm_iommu_detach_device(dev);
set_dma_ops(dev, NULL);
}
EXPORT_SYMBOL_GPL(arm_iommu_detach_device);
static const struct dma_map_ops *arm_get_iommu_dma_map_ops(bool coherent)
@ -2379,7 +2374,7 @@ static void arm_teardown_iommu_dma_ops(struct device *dev)
if (!mapping)
return;
__arm_iommu_detach_device(dev);
arm_iommu_detach_device(dev);
arm_iommu_release_mapping(mapping);
}
@ -2430,9 +2425,13 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
dev->dma_ops = xen_dma_ops;
}
#endif
dev->archdata.dma_ops_setup = true;
}
void arch_teardown_dma_ops(struct device *dev)
{
if (!dev->archdata.dma_ops_setup)
return;
arm_teardown_iommu_dma_ops(dev);
}

View File

@ -231,8 +231,7 @@
cpm_crypto: crypto@800000 {
compatible = "inside-secure,safexcel-eip197";
reg = <0x800000 0x200000>;
interrupts = <GIC_SPI 34 (IRQ_TYPE_EDGE_RISING
| IRQ_TYPE_LEVEL_HIGH)>,
interrupts = <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 54 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>,

View File

@ -221,8 +221,7 @@
cps_crypto: crypto@800000 {
compatible = "inside-secure,safexcel-eip197";
reg = <0x800000 0x200000>;
interrupts = <GIC_SPI 34 (IRQ_TYPE_EDGE_RISING
| IRQ_TYPE_LEVEL_HIGH)>,
interrupts = <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 278 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 279 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 280 IRQ_TYPE_LEVEL_HIGH>,

View File

@ -68,6 +68,7 @@ CONFIG_PCIE_QCOM=y
CONFIG_PCIE_ARMADA_8K=y
CONFIG_PCI_AARDVARK=y
CONFIG_PCIE_RCAR=y
CONFIG_PCIE_ROCKCHIP=m
CONFIG_PCI_HOST_GENERIC=y
CONFIG_PCI_XGENE=y
CONFIG_ARM64_VA_BITS_48=y
@ -208,6 +209,8 @@ CONFIG_BRCMFMAC=m
CONFIG_WL18XX=m
CONFIG_WLCORE_SDIO=m
CONFIG_INPUT_EVDEV=y
CONFIG_KEYBOARD_ADC=m
CONFIG_KEYBOARD_CROS_EC=y
CONFIG_KEYBOARD_GPIO=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_PM8941_PWRKEY=y
@ -263,6 +266,7 @@ CONFIG_SPI_MESON_SPIFC=m
CONFIG_SPI_ORION=y
CONFIG_SPI_PL022=y
CONFIG_SPI_QUP=y
CONFIG_SPI_ROCKCHIP=y
CONFIG_SPI_S3C64XX=y
CONFIG_SPI_SPIDEV=m
CONFIG_SPMI=y
@ -292,6 +296,7 @@ CONFIG_THERMAL_GOV_POWER_ALLOCATOR=y
CONFIG_CPU_THERMAL=y
CONFIG_THERMAL_EMULATION=y
CONFIG_EXYNOS_THERMAL=y
CONFIG_ROCKCHIP_THERMAL=m
CONFIG_WATCHDOG=y
CONFIG_S3C2410_WATCHDOG=y
CONFIG_MESON_GXBB_WATCHDOG=m
@ -300,12 +305,14 @@ CONFIG_RENESAS_WDT=y
CONFIG_BCM2835_WDT=y
CONFIG_MFD_CROS_EC=y
CONFIG_MFD_CROS_EC_I2C=y
CONFIG_MFD_CROS_EC_SPI=y
CONFIG_MFD_EXYNOS_LPASS=m
CONFIG_MFD_HI655X_PMIC=y
CONFIG_MFD_MAX77620=y
CONFIG_MFD_SPMI_PMIC=y
CONFIG_MFD_RK808=y
CONFIG_MFD_SEC_CORE=y
CONFIG_REGULATOR_FAN53555=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_GPIO=y
CONFIG_REGULATOR_HI655X=y
@ -473,8 +480,10 @@ CONFIG_ARCH_TEGRA_186_SOC=y
CONFIG_EXTCON_USB_GPIO=y
CONFIG_IIO=y
CONFIG_EXYNOS_ADC=y
CONFIG_ROCKCHIP_SARADC=m
CONFIG_PWM=y
CONFIG_PWM_BCM2835=m
CONFIG_PWM_CROS_EC=m
CONFIG_PWM_MESON=m
CONFIG_PWM_ROCKCHIP=y
CONFIG_PWM_SAMSUNG=y
@ -484,6 +493,7 @@ CONFIG_PHY_HI6220_USB=y
CONFIG_PHY_SUN4I_USB=y
CONFIG_PHY_ROCKCHIP_INNO_USB2=y
CONFIG_PHY_ROCKCHIP_EMMC=y
CONFIG_PHY_ROCKCHIP_PCIE=m
CONFIG_PHY_XGENE=y
CONFIG_PHY_TEGRA_XUSB=y
CONFIG_ARM_SCPI_PROTOCOL=y

View File

@ -380,22 +380,6 @@ source "arch/powerpc/platforms/Kconfig"
menu "Kernel options"
config PPC_DT_CPU_FTRS
bool "Device-tree based CPU feature discovery & setup"
depends on PPC_BOOK3S_64
default n
help
This enables code to use a new device tree binding for describing CPU
compatibility and features. Saying Y here will attempt to use the new
binding if the firmware provides it. Currently only the skiboot
firmware provides this binding.
If you're not sure say Y.
config PPC_CPUFEATURES_ENABLE_UNKNOWN
bool "cpufeatures pass through unknown features to guest/userspace"
depends on PPC_DT_CPU_FTRS
default y
config HIGHMEM
bool "High memory support"
depends on PPC32

View File

@ -8,7 +8,7 @@
#define H_PTE_INDEX_SIZE 9
#define H_PMD_INDEX_SIZE 7
#define H_PUD_INDEX_SIZE 9
#define H_PGD_INDEX_SIZE 12
#define H_PGD_INDEX_SIZE 9
#ifndef __ASSEMBLY__
#define H_PTE_TABLE_SIZE (sizeof(pte_t) << H_PTE_INDEX_SIZE)

View File

@ -214,7 +214,6 @@ enum {
#define CPU_FTR_DAWR LONG_ASM_CONST(0x0400000000000000)
#define CPU_FTR_DABRX LONG_ASM_CONST(0x0800000000000000)
#define CPU_FTR_PMAO_BUG LONG_ASM_CONST(0x1000000000000000)
#define CPU_FTR_SUBCORE LONG_ASM_CONST(0x2000000000000000)
#define CPU_FTR_POWER9_DD1 LONG_ASM_CONST(0x4000000000000000)
#ifndef __ASSEMBLY__
@ -463,7 +462,7 @@ enum {
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_ICSWX | CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_SUBCORE)
CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP)
#define CPU_FTRS_POWER8E (CPU_FTRS_POWER8 | CPU_FTR_PMAO_BUG)
#define CPU_FTRS_POWER8_DD1 (CPU_FTRS_POWER8 & ~CPU_FTR_DBELL)
#define CPU_FTRS_POWER9 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \

View File

@ -110,13 +110,18 @@ void release_thread(struct task_struct *);
#define TASK_SIZE_128TB (0x0000800000000000UL)
#define TASK_SIZE_512TB (0x0002000000000000UL)
#ifdef CONFIG_PPC_BOOK3S_64
/*
* For now 512TB is only supported with book3s and 64K linux page size.
*/
#if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_PPC_64K_PAGES)
/*
* Max value currently used:
*/
#define TASK_SIZE_USER64 TASK_SIZE_512TB
#define TASK_SIZE_USER64 TASK_SIZE_512TB
#define DEFAULT_MAP_WINDOW_USER64 TASK_SIZE_128TB
#else
#define TASK_SIZE_USER64 TASK_SIZE_64TB
#define TASK_SIZE_USER64 TASK_SIZE_64TB
#define DEFAULT_MAP_WINDOW_USER64 TASK_SIZE_64TB
#endif
/*
@ -132,7 +137,7 @@ void release_thread(struct task_struct *);
* space during mmap's.
*/
#define TASK_UNMAPPED_BASE_USER32 (PAGE_ALIGN(TASK_SIZE_USER32 / 4))
#define TASK_UNMAPPED_BASE_USER64 (PAGE_ALIGN(TASK_SIZE_128TB / 4))
#define TASK_UNMAPPED_BASE_USER64 (PAGE_ALIGN(DEFAULT_MAP_WINDOW_USER64 / 4))
#define TASK_UNMAPPED_BASE ((is_32bit_task()) ? \
TASK_UNMAPPED_BASE_USER32 : TASK_UNMAPPED_BASE_USER64 )
@ -143,21 +148,15 @@ void release_thread(struct task_struct *);
* with 128TB and conditionally enable upto 512TB
*/
#ifdef CONFIG_PPC_BOOK3S_64
#define DEFAULT_MAP_WINDOW ((is_32bit_task()) ? \
TASK_SIZE_USER32 : TASK_SIZE_128TB)
#define DEFAULT_MAP_WINDOW ((is_32bit_task()) ? \
TASK_SIZE_USER32 : DEFAULT_MAP_WINDOW_USER64)
#else
#define DEFAULT_MAP_WINDOW TASK_SIZE
#endif
#ifdef __powerpc64__
#ifdef CONFIG_PPC_BOOK3S_64
/* Limit stack to 128TB */
#define STACK_TOP_USER64 TASK_SIZE_128TB
#else
#define STACK_TOP_USER64 TASK_SIZE_USER64
#endif
#define STACK_TOP_USER64 DEFAULT_MAP_WINDOW_USER64
#define STACK_TOP_USER32 TASK_SIZE_USER32
#define STACK_TOP (is_32bit_task() ? \

View File

@ -44,8 +44,22 @@ extern void __init dump_numa_cpu_topology(void);
extern int sysfs_add_device_to_node(struct device *dev, int nid);
extern void sysfs_remove_device_from_node(struct device *dev, int nid);
static inline int early_cpu_to_node(int cpu)
{
int nid;
nid = numa_cpu_lookup_table[cpu];
/*
* Fall back to node 0 if nid is unset (it should be, except bugs).
* This allows callers to safely do NODE_DATA(early_cpu_to_node(cpu)).
*/
return (nid < 0) ? 0 : nid;
}
#else
static inline int early_cpu_to_node(int cpu) { return 0; }
static inline void dump_numa_cpu_topology(void) {}
static inline int sysfs_add_device_to_node(struct device *dev, int nid)

View File

@ -8,6 +8,7 @@
#include <linux/export.h>
#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/libfdt.h>
#include <linux/memblock.h>
#include <linux/printk.h>
#include <linux/sched.h>
@ -642,7 +643,6 @@ static struct dt_cpu_feature_match __initdata
{"processor-control-facility", feat_enable_dbell, CPU_FTR_DBELL},
{"processor-control-facility-v3", feat_enable_dbell, CPU_FTR_DBELL},
{"processor-utilization-of-resources-register", feat_enable_purr, 0},
{"subcore", feat_enable, CPU_FTR_SUBCORE},
{"no-execute", feat_enable, 0},
{"strong-access-ordering", feat_enable, CPU_FTR_SAO},
{"cache-inhibited-large-page", feat_enable_large_ci, 0},
@ -671,12 +671,24 @@ static struct dt_cpu_feature_match __initdata
{"wait-v3", feat_enable, 0},
};
/* XXX: how to configure this? Default + boot time? */
#ifdef CONFIG_PPC_CPUFEATURES_ENABLE_UNKNOWN
#define CPU_FEATURE_ENABLE_UNKNOWN 1
#else
#define CPU_FEATURE_ENABLE_UNKNOWN 0
#endif
static bool __initdata using_dt_cpu_ftrs;
static bool __initdata enable_unknown = true;
static int __init dt_cpu_ftrs_parse(char *str)
{
if (!str)
return 0;
if (!strcmp(str, "off"))
using_dt_cpu_ftrs = false;
else if (!strcmp(str, "known"))
enable_unknown = false;
else
return 1;
return 0;
}
early_param("dt_cpu_ftrs", dt_cpu_ftrs_parse);
static void __init cpufeatures_setup_start(u32 isa)
{
@ -707,7 +719,7 @@ static bool __init cpufeatures_process_feature(struct dt_cpu_feature *f)
}
}
if (!known && CPU_FEATURE_ENABLE_UNKNOWN) {
if (!known && enable_unknown) {
if (!feat_try_enable_unknown(f)) {
pr_info("not enabling: %s (unknown and unsupported by kernel)\n",
f->name);
@ -756,6 +768,26 @@ static void __init cpufeatures_setup_finished(void)
cur_cpu_spec->cpu_features, cur_cpu_spec->mmu_features);
}
static int __init disabled_on_cmdline(void)
{
unsigned long root, chosen;
const char *p;
root = of_get_flat_dt_root();
chosen = of_get_flat_dt_subnode_by_name(root, "chosen");
if (chosen == -FDT_ERR_NOTFOUND)
return false;
p = of_get_flat_dt_prop(chosen, "bootargs", NULL);
if (!p)
return false;
if (strstr(p, "dt_cpu_ftrs=off"))
return true;
return false;
}
static int __init fdt_find_cpu_features(unsigned long node, const char *uname,
int depth, void *data)
{
@ -766,8 +798,6 @@ static int __init fdt_find_cpu_features(unsigned long node, const char *uname,
return 0;
}
static bool __initdata using_dt_cpu_ftrs = false;
bool __init dt_cpu_ftrs_in_use(void)
{
return using_dt_cpu_ftrs;
@ -775,6 +805,8 @@ bool __init dt_cpu_ftrs_in_use(void)
bool __init dt_cpu_ftrs_init(void *fdt)
{
using_dt_cpu_ftrs = false;
/* Setup and verify the FDT, if it fails we just bail */
if (!early_init_dt_verify(fdt))
return false;
@ -782,6 +814,9 @@ bool __init dt_cpu_ftrs_init(void *fdt)
if (!of_scan_flat_dt(fdt_find_cpu_features, NULL))
return false;
if (disabled_on_cmdline())
return false;
cpufeatures_setup_cpu();
using_dt_cpu_ftrs = true;
@ -1027,5 +1062,8 @@ static int __init dt_cpu_ftrs_scan_callback(unsigned long node, const char
void __init dt_cpu_ftrs_scan(void)
{
if (!using_dt_cpu_ftrs)
return;
of_scan_flat_dt(dt_cpu_ftrs_scan_callback, NULL);
}

View File

@ -1666,6 +1666,7 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
#ifdef CONFIG_VSX
current->thread.used_vsr = 0;
#endif
current->thread.load_fp = 0;
memset(&current->thread.fp_state, 0, sizeof(current->thread.fp_state));
current->thread.fp_save_area = NULL;
#ifdef CONFIG_ALTIVEC
@ -1674,6 +1675,7 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
current->thread.vr_save_area = NULL;
current->thread.vrsave = 0;
current->thread.used_vr = 0;
current->thread.load_vec = 0;
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_SPE
memset(current->thread.evr, 0, sizeof(current->thread.evr));
@ -1685,6 +1687,7 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
current->thread.tm_tfhar = 0;
current->thread.tm_texasr = 0;
current->thread.tm_tfiar = 0;
current->thread.load_tm = 0;
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
}
EXPORT_SYMBOL(start_thread);

View File

@ -928,7 +928,7 @@ void __init setup_arch(char **cmdline_p)
#ifdef CONFIG_PPC_MM_SLICES
#ifdef CONFIG_PPC64
init_mm.context.addr_limit = TASK_SIZE_128TB;
init_mm.context.addr_limit = DEFAULT_MAP_WINDOW_USER64;
#else
#error "context.addr_limit not initialized."
#endif

View File

@ -661,7 +661,7 @@ void __init emergency_stack_init(void)
static void * __init pcpu_fc_alloc(unsigned int cpu, size_t size, size_t align)
{
return __alloc_bootmem_node(NODE_DATA(cpu_to_node(cpu)), size, align,
return __alloc_bootmem_node(NODE_DATA(early_cpu_to_node(cpu)), size, align,
__pa(MAX_DMA_ADDRESS));
}
@ -672,7 +672,7 @@ static void __init pcpu_fc_free(void *ptr, size_t size)
static int pcpu_cpu_distance(unsigned int from, unsigned int to)
{
if (cpu_to_node(from) == cpu_to_node(to))
if (early_cpu_to_node(from) == early_cpu_to_node(to))
return LOCAL_DISTANCE;
else
return REMOTE_DISTANCE;

View File

@ -99,7 +99,7 @@ static int hash__init_new_context(struct mm_struct *mm)
* mm->context.addr_limit. Default to max task size so that we copy the
* default values to paca which will help us to handle slb miss early.
*/
mm->context.addr_limit = TASK_SIZE_128TB;
mm->context.addr_limit = DEFAULT_MAP_WINDOW_USER64;
/*
* The old code would re-promote on fork, we don't do that when using

View File

@ -402,7 +402,7 @@ static struct power_pmu power9_isa207_pmu = {
.name = "POWER9",
.n_counter = MAX_PMU_COUNTERS,
.add_fields = ISA207_ADD_FIELDS,
.test_adder = ISA207_TEST_ADDER,
.test_adder = P9_DD1_TEST_ADDER,
.compute_mmcr = isa207_compute_mmcr,
.config_bhrb = power9_config_bhrb,
.bhrb_filter_map = power9_bhrb_filter_map,
@ -421,7 +421,7 @@ static struct power_pmu power9_pmu = {
.name = "POWER9",
.n_counter = MAX_PMU_COUNTERS,
.add_fields = ISA207_ADD_FIELDS,
.test_adder = P9_DD1_TEST_ADDER,
.test_adder = ISA207_TEST_ADDER,
.compute_mmcr = isa207_compute_mmcr,
.config_bhrb = power9_config_bhrb,
.bhrb_filter_map = power9_bhrb_filter_map,

View File

@ -59,6 +59,17 @@ config PPC_OF_BOOT_TRAMPOLINE
In case of doubt, say Y
config PPC_DT_CPU_FTRS
bool "Device-tree based CPU feature discovery & setup"
depends on PPC_BOOK3S_64
default y
help
This enables code to use a new device tree binding for describing CPU
compatibility and features. Saying Y here will attempt to use the new
binding if the firmware provides it. Currently only the skiboot
firmware provides this binding.
If you're not sure say Y.
config UDBG_RTAS_CONSOLE
bool "RTAS based debug console"
depends on PPC_RTAS

View File

@ -175,6 +175,8 @@ static int spufs_arch_write_note(struct spu_context *ctx, int i,
skip = roundup(cprm->pos - total + sz, 4) - cprm->pos;
if (!dump_skip(cprm, skip))
goto Eio;
rc = 0;
out:
free_page((unsigned long)buf);
return rc;

View File

@ -407,7 +407,13 @@ static DEVICE_ATTR(subcores_per_core, 0644,
static int subcore_init(void)
{
if (!cpu_has_feature(CPU_FTR_SUBCORE))
unsigned pvr_ver;
pvr_ver = PVR_VER(mfspr(SPRN_PVR));
if (pvr_ver != PVR_POWER8 &&
pvr_ver != PVR_POWER8E &&
pvr_ver != PVR_POWER8NVL)
return 0;
/*

View File

@ -124,6 +124,7 @@ static struct property *dlpar_clone_drconf_property(struct device_node *dn)
for (i = 0; i < num_lmbs; i++) {
lmbs[i].base_addr = be64_to_cpu(lmbs[i].base_addr);
lmbs[i].drc_index = be32_to_cpu(lmbs[i].drc_index);
lmbs[i].aa_index = be32_to_cpu(lmbs[i].aa_index);
lmbs[i].flags = be32_to_cpu(lmbs[i].flags);
}
@ -147,6 +148,7 @@ static void dlpar_update_drconf_property(struct device_node *dn,
for (i = 0; i < num_lmbs; i++) {
lmbs[i].base_addr = cpu_to_be64(lmbs[i].base_addr);
lmbs[i].drc_index = cpu_to_be32(lmbs[i].drc_index);
lmbs[i].aa_index = cpu_to_be32(lmbs[i].aa_index);
lmbs[i].flags = cpu_to_be32(lmbs[i].flags);
}

View File

@ -75,7 +75,8 @@ static int u8_gpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
static void u8_gpio_save_regs(struct of_mm_gpio_chip *mm_gc)
{
struct u8_gpio_chip *u8_gc = gpiochip_get_data(&mm_gc->gc);
struct u8_gpio_chip *u8_gc =
container_of(mm_gc, struct u8_gpio_chip, mm_gc);
u8_gc->data = in_8(mm_gc->regs);
}

View File

@ -192,9 +192,9 @@ config NR_CPUS
int "Maximum number of CPUs"
depends on SMP
range 2 32 if SPARC32
range 2 1024 if SPARC64
range 2 4096 if SPARC64
default 32 if SPARC32
default 64 if SPARC64
default 4096 if SPARC64
source kernel/Kconfig.hz
@ -295,9 +295,13 @@ config NUMA
depends on SPARC64 && SMP
config NODES_SHIFT
int
default "4"
int "Maximum NUMA Nodes (as a power of 2)"
range 4 5 if SPARC64
default "5"
depends on NEED_MULTIPLE_NODES
help
Specify the maximum number of NUMA Nodes available on the target
system. Increases memory reserved to accommodate various tables.
# Some NUMA nodes have memory ranges that span
# other nodes. Even though a pfn is valid and

View File

@ -52,7 +52,7 @@
#define CTX_NR_MASK TAG_CONTEXT_BITS
#define CTX_HW_MASK (CTX_NR_MASK | CTX_PGSZ_MASK)
#define CTX_FIRST_VERSION ((_AC(1,UL) << CTX_VERSION_SHIFT) + _AC(1,UL))
#define CTX_FIRST_VERSION BIT(CTX_VERSION_SHIFT)
#define CTX_VALID(__ctx) \
(!(((__ctx.sparc64_ctx_val) ^ tlb_context_cache) & CTX_VERSION_MASK))
#define CTX_HWBITS(__ctx) ((__ctx.sparc64_ctx_val) & CTX_HW_MASK)

View File

@ -19,13 +19,8 @@ extern spinlock_t ctx_alloc_lock;
extern unsigned long tlb_context_cache;
extern unsigned long mmu_context_bmap[];
DECLARE_PER_CPU(struct mm_struct *, per_cpu_secondary_mm);
void get_new_mmu_context(struct mm_struct *mm);
#ifdef CONFIG_SMP
void smp_new_mmu_context_version(void);
#else
#define smp_new_mmu_context_version() do { } while (0)
#endif
int init_new_context(struct task_struct *tsk, struct mm_struct *mm);
void destroy_context(struct mm_struct *mm);
@ -76,8 +71,9 @@ void __flush_tlb_mm(unsigned long, unsigned long);
static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, struct task_struct *tsk)
{
unsigned long ctx_valid, flags;
int cpu;
int cpu = smp_processor_id();
per_cpu(per_cpu_secondary_mm, cpu) = mm;
if (unlikely(mm == &init_mm))
return;
@ -123,7 +119,6 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
* for the first time, we must flush that context out of the
* local TLB.
*/
cpu = smp_processor_id();
if (!ctx_valid || !cpumask_test_cpu(cpu, mm_cpumask(mm))) {
cpumask_set_cpu(cpu, mm_cpumask(mm));
__flush_tlb_mm(CTX_HWBITS(mm->context),
@ -133,26 +128,7 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
}
#define deactivate_mm(tsk,mm) do { } while (0)
/* Activate a new MM instance for the current task. */
static inline void activate_mm(struct mm_struct *active_mm, struct mm_struct *mm)
{
unsigned long flags;
int cpu;
spin_lock_irqsave(&mm->context.lock, flags);
if (!CTX_VALID(mm->context))
get_new_mmu_context(mm);
cpu = smp_processor_id();
if (!cpumask_test_cpu(cpu, mm_cpumask(mm)))
cpumask_set_cpu(cpu, mm_cpumask(mm));
load_secondary_context(mm);
__flush_tlb_mm(CTX_HWBITS(mm->context), SECONDARY_CONTEXT);
tsb_context_switch(mm);
spin_unlock_irqrestore(&mm->context.lock, flags);
}
#define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
#endif /* !(__ASSEMBLY__) */
#endif /* !(__SPARC64_MMU_CONTEXT_H) */

View File

@ -20,7 +20,6 @@
#define PIL_SMP_CALL_FUNC 1
#define PIL_SMP_RECEIVE_SIGNAL 2
#define PIL_SMP_CAPTURE 3
#define PIL_SMP_CTX_NEW_VERSION 4
#define PIL_DEVICE_IRQ 5
#define PIL_SMP_CALL_FUNC_SNGL 6
#define PIL_DEFERRED_PCR_WORK 7

View File

@ -327,6 +327,7 @@ struct vio_dev {
int compat_len;
u64 dev_no;
u64 id;
unsigned long channel_id;

View File

@ -909,7 +909,7 @@ static int register_services(struct ds_info *dp)
pbuf.req.handle = cp->handle;
pbuf.req.major = 1;
pbuf.req.minor = 0;
strcpy(pbuf.req.svc_id, cp->service_id);
strcpy(pbuf.id_buf, cp->service_id);
err = __ds_send(lp, &pbuf, msg_len);
if (err > 0)

View File

@ -1034,17 +1034,26 @@ static void __init init_cpu_send_mondo_info(struct trap_per_cpu *tb)
{
#ifdef CONFIG_SMP
unsigned long page;
void *mondo, *p;
BUILD_BUG_ON((NR_CPUS * sizeof(u16)) > (PAGE_SIZE - 64));
BUILD_BUG_ON((NR_CPUS * sizeof(u16)) > PAGE_SIZE);
/* Make sure mondo block is 64byte aligned */
p = kzalloc(127, GFP_KERNEL);
if (!p) {
prom_printf("SUN4V: Error, cannot allocate mondo block.\n");
prom_halt();
}
mondo = (void *)(((unsigned long)p + 63) & ~0x3f);
tb->cpu_mondo_block_pa = __pa(mondo);
page = get_zeroed_page(GFP_KERNEL);
if (!page) {
prom_printf("SUN4V: Error, cannot allocate cpu mondo page.\n");
prom_printf("SUN4V: Error, cannot allocate cpu list page.\n");
prom_halt();
}
tb->cpu_mondo_block_pa = __pa(page);
tb->cpu_list_pa = __pa(page + 64);
tb->cpu_list_pa = __pa(page);
#endif
}

View File

@ -37,7 +37,6 @@ void handle_stdfmna(struct pt_regs *regs, unsigned long sfar, unsigned long sfsr
/* smp_64.c */
void __irq_entry smp_call_function_client(int irq, struct pt_regs *regs);
void __irq_entry smp_call_function_single_client(int irq, struct pt_regs *regs);
void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *regs);
void __irq_entry smp_penguin_jailcell(int irq, struct pt_regs *regs);
void __irq_entry smp_receive_signal_client(int irq, struct pt_regs *regs);

View File

@ -964,37 +964,6 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
preempt_enable();
}
void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *regs)
{
struct mm_struct *mm;
unsigned long flags;
clear_softint(1 << irq);
/* See if we need to allocate a new TLB context because
* the version of the one we are using is now out of date.
*/
mm = current->active_mm;
if (unlikely(!mm || (mm == &init_mm)))
return;
spin_lock_irqsave(&mm->context.lock, flags);
if (unlikely(!CTX_VALID(mm->context)))
get_new_mmu_context(mm);
spin_unlock_irqrestore(&mm->context.lock, flags);
load_secondary_context(mm);
__flush_tlb_mm(CTX_HWBITS(mm->context),
SECONDARY_CONTEXT);
}
void smp_new_mmu_context_version(void)
{
smp_cross_call(&xcall_new_mmu_context_version, 0, 0, 0);
}
#ifdef CONFIG_KGDB
void kgdb_roundup_cpus(unsigned long flags)
{

View File

@ -455,13 +455,16 @@ __tsb_context_switch:
.type copy_tsb,#function
copy_tsb: /* %o0=old_tsb_base, %o1=old_tsb_size
* %o2=new_tsb_base, %o3=new_tsb_size
* %o4=page_size_shift
*/
sethi %uhi(TSB_PASS_BITS), %g7
srlx %o3, 4, %o3
add %o0, %o1, %g1 /* end of old tsb */
add %o0, %o1, %o1 /* end of old tsb */
sllx %g7, 32, %g7
sub %o3, 1, %o3 /* %o3 == new tsb hash mask */
mov %o4, %g1 /* page_size_shift */
661: prefetcha [%o0] ASI_N, #one_read
.section .tsb_phys_patch, "ax"
.word 661b
@ -486,9 +489,9 @@ copy_tsb: /* %o0=old_tsb_base, %o1=old_tsb_size
/* This can definitely be computed faster... */
srlx %o0, 4, %o5 /* Build index */
and %o5, 511, %o5 /* Mask index */
sllx %o5, PAGE_SHIFT, %o5 /* Put into vaddr position */
sllx %o5, %g1, %o5 /* Put into vaddr position */
or %o4, %o5, %o4 /* Full VADDR. */
srlx %o4, PAGE_SHIFT, %o4 /* Shift down to create index */
srlx %o4, %g1, %o4 /* Shift down to create index */
and %o4, %o3, %o4 /* Mask with new_tsb_nents-1 */
sllx %o4, 4, %o4 /* Shift back up into tsb ent offset */
TSB_STORE(%o2 + %o4, %g2) /* Store TAG */
@ -496,7 +499,7 @@ copy_tsb: /* %o0=old_tsb_base, %o1=old_tsb_size
TSB_STORE(%o2 + %o4, %g3) /* Store TTE */
80: add %o0, 16, %o0
cmp %o0, %g1
cmp %o0, %o1
bne,pt %xcc, 90b
nop

View File

@ -50,7 +50,7 @@ tl0_resv03e: BTRAP(0x3e) BTRAP(0x3f) BTRAP(0x40)
tl0_irq1: TRAP_IRQ(smp_call_function_client, 1)
tl0_irq2: TRAP_IRQ(smp_receive_signal_client, 2)
tl0_irq3: TRAP_IRQ(smp_penguin_jailcell, 3)
tl0_irq4: TRAP_IRQ(smp_new_mmu_context_version_client, 4)
tl0_irq4: BTRAP(0x44)
#else
tl0_irq1: BTRAP(0x41)
tl0_irq2: BTRAP(0x42)

View File

@ -302,13 +302,16 @@ static struct vio_dev *vio_create_one(struct mdesc_handle *hp, u64 mp,
if (!id) {
dev_set_name(&vdev->dev, "%s", bus_id_name);
vdev->dev_no = ~(u64)0;
vdev->id = ~(u64)0;
} else if (!cfg_handle) {
dev_set_name(&vdev->dev, "%s-%llu", bus_id_name, *id);
vdev->dev_no = *id;
vdev->id = ~(u64)0;
} else {
dev_set_name(&vdev->dev, "%s-%llu-%llu", bus_id_name,
*cfg_handle, *id);
vdev->dev_no = *cfg_handle;
vdev->id = *id;
}
vdev->dev.parent = parent;
@ -351,27 +354,84 @@ static void vio_add(struct mdesc_handle *hp, u64 node)
(void) vio_create_one(hp, node, &root_vdev->dev);
}
struct vio_md_node_query {
const char *type;
u64 dev_no;
u64 id;
};
static int vio_md_node_match(struct device *dev, void *arg)
{
struct vio_md_node_query *query = (struct vio_md_node_query *) arg;
struct vio_dev *vdev = to_vio_dev(dev);
if (vdev->mp == (u64) arg)
return 1;
if (vdev->dev_no != query->dev_no)
return 0;
if (vdev->id != query->id)
return 0;
if (strcmp(vdev->type, query->type))
return 0;
return 0;
return 1;
}
static void vio_remove(struct mdesc_handle *hp, u64 node)
{
const char *type;
const u64 *id, *cfg_handle;
u64 a;
struct vio_md_node_query query;
struct device *dev;
dev = device_find_child(&root_vdev->dev, (void *) node,
type = mdesc_get_property(hp, node, "device-type", NULL);
if (!type) {
type = mdesc_get_property(hp, node, "name", NULL);
if (!type)
type = mdesc_node_name(hp, node);
}
query.type = type;
id = mdesc_get_property(hp, node, "id", NULL);
cfg_handle = NULL;
mdesc_for_each_arc(a, hp, node, MDESC_ARC_TYPE_BACK) {
u64 target;
target = mdesc_arc_target(hp, a);
cfg_handle = mdesc_get_property(hp, target,
"cfg-handle", NULL);
if (cfg_handle)
break;
}
if (!id) {
query.dev_no = ~(u64)0;
query.id = ~(u64)0;
} else if (!cfg_handle) {
query.dev_no = *id;
query.id = ~(u64)0;
} else {
query.dev_no = *cfg_handle;
query.id = *id;
}
dev = device_find_child(&root_vdev->dev, &query,
vio_md_node_match);
if (dev) {
printk(KERN_INFO "VIO: Removing device %s\n", dev_name(dev));
device_unregister(dev);
put_device(dev);
} else {
if (!id)
printk(KERN_ERR "VIO: Removed unknown %s node.\n",
type);
else if (!cfg_handle)
printk(KERN_ERR "VIO: Removed unknown %s node %llu.\n",
type, *id);
else
printk(KERN_ERR "VIO: Removed unknown %s node %llu-%llu.\n",
type, *cfg_handle, *id);
}
}

View File

@ -15,6 +15,7 @@ lib-$(CONFIG_SPARC32) += copy_user.o locks.o
lib-$(CONFIG_SPARC64) += atomic_64.o
lib-$(CONFIG_SPARC32) += lshrdi3.o ashldi3.o
lib-$(CONFIG_SPARC32) += muldi3.o bitext.o cmpdi2.o
lib-$(CONFIG_SPARC64) += multi3.o
lib-$(CONFIG_SPARC64) += copy_page.o clear_page.o bzero.o
lib-$(CONFIG_SPARC64) += csum_copy.o csum_copy_from_user.o csum_copy_to_user.o

arch/sparc/lib/multi3.S (new file, 35 lines)
View File

@ -0,0 +1,35 @@
#include <linux/linkage.h>
#include <asm/export.h>
.text
.align 4
ENTRY(__multi3) /* %o0 = u, %o1 = v */
mov %o1, %g1
srl %o3, 0, %g4
mulx %g4, %g1, %o1
srlx %g1, 0x20, %g3
mulx %g3, %g4, %g5
sllx %g5, 0x20, %o5
srl %g1, 0, %g4
sub %o1, %o5, %o5
srlx %o5, 0x20, %o5
addcc %g5, %o5, %g5
srlx %o3, 0x20, %o5
mulx %g4, %o5, %g4
mulx %g3, %o5, %o5
sethi %hi(0x80000000), %g3
addcc %g5, %g4, %g5
srlx %g5, 0x20, %g5
add %g3, %g3, %g3
movcc %xcc, %g0, %g3
addcc %o5, %g5, %o5
sllx %g4, 0x20, %g4
add %o1, %g4, %o1
add %o5, %g3, %g2
mulx %g1, %o2, %g1
add %g1, %g2, %g1
mulx %o0, %o3, %o0
retl
add %g1, %o0, %o0
ENDPROC(__multi3)
EXPORT_SYMBOL(__multi3)

View File

@ -358,7 +358,8 @@ static int __init setup_hugepagesz(char *string)
}
if ((hv_pgsz_mask & cpu_pgsz_mask) == 0U) {
pr_warn("hugepagesz=%llu not supported by MMU.\n",
hugetlb_bad_size();
pr_err("hugepagesz=%llu not supported by MMU.\n",
hugepage_size);
goto out;
}
@ -706,10 +707,58 @@ EXPORT_SYMBOL(__flush_dcache_range);
/* get_new_mmu_context() uses "cache + 1". */
DEFINE_SPINLOCK(ctx_alloc_lock);
unsigned long tlb_context_cache = CTX_FIRST_VERSION - 1;
unsigned long tlb_context_cache = CTX_FIRST_VERSION;
#define MAX_CTX_NR (1UL << CTX_NR_BITS)
#define CTX_BMAP_SLOTS BITS_TO_LONGS(MAX_CTX_NR)
DECLARE_BITMAP(mmu_context_bmap, MAX_CTX_NR);
DEFINE_PER_CPU(struct mm_struct *, per_cpu_secondary_mm) = {0};
static void mmu_context_wrap(void)
{
unsigned long old_ver = tlb_context_cache & CTX_VERSION_MASK;
unsigned long new_ver, new_ctx, old_ctx;
struct mm_struct *mm;
int cpu;
bitmap_zero(mmu_context_bmap, 1 << CTX_NR_BITS);
/* Reserve kernel context */
set_bit(0, mmu_context_bmap);
new_ver = (tlb_context_cache & CTX_VERSION_MASK) + CTX_FIRST_VERSION;
if (unlikely(new_ver == 0))
new_ver = CTX_FIRST_VERSION;
tlb_context_cache = new_ver;
/*
* Make sure that any new mm that are added into per_cpu_secondary_mm,
* are going to go through get_new_mmu_context() path.
*/
mb();
/*
* Updated versions to current on those CPUs that had valid secondary
* contexts
*/
for_each_online_cpu(cpu) {
/*
* If a new mm is stored after we took this mm from the array,
* it will go into get_new_mmu_context() path, because we
* already bumped the version in tlb_context_cache.
*/
mm = per_cpu(per_cpu_secondary_mm, cpu);
if (unlikely(!mm || mm == &init_mm))
continue;
old_ctx = mm->context.sparc64_ctx_val;
if (likely((old_ctx & CTX_VERSION_MASK) == old_ver)) {
new_ctx = (old_ctx & ~CTX_VERSION_MASK) | new_ver;
set_bit(new_ctx & CTX_NR_MASK, mmu_context_bmap);
mm->context.sparc64_ctx_val = new_ctx;
}
}
}
/* Caller does TLB context flushing on local CPU if necessary.
* The caller also ensures that CTX_VALID(mm->context) is false.
@ -725,48 +774,30 @@ void get_new_mmu_context(struct mm_struct *mm)
{
unsigned long ctx, new_ctx;
unsigned long orig_pgsz_bits;
int new_version;
spin_lock(&ctx_alloc_lock);
retry:
/* wrap might have happened, test again if our context became valid */
if (unlikely(CTX_VALID(mm->context)))
goto out;
orig_pgsz_bits = (mm->context.sparc64_ctx_val & CTX_PGSZ_MASK);
ctx = (tlb_context_cache + 1) & CTX_NR_MASK;
new_ctx = find_next_zero_bit(mmu_context_bmap, 1 << CTX_NR_BITS, ctx);
new_version = 0;
if (new_ctx >= (1 << CTX_NR_BITS)) {
new_ctx = find_next_zero_bit(mmu_context_bmap, ctx, 1);
if (new_ctx >= ctx) {
int i;
new_ctx = (tlb_context_cache & CTX_VERSION_MASK) +
CTX_FIRST_VERSION;
if (new_ctx == 1)
new_ctx = CTX_FIRST_VERSION;
/* Don't call memset, for 16 entries that's just
* plain silly...
*/
mmu_context_bmap[0] = 3;
mmu_context_bmap[1] = 0;
mmu_context_bmap[2] = 0;
mmu_context_bmap[3] = 0;
for (i = 4; i < CTX_BMAP_SLOTS; i += 4) {
mmu_context_bmap[i + 0] = 0;
mmu_context_bmap[i + 1] = 0;
mmu_context_bmap[i + 2] = 0;
mmu_context_bmap[i + 3] = 0;
}
new_version = 1;
goto out;
mmu_context_wrap();
goto retry;
}
}
if (mm->context.sparc64_ctx_val)
cpumask_clear(mm_cpumask(mm));
mmu_context_bmap[new_ctx>>6] |= (1UL << (new_ctx & 63));
new_ctx |= (tlb_context_cache & CTX_VERSION_MASK);
out:
tlb_context_cache = new_ctx;
mm->context.sparc64_ctx_val = new_ctx | orig_pgsz_bits;
out:
spin_unlock(&ctx_alloc_lock);
if (unlikely(new_version))
smp_new_mmu_context_version();
}
static int numa_enabled = 1;

View File

@ -496,7 +496,8 @@ retry_tsb_alloc:
extern void copy_tsb(unsigned long old_tsb_base,
unsigned long old_tsb_size,
unsigned long new_tsb_base,
unsigned long new_tsb_size);
unsigned long new_tsb_size,
unsigned long page_size_shift);
unsigned long old_tsb_base = (unsigned long) old_tsb;
unsigned long new_tsb_base = (unsigned long) new_tsb;
@ -504,7 +505,9 @@ retry_tsb_alloc:
old_tsb_base = __pa(old_tsb_base);
new_tsb_base = __pa(new_tsb_base);
}
copy_tsb(old_tsb_base, old_size, new_tsb_base, new_size);
copy_tsb(old_tsb_base, old_size, new_tsb_base, new_size,
tsb_index == MM_TSB_BASE ?
PAGE_SHIFT : REAL_HPAGE_SHIFT);
}
mm->context.tsb_block[tsb_index].tsb = new_tsb;

View File

@ -971,11 +971,6 @@ xcall_capture:
wr %g0, (1 << PIL_SMP_CAPTURE), %set_softint
retry
.globl xcall_new_mmu_context_version
xcall_new_mmu_context_version:
wr %g0, (1 << PIL_SMP_CTX_NEW_VERSION), %set_softint
retry
#ifdef CONFIG_KGDB
.globl xcall_kgdb_capture
xcall_kgdb_capture:

View File

@ -255,6 +255,7 @@ static void init_cyrix(struct cpuinfo_x86 *c)
break;
case 4: /* MediaGX/GXm or Geode GXM/GXLV/GX1 */
case 11: /* GX1 with inverted Device ID */
#ifdef CONFIG_PCI
{
u32 vendor, device;

View File

@ -619,6 +619,9 @@ int __init save_microcode_in_initrd_intel(void)
show_saved_mc();
/* initrd is going away, clear patch ptr. */
intel_ucode_patch = NULL;
return 0;
}

View File

@ -52,7 +52,7 @@ BFQG_FLAG_FNS(idling)
BFQG_FLAG_FNS(empty)
#undef BFQG_FLAG_FNS
/* This should be called with the queue_lock held. */
/* This should be called with the scheduler lock held. */
static void bfqg_stats_update_group_wait_time(struct bfqg_stats *stats)
{
unsigned long long now;
@ -67,7 +67,7 @@ static void bfqg_stats_update_group_wait_time(struct bfqg_stats *stats)
bfqg_stats_clear_waiting(stats);
}
/* This should be called with the queue_lock held. */
/* This should be called with the scheduler lock held. */
static void bfqg_stats_set_start_group_wait_time(struct bfq_group *bfqg,
struct bfq_group *curr_bfqg)
{
@ -81,7 +81,7 @@ static void bfqg_stats_set_start_group_wait_time(struct bfq_group *bfqg,
bfqg_stats_mark_waiting(stats);
}
/* This should be called with the queue_lock held. */
/* This should be called with the scheduler lock held. */
static void bfqg_stats_end_empty_time(struct bfqg_stats *stats)
{
unsigned long long now;
@ -203,12 +203,30 @@ struct bfq_group *bfqq_group(struct bfq_queue *bfqq)
static void bfqg_get(struct bfq_group *bfqg)
{
return blkg_get(bfqg_to_blkg(bfqg));
bfqg->ref++;
}
void bfqg_put(struct bfq_group *bfqg)
{
return blkg_put(bfqg_to_blkg(bfqg));
bfqg->ref--;
if (bfqg->ref == 0)
kfree(bfqg);
}
static void bfqg_and_blkg_get(struct bfq_group *bfqg)
{
/* see comments in bfq_bic_update_cgroup for why refcounting bfqg */
bfqg_get(bfqg);
blkg_get(bfqg_to_blkg(bfqg));
}
void bfqg_and_blkg_put(struct bfq_group *bfqg)
{
bfqg_put(bfqg);
blkg_put(bfqg_to_blkg(bfqg));
}
void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
@ -312,7 +330,11 @@ void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg)
if (bfqq) {
bfqq->ioprio = bfqq->new_ioprio;
bfqq->ioprio_class = bfqq->new_ioprio_class;
bfqg_get(bfqg);
/*
* Make sure that bfqg and its associated blkg do not
* disappear before entity.
*/
bfqg_and_blkg_get(bfqg);
}
entity->parent = bfqg->my_entity; /* NULL for root group */
entity->sched_data = &bfqg->sched_data;
@ -399,6 +421,8 @@ struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
return NULL;
}
/* see comments in bfq_bic_update_cgroup for why refcounting */
bfqg_get(bfqg);
return &bfqg->pd;
}
@ -426,7 +450,7 @@ void bfq_pd_free(struct blkg_policy_data *pd)
struct bfq_group *bfqg = pd_to_bfqg(pd);
bfqg_stats_exit(&bfqg->stats);
return kfree(bfqg);
bfqg_put(bfqg);
}
void bfq_pd_reset_stats(struct blkg_policy_data *pd)
@ -496,9 +520,10 @@ struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
* Move @bfqq to @bfqg, deactivating it from its old group and reactivating
* it on the new one. Avoid putting the entity on the old group idle tree.
*
* Must be called under the queue lock; the cgroup owning @bfqg must
* not disappear (by now this just means that we are called under
* rcu_read_lock()).
* Must be called under the scheduler lock, to make sure that the blkg
* owning @bfqg does not disappear (see comments in
* bfq_bic_update_cgroup on guaranteeing the consistency of blkg
* objects).
*/
void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
struct bfq_group *bfqg)
@ -519,16 +544,12 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
bfq_deactivate_bfqq(bfqd, bfqq, false, false);
else if (entity->on_st)
bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
bfqg_put(bfqq_group(bfqq));
bfqg_and_blkg_put(bfqq_group(bfqq));
/*
* Here we use a reference to bfqg. We don't need a refcounter
* as the cgroup reference will not be dropped, so that its
* destroy() callback will not be invoked.
*/
entity->parent = bfqg->my_entity;
entity->sched_data = &bfqg->sched_data;
bfqg_get(bfqg);
/* pin down bfqg and its associated blkg */
bfqg_and_blkg_get(bfqg);
if (bfq_bfqq_busy(bfqq)) {
bfq_pos_tree_add_move(bfqd, bfqq);
@ -545,8 +566,9 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
* @bic: the bic to move.
* @blkcg: the blk-cgroup to move to.
*
* Move bic to blkcg, assuming that bfqd->queue is locked; the caller
* has to make sure that the reference to cgroup is valid across the call.
* Move bic to blkcg, assuming that bfqd->lock is held; which makes
* sure that the reference to cgroup is valid across the call (see
* comments in bfq_bic_update_cgroup on this issue)
*
* NOTE: an alternative approach might have been to store the current
* cgroup in bfqq and getting a reference to it, reducing the lookup
@ -604,6 +626,57 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
goto out;
bfqg = __bfq_bic_change_cgroup(bfqd, bic, bio_blkcg(bio));
/*
* Update blkg_path for bfq_log_* functions. We cache this
* path, and update it here, for the following
* reasons. Operations on blkg objects in blk-cgroup are
* protected with the request_queue lock, and not with the
* lock that protects the instances of this scheduler
* (bfqd->lock). This exposes BFQ to the following sort of
* race.
*
* The blkg_lookup performed in bfq_get_queue, protected
* through rcu, may happen to return the address of a copy of
* the original blkg. If this is the case, then the
* bfqg_and_blkg_get performed in bfq_get_queue, to pin down
* the blkg, is useless: it does not prevent blk-cgroup code
* from destroying both the original blkg and all objects
* directly or indirectly referred by the copy of the
* blkg.
*
* On the bright side, destroy operations on a blkg invoke, as
* a first step, hooks of the scheduler associated with the
* blkg. And these hooks are executed with bfqd->lock held for
* BFQ. As a consequence, for any blkg associated with the
* request queue this instance of the scheduler is attached
* to, we are guaranteed that such a blkg is not destroyed, and
* that all the pointers it contains are consistent, while we
* are holding bfqd->lock. A blkg_lookup performed with
* bfqd->lock held then returns a fully consistent blkg, which
* remains consistent for as long as this lock is held.
*
* Thanks to the last fact, and to the fact that: (1) bfqg has
* been obtained through a blkg_lookup in the above
* assignment, and (2) bfqd->lock is being held, here we can
* safely use the policy data for the involved blkg (i.e., the
* field bfqg->pd) to get to the blkg associated with bfqg,
* and then we can safely use any field of blkg. After we
* release bfqd->lock, even just getting blkg through this
* bfqg may cause dangling references to be traversed, as
* bfqg->pd may not exist any more.
*
* In view of the above facts, here we cache, in the bfqg, any
* blkg data we may need for this bic, and for its associated
* bfq_queue. As of now, we need to cache only the path of the
* blkg, which is used in the bfq_log_* functions.
*
* Finally, note that bfqg itself needs to be protected from
* destruction on the blkg_free of the original blkg (which
* invokes bfq_pd_free). We use an additional private
* refcounter for bfqg, to let it disappear only after no
* bfq_queue refers to it any longer.
*/
blkg_path(bfqg_to_blkg(bfqg), bfqg->blkg_path, sizeof(bfqg->blkg_path));
bic->blkcg_serial_nr = serial_nr;
out:
rcu_read_unlock();
@ -640,8 +713,6 @@ static void bfq_reparent_leaf_entity(struct bfq_data *bfqd,
* @bfqd: the device data structure with the root group.
* @bfqg: the group to move from.
* @st: the service tree with the entities.
*
* Needs queue_lock to be taken and reference to be valid over the call.
*/
static void bfq_reparent_active_entities(struct bfq_data *bfqd,
struct bfq_group *bfqg,
@ -692,8 +763,7 @@ void bfq_pd_offline(struct blkg_policy_data *pd)
/*
* The idle tree may still contain bfq_queues belonging
* to exited tasks because they never migrated to a different
* cgroup from the one being destroyed now. No one else
* can access them so it's safe to act without any lock.
* cgroup from the one being destroyed now.
*/
bfq_flush_idle_tree(st);
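
To make the new ownership rule in the bfq-cgroup hunks above easier to follow, here is a minimal user-space sketch of the pattern they introduce: the group object carries its own refcount plus a cached copy of the blkg path, and every "get" pins both the group and the blkg it currently hangs off, so logging never has to touch a possibly-destroyed blkg. All names here (struct group, group_and_blkg_get/put) are hypothetical stand-ins for bfq_group and bfqg_and_blkg_get/put, not the kernel API.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for blkcg_gq and bfq_group. */
struct blkg {
	atomic_int ref;
	char path[128];
};

struct group {
	atomic_int ref;		/* private refcount, like bfqg->ref */
	struct blkg *blkg;	/* may be replaced by the cgroup core */
	char blkg_path[128];	/* cached copy, like bfqg->blkg_path */
};

static void blkg_get(struct blkg *b) { atomic_fetch_add(&b->ref, 1); }

static void blkg_put(struct blkg *b)
{
	if (atomic_fetch_sub(&b->ref, 1) == 1)
		free(b);
}

/* Pin both the group and the blkg it is currently attached to. */
static void group_and_blkg_get(struct group *g)
{
	atomic_fetch_add(&g->ref, 1);
	blkg_get(g->blkg);
}

static void group_and_blkg_put(struct group *g)
{
	blkg_put(g->blkg);
	if (atomic_fetch_sub(&g->ref, 1) == 1)
		free(g);	/* the group may outlive the original blkg */
}

int main(void)
{
	struct blkg *b = calloc(1, sizeof(*b));
	struct group *g = calloc(1, sizeof(*g));

	atomic_init(&b->ref, 1);	/* creation reference, held via g */
	atomic_init(&g->ref, 1);
	g->blkg = b;
	snprintf(b->path, sizeof(b->path), "/sys/fs/cgroup/blkio/demo");

	/* Cache the path once, so later logging never dereferences the blkg. */
	strncpy(g->blkg_path, b->path, sizeof(g->blkg_path) - 1);

	group_and_blkg_get(g);		/* e.g. a queue starts using the group */
	printf("logging via cached path: %s\n", g->blkg_path);
	group_and_blkg_put(g);		/* queue done with the group */

	group_and_blkg_put(g);		/* drop the creation references */
	return 0;
}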

View File

@ -3665,7 +3665,7 @@ void bfq_put_queue(struct bfq_queue *bfqq)
kmem_cache_free(bfq_pool, bfqq);
#ifdef CONFIG_BFQ_GROUP_IOSCHED
bfqg_put(bfqg);
bfqg_and_blkg_put(bfqg);
#endif
}

View File

@ -759,6 +759,12 @@ struct bfq_group {
/* must be the first member */
struct blkg_policy_data pd;
/* cached path for this blkg (see comments in bfq_bic_update_cgroup) */
char blkg_path[128];
/* reference counter (see comments in bfq_bic_update_cgroup) */
int ref;
struct bfq_entity entity;
struct bfq_sched_data sched_data;
@ -838,7 +844,7 @@ struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg);
struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node);
void bfqg_put(struct bfq_group *bfqg);
void bfqg_and_blkg_put(struct bfq_group *bfqg);
#ifdef CONFIG_BFQ_GROUP_IOSCHED
extern struct cftype bfq_blkcg_legacy_files[];
@ -910,20 +916,13 @@ void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq);
struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) do { \
char __pbuf[128]; \
\
blkg_path(bfqg_to_blkg(bfqq_group(bfqq)), __pbuf, sizeof(__pbuf)); \
blk_add_trace_msg((bfqd)->queue, "bfq%d%c %s " fmt, (bfqq)->pid, \
blk_add_trace_msg((bfqd)->queue, "bfq%d%c %s " fmt, (bfqq)->pid,\
bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \
__pbuf, ##args); \
bfqq_group(bfqq)->blkg_path, ##args); \
} while (0)
#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do { \
char __pbuf[128]; \
\
blkg_path(bfqg_to_blkg(bfqg), __pbuf, sizeof(__pbuf)); \
blk_add_trace_msg((bfqd)->queue, "%s " fmt, __pbuf, ##args); \
} while (0)
#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) \
blk_add_trace_msg((bfqd)->queue, "%s " fmt, (bfqg)->blkg_path, ##args)
#else /* CONFIG_BFQ_GROUP_IOSCHED */

View File

@ -175,6 +175,9 @@ bool bio_integrity_enabled(struct bio *bio)
if (bio_op(bio) != REQ_OP_READ && bio_op(bio) != REQ_OP_WRITE)
return false;
if (!bio_sectors(bio))
return false;
/* Already protected? */
if (bio_integrity(bio))
return false;

View File

@ -1461,22 +1461,28 @@ static blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx, struct request *rq)
return blk_tag_to_qc_t(rq->internal_tag, hctx->queue_num, true);
}
static void __blk_mq_try_issue_directly(struct request *rq, blk_qc_t *cookie,
bool may_sleep)
static void __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
struct request *rq,
blk_qc_t *cookie, bool may_sleep)
{
struct request_queue *q = rq->q;
struct blk_mq_queue_data bd = {
.rq = rq,
.last = true,
};
struct blk_mq_hw_ctx *hctx;
blk_qc_t new_cookie;
int ret;
bool run_queue = true;
if (blk_mq_hctx_stopped(hctx)) {
run_queue = false;
goto insert;
}
if (q->elevator)
goto insert;
if (!blk_mq_get_driver_tag(rq, &hctx, false))
if (!blk_mq_get_driver_tag(rq, NULL, false))
goto insert;
new_cookie = request_to_qc_t(hctx, rq);
@ -1500,7 +1506,7 @@ static void __blk_mq_try_issue_directly(struct request *rq, blk_qc_t *cookie,
__blk_mq_requeue_request(rq);
insert:
blk_mq_sched_insert_request(rq, false, true, false, may_sleep);
blk_mq_sched_insert_request(rq, false, run_queue, false, may_sleep);
}
static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
@ -1508,7 +1514,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
{
if (!(hctx->flags & BLK_MQ_F_BLOCKING)) {
rcu_read_lock();
__blk_mq_try_issue_directly(rq, cookie, false);
__blk_mq_try_issue_directly(hctx, rq, cookie, false);
rcu_read_unlock();
} else {
unsigned int srcu_idx;
@ -1516,7 +1522,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
might_sleep();
srcu_idx = srcu_read_lock(&hctx->queue_rq_srcu);
__blk_mq_try_issue_directly(rq, cookie, true);
__blk_mq_try_issue_directly(hctx, rq, cookie, true);
srcu_read_unlock(&hctx->queue_rq_srcu, srcu_idx);
}
}
@ -1619,9 +1625,12 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
blk_mq_put_ctx(data.ctx);
if (same_queue_rq)
if (same_queue_rq) {
data.hctx = blk_mq_map_queue(q,
same_queue_rq->mq_ctx->cpu);
blk_mq_try_issue_directly(data.hctx, same_queue_rq,
&cookie);
}
} else if (q->nr_hw_queues > 1 && is_sync) {
blk_mq_put_ctx(data.ctx);
blk_mq_bio_to_request(rq, bio);

View File

@ -27,6 +27,13 @@ static int throtl_quantum = 32;
#define MIN_THROTL_IOPS (10)
#define DFL_LATENCY_TARGET (-1L)
#define DFL_IDLE_THRESHOLD (0)
#define DFL_HD_BASELINE_LATENCY (4000L) /* 4ms */
#define LATENCY_FILTERED_SSD (0)
/*
* For HD, very small latency comes from sequential IO. Such IO does little to
* tell whether the IO is impacted by others, hence we ignore it
*/
#define LATENCY_FILTERED_HD (1000L) /* 1ms */
#define SKIP_LATENCY (((u64)1) << BLK_STAT_RES_SHIFT)
@ -212,6 +219,7 @@ struct throtl_data
struct avg_latency_bucket avg_buckets[LATENCY_BUCKET_SIZE];
struct latency_bucket __percpu *latency_buckets;
unsigned long last_calculate_time;
unsigned long filtered_latency;
bool track_bio_latency;
};
@ -698,7 +706,7 @@ static void throtl_dequeue_tg(struct throtl_grp *tg)
static void throtl_schedule_pending_timer(struct throtl_service_queue *sq,
unsigned long expires)
{
unsigned long max_expire = jiffies + 8 * sq_to_tg(sq)->td->throtl_slice;
unsigned long max_expire = jiffies + 8 * sq_to_td(sq)->throtl_slice;
/*
* Since we are adjusting the throttle limit dynamically, the sleep
@ -2281,7 +2289,7 @@ void blk_throtl_bio_endio(struct bio *bio)
throtl_track_latency(tg->td, blk_stat_size(&bio->bi_issue_stat),
bio_op(bio), lat);
if (tg->latency_target) {
if (tg->latency_target && lat >= tg->td->filtered_latency) {
int bucket;
unsigned int threshold;
@ -2417,14 +2425,20 @@ void blk_throtl_exit(struct request_queue *q)
void blk_throtl_register_queue(struct request_queue *q)
{
struct throtl_data *td;
int i;
td = q->td;
BUG_ON(!td);
if (blk_queue_nonrot(q))
if (blk_queue_nonrot(q)) {
td->throtl_slice = DFL_THROTL_SLICE_SSD;
else
td->filtered_latency = LATENCY_FILTERED_SSD;
} else {
td->throtl_slice = DFL_THROTL_SLICE_HD;
td->filtered_latency = LATENCY_FILTERED_HD;
for (i = 0; i < LATENCY_BUCKET_SIZE; i++)
td->avg_buckets[i].latency = DFL_HD_BASELINE_LATENCY;
}
#ifndef CONFIG_BLK_DEV_THROTTLING_LOW
/* if no low limit, use previous default */
td->throtl_slice = DFL_THROTL_SLICE_HD;
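
A small stand-alone sketch of the filtering rule the blk-throttle hunks above add: on non-rotational devices every latency sample counts, while on rotational disks samples below 1 ms are skipped and the average-latency buckets start from a 4 ms baseline. The helper names (register_queue, sample_counts) are hypothetical; the thresholds mirror LATENCY_FILTERED_SSD/HD and DFL_HD_BASELINE_LATENCY from the diff.

#include <stdbool.h>
#include <stdio.h>

#define LATENCY_FILTERED_SSD	0L	/* us */
#define LATENCY_FILTERED_HD	1000L	/* us: ignore sub-1ms sequential IO on HD */
#define HD_BASELINE_LATENCY	4000L	/* us: initial bucket estimate for HD */

struct tdata {
	long filtered_latency;
	long bucket_latency;
};

/* Pick the thresholds the way blk_throtl_register_queue() does above. */
static void register_queue(struct tdata *td, bool nonrot)
{
	if (nonrot) {
		td->filtered_latency = LATENCY_FILTERED_SSD;
		td->bucket_latency = 0;
	} else {
		td->filtered_latency = LATENCY_FILTERED_HD;
		td->bucket_latency = HD_BASELINE_LATENCY;
	}
}

/* Only samples at or above the filter threshold feed the latency-target check. */
static bool sample_counts(const struct tdata *td, long lat_us)
{
	return lat_us >= td->filtered_latency;
}

int main(void)
{
	struct tdata hd, ssd;

	register_queue(&hd, false);
	register_queue(&ssd, true);

	printf("HD 300us counts: %d, HD 2000us counts: %d\n",
	       sample_counts(&hd, 300), sample_counts(&hd, 2000));
	printf("SSD 300us counts: %d\n", sample_counts(&ssd, 300));
	return 0;
}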

View File

@ -141,7 +141,7 @@ int public_key_verify_signature(const struct public_key *pkey,
* signature and returns that to us.
*/
ret = crypto_akcipher_verify(req);
if (ret == -EINPROGRESS) {
if ((ret == -EINPROGRESS) || (ret == -EBUSY)) {
wait_for_completion(&compl.completion);
ret = compl.err;
}
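
The crypto hunk above, and the drbg and gcm hunks that follow, all converge on the same pattern: when the backend reports -EINPROGRESS or -EBUSY the request is still in flight, so the caller waits uninterruptibly on a completion and then reads the result the callback filled in. Below is a hedged user-space sketch of that flow built on pthreads; submit_async(), worker() and this struct completion are illustrative stand-ins, not the kernel API.

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

struct completion {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int done;
	int err;		/* result delivered by the async callback */
};

static void complete(struct completion *c, int err)
{
	pthread_mutex_lock(&c->lock);
	c->err = err;
	c->done = 1;
	pthread_cond_signal(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->cond, &c->lock);
	pthread_mutex_unlock(&c->lock);
}

/* Pretend backend: runs the work on a thread and signals the completion. */
static void *worker(void *arg)
{
	complete(arg, 0);	/* report success */
	return NULL;
}

static int submit_async(struct completion *c, pthread_t *t)
{
	pthread_create(t, NULL, worker, c);
	return -EINPROGRESS;	/* could just as well be -EBUSY when backlogged */
}

int main(void)
{
	struct completion c = { PTHREAD_MUTEX_INITIALIZER,
				PTHREAD_COND_INITIALIZER, 0, 0 };
	pthread_t t;
	int ret = submit_async(&c, &t);

	/* Same shape as the patched kernel code: wait on both transient codes. */
	if (ret == -EINPROGRESS || ret == -EBUSY) {
		wait_for_completion(&c);
		ret = c.err;
	}
	pthread_join(t, NULL);
	printf("result: %d\n", ret);
	return 0;
}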

View File

@ -1767,9 +1767,8 @@ static int drbg_kcapi_sym_ctr(struct drbg_state *drbg,
break;
case -EINPROGRESS:
case -EBUSY:
ret = wait_for_completion_interruptible(
&drbg->ctr_completion);
if (!ret && !drbg->ctr_async_err) {
wait_for_completion(&drbg->ctr_completion);
if (!drbg->ctr_async_err) {
reinit_completion(&drbg->ctr_completion);
break;
}

View File

@ -152,10 +152,8 @@ static int crypto_gcm_setkey(struct crypto_aead *aead, const u8 *key,
err = crypto_skcipher_encrypt(&data->req);
if (err == -EINPROGRESS || err == -EBUSY) {
err = wait_for_completion_interruptible(
&data->result.completion);
if (!err)
err = data->result.err;
wait_for_completion(&data->result.completion);
err = data->result.err;
}
if (err)

View File

@ -666,14 +666,6 @@ static const struct iommu_ops *iort_iommu_xlate(struct device *dev,
int ret = -ENODEV;
struct fwnode_handle *iort_fwnode;
/*
* If we already translated the fwspec there
* is nothing left to do, return the iommu_ops.
*/
ops = iort_fwspec_iommu_ops(dev->iommu_fwspec);
if (ops)
return ops;
if (node) {
iort_fwnode = iort_get_fwnode(node);
if (!iort_fwnode)
@ -735,6 +727,14 @@ const struct iommu_ops *iort_iommu_configure(struct device *dev)
u32 streamid = 0;
int err;
/*
* If we already translated the fwspec there
* is nothing left to do, return the iommu_ops.
*/
ops = iort_fwspec_iommu_ops(dev->iommu_fwspec);
if (ops)
return ops;
if (dev_is_pci(dev)) {
struct pci_bus *bus = to_pci_dev(dev)->bus;
u32 rid;
@ -782,6 +782,12 @@ const struct iommu_ops *iort_iommu_configure(struct device *dev)
if (err)
ops = ERR_PTR(err);
/* Ignore all other errors apart from EPROBE_DEFER */
if (IS_ERR(ops) && (PTR_ERR(ops) != -EPROBE_DEFER)) {
dev_dbg(dev, "Adding to IOMMU failed: %ld\n", PTR_ERR(ops));
ops = NULL;
}
return ops;
}

View File

@ -782,7 +782,7 @@ static int acpi_battery_update(struct acpi_battery *battery, bool resume)
if ((battery->state & ACPI_BATTERY_STATE_CRITICAL) ||
(test_bit(ACPI_BATTERY_ALARM_PRESENT, &battery->flags) &&
(battery->capacity_now <= battery->alarm)))
pm_wakeup_hard_event(&battery->device->dev);
pm_wakeup_event(&battery->device->dev, 0);
return result;
}

View File

@ -217,7 +217,7 @@ static int acpi_lid_notify_state(struct acpi_device *device, int state)
}
if (state)
pm_wakeup_hard_event(&device->dev);
pm_wakeup_event(&device->dev, 0);
ret = blocking_notifier_call_chain(&acpi_lid_notifier, state, device);
if (ret == NOTIFY_DONE)
@ -402,7 +402,7 @@ static void acpi_button_notify(struct acpi_device *device, u32 event)
} else {
int keycode;
pm_wakeup_hard_event(&device->dev);
pm_wakeup_event(&device->dev, 0);
if (button->suspended)
break;
@ -534,7 +534,6 @@ static int acpi_button_add(struct acpi_device *device)
lid_device = device;
}
device_init_wakeup(&device->dev, true);
printk(KERN_INFO PREFIX "%s [%s]\n", name, acpi_device_bid(device));
return 0;

View File

@ -24,7 +24,6 @@
#include <linux/pm_qos.h>
#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
#include <linux/suspend.h>
#include "internal.h"
@ -400,7 +399,7 @@ static void acpi_pm_notify_handler(acpi_handle handle, u32 val, void *not_used)
mutex_lock(&acpi_pm_notifier_lock);
if (adev->wakeup.flags.notifier_present) {
pm_wakeup_ws_event(adev->wakeup.ws, 0, true);
__pm_wakeup_event(adev->wakeup.ws, 0);
if (adev->wakeup.context.work.func)
queue_pm_work(&adev->wakeup.context.work);
}

View File

@ -1371,8 +1371,8 @@ int acpi_dma_configure(struct device *dev, enum dev_dma_attr attr)
iort_set_dma_mask(dev);
iommu = iort_iommu_configure(dev);
if (IS_ERR(iommu))
return PTR_ERR(iommu);
if (IS_ERR(iommu) && PTR_ERR(iommu) == -EPROBE_DEFER)
return -EPROBE_DEFER;
size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);
/*

View File

@ -663,40 +663,14 @@ static int acpi_freeze_prepare(void)
acpi_os_wait_events_complete();
if (acpi_sci_irq_valid())
enable_irq_wake(acpi_sci_irq);
return 0;
}
static void acpi_freeze_wake(void)
{
/*
* If IRQD_WAKEUP_ARMED is not set for the SCI at this point, it means
* that the SCI has triggered while suspended, so cancel the wakeup in
* case it has not been a wakeup event (the GPEs will be checked later).
*/
if (acpi_sci_irq_valid() &&
!irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq)))
pm_system_cancel_wakeup();
}
static void acpi_freeze_sync(void)
{
/*
* Process all pending events in case there are any wakeup ones.
*
* The EC driver uses the system workqueue, so that one needs to be
* flushed too.
*/
acpi_os_wait_events_complete();
flush_scheduled_work();
}
static void acpi_freeze_restore(void)
{
acpi_disable_wakeup_devices(ACPI_STATE_S0);
if (acpi_sci_irq_valid())
disable_irq_wake(acpi_sci_irq);
acpi_enable_all_runtime_gpes();
}
@ -708,8 +682,6 @@ static void acpi_freeze_end(void)
static const struct platform_freeze_ops acpi_freeze_ops = {
.begin = acpi_freeze_begin,
.prepare = acpi_freeze_prepare,
.wake = acpi_freeze_wake,
.sync = acpi_freeze_sync,
.restore = acpi_freeze_restore,
.end = acpi_freeze_end,
};

View File

@ -1364,6 +1364,40 @@ static inline void ahci_gtf_filter_workaround(struct ata_host *host)
{}
#endif
/*
* On the Acer Aspire Switch Alpha 12, sometimes all SATA ports are detected
* as DUMMY, or detected but eventually get a "link down" and never get up
* again. When this happens, CAP.NP may hold a value of 0x00 or 0x01, and the
* port_map may hold a value of 0x00.
*
* Overriding CAP.NP to 0x02 and the port_map to 0x7 will reveal all 3 ports
* and can significantly reduce the occurrence of the problem.
*
* https://bugzilla.kernel.org/show_bug.cgi?id=189471
*/
static void acer_sa5_271_workaround(struct ahci_host_priv *hpriv,
struct pci_dev *pdev)
{
static const struct dmi_system_id sysids[] = {
{
.ident = "Acer Switch Alpha 12",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
DMI_MATCH(DMI_PRODUCT_NAME, "Switch SA5-271")
},
},
{ }
};
if (dmi_check_system(sysids)) {
dev_info(&pdev->dev, "enabling Acer Switch Alpha 12 workaround\n");
if ((hpriv->saved_cap & 0xC734FF00) == 0xC734FF00) {
hpriv->port_map = 0x7;
hpriv->cap = 0xC734FF02;
}
}
}
#ifdef CONFIG_ARM64
/*
* Due to ERRATA#22536, ThunderX needs to handle HOST_IRQ_STAT differently.
@ -1636,6 +1670,10 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
"online status unreliable, applying workaround\n");
}
/* Acer SA5-271 workaround modifies private_data */
acer_sa5_271_workaround(hpriv, pdev);
/* CAP.NP sometimes indicates the index of the last enabled
* port, at other times, that of the last possible port, so
* determining the maximum port number requires looking at

View File

@ -514,8 +514,9 @@ int ahci_platform_init_host(struct platform_device *pdev,
irq = platform_get_irq(pdev, 0);
if (irq <= 0) {
dev_err(dev, "no irq\n");
return -EINVAL;
if (irq != -EPROBE_DEFER)
dev_err(dev, "no irq\n");
return irq;
}
hpriv->irq = irq;

View File

@ -6800,7 +6800,7 @@ static int __init ata_parse_force_one(char **cur,
}
force_ent->port = simple_strtoul(id, &endp, 10);
if (p == endp || *endp != '\0') {
if (id == endp || *endp != '\0') {
*reason = "invalid port/link";
return -EINVAL;
}

View File

@ -4067,7 +4067,6 @@ static int mv_platform_probe(struct platform_device *pdev)
struct ata_host *host;
struct mv_host_priv *hpriv;
struct resource *res;
void __iomem *mmio;
int n_ports = 0, irq = 0;
int rc;
int port;
@ -4086,9 +4085,8 @@ static int mv_platform_probe(struct platform_device *pdev)
* Get the register base first
*/
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mmio = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(mmio))
return PTR_ERR(mmio);
if (res == NULL)
return -EINVAL;
/* allocate host */
if (pdev->dev.of_node) {
@ -4132,7 +4130,12 @@ static int mv_platform_probe(struct platform_device *pdev)
hpriv->board_idx = chip_soc;
host->iomap = NULL;
hpriv->base = mmio - SATAHC0_REG_BASE;
hpriv->base = devm_ioremap(&pdev->dev, res->start,
resource_size(res));
if (!hpriv->base)
return -ENOMEM;
hpriv->base -= SATAHC0_REG_BASE;
hpriv->clk = clk_get(&pdev->dev, NULL);
if (IS_ERR(hpriv->clk))

View File

@ -890,7 +890,10 @@ static int sata_rcar_probe(struct platform_device *pdev)
dev_err(&pdev->dev, "failed to get access to sata clock\n");
return PTR_ERR(priv->clk);
}
clk_prepare_enable(priv->clk);
ret = clk_prepare_enable(priv->clk);
if (ret)
return ret;
host = ata_host_alloc(&pdev->dev, 1);
if (!host) {
@ -970,8 +973,11 @@ static int sata_rcar_resume(struct device *dev)
struct ata_host *host = dev_get_drvdata(dev);
struct sata_rcar_priv *priv = host->private_data;
void __iomem *base = priv->base;
int ret;
clk_prepare_enable(priv->clk);
ret = clk_prepare_enable(priv->clk);
if (ret)
return ret;
/* ack and mask */
iowrite32(0, base + SATAINTSTAT_REG);
@ -988,8 +994,11 @@ static int sata_rcar_restore(struct device *dev)
{
struct ata_host *host = dev_get_drvdata(dev);
struct sata_rcar_priv *priv = host->private_data;
int ret;
clk_prepare_enable(priv->clk);
ret = clk_prepare_enable(priv->clk);
if (ret)
return ret;
sata_rcar_setup_port(host);

View File

@ -1091,6 +1091,11 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
if (async_error)
goto Complete;
if (pm_wakeup_pending()) {
async_error = -EBUSY;
goto Complete;
}
if (dev->power.syscore || dev->power.direct_complete)
goto Complete;

View File

@ -28,8 +28,8 @@ bool events_check_enabled __read_mostly;
/* First wakeup IRQ seen by the kernel in the last cycle. */
unsigned int pm_wakeup_irq __read_mostly;
/* If greater than 0 and the system is suspending, terminate the suspend. */
static atomic_t pm_abort_suspend __read_mostly;
/* If set and the system is suspending, terminate the suspend. */
static bool pm_abort_suspend __read_mostly;
/*
* Combined counters of registered wakeup events and wakeup events in progress.
@ -855,26 +855,20 @@ bool pm_wakeup_pending(void)
pm_print_active_wakeup_sources();
}
return ret || atomic_read(&pm_abort_suspend) > 0;
return ret || pm_abort_suspend;
}
void pm_system_wakeup(void)
{
atomic_inc(&pm_abort_suspend);
pm_abort_suspend = true;
freeze_wake();
}
EXPORT_SYMBOL_GPL(pm_system_wakeup);
void pm_system_cancel_wakeup(void)
{
atomic_dec(&pm_abort_suspend);
}
void pm_wakeup_clear(bool reset)
void pm_wakeup_clear(void)
{
pm_abort_suspend = false;
pm_wakeup_irq = 0;
if (reset)
atomic_set(&pm_abort_suspend, 0);
}
void pm_system_irq_wakeup(unsigned int irq_number)

View File

@ -608,6 +608,9 @@ static int loop_switch(struct loop_device *lo, struct file *file)
*/
static int loop_flush(struct loop_device *lo)
{
/* loop not yet configured, no running thread, nothing to flush */
if (lo->lo_state != Lo_bound)
return 0;
return loop_switch(lo, NULL);
}

View File

@ -571,9 +571,10 @@ static inline void update_turbo_state(void)
static int min_perf_pct_min(void)
{
struct cpudata *cpu = all_cpu_data[0];
int turbo_pstate = cpu->pstate.turbo_pstate;
return DIV_ROUND_UP(cpu->pstate.min_pstate * 100,
cpu->pstate.turbo_pstate);
return turbo_pstate ?
DIV_ROUND_UP(cpu->pstate.min_pstate * 100, turbo_pstate) : 0;
}
static s16 intel_pstate_get_epb(struct cpudata *cpu_data)

View File

@ -27,6 +27,26 @@ struct bmp_header {
u32 size;
} __packed;
static bool efi_bgrt_addr_valid(u64 addr)
{
efi_memory_desc_t *md;
for_each_efi_memory_desc(md) {
u64 size;
u64 end;
if (md->type != EFI_BOOT_SERVICES_DATA)
continue;
size = md->num_pages << EFI_PAGE_SHIFT;
end = md->phys_addr + size;
if (addr >= md->phys_addr && addr < end)
return true;
}
return false;
}
void __init efi_bgrt_init(struct acpi_table_header *table)
{
void *image;
@ -36,7 +56,7 @@ void __init efi_bgrt_init(struct acpi_table_header *table)
if (acpi_disabled)
return;
if (!efi_enabled(EFI_BOOT))
if (!efi_enabled(EFI_MEMMAP))
return;
if (table->length < sizeof(bgrt_tab)) {
@ -65,6 +85,10 @@ void __init efi_bgrt_init(struct acpi_table_header *table)
goto out;
}
if (!efi_bgrt_addr_valid(bgrt->image_address)) {
pr_notice("Ignoring BGRT: invalid image address\n");
goto out;
}
image = early_memremap(bgrt->image_address, sizeof(bmp_header));
if (!image) {
pr_notice("Ignoring BGRT: failed to map image header memory\n");

View File

@ -508,6 +508,8 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
bool has_connectors =
!!new_crtc_state->connector_mask;
WARN_ON(!drm_modeset_is_locked(&crtc->mutex));
if (!drm_mode_equal(&old_crtc_state->mode, &new_crtc_state->mode)) {
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] mode changed\n",
crtc->base.id, crtc->name);
@ -551,6 +553,8 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
for_each_oldnew_connector_in_state(state, connector, old_connector_state, new_connector_state, i) {
const struct drm_connector_helper_funcs *funcs = connector->helper_private;
WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
/*
* This only sets crtc->connectors_changed for routing changes,
* drivers must set crtc->connectors_changed themselves when
@ -650,6 +654,8 @@ drm_atomic_helper_check_planes(struct drm_device *dev,
for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) {
const struct drm_plane_helper_funcs *funcs;
WARN_ON(!drm_modeset_is_locked(&plane->mutex));
funcs = plane->helper_private;
drm_atomic_helper_plane_changed(state, old_plane_state, new_plane_state, plane);
@ -2663,7 +2669,12 @@ int drm_atomic_helper_resume(struct drm_device *dev,
drm_modeset_acquire_init(&ctx, 0);
while (1) {
err = drm_modeset_lock_all_ctx(dev, &ctx);
if (err)
goto out;
err = drm_atomic_helper_commit_duplicated_state(state, &ctx);
out:
if (err != -EDEADLK)
break;

View File

@ -358,7 +358,12 @@ EXPORT_SYMBOL(drm_put_dev);
void drm_unplug_dev(struct drm_device *dev)
{
/* for a USB device */
drm_dev_unregister(dev);
if (drm_core_check_feature(dev, DRIVER_MODESET))
drm_modeset_unregister_all(dev);
drm_minor_unregister(dev, DRM_MINOR_PRIMARY);
drm_minor_unregister(dev, DRM_MINOR_RENDER);
drm_minor_unregister(dev, DRM_MINOR_CONTROL);
mutex_lock(&drm_global_mutex);

View File

@ -760,7 +760,7 @@ static int dsi_parse_dt(struct platform_device *pdev, struct dw_dsi *dsi)
* Get the endpoint node. In our case, dsi has one output port1
* to which the external HDMI bridge is connected.
*/
ret = drm_of_find_panel_or_bridge(np, 0, 0, NULL, &dsi->bridge);
ret = drm_of_find_panel_or_bridge(np, 1, 0, NULL, &dsi->bridge);
if (ret)
return ret;

View File

@ -1235,6 +1235,15 @@ int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_fini;
pci_set_drvdata(pdev, &dev_priv->drm);
/*
* Disable the system suspend direct complete optimization, which can
* leave the device suspended skipping the driver's suspend handlers
* if the device was already runtime suspended. This is needed due to
* the difference in our runtime and system suspend sequence and
* because the HDA driver may require us to enable the audio power
* domain during system suspend.
*/
pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;
ret = i915_driver_init_early(dev_priv, ent);
if (ret < 0)

View File

@ -2991,6 +2991,16 @@ static inline bool intel_scanout_needs_vtd_wa(struct drm_i915_private *dev_priv)
return false;
}
static inline bool
intel_ggtt_update_needs_vtd_wa(struct drm_i915_private *dev_priv)
{
#ifdef CONFIG_INTEL_IOMMU
if (IS_BROXTON(dev_priv) && intel_iommu_gfx_mapped)
return true;
#endif
return false;
}
int intel_sanitize_enable_ppgtt(struct drm_i915_private *dev_priv,
int enable_ppgtt);

View File

@ -3298,6 +3298,10 @@ int i915_gem_wait_for_idle(struct drm_i915_private *i915, unsigned int flags)
{
int ret;
/* If the device is asleep, we have no requests outstanding */
if (!READ_ONCE(i915->gt.awake))
return 0;
if (flags & I915_WAIT_LOCKED) {
struct i915_gem_timeline *tl;

View File

@ -2191,6 +2191,101 @@ static void gen8_ggtt_clear_range(struct i915_address_space *vm,
gen8_set_pte(&gtt_base[i], scratch_pte);
}
static void bxt_vtd_ggtt_wa(struct i915_address_space *vm)
{
struct drm_i915_private *dev_priv = vm->i915;
/*
* Make sure the internal GAM fifo has been cleared of all GTT
* writes before exiting stop_machine(). This guarantees that
* any aperture accesses waiting to start in another process
* cannot back up behind the GTT writes causing a hang.
* The register can be any arbitrary GAM register.
*/
POSTING_READ(GFX_FLSH_CNTL_GEN6);
}
struct insert_page {
struct i915_address_space *vm;
dma_addr_t addr;
u64 offset;
enum i915_cache_level level;
};
static int bxt_vtd_ggtt_insert_page__cb(void *_arg)
{
struct insert_page *arg = _arg;
gen8_ggtt_insert_page(arg->vm, arg->addr, arg->offset, arg->level, 0);
bxt_vtd_ggtt_wa(arg->vm);
return 0;
}
static void bxt_vtd_ggtt_insert_page__BKL(struct i915_address_space *vm,
dma_addr_t addr,
u64 offset,
enum i915_cache_level level,
u32 unused)
{
struct insert_page arg = { vm, addr, offset, level };
stop_machine(bxt_vtd_ggtt_insert_page__cb, &arg, NULL);
}
struct insert_entries {
struct i915_address_space *vm;
struct sg_table *st;
u64 start;
enum i915_cache_level level;
};
static int bxt_vtd_ggtt_insert_entries__cb(void *_arg)
{
struct insert_entries *arg = _arg;
gen8_ggtt_insert_entries(arg->vm, arg->st, arg->start, arg->level, 0);
bxt_vtd_ggtt_wa(arg->vm);
return 0;
}
static void bxt_vtd_ggtt_insert_entries__BKL(struct i915_address_space *vm,
struct sg_table *st,
u64 start,
enum i915_cache_level level,
u32 unused)
{
struct insert_entries arg = { vm, st, start, level };
stop_machine(bxt_vtd_ggtt_insert_entries__cb, &arg, NULL);
}
struct clear_range {
struct i915_address_space *vm;
u64 start;
u64 length;
};
static int bxt_vtd_ggtt_clear_range__cb(void *_arg)
{
struct clear_range *arg = _arg;
gen8_ggtt_clear_range(arg->vm, arg->start, arg->length);
bxt_vtd_ggtt_wa(arg->vm);
return 0;
}
static void bxt_vtd_ggtt_clear_range__BKL(struct i915_address_space *vm,
u64 start,
u64 length)
{
struct clear_range arg = { vm, start, length };
stop_machine(bxt_vtd_ggtt_clear_range__cb, &arg, NULL);
}
static void gen6_ggtt_clear_range(struct i915_address_space *vm,
u64 start, u64 length)
{
@ -2785,6 +2880,14 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
ggtt->base.insert_entries = gen8_ggtt_insert_entries;
/* Serialize GTT updates with aperture access on BXT if VT-d is on. */
if (intel_ggtt_update_needs_vtd_wa(dev_priv)) {
ggtt->base.insert_entries = bxt_vtd_ggtt_insert_entries__BKL;
ggtt->base.insert_page = bxt_vtd_ggtt_insert_page__BKL;
if (ggtt->base.clear_range != nop_clear_range)
ggtt->base.clear_range = bxt_vtd_ggtt_clear_range__BKL;
}
ggtt->invalidate = gen6_ggtt_invalidate;
return ggtt_probe_common(ggtt, size);
@ -2997,7 +3100,8 @@ void i915_ggtt_enable_guc(struct drm_i915_private *i915)
void i915_ggtt_disable_guc(struct drm_i915_private *i915)
{
i915->ggtt.invalidate = gen6_ggtt_invalidate;
if (i915->ggtt.invalidate == guc_ggtt_invalidate)
i915->ggtt.invalidate = gen6_ggtt_invalidate;
}
void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv)
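
The BXT VT-d workaround above wraps each GGTT update in stop_machine(): the arguments are packed into a small struct, a one-argument callback unpacks them, performs the original update and then posts a register read to flush the GAM FIFO. The shape of that wrapper, reduced to a hedged user-space sketch (run_serialized() stands in for stop_machine(), which has no user-space equivalent, and the other names are illustrative only):

#include <stdio.h>

/* Stand-in for stop_machine(): here it just calls the callback directly.
 * In the kernel it would run fn(arg) with every other CPU held off. */
static int run_serialized(int (*fn)(void *), void *arg)
{
	return fn(arg);
}

/* The "real" update we want to serialize. */
static void ggtt_insert_page(unsigned long addr, unsigned long offset)
{
	printf("map 0x%lx at offset 0x%lx\n", addr, offset);
}

/* Arguments packed into a struct, mirroring struct insert_page above. */
struct insert_page_args {
	unsigned long addr;
	unsigned long offset;
};

static int insert_page_cb(void *_arg)
{
	struct insert_page_args *arg = _arg;

	ggtt_insert_page(arg->addr, arg->offset);
	/* the kernel version also posts a register read here to flush the GAM FIFO */
	return 0;
}

static void insert_page_serialized(unsigned long addr, unsigned long offset)
{
	struct insert_page_args arg = { addr, offset };

	run_serialized(insert_page_cb, &arg);
}

int main(void)
{
	insert_page_serialized(0x1000, 0x40);
	return 0;
}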

View File

@ -278,7 +278,7 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
obj->mm.quirked = false;
}
if (!i915_gem_object_is_tiled(obj)) {
GEM_BUG_ON(!obj->mm.quirked);
GEM_BUG_ON(obj->mm.quirked);
__i915_gem_object_pin_pages(obj);
obj->mm.quirked = true;
}

View File

@ -208,7 +208,7 @@ static const struct intel_device_info intel_ironlake_d_info = {
static const struct intel_device_info intel_ironlake_m_info = {
GEN5_FEATURES,
.platform = INTEL_IRONLAKE,
.is_mobile = 1,
.is_mobile = 1, .has_fbc = 1,
};
#define GEN6_FEATURES \
@ -390,7 +390,6 @@ static const struct intel_device_info intel_skylake_gt3_info = {
.has_hw_contexts = 1, \
.has_logical_ring_contexts = 1, \
.has_guc = 1, \
.has_decoupled_mmio = 1, \
.has_aliasing_ppgtt = 1, \
.has_full_ppgtt = 1, \
.has_full_48bit_ppgtt = 1, \

View File

@ -12203,6 +12203,15 @@ static void update_scanline_offset(struct intel_crtc *crtc)
* type. For DP ports it behaves like most other platforms, but on HDMI
* there's an extra 1 line difference. So we need to add two instead of
* one to the value.
*
* On VLV/CHV DSI the scanline counter would appear to increment
* approx. 1/3 of a scanline before start of vblank. Unfortunately
* that means we can't tell whether we're in vblank or not while
* we're on that particular line. We must still set scanline_offset
* to 1 so that the vblank timestamps come out correct when we query
* the scanline counter from within the vblank interrupt handler.
* However if queried just before the start of vblank we'll get an
* answer that's slightly in the future.
*/
if (IS_GEN2(dev_priv)) {
const struct drm_display_mode *adjusted_mode = &crtc->config->base.adjusted_mode;

View File

@ -1075,6 +1075,22 @@ int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
return 0;
}
static bool ring_is_idle(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
bool idle = true;
intel_runtime_pm_get(dev_priv);
/* No bit for gen2, so assume the CS parser is idle */
if (INTEL_GEN(dev_priv) > 2 && !(I915_READ_MODE(engine) & MODE_IDLE))
idle = false;
intel_runtime_pm_put(dev_priv);
return idle;
}
/**
* intel_engine_is_idle() - Report if the engine has finished process all work
* @engine: the intel_engine_cs
@ -1084,8 +1100,6 @@ int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
*/
bool intel_engine_is_idle(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
/* Any inflight/incomplete requests? */
if (!i915_seqno_passed(intel_engine_get_seqno(engine),
intel_engine_last_submit(engine)))
@ -1100,7 +1114,7 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
return false;
/* Ring stopped? */
if (INTEL_GEN(dev_priv) > 2 && !(I915_READ_MODE(engine) & MODE_IDLE))
if (!ring_is_idle(engine))
return false;
return true;

View File

@ -82,20 +82,10 @@ static unsigned int get_crtc_fence_y_offset(struct intel_crtc *crtc)
static void intel_fbc_get_plane_source_size(struct intel_fbc_state_cache *cache,
int *width, int *height)
{
int w, h;
if (drm_rotation_90_or_270(cache->plane.rotation)) {
w = cache->plane.src_h;
h = cache->plane.src_w;
} else {
w = cache->plane.src_w;
h = cache->plane.src_h;
}
if (width)
*width = w;
*width = cache->plane.src_w;
if (height)
*height = h;
*height = cache->plane.src_h;
}
static int intel_fbc_calculate_cfb_size(struct drm_i915_private *dev_priv,
@ -746,6 +736,11 @@ static void intel_fbc_update_state_cache(struct intel_crtc *crtc,
cache->crtc.hsw_bdw_pixel_rate = crtc_state->pixel_rate;
cache->plane.rotation = plane_state->base.rotation;
/*
* Src coordinates are already rotated by 270 degrees for
* the 90/270 degree plane rotation cases (to match the
* GTT mapping), hence no need to account for rotation here.
*/
cache->plane.src_w = drm_rect_width(&plane_state->base.src) >> 16;
cache->plane.src_h = drm_rect_height(&plane_state->base.src) >> 16;
cache->plane.visible = plane_state->base.visible;

View File

@ -4335,10 +4335,18 @@ skl_compute_wm(struct drm_atomic_state *state)
struct drm_crtc_state *cstate;
struct intel_atomic_state *intel_state = to_intel_atomic_state(state);
struct skl_wm_values *results = &intel_state->wm_results;
struct drm_device *dev = state->dev;
struct skl_pipe_wm *pipe_wm;
bool changed = false;
int ret, i;
/*
* When we distrust bios wm we always need to recompute to set the
* expected DDB allocations for each CRTC.
*/
if (to_i915(dev)->wm.distrust_bios_wm)
changed = true;
/*
* If this transaction isn't actually touching any CRTC's, don't
* bother with watermark calculation. Note that if we pass this
@ -4349,6 +4357,7 @@ skl_compute_wm(struct drm_atomic_state *state)
*/
for_each_new_crtc_in_state(state, crtc, cstate, i)
changed = true;
if (!changed)
return 0;

View File

@ -435,8 +435,9 @@ static bool intel_psr_match_conditions(struct intel_dp *intel_dp)
}
/* PSR2 is restricted to work with panel resolutions upto 3200x2000 */
if (intel_crtc->config->pipe_src_w > 3200 ||
intel_crtc->config->pipe_src_h > 2000) {
if (dev_priv->psr.psr2_support &&
(intel_crtc->config->pipe_src_w > 3200 ||
intel_crtc->config->pipe_src_h > 2000)) {
dev_priv->psr.psr2_support = false;
return false;
}

View File

@ -83,10 +83,13 @@ int intel_usecs_to_scanlines(const struct drm_display_mode *adjusted_mode,
*/
void intel_pipe_update_start(struct intel_crtc *crtc)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
const struct drm_display_mode *adjusted_mode = &crtc->config->base.adjusted_mode;
long timeout = msecs_to_jiffies_timeout(1);
int scanline, min, max, vblank_start;
wait_queue_head_t *wq = drm_crtc_vblank_waitqueue(&crtc->base);
bool need_vlv_dsi_wa = (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) &&
intel_crtc_has_type(crtc->config, INTEL_OUTPUT_DSI);
DEFINE_WAIT(wait);
vblank_start = adjusted_mode->crtc_vblank_start;
@ -139,6 +142,24 @@ void intel_pipe_update_start(struct intel_crtc *crtc)
drm_crtc_vblank_put(&crtc->base);
/*
* On VLV/CHV DSI the scanline counter would appear to
* increment approx. 1/3 of a scanline before start of vblank.
* The registers still get latched at start of vblank however.
* This means we must not write any registers on the first
* line of vblank (since not the whole line is actually in
* vblank). And unfortunately we can't use the interrupt to
* wait here since it will fire too soon. We could use the
* frame start interrupt instead since it will fire after the
* critical scanline, but that would require more changes
* in the interrupt code. So for now we'll just do the nasty
* thing and poll for the bad scanline to pass us by.
*
* FIXME figure out if BXT+ DSI suffers from this as well
*/
while (need_vlv_dsi_wa && scanline == vblank_start)
scanline = intel_get_crtc_scanline(crtc);
crtc->debug.scanline_start = scanline;
crtc->debug.start_vbl_time = ktime_get();
crtc->debug.start_vbl_count = intel_crtc_get_vblank_counter(crtc);

View File

@ -59,8 +59,6 @@ struct drm_i915_gem_request;
* available in the work queue (note, the queue is shared,
* not per-engine). It is OK for this to be nonzero, but
* it should not be huge!
* q_fail: failed to enqueue a work item. This should never happen,
* because we check for space beforehand.
* b_fail: failed to ring the doorbell. This should never happen, unless
* somehow the hardware misbehaves, or maybe if the GuC firmware
* crashes? We probably need to reset the GPU to recover.

View File

@ -673,7 +673,7 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data)
ret = drm_of_find_panel_or_bridge(child,
imx_ldb->lvds_mux ? 4 : 2, 0,
&channel->panel, &channel->bridge);
if (ret)
if (ret && ret != -ENODEV)
return ret;
/* panel ddc only if there is no bridge */

View File

@ -19,6 +19,7 @@
#include <drm/drm_of.h>
#include <linux/clk.h>
#include <linux/component.h>
#include <linux/iopoll.h>
#include <linux/irq.h>
#include <linux/of.h>
#include <linux/of_platform.h>
@ -900,16 +901,12 @@ static int mtk_dsi_host_detach(struct mipi_dsi_host *host,
static void mtk_dsi_wait_for_idle(struct mtk_dsi *dsi)
{
u32 timeout_ms = 500000; /* total 1s ~ 2s timeout */
int ret;
u32 val;
while (timeout_ms--) {
if (!(readl(dsi->regs + DSI_INTSTA) & DSI_BUSY))
break;
usleep_range(2, 4);
}
if (timeout_ms == 0) {
ret = readl_poll_timeout(dsi->regs + DSI_INTSTA, val, !(val & DSI_BUSY),
4, 2000000);
if (ret) {
DRM_WARN("polling dsi wait not busy timeout!\n");
mtk_dsi_enable(dsi);
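
The mtk_dsi change above replaces an open-coded busy-wait loop with readl_poll_timeout(), which repeatedly reads a register until a condition holds or a timeout expires. A small user-space approximation of that helper is sketched below; poll_timeout(), read_status() and not_busy() are hypothetical, and the 4 us sleep / 2 s timeout mirror the arguments in the diff.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static long now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000L + ts.tv_nsec / 1000;
}

/* Poll read_fn() until cond(val) is true or timeout_us elapses. */
static int poll_timeout(unsigned int (*read_fn)(void), bool (*cond)(unsigned int),
			long sleep_us, long timeout_us, unsigned int *val)
{
	long deadline = now_us() + timeout_us;

	for (;;) {
		*val = read_fn();
		if (cond(*val))
			return 0;
		if (now_us() > deadline)
			return -ETIMEDOUT;
		usleep(sleep_us);
	}
}

/* Hypothetical "hardware": reports busy for the first few reads. */
static unsigned int reads;
static unsigned int read_status(void) { return ++reads < 5 ? 0x1 : 0x0; }
static bool not_busy(unsigned int v) { return !(v & 0x1); }

int main(void)
{
	unsigned int val;
	int ret = poll_timeout(read_status, not_busy, 4, 2000000, &val);

	printf("ret=%d val=0x%x\n", ret, val);
	return ret;
}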

View File

@ -1062,7 +1062,7 @@ static int mtk_hdmi_setup_vendor_specific_infoframe(struct mtk_hdmi *hdmi,
}
err = hdmi_vendor_infoframe_pack(&frame, buffer, sizeof(buffer));
if (err) {
if (err < 0) {
dev_err(hdmi->dev, "Failed to pack vendor infoframe: %zd\n",
err);
return err;

Some files were not shown because too many files have changed in this diff.