Merge tag 'omap-for-v4.6/fixes-rc2-v2-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes

Merge "omap fixes against v4.6-rc2" from Tony Lindgren:

Fixes for omaps against v4.6-rc2, mostly to fix suspend for beagle-x15,
which broke when we added runtime-based SoC revision detection earlier. It
seems suspend worked earlier because things were only partially initialized,
while now we initialize things properly for dra7.

Note that "ARM: OMAP: Catch callers of revision information prior to it
being populated" had to be reverted, as it caused bogus warnings on other
SoCs: their omap initcalls bail out based on the revision being left at 0.
These initcalls will mostly just disappear when we drop support for omap3
legacy booting.

Also included is a fix for the dra7 sys_32k_ck clock source, which is not
enabled on boot, making the system fall back to an emulated clock.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIcBAABAgAGBQJXFkmvAAoJEBvUPslcq6VzmakQAKmjDs+iS02No6sQaIkBvadJ
gAL8EvAPZ5QLt6OW+MvFLiE42kr9NZSbvyLNLzjjqKI9AmuYZnrci8tQJ9uOHMLM
SmScTjE6b7PvGo2jI+IXkoi1hVL1YP7FKuA0/25EwNaeMKoZ7JPFJ/jpPatvIEg7
2TB6wk0rdlpGaaSx8uFR2ohDE8lA0gDNRDiHx7G2d9tl9rLLXC2IvS1YhpvLcGw0
IkRgFGF6+WZ09DZ+B1pnKPrf5wWBGqI44IWBL4epOCsDGxOYFjMix5or7FiaTrcP
8BGfOVJCRKGjccBvAFJLSznjMQb8jeTHkFxWY8BgY2FSuTJlFrbkxIBraFaZMqG/
9IOxPvjCJDgJdhGxHuQUGKzbQ2o7D7H5xb5PCxbdscHTcwi2WR6AXI6q+5v2ykqu
moQXvAPS2GcfVrgXcuIthwvqENExILTwogcgME8/6DvS0l7LlJ7bf1G32jsCvI3M
mw0+8GKx9qW+7demu6uy0uLiVI5cHfcTWp2ocLnXwnRVmixU5XG4QaPRp/MhB/QV
F57UxJFHH398zeZ8OgJFzjYwIT+60KCJNU+qMHdBdd76hiJ7Dg0rMxqoxLpbaB4t
fPbhuE/eUWvZR5IeL1I1M4dDgpp6KThGR86K1wHSMwWkq8Og+XoYMoU3yn0tyqT9
IHyBWPFAVpKI44uAle7w
=0X4E
-----END PGP SIGNATURE-----

* tag 'omap-for-v4.6/fixes-rc2-v2-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap: (198 commits)
  Revert "ARM: OMAP: Catch callers of revision information prior to it being populated"
  ARM: OMAP: Catch callers of revision information prior to it being populated
  ARM: dts: dra7: Correct clock tree for sys_32k_ck
  ARM: OMAP: DRA7: Provide proper class to omap2_set_globals_tap
  ARM: OMAP: DRA7: wakeupgen: Skip SAR save for wakeupgen
  Linux 4.6-rc2
  v4l2-mc: avoid warning about unused variable
  Convert straggling drivers to new six-argument get_user_pages()
  .mailmap: add Christophe Ricard
  Make CONFIG_FHANDLE default y
  mm/page_isolation.c: fix the function comments
  oom, oom_reaper: do not enqueue task if it is on the oom_reaper_list head
  mm/page_isolation: fix tracepoint to mirror check function behavior
  mm/rmap: batched invalidations should use existing api
  x86/mm: TLB_REMOTE_SEND_IPI should count pages
  mm: fix invalid node in alloc_migrate_target()
  include/linux/huge_mm.h: return NULL instead of false for pmd_trans_huge_lock()
  mm, kasan: fix compilation for CONFIG_SLAB
  MAINTAINERS: orangefs mailing list is subscribers-only
  net: mvneta: fix changing MTU when using per-cpu processing
  ...
commit c0e309138b

.mailmap
@@ -33,6 +33,7 @@ Björn Steinbrink <B.Steinbrink@gmx.de>
 Brian Avery <b.avery@hp.com>
 Brian King <brking@us.ibm.com>
 Christoph Hellwig <hch@lst.de>
+Christophe Ricard <christophe.ricard@gmail.com>
 Corey Minyard <minyard@acm.org>
 Damian Hobson-Garcia <dhobsong@igel.co.jp>
 David Brownell <david-b@pacbell.net>
@@ -386,7 +386,7 @@ used. First phase is to "prepare" anything needed, including various checks,
 memory allocation, etc. The goal is to handle the stuff that is not unlikely
 to fail here. The second phase is to "commit" the actual changes.
 
-Switchdev provides an inftrastructure for sharing items (for example memory
+Switchdev provides an infrastructure for sharing items (for example memory
 allocations) between the two phases.
 
 The object created by a driver in "prepare" phase and it is queued up by:
Documentation/x86/topology.txt (new file, 208 lines)
@@ -0,0 +1,208 @@
x86 Topology
============

This documents and clarifies the main aspects of x86 topology modelling and
representation in the kernel. Update/change when doing changes to the
respective code.

The architecture-agnostic topology definitions are in
Documentation/cputopology.txt. This file holds x86-specific
differences/specialities which must not necessarily apply to the generic
definitions. Thus, the way to read up on Linux topology on x86 is to start
with the generic one and look at this one in parallel for the x86 specifics.

Needless to say, code should use the generic functions - this file is *only*
here to *document* the inner workings of x86 topology.

Started by Thomas Gleixner <tglx@linutronix.de> and Borislav Petkov <bp@alien8.de>.

The main aim of the topology facilities is to present adequate interfaces to
code which needs to know/query/use the structure of the running system wrt
threads, cores, packages, etc.

The kernel does not care about the concept of physical sockets because a
socket has no relevance to software. It's an electromechanical component. In
the past a socket always contained a single package (see below), but with the
advent of Multi Chip Modules (MCM) a socket can hold more than one package. So
there might be still references to sockets in the code, but they are of
historical nature and should be cleaned up.

The topology of a system is described in the units of:

    - packages
    - cores
    - threads

* Package:

  Packages contain a number of cores plus shared resources, e.g. DRAM
  controller, shared caches etc.

  AMD nomenclature for package is 'Node'.

  Package-related topology information in the kernel:

  - cpuinfo_x86.x86_max_cores:

    The number of cores in a package. This information is retrieved via CPUID.

  - cpuinfo_x86.phys_proc_id:

    The physical ID of the package. This information is retrieved via CPUID
    and deduced from the APIC IDs of the cores in the package.

  - cpuinfo_x86.logical_proc_id:

    The logical ID of the package. As we do not trust BIOSes to enumerate the
    packages in a consistent way, we introduced the concept of logical package
    ID so we can sanely calculate the number of maximum possible packages in
    the system and have the packages enumerated linearly.

  - topology_max_packages():

    The maximum possible number of packages in the system. Helpful for per
    package facilities to preallocate per package information.

* Cores:

  A core consists of 1 or more threads. It does not matter whether the threads
  are SMT- or CMT-type threads.

  AMDs nomenclature for a CMT core is "Compute Unit". The kernel always uses
  "core".

  Core-related topology information in the kernel:

  - smp_num_siblings:

    The number of threads in a core. The number of threads in a package can be
    calculated by:

	threads_per_package = cpuinfo_x86.x86_max_cores * smp_num_siblings

* Threads:

  A thread is a single scheduling unit. It's the equivalent to a logical Linux
  CPU.

  AMDs nomenclature for CMT threads is "Compute Unit Core". The kernel always
  uses "thread".

  Thread-related topology information in the kernel:

  - topology_core_cpumask():

    The cpumask contains all online threads in the package to which a thread
    belongs.

    The number of online threads is also printed in /proc/cpuinfo "siblings."

  - topology_sibling_mask():

    The cpumask contains all online threads in the core to which a thread
    belongs.

  - topology_logical_package_id():

    The logical package ID to which a thread belongs.

  - topology_physical_package_id():

    The physical package ID to which a thread belongs.

  - topology_core_id();

    The ID of the core to which a thread belongs. It is also printed in
    /proc/cpuinfo "core_id."

System topology examples

Note:

The alternative Linux CPU enumeration depends on how the BIOS enumerates the
threads. Many BIOSes enumerate all threads 0 first and then all threads 1.
That has the "advantage" that the logical Linux CPU numbers of threads 0 stay
the same whether threads are enabled or not. That's merely an implementation
detail and has no practical impact.

1) Single Package, Single Core

   [package 0] -> [core 0] -> [thread 0] -> Linux CPU 0

2) Single Package, Dual Core

   a) One thread per core

	[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
		    -> [core 1] -> [thread 0] -> Linux CPU 1

   b) Two threads per core

	[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
				-> [thread 1] -> Linux CPU 1
		    -> [core 1] -> [thread 0] -> Linux CPU 2
				-> [thread 1] -> Linux CPU 3

      Alternative enumeration:

	[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
				-> [thread 1] -> Linux CPU 2
		    -> [core 1] -> [thread 0] -> Linux CPU 1
				-> [thread 1] -> Linux CPU 3

      AMD nomenclature for CMT systems:

	[node 0] -> [Compute Unit 0] -> [Compute Unit Core 0] -> Linux CPU 0
				     -> [Compute Unit Core 1] -> Linux CPU 1
		 -> [Compute Unit 1] -> [Compute Unit Core 0] -> Linux CPU 2
				     -> [Compute Unit Core 1] -> Linux CPU 3

4) Dual Package, Dual Core

   a) One thread per core

	[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
		    -> [core 1] -> [thread 0] -> Linux CPU 1

	[package 1] -> [core 0] -> [thread 0] -> Linux CPU 2
		    -> [core 1] -> [thread 0] -> Linux CPU 3

   b) Two threads per core

	[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
				-> [thread 1] -> Linux CPU 1
		    -> [core 1] -> [thread 0] -> Linux CPU 2
				-> [thread 1] -> Linux CPU 3

	[package 1] -> [core 0] -> [thread 0] -> Linux CPU 4
				-> [thread 1] -> Linux CPU 5
		    -> [core 1] -> [thread 0] -> Linux CPU 6
				-> [thread 1] -> Linux CPU 7

      Alternative enumeration:

	[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
				-> [thread 1] -> Linux CPU 4
		    -> [core 1] -> [thread 0] -> Linux CPU 1
				-> [thread 1] -> Linux CPU 5

	[package 1] -> [core 0] -> [thread 0] -> Linux CPU 2
				-> [thread 1] -> Linux CPU 6
		    -> [core 1] -> [thread 0] -> Linux CPU 3
				-> [thread 1] -> Linux CPU 7

      AMD nomenclature for CMT systems:

	[node 0] -> [Compute Unit 0] -> [Compute Unit Core 0] -> Linux CPU 0
				     -> [Compute Unit Core 1] -> Linux CPU 1
		 -> [Compute Unit 1] -> [Compute Unit Core 0] -> Linux CPU 2
				     -> [Compute Unit Core 1] -> Linux CPU 3

	[node 1] -> [Compute Unit 0] -> [Compute Unit Core 0] -> Linux CPU 4
				     -> [Compute Unit Core 1] -> Linux CPU 5
		 -> [Compute Unit 1] -> [Compute Unit Core 0] -> Linux CPU 6
				     -> [Compute Unit Core 1] -> Linux CPU 7
MAINTAINERS
@@ -5042,6 +5042,7 @@ F:	include/linux/hw_random.h
 HARDWARE SPINLOCK CORE
 M:	Ohad Ben-Cohen <ohad@wizery.com>
+M:	Bjorn Andersson <bjorn.andersson@linaro.org>
 L:	linux-remoteproc@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/hwspinlock.git
 F:	Documentation/hwspinlock.txt
@@ -6402,7 +6403,7 @@ KPROBES
 M:	Ananth N Mavinakayanahalli <ananth@in.ibm.com>
 M:	Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
 M:	"David S. Miller" <davem@davemloft.net>
-M:	Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
+M:	Masami Hiramatsu <mhiramat@kernel.org>
 S:	Maintained
 F:	Documentation/kprobes.txt
 F:	include/linux/kprobes.h
@@ -8253,7 +8254,7 @@ F:	Documentation/filesystems/overlayfs.txt
 
 ORANGEFS FILESYSTEM
 M:	Mike Marshall <hubcap@omnibond.com>
-L:	pvfs2-developers@beowulf-underground.org
+L:	pvfs2-developers@beowulf-underground.org (subscribers-only)
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/hubcap/linux.git
 S:	Supported
 F:	fs/orangefs/
@@ -9314,6 +9315,7 @@ F:	include/linux/regmap.h
 REMOTE PROCESSOR (REMOTEPROC) SUBSYSTEM
 M:	Ohad Ben-Cohen <ohad@wizery.com>
+M:	Bjorn Andersson <bjorn.andersson@linaro.org>
 L:	linux-remoteproc@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/remoteproc.git
 S:	Maintained
 F:	drivers/remoteproc/
@@ -9323,6 +9325,7 @@ F:	include/linux/remoteproc.h
 REMOTE PROCESSOR MESSAGING (RPMSG) SUBSYSTEM
 M:	Ohad Ben-Cohen <ohad@wizery.com>
+M:	Bjorn Andersson <bjorn.andersson@linaro.org>
 L:	linux-remoteproc@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/rpmsg.git
 S:	Maintained
 F:	drivers/rpmsg/
@@ -11137,8 +11140,8 @@ F:	include/uapi/linux/tipc*.h
 F:	net/tipc/
 
 TILE ARCHITECTURE
-M:	Chris Metcalf <cmetcalf@ezchip.com>
-W:	http://www.ezchip.com/scm/
+M:	Chris Metcalf <cmetcalf@mellanox.com>
+W:	http://www.mellanox.com/repository/solutions/tile-scm/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile.git
 S:	Supported
 F:	arch/tile/
Makefile
@@ -1,7 +1,7 @@
 VERSION = 4
 PATCHLEVEL = 6
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
 NAME = Blurry Fish Butt
 
 # *DOCUMENTATION*
@@ -98,12 +98,20 @@
 		clock-frequency = <32768>;
 	};
 
-	sys_32k_ck: sys_32k_ck {
+	sys_clk32_crystal_ck: sys_clk32_crystal_ck {
 		#clock-cells = <0>;
 		compatible = "fixed-clock";
 		clock-frequency = <32768>;
 	};
 
+	sys_clk32_pseudo_ck: sys_clk32_pseudo_ck {
+		#clock-cells = <0>;
+		compatible = "fixed-factor-clock";
+		clocks = <&sys_clkin1>;
+		clock-mult = <1>;
+		clock-div = <610>;
+	};
+
 	virt_12000000_ck: virt_12000000_ck {
 		#clock-cells = <0>;
 		compatible = "fixed-clock";
@@ -2170,4 +2178,12 @@
 		ti,bit-shift = <22>;
 		reg = <0x0558>;
 	};
+
+	sys_32k_ck: sys_32k_ck {
+		#clock-cells = <0>;
+		compatible = "ti,mux-clock";
+		clocks = <&sys_clk32_crystal_ck>, <&sys_clk32_pseudo_ck>, <&sys_clk32_pseudo_ck>, <&sys_clk32_pseudo_ck>;
+		ti,bit-shift = <8>;
+		reg = <0x6c4>;
+	};
 };
@@ -737,7 +737,8 @@ void __init omap5_init_late(void)
 #ifdef CONFIG_SOC_DRA7XX
 void __init dra7xx_init_early(void)
 {
-	omap2_set_globals_tap(-1, OMAP2_L4_IO_ADDRESS(DRA7XX_TAP_BASE));
+	omap2_set_globals_tap(DRA7XX_CLASS,
+			      OMAP2_L4_IO_ADDRESS(DRA7XX_TAP_BASE));
 	omap2_set_globals_prcm_mpu(OMAP2_L4_IO_ADDRESS(OMAP54XX_PRCM_MPU_BASE));
 	omap2_control_base_init();
 	omap4_pm_init_early();
@@ -274,6 +274,10 @@ static inline void omap5_irq_save_context(void)
  */
 static void irq_save_context(void)
 {
+	/* DRA7 has no SAR to save */
+	if (soc_is_dra7xx())
+		return;
+
 	if (!sar_base)
 		sar_base = omap4_get_sar_ram_base();
 
@@ -290,6 +294,9 @@ static void irq_sar_clear(void)
 {
 	u32 val;
 	u32 offset = SAR_BACKUP_STATUS_OFFSET;
+	/* DRA7 has no SAR to save */
+	if (soc_is_dra7xx())
+		return;
 
 	if (soc_is_omap54xx())
 		offset = OMAP5_SAR_BACKUP_STATUS_OFFSET;
@@ -68,11 +68,13 @@ CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CMA=y
CONFIG_XEN=y
CONFIG_CMDLINE="console=ttyAMA0"
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_COMPAT=y
CONFIG_CPU_IDLE=y
CONFIG_ARM_CPUIDLE=y
CONFIG_CPU_FREQ=y
CONFIG_ARM_BIG_LITTLE_CPUFREQ=y
CONFIG_ARM_SCPI_CPUFREQ=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
@@ -80,7 +82,6 @@ CONFIG_INET=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_LRO is not set
# CONFIG_IPV6 is not set
CONFIG_BPF_JIT=y
# CONFIG_WIRELESS is not set
@@ -144,16 +145,18 @@ CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y
CONFIG_SERIAL_MVEBU_UART=y
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_HW_RANDOM is not set
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_DESIGNWARE_PLATFORM=y
CONFIG_I2C_MV64XXX=y
CONFIG_I2C_QUP=y
CONFIG_I2C_TEGRA=y
CONFIG_I2C_UNIPHIER_F=y
CONFIG_I2C_RCAR=y
CONFIG_SPI=y
CONFIG_SPI_PL022=y
CONFIG_SPI_QUP=y
CONFIG_SPMI=y
CONFIG_PINCTRL_SINGLE=y
CONFIG_PINCTRL_MSM8916=y
CONFIG_PINCTRL_QCOM_SPMI_PMIC=y
CONFIG_GPIO_SYSFS=y
@@ -196,6 +199,7 @@ CONFIG_USB_EHCI_HCD_PLATFORM=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PLATFORM=y
CONFIG_USB_STORAGE=y
CONFIG_USB_DWC2=y
CONFIG_USB_CHIPIDEA=y
CONFIG_USB_CHIPIDEA_UDC=y
CONFIG_USB_CHIPIDEA_HOST=y
@@ -205,19 +209,20 @@ CONFIG_USB_MSM_OTG=y
CONFIG_USB_ULPI=y
CONFIG_USB_GADGET=y
CONFIG_MMC=y
CONFIG_MMC_BLOCK_MINORS=16
CONFIG_MMC_BLOCK_MINORS=32
CONFIG_MMC_ARMMMCI=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_TEGRA=y
CONFIG_MMC_SDHCI_MSM=y
CONFIG_MMC_SPI=y
CONFIG_MMC_SUNXI=y
CONFIG_MMC_DW=y
CONFIG_MMC_DW_EXYNOS=y
CONFIG_MMC_BLOCK_MINORS=16
CONFIG_MMC_DW_K3=y
CONFIG_MMC_SUNXI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_GPIO=y
CONFIG_LEDS_SYSCON=y
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_HEARTBEAT=y
@@ -229,8 +234,8 @@ CONFIG_RTC_DRV_PL031=y
CONFIG_RTC_DRV_SUN6I=y
CONFIG_RTC_DRV_XGENE=y
CONFIG_DMADEVICES=y
CONFIG_QCOM_BAM_DMA=y
CONFIG_TEGRA20_APB_DMA=y
CONFIG_QCOM_BAM_DMA=y
CONFIG_RCAR_DMAC=y
CONFIG_VFIO=y
CONFIG_VFIO_PCI=y
@@ -239,20 +244,26 @@ CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_COMMON_CLK_SCPI=y
CONFIG_COMMON_CLK_CS2000_CP=y
CONFIG_COMMON_CLK_QCOM=y
CONFIG_MSM_GCC_8916=y
CONFIG_HWSPINLOCK_QCOM=y
CONFIG_MAILBOX=y
CONFIG_ARM_MHU=y
CONFIG_HI6220_MBOX=y
CONFIG_ARM_SMMU=y
CONFIG_QCOM_SMEM=y
CONFIG_QCOM_SMD=y
CONFIG_QCOM_SMD_RPM=y
CONFIG_ARCH_TEGRA_132_SOC=y
CONFIG_ARCH_TEGRA_210_SOC=y
CONFIG_HISILICON_IRQ_MBIGEN=y
CONFIG_EXTCON_USB_GPIO=y
CONFIG_COMMON_RESET_HI6220=y
CONFIG_PHY_RCAR_GEN3_USB2=y
CONFIG_PHY_HI6220_USB=y
CONFIG_PHY_XGENE=y
CONFIG_ARM_SCPI_PROTOCOL=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
CONFIG_FANOTIFY=y
@@ -264,6 +275,7 @@ CONFIG_CUSE=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_CONFIGFS_FS=y
CONFIG_EFIVAR_FS=y
CONFIG_SQUASHFS=y
CONFIG_NFS_FS=y
@@ -27,7 +27,6 @@
 #include <asm/kvm.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmio.h>
-#include <asm/kvm_perf_event.h>
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
 
@@ -21,7 +21,6 @@
 #include <linux/compiler.h>
 #include <linux/kvm_host.h>
 #include <asm/kvm_mmu.h>
-#include <asm/kvm_perf_event.h>
 #include <asm/sysreg.h>
 
 #define __hyp_text __section(.hyp.text) notrace
(deleted file)
@@ -1,68 +0,0 @@
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef __ASM_KVM_PERF_EVENT_H
#define __ASM_KVM_PERF_EVENT_H

#define ARMV8_PMU_MAX_COUNTERS	32
#define ARMV8_PMU_COUNTER_MASK	(ARMV8_PMU_MAX_COUNTERS - 1)

/*
 * Per-CPU PMCR: config reg
 */
#define ARMV8_PMU_PMCR_E	(1 << 0) /* Enable all counters */
#define ARMV8_PMU_PMCR_P	(1 << 1) /* Reset all counters */
#define ARMV8_PMU_PMCR_C	(1 << 2) /* Cycle counter reset */
#define ARMV8_PMU_PMCR_D	(1 << 3) /* CCNT counts every 64th cpu cycle */
#define ARMV8_PMU_PMCR_X	(1 << 4) /* Export to ETM */
#define ARMV8_PMU_PMCR_DP	(1 << 5) /* Disable CCNT if non-invasive debug*/
/* Determines which bit of PMCCNTR_EL0 generates an overflow */
#define ARMV8_PMU_PMCR_LC	(1 << 6)
#define ARMV8_PMU_PMCR_N_SHIFT	11	/* Number of counters supported */
#define ARMV8_PMU_PMCR_N_MASK	0x1f
#define ARMV8_PMU_PMCR_MASK	0x7f	/* Mask for writable bits */

/*
 * PMOVSR: counters overflow flag status reg
 */
#define ARMV8_PMU_OVSR_MASK	0xffffffff	/* Mask for writable bits */
#define ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK

/*
 * PMXEVTYPER: Event selection reg
 */
#define ARMV8_PMU_EVTYPE_MASK	0xc80003ff	/* Mask for writable bits */
#define ARMV8_PMU_EVTYPE_EVENT	0x3ff		/* Mask for EVENT bits */

#define ARMV8_PMU_EVTYPE_EVENT_SW_INCR	0	/* Software increment event */

/*
 * Event filters for PMUv3
 */
#define ARMV8_PMU_EXCLUDE_EL1	(1 << 31)
#define ARMV8_PMU_EXCLUDE_EL0	(1 << 30)
#define ARMV8_PMU_INCLUDE_EL2	(1 << 27)

/*
 * PMUSERENR: user enable reg
 */
#define ARMV8_PMU_USERENR_MASK	0xf	/* Mask for writable bits */
#define ARMV8_PMU_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
#define ARMV8_PMU_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
#define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
#define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */

#endif
@@ -1 +1,5 @@
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define CONFIG_CPU_ENDIAN_BE8 CONFIG_CPU_BIG_ENDIAN
+#endif
+
 #include <../../arm/include/asm/opcodes.h>
@@ -17,6 +17,53 @@
 #ifndef __ASM_PERF_EVENT_H
 #define __ASM_PERF_EVENT_H
 
+#define ARMV8_PMU_MAX_COUNTERS	32
+#define ARMV8_PMU_COUNTER_MASK	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMU_PMCR_E	(1 << 0) /* Enable all counters */
+#define ARMV8_PMU_PMCR_P	(1 << 1) /* Reset all counters */
+#define ARMV8_PMU_PMCR_C	(1 << 2) /* Cycle counter reset */
+#define ARMV8_PMU_PMCR_D	(1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMU_PMCR_X	(1 << 4) /* Export to ETM */
+#define ARMV8_PMU_PMCR_DP	(1 << 5) /* Disable CCNT if non-invasive debug*/
+#define ARMV8_PMU_PMCR_LC	(1 << 6) /* Overflow on 64 bit cycle counter */
+#define ARMV8_PMU_PMCR_N_SHIFT	11	/* Number of counters supported */
+#define ARMV8_PMU_PMCR_N_MASK	0x1f
+#define ARMV8_PMU_PMCR_MASK	0x7f	/* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define ARMV8_PMU_OVSR_MASK	0xffffffff	/* Mask for writable bits */
+#define ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define ARMV8_PMU_EVTYPE_MASK	0xc800ffff	/* Mask for writable bits */
+#define ARMV8_PMU_EVTYPE_EVENT	0xffff		/* Mask for EVENT bits */
+
+#define ARMV8_PMU_EVTYPE_EVENT_SW_INCR	0	/* Software increment event */
+
+/*
+ * Event filters for PMUv3
+ */
+#define ARMV8_PMU_EXCLUDE_EL1	(1 << 31)
+#define ARMV8_PMU_EXCLUDE_EL0	(1 << 30)
+#define ARMV8_PMU_INCLUDE_EL2	(1 << 27)
+
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK	0xf	/* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
+
 #ifdef CONFIG_PERF_EVENTS
 struct pt_regs;
 extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
@@ -20,6 +20,7 @@
  */
 
 #include <asm/irq_regs.h>
+#include <asm/perf_event.h>
 #include <asm/virt.h>
 
 #include <linux/of.h>
@@ -384,9 +385,6 @@ static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
 #define	ARMV8_IDX_COUNTER_LAST(cpu_pmu) \
 	(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)
 
-#define ARMV8_MAX_COUNTERS	32
-#define ARMV8_COUNTER_MASK	(ARMV8_MAX_COUNTERS - 1)
-
 /*
  * ARMv8 low level PMU access
  */
@@ -395,40 +393,7 @@ static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
  * Perf Event to low level counters mapping
  */
 #define	ARMV8_IDX_TO_COUNTER(x)	\
-	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)
-
-/*
- * Per-CPU PMCR: config reg
- */
-#define ARMV8_PMCR_E		(1 << 0) /* Enable all counters */
-#define ARMV8_PMCR_P		(1 << 1) /* Reset all counters */
-#define ARMV8_PMCR_C		(1 << 2) /* Cycle counter reset */
-#define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
-#define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
-#define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
-#define ARMV8_PMCR_LC		(1 << 6) /* Overflow on 64 bit cycle counter */
-#define ARMV8_PMCR_N_SHIFT	11	/* Number of counters supported */
-#define ARMV8_PMCR_N_MASK	0x1f
-#define ARMV8_PMCR_MASK		0x7f	/* Mask for writable bits */
-
-/*
- * PMOVSR: counters overflow flag status reg
- */
-#define ARMV8_OVSR_MASK		0xffffffff	/* Mask for writable bits */
-#define ARMV8_OVERFLOWED_MASK	ARMV8_OVSR_MASK
-
-/*
- * PMXEVTYPER: Event selection reg
- */
-#define ARMV8_EVTYPE_MASK	0xc800ffff	/* Mask for writable bits */
-#define ARMV8_EVTYPE_EVENT	0xffff		/* Mask for EVENT bits */
-
-/*
- * Event filters for PMUv3
- */
-#define ARMV8_EXCLUDE_EL1	(1 << 31)
-#define ARMV8_EXCLUDE_EL0	(1 << 30)
-#define ARMV8_INCLUDE_EL2	(1 << 27)
+	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)
 
 static inline u32 armv8pmu_pmcr_read(void)
 {
@@ -439,14 +404,14 @@ static inline u32 armv8pmu_pmcr_read(void)
 
 static inline void armv8pmu_pmcr_write(u32 val)
 {
-	val &= ARMV8_PMCR_MASK;
+	val &= ARMV8_PMU_PMCR_MASK;
 	isb();
 	asm volatile("msr pmcr_el0, %0" :: "r" (val));
 }
 
 static inline int armv8pmu_has_overflowed(u32 pmovsr)
 {
-	return pmovsr & ARMV8_OVERFLOWED_MASK;
+	return pmovsr & ARMV8_PMU_OVERFLOWED_MASK;
 }
 
 static inline int armv8pmu_counter_valid(struct arm_pmu *cpu_pmu, int idx)
@@ -512,7 +477,7 @@ static inline void armv8pmu_write_counter(struct perf_event *event, u32 value)
 static inline void armv8pmu_write_evtype(int idx, u32 val)
 {
 	if (armv8pmu_select_counter(idx) == idx) {
-		val &= ARMV8_EVTYPE_MASK;
+		val &= ARMV8_PMU_EVTYPE_MASK;
 		asm volatile("msr pmxevtyper_el0, %0" :: "r" (val));
 	}
 }
@@ -558,7 +523,7 @@ static inline u32 armv8pmu_getreset_flags(void)
 	asm volatile("mrs %0, pmovsclr_el0" : "=r" (value));
 
 	/* Write to clear flags */
-	value &= ARMV8_OVSR_MASK;
+	value &= ARMV8_PMU_OVSR_MASK;
 	asm volatile("msr pmovsclr_el0, %0" :: "r" (value));
 
 	return value;
@@ -696,7 +661,7 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 
 	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	/* Enable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMCR_E);
+	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
@@ -707,7 +672,7 @@ static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 
 	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	/* Disable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMCR_E);
+	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
@@ -717,7 +682,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	int idx;
 	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
-	unsigned long evtype = hwc->config_base & ARMV8_EVTYPE_EVENT;
+	unsigned long evtype = hwc->config_base & ARMV8_PMU_EVTYPE_EVENT;
 
 	/* Always place a cycle counter into the cycle counter. */
 	if (evtype == ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES) {
@@ -754,11 +719,11 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 	    attr->exclude_kernel != attr->exclude_hv)
 		return -EINVAL;
 	if (attr->exclude_user)
-		config_base |= ARMV8_EXCLUDE_EL0;
+		config_base |= ARMV8_PMU_EXCLUDE_EL0;
 	if (!is_kernel_in_hyp_mode() && attr->exclude_kernel)
-		config_base |= ARMV8_EXCLUDE_EL1;
+		config_base |= ARMV8_PMU_EXCLUDE_EL1;
 	if (!attr->exclude_hv)
-		config_base |= ARMV8_INCLUDE_EL2;
+		config_base |= ARMV8_PMU_INCLUDE_EL2;
 
 	/*
 	 * Install the filter into config_base as this is used to
@@ -784,35 +749,36 @@ static void armv8pmu_reset(void *info)
 	 * Initialize & Reset PMNC. Request overflow interrupt for
 	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
 	 */
-	armv8pmu_pmcr_write(ARMV8_PMCR_P | ARMV8_PMCR_C | ARMV8_PMCR_LC);
+	armv8pmu_pmcr_write(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C |
+			    ARMV8_PMU_PMCR_LC);
 }
 
 static int armv8_pmuv3_map_event(struct perf_event *event)
 {
 	return armpmu_map_event(event, &armv8_pmuv3_perf_map,
 				&armv8_pmuv3_perf_cache_map,
-				ARMV8_EVTYPE_EVENT);
+				ARMV8_PMU_EVTYPE_EVENT);
 }
 
 static int armv8_a53_map_event(struct perf_event *event)
 {
 	return armpmu_map_event(event, &armv8_a53_perf_map,
 				&armv8_a53_perf_cache_map,
-				ARMV8_EVTYPE_EVENT);
+				ARMV8_PMU_EVTYPE_EVENT);
 }
 
 static int armv8_a57_map_event(struct perf_event *event)
 {
 	return armpmu_map_event(event, &armv8_a57_perf_map,
 				&armv8_a57_perf_cache_map,
-				ARMV8_EVTYPE_EVENT);
+				ARMV8_PMU_EVTYPE_EVENT);
 }
 
 static int armv8_thunder_map_event(struct perf_event *event)
 {
 	return armpmu_map_event(event, &armv8_thunder_perf_map,
 				&armv8_thunder_perf_cache_map,
-				ARMV8_EVTYPE_EVENT);
+				ARMV8_PMU_EVTYPE_EVENT);
 }
 
 static void armv8pmu_read_num_pmnc_events(void *info)
@@ -820,7 +786,7 @@ static void armv8pmu_read_num_pmnc_events(void *info)
 	int *nb_cnt = info;
 
 	/* Read the nb of CNTx counters supported from PMNC */
-	*nb_cnt = (armv8pmu_pmcr_read() >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
+	*nb_cnt = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
 
 	/* Add the CPU cycles counter */
 	*nb_cnt += 1;
@@ -97,8 +97,7 @@ static int __init early_init_dt_scan_serial(unsigned long node,
 		return 0;
 #endif
 
-	*addr64 = fdt_translate_address((const void *)initial_boot_params,
-					node);
+	*addr64 = of_flat_dt_translate_address(node);
 
 	return *addr64 == OF_BAD_ADDR ? 0 : 1;
 }
@@ -30,6 +30,7 @@ config PARISC
 	select TTY # Needed for pdc_cons.c
 	select HAVE_DEBUG_STACKOVERFLOW
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_SECCOMP_FILTER
 	select ARCH_NO_COHERENT_DMA_MMAP
 
 	help
@@ -183,6 +183,13 @@ typedef struct compat_siginfo {
 			int _band;	/* POLL_IN, POLL_OUT, POLL_MSG */
 			int _fd;
 		} _sigpoll;
+
+		/* SIGSYS */
+		struct {
+			compat_uptr_t _call_addr; /* calling user insn */
+			int _syscall;	/* triggering system call number */
+			compat_uint_t _arch;	/* AUDIT_ARCH_* of syscall */
+		} _sigsys;
 	} _sifields;
 } compat_siginfo_t;
 
@@ -39,6 +39,19 @@ static inline void syscall_get_arguments(struct task_struct *tsk,
 	}
 }
 
+static inline void syscall_set_return_value(struct task_struct *task,
+					    struct pt_regs *regs,
+					    int error, long val)
+{
+	regs->gr[28] = error ? error : val;
+}
+
+static inline void syscall_rollback(struct task_struct *task,
+				    struct pt_regs *regs)
+{
+	/* do nothing */
+}
+
 static inline int syscall_get_arch(void)
 {
 	int arch = AUDIT_ARCH_PARISC;
@@ -270,7 +270,8 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
 long do_syscall_trace_enter(struct pt_regs *regs)
 {
 	/* Do the secure computing check first. */
-	secure_computing_strict(regs->gr[20]);
+	if (secure_computing() == -1)
+		return -1;
 
 	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
 	    tracehook_report_syscall_entry(regs)) {
@@ -296,7 +297,11 @@ long do_syscall_trace_enter(struct pt_regs *regs)
 			regs->gr[23] & 0xffffffff);
 
 out:
-	return regs->gr[20];
+	/*
+	 * Sign extend the syscall number to 64bit since it may have been
+	 * modified by a compat ptrace call
+	 */
+	return (int) ((u32) regs->gr[20]);
 }
 
 void do_syscall_trace_exit(struct pt_regs *regs)
@@ -371,6 +371,11 @@ copy_siginfo_to_user32 (compat_siginfo_t __user *to, const siginfo_t *from)
 			val = (compat_int_t)from->si_int;
 			err |= __put_user(val, &to->si_int);
 			break;
+		case __SI_SYS >> 16:
+			err |= __put_user(ptr_to_compat(from->si_call_addr), &to->si_call_addr);
+			err |= __put_user(from->si_syscall, &to->si_syscall);
+			err |= __put_user(from->si_arch, &to->si_arch);
+			break;
 		}
 	}
 	return err;
@@ -329,6 +329,7 @@ tracesys_next:
 
 	ldo	-THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r1	/* get task ptr */
 	LDREG	TI_TASK(%r1), %r1
+	LDREG	TASK_PT_GR28(%r1), %r28		/* Restore return value */
 	LDREG	TASK_PT_GR26(%r1), %r26		/* Restore the users args */
 	LDREG	TASK_PT_GR25(%r1), %r25
 	LDREG	TASK_PT_GR24(%r1), %r24
@@ -342,6 +343,7 @@ tracesys_next:
 	stw	%r21, -56(%r30)			/* 6th argument */
 #endif
 
+	cmpib,COND(=),n -1,%r20,tracesys_exit /* seccomp may have returned -1 */
 	comiclr,>>=	__NR_Linux_syscalls, %r20, %r0
 	b,n	.Ltracesys_nosys
 
@@ -246,7 +246,7 @@ struct thread_struct {
 #endif /* CONFIG_ALTIVEC */
 #ifdef CONFIG_VSX
 	/* VSR status */
-	int used_vsr;		/* set if process has used altivec */
+	int used_vsr;		/* set if process has used VSX */
 #endif /* CONFIG_VSX */
 #ifdef CONFIG_SPE
 	unsigned long evr[32];	/* upper 32-bits of SPE regs */
@@ -983,7 +983,7 @@ void restore_tm_state(struct pt_regs *regs)
 static inline void save_sprs(struct thread_struct *t)
 {
 #ifdef CONFIG_ALTIVEC
-	if (cpu_has_feature(cpu_has_feature(CPU_FTR_ALTIVEC)))
+	if (cpu_has_feature(CPU_FTR_ALTIVEC))
 		t->vrsave = mfspr(SPRN_VRSAVE);
 #endif
 #ifdef CONFIG_PPC_BOOK3S_64
@@ -413,7 +413,7 @@ static void hugepd_free(struct mmu_gather *tlb, void *hugepte)
 {
 	struct hugepd_freelist **batchp;
 
-	batchp = this_cpu_ptr(&hugepd_freelist_cur);
+	batchp = &get_cpu_var(hugepd_freelist_cur);
 
 	if (atomic_read(&tlb->mm->mm_users) < 2 ||
 	    cpumask_equal(mm_cpumask(tlb->mm),
@@ -59,6 +59,9 @@ config PCI_QUIRKS
 config ARCH_SUPPORTS_UPROBES
 	def_bool y
 
+config DEBUG_RODATA
+	def_bool y
+
 config S390
 	def_bool y
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
@@ -669,11 +669,13 @@ static const struct file_operations prng_tdes_fops = {
 static struct miscdevice prng_sha512_dev = {
 	.name	= "prandom",
 	.minor	= MISC_DYNAMIC_MINOR,
+	.mode	= 0644,
 	.fops	= &prng_sha512_fops,
 };
 static struct miscdevice prng_tdes_dev = {
 	.name	= "prandom",
 	.minor	= MISC_DYNAMIC_MINOR,
+	.mode	= 0644,
 	.fops	= &prng_tdes_fops,
 };
 
@@ -15,4 +15,7 @@
 
 #define __read_mostly __attribute__((__section__(".data..read_mostly")))
 
+/* Read-only memory is marked before mark_rodata_ro() is called. */
+#define __ro_after_init __read_mostly
+
 #endif
@@ -311,7 +311,9 @@
 #define __NR_shutdown		373
 #define __NR_mlock2		374
 #define __NR_copy_file_range	375
-#define NR_syscalls 376
+#define __NR_preadv2		376
+#define __NR_pwritev2		377
+#define NR_syscalls 378
 
 /*
  * There are some system calls that are not present on 64 bit, some
@@ -670,6 +670,7 @@ static int cpumf_pmu_notifier(struct notifier_block *self, unsigned long action,
 
 	switch (action & ~CPU_TASKS_FROZEN) {
 	case CPU_ONLINE:
+	case CPU_DOWN_FAILED:
 		flags = PMC_INIT;
 		smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1);
 		break;
@@ -1521,7 +1521,7 @@ static int cpumf_pmu_notifier(struct notifier_block *self,
 
 	switch (action & ~CPU_TASKS_FROZEN) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
+	case CPU_DOWN_FAILED:
 		flags = PMC_INIT;
 		smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1);
 		break;
@@ -384,3 +384,5 @@ SYSCALL(sys_recvmsg,compat_sys_recvmsg)
 SYSCALL(sys_shutdown,sys_shutdown)
 SYSCALL(sys_mlock2,compat_sys_mlock2)
 SYSCALL(sys_copy_file_range,compat_sys_copy_file_range) /* 375 */
+SYSCALL(sys_preadv2,compat_sys_preadv2)
+SYSCALL(sys_pwritev2,compat_sys_pwritev2)
@@ -20,9 +20,9 @@
 static inline int gup_pte_range(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
+	struct page *head, *page;
 	unsigned long mask;
 	pte_t *ptep, pte;
-	struct page *page;
 
 	mask = (write ? _PAGE_PROTECT : 0) | _PAGE_INVALID | _PAGE_SPECIAL;
 
@@ -37,12 +37,14 @@ static inline int gup_pte_range(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 			return 0;
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
-		if (!page_cache_get_speculative(page))
+		head = compound_head(page);
+		if (!page_cache_get_speculative(head))
 			return 0;
 		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
-			put_page(page);
+			put_page(head);
 			return 0;
 		}
+		VM_BUG_ON_PAGE(compound_head(page) != head, page);
 		pages[*nr] = page;
 		(*nr)++;
 
@@ -108,6 +108,13 @@ void __init paging_init(void)
 	free_area_init_nodes(max_zone_pfns);
 }
 
+void mark_rodata_ro(void)
+{
+	/* Text and rodata are already protected. Nothing to do here. */
+	pr_info("Write protecting the kernel read-only data: %luk\n",
+		((unsigned long)&_eshared - (unsigned long)&_stext) >> 10);
+}
+
 void __init mem_init(void)
 {
 	if (MACHINE_HAS_TLB_LC)
@@ -126,9 +133,6 @@ void __init mem_init(void)
 	setup_zero_pages();	/* Setup zeroed pages. */
 
 	mem_init_print_info(NULL);
-	printk("Write protected kernel read-only data: %#lx - %#lx\n",
-	       (unsigned long)&_stext,
-	       PFN_ALIGN((unsigned long)&_eshared) - 1);
 }
 
 void free_initmem(void)
@@ -176,7 +176,6 @@ static int clp_query_pci_fn(struct zpci_dev *zdev, u32 fh)
 		rc = clp_store_query_pci_fn(zdev, &rrb->response);
 		if (rc)
 			goto out;
-		if (rrb->response.pfgid)
 			rc = clp_query_pci_fngrp(zdev, rrb->response.pfgid);
 	} else {
 		zpci_err("Q PCI FN:\n");
@@ -6,17 +6,17 @@
 
 #ifdef CONFIG_COMPAT
 struct __new_sigaction32 {
-	unsigned		sa_handler;
+	unsigned int		sa_handler;
 	unsigned int		sa_flags;
-	unsigned		sa_restorer;     /* not used by Linux/SPARC yet */
+	unsigned int		sa_restorer;     /* not used by Linux/SPARC yet */
 	compat_sigset_t		sa_mask;
 };
 
 struct __old_sigaction32 {
-	unsigned		sa_handler;
+	unsigned int		sa_handler;
 	compat_old_sigset_t	sa_mask;
 	unsigned int		sa_flags;
-	unsigned		sa_restorer;     /* not used by Linux/SPARC yet */
+	unsigned int		sa_restorer;     /* not used by Linux/SPARC yet */
 };
 #endif
 
@@ -117,9 +117,9 @@ static inline void bw_clear_intr_mask(int sbus_level, int mask)
 			      "i" (ASI_M_CTL));
 }
 
-static inline unsigned bw_get_prof_limit(int cpu)
+static inline unsigned int bw_get_prof_limit(int cpu)
 {
-	unsigned limit;
+	unsigned int limit;
 
 	__asm__ __volatile__ ("lda [%1] %2, %0" :
 			      "=r" (limit) :
@@ -128,7 +128,7 @@ static inline unsigned bw_get_prof_limit(int cpu)
 	return limit;
 }
 
-static inline void bw_set_prof_limit(int cpu, unsigned limit)
+static inline void bw_set_prof_limit(int cpu, unsigned int limit)
 {
 	__asm__ __volatile__ ("sta %0, [%1] %2" : :
 			      "r" (limit),
@@ -136,9 +136,9 @@ static inline void bw_set_prof_limit(int cpu, unsigned limit)
 			      "i" (ASI_M_CTL));
 }
 
-static inline unsigned bw_get_ctrl(int cpu)
+static inline unsigned int bw_get_ctrl(int cpu)
 {
-	unsigned ctrl;
+	unsigned int ctrl;
 
 	__asm__ __volatile__ ("lda [%1] %2, %0" :
 			      "=r" (ctrl) :
@@ -147,7 +147,7 @@ static inline unsigned bw_get_ctrl(int cpu)
 	return ctrl;
 }
 
-static inline void bw_set_ctrl(int cpu, unsigned ctrl)
+static inline void bw_set_ctrl(int cpu, unsigned int ctrl)
 {
 	__asm__ __volatile__ ("sta %0, [%1] %2" : :
 			      "r" (ctrl),
@@ -155,9 +155,9 @@ static inline void bw_set_ctrl(int cpu, unsigned ctrl)
 			      "i" (ASI_M_CTL));
 }
 
-static inline unsigned cc_get_ipen(void)
+static inline unsigned int cc_get_ipen(void)
 {
-	unsigned pending;
+	unsigned int pending;
 
 	__asm__ __volatile__ ("lduha [%1] %2, %0" :
 			      "=r" (pending) :
@@ -166,7 +166,7 @@ static inline unsigned cc_get_ipen(void)
 	return pending;
 }
 
-static inline void cc_set_iclr(unsigned clear)
+static inline void cc_set_iclr(unsigned int clear)
 {
 	__asm__ __volatile__ ("stha %0, [%1] %2" : :
 			      "r" (clear),
@@ -174,9 +174,9 @@ static inline void cc_set_iclr(unsigned clear)
 			      "i" (ASI_M_MXCC));
 }
 
-static inline unsigned cc_get_imsk(void)
+static inline unsigned int cc_get_imsk(void)
 {
-	unsigned mask;
+	unsigned int mask;
 
 	__asm__ __volatile__ ("lduha [%1] %2, %0" :
 			      "=r" (mask) :
@@ -185,7 +185,7 @@ static inline unsigned cc_get_imsk(void)
 	return mask;
 }
 
-static inline void cc_set_imsk(unsigned mask)
+static inline void cc_set_imsk(unsigned int mask)
 {
 	__asm__ __volatile__ ("stha %0, [%1] %2" : :
 			      "r" (mask),
@@ -193,9 +193,9 @@ static inline void cc_set_imsk(unsigned mask)
 			      "i" (ASI_M_MXCC));
 }
 
-static inline unsigned cc_get_imsk_other(int cpuid)
+static inline unsigned int cc_get_imsk_other(int cpuid)
 {
-	unsigned mask;
+	unsigned int mask;
 
 	__asm__ __volatile__ ("lduha [%1] %2, %0" :
 			      "=r" (mask) :
@@ -204,7 +204,7 @@ static inline unsigned cc_get_imsk_other(int cpuid)
 	return mask;
 }
 
-static inline void cc_set_imsk_other(int cpuid, unsigned mask)
+static inline void cc_set_imsk_other(int cpuid, unsigned int mask)
 {
 	__asm__ __volatile__ ("stha %0, [%1] %2" : :
 			      "r" (mask),
@@ -212,7 +212,7 @@ static inline void cc_set_imsk_other(int cpuid, unsigned mask)
 			      "i" (ASI_M_CTL));
 }
 
-static inline void cc_set_igen(unsigned gen)
+static inline void cc_set_igen(unsigned int gen)
 {
 	__asm__ __volatile__ ("sta %0, [%1] %2" : :
 			      "r" (gen),
@@ -29,12 +29,12 @@ struct linux_dev_v0_funcs {
 /* V2 and later prom device operations. */
 struct linux_dev_v2_funcs {
 	phandle (*v2_inst2pkg)(int d);	/* Convert ihandle to phandle */
-	char * (*v2_dumb_mem_alloc)(char *va, unsigned sz);
-	void (*v2_dumb_mem_free)(char *va, unsigned sz);
+	char * (*v2_dumb_mem_alloc)(char *va, unsigned int sz);
+	void (*v2_dumb_mem_free)(char *va, unsigned int sz);
 
 	/* To map devices into virtual I/O space. */
-	char * (*v2_dumb_mmap)(char *virta, int which_io, unsigned paddr, unsigned sz);
-	void (*v2_dumb_munmap)(char *virta, unsigned size);
+	char * (*v2_dumb_mmap)(char *virta, int which_io, unsigned int paddr, unsigned int sz);
+	void (*v2_dumb_munmap)(char *virta, unsigned int size);
 
 	int (*v2_dev_open)(char *devpath);
 	void (*v2_dev_close)(int d);
@@ -50,7 +50,7 @@ struct linux_dev_v2_funcs {
 struct linux_mlist_v0 {
 	struct linux_mlist_v0 *theres_more;
 	unsigned int start_adr;
-	unsigned num_bytes;
+	unsigned int num_bytes;
 };
 
 struct linux_mem_v0 {
@@ -218,7 +218,7 @@ extern pgprot_t PAGE_KERNEL_LOCKED;
 extern pgprot_t PAGE_COPY;
 extern pgprot_t PAGE_SHARED;
 
-/* XXX This uglyness is for the atyfb driver's sparc mmap() support. XXX */
+/* XXX This ugliness is for the atyfb driver's sparc mmap() support. XXX */
 extern unsigned long _PAGE_IE;
 extern unsigned long _PAGE_E;
 extern unsigned long _PAGE_CACHE;
@@ -201,7 +201,7 @@ unsigned long get_wchan(struct task_struct *task);
 #define KSTK_ESP(tsk)  (task_pt_regs(tsk)->u_regs[UREG_FP])
 
 /* Please see the commentary in asm/backoff.h for a description of
- * what these instructions are doing and how they have been choosen.
+ * what these instructions are doing and how they have been chosen.
  * To make a long story short, we are trying to yield the current cpu
  * strand during busy loops.
  */
@@ -25,7 +25,7 @@ struct sigcontext32 {
 	int sigc_oswins;       /* outstanding windows */
 
 	/* stack ptrs for each regwin buf */
-	unsigned sigc_spbuf[__SUNOS_MAXWIN];
+	unsigned int sigc_spbuf[__SUNOS_MAXWIN];
 
 	/* Windows to restore after signal */
 	struct reg_window32 sigc_wbuf[__SUNOS_MAXWIN];
@@ -149,7 +149,7 @@ extern struct tsb_phys_patch_entry __tsb_phys_patch, __tsb_phys_patch_end;
	 * page size in question.  So for PMD mappings (which fall on
	 * bit 23, for 8MB per PMD) we must propagate bit 22 for a
	 * 4MB huge page.  For huge PUDs (which fall on bit 33, for
-	 * 8GB per PUD), we have to accomodate 256MB and 2GB huge
+	 * 8GB per PUD), we have to accommodate 256MB and 2GB huge
	 * pages.  So for those we propagate bits 32 to 28.
	 */
 #define KERN_PGTABLE_WALK(VADDR, REG1, REG2, FAIL_LABEL)	\
@@ -6,13 +6,13 @@
 #if defined(__sparc__) && defined(__arch64__)
 /* 64 bit sparc */
 struct stat {
-	unsigned   st_dev;
+	unsigned int st_dev;
 	ino_t   st_ino;
 	mode_t  st_mode;
 	short   st_nlink;
 	uid_t   st_uid;
 	gid_t   st_gid;
-	unsigned   st_rdev;
+	unsigned int st_rdev;
 	off_t   st_size;
 	time_t  st_atime;
 	time_t  st_mtime;
@@ -5,27 +5,27 @@
 
 #include "kernel.h"
 
-static unsigned dir_class[] = {
+static unsigned int dir_class[] = {
 #include <asm-generic/audit_dir_write.h>
 ~0U
 };
 
-static unsigned read_class[] = {
+static unsigned int read_class[] = {
 #include <asm-generic/audit_read.h>
 ~0U
 };
 
-static unsigned write_class[] = {
+static unsigned int write_class[] = {
 #include <asm-generic/audit_write.h>
 ~0U
 };
 
-static unsigned chattr_class[] = {
+static unsigned int chattr_class[] = {
 #include <asm-generic/audit_change_attr.h>
 ~0U
 };
 
-static unsigned signal_class[] = {
+static unsigned int signal_class[] = {
 #include <asm-generic/audit_signal.h>
 ~0U
 };
@@ -39,7 +39,7 @@ int audit_classify_arch(int arch)
 	return 0;
 }
 
-int audit_classify_syscall(int abi, unsigned syscall)
+int audit_classify_syscall(int abi, unsigned int syscall)
 {
 #ifdef CONFIG_COMPAT
 	if (abi == AUDIT_ARCH_SPARC)
@@ -2,32 +2,32 @@
 #include <asm/unistd.h>
 #include "kernel.h"
 
-unsigned sparc32_dir_class[] = {
+unsigned int sparc32_dir_class[] = {
 #include <asm-generic/audit_dir_write.h>
 ~0U
 };
 
-unsigned sparc32_chattr_class[] = {
+unsigned int sparc32_chattr_class[] = {
 #include <asm-generic/audit_change_attr.h>
 ~0U
 };
 
-unsigned sparc32_write_class[] = {
+unsigned int sparc32_write_class[] = {
 #include <asm-generic/audit_write.h>
 ~0U
 };
 
-unsigned sparc32_read_class[] = {
+unsigned int sparc32_read_class[] = {
 #include <asm-generic/audit_read.h>
 ~0U
 };
 
-unsigned sparc32_signal_class[] = {
+unsigned int sparc32_signal_class[] = {
 #include <asm-generic/audit_signal.h>
 ~0U
 };
 
-int sparc32_classify_syscall(unsigned syscall)
+int sparc32_classify_syscall(unsigned int syscall)
 {
 	switch(syscall) {
 	case __NR_open:
@@ -1255,7 +1255,7 @@ flush_patch_exception:
 kuw_patch1_7win:	sll	%o3, 6, %o3
 
 	/* No matter how much overhead this routine has in the worst
-	 * case scenerio, it is several times better than taking the
+	 * case scenario, it is several times better than taking the
	 * traps with the old method of just doing flush_user_windows().
	 */
 kill_user_windows:
@@ -131,7 +131,7 @@ void __iomem *ioremap(unsigned long offset, unsigned long size)
 EXPORT_SYMBOL(ioremap);
 
 /*
- * Comlimentary to ioremap().
+ * Complementary to ioremap().
 */
 void iounmap(volatile void __iomem *virtual)
 {
@@ -233,7 +233,7 @@ _sparc_ioremap(struct resource *res, u32 bus, u32 pa, int sz)
 }
 
 /*
- * Comlimentary to _sparc_ioremap().
+ * Complementary to _sparc_ioremap().
 */
 static void _sparc_free_io(struct resource *res)
 {
@@ -532,7 +532,7 @@ static void pci32_unmap_page(struct device *dev, dma_addr_t ba, size_t size,
 }
 
 /* Map a set of buffers described by scatterlist in streaming
- * mode for DMA.  This is the scather-gather version of the
+ * mode for DMA.  This is the scatter-gather version of the
 * above pci_map_single interface.  Here the scatter gather list
 * elements are each tagged with the appropriate dma address
 * and length.  They are obtained via sg_dma_{address,length}(SG).
@@ -54,12 +54,12 @@ void do_signal32(struct pt_regs * regs);
 asmlinkage int do_sys32_sigstack(u32 u_ssptr, u32 u_ossptr, unsigned long sp);
 
 /* compat_audit.c */
-extern unsigned sparc32_dir_class[];
-extern unsigned sparc32_chattr_class[];
-extern unsigned sparc32_write_class[];
-extern unsigned sparc32_read_class[];
-extern unsigned sparc32_signal_class[];
-int sparc32_classify_syscall(unsigned syscall);
+extern unsigned int sparc32_dir_class[];
+extern unsigned int sparc32_chattr_class[];
+extern unsigned int sparc32_write_class[];
+extern unsigned int sparc32_read_class[];
+extern unsigned int sparc32_signal_class[];
+int sparc32_classify_syscall(unsigned int syscall);
 #endif
 
 #ifdef CONFIG_SPARC32
@@ -203,7 +203,7 @@ static struct irq_chip leon_irq = {
 
 /*
 * Build a LEON IRQ for the edge triggered LEON IRQ controller:
- *  Edge    (normal) IRQ           - handle_simple_irq, ack=DONT-CARE, never ack
+ *  Edge    (normal) IRQ           - handle_simple_irq, ack=DON'T-CARE, never ack
 *  Level   IRQ (PCI|Level-GPIO)   - handle_fasteoi_irq, ack=1, ack after ISR
 *  Per-CPU Edge                   - handle_percpu_irq, ack=0
 */
@@ -103,7 +103,7 @@ static void show_regwindow32(struct pt_regs *regs)
 	mm_segment_t old_fs;
 
 	__asm__ __volatile__ ("flushw");
-	rw = compat_ptr((unsigned)regs->u_regs[14]);
+	rw = compat_ptr((unsigned int)regs->u_regs[14]);
 	old_fs = get_fs();
 	set_fs (USER_DS);
 	if (copy_from_user (&r_w, rw, sizeof(r_w))) {
@@ -109,7 +109,7 @@ unsigned long cmdline_memory_size __initdata = 0;
 unsigned char boot_cpu_id = 0xff; /* 0xff will make it into DATA section... */
 
 static void
-prom_console_write(struct console *con, const char *s, unsigned n)
+prom_console_write(struct console *con, const char *s, unsigned int n)
 {
 	prom_write(s, n);
 }
@@ -77,7 +77,7 @@ struct screen_info screen_info = {
 };
 
 static void
-prom_console_write(struct console *con, const char *s, unsigned n)
+prom_console_write(struct console *con, const char *s, unsigned int n)
 {
 	prom_write(s, n);
 }
@@ -144,7 +144,7 @@ void do_sigreturn32(struct pt_regs *regs)
 	compat_uptr_t fpu_save;
 	compat_uptr_t rwin_save;
 	unsigned int psr;
-	unsigned pc, npc;
+	unsigned int pc, npc;
 	sigset_t set;
 	compat_sigset_t seta;
 	int err, i;
@@ -337,10 +337,10 @@ SYSCALL_DEFINE6(sparc_ipc, unsigned int, call, int, first, unsigned long, second
 	switch (call) {
 	case SEMOP:
 		err = sys_semtimedop(first, ptr,
-				     (unsigned)second, NULL);
+				     (unsigned int)second, NULL);
 		goto out;
 	case SEMTIMEDOP:
-		err = sys_semtimedop(first, ptr, (unsigned)second,
+		err = sys_semtimedop(first, ptr, (unsigned int)second,
 			(const struct timespec __user *)
 				(unsigned long) fifth);
 		goto out;
@@ -1,4 +1,4 @@
-/* sysfs.c: Toplogy sysfs support code for sparc64.
+/* sysfs.c: Topology sysfs support code for sparc64.
 *
 * Copyright (C) 2007 David S. Miller <davem@davemloft.net>
 */
@@ -209,8 +209,8 @@ static inline int do_int_store(int reg_num, int size, unsigned long *dst_addr,
 	if (size == 16) {
 		size = 8;
 		zero = (((long)(reg_num ?
-		        (unsigned)fetch_reg(reg_num, regs) : 0)) << 32) |
-			(unsigned)fetch_reg(reg_num + 1, regs);
+		        (unsigned int)fetch_reg(reg_num, regs) : 0)) << 32) |
+			(unsigned int)fetch_reg(reg_num + 1, regs);
 	} else if (reg_num) {
 		src_val_p = fetch_reg_addr(reg_num, regs);
 	}
@@ -303,10 +303,10 @@ no_context:
 	fixup = search_extables_range(regs->pc, &g2);
 	/* Values below 10 are reserved for other things */
 	if (fixup > 10) {
-		extern const unsigned __memset_start[];
-		extern const unsigned __memset_end[];
-		extern const unsigned __csum_partial_copy_start[];
-		extern const unsigned __csum_partial_copy_end[];
+		extern const unsigned int __memset_start[];
+		extern const unsigned int __memset_end[];
+		extern const unsigned int __csum_partial_copy_start[];
+		extern const unsigned int __csum_partial_copy_end[];
 
 #ifdef DEBUG_EXCEPTIONS
 		printk("Exception: PC<%08lx> faddr<%08lx>\n",
@@ -351,7 +351,7 @@ do {	*prog++ = BR_OPC | WDISP22(OFF);		\
 *
 * Sometimes we need to emit a branch earlier in the code
 * sequence.  And in these situations we adjust "destination"
- * to accomodate this difference.  For example, if we needed
+ * to accommodate this difference.  For example, if we needed
 * to emit a branch (and it's delay slot) right before the
 * final instruction emitted for a BPF opcode, we'd use
 * "destination + 4" instead of just plain "destination" above.
@@ -211,7 +211,7 @@ _gxio_mpipe_link_mac_t;
 *  request shared data permission on the same link.
 *
 *  No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA,
- *  or ::GXIO_MPIPE_LINK_EXCL_DATA may be specifed in a gxio_mpipe_link_open()
+ *  or ::GXIO_MPIPE_LINK_EXCL_DATA may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed.
 */
 #define GXIO_MPIPE_LINK_DATA               0x00000001UL
@@ -219,7 +219,7 @@ _gxio_mpipe_link_mac_t;
 /** Do not request data permission on the specified link.
 *
 *  No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA,
- *  or ::GXIO_MPIPE_LINK_EXCL_DATA may be specifed in a gxio_mpipe_link_open()
+ *  or ::GXIO_MPIPE_LINK_EXCL_DATA may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed.
 */
 #define GXIO_MPIPE_LINK_NO_DATA            0x00000002UL
@@ -230,7 +230,7 @@ _gxio_mpipe_link_mac_t;
 *  data permission on it, this open will fail.
 *
 *  No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA,
- *  or ::GXIO_MPIPE_LINK_EXCL_DATA may be specifed in a gxio_mpipe_link_open()
+ *  or ::GXIO_MPIPE_LINK_EXCL_DATA may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed.
 */
 #define GXIO_MPIPE_LINK_EXCL_DATA          0x00000004UL
@@ -241,7 +241,7 @@ _gxio_mpipe_link_mac_t;
 *  permission on the same link.
 *
 *  No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS,
- *  or ::GXIO_MPIPE_LINK_EXCL_STATS may be specifed in a gxio_mpipe_link_open()
+ *  or ::GXIO_MPIPE_LINK_EXCL_STATS may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed.
 */
 #define GXIO_MPIPE_LINK_STATS              0x00000008UL
@@ -249,7 +249,7 @@ _gxio_mpipe_link_mac_t;
 /** Do not request stats permission on the specified link.
 *
 *  No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS,
- *  or ::GXIO_MPIPE_LINK_EXCL_STATS may be specifed in a gxio_mpipe_link_open()
+ *  or ::GXIO_MPIPE_LINK_EXCL_STATS may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed.
 */
 #define GXIO_MPIPE_LINK_NO_STATS           0x00000010UL
@@ -267,7 +267,7 @@ _gxio_mpipe_link_mac_t;
 *  reset by other statistics programs.
 *
 *  No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS,
- *  or ::GXIO_MPIPE_LINK_EXCL_STATS may be specifed in a gxio_mpipe_link_open()
+ *  or ::GXIO_MPIPE_LINK_EXCL_STATS may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed.
 */
 #define GXIO_MPIPE_LINK_EXCL_STATS         0x00000020UL
@@ -278,7 +278,7 @@ _gxio_mpipe_link_mac_t;
 *  permission on the same link.
 *
 *  No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL,
- *  or ::GXIO_MPIPE_LINK_EXCL_CTL may be specifed in a gxio_mpipe_link_open()
+ *  or ::GXIO_MPIPE_LINK_EXCL_CTL may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed.
 */
 #define GXIO_MPIPE_LINK_CTL                0x00000040UL
@@ -286,7 +286,7 @@ _gxio_mpipe_link_mac_t;
 /** Do not request control permission on the specified link.
 *
 *  No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL,
- *  or ::GXIO_MPIPE_LINK_EXCL_CTL may be specifed in a gxio_mpipe_link_open()
+ *  or ::GXIO_MPIPE_LINK_EXCL_CTL may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed.
 */
 #define GXIO_MPIPE_LINK_NO_CTL             0x00000080UL
@@ -301,7 +301,7 @@ _gxio_mpipe_link_mac_t;
 *  it prevents programs like mpipe-link from configuring the link.
 *
 *  No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL,
- *  or ::GXIO_MPIPE_LINK_EXCL_CTL may be specifed in a gxio_mpipe_link_open()
+ *  or ::GXIO_MPIPE_LINK_EXCL_CTL may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed.
 */
 #define GXIO_MPIPE_LINK_EXCL_CTL           0x00000100UL
@@ -311,7 +311,7 @@ _gxio_mpipe_link_mac_t;
 *  change the desired state of the link when it is closed or the process
 *  exits.  No more than one of ::GXIO_MPIPE_LINK_AUTO_UP,
 *  ::GXIO_MPIPE_LINK_AUTO_UPDOWN, ::GXIO_MPIPE_LINK_AUTO_DOWN, or
- *  ::GXIO_MPIPE_LINK_AUTO_NONE may be specifed in a gxio_mpipe_link_open()
+ *  ::GXIO_MPIPE_LINK_AUTO_NONE may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed.
 */
 #define GXIO_MPIPE_LINK_AUTO_UP            0x00000200UL
@@ -322,7 +322,7 @@ _gxio_mpipe_link_mac_t;
 *  open, set the desired state of the link to down.  No more than one of
 *  ::GXIO_MPIPE_LINK_AUTO_UP, ::GXIO_MPIPE_LINK_AUTO_UPDOWN,
 *  ::GXIO_MPIPE_LINK_AUTO_DOWN, or ::GXIO_MPIPE_LINK_AUTO_NONE may be
- *  specifed in a gxio_mpipe_link_open() call.  If none are specified,
+ *  specified in a gxio_mpipe_link_open() call.  If none are specified,
 *  ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed.
 */
 #define GXIO_MPIPE_LINK_AUTO_UPDOWN        0x00000400UL
@@ -332,7 +332,7 @@ _gxio_mpipe_link_mac_t;
 *  process has the link open, set the desired state of the link to down.
 *  No more than one of ::GXIO_MPIPE_LINK_AUTO_UP,
 *  ::GXIO_MPIPE_LINK_AUTO_UPDOWN, ::GXIO_MPIPE_LINK_AUTO_DOWN, or
- *  ::GXIO_MPIPE_LINK_AUTO_NONE may be specifed in a gxio_mpipe_link_open()
+ *  ::GXIO_MPIPE_LINK_AUTO_NONE may be specified in a gxio_mpipe_link_open()
 *  call.  If none are specified, ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed.
 */
 #define GXIO_MPIPE_LINK_AUTO_DOWN          0x00000800UL
@@ -342,7 +342,7 @@ _gxio_mpipe_link_mac_t;
 *  closed or the process exits.  No more than one of
 *  ::GXIO_MPIPE_LINK_AUTO_UP, ::GXIO_MPIPE_LINK_AUTO_UPDOWN,
 *  ::GXIO_MPIPE_LINK_AUTO_DOWN, or ::GXIO_MPIPE_LINK_AUTO_NONE may be
- *  specifed in a gxio_mpipe_link_open() call.  If none are specified,
+ *  specified in a gxio_mpipe_link_open() call.  If none are specified,
 *  ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed.
 */
 #define GXIO_MPIPE_LINK_AUTO_NONE          0x00001000UL
 
@@ -126,15 +126,15 @@ void
sleeping_thread_to_gdb_regs(unsigned long *gdb_regs, struct task_struct *task)
{
	struct pt_regs *thread_regs;
+	const int NGPRS = TREG_LAST_GPR + 1;

	if (task == NULL)
		return;

-	/* Initialize to zero. */
-	memset(gdb_regs, 0, NUMREGBYTES);

	thread_regs = task_pt_regs(task);
-	memcpy(gdb_regs, thread_regs, TREG_LAST_GPR * sizeof(unsigned long));
+	memcpy(gdb_regs, thread_regs, NGPRS * sizeof(unsigned long));
+	memset(&gdb_regs[NGPRS], 0,
+	       (TILEGX_PC_REGNUM - NGPRS) * sizeof(unsigned long));
	gdb_regs[TILEGX_PC_REGNUM] = thread_regs->pc;
	gdb_regs[TILEGX_FAULTNUM_REGNUM] = thread_regs->faultnum;
}
@@ -433,9 +433,9 @@ int kgdb_arch_handle_exception(int vector, int signo, int err_code,
struct kgdb_arch arch_kgdb_ops;

/*
- * kgdb_arch_init - Perform any architecture specific initalization.
+ * kgdb_arch_init - Perform any architecture specific initialization.
 *
- * This function will handle the initalization of any architecture
+ * This function will handle the initialization of any architecture
 * specific callbacks.
 */
int kgdb_arch_init(void)
@@ -447,9 +447,9 @@ int kgdb_arch_init(void)
}

/*
- * kgdb_arch_exit - Perform any architecture specific uninitalization.
+ * kgdb_arch_exit - Perform any architecture specific uninitialization.
 *
- * This function will handle the uninitalization of any architecture
+ * This function will handle the uninitialization of any architecture
 * specific callbacks, for dynamic registration and unregistration.
 */
void kgdb_arch_exit(void)
@@ -1326,7 +1326,7 @@ invalid_device:


/*
- * See tile_cfg_read() for relevent comments.
+ * See tile_cfg_read() for relevant comments.
 * Note that "val" is the value to write, not a pointer to that value.
 */
static int tile_cfg_write(struct pci_bus *bus, unsigned int devfn, int offset,
@@ -369,7 +369,7 @@ static int amd_pmu_cpu_prepare(int cpu)

	WARN_ON_ONCE(cpuc->amd_nb);

-	if (boot_cpu_data.x86_max_cores < 2)
+	if (!x86_pmu.amd_nb_constraints)
		return NOTIFY_OK;

	cpuc->amd_nb = amd_alloc_nb(cpu);
@@ -388,7 +388,7 @@ static void amd_pmu_cpu_starting(int cpu)

	cpuc->perf_ctr_virt_mask = AMD64_EVENTSEL_HOSTONLY;

-	if (boot_cpu_data.x86_max_cores < 2)
+	if (!x86_pmu.amd_nb_constraints)
		return;

	nb_id = amd_get_nb_id(cpu);
@@ -414,7 +414,7 @@ static void amd_pmu_cpu_dead(int cpu)
{
	struct cpu_hw_events *cpuhw;

-	if (boot_cpu_data.x86_max_cores < 2)
+	if (!x86_pmu.amd_nb_constraints)
		return;

	cpuhw = &per_cpu(cpu_hw_events, cpu);
@@ -648,6 +648,8 @@ static __initconst const struct x86_pmu amd_pmu = {
	.cpu_prepare		= amd_pmu_cpu_prepare,
	.cpu_starting		= amd_pmu_cpu_starting,
	.cpu_dead		= amd_pmu_cpu_dead,
+
+	.amd_nb_constraints	= 1,
};

static int __init amd_core_pmu_init(void)
@@ -674,6 +676,11 @@ static int __init amd_core_pmu_init(void)
	x86_pmu.eventsel	= MSR_F15H_PERF_CTL;
	x86_pmu.perfctr		= MSR_F15H_PERF_CTR;
	x86_pmu.num_counters	= AMD64_NUM_COUNTERS_CORE;
+	/*
+	 * AMD Core perfctr has separate MSRs for the NB events, see
+	 * the amd/uncore.c driver.
+	 */
+	x86_pmu.amd_nb_constraints = 0;

	pr_cont("core perfctr, ");
	return 0;
@@ -693,6 +700,14 @@ __init int amd_pmu_init(void)
	if (ret)
		return ret;

+	if (num_possible_cpus() == 1) {
+		/*
+		 * No point in allocating data structures to serialize
+		 * against other CPUs, when there is only the one CPU.
+		 */
+		x86_pmu.amd_nb_constraints = 0;
+	}
+
	/* Events are common for all AMDs */
	memcpy(hw_cache_event_ids, amd_hw_cache_event_ids,
	       sizeof(hw_cache_event_ids));
@@ -28,10 +28,46 @@ static u32 ibs_caps;
#define IBS_FETCH_CONFIG_MASK	(IBS_FETCH_RAND_EN | IBS_FETCH_MAX_CNT)
#define IBS_OP_CONFIG_MASK	IBS_OP_MAX_CNT


+/*
+ * IBS states:
+ *
+ * ENABLED; tracks the pmu::add(), pmu::del() state, when set the counter is taken
+ * and any further add()s must fail.
+ *
+ * STARTED/STOPPING/STOPPED; deal with pmu::start(), pmu::stop() state but are
+ * complicated by the fact that the IBS hardware can send late NMIs (ie. after
+ * we've cleared the EN bit).
+ *
+ * In order to consume these late NMIs we have the STOPPED state, any NMI that
+ * happens after we've cleared the EN state will clear this bit and report the
+ * NMI handled (this is fundamentally racy in the face or multiple NMI sources,
+ * someone else can consume our BIT and our NMI will go unhandled).
+ *
+ * And since we cannot set/clear this separate bit together with the EN bit,
+ * there are races; if we cleared STARTED early, an NMI could land in
+ * between clearing STARTED and clearing the EN bit (in fact multiple NMIs
+ * could happen if the period is small enough), and consume our STOPPED bit
+ * and trigger streams of unhandled NMIs.
+ *
+ * If, however, we clear STARTED late, an NMI can hit between clearing the
+ * EN bit and clearing STARTED, still see STARTED set and process the event.
+ * If this event will have the VALID bit clear, we bail properly, but this
+ * is not a given. With VALID set we can end up calling pmu::stop() again
+ * (the throttle logic) and trigger the WARNs in there.
+ *
+ * So what we do is set STOPPING before clearing EN to avoid the pmu::stop()
+ * nesting, and clear STARTED late, so that we have a well defined state over
+ * the clearing of the EN bit.
+ *
+ * XXX: we could probably be using !atomic bitops for all this.
+ */
+
enum ibs_states {
	IBS_ENABLED	= 0,
	IBS_STARTED	= 1,
	IBS_STOPPING	= 2,
+	IBS_STOPPED	= 3,

	IBS_MAX_STATES,
};
@@ -377,9 +413,8 @@ static void perf_ibs_start(struct perf_event *event, int flags)

	perf_ibs_set_period(perf_ibs, hwc, &period);
	/*
-	 * Set STARTED before enabling the hardware, such that
-	 * a subsequent NMI must observe it. Then clear STOPPING
-	 * such that we don't consume NMIs by accident.
+	 * Set STARTED before enabling the hardware, such that a subsequent NMI
+	 * must observe it.
	 */
	set_bit(IBS_STARTED, pcpu->state);
	clear_bit(IBS_STOPPING, pcpu->state);
@@ -396,6 +431,9 @@ static void perf_ibs_stop(struct perf_event *event, int flags)
	u64 config;
	int stopping;

+	if (test_and_set_bit(IBS_STOPPING, pcpu->state))
+		return;
+
	stopping = test_bit(IBS_STARTED, pcpu->state);

	if (!stopping && (hwc->state & PERF_HES_UPTODATE))
@@ -405,12 +443,12 @@ static void perf_ibs_stop(struct perf_event *event, int flags)

	if (stopping) {
		/*
-		 * Set STOPPING before disabling the hardware, such that it
+		 * Set STOPPED before disabling the hardware, such that it
		 * must be visible to NMIs the moment we clear the EN bit,
		 * at which point we can generate an !VALID sample which
		 * we need to consume.
		 */
-		set_bit(IBS_STOPPING, pcpu->state);
+		set_bit(IBS_STOPPED, pcpu->state);
		perf_ibs_disable_event(perf_ibs, hwc, config);
		/*
		 * Clear STARTED after disabling the hardware; if it were
@@ -556,7 +594,7 @@ fail:
	 * with samples that even have the valid bit cleared.
	 * Mark all this NMIs as handled.
	 */
-	if (test_and_clear_bit(IBS_STOPPING, pcpu->state))
+	if (test_and_clear_bit(IBS_STOPPED, pcpu->state))
		return 1;

	return 0;
@@ -607,6 +607,11 @@ struct x86_pmu {
	 */
	atomic_t	lbr_exclusive[x86_lbr_exclusive_max];

+	/*
+	 * AMD bits
+	 */
+	unsigned int	amd_nb_constraints : 1;
+
	/*
	 * Extra registers for events
	 */
@@ -795,6 +800,9 @@ ssize_t intel_event_sysfs_show(char *page, u64 config);

struct attribute **merge_attr(struct attribute **a, struct attribute **b);

+ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
+			  char *page);
+
#ifdef CONFIG_CPU_SUP_AMD

int amd_pmu_init(void);
@@ -925,9 +933,6 @@ int p6_pmu_init(void);

int knc_pmu_init(void);

-ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
-			  char *page);
-
static inline int is_ht_workaround_enabled(void)
{
	return !!(x86_pmu.flags & PMU_FL_EXCL_ENABLED);
@@ -190,6 +190,7 @@
#define MSR_PP1_ENERGY_STATUS		0x00000641
#define MSR_PP1_POLICY			0x00000642

+/* Config TDP MSRs */
#define MSR_CONFIG_TDP_NOMINAL		0x00000648
#define MSR_CONFIG_TDP_LEVEL_1		0x00000649
#define MSR_CONFIG_TDP_LEVEL_2		0x0000064A
@@ -210,13 +211,6 @@
#define MSR_GFX_PERF_LIMIT_REASONS	0x000006B0
#define MSR_RING_PERF_LIMIT_REASONS	0x000006B1

-/* Config TDP MSRs */
-#define MSR_CONFIG_TDP_NOMINAL		0x00000648
-#define MSR_CONFIG_TDP_LEVEL1		0x00000649
-#define MSR_CONFIG_TDP_LEVEL2		0x0000064A
-#define MSR_CONFIG_TDP_CONTROL		0x0000064B
-#define MSR_TURBO_ACTIVATION_RATIO	0x0000064C
-
/* Hardware P state interface */
#define MSR_PPERF			0x0000064e
#define MSR_PERF_LIMIT_REASONS		0x0000064f
@@ -47,6 +47,15 @@ static inline void arch_memcpy_to_pmem(void __pmem *dst, const void *src,
	BUG();
}

+static inline int arch_memcpy_from_pmem(void *dst, const void __pmem *src,
+					size_t n)
+{
+	if (static_cpu_has(X86_FEATURE_MCE_RECOVERY))
+		return memcpy_mcsafe(dst, (void __force *) src, n);
+	memcpy(dst, (void __force *) src, n);
+	return 0;
+}
+
/**
 * arch_wmb_pmem - synchronize writes to persistent memory
 *
@@ -132,8 +132,6 @@ struct cpuinfo_x86 {
	u16			logical_proc_id;
	/* Core id: */
	u16			cpu_core_id;
-	/* Compute unit id */
-	u8			compute_unit_id;
	/* Index into per_cpu list: */
	u16			cpu_index;
	u32			microcode;
@@ -155,6 +155,7 @@ static inline int wbinvd_on_all_cpus(void)
	wbinvd();
	return 0;
}
+#define smp_num_siblings	1
#endif /* CONFIG_SMP */

extern unsigned disabled_cpus;
@@ -276,11 +276,9 @@ static inline bool is_ia32_task(void)
 */
#define force_iret() set_thread_flag(TIF_NOTIFY_RESUME)

-#endif	/* !__ASSEMBLY__ */
-
-#ifndef __ASSEMBLY__
extern void arch_task_cache_init(void);
extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
extern void arch_release_task_struct(struct task_struct *tsk);
-#endif
+#endif	/* !__ASSEMBLY__ */

#endif /* _ASM_X86_THREAD_INFO_H */
@@ -319,12 +319,6 @@ static inline void reset_lazy_tlbstate(void)

#endif	/* SMP */

-/* Not inlined due to inc_irq_stat not being defined yet */
-#define flush_tlb_local() {		\
-	inc_irq_stat(irq_tlb_count);	\
-	local_flush_tlb();		\
-}
-
#ifndef CONFIG_PARAVIRT
#define flush_tlb_others(mask, mm, start, end)	\
	native_flush_tlb_others(mask, mm, start, end)
@@ -170,15 +170,13 @@ int amd_get_subcaches(int cpu)
{
	struct pci_dev *link = node_to_amd_nb(amd_get_nb_id(cpu))->link;
	unsigned int mask;
-	int cuid;

	if (!amd_nb_has_feature(AMD_NB_L3_PARTITIONING))
		return 0;

	pci_read_config_dword(link, 0x1d4, &mask);

-	cuid = cpu_data(cpu).compute_unit_id;
-	return (mask >> (4 * cuid)) & 0xf;
+	return (mask >> (4 * cpu_data(cpu).cpu_core_id)) & 0xf;
}

int amd_set_subcaches(int cpu, unsigned long mask)
@@ -204,7 +202,7 @@ int amd_set_subcaches(int cpu, unsigned long mask)
		pci_write_config_dword(nb->misc, 0x1b8, reg & ~0x180000);
	}

-	cuid = cpu_data(cpu).compute_unit_id;
+	cuid = cpu_data(cpu).cpu_core_id;
	mask <<= 4 * cuid;
	mask |= (0xf ^ (1 << cuid)) << 26;
@@ -300,7 +300,6 @@ static int nearby_node(int apicid)
#ifdef CONFIG_SMP
static void amd_get_topology(struct cpuinfo_x86 *c)
{
-	u32 cores_per_cu = 1;
	u8 node_id;
	int cpu = smp_processor_id();

@@ -313,8 +312,8 @@ static void amd_get_topology(struct cpuinfo_x86 *c)

		/* get compute unit information */
		smp_num_siblings = ((ebx >> 8) & 3) + 1;
-		c->compute_unit_id = ebx & 0xff;
-		cores_per_cu += ((ebx >> 8) & 3);
+		c->x86_max_cores /= smp_num_siblings;
+		c->cpu_core_id = ebx & 0xff;
	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
		u64 value;

@@ -325,19 +324,16 @@ static void amd_get_topology(struct cpuinfo_x86 *c)

	/* fixup multi-node processor information */
	if (nodes_per_socket > 1) {
-		u32 cores_per_node;
		u32 cus_per_node;

		set_cpu_cap(c, X86_FEATURE_AMD_DCM);
-		cores_per_node = c->x86_max_cores / nodes_per_socket;
-		cus_per_node = cores_per_node / cores_per_cu;
+		cus_per_node = c->x86_max_cores / nodes_per_socket;

		/* store NodeID, use llc_shared_map to store sibling info */
		per_cpu(cpu_llc_id, cpu) = node_id;

		/* core id has to be in the [0 .. cores_per_node - 1] range */
-		c->cpu_core_id %= cores_per_node;
-		c->compute_unit_id %= cus_per_node;
+		c->cpu_core_id %= cus_per_node;
	}
}
#endif
@@ -384,6 +384,9 @@ static void intel_thermal_interrupt(void)
{
	__u64 msr_val;

+	if (static_cpu_has(X86_FEATURE_HWP))
+		wrmsrl_safe(MSR_HWP_STATUS, 0);
+
	rdmsrl(MSR_IA32_THERM_STATUS, msr_val);

	/* Check for violation of core thermal thresholds*/
@@ -18,4 +18,6 @@ const char *const x86_power_flags[32] = {
	"",	/* tsc invariant mapped to constant_tsc */
	"cpb",	/* core performance boost */
	"eff_freq_ro", /* Readonly aperf/mperf */
+	"proc_feedback", /* processor feedback interface */
+	"acc_power", /* accumulated power mechanism */
};
@@ -422,7 +422,7 @@ static bool match_smt(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)

		if (c->phys_proc_id == o->phys_proc_id &&
		    per_cpu(cpu_llc_id, cpu1) == per_cpu(cpu_llc_id, cpu2) &&
-		    c->compute_unit_id == o->compute_unit_id)
+		    c->cpu_core_id == o->cpu_core_id)
			return topology_sane(c, o, "smt");

	} else if (c->phys_proc_id == o->phys_proc_id &&
@@ -104,10 +104,8 @@ static void flush_tlb_func(void *info)

	inc_irq_stat(irq_tlb_count);

-	if (f->flush_mm != this_cpu_read(cpu_tlbstate.active_mm))
+	if (f->flush_mm && f->flush_mm != this_cpu_read(cpu_tlbstate.active_mm))
		return;
-	if (!f->flush_end)
-		f->flush_end = f->flush_start + PAGE_SIZE;

	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
	if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK) {
@@ -135,12 +133,20 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
				 unsigned long end)
{
	struct flush_tlb_info info;

+	if (end == 0)
+		end = start + PAGE_SIZE;
	info.flush_mm = mm;
	info.flush_start = start;
	info.flush_end = end;

	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
-	trace_tlb_flush(TLB_REMOTE_SEND_IPI, end - start);
+	if (end == TLB_FLUSH_ALL)
+		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
+	else
+		trace_tlb_flush(TLB_REMOTE_SEND_IPI,
+				(end - start) >> PAGE_SHIFT);

	if (is_uv_system()) {
		unsigned int cpu;

@@ -20,6 +20,7 @@
#include <linux/pci.h>

#include <asm/mce.h>
+#include <asm/smp.h>
#include <asm/amd_nb.h>
#include <asm/irq_vectors.h>

@@ -206,7 +207,7 @@ static u32 get_nbc_for_node(int node_id)
	struct cpuinfo_x86 *c = &boot_cpu_data;
	u32 cores_per_node;

-	cores_per_node = c->x86_max_cores / amd_get_nodes_per_socket();
+	cores_per_node = (c->x86_max_cores * smp_num_siblings) / amd_get_nodes_per_socket();

	return cores_per_node * node_id;
}
@@ -178,6 +178,8 @@ int pkcs7_validate_trust(struct pkcs7_message *pkcs7,
	int cached_ret = -ENOKEY;
	int ret;

+	*_trusted = false;
+
	for (p = pkcs7->certs; p; p = p->next)
		p->seen = false;

@@ -491,6 +491,58 @@ static void acpi_processor_remove(struct acpi_device *device)
}
#endif /* CONFIG_ACPI_HOTPLUG_CPU */

+#ifdef CONFIG_X86
+static bool acpi_hwp_native_thermal_lvt_set;
+static acpi_status __init acpi_hwp_native_thermal_lvt_osc(acpi_handle handle,
+							  u32 lvl,
+							  void *context,
+							  void **rv)
+{
+	u8 sb_uuid_str[] = "4077A616-290C-47BE-9EBD-D87058713953";
+	u32 capbuf[2];
+	struct acpi_osc_context osc_context = {
+		.uuid_str = sb_uuid_str,
+		.rev = 1,
+		.cap.length = 8,
+		.cap.pointer = capbuf,
+	};
+
+	if (acpi_hwp_native_thermal_lvt_set)
+		return AE_CTRL_TERMINATE;
+
+	capbuf[0] = 0x0000;
+	capbuf[1] = 0x1000; /* set bit 12 */
+
+	if (ACPI_SUCCESS(acpi_run_osc(handle, &osc_context))) {
+		if (osc_context.ret.pointer && osc_context.ret.length > 1) {
+			u32 *capbuf_ret = osc_context.ret.pointer;
+
+			if (capbuf_ret[1] & 0x1000) {
+				acpi_handle_info(handle,
+					"_OSC native thermal LVT Acked\n");
+				acpi_hwp_native_thermal_lvt_set = true;
+			}
+		}
+		kfree(osc_context.ret.pointer);
+	}
+
+	return AE_OK;
+}
+
+void __init acpi_early_processor_osc(void)
+{
+	if (boot_cpu_has(X86_FEATURE_HWP)) {
+		acpi_walk_namespace(ACPI_TYPE_PROCESSOR, ACPI_ROOT_OBJECT,
+				    ACPI_UINT32_MAX,
+				    acpi_hwp_native_thermal_lvt_osc,
+				    NULL, NULL, NULL);
+		acpi_get_devices(ACPI_PROCESSOR_DEVICE_HID,
+				 acpi_hwp_native_thermal_lvt_osc,
+				 NULL, NULL);
+	}
+}
+#endif
+
/*
 * The following ACPI IDs are known to be suitable for representing as
 * processor devices.
@@ -1019,6 +1019,9 @@ static int __init acpi_bus_init(void)
		goto error1;
	}

+	/* Set capability bits for _OSC under processor scope */
+	acpi_early_processor_osc();
+
	/*
	 * _OSC method may exist in module level code,
	 * so it must be run after ACPI_FULL_INITIALIZATION
@@ -145,6 +145,12 @@ void acpi_early_processor_set_pdc(void);
static inline void acpi_early_processor_set_pdc(void) {}
#endif

+#ifdef CONFIG_X86
+void acpi_early_processor_osc(void);
+#else
+static inline void acpi_early_processor_osc(void) {}
+#endif
+
/* --------------------------------------------------------------------------
                                  Embedded Controller
   -------------------------------------------------------------------------- */
@@ -57,7 +57,7 @@ static int mtk_reset(struct reset_controller_dev *rcdev,
	return mtk_reset_deassert(rcdev, id);
}

-static struct reset_control_ops mtk_reset_ops = {
+static const struct reset_control_ops mtk_reset_ops = {
	.assert = mtk_reset_assert,
	.deassert = mtk_reset_deassert,
	.reset = mtk_reset,
@@ -74,7 +74,7 @@ static int mmp_clk_reset_deassert(struct reset_controller_dev *rcdev,
	return 0;
}

-static struct reset_control_ops mmp_clk_reset_ops = {
+static const struct reset_control_ops mmp_clk_reset_ops = {
	.assert = mmp_clk_reset_assert,
	.deassert = mmp_clk_reset_deassert,
};
@@ -129,20 +129,10 @@ static const char * const gcc_xo_ddr_500_200[] = {
};

#define F(f, s, h, m, n) { (f), (s), (2 * (h) - 1), (m), (n) }
-#define P_XO 0
-#define FE_PLL_200 1
-#define FE_PLL_500 2
-#define DDRC_PLL_666 3
-
-#define DDRC_PLL_666_SDCC 1
-#define FE_PLL_125_DLY 1
-
-#define FE_PLL_WCSS2G 1
-#define FE_PLL_WCSS5G 1

static const struct freq_tbl ftbl_gcc_audio_pwm_clk[] = {
	F(48000000, P_XO, 1, 0, 0),
-	F(200000000, FE_PLL_200, 1, 0, 0),
+	F(200000000, P_FEPLL200, 1, 0, 0),
	{ }
};
@@ -334,15 +324,15 @@ static struct clk_branch gcc_blsp1_qup2_spi_apps_clk = {
};

static const struct freq_tbl ftbl_gcc_blsp1_uart1_2_apps_clk[] = {
-	F(1843200, FE_PLL_200, 1, 144, 15625),
-	F(3686400, FE_PLL_200, 1, 288, 15625),
-	F(7372800, FE_PLL_200, 1, 576, 15625),
-	F(14745600, FE_PLL_200, 1, 1152, 15625),
-	F(16000000, FE_PLL_200, 1, 2, 25),
+	F(1843200, P_FEPLL200, 1, 144, 15625),
+	F(3686400, P_FEPLL200, 1, 288, 15625),
+	F(7372800, P_FEPLL200, 1, 576, 15625),
+	F(14745600, P_FEPLL200, 1, 1152, 15625),
+	F(16000000, P_FEPLL200, 1, 2, 25),
	F(24000000, P_XO, 1, 1, 2),
-	F(32000000, FE_PLL_200, 1, 4, 25),
-	F(40000000, FE_PLL_200, 1, 1, 5),
-	F(46400000, FE_PLL_200, 1, 29, 125),
+	F(32000000, P_FEPLL200, 1, 4, 25),
+	F(40000000, P_FEPLL200, 1, 1, 5),
+	F(46400000, P_FEPLL200, 1, 29, 125),
	F(48000000, P_XO, 1, 0, 0),
	{ }
};
@@ -410,9 +400,9 @@ static struct clk_branch gcc_blsp1_uart2_apps_clk = {
};

static const struct freq_tbl ftbl_gcc_gp_clk[] = {
-	F(1250000, FE_PLL_200, 1, 16, 0),
-	F(2500000, FE_PLL_200, 1, 8, 0),
-	F(5000000, FE_PLL_200, 1, 4, 0),
+	F(1250000, P_FEPLL200, 1, 16, 0),
+	F(2500000, P_FEPLL200, 1, 8, 0),
+	F(5000000, P_FEPLL200, 1, 4, 0),
	{ }
};

@@ -512,11 +502,11 @@ static struct clk_branch gcc_gp3_clk = {
static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk[] = {
	F(144000, P_XO, 1, 3, 240),
	F(400000, P_XO, 1, 1, 0),
-	F(20000000, FE_PLL_500, 1, 1, 25),
-	F(25000000, FE_PLL_500, 1, 1, 20),
-	F(50000000, FE_PLL_500, 1, 1, 10),
-	F(100000000, FE_PLL_500, 1, 1, 5),
-	F(193000000, DDRC_PLL_666_SDCC, 1, 0, 0),
+	F(20000000, P_FEPLL500, 1, 1, 25),
+	F(25000000, P_FEPLL500, 1, 1, 20),
+	F(50000000, P_FEPLL500, 1, 1, 10),
+	F(100000000, P_FEPLL500, 1, 1, 5),
+	F(193000000, P_DDRPLL, 1, 0, 0),
	{ }
};

@@ -536,9 +526,9 @@ static struct clk_rcg2 sdcc1_apps_clk_src = {

static const struct freq_tbl ftbl_gcc_apps_clk[] = {
	F(48000000, P_XO, 1, 0, 0),
-	F(200000000, FE_PLL_200, 1, 0, 0),
-	F(500000000, FE_PLL_500, 1, 0, 0),
-	F(626000000, DDRC_PLL_666, 1, 0, 0),
+	F(200000000, P_FEPLL200, 1, 0, 0),
+	F(500000000, P_FEPLL500, 1, 0, 0),
+	F(626000000, P_DDRPLLAPSS, 1, 0, 0),
	{ }
};

@@ -557,7 +547,7 @@ static struct clk_rcg2 apps_clk_src = {

static const struct freq_tbl ftbl_gcc_apps_ahb_clk[] = {
	F(48000000, P_XO, 1, 0, 0),
-	F(100000000, FE_PLL_200, 2, 0, 0),
+	F(100000000, P_FEPLL200, 2, 0, 0),
	{ }
};

@@ -940,7 +930,7 @@ static struct clk_branch gcc_usb2_mock_utmi_clk = {
};

static const struct freq_tbl ftbl_gcc_usb30_mock_utmi_clk[] = {
-	F(2000000, FE_PLL_200, 10, 0, 0),
+	F(2000000, P_FEPLL200, 10, 0, 0),
	{ }
};

@@ -1007,7 +997,7 @@ static struct clk_branch gcc_usb3_mock_utmi_clk = {
};

static const struct freq_tbl ftbl_gcc_fephy_dly_clk[] = {
-	F(125000000, FE_PLL_125_DLY, 1, 0, 0),
+	F(125000000, P_FEPLL125DLY, 1, 0, 0),
	{ }
};

@@ -1027,7 +1017,7 @@ static struct clk_rcg2 fephy_125m_dly_clk_src = {

static const struct freq_tbl ftbl_gcc_wcss2g_clk[] = {
	F(48000000, P_XO, 1, 0, 0),
-	F(250000000, FE_PLL_WCSS2G, 1, 0, 0),
+	F(250000000, P_FEPLLWCSS2G, 1, 0, 0),
	{ }
};

@@ -1097,7 +1087,7 @@ static struct clk_branch gcc_wcss2g_rtc_clk = {

static const struct freq_tbl ftbl_gcc_wcss5g_clk[] = {
	F(48000000, P_XO, 1, 0, 0),
-	F(250000000, FE_PLL_WCSS5G, 1, 0, 0),
+	F(250000000, P_FEPLLWCSS5G, 1, 0, 0),
	{ }
};

@@ -1325,6 +1315,16 @@ MODULE_DEVICE_TABLE(of, gcc_ipq4019_match_table);

static int gcc_ipq4019_probe(struct platform_device *pdev)
{
+	struct device *dev = &pdev->dev;
+
+	clk_register_fixed_rate(dev, "fepll125", "xo", 0, 200000000);
+	clk_register_fixed_rate(dev, "fepll125dly", "xo", 0, 200000000);
+	clk_register_fixed_rate(dev, "fepllwcss2g", "xo", 0, 200000000);
+	clk_register_fixed_rate(dev, "fepllwcss5g", "xo", 0, 200000000);
+	clk_register_fixed_rate(dev, "fepll200", "xo", 0, 200000000);
+	clk_register_fixed_rate(dev, "fepll500", "xo", 0, 200000000);
+	clk_register_fixed_rate(dev, "ddrpllapss", "xo", 0, 666000000);
+
	return qcom_cc_probe(pdev, &gcc_ipq4019_desc);
}
@@ -55,7 +55,7 @@ qcom_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
	return regmap_update_bits(rst->regmap, map->reg, mask, 0);
}

-struct reset_control_ops qcom_reset_ops = {
+const struct reset_control_ops qcom_reset_ops = {
	.reset = qcom_reset,
	.assert = qcom_reset_assert,
	.deassert = qcom_reset_deassert,
@@ -32,6 +32,6 @@ struct qcom_reset_controller {
#define to_qcom_reset_controller(r) \
	container_of(r, struct qcom_reset_controller, rcdev);

-extern struct reset_control_ops qcom_reset_ops;
+extern const struct reset_control_ops qcom_reset_ops;

#endif
@@ -81,7 +81,7 @@ static int rockchip_softrst_deassert(struct reset_controller_dev *rcdev,
	return 0;
}

-static struct reset_control_ops rockchip_softrst_ops = {
+static const struct reset_control_ops rockchip_softrst_ops = {
	.assert = rockchip_softrst_assert,
	.deassert = rockchip_softrst_deassert,
};
@@ -1423,7 +1423,7 @@ static int atlas7_reset_module(struct reset_controller_dev *rcdev,
	return 0;
}

-static struct reset_control_ops atlas7_rst_ops = {
+static const struct reset_control_ops atlas7_rst_ops = {
	.reset = atlas7_reset_module,
};

@@ -85,7 +85,7 @@ static int sunxi_ve_of_xlate(struct reset_controller_dev *rcdev,
	return 0;
}

-static struct reset_control_ops sunxi_ve_reset_ops = {
+static const struct reset_control_ops sunxi_ve_reset_ops = {
	.assert = sunxi_ve_reset_assert,
	.deassert = sunxi_ve_reset_deassert,
};
@@ -83,7 +83,7 @@ static int sun9i_mmc_reset_deassert(struct reset_controller_dev *rcdev,
	return 0;
}

-static struct reset_control_ops sun9i_mmc_reset_ops = {
+static const struct reset_control_ops sun9i_mmc_reset_ops = {
	.assert = sun9i_mmc_reset_assert,
	.deassert = sun9i_mmc_reset_deassert,
};
@@ -76,7 +76,7 @@ static int sunxi_usb_reset_deassert(struct reset_controller_dev *rcdev,
	return 0;
}

-static struct reset_control_ops sunxi_usb_reset_ops = {
+static const struct reset_control_ops sunxi_usb_reset_ops = {
	.assert = sunxi_usb_reset_assert,
	.deassert = sunxi_usb_reset_deassert,
};
@@ -271,7 +271,7 @@ void __init tegra_init_from_table(struct tegra_clk_init_table *tbl,
	}
}

-static struct reset_control_ops rst_ops = {
+static const struct reset_control_ops rst_ops = {
	.assert = tegra_clk_rst_assert,
	.deassert = tegra_clk_rst_deassert,
};
@@ -37,7 +37,6 @@ struct men_z127_gpio {
	void __iomem *reg_base;
	struct mcb_device *mdev;
	struct resource *mem;
-	spinlock_t lock;
};

static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
@@ -69,7 +68,7 @@ static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
		debounce /= 50;
	}

-	spin_lock(&priv->lock);
+	spin_lock(&gc->bgpio_lock);

	db_en = readl(priv->reg_base + MEN_Z127_DBER);

@@ -84,7 +83,7 @@ static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
	writel(db_en, priv->reg_base + MEN_Z127_DBER);
	writel(db_cnt, priv->reg_base + GPIO_TO_DBCNT_REG(gpio));

-	spin_unlock(&priv->lock);
+	spin_unlock(&gc->bgpio_lock);

	return 0;
}
@@ -97,7 +96,7 @@ static int men_z127_request(struct gpio_chip *gc, unsigned gpio_pin)
	if (gpio_pin >= gc->ngpio)
		return -EINVAL;

-	spin_lock(&priv->lock);
+	spin_lock(&gc->bgpio_lock);
	od_en = readl(priv->reg_base + MEN_Z127_ODER);

	if (gpiochip_line_is_open_drain(gc, gpio_pin))
@@ -106,7 +105,7 @@ static int men_z127_request(struct gpio_chip *gc, unsigned gpio_pin)
		od_en &= ~BIT(gpio_pin);

	writel(od_en, priv->reg_base + MEN_Z127_ODER);
-	spin_unlock(&priv->lock);
+	spin_unlock(&gc->bgpio_lock);

	return 0;
}
@@ -173,6 +173,11 @@ static int xgene_gpio_probe(struct platform_device *pdev)
 	}
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		err = -EINVAL;
+		goto err;
+	}
+
 	gpio->base = devm_ioremap_nocache(&pdev->dev, res->start,
 					  resource_size(res));
 	if (!gpio->base) {
@ -1,10 +1,14 @@
|
||||
menu "ACP Configuration"
|
||||
menu "ACP (Audio CoProcessor) Configuration"
|
||||
|
||||
config DRM_AMD_ACP
|
||||
bool "Enable ACP IP support"
|
||||
bool "Enable AMD Audio CoProcessor IP support"
|
||||
select MFD_CORE
|
||||
select PM_GENERIC_DOMAINS if PM
|
||||
help
|
||||
Choose this option to enable ACP IP support for AMD SOCs.
|
||||
This adds the ACP (Audio CoProcessor) IP driver and wires
|
||||
it up into the amdgpu driver. The ACP block provides the DMA
|
||||
engine for the i2s-based ALSA driver. It is required for audio
|
||||
on APUs which utilize an i2s codec.
|
||||
|
||||
endmenu
|
||||
|
@ -608,6 +608,10 @@ int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
|
||||
if ((offset + size) <= adev->mc.visible_vram_size)
|
||||
return 0;
|
||||
|
||||
/* Can't move a pinned BO to visible VRAM */
|
||||
if (abo->pin_count > 0)
|
||||
return -EINVAL;
|
||||
|
||||
/* hurrah the memory is not visible ! */
|
||||
amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_VRAM);
|
||||
lpfn = adev->mc.visible_vram_size >> PAGE_SHIFT;
|
||||
|
@ -384,9 +384,15 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo,
|
||||
struct ttm_mem_reg *new_mem)
|
||||
{
|
||||
struct amdgpu_device *adev;
|
||||
struct amdgpu_bo *abo;
|
||||
struct ttm_mem_reg *old_mem = &bo->mem;
|
||||
int r;
|
||||
|
||||
/* Can't move a pinned BO */
|
||||
abo = container_of(bo, struct amdgpu_bo, tbo);
|
||||
if (WARN_ON_ONCE(abo->pin_count > 0))
|
||||
return -EINVAL;
|
||||
|
||||
adev = amdgpu_get_adev(bo->bdev);
|
||||
if (old_mem->mem_type == TTM_PL_SYSTEM && bo->ttm == NULL) {
|
||||
amdgpu_move_null(bo, new_mem);
|
||||
|
@ -179,7 +179,7 @@ static int drm_dp_dpcd_access(struct drm_dp_aux *aux, u8 request,
|
||||
{
|
||||
struct drm_dp_aux_msg msg;
|
||||
unsigned int retry;
|
||||
int err;
|
||||
int err = 0;
|
||||
|
||||
memset(&msg, 0, sizeof(msg));
|
||||
msg.address = offset;
|
||||
@ -187,6 +187,8 @@ static int drm_dp_dpcd_access(struct drm_dp_aux *aux, u8 request,
|
||||
msg.buffer = buffer;
|
||||
msg.size = size;
|
||||
|
||||
mutex_lock(&aux->hw_mutex);
|
||||
|
||||
/*
|
||||
* The specification doesn't give any recommendation on how often to
|
||||
* retry native transactions. We used to retry 7 times like for
|
||||
@ -195,25 +197,24 @@ static int drm_dp_dpcd_access(struct drm_dp_aux *aux, u8 request,
|
||||
*/
|
||||
for (retry = 0; retry < 32; retry++) {
|
||||
|
||||
mutex_lock(&aux->hw_mutex);
|
||||
err = aux->transfer(aux, &msg);
|
||||
mutex_unlock(&aux->hw_mutex);
|
||||
if (err < 0) {
|
||||
if (err == -EBUSY)
|
||||
continue;
|
||||
|
||||
return err;
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
|
||||
switch (msg.reply & DP_AUX_NATIVE_REPLY_MASK) {
|
||||
case DP_AUX_NATIVE_REPLY_ACK:
|
||||
if (err < size)
|
||||
return -EPROTO;
|
||||
return err;
|
||||
err = -EPROTO;
|
||||
goto unlock;
|
||||
|
||||
case DP_AUX_NATIVE_REPLY_NACK:
|
||||
return -EIO;
|
||||
err = -EIO;
|
||||
goto unlock;
|
||||
|
||||
case DP_AUX_NATIVE_REPLY_DEFER:
|
||||
usleep_range(AUX_RETRY_INTERVAL, AUX_RETRY_INTERVAL + 100);
|
||||
@ -222,7 +223,11 @@ static int drm_dp_dpcd_access(struct drm_dp_aux *aux, u8 request,
|
||||
}
|
||||
|
||||
DRM_DEBUG_KMS("too many retries, giving up\n");
|
||||
return -EIO;
|
||||
err = -EIO;
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&aux->hw_mutex);
|
||||
return err;
|
||||
}
|
||||
|
||||
/**
|
||||
@ -544,9 +549,7 @@ static int drm_dp_i2c_do_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
|
||||
int max_retries = max(7, drm_dp_i2c_retry_count(msg, dp_aux_i2c_speed_khz));
|
||||
|
||||
for (retry = 0, defer_i2c = 0; retry < (max_retries + defer_i2c); retry++) {
|
||||
mutex_lock(&aux->hw_mutex);
|
||||
ret = aux->transfer(aux, msg);
|
||||
mutex_unlock(&aux->hw_mutex);
|
||||
if (ret < 0) {
|
||||
if (ret == -EBUSY)
|
||||
continue;
|
||||
@ -685,6 +688,8 @@ static int drm_dp_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs,
|
||||
|
||||
memset(&msg, 0, sizeof(msg));
|
||||
|
||||
mutex_lock(&aux->hw_mutex);
|
||||
|
||||
for (i = 0; i < num; i++) {
|
||||
msg.address = msgs[i].addr;
|
||||
drm_dp_i2c_msg_set_request(&msg, &msgs[i]);
|
||||
@ -739,6 +744,8 @@ static int drm_dp_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs,
|
||||
msg.size = 0;
|
||||
(void)drm_dp_i2c_do_msg(aux, &msg);
|
||||
|
||||
mutex_unlock(&aux->hw_mutex);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@ -196,7 +196,7 @@ void __exit msm_hdmi_phy_driver_unregister(void);
|
||||
int msm_hdmi_pll_8960_init(struct platform_device *pdev);
|
||||
int msm_hdmi_pll_8996_init(struct platform_device *pdev);
|
||||
#else
|
||||
static inline int msm_hdmi_pll_8960_init(struct platform_device *pdev);
|
||||
static inline int msm_hdmi_pll_8960_init(struct platform_device *pdev)
|
||||
{
|
||||
return -ENODEV;
|
||||
}
|
||||
|
@ -467,9 +467,6 @@ static void msm_preclose(struct drm_device *dev, struct drm_file *file)
|
||||
struct msm_file_private *ctx = file->driver_priv;
|
||||
struct msm_kms *kms = priv->kms;
|
||||
|
||||
if (kms)
|
||||
kms->funcs->preclose(kms, file);
|
||||
|
||||
mutex_lock(&dev->struct_mutex);
|
||||
if (ctx == priv->lastctx)
|
||||
priv->lastctx = NULL;
|
||||
|
Some files were not shown because too many files have changed in this diff.