This patch implements 4 levels of translation tables, since 3 levels
of page tables with 4KB pages cannot support the 40-bit physical address
space described in [1], due to the following issue.
The kernel logical memory map with 4KB + 3 levels
(0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from
544GB to 1024GB in [1]. Specifically, the ARM64 kernel fails to create a
mapping for this region in the map_mem function, since __phys_to_virt for
this region overflows the address space.
If an SoC design follows the document [1], any RAM beyond 32GB is placed
from 544GB upwards. Even a 64GB system is supposed to use the region from
544GB to 576GB for only 32GB of RAM. The natural solution is to enable 4
levels of page tables rather than hacking __virt_to_phys and __phys_to_virt.
However, it is recommended that 4 levels of page tables only be enabled
if the memory map is too sparse or there is about 512GB of RAM.
References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C
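The address-space arithmetic above can be sanity-checked in userspace. The sketch below is not kernel code: it assumes the 3-level, 4KB configuration (linear map base 0xffffffc000000000, i.e. 256GB of coverage) and, for simplicity, treats the 544GB DRAM base from [1] as the physical offset.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* 3-level, 4KB pages: the kernel linear map spans
	 * 0xffffffc000000000..0xffffffffffffffff = 256GB. */
	const uint64_t page_offset = 0xffffffc000000000ULL;
	const uint64_t linear_map  = -page_offset;          /* 256GB */

	/* RAM layout described in [1]: the upper DRAM region runs
	 * from 544GB up to 1024GB. */
	const uint64_t phys_offset = 544ULL << 30;
	const uint64_t ram_end     = 1024ULL << 30;

	/* __phys_to_virt(x) is essentially (x - PHYS_OFFSET) + PAGE_OFFSET,
	 * so the whole span must fit inside the linear map. */
	uint64_t needed = ram_end - phys_offset;             /* 480GB */

	printf("linear map covers %llu GB, RAM span needs %llu GB\n",
	       (unsigned long long)(linear_map >> 30),
	       (unsigned long long)(needed >> 30));
	if (needed > linear_map)
		printf("__phys_to_virt() wraps past the top of the VA space\n");
	return 0;
}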
Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
This patch adds hardware definition and types for 4 levels of
translation tables with 4KB pages.
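For reference, the table geometry these definitions describe follows directly from the 4KB page size: 12 bits of page offset plus 9 bits of index per level. A minimal sketch of that arithmetic (values computed here, not copied from the kernel headers):

#include <stdio.h>

int main(void)
{
	const int page_shift = 12;             /* 4KB pages */
	const int index_bits = page_shift - 3; /* 512 8-byte entries per table */

	/* Shift of the virtual address consumed by each level, lowest
	 * level (PTE) first, then PMD, PUD, PGD. */
	for (int level = 0; level < 4; level++) {
		int shift = page_shift + index_bits * level;
		printf("level %d: shift %d, each entry maps %llu KB\n",
		       level, shift, (1ULL << shift) >> 10);
	}
	/* With 4 levels, 12 + 4 * 9 = 48 bits of VA are resolvable. */
	printf("total resolvable VA bits: %d\n", page_shift + 4 * index_bits);
	return 0;
}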
Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
This patch adds the virtual address space size and the number of
translation table levels to the kernel configuration. This facilitates
the introduction of different MMU options, such as 4KB + 4 levels,
16KB + 4 levels and 64KB + 3 levels.
The idea is based on the discussion with Catalin Marinas:
http://www.spinics.net/linux/lists/arm-kernel/msg319552.html
Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
The early_ioremap_init() function already handles fixmap pte
initialisation, so upgrade this to cover all of pud/pmd/pte and remove
one page from swapper_pg_dir.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
ZONE_DMA is created to allow 32-bit only devices to access memory in the
absence of an IOMMU. On systems where the memory starts above 4GB, it is
expected that some devices have a DMA offset hardwired to be able to
access the bottom of the memory. Linux currently supports DT bindings
for the DMA offsets but they are not (easily) available early during
boot.
This patch tries to guess a DMA offset and assumes that ZONE_DMA
corresponds to the 32-bit mask above the start of DRAM.
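A minimal userspace sketch of the guess described above, assuming the limit is simply 4GB of address space above the DRAM start, clamped to the end of memory (the helper name and example addresses are illustrative, not the kernel's):

#include <stdio.h>
#include <stdint.h>

/* Guess the ZONE_DMA limit: the 32-bit mask applied above the start of
 * DRAM, clamped to the end of DRAM. */
static uint64_t guess_zone_dma_limit(uint64_t dram_start, uint64_t dram_end)
{
	uint64_t limit = dram_start + (1ULL << 32);

	return limit < dram_end ? limit : dram_end;
}

int main(void)
{
	/* Example: memory starting above 4GB, here at 512GB, 16GB of RAM. */
	uint64_t start = 512ULL << 30, end = (512ULL + 16) << 30;

	printf("ZONE_DMA would cover 0x%llx..0x%llx\n",
	       (unsigned long long)start,
	       (unsigned long long)guess_zone_dma_limit(start, end));
	return 0;
}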
Fixes: 2d5a5612bc (arm64: Limit the CMA buffer to 32-bit if ZONE_DMA)
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Mark Salter <msalter@redhat.com>
Tested-by: Mark Salter <msalter@redhat.com>
Tested-by: Anup Patel <anup.patel@linaro.org>
This patch adds bch8 ecc software fallback which is mostly used by
omap3s because they lack hardware elm support.
Fixes: 0611c41934 (ARM: OMAP2+: gpmc: update gpmc_hwecc_bch_capable() for new platforms and ECC schemes)
Cc: <stable@vger.kernel.org> # 3.15.x+
Signed-off-by: Christoph Fritz <chf.fritz@googlemail.com>
Reviewed-by: Pekon Gupta <pekon@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
A reference to ARCH_HAS_OPP was added in commit 333d17e56 (arm64: add
ARCH_HAS_OPP to allow enabling OPP library) however this symbol is no
longer needed after commit 049d595a4d (PM / OPP: Make OPP invisible
to users in Kconfig).
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In the recent commit b50a6c584b "Clear MMCR2 when enabling PMU", I
screwed up the handling of MMCR2 for tasks using EBB.
We must make sure we set MMCR2 *before* ebb_switch_in(), otherwise we
overwrite the value of MMCR2 that userspace may have written. That
potentially breaks a task that uses EBB and manually uses MMCR2 for
event freezing.
Fixes: b50a6c584b ("powerpc/perf: Clear MMCR2 when enabling PMU")
Cc: stable@vger.kernel.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Commit 554086d ("x86_32, entry: Do syscall exit work on badsys
(CVE-2014-4508)") introduced a regression in the x86_32 syscall entry
code, resulting in syscall() not returning proper errors for undefined
syscalls on CPUs supporting the sysenter feature.
The following code:
> int result = syscall(666);
> printf("result=%d errno=%d error=%s\n", result, errno, strerror(errno));
results in:
> result=666 errno=0 error=Success
Obviously, the syscall return value is the called syscall number, but it
should have been an ENOSYS error. When run under ptrace it behaves
correctly, which makes it hard to debug in the wild:
> result=-1 errno=38 error=Function not implemented
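For convenience, a self-contained version of the reproducer quoted above (syscall number 666 is taken to be undefined, as in the report):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
	/* 666 is not a defined syscall number, so this should fail with
	 * ENOSYS rather than echo the syscall number back. */
	int result = syscall(666);

	printf("result=%d errno=%d error=%s\n",
	       result, errno, strerror(errno));
	return 0;
}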
The %eax register is the return value register. For debugging via ptrace
the syscall entry code stores the complete register context on the
stack. The badsys handlers only store the ENOSYS error code in the
ptrace register set and do not set %eax like a regular syscall handler
would. The old resume_userspace call chain contains code that clobbers
%eax and it restores %eax from the ptrace registers afterwards. The same
goes for the ptrace-enabled call chain. When ptrace is not used, the
syscall return value is the passed-in syscall number from the untouched
%eax register.
Use %eax as the return value register in syscall_badsys and
sysenter_badsys, like a real syscall handler does, and have the caller
push the value onto the stack for ptrace access.
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Link: http://lkml.kernel.org/r/alpine.LNX.2.11.1407221022380.31021@titan.int.lan.stealer.net
Reviewed-and-tested-by: Andy Lutomirski <luto@amacapital.net>
Cc: <stable@vger.kernel.org> # If 554086d is backported
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
memmove may be called from module code (copy_pages() in btrfs), and it may
call memcpy, which may call back into C code, so it needs to use
_GLOBAL_TOC to set up r2 correctly.
This fixes the following error when booting an LE guest:
Vector: 300 (Data Access) at [c000000073f97210]
pc: c000000000015004: enable_kernel_altivec+0x24/0x80
lr: c000000000058fbc: enter_vmx_copy+0x3c/0x60
sp: c000000073f97490
msr: 8000000002009033
dar: d000000001d50170
dsisr: 40000000
current = 0xc0000000734c0000
paca = 0xc00000000fff0000 softe: 0 irq_happened: 0x01
pid = 815, comm = mktemp
enter ? for help
[c000000073f974f0] c000000000058fbc enter_vmx_copy+0x3c/0x60
[c000000073f97510] c000000000057d34 memcpy_power7+0x274/0x840
[c000000073f97610] d000000001c3179c copy_pages+0xfc/0x110 [btrfs]
[c000000073f97660] d000000001c3c248 memcpy_extent_buffer+0xe8/0x160 [btrfs]
[c000000073f97700] d000000001be4be8 setup_items_for_insert+0x208/0x4a0 [btrfs]
[c000000073f97820] d000000001be50b4 btrfs_insert_empty_items+0xf4/0x140 [btrfs]
[c000000073f97890] d000000001bfed30 insert_with_overflow+0x70/0x180 [btrfs]
[c000000073f97900] d000000001bff174 btrfs_insert_dir_item+0x114/0x2f0 [btrfs]
[c000000073f979a0] d000000001c1f92c btrfs_add_link+0x10c/0x370 [btrfs]
[c000000073f97a40] d000000001c20e94 btrfs_create+0x204/0x270 [btrfs]
[c000000073f97b00] c00000000026d438 vfs_create+0x178/0x210
[c000000073f97b50] c000000000270a70 do_last+0x9f0/0xe90
[c000000073f97c20] c000000000271010 path_openat+0x100/0x810
[c000000073f97ce0] c000000000272ea8 do_filp_open+0x58/0xd0
[c000000073f97dc0] c00000000025ade8 do_sys_open+0x1b8/0x300
[c000000073f97e30] c00000000000a008 syscall_exit+0x0/0x7c
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Commit 75b57ecf9 refactored device tree nodes to use kobjects such that they
can be exposed via sysfs. A follow-up commit, 0829f6d1f, furthered this rework
by moving the kobject initialization logic out of of_node_add into its own
of_node_init function. The initial commit removed the existing kref_init calls
in the pseries dlpar code on the assumption that kobject initialization would
occur in of_node_add. The second commit had the side effect of triggering a
BUG_ON during DLPAR, migration and suspend/resume operations as a result of
dynamically added nodes being uninitialized.
This patch fixes this by adding of_node_init calls in place of the previously
removed kref_init calls.
Fixes: 0829f6d1f6 ("of: device_node kobject lifecycle fixes")
Cc: stable@vger.kernel.org
Signed-off-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
Acked-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
We now support TASK_SIZE of 16TB, hence the array should be 8.
Fixes the below crash:
Unable to handle kernel paging request for data at address 0x000100bd
Faulting instruction address: 0xc00000000004f914
cpu 0x13: Vector: 300 (Data Access) at [c000000fea75fa90]
pc: c00000000004f914: .sys_subpage_prot+0x2d4/0x5c0
lr: c00000000004fb5c: .sys_subpage_prot+0x51c/0x5c0
sp: c000000fea75fd10
msr: 9000000000009032
dar: 100bd
dsisr: 40000000
current = 0xc000000fea6ae490
paca = 0xc00000000fb8ab00 softe: 0 irq_happened: 0x00
pid = 8237, comm = a.out
enter ? for help
[c000000fea75fe30] c00000000000a164 syscall_exit+0x0/0x98
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This fixes some bugs in emulate_step(). First, the setting of the carry
bit for the arithmetic right-shift instructions was not correct on 64-bit
machines because we were masking with a mask of type int rather than
unsigned long. Secondly, the sld (shift left doubleword) instruction was
using the wrong instruction field for the register containing the shift
count.
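The first bug is easy to picture outside the emulator: a mask computed in int-width arithmetic can never cover bits 32..63, so for the shift counts of 32 and above that srad can legitimately use, the carry is derived from the wrong bits. A minimal illustration (not the emulator code itself):

#include <stdio.h>

int main(void)
{
	/* srad needs XER[CA] set when a negative value loses non-zero bits,
	 * i.e. when any of the low 'sh' bits of the source are set. */
	unsigned long value = 0x0000000100000000UL;   /* only bit 32 set */
	int sh = 40;                                  /* legal srad shift count */

	unsigned int  mask32 = 0xffffffffu;     /* widest mask an int-typed
	                                           expression can ever hold */
	unsigned long mask64 = (1UL << sh) - 1; /* bits 0..sh-1, here 0..39 */

	/* The 32-bit mask can never see bits 32..39, so the carry is lost. */
	printf("carry with int mask: %d, with unsigned long mask: %d\n",
	       (value & mask32) != 0, (value & mask64) != 0);
	return 0;
}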
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
These processors do not currently support doorbell IPIs, so remove them
from the feature list if we are at DD 1.xx for the 0x004d part.
This fixes a regression caused by d4e58e5928 (powerpc/powernv: Enable
POWER8 doorbell IPIs). With that patch the kernel would hang at boot
when calling smp_call_function_many, as the doorbell would not be
received by the target CPUs:
.smp_call_function_many+0x2bc/0x3c0 (unreliable)
.on_each_cpu_mask+0x30/0x100
.cpuidle_register_driver+0x158/0x1a0
.cpuidle_register+0x2c/0x110
.powernv_processor_idle_init+0x23c/0x2c0
.do_one_initcall+0xd4/0x260
.kernel_init_freeable+0x25c/0x33c
.kernel_init+0x1c/0x120
.ret_from_kernel_thread+0x58/0x7c
Fixes: d4e58e5928 (powerpc/powernv: Enable POWER8 doorbell IPIs)
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Pull sparc fix from David Miller:
"Need to hook up the new renameat2 system call"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
sparc: Hook up renameat2 syscall.
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Paolo Bonzini:
"These are mostly PPC changes for 3.16-new things. However, there is
an x86 change too and it is a regression from 3.14. As it only
affects nested virtualization and there were other changes in this
area in 3.16, I am not nominating it for 3.15-stable"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: x86: Check for nested events if there is an injectable interrupt
KVM: PPC: RTAS: Do byte swaps explicitly
KVM: PPC: Book3S PR: Fix ABIv2 on LE
KVM: PPC: Assembly functions exported to modules need _GLOBAL_TOC()
PPC: Add _GLOBAL_TOC for 32bit
KVM: PPC: BOOK3S: HV: Use base page size when comparing against slb value
KVM: PPC: Book3E: Unlock mmu_lock when setting caching atttribute
Pull s390 fixes from Martin Schwidefsky:
"A couple of last minute bug fixes for 3.16, including a fix for ptrace
to close a hole which allowed a user space program to write to the
kernel address space"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390: fix restore of invalid floating-point-control
s390/zcrypt: improve device probing for zcrypt adapter cards
s390/ptrace: fix PSW mask check
s390/MSI: Use standard mask and unmask funtions
s390/3270: correct size detection with the read-partition command
s390: require mvcos facility, not tod clock steering facility
BorisO reports that misc_register() fails often on xen. The current code
unregisters the CPU hotplug notifier in that case. If then a CPU is
offlined and onlined back again, we end up with a second timer running
on that CPU, leading to soft lockups and system hangs.
So let's leave the hotcpu notifier always registered - even if
mce_device_create() failed for some cores - and never unregister it, so
that we can deal with the timer handling accordingly.
Reported-and-Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: http://lkml.kernel.org/r/1403274493-1371-1-git-send-email-boris.ostrovsky@oracle.com
Signed-off-by: Borislav Petkov <bp@suse.de>
Haswell and newer Intel CPUs have support for RTM, and in that case DR6.RTM is
not fixed to 1 and DR7.RTM is not fixed to zero. That is not the case in the
current KVM implementation. This bug is apparent only if the MOV-DR instruction
is emulated or the host also debugs the guest.
This patch is a partial fix which enables DR6.RTM and DR7.RTM to be cleared and
set respectively. It also sets DR6.RTM upon every debug exception. Obviously,
it is not a complete fix, as debugging of RTM is still unsupported.
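For orientation, the bits involved are sketched below (bit positions per the Intel SDM; the macro names are illustrative, not KVM's, and the 0xffff0ff0 value is the usual DR6 read-as-one mask, taken here as an assumption):

#include <stdio.h>

#define DR6_RTM  (1u << 16)  /* DR6.RTM: reads as 1 unless an RTM #DB occurred */
#define DR7_RTM  (1u << 11)  /* DR7.RTM: enables debug exceptions inside RTM */

int main(void)
{
	/* Without RTM, DR6[16] is fixed to 1 and DR7[11] is fixed to 0.
	 * With RTM, the guest must be able to clear DR6.RTM and set DR7.RTM. */
	unsigned int dr6_fixed_no_rtm = 0xffff0ff0u;
	unsigned int dr6_fixed_rtm    = dr6_fixed_no_rtm & ~DR6_RTM;

	printf("DR6 fixed-1 bits: no RTM 0x%08x, RTM 0x%08x, DR7.RTM mask 0x%08x\n",
	       dr6_fixed_no_rtm, dr6_fixed_rtm, DR7_RTM);
	return 0;
}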
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
free_nested needs the loaded_vmcs to be valid if it is a vmcs02, in
order to detach it from the shadow vmcs. However, this is not
available anymore after commit 26a865f4aa (KVM: VMX: fix use after
free of vmx->loaded_vmcs, 2014-01-03).
Revert that patch, and fix its problem by forcing a vmcs01 as the
active VMCS before freeing all the nested VMX state.
Reported-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If the RFLAGS.RF is set, then no #DB should occur on instruction breakpoints.
However, the KVM emulator injects #DB regardless of RFLAGS.RF. This patch fixes
this behavior. KVM, however, still appears not to update RFLAGS.RF correctly,
regardless of this patch.
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
RFLAGS.RF was cleared in several functions (e.g., syscall) in the x86 emulator.
Now that we clear it before the execution of an instruction in the emulator, we
can remove the specific cleanup of RFLAGS.RF.
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When an instruction is emulated RFLAGS.RF should be cleared. KVM previously did
not do so. This patch clears RFLAGS.RF after interception is done. If a fault
occurs during the instruction, RFLAGS.RF will be set by a previous patch. This
patch does not handle the case of traps/interrupts during rep-strings. Traps
are only expected to occur on debug watchpoints, and those are anyhow not
handled by the emulator.
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
RFLAGS.RF is always zero after popf. Therefore, popf should not update RF;
emulating popf, just like any other instruction, should clear RFLAGS.RF.
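A minimal illustration of the required semantics (the flag bit position is per the x86 EFLAGS layout; this is not the emulator's em_popf itself):

#include <stdio.h>

#define X86_EFLAGS_RF (1UL << 16)   /* Resume Flag */

/* POPF: whatever RF value sits in the popped image, the architectural
 * result always has RF clear, like any other emulated instruction. */
static unsigned long eflags_after_popf(unsigned long popped)
{
	return popped & ~X86_EFLAGS_RF;
}

int main(void)
{
	unsigned long image = 0x00010202UL;   /* RF set in the stack image */

	printf("popped 0x%lx -> eflags 0x%lx\n", image, eflags_after_popf(image));
	return 0;
}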
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When skipping an emulated instruction, rflags.rf should be cleared as it would
be on real x86 CPU.
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'kvm-s390-20140715' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-next
This series enables the "KVM_(S|G)ET_MP_STATE" ioctls on s390 to make
the cpu state settable by user space.
This is necessary to avoid races in s390 SIGP/reset handling which
happen because some SIGPs are handled in QEMU, while others are
handled in the kernel. Together with the busy conditions as return
value of SIGP races happen especially in areas like starting and
stopping of CPUs. (For example, there is a program 'cpuplugd', that
runs on several s390 distros which does automatic onlining and
offlining on cpus.)
As soon as the MPSTATE interface is used, user space takes complete
control of the cpu states. Otherwise the kernel will use the old way.
Therefore, the new kernel continues to work fine with old QEMUs.
We should advertise all capabilities, including those that can
be enabled.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
We can get rid of the tasklet used for waking up a VCPU in the hrtimer
code and wake up the VCPU directly.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Let's move the vcpu wakeup code to a central point.
We should set the vcpu->preempted flag only if the target is actually sleeping
and before the real wakeup happens. Otherwise the preempted flag might be set
when it is not necessary, which may result in immediate reschedules after
schedule() in some scenarios.
The wakeup code doesn't require the local_int.lock to be held.
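A simplified sketch of the ordering being described, with stand-in types and helpers (not the s390 code itself):

#include <stdbool.h>
#include <stdio.h>

struct vcpu_sketch {
	bool sleeping;    /* stand-in for "waiting on its waitqueue" */
	bool preempted;
	void (*do_wakeup)(struct vcpu_sketch *v);
};

/* Central wakeup helper: mark the VCPU preempted only if it is actually
 * sleeping, and do so before the real wakeup, so a running VCPU never ends
 * up with a stale preempted flag (which could cause needless reschedules). */
static void vcpu_wakeup(struct vcpu_sketch *v)
{
	if (v->sleeping) {
		v->preempted = true;
		v->do_wakeup(v);
	}
}

static void fake_wakeup(struct vcpu_sketch *v)
{
	v->sleeping = false;
}

int main(void)
{
	struct vcpu_sketch v = { .sleeping = true, .do_wakeup = fake_wakeup };

	vcpu_wakeup(&v);
	printf("preempted=%d sleeping=%d\n", v.preempted, v.sleeping);
	return 0;
}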
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The start_stop_lock is no longer acquired when in atomic context, therefore we
can convert it into an ordinary spin_lock.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
local_int.lock is not used in a bottom-half handler anymore, therefore we can
turn it into an ordinary spin_lock at all occurrences.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
This patch cleans up the code in handle_wait by reusing the common code
function kvm_vcpu_block.
signal_pending(), kvm_cpu_has_pending_timer() and kvm_arch_vcpu_runnable() are
sufficient for checking whether we need to wake up that VCPU. kvm_vcpu_block
uses these functions, so no checks are lost.
The flag "timer_due" can be removed - kvm_cpu_has_pending_timer() tests whether
the timer is pending, thus the vcpu is correctly woken up.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
SMBIOS is important for server hardware vendors. It implements a spec for
providing descriptive information about the platform: things like serial
numbers, physical layout of the ports, build configuration data, and the like.
This has been tested by dmidecode and lshw tools.
Signed-off-by: Yi Li <yi.li@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If do_ops() fails we have to release current->mm->mmap_sem
otherwise the failing task will never terminate.
Reported-by: Toralf Förster <toralf.foerster@gmx.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
Trinity discovered an execution path such that a task
can unmap its own stub page.
Reported-by: Toralf Förster <toralf.foerster@gmx.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
This reverts commit 0974a9cadc.
The real fix for that issue is to release current->mm->mmap_sem in
fix_range_common().
Signed-off-by: Richard Weinberger <richard@nod.at>
Pull locking fixes from Thomas Gleixner:
"The locking department delivers:
- A rather large and intrusive bundle of fixes to address serious
performance regressions introduced by the new rwsem / mcs
technology. Simpler solutions have been discussed, but they would
have been ugly bandaids with more risk than doing the right thing.
- Make the rwsem spin on owner technology opt-in for architectures
and enable it only on the known to work ones.
- A few fixes to the lockdep userspace library"
* 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
locking/rwsem: Add CONFIG_RWSEM_SPIN_ON_OWNER
locking/mutex: Disable optimistic spinning on some architectures
locking/rwsem: Reduce the size of struct rw_semaphore
locking/rwsem: Rename 'activity' to 'count'
locking/spinlocks/mcs: Micro-optimize osq_unlock()
locking/spinlocks/mcs: Introduce and use init macro and function for osq locks
locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead
locking/spinlocks/mcs: Rename optimistic_spin_queue() to optimistic_spin_node()
locking/rwsem: Allow conservative optimistic spinning when readers have lock
tools/liblockdep: Account for bitfield changes in lockdeps lock_acquire
tools/liblockdep: Remove debug print left over from development
tools/liblockdep: Fix comparison of a boolean value with a value of 2
Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
"A smaller set of fixes this week, and all regression fixes:
- a handful of issues fixed on at91 with common clock conversion
- a set of fixes for Marvell mvebu (SMP, coherency, PM)
- a clock fix for i.MX6Q.
- ... and a SMP/hotplug fix for Exynos"
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: EXYNOS: Fix core ID used by platsmp and hotplug code
ARM: at91/dt: add missing clocks property to pwm node in sam9x5.dtsi
ARM: at91/dt: fix usb0 clocks definition in sam9n12 dtsi
ARM: at91: at91sam9x5: correct typo error for ohci clock
ARM: clk-imx6q: parent lvds_sel input from upstream clock gates
ARM: mvebu: Fix coherency bus notifiers by using separate notifiers
ARM: mvebu: Fix the operand list in the inline asm of armada_370_xp_pmsu_idle_enter
ARM: mvebu: fix SMP boot for Armada 38x and Armada 375 Z1 in big endian
Pull x86 fixes from Peter Anvin:
"A couple of key fixes and a few less critical ones. The main ones
are:
- add a .bss section to the PE/COFF headers when building with EFI
stub
- invoke the correct paravirt magic when building the espfix page
tables
Unfortunately both of these areas also have at least one additional
fix each still in thie pipeline, but which are not yet ready to push"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86: Remove unused variable "polling"
x86/espfix/xen: Fix allocation of pages for paravirt page tables
x86/efi: Include a .bss section within the PE/COFF headers
efi: fdt: Do not report an error during boot if UEFI is not available
efi/arm64: efistub: remove local copy of linux_banner
When CPU topology is specified in device tree, cpu_logical_map() does
not return core ID anymore, but rather full MPIDR value. This breaks
existing calculation of PMU register offsets on Exynos SoCs.
This patch fixes the problem by adjusting the code to use only core ID
bits of the value returned by cpu_logical_map() to allow CPU topology to
be specified in device tree on Exynos SoCs.
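The adjustment amounts to masking out everything but affinity level 0 of the MPIDR-style value returned by cpu_logical_map(); a sketch of that arithmetic (assuming Aff0 occupies bits [7:0], as in the ARM MPIDR layout):

#include <stdio.h>

/* Extract the core ID (affinity level 0) from a full MPIDR-style value. */
static unsigned int core_id_from_mpidr(unsigned int mpidr)
{
	return mpidr & 0xff;          /* Aff0: bits [7:0] */
}

int main(void)
{
	/* With CPU topology in DT, cpu_logical_map() may return e.g. 0x101
	 * (cluster 1, core 1) rather than a plain core number. */
	printf("mpidr 0x101 -> core %u\n", core_id_from_mpidr(0x101));
	return 0;
}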
Signed-off-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
Merge tag 'imx-fixes-3.16-2' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into fixes
Merge "ARM: imx: fixes for 3.16, 2nd take" from Shawn Guo:
The i.MX fixes for 3.16, 2nd take:
It fixes a hard machine hang regression for boards where only pcie is
active but no sata, as the latest imx6-pcie driver is no longer enabling
the upstream clock directly but only lvds clk out.
* tag 'imx-fixes-3.16-2' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
ARM: clk-imx6q: parent lvds_sel input from upstream clock gates
Signed-off-by: Olof Johansson <olof@lixom.net>
Merge tag 'mvebu-fixes-3.16-3' of git://git.infradead.org/linux-mvebu into fixes
Merge "mvebu fixes for v3.16 (round 3)" from Jason Cooper:
- Fix SMP boot on 38x/375 in big endian
- Fix operand list for pmsu on 370/XP
- Fix coherency bus notifiers
* tag 'mvebu-fixes-3.16-3' of git://git.infradead.org/linux-mvebu:
ARM: mvebu: Fix coherency bus notifiers by using separate notifiers
ARM: mvebu: Fix the operand list in the inline asm of armada_370_xp_pmsu_idle_enter
ARM: mvebu: fix SMP boot for Armada 38x and Armada 375 Z1 in big endian
Signed-off-by: Olof Johansson <olof@lixom.net>
Remove check of obsolete variable function_trace_stop as requested by
Steven Rostedt.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
arm64 was broken anyway, as it had an ifdef testing
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST, which is only set if
the arch supports the code (which it obviously did not), and
it was testing a non-existent ftrace_trace_stop instead of
function_trace_stop.
Link: http://lkml.kernel.org/r/20140627124421.GP26276@arm.com
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
Link: http://lkml.kernel.org/r/3144266.ziutPk5CNZ@vapier
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
Acked-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
Link: http://lkml.kernel.org/r/53C8D82B.4030204@monstr.eu
Tested-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
Cc: Ralf Baechle <ralf@linux-mips.org>
Tested-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
Link: http://lkml.kernel.org/r/53B08317.7010501@gmx.de
Cc: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
[ Please test this on your arch ]
Cc: Matt Fleming <matt@console-pimps.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
Link: http://lkml.kernel.org/r/20140703.211820.1674895115102216877.davem@davemloft.net
Cc: David S. Miller <davem@davemloft.net>
OKed-to-go-through-tracing-tree-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
Cc: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Zhigang Lu<zlu@tilera.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
Link: http://lkml.kernel.org/r/53C54D32.6000000@zytor.com
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace_stop() is going away as it disables parts of function tracing
that affect users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.
Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace_stop() is going away as it disables parts of function tracing
that affect users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.
Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.
Cc: Anton Blanchard <anton@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace_stop() is going away as it disables parts of function tracing
that affect users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.
Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.
Link: http://lkml.kernel.org/r/53B08317.7010501@gmx.de
Cc: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace_stop() is going away as it disables parts of function tracing
that affect users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.
Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.
Cc: Ralf Baechle <ralf@linux-mips.org>
Tested-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace_stop() is going away as it disables parts of function tracing
that affect users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.
Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.
Link: http://lkml.kernel.org/r/53C8D874.9090601@monstr.eu
Tested-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Currently reading /proc/cpuinfo will result in information being read
out of the MIDR_EL1 of the current CPU, and the information is not
associated with any particular logical CPU number.
This is problematic for systems with heterogeneous CPUs (i.e.
big.LITTLE) where MIDR fields will vary across CPUs, and the output will
differ depending on the executing CPU.
This patch reorganises the code responsible for /proc/cpuinfo to print
information per-cpu. In the process, we perform several cleanups:
* Property names are coerced to lower-case (to match "processor" as per
glibc's expectations).
* Property names are simplified and made to match the MIDR field names.
* Revision is changed to hex as with every other field.
* The meaningless Architecture property is removed.
* The ripe-for-abuse Machine field is removed.
The features field (a human-readable representation of the hwcaps)
remains printed once, as this is expected to remain in use as the
globally supported CPU features. To enable the possibility of adding
per-cpu HW feature information later, this is printed before any
CPU-specific information.
Comments are added to guide userspace developers in the right direction
(using the hwcaps provided in auxval). Hopefully where userspace
applications parse /proc/cpuinfo rather than using the readily available
hwcaps, they limit themselves to reading said first line.
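For userspace, the hwcaps mentioned above are available without parsing /proc/cpuinfo at all; a minimal example using the aux vector (the HWCAP_FP bit value is the arm64 one, stated here as an assumption rather than pulled from the uapi header):

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_FP
#define HWCAP_FP (1 << 0)   /* arm64 hwcap bit for FP, normally from <asm/hwcap.h> */
#endif

int main(void)
{
	unsigned long hwcaps = getauxval(AT_HWCAP);

	printf("AT_HWCAP = 0x%lx, FP %ssupported\n",
	       hwcaps, (hwcaps & HWCAP_FP) ? "" : "not ");
	return 0;
}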
If CPU features differ from each other, the previously installed sanity
checks will give us some advance notice with warnings and
TAINT_CPU_OUT_OF_SPEC. If we are lucky, we will never see such systems.
Rework will be required in many places to support such systems anyway.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marcus Shawcroft <marcus.shawcroft@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: remove machine_name as it is no longer reported]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Unexpected variation in certain system register values across CPUs is an
indicator of potential problems with a system. The kernel expects CPUs
to be mostly identical in terms of supported features, even in systems
with heterogeneous CPUs, with uniform instruction set support being
critical for the correct operation of userspace.
To help detect issues early where hardware violates the expectations of
the kernel, this patch adds simple runtime sanity checks on important ID
registers in the bring up path of each CPU.
Where CPUs are fundamentally mismatched, set TAINT_CPU_OUT_OF_SPEC.
Given that the kernel assumes CPUs are identical feature wise, let's not
pretend that we expect such configurations to work. Supporting such
configurations would require massive rework, and hopefully they will
never exist.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In big.LITTLE systems, the I-cache policy may differ across CPUs, and
thus we must always meet the most stringent maintenance requirements of
any I-cache in the system when performing maintenance to ensure
correctness. Unfortunately this requirement is not met as we always look
at the current CPU's cache type register to determine the maintenance
requirements.
This patch causes the I-cache policy of all CPUs to be taken into
account for icache_is_aliasing and icache_is_aivivt. If any I-cache in
the system is aliasing or AIVIVT, the respective function will return
true. At boot each CPU may set flags to identify that at least one
I-cache in the system is aliasing and/or AIVIVT.
The now unused and potentially misleading icache_policy function is
removed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Several kernel subsystems need to know details about CPU system register
values, sometimes for CPUs other than that they are executing on. Rather
than hard-coding system register accesses and cross-calls for these
cases, this patch adds logic to record various system register values at
boot-time. This may be used for feature reporting, firmware bug
detection, etc.
Separate hooks are added for the boot and hotplug paths to enable
one-time initialisation and cold/warm boot value mismatch detection in
later patches.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The MIDR_EL1 register is composed of a number of bitfields, and uses of
the fields has so far involved open-coding of the shifts and masks
required.
This patch adds shifts and masks for each of the MIDR_EL1 subfields, and
also provides accessors built atop of these. Existing uses within
cputype.h are updated to use these accessors.
The read_cpuid_part_number macro is modified to return the extracted
bitfield rather than returning the value in-place with all other fields
(including revision) masked out, to better match the other accessors.
As the value is only used in comparison with the *_CPU_PART_* macros
which are similarly updated, and these values are never exposed to
userspace, this change should not affect any functionality.
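For reference, the MIDR_EL1 field layout the accessors encode (positions per the ARM ARM; the macro names below are illustrative, not necessarily those used in cputype.h):

#include <stdio.h>

#define MIDR_REVISION_SHIFT     0
#define MIDR_REVISION_MASK      (0xf << MIDR_REVISION_SHIFT)
#define MIDR_PARTNUM_SHIFT      4
#define MIDR_PARTNUM_MASK       (0xfff << MIDR_PARTNUM_SHIFT)
#define MIDR_ARCHITECTURE_SHIFT 16
#define MIDR_ARCHITECTURE_MASK  (0xf << MIDR_ARCHITECTURE_SHIFT)
#define MIDR_VARIANT_SHIFT      20
#define MIDR_VARIANT_MASK       (0xf << MIDR_VARIANT_SHIFT)
#define MIDR_IMPLEMENTOR_SHIFT  24
#define MIDR_IMPLEMENTOR_MASK   (0xffu << MIDR_IMPLEMENTOR_SHIFT)

/* Accessor in the style described: return the extracted bitfield. */
#define MIDR_PARTNUM(midr) \
	(((midr) & MIDR_PARTNUM_MASK) >> MIDR_PARTNUM_SHIFT)

int main(void)
{
	unsigned int midr = 0x410fd030u;   /* e.g. implementer 0x41, part 0xd03 */

	printf("implementer 0x%x, part 0x%x, rev %u\n",
	       (midr & MIDR_IMPLEMENTOR_MASK) >> MIDR_IMPLEMENTOR_SHIFT,
	       (unsigned int)MIDR_PARTNUM(midr),
	       midr & MIDR_REVISION_MASK);
	return 0;
}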
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Suspend init function must be marked as __init, since it is not needed
after the kernel has booted. This patch moves the cpu_suspend_init()
function to the __init section.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
PSCI init functions must be marked as __init so that they are freed
by the kernel upon boot.
This patch marks the PSCI init functions as such since they need not
be persistent in the kernel address space after the kernel has booted.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
PSCI CPU operations have to be enabled on UP kernels so that calls
such as cpu_suspend can be made functional on UP too.
This patch reworks the PSCI CPU operations so that they can be
enabled on UP systems.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The pwm driver requires a clocks property referencing the pwm peripheral
clk.
Signed-off-by: Boris BREZILLON <boris.brezillon@free-electrons.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Correct the typo error for the second "uhphs_clk".
Signed-off-by: Bo Shen <voice.shen@atmel.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
commit 431a84b1a4
("ARM: 8034/1: Disable preemption in iwmmxt_task_enable()")
introduced macros {inc,dec}_preempt_count to iwmmxt_task_enable
to make it run with preemption disabled.
Unfortunately, other functions in iwmmxt.S also use concan_{save,dump,load}
sections located in iwmmxt_task_enable() to deal with iWMMXt coprocessor.
This causes an unbalanced preempt_count due to excessive dec_preempt_count
and destroyed return addresses in callers of concan_ labels due to a register
collision:
Linux version 3.16.0-rc3-00062-gd92a333-dirty (jef@armhf) (gcc version 4.8.3 (Debian 4.8.3-4) ) #5 PREEMPT Thu Jul 3 19:46:39 CEST 2014
CPU: ARMv7 Processor [560f5815] revision 5 (ARMv7), cr=10c5387d
CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
Machine model: SolidRun CuBox
...
PJ4 iWMMXt v2 coprocessor enabled.
...
Unable to handle kernel paging request at virtual address fffffffe
pgd = bb25c000
[fffffffe] *pgd=3bfde821, *pte=00000000, *ppte=00000000
Internal error: Oops: 80000007 [#1] PREEMPT ARM
Modules linked in:
CPU: 0 PID: 62 Comm: startpar Not tainted 3.16.0-rc3-00062-gd92a333-dirty #5
task: bb230b80 ti: bb256000 task.ti: bb256000
PC is at 0xfffffffe
LR is at iwmmxt_task_copy+0x44/0x4c
pc : [<fffffffe>] lr : [<800130ac>] psr: 40000033
sp : bb257de8 ip : 00000013 fp : bb257ea4
r10: bb256000 r9 : fffffdfe r8 : 76e898e6
r7 : bb257ec8 r6 : bb256000 r5 : 7ea12760 r4 : 000000a0
r3 : ffffffff r2 : 00000003 r1 : bb257df8 r0 : 00000000
Flags: nZcv IRQs on FIQs on Mode SVC_32 ISA Thumb Segment user
Control: 10c5387d Table: 3b25c019 DAC: 00000015
Process startpar (pid: 62, stack limit = 0xbb256248)
This patch fixes the issue by moving concan_{save,dump,load} into separate
code sections and making iwmmxt_task_enable() call them in the same way the
other functions use concan_ symbols. The test for valid ownership is moved
to concan_save and is safe for the other user of it, iwmmxt_task_disable().
The register collision is also resolved by moving concan_ symbols as
{inc,dec}_preempt_count are now local to iwmmxt_task_enable().
Fixes: 431a84b1a4 ("ARM: 8034/1: Disable preemption in iwmmxt_task_enable()")
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Writing to the FPCR is commonly implemented as a self-synchronising
operation in the CPU, so avoid writing to the register when the saved
value matches that in the hardware already.
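A sketch of the idea in C, with hypothetical read_fpcr()/write_fpcr() helpers standing in for the actual register accessors:

/* Hypothetical register accessors standing in for the real asm. */
static unsigned int current_fpcr;
static unsigned int read_fpcr(void)            { return current_fpcr; }
static void         write_fpcr(unsigned int v) { current_fpcr = v; }

/* Only write FPCR when the saved value differs from what the hardware
 * already holds, since the write is self-synchronising and thus costly. */
static void restore_fpcr(unsigned int saved)
{
	if (read_fpcr() != saved)
		write_fpcr(saved);
}

int main(void)
{
	restore_fpcr(0);     /* no-op if the register already holds 0 */
	return 0;
}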
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The i.MX6 reference manual doesn't make a clear distinction
between the fixed clock divider and the enable gate for the
pcie and sata reference clocks. This led to the lvds mux
inputs in the imx6q clk driver being parented from the
ref clock (which is the divider) instead of the actual gate,
which in turn prevents the upstream clock from actually being
enabled when lvds clk out is active.
This fixes a hard machine hang regression in kernel 3.16 for
boards where only pcie is active but no sata, as with this
kernel version the imx6-pcie driver no longer enables
the upstream clock directly but only lvds clk out.
Reported-by: Arne Ruhnau <arne.ruhnau@target-sg.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Arne Ruhnau <arne.ruhnau@target-sg.com>
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
When setting up the CMA region, we must ensure that the old section
mappings are flushed from the TLB before replacing them with page
tables, otherwise we can suffer from mismatched aliases if the CPU
speculatively prefetches from these mappings at an inopportune time.
A mismatched alias can occur when the TLB contains a section mapping,
but a subsequent prefetch causes it to load a page table mapping,
resulting in the possibility of the TLB containing two matching
mappings for the same virtual address region.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Andy pointed out that binutils generates additional sections in the vdso
image (e.g. section string table) which, if our .text section gets big
enough, could cross a page boundary and end up screwing up the location
where the kernel expects to put the data page.
This patch solves the issue in the same manner as x86_32, by moving the
data page before the code pages.
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
_install_special_mapping replaces install_special_mapping and removes
the need to detect special VMA in arch_vma_name.
This patch moves the vdso and compat vectors page code over to the new
API.
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The VDSO datapage doesn't need to be executable (no code there) or
CoW-able (the kernel writes the page, so a private copy is totally
useless).
This patch moves the datapage into its own VMA, identified as "[vvar]"
in /proc/<pid>/maps.
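The new VMA can be observed from userspace; a small example that scans /proc/self/maps for the "[vvar]" (and "[vdso]") entries:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[512];
	FILE *maps = fopen("/proc/self/maps", "r");

	if (!maps)
		return 1;
	while (fgets(line, sizeof(line), maps))
		if (strstr(line, "[vvar]") || strstr(line, "[vdso]"))
			fputs(line, stdout);
	fclose(maps);
	return 0;
}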
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Just keep the asm/page.h definition as this is included in vmlinux.lds.S
as well.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
This patch fixes the following checkpatch complaint by using pr_*
instead of printk.
WARNING: printk() should include KERN_ facility level
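The change amounts to replacing explicit KERN_ levels with the pr_* helpers; a representative before/after (a generic kernel-context example, not a specific hunk from this patch):

#include <linux/printk.h>

static void report_va_layout(unsigned long va_bits)
{
	/* Before: explicit facility level, flagged by checkpatch. */
	printk(KERN_INFO "virtual address bits: %lu\n", va_bits);

	/* After: equivalent message via the pr_* helper. */
	pr_info("virtual address bits: %lu\n", va_bits);
}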
Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ftrace_stop() is going away as it disables parts of function tracing
that affect users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.
Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.
Link: http://lkml.kernel.org/r/53C54D18.3020602@zytor.com
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace_stop() is used to stop function tracing during suspend and resume
which removes a lot of possible debugging opportunities with tracing.
The reason was that some function in the resume path was causing a triple
fault if it were to be traced. The issue I found was that doing something
as simple as calling smp_processor_id() would reboot the box!
When function tracing was first created I didn't have a good way to figure
out what function was having issues, or it looked to be multiple ones. To
fix it, we just created a big hammer approach to the problem which was to
add a flag in the mcount trampoline that could be checked and not call
the traced functions.
Lately I developed better ways to find problem functions and I can bisect
down to see what function is causing the issue. I removed the flag that
stopped tracing and proceeded to find the problem function and it ended
up being restore_processor_state(). This function makes sense as when the
CPU comes back online from a suspend it calls this function to set up
registers, amongst them the GS register, which stores things such as
what CPU the processor is (if you call smp_processor_id() without this
set up properly, it would fault).
By making restore_processor_state() notrace, the system can suspend and
resume without the need of the big hammer tracing to stop.
Link: http://lkml.kernel.org/r/3577662.BSnUZfboWb@vostro.rjw.lan
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The function graph trampoline is called from the function trampoline
and both do a save and restore of registers. The save of registers
done by the function trampoline when only the function graph tracer
is running is a waste of CPU cycles.
As the function graph tracer trampoline in x86 is dependent on
the function trampoline, we can call it directly when a function
is only being traced by the function graph trampoline.
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
This patch fixes the bug reported in https://bugzilla.kernel.org/show_bug.cgi?id=73331.
After the patch http://www.spinics.net/lists/kvm/msg105230.html is applied, there is
some progress and L2 can boot up, however slowly. The original idea for this
vid injection fix is from "Zhang, Yang Z" <yang.z.zhang@intel.com>.
An interrupt delivered by vid should be injected into L1 by L0 if we are currently
in L1, or injected into L2 by L0 through the old injection path if L1 has not set
the External-interrupt exiting bit. The current logic doesn't consider these
cases. This patch fixes it by injecting the vid interrupt into L1 if we are in L1,
or into L2 through the old injection path if L1 doesn't have the
External-interrupt exiting bit set.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* pci/host-generic:
PCI: generic: Fix GPL v2 license string typo
* pci/host-mvebu:
PCI: mvebu: Fix GPL v2 license string typo
* pci/host-rcar:
PCI: rcar: Fix GPL v2 license string typo
* pci/host-tegra:
PCI: tegra: Fix GPL v2 license string typo
* pci/msi:
PCI/MSI: Use irq_get_msi_desc() to simplify code
PCI/MSI: Remove unused list access in __pci_restore_msix_state()
PCI/MSI: Retrieve first MSI IRQ from msi_desc rather than pci_dev
PCI/MSI: Remove unused function msi_remove_pci_irq_vectors()
PCI/MSI: Add msi_setup_entry() to clean up MSI initialization
* pci/misc:
PCI: Configure ASPM when enabling device
x86: don't exclude low BIOS area when allocating address space for non-PCI cards
PCI: Add include guard to include/linux/pci_ids.h
x86, ia64: Move EFI_FB vga_default_device() initialization to pci_vga_fixup()
* pci/resource:
PCI: Tidy resource assignment messages
PCI: Return conventional error values from pci_revert_fw_address()
PCI: Cleanup control flow
PCI: Support BAR sizes up to 128GB
PCI: Keep original resource if we fail to expand it
* pci/virtualization:
powerpc/pci: Remove duplicate logic
PCI: Make resetting secondary bus logic common
Pull perf fixes from Ingo Molnar:
"Tooling fixes and an Intel PMU driver fixlet"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf: Do not allow optimized switch for non-cloned events
perf/x86/intel: ignore CondChgd bit to avoid false NMI handling
perf symbols: Get kernel start address by symbol name
perf tools: Fix segfault in cumulative.callchain report
Commit 30919b0bf3 ("x86: avoid low BIOS area when allocating address
space") moved the test for resource allocations that fall within the first
1MB of address space from the PCI-specific path to a generic path, such
that all resource allocations will avoid this area. However, this breaks
ISA cards which need to allocate a memory region within the first 1MB. An
example is the i82365 PCMCIA controller and derivatives like the Ricoh
RF5C296/396 which map part of the PCMCIA socket memory address space into
the first 1MB of system memory address space. They do not work anymore as
no usable memory region exists due to this change:
Intel ISA PCIC probe: Ricoh RF5C296/396 ISA-to-PCMCIA at port 0x3e0 ofs 0x00, 2 sockets
host opts [0]: none
host opts [1]: none
ISA irqs (scanned) = 3,4,5,9,10 status change on irq 10
pcmcia_socket pcmcia_socket1: pccard: PCMCIA card inserted into slot 1
pcmcia_socket pcmcia_socket0: cs: IO port probe 0xc00-0xcff: excluding 0xcf8-0xcff
pcmcia_socket pcmcia_socket0: cs: IO port probe 0xa00-0xaff: clean.
pcmcia_socket pcmcia_socket0: cs: IO port probe 0x100-0x3ff: excluding 0x170-0x177 0x1f0-0x1f7 0x2f8-0x2ff 0x370-0x37f 0x3c0-0x3e7 0x3f0-0x3ff
pcmcia_socket pcmcia_socket0: cs: memory probe 0x0a0000-0x0affff: excluding 0xa0000-0xaffff
pcmcia_socket pcmcia_socket0: cs: memory probe 0x0b0000-0x0bffff: excluding 0xb0000-0xbffff
pcmcia_socket pcmcia_socket0: cs: memory probe 0x0c0000-0x0cffff: excluding 0xc0000-0xcbfff
pcmcia_socket pcmcia_socket0: cs: memory probe 0x0d0000-0x0dffff: clean.
pcmcia_socket pcmcia_socket0: cs: memory probe 0x0e0000-0x0effff: clean.
pcmcia_socket pcmcia_socket0: cs: memory probe 0x60000000-0x60ffffff: clean.
pcmcia_socket pcmcia_socket0: cs: memory probe 0xa0000000-0xa0ffffff: clean.
pcmcia_socket pcmcia_socket1: cs: IO port probe 0xc00-0xcff: excluding 0xcf8-0xcff
pcmcia_socket pcmcia_socket1: cs: IO port probe 0xa00-0xaff: clean.
pcmcia_socket pcmcia_socket1: cs: IO port probe 0x100-0x3ff: excluding 0x170-0x177 0x1f0-0x1f7 0x2f8-0x2ff 0x370-0x37f 0x3c0-0x3e7 0x3f0-0x3ff
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0a0000-0x0affff: excluding 0xa0000-0xaffff
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0b0000-0x0bffff: excluding 0xb0000-0xbffff
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0c0000-0x0cffff: excluding 0xc0000-0xcbfff
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0d0000-0x0dffff: clean.
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0e0000-0x0effff: clean.
pcmcia_socket pcmcia_socket1: cs: memory probe 0x60000000-0x60ffffff: clean.
pcmcia_socket pcmcia_socket1: cs: memory probe 0xa0000000-0xa0ffffff: clean.
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0cc000-0x0effff: excluding 0xe0000-0xeffff
pcmcia_socket pcmcia_socket1: cs: unable to map card memory!
If filtering out the first 1MB is reverted, everything works as expected.
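For illustration, the generic avoidance effectively clips the first 1MB out of
every candidate window before allocating, roughly as below (the helper name,
constant and placement are illustrative, not the exact kernel code):

  #define LOW_BIOS_AREA_END 0x100000      /* first 1MB of address space */

  /* Illustrative: remove the low BIOS area from an available window. */
  static void clip_low_bios_area(struct resource *avail)
  {
          if ((avail->flags & IORESOURCE_MEM) &&
              avail->start < LOW_BIOS_AREA_END)
                  avail->start = LOW_BIOS_AREA_END;
  }

ISA-style cards such as the i82365 need a window inside exactly this clipped
range, which is why the clipping must not be applied on the generic path.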
Tested-by: Robert Resch <fli4l@robert.reschpara.de>
Signed-off-by: Christoph Schulz <develop@kristov.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: stable@vger.kernel.org # v2.6.37+
The optimistic spin code assumes regular stores and cmpxchg() play nice;
this is found to not be true for at least: parisc, sparc32, tile32,
metag-lock1, arc-!llsc and hexagon.
There is further wreckage, but this in particular seemed easy to
trigger, so blacklist this.
Opt in for known good archs.
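The broken assumption is essentially the pattern sketched below: one CPU hands
over with a plain store while another updates the same word with cmpxchg(). On
architectures where cmpxchg() is emulated with a hashed spinlock the two are not
atomic with respect to each other (sketch only, not the actual mutex/rwsem code):

  struct opt_spin_node {
          int locked;
  };

  /* Releasing CPU: publish the handover with a regular store. */
  static void opt_release(struct opt_spin_node *node)
  {
          node->locked = 0;
  }

  /* Spinning CPU: try to take ownership atomically. On parisc, sparc32,
   * tile32, metag-lock1, arc-!llsc and hexagon this cmpxchg() is built
   * from a lock, so the plain store above can race with it. */
  static bool opt_try_acquire(struct opt_spin_node *node)
  {
          return cmpxchg(&node->locked, 0, 1) == 0;
  }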
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: John David Anglin <dave.anglin@bell.net>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: stable@vger.kernel.org
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: sparclinux@vger.kernel.org
Link: http://lkml.kernel.org/r/20140606175316.GV13930@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This commit:
commit 6f6343f53d
Author: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Date: Thu Apr 17 17:17:33 2014 +0900
kprobes/x86: Call exception handlers directly from do_int3/do_debug
appears to have inadvertently dropped a check that the int3 came
from kernel mode. Trying to dereference addr when addr is
user-controlled is completely bogus.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Link: http://lkml.kernel.org/r/c4e339882c121aa76254f2adde3fcbdf502faec2.1405099506.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
It's unnecessary to excessively spam the kernel log anytime the BTS buffer
cannot be allocated, so make this allocation __GFP_NOWARN.
The user will probably still want at least some artifact showing that the
allocation has failed in the past, probably due to fragmentation because
of its large size, when it's not allocated at bootstrap. Thus, add a
WARN_ONCE() so something is left behind for them to understand why perf
commands that require PEBS are not working properly.
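A minimal sketch of the resulting combination (buffer name, node and message
are illustrative):

  bts_buf = kzalloc_node(BTS_BUFFER_SIZE, GFP_KERNEL | __GFP_NOWARN, node);
  if (unlikely(!bts_buf)) {
          /* Quiet allocation, but leave one breadcrumb behind. */
          WARN_ONCE(1, "%s: BTS buffer allocation failure\n", __func__);
          return -ENOMEM;
  }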
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1406301600460.26302@chino.kir.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
With -cpu host, KVM reports LBR and extra_regs support if the host has
support.
When the guest perf driver tries to access an LBR or extra_regs MSR,
the access #GPs, since KVM doesn't handle LBR and extra_regs support.
So check access to the related MSRs once at initialization time to avoid
erroneous accesses at runtime.
To reproduce the issue, build the kernel with CONFIG_KVM_INTEL=y
(for the host kernel) and with CONFIG_PARAVIRT=n and CONFIG_KVM_GUEST=n
(for the guest kernel).
Start the guest with -cpu host.
Run perf record with --branch-any or --branch-filter in the guest to trigger
the LBR #GP.
Run perf stat on offcore events (e.g. LLC-loads/LLC-load-misses ...) in the
guest to trigger the offcore_rsp #GP.
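The check can use the fault-safe MSR accessors once at init time, along these
lines (a simplified sketch; the real driver checks the specific LBR and offcore
MSRs it cares about):

  /* Returns true if the MSR can be read and written without faulting. */
  static bool check_msr_access(unsigned int msr)
  {
          u64 val;

          /* rdmsrl_safe()/wrmsrl_safe() report a #GP as a non-zero
           * return value instead of letting it take down the kernel. */
          if (rdmsrl_safe(msr, &val))
                  return false;
          if (wrmsrl_safe(msr, val))
                  return false;
          return true;
  }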
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Maria Dimakopoulou <maria.n.dimakopoulou@gmail.com>
Cc: Mark Davies <junk@eslaf.co.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Yan, Zheng <zheng.z.yan@intel.com>
Link: http://lkml.kernel.org/r/1405365957-20202-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch fixes the SNB-EP and IVT Cbox filter mapping
table. The table controls which filters are supported by
which events. There were several mistakes in those tables
causing some filters to be ignored, such as NID on
TOR_INSERTS.
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: zheng.z.yan@intel.com
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140630144624.GA2604@quad
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This was discussed back in February:
https://lkml.org/lkml/2014/2/18/956
But I never saw a patch come out of it.
On IvyBridge we share the SandyBridge cache event tables, but the
dTLB-load-miss event is not compatible. Patch it up after
the fact to the proper DTLB_LOAD_MISSES.DEMAND_LD_MISS_CAUSES_A_WALK
Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1407141528200.17214@vincent-weaver-1.umelst.maine.edu
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The fixup of the inline assembly to restore the floating-point-control
register needs to check for an instruction address *after* the lfpc
instruction, as the specification and data exceptions are suppressing.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The PSW mask check of the PTRACE_POKEUSR_AREA command is incorrect.
The PSW_MASK_USER define contains the PSW_MASK_ASC bits, the ptrace
interface accepts all combinations for the address-space-control
bits. To protect the kernel space the PSW mask check in ptrace needs
to reject the address-space-control bit combination for home space.
Fixes CVE-2014-3534
Cc: stable@vger.kernel.org
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The MSI irqchip on s390 has its own mask and unmask MSI IRQ
functions, zpci_enable_irq() and zpci_disable_irq().
They mask and unmask MSI IRQs in the standard way; there is nothing
arch-specific about them. The MSI core already provides two generic
functions, mask_msi_irq() and unmask_msi_irq(), and the local
zpci_enable_irq() and zpci_disable_irq() are almost identical to them.
The only difference is that the local functions read the mask status
from hardware before every mask and unmask, then change the value and
write it back, whereas the generic functions cache the mask status
after each mask and unmask and use the cached status for the next change.
Since we always cache the mask status when masking or unmasking an MSI
IRQ, except where we know there is no need to (as in pci_msi_shutdown()),
replacing the local functions with the generic ones is safe.
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
[sebott: fixed inverted function pointers]
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Inlined uaccess functions require the mvcos facility (bit 27), not the tod
clock steering facility (bit 28) for z10 and newer machines.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Merge tag 'xtensa-for-next-20140715' of git://github.com/jcmvbkbc/linux-xtensa into for_next
Xtensa fixes for 3.16:
- resolve FIXMEs in double exception handler for window overflow. This
fix makes native building of linux on xtensa host possible;
- fix sysmem region removal issue introduced in 3.15.
Pull networking fixes from David Miller:
1) Bluetooth pairing fixes from Johan Hedberg.
2) ieee80211_send_auth() doesn't allocate enough tail room for the SKB,
from Max Stepanov.
3) New iwlwifi chip IDs, from Oren Givon.
4) bnx2x driver reads wrong PCI config space MSI register, from Yijing
Wang.
5) IPV6 MLD Query validation isn't strong enough, from Hangbin Liu.
6) Fix double SKB free in openvswitch, from Andy Zhou.
7) Fix sk_dst_set() being racey with UDP sockets, leading to strange
crashes, from Eric Dumazet.
8) Interpret the NAPI budget correctly in the new systemport driver,
from Florian Fainelli.
9) VLAN code frees percpu stats in the wrong place, leading to crashes
in the get stats handler. From Eric Dumazet.
10) TCP sockets doing a repair can crash with a divide by zero, because
we invoke tcp_push() with an MSS value of zero. Just skip that part
of the sendmsg paths in repair mode. From Christoph Paasch.
11) IRQ affinity bug fixes in mlx4 driver from Amir Vadai.
12) Don't ignore path MTU icmp messages with a zero mtu, machines out
there still spit them out, and all of our per-protocol handlers for
PMTU can cope with it just fine. From Edward Allcutt.
13) Some NETDEV_CHANGE notifier invocations were not passing in the
correct kind of cookie as the argument, from Loic Prylli.
14) Fix crashes in long multicast/broadcast reassembly, from Jon Paul
Maloy.
15) ip_tunnel_lookup() doesn't interpret wildcard keys correctly, fix
from Dmitry Popov.
16) Fix skb->sk assigned without taking a reference to 'sk' in
appletalk, from Andrey Utkin.
17) Fix some info leaks in ULP event signalling to userspace in SCTP,
from Daniel Borkmann.
18) Fix deadlocks in HSO driver, from Olivier Sobrie.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (93 commits)
hso: fix deadlock when receiving bursts of data
hso: remove unused workqueue
net: ppp: don't call sk_chk_filter twice
mlx4: mark napi id for gro_skb
bonding: fix ad_select module param check
net: pppoe: use correct channel MTU when using Multilink PPP
neigh: sysctl - simplify address calculation of gc_* variables
net: sctp: fix information leaks in ulpevent layer
MAINTAINERS: update r8169 maintainer
net: bcmgenet: fix RGMII_MODE_EN bit
tipc: clear 'next'-pointer of message fragments before reassembly
r8152: fix r8152_csum_workaround function
be2net: set EQ DB clear-intr bit in be_open()
GRE: enable offloads for GRE
farsync: fix invalid memory accesses in fst_add_one() and fst_init_card()
igb: do a reset on SR-IOV re-init if device is down
igb: Workaround for i210 Errata 25: Slow System Clock
usbnet: smsc95xx: add reset_resume function with reset operation
dp83640: Always decode received status frames
r8169: disable L23
...
init_espfix_ap() is currently off by one level when informing the hypervisor
that allocated pages will be used for the ministacks' page tables.
The most immediate effect of this on a PV guest is that if
'stack_page = __get_free_page()' returns a non-zeroed-out page, the hypervisor
will refuse to use it for a page table (which it shouldn't be anyway). This will
result in warnings from both Xen and Linux.
More importantly, a subsequent write to that page (again, by a PV guest) is
likely to result in a fatal page fault.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: http://lkml.kernel.org/r/1404926298-5565-1-git-send-email-boris.ostrovsky@oracle.com
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org>
Merge tag 'efi-urgent' into x86/urgent
* Remove a duplicate copy of linux_banner from the arm64 EFI stub
which, apart from reducing code duplication also stops the arm64 stub
being rebuilt every time make is invoked - Ard Biesheuvel
* Fix the EFI fdt code to not report a boot error if UEFI is
unavailable since booting without UEFI parameters is a valid use case
for non-UEFI platforms - Catalin Marinas
* Include a .bss section in the EFI boot stub PE/COFF headers to fix a
memory corruption bug - Michael Brown
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
On OMAP SOCs using PL310 controllers, power_ctrl register is not
accessible from non-secure software even on PL310 versions which
support it. The secure code takes care of setting it up correctly
and power transitions are proven on these devices.
For example, AM437x has L2C-310 version r3p3 and ROM code on that
device does not support writing to L2C-310 power control register.
The L2C driver, however, tries writing to this register for all
revisions >= r3p0.
This leads to a warning dump on boot which leads most users to believe
that L2 cache is non-functional.
Since the problem is understood, and cannot be addressed through
software, replace the warning with a pr_info() while maintaining the
WARN_ON() for other truly unexpected scenarios.
Reported-by: Nishanth Menon <nm@ti.com>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
All known Rockchip SoCs have a reset controller in their CRUs, so it's
helpful to have the reset controller framework selected by default and
only deselected by the user in special cases.
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Acked-By: Max Schwarz <max.schwarz@online.de>
Tested-By: Max Schwarz <max.schwarz@online.de>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
"This week's arm-soc fixes:
- Another set of OMAP fixes
* Clock fixes
* Restart handling
* PHY regulators
* SATA hwmod data for DRA7
+ Some trivial fixes and removal of a bit of dead code
- Exynos fixes
* A bunch of clock fixes
* Some SMP fixes
* Exynos multi-core timer: register as clocksource and fix ftrace.
+ a few other minor fixes
There's also a couple more patches, and at91 fix for USB caused by
common clock conversion, and more MAINTAINERS entries for shmobile.
We're definitely switching to only regression fixes from here on out,
we've been a little less strict than usual up until now"
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (26 commits)
ARM: at91: at91sam9x5: add clocks for usb device
ARM: EXYNOS: Register cpuidle device only on exynos4210 and 5250
ARM: dts: Add clock property for mfc_pd in exynos5420
clk: exynos5420: Add IDs for clocks used in PD mfc
ARM: EXYNOS: Add support for clock handling in power domain
ARM: OMAP2+: Remove non working OMAP HDMI audio initialization
ARM: imx: fix shared gate clock
ARM: dts: Update the parent for Audss clocks in Exynos5420
ARM: EXYNOS: Update secondary boot addr for secure mode
ARM: dts: Fix TI CPSW Phy mode selection on IGEP COM AQUILA.
ARM: dts: am335x-evmsk: Enable the McASP FIFO for audio
ARM: dts: am335x-evm: Enable the McASP FIFO for audio
ARM: OMAP2+: Make GPMC skip disabled devices
ARM: OMAP2+: create dsp device only on OMAP3 SoCs
ARM: dts: dra7-evm: Make VDDA_1V8_PHY supply always on
ARM: DRA7/AM43XX: fix header definition for omap44xx_restart
ARM: OMAP2+: clock/dpll: fix _dpll_test_fint arithmetics overflow
ARM: DRA7: hwmod: Add SYSCONFIG for usb_otg_ss
ARM: DRA7: hwmod: Fixup SATA hwmod
ARM: OMAP3: PRM/CM: Add back macros used by TI DSP/Bridge driver
...
Pull ARM fixes from Russell King:
"Another round of fixes for ARM:
- a set of kprobes fixes from Jon Medhurst
- fix the revision checking for the L2 cache which wasn't noticed to
have been broken"
* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
ARM: l2c: fix revision checking
ARM: kprobes: Fix test code compilation errors for ARMv4 targets
ARM: kprobes: Disallow instructions with PC and register specified shift
ARM: kprobes: Prevent known test failures stopping other tests running
Pull m68k fixes from Geert Uytterhoeven:
"Summary:
- Fix for a boot regression introduced in v3.16-rc1,
- Fix for a build issue in -next"
Christoph Hellwig questioned why mach_random_get_entropy should be
exported to modules, and Geert explains that random_get_entropy() is
called by at least the crypto layer and ends up using it on m68k. On
most other architectures it just uses get_cycles() (which is typically
inlined and doesn't need exporting).
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k:
m68k: Export mach_random_get_entropy to modules
m68k: Fix boot regression on machines with RAM at non-zero
Pull parisc fixes from Helge Deller:
"The major patch in here is one which fixes the fanotify_mark() syscall
in the compat layer of the 64bit parisc kernel. It went unnoticed so
long, because the calling syntax when using a 64bit parameter in a
32bit syscall is quite complex and even worse, it may be even
different if you call syscall() or the glibc wrapper. This patch
makes the kernel accept the calling convention when called by the
glibc wrapper.
The other two patches are trivial and remove unused headers, #includes
and adds the serial ports of the fastest C8000 workstation to the
parisc-kernel internal hardware database"
* 'parisc-3.16-5' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
parisc: drop unused defines and header includes
parisc: fix fanotify_mark() syscall on 32bit compat kernel
parisc: add serial ports of C8000/1GHz machine to hardware database
On parisc we cannot use the existing compat implementation for fanotify_mark()
because, for the 64-bit mask parameter, the higher and lower 32 bits are ordered
differently than what the compat function expects from big-endian
architectures.
Specifically, it finally turned out that on hppa we end up with different
assignments of parameters to kernel arguments depending on whether we call
the glibc wrapper function
int fanotify_mark (int __fanotify_fd, unsigned int __flags,
uint64_t __mask, int __dfd, const char *__pathname);
or directly calling the syscall manually
syscall(__NR_fanotify_mark, ...)
The reason is that the syscall() function is implemented as a C function, and
because the syscall number now comes first, in front of the other
parameters, the compiler will unexpectedly insert an empty parameter in
front of the u64 value to ensure the correct calling alignment for 64-bit
values.
This means that on hppa you can't simply use syscall() to call the kernel
fanotify_mark() function directly; you have to use the glibc
function instead.
This patch fixes the hppa-specific kernel code to adjust
the parameters as if userspace had called the glibc wrapper function
fanotify_mark().
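The arch-specific fix amounts to a compat entry point that reassembles the
64-bit mask from the two 32-bit halves in the order the glibc wrapper actually
passes them, conceptually (simplified sketch; which half ends up high versus
low is exactly what the real patch gets right for hppa):

  asmlinkage long sys32_fanotify_mark(compat_int_t fanotify_fd,
                                      compat_uint_t flags,
                                      compat_uint_t mask_hi,
                                      compat_uint_t mask_lo,
                                      int dfd,
                                      const char __user *pathname)
  {
          return sys_fanotify_mark(fanotify_fd, flags,
                                   ((u64)mask_hi << 32) | mask_lo,
                                   dfd, pathname);
  }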
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # 3.13+
Merge tag 'samsung-fixes-3' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung into fixes
Merge "Samsung fixes-3 for 3.16" from Kukjin Kim:
Samsung fixes-3 for v3.16
- update the parent for the Audss clock, because the kernel will hang
during late boot if the parent clock is disabled in the bootloader.
- enable clock handling in the power domain, because while the power domain
is switched on/off its related clock source is reset, which causes
problems, so it needs to be handled.
- add mux clocks to be used by the power domain for exynos5420-mfc
during power domain on/off, and the corresponding device tree property.
- register cpuidle only for exynos4210 and exynos5250, because a
system failure would happen on other exynos SoCs.
* tag 'samsung-fixes-3' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung:
ARM: EXYNOS: Register cpuidle device only on exynos4210 and 5250
ARM: dts: Add clock property for mfc_pd in exynos5420
clk: exynos5420: Add IDs for clocks used in PD mfc
ARM: EXYNOS: Add support for clock handling in power domain
ARM: dts: Update the parent for Audss clocks in Exynos5420
Signed-off-by: Olof Johansson <olof@lixom.net>
Add clocks for the USB device; otherwise, after the switch to CCF, the gadget
won't work.
Reported-by: Jiri Prchal <jiri.prchal@aksignal.cz>
Signed-off-by: Bo Shen <voice.shen@atmel.com>
Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Tested-by: Jiri Prchal <jiri.prchal@aksignal.cz>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
Pull x86 fixes from Peter Anvin:
"A couple of further build fixes for the VDSO code.
This is turning into a bit of a headache, and Andy has already come up
with a more ultimate cleanup, but most likely that is 3.17 material"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86-32, vdso: Fix vDSO build error due to missing align_vdso_addr()
x86-64, vdso: Fix vDSO build breakage due to empty .rela.dyn
Pull powerpc fixes from Ben Herrenschmidt:
"Here are a few more powerpc fixes for 3.16
There's a small series of 3 patches that fix saving/restoring MMUCR2
when using KVM without which perf goes completely bonkers in the host
system. Another perf fix from Anton that's been rotting away in
patchwork due to my poor eyesight, a couple of compile fixes, a little
addition to the WSP removal by Michael (removing a bit more dead
stuff) and a fix for an embarrassing regression with our soft irq
masking"
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
powerpc/perf: Never program book3s PMCs with values >= 0x80000000
powerpc: Disable RELOCATABLE for COMPILE_TEST with PPC64
powerpc/perf: Clear MMCR2 when enabling PMU
powerpc/perf: Add PPMU_ARCH_207S define
powerpc/kvm: Remove redundant save of SIER AND MMCR2
powerpc/powernv: Check for IRQHAPPENED before sleeping
powerpc: Clean up MMU_FTRS_A2 and MMU_FTR_TYPE_3E
powerpc/cell: Fix compilation with CONFIG_COREDUMP=n
Enable trapping of the debug registers, preventing the guests from
messing with the host state (and allowing guests to use the debug
infrastructure as well).
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Implement switching of the debug registers. While the number
of registers is massive, CPUs usually don't implement them all
(A57 has 6 breakpoints and 4 watchpoints, which gives us a total
of 22 registers "only").
Also, we only save/restore them when MDSCR_EL1 has debug enabled,
or when we've flagged the debug registers as dirty. It means that
most of the time, we only save/restore MDSCR_EL1.
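In pseudo-C, the policy described above looks roughly like this (sketch only;
the accessors are hypothetical and the real decision is made in the hyp
world-switch code):

  /* Sketch: a full save/restore is only worth it in two cases. */
  static bool debug_regs_need_full_switch(struct kvm_vcpu *vcpu)
  {
          /* The guest enabled monitor/kernel debug in MDSCR_EL1 ... */
          if (guest_mdscr_el1(vcpu) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
                  return true;
          /* ... or the debug registers were written and flagged dirty. */
          return guest_debug_regs_dirty(vcpu);
  }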
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Add handlers for all the AArch32 debug registers that are accessible
from EL0 or EL1. The code follows the same strategy as the AArch64
counterpart with regard to tracking the dirty state of the debug
registers.
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
We now have multiple tables for the various system registers
we trap. Make sure we check the order of all of them, as it is
critical that we get the order right (been there, done that...).
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
An interesting "feature" of the CP14 encoding is that there is
an overlap between 32 and 64bit registers, meaning they cannot
live in the same table as we did for CP15.
Create separate tables for 64bit CP14 and CP15 registers, and
let the top level handler use the right one.
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
As we're about to trap a bunch of CP14 registers, let's rework
the CP15 handling so it can be generalized and work with multiple
tables.
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Add handlers for all the AArch64 debug registers that are accessible
from EL0 or EL1. The trapping code keeps track of the state of the
debug registers, allowing for the switch code to implement a lazy
switching strategy.
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
In order to be able to use the DBG_MDSCR_* macros from the KVM code,
move the relevant definitions to the obvious include file.
Also move the debug_el enum to a portion of the file that is guarded
by #ifndef __ASSEMBLY__ in order to use that file from assembly code.
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
pm_fake doesn't quite describe what the handler does (it ignores writes
and returns 0 for reads).
As we're about to use it (a lot) in a different context, rename it
with an (admittedly cryptic) name that makes sense for all users.
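The behaviour itself is tiny, essentially a "read as zero, write ignored" trap
handler (sketch of the behaviour only; parameter types abbreviated to the
arm64 sys_reg handler shape):

  /* Sketch: ignore writes, return zero on reads ("RAZ/WI"). */
  static bool trap_raz_wi(struct kvm_vcpu *vcpu,
                          struct sys_reg_params *p,
                          const struct sys_reg_desc *r)
  {
          if (!p->is_write)
                  *vcpu_reg(vcpu, p->Rt) = 0;

          return true;    /* trap handled either way */
  }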
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Fix an issue with 32-bit guests running on top of a BE KVM host.
The indexes of the high and low words of a 64-bit cp15 register are
swapped in the big-endian case, since the 64-bit cp15 state is
restored or saved with a double-word write or read instruction.
Define a helper macro to access the low word of a 64-bit cp15 register.
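Conceptually, the helper hides the index swap (a sketch under the assumption
that the 64-bit cp15 state is kept as two consecutive 32-bit array entries; the
exact accessor in the tree may differ):

  /* Sketch: pick the low 32-bit word of a 64-bit cp15 register pair. */
  #ifdef CONFIG_CPU_BIG_ENDIAN
  #define vcpu_cp15_64_low(vcpu, r)       ((vcpu)->arch.cp15[(r) + 1])
  #else
  #define vcpu_cp15_64_low(vcpu, r)       ((vcpu)->arch.cp15[(r) + 0])
  #endif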
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Since the size of all sys registers is always 8 bytes, the current
code is actually endian agnostic. Just clean it up a bit:
remove the comment about little endian and change the type of the pointer
from 'void *' to 'u64 *' to enforce stronger type checking.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The esr_el2 field of struct kvm_vcpu_fault_info has type u32, so
it should be stored as a word. The current code works in the LE case
because it puts the least significant word of x1 into
esr_el2, and it puts the most significant word of x1 into the next
field, which accidentally is OK because that field is updated again
by the next instruction. But the existing code breaks in the BE case.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
If the guest CPU runs in LE mode and the host runs in BE mode, we need
to byteswap the data so that reads and writes are emulated correctly.
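A minimal sketch of the adjustment (the helper name and call site are
illustrative):

  /* Byte-swap MMIO data when the guest access is little-endian but the
   * host kernel runs big-endian (illustrative helper). */
  static void mmio_adjust_endianness(void *buf, unsigned int len)
  {
          if (!IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
                  return;
          switch (len) {
          case 2: *(u16 *)buf = swab16(*(u16 *)buf); break;
          case 4: *(u32 *)buf = swab32(*(u32 *)buf); break;
          case 8: *(u64 *)buf = swab64(*(u64 *)buf); break;
          }
  }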
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Previous patches addressed ARMv7 big-endian virtualization and the
related KVM issues, so enable ARM_VIRT_EXT for big-endian
now.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Fix the code that handles the KVM_SET_ONE_REG and KVM_GET_ONE_REG ioctls to work
in a BE image. Before this fix the get/set_one_reg functions worked correctly only
in the LE case: reg_from_user took a 'void *' kernel address that could actually
be target/source memory of either 4 or 8 bytes in size, and the code copied
from/to user memory that could hold either a 4-byte register, an 8-byte register
or a pair of 4-byte registers.
In order to work in an endian-agnostic way, the reg_from_user and reg_to_user
functions should copy a register value only into a kernel variable whose size
matches the register size. In the few places where a size mismatch existed, fix
the issue on the macro caller's side.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
If the status register E bit is not set (LE mode) and the host runs in BE mode,
we need to byteswap the data so that reads and writes are emulated correctly.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The __kvm_vcpu_run function returns a 64-bit result in two registers,
which has to be adjusted for BE case.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
In some cases the mcrr and mrrc instructions, in combination with the ldrd
and strd instructions, need to deal with a 64-bit value in memory. The ldrd
and strd instructions already handle endianness within word (register)
boundaries, but to get the whole 64-bit value represented correctly, the
rr_lo_hi macro is introduced and used to swap the register positions when
the mcrr and mrrc instructions are used. That has the effect of swapping
the two words.
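The macro is just an argument swap keyed off the endianness config, consistent
with the description above:

  /* Swap the register pair handed to mcrr/mrrc so the 64-bit value in
   * memory keeps the word order that ldrd/strd expect on BE kernels. */
  #ifdef CONFIG_CPU_BIG_ENDIAN
  #define rr_lo_hi(a1, a2)        a2, a1
  #else
  #define rr_lo_hi(a1, a2)        a1, a2
  #endif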
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The vgic h/w registers are little endian; when BE asm code
reads/writes from/to them, it needs to do byteswap after/before.
Byteswap code uses ARM_BE8 wrapper to add swap only if
CONFIG_CPU_BIG_ENDIAN is configured.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
HSCTLR.EE is defined as bit[25], per the ARM manual
DDI0606C.b (p1590).
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Li Liu <john.liuli@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Introduce the GICv3 world switch code used to save/restore the
GICv3 context.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Introduce the support code for emulating a GICv2 on top of GICv3
hardware.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
GICv3 requires the IMO and FMO bits to be tightly coupled with some
of the interrupt controller's register switch.
In order to have similar code paths, move the manipulation of these
bits to the GICv2 switch code.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Move the GICv2 world switch code into its own file, and add the
necessary indirection to the arm64 switch code.
Also introduce a new type field to the vgic_params structure.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
We already have __hyp_text_{start,end} to express the boundaries
of the HYP text section, and __kvm_hyp_code_{start,end} are getting
in the way of a more modular world switch code.
Just turn __kvm_hyp_code_{start,end} into #defines mapping the
linker-emited symbols.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Brutally hack the innocent vgic code, and move the GICv2 specific code
to its own file, using vgic_ops and vgic_params as a way to pass
information between the two blocks.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
In order to make way for the GICv3 registers, move the v2-specific
registers to their own structure.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
For correct guest suspend/resume behaviour we need to ensure we include
the generic timer registers for 64 bit guests. As CONFIG_KVM_ARM_TIMER is
always set for arm64 we don't need to worry about null implementations.
However I have re-jigged the kvm_arm_timer_set/get_reg declarations to
be in the common include/kvm/arm_arch_timer.h headers.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
I suspect this is a -ECUTPASTE fault from the initial implementation. If
we don't declare the register ID to be KVM_REG_ARM64 the KVM_GET_ONE_REG
implementation kvm_arm_get_reg() returns -EINVAL and hilarity ensues.
The kvm/api.txt document describes all arm64 registers as starting with
0x60xx... (i.e KVM_REG_ARM64).
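The -EINVAL comes from the architecture-class check on the register ID in the
ioctl path, which is essentially:

  /* Reject IDs that do not live in the arm64 register space. */
  if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32)
          return -EINVAL;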
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
A userspace process can map device MMIO memory via VFIO or /dev/mem,
e.g., for platform device passthrough support in QEMU.
During early development, we found the PAGE_S2 memory type being used
for MMIO mappings. This patch corrects that by using the more strongly
ordered memory type for device MMIO mappings: PAGE_S2_DEVICE.
Signed-off-by: Kim Phillips <kim.phillips@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Currently when a KVM region is deleted or moved after
KVM_SET_USER_MEMORY_REGION ioctl, the corresponding
intermediate physical memory is not unmapped.
This patch corrects this and unmaps the region's IPA range
in kvm_arch_commit_memory_region using unmap_stage2_range.
Signed-off-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
unmap_range() was utterly broken, to quote Marc, and broke in all sorts
of situations. It was also quite complicated to follow and didn't
follow the usual scheme of having a separate iterating function for each
level of page tables.
Address this by refactoring the code and introduce a pgd_clear()
function.
Reviewed-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Mario Smarduch <m.smarduch@samsung.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Certain instructions (e.g., mwait and monitor) cause a #UD exception when they
are executed in user mode. This is in contrast to the regular privileged
instructions which cause #GP. In order not to mess with SVM interception of
mwait and monitor which assumes privilege level assertions take place before
interception, a flag has been added.
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Certain instructions, such as monitor and xsave do not support big real mode
and cause a #GP exception if any of the accessed bytes effective address are
not within [0, 0xffff]. This patch introduces a flag to mark these
instructions, including the necessary checks.
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Emulator accesses are always done a page at a time, either by the emulator
itself (for fetches) or because we need to query the MMU for address
translations. Speed up these accesses by using kvm_read_guest_page
and, in the case of fetches, by inlining kvm_read_guest_virt_helper and
dropping the loop around kvm_read_guest_page.
This final tweak saves 30-100 more clock cycles (4-10%), bringing the
count (as measured by kvm-unit-tests) down to 720-1100 clock cycles on
a Sandy Bridge Xeon host, compared to 2300-3200 before the whole series
and 925-1700 after the first two low-hanging fruit changes.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When the CS base is not page-aligned, the linear address of the code could
get close to the page boundary (e.g. 0x...ffe) even if the EIP value is
not. So we need to first linearize the address, and only then compute
the number of valid bytes that can be fetched.
This happens relatively often when executing real mode code.
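A sketch of the order of operations (field and helper names simplified; not
the exact emulator code):

  /* Linearize CS:EIP first, then limit the fetch to this page. */
  unsigned long linear = cs_base(ctxt) + ctxt->_eip;   /* simplified */
  unsigned int avail = min_t(unsigned int, 15,
                             PAGE_SIZE - offset_in_page(linear));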
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We do not need a memory copying loop anymore in insn_fetch; we
can use a byte-aligned pointer to access instruction fields directly
from the fetch_cache. This eliminates 50-150 cycles (corresponding to
a 5-10% improvement in performance) from each instruction.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
do_insn_fetch_bytes will only be called once in a given insn_fetch and
insn_fetch_arr, because in fact it will only be called at most twice
for any instruction and the first call is explicit in x86_decode_insn.
This observation lets us hoist the call out of the memory copying loop.
It does not buy performance, because most fetches are one byte long
anyway, but it prepares for the next patch.
The overflow check is tricky, but correct. Because do_insn_fetch_bytes
has already been called once, we know that fc->end is at least 15. So
it is okay to subtract the number of bytes we want to read.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Hoist the common case up from do_insn_fetch_byte to do_insn_fetch,
and prime the fetch_cache in x86_decode_insn. This helps a bit the
compiler and the branch predictor, but above all it lays the
ground for further changes in the next few patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
rip_relative is only set if decode_modrm runs, and if you have ModRM
you will also have a memopp. We can then access memopp unconditionally.
Note that rip_relative cannot be hoisted up to decode_modrm, or you
break "mov $0, xyz(%rip)".
Also, move typecast on "out of range value" of mem.ea to decode_modrm.
Together, all these optimizations save about 50 cycles on each emulated
instruction (4-6%).
Signed-off-by: Bandan Das <bsd@redhat.com>
[Fix immediate operands with rip-relative addressing. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
x86_decode_insn already sets a default for seg_override,
so remove it from the zeroed area. Also replace set/get functions
with direct access to the field.
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
A lot of initializations are unnecessary as they get set to
appropriate values before actually being used. Optimize
placement of fields in x86_emulate_ctxt
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Remove the if conditional - that will help us avoid
an "else initialize to 0". Also, rearrange operators
for slightly better code.
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The same information can be gleaned from ctxt->d and avoids having
to zero/NULL initialize intercept and check_perm
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Core emulator functions all belong in emulator.c,
x86 should have no knowledge of emulator internals
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The "if/return" checks are useless, because we return X86EMUL_CONTINUE
anyway if we do not return.
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We can just blindly move all 16 bytes of ctxt->src's value to ctxt->dst.
write_register_operand will take care of writing only the lower bytes.
Avoiding a call to memcpy (the compiler optimizes it out) gains about
200 cycles on kvm-unit-tests for register-to-register moves, and makes
them about as fast as arithmetic instructions.
We could perhaps get a larger speedup by moving all instructions _except_
moves out of x86_emulate_insn, removing opcode_len, and replacing the
switch statement with an inlined em_mov.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There are several checks for "peculiar" aspects of instructions in both
x86_decode_insn and x86_emulate_insn. Group them together, and guard
them with a single "if" that lets the processor quickly skip them all.
Make this more effective by adding two more flag bits that say whether the
.intercept and .check_perm fields are valid. We will reuse these
flags later to avoid initializing fields of the emulate_ctxt struct.
This skims about 30 cycles for each emulated instruction, which is
approximately a 3% improvement.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Despite the provisions to emulate up to 130 consecutive instructions, in
practice KVM will emulate just one before exiting handle_invalid_guest_state,
because x86_emulate_instruction always sets KVM_REQ_EVENT.
However, we only need to do this if an interrupt could be injected,
which happens a) if an interrupt shadow bit (STI or MOV SS) has gone
away; b) if the interrupt flag has just been set (other instructions
than STI can set it without enabling an interrupt shadow).
This cuts another 700-900 cycles from the cost of emulating an
instruction (measured on a Sandy Bridge Xeon: 1650-2600 cycles
before the patch on kvm-unit-tests, 925-1700 afterwards).
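The resulting condition is roughly the following (sketch; the variable names
are illustrative):

  /* Only request an event re-check when injection may have just
   * become possible. */
  bool shadow_went_away = (old_shadow & ~new_shadow) != 0;
  bool if_just_set = !(old_rflags & X86_EFLAGS_IF) &&
                     (kvm_get_rflags(vcpu) & X86_EFLAGS_IF);

  if (shadow_went_away || if_just_set)
          kvm_make_request(KVM_REQ_EVENT, vcpu);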
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For the next patch we will need to know the full state of the
interrupt shadow; we will then set KVM_REQ_EVENT when one bit
is cleared.
However, right now get_interrupt_shadow only returns the one
corresponding to the emulated instruction, or an unconditional
0 if the emulated instruction does not have an interrupt shadow.
This is confusing and does not allow us to check for cleared
bits as mentioned above.
Clean the callback up, and modify toggle_interruptibility to
match the comment above the call. As a small result, the
call to set_interrupt_shadow will be skipped in the common
case where int_shadow == 0 && mask == 0.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
About 25% of the time spent in emulation of invalid guest state
is wasted in checking whether emulation is required for the next
instruction. However, this almost never changes except when a
segment register (or TR or LDTR) changes, or when there is a mode
transition (i.e. CR0 changes).
In fact, vmx_set_segment and vmx_set_cr0 already modify
vmx->emulation_required (except that the former for some reason
uses |= instead of just an assignment). So there is no need to
call guest_state_valid in the emulation loop.
Emulation performance test results indicate 1650-2600 cycles
for common instructions, versus 2300-3200 before this patch on
a Sandy Bridge Xeon.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since commit 575203 the MCE subsystem in the Linux kernel for AMD sets bit 18
in MSR_K7_HWCR. Running such a kernel as a guest in KVM on an AMD host results
in a GPE injected into the guest because kvm_set_msr_common returns 1. This
patch fixes this by masking bit 18 from the MSR value desired by the guest.
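The fix is a one-line mask on the guest-supplied value, along these lines
(sketch of the relevant MSR case):

  case MSR_K7_HWCR:
          /* The guest's AMD MCE code may set McStatusWrEn (bit 18);
           * mask it out instead of failing the write. */
          data &= ~BIT_ULL(18);
          break;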
Signed-off-by: Matthias Lange <matthias.lange@kernkonzept.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We encountered a scenario in which after an INIT is delivered, a pending
interrupt is delivered, although it was sent before the INIT. As the SDM
states in section 10.4.7.1, the ISR and the IRR should be cleared after INIT as
KVM does. This also means that pending interrupts should be cleared. This
patch clears the pending interrupts upon reset (and INIT); on the same
occasion it clears the pending exceptions, since they may cause a similar issue.
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We have noticed that qemu-kvm hangs early in the BIOS when running nested
under some versions of VMware ESXi.
The problem, we believe, is that KVM assumes the platform preserves
the 'G' bit for any segment register. The SVM specification itemizes the
segment attribute bits that are observed by the CPU, but the (G)ranularity bit
is not one of the bits itemized, for any segment. Though current AMD CPUs keep
track of the (G)ranularity bit for all segment registers other than CS, the
specification does not require it. VMware's virtual CPU may not track the
(G)ranularity bit for any segment register.
Since kvm already synthesizes the (G)ranularity bit for the CS segment, it
should do so for all segments. The patch below does that, and helps get rid of
the hangs. The patch applies on top of Linus' tree.
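Synthesizing the bit from the limit is straightforward; per segment it boils
down to (sketch of the fixup in the segment read-out path):

  /* A limit above 1MB can only be expressed with 4K granularity,
   * so derive G from the limit instead of trusting the saved field. */
  var->g = s->limit > 0xfffff;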
Signed-off-by: Jim Mattson <jmattson@vmware.com>
Signed-off-by: Alok N Kataria <akataria@vmware.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We are seeing a lot of PMU warnings on POWER8:
Can't find PMC that caused IRQ
Looking closer, the active PMC is 0 at this point and we took a PMU
exception on the transition from negative to 0. Some versions of POWER8
have an issue where they edge detect and not level detect PMC overflows.
A number of places program the PMC with (0x80000000 - period_left),
where period_left can be negative. We can either fix all of these or
just ensure that period_left is always >= 1.
This patch takes the second option.
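The guard is a clamp at the point where the counter is programmed, roughly
(sketch; write_pmc() stands in for the driver's internal helper):

  s64 left = local64_read(&event->hw.period_left);
  /* Keep the programmed value strictly below 0x80000000 even when
   * period_left has gone negative. */
  unsigned long val = 0x80000000UL - (unsigned long)max_t(s64, left, 1);

  write_pmc(event->hw.idx, val);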
Cc: <stable@vger.kernel.org>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
powerpc:allmodconfig has been failing for some time with the following
error.
arch/powerpc/kernel/exceptions-64s.S: Assembler messages:
arch/powerpc/kernel/exceptions-64s.S:1312: Error: attempt to move .org backwards
make[1]: *** [arch/powerpc/kernel/head_64.o] Error 1
A number of attempts to fix the problem by moving around code have been
unsuccessful and resulted in failed builds for some configurations and
the discovery of toolchain bugs.
Fix the problem by disabling RELOCATABLE for COMPILE_TEST builds instead.
While this is less than perfect, it avoids substantial code changes
which would otherwise be necessary just to make COMPILE_TEST builds
happy and might have undesired side effects.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
On POWER8 when switching to a KVM guest we set bits in MMCR2 to freeze
the PMU counters. Aside from on boot they are then never reset,
resulting in stuck perf counters for any user in the guest or host.
We now set MMCR2 to 0 whenever enabling the PMU, which provides a sane
state for perf to use the PMU counters under either the guest or the
host.
This was manifesting as a bug with ppc64_cpu --frequency:
$ sudo ppc64_cpu --frequency
WARNING: couldn't run on cpu 0
WARNING: couldn't run on cpu 8
...
WARNING: couldn't run on cpu 144
WARNING: couldn't run on cpu 152
min: 18446744073.710 GHz (cpu -1)
max: 0.000 GHz (cpu -1)
avg: 0.000 GHz
The command uses a perf counter to measure CPU cycles over a fixed
amount of time, in order to approximate the frequency of the machine.
The counters were returning zero once a guest was started, regardless of
whether it was still running or had been shut down.
By dumping the value of MMCR2, it was observed that once a guest is
running MMCR2 is set to 1s - which stops counters from running:
$ sudo sh -c 'echo p > /proc/sysrq-trigger'
CPU: 0 PMU registers, ppmu = POWER8 n_counters = 6
PMC1: 5b635e38 PMC2: 00000000 PMC3: 00000000 PMC4: 00000000
PMC5: 1bf5a646 PMC6: 5793d378 PMC7: deadbeef PMC8: deadbeef
MMCR0: 0000000080000000 MMCR1: 000000001e000000 MMCRA: 0000040000000000
MMCR2: fffffffffffffc00 EBBHR: 0000000000000000
EBBRR: 0000000000000000 BESCR: 0000000000000000
SIAR: 00000000000a51cc SDAR: c00000000fc40000 SIER: 0000000001000000
This is done unconditionally in book3s_hv_interrupts.S upon entering the
guest, and the original value is only saved/restored if the host has
indicated it was using the PMU. This is okay; however the user of the
PMU needs to ensure that it is in a defined state when it starts using
it.
Fixes: e05b9b9e5c ("powerpc/perf: Power8 PMU support")
Cc: stable@vger.kernel.org
Signed-off-by: Joel Stanley <joel@jms.id.au>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Instead of separate bits for every POWER8 PMU feature, have a single one
for v2.07 of the architecture.
This saves us adding a MMCR2 define for a future patch.
Cc: stable@vger.kernel.org
Signed-off-by: Joel Stanley <joel@jms.id.au>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
These two registers are already saved in the block above. Aside from
being unnecessary, by the time we get down to the second save location
r8 no longer contains MMCR2, so we are clobbering the saved value with
PMC5.
MMCR2 primarily consists of counter freeze bits. So restoring the value
of PMC5 into MMCR2 will most likely have the effect of freezing
counters.
Fixes: 72cde5a88d ("KVM: PPC: Book3S HV: Save/restore host PMU registers that are new in POWER8")
Cc: stable@vger.kernel.org
Signed-off-by: Joel Stanley <joel@jms.id.au>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Commit 8d6f7c5a: "powerpc/powernv: Make it possible to skip the IRQHAPPENED
check in power7_nap()" added code that prevents cpus from checking for
pending interrupts just before entering sleep state, which is wrong. These
interrupts are delivered during the soft irq disabled state of the cpu.
A cpu cannot enter any idle state with pending interrupts because they will
never be serviced until the next time the cpu is woken up by some other
interrupt; only then are the pending interrupts replayed. This can result
in device timeouts or warnings about this cpu being stuck.
This patch fixes this issue by ensuring that cpus check for pending interrupts
just before entering any idle state, as long as they are not in the path of
split-core operations.
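The intent, sketched in C (the real change is in the powerpc idle
assembly; the helpers below are hypothetical names for the existing
checks):

/* Sketch: bail out of the idle entry path if an interrupt is already
 * pending, so it is replayed now rather than stranded until the next
 * wakeup. */
static void enter_idle_sketch(void)
{
	if (pending_irq_check())	/* hypothetical lazy-irq check */
		return;			/* replay the interrupt, do not nap */

	do_nap();			/* hypothetical: enter the idle state */
}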
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
In fb5a515704 "powerpc: Remove platforms/wsp and associated pieces",
we removed the last user of MMU_FTRS_A2. So remove it.
MMU_FTRS_A2 was the last user of MMU_FTR_TYPE_3E, so remove it also.
This leaves some unreachable code in mmu_context_nohash.c, so remove
that also.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Commit 046d662f48 "coredump: make core dump functionality optional"
made the coredump optional, but didn't update the spufs code that
depends on it. That leads to build errors such as:
arch/powerpc/platforms/built-in.o: In function `.spufs_arch_write_note':
coredump.c:(.text+0x22cd4): undefined reference to `.dump_emit'
coredump.c:(.text+0x22cf4): undefined reference to `.dump_emit'
coredump.c:(.text+0x22d0c): undefined reference to `.dump_align'
coredump.c:(.text+0x22d48): undefined reference to `.dump_emit'
coredump.c:(.text+0x22e7c): undefined reference to `.dump_skip'
Fix it by adding some ifdefs in the cell code.
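The fix boils down to making the spufs note writer depend on
CONFIG_COREDUMP, along these lines (signatures abridged and
illustrative):

#ifdef CONFIG_COREDUMP
/* Built only when the coredump core (dump_emit() and friends) exists. */
int spufs_arch_write_note(struct spu_context *ctx, int i,
			  struct coredump_params *cprm)
{
	/* ... uses dump_emit()/dump_skip()/dump_align() ... */
	return 0;
}
#else
/* With CONFIG_COREDUMP=n the hook compiles away to a no-op stub. */
static inline int spufs_arch_write_note(struct spu_context *ctx, int i,
					struct coredump_params *cprm)
{
	return 0;
}
#endif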
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Currently, the exynos cpuidle driver works correctly only on exynos4210
and 5250. Trying to use it with just one CPU online on any other exynos
SoC will lead to system failure, due to the unsupported AFTR mode on those
SoCs. This patch fixes the problem by registering the driver only on
supported SoCs and letting the others simply use the default WFI mode until
support for them is added.
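Conceptually the guard is just a SoC check before registering the cpuidle
device (a sketch; the exact helpers and device name may differ):

/* Sketch: only register the cpuidle driver on SoCs whose AFTR mode is
 * known to work; everything else keeps the default WFI idle. */
static int __init exynos_cpuidle_init_sketch(void)
{
	if (!(soc_is_exynos4210() || soc_is_exynos5250()))
		return 0;		/* unsupported SoC: plain WFI only */

	return platform_device_register(&exynos_cpuidle);
}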
Signed-off-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Relying on static functions used just once to get inlined (and
subsequently have dead code paths eliminated) is wrong: Compilers are
free to decide whether they do this, regardless of optimization level.
With this not happening for vdso_addr() (observed with gcc 4.1.x), an
unresolved reference to align_vdso_addr() causes the build to fail.
[ hpa: vdso_addr() is never actually used on x86-32, as calculate_addr
in map_vdso() is always false. It ought to be possible to clean
this up further, but this fixes the immediate problem. ]
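The failure mode in miniature (illustrative code, not the kernel's vdso
sources): a reference hidden behind a condition that is always false at
run time still has to resolve at link time unless the compiler inlines
the caller and eliminates the dead branch.

static unsigned long align_helper(unsigned long addr);	/* no definition in this config */

static unsigned long addr_helper(unsigned long addr, int calculate_addr)
{
	if (calculate_addr)			/* always false here, but only at run time */
		addr = align_helper(addr);	/* may survive to link time */
	return addr;
}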
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/53B5863B02000078000204D5@mail.emea.novell.com
Acked-by: Andy Lutomirski <luto@amacapital.net>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Add the optional clock property to the mfc_pd node for handling the
re-parenting during power domain on/off.
Signed-off-by: Arun Kumar K <arun.kk@samsung.com>
Signed-off-by: Shaik Ameer Basha <shaik.ameer@samsung.com>
Reviewed-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
While powering on/off a local power domain in exynos5 chipsets, the
input clocks to each device get modified. This behaviour is based
on the SYSCLK_SYS_PWR_REG registers.
E.g. with SYSCLK_MFC_SYS_PWR_REG = 0x0, the parent of the input clock to
MFC (aclk333) gets changed to oscclk; with 0x1, the clocks are left
unchanged.
The recommended value of SYSCLK_SYS_PWR_REG before power gating any
domain is 0x0, so we must also restore the clocks every time a domain
is powered on.
This patch adds the framework for getting the required mux and parent
clocks through a power domain device node. With this patch, while
powering off a domain the parent is set to oscclk, and while powering back
on it is re-set to the correct parent, as per the recommended
pd on/off sequence.
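A condensed sketch of the sequence, using the common clk API (the struct
layout and per-domain bookkeeping here are simplified):

struct pd_clk {
	struct clk *mux;	/* e.g. the mux feeding aclk333 */
	struct clk *parent;	/* parent remembered across power-off */
	struct clk *oscclk;
};

/* Before power-off: remember the current parent, then park the mux on
 * oscclk (matching SYSCLK_SYS_PWR_REG = 0x0 behaviour). */
static void pd_clocks_power_off(struct pd_clk *c)
{
	c->parent = clk_get_parent(c->mux);
	clk_set_parent(c->mux, c->oscclk);
}

/* After power-on: restore the saved parent. */
static void pd_clocks_power_on(struct pd_clk *c)
{
	clk_set_parent(c->mux, c->parent);
}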
Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Signed-off-by: Arun Kumar K <arun.kk@samsung.com>
Signed-off-by: Shaik Ameer Basha <shaik.ameer@samsung.com>
Reviewed-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Certain ld versions (observed with 2.20.0) put an empty .rela.dyn
section into shared object files, breaking the assumption on the number
of sections to be copied to the final output. Simply discard any empty
SHT_REL and SHT_RELA sections to address this.
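In the vdso image converter this amounts to skipping zero-sized
relocation sections when copying (a sketch using standard ELF
definitions; the helper itself is illustrative):

#include <elf.h>

/* Skip empty SHT_REL/SHT_RELA sections rather than copying them, so an
 * empty .rela.dyn emitted by some ld versions does not change the
 * expected section count. */
static int should_copy_section(const Elf64_Shdr *sh)
{
	if ((sh->sh_type == SHT_REL || sh->sh_type == SHT_RELA) &&
	    sh->sh_size == 0)
		return 0;
	return 1;
}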
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/53B5861E02000078000204D1@mail.emea.novell.com
Acked-by: Andy Lutomirski <luto@amacapital.net>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Commit b4aa016305 ("efifb: Implement vga_default_device() (v2)") added
efifb vga_default_device() so that EFI systems that do not load a shadow
VBIOS or set up VGA get a proper value for the boot_vga PCI sysfs
attribute on the corresponding PCI device.
Xorg doesn't detect devices when boot_vga=0, e.g., on some EFI systems such
as MacBookAir2,1. Xorg detects the GPU and finds the DRI device but then
bails out with "no devices detected".
Note: when vga_default_device() is set, the boot_vga PCI sysfs attribute
reflects its state. When unset, this attribute is 1 whenever the
IORESOURCE_ROM_SHADOW flag is set.
With the introduction of sysfb/simplefb/simpledrm, efifb is becoming
obsolete, and having native drivers for the GPU also makes selecting
sysfb/efifb optional.
Remove the efifb implementation of vga_default_device() and initialize
vgaarb's vga_default_device() with the PCI GPU that matches boot
screen_info in pci_fixup_video().
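A condensed sketch of the fixup side (the BAR-matching loop below is
illustrative; the real check also covers the legacy VGA case):

/* Sketch: in pci_fixup_video(), treat the GPU whose memory BAR covers
 * the firmware framebuffer (screen_info.lfb_base) as the boot VGA
 * device, instead of having efifb guess it. */
static void fixup_video_sketch(struct pci_dev *pdev)
{
	resource_size_t fb = screen_info.lfb_base;
	int i;

	for (i = 0; i < 6; i++) {	/* the standard PCI BARs */
		struct resource *res = &pdev->resource[i];

		if ((res->flags & IORESOURCE_MEM) &&
		    fb >= res->start && fb < res->end)
			vga_set_default_device(pdev);
	}
}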
[bhelgaas: remove unused "dev" in efifb_setup()]
Fixes: b4aa016305 ("efifb: Implement vga_default_device() (v2)")
Tested-by: Anibal Francisco Martinez Cortina <linuxkid.zeuz@gmail.com>
Signed-off-by: Bruno Prémont <bonbons@linux-vserver.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Matthew Garrett <matthew.garrett@nebula.com>
CC: stable@vger.kernel.org # v3.5+
Merge tag 'omap-for-v3.16/fixes-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes
Merge "omap fixes against v3.16-rc4" from Tony Lindgren:
Fixes for omaps for the -rc series. It's mostly fixes for clock rates,
restart handling and phy regulators and SATA interconnect data.
Also few build fixes related to the DSP driver in staging, and trivial
stuff like removal of broken and soon to be unused platform data init
for HDMI audio that would be good to get into the -rc series if not
too late.
* tag 'omap-for-v3.16/fixes-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: OMAP2+: Remove non working OMAP HDMI audio initialization
ARM: dts: Fix TI CPSW Phy mode selection on IGEP COM AQUILA.
ARM: dts: am335x-evmsk: Enable the McASP FIFO for audio
ARM: dts: am335x-evm: Enable the McASP FIFO for audio
ARM: OMAP2+: Make GPMC skip disabled devices
ARM: OMAP2+: create dsp device only on OMAP3 SoCs
ARM: dts: dra7-evm: Make VDDA_1V8_PHY supply always on
ARM: DRA7/AM43XX: fix header definition for omap44xx_restart
ARM: OMAP2+: clock/dpll: fix _dpll_test_fint arithmetics overflow
ARM: DRA7: hwmod: Add SYSCONFIG for usb_otg_ss
ARM: DRA7: hwmod: Fixup SATA hwmod
ARM: OMAP3: PRM/CM: Add back macros used by TI DSP/Bridge driver
ARM: dts: dra7xx-clocks: Fix the l3 and l4 clock rates
Signed-off-by: Olof Johansson <olof@lixom.net>
Pull crypto fixes from Herbert Xu:
"This push fixes an error in sha512_ssse3 that leads to incorrect
output as well as a memory leak in caam_jr when the module is
unloaded"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: caam - fix memleak in caam_jr module
crypto: sha512_ssse3 - fix byte count to bit count conversion