98629 Commits

Author SHA1 Message Date
Mark Brown
a6ce305207 Merge remote-tracking branches 'asoc/topic/intel', 'asoc/topic/kirkwood', 'asoc/topic/max98090' and 'asoc/topic/mc13783' into asoc-next 2014-08-04 16:31:45 +01:00
Mark Brown
2fd5373467 Merge remote-tracking branch 'asoc/topic/rcar' into asoc-next 2014-08-04 16:31:20 +01:00
Dan Carpenter
4c51cb005b x86/pmc_atom: Silence shift wrapping warnings in pmc_sleep_tmr_show()
I don't know if we really need 64 bits here, but these variables are
declared as u64 and the casts can't hurt, since they prevent any shift
wrapping.
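
For illustration, here is the class of bug such casts silence (a
generic sketch, not the actual pmc_sleep_tmr_show() code):

	u32 reg = readl(addr);
	u64 wrong = reg << 16;		/* shift evaluated in 32 bits: can wrap */
	u64 right = (u64)reg << 16;	/* shift evaluated in 64 bits: safe */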

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Aubrey Li <aubrey.li@linux.intel.com>
Link: http://lkml.kernel.org/r/20140801082715.GE28869@mwanda
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-08-02 16:52:17 -07:00
Alexei Starovoitov
7ae457c1e5 net: filter: split 'struct sk_filter' into socket and bpf parts
clean up names related to socket filtering and bpf in the following way:
- everything that deals with sockets keeps 'sk_*' prefix
- everything that is pure BPF is changed to 'bpf_*' prefix

split 'struct sk_filter' into
struct sk_filter {
	atomic_t        refcnt;
	struct rcu_head rcu;
	struct bpf_prog *prog;
};
and
struct bpf_prog {
	u32			jited:1,
				len:31;
	struct sock_fprog_kern	*orig_prog;
	unsigned int		(*bpf_func)(const struct sk_buff *skb,
					    const struct bpf_insn *filter);
	union {
		struct sock_filter	insns[0];
		struct bpf_insn		insnsi[0];
		struct work_struct	work;
	};
};
so that 'struct bpf_prog' can be used independently of sockets, which
cleans up the 'unattached' bpf use cases

split SK_RUN_FILTER macro into:
    SK_RUN_FILTER to be used with 'struct sk_filter *' and
    BPF_PROG_RUN to be used with 'struct bpf_prog *'

__sk_filter_release(struct sk_filter *) gains
__bpf_prog_release(struct bpf_prog *) helper function

also perform related renames for the functions that work
with 'struct bpf_prog *', since they're along the same lines:

sk_filter_size -> bpf_prog_size
sk_filter_select_runtime -> bpf_prog_select_runtime
sk_filter_free -> bpf_prog_free
sk_unattached_filter_create -> bpf_prog_create
sk_unattached_filter_destroy -> bpf_prog_destroy
sk_store_orig_filter -> bpf_prog_store_orig_filter
sk_release_orig_filter -> bpf_release_orig_filter
__sk_migrate_filter -> bpf_migrate_filter
__sk_prepare_filter -> bpf_prepare_filter

API for attaching classic BPF to a socket stays the same:
sk_attach_filter(prog, struct sock *)/sk_detach_filter(struct sock *)
and SK_RUN_FILTER(struct sk_filter *, ctx) to execute a program
which is used by sockets, tun, af_packet

API for 'unattached' BPF programs becomes:
bpf_prog_create(struct bpf_prog **)/bpf_prog_destroy(struct bpf_prog *)
and BPF_PROG_RUN(struct bpf_prog *, ctx) to execute a program
which is used by isdn, ppp, team, seccomp, ptp, xt_bpf, cls_bpf, test_bpf
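
For illustration, a minimal sketch of the unattached flow using the
new names (error handling trimmed; the one-instruction accept-all
program is only an example):

	static struct sock_filter insns[] = {
		BPF_STMT(BPF_RET | BPF_K, 0xffffffff),	/* accept the packet */
	};
	struct sock_fprog_kern fprog = {
		.len	= ARRAY_SIZE(insns),
		.filter	= insns,
	};
	struct bpf_prog *prog;
	unsigned int ret;
	int err;

	err = bpf_prog_create(&prog, &fprog);	/* was sk_unattached_filter_create() */
	if (err)
		return err;
	ret = BPF_PROG_RUN(prog, skb);		/* was SK_RUN_FILTER() */
	bpf_prog_destroy(prog);			/* was sk_unattached_filter_destroy() */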

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-08-02 15:03:58 -07:00
Alexei Starovoitov
8fb575ca39 net: filter: rename sk_convert_filter() -> bpf_convert_filter()
to indicate that this function converts classic BPF into eBPF
and is not related to sockets

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-08-02 15:02:38 -07:00
Linus Torvalds
3f9c08f7ce Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
 "A few fixes for ARM.  Some of these are correctness issues:
   - TLBs must be flushed after the old mappings are removed by the DMA
     mapping code, but before the new mappings are established.
   - An off-by-one entry error in the Keystone LPAE setup code.

  Fixes include:
   - ensuring that the identity mapping for LPAE does not remove the
     kernel image from the identity map.
   - preventing userspace from trapping into kgdb.
   - fixing a preemption issue in the Intel iwmmxt code.
   - fixing a build error with nommu.

  Other changes include:
   - Adding a note about which areas of memory are expected to be
     accessible while the identity mapping tables are in place"

* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
  ARM: 8124/1: don't enter kgdb when userspace executes a kgdb break instruction
  ARM: idmap: add identity mapping usage note
  ARM: 8115/1: LPAE: reduce damage caused by idmap to virtual memory layout
  ARM: fix alignment of keystone page table fixup
  ARM: 8112/1: only select ARM_PATCH_PHYS_VIRT if MMU is enabled
  ARM: 8100/1: Fix preemption disable in iwmmxt_task_enable()
  ARM: DMA: ensure that old section mappings are flushed from the TLB
2014-08-02 10:57:39 -07:00
Omar Sandoval
6bf755db4d ARM: 8124/1: don't enter kgdb when userspace executes a kgdb break instruction
The kgdb breakpoint hooks (kgdb_brk_fn and kgdb_compiled_brk_fn)
should only be entered when a kgdb break instruction is executed
from the kernel. Otherwise, if kgdb is enabled, a userspace program
can cause the kernel to drop into the debugger by executing either
KGDB_BREAKINST or KGDB_COMPILED_BREAK.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 15:20:30 +01:00
Russell King
c5cc87fa8d ARM: idmap: add identity mapping usage note
Add a note about the usage of the identity mapping; we do not support
accesses outside of the identity map region and kernel image while a
CPU is using the identity map.  This is because the identity mapping
may overwrite vmalloc space, IO mappings, the vectors pages, etc.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 15:20:26 +01:00
Russell King
3bb70de692 ARM: add comments to the early page table remap code
Add further comments to the early page table remap code to explain what
the code is doing, why it is doing it, but more importantly to explain
that the code is not architecturally compliant and is squarely in
"UNPREDICTABLE" behaviour territory.

Add a warning and tainting of the kernel too.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 08:51:55 +01:00
Shawn Guo
c716483c3d ARM: 8122/1: smp_scu: enable SCU standby support
With SCU standby enabled, SCU CLK will be turned off when all processors
are in WFI mode.  And the clock will be turned on when any processor
leaves WFI mode.

This behavior should be preferable in terms of system-idle power
efficiency, so let's set the SCU standby bit to enable the support in
the scu_enable() function.

Cortex-A9 earlier than r2p0 has no standby bit in SCU, so we need to
skip setting the bit for those.
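
A minimal sketch of the change (register offset and bit position
assumed from the SCU programming model; the pre-r2p0 check is reduced
to an assumed helper):

	#define SCU_CTRL		0x00
	#define SCU_STANDBY_ENABLE	(1 << 5)

	u32 scu_ctrl = readl_relaxed(scu_base + SCU_CTRL);

	if (scu_has_standby_bit())	/* assumed helper: false before A9 r2p0 */
		scu_ctrl |= SCU_STANDBY_ENABLE;
	writel_relaxed(scu_ctrl, scu_base + SCU_CTRL);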

Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Acked-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 08:51:53 +01:00
Shawn Guo
f8f3d4ed0d ARM: 8121/1: smp_scu: use macro for SCU enable bit
Use macro instead of magic number for SCU enable bit.

Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Acked-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 08:51:52 +01:00
Jussi Kivilinna
c8611d712a ARM: 8120/1: crypto: sha512: add ARM NEON implementation
This patch adds ARM NEON assembly implementation of SHA-512 and SHA-384
algorithms.

tcrypt benchmark results on Cortex-A8, sha512-generic vs sha512-neon-asm:

block-size      bytes/update    old-vs-new
16              16              2.99x
64              16              2.67x
64              64              3.00x
256             16              2.64x
256             64              3.06x
256             256             3.33x
1024            16              2.53x
1024            256             3.39x
1024            1024            3.52x
2048            16              2.50x
2048            256             3.41x
2048            1024            3.54x
2048            2048            3.57x
4096            16              2.49x
4096            256             3.42x
4096            1024            3.56x
4096            4096            3.59x
8192            16              2.48x
8192            256             3.42x
8192            1024            3.56x
8192            4096            3.60x
8192            8192            3.60x

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 08:51:50 +01:00
Jussi Kivilinna
604682551a ARM: 8119/1: crypto: sha1: add ARM NEON implementation
This patch adds ARM NEON assembly implementation of SHA-1 algorithm.

tcrypt benchmark results on Cortex-A8, sha1-arm-asm vs sha1-neon-asm:

block-size      bytes/update    old-vs-new
16              16              1.04x
64              16              1.02x
64              64              1.05x
256             16              1.03x
256             64              1.04x
256             256             1.30x
1024            16              1.03x
1024            256             1.36x
1024            1024            1.52x
2048            16              1.03x
2048            256             1.39x
2048            1024            1.55x
2048            2048            1.59x
4096            16              1.03x
4096            256             1.40x
4096            1024            1.57x
4096            4096            1.62x
8192            16              1.03x
8192            256             1.40x
8192            1024            1.58x
8192            4096            1.63x
8192            8192            1.63x

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 08:51:47 +01:00
Jussi Kivilinna
1f8673d31a ARM: 8118/1: crypto: sha1: make use of common SHA-1 structures
Common SHA-1 structures are defined in <crypto/sha.h> for code sharing.

This patch changes SHA-1/ARM glue code to use these structures.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 08:51:46 +01:00
Linus Torvalds
f88cf230a4 Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fix from Peter Anvin:
 "A single fix to not invoke the espfix code on Xen PV, as it turns out
  to oops the guest when invoked after all.  This patch leaves some
  amount of dead code, in particular unnecessary initialization of the
  espfix stacks when they won't be used, but in the interest of keeping
  the patch minimal that cleanup can wait for the next cycle"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86_64/entry/xen: Do not invoke espfix64 on Xen
2014-08-01 17:37:01 -07:00
Pawel Moll
4e3a25b027 ARM: imx: Remove references to platform_bus in mxc code
The bus devices created to be parents for other peripherals
were using platform_bus as a parent, not being platform
devices themselves. Remove the references, making them
virtual devices instead.

Cc: Sascha Hauer <kernel@pengutronix.de>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Acked-by: Shawn Guo <shawn.guo@freescale.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-08-01 16:03:00 -07:00
Linus Torvalds
9642a1041e ARM: Straggler SoC fix for 3.16

Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM straggler SoC fix from Olof Johansson:
 "A DT bugfix for Nomadik that had an ambiguous double-inversion of a
  gpio line, and one MAINTAINER URL update that might as well go in now.

  We could hold off until the merge window, but then we'll just have to
  mark the DT fix for stable and it just seems like in total causing
  more work"

* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  MAINTAINERS: Update Tegra Git URL
  ARM: nomadik: fix up double inversion in DT
2014-08-01 12:49:02 -07:00
Russell King
c70fbb01b1 Two different fixes for the same problem that has made some ARM nommu
configurations fail to boot since 3.6-rc1. The problem is that user_addr_max
returned the biggest available RAM address, which makes some copy_from_user
variants fail to read from XIP memory.
 
 Even in the presence of one of the two fixes the other still makes sense, so
 both patches are included here.
 
 This problem was the last one preventing efm32 boot to a prompt with mainline.

Merge tag 'nommu-for-rmk' of git://git.pengutronix.de/git/ukl/linux into devel-stable

2014-08-01 19:54:26 +01:00
Mark Rutland
ea1719672f arm64: add newline to I-cache policy string
Due to a missing newline in the I-cache policy detection log output,
it's possible to get some rather unfortunate output at boot time:

CPU1: Booted secondary processor
Detected VIPT I-cache on CPU1CPU2: Booted secondary processor
Detected VIPT I-cache on CPU2CPU3: Booted secondary processor
Detected VIPT I-cache on CPU3CPU4: Booted secondary processor
Detected PIPT I-cache on CPU4CPU5: Booted secondary processor
Detected PIPT I-cache on CPU5Brought up 6 CPUs
SMP: Total of 6 processors activated.

This patch adds the missing newline to the format string, cleaning up
the output.
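
The shape of the fix (variable names here are placeholders, not
copied from the patch):

	/* before: the next message ran straight on from this one */
	pr_info("Detected %s I-cache on CPU%d", policy, cpu);
	/* after: */
	pr_info("Detected %s I-cache on CPU%d\n", policy, cpu);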

Fixes: 59ccc0d41b7a ("arm64: cachetype: report weakest cache policy")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-01 14:00:06 +01:00
Vince Bridgers
ea6856e352 ARM: socfpga: Add socfpga Ethernet filter attributes entries
This patch adds socfpga Ethernet filter attributes for the multicast
and unicast filters, matching the Synopsys Ethernet IP configuration
chosen by Altera for the Cyclone 5 and Arria SoC FPGAs.

Signed-off-by: Vince Bridgers <vbridgers2013@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-31 14:13:29 -07:00
Joerg Roedel
4c5e9d9f0d Merge branches 'x86/vt-d', 'x86/amd', 'arm/omap', 'ppc/pamu', 'arm/smmu', 'arm/exynos' and 'core' into next 2014-07-31 20:29:02 +02:00
Mike Turquette
d7d3d26fa5 Samsung clock patches for 3.17
1) non-critical fixes (without need to push to stable):
 
 d5e136a clk: samsung: Register clk provider only after registering its all clocks
 305cfab clk: samsung: Make of_device_id array const
 e9d5295 clk: samsung: exynos5420: Setup clocks before system suspend
 f65d518 clk: samsung: trivial: Correct typo in author's name
 
 2) Exynos CLKOUT driver:
 
 800c979 clk: samsung: exynos4: Add missing CPU/DMC clock hierarchy
 01f7ec2 clk: samsung: exynos4: Add CLKOUT clock hierarchy
 1e832e5 clk: samsung: Add driver to control CLKOUT line on Exynos SoCs
 d19bb39 ARM: dts: exynos: Update PMU node with CLKOUT related data
 
 3) Clock hierarchy extensions:
 
 17d3f1d clk: exynos4: Add PPMU IP block source clocks.
 ca5b402 clk: samsung: register exynos5420 apll/kpll configuration data
 
 4) ARM CLKDOWN functionality enablement for Exynos4 and 3250:
 
 42773b2 clk: samsung: exynos4: Enable ARMCLK down feature
 45c5b0a clk: samsung: exynos3250: Enable ARMCLK down feature

Merge tag 'for_3.17/samsung-clk' of git://git.kernel.org/pub/scm/linux/kernel/git/tfiga/samsung-clk into clk-next-samsung

2014-07-31 09:32:18 -07:00
Dave Hansen
a5102476a2 x86/mm: Set TLB flush tunable to sane value (33)
This has been run through Intel's LKP tests across a wide range
of modern systems and workloads, and it wasn't shown to make a
measurable performance difference, positive or negative.

Now that we have some shiny new tracepoints, we can actually
figure out what the heck is going on.

During a kernel compile, 60% of the flush_tlb_mm_range() calls
are for a single page.  It breaks down like this:

 size   percent  percent<=
  V        V        V
GLOBAL:   2.20%   2.20% avg cycles:  2283
     1:  56.92%  59.12% avg cycles:  1276
     2:  13.78%  72.90% avg cycles:  1505
     3:   8.26%  81.16% avg cycles:  1880
     4:   7.41%  88.58% avg cycles:  2447
     5:   1.73%  90.31% avg cycles:  2358
     6:   1.32%  91.63% avg cycles:  2563
     7:   1.14%  92.77% avg cycles:  2862
     8:   0.62%  93.39% avg cycles:  3542
     9:   0.08%  93.47% avg cycles:  3289
    10:   0.43%  93.90% avg cycles:  3570
    11:   0.20%  94.10% avg cycles:  3767
    12:   0.08%  94.18% avg cycles:  3996
    13:   0.03%  94.20% avg cycles:  4077
    14:   0.02%  94.23% avg cycles:  4836
    15:   0.04%  94.26% avg cycles:  5699
    16:   0.06%  94.32% avg cycles:  5041
    17:   0.57%  94.89% avg cycles:  5473
    18:   0.02%  94.91% avg cycles:  5396
    19:   0.03%  94.95% avg cycles:  5296
    20:   0.02%  94.96% avg cycles:  6749
    21:   0.18%  95.14% avg cycles:  6225
    22:   0.01%  95.15% avg cycles:  6393
    23:   0.01%  95.16% avg cycles:  6861
    24:   0.12%  95.28% avg cycles:  6912
    25:   0.05%  95.32% avg cycles:  7190
    26:   0.01%  95.33% avg cycles:  7793
    27:   0.01%  95.34% avg cycles:  7833
    28:   0.01%  95.35% avg cycles:  8253
    29:   0.08%  95.42% avg cycles:  8024
    30:   0.03%  95.45% avg cycles:  9670
    31:   0.01%  95.46% avg cycles:  8949
    32:   0.01%  95.46% avg cycles:  9350
    33:   3.11%  98.57% avg cycles:  8534
    34:   0.02%  98.60% avg cycles: 10977
    35:   0.02%  98.62% avg cycles: 11400

We get into diminishing returns pretty quickly.  On pre-IvyBridge
CPUs, we used to set the limit at 8 pages, and it was set at 128
on IvyBridge.  That 128 number looks pretty silly considering that
less than 0.5% of the flushes are that large.

The previous code tried to size this number based on the size of
the TLB.  Good idea, but it's error-prone, needs maintenance
(which it didn't get up to now), and probably would not matter in
practice much.

Setting it to 33 means that we cover the mallopt
M_TRIM_THRESHOLD, which is the most universally common size to do
flushes.

That's the short version.  Here's the long one for why I chose 33:

1. These numbers have a constant bias in the timestamps from the
   tracing.  Probably counts for a couple hundred cycles in each of
   these tests, but it should be fairly _even_ across all of them.
   The smallest delta between the tracepoints I have ever seen is
   335 cycles.  This is one reason the cycles/page cost goes down in
   general as the flushes get larger.  The true cost is nearer to
   100 cycles.
2. A full flush is more expensive than a single invlpg, but not
   by much (single percentages).
3. A dtlb miss is 17.1ns (~45 cycles) and an itlb miss is 13.0ns
   (~34 cycles).  At those rates, refilling the 512-entry dTLB takes
   22,000 cycles.
4. 22,000 cycles is approximately the equivalent of doing 85
   invlpg operations.  But, the odds are that the TLB can
   actually be filled up faster than that because TLB misses that
   are close in time also tend to leverage the same caches.
5. ~98% of flushes are <=33 pages.  There are a lot of flushes of
   33 pages, probably because libc's M_TRIM_THRESHOLD is set to
   128k (32 pages).
6. I've found no consistent data to support changing the IvyBridge
   vs. SandyBridge tunable by a factor of 16.

I used the performance counters on this hardware (IvyBridge i5-3320M)
to figure out the tlb miss costs:

ocperf.py stat -e dtlb_load_misses.walk_duration,dtlb_load_misses.walk_completed,dtlb_store_misses.walk_duration,dtlb_store_misses.walk_completed,itlb_misses.walk_duration,itlb_misses.walk_completed,itlb.itlb_flush

     7,720,030,970      dtlb_load_misses_walk_duration                                    [57.13%]
       169,856,353      dtlb_load_misses_walk_completed                                    [57.15%]
       708,832,859      dtlb_store_misses_walk_duration                                    [57.17%]
        19,346,823      dtlb_store_misses_walk_completed                                    [57.17%]
     2,779,687,402      itlb_misses_walk_duration                                    [57.15%]
        82,241,148      itlb_misses_walk_completed                                    [57.13%]
           770,717      itlb_itlb_flush                                              [57.11%]

These show that a dtlb miss is 17.1ns (~45 cycles) and an itlb miss is
13.0ns (~34 cycles).  At those rates, refilling the 512-entry dTLB takes
22,000 cycles.  On a SandyBridge system with more cores and larger
caches, those are dtlb=13.4ns and itlb=9.5ns.

cat perf.stat.txt | perl -pe 's/,//g' |
	awk '/itlb_misses_walk_duration/ { icyc+=$1 }
		/itlb_misses_walk_completed/ { imiss+=$1 }
		/dtlb_.*_walk_duration/ { dcyc+=$1 }
		/dtlb_.*.*completed/ { dmiss+=$1 }
		END {print "itlb cyc/miss: ", icyc/imiss, " dtlb cyc/miss: ", dcyc/dmiss, "   -----    ", icyc,imiss, dcyc,dmiss }'

On Westmere CPUs, the counters to use are: itlb_flush,itlb_misses.walk_cycles,itlb_misses.any,dtlb_misses.walk_cycles,dtlb_misses.any

The assumptions that this code went in under:
https://lkml.org/lkml/2012/6/12/119 say that a flush and a refill are
about 100ns.  Being generous, that is over by a factor of 6 on the
refill side, although it is fairly close on the cost of an invlpg.
Each additional invlpg operation seems to lengthen the flush-range
operation by about 200 cycles.  Here is one example of the data
collected for flushing 10 and 11 pages (full data are below):

    10:   0.43%  93.90% avg cycles:  3570 cycles/page:  357 samples: 4714
    11:   0.20%  94.10% avg cycles:  3767 cycles/page:  342 samples: 2145

How to generate this table:

	echo 10000 > /sys/kernel/debug/tracing/buffer_size_kb
	echo x86-tsc > /sys/kernel/debug/tracing/trace_clock
	echo 'reason != 0' > /sys/kernel/debug/tracing/events/tlb/tlb_flush/filter
	echo 1 > /sys/kernel/debug/tracing/events/tlb/tlb_flush/enable

Pipe the trace output in to this script:

	http://sr71.net/~dave/intel/201402-tlb/trace-time-diff-process.pl.txt

Note that these data were gathered with the invlpg threshold set to
150 pages.  Only data points with >=50 samples were printed:

Flush      % of     % <=
pages      flushes  this size
------------------------------------------------------------------------------
    -1:   2.20%   2.20% avg cycles:  2283 cycles/page: xxxx samples: 23960
     1:  56.92%  59.12% avg cycles:  1276 cycles/page: 1276 samples: 620895
     2:  13.78%  72.90% avg cycles:  1505 cycles/page:  752 samples: 150335
     3:   8.26%  81.16% avg cycles:  1880 cycles/page:  626 samples: 90131
     4:   7.41%  88.58% avg cycles:  2447 cycles/page:  611 samples: 80877
     5:   1.73%  90.31% avg cycles:  2358 cycles/page:  471 samples: 18885
     6:   1.32%  91.63% avg cycles:  2563 cycles/page:  427 samples: 14397
     7:   1.14%  92.77% avg cycles:  2862 cycles/page:  408 samples: 12441
     8:   0.62%  93.39% avg cycles:  3542 cycles/page:  442 samples: 6721
     9:   0.08%  93.47% avg cycles:  3289 cycles/page:  365 samples: 917
    10:   0.43%  93.90% avg cycles:  3570 cycles/page:  357 samples: 4714
    11:   0.20%  94.10% avg cycles:  3767 cycles/page:  342 samples: 2145
    12:   0.08%  94.18% avg cycles:  3996 cycles/page:  333 samples: 864
    13:   0.03%  94.20% avg cycles:  4077 cycles/page:  313 samples: 289
    14:   0.02%  94.23% avg cycles:  4836 cycles/page:  345 samples: 236
    15:   0.04%  94.26% avg cycles:  5699 cycles/page:  379 samples: 390
    16:   0.06%  94.32% avg cycles:  5041 cycles/page:  315 samples: 643
    17:   0.57%  94.89% avg cycles:  5473 cycles/page:  321 samples: 6229
    18:   0.02%  94.91% avg cycles:  5396 cycles/page:  299 samples: 224
    19:   0.03%  94.95% avg cycles:  5296 cycles/page:  278 samples: 367
    20:   0.02%  94.96% avg cycles:  6749 cycles/page:  337 samples: 185
    21:   0.18%  95.14% avg cycles:  6225 cycles/page:  296 samples: 1964
    22:   0.01%  95.15% avg cycles:  6393 cycles/page:  290 samples: 83
    23:   0.01%  95.16% avg cycles:  6861 cycles/page:  298 samples: 61
    24:   0.12%  95.28% avg cycles:  6912 cycles/page:  288 samples: 1307
    25:   0.05%  95.32% avg cycles:  7190 cycles/page:  287 samples: 533
    26:   0.01%  95.33% avg cycles:  7793 cycles/page:  299 samples: 94
    27:   0.01%  95.34% avg cycles:  7833 cycles/page:  290 samples: 66
    28:   0.01%  95.35% avg cycles:  8253 cycles/page:  294 samples: 73
    29:   0.08%  95.42% avg cycles:  8024 cycles/page:  276 samples: 846
    30:   0.03%  95.45% avg cycles:  9670 cycles/page:  322 samples: 296
    31:   0.01%  95.46% avg cycles:  8949 cycles/page:  288 samples: 79
    32:   0.01%  95.46% avg cycles:  9350 cycles/page:  292 samples: 60
    33:   3.11%  98.57% avg cycles:  8534 cycles/page:  258 samples: 33936
    34:   0.02%  98.60% avg cycles: 10977 cycles/page:  322 samples: 268
    35:   0.02%  98.62% avg cycles: 11400 cycles/page:  325 samples: 177
    36:   0.01%  98.63% avg cycles: 11504 cycles/page:  319 samples: 161
    37:   0.02%  98.65% avg cycles: 11596 cycles/page:  313 samples: 182
    38:   0.02%  98.66% avg cycles: 11850 cycles/page:  311 samples: 195
    39:   0.01%  98.68% avg cycles: 12158 cycles/page:  311 samples: 128
    40:   0.01%  98.68% avg cycles: 11626 cycles/page:  290 samples: 78
    41:   0.04%  98.73% avg cycles: 11435 cycles/page:  278 samples: 477
    42:   0.01%  98.73% avg cycles: 12571 cycles/page:  299 samples: 74
    43:   0.01%  98.74% avg cycles: 12562 cycles/page:  292 samples: 78
    44:   0.01%  98.75% avg cycles: 12991 cycles/page:  295 samples: 108
    45:   0.01%  98.76% avg cycles: 13169 cycles/page:  292 samples: 78
    46:   0.02%  98.78% avg cycles: 12891 cycles/page:  280 samples: 261
    47:   0.01%  98.79% avg cycles: 13099 cycles/page:  278 samples: 67
    48:   0.01%  98.80% avg cycles: 13851 cycles/page:  288 samples: 77
    49:   0.01%  98.80% avg cycles: 13749 cycles/page:  280 samples: 66
    50:   0.01%  98.81% avg cycles: 13949 cycles/page:  278 samples: 73
    52:   0.00%  98.82% avg cycles: 14243 cycles/page:  273 samples: 52
    54:   0.01%  98.83% avg cycles: 15312 cycles/page:  283 samples: 87
    55:   0.01%  98.84% avg cycles: 15197 cycles/page:  276 samples: 109
    56:   0.02%  98.86% avg cycles: 15234 cycles/page:  272 samples: 208
    57:   0.00%  98.86% avg cycles: 14888 cycles/page:  261 samples: 53
    58:   0.01%  98.87% avg cycles: 15037 cycles/page:  259 samples: 59
    59:   0.01%  98.87% avg cycles: 15752 cycles/page:  266 samples: 63
    62:   0.00%  98.89% avg cycles: 16222 cycles/page:  261 samples: 54
    64:   0.02%  98.91% avg cycles: 17179 cycles/page:  268 samples: 248
    65:   0.12%  99.03% avg cycles: 18762 cycles/page:  288 samples: 1324
    85:   0.00%  99.10% avg cycles: 21649 cycles/page:  254 samples: 50
   127:   0.01%  99.18% avg cycles: 32397 cycles/page:  255 samples: 75
   128:   0.13%  99.31% avg cycles: 31711 cycles/page:  247 samples: 1466
   129:   0.18%  99.49% avg cycles: 33017 cycles/page:  255 samples: 1927
   181:   0.33%  99.84% avg cycles:  2489 cycles/page:   13 samples: 3547
   256:   0.05%  99.91% avg cycles:  2305 cycles/page:    9 samples: 550
   512:   0.03%  99.95% avg cycles:  2133 cycles/page:    4 samples: 304
  1512:   0.01%  99.99% avg cycles:  3038 cycles/page:    2 samples: 65

Here are the tlb counters during a 10-second slice of a kernel compile
for a SandyBridge system.  It's better than IvyBridge, probably due
to the larger caches, since this was one of the 'X' extreme parts.

    10,873,007,282      dtlb_load_misses_walk_duration
       250,711,333      dtlb_load_misses_walk_completed
     1,212,395,865      dtlb_store_misses_walk_duration
        31,615,772      dtlb_store_misses_walk_completed
     5,091,010,274      itlb_misses_walk_duration
       163,193,511      itlb_misses_walk_completed
         1,321,980      itlb_itlb_flush

      10.008045158 seconds time elapsed

# cat perf.stat.1392743721.txt | perl -pe 's/,//g' | awk '/itlb_misses_walk_duration/ { icyc+=$1 } /itlb_misses_walk_completed/ { imiss+=$1 } /dtlb_.*_walk_duration/ { dcyc+=$1 } /dtlb_.*.*completed/ { dmiss+=$1 } END {print "itlb cyc/miss: ", icyc/imiss/3.3, " dtlb cyc/miss: ", dcyc/dmiss/3.3, "   -----    ", icyc,imiss, dcyc,dmiss }'
itlb ns/miss:  9.45338  dtlb ns/miss:  12.9716

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: http://lkml.kernel.org/r/20140731154103.10C1115E@viggo.jf.intel.com
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-31 08:48:51 -07:00
Dave Hansen
2d040a1ce9 x86/mm: New tunable for single vs full TLB flush
Most of the logic here is in the documentation file.  Please take
a look at it.

I know we've come full-circle here back to a tunable, but this
new one is *WAY* simpler.  I challenge anyone to describe in one
sentence how the old one worked.  Here's the way the new one
works:

	If we are flushing more pages than the ceiling, we use
	the full flush, otherwise we use per-page flushes.
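
That sentence in code, as a sketch (variable names assumed, not the
literal patch):

	unsigned long addr;

	if ((end - start) >> PAGE_SHIFT > tlb_single_page_flush_ceiling) {
		local_flush_tlb();			/* one full flush */
	} else {
		for (addr = start; addr < end; addr += PAGE_SIZE)
			__flush_tlb_single(addr);	/* per-page invlpg */
	}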

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: http://lkml.kernel.org/r/20140731154101.12B52CAF@viggo.jf.intel.com
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-31 08:48:51 -07:00
Dave Hansen
d17d8f9ded x86/mm: Add tracepoints for TLB flushes
We don't have any good way to figure out what kinds of flushes
are being attempted.  Right now, we can try to use the vm
counters, but those only tell us what we actually did with the
hardware (one-by-one vs full) and don't tell us what was actually
_requested_.

This allows us to select out "interesting" TLB flushes that we
might want to optimize (like the ranged ones) and ignore the ones
that we have very little control over (the ones at context
switch).
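
For example, a hedged sketch of recording the request (event and
reason names assumed from this series; nr_pages_requested is a
placeholder):

	#include <trace/events/tlb.h>

	/* record what was requested, before deciding one-by-one vs full */
	trace_tlb_flush(TLB_LOCAL_MM_SHOOTDOWN, nr_pages_requested);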

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: http://lkml.kernel.org/r/20140731154059.4C96CBA5@viggo.jf.intel.com
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-31 08:48:51 -07:00
Dave Hansen
a23421f111 x86/mm: Unify remote INVLPG code
There are currently three paths through the remote flush code:

1. full invalidation
2. single page invalidation using invlpg
3. ranged invalidation using invlpg

This takes 2 and 3 and combines them into a single path by
making the single-page case simply use the page address as the start
and that address plus a single page as the end.  This makes placement
of our tracepoint easier.
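
In other words (a sketch; the helper signature is assumed):

	/* the single-page case becomes the degenerate range
	 * [addr, addr + PAGE_SIZE), so paths 2 and 3 share code */
	flush_tlb_others(mm_cpumask(mm), mm, addr, addr + PAGE_SIZE);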

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: http://lkml.kernel.org/r/20140731154058.E0F90408@viggo.jf.intel.com
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-31 08:48:51 -07:00
Dave Hansen
9dfa6dee53 x86/mm: Fix missed global TLB flush stat
If we take the

	if (end == TLB_FLUSH_ALL || vmflag & VM_HUGETLB) {
		local_flush_tlb();
		goto out;
	}

path out of flush_tlb_mm_range(), we will have flushed the tlb,
but not incremented NR_TLB_LOCAL_FLUSH_ALL.  This unifies the
way out of the function so that we always take a single path when
doing a full tlb flush.
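
A sketch of the unified exit (shape assumed from the description
above):

	out:
		if (base_pages_to_flush == TLB_FLUSH_ALL) {
			count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);	/* no longer missed */
			local_flush_tlb();
		}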

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: http://lkml.kernel.org/r/20140731154056.FF763B76@viggo.jf.intel.com
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-31 08:48:50 -07:00
Dave Hansen
e9f4e0a9fe x86/mm: Rip out complicated, out-of-date, buggy TLB flushing
I think the flush_tlb_mm_range() code that tries to tune the
flush sizes based on the CPU needs to get ripped out for
several reasons:

1. It is obviously buggy.  It uses mm->total_vm to judge the
   task's footprint in the TLB.  It should certainly be using
   some measure of RSS, *NOT* ->total_vm since only resident
   memory can populate the TLB.
2. Haswell, and several other CPUs are missing from the
   intel_tlb_flushall_shift_set() function.  Thus, it has been
   demonstrated to bitrot quickly in practice.
3. It is plain wrong in my vm:
	[    0.037444] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
	[    0.037444] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
	[    0.037444] tlb_flushall_shift: 6
   which leads it to never use invlpg.
4. The assumptions about TLB refill costs are wrong:
	http://lkml.kernel.org/r/1337782555-8088-3-git-send-email-alex.shi@intel.com
    (more on this in later patches)
5. I can not reproduce the original data: https://lkml.org/lkml/2012/5/17/59
   I believe the sample times were too short.  Running the
   benchmark in a loop yields times that vary quite a bit.

Note that this leaves us with a static ceiling of 1 page.  This
is a conservative, dumb setting, and will be revised in a later
patch.

This also removes the code which attempts to predict whether we
are flushing data or instructions.  We expect instruction flushes
to be relatively rare and not worth tuning for explicitly.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: http://lkml.kernel.org/r/20140731154055.ABC88E89@viggo.jf.intel.com
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-31 08:48:50 -07:00
Dave Hansen
4995ab9cf5 x86/mm: Clean up the TLB flushing code
The

	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)

line of code is not exactly the easiest to audit, especially when
it ends up at two different indentation levels.  This eliminates
one of the copy-n-paste versions.  It also gives us a unified
exit point for each path through this function.  We need this in
a minute for our tracepoint.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: http://lkml.kernel.org/r/20140731154054.44F1CDDC@viggo.jf.intel.com
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-31 08:48:50 -07:00
Mark D Rustad
42cbc04fd3 x86/kvm: Resolve shadow warnings in macro expansion
Resolve shadow warnings that appear in W=2 builds. Instead of
using ret to hold the return pointer, save the length in a new
variable saved_len and compute the pointer on exit. This also
resolves a very technical error, in that ret was declared as
a const char *, when it really was a char * const.
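
To illustrate the const placement being corrected (illustrative
declarations, not the macro itself):

	char buf[16];
	const char *p1 = buf;	/* pointee is const: *p1 = 'x' won't compile */
	char *const p2 = buf;	/* pointer is const: p2 = NULL won't compile */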

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-31 16:33:29 +02:00
Paolo Bonzini
307d2740b1 Two fixes for recently introduced regressions
- a memory leak on busy SIGP
 - potentially lost SIGP stop in rare situations (shutdown loops)
 
 The first issue is not part of a released kernel. The 2nd issue is
 present in all KVM versions, but did not trigger before commit
 7dfc63cf977447e09b1072911c2 (KVM: s390: allow only one SIGP STOP
 (AND STORE STATUS) at a time) with Linux as a guest.
 So there is no need for cc:stable.

Merge tag 'kvm-s390-20140730' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-next

2014-07-31 16:31:49 +02:00
Will Deacon
9415667584 Revert "arm64: dmi: Add SMBIOS/DMI support"
This reverts commit a28e3f4b90543f7c249a956e3ca518e243a04618.

Ard and Yi Li report that this patch is broken by design, so revert it
and let them sort it out for 3.18 instead.

Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-31 14:00:03 +01:00
byungchul.park
e4aa297a49 arm64: fpsimd: fix a typo in fpsimd_save_partial_state ENDPROC
Commit 190f1ca85d07 ("arm64: add support for kernel mode NEON in interrupt
context") introduced a typo in the fpsimd_save_partial_state ENDPROC
directive.

This patch fixes the typo.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: byungchul.park <byungchul.park@lge.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-31 11:42:42 +01:00
Will Deacon
c878e0cff5 arm64: don't call break hooks for BRK exceptions from EL0
Our break hooks are used to handle brk exceptions from kgdb (and potentially
kprobes if that code ever resurfaces), so don't bother calling them if
the BRK exception comes from userspace.

This prevents userspace from trapping to a kdb shell on systems where
kgdb is enabled and active.
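
A hedged sketch of the guard (the hook-list walk is reduced to an
assumed helper):

	static int call_break_hook(struct pt_regs *regs, unsigned int esr)
	{
		if (user_mode(regs))	/* BRK came from EL0: not for kgdb */
			return DBG_HOOK_ERROR;

		return run_registered_break_hooks(regs, esr);	/* assumed helper */
	}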

Cc: <stable@vger.kernel.org>
Reported-by: Omar Sandoval <osandov@osandov.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-31 11:36:08 +01:00
David Hildenbrand
db37386147 KVM: s390: rework broken SIGP STOP interrupt handling
A VCPU might never stop if it intercepts (for whatever reason) between
"fake interrupt delivery" and execution of the stop function.

The heart of the problem is that SIGP STOP is an interrupt that has to be
processed on every SIE entry until the VCPU finally executes the stop
function.

This problem was made apparent by commit 7dfc63cf977447e09b1072911c2
(KVM: s390: allow only one SIGP STOP (AND STORE STATUS) at a time).
With the old code, the guest could (incorrectly) inject SIGP STOPs
multiple times. The bug of losing a sigp stop exists in KVM before
7dfc63cf97, but it was hidden by Linux guests doing a sigp stop loop.
The new code (rightfully) returns CC=2 and does not queue a new
interrupt.

This patch is a simple fix of the problem. Longterm we are going to
rework that code - e.g. get rid of the action bits and so on.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[some additional patch description]
2014-07-31 09:20:35 +02:00
David S. Miller
f139c74a8d Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-30 13:25:49 -07:00
Linus Walleij
3181788c3a ARM: nomadik: fix up double inversion in DT
The GPIO pin connected to card detect was inverted twice: once by
the argument to the GPIO line itself where it was magically marked
as active low by the flag GPIO_ACTIVE_LOW (0x01) in the third cell,
and also marked active low AGAIN by explicitly stating
"cd-inverted" (a deprecated method).

After commit 78f87df2b4f8760954d7d80603d0cfcbd4759683
("mmc: mmci: Use the common mmc DT parser") this results in the
line being inverted twice, so it is effectively uninverted.  The
old code did not have this effect; it disregarded the flag on the
GPIO line altogether, which is a bug.  I admit the semantics may be
unclear, but inverting twice is as good a definition as any of how
this should work.

So fix up the buggy device tree. Use proper #includes so the DTS
is clear and readable.

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
2014-07-30 12:47:17 -07:00
H. Peter Anvin
c3107e3c50 APEI is currently implemented so that it depends on x86 hardware.
The primary dependency is that GHES uses the x86 NMI for hardware
 error notification and MCE for memory error handling. These patches
 remove that dependency.
 
 Other APEI features such as error reporting via external IRQ, error
 serialization, or error injection, do not require changes to use them
 on non-x86 architectures.
 
 The following patch set eliminates the APEI Kconfig x86 dependency
 by making these changes:
 - treat NMI notification as GHES architecture - HAVE_ACPI_APEI_NMI
 - group and wrap around #ifdef CONFIG_HAVE_ACPI_APEI_NMI code which
   is used only for NMI path
 - identify architectural boxes and abstract it accordingly (tlb flush and MCE)
 - rework ioremap for both IRQ and NMI context
 
 NMI code is kept in ghes.c file since NMI and IRQ context are tightly coupled.
 
 Note, these patches introduce no functional changes for x86. The NMI notification
 feature is hard selected for x86. Architectures that want to use this
 feature should also provide NMI code infrastructure.

Merge tag 'please-pull-apei' into x86/ras

2014-07-30 10:48:00 -07:00
Linus Torvalds
26bcd8b725 Device tree Exynos bug fix for v3.16-rc7

Merge tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux

Pull Exynos platform DT fix from Grant Likely:
 "Device tree Exynos bug fix for v3.16-rc7

  This bug fix has been brewing for a while.  I hate sending it to you
  so late, but I only got confirmation that it solves the problem this
  past weekend.  The diff looks big for a bug fix, but the majority of
  it is only executed in the Exynos quirk case.  Unfortunately it
  required splitting early_init_dt_scan() in two and adding quirk
  handling in the middle of it on ARM.

  Exynos has buggy firmware that puts bad data into the memory node.
  Commit 1c2f87c22566 ("ARM: Get rid of meminfo") exposed the bug by
  dropping the artificial upper bound on the number of memory banks that
  can be added.  Exynos fails to boot after that commit.  This branch
  fixes it by splitting the early DT parse function and inserting a
  fixup hook.  Exynos uses the hook to correct the DT before parsing
  memory regions"

* tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux:
  arm: Add devicetree fixup machine function
  of: Add memory limiting function for flattened devicetrees
  of: Split early_init_dt_scan into two parts
2014-07-30 09:01:04 -07:00
Linus Torvalds
acba648dca Fix BUG when trying to expand the grant table. This seems to occur
often during boot with Ubuntu 14.04 PV guests.

Merge tag 'stable/for-linus-3.16-rc7-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip

Pull Xen fix from David Vrabel:
 "Fix BUG when trying to expand the grant table.  This seems to occur
  often during boot with Ubuntu 14.04 PV guests"

* tag 'stable/for-linus-3.16-rc7-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  x86/xen: safely map and unmap grant frames when in atomic context
2014-07-30 09:00:20 -07:00
Robert Richter
3666f88010 arm64: defconfig: enable devtmpfs mount option
Match x86 and make it more convenient to run the arm64 default
kernel, as distros like Ubuntu need this option.

Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-30 16:54:21 +01:00
Chris J Arges
296f047502 KVM: vmx: remove duplicate vmx_mpx_supported() prototype
Remove a prototype which was added by both 93c4adc7afe and 36be0b9deb2.

Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-30 17:43:57 +02:00
Arun Chandran
1915e2ad1c arm64: vdso: fix build error when switching from LE to BE
Building a kernel with CPU_BIG_ENDIAN fails if there are stale objects
from a !CPU_BIG_ENDIAN build. Due to a missing FORCE prerequisite on an
if_changed rule in the VDSO Makefile, we attempt to link a stale LE
object into the new BE kernel.

According to Documentation/kbuild/makefiles.txt, FORCE is required for
if_changed rules and forgetting it is a common mistake, so fix it by
'Forcing' the build of vdso. This patch fixes build errors like these:

arch/arm64/kernel/vdso/note.o: compiled for a little endian system and target is big endian
failed to merge target specific data of file arch/arm64/kernel/vdso/note.o

arch/arm64/kernel/vdso/sigreturn.o: compiled for a little endian system and target is big endian
failed to merge target specific data of file arch/arm64/kernel/vdso/sigreturn.o

Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-30 15:06:35 +01:00
Christian Borntraeger
d514f42641 KVM: s390: Fix memory leak on busy SIGP stop
commit 7dfc63cf977447e09b1072911c22564f900fc578
(KVM: s390: allow only one SIGP STOP (AND STORE STATUS) at a time)
introduced a memory leak if a sigp stop is already pending. Free
the allocated inti structure.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
2014-07-30 15:29:40 +02:00
David Vrabel
b7dd0e350e x86/xen: safely map and unmap grant frames when in atomic context
arch_gnttab_map_frames() and arch_gnttab_unmap_frames() are called in
atomic context but were calling alloc_vm_area() which might sleep.

Also, if a driver attempts to allocate a grant ref from an interrupt
and the table needs expanding, then the CPU may already be in lazy MMU
mode and apply_to_page_range() will BUG when it tries to re-enable
lazy MMU mode.

These two functions are only used in PV guests.

Introduce arch_gnttab_init() to allocate the virtual address space in
advance.

Avoid the use of apply_to_page_range() by saving and using the array
of PTE addresses from the alloc_vm_area() call (which ensures that
the required page tables are pre-allocated).

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2014-07-30 14:22:47 +01:00
John Stultz
953dec21ae timekeeping: Fixup typo in update_vsyscall_old definition
In commit 4a0e637738f0 ("clocksource: Get rid of cycle_last"),
currently in the -tip tree, there was a small typo where cycles_t
was used instead of cycle_t. This broke ppc64 builds.

Fix this by using the proper cycle_t type for this usage, in
both the definition and the ia64 implementation.

Now, having both cycle_t and cycles_t types seems like a very
bad idea just asking for these sorts of issues. But that
will be a cleanup for another day.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1406349439-11785-1-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-07-30 09:26:25 +02:00
Laura Abbott
5a12a597a8 arm: Add devicetree fixup machine function
Commit 1c2f87c22566cd057bc8cde10c37ae9da1a1bb76
(ARM: 8025/1: Get rid of meminfo) dropped the upper bound on
the number of memory banks that can be added as there was no
technical need in the kernel.  It turns out, though, that some
bootloaders (specifically on the arndale-octa exynos boards) may pass
invalid memory information and rely on the kernel not to parse this
data.  This is a bug in the bootloader, but we still need to work
around it.  Do so by introducing a dt_fixup function, which gets
called before the flattened devicetree is scanned for memory and the
like.  In the exynos fixup, limit the maximum number of memory
regions in the devicetree, as sketched below.
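
A minimal sketch of such a hook, wired up through the machine
descriptor's new dt_fixup callback (of_fdt_limit_memory() comes from
the companion patch in this series; the cap of 8 banks is assumed
here):

	static void __init exynos_dt_fixup(void)
	{
		/* the bootloader may append bogus memory banks; cap how
		 * many the kernel will parse */
		of_fdt_limit_memory(8);
	}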

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Tested-by: Andreas Färber <afaerber@suse.de>
[glikely: Added a comment and fixed up function name]
Signed-off-by: Grant Likely <grant.likely@linaro.org>
2014-07-29 21:26:49 -06:00
Thierry Reding
6b15075c2c [IA64] sn: Do not needlessly convert between pointers and integers
The nasid_to_try variable is an array of integers, so plain integers can
be used when assigning values to the elements rather than casting a NULL
pointer to an integer, which results in the following warning from GCC:

	arch/ia64/sn/kernel/bte.c:117:22: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
	    nasid_to_try[1] = (int)NULL;
	                      ^
	arch/ia64/sn/kernel/bte.c:125:22: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
	    nasid_to_try[1] = (int)NULL;
	                      ^

Replace (int)NULL with a simple 0 to silence these warnings.

Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2014-07-29 16:28:15 -07:00
Thierry Reding
882d6f384b [IA64] sn: Fix zeroing of PDAs
The code uses the following to zero out a PDA:

	memset(pda, 0, sizeof(pda));

But sizeof(pda) will return the size of a pointer rather than the size
of the structure pointed to. This triggers the following warning from
GCC:

	arch/ia64/sn/kernel/setup.c:582:23: warning: argument to 'sizeof' in 'memset' call is the same pointer type 'struct pda_s *' as the destination; expected 'struct pda_s' or an explicit length [-Wsizeof-pointer-memaccess]
	  memset(pda, 0, sizeof(pda));
	                       ^

Fix this by passing in the size of the structure using sizeof(*pda)
instead.

Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2014-07-29 16:26:37 -07:00
Uwe Kleine-König
c6f54a9b39 ARM: 8113/1: remove remaining definitions of PLAT_PHYS_OFFSET from <mach/memory.h>
The platforms selecting NEED_MACH_MEMORY_H defined the start address of
their physical memory in the respective <mach/memory.h>. With
ARM_PATCH_PHYS_VIRT=y (which is quite common today) this is useless,
though, because the definition isn't used; the offset is determined
dynamically.

So remove the definitions from all <mach/memory.h> and provide the
Kconfig symbol PHYS_OFFSET with the respective defaults in case
ARM_PATCH_PHYS_VIRT isn't enabled.

This allows to drop the dependency of PHYS_OFFSET on !NEED_MACH_MEMORY_H
which prevents compiling an integrator nommu-kernel.
(CONFIG_PAGE_OFFSET, which has "default PHYS_OFFSET if !MMU",
expanded to "0x" because CONFIG_PHYS_OFFSET doesn't exist, as
INTEGRATOR selects NEED_MACH_MEMORY_H.)

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-29 23:08:52 +01:00