37811 Commits

Heiko Carstens
bfe3349b51 [S390] atomic ops: small cleanups
A couple of coding style fixes: replace __inline__ with inline and
remove #ifdef __KERNEL__ since the header file isn't exported.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-09-11 10:29:44 +02:00
Heiko Carstens
1275105851 [S390] atomic ops: add efficient atomic64 support for 31 bit
Use compare double and swap to implement efficient atomic64 ops for 31 bit.
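
The pattern is the classic compare-and-swap retry loop, just on a 64-bit
quantity. A minimal user-space sketch of that pattern, using the GCC
__sync builtin as a stand-in for the compare double and swap instruction
(the kernel version is inline assembly on a register pair):

static inline long long atomic64_add_return_sketch(long long i, long long *v)
{
        long long old, new;

        do {
                old = *v;               /* snapshot the current 64-bit value */
                new = old + i;          /* value we want to install */
        } while (__sync_val_compare_and_swap(v, old, new) != old);

        return new;
}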

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-09-11 10:29:43 +02:00
Martin Schwidefsky
6ac2a4ddd1 [S390] improve mcount code
Move the 64 bit mcount code from mcount.S into mcount64.S and avoid
code duplication.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-09-11 10:29:43 +02:00
Heiko Carstens
04efc3be76 [S390] convert/optimize csum_fold() to C
These days gcc generates better code than the old inline assembly
does. The original inline assembly results in:

lr	%r1,%r2
sr	%r3,%r3
lr	%r2,%r1
srdl	%r2,16
alr	%r2,%r3
alr	%r1,%r2
srl	%r1,16
xilf	%r1,65535
llghr	%r2,%r1
br	%r14

From the C code, gcc generates this:

rll	%r1,%r2,16
ar	%r1,%r2
srl	%r1,16
xilf	%r1,65535
llghr	%r2,%r1
br	%r14

In addition we don't have any static register allocations anymore and
gcc is free to shuffle instructions around for better pipeline usage.
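
For reference, a fold written in plain C along these lines (a sketch
using generic integer types rather than the kernel's __wsum/__sum16; the
arithmetic is what lets gcc emit the rotate-and-add sequence above):

#include <stdint.h>

static inline uint16_t csum_fold_sketch(uint32_t sum)
{
        uint32_t rot = (sum << 16) | (sum >> 16);  /* rotate by 16, the "rll ...,16" */

        sum += rot;                     /* end-around carry folds into the high half */
        return (uint16_t)~(sum >> 16);  /* complement of the folded 16-bit sum */
}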

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-09-11 10:29:43 +02:00
Heiko Carstens
05e7ff7da7 [S390] introduce get_clock_monotonic
Introduce get_clock_monotonic() function which can be used to get a
(fast) timestamp. Resolution is the same as for get_clock(). The
only difference is that the timestamps are monotonic and don't jump
backward or forward.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-09-11 10:29:42 +02:00
Heiko Carstens
62733e5a5a [S390] cio: move scsw helper functions to header file
All scsw helper functions are very short, and using them shouldn't
result in function calls. Therefore move them to a separate header
file.
This also saves a lot of EXPORT_SYMBOLs.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-09-11 10:29:36 +02:00
Ben Hutchings
5367b6887e x86: Fix code patching for paravirt-alternatives on 486
As reported in <http://bugs.debian.org/511703> and
<http://bugs.debian.org/515982>, kernels with paravirt-alternatives
enabled crash in text_poke_early() on at least some 486-class
processors.

The problem is that text_poke_early() itself uses inline functions
affected by paravirt-alternatives and so will modify instructions that
have already been prefetched.  Pentium and later processors will
invalidate the prefetched instructions in this case, but 486-class
processors do not.

Change sync_core() to limit prefetching on 486-class (and 386-class)
processors, and move the call to sync_core() above the call to the
modifiable local_irq_restore().
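
A sketch of the resulting logic (simplified; the kernel decides this
from the boot CPU data and the 386/486 config options rather than a
function parameter):

static inline void sync_core_sketch(int cpu_family)
{
        int tmp;

        if (cpu_family < 5) {
                /* 386/486: no speculative execution, but the prefetch
                 * queue must be flushed -- a taken jump does that. */
                asm volatile("jmp 1f\n1:\n" ::: "memory");
        } else {
                /* Pentium and later: CPUID is a serializing instruction. */
                asm volatile("cpuid"
                             : "=a" (tmp)
                             : "0" (1)
                             : "ebx", "ecx", "edx", "memory");
        }
}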

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
LKML-Reference: <1252547631.3423.134.camel@localhost>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-09-10 16:50:19 -07:00
James Morris
a3c8b97396 Merge branch 'next' into for-linus 2009-09-11 08:04:49 +10:00
Jeremy Fitzhardinge
78c86e5e56 x86: split __phys_addr out into separate file
Split __phys_addr out into its own file so we can disable
-fstack-protector in a fine-grained fashion.  Also it doesn't
have terribly much to do with the rest of ioremap.c.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
2009-09-10 11:48:55 -07:00
David S. Miller
b73d884756 sparc64: Initial niagara2 perf counter support.
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-10 07:42:02 -07:00
David S. Miller
660d13765f sparc64: Perf counter 'nop' event is not constant.
On Niagara-2, for example, it's going to be different.  So make
it something specified in sparc_pmu.

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-10 07:13:26 -07:00
David S. Miller
496c07e3b4 sparc64: Provide a way to specify a perf counter overflow IRQ enable bit.
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-10 07:10:59 -07:00
David S. Miller
91b9286d81 sparc64: Provide hypervisor tracing bit support for perf counters.
A PMU need only specify which bit in the PCR enables hypervisor
tracing in order to enable this.

This will be used in Niagara-2 perf counter support.
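
Like the overflow-IRQ and 'nop'-event patches above, this ends up as a
field in the per-chip PMU description. A rough sketch of that
structure's shape (field names are illustrative, not necessarily the
mainline ones):

struct perf_event_map;          /* per-chip event encoding table */

struct sparc_pmu_sketch {
        const struct perf_event_map *(*event_map)(int event_id);
        int     max_events;
        int     upper_shift;    /* PCR field positions for the two counters */
        int     lower_shift;
        int     event_mask;
        int     hv_bit;         /* PCR bit that enables hypervisor tracing */
        int     irq_bit;        /* PCR bit that enables the overflow interrupt */
        int     upper_nop;      /* per-chip "nop" event encodings */
        int     lower_nop;
};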

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-10 07:09:06 -07:00
Takashi Iwai
e0b3032bcd Merge branch 'topic/asoc' into for-linus
* topic/asoc: (226 commits)
  ASoC: au1x: PSC-AC97 bugfixes
  ASoC: Fix WM835x Out4 capture enumeration
  ASoC: Remove unused hw_read_t
  ASoC: fix pxa2xx-ac97.c breakage
  ASoC: Fully specify DC servo bits to update in wm_hubs
  ASoC: Debugged improper setting of PLL fields in WM8580 driver
  ASoC: new board driver to connect bfin-5xx with ad1836 codec
  ASoC: OMAP: Add functionality to set CLKR and FSR sources in McBSP DAI
  ASoC: davinci: i2c device creation moved into board files
  ASoC: Don't reconfigure WM8350 FLL if not needed
  ASoC: Fix s3c-i2s-v2 build
  ASoC: Make platform data optional for TLV320AIC3x
  ASoC: Add S3C24xx dependencies for Simtec machines
  ASoC: SDP3430: Fix TWL GPIO6 pin mux request
  ASoC: S3C platform: Fix s3c2410_dma_started() called at improper time
  ARM: OMAP: McBSP: Merge two functions into omap_mcbsp_start/_stop
  ASoC: OMAP: Fix setup of XCCR and RCCR registers in McBSP DAI
  OMAP: McBSP: Use textual values in DMA operating mode sysfs files
  ARM: OMAP: DMA: Add support for DMA channel self linking on OMAP1510
  ASoC: Select core DMA when building for S3C64xx
  ...
2009-09-10 15:32:40 +02:00
David S. Miller
59abbd1e7c sparc64: Initial hw perf counter support.
Only supports one simple counter and only UltraSPARC-IIIi chips.

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-10 06:28:20 -07:00
David S. Miller
5686f9c3d6 sparc64: Implement a real set_perf_counter_pending().
When the perf counter subsystem needs to reschedule work out
from an NMI, it invokes set_perf_counter_pending().

This triggers a non-NMI irq which should invoke
perf_counter_do_pending().

Currently this won't fire because sparc64 doesn't yet invoke the
perf counter subsystem from NMIs, but it will once the HW counter
support is added.

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-10 05:59:24 -07:00
David S. Miller
2d0740c456 sparc64: Use nmi_enter() and nmi_exit(), as needed.
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-10 05:56:16 -07:00
David S. Miller
76c36d016a sparc64: Provide extern decls for sparc_??u_type strings.
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-10 05:55:17 -07:00
Yang Xiaowei
2496afbf1e xen: use stronger barrier after unlocking lock
We need to have a stronger barrier between releasing the lock and
checking for any waiting spinners.  A compiler barrier is not sufficient
because the CPU's ordering rules do not prevent the read of xl->spinners
from happening before the unlock assignment, as they are different
memory locations.

We need to have an explicit barrier to enforce the write-read ordering
to different memory locations.

Without the stronger barrier, I can't bring up more than 4 HVM guests
on one SMP machine.
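
A compilable sketch of the unlock path in question (names and layout
simplified from arch/x86/xen/spinlock.c; the point is the full barrier
between dropping the lock byte and reading the spinner count):

struct xen_spinlock_sketch {
        unsigned char  lock;      /* 0 -> free, 1 -> held */
        unsigned short spinners;  /* count of CPUs blocked on this lock */
};

void xen_spin_unlock_slow_sketch(struct xen_spinlock_sketch *xl); /* event-channel kick */

static void xen_spin_unlock_sketch(struct xen_spinlock_sketch *xl)
{
        xl->lock = 0;             /* release the lock */

        /* Full barrier: the store above must be visible before we read
         * xl->spinners below.  A compiler barrier is not enough; this
         * stands in for the stronger barrier the patch adds. */
        __sync_synchronize();

        if (xl->spinners)
                xen_spin_unlock_slow_sketch(xl);   /* wake a blocked waiter */
}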

[ Code and commit comments expanded -J ]

[ Impact: avoid deadlock when using Xen PV spinlocks ]

Signed-off-by: Yang Xiaowei <xiaowei.yang@intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
2009-09-09 16:38:44 -07:00
Jeremy Fitzhardinge
4d576b57b5 xen: only enable interrupts while actually blocking for spinlock
Where possible we enable interrupts while waiting for a spinlock to
become free, in order to reduce big latency spikes in interrupt handling.

However, at present if we manage to pick up the spinlock just before
blocking, we'll end up holding the lock with interrupts enabled for a
while.  This will cause a deadlock if we receive an interrupt in that
window, and the interrupt handler tries to take the lock too.

Solve this by shrinking the interrupt-enabled region to just around the
blocking call.
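
In sketch form (helper names here are stand-ins, not the exact Xen
primitives), the interrupt-enabled window shrinks to just the poll:

void local_irq_enable_sketch(void);
void local_irq_disable_sketch(void);
void poll_lock_kick_irq_sketch(int irq);   /* blocks until the lock holder kicks us */

static void wait_for_lock_kick_sketch(int irq, int irq_enable)
{
        /* Interrupts are only enabled around the blocking poll itself,
         * so a lock grabbed just before blocking is never held with
         * interrupts on. */
        if (irq_enable)
                local_irq_enable_sketch();

        poll_lock_kick_irq_sketch(irq);

        local_irq_disable_sketch();
}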

[ Impact: avoid race/deadlock when using Xen PV spinlocks ]

Reported-by: "Yang, Xiaowei" <xiaowei.yang@intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
2009-09-09 16:38:11 -07:00
Jeremy Fitzhardinge
577eebeae3 xen: make -fstack-protector work under Xen
-fstack-protector uses a special per-cpu "stack canary" value.
gcc generates special code in each function to test the canary to make
sure that the function's stack hasn't been overrun.

On x86-64, this is simply an offset of %gs, which is the usual per-cpu
base segment register, so setting it up simply requires loading %gs's
base as normal.

On i386, the stack protector segment is %gs (rather than the usual kernel
percpu %fs segment register).  This requires setting up the full kernel
GDT and then loading %gs accordingly.  We also need to make sure %gs is
initialized when bringing up secondary cpus too.

To keep things consistent, we do the full GDT/segment register setup on
both architectures.

Because we need to avoid stack-protector-instrumented code before setting up the GDT
and because there's no way to disable it on a per-function basis, several
files need to have stack-protector inhibited.
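
For background, this is roughly what -fstack-protector makes gcc emit
into each protected function (illustration only; in the kernel the guard
value lives in a per-cpu variable reached through the segment register
discussed above, which is why %gs must be set up so early):

extern unsigned long __stack_chk_guard;   /* the canary value (userspace-style name) */
void __stack_chk_fail(void);              /* called on a detected overrun */

void protected_function_sketch(void)
{
        unsigned long canary = __stack_chk_guard;   /* prologue: stash the canary */
        char buf[64];

        /* ... function body that might overrun buf ... */
        buf[0] = 0;

        if (canary != __stack_chk_guard)            /* epilogue: verify before return */
                __stack_chk_fail();
}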

[ Impact: allow Xen booting with stack-protector enabled ]

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
2009-09-09 16:37:39 -07:00
David Howells
733e5e4b4e KEYS: Add missing linux/tracehook.h #inclusions
Add #inclusions of linux/tracehook.h to the arch files that gained a
tracehook call for TIF_NOTIFY_RESUME when support for that flag was
added to the arch.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
2009-09-09 18:30:02 +10:00
David S. Miller
d89be56b21 sparc64: Make touch_nmi_watchdog() actually work.
It guards its actions with nmi_watchdog_active, but nothing ever
sets that, and its initial value is zero.

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-08 23:29:16 -07:00
David S. Miller
4e85f5915d sparc64: Kill unnecessary cast in profile_timer_exceptions_notify().
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-08 23:20:39 -07:00
David S. Miller
a8f2226455 sparc64: Manage NMI watchdog enabling like x86.
Use a per-cpu 'wd_enabled' boolean and a global atomic_t count
of watchdog NMI enabled cpus which is set to '-1' if something
is wrong with the watchdog and it can't be used.

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-08 23:16:06 -07:00
Tejun Heo
3e5cd1f257 dmi: extend dmi_get_year() to dmi_get_date()
There are cases where full date information is required instead of
just the year.  Add month and day parsing to dmi_get_year() and rename
it to dmi_get_date().

As the original function only required '/' followed by any number of
parseable characters at the end of the string, keep that behavior to
avoid upsetting existing users.

The new function takes dates of format [mm[/dd]]/yy[yy].  Year, month
and date are checked to be in the ranges of [1-9999], [1-12] and
[1-31] respectively and any invalid or out-of-range component is
returned as zero.

The dummy implementation is updated accordingly but the return value
is updated to indicate field not found which is consistent with how
other dummy functions behave.
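
A user-space sketch of the parsing rules described above (the kernel
function operates on a DMI field and differs in details such as
two-digit-year handling):

#include <stdio.h>

static void parse_dmi_date_sketch(const char *s, int *yearp, int *monthp, int *dayp)
{
        int year = 0, month = 0, day = 0;

        /* Accept "mm/dd/yyyy", then "mm/yyyy"; anything else yields zeros. */
        if (sscanf(s, "%d/%d/%d", &month, &day, &year) != 3) {
                day = 0;
                if (sscanf(s, "%d/%d", &month, &year) != 2)
                        month = year = 0;
        }

        /* Out-of-range components are returned as zero. */
        if (year < 1 || year > 9999)
                year = 0;
        if (month < 1 || month > 12)
                month = 0;
        if (day < 1 || day > 31)
                day = 0;

        *yearp = year;
        *monthp = month;
        *dayp = day;
}

For example, parse_dmi_date_sketch("03/2009", &y, &m, &d) yields
year 2009, month 3, day 0.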

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
2009-09-08 21:17:48 -04:00
Peter Zijlstra
a8fae3ec5f sched: enable SD_WAKE_IDLE
Now that SD_WAKE_IDLE doesn't make pipe-test suck anymore,
enable it by default for MC, CPU and NUMA domains.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-07 22:00:17 +02:00
Krzysztof Halasa
5dbc46506a IXP42x HSS support for setting internal clock rate
HSS usually uses external clocks, so it's not a big deal. The internal
clock is used for direct DTE-DTE connections and when the DCE doesn't
provide its own clock.

This also depends on the oscillator frequency. Intel seems to have
calculated the clock register settings for 33.33 MHz (66.66 MHz timer
base). Their settings seem quite suboptimal both in terms of average
frequency (60 ppm is unacceptable for G.703 applications, their primary
intended usage(?)) and jitter.

Many (most?) platforms use a 33.333 MHz oscillator, a 10 ppm difference
from Intel's base.

Instead of creating static tables, I've created a procedure to program
the HSS clock register. The register consists of 3 parts (A, B, C).
The average frequency (= bit rate) is:
66.66x MHz / (A  + (B + 1) / (C + 1))
The procedure aims at the closest average frequency, possibly at the
cost of increased jitter. Nobody would be able to directly drive an
unbuffered transmitter with an HSS anyway, and the frequency error is
what really counts.

I've verified the above with an oscilloscope on IXP425. It seems IXP46x
and possibly IXP43x use a slightly different clock generation algorithm - it
looks like the avg frequency is:
(on IXP465) 66.66x MHz / (A  + B / (C + 1)).
Also they use much greater precomputed A and B values - on IXP425 this
would simply result in more jitter, but I don't know how it works on
IXP46x (perhaps the 3 least significant bits aren't used?).

Anyway it looks that they were aiming for exactly +60 ppm or -60 ppm,
while <1 ppm is typically possible (with a synchronized clock, of
course).

The attached patch makes it possible to set almost any bit rate
(my IXP425 533 MHz quits at > 22 Mb/s if a single port is used, and the
minimum is ca. 65 Kb/s).

This is independent of MVIP (multi-E1/T1 on one HSS) mode.
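
To make the register search concrete, here is a user-space sketch of
the kind of procedure described above (the field-width limits are
assumptions for illustration, not the real IXP4xx register layout, and
the driver itself uses integer arithmetic):

#include <math.h>

#define BASE_CLOCK_HZ   (2 * 33333333.0)   /* ~66.66 MHz timer base (33.333 MHz osc) */
#define MAX_A           255                /* assumed field limits, for illustration */
#define MAX_B           255
#define MAX_C           63

static void hss_find_clock_sketch(double rate, unsigned *a, unsigned *b, unsigned *c)
{
        double best_err = HUGE_VAL;
        unsigned A = (unsigned)(BASE_CLOCK_HZ / rate);   /* integer part of the divisor */
        unsigned B, C;

        if (A < 1)
                A = 1;
        if (A > MAX_A)
                A = MAX_A;

        /* Brute-force the fractional part (B + 1) / (C + 1) for the
         * closest average frequency, accepting whatever jitter results. */
        for (C = 0; C <= MAX_C; C++) {
                for (B = 0; B <= MAX_B; B++) {
                        double div = A + (B + 1.0) / (C + 1);
                        double err = fabs(BASE_CLOCK_HZ / div - rate);

                        if (err < best_err) {
                                best_err = err;
                                *a = A;
                                *b = B;
                                *c = C;
                        }
                }
        }
}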

Signed-off-by: Krzysztof Hałasa <khc@pm.waw.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-07 01:56:49 -07:00
Tobias Klauser
d535e4319a x86: Make memtype_seq_ops const
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-06 06:33:33 +02:00
Rafael J. Wysocki
23b6c52cf5 x86: Decrease the level of some NUMA messages to KERN_DEBUG
Some NUMA messages in srat_32.c are confusing to users,
because they seem to indicate errors, while in fact they
reflect normal behaviour.

Decrease the level of these messages to KERN_DEBUG so that
they don't show up unnecessarily.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
LKML-Reference: <200909050107.45175.rjw@sisk.pl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-06 06:32:23 +02:00
Ingo Molnar
ed011b22ce Merge commit 'v2.6.31-rc9' into tracing/core
Merge reason: move from -rc5 to -rc9.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-06 06:11:42 +02:00
Roderick Colenbrander
74a01180db powerpc: Fix i8259 interrupt driver kernel crash on ML510
This patch fixes a NULL pointer dereference caused by the removal of
'ack()' for level interrupts in the Xilinx interrupt driver: a recent
change to the Xilinx interrupt controller removed the ack hook for
level irqs.

Signed-off-by: Roderick Colenbrander <thunderbird2k@gmail.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-05 14:58:07 -07:00
Linus Torvalds
535e0c1726 Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
  [IA64] fix csum_ipv6_magic()
  [IA64] Fix warning in dma-mapping.c
2009-09-05 14:50:53 -07:00
Linus Torvalds
d3acd16cda Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
  sparc64: Fix bootup with mcount in some configs.
  sparc64: Kill spurious NMI watchdog triggers by increasing limit to 30 seconds.
2009-09-05 13:49:06 -07:00
Linus Torvalds
93697a3cab Merge branch 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  perf_counter/powerpc: Fix cache event codes for POWER7
  perf_counter: Fix /0 bug in swcounters
  perf_counters: Increase paranoia level
2009-09-05 13:48:37 -07:00
Jan Glauber
81bd5f6c96 crypto: sha-s390 - Fix warnings in import function
This patch should fix the warnings.

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-09-05 16:27:35 +10:00
H. Peter Anvin
b19ae39998 x86, msr: change msr-reg.o to obj-y, and export its symbols
Change msr-reg.o to obj-y (it will be included in virtually every
kernel since it is used by the initialization code for AMD processors)
and add a separate C file to export its symbols to modules, so that
msr.ko can use them; on uniprocessors we bypass the helper functions
in msr.o and use the accessor functions directly via inlines.
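
The export side amounts to a tiny companion C file along these lines (a
sketch; the symbol names are those of the assembly helpers in msr-reg.S
as I understand them and may differ):

#include <linux/module.h>
#include <asm/msr.h>

/* msr-reg.o itself stays pure assembly, so the EXPORT_SYMBOLs for its
 * helpers live in this small C file instead. */
EXPORT_SYMBOL(native_rdmsr_safe_regs);
EXPORT_SYMBOL(native_wrmsr_safe_regs);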

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
LKML-Reference: <20090904140834.GA15789@elte.hu>
Cc: Borislav Petkov <petkovbb@googlemail.com>
2009-09-04 10:00:09 -07:00
Pekka Enberg
8e019366ba kmemleak: Don't scan uninitialized memory when kmemcheck is enabled
Ingo Molnar reported the following kmemcheck warning when running with
both kmemleak and kmemcheck enabled:

  PM: Adding info for No Bus:vcsa7
  WARNING: kmemcheck: Caught 32-bit read from uninitialized memory
  (f6f6e1a4)
  d873f9f600000000c42ae4c1005c87f70000000070665f666978656400000000
   i i i i u u u u i i i i i i i i i i i i i i i i i i i i i u u u
           ^

  Pid: 3091, comm: kmemleak Not tainted (2.6.31-rc7-tip #1303) P4DC6
  EIP: 0060:[<c110301f>] EFLAGS: 00010006 CPU: 0
  EIP is at scan_block+0x3f/0xe0
  EAX: f40bd700 EBX: f40bd780 ECX: f16b46c0 EDX: 00000001
  ESI: f6f6e1a4 EDI: 00000000 EBP: f10f3f4c ESP: c2605fcc
   DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
  CR0: 8005003b CR2: e89a4844 CR3: 30ff1000 CR4: 000006f0
  DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
  DR6: ffff4ff0 DR7: 00000400
   [<c110313c>] scan_object+0x7c/0xf0
   [<c1103389>] kmemleak_scan+0x1d9/0x400
   [<c1103a3c>] kmemleak_scan_thread+0x4c/0xb0
   [<c10819d4>] kthread+0x74/0x80
   [<c10257db>] kernel_thread_helper+0x7/0x3c
   [<ffffffff>] 0xffffffff
  kmemleak: 515 new suspected memory leaks (see
  /sys/kernel/debug/kmemleak)
  kmemleak: 42 new suspected memory leaks (see /sys/kernel/debug/kmemleak)

The problem here is that kmemleak will scan partially initialized
objects, which makes kmemcheck complain. Fix that up by skipping
uninitialized memory regions when kmemcheck is enabled.
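
The fix boils down to one extra test in kmemleak's word-by-word scanner,
sketched here (heavily abbreviated from mm/kmemleak.c; the kmemcheck
query helper below follows the kmemcheck API and is a stand-in if the
name differs in your tree):

#include <linux/kmemcheck.h>

static void scan_block_sketch(unsigned long *start, unsigned long *end)
{
        unsigned long *ptr;

        for (ptr = start; ptr < end; ptr++) {
                unsigned long pointer;

                if (!kmemcheck_is_obj_initialized((unsigned long)ptr,
                                                  sizeof(*ptr)))
                        continue;       /* don't read uninitialized memory */

                pointer = *ptr;
                (void)pointer;
                /* ... existing kmemleak pointer-lookup logic ... */
        }
}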

Reported-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
2009-09-04 16:05:55 +01:00
Ingo Molnar
695a461296 Merge branch 'amd-iommu/2.6.32' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux-2.6-iommu into core/iommu 2009-09-04 14:44:16 +02:00
David S. Miller
bd4352cadf sparc64: Fix bootup with mcount in some configs.
Functions invoked early when booting up a cpu can't use
tracing because mcount requires a valid 'current_thread_info()'
and TLB mappings to be set up.

The code path of sun4v_register_mondo_queues --> register_one_mondo
is one such case.  sun4v_register_mondo_queues already has the
necessary 'notrace' annotation, but register_one_mondo does not.

Normally register_one_mondo is inlined so the bug doesn't trigger,
but with some config/compiler combinations it won't be, so we
must properly mark it notrace.

While we're here, add 'notrace' annotations to prom_printf and
prom_halt so that early error handling won't have the same problem.
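
For reference, notrace is the kernel's shorthand for the GCC attribute
that keeps the profiling call out of a function; annotating an early
helper looks roughly like this (sketch with stand-in names):

#define notrace_sketch  __attribute__((no_instrument_function))

/* Runs before current_thread_info() and the TLB mappings are usable,
 * so the compiler must not insert an mcount call here. */
static void notrace_sketch register_one_mondo_sketch(unsigned long paddr,
                                                     unsigned long type,
                                                     unsigned long qmask)
{
        /* ... hypervisor call to register the mondo queue ... */
        (void)paddr;
        (void)type;
        (void)qmask;
}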

Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Reported-by: Leif Sawyer <lsawyer@gci.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-04 03:39:45 -07:00
Jens Axboe
825c9fb47a sparc: add basic support for 'perf'
This wires up the perf_counter_open() syscall so that basic
software support for perf is working.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-04 02:56:22 -07:00
Ingo Molnar
840a065310 sched: Turn on SD_BALANCE_NEWIDLE
Start the re-tuning of the balancer by turning on newidle.

It improves hackbench performance and parallelism on a 4x4 box.
The "perf stat --repeat 10" measurements give us:

  domain0             domain1
  .......................................
 -SD_BALANCE_NEWIDLE -SD_BALANCE_NEWIDLE:
   2041.273208  task-clock-msecs         #      9.354 CPUs    ( +-   0.363% )

 +SD_BALANCE_NEWIDLE -SD_BALANCE_NEWIDLE:
   2086.326925  task-clock-msecs         #     11.934 CPUs    ( +-   0.301% )

 +SD_BALANCE_NEWIDLE +SD_BALANCE_NEWIDLE:
   2115.289791  task-clock-msecs         #     12.158 CPUs    ( +-   0.263% )

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: Gautham R Shenoy <ego@in.ibm.com>
Cc: Balbir Singh <balbir@in.ibm.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04 11:52:54 +02:00
Ingo Molnar
47734f89be sched: Clean up topology.h
Re-organize the flag settings so that it's visible at a glance
which sched-domains flags are set and which are not.

With the new balancer code we'll need to re-tune these details
anyway, so make it cleaner to make fewer mistakes down the
road ;-)

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: Gautham R Shenoy <ego@in.ibm.com>
Cc: Balbir Singh <balbir@in.ibm.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04 11:52:53 +02:00
David S. Miller
a29889a536 Merge branch 'master' of /home/davem/src/GIT/linux-2.6/ 2009-09-04 02:22:21 -07:00
Yinghai Lu
0d96b9ff74 x86: Use hard_smp_processor_id() to get apic id for AMD K8 cpus
Otherwise, systems with apic id lifting will have a wrong apicid in
/proc/cpuinfo.

Also use it in srat_detect_node().

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
LKML-Reference: <4A998CCA.1040407@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04 09:55:29 +02:00
Ingo Molnar
29e2035bdd Merge branch 'linus' into core/rcu
Merge reason: Avoid fuzz in init/main.c and update from rc6 to rc8.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04 09:29:05 +02:00
markus.t.metzger@intel.com
1653192f51 x86, perf_counter, bts: Do not allow kernel BTS tracing for now
Kernel BTS tracing generates too much data too fast for us to
handle, causing the kernel to hang.

Fail BTS requests for kernel code.

Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Acked-by: Peter Zijlstra <a.p.zjilstra@chello.nl>
LKML-Reference: <20090902140616.901253000@intel.com>
[ This is really a workaround - but we want BTS tracing in .32
  so make sure we don't regress. The lockup should be fixed
  ASAP. ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04 09:26:40 +02:00
markus.t.metzger@intel.com
596da17f94 x86, perf_counter, bts: Correct pointer-to-u64 casts
On 32-bit, pointers in the DS AREA configuration are cast to
u64. The current (long) cast to avoid compiler warnings results
in a signed 64-bit address.
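
The distinction, in sketch form (generic types; the real code populates
the DS area fields):

#include <stdint.h>

static uint64_t ds_field_from_pointer_sketch(void *buf)
{
        /* Correct on 32-bit: unsigned long zero-extends the address. */
        return (uint64_t)(unsigned long)buf;

        /* The buggy variant, (uint64_t)(long)buf, sign-extends any
         * address >= 0x80000000 and plants garbage in the upper bits. */
}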

Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090902140615.305889000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04 09:26:39 +02:00
markus.t.metzger@intel.com
747b50aaf7 x86, perf_counter, bts: Fail if BTS is not available
Reserve PERF_COUNT_HW_BRANCH_INSTRUCTIONS with sample_period == 1
for BTS tracing and fail if BTS is not available.

Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090902140612.943801000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04 09:26:39 +02:00
Jeremy Fitzhardinge
53f824520b x86/i386: Put aligned stack-canary in percpu shared_aligned section
Pack aligned things together into a special section to minimize
padding holes.

Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Tejun Heo <tj@kernel.org>
LKML-Reference: <4AA035C0.9070202@goop.org>
[ queued up in tip:x86/asm because it depends on this commit:
  x86/i386: Make sure stack-protector segment base is cache aligned ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-04 07:10:31 +02:00