/*
 *  linux/mm/oom_kill.c
 *
 *  Copyright (C)  1998,2000  Rik van Riel
 *	Thanks go out to Claus Fischer for some serious inspiration and
 *	for goading me into coding this file...
oom: badness heuristic rewrite
This is a complete rewrite of the oom killer's badness() heuristic, which is
used to determine which task to kill in oom conditions. The goal is to
make it as simple and predictable as possible so the results are better
understood and we end up killing the task which will lead to the most
memory freeing while still respecting the fine-tuning from userspace.
Instead of basing the heuristic on mm->total_vm for each task, the task's
rss and swap space is used instead. This is a better indication of the
amount of memory that will be freeable if the oom killed task is chosen
and subsequently exits. This helps specifically in cases where KDE or
GNOME is chosen for oom kill on desktop systems instead of a memory
hogging task.
The baseline for the heuristic is a proportion of memory that each task is
currently using in memory plus swap compared to the amount of "allowable"
memory. "Allowable," in this sense, means the system-wide resources for
unconstrained oom conditions, the set of mempolicy nodes, the mems
attached to current's cpuset, or a memory controller's limit. The
proportion is given on a scale of 0 (never kill) to 1000 (always kill),
roughly meaning that a task with a badness() score of 500 consumes
approximately 50% of allowable memory resident in RAM or in swap space.
The proportion is always relative to the amount of "allowable" memory and
not the total amount of RAM systemwide so that mempolicies and cpusets may
operate in isolation; they shall not need to know the true size of the
machine on which they are running if they are bound to a specific set of
nodes or mems, respectively.
Root tasks are given 3% extra memory just like __vm_enough_memory()
provides in LSMs. In the event of two tasks consuming similar amounts of
memory, it is generally better to save root's task.
Because of the change in the badness() heuristic's baseline, it is also
necessary to introduce a new user interface to tune it. It's not possible
to redefine the meaning of /proc/pid/oom_adj with a new scale since the
ABI cannot be changed for backward compatibility. Instead, a new tunable,
/proc/pid/oom_score_adj, is added that ranges from -1000 to +1000. It may
be used to polarize the heuristic such that certain tasks are never
considered for oom kill while others may always be considered. The value
is added directly into the badness() score so a value of -500, for
example, means to discount 50% of its memory consumption in comparison to
other tasks either on the system, bound to the mempolicy, in the cpuset,
or sharing the same memory controller.
/proc/pid/oom_adj is changed so that its meaning is rescaled into the
units used by /proc/pid/oom_score_adj, and vice versa. Changing one of
these per-task tunables will rescale the value of the other to an
equivalent meaning. Although /proc/pid/oom_adj was originally defined as
a bitshift on the badness score, it now shares the same linear growth as
/proc/pid/oom_score_adj but with different granularity. This is required
so the ABI is not broken with userspace applications and allows oom_adj to
be deprecated for future removal.
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
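The proportion-of-allowable-memory scoring and the oom_adj rescaling described above can be sketched in stand-alone C. This is a simplified model, not the kernel's oom_badness(); the function names and the flat 3% root discount are illustrative assumptions:

```c
#include <assert.h>

/*
 * Simplified model of the rewritten heuristic: rss+swap is scaled to
 * tenths of a percent of "allowable" memory, root tasks are discounted
 * 3% (30/1000), and oom_score_adj is added directly to the score.
 * This sketch omits the kernel's locking and page accounting details.
 */
static long badness_sketch(unsigned long rss_pages, unsigned long swap_pages,
			   unsigned long allowable_pages, int is_root,
			   int oom_score_adj)
{
	long points = (long)((rss_pages + swap_pages) * 1000 / allowable_pages);

	if (is_root)
		points -= 30;		/* 3% of allowable memory */

	points += oom_score_adj;	/* -1000 (never kill) .. +1000 */
	return points < 0 ? 0 : points;
}

/*
 * Linear rescaling of the legacy tunable: oom_adj historically ranged
 * over -17 (OOM_DISABLE) .. +15 (OOM_ADJUST_MAX); writing one tunable
 * maps it onto the -1000..+1000 oom_score_adj scale.
 */
static int oom_adj_to_score_adj(int oom_adj)
{
	if (oom_adj == 15)		/* the old maximum maps to the new one */
		return 1000;
	return oom_adj * 1000 / 17;	/* 17 == -OOM_DISABLE */
}
```

A task using half of the allowable memory scores roughly 500, and a -500 oom_score_adj then discounts half of its consumption relative to the other candidates, as the changelog describes.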
 * Copyright (C)  2010  Google, Inc.
 *	Rewritten by David Rientjes
*
* The routines in this file are used to kill a process when
[PATCH] cpusets: oom_kill tweaks
This patch series extends the use of the cpuset attribute 'mem_exclusive'
to support cpuset configurations that:
1) allow GFP_KERNEL allocations to come from a potentially larger
set of memory nodes than GFP_USER allocations, and
2) can constrain the oom killer to tasks running in cpusets in
a specified subtree of the cpuset hierarchy.
Here's an example usage scenario. For a few hours or more, a large NUMA
system at a University is to be divided in two halves, with a bunch of student
jobs running in half the system under some form of batch manager, and with a
big research project running in the other half. Each of the student jobs is
placed in a small cpuset, but should share the classic Unix time share
facilities, such as buffered pages of files in /bin and /usr/lib. The big
research project wants no interference whatsoever from the student jobs, and
has highly tuned, unusual memory and i/o patterns that intend to make full use
of all the main memory on the nodes available to it.
In this example, we have two big sibling cpusets, one of which is further
divided into a more dynamic set of child cpusets.
We want kernel memory allocations constrained by the two big cpusets, and user
allocations constrained by the smaller child cpusets where present. And we
require that the oom killer not operate across the two halves of this system,
or else the first time a student job runs amuck, the big research project will
likely be first in line to get shot.
Tweaking /proc/<pid>/oom_adj is not ideal -- if the big research project
really does run amuck allocating memory, it should be shot, not some other
task outside the research project's mem_exclusive cpuset.
I propose to extend the use of the 'mem_exclusive' flag of cpusets to manage
such scenarios. Let memory allocations for user space (GFP_USER) be
constrained by a task's current cpuset, but memory allocations for kernel space
(GFP_KERNEL) be constrained by the nearest mem_exclusive ancestor of the
current cpuset, even though kernel space allocations will still _prefer_ to
remain within the current task's cpuset, if memory is easily available.
Let the oom killer be constrained to consider only tasks that are in
overlapping mem_exclusive cpusets (it won't help much to kill a task that
normally cannot allocate memory on any of the same nodes as the ones on which
the current task can allocate.)
The current constraints imposed on setting mem_exclusive are unchanged. A
cpuset may only be mem_exclusive if its parent is also mem_exclusive, and a
mem_exclusive cpuset may not overlap any of its siblings' memory nodes.
This patch was presented on linux-mm in early July 2005, though did not
generate much feedback at that time. It has been built for a variety of
arch's using cross tools, and built, booted and tested for function on SN2
(ia64).
There are 4 patches in this set:
1) Some minor cleanup, and some improvements to the code layout
of one routine to make subsequent patches cleaner.
2) Add another GFP flag - __GFP_HARDWALL. It marks memory
requests for USER space, which are tightly confined by the
current tasks cpuset.
3) Now memory requests (such as KERNEL) that are not marked HARDWALL can,
if short on memory, look in the potentially larger pool of memory
defined by the nearest mem_exclusive ancestor cpuset of the current
task's cpuset.
4) Finally, modify the oom killer to skip any task whose mem_exclusive
cpuset doesn't overlap ours.
Patch (1), the one time I looked on an SN2 (ia64) build, actually saved 32
bytes of kernel text space. Patch (2) has no effect on the size of kernel
text space (it just adds a preprocessor flag). Patches (3) and (4) added
about 600 bytes each of kernel text space, mostly in kernel/cpuset.c, which
matters only if CONFIG_CPUSET is enabled.
This patch:
This patch applies a few comment and code cleanups to mm/oom_kill.c prior to
applying a few small patches to improve cpuset management of memory placement.
The comment changed in oom_kill.c was seriously misleading. The code layout
change in select_bad_process() makes room for adding another condition on
which a process can be spared the oom killer (see the subsequent
cpuset_nodes_overlap patch for this addition).
Also a couple typos and spellos that bugged me, while I was here.
This patch should have no material effect.
Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
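The constraint in patch (4) above — skip any task whose mem_exclusive cpuset doesn't overlap ours — reduces to a nodemask intersection test. A minimal stand-alone sketch, modeling node sets as plain bitmasks rather than the kernel's nodemask_t (the function name is hypothetical):

```c
#include <assert.h>

/*
 * Killing a task that cannot allocate on any of our nodes frees no
 * memory we can use, so such a candidate is skipped. One bit per node.
 */
static int oom_cpusets_overlap(unsigned long our_mems,
			       unsigned long candidate_mems)
{
	return (our_mems & candidate_mems) != 0UL;
}
```

In the university scenario above, the student-job half and the research half hold disjoint node sets, so neither half's oom kills can reach the other.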
 *  we're seriously out of memory. This gets called from __alloc_pages()
 *  in mm/page_alloc.c when we really run out of memory.
*
 *  Since we won't call these routines often (on a well-configured
 *  machine) this file will double as a 'coding guide' and a signpost
 *  for newbie kernel hackers.  It features several pointers to major
 *  kernel subsystems and hints as to where to find out what things do.
*/
#include <linux/oom.h>
#include <linux/mm.h>
#include <linux/err.h>
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there. ie. if only gfp is used,
gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surrounding. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
#include <linux/gfp.h>
#include <linux/sched.h>
#include <linux/sched/mm.h>
#include <linux/sched/coredump.h>
#include <linux/sched/task.h>
#include <linux/swap.h>
#include <linux/timex.h>
#include <linux/jiffies.h>
#include <linux/cpuset.h>
#include <linux/export.h>
#include <linux/notifier.h>
#include <linux/memcontrol.h>
#include <linux/mempolicy.h>
security: Fix setting of PF_SUPERPRIV by __capable()
Fix the setting of PF_SUPERPRIV by __capable() as it could corrupt the flags
of the target process if that is not the current process and it is trying to
change its own flags in a different way at the same time.
__capable() is using neither atomic ops nor locking to protect t->flags. This
patch removes __capable() and introduces has_capability() that doesn't set
PF_SUPERPRIV on the process being queried.
This patch further splits security_ptrace() in two:
(1) security_ptrace_may_access(). This passes judgement on whether one
process may access another only (PTRACE_MODE_ATTACH for ptrace() and
PTRACE_MODE_READ for /proc), and takes a pointer to the child process.
current is the parent.
(2) security_ptrace_traceme(). This passes judgement on PTRACE_TRACEME only,
and takes only a pointer to the parent process. current is the child.
In Smack and commoncap, this uses has_capability() to determine whether
the parent will be permitted to use PTRACE_ATTACH if normal checks fail.
This does not set PF_SUPERPRIV.
Two of the instances of __capable() actually only act on current, and so have
been changed to calls to capable().
Of the places that were using __capable():
(1) The OOM killer calls __capable() thrice when weighing the killability of a
process. All of these now use has_capability().
(2) cap_ptrace() and smack_ptrace() were using __capable() to check to see
whether the parent was allowed to trace any process. As mentioned above,
these have been split. For PTRACE_ATTACH and /proc, capable() is now
used, and for PTRACE_TRACEME, has_capability() is used.
(3) cap_safe_nice() only ever saw current, so now uses capable().
(4) smack_setprocattr() rejected accesses to tasks other than current just
after calling __capable(), so the order of these two tests have been
switched and capable() is used instead.
(5) In smack_file_send_sigiotask(), we need to allow privileged processes to
receive SIGIO on files they're manipulating.
(6) In smack_task_wait(), we let a process wait for a privileged process,
whether or not the process doing the waiting is privileged.
I've tested this with the LTP SELinux and syscalls testscripts.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Acked-by: Casey Schaufler <casey@schaufler-ca.com>
Acked-by: Andrew G. Morgan <morgan@kernel.org>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: James Morris <jmorris@namei.org>
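The distinction this patch draws — capable() acts on the caller and marks it with PF_SUPERPRIV, while has_capability() only queries another task — can be modeled in a few lines. A hedged sketch with a toy task struct, not the kernel's credential code:

```c
#include <assert.h>

#define PF_SUPERPRIV	0x00000100	/* used super-user privileges */

struct toy_task {
	unsigned int flags;
	unsigned int caps;
};

/* capable()-style: acts on the caller and records privilege use. */
static int capable_sketch(struct toy_task *current_task, unsigned int cap)
{
	if (current_task->caps & cap) {
		current_task->flags |= PF_SUPERPRIV;
		return 1;
	}
	return 0;
}

/*
 * has_capability()-style: passes judgement on an arbitrary task without
 * mutating its flags, avoiding the unlocked cross-task write that could
 * corrupt a target racing to update its own flags.
 */
static int has_capability_sketch(const struct toy_task *t, unsigned int cap)
{
	return (t->caps & cap) != 0;
}
```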
#include <linux/security.h>
#include <linux/ptrace.h>
#include <linux/freezer.h>
#include <linux/ftrace.h>
#include <linux/ratelimit.h>
#include <linux/kthread.h>
#include <linux/init.h>
#include <linux/mmu_notifier.h>

#include <asm/tlb.h>
#include "internal.h"
#include "slab.h"

#define CREATE_TRACE_POINTS
#include <trace/events/oom.h>
int sysctl_panic_on_oom;
int sysctl_oom_kill_allocating_task;
int sysctl_oom_dump_tasks = 1;
/*
 * Serializes oom killer invocations (out_of_memory()) from all contexts to
 * prevent over-eager oom killing (e.g. when the oom killer is invoked
 * from different domains).
 *
 * oom_killer_disable() relies on this lock to stabilize oom_killer_disabled
 * and mark_oom_victim.
 */
DEFINE_MUTEX(oom_lock);
#ifdef CONFIG_NUMA
/**
 * has_intersects_mems_allowed() - check task eligibility for kill
 * @start: task struct of which task to consider
 * @mask: nodemask passed to page allocator for mempolicy ooms
 *
 * Task eligibility is determined by whether or not a candidate task, @tsk,
 * shares the same mempolicy nodes as current if it is bound by such a policy
 * and whether or not it has the same set of allowed cpuset nodes.
 */
static bool has_intersects_mems_allowed(struct task_struct *start,
					const nodemask_t *mask)
{
	struct task_struct *tsk;
	bool ret = false;

	rcu_read_lock();
	for_each_thread(start, tsk) {
		if (mask) {
			/*
			 * If this is a mempolicy constrained oom, tsk's
			 * cpuset is irrelevant.  Only return true if its
			 * mempolicy intersects current, otherwise it may be
			 * needlessly killed.
			 */
			ret = mempolicy_nodemask_intersects(tsk, mask);
		} else {
			/*
			 * This is not a mempolicy constrained oom, so only
			 * check the mems of tsk's cpuset.
			 */
			ret = cpuset_mems_allowed_intersects(current, tsk);
		}
		if (ret)
			break;
	}
	rcu_read_unlock();

	return ret;
}
#else
static bool has_intersects_mems_allowed(struct task_struct *tsk,
					const nodemask_t *mask)
{
	return true;
}
#endif /* CONFIG_NUMA */
/*
 * The process p may have detached its own ->mm while exiting or through
 * use_mm(), but one or more of its subthreads may still have a valid
 * pointer.  Return p, or any of its subthreads with a valid ->mm, with
 * task_lock() held.
 */
struct task_struct *find_lock_task_mm(struct task_struct *p)
oom: introduce find_lock_task_mm() to fix !mm false positives
Almost all ->mm == NULL checks in oom_kill.c are wrong.
The current code assumes that the task without ->mm has already released
its memory and ignores the process. However this is not necessarily true
when this process is multithreaded, other live sub-threads can use this
->mm.
- Remove the "if (!p->mm)" check in select_bad_process(), it is
just wrong.
- Add the new helper, find_lock_task_mm(), which finds the live
thread which uses the memory and takes task_lock() to pin ->mm
- change oom_badness() to use this helper instead of just checking
->mm != NULL.
- As David pointed out, select_bad_process() must never choose the
task without ->mm, but no matter what oom_badness() returns the
task can be chosen if nothing else has been found yet.
Change oom_badness() to return int, change it to return -1 if
find_lock_task_mm() fails, and change select_bad_process() to
check points >= 0.
Note! This patch is not enough, we need more changes.
- oom_badness() was fixed, but oom_kill_task() still ignores
the task without ->mm
- oom_forkbomb_penalty() should use find_lock_task_mm() too,
and it also needs other changes to actually find the first
first-descendant children
This will be addressed later.
[kosaki.motohiro@jp.fujitsu.com: use in badness(), __oom_kill_task()]
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
{
	struct task_struct *t;

	rcu_read_lock();
	for_each_thread(p, t) {
		task_lock(t);
		if (likely(t->mm))
			goto found;
		task_unlock(t);
	}
	t = NULL;
found:
	rcu_read_unlock();

	return t;
}
/*
 * order == -1 means the oom kill is required by sysrq, otherwise only
 * for display purposes.
 */
static inline bool is_sysrq_oom(struct oom_control *oc)
{
	return oc->order == -1;
}

static inline bool is_memcg_oom(struct oom_control *oc)
{
	return oc->memcg != NULL;
}
/* return true if the task is not adequate as candidate victim task. */
static bool oom_unkillable_task(struct task_struct *p,
		struct mem_cgroup *memcg, const nodemask_t *nodemask)
{
	if (is_global_init(p))
		return true;
	if (p->flags & PF_KTHREAD)
		return true;

	/* When mem_cgroup_out_of_memory() and p is not member of the group */
	if (memcg && !task_in_mem_cgroup(p, memcg))
		return true;

	/* p may not have freeable memory in nodemask */
	if (!has_intersects_mems_allowed(p, nodemask))
		return true;

	return false;
}
/*
 * Print out unreclaimable slabs info when unreclaimable slabs amount is greater
 * than all user memory (LRU pages)
 */
static bool is_dump_unreclaim_slabs(void)
{
	unsigned long nr_lru;

	nr_lru = global_node_page_state(NR_ACTIVE_ANON) +
		 global_node_page_state(NR_INACTIVE_ANON) +
		 global_node_page_state(NR_ACTIVE_FILE) +
		 global_node_page_state(NR_INACTIVE_FILE) +
		 global_node_page_state(NR_ISOLATED_ANON) +
		 global_node_page_state(NR_ISOLATED_FILE) +
		 global_node_page_state(NR_UNEVICTABLE);

	return (global_node_page_state(NR_SLAB_UNRECLAIMABLE) > nr_lru);
}
/**
 * oom_badness - heuristic function to determine which candidate task to kill
 * @p: task struct of which task we should calculate
 * @totalpages: total present RAM allowed for page allocation
 * @memcg: task's memory controller, if constrained
 * @nodemask: nodemask passed to page allocator for mempolicy ooms
 *
 * The heuristic for determining which task to kill is made to be as simple and
 * predictable as possible.  The goal is to return the highest value for the
 * task consuming the most memory to avoid subsequent oom failures.
 */
unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg,
			  const nodemask_t *nodemask, unsigned long totalpages)
{
	long points;
	long adj;

	if (oom_unkillable_task(p, memcg, nodemask))
		return 0;

	p = find_lock_task_mm(p);
	if (!p)
		return 0;

	/*
	 * Do not even consider tasks which are explicitly marked oom
	 * unkillable or have been already oom reaped or they are in
	 * the middle of vfork
	 */
	adj = (long)p->signal->oom_score_adj;
	if (adj == OOM_SCORE_ADJ_MIN ||
			test_bit(MMF_OOM_SKIP, &p->mm->flags) ||
			in_vfork(p)) {
		task_unlock(p);
		return 0;
	}

	/*
	 * The baseline for the badness score is the proportion of RAM that each
	 * task's rss, pagetable and swap space use.
	 */
	points = get_mm_rss(p->mm) + get_mm_counter(p->mm, MM_SWAPENTS) +
		mm_pgtables_bytes(p->mm) / PAGE_SIZE;
	task_unlock(p);

	/* Normalize to oom_score_adj units */
	adj *= totalpages / 1000;
	points += adj;

	/*
	 * Never return 0 for an eligible task regardless of the root bonus and
	 * oom_score_adj (oom_score_adj can't be OOM_SCORE_ADJ_MIN here).
	 */
	return points > 0 ? points : 1;
}
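
/*
 * Worked example (illustrative numbers, not part of the kernel interface):
 * with totalpages = 1,000,000 pages, a task whose rss + swap + page-table
 * footprint is 250,000 pages starts at points = 250000, i.e. roughly 25%
 * of allowable memory.  An oom_score_adj of -500 contributes
 * -500 * (1000000 / 1000) = -500000, so points goes negative and the
 * returned score is clamped to 1; an oom_score_adj of +500 instead yields
 * 250000 + 500000 = 750000.
 */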

enum oom_constraint {
	CONSTRAINT_NONE,
	CONSTRAINT_CPUSET,
	CONSTRAINT_MEMORY_POLICY,
	CONSTRAINT_MEMCG,
};

/*
 * Determine the type of allocation constraint.
 */
static enum oom_constraint constrained_alloc(struct oom_control *oc)
{
	struct zone *zone;
	struct zoneref *z;
	enum zone_type high_zoneidx = gfp_zone(oc->gfp_mask);
	bool cpuset_limited = false;
	int nid;

	if (is_memcg_oom(oc)) {
		oc->totalpages = mem_cgroup_get_max(oc->memcg) ?: 1;
		return CONSTRAINT_MEMCG;
	}
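
	/*
	 * Illustrative example (values hypothetical): for a memcg limited to
	 * 512 MiB with 4 KiB pages, oc->totalpages becomes 131072, so badness
	 * scores inside that cgroup are normalized against the cgroup's limit
	 * rather than against the size of the whole machine.
	 */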
oom: badness heuristic rewrite
This is a complete rewrite of the oom killer's badness() heuristic, which is
used to determine which task to kill in oom conditions.  The goal is to
make it as simple and predictable as possible so the results are better
understood and we end up killing the task which will lead to the most
memory freeing while still respecting the fine-tuning from userspace.

Instead of basing the heuristic on mm->total_vm for each task, the task's
rss and swap space are used.  This is a better indication of the amount of
memory that will be freeable if the oom killed task is chosen and
subsequently exits.  This helps specifically in cases where KDE or GNOME is
chosen for oom kill on desktop systems instead of a memory hogging task.

The baseline for the heuristic is the proportion of memory plus swap that
each task is currently using compared to the amount of "allowable" memory.
"Allowable," in this sense, means the system-wide resources for
unconstrained oom conditions, the set of mempolicy nodes, the mems attached
to current's cpuset, or a memory controller's limit.  The proportion is
given on a scale of 0 (never kill) to 1000 (always kill), roughly meaning
that a task with a badness() score of 500 consumes approximately 50% of
allowable memory resident in RAM or in swap space.

The proportion is always relative to the amount of "allowable" memory and
not the total amount of RAM systemwide, so that mempolicies and cpusets may
operate in isolation; they need not know the true size of the machine on
which they are running if they are bound to a specific set of nodes or
mems, respectively.

Root tasks are given 3% extra memory, just as __vm_enough_memory() provides
in LSMs.  In the event of two tasks consuming similar amounts of memory, it
is generally better to save root's task.

Because of the change in the badness() heuristic's baseline, it is also
necessary to introduce a new user interface to tune it.  It is not possible
to redefine the meaning of /proc/pid/oom_adj with a new scale, since the
ABI cannot be changed for backward compatibility.  Instead, a new tunable,
/proc/pid/oom_score_adj, is added that ranges from -1000 to +1000.  It may
be used to polarize the heuristic such that certain tasks are never
considered for oom kill while others may always be considered.  The value
is added directly into the badness() score, so a value of -500, for
example, means to discount 50% of its memory consumption in comparison to
other tasks either on the system, bound to the mempolicy, in the cpuset, or
sharing the same memory controller.

/proc/pid/oom_adj is changed so that its meaning is rescaled into the units
used by /proc/pid/oom_score_adj, and vice versa.  Changing one of these
per-task tunables will rescale the value of the other to an equivalent
meaning.  Although /proc/pid/oom_adj was originally defined as a bitshift
on the badness score, it now shares the same linear growth as
/proc/pid/oom_score_adj but with different granularity.  This is required
so the ABI is not broken for userspace applications and allows oom_adj to
be deprecated for future removal.
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-08-10 04:19:46 +04:00
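As a rough illustration of the scoring described above, here is a hypothetical user-space model of the proportion-based score and of the linear oom_adj/oom_score_adj mapping. It is a sketch only: the kernel's actual oom_badness() and the /proc rescaling code differ in detail (capping, special cases such as OOM_DISABLE, task state checks), and the function names below are invented for the example.

```c
#include <assert.h>

#define OOM_SCORE_ADJ_MAX	1000
#define OOM_DISABLE		(-17)	/* legacy oom_adj value that disables oom kill */

/*
 * Hypothetical model: a task's score is its (rss + swap) share of the
 * "allowable" pages on a 0..1000 scale, root gets a 3% discount, and
 * oom_score_adj is added directly into the score.
 */
static long badness_model(unsigned long rss, unsigned long swap,
			  unsigned long totalpages, long oom_score_adj,
			  int is_root)
{
	long points = (long)((rss + swap) * 1000 / totalpages);

	if (is_root)
		points -= 30;	/* the 3% bonus for root described above */

	points += oom_score_adj;

	if (points < 0)
		points = 0;
	return points;
}

/*
 * Hypothetical model of the legacy-tunable rescaling: oom_adj (-17..15)
 * grows linearly into oom_score_adj units (-1000..1000).
 */
static long oom_adj_to_score_adj(long oom_adj)
{
	return oom_adj * OOM_SCORE_ADJ_MAX / -OOM_DISABLE;
}
```

For example, a task holding half of the allowable pages with oom_score_adj 0 scores 500; setting oom_score_adj to -500 drops it to 0, so it is never chosen ahead of any task with a positive score.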
	/* Default to all available memory */
	oc->totalpages = totalram_pages + total_swap_pages;

	if (!IS_ENABLED(CONFIG_NUMA))
		return CONSTRAINT_NONE;
	if (!oc->zonelist)
		return CONSTRAINT_NONE;

	/*
	 * We reach here only when __GFP_NOFAIL is used, so we should avoid
	 * killing current; a random task would have to be killed in this case.
	 * Hopefully CONSTRAINT_THISNODE would apply, but there is no way to
	 * handle it for now.
	 */
	if (oc->gfp_mask & __GFP_THISNODE)
		return CONSTRAINT_NONE;

/*
oom: badness heuristic rewrite
This a complete rewrite of the oom killer's badness() heuristic which is
used to determine which task to kill in oom conditions. The goal is to
make it as simple and predictable as possible so the results are better
understood and we end up killing the task which will lead to the most
memory freeing while still respecting the fine-tuning from userspace.
Instead of basing the heuristic on mm->total_vm for each task, the task's
rss and swap space is used instead. This is a better indication of the
amount of memory that will be freeable if the oom killed task is chosen
and subsequently exits. This helps specifically in cases where KDE or
GNOME is chosen for oom kill on desktop systems instead of a memory
hogging task.
The baseline for the heuristic is a proportion of memory that each task is
currently using in memory plus swap compared to the amount of "allowable"
memory. "Allowable," in this sense, means the system-wide resources for
unconstrained oom conditions, the set of mempolicy nodes, the mems
attached to current's cpuset, or a memory controller's limit. The
proportion is given on a scale of 0 (never kill) to 1000 (always kill),
roughly meaning that if a task has a badness() score of 500 that the task
consumes approximately 50% of allowable memory resident in RAM or in swap
space.
The proportion is always relative to the amount of "allowable" memory and
not the total amount of RAM systemwide so that mempolicies and cpusets may
operate in isolation; they shall not need to know the true size of the
machine on which they are running if they are bound to a specific set of
nodes or mems, respectively.
Root tasks are given 3% extra memory just like __vm_enough_memory()
provides in LSMs. In the event of two tasks consuming similar amounts of
memory, it is generally better to save root's task.
Because of the change in the badness() heuristic's baseline, it is also
necessary to introduce a new user interface to tune it. It's not possible
to redefine the meaning of /proc/pid/oom_adj with a new scale since the
ABI cannot be changed for backward compatability. Instead, a new tunable,
/proc/pid/oom_score_adj, is added that ranges from -1000 to +1000. It may
be used to polarize the heuristic such that certain tasks are never
considered for oom kill while others may always be considered. The value
is added directly into the badness() score so a value of -500, for
example, means to discount 50% of its memory consumption in comparison to
other tasks either on the system, bound to the mempolicy, in the cpuset,
or sharing the same memory controller.
/proc/pid/oom_adj is changed so that its meaning is rescaled into the
units used by /proc/pid/oom_score_adj, and vice versa. Changing one of
these per-task tunables will rescale the value of the other to an
equivalent meaning. Although /proc/pid/oom_adj was originally defined as
a bitshift on the badness score, it now shares the same linear growth as
/proc/pid/oom_score_adj but with different granularity. This is required
so the ABI is not broken with userspace applications and allows oom_adj to
be deprecated for future removal.
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-08-10 04:19:46 +04:00
 * This is not a __GFP_THISNODE allocation, so a truncated nodemask in
 * the page allocator means a mempolicy is in effect.  Cpuset policy
 * is enforced in get_page_from_freelist().
 */
	if (oc->nodemask &&
	    !nodes_subset(node_states[N_MEMORY], *oc->nodemask)) {
		oc->totalpages = total_swap_pages;
		for_each_node_mask(nid, *oc->nodemask)
			oc->totalpages += node_spanned_pages(nid);
		return CONSTRAINT_MEMORY_POLICY;
	}

	/* Check whether this allocation failure is caused by cpuset's wall function */
	for_each_zone_zonelist_nodemask(zone, z, oc->zonelist,
			high_zoneidx, oc->nodemask)
		if (!cpuset_zone_allowed(zone, oc->gfp_mask))
			cpuset_limited = true;

	if (cpuset_limited) {
		oc->totalpages = total_swap_pages;
		for_each_node_mask(nid, cpuset_current_mems_allowed)
			oc->totalpages += node_spanned_pages(nid);
		return CONSTRAINT_CPUSET;
	}

	return CONSTRAINT_NONE;
}
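The nodemask and cpuset branches above both compute the "allowable" page count the same way: total swap plus the spanned pages of each permitted node. A minimal stand-alone sketch of that accumulation (node_pages and the allowed array are invented stand-ins for node_spanned_pages() and the kernel node mask):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical model of the totalpages accumulation in constrained_alloc():
 * start from total swap pages and add the spanned pages of every node that
 * the allocation is allowed to use.
 */
static unsigned long constrained_totalpages(const unsigned long *node_pages,
					    const int *allowed, size_t nnodes,
					    unsigned long total_swap_pages)
{
	unsigned long totalpages = total_swap_pages;

	for (size_t nid = 0; nid < nnodes; nid++)
		if (allowed[nid])
			totalpages += node_pages[nid];
	return totalpages;
}
```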
2016-10-08 02:57:23 +03:00
static int oom_evaluate_task ( struct task_struct * task , void * arg )
2012-08-01 03:43:40 +04:00
{
2016-10-08 02:57:23 +03:00
struct oom_control * oc = arg ;
unsigned long points ;
2015-09-09 01:00:36 +03:00
if ( oom_unkillable_task ( task , NULL , oc - > nodemask ) )
2016-10-08 02:57:23 +03:00
goto next ;
2012-08-01 03:43:40 +04:00
	/*
	 * This task already has access to memory reserves and is being killed.
2016-07-29 01:45:01 +03:00
	 * Don't allow any other task to have access to the reserves unless
2016-10-08 02:58:57 +03:00
	 * the task has MMF_OOM_SKIP because chances that it would release
2016-07-29 01:45:01 +03:00
	 * any memory are quite low.
2012-08-01 03:43:40 +04:00
	 */
2016-10-08 02:58:57 +03:00
	if (!is_sysrq_oom(oc) && tsk_is_oom_victim(task)) {
		if (test_bit(MMF_OOM_SKIP, &task->signal->oom_mm->flags))
2016-10-08 02:57:23 +03:00
			goto next;
		goto abort;
2016-07-29 01:45:01 +03:00
	}
2012-08-01 03:43:40 +04:00
2012-12-12 04:02:56 +04:00
	/*
	 * If task is allocating a lot of memory and has been marked to be
	 * killed first if it triggers an oom, then select it.
	 */
2016-10-08 02:57:23 +03:00
	if (oom_task_origin(task)) {
		points = ULONG_MAX;
		goto select;
	}
2012-12-12 04:02:56 +04:00
2016-10-08 02:57:23 +03:00
	points = oom_badness(task, NULL, oc->nodemask, oc->totalpages);
	if (!points || points < oc->chosen_points)
		goto next;
	/* Prefer thread group leaders for display purposes */
	if (points == oc->chosen_points && thread_group_leader(oc->chosen))
		goto next;
select:
	if (oc->chosen)
		put_task_struct(oc->chosen);
	get_task_struct(task);
	oc->chosen = task;
	oc->chosen_points = points;
next:
	return 0;
abort:
	if (oc->chosen)
		put_task_struct(oc->chosen);
	oc->chosen = (void *)-1UL;
	return 1;
2012-08-01 03:43:40 +04:00
}
2005-04-17 02:20:36 +04:00
/*
2016-10-08 02:57:23 +03:00
 * Simple selection loop. We choose the process with the highest number of
 * 'points'. In case scan was aborted, oc->chosen is set to -1.
2005-04-17 02:20:36 +04:00
 */
2016-10-08 02:57:23 +03:00
static void select_bad_process(struct oom_control *oc)
2005-04-17 02:20:36 +04:00
{
2016-10-08 02:57:23 +03:00
	if (is_memcg_oom(oc))
		mem_cgroup_scan_tasks(oc->memcg, oom_evaluate_task, oc);
	else {
		struct task_struct *p;
2014-01-24 03:53:34 +04:00
2016-10-08 02:57:23 +03:00
		rcu_read_lock();
		for_each_process(p)
			if (oom_evaluate_task(p, oc))
				break;
		rcu_read_unlock();
2014-01-22 03:49:58 +04:00
	}
2006-09-29 13:01:12 +04:00
2016-10-08 02:57:23 +03:00
	oc->chosen_points = oc->chosen_points * 1000 / oc->totalpages;
2005-04-17 02:20:36 +04:00
}
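The selection policy above reduces to "highest points wins, and oom_task_origin() tasks win outright"; a userspace sketch with a hypothetical toy_task record standing in for the per-task state the real loop consults:

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Hypothetical stand-in for the per-task state the real loop consults. */
struct toy_task {
	unsigned long points;	/* oom_badness() result */
	int oom_origin;		/* oom_task_origin(): kill this one first */
};

/* Index of the victim: highest points wins, origin tasks win outright. */
static size_t select_worst(const struct toy_task *tasks, size_t n)
{
	unsigned long best_points = 0;
	size_t best = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		unsigned long points =
			tasks[i].oom_origin ? ULONG_MAX : tasks[i].points;

		if (points > best_points) {
			best_points = points;
			best = i;
		}
	}
	return best;
}
```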
oom: add sysctl to enable task memory dump
Adds a new sysctl, 'oom_dump_tasks', that enables the kernel to produce a
dump of all system tasks (excluding kernel threads) when performing an
OOM-killing. Information includes pid, uid, tgid, vm size, rss, cpu,
oom_adj score, and name.
This is helpful for determining why there was an OOM condition and which
rogue task caused it.
It is configurable so that large systems, such as those with several
thousand tasks, do not incur a performance penalty associated with dumping
data they may not desire.
If an OOM was triggered as a result of a memory controller, the tasklist
shall be filtered to exclude tasks that are not a member of the same
cgroup.
Cc: Andrea Arcangeli <andrea@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-07 11:14:07 +03:00
/**
2008-03-20 03:00:42 +03:00
 * dump_tasks - dump current memory state of all system tasks
2012-06-20 23:53:01 +04:00
 * @memcg: current's memory controller, if constrained
2010-09-23 00:05:10 +04:00
 * @nodemask: nodemask passed to page allocator for mempolicy ooms
2008-03-20 03:00:42 +03:00
 *
2010-09-23 00:05:10 +04:00
 * Dumps the current memory state of all eligible tasks.  Tasks not in the same
 * memcg, not in the same cpuset, or bound to a disjoint set of mempolicy nodes
 * are not shown.
2017-11-16 04:35:40 +03:00
 * State information includes task's pid, uid, tgid, vm size, rss,
 * pgtables_bytes, swapents, oom_score_adj value, and name.
2008-02-07 11:14:07 +03:00
*/
2014-12-11 02:44:33 +03:00
static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
2008-02-07 11:14:07 +03:00
{
2010-08-10 04:18:46 +04:00
	struct task_struct *p;
	struct task_struct *task;
2008-02-07 11:14:07 +03:00
2018-08-22 07:52:41 +03:00
	pr_info("Tasks state (memory values in pages):\n");
	pr_info("[  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name\n");
2012-08-01 03:43:45 +04:00
	rcu_read_lock();
2010-08-10 04:18:46 +04:00
	for_each_process(p) {
2012-01-13 05:18:32 +04:00
		if (oom_unkillable_task(p, memcg, nodemask))
2008-11-06 23:53:29 +03:00
			continue;
2008-02-07 11:14:07 +03:00
2010-08-10 04:18:46 +04:00
		task = find_lock_task_mm(p);
		if (!task) {
2009-05-29 01:34:19 +04:00
			/*
2010-08-10 04:18:46 +04:00
			 * This is a kthread or all of p's threads have already
			 * detached their mm's.  There's no need to report
2010-08-10 04:18:46 +04:00
			 * them; they can't be oom killed anyway.
2009-05-29 01:34:19 +04:00
			 */
			continue;
		}
2010-08-10 04:18:46 +04:00
2018-08-22 07:52:41 +03:00
		pr_info("[%7d] %5d %5d %8lu %8lu %8ld %8lu %5hd %s\n",
2012-02-08 19:00:08 +04:00
			task->pid, from_kuid(&init_user_ns, task_uid(task)),
			task->tgid, task->mm->total_vm, get_mm_rss(task->mm),
2017-11-16 04:35:40 +03:00
			mm_pgtables_bytes(task->mm),
2012-08-01 03:42:56 +04:00
			get_mm_counter(task->mm, MM_SWAPENTS),
2010-08-10 04:19:46 +04:00
			task->signal->oom_score_adj, task->comm);
2010-08-10 04:18:46 +04:00
		task_unlock(task);
	}
2012-08-01 03:43:45 +04:00
	rcu_read_unlock();
2008-02-07 11:14:07 +03:00
}
2016-07-27 01:22:33 +03:00
static void dump_header(struct oom_control *oc, struct task_struct *p)
2009-12-15 04:57:47 +03:00
{
2017-11-16 04:39:14 +03:00
	pr_warn("%s invoked oom-killer: gfp_mask=%#x(%pGg), nodemask=%*pbl, order=%d, oom_score_adj=%hd\n",
		current->comm, oc->gfp_mask, &oc->gfp_mask,
		nodemask_pr_args(oc->nodemask), oc->order,
		current->signal->oom_score_adj);
2016-10-08 02:59:33 +03:00
	if (!IS_ENABLED(CONFIG_COMPACTION) && oc->order)
		pr_warn("COMPACTION is disabled!!!\n");
2016-03-16 00:56:05 +03:00
2015-11-06 05:48:05 +03:00
	cpuset_print_current_mems_allowed();
2009-12-15 04:57:47 +03:00
	dump_stack();
2017-11-16 04:32:07 +03:00
	if (is_memcg_oom(oc))
2016-07-27 01:22:33 +03:00
		mem_cgroup_print_oom_info(oc->memcg, p);
2017-11-16 04:32:07 +03:00
	else {
2017-02-25 01:55:42 +03:00
		show_mem(SHOW_MEM_FILTER_NODES, oc->nodemask);
2017-11-16 04:32:07 +03:00
		if (is_dump_unreclaim_slabs())
			dump_unreclaimable_slab();
	}
2009-12-15 04:57:47 +03:00
	if (sysctl_oom_dump_tasks)
2016-07-27 01:22:33 +03:00
		dump_tasks(oc->memcg, oc->nodemask);
2009-12-15 04:57:47 +03:00
}
2014-10-20 20:12:32 +04:00
/*
2015-02-12 02:26:24 +03:00
 * Number of OOM victims in flight
2014-10-20 20:12:32 +04:00
 */
2015-02-12 02:26:24 +03:00
static atomic_t oom_victims = ATOMIC_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(oom_victims_wait);
2014-10-20 20:12:32 +04:00
2016-10-08 02:57:23 +03:00
static bool oom_killer_disabled __read_mostly;
2014-10-20 20:12:32 +04:00
2016-03-26 00:20:30 +03:00
#define K(x) ((x) << (PAGE_SHIFT-10))
2016-05-20 03:13:12 +03:00
/*
 * task->mm can be NULL if the task is the exited group leader.  So to
 * determine whether the task is using a particular mm, we examine all the
 * task's threads: if one of those is using this mm then this task was also
 * using it.
 */
2016-07-29 01:44:43 +03:00
bool process_shares_mm(struct task_struct *p, struct mm_struct *mm)
2016-05-20 03:13:12 +03:00
{
	struct task_struct *t;

	for_each_thread(p, t) {
		struct mm_struct *t_mm = READ_ONCE(t->mm);

		if (t_mm)
			return t_mm == mm;
	}
	return false;
}
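The "first live thread decides" walk can be illustrated with a userspace analogue (mock data stands in for task_struct/mm_struct; the real code iterates for_each_thread under RCU):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Userspace analogue of process_shares_mm(): a "process" is modeled as
 * an array of per-thread mm pointers, NULL meaning that thread already
 * detached its mm.  All live threads of one process share one mm, so
 * the first non-NULL entry decides.
 */
static bool shares_mm_model(void *thread_mms[], size_t nthreads,
			    const void *mm)
{
	size_t i;

	for (i = 0; i < nthreads; i++) {
		if (thread_mms[i])
			return thread_mms[i] == mm;
	}
	return false;	/* every thread has already detached its mm */
}
```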
2016-03-26 00:20:24 +03:00
#ifdef CONFIG_MMU
/*
 * OOM Reaper kernel thread which tries to reap the memory used by the OOM
 * victim (if that is possible) to help the OOM killer to move on.
 */
static struct task_struct *oom_reaper_th;
static DECLARE_WAIT_QUEUE_HEAD(oom_reaper_wait);
2016-03-26 00:20:39 +03:00
static struct task_struct *oom_reaper_list;
2016-03-26 00:20:33 +03:00
static DEFINE_SPINLOCK(oom_reaper_lock);
2018-08-22 07:52:33 +03:00
bool __oom_reap_task_mm(struct mm_struct *mm)
2016-03-26 00:20:24 +03:00
{
	struct vm_area_struct *vma;
2018-08-22 07:52:33 +03:00
	bool ret = true;
2018-05-12 02:02:04 +03:00
	/*
	 * Tell all users of get_user/copy_from_user etc... that the content
	 * is no longer stable.  No barriers really needed because unmapping
	 * should imply barriers already and the reader would hit a page fault
	 * if it stumbled over a reaped memory.
	 */
	set_bit(MMF_UNSTABLE, &mm->flags);

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (!can_madv_dontneed_vma(vma))
			continue;

		/*
		 * Only anonymous pages have a good chance to be dropped
		 * without additional steps which we cannot afford as we
		 * are OOM already.
		 *
		 * We do not even care about fs backed pages because all
		 * which are reclaimable have already been reclaimed and
		 * we do not want to block exit_mmap by keeping mm ref
		 * count elevated without a good reason.
		 */
		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
			const unsigned long start = vma->vm_start;
			const unsigned long end = vma->vm_end;
			struct mmu_gather tlb;

			tlb_gather_mmu(&tlb, mm, start, end);
2018-08-22 07:52:33 +03:00
			if (mmu_notifier_invalidate_range_start_nonblock(mm, start, end)) {
2018-09-05 01:45:37 +03:00
				tlb_finish_mmu(&tlb, start, end);
2018-08-22 07:52:33 +03:00
				ret = false;
				continue;
			}
2018-05-12 02:02:04 +03:00
			unmap_page_range(&tlb, vma, start, end, NULL);
			mmu_notifier_invalidate_range_end(mm, start, end);
			tlb_finish_mmu(&tlb, start, end);
		}
	}
2018-08-22 07:52:33 +03:00
	return ret;
2018-05-12 02:02:04 +03:00
}
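The vma filter in the loop above (anonymous, or not VM_SHARED) is a simple predicate; a userspace sketch with mock flag bits (the values are illustrative, not the kernel's VM_* constants):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag bits; the real VM_* values live in the kernel headers. */
#define TOY_VM_SHARED	0x1UL
#define TOY_VM_ANON	0x2UL	/* stand-in for vma_is_anonymous() */

/*
 * Mirrors the reaper's eligibility test: anonymous mappings, or any
 * private (non-shared) mapping, are worth reaping; shared file-backed
 * mappings are skipped because dropping them may require work the OOM
 * path cannot afford.
 */
static bool reap_eligible(unsigned long flags)
{
	return (flags & TOY_VM_ANON) || !(flags & TOY_VM_SHARED);
}
```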
2018-08-22 07:52:45 +03:00
/*
 * Reaps the address space of the given task.
 *
 * Returns true on success and false if none or part of the address space
 * has been reclaimed and the caller should retry later.
 */
2018-05-12 02:02:04 +03:00
static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
{
2016-03-26 00:20:24 +03:00
	bool ret = true;

	if (!down_read_trylock(&mm->mmap_sem)) {
2017-07-11 01:49:05 +03:00
		trace_skip_task_reaping(tsk->pid);
2018-08-22 07:52:37 +03:00
		return false;
2017-10-04 02:14:50 +03:00
	}
2016-07-27 01:24:50 +03:00
	/*
2017-09-07 02:25:00 +03:00
	 * MMF_OOM_SKIP is set by exit_mmap when the OOM reaper can't
	 * work on the mm anymore.  The check for MMF_OOM_SKIP must run
	 * under mmap_sem for reading because it serializes against the
	 * down_write(); up_write() cycle in exit_mmap().
2016-07-27 01:24:50 +03:00
	 */
2017-09-07 02:25:00 +03:00
	if (test_bit(MMF_OOM_SKIP, &mm->flags)) {
2017-07-11 01:49:05 +03:00
		trace_skip_task_reaping(tsk->pid);
2018-08-22 07:52:45 +03:00
		goto out_unlock;
2016-03-26 00:20:24 +03:00
	}
2017-07-11 01:49:05 +03:00
	trace_start_task_reaping(tsk->pid);
2018-08-22 07:52:33 +03:00
	/* failed to reap part of the address space. Try again later */
2018-08-22 07:52:45 +03:00
	ret = __oom_reap_task_mm(mm);
	if (!ret)
		goto out_finish;
2016-03-26 00:20:24 +03:00
2016-03-26 00:20:30 +03:00
	pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
		task_pid_nr(tsk), tsk->comm,
		K(get_mm_counter(mm, MM_ANONPAGES)),
		K(get_mm_counter(mm, MM_FILEPAGES)),
		K(get_mm_counter(mm, MM_SHMEMPAGES)));
2018-08-22 07:52:45 +03:00
out_finish:
	trace_finish_task_reaping(tsk->pid);
out_unlock:
2016-03-26 00:20:24 +03:00
	up_read(&mm->mmap_sem);
2016-03-26 00:20:27 +03:00
2016-03-26 00:20:24 +03:00
	return ret;
}
2016-03-26 00:20:30 +03:00
#define MAX_OOM_REAP_RETRIES 10
2016-03-26 00:20:27 +03:00
static void oom_reap_task(struct task_struct *tsk)
2016-03-26 00:20:24 +03:00
{
	int attempts = 0;
2016-10-08 02:58:51 +03:00
	struct mm_struct *mm = tsk->signal->oom_mm;
2016-03-26 00:20:24 +03:00
	/* Retry the down_read_trylock(mmap_sem) a few times */
2018-05-12 02:02:04 +03:00
	while (attempts++ < MAX_OOM_REAP_RETRIES && !oom_reap_task_mm(tsk, mm))
2016-03-26 00:20:24 +03:00
		schedule_timeout_idle(HZ/10);
2018-04-06 02:25:45 +03:00
	if (attempts <= MAX_OOM_REAP_RETRIES ||
	    test_bit(MMF_OOM_SKIP, &mm->flags))
2016-10-08 02:58:45 +03:00
		goto done;
2016-07-29 01:44:58 +03:00
2016-10-08 02:58:45 +03:00
	pr_info("oom_reaper: unable to reap pid:%d (%s)\n",
		task_pid_nr(tsk), tsk->comm);
	debug_show_all_locks();
2016-03-26 00:20:30 +03:00
2016-10-08 02:58:45 +03:00
done:
2016-05-20 03:13:15 +03:00
	tsk->oom_reaper_list = NULL;
2016-10-08 02:58:51 +03:00
	/*
	 * Hide this mm from OOM killer because it has been either reaped or
	 * somebody can't call up_write(mmap_sem).
	 */
2016-10-08 02:58:57 +03:00
	set_bit(MMF_OOM_SKIP, &mm->flags);
2016-10-08 02:58:51 +03:00
2016-03-26 00:20:24 +03:00
	/* Drop a reference taken by wake_oom_reaper */
2016-03-26 00:20:27 +03:00
	put_task_struct(tsk);
2016-03-26 00:20:24 +03:00
}
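The retry policy in oom_reap_task (up to MAX_OOM_REAP_RETRIES attempts, sleeping between tries, then giving up and setting MMF_OOM_SKIP) is a generic bounded-retry pattern; a minimal userspace sketch with a simulated reap attempt (the helper name and -1 sentinel are illustrative):

```c
#include <assert.h>

#define TOY_MAX_RETRIES 10	/* mirrors MAX_OOM_REAP_RETRIES */

/*
 * Bounded-retry sketch of oom_reap_task(): attempt a (simulated) reap
 * until it succeeds or the budget runs out, returning the attempt count
 * or -1 on giving up.  The kernel additionally sleeps between tries
 * (schedule_timeout_idle(HZ/10)) and sets MMF_OOM_SKIP either way.
 */
static int attempts_until_success(int succeeds_on_attempt)
{
	int attempts = 0;

	while (attempts < TOY_MAX_RETRIES) {
		attempts++;
		if (attempts >= succeeds_on_attempt)	/* simulated try */
			return attempts;
	}
	return -1;	/* gave up; the kernel logs "unable to reap" here */
}
```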
static int oom_reaper(void *unused)
{
	while (true) {
2016-03-26 00:20:33 +03:00
		struct task_struct *tsk = NULL;
2016-03-26 00:20:24 +03:00
2016-03-26 00:20:39 +03:00
		wait_event_freezable(oom_reaper_wait, oom_reaper_list != NULL);
2016-03-26 00:20:33 +03:00
		spin_lock(&oom_reaper_lock);
2016-03-26 00:20:39 +03:00
		if (oom_reaper_list != NULL) {
			tsk = oom_reaper_list;
			oom_reaper_list = tsk->oom_reaper_list;
2016-03-26 00:20:33 +03:00
		}
		spin_unlock(&oom_reaper_lock);
		if (tsk)
			oom_reap_task(tsk);
2016-03-26 00:20:24 +03:00
	}
	return 0;
}
2016-10-08 02:57:23 +03:00
static void wake_oom_reaper(struct task_struct *tsk)
2016-03-26 00:20:24 +03:00
{
2016-04-02 00:31:34 +03:00
	/* tsk is already queued? */
	if (tsk == oom_reaper_list || tsk->oom_reaper_list)
2016-03-26 00:20:24 +03:00
		return;
2016-03-26 00:20:27 +03:00
	get_task_struct(tsk);
2016-03-26 00:20:24 +03:00
2016-03-26 00:20:33 +03:00
	spin_lock(&oom_reaper_lock);
2016-03-26 00:20:39 +03:00
	tsk->oom_reaper_list = oom_reaper_list;
	oom_reaper_list = tsk;
2016-03-26 00:20:33 +03:00
	spin_unlock(&oom_reaper_lock);
2017-07-11 01:49:05 +03:00
	trace_wake_reaper(tsk->pid);
2016-03-26 00:20:33 +03:00
	wake_up(&oom_reaper_wait);
2016-03-26 00:20:24 +03:00
}

static int __init oom_init(void)
{
	oom_reaper_th = kthread_run(oom_reaper, NULL, "oom_reaper");
	return 0;
}
subsys_initcall(oom_init)
2016-10-08 02:57:23 +03:00
#else
static inline void wake_oom_reaper(struct task_struct *tsk)
{
}
#endif /* CONFIG_MMU */
2016-03-26 00:20:24 +03:00
2015-02-12 02:26:12 +03:00
/**
2015-06-25 02:57:07 +03:00
 * mark_oom_victim - mark the given task as OOM victim
2015-02-12 02:26:12 +03:00
 * @tsk: task to mark
2015-02-12 02:26:24 +03:00
 *
2015-06-25 02:57:19 +03:00
 * Has to be called with oom_lock held and never after
2015-02-12 02:26:24 +03:00
 * oom has been disabled already.
2016-10-08 02:58:51 +03:00
 *
 * tsk->mm has to be non NULL and caller has to guarantee it is stable (either
 * under task_lock or operate on the current).
2015-02-12 02:26:12 +03:00
 */
2016-10-08 02:57:23 +03:00
static void mark_oom_victim(struct task_struct *tsk)
2015-02-12 02:26:12 +03:00
{
2016-10-08 02:58:51 +03:00
	struct mm_struct *mm = tsk->mm;
2015-02-12 02:26:24 +03:00
	WARN_ON(oom_killer_disabled);
	/* OOM killer might race with memcg OOM */
	if (test_and_set_tsk_thread_flag(tsk, TIF_MEMDIE))
		return;
2016-10-08 02:58:51 +03:00
	/* oom_mm is bound to the signal struct life time. */
2017-12-15 02:33:15 +03:00
	if (!cmpxchg(&tsk->signal->oom_mm, NULL, mm)) {
2017-02-28 01:30:07 +03:00
		mmgrab(tsk->signal->oom_mm);
2017-12-15 02:33:15 +03:00
		set_bit(MMF_OOM_VICTIM, &mm->flags);
	}
2016-10-08 02:58:51 +03:00
2015-02-12 02:26:15 +03:00
	/*
	 * Make sure that the task is woken up from uninterruptible sleep
	 * if it is frozen because OOM killer wouldn't be able to free
	 * any memory and livelock.  freezing_slow_path will tell the freezer
	 * that TIF_MEMDIE tasks should be ignored.
	 */
	__thaw_task(tsk);
2015-02-12 02:26:24 +03:00
	atomic_inc(&oom_victims);
2017-07-11 01:49:05 +03:00
	trace_mark_victim(tsk->pid);
2015-02-12 02:26:12 +03:00
}
/**
2015-06-25 02:57:07 +03:00
 * exit_oom_victim - note the exit of an OOM victim
2015-02-12 02:26:12 +03:00
 */
2016-10-08 02:59:03 +03:00
void exit_oom_victim(void)
2015-02-12 02:26:12 +03:00
{
2016-10-08 02:59:03 +03:00
	clear_thread_flag(TIF_MEMDIE);
2015-02-12 02:26:24 +03:00
2015-06-25 02:57:13 +03:00
	if (!atomic_dec_return(&oom_victims))
2015-02-12 02:26:24 +03:00
		wake_up_all(&oom_victims_wait);
}
2016-10-08 02:59:00 +03:00
/**
 * oom_killer_enable - enable OOM killer
 */
void oom_killer_enable(void)
{
	oom_killer_disabled = false;
2017-05-04 00:54:57 +03:00
	pr_info("OOM killer enabled.\n");
2016-10-08 02:59:00 +03:00
}
2015-02-12 02:26:24 +03:00
/**
 * oom_killer_disable - disable OOM killer
2016-10-08 02:59:00 +03:00
 * @timeout: maximum timeout to wait for oom victims in jiffies
2015-02-12 02:26:24 +03:00
 *
 * Forces all page allocations to fail rather than trigger OOM killer.
2016-10-08 02:59:00 +03:00
 * Will block and wait until all OOM victims are killed or the given
 * timeout expires.
2015-02-12 02:26:24 +03:00
 *
 * The function cannot be called when there are runnable user tasks because
 * the userspace would see unexpected allocation failures as a result.  Any
 * new usage of this function should be consulted with MM people.
 *
 * Returns true if successful and false if the OOM killer cannot be
 * disabled.
 */
2016-10-08 02:59:00 +03:00
bool oom_killer_disable(signed long timeout)
2015-02-12 02:26:24 +03:00
{
2016-10-08 02:59:00 +03:00
	signed long ret;
2015-02-12 02:26:24 +03:00
	/*
2016-03-18 00:20:45 +03:00
	 * Make sure to not race with an ongoing OOM killer.  Check that the
	 * current is not killed (possibly due to sharing the victim's memory).
2015-02-12 02:26:24 +03:00
	 */
2016-03-18 00:20:45 +03:00
	if (mutex_lock_killable(&oom_lock))
2015-02-12 02:26:24 +03:00
		return false;
	oom_killer_disabled = true;
2015-06-25 02:57:19 +03:00
	mutex_unlock(&oom_lock);
2015-02-12 02:26:24 +03:00
2016-10-08 02:59:00 +03:00
	ret = wait_event_interruptible_timeout(oom_victims_wait,
			!atomic_read(&oom_victims), timeout);
	if (ret <= 0) {
		oom_killer_enable();
		return false;
	}
2017-05-04 00:54:57 +03:00
	pr_info("OOM killer disabled.\n");
2015-02-12 02:26:24 +03:00
	return true;
}
2016-07-29 01:44:52 +03:00
static inline bool __task_will_free_mem(struct task_struct *task)
{
	struct signal_struct *sig = task->signal;

	/*
	 * A coredumping process may sleep for an extended period in exit_mm(),
	 * so the oom killer cannot assume that the process will promptly exit
	 * and release memory.
	 */
	if (sig->flags & SIGNAL_GROUP_COREDUMP)
		return false;

	if (sig->flags & SIGNAL_GROUP_EXIT)
		return true;

	if (thread_group_empty(task) && (task->flags & PF_EXITING))
		return true;

	return false;
}
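The flag checks in __task_will_free_mem reduce to a small decision table; an illustrative userspace model with stand-in flag values (the real SIGNAL_ and PF_ bit values differ and live in the kernel headers):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in flag values; the real SIGNAL_ and PF_ bits differ. */
#define TOY_SIGNAL_GROUP_EXIT		0x1U
#define TOY_SIGNAL_GROUP_COREDUMP	0x2U
#define TOY_PF_EXITING			0x4U

/*
 * Decision table of __task_will_free_mem(): a coredump blocks the
 * shortcut, group exit allows it, and a lone exiting thread also
 * qualifies.
 */
static bool will_free_mem_model(unsigned int sig_flags,
				unsigned int task_flags,
				bool thread_group_empty)
{
	if (sig_flags & TOY_SIGNAL_GROUP_COREDUMP)
		return false;
	if (sig_flags & TOY_SIGNAL_GROUP_EXIT)
		return true;
	if (thread_group_empty && (task_flags & TOY_PF_EXITING))
		return true;
	return false;
}
```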
/*
 * Checks whether the given task is dying or exiting and likely to
 * release its address space.  This means that all threads and processes
 * sharing the same mm have to be killed or exiting.
2016-07-29 01:45:04 +03:00
 * Caller has to make sure that task->mm is stable (hold task_lock or
 * it operates on the current).
2016-07-29 01:44:52 +03:00
 */
2016-10-08 02:57:23 +03:00
static bool task_will_free_mem(struct task_struct *task)
2016-07-29 01:44:52 +03:00
{
2016-07-29 01:45:04 +03:00
	struct mm_struct *mm = task->mm;
2016-07-29 01:44:52 +03:00
	struct task_struct *p;
2016-08-12 01:33:09 +03:00
	bool ret = true;
2016-07-29 01:44:52 +03:00
	/*
2016-07-29 01:45:04 +03:00
	 * Skip tasks without mm because it might have passed its exit_mm and
	 * exit_oom_victim.  oom_reaper could have rescued that but do not rely
	 * on that for now.  We can consider find_lock_task_mm in future.
2016-07-29 01:44:52 +03:00
	 */
2016-07-29 01:45:04 +03:00
	if (!mm)
2016-07-29 01:44:52 +03:00
		return false;
2016-07-29 01:45:04 +03:00
	if (!__task_will_free_mem(task))
		return false;
mm, oom: task_will_free_mem should skip oom_reaped tasks
The 0-day robot has encountered the following:
Out of memory: Kill process 3914 (trinity-c0) score 167 or sacrifice child
Killed process 3914 (trinity-c0) total-vm:55864kB, anon-rss:1512kB, file-rss:1088kB, shmem-rss:25616kB
oom_reaper: reaped process 3914 (trinity-c0), now anon-rss:0kB, file-rss:0kB, shmem-rss:26488kB
oom_reaper: reaped process 3914 (trinity-c0), now anon-rss:0kB, file-rss:0kB, shmem-rss:26900kB
oom_reaper: reaped process 3914 (trinity-c0), now anon-rss:0kB, file-rss:0kB, shmem-rss:26900kB
oom_reaper: reaped process 3914 (trinity-c0), now anon-rss:0kB, file-rss:0kB, shmem-rss:27296kB
oom_reaper: reaped process 3914 (trinity-c0), now anon-rss:0kB, file-rss:0kB, shmem-rss:28148kB
oom_reaper is trying to reap the same task again and again.
This is possible only when the oom killer is bypassed because of
task_will_free_mem because we skip over tasks with MMF_OOM_REAPED
already set during select_bad_process. Teach task_will_free_mem to skip
over MMF_OOM_REAPED tasks as well because they will be unlikely to free
anything more.
Analyzed by Tetsuo Handa.
Link: http://lkml.kernel.org/r/1466426628-15074-9-git-send-email-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-29 01:44:55 +03:00
	/*
	 * This task has already been drained by the oom reaper so there are
	 * only small chances it will free some more
	 */
2016-10-08 02:58:57 +03:00
	if (test_bit(MMF_OOM_SKIP, &mm->flags))
2016-07-29 01:44:55 +03:00
return false;
2016-07-29 01:45:04 +03:00
if (atomic_read(&mm->mm_users) <= 1)
2016-07-29 01:44:52 +03:00
return true;
/*
2016-10-08 02:57:32 +03:00
 * Make sure that all tasks which share the mm with the given task
 * are dying as well to make sure that a) nobody pins its mm and
 * b) the task is also reapable by the oom reaper.
2016-07-29 01:44:52 +03:00
 */
rcu_read_lock();
for_each_process(p) {
if (!process_shares_mm(p, mm))
continue;
if (same_thread_group(task, p))
continue;
ret = __task_will_free_mem(p);
if (!ret)
break;
}
rcu_read_unlock();
return ret;
}
mm, oom: refactor oom_kill_process()
Patch series "introduce memory.oom.group", v2.
This is a tiny implementation of cgroup-aware OOM killer, which adds an
ability to kill a cgroup as a single unit and so guarantee the integrity
of the workload.
Although it has only limited functionality in comparison to what now
resides in the mm tree (it doesn't change the victim task selection
algorithm, doesn't look at memory stats on the cgroup level, etc), it's
also much simpler and more straightforward. So, hopefully, we can avoid
having long debates here, as we had with the full implementation.
As it doesn't prevent any further development, and implements a useful and
complete feature, it looks like a sane way forward.
This patch (of 2):
oom_kill_process() consists of two logical parts: the first one is
responsible for considering task's children as a potential victim and
printing the debug information. The second half is responsible for
sending SIGKILL to all tasks sharing the mm struct with the given victim.
This commit splits oom_kill_process() with an intention to re-use the
second half: __oom_kill_process().
The cgroup-aware OOM killer will kill multiple tasks belonging to the
victim cgroup. We don't need to print the debug information for each
task, or play with task selection (considering task's children),
so we can't use the existing oom_kill_process().
Link: http://lkml.kernel.org/r/20171130152824.1591-2-guro@fb.com
Link: http://lkml.kernel.org/r/20180802003201.817-3-guro@fb.com
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: David Rientjes <rientjes@google.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-22 07:53:50 +03:00
static void __oom_kill_process(struct task_struct *victim)
2005-04-17 02:20:36 +04:00
{
2018-08-22 07:53:50 +03:00
struct task_struct *p;
2012-03-22 03:33:46 +04:00
struct mm_struct *mm;
2016-03-26 00:20:44 +03:00
bool can_oom_reap = true;
2005-04-17 02:20:36 +04:00
2012-08-01 03:43:45 +04:00
p = find_lock_task_mm(victim);
if (!p) {
put_task_struct(victim);
2012-03-22 03:33:46 +04:00
return;
2012-08-01 03:43:45 +04:00
} else if (victim != p) {
get_task_struct(p);
put_task_struct(victim);
victim = p;
}
2012-03-22 03:33:46 +04:00
2015-11-06 05:47:51 +03:00
/* Get a reference to safely compare mm after task_unlock(victim) */
2012-03-22 03:33:46 +04:00
mm = victim->mm;
2017-02-28 01:30:07 +03:00
mmgrab(mm);
2017-07-07 01:40:28 +03:00
/* Raise event before sending signal: task reaper must see this */
count_vm_event(OOM_KILL);
2018-06-15 01:28:05 +03:00
memcg_memory_event_mm(mm, MEMCG_OOM_KILL);
2017-07-07 01:40:28 +03:00
2015-11-06 05:47:44 +03:00
/*
2017-09-07 02:24:50 +03:00
* We should send SIGKILL before granting access to memory reserves
* in order to prevent the OOM victim from depleting the memory
* reserves from the user space under its control.
2015-11-06 05:47:44 +03:00
*/
2018-07-21 18:45:15 +03:00
do_send_sig_info(SIGKILL, SEND_SIG_FORCED, victim, PIDTYPE_TGID);
2015-06-25 02:57:07 +03:00
mark_oom_victim(victim);
2016-01-15 02:19:26 +03:00
pr_err("Killed process %d (%s) total-vm:%lukB, anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
2012-03-22 03:33:46 +04:00
task_pid_nr(victim), victim->comm, K(victim->mm->total_vm),
K(get_mm_counter(victim->mm, MM_ANONPAGES)),
2016-01-15 02:19:26 +03:00
K(get_mm_counter(victim->mm, MM_FILEPAGES)),
K(get_mm_counter(victim->mm, MM_SHMEMPAGES)));
2012-03-22 03:33:46 +04:00
task_unlock(victim);
/*
 * Kill all user processes sharing victim->mm in other thread groups, if
 * any. They don't get access to memory reserves, though, to avoid
 * depletion of all memory. This prevents mm->mmap_sem livelock when an
 * oom killed thread cannot exit because it requires the semaphore and
 * it's contended by another thread trying to allocate memory itself.
 * That thread will now get access to memory reserves since it has a
 * pending fatal signal.
 */
2014-01-22 03:50:01 +04:00
rcu_read_lock();
2015-11-06 05:48:23 +03:00
for_each_process(p) {
2015-11-06 05:48:26 +03:00
if (!process_shares_mm(p, mm))
2015-11-06 05:48:23 +03:00
continue;
if (same_thread_group(p, victim))
continue;
2016-10-08 02:59:09 +03:00
if (is_global_init(p)) {
2016-03-26 00:20:24 +03:00
can_oom_reap = false;
2016-10-08 02:58:57 +03:00
set_bit(MMF_OOM_SKIP, &mm->flags);
2016-07-29 01:45:01 +03:00
pr_info("oom killer %d (%s) has mm pinned by %d (%s)\n",
task_pid_nr(victim), victim->comm,
task_pid_nr(p), p->comm);
2015-11-06 05:48:23 +03:00
continue;
2016-03-26 00:20:24 +03:00
}
2016-10-08 02:59:09 +03:00
/*
 * No use_mm() user needs to read from the userspace so we are
 * ok to reap it.
 */
if (unlikely(p->flags & PF_KTHREAD))
continue;
2018-07-21 18:45:15 +03:00
do_send_sig_info(SIGKILL, SEND_SIG_FORCED, p, PIDTYPE_TGID);
2015-11-06 05:48:23 +03:00
}
2012-08-01 03:43:45 +04:00
rcu_read_unlock();
2012-03-22 03:33:46 +04:00
2016-03-26 00:20:24 +03:00
if (can_oom_reap)
2016-03-26 00:20:27 +03:00
wake_oom_reaper(victim);
2016-03-26 00:20:24 +03:00
2015-11-06 05:47:51 +03:00
mmdrop(mm);
2012-08-01 03:43:45 +04:00
put_task_struct(victim);
2005-04-17 02:20:36 +04:00
}
2012-03-22 03:33:46 +04:00
#undef K
2005-04-17 02:20:36 +04:00
2018-08-22 07:53:54 +03:00
/*
 * Kill provided task unless it's secured by setting
 * oom_score_adj to OOM_SCORE_ADJ_MIN.
 */
static int oom_kill_memcg_member(struct task_struct *task, void *unused)
{
if (task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
get_task_struct(task);
__oom_kill_process(task);
}
return 0;
}
2018-08-22 07:53:50 +03:00
static void oom_kill_process(struct oom_control *oc, const char *message)
{
struct task_struct *p = oc->chosen;
unsigned int points = oc->chosen_points;
struct task_struct *victim = p;
struct task_struct *child;
struct task_struct *t;
2018-08-22 07:53:54 +03:00
struct mem_cgroup *oom_group;
2018-08-22 07:53:50 +03:00
unsigned int victim_points = 0;
static DEFINE_RATELIMIT_STATE(oom_rs, DEFAULT_RATELIMIT_INTERVAL,
DEFAULT_RATELIMIT_BURST);
/*
 * If the task is already exiting, don't alarm the sysadmin or kill
 * its children or threads, just give it access to memory reserves
 * so it can die quickly
 */
task_lock(p);
if (task_will_free_mem(p)) {
mark_oom_victim(p);
wake_oom_reaper(p);
task_unlock(p);
put_task_struct(p);
return;
}
task_unlock(p);
if (__ratelimit(&oom_rs))
dump_header(oc, p);
pr_err("%s: Kill process %d (%s) score %u or sacrifice child\n",
message, task_pid_nr(p), p->comm, points);
/*
 * If any of p's children has a different mm and is eligible for kill,
 * the one with the highest oom_badness() score is sacrificed for its
 * parent. This attempts to lose the minimal amount of work done while
 * still freeing memory.
 */
read_lock(&tasklist_lock);
for_each_thread(p, t) {
list_for_each_entry(child, &t->children, sibling) {
unsigned int child_points;
if (process_shares_mm(child, p->mm))
continue;
/*
 * oom_badness() returns 0 if the thread is unkillable
 */
child_points = oom_badness(child,
oc->memcg, oc->nodemask, oc->totalpages);
if (child_points > victim_points) {
put_task_struct(victim);
victim = child;
victim_points = child_points;
get_task_struct(victim);
}
}
}
read_unlock(&tasklist_lock);
2018-08-22 07:53:54 +03:00
/*
 * Do we need to kill the entire memory cgroup?
 * Or even one of the ancestor memory cgroups?
 * Check this out before killing the victim task.
 */
oom_group = mem_cgroup_get_oom_group(victim, oc->memcg);
2018-08-22 07:53:50 +03:00
__oom_kill_process(victim);
2018-08-22 07:53:54 +03:00
/*
 * If necessary, kill all tasks in the selected memory cgroup.
 */
if (oom_group) {
mem_cgroup_print_oom_group(oom_group);
mem_cgroup_scan_tasks(oom_group, oom_kill_memcg_member, NULL);
mem_cgroup_put(oom_group);
}
2018-08-22 07:53:50 +03:00
}
2010-08-10 04:18:54 +04:00
/*
* Determines whether the kernel must panic because of the panic_on_oom sysctl.
*/
2016-10-08 02:57:23 +03:00
static void check_panic_on_oom(struct oom_control *oc,
enum oom_constraint constraint)
2010-08-10 04:18:54 +04:00
{
if (likely(!sysctl_panic_on_oom))
return;
if (sysctl_panic_on_oom != 2) {
/*
 * panic_on_oom == 1 only affects CONSTRAINT_NONE, the kernel
 * does not panic for cpuset, mempolicy, or memcg allocation
 * failures.
 */
if (constraint != CONSTRAINT_NONE)
return;
}
2015-09-09 01:00:42 +03:00
/* Do not panic for oom kills triggered by sysrq */
2015-11-07 03:28:06 +03:00
if (is_sysrq_oom(oc))
2015-09-09 01:00:42 +03:00
return;
2016-07-27 01:22:33 +03:00
dump_header(oc, NULL);
2010-08-10 04:18:54 +04:00
panic("Out of memory: %s panic_on_oom is enabled\n",
sysctl_panic_on_oom == 2 ? "compulsory" : "system-wide");
}
2006-09-26 10:31:20 +04:00
static BLOCKING_NOTIFIER_HEAD(oom_notify_list);
int register_oom_notifier(struct notifier_block *nb)
{
return blocking_notifier_chain_register(&oom_notify_list, nb);
}
EXPORT_SYMBOL_GPL(register_oom_notifier);
int unregister_oom_notifier(struct notifier_block *nb)
{
return blocking_notifier_chain_unregister(&oom_notify_list, nb);
}
EXPORT_SYMBOL_GPL(unregister_oom_notifier);
2005-04-17 02:20:36 +04:00
/**
2015-09-09 01:00:36 +03:00
* out_of_memory - kill the "best" process when we run out of memory
* @oc: pointer to struct oom_control
2005-04-17 02:20:36 +04:00
*
* If we run out of memory, we have the choice between either
* killing a random task (bad), letting the system crash (worse)
* OR try to be smart about which process to kill. Note that we
* don't have to be perfect here, we just have to be good.
*/
2015-09-09 01:00:36 +03:00
bool out_of_memory(struct oom_control *oc)
2005-04-17 02:20:36 +04:00
{
2006-09-26 10:31:20 +04:00
unsigned long freed = 0;
2010-08-10 04:18:55 +04:00
enum oom_constraint constraint = CONSTRAINT_NONE;
2006-09-26 10:31:20 +04:00
2015-06-25 02:57:19 +03:00
if (oom_killer_disabled)
return false;
2016-10-08 02:57:23 +03:00
if (!is_memcg_oom(oc)) {
blocking_notifier_call_chain(&oom_notify_list, 0, &freed);
if (freed > 0)
/* Got some memory back in the last second. */
return true;
}
2005-04-17 02:20:36 +04:00
2010-08-10 04:18:48 +04:00
/*
mm, oom: allow exiting threads to have access to memory reserves
Exiting threads, those with PF_EXITING set, can pagefault and require
memory before they can make forward progress. This happens, for instance,
when a process must fault task->robust_list, a userspace structure, before
detaching its memory.
These threads also aren't guaranteed to get access to memory reserves
unless oom killed or killed from userspace. The oom killer won't grant
memory reserves if other threads are also exiting other than current and
stalling at the same point. This prevents needlessly killing processes
when others are already exiting.
Instead of special casing all the possible situations between PF_EXITING
getting set and a thread detaching its mm where it may allocate memory,
which probably wouldn't get updated when a change is made to the exit
path, the solution is to give all exiting threads access to memory
reserves if they call the oom killer. This allows them to quickly
allocate, detach its mm, and free the memory it represents.
Summary of Luigi's bug report:
: He had an oom condition where threads were faulting on task->robust_list
: and repeatedly called the oom killer but it would defer killing a thread
: because it saw other PF_EXITING threads. This can happen anytime we need
: to allocate memory after setting PF_EXITING and before detaching our mm;
: if there are other threads in the same state then the oom killer won't do
: anything unless one of them happens to be killed from userspace.
:
: So instead of only deferring for PF_EXITING and !task->robust_list, it's
: better to just give them access to memory reserves to prevent a potential
: livelock so that any other faults that may be introduced in the future in
: the exit path don't cause the same problem (and hopefully we don't allow
: too many of those!).
Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Tested-by: Luigi Semenzato <semenzato@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-12-12 04:01:30 +04:00
* If current has a pending SIGKILL or is exiting, then automatically
* select it. The goal is to allow it to allocate so that it may
* quickly exit and free its memory.
2010-08-10 04:18:48 +04:00
*/
2016-07-29 01:45:04 +03:00
if (task_will_free_mem(current)) {
2015-06-25 02:57:07 +03:00
mark_oom_victim(current);
2016-07-29 01:44:52 +03:00
wake_oom_reaper(current);
2015-09-09 01:00:47 +03:00
return true;
2010-08-10 04:18:48 +04:00
}
2016-05-20 03:13:09 +03:00
/*
 * The OOM killer does not compensate for IO-less reclaim.
 * pagefault_out_of_memory lost its gfp context so we have to
 * make sure to exclude the 0 mask - all other users should have at
 * least ___GFP_DIRECT_RECLAIM to get here.
 */
2017-02-23 02:46:22 +03:00
if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS))
2016-05-20 03:13:09 +03:00
return true;
2006-02-21 05:27:52 +03:00
/*
 * Check if there were limitations on the allocation (only relevant for
2016-10-08 02:57:23 +03:00
 * NUMA and memcg) that may require different handling.
2006-02-21 05:27:52 +03:00
 */
2016-10-08 02:57:23 +03:00
constraint = constrained_alloc(oc);
2015-09-09 01:00:36 +03:00
if (constraint != CONSTRAINT_MEMORY_POLICY)
oc->nodemask = NULL;
2016-07-27 01:22:33 +03:00
check_panic_on_oom(oc, constraint);
2010-08-10 04:18:59 +04:00
2016-10-08 02:57:23 +03:00
if (!is_memcg_oom(oc) && sysctl_oom_kill_allocating_task &&
current->mm && !oom_unkillable_task(current, NULL, oc->nodemask) &&
2012-08-01 03:42:55 +04:00
current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
2012-08-01 03:43:45 +04:00
get_task_struct(current);
2016-10-08 02:57:23 +03:00
oc->chosen = current;
oom_kill_process(oc, "Out of memory (oom_kill_allocating_task)");
2015-09-09 01:00:47 +03:00
return true;
2010-08-10 04:18:59 +04:00
}
2016-10-08 02:57:23 +03:00
select_bad_process(oc);
2018-09-05 01:45:34 +03:00
/* Found nothing?!?! */
if (!oc->chosen) {
2016-07-27 01:22:33 +03:00
dump_header(oc, NULL);
2018-09-05 01:45:34 +03:00
pr_warn("Out of memory and no killable processes...\n");
/*
 * If we got here due to an actual allocation at the
 * system level, we cannot survive this and will enter
 * an endless loop in the allocator. Bail out now.
 */
if (!is_sysrq_oom(oc) && !is_memcg_oom(oc))
panic("System is deadlocked on memory\n");
2010-08-10 04:18:59 +04:00
}
2018-08-18 01:49:04 +03:00
if (oc->chosen && oc->chosen != (void *)-1UL)
2016-10-08 02:57:23 +03:00
oom_kill_process(oc, !is_memcg_oom(oc) ? "Out of memory" :
"Memory cgroup out of memory");
return !!oc->chosen;
2015-02-12 02:26:24 +03:00
}
2010-08-10 04:18:55 +04:00
/*
* The pagefault handler calls here because it is out of memory, so kill a
2016-07-27 01:22:30 +03:00
* memory-hogging task. If oom_lock is held by somebody else, a parallel oom
* killing is already in progress so do nothing.
2010-08-10 04:18:55 +04:00
*/
void pagefault_out_of_memory(void)
{
2015-09-09 01:00:36 +03:00
struct oom_control oc = {
.zonelist = NULL,
.nodemask = NULL,
2016-07-27 01:22:33 +03:00
.memcg = NULL,
2015-09-09 01:00:36 +03:00
.gfp_mask = 0,
.order = 0,
};
2013-10-17 00:46:59 +04:00
if (mem_cgroup_oom_synchronize(true))
2015-06-25 02:57:19 +03:00
return;
mm: memcg: do not trap chargers with full callstack on OOM
The memcg OOM handling is incredibly fragile and can deadlock. When a
task fails to charge memory, it invokes the OOM killer and loops right
there in the charge code until it succeeds. Comparably, any other task
that enters the charge path at this point will go to a waitqueue right
then and there and sleep until the OOM situation is resolved. The problem
is that these tasks may hold filesystem locks and the mmap_sem; locks that
the selected OOM victim may need to exit.
For example, in one reported case, the task invoking the OOM killer was
about to charge a page cache page during a write(), which holds the
i_mutex. The OOM killer selected a task that was just entering truncate()
and trying to acquire the i_mutex:
OOM invoking task:
mem_cgroup_handle_oom+0x241/0x3b0
mem_cgroup_cache_charge+0xbe/0xe0
add_to_page_cache_locked+0x4c/0x140
add_to_page_cache_lru+0x22/0x50
grab_cache_page_write_begin+0x8b/0xe0
ext3_write_begin+0x88/0x270
generic_file_buffered_write+0x116/0x290
__generic_file_aio_write+0x27c/0x480
generic_file_aio_write+0x76/0xf0 # takes ->i_mutex
do_sync_write+0xea/0x130
vfs_write+0xf3/0x1f0
sys_write+0x51/0x90
system_call_fastpath+0x18/0x1d
OOM kill victim:
do_truncate+0x58/0xa0 # takes i_mutex
do_last+0x250/0xa30
path_openat+0xd7/0x440
do_filp_open+0x49/0xa0
do_sys_open+0x106/0x240
sys_open+0x20/0x30
system_call_fastpath+0x18/0x1d
The OOM handling task will retry the charge indefinitely while the OOM
killed task is not releasing any resources.
A similar scenario can happen when the kernel OOM killer for a memcg is
disabled and a userspace task is in charge of resolving OOM situations.
In this case, ALL tasks that enter the OOM path will be made to sleep on
the OOM waitqueue and wait for userspace to free resources or increase
the group's limit. But a userspace OOM handler is prone to deadlock
itself on the locks held by the waiting tasks. For example one of the
sleeping tasks may be stuck in a brk() call with the mmap_sem held for
writing but the userspace handler, in order to pick an optimal victim,
may need to read files from /proc/<pid>, which tries to acquire the same
mmap_sem for reading and deadlocks.
This patch changes the way tasks behave after detecting a memcg OOM and
makes sure nobody loops or sleeps with locks held:
1. When OOMing in a user fault, invoke the OOM killer and restart the
fault instead of looping on the charge attempt. This way, the OOM
victim can not get stuck on locks the looping task may hold.
2. When OOMing in a user fault but somebody else is handling it
(either the kernel OOM killer or a userspace handler), don't go to
sleep in the charge context. Instead, remember the OOMing memcg in
the task struct and then fully unwind the page fault stack with
-ENOMEM. pagefault_out_of_memory() will then call back into the
memcg code to check if the -ENOMEM came from the memcg, and then
either put the task to sleep on the memcg's OOM waitqueue or just
restart the fault. The OOM victim can no longer get stuck on any
lock a sleeping task may hold.
Debugged by Michal Hocko.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: azurIt <azurit@pobox.sk>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-13 02:13:44 +04:00
2015-06-25 02:57:19 +03:00
if (!mutex_trylock(&oom_lock))
return;
2016-10-08 03:00:49 +03:00
out_of_memory(&oc);
2015-06-25 02:57:19 +03:00
mutex_unlock(&oom_lock);
2010-08-10 04:18:55 +04:00
}