/*
 * Frontswap frontend
 *
 * This code provides the generic "frontend" layer to call a matching
 * "backend" driver implementation of frontswap.  See
 * Documentation/vm/frontswap.txt for more information.
 *
 * Copyright (C) 2009-2012 Oracle Corp.  All rights reserved.
 * Author: Dan Magenheimer
 *
 * This work is licensed under the terms of the GNU GPL, version 2.
 */

#include <linux/mman.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/security.h>
#include <linux/module.h>
#include <linux/debugfs.h>
#include <linux/frontswap.h>
#include <linux/swapfile.h>

/*
 * The static key below is enabled upon the first backend registration
 * and is checked inline by the frontswap_enabled() hooks, so that
 * kernels with CONFIG_FRONTSWAP=y but no registered backend pay only
 * for a patched-out branch instead of a call into this file.
 */
DEFINE_STATIC_KEY_FALSE(frontswap_enabled_key);
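
/*
 * Illustrative only: the inline hooks in <linux/frontswap.h> are expected
 * to test this key roughly as
 *
 *	static inline bool frontswap_enabled(void)
 *	{
 *		return static_branch_unlikely(&frontswap_enabled_key);
 *	}
 *
 * so that, with no backend registered, each hook costs only a patched-out
 * branch.
 */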

/*
 * frontswap_ops are added by frontswap_register_ops, and provide the
 * frontswap "backend" implementation functions.  Multiple implementations
 * may be registered, but implementations can never deregister.  This
 * is a simple singly-linked list of all registered implementations.
 */
static struct frontswap_ops *frontswap_ops __read_mostly;

#define for_each_frontswap_ops(ops)		\
	for ((ops) = frontswap_ops; (ops); (ops) = (ops)->next)

/*
 * If enabled, frontswap_store will return failure even on success.  As
 * a result, the swap subsystem will always write the page to swap, in
 * effect converting frontswap into a writethrough cache.  In this mode,
 * there is no direct reduction in swap writes, but a frontswap backend
 * can unilaterally "reclaim" any pages in use with no data loss, thus
 * providing increased control over maximum memory usage due to frontswap.
 */
static bool frontswap_writethrough_enabled __read_mostly;

/*
 * If enabled, the underlying tmem implementation is capable of doing
 * exclusive gets, so frontswap_load, on a successful tmem_get must
 * mark the page as no longer in frontswap AND mark it dirty.
 */
static bool frontswap_tmem_exclusive_gets_enabled __read_mostly;

#ifdef CONFIG_DEBUG_FS
/*
 * Counters available via /sys/kernel/debug/frontswap (if debugfs is
 * properly configured).  These are for information only so are not protected
 * against increment races.
 */
static u64 frontswap_loads;
static u64 frontswap_succ_stores;
static u64 frontswap_failed_stores;
static u64 frontswap_invalidates;

static inline void inc_frontswap_loads(void) {
	frontswap_loads++;
}
static inline void inc_frontswap_succ_stores(void) {
	frontswap_succ_stores++;
}
static inline void inc_frontswap_failed_stores(void) {
	frontswap_failed_stores++;
}
static inline void inc_frontswap_invalidates(void) {
	frontswap_invalidates++;
}
#else
static inline void inc_frontswap_loads(void) { }
static inline void inc_frontswap_succ_stores(void) { }
static inline void inc_frontswap_failed_stores(void) { }
static inline void inc_frontswap_invalidates(void) { }
#endif

/*
 * Due to the asynchronous nature of the backends loading potentially
 * _after_ the swap system has been activated, we have chokepoints
 * on all frontswap functions to not call the backend until the backend
 * has registered.
 *
 * This would not guard us against the user deciding to call swapoff right as
 * we are calling the backend to initialize (so swapon is in action).
 * Fortunately for us, the swapon_mutex has been taken by the caller so we are
 * OK.  The other scenario where calls to frontswap_store (called via
 * swap_writepage) is racing with frontswap_invalidate_area (called via
 * swapoff) is again guarded by the swap subsystem.
 *
 * While no backend is registered all calls to frontswap_[store|load|
 * invalidate_area|invalidate_page] are ignored or fail.
 *
 * The time between the backend being registered and the swap file system
 * calling the backend (via the frontswap_* functions) is indeterminate as
 * frontswap_ops is not atomic_t (or a value guarded by a spinlock).
 * That is OK as we are comfortable missing some of these calls to the newly
 * registered backend.
 *
 * Obviously the opposite (unloading the backend) must be done after all
 * the frontswap_[store|load|invalidate_area|invalidate_page] start
 * ignoring or failing the requests.  However, there is currently no way
 * to unload a backend once it is registered.
 */

/*
 * Register operations for frontswap
 */
void frontswap_register_ops(struct frontswap_ops *ops)
{
	DECLARE_BITMAP(a, MAX_SWAPFILES);
	DECLARE_BITMAP(b, MAX_SWAPFILES);
	struct swap_info_struct *si;
	unsigned int i;

	bitmap_zero(a, MAX_SWAPFILES);
	bitmap_zero(b, MAX_SWAPFILES);

	spin_lock(&swap_lock);
	plist_for_each_entry(si, &swap_active_head, list) {
		if (!WARN_ON(!si->frontswap_map))
			set_bit(si->type, a);
	}
	spin_unlock(&swap_lock);

	/* the new ops needs to know the currently active swap devices */
	for_each_set_bit(i, a, MAX_SWAPFILES)
		ops->init(i);

	/*
	 * Setting frontswap_ops must happen after the ops->init() calls
	 * above; cmpxchg implies smp_mb() which will ensure the init is
	 * complete at this point.
	 */
	do {
		ops->next = frontswap_ops;
	} while (cmpxchg(&frontswap_ops, ops->next, ops) != ops->next);

	static_branch_inc(&frontswap_enabled_key);

	spin_lock(&swap_lock);
	plist_for_each_entry(si, &swap_active_head, list) {
		if (si->frontswap_map)
			set_bit(si->type, b);
	}
	spin_unlock(&swap_lock);

	/*
	 * On the very unlikely chance that a swap device was added or
	 * removed between setting the "a" list bits and the ops init
	 * calls, we re-check and do init or invalidate for any changed
	 * bits.
	 */
	if (unlikely(!bitmap_equal(a, b, MAX_SWAPFILES))) {
		for (i = 0; i < MAX_SWAPFILES; i++) {
			if (!test_bit(i, a) && test_bit(i, b))
				ops->init(i);
			else if (test_bit(i, a) && !test_bit(i, b))
				ops->invalidate_area(i);
		}
	}
}
EXPORT_SYMBOL(frontswap_register_ops);
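
/*
 * Illustrative sketch, not part of the frontswap core: a minimal backend
 * registration.  The callback signatures are inferred from the call sites
 * above; see <linux/frontswap.h> for the authoritative prototypes.  The
 * "nullswap" names and the FRONTSWAP_EXAMPLE_BACKEND guard are hypothetical.
 */
#ifdef FRONTSWAP_EXAMPLE_BACKEND
static void nullswap_init(unsigned type)
{
	/* called once per swapon'd device, before any store/load for it */
}

static int nullswap_store(unsigned type, pgoff_t offset, struct page *page)
{
	return -1;	/* reject, so the page goes to the real swap device */
}

static int nullswap_load(unsigned type, pgoff_t offset, struct page *page)
{
	return -1;	/* nothing is ever stored, so nothing can be loaded */
}

static void nullswap_invalidate_page(unsigned type, pgoff_t offset)
{
}

static void nullswap_invalidate_area(unsigned type)
{
}

static struct frontswap_ops nullswap_ops = {
	.init		 = nullswap_init,
	.store		 = nullswap_store,
	.load		 = nullswap_load,
	.invalidate_page = nullswap_invalidate_page,
	.invalidate_area = nullswap_invalidate_area,
};

static int __init nullswap_module_init(void)
{
	frontswap_register_ops(&nullswap_ops);
	return 0;
}
#endif /* FRONTSWAP_EXAMPLE_BACKEND */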

/*
 * Enable/disable frontswap writethrough (see above).
 */
void frontswap_writethrough(bool enable)
{
	frontswap_writethrough_enabled = enable;
}
EXPORT_SYMBOL(frontswap_writethrough);
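
/*
 * Illustrative only: a backend whose pages may be reclaimed at any time
 * (a pure best-effort cache) would typically flip this once at
 * registration time, e.g. frontswap_writethrough(true);
 */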

/*
 * Enable/disable frontswap exclusive gets (see above).
 */
void frontswap_tmem_exclusive_gets(bool enable)
{
	frontswap_tmem_exclusive_gets_enabled = enable;
}
EXPORT_SYMBOL(frontswap_tmem_exclusive_gets);
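
/*
 * Illustrative only: a tmem backend that frees its local copy on every
 * successful get would advertise that once, alongside registration,
 * e.g. frontswap_tmem_exclusive_gets(true);
 */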

/*
 * Called when a swap device is swapon'd.
 */
void __frontswap_init(unsigned type, unsigned long *map)
{
	struct swap_info_struct *sis = swap_info[type];
	struct frontswap_ops *ops;

	VM_BUG_ON(sis == NULL);

	/*
	 * p->frontswap is a bitmap that we MUST have to figure out which page
	 * has gone in frontswap.  Without it there is no point of continuing.
	 */
	if (WARN_ON(!map))
		return;
	/*
	 * Regardless of whether the frontswap backend has been loaded
	 * before this function or it will be later, we _MUST_ have the
	 * p->frontswap set to something valid to work properly.
	 */
	frontswap_map_set(sis, map);

	for_each_frontswap_ops(ops)
		ops->init(type);
}
EXPORT_SYMBOL(__frontswap_init);

bool __frontswap_test(struct swap_info_struct *sis,
				pgoff_t offset)
{
	if (sis->frontswap_map)
		return test_bit(offset, sis->frontswap_map);
	return false;
}
EXPORT_SYMBOL(__frontswap_test);

static inline void __frontswap_set(struct swap_info_struct *sis,
				   pgoff_t offset)
{
	set_bit(offset, sis->frontswap_map);
	atomic_inc(&sis->frontswap_pages);
}

static inline void __frontswap_clear(struct swap_info_struct *sis,
				     pgoff_t offset)
{
	clear_bit(offset, sis->frontswap_map);
	atomic_dec(&sis->frontswap_pages);
}

/*
 * "Store" data from a page to frontswap and associate it with the page's
 * swaptype and offset.  Page must be locked and in the swap cache.
 * If frontswap already contains a page with matching swaptype and
 * offset, the frontswap implementation may either overwrite the data and
 * return success or invalidate the page from frontswap and return failure.
 */
int __frontswap_store(struct page *page)
{
	int ret = -1;
	swp_entry_t entry = { .val = page_private(page), };
	int type = swp_type(entry);
	struct swap_info_struct *sis = swap_info[type];
	pgoff_t offset = swp_offset(entry);
	struct frontswap_ops *ops;

	VM_BUG_ON(!frontswap_ops);
	VM_BUG_ON(!PageLocked(page));
	VM_BUG_ON(sis == NULL);

	/*
	 * If a dup, we must remove the old page first; we can't leave the
	 * old page no matter if the store of the new page succeeds or fails,
	 * and we can't rely on the new page replacing the old page as we may
	 * not store to the same implementation that contains the old page.
	 */
	if (__frontswap_test(sis, offset)) {
		__frontswap_clear(sis, offset);
		for_each_frontswap_ops(ops)
			ops->invalidate_page(type, offset);
	}

	/* Try to store in each implementation, until one succeeds. */
	for_each_frontswap_ops(ops) {
		ret = ops->store(type, offset, page);
		if (!ret) /* successful store */
			break;
	}

	if (ret == 0) {
		__frontswap_set(sis, offset);
		inc_frontswap_succ_stores();
	} else {
		inc_frontswap_failed_stores();
	}
	if (frontswap_writethrough_enabled)
		/* report failure so swap also writes to swap device */
		ret = -1;
	return ret;
}
EXPORT_SYMBOL(__frontswap_store);
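
/*
 * Illustrative only: the swap code reaches this via the inline wrappers
 * in <linux/frontswap.h>, which are expected to look roughly like
 *
 *	static inline int frontswap_store(struct page *page)
 *	{
 *		return frontswap_enabled() ? __frontswap_store(page) : -1;
 *	}
 *
 * with an analogous wrapper for __frontswap_load() below.
 */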

/*
 * "Get" data from frontswap associated with swaptype and offset that were
 * specified when the data was put to frontswap and use it to fill the
 * specified page with data.  Page must be locked and in the swap cache.
 */
int __frontswap_load(struct page *page)
{
	int ret = -1;
	swp_entry_t entry = { .val = page_private(page), };
	int type = swp_type(entry);
	struct swap_info_struct *sis = swap_info[type];
	pgoff_t offset = swp_offset(entry);
	struct frontswap_ops *ops;

	VM_BUG_ON(!frontswap_ops);
	VM_BUG_ON(!PageLocked(page));
	VM_BUG_ON(sis == NULL);

	if (!__frontswap_test(sis, offset))
		return -1;

	/* Try loading from each implementation, until one succeeds. */
	for_each_frontswap_ops(ops) {
		ret = ops->load(type, offset, page);
		if (!ret) /* successful load */
			break;
	}

	if (ret == 0) {
		inc_frontswap_loads();
		if (frontswap_tmem_exclusive_gets_enabled) {
			SetPageDirty(page);
			__frontswap_clear(sis, offset);
		}
	}

	return ret;
}
EXPORT_SYMBOL(__frontswap_load);

/*
 * Invalidate any data from frontswap associated with the specified swaptype
 * and offset so that a subsequent "get" will fail.
 */
void __frontswap_invalidate_page(unsigned type, pgoff_t offset)
{
	struct swap_info_struct *sis = swap_info[type];
	struct frontswap_ops *ops;

	VM_BUG_ON(!frontswap_ops);
	VM_BUG_ON(sis == NULL);

	if (!__frontswap_test(sis, offset))
		return;

	for_each_frontswap_ops(ops)
		ops->invalidate_page(type, offset);
	__frontswap_clear(sis, offset);
	inc_frontswap_invalidates();
}
EXPORT_SYMBOL(__frontswap_invalidate_page);

/*
 * Invalidate all data from frontswap associated with all offsets for the
 * specified swaptype.
 */
void __frontswap_invalidate_area(unsigned type)
{
	struct swap_info_struct *sis = swap_info[type];
	struct frontswap_ops *ops;

	VM_BUG_ON(!frontswap_ops);
	VM_BUG_ON(sis == NULL);

	if (sis->frontswap_map == NULL)
		return;

	for_each_frontswap_ops(ops)
		ops->invalidate_area(type);
	atomic_set(&sis->frontswap_pages, 0);
	bitmap_zero(sis->frontswap_map, sis->max);
}
EXPORT_SYMBOL(__frontswap_invalidate_area);

static unsigned long __frontswap_curr_pages(void)
{
	unsigned long totalpages = 0;
	struct swap_info_struct *si = NULL;

	assert_spin_locked(&swap_lock);
	plist_for_each_entry(si, &swap_active_head, list)
		totalpages += atomic_read(&si->frontswap_pages);
	return totalpages;
}

static int __frontswap_unuse_pages(unsigned long total, unsigned long *unused,
					int *swapid)
{
	int ret = -EINVAL;
	struct swap_info_struct *si = NULL;
	int si_frontswap_pages;
	unsigned long total_pages_to_unuse = total;
	unsigned long pages = 0, pages_to_unuse = 0;

	assert_spin_locked(&swap_lock);
	plist_for_each_entry(si, &swap_active_head, list) {
		si_frontswap_pages = atomic_read(&si->frontswap_pages);
		if (total_pages_to_unuse < si_frontswap_pages) {
			pages = pages_to_unuse = total_pages_to_unuse;
		} else {
			pages = si_frontswap_pages;
			pages_to_unuse = 0; /* unuse all */
		}
		/* ensure there is enough RAM to fetch pages from frontswap */
		if (security_vm_enough_memory_mm(current->mm, pages)) {
			ret = -ENOMEM;
			continue;
		}
		vm_unacct_memory(pages);
		*unused = pages_to_unuse;
		*swapid = si->type;
		ret = 0;
		break;
	}

	return ret;
}

/*
 * Used to check if it's necessary and feasible to unuse pages.
 * Return 1 when nothing to do, 0 when need to shrink pages,
 * error code when there is an error.
 */
static int __frontswap_shrink(unsigned long target_pages,
				unsigned long *pages_to_unuse,
				int *type)
{
	unsigned long total_pages = 0, total_pages_to_unuse;

	assert_spin_locked(&swap_lock);

	total_pages = __frontswap_curr_pages();
	if (total_pages <= target_pages) {
		/* Nothing to do */
		*pages_to_unuse = 0;
		return 1;
	}
	total_pages_to_unuse = total_pages - target_pages;
	return __frontswap_unuse_pages(total_pages_to_unuse, pages_to_unuse, type);
}

/*
 * Frontswap, like a true swap device, may unnecessarily retain pages
 * under certain circumstances; "shrink" frontswap is essentially a
 * "partial swapoff" and works by calling try_to_unuse to attempt to
 * unuse enough frontswap pages to attempt to -- subject to memory
 * constraints -- reduce the number of pages in frontswap to the
 * number given in the parameter target_pages.
 */
void frontswap_shrink(unsigned long target_pages)
{
	unsigned long pages_to_unuse = 0;
	int uninitialized_var(type), ret;

	/*
	 * we don't want to hold swap_lock while doing a very
	 * lengthy try_to_unuse, but swap_list may change
	 * so restart scan from swap_active_head each time
	 */
	spin_lock(&swap_lock);
	ret = __frontswap_shrink(target_pages, &pages_to_unuse, &type);
	spin_unlock(&swap_lock);
	if (ret == 0)
		try_to_unuse(type, true, pages_to_unuse);
	return;
}
EXPORT_SYMBOL(frontswap_shrink);

/*
 * Count and return the number of frontswap pages across all
 * swap devices.  This is exported so that backend drivers can
 * determine current usage without reading debugfs.
 */
unsigned long frontswap_curr_pages(void)
{
	unsigned long totalpages = 0;

	spin_lock(&swap_lock);
	totalpages = __frontswap_curr_pages();
	spin_unlock(&swap_lock);

	return totalpages;
}
EXPORT_SYMBOL(frontswap_curr_pages);
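
/*
 * Illustrative sketch: a backend under memory pressure could cap frontswap
 * at half of its current footprint using the two exported entry points
 * above.  The helper name and the FRONTSWAP_EXAMPLE_BACKEND guard are
 * hypothetical.
 */
#ifdef FRONTSWAP_EXAMPLE_BACKEND
static void example_relieve_pressure(void)
{
	unsigned long cur = frontswap_curr_pages();

	if (cur)
		/* "partial swapoff": move pages back to RAM or real swap */
		frontswap_shrink(cur / 2);
}
#endif /* FRONTSWAP_EXAMPLE_BACKEND */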

static int __init init_frontswap(void)
{
#ifdef CONFIG_DEBUG_FS
	struct dentry *root = debugfs_create_dir("frontswap", NULL);

	if (root == NULL)
		return -ENXIO;
	debugfs_create_u64("loads", S_IRUGO, root, &frontswap_loads);
	debugfs_create_u64("succ_stores", S_IRUGO, root, &frontswap_succ_stores);
	debugfs_create_u64("failed_stores", S_IRUGO, root,
			   &frontswap_failed_stores);
	debugfs_create_u64("invalidates", S_IRUGO,
			   root, &frontswap_invalidates);
#endif
	return 0;
}

module_init(init_frontswap);