// SPDX-License-Identifier: GPL-2.0+
/*
 * Copyright (C) 2016 Oracle.  All Rights Reserved.
 * Author: Darrick J. Wong <darrick.wong@oracle.com>
 */
#include "xfs.h"
#include "xfs_fs.h"
#include "xfs_shared.h"
#include "xfs_format.h"
#include "xfs_log_format.h"
#include "xfs_trans_resv.h"
#include "xfs_mount.h"
#include "xfs_defer.h"
#include "xfs_trans.h"
#include "xfs_buf_item.h"
#include "xfs_inode.h"
#include "xfs_inode_item.h"
#include "xfs_trace.h"

/*
 * Deferred Operations in XFS
 *
 * Due to the way locking rules work in XFS, certain transactions (block
 * mapping and unmapping, typically) have permanent reservations so that
 * we can roll the transaction to adhere to AG locking order rules and
 * to unlock buffers between metadata updates.  Prior to rmap/reflink,
 * the mapping code had a mechanism to perform these deferrals for
 * extents that were going to be freed; this code makes that facility
 * more generic.
 *
 * When adding the reverse mapping and reflink features, it became
 * necessary to perform complex remapping multi-transactions to comply
 * with AG locking order rules, and to be able to spread a single
 * refcount update operation (an operation on an n-block extent can
 * update as many as n records!) among multiple transactions.  XFS can
 * roll a transaction to facilitate this, but using this facility
 * requires us to log "intent" items in case log recovery needs to
 * redo the operation, and to log "done" items to indicate that redo
 * is not necessary.
 *
 * Deferred work is tracked in xfs_defer_pending items.  Each pending
 * item tracks one type of deferred work.  Incoming work items (which
 * have not yet had an intent logged) are attached to a pending item
 * on the dop_intake list, where they wait for the caller to finish
 * the deferred operations.
 *
 * Finishing a set of deferred operations is an involved process.  To
 * start, we define "rolling a deferred-op transaction" as follows:
 *
 * > For each xfs_defer_pending item on the dop_intake list,
 *   - Sort the work items in AG order.  XFS locking
 *     order rules require us to lock buffers in AG order.
 *   - Create a log intent item for that type.
 *   - Attach it to the pending item.
 *   - Move the pending item from the dop_intake list to the
 *     dop_pending list.
 * > Roll the transaction.
 *
 * NOTE: To avoid exceeding the transaction reservation, we limit the
 * number of items that we attach to a given xfs_defer_pending.
 *
 * The actual finishing process looks like this:
 *
 * > For each xfs_defer_pending in the dop_pending list,
 *   - Roll the deferred-op transaction as above.
 *   - Create a log done item for that type, and attach it to the
 *     log intent item.
 *   - For each work item attached to the log intent item,
 *     * Perform the described action.
 *     * Attach the work item to the log done item.
 *     * If the result of doing the work was -EAGAIN, ->finish_item
 *       wants a new transaction; see the "Requesting a Fresh
 *       Transaction while Finishing Deferred Work" section below for
 *       details.
 *
 * The key here is that we must log an intent item for all pending
 * work items every time we roll the transaction, and that we must log
 * a done item as soon as the work is completed.  With this mechanism
 * we can perform complex remapping operations, chaining intent items
 * as needed.
 *
 * Requesting a Fresh Transaction while Finishing Deferred Work
 *
 * If ->finish_item decides that it needs a fresh transaction to
 * finish the work, it must ask its caller (xfs_defer_finish) for a
 * continuation.  The most likely cause of this circumstance is the
 * refcount adjust functions deciding that they've logged enough items
 * to be at risk of exceeding the transaction reservation.
 *
 * To get a fresh transaction, we want to log the existing log done
 * item to prevent the log intent item from replaying, immediately log
 * a new log intent item with the unfinished work items, roll the
 * transaction, and re-call ->finish_item wherever it left off.  The
 * log done item and the new log intent item must be in the same
 * transaction or atomicity cannot be guaranteed; defer_finish ensures
 * that this happens.
 *
 * This requires some coordination between ->finish_item and
 * defer_finish.  Upon deciding to request a new transaction,
 * ->finish_item should update the current work item to reflect the
 * unfinished work.  Next, it should reset the log done item's list
 * count to the number of items finished, and return -EAGAIN.
 * defer_finish sees the -EAGAIN, logs the new log intent item
 * with the remaining work items, and leaves the xfs_defer_pending
 * item at the head of the dop_work queue.  Then it rolls the
 * transaction and picks up processing where it left off.
 * ->finish_item is required to leave enough transaction reservation
 * to fit the new log intent item.
 *
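 * As a rough sketch of the implementer's side of that handshake (the
 * xfs_foo_* name is hypothetical; the signature is inferred from the
 * ->finish_item call site in xfs_defer_finish_noroll below):
 *
 *	STATIC int
 *	xfs_foo_finish_item(
 *		struct xfs_trans	*tp,
 *		struct list_head	*item,
 *		void			*done_item,
 *		void			**state)
 *	{
 *		do as much of the described work as the remaining
 *		reservation safely allows;
 *		if (work remains) {
 *			update the work item to cover only the
 *			unfinished part;
 *			set the done item's count to what finished;
 *			return -EAGAIN;
 *		}
 *		return 0;
 *	}
 *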
 * This is an example of remapping the extent (E, E+B) into file X at
 * offset A and dealing with the extent (C, C+B) already being mapped
 * there:
 * +-------------------------------------------------+
 * | Unmap file X startblock C offset A length B     | t0
 * | Intent to reduce refcount for extent (C, B)     |
 * | Intent to remove rmap (X, C, A, B)              |
 * | Intent to free extent (D, 1) (bmbt block)       |
 * | Intent to map (X, A, B) at startblock E         |
 * +-------------------------------------------------+
 * | Map file X startblock E offset A length B       | t1
 * | Done mapping (X, E, A, B)                       |
 * | Intent to increase refcount for extent (E, B)   |
 * | Intent to add rmap (X, E, A, B)                 |
 * +-------------------------------------------------+
 * | Reduce refcount for extent (C, B)               | t2
 * | Done reducing refcount for extent (C, 9)        |
 * | Intent to reduce refcount for extent (C+9, B-9) |
 * | (ran out of space after 9 refcount updates)     |
 * +-------------------------------------------------+
 * | Reduce refcount for extent (C+9, B-9)           | t3
 * | Done reducing refcount for extent (C+9, B-9)    |
 * | Increase refcount for extent (E, B)             |
 * | Done increasing refcount for extent (E, B)      |
 * | Intent to free extent (C, B)                    |
 * | Intent to free extent (F, 1) (refcountbt block) |
 * | Intent to remove rmap (F, 1, REFC)              |
 * +-------------------------------------------------+
 * | Remove rmap (X, C, A, B)                        | t4
 * | Done removing rmap (X, C, A, B)                 |
 * | Add rmap (X, E, A, B)                           |
 * | Done adding rmap (X, E, A, B)                   |
 * | Remove rmap (F, 1, REFC)                        |
 * | Done removing rmap (F, 1, REFC)                 |
 * +-------------------------------------------------+
 * | Free extent (C, B)                              | t5
 * | Done freeing extent (C, B)                      |
 * | Free extent (D, 1)                              |
 * | Done freeing extent (D, 1)                      |
 * | Free extent (F, 1)                              |
 * | Done freeing extent (F, 1)                      |
 * +-------------------------------------------------+
 *
 * If we should crash before t2 commits, log recovery replays
 * the following intent items:
 *
 * - Intent to reduce refcount for extent (C, B)
 * - Intent to remove rmap (X, C, A, B)
 * - Intent to free extent (D, 1) (bmbt block)
 * - Intent to increase refcount for extent (E, B)
 * - Intent to add rmap (X, E, A, B)
 *
 * In the process of recovering, it should also generate and take care
 * of these intent items:
 *
 * - Intent to free extent (C, B)
 * - Intent to free extent (F, 1) (refcountbt block)
 * - Intent to remove rmap (F, 1, REFC)
 *
 * Note that the continuation requested between t2 and t3 is likely to
 * reoccur.
 */
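
/*
 * Caller-side lifecycle, as a minimal sketch (error handling trimmed;
 * the reservation and the extent-free work item are illustrative, not
 * requirements of this interface):
 *
 *	xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0, 0, &tp);
 *	(make metadata updates that queue further work)
 *	xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_FREE, &free_item->xefi_list);
 *	error = xfs_defer_finish(&tp);	(logs intents, rolls, finishes)
 *	if (error)
 *		xfs_trans_cancel(tp);	(also empties t_dfops)
 *	else
 *		error = xfs_trans_commit(tp);
 */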

static const struct xfs_defer_op_type *defer_op_types[] = {
	[XFS_DEFER_OPS_TYPE_BMAP]	= &xfs_bmap_update_defer_type,
	[XFS_DEFER_OPS_TYPE_REFCOUNT]	= &xfs_refcount_update_defer_type,
	[XFS_DEFER_OPS_TYPE_RMAP]	= &xfs_rmap_update_defer_type,
	[XFS_DEFER_OPS_TYPE_FREE]	= &xfs_extent_free_defer_type,
	[XFS_DEFER_OPS_TYPE_AGFL_FREE]	= &xfs_agfl_free_defer_type,
};
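
/*
 * Roughly, an op type supplies the callbacks used below (the xfs_foo_*
 * names are hypothetical; the real instances live with each log item
 * type):
 *
 *	const struct xfs_defer_op_type xfs_foo_defer_type = {
 *		.max_items	= XFS_FOO_MAX,		(hypothetical cap)
 *		.diff_items	= xfs_foo_diff_items,	(AG-order compare)
 *		.create_intent	= xfs_foo_create_intent,
 *		.log_item	= xfs_foo_log_item,
 *		.create_done	= xfs_foo_create_done,
 *		.finish_item	= xfs_foo_finish_item,
 *		.finish_cleanup	= xfs_foo_finish_cleanup,	(optional)
 *		.abort_intent	= xfs_foo_abort_intent,
 *		.cancel_item	= xfs_foo_cancel_item,
 *	};
 */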

/*
 * For each pending item in the intake list, log its intent item and the
 * associated extents, then add the entire intake list to the end of
 * the pending list.
 */
STATIC void
xfs_defer_create_intents(
	struct xfs_trans		*tp)
{
	struct list_head		*li;
	struct xfs_defer_pending	*dfp;
	const struct xfs_defer_op_type	*ops;

	list_for_each_entry(dfp, &tp->t_dfops, dfp_list) {
		ops = defer_op_types[dfp->dfp_type];
		dfp->dfp_intent = ops->create_intent(tp, dfp->dfp_count);
		trace_xfs_defer_create_intent(tp->t_mountp, dfp);
		list_sort(tp->t_mountp, &dfp->dfp_work, ops->diff_items);
		list_for_each(li, &dfp->dfp_work)
			ops->log_item(tp, dfp->dfp_intent, li);
	}
}

/* Abort all the intents that were committed. */
STATIC void
xfs_defer_trans_abort(
	struct xfs_trans		*tp,
	struct list_head		*dop_pending)
{
	struct xfs_defer_pending	*dfp;
	const struct xfs_defer_op_type	*ops;

	trace_xfs_defer_trans_abort(tp, _RET_IP_);

	/* Abort intent items that don't have a done item. */
	list_for_each_entry(dfp, dop_pending, dfp_list) {
		ops = defer_op_types[dfp->dfp_type];
		trace_xfs_defer_pending_abort(tp->t_mountp, dfp);
		if (dfp->dfp_intent && !dfp->dfp_done) {
			ops->abort_intent(dfp->dfp_intent);
			dfp->dfp_intent = NULL;
		}
	}
}

/* Roll a transaction so we can do some deferred op processing. */
STATIC int
xfs_defer_trans_roll(
	struct xfs_trans		**tpp)
{
	struct xfs_trans		*tp = *tpp;
	struct xfs_buf_log_item		*bli;
	struct xfs_inode_log_item	*ili;
	struct xfs_log_item		*lip;
	struct xfs_buf			*bplist[XFS_DEFER_OPS_NR_BUFS];
	struct xfs_inode		*iplist[XFS_DEFER_OPS_NR_INODES];
	int				bpcount = 0, ipcount = 0;
	int				i;
	int				error;

	list_for_each_entry(lip, &tp->t_items, li_trans) {
		switch (lip->li_type) {
		case XFS_LI_BUF:
			bli = container_of(lip, struct xfs_buf_log_item,
					   bli_item);
			if (bli->bli_flags & XFS_BLI_HOLD) {
				if (bpcount >= XFS_DEFER_OPS_NR_BUFS) {
					ASSERT(0);
					return -EFSCORRUPTED;
				}
				xfs_trans_dirty_buf(tp, bli->bli_buf);
				bplist[bpcount++] = bli->bli_buf;
			}
			break;
		case XFS_LI_INODE:
			ili = container_of(lip, struct xfs_inode_log_item,
					   ili_item);
			if (ili->ili_lock_flags == 0) {
				if (ipcount >= XFS_DEFER_OPS_NR_INODES) {
					ASSERT(0);
					return -EFSCORRUPTED;
				}
				xfs_trans_log_inode(tp, ili->ili_inode,
						    XFS_ILOG_CORE);
				iplist[ipcount++] = ili->ili_inode;
			}
			break;
		default:
			break;
		}
	}

	trace_xfs_defer_trans_roll(tp, _RET_IP_);

	/*
	 * Roll the transaction.  Rolling always gives a new transaction (even
	 * if committing the old one fails!) to hand back to the caller, so we
	 * join the held resources to the new transaction so that we always
	 * return with the held resources joined to @tpp, no matter what
	 * happened.
	 */
	error = xfs_trans_roll(tpp);
	tp = *tpp;

	/* Rejoin the joined inodes. */
	for (i = 0; i < ipcount; i++)
		xfs_trans_ijoin(tp, iplist[i], 0);

	/* Rejoin the buffers and dirty them so the log moves forward. */
	for (i = 0; i < bpcount; i++) {
		xfs_trans_bjoin(tp, bplist[i]);
		xfs_trans_bhold(tp, bplist[i]);
	}

	if (error)
		trace_xfs_defer_trans_roll_error(tp, error);
	return error;
}

/*
 * Reset an already used dfops after finish.
 */
static void
xfs_defer_reset(
	struct xfs_trans	*tp)
{
	ASSERT(list_empty(&tp->t_dfops));

	/*
	 * Low mode state transfers across transaction rolls to mirror dfops
	 * lifetime. Clear it now that dfops is reset.
	 */
	tp->t_flags &= ~XFS_TRANS_LOWMODE;
}

/*
 * Free up any items left in the list.
 */
static void
xfs_defer_cancel_list(
	struct xfs_mount		*mp,
	struct list_head		*dop_list)
{
	struct xfs_defer_pending	*dfp;
	struct xfs_defer_pending	*pli;
	struct list_head		*pwi;
	struct list_head		*n;
	const struct xfs_defer_op_type	*ops;

	/*
	 * Free the pending items.  Caller should already have arranged
	 * for the intent items to be released.
	 */
	list_for_each_entry_safe(dfp, pli, dop_list, dfp_list) {
		ops = defer_op_types[dfp->dfp_type];
		trace_xfs_defer_cancel_list(mp, dfp);
		list_del(&dfp->dfp_list);
		list_for_each_safe(pwi, n, &dfp->dfp_work) {
			list_del(pwi);
			dfp->dfp_count--;
			ops->cancel_item(pwi);
		}
		ASSERT(dfp->dfp_count == 0);
		kmem_free(dfp);
	}
}

/*
 * Finish all the pending work.  This involves logging intent items for
 * any work items that wandered in since the last transaction roll (if
 * one has even happened), rolling the transaction, and finishing the
 * work items in the first item on the logged-and-pending list.
 */
int
xfs_defer_finish_noroll(
	struct xfs_trans		**tp)
{
	struct xfs_defer_pending	*dfp;
	struct list_head		*li;
	struct list_head		*n;
	void				*state;
	int				error = 0;
	const struct xfs_defer_op_type	*ops;
	LIST_HEAD(dop_pending);

	ASSERT((*tp)->t_flags & XFS_TRANS_PERM_LOG_RES);

	trace_xfs_defer_finish(*tp, _RET_IP_);

	/* Until we run out of pending work to finish... */
	while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
		/* log intents and pull in intake items */
		xfs_defer_create_intents(*tp);
		list_splice_tail_init(&(*tp)->t_dfops, &dop_pending);

		/* Roll the transaction. */
		error = xfs_defer_trans_roll(tp);
		if (error)
			goto out;

		/* Log an intent-done item for the first pending item. */
		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
				       dfp_list);
		ops = defer_op_types[dfp->dfp_type];
		trace_xfs_defer_pending_finish((*tp)->t_mountp, dfp);
		dfp->dfp_done = ops->create_done(*tp, dfp->dfp_intent,
				dfp->dfp_count);

		/* Finish the work items. */
		state = NULL;
		list_for_each_safe(li, n, &dfp->dfp_work) {
			list_del(li);
			dfp->dfp_count--;
			error = ops->finish_item(*tp, li, dfp->dfp_done,
					&state);
			if (error == -EAGAIN) {
				/*
				 * Caller wants a fresh transaction;
				 * put the work item back on the list
				 * and jump out.
				 */
				list_add(li, &dfp->dfp_work);
				dfp->dfp_count++;
				break;
			} else if (error) {
				/*
				 * Clean up after ourselves and jump out.
				 * xfs_defer_cancel will take care of freeing
				 * all these lists and stuff.
				 */
				if (ops->finish_cleanup)
					ops->finish_cleanup(*tp, state, error);
				goto out;
			}
		}
		if (error == -EAGAIN) {
			/*
			 * Caller wants a fresh transaction, so log a
			 * new log intent item to replace the old one
			 * and roll the transaction.  See "Requesting
			 * a Fresh Transaction while Finishing
			 * Deferred Work" above.
			 */
			dfp->dfp_intent = ops->create_intent(*tp,
					dfp->dfp_count);
			dfp->dfp_done = NULL;
			list_for_each(li, &dfp->dfp_work)
				ops->log_item(*tp, dfp->dfp_intent, li);
		} else {
			/* Done with the dfp, free it. */
			list_del(&dfp->dfp_list);
			kmem_free(dfp);
		}

		if (ops->finish_cleanup)
			ops->finish_cleanup(*tp, state, error);
	}

out:
	if (error) {
		xfs_defer_trans_abort(*tp, &dop_pending);
		xfs_force_shutdown((*tp)->t_mountp, SHUTDOWN_CORRUPT_INCORE);
		trace_xfs_defer_finish_error(*tp, error);
		xfs_defer_cancel_list((*tp)->t_mountp, &dop_pending);
		xfs_defer_cancel(*tp);
		return error;
	}

	trace_xfs_defer_finish_done(*tp, _RET_IP_);
	return 0;
}

int
xfs_defer_finish(
	struct xfs_trans	**tp)
{
	int			error;

	/*
	 * Finish and roll the transaction once more to avoid returning to the
	 * caller with a dirty transaction.
	 */
	error = xfs_defer_finish_noroll(tp);
	if (error)
		return error;
	if ((*tp)->t_flags & XFS_TRANS_DIRTY) {
		error = xfs_defer_trans_roll(tp);
		if (error) {
			xfs_force_shutdown((*tp)->t_mountp,
					   SHUTDOWN_CORRUPT_INCORE);
			return error;
		}
	}
	xfs_defer_reset(*tp);
	return 0;
}

void
xfs_defer_cancel(
	struct xfs_trans	*tp)
{
	struct xfs_mount	*mp = tp->t_mountp;

	trace_xfs_defer_cancel(tp, _RET_IP_);
	xfs_defer_cancel_list(mp, &tp->t_dfops);
}

/* Add an item for later deferred processing. */
void
xfs_defer_add(
	struct xfs_trans		*tp,
	enum xfs_defer_ops_type		type,
	struct list_head		*li)
{
	struct xfs_defer_pending	*dfp = NULL;
	const struct xfs_defer_op_type	*ops;

	ASSERT(tp->t_flags & XFS_TRANS_PERM_LOG_RES);
	BUILD_BUG_ON(ARRAY_SIZE(defer_op_types) != XFS_DEFER_OPS_TYPE_MAX);

	/*
	 * Add the item to a pending item at the end of the intake list.
	 * If the last pending item has the same type, reuse it.  Else,
	 * create a new pending item at the end of the intake list.
	 */
	if (!list_empty(&tp->t_dfops)) {
		dfp = list_last_entry(&tp->t_dfops,
				struct xfs_defer_pending, dfp_list);
		ops = defer_op_types[dfp->dfp_type];
		if (dfp->dfp_type != type ||
		    (ops->max_items && dfp->dfp_count >= ops->max_items))
			dfp = NULL;
	}
	if (!dfp) {
		dfp = kmem_alloc(sizeof(struct xfs_defer_pending),
				KM_SLEEP | KM_NOFS);
		dfp->dfp_type = type;
		dfp->dfp_intent = NULL;
		dfp->dfp_done = NULL;
		dfp->dfp_count = 0;
		INIT_LIST_HEAD(&dfp->dfp_work);
		list_add_tail(&dfp->dfp_list, &tp->t_dfops);
	}

	list_add_tail(li, &dfp->dfp_work);
	dfp->dfp_count++;
}

/*
 * Move deferred ops from one transaction to another and reset the source to
 * initial state. This is primarily used to carry state forward across
 * transaction rolls with pending dfops.
 */
void
xfs_defer_move(
	struct xfs_trans	*dtp,
	struct xfs_trans	*stp)
{
	list_splice_init(&stp->t_dfops, &dtp->t_dfops);

	/*
	 * Low free space mode was historically controlled by a dfops field.
	 * This meant that low mode state potentially carried across multiple
	 * transaction rolls. Transfer low mode on a dfops move to preserve
	 * that behavior.
	 */
	dtp->t_flags |= (stp->t_flags & XFS_TRANS_LOWMODE);

	xfs_defer_reset(stp);
}