/*
 * Copyright (c) 2000-2006 Silicon Graphics, Inc.
 * All Rights Reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it would be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write the Free Software Foundation,
 * Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 */
#include "xfs.h"
#include <linux/stddef.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/pagemap.h>
#include <linux/init.h>
#include <linux/vmalloc.h>
#include <linux/bio.h>
#include <linux/sysctl.h>
#include <linux/proc_fs.h>
#include <linux/workqueue.h>
#include <linux/percpu.h>
#include <linux/blkdev.h>
#include <linux/hash.h>
#include <linux/kthread.h>
#include <linux/migrate.h>
#include <linux/backing-dev.h>
#include <linux/freezer.h>
#include <linux/list_sort.h>

#include "xfs_sb.h"
#include "xfs_inum.h"
#include "xfs_log.h"
#include "xfs_ag.h"
#include "xfs_mount.h"
#include "xfs_trace.h"

static kmem_zone_t *xfs_buf_zone;
STATIC int xfsbufd(void *);
STATIC int xfsbufd_wakeup(struct shrinker *, int, gfp_t);
STATIC void xfs_buf_delwri_queue(xfs_buf_t *, int);
static struct shrinker xfs_buf_shake = {
	.shrink = xfsbufd_wakeup,
	.seeks = DEFAULT_SEEKS,
};

static struct workqueue_struct *xfslogd_workqueue;
struct workqueue_struct *xfsdatad_workqueue;
struct workqueue_struct *xfsconvertd_workqueue;

2006-01-11 07:39:08 +03:00
# ifdef XFS_BUF_LOCK_TRACKING
# define XB_SET_OWNER(bp) ((bp)->b_last_holder = current->pid)
# define XB_CLEAR_OWNER(bp) ((bp)->b_last_holder = -1)
# define XB_GET_OWNER(bp) ((bp)->b_last_holder)
2005-04-17 02:20:36 +04:00
# else
2006-01-11 07:39:08 +03:00
# define XB_SET_OWNER(bp) do { } while (0)
# define XB_CLEAR_OWNER(bp) do { } while (0)
# define XB_GET_OWNER(bp) do { } while (0)
2005-04-17 02:20:36 +04:00
# endif
2006-01-11 07:39:08 +03:00
# define xb_to_gfp(flags) \
( ( ( ( flags ) & XBF_READ_AHEAD ) ? __GFP_NORETRY : \
( ( flags ) & XBF_DONT_BLOCK ) ? GFP_NOFS : GFP_KERNEL ) | __GFP_NOWARN )
2005-04-17 02:20:36 +04:00
2006-01-11 07:39:08 +03:00
# define xb_to_km(flags) \
( ( ( flags ) & XBF_DONT_BLOCK ) ? KM_NOFS : KM_SLEEP )
2005-04-17 02:20:36 +04:00
2006-01-11 07:39:08 +03:00
# define xfs_buf_allocate(flags) \
kmem_zone_alloc ( xfs_buf_zone , xb_to_km ( flags ) )
# define xfs_buf_deallocate(bp) \
kmem_zone_free ( xfs_buf_zone , ( bp ) ) ;
2005-04-17 02:20:36 +04:00
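The `xb_to_gfp()` ternary chain above encodes an allocation policy: readahead buffers may fail fast rather than retry, callers that must not block must avoid filesystem recursion, and every buffer allocation suppresses failure warnings. A minimal user-space sketch of that mapping, using stand-in `DEMO_` values rather than the real kernel `XBF_*`/`GFP_*` constants:

```c
#include <assert.h>

/* Stand-in values -- the real XBF_* and GFP_* constants live in kernel
 * headers; these are illustrative only. */
enum { DEMO_XBF_READ_AHEAD = 0x1, DEMO_XBF_DONT_BLOCK = 0x2 };
enum { DEMO_GFP_KERNEL = 0x0, DEMO_GFP_NOFS = 0x1,
       DEMO_GFP_NORETRY = 0x2, DEMO_GFP_NOWARN = 0x4 };

/* Mirrors the xb_to_gfp() ternary chain: readahead wins over the
 * don't-block case, and the no-warning bit is OR'd in unconditionally. */
static int demo_xb_to_gfp(int flags)
{
	return (((flags & DEMO_XBF_READ_AHEAD) ? DEMO_GFP_NORETRY :
		 (flags & DEMO_XBF_DONT_BLOCK) ? DEMO_GFP_NOFS :
		 DEMO_GFP_KERNEL) | DEMO_GFP_NOWARN);
}
```

Note that because the chain tests `XBF_READ_AHEAD` first, a buffer that is both readahead and non-blocking gets the fail-fast mode.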
static inline int
xfs_buf_is_vmapped(
	struct xfs_buf	*bp)
{
	/*
	 * Return true if the buffer is vmapped.
	 *
	 * The XBF_MAPPED flag is set if the buffer should be mapped, but the
	 * code is clever enough to know it doesn't have to map a single page,
	 * so the check has to be both for XBF_MAPPED and bp->b_page_count > 1.
	 */
	return (bp->b_flags & XBF_MAPPED) && bp->b_page_count > 1;
}

static inline int
xfs_buf_vmap_len(
	struct xfs_buf	*bp)
{
	return (bp->b_page_count * PAGE_SIZE) - bp->b_offset;
}

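The two helpers above can be replayed in user space to check the arithmetic: a buffer is only vmapped when it is both multi-page and flagged mapped, and the mapped length is whole pages minus the data offset into the first page. A sketch with stand-in constants (`DEMO_PAGE_SIZE`, `DEMO_XBF_MAPPED` are assumptions, not the kernel values):

```c
#include <assert.h>
#include <stddef.h>

#define DEMO_PAGE_SIZE	4096		/* stand-in for PAGE_SIZE */
enum { DEMO_XBF_MAPPED = 0x1 };		/* stand-in flag bit */

/* Mirrors xfs_buf_is_vmapped(): a single page is reached through
 * page_address(), so only multi-page mapped buffers are ever vmapped. */
static int demo_is_vmapped(int flags, int page_count)
{
	return (flags & DEMO_XBF_MAPPED) && page_count > 1;
}

/* Mirrors xfs_buf_vmap_len(): whole pages minus the offset of the
 * buffer data into the first page. */
static size_t demo_vmap_len(size_t page_count, size_t offset)
{
	return (page_count * DEMO_PAGE_SIZE) - offset;
}
```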
/*
 *	Page Region interfaces.
 *
 *	For pages in filesystems where the blocksize is smaller than the
 *	pagesize, we use the page->private field (long) to hold a bitmap
 *	of uptodate regions within the page.
 *
 *	Each such region is "bytes per page / bits per long" bytes long.
 *
 *	NBPPR == number-of-bytes-per-page-region
 *	BTOPR == bytes-to-page-region (rounded up)
 *	BTOPRT == bytes-to-page-region-truncated (rounded down)
 */
#if (BITS_PER_LONG == 32)
#define PRSHIFT		(PAGE_CACHE_SHIFT - 5)	/* (32 == 1<<5) */
#elif (BITS_PER_LONG == 64)
#define PRSHIFT		(PAGE_CACHE_SHIFT - 6)	/* (64 == 1<<6) */
#else
#error BITS_PER_LONG must be 32 or 64
#endif
#define NBPPR		(PAGE_CACHE_SIZE/BITS_PER_LONG)
#define BTOPR(b)	(((unsigned int)(b) + (NBPPR - 1)) >> PRSHIFT)
#define BTOPRT(b)	(((unsigned int)(b) >> PRSHIFT))

STATIC unsigned long
page_region_mask(
	size_t		offset,
	size_t		length)
{
	unsigned long	mask;
	int		first, final;

	first = BTOPR(offset);
	final = BTOPRT(offset + length - 1);
	first = min(first, final);

	mask = ~0UL;
	mask <<= BITS_PER_LONG - (final - first + 1);
	mask >>= BITS_PER_LONG - (final + 1);

	ASSERT(offset + length <= PAGE_CACHE_SIZE);
	ASSERT((final - first) < BITS_PER_LONG && (final - first) >= 0);

	return mask;
}
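The shift arithmetic is easiest to check in user space: the `+ 1` terms in both shifts are what make a fully covered page produce `~0UL`, which is the value `set_page_region()` compares against before calling `SetPageUptodate()`. A replica of the mask computation, assuming a 4096-byte page and 64-bit longs (so 64 regions of 64 bytes each; all `DEMO_` names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define DEMO_PAGE_SHIFT	12
#define DEMO_BPL	64			/* BITS_PER_LONG */
#define DEMO_PRSHIFT	(DEMO_PAGE_SHIFT - 6)	/* 64 == 1<<6 */
#define DEMO_NBPPR	(1u << DEMO_PRSHIFT)	/* bytes per region */
#define DEMO_BTOPR(b)	(((unsigned int)(b) + (DEMO_NBPPR - 1)) >> DEMO_PRSHIFT)
#define DEMO_BTOPRT(b)	((unsigned int)(b) >> DEMO_PRSHIFT)

/* Replica of page_region_mask(): one bit per 64-byte region, set for
 * every region the [offset, offset+length) range covers. */
static unsigned long demo_region_mask(size_t offset, size_t length)
{
	unsigned long	mask;
	int		first, final;

	first = DEMO_BTOPR(offset);
	final = DEMO_BTOPRT(offset + length - 1);
	if (first > final)		/* min(first, final) */
		first = final;

	mask = ~0UL;
	mask <<= DEMO_BPL - (final - first + 1);
	mask >>= DEMO_BPL - (final + 1);
	return mask;
}
```

A full page sets all 64 bits, the first 64-byte region sets bit 0, and a range covering regions 1 and 2 sets exactly those two bits.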
STATIC void
set_page_region(
	struct page	*page,
	size_t		offset,
	size_t		length)
{
	set_page_private(page,
		page_private(page) | page_region_mask(offset, length));
	if (page_private(page) == ~0UL)
		SetPageUptodate(page);
}

STATIC int
test_page_region(
	struct page	*page,
	size_t		offset,
	size_t		length)
{
	unsigned long	mask = page_region_mask(offset, length);

	return (mask && (page_private(page) & mask) == mask);
}

/*
 *	Internal xfs_buf_t object manipulation
 */
STATIC void
_xfs_buf_initialize(
	xfs_buf_t		*bp,
	xfs_buftarg_t		*target,
	xfs_off_t		range_base,
	size_t			range_length,
	xfs_buf_flags_t		flags)
{
	/*
	 * We don't want certain flags to appear in b_flags.
	 */
	flags &= ~(XBF_LOCK|XBF_MAPPED|XBF_DONT_BLOCK|XBF_READ_AHEAD);

	memset(bp, 0, sizeof(xfs_buf_t));
	atomic_set(&bp->b_hold, 1);
	init_completion(&bp->b_iowait);
	INIT_LIST_HEAD(&bp->b_list);
	INIT_LIST_HEAD(&bp->b_hash_list);
	init_MUTEX_LOCKED(&bp->b_sema); /* held, no waiters */
	XB_SET_OWNER(bp);
	bp->b_target = target;
	bp->b_file_offset = range_base;
	/*
	 * Set buffer_length and count_desired to the same value initially.
	 * I/O routines should use count_desired, which will be the same in
	 * most cases but may be reset (e.g. XFS recovery).
	 */
	bp->b_buffer_length = bp->b_count_desired = range_length;
	bp->b_flags = flags;
	bp->b_bn = XFS_BUF_DADDR_NULL;
	atomic_set(&bp->b_pin_count, 0);
	init_waitqueue_head(&bp->b_waiters);

	XFS_STATS_INC(xb_create);

	trace_xfs_buf_init(bp, _RET_IP_);
}
/*
 *	Allocate a page array capable of holding a specified number
 *	of pages, and point the page buf at it.
 */
STATIC int
_xfs_buf_get_pages(
	xfs_buf_t		*bp,
	int			page_count,
	xfs_buf_flags_t		flags)
{
	/* Make sure that we have a page list */
	if (bp->b_pages == NULL) {
		bp->b_offset = xfs_buf_poff(bp->b_file_offset);
		bp->b_page_count = page_count;
		if (page_count <= XB_PAGES) {
			bp->b_pages = bp->b_page_array;
		} else {
			bp->b_pages = kmem_alloc(sizeof(struct page *) *
					page_count, xb_to_km(flags));
			if (bp->b_pages == NULL)
				return -ENOMEM;
		}
		memset(bp->b_pages, 0, sizeof(struct page *) * page_count);
	}
	return 0;
}
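`_xfs_buf_get_pages()` uses a common small-buffer optimization: buffers that need at most `XB_PAGES` page pointers reuse the `b_page_array` embedded in the buffer itself, and only larger buffers pay for a heap allocation. The matching free routine then only frees the pointer array if it does not alias the embedded one. A hypothetical miniature of the pattern (struct, field names, and `DEMO_INLINE_PAGES` are illustrative, not the kernel definitions):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define DEMO_INLINE_PAGES 2	/* stand-in for XB_PAGES */

struct demo_buf {
	void	**pages;			/* like b_pages */
	void	*page_array[DEMO_INLINE_PAGES];	/* like b_page_array */
	int	page_count;
};

/* Small requests use the embedded array; large ones fall back to heap. */
static int demo_get_pages(struct demo_buf *bp, int page_count)
{
	if (bp->pages == NULL) {
		bp->page_count = page_count;
		if (page_count <= DEMO_INLINE_PAGES) {
			bp->pages = bp->page_array;
		} else {
			bp->pages = malloc(sizeof(void *) * page_count);
			if (bp->pages == NULL)
				return -1;
		}
		memset(bp->pages, 0, sizeof(void *) * page_count);
	}
	return 0;
}

/* Mirrors _xfs_buf_free_pages(): free only what was heap-allocated. */
static void demo_free_pages(struct demo_buf *bp)
{
	if (bp->pages != bp->page_array) {
		free(bp->pages);
		bp->pages = NULL;
	}
}
```

The pointer comparison in the free path is what makes the fallback safe: the embedded array is never passed to `free()`.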
/*
 *	Frees b_pages if it was allocated.
 */
STATIC void
_xfs_buf_free_pages(
	xfs_buf_t	*bp)
{
	if (bp->b_pages != bp->b_page_array) {
		kmem_free(bp->b_pages);
		bp->b_pages = NULL;
	}
}
/*
 *	Releases the specified buffer.
 *
 *	The modification state of any associated pages is left unchanged.
 *	The buffer must not be on any hash - use xfs_buf_rele instead for
 *	hashed and refcounted buffers.
 */
void
xfs_buf_free(
	xfs_buf_t		*bp)
{
	trace_xfs_buf_free(bp, _RET_IP_);

	ASSERT(list_empty(&bp->b_hash_list));

	if (bp->b_flags & (_XBF_PAGE_CACHE|_XBF_PAGES)) {
		uint		i;

		if (xfs_buf_is_vmapped(bp))
			vm_unmap_ram(bp->b_addr - bp->b_offset,
					bp->b_page_count);

		for (i = 0; i < bp->b_page_count; i++) {
			struct page	*page = bp->b_pages[i];

			if (bp->b_flags & _XBF_PAGE_CACHE)
				ASSERT(!PagePrivate(page));
			page_cache_release(page);
		}
	}
	_xfs_buf_free_pages(bp);
	xfs_buf_deallocate(bp);
}
/*
 *	Finds all pages for buffer in question and builds its page list.
 */
STATIC int
_xfs_buf_lookup_pages(
	xfs_buf_t		*bp,
	uint			flags)
{
	struct address_space	*mapping = bp->b_target->bt_mapping;
	size_t			blocksize = bp->b_target->bt_bsize;
	size_t			size = bp->b_count_desired;
	size_t			nbytes, offset;
	gfp_t			gfp_mask = xb_to_gfp(flags);
	unsigned short		page_count, i;
	pgoff_t			first;
	xfs_off_t		end;
	int			error;

	end = bp->b_file_offset + bp->b_buffer_length;
	page_count = xfs_buf_btoc(end) - xfs_buf_btoct(bp->b_file_offset);

	error = _xfs_buf_get_pages(bp, page_count, flags);
	if (unlikely(error))
		return error;
	bp->b_flags |= _XBF_PAGE_CACHE;

	offset = bp->b_offset;
	first = bp->b_file_offset >> PAGE_CACHE_SHIFT;

	for (i = 0; i < bp->b_page_count; i++) {
		struct page	*page;
		uint		retries = 0;

	retry:
		page = find_or_create_page(mapping, first + i, gfp_mask);
		if (unlikely(page == NULL)) {
			if (flags & XBF_READ_AHEAD) {
				bp->b_page_count = i;
				for (i = 0; i < bp->b_page_count; i++)
					unlock_page(bp->b_pages[i]);
				return -ENOMEM;
			}

			/*
			 * This could deadlock.
			 *
			 * But until all the XFS lowlevel code is revamped to
			 * handle buffer allocation failures we can't do much.
			 */
			if (!(++retries % 100))
				printk(KERN_ERR
					"XFS: possible memory allocation "
					"deadlock in %s (mode:0x%x)\n",
					__func__, gfp_mask);

			XFS_STATS_INC(xb_page_retries);
			xfsbufd_wakeup(NULL, 0, gfp_mask);
			congestion_wait(BLK_RW_ASYNC, HZ/50);
			goto retry;
		}

		XFS_STATS_INC(xb_page_found);

		nbytes = min_t(size_t, size, PAGE_CACHE_SIZE - offset);
		size -= nbytes;

		ASSERT(!PagePrivate(page));
		if (!PageUptodate(page)) {
			page_count--;
			if (blocksize >= PAGE_CACHE_SIZE) {
				if (flags & XBF_READ)
					bp->b_flags |= _XBF_PAGE_LOCKED;
			} else if (!PagePrivate(page)) {
				if (test_page_region(page, offset, nbytes))
					page_count++;
			}
		}

		bp->b_pages[i] = page;
		offset = 0;
	}

	if (!(bp->b_flags & _XBF_PAGE_LOCKED)) {
		for (i = 0; i < bp->b_page_count; i++)
			unlock_page(bp->b_pages[i]);
	}

	if (page_count == bp->b_page_count)
		bp->b_flags |= XBF_DONE;

	return error;
}
/*
 *	Map buffer into kernel address-space if necessary.
 */
STATIC int
_xfs_buf_map_pages(
	xfs_buf_t		*bp,
	uint			flags)
{
	/* A single page buffer is always mappable */
	if (bp->b_page_count == 1) {
		bp->b_addr = page_address(bp->b_pages[0]) + bp->b_offset;
		bp->b_flags |= XBF_MAPPED;
	} else if (flags & XBF_MAPPED) {
		bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count,
					-1, PAGE_KERNEL);
		if (unlikely(bp->b_addr == NULL))
			return -ENOMEM;
		bp->b_addr += bp->b_offset;
		bp->b_flags |= XBF_MAPPED;
	}

	return 0;
}
/*
 *	Finding and Reading Buffers
 */

/*
 *	Looks up, and creates if absent, a lockable buffer for
 *	a given range of an inode.  The buffer is returned
 *	locked.  If other overlapping buffers exist, they are
 *	released before the new buffer is created and locked,
 *	which may imply that this call will block until those buffers
 *	are unlocked.  No I/O is implied by this call.
 */
xfs_buf_t *
_xfs_buf_find(
	xfs_buftarg_t		*btp,	/* block device target		*/
	xfs_off_t		ioff,	/* starting offset of range	*/
	size_t			isize,	/* length of range		*/
	xfs_buf_flags_t		flags,
	xfs_buf_t		*new_bp)
{
	xfs_off_t		range_base;
	size_t			range_length;
	xfs_bufhash_t		*hash;
	xfs_buf_t		*bp, *n;

	range_base = (ioff << BBSHIFT);
	range_length = (isize << BBSHIFT);

	/* Check for IOs smaller than the sector size / not sector aligned */
	ASSERT(!(range_length < (1 << btp->bt_sshift)));
	ASSERT(!(range_base & (xfs_off_t)btp->bt_smask));

	hash = &btp->bt_hash[hash_long((unsigned long)ioff, btp->bt_hashshift)];

	spin_lock(&hash->bh_lock);

	list_for_each_entry_safe(bp, n, &hash->bh_list, b_hash_list) {
		ASSERT(btp == bp->b_target);
		if (bp->b_file_offset == range_base &&
		    bp->b_buffer_length == range_length) {
			/*
			 * If we look at something, bring it to the
			 * front of the list for next time.
			 */
			atomic_inc(&bp->b_hold);
			list_move(&bp->b_hash_list, &hash->bh_list);
			goto found;
		}
	}

	/* No match found */
	if (new_bp) {
		_xfs_buf_initialize(new_bp, btp, range_base,
				range_length, flags);
		new_bp->b_hash = hash;
		list_add(&new_bp->b_hash_list, &hash->bh_list);
	} else {
		XFS_STATS_INC(xb_miss_locked);
	}

	spin_unlock(&hash->bh_lock);
	return new_bp;

found:
	spin_unlock(&hash->bh_lock);

	/* Attempt to get the semaphore without sleeping,
	 * if this does not work then we need to drop the
	 * spinlock and do a hard attempt on the semaphore.
	 */
	if (down_trylock(&bp->b_sema)) {
		if (!(flags & XBF_TRYLOCK)) {
			/* wait for buffer ownership */
			xfs_buf_lock(bp);
			XFS_STATS_INC(xb_get_locked_waited);
		} else {
			/* We asked for a trylock and failed, no need
			 * to look at file offset and length here, we
			 * know that this buffer at least overlaps our
			 * buffer and is locked, therefore our buffer
			 * either does not exist, or is this buffer.
			 */
			xfs_buf_rele(bp);
			XFS_STATS_INC(xb_busy_locked);
			return NULL;
		}
	} else {
		/* trylock worked */
		XB_SET_OWNER(bp);
	}

	if (bp->b_flags & XBF_STALE) {
		ASSERT((bp->b_flags & _XBF_DELWRI_Q) == 0);
		bp->b_flags &= XBF_MAPPED;
	}

	trace_xfs_buf_find(bp, flags, _RET_IP_);
	XFS_STATS_INC(xb_get_locked);
	return bp;
}
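The bucket scan in `_xfs_buf_find()` moves a hit to the front of its hash chain so that hot buffers are found on the first comparison next time. The same move-to-front idea can be sketched with a plain singly linked list (the struct and function names here are hypothetical, and the kernel uses its doubly linked `list_move()` instead):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of a bucket chain keyed by block offset. */
struct demo_node {
	long			offset;
	struct demo_node	*next;
};

/* Scan the chain; on a hit, splice the node out and relink it at the
 * head (move-to-front), like list_move() onto bh_list. */
static struct demo_node *demo_find(struct demo_node **head, long offset)
{
	struct demo_node **pp, *n;

	for (pp = head; (n = *pp) != NULL; pp = &n->next) {
		if (n->offset == offset) {
			*pp = n->next;
			n->next = *head;
			*head = n;
			return n;
		}
	}
	return NULL;	/* miss: caller may insert a preallocated node */
}
```

The miss path mirrors the kernel routine's contract as well: the caller passes in a preallocated buffer (`new_bp`) so the insert can happen without dropping the bucket lock to allocate.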
/*
 *	Assembles a buffer covering the specified range.
 *	Storage in memory for all portions of the buffer will be allocated,
 *	although backing storage may not be.
 */
xfs_buf_t *
xfs_buf_get(
	xfs_buftarg_t		*target,/* target for buffer		*/
	xfs_off_t		ioff,	/* starting offset of range	*/
	size_t			isize,	/* length of range		*/
	xfs_buf_flags_t		flags)
{
	xfs_buf_t		*bp, *new_bp;
	int			error = 0, i;

	new_bp = xfs_buf_allocate(flags);
	if (unlikely(!new_bp))
		return NULL;

	bp = _xfs_buf_find(target, ioff, isize, flags, new_bp);
	if (bp == new_bp) {
		error = _xfs_buf_lookup_pages(bp, flags);
		if (error)
			goto no_buffer;
	} else {
		xfs_buf_deallocate(new_bp);
		if (unlikely(bp == NULL))
			return NULL;
	}

	for (i = 0; i < bp->b_page_count; i++)
		mark_page_accessed(bp->b_pages[i]);

	if (!(bp->b_flags & XBF_MAPPED)) {
		error = _xfs_buf_map_pages(bp, flags);
		if (unlikely(error)) {
			printk(KERN_WARNING "%s: failed to map pages\n",
					__func__);
			goto no_buffer;
		}
	}

	XFS_STATS_INC(xb_get);

	/*
	 * Always fill in the block number now, the mapped cases can do
	 * their own overlay of this later.
	 */
	bp->b_bn = ioff;
	bp->b_count_desired = bp->b_buffer_length;

	trace_xfs_buf_get(bp, flags, _RET_IP_);
	return bp;

 no_buffer:
	if (flags & (XBF_LOCK | XBF_TRYLOCK))
		xfs_buf_unlock(bp);
	xfs_buf_rele(bp);
	return NULL;
}

STATIC int
_xfs_buf_read(
	xfs_buf_t		*bp,
	xfs_buf_flags_t		flags)
{
	int			status;

	ASSERT(!(flags & (XBF_DELWRI|XBF_WRITE)));
	ASSERT(bp->b_bn != XFS_BUF_DADDR_NULL);

	bp->b_flags &= ~(XBF_WRITE | XBF_ASYNC | XBF_DELWRI |
			XBF_READ_AHEAD | _XBF_RUN_QUEUES);
	bp->b_flags |= flags & (XBF_READ | XBF_ASYNC |
			XBF_READ_AHEAD | _XBF_RUN_QUEUES);

	status = xfs_buf_iorequest(bp);
	if (!status && !(flags & XBF_ASYNC))
		status = xfs_buf_iowait(bp);
	return status;
}

xfs_buf_t *
xfs_buf_read(
	xfs_buftarg_t		*target,
	xfs_off_t		ioff,
	size_t			isize,
	xfs_buf_flags_t		flags)
{
	xfs_buf_t		*bp;

	flags |= XBF_READ;

	bp = xfs_buf_get(target, ioff, isize, flags);
	if (bp) {
		trace_xfs_buf_read(bp, flags, _RET_IP_);

		if (!XFS_BUF_ISDONE(bp)) {
			XFS_STATS_INC(xb_get_read);
			_xfs_buf_read(bp, flags);
		} else if (flags & XBF_ASYNC) {
			/*
			 * Read ahead call which is already satisfied,
			 * drop the buffer
			 */
			goto no_buffer;
		} else {
			/* We do not want read in the flags */
			bp->b_flags &= ~XBF_READ;
		}
	}

	return bp;

 no_buffer:
	if (flags & (XBF_LOCK | XBF_TRYLOCK))
		xfs_buf_unlock(bp);
	xfs_buf_rele(bp);
	return NULL;
}
/*
 *	If we are not low on memory then do the readahead in a deadlock
 *	safe manner.
 */
void
xfs_buf_readahead(
	xfs_buftarg_t		*target,
	xfs_off_t		ioff,
	size_t			isize,
	xfs_buf_flags_t		flags)
{
	struct backing_dev_info *bdi;

	bdi = target->bt_mapping->backing_dev_info;
	if (bdi_read_congested(bdi))
		return;

	flags |= (XBF_TRYLOCK | XBF_ASYNC | XBF_READ_AHEAD);
	xfs_buf_read(target, ioff, isize, flags);
}
xfs_buf_t *
xfs_buf_get_empty(
	size_t			len,
	xfs_buftarg_t		*target)
{
	xfs_buf_t		*bp;

	bp = xfs_buf_allocate(0);
	if (bp)
		_xfs_buf_initialize(bp, target, 0, len, 0);
	return bp;
}
static inline struct page *
mem_to_page(
	void			*addr)
{
	if (!is_vmalloc_addr(addr))
		return virt_to_page(addr);
	else
		return vmalloc_to_page(addr);
}
int
xfs_buf_associate_memory(
	xfs_buf_t		*bp,
	void			*mem,
	size_t			len)
{
	int			rval;
	int			i = 0;
	unsigned long		pageaddr;
	unsigned long		offset;
	size_t			buflen;
	int			page_count;

	pageaddr = (unsigned long)mem & PAGE_CACHE_MASK;
	offset = (unsigned long)mem - pageaddr;
	buflen = PAGE_CACHE_ALIGN(len + offset);
	page_count = buflen >> PAGE_CACHE_SHIFT;

	/* Free any previous set of page pointers */
	if (bp->b_pages)
		_xfs_buf_free_pages(bp);

	bp->b_pages = NULL;
	bp->b_addr = mem;

	rval = _xfs_buf_get_pages(bp, page_count, XBF_DONT_BLOCK);
	if (rval)
		return rval;

	bp->b_offset = offset;

	for (i = 0; i < bp->b_page_count; i++) {
		bp->b_pages[i] = mem_to_page((void *)pageaddr);
		pageaddr += PAGE_CACHE_SIZE;
	}

	bp->b_count_desired = len;
	bp->b_buffer_length = buflen;
	bp->b_flags |= XBF_MAPPED;
	bp->b_flags &= ~_XBF_PAGE_LOCKED;

	return 0;
}
xfs_buf_t *
xfs_buf_get_noaddr(
	size_t			len,
	xfs_buftarg_t		*target)
{
	unsigned long		page_count = PAGE_ALIGN(len) >> PAGE_SHIFT;
	int			error, i;
	xfs_buf_t		*bp;

	bp = xfs_buf_allocate(0);
	if (unlikely(bp == NULL))
		goto fail;
	_xfs_buf_initialize(bp, target, 0, len, 0);

	error = _xfs_buf_get_pages(bp, page_count, 0);
	if (error)
		goto fail_free_buf;

	for (i = 0; i < page_count; i++) {
		bp->b_pages[i] = alloc_page(GFP_KERNEL);
		if (!bp->b_pages[i])
			goto fail_free_mem;
	}
	bp->b_flags |= _XBF_PAGES;

	error = _xfs_buf_map_pages(bp, XBF_MAPPED);
	if (unlikely(error)) {
		printk(KERN_WARNING "%s: failed to map pages\n",
				__func__);
		goto fail_free_mem;
	}

	xfs_buf_unlock(bp);

	trace_xfs_buf_get_noaddr(bp, _RET_IP_);
	return bp;

 fail_free_mem:
	while (--i >= 0)
		__free_page(bp->b_pages[i]);
	_xfs_buf_free_pages(bp);
 fail_free_buf:
	xfs_buf_deallocate(bp);
 fail:
	return NULL;
}
/*
 *	Increment reference count on buffer, to hold the buffer concurrently
 *	with another thread which may release (free) the buffer asynchronously.
 *	Must hold the buffer already to call this function.
 */
void
xfs_buf_hold(
	xfs_buf_t		*bp)
{
	trace_xfs_buf_hold(bp, _RET_IP_);
	atomic_inc(&bp->b_hold);
}

/*
 *	Releases a hold on the specified buffer.  If the
 *	hold count is 1, calls xfs_buf_free.
 */
void
xfs_buf_rele(
	xfs_buf_t		*bp)
{
	xfs_bufhash_t		*hash = bp->b_hash;

	trace_xfs_buf_rele(bp, _RET_IP_);

	if (unlikely(!hash)) {
		ASSERT(!bp->b_relse);
		if (atomic_dec_and_test(&bp->b_hold))
			xfs_buf_free(bp);
		return;
	}

	ASSERT(atomic_read(&bp->b_hold) > 0);
	if (atomic_dec_and_lock(&bp->b_hold, &hash->bh_lock)) {
		if (bp->b_relse) {
			atomic_inc(&bp->b_hold);
			spin_unlock(&hash->bh_lock);
			(*(bp->b_relse)) (bp);
		} else if (bp->b_flags & XBF_FS_MANAGED) {
			spin_unlock(&hash->bh_lock);
		} else {
			ASSERT(!(bp->b_flags & (XBF_DELWRI | _XBF_DELWRI_Q)));
			list_del_init(&bp->b_hash_list);
			spin_unlock(&hash->bh_lock);
			xfs_buf_free(bp);
		}
	}
}
/*
 *	Mutual exclusion on buffers.  Locking model:
 *
 *	Buffers associated with inodes for which buffer locking
 *	is not enabled are not protected by semaphores, and are
 *	assumed to be exclusively owned by the caller.  There is a
 *	spinlock in the buffer, used by the caller when concurrent
 *	access is possible.
 */

/*
 *	Locks a buffer object, if it is not already locked.
 *	Note that this in no way locks the underlying pages, so it is only
 *	useful for synchronizing concurrent use of buffer objects, not for
 *	synchronizing independent access to the underlying pages.
 */
int
xfs_buf_cond_lock(
	xfs_buf_t		*bp)
{
	int			locked;

	locked = down_trylock(&bp->b_sema) == 0;
	if (locked)
		XB_SET_OWNER(bp);

	trace_xfs_buf_cond_lock(bp, _RET_IP_);
	return locked ? 0 : -EBUSY;
}
int
xfs_buf_lock_value(
	xfs_buf_t		*bp)
{
	return bp->b_sema.count;
}
/*
 *	Locks a buffer object.
 *	Note that this in no way locks the underlying pages, so it is only
 *	useful for synchronizing concurrent use of buffer objects, not for
 *	synchronizing independent access to the underlying pages.
 *
 *	If we come across a stale, pinned, locked buffer, we know that we
 *	are being asked to lock a buffer that has been reallocated. Because
 *	it is pinned, we know that the log has not been pushed to disk and
 *	hence it will still be locked. Rather than sleeping until someone
 *	else pushes the log, push it ourselves before trying to get the lock.
 */
void
xfs_buf_lock(
	xfs_buf_t		*bp)
{
	trace_xfs_buf_lock(bp, _RET_IP_);

	if (atomic_read(&bp->b_pin_count) && (bp->b_flags & XBF_STALE))
		xfs_log_force(bp->b_mount, 0);
	if (atomic_read(&bp->b_io_remaining))
		blk_run_address_space(bp->b_target->bt_mapping);
	down(&bp->b_sema);
	XB_SET_OWNER(bp);

	trace_xfs_buf_lock_done(bp, _RET_IP_);
}
/*
 *	Releases the lock on the buffer object.
 *	If the buffer is marked delwri but is not queued, do so before we
 *	unlock the buffer as we need to set flags correctly.  We also need to
 *	take a reference for the delwri queue because the unlocker is going to
 *	drop theirs and they don't know we just queued it.
 */
void
xfs_buf_unlock(
	xfs_buf_t		*bp)
{
	if ((bp->b_flags & (XBF_DELWRI | _XBF_DELWRI_Q)) == XBF_DELWRI) {
		atomic_inc(&bp->b_hold);
		bp->b_flags |= XBF_ASYNC;
		xfs_buf_delwri_queue(bp, 0);
	}

	XB_CLEAR_OWNER(bp);
	up(&bp->b_sema);

	trace_xfs_buf_unlock(bp, _RET_IP_);
}
STATIC void
xfs_buf_wait_unpin(
	xfs_buf_t		*bp)
{
	DECLARE_WAITQUEUE(wait, current);

	if (atomic_read(&bp->b_pin_count) == 0)
		return;

	add_wait_queue(&bp->b_waiters, &wait);
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (atomic_read(&bp->b_pin_count) == 0)
			break;
		if (atomic_read(&bp->b_io_remaining))
			blk_run_address_space(bp->b_target->bt_mapping);
		schedule();
	}
	remove_wait_queue(&bp->b_waiters, &wait);
	set_current_state(TASK_RUNNING);
}
/*
 *	Buffer Utility Routines
 */

STATIC void
xfs_buf_iodone_work(
	struct work_struct	*work)
{
	xfs_buf_t		*bp =
		container_of(work, xfs_buf_t, b_iodone_work);

	/*
	 * We can get an EOPNOTSUPP to ordered writes.  Here we clear the
	 * ordered flag and reissue them.  Because we can't tell the higher
	 * layers directly that they should not issue ordered I/O anymore,
	 * they need to check if the _XFS_BARRIER_FAILED flag was set during
	 * I/O completion.
	 */
	if ((bp->b_error == EOPNOTSUPP) &&
	    (bp->b_flags & (XBF_ORDERED|XBF_ASYNC)) == (XBF_ORDERED|XBF_ASYNC)) {
		trace_xfs_buf_ordered_retry(bp, _RET_IP_);
		bp->b_flags &= ~XBF_ORDERED;
		bp->b_flags |= _XFS_BARRIER_FAILED;
		xfs_buf_iorequest(bp);
	} else if (bp->b_iodone)
		(*(bp->b_iodone))(bp);
	else if (bp->b_flags & XBF_ASYNC)
		xfs_buf_relse(bp);
}
void
xfs_buf_ioend(
	xfs_buf_t		*bp,
	int			schedule)
{
	trace_xfs_buf_iodone(bp, _RET_IP_);

	bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);
	if (bp->b_error == 0)
		bp->b_flags |= XBF_DONE;

	if ((bp->b_iodone) || (bp->b_flags & XBF_ASYNC)) {
		if (schedule) {
			INIT_WORK(&bp->b_iodone_work, xfs_buf_iodone_work);
			queue_work(xfslogd_workqueue, &bp->b_iodone_work);
		} else {
			xfs_buf_iodone_work(&bp->b_iodone_work);
		}
	} else {
		complete(&bp->b_iowait);
	}
}
void
xfs_buf_ioerror(
	xfs_buf_t		*bp,
	int			error)
{
	ASSERT(error >= 0 && error <= 0xffff);
	bp->b_error = (unsigned short)error;
	trace_xfs_buf_ioerror(bp, error, _RET_IP_);
}
int
xfs_bwrite(
	struct xfs_mount	*mp,
	struct xfs_buf		*bp)
{
	int			error;

	bp->b_strat = xfs_bdstrat_cb;
	bp->b_mount = mp;
	bp->b_flags |= XBF_WRITE;
	bp->b_flags &= ~(XBF_ASYNC | XBF_READ);

	xfs_buf_delwri_dequeue(bp);
	xfs_buf_iostrategy(bp);

	error = xfs_buf_iowait(bp);
	if (error)
		xfs_force_shutdown(mp, SHUTDOWN_META_IO_ERROR);
	xfs_buf_relse(bp);
	return error;
}
void
xfs_bdwrite(
	void			*mp,
	struct xfs_buf		*bp)
{
	trace_xfs_buf_bdwrite(bp, _RET_IP_);

	bp->b_strat = xfs_bdstrat_cb;
	bp->b_mount = mp;

	bp->b_flags &= ~XBF_READ;
	bp->b_flags |= (XBF_DELWRI | XBF_ASYNC);

	xfs_buf_delwri_queue(bp, 1);
}
/*
 * Called when we want to stop a buffer from getting written or read.
 * We attach the EIO error, muck with its flags, and call biodone
 * so that the proper iodone callbacks get called.
 */
STATIC int
xfs_bioerror(
	xfs_buf_t *bp)
{
#ifdef XFSERRORDEBUG
	ASSERT(XFS_BUF_ISREAD(bp) || bp->b_iodone);
#endif

	/*
	 * No need to wait until the buffer is unpinned, we aren't flushing it.
	 */
	XFS_BUF_ERROR(bp, EIO);

	/*
	 * We're calling biodone, so delete XBF_DONE flag.
	 */
	XFS_BUF_UNREAD(bp);
	XFS_BUF_UNDELAYWRITE(bp);
	XFS_BUF_UNDONE(bp);
	XFS_BUF_STALE(bp);

	XFS_BUF_CLR_BDSTRAT_FUNC(bp);
	xfs_biodone(bp);

	return EIO;
}
/*
 * Same as xfs_bioerror, except that we are releasing the buffer
 * here ourselves, and avoiding the biodone call.
 * This is meant for userdata errors; metadata bufs come with
 * iodone functions attached, so that we can track down errors.
 */
STATIC int
xfs_bioerror_relse(
	struct xfs_buf	*bp)
{
	int64_t		fl = XFS_BUF_BFLAGS(bp);

	/*
	 * No need to wait until the buffer is unpinned.
	 * We aren't flushing it.
	 *
	 * chunkhold expects B_DONE to be set, whether
	 * we actually finish the I/O or not. We don't want to
	 * change that interface.
	 */
	XFS_BUF_UNREAD(bp);
	XFS_BUF_UNDELAYWRITE(bp);
	XFS_BUF_DONE(bp);
	XFS_BUF_STALE(bp);
	XFS_BUF_CLR_IODONE_FUNC(bp);
	XFS_BUF_CLR_BDSTRAT_FUNC(bp);
	if (!(fl & XBF_ASYNC)) {
		/*
		 * Mark b_error and B_ERROR _both_.
		 * Lots of chunkcache code assumes that.
		 * There's no reason to mark error for
		 * ASYNC buffers.
		 */
		XFS_BUF_ERROR(bp, EIO);
		XFS_BUF_FINISH_IOWAIT(bp);
	} else {
		xfs_buf_relse(bp);
	}
	return EIO;
}
/*
 * All xfs metadata buffers except log state machine buffers
 * get this attached as their b_bdstrat callback function.
 * This is so that we can catch a buffer
 * after prematurely unpinning it to forcibly shutdown the filesystem.
 */
int
xfs_bdstrat_cb(
	struct xfs_buf	*bp)
{
	if (XFS_FORCED_SHUTDOWN(bp->b_mount)) {
		trace_xfs_bdstrat_shut(bp, _RET_IP_);
		/*
		 * Metadata write that didn't get logged but
		 * written delayed anyway. These aren't associated
		 * with a transaction, and can be ignored.
		 */
		if (!bp->b_iodone && !XFS_BUF_ISREAD(bp))
			return xfs_bioerror_relse(bp);
		else
			return xfs_bioerror(bp);
	}

	xfs_buf_iorequest(bp);
	return 0;
}
/*
 * Wrapper around bdstrat so that we can stop data from going to disk in case
 * we are shutting down the filesystem.  Typically user data goes thru this
 * path; one of the exceptions is the superblock.
 */
void
xfsbdstrat(
	struct xfs_mount	*mp,
	struct xfs_buf		*bp)
{
	if (XFS_FORCED_SHUTDOWN(mp)) {
		trace_xfs_bdstrat_shut(bp, _RET_IP_);
		xfs_bioerror_relse(bp);
		return;
	}

	xfs_buf_iorequest(bp);
}
STATIC void
_xfs_buf_ioend(
	xfs_buf_t		*bp,
	int			schedule)
{
	if (atomic_dec_and_test(&bp->b_io_remaining) == 1) {
		bp->b_flags &= ~_XBF_PAGE_LOCKED;
		xfs_buf_ioend(bp, schedule);
	}
}
STATIC void
xfs_buf_bio_end_io(
	struct bio		*bio,
	int			error)
{
	xfs_buf_t		*bp = (xfs_buf_t *)bio->bi_private;
	unsigned int		blocksize = bp->b_target->bt_bsize;
	struct bio_vec		*bvec = bio->bi_io_vec + bio->bi_vcnt - 1;

	xfs_buf_ioerror(bp, -error);

	if (!error && xfs_buf_is_vmapped(bp) && (bp->b_flags & XBF_READ))
		invalidate_kernel_vmap_range(bp->b_addr, xfs_buf_vmap_len(bp));

	do {
		struct page	*page = bvec->bv_page;

		ASSERT(!PagePrivate(page));
		if (unlikely(bp->b_error)) {
			if (bp->b_flags & XBF_READ)
				ClearPageUptodate(page);
		} else if (blocksize >= PAGE_CACHE_SIZE) {
			SetPageUptodate(page);
		} else if (!PagePrivate(page) &&
				(bp->b_flags & _XBF_PAGE_CACHE)) {
			set_page_region(page, bvec->bv_offset, bvec->bv_len);
		}

		if (--bvec >= bio->bi_io_vec)
			prefetchw(&bvec->bv_page->flags);

		if (bp->b_flags & _XBF_PAGE_LOCKED)
			unlock_page(page);
	} while (bvec >= bio->bi_io_vec);

	_xfs_buf_ioend(bp, 1);
	bio_put(bio);
}
STATIC void
_xfs_buf_ioapply(
	xfs_buf_t		*bp)
{
	int			rw, map_i, total_nr_pages, nr_pages;
	struct bio		*bio;
	int			offset = bp->b_offset;
	int			size = bp->b_count_desired;
	sector_t		sector = bp->b_bn;
	unsigned int		blocksize = bp->b_target->bt_bsize;

	total_nr_pages = bp->b_page_count;
	map_i = 0;

	if (bp->b_flags & XBF_ORDERED) {
		ASSERT(!(bp->b_flags & XBF_READ));
		rw = WRITE_BARRIER;
	} else if (bp->b_flags & XBF_LOG_BUFFER) {
		ASSERT(!(bp->b_flags & XBF_READ_AHEAD));
		bp->b_flags &= ~_XBF_RUN_QUEUES;
		rw = (bp->b_flags & XBF_WRITE) ? WRITE_SYNC : READ_SYNC;
	} else if (bp->b_flags & _XBF_RUN_QUEUES) {
		ASSERT(!(bp->b_flags & XBF_READ_AHEAD));
		bp->b_flags &= ~_XBF_RUN_QUEUES;
		rw = (bp->b_flags & XBF_WRITE) ? WRITE_META : READ_META;
	} else {
		rw = (bp->b_flags & XBF_WRITE) ? WRITE :
		     (bp->b_flags & XBF_READ_AHEAD) ? READA : READ;
	}

	/* Special code path for reading a sub page size buffer in --
	 * we populate up the whole page, and hence the other metadata
	 * in the same page.  This optimization is only valid when the
	 * filesystem block size is not smaller than the page size.
	 */
	if ((bp->b_buffer_length < PAGE_CACHE_SIZE) &&
	    ((bp->b_flags & (XBF_READ|_XBF_PAGE_LOCKED)) ==
	      (XBF_READ|_XBF_PAGE_LOCKED)) &&
	    (blocksize >= PAGE_CACHE_SIZE)) {
		bio = bio_alloc(GFP_NOIO, 1);

		bio->bi_bdev = bp->b_target->bt_bdev;
		bio->bi_sector = sector - (offset >> BBSHIFT);
		bio->bi_end_io = xfs_buf_bio_end_io;
		bio->bi_private = bp;

		bio_add_page(bio, bp->b_pages[0], PAGE_CACHE_SIZE, 0);
		size = 0;

		atomic_inc(&bp->b_io_remaining);

		goto submit_io;
	}

next_chunk:
	atomic_inc(&bp->b_io_remaining);
	nr_pages = BIO_MAX_SECTORS >> (PAGE_SHIFT - BBSHIFT);
	if (nr_pages > total_nr_pages)
		nr_pages = total_nr_pages;

	bio = bio_alloc(GFP_NOIO, nr_pages);
	bio->bi_bdev = bp->b_target->bt_bdev;
	bio->bi_sector = sector;
	bio->bi_end_io = xfs_buf_bio_end_io;
	bio->bi_private = bp;

	for (; size && nr_pages; nr_pages--, map_i++) {
		int	rbytes, nbytes = PAGE_CACHE_SIZE - offset;

		if (nbytes > size)
			nbytes = size;

		rbytes = bio_add_page(bio, bp->b_pages[map_i], nbytes, offset);
		if (rbytes < nbytes)
			break;

		offset = 0;
		sector += nbytes >> BBSHIFT;
		size -= nbytes;
		total_nr_pages--;
	}

submit_io:
	if (likely(bio->bi_size)) {
		if (xfs_buf_is_vmapped(bp)) {
			flush_kernel_vmap_range(bp->b_addr,
						xfs_buf_vmap_len(bp));
		}
		submit_bio(rw, bio);
		if (size)
			goto next_chunk;
	} else {
		bio_put(bio);
		xfs_buf_ioerror(bp, EIO);
	}
}
int
xfs_buf_iorequest(
	xfs_buf_t		*bp)
{
	trace_xfs_buf_iorequest(bp, _RET_IP_);

	if (bp->b_flags & XBF_DELWRI) {
		xfs_buf_delwri_queue(bp, 1);
		return 0;
	}

	if (bp->b_flags & XBF_WRITE) {
		xfs_buf_wait_unpin(bp);
	}

	xfs_buf_hold(bp);

	/* Set the count to 1 initially, this will stop an I/O
	 * completion callout which happens before we have started
	 * all the I/O from calling xfs_buf_ioend too early.
	 */
	atomic_set(&bp->b_io_remaining, 1);
	_xfs_buf_ioapply(bp);
	_xfs_buf_ioend(bp, 0);

	xfs_buf_rele(bp);
	return 0;
}
/*
 *	Waits for I/O to complete on the buffer supplied.
 *	It returns immediately if no I/O is pending.
 *	It returns the I/O error code, if any, or 0 if there was no error.
 */
int
xfs_buf_iowait(
	xfs_buf_t		*bp)
{
	trace_xfs_buf_iowait(bp, _RET_IP_);

	if (atomic_read(&bp->b_io_remaining))
		blk_run_address_space(bp->b_target->bt_mapping);
	wait_for_completion(&bp->b_iowait);

	trace_xfs_buf_iowait_done(bp, _RET_IP_);
	return bp->b_error;
}
xfs_caddr_t
xfs_buf_offset(
	xfs_buf_t		*bp,
	size_t			offset)
{
	struct page		*page;

	if (bp->b_flags & XBF_MAPPED)
		return XFS_BUF_PTR(bp) + offset;

	offset += bp->b_offset;
	page = bp->b_pages[offset >> PAGE_CACHE_SHIFT];
	return (xfs_caddr_t)page_address(page) + (offset & (PAGE_CACHE_SIZE-1));
}
/*
 *	Move data into or out of a buffer.
 */
void
xfs_buf_iomove(
	xfs_buf_t		*bp,	/* buffer to process		*/
	size_t			boff,	/* starting buffer offset	*/
	size_t			bsize,	/* length to copy		*/
	void			*data,	/* data address			*/
	xfs_buf_rw_t		mode)	/* read/write/zero flag		*/
{
	size_t			bend, cpoff, csize;
	struct page		*page;

	bend = boff + bsize;
	while (boff < bend) {
		page = bp->b_pages[xfs_buf_btoct(boff + bp->b_offset)];
		cpoff = xfs_buf_poff(boff + bp->b_offset);
		csize = min_t(size_t,
			      PAGE_CACHE_SIZE-cpoff, bp->b_count_desired-boff);

		ASSERT(((csize + cpoff) <= PAGE_CACHE_SIZE));

		switch (mode) {
		case XBRW_ZERO:
			memset(page_address(page) + cpoff, 0, csize);
			break;
		case XBRW_READ:
			memcpy(data, page_address(page) + cpoff, csize);
			break;
		case XBRW_WRITE:
			memcpy(page_address(page) + cpoff, data, csize);
		}

		boff += csize;
		data += csize;
	}
}
/*
 *	Handling of buffer targets (buftargs).
 */

/*
 *	Wait for any bufs with callbacks that have been submitted but
 *	have not yet returned... walk the hash list for the target.
 */
void
xfs_wait_buftarg(
	xfs_buftarg_t	*btp)
{
	xfs_buf_t	*bp, *n;
	xfs_bufhash_t	*hash;
	uint		i;

	for (i = 0; i < (1 << btp->bt_hashshift); i++) {
		hash = &btp->bt_hash[i];
again:
		spin_lock(&hash->bh_lock);
		list_for_each_entry_safe(bp, n, &hash->bh_list, b_hash_list) {
			ASSERT(btp == bp->b_target);
			if (!(bp->b_flags & XBF_FS_MANAGED)) {
				spin_unlock(&hash->bh_lock);
				/*
				 * Catch superblock reference count leaks
				 * immediately
				 */
				BUG_ON(bp->b_bn == 0);
				delay(100);
				goto again;
			}
		}
		spin_unlock(&hash->bh_lock);
	}
}
/*
 *	Allocate buffer hash table for a given target.
 *	For devices containing metadata (i.e. not the log/realtime devices)
 *	we need to allocate a much larger hash table.
 */
STATIC void
xfs_alloc_bufhash(
	xfs_buftarg_t		*btp,
	int			external)
{
	unsigned int		i;

	btp->bt_hashshift = external ? 3 : 8;	/* 8 or 256 buckets */
	btp->bt_hashmask = (1 << btp->bt_hashshift) - 1;
	btp->bt_hash = kmem_zalloc_large((1 << btp->bt_hashshift) *
					 sizeof(xfs_bufhash_t));
	for (i = 0; i < (1 << btp->bt_hashshift); i++) {
		spin_lock_init(&btp->bt_hash[i].bh_lock);
		INIT_LIST_HEAD(&btp->bt_hash[i].bh_list);
	}
}
STATIC void
xfs_free_bufhash(
	xfs_buftarg_t		*btp)
{
	kmem_free_large(btp->bt_hash);
	btp->bt_hash = NULL;
}
/*
 *	buftarg list for delwrite queue processing
 */
static LIST_HEAD(xfs_buftarg_list);
static DEFINE_SPINLOCK(xfs_buftarg_lock);

STATIC void
xfs_register_buftarg(
	xfs_buftarg_t		*btp)
{
	spin_lock(&xfs_buftarg_lock);
	list_add(&btp->bt_list, &xfs_buftarg_list);
	spin_unlock(&xfs_buftarg_lock);
}

STATIC void
xfs_unregister_buftarg(
	xfs_buftarg_t		*btp)
{
	spin_lock(&xfs_buftarg_lock);
	list_del(&btp->bt_list);
	spin_unlock(&xfs_buftarg_lock);
}
void
xfs_free_buftarg(
	struct xfs_mount	*mp,
	struct xfs_buftarg	*btp)
{
	xfs_flush_buftarg(btp, 1);
	if (mp->m_flags & XFS_MOUNT_BARRIER)
		xfs_blkdev_issue_flush(btp);
	xfs_free_bufhash(btp);
	iput(btp->bt_mapping->host);

	/* Unregister the buftarg first so that we don't get a
	 * wakeup finding a non-existent task
	 */
	xfs_unregister_buftarg(btp);
	kthread_stop(btp->bt_task);

	kmem_free(btp);
}
STATIC int
xfs_setsize_buftarg_flags(
	xfs_buftarg_t		*btp,
	unsigned int		blocksize,
	unsigned int		sectorsize,
	int			verbose)
{
	btp->bt_bsize = blocksize;
	btp->bt_sshift = ffs(sectorsize) - 1;
	btp->bt_smask = sectorsize - 1;

	if (set_blocksize(btp->bt_bdev, sectorsize)) {
		printk(KERN_WARNING
			"XFS: Cannot set_blocksize to %u on device %s\n",
			sectorsize, XFS_BUFTARG_NAME(btp));
		return EINVAL;
	}

	if (verbose &&
	    (PAGE_CACHE_SIZE / BITS_PER_LONG) > sectorsize) {
		printk(KERN_WARNING
			"XFS: %u byte sectors in use on device %s.  "
			"This is suboptimal; %u or greater is ideal.\n",
			sectorsize, XFS_BUFTARG_NAME(btp),
			(unsigned int)PAGE_CACHE_SIZE / BITS_PER_LONG);
	}

	return 0;
}
/*
 *	When allocating the initial buffer target we have not yet
 *	read in the superblock, so don't know what sized sectors
 *	are being used at this early stage.  Play safe.
 */
STATIC int
xfs_setsize_buftarg_early(
	xfs_buftarg_t		*btp,
	struct block_device	*bdev)
{
	return xfs_setsize_buftarg_flags(btp,
			PAGE_CACHE_SIZE, bdev_logical_block_size(bdev), 0);
}

int
xfs_setsize_buftarg(
	xfs_buftarg_t		*btp,
	unsigned int		blocksize,
	unsigned int		sectorsize)
{
	return xfs_setsize_buftarg_flags(btp, blocksize, sectorsize, 1);
}
STATIC int
xfs_mapping_buftarg(
	xfs_buftarg_t		*btp,
	struct block_device	*bdev)
{
	struct backing_dev_info	*bdi;
	struct inode		*inode;
	struct address_space	*mapping;
	static const struct address_space_operations mapping_aops = {
		.sync_page = block_sync_page,
		.migratepage = fail_migrate_page,
	};

	inode = new_inode(bdev->bd_inode->i_sb);
	if (!inode) {
		printk(KERN_WARNING
			"XFS: Cannot allocate mapping inode for device %s\n",
			XFS_BUFTARG_NAME(btp));
		return ENOMEM;
	}
	inode->i_mode = S_IFBLK;
	inode->i_bdev = bdev;
	inode->i_rdev = bdev->bd_dev;
	bdi = blk_get_backing_dev_info(bdev);
	if (!bdi)
		bdi = &default_backing_dev_info;
	mapping = &inode->i_data;
	mapping->a_ops = &mapping_aops;
	mapping->backing_dev_info = bdi;
	mapping_set_gfp_mask(mapping, GFP_NOFS);
	btp->bt_mapping = mapping;
	return 0;
}
STATIC int
xfs_alloc_delwrite_queue(
	xfs_buftarg_t		*btp,
	const char		*fsname)
{
	int			error = 0;

	INIT_LIST_HEAD(&btp->bt_list);
	INIT_LIST_HEAD(&btp->bt_delwrite_queue);
	spin_lock_init(&btp->bt_delwrite_lock);
	btp->bt_flags = 0;
	btp->bt_task = kthread_run(xfsbufd, btp, "xfsbufd/%s", fsname);
	if (IS_ERR(btp->bt_task)) {
		error = PTR_ERR(btp->bt_task);
		goto out_error;
	}
	xfs_register_buftarg(btp);
out_error:
	return error;
}
xfs_buftarg_t *
xfs_alloc_buftarg(
	struct block_device	*bdev,
	int			external,
	const char		*fsname)
{
	xfs_buftarg_t		*btp;

	btp = kmem_zalloc(sizeof(*btp), KM_SLEEP);

	btp->bt_dev =  bdev->bd_dev;
	btp->bt_bdev = bdev;
	if (xfs_setsize_buftarg_early(btp, bdev))
		goto error;
	if (xfs_mapping_buftarg(btp, bdev))
		goto error;
	if (xfs_alloc_delwrite_queue(btp, fsname))
		goto error;
	xfs_alloc_bufhash(btp, external);
	return btp;

error:
	kmem_free(btp);
	return NULL;
}
/*
 *	Delayed write buffer handling
 */
STATIC void
xfs_buf_delwri_queue(
	xfs_buf_t		*bp,
	int			unlock)
{
	struct list_head	*dwq = &bp->b_target->bt_delwrite_queue;
	spinlock_t		*dwlk = &bp->b_target->bt_delwrite_lock;

	trace_xfs_buf_delwri_queue(bp, _RET_IP_);

	ASSERT((bp->b_flags&(XBF_DELWRI|XBF_ASYNC)) == (XBF_DELWRI|XBF_ASYNC));

	spin_lock(dwlk);
	/* If already in the queue, dequeue and place at tail */
	if (!list_empty(&bp->b_list)) {
		ASSERT(bp->b_flags & _XBF_DELWRI_Q);
		if (unlock)
			atomic_dec(&bp->b_hold);
		list_del(&bp->b_list);
	}

	if (list_empty(dwq)) {
		/* start xfsbufd as it is about to have something to do */
		wake_up_process(bp->b_target->bt_task);
	}

	bp->b_flags |= _XBF_DELWRI_Q;
	list_add_tail(&bp->b_list, dwq);
	bp->b_queuetime = jiffies;
	spin_unlock(dwlk);

	if (unlock)
		xfs_buf_unlock(bp);
}
void
xfs_buf_delwri_dequeue(
	xfs_buf_t		*bp)
{
	spinlock_t		*dwlk = &bp->b_target->bt_delwrite_lock;
	int			dequeued = 0;

	spin_lock(dwlk);
	if ((bp->b_flags & XBF_DELWRI) && !list_empty(&bp->b_list)) {
		ASSERT(bp->b_flags & _XBF_DELWRI_Q);
		list_del_init(&bp->b_list);
		dequeued = 1;
	}
	bp->b_flags &= ~(XBF_DELWRI|_XBF_DELWRI_Q);
	spin_unlock(dwlk);

	if (dequeued)
		xfs_buf_rele(bp);

	trace_xfs_buf_delwri_dequeue(bp, _RET_IP_);
}
/*
 * If a delwri buffer needs to be pushed before it has aged out, then promote
 * it to the head of the delwri queue so that it will be flushed on the next
 * xfsbufd run. We do this by resetting the queuetime of the buffer to be older
 * than the age currently needed to flush the buffer. Hence the next time the
 * xfsbufd sees it is guaranteed to be considered old enough to flush.
 */
void
xfs_buf_delwri_promote(
	struct xfs_buf	*bp)
{
	struct xfs_buftarg *btp = bp->b_target;
	long		age = xfs_buf_age_centisecs * msecs_to_jiffies(10) + 1;

	ASSERT(bp->b_flags & XBF_DELWRI);
	ASSERT(bp->b_flags & _XBF_DELWRI_Q);

	/*
	 * Check the buffer age before locking the delayed write queue as we
	 * don't need to promote buffers that are already past the flush age.
	 */
	if (bp->b_queuetime < jiffies - age)
		return;
	bp->b_queuetime = jiffies - age;
	spin_lock(&btp->bt_delwrite_lock);
	list_move(&bp->b_list, &btp->bt_delwrite_queue);
	spin_unlock(&btp->bt_delwrite_lock);
}
STATIC void
xfs_buf_runall_queues(
	struct workqueue_struct	*queue)
{
	flush_workqueue(queue);
}

STATIC int
xfsbufd_wakeup(
	struct shrinker		*shrink,
	int			priority,
	gfp_t			mask)
{
	xfs_buftarg_t		*btp;

	spin_lock(&xfs_buftarg_lock);
	list_for_each_entry(btp, &xfs_buftarg_list, bt_list) {
		if (test_bit(XBT_FORCE_SLEEP, &btp->bt_flags))
			continue;
		if (list_empty(&btp->bt_delwrite_queue))
			continue;
		set_bit(XBT_FORCE_FLUSH, &btp->bt_flags);
		wake_up_process(btp->bt_task);
	}
	spin_unlock(&xfs_buftarg_lock);
	return 0;
}
/*
 * Move as many buffers as specified to the supplied list
 * indicating if we skipped any buffers to prevent deadlocks.
 */
STATIC int
xfs_buf_delwri_split(
	xfs_buftarg_t	*target,
	struct list_head *list,
	unsigned long	age)
{
	xfs_buf_t	*bp, *n;
	struct list_head *dwq = &target->bt_delwrite_queue;
	spinlock_t	*dwlk = &target->bt_delwrite_lock;
	int		skipped = 0;
	int		force;

	force = test_and_clear_bit(XBT_FORCE_FLUSH, &target->bt_flags);
	INIT_LIST_HEAD(list);
	spin_lock(dwlk);
	list_for_each_entry_safe(bp, n, dwq, b_list) {
		trace_xfs_buf_delwri_split(bp, _RET_IP_);
		ASSERT(bp->b_flags & XBF_DELWRI);

		if (!XFS_BUF_ISPINNED(bp) && !xfs_buf_cond_lock(bp)) {
			if (!force &&
			    time_before(jiffies, bp->b_queuetime + age)) {
				xfs_buf_unlock(bp);
				break;
			}

			bp->b_flags &= ~(XBF_DELWRI|_XBF_DELWRI_Q|
					 _XBF_RUN_QUEUES);
			bp->b_flags |= XBF_WRITE;
			list_move_tail(&bp->b_list, list);
		} else
			skipped++;
	}
	spin_unlock(dwlk);

	return skipped;
}
/*
 * Compare function is more complex than it needs to be because
 * the return value is only 32 bits and we are doing comparisons
 * on 64 bit values
 */
static int
xfs_buf_cmp(
	void		*priv,
	struct list_head *a,
	struct list_head *b)
{
	struct xfs_buf	*ap = container_of(a, struct xfs_buf, b_list);
	struct xfs_buf	*bp = container_of(b, struct xfs_buf, b_list);
	xfs_daddr_t	diff;

	diff = ap->b_bn - bp->b_bn;
	if (diff < 0)
		return -1;
	if (diff > 0)
		return 1;
	return 0;
}
void
xfs_buf_delwri_sort(
	xfs_buftarg_t	*target,
	struct list_head *list)
{
	list_sort(NULL, list, xfs_buf_cmp);
}
STATIC int
xfsbufd(
	void		*data)
{
	xfs_buftarg_t	*target = (xfs_buftarg_t *)data;

	current->flags |= PF_MEMALLOC;

	set_freezable();

	do {
		long	age = xfs_buf_age_centisecs * msecs_to_jiffies(10);
		long	tout = xfs_buf_timer_centisecs * msecs_to_jiffies(10);
		int	count = 0;
		struct list_head tmp;

		if (unlikely(freezing(current))) {
			set_bit(XBT_FORCE_SLEEP, &target->bt_flags);
			refrigerator();
		} else {
			clear_bit(XBT_FORCE_SLEEP, &target->bt_flags);
		}

		/* sleep for a long time if there is nothing to do. */
		if (list_empty(&target->bt_delwrite_queue))
			tout = MAX_SCHEDULE_TIMEOUT;
		schedule_timeout_interruptible(tout);

		xfs_buf_delwri_split(target, &tmp, age);
		list_sort(NULL, &tmp, xfs_buf_cmp);
		while (!list_empty(&tmp)) {
			struct xfs_buf *bp;
			bp = list_first_entry(&tmp, struct xfs_buf, b_list);
			list_del_init(&bp->b_list);
			xfs_buf_iostrategy(bp);
			count++;
		}
		if (count)
			blk_run_address_space(target->bt_mapping);

	} while (!kthread_should_stop());

	return 0;
}
/*
 *	Go through all incore buffers, and release buffers if they belong to
 *	the given device. This is used in filesystem error handling to
 *	preserve the consistency of its metadata.
 */
int
xfs_flush_buftarg(
	xfs_buftarg_t	*target,
	int		wait)
{
	xfs_buf_t	*bp;
	int		pincount = 0;
	LIST_HEAD(tmp_list);
	LIST_HEAD(wait_list);

	xfs_buf_runall_queues(xfsconvertd_workqueue);
	xfs_buf_runall_queues(xfsdatad_workqueue);
	xfs_buf_runall_queues(xfslogd_workqueue);

	set_bit(XBT_FORCE_FLUSH, &target->bt_flags);
	pincount = xfs_buf_delwri_split(target, &tmp_list, 0);

	/*
	 * Dropped the delayed write list lock, now walk the temporary list.
	 * All I/O is issued async and then if we need to wait for completion
	 * we do that after issuing all the IO.
	 */
	list_sort(NULL, &tmp_list, xfs_buf_cmp);
	while (!list_empty(&tmp_list)) {
		bp = list_first_entry(&tmp_list, struct xfs_buf, b_list);
		ASSERT(target == bp->b_target);
		list_del_init(&bp->b_list);
		if (wait) {
			bp->b_flags &= ~XBF_ASYNC;
			list_add(&bp->b_list, &wait_list);
		}
		xfs_buf_iostrategy(bp);
	}

	if (wait) {
		/* Expedite and wait for IO to complete. */
		blk_run_address_space(target->bt_mapping);
		while (!list_empty(&wait_list)) {
			bp = list_first_entry(&wait_list, struct xfs_buf, b_list);

			list_del_init(&bp->b_list);
			xfs_iowait(bp);
			xfs_buf_relse(bp);
		}
	}

	return pincount;
}
int __init
xfs_buf_init(void)
{
	xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf",
						KM_ZONE_HWALIGN, NULL);
	if (!xfs_buf_zone)
		goto out;

	xfslogd_workqueue = create_workqueue("xfslogd");
	if (!xfslogd_workqueue)
		goto out_free_buf_zone;

	xfsdatad_workqueue = create_workqueue("xfsdatad");
	if (!xfsdatad_workqueue)
		goto out_destroy_xfslogd_workqueue;

	xfsconvertd_workqueue = create_workqueue("xfsconvertd");
	if (!xfsconvertd_workqueue)
		goto out_destroy_xfsdatad_workqueue;

	register_shrinker(&xfs_buf_shake);
	return 0;

 out_destroy_xfsdatad_workqueue:
	destroy_workqueue(xfsdatad_workqueue);
 out_destroy_xfslogd_workqueue:
	destroy_workqueue(xfslogd_workqueue);
 out_free_buf_zone:
	kmem_zone_destroy(xfs_buf_zone);
 out:
	return -ENOMEM;
}
void
xfs_buf_terminate(void)
{
	unregister_shrinker(&xfs_buf_shake);
	destroy_workqueue(xfsconvertd_workqueue);
	destroy_workqueue(xfsdatad_workqueue);
	destroy_workqueue(xfslogd_workqueue);
	kmem_zone_destroy(xfs_buf_zone);
}
#ifdef CONFIG_KDB_MODULES
struct list_head *
xfs_get_buftarg_list(void)
{
	return &xfs_buftarg_list;
}
#endif