zswap: do not allocate from atomic pool

zswap_frontswap_load() should be called from preemptible context (we even
call mutex_lock() there) and it does not look like we need a GFP_ATOMIC
allocation for the temporary buffer.  The same applies to
zswap_writeback_entry().

Use GFP_KERNEL for the temporary buffer allocation in both cases.
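The pattern both call sites follow can be sketched as below (kernel-style C, not a standalone buildable unit; names mirror the hunks in this commit). The key point is that the bounce buffer is allocated before zpool_map_handle() is called, i.e. while the context is still preemptible, so a sleeping GFP_KERNEL allocation is safe:

```c
/* Sketch of the call-site pattern after this change; see the hunks
 * below for the real code.  The temporary buffer is allocated before
 * zpool_map_handle(), while we may still sleep, so GFP_KERNEL is fine.
 */
u8 *tmp = NULL;

if (!zpool_can_sleep_mapped(pool)) {
	/* GFP_KERNEL may sleep; nothing is mapped yet, so that is OK. */
	tmp = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!tmp)
		return -ENOMEM;
}

src = zpool_map_handle(pool, handle, ZPOOL_MM_RO);
if (tmp) {
	/* Copy out and unmap before doing anything that may sleep. */
	memcpy(tmp, src, PAGE_SIZE);
	zpool_unmap_handle(pool, handle);
	src = tmp;
}
```

Only the mapped window itself is non-preemptible for such allocators; the allocation sits entirely outside it, which is why GFP_ATOMIC was never needed.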

Link: https://lkml.kernel.org/r/Y3xCTr6ikbtcUr/y@google.com
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Vitaly Wool <vitaly.wool@konsulko.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit 8d9b63708d
parent 7ce5f7e16a
Author: Sergey Senozhatsky (2022-11-22 12:30:22 +09:00), committed by Andrew Morton
2 files changed, 9 insertions(+), 2 deletions(-)


@@ -387,6 +387,13 @@ bool zpool_evictable(struct zpool *zpool)
  * zpool_can_sleep_mapped - Test if zpool can sleep when do mapped.
  * @zpool: The zpool to test
  *
+ * Some allocators enter non-preemptible context in ->map() callback (e.g.
+ * disable pagefaults) and exit that context in ->unmap(), which limits what
+ * we can do with the mapped object. For instance, we cannot wait for
+ * asynchronous crypto API to decompress such an object or take mutexes
+ * since those will call into the scheduler. This function tells us whether
+ * we use such an allocator.
+ *
  * Returns: true if zpool can sleep; false otherwise.
  */
 bool zpool_can_sleep_mapped(struct zpool *zpool)


@ -958,7 +958,7 @@ static int zswap_writeback_entry(struct zpool *pool, unsigned long handle)
}; };
if (!zpool_can_sleep_mapped(pool)) { if (!zpool_can_sleep_mapped(pool)) {
tmp = kmalloc(PAGE_SIZE, GFP_ATOMIC); tmp = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!tmp) if (!tmp)
return -ENOMEM; return -ENOMEM;
} }
@ -1311,7 +1311,7 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
} }
if (!zpool_can_sleep_mapped(entry->pool->zpool)) { if (!zpool_can_sleep_mapped(entry->pool->zpool)) {
tmp = kmalloc(entry->length, GFP_ATOMIC); tmp = kmalloc(entry->length, GFP_KERNEL);
if (!tmp) { if (!tmp) {
ret = -ENOMEM; ret = -ENOMEM;
goto freeentry; goto freeentry;