// SPDX-License-Identifier: GPL-2.0-only
/*
 *
 * Copyright (c) 2014 Samsung Electronics Co., Ltd.
 * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
 */

#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uaccess.h>
#include <linux/io.h>
#include <linux/vmalloc.h>
#include <linux/set_memory.h>

#include <asm/page.h>

#include <kunit/test.h>

#include "../mm/kasan/kasan.h"

#define OOB_TAG_OFF (IS_ENABLED(CONFIG_KASAN_GENERIC) ? 0 : KASAN_GRANULE_SIZE)
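The OOB_TAG_OFF offset above exists because the tag-based KASAN modes detect out-of-bounds accesses only at granule resolution, while generic KASAN is byte-precise. A minimal userspace sketch of that offset logic, using a hypothetical `oob_offset()` helper and an assumed 16-byte granule:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Assumption: stands in for KASAN_GRANULE_SIZE in the tag-based modes. */
#define GRANULE_SIZE 16

/*
 * Generic KASAN can flag the first byte past the object, so a test can poke
 * at ptr[size] directly. Tag-based modes only change memory tags per granule,
 * so the test access must be shifted one full granule past the object to be
 * guaranteed to land on a differently-tagged granule.
 */
static size_t oob_offset(bool generic_mode, size_t size)
{
	return generic_mode ? size : size + GRANULE_SIZE;
}
```

This is only an illustration of the arithmetic; the real macro folds the two cases into one expression via IS_ENABLED().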
/*
 * Some tests use these global variables to store return values from function
 * calls that could otherwise be eliminated by the compiler as dead code.
 */
void *kasan_ptr_result;
int kasan_int_result;

static struct kunit_resource resource;
static struct kunit_kasan_status test_status;
static bool multishot;
/*
 * Temporarily enable multi-shot mode. Otherwise, KASAN would only report the
 * first detected bug and panic the kernel if panic_on_warn is enabled. For
 * hardware tag-based KASAN, also allow tag checking to be reenabled for each
 * test, see the comment for KUNIT_EXPECT_KASAN_FAIL().
 */
static int kasan_test_init(struct kunit *test)
{
	if (!kasan_enabled()) {
		kunit_err(test, "can't run KASAN tests with KASAN disabled");
		return -1;
	}

	multishot = kasan_save_enable_multi_shot();
	test_status.report_found = false;
	test_status.sync_fault = false;
	kunit_add_named_resource(test, NULL, NULL, &resource,
					"kasan_status", &test_status);

	return 0;
}

static void kasan_test_exit(struct kunit *test)
{
	kasan_restore_multi_shot(multishot);
	KUNIT_EXPECT_FALSE(test, test_status.report_found);
}
/**
 * KUNIT_EXPECT_KASAN_FAIL() - check that the executed expression produces a
 * KASAN report; causes a test failure otherwise. This relies on a KUnit
 * resource named "kasan_status". Do not use this name for KUnit resources
 * outside of KASAN tests.
 *
 * For hardware tag-based KASAN, when a synchronous tag fault happens, tag
 * checking is auto-disabled. When this happens, this test handler reenables
 * tag checking. As tag checking can be only disabled or enabled per CPU,
 * this handler disables migration (preemption).
 *
 * Since the compiler doesn't see that the expression can change the test_status
 * fields, it can reorder or optimize away the accesses to those fields.
 * Use READ/WRITE_ONCE() for the accesses and compiler barriers around the
 * expression to prevent that.
 *
 * In between KUNIT_EXPECT_KASAN_FAIL checks, test_status.report_found is kept
 * as false. This allows detecting KASAN reports that happen outside of the
 * checks by asserting !test_status.report_found at the start of
 * KUNIT_EXPECT_KASAN_FAIL and in kasan_test_exit.
 */
#define KUNIT_EXPECT_KASAN_FAIL(test, expression) do {			\
	if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) &&				\
	    kasan_sync_fault_possible())				\
		migrate_disable();					\
	KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found));	\
	barrier();							\
	expression;							\
	barrier();							\
	if (kasan_async_fault_possible())				\
		kasan_force_async_fault();				\
	if (!READ_ONCE(test_status.report_found)) {			\
		KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN failure "	\
				"expected in \"" #expression		\
				 "\", but none occurred");		\
	}								\
	if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) &&				\
	    kasan_sync_fault_possible()) {				\
		if (READ_ONCE(test_status.report_found) &&		\
		    READ_ONCE(test_status.sync_fault))			\
			kasan_enable_tagging();				\
		migrate_enable();					\
	}								\
	WRITE_ONCE(test_status.report_found, false);			\
} while (0)
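The report_found bookkeeping in the macro above can be illustrated outside the kernel. A minimal userspace sketch of the same expect-a-report pattern, with a hypothetical `fake_kasan_report()` hook standing in for an actual KASAN report:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Stands in for test_status.report_found. */
static bool report_found;

/* Hypothetical hook: a real KASAN report would set the flag instead. */
static void fake_kasan_report(void)
{
	report_found = true;
}

/*
 * Same shape as KUNIT_EXPECT_KASAN_FAIL: require the flag to be clear on
 * entry, run the expression, complain if no report was raised, and reset
 * the flag so one check cannot leak into the next.
 */
#define EXPECT_REPORT(expression) do {					\
	assert(!report_found);						\
	expression;							\
	if (!report_found)						\
		fprintf(stderr, "no report for \"%s\"\n", #expression);	\
	report_found = false;						\
} while (0)
```

The sketch omits the migration, barrier, and async-fault handling that the kernel macro needs; it only shows the flag protocol.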
#define KASAN_TEST_NEEDS_CONFIG_ON(test, config) do {			\
	if (!IS_ENABLED(config))					\
		kunit_skip((test), "Test requires " #config "=y");	\
} while (0)

#define KASAN_TEST_NEEDS_CONFIG_OFF(test, config) do {			\
	if (IS_ENABLED(config))						\
		kunit_skip((test), "Test requires " #config "=n");	\
} while (0)
static void kmalloc_oob_right(struct kunit *test)
{
	char *ptr;
	size_t size = 128 - KASAN_GRANULE_SIZE - 5;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(ptr);

	/*
	 * An unaligned access past the requested kmalloc size.
	 * Only generic KASAN can precisely detect these.
	 */
	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
		KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 'x');

	/*
	 * An aligned access into the first out-of-bounds granule that falls
	 * within the aligned kmalloc object.
	 */
	KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + 5] = 'y');

	/* Out-of-bounds access past the aligned kmalloc object. */
	KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
					ptr[size + KASAN_GRANULE_SIZE + 5]);

	kfree(ptr);
}
static void kmalloc_oob_left(struct kunit *test)
{
	char *ptr;
	size_t size = 15;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(ptr);
	KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
	kfree(ptr);
}
static void kmalloc_node_oob_right(struct kunit *test)
{
	char *ptr;
	size_t size = 4096;

	ptr = kmalloc_node(size, GFP_KERNEL, 0);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(ptr);
	KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
	kfree(ptr);
}
/*
 * These kmalloc_pagealloc_* tests try allocating a memory chunk that doesn't
 * fit into a slab cache and therefore is allocated via the page allocator
 * fallback. Since this kind of fallback is only implemented for SLUB, these
 * tests are limited to that allocator.
 */
static void kmalloc_pagealloc_oob_right(struct kunit *test)
{
	char *ptr;
	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;

	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(ptr);
	KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + OOB_TAG_OFF] = 0);

	kfree(ptr);
}
static void kmalloc_pagealloc_uaf(struct kunit *test)
{
	char *ptr;
	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;

	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
	kfree(ptr);

	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}
static void kmalloc_pagealloc_invalid_free(struct kunit *test)
{
	char *ptr;
	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;

	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
	KUNIT_EXPECT_KASAN_FAIL(test, kfree(ptr + 1));
}
static void pagealloc_oob_right(struct kunit *test)
{
	char *ptr;
	struct page *pages;
	size_t order = 4;
	size_t size = (1UL << (PAGE_SHIFT + order));

	/*
	 * With generic KASAN page allocations have no redzones, thus
	 * out-of-bounds detection is not guaranteed.
	 * See https://bugzilla.kernel.org/show_bug.cgi?id=210503.
	 */
	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);

	pages = alloc_pages(GFP_KERNEL, order);
	ptr = page_address(pages);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
	free_pages((unsigned long)ptr, order);
}

static void pagealloc_uaf(struct kunit *test)
{
	char *ptr;
	struct page *pages;
	size_t order = 4;

	pages = alloc_pages(GFP_KERNEL, order);
	ptr = page_address(pages);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
	free_pages((unsigned long)ptr, order);

	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}
static void kmalloc_large_oob_right(struct kunit *test)
{
	char *ptr;
	size_t size = KMALLOC_MAX_CACHE_SIZE - 256;

	/*
	 * Allocate a chunk that is large enough, but still fits into a slab
	 * and does not trigger the page allocator fallback in SLUB.
	 */
	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(ptr);
	KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 0);

	kfree(ptr);
}
static void krealloc_more_oob_helper(struct kunit *test,
					size_t size1, size_t size2)
{
	char *ptr1, *ptr2;
	size_t middle;

	KUNIT_ASSERT_LT(test, size1, size2);
	middle = size1 + (size2 - size1) / 2;

	ptr1 = kmalloc(size1, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);

	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);

	/* All offsets up to size2 must be accessible. */
	ptr2[size1 - 1] = 'x';
	ptr2[size1] = 'x';
	ptr2[middle] = 'x';
	ptr2[size2 - 1] = 'x';

	/* Generic mode is precise, so unaligned size2 must be inaccessible. */
	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
		KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2] = 'x');

	/* For all modes first aligned offset after size2 must be inaccessible. */
	KUNIT_EXPECT_KASAN_FAIL(test,
		ptr2[round_up(size2, KASAN_GRANULE_SIZE)] = 'x');

	kfree(ptr2);
}
static void krealloc_less_oob_helper(struct kunit *test,
					size_t size1, size_t size2)
{
	char *ptr1, *ptr2;
	size_t middle;

	KUNIT_ASSERT_LT(test, size2, size1);
	middle = size2 + (size1 - size2) / 2;

	ptr1 = kmalloc(size1, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);

	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);

	/* Must be accessible for all modes. */
	ptr2[size2 - 1] = 'x';

	/* Generic mode is precise, so unaligned size2 must be inaccessible. */
	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
		KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2] = 'x');

	/* For all modes first aligned offset after size2 must be inaccessible. */
	KUNIT_EXPECT_KASAN_FAIL(test,
		ptr2[round_up(size2, KASAN_GRANULE_SIZE)] = 'x');

	/*
	 * For all modes all size2, middle, and size1 should land in separate
	 * granules and thus the latter two offsets should be inaccessible.
	 */
	KUNIT_EXPECT_LE(test, round_up(size2, KASAN_GRANULE_SIZE),
				round_down(middle, KASAN_GRANULE_SIZE));
	KUNIT_EXPECT_LE(test, round_up(middle, KASAN_GRANULE_SIZE),
				round_down(size1, KASAN_GRANULE_SIZE));
	KUNIT_EXPECT_KASAN_FAIL(test, ptr2[middle] = 'x');
	KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size1 - 1] = 'x');
	KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size1] = 'x');

	kfree(ptr2);
}
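The granule arithmetic used by the helper above can be checked in isolation. A small userspace sketch of the kernel's `round_up`/`round_down` semantics, assuming the generic-mode granule size of 8:

```c
#include <assert.h>
#include <stddef.h>

/* Assumption: 8 stands in for the generic KASAN granule size. */
#define GRANULE 8

/* Power-of-two rounding, same semantics as the kernel's round_up/round_down. */
static size_t round_up_g(size_t x)
{
	return (x + GRANULE - 1) & ~(size_t)(GRANULE - 1);
}

static size_t round_down_g(size_t x)
{
	return x & ~(size_t)(GRANULE - 1);
}
```

With the sizes used by krealloc_less_oob() (size1 = 235, size2 = 201, so middle = 218), size2 rounds up to 208 while middle rounds down to 216, and middle rounds up to 224 while size1 rounds down to 232, so the three offsets do land in separate granules, as the KUNIT_EXPECT_LE checks require.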
static void krealloc_more_oob(struct kunit *test)
{
	krealloc_more_oob_helper(test, 201, 235);
}

static void krealloc_less_oob(struct kunit *test)
{
	krealloc_less_oob_helper(test, 235, 201);
}

static void krealloc_pagealloc_more_oob(struct kunit *test)
{
	/* page_alloc fallback is only implemented for SLUB. */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);

	krealloc_more_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 201,
					KMALLOC_MAX_CACHE_SIZE + 235);
}

static void krealloc_pagealloc_less_oob(struct kunit *test)
{
	/* page_alloc fallback is only implemented for SLUB. */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);

	krealloc_less_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 235,
					KMALLOC_MAX_CACHE_SIZE + 201);
}
/*
 * Check that krealloc() detects a use-after-free, returns NULL,
 * and doesn't unpoison the freed object.
 */
static void krealloc_uaf(struct kunit *test)
{
	char *ptr1, *ptr2;
	int size1 = 201;
	int size2 = 235;

	ptr1 = kmalloc(size1, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
	kfree(ptr1);

	KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL));
	KUNIT_ASSERT_NULL(test, ptr2);
	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
}
static void kmalloc_oob_16(struct kunit *test)
{
	struct {
		u64 words[2];
	} *ptr1, *ptr2;

	/* This test is specifically crafted for the generic mode. */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC);

	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);

	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);

	OPTIMIZER_HIDE_VAR(ptr1);
	OPTIMIZER_HIDE_VAR(ptr2);
	KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
	kfree(ptr1);
	kfree(ptr2);
}
static void kmalloc_uaf_16(struct kunit *test)
{
	struct {
		u64 words[2];
	} *ptr1, *ptr2;

	ptr1 = kmalloc(sizeof(*ptr1), GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);

	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
	kfree(ptr2);

	KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
	kfree(ptr1);
}
/*
 * Note: in the memset tests below, the written range touches both valid and
 * invalid memory. This makes sure that the instrumentation does not only check
 * the starting address but the whole range.
 */

static void kmalloc_oob_memset_2(struct kunit *test)
{
	char *ptr;
	size_t size = 128 - KASAN_GRANULE_SIZE;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(size);
	KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 1, 0, 2));
	kfree(ptr);
}
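The note above (the written range must straddle the object's end) is the essence of range-based checking. A tiny userspace sketch of the invariant, with a hypothetical `range_oob()` predicate and an assumed 112-byte object (128 minus a 16-byte granule):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Assumption: mirrors size = 128 - KASAN_GRANULE_SIZE with a 16-byte granule. */
#define OBJ_SIZE 112

/*
 * An access is out of bounds if any byte of [offset, offset + len) falls
 * outside the object, not just the first byte: an instrumentation that
 * checked only the start address would miss every memset in these tests.
 */
static bool range_oob(size_t offset, size_t len)
{
	return offset + len > OBJ_SIZE;
}
```

For example, a 2-byte write starting at the last valid byte (offset OBJ_SIZE - 1) is out of bounds even though its first byte is valid.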
static void kmalloc_oob_memset_4(struct kunit *test)
{
	char *ptr;
	size_t size = 128 - KASAN_GRANULE_SIZE;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(size);
	KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 3, 0, 4));
	kfree(ptr);
}
static void kmalloc_oob_memset_8(struct kunit *test)
{
	char *ptr;
	size_t size = 128 - KASAN_GRANULE_SIZE;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(size);
	KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 7, 0, 8));
	kfree(ptr);
}
static void kmalloc_oob_memset_16(struct kunit *test)
{
	char *ptr;
	size_t size = 128 - KASAN_GRANULE_SIZE;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(size);
	KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 15, 0, 16));
	kfree(ptr);
}
static void kmalloc_oob_in_memset(struct kunit *test)
{
	char *ptr;
	size_t size = 128 - KASAN_GRANULE_SIZE;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(ptr);
	OPTIMIZER_HIDE_VAR(size);
	KUNIT_EXPECT_KASAN_FAIL(test,
				memset(ptr, 0, size + KASAN_GRANULE_SIZE));
	kfree(ptr);
}
static void kmalloc_memmove_negative_size(struct kunit *test)
{
	char *ptr;
	size_t size = 64;
	size_t invalid_size = -2;

	/*
	 * Hardware tag-based mode doesn't check memmove for negative size.
	 * As a result, this test introduces a side-effect memory corruption,
	 * which can result in a crash.
	 */
	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_HW_TAGS);

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	memset((char *)ptr, 0, 64);
	OPTIMIZER_HIDE_VAR(ptr);
	OPTIMIZER_HIDE_VAR(invalid_size);
	KUNIT_EXPECT_KASAN_FAIL(test,
		memmove((char *)ptr, (char *)ptr + 4, invalid_size));
	kfree(ptr);
}
static void kmalloc_memmove_invalid_size(struct kunit *test)
{
	char *ptr;
	size_t size = 64;
	volatile size_t invalid_size = size;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	memset((char *)ptr, 0, 64);
	OPTIMIZER_HIDE_VAR(ptr);
	KUNIT_EXPECT_KASAN_FAIL(test,
		memmove((char *)ptr, (char *)ptr + 4, invalid_size));
	kfree(ptr);
}
static void kmalloc_uaf(struct kunit *test)
{
	char *ptr;
	size_t size = 10;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
	kfree(ptr);

	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
}
static void kmalloc_uaf_memset(struct kunit *test)
{
	char *ptr;
	size_t size = 33;

	/*
	 * Only generic KASAN uses quarantine, which is required to avoid a
	 * kernel memory corruption this test causes.
	 */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC);

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
	kfree(ptr);

	KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr, 0, size));
}
static void kmalloc_uaf2(struct kunit *test)
{
	char *ptr1, *ptr2;
	size_t size = 43;
	int counter = 0;

again:
	ptr1 = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);

	kfree(ptr1);

	ptr2 = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);

	/*
	 * For tag-based KASAN ptr1 and ptr2 tags might happen to be the same.
	 * Allow up to 16 attempts at generating different tags.
	 */
	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) && ptr1 == ptr2 && counter++ < 16) {
		kfree(ptr2);
		goto again;
	}

	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
	KUNIT_EXPECT_PTR_NE(test, ptr1, ptr2);

	kfree(ptr2);
}

static void kfree_via_page(struct kunit *test)
{
	char *ptr;
	size_t size = 8;
	struct page *page;
	unsigned long offset;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	page = virt_to_page(ptr);
	offset = offset_in_page(ptr);
	kfree(page_address(page) + offset);
}

static void kfree_via_phys(struct kunit *test)
{
	char *ptr;
	size_t size = 8;
	phys_addr_t phys;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	phys = virt_to_phys(ptr);
	kfree(phys_to_virt(phys));
}

static void kmem_cache_oob(struct kunit *test)
{
	char *p;
	size_t size = 200;
	struct kmem_cache *cache;

	cache = kmem_cache_create("test_cache", size, 0, 0, NULL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);

	p = kmem_cache_alloc(cache, GFP_KERNEL);
	if (!p) {
		kunit_err(test, "Allocation failed: %s\n", __func__);
		kmem_cache_destroy(cache);
		return;
	}

	KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);

	kmem_cache_free(cache, p);
	kmem_cache_destroy(cache);
}

static void kmem_cache_accounted(struct kunit *test)
{
	int i;
	char *p;
	size_t size = 200;
	struct kmem_cache *cache;

	cache = kmem_cache_create("test_cache", size, 0, SLAB_ACCOUNT, NULL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);

	/*
	 * Several allocations with a delay to allow for lazy per memcg kmem
	 * cache creation.
	 */
	for (i = 0; i < 5; i++) {
		p = kmem_cache_alloc(cache, GFP_KERNEL);
		if (!p)
			goto free_cache;

		kmem_cache_free(cache, p);
		msleep(100);
	}

free_cache:
	kmem_cache_destroy(cache);
}

static void kmem_cache_bulk(struct kunit *test)
{
	struct kmem_cache *cache;
	size_t size = 200;
	char *p[10];
	bool ret;
	int i;

	cache = kmem_cache_create("test_cache", size, 0, 0, NULL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);

	ret = kmem_cache_alloc_bulk(cache, GFP_KERNEL, ARRAY_SIZE(p), (void **)&p);
	if (!ret) {
		kunit_err(test, "Allocation failed: %s\n", __func__);
		kmem_cache_destroy(cache);
		return;
	}

	for (i = 0; i < ARRAY_SIZE(p); i++)
		p[i][0] = p[i][size - 1] = 42;

	kmem_cache_free_bulk(cache, ARRAY_SIZE(p), (void **)&p);
	kmem_cache_destroy(cache);
}

static char global_array[10];

static void kasan_global_oob_right(struct kunit *test)
{
	/*
	 * Deliberate out-of-bounds access. To prevent CONFIG_UBSAN_LOCAL_BOUNDS
	 * from failing here and panicking the kernel, access the array via a
	 * volatile pointer, which will prevent the compiler from being able to
	 * determine the array bounds.
	 *
	 * This access uses a volatile pointer to char (char *volatile) rather
	 * than the more conventional pointer to volatile char (volatile char *)
	 * because we want to prevent the compiler from making inferences about
	 * the pointer itself (i.e. its array bounds), not the data that it
	 * refers to.
	 */
	char *volatile array = global_array;
	char *p = &array[ARRAY_SIZE(global_array) + 3];

	/* Only generic mode instruments globals. */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC);

	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
}

static void kasan_global_oob_left(struct kunit *test)
{
	char *volatile array = global_array;
	char *p = array - 3;

	/*
	 * GCC is known to fail this test, skip it.
	 * See https://bugzilla.kernel.org/show_bug.cgi?id=215051.
	 */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_CC_IS_CLANG);
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC);
	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
}

/* Check that ksize() makes the whole object accessible. */
static void ksize_unpoisons_memory(struct kunit *test)
{
	char *ptr;
	size_t size = 123, real_size;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	real_size = ksize(ptr);

	OPTIMIZER_HIDE_VAR(ptr);

	/* This access shouldn't trigger a KASAN report. */
	ptr[size] = 'x';

	/* This one must. */
	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size]);

	kfree(ptr);
}

/*
 * Check that a use-after-free is detected by ksize() and via normal accesses
 * after it.
 */
static void ksize_uaf(struct kunit *test)
{
	char *ptr;
	int size = 128 - KASAN_GRANULE_SIZE;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
	kfree(ptr);

	OPTIMIZER_HIDE_VAR(ptr);
	KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr));
	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
}

static void kasan_stack_oob(struct kunit *test)
{
	char stack_array[10];
	/* See comment in kasan_global_oob_right. */
	char *volatile array = stack_array;
	char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF];

	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK);

	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
}

static void kasan_alloca_oob_left(struct kunit *test)
{
	volatile int i = 10;
	char alloca_array[i];
	/* See comment in kasan_global_oob_right. */
	char *volatile array = alloca_array;
	char *p = array - 1;

	/* Only generic mode instruments dynamic allocas. */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC);
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK);

	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
}

static void kasan_alloca_oob_right(struct kunit *test)
{
	volatile int i = 10;
	char alloca_array[i];
	/* See comment in kasan_global_oob_right. */
	char *volatile array = alloca_array;
	char *p = array + i;

	/* Only generic mode instruments dynamic allocas. */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC);
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK);

	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
}

static void kmem_cache_double_free(struct kunit *test)
{
	char *p;
	size_t size = 200;
	struct kmem_cache *cache;

	cache = kmem_cache_create("test_cache", size, 0, 0, NULL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);

	p = kmem_cache_alloc(cache, GFP_KERNEL);
	if (!p) {
		kunit_err(test, "Allocation failed: %s\n", __func__);
		kmem_cache_destroy(cache);
		return;
	}

	kmem_cache_free(cache, p);
	KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_free(cache, p));
	kmem_cache_destroy(cache);
}

static void kmem_cache_invalid_free(struct kunit *test)
{
	char *p;
	size_t size = 200;
	struct kmem_cache *cache;

	cache = kmem_cache_create("test_cache", size, 0, SLAB_TYPESAFE_BY_RCU,
				  NULL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);

	p = kmem_cache_alloc(cache, GFP_KERNEL);
	if (!p) {
		kunit_err(test, "Allocation failed: %s\n", __func__);
		kmem_cache_destroy(cache);
		return;
	}

	/* Trigger invalid free, the object doesn't get freed. */
	KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_free(cache, p + 1));

	/*
	 * Properly free the object to prevent the "Objects remaining in
	 * test_cache on __kmem_cache_shutdown" BUG failure.
	 */
	kmem_cache_free(cache, p);

	kmem_cache_destroy(cache);
}

static void empty_cache_ctor(void *object) { }

static void kmem_cache_double_destroy(struct kunit *test)
{
	struct kmem_cache *cache;

	/* Provide a constructor to prevent cache merging. */
	cache = kmem_cache_create("test_cache", 200, 0, 0, empty_cache_ctor);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);
	kmem_cache_destroy(cache);
	KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_destroy(cache));
}

static void kasan_memchr(struct kunit *test)
{
	char *ptr;
	size_t size = 24;

	/*
	 * str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT.
	 * See https://bugzilla.kernel.org/show_bug.cgi?id=206337 for details.
	 */
	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_AMD_MEM_ENCRYPT);

	if (OOB_TAG_OFF)
		size = round_up(size, OOB_TAG_OFF);

	ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	OPTIMIZER_HIDE_VAR(ptr);
	OPTIMIZER_HIDE_VAR(size);
	KUNIT_EXPECT_KASAN_FAIL(test,
		kasan_ptr_result = memchr(ptr, '1', size + 1));

	kfree(ptr);
}

static void kasan_memcmp(struct kunit *test)
{
	char *ptr;
	size_t size = 24;
	int arr[9];

	/*
	 * str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT.
	 * See https://bugzilla.kernel.org/show_bug.cgi?id=206337 for details.
	 */
	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_AMD_MEM_ENCRYPT);

	if (OOB_TAG_OFF)
		size = round_up(size, OOB_TAG_OFF);

	ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
	memset(arr, 0, sizeof(arr));

	OPTIMIZER_HIDE_VAR(ptr);
	OPTIMIZER_HIDE_VAR(size);
	KUNIT_EXPECT_KASAN_FAIL(test,
		kasan_int_result = memcmp(ptr, arr, size + 1));

	kfree(ptr);
}

static void kasan_strings(struct kunit *test)
{
	char *ptr;
	size_t size = 24;

	/*
	 * str* functions are not instrumented with CONFIG_AMD_MEM_ENCRYPT.
	 * See https://bugzilla.kernel.org/show_bug.cgi?id=206337 for details.
	 */
	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_AMD_MEM_ENCRYPT);

	ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	kfree(ptr);

	/*
	 * Try to cause only 1 invalid access (less spam in dmesg).
	 * For that we need ptr to point to zeroed byte.
	 * Skip metadata that could be stored in freed object so ptr
	 * will likely point to zeroed byte.
	 */
	ptr += 16;
	KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));

	KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));

	KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));

	KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));

	KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));

	KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
}

static void kasan_bitops_modify(struct kunit *test, int nr, void *addr)
{
	KUNIT_EXPECT_KASAN_FAIL(test, set_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, __set_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, clear_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, clear_bit_unlock(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, __clear_bit_unlock(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, change_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, __change_bit(nr, addr));
}

static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr)
{
	KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, __test_and_set_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, test_and_clear_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, __test_and_clear_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, test_and_change_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr));
	KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));

#if defined(clear_bit_unlock_is_negative_byte)
	KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result =
				clear_bit_unlock_is_negative_byte(nr, addr));
#endif
}

static void kasan_bitops_generic(struct kunit *test)
{
	long *bits;

	/* This test is specifically crafted for the generic mode. */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC);

	/*
	 * Allocate 1 more byte, which causes kzalloc to round up to 16 bytes;
	 * this way we do not actually corrupt other memory.
	 */
	bits = kzalloc(sizeof(*bits) + 1, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits);

	/*
	 * Below calls try to access bit within allocated memory; however, the
	 * below accesses are still out-of-bounds, since bitops are defined to
	 * operate on the whole long the bit is in.
	 */
	kasan_bitops_modify(test, BITS_PER_LONG, bits);

	/*
	 * Below calls try to access bit beyond allocated memory.
	 */
	kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, bits);

	kfree(bits);
}

static void kasan_bitops_tags(struct kunit *test)
{
	long *bits;

	/* This test is specifically crafted for tag-based modes. */
	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);

	/* kmalloc-64 cache will be used and the last 16 bytes will be the redzone. */
	bits = kzalloc(48, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits);

	/* Do the accesses past the 48 allocated bytes, but within the redzone. */
	kasan_bitops_modify(test, BITS_PER_LONG, (void *)bits + 48);
	kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, (void *)bits + 48);

	kfree(bits);
}

static void kmalloc_double_kzfree(struct kunit *test)
{
	char *ptr;
	size_t size = 16;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	kfree_sensitive(ptr);
	KUNIT_EXPECT_KASAN_FAIL(test, kfree_sensitive(ptr));
}

static void vmalloc_helpers_tags(struct kunit *test)
{
	void *ptr;

	/* This test is intended for tag-based modes. */
	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);

	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);

	ptr = vmalloc(PAGE_SIZE);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	/* Check that the returned pointer is tagged. */
	KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
	KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);

	/* Make sure exported vmalloc helpers handle tagged pointers. */
	KUNIT_ASSERT_TRUE(test, is_vmalloc_addr(ptr));
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vmalloc_to_page(ptr));

#if !IS_MODULE(CONFIG_KASAN_KUNIT_TEST)
	{
		int rv;

		/* Make sure vmalloc'ed memory permissions can be changed. */
		rv = set_memory_ro((unsigned long)ptr, 1);
		KUNIT_ASSERT_GE(test, rv, 0);
		rv = set_memory_rw((unsigned long)ptr, 1);
		KUNIT_ASSERT_GE(test, rv, 0);
	}
#endif

	vfree(ptr);
}

static void vmalloc_oob(struct kunit *test)
{
	char *v_ptr, *p_ptr;
	struct page *page;
	size_t size = PAGE_SIZE / 2 - KASAN_GRANULE_SIZE - 5;

	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);

	v_ptr = vmalloc(size);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);

	OPTIMIZER_HIDE_VAR(v_ptr);

	/*
	 * We have to be careful not to hit the guard page in vmalloc tests.
	 * The MMU will catch that and crash us.
	 */

	/* Make sure in-bounds accesses are valid. */
	v_ptr[0] = 0;
	v_ptr[size - 1] = 0;

	/*
	 * An unaligned access past the requested vmalloc size.
	 * Only generic KASAN can precisely detect these.
	 */
	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
		KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);

	/* An aligned access into the first out-of-bounds granule. */
	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);

	/* Check that in-bounds accesses to the physical page are valid. */
	page = vmalloc_to_page(v_ptr);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page);
	p_ptr = page_address(page);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);
	p_ptr[0] = 0;

	vfree(v_ptr);

	/*
	 * We can't check for use-after-unmap bugs in this nor in the following
	 * vmalloc tests, as the page might be fully unmapped and accessing it
	 * will crash the kernel.
	 */
}

static void vmap_tags(struct kunit *test)
{
	char *p_ptr, *v_ptr;
	struct page *p_page, *v_page;

	/*
	 * This test is specifically crafted for the software tag-based mode,
	 * the only tag-based mode that poisons vmap mappings.
	 */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);

	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);

	p_page = alloc_pages(GFP_KERNEL, 1);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_page);
	p_ptr = page_address(p_page);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);

	v_ptr = vmap(&p_page, 1, VM_MAP, PAGE_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);

	/*
	 * We can't check for out-of-bounds bugs in this nor in the following
	 * vmalloc tests, as allocations have page granularity and accessing
	 * the guard page will crash the kernel.
	 */

	KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN);
	KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL);

	/* Make sure that in-bounds accesses through both pointers work. */
	*p_ptr = 0;
	*v_ptr = 0;

	/* Make sure vmalloc_to_page() correctly recovers the page pointer. */
	v_page = vmalloc_to_page(v_ptr);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_page);
	KUNIT_EXPECT_PTR_EQ(test, p_page, v_page);

	vunmap(v_ptr);
	free_pages((unsigned long)p_ptr, 1);
}

static void vm_map_ram_tags(struct kunit *test)
{
	char *p_ptr, *v_ptr;
	struct page *page;

	/*
	 * This test is specifically crafted for the software tag-based mode,
	 * the only tag-based mode that poisons vm_map_ram mappings.
	 */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);

	page = alloc_pages(GFP_KERNEL, 1);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, page);
	p_ptr = page_address(page);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_ptr);

	v_ptr = vm_map_ram(&page, 1, -1);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);

	KUNIT_EXPECT_GE(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_MIN);
	KUNIT_EXPECT_LT(test, (u8)get_tag(v_ptr), (u8)KASAN_TAG_KERNEL);

	/* Make sure that in-bounds accesses through both pointers work. */
	*p_ptr = 0;
	*v_ptr = 0;

	vm_unmap_ram(v_ptr, 1);
	free_pages((unsigned long)p_ptr, 1);
}

static void vmalloc_percpu(struct kunit *test)
{
	char __percpu *ptr;
	int cpu;

	/*
	 * This test is specifically crafted for the software tag-based mode,
	 * the only tag-based mode that poisons percpu mappings.
	 */
	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS);

	ptr = __alloc_percpu(PAGE_SIZE, PAGE_SIZE);

	for_each_possible_cpu(cpu) {
		char *c_ptr = per_cpu_ptr(ptr, cpu);

		KUNIT_EXPECT_GE(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_MIN);
		KUNIT_EXPECT_LT(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_KERNEL);

		/* Make sure that in-bounds accesses don't crash the kernel. */
		*c_ptr = 0;
	}

	free_percpu(ptr);
}

/*
 * Check that the assigned pointer tag falls within the [KASAN_TAG_MIN,
 * KASAN_TAG_KERNEL) range (note: excluding the match-all tag) for tag-based
 * modes.
 */
static void match_all_not_assigned(struct kunit *test)
{
	char *ptr;
	struct page *pages;
	int i, size, order;

	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);

	for (i = 0; i < 256; i++) {
		size = (get_random_int() % 1024) + 1;
		ptr = kmalloc(size, GFP_KERNEL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
		KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
		KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
		kfree(ptr);
	}

	for (i = 0; i < 256; i++) {
		order = (get_random_int() % 4) + 1;
		pages = alloc_pages(GFP_KERNEL, order);
		ptr = page_address(pages);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
		KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
		KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
		free_pages((unsigned long)ptr, order);
	}

	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC))
		return;

	for (i = 0; i < 256; i++) {
		size = (get_random_int() % 1024) + 1;
		ptr = vmalloc(size);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
		KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
		KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
		vfree(ptr);
	}
}

/* Check that 0xff works as a match-all pointer tag for tag-based modes. */
static void match_all_ptr_tag(struct kunit *test)
{
	char *ptr;
	u8 tag;

	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);

	ptr = kmalloc(128, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	/* Backup the assigned tag. */
	tag = get_tag(ptr);
	KUNIT_EXPECT_NE(test, tag, (u8)KASAN_TAG_KERNEL);

	/* Reset the tag to 0xff.*/
	ptr = set_tag(ptr, KASAN_TAG_KERNEL);

	/* This access shouldn't trigger a KASAN report. */
	*ptr = 0;

	/* Recover the pointer tag and free. */
	ptr = set_tag(ptr, tag);
	kfree(ptr);
}

/* Check that there are no match-all memory tags for tag-based modes. */
static void match_all_mem_tag(struct kunit *test)
{
	char *ptr;
	int tag;

	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);

	ptr = kmalloc(128, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
	KUNIT_EXPECT_NE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);

	/* For each possible tag value not matching the pointer tag. */
	for (tag = KASAN_TAG_MIN; tag <= KASAN_TAG_KERNEL; tag++) {
		if (tag == get_tag(ptr))
			continue;

		/* Mark the first memory granule with the chosen memory tag. */
		kasan_poison(ptr, KASAN_GRANULE_SIZE, (u8)tag, false);

		/* This access must cause a KASAN report. */
		KUNIT_EXPECT_KASAN_FAIL(test, *ptr = 0);
	}

	/* Recover the memory tag and free. */
	kasan_poison(ptr, KASAN_GRANULE_SIZE, get_tag(ptr), false);

	kfree(ptr);
}
static struct kunit_case kasan_kunit_test_cases[] = {
	KUNIT_CASE(kmalloc_oob_right),
	KUNIT_CASE(kmalloc_oob_left),
	KUNIT_CASE(kmalloc_node_oob_right),
	KUNIT_CASE(kmalloc_pagealloc_oob_right),
	KUNIT_CASE(kmalloc_pagealloc_uaf),
	KUNIT_CASE(kmalloc_pagealloc_invalid_free),
	KUNIT_CASE(pagealloc_oob_right),
	KUNIT_CASE(pagealloc_uaf),
	KUNIT_CASE(kmalloc_large_oob_right),
	KUNIT_CASE(krealloc_more_oob),
	KUNIT_CASE(krealloc_less_oob),
	KUNIT_CASE(krealloc_pagealloc_more_oob),
	KUNIT_CASE(krealloc_pagealloc_less_oob),
	KUNIT_CASE(krealloc_uaf),
	KUNIT_CASE(kmalloc_oob_16),
	KUNIT_CASE(kmalloc_uaf_16),
	KUNIT_CASE(kmalloc_oob_in_memset),
	KUNIT_CASE(kmalloc_oob_memset_2),
	KUNIT_CASE(kmalloc_oob_memset_4),
	KUNIT_CASE(kmalloc_oob_memset_8),
	KUNIT_CASE(kmalloc_oob_memset_16),
	KUNIT_CASE(kmalloc_memmove_negative_size),
	KUNIT_CASE(kmalloc_memmove_invalid_size),
	KUNIT_CASE(kmalloc_uaf),
	KUNIT_CASE(kmalloc_uaf_memset),
	KUNIT_CASE(kmalloc_uaf2),
	KUNIT_CASE(kfree_via_page),
	KUNIT_CASE(kfree_via_phys),
	KUNIT_CASE(kmem_cache_oob),
	KUNIT_CASE(kmem_cache_accounted),
	KUNIT_CASE(kmem_cache_bulk),
	KUNIT_CASE(kasan_global_oob_right),
	KUNIT_CASE(kasan_global_oob_left),
	KUNIT_CASE(kasan_stack_oob),
	KUNIT_CASE(kasan_alloca_oob_left),
	KUNIT_CASE(kasan_alloca_oob_right),
	KUNIT_CASE(ksize_unpoisons_memory),
	KUNIT_CASE(ksize_uaf),
	KUNIT_CASE(kmem_cache_double_free),
	KUNIT_CASE(kmem_cache_invalid_free),
	KUNIT_CASE(kmem_cache_double_destroy),
	KUNIT_CASE(kasan_memchr),
	KUNIT_CASE(kasan_memcmp),
	KUNIT_CASE(kasan_strings),
	KUNIT_CASE(kasan_bitops_generic),
	KUNIT_CASE(kasan_bitops_tags),
	KUNIT_CASE(kmalloc_double_kzfree),
	KUNIT_CASE(vmalloc_helpers_tags),
	KUNIT_CASE(vmalloc_oob),
	KUNIT_CASE(vmap_tags),
	KUNIT_CASE(vm_map_ram_tags),
	KUNIT_CASE(vmalloc_percpu),
	KUNIT_CASE(match_all_not_assigned),
	KUNIT_CASE(match_all_ptr_tag),
KUNIT_CASE ( match_all_mem_tag ) ,
2020-10-14 02:55:06 +03:00
{ }
} ;
static struct kunit_suite kasan_kunit_test_suite = {
. name = " kasan " ,
. init = kasan_test_init ,
. test_cases = kasan_kunit_test_cases ,
. exit = kasan_test_exit ,
} ;
kunit_test_suite ( kasan_kunit_test_suite ) ;
2015-02-14 01:39:53 +03:00
MODULE_LICENSE ( " GPL " ) ;