.. SPDX-License-Identifier: GPL-2.0

===========
Using KUnit
===========

The purpose of this document is to describe what KUnit is, how it works, how it
is intended to be used, and all the concepts and terminology that are needed to
understand it. This guide assumes a working knowledge of the Linux kernel and
some basic knowledge of testing.

For a high level introduction to KUnit, including setting up KUnit for your
project, see :doc:`start`.

Organization of this document
=============================

This document is organized into two main sections: Testing and Common Patterns.
The first covers what unit tests are and how to use KUnit to write them. The
second covers common testing patterns, e.g. how to isolate code and make it
possible to unit test code that was otherwise un-unit-testable.

Testing
=======

What is KUnit?
--------------

"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
Framework." KUnit is intended first and foremost for writing unit tests; it is
general enough that it can be used to write integration tests; however, this is
a secondary goal. KUnit has no ambition of being the only testing framework for
the kernel; for example, it does not intend to be an end-to-end testing
framework.

What is Unit Testing?
---------------------

A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
tests code at the smallest possible scope, a *unit* of code. In the C
programming language that's a function.

Unit tests should be written for all the publicly exposed functions in a
compilation unit; that is, all functions which are exported as part of a
*class* (defined below) and all functions which are **not** static.

Writing Tests
-------------

Test Cases
~~~~~~~~~~

The fundamental unit in KUnit is the test case. A test case is a function with
the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
and then sets *expectations* for what should happen. For example:

.. code-block:: c

	void example_test_success(struct kunit *test)
	{
	}

	void example_test_failure(struct kunit *test)
	{
		KUNIT_FAIL(test, "This test never passes.");
	}

In the above example ``example_test_success`` always passes because it does
nothing; no expectations are set, so all expectations pass. On the other hand
``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
a special expectation that logs a message and causes the test case to fail.

Expectations
~~~~~~~~~~~~
An *expectation* is a way to specify that you expect a piece of code to do
something in a test. An expectation is called like a function. A test is made
by setting expectations about the behavior of a piece of code under test; when
one or more of the expectations fail, the test case fails and information about
the failure is logged. For example:

.. code-block:: c

	void add_test_basic(struct kunit *test)
	{
		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
	}

In the above example, ``add_test_basic`` sets a number of expectations about
the behavior of a function called ``add``. The first parameter is always of
type ``struct kunit *``, which contains information about the current test
context; the second parameter, in this case, is what the value is expected to
be; the last value is what the value actually is. If ``add`` passes all of
these expectations, the test case ``add_test_basic`` will pass; if any one of
these expectations fails, the test case will fail.

It is important to understand that a test case *fails* when any expectation is
violated; however, the test will continue running, potentially checking other
expectations, until the test case ends or is otherwise terminated. This is in
contrast to *assertions*, which are discussed later.
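
For example, in the following hypothetical test case (using the ``add``
function from above), a broken ``add`` would fail the first expectation, but
the remaining expectation would still be checked and reported:

.. code-block:: c

	void add_test_reports_all_failures(struct kunit *test)
	{
		/*
		 * If add() is broken such that add(1, 0) != 1, the first
		 * expectation fails, but the test keeps running and any
		 * failure in the remaining expectation is also reported.
		 */
		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
	}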

To learn about more expectations supported by KUnit, see :doc:`api/test`.
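For illustration, here are a few of the other expectations KUnit provides (all
of these pass):

.. code-block:: c

	void example_more_expectations(struct kunit *test)
	{
		void *ptr = kunit_kzalloc(test, 8, GFP_KERNEL);

		KUNIT_EXPECT_TRUE(test, 1 + 1 == 2);
		KUNIT_EXPECT_FALSE(test, 1 > 2);
		KUNIT_EXPECT_NOT_ERR_OR_NULL(test, ptr);
		KUNIT_EXPECT_STREQ(test, "foo", "foo");
	}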

.. note::
   A single test case should be pretty short, easy to understand, and focused
   on a single behavior.

For example, if we wanted to properly test the add function above, we would
create additional test cases, each testing a different property that an add
function should have, like this:

.. code-block:: c

	void add_test_basic(struct kunit *test)
	{
		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
	}

	void add_test_negative(struct kunit *test)
	{
		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
	}

	void add_test_max(struct kunit *test)
	{
		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
	}

	void add_test_overflow(struct kunit *test)
	{
		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
	}

Notice how it is immediately obvious which properties we are testing for.

Assertions
~~~~~~~~~~

KUnit also has the concept of an *assertion*. An assertion is just like an
expectation except the assertion immediately terminates the test case if it is
not satisfied.

For example:

.. code-block:: c

	static void mock_test_do_expect_default_return(struct kunit *test)
	{
		struct mock_test_context *ctx = test->priv;
		struct mock *mock = ctx->mock;
		int param0 = 5, param1 = -5;
		const char *two_param_types[] = {"int", "int"};
		const void *two_params[] = {&param0, &param1};
		const void *ret;

		ret = mock->do_expect(mock,
				      "test_printk", test_printk,
				      two_param_types, two_params,
				      ARRAY_SIZE(two_params));
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
	}

In this example, the method under test should return a pointer to a value; if
the pointer is NULL or an error pointer, we don't want to bother continuing the
test, since the following expectation could crash the test case.
``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the test case if
the appropriate conditions have not been satisfied to complete the test.

Test Suites
~~~~~~~~~~~

Now obviously one unit test isn't very helpful; the power comes from having
many test cases covering all of a unit's behaviors. Consequently, it is common
to have many *similar* tests; to reduce duplication in these closely related
tests, most unit testing frameworks - including KUnit - provide the concept of
a *test suite*. A *test suite* is just a collection of test cases for a unit
of code, with a set up function that gets invoked before every test case and a
tear down function that gets invoked after every test case completes.

Example:

.. code-block:: c

	static struct kunit_case example_test_cases[] = {
		KUNIT_CASE(example_test_foo),
		KUNIT_CASE(example_test_bar),
		KUNIT_CASE(example_test_baz),
		{}
	};

	static struct kunit_suite example_test_suite = {
		.name = "example",
		.init = example_test_init,
		.exit = example_test_exit,
		.test_cases = example_test_cases,
	};
	kunit_test_suite(example_test_suite);

In the above example the test suite, ``example_test_suite``, would run the test
cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``;
each would have ``example_test_init`` called immediately before it and would
have ``example_test_exit`` called immediately after it.
``kunit_test_suite(example_test_suite)`` registers the test suite with the
KUnit test framework.
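
For completeness, here is a sketch of what ``example_test_init`` and
``example_test_exit`` might look like (the ``kunit_info()`` logging is just
for illustration):

.. code-block:: c

	static int example_test_init(struct kunit *test)
	{
		/* Runs before each test case; a non-zero return aborts it. */
		kunit_info(test, "initializing\n");

		return 0;
	}

	static void example_test_exit(struct kunit *test)
	{
		/* Runs after each test case completes. */
		kunit_info(test, "cleaning up\n");
	}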

.. note::
   A test case will only be run if it is associated with a test suite.

``kunit_test_suite(...)`` is a macro which tells the linker to put the specified
test suite in a special linker section so that it can be run by KUnit either
after late_init, or when the test module is loaded (depending on whether the
test was built in or not).

For more information on test cases and suites, see :doc:`api/test`.

Common Patterns
===============

Isolating Behavior
------------------

The most important aspect of unit testing that other forms of testing do not
provide is the ability to limit the amount of code under test to a single unit.
In practice, this is only possible by controlling what code gets run when the
unit under test calls a function. This is usually accomplished through some
sort of indirection, where a function is exposed as part of an API such that
the definition of that function can be changed without affecting the rest of
the code base. In the kernel, this indirection primarily comes from two
constructs: *classes*, structs that contain function pointers provided by the
implementer, and architecture-specific functions, whose definitions are
selected at compile time.
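
The second construct is sketched below with hypothetical names; the
preprocessor selects which definition the code under test ends up calling:

.. code-block:: c

	/*
	 * Hypothetical example: the definition seen by callers is
	 * selected at compile time based on the target architecture.
	 */
	#ifdef CONFIG_X86
	static inline int arch_get_feature_flags(void)
	{
		return 1; /* x86-specific implementation would go here */
	}
	#else
	static inline int arch_get_feature_flags(void)
	{
		return 0; /* generic (or test) fallback */
	}
	#endif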

Classes
~~~~~~~

Classes are not a construct that is built into the C programming language;
however, it is an easily derived concept. Accordingly, pretty much every
project that does not use a standardized object oriented library (like GNOME's
GObject) has its own slightly different way of doing object oriented
programming; the Linux kernel is no exception.

The central concept in kernel object oriented programming is the class. In the
kernel, a *class* is a struct that contains function pointers. This creates a
contract between *implementers* and *users* since it forces them to use the
same function signature without having to call the function directly. In order
for it to truly be a class, the function pointers must specify that a pointer
to the class, known as a *class handle*, be one of the parameters; this makes
it possible for the member functions (also known as *methods*) to have access
to member variables (more commonly known as *fields*) allowing the same
implementation to have multiple *instances*.

Typically a class can be *overridden* by *child classes* by embedding the
*parent class* in the child class. When a method provided by the child class
is called, the child implementation knows that the pointer passed to it points
to a parent embedded within the child; because of this, the child can compute
the pointer to itself, since the pointer to the parent is always a fixed
offset from the pointer to the child: namely, the offset of the parent member
within the child struct. For example:

.. code-block:: c

	struct shape {
		int (*area)(struct shape *this);
	};

	struct rectangle {
		struct shape parent;
		int length;
		int width;
	};

	int rectangle_area(struct shape *this)
	{
		struct rectangle *self = container_of(this, struct rectangle, parent);

		return self->length * self->width;
	}

	void rectangle_new(struct rectangle *self, int length, int width)
	{
		self->parent.area = rectangle_area;
		self->length = length;
		self->width = width;
	}

In this example (as in most kernel code) the operation of computing the pointer
to the child from the pointer to the parent is done by ``container_of``.
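
For illustration, a caller could then use a ``struct rectangle`` through its
``struct shape`` handle like so:

.. code-block:: c

	struct rectangle rect;
	struct shape *shape = &rect.parent;

	rectangle_new(&rect, 3, 4);
	/* Dispatches to rectangle_area() through the function pointer. */
	shape->area(shape); /* returns 12 */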

Faking Classes
~~~~~~~~~~~~~~

In order to unit test a piece of code that calls a method in a class, the
behavior of the method must be controllable, otherwise the test ceases to be a
unit test and becomes an integration test.

A fake just provides an implementation of a piece of code that is different
from what runs in a production instance, but behaves identically from the
standpoint of the callers; this is usually done to replace a dependency that
is hard to deal with, or is slow.

A good example for this might be implementing a fake EEPROM that just stores the
"contents" in an internal buffer. For example, let's assume we have a class that
represents an EEPROM:

.. code-block:: c

	struct eeprom {
		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
	};

And we want to test some code that buffers writes to the EEPROM:

.. code-block:: c

	struct eeprom_buffer {
		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
		int (*flush)(struct eeprom_buffer *this);
		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
	};

	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
	void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);

We can easily test this code by *faking out* the underlying EEPROM:

.. code-block:: c

	struct fake_eeprom {
		struct eeprom parent;
		char contents[FAKE_EEPROM_CONTENTS_SIZE];
	};

	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
	{
		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);

		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
		memcpy(buffer, this->contents + offset, count);

		return count;
	}

	ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
	{
		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);

		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
		memcpy(this->contents + offset, buffer, count);

		return count;
	}

	void fake_eeprom_init(struct fake_eeprom *this)
	{
		this->parent.read = fake_eeprom_read;
		this->parent.write = fake_eeprom_write;
		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
	}

We can now use it to test ``struct eeprom_buffer``:

.. code-block:: c

	struct eeprom_buffer_test {
		struct fake_eeprom *fake_eeprom;
		struct eeprom_buffer *eeprom_buffer;
	};

	static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
	{
		struct eeprom_buffer_test *ctx = test->priv;
		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
		char buffer[] = {0xff};

		eeprom_buffer->flush_count = SIZE_MAX;

		eeprom_buffer->write(eeprom_buffer, buffer, 1);
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);

		eeprom_buffer->write(eeprom_buffer, buffer, 1);
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);

		eeprom_buffer->flush(eeprom_buffer);
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
	}

	static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
	{
		struct eeprom_buffer_test *ctx = test->priv;
		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
		char buffer[] = {0xff};

		eeprom_buffer->flush_count = 2;

		eeprom_buffer->write(eeprom_buffer, buffer, 1);
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);

		eeprom_buffer->write(eeprom_buffer, buffer, 1);
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
	}

	static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
	{
		struct eeprom_buffer_test *ctx = test->priv;
		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
		char buffer[] = {0xff, 0xff};

		eeprom_buffer->flush_count = 2;

		eeprom_buffer->write(eeprom_buffer, buffer, 1);
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);

		eeprom_buffer->write(eeprom_buffer, buffer, 2);
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
		/* Should have only flushed the first two bytes. */
		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
	}

	static int eeprom_buffer_test_init(struct kunit *test)
	{
		struct eeprom_buffer_test *ctx;

		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);

		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
		fake_eeprom_init(ctx->fake_eeprom);

		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);

		test->priv = ctx;

		return 0;
	}

	static void eeprom_buffer_test_exit(struct kunit *test)
	{
		struct eeprom_buffer_test *ctx = test->priv;

		destroy_eeprom_buffer(ctx->eeprom_buffer);
	}
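
Following the pattern from the Test Suites section above, these test cases and
the ``init``/``exit`` functions could then be wired up and registered like so:

.. code-block:: c

	static struct kunit_case eeprom_buffer_test_cases[] = {
		KUNIT_CASE(eeprom_buffer_test_does_not_write_until_flush),
		KUNIT_CASE(eeprom_buffer_test_flushes_after_flush_count_met),
		KUNIT_CASE(eeprom_buffer_test_flushes_increments_of_flush_count),
		{}
	};

	static struct kunit_suite eeprom_buffer_test_suite = {
		.name = "eeprom_buffer",
		.init = eeprom_buffer_test_init,
		.exit = eeprom_buffer_test_exit,
		.test_cases = eeprom_buffer_test_cases,
	};
	kunit_test_suite(eeprom_buffer_test_suite);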

Testing against multiple inputs
-------------------------------

Testing just a few inputs might not be enough to have confidence that the code
works correctly, e.g. for a hash function.

In such cases, it can be helpful to have a helper macro or function, e.g. this
fictitious example for ``sha1sum(1)``:

.. code-block:: c

	/* Note: the cast is to satisfy overly strict type-checking. */
	#define TEST_SHA1(in, want) \
		sha1sum(in, out); \
		KUNIT_EXPECT_STREQ_MSG(test, (char *)out, want, "sha1sum(%s)", in);

	char out[40];
	TEST_SHA1("hello world",  "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed");
	TEST_SHA1("hello world!", "430ce34d020724ed75a196dfc2ad67c77772d169");


Note the use of ``KUNIT_EXPECT_STREQ_MSG`` to give more context when it fails
and make it easier to track down. (Yes, in this example, ``want`` is likely
going to be unique enough on its own).

The ``_MSG`` variants are even more useful when the same expectation is called
multiple times (in a loop or helper function) and thus the line number isn't
enough to identify what failed, like below.

In some cases, it can be helpful to write a *table-driven test* instead, e.g.

.. code-block:: c

	int i;
	char out[40];

	struct sha1_test_case {
		const char *str;
		const char *sha1;
	};

	struct sha1_test_case cases[] = {
		{
			.str = "hello world",
			.sha1 = "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed",
		},
		{
			.str = "hello world!",
			.sha1 = "430ce34d020724ed75a196dfc2ad67c77772d169",
		},
	};
	for (i = 0; i < ARRAY_SIZE(cases); ++i) {
		sha1sum(cases[i].str, out);
		KUNIT_EXPECT_STREQ_MSG(test, (char *)out, cases[i].sha1,
		                      "sha1sum(%s)", cases[i].str);
	}


There's more boilerplate involved, but it can:

* be more readable when there are multiple inputs/outputs thanks to field names,

  * E.g. see ``fs/ext4/inode-test.c`` for an example of both.
* reduce duplication if test cases can be shared across multiple tests.

  * E.g. if we wanted to also test ``sha256sum``, we could add a ``sha256``
    field and reuse ``cases``.

* be converted to a "parameterized test", see below.

Parameterized Testing
~~~~~~~~~~~~~~~~~~~~~

The table-driven testing pattern is common enough that KUnit has special
support for it.

Reusing the same ``cases`` array from above, we can write the test as a
"parameterized test" with the following.

.. code-block:: c

	// This is copy-pasted from above.
	struct sha1_test_case {
		const char *str;
		const char *sha1;
	};
	struct sha1_test_case cases[] = {
		{
			.str = "hello world",
			.sha1 = "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed",
		},
		{
			.str = "hello world!",
			.sha1 = "430ce34d020724ed75a196dfc2ad67c77772d169",
		},
	};

	// Need a helper function to generate a name for each test case.
	static void case_to_desc(const struct sha1_test_case *t, char *desc)
	{
		strcpy(desc, t->str);
	}
	// Creates `sha1_gen_params()` to iterate over `cases`.
	KUNIT_ARRAY_PARAM(sha1, cases, case_to_desc);

	// Looks no different from a normal test.
	static void sha1_test(struct kunit *test)
	{
		// This function can just contain the body of the for-loop.
		// The former `cases[i]` is accessible under test->param_value.
		char out[40];
		struct sha1_test_case *test_param = (struct sha1_test_case *)(test->param_value);

		sha1sum(test_param->str, out);
		KUNIT_EXPECT_STREQ_MSG(test, (char *)out, test_param->sha1,
				      "sha1sum(%s)", test_param->str);
	}

	// Instead of KUNIT_CASE, we use KUNIT_CASE_PARAM and pass in the
	// function declared by KUNIT_ARRAY_PARAM.
	static struct kunit_case sha1_test_cases[] = {
		KUNIT_CASE_PARAM(sha1_test, sha1_gen_params),
		{}
	};

.. _kunit-on-non-uml:

KUnit on non-UML architectures
==============================

By default KUnit uses UML as a way to provide dependencies for code under test.
Under most circumstances KUnit's usage of UML should be treated as an
implementation detail of how KUnit works under the hood. Nevertheless, there
are instances where being able to run architecture-specific code or test
against real hardware is desirable. For these reasons KUnit supports running on
other architectures.

Running existing KUnit tests on non-UML architectures
-----------------------------------------------------

There are some special considerations when running existing KUnit tests on
non-UML architectures:

*   Hardware may not be deterministic, so a test that always passes or fails
    when run under UML may not always do so on real hardware.
*   Hardware and VM environments may not be hermetic. KUnit tries its best to
    provide a hermetic environment to run tests; however, it cannot manage state
    that it doesn't know about outside of the kernel. Consequently, tests that
    may be hermetic on UML may not be hermetic on other architectures.
*   Some features and tooling may not be supported outside of UML.
*   Hardware and VMs are slower than UML.

None of these are reasons not to run your KUnit tests on real hardware; they are
only things to be aware of when doing so.

The biggest impediment will likely be that certain KUnit features and
infrastructure may not support your target environment. For example, at this
time the KUnit Wrapper (``tools/testing/kunit/kunit.py``) does not work outside
of UML. Unfortunately, there is no way around this. Using UML (or even just a
particular architecture) allows us to make a lot of assumptions that make it
possible to do things which might otherwise be impossible.

Nevertheless, all core KUnit framework features are fully supported on all
architectures, and using them is straightforward: all you need to do is take
your kunitconfig, your Kconfig options for the tests you would like to run, and
merge them into whatever config you are using for your platform. That's it!

For example, let's say you have the following kunitconfig:

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=y

If you wanted to run this test on an x86 VM, you might add the following config
options to your ``.config``:

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=y
	CONFIG_SERIAL_8250=y
	CONFIG_SERIAL_8250_CONSOLE=y

All these new options do is enable support for a common serial console needed
for logging.

Next, you could build a kernel with these tests as follows:


.. code-block:: bash

	make ARCH=x86 olddefconfig
	make ARCH=x86

Once you have built a kernel, you could run it on QEMU as follows:

.. code-block:: bash

	qemu-system-x86_64 -enable-kvm \
			   -m 1024 \
			   -kernel arch/x86/boot/bzImage \
			   -append 'console=ttyS0' \
			   --nographic

Interspersed in the kernel logs you might see the following:

.. code-block:: none

	TAP version 14
		# Subtest: example
		1..1
		# example_simple_test: initializing
		ok 1 - example_simple_test
	ok 1 - example

Congratulations, you just ran a KUnit test on the x86 architecture!

In a similar manner, KUnit and KUnit tests can also be built as modules; if
you wanted to run tests this way, you might add the following config options
to your ``.config``:

.. code-block:: none

	CONFIG_KUNIT=m
	CONFIG_KUNIT_EXAMPLE_TEST=m

Once the kernel is built and installed, a simple

.. code-block:: bash

	modprobe example-test

...will run the tests.

.. note::
   Make sure your test depends on ``KUNIT=y`` in Kconfig if the test does not
   support module build.  Otherwise, it will trigger compile errors if
   ``CONFIG_KUNIT`` is ``m``.
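
For example, a hypothetical Kconfig entry for a test that only supports being
built in might look like this:

.. code-block:: none

	config FOO_KUNIT_TEST
		bool "KUnit test for foo"
		depends on KUNIT=y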

Writing new tests for other architectures
-----------------------------------------

The first thing you must do is ask yourself whether it is necessary to write a
KUnit test for a specific architecture, and then whether it is necessary to
write that test for a particular piece of hardware. In general, writing a test
that depends on having access to a particular piece of hardware or software (not
included in the Linux source repo) should be avoided at all costs.

Even if you only ever plan on running your KUnit test on your hardware
configuration, other people may want to run your tests and may not have access
to your hardware. If you write your test to run on UML, then anyone can run your
tests without knowing anything about your particular setup, and you can still
run your tests on your hardware setup just by compiling for your architecture.

.. important::
   Always prefer tests that run on UML to tests that only run under a
   particular architecture, and always prefer tests that run under QEMU or
   another easy-to-obtain (and monetarily free) software environment to tests
   that require a specific piece of hardware.

Nevertheless, there are still valid reasons to write an architecture- or
hardware-specific test: for example, you might want to test some code that
really belongs in ``arch/some-arch/*``. Even so, try your best to write the
test so that it does not depend on physical hardware: if some of your test
cases don't need the hardware, only require the hardware for the tests that
actually need it.

Now that you have narrowed down exactly what bits are hardware specific, the
actual procedure for writing and running the tests is pretty much the same as
writing normal KUnit tests. One special caveat is that you have to reset
hardware state in between test cases; if this is not possible, you may only be
able to run one test case per invocation.
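
For example, the suite's ``init`` function is a natural place to perform such
a reset; here is a sketch, assuming a hypothetical ``my_hardware_reset()``
helper:

.. code-block:: c

	static int my_hardware_test_init(struct kunit *test)
	{
		/*
		 * my_hardware_reset() is a hypothetical helper that returns
		 * the device to a known state before each test case; a
		 * non-zero return aborts the test case.
		 */
		return my_hardware_reset();
	}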

.. TODO(brendanhiggins@google.com): Add an actual example of an architecture-
   dependent KUnit test.

KUnit debugfs representation
============================
When KUnit test suites are initialized, they create an associated directory
in ``/sys/kernel/debug/kunit/<test-suite>``.  The directory contains one file:

- results: "cat results" displays results of each test case and the results
  of the entire suite for the last test run.

The debugfs representation is primarily of use when KUnit test suites are
run in a native environment, either as modules or built-in.  Having a way
to display results like this is valuable, as otherwise results can be
intermixed with other events in dmesg output.  The maximum size of each
results file is KUNIT_LOG_SIZE bytes (defined in ``include/kunit/test.h``).
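
For example, assuming the ``example`` suite shown earlier has been
initialized, its results could be displayed with:

.. code-block:: bash

	cat /sys/kernel/debug/kunit/example/results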