path: root/tools/testing/kunit
2022-05-18  kunit: tool: Use qemu-system-i386 for i386 runs  (David Gow, 1 file, -1/+1)
We're currently using the x86_64 qemu for i386 builds. While this is not incorrect, it's probably more sensible to use the i386 one, which will at least fail properly if we accidentally were to build a 64-bit kernel. Signed-off-by: David Gow <davidgow@google.com> Tested-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-16  kunit: tool: update riscv QEMU config with new serial dependency  (Brendan Higgins, 1 file, -0/+1)
The config for the serial console for riscv, CONFIG_SERIAL_EARLYCON_RISCV_SBI, added a dependency, CONFIG_RISCV_SBI_V01, at some point, so add that in to the base arch config. Signed-off-by: Brendan Higgins <brendanhiggins@google.com> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-16  kunit: tool: Add list of all valid test configs on UML  (David Gow, 1 file, -0/+37)
It's often desirable (particularly in test automation) to run as many tests as possible. This config enables all the tests which work as builtins under UML at present, increasing the total tests run from 156 to 342 (not counting 36 'skipped' tests). They can be run with: ./tools/testing/kunit/kunit.py run --kunitconfig=./tools/testing/kunit/configs/all_tests_uml.config This acts as an in-between point between the KUNIT_ALL_TESTS config (which enables only tests whose dependencies are already enabled), and the kunit_tool --alltests option, which tries to use allyesconfig, taking a very long time to build and breaking very often. Signed-off-by: David Gow <davidgow@google.com> Tested-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-16  kunit: tool: misc cleanups  (Daniel Latypov, 7 files, -46/+39)
This primarily comes from running pylint over kunit tool code and ignoring some warnings we don't care about. If we ever got a fully clean setup, we could add this to run_checks.py, but we're not there yet.

Fix things like
* Drop unused imports
* check `is None`, not `== None` (see PEP 8)
* remove redundant parens around returns
* remove redundant `else` / convert `elif` to `if` where appropriate
* rename make_arch_qemuconfig() param to base_kunitconfig (this is the name used in the subclass, and it's a better one)
* kunit_tool_test: check the exit code for SystemExit (could be 0)

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
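To make the `is None` and redundant-parens/else points concrete, here is a minimal before/after sketch in the same spirit (the function and constant are hypothetical, not taken from the patch):

    import os

    DEFAULT_PATH = '.kunit/.config'  # hypothetical constant for the example

    # Before: '== None' comparison, redundant parens, needless else branch
    def get_config_path_before(build_dir):
        if build_dir == None:
            return (DEFAULT_PATH)
        else:
            return (os.path.join(build_dir, '.config'))

    # After: 'is None' per PEP 8, no redundant parens, early return instead of else
    def get_config_path_after(build_dir):
        if build_dir is None:
            return DEFAULT_PATH
        return os.path.join(build_dir, '.config')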
2022-05-16  kunit: tool: minor cosmetic cleanups in kunit_parser.py  (Daniel Latypov, 1 file, -54/+17)
There should be no behavioral changes from this patch. This patch removes redundant comment text, inlines a function used in only one place, and other such minor tweaks. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-16  kunit: tool: make parser stop overwriting status of suites w/ no_tests  (Daniel Latypov, 2 files, -3/+6)
Consider this invocation:

    $ ./tools/testing/kunit/kunit.py parse <<EOF
    TAP version 14
    1..2
    ok 1 - suite
    # Subtest: no_tests_suite
    # catastrophic error!
    not ok 1 - no_tests_suite
    EOF

It will have a 0 exit code even though there's a "not ok".

Consider this one:

    $ ./tools/testing/kunit/kunit.py parse <<EOF
    TAP version 14
    1..2
    ok 1 - suite
    not ok 1 - no_tests_suite
    EOF

It will have a non-zero exit code. Why?

We have this line in kunit_parser.py

    parent_test = parse_test_header(lines, test)

where we have special handling when we see "# Subtest", and we ignore the explicitly reported "not ok 1" status!

Also, NO_TESTS at the suite level only results in a non-zero status code when there's only one suite at the moment. This change is the minimal one to make sure we don't overwrite it.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-16  kunit: tool: remove dead parse_crash_in_log() logic  (Daniel Latypov, 3 files, -104/+4)
This logic depends on the kernel logging a message containing 'kunit test case crashed', but there is no corresponding logic to do so. This is likely a relic of the revision process KUnit initially went through when being upstreamed. Delete it given 1) it's been missing for years and likely won't get implemented 2) the parser has been moving to be a more general KTAP parser, kunit-only magic like this isn't how we'd want to implement it. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-12  kunit: tool: print clearer error message when there's no TAP output  (Daniel Latypov, 2 files, -3/+4)
Before:
    $ ./tools/testing/kunit/kunit.py parse /dev/null
    ...
    [ERROR] Test : invalid KTAP input!

After:
    $ ./tools/testing/kunit/kunit.py parse /dev/null
    ...
    [ERROR] Test <missing>: could not find any KTAP output!

This error message gets printed out when extract_tap_output() yielded no lines. So while it could be because of malformed KTAP output from KUnit, it could also be due to not having any KTAP output at all. Try and make the error message here more clear.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-12  kunit: tool: stop using a shell to run kernel under QEMU  (Daniel Latypov, 10 files, -20/+22)
Note: this potentially breaks custom qemu_configs if people are using them! But the fix for them is simple: don't specify multiple arguments in one string and don't add on a redundant ''.

It feels a bit iffy to be using a shell in the first place. There's the usual shenanigans where people could pass in arbitrary shell commands via --kernel_arg (since we're just adding '' around the kernel_cmdline) or via a custom qemu_config. This isn't too much of a concern given the nature of this script (and the qemu_config file is in python, you can do w/e you want already). But it does have some other drawbacks.

One example of a kunit-specific pain point: if the relevant qemu binary is missing, we get output like this:

    > /bin/sh: line 1: qemu-system-aarch64: command not found

This in turn results in our KTAP parser complaining about missing/invalid KTAP, but we don't directly show the error! It's even more annoying to debug when you consider --raw_output only shows KUnit output by default, i.e. you need --raw_output=all to see it. Whereas when directly invoking the binary, Python will raise a FileNotFoundError for us, which is noisier but clearer.

Making this change requires
* splitting parameters like ['-m 256'] into ['-m', '256'] in kunit/qemu_configs/*.py
* changing [''] to [] in kunit/qemu_configs/*.py since otherwise QEMU fails with 'Device needs media, but drive is empty'
* dropping explicit quoting of the kernel cmdline
* using shlex.quote() when we print what command we're running so the user can copy-paste and run it

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
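A rough sketch of the shell-free pattern; the QEMU flags and kernel path here are illustrative, not the exact arguments kunit_kernel.py builds:

    import shlex
    import subprocess

    # Arguments must be pre-split: ['-m', '256'], not ['-m 256'].
    qemu_command = ['qemu-system-x86_64', '-nodefaults', '-m', '1024',
                    '-kernel', '.kunit/arch/x86/boot/bzImage',
                    '-append', 'console=ttyS0']

    # Print a copy-pasteable version of the command for the user.
    print('Running: ' + ' '.join(shlex.quote(arg) for arg in qemu_command))

    # With no shell in between, a missing binary raises FileNotFoundError
    # instead of being buried as '/bin/sh: ... command not found' output.
    proc = subprocess.Popen(qemu_command, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)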
2022-05-12  kunit: tool: update test counts summary line format  (Daniel Latypov, 1 file, -5/+5)
Before:
    > Testing complete. Passed: 137, Failed: 0, Crashed: 0, Skipped: 36, Errors: 0

After:
    > Testing complete. Ran 173 tests: passed: 137, skipped: 36

Even with our current set of statuses, the output is a bit verbose. It could get worse in the future if we add more (e.g. timeout, kasan). Let's only print the relevant ones.

I had previously been sympathetic to the argument that always printing out all the statuses would make it easier to parse results. But now we have commit acd8e8407b8f ("kunit: Print test statistics on failure"), there are test counts printed out in the raw output. We don't currently print out an overall total across all suites, but it would be easy to add, if we see a need for that.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Co-developed-by: David Gow <davidgow@google.com>
Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
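A minimal sketch of the "only print non-zero statuses" idea (the dict layout is hypothetical; the real tool tracks counts on a parser object):

    def summarize(counts):
        # counts: status name -> number of tests, e.g. gathered by the parser
        total = sum(counts.values())
        parts = ['passed: %d' % counts['passed']]
        parts += ['%s: %d' % (status, n) for status, n in counts.items()
                  if status != 'passed' and n > 0]
        return 'Ran %d tests: %s' % (total, ', '.join(parts))

    print(summarize({'passed': 137, 'failed': 0, 'crashed': 0, 'skipped': 36, 'errors': 0}))
    # -> Ran 173 tests: passed: 137, skipped: 36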
2022-04-04  kunit: tool: more descriptive metavars/--help output  (Daniel Latypov, 2 files, -14/+17)
Before, our help output contained lines like

    --kconfig_add KCONFIG_ADD
    --qemu_config qemu_config
    --jobs jobs

They're not very helpful. The former kind come from the automatic 'metavar' we get from argparse, the uppercase version of the flag name. The latter are where we manually specified metavar as the flag name.

After:

    --build_dir DIR
    --make_options X=Y
    --kunitconfig PATH
    --kconfig_add CONFIG_X=Y
    --arch ARCH
    --cross_compile PREFIX
    --qemu_config FILE
    --jobs N
    --timeout SECONDS
    --raw_output [{all,kunit}]
    --json [FILE]

This patch tries to make the code more clear by specifying the _type_ of input we expect, e.g. --build_dir is a DIR, --qemu_config is a FILE. I also switched it to uppercase since it looked more clearly like placeholder text that way.

This patch also changes --raw_output to specify `choices` to make it more clear what the options are, and this way argparse can validate it for us, as shown by the added test case.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-04-04  kunit: tool: Do not colorize output when redirected  (Kees Cook, 1 file, -0/+7)
Filling log files with color codes makes diffs and other comparisons difficult. Only emit vt100 codes when the stdout is a TTY. Cc: Brendan Higgins <brendanhiggins@google.com> Cc: linux-kselftest@vger.kernel.org Cc: kunit-dev@googlegroups.com Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
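A minimal illustration of the TTY check (the helper below is hypothetical, not the patch's actual code):

    import sys

    def red(text):
        # Only wrap in vt100 color codes when writing to a terminal,
        # so redirected logs stay free of escape sequences.
        if sys.stdout.isatty():
            return '\033[31m' + text + '\033[0m'
        return text

    print(red('[FAILED]'))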
2022-04-04  kunit: tool: properly report the used arch for --json, or '' if not known  (Daniel Latypov, 3 files, -3/+5)
Before, kunit.py always printed "arch": "UM" in its json output, but...
1. With `kunit.py parse`, we could be parsing output from anywhere, so we can't say that.
2. Capitalizing it is probably wrong, as it's `ARCH=um`
3. Commit 87c9c1631788 ("kunit: tool: add support for QEMU") made it so kunit.py could knowingly run a different arch, yet we'd still always claim "UM".

This patch addresses all of those. E.g.

    1. $ ./tools/testing/kunit/kunit.py parse .kunit/test.log --json | grep -o '"arch.*' | sort -u
       "arch": "",
    2. $ ./tools/testing/kunit/kunit.py run --json | ...
       "arch": "um",
    3. $ ./tools/testing/kunit/kunit.py run --json --arch=x86_64 | ...
       "arch": "x86_64",

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-04-04  kunit: tool: refactor how we plumb metadata into JSON  (Daniel Latypov, 3 files, -21/+33)
When using --json, kunit.py run/exec/parse will produce results in KernelCI json format. As part of that, we include the build_dir that was used, and we (incorrectly) hardcode in the arch, etc. We'll want a way to plumb more values (as well as the correct `arch`), so this patch groups those fields into kunit_json.Metadata type. This patch should have no user visible changes. And since we only used build_dir in KunitParseRequest for json, we can now move it out of that struct and add it into KunitExecRequest, which needs it and used to get it via inheritance. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-04-04  kunit: tool: readability tweaks in KernelCI json generation logic  (Daniel Latypov, 1 file, -10/+10)
Use a more idiomatic check that a list is non-empty (`if mylist:`) and simplify the function body by dedenting and using a dict to map between the kunit TestStatus enum => KernelCI json status string. The dict hopefully makes it less likely to have bugs like commit 9a6bb30a8830 ("kunit: tool: fix --json output for skipped tests"). Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
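A sketch of the dict-based mapping; this TestStatus enum is a simplified stand-in for kunit_parser's, not its exact definition:

    import enum

    class TestStatus(enum.Enum):   # simplified stand-in for kunit_parser.TestStatus
        SUCCESS = enum.auto()
        FAILURE = enum.auto()
        SKIPPED = enum.auto()
        TEST_CRASHED = enum.auto()

    _KERNELCI_STATUS = {
        TestStatus.SUCCESS: 'PASS',
        TestStatus.SKIPPED: 'SKIP',
        TestStatus.TEST_CRASHED: 'ERROR',
    }

    def kernelci_status(status: TestStatus) -> str:
        # Anything not listed above (e.g. FAILURE) is reported as FAIL.
        return _KERNELCI_STATUS.get(status, 'FAIL')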
2022-04-04  kunit: tool: simplify code since build_dir can't be None  (Daniel Latypov, 3 files, -37/+24)
--build_dir is set to a default of '.kunit' since commit ddbd60c779b4 ("kunit: use --build_dir=.kunit as default"), but even before then it was explicitly set to ''. So outside of one unit test, there was no way for the build_dir to be ever be None, and we can simplify code by fixing the unit test and enforcing that via updated type annotations. E.g. this lets us drop `get_file_path()` since it's now exactly equivalent to os.path.join(). Note: there's some `if build_dir` checks that also fail if build_dir is explicitly set to '' that just guard against passing "O=" to make. But running `make O=` works just fine, so drop these checks. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-04-04  kunit: tool: drop last uses of collections.namedtuple  (Daniel Latypov, 2 files, -11/+15)
We have formally required python3.7+ since commit df4b0807ca1a ("kunit: tool: Assert the version requirement"), so we can just use @dataclasses.dataclass instead.

In kunit_config.py, we used namedtuple to create a hashable type that had `name` and `value` fields and had to subclass it to define a custom `__str__()`. @dataclass lets us just define one type instead.

In qemu_config.py, we use namedtuple to allow modules to define various parameters. Using @dataclass, we can add type-annotations for all these fields, making our code more typesafe and making it easier for users to figure out how to define new configs.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
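A hedged sketch of the @dataclass version of such an entry type (the field handling is illustrative, not copied from kunit_config.py):

    from dataclasses import dataclass

    @dataclass(frozen=True)   # frozen=True keeps entries hashable, like the old namedtuple
    class KconfigEntry:
        name: str
        value: str

        def __str__(self) -> str:
            if self.value == 'n':
                return '# CONFIG_%s is not set' % self.name
            return 'CONFIG_%s=%s' % (self.name, self.value)

    print(KconfigEntry('KUNIT', 'y'))   # CONFIG_KUNIT=y
    print(KconfigEntry('DRM', 'n'))     # # CONFIG_DRM is not set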
2022-04-04  kunit: tool: drop unused KernelDirectoryPath var  (Daniel Latypov, 1 file, -2/+0)
Commit be886ba90cce ("kunit: run kunit_tool from any directory") introduced this variable, but it was unused even in that commit. Since it's still unused now and callers can instead use get_kernel_root_path(), delete this var. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-04-04  kunit: tool: make --json handling a bit clearer  (Daniel Latypov, 3 files, -16/+11)
Currently kunit_json.get_json_result() will output the JSON-ified test output to json_path, but iff it's not "stdout". Instead, move the responsibility entirely over to the one caller. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-03-23  Merge tag 'linux-kselftest-kunit-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest  (Linus Torvalds, 1 file, -16/+8)
Pull KUnit updates from Shuah Khan:
- changes to decrease macro layering string, integer, EQ/NE asserts
- remove unused macros
- several cleanups and fixes
- new list tests for list_del_init_careful(), list_is_head() and list_entry_is_head()

* tag 'linux-kselftest-kunit-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  list: test: Add a test for list_entry_is_head()
  list: test: Add a test for list_is_head()
  list: test: Add test for list_del_init_careful()
  kunit: cleanup assertion macro internal variables
  kunit: factor out str constants from binary assertion structs
  kunit: consolidate KUNIT_INIT_BINARY_ASSERT_STRUCT macros
  kunit: remove va_format from kunit_assert
  kunit: tool: drop mostly unused KunitResult.result field
  kunit: decrease macro layering for EQ/NE asserts
  kunit: decrease macro layering for integer asserts
  kunit: reduce layering in string assertion macros
  kunit: drop unused intermediate macros for ptr inequality checks
  kunit: make KUNIT_EXPECT_EQ() use KUNIT_EXPECT_EQ_MSG(), etc.
  kunit: drop unused assert_type from kunit_assert and clean up macros
  kunit: split out part of kunit_assert into a static const
  kunit: factor out kunit_base_assert_format() call into kunit_fail()
  kunit: drop unused kunit* field in kunit_assert
  kunit: move check if assertion passed into the macros
  kunit: add example test case showing off all the expect macros
2022-02-02  kunit: fix missing f in f-string in run_checks.py  (Daniel Latypov, 1 file, -1/+1)
We're missing the `f` prefix to have python do string interpolation, so we'd never end up printing what the actual "unexpected" error is. Fixes: ee92ed38364e ("kunit: add run_checks.py script to validate kunit changes") Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
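The bug class, in miniature (the message text here is illustrative):

    checks = 4
    print('Waiting on {checks} checks...')    # missing 'f': the braces are printed literally
    print(f'Waiting on {checks} checks...')   # f-string: prints "Waiting on 4 checks..."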
2022-01-31  kunit: tool: drop mostly unused KunitResult.result field  (Daniel Latypov, 1 file, -16/+8)
This field is only used to pass along the parsed Test object from parse_tests(). Everywhere else the `result` field is ignored. Instead make parse_tests() explicitly return a KunitResult and Test so we can retire the `result` field. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-01-25  kunit: tool: Import missing importlib.abc  (Michał Winiarski, 1 file, -0/+1)
Python 3.10.0 contains:
9e09849d20 ("bpo-41006: importlib.util no longer imports typing (GH-20938)")
It causes importlib.util to no longer import importlib.abc, which leads to the following error when trying to use kunit with qemu:

    AttributeError: module 'importlib' has no attribute 'abc'. Did you mean: '_abc'?

Add the missing import.

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-15  kunit: tool: Default --jobs to number of CPUs  (David Gow, 2 files, -3/+7)
The --jobs parameter for kunit_tool currently defaults to 8 CPUs, regardless of the number available. For systems with significantly more (or less), this is not as efficient. Instead, default --jobs to the number of CPUs available to the process: while there are as many superstitions as to exactly what the ideal jobs:CPU ratio is, this seems sufficiently sensible to me. A new helper function to get the default number of jobs is added: get_default_jobs() -- this is used in kunit_tool_test instead of a hardcoded value, or an explicit call to len(os.sched_getaffinity()), so should be more flexible if this needs to change in the future. Signed-off-by: David Gow <davidgow@google.com> Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
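A sketch of such a helper, assuming Linux's scheduler-affinity API is available (which is the case in kunit_tool's usual environment):

    import os

    def get_default_jobs() -> int:
        # CPUs this process may run on (Linux-specific); can be fewer than the
        # machine's total CPU count, e.g. under taskset or in a container.
        return len(os.sched_getaffinity(0))

    print(get_default_jobs())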
2021-12-15  kunit: tool: fix newly introduced typechecker errors  (Daniel Latypov, 2 files, -2/+3)
After upgrading mypy and pytype from pip, we see 2 new errors when running ./tools/testing/kunit/run_checks.py.

Error #1: mypy and pytype
They now deduce that importlib.util.spec_from_file_location() can return None and note that we're not checking for this. We validate that the arch is valid (i.e. the file exists) beforehand. Add in an `assert spec is not None` to appease the checkers.

Error #2: pytype bug
https://github.com/google/pytype/issues/1057
It doesn't like `from datetime import datetime`, specifically that a type shares a name with a module. We can work around this by either
* renaming the import or just using `import datetime`
* passing the new `--fix-module-collisions` flag to pytype.

We pick the first option for now because
* the flag is quite new, only in the 2021.11.29 release.
* I'd prefer if people can just run `pytype <file>`

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
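Both fixes, in miniature (the paths and variable names are illustrative):

    import importlib.util

    spec = importlib.util.spec_from_file_location('qemu_config', 'qemu_configs/x86_64.py')
    assert spec is not None   # the arch (i.e. the file) was already validated beforehand
    config = importlib.util.module_from_spec(spec)

    # pytype workaround: use the module rather than `from datetime import datetime`
    import datetime
    build_start = datetime.datetime.now()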
2021-12-15  kunit: tool: make `build` subcommand also reconfigure if needed  (Daniel Latypov, 2 files, -2/+10)
If I created a kunitconfig file that was incomplete, then $ ./tools/testing/kunit/kunit.py build --kunitconfig=my_kunitconfig would silently drop all the options with unmet dependencies! This is because it doesn't do the config check that `kunit.py config` does. So if I want to safely build a kernel for testing, I have to do $ ./tools/testing/kunit/kunit.py config <flags> $ ./tools/testing/kunit/kunit.py build <flags, again> It seems unlikely that any user of kunit.py would want the current `build` semantics. So make it effectively do `kunit.py config` + `kunit.py build`. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-15  kunit: tool: delete kunit_parser.TestResult type  (Daniel Latypov, 4 files, -35/+29)
The `log` field is unused, and the `status` field is accessible via `test.status`. So it's simpler to just return the main `Test` object directly. And since we're no longer returning a namedtuple, which has no type annotations, this hopefully means typecheckers are better equipped to find any errors. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-15  kunit: tool: use dataclass instead of collections.namedtuple  (Daniel Latypov, 2 files, -70/+75)
namedtuple is a terse way of defining a collection of fields. However, it does not allow us to annotate the type of these fields. It also doesn't let us have any sort of inheritance between types.

Since commit df4b0807ca1a ("kunit: tool: Assert the version requirement"), kunit.py has asserted that it's running on python >=3.7. So in that case use a 3.7 feature, dataclasses, to replace these.

Changes in detail:
* Make KunitExecRequest contain all the fields needed for exec_tests
* Use inheritance to dedupe fields
  * also allows us to e.g. pass a KUnitRequest in as a KUnitParseRequest
  * this has changed around the order of some fields
* Use named arguments when constructing all request objects in kunit.py
  * This is to prevent accidentally mixing up fields, etc.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-13  kunit: tool: suggest using decode_stacktrace.sh on kernel crash  (Daniel Latypov, 1 file, -0/+6)
kunit.py isn't very clear that
1) it stashes a copy of the unparsed output in $BUILD_DIR/test.log
2) it sets $BUILD_DIR=.kunit by default

So it's trickier than it should be for a user to come up with the right command to do so. Make kunit.py print out a command for this if
a) we saw a test case crash
b) we only ran one kernel (test.log only contains output from the last)

Example suggested command:

    $ scripts/decode_stacktrace.sh .kunit/vmlinux .kunit < .kunit/test.log | tee .kunit/decoded.log | ./tools/testing/kunit/kunit.py parse

Without debug info a user might see something like

    [14:11:25] Call Trace:
    [14:11:25]  ? kunit_binary_assert_format (:?)
    [14:11:25]  kunit_try_run_case (test.c:?)
    [14:11:25]  ? __kthread_parkme (kthread.c:?)
    [14:11:25]  kunit_generic_run_threadfn_adapter (try-catch.c:?)
    [14:11:25]  ? kunit_generic_run_threadfn_adapter (try-catch.c:?)
    [14:11:25]  kthread (kthread.c:?)
    [14:11:25]  new_thread_handler (:?)
    [14:11:25] [CRASHED]

`tee` is in GNU coreutils, so it seems fine to add that into the pipeline by default; that way users can inspect the output in more detail.

Note: to turn on debug info, users would need to do something like

    $ echo -e 'CONFIG_DEBUG_KERNEL=y\nCONFIG_DEBUG_INFO=y' >> .kunit/.kunitconfig
    $ ./tools/testing/kunit/kunit.py config
    $ ./tools/testing/kunit/kunit.py build
    $ <then run decode_stacktrace.sh now vmlinux is updated>

This feels too clunky to include in the instructions. With --kconfig_add [1], it would become a bit less painful.

[1] https://lore.kernel.org/linux-kselftest/20211106013058.2621799-2-dlatypov@google.com/

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-13  kunit: tool: reconfigure when the used kunitconfig changes  (Daniel Latypov, 2 files, -11/+74)
Problem: currently, if you remove something from your kunitconfig, kunit.py will not regenerate the .config file. The same thing happens if you did --kunitconfig_add=CONFIG_KASAN=y [1] and then ran again without it. Your new run will still have KASAN.

The reason is that kunit.py won't regenerate the .config file if it's a superset of the kunitconfig. This speeds it up a bit for iterating.

This patch adds an additional check that forces kunit.py to regenerate the .config file if the current kunitconfig doesn't match the previous one.

What this means:
* deleting entries from .kunitconfig works as one would expect
* dropping a --kunitconfig_add also triggers a rebuild
* you can still edit .config directly to turn on new options

We implement this by creating a `last_used_kunitconfig` file in the build directory (so .kunit, by default) after we generate the .config. When comparing the kconfigs, we compare python sets, so duplicates and permutations don't trip us up.

The majority of this patch is adding unit tests for the existing logic and for the new case where `last_used_kunitconfig` differs.

[1] https://lore.kernel.org/linux-kselftest/20211106013058.2621799-2-dlatypov@google.com/

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
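A rough sketch of the set comparison, assuming the kunitconfig has already been read into a set of option strings (the file format details are simplified here):

    def kunitconfig_changed(last_used_path, current_options):
        # current_options: set of 'CONFIG_FOO=y'-style strings from the kunitconfig
        try:
            with open(last_used_path) as f:
                previous = {line.strip() for line in f if line.strip()}
        except FileNotFoundError:
            return True   # first run in this build dir: we must configure
        # Comparing sets means duplicates and reordering don't force a reconfigure.
        return previous != current_options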
2021-12-13  kunit: tool: revamp message for invalid kunitconfig  (Daniel Latypov, 1 file, -9/+11)
The current error message is precise, but not very clear if you don't already know what it's talking about, e.g.

    > $ make ARCH=um olddefconfig O=.kunit
    > ERROR:root:Provided Kconfig is not contained in validated .config. Following fields found in kunitconfig, but not in .config: CONFIG_DRM=y

Try to reword the error message so that it conveys:
* your missing options usually have unsatisfied dependencies
* if you're on UML, that might be the cause (it is, in this example)

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-13  kunit: tool: add --kconfig_add to allow easily tweaking kunitconfigs  (Daniel Latypov, 3 files, -0/+31)
E.g. run tests but with KASAN $ ./tools/testing/kunit/kunit.py run --arch=x86_64 --kconfig_add=CONFIG_KASAN=y This also works with --kunitconfig $ ./tools/testing/kunit/kunit.py run --arch=x86_64 --kunitconfig=fs/ext4 --kconfig_add=CONFIG_KASAN=y This flag is inspired by TuxMake's --kconfig-add, see https://gitlab.com/Linaro/tuxmake#examples. Our version just uses "_" as the delimiter for consistency with pre-existing flags like --build_dir, --make_options, --kernel_args, etc. Note: this does make it easier to run into a pre-existing edge case: $ ./tools/testing/kunit/kunit.py run --arch=x86_64 --kconfig_add=CONFIG_KASAN=y $ ./tools/testing/kunit/kunit.py run --arch=x86_64 This second invocation ^ still has KASAN enabled! kunit.py won't call olddefconfig if our current .config is already a superset of the provided kunitconfig. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-13  kunit: tool: move Kconfig read_from_file/parse_from_string to package-level  (Daniel Latypov, 3 files, -42/+37)
read_from_file() clears its `self` Kconfig object and parses a config file. It is a way to construct Kconfig objects more so than an operation on Kconfig objects. This is reflected in the fact it's only ever used as:

    kconfig = kunit_config.Kconfig()
    kconfig.read_from_file(path)

So clean this up and simplify callers by replacing it with

    kconfig = kunit_config.parse_file(path)

Do the same thing for the related parse_from_string() function as well.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-13  kunit: tool: print parsed test results fully incrementally  (Daniel Latypov, 2 files, -7/+57)
With the parser rework [1] and run_kernel() rework [2], this allows the parser to print out test results incrementally.

Currently, that's held up by the fact that the LineStream eagerly pre-fetches the next line when you call pop(). This blocks parse_test_result() from returning until the line *after* the "ok 1 - test name" line is also printed.

One can see this with the following example:

    $ (echo -e 'TAP version 14\n1..3\nok 1 - fake test'; sleep 2; echo -e 'ok 2 - fake test 2'; sleep 3; echo -e 'ok 3 - fake test 3') | ./tools/testing/kunit/kunit.py parse

Before this patch [1]: there's a pause before 'fake test' is printed.
After this patch: 'fake test' is printed out immediately.

This patch also adds
* a unit test to verify LineStream's behavior directly
* a test case to ensure that it's lazily calling the generator
* an explicit exception for when users go beyond EOF

[1] https://lore.kernel.org/linux-kselftest/20211006170049.106852-1-dlatypov@google.com/
[2] https://lore.kernel.org/linux-kselftest/20211005011340.2826268-1-dlatypov@google.com/

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-13  kunit: tool: Report an error if any test has no subtests  (David Gow, 3 files, -5/+30)
It's possible for a test to have a subtest header, but zero valid subtests. We used to error on this if the test plan had no subtests listed, but it's possible to have subtests without a test plan (indeed, this is how parameterised tests work). Tests with 0 subtests now have the result NO_TESTS, and will report an error (which does not halt test execution, but is printed in a scary red colour and is noted in the results summary). Signed-off-by: David Gow <davidgow@google.com> Reviewed-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-13  kunit: tool: Do not error on tests without test plans  (David Gow, 2 files, -4/+6)
The (K)TAP spec encourages test output to begin with a 'test plan': a count of the number of tests being run of the form:

    1..n

However, some test suites might not know the number of subtests in advance (for example, KUnit's parameterised tests use a generator function). In this case, it's not possible to print the test plan in advance.

kunit_tool already parses test output which doesn't contain a plan, but reports an error. Since we want to use nested subtests with KUnit parameterised tests, remove this error.

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-13  kunit: add run_checks.py script to validate kunit changes  (Daniel Latypov, 1 file, -0/+81)
This formalizes the checks KUnit maintainers have been running (or in other cases: forgetting to run). This script also runs them all in parallel to minimize friction (pytype can be fairly slow, but not slower than running kunit.py).

Example output:

    $ ./tools/testing/kunit/run_checks.py
    Waiting on 4 checks (kunit_tool_test.py, kunit smoke test, pytype, mypy)...
    kunit_tool_test.py: PASSED
    mypy: PASSED
    pytype: PASSED
    kunit smoke test: PASSED

On failure or timeout (5 minutes), it'll dump out the stdout/stderr. E.g. adding in a type-checking error:

    mypy: FAILED
    > kunit.py:54: error: Name 'nonexistent_function' is not defined
    > Found 1 error in 1 file (checked 8 source files)

mypy and pytype are two Python type-checkers and must be installed. This file treats them as optional and will mark them as SKIPPED if not installed.

This tool also runs `kunit.py run --kunitconfig=lib/kunit` to run KUnit's own KUnit tests and to verify KUnit kernel code and kunit.py play nicely together. It uses --build_dir=kunit_run_checks so as not to clobber the default build_dir, which helps make it faster by reducing the need to rebuild, esp. if you've been passing in --arch instead of using UML.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
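A minimal sketch of running independent checks in parallel; the commands below are placeholders, not run_checks.py's real configuration:

    import concurrent.futures
    import subprocess

    # Placeholder commands; run_checks.py defines its real checks similarly.
    checks = {
        'mypy': ['mypy', 'kunit.py'],
        'pytype': ['pytype', 'kunit.py'],
    }

    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(subprocess.run, argv, capture_output=True, timeout=300)
                   for name, argv in checks.items()}
        for name, future in futures.items():
            passed = future.result().returncode == 0
            print(f'{name}: {"PASSED" if passed else "FAILED"}')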
2021-12-13  kunit: tool: fix --json output for skipped tests  (Daniel Latypov, 2 files, -0/+8)
Currently, KUnit will report SKIPPED tests as having failed if one uses --json. Add the missing if statement to set the appropriate status ("SKIP").

See https://api.kernelci.org/schema-test-case.html:

    "status": {
        "type": "string",
        "description": "The status of the execution of this test case",
        "enum": ["PASS", "FAIL", "SKIP", "ERROR"],
        "default": "PASS"
    },

with this, we now can properly produce all four of the statuses.

Fixes: 5acaf6031f53 ("kunit: tool: Support skipped tests in kunit_tool")
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-29  kunit: tool: fix typecheck errors about loading qemu configs  (Daniel Latypov, 1 file, -6/+9)
Currently, we have these errors:

    $ mypy ./tools/testing/kunit/*.py
    tools/testing/kunit/kunit_kernel.py:213: error: Item "_Loader" of "Optional[_Loader]" has no attribute "exec_module"
    tools/testing/kunit/kunit_kernel.py:213: error: Item "None" of "Optional[_Loader]" has no attribute "exec_module"
    tools/testing/kunit/kunit_kernel.py:214: error: Module has no attribute "QEMU_ARCH"
    tools/testing/kunit/kunit_kernel.py:215: error: Module has no attribute "QEMU_ARCH"

exec_module
===========
pytype currently reports no errors, but that's because there's a comment disabling it on 213. This is due to https://github.com/python/typeshed/pull/2626. The fix is to assert the loaded module implements the ABC (abstract base class) we want which has exec_module support.

QEMU_ARCH
=========
pytype is fine with this, but mypy is not: https://github.com/python/mypy/issues/5059
Add a check that the loaded module does indeed have QEMU_ARCH. Note: this is not enough to appease mypy, so we also add a comment to squash the warning.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-25  kunit: tool: continue past invalid utf-8 output  (Daniel Latypov, 2 files, -3/+4)
kunit.py currently crashes and fails to parse kernel output if it's not fully valid utf-8. This can come from memory corruption or just inadvertently printing out binary data as strings. E.g. adding this line into a kunit test

    pr_info("\x80")

will cause this exception

    UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 1961: invalid start byte

We can tell Python how to handle errors, see https://docs.python.org/3/library/codecs.html#error-handlers
Unfortunately, it doesn't seem like there's a way to specify this in just one location, so we need to repeat ourselves quite a bit.

Specify `errors='backslashreplace'` so we instead:
* print out the offending byte as '\x80'
* try and continue parsing the output.
* as long as the TAP lines themselves are valid, we're fine.

Fixed spelling/grammar in commit log: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Tested-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
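What the error handler does, in isolation:

    raw = b'ok 1 - example\n\x80\n'   # kernel output containing an invalid utf-8 byte
    # raw.decode('utf-8') would raise UnicodeDecodeError here.
    print(raw.decode('utf-8', errors='backslashreplace'))   # prints the byte as '\x80' and keeps going
    # The same keyword argument works when opening files, e.g.:
    # open('.kunit/test.log', errors='backslashreplace')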
2021-10-19  kunit: tool: improve compatibility of kunit_parser with KTAP specification  (Rae Moar, 8 files, -383/+938)
Update to kunit_parser to improve compatibility with KTAP specification including arbitrarily nested tests. Patch accomplishes three major changes:

- Use a general Test object to represent all tests rather than TestCase and TestSuite objects. This allows for easier implementation of arbitrary levels of nested tests and promotes the idea that both test suites and test cases are tests.

- Print errors incrementally rather than all at once after the parsing finishes to maximize information given to the user in the case of the parser given invalid input and to increase the helpfulness of the timestamps given during printing. Note that kunit.py parse does not print incrementally yet. However, this fix brings us closer to this feature.

- Increase compatibility for different formats of input. Arbitrary levels of nested tests supported. Also, test cases and test suites are now supported to be present on the same level of testing.

This patch now implements the draft KTAP specification here:
https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqaJk+r-K1YJzPggFDQ@mail.gmail.com/
We'll update the parser as the spec evolves.

This patch adjusts the kunit_tool_test.py file to check for the correct outputs from the new parser and adds a new test to check the parsing for a KTAP result log with correct format for multiple nested subtests (test_is_test_passed-all_passed_nested.log).

This patch also alters the kunit_json.py file to allow for arbitrarily nested tests.

Signed-off-by: Rae Moar <rmoar@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19  kunit: tool: yield output from run_kernel in real time  (Daniel Latypov, 2 files, -30/+62)
Currently, `run_kernel()` dumps all the kernel output to a file (.kunit/test.log) and then opens the file and yields it to callers. This made it easier to respect the requested timeout, if any. But it means that we can't yield the results in real time, either to the parser or to stdout (if --raw_output is set).

This change spins up a background thread to enforce the timeout, which allows us to yield the kernel output in real time, while also copying it to the .kunit/test.log file. It's also careful to ensure that the .kunit/test.log file is complete, even if the kunit_parser throws an exception/otherwise doesn't consume every line, see the new `finally` block and unit test.

For example:

    $ ./tools/testing/kunit/kunit.py run --arch=x86_64 --raw_output
    <configure + build steps>
    ...
    <can now see output from QEMU in real time>

This does not currently have a visible effect when --raw_output is not passed, as kunit_parser.py currently only outputs everything at the end. But that could change, and this patch is a necessary step towards showing parsed test results in real time.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
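A simplified sketch of the approach (not the actual kunit_kernel.py implementation): a timer thread enforces the timeout while lines are yielded and copied to the log as they arrive.

    import subprocess
    import threading

    def run_kernel(argv, timeout, log_path):
        # Yield kernel output lines in real time while copying them to log_path.
        proc = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                                text=True, errors='backslashreplace')
        timer = threading.Timer(timeout, proc.terminate)   # enforce the timeout in the background
        timer.start()
        with open(log_path, 'w') as log:
            try:
                for line in proc.stdout:
                    log.write(line)
                    yield line
            finally:
                # Keep the log complete even if the consumer stops early or raises.
                log.writelines(proc.stdout)
                timer.cancel()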
2021-10-19  kunit: tool: support running each suite/test separately  (Daniel Latypov, 2 files, -22/+118)
The new --run_isolated flag makes the tool boot the kernel once per suite or test, preventing leftover state from one suite from impacting the other. This can be useful as a starting point to debugging test hermeticity issues.

Note: it takes a lot longer, so people should not use it normally.

Consider the following very simplified example:

    bool disable_something_for_test = false;
    void function_being_tested() {
        ...
        if (disable_something_for_test) return;
        ...
    }

    static void test_before(struct kunit *test) {
        disable_something_for_test = true;
        function_being_tested();
        /* oops, we forgot to reset it back to false */
    }

    static void test_after(struct kunit *test) {
        /* oops, now "fixing" test_before can cause test_after to fail! */
        function_being_tested();
    }

Presented like this, the issues are obvious, but it gets a lot more complicated to track down as the amount of test setup and helper functions increases.

Another use case is memory corruption. It might not be surfaced as a failure/crash in the test case or suite that caused it. I've noticed in kunit's own unit tests, the 3rd suite after might be the one to finally crash after an out-of-bounds write, for example.

Example usage:

Per suite:
    $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite
    ...
    Starting KUnit Kernel (1/7)...
    ============================================================
    ======== [PASSED] kunit_executor_test ========
    ....
    Testing complete. 5 tests run. 0 failed. 0 crashed. 0 skipped.
    Starting KUnit Kernel (2/7)...
    ============================================================
    ======== [PASSED] kunit-try-catch-test ========
    ...

Per test:
    $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=test
    Starting KUnit Kernel (1/23)...
    ============================================================
    ======== [PASSED] kunit_executor_test ========
    [PASSED] parse_filter_test
    ============================================================
    Testing complete. 1 tests run. 0 failed. 0 crashed. 0 skipped.
    Starting KUnit Kernel (2/23)...
    ============================================================
    ======== [PASSED] kunit_executor_test ========
    [PASSED] filter_subsuite_test
    ...

It works with filters as well:
    $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite example
    ...
    Starting KUnit Kernel (1/1)...
    ============================================================
    ======== [PASSED] example ========
    ...

It also handles test filters, '*.*skip*' runs these 3 tests:
    kunit_status.kunit_status_mark_skipped_test
    example.example_skip_test
    example.example_mark_skipped_test

Fixed up merge conflict between:
d8c23ead708b ("kunit: tool: better handling of quasi-bool args (--json, --raw_output)")
and
6710951ee039 ("kunit: tool: support running each suite/test separately")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19  kunit: tool: actually track how long it took to run tests  (Daniel Latypov, 1 file, -3/+5)
This is a long standing bug in kunit tool. Since these files were added, run_kernel() has always yielded lines. That means the call to run_kernel() returns before the kernel finishes executing tests, potentially before a single line of output is even produced.

So code like this

    time_start = time.time()
    result = linux.run_kernel(...)
    time_end = time.time()

would only measure the time taken for python to give back the generator object.

From a caller's perspective, the only way to know the kernel has exited is for us to consume all the output from the `result` generator object. Alternatively, we could change run_kernel() to try and do its own book keeping and return the total time, but that doesn't seem worth it.

This change makes us record `time_end` after we're done parsing all the output (which should mean we've consumed all of it, or errored out). That means we're including the parsing time as well, but that should be quite small, and it's better than claiming it took 0s to run tests.

Let's use this as an example:

    $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit example

Before:
    Elapsed time: 7.684s total, 0.001s configuring, 4.692s building, 0.000s running

After:
    Elapsed time: 6.283s total, 0.001s configuring, 3.202s building, 3.079s running

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19  kunit: tool: factor exec + parse steps into a function  (Daniel Latypov, 1 file, -25/+19)
Currently this code is copy-pasted between the normal "run" subcommand and the "exec" subcommand. Given we don't have any interest in just executing the tests without giving the user any indication what happened (i.e. parsing the output), make a function that does both this things and can be reused. This will be useful when we allow more complicated ways of running tests, e.g. invoking the kernel multiple times instead of just once, etc. We remove input_data from the ParseRequest so the callers don't have to pass in a dummy value for this field. Named tuples are also immutable, so if they did pass in a dummy, exec_tests() would need to make a copy to call parse_tests(). Removing it also makes KunitParseRequest match the other *Request types, as they only contain user arguments/flags, not data. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Acked-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19  kunit: tool: show list of valid --arch options when invalid  (Daniel Latypov, 2 files, -2/+7)
Consider this attempt to run KUnit in QEMU: $ ./tools/testing/kunit/kunit.py run --arch=x86 Before you'd get this error message: kunit_kernel.ConfigError: x86 is not a valid arch After: kunit_kernel.ConfigError: x86 is not a valid arch, options are ['alpha', 'arm', 'arm64', 'i386', 'powerpc', 'riscv', 's390', 'sparc', 'x86_64'] This should make it a bit easier for people to notice when they make typos, etc. Currently, one would have to dive into the python code to figure out what the valid set is. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19  kunit: tool: misc fixes (unused vars, imports, leaked files)  (Daniel Latypov, 3 files, -19/+12)
Drop some variables in unit tests that were unused and/or add assertions based on them. For ExitStack, it was imported, but the `es` variable wasn't used so it didn't do anything, and we were leaking the file objects. Refactor it to just use nested `with` statements to properly close them. And drop the direct use of .close() on file objects in the kunit tool unit test, as these can be leaked if test assertions fail. Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
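The file-handling pattern in question, as a small hedged sketch (the file names are made up):

    # Before (simplified): ExitStack was imported but the files were never
    # registered with it, so they leaked if an assertion failed.
    # After: nested `with` closes both files even when the body raises.
    with open('expected.log') as expected, open('actual.log') as actual:
        assert expected.read() == actual.read()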
2021-10-19  kunit: tool: allow filtering test cases via glob  (Daniel Latypov, 1 file, -3/+2)
Commit 1d71307a6f94 ("kunit: add unit test for filtering suites by names") introduced the ability to filter which suites we run via glob. This change extends it so we can also filter individual test cases inside of suites as well.

This is quite useful when, e.g.
* trying to run just the test cases you've just added or are working on
* trying to debug issues with test hermeticity

Examples:

    $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit '*exec*.parse*'
    ...
    ============================================================
    ======== [PASSED] kunit_executor_test ========
    [PASSED] parse_filter_test
    ============================================================
    Testing complete. 1 tests run. 0 failed. 0 crashed.

    $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit '*.no_matching_tests'
    ...
    [ERROR] no tests run!

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-01  kunit: tool: better handling of quasi-bool args (--json, --raw_output)  (Daniel Latypov, 2 files, -2/+30)
Problem: What does this do?

    $ kunit.py run --json

Well, it runs all the tests and prints test results out as JSON. And next is

    $ kunit.py run my-test-suite --json

This runs just `my-test-suite` and prints results out as JSON. But what about?

    $ kunit.py run --json my-test-suite

This runs all the tests and stores the json results in a "my-test-suite" file.

Why: --json, and now --raw_output, are actually string flags. They just have a default value. --json in particular takes the name of an output file. It was intended that you'd do

    $ kunit.py run --json=my_output_file my-test-suite

if you ever wanted to specify the value.

Workaround: It doesn't seem like there's a way to make https://docs.python.org/3/library/argparse.html only accept arg values after a '='. I believe that `--json` should "just work" regardless of where it is. So this patch automatically rewrites a bare `--json` to `--json=stdout`. That makes the examples above work the same way.

Add a regression test that can catch this for --raw_output.

Fixes: 6a499c9c42d0 ("kunit: tool: make --raw_output support only showing kunit output")
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Tested-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-08-13  kunit: Print test statistics on failure  (David Gow, 1 file, -1/+1)
When a number of tests fail, it can be useful to get higher-level statistics of how many tests are failing (or how many parameters are failing in parameterised tests), and in what cases or suites. This is already done by some non-KUnit tests, so add support for automatically generating these for KUnit tests.

This change adds a 'kunit.stats_enabled' switch which has three values:
- 0: No stats are printed (current behaviour)
- 1: Stats are printed only for tests/suites with more than one subtest (new default)
- 2: Always print test statistics

For parameterised tests, the summary line looks as follows:
    " # inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16"

For test suites, there are two lines looking like this:
    "# ext4_inode_test: pass:1 fail:0 skip:0 total:1"
    "# Totals: pass:16 fail:0 skip:0 total:16"

The first line gives the number of direct subtests, the second "Totals" line is the accumulated sum of all tests and test parameters. This format is based on the one used by kselftest[1].

[1]: https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/kselftest.h#L109

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>