.. SPDX-License-Identifier: GPL-2.0+
.. Copyright 2021 Google LLC
.. sectionauthor:: Simon Glass <sjg@chromium.org>

Writing Tests
=============

This describes how to write tests in U-Boot and outlines the possible options.

Test types
----------

There are two basic types of test in U-Boot:

- Python tests, in test/py/tests
- C tests, in test/ and its subdirectories

(There are also UEFI tests in lib/efi_selftest/, not considered here.)

Python tests talk to U-Boot via the command line. They support both sandbox and
real hardware. They typically do not require building test code into U-Boot
itself. They are fairly slow to run, due to the command-line interface and there
being two separate processes. Python tests are fairly easy to write. They can
be a little tricky to debug sometimes, due to the voluminous output of pytest.

C tests are written directly in U-Boot. While they can be used on boards, they
are more commonly used with sandbox, as they obviously add to U-Boot code size.
C tests are easy to write so long as the required facilities exist. Where they
do not, it can involve refactoring or adding new features to sandbox. They are
fast to run and easy to debug.

Regardless of which test type is used, all tests are collected and run by the
pytest framework, so there is typically no need to run them separately. This
means that C tests can be used when it makes sense, and Python tests when it
doesn't.


This table shows how to decide whether to write a C or Python test:

===================== =========================== =============================
Attribute             C test                      Python test
===================== =========================== =============================
Fast to run?          Yes                         No (two separate processes)
Easy to write?        Yes, if required test       Yes
                      features exist in sandbox
                      or the target system
Needs code in U-Boot? Yes                         No, provided the test can be
                                                  executed and the result
                                                  determined using the command
                                                  line
Easy to debug?        Yes                         No, since access to the U-Boot
                                                  state is not available and the
                                                  amount of output can
                                                  sometimes require a bit of
                                                  digging
Can use gdb?          Yes, directly               Yes, with --gdbserver
Can run on boards?    Some can, but only if       Some
                      compiled in and not
                      dependent on sandbox
===================== =========================== =============================


Python or C
-----------

Typically in U-Boot we encourage C tests, using sandbox, for all features. This
allows fast testing and easy development, and allows contributors to make
changes without needing dozens of boards to test with.

When a test requires setup or interaction with the running host (such as to
generate images and then run U-Boot to check that they can be loaded), or
cannot be run on sandbox, Python tests should be used. These should typically
NOT rely on running with sandbox, but instead should function correctly on any
board supported by U-Boot.


Mixing Python and C
-------------------

The best of both worlds is sometimes to have a Python test set things up and
perform some operations, with a 'checker' C unit test doing the checks
afterwards. This can be achieved with these steps:

- Add the `UT_TESTF_MANUAL` flag to the checker test so that the `ut` command
  does not run it by default
- Add a `_norun` suffix to the name so that pytest knows to skip it too

In your Python test use the `-f` flag to the `ut` command to force the checker
test to run, e.g.::

    # Do the Python part
    host load ...
    bootm ...

    # Run the checker to make sure that everything worked
    ut -f bootstd vbe_test_fixup_norun

Note that apart from the `UT_TESTF_MANUAL` flag, the code in a 'manual' C test
is just like any other C test. It still uses ut_assert...() and other such
constructs, in this case to check that the expected things happened in the
Python test.
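
For illustration, a 'manual' checker test might look something like this
sketch. The registration macro is assumed to be the bootstd suite's
BOOTSTD_TEST(); the test name and the environment variables checked here are
made up and depend entirely on what the Python test sets up::

    /* Check the results of the Python steps that ran before this test */
    static int bootstd_test_vbe_check_norun(struct unit_test_state *uts)
    {
        /* Hypothetical checks: the Python test is assumed to set these */
        ut_asserteq(2, env_get_hex("fixup_count", 0));
        ut_assertnonnull(env_get("loadaddr"));

        return 0;
    }
    BOOTSTD_TEST(bootstd_test_vbe_check_norun, UT_TESTF_MANUAL);
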


How slow are Python tests?
--------------------------

Under the hood, when running on sandbox, Python tests work by starting a
sandbox process and connecting to it via a pipe. Each interaction with the
U-Boot process requires at least a context switch to handle the pipe
interaction. The test sends a command to U-Boot, which then reacts and shows
some output, then the test sees that and continues. Of course on real hardware,
communications delays (e.g. with a serial console) make this slower.

For comparison, consider a test that checks the 'md' (memory dump) command. All
times below are approximate, as measured on an AMD 2950X system. Here is the
test in Python::

    @pytest.mark.buildconfigspec('cmd_memory')
    def test_md(u_boot_console):
        """Test that md reads memory as expected, and that memory can be modified
        using the mw command."""

        ram_base = u_boot_utils.find_ram_base(u_boot_console)
        addr = '%08x' % ram_base
        val = 'a5f09876'
        expected_response = addr + ': ' + val
        u_boot_console.run_command('mw ' + addr + ' 0 10')
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(not (expected_response in response))
        u_boot_console.run_command('mw ' + addr + ' ' + val)
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(expected_response in response)

This runs a few commands and checks the output. Note that it runs a command,
waits for the response and then checks it against what is expected. If run by
itself it takes around 800ms, including test collection. For 1000 runs it takes
19 seconds, or 19ms per run. Of course 1000 runs is not that useful since we
only want to run it once.

There is no exactly equivalent C test, but here is a similar one that tests 'ms'
(memory search)::

    /* Test 'ms' command with bytes */
    static int mem_test_ms_b(struct unit_test_state *uts)
    {
        u8 *buf;

        buf = map_sysmem(0, BUF_SIZE + 1);
        memset(buf, '\0', BUF_SIZE);
        buf[0x0] = 0x12;
        buf[0x31] = 0x12;
        buf[0xff] = 0x12;
        buf[0x100] = 0x12;
        ut_assertok(console_record_reset_enable());
        run_command("ms.b 1 ff 12", 0);
        ut_assert_nextline("00000030: 00 12 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................");
        ut_assert_nextline("--");
        ut_assert_nextline("000000f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12 ................");
        ut_assert_nextline("2 matches");
        ut_assert_console_end();

        ut_asserteq(2, env_get_hex("memmatches", 0));
        ut_asserteq(0xff, env_get_hex("memaddr", 0));
        ut_asserteq(0xfe, env_get_hex("mempos", 0));

        unmap_sysmem(buf);

        return 0;
    }
    MEM_TEST(mem_test_ms_b, UT_TESTF_CONSOLE_REC);

This runs the command directly in U-Boot, then checks the console output, also
directly in U-Boot. If run by itself this takes 100ms. For 1000 runs it takes
660ms, or 0.66ms per run.

So overall running a C test is perhaps 8 times faster individually and the
interactions are perhaps 25 times faster.

It should also be noted that the C test is fairly easy to debug. You can set a
breakpoint on do_mem_search(), which is what implements the 'ms' command,
single step to see what might be wrong, etc. That is also possible with the
Python test, but requires two terminals and --gdbserver.
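
For example, with sandbox the C test can be run directly under gdb. This is
just a sketch, assuming a sandbox build in the current directory; the test name
is the one shown above::

    $ gdb --args ./u-boot -T -c "ut mem mem_test_ms_b"
    (gdb) break do_mem_search
    (gdb) run
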


Why does speed matter?
----------------------

Many development activities rely on running tests:

- 'git bisect run make qcheck' can be used to find a failing commit
- test-driven development relies on quick iteration of build/test
- U-Boot's continuous integration (CI) systems make use of tests. Running
  all sandbox tests typically takes 90 seconds and running each qemu test
  takes about 30 seconds. This is currently dwarfed by the time taken to
  build all boards

As U-Boot continues to grow its feature set, fast and reliable tests are a
critical factor in developer productivity and happiness.


Writing C tests
---------------

C tests are arranged into suites which are typically executed by the 'ut'
command. Each suite is in its own file. This section describes how to
accomplish some common test tasks.

(There are also UEFI C tests in lib/efi_selftest/, not considered here.)

Add a new driver model test
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when adding a test for a new or existing uclass, adding new operations
or features to a uclass, adding new ofnode or dev_read_() functions, or anything
else related to driver model.

Find a suitable place for your test, perhaps near other test functions in
existing code, or in a new file. Each uclass should have its own test file.

Declare the test with::

    /* Test that ... */
    static int dm_test_uclassname_what(struct unit_test_state *uts)
    {
        /* test code here */

        return 0;
    }
    DM_TEST(dm_test_uclassname_what, UT_TESTF_SCAN_FDT);

Replace 'uclassname' with the name of your uclass, if applicable. Replace 'what'
with what you are testing.

The flags for DM_TEST() are defined in test/test.h and you typically want
UT_TESTF_SCAN_FDT so that the devicetree is scanned and all devices are bound
and ready for use. The DM_TEST macro adds UT_TESTF_DM automatically so that
the test runner knows it is a driver model test.
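
As an illustration, a filled-in test might look something like the sketch
below. The UCLASS_MISC uclass and the 'wibble' device name are hypothetical
and only show the usual ut_assert...() pattern::

    /* Test that the (hypothetical) wibble device probes and has the right name */
    static int dm_test_wibble_base(struct unit_test_state *uts)
    {
        struct udevice *dev;

        ut_assertok(uclass_first_device_err(UCLASS_MISC, &dev));
        ut_asserteq_str("wibble", dev->name);

        return 0;
    }
    DM_TEST(dm_test_wibble_base, UT_TESTF_SCAN_FDT);
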

Driver model tests are special in that the entire driver model state is
recreated anew for each test. This ensures that if a previous test deletes a
device, for example, it does not affect subsequent tests. Driver model tests
also run both with livetree and flattree, to ensure that both devicetree
implementations work as expected.

Example commit: c48cb7ebfb4 ("sandbox: add ADC unit tests") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/c48cb7ebfb4


Add a C test to an existing suite
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when you are adding to or modifying an existing feature outside driver
model. An example is bloblist.

Add a new function in the same file as the rest of the suite and register it
with the suite. For example, to add a new mem_search test::

    /* Test 'ms' command with 32-bit values */
    static int mem_test_ms_new_thing(struct unit_test_state *uts)
    {
        /* test code here */

        return 0;
    }
    MEM_TEST(mem_test_ms_new_thing, UT_TESTF_CONSOLE_REC);

Note that the MEM_TEST() macro is defined at the top of the file.

Example commit: 9fe064646d2 ("bloblist: Support relocating to a larger space") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/9fe064646d2


Add a new test suite
~~~~~~~~~~~~~~~~~~~~

Each suite should focus on one feature or subsystem, so if you are writing a
new one of those, you should add a new suite.

Create a new file in test/ or a subdirectory and define a macro to register the
suite. For example::

    #include <console.h>
    #include <mapmem.h>
    #include <dm/test.h>
    #include <test/ut.h>

    /* Declare a new wibble test */
    #define WIBBLE_TEST(_name, _flags) UNIT_TEST(_name, _flags, wibble_test)

    /* Tests go here */

    /* At the bottom of the file: */

    int do_ut_wibble(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
    {
        struct unit_test *tests = UNIT_TEST_SUITE_START(wibble_test);
        const int n_ents = UNIT_TEST_SUITE_COUNT(wibble_test);

        return cmd_ut_category("cmd_wibble", "wibble_test_", tests, n_ents, argc, argv);
    }

Then add new tests to it as above.

Register this new suite in test/cmd_ut.c by adding it to cmd_ut_sub[]::

    /* Within cmd_ut_sub[]... */

    U_BOOT_CMD_MKENT(wibble, CONFIG_SYS_MAXARGS, 1, do_ut_wibble, "", ""),

and adding new help to ut_help_text[]::

    "ut wibble - Test the wibble feature\n"

If your feature is conditional on a particular Kconfig, then you can use #ifdef
to control that, as in the sketch below.
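
For example, the cmd_ut_sub[] entry could be guarded by the (hypothetical)
CONFIG_WIBBLE option::

    /* Within cmd_ut_sub[], only register the suite if the feature is enabled */
    #ifdef CONFIG_WIBBLE
    U_BOOT_CMD_MKENT(wibble, CONFIG_SYS_MAXARGS, 1, do_ut_wibble, "", ""),
    #endif
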

Finally, add the test to the build by adding to the Makefile in the same
directory::

    obj-$(CONFIG_$(SPL_)CMDLINE) += wibble.o

Note that CMDLINE is never enabled in SPL, so this test will only be present in
U-Boot proper. See below for how to do SPL tests.

As before, you can add an extra Kconfig check if needed::

    ifneq ($(CONFIG_$(SPL_)WIBBLE),)
    obj-$(CONFIG_$(SPL_)CMDLINE) += wibble.o
    endif


Example commit: 919e7a8fb64 ("test: Add a simple test for bloblist") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/919e7a8fb64


Making the test run from pytest
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All C tests must run from pytest. Typically this is automatic, since pytest
scans the U-Boot executable for available tests to run. So long as you have a
'ut' subcommand for your test suite, it will run. The same applies for driver
model tests, since they use the 'ut dm' subcommand.

See test/py/tests/test_ut.py for how unit tests are run.


Add a C test for SPL
~~~~~~~~~~~~~~~~~~~~

Note: C tests are only available for sandbox_spl at present. There is currently
no mechanism in other boards to run SPL tests even if they are built into the
image.

SPL tests cannot be run from the 'ut' command since there are no commands
available in SPL. Instead, sandbox (only) calls ut_run_list() on start-up, when
the -u flag is given. This runs the available unit tests, no matter what suite
they are in.

To create a new SPL test, follow the same rules as above, either adding to an
existing suite or creating a new one.

An example SPL test is spl_test_load().


Writing Python tests
--------------------

See :doc:`py_testing` for brief notes on how to write Python tests. You
should be able to use the existing tests in test/py/tests as examples.