.. SPDX-License-Identifier: GPL-2.0+
.. Copyright 2021 Google LLC
.. sectionauthor:: Simon Glass <sjg@chromium.org>

Writing Tests
=============

This describes how to write tests in U-Boot and the options available.

Test types
----------

There are two basic types of test in U-Boot:

 - Python tests, in test/py/tests
 - C tests, in test/ and its subdirectories

(there are also UEFI tests in lib/efi_selftest/, not considered here.)

Python tests talk to U-Boot via the command line. They support both sandbox and
real hardware. They typically do not require building test code into U-Boot
itself. They are fairly slow to run, due to the command-line interface and there
being two separate processes. Python tests are fairly easy to write. They can
be a little tricky to debug sometimes due to the voluminous output of pytest.

C tests are written directly in U-Boot. While they can be used on boards, they
are more commonly used with sandbox, as they obviously add to U-Boot code size.
C tests are easy to write so long as the required facilities exist. Where they
do not, it can involve refactoring or adding new features to sandbox. They are
fast to run and easy to debug.

Regardless of which test type is used, all tests are collected and run by the
pytest framework, so there is typically no need to run them separately. This
means that C tests can be used when it makes sense, and Python tests when it
doesn't.


This table shows how to decide whether to write a C or Python test:

40===================== =========================== =============================
41Attribute C test Python test
42===================== =========================== =============================
43Fast to run? Yes No (two separate processes)
44Easy to write? Yes, if required test Yes
45 features exist in sandbox
46 or the target system
47Needs code in U-Boot? Yes No, provided the test can be
48 executed and the result
49 determined using the command
50 line
51Easy to debug? Yes No, since access to the U-Boot
52 state is not available and the
53 amount of output can
54 sometimes require a bit of
55 digging
56Can use gdb? Yes, directly Yes, with --gdbserver
57Can run on boards? Some can, but only if Some
58 compiled in and not
59 dependent on sandboxau
60===================== =========================== =============================


Python or C
-----------

Typically in U-Boot we encourage C tests, using sandbox, for all features. This
allows fast testing and easy development, and allows contributors to make
changes without needing dozens of boards to test with.

When a test requires setup or interaction with the running host (such as
generating images and then running U-Boot to check that they can be loaded), or
cannot be run on sandbox, Python tests should be used. These should typically
NOT rely on running with sandbox, but instead should function correctly on any
board supported by U-Boot.


Mixing Python and C
-------------------

The best of both worlds is sometimes to have a Python test set things up and
perform some operations, with a 'checker' C unit test doing the checks
afterwards. This can be achieved with these steps:

- Add the `UTF_MANUAL` flag to the checker test so that the `ut` command
  does not run it by default
- Add a `_norun` suffix to the name so that pytest knows to skip it too

In your Python test use the `-f` flag to the `ut` command to force the checker
test to run, e.g.::

    # Do the Python part
    host load ...
    bootm ...

    # Run the checker to make sure that everything worked
    ut -f bootstd vbe_test_fixup_norun

Note that apart from the `UTF_MANUAL` flag, the code in a 'manual' C test
is just like any other C test. It still uses ut_assert...() and other such
constructs, in this case to check that the expected things happened in the
Python test.
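
For illustration, a checker in the `bootstd` suite might look roughly like the
sketch below. The test name, the value checked and the `BOOTSTD_TEST()`
registration macro are illustrative only; each suite defines its own
registration macro, as described later in this document::

    /* Check the results of the steps performed by the Python test */
    static int bootstd_test_vbe_check_norun(struct unit_test_state *uts)
    {
            /* e.g. verify that the Python steps left the expected state behind */
            ut_asserteq(2, env_get_hex("vbe_fixup_count", 0));

            return 0;
    }
    BOOTSTD_TEST(bootstd_test_vbe_check_norun, UTF_MANUAL);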


How slow are Python tests?
--------------------------

Under the hood, when running on sandbox, Python tests work by starting a
sandbox U-Boot and connecting to it via a pipe. Each interaction with the
U-Boot process requires at least a context switch to handle the pipe
interaction. The test sends a command to U-Boot, which then reacts and shows
some output, then the test sees that and continues. Of course on real hardware,
communication delays (e.g. with a serial console) make this slower.

For comparison, consider a test that checks the 'md' (memory dump) command. All
times below are approximate, as measured on an AMD 2950X system. Here is the
test in Python::

    @pytest.mark.buildconfigspec('cmd_memory')
    def test_md(u_boot_console):
        """Test that md reads memory as expected, and that memory can be modified
        using the mw command."""

        ram_base = u_boot_utils.find_ram_base(u_boot_console)
        addr = '%08x' % ram_base
        val = 'a5f09876'
        expected_response = addr + ': ' + val
        u_boot_console.run_command('mw ' + addr + ' 0 10')
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(not (expected_response in response))
        u_boot_console.run_command('mw ' + addr + ' ' + val)
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(expected_response in response)

This runs a few commands and checks the output. Note that it runs a command,
waits for the response and then checks it against what is expected. If run by
itself it takes around 800ms, including test collection. For 1000 runs it takes
19 seconds, or 19ms per run. Of course 1000 runs is not that useful since we
only want to run it once.

There is no exactly equivalent C test, but here is a similar one that tests 'ms'
(memory search)::

    /* Test 'ms' command with bytes */
    static int mem_test_ms_b(struct unit_test_state *uts)
    {
            u8 *buf;

            buf = map_sysmem(0, BUF_SIZE + 1);
            memset(buf, '\0', BUF_SIZE);
            buf[0x0] = 0x12;
            buf[0x31] = 0x12;
            buf[0xff] = 0x12;
            buf[0x100] = 0x12;
            ut_assertok(console_record_reset_enable());
            run_command("ms.b 1 ff 12", 0);
            ut_assert_nextline("00000030: 00 12 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................");
            ut_assert_nextline("--");
            ut_assert_nextline("000000f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12    ................");
            ut_assert_nextline("2 matches");
            ut_assert_console_end();

            ut_asserteq(2, env_get_hex("memmatches", 0));
            ut_asserteq(0xff, env_get_hex("memaddr", 0));
            ut_asserteq(0xfe, env_get_hex("mempos", 0));

            unmap_sysmem(buf);

            return 0;
    }
    MEM_TEST(mem_test_ms_b, UTF_CONSOLE);

This runs the command directly in U-Boot, then checks the console output, also
directly in U-Boot. If run by itself this takes 100ms. For 1000 runs it takes
660ms, or 0.66ms per run.

So overall running a C test is perhaps 8 times faster individually and the
interactions are perhaps 25 times faster.

It should also be noted that the C test is fairly easy to debug. You can set a
breakpoint on do_mem_search(), which is what implements the 'ms' command,
single-step to see what might be wrong, etc. That is also possible with the
Python test, but requires two terminals and --gdbserver.


Why does speed matter?
----------------------

Many development activities rely on running tests:

 - 'git bisect run make qcheck' can be used to find a failing commit
 - test-driven development relies on quick iteration of build/test
 - U-Boot's continuous integration (CI) systems make use of tests. Running
   all sandbox tests typically takes 90 seconds and running each qemu test
   takes about 30 seconds. This is currently dwarfed by the time taken to
   build all boards

As U-Boot continues to grow its feature set, fast and reliable tests are a
critical factor in developer productivity and happiness.


Writing C tests
---------------

C tests are arranged into suites which are typically executed by the 'ut'
command. Each suite is in its own file. This section describes how to
accomplish some common test tasks.

(there are also UEFI C tests in lib/efi_selftest/, not considered here.)

Add a new driver model test
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when adding a test for a new or existing uclass, adding new operations
or features to a uclass, adding new ofnode or dev_read_() functions, or anything
else related to driver model.

Find a suitable place for your test, perhaps near other test functions in
existing code, or in a new file. Each uclass should have its own test file.

Declare the test with::

    /* Test that ... */
    static int dm_test_uclassname_what(struct unit_test_state *uts)
    {
            /* test code here */

            return 0;
    }
    DM_TEST(dm_test_uclassname_what, UTF_SCAN_FDT);

Note that the convention is to NOT add a blank line before the macro, so that
the function it relates to is more obvious.

Replace 'uclassname' with the name of your uclass, if applicable. Replace 'what'
with what you are testing.

The flags for DM_TEST() are defined in test/test.h and you typically want
UTF_SCAN_FDT so that the devicetree is scanned and all devices are bound
and ready for use. The DM_TEST macro adds UTF_DM automatically so that
the test runner knows it is a driver model test.
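
As a rough sketch, a filled-in test for a hypothetical GPIO check might look
like this (the uclass used and the assertions are purely illustrative)::

    /* Test that the first GPIO device can be found */
    static int dm_test_gpio_what(struct unit_test_state *uts)
    {
            struct udevice *dev;

            /* UTF_SCAN_FDT binds devices from the devicetree; get (and probe) one */
            ut_assertok(uclass_get_device(UCLASS_GPIO, 0, &dev));
            ut_assertnonnull(dev);

            return 0;
    }
    DM_TEST(dm_test_gpio_what, UTF_SCAN_FDT);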

Driver model tests are special in that the entire driver model state is
recreated anew for each test. This ensures that if a previous test deletes a
device, for example, it does not affect subsequent tests. Driver model tests
also run both with livetree and flattree, to ensure that both devicetree
implementations work as expected.

Example commit: c48cb7ebfb4 ("sandbox: add ADC unit tests") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/c48cb7ebfb4


Add a C test to an existing suite
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when you are adding to or modifying an existing feature outside driver
model. An example is bloblist.

Add a new function in the same file as the rest of the suite and register it
with the suite. For example, to add a new mem_search test::

    /* Test 'ms' command with 32-bit values */
    static int mem_test_ms_new_thing(struct unit_test_state *uts)
    {
            /* test code here */

            return 0;
    }
    MEM_TEST(mem_test_ms_new_thing, UTF_CONSOLE);

Note that the MEM_TEST() macro is defined at the top of the file.
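
It follows the same per-suite pattern described below for new suites, roughly
like this (check the top of the file for the exact definition)::

    /* Declare a new mem test */
    #define MEM_TEST(_name, _flags) UNIT_TEST(_name, _flags, mem_test)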

Example commit: 9fe064646d2 ("bloblist: Support relocating to a larger space") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/9fe064646d2


Add a new test suite
~~~~~~~~~~~~~~~~~~~~

Each suite should focus on one feature or subsystem, so if you are writing a
new one of those, you should add a new suite.

Create a new file in test/ or a subdirectory and define a macro to register the
suite. For example::

    #include <console.h>
    #include <mapmem.h>
    #include <dm/test.h>
    #include <test/ut.h>

    /* Declare a new wibble test */
    #define WIBBLE_TEST(_name, _flags) UNIT_TEST(_name, _flags, wibble_test)

    /* Tests go here */

    /* At the bottom of the file: */

    int do_ut_wibble(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
    {
            struct unit_test *tests = UNIT_TEST_SUITE_START(wibble_test);
            const int n_ents = UNIT_TEST_SUITE_COUNT(wibble_test);

            return cmd_ut_category("cmd_wibble", "wibble_test_", tests, n_ents, argc, argv);
    }

Then add new tests to it as above.

Register this new suite in test/cmd_ut.c by adding to cmd_ut_sub[]::

    /* Within cmd_ut_sub[]... */

    U_BOOT_CMD_MKENT(wibble, CONFIG_SYS_MAXARGS, 1, do_ut_wibble, "", ""),

and adding new help to ut_help_text[]::

    "ut wibble - Test the wibble feature\n"

If your feature is conditional on a particular Kconfig, then you can use #ifdef
to control that.
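
For example, with a hypothetical CONFIG_WIBBLE option gating the entry shown
above::

    #ifdef CONFIG_WIBBLE
            U_BOOT_CMD_MKENT(wibble, CONFIG_SYS_MAXARGS, 1, do_ut_wibble, "", ""),
    #endif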

Finally, add the test to the build by adding to the Makefile in the same
directory::

    obj-$(CONFIG_$(SPL_)CMDLINE) += wibble.o

Note that CMDLINE is never enabled in SPL, so this test will only be present in
U-Boot proper. See below for how to do SPL tests.

As before, you can add an extra Kconfig check if needed::

    ifneq ($(CONFIG_$(SPL_)WIBBLE),)
    obj-$(CONFIG_$(SPL_)CMDLINE) += wibble.o
    endif


Example commit: 919e7a8fb64 ("test: Add a simple test for bloblist") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/919e7a8fb64


Making the test run from pytest
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All C tests must run from pytest. Typically this is automatic, since pytest
scans the U-Boot executable for available tests to run. So long as you have a
'ut' subcommand for your test suite, it will run. The same applies for driver
model tests since they use the 'ut dm' subcommand.

See test/py/tests/test_ut.py for how unit tests are run.


Add a C test for SPL
~~~~~~~~~~~~~~~~~~~~

Note: C tests are only available for sandbox_spl at present. There is currently
no mechanism in other boards to run existing SPL tests even if they are built
into the image.

SPL tests cannot be run from the 'ut' command since there are no commands
available in SPL. Instead, sandbox (only) calls ut_run_list() on start-up, when
the -u flag is given. This runs the available unit tests, no matter what suite
they are in.

To create a new SPL test, follow the same rules as above, either adding to an
existing suite or creating a new one.

An example SPL test is spl_test_load().
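
For reference, a minimal SPL-capable test might look like the following sketch
(the suite and macro names here are hypothetical, following the same
UNIT_TEST() pattern shown above)::

    /* Hypothetical suite macro, declared as for WIBBLE_TEST above */
    #define SPL_WIBBLE_TEST(_name, _flags) UNIT_TEST(_name, _flags, spl_wibble_test)

    /* A check simple enough to run in SPL, where no commands are available */
    static int spl_wibble_test_base(struct unit_test_state *uts)
    {
            ut_asserteq(4, sizeof(u32));

            return 0;
    }
    SPL_WIBBLE_TEST(spl_wibble_test_base, 0);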


Writing Python tests
--------------------

See :doc:`py_testing` for brief notes on how to write Python tests. You
should be able to use the existing tests in test/py/tests as examples.