Description of problem:

OK, so here's what I have been pondering; putting what's in my head into English text instead of onto a whiteboard of my own is always fun :)

We really ought to be able to map out valid unit configurations, create test units with every valid configuration combo, and test every inch of systemd, so we can rule out anything breaking between releases/changes.

The general idea I have is that we need to write test daemons and sockets for every supported type, with appropriate signal handling for start, stop, restart, reload, etc. That would be followed by writing test cases and test units that [Install] into their own test.target directory. We could then enable and/or simply start that test target via a git hook, reboot, update, etc.

So basically, if someone can write the test daemons/sockets for the types, I can write the test cases and the test units, and try to find a platform for us to use, or come up with a simple web app that would fetch the logs from the journal and display a nice OK/Failed for those test cases.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
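A minimal sketch of what one such test daemon might look like, assuming the plain signal conventions systemd uses by default (SIGTERM on stop, SIGHUP on reload via an ExecReload= line); the class name and structure are illustrative only, not anything that exists in the systemd tree:

```python
#!/usr/bin/env python3
"""Hypothetical test daemon: reacts to the signals systemd sends for
stop (SIGTERM) and reload (SIGHUP), so a test case can drive it with
"systemctl stop/reload" and check the resulting journal output."""
import signal


class TestDaemon:
    def __init__(self):
        self.running = True   # cleared when systemd asks us to stop
        self.reloads = 0      # counts reload requests for the test case
        signal.signal(signal.SIGTERM, self._stop)
        signal.signal(signal.SIGHUP, self._reload)

    def _stop(self, signum, frame):
        # systemd sends SIGTERM on "systemctl stop" (KillSignal= default)
        self.running = False

    def _reload(self, signum, frame):
        # delivered by e.g. ExecReload=/bin/kill -HUP $MAINPID
        self.reloads += 1
```

A real daemon would loop on its work while `self.running` is true and log each state change to the journal so the test harness can grep for it.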
(In reply to comment #0)
> We really ought to be able to mapped out valid unit configurations and
> create test units with every valid configuration combo and test every inch
> out of systemd so we can rule out anything breaking between releases/changes

Every inch ain't gonna be easy, but we certainly could do more than we do now.

> So basically if someone can write the test daemon/socket for the types I can
> write the test cases and the test unit and try to find a platform for us to
> use or come up with a simple web app that would fetch the logs from the
> journal and display a nice OK/Failed for those test cases

Since we can have socket-activated python (sd_listen_fds & friends are nicely wrapped), we should be able to cover all possible combinations with just a few lines of code. If you specify a list of "types", I can generate them.
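For illustration, here is a tiny stdlib-only sketch of the fd-passing protocol behind sd_listen_fds(3): systemd exports LISTEN_PID and LISTEN_FDS and passes the sockets starting at fd 3. Real test daemons should just call the wrapper `systemd.daemon.listen_fds()` from python-systemd instead; this reimplementation is only here to show how little code a socket-activated test daemon needs:

```python
import os

# sd_listen_fds(3): the first activated socket is always fd 3
SD_LISTEN_FDS_START = 3


def listen_fds():
    """Return the file descriptors passed in by systemd socket
    activation, or [] when not socket-activated (sketch only)."""
    # LISTEN_PID guards against inherited env vars in child processes
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    n = int(os.environ.get("LISTEN_FDS", 0))
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + n))
```

A socket-activated test daemon would accept on the returned fds and echo something recognizable into the journal for the test case to check.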
Having tests for everything would be great, of course.

We have two test suites in systemd. One contains unit tests; the other runs various system tests under qemu and/or nspawn. The latter is capable of running the kind of tests you describe. I am sure Lennart runs the testsuites at least before making a new release.

I agree with adding a ****load of tests. We just need people to implement them. As always, developer time is a scarce resource. Help is welcome.

However, I am not sure what the criteria for resolving this BZ are. I.e., when can we say "OK, this is now resolved, let's close this!"? It seems this BZ could be perpetual. It's hard to test "everything".
As I said, I can cover the test case writing (documents/units), and Zbigniew seemed to be willing to take care of the rest, so assigning to me.
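To sketch the [Install]-into-test.target idea from comment #0: each test unit would install into a dedicated target so the whole batch can be started at once. All names here (test-daemon.service, test.target, the ExecStart path) are hypothetical, not actual files in the systemd tree:

```ini
# Hypothetical test unit: test-daemon.service (illustrative only)
[Unit]
Description=Test daemon exercising Type=simple start/stop/reload

[Service]
Type=simple
ExecStart=/usr/lib/systemd/tests/test-daemon
ExecReload=/bin/kill -HUP $MAINPID

[Install]
# Installing into a dedicated test.target means "systemctl start
# test.target" (or enabling it) runs every test unit in one go
WantedBy=test.target
```

Generating one such unit per valid configuration combo, differing only in the [Service] options under test, is what would let a harness walk the whole matrix.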
Very well! You'll find the so-called "extended" testsuite in systemd.git in the test/ directory. There is a README.testsuite file there explaining the basics.

Note that there is currently some ugly code duplication between the TEST-* directories. This should ideally be refactored before adding more of them.
I don't think we need to track this in BZ. We are adding tests frequently, and we will continue to do so. But we could never close this bug, and we won't forget about adding tests anyway, hence let's not keep it around here.