Five things to take away from Nat Pryce and Steve Freeman's "TDD at the system scale" talk
- When you run your system tests, build as much as possible of the environment from scratch. At the very least, build and deploy the app and clear out the database before each run (a rough setup sketch follows this list).
- For testing assemblies that include an asynchronous component, wrap your assertions in a function that repeatedly “probes” for the state you want until it either finds it or times out. Something like: doSomethingAsync(); probe(interval, timeout, aMatcher, anotherMatcher...); Wrap the probe() function in a separate class that has access to the objects you want to probe, to keep the tests simple (see the probe sketch after this list).
- Don’t use the logging APIs directly for anything except low-level debug() messages, and maybe not even then. Instead, have a “Monitoring” topic and push structured messages/objects onto that queue. That separates producing the messages from routing, handling, and persisting them, and your system tests can hook into the same messages to detect hard-to-observe state changes (see the monitoring sketch after this list).
- For system tests, build a “System Driver” that acts as a facade over the real system, giving test classes easy access to a properly-initialised test environment: managing the creation and cleanup of test data, access to monitoring queues, wrappers for probes, etc. (see the driver sketch after this list).
- We really need to start using a proper queueing provider
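For the first takeaway, here is a minimal sketch of resetting the environment before each run. It assumes JUnit 4 and a JDBC-accessible test database; the connection string, credentials, table names, and the commented-out deploy step are all placeholders, not anything from the talk.

import org.junit.Before;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public abstract class SystemTest {
    @Before
    public void cleanEnvironment() throws Exception {
        // Redeploy the freshly built application first (hypothetical helper):
        // Deployer.redeploy("build/myapp.war");

        // Clear out the database so every run starts from a known state.
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost/myapp_test", "test", "test");
             Statement s = c.createStatement()) {
            s.execute("TRUNCATE TABLE orders, customers, audit_log");
        }
    }
}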
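For the probe idea, this is one way it could look, using Hamcrest matchers. The Probe class, its constructor arguments, and the usage example are assumptions of mine, not the speakers' actual code; the point is only the sample-until-match-or-timeout loop.

import java.util.Arrays;
import java.util.function.Supplier;
import org.hamcrest.Matcher;

public class Probe<T> {
    private final long intervalMillis;
    private final long timeoutMillis;
    private final Supplier<T> subject;

    public Probe(long intervalMillis, long timeoutMillis, Supplier<T> subject) {
        this.intervalMillis = intervalMillis;
        this.timeoutMillis = timeoutMillis;
        this.subject = subject;
    }

    // Repeatedly sample the subject until every matcher is satisfied or we time out.
    @SafeVarargs
    public final void assertEventually(Matcher<? super T>... matchers) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            T current = subject.get();
            if (allMatch(current, matchers)) {
                return;  // the system reached the expected state
            }
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("Timed out waiting for " + Arrays.toString(matchers)
                        + "; last seen: " + current);
            }
            Thread.sleep(intervalMillis);
        }
    }

    private boolean allMatch(T current, Matcher<? super T>[] matchers) {
        for (Matcher<? super T> matcher : matchers) {
            if (!matcher.matches(current)) {
                return false;
            }
        }
        return true;
    }
}

A test might then write something like: new Probe<>(100, 5_000, () -> orderStatusView.statusOf(orderId)).assertEventually(org.hamcrest.Matchers.equalTo("SHIPPED")); where orderStatusView is whatever object the probe wrapper holds onto.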
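For the monitoring takeaway, here is a small sketch of pushing structured events instead of formatting log strings. The Monitoring class, its listener mechanism, and the OrderAccepted event are illustrative names I've made up; in the real system the broadcast would go onto a message topic, and the record syntax assumes Java 16 or later.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public final class Monitoring {
    private final List<Consumer<Object>> listeners = new CopyOnWriteArrayList<>();

    // Production code pushes structured events; it never formats log strings.
    public void broadcast(Object event) {
        for (Consumer<Object> listener : listeners) {
            listener.accept(event);
        }
    }

    // Routing, persistence, and system tests all subscribe here.
    public void addListener(Consumer<Object> listener) {
        listeners.add(listener);
    }
}

// A structured event the application raises instead of writing a log line.
record OrderAccepted(String orderId, long timestampMillis) {}

Production code calls monitoring.broadcast(new OrderAccepted(orderId, System.currentTimeMillis())); a system test registers a listener and asserts on the events it receives, rather than grepping log output.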
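Finally, a sketch of the System Driver facade, building on the Monitoring and Probe sketches above. The test-data handling is reduced to in-memory lists so the example stays self-contained; every name here is illustrative rather than taken from the talk.

import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedDeque;

public class SystemDriver {
    private final Monitoring monitoring = new Monitoring();
    private final Deque<Object> receivedEvents = new ConcurrentLinkedDeque<>();
    private final List<String> createdCustomers = new ArrayList<>();

    // Start each test from a known state and capture monitoring traffic.
    public void start() {
        receivedEvents.clear();
        createdCustomers.clear();
        monitoring.addListener(receivedEvents::addLast);
        // In the real driver this is also where you would redeploy the app
        // and clear out the database, as in the first takeaway above.
    }

    // Create test data through the facade so it can be cleaned up later.
    public String createCustomer(String name) {
        createdCustomers.add(name);
        return name;
    }

    // Hand tests a pre-configured probe over the latest monitoring event.
    public Probe<Object> monitoringProbe() {
        return new Probe<>(100, 5_000, receivedEvents::peekLast);
    }

    // Remove whatever this driver created.
    public void stop() {
        createdCustomers.clear();
    }
}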