Decouple integration tests from implementation details #189
This is another article: J.B. Rainsberger - Integrated Tests Are A Scam, where he talks about integration vs unit tests. I see the point that you can't test the system (for example, the HTTP tracker) in 100% of the corner cases, but I like the idea of thoroughly testing the public contract of the system (the HTTP tracker). For example, we could check that the tracker is assigning the correct IP to the peer with an E2E test or with a unit test. I think we have to include the unit test anyway, and the E2E test would be a duplicate, but I would like to run the E2E suite against any public tracker so that you can check whether the tracker is fulfilling the standard or not.

On the other hand, we could test the HTTP tracker as an independent piece, as David Farley proposes in the "Don't Do E2E Testing!" video. We could mock the tracker: HTTP Tracker -> Core Tracker.

I think the "integration" tests (as Rust calls them) we have right now are more like the first type: a black-box test suite to check whether the tracker fulfils a BitTorrent specification (BEP). We do not have the second type of test yet. They would be necessary if we decided to extract the HTTP or UDP tracker into their own packages, decoupled from the core tracker package.
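The "HTTP Tracker -> Core Tracker" mocking idea could be sketched like this. All names here (`CoreTracker`, `MockCore`) are hypothetical, not the repo's actual API: the point is that once the HTTP layer depends on a trait rather than the concrete core tracker, its tests can substitute a mock.

```rust
use std::collections::HashMap;

// Minimal stand-in for the core tracker's announce handling. In the real
// repo the HTTP tracker would depend on a trait like this instead of the
// concrete core tracker type.
trait CoreTracker {
    fn announce(&mut self, info_hash: [u8; 20], peer_addr: String) -> Vec<String>;
}

// A mock used only in HTTP-layer tests: stores peers in a plain map and
// returns the other peers already announced for the same torrent.
struct MockCore {
    peers: HashMap<[u8; 20], Vec<String>>,
}

impl CoreTracker for MockCore {
    fn announce(&mut self, info_hash: [u8; 20], peer_addr: String) -> Vec<String> {
        let entry = self.peers.entry(info_hash).or_default();
        let others: Vec<String> = entry
            .iter()
            .filter(|p| p.as_str() != peer_addr)
            .cloned()
            .collect();
        entry.push(peer_addr);
        others
    }
}

fn main() {
    let mut core = MockCore { peers: HashMap::new() };
    // First announcer sees no other peers.
    let first = core.announce([0u8; 20], "10.0.0.1:6881".to_string());
    assert!(first.is_empty());
    // Second announcer sees the first one.
    let second = core.announce([0u8; 20], "10.0.0.2:6881".to_string());
    assert_eq!(second, vec!["10.0.0.1:6881".to_string()]);
    println!("ok");
}
```

With this seam in place, the HTTP tracker's tests only exercise the HTTP contract (parsing, encoding, status codes), while the core behaviour is tested once against the real implementation.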
Introduction
There are three integration tests in this repo:
They all use clients (API client, HTTP client and UDP client) that don't share code with production, and they have independent request structs.
The UDP tracker uses an `announce` request from the aquatic crate. The HTTP tracker has its own request structs. That was on purpose, to keep tests decoupled from the implementation.
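The test-only request structs could look something like this minimal sketch. The struct and its hand-rolled serializer are illustrative, not the repo's actual types; the field names follow the standard HTTP announce query parameters:

```rust
// A test-only announce request struct, independent of production code.
// Serializing by hand means the test exercises the wire format directly,
// so a bug in a production serializer cannot hide itself.
struct AnnounceQuery {
    info_hash: [u8; 20],
    peer_id: [u8; 20],
    port: u16,
}

impl AnnounceQuery {
    fn to_query_string(&self) -> String {
        // Percent-encode the binary fields byte by byte.
        fn percent(bytes: &[u8]) -> String {
            bytes.iter().map(|b| format!("%{:02x}", b)).collect()
        }
        format!(
            "info_hash={}&peer_id={}&port={}",
            percent(&self.info_hash),
            percent(&self.peer_id),
            self.port
        )
    }
}

fn main() {
    let q = AnnounceQuery {
        info_hash: [0xAA; 20],
        peer_id: [0x2D; 20],
        port: 6881,
    };
    let s = q.to_query_string();
    assert!(s.starts_with("info_hash=%aa"));
    assert!(s.ends_with("port=6881"));
    println!("{}", s);
}
```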
But I think there is a place where we break that isolation.
When we need to set the SUT's previous state in a test, we usually call the tracker under the hood. All server implementations have a method to add a torrent:
API server:
HTTP tracker server:
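The coupling being described might look like the following sketch (all names are illustrative, not the repo's actual code): a test helper that mutates the tracker's state in-process instead of going through the public protocol.

```rust
use std::collections::HashMap;

// Stand-in for the core tracker's internal state.
struct Tracker {
    torrents: HashMap<String, Vec<String>>, // info_hash -> peer addresses
}

struct HttpTrackerServer {
    tracker: Tracker,
}

impl HttpTrackerServer {
    // Test-only shortcut: writes the tracker's internal data structures
    // directly. Convenient for arranging state, but the test now depends
    // on implementation details instead of the public contract.
    fn add_torrent(&mut self, info_hash: &str, peer: &str) {
        self.tracker
            .torrents
            .entry(info_hash.to_string())
            .or_default()
            .push(peer.to_string());
    }
}

fn main() {
    let mut server = HttpTrackerServer {
        tracker: Tracker { torrents: HashMap::new() },
    };
    server.add_torrent("deadbeef", "10.0.0.1:6881");
    assert_eq!(server.tracker.torrents["deadbeef"].len(), 1);
    println!("ok");
}
```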
The UDP tracker is slightly different. Integration tests are very simple. The test for the announce request only checks that the response is a correct announce response:
And we have unit tests for most cases:
As you can see, in this case we use the tracker directly, `tracker.get_all_torrent_peers`, to get the final state. I do not think this is a problem here because it is a public tracker method. But my point is: in E2E tests (or high-level integration tests, depending on what you want to call them), we should not access tracker methods directly.
If we are somehow trying to duplicate the production environment, we should not get data from the database directly or use internal data structures.
I do not consider these tests E2E tests because we set up the tracker at a low level for testing. To write real E2E tests, we should run the tracker as an external process. I decided to do it this way because it is much easier to set the initial state you want to test: you can do it programmatically, without having to deal with environment variables to inject the tracker configuration you want to use.
Proposal
I think the solution is different for each case.
For the API, I think we could add an endpoint to `announce` torrent peers manually. That could be useful for other purposes too. Then we could use the API endpoint to pre-set the state we want to test without accessing implementation details.

For the HTTP tracker, we should use the public HTTP specification to announce a peer instead of accessing the tracker.
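A framework-agnostic sketch of the proposed API endpoint (the path, validation, and handler are assumptions for illustration, not an existing torrust-tracker endpoint):

```rust
use std::collections::HashMap;

// Stand-in for the core tracker's state.
struct Tracker {
    torrents: HashMap<String, Vec<String>>, // info_hash (hex) -> peer addresses
}

// A request like `POST /api/torrents/{info_hash}/peers` could route here.
// Tests would call this endpoint to pre-set state instead of touching the
// tracker's internals. Returns an HTTP-like status code.
fn handle_add_peer(tracker: &mut Tracker, info_hash: &str, peer_addr: &str) -> u16 {
    // A v1 info-hash is 20 bytes, i.e. 40 hex characters.
    if info_hash.len() != 40 || !info_hash.chars().all(|c| c.is_ascii_hexdigit()) {
        return 400; // bad request: invalid info-hash
    }
    tracker
        .torrents
        .entry(info_hash.to_string())
        .or_default()
        .push(peer_addr.to_string());
    200
}

fn main() {
    let mut tracker = Tracker { torrents: HashMap::new() };
    let hash = "aa".repeat(20); // 40 hex chars
    assert_eq!(handle_add_peer(&mut tracker, &hash, "10.0.0.1:6881"), 200);
    assert_eq!(handle_add_peer(&mut tracker, "not-a-hash", "10.0.0.1:6881"), 400);
    assert_eq!(tracker.torrents[&hash].len(), 1);
    println!("ok");
}
```

The benefit is that the test arranges state through the same surface any API consumer would use, so the tests survive internal refactors.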
The line

`http_tracker_server.add_torrent(&info_hash, &previously_announced_peer).await;`

should be replaced with another `announce` request using the client. But there are other cases like this:
After announcing a new peer, we check the stats by accessing the tracker's public API, but not the public HTTP tracker API. In this case, there is no way to get the stats from the public HTTP tracker API because stats are not an HTTP tracker feature. This makes me think that maybe those tests should not be there. In one of my latest PRs, I have started extracting services. This way, you only need to test once (in the service) whether the announce request increases the stats.
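The "test it once in the service" idea could be sketched as follows. The names (`StatsService`, the counter fields) are hypothetical: the point is that announce counting lives in one core service, so the stats increment is unit-tested here once rather than through each protocol front end.

```rust
// Core stats service shared by the HTTP and UDP front ends.
struct StatsService {
    tcp4_announces: u64,
    udp4_announces: u64,
}

enum Protocol {
    Tcp4,
    Udp4,
}

impl StatsService {
    fn new() -> Self {
        StatsService { tcp4_announces: 0, udp4_announces: 0 }
    }

    // Called by a front end after a successful announce; the increment
    // logic exists in exactly one place, so it needs exactly one test.
    fn on_announce(&mut self, protocol: Protocol) {
        match protocol {
            Protocol::Tcp4 => self.tcp4_announces += 1,
            Protocol::Udp4 => self.udp4_announces += 1,
        }
    }
}

fn main() {
    let mut stats = StatsService::new();
    stats.on_announce(Protocol::Tcp4);
    stats.on_announce(Protocol::Tcp4);
    stats.on_announce(Protocol::Udp4);
    assert_eq!(stats.tcp4_announces, 2);
    assert_eq!(stats.udp4_announces, 1);
    println!("ok");
}
```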
We should start cleaning up the tests. I think we should fix these problems and also remove duplicate tests. Different services (the UDP and HTTP trackers) are, for example, both testing that statistics are increased. Since we were doing a lot of refactors, I added a lot of tests as a safety net. But now that we are starting to have good test coverage, extracting new services, and adding layers to the app, we should write more unit tests and fewer integration tests, or even convert some integration tests into unit tests.
Links
There is an interesting video about E2E testing talking about these topics:
What do you think @da2ce7 @WarmBeer? We do not have to do it right now, but we should take it into consideration from now on when we add or update tests.