As part of the GeneNetwork project, there is a need for automated tests to ensure that the system is working as expected. This document tracks the implementation of the automated tests and, possibly, the related infrastructure for running them.
There is a collection of unit tests in the *tests/unit* directory of the GeneNetwork 3 repository. There is also (as of 2022-Feb-10) an integration tests directory in the Genenetwork 3 repository. The tests there, however, are technically unit tests: each test seems to exercise a single logical unit of the system, e.g. correlations, gemma, etc.
There is, however, no test that checks for interactions among the logical units/modules of the system.
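As a sketch of what such an integration test could look like, the following assumes a Flask application factory create_app and an endpoint that internally combines dataset retrieval with the correlation computations; the import path, endpoint, dataset and trait names are all illustrative rather than GN3's actual API:

```python
"""Sketch of an integration test exercising more than one unit.

Assumes a Flask application factory `create_app` and an endpoint
that chains dataset retrieval with the correlation code. All names
here are illustrative -- adjust them to the actual GN3 code.
"""
import pytest

from gn3.app import create_app  # hypothetical import path


@pytest.fixture
def client():
    app = create_app()
    app.config["TESTING"] = True
    with app.test_client() as client:
        yield client


def test_correlation_and_dataset_units_work_together(client):
    # A single request here passes through the routing, the dataset
    # retrieval code and the correlation code: a failure in any of
    # the co-operating units should surface as a failing test.
    response = client.post(
        "/api/correlation/sample_x/HC_M2_0606_P",  # illustrative dataset
        json={"trait": "1427571_at"})              # illustrative trait
    assert response.status_code == 200
    assert "correlation_results" in response.get_json()
```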
There is a need for tests that ensure all expected endpoints are up and running, and perhaps even that the data they return is correct. There is also a need to ensure that the system does not take an unreasonably long time to compute results.
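A minimal sketch of such an endpoint check, assuming a GN3 instance running locally on port 8080 and an illustrative list of endpoint paths:

```python
"""Sketch of an endpoint availability check.

Assumes a GN3 instance running locally on port 8080; the endpoint
list is illustrative and should be replaced with the endpoints the
application actually exposes.
"""
import pytest
import requests

GN3_BASE_URL = "http://localhost:8080"  # assumed local test instance

ENDPOINTS = [  # illustrative paths, not the definitive GN3 API
    "/api/version",
    "/api/datasets",
]


@pytest.mark.parametrize("path", ENDPOINTS)
def test_endpoint_is_up(path):
    response = requests.get(GN3_BASE_URL + path, timeout=10)
    assert response.status_code == 200
```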
There is a single performance-tests module in the performance tests directory for Genenetwork 3, but it is run manually, and it mostly tests a very specific query that may or may not still be used in the code.
The performance tests in GN3 should probably be focused on checking, among other things, that endpoints respond within acceptable time limits. This is relevant since GN3 is behind Nginx, which defines a timeout: any request whose computation outlives that timeout fails from the user's point of view, however correct its eventual result.
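A sketch of what such a check might look like, assuming an Nginx read timeout of 60 seconds (Nginx's default proxy_read_timeout; substitute the value actually configured) and an illustrative endpoint and payload:

```python
"""Sketch of a performance test tied to the Nginx timeout.

Assumes a proxy timeout of 60 seconds (Nginx's default
proxy_read_timeout); substitute the value actually configured.
The endpoint and payload are illustrative.
"""
import time

import requests

GN3_BASE_URL = "http://localhost:8080"  # assumed local test instance
NGINX_TIMEOUT_SECONDS = 60              # assumed proxy_read_timeout


def test_correlation_completes_within_proxy_timeout():
    start = time.monotonic()
    response = requests.post(
        GN3_BASE_URL + "/api/correlation/sample_x/HC_M2_0606_P",  # illustrative
        json={"trait": "1427571_at"},
        timeout=NGINX_TIMEOUT_SECONDS)
    elapsed = time.monotonic() - start
    assert response.status_code == 200
    # Leave some headroom below the proxy timeout, so that small
    # slow-downs do not immediately become user-facing failures.
    assert elapsed < 0.8 * NGINX_TIMEOUT_SECONDS
```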
Regression tests check that previously working features are not broken. These can be added as we go along.
Genenetwork 2 has a "Mechanical Rob" testing system, present in the GN2 repository and still under construction, whose purpose (as far as I - fredm - can tell) is to "walk" some common paths that have multiple logical units working together, thus performing some form of integration testing.
The only issue I (fredm) find with it as it currently stands is that it will not be able to test the JavaScript interactions that are crucial to some operations in certain flows.
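One possible way to cover those JavaScript interactions, separate from Mechanical Rob itself, would be browser-driven tests, e.g. with Selenium; the base URL and element IDs below are illustrative:

```python
"""Sketch of a browser-driven test for JavaScript interactions.

This is not part of Mechanical Rob; it is one possible way to cover
the JavaScript-dependent flows. Requires selenium and a working
Chrome/chromedriver setup; the URL and element IDs are illustrative.
"""
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

GN2_BASE_URL = "http://localhost:5003"  # assumed local GN2 instance


def test_search_form_javascript_flow():
    driver = webdriver.Chrome()
    try:
        driver.get(GN2_BASE_URL + "/")  # illustrative page
        # Interact with an element whose behaviour is JavaScript-driven.
        search_box = WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.ID, "search-box")))  # illustrative ID
        search_box.send_keys("shh")
        search_box.submit()
        # Wait for the JavaScript-rendered results to appear.
        WebDriverWait(driver, 30).until(
            EC.presence_of_element_located((By.ID, "results-table")))  # illustrative ID
    finally:
        driver.quit()
```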
Since GN2 is not meant to handle computations itself, the bigger concern here is responsiveness; checks for responsiveness might need to be built in.
Regression tests check that previously working features are not broken. These can be added as we go along.
Tests in the different categories should be grouped behind different command-line endpoints. For example, unit tests could be run with "python3 setup.py check", integration tests with "python3 setup.py integration-check", performance tests with "python3 setup.py performance-check", and so on. This way, the CI has to be configured only once, and committers can then add new tests without requesting a CI reconfiguration each time. We won't have to wait on others to respond, and the reduced coordination will make for smoother work for everyone.
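A minimal sketch of one such command, assuming setuptools and pytest (the command name and the tests/integration directory are illustrative; note that setup.py command names cannot contain hyphens, so the sketch uses integration_check):

```python
"""Sketch of a custom setup.py command for grouping tests.

Assumes setuptools and pytest are available; the command name and
the tests/integration directory are illustrative. setup.py command
names may not contain hyphens, hence "integration_check".
"""
import subprocess
import sys

from setuptools import Command, setup


class IntegrationCheck(Command):
    """Run the integration tests: python3 setup.py integration_check"""
    description = "run the integration tests"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # Delegate to pytest; exit with pytest's own status code so
        # that the CI can pick up failures.
        sys.exit(subprocess.call(["pytest", "tests/integration"]))


setup(
    # ... the usual package metadata goes here ...
    cmdclass={"integration_check": IntegrationCheck},
)
```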