CI should not rely on reported number of CPUs and limit jobs to 4
As per today's IRC discussion, it was assumed that everyone was aware that the current runner setup relies on every CI job limiting the number of jobs run by ninja/make/your-custom-build-system to 4. Now, 4 jobs is likely going to be disastrous for us, and @daniels said he's simply waiting for feedback in this regard. gitlab-runner does not have the feature required to fix the number of CPUs reported inside the container (afaict it's a limitation of docker).
Here's a list of items that need configuration in our CI. This will slow down our pipelines, so we need to measure the effect and, if needed, ask for more threads (GNOME CI uses 8 iirc). In some cases, over-committing will make sense. The expected effect is that we should see fewer timeouts, especially for the valgrind runs, which slow down dramatically on overloaded systems.
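As a sketch of the general pattern (assuming `FDO_CI_CONCURRENT` is the variable the runners export, per the HTZ item below): derive the job count from it with a fallback, instead of trusting `nproc`, which reports the host's CPUs inside the container.

```shell
# Hypothetical CI snippet: respect the runner-provided concurrency,
# falling back to 4 (the limit assumed by the current runner setup).
JOBS="${FDO_CI_CONCURRENT:-4}"
echo "building with ${JOBS} jobs"

# ninja -C build -j "$JOBS"    # meson/ninja builds
# make -j "$JOBS"              # make-based builds
```

The actual build invocations are commented out here; the point is only that every tool gets its `-j` from the same variable.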
- Cerbero num_of_cpus
- Direct call to ninja !5763 (merged)
- Cargo incantation (cc @cassidy)
- x264enc, avenc_h264 and so on usage (cc @thiblahute)
- GstValidate concurrency (over-committing here may make sense) (Tests only run on htz runners, it's fine)
- Fix HTZ runners configuration, balance number of jobs and configured `cpus=` (cc @alatiera)
  - `FDO_CI_CONCURRENT=16` is online on all htz gst runners.
- Anything needed on the OSX side? (seems to perform well, but cc @ystreet)
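For the "Cargo incantation" item above, one possible approach: cargo honours both a `-j` flag and the `CARGO_BUILD_JOBS` environment variable, so exporting the latter once would cover every cargo invocation in a job (this assumes `FDO_CI_CONCURRENT` is exported by the runner, as on the htz machines).

```shell
# Sketch: cap cargo's parallelism via its environment variable rather
# than patching every invocation to pass -j explicitly.
export CARGO_BUILD_JOBS="${FDO_CI_CONCURRENT:-4}"
echo "CARGO_BUILD_JOBS=${CARGO_BUILD_JOBS}"

# cargo build --release    # now limited to $CARGO_BUILD_JOBS jobs
```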