Build the latest version of torch (CPU-only)
GitHub: https://github.com/pytorch/pytorch.git PyPI: https://pypi.org/project/torch/
This one will be complicated for a number of reasons:
- We need to build a new version of OpenBLAS specifically for torch. We can't reuse the OpenBLAS builds we produce for numpy and scipy, because the upstream CPU builds of torch that use OpenBLAS compile it with gomp enabled, and gomp (libgomp, the GNU OpenMP runtime) is GPL-licensed (GPLv3 with the GCC runtime library exception). A verification sketch follows this list.
- We probably don't want to upload the wheels to our normal package registry. We may want to keep building our own CPU torch even once an upstream version becomes available on PyPI, for example if the upstream version uses CUDA. Instead, we can upload the wheels to the package registry attached to the torch project we fork in GitLab. This complicates installing torch, but it is no different from installing the upstream CPU builds for other architectures. An install sketch follows this list.
- There are far too many torch tests to run them all; a full run would take about a week in our emulated environment. We will have to identify a small subset of tests that runs in roughly 20 minutes and still validates the builds. A smoke-test sketch follows this list.
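
For the OpenBLAS/gomp point, a minimal post-build check along these lines could be run after installing the wheel: it prints the build configuration torch reports and asserts that libgomp is not mapped into the process. The `/proc/self/maps` check is Linux-only, and the `BLAS_INFO` marker string is an assumption about how our build will report its BLAS backend.

```python
"""Sketch: sanity-check a freshly built CPU torch wheel.

Assumes a Linux build; the BLAS_INFO marker below is a guess at how the
config string will look for an OpenBLAS-backed build.
"""
import torch

config = torch.__config__.show()              # human-readable build configuration
parallel = torch.__config__.parallel_info()   # ATen threading backend details
print(config)
print(parallel)

# List the shared objects mapped into this process and make sure the GNU
# OpenMP runtime (libgomp) is not among them.
with open("/proc/self/maps") as maps:
    mapped = {line.split()[-1] for line in maps if "/" in line}

gomp = sorted(path for path in mapped if "libgomp" in path)
assert not gomp, f"libgomp is mapped into the process: {gomp}"

# The exact marker string is an assumption; adjust once we see real output
# from our wheel.
if "blas_info=open" not in config.lower():
    print("warning: BLAS_INFO does not mention OpenBLAS, check the config above")
```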
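For installation from the fork's registry, a rough sketch of what the install could look like, wrapped in Python for consistency with the other sketches. The GitLab host and project ID are placeholders, not real values; the URL shape follows GitLab's PyPI package registry convention.

```python
"""Sketch: install the forked CPU torch wheel from a GitLab package registry.

The host and project ID below are hypothetical placeholders.
"""
import subprocess
import sys

# Hypothetical location of the package registry on our GitLab instance.
GITLAB_PYPI_INDEX = "https://gitlab.example.com/api/v4/projects/1234/packages/pypi/simple"

subprocess.run(
    [
        sys.executable, "-m", "pip", "install",
        "torch",
        "--index-url", GITLAB_PYPI_INDEX,                 # our fork's registry
        "--extra-index-url", "https://pypi.org/simple",   # dependencies from PyPI
    ],
    check=True,
)
```

Note that pip treats both indexes as equal candidates, so pinning the exact version (or using a constraints file) is what actually guarantees the wheel comes from our registry rather than PyPI.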
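For the test subset, a rough sketch of a budgeted smoke-test runner. The listed test files are only a guess at a useful starting subset, and the script assumes it is run from a torch source checkout with pytest available.

```python
"""Sketch: run a hand-picked subset of the torch test suite under a
wall-clock budget. The file list is an assumption to be refined as we
learn which suites catch real build problems.
"""
import subprocess
import sys
import time

# Hypothetical starting point for the ~20 minute subset.
TEST_FILES = [
    "test/test_torch.py",
    "test/test_nn.py",
    "test/test_linalg.py",
]
BUDGET_SECONDS = 20 * 60

start = time.monotonic()
for test_file in TEST_FILES:
    remaining = BUDGET_SECONDS - (time.monotonic() - start)
    if remaining <= 0:
        print("Time budget exhausted, stopping early.")
        break
    try:
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "-x", "-q", test_file],
            timeout=remaining,
        )
    except subprocess.TimeoutExpired:
        print(f"{test_file} hit the remaining time budget, stopping.")
        break
    if result.returncode != 0:
        sys.exit(result.returncode)
```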