The initial version of the algorithm was implemented and trained by niologic using Google TensorFlow on a cloud-native Kubernetes infrastructure. Later, due to the expected workload, it was decided to export the generated deep learning models as NumPy matrices and integrate them via an HTTP REST API.
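As a rough sketch of what such an export can look like, the snippet below saves the weights of a Keras model as individual NumPy arrays; the model architecture and file names are illustrative assumptions, not the production setup.

```python
import numpy as np
import tensorflow as tf

# Hypothetical trained model; stands in for the production model described above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# get_weights() returns the layer parameters as a list of NumPy arrays,
# which can be saved individually and shipped without a TensorFlow runtime.
for i, w in enumerate(model.get_weights()):
    np.save(f"model_weight_{i}.npy", w)  # file naming is illustrative
```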
Unlike TensorFlow Serving, which is based on gRPC (Protocol Buffers), this solution allowed simple development using a plaintext protocol. Developing against gRPC is more costly, although the result is more performant.
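A minimal sketch of such a plaintext scoring endpoint, using only the Python standard library and NumPy, might look as follows; the weight file names, network shape, and port are assumptions carried over from the export sketch above.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import numpy as np

# Load previously exported weight matrices (file names are illustrative).
W1, b1 = np.load("model_weight_0.npy"), np.load("model_weight_1.npy")
W2, b2 = np.load("model_weight_2.npy"), np.load("model_weight_3.npy")

def score(x: np.ndarray) -> float:
    """Forward pass re-implemented with plain NumPy operations."""
    h = np.maximum(x @ W1 + b1, 0.0)                      # ReLU layer
    return float(1.0 / (1.0 + np.exp(-(h @ W2 + b2))))    # sigmoid output

class ScoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Plaintext JSON in, JSON out: trivially testable with curl.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = np.asarray(json.loads(body)["features"], dtype=np.float32)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"score": score(features)}).encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), ScoreHandler).serve_forever()  # port is an assumption
```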
Using containers, the model could be trained scalably via batching. After training, the resulting models (weight matrices) were exported and embedded into containers for scoring.
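The batched training itself could, for example, rely on TensorFlow's tf.data pipeline, as in the sketch below; the data, architecture, and hyperparameters are placeholders for the real job.

```python
import numpy as np
import tensorflow as tf

# Illustrative training data; the real pipeline would read from shared storage.
features = np.random.rand(10_000, 10).astype(np.float32)
labels = np.random.randint(0, 2, size=(10_000, 1)).astype(np.float32)

# tf.data batches the input stream so each training container processes
# fixed-size chunks, which is what makes the training step scale.
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=1_000)
    .batch(256)
    .prefetch(tf.data.AUTOTUNE)
)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(dataset, epochs=5)
```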
These scoring containers were tested in a CI/CD pipeline (covering the model, quality, and integration). Successfully qualified containers could then be rolled out immediately (via a Kubernetes Replication Controller or, later, Deployments).
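One of the integration checks in such a pipeline could be a smoke test run against the scoring container before rollout; the sketch below assumes the endpoint, URL, and probability-shaped output from the earlier examples.

```python
import json
import urllib.request

import numpy as np

SCORING_URL = "http://localhost:8080"  # assumed address of the container under test

def test_scoring_endpoint_returns_valid_probability():
    """Minimal integration check a CI/CD stage could run against the container."""
    payload = json.dumps({"features": np.random.rand(10).tolist()}).encode()
    request = urllib.request.Request(
        SCORING_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        assert response.status == 200
        score = json.loads(response.read())["score"]
    # A qualified model must return a probability in [0, 1].
    assert 0.0 <= score <= 1.0
```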
Finally, the CI/CD pipeline's build process was monitored using established build and monitoring tools.