Cloud migration of a SaaS solution

Introduction of Kubernetes and containers

Our customer runs an analytics business with more than 15,000 employees worldwide. In this project, their analytical software solution was to be made ready for parallel, scalable development. At the same time, they faced the strategic challenge of replacing the existing Amazon AWS cloud solution by migrating to Google Cloud.

Challenge

Our customer faced the challenge of ensuring both the technical scaling of the platform and the scalable development of software modules in the event of rapid growth. At the same time, an alternative to Amazon AWS hosting had to be found.

Procedure

First, niologic documented the architecture, the current state of technological development and the plans for further development. In cooperation with the sales department, the legal department and our customer’s IT security division, Microsoft Azure™ and Google Cloud™ were evaluated as possible alternatives to Amazon AWS™.

At the same time, as part of a pilot study, niologic developed a modularization of the existing SaaS solution as a cloud-native container solution (Docker), with Kubernetes as the container orchestrator.
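For illustration, the sketch below shows roughly what one such modularized service looks like when handed to Kubernetes, using the official Python client; the service name, image path, namespace and replica count are placeholders rather than the customer’s actual configuration.

```python
# Minimal sketch: one modularized service declared as a Kubernetes Deployment
# via the official "kubernetes" Python client. All names are illustrative.
from kubernetes import client, config

config.load_kube_config()  # use the cluster credentials from ~/.kube/config

container = client.V1Container(
    name="analytics-module",
    image="gcr.io/example-project/analytics-module:1.0",
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="analytics-module"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three replicas running and replaces failed pods
        selector=client.V1LabelSelector(match_labels={"app": "analytics-module"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "analytics-module"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

In practice such definitions live in version-controlled manifests (and later Helm charts) rather than an imperative script; the snippet only makes the moving parts of the modularization visible.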

This solution, including Docker images, load balancing and SSL termination, cloud storage, Kubernetes configuration, backup creation, integration of OKTA™ as an SSO solution and adaptation of the monitoring, was implemented within 5 project days using rapid prototyping. In addition, a Hadoop cluster was set up and configured with Apache Spark 2.0. After the deployment and the evaluation of Microsoft Azure and Google Cloud, our customer decided on Google Cloud as their service provider.
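To make the Spark part concrete, here is a minimal sketch of a Spark 2.0 batch job of the kind such a cluster runs; the bucket paths, column names and aggregation logic are invented for illustration and are not taken from the customer’s code.

```python
# Minimal Spark 2.0 (PySpark) batch job sketch: read raw events from cloud
# storage, aggregate them, and write the result back. Paths and columns are
# illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("analytics-aggregation").getOrCreate()

# Read raw events from cloud storage (gs:// on Google Cloud, s3a:// on AWS).
events = spark.read.parquet("gs://example-bucket/events/2017/*")

# Aggregate events per customer and day.
daily = (
    events.groupBy("customer_id", F.to_date("event_time").alias("day"))
    .agg(F.count("*").alias("events"), F.sum("value").alias("total_value"))
)

daily.write.mode("overwrite").parquet("gs://example-bucket/aggregates/daily/")
spark.stop()
```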

niologic supported the customer with the initial setup and the everyday use of the solution. For this purpose, a team of site reliability engineers (SREs) was set up as the DevOps solution and launched by niologic, using Kanban for product management.

niologic drove the introduction of Kubernetes Helm, Stackdriver Application Performance Monitoring (APM) and Terraform for Infrastructure as Code, as well as alerts for monitoring the Kubernetes cluster.
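As an illustration of the monitoring side, the sketch below creates a simple alert policy for high node CPU on the Kubernetes cluster with the google-cloud-monitoring Python client (recent 2.x versions); in the project this kind of resource would be managed declaratively via Terraform, and the project ID, metric and threshold here are assumptions.

```python
# Illustrative sketch: a Stackdriver alert that fires when Kubernetes node CPU
# utilization stays above 80% for five minutes. Project ID is a placeholder.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    display_name="Kubernetes node CPU above 80%",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="Node CPU allocatable utilization",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter=(
                    'metric.type = "kubernetes.io/node/cpu/allocatable_utilization" '
                    'AND resource.type = "k8s_node"'
                ),
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=0.8,
                duration=duration_pb2.Duration(seconds=300),  # breached for 5 minutes
            ),
        )
    ],
)

client.create_alert_policy(name="projects/example-project", alert_policy=policy)
```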

The Hadoop cluster was further improved using Google Dataproc™ and preemptible VMs. The existing entities and databases were captured in a migration plan and finally migrated by niologic and the SREs.
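As a rough sketch of that Dataproc setup, the snippet below provisions a cluster whose secondary workers are preemptible VMs, using the google-cloud-dataproc Python client; the project ID, region, machine types and instance counts are illustrative assumptions, not the customer’s sizing.

```python
# Rough sketch: a Dataproc cluster with a small core of regular workers and
# preemptible secondary workers as cheap, elastic Spark capacity.
from google.cloud import dataproc_v1

region = "europe-west1"
cluster_client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": "example-project",
    "cluster_name": "spark-analytics",
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
        # Secondary workers are preemptible by default: low-cost capacity that
        # Spark can lose and re-acquire without failing the job.
        "secondary_worker_config": {"num_instances": 8},
    },
}

operation = cluster_client.create_cluster(
    request={"project_id": "example-project", "region": region, "cluster": cluster}
)
operation.result()  # block until the cluster is up
```

Because preemptible secondary workers can disappear at any time, they are used only as additional compute next to a small core of regular workers, which is what makes this way of scaling the Spark workload economical.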

Coached by niologic, our customer’s developers took charge of maintaining the software and the associated containers using Continuous Integration (CI) and Continuous Deployment (CD) pipelines.
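Conceptually, the Docker-based CI step boils down to building, tagging and pushing an image for each commit; the sketch below expresses this with the Docker SDK for Python, with the registry path and tag as placeholders (real pipelines do the same through their pipeline configuration).

```python
# Conceptual sketch of a Docker-based CI step: build the image from the
# checked-out repository, tag it with the commit, and push it to the registry.
import docker

registry_image = "gcr.io/example-project/analytics-module"
commit_tag = "abc1234"  # in a real pipeline this comes from the CI environment

client = docker.from_env()

# Build the image from the Dockerfile in the current checkout.
image, build_logs = client.images.build(path=".", tag=f"{registry_image}:{commit_tag}")

# Push the freshly built image; the CD stage then rolls it out to Kubernetes,
# for example by updating the Deployment's image tag.
client.images.push(registry_image, tag=commit_tag)
```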

Results and customer value

Within a month, our customer was able to take the new infrastructure into regular operation for new clients. They had been convinced by the pilot operation run earlier with our solution. Existing clients were migrated over the following weeks without interrupting operations.

The development division was trained in using Docker-based CI and CD pipelines and took over further development. Now that the project has been completed, our customer’s SRE team is fully equipped with all the essential tools for monitoring and IT security. This way, the team can proactively head off capacity shortages and imminent outages.

A powerful, independent team is now integrated into operations and works on the further development of the platform. Service-level agreements (SLAs) and service-level objectives (SLOs) were established and are covered by the SRE team and Google support.

With the help of Google Dataproc and preemptible VMs, an economical way of scaling the Spark cluster was found. Furthermore, cloud-native solutions for horizontal scaling and agile development were successfully introduced with the help of Kubernetes and Docker.

In constant contact with the team and Google Cloud, niologic was also able to introduce the latest technologies quickly and successfully. By combining onshore and nearshore resources, labor costs for the DevOps division were reduced by 60% while availability was increased by 50%; existing suppliers were replaced. Infrastructure costs were cut by 40% by introducing container virtualization and a precisely fitting cloud infrastructure.