Technology
3 min read
How we have transformed our infrastructure to empower the engineering team
Last Updated On Wed Mar 13 2024
At Invygo, we believe that a great and robust infrastructure is a must if the team is to work seamlessly across different parts of the system. Engineers can then spend more time creating and less time maintaining and troubleshooting.
By the end of 2018, our infrastructure was running on AWS, with all products deployed on a dedicated EC2 instance per environment, namely (as you can guess) development, testing and production. Within each instance, PM2 (a process management tool) was used to run our apps (backend and frontend), all logs were stored locally on the server, and the only job Jenkins needed to do was git pull and pm2 restart, with fingers crossed.
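To give a rough idea of that era, an app declaration for PM2 looked something like the sketch below (hypothetical names and values, not our actual config); Jenkins simply pulled the latest code and restarted these processes in place.

```yaml
# process.yml — a PM2 ecosystem file (sketch with hypothetical values)
apps:
  - name: backend-api        # hypothetical service name
    script: ./src/index.js
    instances: 2
    exec_mode: cluster
    env:
      NODE_ENV: production
  - name: web-frontend       # hypothetical service name
    script: npm
    args: "run start"
```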
At that time, we had other things to focus on: getting to market first and giving our first customers the best mobile application experience for subscribing to a car.
Once we launched, we immediately realized that it was going to be a nightmare to maintain our development speed, especially as the engineering team grew:
• Monitoring all services across all environments
• Maintaining different versions (Node and other libraries) across all products
• Deploying individual products with a limited blast radius
• Utilizing resources better and reducing cost
• Centralizing logs
• Access control
• Network and application security
Docker and Kubernetes (K8s) came into the picture as strong candidates, both mature and already deployed in production at big firms. On the infrastructure side, the biggest cloud platforms such as Google Cloud, AWS and Azure are continuously shipping products that enable small teams to start running Kubernetes and Docker. On AWS, we used Elastic Container Registry (AWS ECR) and Elastic Kubernetes Service (AWS EKS) to ease our way into a technology that was totally new to us.
It is impossible to talk about Docker without first exploring containers. Containers solve a critical issue in the life of application development. When developers are writing code, they are working in their own local development environment. Problems arise when they are ready to move that code to production: the code that worked perfectly on their machine doesn't work in production. The reasons are varied: a different operating system, different dependencies, different libraries.
Containers solved this critical issue of portability, allowing you to separate code from the underlying infrastructure it runs on. Developers can package up their application, including all of the binaries and libraries it needs to run correctly, into a small container image. In production, that container can be run on any computer that has a containerization platform.
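As a rough illustration (not our actual setup), the Compose file below shows the idea: the image already contains the code and its dependencies, so the same artifact that ran on a developer's laptop can be pulled and run on any host with a container runtime, with only the configuration changing. The image tag, port and environment values here are hypothetical.

```yaml
# docker-compose.yml — a minimal sketch (hypothetical tag, port and env values)
services:
  api:
    image: ecr.amazonaws.com/just-another-service:1.4.2  # same image everywhere
    ports:
      - "3000:3000"        # expose the port the app listens on (hypothetical)
    environment:
      NODE_ENV: production
      ENV1: VAL_1          # configuration is supplied from outside the image
```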
What about Kubernetes, or K8s? Pretty simple: it's a container orchestrator.
With limited blast radius in mind, we started building an entirely new infrastructure with Terraform and Kubernetes manifests, containerized our apps, grouped development and testing into a single cluster, and ran it for 4 sprints to make sure we had seen all the scenarios and could customize it to fit our needs.
An example of our deployment file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: just-another-service
  name: just-another-service
  namespace: production
spec:
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      name: just-another-service
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: just-another-service
    spec:
      containers:
        - name: just-another-service
          image: ecr.amazonaws.com/just-another-service
          imagePullPolicy: Always
          env:
            - name: ENV1
              value: VAL_1
          resources:
            limits:
              cpu: 200m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 200Mi
```
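The Deployment only schedules and restarts the pods; to make them reachable from other workloads in the cluster, each Deployment is typically paired with a Service. A minimal sketch of what that could look like for the example above (the ports are hypothetical, not our actual values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: just-another-service
  namespace: production
spec:
  selector:
    name: just-another-service  # matches the pod labels set by the Deployment
  ports:
    - port: 80          # port other services in the cluster call
      targetPort: 3000  # port the container listens on (hypothetical)
```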
After a 4-week pilot, the production cluster was created and up and running, marking the completion of the infrastructure transformation project. We developed a guideline for our engineers to create, develop and deploy services on their own.
As of now, all our services run on the K8s platform, including supporting technologies and tools such as our cache system, message queue system, ELK and APM. With this, we can focus our energy on problem solving and development rather than maintaining and struggling to troubleshoot infrastructure.