Architecture
About the Project
In this project, we will deploy a three-tier application (React frontend, Flask backend, RDS PostgreSQL database) on AWS EKS. We’ll use Docker for containerisation, GitHub Actions and ECR for CI/CD, ArgoCD for GitOps, and Prometheus/Grafana for observability.
We will use the AWS Load Balancer Controller (ALB Ingress) to expose the application and a Kubernetes ExternalName Service to connect to our RDS database.
Why Kubernetes & EKS?
Kubernetes is the industry standard for orchestrating containerised applications, offering auto-scaling, load balancing, and self-healing. EKS (Elastic Kubernetes Service) is AWS’s managed Kubernetes service, which handles the complex control plane management for us. In this setup, we use Managed Node Groups, allowing EKS to handle the EC2 instance management for us while maintaining high availability. While Fargate offers a serverless alternative and self-managed groups provide more granular control, managed node groups strike the best balance for this production-grade architecture.
Prerequisites
I’ll assume you have basic knowledge of Docker, Kubernetes, and AWS services. You will need eksctl, the AWS CLI, kubectl, Docker, and Terraform installed on your local machine.
Cloning the Repository
First, clone the repository containing all the code and configuration files:
```bash
git clone https://github.com/wegoagain-dev/3-tier-eks.git
cd 3-tier-eks
```
Creating the EKS Cluster
⚠️ NOTE: Ensure your AWS CLI is configured. EKS costs ~$0.10/hour, so remember to delete resources when finished!
We will use eksctl with a cluster-config.yaml file to provision our cluster.
Run the following in your terminal:
```bash
eksctl create cluster -f cluster-config.yaml
```
The cluster-config.yaml (included in the repo) looks like this:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: three-tier
  region: eu-west-2
  version: "1.31"

managedNodeGroups:
  - name: standard-workers
    instanceType: t3.medium
    minSize: 1
    maxSize: 3
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        imageBuilder: true
        albIngress: true
        cloudWatch: true
```
Understanding the IAM Policies (Deep Dive)
The iam section in the config automatically attaches necessary permissions to your worker nodes:
- albIngress: true: Allows the AWS Load Balancer Controller to provision ALBs.
- cloudWatch: true: Attaches CloudWatchAgentServerPolicy for logging.
- imageBuilder: true: Grants access to ECR (AmazonEC2ContainerRegistryFullAccess).
It can take 15-20 minutes to create the cluster. Once done, verify it:
```bash
# Check the status of the EKS cluster
aws eks list-clusters
```
Your kubectl context should be automatically configured by eksctl. Run these commands to ensure you can see the nodes in your cluster:
```bash
# Check the nodes in the cluster
kubectl get namespaces    # list all namespaces
kubectl get nodes         # list all nodes
kubectl get pods -A       # list all pods in all namespaces
kubectl get services -A   # list all services in all namespaces
```
Everything should now be up and running.
For this project we are using a React frontend and a Flask backend that connects to a PostgreSQL database.
Creating PostgreSQL RDS Database (Terraform)
Instead of manually creating the database, config and secrets, we will use Terraform to automate everything. We’ll split our configuration into logical files for a “Production Grade” structure.
1. Navigate to the terraform directory:
```bash
cd terraform
```
2. Review provider.tf:
This file configures the AWS and Kubernetes providers. Notably, it uses the aws_eks_cluster data source to dynamically fetch cluster details.
```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.23"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.11"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

# 1. Ask AWS for the cluster details (Dynamic Lookup)
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "auth" {
  name = var.cluster_name
}

# 2. Configure Kubernetes Provider (Uses the AWS API, not a file)
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.auth.token
}

# 3. Configure Helm Provider
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.auth.token
  }
}
```
3. Review variables.tf:
Variables allow us to make our configuration reusable.
variable "git_repo_url" { description = "GitHub repository URL for ArgoCD to watch" type = string default = "https://github.com/wegoagain-dev/3-tier-eks.git" # Change this or pass via -var}
# Variable for your GitHub Repo (e.g., "your-user/your-repo")variable "github_repo" { description = "The GitHub repository path (e.g., 'wegoagain-dev/3-tier-eks')" type = string default = "wegoagain-dev/3-tier-eks" # Change this or pass via -var}
variable "cluster_name" { description = "EKS cluster name" type = string default = "three-tier"}
variable "aws_region" { description = "AWS region" type = string default = "eu-west-2"}
variable "argocd_chart_version" { description = "ArgoCD Helm chart version" type = string default = "5.51.6"}4. Review rds.tf (Database & Network):
This file handles the creation of the RDS instance, security groups, and subnets.
```hcl
# 1. Get EKS Cluster VPC
data "aws_vpc" "eks_vpc" {
  id = data.aws_eks_cluster.cluster.vpc_config[0].vpc_id
}

# 2. Get Private Subnets
data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.eks_vpc.id]
  }
  filter {
    name   = "tag:kubernetes.io/role/internal-elb"
    values = ["1"]
  }
}

# 3. Get EKS Node Security Group
data "aws_security_groups" "eks_nodes" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.eks_vpc.id]
  }
  filter {
    name   = "tag:aws:eks:cluster-name"
    values = [var.cluster_name]
  }
}

# 4. Create Security Group for RDS
resource "aws_security_group" "rds_sg" {
  name        = "rds-security-group"
  description = "Allow inbound traffic from EKS nodes"
  vpc_id      = data.aws_vpc.eks_vpc.id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [data.aws_security_groups.eks_nodes.ids[0]]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# 5. Create DB Subnet Group
resource "aws_db_subnet_group" "default" {
  name       = "three-tier-subnet-group"
  subnet_ids = data.aws_subnets.private.ids
  tags = {
    Name = "My DB subnet group"
  }
}

# 6. Generate Random Password
resource "random_password" "db_password" {
  length           = 16
  special          = true
  override_special = "_%@"
}

# 7. Create RDS Instance
resource "aws_db_instance" "default" {
  allocated_storage      = 20
  storage_type           = "gp2"
  engine                 = "postgres"
  engine_version         = "15"
  instance_class         = "db.t3.micro"
  db_name                = "devops_learning"
  username               = "postgresadmin"
  password               = random_password.db_password.result
  skip_final_snapshot    = true
  db_subnet_group_name   = aws_db_subnet_group.default.name
  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  multi_az               = false # turn on in production (off here just to save cost)
  apply_immediately      = true
}
```
5. Review k8s.tf (Kubernetes Resources):
This file creates the Kubernetes namespace, secrets, ExternalName service, and ConfigMap.
```hcl
# 8. Create Namespace
resource "kubernetes_namespace" "app" {
  metadata {
    name = "3-tier-app-eks"
  }
}

# 9. Create Kubernetes Secret (AUTOMATED)
resource "kubernetes_secret" "database_secret" {
  metadata {
    name      = "database-secret"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  data = {
    DATABASE_URL = "postgresql://${aws_db_instance.default.username}:${random_password.db_password.result}@${aws_db_instance.default.address}:${aws_db_instance.default.port}/${aws_db_instance.default.db_name}"
    DB_PASSWORD  = random_password.db_password.result
  }

  type = "Opaque"
}

# 10. Create ExternalName Service (AUTOMATED)
resource "kubernetes_service" "postgres_db" {
  metadata {
    name      = "postgres-db"
    namespace = kubernetes_namespace.app.metadata[0].name
  }
  spec {
    type          = "ExternalName"
    external_name = aws_db_instance.default.address
    port {
      port = 5432
    }
  }
}

# 11. Create ConfigMap (AUTOMATED)
resource "kubernetes_config_map" "app_config" {
  metadata {
    name      = "app-config"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  data = {
    DB_NAME     = "devops_learning"
    BACKEND_URL = "http://backend:8000"
  }
}
```
6. Apply the Infrastructure:
```bash
# Initialise Terraform
terraform init

# Apply configuration
terraform apply
```
Terraform will now create everything needed for your environment: RDS instance, Namespace, Secret, Service, and ConfigMap.

GitHub Actions & ECR setup
We need to set up OIDC to allow GitHub Actions to push to AWS ECR without hardcoding long-term credentials. Instead of manual AWS CLI commands, we will add this to our Terraform stack.
Set up OIDC & IAM Roles (Terraform)
1. Review iam_oidc.tf in your terraform directory:
```hcl
# 1. Create OIDC Provider for GitHub Actions
resource "aws_iam_openid_connect_provider" "github" {
  url = "https://token.actions.githubusercontent.com"

  client_id_list = [
    "sts.amazonaws.com",
  ]

  thumbprint_list = [
    "6938fd4d98bab03faadb97b34396831e3780aea1", # GitHub's thumbprint
    "1c58a3a8518e8759bf075b76b750d4f2df264fcd"  # Backup thumbprint just in case
  ]
}

# 2. Create IAM Role for GitHub Actions
resource "aws_iam_role" "github_actions" {
  name = "GitHubActionsECR"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRoleWithWebIdentity"
        Effect = "Allow"
        Principal = {
          Federated = aws_iam_openid_connect_provider.github.arn
        }
        Condition = {
          StringLike = {
            "token.actions.githubusercontent.com:sub" = "repo:${var.github_repo}:*"
          }
          StringEquals = {
            "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
          }
        }
      },
    ]
  })
}

# 3. Attach ECR PowerUser Policy
resource "aws_iam_role_policy_attachment" "ecr_poweruser" {
  role       = aws_iam_role.github_actions.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser"
}

output "github_actions_role_arn" {
  value = aws_iam_role.github_actions.arn
}
```
2. Apply the changes:
```bash
# Apply again to create the OIDC resources
terraform apply
```
3. Get the Role ARN:
Terraform will output github_actions_role_arn. Copy this value.
Configuration & Deployment Preparation
Now that the infrastructure is ready, we need to prepare our application for deployment.
1. Set up ECR and CI/CD
- Create frontend and backend repositories in AWS ECR.
- Update the .github/workflows/ci.yml file in your repo (see the sketch after this list) with:
  - Your AWS region
  - Your ECR repository names
  - The IAM role ARN you created in the previous step
- Push these changes to GitHub to trigger the build pipeline.
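The exact workflow lives in .github/workflows/ci.yml in the repo. As a rough sketch of the part you are editing (not the repo's exact file), the OIDC credentials and ECR login steps of such a workflow typically look like this, with the role ARN, region, repository name, and build path as placeholders:

```yaml
# Sketch only: typical OIDC + ECR steps in a GitHub Actions workflow.
# Everything in <angle brackets> is a placeholder you must replace.
permissions:
  id-token: write   # required for the OIDC token exchange with AWS
  contents: read

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: <github_actions_role_arn from the Terraform output>
          aws-region: eu-west-2

      - name: Login to Amazon ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push the backend image
        run: |
          # Assumes a Dockerfile under ./backend and an ECR repo named "backend"
          docker build -t ${{ steps.ecr.outputs.registry }}/backend:latest ./backend
          docker push ${{ steps.ecr.outputs.registry }}/backend:latest
```

The key point is that no AWS access keys are stored in GitHub; role-to-assume is the ARN Terraform printed in the previous step.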
2. Update Kubernetes Manifests
All deployment manifests are located in the k8s/ folder.
⚠️ CRITICAL STEP: Update Image References
The files k8s/backend.yaml, k8s/frontend.yaml, and k8s/migration_job.yaml are currently configured with placeholder or private ECR URIs.
Action: Open these files and replace the existing image URI (e.g., 373317459404.dkr.ecr.eu-west-2.amazonaws.com) with your own ECR URI: <your-account-id>.dkr.ecr.<region>.amazonaws.com. If you skip this, your pods will fail with ImagePullBackOff.
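For orientation, the field you are changing is the container image in each Deployment/Job spec. A hedged fragment (names here are illustrative; check the actual manifests in the repo):

```yaml
# Illustrative fragment only - the image field is what you edit in each manifest
spec:
  containers:
    - name: backend
      # Replace the account ID and region with your own ECR URI
      image: <your-account-id>.dkr.ecr.<region>.amazonaws.com/backend:latest
```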
Verification
Since we used Terraform, the Namespace, Database Secret, Database Service, and Application ConfigMap have already been created for us!
Let’s verify that everything is set up correctly:
```bash
# 1. Check if the namespace exists
kubectl get namespace 3-tier-app-eks

# 2. Check if the secret exists (do NOT print it)
kubectl get secret database-secret -n 3-tier-app-eks

# 3. Check if the ExternalName service is pointing to your RDS
kubectl get svc postgres-db -n 3-tier-app-eks

# 4. Check if the ConfigMap exists
kubectl get configmap app-config -n 3-tier-app-eks
```
You should see all resources listed. This means Terraform successfully automated the “glue” between AWS and Kubernetes.
Deep Dive: The ExternalName Service Pattern
An ExternalName service maps a Kubernetes Service to an external DNS name (like an AWS RDS endpoint).
Benefits:
- Service Discovery: Apps connect to postgres-db (standard internal DNS), not a long AWS endpoint string.
- Decoupling: If the RDS endpoint changes, you only update this one manifest (or Terraform resource), not your app code.
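For reference, the plain-YAML equivalent of the ExternalName Service that Terraform creates would look roughly like this (the RDS hostname below is illustrative; Terraform injects the real endpoint):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-db
  namespace: 3-tier-app-eks
spec:
  type: ExternalName
  # Illustrative value - in this project Terraform sets the real RDS address
  externalName: three-tier-db.xxxxxxxxxxxx.eu-west-2.rds.amazonaws.com
  ports:
    - port: 5432
```

Any pod in the namespace can reach the database as postgres-db:5432, and Kubernetes DNS resolves it to the RDS endpoint via a CNAME record.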
To verify connectivity, we can spin up a temporary pod:
```bash
kubectl run pg-connection-test --rm -it --image=postgres:14 --namespace=3-tier-app-eks -- bash
```
```bash
# Inside the pod, connect with SSL:
PGSSLMODE=require psql -h postgres-db -p 5432 -U postgresadmin -d devops_learning
# Enter password (use 'terraform output -raw db_password' to see it if needed). Type 'exit' to leave.
```
Note: RDS PostgreSQL 15 requires SSL by default. We use PGSSLMODE=require to enable SSL in the connection.
Running database migrations
In this project there's a database migration that needs to run before deploying the backend service. This is done using a Kubernetes Job, a one-time task that runs to completion; it creates the database tables and seed data the application needs.
A Job creates a pod that runs the specified command and then exits. If the command fails, the Job retries until it succeeds or reaches its backoff limit.
Running kubectl apply -f migration_job.yaml creates a Job that applies the database migrations, creating the necessary tables and seed data in the RDS PostgreSQL database. It's worth reading the Job manifest to understand what it does and how it works: this is where the secret we created earlier is used to connect to the database. Referencing the secret is better than hardcoding the database credentials in the manifest, because you can change the credentials without modifying the Job.
(Don't forget to modify the file to use your own backend image.)
The job will use the DATABASE_URL from the database-secret to authenticate.
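The real manifest is k8s/migration_job.yaml in the repo; in outline, a migration Job wired to that secret looks something like the sketch below (the image URI and migration command are placeholders, not the repo's exact values):

```yaml
# Outline only - see k8s/migration_job.yaml in the repo for the real manifest
apiVersion: batch/v1
kind: Job
metadata:
  name: database-migration
  namespace: 3-tier-app-eks
spec:
  backoffLimit: 4              # retry a few times before marking the Job failed
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: <your-account-id>.dkr.ecr.<region>.amazonaws.com/backend:latest
          command: ["flask", "db", "upgrade"]   # placeholder migration command
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: database-secret
                  key: DATABASE_URL
```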
```bash
# run these one by one in the k8s/ folder
kubectl apply -f migration_job.yaml
kubectl get job -A
kubectl get pods -n 3-tier-app-eks
```
Run kubectl describe pod database-migration-<name> -n 3-tier-app-eks and you should see that it completed successfully.
Backend and Frontend services
Now let's deploy the backend and frontend services. The backend service is a Flask application that connects to the RDS PostgreSQL database, and the frontend service is a React application that communicates with the backend service.
Read the manifest files for the backend and frontend services to understand what they do and how they work.
Backend Environment Variables:
The backend uses the DATABASE_URL from our database-secret to connect to RDS:
```yaml
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: database-secret
        key: DATABASE_URL
  - name: ALLOWED_ORIGINS
    value: "*"
```
Frontend Environment Variables:
The frontend uses the BACKEND_URL from our app-config ConfigMap:
```yaml
env:
  - name: BACKEND_URL
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: BACKEND_URL
```
Apply the deployments:
```bash
# apply the backend service manifest
kubectl apply -f backend.yaml

# apply the frontend service manifest
kubectl apply -f frontend.yaml

# check the status of the deployments and services
kubectl get deployment -n 3-tier-app-eks
kubectl get svc -n 3-tier-app-eks
```
Accessing the application
At the moment we haven't created Ingress resources to expose the application to the internet. To access the application temporarily, we will port-forward the frontend and backend services to our local machine, which lets us reach them via localhost on specific ports. Open two terminal windows, one for the backend service and one for the frontend service, because each port-forward command blocks its terminal until you stop it with CTRL+C.
```bash
# port-forward the backend service to localhost:8000
kubectl port-forward svc/backend 8000:8000 -n 3-tier-app-eks

# port-forward the frontend service to localhost:8080
kubectl port-forward svc/frontend 8080:80 -n 3-tier-app-eks
```
You can access the backend service at http://localhost:8000/api/topics in the browser, or run curl http://localhost:8000/api/topics in the terminal.
You can access the frontend service at http://localhost:8080 in the browser. The frontend will communicate with the backend to fetch and display data.

This is a DevOps quiz application. The seed data created some sample questions; under Manage Questions you can add more questions and answers. The 3-tier-app-eks/backend/questions-answers folder includes some CSV files that you can use to import questions and answers into the application, and you can also add your own through the frontend interface.
Time to implement Ingress
We need an Ingress Controller to manage external access (HTTP/HTTPS) to our services. We’ll use the AWS Load Balancer Controller, managed via Terraform.
Review lb_controller.tf:
This file uses Terraform to install the AWS Load Balancer Controller via Helm. Since we created our cluster with eksctl and enabled albIngress: true in the cluster configuration, the necessary IAM permissions are already in place.
resource "helm_release" "aws_load_balancer_controller" { name = "aws-load-balancer-controller" repository = "https://aws.github.io/eks-charts" chart = "aws-load-balancer-controller" namespace = "kube-system" version = "1.7.1"
set { name = "clusterName" value = var.cluster_name }
set { name = "serviceAccount.create" value = "true" }
set { name = "serviceAccount.name" value = "aws-load-balancer-controller" }}Apply the Terraform configuration:
```bash
cd terraform
terraform apply -auto-approve
```
Deep Dive: Helm & AWS Load Balancer Controller
Helm is a package manager for Kubernetes (like apt or brew).
Instead of applying dozens of individual YAML files manually, we use a Chart to install the Controller.
This Controller watches for Ingress resources in your cluster and automatically provisions AWS ALBs to satisfy them.
Why Terraform? Managing the Load Balancer Controller via Terraform ensures:
- Infrastructure as Code consistency
- Version control for the controller installation
- Easier rollback and updates
- Integration with other Terraform-managed resources

Ingress Manifest
Now, apply the ingress manifest. This tells the controller to route traffic:
- /api -> Backend Service
- / -> Frontend Service
```bash
kubectl apply -f ingress.yaml
```
The ingress.yaml file:
```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.aws/alb
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: three-tier-ingress
  namespace: 3-tier-app-eks
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 8000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```
Verify it’s working (it takes a few minutes for the ALB to provision):
```bash
# Check Ingress status (look for the ADDRESS field)
kubectl get ingress -n 3-tier-app-eks

# Debugging: check the controller logs
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
```
It may take a few minutes for the ALB to be provisioned and the DNS name to become available. Once it is, you can access the application by copying the DNS name from the ingress output and pasting it into your browser.
Complete CI/CD using ArgoCD (GitOps)
ArgoCD continuously syncs your GitHub repository with your Kubernetes cluster.
1. Install ArgoCD (Terraform)
Review argocd.tf:
resource "helm_release" "argocd" { name = "argocd" repository = "https://argoproj.github.io/argo-helm" chart = "argo-cd" namespace = "argocd" create_namespace = true version = var.argocd_chart_version
set { name = "server.service.type" value = "ClusterIP" } set { name = "redis-ha.enabled" value = "false" }}Apply the changes:
```bash
terraform apply
```
To access the UI:
```bash
# Wait for pods
kubectl get pods -n argocd

# then access the UI
kubectl port-forward svc/argocd-server -n argocd 9000:443
```
Access via https://localhost:9000. Accept the self-signed certificate warning.
2. Get Password & Login
Username: admin
```bash
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
```
3. Deploy the Application
We’ll use an Application CRD to tell ArgoCD what to sync.
⚠️ CRITICAL STEP: Update argocd-app.yaml with YOUR GitHub repoURL before applying.
```bash
kubectl apply -f argocd-app.yaml
```
Deep Dive: Understanding the ArgoCD App Manifest
The argocd-app.yaml file defines:
- Source: Your Git repo URL and path (k8s/).
- Destination: The target cluster and namespace (3-tier-app-eks).
- SyncPolicy:
  - automated: Automatically applies changes.
  - prune: Deletes resources removed from Git.
  - selfHeal: Reverts manual changes to match Git state.
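The real file is argocd-app.yaml in the repo; a hedged outline of such an Application manifest, with the application name and branch as assumptions, looks like this:

```yaml
# Outline only - see argocd-app.yaml in the repo; update repoURL to YOUR fork
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: three-tier-app          # assumed name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-user>/3-tier-eks.git
    targetRevision: main        # assumed branch
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: 3-tier-app-eks
  syncPolicy:
    automated:
      prune: true       # delete resources removed from Git
      selfHeal: true    # revert manual changes to match Git
```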
Check the UI—you should see your app syncing!

Adding Monitoring (Prometheus & Grafana)
We’ll use the kube-prometheus-stack Helm chart to install a full monitoring suite, managed via Terraform.
Review monitoring.tf:
This file uses Terraform to install Prometheus and Grafana via the kube-prometheus-stack Helm chart.
resource "helm_release" "kube_prometheus_stack" { name = "prometheus" repository = "https://prometheus-community.github.io/helm-charts" chart = "kube-prometheus-stack" namespace = "monitoring" create_namespace = true version = "56.6.2"
# Allow Prometheus to discover all ServiceMonitors across namespaces set { name = "prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues" value = "false" }
# Set Grafana admin password set { name = "grafana.adminPassword" value = "admin123" }
# Optional: Persist Grafana dashboards and data set { name = "grafana.persistence.enabled" value = "false" }
# Optional: Persist Prometheus data set { name = "prometheus.prometheusSpec.retention" value = "7d" }}Apply the Terraform configuration:
```bash
cd terraform
terraform apply -auto-approve
```
Access Grafana
```bash
kubectl port-forward svc/prometheus-grafana -n monitoring 3000:80
```
Go to http://localhost:3000. Log in with admin / admin123.
Explore the pre-built dashboards under Dashboards > General > Kubernetes / Compute Resources.
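Because serviceMonitorSelectorNilUsesHelmValues is set to false, Prometheus will also discover ServiceMonitors outside the monitoring namespace. Purely as an illustration (the backend in this project is not assumed to expose metrics, and the label and port name below are assumptions), scraping an application service would look roughly like this:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: backend-metrics
  namespace: 3-tier-app-eks
spec:
  selector:
    matchLabels:
      app: backend        # assumes the backend Service carries this label
  endpoints:
    - port: http          # assumes the Service names its port "http"
      path: /metrics
      interval: 30s
```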

What’s Next? (Production Enhancements)
To make this project fully enterprise-ready, in the next steps we will add:
- Custom Domain & SSL: Use ExternalDNS to sync Route53 with the Ingress, and AWS Certificate Manager (ACM) to automatically provision free SSL certificates for the ALB.
- Network Security: Implement Kubernetes Network Policies to restrict traffic. For example, allow the Frontend to talk only to the Backend, and the Backend only to the Database, blocking all other internal traffic.
- Auto-Scaling: Configure Horizontal Pod Autoscalers (HPA) to automatically add more backend pods when CPU usage spikes (e.g., above 70%).
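As a taste of the auto-scaling item above, a Horizontal Pod Autoscaler for the backend might look like the sketch below (the Deployment name, replica counts, and threshold are assumptions; it also requires the Kubernetes Metrics Server and CPU requests set on the pods):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
  namespace: 3-tier-app-eks
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend              # assumes the Deployment is named "backend"
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```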
Cleanup & Teardown
⚠️ IMPORTANT: Follow these steps in order to avoid unexpected AWS charges and ensure complete resource cleanup.
Step 1: Delete Kubernetes Resources (Optional but Recommended)
First, delete the Ingress to trigger ALB cleanup:
```bash
kubectl delete ingress three-tier-ingress -n 3-tier-app-eks
```
Wait 1-2 minutes for the AWS Load Balancer Controller to clean up the ALB and target groups.
Step 2: Destroy Terraform-Managed Resources
```bash
cd terraform
terraform destroy -auto-approve
```
This will remove:
- RDS PostgreSQL database
- Kubernetes namespace and resources
- ArgoCD installation
- AWS Load Balancer Controller
- Prometheus & Grafana monitoring stack
- IAM roles and OIDC provider for GitHub Actions
Note: If the namespace deletion hangs, you may need to force-delete it (see troubleshooting below).
Step 3: Delete the EKS Cluster
```bash
eksctl delete cluster three-tier --region eu-west-2
```
This will delete:
- EKS control plane
- Managed node groups
- VPC and networking resources
- IAM roles for nodes
Troubleshooting Cleanup Issues
If the ALB won’t delete:
The AWS Load Balancer Controller may leave behind ALBs if the Ingress wasn’t properly deleted. Manually remove them:
```bash
# Find the ALB
aws elbv2 describe-load-balancers --region eu-west-2 \
  --query 'LoadBalancers[?contains(LoadBalancerName, `k8s-3tierapp`)].LoadBalancerArn' \
  --output text

# Delete it
aws elbv2 delete-load-balancer --load-balancer-arn <ARN> --region eu-west-2

# Delete target groups (wait 10 seconds after deleting the ALB)
aws elbv2 describe-target-groups --region eu-west-2 \
  --query 'TargetGroups[?contains(TargetGroupName, `k8s-3tierapp`)].TargetGroupArn' \
  --output text | xargs -I {} aws elbv2 delete-target-group --target-group-arn {} --region eu-west-2
```
If namespace deletion hangs:
```bash
kubectl delete namespace 3-tier-app-eks --force --grace-period=0
```
If ArgoCD CRDs remain:
```bash
kubectl delete crd applications.argoproj.io applicationsets.argoproj.io appprojects.argoproj.io
```
Verification
Double-check the AWS Console to ensure all resources are deleted:
- EC2 Dashboard: No running instances, load balancers, or target groups
- RDS Dashboard: No databases
- EKS Dashboard: No clusters
- CloudFormation: No stacks related to eksctl-three-tier
- IAM: No roles starting with eksctl-three-tier or GitHubActionsECR
Estimated monthly cost if left running: ~$73 + RDS ($20-40).
