Deploy Applications on AWS: A Step-by-Step Guide to EC2, ELB, Route 53, and More


Scaling apps on AWS without breaking a sweat!


In this guide, we’ll deploy a scalable and secure application on AWS, utilizing Memcached for caching, RabbitMQ for background task processing, MySQL for data storage, and Tomcat for application hosting. While this guide doesn’t cover full application development or third-party integrations, it focuses on using AWS tools like EC2, S3, Route 53, IAM, and Load Balancers to build a robust, enterprise-level infrastructure. Let’s dive in!


Tools & AWS Services You’ll Learn & Use in This Project

In this project, you'll gain hands-on experience with multiple AWS services and external tools essential for deploying a full-stack application. Below is a list of everything you'll be working with:

AWS Services:

  • EC2 – Launch and configure virtual machines to host application components

  • Elastic Load Balancer (ELB) – Distribute incoming traffic across multiple instances

  • Target Groups – Manage and route traffic to the appropriate backend instances

  • Auto Scaling Group – Ensure high availability by automatically scaling instances

  • S3 – Store and retrieve application build artifacts

  • Route 53 – Configure private DNS for internal service communication

  • AWS Certificate Manager (ACM) – Manage SSL/TLS certificates for HTTPS

  • IAM – Set up roles and permissions for secure access to AWS resources

  • Custom AMI – Create a pre-configured machine image for faster deployments

  • Launch Template – Standardize EC2 instance configurations for Auto Scaling

  • Security Groups – Control inbound and outbound traffic for instances

  • Key Pairs – Enable secure SSH access to EC2 instances

External Tools & Technologies:

  • Maven – Build and package the application

  • Tomcat – Deploy and serve the backend application

  • RabbitMQ – Implement message queueing for communication between services

  • Memcached – Improve application performance with caching

  • MySQL – Manage the application’s database

  • GoDaddy – Register and manage domain names

  • Bash Scripting – Automate EC2 instance setup using user data

  • AWS CLI – Interact with AWS services via command-line

  • SSH – Securely connect to remote EC2 instances

  • Git/GitHub – Version control for source code (if applicable)

By the end of this project, you’ll have a strong understanding of these services and tools, setting a solid foundation for future AWS deployments. Let’s get started! 🚀


Prerequisites

  • Create an AWS account to use its services.

  • Set up a CloudWatch billing alarm to avoid unexpected charges.

  • Have a full-stack application that uses external backend services such as Memcached, RabbitMQ, and MySQL.

  • Purchase a domain and set up a certificate on AWS Certificate Manager (optional).

  • Verify that the required build tools (JDK 17 and Maven) are installed.

  • Clone the repository: https://github.com/hkhcoder/vprofile-project/tree/awsliftandshift


Architecture

This deployment follows a highly scalable and secure architecture using AWS services like Elastic Load Balancer (ELB), Auto Scaling Groups, Route 53, S3, Memcached, RabbitMQ, and MySQL. The goal is to efficiently handle web traffic, background processing, and caching while maintaining high availability and fault tolerance.

1. Domain Registration & SSL Certificate Setup

  • Purchase a domain from a provider

  • Request a certificate for the domain in AWS Certificate Manager (ACM) and validate it

  • Once validated, ACM provides an SSL certificate, which is installed on the Application Load Balancer (ALB)’s exposed endpoint for secure HTTPS communication

2. Application Load Balancer (ALB)

  • Listens for incoming requests on port 443 (HTTPS) and routes traffic to the Auto Scaling Group (ASG)

  • Security Groups are configured to allow only necessary inbound traffic, ensuring a secure infrastructure

3. Auto Scaling Group (ASG) & Compute Instances

  • ASG manages multiple instances of the application and scales dynamically based on demand

  • Uses a Launch Template with a Tomcat-based Amazon Machine Image (AMI) for deployment

  • Tomcat instances pull the application artifacts from an Amazon S3 bucket

4. Backend Services (Memcached, RabbitMQ, MySQL)

  • The application does not reach backend services via hard-coded IP addresses, since IPs can change (for example, on instance replacement)

  • Instead, it uses an Amazon Route 53 private hosted zone, mapping each service’s (Memcached, RabbitMQ, MySQL) private IP to a domain name via an A record

5. Security & Access Control

  • Security Groups are configured for the ALB, backend services, and Tomcat servers

  • Only authorized services are allowed to communicate

  • Ensures secure and controlled access, preventing unauthorized connections


Setting up the EC2 instances

  1. Security Groups

  • Load Balancer

    • Accepts traffic from all IPv4 and IPv6 addresses

    • Uses the HTTP and HTTPS protocols (HTTP for testing purposes)

    • Follow a structured, consistent naming convention

    • Add tags to better distinguish the resources

  • Tomcat Server

    • Accepts traffic at port 8080 from ELB’s security group

    • Allow SSH from your current IP so you can connect to the instance

    • Naming convention

  • Memcached, RabbitMQ and MySQL Services

    • Accept MySQL requests on port 3306 from the server’s security group

    • Accept Memcached requests on port 11211 from the server’s security group

    • Accept RabbitMQ requests on port 5672 from the server’s security group

    • Add your current user IP to allow SSH access into the instance

    • If services communicate internally (Ex: RabbitMQ uses Memcached), grant All Traffic access to the same security group (select the same security group from the dropdown). In our case, Memcached uses RabbitMQ, so we allow All Traffic within the same security group.

    • Naming convention

  2. Key Pairs

  • A key pair proves the user’s identity and authorizes access

  • Set up 3 key pairs: server, services, and load balancer

  3. Domain Name (Optional)

  • Purchase a domain name from a provider (Ex: GoDaddy)

  • Navigate to AWS Certificate Manager (ACM) and request a public certificate

  • Add the domain name from the provider with a ‘*.’ prefix (Ex: *.testdomain.xyz)

  • Copy down the CNAME and CNAME’s value from ACM

  • Add a new record on the domain provider portal with those values under the CNAME type (a CNAME maps one name to another name)

  • Once the CNAME resolves, the certificate shows the “Issued” status in ACM

  4. Tomcat Instance

  • Keep all resources used for the EC2 instance within the Free Tier

  • Note that 30 GiB of EBS volume falls under the AWS Free Tier; exceeding it incurs a small charge based on usage

  • You can stop instances when not in use; you will not be charged for compute time while they are stopped (attached storage still accrues charges)

  • Name: testapp-prod-server

  • Amazon Machine Image (AMI): Ubuntu Server 24.04 LTS

  • Instance type: t2.micro

  • Key pair: Select the key pair created in previous session for server

  • Security group: Select the security group created in previous session for server

  • Storage: 8 GiB General Purpose SSD (gp3)

  • Provision the Tomcat instance at creation time by adding the commands below under the Advanced Details → User data section

  •           #!/bin/bash
              sudo apt update
              sudo apt upgrade -y
              sudo apt install openjdk-17-jdk -y
              sudo apt install tomcat10 tomcat10-admin tomcat10-docs tomcat10-common git -y
    

  5. Memcached Instance

  • Name: testapp-prod-memcached

  • Amazon Machine Image (AMI): Amazon Linux 2023 AMI

  • Instance type: t2.micro

  • Key pair: Select the key pair created in previous session for BE services

  • Security group: Select the security group created in previous session for BE services

  • Storage: 8 GiB General Purpose SSD (gp3)

  • Provision the Memcached instance at creation time by adding the commands below under the Advanced Details → User data section

  • In the commands below, we install, start, and enable the Memcached service (enabling lets it come back up after a reboot)

  • Change the bind address from loopback (127.0.0.1) to all IPv4 interfaces (0.0.0.0)

  • After any change to a service’s configuration, restart the service for the change to take effect

  • Run memcached on TCP port 11211 and UDP port 11111

  •           #!/bin/bash
              sudo dnf install memcached -y
              sudo systemctl start memcached
              sudo systemctl enable memcached
              sudo systemctl status memcached
              sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/sysconfig/memcached
              sudo systemctl restart memcached
              sudo memcached -p 11211 -U 11111 -u memcached -d
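The sed line above is the key step. Here is a minimal sketch of what it does, run against a throwaway copy instead of the real /etc/sysconfig/memcached (the file layout shown is an assumption about the stock config):

```shell
#!/bin/sh
# Sketch only: apply the same substitution the user-data script runs,
# but on a temporary stand-in for /etc/sysconfig/memcached.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1"
EOF
# Same sed as the user-data: rebind from loopback to all IPv4 interfaces
sed -i 's/127.0.0.1/0.0.0.0/g' "$tmp"
result=$(grep '^OPTIONS' "$tmp")
echo "$result"
rm -f "$tmp"
```

After the substitution, the OPTIONS line reads `-l 0.0.0.0,::1`, so memcached accepts connections from other hosts (which is why the security group, not the bind address, must do the access control).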
    

  6. RabbitMQ Instance

  • Name: testapp-prod-rabbitmq

  • Amazon Machine Image (AMI): Amazon Linux 2023 AMI

  • Instance type: t2.micro

  • Key pair: Select the key pair created in previous session for BE services

  • Security group: Select the security group created in previous session for BE services

  • Storage: 8 GiB General Purpose SSD (gp3)

  • Provision the RabbitMQ instance at creation time by adding the commands below under the Advanced Details → User data section

  • In the commands below, we import RabbitMQ’s signing keys

  • RabbitMQ depends on Erlang, so we also import the Erlang repository key and install supporting packages such as socat and logrotate

  • We then set up the RabbitMQ service and create a new user with administrator access

  • By default, RabbitMQ only allows its built-in users to connect from the loopback address (127.0.0.1); setting loopback_users to an empty list lets users log in from other hosts

    #!/bin/bash
    ## primary RabbitMQ signing key
    rpm --import 'https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc'
    ## modern Erlang repository
    rpm --import 'https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key'
    ## RabbitMQ server repository
    rpm --import 'https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key'
    curl -o /etc/yum.repos.d/rabbitmq.repo https://raw.githubusercontent.com/hkhcoder/vprofile-project/refs/heads/awsliftandshift/al2023rmq.repo
    dnf update -y
    ## install these dependencies from standard OS repositories
    dnf install socat logrotate -y
    ## install RabbitMQ and zero dependency Erlang
    dnf install -y erlang rabbitmq-server
    systemctl enable rabbitmq-server
    systemctl start rabbitmq-server
    sudo sh -c 'echo "[{rabbit, [{loopback_users, []}]}]." > /etc/rabbitmq/rabbitmq.config'
    sudo rabbitmqctl add_user test test
    sudo rabbitmqctl set_user_tags test administrator
    rabbitmqctl set_permissions -p / test ".*" ".*" ".*"

    sudo systemctl restart rabbitmq-server
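The one-liner that writes /etc/rabbitmq/rabbitmq.config deserves a closer look; the file it produces is a classic Erlang-term config:

```erlang
%% /etc/rabbitmq/rabbitmq.config
%% An empty loopback_users list lifts the "localhost only" restriction,
%% so accounts such as the "test" admin user created above can log in
%% from other hosts (e.g., the Tomcat server).
[{rabbit, [{loopback_users, []}]}].
```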

  7. MySQL Instance

  • Name: testapp-prod-mysql

  • Amazon Machine Image (AMI): Amazon Linux 2023 AMI

  • Instance type: t2.micro

  • Key pair: Select the key pair created in previous session for BE services

  • Security group: Select the security group created in previous session for BE services

  • Storage: 8 GiB General Purpose SSD (gp3)

  • Provision the MySQL instance at creation time by adding the commands below under the Advanced Details → User data section

  • In the commands below, we set up the MariaDB service, remove insecure default accounts, create a new admin user with full privileges on the application database, and restore the database dump

    #!/bin/bash
    DATABASE_PASS='admin123'
    sudo dnf update -y
    sudo dnf install git zip unzip -y
    sudo dnf install mariadb105-server -y
    # starting & enabling mariadb-server
    sudo systemctl start mariadb
    sudo systemctl enable mariadb
    cd /tmp/
    git clone -b main https://github.com/hkhcoder/vprofile-project.git
    #restore the dump file for the application
    sudo mysqladmin -u root password "$DATABASE_PASS"
    sudo mysql -u root -p"$DATABASE_PASS" -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '$DATABASE_PASS'"
    sudo mysql -u root -p"$DATABASE_PASS" -e "DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost', '127.0.0.1', '::1')"
    sudo mysql -u root -p"$DATABASE_PASS" -e "DELETE FROM mysql.user WHERE User=''"
    sudo mysql -u root -p"$DATABASE_PASS" -e "DELETE FROM mysql.db WHERE Db='test' OR Db='test\_%'"
    sudo mysql -u root -p"$DATABASE_PASS" -e "FLUSH PRIVILEGES"
    sudo mysql -u root -p"$DATABASE_PASS" -e "create database accounts"
    sudo mysql -u root -p"$DATABASE_PASS" -e "grant all privileges on accounts.* TO 'admin'@'localhost' identified by 'admin123'"
    sudo mysql -u root -p"$DATABASE_PASS" -e "grant all privileges on accounts.* TO 'admin'@'%' identified by 'admin123'"
    sudo mysql -u root -p"$DATABASE_PASS" accounts < /tmp/vprofile-project/src/main/resources/db_backup.sql
    sudo mysql -u root -p"$DATABASE_PASS" -e "FLUSH PRIVILEGES"

Before we continue, please SSH into all 4 instances and use the commands below to verify that the services are active and running

Steps to SSH into the EC2 Instance

  1. Find the instance public IP in instance summary page

  2. In Git Bash, navigate to the folder where the key pairs were downloaded (the Downloads folder in our case)

  3. Find default username in Instance summary → Connect → EC2 Instance Connect

  4. SSH into instance through command: ssh -i key_pair_name.pem default_username@instance_public_ip

  5. Check service status through command: sudo systemctl status service_name

  6. Make sure all services are active

    • Tomcat

    • Memcached

    • RabbitMQ

    • MySQL
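One gotcha before step 4: ssh refuses a private key whose permissions are too open ("UNPROTECTED PRIVATE KEY FILE"), so tighten the downloaded .pem first. A small sketch (the user and IP in the comment are placeholders):

```shell
#!/bin/sh
# ssh rejects keys that are group/world readable, so lock the .pem down
# to owner read-only before connecting.
key=$(mktemp)                 # stand-in for key_pair_name.pem
chmod 400 "$key"
perms=$(stat -c '%a' "$key")  # GNU stat; prints the octal mode, e.g. 400
echo "key permissions: $perms"
# Then connect (placeholder username/IP):
#   ssh -i key_pair_name.pem ubuntu@203.0.113.10
rm -f "$key"
```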

Congrats! You’ve set up your Amazon EC2 instance. Ready to scale and make the most of the cloud! 🚀


Setting up Route 53

To allow our Tomcat server to connect to backend services like RabbitMQ, Memcached, and MySQL, we can use either IP addresses or domain names. We prefer domain names for several reasons:

  • Public IPs are dynamic and can change on reboot, while DNS ensures requests are always routed to the correct IP.

  • If we have multiple service instances, load balancers can be integrated with DNS to route traffic effectively.

  • In case of a server failure, DNS allows for easy redirection to a backup server.

  • DNS is simpler to manage and embed compared to using IP addresses.

Steps to Create a Hosted Zone:

  1. Search for Route 53 in the AWS Console and navigate to Create a hosted zone.

  2. A hosted zone defines how AWS responds to DNS queries for the specified domain name.

  3. Add your domain name (e.g., testapp.in) and select Private hosted zone (since these services will be used within the AWS boundary).

  4. Add a record for each service:

    • Choose A record type for routing IPv4 addresses to domain names (e.g., route RabbitMQ’s private IP to rmq.testapp.in).

  • Requests to these domain names are routed to the services’ respective private IPs
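If you prefer the CLI over the console, the same A record can be created with aws route53 change-resource-record-sets and a change batch like the sketch below (the hosted zone ID, TTL, and private IP are placeholder values):

```json
{
  "Comment": "Map rmq.testapp.in to the RabbitMQ instance's private IP (example values)",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "rmq.testapp.in",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "172.31.10.25" }]
      }
    }
  ]
}
```

Save it as rmq-record.json and apply it with: aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABC --change-batch file://rmq-record.json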


Setting up the Tomcat Server

In this section, we’ll create an artifact from the project, push it to an S3 bucket, pull it onto the EC2 instance, and set it up on the Tomcat service

Building the project

  1. Clone the vprofile project from the URL provided in the previous section (or any other project to be deployed)

    • Building vprofile requires Java 17 and Maven installed on your PC

    • Check the Java version with the command: javac -version

  2. Update the host/address/URL entries in the properties file to the Route 53 domain names

  3. Build the application with the respective build command

    • For Maven, build with the command: mvn install

    • The build artifact will be found at target → application_name.war
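For the vprofile project, step 2 amounts to editing the properties file under src/main/resources. The keys and hostnames below are illustrative only; match them to the keys your project actually defines and to the records you created in Route 53 (rmq.testapp.in comes from the earlier example, the other hostnames are assumed record names):

```properties
# src/main/resources/application.properties (illustrative keys and hosts)
jdbc.url=jdbc:mysql://db01.testapp.in:3306/accounts
jdbc.username=admin
jdbc.password=admin123
memcached.active.host=mc01.testapp.in
memcached.active.port=11211
rabbitmq.address=rmq.testapp.in
rabbitmq.port=5672
```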

Setting up AWS CLI

To push the artifact to the S3 bucket, we have two options: drag and drop the file in the console, or upload it via the CLI. Since the goal of this blog is to maximize learning, we’ll use the CLI method.

  1. Navigate to Users section on AWS Console

  2. Create a new user and attach policy AmazonS3FullAccess

  3. Navigate to created user and create Access Key (we provide the credentials to CLI to access S3 bucket)

  4. Download .csv file with Access Key and Secret access key

    • Never share the credentials, as attackers/bots might take control of your resources
  5. Open a terminal and run the following commands to set up the AWS CLI (Linux x86_64; see the AWS docs for other platforms):

    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

    unzip awscliv2.zip

    sudo ./aws/install

  6. Check that the AWS CLI is installed using the command: aws --version

    • You can always find other installation methods online if the above commands don’t do the job
  7. Now we set up an AWS CLI profile to authenticate all commands at once instead of passing credentials, region, and output format every time

  8. Run the command and pass the credentials from user we created previously: aws configure

    • You can review your credentials and config in the files ~/.aws/credentials and ~/.aws/config
  9. Keep the region and output format as the defaults (shown in square brackets), or set your preferred values
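Under the hood, aws configure just writes two small INI files in your home directory, roughly as below (the access key values are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = <secret access key from the downloaded .csv>

# ~/.aws/config
[default]
region = us-east-1
output = json
```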

Creating an S3 bucket and uploading the artifact

We now have the AWS CLI set up and ready to go. Let’s proceed by creating an S3 bucket and pushing the artifact onto it.

  1. Navigate to S3 on AWS Console

  2. Create a new S3 bucket

    • The name of the bucket must be globally unique (you may add numbers or hyphens to distinguish it; bucket names only allow lowercase letters, numbers, dots, and hyphens)

    • We do not need to access the S3 bucket from outside the AWS boundary, so leave the ACL (Access Control List) and public access options as they are

    • We do not need versioning on our S3 bucket objects, so keep it disabled

    • We will proceed with SSE-S3 default encryption (you may change it to SSE-KMS depending on the level of control you need over the keys)

  3. Navigate back to project’s shell and use the following command to add artifact to S3 bucket: aws s3 cp path_to_artifact s3://bucket_name

  4. We now have the artifact on S3 bucket!

Creating an IAM role

To grant the Tomcat instance access to the S3 bucket, IAM roles are used. These roles contain specific permissions, and by assigning them to AWS services, we enable secure access to other AWS resources without the need to provide access codes each time.

  1. Navigate to IAM (Identity and Access Management) option on AWS Console and route to Roles

  2. Create a role for AWS Service and select EC2 as service that uses the role

  3. Select AmazonS3FullAccess on the permissions tab and assign a name to identify the role

  4. Navigate to the Tomcat EC2 instance and select it

  5. Attach the role under Actions → Security → Modify IAM role

  6. Our EC2 instance now has access to all S3 buckets in the account

Pulling the artifact and configuring the server

Now that our instance has access to the S3 bucket, let’s pull the artifact and host the application on the Tomcat server

  1. SSH into the Tomcat EC2 instance

  2. To use AWS CLI and pull the artifact, install AWS CLI through command: sudo snap install aws-cli --classic

  3. Pull the artifact into the /tmp folder with the command: sudo aws s3 cp s3://testappbuckets3/vprofile-v2.war /tmp/

  4. To run certain commands, we need root access on the instance; switch with the command: sudo -i

  5. Stop the Tomcat server through command: systemctl stop tomcat10

  6. Tomcat server content resides on path /var/lib/tomcat10/webapps/ROOT

  7. Delete the root folder through command: rm -rf /var/lib/tomcat10/webapps/ROOT

  8. Copy the artifact to Tomcat code path: cp /tmp/artifact_name /var/lib/tomcat10/webapps/ROOT.war

  9. Start the Tomcat server again: systemctl start tomcat10

    • Tomcat automatically extracts the application files from the WAR on startup

    • You can check that the application is hosted by navigating to http://instance_public_ip:8080

    • Note that you need to allow My IP on the server security group for port 8080

  10. Congrats! You have used multiple AWS services by now and completed roughly 75% of the project


Configuring the server to work with the Application Load Balancer over HTTPS

Now that our server is deployed and fully functional, we can introduce a load balancer to route traffic based on specific conditions. Additionally, we’ll explore how to use an SSL certificate. Let’s get started!

Let’s first take a high-level look at how an Application Load Balancer works

  1. Target group: A cluster of similar server instances.

  2. The load balancer pings each instance at regular intervals to check its health.

  3. If an instance responds, it is marked as healthy; if no response, it is marked as unhealthy.

  4. The target group also handles the port to which the load balancer forwards requests (8080 in our case, as Tomcat uses port 8080).
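The probe a target group performs is essentially just an HTTP request to the target port, with a 200-level response counting as healthy. A rough local mimic, using a throwaway Python web server as a stand-in for Tomcat on port 8080:

```shell
#!/bin/sh
# Mimic an ALB health check: request the target port and treat HTTP 200
# as healthy. A throwaway Python web server stands in for Tomcat.
python3 -m http.server 8080 >/dev/null 2>&1 &
server_pid=$!
sleep 1                              # give the stand-in server time to bind
code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8080/)
kill "$server_pid" 2>/dev/null
echo "health check status: $code"
```

A real target group repeats this probe at a configurable interval and marks the instance unhealthy after a threshold of failed responses.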

Now that we are clear on target group (TG) and load balancer (LB) basics, let’s proceed with creating them

  1. Navigate to Target groups in AWS Console path EC2 → Load Balancing → Target Groups

  2. Create a TG with type as Instance (Since TG would wrap around instances)

  3. For Protocol : Port, select HTTP and port 8080

  4. Override the health check port to 8080

  5. Select the server instance(s) and click Include as pending below

    • Make sure port is set to 8080

    • Traffic would be routed to selected instances based on load

  6. In Target group details page, make sure the instances are in healthy state

    • If not, double-check the security groups’ traffic rules
  7. Now, navigate to Load Balancers option and create an Application Load Balancer

  8. As instructed earlier, follow standard naming convention for resources/services

  9. Select all availability zones to ensure high availability in case of data center failures

  10. Select the security group created in previous section

  11. For Listeners and routing: as mentioned in the security group section, we will use HTTPS, but for testing purposes we keep HTTP as well

    • Skip the HTTPS listener if you did not purchase a domain or register it on AWS Certificate Manager
  12. Select the target group created previously

  13. Select the certificate from ACM to enable HTTPS (this installs the SSL certificate for the hosted website)

  14. After the load balancer is created, we can access the server through the DNS name exposed by the LB

Great work! Now, let’s jump into the second-to-last section—setting up an auto-scaling group and fully utilizing load balancers to handle traffic seamlessly. Let’s get ready to scale! 🚀


Leveraging the Auto-Scaling Group

So what exactly are auto-scaling groups?! Auto Scaling Groups automatically adjust the number of server instances based on demand. When traffic spikes, they scale up to handle the load; when usage drops, they scale down to save costs. This ensures optimal performance and efficiency without manual intervention.

How do these work? Let's explore

  1. An ASG has three capacity settings: minimum capacity, desired capacity, and maximum capacity

  2. When traffic surges, the ASG rapidly creates similar instances, with the maximum capacity as the upper bound

  3. If the traffic/load drops below a certain level, the ASG scales in toward the minimum capacity

  4. Under normal circumstances, the instance count stays at the desired capacity
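The three capacity settings act as a clamp on the instance count the scaler requests; a toy sketch (the numbers are arbitrary examples):

```shell
#!/bin/sh
# Toy sketch: an ASG keeps the running instance count between MIN and MAX,
# steering toward whatever the scaling policy currently "wants".
MIN=2; DESIRED=3; MAX=6

clamp() {
  wanted=$1
  if [ "$wanted" -lt "$MIN" ]; then echo "$MIN"
  elif [ "$wanted" -gt "$MAX" ]; then echo "$MAX"
  else echo "$wanted"; fi
}

echo "quiet period  -> $(clamp 1) instances"          # floored at MIN
echo "normal load   -> $(clamp "$DESIRED") instances" # stays at DESIRED
echo "traffic surge -> $(clamp 9) instances"          # capped at MAX
```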

Preparing the prerequisites

  1. We create an Amazon Machine Image from the server instance. This ensures new instances launch from an AMI that already contains the enhancements and changes we made to the instance

    • An image of an Ubuntu instance with the server deployed on it also carries the server changes into the image
  2. Select the server instance on the Instances tab and navigate to Actions → Image and templates → Create image

    • Note that storing an image incurs a small fee; see AWS pricing for details

  3. After the image is created, we need to create a Launch Template

    • Launch templates are used by auto-scaling groups to define the options applied when creating each instance
  4. Navigate to Launch Templates under the Instances option and fill in the details as done while creating the Tomcat server

  5. For AMI, select the image we created under the option My AMIs → Owned by me

  6. Instance type: t2.micro

  7. Key pair: server key pair created in the previous section

  8. Security group: server security group created in the previous section

  9. Expand Advanced details and, for the IAM instance profile, select the role created earlier with AmazonS3FullAccess (so the new server instances also have S3 access for future use)

  10. Since we already provisioned the Tomcat server and are now replicating it, there is no need to add the Tomcat setup commands again

With the dependencies set, let's build the auto-scaling group.

  1. Navigate to Auto Scaling group under Auto Scaling section in AWS Console

  2. Select the previously created launch template and all availability zones

  3. Choose the default Virtual private cloud (preselected)

  4. Attach the existing load balancer

    • We do not attach the ASG directly to the LB; we attach it to the target group, which the LB forwards traffic to

  5. Turn on Elastic Load Balancing health checks for a more refined instance health check

    • The ASG discards unhealthy instances and creates new ones automatically through health checks at regular intervals
  6. Set the capacity values based on your needs

  7. Enable instance scale-in protection to prevent the ASG from terminating instances during scale-in

    • Helps in case one needs to analyze the reason behind an instance failure
  8. You may want to add notifications for certain events and receive alerts for them

  9. After the ASG is set up, we may proceed and delete the Tomcat server that we originally created

    • ASG now has the responsibility to add/delete instances so no need for manually created instances

    • Navigate to target group, select the manually created instance and de-register it

    • Delete the instance from Instances → Instance state → Terminate instance

We have now completed ~95% of the project. Thanks for holding on till here. Let’s set up a domain name for our application


Configuring a subdomain for the application

In this section, let’s set up a domain for the application and implement HTTPS for enhanced security

  1. Copy the load balancer's domain name (using the HTTP protocol).

  2. Go to your domain provider (e.g., GoDaddy).

  3. Add a CNAME record:

    • Name: Enter the subdomain name.

    • Value: Paste the load balancer's DNS name.

  4. Save the record.

  5. You should now be able to access the application at subdomain.domain.

  6. For example, with my domain anubhavgupta.xyz and subdomain woah, the URL would be woah.anubhavgupta.xyz
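In zone-file terms, the record added at GoDaddy amounts to the line below (the load balancer DNS name is a placeholder):

```text
; the subdomain is an alias for the ALB's DNS name (example values)
woah.anubhavgupta.xyz.  600  IN  CNAME  testapp-alb-1234567890.us-east-1.elb.amazonaws.com.
```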


Thanks for sticking around and going through the blog! Now that you've built the project, you can go ahead and clean up the resources—unless you’re feeling generous enough to pay extra bills at the month’s end.

This took me three days to write. Please like the blog so my suffering feels worth it. 😭

Bye, and thanks again! 🚀😆
