Setting up a multi-cloud deployment pipeline with CircleCI, AWS & Google Cloud
Create hassle-free AWS and Google Cloud deployments with CircleCI
Multi-cloud application deployment strategies offer a host of benefits, such as flexibility, resilience, and avoiding cloud provider lock-in. Spreading workloads across different clouds not only helps cut costs but also lets you pick the best tools and regions to deliver a smoother experience for your users.
This tutorial demonstrates how to set up a multi-cloud architecture with Amazon Web Services (AWS) and Google Cloud. Working with a JavaScript monorepo, you will learn how to deploy a Node.js + MySQL server app to AWS Elastic Container Service (ECS) with the EC2 (Elastic Compute Cloud) launch type, and a React client to Google Cloud Run using CircleCI orbs.
By the end, you will have a fully functional, production-ready pipeline that delivers your app across two major clouds. You will also learn valuable tips to apply to real-world projects.
Prerequisites
To follow this tutorial, here is a checklist of what you will need:
A Docker account to publish images to Docker Hub.
An AWS account with the AWS CLI installed and configured. Follow the instructions on the Authenticating using IAM user credentials for the AWS CLI page to:
Create an IAM user with administrative access (you should avoid using the root user to spin up AWS resources).
Generate two sets of credentials for CLI and CircleCI.
Install the AWS CLI on your machine.
Add your credentials and configure the IAM user profile for CLI use.
A Google Cloud account with billing enabled and the gcloud CLI configured.
A terminal to run bash scripts, with OpenSSL and envsubst installed.
Git installed on your local machine.
An IDE or code editor of your choice.
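Before moving on, you can sanity-check your environment with a quick preflight script. This is a minimal sketch, not part of the tutorial's repository; the tool names are the standard binaries, so adjust the list if yours differ:

```shell
#!/bin/sh
# Preflight check for the CLIs this tutorial relies on.
for tool in aws gcloud docker git openssl envsubst jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "[OK]      $tool"
  else
    echo "[MISSING] $tool -- install it before continuing"
  fi
done
```

Run it once up front; chasing a missing envsubst halfway through a provisioning script is far more painful.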
Preparing the cloud infrastructure
Before getting into the CI/CD pipeline, you need to set up the infrastructure where CircleCI will deploy the client and server apps. Once created, both services will be empty until the first CI/CD pipeline runs. After the pipeline config is ready, you will push your code to trigger CircleCI and automatically update the services. We will kick things off with AWS.
First, make sure you have the application code on your machine. Fork the repository on GitHub and clone it:
git clone <your-fork-url>
# navigate into the project directory
cd cara-store-catalog
Provisioning AWS resources
Inside the deployments/ directory, you will find two main folders, ecs/ and cloud-run/. Within them are scripts and configuration files to automate the process of provisioning resources on their respective cloud platforms.
One such file is ecs/scripts/export-env.template.sh, which exports the custom environment variables required for setup. Rename this file to export-env.sh and fill in the following values:
export AWS_PROFILE=
export DOCKERHUB_USERNAME=
export MYSQL_PASSWORD=
export MYSQL_ROOT_PASSWORD=
Next, rename .env.circleci.template to .env.circleci. This file contains a list of all the environment variables you’ll need to add to CircleCI. The setup script will auto-populate some of them (by executing set-circleci-env.sh), so all you have to do is copy and paste into CircleCI when needed.
Now, you can run the script to create the deployment resources. The script assumes your AWS account already has a default Virtual Private Cloud (VPC) configured with networking components like subnets and an internet gateway.
From the project’s root directory, enter these commands:
# add execution permissions
chmod +x deployments/ecs/scripts/export-env.sh deployments/ecs/scripts/set-circleci-env.sh deployments/ecs/scripts/setup-ecs.sh
# run the script
source deployments/ecs/scripts/setup-ecs.sh
Here’s an overview of what this does:
Creates an ECS cluster.
Sets up AWS Secrets Manager by:
Substituting the placeholders in templates/ecs-secrets.template.json using the values exported from export-env.sh.
Uploading the env variables in init/ecs-secrets.json.
Creates IAM and Task Execution roles for the EC2 instance.
Creates an application load balancer (ALB) and defines two security groups for the ALB and the ECS instance. This opens up the ALB for public access while it communicates requests to the server app running on the ECS instance.
Generates and uploads a self-signed TLS certificate to allow HTTPS support for the ALB URL. This ensures the client app can communicate with the server securely.
Creates a key pair to allow secure SSH access to the EC2 instance.
Launches an EC2 instance and uploads the database initialization script server/db/init.sql.
Runs set-circleci-env.sh to populate .env.circleci with the available values.
The setup script stops short of registering the task definition and creating the ECS service to keep infrastructure setup separate from deployment. If everything succeeds, you should see this message printed to the console:
[SUCCESS] ECS infrastructure with EC2 launch type successfully created
Service will be launched and deployed via CircleCI
After the script completes, open the ECS and EC2 dashboards in your AWS account to confirm the creation of resources such as the load balancer, target group, and ECS cluster.
Inside the scripts/ folder are a few other scripts like pause-ecs.sh and resume-ecs.sh. Use these to suspend or restart the ECS service temporarily to save costs.
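The pause and resume scripts themselves aren't shown in this tutorial, but the core idea is simple: "pausing" an ECS service means setting its desired task count to zero. A hedged sketch of what they likely boil down to (the real scripts may do more, such as stopping the EC2 instance itself, which is where most of the cost lives):

```shell
# Sketch only: set_desired_count is our own helper, not part of the repo.
# Cluster/service names follow the carastore prefix convention used later.
MY_APP_PREFIX="${MY_APP_PREFIX:-carastore}"

set_desired_count() {
  aws ecs update-service \
    --cluster "${MY_APP_PREFIX}-cluster" \
    --service "${MY_APP_PREFIX}-service" \
    --desired-count "$1" >/dev/null \
    && echo "${MY_APP_PREFIX}-service desired count set to $1"
}

# set_desired_count 0   # pause: drain all running tasks
# set_desired_count 1   # resume: bring the service back up
```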
Provisioning Google Cloud resources
You might be wondering why the React app is Dockerized instead of hosted as a static website on Google Cloud Storage. While static hosting does offer simplicity, using Docker provides a unified way to manage the client app across environments. It allows you to spin up multiple instances of your app, enabling easy testing, scaling, and consistent behavior across deployments.
The Google Cloud Run setup files are in the deployments/cloud-run/ directory. Rename the cloud-run/scripts/export-env.template.sh file to export-env.sh and update these values from your Google Cloud account:
export GCP_REGION=
export BILLING_ACCOUNT_NAME=""
Just like with AWS, run the setup script for Google Cloud Run from the project’s root directory:
# add execution permissions
chmod +x deployments/cloud-run/scripts/export-env.sh deployments/cloud-run/scripts/set-circleci-env.sh deployments/cloud-run/scripts/setup-cloud-run.sh
# run the script
source deployments/cloud-run/scripts/setup-cloud-run.sh
What this script does:
Creates a new project on Google Cloud.
Links the project to the specified billing account.
Enables all required APIs.
Creates a runtime service account for the Cloud Run instance and a deployer service account for CircleCI.
Creates a JSON key for CircleCI to authenticate with Google Cloud.
Runs the Cloud Run set-circleci-env.sh script to populate values into .env.circleci.
A successful setup should print a message like this to the console:
[SUCCESS] Cloud Run service environment prepared:
Service: carastore-client
App will be deployed via CircleCI
With both AWS and Google Cloud configured, this concludes the infrastructure setup portion of this tutorial. Next up is the CI/CD pipeline.
Configuring the CI/CD pipeline for deployment
Since we’re working with a monorepo, the best approach is to use CircleCI’s dynamic configuration, which lets you run different workflows depending on which files changed. If you are unfamiliar with the concept, check out my tutorial on dynamic configuration with CircleCI.
In the project’s root directory, create the following folder and files:
mkdir .circleci
cd .circleci
touch config.yml continue_config.yml
The setup file
Add this code to config.yml:
version: 2.1
setup: true
orbs:
path-filtering: circleci/path-filtering@2.0.1
workflows:
filter-path:
jobs:
- path-filtering/filter:
name: detect-modified-directories
mapping: |
server/.* run-server-jobs true
client/.* run-client-jobs true
.circleci/.* run-server-jobs true
.circleci/.* run-client-jobs true
deployments/ecs/.* run-task-job true
base-revision: main
This configuration uses the filter job from the path-filtering orb to detect changes and set these pipeline parameters:
run-server-jobs: triggers the server workflow when server/ files change.
run-client-jobs: triggers the client workflow when client/ files change.
run-task-job: triggers the server deployment job within the server workflow when changes occur in the deployments/ecs/ folder. This ensures CircleCI updates the task definition and the ECS service accordingly.
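To build an intuition for the mapping, here is the same logic approximated in plain shell: each changed file is tested against a pattern, and a match flips the corresponding parameter to true. (The orb matches regexes against the diff with the base revision; shell globs are close enough for illustration.)

```shell
# Illustration only; the path-filtering orb does this for you in CI.
changed_files="server/app.js client/src/App.jsx README.md"
run_server_jobs=false
run_client_jobs=false

for f in $changed_files; do
  case "$f" in
    server/*|.circleci/*) run_server_jobs=true ;;
  esac
  case "$f" in
    client/*|.circleci/*) run_client_jobs=true ;;
  esac
done

echo "run-server-jobs=$run_server_jobs run-client-jobs=$run_client_jobs"
# prints: run-server-jobs=true run-client-jobs=true
```

Note that README.md matches no row, so a docs-only commit would trigger neither workflow.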
Continuation configuration
After setting the pipeline parameters, CircleCI automatically executes the continuation config file and passes those pipeline values to it.
Open continue_config.yml and add this:
version: 2.1
parameters:
run-server-jobs:
type: boolean
default: false
run-client-jobs:
type: boolean
default: false
run-task-job:
type: boolean
default: false
orbs:
gcp-cli: circleci/gcp-cli@3.2.2
aws-ecs: circleci/aws-ecs@7.1.0
aws-cli: circleci/aws-cli@5.4.1
executors:
node-exec:
docker:
- image: node:22.17-alpine3.22
base-exec:
docker:
- image: cimg/base:current-24.04
commands:
installdeps:
description: "Install dependencies"
parameters:
directory:
type: string
steps:
- checkout:
path: ~/project
- restore_cache:
keys:
- v1-<< parameters.directory >>-deps-{{ checksum "package.json" }}-{{ checksum "package-lock.json" }}
- v1-<< parameters.directory >>-deps-{{ checksum "package.json" }}
- v1-<< parameters.directory >>-deps-
- run:
name: Install dependencies
command: npm ci
- save_cache:
key: v1-<< parameters.directory >>-deps-{{ checksum "package.json" }}-{{ checksum "package-lock.json" }}
paths:
- node_modules
get-image-tag:
description: "Retrieve tag for Docker image"
parameters:
directory:
type: string
steps:
- run:
name: Export version from package.json
command: |
IMAGE_TAG=$(jq -r '.version' ~/project/<< parameters.directory >>/package.json)
echo "export IMAGE_TAG=$IMAGE_TAG" >> $BASH_ENV
echo "IMAGE_TAG: $IMAGE_TAG"
source $BASH_ENV
The bulk of this code declares reusable elements to avoid unnecessary code duplication:
Default values for the pipeline parameters.
CircleCI orbs to simplify the deployment setup.
Executors to define the execution environment for the steps in each job.
Commands to run across multiple jobs:
installdeps installs npm dependencies, stores them in a cache, and restores them in subsequent builds.
get-image-tag dynamically retrieves the tag number from each app's package.json file.
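The get-image-tag command relies on jq being available in the executor image. If it isn't, the same field can be pulled with sed; a minimal demonstration against a throwaway package.json:

```shell
# jq-free fallback for extracting the version field (assumes the
# conventional one-line "version": "x.y.z" formatting of package.json).
cat > /tmp/package.json <<'EOF'
{ "name": "demo-app", "version": "1.4.2" }
EOF

IMAGE_TAG=$(sed -n 's/.*"version": *"\([^"]*\)".*/\1/p' /tmp/package.json)
echo "IMAGE_TAG: $IMAGE_TAG"
# prints: IMAGE_TAG: 1.4.2
```

jq is still the more robust choice, since sed will misbehave on unconventionally formatted JSON.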
Start defining the jobs for the pipeline by adding the test jobs:
jobs:
test-client:
executor: node-exec
working_directory: ~/project/client
steps:
- installdeps:
directory: client
- run:
name: Run client tests
command: npm test
test-server:
executor: base-exec
working_directory: ~/project/server
steps:
- checkout:
path: ~/project
- setup_remote_docker:
docker_layer_caching: true
- get-image-tag:
directory: server
- run:
name: Spin up containers and run server tests
command: |
docker images
docker compose -f compose.yaml -f compose.cicd.yaml up --build -d
docker exec -it --user root nodejs-server-prod npm install
docker exec -it nodejs-server-prod npm test
- run:
name: Stop and remove containers
command: docker compose down -v
test-client uses the node-exec executor to maintain consistency with the Node.js version defined in the client app’s Dockerfile. It invokes the reusable installdeps command and then calls the test command specified in the client app’s package.json file. The tests must be run directly on the source code since they can’t run effectively on a static Dockerized build served by NGINX.
The test-server job’s structure is a little different. It uses the base-exec executor, which is CircleCI’s official Ubuntu Docker image. Here, tests can run on the server app’s Docker image, so the job spins up a container, installs dependencies as the root user, and runs tests inside it. The compose.cicd.yaml file is a Docker Compose merge file which specifies the production configuration. In contrast, the compose.override.yaml file in the server/ folder contains the development configuration.
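For context, a Compose merge file usually overrides only what differs per environment. A hypothetical sketch of what compose.cicd.yaml might look like — the service key and settings here are assumptions; only the container name and the prod build target appear elsewhere in this tutorial:

```yaml
# Hypothetical merge file: layered over the base file in CI via
# `docker compose -f compose.yaml -f compose.cicd.yaml up`.
services:
  server:                      # service key assumed; must match compose.yaml
    container_name: nodejs-server-prod
    build:
      target: prod             # build the production stage of the Dockerfile
    environment:
      NODE_ENV: production
```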
Update continue_config.yml with the build job. Nest it under the same jobs key as test-client and test-server above:
build-docker-image:
description: Build and publish << parameters.service >> Docker image
parameters:
service:
type: string
build_context:
type: string
executor: base-exec
working_directory: ~/project/<< parameters.service >>
steps:
- checkout:
path: ~/project
- setup_remote_docker:
docker_layer_caching: true
- get-image-tag:
directory: << parameters.service >>
- run:
name: Build << parameters.service >> image
command: |
docker build \
-t $DOCKERHUB_USERNAME/carastore-<< parameters.service >>:$IMAGE_TAG \
-t $DOCKERHUB_USERNAME/carastore-<< parameters.service >>:latest \
<< parameters.build_context >>
- run:
name: Authenticate and push image to Docker Hub
command: |
echo "$DOCKERHUB_PASSWORD" | docker login -u $DOCKERHUB_USERNAME --password-stdin
docker push -a $DOCKERHUB_USERNAME/carastore-<< parameters.service >>
Both client and server apps share a similar build process, so the build-docker-image job takes on a parameterized structure that makes it reusable across workflows.
Add the client deployment job:
deploy-client:
executor: gcp-cli/default
steps:
- checkout
- get-image-tag:
directory: client
- gcp-cli/setup:
gcloud_service_key: GCLOUD_SERVICE_KEY
google_compute_region: GCP_REGION
google_project_id: GOOGLE_PROJECT_ID
- run:
name: Deploy to Google Cloud Run
command: |
IMAGE="docker.io/$DOCKERHUB_USERNAME/carastore-client:$IMAGE_TAG"
gcloud run deploy "$SERVICE_NAME" \
--image "$IMAGE" \
--service-account "$RUNTIME_SA_EMAIL" \
--allow-unauthenticated \
--region "$GCP_REGION" \
--platform managed \
--port $CLIENT_APP_PORT \
--cpu 1 \
--memory 512Mi \
--min-instances 0 \
--max-instances 1 \
--set-env-vars VITE_API_URL=$VITE_API_URL
- run:
name: Verify deployment success
command: |
URL=$(gcloud run services describe "$SERVICE_NAME" \
--region "$GCP_REGION" \
--format='value(status.url)')
echo "[INFO] Service URL: $URL"
for i in {1..30}; do
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$URL")
if [ "$STATUS" -eq 200 ]; then
echo "[SUCCESS] Deployment verified"
exit 0
fi
echo "[WARN] Got $STATUS, retrying in 10s... ($i/30)"
sleep 10
done
echo "[ERROR] Deployment verification timed out after 5 minutes"
exit 1
Here, the previously defined gcp-cli orb installs and configures the gcloud CLI. You already set up the infrastructure by running the setup-cloud-run.sh script, so this job handles the client app deployment to Google Cloud Run. The final step confirms whether the deployment was successful.
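The verification step is a general pattern worth reusing: poll a check until it passes or a deadline expires. A generalized sketch — wait_until is our own helper, not a CircleCI or gcloud feature:

```shell
# wait_until ATTEMPTS DELAY COMMAND...: retry COMMAND until it succeeds.
wait_until() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "[SUCCESS] check passed on attempt $i"
      return 0
    fi
    echo "[WARN] attempt $i/$attempts failed, retrying in ${delay}s..."
    sleep "$delay"
    i=$((i + 1))
  done
  echo "[ERROR] check did not pass after $attempts attempts"
  return 1
}

# Example: wait for a flag file to exist (passes on the first attempt).
touch /tmp/deployed.flag
wait_until 3 1 test -f /tmp/deployed.flag
# prints: [SUCCESS] check passed on attempt 1
```

In the deploy-client job, the check is a curl against the Cloud Run URL; here it can be any command with a meaningful exit status.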
Next, add the server deployment job:
deploy-server:
executor: base-exec
working_directory: ~/project/deployments/ecs
steps:
- checkout:
path: ~/project
- get-image-tag:
directory: server
- run:
name: Substitute env placeholders in task definition
command: |
envsubst < templates/task-definition.template.json > task-definition.json
- aws-cli/setup:
aws_access_key_id: $AWS_ACCESS_KEY_ID
aws_secret_access_key: $AWS_SECRET_ACCESS_KEY
region: $AWS_REGION
- aws-ecs/update_task_definition_from_json:
region: $AWS_REGION
task_definition_json: task-definition.json
- aws-ecs/update_service:
region: $AWS_REGION
family: "$MY_APP_PREFIX-server"
service_name: "$MY_APP_PREFIX-service"
cluster: "$MY_APP_PREFIX-cluster"
create_service: true
desired_count: "1"
container_name: "nodejs-server"
container_port: "5000"
target_group: $TG_ARN
skip_task_definition_registration: true
- aws-ecs/verify_revision_is_deployed:
region: $AWS_REGION
family: "$MY_APP_PREFIX-server"
service_name: "$MY_APP_PREFIX-service"
cluster: "$MY_APP_PREFIX-cluster"
task_definition_arn: $CCI_ORB_AWS_ECS_REGISTERED_TASK_DFN
max_poll_attempts: 20
deployment-coordinator:
type: no-op
There are two CircleCI orbs working hand-in-hand here to execute the server deployment: aws-cli to install and configure the AWS CLI, and aws-ecs to manage the core deployment process.
This job can be summarized in four major steps:
envsubst replaces placeholders in task-definition.template.json with values from CircleCI project environment variables and outputs a valid JSON task definition.
The in-built aws-ecs/update_task_definition_from_json command registers a new task definition with this file.
aws-ecs/update_service handles the core service deployment with the specified parameters.
And finally, aws-ecs/verify_revision_is_deployed confirms if the rollout was successful.
The deployment-coordinator job is a no-op that performs no action and consumes no credits. Its purpose is to ensure the server app gets re-deployed if only the task definition changes.
This will become clearer after adding the final piece of the config, workflows:
workflows:
test-build-and-deploy-client:
when: << pipeline.parameters.run-client-jobs >>
jobs:
- test-client
- build-docker-image:
name: build-client-image
service: client
build_context: .
requires:
- test-client
- deploy-client:
requires:
- build-client-image
build-test-and-deploy-server:
when:
or: [<< pipeline.parameters.run-server-jobs >>, << pipeline.parameters.run-task-job >>]
jobs:
- build-docker-image:
name: build-server-image
service: server
build_context: --target prod .
filters:
pipeline.parameters.run-server-jobs
- test-server:
requires:
- build-server-image
filters:
pipeline.parameters.run-server-jobs
- deployment-coordinator
- deploy-server:
requires:
- test-server
- deployment-coordinator
Workflows determine how (and if) the outlined jobs should run. In both of these, CircleCI will only trigger the listed jobs “when” the specified pipeline parameters evaluate to true.
A brief explanation of what is going on in the build-test-and-deploy-server workflow:
The logical or operator means the workflow runs when either one or both pipeline parameters are true.
If both parameters are true, or only run-server-jobs is, it runs all jobs.
The filters rule skips the build-docker-image and test-server jobs if only run-task-job is true. In that case, the no-op deployment-coordinator ensures deploy-server still runs, because it remains a dependency even when test-server gets filtered out. Without it, deploy-server would have no valid dependency, and CircleCI would skip it.
The requires key stalls execution until the dependent job attains the default success status.
While the typical flow is build → test → deploy in most CI/CD pipelines, the client and server workflows diverge slightly in practice:
Client workflow runs tests before the Docker build, since tests cannot run meaningfully on a static image.
Server workflow runs the Docker build before tests, since tests depend on the built production image.
So, while technical constraints mean a disparity in order, the overall progression is the same: validate and package, then deploy.
Running your deployment pipeline on CircleCI
To connect your project to CircleCI, start by following the instructions on the Set up a project page. CircleCI will prompt you with a few options. For now, select Commit a starter CI pipeline to a new branch. This will immediately trigger a successful test pipeline on a new branch, confirming CircleCI has connected your project correctly.
Next, open the .env.circleci file you renamed earlier. It contains a list of all the environment variables required for CircleCI and should have the following values pre-populated:
ACCOUNT_ID
AWS_REGION
GCP_REGION
RUNTIME_SA_EMAIL
TG_ARN
VITE_API_URL
Follow the instructions on the Set an environment variable page to add them all to your project:
ACCOUNT_ID= # auto-generated by set-circleci-env.sh
AWS_ACCESS_KEY_ID= # CircleCI credentials you generated earlier
AWS_REGION= # auto-generated by set-circleci-env.sh
AWS_SECRET_ACCESS_KEY= # CircleCI credentials you generated earlier
CLIENT_APP_PORT=8080
DOCKERHUB_USERNAME= # your docker hub username
DOCKERHUB_PASSWORD= # your docker hub password
GCLOUD_SERVICE_KEY= # copy the contents of deployments/cloud-run/keys/circleci-deployer-$GCP_PROJECT_ID.json
GCP_REGION= # auto-generated by set-circleci-env.sh
GOOGLE_PROJECT_ID=carastore-client-prod
MYSQL_DATABASE=carastore_catalog
MYSQL_HOST=mysql-db
MYSQL_PASSWORD= # same value you used in ecs/scripts/export-env.sh
MYSQL_ROOT_PASSWORD= # same value you used in ecs/scripts/export-env.sh
MYSQL_USER=carastore_admin
MY_APP_PREFIX=carastore
PORT=5000
RUNTIME_SA_EMAIL= # auto-generated by set-circleci-env.sh
SERVICE_NAME=carastore-client
SM_SECRET_NAME=/carastore/server/env
TG_ARN= # auto-generated by set-circleci-env.sh
VITE_API_URL= # auto-generated by set-circleci-env.sh
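If pasting twenty variables into the UI feels tedious, the CircleCI v2 API can create project environment variables for you. A hedged sketch: CIRCLE_TOKEN (a personal API token) and PROJECT_SLUG (something like gh/<your-username>/cara-store-catalog) are assumptions you must supply yourself.

```shell
# push_env_var NAME VALUE: create one project env var via the v2 API.
push_env_var() {
  curl -s -X POST \
    -H "Circle-Token: $CIRCLE_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"name\":\"$1\",\"value\":\"$2\"}" \
    "https://circleci.com/api/v2/project/$PROJECT_SLUG/envvar"
}

# Upload every NAME=VALUE pair from the generated file:
# while IFS='=' read -r name value; do
#   [ -n "$name" ] && push_env_var "$name" "$value"
# done < .env.circleci
```

Either way, never commit .env.circleci itself; it holds live credentials.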
With the variables set, commit and push your config files to GitHub:
git add .
git commit -m "<add-a-commit-message-here>"
git push -u origin main
CircleCI will detect the branch automatically and run the defined workflows. Go to the CircleCI web app, select Pipelines in the sidebar, and you’ll see your workflows progress from build → test → deploy:

And with that, you have a working deployment pipeline live on CircleCI.
Testing your app in production
Test the server app by copying the value of VITE_API_URL from your .env.circleci file and pasting it into a browser. You may see a warning screen like this telling you the connection is not secure:

This happens because the app uses a self-signed TLS certificate so the client can communicate with the server securely. While this is fine for a practice project, in the real world you would provision a valid AWS Certificate Manager (ACM) certificate issued for a domain you own.
Click the link to continue, and you should be directed to a screen with the message:
“Hello from the server!”
To test the React app:
Navigate to the Cloud Run section in the Google Cloud console.
Select the service named carastore-client, and you should see the URL displayed.
Click on it to see the app in production.
You will see a message saying no products are available.
Click Add Product in the top right corner, enter a test value in each text box, and submit. The app should redirect you to the home page, where you’ll see the product details. You can edit or delete it for further testing.
This confirms your client app is successfully communicating with the server.
Clean up and best practices
Run the cleanup scripts to avoid incurring unnecessary charges.
For AWS:
# add execution permissions
chmod +x deployments/ecs/scripts/cleanup-ecs.sh
# run the script
source deployments/ecs/scripts/cleanup-ecs.sh
For Google Cloud Run:
# add execution permissions
chmod +x deployments/cloud-run/scripts/cleanup-cloud-run.sh
# run the script
source deployments/cloud-run/scripts/cleanup-cloud-run.sh
Conclusion
Maintaining a multi-cloud deployment strategy in the real world adds complexity, especially in terms of environment drift. Over time, production and staging environments may diverge due to resource limits or networking behavior. Here are a couple of helpful tips to mitigate this:
Implement Infrastructure as Code (IaC) with tools like Terraform or Pulumi to maintain parity.
Careful monitoring and observability to catch inconsistencies.
As next steps, you should consider:
Using CircleCI contexts to scope cloud credentials to environments, e.g., staging vs. production.
Maintaining cloud-agnostic logic and avoiding hard-coding cloud-specific behavior into your app.
Exporting logs to a centralized dashboard like Datadog or Grafana.
This CircleCI blog post, CI/CD for multi-cloud: Automate and unify deployments across providers, highlights key strategies for dealing with the increasing complexities of a robust multi-cloud architecture. I highly recommend it for a deeper dive into managing deployments across different providers.
I really enjoyed putting this together, and I hope it was just as fun for you to follow along. You can check out the full CircleCI configuration on the project’s multi_cloud branch.
