For a school project, I was assigned the task of building a CI/CD pipeline that would first deliver the project to a remote VPS (a DigitalOcean droplet), giving us a staging environment where the rest of the team can follow the progress of the application.

So here is a quick walkthrough of how I made it happen using Docker and GHCR.

Building a working Docker image

The project’s repository is a Node.js backend application that will be dockerized along with its PostgreSQL database. To make this happen, we first need to create a simple Dockerfile at the project root.

This simple Dockerfile builds an image based on node:20, copies the application’s source into it, installs its dependencies, and exposes port 3000.

FROM node:20
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
# Start the application with the entrypoint script
ENTRYPOINT ["./entrypoint.sh"]

Don’t forget the entrypoint.sh script

#!/bin/sh
# entrypoint.sh (the shebang must stay on the first line)
# Run database migrations
echo "Running migrations..."
npm run db:migrate
# Start the application
echo "Starting the app..."
npm run dev
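
Optionally (this is not part of the original setup, just a common companion to COPY . .), a small .dockerignore keeps local artifacts such as node_modules out of the build context:

# Create a minimal .dockerignore at the project root
cat > .dockerignore <<'EOF'
node_modules
npm-debug.log
.git
EOF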

We can check that the image builds successfully by running docker build from the project root, as shown below.
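
For example (listing the images afterwards is just a quick way to confirm the build landed locally):

docker build -t aloom-back-image .
# The freshly built image should now appear in the local image list
docker images | grep aloom-back-image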

Building a container based on the Dockerfile image is fine, but what we really want for a maintainable CI/CD setup is a single declarative file that organizes all the services our project needs. For that, we will use the magic of a Compose file!

version: "3.8"

services:
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=yourpostgresuser
      - POSTGRES_PASSWORD=yourpostgrespassword
      - POSTGRES_DB=yourpostgresdb
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  api:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - NODE_ENV=staging
      # DB_HOST must match the db service name above so Compose's DNS can resolve it
      - DB_HOST=db
      - DB_PORT=5432
      # These must match the POSTGRES_* values defined on the db service
      - DB_USER=yourpostgresuser
      - DB_PASSWORD=yourpostgrespassword
      - DB_NAME=yourpostgresdb
    depends_on:
      - db
    ports:
      - "3000:3000"
    volumes:
      - ./data:/usr/src/data/app

volumes:
  postgres-data:
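
Before wiring this into CI, it is worth bringing the stack up locally to check that the API and the database can talk to each other (a quick sketch, run from the project root):

# Build and start both services in the background
docker-compose up -d --build
# Check their status and tail the API logs
docker-compose ps
docker-compose logs -f api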

Pushing the Docker image to GitHub Container Registry

Note that any container registry would work (DigitalOcean’s, Docker Hub’s, etc.); we are using GHCR here because it is free for students.

We now have a working Docker image for the project; the next step is to push it to GHCR.

First, make sure you have a GitHub personal access token (PAT) at your disposal; if not, follow the GitHub documentation on personal access tokens (if you can, scoping it down to just the packages permissions it needs, such as write:packages, is preferable).
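
You can sanity-check the token locally before wiring it into the workflow (assuming the token is exported as GH_PAT in your shell and guigzouz is your GitHub username):

# Log in to GHCR with the PAT; you should see "Login Succeeded"
echo $GH_PAT | docker login ghcr.io -u guigzouz --password-stdin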

Then declare, in a GitHub workflow, a CI/CD pipeline that logs in to GHCR and pushes an image built from your current changes.

name: CI/CD for aloom-back
on:
  push:
    branches:
      - main

jobs:
  build_and_publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Log in to GitHub Container Registry
      - name: Log in to GitHub Container Registry
        run: |
          echo ${{ secrets.GH_PAT }} | docker login ghcr.io -u guigzouz --password-stdin

      # Build and push the API image
      - name: Build and push the API image
        run: |
          docker build . -f Dockerfile --tag ghcr.io/guigzouz/aloom-back-api:latest
          docker push ghcr.io/guigzouz/aloom-back-api:latest

If everything is set up correctly, once this workflow has run you will find the Docker image you just built under Your Profile > Packages.
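
You can also confirm from any machine that the image is pullable (you may need to docker login to ghcr.io first if the package is private, which is the default):

docker pull ghcr.io/guigzouz/aloom-back-api:latest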

Pulling the image and running it on your VPS (DigitalOcean Droplet)

If you are following this article on DigitalOcean, make sure you have a droplet running with Docker installed on it: see How to create a droplet on DigitalOcean (and claim your 200€ free credits with the GitHub Education pack).

Still in the same workflow, we will now define the “Continuous Delivery (CD)” part of CI/CD: the idea is to connect to the VPS over SSH, pull the image from GHCR, and run it with a docker-compose file present on the droplet. Note that for the droplet to actually use the pulled image, its compose file should reference image: ghcr.io/guigzouz/aloom-back-api:latest for the api service instead of the build: block; otherwise docker-compose would try to rebuild the API from local sources.

For that, you will need to add some credentials as GitHub repository secrets so that the workflow can reach the VPS and, once connected, log in to GHCR and pull the Docker image. Those secrets are also useful for passing your database credentials securely. You can manage them under GitHub Settings > Secrets & Variables > Actions > Repository secrets. A key-pair setup for the VPS_SSH_KEY secret is sketched below.
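
The VPS_IP and VPS_USER secrets are simply the droplet’s IP address and SSH user; for VPS_SSH_KEY, here is a rough sketch of one way to produce it (the aloom_deploy file name and the user/IP placeholders are just examples):

# Generate a dedicated key pair for deployments (no passphrase so the action can use it)
ssh-keygen -t ed25519 -f ~/.ssh/aloom_deploy -N ""
# Authorize the public key on the droplet (replace user/IP with your own)
ssh-copy-id -i ~/.ssh/aloom_deploy.pub your_vps_user@your_vps_ip
# Then paste the PRIVATE key into the VPS_SSH_KEY repository secret
cat ~/.ssh/aloom_deploy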

Reminder: put the build-and-deploy.yml file in the .github/workflows folder so that GitHub detects your workflow and triggers it when you push to main.

  # This job goes under the same jobs: key as build_and_publish
  deploy_to_vps:
    needs: build_and_publish
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      # Log in to the Droplet and deploy using Docker Compose
      - name: Deploy to Droplet using Docker Compose
        uses: appleboy/ssh-action@v0.1.9
        with:
          host: ${{ secrets.VPS_IP }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            # Log in to GHCR on the Droplet
            echo ${{ secrets.GH_PAT }} | docker login ghcr.io -u guigzouz --password-stdin

            # Pull the latest API image (postgres:16 is pulled from Docker Hub by Compose)
            docker pull ghcr.io/guigzouz/aloom-back-api:latest

            # Redeploy using Docker Compose
            # (assumes the compose file is present in this user's home directory)
            docker-compose down
            docker-compose up -d

            # Optional: clean up unused images
            docker image prune -f
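
Once the workflow has finished, you can also SSH into the droplet yourself for a quick sanity check (assuming the API really answers on its exposed port 3000):

# On the droplet: both containers should be up
docker ps
# The API should respond on the port exposed by the compose file
curl -i http://localhost:3000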

Feel free to now:

git add .
git commit -m 'initial commit'
git push origin main

And watch your pipeline run in the Actions tab of your project’s repository.
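
If you prefer the terminal, the GitHub CLI can follow the run from your machine (assuming gh is installed and authenticated):

# List recent workflow runs and follow one of them live
gh run list --limit 5
gh run watch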

What next?

So right now your API is running dockerized on your VPS and exposing port 3000. That is fine for a small staging environment like this one, but if you ever want to apply this approach to a production build, you will need to set up a reverse proxy. I advise setting up NGINX and getting an SSL certificate for your project, roughly as sketched below.
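
As a rough sketch of that last step (assuming an Ubuntu droplet and a domain name already pointing at it, neither of which is covered by this article):

# Install NGINX and certbot on the droplet
sudo apt update && sudo apt install -y nginx certbot python3-certbot-nginx
# Add an NGINX server block that proxies your domain to http://localhost:3000,
# then request and install a Let's Encrypt certificate for it
sudo certbot --nginx -d your.domain.example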