The Problem Nobody Admits to Having
A dev team we inherited during a client onboarding was doing manual deployments via FTP. In 2025. One guy held all the credentials. He’d SSH into the box, pull from GitHub by hand, cross his fingers, and hope nothing broke. When it did — and it did — there was no audit trail, no rollback plan, and no sleep that night.
That client came to us after a botched deploy took their e-commerce platform down for six hours on a Friday afternoon. The fix wasn’t complicated: CI/CD automation with GitHub Actions — something they had access to all along and never touched.
Why Manual Deployments Are a Liability
Manual processes don’t scale. They also don’t repeat consistently. One person runs the deploy differently than another. Steps get skipped. Tests don’t run. Security scans? Never happened once in that client’s history.
The other problem: you can’t audit what you can’t see. With a documented, version-controlled pipeline, every change, every decision, every step is logged. That matters when something goes sideways — and it will.
My opinion: if your deploy process lives only in someone’s head, you don’t have a deploy process. You have a ritual and a prayer.
GitHub Actions: What It Actually Is
GitHub Actions is an automation platform baked directly into GitHub. No separate CI server to spin up, no third-party integration to babysit. You write YAML, push it to your repo, and GitHub runs your pipeline.
That YAML lives in .github/workflows/ in your repository. Every workflow file defines what happens, when it happens, and where it runs. This is Pipeline as Code — the right way to handle deployments in 2025.
The big win: your pipeline gets the same treatment as your application code. Version controlled. Peer reviewed. Auditable. If someone changes the deployment process, it shows up in a pull request like everything else. No more shadow changes made at 11PM by the one guy who knows the root password.
CI vs CD: Get the Distinction Right
People smash these together constantly. They’re related but not the same thing.
Continuous Integration (CI) is the practice of merging code changes frequently — multiple times a day — into a shared branch, with automated tests running on every push. The goal is catching breaks early before they compound into something ugly.
Continuous Delivery (CD) means your code is always in a deployable state. You’ve automated the build and testing pipeline to the point where deploying to production is one button push — but a human still pushes that button.
Continuous Deployment removes the human entirely. Every change that passes your pipeline ships to production automatically. That’s powerful. It’s also terrifying if your test coverage is weak.
For most of the production environments we manage, Continuous Delivery is the right call. Automatic deploys to dev and staging, manual approval gate before production. That balance gives you speed without the 3AM wake-up calls.
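In workflow terms, that approval gate maps onto GitHub's deployment environments. A minimal sketch (the environment and script names here are assumptions; the required-reviewers rule itself is configured under the repo's Settings → Environments, not in YAML):

```yaml
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    environment: staging      # no protection rules: deploys automatically
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh staging

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production   # configured with required reviewers, so this
                              # job pauses until a human approves the run
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh production
```

The pipeline stays fully automated up to the production boundary; the only manual act left is clicking "Approve" on the pending deployment.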
Key Components You Need to Know
Workflows
A workflow is the top-level unit in GitHub Actions. It’s a YAML file defining an automated process made up of one or more jobs. Stored in .github/workflows/. Can be triggered by events, run on a schedule via cron, or kicked off manually from the GitHub UI.
Events
Events are what trigger your workflows. Push to main? That’s an event. Open a pull request? Event. Create a release tag? Event. Every automated workflow needs at least one trigger event — without it, nothing runs.
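The common triggers look like this at the top of a workflow file (the cron schedule is an illustrative value, not a recommendation):

```yaml
on:
  push:
    branches: [main]      # every push to main
  pull_request:           # every PR opened or updated
  release:
    types: [published]    # a release is published in the GitHub UI
  schedule:
    - cron: '0 6 * * 1'   # Mondays at 06:00 UTC
  workflow_dispatch:      # manual "Run workflow" button in the Actions tab
```

Any combination is valid; most pipelines start with just push and pull_request and grow from there.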
Jobs
Jobs are individual units of work inside a workflow. They run in parallel by default unless you define dependencies. A typical setup: a build job, a test job, a deploy job. You make the deploy job wait on the test job completing successfully before it fires.
Steps, Actions, and Runners
Steps are the individual commands inside a job. An action is a reusable unit of code from the GitHub Marketplace — pre-built steps for checking out code, configuring language runtimes, running Docker builds, and hundreds of other common tasks.
Runners are the boxes that execute your jobs. GitHub provides hosted runners on Ubuntu, Windows, and macOS. For clients with compliance requirements or air-gapped environments, self-hosted runners are the right move.
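Choosing between the two is a one-line change per job. A sketch (the self-hosted labels are examples; you assign them yourself when registering the runner machine):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest              # GitHub-hosted Ubuntu runner
    steps:
      - run: echo "built on a hosted runner"

  compliance-build:
    runs-on: [self-hosted, linux, x64]  # matched against the labels on
                                        # your registered runner machines
    steps:
      - run: echo "built inside the compliance boundary"
```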
A Real Workflow: Build, Test, Deploy
Here’s what a stripped-down CI/CD workflow looks like for a Node.js application. This is close to what we shipped for a mid-market SaaS client during a pipeline rebuild engagement:
```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Run security audit
        run: npm audit --audit-level=high

  deploy:
    needs: build-and-test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Deploy to staging
        run: ./scripts/deploy.sh staging
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}
```
The needs: build-and-test line is not optional. Deploy won’t fire unless the prior job exits clean. That one line is your safety net against shipping broken code.
Security Scanning: Build It In From Day One
We added vulnerability scanning to a SaaS client’s pipeline after their previous vendor delivered code with three critical CVEs baked in. Nobody had been scanning. Not once.
GitHub Actions handles this well. npm audit catches known Node.js vulnerabilities. Tools like Snyk and Trivy go deeper. GitHub’s own Dependabot and code scanning features run automatically once enabled. The point is: the pipeline catches vulnerabilities before they hit production, not after.
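As a sketch, adding a deeper scan is just extra steps in the build job. Trivy publishes an official GitHub Action; the inputs below are typical but worth checking against that action's own documentation, and in practice you'd pin a tagged release rather than a moving ref:

```yaml
      - name: Audit npm dependencies
        run: npm audit --audit-level=high

      - name: Scan the repo with Trivy
        uses: aquasecurity/trivy-action@master  # pin a release tag in real use
        with:
          scan-type: fs          # scan the checked-out filesystem
          scan-ref: .
          severity: CRITICAL,HIGH
          exit-code: 1           # non-zero exit fails the job on findings
```

Because exit-code is 1, any critical or high finding blocks the pipeline — which is exactly the behavior you want.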
This maps directly to the NIST Cybersecurity Framework — specifically the Identify and Protect functions. Automated scanning in your pipeline is one of the cheapest security controls you can implement. And unlike a quarterly manual audit, it runs on every commit.
If you want to understand what attackers actually do inside compromised CI/CD pipelines, the MITRE ATT&CK framework covers CI/CD attack vectors in detail. Worth reviewing before you design your pipeline permissions and runner isolation strategy.
Secrets Management: The Part Everyone Gets Wrong
Don’t hardcode credentials. Ever. GitHub Actions has a built-in secrets store — use it. Secrets are encrypted at rest, masked in workflow logs, and scoped to your repo or org.
Reference them in your workflow with ${{ secrets.YOUR_SECRET_NAME }}. They never appear in plaintext in logs or run output. Simple, effective, zero extra infrastructure.
Caveat: GitHub’s secrets store is convenient but it’s not a full secrets management solution. For clients with strict compliance requirements, we integrate HashiCorp Vault or AWS Secrets Manager instead, pulling secrets at runtime rather than storing them in GitHub. Know the difference before you decide which approach fits your threat model.
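The runtime-pull pattern can be sketched with HashiCorp's official Vault action. Everything below the uses: line is an assumption for illustration — the Vault URL, role name, and secret path are placeholders, and the action version should be checked against its releases:

```yaml
      - name: Pull secrets from Vault at runtime
        uses: hashicorp/vault-action@v3
        with:
          url: https://vault.example.com:8200   # hypothetical Vault address
          method: jwt                           # token-less auth via GitHub's OIDC identity
          role: ci-deploy                       # assumed Vault role name
          secrets: |
            secret/data/ci deployKey | DEPLOY_KEY

      - name: Deploy to staging
        run: ./scripts/deploy.sh staging       # DEPLOY_KEY is now in the environment
```

The difference from the built-in store: the credential never lives in GitHub at all, and access is governed by Vault's own policies and audit log.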
Branching Strategy Matters Here
Your pipeline is only as good as your branching discipline. If everyone’s pushing directly to main, you’re missing half the value of CI.
The pattern that works: feature branches → pull requests → automated CI checks → merge to main → auto-deploy to staging → manual approval → production. This isn’t complicated. It’s just process.
We’ve seen process failures cause more outages than technical failures. The Windows Group Policy incident post-mortem on this blog is a good example of how gaps in change control turn into production emergencies. Same principle applies to deployment pipelines.
What to Monitor After You Ship the Pipeline
GitHub Actions generates detailed logs for every run. Per-step timing, exit codes, full output. Set up failure notifications via email or Slack so your team knows immediately when something breaks.
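A Slack failure notification is a small addition to the end of a job. A sketch using an incoming-webhook URL stored as a secret (the secret name SLACK_WEBHOOK_URL is an assumption):

```yaml
      - name: Notify Slack on failure
        if: failure()   # runs only when an earlier step in this job failed
        run: |
          curl -sf -X POST "$SLACK_WEBHOOK_URL" \
            -H 'Content-Type: application/json' \
            -d '{"text":"Pipeline failed: ${{ github.workflow }} on ${{ github.ref_name }} (run ${{ github.run_id }})"}'
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```

The if: failure() condition is what makes this a failure alert rather than noise on every run.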
Track two numbers: your pipeline failure rate and your mean time to deploy. If your pipeline fails 30% of the time, that’s a flaky test problem — not a deployment problem. Fix it at the source. A pipeline people learn to ignore is worse than no pipeline at all.
That operational mindset — knowing what normal looks like so you can spot abnormal fast — is the same thing that saves you during production incidents. The Nginx 3AM production outage writeup covers exactly that kind of reactive diagnosis if you want the pattern applied to infrastructure.
Start Here, Not There
Don’t try to build the perfect pipeline on day one. You won’t. Start with one thing: automate your tests on pull requests. That single step will catch regressions before they hit main. Then add a staging deploy. Then add security scanning. Build incrementally.
The e-commerce client from the opening? We had a working CI/CD pipeline live within two days of onboarding. No more FTP deploys. No more six-hour Friday outages. And once the dev team had a pipeline running their tests, they actually started writing tests. That’s the compounding effect nobody talks about.
GitHub Actions is available on every GitHub repository. Unlimited for public repos, with a monthly allotment of free minutes for private ones before usage-based billing kicks in. The barrier to entry is a YAML file and a few focused hours. There’s no excuse for deploying by hand in 2025.
Need help designing or fixing a CI/CD pipeline for your production environment? Reach out to the SSE team — this is exactly the kind of work we do across a range of client environments and industries.