How We Added End-to-End Tests to SendRec with Playwright

We shipped SendRec v1.55.0 with 15 Playwright end-to-end tests running against the full stack — Go backend, PostgreSQL, and Garage (S3-compatible storage) — in CI. Here’s how we built it.

Why e2e tests?

SendRec had 415 unit tests across Go and TypeScript. They’re fast and catch regressions in isolated components. But they mock fetch, mock the database, and never touch a real browser. That means they can’t catch:

  • Broken auth flows where the frontend and backend disagree on cookies or redirects
  • Upload failures caused by S3 presigned URL configuration
  • Routing issues where the React SPA and Go server don’t agree on paths
  • Watch page rendering that depends on server-side Go templates

We needed tests that exercise the full stack through a real browser.

Architecture

The e2e setup has three parts:

  1. Docker Compose overlay (docker-compose.e2e.yml) starts the full stack — Go binary, PostgreSQL, and Garage with ephemeral volumes
  2. Playwright runs Chromium on the host, connecting to localhost:8080
  3. Test helpers seed data via API calls and direct database access

Tests run sequentially (workers: 1) because they share database state. A global setup truncates all tables and creates a verified test user before the suite runs.
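The corresponding Playwright configuration is short. This is a sketch, assuming the layout described later (tests under web/e2e/); exact file names are illustrative:

```typescript
import { defineConfig } from "@playwright/test";

export default defineConfig({
  testDir: "./e2e",
  // Tests share database state, so run them one at a time.
  workers: 1,
  // Wipe and reseed the database before the suite, clean up after it.
  globalSetup: "./e2e/global-setup.ts",
  globalTeardown: "./e2e/global-teardown.ts",
  use: {
    baseURL: "http://localhost:8080",
  },
});
```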

Docker Compose for testing

The e2e compose is based on the dev compose with key differences: no persistent volumes, a fixed JWT secret, and S3_PUBLIC_ENDPOINT pointed at localhost:3900 so the browser can reach Garage for presigned URL uploads.

services:
  sendrec:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://sendrec:sendrec@postgres:5432/sendrec?sslmode=disable
      - S3_ENDPOINT=http://garage:3900
      - S3_PUBLIC_ENDPOINT=http://localhost:3900
      - JWT_SECRET=e2e-test-secret
      - BASE_URL=http://localhost:8080
    depends_on:
      postgres:
        condition: service_healthy
      garage-init:
        condition: service_completed_successfully
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:8080/api/health"]
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 15s

  postgres:
    image: postgres:18-alpine
    ports:
      - "5433:5432"
    environment:
      POSTGRES_USER: sendrec
      POSTGRES_PASSWORD: sendrec
      POSTGRES_DB: sendrec
    # depends_on with condition: service_healthy requires a healthcheck here
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U sendrec -d sendrec"]
      interval: 2s
      timeout: 5s
      retries: 15

  garage:
    image: dxflrs/garage:v2.2.0
    ports:
      - "3900:3900"

  garage-init:
    build:
      context: .
      dockerfile: Dockerfile.garage-init
    network_mode: "service:garage"
    depends_on:
      garage:
        condition: service_started

The garage-init container runs once to create the S3 bucket, generate API keys, and configure CORS on the bucket so the browser can PUT files directly.

The sendrec service won’t start until PostgreSQL is healthy and garage-init has completed successfully, which guarantees the database and storage are ready before the app starts.

Test user seeding

SendRec requires email verification before login. In e2e tests we bypass this by registering through the API and then flipping the email_verified flag directly in the database:

export const TEST_USER = {
  name: "E2E Test User",
  email: "e2e@test.sendrec.local",
  password: "TestPassword123!",
};

export async function createVerifiedUser(): Promise<void> {
  const baseURL = process.env.BASE_URL || "http://localhost:8080";

  const resp = await fetch(`${baseURL}/api/auth/register`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(TEST_USER),
  });

  if (resp.status !== 201 && resp.status !== 409) {
    throw new Error(`Registration failed: ${resp.status}`);
  }

  await query("UPDATE users SET email_verified = true WHERE email = $1", [
    TEST_USER.email,
  ]);
}

The 409 check handles re-runs where the user already exists. The database helper connects directly to PostgreSQL via pg:

import pg from "pg";

const DATABASE_URL =
  process.env.DATABASE_URL ||
  "postgres://sendrec:sendrec@localhost:5433/sendrec";

export async function query(sql: string, params?: unknown[]): Promise<void> {
  const client = new pg.Client({ connectionString: DATABASE_URL });
  await client.connect();
  try {
    await client.query(sql, params);
  } finally {
    await client.end();
  }
}

Global setup and teardown

Playwright’s globalSetup runs before any test file. We truncate all tables and seed the test user:

import { truncateAllTables } from "./helpers/db";
import { createVerifiedUser } from "./helpers/auth";

export default async function globalSetup() {
  await truncateAllTables();
  await createVerifiedUser();
}

The truncateAllTables function clears every application table with CASCADE to handle foreign key constraints:

export async function truncateAllTables(): Promise<void> {
  await query(`
    TRUNCATE users, videos, refresh_tokens, password_resets,
             email_confirmations, video_comments, video_views,
             folders, tags, video_tags, notification_preferences,
             api_keys, webhook_deliveries, user_branding,
             cta_clicks, view_milestones, video_viewers
    CASCADE
  `);
}

The globalTeardown does the same cleanup after all tests finish.

The tests

We wrote 15 tests across 5 spec files covering the core user flows.

Authentication (7 tests)

The auth tests cover the happy path and error cases without needing database setup beyond the global seed:

test("login with valid credentials redirects to home", async ({ page }) => {
  await loginViaUI(page);
  await expect(page).toHaveURL("/");
});

test("login with wrong password shows error", async ({ page }) => {
  await page.goto("/login");
  await page.getByLabel("Email").fill(TEST_USER.email);
  await page.getByLabel("Password").fill("wrongpassword");
  await page.getByRole("button", { name: "Sign in" }).click();
  await expect(page.getByText(/invalid/i)).toBeVisible();
});

test("unauthenticated user is redirected to login", async ({ page }) => {
  await page.context().clearCookies();
  await page.goto("/library");
  await expect(page).toHaveURL(/\/login/);
});

The loginViaUI helper fills the form and waits for the redirect:

export async function loginViaUI(page: Page): Promise<void> {
  await page.goto("/login");
  await page.getByLabel("Email").fill(TEST_USER.email);
  await page.getByLabel("Password").fill(TEST_USER.password);
  await page.getByRole("button", { name: "Sign in" }).click();
  await page.waitForURL("/");
}

Upload (3 tests)

The upload test exercises the full flow: select a file, click upload, wait for the S3 presigned URL PUT to complete, and verify the success message. We generated a tiny test fixture with ffmpeg:

ffmpeg -f lavfi -i color=c=blue:s=320x240:d=1 -c:v libvpx -b:v 100k test-video.webm

The test uses setInputFiles to simulate file selection and waits up to 60 seconds for the upload to complete:

test("upload a video file", async ({ page }) => {
  await page.goto("/upload");

  const testVideoPath = join(__dirname, "..", "fixtures", "test-video.webm");
  const fileInput = page.locator('[data-testid="file-input"]');
  await fileInput.setInputFiles(testVideoPath);

  await expect(page.getByText(/1 file/i)).toBeVisible();

  await page.getByRole("button", { name: /upload/i }).click();

  await expect(page.getByText(/upload complete/i)).toBeVisible({
    timeout: 60000,
  });
});

A follow-up test navigates to the library and verifies the uploaded video card appears:

test("uploaded video appears in library", async ({ page }) => {
  await page.goto("/library");
  await expect(page.locator(".video-card").first()).toBeVisible({
    timeout: 15000,
  });
});

Watch page (2 tests)

The watch page test queries the database for a video with status IN ('ready', 'processing') — matching the actual handler query — and skips gracefully if none exists:

test("watch page renders for a valid share token", async ({ page }) => {
  const rows = await queryRows<{ share_token: string }>(
    "SELECT share_token FROM videos WHERE status IN ('ready', 'processing') LIMIT 1"
  );

  test.skip(rows.length === 0, "No video available for watch page test");

  await page.goto(`/watch/${rows[0].share_token}`);
  await expect(page.locator("video")).toBeVisible({ timeout: 10000 });
});

Configuring CORS on Garage

SendRec uploads files directly from the browser to S3 via presigned URLs. In the e2e environment, the browser at localhost:8080 PUTs to Garage at localhost:3900 — a cross-origin request that requires CORS configuration.
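The request in question looks roughly like this from the browser side. This is a sketch, not SendRec's actual upload code; the point is that the PUT itself is the step CORS must allow:

```typescript
// Sketch of the direct-to-storage upload step. The presigned uploadUrl
// points at Garage on localhost:3900 while the page is served from
// localhost:8080, so this PUT is cross-origin and needs the CORS rules
// configured by garage-init.
export async function putToPresignedUrl(
  uploadUrl: string,
  data: Blob
): Promise<void> {
  const resp = await fetch(uploadUrl, { method: "PUT", body: data });
  if (!resp.ok) {
    // A CORS failure never reaches this branch: fetch rejects with an
    // opaque "Failed to fetch" instead, which is why it is hard to debug.
    throw new Error(`Upload failed with status ${resp.status}`);
  }
}
```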

The garage-init container configures CORS on the recordings bucket using the Garage admin API:

BUCKET_ID=$(curl -sf -H "Authorization: Bearer ${ADMIN_TOKEN}" \
  "${ADMIN_URL}/v2/GetBucketInfo?globalAlias=${S3_BUCKET}" | jq -r '.id // empty')

curl -sf -X POST "${ADMIN_URL}/v2/UpdateBucket?id=${BUCKET_ID}" \
  -H "Authorization: Bearer ${ADMIN_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"corsConfig":{"set":[{"allowedOrigins":["*"],"allowedMethods":["GET","PUT","HEAD"],"allowedHeaders":["*"],"exposeHeaders":["ETag"],"maxAgeSeconds":3600}]}}'

We initially tried garage json-api UpdateBucket, but the JSON format was subtly wrong: a "body":{} wrapper around corsConfig that the admin API silently ignored instead of rejecting. Calling the admin REST API directly with curl was more reliable and easier to debug.
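Concretely, the two payload shapes differed like this (reconstructed for illustration from the working curl call above):

```typescript
// The CORS rule itself, matching the curl -d body above.
const corsRule = {
  allowedOrigins: ["*"],
  allowedMethods: ["GET", "PUT", "HEAD"],
  allowedHeaders: ["*"],
  exposeHeaders: ["ETag"],
  maxAgeSeconds: 3600,
};

// What UpdateBucket accepts: corsConfig at the top level of the body.
const accepted = { corsConfig: { set: [corsRule] } };

// What our garage json-api invocation produced: corsConfig nested
// under an extra "body" key, which the admin API silently ignored.
const ignored = { body: { corsConfig: { set: [corsRule] } } };

console.log("corsConfig" in accepted); // true
console.log("corsConfig" in ignored); // false
```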

CI integration

The e2e job runs after unit tests pass. It builds the Docker images, waits for the health check, installs Playwright, runs the tests, and uploads the HTML report as an artifact on failure:

e2e:
  needs: test
  runs-on: ubuntu-latest
  timeout-minutes: 15

  steps:
    - uses: actions/checkout@v6

    - name: Start e2e environment
      run: docker compose -f docker-compose.e2e.yml up --build -d

    - name: Wait for app to be healthy
      run: |
        for i in $(seq 1 60); do
          if curl -sf http://localhost:8080/api/health > /dev/null 2>&1; then
            echo "App is healthy!"
            exit 0
          fi
          sleep 3
        done
        echo "App failed to start"
        docker compose -f docker-compose.e2e.yml logs
        exit 1

    - name: Run e2e tests
      run: cd web && pnpm e2e
      env:
        BASE_URL: http://localhost:8080
        DATABASE_URL: postgres://sendrec:sendrec@localhost:5433/sendrec

    - name: Upload test results
      if: always()
      uses: actions/upload-artifact@v6
      with:
        name: playwright-report
        path: web/playwright-report/

    - name: Stop e2e environment
      if: always()
      run: docker compose -f docker-compose.e2e.yml down -v

The docker compose down -v in the always() step ensures volumes are cleaned up even if tests fail.

Lessons learned

Vitest and Playwright both match *.spec.ts. When we added Playwright spec files under web/e2e/, Vitest tried to run them and crashed. The fix was adding an include pattern to vitest.config.ts that restricts Vitest to src/:

test: {
  include: ["src/**/*.{test,spec}.{ts,tsx}"],
}

getByLabel is fragile for custom label markup. Our auth form uses <label><span>Name</span><input/></label> instead of a for/id pair. Playwright’s getByLabel("Name") worked locally but failed in headless CI. We switched to page.locator('label:has-text("Name") input').

getByDisplayValue doesn’t exist in Playwright. Coming from React Testing Library, we reflexively wrote getByDisplayValue for disabled inputs. Playwright uses page.locator('input[value="..."]') instead.

__dirname doesn’t exist in ESM. Playwright runs tests as ES modules. We replaced __dirname with:

import { fileURLToPath } from "url";
import { dirname, join } from "path";

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

S3 presigned URL CORS is easy to misconfigure silently. The browser’s “Failed to fetch” error gives no indication that CORS is the problem. Adding a debug step to CI that prints garage-init logs saved us hours of guessing.

Try it

The e2e tests run with three commands:

make e2e-up      # Start the full stack
make e2e-test    # Run 15 Playwright tests
make e2e-down    # Tear down
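Under the hood these are thin wrappers; a sketch, assuming the compose file and pnpm script shown earlier:

```makefile
e2e-up:
	docker compose -f docker-compose.e2e.yml up --build -d

e2e-test:
	cd web && pnpm e2e

e2e-down:
	docker compose -f docker-compose.e2e.yml down -v
```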

The test infrastructure is in web/e2e/ and the Docker Compose overlay is in docker-compose.e2e.yml. SendRec is open source under AGPL-3.0 — check it out at github.com/sendrec/sendrec.