Add a changes job that diffs HEAD~1..HEAD and outputs which apps were
affected. Build and deploy jobs consume the output:
- build matrix is restricted to changed apps only — unchanged apps are
never built or pushed
- deploy pulls only rebuilt images and restarts only those containers
Shared triggers (packages/, convex/, package-lock.json, turbo.json) mark
both apps as changed since they affect the full dependency tree.
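A sketch of what the changes job can look like (job and output names are illustrative; Gitea Actions uses the GitHub-compatible expression syntax):

```yaml
changes:
  runs-on: ubuntu-latest
  outputs:
    storefront: ${{ steps.diff.outputs.storefront }}
    admin: ${{ steps.diff.outputs.admin }}
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 2            # HEAD~1 must exist for the diff
    - id: diff
      run: |
        CHANGED=$(git diff --name-only HEAD~1..HEAD)
        # Shared triggers mark both apps as changed
        if echo "$CHANGED" | grep -qE '^(packages/|convex/|package-lock\.json|turbo\.json)'; then
          printf 'storefront=true\nadmin=true\n' >> "$GITHUB_OUTPUT"
        else
          for app in storefront admin; do
            if echo "$CHANGED" | grep -q "^apps/$app/"; then
              echo "$app=true" >> "$GITHUB_OUTPUT"
            else
              echo "$app=false" >> "$GITHUB_OUTPUT"
            fi
          done
        fi
```

Downstream jobs can then gate their matrix entries or steps on `needs.changes.outputs.<app> == 'true'`.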
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Stripe publishable key must be baked into the client bundle. Added ARG/ENV
to storefront Dockerfile and --build-arg in the workflow build step, sourced
from STAGING_NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY secret.
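The Dockerfile half, as a sketch (stage layout assumed):

```dockerfile
# apps/storefront/Dockerfile, builder stage
ARG NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY
ENV NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=$NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY
```

The workflow build step then passes --build-arg NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY with the value of the STAGING_NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY secret.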
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
NEXT_PUBLIC_CLOUDINARY_API_KEY and NEXT_PUBLIC_IMAGE_PROCESSING_API_URL are
NEXT_PUBLIC_* vars that must be baked in at build time — added as ARG/ENV in
admin Dockerfile and as --build-arg in the workflow build step.
CLOUDINARY_API_SECRET is a server-side secret — added to the deploy step's
env block, written to /opt/staging/.env via printf, and exposed to the admin
container via compose.yml environment block.
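The server-side half can be sketched as a compose fragment (service name taken from the admin container mentioned above):

```yaml
# /opt/staging/compose.yml (admin service)
services:
  admin:
    environment:
      # Interpolated from /opt/staging/.env, which sits next to the compose file
      CLOUDINARY_API_SECRET: ${CLOUDINARY_API_SECRET}
```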
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
$HOME in an unquoted heredoc expands on the runner (not the VPS), so the
VPS received the runner's already-expanded path (/root/staging/.env), which
didn't exist there. Using the explicit /opt/staging/.env path (consistent
with compose.yml and the mkdir step) fixes the permission denied error.
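The failure mode can be reproduced locally: with an unquoted heredoc delimiter, the variable is substituted before the text ever leaves the local shell.

```shell
# Unquoted delimiter: $HOME expands here, on the "runner" side.
unquoted=$(cat <<EOF
$HOME/staging/.env
EOF
)

# Quoted delimiter: the text passes through verbatim, so a remote shell
# (the VPS) would be the one to expand $HOME.
quoted=$(cat <<'EOF'
$HOME/staging/.env
EOF
)

echo "unquoted: $unquoted"
echo "quoted:   $quoted"
```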
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add --force-recreate to podman compose up so stale containers are never
reused across deploys when the image tag (staging) is reused
- Inject CLERK_SECRET_KEY and ADMIN_CLERK_SECRET_KEY from Gitea secrets into
~/staging/.env on the VPS via printf (variables expand on the runner before
SSH, so secrets never touch VPS shell history; file gets chmod 600)
- Update compose.yml: storefront gets CLERK_SECRET_KEY, admin gets
CLERK_SECRET_KEY mapped from ADMIN_CLERK_SECRET_KEY
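The printf half can be demonstrated locally (the key value is a placeholder; on the real runner the output is piped over SSH into ~/staging/.env):

```shell
CLERK_SECRET_KEY='sk_test_placeholder'   # would come from a Gitea secret

# Expansion happens here, before anything reaches a remote shell, so the
# secret never appears in the VPS's command line or shell history.
printf 'CLERK_SECRET_KEY=%s\n' "$CLERK_SECRET_KEY" > .env
chmod 600 .env
cat .env
```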
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The compose file was written via a bash << 'COMPOSE' heredoc nested inside
the YAML run: | block scalar. Lines like "name: petloft-staging" at column 0
cause the YAML parser to break out of the block scalar early, making the
entire workflow file invalid YAML — Gitea silently drops invalid workflows,
so no jobs were triggered at all.
Fix: move compose.yml to deploy/staging/compose.yml in the repo, substitute
${REGISTRY} on the runner, base64-encode the result, and decode it on the VPS
inside the SSH session. No inner heredoc needed.
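The substitute-encode-decode flow can be sketched locally (registry value, file names, and the one-line compose fragment are placeholders; on the real runner the decode command runs inside the SSH session):

```shell
REGISTRY=registry.example.com

# 1. Substitute ${REGISTRY} on the runner
printf 'image: ${REGISTRY}/storefront:staging\n' \
  | sed "s|\${REGISTRY}|$REGISTRY|" > compose.yml

# 2. Base64-encode: a single opaque token survives any quoting layer
ENCODED=$(base64 < compose.yml | tr -d '\n')

# 3. On the VPS this would be: echo "$ENCODED" | base64 -d > /opt/staging/compose.yml
echo "$ENCODED" | base64 -d > decoded.yml

cmp -s compose.yml decoded.yml && echo "round trip ok"
```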
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The VPS had no /opt/staging directory or compose file, causing the deploy
step to fail with "No such file or directory". Now the workflow:
- Creates /opt/staging if missing
- Writes compose.yml on every deploy (keeps it in sync with CI)
- Touches .env so podman compose doesn't error if no secrets file exists yet
Also adds deploy/staging/.env.example documenting runtime secrets that must
be set manually on the VPS after first deploy.
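The bootstrap portion is small enough to sketch in full (using a local directory here in place of /opt/staging):

```shell
STAGING_DIR=./staging            # /opt/staging on the VPS
mkdir -p "$STAGING_DIR"          # idempotent: safe to run on every deploy
touch "$STAGING_DIR/.env"        # podman compose errors if the env file is missing
```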
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
turbo prune cannot fully parse the npm 11 lockfile format, causing it to
generate an incomplete out/package-lock.json that drops non-hoisted workspace
entries (apps/storefront/node_modules/@heroui/react and related packages).
Replacing it with the full root lockfile ensures npm ci in the Docker deps
stage installs all packages including non-hoisted ones.
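The workaround as a Dockerfile fragment (the app target is illustrative):

```dockerfile
RUN npx turbo prune storefront --docker
# Overwrite the incomplete pruned lockfile with the full root lockfile
# so npm ci in the deps stage sees every entry
RUN cp package-lock.json out/package-lock.json
```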
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
docker build --push uses BuildKit's internal push, which connects directly
to the registry over HTTPS, bypassing the Podman daemon. Since the Gitea
registry is HTTP-only, this fails with "server gave HTTP response to HTTPS client".
Switch to --load (exports the image into the Podman daemon), then docker push
(goes through the daemon, which has insecure=true in registries.conf → uses HTTP).
Tag the SHA variant with docker tag before pushing both.
Also:
- Add NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME ARG/ENV to admin Dockerfile
- Add a STAGING_ prefix note to the builder stage of both Dockerfiles
- Add STAGING_NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME to workflow env and
pass it as --build-arg for admin builds only
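The revised push flow as a workflow sketch (the petloft-* image name is taken from the compose project name in an earlier commit; treat it and the tag wiring as illustrative):

```yaml
- name: Build and push
  run: |
    IMAGE="${{ secrets.STAGING_REGISTRY }}/petloft-${{ matrix.app }}"
    docker build --load -t "$IMAGE:staging" .     # into the daemon, no direct registry push
    docker tag "$IMAGE:staging" "$IMAGE:${{ github.sha }}"
    docker push "$IMAGE:staging"                  # daemon push honors insecure=true (HTTP)
    docker push "$IMAGE:${{ github.sha }}"
```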
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Added NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME to both the admin and storefront
Dockerfiles so it is available during the build.
- Updated deploy-staging.yml to pass the new Cloudinary variable as a
--build-arg.
- Clarified comments on NEXT_PUBLIC_* variable handling and Gitea secret
prefixes.
Two issues in the admin (and upcoming storefront) build:
1. Missing Clerk publishableKey during prerender
NEXT_PUBLIC_* vars are baked into the client bundle at build time. If absent,
Next.js SSG fails with "@clerk/clerk-react: Missing publishableKey".
Added ARG + ENV in the builder stage of both Dockerfiles and passed them via
--build-arg in the workflow. Admin and storefront use different Clerk
instances so the key is selected per matrix.app with a shell conditional.
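The per-app selection can be sketched with a shell conditional in the build step (secret names as listed in this commit; the Dockerfile path and build command are abbreviated assumptions):

```yaml
- name: Build image
  run: |
    if [ "${{ matrix.app }}" = "admin" ]; then
      CLERK_KEY="${{ secrets.STAGING_ADMIN_NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY }}"
    else
      CLERK_KEY="${{ secrets.STAGING_NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY }}"
    fi
    docker build \
      --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="$CLERK_KEY" \
      -f "apps/${{ matrix.app }}/Dockerfile" .
```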
2. "No output specified with docker-container driver" warning
setup-buildx-action with driver:docker was not switching the driver in the
Podman environment. Removed the step and switched to docker build --push
which pushes directly during the build, eliminating the separate push steps
and the missing-output warning.
New secrets required:
STAGING_NEXT_PUBLIC_CONVEX_URL
STAGING_NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY (storefront)
STAGING_ADMIN_NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY (admin)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two root causes for the Docker build failures:
1. convex/_generated/api not found (both apps)
turbo prune only traces npm workspace packages; the root convex/ directory
is not a workspace package so it is excluded from out/full/. Copy it
manually into the prune output after turbo prune runs.
2. @heroui/react not found (storefront)
package-lock.json was generated with npm@11 but node:20-alpine ships
npm@10. turbo warns it cannot parse the npm 11 lockfile and generates an
incomplete out/package-lock.json, causing npm ci inside Docker to miss
packages. Upgrade npm to 11 in the deps stage of both Dockerfiles.
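Both fixes as a Dockerfile sketch (stage name and app target are illustrative):

```dockerfile
FROM node:20-alpine AS deps
# node:20-alpine ships npm@10, which cannot parse the npm 11 lockfile
RUN npm install -g npm@11
WORKDIR /app
COPY . .
RUN npx turbo prune storefront --docker
# root convex/ is not a workspace package, so turbo prune drops it; graft it back
RUN cp -r convex out/full/convex
```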
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Remove top-level env.REGISTRY — Gitea does not expand secrets in
workflow-level env blocks; reference secrets.STAGING_REGISTRY directly
- Add docker/setup-buildx-action with driver: docker to avoid the
docker-container driver which requires --privileged on rootless Podman
- Update secret names comment to clarify STAGING_ prefix convention
(Gitea has no environment-level secrets, so prefixes distinguish staging/prod)
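A before/after sketch of the REGISTRY change (the build step and image name are illustrative):

```yaml
# Before (does not work: Gitea does not expand secrets in workflow-level env):
# env:
#   REGISTRY: ${{ secrets.STAGING_REGISTRY }}

# After: reference the secret directly at each use site
- name: Build
  run: docker build -t "${{ secrets.STAGING_REGISTRY }}/petloft-${{ matrix.app }}:staging" .
```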
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Introduce deploy-staging.yml, a workflow automating deployment to the
staging environment
- The workflow runs CI tasks (linting, type checking, testing), builds and
pushes Docker images for the storefront and admin applications, and deploys
them to a VPS
- Configure environment variables and secrets for access to the Docker
registry and the VPS
- Add .gitea/workflows/ci.yml — runs lint, typecheck, and tests on every push
- Remove convex/_generated from .gitignore and commit the generated files so CI
has the type information it needs without requiring a live Convex backend
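A sketch of what .gitea/workflows/ci.yml can look like (script names assume the usual npm conventions):

```yaml
name: CI
on: [push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck
      - run: npm test
```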
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>