24 Commits

f1dbf0b6ee fix(reviews): remove redundant comment in ProductDetailReviewsPanel component
All checks were successful
Deploy — Staging / Detect changed apps (push) Successful in 15s
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 1m27s
Deploy — Staging / Build & push — storefront (push) Successful in 2m16s
Deploy — Staging / Build & push — admin (push) Has been skipped
Deploy — Staging / Deploy to staging VPS (push) Successful in 19s
2026-03-08 16:43:40 +03:00
777c3b34bc fix(ci): replace dynamic matrix with explicit per-app jobs
All checks were successful
Deploy — Staging / Detect changed apps (push) Successful in 16s
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 1m29s
Deploy — Staging / Build & push — storefront (push) Has been skipped
Deploy — Staging / Build & push — admin (push) Has been skipped
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
Gitea Actions does not evaluate fromJson() in matrix strategy — matrix.app
was always empty, breaking turbo prune. Replaced with two explicit jobs
(build-storefront, build-admin) each with a plain if: condition.

The deploy job uses always() + !contains(needs.*.result, 'failure') so it
runs when either build succeeded and skips when a build was cancelled/failed.
Parallelism is preserved — both apps still build simultaneously when both change.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 16:35:10 +03:00
f6156c78d1 feat(storefront): render product description as markdown
Some checks failed
Deploy — Staging / Detect changed apps (push) Successful in 16s
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 1m29s
Deploy — Staging / Build & push — ${{ matrix.app }} (push) Failing after 49s
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
- Add react-markdown dependency to storefront
- Use ReactMarkdown in ProductDetailDescriptionSection for formatted descriptions

Made-with: Cursor
2026-03-08 16:27:48 +03:00
0bd0d90f45 feat(ci): skip build and deploy for unchanged apps
All checks were successful
Deploy — Staging / Detect changed apps (push) Successful in 16s
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 1m26s
Deploy — Staging / Build & push — ${{ matrix.app }} (push) Has been skipped
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
Add a changes job that diffs HEAD~1..HEAD and outputs which apps were
affected. Build and deploy jobs consume the output:

- build matrix is restricted to changed apps only — unchanged apps are
  never built or pushed
- deploy pulls only rebuilt images and restarts only those containers

Shared triggers (packages/, convex/, package-lock.json, turbo.json) mark
both apps as changed since they affect the full dependency tree.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 16:15:58 +03:00
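The detection rule in the commit above can be sketched as a standalone shell script. The changed-file list here is hypothetical; in CI it comes from `git diff --name-only HEAD~1 HEAD`:

```shell
#!/bin/sh
# Sketch of the change-detection rule. CHANGED is a hypothetical file list;
# the grep patterns mirror the ones used in the workflow's `changes` job.
CHANGED="packages/utils/src/format.ts
apps/storefront/app/page.tsx"

STOREFRONT=false
ADMIN=false
# Shared paths (packages/, convex/, root config) mark both apps as changed
if echo "$CHANGED" | grep -qE '^(package\.json|package-lock\.json|turbo\.json|packages/|convex/)'; then
  STOREFRONT=true
  ADMIN=true
fi
echo "$CHANGED" | grep -q '^apps/storefront/' && STOREFRONT=true || true
echo "$CHANGED" | grep -q '^apps/admin/' && ADMIN=true || true

echo "storefront=${STOREFRONT} admin=${ADMIN}"
# → storefront=true admin=true  (the packages/ change marks both apps)
```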
3396a79445 fix(storefront): pass NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY at build time
All checks were successful
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 1m27s
Deploy — Staging / Build & push — admin (push) Successful in 53s
Deploy — Staging / Build & push — storefront (push) Successful in 1m40s
Deploy — Staging / Deploy to staging VPS (push) Successful in 19s
Stripe publishable key must be baked into the client bundle. Added ARG/ENV
to storefront Dockerfile and --build-arg in the workflow build step, sourced
from STAGING_NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY secret.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 15:50:09 +03:00
9f2e9afc63 fix(admin): pass missing Cloudinary and image-processing env vars
All checks were successful
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 1m31s
Deploy — Staging / Build & push — admin (push) Successful in 1m39s
Deploy — Staging / Build & push — storefront (push) Successful in 57s
Deploy — Staging / Deploy to staging VPS (push) Successful in 20s
NEXT_PUBLIC_CLOUDINARY_API_KEY and NEXT_PUBLIC_IMAGE_PROCESSING_API_URL are
NEXT_PUBLIC_* vars that must be baked in at build time — added as ARG/ENV in
admin Dockerfile and as --build-arg in the workflow build step.

CLOUDINARY_API_SECRET is a server-side secret — added to the deploy step's
env block, written to /opt/staging/.env via printf, and exposed to the admin
container via compose.yml environment block.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 14:45:31 +03:00
64c0cd6af8 fix(deploy): write .env to /opt/staging not \$HOME/staging
All checks were successful
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 1m27s
Deploy — Staging / Build & push — admin (push) Successful in 54s
Deploy — Staging / Build & push — storefront (push) Successful in 55s
Deploy — Staging / Deploy to staging VPS (push) Successful in 20s
\$HOME in an unquoted heredoc expands on the runner (not the VPS), so the
VPS received the literal runner path (/root/staging/.env) which didn't exist.
Using the explicit /opt/staging/.env path (consistent with compose.yml and
mkdir) fixes the permission denied error.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 13:04:40 +03:00
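The expansion difference behind the bug above is easy to demo locally — a minimal sketch, not the actual deploy step:

```shell
#!/bin/sh
# Unquoted vs quoted heredoc delimiters: with `<< EOF` the local shell
# expands variables before the text goes anywhere (e.g. over SSH); with
# `<< 'EOF'` the text passes through verbatim for the remote side.
VAR="expanded-on-the-runner"

unquoted=$(cat << EOF
$VAR
EOF
)
quoted=$(cat << 'EOF'
$VAR
EOF
)

echo "unquoted: $unquoted"   # → unquoted: expanded-on-the-runner
echo "quoted:   $quoted"     # → quoted:   $VAR
```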
af8e14c545 fix(deploy): inject runtime secrets and force-recreate containers on deploy
Some checks failed
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 1m33s
Deploy — Staging / Build & push — admin (push) Successful in 57s
Deploy — Staging / Build & push — storefront (push) Successful in 58s
Deploy — Staging / Deploy to staging VPS (push) Failing after 18s
- Add --force-recreate to podman compose up so stale containers are never
  reused across deploys when the image tag (staging) is reused
- Inject CLERK_SECRET_KEY and ADMIN_CLERK_SECRET_KEY from Gitea secrets into
  ~/staging/.env on the VPS via printf (variables expand on the runner before
  SSH, so secrets never touch VPS shell history; file gets chmod 600)
- Update compose.yml: storefront gets CLERK_SECRET_KEY, admin gets
  CLERK_SECRET_KEY mapped from ADMIN_CLERK_SECRET_KEY

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 12:42:06 +03:00
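The secret-injection pattern from the commit above, sketched in isolation with placeholder values (a local file stands in for the VPS path): printf expands the variables on the runner, so only the finished lines cross the SSH boundary and the secrets never hit the remote shell history.

```shell
#!/bin/sh
# Placeholder values — real keys come from Gitea secrets in the workflow.
CLERK_SECRET_KEY="sk_test_placeholder"
ADMIN_CLERK_SECRET_KEY="sk_test_admin_placeholder"

# One printf writes the whole env file; chmod 600 keeps it owner-only.
printf 'CLERK_SECRET_KEY=%s\nADMIN_CLERK_SECRET_KEY=%s\n' \
  "$CLERK_SECRET_KEY" "$ADMIN_CLERK_SECRET_KEY" > .env
chmod 600 .env

cat .env
# → CLERK_SECRET_KEY=sk_test_placeholder
# → ADMIN_CLERK_SECRET_KEY=sk_test_admin_placeholder
```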
9ede637f39 fix(docker): correct server.js path for monorepo standalone output
All checks were successful
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 2m41s
Deploy — Staging / Build & push — admin (push) Successful in 3m4s
Deploy — Staging / Build & push — storefront (push) Successful in 3m16s
Deploy — Staging / Deploy to staging VPS (push) Successful in 31s
With outputFileTracingRoot set to the repo root, Next.js standalone mirrors
the full monorepo directory tree inside .next/standalone/. server.js lands at
apps/storefront/server.js (not at the root), so the CMD must reflect that.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 12:10:39 +03:00
0da06b965d deploy: change ports mapping
All checks were successful
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 2m46s
Deploy — Staging / Build & push — admin (push) Successful in 1m33s
Deploy — Staging / Build & push — storefront (push) Successful in 1m41s
Deploy — Staging / Deploy to staging VPS (push) Successful in 30s
2026-03-08 12:00:56 +03:00
439d6d4455 fix(ci): fix YAML parse error in deploy workflow caused by inner heredoc
Some checks failed
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 1m29s
Deploy — Staging / Build & push — admin (push) Successful in 54s
Deploy — Staging / Build & push — storefront (push) Successful in 53s
Deploy — Staging / Deploy to staging VPS (push) Failing after 24s
The compose file was written via a bash << 'COMPOSE' heredoc nested inside
the YAML run: | block scalar. Lines like "name: petloft-staging" at column 0
cause the YAML parser to break out of the block scalar early, making the
entire workflow file invalid YAML — Gitea silently drops invalid workflows,
so no jobs triggered at all.

Fix: move compose.yml to deploy/staging/compose.yml in the repo, substitute
${REGISTRY} on the runner, base64-encode the result, and decode it on the VPS
inside the SSH session. No inner heredoc needed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 11:37:02 +03:00
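The substitute-encode-decode round trip from the fix above can be sketched on its own (registry value is hypothetical; here the decode runs in the same shell rather than on the VPS):

```shell
#!/bin/sh
# Substitute ${REGISTRY} on the runner, base64-encode the result so no
# compose line can sit at column 0 inside the workflow's YAML block scalar,
# then decode on the far side.
REGISTRY="git.example.com:3000/acme"
printf 'image: ${REGISTRY}/storefront:staging\n' > compose.yml

B64=$(sed "s|\${REGISTRY}|${REGISTRY}|g" compose.yml | base64 -w 0)

# On the VPS this single token is decoded back into the compose file:
echo "$B64" | base64 -d
# → image: git.example.com:3000/acme/storefront:staging
```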
b333047753 workflow trigger 2026-03-08 11:29:45 +03:00
0b9ac5cd46 fix(deploy): create /opt/staging and write compose.yml on every deploy
The VPS had no /opt/staging directory or compose file, causing the deploy
step to fail with "No such file or directory". Now the workflow:
- Creates /opt/staging if missing
- Writes compose.yml on every deploy (keeps it in sync with CI)
- Touches .env so podman compose doesn't error if no secrets file exists yet

Also adds deploy/staging/.env.example documenting runtime secrets that must
be set manually on the VPS after first deploy.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 11:10:52 +03:00
bfc20ac293 fix(deps): add tailwind-merge to root package.json as direct dependency
Some checks failed
Deploy — Staging / Build & push — admin (push) Has been cancelled
Deploy — Staging / Build & push — storefront (push) Has been cancelled
Deploy — Staging / Deploy to staging VPS (push) Has been cancelled
Deploy — Staging / Lint, Typecheck & Test (push) Has been cancelled
turbo prune storefront --docker excludes admin, so tailwind-merge was
not installed at root in Docker (it was only hoisted because of admin's dep).
@heroui/styles/node_modules/tailwind-variants requires tailwind-merge >=3.0.0
and walks up to root to find it. Adding it as a root-level dep ensures npm ci
always installs it regardless of which workspace is being built.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 10:55:16 +03:00
33fed9382a fix(deps): upgrade tailwind-merge to v3 and declare lucide-react in storefront
Some checks failed
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 2m40s
Deploy — Staging / Build & push — admin (push) Successful in 3m40s
Deploy — Staging / Build & push — storefront (push) Failing after 2m39s
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
- apps/admin: tailwind-merge ^2.6.1 → ^3.4.0 so root resolves to v3.x,
  satisfying @heroui/styles/node_modules/tailwind-variants peer dep (>=3.0.0)
- apps/storefront: add lucide-react ^0.400.0 as explicit dep (used in
  SearchEmptyState and SearchResultsPanel but was previously undeclared)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 10:31:29 +03:00
5b0a727bce fix(ci): replace turbo-pruned lockfile with full root lockfile to fix @heroui/react missing in Docker
Some checks failed
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 2m7s
Deploy — Staging / Build & push — admin (push) Successful in 3m20s
Deploy — Staging / Build & push — storefront (push) Failing after 2m30s
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
turbo prune cannot fully parse the npm 11 lockfile format, causing it to
generate an incomplete out/package-lock.json that drops non-hoisted workspace
entries (apps/storefront/node_modules/@heroui/react and related packages).
Replacing it with the full root lockfile ensures npm ci in the Docker deps
stage installs all packages including non-hoisted ones.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 04:38:54 +03:00
5391b3b428 fix(docker): copy full deps stage into storefront builder, not just root node_modules
@heroui/react cannot be hoisted to the root by npm (peer dep constraints)
and is installed at apps/storefront/node_modules/ instead. The builder stage
was only copying /app/node_modules, leaving @heroui/react absent when
next build ran.

Switch to COPY --from=deps /app/ ./ so both root and workspace-level
node_modules are present, then COPY full/ . layers the source on top.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 04:20:43 +03:00
829fec9ac1 fix(ci): use --load + docker push instead of --push for HTTP registry
Some checks failed
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 2m8s
Deploy — Staging / Build & push — admin (push) Successful in 1m22s
Deploy — Staging / Build & push — storefront (push) Failing after 1m35s
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
docker build --push uses buildkit's internal push which connects directly
to the registry over HTTPS, bypassing the Podman daemon. Since the Gitea
registry is HTTP-only, this fails with "server gave HTTP response to HTTPS client".

Switch to --load (exports image into Podman daemon) then docker push (goes
through the daemon which has insecure=true in registries.conf → uses HTTP).
Tag the SHA variant with docker tag before pushing both.

Also:
- Add NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME ARG/ENV to admin Dockerfile
- Add STAGING_ prefix note to both Dockerfiles builder stage
- Add STAGING_NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME to workflow env and
  pass it as --build-arg for admin builds only

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 04:14:47 +03:00
6b63cbb6cd fix(ci): update Dockerfiles and workflow to include new Cloudinary environment variable
Some checks failed
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 2m6s
Deploy — Staging / Build & push — admin (push) Failing after 2m7s
Deploy — Staging / Build & push — storefront (push) Failing after 1m35s
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
- Added NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME to both admin and storefront Dockerfiles to ensure it is available during the build process.
- Updated deploy-staging.yml to pass the new Cloudinary variable as a build argument.
- Clarified comments regarding the handling of NEXT_PUBLIC_* variables and Gitea secret prefixes.

This change enhances the build configuration for both applications, ensuring all necessary environment variables are correctly passed during the Docker build process.
2026-03-08 04:05:01 +03:00
bc7306fea4 fix(ci): pass NEXT_PUBLIC build args and fix docker push
Some checks failed
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 2m11s
Deploy — Staging / Build & push — admin (push) Failing after 2m8s
Deploy — Staging / Build & push — storefront (push) Failing after 1m42s
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
Two issues in the admin (and upcoming storefront) build:

1. Missing Clerk publishableKey during prerender
   NEXT_PUBLIC_* vars are baked into the client bundle at build time. If absent,
   Next.js SSG fails with "@clerk/clerk-react: Missing publishableKey".
   Added ARG + ENV in both Dockerfiles builder stage and pass them via
   --build-arg in the workflow. Admin and storefront use different Clerk
   instances so the key is selected per matrix.app with a shell conditional.

2. "No output specified with docker-container driver" warning
   setup-buildx-action with driver:docker was not switching the driver in the
   Podman environment. Removed the step and switched to docker build --push
   which pushes directly during the build, eliminating the separate push steps
   and the missing-output warning.

New secrets required:
  STAGING_NEXT_PUBLIC_CONVEX_URL
  STAGING_NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY        (storefront)
  STAGING_ADMIN_NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY  (admin)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 03:31:58 +03:00
7a6da4f18f fix(ci): fix convex missing from prune output and npm version mismatch
Some checks failed
CI / Lint, Typecheck & Test (push) Successful in 2m5s
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 2m5s
Deploy — Staging / Build & push — admin (push) Failing after 3m11s
Deploy — Staging / Build & push — storefront (push) Failing after 2m23s
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
Two root causes for the Docker build failures:

1. convex/_generated/api not found (both apps)
   turbo prune only traces npm workspace packages; the root convex/ directory
   is not a workspace package so it is excluded from out/full/. Copy it
   manually into the prune output after turbo prune runs.

2. @heroui/react not found (storefront)
   package-lock.json was generated with npm@11 but node:20-alpine ships
   npm@10. turbo warns it cannot parse the npm 11 lockfile and generates an
   incomplete out/package-lock.json, causing npm ci inside Docker to miss
   packages. Upgrade npm to 11 in the deps stage of both Dockerfiles.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 03:16:29 +03:00
fc5f98541b fix(ci): fix deploy-staging registry and buildx driver issues
Some checks failed
CI / Lint, Typecheck & Test (push) Successful in 2m6s
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 2m56s
Deploy — Staging / Build & push — admin (push) Failing after 3m7s
Deploy — Staging / Build & push — storefront (push) Failing after 2m30s
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
- Remove top-level env.REGISTRY — Gitea does not expand secrets in
  workflow-level env blocks; reference secrets.STAGING_REGISTRY directly
- Add docker/setup-buildx-action with driver: docker to avoid the
  docker-container driver which requires --privileged on rootless Podman
- Update secret names comment to clarify STAGING_ prefix convention
  (Gitea has no environment-level secrets, so prefixes distinguish staging/prod)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 02:55:41 +03:00
70b728a474 feat(docker): add Dockerfiles and update next.config.js for admin and storefront applications
Some checks failed
CI / Lint, Typecheck & Test (push) Successful in 2m7s
Deploy — Staging / Lint, Typecheck & Test (push) Successful in 2m3s
Deploy — Staging / Build & push — admin (push) Failing after 1m8s
Deploy — Staging / Build & push — storefront (push) Failing after 1m5s
Deploy — Staging / Deploy to staging VPS (push) Has been skipped
- Introduced Dockerfiles for both admin and storefront applications to streamline the build and deployment process using multi-stage builds.
- Configured the Dockerfiles to install dependencies, build the applications, and set up a minimal runtime environment.
- Updated next.config.js for both applications to enable standalone output and set the outputFileTracingRoot for proper file tracing in a monorepo setup.

This commit enhances the containerization of the applications, improving deployment efficiency and reducing image sizes.
2026-03-08 02:02:58 +03:00
79640074cd feat(ci): add Gitea CI workflow for staging deployment
- Introduced a new workflow in deploy-staging.yml to automate the deployment process for the staging environment.
- The workflow includes steps for CI tasks (linting, type checking, testing), building and pushing Docker images for storefront and admin applications, and deploying to a VPS.
- Configured environment variables and secrets for secure access to the Docker registry and VPS.

This commit enhances the CI/CD pipeline by streamlining the deployment process to the staging environment.
2026-03-08 01:42:57 +03:00
14 changed files with 1631 additions and 32 deletions


@@ -3,7 +3,7 @@ name: CI
 on:
   push:
     branches:
-      - "**"
+      - feat #"**" # TODO: change to "**" after testing
 jobs:
   ci:


@@ -0,0 +1,258 @@
name: Deploy — Staging
on:
  push:
    branches:
      - staging
# Gitea Actions has no environment-level secrets (unlike GitHub Actions).
# Staging and production secrets live at repo level, distinguished by prefix.
# Production workflow uses the same names with PROD_ prefix.
#
# Required secrets (repo → Settings → Secrets and Variables → Actions):
# STAGING_REGISTRY — host:port/owner
# STAGING_REGISTRY_USER — Gitea username
# STAGING_REGISTRY_TOKEN — Gitea PAT (package:write)
# STAGING_SSH_HOST — host.containers.internal
# STAGING_SSH_USER — SSH user on the VPS
# STAGING_SSH_KEY — SSH private key (full PEM)
# STAGING_SSH_PORT — (optional) defaults to 22
# STAGING_NEXT_PUBLIC_CONVEX_URL
# STAGING_STOREFRONT_NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY
# STAGING_ADMIN_NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY
# STAGING_NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME
# STAGING_NEXT_PUBLIC_CLOUDINARY_API_KEY
# STAGING_NEXT_PUBLIC_IMAGE_PROCESSING_API_URL
# STAGING_NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY
# STAGING_STOREFRONT_CLERK_SECRET_KEY
# STAGING_ADMIN_CLERK_SECRET_KEY
# STAGING_CLOUDINARY_API_SECRET
jobs:
  # ── 0. Detect changes ────────────────────────────────────────────────────────
  # Determines which apps need to be rebuilt on this push.
  # Shared paths (packages/, convex/, root config) mark both apps as changed.
  changes:
    name: Detect changed apps
    runs-on: ubuntu-latest
    outputs:
      storefront: ${{ steps.detect.outputs.storefront }}
      admin: ${{ steps.detect.outputs.admin }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2
      - name: Determine affected apps
        id: detect
        run: |
          BASE=$(git rev-parse HEAD~1 2>/dev/null || git hash-object -t tree /dev/null)
          CHANGED=$(git diff --name-only "$BASE" HEAD)
          STOREFRONT=false
          ADMIN=false
          # Shared paths affect both apps
          if echo "$CHANGED" | grep -qE '^(package\.json|package-lock\.json|turbo\.json|packages/|convex/)'; then
            STOREFRONT=true
            ADMIN=true
          fi
          echo "$CHANGED" | grep -q '^apps/storefront/' && STOREFRONT=true || true
          echo "$CHANGED" | grep -q '^apps/admin/' && ADMIN=true || true
          echo "storefront=${STOREFRONT}" >> "$GITHUB_OUTPUT"
          echo "admin=${ADMIN}" >> "$GITHUB_OUTPUT"
  # ── 1. CI ───────────────────────────────────────────────────────────────────
  ci:
    name: Lint, Typecheck & Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run lint
      - run: npm run type-check
      - run: npm run test:once
  # ── 2a. Build storefront ─────────────────────────────────────────────────────
  build-storefront:
    name: Build & push — storefront
    needs: [ci, changes]
    if: needs.changes.outputs.storefront == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - name: Prune workspace
        run: |
          npx turbo prune storefront --docker
          cp -r convex out/full/convex
          cp package-lock.json out/package-lock.json
      - name: Authenticate with registry
        run: |
          mkdir -p ~/.docker
          AUTH=$(echo -n "${{ secrets.STAGING_REGISTRY_USER }}:${{ secrets.STAGING_REGISTRY_TOKEN }}" | base64 -w 0)
          REGISTRY_HOST=$(echo "${{ secrets.STAGING_REGISTRY }}" | cut -d'/' -f1)
          echo "{\"auths\":{\"${REGISTRY_HOST}\":{\"auth\":\"${AUTH}\"}}}" > ~/.docker/config.json
      - name: Build & push
        env:
          NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY: ${{ secrets.STAGING_STOREFRONT_NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY }}
          NEXT_PUBLIC_CONVEX_URL: ${{ secrets.STAGING_NEXT_PUBLIC_CONVEX_URL }}
          NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY: ${{ secrets.STAGING_NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY }}
        run: |
          SHORT_SHA="${GITHUB_SHA::7}"
          IMAGE="${{ secrets.STAGING_REGISTRY }}/storefront"
          docker build \
            -f apps/storefront/Dockerfile \
            --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="$NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY" \
            --build-arg NEXT_PUBLIC_CONVEX_URL="$NEXT_PUBLIC_CONVEX_URL" \
            --build-arg NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY="$NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY" \
            --load -t "${IMAGE}:staging" ./out
          docker tag "${IMAGE}:staging" "${IMAGE}:sha-${SHORT_SHA}"
          docker push "${IMAGE}:staging"
          docker push "${IMAGE}:sha-${SHORT_SHA}"
  # ── 2b. Build admin ──────────────────────────────────────────────────────────
  build-admin:
    name: Build & push — admin
    needs: [ci, changes]
    if: needs.changes.outputs.admin == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - name: Prune workspace
        run: |
          npx turbo prune admin --docker
          cp -r convex out/full/convex
          cp package-lock.json out/package-lock.json
      - name: Authenticate with registry
        run: |
          mkdir -p ~/.docker
          AUTH=$(echo -n "${{ secrets.STAGING_REGISTRY_USER }}:${{ secrets.STAGING_REGISTRY_TOKEN }}" | base64 -w 0)
          REGISTRY_HOST=$(echo "${{ secrets.STAGING_REGISTRY }}" | cut -d'/' -f1)
          echo "{\"auths\":{\"${REGISTRY_HOST}\":{\"auth\":\"${AUTH}\"}}}" > ~/.docker/config.json
      - name: Build & push
        env:
          NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY: ${{ secrets.STAGING_ADMIN_NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY }}
          NEXT_PUBLIC_CONVEX_URL: ${{ secrets.STAGING_NEXT_PUBLIC_CONVEX_URL }}
          NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME: ${{ secrets.STAGING_NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME }}
          NEXT_PUBLIC_CLOUDINARY_API_KEY: ${{ secrets.STAGING_NEXT_PUBLIC_CLOUDINARY_API_KEY }}
          NEXT_PUBLIC_IMAGE_PROCESSING_API_URL: ${{ secrets.STAGING_NEXT_PUBLIC_IMAGE_PROCESSING_API_URL }}
        run: |
          SHORT_SHA="${GITHUB_SHA::7}"
          IMAGE="${{ secrets.STAGING_REGISTRY }}/admin"
          docker build \
            -f apps/admin/Dockerfile \
            --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="$NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY" \
            --build-arg NEXT_PUBLIC_CONVEX_URL="$NEXT_PUBLIC_CONVEX_URL" \
            --build-arg NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME="$NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME" \
            --build-arg NEXT_PUBLIC_CLOUDINARY_API_KEY="$NEXT_PUBLIC_CLOUDINARY_API_KEY" \
            --build-arg NEXT_PUBLIC_IMAGE_PROCESSING_API_URL="$NEXT_PUBLIC_IMAGE_PROCESSING_API_URL" \
            --load -t "${IMAGE}:staging" ./out
          docker tag "${IMAGE}:staging" "${IMAGE}:sha-${SHORT_SHA}"
          docker push "${IMAGE}:staging"
          docker push "${IMAGE}:sha-${SHORT_SHA}"
  # ── 3. Deploy ────────────────────────────────────────────────────────────────
  # Runs when at least one app changed and no build failed.
  # `always()` is required so the job isn't auto-skipped when one build job
  # was skipped (Gitea/GitHub skip downstream jobs of skipped jobs by default).
  deploy:
    name: Deploy to staging VPS
    needs: [build-storefront, build-admin, changes]
    if: |
      always() &&
      (needs.changes.outputs.storefront == 'true' || needs.changes.outputs.admin == 'true') &&
      !contains(needs.*.result, 'failure') &&
      !contains(needs.*.result, 'cancelled')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Write SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.STAGING_SSH_KEY }}" > ~/.ssh/staging
          chmod 600 ~/.ssh/staging
      - name: Pull & restart containers on VPS
        env:
          REGISTRY: ${{ secrets.STAGING_REGISTRY }}
          REGISTRY_USER: ${{ secrets.STAGING_REGISTRY_USER }}
          REGISTRY_TOKEN: ${{ secrets.STAGING_REGISTRY_TOKEN }}
          SSH_HOST: ${{ secrets.STAGING_SSH_HOST }}
          SSH_USER: ${{ secrets.STAGING_SSH_USER }}
          SSH_PORT: ${{ secrets.STAGING_SSH_PORT }}
          CLERK_SECRET_KEY: ${{ secrets.STAGING_STOREFRONT_CLERK_SECRET_KEY }}
          ADMIN_CLERK_SECRET_KEY: ${{ secrets.STAGING_ADMIN_CLERK_SECRET_KEY }}
          CLOUDINARY_API_SECRET: ${{ secrets.STAGING_CLOUDINARY_API_SECRET }}
          STOREFRONT_CHANGED: ${{ needs.changes.outputs.storefront }}
          ADMIN_CHANGED: ${{ needs.changes.outputs.admin }}
        run: |
          REGISTRY_HOST=$(echo "$REGISTRY" | cut -d'/' -f1)
          COMPOSE_B64=$(sed "s|\${REGISTRY}|${REGISTRY}|g" deploy/staging/compose.yml | base64 -w 0)
          ssh -i ~/.ssh/staging \
            -p "${SSH_PORT:-22}" \
            -o StrictHostKeyChecking=accept-new \
            "${SSH_USER}@${SSH_HOST}" bash -s << EOF
          set -euo pipefail
          echo "${REGISTRY_TOKEN}" \
            | podman login "${REGISTRY_HOST}" \
              -u "${REGISTRY_USER}" --password-stdin --tls-verify=false
          [ "${STOREFRONT_CHANGED}" = "true" ] && podman pull --tls-verify=false "${REGISTRY}/storefront:staging"
          [ "${ADMIN_CHANGED}" = "true" ] && podman pull --tls-verify=false "${REGISTRY}/admin:staging"
          mkdir -p /opt/staging
          echo "${COMPOSE_B64}" | base64 -d > /opt/staging/compose.yml
          printf 'CLERK_SECRET_KEY=%s\nADMIN_CLERK_SECRET_KEY=%s\nCLOUDINARY_API_SECRET=%s\n' \
            "${CLERK_SECRET_KEY}" "${ADMIN_CLERK_SECRET_KEY}" "${CLOUDINARY_API_SECRET}" \
            > /opt/staging/.env
          chmod 600 /opt/staging/.env
          SERVICES=""
          [ "${STOREFRONT_CHANGED}" = "true" ] && SERVICES="\${SERVICES} storefront"
          [ "${ADMIN_CHANGED}" = "true" ] && SERVICES="\${SERVICES} admin"
          cd /opt/staging
          podman compose up -d --force-recreate --remove-orphans \${SERVICES}
          podman image prune -f
          EOF

apps/admin/Dockerfile

@@ -0,0 +1,68 @@
# Build context: ./out (turbo prune admin --docker)
# out/json/ — package.json files only → used by deps stage for layer caching
# out/full/ — full pruned monorepo → used by builder stage for source
# out/package-lock.json
# ── Stage 1: deps ────────────────────────────────────────────────────────────
FROM node:20-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Upgrade npm to match the project's packageManager (npm@11). The package-lock.json
# was generated with npm 11 — npm 10 (bundled with node:20) can't fully parse it,
# causing turbo prune to generate an incomplete pruned lockfile and npm ci to miss
# packages.
RUN npm install -g npm@11 --quiet
COPY json/ .
COPY package-lock.json .
RUN npm ci
# ── Stage 2: builder ─────────────────────────────────────────────────────────
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY full/ .
# NEXT_PUBLIC_* vars are baked into the client bundle at build time by Next.js.
# They must be present here (not just at runtime) or SSG/prerender fails.
# Passed via --build-arg in CI. Note: Gitea secrets use a STAGING_/PROD_ prefix
# which is stripped by the workflow before being forwarded here as build args.
ARG NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY
ARG NEXT_PUBLIC_CONVEX_URL
ARG NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME
ARG NEXT_PUBLIC_CLOUDINARY_API_KEY
ARG NEXT_PUBLIC_IMAGE_PROCESSING_API_URL
ENV NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=$NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY \
NEXT_PUBLIC_CONVEX_URL=$NEXT_PUBLIC_CONVEX_URL \
NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME=$NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME \
NEXT_PUBLIC_CLOUDINARY_API_KEY=$NEXT_PUBLIC_CLOUDINARY_API_KEY \
NEXT_PUBLIC_IMAGE_PROCESSING_API_URL=$NEXT_PUBLIC_IMAGE_PROCESSING_API_URL \
NEXT_TELEMETRY_DISABLED=1
RUN npx turbo build --filter=admin
# ── Stage 3: runner ──────────────────────────────────────────────────────────
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production \
NEXT_TELEMETRY_DISABLED=1 \
HOSTNAME=0.0.0.0 \
PORT=3001
RUN addgroup -g 1001 -S nodejs && adduser -S nextjs -u 1001
COPY --from=builder --chown=nextjs:nodejs /app/apps/admin/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/apps/admin/.next/static ./apps/admin/.next/static
COPY --from=builder --chown=nextjs:nodejs /app/apps/admin/public ./apps/admin/public
USER nextjs
EXPOSE 3001
CMD ["node", "apps/admin/server.js"]


@@ -3,6 +3,8 @@ const path = require("path");
 /** @type {import('next').NextConfig} */
 const nextConfig = {
+  output: "standalone",
+  outputFileTracingRoot: path.join(__dirname, "../.."),
   transpilePackages: ["@repo/convex", "@repo/types", "@repo/utils"],
   turbopack: {
     root: path.join(__dirname, "..", ".."),


@@ -29,7 +29,7 @@
     "radix-ui": "^1.4.3",
     "react-hook-form": "^7.71.2",
     "sonner": "^2.0.7",
-    "tailwind-merge": "^2.6.1",
+    "tailwind-merge": "^3.4.0",
     "zod": "^3.25.76"
   },
   "devDependencies": {


@@ -0,0 +1,81 @@
# Build context: ./out (turbo prune storefront --docker)
# out/json/ — package.json files only → used by deps stage for layer caching
# out/full/ — full pruned monorepo → used by builder stage for source
# out/package-lock.json
# ── Stage 1: deps ────────────────────────────────────────────────────────────
# Install ALL dependencies (dev + prod) using only the package.json tree.
# This layer is shared with the builder stage and only rebuilds when
# a package.json or the lock file changes — not when source code changes.
FROM node:20-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Upgrade npm to match the project's packageManager (npm@11). The package-lock.json
# was generated with npm 11 — npm 10 (bundled with node:20) can't fully parse it,
# causing turbo prune to generate an incomplete pruned lockfile and npm ci to miss
# packages like @heroui/react.
RUN npm install -g npm@11 --quiet
COPY json/ .
COPY package-lock.json .
RUN npm ci
# ── Stage 2: builder ─────────────────────────────────────────────────────────
# Full monorepo source + build artifact.
# next build produces .next/standalone/ because output: "standalone" is set
# in next.config.js — that's what makes the runner stage small.
FROM node:20-alpine AS builder
WORKDIR /app
# Copy everything from the deps stage — not just /app/node_modules.
# @heroui/react cannot be hoisted to the root by npm and is installed at
# apps/storefront/node_modules/ instead. Copying only the root node_modules
# would leave it missing. Copying all of /app/ brings both root and
# workspace-level node_modules, then full/ layers the source on top.
COPY --from=deps /app/ ./
COPY full/ .
# NEXT_PUBLIC_* vars are baked into the client bundle at build time by Next.js.
# They must be present here (not just at runtime) or SSG/prerender fails.
# Passed via --build-arg in CI. Note: Gitea secrets use a STAGING_/PROD_ prefix
# which is stripped by the workflow before being forwarded here as build args.
ARG NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY
ARG NEXT_PUBLIC_CONVEX_URL
ARG NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY
ENV NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=$NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY \
    NEXT_PUBLIC_CONVEX_URL=$NEXT_PUBLIC_CONVEX_URL \
    NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=$NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY \
    NEXT_TELEMETRY_DISABLED=1
RUN npx turbo build --filter=storefront
# ── Stage 3: runner ──────────────────────────────────────────────────────────
# Minimal runtime image — only the standalone bundle, static assets, and public dir.
# No source code, no dev dependencies, no build tools.
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production \
    NEXT_TELEMETRY_DISABLED=1 \
    HOSTNAME=0.0.0.0 \
    PORT=3000
# Non-root user for security
RUN addgroup -g 1001 -S nodejs && adduser -S nextjs -u 1001
# outputFileTracingRoot is set to the repo root, so the standalone directory mirrors
# the full monorepo tree. server.js lands at apps/storefront/server.js inside
# standalone/, not at the root. Static files and public/ must be copied separately.
COPY --from=builder --chown=nextjs:nodejs /app/apps/storefront/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/apps/storefront/.next/static ./apps/storefront/.next/static
COPY --from=builder --chown=nextjs:nodejs /app/apps/storefront/public ./apps/storefront/public
USER nextjs
EXPOSE 3000
CMD ["node", "apps/storefront/server.js"]


@@ -3,6 +3,10 @@ const path = require("path");
 /** @type {import('next').NextConfig} */
 const nextConfig = {
+  output: "standalone",
+  // Required in a monorepo: tells Next.js to trace files from the repo root
+  // so the standalone bundle includes files from packages/
+  outputFileTracingRoot: path.join(__dirname, "../.."),
   transpilePackages: ["@repo/convex", "@repo/types", "@repo/utils"],
   turbopack: {
     root: path.join(__dirname, "..", ".."),
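The comment above says `outputFileTracingRoot` points at the repo root. A minimal sketch of how the two-level `path.join` resolves, assuming the app lives at `<repo>/apps/storefront` (the paths here are illustrative stand-ins, not taken from the repo):

```typescript
import * as path from "node:path";

// Stand-in for __dirname inside apps/storefront (assumed layout):
const appDir = "/repo/apps/storefront";

// "../.." climbs out of storefront/ and apps/, landing at the monorepo root,
// which is what lets file tracing pull in code from packages/.
const tracingRoot = path.join(appDir, "../..");

console.log(tracingRoot); // "/repo"
```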


@@ -18,6 +18,8 @@
     "@repo/utils": "*",
     "@stripe/react-stripe-js": "^5.6.0",
     "@stripe/stripe-js": "^8.8.0",
-    "framer-motion": "^11.0.0"
+    "framer-motion": "^11.0.0",
+    "lucide-react": "^0.400.0",
+    "react-markdown": "^10.1.0"
   }
 }


@@ -27,7 +27,7 @@ export function ProductDetailReviewsPanel({ productId, initialRating, initialRev
     productId: productId as Id<"products">,
     sortBy,
     limit: offset + LIMIT,
-    offset: 0, // In this pattern, we increase limit to fetch more pages without resetting offset, so previously fetched array grows
+    offset: 0,
   });

   if (result === undefined) return <ProductDetailReviewsSkeleton />;
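The removed comment described a growing-limit pagination pattern. A standalone illustrative sketch of that idea (names mirror the component but this is not its exact code): `offset` state grows by `LIMIT` on each "load more", while the query always starts at row 0 and asks for `offset + LIMIT` rows, so the already-rendered list grows in place instead of being replaced page by page.

```typescript
const LIMIT = 10;

// Query args for the current "load more" state: offset stays 0,
// limit covers every row fetched so far plus one more page.
function reviewQueryArgs(offset: number): { offset: number; limit: number } {
  return { offset: 0, limit: offset + LIMIT };
}

console.log(reviewQueryArgs(0));  // { offset: 0, limit: 10 }
console.log(reviewQueryArgs(20)); // { offset: 0, limit: 30 }
```

The trade-off is that each "load more" refetches earlier rows, in exchange for never resetting the rendered array.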


@@ -1,11 +1,16 @@
import ReactMarkdown from "react-markdown";
type ProductDetailDescriptionSectionProps = { type ProductDetailDescriptionSectionProps = {
/** Product description (HTML or plain text); server-rendered in initial HTML per SEO. */ /** Product description (Markdown); server-rendered in initial HTML per SEO. */
description?: string | null; description?: string | null;
}; };
const descriptionClasses =
"max-w-3xl text-sm leading-relaxed text-[var(--foreground)]/80 [&_ol]:mb-3 [&_ol]:list-inside [&_ol]:list-decimal [&_p:last-child]:mb-0 [&_p]:mb-3 [&_ul]:mb-3 [&_ul]:list-inside [&_ul]:list-disc";
/** /**
* Description content for the PDP tabs section. * Description content for the PDP tabs section.
* Renders inside a parent <section> provided by ProductDetailTabsSection. * Renders Markdown as HTML inside a parent <section> provided by ProductDetailTabsSection.
* If empty, shows a short fallback message. * If empty, shows a short fallback message.
*/ */
export function ProductDetailDescriptionSection({ export function ProductDetailDescriptionSection({
@@ -15,10 +20,9 @@ export function ProductDetailDescriptionSection({
typeof description === "string" && description.trim().length > 0; typeof description === "string" && description.trim().length > 0;
return hasContent ? ( return hasContent ? (
<div <div className={descriptionClasses}>
className="max-w-3xl text-sm leading-relaxed text-[var(--foreground)]/80 [&_ol]:mb-3 [&_ol]:list-inside [&_ol]:list-decimal [&_p:last-child]:mb-0 [&_p]:mb-3 [&_ul]:mb-3 [&_ul]:list-inside [&_ul]:list-disc" <ReactMarkdown>{description!.trim()}</ReactMarkdown>
dangerouslySetInnerHTML={{ __html: description!.trim() }} </div>
/>
) : ( ) : (
<p className="text-sm text-[var(--muted)]">No description available.</p> <p className="text-sm text-[var(--muted)]">No description available.</p>
); );


@@ -0,0 +1,15 @@
# Runtime secrets for staging containers.
# Copy this file to /opt/staging/.env on the VPS and fill in the values.
# NEXT_PUBLIC_* vars are already baked into the Docker images at build time —
# only server-side secrets that Next.js reads at runtime go here.
# Storefront — Clerk server-side key
CLERK_SECRET_KEY=
# Admin — Clerk server-side key (different Clerk instance)
# Add a second .env or use per-service env_file if keys differ per container.
# For now a single .env is shared; storefront ignores keys it doesn't use.
# Stripe (used by storefront checkout server actions if any)
STRIPE_SECRET_KEY=
STRIPE_WEBHOOK_SECRET=


@@ -0,0 +1,25 @@
name: petloft-staging

services:
  storefront:
    image: ${REGISTRY}/storefront:staging
    restart: unless-stopped
    ports:
      - "3001:3000"
    env_file:
      - path: .env
        required: false
    environment:
      - CLERK_SECRET_KEY

  admin:
    image: ${REGISTRY}/admin:staging
    restart: unless-stopped
    ports:
      - "3002:3001"
    env_file:
      - path: .env
        required: false
    environment:
      - CLERK_SECRET_KEY=${ADMIN_CLERK_SECRET_KEY}
      - CLOUDINARY_API_SECRET

package-lock.json (generated; 1183 lines changed)

File diff suppressed because it is too large


@@ -30,6 +30,7 @@
     "react-dom": "^19.2.4",
     "stripe": "^20.4.0",
     "svix": "^1.86.0",
+    "tailwind-merge": "^3.4.0",
     "tailwindcss": "^4.2.0"
   },
   "devDependencies": {