Compare commits

18 Commits

Author SHA1 Message Date
admin
a3be4ead1d fix: remove jq from health check, response is not JSON 2026-03-04 21:11:02 +00:00
admin
8765608603 fix: scp dotfile bug, remote mkdir, registry auth, SSH -T flag 2026-03-04 20:31:20 +00:00
admin
9d0240bf3f fix: use registry hostname as auth key, include owner in REGISTRY_URL 2026-03-04 20:22:01 +00:00
admin
b86c76a2fc fix: bypass docker login, pre-populate auth config for HTTP registry push 2026-03-04 20:18:20 +00:00
admin
392d4925cf fix: use DOCKER_HOST for pack --docker-host to fix nested container socket mount 2026-03-04 20:06:23 +00:00
admin
ec65a99b99 fix: use docker driver for buildx to work with rootless Podman 2026-03-04 20:02:35 +00:00
920c910c23 Fix build workflow: split Save image artifact into two steps
- Separate docker save command from upload-artifact action
- Fix invalid syntax error (cannot use both uses: and run: in same step)
- Steps now properly separated: Save image to file, then Upload image artifact
2025-11-21 16:36:00 +03:00
1db56d6741 Fix lint errors: add error checks for errcheck linter
All checks were successful
Run Tests / Run Go Tests (push) Successful in 2m24s
Run Tests / Lint Code (push) Successful in 3m36s
- Add error check for JSON response in panic handler
- Remove duplicate defer CloseDB() call (handled in shutdown)
- Add error checks for all WriteField() calls in test files
- Add error checks for CreateFormFile() and Write() calls
- Fix golangci-lint Go 1.25 compatibility by installing from source
2025-11-21 15:55:12 +03:00
58df1359e1 Update Gitea workflow to allow golangci-lint failures and install from source for Go 1.25 compatibility. Comment out coverage upload step.
Some checks failed
Run Tests / Run Go Tests (push) Successful in 43s
Run Tests / Lint Code (push) Failing after 3m0s
2025-11-21 15:48:08 +03:00
971c62afc5 Fix workflow path filters for repository root structure
Some checks failed
Run Tests / Run Go Tests (push) Successful in 4m21s
Run Tests / Lint Code (push) Failing after 54s
2025-11-21 15:27:02 +03:00
1d37f50604 workflow commit trigger 2025-11-21 15:20:31 +03:00
a43658af84 Merge pull request 'feature/clean-public-urls' (#1) from feature/clean-public-urls into main
Reviewed-on: http://72.61.144.167:3000/admin/jd-book-uploader-backend/pulls/1
2025-11-21 12:14:34 +00:00
ca62651c3c chore: update .gitignore to include additional Gitea workflow files
- Added .gitea/workflows/deploy.yml and .gitea/workflows/build.yml to .gitignore to prevent tracking of these workflow configuration files.
2025-11-21 15:13:15 +03:00
a52b2d13f0 Add Gitea workflows for CI/CD pipeline 2025-11-21 15:03:31 +03:00
ianshaloom
528ae0072e fix: correct heading in Gitea workflow documentation
- Updated the heading from "Related Documentation" to "Related Documentations" for consistency in the .gitea/workflows/README.md file.
2025-11-21 14:56:04 +03:00
ianshaloom
27f6629308 chore: remove unnecessary blank line in Gitea workflow configuration
- Cleaned up the .gitea/workflows/test.yml file by removing an extra blank line for better readability.
2025-11-21 14:37:12 +03:00
ianshaloom
9a5508ffe8 chore: update .gitignore to include Gitea workflow files
- Added .gitea/workflows/deploy.yml and .gitea/workflows/build.yml to .gitignore to prevent tracking of workflow configuration files.
2025-11-21 14:23:50 +03:00
ianshaloom
d5b78d7449 feat: implement clean public URLs and slug-based image storage
- Update Firebase service to use clean GCS URLs instead of MediaLink
- Set ACL to make uploaded files publicly accessible
- Use slug as image filename to prevent overwrites
- Add unique constraints on slug columns in books and stationery tables
- Add slug existence checks before upload (409 Conflict if duplicate)
- Update storage paths: /jd-bookshop/books and /jd-bookshop/stationery
- Remove year/month from storage paths as requested
- Construct URLs: https://storage.googleapis.com/{bucket}/{encoded-path}

Changes:
- firebase.go: Set ACL, construct clean URLs using url.PathEscape
- book.go & stationery.go: Check slug before upload, use correct paths
- book_service.go & stationery_service.go: Add slug existence check functions
- migration: Add UNIQUE constraints on slug columns
2025-11-21 10:47:01 +03:00
10 changed files with 694 additions and 45 deletions

168
.gitea/workflows/README.md Normal file
View File

@@ -0,0 +1,168 @@
# Gitea Workflows
This directory contains Gitea Actions workflows for CI/CD Pipelines.
## Workflows
### `build.yml` - Build Application Image
Builds the application image using Cloud Native Buildpacks.
**Triggers:**
- Push to `main`, `production`, or `develop` branches
- Pull requests to `main` or `production`
- Manual workflow dispatch
**Outputs:**
- Docker image (tagged and optionally pushed to registry)
- Image artifact (if no registry configured)
### `deploy.yml` - Deploy to Production
Deploys the built image to production server.
**Triggers:**
- After successful `build.yml` workflow completion
- Manual workflow dispatch (with image tag input)
**Process:**
- Downloads image artifact or pulls from registry
- Transfers deployment files to production server
- Mounts Firebase credentials securely
- Starts container and verifies health
### `test.yml` - Run Tests
Runs Go tests and linting.
**Triggers:**
- Push to `main` or `develop` branches
- Pull requests to `main` or `develop`
**Jobs:**
- `test` - Runs Go tests with coverage
- `lint` - Runs golangci-lint
#### Triggers
- Push to `main` or `production` branches (when `backend/**` files change)
- Manual workflow dispatch with environment selection
## Workflow Flow
```
Push to main/production
[build.yml] → Builds image → Pushes to registry (optional)
[deploy.yml] → Deploys to production → Verifies health
```
**Manual Deployment:**
1. Run `build.yml` manually (or wait for push)
2. Run `deploy.yml` manually with image tag
#### Required Secrets
Configure these secrets in Gitea repository settings:
**Build Secrets:**
- `FRONTEND_URL` - Frontend application URL
- `DB_HOST` - Database host
- `DB_PORT` - Database port
- `DB_USER` - Database username
- `DB_PASSWORD` - Database password
- `DB_NAME` - Database name
- `FIREBASE_PROJECT_ID` - Firebase project ID
- `FIREBASE_STORAGE_BUCKET` - Firebase storage bucket name
**Deployment Secrets:**
- `DEPLOY_HOST` - Production server hostname/IP
- `DEPLOY_USER` - SSH user for deployment
- `DEPLOY_PATH` - Deployment directory on server
- `SSH_PRIVATE_KEY` - SSH private key for server access
- `SSH_KNOWN_HOSTS` - SSH known hosts entry
- `FIREBASE_CREDENTIALS_FILE_PATH` - Path to Firebase credentials file on server
- `PORT` - Application port (default: 8080)
**Optional Secrets:**
- `REGISTRY_URL` - Container registry URL (if using registry)
- `REGISTRY_USERNAME` - Registry username
- `REGISTRY_PASSWORD` - Registry password
- `NOTIFICATION_WEBHOOK` - Webhook URL for deployment notifications
#### Security Considerations
1. **Firebase Credentials:**
- Credentials are **NOT** included in the build
- Credentials are mounted at runtime on the production server
- File must exist on production server at path specified in `FIREBASE_CREDENTIALS_FILE_PATH`
- Mounted with read-only and SELinux shared context (`:ro,z`)
2. **Database Credentials:**
- Stored as Gitea secrets
- Passed as environment variables at runtime
- Never committed to repository
3. **SSH Access:**
- Uses SSH key authentication
- Private key stored as Gitea secret
- Known hosts verified
#### Deployment Process
1. **Build Phase:**
- Checks out code
- Sets up Docker and Pack CLI
- Configures Docker socket (handles rootless Docker)
- Builds image using Pack with `--docker-host` flag
- Tags and optionally pushes to registry
2. **Deploy Phase:**
- Transfers deployment files to production server
- Transfers image (if not using registry)
- Creates `.env.production` on server
- Runs deployment script that:
- Stops existing container
- Mounts Firebase credentials (read-only)
- Starts new container
- Verifies deployment with health check
- Rolls back on failure
#### Manual Deployment
To trigger manual deployment:
1. Go to Gitea repository → Actions → Workflows
2. Select "Production Deployment"
3. Click "Run workflow"
4. Select environment (production/staging)
5. Click "Run workflow"
#### Troubleshooting
**Build fails with Docker permission error:**
- Ensure Docker socket is accessible
- Check `PACK_DOCKER_HOST` is set correctly
- Verify `--docker-host` flag is being passed to pack
**Deployment fails with Firebase credentials error:**
- Verify credentials file exists on server at specified path
- Check file permissions: `chmod 644 firebase-credentials.json`
- Ensure SELinux allows access (use `:z` flag in mount)
**SSH connection fails:**
- Verify SSH key is correct
- Check known hosts entry
- Ensure user has access to deployment directory
**Health check fails:**
- Check container logs: `podman logs jd-book-uploader`
- Verify port is accessible
- Check firewall rules
## Related Documentations
- `../../deployment/docs/pack-docker-permissions-fix.md` - Pack Docker permissions fix
- `../../deployment/docs/secrets-management.md` - Secrets management guide

137
.gitea/workflows/build.yml Normal file
View File

@@ -0,0 +1,137 @@
name: Build Application Image
on:
workflow_run:
workflows: ["Run Tests"]
types:
- completed
branches:
- main
- production
workflow_dispatch:
inputs:
image_tag:
description: 'Image tag (default: latest)'
required: false
default: 'latest'
env:
IMAGE_NAME: jd-book-uploader
IMAGE_TAG: ${{ inputs.image_tag || 'latest' }}
REGISTRY: ${{ secrets.REGISTRY_URL || '' }}
jobs:
build:
name: Build with Pack
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
outputs:
image: ${{ steps.image.outputs.full }}
image-digest: ${{ steps.build.outputs.digest }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker
- name: Configure Docker Socket
run: |
# Prefer DOCKER_HOST if set (runner injects the real host socket path).
# This ensures pack passes the correct host path to lifecycle containers,
# which Podman can bind-mount without "mkdir permission denied".
if [ -n "$DOCKER_HOST" ]; then
echo "PACK_DOCKER_HOST=$DOCKER_HOST" >> $GITEA_ENV
elif [ -S "/run/user/$(id -u)/docker.sock" ]; then
echo "PACK_DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock" >> $GITEA_ENV
elif [ -S "/var/run/docker.sock" ]; then
echo "PACK_DOCKER_HOST=unix:///var/run/docker.sock" >> $GITEA_ENV
else
echo "Error: Docker socket not found"
exit 1
fi
docker info
- name: Install Pack CLI
run: |
PACK_VERSION="0.32.0"
wget -q "https://github.com/buildpacks/pack/releases/download/v${PACK_VERSION}/pack-v${PACK_VERSION}-linux.tgz"
tar -xzf "pack-v${PACK_VERSION}-linux.tgz"
sudo mv pack /usr/local/bin/
pack --version
- name: Set default builder
run: |
pack config default-builder paketobuildpacks/builder-jammy-tiny:latest
- name: Prepare build environment
run: |
# Create .env.production for build (no secrets, just structure)
cat > .env.production << EOF
PORT=8080
# Database and Firebase config loaded at runtime
EOF
- name: Build image
id: build
env:
PACK_DOCKER_HOST: ${{ env.PACK_DOCKER_HOST }}
run: |
PACK_ARGS=(
"${IMAGE_NAME}:${IMAGE_TAG}"
--path .
)
if [ -n "$PACK_DOCKER_HOST" ]; then
PACK_ARGS+=(--docker-host "$PACK_DOCKER_HOST")
fi
if [ -f ".env.production" ]; then
PACK_ARGS+=(--env-file .env.production)
fi
pack build "${PACK_ARGS[@]}"
IMAGE_DIGEST=$(docker inspect "${IMAGE_NAME}:${IMAGE_TAG}" --format='{{.Id}}')
echo "digest=${IMAGE_DIGEST}" >> $GITEA_OUTPUT
- name: Tag image
id: image
run: |
if [ -n "${{ env.REGISTRY }}" ]; then
FULL_IMAGE="${{ env.REGISTRY }}/${IMAGE_NAME}:${IMAGE_TAG}"
docker tag "${IMAGE_NAME}:${IMAGE_TAG}" "${FULL_IMAGE}"
echo "full=${FULL_IMAGE}" >> $GITEA_OUTPUT
else
echo "full=${IMAGE_NAME}:${IMAGE_TAG}" >> $GITEA_OUTPUT
fi
- name: Push to registry
if: env.REGISTRY != ''
run: |
# docker login sends POST /auth to Podman which incorrectly tries HTTPS even for
# insecure registries. Pre-populate config.json instead — docker push goes through
# the Podman daemon which correctly uses HTTP (insecure=true in registries.conf).
mkdir -p ~/.docker
AUTH=$(echo -n "${{ secrets.REGISTRY_USERNAME }}:${{ secrets.REGISTRY_PASSWORD }}" | base64 -w 0)
# Auth key must be the registry hostname only (e.g. host:port), not the full path
REGISTRY_HOST=$(echo "${{ env.REGISTRY }}" | cut -d'/' -f1)
echo "{\"auths\":{\"${REGISTRY_HOST}\":{\"auth\":\"${AUTH}\"}}}" > ~/.docker/config.json
docker push "${{ steps.image.outputs.full }}"
- name: Save image to file
if: env.REGISTRY == ''
run: |
docker save "${IMAGE_NAME}:${IMAGE_TAG}" -o /tmp/image.tar
- name: Upload image artifact
if: env.REGISTRY == ''
uses: actions/upload-artifact@v4
with:
name: docker-image
path: /tmp/image.tar
retention-days: 1

195
.gitea/workflows/deploy.yml Normal file
View File

@@ -0,0 +1,195 @@
name: Deploy to Production
on:
workflow_run:
workflows: ["Build Application Image"]
types:
- completed
branches:
- main
- production
workflow_dispatch:
inputs:
image_tag:
description: 'Image tag to deploy'
required: true
default: 'latest'
env:
IMAGE_NAME: jd-book-uploader
IMAGE_TAG: ${{ inputs.image_tag || 'latest' }}
REGISTRY: ${{ secrets.REGISTRY_URL || '' }}
jobs:
deploy:
name: Deploy to Production
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
environment: production
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up SSH
uses: webfactory/ssh-agent@v0.9.0
with:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
- name: Configure SSH
run: |
mkdir -p ~/.ssh
echo "${{ secrets.SSH_KNOWN_HOSTS }}" >> ~/.ssh/known_hosts
chmod 600 ~/.ssh/known_hosts
- name: Download image artifact
if: env.REGISTRY == ''
uses: actions/download-artifact@v4
with:
name: docker-image
workflow: build.yml
run-id: ${{ github.event.workflow_run.id }}
- name: Prepare deployment files
run: |
mkdir -p deployment/tmp
# Create .env.production
cat > deployment/tmp/.env.production << EOF
PORT=${{ secrets.PORT || '8080' }}
FRONTEND_URL=${{ secrets.FRONTEND_URL }}
DB_HOST=${{ secrets.DB_HOST }}
DB_PORT=${{ secrets.DB_PORT }}
DB_USER=${{ secrets.DB_USER }}
DB_PASSWORD=${{ secrets.DB_PASSWORD }}
DB_NAME=${{ secrets.DB_NAME }}
FIREBASE_PROJECT_ID=${{ secrets.FIREBASE_PROJECT_ID }}
FIREBASE_STORAGE_BUCKET=${{ secrets.FIREBASE_STORAGE_BUCKET }}
FIREBASE_CREDENTIALS_FILE=${{ secrets.FIREBASE_CREDENTIALS_FILE_PATH || './firebase-credentials.json' }}
EOF
# Create deployment script
cat > deployment/tmp/deploy.sh << 'DEPLOY_SCRIPT'
#!/bin/bash
set -e
IMAGE_NAME="${{ env.IMAGE_NAME }}"
IMAGE_TAG="${{ env.IMAGE_TAG }}"
CONTAINER_NAME="jd-book-uploader"
set -a
source .env.production
set +a
# Stop existing container
if podman ps -a --format "{{.Names}}" | grep -q "^${CONTAINER_NAME}$"; then
podman stop "${CONTAINER_NAME}" 2>/dev/null || true
podman rm "${CONTAINER_NAME}" 2>/dev/null || true
fi
# Load image if artifact provided
if [ -f image.tar ]; then
podman load -i image.tar
rm -f image.tar
fi
# Pull from registry if configured
if [ -n "${{ env.REGISTRY }}" ]; then
podman pull --tls-verify=false "${{ env.REGISTRY }}/${IMAGE_NAME}:${IMAGE_TAG}"
podman tag "${{ env.REGISTRY }}/${IMAGE_NAME}:${IMAGE_TAG}" "${IMAGE_NAME}:${IMAGE_TAG}"
fi
# Build run command
PODMAN_CMD=(
podman run -d
--name "${CONTAINER_NAME}"
--network=host
--user root
--restart=unless-stopped
)
# Add environment variables
while IFS='=' read -r key value; do
[[ "$key" =~ ^#.*$ ]] && continue
[[ -z "$key" ]] && continue
value=$(echo "$value" | sed -e 's/^"//' -e 's/"$//' -e "s/^'//" -e "s/'$//")
if [ "$key" != "FIREBASE_CREDENTIALS_FILE" ]; then
PODMAN_CMD+=(-e "${key}=${value}")
fi
done < .env.production
# Mount Firebase credentials
FIREBASE_CREDS="${FIREBASE_CREDENTIALS_FILE}"
if [ -f "$FIREBASE_CREDS" ]; then
PODMAN_CMD+=(-v "${FIREBASE_CREDS}:/app/firebase-credentials.json:ro,z")
PODMAN_CMD+=(-e "FIREBASE_CREDENTIALS_FILE=/app/firebase-credentials.json")
fi
PODMAN_CMD+=("${IMAGE_NAME}:${IMAGE_TAG}")
"${PODMAN_CMD[@]}"
sleep 3
if podman ps --format "{{.Names}}" | grep -q "^${CONTAINER_NAME}$"; then
echo "✓ Container started"
podman logs "${CONTAINER_NAME}" --tail 20
else
echo "✗ Container failed"
podman logs "${CONTAINER_NAME}" --tail 50
exit 1
fi
DEPLOY_SCRIPT
chmod +x deployment/tmp/deploy.sh
- name: Transfer files
run: |
# Ensure remote deployment directory exists
ssh ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }} "mkdir -p ${{ secrets.DEPLOY_PATH }}/deployment"
# Copy files explicitly — glob (*) skips dotfiles like .env.production
scp deployment/tmp/deploy.sh ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}:${{ secrets.DEPLOY_PATH }}/deployment/
scp deployment/tmp/.env.production ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}:${{ secrets.DEPLOY_PATH }}/deployment/
if [ -f image.tar ]; then
scp image.tar ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}:${{ secrets.DEPLOY_PATH }}/image.tar
fi
- name: Deploy
run: |
ssh -T ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }} << ENDSSH
set -e
cd ${{ secrets.DEPLOY_PATH }}
if [ -f image.tar ]; then
podman load -i image.tar
rm -f image.tar
fi
if [ ! -f "${{ secrets.FIREBASE_CREDENTIALS_FILE_PATH || './firebase-credentials.json' }}" ]; then
echo "Error: Firebase credentials not found"
exit 1
fi
if [ -n "${{ env.REGISTRY }}" ]; then
echo "${{ secrets.REGISTRY_PASSWORD }}" | podman login "${{ env.REGISTRY }}" -u "${{ secrets.REGISTRY_USERNAME }}" --password-stdin --tls-verify=false
fi
cd deployment
./deploy.sh
ENDSSH
- name: Verify deployment
run: |
sleep 5
HEALTH_URL="http://${{ secrets.DEPLOY_HOST }}:${{ secrets.PORT || '8080' }}/api/health"
for i in {1..10}; do
if curl -f -s "$HEALTH_URL" > /dev/null; then
echo "✓ Health check passed"
curl -s "$HEALTH_URL"
exit 0
fi
sleep 3
done
echo "✗ Health check failed"
exit 1

84
.gitea/workflows/test.yml Normal file
View File

@@ -0,0 +1,84 @@
name: Run Tests
on:
push:
branches:
- main
- production
- develop
paths:
- '**/*.go'
- 'go.mod'
- 'go.sum'
- '.gitea/workflows/test.yml'
pull_request:
branches:
- main
- production
- develop
paths:
- '**/*.go'
- 'go.mod'
- 'go.sum'
- '.gitea/workflows/test.yml'
jobs:
test:
name: Run Go Tests
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.25'
- name: Cache Go modules
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Install dependencies
run: go mod download
- name: Run tests
run: go test -v -race -coverprofile=coverage.out ./...
# - name: Upload coverage
# uses: codecov/codecov-action@v4
# if: always()
# with:
# file: ./coverage.out
# flags: backend
# name: backend-coverage
lint:
name: Lint Code
runs-on: ubuntu-latest
continue-on-error: true # Allow failure until golangci-lint supports Go 1.25
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.25'
- name: Install golangci-lint from source
run: |
# Install golangci-lint from source using Go 1.25 to ensure compatibility
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
echo "$(go env GOPATH)/bin" >> $GITHUB_PATH
- name: Run golangci-lint
run: |
golangci-lint --version
golangci-lint run --timeout=5m

4
.gitignore vendored
View File

@@ -49,3 +49,7 @@ build/
.env.production .env.production
.env.local .env.local
.env.production.example .env.production.example
# Gitea workflows
.gitea/workflows/deploy.yml
.gitea/workflows/build.yml

View File

@@ -22,18 +22,37 @@ func TestUploadBook(t *testing.T) {
writer := multipart.NewWriter(body) writer := multipart.NewWriter(body)
// Add form fields // Add form fields
writer.WriteField("book_name", "Test Book") if err := writer.WriteField("book_name", "Test Book"); err != nil {
writer.WriteField("cost", "10.50") t.Fatalf("Failed to write field: %v", err)
writer.WriteField("price", "15.99") }
writer.WriteField("quantity", "100") if err := writer.WriteField("cost", "10.50"); err != nil {
writer.WriteField("publisher_author", "Test Publisher") t.Fatalf("Failed to write field: %v", err)
writer.WriteField("category", "Fiction") }
if err := writer.WriteField("price", "15.99"); err != nil {
t.Fatalf("Failed to write field: %v", err)
}
if err := writer.WriteField("quantity", "100"); err != nil {
t.Fatalf("Failed to write field: %v", err)
}
if err := writer.WriteField("publisher_author", "Test Publisher"); err != nil {
t.Fatalf("Failed to write field: %v", err)
}
if err := writer.WriteField("category", "Fiction"); err != nil {
t.Fatalf("Failed to write field: %v", err)
}
// Add image file // Add image file
part, _ := writer.CreateFormFile("image", "test.png") part, err := writer.CreateFormFile("image", "test.png")
part.Write([]byte("fake image data")) if err != nil {
t.Fatalf("Failed to create form file: %v", err)
}
if _, err := part.Write([]byte("fake image data")); err != nil {
t.Fatalf("Failed to write image data: %v", err)
}
writer.Close() if err := writer.Close(); err != nil {
t.Fatalf("Failed to close writer: %v", err)
}
req := httptest.NewRequest("POST", "/api/books", body) req := httptest.NewRequest("POST", "/api/books", body)
req.Header.Set("Content-Type", writer.FormDataContentType()) req.Header.Set("Content-Type", writer.FormDataContentType())
@@ -55,8 +74,12 @@ func TestUploadBook_ValidationErrors(t *testing.T) {
// Test missing required field // Test missing required field
body := &bytes.Buffer{} body := &bytes.Buffer{}
writer := multipart.NewWriter(body) writer := multipart.NewWriter(body)
writer.WriteField("book_name", "") // Empty book name if err := writer.WriteField("book_name", ""); err != nil { // Empty book name
writer.Close() t.Fatalf("Failed to write field: %v", err)
}
if err := writer.Close(); err != nil {
t.Fatalf("Failed to close writer: %v", err)
}
req := httptest.NewRequest("POST", "/api/books", body) req := httptest.NewRequest("POST", "/api/books", body)
req.Header.Set("Content-Type", writer.FormDataContentType()) req.Header.Set("Content-Type", writer.FormDataContentType())

View File

@@ -22,18 +22,37 @@ func TestUploadStationery(t *testing.T) {
writer := multipart.NewWriter(body) writer := multipart.NewWriter(body)
// Add form fields // Add form fields
writer.WriteField("stationery_name", "Test Pen") if err := writer.WriteField("stationery_name", "Test Pen"); err != nil {
writer.WriteField("cost", "2.50") t.Fatalf("Failed to write field: %v", err)
writer.WriteField("price", "5.99") }
writer.WriteField("quantity", "200") if err := writer.WriteField("cost", "2.50"); err != nil {
writer.WriteField("category", "Writing") t.Fatalf("Failed to write field: %v", err)
writer.WriteField("color", "Blue") }
if err := writer.WriteField("price", "5.99"); err != nil {
t.Fatalf("Failed to write field: %v", err)
}
if err := writer.WriteField("quantity", "200"); err != nil {
t.Fatalf("Failed to write field: %v", err)
}
if err := writer.WriteField("category", "Writing"); err != nil {
t.Fatalf("Failed to write field: %v", err)
}
if err := writer.WriteField("color", "Blue"); err != nil {
t.Fatalf("Failed to write field: %v", err)
}
// Add image file // Add image file
part, _ := writer.CreateFormFile("image", "test.png") part, err := writer.CreateFormFile("image", "test.png")
part.Write([]byte("fake image data")) if err != nil {
t.Fatalf("Failed to create form file: %v", err)
}
if _, err := part.Write([]byte("fake image data")); err != nil {
t.Fatalf("Failed to write image data: %v", err)
}
writer.Close() if err := writer.Close(); err != nil {
t.Fatalf("Failed to close writer: %v", err)
}
req := httptest.NewRequest("POST", "/api/stationery", body) req := httptest.NewRequest("POST", "/api/stationery", body)
req.Header.Set("Content-Type", writer.FormDataContentType()) req.Header.Set("Content-Type", writer.FormDataContentType())
@@ -55,8 +74,12 @@ func TestUploadStationery_ValidationErrors(t *testing.T) {
// Test missing required field // Test missing required field
body := &bytes.Buffer{} body := &bytes.Buffer{}
writer := multipart.NewWriter(body) writer := multipart.NewWriter(body)
writer.WriteField("stationery_name", "") // Empty stationery name if err := writer.WriteField("stationery_name", ""); err != nil { // Empty stationery name
writer.Close() t.Fatalf("Failed to write field: %v", err)
}
if err := writer.Close(); err != nil {
t.Fatalf("Failed to close writer: %v", err)
}
req := httptest.NewRequest("POST", "/api/stationery", body) req := httptest.NewRequest("POST", "/api/stationery", body)
req.Header.Set("Content-Type", writer.FormDataContentType()) req.Header.Set("Content-Type", writer.FormDataContentType())

View File

@@ -26,7 +26,7 @@ func main() {
if err != nil { if err != nil {
log.Fatalf("Failed to connect to database: %v", err) log.Fatalf("Failed to connect to database: %v", err)
} }
defer services.CloseDB() // Note: CloseDB is called explicitly in graceful shutdown, not in defer
log.Println("Database connected successfully") log.Println("Database connected successfully")
// Initialize Firebase // Initialize Firebase

View File

@@ -54,10 +54,12 @@ func RecoverHandler() fiber.Handler {
) )
// Return error response // Return error response
c.Status(fiber.StatusInternalServerError).JSON(fiber.Map{ if err := c.Status(fiber.StatusInternalServerError).JSON(fiber.Map{
"success": false, "success": false,
"error": "Internal Server Error", "error": "Internal Server Error",
}) }); err != nil {
log.Printf("Failed to send error response: %v", err)
}
} }
}() }()

View File

@@ -3,6 +3,7 @@ package services
import ( import (
"context" "context"
"fmt" "fmt"
"net/url"
"os" "os"
"path/filepath" "path/filepath"
"time" "time"
@@ -17,10 +18,12 @@ import (
var ( var (
FirebaseApp *firebase.App FirebaseApp *firebase.App
FirebaseClient *storage.Client FirebaseClient *storage.Client
FirebaseBucket string // Store bucket name for URL construction
) )
// InitFirebase initializes Firebase Admin SDK and Storage client // InitFirebase initializes Firebase Admin SDK and Storage client
func InitFirebase(cfg *config.Config) (*storage.Client, error) { func InitFirebase(cfg *config.Config) (*storage.Client, error) {
// Note: Returns Firebase Storage client, not GCS client
ctx := context.Background() ctx := context.Background()
// Determine credentials file path // Determine credentials file path
@@ -72,6 +75,7 @@ func InitFirebase(cfg *config.Config) (*storage.Client, error) {
FirebaseApp = app FirebaseApp = app
FirebaseClient = client FirebaseClient = client
FirebaseBucket = cfg.FirebaseStorageBucket
return client, nil return client, nil
} }
@@ -128,14 +132,23 @@ func UploadImage(ctx context.Context, imageData []byte, folderPath string, filen
return "", fmt.Errorf("failed to close writer: %w", err) return "", fmt.Errorf("failed to close writer: %w", err)
} }
// Get public URL from Firebase // Make the object publicly accessible
attrs, err := obj.Attrs(ctx) // Firebase Storage v4 uses string literals for ACL
if err != nil { acl := obj.ACL()
return "", fmt.Errorf("failed to get object attributes: %w", err) if err := acl.Set(ctx, "allUsers", "READER"); err != nil {
// Log warning but don't fail - file might still be accessible
// In some cases, bucket-level policies might already make it public
fmt.Printf("Warning: Failed to set public ACL for %s: %v\n", objectPath, err)
} }
// Use Firebase's original download link (MediaLink) // Construct clean GCS public URL
publicURL := attrs.MediaLink // Format: https://storage.googleapis.com/<bucket>/<path>
encodedPath := url.PathEscape(objectPath)
publicURL := fmt.Sprintf(
"https://storage.googleapis.com/%s/%s",
FirebaseBucket,
encodedPath,
)
return publicURL, nil return publicURL, nil
} }