Deploy

It’s a great question, Francis. Since you’re already running a Synology setup with a reverse proxy, you’ve already cleared the first hurdle of home server security (not opening a dozen random ports).
While both methods allow you to access your services via a clean URL like service.stnd.gd, they handle the “handshake” between the internet and your NAS differently.

  1. Traditional Reverse Proxy (The “Front Door”)
    In your current setup, your NAS acts like a gated community. You have one “door” open (Port 443). When someone knocks, the Reverse Proxy checks the name on the envelope (the domain) and directs the traffic to the right internal IP (like 192.168.1.50:8080).
  2. Cloudflare Tunnel (The “Secret Passage”)
    A Cloudflare Tunnel (part of their Zero Trust suite) works by installing a small piece of software (a “connector”) on your NAS or in a Docker container. Instead of waiting for the internet to “call” your router, the connector reaches out to Cloudflare and establishes a constant, outbound-only connection.
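As a concrete sketch of what the connector needs (the tunnel name, credentials path, hostnames, and internal IP/port below are illustrative placeholders, not your actual values), cloudflared reads a `config.yml` that maps public hostnames to local services:

```yaml
# cloudflared config.yml -- illustrative values; adjust to your own tunnel
tunnel: standard                                  # name/UUID of the tunnel you created
credentials-file: /root/.cloudflared/standard.json

ingress:
  # Route a public hostname to a service running on the NAS
  - hostname: service.stnd.gd
    service: http://192.168.1.50:8080
  # Everything else returns a 404 (required catch-all rule)
  - service: http_status:404
```

The catch-all rule at the end is mandatory; cloudflared refuses to start without it.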
    Key Differences at a Glance
| Feature | Reverse Proxy (Current) | Cloudflare Tunnel |
| --- | --- | --- |
| Port Forwarding | Required (Port 443) | None required (0 open ports) |
| IP Exposure | Your home IP is public | Your home IP is hidden behind Cloudflare |
| SSL Management | Usually handled by you/Synology | Handled automatically by Cloudflare |
| CGNAT | Fails if your ISP uses CGNAT | Works perfectly through CGNAT |
| Control | Full local control | Traffic must pass through Cloudflare’s servers |

Which one is better for you?
Given your interest in the IndieWeb and minimalism, you might prefer the Reverse Proxy because it keeps the connection “purer”—you own the stack. However, from a practical security standpoint for your personal projects, the Tunnel’s zero open ports and hidden home IP are hard to beat.

To automate your workflow so that your NAS “compiles on push,” you are essentially setting up a CI/CD (Continuous Integration / Continuous Deployment) pipeline. Since you are already using a monorepo with pnpm and Turborepo for your “Standard” project, this is a very efficient way to keep your digital garden or apps updated.
There are two main ways to handle this: the “Push” method (GitHub does the work) and the “Pull” method (your NAS watches GitHub).
Method 1: GitHub Actions + Docker (The Professional Way)
In this scenario, GitHub’s servers do the heavy lifting of compiling your code, and then they tell your NAS to update.

Why this is good for you: It saves your NAS CPU and RAM for other things (like Plex), as the heavy compilation happens on GitHub’s hardware.
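A minimal sketch of Method 1, assuming the build runs on GitHub’s hosted runners and then SSHes into the NAS to trigger an update script. The `NAS_HOST`/`NAS_SSH_KEY` secrets and the `/volume1/scripts/update-site.sh` path are placeholders you would need to create, and the SSH key setup on the NAS is assumed:

```yaml
# Sketch: build on GitHub's hardware, then tell the NAS to update
name: Build and Notify NAS
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest            # GitHub's machines do the compiling
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v2
        with:
          version: 9
      - run: pnpm install
      - run: pnpm build
      - name: Tell the NAS to pull the new build
        run: |
          # Placeholder host and script; assumes you've added the private key
          # as a secret and authorized it on the NAS
          echo "${{ secrets.NAS_SSH_KEY }}" > key && chmod 600 key
          ssh -i key -o StrictHostKeyChecking=accept-new \
            deploy@${{ secrets.NAS_HOST }} '/volume1/scripts/update-site.sh'
```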
Method 2: Self-Hosted GitHub Runner (The “All-on-NAS” Way)
You can install a “Runner” directly on your Synology (via Docker). This makes your NAS a literal employee of GitHub.
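One way to sketch that (using the community-maintained `myoung34/github-runner` image; the environment variable names follow that project’s conventions, so verify them against its README, and generate a registration token under your repo’s Settings → Actions → Runners) is a small compose file:

```yaml
# docker-compose.yml sketch for a self-hosted runner on the NAS
# Image and variable names follow the community myoung34/github-runner
# project; double-check against its documentation before use.
services:
  github-runner:
    image: myoung34/github-runner:latest
    restart: unless-stopped
    environment:
      REPO_URL: https://github.com/your-user/standard   # placeholder repo
      RUNNER_NAME: synology-runner
      RUNNER_TOKEN: "<registration token from GitHub>"
      RUNNER_WORKDIR: /tmp/runner/work
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock       # lets jobs run docker builds
      - /volume1/docker/runner:/tmp/runner
```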

That changes everything—you’ve already done the hard part! Since your NAS is already a GitHub Runner, it is essentially a “listening” agent that can execute commands on your local file system as soon as GitHub signals a push.
To bridge the gap between “the Runner finished a build” and “my website is now updated,” you need to define the steps in a workflow file under .github/workflows/.

  1. The “Workflow” Strategy for pnpm & Turborepo
    Since you’re using pnpm and Turborepo for your “Standard” project, you want your runner to handle the heavy lifting (caching and building) locally.
    Here is a template you should place in your repository at .github/workflows/deploy.yml.
```yaml
name: Deploy to NAS
on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: self-hosted # This targets your NAS runner
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Install pnpm
        uses: pnpm/action-setup@v2
        with:
          version: 9 # or your preferred version

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'pnpm'

      - name: Install dependencies
        run: pnpm install

      - name: Build with Turborepo
        run: pnpm build # Turborepo will handle the caching here

      - name: Deploy (Copy to Web Folder)
        run: |
          # Example: copying the 'dist' from your apps/web folder
          # to your Synology's Web Station or Docker volume path
          cp -R ./apps/web/dist/* /volume1/web/standard-garden/
```

  2. Key Considerations for your NAS
    When using a self-hosted runner on a Synology, there are two “gotchas” that usually trip people up:
    Permission Mapping
    Your runner usually runs as a specific user (often root or a dedicated github user). You need to ensure this user has Write permissions to the destination folder (e.g., /volume1/web/).
  3. Integrating with Docker (Optional)
    If your “Standard” project runs inside a Docker container on your NAS rather than just serving static files, your deploy step becomes a container rebuild-and-restart instead of a file copy.
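A sketch of what that Dockerized deploy step could look like (the image name, container name, ports, and build context are placeholders):

```yaml
# Extra workflow step for a Dockerized app (names and ports are placeholders)
- name: Rebuild and Restart App Container
  run: |
    docker build -t standard-app ./apps/web
    docker rm -f standard-app-container || true
    docker run -d --name standard-app-container -p 8080:80 standard-app
```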
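For the permission-mapping gotcha above, a cheap safeguard is a guard step that fails fast with a clear message when the runner’s user cannot write to the target (the deploy path is a placeholder):

```yaml
# Guard step: verify write access before deploying (path is a placeholder)
- name: Check write permissions
  run: |
    DEPLOY_DIR=/volume1/web/standard-garden
    if [ ! -w "$DEPLOY_DIR" ]; then
      echo "::error::$(id -un) cannot write to $DEPLOY_DIR"
      echo "Fix on the NAS with: sudo chown -R $(id -un) $DEPLOY_DIR"
      exit 1
    fi
```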

This setup is where your pnpm and Turborepo structure really starts to shine. Since you’re handling both a static build (like your digital garden/notes) and a Cloudflare SSR component (the dynamic “Standard” app), your GitHub Runner on the NAS will act as the “control center.”
Here is how you would configure the workflow to handle both simultaneously.

  1. The Strategy: Hybrid Deployment
    Because you have two different targets, your GitHub Action needs to fork its logic after the build step.
  2. Updated GitHub Workflow (.github/workflows/deploy.yml)
    This configuration uses Turborepo to build both targets and then deploys them to their respective homes.
```yaml
name: Hybrid Deploy (NAS + Cloudflare)
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: self-hosted
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup pnpm
        uses: pnpm/action-setup@v3
        with:
          version: 9

      - name: Install dependencies
        run: pnpm install

      - name: Build All (Turborepo)
        run: pnpm build # This builds both your static site and SSR app
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}

      # --- PART 1: Local Static Deploy ---
      - name: Deploy Static to NAS
        run: |
          # Assuming your static project is in apps/garden
          rsync -av --delete ./apps/garden/dist/ /volume1/web/standard-garden/

      # --- PART 2: Cloudflare SSR Deploy ---
      - name: Deploy SSR to Cloudflare
        run: |
          # Use wrangler to push your SSR app (e.g., apps/standard)
          cd apps/standard
          pnpm exec wrangler deploy
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
```

  3. Why This “Local-First” Approach Wins
    Since your NAS is the runner, you get a few unique “home lab” advantages: Turborepo’s build cache lives on your own disk (so repeat builds are fast), the static deploy is a local file copy rather than an upload, and only the final wrangler step ever leaves your network.

Yes, it absolutely can! This is a brilliant way to leverage your NAS’s resources to create a “staging” area before you push live to Cloudflare’s global network.
Since your Cloudflare SSR app is built on the Workers runtime, you can run a local emulator inside a Docker container on your Synology.
The “Preview” Tech Stack
To do this, you’ll use Miniflare (the simulator) or workerd (the actual open-source runtime Cloudflare uses). Because you are already a pnpm/Turborepo expert, the easiest way is to wrap your preview in a Docker image.

  1. Create a “Preview” Dockerfile
    In your SSR project folder (e.g., apps/standard/), add a Dockerfile.preview:

```dockerfile
FROM node:20-slim
WORKDIR /app

# Install pnpm
RUN npm install -g pnpm

# Copy the project files and build
COPY . .
RUN pnpm install
RUN pnpm build

# Use wrangler to start a local server.
# The --ip 0.0.0.0 flag is crucial so traffic from outside the container
# (i.e., your NAS and LAN) can reach it.
EXPOSE 8787
CMD ["pnpm", "exec", "wrangler", "dev", "--ip", "0.0.0.0", "--port", "8787", "--local"]
```

  2. Update your GitHub Runner Workflow
    Now, when you push to a branch (not main), you can tell your runner to update the preview container:

```yaml
# Add this job to your existing workflow; it runs for non-main branches
deploy-preview:
  if: github.ref != 'refs/heads/main'
  runs-on: self-hosted
  steps:
    - uses: actions/checkout@v4
    - name: Build and Restart Preview Container
      run: |
        docker build -t standard-preview -f apps/standard/Dockerfile.preview .
        docker rm -f standard-preview-container || true
        docker run -d --name standard-preview-container -p 8787:8787 standard-preview
```

  3. Accessing the Preview
    Once the container is running on port 8787, you can use your Reverse Proxy skills to make it accessible: add a reverse-proxy rule (e.g., preview.stnd.gd → localhost:8787), just as you did for your other services.
