Deploy
It’s a great question, Francis. Since you’re already running a Synology setup with a reverse proxy, you’ve already cleared the first hurdle of home server security (not opening a dozen random ports).
While both methods allow you to access your services via a clean URL like service.stnd.gd, they handle the “handshake” between the internet and your NAS differently.
- Traditional Reverse Proxy (The “Front Door”)
In your current setup, your NAS acts like a gated community. You have one “door” open (Port 443). When someone knocks, the Reverse Proxy checks the name on the envelope (the domain) and directs the traffic to the right internal IP (like 192.168.1.50:8080).
- How it works: You must configure Port Forwarding on your router to send all traffic from 443 to your NAS.
- Visibility: Your home’s public IP address is visible to the internet. If someone pings your domain, they see exactly where your house is.
- Security: You are responsible for the firewall, SSL certificate renewal (usually via Let’s Encrypt), and filtering out bad actors.
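Conceptually, Synology’s Reverse Proxy just generates nginx rules behind the scenes. A hand-written equivalent looks roughly like the sketch below (the domain, internal IP, and certificate paths are placeholders):

```nginx
# Sketch: one public "door" (443) routing requests by hostname
server {
    listen 443 ssl;
    server_name service.stnd.gd;

    # Placeholder paths; Synology manages its certificates itself
    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        # Hand the request to the internal service on your LAN
        proxy_pass http://192.168.1.50:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```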
- Cloudflare Tunnel (The “Secret Passage”)
A Cloudflare Tunnel (part of their Zero Trust suite) works by installing a small piece of software (a “connector”) on your NAS or in a Docker container. Instead of waiting for the internet to “call” your router, the connector reaches out to Cloudflare and establishes a constant, outbound-only connection.
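As a minimal sketch, the connector can run as a Docker container on the NAS (the token is a placeholder you copy from the Cloudflare Zero Trust dashboard):

```yaml
# docker-compose.yml sketch for the cloudflared connector
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    # Outbound-only: note that no ports are published on the NAS
    command: tunnel --no-autoupdate run --token <YOUR_TUNNEL_TOKEN>
```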
Key Differences at a Glance
| Feature | Reverse Proxy (Current) | Cloudflare Tunnel |
|---|---|---|
| Port Forwarding | Required (Port 443) | None required (0 open ports) |
| IP Exposure | Your home IP is public | Your home IP is hidden behind Cloudflare |
| SSL Management | Usually handled by you/Synology | Handled automatically by Cloudflare |
| CGNAT | Fails if your ISP uses CGNAT | Works perfectly through CGNAT |
| Control | Full local control | Traffic must pass through Cloudflare’s servers |
Which one is better for you?
Given your interest in the IndieWeb and minimalism, you might prefer the Reverse Proxy because it keeps the connection “purer”—you own the stack. However, from a practical security standpoint for your personal projects:
- Stick with Reverse Proxy if: You want absolute privacy from third-party intermediaries and don’t mind managing your router settings.
- Switch to Cloudflare Tunnel if: You want to close all ports on your router, or if you want to use Cloudflare’s “Access” features (like adding a Google or GitHub login screen in front of your services without coding it yourself).
One small “catch” with Tunnels: since all traffic goes through Cloudflare, they do have a Terms of Service regarding heavy media streaming. If you are using this for your Plex server and high-bitrate 4K movies, a traditional Reverse Proxy is often more reliable and avoids potential flagging from Cloudflare.
Would you like me to walk you through how to set up a Cloudflare Tunnel in a Docker container on your Synology to test it out?
To automate your workflow so that your NAS “compiles on push,” you are essentially setting up a CI/CD (Continuous Integration / Continuous Deployment) pipeline. Since you are already using a monorepo with pnpm and Turborepo for your “Standard” project, this is a very efficient way to keep your digital garden or apps updated.
There are two main ways to handle this: the “Push” method (GitHub does the work) and the “Pull” method (your NAS watches GitHub).
Method 1: GitHub Actions + Docker (The Professional Way)
In this scenario, GitHub’s servers do the heavy lifting of compiling your code, and then they tell your NAS to update.
- The Workflow: You create a .github/workflows/deploy.yml file.
- The Build: On every push, GitHub Actions runs pnpm install and pnpm build.
- The Package: It packages the result into a Docker Image and pushes it to the GitHub Container Registry (GHCR).
- The Deploy: You use a “Webhook” or a tool like Watchtower on your Synology. When it sees a new image on GHCR, it automatically pulls it and restarts your container.
Why this is good for you: It saves your NAS CPU and RAM for other things (like Plex), as the heavy compilation happens on GitHub’s hardware.
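A minimal sketch of the NAS side of this flow, assuming Watchtower and a placeholder GHCR image name:

```yaml
# docker-compose.yml sketch: Watchtower watches GHCR for new images
services:
  standard-app:
    image: ghcr.io/<your-user>/standard:latest
    restart: unless-stopped
  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      # Watchtower needs the Docker socket to pull and restart containers
      - /var/run/docker.sock:/var/run/docker.sock
    # Check every 5 minutes; only watch the standard-app container
    command: --interval 300 standard-app
```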
Method 2: Self-Hosted GitHub Runner (The “All-on-NAS” Way)
You can install a “Runner” directly on your Synology (via Docker). This makes your NAS a literal employee of GitHub.
- How it works: When you push code, GitHub sends the instructions to your NAS.
- Execution: Your NAS pulls the code locally, runs your pnpm commands, and moves the files to your web directory.
- Security: This is very secure because the Runner makes an outbound connection to GitHub—no need to open any ports or use your reverse proxy for the build process.
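One way to sketch this in Docker, assuming the community myoung34/github-runner image (the repo URL and token are placeholders):

```yaml
# docker-compose.yml sketch for a self-hosted runner on the NAS
services:
  github-runner:
    image: myoung34/github-runner:latest
    restart: unless-stopped
    environment:
      RUNNER_NAME: synology-nas
      REPO_URL: https://github.com/<your-user>/<your-repo>
      ACCESS_TOKEN: <YOUR_GITHUB_PAT>  # personal access token placeholder
    volumes:
      # Lets workflow jobs run docker build/run commands on the host
      - /var/run/docker.sock:/var/run/docker.sock
```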
Method 3: Simple Webhooks (The “Minimalist” Way)
If you aren’t using Docker for your specific project and just want to pull files:
- Script: Create a simple Bash script on your NAS that does:

```bash
cd /volume1/web/standard-garden
git pull
pnpm install
pnpm build
```

- Webhook: Use a small Docker container like adnanh/webhook.
- Trigger: You tell GitHub: “Every time I push, ping this URL: https://webhook.stnd.gd/deploy.”
- Result: Your Reverse Proxy hits the webhook container, which triggers your script.
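adnanh/webhook reads its hook definitions from a hooks.json file. A minimal sketch, where the id, script path, and secret are placeholders and GitHub’s HMAC signature check is enabled so random visitors can’t trigger deploys:

```json
[
  {
    "id": "deploy",
    "execute-command": "/volume1/scripts/deploy.sh",
    "trigger-rule": {
      "match": {
        "type": "payload-hmac-sha256",
        "secret": "<YOUR_WEBHOOK_SECRET>",
        "parameter": {
          "source": "header",
          "name": "X-Hub-Signature-256"
        }
      }
    }
  }
]
```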
Which one fits your “Standard” workflow?
Given that you are using Turborepo, Method 1 is usually the winner. Turborepo is designed to be smart about caching builds, and GitHub Actions can store those caches so your deployments take seconds instead of minutes.
Would you like me to generate a GitHub Action YAML file specifically configured for your Turborepo and pnpm setup?
That changes everything—you’ve already done the hard part! Since your NAS is already a GitHub Runner, it is essentially a “listening” agent that can execute commands on your local file system as soon as GitHub signals a push.
To bridge the gap between “the Runner finished a build” and “my website is now updated,” you need to define the steps in your .github/workflows file.
- The “Workflow” Strategy for pnpm & Turborepo
Since you’re using pnpm and Turborepo for your “Standard” project, you want your runner to handle the heavy lifting (caching and building) locally.
Here is a template; place it in your repository at .github/workflows/deploy.yml.
```yaml
name: Deploy to NAS
on:
  push:
    branches:
      - main
jobs:
  build-and-deploy:
    runs-on: self-hosted # This targets your NAS runner
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Install pnpm
        uses: pnpm/action-setup@v2
        with:
          version: 9 # or your preferred version
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'pnpm'
      - name: Install dependencies
        run: pnpm install
      - name: Build with Turborepo
        run: pnpm build # Turborepo will handle the caching here
      - name: Deploy (Copy to Web Folder)
        run: |
          # Example: Copying the 'dist' from your apps/web folder
          # to your Synology's Web Station or Docker volume path
          cp -R ./apps/web/dist/* /volume1/web/standard-garden/
```
- Key Considerations for your NAS
When using a self-hosted runner on a Synology, there are two “gotchas” that usually trip people up:
Permission Mapping
Your runner usually runs as a specific user (often root or a dedicated github user). You need to ensure this user has Write permissions to the destination folder (e.g., /volume1/web/).
- Fix: Use chown or chmod on the target folder so the runner can overwrite the old build files.
The “Clean-up” Phase
Unlike GitHub’s cloud runners, self-hosted runners keep the files on your NAS after the build is done. This is great for speed (because node_modules stays there), but it can eat up storage over time.
- Tip: Turborepo’s local cache will live in the runner’s work directory. This makes subsequent builds incredibly fast, but keep an eye on your disk space if you have a lot of projects.
- Integrating with Docker (Optional)
If your “Standard” project runs inside a Docker container on your NAS rather than just serving static files:
- Your workflow should end with a docker build command.
- Then, use docker-compose up -d --build to restart the service with the new code.
Since you’re already familiar with the “arr” suite and Synology’s Docker environment, this is usually the cleanest way to keep things isolated.
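A sketch of that final step inside the workflow (the compose project path /volume1/docker/standard is an assumption about where your docker-compose.yml lives):

```yaml
      # Final workflow step: rebuild the image and restart the container
      - name: Rebuild and restart container
        run: |
          cd /volume1/docker/standard  # assumed location of docker-compose.yml
          docker-compose up -d --build
```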
Does your current “Standard” project output static HTML (like an Astro build), or is it a live Node.js app that needs to be restarted? I can help you refine that last “Deploy” step based on how you serve it.
This setup is where your pnpm and Turborepo structure really starts to shine. Since you’re handling both a static build (like your digital garden/notes) and a Cloudflare SSR component (the dynamic “Standard” app), your GitHub Runner on the NAS will act as the “control center.”
Here is how you would configure the workflow to handle both simultaneously.
- The Strategy: Hybrid Deployment
Because you have two different targets, your GitHub Action needs to fork its logic after the build step.
- The Static Part: Your NAS runner builds the files and copies them to your local /volume1/web/ folder (or a Docker volume).
- The SSR Part: Your NAS runner builds the Cloudflare Worker/Pages function and uses wrangler to deploy it to Cloudflare’s edge.
- Updated GitHub Workflow (.github/workflows/deploy.yml)
This configuration uses Turborepo to build both targets and then deploys them to their respective homes.
```yaml
name: Hybrid Deploy (NAS + Cloudflare)
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: self-hosted
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup pnpm
        uses: pnpm/action-setup@v3
        with:
          version: 9
      - name: Install dependencies
        run: pnpm install
      - name: Build All (Turborepo)
        run: pnpm build # This builds both your static site and SSR app
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}

      # --- PART 1: Local Static Deploy ---
      - name: Deploy Static to NAS
        run: |
          # Assuming your static project is in apps/garden
          rsync -av --delete ./apps/garden/dist/ /volume1/web/standard-garden/

      # --- PART 2: Cloudflare SSR Deploy ---
      - name: Deploy SSR to Cloudflare
        run: |
          # Use wrangler to push your SSR app (e.g., apps/standard)
          cd apps/standard
          pnpm exec wrangler deploy
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
```
- Why This “Local-First” Approach Wins
Since your NAS is the runner, you get a few unique “home lab” advantages:
- Secrets Security: Your CLOUDFLARE_API_TOKEN is handled by GitHub, but the actual deployment happens from your local network.
- Zero-Latency Internal Updates: For the static part, you aren’t “uploading” to the internet and then “downloading” back to your NAS. You are just moving files from one folder on your NAS to another. It’s nearly instantaneous.
- Pre-rendering Power: If your static site needs to pull heavy assets (like those Notion images you mentioned previously) or large datasets, your NAS can do that heavy lifting without hitting Cloudflare’s execution limits.
A Quick Tip for your “Standard” Monorepo:
Since you’re using @stnd as a scope, make sure your NAS runner has access to your private registry (if you use one) or that the pnpm-lock.yaml is always up to date. Turborepo will skip the build for whichever part (Static or SSR) hasn’t changed, saving you even more time.
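For reference, that selective skipping is driven by the task outputs declared in turbo.json. A minimal sketch, where the output globs are assumptions and the `tasks` key assumes Turborepo 2.x (older versions use `pipeline`):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    }
  }
}
```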
Would you like me to help you set up the wrangler.toml for the SSR part so it plays nicely with your custom stnd.gd domain?
Yes, it absolutely can! This is a brilliant way to leverage your NAS’s resources to create a “staging” area before you push live to Cloudflare’s global network.
Since your Cloudflare SSR app is built on the Workers runtime, you can run a local emulator inside a Docker container on your Synology.
The “Preview” Tech Stack
To do this, you’ll use Miniflare (the simulator) or workerd (the actual open-source runtime Cloudflare uses). Because you are already a pnpm/Turborepo expert, the easiest way is to wrap your preview in a Docker image.
- Create a “Preview” Dockerfile
In your SSR project folder (e.g., apps/standard/), add a Dockerfile.preview:
```dockerfile
FROM node:20-slim
WORKDIR /app

# Install pnpm
RUN npm install -g pnpm

# Copy only what we need for the build
COPY . .
RUN pnpm install
RUN pnpm build

# Use wrangler to start a local server.
# The --ip 0.0.0.0 is crucial so your NAS can see the traffic.
EXPOSE 8787
CMD ["pnpm", "exec", "wrangler", "dev", "--ip", "0.0.0.0", "--port", "8787", "--local"]
```
- Update your GitHub Runner Workflow
Now, when you push to a branch (not main), you can tell your runner to update the preview container:
Add this to your existing workflow for non-main branches (it goes under the top-level `jobs:` key):

```yaml
  deploy-preview:
    if: github.ref != 'refs/heads/main'
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Build and Restart Preview Container
        run: |
          docker build -t standard-preview -f apps/standard/Dockerfile.preview .
          docker rm -f standard-preview-container || true
          docker run -d --name standard-preview-container -p 8787:8787 standard-preview
```
- Accessing the Preview
Once the container is running on port 8787, you can use your Reverse Proxy skills to make it accessible:
- Subdomain: Create preview.stnd.gd in your Synology Reverse Proxy.
- Destination: Point it to http://localhost:8787.
Now, every time you push to a feature branch, your NAS will rebuild the app and you can see it live at your preview URL.
Why this is better than wrangler dev on your PC:
- Persistence: It stays up even if you close your laptop.
- Family Testing: Since you’re building that e-learning platform for your girlfriend, she can test new features on the preview URL from her own device while you’re still coding.
- NAS Utilization: You’re putting those DS1821+ CPU cores to work!
One Small Warning
The --local mode in Wrangler uses Miniflare. It’s highly accurate, but a few specific Cloudflare features (like advanced Analytics Engine or certain Cron Triggers) behave slightly differently than the “real” edge. For the vast majority of your SSR and Meta-Kernel logic, it will be perfect.
Would you like me to help you refine the Docker Compose file so your preview environment also includes a local database (like Redis or Postgres) to match your Cloudflare bindings?