- Docker with Compose v2
- Git
- bash/zsh
Everything else (Node.js, pnpm, build tools) runs inside containers.
```
git clone https://github.com/armbian/armbian-site.git
cd armbian-site
cp .env.example .env
```

Generate the required secrets:
```
# POSTGRES_PASSWORD
openssl rand -hex 32
# PAYLOAD_SECRET
openssl rand -hex 32
```

Paste them into `.env`, then start:
```
./manage.sh up
```

Once healthy:
- Website: http://localhost
- CMS Admin: http://localhost/admin
- API (external): http://localhost:8080/api/v1/
The default admin password is generated randomly on first boot. Find it with:
```
docker compose logs www | grep "Default admin user created"
```

```
apps/
  api/          Fastify 5 REST API (boards, images, vendors, search)
  www/          Next.js 16 + Payload CMS 3 (SSR, 17 locales)
packages/
  schemas/      Zod schemas — single source of truth for types
  config/       URLs, constants, support tiers, locales
  api-client/   Typed HTTP client wrapping fetch()
  theme/        CSS variables + Tailwind preset
```
Turborepo + pnpm workspaces. All code runs in Docker containers.
```
github.armbian.com (upstream JSON)
        ↓ syncs every 4h
API: SyncService → Normalizer (Zod) → DataStore (in-memory + MiniSearch)
        ↓ server-side fetch
www: getApiClient() → SSR pages

Payload CMS (PostgreSQL) → announcements, pages, flash guides, changelogs
```
The API normalizes upstream data and serves it over REST. The www app fetches server-side only — the API is not exposed to the browser (except image routes).
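To make the DataStore stage concrete, here is a simplified pure-TypeScript sketch of an in-memory store with search. The `Board` shape, field names, and naive substring matching are assumptions for illustration; the real implementation uses MiniSearch for tokenized full-text search and the schemas from `packages/schemas`.

```typescript
// Illustrative sketch only: normalized records held in memory with a
// search method. Field names are assumptions, not the real schema.
interface Board {
  slug: string;
  name: string;
  vendor: string;
}

class InMemoryStore {
  private boards = new Map<string, Board>();

  replaceAll(boards: Board[]): void {
    // A sync cycle swaps the whole dataset atomically.
    this.boards = new Map(boards.map((b) => [b.slug, b]));
  }

  get(slug: string): Board | undefined {
    return this.boards.get(slug);
  }

  search(query: string): Board[] {
    // Naive substring match; MiniSearch adds tokenization and fuzziness.
    const q = query.toLowerCase();
    return [...this.boards.values()].filter(
      (b) =>
        b.name.toLowerCase().includes(q) || b.vendor.toLowerCase().includes(q)
    );
  }
}
```

Because the store is rebuilt wholesale on each sync, reads never see a half-updated dataset.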
Payload CMS stores editorial content in PostgreSQL. Server components query it directly via `getPayload({ config })`.
CSS isolation between the website and Payload admin:
```
apps/www/src/app/
├── (frontend)/   Armbian website — imports globals.css (Tailwind)
│   └── [locale]/ i18n pages
└── (payload)/    Payload admin — uses its own CSS
    ├── admin/    Admin UI at /admin
    └── api/      Payload REST API
```
Tailwind's preflight would break Payload's form styles. Route groups keep them separate.
Pages are server components by default. Only interactive UI uses `'use client'`.
```tsx
// Server component — data fetching, translations
const t = await getTranslations();
const api = getApiClient();
const data = await api.boards.list();
```

```tsx
// Client component — interactivity only
'use client';
const t = useTranslations();
```

All images are served through the API's image cache:
```
Browser → /api/v1/images/boards/480/slug.png → Next.js rewrite → API → CDN (cached locally)
```
URL helpers in `@armbian/config`:
```tsx
boardImageUrl('nanopi-r6s');   // → /api/v1/images/boards/480/nanopi-r6s.png
vendorLogoUrl('radxa');        // → /api/v1/images/vendors/480/radxa.png
partnerLogoUrl('spacemit');    // → /api/v1/images/partners/spacemit.png
```

Images are fetched from the CDN on first request and cached on disk. Subsequent requests serve from cache.
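Judging from the examples, these helpers amount to template-string builders over a common base path. A hypothetical reimplementation (the real helpers live in `packages/config`; the `width` parameter and default are assumptions):

```typescript
// Hypothetical sketch of the URL helpers, not the actual implementation.
// The base path and the 480px width segment mirror the examples above.
const IMAGE_BASE = "/api/v1/images";

function boardImageUrl(slug: string, width = 480): string {
  return `${IMAGE_BASE}/boards/${width}/${slug}.png`;
}

function vendorLogoUrl(slug: string, width = 480): string {
  return `${IMAGE_BASE}/vendors/${width}/${slug}.png`;
}
```

Centralizing the path format here means a CDN or route change touches one package instead of every call site.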
```
./manage.sh rebuild www         # frontend only
./manage.sh rebuild api         # API only
./manage.sh rebuild             # everything
```

```
./manage.sh logs                # all services
./manage.sh logs www            # specific service
```

```
./manage.sh shell               # www container (default)
./manage.sh shell api           # API container
```

```
./manage.sh db                  # psql prompt
./manage.sh db:backup           # dump to backups/
./manage.sh db:restore FILE     # restore a backup
```

```
./manage.sh quality             # typecheck + test (inside Docker)
./manage.sh quality typecheck   # typecheck only
./manage.sh quality test        # test only
./manage.sh quality lint        # lint only
./manage.sh quality format      # format check only
```

Runs in a one-shot Node 22 container with the project mounted at `/app`. No local `node_modules` required.

```
./manage.sh deploy              # pull GHCR images, restart, wait for health
```

Unlike `./manage.sh up` (which builds from source), `deploy` pulls pre-built images from `ghcr.io/armbian/website/{api,www}`. Use this on production servers. If `GHCR_TOKEN` is set in `.env` or the environment, it authenticates to GHCR before pulling.
```
./manage.sh reset               # wipes volumes, rebuilds from scratch
./manage.sh cache:clean         # wipe pnpm store + named node_modules volumes
```

- Create `apps/www/src/app/(frontend)/[locale]/my-page/page.tsx`
- Add i18n keys to `apps/www/src/messages/en.json`
- Sync the keys to all 16 other locale files
- `./manage.sh rebuild www`
- Create the collection in `apps/www/src/payload/collections/`
- Register it in `apps/www/payload.config.ts`
- Generate a migration: `./manage.sh shell www`, then `pnpm payload migrate:create`
- Import the migration in `apps/www/src/migrations/index.ts`
- `./manage.sh rebuild www` — migrations apply automatically
Source of truth: `apps/www/src/messages/en.json`

- Add keys to `en.json`
- Copy them to all 16 other locale files with translated values
- Server components: `const t = await getTranslations()`
- Client components: `const t = useTranslations()`

All user-facing text must use translation keys — no hardcoded strings.
- Create the route in `apps/api/src/routes/`
- Register it in `apps/api/src/server.ts`
- Types go in `packages/schemas/`
- `./manage.sh rebuild api`
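One way to keep route logic testable without a running server is to write the handler as a plain function that the Fastify route wraps. A sketch under assumptions (the function name, `Board` shape, and response format are hypothetical, not the repo's actual code):

```typescript
// Hypothetical sketch: handler logic as a plain function. The real routes
// live in apps/api/src/routes/ and are registered in apps/api/src/server.ts.
interface Board {
  slug: string;
  name: string;
}

// Stand-in for the DataStore, which the SyncService would populate.
const boards: Board[] = [{ slug: "nanopi-r6s", name: "NanoPi R6S" }];

// GET /api/v1/boards/:slug — return the board, or a 404-shaped result.
function getBoardHandler(slug: string): { status: number; body: unknown } {
  const board = boards.find((b) => b.slug === slug);
  return board
    ? { status: 200, body: board }
    : { status: 404, body: { error: "board not found" } };
}
```

The Fastify route then only translates between HTTP and this function, which keeps unit tests free of server setup.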
- All text uses i18n keys — no hardcoded strings in the UI
- All data comes from the API — never hardcode board info, counts, or URLs
- URLs come from `@armbian/config` — use `ARMBIAN_URLS`, `boardImageUrl()`, etc.
- Sanitize CMS HTML — use `sanitizeCmsHtml()` before `dangerouslySetInnerHTML`
- Images go through the API — use the URL helpers; never link to the CDN directly
Tailwind 4 with `@tailwindcss/typography`. Class-based dark mode.
```css
:root {
  --brand: #ff7d3d;
  --bg: #ffffff;
  --fg: #000000;
  --border: #e0e0e0;
}

.dark {
  --bg: #1a1a1a;
  --fg: #ffffff;
}
```

`text-fluid-hero` through `text-fluid-xs` — scales with viewport via `clamp()`.
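CSS `clamp(min, preferred, max)` resolves to the viewport-relative preferred value, bounded below and above. Modeling it in TypeScript shows the behavior; the pixel and `vw` values here are made up for illustration and are not the actual token values:

```typescript
// Model of CSS clamp(): preferred value capped between a floor and a ceiling.
function cssClamp(min: number, preferred: number, max: number): number {
  return Math.min(Math.max(preferred, min), max);
}

// e.g. a hero size of 4vw bounded to [32px, 64px] — illustrative values only.
function fluidHeroPx(viewportWidth: number): number {
  return cssClamp(32, viewportWidth * 0.04, 64);
}
```

Narrow viewports sit at the floor, wide ones at the ceiling, and everything between scales linearly.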
| Class | Purpose |
|---|---|
| `hw-card` | Board cards with hover transform + glow |
| `hw-img` | Image zoom on card hover |
| `bento-card` | Glassmorphism panels |
| `terminal-glass` | Code block styling |
| `badge-platinum` | Shiny tier badge |
| `divider-glow` | Glowing horizontal divider |
Defined in `apps/www/src/app/(frontend)/globals.css`.

Copy `.env.example` to `.env`. All variables:
| Variable | Required | Default | Description |
|---|---|---|---|
| `POSTGRES_PASSWORD` | Yes | — | Database password |
| `PAYLOAD_SECRET` | Yes | — | 64-char hex for Payload auth |
| `DATA_SYNC_INTERVAL_MS` | No | `14400000` | Sync interval (4h) |
| `CORS_ORIGINS` | No | `http://localhost:3000` | Extra CORS origins |
| `LOG_LEVEL` | No | `info` | API log level |
| `NEXT_PUBLIC_SITE_URL` | No | — | Base URL for Open Graph absolute URLs |
| `NEXT_PUBLIC_DOMAIN_LOCALE_ROUTING` | No | `false` | Enable cross-domain locale switching |
| `WP_CONTENT_DIR` | No | `./legacy/wp-content` | Host path for legacy /wp-content files |
| `OIDC_CLIENT_ID` | No | — | Authentik OAuth2 client ID |
| `OIDC_CLIENT_SECRET` | No | — | Authentik OAuth2 secret |
| `OIDC_ISSUER_URL` | No | — | Authentik issuer URL |
| `OIDC_ALLOWED_DOMAINS` | No | — | Restrict OIDC to email domains |
Without `POSTGRES_PASSWORD` and `PAYLOAD_SECRET`, the stack won't start.

`NEXT_PUBLIC_*` variables are baked into the Next.js client bundle at build time. They are passed as Docker build args in `docker-compose.yml` and in the release workflow. Changing them requires a rebuild (`./manage.sh rebuild www`) or a new release tag.
Leave the `OIDC_*` variables empty for local email/password login. When set, Authentik groups map to Payload roles:
| Authentik Group | Payload Role |
|---|---|
| `armbian-admin` | `admin` |
| `armbian-maintainer` | `maintainer` |
| `armbian-editor` | `editor` |
| (no match) | `editor` |
Roles sync on every login. First login auto-creates the user.
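The mapping in the table above can be sketched as a pure function. This is an illustration, not the repo's OIDC code; in particular, resolving a user with multiple groups to the most privileged role is an assumption:

```typescript
// Hypothetical sketch of the Authentik-group → Payload-role mapping.
// Unmatched groups fall back to "editor", per the table above.
type PayloadRole = "admin" | "maintainer" | "editor";

const GROUP_ROLE_MAP: Record<string, PayloadRole> = {
  "armbian-admin": "admin",
  "armbian-maintainer": "maintainer",
  "armbian-editor": "editor",
};

function roleForGroups(groups: string[]): PayloadRole {
  // Assumption: with multiple matching groups, the highest privilege wins.
  for (const role of ["admin", "maintainer", "editor"] as const) {
    if (groups.some((g) => GROUP_ROLE_MAP[g] === role)) return role;
  }
  return "editor"; // no match → editor
}
```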
Three GitHub Actions workflows:
- CI (`.github/workflows/ci.yml`) -- runs on push/PR to `main`. Installs deps, runs `pnpm typecheck` and `pnpm test`.
- Release (`.github/workflows/release.yml`) -- triggered by version tags (`v*.*.*`). Runs CI first, then builds multi-arch Docker images (`linux/amd64`, `linux/arm64`) and pushes them to GHCR (`ghcr.io/armbian/website/api`, `ghcr.io/armbian/website/www`). Creates a GitHub Release with auto-generated notes.
- Deploy (`.github/workflows/deploy.yml`) -- triggered automatically when the Release workflow completes, or manually via `workflow_dispatch`. SSHs into the production server, checks out the tag, and runs `./manage.sh deploy`.
```
git tag v0.5.0 && git push --tags
  → CI (typecheck + test)
  → Build (Docker images → GHCR)
  → Release (GitHub Release notes)
  → Deploy (SSH → ./manage.sh deploy)
```
| Secret | Purpose |
|---|---|
| `DEPLOY_HOST` | Production server hostname/IP |
| `DEPLOY_USER` | SSH username |
| `DEPLOY_KEY` | SSH private key |
| `GHCR_TOKEN` | GitHub token for pulling images on the server |
`GITHUB_TOKEN` is used automatically for the GHCR push during the build job.
The official Armbian deployment serves three domains:
| Domain | Locale | Behavior |
|---|---|---|
| `armbian.com` | all 17 | Default English, other locales via `/<locale>` prefix |
| `armbian.cn` | `zh` only | Forces Chinese on every page |
| `armbian.de` | `de` only | Forces German on every page |
This is configured in `packages/config/src/locales.ts` (`DOMAIN_LOCALE_MAP`) and `apps/www/src/i18n/routing.ts`.
The language switcher (`language-switcher.tsx`) cross-redirects between domains only when both conditions are met:

- `NEXT_PUBLIC_DOMAIN_LOCALE_ROUTING=true` (build-time env var)
- The current browser hostname matches a known Armbian domain
Self-hosted instances and local development always use in-place locale switching via `/<locale>` prefixes, regardless of this setting.
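The domain table boils down to a lookup with a fallback. An illustrative sketch (the real map is `DOMAIN_LOCALE_MAP` in `packages/config/src/locales.ts`; the shape below is an assumption):

```typescript
// Illustrative sketch of the domain → forced-locale lookup. Hostnames not
// in the map (self-hosted instances, localhost) get no forced locale and
// fall back to in-place /<locale> routing.
const DOMAIN_LOCALE_MAP: Record<string, string> = {
  "armbian.cn": "zh",
  "armbian.de": "de",
};

function forcedLocale(hostname: string): string | null {
  return DOMAIN_LOCALE_MAP[hostname] ?? null;
}
```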
The contact page submits to Zoho Bigin via a hidden iframe (`biginHiddenFrame`). A Google reCAPTCHA v2 widget gates submission.
- Site key: `RECAPTCHA_SITE_KEY` in `packages/config/src/urls.ts` (public, baked into the bundle)
- Secret key: configured on the Zoho Bigin side (not in this repo)
- Bigin form tokens: `BIGIN_FORM_TOKENS` in `packages/config/src/urls.ts`
The reCAPTCHA integration requires these CSP entries in `apps/www/next.config.ts`:

- `script-src`: `https://www.google.com/recaptcha/` and `https://www.gstatic.com/recaptcha/`
- `frame-src`: `https://www.google.com/recaptcha/`
- `connect-src`: `https://www.google.com`
- `img-src`: `https://www.gstatic.com/recaptcha/`
If the reCAPTCHA widget fails to render, check browser console for CSP violations.
Caddy serves files at `/wp-content/*` for legacy WordPress URLs that are still linked from external sites. This only applies to requests with `Host: armbian.com` or `Host: www.armbian.com`.
- Set `WP_CONTENT_DIR` in `.env` to the host path containing the legacy files (e.g., `/srv/wp-content`)
- The directory is bind-mounted read-only into the Caddy container at `/srv/wp-content`
- If unset, it defaults to `./legacy/wp-content` (empty placeholder, gitignored)
The Caddy matcher is in `docker/caddy/Caddyfile` -- it only triggers for the two production hostnames.
- Docker Engine with Compose v2
- Git
- The repository cloned at `/home/website` (or adjust `deploy.yml`)
- A `.env` file with production values
```
git clone https://github.com/armbian/armbian-site.git /home/website
cd /home/website
cp .env.example .env
# Fill in POSTGRES_PASSWORD, PAYLOAD_SECRET, WWW_HOSTNAME, etc.
./manage.sh up   # first time: build from source
```

Handled automatically by the Deploy workflow, or manually:
```
cd /home/website
git fetch --all --tags
git checkout v0.5.0
./manage.sh deploy
```

Rebuild www and hard-refresh the browser:
```
./manage.sh rebuild www
```

Build-phase detection issue. Check that `NEXT_PHASE` is set in the Dockerfile.
```
./manage.sh shell www
cd /app/apps/www
pnpm payload migrate:up
```

Or restore from a backup: `./manage.sh db:restore FILE`
```
./manage.sh logs api
```

Common causes: upstream JSON changed format, Zod schema mismatch, rate limiting from GitHub.
The site is served on host port 80 (the external API on 8080). The www service itself listens on port 3000 inside the Docker network; it, the API, and PostgreSQL expose no host ports directly. If port 80 is already taken on the host:

```
lsof -i :80
kill -9 <PID>
```