App submissions use a two-phase review via app-review-submission.yml: AI review + human approval.
Source of truth: apps/APPS.md - A single markdown table with all apps.
How it works:
- User opens issue with `TIER-APP` label
- Workflow validates (Enter registration, duplicates), AI generates emoji + description
- Bot posts preview comment with `APP_REVIEW_DATA` JSON block, labels `TIER-APP-REVIEW`
- Maintainer reviews preview, adds `TIER-APP-APPROVED` label
- Workflow creates branch, prepends row to `apps/APPS.md`, creates PR with auto-merge
- After checks pass, PR merges automatically, issue closed with `TIER-APP-COMPLETE`
Label state machine:
TIER-APP (new issue)
→ TIER-APP-REJECTED (validation failed: duplicate/spore)
→ TIER-APP-INCOMPLETE (not registered, user needs to fix)
→ TIER-APP-REVIEW (AI review passed, preview posted, awaiting human)
→ TIER-APP-APPROVED (maintainer approves → PR created + auto-merged)
→ TIER-APP-COMPLETE (PR merged, tier upgraded, issue closed)
→ TIER-APP-REJECTED (maintainer closes issue)
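The state machine above can be sketched as a transition map. This is an illustrative sketch only - the real transition logic lives in `app-review-submission.yml`, and `canTransition` is a hypothetical helper, not existing code:

```javascript
// Valid next labels for each state, per the state machine above.
const TRANSITIONS = {
    "TIER-APP": ["TIER-APP-REJECTED", "TIER-APP-INCOMPLETE", "TIER-APP-REVIEW"],
    "TIER-APP-REVIEW": ["TIER-APP-APPROVED", "TIER-APP-REJECTED"],
    "TIER-APP-APPROVED": ["TIER-APP-COMPLETE"],
};

// Returns true when moving from one label to another is a valid step.
function canTransition(from, to) {
    return (TRANSITIONS[from] ?? []).includes(to);
}
```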
Manual edits (if needed):
- Edit `apps/APPS.md` directly
- Run `node .github/scripts/app-update-readme.js` to refresh README
Table format in APPS.md:
| Emoji | Name | Web_URL | Description | Language | Category | Platform | GitHub | GitHub_ID | Repo | Stars | Discord | Other | Submitted_Date | Issue_URL | Approved_Date |
| ----- | -------- | ------- | ----------------------------- | -------- | -------- | -------- | ------- | --------- | ---------------------- | ----- | ------- | ----- | -------------- | --------- | ------------- |
| 🎨 | App Name | url | Brief description (~80 chars) | | creative | web | @github | 12345678 | https://github.com/... | ⭐123 | | | 2025-01-01 | #1234 | 2025-01-02 |

- Submitted_Date: Issue creation date (when user submitted)
- Issue_URL: Link to original GitHub issue
- Approved_Date: PR merge date (when app was approved)
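A row following the format above could be rendered like this. This is a hedged sketch - `buildAppRow` is a hypothetical helper for illustration, not part of the workflow scripts:

```javascript
// Column order mirrors the APPS.md table header above.
const COLUMNS = [
    "Emoji", "Name", "Web_URL", "Description", "Language", "Category",
    "Platform", "GitHub", "GitHub_ID", "Repo", "Stars", "Discord",
    "Other", "Submitted_Date", "Issue_URL", "Approved_Date",
];

// Renders one markdown table row; missing fields become blank cells.
function buildAppRow(app) {
    return `| ${COLUMNS.map((c) => app[c] ?? "").join(" | ")} |`;
}
```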
Platform values (auto-detected from URL + description):
| Value | When to use |
|---|---|
| web | Browser-based app (default when URL exists) |
| android | Google Play Store app |
| ios | App Store or Apple Shortcuts (routinehub.co) |
| windows | Windows desktop / .exe |
| macos | macOS native app |
| desktop | Cross-platform desktop (Python/Qt, Electron, etc.) |
| cli | Command-line tool |
| discord | Discord bot or app |
| telegram | Telegram bot |
| whatsapp | WhatsApp bot |
| library | npm/PyPI/Go package, SDK, API wrapper |
| browser-ext | Browser extension (Firefox, Chrome) |
| roblox | Roblox game |
| wordpress | WordPress plugin |
| api | Backend/server with no public UI (default when no URL) |
Multiple platforms: comma-separated, e.g. telegram,whatsapp
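Platform validation could look like the sketch below, assuming the table above is the complete set of values. `splitPlatforms` and `isValidPlatform` are illustrative names, not existing workflow code:

```javascript
// Allowed platform values, per the table above.
const PLATFORMS = new Set([
    "web", "android", "ios", "windows", "macos", "desktop", "cli",
    "discord", "telegram", "whatsapp", "library", "browser-ext",
    "roblox", "wordpress", "api",
]);

// Splits a comma-separated platform field into trimmed parts.
function splitPlatforms(value) {
    return value.split(",").map((p) => p.trim()).filter(Boolean);
}

// True when every part of the field is a known platform.
function isValidPlatform(value) {
    const parts = splitPlatforms(value);
    return parts.length > 0 && parts.every((p) => PLATFORMS.has(p));
}
```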
Categories:
- 🖼️ Image (`image`): Image gen, editing, design, avatars, stickers
- 🎬 Video & Audio (`video_audio`): Video gen, animation, music, TTS
- ✍️ Write (`writing`): Content creation, storytelling, copy, slides
- 💬 Chat (`chat`): Assistants, companions, AI studio, multi-modal chat
- 🎮 Play (`games`): AI games, interactive fiction, Roblox worlds
- 📚 Learn (`learn`): Education, tutoring, language learning
- 🤖 Bots (`bots`): Discord, Telegram, WhatsApp bots
- 🛠️ Build (`build`): Dev tools, SDKs, integrations, vibe coding
- 💼 Business (`business`): Productivity, finance, marketing, health, food
- Use ISO language code in the `Language` column (e.g., `zh-CN`, `es`, `pt-BR`, `ja`)
- No flags in the table - use language codes only
pollinations.ai Discord Server:
- Guild ID: `885844321461485618`
- Server: https://discord.gg/pollinations-ai-885844321461485618
Use this guild ID when interacting with Discord MCP tools for announcements, community management, etc.
Key directories and their purposes:
pollinations/
├── enter.pollinations.ai/ # Auth gateway + billing (Cloudflare Worker)
├── gen.pollinations.ai/ # Edge router → enter gateway
├── image.pollinations.ai/ # Image generation backend (EC2 + Vast.ai)
├── text.pollinations.ai/ # Text generation backend (EC2)
├── pollinations.ai/ # Main React frontend
├── packages/
│ ├── sdk/ # @pollinations_ai/sdk - Client library with React hooks
│ └── mcp/ # @pollinations_ai/model-context-protocol - MCP server
├── shared/ # Shared utilities (auth, registry, IP queue)
│ └── registry/ # Model registries (image.ts, text.ts, audio.ts, video.ts)
├── apps/ # Community apps + APPS.md showcase
└── social/ # Social media automation (Discord, Reddit, GitHub)
Primary endpoint: https://gen.pollinations.ai
All API requests go through gen.pollinations.ai, which routes to the enter.pollinations.ai gateway for authentication and billing.
- Authentication: Publishable keys (`pk_`) for frontend, Secret keys (`sk_`) for backend
- Billing: Pollen credits ($1 ≈ 1 Pollen)
- Get API keys: enter.pollinations.ai
- Full API docs: APIDOCS.md
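A minimal sketch of attaching a key to a request, assuming the Bearer convention shown in the curl examples elsewhere in this document. `authHeaders` is an illustrative helper, not part of the SDK:

```javascript
// Builds the Authorization header for a pollinations.ai request.
// pk_ keys are safe to ship to browsers; sk_ keys must stay server-side.
function authHeaders(key) {
    if (!key.startsWith("pk_") && !key.startsWith("sk_")) {
        throw new Error("Expected a pk_ or sk_ key");
    }
    return { Authorization: `Bearer ${key}` };
}
```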
Services behind enter gateway:
- Text: OpenAI-compatible API via Portkey (multi-provider: OpenAI, Google, Anthropic, DeepSeek, etc.)
- Image: Flux, Turbo, and other models on EC2/Vast.ai/io.net GPU instances
- Video: Wan (via Airforce/Alibaba), Veo, LTX on GPU instances
- Audio: ElevenLabs TTS/STT, text-to-music
- Tier system: microbe → spore → seed → flower → nectar → router (see `enter.pollinations.ai/src/tier-config.ts`)
Service Ports:
- enter.pollinations.ai: `http://localhost:3000` (API under `/api/*`)
- text.pollinations.ai: `http://localhost:16385`
- image.pollinations.ai: `http://localhost:16384`
Local API Testing:
```shell
# Enter gateway (local)
curl "http://localhost:3000/api/generate/image/test?model=flux" -H "Authorization: Bearer $TOKEN"
curl "http://localhost:3000/api/generate/v1/chat/completions" -H "Authorization: Bearer $TOKEN" ...
```

Testing Enter with Local Services:
To test enter.pollinations.ai with local text/image services, edit enter.pollinations.ai/wrangler.toml:
```toml
# Default (remote EC2):
IMAGE_SERVICE_URL = "http://ec2-3-80-56-235.compute-1.amazonaws.com:16384"
TEXT_SERVICE_URL = "http://ec2-3-80-56-235.compute-1.amazonaws.com:16385"

# For local testing (env.local):
IMAGE_SERVICE_URL = "http://localhost:16384"
TEXT_SERVICE_URL = "http://localhost:16385"
```

Use `npm run dev` in each service directory to start them.
Note: EC2 hostnames in wrangler.toml may change. Check the actual values in enter.pollinations.ai/wrangler.toml.
The packages/mcp/ directory contains a Model Context Protocol server that allows AI agents to directly generate images, text, and audio using the pollinations.ai API.
For detailed implementation notes, design principles, and troubleshooting, see:
- `packages/mcp/README.md` - Installation and usage
- `packages/mcp/AGENTS.md` - Implementation guidelines and debugging
```shell
curl 'https://gen.pollinations.ai/image/{prompt}' -H 'Authorization: Bearer YOUR_API_KEY'

curl 'https://gen.pollinations.ai/v1/chat/completions' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"model": "openai", "messages": [{"role": "user", "content": "Hello"}]}'

curl 'https://gen.pollinations.ai/text/{prompt}?key=YOUR_API_KEY'

curl 'https://gen.pollinations.ai/audio/{text}?voice=nova&key=YOUR_API_KEY' -o speech.mp3
```

- Image models: https://gen.pollinations.ai/image/models
- Text models: https://gen.pollinations.ai/v1/models
- Full API Documentation
- Enter Services Deployment - Deploy and manage services on AWS EC2
THIS IS CRITICAL. Follow YAGNI religiously:
- Don't keep code for "potential futures" - Only implement what's needed NOW
- Remove unused functions - Even if they "might be useful someday"
- No speculative abstractions - If we need it later, we'll add it then
- No "just in case" helpers - Don't create test utilities or wrappers preemptively
- Keep the codebase minimal - Less code = fewer bugs = easier maintenance
- No fallbacks for backward compatibility - Clean breaks are better than complexity bloat. When changing tokens, headers, or APIs, update all consumers at once rather than supporting both old and new patterns
- When user says "keep it simple" — they mean it - Don't add layers, wrappers, or abstractions. One function, one price, one config. The simplest thing that works.
Prefer functional, elegant, and minimal solutions:
- Don't implement things we're not using anymore
- Check assumptions on the web and codebase regularly
- When continuing work from a previous session, read all relevant code first
- Check related PRs including comments, description, and history
- If in the middle of a feature/fix, identify clear next steps before proceeding
Before implementing:
- Verify assumptions on the web - APIs, libraries, and patterns change frequently
- Read related files into context - Get the full picture before making changes
- Check existing implementations - Don't reinvent what already exists in the codebase
- Check which branch you're on - Run `git branch --show-current` before starting work
- Check related PRs and issues - Use GitHub MCP tools to find context before implementing
- Look for existing utility functions in `shared/` before writing new ones (auth, queue, registry)
CRITICAL — These rules apply whenever deploying to Tinybird:
- Always validate first: `tb --cloud deploy --check --wait` before any deploy
- Never use `--allow-destructive-operations` without explicit user permission
- Never use `tb push` - it's deprecated; use `tb --cloud deploy --wait`
- Always use `--cloud` - without it, CLI tries Tinybird Local (Docker)
- Run from `enter.pollinations.ai/observability` - not from repo root
- Pipes are shared - multiple apps/dashboards may consume the same pipe. Verify all consumers before modifying any pipe
- Timeout mitigation: Use `uniq()` over `uniqExact()`, avoid CTE+JOIN, prefer single-pass queries. For large time ranges, use the `start_date` parameter pattern for week-by-week querying
- Full deploy procedure: see `.claude/skills/tinybird-deploy/SKILL.md`
IMPORTANT - Agents often make these mistakes (learned from session history):
- Don't use `cd` in bash commands - Use the `cwd` parameter instead
- Don't run `pytest` - Use `npm run test` or `npx vitest run`
- Don't create .md documentation files unless explicitly asked
- Always use absolute paths for file operations
- Don't edit files manually during a Claude Code session - this busts the cache
- Don't run `/compact` unless absolutely necessary - it busts cache
- Don't let searches run wild - Use targeted file paths, not broad searches
- Don't modify test files to make tests pass - Fix the actual code instead
- Run `npm run decrypt-vars` before running tests in enter.pollinations.ai
- Check the `.testingtokens` file for test API keys: `enter.pollinations.ai/.testingtokens`
- Confirm which branch you're on before making changes - branch mix-ups are a recurring problem
- Don't reimplement existing logic - search for existing functions before writing new ones (e.g. SSE parsing, retry wrappers, auth extraction)
- Request PR reviews by mentioning `polly` - include lowercase `polly` in a PR comment (e.g. "polly please review") to trigger the Polly bot reviewer
Code Style:
- Use modern JavaScript/TypeScript features
- Use ES modules (import/export) - all .js files are treated as ES modules
- Follow existing code formatting patterns
- Add descriptive comments for complex logic
- Run biome check after making changes: `npx biome check --write <file>`
Testing:
- Add tests for new features in appropriate test directories
- Follow existing test patterns in /test directories
- Test with real production code, not mocks - Tests should validate actual behavior
- Avoid creating mock infrastructure - use direct function imports instead
Test Commands by Service:
- enter.pollinations.ai: `cd enter.pollinations.ai && npm run test` (vitest + Cloudflare Workers pool)
- image.pollinations.ai: `cd image.pollinations.ai && npm run test` (vitest)
- text.pollinations.ai: No test runner configured yet
⚡ Run tests individually - Full suite takes time. Use:

```shell
npx vitest run --testNamePattern="specific test name"
npx vitest run test/specific-file.test.ts
```

Snapshot System: enter.pollinations.ai uses VCR-style snapshots for API responses:
- Snapshots stored in test fixtures, replayed during tests
- Set `TEST_VCR_MODE=record` to record new snapshots
- Default mode is `replay-or-record`
Testing Tokens:
`enter.pollinations.ai/.testingtokens` contains:
- `ENTER_API_TOKEN_LOCAL` / `ENTER_API_TOKEN_REMOTE` - API keys
- `ENTER_TOKEN`, `GITHUB_TOKEN`, `POLAR_ACCESS_TOKEN`
Testing Best Practices:
- Read existing tests entirely to understand patterns before adding new ones
- Prefer adding to existing test files over creating new ones
- Test core functionality - minimal, short, and sweet
- Don't create new testing patterns - follow existing conventions
- Make requests to `gen.pollinations.ai` for production API testing
Documentation:
- Update API docs for new endpoints
- Add JSDoc comments for new functions
- Update README.md for user-facing changes
- Avoid creating markdown documentation files while working unless explicitly requested
- If temporary files are needed for testing/debugging, create them in a `temp/` folder clearly labeled as temporary
Architecture Considerations:
- Frontend changes should be in pollinations.ai/
- Image generation in image.pollinations.ai/
- Text generation in text.pollinations.ai/
- SDK and React components in packages/sdk/
- AI assistant integrations in packages/mcp/
Security:
- Never expose API keys or secrets
- Use environment variables for sensitive data
- Implement proper input validation
Adding New Models:
- Text models: Add config in `text.pollinations.ai/configs/modelConfigs.ts`, add entry in `availableModels.ts`
- Image models: Add handler in `image.pollinations.ai/src/`, register in `shared/registry/image.ts`
- Provider configs (Portkey, Bedrock, OpenAI-compatible) go in `text.pollinations.ai/configs/providerConfigs.js`
- Update API documentation and model registry
Frontend Updates:
- Follow React best practices
- Use existing UI components
- Maintain responsive design
API Changes:
- Maintain backward compatibility
- Update documentation
- Add appropriate error handling
API Documentation Guidelines:
- Keep documentation strictly technical and user-focused
- Avoid marketing language or promotional content
- Link to dynamic endpoints (like /models) rather than hardcoding lists that may change
- Don't include internal implementation details or environment variables
- Focus on endpoints, parameters, and response formats
- For new features, document both simplified endpoints and OpenAI-compatible endpoints
- Include minimal, clear code examples that demonstrate basic usage
- Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions)
- If something goes sideways, STOP and re-plan immediately – don't keep pushing
- Use plan mode for verification steps, not just building
- Write detailed specs upfront to reduce ambiguity
- Use subagents liberally to keep main context window clean
- Offload research, exploration, and parallel analysis to subagents
- For complex problems, throw more compute at it via subagents
- One task per subagent for focused execution
- After ANY correction from the user: propose an update to this AGENTS.md with the learned pattern
- Write rules for yourself that prevent the same mistake
- Ruthlessly iterate on these lessons until the mistake rate drops
- Review AGENTS.md at session start for relevant project lessons
- Never mark a task complete without proving it works
- Diff behavior between main and your changes when relevant
- Ask yourself: "Would a staff engineer approve this?"
- Run tests, check logs, demonstrate correctness
- For non-trivial changes: pause and ask "is there a more elegant way?"
- If a fix feels hacky: "Knowing everything I know now, implement the elegant solution"
- Skip this for simple, obvious fixes – don't over-engineer
- Challenge your own work before presenting it
- When given a bug report: just fix it. Don't ask for hand-holding
- Point at logs, errors, failing tests – then resolve them
- Zero context switching required from the user
- Go fix failing CI tests without being told how
- Plan First: Outline steps before implementing (use todo tools or plan mode)
- Verify Plan: Check in before starting implementation
- Track Progress: Mark items complete as you go
- Explain Changes: High-level summary at each step
- Capture Lessons: When corrected, update this AGENTS.md with the pattern to prevent recurrence
When compacting conversation context, preserve:
- Full list of modified files with paths and line numbers
- All code snippets, diffs, and implementation details
- Test output, error messages, and command results
- Complete task plan, progress, and pending items
- User preferences and corrections from this session
- Key architectural decisions and their rationale
- Simplicity First: Make every change as simple as possible. Impact minimal code.
- No Laziness: Find root causes. No temporary fixes. Senior developer standards.
- Minimal Impact: Changes should only touch what's necessary. Avoid introducing bugs.
- If the user asks to send to git or something similar, do all these steps:
- Git status, diff, create branch, commit all, push and write a PR description
- Verify branch before committing: Run `git branch --show-current` and confirm with user if unsure - branch mix-ups have caused wasted work multiple times
- Avoid force pushes: Prefer follow-up commits over `git push --force` or `--force-with-lease`. Force pushes rewrite history and can cause issues for others working on the same branch.
- Run biome check before committing: `npx biome check --write <file>` to fix formatting/linting issues
- If PR was already merged: Open a new branch/PR for follow-up changes, don't try to push to merged branches
BE CONCISE. All PRs, comments, issues: bullet points, <200 words, NO FLUFF.
PR Format:
- Use "- Adds X", "- Fix Y" format
- 3-5 bullets max
- Simple titles: "fix:", "feat:", "Add"
- No long paragraphs, no marketing language
Issue Comments:
- Bullet points only
- State facts, not opinions
- Link to relevant code/files
- No "I think" or "maybe" - be direct
Code Reviews:
- Focus on parts that need improving, not what's already good
- Be concise and information-dense
- Link to specific lines/files
- Don't praise code that's fine
- Don't repeat obvious things
- Only use established labels (check with `mcp1_list_issues`)
- Avoid creating new labels unless part of broader strategy
- Keep names consistent with existing patterns
Commit format:
```
feat: add feature

Co-authored-by: username <user_id+username@users.noreply.github.com>

Fixes #issue
```
- Use "Fixes #issue" or "Addresses #issue" in PR descriptions
- Email format: `{username} <{user_id}+{username}@users.noreply.github.com>`
- Find user_id in the issue API response
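The email format above can be sketched as a small helper. `coAuthorLine` is an illustrative name, not an existing script:

```javascript
// Builds a Co-authored-by trailer from a GitHub username and numeric user ID,
// following the noreply email format above.
function coAuthorLine(username, userId) {
    return `Co-authored-by: ${username} <${userId}+${username}@users.noreply.github.com>`;
}
```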