SmartCache AI is a backend-focused, high-visibility system that combines:
- ⚡ Redis/Valkey caching
- 🤖 AI-powered summarization
- ⚙️ Asynchronous job processing (worker pool)
- 📊 Analytics & observability
This is not a simple AI wrapper — it is a scalable AI processing pipeline similar to real production systems.
Goals:
- Showcase Go concurrency (goroutines, channels)
- Implement an async job-queue system
- Use Redis/Valkey for caching + state management
- Integrate AI meaningfully (not per request)
- Demonstrate system design + backend engineering

Non-goals (v1):
- No authentication
- No complex UI
- No microservices (single service)
Client → Go API → Redis (Cache + Queue) → Worker Pool → AI → Redis → Client
1. User sends input (text / URL)
2. Backend generates a hash key
3. Redis check:
   - ✅ HIT → return cached result
   - ❌ MISS: create a job, push it to the queue, return `processing` status
4. Worker picks the job from the queue
5. Worker fetches content (if URL)
6. Worker calls the AI
7. Result is stored in Redis
8. Job status is updated → `completed`
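The HIT/MISS branch above can be sketched as follows. This is a minimal, illustrative version: the map and channel stand in for Redis (`GET`/`SET` and `LPUSH`), and the names `hashKey` and `submit` are hypothetical, not the project's actual identifiers.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// In-memory stand-ins for Redis; the real service would use GET/SETEX
// for the cache and LPUSH on the job_queue list.
var (
	cache    = map[string]string{}    // summary:{hash} -> result
	jobQueue = make(chan string, 100) // job queue
)

// hashKey derives a deterministic cache key from the raw input,
// so the same input always maps to the same key.
func hashKey(input string) string {
	sum := sha256.Sum256([]byte(input))
	return "summary:" + hex.EncodeToString(sum[:8])
}

// submit implements the HIT/MISS branch: return the cached result,
// or enqueue a job and respond immediately with "processing".
func submit(input string) (status, result string) {
	key := hashKey(input)
	if cached, ok := cache[key]; ok {
		return "completed", cached // cache HIT
	}
	jobQueue <- key // cache MISS: push job, don't block the request
	return "processing", ""
}

func main() {
	status, _ := submit("https://example.com/article")
	fmt.Println(status) // first submission is a MISS
}
```

The key point is that the request path never waits on the AI call; it only touches Redis and returns.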
- Redis-based queue
- Background workers using goroutines
- Non-blocking API
Job states: `pending` → `processing` → `completed` → `failed`
- Generates:
  - summary (2–3 lines)
  - tags (2–4)
- Runs only in the worker (not the request path)
- Cache key: `summary:{hash}`
- TTL-based expiry
- Avoid duplicate AI calls
Track:
- total requests
- cache hits / misses
- queue size
- processing time
- failure rate
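A minimal counter for the request/hit/miss metrics above might look like this. In production each field would map to a Redis `INCR` on a `metrics:*` key so counts survive restarts; the local atomics here are just an illustration:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Metrics mirrors a subset of the counters tracked above.
type Metrics struct {
	TotalRequests int64
	CacheHits     int64
	CacheMisses   int64
}

// RecordRequest bumps the totals; atomics keep it safe across
// concurrent handler goroutines.
func (m *Metrics) RecordRequest(hit bool) {
	atomic.AddInt64(&m.TotalRequests, 1)
	if hit {
		atomic.AddInt64(&m.CacheHits, 1)
	} else {
		atomic.AddInt64(&m.CacheMisses, 1)
	}
}

// HitRate returns cache hits as a fraction of all requests.
func (m *Metrics) HitRate() float64 {
	total := atomic.LoadInt64(&m.TotalRequests)
	if total == 0 {
		return 0
	}
	return float64(atomic.LoadInt64(&m.CacheHits)) / float64(total)
}

func main() {
	var m Metrics
	m.RecordRequest(true)
	m.RecordRequest(true)
	m.RecordRequest(false)
	fmt.Printf("%.2f\n", m.HitRate()) // 0.67
}
```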
- Input → hashed
- Same input → same key
- Prevents duplicate processing
- Go (Gin)
- Redis / Valkey
- Gemini API
```
smartcache-ai/
└── backend/
    ├── cmd/
    │   └── server/
    │       └── main.go
    ├── internal/
    │   ├── api/
    │   │   └── handlers/
    │   │       ├── submit.go
    │   │       └── status.go
    │   ├── worker/
    │   │   ├── pool.go
    │   │   └── job.go
    │   ├── cache/
    │   │   └── redis.go
    │   ├── ai/
    │   │   ├── gemini.go
    │   │   └── prompt.go
    │   ├── services/
    │   │   └── processor.go
    │   ├── analytics/
    │   │   └── metrics.go
    │   └── config/
    │       └── config.go
    ├── .env.example
    └── go.mod
```
Submit text or a URL for summarization.

Request:

```json
{
  "input": "https://example.com/article"
}
```

Response:

```json
{
  "job_id": "abc123",
  "status": "processing"
}
```

Check job status.

Response:

```json
{
  "status": "completed",
  "summary": "AI tools are dominating modern developer workflows.",
  "tags": ["AI", "DevTools"]
}
```

Analytics response:

```json
{
  "total_requests": 1200,
  "cache_hits": 800,
  "queue_size": 5,
  "avg_processing_time_ms": 320
}
```
| Purpose | Key | Example |
|---|---|---|
| Cache | `summary:{hash}` | `summary:abc123` |
| Job data | `job:{id}` | `job:xyz789` |
| Queue | `job_queue` | `list` |
| Analytics | `metrics:*` | `metrics:hits` |
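Centralizing the key schema in small builder functions keeps handlers and workers from constructing ad-hoc, typo-prone keys. A sketch (names are illustrative):

```go
package main

import "fmt"

// Key builders matching the schema table above.
func cacheKey(hash string) string   { return fmt.Sprintf("summary:%s", hash) }
func jobKey(id string) string       { return fmt.Sprintf("job:%s", id) }
func metricKey(name string) string  { return fmt.Sprintf("metrics:%s", name) }

// The queue is a single well-known Redis list.
const queueKey = "job_queue"

func main() {
	fmt.Println(cacheKey("abc123"), jobKey("xyz789"), metricKey("hits"), queueKey)
}
```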
```env
PORT=8080
REDIS_URL=redis://localhost:6379
GEMINI_API_KEY=your_key
WORKER_COUNT=3
CACHE_TTL=300
```
```bash
# Start Redis
docker run -d -p 6379:6379 redis

# Run the server
cd backend
go mod tidy
go run cmd/server/main.go
```
- Async processing with goroutines
- Redis as cache + queue
- API performance optimization
- AI integration (decoupled)
- Job-based architecture
- Observability
- WebSocket for live job updates
- Rate limiting using Redis
- Retry mechanism for failed jobs
- PostgreSQL for persistence
- AI batching
This project demonstrates:
- Real-world backend architecture
- Concurrency handling in Go
- Production-like async systems
- Efficient AI usage
- Strong system design thinking
This is not just an AI project.
It is a scalable backend system that happens to use AI.
Think like a backend engineer. Build like one.