This guide covers the migration from QueryHawk's current observability stack (OpenTelemetry Collector + Prometheus + Jaeger) to Grafana Alloy for unified observability.
Before:

```
QueryHawk App → OpenTelemetry Collector → Jaeger (Traces) + Prometheus (Metrics)
                          ↓
          Container per User (PostgreSQL Exporter)
```

After:

```
QueryHawk App → Grafana Alloy → Unified Grafana (Traces + Metrics + Logs)
                          ↓
          File-based Targets (No Containers)
```
```bash
# Stop the current stack
docker-compose down

# Remove old volumes (optional - back up first!)
docker volume rm queryhawk_prometheus_data queryhawk_prometheus_targets
```

Files changed:
- `docker-compose.yml` - Updated for Alloy - ✅
- `grafana-alloy/config.yml` - New Alloy configuration - ✅
- `grafana/provisioning/datasources/alloy-datasource.yml` - New datasource - ✅
- `server/utils/alloyPostgresExporter.ts` - New target management
```bash
# Build and start the new Alloy-based stack
docker-compose up --build -d

# Check Alloy is running
curl http://localhost:12345/ready
```

Check the Grafana datasource:
- Navigate to http://localhost:3001/datasources
- Verify the "Alloy" datasource is configured and working

Why this migration helps:
- Before: N users = N exporter containers, each with its own process and memory overhead
- After: N users = N target files served by a single Alloy process, so resource usage grows far more slowly
- Single UI: all traces, metrics, and logs in Grafana
- Better correlation: automatic linking between related telemetry
- Simplified management: one configuration file instead of multiple
- Lower latency: no container overhead for monitoring
- Better resource utilization: shared monitoring logic
- Faster target discovery: file-based instead of waiting on container startup
Alloy configuration (`grafana-alloy/config.yml`):

```yaml
server:
  log_level: info
  http_listen_port: 12345
  grpc_listen_port: 12346

metrics:
  configs:
    - name: default
      scrape_configs:
        - job_name: 'postgresql-users'
          file_sd_configs:
            - files: ['./targets/*.json']
          relabel_configs:
            - source_labels: [user_id]
              target_label: user_id
```

Example target file:

```json
[
  {
    "targets": ["host:port"],
    "labels": {
      "user_id": "userId",
      "database": "postgresql",
      "service": "queryhawk"
    }
  }
]
```

API changes:
- Old: `/monitoring/start` → creates a Docker container
- New: `/monitoring/start` → creates an Alloy target file
- Old: Prometheus at http://localhost:9090
- New: Alloy at http://localhost:12345
- Old: container lifecycle management
- New: file-based target management
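The new `/monitoring/start` flow boils down to writing one file_sd JSON file per user. A minimal sketch of what `server/utils/alloyPostgresExporter.ts` could look like — the helper names and targets directory are assumptions for illustration, not the actual implementation:

```typescript
// Sketch of file-based target management (hypothetical helper names).
import * as fs from "fs/promises";
import * as path from "path";

// Assumed path, matching the directory mentioned in this guide.
const TARGETS_DIR = "grafana-alloy/targets";

interface AlloyTarget {
  targets: string[]; // ["host:port"]
  labels: { user_id: string; database: string; service: string };
}

// Create (or overwrite) the target file for a user. Alloy's file-based
// service discovery picks it up on its next refresh - no container start.
export async function writeAlloyTarget(
  userId: string,
  host: string,
  port: number
): Promise<string> {
  const body: AlloyTarget[] = [
    {
      targets: [`${host}:${port}`],
      labels: { user_id: userId, database: "postgresql", service: "queryhawk" },
    },
  ];
  await fs.mkdir(TARGETS_DIR, { recursive: true });
  const file = path.join(TARGETS_DIR, `${userId}.json`);
  await fs.writeFile(file, JSON.stringify(body, null, 2));
  return file;
}

// Delete the target file when monitoring stops; Alloy drops the target.
export async function removeAlloyTarget(userId: string): Promise<void> {
  await fs.rm(path.join(TARGETS_DIR, `${userId}.json`), { force: true });
}
```

Because stopping monitoring is just an `fs.rm`, there is no container lifecycle to track and nothing to leak if the server restarts mid-operation.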
If you need to rollback to the old stack:
```bash
# Restore old docker-compose.yml
git checkout HEAD~1 -- docker-compose.yml

# Restore old OpenTelemetry config
git checkout HEAD~1 -- opentelemetry/otel-config.yml

docker-compose down
docker-compose up -d
```

To verify the migration:

```bash
# Alloy should be healthy
curl http://localhost:12345/ready

# Check metrics endpoint
curl http://localhost:12345/metrics
```

- Navigate to http://localhost:3001
- Check datasources → the Alloy datasource should be configured
- Verify dashboards are receiving data

Test end-to-end:
- Add a new database connection
- Verify a target file is created in `grafana-alloy/targets/`
- Check that metrics appear in Grafana
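The "Alloy should be healthy" check can also be scripted, e.g. as a pre-flight step before running end-to-end tests. A hedged sketch of a readiness poll — the function name and defaults are illustrative, and the fetcher is injected so the logic is testable without a running stack:

```typescript
// Hypothetical readiness poll against Alloy's /ready endpoint.
type Fetcher = (url: string) => Promise<{ ok: boolean }>;

export async function waitForAlloy(
  fetcher: Fetcher,
  url = "http://localhost:12345/ready",
  attempts = 5,
  delayMs = 1000
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetcher(url);
      if (res.ok) return true; // /ready answered 2xx - Alloy is up
    } catch {
      // Connection refused etc. - service not up yet, retry below.
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false;
}
```

In real use you would pass the global `fetch` as the fetcher; in tests, a stub that returns a canned `{ ok: true }` or throws.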
```bash
# Check logs
docker-compose logs grafana-alloy

# Verify config file syntax
docker exec -it queryhawk-grafana-alloy-1 alloy --config.file=/etc/alloy/config.yml --check
```

If targets are missing:
- Verify target files exist in `grafana-alloy/targets/`
- Check file permissions
- Verify the JSON syntax is valid

If no data reaches Grafana:
- Check the Alloy datasource configuration
- Verify Alloy is sending data to Grafana
- Check network connectivity between services

```bash
# Check Alloy targets
curl http://localhost:12345/api/v1/targets

# Check Alloy metrics
curl http://localhost:12345/metrics

# Check target files
ls -la grafana-alloy/targets/
```

Migration is successful when:
- ✅ Alloy service is running and healthy
- ✅ Grafana datasource is configured and working
- ✅ User database monitoring works without containers
- ✅ All existing dashboards receive data
- ✅ No more container-per-user resource explosion
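One troubleshooting step above is "verify the JSON syntax is valid." That check can be automated with a small validator for file_sd target files — a sketch under the assumption that every QueryHawk target carries a `user_id` label, as in the example target file earlier in this guide:

```typescript
// Hypothetical validator for Prometheus/Alloy file_sd target files.
interface FileSdTarget {
  targets: string[];
  labels: Record<string, string>;
}

// Returns a list of problems; an empty list means the file looks valid.
export function validateTargetFile(raw: string): string[] {
  const errors: string[] = [];
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch (e) {
    return [`invalid JSON: ${(e as Error).message}`];
  }
  if (!Array.isArray(parsed)) return ["top-level value must be an array"];
  parsed.forEach((entry, i) => {
    const t = entry as FileSdTarget;
    if (!Array.isArray(t.targets) || t.targets.length === 0) {
      errors.push(`entry ${i}: "targets" must be a non-empty array`);
    } else if (!t.targets.every((s) => typeof s === "string" && /^[^:]+:\d+$/.test(s))) {
      errors.push(`entry ${i}: each target must look like "host:port"`);
    }
    if (typeof t.labels !== "object" || t.labels === null || !t.labels["user_id"]) {
      errors.push(`entry ${i}: labels.user_id is required for per-user metrics`);
    }
  });
  return errors;
}
```

Running this over every file in `grafana-alloy/targets/` during CI or on server startup would catch malformed targets before Alloy silently ignores them.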
- Grafana Alloy Documentation
- Alloy Configuration Reference
- OpenTelemetry Integration
- QueryHawk Support
If you encounter issues during migration:
- Check the troubleshooting section above
- Review Alloy logs: `docker-compose logs grafana-alloy`
- Verify configuration syntax
- Check network connectivity between services
Migration completed successfully! 🎉
QueryHawk now uses Grafana Alloy for unified observability with no more container-per-user overhead.