An intelligent, AI-powered portfolio with a RAG (Retrieval-Augmented Generation) chatbot
A modern, interactive portfolio website featuring an AI-powered chatbot that can answer questions about my professional experience, skills, and projects. Built with React, TypeScript, and cutting-edge AI technologies including vector embeddings and semantic search.
- 🤖 AI Chatbot: Conversational interface powered by Groq (LLaMA 3.1) that understands context about my experience
- 🔍 RAG System: Retrieval-Augmented Generation using pgvector for accurate, source-backed responses
- ⚡ Vector Search: Semantic search using embeddings (Xenova/all-MiniLM-L6-v2) for intelligent document retrieval
- 🎨 Interactive UI: Hand gesture controls using MediaPipe for unique user interaction
- 📱 Responsive Design: Seamless experience across all devices
- 🚀 Deployed on Vercel: Fast, reliable hosting with edge functions
- `@xenova/transformers` - Local embeddings generation
- `openai` - API client for Groq integration
- `pg` - PostgreSQL client for the vector database
- `react-markdown` - Markdown rendering
- `@mediapipe/hands` - Hand gesture recognition
- `pdf-parse` - Resume/document parsing
- Natural Conversations: Ask questions about my experience, skills, projects, and background
- Context-Aware Responses: Uses RAG to retrieve relevant information from my resume and portfolio
- Source Attribution: Responses include sources for transparency
- Semantic Search: Understands intent, not just keywords
User Query → Embedding Generation → Vector Search → Context Retrieval → LLM Response
- Vector Database: NeonDB with pgvector extension
- Embeddings Model: Xenova/all-MiniLM-L6-v2 (384 dimensions)
- LLM: Groq LLaMA 3.1 8B (fast inference)
- Chunking Strategy: 800-character chunks with overlap
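The chunking strategy above can be sketched as a small helper. This is an illustration, not the repo's actual splitter in `scripts/ingestPortfolio.ts`; the 100-character overlap is an assumption, since the README only says "with overlap":

```typescript
// Split a document into fixed-size chunks with overlap, so sentences that
// straddle a chunk boundary still appear intact in at least one chunk.
// Assumes overlap < size.
function chunkText(text: string, size = 800, overlap = 100): string[] {
  const chunks: string[] = [];
  const step = size - overlap; // advance less than `size` to create overlap
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}
```

Overlapping chunks trade a little storage for better retrieval recall: a fact split across a boundary is still retrievable from the chunk that contains it whole.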
- Hand Gesture Controls: Navigate using MediaPipe hand tracking
- Smooth Animations: Scroll-triggered animations
- Responsive Layout: Mobile-first design
- Fast Loading: Optimized assets and code splitting
- Node.js 18+ and npm
- PostgreSQL database (NeonDB recommended)
- Groq API key (free tier available)
- Clone the repository

```bash
git clone https://github.com/sunzid02/sarker-smart-portfolio.git
cd sarker-smart-portfolio
```

- Install dependencies

```bash
npm install
```

- Set up environment variables

Create a `.env.local` file:
```env
# Database
DATABASE_URL=your_neon_postgres_url

# AI API Keys
GROQ_API_KEY=your_groq_api_key

# Optional: Admin key for ingestion
RAG_ADMIN_KEY=your_secret_admin_key
```

- Set up the database
Run the SQL migration to create the vector table:
```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE rag_chunks (
  id TEXT PRIMARY KEY,
  source TEXT NOT NULL,
  part INTEGER,
  content TEXT NOT NULL,
  embedding vector(384)
);

CREATE INDEX ON rag_chunks USING ivfflat (embedding vector_cosine_ops);
```

- Ingest your portfolio data
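Once the table exists, a similarity query can be issued from Node through the `pg` client; pgvector expects the embedding as a `[x,y,...]` text literal. A hedged sketch (the `toPgVector` helper and the `SEARCH_SQL` constant are illustrative names, not the repo's actual API):

```typescript
// Serialize a JS number array into pgvector's text literal, e.g. "[0.25,-0.5]".
function toPgVector(embedding: number[]): string {
  return `[${embedding.join(",")}]`;
}

// Parameterized top-K nearest-chunk query. `<=>` is pgvector's cosine-distance
// operator, so an ascending ORDER BY returns the most similar chunks first.
const SEARCH_SQL = `
  SELECT id, source, content, 1 - (embedding <=> $1) AS similarity
  FROM rag_chunks
  ORDER BY embedding <=> $1
  LIMIT $2
`;
```

With the `pg` client from the dependency list, this would run as roughly `pool.query(SEARCH_SQL, [toPgVector(queryEmbedding), 6])`.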
```bash
# This will parse your resume and generate embeddings
npm run ingest
```

- Run the development server

```bash
npm run dev
```

Visit http://localhost:5173 to see your portfolio!
```
sarker-smart-portfolio/
├── api/                     # Vercel serverless API routes
│   ├── chat.ts              # Chat endpoint
│   └── ingest.ts            # Data ingestion endpoint
├── public/                  # Static assets
│   ├── resume/              # Resume PDFs
│   └── images/              # Project images
├── scripts/                 # Utility scripts
│   └── ingestPortfolio.ts   # Data ingestion script
├── src/
│   ├── app/
│   │   ├── model/           # Data models
│   │   │   └── siteModel.ts # Portfolio content
│   │   ├── controller/      # Business logic
│   │   └── view/            # UI components
│   │       ├── sections/    # Page sections
│   │       └── ui/          # Reusable components
│   ├── lib/
│   │   └── rag/             # RAG implementation
│   │       ├── ragAnswer.ts # Query processing
│   │       ├── db.ts        # Database operations
│   │       └── ingest.ts    # Document ingestion
│   └── main.tsx             # App entry point
├── package.json
├── vite.config.ts
└── tsconfig.json
```
Edit `src/app/model/siteModel.ts` to update:
- Personal information
- Work experience
- Projects
- Skills
- Social links
Edit `src/lib/rag/ragAnswer.ts` to adjust:
- System prompts
- Response length
- Temperature settings
- Context retrieval settings
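To make those settings concrete, here is a rough sketch of how prompt assembly might look. The function name, system-prompt wording, and chunk shape are illustrative assumptions, not the repo's actual code in `ragAnswer.ts`:

```typescript
interface RetrievedChunk {
  source: string;  // e.g. "resume.pdf" (hypothetical source label)
  content: string; // one 800-character chunk of text
}

// Build the chat messages sent to the LLM: retrieved chunks become numbered
// context entries, and the system prompt instructs the model to cite them.
function buildMessages(question: string, chunks: RetrievedChunk[]) {
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.source}) ${c.content}`)
    .join("\n\n");
  return [
    {
      role: "system" as const,
      content:
        "Answer questions about the portfolio owner using ONLY the context " +
        "below. Cite sources like [1]. If the context is insufficient, say so.\n\n" +
        context,
    },
    { role: "user" as const, content: question },
  ];
}
```

Tightening or loosening the system prompt here is what controls tone, response length, and how strictly answers stay grounded in the retrieved context.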
```bash
# Development
npm run dev          # Start dev server
npm run api          # Start API server locally

# Building
npm run build        # Build for production
npm run preview      # Preview production build

# AI/RAG
npm run ingest       # Ingest portfolio data
npm run inspect-db   # Inspect database contents

# Code Quality
npm run lint         # Run ESLint
```

- Install Vercel CLI
```bash
npm i -g vercel
```

- Deploy

```bash
vercel --prod
```

- Set environment variables in the Vercel dashboard:
  - `DATABASE_URL`
  - `GROQ_API_KEY`
  - `RAG_ADMIN_KEY`
- Run ingestion (one-time):

```bash
npm run ingest
```
- Document Ingestion:
  - Parse resume and portfolio content
  - Split into 800-character chunks
  - Generate embeddings using Xenova/all-MiniLM-L6-v2
  - Store in PostgreSQL with pgvector
- Query Processing:
  - User asks a question
  - Generate an embedding for the query
  - Perform a vector similarity search
  - Retrieve the top 6 most relevant chunks
- Response Generation:
  - Pass the retrieved context to Groq LLaMA 3.1
  - Generate a natural language response
  - Include source attribution
  - Return the answer to the user
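In production the similarity search runs inside Postgres via pgvector, but an in-memory sketch shows what that ranking computes. The function names and document shape are illustrative, and with all-MiniLM-L6-v2's normalized outputs cosine similarity reduces to a dot product, though the full formula is used here for generality:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank chunks by similarity to the query embedding and keep the top K —
// the in-memory equivalent of `ORDER BY embedding <=> $1 LIMIT 6`.
function topK<T extends { embedding: number[] }>(
  query: number[],
  chunks: T[],
  k = 6
): T[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

This brute-force scan is O(n) per query; pgvector's ivfflat index approximates the same ranking without comparing against every stored chunk.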
| Technology | Why? |
|---|---|
| React + Vite | Fast development, modern tooling |
| TypeScript | Type safety, better DX |
| Groq | Fast inference, free tier, good quality |
| Xenova/Transformers | Run embeddings on serverless (no API cost) |
| NeonDB | Serverless Postgres with pgvector support |
| Vercel | Easy deployment, edge functions, great DX |
- First Contentful Paint: < 1.5s
- Time to Interactive: < 3s
- Chat Response Time: 1-2s (including embedding + LLM)
- Vector Search: < 100ms
- Lighthouse Score: 95+ across all metrics
- Add voice input/output for chat
- Multi-language support
- GitHub integration for live project stats
- Blog section with AI-powered search
- Analytics dashboard
- Progressive Web App (PWA) support
While this is a personal portfolio, I welcome suggestions and improvements!
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is open source and available under the MIT License.
Sarker Sunzid Mahmud
- 🌐 Portfolio: sarker-smart-portfolio.vercel.app
- 💼 LinkedIn: linkedin.com/in/sarkersunzid
- 🐙 GitHub: @sunzid02
- Groq for providing fast, free LLM inference
- Vercel for excellent hosting and deployment
- NeonDB for serverless PostgreSQL with vector support
- Hugging Face for transformer models
- MediaPipe for hand tracking capabilities
⭐ Star this repo if you found it helpful!

Made with ❤️ by Sarker Sunzid Mahmud