πŸš€ Sarker Smart Portfolio


An intelligent, AI-powered portfolio with a RAG (Retrieval-Augmented Generation) chatbot

🌐 Live Demo β€’ πŸ“– Features β€’ πŸš€ Getting Started


✨ Overview

A modern, interactive portfolio website featuring an AI-powered chatbot that can answer questions about my professional experience, skills, and projects. Built with React, TypeScript, and cutting-edge AI technologies including vector embeddings and semantic search.

🎯 Key Highlights

  • πŸ€– AI Chatbot: Conversational interface powered by Groq (LLaMA 3.1) that understands context about my experience
  • πŸ” RAG System: Retrieval-Augmented Generation using pgvector for accurate, source-backed responses
  • ⚑ Vector Search: Semantic search using embeddings (Xenova/all-MiniLM-L6-v2) for intelligent document retrieval
  • 🎨 Interactive UI: Hand gesture controls using MediaPipe for unique user interaction
  • πŸ“± Responsive Design: Seamless experience across all devices
  • 🌐 Deployed on Vercel: Fast, reliable hosting with edge functions

πŸ› οΈ Tech Stack

Frontend

React TypeScript Vite

AI & ML

Groq Transformers MediaPipe

Database & Infrastructure

PostgreSQL pgvector Vercel

Key Libraries

  • @xenova/transformers - Local embeddings generation
  • openai - OpenAI-compatible client used to call Groq's API
  • pg - PostgreSQL client for vector database
  • react-markdown - Markdown rendering
  • @mediapipe/hands - Hand gesture recognition
  • pdf-parse - Resume/document parsing

🎯 Features

πŸ€– AI-Powered Chat Assistant

  • Natural Conversations: Ask questions about my experience, skills, projects, and background
  • Context-Aware Responses: Uses RAG to retrieve relevant information from my resume and portfolio
  • Source Attribution: Responses include sources for transparency
  • Semantic Search: Understands intent, not just keywords

πŸ” RAG Architecture

User Query β†’ Embedding Generation β†’ Vector Search β†’ Context Retrieval β†’ LLM Response
  • Vector Database: NeonDB with pgvector extension
  • Embeddings Model: Xenova/all-MiniLM-L6-v2 (384 dimensions)
  • LLM: Groq LLaMA 3.1 8B (fast inference)
  • Chunking Strategy: 800-character chunks with overlap
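The chunking step above can be sketched in a few lines of TypeScript. This is a minimal illustration, not the repo's actual ingest code: the 800-character size comes from the text, while the 100-character overlap and the function name `chunkText` are assumptions for the example.

```typescript
// Split text into fixed-size chunks with a sliding overlap so that
// sentences cut at a chunk boundary still appear intact in a neighbor.
// CHUNK_SIZE matches the README; OVERLAP is an illustrative assumption.
const CHUNK_SIZE = 800;
const OVERLAP = 100;

function chunkText(text: string, size = CHUNK_SIZE, overlap = OVERLAP): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
    start += size - overlap; // step forward, keeping `overlap` chars shared
  }
  return chunks;
}
```

Overlap trades a little storage for retrieval quality: a fact split across a boundary is still retrievable from at least one chunk.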

🎨 Interactive Features

  • Hand Gesture Controls: Navigate using MediaPipe hand tracking
  • Smooth Animations: Scroll-triggered animations
  • Responsive Layout: Mobile-first design
  • Fast Loading: Optimized assets and code splitting

πŸš€ Getting Started

Prerequisites

  • Node.js 18+ and npm
  • PostgreSQL database (NeonDB recommended)
  • Groq API key (free tier available)

Installation

  1. Clone the repository
git clone https://github.com/sunzid02/sarker-smart-portfolio.git
cd sarker-smart-portfolio
  2. Install dependencies
npm install
  3. Set up environment variables

Create a .env.local file:

# Database
DATABASE_URL=your_neon_postgres_url

# AI API Keys
GROQ_API_KEY=your_groq_api_key

# Optional: Admin key for ingestion
RAG_ADMIN_KEY=your_secret_admin_key
  4. Set up the database

Run the SQL migration to create the vector table:

CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE rag_chunks (
  id TEXT PRIMARY KEY,
  source TEXT NOT NULL,
  part INTEGER,
  content TEXT NOT NULL,
  embedding vector(384)
);

CREATE INDEX ON rag_chunks USING ivfflat (embedding vector_cosine_ops);
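When inserting rows into this table through the pg client, the `vector(384)` column accepts pgvector's bracketed text literal format. A small helper can do the serialization; note that `toVectorLiteral` is a hypothetical name for illustration, not a function from the repo.

```typescript
// pgvector accepts vectors as text literals of the form '[0.1,0.2,...]'.
// Serialize a number[] embedding before binding it as a query parameter.
function toVectorLiteral(embedding: number[]): string {
  return `[${embedding.join(",")}]`;
}

// Usage with the pg client (sketch):
// await client.query(
//   "INSERT INTO rag_chunks (id, source, part, content, embedding) VALUES ($1, $2, $3, $4, $5::vector)",
//   [id, source, part, content, toVectorLiteral(embedding)]
// );
```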
  5. Ingest your portfolio data
# This will parse your resume and generate embeddings
npm run ingest
  6. Run the development server
npm run dev

Visit http://localhost:5173 to see your portfolio!


πŸ“ Project Structure

sarker-smart-portfolio/
β”œβ”€β”€ api/                      # Vercel serverless API routes
β”‚   β”œβ”€β”€ chat.ts              # Chat endpoint
β”‚   └── ingest.ts            # Data ingestion endpoint
β”œβ”€β”€ public/                   # Static assets
β”‚   β”œβ”€β”€ resume/              # Resume PDFs
β”‚   └── images/              # Project images
β”œβ”€β”€ scripts/                  # Utility scripts
β”‚   └── ingestPortfolio.ts   # Data ingestion script
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ app/
β”‚   β”‚   β”œβ”€β”€ model/           # Data models
β”‚   β”‚   β”‚   └── siteModel.ts # Portfolio content
β”‚   β”‚   β”œβ”€β”€ controller/      # Business logic
β”‚   β”‚   └── view/            # UI components
β”‚   β”‚       β”œβ”€β”€ sections/    # Page sections
β”‚   β”‚       └── ui/          # Reusable components
β”‚   β”œβ”€β”€ lib/
β”‚   β”‚   └── rag/             # RAG implementation
β”‚   β”‚       β”œβ”€β”€ ragAnswer.ts # Query processing
β”‚   β”‚       β”œβ”€β”€ db.ts        # Database operations
β”‚   β”‚       └── ingest.ts    # Document ingestion
β”‚   └── main.tsx             # App entry point
β”œβ”€β”€ package.json
β”œβ”€β”€ vite.config.ts
└── tsconfig.json

πŸ”§ Configuration

Customizing Portfolio Content

Edit src/app/model/siteModel.ts to update:

  • Personal information
  • Work experience
  • Projects
  • Skills
  • Social links

Customizing AI Behavior

Edit src/lib/rag/ragAnswer.ts to adjust:

  • System prompts
  • Response length
  • Temperature settings
  • Context retrieval settings

πŸ“š Available Scripts

# Development
npm run dev              # Start dev server
npm run api              # Start API server locally

# Building
npm run build            # Build for production
npm run preview          # Preview production build

# AI/RAG
npm run ingest           # Ingest portfolio data
npm run inspect-db       # Inspect database contents

# Code Quality
npm run lint             # Run ESLint

🌐 Deployment

Deploying to Vercel

  1. Install Vercel CLI
npm i -g vercel
  2. Deploy
vercel --prod
  3. Set environment variables in Vercel dashboard:

    • DATABASE_URL
    • GROQ_API_KEY
    • RAG_ADMIN_KEY
  4. Run ingestion (one-time):

npm run ingest

🀝 How It Works

RAG Pipeline

  1. Document Ingestion:

    • Parse resume and portfolio content
    • Split into 800-character chunks
    • Generate embeddings using Xenova/all-MiniLM-L6-v2
    • Store in PostgreSQL with pgvector
  2. Query Processing:

    • User asks a question
    • Generate embedding for the query
    • Perform vector similarity search
    • Retrieve top 6 most relevant chunks
  3. Response Generation:

    • Pass retrieved context to Groq LLaMA 3.1
    • Generate natural language response
    • Include source attribution
    • Return to user
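The retrieval step above boils down to ranking chunks by cosine similarity to the query embedding. In production this runs inside Postgres via pgvector's cosine distance operator, but the math can be sketched in plain TypeScript (function names here are illustrative, not the repo's):

```typescript
// Rank stored chunks by cosine similarity to a query embedding and
// return the top-k matches — the same ordering pgvector's cosine
// distance operator (<=>) produces inside Postgres.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topK(
  query: number[],
  chunks: { content: string; embedding: number[] }[],
  k = 6 // the README retrieves the top 6 chunks
): { content: string; score: number }[] {
  return chunks
    .map(c => ({ content: c.content, score: cosineSimilarity(query, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The `k = 6` default mirrors the pipeline description; the retrieved chunks are then concatenated into the LLM prompt as context.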

Tech Stack Choices

| Technology | Why? |
| --- | --- |
| React + Vite | Fast development, modern tooling |
| TypeScript | Type safety, better DX |
| Groq | Fast inference, free tier, good quality |
| Xenova/Transformers | Runs embeddings on serverless (no API cost) |
| NeonDB | Serverless Postgres with pgvector support |
| Vercel | Easy deployment, edge functions, great DX |

πŸ“Š Performance

  • First Contentful Paint: < 1.5s
  • Time to Interactive: < 3s
  • Chat Response Time: 1-2s (including embedding + LLM)
  • Vector Search: < 100ms
  • Lighthouse Score: 95+ across all metrics

πŸ—ΊοΈ Roadmap

  • Add voice input/output for chat
  • Multi-language support
  • GitHub integration for live project stats
  • Blog section with AI-powered search
  • Analytics dashboard
  • Progressive Web App (PWA) support

🀝 Contributing

While this is a personal portfolio, I welcome suggestions and improvements!

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/AmazingFeature)
  3. Commit changes (git commit -m 'Add some AmazingFeature')
  4. Push to branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

πŸ“ License

This project is open source and available under the MIT License.


πŸ“§ Contact

Sarker Sunzid Mahmud


πŸ™ Acknowledgments

  • Groq for providing fast, free LLM inference
  • Vercel for excellent hosting and deployment
  • NeonDB for serverless PostgreSQL with vector support
  • Hugging Face for transformer models
  • MediaPipe for hand tracking capabilities

⭐ Star this repo if you found it helpful!

Made with ❀️ by Sarker Sunzid Mahmud
