
🚀 CareerForge AI - Open Source Career Guidance Platform

100% Open-Source • Zero API Costs • Self-Hosted LLM

AI-powered career guidance platform that generates personalized learning roadmaps, skills analysis, and interview preparation, all running on your own infrastructure with zero usage costs.

License: MIT • LLM: Mistral 7B (Apache 2.0)


✨ Features

Feature               Description
──────────────────────────────────────────────────────────────────────────
🎯 Career Roadmaps    6-month personalized learning paths with weekly breakdowns
📚 Skills Analysis    Identify transferable skills and learning priorities
💼 Interview Prep     Technical & behavioral questions with answer frameworks
🧠 Knowledge Quizzes  15 MCQs per step with explanations (80% required to pass)
🎬 YouTube Tutorials  Auto-curated full-course videos (no Shorts!)
📊 Progress Tracking  Visual completion tracking with PDF export

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Frontend      β”‚     β”‚   Backend API    β”‚     β”‚   LLM Service   β”‚
β”‚   (React/Vite)  │────▢│   (FastAPI)      │────▢│   (Ollama)      β”‚
β”‚   Port: 5173    β”‚     β”‚   Port: 8000     β”‚     β”‚   Port: 11434   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                         β”‚
                                                         β–Ό
                                                 β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                                 β”‚  Mistral 7B     β”‚
                                                 β”‚  (Apache 2.0)   β”‚
                                                 β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
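In this architecture the backend is the only component that talks to the LLM service, over Ollama's documented `/api/generate` endpoint. A minimal standard-library sketch of that hop (this is an illustration, not the repo's actual `llm_service.py`):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"
MODEL = "mistral:7b-instruct-v0.3-q4_K_M"

def build_generate_body(prompt: str, model: str = MODEL) -> bytes:
    """Build the JSON body for Ollama's /api/generate; stream=False returns one JSON reply."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str, base_url: str = OLLAMA_URL) -> str:
    """Send a prompt to a locally running Ollama server and return the completion text."""
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=build_generate_body(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]
```

Because generation is a single blocking HTTP call, the FastAPI layer can expose it behind async endpoints and add validation, prompt templates, and rate limiting on top.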

πŸ† Why Mistral 7B?

Model License VRAM Speed Context
Mistral 7B βœ“ Apache 2.0 6GB 127 tok/s 32K
LLaMA 3 8B Meta License* 8GB 67 tok/s 8K
Mixtral 8x7B Apache 2.0 24GB+ 45 tok/s 32K

Mistral 7B is truly open with zero licensing friction for commercial use.


🚀 Quick Start

Prerequisites

  • macOS/Linux or WSL on Windows
  • 8GB+ RAM (16GB recommended)
  • Python 3.10+
  • Node.js 18+

1. Install Ollama & Model

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Start Ollama service
ollama serve

# Download Mistral 7B (4.1GB, takes 2-5 minutes)
ollama pull mistral:7b-instruct-v0.3-q4_K_M
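You can confirm the pull succeeded without the CLI by querying Ollama's `/api/tags` endpoint. A small stdlib-only sketch (not part of the repo):

```python
import json
import urllib.request

def list_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Return the tag of every model the local Ollama server has pulled."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

def has_model(models: list[str], prefix: str) -> bool:
    """True if any pulled tag starts with the given prefix, e.g. 'mistral:7b'."""
    return any(name.startswith(prefix) for name in models)
```

For example, `has_model(list_models(), "mistral:7b")` should be `True` once the download above finishes.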

2. Start Backend

cd backend

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Copy environment file
cp .env.example .env

# Run the server
uvicorn main_v2:app --reload --port 8000

3. Start Frontend

cd frontend
npm install
npm run dev

4. Open the App

Visit http://localhost:5173 🎉


📡 API Endpoints

Core AI Endpoints

Method  Endpoint                 Description
──────────────────────────────────────────────────────────
POST    /api/ai/roadmap          Generate career roadmap
POST    /api/ai/skills           Analyze skills & recommendations
POST    /api/ai/interview-prep   Interview preparation guide
POST    /api/ai/quiz             Generate knowledge quiz

Health Endpoints

Method  Endpoint             Description
─────────────────────────────────────────
GET     /api/health          System health status
GET     /api/health/llm      LLM service status
GET     /api/health/models   Available models

Legacy Compatibility

Method  Endpoint         Maps To
───────────────────────────────────
POST    /generate-path   /api/ai/roadmap
POST    /generate-quiz   /api/ai/quiz

Full API docs: http://localhost:8000/docs
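A roadmap request can be issued from any HTTP client; a hedged stdlib sketch follows. Note the payload field names here are illustrative assumptions only, since the real request schema lives in the interactive docs at /docs:

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"

def build_roadmap_payload(current_role: str, target_role: str, hours_per_week: int) -> bytes:
    # NOTE: field names below are hypothetical; consult http://localhost:8000/docs
    # for the backend's actual request model.
    return json.dumps({
        "current_role": current_role,
        "target_role": target_role,
        "hours_per_week": hours_per_week,
    }).encode()

def request_roadmap(body: bytes, base_url: str = API_BASE) -> dict:
    """POST the payload to /api/ai/roadmap and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/api/ai/roadmap",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)
```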


🐳 Docker Deployment

Quick Start (All Services)

# Clone and run everything
docker-compose up -d

# Wait for model to download (first run only)
docker-compose logs -f model-loader

Individual Services

# Just the LLM service
cd llm-service
docker-compose up -d

# Verify it's running
curl http://localhost:11434/api/tags

☁️ Deployment Options

1. Local Machine (FREE)

  • Hardware: 8GB RAM, any modern CPU
  • Performance: ~3-5s per roadmap generation
  • Best for: Development, personal use

2. Single VPS ($20-50/month)

Provider     RAM    vCPUs   Cost/mo
─────────────────────────────────────
Hetzner      16GB   4       $18
Contabo      16GB   4       $12
DigitalOcean 16GB   4       $48

3. GPU Cloud ($0.20-0.50/hour)

Provider   GPU          VRAM   Cost/hr
────────────────────────────────────────
RunPod     RTX 3060     12GB   $0.20
Vast.ai    RTX 3080     10GB   $0.25
Lambda     RTX 4090     24GB   $0.50

GPU gives 10-20x faster inference!


πŸ“ Project Structure

career-forge/
β”œβ”€β”€ backend/
β”‚   β”œβ”€β”€ main_v2.py              # FastAPI app (open-source LLM)
β”‚   β”œβ”€β”€ config.py               # Environment configuration
β”‚   β”œβ”€β”€ api/
β”‚   β”‚   β”œβ”€β”€ routes/             # API endpoint handlers
β”‚   β”‚   └── schemas/            # Request/response models
β”‚   β”œβ”€β”€ services/
β”‚   β”‚   β”œβ”€β”€ llm_service.py      # Ollama integration
β”‚   β”‚   β”œβ”€β”€ roadmap_service.py  # Roadmap generation
β”‚   β”‚   β”œβ”€β”€ skills_service.py   # Skills analysis
β”‚   β”‚   β”œβ”€β”€ interview_service.py # Interview prep
β”‚   β”‚   └── quiz_service_v2.py  # Quiz generation
β”‚   β”œβ”€β”€ prompts/                # Prompt templates
β”‚   └── utils/                  # Helpers
β”‚
β”œβ”€β”€ llm-service/
β”‚   β”œβ”€β”€ docker-compose.yml      # Ollama deployment
β”‚   β”œβ”€β”€ Modelfile               # Custom model config
β”‚   └── scripts/
β”‚       β”œβ”€β”€ setup.sh            # Installation script
β”‚       β”œβ”€β”€ health-check.sh     # Monitoring
β”‚       └── benchmark.sh        # Performance testing
β”‚
β”œβ”€β”€ frontend/                   # React/Vite app
β”œβ”€β”€ docker-compose.yml          # Full stack deployment
└── ARCHITECTURE.md             # Detailed architecture docs

🔧 Configuration

All settings via environment variables:

# LLM Service
LLM_BASE_URL=http://localhost:11434
LLM_MODEL=mistral:7b-instruct-v0.3-q4_K_M
LLM_TIMEOUT=120

# API Server
API_PORT=8000
CORS_ORIGINS=http://localhost:5173

# Features
YOUTUBE_ENABLED=true
RATE_LIMIT=30

See .env.example for all options.
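Reading these variables boils down to `os.getenv` with sensible local-dev defaults. A minimal sketch of what a loader like `config.py` might look like (the repo's actual implementation may differ, e.g. by using pydantic settings):

```python
import os

def load_settings() -> dict:
    """Read CareerForge settings from the environment, falling back to local-dev defaults."""
    return {
        "llm_base_url": os.getenv("LLM_BASE_URL", "http://localhost:11434"),
        "llm_model": os.getenv("LLM_MODEL", "mistral:7b-instruct-v0.3-q4_K_M"),
        "llm_timeout": int(os.getenv("LLM_TIMEOUT", "120")),
        "api_port": int(os.getenv("API_PORT", "8000")),
        # Comma-separated list of allowed CORS origins
        "cors_origins": os.getenv("CORS_ORIGINS", "http://localhost:5173").split(","),
        "youtube_enabled": os.getenv("YOUTUBE_ENABLED", "true").lower() == "true",
        "rate_limit": int(os.getenv("RATE_LIMIT", "30")),
    }
```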


🔒 Security

  • ✅ Rate limiting on AI endpoints
  • ✅ Input validation & sanitization
  • ✅ Prompt injection protection
  • ✅ No secrets in code
  • ✅ CORS configuration
  • ✅ Non-root Docker containers

🚀 Scaling Notes

Horizontal Scaling

# deploy.yaml (Kubernetes)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: careerforge-api
spec:
  replicas: 3  # Scale API pods
  ...

Response Caching

Add Redis for caching common roadmaps:

# Cache similar profile queries (profile must be JSON-serializable;
# Python's built-in hash() is process-salted and can't hash dicts,
# so derive a stable key from the serialized profile instead)
import hashlib, json

cache_key = hashlib.sha256(
    json.dumps(user_profile, sort_keys=True).encode()
).hexdigest()
if cached := redis.get(cache_key):
    return cached

Model Upgrades

Hot-swap to stronger models when resources available:

# Upgrade to Mixtral (needs 24GB+ VRAM)
ollama pull mixtral:8x7b-instruct-v0.1-q4_K_M

# Update .env
LLM_MODEL=mixtral:8x7b-instruct-v0.1-q4_K_M

📸 Screenshots

Beautiful gradient UI with purple theme, 3D loading animations, interactive quiz modals, and progress visualization.


πŸ› οΈ Development

Run Tests

cd backend
pytest tests/ -v

Lint & Format

black .
isort .
mypy .

Benchmark LLM

cd llm-service/scripts
./benchmark.sh

🤝 Contributing

  1. Fork the repository
  2. Create feature branch (git checkout -b feature/amazing)
  3. Commit changes (git commit -m 'Add amazing feature')
  4. Push to branch (git push origin feature/amazing)
  5. Open a Pull Request

📄 License

  • This Project: MIT License
  • Mistral 7B Model: Apache 2.0 License

πŸ‘¨β€πŸ’» Author

Harsh Saini

Made with ❀️ for developers who want AI without the API bill.


πŸ™ Acknowledgments

  • Mistral AI for the amazing open-source model
  • Ollama for making local LLM deployment easy
  • The open-source community for making this possible

"BC… ye banda serious hai." πŸš€
