+ConvexHire is an intelligent recruitment automation platform that leverages Multi-Agent Systems (MAS) and Retrieval-Augmented Generation (RAG) to streamline hiring workflows. Built with transparency and bias-awareness at its core, it provides explainable AI-driven candidate matching while keeping humans in the loop for critical decisions.
-| Technology | Icon | Description |
-| :--- | :---: | :--- |
-| **LangChain** | | LLM application framework with chaining capabilities |
-| **LangGraph** | | Orchestration layer for cyclic multi-agent workflows |
-| **LangSmith** | | Observability, testing, and debugging platform |
-| **Groq** | | Ultra-low latency LLM inference engine |
-| **Google Gemini** | | Multimodal AI model for complex reasoning |
-| **Qdrant** | | Vector Search Engine for semantic matching |
-| **Hugging Face** | | SOTA Embedding models and Transformers |
+### The Problem
-
+Traditional ATS platforms rely on opaque keyword matching, often disqualifying qualified candidates without explanation. ConvexHire uses semantic understanding and transparent scoring to bridge this gap.
----
-
-## 💡 Why ConvexHire?
+## Features
-
-
🤖 AI-Powered
-
Multi-agent system with specialized AI workers for screening, ranking, and scheduling.
-
-
-
🔍 Transparent
-
Explainable AI (XAI) provides clear reasoning for every candidate match score.
-
-
-
👥 Human-in-Loop
-
Critical hiring decisions always require human approval and oversight.
-
-
-
-
-### The Problem We Solve
-Traditional ATS platforms rely on opaque keyword matching, often disqualifying qualified candidates without explanation. **ConvexHire** bridges the "Candidate Experience Gap" through semantic understanding, deep document analysis, and transparent scoring.
-
----
+
-## 🔄 System Workflow
+**AI-Powered Automation**
+- Intelligent resume screening and analysis
+- Semantic candidate matching with vector search
+- Automated job description generation
+- Smart interview scheduling
-> [!NOTE]
-> Click on the diagram below to view it in full resolution.
+
+
-
-
-
-
-
- Complete end-to-end recruitment workflow with AI agent orchestration.
-
+**Transparent & Fair**
+- Explainable AI decision-making
+- Bias-aware algorithms
+- Clear match scoring with reasoning
+- Human-in-the-loop oversight
-
+
+
+
+
-**Human-in-the-Loop Checkpoints:**
-1. `JD Approval` → Recruiter validates the AI-generated Job Description.
-2. `Candidate Review` → Recruiter reviews shortlisted candidates.
-3. `Final Decision` → Human confirmation before sending offers.
+**Advanced Document Processing**
+- Multi-column CV parsing with Docling
+- Layout-preserving OCR
+- Support for scanned documents
+- High accuracy text extraction
----
-
-## 🏗️ System Architecture
+
+
-> [!TIP]
-> The system utilizes a microservices approach orchestrated by LangGraph.
+**Seamless Integration**
+- Gmail and Google Calendar sync
+- RESTful API architecture
+- Real-time updates
+- Vector-based semantic search
-
-#### 2️⃣ Backend Setup (FastAPI)
-```bash
-cd backend
-# create env from example
-cp .env.example .env
-# Install dependencies & run
-uv sync
-uv run fastapi dev
-```
-> Backend runs on: `http://localhost:8000`
-> Swagger Docs: `http://localhost:8000/docs`
+
-#### 3️⃣ Frontend Setup (Next.js)
-```bash
-cd ../frontend
-# create env from local
-cp .env.local .env
-# Install dependencies & run
-npm install
-npm run dev
-```
-> App runs on: `http://localhost:3000`
+
+
+
+
+
----
+
-## 📊 Star History
+### AI and Data Layer
-[](https://star-history.com/#devrahulbanjara/ConvexHire&Date)
-
-**⭐ Like this project? Give us a star on GitHub!**
+
+
+
+
+
+
+
+
----
+### Technology Roles
-## 🙏 Acknowledgements
+| Component | Technology | Purpose |
+|:----------|:-----------|:--------|
+| **Backend API** | FastAPI | High-performance REST API server |
+| **Frontend** | Next.js | Server-side rendered React application |
+| **Orchestration** | LangGraph | Multi-agent workflow management |
+| **LLM Framework** | LangChain | LLM application development |
+| **Observability** | LangSmith | Debugging and monitoring |
+| **Inference** | Groq | Ultra-low latency LLM processing |
+| **Multimodal AI** | Google Gemini | Complex reasoning tasks |
+| **Vector Store** | Qdrant | Semantic search and matching |
+| **Embeddings** | Hugging Face | Text vectorization models |
+| **Document Processing** | Docling | OCR and layout-preserving parsing |
+| **Database** | Supabase | PostgreSQL database with real-time features |
+| **Integration** | Gmail / Google Calendar | Communication and scheduling |
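As a rough, self-contained sketch of how the embedding and vector-search rows above combine for candidate matching (the production stack uses Hugging Face embedding models with Qdrant; the `embed` function below is a toy stand-in for a real model, and the job texts are invented):

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in for a real embedding model: hash character
    # trigrams into a small fixed-size, L2-normalized vector.
    vec = [0.0] * 16
    t = text.lower()
    for i in range(len(t) - 2):
        vec[hash(t[i:i + 3]) % 16] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Both vectors are unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

jobs = {
    "backend-eng": "Python FastAPI backend engineer",
    "ml-eng": "Machine learning engineer, embeddings and vector search",
}
candidate_skills = "Python, FastAPI, REST APIs"

query = embed(candidate_skills)
# Rank job ids by similarity to the candidate's skill text.
ranked = sorted(jobs, key=lambda j: cosine(query, embed(jobs[j])), reverse=True)
best = ranked[0]
```

With a real embedding model the same ranking loop applies; only `embed` and the vector store change.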
-This project leverages [**Docling**](https://github.com/DS4SD/docling) by IBM Research for efficient document conversion.
+## Acknowledgements
-
-📚 Citation Reference
+This project leverages [**Docling**](https://github.com/DS4SD/docling) by IBM Research for efficient document conversion and OCR processing.
-
+**Citation:**
-> Livathinos, N., Auer, C., Lysak, M., Nassar, A., Dolfi, M., Vagenas, P., ... & Staar, P. W. J. (2025). *Docling: An Efficient Open-Source Toolkit for AI-driven Document Conversion*. arXiv preprint arXiv:2501.17887.
->
-> 🔗 [https://arxiv.org/abs/2501.17887](https://arxiv.org/abs/2501.17887)
+> Livathinos, N., Auer, C., Lysak, M., Nassar, A., Dolfi, M., Vagenas, P., Berrospi Ramis, C., Omenetti, M., Dinkla, K., Kim, Y., Gupta, S., de Lima, R. T., Weber, V., Morin, L., Meijer, I., Kuropiatnyk, V., & Staar, P. W. J. (2025). *Docling: An Efficient Open-Source Toolkit for AI-driven Document Conversion*. arXiv preprint arXiv:2501.17887. https://arxiv.org/abs/2501.17887
```bibtex
-@misc{livathinos2025doclingefficientopensourcetoolkit,
+@misc{livathinos2025docling,
title={Docling: An Efficient Open-Source Toolkit for AI-driven Document Conversion},
author={Nikolaos Livathinos and Christoph Auer and Maksym Lysak and Ahmed Nassar and Michele Dolfi and Panos Vagenas and Cesar Berrospi Ramis and Matteo Omenetti and Kasper Dinkla and Yusik Kim and Shubham Gupta and Rafael Teixeira de Lima and Valery Weber and Lucas Morin and Ingmar Meijer and Viktor Kuropiatnyk and Peter W. J. Staar},
year={2025},
@@ -293,16 +232,23 @@ This project leverages [**Docling**](https://github.com/DS4SD/docling) by IBM Re
url={https://arxiv.org/abs/2501.17887}
}
```
-
+
+## Star History
+
+
+
+[](https://star-history.com/#devrahulbanjara/ConvexHire&Date)
+
+**Like this project? Give us a star!**
+
+
---
-### 💖 Built with passion for better recruitment
+### Made with passion by [@devrahulbanjara](https://github.com/devrahulbanjara)
[Report Bug](https://github.com/devrahulbanjara/ConvexHire/issues) • [Request Feature](https://github.com/devrahulbanjara/ConvexHire/issues) • [Contribute](CONTRIBUTING.md)
-**Made by [@devrahulbanjara](https://github.com/devrahulbanjara)**
-
diff --git a/backend/app/api/__init__.py b/backend/app/api/__init__.py
index fc7ec3e..5802f0d 100644
--- a/backend/app/api/__init__.py
+++ b/backend/app/api/__init__.py
@@ -1,7 +1,3 @@
-"""
-API package - Centralized router management
-"""
-
from fastapi import APIRouter
from app.api import (
@@ -14,10 +10,8 @@
users,
)
-# Create master router for API
api_router = APIRouter()
-# Include all route modules
api_router.include_router(auth.router, prefix="/auth", tags=["authentication"])
api_router.include_router(users.router, prefix="/users", tags=["users"])
api_router.include_router(candidate.router, prefix="/candidate", tags=["candidate"])
diff --git a/backend/app/api/auth.py b/backend/app/api/auth.py
index 6bb34c0..dadc9b3 100644
--- a/backend/app/api/auth.py
+++ b/backend/app/api/auth.py
@@ -11,7 +11,7 @@
SignupRequest,
TokenResponse,
)
-from app.services import AuthService
+from app.services import AuthService, UserService
router = APIRouter()
@@ -108,9 +108,7 @@ def google_login(request: Request):
@router.get("/google/callback")
@limiter.limit("5/minute")
-async def google_callback(
- request: Request, code: str, db: Session = Depends(get_db)
-):
+async def google_callback(request: Request, code: str, db: Session = Depends(get_db)):
try:
google_user = await AuthService.exchange_google_code(code)
@@ -144,8 +142,6 @@ def select_role(
user_id: str = Depends(get_current_user_id),
db: Session = Depends(get_db),
):
- from app.services import UserService
-
user = UserService.get_user_by_id(user_id, db)
if not user:
raise HTTPException(
diff --git a/backend/app/api/candidate.py b/backend/app/api/candidate.py
index 9380ef7..73e214a 100644
--- a/backend/app/api/candidate.py
+++ b/backend/app/api/candidate.py
@@ -18,26 +18,41 @@ def get_my_profile(
):
profile = CandidateService.get_full_profile(db, user_id)
+ social_links = [
+ schemas.SocialLinkResponse.model_validate(item) for item in profile.social_links
+ ]
+ work_experiences = [
+ schemas.WorkExperienceResponse.model_validate(item)
+ for item in profile.work_experiences
+ ]
+ educations = [
+ schemas.EducationResponse.model_validate(item) for item in profile.educations
+ ]
+ certifications = [
+ schemas.CertificationResponse.model_validate(item)
+ for item in profile.certifications
+ ]
+ skills = [schemas.SkillResponse.model_validate(item) for item in profile.skills]
+
return schemas.CandidateProfileFullResponse(
profile_id=profile.profile_id,
user_id=profile.user_id,
- full_name=profile.user.name, # From User Table
- email=profile.user.email, # From User Table
- picture=profile.user.picture, # From User Table
+ full_name=profile.user.name,
+ email=profile.user.email,
+ picture=profile.user.picture,
phone=profile.phone,
location_city=profile.location_city,
location_country=profile.location_country,
professional_headline=profile.professional_headline,
professional_summary=profile.professional_summary,
- social_links=profile.social_links,
- work_experiences=profile.work_experiences,
- educations=profile.educations,
- certifications=profile.certifications,
- skills=profile.skills,
+ social_links=social_links,
+ work_experiences=work_experiences,
+ educations=educations,
+ certifications=certifications,
+ skills=skills,
)
-# 2. UPDATE BASIC INFO
@router.patch("/me", response_model=schemas.CandidateProfileFullResponse)
@limiter.limit("5/minute")
def update_my_profile(
@@ -46,12 +61,10 @@ def update_my_profile(
user_id: str = Depends(get_current_user_id),
db: Session = Depends(get_db),
):
- # Reuse the GET logic to return full object after update
CandidateService.update_basic_info(db, user_id, data)
- return get_my_profile(user_id, db)
+ return get_my_profile(request, user_id, db)
-# 3. WORK EXPERIENCE
@router.post("/experience", response_model=schemas.WorkExperienceResponse)
@limiter.limit("5/minute")
def add_experience(
@@ -86,7 +99,6 @@ def update_experience(
return CandidateService.update_experience(db, user_id, item_id, data)
-# 4. EDUCATION
@router.post("/education", response_model=schemas.EducationResponse)
@limiter.limit("5/minute")
def add_education(
@@ -121,7 +133,6 @@ def update_education(
return CandidateService.update_education(db, user_id, item_id, data)
-# 5. SKILLS
@router.post("/skills", response_model=schemas.SkillResponse)
@limiter.limit("5/minute")
def add_skill(
@@ -156,7 +167,6 @@ def update_skill(
return CandidateService.update_skill(db, user_id, item_id, data)
-# 6. CERTIFICATIONS
@router.post("/certifications", response_model=schemas.CertificationResponse)
@limiter.limit("5/minute")
def add_certification(
@@ -189,3 +199,37 @@ def update_certification(
db: Session = Depends(get_db),
):
return CandidateService.update_certification(db, user_id, item_id, data)
+
+
+@router.post("/social-links", response_model=schemas.SocialLinkResponse)
+@limiter.limit("5/minute")
+def add_social_link(
+ request: Request,
+ data: schemas.SocialLinkBase,
+ user_id: str = Depends(get_current_user_id),
+ db: Session = Depends(get_db),
+):
+ return CandidateService.add_social_link(db, user_id, data)
+
+
+@router.delete("/social-links/{item_id}", status_code=status.HTTP_204_NO_CONTENT)
+@limiter.limit("5/minute")
+def delete_social_link(
+ request: Request,
+ item_id: str,
+ user_id: str = Depends(get_current_user_id),
+ db: Session = Depends(get_db),
+):
+ CandidateService.delete_social_link(db, user_id, item_id)
+
+
+@router.patch("/social-links/{item_id}", response_model=schemas.SocialLinkResponse)
+@limiter.limit("5/minute")
+def update_social_link(
+ request: Request,
+ item_id: str,
+ data: schemas.SocialLinkBase,
+ user_id: str = Depends(get_current_user_id),
+ db: Session = Depends(get_db),
+):
+ return CandidateService.update_social_link(db, user_id, item_id, data)
diff --git a/backend/app/api/candidate_applications.py b/backend/app/api/candidate_applications.py
index d0a6035..e6b8e6a 100644
--- a/backend/app/api/candidate_applications.py
+++ b/backend/app/api/candidate_applications.py
@@ -3,7 +3,7 @@
from app.core import get_current_user_id, get_db
from app.core.limiter import limiter
-from app.schemas.application import ApplicationResponse
+from app.schemas import ApplicationResponse
from app.services.candidate.application_service import ApplicationService
router = APIRouter()
diff --git a/backend/app/api/jobs.py b/backend/app/api/jobs.py
index 2d4df2e..114f834 100644
--- a/backend/app/api/jobs.py
+++ b/backend/app/api/jobs.py
@@ -3,67 +3,25 @@
from datetime import UTC, date, datetime
from fastapi import APIRouter, Depends, HTTPException, Request, status
-from sqlalchemy.orm import Session
+from sqlalchemy.orm import Session, selectinload
from app.core import get_current_user_id, get_db
from app.core.limiter import limiter
-from app.models.candidate import CandidateProfile
-from app.models.company import CompanyProfile
-from app.models.job import JobDescription, JobPosting, JobPostingStats
+from app.models import (
+ CandidateProfile,
+ CompanyProfile,
+ JobDescription,
+ JobPosting,
+ JobPostingStats,
+)
from app.schemas import job as schemas
+from app.services.candidate.job_service_utils import get_latest_jobs
from app.services.candidate.vector_job_service import JobVectorService
router = APIRouter()
vector_service = JobVectorService()
-VISIBLE_STATUSES = ["active", "expired"]
-
-
-@router.post("/admin/reindex")
-@limiter.limit("5/minute")
-def admin_reindex_jobs(request: Request, db: Session = Depends(get_db)):
- """
- Admin endpoint to clear the vector store and re-index all open jobs.
- This fixes duplicate vector entries.
- """
- try:
- # 1. Clear the Qdrant collection
- if vector_service.client:
- try:
- vector_service.client.delete_collection(vector_service.collection_name)
- vector_service._ensure_collection_exists()
-
- # Recreate the vector store connection
- from langchain_qdrant import QdrantVectorStore
-
- vector_service.vector_store = QdrantVectorStore(
- client=vector_service.client,
- collection_name=vector_service.collection_name,
- embedding=vector_service.embedding_model,
- )
- except Exception as e:
- return {
- "success": False,
- "error": f"Failed to reset Qdrant collection: {str(e)}",
- }
-
- # 2. Reset is_indexed flag on all jobs
- db.query(JobPosting).update({JobPosting.is_indexed: False})
- db.commit()
-
- # 3. Re-index all jobs
- vector_service.index_all_pending_jobs(db)
-
- indexed_count = (
- db.query(JobPosting).filter(JobPosting.is_indexed == True).count()
- )
-
- return {
- "success": True,
- "message": f"Successfully re-indexed {indexed_count} jobs.",
- }
- except Exception as e:
- return {"success": False, "error": str(e)}
+VISIBLE_STATUSES = ["active"]
@router.get("/recommendations", response_model=schemas.JobListResponse)
@@ -73,9 +31,11 @@ def get_recommendations(
user_id: str,
page: int = 1,
limit: int = 10,
+ employment_type: str | None = None,
+ location_type: str | None = None,
db: Session = Depends(get_db),
):
- # 1. Get Candidate Skills from Postgres
+    """Get personalized job recommendations based on the user's skills, falling back to the latest jobs if the user has no skills."""
candidate = (
db.query(CandidateProfile).filter(CandidateProfile.user_id == user_id).first()
)
@@ -84,74 +44,39 @@ def get_recommendations(
if candidate and candidate.skills:
user_skills = [s.skill_name for s in candidate.skills]
- # 2. Get Matching Job IDs from Qdrant
- # Fetch a large number to account for duplicates/closed jobs
- fetch_limit = 200 # Fetch up to 200 candidates from vector store
-
- raw_ids = []
+ all_jobs = []
if user_skills:
- raw_ids = vector_service.recommend_jobs_by_skills(
- user_skills, limit=fetch_limit
- )
+ raw_ids = vector_service.recommend_jobs_by_skills(user_skills, limit=200)
+ if raw_ids:
+ jobs_from_db = (
+ db.query(JobPosting)
+ .filter(
+ JobPosting.job_id.in_(raw_ids),
+ JobPosting.status.in_(VISIBLE_STATUSES),
+ )
+ .all()
+ )
+ id_to_job = {job.job_id: job for job in jobs_from_db}
+ all_jobs = [id_to_job[jid] for jid in raw_ids if jid in id_to_job]
- # 3. Fallback: If no skills or no vector results, show recent jobs from DB
- if not raw_ids:
- offset = (page - 1) * limit
- total_recent = (
- db.query(JobPosting).filter(JobPosting.status.in_(VISIBLE_STATUSES)).count()
- )
- recent_jobs = (
- db.query(JobPosting)
- .filter(JobPosting.status.in_(VISIBLE_STATUSES))
- .order_by(JobPosting.posted_date.desc())
- .offset(offset)
- .limit(limit)
- .all()
- )
+ if not all_jobs:
+ all_jobs = get_latest_jobs(db, limit=200)
- total_pages = math.ceil(total_recent / limit) if limit > 0 else 0
- return {
- "jobs": [map_job_to_response(j) for j in recent_jobs],
- "total": total_recent,
- "page": page,
- "limit": limit,
- "total_pages": total_pages,
- "has_next": page < total_pages,
- "has_prev": page > 1,
- }
+ if employment_type:
+ all_jobs = [job for job in all_jobs if job.employment_type == employment_type]
+ if location_type:
+ all_jobs = [job for job in all_jobs if job.location_type == location_type]
- # 4. Deduplicate IDs from vector store (preserving order/relevance)
- seen_ids = set()
- unique_vector_ids = []
- for jid in raw_ids:
- if jid not in seen_ids:
- seen_ids.add(jid)
- unique_vector_ids.append(jid)
-
- # 5. Fetch Full Data from Postgres and Filter Open Jobs
- # Build a list of valid, unique jobs
- valid_jobs = []
- valid_job_ids_seen = set() # Extra safety to ensure uniqueness in response
- for jid in unique_vector_ids:
- if jid in valid_job_ids_seen:
- continue
- job = db.query(JobPosting).get(jid)
- if job and job.status in VISIBLE_STATUSES:
- valid_jobs.append(map_job_to_response(job))
- valid_job_ids_seen.add(jid)
-
- # 6. Apply Pagination to the list of valid unique jobs
- total_valid = len(valid_jobs)
+ total = len(all_jobs)
start_idx = (page - 1) * limit
end_idx = start_idx + limit
+ paginated_jobs = all_jobs[start_idx:end_idx]
- paginated_jobs = valid_jobs[start_idx:end_idx]
-
- total_pages = math.ceil(total_valid / limit) if limit > 0 else 0
+ total_pages = math.ceil(total / limit) if limit > 0 else 0
return {
- "jobs": paginated_jobs,
- "total": total_valid,
+ "jobs": [map_job_to_response(job) for job in paginated_jobs],
+ "total": total,
"page": page,
"limit": limit,
"total_pages": total_pages,
@@ -164,59 +89,46 @@ def get_recommendations(
@limiter.limit("5/minute")
def search_jobs(
request: Request,
- q: str,
+ q: str = "",
page: int = 1,
limit: int = 10,
+ employment_type: str | None = None,
+ location_type: str | None = None,
db: Session = Depends(get_db),
):
- # Fetch a large number to account for duplicates/closed jobs
- fetch_limit = 200
-
- # 1. Get IDs from Qdrant
- raw_ids = vector_service.search_jobs(q, limit=fetch_limit)
-
- if not raw_ids:
- return {
- "jobs": [],
- "total": 0,
- "page": 1,
- "limit": limit,
- "total_pages": 0,
- "has_next": False,
- "has_prev": False,
- }
+ all_jobs = []
+ if q.strip():
+ raw_ids = vector_service.search_jobs(q, limit=200)
+ if raw_ids:
+ jobs_from_db = (
+ db.query(JobPosting)
+ .filter(
+ JobPosting.job_id.in_(raw_ids),
+ JobPosting.status.in_(VISIBLE_STATUSES),
+ )
+ .all()
+ )
+ id_to_job = {job.job_id: job for job in jobs_from_db}
+ all_jobs = [id_to_job[jid] for jid in raw_ids if jid in id_to_job]
+
+ if not all_jobs:
+ all_jobs = get_latest_jobs(db, limit=200)
- # 2. Deduplicate IDs from vector store (preserving order/relevance)
- seen_ids = set()
- unique_vector_ids = []
- for jid in raw_ids:
- if jid not in seen_ids:
- seen_ids.add(jid)
- unique_vector_ids.append(jid)
-
- # 3. Fetch Full Data and Filter
- valid_jobs = []
- valid_job_ids_seen = set()
- for jid in unique_vector_ids:
- if jid in valid_job_ids_seen:
- continue
- job = db.query(JobPosting).get(jid)
- if job and job.status in VISIBLE_STATUSES:
- valid_jobs.append(map_job_to_response(job))
- valid_job_ids_seen.add(jid)
-
- # 4. Apply Pagination
- total_valid = len(valid_jobs)
+ if employment_type:
+ all_jobs = [job for job in all_jobs if job.employment_type == employment_type]
+ if location_type:
+ all_jobs = [job for job in all_jobs if job.location_type == location_type]
+
+ total = len(all_jobs)
start_idx = (page - 1) * limit
end_idx = start_idx + limit
+ paginated_jobs = all_jobs[start_idx:end_idx]
- paginated_jobs = valid_jobs[start_idx:end_idx]
-
- total_pages = math.ceil(total_valid / limit) if limit > 0 else 0
+ total_pages = math.ceil(total / limit) if limit > 0 else 0
return {
- "jobs": paginated_jobs,
- "total": total_valid,
+ "jobs": [map_job_to_response(job) for job in paginated_jobs],
+ "total": total,
"page": page,
"limit": limit,
"total_pages": total_pages,
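Both endpoints above now share the same in-memory filter-then-paginate shape. As a minimal standalone sketch (the `paginate` helper and its name are illustrative; the field names match the response dicts in these hunks, not an actual service function):

```python
import math

def paginate(items: list, page: int, limit: int) -> dict:
    # Slice an already-filtered, already-ranked list and report
    # the same pagination metadata the endpoints return.
    total = len(items)
    total_pages = math.ceil(total / limit) if limit > 0 else 0
    start = (page - 1) * limit
    return {
        "items": items[start:start + limit],
        "total": total,
        "page": page,
        "limit": limit,
        "total_pages": total_pages,
        "has_next": page < total_pages,
        "has_prev": page > 1,
    }

result = paginate(list(range(25)), page=2, limit=10)
```

Note this paginates after fetching up to 200 rows into memory, which is fine at that cap but would not scale to unbounded result sets.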
@@ -235,12 +147,11 @@ def create_job(
user_id: str = Depends(get_current_user_id),
db: Session = Depends(get_db),
):
- """Create a new job posting"""
company = db.query(CompanyProfile).filter(CompanyProfile.user_id == user_id).first()
if not company:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
- detail="Company profile not found for this user",
+ detail="Company profile not found",
)
company_id = company.company_id
@@ -248,7 +159,6 @@ def create_job(
job_description_id = str(uuid.uuid4())
job_id = str(uuid.uuid4())
- # Handle required skills - allow empty list for drafts
required_skills_list = (
job_data.requiredSkillsAndExperience
if job_data.requiredSkillsAndExperience
@@ -268,7 +178,7 @@ def create_job(
job_description = JobDescription(
job_description_id=job_description_id,
- role_overview=job_data.description or "", # Allow empty for drafts
+ role_overview=job_data.description or "",
required_skills_experience=required_skills_experience_dict,
nice_to_have=nice_to_have_dict,
offers=offers_dict,
@@ -293,7 +203,6 @@ def create_job(
except Exception:
application_deadline = None
- # Determine status - use provided status or default to "active"
job_status = job_data.status if job_data.status else "active"
job_posting = JobPosting(
@@ -342,24 +251,15 @@ def create_job(
def get_jobs(
request: Request,
user_id: str | None = None,
- company_id: str | None = None, # Keep for backward compatibility
+ company_id: str | None = None,
status: str | None = None,
page: int = 1,
limit: int = 10,
db: Session = Depends(get_db),
):
- """
- Get list of jobs with optional filtering by user_id (recruiter) or company_id and status.
- If user_id is provided, looks up the company profile for that user and returns jobs for that company.
- If company_id is provided, returns jobs for that company directly.
- If status is provided, filters by status.
- For recruiters (user_id provided): returns all statuses by default (active, draft, expired).
- For public views (company_id or neither): shows active/expired by default.
- """
query = db.query(JobPosting)
is_recruiter_view = False
- # If user_id is provided, look up the company profile
if user_id:
is_recruiter_view = True
company_profile = (
@@ -368,7 +268,6 @@ def get_jobs(
if company_profile:
query = query.filter(JobPosting.company_id == company_profile.company_id)
else:
- # User has no company profile, return empty result
return {
"jobs": [],
"total": 0,
@@ -378,25 +277,16 @@ def get_jobs(
"has_next": False,
"has_prev": False,
}
- # Filter by company_id if provided (backward compatibility)
elif company_id:
query = query.filter(JobPosting.company_id == company_id)
- # Filter by status
- # If status is explicitly provided, use it
if status:
query = query.filter(JobPosting.status == status)
- # For recruiters viewing their own jobs, return all statuses (active, draft, expired)
- # For public views, only show active/expired
elif not is_recruiter_view:
query = query.filter(JobPosting.status.in_(VISIBLE_STATUSES))
- # Get total count
total = query.count()
- # Apply pagination and eager load relationships
- from sqlalchemy.orm import selectinload
-
offset = (page - 1) * limit
jobs = (
query.options(
@@ -432,98 +322,47 @@ def get_job_detail(request: Request, job_id: str, db: Session = Depends(get_db))
return map_job_to_response(job)
-def map_job_to_response(job: JobPosting):
- """Map JobPosting model to API response that matches frontend Job type"""
-
- # Build location string
- location_parts = []
- if job.location_city:
- location_parts.append(job.location_city)
- if job.location_country:
- location_parts.append(job.location_country)
- location = (
- ", ".join(location_parts)
- if location_parts
- else job.location_type or "Not specified"
- )
+def _build_location(city: str | None, country: str | None, location_type: str) -> str:
+ parts = [p for p in [city, country] if p]
+ return ", ".join(parts) if parts else location_type or "Not specified"
- requirements = []
- if job.job_description and job.job_description.required_skills_experience:
- req_and_skills = job.job_description.required_skills_experience
- if isinstance(req_and_skills, dict) and isinstance(
- req_and_skills.get("required_skills_experience"), list
- ):
- requirements = req_and_skills["required_skills_experience"]
-
- benefits = []
- if job.job_description and job.job_description.offers:
- offers = job.job_description.offers
- if isinstance(offers, dict) and isinstance(offers.get("benefits"), list):
- benefits = offers["benefits"]
-
- nice_to_have = []
- if job.job_description and job.job_description.nice_to_have:
- nth = job.job_description.nice_to_have
- if isinstance(nth, list):
- nice_to_have = nth
- elif isinstance(nth, dict):
- for key, val in nth.items():
- if isinstance(val, list):
- nice_to_have = val
- break
-
- company = None
- if job.company:
- company_location_parts = []
- if job.company.location_city:
- company_location_parts.append(job.company.location_city)
- if job.company.location_country:
- company_location_parts.append(job.company.location_country)
- company_location = (
- ", ".join(company_location_parts) if company_location_parts else None
- )
- company = {
- "id": job.company.company_id,
- "name": job.company.company_name,
- "description": job.company.description,
- "location": company_location,
- "website": job.company.website,
- "industry": job.company.industry,
- "founded_year": job.company.founded_year,
- }
+def _extract_list_from_dict(data: dict | None, key: str) -> list:
+ if not data or not isinstance(data, dict):
+ return []
+ value = data.get(key, [])
+ return value if isinstance(value, list) else []
+
+
+def map_job_to_response(job: JobPosting):
+ jd = job.job_description
return {
- # IDs
"job_id": job.job_id,
"id": job.job_id,
"company_id": job.company_id,
"job_description_id": job.job_description_id,
- # Core job info
"title": job.title,
"department": job.department,
"level": job.level,
- # Location - combined for frontend
- "location": location,
+ "location": _build_location(
+ job.location_city, job.location_country, job.location_type
+ ),
"location_city": job.location_city,
"location_country": job.location_country,
- "is_remote": job.location_type
- == "Remote", # Derived for frontend compatibility
+ "is_remote": job.location_type == "Remote",
"location_type": job.location_type,
- # Employment
"employment_type": job.employment_type or "Full-time",
- # Salary - provide both formats
"salary_min": job.salary_min,
"salary_max": job.salary_max,
- "salary_currency": job.salary_currency or "USD",
+ "salary_currency": job.salary_currency or "NPR",
"salary_range": {
"min": job.salary_min or 0,
"max": job.salary_max or 0,
- "currency": job.salary_currency or "USD",
+ "currency": job.salary_currency or "NPR",
}
- if job.salary_min or job.salary_max
+ if (job.salary_min or job.salary_max)
else None,
- # Status and dates
"status": job.status,
"posted_date": job.posted_date.isoformat() if job.posted_date else None,
"application_deadline": job.application_deadline.isoformat()
@@ -531,24 +370,30 @@ def map_job_to_response(job: JobPosting):
else None,
"created_at": job.created_at.isoformat() if job.created_at else None,
"updated_at": job.updated_at.isoformat() if job.updated_at else None,
- # Company - as object for frontend
- "company": company,
- "company_name": job.company.company_name if job.company else "Unknown Company",
- # Description
- "description": job.job_description.role_overview
- if job.job_description
- else None,
- "role_overview": job.job_description.role_overview
- if job.job_description
- else None,
- # Skills, Requirements, Benefits, and Nice to Have - as arrays for frontend
- "requirements": requirements,
- "benefits": benefits,
- "nice_to_have": nice_to_have,
- "required_skills_experience": job.job_description.required_skills_experience
- if job.job_description
+ "company": {
+ "id": job.company.company_id,
+ "name": job.company.company_name,
+ "description": job.company.description,
+ "location": _build_location(
+ job.company.location_city, job.company.location_country, ""
+ ),
+ "website": job.company.website,
+ "industry": job.company.industry,
+ "founded_year": job.company.founded_year,
+ }
+ if job.company
else None,
- # Stats from JobPostingStats
+ "company_name": job.company.company_name if job.company else "Unknown Company",
+ "description": jd.role_overview if jd else None,
+ "role_overview": jd.role_overview if jd else None,
+ "requirements": _extract_list_from_dict(
+ jd.required_skills_experience if jd else None, "required_skills_experience"
+ ),
+ "benefits": _extract_list_from_dict(jd.offers if jd else None, "benefits"),
+ "nice_to_have": _extract_list_from_dict(
+ jd.nice_to_have if jd else None, "nice_to_have"
+ ),
+ "required_skills_experience": jd.required_skills_experience if jd else None,
"applicant_count": job.stats.applicant_count if job.stats else 0,
"views_count": job.stats.views_count if job.stats else 0,
"is_featured": False,
diff --git a/backend/app/api/jobs_crud.py b/backend/app/api/jobs_crud.py
index 566866b..52e9170 100644
--- a/backend/app/api/jobs_crud.py
+++ b/backend/app/api/jobs_crud.py
@@ -7,8 +7,7 @@
from app.api.jobs import map_job_to_response
from app.core import get_current_user_id, get_db
from app.core.limiter import limiter
-from app.models.company import CompanyProfile
-from app.models.job import JobDescription, JobPosting, JobPostingStats
+from app.models import CompanyProfile, JobDescription, JobPosting, JobPostingStats
from app.schemas import job as schemas
from app.services.recruiter.job_generation_service import JobGenerationService
@@ -26,10 +25,6 @@ def generate_job_draft(
draft_request: schemas.JobDraftGenerateRequest,
user_id: str = Depends(get_current_user_id),
):
- """
- Generate a job description draft using the JD generation agent.
- This endpoint does NOT save the job to the database - it only generates the draft.
- """
if not draft_request.raw_requirements:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
@@ -37,15 +32,12 @@ def generate_job_draft(
)
try:
- # Combine title and requirements for the agent
combined_requirements = (
f"{draft_request.title}. {draft_request.raw_requirements}"
)
- # Generate draft using agent
generated_draft = JobGenerationService.generate_job_draft(combined_requirements)
- # Map generated content to response schema
return schemas.JobDraftResponse(
title=generated_draft.job_title,
description=generated_draft.role_overview,
@@ -84,7 +76,6 @@ def create_job(
job_description_id = str(uuid.uuid4())
job_id = str(uuid.uuid4())
- # Handle required skills - allow empty list for drafts
required_skills_list = (
job_data.requiredSkillsAndExperience
if job_data.requiredSkillsAndExperience
@@ -104,7 +95,7 @@ def create_job(
job_description = JobDescription(
job_description_id=job_description_id,
- role_overview=job_data.description or "", # Allow empty for drafts
+ role_overview=job_data.description or "",
required_skills_experience=required_skills_experience_dict,
nice_to_have=nice_to_have_dict,
offers=offers_dict,
@@ -129,7 +120,6 @@ def create_job(
except Exception:
application_deadline = None
- # Determine status - use provided status or default to "active"
job_status = job_data.status if job_data.status else "active"
job_posting = JobPosting(
@@ -171,3 +161,105 @@ def create_job(
db.refresh(job_posting)
return map_job_to_response(job_posting)
+
+
+@router.put(
+ "/{job_id}", response_model=schemas.JobResponse, status_code=status.HTTP_200_OK
+)
+@limiter.limit("5/minute")
+def update_job(
+ request: Request,
+ job_id: str,
+ job_data: schemas.JobUpdate,
+ user_id: str = Depends(get_current_user_id),
+ db: Session = Depends(get_db),
+):
+ company = db.query(CompanyProfile).filter(CompanyProfile.user_id == user_id).first()
+ if not company:
+ raise HTTPException(
+ status_code=status.HTTP_404_NOT_FOUND,
+ detail="Company profile not found for this user",
+ )
+
+ job_posting = db.query(JobPosting).filter(JobPosting.job_id == job_id).first()
+ if not job_posting:
+ raise HTTPException(
+ status_code=status.HTTP_404_NOT_FOUND,
+ detail="Job not found",
+ )
+
+ if job_posting.company_id != company.company_id:
+ raise HTTPException(
+ status_code=status.HTTP_403_FORBIDDEN,
+ detail="You don't have permission to edit this job",
+ )
+
+ if job_data.title is not None:
+ job_posting.title = job_data.title
+ if job_data.department is not None:
+ job_posting.department = job_data.department
+ if job_data.level is not None:
+ job_posting.level = job_data.level
+ if job_data.locationCity is not None:
+ job_posting.location_city = job_data.locationCity
+ if job_data.locationCountry is not None:
+ job_posting.location_country = job_data.locationCountry
+ if job_data.locationType is not None:
+ job_posting.location_type = job_data.locationType
+ if job_data.employmentType is not None:
+ job_posting.employment_type = job_data.employmentType
+ if job_data.salaryMin is not None:
+ job_posting.salary_min = job_data.salaryMin
+ if job_data.salaryMax is not None:
+ job_posting.salary_max = job_data.salaryMax
+ if job_data.currency is not None:
+ job_posting.salary_currency = job_data.currency
+ if job_data.status is not None:
+ job_posting.status = job_data.status
+
+ if job_data.applicationDeadline is not None:
+ try:
+ if "T" in job_data.applicationDeadline:
+ job_posting.application_deadline = datetime.fromisoformat(
+ job_data.applicationDeadline.replace("Z", "+00:00")
+ ).date()
+ else:
+ job_posting.application_deadline = datetime.strptime(
+ job_data.applicationDeadline, "%Y-%m-%d"
+ ).date()
+ except Exception:
+ job_posting.application_deadline = None
+
+ job_posting.updated_at = datetime.now(UTC).replace(tzinfo=None)
+
+ job_description = (
+ db.query(JobDescription)
+ .filter(JobDescription.job_description_id == job_posting.job_description_id)
+ .first()
+ )
+
+ if job_description:
+ if job_data.description is not None:
+ job_description.role_overview = job_data.description
+
+ if job_data.requiredSkillsAndExperience is not None:
+ job_description.required_skills_experience = {
+ "required_skills_experience": job_data.requiredSkillsAndExperience
+ }
+
+ if job_data.niceToHave is not None:
+ job_description.nice_to_have = (
+ {"nice_to_have": job_data.niceToHave} if job_data.niceToHave else None
+ )
+
+ if job_data.benefits is not None:
+ job_description.offers = (
+ {"benefits": job_data.benefits} if job_data.benefits else None
+ )
+
+ job_description.updated_at = datetime.now(UTC).replace(tzinfo=None)
+
+ db.commit()
+ db.refresh(job_posting)
+
+ return map_job_to_response(job_posting)
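The new `update_job` handler applies a partial update: each `is not None` guard copies a field onto the ORM object only when the client actually supplied it. The pattern can be sketched in isolation (the `Record` class and `apply_partial_update` helper below are hypothetical, not part of the codebase):

```python
class Record:
    """Stand-in for an ORM object (hypothetical)."""

    def __init__(self, title: str, status: str):
        self.title = title
        self.status = status


def apply_partial_update(obj: object, updates: dict) -> None:
    # Mirrors the `is not None` guards in update_job: only fields the
    # client actually sent overwrite the stored values.
    for field, value in updates.items():
        if value is not None:
            setattr(obj, field, value)


job = Record(title="Backend Engineer", status="active")
apply_partial_update(job, {"title": "Senior Backend Engineer", "status": None})
# job.title is replaced; job.status keeps its previous value.
```

One caveat with this style: a client can never clear a field back to null, because an explicit `null` and an omitted field both arrive as `None`. Pydantic's `model_dump(exclude_unset=True)` distinguishes the two and is a common alternative when "set to null" must be expressible.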
diff --git a/backend/app/api/resume.py b/backend/app/api/resume.py
index 73470a1..ca272b6 100644
--- a/backend/app/api/resume.py
+++ b/backend/app/api/resume.py
@@ -3,8 +3,9 @@
from app.core import get_current_user_id, get_db
from app.core.limiter import limiter
-from app.schemas import resume as schemas
-from app.schemas.resume import (
+from app.schemas import (
+ CertificationBase,
+ EducationBase,
ResumeCertificationResponse,
ResumeCertificationUpdate,
ResumeEducationResponse,
@@ -13,13 +14,10 @@
ResumeSkillUpdate,
ResumeWorkExperienceResponse,
ResumeWorkExperienceUpdate,
-)
-from app.schemas.shared import (
- CertificationBase,
- EducationBase,
SkillBase,
WorkExperienceBase,
)
+from app.schemas import resume as schemas
from app.services.candidate.resume_service import ResumeService
router = APIRouter()
@@ -43,7 +41,6 @@ def create_resume(
user_id: str = Depends(get_current_user_id),
db: Session = Depends(get_db),
):
- """Triggers the Fork: Copies profile data to new resume"""
return ResumeService.create_resume_fork(db, user_id, data)
@@ -81,9 +78,6 @@ def delete_resume(
ResumeService.delete_resume(db, user_id, resume_id)
-# --- Sub-Resources (Example: Experience) ---
-
-
@router.post(
"/{resume_id}/experience", response_model=schemas.ResumeWorkExperienceResponse
)
diff --git a/backend/app/core/__init__.py b/backend/app/core/__init__.py
index 0959d75..f9c189a 100644
--- a/backend/app/core/__init__.py
+++ b/backend/app/core/__init__.py
@@ -1,20 +1,5 @@
-"""
-Core package - Configuration, database, security, and utilities.
-
-This module provides the foundational infrastructure for the application.
-Import from here instead of individual submodules for a cleaner API.
-
-Example:
- from app.core import settings, get_db, get_current_user_id
-"""
-
-# Configuration
-from .config import Settings, settings
-
-# Database
+from .config import settings
from .database import engine, get_db, init_db
-
-# Logging
from .logging_config import configure_file_logging, get_logger, logger
# Security
@@ -27,20 +12,15 @@
)
__all__ = [
- # Configuration
"settings",
- "Settings",
- # Database
"engine",
"init_db",
"get_db",
- # Security
"hash_password",
"verify_password",
"create_token",
"verify_token",
"get_current_user_id",
- # Logging
"logger",
"configure_file_logging",
"get_logger",
diff --git a/backend/app/core/config.py b/backend/app/core/config.py
index 556bd4b..170b178 100644
--- a/backend/app/core/config.py
+++ b/backend/app/core/config.py
@@ -10,16 +10,16 @@ class Settings(BaseSettings):
# Security
SECRET_KEY: str
- ALGORITHM: str = "HS256"
- ACCESS_TOKEN_EXPIRE_MINUTES: int = 30
- SECURE: bool = False
+ ALGORITHM: str
+ ACCESS_TOKEN_EXPIRE_MINUTES: int
+ SECURE: bool
# URLs
FRONTEND_URL: str
BACKEND_URL: str
# Environment
- ENVIRONMENT: str = "development"
+ ENVIRONMENT: str
APP_VERSION: str
# Database
@@ -28,8 +28,9 @@ class Settings(BaseSettings):
# Vector Database
QDRANT_URL: str
QDRANT_API_KEY: str
- QDRANT_COLLECTION_JOBS: str
+ QDRANT_COLLECTION_NAME: str
EMBEDDING_MODEL: str
+ EMBEDDING_DIM: int = 384
# LLM Settings
FAST_LLM: str = "llama-3.1-8b-instant"
@@ -38,7 +39,7 @@ class Settings(BaseSettings):
LLM_MAX_RETRIES: int = 3
GROQ_API_KEY: str
- LANGCHAIN_TRACING_V2: bool = True
+ LANGCHAIN_TRACING_V2: bool
LANGCHAIN_ENDPOINT: str
LANGCHAIN_API_KEY: str
LANGCHAIN_PROJECT: str
diff --git a/backend/app/models/agents/__init__.py b/backend/app/models/agents/__init__.py
index 8372c95..9cfa3d8 100644
--- a/backend/app/models/agents/__init__.py
+++ b/backend/app/models/agents/__init__.py
@@ -1,17 +1,3 @@
-"""
-Agents models package - Pydantic models for AI agents.
-
-This module provides data models used by AI-powered automation agents.
-Import from here instead of individual submodules for a cleaner API.
-
-Example:
- from app.models.agents import shortlist
- from app.models.agents.shortlist import WorkflowState, CandidateScore
-
- from app.models.agents import interview_scheduling
- from app.models.agents.interview_scheduling import InterviewSchedulingState
-"""
-
from . import interview_scheduling, shortlist
__all__ = [
diff --git a/backend/app/models/agents/interview_scheduling/__init__.py b/backend/app/models/agents/interview_scheduling/__init__.py
index 6e20e78..8864979 100644
--- a/backend/app/models/agents/interview_scheduling/__init__.py
+++ b/backend/app/models/agents/interview_scheduling/__init__.py
@@ -1,12 +1,3 @@
-"""
-Interview Scheduling Agent models - Pydantic models for interview scheduling workflow.
-
-This module provides data models used by the interview scheduling agent.
-
-Example:
- from app.models.agents.interview_scheduling import InterviewSchedulingState
-"""
-
from .schemas import InterviewSchedulingState
__all__ = [
diff --git a/backend/app/models/agents/shortlist/__init__.py b/backend/app/models/agents/shortlist/__init__.py
index b0efb5a..95b3762 100644
--- a/backend/app/models/agents/shortlist/__init__.py
+++ b/backend/app/models/agents/shortlist/__init__.py
@@ -1,15 +1,3 @@
-"""
-Shortlist Agent models package - Pydantic models for resume screening.
-
-This module provides data models used by the resume shortlisting workflow.
-Import from here instead of individual submodules for a cleaner API.
-
-Example:
- from app.models.agents.shortlist import WorkflowState, CandidateScore
- from app.models.agents.shortlist import ResumeStructured, JobRequirements
-"""
-
-# Schemas (data models for the workflow)
from .schemas import (
CandidateBreakdown,
CandidateScore,
@@ -20,7 +8,6 @@
)
__all__ = [
- # Schemas
"ResumeStructured",
"JobRequirements",
"EvaluationScore",
diff --git a/backend/app/models/application.py b/backend/app/models/application.py
index a5ab48a..bf89f56 100644
--- a/backend/app/models/application.py
+++ b/backend/app/models/application.py
@@ -6,9 +6,12 @@
from app.core.database import Base
+from .company import CompanyProfile
+from .job import JobPosting
+from .resume import Resume
+
def utc_now():
- """Returns a timezone-naive UTC datetime (replacement for deprecated datetime.utcnow())."""
return datetime.now(UTC).replace(tzinfo=None)
diff --git a/backend/app/models/candidate.py b/backend/app/models/candidate.py
index 6a7e733..638aa36 100644
--- a/backend/app/models/candidate.py
+++ b/backend/app/models/candidate.py
@@ -1,13 +1,17 @@
from datetime import UTC, date, datetime
+from typing import TYPE_CHECKING
from sqlalchemy import Boolean, Date, DateTime, ForeignKey, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from . import Base
+if TYPE_CHECKING:
+ from app.models.resume import Resume
+ from app.models.user import User
+
def utc_now():
- """Returns a timezone-naive UTC datetime (replacement for deprecated datetime.utcnow())."""
return datetime.now(UTC).replace(tzinfo=None)
diff --git a/backend/app/models/company.py b/backend/app/models/company.py
index d0c2477..820f445 100644
--- a/backend/app/models/company.py
+++ b/backend/app/models/company.py
@@ -7,7 +7,6 @@
def utc_now():
- """Returns a timezone-naive UTC datetime (replacement for deprecated datetime.utcnow())."""
return datetime.now(UTC).replace(tzinfo=None)
diff --git a/backend/app/models/job.py b/backend/app/models/job.py
index 2cc2151..6748b0d 100644
--- a/backend/app/models/job.py
+++ b/backend/app/models/job.py
@@ -8,7 +8,6 @@
def utc_now():
- """Returns a timezone-naive UTC datetime (replacement for deprecated datetime.utcnow())."""
return datetime.now(UTC).replace(tzinfo=None)
diff --git a/backend/app/models/resume.py b/backend/app/models/resume.py
index 5447792..44c6ddd 100644
--- a/backend/app/models/resume.py
+++ b/backend/app/models/resume.py
@@ -1,13 +1,16 @@
from datetime import UTC, date, datetime
+from typing import TYPE_CHECKING
from sqlalchemy import Boolean, Date, DateTime, ForeignKey, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from . import Base
+if TYPE_CHECKING:
+ from app.models.candidate import CandidateProfile
+
def utc_now():
- """Returns a timezone-naive UTC datetime (replacement for deprecated datetime.utcnow())."""
return datetime.now(UTC).replace(tzinfo=None)
diff --git a/backend/app/models/user.py b/backend/app/models/user.py
index cc85163..07b65e1 100644
--- a/backend/app/models/user.py
+++ b/backend/app/models/user.py
@@ -1,15 +1,18 @@
from datetime import UTC, datetime
from enum import Enum
-from typing import Optional
+from typing import TYPE_CHECKING, Optional
from sqlalchemy import Boolean, DateTime, ForeignKey, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from . import Base
+if TYPE_CHECKING:
+ from .candidate import CandidateProfile
+ from .company import CompanyProfile
+
def utc_now():
- """Returns a timezone-naive UTC datetime (replacement for deprecated datetime.utcnow())."""
return datetime.now(UTC).replace(tzinfo=None)
diff --git a/backend/app/schemas/__init__.py b/backend/app/schemas/__init__.py
index b75d325..6957299 100644
--- a/backend/app/schemas/__init__.py
+++ b/backend/app/schemas/__init__.py
@@ -1,14 +1,51 @@
-"""
-Schemas package - Pydantic models for API data contracts.
-
-This module provides request/response schemas for the API.
-Import from here instead of individual submodules for a cleaner API.
-
-Example:
- from app.schemas import UserResponse, JobResponse, ApplicationCreate
-"""
-
-# User schemas
+from .application import (
+ ApplicationResponse,
+ CompanySummary,
+ JobSummary,
+)
+from .candidate import (
+ CandidateProfileFullResponse,
+ CandidateProfileUpdate,
+ CertificationResponse,
+ CertificationUpdate,
+ EducationResponse,
+ EducationUpdate,
+ SkillResponse,
+ SkillUpdate,
+ SocialLinkResponse,
+ WorkExperienceResponse,
+ WorkExperienceUpdate,
+)
+from .job import (
+ CompanyResponse,
+ JobCreate,
+ JobDraftGenerateRequest,
+ JobDraftResponse,
+ JobListResponse,
+ JobResponse,
+)
+from .resume import (
+ ResumeCertificationResponse,
+ ResumeCertificationUpdate,
+ ResumeCreate,
+ ResumeEducationResponse,
+ ResumeEducationUpdate,
+ ResumeListResponse,
+ ResumeResponse,
+ ResumeSkillResponse,
+ ResumeSkillUpdate,
+ ResumeSocialLinkResponse,
+ ResumeUpdate,
+ ResumeWorkExperienceResponse,
+ ResumeWorkExperienceUpdate,
+)
+from .shared import (
+ CertificationBase,
+ EducationBase,
+ SkillBase,
+ SocialLinkBase,
+ WorkExperienceBase,
+)
from .user import (
CreateUserRequest,
GoogleUserInfo,
@@ -20,7 +57,9 @@
)
__all__ = [
- # User
+ "ApplicationResponse",
+ "JobSummary",
+ "CompanySummary",
"UserResponse",
"GoogleUserInfo",
"CreateUserRequest",
@@ -28,4 +67,39 @@
"LoginRequest",
"RoleSelectionRequest",
"TokenResponse",
+ "CompanyResponse",
+ "JobResponse",
+ "JobListResponse",
+ "JobDraftResponse",
+ "JobCreate",
+ "JobDraftGenerateRequest",
+ "CandidateProfileUpdate",
+ "CertificationUpdate",
+ "EducationUpdate",
+ "SkillUpdate",
+ "WorkExperienceUpdate",
+ "SocialLinkResponse",
+ "WorkExperienceResponse",
+ "EducationResponse",
+ "CertificationResponse",
+ "SkillResponse",
+ "CandidateProfileFullResponse",
+ "ResumeCreate",
+ "ResumeUpdate",
+ "ResumeWorkExperienceUpdate",
+ "ResumeEducationUpdate",
+ "ResumeSkillUpdate",
+ "ResumeCertificationUpdate",
+ "ResumeSocialLinkResponse",
+ "ResumeWorkExperienceResponse",
+ "ResumeEducationResponse",
+ "ResumeCertificationResponse",
+ "ResumeSkillResponse",
+ "ResumeResponse",
+ "ResumeListResponse",
+ "SocialLinkBase",
+ "WorkExperienceBase",
+ "EducationBase",
+ "CertificationBase",
+ "SkillBase",
]
diff --git a/backend/app/schemas/application.py b/backend/app/schemas/application.py
index 04be827..868e62b 100644
--- a/backend/app/schemas/application.py
+++ b/backend/app/schemas/application.py
@@ -2,7 +2,7 @@
from pydantic import BaseModel, ConfigDict
-from app.models.application import ApplicationStatus
+from app.models import ApplicationStatus
class JobSummary(BaseModel):
diff --git a/backend/app/schemas/candidate.py b/backend/app/schemas/candidate.py
index f6a648b..bf735c6 100644
--- a/backend/app/schemas/candidate.py
+++ b/backend/app/schemas/candidate.py
@@ -11,7 +11,6 @@
)
-# --- Inputs ---
class CandidateProfileUpdate(BaseModel):
full_name: str | None = None
phone: str | None = None
@@ -54,7 +53,6 @@ class EducationUpdate(BaseModel):
is_current: bool | None = None
-# --- Responses ---
class SocialLinkResponse(SocialLinkBase):
social_link_id: str
model_config = ConfigDict(from_attributes=True)
diff --git a/backend/app/schemas/job.py b/backend/app/schemas/job.py
index 6ff7035..3a22d89 100644
--- a/backend/app/schemas/job.py
+++ b/backend/app/schemas/job.py
@@ -77,55 +77,65 @@ class JobListResponse(BaseModel):
class JobCreate(BaseModel):
- """Schema for creating a new job posting"""
-
title: str
department: str | None = None
level: str | None = None
- # Job Description fields
- description: str | None = "" # role_overview - optional for drafts
- requiredSkillsAndExperience: (
- list[str] | None
- ) = [] # Will be stored as {"required_skills_experience": [...]} - optional for drafts
- niceToHave: list[str] | None = None # Will be stored as {"nice_to_have": [...]}
- benefits: list[str] | None = None # Will be stored as {"benefits": [...]} in offers
+ description: str | None = ""
+ requiredSkillsAndExperience: list[str] | None = []
+ niceToHave: list[str] | None = None
+ benefits: list[str] | None = None
- # Location
locationCity: str | None = None
locationCountry: str | None = None
- locationType: str = "On-site" # Remote, On-site, Hybrid
+ locationType: str = "On-site"
employmentType: str | None = None
- # Compensation
salaryMin: int | None = None
salaryMax: int | None = None
currency: str | None = "NPR"
- # Dates
- applicationDeadline: str | None = None # ISO date string
+ applicationDeadline: str | None = None
- # Status
- status: str | None = "active" # "active", "draft", "expired"
+ status: str | None = "active"
- # Creation Mode
- mode: str = "manual" # "manual", "agent"
- raw_requirements: str | None = None # Required if mode is "agent"
+ mode: str = "manual"
+ raw_requirements: str | None = None
-class JobDraftGenerateRequest(BaseModel):
- """Schema for generating a job draft"""
+class JobUpdate(BaseModel):
+ title: str | None = None
+ department: str | None = None
+ level: str | None = None
+
+ description: str | None = None
+ requiredSkillsAndExperience: list[str] | None = None
+ niceToHave: list[str] | None = None
+ benefits: list[str] | None = None
+
+ locationCity: str | None = None
+ locationCountry: str | None = None
+ locationType: str | None = None
+ employmentType: str | None = None
+
+ salaryMin: int | None = None
+ salaryMax: int | None = None
+ currency: str | None = None
+
+ applicationDeadline: str | None = None
+ status: str | None = None
+
+
+class JobDraftGenerateRequest(BaseModel):
title: str
raw_requirements: str
reference_jd: str | None = None
class JobDraftResponse(BaseModel):
- """Schema for the generated job draft response"""
-
title: str
- description: str # role_overview
+ description: str
requiredSkillsAndExperience: list[str]
niceToHave: list[str]
benefits: list[str]
diff --git a/backend/app/schemas/resume.py b/backend/app/schemas/resume.py
index 96bca54..a6540b4 100644
--- a/backend/app/schemas/resume.py
+++ b/backend/app/schemas/resume.py
@@ -10,15 +10,11 @@
WorkExperienceBase,
)
-# --- Inputs ---
-
class ResumeCreate(BaseModel):
resume_name: str
target_job_title: str | None = None
custom_summary: str | None = None
-
- # Optional Custom Data (if provided, overrides profile fetch)
work_experiences: list[WorkExperienceBase] | None = None
educations: list[EducationBase] | None = None
certifications: list[CertificationBase] | None = None
@@ -98,7 +94,6 @@ class ResumeResponse(BaseModel):
created_at: datetime
updated_at: datetime
- # Nested Lists
social_links: list[ResumeSocialLinkResponse] = []
work_experiences: list[ResumeWorkExperienceResponse] = []
educations: list[ResumeEducationResponse] = []
@@ -109,8 +104,6 @@ class ResumeResponse(BaseModel):
class ResumeListResponse(BaseModel):
- """Lightweight response for list view"""
-
resume_id: str
resume_name: str
target_job_title: str | None = None
diff --git a/backend/app/schemas/shared.py b/backend/app/schemas/shared.py
index 0be6292..1b0a83d 100644
--- a/backend/app/schemas/shared.py
+++ b/backend/app/schemas/shared.py
@@ -2,8 +2,6 @@
from pydantic import BaseModel
-# --- Base Models used by both Profile and Resume ---
-
class SocialLinkBase(BaseModel):
type: str
diff --git a/backend/app/schemas/user.py b/backend/app/schemas/user.py
index 8449d88..f2bb161 100644
--- a/backend/app/schemas/user.py
+++ b/backend/app/schemas/user.py
@@ -1,12 +1,8 @@
from datetime import datetime
-from enum import Enum
from pydantic import BaseModel, ConfigDict
-
-class UserRole(str, Enum):
- CANDIDATE = "candidate"
- RECRUITER = "recruiter"
+from app.models.user import UserRole
class SignupRequest(BaseModel):
diff --git a/backend/app/services/agents/interview_scheduling/__init__.py b/backend/app/services/agents/interview_scheduling/__init__.py
index 7c3246f..5d1e790 100644
--- a/backend/app/services/agents/interview_scheduling/__init__.py
+++ b/backend/app/services/agents/interview_scheduling/__init__.py
@@ -1,17 +1,10 @@
import os
from app.core import settings
-
-# Schema
from app.models.agents.interview_scheduling import InterviewSchedulingState
-# Email service
from .email_service import send_interview_email
-
-# Workflow
from .graph import create_workflow
-
-# Node functions (for advanced usage)
from .nodes import (
approval_router,
compose_email_draft,
diff --git a/backend/app/services/agents/interview_scheduling/graph.py b/backend/app/services/agents/interview_scheduling/graph.py
index e456a42..5abe2ed 100644
--- a/backend/app/services/agents/interview_scheduling/graph.py
+++ b/backend/app/services/agents/interview_scheduling/graph.py
@@ -1,10 +1,3 @@
-"""
-Interview Scheduling Workflow Graph.
-
-LangGraph-based workflow for sending interview scheduling emails
-with optional human-in-the-loop approval.
-"""
-
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph
diff --git a/backend/app/services/agents/interview_scheduling/nodes/approval.py b/backend/app/services/agents/interview_scheduling/nodes/approval.py
index 52916a7..6871677 100644
--- a/backend/app/services/agents/interview_scheduling/nodes/approval.py
+++ b/backend/app/services/agents/interview_scheduling/nodes/approval.py
@@ -1,7 +1,3 @@
-"""
-Approval node - Human-in-the-loop approval gate.
-"""
-
from typing import Literal
from langgraph.types import interrupt
diff --git a/backend/app/services/agents/interview_scheduling/nodes/compose_email.py b/backend/app/services/agents/interview_scheduling/nodes/compose_email.py
index 245e95c..64d4c07 100644
--- a/backend/app/services/agents/interview_scheduling/nodes/compose_email.py
+++ b/backend/app/services/agents/interview_scheduling/nodes/compose_email.py
@@ -1,7 +1,3 @@
-"""
-Compose email node - Generate the email draft content.
-"""
-
from langsmith import traceable
from app.core.config import settings
diff --git a/backend/app/services/agents/interview_scheduling/nodes/load_state.py b/backend/app/services/agents/interview_scheduling/nodes/load_state.py
index 45245cc..a130d53 100644
--- a/backend/app/services/agents/interview_scheduling/nodes/load_state.py
+++ b/backend/app/services/agents/interview_scheduling/nodes/load_state.py
@@ -1,7 +1,3 @@
-"""
-Load state node - Initialize the workflow state.
-"""
-
from langsmith import traceable
from app.models.agents.interview_scheduling import InterviewSchedulingState
diff --git a/backend/app/services/agents/interview_scheduling/nodes/send_email.py b/backend/app/services/agents/interview_scheduling/nodes/send_email.py
index a5fefaf..41c380b 100644
--- a/backend/app/services/agents/interview_scheduling/nodes/send_email.py
+++ b/backend/app/services/agents/interview_scheduling/nodes/send_email.py
@@ -1,7 +1,3 @@
-"""
-Send email node - Send the interview scheduling email.
-"""
-
from langsmith import traceable
from app.models.agents.interview_scheduling import InterviewSchedulingState
diff --git a/backend/app/services/agents/interview_scheduling/nodes/wrap_up.py b/backend/app/services/agents/interview_scheduling/nodes/wrap_up.py
index 5c79845..83b5bac 100644
--- a/backend/app/services/agents/interview_scheduling/nodes/wrap_up.py
+++ b/backend/app/services/agents/interview_scheduling/nodes/wrap_up.py
@@ -1,7 +1,3 @@
-"""
-Wrap up node - Finalize the workflow state.
-"""
-
from langsmith import traceable
from app.models.agents.interview_scheduling import InterviewSchedulingState
diff --git a/backend/app/services/agents/jd_generator/llm_service.py b/backend/app/services/agents/jd_generator/llm_service.py
index 36d64ef..0aab245 100644
--- a/backend/app/services/agents/jd_generator/llm_service.py
+++ b/backend/app/services/agents/jd_generator/llm_service.py
@@ -5,10 +5,6 @@
def get_llm():
- """Get a structured LLM instance configured for JD generation."""
- # JD generation requires more tokens than the default (500) to produce
- # complete job descriptions with all required sections
-
llm = ChatGroq(
temperature=settings.LLM_TEMPERATURE,
model=settings.THINK_LLM,
diff --git a/backend/app/services/agents/shortlist/__init__.py b/backend/app/services/agents/shortlist/__init__.py
index dff6e38..5c8aabb 100644
--- a/backend/app/services/agents/shortlist/__init__.py
+++ b/backend/app/services/agents/shortlist/__init__.py
@@ -1,19 +1,6 @@
-"""
-Shortlist Agent package - Resume screening and candidate evaluation.
-
-This module provides the resume shortlisting workflow using LangGraph.
-Import from here instead of individual submodules for a cleaner API.
-
-Example:
- from app.services.agents.shortlist import create_workflow, discover_resume_files
- from app.services.agents.shortlist import WorkflowState, CandidateScore
-"""
-
import os
from app.core import settings
-
-# Schemas (data models for the workflow)
from app.models.agents.shortlist import (
CandidateScore,
EvaluationScore,
diff --git a/backend/app/services/auth/auth_service.py b/backend/app/services/auth/auth_service.py
index ba216d1..5ccd458 100644
--- a/backend/app/services/auth/auth_service.py
+++ b/backend/app/services/auth/auth_service.py
@@ -1,11 +1,12 @@
import uuid
import httpx
+from fastapi import HTTPException, status
from sqlalchemy import select
from sqlalchemy.orm import Session
from app.core import create_token, hash_password, settings, verify_password
-from app.models import User, UserRole
+from app.models import CandidateProfile, CompanyProfile, User, UserGoogle, UserRole
from app.schemas import CreateUserRequest, GoogleUserInfo, UserResponse
@@ -44,8 +45,6 @@ def create_user(user_data: CreateUserRequest, db: Session) -> User:
db.flush()
if new_user.role:
- from app.models import CandidateProfile, CompanyProfile, UserRole
-
if new_user.role == UserRole.CANDIDATE.value:
new_profile = CandidateProfile(
profile_id=str(uuid.uuid4()),
@@ -64,8 +63,6 @@ def create_user(user_data: CreateUserRequest, db: Session) -> User:
new_user.password = hash_password(user_data.password)
if user_data.google_id:
- from app.models import UserGoogle
-
new_google_user = UserGoogle(
user_google_id=user_data.google_id, user_id=new_user.user_id
)
@@ -186,10 +183,6 @@ def get_or_create_google_user(google_user: GoogleUserInfo, db: Session) -> User:
@staticmethod
def assign_role_and_create_profile(user: User, role: UserRole, db: Session) -> User:
- from fastapi import HTTPException, status
-
- from app.models import CandidateProfile, CompanyProfile
-
if user.role:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
diff --git a/backend/app/services/candidate/job_service_utils.py b/backend/app/services/candidate/job_service_utils.py
new file mode 100644
index 0000000..147d9c3
--- /dev/null
+++ b/backend/app/services/candidate/job_service_utils.py
@@ -0,0 +1,21 @@
+from sqlalchemy.orm import Session, selectinload
+
+from app.models.job import JobPosting
+
+VISIBLE_STATUSES = ["active"]
+
+
+def get_latest_jobs(db: Session, limit: int = 200) -> list[JobPosting]:
+ """Get latest jobs ordered by posted date."""
+ return (
+ db.query(JobPosting)
+ .options(
+ selectinload(JobPosting.company),
+ selectinload(JobPosting.job_description),
+ selectinload(JobPosting.stats),
+ )
+ .filter(JobPosting.status.in_(VISIBLE_STATUSES))
+ .order_by(JobPosting.posted_date.desc())
+ .limit(limit)
+ .all()
+ )
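The new `get_latest_jobs` helper filters to visible statuses, orders newest-first, and caps the result. Stripped of SQLAlchemy, the query semantics reduce to the following (the in-memory `Job` dataclass and sample data are hypothetical):

```python
from dataclasses import dataclass
from datetime import date

VISIBLE_STATUSES = ["active"]


@dataclass
class Job:
    title: str
    status: str
    posted_date: date


def latest_jobs(jobs: list[Job], limit: int = 200) -> list[Job]:
    # Same shape as the SQL: WHERE status IN (...) ORDER BY posted_date DESC LIMIT n
    visible = [j for j in jobs if j.status in VISIBLE_STATUSES]
    return sorted(visible, key=lambda j: j.posted_date, reverse=True)[:limit]


jobs = [
    Job("A", "active", date(2024, 1, 1)),
    Job("B", "draft", date(2024, 2, 1)),
    Job("C", "active", date(2024, 3, 1)),
]
newest = latest_jobs(jobs, limit=1)  # the newest visible posting
```

In the real helper, the `selectinload` options eager-load the related `company`, `job_description`, and `stats` rows in batched `IN` queries, avoiding an N+1 query per job when `map_job_to_response` touches those relationships.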
diff --git a/backend/app/services/candidate/profile_service.py b/backend/app/services/candidate/profile_service.py
index 4e3ee23..1ebc3c5 100644
--- a/backend/app/services/candidate/profile_service.py
+++ b/backend/app/services/candidate/profile_service.py
@@ -9,9 +9,10 @@
CandidateEducation,
CandidateProfile,
CandidateSkills,
+ CandidateSocialLink,
CandidateWorkExperience,
)
-from app.schemas.candidate import CandidateProfileUpdate
+from app.schemas import CandidateProfileUpdate
class CandidateService:
@@ -202,3 +203,21 @@ def update_certification(db: Session, user_id: str, item_id: str, data):
"candidate_certification_id",
data,
)
+
+ @staticmethod
+ def add_social_link(db: Session, user_id: str, data):
+ return CandidateService._add_item(
+ db, user_id, CandidateSocialLink, data.model_dump(), "social_link_id"
+ )
+
+ @staticmethod
+ def delete_social_link(db: Session, user_id: str, item_id: str):
+ CandidateService._delete_item(
+ db, user_id, CandidateSocialLink, item_id, "social_link_id"
+ )
+
+ @staticmethod
+ def update_social_link(db: Session, user_id: str, item_id: str, data):
+ return CandidateService._update_item(
+ db, user_id, CandidateSocialLink, item_id, "social_link_id", data
+ )
diff --git a/backend/app/services/candidate/resume_service.py b/backend/app/services/candidate/resume_service.py
index 1a9acd0..d0f8347 100644
--- a/backend/app/services/candidate/resume_service.py
+++ b/backend/app/services/candidate/resume_service.py
@@ -13,7 +13,7 @@
ResumeSocialLink,
ResumeWorkExperience,
)
-from app.schemas.resume import (
+from app.schemas import (
CertificationBase,
EducationBase,
ResumeCertificationUpdate,
diff --git a/backend/app/services/candidate/vector_job_service.py b/backend/app/services/candidate/vector_job_service.py
index e07eb87..e180d46 100644
--- a/backend/app/services/candidate/vector_job_service.py
+++ b/backend/app/services/candidate/vector_job_service.py
@@ -1,112 +1,102 @@
-import logging
-
from langchain_core.documents import Document
-from langchain_qdrant import QdrantVectorStore
+from langchain_qdrant import QdrantVectorStore, RetrievalMode
from qdrant_client import QdrantClient
-from qdrant_client.http import models
-from sqlalchemy.orm import Session
+from qdrant_client.http.models import Distance, VectorParams
+from sqlalchemy.orm import Session, selectinload
-from app.core.config import settings
+from app.core import settings
+from app.core.logging_config import logger
from app.core.ml import get_embedding_model
+from app.models.company import CompanyProfile
from app.models.job import JobPosting
-logger = logging.getLogger(__name__)
-
class JobVectorService:
def __init__(self):
self.embedding_model = get_embedding_model()
- self.collection_name = settings.QDRANT_COLLECTION_JOBS
- self.vector_store: QdrantVectorStore | None = None
-
- try:
- self.client = QdrantClient(
- url=settings.QDRANT_URL, api_key=settings.QDRANT_API_KEY
- )
- self._ensure_collection_exists()
+ self.client = QdrantClient(
+ url=settings.QDRANT_URL, api_key=settings.QDRANT_API_KEY
+ )
+ self.collection_name = settings.QDRANT_COLLECTION_NAME
+ self._ensure_collection_exists()
+ self.qdrant = QdrantVectorStore(
+ client=self.client,
+ collection_name=self.collection_name,
+ embedding=self.embedding_model,
+ retrieval_mode=RetrievalMode.DENSE,
+ )
- self.vector_store = QdrantVectorStore(
- client=self.client,
+ def _ensure_collection_exists(self):
+ if not self.client.collection_exists(self.collection_name):
+ self.client.create_collection(
collection_name=self.collection_name,
- embedding=self.embedding_model,
+ vectors_config=VectorParams(
+ size=settings.EMBEDDING_DIM, distance=Distance.COSINE
+ ),
)
- except Exception as e:
- logger.error(f"Qdrant Connection Failed: {e}")
- self.vector_store = None
+ logger.trace(f"Created Qdrant collection: {self.collection_name}")
+ else:
+ logger.trace(f"Qdrant collection already exists: {self.collection_name}")
- def _get_embedding_dimension(self) -> int:
- test_embedding = self.embedding_model.embed_query("test")
- return len(test_embedding)
+ def _construct_job_text(self, job: JobPosting) -> str:
+ if not job.job_description:
+ company_name = self._get_company_name(job)
+ return f"Title: {job.title}\nCompany: {company_name}"
- def _ensure_collection_exists(self):
- embedding_dim = self._get_embedding_dimension()
-
- if self.client.collection_exists(self.collection_name):
- collection_info = self.client.get_collection(self.collection_name)
- existing_dim = collection_info.config.params.vectors.size
-
- if existing_dim != embedding_dim:
- logger.warning(
- f"Collection dimension mismatch: existing={existing_dim}, model={embedding_dim}. "
- f"Recreating collection..."
- )
- self.client.delete_collection(self.collection_name)
- else:
- logger.info(
- f"Using existing Qdrant collection: {self.collection_name} (dim={existing_dim})"
- )
- return
-
- self.client.create_collection(
- collection_name=self.collection_name,
- vectors_config=models.VectorParams(
- size=embedding_dim, distance=models.Distance.COSINE
- ),
- )
- logger.info(
- f"Created Qdrant collection: {self.collection_name} (dim={embedding_dim})"
- )
+ jd = job.job_description
+ company_name = self._get_company_name(job)
- def _construct_job_text(self, job: JobPosting) -> str:
- skills_txt = ""
- benefits_txt = ""
- role_overview = ""
-
- if job.job_description:
- role_overview = job.job_description.role_overview or ""
-
- req = job.job_description.required_skills_experience
- if isinstance(req, dict):
- flat_skills = []
- for v in req.values():
- if isinstance(v, list):
- flat_skills.extend(v)
- elif isinstance(v, str):
- flat_skills.append(v)
- skills_txt = ", ".join(flat_skills)
- elif isinstance(req, list):
- skills_txt = ", ".join(req)
-
- offers = job.job_description.offers
- if isinstance(offers, dict) and "benefits" in offers:
- benefits_txt = ", ".join(offers["benefits"])
-
- return (
- f"Title: {job.title}. "
- f"Location: {job.location_city}, {job.location_country}. "
- f"Type: {job.employment_type}"
- f"Description: {role_overview}. "
- f"Skills: {skills_txt}. "
- f"Benefits: {benefits_txt}."
- )
+ parts = [
+ f"Title: {job.title}",
+ f"Company: {company_name}",
+ f"Role Overview: {jd.role_overview}",
+ ]
+
+ if isinstance(jd.required_skills_experience, dict):
+ skills = jd.required_skills_experience.get("required_skills_experience", [])
+ if skills:
+ parts.append("Required Skills and Experience:")
+ parts.extend(f"- {s}" for s in skills)
+
+ if isinstance(jd.nice_to_have, dict):
+ nice = jd.nice_to_have.get("nice_to_have", [])
+ if nice:
+ parts.append("Nice to Have:")
+ parts.extend(f"- {n}" for n in nice)
+
+ if isinstance(jd.offers, dict):
+ benefits = jd.offers.get("benefits", [])
+ if benefits:
+ parts.append("Benefits:")
+ parts.extend(f"- {b}" for b in benefits)
+
+ return "\n".join(parts)
+
+ def _get_company_name(self, job: JobPosting) -> str:
+ if not job.company:
+ return "Unknown Company"
+
+ if job.company.user:
+ return job.company.user.name
+
+ return "Unknown Company"
def index_all_pending_jobs(self, db: Session):
- if not self.vector_store:
+ if not self.qdrant:
return
- pending_jobs = db.query(JobPosting).filter(JobPosting.is_indexed == False).all()
+ pending_jobs = (
+ db.query(JobPosting)
+ .options(
+ selectinload(JobPosting.job_description),
+ selectinload(JobPosting.company).selectinload(CompanyProfile.user),
+ )
+ .filter(JobPosting.is_indexed.is_(False))
+ .all()
+ )
+
if not pending_jobs:
- logger.info("No new jobs to index.")
+ logger.debug("No new jobs to index.")
return
logger.info(f"Indexing {len(pending_jobs)} new jobs...")
@@ -114,16 +104,29 @@ def index_all_pending_jobs(self, db: Session):
for job in pending_jobs:
try:
text_content = self._construct_job_text(job)
+ company_name = self._get_company_name(job)
metadata = {
"job_id": job.job_id,
"company_id": job.company_id,
+ "company_name": company_name,
"title": job.title,
+ "city": job.location_city,
+ "country": job.location_country,
+ "type": job.location_type,
+ "employment_type": job.employment_type,
+ "salary_min": job.salary_min,
+ "salary_max": job.salary_max,
+ "salary_currency": job.salary_currency,
+ "status": job.status,
+ "posted_date": job.posted_date.isoformat()
+ if job.posted_date
+ else None,
}
doc = Document(page_content=text_content, metadata=metadata)
- self.vector_store.add_documents([doc], ids=[job.job_id])
+ self.qdrant.add_documents([doc], ids=[job.job_id])
job.is_indexed = True
@@ -131,16 +134,13 @@ def index_all_pending_jobs(self, db: Session):
logger.error(f"Failed to index job {job.job_id}: {e}")
db.commit()
- logger.info("Indexing complete.")
-
- def search_jobs(self, query: str, limit: int = 10) -> list[str]:
- if not self.vector_store or not query:
- return []
+ logger.success(f"Indexed {len(pending_jobs)} new jobs.")
- results = self.vector_store.similarity_search(query, k=limit)
+ def search_jobs(self, query: str, limit: int) -> list[int]:
+ results = self.qdrant.similarity_search(query, k=limit)
return [res.metadata["job_id"] for res in results]
- def recommend_jobs_by_skills(self, skills: list[str], limit: int = 10) -> list[str]:
+ def recommend_jobs_by_skills(self, skills: list[str], limit: int) -> list[int]:
if not skills:
return []
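Reviewer note: the flattening logic in the new `_construct_job_text` — pulling list values out of nested JSON columns and joining them into one searchable blob — can be exercised in isolation. A minimal stand-in sketch (the `jd` dict shape is an assumption inferred from the keys this diff reads, not the actual ORM model):

```python
def construct_job_text(title, company, jd):
    """Flatten a nested JD dict into one searchable text blob.

    `jd` mirrors the shape this diff appears to assume: JSON columns
    that wrap their lists under a key of the same name, e.g.
    {"required_skills_experience": [...]}.
    """
    parts = [f"Title: {title}", f"Company: {company}"]
    if jd is None:
        # Mirrors the early return when job.job_description is missing
        return "\n".join(parts)
    parts.append(f"Role Overview: {jd.get('role_overview', '')}")
    for field, label in [
        ("required_skills_experience", "Required Skills and Experience:"),
        ("nice_to_have", "Nice to Have:"),
    ]:
        value = jd.get(field)
        # Each JSON column wraps its list under a key of the same name
        items = value.get(field, []) if isinstance(value, dict) else []
        if items:
            parts.append(label)
            parts.extend(f"- {i}" for i in items)
    offers = jd.get("offers")
    benefits = offers.get("benefits", []) if isinstance(offers, dict) else []
    if benefits:
        parts.append("Benefits:")
        parts.extend(f"- {b}" for b in benefits)
    return "\n".join(parts)
```

Empty sections are skipped entirely rather than emitting a bare header, which keeps the embedded text free of noise.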
diff --git a/backend/app/services/recruiter/job_generation_service.py b/backend/app/services/recruiter/job_generation_service.py
index 7af3cdc..7e83c39 100644
--- a/backend/app/services/recruiter/job_generation_service.py
+++ b/backend/app/services/recruiter/job_generation_service.py
@@ -6,8 +6,6 @@
from app.services.agents.jd_generator import app as jd_agent
from app.services.agents.jd_generator import reference_jd
-# Ensure LangSmith environment variables are set for tracing
-# This must be done before invoking the workflow
os.environ.setdefault(
"LANGCHAIN_TRACING_V2", str(settings.LANGCHAIN_TRACING_V2).lower()
)
@@ -22,16 +20,6 @@
class JobGenerationService:
@staticmethod
def generate_job_draft(requirements: str) -> JDGenNode:
- """
- Generates a job description draft using the JD Generator Agent.
-
- Args:
- requirements: The raw job requirements provided by the user.
-
- Returns:
- JDGenNode: The generated job description structure.
- """
- # Generate a unique thread ID for this execution
thread_id = str(uuid.uuid4())
thread_config = {
"configurable": {"thread_id": thread_id},
@@ -50,18 +38,8 @@ def generate_job_draft(requirements: str) -> JDGenNode:
"revision_count": 0,
}
- # Invoke the agent
- # We expect the agent to run until it hits the human_review interrupt or finishes.
- # For the initial generation, it should stop at human_review with a draft.
result = jd_agent.invoke(initial_state, config=thread_config)
- # Check if we have a draft in the state (either from interrupt or final state)
- # The invoke method returns the final state of the execution.
- # If it hit an interrupt, we might need to inspect the snapshot or the returned state might contain the draft.
- # Based on main.py, 'result' is the state.
-
if "draft" in result and result["draft"]:
return result["draft"]
-
- # If for some reason draft is missing (shouldn't happen if agent works as expected)
raise ValueError("Agent failed to generate a draft job description.")
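Reviewer note: each draft generation gets its own checkpointer thread via a fresh UUID, so concurrent generations cannot share LangGraph state. The config shape is just a nested dict; a sketch of that pattern (no agent invocation, names are stand-ins):

```python
import uuid

def make_thread_config() -> dict:
    # LangGraph scopes checkpoints by configurable.thread_id;
    # a fresh UUID isolates each generation run from all others.
    return {"configurable": {"thread_id": str(uuid.uuid4())}}

a, b = make_thread_config(), make_thread_config()
```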
diff --git a/backend/app/services/user_service.py b/backend/app/services/user_service.py
index 1d7212a..27ab4ff 100644
--- a/backend/app/services/user_service.py
+++ b/backend/app/services/user_service.py
@@ -1,7 +1,7 @@
from sqlalchemy import select
from sqlalchemy.orm import Session
-from app.models.user import User
+from app.models import User
from app.schemas import UserResponse
diff --git a/backend/main.py b/backend/main.py
index 6dbd9ca..37f04f0 100644
--- a/backend/main.py
+++ b/backend/main.py
@@ -17,28 +17,22 @@
@asynccontextmanager
async def lifespan(app: FastAPI):
- """
- Lifespan manager - runs when app starts and shuts down
- """
- logger.info("Starting ConvexHire API...")
-
- # 1. Initialize DB Tables
- logger.info("Initializing database schema...")
+ logger.trace("Starting ConvexHire API...")
+ logger.trace("Initializing database schema...")
init_db()
- # 2. Index Pending Jobs
try:
with Session(engine) as db:
vector_service = JobVectorService()
vector_service.index_all_pending_jobs(db)
except Exception as e:
- logger.error(f"⚠️ Startup indexing warning: {e}")
+ logger.error(f"Startup indexing error: {e}")
- logger.info("System Ready!")
+ logger.success("System Ready!")
yield
- logger.info("Shutting down ConvexHire API...")
+ logger.trace("Shutting down ConvexHire API...")
app = FastAPI(
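Reviewer note: the lifespan change keeps startup indexing non-fatal — an exception from the vector service is logged and the app still boots. The control flow can be sketched without FastAPI or Qdrant (`init_db`/`index_all` are stand-ins simulating an outage):

```python
import asyncio
from contextlib import asynccontextmanager

events = []

def init_db():
    events.append("db-init")

def index_all():
    # Stand-in for JobVectorService.index_all_pending_jobs;
    # simulate Qdrant being unreachable at startup.
    raise ConnectionError("qdrant unreachable")

@asynccontextmanager
async def lifespan(app):
    init_db()
    try:
        index_all()
    except Exception as e:
        # Non-fatal: record the error and continue serving
        events.append(f"index-error: {e}")
    events.append("ready")
    yield
    events.append("shutdown")

async def main():
    async with lifespan(app=None):
        events.append("serving")

asyncio.run(main())
```

The indexing failure is absorbed inside the context manager, so "ready" and "serving" are still reached before the shutdown step runs.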
diff --git a/frontend/src/app/auth/callback/page.tsx b/frontend/src/app/auth/callback/page.tsx
index 25f7097..89bb6fc 100644
--- a/frontend/src/app/auth/callback/page.tsx
+++ b/frontend/src/app/auth/callback/page.tsx
@@ -1,9 +1,9 @@
-'use client';
+"use client";
-import { Suspense, useEffect, useState } from 'react';
-import { useRouter, useSearchParams } from 'next/navigation';
-import { ROUTES } from '../../../config/constants';
-import { LoadingSpinner } from '../../../components/common/LoadingSpinner';
+import { Suspense, useEffect, useState } from "react";
+import { useRouter, useSearchParams } from "next/navigation";
+import { ROUTES } from "../../../config/constants";
+import { LoadingSpinner } from "../../../components/common/LoadingSpinner";
function AuthCallbackContent() {
const router = useRouter();
@@ -13,8 +13,8 @@ function AuthCallbackContent() {
useEffect(() => {
const handleCallback = async () => {
try {
- const code = searchParams.get('code');
- const errorParam = searchParams.get('error');
+ const code = searchParams.get("code");
+ const errorParam = searchParams.get("error");
if (errorParam) {
setError(`Authentication failed: ${errorParam}`);
@@ -22,19 +22,21 @@ function AuthCallbackContent() {
}
if (!code) {
- setError('No authorization code received');
+ setError("No authorization code received");
return;
}
- // Send the authorization code to the backend
- const response = await fetch(`${process.env.NEXT_PUBLIC_API_BASE_URL || 'http://localhost:8000'}/auth/google/callback`, {
- method: 'POST',
- headers: {
- 'Content-Type': 'application/json',
+ const response = await fetch(
+ `${process.env.NEXT_PUBLIC_API_BASE_URL || "http://localhost:8000"}/auth/google/callback`,
+ {
+ method: "POST",
+ headers: {
+ "Content-Type": "application/json",
+ },
+ credentials: "include",
+ body: JSON.stringify({ code }),
},
- credentials: 'include',
- body: JSON.stringify({ code }),
- });
+ );
if (!response.ok) {
throw new Error(`Authentication failed: ${response.statusText}`);
@@ -42,18 +44,17 @@ function AuthCallbackContent() {
const data = await response.json();
- // Check if user needs to select a role
if (data.requires_role_selection) {
router.push(ROUTES.SELECT_ROLE);
} else {
- // Redirect based on user role
- const redirectUrl = data.user?.role === 'recruiter'
- ? ROUTES.RECRUITER_DASHBOARD
- : ROUTES.CANDIDATE_DASHBOARD;
+ const redirectUrl =
+ data.user?.role === "recruiter"
+ ? ROUTES.RECRUITER_DASHBOARD
+ : ROUTES.CANDIDATE_DASHBOARD;
router.push(redirectUrl);
}
} catch (err) {
- setError(err instanceof Error ? err.message : 'Authentication failed');
+ setError(err instanceof Error ? err.message : "Authentication failed");
}
};
@@ -67,11 +68,23 @@ function AuthCallbackContent() {
-
-
Authentication Failed
+
+ Authentication Failed
+
{error}
@@ -106,7 +123,9 @@ export default function AuthCallback() {
+ {isSearchMode ? "Search Results" : "Find Your Next Role"}
+
+
+ {isSearchMode
+ ? debouncedSearchQuery
+ ? `Found ${totalJobs} matches for "${debouncedSearchQuery}"`
+ : "Search results based on your criteria"
+ : "Discover opportunities matched to your skills and experience"}
+
+
+
+
+
+ {/* Enhanced Search & Filter Section */}
+
+
+
+ {/* Filter Chips with more breathing room */}
+
+
);
}
diff --git a/frontend/src/app/candidate/profile/page.tsx b/frontend/src/app/candidate/profile/page.tsx
index 0cb15c2..1a5c385 100644
--- a/frontend/src/app/candidate/profile/page.tsx
+++ b/frontend/src/app/candidate/profile/page.tsx
@@ -1,22 +1,29 @@
-'use client';
+"use client";
-import { useState, useEffect } from 'react';
-import { useAuth } from '../../../hooks/useAuth';
-import { AppShell } from '../../../components/layout/AppShell';
-import { PageTransition, AnimatedContainer } from '../../../components/common';
-import { ProfileHeader } from '../../../components/profile/ProfileHeader';
-import { ProfileInformationTab } from '../../../components/profile/ProfileInformationTab';
-import { CareerHistoryTab } from '../../../components/profile/CareerHistoryTab';
-import { SkillsExpertiseTab } from '../../../components/profile/SkillsExpertiseTab';
-import { PasswordChangeForm } from '../../../components/profile/PasswordChangeForm';
-import { User, Briefcase, Settings, Shield } from 'lucide-react';
-import { profileService } from '../../../services/profileService';
-import type { CandidateProfile } from '../../../types/profile';
-import { LoadingSpinner } from '../../../components/common/LoadingSpinner';
+import { useState, useEffect } from "react";
+import { useAuth } from "../../../hooks/useAuth";
+import { AppShell } from "../../../components/layout/AppShell";
+import { PageTransition, AnimatedContainer } from "../../../components/common";
+import { ProfileHeader } from "../../../components/profile/ProfileHeader";
+import { ProfileInformationTab } from "../../../components/profile/ProfileInformationTab";
+import { CareerHistoryTab } from "../../../components/profile/CareerHistoryTab";
+import { SkillsExpertiseTab } from "../../../components/profile/SkillsExpertiseTab";
+import { PasswordChangeForm } from "../../../components/profile/PasswordChangeForm";
+import { User, Briefcase, Settings, Shield } from "lucide-react";
+import { profileService } from "../../../services/profileService";
+import type { CandidateProfile } from "../../../types/profile";
+import { LoadingSpinner } from "../../../components/common/LoadingSpinner";
export default function CandidateProfilePage() {
- const { user, isLoading: isAuthLoading, isAuthenticated, refetchUser } = useAuth();
- const [activeTab, setActiveTab] = useState<'profile' | 'career' | 'skills' | 'password'>('profile');
+ const {
+ user,
+ isLoading: isAuthLoading,
+ isAuthenticated,
+ refetchUser,
+ } = useAuth();
+ const [activeTab, setActiveTab] = useState<
+ "profile" | "career" | "skills" | "password"
+ >("profile");
const [profile, setProfile] = useState<CandidateProfile | null>(null);
const [isLoadingProfile, setIsLoadingProfile] = useState(true);
@@ -38,27 +45,26 @@ export default function CandidateProfilePage() {
}
}, [isAuthenticated]);
- // Redirect to login if not authenticated
- // IMPORTANT: All hooks must be called before any early returns
useEffect(() => {
if (!isAuthLoading && !isAuthenticated) {
- window.location.href = '/login';
+ window.location.href = "/login";
}
}, [isAuthenticated, isAuthLoading]);
const handleProfileUpdate = async (updatedProfile: CandidateProfile) => {
setProfile(updatedProfile);
- // Refresh global user state to update Topbar immediately
if (refetchUser) {
await refetchUser();
}
};
- // Show loading state while fetching user data
if (isAuthLoading || (isAuthenticated && isLoadingProfile)) {
return (
-
+
@@ -67,7 +73,6 @@ export default function CandidateProfilePage() {
);
}
- // Show error state if no user data
if (!isAuthenticated || !user) {
return (
@@ -80,68 +85,89 @@ export default function CandidateProfilePage() {
return (
-
-
- {/* Profile Header Card */}
+
+
+ {/* Enhanced Header with Gradient Background */}
-
+
+ {activeTab === "active"
+ ? "Create a new job posting to start receiving applications."
+ : activeTab === "drafts"
+ ? "Save a job as draft to continue editing later."
+ : "Expired or closed jobs will appear here."}
+