This document outlines the setup, execution, and architecture of the Revisely frontend application.
- Setup
- How to Run
- Project Architecture and Technologies
- Features Implemented
- Missing Features & Future Improvements
- LLM Tools Usage
## Setup

- Node.js (LTS version recommended)
- npm or Yarn package manager
Create a `.env` file in the `revisely-frontend` directory with the following variables:

```
VITE_FIREBASE_API_KEY="your_firebase_api_key"
VITE_FIREBASE_AUTH_DOMAIN="your_firebase_auth_domain"
VITE_FIREBASE_PROJECT_ID="your_firebase_project_id"
VITE_FIREBASE_STORAGE_BUCKET="your_firebase_storage_bucket"
VITE_FIREBASE_MESSAGING_SENDER_ID="your_firebase_messaging_sender_id"
VITE_FIREBASE_APP_ID="your_firebase_app_id"
VITE_FIREBASE_MEASUREMENT_ID="your_firebase_measurement_id"
VITE_BACKEND_URL="http://localhost:8000" # Or your backend deployment URL
```
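These `VITE_`-prefixed variables are exposed by Vite on `import.meta.env` and would typically be consumed in a Firebase initialization module. The following is an illustrative sketch only (the actual file name and exports in this project may differ):

```typescript
// src/firebase.ts — illustrative sketch, assuming the Firebase v9+ modular SDK.
// Vite exposes VITE_-prefixed variables on import.meta.env at build time.
import { initializeApp } from "firebase/app";
import { getAuth } from "firebase/auth";

const firebaseConfig = {
  apiKey: import.meta.env.VITE_FIREBASE_API_KEY,
  authDomain: import.meta.env.VITE_FIREBASE_AUTH_DOMAIN,
  projectId: import.meta.env.VITE_FIREBASE_PROJECT_ID,
  storageBucket: import.meta.env.VITE_FIREBASE_STORAGE_BUCKET,
  messagingSenderId: import.meta.env.VITE_FIREBASE_MESSAGING_SENDER_ID,
  appId: import.meta.env.VITE_FIREBASE_APP_ID,
  measurementId: import.meta.env.VITE_FIREBASE_MEASUREMENT_ID,
};

export const app = initializeApp(firebaseConfig);
export const auth = getAuth(app);
```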
## How to Run

- Navigate to the `revisely-frontend` directory:

  ```bash
  cd revisely-frontend
  ```

- Install the required Node.js packages:

  ```bash
  npm install # or yarn install
  ```

- Ensure your `.env` file is configured.
- Start the Vite development server:

  ```bash
  npm run dev # or yarn dev
  ```

The application will typically run on http://localhost:5173.
## Project Architecture and Technologies

- Framework: React (with TypeScript)
- Build Tool: Vite
- Routing: React Router DOM
- State Management: React Hooks (useState, useEffect, useRef, useCallback)
- Styling: Tailwind CSS
- API Communication: Axios
- Authentication: Firebase Authentication
- PDF Viewer: `react-pdf` (or similar, integrated via `PdfViewer.tsx`)
- Markdown Rendering: `react-markdown`
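As a rough sketch of how the Axios layer and Firebase Authentication fit together, a small helper like the one below can build the headers for requests to `VITE_BACKEND_URL`, attaching the user's Firebase ID token. The name `buildAuthHeaders` is hypothetical; the project's actual service layer may be organized differently.

```typescript
// Hypothetical helper for the API service layer (not the app's actual code):
// builds request headers, adding a Bearer token when a Firebase ID token exists.
export function buildAuthHeaders(idToken: string | null): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (idToken !== null && idToken.length > 0) {
    headers["Authorization"] = `Bearer ${idToken}`;
  }
  return headers;
}
```

In practice this would be wired into an Axios request interceptor so every backend call carries the token automatically.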
The project is structured into pages, components, API services, and Firebase configuration.
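An illustrative layout under that convention (actual directory and file names may differ) is:

```
src/
├── api/          # Axios service wrappers for the backend
├── components/   # Reusable UI (e.g. PdfViewer.tsx, the Navbar)
├── pages/        # Route-level views (dashboard, chat, quiz pages)
├── firebase/     # Firebase app/auth configuration
└── App.tsx       # Routes wired up with React Router DOM
```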
## Features Implemented

- User Authentication: Firebase-based login/logout.
- Dashboard: Displays a list of uploaded PDFs.
- PDF Upload: Allows users to upload PDF files.
- PDF-based Chat:
  - View PDF content alongside a chat interface.
  - Ask questions about the displayed PDF.
  - Receive AI-generated answers grounded in the PDF content (retrieval-augmented generation, RAG).
  - Markdown rendering in chat responses.
- Revisely Chat (Standalone AI Chat):
  - A general conversational AI chat page.
  - Chat history display and selection.
  - New chat creation.
  - Markdown rendering in chat responses.
- Quiz Generation & Taking:
  - Configure and generate quizzes from PDF content.
  - Take quizzes with multiple question types (MCQ, SAQ, LAQ).
- Progress Tracking: View quiz attempt history and overall accuracy.
- Responsive Navigation: Navbar with active link highlighting and user dropdown.
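For the progress-tracking feature, overall accuracy can be derived from quiz attempts with a small helper. This is a sketch with assumed field names (`correct`, `total`), not the app's actual code:

```typescript
// Illustrative attempt record; the real shape of stored attempts may differ.
interface QuizAttempt {
  correct: number; // questions answered correctly in this attempt
  total: number;   // questions in this attempt
}

// Overall accuracy across all attempts, as a percentage (0 when no questions).
export function overallAccuracy(attempts: QuizAttempt[]): number {
  const sums = attempts.reduce(
    (acc, a) => ({ correct: acc.correct + a.correct, total: acc.total + a.total }),
    { correct: 0, total: 0 }
  );
  return sums.total === 0 ? 0 : (sums.correct / sums.total) * 100;
}
```

Weighting by question count (rather than averaging per-attempt percentages) keeps short quizzes from skewing the overall figure.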
## Missing Features & Future Improvements

- Real-time Chat Updates: Implement WebSockets for instant message delivery.
- Enhanced UI/UX: Further polish the user interface and experience, including loading indicators, better error displays, and animations.
- PDF Chat History Persistence: Implement saving and loading chat history for PDF-specific conversations.
- User Profile Management: Allow users to view and edit their profile information.
- Search Functionality: Implement search within PDF content or chat history.
- Accessibility: Improve accessibility features for all users.
- Comprehensive Testing: Add unit and integration tests for frontend components and logic.
- Deployment Configuration: Detailed instructions and scripts for deploying the frontend.
## LLM Tools Usage

This frontend application interacts with a backend that uses LLM tooling (specifically the Google Gemini API) for various functionalities. The frontend does not embed or run LLM models itself; it consumes the results provided by the backend.
- Purpose of Backend LLM Integration (as consumed by the frontend):
  - AI Chat Responses: Displays answers generated by the Gemini model for both PDF-based and standalone Revisely chats.
  - Quiz Content: Presents quiz questions and explanations generated by the Gemini model from PDF content.
Note: The frontend is designed to display markdown-formatted responses from the backend, enhancing the readability of AI-generated content.