
Generative AI Exercises – Intelligent Endpoints with FastAPI + LangChain


This repository contains 10 step-by-step assignments for building Generative AI applications with:

  • Python + FastAPI
  • LLMs from Hugging Face
  • Multimodal Models (Google GenAI / Hugging Face)
  • Naive RAG (Chroma / FAISS) + LangChain
  • Diffusion Models for Image Generation

Each assignment includes:

✅ Step-by-step guide

✅ Model info (size)

✅ Knowledge base / resources

✅ Lesson you’ll learn

✅ 7 interview questions

✅ Motivational quote


📝 Assignments


1. Hello LLM Endpoint

  • Goal: /hello-llm → Generate text with Hugging Face LLM.
  • Model: distilgpt2 (~82M params).
  • Lesson: Learn how to call an LLM from FastAPI.
  • Resource: DistilGPT2

Interview Questions:

  1. What is a language model?
  2. How does GPT-2 differ from GPT-3/4?
  3. Why is distilgpt2 considered lightweight?
  4. What are tokens, and why do they matter in LLMs?
  5. How do you handle prompt length limits?
  6. Why expose models through an API instead of CLI?
  7. What’s the risk of directly exposing LLMs without moderation?

💡 "The secret of getting ahead is getting started." – Mark Twain


2. Text Summarizer API

  • Goal: /summarize → Summarize long text.
  • Model: facebook/bart-large-cnn (~400M params).
  • Lesson: Learn sequence-to-sequence summarization with Hugging Face pipelines.
  • Resource: BART Paper

Interview Questions:

  1. What is abstractive vs extractive summarization?
  2. Why is BART good for summarization?
  3. What are encoder-decoder architectures?
  4. How does beam search affect summary quality?
  5. What are hallucinations in summarization?
  6. What evaluation metrics exist (ROUGE, BLEU)?
  7. How would you fine-tune BART on legal documents?

💡 "An investment in knowledge pays the best interest." – Benjamin Franklin


3. Sentiment Analysis API

  • Goal: /sentiment → Detect positive/negative sentiment.
  • Model: distilbert-base-uncased-finetuned-sst-2-english (~66M params).
  • Lesson: Learn text classification with transformers.
  • Resource: SST-2 Dataset

Interview Questions:

  1. What is transfer learning in NLP?
  2. Why use DistilBERT instead of BERT?
  3. What dataset is SST-2?
  4. What are embeddings in classification?
  5. How do you evaluate classification performance?
  6. What biases can exist in sentiment models?
  7. How would you handle sarcasm in sentiment detection?

💡 "Learning never exhausts the mind." – Leonardo da Vinci


4. Multimodal Image Captioning

  • Goal: /caption-image → Upload an image, return caption.
  • Model: nlpconnect/vit-gpt2-image-captioning (~124M params).
  • Lesson: Learn vision-language alignment.
  • Resource: COCO Dataset

Interview Questions:

  1. How does ViT process images?
  2. What role does GPT-2 play in captioning?
  3. Why combine a vision encoder with a language decoder?
  4. What datasets are used for captioning?
  5. What challenges exist in image captioning?
  6. How do you evaluate captions (BLEU, CIDEr)?
  7. What real-world apps use captioning?

💡 "The best way to predict the future is to invent it." – Alan Kay


5. Naive RAG with Chroma + LangChain

  • Goal: /rag-query → Query docs with retrieval.
  • Model: all-MiniLM-L6-v2 (~33M params).
  • Lesson: Learn embeddings + retrieval-augmented generation with Chroma + LangChain retriever.
  • Resource: Chroma Docs | LangChain RAG
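The retrieval side might be sketched like this. Note the LangChain import paths shift between versions (this assumes the split-out `langchain-community` and `langchain-huggingface` packages), and `rag_query` shows the naive "stuff retrieved chunks into the prompt" pattern with any LLM callable:

```python
def build_retriever(texts, k=2):
    """Embed texts with all-MiniLM-L6-v2 and index them in an in-memory Chroma store."""
    from langchain_community.vectorstores import Chroma
    from langchain_huggingface import HuggingFaceEmbeddings

    embeddings = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-MiniLM-L6-v2"
    )
    store = Chroma.from_texts(texts, embeddings)
    return store.as_retriever(search_kwargs={"k": k})

def rag_query(question, retriever, llm):
    """Naive RAG: retrieve top-k chunks, stuff them into the prompt, call the LLM."""
    docs = retriever.invoke(question)
    context = "\n\n".join(d.page_content for d in docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm(prompt)
```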

Interview Questions:

  1. What is RAG and why is it useful?
  2. How do embeddings represent meaning?
  3. Why use Chroma as a vector DB?
  4. What is cosine similarity in retrieval?
  5. How do you update a knowledge base?
  6. What is the risk of injecting irrelevant documents?
  7. How does RAG differ from fine-tuning?

💡 "It always seems impossible until it’s done." – Nelson Mandela


6. Naive RAG with FAISS + LangChain

  • Goal: /rag-faiss-query → Same as above but with FAISS.
  • Model: all-MiniLM-L6-v2.
  • Lesson: Learn scalable vector search with FAISS + LangChain retriever.
  • Resource: FAISS Docs | LangChain VectorStores
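Swapping the vector store is a one-line change with LangChain's common `VectorStore` interface. A sketch (same versioning caveat on import paths as above):

```python
def build_faiss_retriever(texts, k=2):
    """Index texts in FAISS; by default LangChain builds an exact (flat L2) index."""
    from langchain_community.vectorstores import FAISS
    from langchain_huggingface import HuggingFaceEmbeddings

    embeddings = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-MiniLM-L6-v2"
    )
    store = FAISS.from_texts(texts, embeddings)
    store.save_local("faiss_index")  # persist the index for reuse across restarts
    return store.as_retriever(search_kwargs={"k": k})
```

For large corpora you would trade the flat index for an approximate one (IVF or HNSW) to keep search sub-linear.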

Interview Questions:

  1. What is FAISS, and why is it fast?
  2. What indexing methods does FAISS provide (IVF, HNSW)?
  3. How does FAISS handle billions of vectors?
  4. Compare FAISS vs Chroma.
  5. What is approximate nearest neighbor (ANN) search?
  6. How do you evaluate retrieval accuracy?
  7. How would you deploy FAISS in production?

💡 "Your time is limited, so don’t waste it living someone else’s life." – Steve Jobs


7. Multimodal Q&A (Image + Text)

  • Goal: /qa-image-text → Ask a question about an image.
  • Models: blip2-flan-t5-xl (~3B params) or Google Gemini Vision.
  • Lesson: Learn multimodal reasoning with LangChain multimodal support.
  • Resource: BLIP-2 Paper
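The core BLIP-2 call might look like this (the "Question: … Answer:" template and greedy decoding are illustrative defaults; the ~3B checkpoint needs a large GPU):

```python
def answer_about_image(image, question):
    """Visual question answering with BLIP-2's frozen Flan-T5 language model."""
    from transformers import Blip2ForConditionalGeneration, Blip2Processor

    model_id = "Salesforce/blip2-flan-t5-xl"
    processor = Blip2Processor.from_pretrained(model_id)
    model = Blip2ForConditionalGeneration.from_pretrained(model_id)

    # The processor prepares pixel values and tokenized text in one call
    inputs = processor(
        images=image, text=f"Question: {question} Answer:", return_tensors="pt"
    )
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
```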

Interview Questions:

  1. What is visual question answering (VQA)?
  2. How does BLIP-2 align vision + text?
  3. What is the role of a frozen LLM in multimodal models?
  4. What tasks benefit from multimodal inputs?
  5. What challenges exist in multimodal learning?
  6. How do you evaluate multimodal models?
  7. What industries need multimodal AI?

💡 "Tell me and I forget. Teach me and I remember. Involve me and I learn." – Benjamin Franklin


8. Chain Multiple Tools with LangChain

  • Goal: /researcher → Wikipedia fetch + summarization + sentiment.
  • Lesson: Learn chaining AI tasks with LangChain SequentialChain.
  • Resource: Wikipedia API | LangChain Chains
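Stripped of the framework, a sequential chain is just function composition with the intermediate outputs kept for tracing. A sketch with the three stages injected as callables (LangChain's SequentialChain does the same wiring with named input/output keys):

```python
def researcher_chain(topic, fetch, summarize, classify):
    """Run fetch -> summarize -> classify, keeping each stage's output."""
    article = fetch(topic)        # e.g. a Wikipedia page fetch
    summary = summarize(article)  # e.g. the /summarize model
    sentiment = classify(summary) # e.g. the /sentiment model
    return {"topic": topic, "summary": summary, "sentiment": sentiment}
```

Injecting the stages as parameters also makes per-tool failure handling and testing with stubs straightforward.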

Interview Questions:

  1. What is tool chaining in AI?
  2. Why combine multiple AI tools?
  3. What challenges exist when chaining APIs?
  4. How does orchestration differ from composition?
  5. How to handle failures in one tool?
  6. What is LangChain and why is it popular?
  7. How would you monitor toolchain latency?

💡 "Creativity is intelligence having fun." – Albert Einstein


9. Intelligent Chat Endpoint with LangChain Routing

  • Goal: /chat → Smart routing for queries (LLM, RAG, image).
  • Lesson: Learn adaptive decision-making in AI apps with LangChain RouterChain.
  • Resource: LLM Routing (LangChain)
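The routing decision itself can be as simple as keyword matching before falling back to the plain LLM. A toy sketch of the decision logic (a production router, like LangChain's RouterChain, would typically classify intent with an LLM instead of keywords):

```python
# Destination -> trigger keywords (illustrative; tune for your domain)
ROUTES = {
    "image": ("image", "picture", "photo", "draw"),
    "rag": ("document", "docs", "knowledge base", "according to"),
}

def route_query(query: str) -> str:
    """Pick a backend for a query; anything unmatched goes to the plain LLM."""
    q = query.lower()
    for destination, keywords in ROUTES.items():
        if any(k in q for k in keywords):
            return destination
    return "llm"
```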

Interview Questions:

  1. What is model routing?
  2. How do you detect intent in queries?
  3. How do you decide when to call RAG vs LLM?
  4. What are risks of automatic routing?
  5. How do you log and trace routed calls?
  6. What metrics help evaluate a chat system?
  7. How would you scale this system for enterprise use?

💡 "The best way to learn is by doing. The only way to build a strong future is to start building today." – Unknown


10. Diffusion Model – Image Generation

  • Goal: /generate-image → Generate images from text prompts.
  • Model: stable-diffusion-v1-5 (~860M params).
  • Lesson: Learn how diffusion models synthesize images (with Hugging Face Diffusers).
  • Resource: Stable Diffusion
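The generation call via Diffusers might be sketched as follows (the Hub id is the one commonly used at time of writing, but the v1-5 checkpoint has moved between organizations; step count and dtype are illustrative):

```python
def generate_image(prompt, steps=25):
    """Text-to-image with Stable Diffusion v1.5; returns a PIL.Image."""
    import torch
    from diffusers import StableDiffusionPipeline

    use_cuda = torch.cuda.is_available()
    pipe = StableDiffusionPipeline.from_pretrained(
        "sd-legacy/stable-diffusion-v1-5",
        # fp16 halves memory on GPU; fall back to fp32 on CPU
        torch_dtype=torch.float16 if use_cuda else torch.float32,
    )
    pipe = pipe.to("cuda" if use_cuda else "cpu")
    return pipe(prompt, num_inference_steps=steps).images[0]
```

Fewer denoising steps trade image quality for latency, which is the usual first knob for faster inference.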

Interview Questions:

  1. How do diffusion models generate images?
  2. What is denoising in diffusion?
  3. How does Stable Diffusion differ from DALL·E?
  4. Why are diffusion models memory-intensive?
  5. What ethical issues exist with generative images?
  6. How do you optimize diffusion for faster inference?
  7. What industries benefit from diffusion models?

💡 "The best way to predict the future is to create it." – Peter Drucker
