diff --git a/README.md b/README.md index 6f9cbe5..53bd033 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,6 @@ - [Gitpod](#gitpod) - [Wall of Contributors](#wall-of-contributors) - [![Join Our Discord](https://img.shields.io/badge/Discord-Join%20Server-blue?logo=discord&style=for-the-badge)](https://discord.com/invite/Yn9g6KuWyA) [![Subscribe on YouTube](https://img.shields.io/badge/YouTube-Subscribe-red?logo=youtube&style=for-the-badge)](https://www.youtube.com/@dhanushnehru?sub_confirmation=1) [![Subscribe to Newsletter](https://img.shields.io/badge/Newsletter-Subscribe-orange?style=for-the-badge)](https://dhanushn.substack.com/) @@ -37,8 +36,8 @@ More information on contributing and the general code of conduct for discussion ## List of Scripts in Repo -| Script | Link | Description | -| ---------------------------------------- |----------------------------------------------------------------------------------------------------------------------------------------------------------| ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Script | Link | Description | +| ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Arrange It | [Arrange It](https://github.com/DhanushNehru/Python-Scripts/tree/main/Arrange%20It) | A Python script that can automatically move files into corresponding folders based on their extensions. | | Auto WiFi Check | [Auto WiFi Check](https://github.com/DhanushNehru/Python-Scripts/tree/main/Auto%20WiFi%20Check) | A Python script to monitor if the WiFi connection is active or not | | AutoCert | [AutoCert](https://github.com/DhanushNehru/Python-Scripts/tree/main/AutoCert) | A Python script to auto-generate e-certificates in bulk. | @@ -53,11 +52,11 @@ More information on contributing and the general code of conduct for discussion | Countdown Timer | [Countdown Timer](https://github.com/DhanushNehru/Python-Scripts/tree/main/Countdown%20Timer) | Displays a message when the Input time elapses. | | Crop Images | [Crop Images](https://github.com/DhanushNehru/Python-Scripts/tree/main/Crop%20Images) | A Python script to crop a given image. | | CSV to Excel | [CSV to Excel](https://github.com/DhanushNehru/Python-Scripts/tree/main/CSV%20to%20Excel) | A Python script to convert a CSV to an Excel file. | -| CSV_TO_NDJSON | [CSV to Excel](https://github.com/DhanushNehru/Python-Scripts/tree/main/CSV_TO_NDJSON) | A Python script to convert a CSV to an NDJSON files file. | +| CSV_TO_NDJSON | [CSV to Excel](https://github.com/DhanushNehru/Python-Scripts/tree/main/CSV_TO_NDJSON) | A Python script to convert a CSV to an NDJSON files file. | | Currency Script | [Currency Script](https://github.com/DhanushNehru/Python-Scripts/tree/main/Currency%20Script) | A Python script to convert the currency of one country to that of another. | | Digital Clock | [Digital Clock](https://github.com/DhanushNehru/Python-Scripts/tree/main/Digital%20Clock) | A Python script to preview a digital clock in the terminal. 
|
| Display Popup Window | [Display Popup Window](https://github.com/DhanushNehru/Python-Scripts/tree/main/Display%20Popup%20Window) | A Python script to preview a GUI interface to the user. |
-| Distance Calculator | [Distance Calculator](https://github.com/Mathdallas-code/Python-Scripts/tree/main/Distance%20Calculator) | A Python script to calculate the distance between two points.
+| Distance Calculator | [Distance Calculator](https://github.com/Mathdallas-code/Python-Scripts/tree/main/Distance%20Calculator) | A Python script to calculate the distance between two points. |
| Duplicate Finder | [Duplicate Finder](https://github.com/DhanushNehru/Python-Scripts/tree/main/Duplicate%Fnder) | The script identifies duplicate files by MD5 hash and allows deletion or relocation. |
| Emoji | [Emoji](https://github.com/DhanushNehru/Python-Scripts/tree/main/Emoji) | The script generates a PDF with an emoji using a custom TrueType font. |
| Emoji to PDF | [Emoji to PDF](https://github.com/DhanushNehru/Python-Scripts/tree/main/Emoji%20To%20Pdf) | A Python Script to view Emoji in PDF. |
@@ -86,7 +85,7 @@ More information on contributing and the general code of conduct for discussion
| Image Watermarker | [Image Watermarker](https://github.com/DhanushNehru/Python-Scripts/tree/main/Image%20Watermarker) | Adds a watermark to an image. |
| Image to ASCII | [Image to ASCII](https://github.com/DhanushNehru/Python-Scripts/tree/main/Image%20to%20ASCII) | Converts an image into ASCII art. |
| Image to Gif | [Image to Gif](https://github.com/DhanushNehru/Python-Scripts/tree/main/Image%20to%20GIF) | Generate gif from images. |
-| Images to WebP Converter | [Images to WebP Converter](https://github.com/DhanushNehru/Python-Scripts/tree/main/Images%20to%20WebP%20Converter) | Converts images to WebP vie cmd or GUI |
+| Images to WebP Converter | [Images to WebP Converter](https://github.com/DhanushNehru/Python-Scripts/tree/main/Images%20to%20WebP%20Converter) | Converts images to WebP via cmd or GUI |
| Interactive Dictionary | [Interactive Dictionary](https://github.com/DhanushNehru/Python-Scripts/tree/main/Image%20InteractiveDictionary) | finding out meanings of words |
| IP Geolocator | [IP Geolocator](https://github.com/DhanushNehru/Python-Scripts/tree/main/IP%20Geolocator) | Uses an IP address to geolocate a location on Earth. |
| Jokes Generator | [Jokes generator](https://github.com/DhanushNehru/Python-Scripts/tree/main/Jokes%20Generator) | Generates jokes. |
@@ -97,7 +96,7 @@ More information on contributing and the general code of conduct for discussion
| Keylogger | [Keylogger](https://github.com/DhanushNehru/Python-Scripts/tree/main/Keylogger) | Keylogger that can track your keystrokes, clipboard text, take screenshots at regular intervals, and records audio. |
| Keyword - Retweeting | [Keyword - Retweeting](https://github.com/DhanushNehru/Python-Scripts/tree/main/Keyword%20Retweet%20Twitter%20Bot) | Find the latest tweets containing given keywords and then retweet them. |
| LinkedIn Bot | [LinkedIn Bot](https://github.com/DhanushNehru/Python-Scripts/tree/main/LinkedIn%20Bot) | Automates the process of searching for public profiles on LinkedIn and exporting the data to an Excel sheet. |
-| Longitude & Latitude to conical coverter | [Longitude Latitude conical converter](master/Longitude%20Latitude%20conical%20converter) | Converts Longitude and Latitude to Lambert conformal conic projection.
|
+| Longitude & Latitude to conical converter | [Longitude Latitude conical converter](master/Longitude%20Latitude%20conical%20converter) | Converts Longitude and Latitude to Lambert conformal conic projection. |
| Mail Sender | [Mail Sender](https://github.com/DhanushNehru/Python-Scripts/tree/main/Mail%20Sender) | Sends an email. |
| Merge Two Images | [Merge Two Images](https://github.com/DhanushNehru/Python-Scripts/tree/main/Merge%20Two%20Images) | Merges two images horizontally or vertically. |
| Mood based youtube song generator | [Mood based youtube song generator](https://github.com/DhanushNehru/Python-Scripts/tree/main/Mood%20based%20youtube%20song%20generator) | This Python script fetches a random song from YouTube based on your mood input and opens it in your default web browser. |
@@ -112,7 +111,7 @@ More information on contributing and the general code of conduct for discussion
| PDF to Audio | [PDF to Audio](https://github.com/DhanushNehru/Python-Scripts/tree/main/PDF%20to%20Audio) | Converts PDF to audio. |
| PDF to Text | [PDF to text](https://github.com/DhanushNehru/Python-Scripts/tree/main/PDF%20to%20text) | Converts PDF to text. |
| PDF merger and splitter | [PDF Merger and Splitter](https://github.com/AbhijitMotekar99/Python-Scripts/blob/main/PDF%20Merger%20and%20Splitter/PDF%20Merger%20and%20Splitter.py) | Create a tool that can merge multiple PDF files into one or split a single PDF into separate pages. |
-| Pizza Order | [Pizza Order](https://github.com/DhanushNehru/Python-Scripts/tree/main/Pizza%20Order) | An algorithm designed to handle pizza orders from customers with accurate receipts and calculations. |
+| Pizza Order | [Pizza Order](https://github.com/DhanushNehru/Python-Scripts/tree/main/Pizza%20Order) | An algorithm designed to handle pizza orders from customers with accurate receipts and calculations. |
| Planet Simulation | [Planet Simulation](https://github.com/DhanushNehru/Python-Scripts/tree/main/Planet%20Simulation) | A simulation of several planets rotating around the sun. |
| Playlist Exchange | [Playlist Exchange](https://github.com/DhanushNehru/Python-Scripts/tree/main/Playlist%20Exchange) | A Python script to exchange songs and playlists between Spotify and Python. |
| Pigeonhole Sort | [Algorithm](https://github.com/DhanushNehru/Python-Scripts/tree/main/PigeonHole) | The pigeonhole sort algorithm to sort your arrays efficiently! |
@@ -123,6 +122,7 @@ More information on contributing and the general code of conduct for discussion
| QR Code Scanner | [QR Code Scanner](https://github.com/DhanushNehru/Python-Scripts/tree/main/QR%20Code%20Scanner) | Helps in Sacanning the QR code in form of PNG or JPG just by running the python script. |
| QR Code with logo | [QR code with Logo](https://github.com/DhanushNehru/Python-Scripts/tree/main/QR%20with%20Logo) | QR Code Customization Feature |
| Random Color Generator | [Random Color Generator](https://github.com/DhanushNehru/Python-Scripts/tree/main/Random%20Color%20Generator) | A random color generator that will show you the color and values! |
+| Real Time Face Blurring Tool | [Real Time Face Blurring Tool](https://github.com/ChrisEssomba/Python-Scripts/tree/new-script/Real-Time-Face-Blurring-Tool) | A Python script that detects and blurs faces in images, videos, and webcam feeds using OpenCV and deep learning. |
| Remove Background | [Remove Background](https://github.com/DhanushNehru/Python-Scripts/tree/main/Remove%20Background) | Removes the background of images.
|
| Road-Lane-Detection | [Road-Lane-Detection](https://github.com/NotIncorecc/Python-Scripts/tree/main/Road-Lane-Detection) | Detects the lanes of the road |
| Rock Paper Scissor 1 | [Rock Paper Scissor 1](https://github.com/DhanushNehru/Python-Scripts/tree/main/Rock%20Paper%20Scissor%201) | A game of Rock Paper Scissors. |
@@ -130,7 +130,7 @@ More information on contributing and the general code of conduct for discussion
| Run Then Notify | [Run Then Notify](https://github.com/DhanushNehru/Python-Scripts/tree/main/Run%20Then%20Notify) | Runs a slow command and emails you when it completes execution. |
| Save File To Drive | [Save File To Drive](https://github.com/DhanushNehru/Python-Scripts/tree/master/Save%20file%20to%20Drive) | Saves all files and folder with proper structure from a folder to drive easily through a python script . |
| Selfie with Python | [Selfie with Python](https://github.com/DhanushNehru/Python-Scripts/tree/main/Selfie%20with%20Python) | Take your selfie with python . |
-| Simple DDOS | [Simple DDOS](https://github.com/DhanushNehru/Python-Scripts/tree/main/Simple%20DDOS) | The code allows you to send multiple HTTP requests concurrently for a specified duration. |
+| Simple DDOS | [Simple DDOS](https://github.com/DhanushNehru/Python-Scripts/tree/main/Simple%20DDOS) | The code allows you to send multiple HTTP requests concurrently for a specified duration. |
| Simple TCP Chat Server | [Simple TCP Chat Server](https://github.com/DhanushNehru/Python-Scripts/tree/main/TCP%20Chat%20Server) | Creates a local server on your LAN for receiving and sending messages! |
| Smart Attendance System | [Smart Attendance System](https://github.com/DhanushNehru/Python-Scripts/tree/main/Smart%20Attendance%20System) | This OpenCV framework is for Smart Attendance by actively decoding a student's QR Code. |
| Snake Game | [Snake Game](https://github.com/DhanushNehru/Python-Scripts/tree/main/Snake%20Game) | Classic snake game using python. |
@@ -139,9 +139,9 @@ More information on contributing and the general code of conduct for discussion
| Star Pattern | [Star Pattern](https://github.com/DhanushNehru/Python-Scripts/tree/main/Star%20Pattern) | Creates a star pattern pyramid. |
| Subnetting Calculator | [Subnetting Calculator](https://github.com/DhanushNehru/Python-Scripts/tree/main/Subnetting%20Calculator) | Calculates network information based on a given IP address and subnet mask. |
| Take a break | [Take a break](https://github.com/DhanushNehru/Python-Scripts/tree/main/Take%20A%20Break) | Python code to take a break while working long hours. |
-| Text Recognition | [Text Recognition](https://github.com/DhanushNehru/Python-Scripts/tree/Text-Recognition/Text%20Recognition) | A Image Text Recognition ML Model to extract text from Images |
+| Text Recognition | [Text Recognition](https://github.com/DhanushNehru/Python-Scripts/tree/Text-Recognition/Text%20Recognition) | An Image Text Recognition ML Model to extract text from Images |
| Text to Image | [Text to Image](https://github.com/DhanushNehru/Python-Scripts/tree/main/Text%20to%20Image) | A Python script that will take your text and convert it to a JPEG. |
-| Thread Progress | [Thread Progress](https://github.com/DhanushNehru/Python-Scripts/tree/main/Thread%20Progress) | A Python script demonstrating safe multithreading by using a lock to update a shared progress variable concurrently.
| +| Thread Progress | [Thread Progress](https://github.com/DhanushNehru/Python-Scripts/tree/main/Thread%20Progress) | A Python script demonstrating safe multithreading by using a lock to update a shared progress variable concurrently. | | Tic Tac Toe 1 | [Tic Tac Toe 1](https://github.com/DhanushNehru/Python-Scripts/tree/main/Tic-Tac-Toe%201) | A game of Tic Tac Toe. | | Tik Tac Toe 2 | [Tik Tac Toe 2](https://github.com/DhanushNehru/Python-Scripts/tree/main/Tic-Tac-Toe%202) | A game of Tik Tac Toe. | | Turtle Art & Patterns | [Turtle Art](https://github.com/DhanushNehru/Python-Scripts/tree/main/Turtle%20Art) | Scripts to view turtle art also have prompt-based ones. | @@ -157,7 +157,7 @@ More information on contributing and the general code of conduct for discussion | Weather GUI | [Weather GUI](https://github.com/DhanushNehru/Python-Scripts/tree/main/Weather%20GUI) | Displays information on the weather. | | Website Blocker | [Website Blocker](https://github.com/DhanushNehru/Python-Scripts/tree/main/Website%20Blocker) | Downloads the website and loads it on your homepage in your local IP. | | Website Cloner | [Website Cloner](https://github.com/DhanushNehru/Python-Scripts/tree/main/Website%20Cloner) | Clones any website and opens the site in your local IP. | -| Web Scraper | [Web Scraper](https://github.com/DhanushNehru/Python-Scripts/tree/main/Web%20Scraper) | A Python script that scrapes blog titles from [Python.org](https://www.python.org/) and saves them to a file. | +| Web Scraper | [Web Scraper](https://github.com/DhanushNehru/Python-Scripts/tree/main/Web%20Scraper) | A Python script that scrapes blog titles from [Python.org](https://www.python.org/) and saves them to a file. | | Weight Converter | [Weight Converter](https://github.com/DhanushNehru/Python-Scripts/tree/main/Weight%20Converter) | Simple GUI script to convert weight in different measurement units. | | Wikipedia Data Extractor | [Wikipedia Data Extractor](https://github.com/DhanushNehru/Python-Scripts/tree/main/Wikipedia%20Data%20Extractor) | A simple Wikipedia data extractor script to get output in your IDE. | | Word to PDF | [Word to PDF](https://github.com/DhanushNehru/Python-Scripts/tree/main/Word%20to%20PDF%20converter) | A Python script to convert an MS Word file to a PDF file. | @@ -176,7 +176,6 @@ You can use Gitpod in the cloud [![Gitpod Ready-to-Code](https://img.shields.io/ - If you liked this repository, support it by starring ⭐ Thank You for being here :) diff --git a/Real-Time-Face-Blurring-Tool/README.md b/Real-Time-Face-Blurring-Tool/README.md new file mode 100644 index 0000000..03bfc45 --- /dev/null +++ b/Real-Time-Face-Blurring-Tool/README.md @@ -0,0 +1,82 @@ +# Real-Time Face Blurring Tool + +A robust Python application for anonymizing faces in images, videos, and live webcam feeds using OpenCV's deep neural network (DNN) module. 
+ +## Features + +- **Multi-source processing**: + - 📷 Single image processing + - 🎥 Video file processing + - 🌐 Live webcam feed processing +- **Advanced face detection** using Caffe-based DNN model +- **Adjustable parameters**: + - Blur strength (kernel size) + - Detection confidence threshold +- **Automatic output organization**: + - `./output_images/` for processed images + - `./output_videos/` for processed videos +- **Progress tracking** for video processing +- **Graceful resource handling** with proper cleanup + +## Requirements + +- Python 3.6+ +- Required packages: + ```bash + pip install opencv-python numpy + ``` + +## Installation + +Clone the repository and install dependencies: + +```bash +git clone https://github.com/yourusername/Real-Time-Face-Blurring-Tool.git +cd Real-Time-Face-Blurring-Tool +pip install -r requirements.txt +``` + +## Model Files + +Download the following files and place them in the correct folders: + +- `deploy.prototxt.txt` → `protocol/` folder +- `res10_300x300_ssd_iter_140000_fp16.caffemodel` → `model/` folder + +You can download them from [OpenCV's GitHub repository](https://github.com/opencv/opencv/tree/master/samples/dnn/face_detector). + +## Usage + +Process an image: + +```bash +python main.py --image path/to/image.jpg +``` + +Process a video: + +```bash +python main.py --video path/to/video.mp4 +``` + +Process webcam feed: + +```bash +python main.py --webcam +``` + +### Optional Arguments + +- `--blur` Blur kernel size (odd integer, default: 61) +- `--confidence` Face detection confidence threshold (default: 0.5) + +## Output + +- Processed images are saved in `./output_images/` with `_blurred` appended to the filename. +- Processed videos are saved in `./output_videos/` with `_blurred` appended to the filename. + +## Troubleshooting + +- **Model file not found:** Make sure you downloaded the model files and placed them in the correct folders. +- **Webcam not detected:** Ensure your webcam is connected and not used by another application. +- **Permission errors:** Run your terminal or IDE as administrator if you encounter permission issues writing output files. 
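+
+## Programmatic Use (optional)
+
+If you would rather call the blurring logic from your own code than through the CLI, a minimal sketch using the `blur_faces` helper defined in `main.py` could look like the following. The image file names are placeholders, and the snippet assumes it is run from inside the `Real-Time-Face-Blurring-Tool` folder so that `main` is importable:
+
+```python
+import cv2
+
+from main import blur_faces  # helper defined in main.py; loads the DNN model on import
+
+# Load an input image (placeholder path).
+image = cv2.imread("group_photo.jpg")
+
+# Blur any detected faces. Both keyword arguments are optional and fall back to
+# the defaults configured in main.py (confidence 0.5, blur kernel size 61).
+blurred = blur_faces(image, confidence_threshold=0.6, blur_strength=81)
+
+# Save the anonymized result.
+cv2.imwrite("group_photo_blurred.jpg", blurred)
+```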
diff --git a/Real-Time-Face-Blurring-Tool/main.py b/Real-Time-Face-Blurring-Tool/main.py new file mode 100644 index 0000000..1ec0f83 --- /dev/null +++ b/Real-Time-Face-Blurring-Tool/main.py @@ -0,0 +1,241 @@ +import os +import cv2 +import numpy as np +import logging +from pathlib import Path + +# Configuration +DEFAULT_BLUR_STRENGTH = 61 # Must be odd +DEFAULT_CONFIDENCE_THRESHOLD = 0.5 +OUTPUT_IMAGE_FOLDER = "./output_images/" +OUTPUT_VIDEO_FOLDER = "./output_videos/" +WEBCAM_RESOLUTION = (640, 480) +INPUT_SIZE = (300, 300) +MEAN_VALUES = (104.0, 177.0, 123.0) +MODEL_PROTOTXT = "deploy.prototxt.txt" +MODEL_WEIGHTS = "res10_300x300_ssd_iter_140000_fp16.caffemodel" + +# Setup logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +# Load DNN model once +def load_face_detection_model(): + """Loads the pre-trained face detection model.""" + try: + base_dir = Path(__file__).parent + prototxt_path = str(base_dir / "protocol" / MODEL_PROTOTXT) + model_path = str(base_dir / "model" / MODEL_WEIGHTS) + + if not os.path.exists(prototxt_path): + raise FileNotFoundError(f"Prototxt file not found at {prototxt_path}") + if not os.path.exists(model_path): + raise FileNotFoundError(f"Model weights not found at {model_path}") + + return cv2.dnn.readNetFromCaffe(prototxt_path, model_path) + except Exception as e: + logger.error(f"Failed to load model: {e}") + raise + +face_net = load_face_detection_model() + +def save_video(video, output_path, default_fps=30, default_res=WEBCAM_RESOLUTION): + """ + Initializes a video writer object to save processed video frames. + + Args: + video: OpenCV video capture object + output_path: Path to save the output video + default_fps: Fallback FPS if not detected + default_res: Fallback resolution if not detected + + Returns: + cv2.VideoWriter object + """ + try: + fps = video.get(cv2.CAP_PROP_FPS) + if not fps or fps <= 1: + fps = default_fps + width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH)) or default_res[0] + height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)) or default_res[1] + + fourcc = cv2.VideoWriter_fourcc(*'mp4v') + return cv2.VideoWriter(output_path, fourcc, fps, (width, height)) + except Exception as e: + logger.error(f"Failed to initialize video writer: {e}") + raise + +def blur_faces(image, confidence_threshold=DEFAULT_CONFIDENCE_THRESHOLD, + blur_strength=DEFAULT_BLUR_STRENGTH): + """ + Detects and blurs faces in an image. 
+ + Args: + image: Input image (numpy array) + confidence_threshold: Minimum confidence for face detection (0-1) + blur_strength: Kernel size for Gaussian blur (must be odd) + + Returns: + Image with blurred faces (numpy array) + """ + if image is None: + raise ValueError("Input image cannot be None") + + if blur_strength % 2 == 0: + blur_strength += 1 # Ensure odd kernel size + logger.debug(f"Adjusted blur strength to {blur_strength} to make it odd") + + (h, w) = image.shape[:2] + + try: + blob = cv2.dnn.blobFromImage( + cv2.resize(image,INPUT_SIZE), + 1.0, + INPUT_SIZE, + MEAN_VALUES + ) + + face_net.setInput(blob) + detections = face_net.forward() + + for i in range(detections.shape[2]): + confidence = detections[0, 0, i, 2] + + if confidence > confidence_threshold: + box = detections[0, 0, i, 3:7] * np.array([w, h, w, h]) + (startX, startY, endX, endY) = box.astype("int") + + # Ensure coordinates stay within image bounds + startX, startY = max(0, startX), max(0, startY) + endX, endY = min(w, endX), min(h, endY) + # Validate ROI dimensions + if endY > startY and endX > startX: + # Extract and blur face ROI + face_roi = image[startY:endY, startX:endX] + blurred_face = cv2.GaussianBlur(face_roi, (blur_strength, blur_strength), 0) + image[startY:endY, startX:endX] = blurred_face + + except Exception as e: + logger.error(f"Error during face blurring: {e}") + raise + + return image + +def blur_faces_images(image_path): + """Processes an image file and saves the blurred version.""" + try: + if not os.path.exists(image_path): + raise FileNotFoundError(f"Image not found at {image_path}") + + image = cv2.imread(image_path) + if image is None: + raise ValueError(f"Failed to load image from {image_path}") + + blurred_image = blur_faces(image) + + os.makedirs(OUTPUT_IMAGE_FOLDER, exist_ok=True) + filename, ext = os.path.splitext(os.path.basename(image_path)) + output_path = os.path.join(OUTPUT_IMAGE_FOLDER, f"{filename}_blurred{ext}") + + if not cv2.imwrite(output_path, blurred_image): + raise IOError(f"Failed to save image to {output_path}") + + logger.info(f"Successfully saved blurred image to {output_path}") + + except Exception as e: + logger.error(f"Error processing image: {e}") + raise + +def process_video_stream(input_source=None, is_webcam=False): + """ + Processes either a video file or webcam stream with face blurring. 
+ + Args: + input_source: Path to video file (if not webcam) + is_webcam: Boolean flag for webcam processing + """ + try: + os.makedirs(OUTPUT_VIDEO_FOLDER, exist_ok=True) + + if is_webcam: + video = cv2.VideoCapture(0) + if not video.isOpened(): + raise ValueError("Unable to access the webcam.") + video.set(cv2.CAP_PROP_FRAME_WIDTH, WEBCAM_RESOLUTION[0]) + video.set(cv2.CAP_PROP_FRAME_HEIGHT, WEBCAM_RESOLUTION[1]) + output_path = os.path.join(OUTPUT_VIDEO_FOLDER, "webcam_blurred.mp4") + logger.info("Starting webcam processing...") + else: + if not os.path.exists(input_source): + raise FileNotFoundError(f"Video file not found at {input_source}") + video = cv2.VideoCapture(input_source) + if not video.isOpened(): + raise ValueError(f"Unable to open video file: {input_source}") + name = os.path.basename(input_source) + output_path = os.path.join(OUTPUT_VIDEO_FOLDER, f"{os.path.splitext(name)[0]}_blurred.mp4") + logger.info(f"Processing video file: {input_source}") + + out = save_video(video, output_path) + frame_count = 0 + total_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT)) if not is_webcam else -1 + + try: + while True: + ret, frame = video.read() + if not ret: + break + + frame_count += 1 + if not is_webcam and frame_count % 10 == 0: + logger.info(f"Processing frame {frame_count}/{total_frames}") + + blurred_frame = blur_faces(frame) + cv2.imshow('Blurred Feed', blurred_frame) + + if cv2.waitKey(1) & 0xFF == ord('q'): + logger.info("User requested early termination") + break + + out.write(blurred_frame) + + finally: + out.release() + video.release() + cv2.destroyAllWindows() + logger.info(f"Successfully saved video to {output_path}") + + except Exception as e: + logger.error(f"Error processing video: {e}") + raise + +def main(): + """Command line interface for the face blurring application.""" + import argparse + + parser = argparse.ArgumentParser( + description="Face blurring application that processes images, videos, or webcam streams" + ) + parser.add_argument('--image', help='Path to input image') + parser.add_argument('--video', help='Path to input video') + parser.add_argument('--webcam', action='store_true', help='Use webcam') + + args = parser.parse_args() + + try: + if args.image: + blur_faces_images(args.image) + elif args.video: + process_video_stream(args.video, is_webcam=False) + elif args.webcam: + process_video_stream(is_webcam=True) + else: + logger.warning("No input source specified. 
Use --image, --video, or --webcam") + + except Exception as e: + logger.error(f"Application error: {e}") + return 1 + + return 0 + +if __name__ == "__main__": + exit(main()) \ No newline at end of file diff --git a/Real-Time-Face-Blurring-Tool/model/res10_300x300_ssd_iter_140000_fp16.caffemodel b/Real-Time-Face-Blurring-Tool/model/res10_300x300_ssd_iter_140000_fp16.caffemodel new file mode 100644 index 0000000..0e9cd4a Binary files /dev/null and b/Real-Time-Face-Blurring-Tool/model/res10_300x300_ssd_iter_140000_fp16.caffemodel differ diff --git a/Real-Time-Face-Blurring-Tool/protocol/deploy.prototxt.txt b/Real-Time-Face-Blurring-Tool/protocol/deploy.prototxt.txt new file mode 100644 index 0000000..a128515 --- /dev/null +++ b/Real-Time-Face-Blurring-Tool/protocol/deploy.prototxt.txt @@ -0,0 +1,1790 @@ +input: "data" +input_shape { + dim: 1 + dim: 3 + dim: 300 + dim: 300 +} + +layer { + name: "data_bn" + type: "BatchNorm" + bottom: "data" + top: "data_bn" + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } +} +layer { + name: "data_scale" + type: "Scale" + bottom: "data_bn" + top: "data_bn" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + scale_param { + bias_term: true + } +} +layer { + name: "conv1_h" + type: "Convolution" + bottom: "data_bn" + top: "conv1_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 32 + pad: 3 + kernel_size: 7 + stride: 2 + weight_filler { + type: "msra" + variance_norm: FAN_OUT + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: "conv1_bn_h" + type: "BatchNorm" + bottom: "conv1_h" + top: "conv1_h" + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } +} +layer { + name: "conv1_scale_h" + type: "Scale" + bottom: "conv1_h" + top: "conv1_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + scale_param { + bias_term: true + } +} +layer { + name: "conv1_relu" + type: "ReLU" + bottom: "conv1_h" + top: "conv1_h" +} +layer { + name: "conv1_pool" + type: "Pooling" + bottom: "conv1_h" + top: "conv1_pool" + pooling_param { + kernel_size: 3 + stride: 2 + } +} +layer { + name: "layer_64_1_conv1_h" + type: "Convolution" + bottom: "conv1_pool" + top: "layer_64_1_conv1_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 32 + bias_term: false + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: "layer_64_1_bn2_h" + type: "BatchNorm" + bottom: "layer_64_1_conv1_h" + top: "layer_64_1_conv1_h" + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } +} +layer { + name: "layer_64_1_scale2_h" + type: "Scale" + bottom: "layer_64_1_conv1_h" + top: "layer_64_1_conv1_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + scale_param { + bias_term: true + } +} +layer { + name: "layer_64_1_relu2" + type: "ReLU" + bottom: "layer_64_1_conv1_h" + top: "layer_64_1_conv1_h" +} +layer { + name: "layer_64_1_conv2_h" + type: "Convolution" + bottom: "layer_64_1_conv1_h" + top: "layer_64_1_conv2_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 32 + bias_term: false + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} 
+layer { + name: "layer_64_1_sum" + type: "Eltwise" + bottom: "layer_64_1_conv2_h" + bottom: "conv1_pool" + top: "layer_64_1_sum" +} +layer { + name: "layer_128_1_bn1_h" + type: "BatchNorm" + bottom: "layer_64_1_sum" + top: "layer_128_1_bn1_h" + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } +} +layer { + name: "layer_128_1_scale1_h" + type: "Scale" + bottom: "layer_128_1_bn1_h" + top: "layer_128_1_bn1_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + scale_param { + bias_term: true + } +} +layer { + name: "layer_128_1_relu1" + type: "ReLU" + bottom: "layer_128_1_bn1_h" + top: "layer_128_1_bn1_h" +} +layer { + name: "layer_128_1_conv1_h" + type: "Convolution" + bottom: "layer_128_1_bn1_h" + top: "layer_128_1_conv1_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 128 + bias_term: false + pad: 1 + kernel_size: 3 + stride: 2 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: "layer_128_1_bn2" + type: "BatchNorm" + bottom: "layer_128_1_conv1_h" + top: "layer_128_1_conv1_h" + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } +} +layer { + name: "layer_128_1_scale2" + type: "Scale" + bottom: "layer_128_1_conv1_h" + top: "layer_128_1_conv1_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + scale_param { + bias_term: true + } +} +layer { + name: "layer_128_1_relu2" + type: "ReLU" + bottom: "layer_128_1_conv1_h" + top: "layer_128_1_conv1_h" +} +layer { + name: "layer_128_1_conv2" + type: "Convolution" + bottom: "layer_128_1_conv1_h" + top: "layer_128_1_conv2" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 128 + bias_term: false + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: "layer_128_1_conv_expand_h" + type: "Convolution" + bottom: "layer_128_1_bn1_h" + top: "layer_128_1_conv_expand_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 128 + bias_term: false + pad: 0 + kernel_size: 1 + stride: 2 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: "layer_128_1_sum" + type: "Eltwise" + bottom: "layer_128_1_conv2" + bottom: "layer_128_1_conv_expand_h" + top: "layer_128_1_sum" +} +layer { + name: "layer_256_1_bn1" + type: "BatchNorm" + bottom: "layer_128_1_sum" + top: "layer_256_1_bn1" + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } +} +layer { + name: "layer_256_1_scale1" + type: "Scale" + bottom: "layer_256_1_bn1" + top: "layer_256_1_bn1" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + scale_param { + bias_term: true + } +} +layer { + name: "layer_256_1_relu1" + type: "ReLU" + bottom: "layer_256_1_bn1" + top: "layer_256_1_bn1" +} +layer { + name: "layer_256_1_conv1" + type: "Convolution" + bottom: "layer_256_1_bn1" + top: "layer_256_1_conv1" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 256 + bias_term: false + pad: 1 + kernel_size: 3 + stride: 2 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: "layer_256_1_bn2" + type: "BatchNorm" + bottom: "layer_256_1_conv1" + top: "layer_256_1_conv1" + param { + lr_mult: 0.0 + } + 
param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } +} +layer { + name: "layer_256_1_scale2" + type: "Scale" + bottom: "layer_256_1_conv1" + top: "layer_256_1_conv1" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + scale_param { + bias_term: true + } +} +layer { + name: "layer_256_1_relu2" + type: "ReLU" + bottom: "layer_256_1_conv1" + top: "layer_256_1_conv1" +} +layer { + name: "layer_256_1_conv2" + type: "Convolution" + bottom: "layer_256_1_conv1" + top: "layer_256_1_conv2" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 256 + bias_term: false + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: "layer_256_1_conv_expand" + type: "Convolution" + bottom: "layer_256_1_bn1" + top: "layer_256_1_conv_expand" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 256 + bias_term: false + pad: 0 + kernel_size: 1 + stride: 2 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: "layer_256_1_sum" + type: "Eltwise" + bottom: "layer_256_1_conv2" + bottom: "layer_256_1_conv_expand" + top: "layer_256_1_sum" +} +layer { + name: "layer_512_1_bn1" + type: "BatchNorm" + bottom: "layer_256_1_sum" + top: "layer_512_1_bn1" + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } +} +layer { + name: "layer_512_1_scale1" + type: "Scale" + bottom: "layer_512_1_bn1" + top: "layer_512_1_bn1" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + scale_param { + bias_term: true + } +} +layer { + name: "layer_512_1_relu1" + type: "ReLU" + bottom: "layer_512_1_bn1" + top: "layer_512_1_bn1" +} +layer { + name: "layer_512_1_conv1_h" + type: "Convolution" + bottom: "layer_512_1_bn1" + top: "layer_512_1_conv1_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 128 + bias_term: false + pad: 1 + kernel_size: 3 + stride: 1 # 2 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: "layer_512_1_bn2_h" + type: "BatchNorm" + bottom: "layer_512_1_conv1_h" + top: "layer_512_1_conv1_h" + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } +} +layer { + name: "layer_512_1_scale2_h" + type: "Scale" + bottom: "layer_512_1_conv1_h" + top: "layer_512_1_conv1_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + scale_param { + bias_term: true + } +} +layer { + name: "layer_512_1_relu2" + type: "ReLU" + bottom: "layer_512_1_conv1_h" + top: "layer_512_1_conv1_h" +} +layer { + name: "layer_512_1_conv2_h" + type: "Convolution" + bottom: "layer_512_1_conv1_h" + top: "layer_512_1_conv2_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 256 + bias_term: false + pad: 2 # 1 + kernel_size: 3 + stride: 1 + dilation: 2 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: "layer_512_1_conv_expand_h" + type: "Convolution" + bottom: "layer_512_1_bn1" + top: "layer_512_1_conv_expand_h" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + convolution_param { + num_output: 256 + bias_term: false + pad: 0 + kernel_size: 1 + stride: 1 # 2 + weight_filler { + type: "msra" + } + bias_filler { + type: "constant" + value: 0.0 + } + } +} +layer { + name: 
"layer_512_1_sum" + type: "Eltwise" + bottom: "layer_512_1_conv2_h" + bottom: "layer_512_1_conv_expand_h" + top: "layer_512_1_sum" +} +layer { + name: "last_bn_h" + type: "BatchNorm" + bottom: "layer_512_1_sum" + top: "layer_512_1_sum" + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } + param { + lr_mult: 0.0 + } +} +layer { + name: "last_scale_h" + type: "Scale" + bottom: "layer_512_1_sum" + top: "layer_512_1_sum" + param { + lr_mult: 1.0 + decay_mult: 1.0 + } + param { + lr_mult: 2.0 + decay_mult: 1.0 + } + scale_param { + bias_term: true + } +} +layer { + name: "last_relu" + type: "ReLU" + bottom: "layer_512_1_sum" + top: "fc7" +} + +layer { + name: "conv6_1_h" + type: "Convolution" + bottom: "fc7" + top: "conv6_1_h" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 128 + pad: 0 + kernel_size: 1 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv6_1_relu" + type: "ReLU" + bottom: "conv6_1_h" + top: "conv6_1_h" +} +layer { + name: "conv6_2_h" + type: "Convolution" + bottom: "conv6_1_h" + top: "conv6_2_h" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 256 + pad: 1 + kernel_size: 3 + stride: 2 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv6_2_relu" + type: "ReLU" + bottom: "conv6_2_h" + top: "conv6_2_h" +} +layer { + name: "conv7_1_h" + type: "Convolution" + bottom: "conv6_2_h" + top: "conv7_1_h" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 64 + pad: 0 + kernel_size: 1 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv7_1_relu" + type: "ReLU" + bottom: "conv7_1_h" + top: "conv7_1_h" +} +layer { + name: "conv7_2_h" + type: "Convolution" + bottom: "conv7_1_h" + top: "conv7_2_h" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 128 + pad: 1 + kernel_size: 3 + stride: 2 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv7_2_relu" + type: "ReLU" + bottom: "conv7_2_h" + top: "conv7_2_h" +} +layer { + name: "conv8_1_h" + type: "Convolution" + bottom: "conv7_2_h" + top: "conv8_1_h" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 64 + pad: 0 + kernel_size: 1 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv8_1_relu" + type: "ReLU" + bottom: "conv8_1_h" + top: "conv8_1_h" +} +layer { + name: "conv8_2_h" + type: "Convolution" + bottom: "conv8_1_h" + top: "conv8_2_h" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 128 + pad: 0 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv8_2_relu" + type: "ReLU" + bottom: "conv8_2_h" + top: "conv8_2_h" +} +layer { + name: "conv9_1_h" + type: "Convolution" + bottom: "conv8_2_h" + top: "conv9_1_h" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 64 + pad: 0 + kernel_size: 1 + stride: 
1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv9_1_relu" + type: "ReLU" + bottom: "conv9_1_h" + top: "conv9_1_h" +} +layer { + name: "conv9_2_h" + type: "Convolution" + bottom: "conv9_1_h" + top: "conv9_2_h" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 128 + pad: 0 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv9_2_relu" + type: "ReLU" + bottom: "conv9_2_h" + top: "conv9_2_h" +} +layer { + name: "conv4_3_norm" + type: "Normalize" + bottom: "layer_256_1_bn1" + top: "conv4_3_norm" + norm_param { + across_spatial: false + scale_filler { + type: "constant" + value: 20 + } + channel_shared: false + } +} +layer { + name: "conv4_3_norm_mbox_loc" + type: "Convolution" + bottom: "conv4_3_norm" + top: "conv4_3_norm_mbox_loc" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 16 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv4_3_norm_mbox_loc_perm" + type: "Permute" + bottom: "conv4_3_norm_mbox_loc" + top: "conv4_3_norm_mbox_loc_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "conv4_3_norm_mbox_loc_flat" + type: "Flatten" + bottom: "conv4_3_norm_mbox_loc_perm" + top: "conv4_3_norm_mbox_loc_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "conv4_3_norm_mbox_conf" + type: "Convolution" + bottom: "conv4_3_norm" + top: "conv4_3_norm_mbox_conf" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 8 # 84 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv4_3_norm_mbox_conf_perm" + type: "Permute" + bottom: "conv4_3_norm_mbox_conf" + top: "conv4_3_norm_mbox_conf_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "conv4_3_norm_mbox_conf_flat" + type: "Flatten" + bottom: "conv4_3_norm_mbox_conf_perm" + top: "conv4_3_norm_mbox_conf_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "conv4_3_norm_mbox_priorbox" + type: "PriorBox" + bottom: "conv4_3_norm" + bottom: "data" + top: "conv4_3_norm_mbox_priorbox" + prior_box_param { + min_size: 30.0 + max_size: 60.0 + aspect_ratio: 2 + flip: true + clip: false + variance: 0.1 + variance: 0.1 + variance: 0.2 + variance: 0.2 + step: 8 + offset: 0.5 + } +} +layer { + name: "fc7_mbox_loc" + type: "Convolution" + bottom: "fc7" + top: "fc7_mbox_loc" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 24 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "fc7_mbox_loc_perm" + type: "Permute" + bottom: "fc7_mbox_loc" + top: "fc7_mbox_loc_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "fc7_mbox_loc_flat" + type: "Flatten" + bottom: "fc7_mbox_loc_perm" + top: "fc7_mbox_loc_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "fc7_mbox_conf" + type: "Convolution" + bottom: "fc7" + top: "fc7_mbox_conf" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + 
decay_mult: 0 + } + convolution_param { + num_output: 12 # 126 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "fc7_mbox_conf_perm" + type: "Permute" + bottom: "fc7_mbox_conf" + top: "fc7_mbox_conf_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "fc7_mbox_conf_flat" + type: "Flatten" + bottom: "fc7_mbox_conf_perm" + top: "fc7_mbox_conf_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "fc7_mbox_priorbox" + type: "PriorBox" + bottom: "fc7" + bottom: "data" + top: "fc7_mbox_priorbox" + prior_box_param { + min_size: 60.0 + max_size: 111.0 + aspect_ratio: 2 + aspect_ratio: 3 + flip: true + clip: false + variance: 0.1 + variance: 0.1 + variance: 0.2 + variance: 0.2 + step: 16 + offset: 0.5 + } +} +layer { + name: "conv6_2_mbox_loc" + type: "Convolution" + bottom: "conv6_2_h" + top: "conv6_2_mbox_loc" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 24 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv6_2_mbox_loc_perm" + type: "Permute" + bottom: "conv6_2_mbox_loc" + top: "conv6_2_mbox_loc_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "conv6_2_mbox_loc_flat" + type: "Flatten" + bottom: "conv6_2_mbox_loc_perm" + top: "conv6_2_mbox_loc_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "conv6_2_mbox_conf" + type: "Convolution" + bottom: "conv6_2_h" + top: "conv6_2_mbox_conf" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 12 # 126 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv6_2_mbox_conf_perm" + type: "Permute" + bottom: "conv6_2_mbox_conf" + top: "conv6_2_mbox_conf_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "conv6_2_mbox_conf_flat" + type: "Flatten" + bottom: "conv6_2_mbox_conf_perm" + top: "conv6_2_mbox_conf_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "conv6_2_mbox_priorbox" + type: "PriorBox" + bottom: "conv6_2_h" + bottom: "data" + top: "conv6_2_mbox_priorbox" + prior_box_param { + min_size: 111.0 + max_size: 162.0 + aspect_ratio: 2 + aspect_ratio: 3 + flip: true + clip: false + variance: 0.1 + variance: 0.1 + variance: 0.2 + variance: 0.2 + step: 32 + offset: 0.5 + } +} +layer { + name: "conv7_2_mbox_loc" + type: "Convolution" + bottom: "conv7_2_h" + top: "conv7_2_mbox_loc" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 24 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv7_2_mbox_loc_perm" + type: "Permute" + bottom: "conv7_2_mbox_loc" + top: "conv7_2_mbox_loc_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "conv7_2_mbox_loc_flat" + type: "Flatten" + bottom: "conv7_2_mbox_loc_perm" + top: "conv7_2_mbox_loc_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "conv7_2_mbox_conf" + type: "Convolution" + bottom: "conv7_2_h" + top: "conv7_2_mbox_conf" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + 
convolution_param { + num_output: 12 # 126 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv7_2_mbox_conf_perm" + type: "Permute" + bottom: "conv7_2_mbox_conf" + top: "conv7_2_mbox_conf_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "conv7_2_mbox_conf_flat" + type: "Flatten" + bottom: "conv7_2_mbox_conf_perm" + top: "conv7_2_mbox_conf_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "conv7_2_mbox_priorbox" + type: "PriorBox" + bottom: "conv7_2_h" + bottom: "data" + top: "conv7_2_mbox_priorbox" + prior_box_param { + min_size: 162.0 + max_size: 213.0 + aspect_ratio: 2 + aspect_ratio: 3 + flip: true + clip: false + variance: 0.1 + variance: 0.1 + variance: 0.2 + variance: 0.2 + step: 64 + offset: 0.5 + } +} +layer { + name: "conv8_2_mbox_loc" + type: "Convolution" + bottom: "conv8_2_h" + top: "conv8_2_mbox_loc" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 16 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv8_2_mbox_loc_perm" + type: "Permute" + bottom: "conv8_2_mbox_loc" + top: "conv8_2_mbox_loc_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "conv8_2_mbox_loc_flat" + type: "Flatten" + bottom: "conv8_2_mbox_loc_perm" + top: "conv8_2_mbox_loc_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "conv8_2_mbox_conf" + type: "Convolution" + bottom: "conv8_2_h" + top: "conv8_2_mbox_conf" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 8 # 84 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv8_2_mbox_conf_perm" + type: "Permute" + bottom: "conv8_2_mbox_conf" + top: "conv8_2_mbox_conf_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "conv8_2_mbox_conf_flat" + type: "Flatten" + bottom: "conv8_2_mbox_conf_perm" + top: "conv8_2_mbox_conf_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "conv8_2_mbox_priorbox" + type: "PriorBox" + bottom: "conv8_2_h" + bottom: "data" + top: "conv8_2_mbox_priorbox" + prior_box_param { + min_size: 213.0 + max_size: 264.0 + aspect_ratio: 2 + flip: true + clip: false + variance: 0.1 + variance: 0.1 + variance: 0.2 + variance: 0.2 + step: 100 + offset: 0.5 + } +} +layer { + name: "conv9_2_mbox_loc" + type: "Convolution" + bottom: "conv9_2_h" + top: "conv9_2_mbox_loc" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + convolution_param { + num_output: 16 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv9_2_mbox_loc_perm" + type: "Permute" + bottom: "conv9_2_mbox_loc" + top: "conv9_2_mbox_loc_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "conv9_2_mbox_loc_flat" + type: "Flatten" + bottom: "conv9_2_mbox_loc_perm" + top: "conv9_2_mbox_loc_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "conv9_2_mbox_conf" + type: "Convolution" + bottom: "conv9_2_h" + top: "conv9_2_mbox_conf" + param { + lr_mult: 1 + decay_mult: 1 + } + param { + lr_mult: 2 + decay_mult: 0 + } + 
convolution_param { + num_output: 8 # 84 + pad: 1 + kernel_size: 3 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + value: 0 + } + } +} +layer { + name: "conv9_2_mbox_conf_perm" + type: "Permute" + bottom: "conv9_2_mbox_conf" + top: "conv9_2_mbox_conf_perm" + permute_param { + order: 0 + order: 2 + order: 3 + order: 1 + } +} +layer { + name: "conv9_2_mbox_conf_flat" + type: "Flatten" + bottom: "conv9_2_mbox_conf_perm" + top: "conv9_2_mbox_conf_flat" + flatten_param { + axis: 1 + } +} +layer { + name: "conv9_2_mbox_priorbox" + type: "PriorBox" + bottom: "conv9_2_h" + bottom: "data" + top: "conv9_2_mbox_priorbox" + prior_box_param { + min_size: 264.0 + max_size: 315.0 + aspect_ratio: 2 + flip: true + clip: false + variance: 0.1 + variance: 0.1 + variance: 0.2 + variance: 0.2 + step: 300 + offset: 0.5 + } +} +layer { + name: "mbox_loc" + type: "Concat" + bottom: "conv4_3_norm_mbox_loc_flat" + bottom: "fc7_mbox_loc_flat" + bottom: "conv6_2_mbox_loc_flat" + bottom: "conv7_2_mbox_loc_flat" + bottom: "conv8_2_mbox_loc_flat" + bottom: "conv9_2_mbox_loc_flat" + top: "mbox_loc" + concat_param { + axis: 1 + } +} +layer { + name: "mbox_conf" + type: "Concat" + bottom: "conv4_3_norm_mbox_conf_flat" + bottom: "fc7_mbox_conf_flat" + bottom: "conv6_2_mbox_conf_flat" + bottom: "conv7_2_mbox_conf_flat" + bottom: "conv8_2_mbox_conf_flat" + bottom: "conv9_2_mbox_conf_flat" + top: "mbox_conf" + concat_param { + axis: 1 + } +} +layer { + name: "mbox_priorbox" + type: "Concat" + bottom: "conv4_3_norm_mbox_priorbox" + bottom: "fc7_mbox_priorbox" + bottom: "conv6_2_mbox_priorbox" + bottom: "conv7_2_mbox_priorbox" + bottom: "conv8_2_mbox_priorbox" + bottom: "conv9_2_mbox_priorbox" + top: "mbox_priorbox" + concat_param { + axis: 2 + } +} + +layer { + name: "mbox_conf_reshape" + type: "Reshape" + bottom: "mbox_conf" + top: "mbox_conf_reshape" + reshape_param { + shape { + dim: 0 + dim: -1 + dim: 2 + } + } +} +layer { + name: "mbox_conf_softmax" + type: "Softmax" + bottom: "mbox_conf_reshape" + top: "mbox_conf_softmax" + softmax_param { + axis: 2 + } +} +layer { + name: "mbox_conf_flatten" + type: "Flatten" + bottom: "mbox_conf_softmax" + top: "mbox_conf_flatten" + flatten_param { + axis: 1 + } +} + +layer { + name: "detection_out" + type: "DetectionOutput" + bottom: "mbox_loc" + bottom: "mbox_conf_flatten" + bottom: "mbox_priorbox" + top: "detection_out" + include { + phase: TEST + } + detection_output_param { + num_classes: 2 + share_location: true + background_label_id: 0 + nms_param { + nms_threshold: 0.45 + top_k: 400 + } + code_type: CENTER_SIZE + keep_top_k: 200 + confidence_threshold: 0.01 + clip: 1 + } +} diff --git a/Real-Time-Face-Blurring-Tool/requirements.txt b/Real-Time-Face-Blurring-Tool/requirements.txt new file mode 100644 index 0000000..b9d476a Binary files /dev/null and b/Real-Time-Face-Blurring-Tool/requirements.txt differ