
πŸ•·οΈ Welcome to the wiki of Broken Links Crawler (BLC)


πŸ” Project Overview

BLC (Broken Links Crawler) is a Python-based command-line tool developed as part of academic work.

The tool is built to scan websites and detect broken or problematic hyperlinks, addressing practical needs while showcasing key concepts in modern software development, including multithreading, modular design, automation, and robust error handling.


⚡ What BLC Offers

Although developed in an academic setting, BLC is a fully functional and production-aware tool. It's designed to be performant, configurable, and extensible, suitable for developers, sysadmins, QA engineers, and anyone responsible for maintaining link integrity across digital content.


✅ Key Features

  • 🚀 High-performance, multi-threaded crawling
    Uses a producer-consumer pattern to scan sites in parallel efficiently.

  • 🛑 Detection of common link issues:

    • 404 Not Found
    • DNS resolution errors
    • HTTP-to-HTTPS mismatches
    • "False 200 OK" responses (e.g., custom error pages)
  • 🌐 External link validation
    Verifies that referenced external links are reachable, without recursing into them.

  • 🎛️ Flexible configuration:

    • Crawl depth control
    • Adjustable thread count
    • Output in JSON, HTML, or human-readable formats
  • 📬 Email-based reporting
    Automatically sends results based on customizable triggers:

    • Always
    • Only on error
    • Never
  • 🖥️ Cross-platform support

    • Built for Linux (Ubuntu) and Windows
    • Can be packaged into a standalone executable (blc, blc.exe)
  • 🔓 Open-source & automation-ready

    • Easily integrated into CI/CD pipelines, scheduled audits, or link monitoring tools
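The producer-consumer pattern behind the multi-threaded crawling can be sketched as below. This is a minimal, illustrative version, not BLC's actual implementation: the `crawl` function and the stubbed status checker are hypothetical names, and a real checker would issue HTTP requests instead of a dictionary lookup.

```python
import queue
import threading

def crawl(urls, check, num_workers=4):
    """Check each URL in parallel; return a {url: status} report."""
    tasks = queue.Queue()
    results = {}
    lock = threading.Lock()  # protects the shared results dict

    for url in urls:         # producer: enqueue work items
        tasks.put(url)

    def worker():            # consumer: drain the queue until empty
        while True:
            try:
                url = tasks.get_nowait()
            except queue.Empty:
                return
            status = check(url)
            with lock:
                results[url] = status
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Stubbed checker standing in for a real HTTP request (runs offline):
fake_statuses = {"https://ok.example": 200, "https://gone.example": 404}
report = crawl(fake_statuses, lambda u: fake_statuses[u], num_workers=2)
broken = [u for u, s in report.items() if s >= 400]
# broken -> ["https://gone.example"]
```

In this shape, the main thread produces work items up front and the workers consume them concurrently; a lock keeps the shared results dictionary consistent across threads.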

🎓 Key Concepts and Engineering Guidelines

This project demonstrates:

  • Clean and modular code structure
  • Effective use of concurrency and thread-safe data structures
  • Real-world exception handling and resilience
  • Compliance with web standards (robots.txt, SSL, email protocols)
  • Practical usage of third-party libraries (e.g., requests, certifi, PyInstaller)
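As one example of the web-standards compliance listed above, respecting robots.txt can be done entirely with the standard library. The sketch below is illustrative rather than BLC's actual code; it parses a robots.txt body directly, whereas a crawler would normally fetch it from the site root first.

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse a robots.txt body directly (normally fetched from the site root):
rp.parse("""
User-agent: *
Disallow: /private/
""".splitlines())

# A compliant crawler consults the parser before fetching each URL:
rp.can_fetch("BLC/1.0", "https://example.com/private/page.html")  # False
rp.can_fetch("BLC/1.0", "https://example.com/public.html")        # True
```

Checking `can_fetch` before every request keeps the crawler polite and avoids wasted fetches of pages the site has asked bots to skip.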

πŸ“ Get Started

Feel free to start with the 📄 Sample Outputs.

Visit the 🚀 Usage Instructions page to learn how to configure, run, and customize BLC.

Browse 📊 Initial Software Requirements and 📐 High-Level Design to learn about the project's origin and architecture.

Check out 🛠️ Implementation Notes for insights into the tools, technologies, and key implementation decisions.

Crawling the web isn't as straightforward as it might seem. See what challenges came up and how they were handled in 🔧 Crawler Fetch Failures & Workarounds, and how BLC deals with blocked access in 🚫 Sites That Restrict Automated Crawling.

A discussion of thread-count optimization can be found on the 🚀 Thread Count Optimization page.


πŸ§‘β€πŸ’» Contribute & Explore

Feel free to explore or extend the project further. You can find the full source code, issue tracker, and documentation in the GitHub repository.


Thank you for visiting, and here's to chasing broken links and finishing what we started. 🎓✨