A utility for scraping and displaying search results from various sources.
The project consists of two main components:

- **Scraper**: a Node.js utility for scraping search results from different sources. It uses Puppeteer for web scraping and can be configured to extract specific data points from search results.
- **Web interface**: a Next.js web application that displays the scraped search results in a modern, responsive interface. Features include:
- Clean, modern UI with dark mode support
- Responsive design for all screen sizes
- Image thumbnails for search results
- Clickable links to original sources
- Search query highlighting
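The search query highlighting above can be implemented as a small pure function. This is a minimal sketch, assuming the UI renders HTML and wraps matches in `<mark>` tags; the function name and markup are illustrative, not taken from the project code:

```typescript
// Wrap every case-insensitive occurrence of the query in <mark> tags.
// Hypothetical helper -- the project's actual highlighting may differ.
function highlightQuery(text: string, query: string): string {
  if (!query) return text;
  // Escape regex metacharacters in the user-supplied query.
  const escaped = query.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return text.replace(new RegExp(escaped, "gi"), (match) => `<mark>${match}</mark>`);
}
```

Escaping the query before building the regex matters: a raw query like `c++` would otherwise throw or match unintended text.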
- Web Scraping: Automated data collection from search engines
- Data Processing: Structured data extraction and formatting
- Modern UI: Responsive design with dark mode support
- Performance: Optimized for fast loading and smooth interactions
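The "structured data extraction and formatting" step implies a result shape shared by the scraper and the web interface. A minimal TypeScript sketch of what that might look like; the interface and field names here are assumptions for illustration, not the project's actual schema:

```typescript
// Illustrative result shape -- the real fields live in the project's own types.
interface SearchResult {
  title: string;
  url: string;
  snippet: string;
  thumbnail?: string; // optional image thumbnail shown in the UI
  source: string;     // which search engine the result came from
}

// Normalize a raw scraped entry into the structured shape,
// trimming whitespace and dropping entries without a usable link.
function formatResult(raw: Record<string, string | undefined>): SearchResult | null {
  if (!raw.url) return null;
  return {
    title: (raw.title ?? "").trim(),
    url: raw.url,
    snippet: (raw.snippet ?? "").trim(),
    thumbnail: raw.thumbnail,
    source: raw.source ?? "unknown",
  };
}
```

Returning `null` for unusable entries keeps the scraper's output clean, so the UI never has to render a result without a clickable link.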
- Clone the repository
- Install dependencies in both directories:

  ```bash
  # Install scraper dependencies
  cd scraper
  npm install

  # Install web interface dependencies
  cd ../next-web
  npm install
  ```
- Configure environment variables:
  - Create `.env` files in both directories as needed
  - See `.gitignore` for reference on environment file handling
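For reference, a scraper `.env` file might look like the following. The variable names are purely hypothetical examples; use whatever variables your own setup actually reads:

```
# scraper/.env -- example only; these names are not defined by the project
SEARCH_ENGINE_URL=https://www.bing.com/search
HEADLESS=true
MAX_RESULTS=20
```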
- Run the development servers:

  ```bash
  # Run scraper
  cd scraper
  npm run dev

  # Run web interface
  cd ../next-web
  npm run dev
  ```
- The scraper uses Puppeteer for web scraping
- The web interface is built with Next.js and TypeScript
- Styling is done with Tailwind CSS
- Both components use modern ES6+ features
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a new Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.