Commit 8806cb7

Clarify '/crawl' endpoint functionality in changelog

Updated the description of the new '/crawl' endpoint for clarity and added a note on how it works.

1 parent: 2229102

File tree

1 file changed: +5 −4 lines

src/content/changelog/browser-rendering/2026-03-10-br-crawl-endpoint.mdx

Lines changed: 5 additions & 4 deletions
````diff
@@ -6,9 +6,9 @@ products:
 date: 2026-03-10
 ---
 
-You can now crawl an entire website with a single API call using [Browser Rendering](/browser-rendering/)'s new [`/crawl` endpoint](/browser-rendering/rest-api/crawl-endpoint/), now available in open beta. Submit a starting URL and the endpoint automatically discovers linked pages, renders them in a headless browser, and returns content as HTML, Markdown, or structured JSON.
+You can now crawl an entire website with a single API call using [Browser Rendering](/browser-rendering/)'s new [`/crawl` endpoint](/browser-rendering/rest-api/crawl-endpoint/), available in open beta. Submit a starting URL and the endpoint automatically discovers linked pages, renders them in a headless browser, and returns content as HTML, Markdown, or structured JSON.
 
-Crawl jobs run asynchronously. You submit a URL, receive a job ID, and check back for results as pages are processed:
+Crawl jobs run asynchronously. You submit a URL, receive a job ID, and check back for results as pages are processed. Here is how it works:
 
 ```sh
 # Initiate a crawl
````
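The submit-then-poll lifecycle the changelog describes can be sketched in shell. This is only an illustration: the account-scoped base path mirrors other Browser Rendering REST endpoints, but the `/crawl` paths, the response fields, and the `job-123` ID are assumptions, not the documented API. Refer to the crawl endpoint documentation for the real request shape.

```sh
# Sketch of the asynchronous submit/poll pattern. The /crawl paths and
# the job ID below are assumptions for illustration, not from the docs.
ACCOUNT_ID="abc123"
BASE="https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/browser-rendering"

# 1. Initiate a crawl (commented out: needs a real account and API token):
#    curl -X POST "$BASE/crawl" \
#      -H "Authorization: Bearer $API_TOKEN" \
#      -d '{"url": "https://example.com"}'
# Assume the response carries a job ID such as:
JOB_ID="job-123"

# 2. Check back for results by polling a job-scoped URL (assumed path):
POLL_URL="$BASE/crawl/$JOB_ID"
echo "$POLL_URL"
```

The key point is the two-step shape: the initial POST returns immediately with an ID, and results are fetched later as pages finish processing.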
````diff
@@ -31,8 +31,9 @@ Key features:
 - **Automatic page discovery** - Discovers URLs from sitemaps, page links, or both
 - **Incremental crawling** - Use `modifiedSince` and `maxAge` to skip pages that haven't changed or were recently fetched, saving time and cost on repeated crawls
 - **Static mode** - Set `render: false` to fetch static HTML without spinning up a browser, for faster crawling of static sites
-- **Well-behaved bot** - Honors `robots.txt` directives including `crawl-delay`
+- **Well-behaved bot** - Honors `robots.txt` directives, including `crawl-delay`
 
 Available on both the Workers Free and Paid plans.
 
-To get started, refer to the [crawl endpoint documentation](/browser-rendering/rest-api/crawl-endpoint/). If you are setting up your own site to be crawled, review the [robots.txt and sitemaps best practices](/browser-rendering/reference/robots-txt/).
+To get started, refer to the [crawl endpoint documentation](/browser-rendering/rest-api/crawl-endpoint/).
+If you are setting up your own site to be crawled, review the [robots.txt and sitemaps best practices](/browser-rendering/reference/robots-txt/).
````
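The options named in the feature list above could plausibly combine into a single request body along these lines. Only the `modifiedSince`, `maxAge`, and `render` option names come from the changelog; the surrounding JSON shape, field placement, and the assumption that `maxAge` is in seconds are guesses for illustration.

```sh
# Hypothetical crawl request body. Only the modifiedSince, maxAge, and
# render option names appear in the changelog; the JSON shape and the
# maxAge unit (seconds) are assumptions.
BODY=$(cat <<'EOF'
{
  "url": "https://example.com",
  "render": false,
  "modifiedSince": "2026-03-01T00:00:00Z",
  "maxAge": 86400
}
EOF
)
echo "$BODY"
```

Check the crawl endpoint documentation for the authoritative parameter names and accepted values before sending a request like this.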
