171 changes: 170 additions & 1 deletion alchemy-web/src/content/docs/providers/cloudflare/bucket.md
@@ -5,6 +5,10 @@ description: Learn how to create, configure, and manage Cloudflare R2 Buckets us

Creates and manages [Cloudflare R2 Buckets](https://developers.cloudflare.com/r2/buckets/) for object storage with S3 compatibility.

:::info Credentials
Alchemy can provision R2 buckets using either an API token or an API key + email. Multipart uploads additionally require S3-compatible credentials: an **R2 Access Key ID** and **R2 Secret Access Key** (plus an optional session token). Provide them via the `accessKeyId` / `secretAccessKey` props or set the environment variables `R2_ACCESS_KEY_ID`, `R2_SECRET_ACCESS_KEY`, and `R2_SESSION_TOKEN`. Without these credentials, the multipart helpers and dashboard cleanup cannot run.
:::
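
As an example, you can pass the S3-compatible credentials explicitly instead of relying on the environment variables. This is a minimal sketch, assuming the values are wrapped as secrets with `alchemy.secret`:

```ts
import alchemy from "alchemy";
import { R2Bucket } from "alchemy/cloudflare";

const bucket = await R2Bucket("with-s3-credentials", {
  name: "my-bucket",
  // Explicit S3-compatible credentials instead of the R2_* environment variables.
  accessKeyId: alchemy.secret(process.env.R2_ACCESS_KEY_ID),
  secretAccessKey: alchemy.secret(process.env.R2_SECRET_ACCESS_KEY),
  // sessionToken: alchemy.secret(process.env.R2_SESSION_TOKEN), // only needed for temporary credentials
});
```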

## Minimal Example

Create a basic R2 bucket with default settings:
@@ -96,9 +100,12 @@ import { R2Bucket } from "alchemy/cloudflare";
const tempBucket = await R2Bucket("temp-storage", {
  name: "temp-storage",
  empty: true, // All objects will be deleted when this resource is destroyed
  adopt: true, // optional: adopt an existing bucket with the same name
});
```

When `empty` is enabled and S3 credentials are available, Alchemy now clears objects **and** aborts any outstanding multipart uploads before deletion. This prevents lingering "ongoing uploads" in the Cloudflare dashboard.

## With Lifecycle Rules

Configure automatic transitions like aborting multipart uploads, deleting objects after an age or date, or moving objects to Infrequent Access.
@@ -254,4 +261,166 @@ List objects in the bucket.
```ts
const list = await bucket.list();
console.log(list.objects.length);
```

## Multipart Uploads

Use multipart uploads for files larger than 5 MiB. Alchemy automatically retries transient Cloudflare consistency errors (for example, `NoSuchBucket`, `NoSuchUpload`) and network hiccups (`EPIPE`, `ECONNRESET`, `ETIMEDOUT`). Ensure each part except the final one is ≥ 5 MiB; smaller parts will be rejected by R2.

### Basic Example

```ts
import { Buffer } from "node:buffer";
import { R2Bucket } from "alchemy/cloudflare";

const bucket = await R2Bucket("multipart-basic", {
name: "my-large-files",
adopt: true,
empty: true,
});

const partSize = 5 * 1024 * 1024; // 5 MiB
const makePart = (fill: string) => Buffer.alloc(partSize, fill);

const upload = await bucket.createMultipartUpload("large-file.bin", {
  httpMetadata: { contentType: "application/octet-stream" },
  customMetadata: { uploadedBy: "alchemy" },
});

const part1 = await upload.uploadPart(1, makePart("A"));
const part2 = await upload.uploadPart(2, makePart("B"));
const part3 = await upload.uploadPart(3, Buffer.from("tail"));

const object = await upload.complete([part1, part2, part3]);
console.log(object.size);
```

### Streaming From Workers (API Upload Pattern)

Clients typically upload from the browser to a Worker or Hono route; the handler can then stream the request body into R2 without touching the file system. This pattern avoids buffering multi-gigabyte files in memory and works for single or multiple concurrent uploads.

```ts
// worker.ts
// `BUCKET` is an R2 bucket binding. Provision the bucket with Alchemy (for
// example, `await R2Bucket("uploads", { name: "my-uploads", adopt: true })`)
// and bind it to this Worker as `BUCKET`.
import type {
  R2Bucket as R2BucketBinding,
  R2UploadedPart,
} from "@cloudflare/workers-types/experimental";

interface Env {
  BUCKET: R2BucketBinding;
}

export default {
  async fetch(request: Request, env: Env) {
    if (request.method !== "POST") {
      return new Response("Method Not Allowed", { status: 405 });
    }

    const fileName = request.headers.get("x-file-name") ?? crypto.randomUUID();
    const contentType = request.headers.get("content-type") ?? "application/octet-stream";
    const stream = request.body;
    if (!stream) {
      return new Response("Readable stream required", { status: 400 });
    }

    const partSize = 8 * 1024 * 1024; // 8 MiB chunks; adjust to match your UI chunk size.
    const upload = await env.BUCKET.createMultipartUpload(fileName, {
      httpMetadata: { contentType },
    });

    const reader = stream.getReader();
    let buffer = new Uint8Array(0);
    let partNumber = 1;
    const inflight: Promise<R2UploadedPart>[] = [];

    // Queue a part upload; the chunk is a Uint8Array, so its length is already known.
    const flush = (chunk: Uint8Array) => {
      inflight.push(upload.uploadPart(partNumber++, chunk));
    };

    while (true) {
      const { value, done } = await reader.read();
      if (done) {
        if (buffer.length) flush(buffer);
        break;
      }

      const combined = new Uint8Array(buffer.length + value.length);
      combined.set(buffer);
      combined.set(value, buffer.length);

      let offset = 0;
      while (offset + partSize <= combined.length) {
        flush(combined.slice(offset, offset + partSize));
        offset += partSize;
      }
      buffer = combined.slice(offset);
    }

    const uploadedParts = await Promise.all(inflight);
    const result = await upload.complete(uploadedParts);

    return new Response(JSON.stringify({ key: result.key, size: result.size }), {
      status: 201,
      headers: { "content-type": "application/json" },
    });
  },
};
```

**Browser call:**

```ts
async function uploadFile(file: File) {
  const response = await fetch("/api/upload", {
    method: "POST",
    headers: {
      "x-file-name": file.name,
      "content-type": file.type,
    },
    // Pass the File directly: the browser streams it from disk. Using
    // `file.stream()` as a request body requires `duplex: "half"` and is not
    // supported in all browsers.
    body: file,
  });

  if (!response.ok) {
    throw new Error(`Upload failed: ${await response.text()}`);
  }
  return await response.json();
}
```

- **Large files:** the worker flushes parts whenever the buffer reaches `partSize`, keeping memory usage bounded.
- **Multiple files:** create one upload per file and call `uploadFile` with `Promise.all(files.map(uploadFile))` to run them in parallel.
- **Chunked browser uploads:** if the client already splits into parts, post each chunk with `x-part-number` headers and call `upload.uploadPart` directly — still avoid the file system.
- **Error handling:** wrap the part uploads and `upload.complete()` in a try/catch and call `await upload.abort()` on failure so you don't leave dangling uploads (see the sketch after this list).
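
For concreteness, here is a minimal sketch of that abort-on-failure pattern. `completeOrAbort` is a hypothetical helper (not part of Alchemy or the Workers API); you could call it in place of the `Promise.all` / `upload.complete()` lines in the Worker above.

```ts
import type {
  R2MultipartUpload,
  R2UploadedPart,
} from "@cloudflare/workers-types/experimental";

// Wait for every queued part, then complete the upload; on any failure, abort
// the upload so no partially-uploaded parts linger in the bucket.
async function completeOrAbort(
  upload: R2MultipartUpload,
  inflight: Promise<R2UploadedPart>[],
) {
  try {
    const parts = await Promise.all(inflight);
    return await upload.complete(parts);
  } catch (error) {
    await upload.abort(); // deletes any parts that were already uploaded
    throw error;
  }
}
```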

### Resume an Upload

Persist the `uploadId` to resume later:

```ts
// `makePart` is the 5 MiB part helper from the basic example above.
const upload = await bucket.createMultipartUpload("resumable.bin");
const uploadId = upload.uploadId;
const part1 = await upload.uploadPart(1, makePart("R"));

// later…
const resumed = bucket.resumeMultipartUpload("resumable.bin", uploadId);
const part2 = await resumed.uploadPart(2, Buffer.from("final chunk"));
await resumed.complete([part1, part2]);
```

### Abort an Upload

```ts
const upload = await bucket.createMultipartUpload("temporary.bin");
await upload.uploadPart(1, makePart("X"));
await upload.abort(); // deletes uploaded parts
```

If you adopt an existing bucket with `adopt: true` and enable `empty: true`, Alchemy aborts any leftover multipart uploads during destroy, keeping the dashboard clean.
20 changes: 19 additions & 1 deletion alchemy/src/cloudflare/api.ts
@@ -1,4 +1,4 @@
import { Provider, type Credentials } from "../auth.ts";
import { type Credentials, Provider } from "../auth.ts";
import { Scope } from "../scope.ts";
import type { Secret } from "../secret.ts";
import { isBinary } from "../serde.ts";
@@ -40,6 +40,24 @@ export interface CloudflareApiOptions {
*/
apiToken?: Secret;

/**
* Access Key ID to use for S3-compatible R2 operations
* @see https://developers.cloudflare.com/r2/api/tokens
*/
accessKeyId?: Secret;

/**
* Secret Access Key to use for S3-compatible R2 operations
* @see https://developers.cloudflare.com/r2/api/tokens
*/
secretAccessKey?: Secret;

/**
* Session Token to use for S3-compatible R2 operations using temporary account credentials
* @see https://developers.cloudflare.com/r2/api/tokens
*/
sessionToken?: Secret;

/**
* Account ID to use (overrides CLOUDFLARE_ACCOUNT_ID env var)
* If not provided, will be automatically retrieved from the Cloudflare API