[Article] The Token Cost of Beautiful AI: OpenUI Lang vs AI SDK vs JSON #12
manja316 wants to merge 1 commit into thesysdev:main
Conversation
Independent benchmark analysis comparing token costs across three generative UI approaches using OpenUI's own benchmark suite. Includes 7 scenarios, cost projections at scale, and balanced tradeoff analysis. Closes thesysdev#4 Co-Authored-By: Paperclip <noreply@paperclip.ing>
EntelligenceAI PR Summary

Introduces a new technical article analyzing the token cost and tradeoffs of three generative UI rendering formats across seven UI scenarios.

Confidence Score: 5/5 - Safe to Merge

Safe to merge — this PR introduces a new technical article benchmarking token costs across OpenUI Lang, Vercel json-render/RFC 6902 patches, and Thesys C1 JSON, and no issues were identified during automated review. The content is purely additive (a new article file) with no runtime logic, security surfaces, or functional code changes that could introduce regressions. The analysis appears well scoped, covering both quantitative token counts and qualitative tradeoffs across seven UI scenarios.

Key Findings:
Walkthrough

Adds a new technical article benchmarking token consumption and cost across three generative UI formats — OpenUI Lang, Vercel json-render/RFC 6902 patches, and Thesys C1 JSON — over seven UI scenarios. The article presents quantitative token counts, cost projections at scale, and qualitative tradeoff analysis covering streaming behavior, error recovery, ecosystem maturity, and DSL learning curve.

Changes
Sequence Diagram

This diagram shows the interactions between components:

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant GenCLI as "pnpm generate"
    participant BenchCLI as "pnpm bench"
    participant OpenAI as OpenAI API
    participant AST as AST Parser
    participant Conv as Format Converter
    participant Tiktoken as Tiktoken Counter
    Dev->>GenCLI: run with OPENAI_API_KEY
    loop for each of 7 UI scenarios
        GenCLI->>OpenAI: prompt: generate UI (temp=0)
        OpenAI-->>GenCLI: OpenUI Lang response
        GenCLI->>AST: parse OpenUI Lang into AST
        AST-->>GenCLI: structured AST
        GenCLI->>Conv: convert AST to json-render (RFC 6902 patches)
        Conv-->>GenCLI: json-render sample
        GenCLI->>Conv: convert AST to C1 JSON (nested tree)
        Conv-->>GenCLI: C1 JSON sample
        GenCLI->>Conv: convert AST to YAML
        Conv-->>GenCLI: YAML sample
        GenCLI->>GenCLI: save all 4 format samples to disk
    end
    Dev->>BenchCLI: run token benchmark
    loop for each scenario x each format
        BenchCLI->>Tiktoken: count tokens (gpt-5 encoder)
        Tiktoken-->>BenchCLI: token count
    end
    BenchCLI->>BenchCLI: compute % savings vs json-render and C1 JSON
    BenchCLI-->>Dev: print comparison table
    Note over Dev, Tiktoken: OpenUI Lang saves ~52% tokens vs JSON formats on average
    Note over Conv, Tiktoken: All formats represent identical UI (same AST source)
```
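The `pnpm bench` step above reduces to simple arithmetic once the token counts are in hand. A minimal Python sketch of that savings computation — the real benchmark counts tokens with tiktoken's gpt-5 encoder, whereas the counts below are hypothetical placeholders just to show the formula:

```python
# Illustrative sketch of the benchmark's savings computation.
# Token counts here are hypothetical, not the article's real numbers.

def percent_savings(candidate_tokens: int, baseline_tokens: int) -> float:
    """How many fewer tokens the candidate uses, as a % of the baseline."""
    return round(100 * (baseline_tokens - candidate_tokens) / baseline_tokens, 1)

# Hypothetical per-scenario counts (placeholder values):
counts = {"openui-lang": 480, "json-render": 1000, "c1-json": 1000}

print(percent_savings(counts["openui-lang"], counts["json-render"]))  # 52.0
print(percent_savings(counts["openui-lang"], counts["c1-json"]))      # 52.0
```

With these placeholder counts the savings come out to the ~52% figure the diagram notes, which is why the formula is worth seeing explicitly.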
LGTM 👍 No issues found.
> One thing the benchmark README is transparent about: the model generates OpenUI Lang only, then the AST is mechanically converted to the other formats. This is fair for measuring _representation efficiency_ — same information, fewer bytes.
> But it sidesteps a real question: would a model prompted to generate JSON directly produce as many tokens as the mechanical conversion? LLMs can be terse or verbose depending on their system prompt. A well-tuned JSON generation prompt might produce leaner output than a literal AST-to-JSON mapping.
How can the response be terse/verbose for the same request? The reason we chose to use AST is to ensure the same components are used in all 3 formats.
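The maintainer's fairness argument — one AST, mechanically rendered into every format, so all formats contain exactly the same components — can be sketched with a toy model. The node shape and DSL-ish syntax below are illustrative only, not OpenUI's real AST or API:

```python
import json

# Toy model of "same AST, different representations": both renderers walk
# the identical tree, so the component set is guaranteed to match and only
# the encoding overhead differs.

ast = {"type": "Form", "children": [
    {"type": "Input", "props": {"label": "Name"}},
    {"type": "Button", "props": {"label": "Submit"}},
]}

def to_json_tree(node: dict) -> str:
    return json.dumps(node)  # verbose: every key name is spelled out

def to_dsl(node: dict) -> str:
    args = [f'"{v}"' for v in node.get("props", {}).values()]
    args += [to_dsl(c) for c in node.get("children", [])]
    return f'{node["type"]}({", ".join(args)})'  # terse: positional args

print(to_dsl(ast))  # Form(Input("Name"), Button("Submit"))
print(len(to_dsl(ast)) < len(to_json_tree(ast)))  # True
```

Because both outputs derive from the same tree, any size difference is pure encoding overhead — which is exactly what a representation-efficiency benchmark wants to isolate.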
> ### Learning curve
>
> OpenUI Lang is a new DSL. JSON is universal. Every developer on your team already reads JSON. OpenUI Lang requires learning positional argument semantics (`FormControl("Label", Input(...))` — which arg is which?), reference-based composition (`nameField = ...` then use `nameField`), and a schema that maps positions to names.
> If your team is two people building fast, learning a DSL is 30 minutes. If you're integrating into a 50-person org with existing JSON tooling, it's a migration project.
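The "schema that maps positions to names" idea from the quoted section is mechanical once written down. A minimal sketch — the component names and prop names below are hypothetical, not OpenUI's actual schema:

```python
# Hypothetical per-component schema: the Nth positional argument of a DSL
# call gets the Nth prop name. This is what lets a terse positional call
# round-trip into named-key JSON.
SCHEMA = {
    "FormControl": ["label", "control"],
    "Input": ["placeholder"],
}

def name_args(component: str, args: list) -> dict:
    """Zip positional DSL arguments with their schema-defined prop names."""
    return dict(zip(SCHEMA[component], args))

props = name_args("FormControl", ["Name", {"type": "Input"}])
print(props)  # {'label': 'Name', 'control': {'type': 'Input'}}
```

The learning-curve cost the article describes is essentially the cost of internalizing that table: readers of the DSL must know the schema to answer "which arg is which?", whereas JSON spells the answer out in every payload.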
OpenUI lang is not for developers
> For production systems, the partial-failure behavior of line-oriented and patch-oriented formats is materially better than monolithic JSON.
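The partial-failure claim is easy to demonstrate: RFC 6902 patches are applied one at a time, so a stream cut midway still leaves a valid partial UI, whereas a truncated monolithic JSON document fails to parse at all. A minimal sketch supporting only the `add` op (enough for the illustration; a real implementation would use a full JSON Patch library):

```python
# Apply a single RFC 6902-style "add" op to a document in place.
# Handles object keys and list append via the "-" path segment only.
def apply_add(doc: dict, path: str, value) -> None:
    parts = path.strip("/").split("/")
    target = doc
    for key in parts[:-1]:
        target = target[key]
    last = parts[-1]
    if isinstance(target, list):
        if last == "-":
            target.append(value)   # RFC 6902: "-" appends to an array
        else:
            target.insert(int(last), value)
    else:
        target[last] = value

ui = {"children": []}
patches = [
    {"op": "add", "path": "/children/-", "value": {"type": "Input"}},
    {"op": "add", "path": "/children/-", "value": {"type": "Button"}},
]
for p in patches[:1]:  # pretend the stream was cut after one patch
    apply_add(ui, p["path"], p["value"])
print(ui)  # {'children': [{'type': 'Input'}]} — partial but renderable
```

Each applied patch leaves the document in a renderable state; with a single nested JSON blob, nothing is usable until the final closing brace arrives.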
> ### Ecosystem and tooling
>
> json-render (Vercel) has 13K+ stars and renderers for React, Vue, Svelte, Solid, and React Native. OpenUI currently supports React. If you're building for multiple frameworks, json-render's ecosystem is broader today.
Both projects are framework agnostic. OpenUI has first party support for React but community can bring in their own frameworks if needed.
> ## When Each Approach Wins
please rewrite this section