Jiaqi Chen*, Yanzhe Zhang*, Yutong Zhang, Yijia Shao, Diyi Yang
Stanford University
*Equal contribution
Homepage • Paper • Dataset • Data Viewer
We investigate Generative Interfaces for Language Models, a paradigm where LLMs respond to user queries by proactively generating user interfaces (UIs), enabling more adaptive and interactive engagement that better supports complex user goals.
- Requirement specification [system prompt], [Code]: First, we parse the input into a requirement specification, capturing the main goal, desired features, UI components, interaction styles, and problem-solving strategies.
- Structured representation generation [system prompt], [Code]: Second, we generate a Structured Interface-Specific Representation based on the requirement specification (a sketch of both intermediate artifacts follows this list).
- UI generation [system prompt], [Code]: To support faithful realization of the structured specification, we utilize a component codebase containing reusable implementations of common UI elements (e.g., charts, videos, synchronized clocks). In addition, a web retrieval module gathers relevant UI examples and data sources to inform both the representation design and the final rendering. Finally, the entire context, including the natural language query, requirement specification, structured representation, 7 predefined components, and retrieved examples, is passed to a code generation model, which synthesizes executable HTML/CSS/JS code. This completes the pipeline from query to a fully rendered, high-quality interactive interface.
- Adaptive reward function [system prompt], [Code]: We use a large language model to automatically generate evaluation criteria based on each user query, such as “clarity” or “concept explanation,” assigning weights and verification rules to compute an overall score.
- Iterative refinement [system prompt], [Code]: We first generate several UI candidates and score them using the reward function. The best one is selected and then used to guide the next round of generation. This process repeats with feedback until a candidate meets the quality threshold (a sketch of this loop follows this list).
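For intuition, the requirement specification and structured interface representation above might look roughly like the TypeScript sketch below. The field names are illustrative assumptions for exposition, not the actual schema used in apps/agents.

```typescript
// Illustrative shapes only; the real schema lives in apps/agents.
interface RequirementSpec {
  goal: string;               // main user goal parsed from the query
  features: string[];         // desired features
  uiComponents: string[];     // requested UI components (chart, video, clock, ...)
  interactionStyle: string;   // e.g. "exploratory", "step-by-step"
  strategy: string;           // problem-solving strategy
}

interface InterfaceRepresentation {
  layout: "single-page" | "tabbed" | "dashboard";
  sections: Array<{
    title: string;
    component: string;                // name of a reusable component from the codebase
    props: Record<string, unknown>;   // component-specific configuration
  }>;
}

// Example: a query like "help me understand compound interest"
const exampleSpec: RequirementSpec = {
  goal: "Explain compound interest interactively",
  features: ["adjustable inputs", "live chart"],
  uiComponents: ["slider", "chart"],
  interactionStyle: "exploratory",
  strategy: "visualize growth over time",
};
```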
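The adaptive reward and iterative refinement steps amount to a reward-guided best-of-N loop. The sketch below is a simplification under assumed interfaces: Llm and its three methods are hypothetical stand-ins for the LLM calls wired up in apps/agents, and the threshold and round limit are arbitrary placeholders rather than the values used in the paper.

```typescript
// Simplified reward-guided refinement loop; all LLM calls sit behind a hypothetical interface.
interface Criterion {
  name: string;    // e.g. "clarity", "concept explanation"
  weight: number;
  rule: string;    // verification rule the judge applies
}

interface Llm {
  generateCriteria(query: string): Promise<Criterion[]>;                  // adaptive reward: query-specific criteria
  generateCandidates(query: string, guide?: string): Promise<string[]>;   // candidate HTML/CSS/JS interfaces
  score(candidate: string, criterion: Criterion): Promise<number>;        // 0..1 per criterion
}

async function refine(llm: Llm, query: string, threshold = 0.85, maxRounds = 3): Promise<string> {
  const criteria = await llm.generateCriteria(query);
  const totalWeight = criteria.reduce((sum, c) => sum + c.weight, 0);
  let best = "";
  let bestScore = -Infinity;

  for (let round = 0; round < maxRounds; round++) {
    // Each round is conditioned on the best candidate found so far.
    const candidates = await llm.generateCandidates(query, best || undefined);
    for (const candidate of candidates) {
      const scores = await Promise.all(criteria.map((c) => llm.score(candidate, c)));
      const score = scores.reduce((sum, s, i) => sum + s * criteria[i].weight, 0) / totalWeight;
      if (score > bestScore) {
        bestScore = score;
        best = candidate;
      }
    }
    if (bestScore >= threshold) break; // stop once a candidate clears the quality bar
  }
  return best;
}
```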
Put the API keys for the following services in the .env file:
- Supabase account for authentication
- LangGraph CLI for running the graph locally
- LangSmith for tracing & observability
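The authoritative list of variables lives in ./apps/web/.env.example (copied in the next step). As a rough illustration only, the non-Supabase entries look something like the following; the variable names shown are the standard ones for these services and may differ from what the example file actually uses:

```bash
# Illustrative only; use ./apps/web/.env.example as the source of truth.
LANGSMITH_API_KEY="lsv2_..."      # LangSmith tracing & observability
ANTHROPIC_API_KEY="sk-ant-..."    # model provider key (Claude is recommended below)
```

The Supabase values are covered in the Supabase setup steps further down.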
First, clone the repository:
git clone git@github.com:SALT-NLP/GenUI.git
cd GenUI
Next, install the dependencies:
yarn install
After installing dependencies, set the required values (API keys, authentication information) in ./apps/web/.env.example. Then copy it to .env in the root folder of the project and in apps/web:
# The root `.env` file will be read by the LangGraph server for the agents.
cp ./apps/web/.env.example ./.env
cp ./apps/web/.env.example ./apps/web/.env
After creating a Supabase account, visit your dashboard and create a new project.
Next, navigate to the Project Settings page inside your project, and then to the API tab. Copy the Project URL and anon public project API key. Paste them into the NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_ANON_KEY environment variables in the apps/web/.env file.
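Concretely, the relevant lines in apps/web/.env will look like the following (placeholder values shown; copy the real ones from your Supabase dashboard):

```bash
# Supabase values from the project's API settings (placeholders shown)
NEXT_PUBLIC_SUPABASE_URL="https://<your-project-ref>.supabase.co"
NEXT_PUBLIC_SUPABASE_ANON_KEY="<your-anon-public-key>"
```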
After this, navigate to the Authentication page and the Providers tab. Make sure Email is enabled (also ensure you've enabled Confirm Email). You may also enable GitHub and/or Google if you'd like to use those for authentication (see these pages for documentation on how to set up each provider: GitHub, Google).
To verify authentication works, run yarn dev and visit localhost:3000. This should redirect you to the login page. From here, you can either log in with Google or GitHub, or, if you haven't configured these providers, navigate to the signup page and create a new account with an email and password. This should then redirect you to a confirmation page, and after confirming your email, you should be redirected to the home page.
The first step to running Generating UI locally is to build the application. This is because Generating UI uses a monorepo setup and requires workspace dependencies to be built so other packages/apps can access them.
- Run the following command from the root of the repository:
yarn build
- Navigate to apps/agents and run yarn dev (this runs npx @langchain/langgraph-cli dev --port 54367).
You will see something like:
Ready!
- 🚀 API: http://localhost:54367
- 🎨 Studio UI: https://smith.langchain.com/studio?baseUrl=http://localhost:54367
- After your LangGraph server is running, execute the following command inside apps/web to start the Generating UI frontend:
yarn dev
On initial load, compilation may take time.
- Open localhost:3000 with your browser and start trying generative interfaces.
- Using Claude is recommended. Turn on web search to enable fetching relevant web pages.
- Generation can take multiple minutes due to iterative generation.
- You can track the intermediate steps in the terminal where you run yarn dev in apps/agents.
For problems related to pdf-parse, you might refer to the solution here.
If you find this work useful for your research, please cite our paper:
@misc{chen2025generative,
title={Generative Interfaces for Language Models},
author={Jiaqi Chen and Yanzhe Zhang and Yutong Zhang and Yijia Shao and Diyi Yang},
year={2025},
eprint={2508.19227},
archivePrefix={arXiv},
primaryClass={cs.CL}
}