
Commit 495326a

Merge pull request #39600 from github/repo-sync

Repo sync

2 parents: 7d6fcd7 + 51e0fe1

9 files changed (+331 lines, -2 lines)

Dockerfile

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@
 # ---------------------------------------------------------------
 # To update the sha:
 # https://github.com/github/gh-base-image/pkgs/container/gh-base-image%2Fgh-base-noble
-FROM ghcr.io/github/gh-base-image/gh-base-noble:20250725-133358-gd7fe7b016 AS base
+FROM ghcr.io/github/gh-base-image/gh-base-noble:20250730-174526-g48ad667e7 AS base
 
 # Install curl for Node install and determining the early access branch
 # Install git for cloning docs-early-access & translations repos

content/code-security/code-scanning/managing-code-scanning-alerts/responsible-use-autofix-code-scanning.md

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ redirect_from:
 
 {% data reusables.rai.code-scanning.copilot-autofix-note %}
 
-{% data variables.copilot.copilot_autofix_short %} generates potential fixes that are relevant to the existing source code and translates the description and location of an alert into code changes that may fix the alert. {% data variables.copilot.copilot_autofix_short %} uses internal {% data variables.product.prodname_copilot %} APIs interfacing with the large language model {% data variables.copilot.copilot_gpt_4o %} from OpenAI, which has sufficient generative capabilities to produce both suggested fixes in code and explanatory text for those fixes.
+{% data variables.copilot.copilot_autofix_short %} generates potential fixes that are relevant to the existing source code and translates the description and location of an alert into code changes that may fix the alert. {% data variables.copilot.copilot_autofix_short %} uses internal {% data variables.product.prodname_copilot %} APIs interfacing with the large language model {% data variables.copilot.copilot_gpt_41 %} from OpenAI, which has sufficient generative capabilities to produce both suggested fixes in code and explanatory text for those fixes.
 
 {% data variables.copilot.copilot_autofix_short %} is allowed by default and enabled for every repository using {% data variables.product.prodname_codeql %}, but you can choose to opt out and disable {% data variables.copilot.copilot_autofix_short %}. To learn how to disable {% data variables.copilot.copilot_autofix_short %} at the enterprise, organization and repository levels, see [AUTOTITLE](/code-security/code-scanning/managing-code-scanning-alerts/disabling-autofix-for-code-scanning).
 

content/github-models/index.md

Lines changed: 4 additions & 0 deletions

@@ -5,8 +5,12 @@ versions:
   fpt: '*'
   ghes: '*'
   ghec: '*'
+introLinks:
+  overview: /github-models/about-github-models
+  quickstart: /github-models/quickstart
 children:
   - /about-github-models
+  - /quickstart
   - /use-github-models
   - /github-models-at-scale
   - /responsible-use-of-github-models

content/github-models/quickstart.md

Lines changed: 199 additions & 0 deletions (new file)

@@ -0,0 +1,199 @@
---
title: Quickstart for GitHub Models
intro: 'Run your first model with {% data variables.product.prodname_github_models %} in minutes.'
allowTitleToDifferFromFilename: true
redirect_from:
  - /models/quickstart
versions:
  fpt: '*'
  ghec: '*'
type: quick_start
topics:
  - GitHub Models
shortTitle: Quickstart
---

## Introduction

{% data variables.product.prodname_github_models %} is an AI inference API from {% data variables.product.prodname_dotcom %} that lets you run AI models using just your {% data variables.product.prodname_dotcom %} credentials. You can choose from many different models—including from OpenAI, Meta, and DeepSeek—and use them in scripts, apps, or even {% data variables.product.prodname_actions %}, with no separate authentication process.

This guide helps you try out models quickly in the playground, then shows you how to run your first model via API or workflow.

## Step 1: Try models in the playground

1. Go to **[https://github.com/marketplace/models](https://github.com/marketplace/models)**.

1. In the playground, select at least one model from the dropdown menu.
1. Test out different prompts using the **Chat** view, and compare responses from different models.
1. Use the **Parameters** view to customize the parameters for the models you are testing, then see how they impact responses.

> [!NOTE]
> The playground works out of the box if you're signed in to {% data variables.product.prodname_dotcom %}. It uses your {% data variables.product.prodname_dotcom %} account for access—no setup or API keys required.

## Step 2: Make an API call

For full details on available fields, headers, and request formats, see the [API reference for {% data variables.product.prodname_github_models %}](/free-pro-team@latest/rest/models/inference?apiVersion=2022-11-28).

To call models programmatically, you’ll need:

* A {% data variables.product.prodname_dotcom %} account.
* A {% data variables.product.pat_generic %} (PAT) with the `models` scope, which you can create [in settings](https://github.com/settings/tokens).

1. Run the following `curl` command, replacing `YOUR_GITHUB_PAT` with your token.

   ```bash copy
   curl -L \
     -X POST \
     -H "Accept: application/vnd.github+json" \
     -H "Authorization: Bearer YOUR_GITHUB_PAT" \
     -H "X-GitHub-Api-Version: 2022-11-28" \
     -H "Content-Type: application/json" \
     https://models.github.ai/inference/chat/completions \
     -d '{"model":"openai/gpt-4.1","messages":[{"role":"user","content":"What is the capital of France?"}]}'
   ```

1. You’ll receive a response like this:

   ```json
   {
     "choices": [
       {
         "message": {
           "role": "assistant",
           "content": "The capital of France is **Paris**."
         }
       }
     ],
     ...other fields omitted
   }
   ```

1. To try other models, change the value of the `model` field in the JSON payload to one from the [marketplace](https://github.com/marketplace/models).
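
For scripts and apps, the same request can be sent with any HTTP client. The following is a minimal Python sketch of the curl call above; the `requests` library is an assumption, while the endpoint, headers, model ID, and payload are taken from the example.

```python
# Minimal sketch of the chat completions request shown above.
# Assumes the third-party "requests" package is installed (pip install requests)
# and that GITHUB_PAT holds a personal access token with the `models` scope.
import os
import requests

response = requests.post(
    "https://models.github.ai/inference/chat/completions",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_PAT']}",
        "X-GitHub-Api-Version": "2022-11-28",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-4.1",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Swapping the `model` value for another ID from the marketplace works the same way as in the curl example.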

## Step 3: Run models in GitHub Actions

1. In your repository, create a workflow file at `.github/workflows/models-demo.yml`.

1. Paste the following workflow into the file you just created.

   ```yaml
   name: GitHub Models Demo

   on: [push]

   permissions:
     contents: read
     models: read

   jobs:
     summarize-repo:
       runs-on: ubuntu-latest
       steps:
         - name: Checkout code
           uses: {% data reusables.actions.action-checkout %}

         - name: Summarize the repository README
           run: |
             SUMMARY_INPUT=$(head -c 4000 README.md)
             PROMPT="Summarize this repository in one sentence. Here is the README:\n$SUMMARY_INPUT"
             PAYLOAD=$(jq -n --arg prompt "$PROMPT" '{
               model: "openai/gpt-4.1",
               messages: [
                 {role: "user", content: $prompt}
               ]
             }')
             RESPONSE=$(curl -sL \
               -X POST \
               -H "Accept: application/vnd.github+json" \
               -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
               -H "X-GitHub-Api-Version: 2022-11-28" \
               -H "Content-Type: application/json" \
               https://models.github.ai/inference/chat/completions \
               -d "$PAYLOAD")
             echo "$RESPONSE" | jq -r '.choices[0].message.content'
   ```

   > [!NOTE]
   > Workflows that call {% data variables.product.prodname_github_models %} must include `models: read` in the permissions block. {% data variables.product.prodname_dotcom %}-hosted runners provide a `GITHUB_TOKEN` automatically.

1. Commit and push to trigger the workflow.

This example shows how to send a prompt to a model and use the response in your continuous integration (CI) workflows. For more advanced use cases, such as summarizing issues, detecting missing reproduction steps for bug reports, or responding to pull requests, see [AUTOTITLE](/github-models/use-github-models/integrating-ai-models-into-your-development-workflow).

## Step 4: Save your first prompt file

{% data variables.product.prodname_github_models %} supports reusable prompts defined in `.prompt.yml` files. Once you add this file to your repository, it will appear in the Models page of your repository and can be run directly in the Prompt Editor and evaluation tooling. Learn more about [AUTOTITLE](/github-models/use-github-models/storing-prompts-in-github-repositories).

1. In your repository, create a file named `summarize.prompt.yml`. You can save it in any directory.

1. Paste the following example prompt into the file you just created.

   ```yaml
   name: One-line summary
   description: Ask the model to summarize a paragraph in one sentence.
   messages:
     - role: user
       content: 'Summarize the following text in one sentence: {{input}}'
   model: openai/gpt-4o
   ```

1. Commit and push the file to your repository.

1. Go to the **Models** tab in your repository.

1. In the navigation menu, click **{% octicon "note" aria-hidden="true" aria-label="none" %} Prompts**, then click on the prompt file.

1. The prompt will open in the prompt editor. Click **Run**. A right-hand sidebar will appear asking you to enter input text. Enter any input text, then click **Run** again in the bottom right corner to test it out.

> [!NOTE]
> The prompt editor doesn’t automatically pass repository content into prompts. You provide the input manually.
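
To make the relationship between the prompt file and the inference API concrete, here is a small hedged Python sketch that reads `summarize.prompt.yml`, substitutes the `{{input}}` placeholder, and builds the same kind of request body used in Step 2. It assumes PyYAML is installed, and the substitution logic is purely illustrative rather than part of the GitHub Models tooling.

```python
# Illustrative only: render the stored prompt into a chat completions payload.
# Assumes PyYAML is installed (pip install pyyaml). The naive string replace
# below stands in for the {{input}} templating done by the prompt editor.
import yaml

with open("summarize.prompt.yml") as f:
    prompt = yaml.safe_load(f)

user_input = "The museum opened a new dinosaur exhibit this weekend."

payload = {
    "model": prompt["model"],
    "messages": [
        {"role": m["role"], "content": m["content"].replace("{{input}}", user_input)}
        for m in prompt["messages"]
    ],
}

# `payload` can now be POSTed to https://models.github.ai/inference/chat/completions
# exactly as in Step 2.
print(payload)
```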

## Step 5: Set up your first evaluation

Evaluations help you measure how different models respond to the same inputs so you can choose the best one for your use case.

1. Go back to the `summarize.prompt.yml` file you created in the previous step.

1. Update the file to match the following example.

   ```yaml
   name: One-line summary
   description: Ask the model to summarize a paragraph in one sentence.
   messages:
     - role: user
       content: 'Summarize the following text in one sentence: {{input}}'
   model: openai/gpt-4o
   testData:
     - input: >-
         The museum opened a new dinosaur exhibit this weekend. Families from all
         over the city came to see the life-sized fossils and interactive displays.
       expected: >-
         The museum's new dinosaur exhibit attracted many families with its fossils
         and interactive displays.
     - input: >-
         Lucy baked cookies for the school fundraiser. She spent the entire evening
         in the kitchen to make sure there were enough for everyone.
       expected: Lucy baked cookies all evening to support the school fundraiser.
   evaluators:
     - name: Similarity
       uses: github/similarity
   ```

1. Commit and push the file to your repository.

1. In your repository, click the **Models** tab. Then click **{% octicon "note" aria-hidden="true" aria-label="none" %} Prompts** and reopen the same prompt in the prompt editor.

1. In the top left-hand corner, you can toggle the view from **Edit** to **Compare**. Click **Compare**.

1. Your evaluation will be set up automatically. Click **Run** to see results.

> [!TIP]
> By clicking **Add prompt**, you can run the same prompt with different models or change the prompt wording to get inference responses with multiple variations at once, see evaluations, and view them side by side to make data-driven model decisions.

## Next steps

* [AUTOTITLE](/github-models/about-github-models)
* [Browse the model catalog](https://github.com/marketplace?type=models)
* [AUTOTITLE](/github-models/use-github-models/storing-prompts-in-github-repositories)
* [AUTOTITLE](/github-models/use-github-models/evaluating-ai-models)
* [AUTOTITLE](/github-models/use-github-models/integrating-ai-models-into-your-development-workflow#using-ai-models-with-github-actions)

src/graphql/data/fpt/changelog.json

Lines changed: 18 additions & 0 deletions

@@ -1,4 +1,22 @@
 [
+  {
+    "schemaChanges": [
+      {
+        "title": "The GraphQL schema includes these changes:",
+        "changes": [
+          "<p>Enum value 'BLOCKED_BY_REMOVED_EVENT<code>was added to enum</code>IssueTimelineItemsItemType'</p>",
+          "<p>Enum value 'BLOCKING_ADDED_EVENT<code>was added to enum</code>IssueTimelineItemsItemType'</p>",
+          "<p>Enum value 'BLOCKING_REMOVED_EVENT<code>was added to enum</code>IssueTimelineItemsItemType'</p>",
+          "<p>Enum value 'BLOCKED_BY_REMOVED_EVENT<code>was added to enum</code>PullRequestTimelineItemsItemType'</p>",
+          "<p>Enum value 'BLOCKING_ADDED_EVENT<code>was added to enum</code>PullRequestTimelineItemsItemType'</p>",
+          "<p>Enum value 'BLOCKING_REMOVED_EVENT<code>was added to enum</code>PullRequestTimelineItemsItemType'</p>"
+        ]
+      }
+    ],
+    "previewChanges": [],
+    "upcomingChanges": [],
+    "date": "2025-07-31"
+  },
   {
     "schemaChanges": [
       {
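
The new enum values can be used wherever `IssueTimelineItemsItemType` or `PullRequestTimelineItemsItemType` is accepted, for example as an `itemTypes` filter on `timelineItems`. Below is a rough Python sketch of such a query; the repository coordinates and issue number are placeholders, and the surrounding script is an illustration rather than anything from this changelog.

```python
# Rough sketch: fetch only blocking-related timeline events for one issue.
# Assumes a GITHUB_TOKEN environment variable with read access to the repository
# and the third-party "requests" package (pip install requests).
import os
import requests

QUERY = """
query($owner: String!, $name: String!, $number: Int!) {
  repository(owner: $owner, name: $name) {
    issue(number: $number) {
      timelineItems(
        first: 50,
        itemTypes: [BLOCKED_BY_ADDED_EVENT, BLOCKED_BY_REMOVED_EVENT,
                    BLOCKING_ADDED_EVENT, BLOCKING_REMOVED_EVENT]
      ) {
        nodes { __typename }
      }
    }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    json={
        "query": QUERY,
        # Placeholder values for illustration only.
        "variables": {"owner": "REPO_OWNER", "name": "REPO_NAME", "number": 1},
    },
)
resp.raise_for_status()
print(resp.json())
```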

src/graphql/data/fpt/schema.docs.graphql

Lines changed: 30 additions & 0 deletions

@@ -20311,6 +20311,21 @@ enum IssueTimelineItemsItemType {
   """
   BLOCKED_BY_ADDED_EVENT
 
+  """
+  Represents a 'blocked_by_removed' event on a given issue.
+  """
+  BLOCKED_BY_REMOVED_EVENT
+
+  """
+  Represents a 'blocking_added' event on a given issue.
+  """
+  BLOCKING_ADDED_EVENT
+
+  """
+  Represents a 'blocking_removed' event on a given issue.
+  """
+  BLOCKING_REMOVED_EVENT
+
   """
   Represents a 'closed' event on any `Closable`.
   """
@@ -42960,6 +42975,21 @@ enum PullRequestTimelineItemsItemType {
   """
   BLOCKED_BY_ADDED_EVENT
 
+  """
+  Represents a 'blocked_by_removed' event on a given issue.
+  """
+  BLOCKED_BY_REMOVED_EVENT
+
+  """
+  Represents a 'blocking_added' event on a given issue.
+  """
+  BLOCKING_ADDED_EVENT
+
+  """
+  Represents a 'blocking_removed' event on a given issue.
+  """
+  BLOCKING_REMOVED_EVENT
+
   """
   Represents a 'closed' event on any `Closable`.
   """

src/graphql/data/fpt/schema.json

Lines changed: 24 additions & 0 deletions

@@ -89627,6 +89627,18 @@
   "name": "BLOCKED_BY_ADDED_EVENT",
   "description": "<p>Represents a<code>blocked_by_added</code>event on a given issue.</p>"
 },
+{
+  "name": "BLOCKED_BY_REMOVED_EVENT",
+  "description": "<p>Represents a<code>blocked_by_removed</code>event on a given issue.</p>"
+},
+{
+  "name": "BLOCKING_ADDED_EVENT",
+  "description": "<p>Represents a<code>blocking_added</code>event on a given issue.</p>"
+},
+{
+  "name": "BLOCKING_REMOVED_EVENT",
+  "description": "<p>Represents a<code>blocking_removed</code>event on a given issue.</p>"
+},
 {
   "name": "CLOSED_EVENT",
   "description": "<p>Represents a<code>closed</code>event on any <code>Closable</code>.</p>"
@@ -91758,6 +91770,18 @@
   "name": "BLOCKED_BY_ADDED_EVENT",
   "description": "<p>Represents a<code>blocked_by_added</code>event on a given issue.</p>"
 },
+{
+  "name": "BLOCKED_BY_REMOVED_EVENT",
+  "description": "<p>Represents a<code>blocked_by_removed</code>event on a given issue.</p>"
+},
+{
+  "name": "BLOCKING_ADDED_EVENT",
+  "description": "<p>Represents a<code>blocking_added</code>event on a given issue.</p>"
+},
+{
+  "name": "BLOCKING_REMOVED_EVENT",
+  "description": "<p>Represents a<code>blocking_removed</code>event on a given issue.</p>"
+},
 {
   "name": "CLOSED_EVENT",
   "description": "<p>Represents a<code>closed</code>event on any <code>Closable</code>.</p>"

src/graphql/data/ghec/schema.docs.graphql

Lines changed: 30 additions & 0 deletions

@@ -20311,6 +20311,21 @@ enum IssueTimelineItemsItemType {
   """
   BLOCKED_BY_ADDED_EVENT
 
+  """
+  Represents a 'blocked_by_removed' event on a given issue.
+  """
+  BLOCKED_BY_REMOVED_EVENT
+
+  """
+  Represents a 'blocking_added' event on a given issue.
+  """
+  BLOCKING_ADDED_EVENT
+
+  """
+  Represents a 'blocking_removed' event on a given issue.
+  """
+  BLOCKING_REMOVED_EVENT
+
   """
   Represents a 'closed' event on any `Closable`.
   """
@@ -42960,6 +42975,21 @@ enum PullRequestTimelineItemsItemType {
   """
   BLOCKED_BY_ADDED_EVENT
 
+  """
+  Represents a 'blocked_by_removed' event on a given issue.
+  """
+  BLOCKED_BY_REMOVED_EVENT
+
+  """
+  Represents a 'blocking_added' event on a given issue.
+  """
+  BLOCKING_ADDED_EVENT
+
+  """
+  Represents a 'blocking_removed' event on a given issue.
+  """
+  BLOCKING_REMOVED_EVENT
+
   """
   Represents a 'closed' event on any `Closable`.
   """

0 commit comments
