
Commit 103ccde

Fix bugs in website
1 parent fc7db5c commit 103ccde

22 files changed (+59, -135 lines)

_blogs/cats.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ date: 2024-06-01
 doi:
 tags:
 - natural language processing
-- generative AI
+- generative ai
 teaser: The human brain operates with remarkable energy efficiency, despite its complexity. Usually, a human brain activates only a sparse array of neurons at any given moment. Do state-of-the-art large language models (LLMs) behave similarly? Until recently, the answer was no. They typically exhibit over 99% non-zero activations during inference. However, our recent research unveils a surprising observation, LLMs activations are intrinsically sparse.
 materials:
 - name: Blog Post

_blogs/kernelbench.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ authors:
 tags:
 - ml systems
 - gpu programming
-- generative AI
+- generative ai
 venue: none
 year: 2024
 date: 2024-12-03

_blogs/monkeys.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ authors:
 tags:
 - natural language processing
 - scaling laws
-- generative AI
+- generative ai
 venue: none
 year: 2024
 date: 2024-09-01

_data/tags.yml

Lines changed: 2 additions & 6 deletions
@@ -5,13 +5,9 @@
 - model
 - dataset
 - empirical study
-- # methods
-- qualitative methods
-- quantitative methods
-- mixed methods
 - # domain
 - natural language processing
 - machine learning
-- generative AI
-- ML systems
+- generative ai
+- ml systems
 

_layouts/page.html

Lines changed: 5 additions & 1 deletion
@@ -6,9 +6,13 @@
 {% if page.title == 'Publications' %}
 <script src="https://cdn.jsdelivr.net/gh/nextapps-de/[email protected]/dist/flexsearch.compact.js"></script>
 <script type="text/javascript">
-// {% include blogsearch.js %}
 {% include pubsearch.js %}
 </script>
+{% elsif page.title == 'Blogs' %}
+<script src="https://cdn.jsdelivr.net/gh/nextapps-de/[email protected]/dist/flexsearch.compact.js"></script>
+<script type="text/javascript">
+{% include blogsearch.js %}
+</script>
 {% endif %}
 </head>
 <body>
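
For reference, blogsearch.js itself is not part of this commit. A minimal sketch of what such an include might contain, assuming it mirrors pubsearch.js and wires FlexSearch up to the blog list (the element IDs, class names, and index options below are hypothetical, not taken from this repository):

// Sketch only: a FlexSearch-backed filter for the blog index,
// assuming markup with a #blog-search input and .blog-entry items.
const index = new FlexSearch.Index({ tokenize: "forward" });

// Index each blog entry by its position in the list.
document.querySelectorAll(".blog-entry").forEach((el, i) => {
  index.add(i, el.textContent);
});

// Show only matching entries as the user types.
document.getElementById("blog-search").addEventListener("input", (e) => {
  const query = e.target.value;
  const hits = new Set(index.search(query));
  document.querySelectorAll(".blog-entry").forEach((el, i) => {
    el.style.display = query && !hits.has(i) ? "none" : "";
  });
});

This keeps the Publications and Blogs branches symmetric: each page loads the same FlexSearch bundle and only swaps which include builds the index.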

_pubs/CATSpaper.md

Lines changed: 2 additions & 2 deletions
@@ -13,12 +13,12 @@ authors:
 - key: azaliamirhoseini
 venue: preprint
 year: 2024
-day: 117
+date: 2024-04-26
 has_pdf: true
 doi: 10.48550/arXiv.2404.08763
 tags:
 - natural language processing
-- generative AI
+- generative ai
 teaser: The human brain operates with remarkable energy efficiency, despite its complexity. Usually, a human brain activates only a sparse array of neurons at any given moment. Do state-of-the-art large language models (LLMs) behave similarly? Until recently, the answer was no. They typically exhibit over 99% non-zero activations during inference. However, our recent research unveils a surprising observation, LLMs activations are intrinsically sparse.
 materials:
 - name: CATS
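
As a side note, the replaced day: values appear to have been day-of-year indices: day 117 of 2024 falls on April 26, matching the new date: field here, and several of the other day: fixes in this commit follow the same pattern. A quick way to check, using a hypothetical helper that is not part of the site:

// Hypothetical helper: interpret an old `day:` value as a
// day-of-year index and resolve it to an ISO date string.
function dayOfYearToISO(year, day) {
  // Date.UTC rolls day-of-month overflow forward from January 1st.
  return new Date(Date.UTC(year, 0, day)).toISOString().slice(0, 10);
}

console.log(dayOfYearToISO(2024, 117)); // "2024-04-26" (this file)
console.log(dayOfYearToISO(2024, 267)); // "2024-09-23" (archon, below)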

_pubs/CHESSpaper.md

Lines changed: 2 additions & 2 deletions
@@ -12,12 +12,12 @@ authors:
 affiliation: Stanford University
 venue: preprint
 year: 2024
-day: 178
+date: 2024-04-23
 has_pdf: true
 doi: 2405.16755
 tags:
 - natural language processing
-- generative AI
+- generative ai
 teaser: Translating natural language questions into database queries, or text-to-SQL, is a long-standing research problem. This issue has been exacerbated in recent years due to the growing complexity of databases, driven by the increasing sizes of schemas (sets of columns and tables), values (content), and catalogs (metadata describing schemas and values) stored within them.
 materials:
 - name: CHESS

_pubs/archon.md

Lines changed: 2 additions & 2 deletions
@@ -22,12 +22,12 @@ authors:
 - key: azaliamirhoseini
 venue: preprint
 year: 2024
-day: 267
+date: 2024-09-23
 has_pdf: true
 doi: 10.48550/arXiv.2409.15254
 tags:
 - machine learning
-- generative AI
+- generative ai
 - inference-time techniques
 teaser: Archon, a modular framework for designing inference-time architectures, outperforms top language models like GPT-4 and Claude 3.5 on various benchmarks by optimally combining LLMs and inference techniques.
 materials:

_pubs/codemonkeys.md

Lines changed: 2 additions & 2 deletions
@@ -15,12 +15,12 @@ authors:
 - key: azaliamirhoseini
 venue: preprint
 year: 2025
-day: 23
+date: 2025-01-23
 has_pdf: true
 doi: 10.48550/arXiv.2501.14723
 tags:
 - machine learning
-- generative AI
+- generative ai
 teaser: CodeMonkeys, a system designed to solve software engineering problems by scaling test time compute.
 materials:
 - name: Paper

_pubs/constitutionalai.md

Lines changed: 2 additions & 2 deletions
@@ -55,12 +55,12 @@ authors:
 
 venue: preprint
 year: 2022
-day: 349
+date: 2022-12-15
 has_pdf: false
 doi: 2212.08073v1
 tags:
 - natural language processing
-- generative AI
+- generative ai
 - highlight
 teaser: Imagine an AI that polices itself to ensure it remains helpful and safe. Our approach, called Constitutional AI, trains an AI assistant through a two-phase process. First, it learns from itself by generating and revising its own responses. Then, it refines its performance through reinforcement learning based on feedback from its own evaluations. This method not only improves the AI’s ability to handle sensitive queries but also requires minimal human oversight, making it a powerful tool for creating AI systems that are both effective and secure.
 materials:
