This repository was archived by the owner on Jun 21, 2024. It is now read-only.

Commit 8d46176

add lucid readme

1 parent afcc0d7

File tree

1 file changed: +4 -1 lines changed

README.md

Lines changed: 4 additions & 1 deletion
@@ -1,6 +1,10 @@
 ## T5 - PyTorch
 A PyTorch implementation of [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683). You can find the official T5x repository by Google [here](https://github.com/google-research/t5x).
 
+## Acknowledgement
+
+Phil Wang (lucidrains) advised and provided review for this implementation. [Please be sure to follow and support his work](https://github.com/lucidrains?tab=repositories).
+
 ## Usage
 
 ```python
@@ -33,7 +37,6 @@ print(output.shape) #torch.Size([1, 1024, 512])
 
 ## Abstract
 
-
 Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
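The body of the Usage snippet is not visible in this diff; only the context line `print(output.shape) #torch.Size([1, 1024, 512])` from the second hunk survives. Below is a minimal sketch of a snippet that would produce that output shape, assuming a hypothetical `T5Encoder` module with `dim = 512` run on a batch of one 1024-token sequence; the repository's actual class name and constructor arguments are not shown in this commit.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the repository's model; its real API is not visible
# in this diff, so the class name and arguments below are illustrative only.
class T5Encoder(nn.Module):
    def __init__(self, num_tokens=256, dim=512, depth=6, heads=8):
        super().__init__()
        self.token_emb = nn.Embedding(num_tokens, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        # x: (batch, seq_len) token ids -> (batch, seq_len, dim) hidden states
        return self.encoder(self.token_emb(x))

model = T5Encoder()
tokens = torch.randint(0, 256, (1, 1024))  # batch of one 1024-token sequence
output = model(tokens)
print(output.shape)  # torch.Size([1, 1024, 512]) -- matches the diff's context line
```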
