
Conversation


@babaraza commented Feb 6, 2021

This script automates the model-folder creation and file copying required after a user fine-tunes the model on their own custom dataset.

Usage:
python create_model.py -create mymodel -model 124M -run run2

This will:

  • Create a folder inside models called mymodel
  • Copy encoder.json, hparams.json, vocab.bpe to mymodel from the 124M folder
  • Copy checkpoint, model-xxx.data, model-xxx.index, model-xxx.meta to mymodel from run2

Now a user can simply generate samples using their own model:
python interactive_conditional_samples.py --model_name mymodel
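For reference, the copying boils down to a handful of shutil calls. The sketch below is a minimal approximation of what create_model.py automates, assuming the standard repo layout (base models under models/, fine-tuning output under checkpoint/<run>/); flag names mirror the usage above, but the details are illustrative rather than the exact implementation in this PR.

```python
# Minimal sketch of the folder creation and file copying described above.
import argparse
import glob
import os
import shutil

parser = argparse.ArgumentParser()
parser.add_argument('-create', required=True, help='name of the new model folder, e.g. mymodel')
parser.add_argument('-model', default='124M', help='base model to copy encoder/hparams/vocab from')
parser.add_argument('-run', default='run1', help='fine-tuning run to copy checkpoint files from')
args = parser.parse_args()

dest = os.path.join('models', args.create)
os.makedirs(dest, exist_ok=True)

# Tokenizer and hyperparameter files come from the original base model folder.
for name in ('encoder.json', 'hparams.json', 'vocab.bpe'):
    shutil.copy(os.path.join('models', args.model, name), dest)

# Checkpoint files come from the fine-tuning run directory
# (checkpoint/<run>/ is the default output location of the fine-tuning scripts).
run_dir = os.path.join('checkpoint', args.run)
shutil.copy(os.path.join(run_dir, 'checkpoint'), dest)
for path in glob.glob(os.path.join(run_dir, 'model-*')):
    shutil.copy(path, dest)
```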

WuTheFWasThat and others added 30 commits February 17, 2019 17:24
…ve LF line endings and all files stay unix on commit
Add note about setting the PYTHONIOENCODING=UTF-8 env var for running examples
Example will `tee` stdout to `/tmp/samples` from the conditional and unconditional generation scripts.
Added a Python download script and modified requirements to include the needed modules. Tested on Windows Version 10.0.17134 Build 17134 and Ubuntu 18.04.1 LTS
This write-up was loosely inspired in part by Mitchell et al.’s work on [Model Cards for Model Reporting](https://arxiv.org/abs/1810.03993). Adding such model usage sections could be good practice in general for open source research projects with potentially broad applications.
This enables multi-GPU or distributed training using Horovod
Neil Shepperd and others added 29 commits March 19, 2019 20:46
Added the medium blog link "Beginner’s Guide to Retrain GPT-2 (117M) to Generate Custom Text Content"
