Conversation

@RobvanGastel

I added a dataloader for the OmniPrint dataset, modeled on the existing Omniglot dataloader. Does this conform sufficiently to the pytorch-meta code structure? For now, the dataset is hosted on my personal Google Drive to avoid adding a dependency on Kaggle, where the dataset is officially hosted. Following the OmniPrint source code, the training split is identical across all print splits (meta1, meta2, ...).
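
For reference, here is a minimal usage sketch of how such a dataloader could be exercised, following torchmeta's standard Omniglot pattern. The `OmniPrint` class name and its `print_split` keyword are assumptions about this PR's interface, not a confirmed API; the surrounding calls (`ClassSplitter`, `Categorical`, `BatchMetaDataLoader`) are torchmeta's existing utilities:

```python
from torchvision.transforms import Compose, Resize, ToTensor

from torchmeta.transforms import Categorical, ClassSplitter
from torchmeta.utils.data import BatchMetaDataLoader

# Hypothetical import: the OmniPrint class added in this PR, assumed to
# mirror torchmeta.datasets.Omniglot.
from torchmeta.datasets import OmniPrint

# 5-way classification tasks drawn from the meta1 print split; the
# `print_split` keyword is an assumption about this PR's interface.
dataset = OmniPrint("data",
                    num_classes_per_task=5,
                    print_split="meta1",
                    transform=Compose([Resize(32), ToTensor()]),
                    target_transform=Categorical(num_classes=5),
                    meta_train=True,
                    download=True)

# Split each task into a 1-shot support set and a 15-example query set.
dataset = ClassSplitter(dataset, shuffle=True,
                        num_train_per_class=1, num_test_per_class=15)
dataloader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=4)

for batch in dataloader:
    train_inputs, train_targets = batch["train"]
    # 16 tasks, 5 support examples each (1 shot x 5 ways).
    print(train_inputs.shape)
    break
```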
