cocacola-lab/TLV-Link
TLV-Link

🚀🚀🚀 The official implementation of Touch100k: A Large-Scale Touch-Language-Vision Dataset for Touch-Centric Multimodal Representation.

❤️ Acknowledgement

  • LanguageBind: An open-source, language-based multimodal pre-training framework. Thanks for their wonderful work.
  • OpenCLIP: An amazing open-source backbone.

🔒 License

Code License: This project is under the MIT license.

Data License: The dataset is licensed under CC BY-NC 4.0 (non-commercial use only), and models trained on the dataset should not be used outside of research purposes.
