@shiyu22 shiyu22 commented Apr 24, 2023

Hi, thank you very much for open-sourcing such a great project 👍

This is a VQA demo built with MiniGPT-4 and GPTCache. GPTCache is a semantic cache for storing LLM responses; in this demo, MiniGPT-4 generates the answers and GPTCache caches them, which makes repeated queries faster.

The vqa_demo.py is also posted in our examples, but it requires the minigpt4 package, so I thought it would be a good idea to submit it here :)

And this is a screenshot of the demo: [screenshot: vqa]
