Is your feature request related to a problem? Please describe.
I'd love to use newer LLMs like Mixtral 8x7B (which benchmarks ahead of GPT-3.5 Turbo), for which both HuggingFace and Replicate offer hosted APIs.
The inference APIs are relatively standardized across providers; see Replicate's predictions API, for example.
I would prefer to pay for cloud API calls rather than spin up a private GPU.
Describe the solution you'd like
API integration with Replicate and HuggingFace, with a few LLMs documented out of the box.
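For illustration, the two providers' HTTP APIs are similar enough that a thin request builder could cover both. This is only a sketch: the function name and the model identifiers below are hypothetical, though the endpoint shapes follow the public Replicate and HuggingFace Inference HTTP APIs.

```python
import json

def build_request(provider: str, model: str, prompt: str, api_key: str) -> dict:
    """Return the URL, headers, and JSON body for a text-generation call."""
    if provider == "replicate":
        # Replicate: POST /v1/predictions with a model "version" and "input".
        return {
            "url": "https://api.replicate.com/v1/predictions",
            "headers": {"Authorization": f"Token {api_key}",
                        "Content-Type": "application/json"},
            "body": {"version": model, "input": {"prompt": prompt}},
        }
    if provider == "huggingface":
        # HuggingFace Inference API: POST to a per-model URL with "inputs".
        return {
            "url": f"https://api-inference.huggingface.co/models/{model}",
            "headers": {"Authorization": f"Bearer {api_key}"},
            "body": {"inputs": prompt},
        }
    raise ValueError(f"unknown provider: {provider}")

req = build_request("huggingface", "mistralai/Mixtral-8x7B-Instruct-v0.1",
                    "Label this text.", "hf_xxx")
print(json.dumps(req["body"]))
```

A unified wrapper along these lines would let refuel add new hosted models by documenting a provider/model pair rather than writing new client code each time.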
Additional context
I could implement this myself, but I think refuel would be much stronger if it supported these providers natively.