diff --git a/fern/docs/getting-started/inference-models.mdx b/fern/docs/getting-started/inference-models.mdx
index 9e8cb0b..50f2ccb 100644
--- a/fern/docs/getting-started/inference-models.mdx
+++ b/fern/docs/getting-started/inference-models.mdx
@@ -19,6 +19,7 @@ OctoAI currently supports the self-service models & checkpoints organized on thi
 | Mistral | Chat, Coding | Mixtral Instruct (8x22B) | mixtral-8x22b-instruct | 65,536 |
 | Microsoft | Chat, Coding | WizardLM-2 (8x22B) | wizardlm-2-8x22b | 65,536 |
 | Meta | Content Moderation | Llama Guard 2 | llamaguard-2-7b | 4,096 |
+| Alibaba DAMO | Chat, Coding | Qwen2-7B-Instruct | qwen2-7b-instruct | 8,192 |
 | Alibaba DAMO | Embedding | GTE Large | thenlper/gte-large | n/a |
 
 Check out our [REST API](/docs/text-gen-solution/rest-api), [Python SDK](/docs/text-gen-solution/python-sdk), or [TypeScript SDK](/docs/text-gen-solution/typescript-sdk) docs when you’re ready to use text gen models programmatically.
diff --git a/fern/docs/text-gen-solution/getting-started.mdx b/fern/docs/text-gen-solution/getting-started.mdx
index 6b6a552..abb260c 100644
--- a/fern/docs/text-gen-solution/getting-started.mdx
+++ b/fern/docs/text-gen-solution/getting-started.mdx
@@ -29,6 +29,8 @@ We are always expanding our offering of models and other features. Presently, Oc
 **Llama Guard 2** An 8B parameter Llama 3-based LLM content moderation model released by Meta, which can classify text as safe or unsafe according to an editable set of policies. As an 8B parameter model, it is optimized for latency and can be used to moderate other LLM interactions in real time. [Read more](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8). Note: This model requires a specific prompt template to be applied, and is not compatible with the ChatCompletion API.
+**Qwen2-7B-Instruct** An instruction-tuned LLM released by Alibaba in June 2024 as part of the Qwen2 series. It delivers competitive performance against state-of-the-art open-source models on benchmarks covering language understanding, generation, multilingual tasks, coding, mathematics, and reasoning. [Read more.](https://huggingface.co/Qwen/Qwen2-7B-Instruct)
+
 **GTE Large** An embeddings model released by Alibaba DAMO Academy. Trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. Consistently ranked highly on Huggingface’s [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard). In combination with a vector database, this embeddings model is especially useful for powering semantic search and Retrieval Augmented Generation (RAG) applications. [Read more.](https://huggingface.co/thenlper/gte-large)
 
 For pricing of all of these endpoints, please refer to our [pricing page](/docs/getting-started/pricing-and-billing).
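
The new `qwen2-7b-instruct` model id slots into the same chat completions request shape as the other text gen models. As a minimal sketch (the endpoint URL and the `build_qwen2_request` helper are assumptions for illustration; see the REST API docs for the authoritative request format):

```python
import json

# Assumed OpenAI-compatible chat completions endpoint; the real URL and
# auth header are documented in the Text Gen REST API docs.
OCTOAI_CHAT_URL = "https://text.octoai.run/v1/chat/completions"

def build_qwen2_request(user_message: str, max_tokens: int = 256) -> str:
    """Build the JSON body for a qwen2-7b-instruct chat completion."""
    payload = {
        "model": "qwen2-7b-instruct",  # model id from the table above
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_qwen2_request("Write a Python function that reverses a string.")
```

The body would then be POSTed to the chat completions endpoint with an `Authorization: Bearer <token>` header; note that the model's 8,192-token context length bounds the combined prompt and completion size.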