[Image: Stability SD3 generated image]

Stable Diffusion 3

Fireworks has partnered with Stability to provide blazing-fast image generation using SD3, Stability's latest and most advanced generative image model yet.

Featured Models

These models are deployed at industry-leading speeds and excel at production tasks.

Image Models

All currently deployed image models.

The most capable text-to-image model produced by stability.ai, with greatly improved performance in multi-subject prompts, image quality, and spelling abilities. The Stable Diffusion 3 API is provided by Stability and the model is powered by Fireworks. Unlike other models on the Fireworks playground, you'll need a Stability API key to use this model. To use the API directly, visit https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1sd3/post
Model: accounts/stability/models/sd3
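
For reference, here is a minimal sketch of calling that endpoint directly. It assumes the v2beta multipart request shape from the linked API reference (field names such as prompt, model, and output_format should be verified there); passing model="sd3-turbo" would target the Turbo variant listed further down.

```python
# Minimal sketch, not an official client: POST a text prompt to the Stability
# SD3 endpoint and save the returned image bytes. Field names follow the
# v2beta multipart API and should be checked against the current docs.
import requests

STABILITY_API_KEY = "sk-..."  # your Stability API key (required for this model)

response = requests.post(
    "https://api.stability.ai/v2beta/stable-image/generate/sd3",
    headers={
        "Authorization": f"Bearer {STABILITY_API_KEY}",
        "Accept": "image/*",  # request raw image bytes rather than JSON
    },
    files={"none": ""},  # force multipart/form-data encoding
    data={
        "prompt": "a lighthouse at dawn, volumetric light",
        "model": "sd3",  # "sd3-turbo" selects the distilled model listed below
        "output_format": "png",
    },
    timeout=120,
)
response.raise_for_status()
with open("sd3_output.png", "wb") as f:
    f.write(response.content)
```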

Image generation model, produced by stability.ai.
Model: accounts/fireworks/models/stable-diffusion-xl-1024-v1-0

Playground v2 is a diffusion-based text-to-image generative model. The model was trained from scratch by the research team at playground.com.
Model: accounts/fireworks/models/playground-v2-1024px-aesthetic

Playground v2.5 is a diffusion-based text-to-image generative model, and a successor to Playground v2.
Model: accounts/fireworks/models/playground-v2-5-1024px-aesthetic

Image generation model. Distilled from Stable Diffusion XL 1.0 and 50% smaller.
Model: accounts/fireworks/models/SSD-1B

Japanese Stable Diffusion XL (JSDXL) is a Japanese-specific SDXL model that accepts prompts in Japanese and generates Japanese-style images.
Model: accounts/fireworks/models/japanese-stable-diffusion-xl

Distilled, few-step version of Stable Diffusion 3, the newest image generation model from Stability AI, which matches or outperforms state-of-the-art text-to-image generation systems such as DALL-E 3 and Midjourney v6 in typography and prompt adherence, based on human preference evaluations. Stability AI has partnered with Fireworks AI, the fastest and most reliable API platform in the market, to deliver Stable Diffusion 3 and Stable Diffusion 3 Turbo. To use the API directly, visit https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1sd3/post
Model: accounts/stability/models/sd3-turbo

Language Models

Serverless models are hosted by Fireworks, so there is no need to configure hardware or deploy models. Usage is billed per token.
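
As a rough illustration, the sketch below queries a serverless model through what is assumed to be Fireworks' OpenAI-compatible chat completions endpoint (https://api.fireworks.ai/inference/v1) with an API key in the FIREWORKS_API_KEY environment variable; consult the Fireworks API docs for the authoritative request format.

```python
# Minimal sketch: a single chat completion against a serverless Fireworks
# model. Endpoint, auth header, and response shape are assumptions based on
# the OpenAI-compatible API and should be verified against the Fireworks docs.
import os
import requests

resp = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    json={
        "model": "accounts/fireworks/models/mixtral-8x7b-instruct",
        "messages": [{"role": "user", "content": "In one sentence, what is a Mixture of Experts?"}],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```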

Vision-language model allowing both image and text as inputs (a single image is recommended), trained on data generated by OSS models and open-sourced on Hugging Face at fireworks-ai/FireLLaVA-13b.
Model: accounts/fireworks/models/firellava-13b | Context: 4,096
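
A minimal sketch of sending a single image plus a text question to this model, assuming the same chat completions endpoint accepts OpenAI-style multimodal content parts; the image URL is a placeholder and the content schema should be confirmed in the Fireworks vision docs.

```python
# Minimal sketch: image + text input for FireLLaVA-13B. The multimodal
# "content" list format mirrors OpenAI's vision messages and is an assumption
# here; the image URL below is a placeholder.
import os
import requests

resp = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    json={
        "model": "accounts/fireworks/models/firellava-13b",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```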

Fireworks' open-source function calling model.
Model: accounts/fireworks/models/firefunction-v1 | Context: 32,768
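
A minimal sketch of tool use with this model, assuming it accepts OpenAI-style tools definitions through the same chat completions endpoint; the get_weather tool is hypothetical and exists only for illustration.

```python
# Minimal sketch: function calling with firefunction-v1. The "tools" schema
# mirrors OpenAI's format; "get_weather" is a hypothetical tool, and the
# response parsing assumes an OpenAI-style "tool_calls" field.
import os
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    json={
        "model": "accounts/fireworks/models/firefunction-v1",
        "messages": [{"role": "user", "content": "What's the weather in Tokyo right now?"}],
        "tools": tools,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"].get("tool_calls"))
```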

Mistral MoE 8x7B Instruct v0.1 model with Sparse Mixture of Experts. Fine-tuned for instruction following.
Model: accounts/fireworks/models/mixtral-8x7b-instruct | Context: 32,768

Mistral MoE 8x22B Instruct v0.1 model with Sparse Mixture of Experts. Fine-tuned for instruction following.
Model: accounts/fireworks/models/mixtral-8x22b-instruct | Context: 65,536

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.
Model: accounts/fireworks/models/llama-v3-70b-instruct | Context: 8,192

Bleat allows you to enable function calling in LLaMA 2 in a similar fashion to OpenAI's implementation for ChatGPT.
Model: accounts/fireworks/models/bleat-adapter | Context: 4,096

The LoRA version of Chinese-Llama-2, based on Llama-2-7b-hf.
Model: accounts/fireworks/models/chinese-llama-2-lora-7b | Context: 4,096

DBRX Instruct is a mixture-of-experts (MoE) large language model trained from scratch by Databricks. DBRX Instruct specializes in few-turn interactions. DBRX is hosted as an experimental model: Fireworks only guarantees that it will be hosted serverless through April 2024, and future serverless availability will depend on overall usage.
Model: accounts/fireworks/models/dbrx-instruct | Context: 32,768

Gemma 7B Instruct from Google. Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms.
Model: accounts/fireworks/models/gemma-7b-it | Context: 8,192

The latest version of Nous Research's Hermes series of models, trained on an updated and cleaned version of the Hermes 2 dataset as well as a diverse and rich set of function-calling and JSON-mode samples.
Model: accounts/fireworks/models/hermes-2-pro-mistral-7b | Context: Unknown

japanese-stablelm-instruct-beta-70b is a 70B-parameter decoder-only language model based on japanese-stablelm-base-beta-70b and further fine-tuned on Databricks Dolly-15k, Anthropic HH, and other public data.
Model: accounts/stability/models/japanese-stablelm-instruct-beta-70b | Context: Unknown

This is a 7B-parameter decoder-only Japanese language model fine-tuned on instruction-following datasets, built on top of the base model Japanese Stable LM Base Gamma 7B.
Model: accounts/stability/models/japanese-stablelm-instruct-gamma-7b | Context: Unknown

Fine-tuned meta-llama/Llama-2-13b-chat-hf to answer French questions in French.
Model: accounts/fireworks/models/llama-2-13b-fp16-french | Context: 4,096

This chatbot model was built via parameter-efficient QLoRA finetuning of llama-2-13b on all 9.85k rows of timdettmers/openassistant-guanaco (a subset of OpenAssistant/oasst1 containing the highest-rated conversation paths). Finetuning was executed on a single A6000 (48 GB) for roughly 3.7 hours on the Lambda Labs platform.
Model: accounts/fireworks/models/llama-2-13b-guanaco-peft | Context: 4,096

Summarizes articles and conversations.
Model: accounts/fireworks/models/llama2-7b-summarize | Context: 4,096

Meta Llama Guard 2 is an 8B parameter Llama 3-based LLM safeguard model. Similar to Llama Guard, it can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.
Model: accounts/fireworks/models/llama-guard-2-8b | Context: 8,192
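
A minimal sketch of prompt classification with this model over the same serverless endpoint; the "safe"/"unsafe" plus category output described above is what the code expects, and the parsing is illustrative only.

```python
# Minimal sketch: classify a user prompt with Llama Guard 2. The model is
# expected to reply with "safe" or "unsafe" followed by violated category
# codes; exact output formatting should be checked against the model card.
import os
import requests

resp = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    json={
        "model": "accounts/fireworks/models/llama-guard-2-8b",
        "messages": [{"role": "user", "content": "How do I pick a lock on my own front door?"}],
        "max_tokens": 32,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"].strip())  # e.g. "safe" or "unsafe\nS2"
```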

A 13B parameter Llama 2 model, trained on 2 trillion tokens with a context length of 4096.
Model: accounts/fireworks/models/llama-v2-13b | Context: 4,096

A fine-tuned version of Llama 2 13B, optimized for dialogue applications using Reinforcement Learning from Human Feedback (RLHF), which performs comparably to ChatGPT according to human evaluations.
Model: accounts/fireworks/models/llama-v2-13b-chat | Context: 4,096

A fine-tuned version of Llama 2 70B, optimized for dialogue applications using Reinforcement Learning from Human Feedback (RLHF), which performs comparably to ChatGPT according to human evaluations.
Model: accounts/fireworks/models/llama-v2-70b-chat | Context: 4,096

A 7B parameter Llama 2 model, trained on 2 trillion tokens with a context length of 4096.
Model: accounts/fireworks/models/llama-v2-7b | Context: 4,096

© 2024 Fireworks AI All rights reserved.