
Model Provider Support

Open Notebook supports multiple AI model providers to give you flexibility in choosing the AI that best fits your needs.

| Provider | Highlights |
| --- | --- |
| OpenAI | Great models, covering all necessary features for Open Notebook |
| Anthropic | Very capable Sonnet 3.5 for dynamic reasoning |
| Gemini | Large context (2M tokens) and the best text to speech for podcasts |
| Ollama | Run local models for free; great for transformation tasks |
| ElevenLabs | Amazing voice quality |
| Open Router | Great option for several open source models, as well as Cohere, Mistral, xAI, etc. |
| Groq | Very fast inference, but limited model availability |
| xAI | The powerful Grok model, fewer guardrails, great responses |
| Vertex AI | If you are running a Google Cloud environment |

All providers are installed out of the box. All you need to do is set up the environment variables (API keys, etc.) for your selected provider and decide which models to use.

Please refer to the .env.example file for instructions on which ENV variables are required for each provider.
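If you only need one or two providers, a minimal .env might look like the sketch below. The variable names here are assumptions based on common conventions; confirm the exact names in .env.example.

```
# Hypothetical .env sketch -- verify variable names against .env.example
OPENAI_API_KEY=sk-...        # chat, transformations, speech to text, embeddings
GEMINI_API_KEY=...           # large-context requests and podcast content generation
ELEVENLABS_API_KEY=...       # optional: higher quality podcast voices
```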

API Key Requirements

The Podcast Generator feature currently requires Gemini API keys for content generation. Additionally, voice generation requires either an OpenAI API key or an ElevenLabs API key. Make sure you have the necessary API keys configured before using these features.

Create models on the Settings page

Go to the settings page and create your different models.

📝 Notice: To use all features, you need to set up at least 4 models (one of each type).

| Model Type | Supported Providers |
| --- | --- |
| Language | OpenAI, Anthropic, Open Router, LiteLLM, Vertex AI, Gemini, Ollama, xAI, Groq |
| Embedding | OpenAI, Gemini, Vertex AI, Ollama |
| Speech to Text | OpenAI, Groq |
| Text to Speech | OpenAI, ElevenLabs, Gemini |

If you are not sure which models to set up, the Model Settings page offers some options to get you started.

After setting up the models, head to the Model Defaults tab to define the default models. There are several defaults to configure.

| Model Default | Purpose |
| --- | --- |
| Chat Model | Will be used in all chats |
| Transformation Model | Will be used for summaries, insights, etc. |
| Large Context | For content larger than 110k tokens (use Gemini here) |
| Speech to Text | For transcribing text from your audio/video uploads |
| Text to Speech | For generating podcasts |
| Embedding | For creating vector representations of content |

All model types and defaults are required for now. If you are not sure which to pick, go with OpenAI, the only provider that covers every model type.

The reason for this approach is that different LLMs behave better or worse depending on the type of request and the tools offered, so it makes sense to have a more refined system that decides which model should process which task.

For instance, you can use an Ollama-based model like gemma2 for summarization and document queries, and use OpenAI/Claude for chat. The whole idea is to let you experiment with the cost/performance trade-off.

Suggested Configurations

These are some suggested configurations for different use cases and budgets:

Best in Class

| Model Default | Model Name |
| --- | --- |
| Chat Model | claude-3-5-sonnet-latest |
| Transformation Model | gpt-4o-mini |
| Large Context | gemini-1.5-pro |
| Speech to Text | whisper-1 |
| Text to Speech | eleven_turbo_v2_5 (ElevenLabs) |
| Embedding | text-embedding-3-small |

OpenAI Only Configuration

| Model Default | Model Name |
| --- | --- |
| Chat Model | gpt-4o-mini |
| Transformation Model | gpt-4o-mini |
| Large Context | gpt-4o-mini (you will be limited to 128k tokens) |
| Speech to Text | whisper-1 |
| Text to Speech | tts-1-hd |
| Embedding | text-embedding-3-small |

Gemini Only Configuration

| Model Default | Model Name |
| --- | --- |
| Chat Model | gemini-1.5-flash |
| Transformation Model | gemini-1.5-flash |
| Large Context | gemini-1.5-pro |
| Speech to Text | (not available yet) |
| Text to Speech | default |
| Embedding | text-embedding-004 |

Open Source Only (using Ollama)

| Model Default | Model Name |
| --- | --- |
| Chat Model | qwen2.5, gemma2, phi3, or llama3.2 |
| Transformation Model | qwen2.5, gemma2, phi3, or llama3.2 |
| Large Context | qwen2.5, gemma2, phi3, or llama3.2 (limited to 128k tokens) |
| Speech to Text | (not possible yet) |
| Text to Speech | (not possible yet) |
| Embedding | mxbai-embed-large |
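
If you go the Ollama route, the models have to be available locally before Open Notebook can use them. A minimal sketch using the Ollama CLI (assuming Ollama is already installed and running):

```
# Pull one of the suggested chat/transformation models and the embedding model
ollama pull gemma2
ollama pull mxbai-embed-large
```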

We are working hard to support more providers and model types to give users more flexibility and options.

Testing your models

If you are not sure which model will work best for you, try them out in the Playground section and see for yourself how they handle different tasks.

⚠️ Important instructions for Gemini

The new Gemini Text to Speech models are amazing and definitely worth using, but they require a little setup. Please refer to this Podcastfy help page for details; in short, you need to enable the Text to Speech API and add it to your API key.
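
If your Gemini API key belongs to a Google Cloud project, one way to enable the API is with the gcloud CLI (a sketch, assuming gcloud is installed and pointed at the right project; restricting your API key to the newly enabled API is still done in the console):

```
# Enable the Cloud Text-to-Speech API on the current project
gcloud services enable texttospeech.googleapis.com
```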
