Bring Your Own Keys

Bring Your Own Keys (BYOK) lets you use your own LLM and embedding provider API keys instead of the platform-provided models. This gives you control over which models power your Thallus experience and can reduce your subscription cost.

BYOK pricing

BYOK plans are priced lower than standard plans because you're providing your own model inference. The tradeoff: you pay the provider directly for API usage, but get a reduced Thallus subscription rate and slightly higher platform limits.

| Plan | Standard Price | BYOK Price | Key Difference |
| --- | --- | --- | --- |
| Starter | $45/mo | $30/mo | 125 investigations (vs 100) |
| Pro | $149/mo | $99/mo | 440 investigations (vs 350) |
| Enterprise | Custom | Custom | Custom limits |
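Whether BYOK comes out cheaper depends entirely on your direct API spend. As a back-of-envelope sketch (plan prices from the table above; the API spend figure is a hypothetical estimate, not a Thallus number):

```python
# Hypothetical cost comparison of Standard vs BYOK plans.
# Plan prices are from the pricing table; api_spend is your own estimate
# of what you'd pay the model provider directly each month.

def monthly_cost(plan_price: float, api_spend: float = 0.0) -> float:
    """Total monthly cost: Thallus subscription plus direct provider API spend."""
    return plan_price + api_spend

# Pro plan: BYOK saves $50/mo on the subscription, so it comes out ahead
# whenever your direct API spend stays under $50/mo.
standard = monthly_cost(149)                    # $149, inference included
byok = monthly_cost(99, api_spend=35)           # $99 + $35 example API usage
print(byok < standard)                          # True at this usage level
```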

See Billing & Plans for full plan comparisons.


Getting started

Before configuring BYOK, you must accept the BYOK disclaimer acknowledging that you're responsible for your own API costs and that Thallus doesn't control the third-party provider's data handling. This is a one-time acceptance.

Navigate to Settings → Models to begin configuration.


Supported LLM providers

| Provider | Available Models | Notes |
| --- | --- | --- |
| Google Gemini | Gemini 3 Pro, Gemini 3 Flash | Direct Google AI API |
| OpenAI | GPT-5, GPT-5 Mini, GPT-5 Nano, GPT-4.1, GPT-4.1 Mini, GPT-4.1 Nano, GPT-4o, GPT-4o Mini, o3-mini | Most model options |
| Anthropic | Claude Opus 4.6, Claude Sonnet 4.6, Claude Haiku 4.5, Claude Sonnet 4 | |
| xAI | Grok 4, Grok 3, Grok 3 Mini | |
| OpenRouter | Routes to Claude, GPT, DeepSeek, Mistral, and others | Multi-provider with single API key |
| Azure OpenAI | GPT-5, GPT-5 Mini, GPT-5 Nano, GPT-4.1, GPT-4o | Requires endpoint URL; optional API version |
| Vertex AI | Gemini 3 Pro, Gemini 3 Flash | API key or Service Account JSON; requires project ID and region |

Supported embedding providers

Embedding configuration is optional. If you don't configure an embedding provider, Thallus absorbs the embedding cost on your behalf using the platform's default embedding model.

| Provider | Embedding Models | Notes |
| --- | --- | --- |
| Google Gemini | text-embedding-004 | |
| OpenAI | text-embedding-3-large, text-embedding-3-small | |
| Azure OpenAI | text-embedding-3-large, text-embedding-3-small | Requires endpoint URL |
| OpenRouter | openai/text-embedding-3-large, openai/text-embedding-3-small | |
| Vertex AI | text-embedding-004 | |

Anthropic and xAI do not offer embedding models, so embedding configuration is not available when using those LLM providers (unless you configure a separate embedding provider).
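The availability rules above can be captured as a simple lookup. A sketch, with provider identifiers as plain strings (illustrative, not an actual Thallus API):

```python
# Embedding models offered per provider, from the table above.
# Anthropic and xAI have no entry: they offer no embedding models.
EMBEDDING_MODELS = {
    "google-gemini": ["text-embedding-004"],
    "openai": ["text-embedding-3-large", "text-embedding-3-small"],
    "azure-openai": ["text-embedding-3-large", "text-embedding-3-small"],
    "openrouter": ["openai/text-embedding-3-large", "openai/text-embedding-3-small"],
    "vertex-ai": ["text-embedding-004"],
}

def embedding_options(provider: str) -> list[str]:
    """Models selectable for embeddings; an empty list means the platform's
    default embedding model is used (with Thallus absorbing the cost)."""
    return EMBEDDING_MODELS.get(provider, [])

print(embedding_options("anthropic"))  # [] -> platform handles embeddings
```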


Model tiers

Thallus uses three model tiers that map to different task complexities:

| Tier | Purpose | Default (OpenAI example) |
| --- | --- | --- |
| Fast | Speed-critical tasks | GPT-5 Nano |
| Medium | Balanced tasks | GPT-5 Mini |
| Large | Complex reasoning and analysis | GPT-5 |

When you select a provider, sensible defaults are assigned for each tier. You can override any tier with a different model from the same provider.


Validation

Before saving, click Validate to test your API key against the selected provider. Thallus makes a real API call to verify the key works and the selected models are accessible.


Validation checks both LLM and embedding separately. You can have a valid LLM configuration with no embedding configuration — in that case, the platform handles embeddings for you.


How BYOK flows through the pipeline

When you send a message, Thallus checks whether you have an active BYOK configuration before making any LLM call:

Query → Model Resolver → BYOK config? → your provider

Thallus checks your configuration on every LLM call — during planning, agent execution, and synthesis. If you have an active BYOK configuration, your API key and selected models are used. If not, the platform defaults are used.

For BYOK-plan users, this configuration is required. If you're on a BYOK plan but haven't configured your keys yet, Thallus will prompt you to complete setup before processing queries. This ensures BYOK-plan users never silently consume platform resources.

Removing BYOK configuration

You can remove your BYOK configuration at any time from Settings → Models by clicking Remove Configuration. This reverts all LLM calls to the platform defaults. Your API keys are securely deleted from storage.


Provider-specific setup

Some providers require additional configuration beyond an API key:

| Provider | Extra Fields |
| --- | --- |
| Azure OpenAI | Endpoint URL (required), API version (optional) |
| Vertex AI | Authentication method: API key or Service Account JSON. Project ID and region required. |
| OpenRouter | No extra fields — single API key routes to all supported models |

For Azure OpenAI, the endpoint URL is your Azure resource URL (e.g., https://myresource.openai.azure.com/). For Vertex AI with Service Account authentication, paste the full JSON key file content into the credentials field.
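A sketch of the required-field checks implied by the table above (the field names are illustrative — the actual settings form may label them differently):

```python
# Required extra fields per provider, from the table above.
# Field names are hypothetical identifiers for illustration.
REQUIRED_EXTRA_FIELDS = {
    "azure-openai": ["endpoint_url"],       # api_version is optional
    "vertex-ai": ["project_id", "region"],  # plus API key OR Service Account JSON
}

def missing_fields(provider: str, config: dict) -> list[str]:
    """Return the required extra fields absent from a provider config."""
    return [field for field in REQUIRED_EXTRA_FIELDS.get(provider, [])
            if not config.get(field)]

print(missing_fields("azure-openai", {"api_key": "..."}))  # ['endpoint_url']
print(missing_fields("openrouter", {"api_key": "..."}))    # [] — key only
```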