Documentation

Public guides, reference, and product workflows.

Integrations

Use Integrations to connect the external providers and systems that Versalist should be allowed to use on your behalf.

Control plane
/profile/integrations

Store and manage provider credentials in one logged-in surface.

Popular providers
OpenAI, Anthropic, Google, Azure, Bedrock

The page prioritizes the providers most teams start with.

Security model
Encrypted BYOK

Provider credentials are handled separately from platform API keys.

Related route
/profile/api-keys

Use platform keys for Versalist auth, not for model-provider access.

Critical distinction
Integrations store provider credentials. API keys authenticate against Versalist.
If you want Versalist to call OpenAI, Anthropic, Google Gemini, Azure, Bedrock, or other supported providers through your own provider accounts, use Integrations. If you want the CLI or scripts to authenticate against Versalist challenge APIs, use platform API keys instead.
Read platform API key docs
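The two auth layers can be sketched as a simple lookup. This is an illustrative sketch only: names like PLATFORM_API_KEY, PROVIDER_KEYS, and credential_for are hypothetical, not real Versalist configuration or API surface.

```python
# Hypothetical sketch of the two auth layers described above.
# All names here are illustrative, not real Versalist variables.

PLATFORM_API_KEY = "vl-platform-..."   # authenticates CLI/scripts against Versalist
PROVIDER_KEYS = {                      # BYOK credentials stored via Integrations
    "openai": "sk-...",
    "anthropic": "sk-ant-...",
}

def credential_for(call: str) -> str:
    """Return the credential a call should use, per the distinction above."""
    if call == "versalist-api":
        # Challenge APIs, CLI, MCP: platform key, never a provider key.
        return PLATFORM_API_KEY
    # Routed model inference: the provider's own stored key.
    return PROVIDER_KEYS[call]
```

The point of the split is that rotating one layer never touches the other: a provider key rotation does not break CLI auth, and vice versa.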

Foundation-model providers

The integrations page is organized around the providers you are most likely to route inference through. The goal is not to store every possible credential you own. It is to connect the providers that are actually part of your Versalist workflow.

Popular
OpenAI and Anthropic
Common starting points for teams running prompt, agent, or evaluation workflows inside Versalist.
Useful for general reasoning and prompt workflows
Good default providers for many challenge types
Manage provider keys
Popular
Google Gemini, Azure, and Amazon Bedrock
Important when your infrastructure, compliance, or procurement model already points at these providers.
Aligns Versalist usage with existing enterprise infrastructure
Lets you route work through the provider environment you already trust
Manage provider keys
Specialized
Additional providers
The integration system also supports a longer tail of providers for specialized performance or infrastructure preferences.
Review specialized providers

How to add a provider key

The integrations route is meant to stay operational and explicit. Add the key, confirm the provider is enabled, and then leave the page unless your routing or credentials need to change.

1
Open the integrations page
Start from the logged-in provider management route so you are working against the right account.
Open integrations
2
Choose the provider you actually use
Do not configure everything by default. Add the providers your workflow depends on now.
3
Save the credential and check enabled state
A stored provider can still be disabled, which is useful when you want to keep the key on hand without routing traffic to it.
4
Return only when rotation or routing changes
The best integrations pages are boring. If you are living here constantly, your workflow is probably unstable.
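The steps above, especially the stored-but-disabled caveat in step 3, can be modeled as a minimal in-memory store. The class and method names are assumptions for illustration; the real surface at /profile/integrations is a UI, not this API.

```python
# Minimal in-memory sketch of the add-key flow above.
# IntegrationStore and its methods are hypothetical, for illustration only.

class IntegrationStore:
    def __init__(self):
        self._keys = {}      # provider -> stored credential
        self._enabled = {}   # provider -> enabled state

    def save(self, provider: str, credential: str, enabled: bool = True):
        # Step 3: save the credential and set its enabled state together.
        self._keys[provider] = credential
        self._enabled[provider] = enabled

    def is_enabled(self, provider: str) -> bool:
        # A stored key can still be disabled (step 3's caveat):
        # the credential stays on hand without routing traffic to it.
        return self._enabled.get(provider, False)

store = IntegrationStore()
store.save("openai", "sk-...")                        # step 2: add only what you use
store.save("anthropic", "sk-ant-...", enabled=False)  # kept on hand, not routed
```

Keeping "stored" and "enabled" as separate states is what makes step 4 possible: you can park a credential without deleting it.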

Integration depth

Provider support should be read as an operational contract, not a badge. Today, the live product surface is BYOK inference for configured providers. Custom endpoints and compute runtime adapters are roadmap work described in RFCs, not production support claims.

Roadmap discipline
D0-D4 is RFC vocabulary until each adapter is actually shipped.
The deep-compute planning work uses D0 through D4 to describe integration depth. Treat those labels as design vocabulary for now. A directory entry means the provider is relevant to the stack. It does not mean Versalist can run rollouts or training jobs there today.
Live today
D0 - BYOK inference
Store provider credentials and route supported model calls through user-managed provider accounts.
Credential storage and enabled state live in /profile/integrations
Provider keys stay separate from Versalist platform API keys
RFC stage
D1 - custom endpoints
Bring-your-own OpenAI-compatible or provider-hosted endpoints are planned, but should not be described as live runtime support yet.
Useful for teams already operating their own model endpoint
Requires explicit adapter support before public claims change
RFC stage
D2+ - compute runtime adapters
Cluster, serverless GPU, and runtime adapters belong to the deeper compute roadmap for container-shaped rollouts and RL workloads.
Relevant to long-running episodes, training jobs, and artifact capture
Not live until the compute adapter, job ledger, and rollback path are wired
Catalog rule
Directory entry is not runtime support
AI Tools entries can describe why a provider matters to the stack, but the card must not imply Versalist can execute against that provider unless the integration is live.
Use planned or RFC language for unshipped surfaces
Avoid partner-tier storytelling until measured runs exist

Related external systems

Some integrations are about inference providers. Others are about the systems your work already lives in. Keep those roles separate so the page remains understandable.

Auth and repos
GitHub
GitHub matters for repo-linked work, challenge submissions, and account linking in developer-first flows.
Review linked auth providers
Operational auth
Platform API keys
Use platform keys for CLI, MCP, and challenge APIs. They are related to integrations, but they are not a provider connector.
Manage platform API keys
Support surface
Docs and troubleshooting
When routing or auth feels unclear, use the docs rather than trying to infer the security model from the UI alone.
Read API docs

Operational guidance

Integrations should help you keep control of routing and credentials, not turn into a hidden source of cost drift or auth confusion. Treat the page as a routing contract.

Use only the providers you intend to route to
A smaller active provider set is easier to reason about than a page full of stale enabled keys.
Rotate keys when environments change
If teams, billing ownership, or provider policy changes, update the integration rather than hoping the old key keeps working forever.
Keep platform auth and provider auth separate
If a CLI or MCP flow fails, check platform API keys. If a routed model call fails, check provider integrations.
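The rotation guidance above amounts to updating the stored credential in place rather than letting a stale key linger. A minimal sketch, assuming a hypothetical record shape (the dict layout is not the real integration schema):

```python
# Illustrative rotation sketch for the guidance above.
# The record shape is an assumption, not the real integration data model.

integrations = {
    "openai": {"credential": "sk-old-...", "enabled": True},
}

def rotate(provider: str, new_credential: str) -> None:
    """Swap the credential while preserving the provider's enabled state."""
    record = integrations[provider]
    record["credential"] = new_credential  # the old key stops being used

rotate("openai", "sk-new-...")
```

Rotating in place, rather than adding a second entry, keeps the active provider set small and avoids the stale-enabled-key drift described above.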

Troubleshooting

Integration problems usually come from one of three places: the wrong provider was enabled, the credential is stale, or you are actually looking at the wrong auth layer entirely.

1
Confirm you are in the right auth layer
A provider-credential issue and a platform API-key issue are different problems with different fixes.
Read API-key distinction
2
Check whether the provider is enabled
A stored key that is disabled will not route traffic the way you expect.
3
Rotate or re-enter the credential
If the provider should be active, update the stored key rather than repeatedly retrying a stale one.
4
Escalate when the issue is workflow-level
If the problem affects the product flow rather than a single key, use the support or feedback routes.
Open support FAQ
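The four steps above can be sketched as an ordered triage function. The record shape and layer labels are hypothetical, introduced only to show the order of checks:

```python
# The four troubleshooting steps above as an ordered checklist.
# The integration record shape and return strings are illustrative assumptions.

def triage(integration: dict, failure_layer: str) -> str:
    # 1. Confirm the auth layer: platform-key problems are not fixed here.
    if failure_layer == "platform":
        return "check platform API keys"
    # 2. A stored but disabled provider will not route traffic.
    if not integration.get("enabled", False):
        return "enable the provider"
    # 3. A stale credential should be rotated, not retried.
    if integration.get("stale", False):
        return "rotate or re-enter the key"
    # 4. Otherwise the problem is workflow-level: escalate.
    return "escalate via support"
```

The ordering matters: checking the auth layer first prevents rotating a provider key to fix what is actually a platform-key failure.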