Forge supports multiple AI providers, allowing you to choose the model and service that best fits your needs. You can configure providers using the interactive login command or through environment variables (deprecated).
## Quick Start
The easiest way to configure a provider is the interactive login flow. The login command will:
- Display a list of available providers
- Guide you through entering the required credentials
- Securely store your credentials locally
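For example, the flow might look like the sketch below. The `forge login` invocation is an assumption for illustration, not the confirmed command name; check your installation's help output for the actual invocation.

```shell
# Hypothetical command name -- verify against your Forge version's help output.
forge login
# You will then be prompted to:
#   1. pick a provider from the list (OpenAI, Anthropic, OpenRouter, ...)
#   2. enter the API key (plus any extra parameters the provider needs)
#   3. confirm; the credentials are saved to the local credentials file
```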
## Managing Providers
### List Available Providers
### Add or Update Provider Credentials
### Remove Provider Credentials
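As a sketch, these three operations might map to subcommands along the following lines. The `forge provider ...` names are assumptions for illustration, not the confirmed CLI surface; consult the built-in help for the real commands.

```shell
# Hypothetical subcommand names -- shown only to illustrate the three operations.
forge provider list     # show available providers and which have stored credentials
forge provider add      # add or update credentials for a provider (interactive)
forge provider remove   # delete the stored credentials for a provider
```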
## Supported Providers
Forge supports a wide range of AI providers.
### Cloud Providers
- OpenAI - GPT-4, GPT-3.5, and O-series models
- Anthropic/Claude - Claude Sonnet, Opus, and Haiku models
- OpenRouter - Access multiple models through a single API
- Google Vertex AI - Gemini and Claude models on Google Cloud
- AWS Bedrock - Claude, Llama, and other models on AWS
### OpenAI-Compatible Providers
- Cerebras
- XAI (Grok)
- Z.AI (GLM models)
- DeepSeek
- IO Intelligence
- Requesty
### Custom Providers
- Custom OpenAI-Compatible - Any OpenAI-compatible API
- Custom Anthropic-Compatible - Any Anthropic-compatible API
- Local inference engines (Ollama, LM Studio, llama.cpp, vLLM)
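As a concrete example of the local-inference case: Ollama serves an OpenAI-compatible API at `http://localhost:11434/v1`, so a custom OpenAI-compatible provider can point at that base URL. You can verify the endpoint directly with curl (assumes a running Ollama server and an already-pulled model; the model name below is only an example):

```shell
# Requires `ollama serve` running locally with the model already pulled.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```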
## Provider Configuration
Each provider requires:
- API Key - Your authentication credentials
- URL Parameters (optional) - Some providers require additional parameters like project IDs or regions
## Credential Storage
Credentials are stored securely in:
- Location: `~/.config/forge/credentials.json` (Linux/macOS) or `%APPDATA%\forge\credentials.json` (Windows)
- Format: JSON with API keys and configuration parameters
- Security: File permissions are set to be readable only by the owner
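The stored file might look roughly like the sketch below. The key names (`api_key`, `url_params`) and overall shape are illustrative assumptions; the real schema is internal to Forge and may differ, so prefer the login command over editing this file by hand.

```json
{
  "openai": {
    "api_key": "sk-..."
  },
  "vertex_ai": {
    "api_key": "...",
    "url_params": {
      "project_id": "my-gcp-project",
      "location": "us-central1"
    }
  }
}
```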
## Selecting a Model
After configuring a provider, you can:
- Set a default model in `forge.yaml`
- Switch models during a session using the `/model` command
- List available models for your configured providers
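For example, a default model could be pinned in `forge.yaml` like this. The top-level `model` key and the model identifier shown are assumptions for illustration; check the forge.yaml reference for the exact schema, and use the in-session model list to find valid identifiers for your configured providers.

```yaml
# forge.yaml -- illustrative fragment; key name and model id are examples only
model: anthropic/claude-sonnet-4
```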
## Migration from Environment Variables
If you have existing credentials in environment variables, Forge will automatically migrate them to file-based storage on first run. You’ll see a message indicating which providers were migrated.
## Next Steps
- Configure OpenRouter for access to multiple models
- Set up Anthropic/Claude for advanced reasoning
- Configure OpenAI for GPT models
- Use Google Vertex AI for Gemini models
- Set up AWS Bedrock for enterprise deployments