# AI Providers
Intend is designed to be model-agnostic. It handles the prompt engineering and context management, allowing you to plug in different AI backends.
## Supported Providers
> **Important:** You must specify which provider to use, either by setting the `"provider"` property in your `intend.config.json` or by passing the `--provider` flag to CLI commands.
### 1. Ollama (Local)

Run models locally on your machine for privacy and zero cost.

- **Config Provider:** `ollama`
- **Recommended Models:** `llama3.2`, `gemma3:4b`, `mistral`, `deepseek-coder`
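A model from the list above must be present locally before Ollama can serve it. Models are fetched with Ollama's standard `pull` command:

```shell
# Download a recommended model (requires Ollama to be installed and running).
# Any of the models listed above can be substituted here.
ollama pull llama3.2
```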
### 2. Google Gemini (Cloud)

Use Google's powerful Gemini models for faster generation and larger context windows.

- **Config Provider:** `gemini`
- **Recommended Models:** `gemini-2.5-flash-lite`
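Gemini is a hosted service, so you will also need an API key. Rather than storing it in the config file, you can export it in your shell using the environment variable this page documents:

```shell
# Keep the API key out of intend.config.json by exporting it instead.
export INTEND_GEMINI_API_KEY="YOUR_API_KEY"
```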
## Switching Providers
You can switch providers by editing your `intend.config.json` file:

```json
{
  "provider": "ollama",
  "model": "llama3.2"
}
```

Or for Gemini:
```json
{
  "provider": "gemini",
  "model": "gemini-2.5-flash-lite",
  "apiKey": "YOUR_API_KEY"
}
```

Note that JSON does not allow comments. If you prefer not to store the key in the file, omit `apiKey` and set the `INTEND_GEMINI_API_KEY` environment variable instead.
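The config-file/environment-variable fallback for the API key can be sketched in a few lines. The `resolve_api_key` helper below is illustrative, not Intend's actual implementation; only the `apiKey` field and `INTEND_GEMINI_API_KEY` variable names come from this page:

```python
import json
import os

def resolve_api_key(config: dict) -> "str | None":
    """Return the key from the config's "apiKey" field, falling back to
    the INTEND_GEMINI_API_KEY environment variable (hypothetical helper)."""
    return config.get("apiKey") or os.environ.get("INTEND_GEMINI_API_KEY")

# A config without an inline key, as parsed from intend.config.json.
config = json.loads("""
{
  "provider": "gemini",
  "model": "gemini-2.5-flash-lite"
}
""")

os.environ["INTEND_GEMINI_API_KEY"] = "demo-key"  # stand-in value
print(resolve_api_key(config))  # prints "demo-key" via the env-var fallback
```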