# AI Configuration
QA Hub uses AI to generate BDD scenarios and structured test cases from ticket acceptance criteria. You configure which AI provider to use and manage credentials in Settings → AI Model.
## Supported providers
| Provider | Models | Best for |
|---|---|---|
| Google Gemini | Gemini 2.0 Flash, 2.5 Pro, 1.5 Pro | Fast generation, generous free tier |
| OpenAI | GPT-4o, GPT-4o Mini, o1-mini | Wide availability, strong reasoning |
| Anthropic Claude | Claude Sonnet 4, Claude Haiku | Long context, nuanced output |
| Ollama (local) | Any Ollama-served model | Air-gapped / on-premise deployments |
## API key security
API keys are stored encrypted in the database using AES-256-GCM and are never exposed in logs or API responses.
## Switching providers
- Go to Settings → AI Model.
- Select a provider from the dropdown.
- Enter your API key for that provider.
- Click Test connection to verify.
- Click Save.
The new provider takes effect immediately for all subsequent generation requests.
## Quota exhaustion
When an API key hits its billing quota, QA Hub sets a `quotaExhausted` flag and shows a warning banner. This prevents repeated failed calls. To recover:
- Top up your account with the AI provider.
- Go to Settings → AI Model and click Clear quota flag.
Transient rate limits (per-minute/per-second) and service outages do not set the quota flag — they fail gracefully and can be retried.
## Thinking mode
For providers that support extended thinking (Gemini, Ollama), you can disable it in Settings to get faster responses at the cost of some reasoning depth. The toggle is provider-specific.