AI Assistant Configuration
Configure the LLM model, reasoning settings, prompts, fallback behavior, and RAG document retrieval for the AI assistant.
The AI Assistant section controls how the application's built-in AI responds to messages. It is located on the Configuration page.
Model Settings
Primary Model
You select the LLM model used for generating responses. The model selector fetches available models from the configured provider and displays them grouped by provider with search filtering.
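The grouping and filtering behavior of the model selector can be sketched as follows. This is an illustrative example, not the application's actual implementation; the `group_and_filter_models` function and the model-entry shape (`provider`, `id`) are assumptions.

```python
from collections import defaultdict

def group_and_filter_models(models, query=""):
    """Group model entries by provider, keeping only those whose id
    matches the search query (case-insensitive substring).
    Hypothetical helper; the real selector's data shape may differ."""
    groups = defaultdict(list)
    for model in models:
        if query.lower() in model["id"].lower():
            groups[model["provider"]].append(model["id"])
    return dict(groups)

# Example catalog as it might be fetched from the configured provider.
models = [
    {"provider": "openai", "id": "gpt-4o"},
    {"provider": "openai", "id": "gpt-4o-mini"},
    {"provider": "anthropic", "id": "claude-sonnet-4"},
]
```

With an empty query, every model appears under its provider group; typing a search term narrows each group to matching model ids.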
Reasoning Effort
You set how much reasoning the model applies before responding. Values range from none through minimal, low, and medium to high. Higher effort produces more thorough answers but increases latency and token usage.
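The ordered scale above can be captured as a simple lookup. This is a hedged sketch, not the application's code; the `EFFORT_LEVELS` list and `effort_rank` helper are hypothetical names for illustration.

```python
# Hypothetical ordering of reasoning-effort levels, lowest to highest.
EFFORT_LEVELS = ["none", "minimal", "low", "medium", "high"]

def effort_rank(level):
    """Return the position of a reasoning-effort level on the scale,
    raising on values outside the configured set."""
    if level not in EFFORT_LEVELS:
        raise ValueError(f"unknown reasoning effort: {level!r}")
    return EFFORT_LEVELS.index(level)
```

Encoding the levels as an ordered list makes comparisons straightforward, for example when deciding whether a configured effort exceeds some latency budget.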