Large Language Models (LLMs) Available on Our Platform
Large Language Models (LLMs) play a crucial role in powering the AI Agents and workflows within indigo.ai. These models enable natural language understanding, reasoning, and content generation, ensuring businesses can automate interactions and provide intelligent, context-aware responses. This article explores the LLMs available in indigo.ai, their capabilities, and best practices for selecting the right model for your needs.
Understanding LLMs in indigo.ai
At indigo.ai, we integrate multiple LLMs to offer a flexible, high-performance AI ecosystem. Different models are optimized for speed or power, allowing users to choose the best fit for their use case.
Here's an overview of the available models:
gpt-4o-mini
gpt-4o
gpt-4.1-mini
gemini-1.5-pro
gpt-4.1-nano
Claude-3.7-sonnet
gemini-2.0-flash
gpt-4.1
mistral-small-3.1
LLM Categories and Their Use Cases
Speed: Models that prioritize response time over advanced reasoning. Best for real-time interactions where immediate feedback is essential.
Power: High-performance models with strong generative capabilities, designed for complex tasks but with longer response times.
Models shown in bold within each category are the recommended choices, based on performance and reliability.
List of LLMs in indigo.ai
Available Models, Providers, and Server Locations
| Model | Version | Provider | Server Location | Notes |
| --- | --- | --- | --- | --- |
| azure-gpt-4o-mini (EU) | gpt-4o-mini-2024-07-18 | Microsoft Azure | Sweden | Default |
| azure-gpt-4o (EU) | gpt-4o-2024-05-13 | Microsoft Azure | Sweden | |
| gemini-1.5-pro (EU) | gemini-1.5-pro-002 | | Belgium | |
| gemini-2.0-flash (EU) | gemini-2.0-flash-001 | | Belgium | |
| claude-3.7-sonnet (EU) | claude-3-7-sonnet@20250219 | Google Vertex | Belgium | |
| gpt-4.1 (EU) | gpt-4.1-2025-04-14 | Microsoft Azure | Sweden | |
| gpt-4.1-mini (US) | gpt-4.1-mini-2025-04-14 | OpenAI | USA | |
| gpt-4.1-nano (EU) | gpt-4.1-nano-2025-04-14 | Microsoft Azure | Sweden | |
| maestrale-chat (self-hosted) | hf.co/mii-llm/maestrale-chat-v0.4-beta-GGUF | indigo.ai | Germany | |
| mistral-small-3.1 (EU) | mistral-small-2503 | Mistral | Sweden | |
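As an illustration, the table above can be turned into a small lookup for checking where a given model is served from. The dictionary below is transcribed from the table; the helper function `is_eu_hosted` is our own sketch, not part of the indigo.ai platform.

```python
# Server locations transcribed from the table above.
# The helper below is illustrative only, not an indigo.ai API.
MODEL_LOCATIONS = {
    "azure-gpt-4o-mini": "Sweden",
    "azure-gpt-4o": "Sweden",
    "gemini-1.5-pro": "Belgium",
    "gemini-2.0-flash": "Belgium",
    "claude-3.7-sonnet": "Belgium",
    "gpt-4.1": "Sweden",
    "gpt-4.1-mini": "USA",
    "gpt-4.1-nano": "Sweden",
    "maestrale-chat": "Germany",
    "mistral-small-3.1": "Sweden",
}

EU_COUNTRIES = {"Sweden", "Belgium", "Germany"}

def is_eu_hosted(model: str) -> bool:
    """Return True if the model's servers are in the EU, per the table."""
    return MODEL_LOCATIONS.get(model) in EU_COUNTRIES
```

For example, `is_eu_hosted("gpt-4.1-mini")` returns False, since that model is served from the USA by OpenAI.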
Default Model in indigo.ai
By default, we use azure-gpt-4o-mini (EU) in our AI Agents and workflows. This model is selected because:
- It offers a strong balance between performance and response time.
- It is hosted on Microsoft Azure EU servers, ensuring compliance with European data regulations.
- It supports advanced reasoning capabilities while maintaining a reasonable token cost and latency.
However, you can choose to use different models based on your specific requirements.
How to Choose the Right Model
Selecting the best model depends on several factors, including response speed, accuracy, reasoning ability, and token consumption. Here are some guidelines:
1. Prioritize Speed (Fastest Response Time)
Use gpt-4o-mini if:
- You need real-time responses.
- Your use case involves quick user interactions.
- Advanced reasoning is not the top priority.
2. Prioritize Power
Use gpt-4o or gemini-1.5-pro if:
- You need deep contextual understanding.
- Your use case involves complex responses (e.g., legal, medical, or technical AI agents).
- You're willing to trade speed for accuracy.
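The two guidelines above boil down to a simple routing rule: send reasoning-heavy tasks to a Power model and everything else to a Speed model. The model names come from this article, but the function itself is a hypothetical sketch, not part of the indigo.ai platform.

```python
# Hypothetical routing sketch based on the Speed/Power guidelines above.
def choose_model(needs_deep_reasoning: bool) -> str:
    """Route complex tasks to a Power model, everything else to a Speed model."""
    if needs_deep_reasoning:
        # Legal, medical, or technical agents: trade speed for accuracy.
        return "gpt-4o"
    # Real-time, quick user interactions: prioritize response time.
    return "gpt-4o-mini"
```

A real workflow would likely branch on richer signals (conversation topic, message length, agent type), but the trade-off being made is the same.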
Best Practices for Choosing an LLM in Prompts
Impact of Model Selection on Performance
When configuring your AI Agent in indigo.ai, the model you choose affects:
Response Length: More powerful models generate more detailed responses but consume more tokens.
Accuracy: Higher-end models provide better coherence and logical reasoning.
Speed: Faster models provide instant replies but may lack depth in reasoning.
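One practical way to compare these trade-offs is to time a few sample requests per model and log the response length. The harness below is a generic sketch: `call_model` is a stand-in for whatever client function your workflow uses, and the 4-characters-per-token figure is a rough rule of thumb for English text, not an exact count.

```python
import time

def benchmark(call_model, model: str, prompt: str) -> dict:
    """Time one request and report latency plus a rough token estimate.

    `call_model` is a placeholder for your actual client function; it should
    accept (model, prompt) and return the response text.
    """
    start = time.perf_counter()
    reply = call_model(model, prompt)
    latency = time.perf_counter() - start
    # Rough estimate: ~4 characters per token for English text.
    return {"model": model, "latency_s": latency, "approx_tokens": len(reply) // 4}
```

Running the same prompt through a Speed model and a Power model with a harness like this makes the length/accuracy/latency trade-offs described above concrete for your own traffic.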
Conclusion
The indigo.ai platform offers a diverse selection of LLMs, each optimized for different use cases. Whether you need fast interactions, a balanced approach, or maximum reasoning power, selecting the right model is key to optimizing your AI's performance. By default, we recommend azure-gpt-4o-mini (EU) for most workflows, but users can choose based on their specific requirements.
Understanding LLM capabilities allows businesses to build smarter, more efficient AI Agents, ensuring they meet customer expectations with high-quality automated interactions.