Large Language Models (LLMs) Available on Our Platform

Large Language Models (LLMs) power the AI Agents and workflows within indigo.ai. These models enable natural language understanding, reasoning, and content generation, so businesses can automate interactions and provide intelligent, context-aware responses. This article covers the LLMs available in indigo.ai, their capabilities, and best practices for selecting the right model for your needs.

Understanding LLMs in indigo.ai

At indigo.ai, we integrate multiple LLMs to offer a flexible, high-performance AI ecosystem. Different models are optimized for speed, power, or reasoning, allowing users to choose the best fit for their use case.

Here’s how we categorize them:

| Speed ⚡ | Power 🚀 | Reasoning 🧠 |
| --- | --- | --- |
| gpt-4.1-mini | gpt-4.1 | gpt-5.1 |
| gpt-4.1-nano | gemini-2.5-flash | gpt-5-mini |
| gpt-4o-mini | claude-3.7-sonnet | gpt-5-nano |
| gemini-2.0-flash | claude-4.5-sonnet | gemini-2.5-pro |
| gemini-2.5-flash-lite | mistral-small-3.2 | claude-4.5-haiku |

LLM Categories and Their Use Cases

  • Speed: Models that prioritize response time over advanced reasoning. Best for real-time interactions where immediate feedback is essential.

  • Power: High-performance models with strong generative capabilities, designed for complex tasks but with longer response times.

  • Reasoning: Models designed to generate a reasoning process before providing an answer. This feature makes them capable of solving complex tasks that require deep reasoning, at the cost of higher latency.

Models in bold within each category represent the recommended models based on performance and reliability.

List of LLMs in indigo.ai

Available Models, Providers, and Server Locations

| Model Name in Platform | LLM Backend | Provider | Server Location | Comment |
| --- | --- | --- | --- | --- |
| gpt-4.1-mini (EU) | gpt-4.1-mini-2025-04-14 | Microsoft Azure | Sweden | Default |
| gpt-4.1 (EU) | gpt-4.1-2025-04-14 | Microsoft Azure | Sweden | |
| gpt-4.1-nano (EU) | gpt-4.1-nano-2025-04-14 | Microsoft Azure | Sweden | |
| gpt-4o (EU) | gpt-4o-2024-05-13 | Microsoft Azure | Sweden | |
| gpt-4o-mini (EU) | gpt-4o-mini-2024-07-18 | Microsoft Azure | Sweden | |
| gpt-5.1 (EU) | azure-se-gpt-5.1 | Microsoft Azure | Sweden | |
| gpt-5-mini (EU) | azure-se-gpt-5-mini | Microsoft Azure | Sweden | |
| gpt-5-nano (EU) | azure-se-gpt-5-nano | Microsoft Azure | Sweden | |
| gemini-2.5-pro (EU) | gemini-2.5-pro | Google | Belgium | |
| gemini-2.5-flash (EU) | gemini-2.5-flash | Google | Belgium | |
| gemini-2.5-flash-lite (EU) | gemini-2.5-flash-lite | Google | Belgium | |
| gemini-2.0-flash (EU) | gemini-2.0-flash-001 | Google | Belgium | |
| claude-4.5-sonnet (EU) | claude-sonnet-4-5@20250929 | Google Vertex | Belgium | |
| claude-4.5-haiku (EU) | claude-haiku-4-5@20251001 | Google Vertex | Belgium | |
| claude-3.7-sonnet (EU) | claude-3-7-sonnet@20250219 | Google Vertex | Belgium | |
| mistral-small-3.2 (EU) | mistral-small-2506 | Mistral | Sweden | |
| maestrale-chat (self-hosted) | hf.co/mii-llm/maestrale-chat-v0.4-beta-GGUF | indigo.ai | Germany | |

Default Model in indigo.ai

By default, we use gpt-4.1-mini (EU) in our AI Agents and workflows. This model is selected because:

  • ✅ It offers a strong balance between performance and response time.

  • ✅ It is hosted on Microsoft Azure EU servers, ensuring compliance with European data regulations.

  • ✅ It handles everyday reasoning well while maintaining a reasonable token cost and latency.

However, you can choose to use different models based on your specific requirements.

How to Choose the Right Model

Selecting the best model depends on several factors, including response speed, accuracy, reasoning ability, and token consumption. Here are some guidelines:

1. Prioritize Speed (Fastest Response Time)

Use gpt-4.1-mini if:

✔ You need real-time responses.
✔ Your use case involves quick user interactions.
✔ Advanced reasoning is not the top priority.

2. Prioritize Power

Use gpt-4.1 or gpt-5.1 if:

✔ You need deep contextual understanding.
✔ Your use case involves complex responses (e.g., legal, medical, or technical AI agents).
✔ You’re willing to trade speed for accuracy.

3. Prioritize Reasoning

Use gpt-5.1 or gemini-2.5-pro if:

✔ Your task requires a multi-step reasoning process before answering.
✔ Accuracy on complex problems matters more than response time.
✔ Higher latency is acceptable.
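The selection guidelines above can be sketched as a simple decision helper. This is a hypothetical illustration of the decision order, not a platform API; only the model names are taken from the platform list:

```python
# Hypothetical helper applying the selection guidelines above.
# Not an indigo.ai API -- purely an illustration of the decision order.
def pick_model(needs_reasoning: bool = False, needs_power: bool = False) -> str:
    """Prefer reasoning, then power, then the fast platform default."""
    if needs_reasoning:
        return "gpt-5.1"      # reasoning: multi-step answers, higher latency
    if needs_power:
        return "gpt-4.1"      # power: complex responses, slower replies
    return "gpt-4.1-mini"     # speed: the platform default, real-time replies
```

Calling `pick_model()` with no arguments returns the platform default, gpt-4.1-mini.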

Best Practices for Choosing an LLM in Prompts

Impact of Model Selection on Performance

When configuring your AI Agent in indigo.ai, the model you choose affects:

  • Response Length: More powerful models generate more detailed responses but consume more tokens.

  • Accuracy: Higher-end models provide better coherence and logical reasoning.

  • Speed: Faster models provide instant replies but may lack depth in reasoning.

Conclusion

The indigo.ai platform offers a diverse selection of LLMs, each optimized for different use cases. Whether you need fast interactions, a balanced approach, or maximum reasoning power, selecting the right model is key to optimizing your AI’s performance. By default, we recommend gpt-4.1-mini (EU) for most workflows, but users can choose based on their specific requirements.

Understanding LLM capabilities allows businesses to build smarter, more efficient AI Agents, ensuring they meet customer expectations with high-quality automated interactions.
