What kind of providers of AI models are there?
Diversity of providers and models in nele.ai
nele.ai is a platform that provides a wide range of generative language models from various providers. The partners currently available include renowned companies such as OpenAI, Microsoft Azure, Google, and Anthropic. In addition, new models are constantly being integrated in order to always offer the latest developments in the field of artificial intelligence.
The models currently available include:
- Advanced reasoning models such as o3, o3-pro and o4-mini
- The latest GPT generation with GPT-5, GPT-4.1 and their variants
- Google Gemini 2.5 Pro and Flash models
- Claude 4 Opus, Claude 4 Sonnet and Claude 3 Haiku from Anthropic
- Mistral Large and Small for specialized applications
Through the partnerships, nele.ai always provides the latest AI models from these providers and ensures compliance with high security and data protection standards.
European server locations as an alternative to the USA
It is particularly important that powerful AI models are also offered on European servers. This enables organizations that need to keep their data within Europe to use these models without restriction. For this reason, nele.ai offers almost all AI models on European servers as well.
Models available on European servers include:
- o3, o4-mini via Azure infrastructure
- GPT-4.1 series and GPT-4o models via Azure
- Google Gemini 2.5 Pro and Flash
- Claude 4 Sonnet and Claude 3 Haiku
- Mistral Large and Small
Expanded functionalities of AI models
The available AI models offer various specialized functions:
Reasoning capabilities
Modern models such as o3, o3-pro, o4-mini, GPT-5 and their variants have advanced reasoning capabilities that enable complex, logical conclusions and multi-stage problem solutions.
Vision functionality
Most current models support Vision features for analyzing and processing image content, including all GPT-4.1, GPT-4, GPT-5, Gemini 2.5, and Claude models.
Multimodal applications
In addition to text models, specialized models are available:
- Image generation: DALL·E 3 (Azure and OpenAI), GPT Image 1
- Audio processing: Whisper for speech recognition and transcription
AI models and token sizes in a chat context
The AI models offered vary in the token sizes of their chat context (see also our blog post on the difference between knowledge base (RAG) and chat context), with context sizes ranging from 128k up to 1 million tokens. It is important to understand what a token is: tokens are the smallest units of text that AI models process. A token can be a letter, a syllable, an abbreviation, or a whole word. Tokens are comparable to puzzle pieces that are put together to form answers.
Context sizes vary by model:
- Default context: 128k tokens (GPT-4o, Mistral models)
- Expanded contexts: 200k tokens (o3, o4-mini, Claude models)
- Large contexts: 400k tokens (GPT-5 series)
- Maximum contexts: 1 million tokens (GPT-4.1 series, Gemini 2.5)
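To illustrate how these limits matter in practice, here is a minimal Python sketch that checks whether a prompt (plus a reserved budget for the answer) fits a given model's chat context. The model identifiers and the reserved-answer default are assumptions for the example; the limits mirror the list above.

```python
# Context limits (in tokens) taken from the list above.
# Model identifiers are illustrative, not official API names.
CONTEXT_LIMITS = {
    "gpt-4o": 128_000,
    "mistral-large": 128_000,
    "o3": 200_000,
    "o4-mini": 200_000,
    "claude-4-sonnet": 200_000,
    "gpt-5": 400_000,
    "gpt-4.1": 1_000_000,
    "gemini-2.5-pro": 1_000_000,
}

def fits_in_context(model: str, prompt_tokens: int,
                    reserved_for_answer: int = 4_000) -> bool:
    """True if the prompt plus a reserved answer budget fits the model's context."""
    return prompt_tokens + reserved_for_answer <= CONTEXT_LIMITS[model]
```

A long conversation that no longer fits a 128k model could, under this scheme, simply be routed to a model with a larger context such as the GPT-4.1 series.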
The chat context limits the number of tokens that can be processed in a chat. As a rule of thumb, 750 English words correspond to around 1,000 tokens, while in German 1,000 tokens correspond to around 350 words.
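The rule of thumb above can be sketched as a small helper. The ratios are taken directly from the figures in the text; actual token counts always depend on the model's tokenizer.

```python
# Rough word-to-token conversion using the rules of thumb above:
# ~750 English words ≈ 1,000 tokens; ~350 German words ≈ 1,000 tokens.
# Real counts depend on the model's tokenizer.
TOKENS_PER_WORD = {"en": 1000 / 750, "de": 1000 / 350}

def estimate_tokens(word_count: int, language: str = "en") -> int:
    """Estimate the token count of a text from its word count."""
    return round(word_count * TOKENS_PER_WORD[language])
```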
Billing through our flexible and transparent pricing model
The cost of AI models at nele.ai varies depending on the model used and the number of tokens or words. For language models, billing is per token, while for image models, the price depends, for example, on the desired image resolution. nele.ai has introduced a flexible and transparent pricing model based on credits.
The particular advantage of nele.ai is usage-based billing per user instead of fixed monthly fees. This allows every employee of an organization to have access without incurring lump-sum costs. Since employees' demand for generative AI varies, this model ensures fair and reasonable costs.
An important factor in this model is the AI volume consumption factor, which describes the ratio of costs to credit consumption. The factors vary depending on the model and its performance, with newer and more powerful models typically having higher factors.
This structure enables effective cost optimization and better use of resources.
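The relationship between token usage, model factor, and credit consumption can be sketched as follows. All numeric values (factors and base rate) are invented for illustration; the actual values are set per model by nele.ai.

```python
# Illustrative sketch of credit-based billing with a per-model
# AI volume consumption factor. All numbers are assumptions,
# not nele.ai's actual pricing.
MODEL_FACTORS = {"gpt-4o": 1.0, "gpt-5": 2.5, "o3-pro": 4.0}  # invented factors
CREDITS_PER_1K_TOKENS = 1.0  # assumed base rate

def credits_used(model: str, tokens: int) -> float:
    """Credits consumed = tokens * base rate * model-specific factor."""
    return tokens / 1000 * CREDITS_PER_1K_TOKENS * MODEL_FACTORS[model]
```

In this scheme, routing routine tasks to a low-factor model and reserving high-factor reasoning models for complex work is what makes the cost optimization mentioned above possible.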
In addition, nele.ai provides an administration interface (manage.nele.ai) for managing the available AI models and the associated costs. Administrators can determine which AI models are available for their team and restrict individual users' AI volume.