nele.ai — The billing system and how to use the AI volume efficiently
nele.ai is an application that combines various AI models for text, code and tables, and ensures that no personal data is sent to the AI models and that no sensitive company data is used for training purposes.
Models for generating audio, video and image files will also be available in the future.
A uniform billing unit (credits) was developed so that the various AI models and their billing systems can be combined in nele.ai.
To ensure cost predictability when using AI, a monthly data volume (AI volume) is booked with nele.ai, so there are no additional per-user costs. This way, nele.ai can be made available to every employee, and the cost risk is capped by the defined AI volume.
The AI volume can be adjusted at any time as required.
To help you get the most out of your AI volume, we've listed a few important points to consider:
1. Choose the right AI model for your task!
There are currently various AI models available to you to complete your tasks. Each AI model has its own consumption factor, reflecting its parameter size and processing effort.
Here are the most important differences:
OpenAI
- GPT 4o 128k
- Context size: 128,000 tokens
- Consumption factor: 0.25
- Description: Most advanced model with 128K context
- GPT 4 Turbo 128k
- Context size: 128,000 tokens
- Consumption factor: 0.5
- Description: New model with 128K context
- GPT 4 8k
- Context size: 8,192 tokens
- Consumption factor: 1
- Description: High-quality answers
- GPT 3.5 Turbo 4k
- Context size: 4,096 tokens
- Consumption factor: 0.04
- Description: Quick and cost-effective
- GPT 3.5 Turbo 16k
- Context size: 16,385 tokens
- Consumption factor: 0.08
- Description: Allows more words than GPT 3.5 Turbo 4k
- DALL·E 3
- Context size: 4,000 tokens
- Consumption factor: 20-60 credits/image
- Description: High-quality image generation
Anthropic
- Claude Opus 200k
- Context size: 200,000 tokens
- Consumption factor: 1
- Description: Best Anthropic model with 200K context
- Claude Sonnet 200k
- Context size: 200,000 tokens
- Consumption factor: 0.2
- Description: Balanced Anthropic model with 200K context
- Claude Haiku 200k
- Context size: 200,000 tokens
- Consumption factor: 0.02
- Description: Fastest Anthropic model with 200K context
Azure (Microsoft)
- GPT 4 Turbo 128k
- Context size: 128,000 tokens
- Consumption factor: 0.5
- Description: New model with 128K context
- GPT 4 8k
- Context size: 8,192 tokens
- Consumption factor: 1
- Description: High-quality answers
- GPT 4 32k
- Context size: 32,768 tokens
- Consumption factor: 2
- Description: Allows more words than GPT 4 8k
- GPT 3.5 Turbo 4k
- Context size: 4,096 tokens
- Consumption factor: 0.04
- Description: Quick and cost-effective
- DALL·E 3
- Context size: 4,000 tokens
- Consumption factor: 20-60 credits/image
- Description: High-quality image generation
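The consumption factors above determine how quickly tasks of different sizes use up your AI volume. The exact credit formula is defined by nele.ai; the sketch below only illustrates the relative scaling, assuming credits grow linearly with the amount of text processed times the model's consumption factor (an assumption, not the official formula):

```python
# Hypothetical sketch of relative credit consumption per model.
# ASSUMPTION: credits scale linearly with the number of words processed
# multiplied by the model's consumption factor. The actual billing
# formula is defined by nele.ai and may differ.

CONSUMPTION_FACTORS = {
    "GPT 4o 128k": 0.25,
    "GPT 4 Turbo 128k": 0.5,
    "GPT 4 8k": 1.0,
    "GPT 3.5 Turbo 4k": 0.04,
    "GPT 3.5 Turbo 16k": 0.08,
    "Claude Opus 200k": 1.0,
    "Claude Sonnet 200k": 0.2,
    "Claude Haiku 200k": 0.02,
}

def relative_cost(model: str, words: int) -> float:
    """Relative credit consumption for processing `words` words."""
    return words * CONSUMPTION_FACTORS[model]

# The same 1,000-word task differs by a factor of 50 between models:
print(relative_cost("Claude Opus 200k", 1000))   # 1000.0
print(relative_cost("Claude Haiku 200k", 1000))  # 20.0
```

As the example shows, picking a lighter model for a routine task can reduce consumption by an order of magnitude or more.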
2. When should you use which model?
Use a cost-effective, efficient model for simpler tasks, such as answering emails or writing speeches and employment references.
Use a more capable, more expensive model for complex tasks, such as creating concepts, contracts, business plans or scientific summaries.
Be sure to note:
Comparing the effort and cost of completing the task without AI can help with decision-making.
3. Pay attention to your input and output volumes!
A big advantage of working with generative chatbots is using the chat function. This allows information in the chat process to be constantly expanded and improved.
Note, however, that both the words sent (input) and received (output) count toward total AI volume consumption. Every time you send a message, the entire chat history is sent along with it. As a result, total consumption of AI volume may rise faster than expected.
Tip:
If there are changes in the chat context, it makes sense to start a new chat or to divide the chat into smaller meaningful units.
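The effect described above compounds with every turn. The sketch below uses a simplified word count (actual consumption also depends on the model's responses and tokenization) to show how re-sending the full history makes input volume grow roughly quadratically with the number of messages:

```python
# Sketch: why AI volume grows faster than expected in long chats.
# Simplification: we count only user-message words; in practice the
# model's responses are also part of the re-sent history.

def total_words_sent(words_per_message: int, turns: int) -> int:
    """Total input words when the full history is re-sent each turn."""
    total = 0
    history = 0
    for _ in range(turns):
        history += words_per_message  # the new message joins the history
        total += history              # the entire history is sent
    return total

# 10 turns of 100-word messages sends not 1,000 but 5,500 input words:
print(total_words_sent(100, 10))  # 5500
```

Starting a fresh chat resets the history to zero, which is exactly why splitting long conversations into smaller units saves AI volume.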
4. Optimize your prompts!
Your prompts should be clear and specific. Vague prompts can result in longer and less accurate answers.
Sample prompt:
Write a LinkedIn post from a user's perspective, limited to 300 characters with matching hashtags and a few emojis. The text is: {{text}}
This prompt includes the persona/character (user), the character limit (300), and instructions about hashtags and emojis.
The more precise the prompt, the better the output!
Tip:
It can be helpful to ask GPT 3.5 open-ended questions to get inspiration for specific topics. You can then have the results professionally edited with GPT 4.
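The sample prompt above uses a `{{text}}` placeholder. A minimal sketch of filling it before sending (the `render_prompt` helper is hypothetical, not part of nele.ai):

```python
# Sketch: filling the {{text}} placeholder of the sample prompt above.
# The render_prompt helper is illustrative only.

PROMPT_TEMPLATE = (
    "Write a LinkedIn post from a user's perspective, limited to 300 "
    "characters with matching hashtags and a few emojis. The text is: {{text}}"
)

def render_prompt(template: str, text: str) -> str:
    """Replace the {{text}} placeholder with the actual input text."""
    return template.replace("{{text}}", text)

prompt = render_prompt(PROMPT_TEMPLATE, "We just launched our new product!")
print(prompt)
```

Keeping reusable instructions in a template like this makes it easy to apply a proven, precise prompt to many different texts.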
5. Test your prompts with small text samples!
First, let your prompts process small amounts of text and adjust your prompts to the desired results. If the samples are satisfactory, you can use the prompts on large volumes.
6. If possible, refrain from additional explanations!
Depending on the chosen AI model, nele.ai often adds explanations to its answers. However, if the topic is already familiar to you and you don't need an explanation, you can phrase your prompt so that the explanation is skipped. This reduces the amount of text and saves time when editing.
Just add the phrase "no further explanation required" to your prompt.
We wish you every success in your work with nele.ai.