Usage and billing
What are message tokens?
Message tokens are used each time you send a message to a bot and receive a response. Each message uses between 1,000 and 4,000 message tokens (sometimes more), depending on the length of your question and the bot's response. This means 200,000 message tokens equate to roughly 100 messages.
Generally, as a conversation gets longer, each message uses more message tokens, because the entire chat history is sent to the model with every message.
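To see why later messages cost more, the resend-the-history behaviour can be sketched as below. The per-turn sizes (500 input tokens, 700 output tokens) are illustrative assumptions, not actual figures.

```python
# Sketch of per-message token growth (hypothetical per-turn sizes).
# Each turn resends the full chat history, so input size grows every turn.

def tokens_for_turn(history_tokens, new_message_tokens, response_tokens):
    """Tokens consumed by one message: the full history plus the new
    message are sent as input, and the bot's response counts as output."""
    return history_tokens + new_message_tokens + response_tokens

history = 0
total = 0
for turn in range(1, 4):
    new_msg, response = 500, 700   # assumed sizes for illustration
    used = tokens_for_turn(history, new_msg, response)
    total += used
    history += new_msg + response  # the history now includes this exchange
    print(f"turn {turn}: {used} tokens used")
```

Turn 1 uses 1,200 tokens, turn 2 uses 2,400, and turn 3 uses 3,600: the same-sized exchange gets steadily more expensive as the history grows.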
Models and message tokens
Depending on the model you choose for your bot, the number of message tokens used is calculated differently. This lets us standardise the cost of each available model relative to the baseline GPT-3.5 Turbo model. The token multipliers for each model are listed below:
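One way to read the multipliers is sketched below, assuming the input and output modifiers scale the raw input and output token counts respectively (the function and dictionary names are illustrative, and only two models from the tables below are included):

```python
# Sketch: converting raw model tokens into billed message tokens.
# Assumption: input/output modifiers multiply input/output token counts.
MULTIPLIERS = {
    "GPT-3.5 Turbo": (1, 3),       # the baseline model
    "Claude Haiku 3": (0.5, 2.5),  # values taken from the tables below
}

def message_tokens(model, input_tokens, output_tokens):
    """Billed message tokens for one exchange with the given model."""
    in_mult, out_mult = MULTIPLIERS[model]
    return input_tokens * in_mult + output_tokens * out_mult

# 1,000 input and 500 output tokens on the baseline model:
print(message_tokens("GPT-3.5 Turbo", 1000, 500))   # 2500
```

Under this reading, the same 1,000-in/500-out exchange would bill 1,750 message tokens on Claude Haiku 3, since its cheaper input rate outweighs the baseline's lower output rate at these sizes.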
Anthropic
| Name | Input modifier | Output modifier |
|---|---|---|
| Claude Haiku 3 | x 0.5 | x 2.5 |
| Claude Haiku 3.5 | x 1.6 | x 8 |
| Claude Haiku 4.5 | x 2 | x 10 |
| Claude Sonnet 3.7 | x 6 | x 30 |
| Claude Sonnet 4 | x 6 | x 30 |
| Claude Sonnet 4.5 | x 6 | x 30 |
| Claude Sonnet 4.6 | x 6 | x 30 |
| Claude Opus 4.5 | x 10 | x 50 |
| Claude Sonnet 3.5 | x 12 | x 60 |
| Claude Opus 4 | x 30 | x 150 |
| Claude Opus 4.1 | x 30 | x 150 |
Cohere
| Name | Input modifier | Output modifier |
|---|---|---|
| Cohere - Command R | x 0.3 | x 1.2 |
| Cohere - Command R+ | x 5 | x 20 |
DeepSeek
| Name | Input modifier | Output modifier |
|---|---|---|
| DeepSeek V3 | x 0.6 | x 2.4 |
| DeepSeek R1 | x 1.4 | x 4.8 |
Google
| Name | Input modifier | Output modifier |
|---|---|---|
| Google - Gemini 2 Flash Lite | x 0.15 | x 0.6 |
| Google - Gemini 2 Flash | x 0.2 | x 0.8 |
| Google - Gemini 2.5 Flash Lite | x 0.2 | x 0.8 |
| Google - Gemini 2.5 Flash | x 0.6 | x 5 |
| Google - Gemini 3 Flash | x 1 | x 6 |
| Google - Gemini 2.5 Pro | x 2.5 | x 20 |
| Google - Gemini 3 Pro | x 4 | x 24 |
Meta
| Name | Input modifier | Output modifier |
|---|---|---|
| Llama 4 Scout | x 0.16 | x 0.6 |
| Llama 4 Maverick | x 0.3 | x 1.2 |
Mistral
| Name | Input modifier | Output modifier |
|---|---|---|
| Mistral - Open Mistral 7b | x 0.06 | x 0.11 |
| Mistral - Mistral Small | x 0.06 | x 0.22 |
| Mistral - Open Mixtral 8x7b | x 1.08 | x 1.08 |
| Mistral - Open Mixtral 8x22b | x 4 | x 12 |
| Mistral - Mistral Large | x 4 | x 12 |
MoonshotAI
| Name | Input modifier | Output modifier |
|---|---|---|
| Kimi K2 | x 1 | x 4.8 |
OpenAI
| Name | Input modifier | Output modifier |
|---|---|---|
| GPT-5 Nano | x 0.1 | x 0.8 |
| GPT-4.1 Nano | x 0.2 | x 0.8 |
| GPT-4o Mini (our default) | x 0.3 | x 1.2 |
| GPT-5 Mini | x 0.5 | x 4 |
| GPT-4.1 Mini | x 0.8 | x 3.2 |
| GPT-3.5 Turbo | x 1 | x 3 |
| GPT-5.1 | x 2.5 | x 20 |
| GPT-5.1 Chat | x 2.5 | x 20 |
| GPT-5 | x 2.5 | x 20 |
| GPT-5.2 | x 3.5 | x 28 |
| GPT-4.1 | x 4 | x 16 |
| GPT-4o | x 5 | x 20 |
| GPT-4 Turbo 128k | x 20 | x 60 |
| GPT-5.2 Pro | x 42 | x 336 |
| GPT-4 | x 60 | x 120 |
Perplexity
| Name | Input modifier | Output modifier |
|---|---|---|
| Sonar | x 2 | x 2 |
| Sonar Pro | x 6 | x 30 |
xAI
| Name | Input modifier | Output modifier |
|---|---|---|
| Grok 4 Fast | x 0.4 | x 1 |
| Grok 4.1 Fast | x 0.4 | x 1 |
| Grok 3 Mini | x 0.6 | x 1 |
| Grok 3 | x 6 | x 30 |
| Grok 4 | x 6 | x 30 |
Z.ai
| Name | Input modifier | Output modifier |
|---|---|---|
| GLM 4.6 | x 0.7 | x 3 |
| GLM 4.5 | x 0.7 | x 3.1 |
| GLM 4.7 | x 0.8 | x 3 |