Find the perfect plan for your needs. We offer monthly subscriptions and credit packs.
Need more credits? Purchase additional credit packs starting at just $5 for extra flexibility. Credits are valid for a full year from purchase!
Use credits for a range of features; credits are included with plans or can be purchased separately.
Translate subtitles (SRT, VTT, ASS) into 100+ languages using any of the available AI models.
Analyze content to extract characters, settings, plot, and relationships.
Convert audio into subtitle text. Supports large files and background processing.
Translate multiple files simultaneously to improve workflow efficiency and save time.
Efficiently manage, organize, back up, and transfer your translation projects, subtitles, and related assets.
Select from a wide range of AI models available on the platform.
Get help through our Discord community and email support.
Discover how credits work and how they provide flexible access to our powerful AI tools.
Credits are used for features such as Translation, Transcription, and Context Extraction; using these features deducts credits from your balance. See "Credit Usage" below for details.
You can top up your balance by purchasing credit packs whenever you need more.
You can view your remaining credit balance and usage history on the User Information page.
Different LLMs (Large Language Models) perform differently across languages and tasks.
For example, DeepSeek models excel at English and Chinese (surprisingly good at Indonesian too) but may struggle with other languages, while some models like Gemini and GPT are multilingual and perform well across many languages.
It's important to experiment with different models to find the best fit for your specific language and use case.
Generally, Gemini 2.5 Pro, GPT-5, and Claude provide excellent results for most use cases and languages.
Credit costs vary by AI model and are calculated from the number of input and output tokens processed. More models will be added in the future. See the estimated costs below; a worked example follows the definitions under the table.
Model | Credits per Input Token | Credits per Output Token | Credit Usage | Context Length | Max Completion |
---|---|---|---|---|---|
DeepSeek R1💙 | 0.66 | 2.628 | medium | 128k tokens | 64k tokens |
DeepSeek R1 (Fast) | 3 | 8 | high | 128k tokens | 64k tokens |
DeepSeek V3.1 | 0.672 | 2.016 | low | 128k tokens | 64k tokens |
DeepSeek V3 | 0.36 | 1.2 | low | 128k tokens | 64k tokens |
Gemini 2.5 Pro⭐ | 1.5 | 12 | high | 1M tokens | 66k tokens |
Gemini 2.5 Flash💙 | 0.36 | 3 | medium | 1M tokens | 66k tokens |
Gemini 2.5 Flash Lite | 0.12 | 0.48 | very low | 1M tokens | 66k tokens |
Gemini 2.0 Flash | 0.12 | 0.48 | very low | 1M tokens | 8k tokens |
Gemini 1.5 Flash-8B | 0.045 | 0.18 | very low | 1M tokens | 8k tokens |
Claude 4 Sonnet | 3.6 | 18 | very high | 200k tokens | 64k tokens |
Claude 3.7 Sonnet⭐ | 3.6 | 18 | very high | 200k tokens | 64k tokens |
Claude 3.5 Sonnet | 3.6 | 18 | high | 200k tokens | 8k tokens |
Claude 3.5 Haiku | 0.96 | 4.8 | medium | 200k tokens | 8k tokens |
Grok 4 | 3.6 | 18 | high | 256k tokens | 256k tokens |
Grok 3 | 3.6 | 18 | high | 131k tokens | 131k tokens |
Grok 3 Mini | 0.36 | 0.6 | low | 131k tokens | 131k tokens |
GPT-5⭐ | 1.5 | 12 | high | 400k tokens | 128k tokens |
GPT-5 mini | 0.3 | 2.4 | low | 400k tokens | 128k tokens |
GPT-5 nano | 0.06 | 0.48 | very low | 400k tokens | 128k tokens |
OpenAI o4-mini | 1.32 | 5.28 | above medium | 200k tokens | 100k tokens |
OpenAI o3-mini | 1.32 | 5.28 | above medium | 200k tokens | 100k tokens |
GPT-4.1 | 2.4 | 9.6 | above medium | 1M tokens | 33k tokens |
GPT-4.1 mini | 0.48 | 1.92 | low | 1M tokens | 33k tokens |
GPT-4.1 nano | 0.12 | 0.48 | very low | 1M tokens | 33k tokens |
GPT-4o | 3 | 12 | above medium | 128k tokens | 16k tokens |
GPT-4o mini | 0.18 | 0.72 | low | 128k tokens | 16k tokens |
Mistral Medium 3 | 0.48 | 2.4 | below medium | 128k tokens | 128k tokens |
Mistral Small 3 | 0.06 | 0.12 | very low | 128k tokens | 128k tokens |
Mistral Nemo | 0.03 | 0.06 | very low | 128k tokens | 128k tokens |
Qwen3 235B A22B 2507 | 0.24 | 0.72 | low | 262k tokens | 262k tokens |
Qwen3 30B A3B 2507 | 0.12 | 0.36 | very low | 262k tokens | 262k tokens |
Free Models | 0 | 0 | N/A | Varies | Varies |
* These are estimated costs and limits, and are subject to change. Input/Output token costs may vary. Refer to the dashboard for precise figures.
Token: A unit of text processed by the LLM. Roughly 4 characters or 0.75 words.
Context Length: The maximum number of tokens (input + output history) the model can consider at once.
Max Completion: The maximum number of tokens the model can generate in a single response.
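To make the per-token rates concrete, here is a minimal sketch in Python of how a single request's cost could be estimated. The formula (input tokens × input rate + output tokens × output rate) is an assumption based on the description above; actual billing on the platform may also account for system prompts, context memory, and rounding.

```python
# Rough per-request credit estimate, using the per-token rates from the table above.
# Assumption: credits = input_tokens * input_rate + output_tokens * output_rate.
# Actual billing may also count system prompts, context memory, and rounding.

RATES = {
    # model: (credits per input token, credits per output token)
    "Gemini 2.5 Flash": (0.36, 3.0),
    "DeepSeek V3": (0.36, 1.2),
    "GPT-5": (1.5, 12.0),
}

def estimate_credits(model: str, input_tokens: int, output_tokens: int) -> float:
    input_rate, output_rate = RATES[model]
    return input_tokens * input_rate + output_tokens * output_rate

# Example: a subtitle batch of ~3,000 input tokens that yields ~3,500 output tokens
# (a token is roughly 4 characters or 0.75 words, per the definitions above).
print(estimate_credits("Gemini 2.5 Flash", 3_000, 3_500))  # 1,080 + 10,500 = 11,580 credits
```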
Audio transcription tasks consume credits based on the duration of the audio and the number of output tokens generated. More models will be added in the future. A worked example follows the note below the table.
Model Type | Credits per Minute of Audio | Credits per Output Token | Max File Size |
---|---|---|---|
Free Limited | 0 | 0 | 100 MB |
Premium | 2760 | 12 | Soon |
* The costs shown do not include input costs for system prompts and custom instructions.
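As a rough sketch, assuming Premium transcription is billed per minute of audio plus per output token at the rates in the table (and excluding prompt-related input costs, per the note above):

```python
# Rough transcription cost estimate for the Premium model type.
# Assumption: credits = audio_minutes * 2760 + output_tokens * 12
# (system prompt / custom instruction input costs excluded, per the note above).

def estimate_transcription_credits(audio_minutes: float, output_tokens: int) -> float:
    return audio_minutes * 2760 + output_tokens * 12

# Example: a 10-minute clip that produces ~1,500 tokens of subtitle text.
print(estimate_transcription_credits(10, 1_500))  # 27,600 + 18,000 = 45,600 credits
```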
Background processing allows audio transcriptions to run on our server even if you close the browser tab. This means you don't have to wait for the process to finish.
Ever wondered what 5 Million Credits can get you? Hours of subtitle translation using powerful DeepSeek R1 with context-memory enabled!
*Credit cost varies with content: Example 1 used 112,731 credits for about 100 minutes of subtitles (~1,127 credits/min), while Example 2 used 119,458 credits for about 26 minutes (~4,595 credits/min). The time shown (~29h 8m) assumes the average rate of ~2,861 credits per minute (5,000,000 ÷ 2,861 ≈ 1,748 minutes).
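Under that same assumed average rate of ~2,861 credits per minute, the runtime a credit balance can cover works out as sketched below; real usage depends on the model and content.

```python
# Estimate how many hours of subtitle translation a credit balance covers,
# assuming the average rate of ~2,861 credits per minute quoted above.

def estimate_hours(credit_balance: int, credits_per_minute: float) -> float:
    return credit_balance / credits_per_minute / 60

print(f"{estimate_hours(5_000_000, 2_861):.1f} hours")  # ~29.1 hours (about 29h 8m)
```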