Introduction
If you've opened Genspark's AI chat recently, you may have noticed something new in the model selector alongside Claude and Gemini: three brand-new options — DeepSeek V4 Pro, Trinity Large Thinking, and Minimax M2P7.
In this post, I cover the basics of where each model comes from, what makes it tick, and how the broader AI community has received it — plus my honest take after actually putting all three through their paces.
One of Genspark's best-kept advantages is that switching between AI models in chat costs zero credits. That makes experimenting with new arrivals completely risk-free. New to Genspark? Check out my honest Genspark review for the full picture.
DeepSeek V4 Pro: A Research Powerhouse with Impressive Web Search (China · DeepSeek)
Background & Key Features
DeepSeek, a Chinese AI company, built this model on a Mixture-of-Experts (MoE) architecture with a staggering 1.6 trillion total parameters (49 billion active). Its context window stretches to 1 million tokens, and it was purpose-built to excel at coding and complex, multi-step reasoning.
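That gap between 1.6 trillion total and 49 billion active parameters is the point of an MoE design: a small router picks only a few "expert" sub-networks per token, so most of the model sits idle on any given step. Here's a minimal toy sketch of top-k expert routing (tiny sizes and a single token for illustration — this is the general technique, not DeepSeek's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total experts (toy scale; production MoE models use far more)
TOP_K = 2         # experts actually activated per token
DIM = 16          # hidden dimension

# Each "expert" is a tiny feed-forward layer; only TOP_K of them run per token.
experts = [rng.standard_normal((DIM, DIM)) / np.sqrt(DIM) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS)) / np.sqrt(DIM)

def moe_forward(x):
    """Route a single token vector through its top-k experts."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]   # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over just the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
print(out.shape)  # (16,) — same output shape, but only 2 of 8 experts did any work
```

The payoff is exactly what the spec sheet implies: you get the capacity of the full parameter count while paying compute roughly proportional to the active slice.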
What People Are Saying
In developer circles — especially outside Japan — DeepSeek V4 Pro is getting consistent praise for its coding ability and value for money. Its handling of long documents in particular stands out. The model's Hugging Face page reflects that enthusiasm with strong download numbers and positive community feedback.
My Take
Out of the three new additions, this one felt the most ready to use right away. What sets it apart is how proactively it runs web searches to pull in current information — a huge plus for research-heavy tasks.
Pair it with Genspark's own AI search and research features and you have a seriously capable research setup. If you regularly need to gather up-to-date information fast, DeepSeek V4 Pro is worth trying first.
Trinity Large Thinking: An Open-Source Reasoning Model from Arcee AI (USA · Arcee AI)
Background & Key Features
Arcee AI, a US-based company, released Trinity Large Thinking as an open-source reasoning model under the Apache 2.0 license. With roughly 400 billion parameters, it was designed not for quick text generation but for deep, sustained thinking — the kind needed for complex problem-solving, long-horizon tasks, and tool use.
What People Are Saying
In the open-source community, especially on Hugging Face, Trinity is generating real buzz as a foundation for AI agents. Developers cite its strong logical reasoning as the main draw, and it comes up regularly in discussions about building capable autonomous agents.
My Take
To be blunt: Japanese support feels rough at this stage. Responses sometimes cut off mid-sentence without warning, which makes the experience feel unstable for general Japanese-language chat.
That said, for English-language reasoning tasks it seems genuinely capable. As I cover in my prompt techniques post, prompting in English often unlocks noticeably better results — so if you want to experiment with Trinity, starting in English is the way to go.
Minimax M2P7: A Multilingual Model Built for Complex Tasks (China · MiniMax)
Background & Key Features
MiniMax, one of China's more prominent AI startups, positions this model as an MoE language model capable of handling complex agent workflows and high-productivity tasks. It's designed to integrate with external APIs and function as the brain behind sophisticated automation pipelines.
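"Functioning as the brain of an automation pipeline" usually means the model emits structured tool calls that a surrounding loop executes against external services. Here's a generic sketch of that dispatch loop, with a stubbed-out model and a made-up `get_weather` tool — the tool name, message shapes, and model stub are illustrative assumptions, not MiniMax's actual API:

```python
import json

# Toy tool registry; in a real pipeline these would hit external services.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stubbed external API result

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    """Stand-in for the model: requests a tool, then summarizes its result."""
    last = messages[-1]
    if last["role"] == "user":
        return {"tool_call": {"name": "get_weather",
                              "arguments": json.dumps({"city": "Tokyo"})}}
    return {"content": f"Weather report: {last['content']}"}

def agent_loop(user_input, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        reply = fake_model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]           # final answer, stop looping
        args = json.loads(call["arguments"])
        result = TOOLS[call["name"]](**args)  # run the requested tool
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(agent_loop("What's the weather in Tokyo?"))
```

The model's job in this pattern is deciding *which* tool to call with *what* arguments and chaining the results — which is why communities evaluating these models care so much about agentic reliability rather than raw text quality.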
What People Are Saying
Developer communities highlight its flexibility with third-party tools and its strong agentic capabilities. It's gaining traction among teams building business automation workflows, where its ability to call and chain external services adds real value.
My Take
When I asked questions in Japanese, the responses occasionally mixed in Chinese characters or Korean — apparently a side effect of its multilingual training data. For writing clean Japanese prose, this model isn't quite there yet.
The underlying capability looks solid, and I expect this to improve as the team fine-tunes language separation. For now, it seems better suited to English-language agentic tasks or multilingual content scenarios than casual Japanese chat.
None of these three is a jack-of-all-trades. DeepSeek shines at research and coding, Trinity at deep English reasoning, and Minimax at agent-based workflows. Understanding each model's strengths is the key to using Genspark effectively. My AI tool comparison covers the broader landscape if you want more context.
Verdict: More Models, More Possibilities
Three very different models from three very different places — and each brings something genuinely distinct to the table.
| Model | Developer | Best For | Japanese Support |
|---|---|---|---|
| DeepSeek V4 Pro | China · DeepSeek | Coding, research, long-context tasks | Good |
| Trinity Large Thinking | USA · Arcee AI | English reasoning, agent foundations | Unstable |
| Minimax M2P7 | China · MiniMax | Multilingual tasks, agent workflows | Inconsistent (mixing) |
Trinity and Minimax still have rough edges, particularly for Japanese users, but AI development moves fast. Being able to rotate through different models depending on the job at hand is one of the things that makes Genspark genuinely useful. For pricing details: Genspark Official Pricing.
Because Genspark's AI chat uses zero credits, there's no downside to experimenting. Spin up each model, throw the same question at all three, and see how the answers differ. That hands-on comparison is often the fastest way to figure out which one fits your workflow.
