Growth Ritual #77
📋 In This Issue:
“Open-source AI is cheaper” is the biggest myth in 2025
The Battle for Your Face: Why the Next Tech Giant Won’t Win with More Features
10X your daily output with a 30-second AI ritual
3 Years Ago vs. Today: 14 Tips to Keep Momentum Alive
Turn Your Phone into a Business
AI Is Designing Bizarre New Physics Experiments That Actually Work
“Open-source AI is cheaper” is the biggest myth in 2025
A new study by Nous Research just flipped the popular narrative on its head: “open-source AI is cheaper” is no longer true once you count the actual tokens consumed.
For teams already running AI in production, this is a budget-exploding surprise.
Instead, closed-source APIs may quietly be saving you money.
How?
Count the real cost = tokens × price-per-token (worked sketch below)
• Nous Research found open-source models burn 1.5-4× more tokens than closed ones on the same tasks.
• On simple Q&A, some open models use 10× more tokens than o4-mini.
→ Your “free” model just cost you 10× the sticker price.
Closed providers compress reasoning traces
• They run a tiny internal model to shrink the chain-of-thought.
• You pay for 1 concise answer, not a 300-token monologue.
Open-source devs extend reasoning for accuracy
• More steps = more tokens = more $$.
51% of companies already run AI in prod
• Inefficiency isn't academic; it's a runaway cloud bill.
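To make that arithmetic concrete, here's a minimal Python sketch of the tokens × price-per-token math. The prices, token counts, and volumes are illustrative assumptions, not Nous Research's figures; they simply show how a lower per-token price can still lose on total spend.

```python
# Back-of-the-envelope comparison: total cost = tokens x price-per-token.
# All numbers below are illustrative assumptions, not published benchmark figures.

def total_cost(tokens_per_answer: int, answers: int, price_per_million_tokens: float) -> float:
    """Dollar cost of generating `answers` responses at a given token footprint."""
    return tokens_per_answer * answers * price_per_million_tokens / 1_000_000

ANSWERS_PER_MONTH = 100_000

# Hypothetical closed API: pricier per token, but a concise reasoning trace.
closed = total_cost(tokens_per_answer=300, answers=ANSWERS_PER_MONTH,
                    price_per_million_tokens=1.10)

# Hypothetical self-hosted open model: cheaper per token, 4x the tokens per answer.
open_source = total_cost(tokens_per_answer=1_200, answers=ANSWERS_PER_MONTH,
                         price_per_million_tokens=0.40)

print(f"Closed API:  ${closed:,.2f}/month")
print(f"Open model:  ${open_source:,.2f}/month")
# With these assumptions, the "cheaper" open model costs ~1.5x more overall.
```

Swap in your own traffic volume and the published rates for the models you actually run; the ranking often flips once verbosity is priced in.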
Do this:
Benchmark total cost per solved problem, not per token (sketch below).
Log prompt + completion sizes for a week.
Switch high-traffic tasks to the most token-efficient model (often closed).
Reserve open-source for fine-tuned or highly custom workloads only.
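For the first two steps, here's a minimal Python sketch, assuming you already append each request's token usage to a CSV (the token_log.csv file, its columns, the model names, and the prices are all placeholders for your own setup). It rolls a week of logs into cost per solved problem rather than cost per token.

```python
# Sketch: turn a week of token logs into cost per solved problem.
# CSV schema, model names, and prices are assumptions; substitute the usage
# fields your provider or inference server actually reports.
import csv
from collections import defaultdict

# Assumed per-million-token prices for the models you route traffic to.
PRICE_PER_MTOK = {"closed-mini": 1.10, "open-70b": 0.40}

totals = defaultdict(lambda: {"tokens": 0, "cost": 0.0, "solved": 0})

# Expected columns: model, prompt_tokens, completion_tokens, solved (0/1).
with open("token_log.csv") as f:
    for row in csv.DictReader(f):
        tokens = int(row["prompt_tokens"]) + int(row["completion_tokens"])
        t = totals[row["model"]]
        t["tokens"] += tokens
        t["cost"] += tokens * PRICE_PER_MTOK[row["model"]] / 1_000_000
        t["solved"] += int(row["solved"])

for model, t in totals.items():
    if t["solved"]:
        print(f"{model}: ${t['cost'] / t['solved']:.4f} per solved problem "
              f"({t['tokens'] / t['solved']:.0f} tokens each)")
```

A model that needs retries or 1,000-token monologues to "solve" something loses on this metric even if its per-token price is a fraction of the closed API's.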
Takeaway: Token discipline > model price tag.
The cheapest model is the one that answers in 3 words—not 300.